
CC-GPX: Extracting High-Quality Annotated Geospatial Data from Common Crawl

Published 17 May 2024 in cs.CL (arXiv:2405.11039v3)

Abstract: The Common Crawl (CC) corpus is the largest open web crawl dataset, containing 9.5+ petabytes of data captured since 2008. The dataset is instrumental in training LLMs, and as such it has been studied for (un)desirable content, and distilled for smaller, domain-specific datasets. However, to our knowledge, no research has been dedicated to using CC as a source of annotated geospatial data. In this paper, we introduce an efficient pipeline to extract annotated user-generated tracks from GPX files found in CC, and the resulting multimodal dataset with 1,416 pairings of human-written descriptions and MultiLineString vector data from the 6 most recent CC releases. The dataset can be used to study people's outdoor activity patterns and the way they describe their outdoor experiences, to develop trajectory-generation or track-annotation models, or to replace synthetically generated routes in various other problems. Our reproducible code is available on GitHub: https://github.com/ilyankou/cc-gpx
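The authors' actual pipeline lives in the linked GitHub repository. As an illustration of the core conversion the abstract describes — turning a GPX track into MultiLineString vector data — here is a minimal stdlib-only sketch. The function name, the GeoJSON output shape, and the sample GPX document are illustrative assumptions, not taken from the paper's code:

```python
import xml.etree.ElementTree as ET

# GPX 1.1 documents live in this XML namespace.
GPX_NS = "{http://www.topografix.com/GPX/1/1}"

def gpx_to_multilinestring(gpx_xml: str) -> dict:
    """Convert a GPX document into a GeoJSON-style MultiLineString dict.

    Each <trkseg> becomes one line; points are (lon, lat) pairs,
    matching GeoJSON coordinate order (x before y).
    """
    root = ET.fromstring(gpx_xml)
    lines = []
    for seg in root.iter(GPX_NS + "trkseg"):
        coords = [
            (float(pt.attrib["lon"]), float(pt.attrib["lat"]))
            for pt in seg.iter(GPX_NS + "trkpt")
        ]
        if len(coords) >= 2:  # skip degenerate single-point segments
            lines.append(coords)
    return {"type": "MultiLineString", "coordinates": lines}

# Hypothetical two-point track for demonstration.
sample = """<?xml version="1.0"?>
<gpx xmlns="http://www.topografix.com/GPX/1/1" version="1.1">
  <trk><name>Morning hike</name>
    <trkseg>
      <trkpt lat="51.5" lon="-0.1"/>
      <trkpt lat="51.6" lon="-0.2"/>
    </trkseg>
  </trk>
</gpx>"""

geom = gpx_to_multilinestring(sample)
print(geom["type"], len(geom["coordinates"]))  # MultiLineString 1
```

In a dataset like CC-GPX, a geometry of this shape would then be paired with the human-written description found alongside the GPX file on the crawled page.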
