LAMP: A Language Model on the Map
Abstract: LLMs are poised to play an increasingly important role in our lives, providing assistance across a wide array of tasks. In the geospatial domain, LLMs have demonstrated the ability to answer generic questions, such as identifying a country's capital; nonetheless, their utility is hindered when it comes to answering fine-grained questions about specific places, such as grocery stores or restaurants, which constitute essential aspects of people's everyday lives. This is mainly because the places in our cities have not been systematically fed into LLMs, so the models have no opportunity to learn and memorize them. This study introduces a novel framework for fine-tuning a pre-trained model on city-specific data, enabling it to provide accurate recommendations while minimizing hallucinations. We share our model, LAMP, and the data used to train it. We conduct experiments to analyze its ability to correctly retrieve spatial objects, and compare it to well-known open- and closed-source LLMs, such as GPT-4. Finally, we explore its emerging capabilities through a case study on day planning.
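The abstract describes fine-tuning a pre-trained model on city-specific place data so the model can answer fine-grained questions about specific venues. A minimal sketch of one way such data could be prepared is shown below: converting structured place records (e.g., from an OpenStreetMap extract) into prompt–completion pairs for supervised fine-tuning. The field names, prompt template, and `make_training_pair` helper are illustrative assumptions, not the paper's actual data format or pipeline.

```python
# Hypothetical sketch: turning city place records into instruction-tuning
# pairs, in the spirit of fine-tuning on city-specific data. The record
# schema and templates here are assumptions for illustration only.

def make_training_pair(place: dict) -> dict:
    """Build one (prompt, completion) example from a place record."""
    prompt = f"Recommend a {place['category']} near {place['neighborhood']}."
    completion = (
        f"{place['name']} is a {place['category']} located at "
        f"{place['address']} in {place['neighborhood']}."
    )
    return {"prompt": prompt, "completion": completion}

# A toy place record, standing in for rows extracted from a city map.
places = [
    {
        "name": "Green Basket",
        "category": "grocery store",
        "address": "12 Elm St",
        "neighborhood": "Riverside",
    },
]

dataset = [make_training_pair(p) for p in places]
print(dataset[0]["prompt"])       # -> Recommend a grocery store near Riverside.
print(dataset[0]["completion"])
```

Pairs like these could then be fed to a parameter-efficient fine-tuning method such as LoRA (cited in the reference list) rather than full fine-tuning, keeping the adaptation to a single city cheap.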