
Automated Conversion of Static to Dynamic Scheduler via Natural Language

Published 8 May 2024 in cs.CL and cs.AI | arXiv:2405.06697v1

Abstract: In this paper, we explore the potential of LLMs to automatically model constraints and generate code for dynamic scheduling problems, given an existing static model. Static scheduling problems are modelled and coded by optimization experts. These models can quickly become obsolete, since the underlying constraints may need to be fine-tuned to reflect changes in the scheduling rules. Furthermore, it may be necessary to turn a static model into a dynamic one in order to cope with disturbances in the environment. We propose a Retrieval-Augmented Generation (RAG) based LLM framework that automates the implementation of constraints for Dynamic Scheduling (RAGDyS) without requiring help from an optimization modelling expert. Our framework aims to minimize the technical complexity of mathematical modelling and the computational workload for end-users, allowing them to quickly obtain a new schedule that stays close to the original while reflecting changes described as natural language constraints.
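The pipeline the abstract describes can be made concrete with a short sketch. The example below is a hypothetical illustration, not the authors' implementation: it assumes a MiniLM sentence encoder (via `sentence-transformers`) for the retrieval step and Google OR-Tools CP-SAT as the underlying constraint solver, and the example constraints, prompt wording, and toy schedule are all invented for the demo. It shows the three ingredients the abstract names: retrieving a similar worked example for a natural-language constraint, building an LLM prompt from it, and re-solving the model while minimizing perturbation from the original schedule.

```python
# A minimal sketch of a RAGDyS-style pipeline. Assumptions (not from the
# paper): MiniLM for retrieval, CP-SAT as the solver, and toy example data.

from sentence_transformers import SentenceTransformer, util
from ortools.sat.python import cp_model

# --- 1. Retrieval: find the stored example most similar to the request -----
EXAMPLES = [
    ("task A must finish before task B starts",
     'model.Add(end["a"] <= start["b"])'),
    ("task A and task B must not overlap",
     'model.AddNoOverlap([interval["a"], interval["b"]])'),
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # hypothetical model choice
example_vecs = encoder.encode([desc for desc, _ in EXAMPLES])

def retrieve(nl_constraint: str):
    """Return the (description, code) example closest to the request."""
    query_vec = encoder.encode(nl_constraint)
    scores = util.cos_sim(query_vec, example_vecs)[0]
    return EXAMPLES[int(scores.argmax())]

# --- 2. Generation: prompt an LLM with the retrieved example as context ----
def build_prompt(nl_constraint: str) -> str:
    desc, code = retrieve(nl_constraint)
    return (
        "Translate the scheduling constraint into CP-SAT code.\n"
        f"Example: '{desc}' -> {code}\n"
        f"Constraint: '{nl_constraint}' ->"
    )
# An LLM call (e.g. GPT-4, Claude 3, Gemini) would go here; its output is the
# new constraint code injected into the existing static model.

# --- 3. Re-solve, minimizing perturbation from the original schedule -------
def resolve_min_perturbation(durations, original_starts, horizon=100):
    model = cp_model.CpModel()
    start, end, interval = {}, {}, {}
    for t, dur in durations.items():
        start[t] = model.NewIntVar(0, horizon, f"start_{t}")
        end[t] = model.NewIntVar(0, horizon, f"end_{t}")
        interval[t] = model.NewIntervalVar(start[t], dur, end[t], f"iv_{t}")

    # The generated constraint would be injected here; hard-coded for the demo:
    model.Add(end["a"] <= start["b"])  # "A must finish before B starts"

    # Minimal-perturbation objective: stay close to the original start times.
    devs = []
    for t, orig in original_starts.items():
        dev = model.NewIntVar(0, horizon, f"dev_{t}")
        model.AddAbsEquality(dev, start[t] - orig)
        devs.append(dev)
    model.Minimize(sum(devs))

    solver = cp_model.CpSolver()
    if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        return {t: solver.Value(start[t]) for t in durations}
    return None

# Toy demo: the original schedule (a at 5, b at 0) violates the new
# precedence constraint, so a re-solve is needed.
print(resolve_min_perturbation({"a": 3, "b": 2}, {"a": 5, "b": 0}))
```

In the demo, the re-solve moves the two tasks only as far as needed to satisfy the newly added precedence constraint, which is the "new schedule close to the original" behaviour the abstract describes.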

