Notes on Applicability of GPT-4 to Document Understanding
Abstract: We perform a missing, reproducible evaluation of all publicly available GPT-4 family models on Document Understanding, a field that frequently requires comprehending the spatial arrangement of text and visual cues in addition to textual semantics. Benchmark results indicate that, while it is hard to achieve satisfactory results with text-only models, GPT-4 Vision Turbo performs well when provided with both text recognized by an external OCR engine and document images as input. The evaluation is followed by analyses that suggest possible contamination of the textual GPT-4 models and indicate a significant performance drop on lengthy documents.
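The input setup described above can be sketched as a single multimodal request combining OCR output with the page image. This is a minimal illustration assuming the OpenAI Chat Completions message format; the prompt wording, model snapshot, and helper name are hypothetical, not the paper's exact configuration:

```python
import base64


def build_vision_request(question: str, ocr_text: str, image_path: str) -> dict:
    """Build a Chat Completions request body that gives the model both the
    externally recognized OCR text and the document image (hypothetical
    prompt wording; the paper's exact prompts are not reproduced here)."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    return {
        # A vision-capable GPT-4 Turbo snapshot; model name is an assumption.
        "model": "gpt-4-turbo",
        "messages": [{
            "role": "user",
            "content": [
                # Text part: OCR transcription plus the question.
                {"type": "text",
                 "text": f"OCR text of the document:\n{ocr_text}\n\n"
                         f"Question: {question}"},
                # Image part: the rendered page, base64-encoded as a data URL.
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    }
```

The returned dictionary would be passed to the chat completions endpoint; feeding both modalities is what the abstract credits for GPT-4 Vision Turbo's strong results, compared with text-only input.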