StackOverflowVQA: Stack Overflow Visual Question Answering Dataset

Published 17 May 2024 in cs.CV (arXiv:2405.10736v1)

Abstract: In recent years, people have increasingly turned to AI for help with problems across many topics, one of which is software and programming questions. In this work, we focus on questions that require understanding an accompanying image in addition to the question text. We introduce the StackOverflowVQA dataset, which consists of questions from Stack Overflow that have one or more accompanying images. This is the first VQA dataset that focuses on software-related questions and contains multiple human-generated full-sentence answers. Additionally, we provide a baseline for answering the questions with respect to the images in the introduced dataset using the GIT model. All versions of the dataset are available at https://huggingface.co/mirzaei2114.

References (12)
  1. VQA: Visual Question Answering. CoRR, abs/1505.00468.
  2. VQA Therapy: Exploring answer differences by visually grounding answers.
  3. Nat Friedman. 2021. Introducing GitHub Copilot: your AI pair programmer. URL https://github.blog/2021-06-29-introducing-github-copilot-ai-pair-programmer.
  4. Making the V in VQA matter: Elevating the role of image understanding in Visual Question Answering. In Conference on Computer Vision and Pattern Recognition (CVPR).
  5. mPLUG: Effective and efficient vision-language learning by cross-modal skip-connections.
  6. Mike. 2023. mikex86/stackoverflow-posts · Datasets at Hugging Face.
  7. Fawaz Sammani and Nikos Deligiannis. 2023. Uni-NLX: Unifying textual explanations for vision and vision-language tasks.
  8. Generate answer to visual questions with pre-trained vision-and-language embeddings. WiNLP Workshop at EMNLP.
  9. The color of the cat is gray: 1 million full-sentence visual question answering (FSVQA). arXiv preprint arXiv:1609.06657.
  10. Stack Exchange Community. 2023. Stack Exchange data dump.
  11. GIT: A Generative Image-to-text Transformer for vision and language. Transactions on Machine Learning Research.
  12. VLMo: Unified vision-language pre-training with mixture-of-modality-experts. CoRR, abs/2111.02358.
