Massive Online Crowdsourced Study of Subjective and Objective Picture Quality

Published 9 Nov 2015 in cs.CV (arXiv:1511.02919v1)

Abstract: Most publicly available image quality databases have been created under highly controlled conditions by introducing graded simulated distortions onto high-quality photographs. However, images captured using typical real-world mobile camera devices are usually afflicted by complex mixtures of multiple distortions, which are not necessarily well-modeled by the synthetic distortions found in existing databases. The originators of existing legacy databases usually conducted human psychometric studies to obtain statistically meaningful sets of human opinion scores on images in a stringently controlled visual environment, resulting in small data collections relative to other kinds of image analysis databases. Towards overcoming these limitations, we designed and created a new database that we call the LIVE In the Wild Image Quality Challenge Database, which contains widely diverse authentic image distortions on a large number of images captured using a representative variety of modern mobile devices. We also designed and implemented a new online crowdsourcing system, which we have used to conduct a very large-scale, multi-month image quality assessment subjective study. Our database consists of over 350000 opinion scores on 1162 images evaluated by over 7000 unique human observers. Despite the lack of control over the experimental environments of the numerous study participants, we demonstrate excellent internal consistency of the subjective dataset. We also evaluate several top-performing blind Image Quality Assessment algorithms on it and present insights on how mixtures of distortions challenge both end users as well as automatic perceptual quality prediction models.

Citations (607)

Summary

  • The paper introduces a novel LIVE In the Wild database featuring 1,162 real-world images with authentic distortions.
  • It deploys a robust crowdsourcing framework on Amazon Mechanical Turk, collecting over 350,000 ratings from more than 8,100 observers.
  • The study validates the crowdsourced scores against established lab results (Spearman correlation of 0.9851), while evaluations of blind IQA algorithms underscore the need for improved models.

Overview of "Massive Online Crowdsourced Study of Subjective and Objective Picture Quality"

This paper presents an extensive study on image quality assessment through the creation of the LIVE In the Wild Image Quality Challenge Database. The authors, Ghadiyaram and Bovik, address shortcomings in existing image quality databases, many of which rely on synthetic distortions applied in controlled environments. To bridge this gap, the research leverages real-world images captured with mobile devices, which contain authentic, complex mixtures of distortions.

Contributions and Methodology

The primary contributions of the paper include:

  1. Database Creation: The LIVE In the Wild Image Quality Challenge Database consists of 1,162 images afflicted by genuine distortions from diverse mobile devices.
  2. Crowdsourcing Framework: The authors deployed a crowdsourcing strategy on Amazon Mechanical Turk, gathering over 350,000 opinion scores from more than 8,100 unique observers.
  3. Data Reliability and Validation: Despite variability in study conditions, the paper reports high internal consistency in the subjective dataset, validating this large-scale crowdsourced approach by achieving a Spearman correlation of 0.9851 between crowdsourced data and established lab results.
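
As a rough illustration of the internal-consistency check described above, Spearman's rank correlation can be computed between two sets of mean opinion scores (MOS). The sketch below uses synthetic placeholder scores, not data from the paper:

```python
# Hedged sketch: Spearman rank correlation between hypothetical crowdsourced
# MOS and hypothetical lab MOS. Ranks are averaged over ties, then Pearson
# correlation is taken on the rank vectors (the definition of Spearman's rho).

def ranks(xs):
    """Return 1-based average ranks, with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # mean of positions i..j, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

crowd_mos = [72.1, 35.4, 88.9, 51.2, 64.7, 23.8]  # hypothetical crowdsourced MOS
lab_mos = [70.5, 49.9, 90.2, 38.0, 66.1, 25.0]    # hypothetical lab MOS

rho = spearman(crowd_mos, lab_mos)
print(f"SROCC = {rho:.4f}")
```

A value near 1 means the two studies rank the images nearly identically; the paper reports 0.9851 between crowdsourced scores and established lab results.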

Insights into Perceptual Image Quality

The authors highlight that traditional databases typically focus on single, synthetic distortions. By contrast, the new database captures images affected by natural conditions such as lighting variations and device-specific artifacts, which are not well represented by the graded distortions of controlled lab settings. This perspective allows for a broader understanding of how real-world distortions challenge both human observers and automated perceptual models.

Performance Evaluation of IQA Models

Several top-performing no-reference image quality assessment (NR IQA) models were tested on the new database. The results show that existing models, including state-of-the-art methods such as BRISQUE, perform poorly on this dataset, while FRIQUEE, a new method proposed by the authors, performs comparatively better.
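
The comparison protocol behind such results can be sketched simply: correlate each model's predicted quality scores with the human MOS via SROCC and rank the models. The model names below match those in the paper, but every score is an invented placeholder:

```python
# Hedged sketch of the standard NR-IQA evaluation step: rank models by the
# Spearman correlation between their predictions and human MOS.
# All numbers are made-up placeholders, not results from the paper.

def srocc(x, y):
    """Spearman correlation via 1 - 6*sum(d^2)/(n(n^2-1)); assumes no ties."""
    def rank(v):
        order = sorted(range(len(v)), key=v.__getitem__)
        r = [0] * len(v)
        for pos, i in enumerate(order):
            r[i] = pos + 1
        return r
    rx, ry = rank(x), rank(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

mos = [81, 24, 56, 63, 40, 72]                    # hypothetical human MOS
preds = {
    "BRISQUE": [70, 30, 50, 66, 55, 45],          # hypothetical predictions
    "FRIQUEE": [78, 22, 61, 58, 43, 74],
}
for name, p in sorted(preds.items(), key=lambda kv: -srocc(mos, kv[1])):
    print(f"{name}: SROCC = {srocc(mos, p):.3f}")
```

In practice such correlations are computed over many random train/test splits and the median is reported, so a single number like this is only illustrative.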

Implications for Future Research

The findings suggest an urgent need for the development of more robust IQA models that can handle the complex and varied distortions observed in real-world scenarios. The crowdsourced approach also opens new avenues for large-scale subjective quality data collection, overcoming traditional limitations of lab-based studies.

Conclusion

This study sets a precedent for future research in real-world image quality assessment, highlighting the necessity of combining authentic distortions with crowdsourced subjective assessments. The paper's contributions could potentially reshape methodologies in the field, encouraging the development of advanced models to ensure a satisfactory quality of experience (QoE) in everyday image consumption.

Future developments could explore similar methodologies in video quality assessment, extending the crowd-based strategy to capture a more comprehensive understanding of multimedia quality perceptions in real-world settings.
