Data Quality-aware Mixed-precision Quantization via Hybrid Reinforcement Learning

Published 9 Feb 2023 in cs.AI and cs.LG | (2302.04453v1)

Abstract: Mixed-precision quantization mostly predetermines the model bit-width settings before actual training due to the non-differentiable bit-width sampling process, obtaining sub-optimal performance. Worse still, the conventional static quality-consistent training setting, i.e., all data is assumed to be of the same quality across training and inference, overlooks data quality changes in real-world applications, which may lead to poor robustness of the quantized models. In this paper, we propose a novel Data Quality-aware Mixed-precision Quantization framework, dubbed DQMQ, to dynamically adapt quantization bit-widths to different data qualities. The adaptation is based on a bit-width decision policy that can be learned jointly with the quantization training. Concretely, DQMQ is modeled as a hybrid reinforcement learning (RL) task that combines model-based policy optimization with supervised quantization training. By relaxing the discrete bit-width sampling to a continuous probability distribution encoded with a few learnable parameters, DQMQ is differentiable and can be directly optimized end-to-end with a hybrid optimization target considering both task performance and quantization benefits. Trained on mixed-quality image datasets, DQMQ can implicitly select the most proper bit-width for each layer when facing uneven input qualities. Extensive experiments on various benchmark datasets and networks demonstrate the superiority of DQMQ against existing fixed/mixed-precision quantization methods.

Citations (16)

Summary

  • The paper introduces DQMQ, a hybrid reinforcement learning framework that dynamically adjusts quantization bit-widths based on data quality.
  • It employs a Precision Decision Agent and a Quantization Auxiliary Computer to integrate bit-width decision-making into end-to-end quantization training.
  • Experiments on CIFAR-10 and SVHN show improved top-1 accuracy over methods like HAWQ, proving the framework's robustness under varying data qualities.

Introduction

This paper presents a novel framework, DQMQ (Data Quality-aware Mixed-precision Quantization), which addresses the limitations of conventional fixed/mixed-precision quantization by incorporating data quality awareness into the quantization process using hybrid reinforcement learning. Traditional quantization approaches often predefine bit-width settings based on static assumptions, leading to suboptimal performance when data quality varies across training and inference stages. The proposed DQMQ framework dynamically adapts quantization bit-widths in response to input data qualities, thereby enhancing the model's robustness and efficiency.

Methodology

DQMQ is characterized by its hybrid reinforcement learning framework that combines model-based policy optimization with supervised quantization training. By utilizing a continuous probability distribution approach for bit-width sampling, DQMQ remains differentiable and allows for end-to-end optimization. This aspect distinguishes it from prior works that decouple bit-width decision-making from quantization training. The core components of DQMQ include:

  1. Precision Decision Agent (PDA): A lightweight module that makes layer-wise bit-width decisions based on current data quality and quantization sensitivity derived from Hessian trace information.
  2. Quantization Auxiliary Computer (QAC): Ensures that quantization operations do not accumulate errors over multiple iterations and allows the model to maintain high accuracy across varying data qualities.

    Figure 1: Overall architecture design of DQMQ.
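
The summary states that DQMQ relaxes the discrete bit-width choice into a continuous probability distribution encoded with a few learnable parameters, making the sampling step differentiable. The exact formulation is not given here; the sketch below assumes a Gumbel-softmax style relaxation over a candidate bit-width set, which is one common way to achieve this. The function name `soft_bitwidth`, the candidate set, and the logit values are all illustrative, not taken from the paper.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def soft_bitwidth(logits, bit_choices, temperature=1.0, rng=None):
    """Relax a discrete bit-width choice into a differentiable expectation.

    Gumbel-softmax style sketch: perturb the learnable logits with Gumbel
    noise, apply a temperature-scaled softmax, and return the expected
    bit-width under the resulting soft distribution.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape) + 1e-10) + 1e-10)
    probs = softmax((logits + gumbel) / temperature)
    return float(probs @ bit_choices), probs

bit_choices = np.array([2.0, 4.0, 8.0])   # hypothetical candidate bit-widths
logits = np.array([0.1, 0.3, 2.0])        # per-layer learnable parameters
expected_bits, probs = soft_bitwidth(logits, bit_choices, temperature=0.5)
```

As the temperature is annealed toward zero, the soft distribution sharpens toward a hard one-hot selection, so gradients can flow through the bit-width decision during training while inference can still use a discrete choice.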

Experimental Evaluation

The performance of DQMQ was evaluated across multiple datasets, including CIFAR-10 and SVHN, with varying data qualities. The results demonstrated the effectiveness of DQMQ in maintaining high top-1 accuracy under different quality levels compared to existing methods such as HAWQ and AutoQ.

Figure 2: Top-1 accuracy (%) of DQMQ versus HAWQ with ResNet-18 on CIFAR-10 under different image qualities.
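
The abstract notes that DQMQ is trained on mixed-quality image datasets. The paper's exact degradation pipeline is not described in this summary; a minimal sketch of one plausible scheme is shown below, where a `quality` factor in (0, 1] controls the strength of additive Gaussian noise. The function name and noise scaling are assumptions for illustration only.

```python
import numpy as np

def degrade(image, quality, rng=None):
    """Simulate a lower-quality input by adding Gaussian noise.

    `quality` in (0, 1]: 1.0 returns the image unchanged; smaller values
    add proportionally stronger noise, loosely mimicking sensor or
    compression degradation in real-world inputs.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    noise_std = (1.0 - quality) * 0.2
    noisy = image + rng.normal(0.0, noise_std, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)

clean = np.ones((8, 8)) * 0.5   # toy "image" with pixel values in [0, 1]
low_q = degrade(clean, quality=0.3)
```

Training over a spread of such quality levels is what lets the bit-width policy observe, and adapt to, uneven input qualities.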

Figure 3: Layer-wise quantization sensitivities of DQMQ for ResNet-18 on CIFAR-10 under five different data qualities.
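
The PDA is described as using quantization sensitivity derived from Hessian trace information, in the spirit of HAWQ-style analysis. Hessian traces are typically too large to form explicitly, so they are commonly estimated from Hessian-vector products with Hutchinson's method; the sketch below illustrates that estimator on a toy matrix. The helper name `hutchinson_trace` and the toy Hessian are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def hutchinson_trace(hessian_vp, dim, n_samples=200, rng=None):
    """Estimate tr(H) from Hessian-vector products (Hutchinson's method).

    Only matrix-vector products `hessian_vp(v) = H @ v` are required:
    tr(H) = E[v^T H v] for Rademacher-distributed probe vectors v.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    estimates = []
    for _ in range(n_samples):
        v = rng.choice([-1.0, 1.0], size=dim)  # Rademacher probe vector
        estimates.append(v @ hessian_vp(v))
    return float(np.mean(estimates))

H = np.diag([1.0, 2.0, 3.0])          # toy "layer Hessian", true trace = 6
trace_est = hutchinson_trace(lambda v: H @ v, dim=3)
```

In a real network, `hessian_vp` would be computed per layer via automatic differentiation, and layers with larger trace estimates would be treated as more quantization-sensitive and assigned higher bit-widths.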

Analysis

The main advantage of DQMQ over traditional methods is its ability to dynamically adjust bit-widths considering real-world data quality variations. This adaptability is crucial for edge deployment scenarios where data quality can significantly fluctuate due to environmental factors. The PDA component effectively learns a policy that balances quantization precision and task performance, resulting in more efficient model compression without sacrificing accuracy.
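
The abstract describes a hybrid optimization target that weighs task performance against quantization benefits. The summary does not give its exact form; one simple hedged instantiation is a task loss plus a penalty on the parameter-weighted average bit-width, sketched below. The function name, the weighting scheme, and the coefficient `lam` are illustrative assumptions.

```python
import numpy as np

def hybrid_objective(task_loss, layer_bits, layer_params, lam=0.05):
    """Combine task performance and quantization benefit into one target.

    Sketch: total = task loss + lambda * average bit-cost, where the
    bit-cost weights each layer's bit-width by its parameter count
    (a common proxy for quantized model size).
    """
    bits = np.asarray(layer_bits, dtype=float)
    params = np.asarray(layer_params, dtype=float)
    size_cost = (bits * params).sum() / params.sum()  # weighted mean bits
    return task_loss + lam * size_cost

loss = hybrid_objective(task_loss=0.8, layer_bits=[8, 4, 2],
                        layer_params=[1000, 2000, 1000])
```

Under an objective of this shape, the policy is rewarded for pushing insensitive layers toward low bit-widths while keeping sensitive layers at higher precision, which matches the layer-wise behavior reported in the paper.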

DQMQ's integration of the previously decoupled bit-width decision-making and quantization training into a single one-shot process improves overall training efficiency and accuracy, supporting the paper's claims of improved robustness and adaptability.

Conclusion

DQMQ presents a significant advancement in mixed-precision quantization by introducing data quality awareness into the quantization process through hybrid reinforcement learning. The framework ensures that deep neural networks can operate efficiently on edge devices while maintaining robustness and adaptability to dynamic input data qualities. Future research could explore further optimization of the PDA and QAC components to handle a broader range of neural network architectures and data complexities. This work establishes a foundational approach for achieving data-aware quantization in real-world AI applications.
