GO4Align: Group Optimization for Multi-Task Alignment
Abstract: This paper proposes \textit{GO4Align}, a multi-task optimization approach that tackles task imbalance by explicitly aligning optimization progress across tasks. To achieve this, we design an adaptive group risk minimization strategy, implemented with two techniques: (i) dynamic group assignment, which clusters similar tasks based on task interactions; (ii) risk-guided group indicators, which exploit consistent task correlations using risk information from previous iterations. Comprehensive experimental results on diverse benchmarks demonstrate that our method outperforms existing approaches at even lower computational cost.
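The two techniques in the abstract can be illustrated with a minimal sketch. The function below is a hypothetical illustration, not the paper's actual algorithm: it clusters tasks by their recent loss (risk) trajectories with a plain k-means step (standing in for dynamic group assignment), then scales each task's weight by its group's latest mean risk (standing in for risk-guided group indicators). The function name, the k-means initialization, and the normalization are all assumptions made for this sketch.

```python
import numpy as np

def group_weights(risk_history, num_groups=2, n_iters=10):
    """Hypothetical sketch: cluster tasks by risk trajectories,
    then weight each task by its group's latest mean risk.

    risk_history: array of shape (num_tasks, num_iterations),
    where entry [i, t] is task i's loss at iteration t.
    """
    risks = np.asarray(risk_history, dtype=float)
    num_tasks = len(risks)

    # Deterministic init: seed centers at tasks spread across the
    # range of latest risks (an assumption for reproducibility).
    order = np.argsort(risks[:, -1])
    init_idx = order[np.linspace(0, num_tasks - 1, num_groups).astype(int)]
    centers = risks[init_idx].copy()

    # Plain k-means over risk trajectories (dynamic group assignment).
    for _ in range(n_iters):
        dists = np.linalg.norm(risks[:, None, :] - centers[None], axis=-1)
        labels = dists.argmin(axis=1)
        for k in range(num_groups):
            if (labels == k).any():
                centers[k] = risks[labels == k].mean(axis=0)

    # Risk-guided indicators: each group's weight follows its latest
    # mean risk, so lagging (high-risk) groups get larger weights.
    latest = risks[:, -1]
    group_risk = np.array([
        latest[labels == k].mean() if (labels == k).any() else 0.0
        for k in range(num_groups)
    ])
    weights = group_risk[labels] / (group_risk[labels].sum() + 1e-12)
    return labels, weights
```

Under this sketch, tasks whose losses plateau at a high level end up grouped together and receive proportionally larger weights, mimicking the paper's goal of keeping optimization progress aligned across imbalanced tasks.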