Reliability of CKA as a Similarity Measure in Deep Learning

Published 28 Oct 2022 in cs.LG, cs.AI, and cs.CV (arXiv:2210.16156v2)

Abstract: Comparing learned neural representations in neural networks is a challenging but important problem, which has been approached in different ways. The Centered Kernel Alignment (CKA) similarity metric, particularly its linear variant, has recently become a popular approach and has been widely used to compare representations of a network's different layers, of architecturally similar networks trained differently, or of models with different architectures trained on the same data. A wide variety of conclusions about similarity and dissimilarity of these various representations have been made using CKA. In this work we present analysis that formally characterizes CKA sensitivity to a large class of simple transformations, which can naturally occur in the context of modern machine learning. This provides a concrete explanation of CKA sensitivity to outliers, which has been observed in past works, and to transformations that preserve the linear separability of the data, an important generalization attribute. We empirically investigate several weaknesses of the CKA similarity metric, demonstrating situations in which it gives unexpected or counter-intuitive results. Finally we study approaches for modifying representations to maintain functional behaviour while changing the CKA value. Our results illustrate that, in many cases, the CKA value can be easily manipulated without substantial changes to the functional behaviour of the models, and call for caution when leveraging activation alignment metrics.

Citations (29)

Summary

  • The paper demonstrates that CKA's sensitivity to minor transformations can lead to unexpectedly low similarity scores despite negligible changes in output.
  • It employs both theoretical and empirical analyses to reveal how outliers and specific data shifts critically influence CKA measurements.
  • The findings underscore the need to combine CKA with other metrics to achieve a more robust and comprehensive evaluation of neural network representations.

Introduction

The proliferation of deep learning models across diverse domains has intensified the need to understand their internal workings, particularly focusing on how neural networks learn and represent data. Representation learning posits that as data passes through the layers of an Artificial Neural Network (ANN), increasingly complex internal representations are formed. A critical aspect of this understanding involves comparing such learned representations across different models. The Centered Kernel Alignment (CKA) metric has recently emerged as a favored tool for this purpose, especially its linear variant, enabling the comparison of representations within a network, across similar networks with different initializations, and across different architectures trained on the same dataset.
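For reference, linear CKA can be computed directly from two activation matrices. The sketch below (dimensions and data are illustrative assumptions, not taken from the paper) also demonstrates a well-known property of the metric: invariance to orthogonal transformations of the features.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between activation matrices X (n, d1) and Y (n, d2),
    where rows are the same n examples in two representations."""
    X = X - X.mean(axis=0)                       # center each feature
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 32))                   # toy activations
Q, _ = np.linalg.qr(rng.normal(size=(32, 32)))   # random orthogonal matrix
print(linear_cka(X, X))      # identical representations: CKA = 1
print(linear_cka(X, X @ Q))  # orthogonal transform: CKA still = 1
```

Because the metric depends only on centered Gram matrices, rotating or reflecting the feature space leaves it unchanged, which is part of what made it attractive for cross-model comparison.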

Theoretical Analysis of CKA Sensitivity

This paper presents formal analyses that delineate the sensitivity of CKA to various transformations. One focus is CKA's sensitivity to simple transformations that preserve the linear separability of the data, a property vital for generalization. The authors provide a concrete explanation for CKA's responsiveness to outliers, a phenomenon previously observed empirically, and show that transformations preserving data separability can nonetheless drastically alter CKA similarity values (Figure 1).

Figure 1: Visual representations of the transformations considered in the theoretical analysis.
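The outlier effect can be illustrated with a toy construction (this is an illustrative sketch under assumed dimensions and data, not one of the paper's experiments): a single extreme point shared by two otherwise unrelated representations dominates the centered Gram matrices and pushes linear CKA close to 1.

```python
import numpy as np

def linear_cka(X, Y):
    # Standard linear CKA on centered activation matrices.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    return np.linalg.norm(Y.T @ X) ** 2 / (
        np.linalg.norm(X.T @ X) * np.linalg.norm(Y.T @ Y))

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 32))        # two independent, unrelated
Y = rng.normal(size=(500, 32))        # toy representations
cka_before = linear_cka(X, Y)         # low: no shared structure

outlier = 100 * rng.normal(size=32)   # one extreme point, present in both
X[0] = outlier
Y[0] = outlier
cka_after = linear_cka(X, Y)          # near 1: the outlier dominates
print(cka_before, cka_after)
```

A single shared point thus makes two essentially random representations look highly similar, the mirror image of the low-score failures discussed below.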

Empirical Findings

The empirical section of the paper further probes CKA's reliability. The authors demonstrate scenarios in which CKA indicates similarity or dissimilarity that contradicts intuitive expectations; for example, CKA can yield low similarity scores between ostensibly similar representations because of minor but impactful transformations (Figure 2).

Figure 2: A layer-wise CKA comparison of generalized, memorized, and randomly initialized networks.

Figure 3

Figure 3: CKA sensitivity tests with subset translations and outliers.
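The separability-preserving subset translations studied here can be sketched as follows (a toy construction under illustrative assumptions, not the paper's exact setup): translating one class far along the separating direction keeps both representations linearly separable with identical labels, yet collapses their linear CKA.

```python
import numpy as np

def linear_cka(X, Y):
    # Standard linear CKA on centered activation matrices.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    return np.linalg.norm(Y.T @ X) ** 2 / (
        np.linalg.norm(X.T @ X) * np.linalg.norm(Y.T @ Y))

rng = np.random.default_rng(0)
n, d = 500, 32
X = rng.normal(size=(n, d))
w = np.zeros(d)
w[0] = 1.0                  # normal of a separating hyperplane
labels = X @ w > 0          # X is linearly separable w.r.t. these labels
Y = X.copy()
Y[labels] += 1000.0 * w     # translate one class far away; Y remains
                            # separable by the very same hyperplane
print(linear_cka(X, Y))     # far below 1 despite unchanged separability
```

The translation changes nothing about which hyperplanes classify the data correctly, yet the similarity score between the original and translated representations drops sharply.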

Implications for Neural Network Analysis

The findings suggest that while CKA is a powerful tool, its use demands careful consideration of its limitations. In particular, the fact that CKA scores can be manipulated without noticeably affecting model outputs means that CKA similarity values can be misleading if relied upon exclusively. This calls for a nuanced application of CKA, used in conjunction with other metrics to ensure robustness when analyzing and comparing neural network representations (Figure 4).

Figure 4: Analytical optimization of CKA maps across network layers.
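A minimal sketch of how CKA can diverge from functional behaviour (this is not the authors' optimization procedure, just an assumed toy setting): apply an anisotropic invertible map to a hidden layer and absorb its inverse into the next linear layer's weights. The network's outputs are bit-for-bit unchanged, yet linear CKA between the original and transformed representations falls well below 1.

```python
import numpy as np

def linear_cka(X, Y):
    # Standard linear CKA on centered activation matrices.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    return np.linalg.norm(Y.T @ X) ** 2 / (
        np.linalg.norm(X.T @ X) * np.linalg.norm(Y.T @ Y))

rng = np.random.default_rng(0)
H = rng.normal(size=(500, 32))        # hidden-layer activations (toy)
W2 = rng.normal(size=(32, 10))        # next (linear) layer's weights
M = np.diag(np.logspace(0, 3, 32))    # anisotropic invertible map

H_new = H @ M                         # transformed representation
W2_new = np.linalg.inv(M) @ W2        # compensate in the next layer

same_output = np.allclose(H @ W2, H_new @ W2_new)
print(same_output)                    # functional behaviour preserved
print(linear_cka(H, H_new))           # similarity score well below 1
```

The sketch exploits the fact that linear CKA is invariant to orthogonal transformations and isotropic scaling, but not to general invertible linear maps, even though a downstream linear layer can absorb any invertible map without changing the model's function.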

Conclusion

The study underscores important limitations of the CKA similarity measure, particularly its linear variant. It shows that CKA values can be heavily influenced by transformations that do not affect the functional output of neural networks, calling into question the reliability of CKA as a standalone metric for representation similarity. These results warrant caution in interpreting CKA scores and encourage combining CKA with other methods to obtain a comprehensive picture of neural representations. Future work may explore more robust similarity measures, or adjustments to CKA that account for the identified issues, to improve the interpretability and reliability of model comparisons in deep learning.
