NCC mismatch improves from IT to EOT at each intermediate layer

Establish whether, at each intermediate layer of a supervised deep neural network classifier, the NCC mismatch on both the training and test sets decreases from the Interpolation Threshold (IT, the point at which training accuracy first reaches 100%) to the End of Training (EOT).

Background

Beyond the Interpolation Threshold, training enters the Terminal Phase of Training (TPT), where Neural Collapse phenomena emerge. The authors quantify NCC mismatch at an intermediate layer as the fraction of samples for which the network's prediction disagrees with the label of the nearest training class mean in that layer's feature space.
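The metric can be sketched in a few lines of NumPy. This is an illustrative implementation, not code from the paper; the function name and array shapes are assumptions:

```python
import numpy as np

def ncc_mismatch(features, net_preds, class_means):
    """Fraction of samples whose network prediction disagrees with the
    nearest-class-center (NCC) label in this layer's feature space.

    features:    (n_samples, d) activations at one intermediate layer
    net_preds:   (n_samples,) class predicted by the full network
    class_means: (n_classes, d) per-class means of *training* features
    """
    # Squared Euclidean distance from every sample to every class mean
    dists = ((features[:, None, :] - class_means[None, :, :]) ** 2).sum(-1)
    ncc_preds = dists.argmin(axis=1)  # label of the nearest class center
    return float((ncc_preds != net_preds).mean())
```

Computing this on training features gives the train NCC mismatch; feeding test-set features (while keeping the class means from the training set) gives the test NCC mismatch.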

This conjecture asserts that, at every intermediate layer, NCC mismatch decreases as optimization proceeds from the IT to the EOT, on both the training and test sets. Validating it would connect optimization progress during the TPT to increasingly consistent class-center alignment across layers.
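With mismatch values measured at the two checkpoints, verifying the claim reduces to a per-layer comparison. A minimal sketch with illustrative names (the paper tracks the full training trajectory, not just the two endpoints):

```python
def conjecture_holds(mismatch_at_it, mismatch_at_eot):
    """Check, per layer, that NCC mismatch at the End of Training does not
    exceed its value at the Interpolation Threshold.

    Both arguments map layer name -> mismatch in [0, 1], measured on the
    same split (train or test) with training-set class means.
    """
    return {layer: mismatch_at_eot[layer] <= mismatch_at_it[layer]
            for layer in mismatch_at_it}
```

Running this check separately on the train and test mismatch values covers both halves of the conjecture.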

References

Our conjecture can now be described as follows: at each intermediate layer, both the train and test NCC mismatch improve from the IT to the End of Training (EOT).

Nearest Class-Center Simplification through Intermediate Layers  (2201.08924 - Ben-Shaul et al., 2022) in Section 4.1 (NCC mismatch in Intermediate Layers)