
I appreciate your nuanced, even-handed take on this, Patrick; it cuts through much of the noise. Do you think there is a moat between the broader explainable-AI research community and the NeuroAI community, as Schaeffer seems to suggest?

Take the toy case of a very simple ground-truth function y = x^2 with no noise, and training/validation data drawn from the domain [-1, +1]. We can train two overparameterized MLPs, Model 1 and Model 2, identical in architecture but initialized with different random weights. Suppose both converge to near-zero (but nonzero) loss on both the training and validation sets, despite being overparameterized. We still have no guarantee of any correlation between the parameters of Model 1 and Model 2. We also know that their latent representations can differ substantially in this case, even under similarity metrics like CKA, as used in the Klabunde paper. If we then call Model 1 our "Brain" and Model 2 our "ANN", regressing the ANN's activations onto the "Brain's" will show the ANN to be an excellent model of the "Brain", despite little parameter similarity and substantially different latent representations. We know this holds in ANN-to-ANN comparisons; we should expect the same in a biological-brain-to-ANN comparison.
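
For concreteness, here is a minimal sketch of that toy case, assuming PyTorch; the helper names (make_mlp, linear_cka), the widths, seeds, and training settings are illustrative choices of mine, not drawn from any of the papers under discussion:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Ground truth y = x^2 on [-1, +1], no noise.
x_train = torch.rand(512, 1) * 2 - 1
y_train = x_train ** 2
x_val = torch.rand(1024, 1) * 2 - 1

def make_mlp(seed: int) -> nn.Sequential:
    # Same architecture every time; only the random initialization differs.
    torch.manual_seed(seed)
    return nn.Sequential(
        nn.Linear(1, 64), nn.Tanh(),
        nn.Linear(64, 64), nn.Tanh(),
        nn.Linear(64, 1),
    )

def train(model: nn.Sequential, steps: int = 2000) -> nn.Sequential:
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.mse_loss(model(x_train), y_train).backward()
        opt.step()
    return model

brain = train(make_mlp(seed=1))  # call Model 1 the "Brain"
ann = train(make_mlp(seed=2))    # call Model 2 the "ANN"

# 1) Parameter correlation between the two fitted models.
p1 = torch.cat([p.flatten() for p in brain.parameters()])
p2 = torch.cat([p.flatten() for p in ann.parameters()])
print("param corr:", torch.corrcoef(torch.stack([p1, p2]))[0, 1].item())

# Last-hidden-layer activations (everything up to the final Linear).
with torch.no_grad():
    h_brain = brain[:-1](x_val)
    h_ann = ann[:-1](x_val)

# 2) Linear CKA (Kornblith et al. 2019) on centered activations.
def linear_cka(X: torch.Tensor, Y: torch.Tensor) -> float:
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    return ((Y.T @ X).norm() ** 2 / ((X.T @ X).norm() * (Y.T @ Y).norm())).item()

print("linear CKA:", linear_cka(h_brain, h_ann))

# 3) Neural-predictivity-style regression: fit a linear map from "ANN"
#    activations to "Brain" activations on half the data, score R^2 on
#    the held-out half.
fit, test = slice(0, 512), slice(512, 1024)
beta = torch.linalg.lstsq(h_ann[fit], h_brain[fit]).solution
resid = h_brain[test] - h_ann[test] @ beta
r2 = 1 - resid.pow(2).sum() / (h_brain[test] - h_brain[test].mean(0)).pow(2).sum()
print("held-out regression R^2:", r2.item())
```

On the argument above, one would expect the parameter correlation to sit near zero and the CKA score to fall well short of 1, while the held-out R^2 under the fitted linear map comes out high: predictivity under regression does not certify similarity of parameters or representations.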

I do not see anything in the papers critiqued by Schaeffer claiming that regressing a trained ANN onto a brain is a perfect metric of ANN-to-brain similarity, nor any evidence that the authors of those papers are unaware that the broader explainable-AI community would take issue with such a claim.
