4 Comments
Jan 14 · Liked by Patrick Mineault

I think there are two major concerns with the approach and goals as you have outlined them, even allowing for this to be a speculative and rather general take on the goals. As an ex- and sometimes-still philosopher, I am comfortable with thought experiments, so let's consider the goal in the context of science and potential value to humanity. (There is also the option that what we really want to produce is an AGI, which I will come back to at the end.)

First, let's assume that you could produce a WBE. Let's also assume that you could do it using a top-down or bottom-up (and therefore, by extension, a hybrid) approach. Now the question is: what ancillary assumptions would need to hold for this to be a useful investment of time and resources, or to be of scientific or societal value?

Let's start with the bottom-up approach proposed by most of the current efforts. As you noted, a series of assumptions need to be met for this to be both possible and useful as an endeavor. First, the emulation would need to match the brain and behavior of a human, mouse, Drosophila, or C. elegans to a meaningful level; this is the concept of a reasonable tolerance or error. However, if the goal is to emulate that neural and behavioral activity, we need to be able to measure it in order to make the comparison at all. This means that for a WBE project to be meaningful, we can discard human and mouse for the near future, and most likely the rest of the model organisms too, as we will see in a moment. This is simply because the capacity to reasonably measure even a low-dimensional readout of the system's neural activity is far beyond any near-term foreseeable technology, and probably prohibited by physics: measuring a system alters it, and in the case of a brain, the best measurements we have today literally damage the tissue and capture only an infinitesimal fraction of the neural activity. It gets worse, because if the state of the system (the initial conditions) needs to be measured accurately for the WBE to be compared to the system it models, we likely need to capture at least some sub-neural state as well, since DNA methylation and ongoing RNA and protein production are known to affect neural activity, even before considering current hypotheses that they are directly involved in neural computation. So if we can neither measure the system's output to validate the emulation within a tolerance nor measure its internal state, we fail to get off the ground before we even start, or before we get to problems of the granularity of the emulation.

Chaotic dynamics arise because the models we use diverge exponentially with respect to initial conditions, but our philosophical thought experiment could allow for a perfect capture of the initial conditions and the system dynamics. Even then, for this to be useful, the emulation would need to operate at roughly real time and within a compact system (i.e., the memory required would need to be on the order of what we can imagine for near-term computing systems on Earth). But simple back-of-the-envelope calculations suggest that if you take even a simple system and try to emulate it with an approximation good enough to stay non-chaotic over the timeframes and scales we care about, you will need far more capacity than that, and the system will run orders of magnitude slower than real time.
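A minimal sketch of that sensitivity argument, using the Lorenz system as a stand-in for any chaotic dynamics (it is not a brain model) and assuming a hypothetical measurement error of 1e-9 on one coordinate of the initial state:

import numpy as np

def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One Euler step of the Lorenz system (a toy chaotic system, not a neuron model).
    x, y, z = state
    deriv = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return state + dt * deriv

original = np.array([1.0, 1.0, 1.0])
emulation = original + np.array([1e-9, 0.0, 0.0])  # assumed initial-condition measurement error

for step in range(40001):
    if step % 10000 == 0:
        err = np.linalg.norm(original - emulation)
        print(f"t = {step * 0.001:5.1f}   |original - emulation| = {err:.3e}")
    original = lorenz_step(original)
    emulation = lorenz_step(emulation)

# The tiny initial error grows exponentially until it saturates at the size of
# the attractor, i.e. the "emulation" decorrelates completely from the "original".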
So it seems that, conceptually, and quite apart from the specific (and thoroughly ludicrous, IMHO) claims about using a single point sample of a connectome with very low-fidelity emulated neural units to produce a useful emulation, the bottom-up approach fails to make sense for WBE.

What about top-down? Then we have a different problem. The class of models that could reproduce the observed behavior of a system, absent any other assumptions or limitations, is infinite. Glossing over the practical problems of computability and measurement we already identified, how would we select from that infinite class of models? We would need other criteria: approaches that restrict the computations, or the memory, or the elements of the emulation, or some such. But then we are already in the hybrid domain. Interestingly, the hybrid domain has existed for a while; it is called computational neuroscience or cognitive psychology or similar, depending on your background.

But there is a further problem if the hybrid approach's goal is WBE. We humans use science to describe the world in compact forms that average over aspects we don't care about for specific questions, yet remain accurate to within good tolerances for the questions we do care about. The goal of science is, in part, hermeneutical: it helps us interpret what we observe in a common, publicly observable frame of reference. Science is also about utility: we need it to solve practical problems, inform new experiments, and generally provide frameworks for understanding the world that give humans a common way to interact with it. Understanding and utility are interconnected. The hypothetical WBE is only useful if it is actually many orders of magnitude more compact than the real system, and it needs to align with principles that help us use it to do things. So a useful WBE actually looks like what we do in traditional computational neuroscience: we try to understand and reduce the complexity of the system so that we can explore the space of neural activity or behavior orders of magnitude faster than just observing the actual system. A useful WBE must be more efficient for understanding and use than observing the actual human, mouse, etc., and must provide insights into how to think about concepts like memory and intelligent behavior in ways that allow bridging between animals (and AI).

That brings me to the last possibility implied in the original post and commentary: could a WBE be useful for developing an AGI? Well, if it met the criteria for being useful for neural (behavioral, biological) science, it would presumably be useful for AI as well. But let's assume it fails the assumptions that make the thought experiment useful for 'science' and that we could nonetheless use it to produce an AGI. At best, we would have all the problems above plus an intelligence that we didn't understand and that would be massively expensive (in memory and computation), since it needs to reproduce all the behavior of our current benchmark for intelligence, humans.
But if getting to AGI requires first creating a WBE of a human, then we will have produced an AGI that is just as good as a human, but without the interpretability or compactness that would let us 'improve' it with respect to known human failings. And it would only help with alignment to the extent that we could observe it acting like a human in all observed contexts; we wouldn't really know how it would act out of sample, when used for things we might not want to use humans for, since that isn't part of our WBE definition.

So, all things considered, I'm not very optimistic that there is even a logically coherent definition of a WBE that isn't, in reality, just doing good science. But that doesn't get you the money or hype that Markram and others have used to waste vast resources, and it has created efforts to do things that make no sense. For example, what we need, precisely for the reasons you cite, is not a larger, more precise connectome, but a really good coarse connectome that is statistical in nature, giving us the variance we need to understand how that variance relates to learning, individual differences, etc. (see the toy sketch at the end of this comment). Instead, we are wasting vast resources making bigger 'single-shot' connectomes. I am not naive enough to think this won't provide valuable insights; the same applies to all the circuit cracking that is the norm in systems neuroscience today. But it won't get us closer to WBE, nor to understanding the brain or intelligence. It will just be more constraints and more data for us to try to use in that process.

And given how powerful our best models already are at connecting behavior and coarse neural activity to computations and at predicting future learning and behavior, we should probably focus our efforts on meso-scale ('lo-fi', if that's what they really mean) computational models that use or expose reliable principles and coarse-grained descriptions of neural activity and behavior. These, conveniently, exist and are already in use in both the neuroscience and AI communities; reinforcement learning (broadly defined) is a good example.
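To make the 'coarse, statistical connectome' idea concrete, here is a toy sketch; the cell types and connection probabilities are made up for illustration and are not taken from any dataset. The point is that specifying connection statistics, rather than one fixed wiring diagram, lets you sample many instances and look at the variance across them:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cell-type-level connection probabilities (illustrative numbers only).
n_per_type = {"excitatory": 80, "inhibitory": 20}
p_connect = {
    ("excitatory", "excitatory"): 0.10,
    ("excitatory", "inhibitory"): 0.30,
    ("inhibitory", "excitatory"): 0.40,
    ("inhibitory", "inhibitory"): 0.20,
}

def sample_connectome():
    # Draw one concrete wiring diagram consistent with the coarse statistics.
    labels = [t for t, n in n_per_type.items() for _ in range(n)]
    size = len(labels)
    adj = np.zeros((size, size), dtype=bool)
    for i, pre in enumerate(labels):
        for j, post in enumerate(labels):
            if i != j:
                adj[i, j] = rng.random() < p_connect[(pre, post)]
    return adj

# Sample many instances and quantify the variance the statistics imply,
# which a single "one-shot" connectome cannot expose.
samples = [sample_connectome() for _ in range(200)]
mean_degree = np.array([s.sum(axis=1).mean() for s in samples])
print(f"mean out-degree across instances: {mean_degree.mean():.1f} +/- {mean_degree.std():.1f}")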


I wonder whether you have considered the question, in light of the newer findings on possible quantum activity in the brain, of whether Penrose's theories might make it necessary to include quantum effects in whole-brain models. I don't know if I may post links, but here is a popular science version of the topic: https://www.youtube.com/watch?v=xa2Kpkksf3k


Interesting read!

Thanks for sharing, Patrick!

When I read this, "We wouldn’t reach human whole-brain recording capacity until the 2060s," it's kind of surprising. Hopefully, advances in AI and neuroscience will bring that timeline closer.
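For intuition about where a date like that might come from, here is a rough extrapolation; the starting counts and doubling times below are assumptions for illustration, not numbers from the post (the spirit is the Stevenson and Kording observation that the number of simultaneously recorded neurons has grown roughly exponentially):

import math

HUMAN_NEURONS = 8.6e10  # approximate neuron count of a human brain

def whole_brain_recording_year(neurons_today, doubling_time_years, start_year=2024):
    # Years of exponential growth needed to go from today's simultaneous
    # recording capacity to every neuron in a human brain.
    doublings = math.log2(HUMAN_NEURONS / neurons_today)
    return start_year + doublings * doubling_time_years

# Illustrative scenarios (assumed, not from the post): capacity today and doubling time.
for neurons_today, doubling_time in [(1e4, 2.0), (1e5, 3.0), (1e6, 6.0)]:
    year = whole_brain_recording_year(neurons_today, doubling_time)
    print(f"{neurons_today:.0e} neurons today, doubling every {doubling_time:.0f} years -> ~{year:.0f}")

Under any of these assumed scenarios the answer lands decades out, which is why even optimistic extrapolations put whole-brain recording well into the second half of the century or later.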


Fascinating read, Patrick. This is typically an exercise in sci-fi, but your approach here is novel (to me at least) in that it's based on the engineering that would go into making this a reality. I'll be sure to watch In Silico.
