Guest post: Dirty Rant About The Human Brain Project

October 20, 2015

This is a guest post by a neuroscientist who may or may not be a graduate student somewhere in Massachusetts.

You asked me about the Human Brain Project. Well, there is only one way to properly address that topic: with a rant.

Henry Markram at EPFL in Switzerland was the leader of the “Blue Brain” project, to simulate a brain (well, actually just one cubic millimeter of a mouse brain) on an IBM Blue-Gene supercomputer. He got tons of money for this project, including the IBM supercomputer for the simulations. Of course he never published anything showing that these simulations lead to any understanding of brain function whatsoever. But he did create a team of graphics professionals to make cool pictures of the simulations. Building on this “success”, he led the “Human Brain” EU flagship project into being funded by some miracle of bureaucratic gullibility. The clearly promised goal was simulating a human brain (hence the name of the project). Almost everyone in Europe publicly supported the project, although in private the neuroscientists (who, if they have done any simulations, know that the stated goal is completely absurd) would say something more like “hey, maybe it’s crazy, but it’ll bring a bunch of money.”

Now, some simple observations must be made, which are true now, and will still be true in ten years’ time, at the conclusion of this flagship project:

(1) We have no fucking clue how to simulate a brain. 

We can’t simulate the brain of C. Elegans, a very well studied roundworm (first animal to have its genome sequenced) in which every animal has exactly the same 302-neuron brain (out of 959 total cells) and we know the wiring diagram and we have tons of data on how the animal behaves, including how it behaves if you kill this neuron or that neuron. Pretty much whatever data you want, we can generate it. And yet we don’t know how this brain works. Simply put, data does not equal understanding. You might see a talk in which someone argues for some theory for a subnetwork of 6 or 8 neurons in this animal. Our state of understanding is that bad.

(2) We have no fucking clue how to wire up a brain. 

Ok, we do have a macroscopic clue, this region connects to that region and so on. You can get beautiful pictures with methods like DTI, with a resolution of one cubic millimeter per voxel. Very detailed, right?  Well, apart from DTI being a noisy and controversial method to begin with, remember that one cubic millimeter of brain required a supercomputer to simulate it (not worrying here about how worthless that “simulation” was), so any map with cubic-millimeter voxels is a very coarse map indeed. And microscopically, we have no clue. It looks pretty random. We collect statistics (with great difficulty), and do tons of measurements (also with great difficulty), but not on humans. Even for well studied animals such as cats, rats, and mice, it’s anyone’s guess what the fine structure of the connectivity matrix is. As an overly simplistic comparison, imagine taking statistics on the connectivity of transistors in a Pentium chip and then trying to make your own chip based on those statistics. There’s just no way it’s gonna work.

(3) We have no fucking clue what makes human brains work so well. 

Humans (and great apes and whales and elephants and dolphins and a few other animals that we love) happen to have a class of neurons (“spindle neurons”) that we don’t see in the animals that we spend most of our time studying. Is it important? Who knows. We know for sure that we are missing a lot about what makes a human brain human — it’s definitely not just its size. There’s a guy whose brain is mostly not there, and he was probably one of the dumber kids in class, but still he functions fine in human society (has a job, family, etc.). Is this surprising? Not surprising? How would we know, we don’t know how brains work anyway.

(4) We have no fucking clue what the parameters are. 

If you try to do a simulation to see how neurons behave when they are connected in networks, you need to know a bunch of biophysical parameters. For example, what’s the time constant for voltage leak across the cell membrane? And a ton of other parameters, which are of course different for different classes of cells. So let’s just take the most common excitatory cell class and the most common inhibitory cell class and try to make a network. Luckily, there are papers that report numbers for this or that parameter of these cells. But the reported numbers are all over the place! One lengthy detailed study will find a parameter to be 35±4, and the next in-depth study will find the same parameter to be 12±3. So what should you use in your simulation for this or the many, many other uncertain parameters? Who the fuck knows.
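
Just to make this concrete, here is a toy leaky integrate-and-fire neuron (the standard textbook sketch, nobody’s actual model) driven by the same constant input, run once with a membrane time constant of 35 ms and once with 12 ms (the two numbers above, with milliseconds assumed purely for the sake of the toy). The firing rate comes out wildly different, and this is the simplest possible single-cell model with exactly one uncertain parameter:

    # Toy leaky integrate-and-fire neuron: same input current, two published
    # values for the membrane time constant. Purely illustrative numbers.
    def lif_spike_count(tau_m_ms, t_total_ms=1000.0, dt_ms=0.1,
                        v_rest=-70.0, v_thresh=-54.0, v_reset=-70.0,
                        r_m=10.0, i_input=2.0):
        """Euler-integrate dV/dt = (-(V - v_rest) + R*I) / tau_m and count
        the spikes (threshold crossings followed by a reset)."""
        v = v_rest
        spikes = 0
        for _ in range(int(t_total_ms / dt_ms)):
            v += dt_ms * (-(v - v_rest) + r_m * i_input) / tau_m_ms
            if v >= v_thresh:
                spikes += 1
                v = v_reset
        return spikes

    for tau in (35.0, 12.0):  # the two hypothetical dueling studies
        print(f"tau_m = {tau} ms -> {lif_spike_count(tau)} spikes per second")

With these made-up settings you get about 17 spikes per second in one case and about 51 in the other: a factor of three, from a single uncertain parameter, in the most boring model imaginable.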

(5) We have no fucking clue what the important thing to simulate is. 

Neurons in vertebrates communicate (*) via “spikes”, where the neuron’s voltage level suddenly goes way up for a millisecond or so. This electro-chemical process, involving various ions flowing across the cell membrane, is very well understood. But now, what do these spikes mean? Is it the number of spikes per second that matters? Or is it the precise timing of the spikes? Who the fuck knows. For certain types of cells in certain areas, we see that they are active (producing a lot of spikes) under certain conditions. For example, in the primary visual cortex of a cat, a cell will be active when the eye sees a line at a certain position and a certain orientation moving in a certain direction. Is the timing of these spikes important? We don’t know! Some experts believe one way, some experts believe the other, and the rest admit they don’t know. And primary visual cortex of the cat is the most well studied area of any brain in any animal.

(*) How does a spike allow communication? The voltage spike triggers the release of chemicals at “synapses” (the connections to target neurons), which in turn dock with the target cell’s membrane in various ways to allow ions to cross the membrane, thereby affecting the voltage of the targeted cell. If the voltage in a cell reaches a certain threshold, a spike will occur. Each neuron targets (and is targeted by) thousands of other neurons. And the total number of neurons in a human brain is about a fourth of the number of stars in the Milky Way. You wanna map that circuit?
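
And for a rough sense of what “map that circuit” would mean, here is the back-of-the-envelope version, using commonly cited ballpark figures (on the order of a hundred billion neurons, a few thousand synapses each); the exact values are debatable, but the exponents are the point:

    # Back-of-the-envelope size of a human "wiring diagram".
    # Ballpark figures only; it is the orders of magnitude that matter.
    neurons = 86e9               # commonly cited estimate for a human brain
    synapses_per_neuron = 5e3    # "thousands" per neuron, on the low end

    connections = neurons * synapses_per_neuron
    print(f"~{connections:.1e} synapses to identify")  # ~4.3e14

    # Even a bare adjacency list (two 64-bit cell ids per synapse, with no
    # weights, no cell types, and no dynamics) already comes to:
    bytes_needed = connections * 2 * 8
    print(f"~{bytes_needed / 1e15:.1f} petabytes just to write the connections down")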

So, the next time you see a pretty 3D picture of many neurons being simulated, think “cargo cult brain”. That simulation isn’t gonna think any more than the cargo cult planes are gonna fly. The reason is the same in both cases: We have no clue about what principles allow the real machine to operate. We can only create pretty things that are superficially similar in the ways that we currently understand, which an enlightened being (who has some vague idea how the thing actually works) would just laugh at.

[image: cargo-cult]

  1. October 20, 2015 at 7:06 am

    I love this. Perhaps it’s funded by Gates and Co. Maybe he wants to know how his dumb ideas on education arose.

    Not entirely unrelated, I often wondered about how fast evolution worked, in terms of frequency and usefulness of mutations. A search on the net produced a single mathematical analysis. Amazing! I would have thought that this was a most important aspect of the theory of evolution, worthy of much investigation. I will send you the reference when I find it on my computer. It is fascinating. I suspect that evolution scientists have trouble understanding the maths of predator/prey dynamics.

    Liked by 1 person

    • sglover
      October 22, 2015 at 5:28 pm

      To my knowledge, the Michigan State E. coli long-term evolution experiment is the most comprehensive study of “mutation” in action:

      http://myxo.css.msu.edu/ecoli/overview.html

      The beauty of their approach is that once they detect an “evolution” (e.g., the acquisition or loss of a particular metabolic competence), they can backtrack and find the genetic events that contributed to the change. But of course it’s laborious as hell. And it helps if your lab organism is something like gut bacteria.

      Like

    • October 31, 2015 at 5:53 pm

      I don’t think you really know how evolution works, because you are conflating evolution (change in allele frequency between generations) and natural selection (a mechanism of evolution). We have measured the frequency of mutations for lots of organisms. The vast majority of evolutionary change is due to genetic drift (another mechanism); individual contributions of beneficial alleles to fitness are fairly low, so natural selection isn’t very strong. Natural selection mostly acts against deleterious mutations that knock out genes that are necessary for an organism to function.

      Like

  2. Martin
    October 20, 2015 at 7:30 am

    I think you might be missing some work, a quick Google scholar search for [frequency of mutation evolution] reveals papers like this:

    Kimura, M. (1968). Evolutionary rate at the molecular level. Nature, 217(5129), 624-626.

    which has 2922 citations, so it seems that there has been a fair bit of work on this for at least the last 50 years

    Liked by 1 person

    • October 20, 2015 at 8:45 am

      Thanks, I’ll check it out.

      Like

    • October 20, 2015 at 6:55 pm

      The one I found originally, which I cannot find now, was an information theory analysis of the length of time it would take for a primate/human ancestor to develop the necessary requirements for the development of language. A higher level inquiry. I shall keep trying to track it down.

      Like

      • November 3, 2015 at 1:25 pm

        Ah. You mean a study on something we have barely got a clue about, done by people who don’t understand, to produce a simulation that has no meaning. Good on them.

        Liked by 1 person

  3. Johan
    October 20, 2015 at 8:27 am

    While the whole human brain simulation thing has gotten way over-hyped and a rant may be in order, I fear this rant goes too far in the other direction.

    While it is true that we are nowhere close to simulating a mouse brain, much less a human brain, there are active ongoing research projects to create simulations of C. Elegans. Unlike a mouse brain, simulating C. Elegans is feasible given current computational resources. Maybe not on your laptop, but if you are willing to drop enough money on Amazon compute instances, simulating it is feasible.

    Getting a high enough quality connectome is a big challenge, but something that is reasonable for a simple animal like, again, C. Elegans.

    What the important parts are is a very good question, but that is something you can sort of overcome by throwing enough computation at it and just simulating everything. For the C. Elegans project they are not just simulating the brain, but the entire 1000-cell animal.

    Parameters are a big challenge, but there are two saving graces: 1) Because of the variable environments in which real brains form and operate, the actual parameters have to vary, and the brains have to be robust to these variations to be reliable, so getting the parameters close enough is good enough. 2) Once you have a simulation and some ball-park parameter numbers you can try running your simulation with different values and see what works.

    The biggest problem in this rant relates to the cargo cult plane. To design a plane you need a good grasp of the principles of aerodynamics. Same thing if you seek to design a brain. But to copy a plane you do not need to understand the aerodynamics of it, you just need a good enough copy. The problem with the cargo cult plane is not that they didn’t understand aerodynamics, it’s that the copy is nowhere near good enough. To simulate a brain you do not need to know how a brain works! It is enough that you can copy the design closely enough. This still means you need to get all the neurons and connections correct (which is no mean feat) but you do not need to know how the interactions of the neurons give rise to higher-level brain functions. In fact, simulations of the brain can be a very good tool to help figure out how the brain actually works.

    We won’t be seeing a simulation of a human brain any time soon, but I think a full C. Elegans simulation is feasible within the next 5-10 years.

    Like

    • October 20, 2015 at 9:30 am

      Also, even if you can copy exactly enough to get a working plane, it doesn’t mean you understand the principles of aerodynamics.

      Like

    • October 20, 2015 at 5:26 pm

      I remember all this from back circa 1993, when the Churchlands were confidently predicting that we understood the basic neural circuitry and interactions, we had the integrated circuits, perfect simulations of a bee’s brain were just around the corner (though human brains would take a little longer).

      Liked by 1 person

    • wombatine
      October 20, 2015 at 6:38 pm

      You say that “throwing enough computation at it” will solve the problem. But this doesn’t seem right. Computing power does not seem to be the limiting factor for neuroscience. Ask your favorite supercomputing center how many neuroscientists are scheduling jobs on the system — the answer at most places is zero.

      And saying the parameters don’t matter because it’s biology (or because you plan to just tweak every part of the system until it works) seems like wishful thinking.

      You make the reasonable point that “you just need a good enough copy”. But one of the rant’s valid points seemed to be that we don’t yet know what makes a copy good. You can copy the aerodynamics of a plane perfectly, and your copy won’t fly, because it turns out there are other more important factors, like engine power and being light-weight.

      If you don’t know which are the important factors, you’ll have to make your copy good enough in *every* possible way, which seems difficult, and for a brain where we don’t know the wiring diagram or the molecular mechanisms yet, seems impossible.

      Liked by 2 people

      • Johan
        October 21, 2015 at 2:08 am

        I am not suggesting that “throwing enough computation at it” is some sort of general solution to research problems in neuroscience. Rather, what I am suggesting is that there are specific parts of the problem of simulating a brain that can be circumvented with enough computing power thrown at it.

        I absolutely did not say parameters don’t matter or that we can just tweak every part of the system. What I said was nearly the opposite. Doing the hard work to get an idea for what the parameters are is essential. What I did suggest is that if we have one study suggesting a parameter value of 35±4 and another suggesting 12±3, that this can be good enough. We can then take the range of, say, 0-50, split it into 100 steps, and try each one. This is only possible because we have studied the organism enough to have an idea of the range and because we have “enough computing power to throw at it”.

        I did not suggest you should only copy the aerodynamic parts of a plane. In fact, without knowing aerodynamics you couldn’t know which parts are the aerodynamic ones, so you couldn’t copy just them. That is why I suggested you just do your best to copy everything. The important parts and the unimportant ones. I agree that this is still a very challenging problem, and we are nowhere close to doing that for a human brain, but for C. Elegans we actually have a complete wiring diagram.

        Liked by 1 person

        • The Dopa-Queen
          October 21, 2015 at 11:45 am

          Johan,

          Your statements about parameter-estimation are still wishful thinking.

          You said: "We can then take the range of, say, 0-50, split it into 100 steps, and try each one." This works well for low-dimensional problems, like the slope of a linear regression. These whole-brain models have orders of magnitude more parameters than that. This is the curse of high dimensionality: parameter sweeps in high dimensions are useless. Dimensionality reduction methods (e.g., PCA) will only take you so far, and abstract the experimenter away from the biology.
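
          To put a toy number on it: with 100 steps per parameter, the number of grid points is 100 raised to the number of parameters, so the sweep stops being an option almost immediately (this is just arithmetic, not a model of anything):

              # Grid search grows exponentially with the number of parameters:
              # 100 steps per parameter, as in the suggestion above.
              steps = 100
              for n_params in (1, 2, 5, 10, 100):
                  grid_points = steps ** n_params
                  print(f"{n_params:>3} parameters -> {grid_points:.1e} simulations to run")
              # At 10 parameters that is already 1e20 runs; at 100 parameters,
              # 1e200, versus roughly 1e80 atoms in the observable universe.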

          To “try out” a parameter combination (forgetting the curse of high-dimensionality for a second) also takes huge amounts of computing resources and time, so it is not feasible to just shotgun-run many instantiations of the model in series.

          Additionally, the brain is a chaotic system, and is exquisitely sensitive to changes in initial condition. So if you see phenotype A at parameter value 4, then B at 5, C at 6, and A again at 7, how will you know what “””””regular””””” function is?

          peace,

          the dopaqueen

          Liked by 1 person

    • araybold
      October 20, 2015 at 7:47 pm

      “We won’t be seeing a simulation of a human brain any time soon, but I think a full C. Elegans simulation is feasible within the next 5-10 years.”

      That is not much of a refutation of the article’s thesis.

      Like

      • Johan
        October 21, 2015 at 2:18 am

        I never attempted to refute the main point, that the human brain project is way overhyped. Rather, my criticism related to two secondary points: that we can’t even simulate the brain of C. Elegans (true at the moment, but that is definitely changing), and that we would need to know how a brain works to simulate it (i.e., we don’t need to know why a guy with most of his brain missing can still function in order to be able to simulate brains).

        There is an often-repeated claim in neuroscience that you must know how a brain works to simulate it. That is what I mainly want to push back against. Knowing how a brain works surely makes simulating it a lot easier, but it is not necessary.

        Like

    • October 20, 2015 at 11:31 pm

      The problem with the cargo cult plane is not that they didn’t understand aerodynamics, it’s that the copy is nowhere near good enough

      But an understanding of aerodynamics is necessary to tell you in advance whether your copy is good enough, so as not to waste effort if it isn’t. “Omitting whole cell populations, intra-cellular parameters, and methods of intercellular communication from the simulation” is the equivalent of leaving out the plane’s engine and the fabric of the wings.

      Like

      • Johan
        October 21, 2015 at 7:37 am

        Knowing in advance whether the copy is good enough is not necessary. Sure, it would make things a lot easier, but that doesn’t mean an imperfect copy is a waste of time. At the very least you learn that the copy you have is not good enough. All the work you’ve put into making the imperfect copy doesn’t disappear when the copy fails. You can go on building your next copy on top of what you already have in an iterative manner. If you are lucky, the way your copy fails might even guide you to where its shortcomings are.

        You stand a much bigger chance of not missing the engine if you don’t limit your simulation to just the brain but instead simulate the entire C. Elegans.

        Like

    • TeaBag
      October 23, 2015 at 6:19 am

      The cargo cult plane analogy is perfect. If you want to make a replica plane that can fly without understanding aerodynamics, you would have to make an EXACT copy of an existing plane. You would need to use all the same materials, assembled precisely as the original. Without a good understanding of the principles, you can’t know whether near enough is good enough or what substitutions can be made in materials or design.

      When people talk about simulating a human brain, that is not an exact replica in any sense. The materials and operating systems are *completely* different. It therefore requires a very deep and mostly complete understanding of the principles of how it works, in order to be able to reproduce those operations in a totally different format.

      Like

    • November 3, 2015 at 1:37 pm

      Oh dear, the connectome people have arrived. Johan, we can’t make a memory now. Oh, we can stimulate a cell, track processes, and say stuff. But we can’t manufacture a memory and insert it into a cell or network and have it work. We are, in a word, utterly clueless.

      The connectome idea was created by a hopeful computer science fella, IIRC. It’s complete nonsense, exhibit A in the pantheon of cargo cult brain ideas. The connectome meme is precisely cargo cult science because it says that by creating the apparent shape/form of a brain you will, thereby, have a brain. D’oh. You can’t see that?

      No. First, that connectome is flexible, changing. Second, that “connectome” has these things we call, um, signals running through it. Those signals are controlled by things we don’t know about, just like this guest poster said. Third, did you know that branchlets of neurons have been shown to do sub-processing of some kind, just locally, without involving the primary neuron axis? This can go on simultaneously in multiple parts of the neuron.

      Add to that we can’t find memories and can’t create them, and it should make you realize that if anything this “rant” is too soft-pedaled.

      We can, sometimes, knock out memories in a very gross way by destroying an entire cell.

      Like

  4. October 20, 2015 at 8:30 am

    One of my all-time favorite quotations is, “If the brain was so simple that we could understand it, then we would be so simple that we couldn’t.”

    Like

    • Jim
      October 22, 2015 at 6:02 am

      There are at least two arguments against this. (1) Much of the wiring diagram of the brain is repetitive, with the same local connectivity pattern replicated many, many times. The number of concepts we need to understand is certainly less than the number of neurons in the brain. (2) There is no single person who understands everything about an iPhone, but nobody would argue that “we” don’t understand how an iPhone works. A large community of scientists and engineers does collectively understand it.

      Like

      • October 22, 2015 at 12:22 pm

        On point (2) I totally disagree. You aren’t actually drawing a comparison between understanding an iPhone and understanding the human brain, are you?

        An iPhone is a computer, and we’ve understood computers well enough and continued to refine them since the 70’s (if not the 60’s). They can be generally represented by the concept of the “wiring diagram” you brought up in point (1).

        The human brain is not some physical device we’ve created. It was (and is still being) “created” by natural evolutionary processes. It isn’t static like an iPhone (or so it seems, to my understanding); it’s dynamic.

        We control the evolution of the iPhone and future variations, so we can control the outcome (so far).

        We haven’t even begun to fully understand the human brain beyond what you’re calling a “wiring diagram that is repetitive.” (Generally and conceptually).

        At some point in the future, we might understand it … but for now, we’re still exploring.

        My take is once we get through this “we have to reconstruct it all, then we’ll understand it” phase we’ll actually find the journey towards understanding the human brain has only begun.

        Like

  5. October 20, 2015 at 8:44 am

    All my acid freak physicist friends back in the Sixties assured me that consciousness occurs at quantum levels. I’m guessing they were probably right, that the real business of mind goes on several levels down from neural connections. All that neural spiking is just the gross electrical power grid that it takes to keep the quantum homunculi’s houses warm and well-lit.

    Like

    • Jim
      October 22, 2015 at 6:14 am

      That argument has always struck me as a comical attempt to reduce the number of incomprehensible things from two (the brain, quantum) down to one, based on really nothing except their incomprehensibility. I don’t think Occam’s Razor is supposed to work that way.

      Like

      • October 22, 2015 at 7:47 am

        I think of it more as a commedious way of remaining conscious of the fact that a certain hypothesis is a hypothesis.

        Like

  6. October 20, 2015 at 8:44 am

    I was led to this cartoon site by Mean Green Math, and I found this one:
    http://www.xkcd.com/1588/

    Like

  7. Nathanael
    October 20, 2015 at 9:15 am

    The number of neurons doesn’t even reflect the complexity; we’re quite sure that *neurochemistry* matters, and we haven’t even *identified* all the neurotransmitters. Sooooooo….. we’re a long way from simulating a brain. Even C. Elegans. A long long long long way.

    Liked by 1 person

  8. October 20, 2015 at 9:28 am

    Thanks. This was interesting.

    From the swearing, I understand that the author is angry. What I didn’t understand is about what or why. I guess you think the funders of the project are wasting their money. I guess you think they should have given the money to other projects. I still don’t understand the anger, though.

    Like

    • October 20, 2015 at 9:32 am

      I mean, it’s a rant. Rants are generally speaking kind of angry. I can see why someone in neuroscience would be angry about an over-hyped project. It’s not great for their field or their future career.

      Like

      • October 21, 2015 at 6:31 am

        someone in neuroscience would be angry about an over-hyped project

        And although the project is nominally about neuroscience, all the rationales for it focus on the impetus it would provide for supercomputer development. It was billed as a way to throw $1 billion of corporate welfare at the supercomputer industry with neuroscience as an excuse.

        Like

    • wombatine
      October 20, 2015 at 7:30 pm

      I think some people are upset because of the scale of money being diverted to this one project. Let’s compare with federal grants awarded this year in all disciplines at Berkeley and MIT. This one project will get more than all of those put together.

      Like

  9. tdhawkes
    October 20, 2015 at 9:47 am

    I am a working neuroscientist, and I support this rant. I will add this information to the rant just to boggle the mind a bit further: the response of neurons to stimulation by other neurons is operationalized by gene transcription and protein construction in the cell nucleus that result from cell-to-cell signaling in each neuron’s dendritic arbor, axon trigger zone, nodes of Ranvier and terminals, and the neuron soma (cell body). This signaling happens at synapses. These transcriptions and protein constructs that result from signaling are essential for cell-to-cell communication at the synapse, but happen at much slower timescales than spikes, and require the maintenance and monitoring of extensive intracellular molecular signaling pathways (this is one of thousands: http://faculty.cas.usf.edu/gullah/Rsearch.html), packaging of protein constructs, and movement of such along cytoskeletal pathways to target zones at synapses, and none of these essential processes are included in any modeling of any circuit so far.

    Be aware that each neuron contains thousands to hundreds of thousands of synapses, and that each synapse can be composed of millions of ion channels which participate in synapse activity. Each ion channel is monitored by something in the cell — and we have zero idea how this is accomplished — because the number, activation status, and repair of EVERY ion channel is managed such that the synapse’s functionality in the network of synapses that affect it is optimized. So, mathbabe — do the math — billions of neurons with thousands to hundreds of thousands of synapses with millions of ion channels per synapse, all working hard to generate your experience of mind.

    Further, each neuron has a unique set of synapses and ion channel distributions based on the neuron’s function in the network of neurons in which it has its life. To add further complexity, it has just been shown that each neuron has unique DNA sets from which to construct proteins to operationalize the neuron’s life. And we want to reverse engineer the limited neuron behaviors we have managed to observe, and map this complex living machine such that we can imitate it with a computer whose computational mechanisms are nothing like what the ranter or I have just described. Again, you do the math on the odds that the current mapping methods could possibly work.
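
    Doing that math with the figures above (every number a rough order of magnitude, nothing more):

        # "Do the math", with the rough figures from the comment above.
        neurons = 1e10                 # "billions of neurons"
        synapses_per_neuron = 1e4      # "thousands to hundreds of thousands"
        channels_per_synapse = 1e6     # "millions of ion channels per synapse"

        synapses = neurons * synapses_per_neuron
        channels = synapses * channels_per_synapse
        print(f"synapses:     ~{synapses:.0e}")   # ~1e+14
        print(f"ion channels: ~{channels:.0e}")   # ~1e+20
        # Even one double-precision state variable per ion channel is about
        # 8e20 bytes (hundreds of exabytes), before simulating any dynamics.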

    Liked by 3 people

    • October 22, 2015 at 12:32 pm

      This, times 1000.

      Like

    • DocMartyn
      October 25, 2015 at 9:58 am

      I am a neurochemist and don’t even bother thinking about the complexities of neurons; just looking at the complexity of normal human astrocyte cultures and watching them traffic vesicles leaves me in awe.
      All the worm people I respect will tell you honestly they don’t have a clue how such a tiny nervous system generates such a complex phenotype.
      The Human Brain Project will generate nothing but very pretty rubbish.

      Like

    • October 31, 2015 at 12:14 am

      Working neuroscientist (and neurosci educator) here – totally agree with both the rant and tdhawkes’ detailed endorsement. I’m in the habit of saying that people who feel these problems are tractable are revealing their lack of understanding by thinking they are tractable. On the plus side, generations of work ahead in a fascinating field, not complaining about that.

      Liked by 1 person

    • November 3, 2015 at 1:54 pm

      Yay! 🙂

      The other thing that this brain simulation stuff is selling is nouveau AI by riding on those coattails.

      What bothers me is that there are hopeful, quite serious people like Kurzweil and various transhumanists who are setting people up to be victimized by scammers. This idea of a consciousness download by “destructive scanning” of the brain’s connectome (perhaps after death) has taken hold and is on the verge of being marketed.

      Like cryonics, it is total nonsense. It is a con to suggest to people that they can be “saved” this way, and the usual suspects are lining up to suck money out of hopeful rubes by suggesting they might live forever.

      Like

  10. October 20, 2015 at 10:34 am

    A short while back they were calling it the “Human Brain Wreck”, a name tagged to the project in the UK as it was turning out to be not a lot more than an expensive database project.

    http://ducknetweb.blogspot.com/2014/09/european-brain-project-turning-into.html

    Like

  11. October 20, 2015 at 11:15 am

    Researchers need money; therefore we need outspoken people to keep them in check against the public desire for progress. This would be a better article if the writer were able to tame his emotional response to the data. The same emotional response that leads to “click bait science” is present.

    The article is both an appeal to real science and a lack of respect for the science of communication, philosophy and thought. The emotional anger and swearing are out of place with the implicit goal of promoting scientific thought.

    Our lack of knowledge is not a problem. It may be frustrating that we are in a position of needing to take one step at a time, but it is the way to move forward.

    When we understand the physical properties we can advance to understanding the quantum effects that they enable.

    Like

    • Silvanus
      November 2, 2015 at 7:10 am

      Really? Emotional response is a normal and necessary part of human communication, but you think it is bad because… what, exactly? What science shows that anger and swearing are ‘out of place with the implicit goal of promoting scientific thought’?

      Like

  12. October 20, 2015 at 11:33 am

    Computer scientist here (well, software engineer.) WRT @tdhawkes’ comment — while I totally defer to your knowledge of the inner details of signaling, it may not be necessary to simulate all the processes, at every level of detail, in order to replicate the significant behaviors of neurons.

    To use a simplistic CS analogy, we can reverse engineer the behavior of a program like a server or a video game by examining its behavior at a high level, or by reading its machine code. Even though the machine code is executed by a hugely complex mass of transistors, it’s not necessary to know how a Xeon or ARM CPU is wired to understand the software. (If it were, I’d be out of a job!) And the underpinnings can be implemented in very different ways — Super Mario Bros running on a Wii U uses completely different hardware than the original NES.

    Or back to biology, my understanding is that a lot of research on DNA and genomes is done at an information-theory level that considers DNA strands as symbolic encodings while ignoring the chemistry of amino acids.

    But to undermine my own argument: one thing that’s been observed about structures evolved by genetic algorithms (such as FPGA circuits) is that they don’t use the neat layers of encapsulation that human engineers do. The results are, to us, often ridiculously messy, with macro and micro level behaviors all interdependent. If the same applies to neurons and the brain, then maybe we would have to simulate everything at all levels…
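
    In case it helps, here is what “evolved by a genetic algorithm” means in a deliberately trivial form: keep a population of candidate solutions, score them, and breed the fitter ones with crossover and mutation. This toy (evolving a bit string toward a fixed target) is nothing like real FPGA evolution, which is where the messiness shows up, but it shows the basic loop:

        # Minimal genetic algorithm sketch: selection, crossover, mutation.
        # Toy example only.
        import random

        TARGET = [1] * 32                 # the "design" we want to reach
        POP_SIZE, GENERATIONS, MUTATION_RATE = 30, 300, 0.02

        def fitness(ind):
            return sum(a == b for a, b in zip(ind, TARGET))

        def mutate(ind):
            return [bit ^ (random.random() < MUTATION_RATE) for bit in ind]

        def crossover(a, b):
            cut = random.randrange(1, len(a))
            return a[:cut] + b[cut:]

        pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
        for _ in range(GENERATIONS):
            pop.sort(key=fitness, reverse=True)      # rank by fitness
            if fitness(pop[0]) == len(TARGET):
                break                                # perfect match found
            parents = pop[: POP_SIZE // 2]           # keep the better half
            children = [mutate(crossover(random.choice(parents),
                                         random.choice(parents)))
                        for _ in range(POP_SIZE - len(parents))]
            pop = parents + children

        best = max(pop, key=fitness)
        print(f"best fitness: {fitness(best)}/{len(TARGET)}")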

    Like

    • Fats Grobnik
      October 20, 2015 at 1:27 pm

      @tdhawkes’ point is twofold: (a) neurons have internal state (epigenetic and transcriptional) that we do not understand and is difficult to measure, and (b) voltage spikes are not the only ways that neurons communicate. Both of these are reasons that the neural system in C. elegans is not well understood, despite having so few neurons. The deeper you go into biology, the more subtle and complex you realize it is, and the more the complexity is required to explain the biological phenomena.

      BTW, your understanding of DNA is at best incomplete. There are many open questions about how the cell controls transcription of DNA. While information-theoretic tools do seem to be useful, they don’t replace experimental approaches. In addition to the primary DNA nucleotide sequence, cells use a variety of physical processes to control this, including proximity of the gene to the nuclear envelope, epigenetic (chemical) modifications, and packing of the DNA into chromatin.

      Like

      • bks
        October 20, 2015 at 10:48 pm

        Recommended: Wetware, A Computer in Every Living Cell
        http://yalepress.yale.edu/book.asp?isbn=9780300141733

        Like

      • October 20, 2015 at 11:25 pm

        voltage spikes are not the only ways that neurons communicate
        I am not seeing any place in these postulated simulations for the whole phenomenon of volume-effect non-synaptic neuromodulation. The disconnectome, if you will.
        I had read that some glial cells take part in communication, with synapses as well as non-synaptic signalling (which is a problem if the simulations stick only to neurons), and I would welcome any correction from the experts here.

        Like

    • October 20, 2015 at 10:31 pm

      @mooseyard You mention experiments with genetic algorithms on FPGA circuits, any chance you could give some more details about this?

      Like

      • sglover
        October 22, 2015 at 6:15 pm

        This might be the original FPGA evolution paper, from 1996:

        http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=7D81073D97A7646A079C6B4A89DE66CE?doi=10.1.1.50.9691&rep=rep1&type=pdf

        (There are probably better links. Look up something like “university of sussex adrian thompson fpga evolution”.)

        A very interesting observation is on page 10 (I can’t seem to copy from the original). There Thompson mentions that in the final, highly optimized circuit, there are gates that are not actually connected, and therefore presumably redundant. Yet if they are removed the circuit no longer functions as desired.

        Like

  13. mathematrucker
    October 20, 2015 at 12:10 pm

    The word “fuck” can be really useful in informal communication, like for example in a mathbabe guest post. Choosing when, where and how to use it well requires some skill. This author—clearly a talented writer to begin with—used it very skillfully in my opinion.

    Like

    • sglover
      October 22, 2015 at 6:17 pm

      I wonder if it’s just the use of “fuck” that makes people object to the tone of the piece. It seems pretty informed and informative and astute to me — what more could you want?

      Like

      • mathematrucker
        October 22, 2015 at 9:32 pm

        I was under the mistaken impression that one or more commenters were objecting to the word per se, when instead it was actually the post’s tone.

        If you ask me the tone is a lot more informative than angry. But even if it were really angry I wouldn’t fault the author for expressing his anger (within the bounds of reasonableness).

        Like

  14. Mike
    October 20, 2015 at 1:34 pm

    This post correctly describes the everyday reality of computational neuroscience. My PhD thesis was in this area. I don’t do this kind of work anymore because sophisticated cargo-cult science is the norm. There is good stuff going on: fairly abstract math about how huge dynamical systems can compute, and merciless nitty-gritty biology. But the gap between them is still huge.

    @howardat58. Seconding Martin, the math of evolutionary theory is incredibly well studied, going back 90 years. A key difficulty of that field now is that many distinct evolutionary mechanisms produce very similar patterns, and so it is very difficult to tell if your favorite math applies to your favorite organism/ecosystem. “Population genetics” is a good key phrase to search. Also “E. coli long-term evolution experiment.”

    Liked by 1 person

  15. October 20, 2015 at 4:42 pm

    I celebrate this clear acknowledgement of collective ignorance. Ignorance is not fundamentally a bad thing- it’s the stuff of mystery and provides us motivation to keep at it. We’re so hypnotized by what we know that we forget that there’s so much more that we don’t know. Love it!

    Like

  16. Adam
  17. smythejames78
    October 20, 2015 at 6:56 pm

    This has got to be one of my top 5 of your posts.

    Like

  18. October 21, 2015 at 1:51 am

    You cannot simulate the brain without its natural inputs and outputs: afferents and efferents. Brains are not born ready-wired; they sort of evolve through childhood and adolescence. This point is not trivial, it is far deeper than it may seem on the surface. The brain is made through interaction with the social and physical world. Thus, the simulated brain would have to be situated in a social and physical context providing the stimuli and feedback and affordances that our evolved brains need to develop.

    Liked by 1 person

  19. crf
    October 21, 2015 at 2:18 am

    The Human Brain Project was selected from a shortlist of candidates for this research money prize (Graphene also won.). All the candidates were very sciency: science as if dreamt up by vain people who are not scientists, but who like the idea of it.

    http://cordis.europa.eu/fp7/ict/programme/fet/flagship/6pilots_en.html

    So a rant was inevitably going to be written. If it were not about HBP, it would be about one of those failed candidates.

    Like

  20. Asimio
    October 21, 2015 at 9:33 am

    I actually recall Markram saying in his TED talk that “hopefully we’ll have a hologram here in ten years speaking”

    I totally agree with everything you said; however, such a project is absolutely essential, although probably with better leadership and direction. Markram essentially says that they’re working without a hypothesis, fitting together all that they know and hoping for an emergent effect. Now, this is certainly how work on simulation is done, but when he says he’s going to simulate a human brain (rather than a specific sub-system) it just seems like a theatrical stunt.

    Hopefully mindscope will produce more interesting results. Go team Koch!

    Like

  21. SynapseCrackle&Pop
    October 21, 2015 at 11:33 am

    Well said. But perhaps there is still a note of some unexpected success from the HBP?

    https://humanbrainproblemsblog.wordpress.com/2015/10/21/if-anyone-can-lacques-lacan/

    Like

  22. October 21, 2015 at 7:27 pm

    Over here in Europe we have completely absorbed the razzle-dazzle approach to science, and the HBP is a logical consequence of it.

    Note that he and his wife (who is CEO or something like that of the Frontiers Open Access journals) also have a proposal that, if I recall correctly, involves treating autistic children by depriving them of or at least drastically reducing sensory stimulus (something along those lines). Someone who actually knows something about autism needs to take a close look at that work too.

    It seems that everything they touch turns to ashes. Frontiers regularly sends editorship offers for special issues to graduate students who have no expertise in that area. They also offer editorships of special issues to complete non-experts; Dorothy Bishop discussed one such case recently.

    And yet, this power couple is a lot more successful than you or me, or indeed pretty much anyone else. I guess they worked out what they wanted (money and fame) and went after it very systematically. As a business strategy, it is hard to fault them.

    Like

    • October 24, 2015 at 8:54 pm

      Frontiers is a multi level marketing scheme like Amway.

      Like

  23. October 22, 2015 at 8:36 am

    Imagine looking down on Earth from space and trying to figure out what the biosphere is thinking from the pattern of blinking lights on its surface.

    That is about where we are today in trying to understand how the brain of any critter works.

    Like

  24. Jack Morava
    October 22, 2015 at 10:19 am

    Sorry to jump in late in the game, but maybe

    https://en.wikipedia.org/wiki/Cargo_cult

    [excerpt: The metaphorical use of “cargo cult” was popularized by physicist Richard Feynman at a 1974 Caltech commencement speech, which later became a chapter in his book `Surely You’re Joking, Mr. Feynman!’, where he coined the phrase “cargo cult science” to describe activity that had some of the trappings of real science (such as publication in scientific journals) but lacked a basis in honest experimentation.]

    There are of course plenty of examples beyond the HBP. I don’t mean to put down neurological modelling and such things; the question seems to me not about science but politics and the way science is sold, not to agencies like the NSF but to those who write the budgets for such agencies.

    Like

  25. Mark
    October 22, 2015 at 10:22 am

    I am so grateful that someone with the scientific knowledge is saying what I have been suspecting for years – every ten years someone claims we will have a model of the human brain in 10 years, and I think: no, we won’t!

    Like

  26. Blue sunset
    October 22, 2015 at 12:02 pm

    1, For anyone with a passing knowledge of control theory, chaos theory, emergent properties, etc. the whole idea would seem bananas. How the grant committee came to the conclusion that the idea was plausible is beyond me.
    2, Expanding on 1, it is infuriating and frustrating to see such frivolous projects get tons of money while others doing solid, incremental, nonglamorous work struggle for funding.
    3, I’d be very interested in this particular grad student’s take on the BRAIN project on the other side of the Atlantic.

    Like

    • October 31, 2015 at 6:18 pm

      I think I know why such projects get funded by the European funding agencies. Their stated goal is to fund high-risk and high-gain research, not anything that is “incremental”. Under those criteria, the HBP makes a lot of sense to fund. I think part of the problem is that the kind of people who end up in charge of such funding agencies often don’t actually do any research themselves; they are either scientists turned administrators, or have their underlings do the research for them and just stick their names onto anything that comes out of their “lab”. So, such people get sucked into the idea of having a truly wild proposal as something worth funding. If they were actually doing research themselves, they might not get pulled into the hype so easily. It’s like with politicians: the type of person who *wants* to be in charge of funding research is exactly the type of person who should not be in charge of funding research, because they are there for the wrong reason.

      Like

  27. October 22, 2015 at 12:36 pm

    Reblogged this on The Ratliff Notepad.

    Like

  28. rloosemore
    October 22, 2015 at 3:17 pm

    Fuck YES! I have written similar rants, on other forums having to do with artificial intelligence and similar matters, ever since the first claims by Markram started to be repeated like the Second Coming of the Neuroscience Messiah.

    Everything you say is spot-on.

    Thank you for saying it so forcefully.

    Like

  29. October 22, 2015 at 6:01 pm

    I basically agree that we don’t know how to simulate brains. My own, higher-level take on the issue is here:
    http://fqxi.org/community/forum/topic/2119.

    Like

  30. October 22, 2015 at 6:50 pm

    The longer this has gone on, the more I’ve become persuaded that we could simulate a human brain… specifically, Trey Gowdy’s.

    Like

  31. Chip L.
    October 23, 2015 at 10:35 am

    The more I read about human brain function, structure and processes, the more I suspect that there are massive parts of the human mind that just don’t actually exist in the currently measurable physical realm. Some have started suspecting that much of the neural operations that make up ‘thought’ depend on quantum interactions of which we barely have the slightest understanding.

    Like

  32. Disgruntled Biochemist
    October 23, 2015 at 11:27 am

    Agreed on all counts.

    Like

  33. BM3
    October 23, 2015 at 3:58 pm
  34. dirk bruere
    October 23, 2015 at 6:11 pm

    Somehow I suspect that neural nets running on exascale computers might do some really cool AI stuff even if they are not like “real” brains.

    Like

  35. October 25, 2015 at 11:32 am

    Clear and true, even OBVIOUS to anyone with any minimal formal training in psychology or animal biology.
    So what were the grant reviewers doing? Presumably they like playing with simulations on the biggest possible computer – relation to living organisms totally irrelevant.
    The emperor has no clothes, although probably not fucking as often as the ranter’s rant. How did the word ‘fucking’ add to the clarity of the argument? We probably got the message that the blogger feels strongly about it without the overemphasis.

    Like

    • November 1, 2015 at 12:39 pm

      I’m starting to see a pattern, which is, neuroscientists and biologists find this project a waste of time, while computer scientists like myself think it’s worth a shot. Perhaps the gap here is between people and their understanding of the topic, but if I had to bet on this one, I’d say computer scientists know more about simulations than biologists.

      Like

      • November 1, 2015 at 11:24 pm

        Yeah but a lot of us biologists ARE computer scientists (or I was, when I graduated). The problem is not about understanding what’s algorithmically tractable, or awareness that even simple rules when iterated on a massive scale yield unexpected results (which is the basis of neural net type computation). It’s the frank admission that we don’t know what needs to be simulated. Not that we don’t have enough detail, or the right algorithm. We just don’t actually know WHAT needs to be simulated, to simulate a brain. We can simulate the voltage changes of neuronal membranes in painful detail, and the outcome is a simulation of membrane voltages, not thought, or behaviour, or anything brainlike.

        Liked by 2 people

        • November 5, 2015 at 4:54 am

          “We can simulate the voltage changes of neuronal membranes in painful detail, and the outcome is a simulation of membrane voltages, not thought, or behaviour, or anything brainlike.”

          How do we know it would not be brainlike if we haven’t tried it?

          Like

  36. Dafydd Gibbon
    October 25, 2015 at 11:44 am

    Are there any results concerning the effects of the electric fields in the brain on the way the brain works, rather than actual connections, channels etc.? I’m not a neuroscientist but familiar with signal processing. Just curious.

    Like

  37. Mike
    October 25, 2015 at 1:04 pm

    Dafydd Gibbon, look up “ephaptic coupling”. Field interactions between neurons aren’t as well-studied as synaptic (chemical) coupling, but it’s believed that the nonlinear threshold/spiking behavior of neurons helps limit the role of diffuse linear electric field interactions (with some known and probably many unknown exceptions).

    Like

    • Dafydd
      October 26, 2015 at 3:36 pm

      Thanks, Mike, that’s given me exactly the lead I was looking for. I was thinking, no doubt naively, that if EEG sensors can couple to these fields externally, then there could be something quite subtle going on internally. I’ll look it up.

      Like

      • Heavy Fuel
        October 27, 2015 at 2:28 pm

        Dafydd Gibbon, the Blue Brain Project published a paper on LFP generation a couple of years back, which might also be of interest to you:

        http://www.ncbi.nlm.nih.gov/pubmed/23889937

        In this model the electric fields are driven by circuit/neuron activity and do not necessarily affect it.

        Like

        • Dafydd
          October 28, 2015 at 10:28 am

          Thanks, Heavy Fuel, very helpful wrt generation of electrical fields.

          Like

  38. Cos
    October 28, 2015 at 7:11 am

    This post is similar to the opinions people had about planes before the first plane was built. Yes, we’ve never done it before and we don’t exactly know anything about it – but we’re not going to learn how to do it by not trying.

    And regarding your rant, most of it is from a very biological perspective. Especially the parameter rant (nr 4) looks to me as if you don’t actually think of the simulation as ‘simulating the software’ but as simulating the hardware of the brain. The reality is, simulating the functions of the brain might not require simulating the chemical processes at all. That’s just the way nature does things because it has no other way. We might not need all the interim steps to achieve the processes taking place in the brain in silico. We don’t need axons that have a myelin sheath – because we are already able to perform data transfer in a different, more exact way than any biological process could. The same goes for those measured parameters – those are biological limitations and variance which we don’t need to take into account – we don’t need to simulate signal loss if we can already guarantee a good signal – that would be taking a step back just because nature is limited like that.

    I agree that we don’t know which way to go exactly, but standing still will not solve it. Let people who have the drive to try things out do it, and we’ll see what comes out of it.

    Like

    • Heavy Fuel
      October 28, 2015 at 8:34 am

      “Heavier-than-air flying machines are impossible.” –Lord William Thomson Kelvin, 1895

      Like

      • Bruce Cleaver
        October 30, 2015 at 12:04 pm

        “Heavier-than-air flying machines are impossible.” –Lord William Thomson Kelvin, 1895

        “We must Know. We will Know.” – David Hilbert, 1930

        “Nuh-uh” – Kurt Godel, 1931

        Like

  39. Cian
    November 1, 2015 at 12:02 pm

    Agree that the simulation part of this project is a waste of money. Maybe the tedious experimental data collected and the neuromorphic computing will be useful in future though.

    As for parameters. It is not only finding estimates for the mean of each parameter that is needed. In principle we also need to know the correlations between each parameter. Eve Marder, for one, has shown this in detail for small neural circuits in the lobster stomach. This is a problem because the number of correlations grows exponentially with the number of parameters. So in the end it will be astronomical.

    If you ask me, what is instead needed is an understanding of the rules of plasticity and homeostasis that wire and rewire circuits during development. Although we don’t understand these rules very well now, they will be simpler to figure out than simply brute-force measuring each parameter (which is impossible in any case).

    And that’s just for the electrical behaviour of the brain. My bet is that most of the computation is done biochemically by molecules within neurons. Much higher dimensional problem again.

    Like

  40. Walt G
    November 1, 2015 at 8:38 pm

    I’m willing to bet ten U. S. dollars (with one person) that, within ten years, a working brain simulation based on an organic model will be created. It may fail to exactly or even approximately operate as the template, but it will surely teach us something about how the natural product works.

    Like

  41. HF
    November 3, 2015 at 1:03 am

    Obviously there is something wrong with seventy years of research tradition. The big secret has not yet been discovered. Perhaps they use the wrong mathematical tools? What remains if you leave behind linear algebra, calculus and probabilities? My pipe dream is a local model of neural computation that uses the espace étalé of the sheaf of locally constant functions as its state space. The underlying dynamics should grow germs of functions, constantly forming products and quotients, trying to resolve emerging singularities. Alas, it’s just a dream…

    Like

  42. xtin
    November 3, 2015 at 3:54 am

    The cargo cult comparison is wrongly applied, to the point that I’d even say it is an argument for HBP rather than against it. The essential cargo cult argument is that there is at least one central principle (aerodynamics) behind a property (flying) that we need to understand before we can build an object exhibiting that property (plane). Applying this argument to the HBP means that before we can build a brain that exhibits “consciousness” (or any other effect you want to see that brain doing, really), we’ll need to understand some core principles first. However, it is not a priori clear that the core principle for flying is aerodynamics; we know this now, but it might just as well have been some other feature that was captured by the wood model. Furthermore, we don’t know if there even is something like “aerodynamics” for brains.
    And this is the crucial point where the cargo cult comparison is wrongly applied: we already know that network effects are a central principle for brains, and the only feasible way to understand such behaviours is to actually try to model and then simulate them, create and compare statistics, etc. That’s why HBP isn’t about creating “the” human brain, but more about creating the tools to build brains at very different complexities and to build a central ICT platform to unite research data on this topic, so we can learn how to answer the questions the rant so adamantly claims to be unsolved.
    Transported to cargo cult world: HBP is much more of a toolbox to assemble and study paper, metal, wood and other planes and see if they exhibit the same effect as the thing in the sky. Most planes will not fly, but some might and if they do we can see why.

    Like

  43. Snorfu
    November 3, 2015 at 4:50 am

    I see a lot of brain simulations on facebook everyday.

    Like

  44. Chris
    November 3, 2015 at 9:08 am

    If you want to get an impression on the theoretical problem associated with the simulation of a complex system like the brain, I suggest the talk:

    “On the Nature of Causality in Complex Systems, George F.R. Ellis”
    (video available on youtube)

    The challenge lies rather in the principle than in the knowledge and accuracy of the parameters.

    Like

  45. November 3, 2015 at 11:03 am

    We very likely do have a good bit of information about how the brain works and functions from its structure, the structure/function method. For every function there is structure which creates it, and for every structure of the brain there is function. Using this model we’re pretty easily able to tell what’s happened and where when a stroke or other sort of brain injury comes to us. The clinico-pathological correlation work.

    Brains do NOT use math to function. The mathematical output of brains comes from an area posterior to the left-hemisphere speech centers, which are, anatomically speaking, far, far older than the recently acquired math functions.

    The cortical brain is organized in a left-reversed-to-right and upside-down pattern. Why this is, and how it comes about, is explained very clearly in this:

    Comparison Process. Brain Organization: Explananda, Pt. 3

    A deeper understanding of the multiple higher-level functions of the brain’s cortex is better described by this:

    The Promised Land of the Undiscovered Country: Towards Universal Understanding

    On this basis, rather than saying “consciousness is”, we describe how the many functions of consciousness (speech/language, vision, sensation, movement, the emotions, and information processing) come about. Essentially, it’s a real-time input of internal/external data from the senses and memories, processed in the hundreds of thousands of cortical cell columns in the gyri by a repeating “comparison process”, which creates the signal detection, the recognitions, and the series of hierarchically arranged pattern recognitions. Knowledge and understanding are thus made up of relationships among events mediated and created by cortical processing.

    That’s largely what’s going on in the neocortex. And if we want to simulate brain functions, we have to simulate those brain outputs. Some progress has been made, but the higher-level input/output processing, where outputs are redirected into further input/output processing to create pattern recognitions and the higher abstractions, is the core of the current block in general AI. The above model shows what must be done to break through to the “Promised Land” of understanding understanding, which will eventually yield general AI.
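
    (Purely as an illustrative sketch of one way to read the “repeating comparison process” computationally: iterated template matching, where the pattern recognized at one layer is fed back in as the input to the next, more abstract layer. Every name, the cosine-similarity measure, and the threshold below are assumptions of this sketch, not anything specified in the comment.)

      import numpy as np

      def compare(signal, template):
          # One "comparison": cosine similarity between an input vector and a stored template.
          return float(np.dot(signal, template) /
                       (np.linalg.norm(signal) * np.linalg.norm(template) + 1e-12))

      def recognize(signal, templates, threshold=0.8):
          # Signal detection / recognition: name of the best-matching template, or None.
          scores = {name: compare(signal, t) for name, t in templates.items()}
          best = max(scores, key=scores.get)
          return best if scores[best] >= threshold else None

      def hierarchical_recognition(signal, layers):
          # "Outputs redirected to further input/output processing": the pattern recognized
          # at one layer becomes the input to the next, more abstract layer of templates.
          label = None
          for templates in layers:
              label = recognize(signal, templates)
              if label is None:
                  return None
              signal = templates[label]
          return label

    (Whether anything like this is what cortical columns actually do is, of course, exactly the open question.)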

    Herb Wiggins, MD; clinical neurosciences

  46. Confucius
    November 3, 2015 at 4:49 pm

    “People who say it cannot be done should not interrupt those who are doing it.” – Chinese proverb

  47. The Truth, the Whole Truth, and Nothing but the Truth
    November 4, 2015 at 2:20 pm

    Talk about hubris on the part of the grad student who published this “rant”. In fact, we do have “a f***ing clue” how to do all five of the things s/he lists as obstacles. The question is whether we have enough of an idea to make these models useful, or not. Markram and the EU are betting that we do.

    Let me say one more thing: I have yet to see a proposal from any other neuroscientist as ambitious as this one. What would this grad student do with 1 billion euros? A lot of this complaining, underneath it all, is terror that neuroscientist X’s or Y’s research will be made obsolete by what Markram is doing.

    If you think this is an idle fear, see what D. E. Shaw Research did to the entire cottage industry of molecular dynamics people. Disruption can happen in academic circles, too.

    • November 5, 2015 at 2:14 am

      I fully agree. A great many neuroscientists are actively attacking the project, even as it shows its first results. And their arguments are mostly rants about some biological detail, always seeming to lack an understanding of what a computer model and a simulation are supposed to be.

    • Dafydd
      November 5, 2015 at 4:30 am

      Absolutely. It might be useful to recall the way many computer scientists see themselves: “A computer scientist is someone with solutions in search of problems.” The solution can only be a solution to a more or less approximate model of a problem. If a problem is not yet well understood, then an approximate solution by simulation may still be better than nothing, and if the model is a good one it will lead to new ideas for new solutions. I am not in neuroscience but in speech and language technologies; in general terms, the situation is not too different.

  48. November 5, 2015 at 1:35 am

    The HBP is really synonymous with the HMP (Human Mindless Project). Without theoretical premises for the mind-body gap, it seems somewhat premature to attempt to simulate the brain, since our ultimate goal is to understand how the brain might simulate the mind.

    We already have a very good understanding of human brain structure, and excellent animal models better than anything 1 billion euros can hope to emulate on current understanding; what is lacking is how that structure’s function relates to a well-developed behavioural science.

    Funding of the LHC depended on well-developed theoretical predictions (e.g. the Higgs boson); no such theories exist for the HBP, which appears to be a flattering take on the science of the mind but devoid of the requisite paradigms, and thus an empty gesture, at least for the moment.

    • November 5, 2015 at 2:18 am

      What we lack is the ability to go inside the brain and accurately track the interactions between neurons following all sorts of stimuli. That is what a model is for, nothing else.

      • November 5, 2015 at 3:23 am

        Yes, I agree it is nothing but tracking interactions; it has no other meaningful intentions. The question remains: is that really what the mind sciences aspire to? If so, there appears to be a disconnect between the understanding of policy makers and the needs of the scientific community. Will this be an ill-fated exercise in superficial claims of prestige?

        Funding of the HBP looks a bit like a “me too” (or “me before”) following the ‘connectome’ initiative, but I am not sure how much more it will reveal than four-plus decades of tract tracing and neurophysiological recordings, and the already abundant publications of histopathology in clinical cases of disordered mentation.

        • November 5, 2015 at 4:51 am

          Perhaps we should stop looking at this from the neurophysiological perspective alone, as most critics seem to do. This is a computer science project: its aspiration is to understand the algorithms and the mathematical functions behind the neuron interactions. Deep Learning and similar machine learning techniques profit greatly from understanding the inner workings at this level, and that will ultimately enable much simpler but accurate models of the abstraction of input data and of intelligence.

        • November 5, 2015 at 6:33 am

          … but that’s precisely the point. It’s the nature of the algorithm the neural networks run that gives rise to mind, and that is what is missing from the equation. Simply replicating a system in a ‘vat’ and hoping to discover it is the naive aspect of the enterprise.

          The algorithm is acquired from a closed loop between the interacting organism and its environment, a process of discovery.

          So you must already have a prior understanding of the algorithm you wish to simulate, and of the predictions your model makes that you want to test. You can’t simply observe this; after all, neurons are just wiring, and complexity in itself may be necessary but not sufficient. What the algorithm derives from matters more than an exact simulation of neural properties, which hold or represent no specific content, i.e. the content needs to be specified.

          What does this mean? Well, it could allude to the algorithm derived from motor efference copy, for example, which embodies perceptual information and imbues it with an egocentric perspective.

          I personally believe it is a fallacy to treat a neural correlate as any explanation at all, or, in this case, to treat an accurate description of every connection as the equivalent of mind: it misses the entire history of the organism, and you can’t recreate that just by mirroring firing properties.

        • November 5, 2015 at 8:55 am

          You may be right. But I feel the chance that the firing itself tells us something about the nature of the contents and their formation is about as high as the chance that it tells us nothing. We simply don’t know yet. What if it’s enough? What if the emerging patterns model the behaviour that follows? There is no way to know other than to try.

        • November 5, 2015 at 3:31 pm

          It appears to be the latest overly optimistic version of AI. There is nothing preventing successful simulation in principle, of course; I am just arguing that we need better models. The current approach is rather hopeful and random, but not all intuitive approaches in science have to follow a logical course, I suppose. Look at alchemy as a predecessor of chemistry.

    • November 6, 2015 at 6:26 pm

      ‘Return of the alchemists and the immutability of an entangled quantum passion’

      Quantum theories of brain and mind appear to be open to the same criticisms as ‘connectome’-type neural correlate models of cognition. Whether they underlie aspects of neural network behaviour does not differentiate the process, for the purposes of the mind, any more than descriptions within any other isolated system do. In other words, the processes are ubiquitous and do not apply solely to principles for the emergence of mind. But the problems faced by a quantum account of the brain are even more profound. The transformative events of entangled Posner clusters may just as well occur in my fingernail. Here is an excerpt from the abstract I have submitted for TSC 2016 in Tucson, a bastion of “crazy-old-guys” with quantum syndrome, a potentially curable disorder of entangled ideas, and not just from Caltech:

      “The dominant theories of consciousness remain correlational. The two most popular currently are ‘neural correlates of consciousness’ and ‘quantum wave theories’. These rely on events as explanations of binding in neural populations that correspond to mental events. There is no attempt to bridge the gap between neural event and mental faculty, or to explain away the mind-body problem as a non-reductionist physical element akin to basic physics particles, perhaps like a photon or string theory. Indeed, other complex neural networks, such as those seen in the gut, can in principle possess many of the same properties described in connectionist or quantum theories without being sufficient for the emergence or description of conscious awareness.

      A modern interpretation is Chalmers’s idea of quantum wave collapse: everything remains in a state of possibilities, or superposition, until a definitive state is associated with a conscious instant. However, much of cognitive function and engagement with the world occurs subconsciously, which requires a definite state for successful interaction. This is the problem with correlational theories and with Chalmers’s theory of wave collapse: it invokes the need for M and M’ particle properties for conscious or non-conscious interaction with the world, with no obvious demonstration of what differentiates them.

      Another problem with this group of theories, which I call naïve, is that apart from demonstrating a form of binding, thus purporting to solve issues of simultaneity and the subjective feeling of instantaneity of conscious awareness, their predictive value for almost every observed psychological or behavioural measure has proved rather limited. Although neuroimaging studies have used the correlational premise in studying mental illness, they have not been useful in providing explanatory frameworks for disorders of thinking or perceiving. In fact, clinical descriptions have not been explained in terms of discovered pathophysiology. One outcome of the naïve premise in guiding neuroscience research is the ‘connectome’, an ambitious and expensive but well-marketed endeavour to describe all connections in the brain without any presumption of the guiding principles of a useful mind-body paradigm. Nor does this project clearly prespecify what it might mean for other disciplines of the mind, such as psychiatry…”

  49. November 6, 2015 at 6:36 pm

    Apologize for repeating myself

  50. November 15, 2015 at 11:00 pm

    “being funded by some miracle of bureaucratic gullibility”: there’s no miracle in finding bureaucrats who are extremely gullible, which is why government funding of research is the worst possible use of money.

Comments are closed.