I am a computational cognitive neuroscientist, and have worked at many levels. I find each kind of data and model useful to some extent, but I have to admit that the least useful, to my mind, are those at the detailed neural network level, like the ones discussed in this paper. Somewhat more useful are higher-level dynamic architecture models, and, at the highest level, cognitive models, which constrain the behavioral target we are trying to explain. I personally (as one can tell from my other posts here) find the dynamic brain development models to be the most compelling as overall models, but they are not particularly explanatory at the detailed level. Brain science is trying to do the hardest thing you can imagine: explain the most complex machine in the known universe. We persist, but no one entering this field should have very high expectations of near-term grand successes.
As a counterpoint, I am a computational neuroscientist who transitioned from working in human cognition to fruit fly motor control. Fruit fly neuroscience in the past decade has advanced tremendously. With the latest tools, we can record activity from specific genetically labeled neurons while stimulating others. We have identified specific groups of neurons to stimulate to get the fly to groom, walk, turn, and even walk backwards. The full fly brain has been scanned with similar techniques and the connectome is beginning to be mapped out (e.g. see this very recent post from Google AI research https://ai.googleblog.com/2020/01/releasing-drosophila-hemib... ).
I find that as we gain new tools to study the nervous system more specifically, both data and models of how neurons are organized at the circuit level become more important. To extend an analogy from the article: it's like trying to explore the dynamics of NYC without a map. For instance, it's hard to tell how or why people interact with Central Park if you don't even know where they live. The more precisely you can pin people down, the more it matters exactly where they live if you want to understand them.
Granted, the fly is much simpler than humans or even mice, and it will likely take decades and new tools for us to study humans in this way. However, when we get there, mapping out the brain connections will be crucial to make sense of it all.
To me, though, a better analogy is assuming that because one has a detailed map of the sewer system of NYC, we now understand where we're going in Berlin, or Barcelona, or Vancouver, or that that level of detail is necessary to understand the economics of poverty or pollution. That wouldn't work for city planning, and I don't know why people assume it works for neural architecture either.
Similar tricks can be played with the human brain, and have been for decades while people are undergoing brain surgery, and more recently with TMS. However, being able to elicit limb movements, bits of speech, or even emotional qualia is different from having a dynamic understanding of the brain in vivo in everyday life.
Certainly having an understanding of detailed circuitry is interesting and important, but to me there's a forest for the trees problem.
The poster above you is why I decided to just work as a software engineer instead. It was such a depressing and bleak field. Thank you for having another opinion!
To me, all these mapping and monitoring efforts always seem like trying to reverse engineer Microsoft Word's grammar checker by measuring electrical signals on various parts of a computer that somehow ended up in the 18th century.
I really hope we can work out how the whole thing works by looking at its parts, but I doubt that alone will bring the breakthrough. On the other hand, maybe there is one mechanism we have yet to discover that makes sense of all the parts. Then these efforts will form the groundwork for an explosive advance in understanding.
Much simpler by how many orders of magnitude? If complexity is surface-area driven, it rises more slowly than volume as size increases. If it's connectionist, is it an O(n^2) increase?
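For a rough sense of scale, a back-of-the-envelope sketch in Python (the neuron counts are ballpark public estimates I'm assuming, not numbers from this thread):

    import math

    # Rough public estimates (assumptions for illustration only)
    n_fly = 1.3e5      # ~130k neurons in an adult Drosophila brain
    n_human = 8.6e10   # ~86 billion neurons in a human brain

    linear = n_human / n_fly            # complexity ~ neuron count
    quadratic = (n_human / n_fly) ** 2  # complexity ~ potential pairwise connections, O(n^2)

    print(f"linear:    ~10^{math.log10(linear):.1f}")     # ~10^5.8
    print(f"quadratic: ~10^{math.log10(quadratic):.1f}")  # ~10^11.6

So "much simpler" is somewhere between roughly six and twelve orders of magnitude, depending on which scaling you believe.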
I had a similar reaction, working in an adjunct field but as someone who often works with neuroscientists.
My impression is that there are a lot of very oversimplified assumptions being made all the time in these fields that get glossed over in very arrogant (or naive?) ways. It's really astonishing to me, not just because of how oversimplified the assumptions are, but because researchers are then surprised when things don't work out.
To be fair, this is true of other fields as well. I'm more familiar with molecular genetics and genomics, and the same things happen there. There seems to be a certain hubris that goes unquestioned, and it always amazes me, the sci-fi fantasy narrative being accepted as fact.
Just to take one thing for example: there are huge anatomical differences between people's brains, even at the macroscopic level, that just get glossed over in discussion. Those fMRI images you see? They're often produced by aligning different individuals' scans to a common template, as if everyone's brain were a carbon copy of everyone else's. Now you're going to try to delineate a connectome at the neural level, as if there is one connectome at that level?
When will everyone learn? Where's the public skepticism?
Thanks. Comments like this from people with long and broad experience in a field are why I keep coming back to HN, because it helps me know where to look next if I want to verify or learn more about a new topic. What do you think we should specifically keep our eye on for new developments?
What we need is a Newtonian model of the brain. A model that is incomplete and "wrong", but useful and generative. While Newtonian physics may be "wrong", it is much easier to learn than quantum physics or relativity, etc.
Neuroscience usually focuses on precise details, but doesn't aim to tell big-picture stories. There are a few exceptions, however, like Karl Friston's free energy principle.
This seems like an odd take on Newton to me. What made his contributions important is that they were correct up to the precision we could measure for centuries.
We are nowhere near that for a subject like neuroscience.
There have been models adopted by scientists who at the time knew they were wrong and incomplete, for example ancient astronomy or medicine or logic. But their adherents tended to hold back science when new discoveries were made. So they are double-edged swords.
It's a great point. And I agree that precision in stars or mechanics seems much simpler than neuroscience -- but I do actually think that there will be some laws akin to F = ma.
Ok, maybe not quite to that level. But consider this paper, by great neuroscientists and cited over 1000 times, that models brain wave bands as harmonics, with adjacent band frequencies related by the golden mean. [1, see figure 4] (There's a toy sketch of the idea after the reference below.)
My money is on some synthesis of the many theories in oscillatory neurodynamics. Neural resonance, dissonance, harmonics, and entrainment... So many of these theories* have been borne out empirically, but there has hardly been an attempt at synthesis.
* Theories like "Communication through coherence", "binding by synchrony", "phase amplitude coupling" + "working memory", etc etc etc.
[1] Klimesch, W. (2012). Alpha-band oscillations, attention, and controlled access to stored information. Trends in cognitive sciences, 16(12), 606-617.
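To make the golden-mean spacing concrete, here's a toy sketch of my reading of [1] (the 2.5 Hz anchor is an assumed illustrative value, not a number from the paper):

    phi = (1 + 5 ** 0.5) / 2  # golden mean, ~1.618
    base = 2.5                # Hz; assumed delta-range anchor
    for k in range(7):
        print(f"band {k}: ~{base * phi ** k:.1f} Hz")
    # -> 2.5, 4.0, 6.5, 10.6, 17.1, 27.7, 44.9 Hz, which land near the
    # classical delta/theta/alpha/beta/gamma ranges. Since phi is the
    # "most irrational" ratio, neighboring bands' excitatory phases
    # coincide as rarely as possible, one proposed way to minimize cross-talk.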
I've noticed a trend of software engineers proposing that other fields create some abstraction that already exists. Last week someone proposed that a citizen regulatory body for aviation be created. Of course, that is the FAA.
Yes, in "you can't play 20 questions with nature", Alan newell advocated for computational "unified models" of cognition. But they were heavily symbolic. While useful, they don't really make it more understandable. (That's my experience with act-r, anyway! It's useful, but doesn't give big picture synthesis, like a Newtonian model might. It's a lot of little models.)
I'm not an expert at all. I read the book How Can the Human Mind Occur in the Physical Universe? a few years ago and was basically left with the impression that cognitive science is actually way more advanced than I had realised. I was very struck by the amount of definite results and falsifiable experiments (i.e. "actual science"). I think it's one of those things, like global poverty, where if you are 30 or older, what you learnt at school, and your whole rule-of-thumb intuition, is totally wrong.
None of them are BS, or, put more positively, none of these are complete, but each offers different, complexly interlocking partial models at different levels of description. This is how we understand any complex system. Indeed, this is how we understand anything. Indeed, this is what it means to “understand”: to use models to reason about something. Each of these, and all of brain and cognitive science, provides different complexly interacting models that we use to reason about the system.
That's a question I asked myself about 10 years ago, so I made one. Is it perfect? Probably not, but I have strived to make it as rational and rigorous as I could. http://behaviorallogic.com/foundations/
Don't we have that in the model that describes what the parts of the brain do?
Proven to be wrong when you watch people with serious brain injuries relearn skills, but shown to have value by how it can predict what tumors or injuries will do to someone's abilities.
Those models and explanations are not nearly as helpful as one might think. A lot of "neuroscience" explanations of everyday behaviour are mostly nonsense that sounds plausible and appealing (https://www.nature.com/articles/nrn3817, https://www.mitpressjournals.org/doi/abs/10.1162/jocn_a_0075...). In this context plausible doesn't even mean plausible to a neuroscientist, it means more like consistent with the nonsense a layperson has heard before.
What would be really helpful is a model that can add to things we already know. Saying "studying for an exam engages the X part of the brain and uses the Y neurotransmitter" adds literally nothing to your understanding of studying (you can find out much more about studying by talking to people that are good at it and who have done a lot of studying themselves), it's just taking an everyday activity and identifying the small but still vastly complex portion of the brain that is activated more than others. Imagine being told that a particular bug in a 1-billion-lines-of-code codebase is due to some code within a 10-million-lines-of-code portion of it: that's great but how helpful is it really?
In general I'm very skeptical of much neuroscience. (If an article ever says 'emerging neuroscience shows...' you can be assured what follows is almost certainly BS.) Yet I found the senior researcher in this article quite refreshing about the limitations of their research.
I would argue not since it doesn't do much for explaining how it works. We know how computers process information. We don't really have a good story for how the brain does it.
We need a story that explains how the brain uses rhythms (oscillations) for computation.
We know how to wire FETs, we know this chip is an i5 and that it's fast, and we know how to solder thick wires or loosen large screws, but no one knows how to build even a Z80 out of raw germanium. That is roughly how well neuroscience understands the brain, as I understand it.
I would equate our current understanding of the brain to the common knowledge that apples fall to the ground from trees; where else would they go? People knew that long before Newton.
Almost 2000 years before Newton, Archimedes wrote "Any object, totally or partially immersed in a fluid or liquid, is buoyed up by a force equal to the weight of the fluid displaced by the object."
What you're looking for is Elman et al.'s theory in Rethinking Innateness (https://mitpress.mit.edu/books/rethinking-innateness). It's more like Darwin than Newton, and is (to the point of another post off this thread) an early deep-learning-like theory of how the brain (or at least the cortex) becomes organized.
Have you read Smolensky's Harmonium paper? It's the first restricted Boltzmann machine -- and I believe Elman and Smolensky were colleagues with Hinton back at UCSD (along with Rumelhart and Don Norman, et al.).
The approach focused on presymbolic processing and tried to optimize harmony. Harmony was, interestingly, the first mathematical model of the mind (via the Pythagoreans/Platonists in ancient Greece). It has a lot going for it these days, too, for understanding oscillatory coupling in neural circuits. I learned recently that brain waves are harmonics (frequency doublings), which somehow I had missed before!
I know about Smolensky's theories (have probably read that paper, but don't remember it exactly; have def. read others by PS); PS and JE are definitely contemporaries, and work/have worked in similar areas. However, these two theories operate at different levels and time scales. The oscillatory coupling theories of PS et al are related to real time computations carried out by neural networks, whereas the trophic wave theories of JE et al. relate to how these networks come to be organized as they are. As per other posts, both are useful, and probably both true to some extent. Neither is directly applicable yet in a way that makes contact with the cognitive level.
One thing that shocked me was that no one ever tried to connect "cognitive dissonance", one of the most successful social psychology theories of all time, to actual dissonance in neural oscillations.
Consonance results in greater periodicity, meaning the action potentials are more likely to line up, whereas dissonance has less periodicity, so action potentials don't line up. It feels better (there is pleasure) when the action potentials align because of Hebbian reinforcement (synchronous firing). This assumes that reinforcement would be pleasurable, but pleasure is the main reinforcer at the cognitive level.
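Here's a toy simulation of that alignment claim (the frequencies and coincidence window are illustrative assumptions, nothing more):

    import numpy as np

    def coincidences(f1, f2, duration=10.0, window=0.0005):
        # idealized, perfectly periodic spike trains at f1 and f2 Hz
        t1 = np.arange(0, duration, 1.0 / f1)
        t2 = np.arange(0, duration, 1.0 / f2)
        # count spikes in train 1 with a partner in train 2 within `window` seconds
        return int(np.sum(np.min(np.abs(t1[:, None] - t2[None, :]), axis=1) < window))

    f = 40.0  # Hz, gamma-range carrier (assumed)
    print("consonant 3:2 ratio:", coincidences(f, f * 1.5))       # many alignments
    print("dissonant phi ratio:", coincidences(f, f * 1.618034))  # few alignments

With a consonant 3:2 ratio the trains realign every few cycles; with an irrational ratio near phi they almost never do, which is exactly the periodicity difference I mean.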
Deep learning is as close to being a "Newtonian theory" of the brain as it gets -- deep learning abstracts away a lot of the complexity of neural systems (e.g., a simple artificial neuron vs a highly complex biological one) while maintaining a number of essential characteristics: massively parallel computation, error tolerance, graceful degradation, distributed representations, information stored in slowly-changing synapses, and, most importantly, a simple, local, and powerful biologically-plausible-if-you-squint-hard-enough learning rule.
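To make "simple neuron + simple local learning rule" concrete, here's a minimal sketch: a single sigmoid unit trained with the delta rule on a toy linearly separable problem (illustrative only, not anyone's brain model):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))        # 200 examples, 10 inputs
    w_true = rng.normal(size=10)
    y = (X @ w_true > 0).astype(float)    # targets from a hidden linear rule

    w = np.zeros(10)
    for _ in range(50):                   # a few passes over the data
        for x, t in zip(X, y):
            p = 1.0 / (1.0 + np.exp(-x @ w))  # the neuron's output
            w += 0.1 * (t - p) * x            # local update: error times input

    print("accuracy:", np.mean(((X @ w) > 0) == y))  # ~1.0 on this toy problem

The update only uses quantities local to the neuron (its input and its error), which is the property that makes the abstraction at least squint-level biologically plausible.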
The important question to ask is: is the deep learning abstraction any good?
There's a very strong case to be made that the answer is yes: deep learning systems can perform many (of course, not all, at least not yet) tasks that involve perception (computer vision/speech recognition), motor control (the recent OpenAI robot), language understanding (machine translation/BERT/GPT), planning (AlphaGo/Dota/the DeepMind protein folding), and even some symbolic reasoning (the recent work from Facebook on symbolic integration https://ai.facebook.com/blog/using-neural-networks-to-solve-...). Some of these tasks are performed at such a high level that they become commercially useful, and in some cases surpass "human level".
So here we have a "model family" -- deep learning -- with a set of principles so simple that it can be studied with intense mathematical rigor (for example, https://arxiv.org/pdf/1904.11955.pdf or https://papers.nips.cc/paper/9030-which-algorithmic-choices-...), and that produces many of the behaviors we want out of brains (and not just behavioral: see, e.g., https://arxiv.org/abs/1805.10734: " Interestingly, recent work has shown that deep convolutional neural networks (CNNs) trained on large-scale image recognition tasks can serve as strikingly good models for predicting the responses of neurons in visual cortex to visual stimuli, suggesting that analogies between artificial and biological neural networks may be more than superficial." -- this is just one of many papers that show that even under the hood, trained deep learning systems exhibit many properties of biological neural networks).
These reasons strongly suggest (imho) that deep learning is in fact the Newtonian theory of neuroscience. More strongly, no other theory comes remotely close in its simplicity and explanatory power.
Everybody in the history of humans has said the latest technology is the best model for how a brain works. There used to be a piston model for the brain.
Self-driving cars can't leave an enclosed environment and might never do so safely.
Richard Dawkins spoke very highly of the brain's ability to do some kind of natural calculus for the sake of tracking a ball in flight, but most animals run on simple tricks and reference points.
Deep learning might be the "goodthink" for the next ten years, but some of us are not going to let go of the transcendent truth that the brain is not defined by what we think it is. I see limited reason to see deep learning as more likely than some emergent behaviour arising from a vast number of simple rules, like animals flocking together in a boids sim.
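For what it's worth, that boids intuition fits in a few lines (a minimal sketch; every constant here is an arbitrary illustrative choice):

    import numpy as np

    rng = np.random.default_rng(1)
    pos = rng.uniform(0, 100, size=(50, 2))   # 50 agents in a wrap-around world
    vel = rng.normal(size=(50, 2))

    def step(pos, vel):
        new_vel = vel.copy()
        for i in range(len(pos)):
            d = np.linalg.norm(pos - pos[i], axis=1)
            near = (d > 0) & (d < 15)                               # perception radius
            close = (d > 0) & (d < 4)                               # personal space
            if near.any():
                new_vel[i] += 0.03 * (pos[near].mean(0) - pos[i])   # cohesion
                new_vel[i] += 0.05 * (vel[near].mean(0) - vel[i])   # alignment
            if close.any():
                new_vel[i] -= 0.10 * (pos[close] - pos[i]).mean(0)  # separation
        speed = np.linalg.norm(new_vel, axis=1, keepdims=True)
        new_vel *= np.clip(speed, 0, 3) / np.maximum(speed, 1e-9)   # cap speed
        return (pos + 0.5 * new_vel) % 100, new_vel

    for _ in range(500):
        pos, vel = step(pos, vel)
    unit = vel / np.linalg.norm(vel, axis=1, keepdims=True)
    print("alignment (0 = random, 1 = one flock):", np.linalg.norm(unit.mean(0)))

Three local rules, no global controller, and coherent flocks emerge. Whether cognition is like that is exactly the open question.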
> Everybody in the history of humans has said the latest technology is the best model for how a brain works. There used to be a piston model for the brain.
Is this the same mistake as in "The Relativity of Wrong" [1]?
> people have thought they understood the Universe at last, and in every century they were proven to be wrong. It follows that the one thing we can say about our modern "knowledge" is that it is wrong.
> [...]
> My answer to him was, "John, when people thought the Earth was flat, they were wrong. When people thought the Earth was spherical, they were wrong. But if you think that thinking the Earth is spherical is just as wrong as thinking the Earth is flat, then your view is wronger than both of them put together."
Modelling the brain as a bunch of pistons or as a complicated machine or clockwork thing is a lot better than as a magical clay golem or opaque soul. Modelling it as a computer is even better than that. Not a computer in the sense of an x86 desktop exactly, of course, but the concept of computation is clearly fundamental to understanding the system. Similarly, the brain is not ResNet but concepts like backpropagation are probably useful.
So, sure, maybe people have been using the latest fad to explain the brain forever. But that's only bad to the extent that the latest fad is getting further away instead of closer.
Degrees of wrongness along history would make sense in this discussion if computing were the only path for understanding the brain.
Ancients used to think that thinking happened in the gut and recently the microbiome pathway for describing thought has re-emerged. Both the gut pathway and the computational stream could be wrong.
The outcome of seeing the brain as a computational device will run out of juice like revelation has.
There's one minor difference between past models of the brain and deep learning: deep learning can actually perform difficult and useful cognitive tasks that cannot be accomplished by any other means.
Computers beat humans at chess by grinding out the answers, something previously unattainable. Computers can now beat humans at chess, Go, and StarCraft with curve-fitting ML programs.
The problem is that for all this power, people still play chess, Go, and StarCraft, and we still don't know how their brains work.
Translation: your level of familiarity with the material only allows you to “see” it as the separate details.
Daniel Goleman’s “Emotional Intelligence” offers a useful everyday model for reasoning about it.
They exist, but realize they’re going to “feel” different from physics models. With general physics knowledge, one can literally implement tests and build on them independently.
With NS, one can read a model, but without imaging machines and chemical testing... shrug... it’s harder to build a muscle memory.
Which neuroscience research argues is super important for connecting details into a composite one can intuit around competently.
Reading AND writing are important to learning English. Same with everything else.
TLDR the practical value of NS is already known: practice learning as we do, don’t be a dick.
tbf it's debatable whether there's a lot to learn from C. elegans. Simple animals have been studied for decades, from Aplysia to the mouse. But those are not behaviours that are interesting when attempting to learn more about the human brain. The Allen Institute's connectome project is more relevant to mammals, even if it's only a tiny volume of the mouse cortex, in order to mildly constrain models of brain function. Even if we had the whole brain, it's too large to simulate. These data help our understanding, and we're lucky we have amazing tools to probe brains at this moment. But we need more and better theories to put them to good use.
I don't even know where to begin rebutting your argument. Most of the stuff we actually know, actionable knowledge that has withstood the test of time, comes from small animals, including C. elegans, Drosophila, Aplysia, and others. This is because a lot of the stuff is genetically conserved. Even things that are considered specific to mammals, such as the neuromodulatory systems for dopamine and serotonin, are highly conserved. For example, worms, flies, etc. all get hooked on cocaine through what's thought to be a very similar pathway. Pathways governing such "complex" behaviors as learning, memory, exploration, exploitation, etc. all seem highly conserved, which means that a lot can be learned. Source: I'm a worm neuroscientist working on mathematical aspects of neuromodulation of behavior.
C. elegans has very primitive, simple behaviours. It's not really possible to get something useful out of it about either our cognitive functions or our brain disorders. The things that involve single-cell pathologies (e.g. plasticity) are already studied in vitro in mammalian cells. There are probably many cognitive phenomena that only become apparent at large brain sizes, so I'm not sure this method scales up.
In what situation WOULD you be able to expect to extract "something useful" about "our cognitive functions or our brain disorders"? It seems silly to think we could learn anything about such complex things without understanding something simpler first; hence the approach of validating models of simpler structures.
From the article, a quote that is enlightening:
“if I asked, ‘Do you understand New York City?’ you would probably respond, ‘What do you mean?’ There’s all this complexity. If you can’t understand New York City, it’s not because you can’t get access to the data. It’s just there’s so much going on at the same time. That’s what a human brain is. It’s millions of things happening simultaneously among different types of cells, neuromodulators, genetic components, things from the outside. There’s no point when you can suddenly say, ‘I now understand the brain,’ just as you wouldn’t say, ‘I now get New York City.’ ”
True, but the equating of a brain with a city is largely specious, especially when trying to understand cognition. Yes a brain regulates many low level parasympathetic processes. But it's the decision making and memory systems that we most want to understand and replicate in silico. And the coordination of a city of mostly independent self-interested humans is a poor analogue for the biological bases for a coherent thought process, unless it's only the medulla or pons that we hope to model.
There is a good chance that complete brain mapping will be similar to whole genome sequencing. The result will be interesting but only answer a limited subset of questions.
I'm confused. This article doesn't say anything. It makes no points and has no insight. "There's a lot of data in neuroscience?" Is that the message? An unusual number of Nautilus articles frontpage HN like this one, where there doesn't seem to be any value in the article itself. What is going on?
It's that the author is undergoing a 'crisis', not that the field is. The title is clever, not descriptive.
More broadly, there is a crisis. Our statistical methods/understandings are not working out in these large N-dimensional data sets, at least for the researchers that were raised on excel and not numpy.
Aside: I'm surprised that the FAANGs haven't revolutionized statistics yet. When you have 'phase changes' where a LOT more data becomes available, you get to see very low probability events. It's happened in psych, in bio, in physics most famously, in politics, in economics, etc. We have a LOT more data now for statistical use, but it's still just Poisson distributions and t-tests. What gives?
It might vary from field to field, but in genetics Excel is a complete nonstarter, and even numpy is not going to work on sufficiently large datasets. The project I work on, http://hail.is, was started to help deal with this.
It doesn't look like that project has been discussed on HN before. One of you should definitely post it! If you email hn@ycombinator.com we can give you some tips on how to do that, and possibly also put the submission in the second-chance pool (described at https://news.ycombinator.com/item?id=11662380).
I used to think corporations were where things get invented. It seemed that way to me, looking at Microsoft and Netscape and Apple and SGI growing up. And that in academia, people only made toys, and only got to the earliest stage of invention: ideas partly evaluated.
But my experience in software companies has been that in a corporation you are a slave to convention. Because even if you innovate in your domain, that innovation will be invisible to the rest of the corporation. Because when they look at it, they will evaluate it by asking which conventions it follows. But inventions by definition don’t follow conventions. What people like is to see conventions followed, and the results of that. It reads as progress.
Now I think it’s primarily artists who invent. But it’s hard to do much production as a solo artist. So those who succeed are artists who split their time between doing their work in secret and publicly playing the game of performing conventions in a corporate body. If you split your time that way, you will be praised for applying convention, and then periodically your secret art will appear as a magic trick and delight people who are already happy with you.
Without the convention theater, the art will just strike them as confirmation that you’re out of touch with the organization.
The same is true in academia really. It’s just a different set of convention theater, around publishing rather than sales.
Like, with QM, we had the time and equipment to start pointing radium at some gold foil. We came up with some surprising results. With economics, we finally got enough data in to say that people really are not rational at all. With biology, we finally got the time and equipment to poke audio amplifiers into rabbit brains and some strange stuff happened. Etc.
Similarly, with all this 'big data', I would have guessed that we would have come up with something kinda like that. Finally, we have all this stuff coming in. It's all fairly well captured, fairly well correlated, and fairly valuable so you can pay people to just screw around in the labs and see what drops out.
One of those stands out as different from the others: statistics. It's a branch of math. It's fundamental. We've proven its properties to be comprehensive, such that any other formulation with the same basic axioms would do the same thing.
It's not like when we got a lot of experience gathering physical data we suddenly realized we were doing integrals and derivatives wrong.
Math is a branch of logic, sure, but the computer has already acted like a microscope for math. We can more easily check our conjectures and see if they hold. Also, we've been able to communicate better, which has helped math as well. Even though math is 'above' us little humans, the day-to-day work of mathematicians has been helped a lot, even in small ways like not having to lug your mortal coil through the library stacks.
So, with all this big data, I'd have guessed that we could have seen more edge cases with statistics. Little areas where someone said 'huh that's funny'.
There has always been high level statistics and theoretical modeling going on in biology. Biology is a vast field, encompassing field work to lab work to clinical work to computer science to quantum physics and theoretical math in the context of evolution and population genetics.
Namely, that the field seems to be doubling down on increasingly hard computational problems, with diminishing returns, rather than generating wholly new avenues of exploration or insight.
Of course, I've no idea if this is a valid complaint about either neuroscience or theoretical physics.
The message is there are some potential new workflows. Revelation is running out of steam, processing huge quantities of data and relationships between nodes is the new hotness.
To identify what the author feels is missing in neuroscience: in order to understand something, you need to figure out how to describe two things about it: (1) what its state is at any point in time, and (2) how that state evolves in time. Connectomics gives you the beginnings of a solution to (1), but it doesn't go the whole way. There's a fundamental misunderstanding that you can collect exabytes of data and glean understanding from it just because of how much you had to sweat getting the storage and collecting it. That's not how it works; the data needs to have structure too. I wish more biologists / neuroscientists understood this.
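To put that (1)/(2) split in code, here's a toy rate network where the "connectome" (the weight matrix) is fixed, yet behavior differs qualitatively depending on a dynamical parameter the wiring diagram doesn't specify (all values are assumptions for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 50
    W = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))  # the fixed "connectome"

    def simulate(gain, steps=200):
        x = rng.normal(size=n)          # (1) the state at a point in time
        for _ in range(steps):
            x = np.tanh(gain * W @ x)   # (2) how the state evolves
        return float(np.std(x))

    print("low gain: ", simulate(0.5))  # activity decays toward zero
    print("high gain:", simulate(2.0))  # sustained, irregular activity

Same structure, completely different dynamics: knowing W answers the wiring part of (1) but says almost nothing about (2).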
I think they understand this. This is demonstrated by there being little real understanding of ~300 neuron worm connectomes.
The point of the article is discussing how mapping the full human connectome is only going to be a small next step towards understanding what's actually going on. That doesn't mean it's not worth doing.
This article is over-rated by those outside the field of neuroscience. While it brings up a few good points, it fails to acknowledge the sophistication of our neuroscience and behavioral techniques, and the value of converging evidence across levels of description. Also, its relatedness to the present article is tenuous at best.
Mostly agreed. In my eyes, the takeaway from this piece is not that neuroscience is futile and that we should give up, but rather that we now should redouble efforts on analysis methods to make sense of the large, complex datasets that we are on the cusp of generating.
I'd have preferred to see follow-up experiments conducted on the microcircuit. If the test bed were made into a platform to ease set-up and experimentation by other researchers, maybe the paper would be built upon. Specifically, I'd like to see a series of behavioral neuroscience papers that looked at specific responses to user behavior, differences in task demand, etc., then coupled those with inferences from knowledge about "neurons" (i.e. transistors, registers, etc.) that one would gather from the "harder" neurosciences.
I don't care how often I get down voted for making the above comment in response to posts about this article. I am a neuroscientist and I will defend my field from overrated, simplistic criticisms that happen to appeal to the HN crowd's sensibilities.
Do you have any suggestions for what the average layperson could read to get a better understanding of contemporary neuroscience? Any articles or books you'd recommend?
That is a bit difficult because the field is very broad, and I don't tend to read popular science books. Mostly, I'd look up recent review articles on Google Scholar...
As someone who did his dissertation on episodic memory development, I see the wry humor, but I think your comment is a non sequitur. It is not that we have figured it all out, but rather: can we glean knowledge about cognition (e.g. episodic memory) and the brain from neuroscience and behavioral techniques? I think the answer is clearly yes. In terms of memory, we understand, for example, that the hippocampus is critical to some forms of memory, most critically memory that requires binding arbitrary information/percept representations into a more cohesive event representation, and that the hippocampus may achieve this through computational properties afforded by microcircuits in its subfields (e.g. pattern completion from heavy recurrence in CA3, and pattern separation of information in the dentate gyrus), and so on.
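For the software folks here, that CA3 pattern-completion idea has a classic toy counterpart in the Hopfield network: store patterns in recurrent weights, then recover a whole pattern from a corrupted cue. A minimal sketch (a toy, not a hippocampus model; all sizes are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 200, 3
    patterns = rng.choice([-1, 1], size=(k, n))   # k stored "memories"
    W = (patterns.T @ patterns) / n               # Hebbian storage
    np.fill_diagonal(W, 0)

    cue = patterns[0].copy()
    flipped = rng.choice(n, size=60, replace=False)
    cue[flipped] *= -1                            # corrupt 30% of the cue

    x = cue
    for _ in range(10):                           # recurrent settling
        x = np.where(W @ x >= 0, 1, -1)

    print("match with stored memory:", (x == patterns[0]).mean())  # ~1.0

The recurrent dynamics pull the partial cue back to the nearest stored attractor, which is the computational idea behind completing a whole event memory from a fragment.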
That's a bit silly, because neuroscience tools are built for biological brains. Electrical engineering tools are used on microprocessors, as they should be.
OP and TFA meant "Would a similar method of observing a complex, connected system of systems work on a less complex example like a microchip if we had just up and found one?"
I highly recommend this lecture by Jeff Lichtman, where he describes the machine they've built to slice the brain and the software they have written to visualize and make sense of this vast amount of data:
Generally too purple for its own good, but a very interesting read! The Borges reference (and the C. elegans conundrum) makes me, as a lay reader, really appreciate how little we actually know about the endgame for all this data.
But there is such elegance in "rudimentary" DNNs giving us the ability to assemble this stuff at all.
The author should go talk to some astrophysicists. They have a similar problem -- humans are unlikely to ever understand how the entirety of the cosmos works, but it's still interesting to learn about the small bits.
"We don’t understand how their interactions contribute to behavior, perception, or memory. Technology has made it easy for us to gather behemoth datasets, but I’m not sure understanding the brain has kept pace with the size of the datasets."
Exemplifies:
- Data is not information.
- Information is not knowledge.
- Knowledge is not understanding.
We've not even left the gate of the first tier. Both exciting and intimidating, but mostly humbling. Or should be.
I believe we are at the stage where we think we know how the city works because we've mapped it, but there are sewers, pipelines, everything underneath that we haven't really dug into. There is a lot more inside a neuron that can be mapped. Let's just say your map isn't detailed enough.
If you've seen some of the high-resolution videos of neural activity captured from even simple fish, the slightest motor movements activate hundreds of thousands of cells in a chaotic pattern. Neural circuitry is not neatly laid out like a silicon chip; it's a forest of interconnectivity that resists analysis even with extremely detailed visualization and data capture.
I think it’s interesting that people think we’ll be able to make something that does more in a smaller space.
As if there was something other than the laws of physics preventing natural selection from testing smaller structures.
Or that there’s something (other than the demands of the computation itself) constraining the architectures that were tested through natural selection.
Constraints:
- No temperatures hot enough to melt silicon or metal.
- Everything must unfold from a single cell during the process of reproduction, no factories allowed.
- Everything must be made out of what it can find to eat, no mining allowed unless you're a subterranean bacteria.
> something (other than the demands of the computation itself) constraining the architectures that were tested through natural selection.
There is. Anything that can't be reached by a small number of genetically small steps that either enhance or at least maintain fitness will never be reached.
Right. Every tool invented by humans is better at achieving a certain end than our inborn equivalent. We don't have to faithfully model nature to surpass it.
Likewise manmade models based in mathematics and statistics have long proved more accurate in predicting outcomes than the human mind, even though we know the mind doesn't employ math.
Human-made machines have made it possible for elephants to fly. Nature never will.
I love those videos. We will be getting a lot more soon, with the new voltage-sensitive microscopy.
I think the best story for understanding circuitry in vertebrates comes from the work on hierarchical pattern generators -- where much of the work was done on lampreys.
Grillner, S. (2006). Biological pattern generation: the cellular and computational logic of networks in motion. Neuron, 52(5), 751-766.
The organization of human thought to achieve that level of understanding might be an issue. From the original article: 'the data footprint of all books ever written come out to less than 100 terabytes, or 0.005 percent of a mouse brain'
That formulation makes the goal seem remote but not implausible. We have the technology to index and analyse 10 terabytes, certainly, and even gmail apparently is an exabyte ... [0]. A mouse brain, given your premise, is about 2 exabytes (100 TB / 0.005% = 2,000,000 TB). Given even linear progress, this class of problem is not insurmountable.
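Redoing that percentage explicitly (numbers from the article's quote):

    books_tb = 100            # "all books ever written": < 100 TB
    fraction = 0.005 / 100    # "0.005 percent" as a fraction
    print(books_tb / fraction, "TB")  # 2,000,000 TB = 2,000 PB = 2 exabytes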
That is a cute aphorism, but there is no empirical evidence one way or the other. If you are thinking that it is obviously too much information for a brain to know itself, note that most of what we understand about complex systems, such as the weather, is understood using only a miniscule fraction of the total information present. That is the power of abstraction.
Physicists don't understand gravity... neuroscientists don't understand the mind... maybe the universe is a giant brain and the stars are neurons, the big bang was conception, and we are bacterial growth. Better than all current theories.
I am planning to solve this =) I just cracked the inner workings of ANNs (future Show HN) and am going to read a book on computational neuroscience tomorrow.
Not to be discouraging, but if you're just starting to learn about artificial neural networks, you have a long way to go...
That said, in the few years of experience I've had with ANNs, they really do seem like an intuitive analog for human learning, at a high level. And thinking about training problems in this way, approximately "what kind of training data would I need to train a small child to do this" can be more helpful than one might have otherwise expected.
I think we're just a handful of major breakthroughs away from true AI, assuming compute and memory continue to scale. Certainly within 100 years.