The empty brain
Your brain does not process information, retrieve knowledge or store memories. In short: your brain is not a computer

Robert Epstein is a senior research psychologist at the American Institute for Behavioral Research and Technology in California. He is the author of 15 books, and the former editor-in-chief of Psychology Today.
No matter how hard they try, brain scientists and cognitive psychologists will never find a copy of Beethoven’s 5th Symphony in the brain – or copies of words, pictures, grammatical rules or any other kinds of environmental stimuli. The human brain isn’t really empty, of course. But it does not contain most of the things people think it does – not even simple things such as ‘memories’.


Our shoddy thinking about the brain has deep historical roots, but the invention of computers in the 1940s got us especially confused. For more than half a century now, psychologists, linguists, neuroscientists and other experts on human behaviour have been asserting that the human brain works like a computer.


To see how vacuous this idea is, consider the brains of babies. Thanks to evolution, human neonates, like the newborns of all other mammalian species, enter the world prepared to interact with it effectively. A baby’s vision is blurry, but it pays special attention to faces, and is quickly able to identify its mother’s. It prefers the sound of voices to non-speech sounds, and can distinguish one basic speech sound from another. We are, without doubt, built to make social connections.


A healthy newborn is also equipped with more than a dozen reflexes – ready-made reactions to certain stimuli that are important for its survival. It turns its head in the direction of something that brushes its cheek and then sucks whatever enters its mouth. It holds its breath when submerged in water. It grasps things placed in its hands so strongly it can nearly support its own weight. Perhaps most important, newborns come equipped with powerful learning mechanisms that allow them to change rapidly so they can interact increasingly effectively with their world, even if that world is unlike the one their distant ancestors faced.


Senses, reflexes and learning mechanisms – this is what we start with, and it is quite a lot, when you think about it. If we lacked any of these capabilities at birth, we would probably have trouble surviving.


But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently. Not only are we not born with such things, we also don’t develop them – ever.


We don’t store words or the rules that tell us how to manipulate them. We don’t create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation into a long-term memory device. We don’t retrieve information or images or words from memory registers. Computers do all of these things, but organisms do not.


Computers, quite literally, process information – numbers, letters, words, formulas, images. The information first has to be encoded into a format computers can use, which means patterns of ones and zeroes (‘bits’) organised into small chunks (‘bytes’). On my computer, each byte contains 8 bits, and a certain pattern of those bits stands for the letter d, another for the letter o, and another for the letter g. Side by side, those three bytes form the word dog. One single image – say, the photograph of my cat Henry on my desktop – is represented by a very specific pattern of a million of these bytes (‘one megabyte’), surrounded by some special characters that tell the computer to expect an image, not a word.
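To make the encoding concrete, here is a minimal sketch in Python. The bit patterns it prints are standard ASCII, not anything peculiar to one machine:

    # Each character maps to one 8-bit byte; these are the standard ASCII patterns.
    for ch in "dog":
        print(ch, format(ord(ch), "08b"))
    # d 01100100
    # o 01101111
    # g 01100111

    word = bytes("dog", "ascii")   # the three bytes, side by side
    print(word)                    # b'dog'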

Computers, quite literally, move these patterns from place to place in different physical storage areas etched into electronic components. Sometimes they also copy the patterns, and sometimes they transform them in various ways – say, when we are correcting errors in a manuscript or when we are touching up a photograph. The rules computers follow for moving, copying and operating on these arrays of data are also stored inside the computer. Together, a set of rules is called a ‘program’ or an ‘algorithm’. A group of algorithms that work together to help us do something (like buy stocks or find a date online) is called an ‘application’ – what most people now call an ‘app’.
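The manuscript example can be sketched the same way: a toy ‘rule’ (my own illustration, not drawn from the essay) that copies a stored pattern verbatim and then operates on one byte of it:

    # Copy a stored pattern exactly, then transform it: fix a one-letter typo.
    manuscript = bytearray(b"the quick brown fax")
    backup = bytes(manuscript)                          # an exact, bit-for-bit copy
    manuscript[manuscript.find(b"fax") + 1] = ord("o")  # transform: fax -> fox
    print(backup.decode(), "->", manuscript.decode())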


Forgive me for this introduction to computing, but I need to be clear: computers really do operate on symbolic representations of the world. They really store and retrieve. They really process. They really have physical memories. They really are guided in everything they do, without exception, by algorithms.


Humans, on the other hand, do not – never did, never will. Given this reality, why do so many scientists talk about our mental life as if we were computers?



In his book In Our Own Image (2015), the artificial intelligence expert George Zarkadakis describes six different metaphors people have employed over the past 2,000 years to try to explain human intelligence.


In the earliest one, eventually preserved in the Bible, humans were formed from clay or dirt, which an intelligent god then infused with its spirit. That spirit ‘explained’ our intelligence – grammatically, at least.


The invention of hydraulic engineering in the 3rd century BCE led to the popularity of a hydraulic model of human intelligence, the idea that the flow of different fluids in the body – the ‘humours’ – accounted for both our physical and mental functioning. The hydraulic metaphor persisted for more than 1,600 years, handicapping medical practice all the while.


By the 1500s, automata powered by springs and gears had been devised, eventually inspiring leading thinkers such as René Descartes to assert that humans are complex machines. In the 1600s, the British philosopher Thomas Hobbes suggested that thinking arose from small mechanical motions in the brain. By the 1700s, discoveries about electricity and chemistry led to new theories of human intelligence – again, largely metaphorical in nature. In the mid-1800s, inspired by recent advances in communications, the German physicist Hermann von Helmholtz compared the brain to a telegraph.




Each metaphor reflected the most advanced thinking of the era that spawned it. Predictably, just a few years after the dawn of computer technology in the 1940s, the brain was said to operate like a computer, with the role of physical hardware played by the brain itself and our thoughts serving as software. The landmark event that launched what is now broadly called ‘cognitive science’ was the publication of Language and Communication (1951) by the psychologist George Miller. Miller proposed that the mental world could be studied rigorously using concepts from information theory, computation and linguistics.


This kind of thinking was taken to its ultimate expression in the short book The Computer and the Brain (1958), in which the mathematician John von Neumann stated flatly that the function of the human nervous system is ‘prima facie digital’. Although he acknowledged that little was actually known about the role the brain played in human reasoning and memory, he drew parallel after parallel between the components of the computing machines of the day and the components of the human brain.


Propelled by subsequent advances in both computer technology and brain research, an ambitious multidisciplinary effort to understand human intelligence gradually developed, firmly rooted in the idea that humans are, like computers, information processors. This effort now involves thousands of researchers, consumes billions of dollars in funding, and has generated a vast literature consisting of both technical and mainstream articles and books. Ray Kurzweil’s book How to Create a Mind: The Secret of Human Thought Revealed (2013) exemplifies this perspective, speculating about the ‘algorithms’ of the brain, how the brain ‘processes data’, and even how it superficially resembles integrated circuits in its structure.


The information processing (IP) metaphor of human intelligence now dominates human thinking, both on the street and in the sciences. There is virtually no form of discourse about intelligent human behaviour that proceeds without employing this metaphor, just as no form of discourse about intelligent human behaviour could proceed in certain eras and cultures without reference to a spirit or deity. The validity of the IP metaphor in today’s world is generally assumed without question.


But the IP metaphor is, after all, just another metaphor – a story we tell to make sense of something we don’t actually understand. And like all the metaphors that preceded it, it will certainly be cast aside at some point – either replaced by another metaphor or, in the end, replaced by actual knowledge.


Just over a year ago, on a visit to one of the world’s most prestigious research institutes, I challenged researchers there to account for intelligent human behaviour without reference to any aspect of the IP metaphor. They couldn’t do it, and when I politely raised the issue in subsequent email communications, they still had nothing to offer months later. They saw the problem. They didn’t dismiss the challenge as trivial. But they couldn’t offer an alternative. In other words, the IP metaphor is ‘sticky’. It encumbers our thinking with language and ideas that are so powerful we have trouble thinking around them.


The faulty logic of the IP metaphor is easy enough to state. It is based on a faulty syllogism – one with two reasonable premises and a faulty conclusion. Reasonable premise #1: all computers are capable of behaving intelligently. Reasonable premise #2: all computers are information processors. Faulty conclusion: all entities that are capable of behaving intelligently are information processors.
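Rendered in first-order terms (my own formalisation, not the author’s), the slip is plain: both premises constrain only computers, so nothing follows about intelligent entities in general.

    % C(x): x is a computer;  I(x): x behaves intelligently;  P(x): x is an information processor
    \forall x\,\bigl(C(x) \rightarrow I(x)\bigr)  \quad\text{(premise 1)}
    \forall x\,\bigl(C(x) \rightarrow P(x)\bigr)  \quad\text{(premise 2)}
    \therefore\ \forall x\,\bigl(I(x) \rightarrow P(x)\bigr) \quad\text{(does not follow: any intelligent non-computer is a counterexample)}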


Setting aside the formal language, the idea that humans must be information processors just because computers are information processors is just plain silly, and when, some day, the IP metaphor is finally abandoned, it will almost certainly be seen that way by historians, just as we now view the hydraulic and mechanical metaphors to be silly.


If the IP metaphor is so silly, why is it so sticky? What is stopping us from brushing it aside, just as we might brush aside a branch that was blocking our path? Is there a way to understand human intelligence without leaning on a flimsy intellectual crutch? And what price have we paid for leaning so heavily on this particular crutch for so long? The IP metaphor, after all, has been guiding the writing and thinking of a large number of researchers in multiple fields for decades. At what cost?

In a classroom exercise I have conducted many times over the years, I begin by recruiting a student to draw a detailed picture of a dollar bill – ‘as detailed as possible’, I say – on the blackboard in front of the room. When the student has finished, I cover the drawing with a sheet of paper, remove a dollar bill from my wallet, tape it to the board, and ask the student to repeat the task. When he or she is done, I remove the cover from the first drawing, and the class comments on the differences.


Because you might never have seen a demonstration like this, or because you might have trouble imagining the outcome, I have asked Jinny Hyun, one of the student interns at the institute where I conduct my research, to make the two drawings. Here is her drawing ‘from memory’ (notice the metaphor):



                               
[image: Jinny’s drawing of a dollar bill, made from memory]
And here is the drawing she subsequently made with a dollar bill present:


                               
[image: Jinny’s drawing made with a dollar bill present]
Jinny was as surprised by the outcome as you probably are, but it is typical. As you can see, the drawing made in the absence of the dollar bill is horrible compared with the drawing made from an exemplar, even though Jinny has seen a dollar bill thousands of times.

What is the problem? Don’t we have a ‘representation’ of the dollar bill ‘stored’ in a ‘memory register’ in our brains? Can’t we just ‘retrieve’ it and use it to make our drawing?

Obviously not, and a thousand years of neuroscience will never locate a representation of a dollar bill stored inside the human brain for the simple reason that it is not there to be found.




A wealth of brain studies tells us, in fact, that multiple and sometimes large areas of the brain are often involved in even the most mundane memory tasks. When strong emotions are involved, millions of neurons can become more active. In a 2016 study of survivors of a plane crash by the University of Toronto neuropsychologist Brian Levine and others, recalling the crash increased neural activity in ‘the amygdala, medial temporal lobe, anterior and posterior midline, and visual cortex’ of the passengers.


The idea, advanced by several scientists, that specific memories are somehow stored in individual neurons is preposterous; if anything, that assertion just pushes the problem of memory to an even more challenging level: how and where, after all, is the memory stored in the cell?


So what is occurring when Jinny draws the dollar bill in its absence? If Jinny had never seen a dollar bill before, her first drawing would probably have not resembled the second drawing at all. Having seen dollar bills before, she was changed in some way. Specifically, her brain was changed in a way that allowed her to visualise a dollar bill – that is, to re-experience seeing a dollar bill, at least to some extent.


The difference between the two diagrams reminds us that visualising something (that is, seeing something in its absence) is far less accurate than seeing something in its presence. This is why we’re much better at recognising than recalling. When we re-member something (from the Latin re, ‘again’, and memorari, ‘be mindful of’), we have to try to relive an experience; but when we recognise something, we must merely be conscious of the fact that we have had this perceptual experience before.


Perhaps you will object to this demonstration. Jinny had seen dollar bills before, but she hadn’t made a deliberate effort to ‘memorise’ the details. Had she done so, you might argue, she could presumably have drawn the second image without the bill being present. Even in this case, though, no image of the dollar bill has in any sense been ‘stored’ in Jinny’s brain. She has simply become better prepared to draw it accurately, just as, through practice, a pianist becomes more skilled in playing a concerto without somehow inhaling a copy of the sheet music.



From this simple exercise, we can begin to build the framework of a metaphor-free theory of intelligent human behaviour – one in which the brain isn’t completely empty, but is at least empty of the baggage of the IP metaphor.


As we navigate through the world, we are changed by a variety of experiences. Of special note are experiences of three types: (1) we observe what is happening around us (other people behaving, sounds of music, instructions directed at us, words on pages, images on screens); (2) we are exposed to the pairing of unimportant stimuli (such as sirens) with important stimuli (such as the appearance of police cars); (3) we are punished or rewarded for behaving in certain ways.


We become more effective in our lives if we change in ways that are consistent with these experiences – if we can now recite a poem or sing a song, if we are able to follow the instructions we are given, if we respond to the unimportant stimuli more like we do to the important stimuli, if we refrain from behaving in ways that were punished, if we behave more frequently in ways that were rewarded.


Misleading headlines notwithstanding, no one really has the slightest idea how the brain changes after we have learned to sing a song or recite a poem. But neither the song nor the poem has been ‘stored’ in it. The brain has simply changed in an orderly way that now allows us to sing the song or recite the poem under certain conditions. When called on to perform, neither the song nor the poem is in any sense ‘retrieved’ from anywhere in the brain, any more than my finger movements are ‘retrieved’ when I tap my finger on my desk. We simply sing or recite – no retrieval necessary.


A few years ago, I asked the neuroscientist Eric Kandel of Columbia University – winner of a Nobel Prize for identifying some of the chemical changes that take place in the neuronal synapses of the Aplysia (a marine snail) after it learns something – how long he thought it would take us to understand how human memory works. He quickly replied: ‘A hundred years.’ I didn’t think to ask him whether he thought the IP metaphor was slowing down neuroscience, but some neuroscientists are indeed beginning to think the unthinkable – that the metaphor is not indispensable.


A few cognitive scientists – notably Anthony Chemero of the University of Cincinnati, the author of Radical Embodied Cognitive Science (2009) – now completely reject the view that the human brain works like a computer. The mainstream view is that we, like computers, make sense of the world by performing computations on mental representations of it, but Chemero and others describe another way of understanding intelligent behaviour – as a direct interaction between organisms and their world.

My favourite example of the dramatic difference between the IP perspective and what some now call the ‘anti-representational’ view of human functioning involves two different ways of explaining how a baseball player manages to catch a fly ball – beautifully explicated by Michael McBeath, now at Arizona State University, and his colleagues in a 1995 paper in Science. The IP perspective requires the player to formulate an estimate of various initial conditions of the ball’s flight – the force of the impact, the angle of the trajectory, that kind of thing – then to create and analyse an internal model of the path along which the ball will likely move, then to use that model to guide and adjust motor movements continuously in time in order to intercept the ball.


That is all well and good if we functioned as computers do, but McBeath and his colleagues gave a simpler account: to catch the ball, the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery (technically, in a ‘linear optical trajectory’). This might sound complicated, but it is actually incredibly simple, and completely free of computations, representations and algorithms.
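A toy simulation can make the contrast vivid. The sketch below is my own construction, not McBeath’s model; the 1995 paper’s ‘linear optical trajectory’ is the full two-dimensional version of this kind of cue. It shows the in-plane geometric fact such heuristics exploit: for a parabolic fly ball, the tangent of the ball’s optical elevation angle rises at a perfectly steady rate only for a fielder standing at the landing point, so a player who merely keeps that rate steady ends up at the catch point with no physics and no internal model of the trajectory:

    # Parabolic fly ball, no air resistance; eye height ignored for simplicity.
    g, vx, vz = 9.8, 20.0, 20.0
    T = 2 * vz / g                        # time of flight
    landing = vx * T                      # where the ball actually comes down

    for fielder in (landing, landing + 10.0):     # right spot vs. 10 m too deep
        prev_tan, rates = 0.0, []
        for step in range(1, 40):
            t = step * (T / 40)
            x = vx * t                    # ball's horizontal position
            z = vz * t - 0.5 * g * t * t  # ball's height
            tan_a = z / (fielder - x)     # tangent of the optical elevation angle
            rates.append(tan_a - prev_tan)
            prev_tan = tan_a
        drift = max(rates) - min(rates)   # zero only at the landing point
        print(f"fielder at {fielder:5.1f} m: drift in optical rate = {drift:.4f}")

The particular numbers do not matter; the point is that nulling a single optical quantity substitutes for estimating initial conditions or solving any equations of motion.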




Two determined psychology professors at Leeds Beckett University in the UK – Andrew Wilson and Sabrina Golonka – include the baseball example among many others that can be looked at simply and sensibly outside the IP framework. They have been blogging for years about what they call a ‘more coherent, naturalised approach to the scientific study of human behaviour… at odds with the dominant cognitive neuroscience approach’. This is far from a movement, however; the mainstream cognitive sciences continue to wallow uncritically in the IP metaphor, and some of the world’s most influential thinkers have made grand predictions about humanity’s future that depend on the validity of the metaphor.


One prediction – made by the futurist Kurzweil, the physicist Stephen Hawking and the neuroscientist Randal Koene, among others – is that, because human consciousness is supposedly like computer software, it will soon be possible to download human minds to a computer, in the circuits of which we will become immensely powerful intellectually and, quite possibly, immortal. This concept drove the plot of the dystopian movie Transcendence (2014), starring Johnny Depp as the Kurzweil-like scientist whose mind was downloaded to the internet – with disastrous results for humanity.


Fortunately, because the IP metaphor is not even slightly valid, we will never have to worry about a human mind going amok in cyberspace; alas, we will also never achieve immortality through downloading. This is not only because of the absence of consciousness software in the brain; there is a deeper problem here – let’s call it the uniqueness problem – which is both inspirational and depressing.

Because neither ‘memory banks’ nor ‘representations’ of stimuli exist in the brain, and because all that is required for us to function in the world is for the brain to change in an orderly way as a result of our experiences, there is no reason to believe that any two of us are changed the same way by the same experience. If you and I attend the same concert, the changes that occur in my brain when I listen to Beethoven’s 5th will almost certainly be completely different from the changes that occur in your brain. Those changes, whatever they are, are built on the unique neural structure that already exists, each structure having developed over a lifetime of unique experiences.


This is why, as Sir Frederic Bartlett demonstrated in his book Remembering (1932), no two people will repeat a story they have heard the same way and why, over time, their recitations of the story will diverge more and more. No ‘copy’ of the story is ever made; rather, each individual, upon hearing the story, changes to some extent – enough so that when asked about the story later (in some cases, days, months or even years after Bartlett first read them the story) – they can re-experience hearing the story to some extent, although not very well (see the first drawing of the dollar bill, above).

This is inspirational, I suppose, because it means that each of us is truly unique, not just in our genetic makeup, but even in the way our brains change over time. It is also depressing, because it makes the task of the neuroscientist daunting almost beyond imagination. For any given experience, orderly change could involve a thousand neurons, a million neurons or even the entire brain, with the pattern of change different in every brain.


Worse still, even if we had the ability to take a snapshot of all of the brain’s 86 billion neurons and then to simulate the state of those neurons in a computer, that vast pattern would mean nothing outside the body of the brain that produced it. This is perhaps the most egregious way in which the IP metaphor has distorted our thinking about human functioning. Whereas computers do store exact copies of data – copies that can persist unchanged for long periods of time, even if the power has been turned off – the brain maintains our intellect only as long as it remains alive. There is no on-off switch. Either the brain keeps functioning, or we disappear. What’s more, as the neurobiologist Steven Rose pointed out in The Future of the Brain (2005), a snapshot of the brain’s current state might also be meaningless unless we knew the entire life history of that brain’s owner – perhaps even about the social context in which he or she was raised.


Think how difficult this problem is. To understand even the basics of how the brain maintains the human intellect, we might need to know not just the current state of all 86 billion neurons and their 100 trillion interconnections, not just the varying strengths with which they are connected, and not just the states of more than 1,000 proteins that exist at each connection point, but how the moment-to-moment activity of the brain contributes to the integrity of the system. Add to this the uniqueness of each brain, brought about in part because of the uniqueness of each person’s life history, and Kandel’s prediction starts to sound overly optimistic. (In a recent op-ed in The New York Times, the neuroscientist Kenneth Miller suggested it will take ‘centuries’ just to figure out basic neuronal connectivity.)
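A back-of-envelope count using just the figures quoted above (my arithmetic, and a deliberate underestimate, since it ignores the dynamics entirely) shows the scale:

    % ~10^{14} connections, each with more than 10^{3} proteins
    \underbrace{10^{14}}_{\text{connections}} \times \underbrace{10^{3}}_{\text{proteins per connection}} \;=\; 10^{17}\ \text{state variables}

That is roughly a million state variables for each of the 86 billion neurons, before a single moment of activity has been accounted for.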


Meanwhile, vast sums of money are being raised for brain research, based in some cases on faulty ideas and promises that cannot be kept. The most blatant instance of neuroscience gone awry, documented recently in a report in Scientific American, concerns the $1.3 billion Human Brain Project launched by the European Union in 2013. Convinced by the charismatic Henry Markram that he could create a simulation of the entire human brain on a supercomputer by the year 2023, and that such a model would revolutionise the treatment of Alzheimer’s disease and other disorders, EU officials funded his project with virtually no restrictions. Less than two years into it, the project turned into a ‘brain wreck’, and Markram was asked to step down.


We are organisms, not computers. Get over it. Let’s get on with the business of trying to understand ourselves, but without being encumbered by unnecessary intellectual baggage. The IP metaphor has had a half-century run, producing few, if any, insights along the way. The time has come to hit the DELETE key.