Ralph Waldo Emerson, “The Conduct of Life”
Invention is the application of the mind to the material world. Most ideas extend the power of the physical body, and some the physical senses. The hoe does more work than the digging stick, and the plow does more work than the hoe. Domesticating animals and harnessing them to the plow extends a man's physical abilities yet further. The telescope and the microscope expand the ability of the eye. The telephone and the radio assist the ear.
Very few ideas extend and amplify the power of the human mind. Those that do have an effect much greater than any invention that extends the physical body or senses. The invention of writing, for example, was an expansion of human memory. It allowed for the transmission of ideas to people separated by space and time. Archeologists unearth clay tablets inscribed with cuneiform script and learn about the details of life from the mind of one long gone. Writing allows for the wisdom of the ages to accumulate, for one man to stand on the shoulders of giants and see further than they could.
Without books, without reading, people developed prodigious memories. Long oral traditions were passed down between generations. Most scholars believe Moses was not the writer of Genesis, in the sense of creating a totally original work, but was instead the collector and editor of oral traditions. I once read how Alex Haley went to Africa and found tribal elders who could recite centuries-long tribal histories. Even works of art such as the epics of Homer are thought to have been passed on orally for generations before being set in script. Today when a young Muslim girl memorizes the Koran it is remarkable, yet in ages past this would have been normal.
This seems odd to us in the West, having been raised in a culture which does not depend on memory alone. I do not need to memorize an entire story; I can simply reread the book or replay the movie. A person who can recite the entire dialogue of movies like Star Wars or The Rocky Horror Picture Show is thought to be at least half a bubble off plumb. We recall ideas, not words. Our minds have adapted to the new reality of easily stored and recalled information.
Writing extends the human mind. It allows for the maintenance of ideas beyond one's life span and the transmission of those ideas to people far removed in space and time. But when all books were copied by hand, few books existed. Very few books were deemed important enough to spend months copying, and few people were trustworthy or important enough to be allowed access to those books. I suspect the ancient Catholic prohibition against the laity reading the scriptures grew out of the difficulties in creating and preserving each copy. Since books were so few and so important, the information in them was venerated and anything new was suspect. Science could not develop in an atmosphere of ancestor worship.
The invention of the printing press changed everything. Suddenly, many books could be produced in little time. The printing press created a market for reading, and people developed a voracious appetite for information. The printing press satisfied a need no one knew existed. The public’s search for knowledge led to the printing of a short pamphlet by an obscure German monk, and the transmission of his ideas sparked the Protestant Reformation.
Easy access to books changed the way people thought. No longer did people need to memorize long narratives and extensive lists; they were written down. Education became more than simple rote memorization of information; it became a process of analyzing and integrating that information. Education also became a process of discovering something new to write about.
The printing press, by allowing the inexpensive printing of books, created an auxiliary memory. I no longer have to know everything; I no longer have to memorize everything; I can simply look it up. Books are an addition to, and in some sense a replacement for, the memory. Rote memorization of revealed truth becomes redundant. Rote memorization is an easy and natural task for digital computers, but not for an analog pattern recognition computer like the human brain. By eliminating the need to memorize everything, the mind was set free to explore, to pick and choose, and to compare. By freeing the mind, the printing press changed history forever.
During the early 1870s, Alexander Graham Bell was trying to develop a replacement for the telegraph. The telegraph, an early example of a digital, binary device, was limited to sending one message in one direction at a time. Bell was trying to develop something called a “harmonic telegraph,” a device for sending multiple messages, each one at a different frequency, over ordinary telegraph wires. In itself this would have been a dramatic achievement. But in the course of that work he began to see that speech could be transmitted, too.[ii]
Bell was granted his first telephone patent in 1876, and with the words, “Mr. Watson, come here, I want you,” a new era in communications was born. The growth of telephone service is a remarkable example of Say’s Law, that supply precedes demand. Before Bell’s invention no one knew they needed a telephone. By 1880 the United States had 54,000 telephones; this number had increased to 1,500,000 by the turn of the century.[iii] But with all this growth came increasing complexity.
In any network the number of possible direct connections, known in algebra as permutations, is given by the following equation:

P(n, r) = n! / (n - r)!

or n elements taken r at a time. For example, P(100, 2) = 100! / 98! = 100 × 99 = 9,900.
Bell’s first experimental systems used a pair of wires to directly connect each telephone. This worked well for small experimental systems, but directly connecting 100 telephones requires 9900 pairs of wire. The dramatic growth of the telephone system made the idea of direct connection of each telephone absurd. By running a pair of wires from each phone to a central location, switches can create temporary connections and reduce the overall complexity of the system. The first phone systems used organic analog switches, otherwise known as operators, as a means of keeping the overall cost and complexity down.
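The arithmetic above can be sketched in a few lines. This follows the text's ordered count of n × (n − 1); the number of unordered wire pairs would be half that. The function name is mine:

```python
# Growth of direct connections among n telephones, following the text's
# count P(n, 2) = n * (n - 1).

def direct_connections(n):
    """Ordered connections among n telephones: P(n, 2) = n * (n - 1)."""
    return n * (n - 1)

for n in (10, 100, 1000):
    print(n, direct_connections(n))  # 100 phones -> 9900, as in the text
```

The quadratic growth is the whole problem: ten times the telephones means roughly a hundred times the wire.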
For many years women were used as operators, manually making connections as needed. But by the late 1940s the growth of the phone system had gotten out of hand. Statisticians were saying the increasing complexity would soon require all the women in the United States to be operators. The telephone industry responded by creating dial telephones and mechanical switches; in effect, each person became his or her own operator. As the phone system continued to grow in numbers of possible connections the mechanical switches became more sophisticated and choosing the best way to route the call became the more difficult problem.[iv]
The complexity of the telephone network led directly to the computer age. Routing calls properly required faster and faster switches. One switch made one part of a decision and passed the problem on to the next switch, and so on and so on. Mechanical switches gave way to vacuum tubes, and soon the race was on to create a solid state switch, also known as the transistor. The problem was twofold: first, mechanical switches and vacuum tubes were bulky; second, the switches used a lot of power. In 1947 the devices that allowed a call to be routed and connected to an overseas destination required the power of a locomotive.[v]
In the 1940s Bell Labs was desperately trying to find a solution to the bulk and power problems. The solution, the solid state transistor, would eventually lead to the digital computer; it was not, however, an instant success. Transistors were seen as a niche product; since they were not manufactured in bulk, they were relatively expensive. In addition, the early transistors were made out of germanium and were reliable only in a narrow temperature range. When Texas Instruments designed the first practical silicon transistor and found a way to manufacture it in bulk, the market for the former niche product exploded.
The transistor is, at its most basic level, a switch. It can be used to turn a signal on and off, or it can be used to amplify that signal. And since it is solid state, it is reliable. It is also small and uses little power. But that alone did not create the computer revolution. That would have to wait until the creation of the integrated circuit. The integrated circuit, with its miniaturized transistors, diodes, resistors, and capacitors, allowed for an entire circuit to be imprinted on a silicon plane. It was small, it used little power, and it was made up of many, many switches. What it excelled at was logic and arithmetic. If this, then that. One plus one equals two. The creation of a small, power efficient device capable of performing arithmetic and logic had applications beyond the telephone industry.
The punch card (do not fold, spindle, or mutilate) was designed in 1801 to control the process of weaving cloth. The punch card is essentially a digital input device; the stitch is either there or it is not. In 1822 Charles Babbage completed his first small computer, called the Difference Engine. In 1836 he designed an Analytical Engine controlled by punch cards. Both of these machines were mechanically analog. A gear or cam moved a certain distance representing a certain number. By adding or subtracting distances, the correct numeric answer could be derived. Manufacturing processes were too imprecise to build a correctly functioning machine; this is widely thought, however, to be the first programmable computer design.[vi]
The first electronic analog computer was designed in 1931; the first digital computer was built in 1945.[vii] Analog computers add or subtract voltages. To add two and three, voltages of two volts and three volts are summed together; the output of five volts is then converted to numeric form. Digital devices, on the other hand, convert analog inputs into forms able to be represented by digital circuits, by switches rapidly turning on and off. The simplest form is binary notation, also known as base two. The number “4” in our decimal system is represented in binary by three switches; the first one on, the second two off, or “100”. The equation “4+3=7” becomes “100+11=111”. The decimal numbers zero through seven can be represented by three switches; the numbers zero through fifteen by only four. Grouping switches in threes or fours gives the octal and hexadecimal numbering systems, which together with binary are the basic numbering systems used in digital computers.
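The switch arithmetic can be checked directly; Python's built-in `format` renders a number in binary, octal, or hexadecimal notation:

```python
# "4 + 3 = 7" becomes "100 + 11 = 111" in binary (base two).
assert format(4, "b") == "100"
assert format(3, "b") == "11"
assert format(4 + 3, "b") == "111"

# Three switches cover 0-7 (one octal digit); four cover 0-15 (one hex digit).
assert format(7, "o") == "7"
assert format(15, "x") == "f"
print(format(7, "b"), format(15, "b"))  # prints: 111 1111
```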
Just as computers fluctuated wildly from analog to digital devices, so also did communications. The telegraph was a digital device with only two states. By varying the time any one state was held (dots vs. dashes), messages could be sent in Morse code. With the invention of the telephone, Alexander Graham Bell converted communication back into its analog state; but the growing complexity necessitated the development of electromechanical switches to connect and route calls. The switching network became primarily digital, while the communications medium, an electrical signal of varying amplitude and frequency, was analog.
As communications traffic grew and digital computers became smaller and faster, it became possible to convert sound into its digital equivalent, transmit a bunch of ones and zeroes, and then reconvert the digital signal into sound again. By time division multiplexing---by sending digital samples of various calls on a single pair of wires or fiber optic channels, each in its own time slice---the need for wiring is reduced. It is now possible to create, with the exception of the circuit from the telephone to the first A/D or D/A converter, a completely digital communications circuit. The solution to the original problem of reducing the complexity of connecting individual telephones has created the digital revolution and resulted in several new communications technologies such as the fax machine, e-mail, and the Internet.
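The time-slicing idea can be sketched in miniature. The function names are mine, and real systems add framing, synchronization, and A/D conversion; this only shows the interleaving itself:

```python
# A toy sketch of time-division multiplexing: samples from several calls
# share one channel, each call taking its own recurring time slice.

def multiplex(calls):
    """Interleave equal-length sample streams, one time slice per call."""
    return [sample for frame in zip(*calls) for sample in frame]

def demultiplex(stream, n_calls):
    """Recover each call's samples from its recurring time slice."""
    return [stream[i::n_calls] for i in range(n_calls)]

call_a = [1, 2, 3]
call_b = [10, 20, 30]
line = multiplex([call_a, call_b])   # one wire carries [1, 10, 2, 20, 3, 30]
assert demultiplex(line, 2) == [call_a, call_b]
```

Two calls now need one channel instead of two; the saving grows with every call added to the frame.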
All is not as it appears, however. For all their speed, computers are just adding machines. If you can convert an input into digital form and devise the right calculation, a computer will come up with an adequate solution. But the digital computer is hobbled by the necessity to convert analog data into its digital equivalent for processing. While the calculation is rapid and precise, the programming and data conversion are relatively slow.
But the paradigm of the digital computer is wrong. Yes, the switches are digital. But the most important circuits, the ones that allow the switches to interact with the real world, are analog. The transducers that convert inputs of light, pressure, and temperature into voltages and currents? Analog. The A/D converters that change voltages and current into their digital equivalent? Analog. The op amps that keep voltages stable throughout the integrated circuit? Analog. The line drivers and sense amplifiers necessary to read the state of an individual memory cell? Analog. The D/A converters that convert digital data into their voltage and current equivalents? Analog. The devices that move, light up, or in some way respond to changes in voltage and current? Analog.
The primary function of the human brain, pattern recognition, is an analog function. The brain is poorly optimized for logic, the very function a digital computer is good at. But a digital computer is poorly optimized to interpret speech, to distinguish between two people, to tell which of two chocolate bars is the better tasting one. The computer reduces the richness of life into symbolic representations, manipulates those symbols, and then produces an output. By reducing everything into symbolic logic, the digital computer is barely able to perform tasks a baby finds simple, such as recognizing its mother’s face.
George Gilder, “Microcosm”[viii]
Listen to a symphony orchestra. Dozens of musicians, all blowing, sawing, or pounding away at their instruments to produce a wall of sound. Each instrument has its unique harmonic characteristics, the attack, the decay, the sustain, and the sub-harmonics, all blended together to differentiate the sound of an oboe from a trombone, and a trombone from a kettle drum. The person behind you is whispering. The man across the aisle is coughing. To the listener this all makes sense. Yet think about how the soundscape impinges upon the ear.
Sound is created by variations in air pressure. These variations occur at certain frequencies. Each additional instrument, each cough, each whisper, also occurs at certain frequencies. If the frequency amplitudes match, they reinforce each other. If they do not match they dampen each other. At any instant in time the air pressure is either increasing or decreasing. These subtle variations make no sense without being placed in context with the variations occurring before and after. What you are hearing---cough or kettle drum, piccolo or piano---can only be determined based on the context, on the pattern. The sonic vibrations are transmitted to the inner ear, where the cochlea converts them into their electrical analog, performing in one simple step a complex mathematical process that takes a digital computer several steps, plenty of processing cycles, and millions of transistors. The electrical impulses are then interpreted, in real time, by the analog, slow, but massively parallel human brain.
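What the cochlea does in one step, a digital computer does with an explicit Fourier transform over thousands of samples. A rough sketch, assuming NumPy is available; the tones and sample rate are arbitrary choices, not a model of hearing:

```python
# Separating two simultaneous tones the digital way: sample the signal,
# then run a Fourier transform over the whole second of sound.
import numpy as np

rate = 8000                                  # samples per second
t = np.arange(rate) / rate                   # one second of "sound"
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)

spectrum = np.abs(np.fft.rfft(signal))       # magnitude at each frequency
peaks = np.argsort(spectrum)[-2:]            # bin index equals Hz here
print(sorted(int(p) for p in peaks))         # prints: [440, 1000]
```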
George Gilder, “Microcosm”[ix]
The problem of sight is even more complex. The world exists in three dimensions. Light, either radiated or reflected, is focused by the lens of the eye onto the retina, stimulating various specialized receptors. The pattern reproduced on the back of the eye is two-dimensional, yet even with one eye closed the world is perceived to have depth. This perception is improved upon when both eyes are used, binocular fashion, and the slightly different images are summed up. How does the brain derive three dimensions from two-dimensional images?
Oliver Sacks describes the experience of a man who had been blind since early childhood and whose brain had lost the ability to perceive depth and dimension from the subtle sensory cues impinging upon the retina. The patient “was able to see colors and movements, to see (but not identify) large objects and shapes.”[x] The world was a bewildering array of colors and shapes, and the shapes were only resolved into objects by handling them, just as he had perceived the world while blind. But this recognition, this enhanced perception, was transitory; the next time the object came into view it once again had to be interpreted by touch.[xi] The visual cortex had atrophied from lack of use, and the adult brain hasn’t the plasticity of the infant brain. The patient was a tactile person, and the visual inputs did not interrelate with his internal modeling of the world.
Oliver Sacks, “An Anthropologist on Mars.”[xii]
Another important question is how the brain perceives color. In an important experiment Edwin Land, the inventor of the Polaroid camera, proved that the brain constructs rather than perceives color, that color is not simply a matter of wavelength, but is also determined by its context. A particular color may be perceived as green in one context, but white or gray in another.[xiii] Many of the new car colors demonstrate this effect; they appear to change color depending on whether it is sunny or overcast, the angle of the sun, or the color of their surroundings.
It has now been shown that the V1 area of the brain responds to wavelength and the V4 area to color.[xiv] Thus the brain not only has a visual cortex, but various locations within that cortex perform unique functions. These specialized areas process their own data, but their processing may be influenced by the results of other areas’ processes.
Oliver Sacks, “An Anthropologist on Mars.”[xv]
I first worked on the F-111D, an aircraft considered at the time to be the most sophisticated in the Air Force inventory. The avionics suite was highly integrated. The Doppler radar sent drift signals to the Terrain Following Radar (TFR) that aligned the antennas with the flight path instead of the aircraft heading, and sent signals to the Inertial Navigation System (INS) to dampen Schuler drift, an inherent oscillation with a period of about 84 minutes. The TFR system sent climb/dive commands to the Automatic Flight Control System; it also performed Air to Ground Ranging computations for the Digital Computer Complex that assisted in bombing computations. The INS sent roll signals to the Attack Radar System (ARS) to stabilize the Roll Gimbal, and sent Roll Good signals through tiny switches on the ARS Roll Gimbal to the TFR to let it know it was within the proper roll parameters for safe operation. In the event of a partial ARS failure, TFR radar signals could be routed through the ARS antenna, back to the TFR system for processing, and then to the Integrated Display System for display as ARS data. Highly specialized systems performed their own functions, yet some of the results of those functions influenced and were influenced by other systems. This is highly analogous to the way the brain functions.
The brain is not the undifferentiated mass of neural matter it was once thought to be, but is instead a highly differentiated organ. Each tiny area of the brain works on its section of the problem. These solutions are then integrated together “with memories, expectations, associations and desires” to create a complete mental picture. For example, damage to the bean-sized V4 area of the brain, or the V1 “blobs” or V2 “stripes” leading to V4, eliminates a person's ability to perceive or even remember color.[xvi] And even these small areas may contain areas of specialization, all the way down to the level of the individual neuron.
The digital computer may have a sound card and a video card, but this is nothing like the specialization of the human brain. The sound card works on all parts of the sound picture one instruction at a time; that this appears to be real time is an aural illusion based on the speed of the chip. The human brain works on multiple parts of the sound picture in parallel. The differences between what the left and the right ear hear give us lateral direction; the differences in the echoes from the top half and the bottom half of the ear give us vertical direction. Other cues give us frequency, harmony, volume, etc. The digital sound card cannot duplicate this type of parallel processing.
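One of those cues, the difference between what the left and right ears hear, shows the contrast plainly: a digital computer recovers the inter-ear delay only by testing every possible shift in turn, serial work the brain does in parallel. A minimal sketch with invented signals and an invented function name:

```python
# Recover the delay between two "ears" by scoring every candidate shift.

def best_lag(left, right, max_lag):
    """Return the shift of `left` that best lines up with `right`."""
    def score(lag):
        return sum(l * r for l, r in zip(left[lag:], right))
    return max(range(max_lag + 1), key=score)

left = [0, 0, 0, 1, 2, 3, 2, 1, 0, 0]    # sound reaches the left ear late
right = [0, 1, 2, 3, 2, 1, 0, 0, 0, 0]   # same sound, 2 samples earlier
assert best_lag(left, right, 4) == 2     # the recovered inter-ear delay
```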
If the brain is a pattern recognition machine, how does it ever perform logical functions such as mathematics? A clue to this predicament may be found in the remarkable abilities of so-called savants and Williams people. Savants, also known in less politically correct times as idiot savants, are usually low functioning autistic people with extraordinary abilities in certain areas, usually mathematics and music. Williams people generally have low intelligence, yet are distinct from people with Down syndrome. Like savants, Williams people often exhibit remarkable musical abilities; unlike savants they have extraordinary verbal and emotive gifts while exhibiting little mathematical ability.[xvii]
Autistic people, whether low or high functioning, seem to be overwhelmed by sensory input. Dr. Temple Grandin, perhaps the most famous of contemporary high-functioning autistic people, “speaks of her ears, at the age of two or three, as helpless microphones, transmitting everything, irrespective of relevance, at full, overwhelming volume---and there was an equal lack of modulation in all her senses.” Eventually she developed something called hyperfocus: an immense power to concentrate for hours at a time on sand dribbling through her fingers or on tracing the swoops and whorls of the lines on her hands, all the while blocking out the sensory barrage.[xviii] She has powerful gifts of visualization, being able to design complete industrial processes in her head,[xix] yet has difficulty with verbalization, with sequential processing, with symbolism, and with the complex emotions of people.[xx] Dr. Grandin also has perfect pitch and a remarkable musical memory, yet is curiously unmoved by music. She just doesn’t get it.[xxi] Dr. Grandin feels her differing abilities are caused by structural abnormalities in her brain, that certain sections don’t work properly, but that other areas are much better developed as a result; her MRI scans show a smaller than normal cerebellum, for example.[xxii]
Dr. Grandin is a special case, a high-functioning autistic, also known as having Asperger’s syndrome. She is more highly socialized than most, although many of her social skills are the result of rote memorization of various social rituals, not internal, empathetic, and automatic the way they are for most people. Most autistic people are less able to bridge the gap between the way they perceive the world and the way others perceive it; this is especially true of savants.
Savants are unlike other people. They have remarkable gifts. Some draw, some play music, some display remarkable memories, and others have remarkable calculating skills. Most are autistic, some are retarded, and a few are of normal intelligence. (About 10 percent of the autistic population are savants, nearly 200 times the rate for the retarded population and several thousand times the rate for the normal population.[xxiii]) The autistic savants often have several talents, but also have a range of serious developmental abnormalities: similar in kind to, but perhaps more intense than, those of Dr. Grandin.
Most savants, being autistic, are unable to verbally relate their mental processes. One savant, or prodigy, of normal intelligence was George Parker Bidder, who could derive the “logarithm of any number to seven or eight places and, apparently intuitively, could divine the factors for any large number.” For some time Bidder was unable to describe how he came upon the correct answer, saying only that “they seem to rise with the rapidity of lightning.” Bidder was finally able to discover and describe some of the techniques, but their use and his eventual discovery of them was an unconscious process. Another prodigy of normal intelligence, A. C. Aitken, gave the following description:[xxiv]
A. C. Aitken, quoted in Oliver Sacks, “An Anthropologist on Mars”,[xxv] quoting from Steven B. Smith, “Calculating Prodigies”
F. W. H. Myers in his book “Human Personality” was one of the first to try to describe the process by which savant calculators derived the correct answers. He believed they used highly personal methods, unlike those taught in schools. He also believed these methods were unconscious, unlike normal people’s efforts to solve problems, and that the unconscious mind bumped the answer into the conscious upon completion.[xxvi] This is borne out by the example of Jedediah Buxton, one of the more famous eighteenth-century savants, who would take weeks or even months to answer the more difficult problems posed to him, yet would carry on with his normal life, eating, drinking, and talking, until the answer came to him.
The prodigious gifts of the savant arrive almost full-blown. They are the result of biology, not education, and suggest the seemingly latent powers of the human brain. The hypertrophy of certain areas of the savant brain and the relative atrophy of others suggest to psychologist Howard Gardner that there are various intelligences, “---visual, musical, lexical, etc.---all of them autonomous and independent, with their own powers of apprehending regularities and structures in each cognitive domain, their own “rules,” and probably their own neural bases.”[xxvii]
Williams syndrome, caused by a missing genetic sequence on chromosome 7,[xxviii] expresses itself in various physical characteristics such as short stature, elfin facial features, congenital cardiac problems, difficulties with fine motor coordination, and generally below average intelligence. Williams people have difficulty reading, writing, and drawing, yet have larger vocabularies than others their own age and have dramatic and entrancing verbal skills. They also have heightened musical abilities, despite being unable to read music. The common facial characteristics, need for order and routine, and their remarkable storytelling and musical abilities are now thought to be the source of elves in folk tales.
The neurological characteristics of Williams people, when compared to those of people with Down syndrome, those with Asperger’s syndrome, and the normal population, demonstrate the degree to which brain structure and abilities are linked. Like people with Down syndrome, Williams people have below normal cortical volume. Unlike Down syndrome people, their frontal lobes and areas of the temporal lobes (the limbic region) are normal sized. The cerebellum of Williams people is normal sized, unlike Dr. Grandin’s cerebellum. But the neocerebellum is enlarged.[xxix]
The structures of the brain that are normal or enlarged in the brains of Williams people are those thought responsible for their remarkable abilities. The frontal lobe and the neocerebellum are thought to be involved in processing speech. The limbic system, involved in memory and emotions, is intact. The primary auditory cortex and the planum temporale, important in language and music, are enlarged. The left planum temporale is enlarged to a degree normally seen only in professional musicians.[xxx]
The brain is more than the sum of its structures, as is demonstrated by the differences in the way Williams people process information. Normal people respond to grammatical stimuli asymmetrically, favoring the left side of the brain. Williams people respond symmetrically. When processing facial images, normal brains have greater activity on the left side, while Williams people favor the right side of the brain. This suggests the brain's ability to reprogram itself, to use one area in place of another damaged or otherwise impaired area.[xxxi]
Howard Gardner’s concept of a multiplicity of intelligences, each able to recognize patterns and structures, is useful in solving the problem of how savants perform their remarkable feats of computation. The key is the ability to recognize patterns. Savants must discern patterns in what most of us perceive as a jumble of random, unrelated inputs. When a savant views the world in the area of his or her extraordinary ability, order is imposed upon chaos. Since the savant’s skills are not the result of education, but have a biological component, the processing algorithms must be neurological, structural. It follows that these algorithms, while part of the structure, can be discerned.
The savant’s brain is somehow optimized for certain pattern recognition tasks. Since these tasks are normally performed by certain areas of the brain, it follows that these areas must be larger, better organized, or better connected in the savant brain. By better connected I mean they may take advantage of other areas of the brain that are underutilized. Just as the Williams people have normal or enlarged parts of the brain that correspond to their abilities, and just as they process some information using different parts of the brain than do normal people, the savant must also process information differently: either quantitatively, doing the same information differentiation and integration, only more of it and faster; or qualitatively, using an entirely different process than normal people do. The latter seems unlikely, since the same brain structures exist; it cannot be ruled out, however, that the savant’s brain may be wired differently---more interconnected, more specialized, more integrated.
A man with a sprained ankle favors one leg while relying on the other one. In a similar fashion the brain may program itself to be better at one function, using resources ordinarily reserved for another but made available due to some abnormality. The brain may also develop specialized areas more fully to compensate for diminished functions in other areas. Perhaps the normal brain, while processing different pieces of data in specialized structures of the brain, is really a general purpose, self programming, pattern recognition computer; when portions of its general abilities are undeveloped, the brain sometimes puts all its remaining eggs in one basket by optimizing itself for specialized tasks.
So to return to a question posed earlier: how does the human brain, optimized for pattern recognition, perform logical functions? By discerning patterns in the data. Think how easy it was to learn the nines part of the multiplication table once the pattern was explained; the first digit is one less than the number being multiplied, and the rest of the digits, when added to the first, must add up to nine. An even number multiplied by another even number must always have an even answer. An odd number multiplied by another odd number will always be odd, while an odd number added to another odd number will always be even. These patterns are inherent in the data, but must be learned.
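The patterns the paragraph describes can be checked mechanically; the assertions below simply restate its claims:

```python
# The nines table and the even/odd rules, verified by brute force.

for n in range(2, 10):                # 9 x 2 through 9 x 9
    tens, ones = divmod(9 * n, 10)
    assert tens == n - 1              # first digit: one less than the multiplier
    assert tens + ones == 9           # the digits add up to nine

for a in range(1, 20):
    for b in range(1, 20):
        if a % 2 == 0 and b % 2 == 0:
            assert (a * b) % 2 == 0   # even times even is even
        if a % 2 == 1 and b % 2 == 1:
            assert (a * b) % 2 == 1   # odd times odd is odd
            assert (a + b) % 2 == 0   # odd plus odd is even
print("all patterns hold")
```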
For the savant these and other much more complex patterns are not learned through education, but are understood intuitively. The brain creates neurological analogies in the deep structures and interconnected neurons of the brain. These analogies, by corresponding directly to reality, may allow the brain to perform spectacular feats of logic by amplifying the brain’s normal pattern recognition abilities. By using larger structures, by better integration with other structures, or by developing more highly integrated interconnections between neurons, the brain’s normal powers are expanded upon.
The abilities of the organic brain, especially those extraordinary abilities of savants, suggest new avenues of research and new ways of computing. By contrasting the abilities of the brain with those of the dominant digital architecture, we may expose the flaws inherent in the digital brain and suggest ways in which it must change in the future.
The digital computer has at its heart the CPU, a general-purpose chip. Any digital CPU performs only one instruction at a time. Thus, the heart of almost every computer contains a significant bottleneck, with instructions lined up waiting to be processed. That this works at all is due to a series of innovations designed to keep the queue orderly, so that time is not lost fetching instructions out of sequence. Speed is also important; by processing millions of instructions per second, the central flaw of the current computer architecture is effectively masked until the computer is asked to do something closely resembling real life.
A digital chip has certain instructions laid down in the hardware and used to manipulate data. The Intel chips and their clones (called CISC, for complex instruction set computing) are the most widely used chips in personal computing. Unlike reduced instruction set computing (RISC) chips, which seek to achieve speed by reducing the number of instructions the hardware has to choose from, the CISC architecture recently added 57 new instructions designed to speed multimedia applications such as sound and video. Each instruction is a special way to handle a particular problem. The CISC design imprints special abilities into the silicon, but the chip can still process only one instruction at a time.
For a computer to function it must have the ability to remember the results of previous calculations. As computers have become more complex, their memory needs have increased tremendously. In 1980 the leading memory technology was the 64K dynamic random access memory (DRAM) chip, capable of holding roughly 64,000 bits (65,536, to be exact). During the 1980s memory chips went through four generations: 64K, 256K, 1 Meg, and 4 Meg. In 1995 16 Meg chips were top of the line; today 32 Meg memory modules are common. A consortium of chip designers has announced the development of a 256 Meg memory module; the module is scheduled to reach market sometime in the late 1990s. [xxxii]
Few people “need” a 256 Meg module for current applications but, in another application of Say’s Law, supply is preceding demand. Research is underway to produce the first 1 Gigabyte (Gig) memory module, and if current trends continue then by 2010 we will have desktop computers containing 64 Gig memory modules.[xxxiii] What will likely happen is that operating systems and applications will be kept in active memory, with hard drives used only as backup in case of power failure. By eliminating the need to fetch data and instructions from the relatively slow hard drive, a bottleneck will be removed, allowing the processor to be more fully utilized.
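The 2010 projection follows from simple compounding; a sketch assuming capacity quadruples every three years (the rough DRAM generation cadence) starting from a 256 Meg module in 1998. The start year and cadence here are assumptions for illustration, not figures from the cited source:

```python
# Extrapolate memory module capacity, assuming it quadruples
# every three years (one DRAM generation).
capacity_meg = 256     # assumed starting point: 256 Meg module
year = 1998            # assumed starting year
while year < 2010:
    year += 3
    capacity_meg *= 4
print(year, capacity_meg // 1024, "Gig")   # prints: 2010 64 Gig
```

Four generations of quadrupling (256 Meg to 1, 4, 16, and finally 64 Gig) land exactly on the essay's 2010 figure.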
Just as memory chips are getting larger and larger, processors are getting faster. The early chips in desktop computers handled data in 16-bit segments and ran at 1 megahertz (MHz), or 1 million cycles per second. The most common chips today handle data in 32-bit segments at speeds approaching 300 MHz. Some chips intended for use in workstations operate at 500 MHz. New chips under design will have an instruction path 64 bits wide.[xxxiv]
The increases in the size of the data path and the speed of the processor have created problems. Many of the subsystems the processor relies on operate at different speeds. The most common data bus, the PCI bus,* operates at 66 MHz, well below the speed of the processor. Even with high-speed memory caches, today’s processors often wait hundreds of clock cycles for data from memory.[xxxv] If the data is not in memory but on the hard drive or the CD-ROM, the processor must sit idle for what is, in silicon time, a millennium.
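The "silicon time" disparity can be made concrete with back-of-the-envelope arithmetic; the clock speed and latencies below are rough order-of-magnitude assumptions, not figures from the cited article:

```python
# Cycles lost waiting on slower subsystems, assuming a 300 MHz
# clock and rough order-of-magnitude access latencies.
clock_hz = 300e6                       # 300 MHz processor (assumed)
latencies_s = {
    "main memory": 100e-9,             # ~100 ns DRAM access (assumed)
    "hard drive": 10e-3,               # ~10 ms seek and read (assumed)
}
for name, seconds in latencies_s.items():
    cycles = seconds * clock_hz
    print(f"{name}: {cycles:,.0f} cycles idle per access")
# main memory: 30 cycles idle per access
# hard drive: 3,000,000 cycles idle per access
```

On a human scale (one cycle per second), three million cycles is more than a month of waiting, which is the disparity the paragraph above describes.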
To compensate for the limitations of the current computer architecture, new features have been added to modern chips to improve their performance. The newest chips have instruction pipelines where instructions line up like boxcars on a train. The chip tries to predict which instructions will be needed, fetches them from memory, and lines them up. The chip will then speculatively execute instructions, determine where the results branch, try to decide which branch is the correct one, and load those instructions into the pipeline. If the chip guesses wrong, the pipeline must be flushed and the correct instructions loaded, costing lost processor cycles. The complex control circuitry needed to overcome the limitations of the current architecture has had only limited success; a processor that is theoretically able to execute four instructions per clock cycle averages only two.[xxxvi] This is like installing a V-8 engine in your car but only hooking up four of the plug wires.
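The two-out-of-four shortfall can be reproduced with a crude throughput model; all the figures below (branch frequency, misprediction rate, flush penalty) are illustrative assumptions, not numbers from the cited article:

```python
# Crude model of why a processor that can issue four instructions
# per cycle averages far fewer.  All parameters are assumptions.
issue_width     = 4        # instructions per cycle, best case
branch_fraction = 0.20     # one instruction in five is a branch
mispredict_rate = 0.10     # predictor guesses wrong 10% of the time
flush_penalty   = 15       # cycles lost refilling the pipeline

n = 1_000_000                                  # instructions to run
ideal_cycles  = n / issue_width
flush_cycles  = n * branch_fraction * mispredict_rate * flush_penalty
effective_ipc = n / (ideal_cycles + flush_cycles)
print(f"effective IPC: {effective_ipc:.2f}")   # prints: effective IPC: 1.82
```

Even a modest misprediction rate, multiplied by a long pipeline flush, drags the average from four instructions per cycle down to roughly two.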
The digital computer is heading in exactly the opposite direction it needs to go. To model the world, the computer needs to become analogous to the human brain: analog processing instead of digital; multiple specialized processors, each working on a different part of the problem, instead of a single high-speed, general-purpose processor. Current multiprocessor computers link several general-purpose processors together; this requires specialized software to divide the problem up between the processors and then integrate the partial solutions. This is a partial step in the right direction, but until the silicon begins to resemble the specialization of the organic model, until the instructions for parallel processing and reintegration of the outputs are designed into the hardware, and until the circuits make use of analog Fourier transformations, the integrated chip will remain little more than a really fast calculator.
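The divide-and-reintegrate software step described above can be sketched in a few lines; a minimal Python illustration using the standard multiprocessing module, with summing squares as a stand-in workload:

```python
from multiprocessing import Pool

def partial_sum(chunk):
    """Each processor works on its own part of the problem."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    # Divide the problem up between four workers...
    chunks = [data[i::4] for i in range(4)]
    with Pool(4) as pool:
        partials = pool.map(partial_sum, chunks)
    # ...then integrate the partial solutions.
    total = sum(partials)
    print(total == sum(x * x for x in data))   # prints: True
```

Note that the splitting and the final integration are done in software, by the programmer; nothing in the hardware itself knows the problem was divided, which is the essay's point.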
* “Bus” is a term in common usage; it refers to a data path shared between the various parts of the computer. Why the term “bus”? Because data gets on, then gets off, just like on a real bus. The bus runs a complete route, while the data gets on and off where needed.
[i] Emerson, p. 622
[ii] New Standard Encyclopedia, Standard Educational Corporation, Chicago, 1983, Volume Three, p. B-185
[iii] New Standard Encyclopedia, Standard Educational Corporation, Chicago, 1983, Volume Fifteen, pp. T-106, 107
[iv] Gilder, George F.; “Microcosm: The Quantum Revolution in Economics and Technology”; Simon and Schuster, New York, 1989, p. 47
[v] Gilder, p. 48
[vi] New Standard Encyclopedia, Standard Educational Corporation, Chicago, 1983, Volume Four, p. C-518
[vii] New Standard Encyclopedia, Standard Educational Corporation, Chicago, 1983, Volume Four, p. C-518
[viii] Gilder, pp. 296, 297
[ix] Gilder, p. 297
[x] Sacks, Oliver W.; “An Anthropologist on Mars: Seven Paradoxical Tales”; 1st Edition, Vintage Books, New York; February 1996; p. 115
[xi] Sacks, p. 129
[xii] Sacks, p. 128
[xiii] Sacks, pp. 24, 25
[xiv] Sacks, pp. 28, 29
[xv] Sacks, pp. 28, 29
[xvi] Sacks, p. 31
[xvii] Lenhoff, Howard M.; Wang, Paul P.; Greenburg, Frank; and Bellugi, Ursula; “Williams Syndrome and the Brain”; Scientific American, Vol. 277, No. 6; pp. 68, 70, 71
[xviii] Sacks, p. 254
[xix] Sacks, pp. 266, 267
[xx] Sacks, pp. 286, 288, 289
[xxi] Sacks, p. 286
[xxii] Sacks, p. 289
[xxiii] Sacks, p. 194
[xxiv] Sacks, pp. 191, 192 and 192f
[xxv] Sacks, p. 192f
[xxvi] Sacks, p. 194
[xxvii] Sacks, p. 223
[xxviii] Lenhoff, p. 70
[xxix] Lenhoff, p. 72
[xxx] Lenhoff, p. 72
[xxxi] Lenhoff, p. 72
[xxxii] Greider, pp. 173, 174
[xxxiii] Greider, p. 174
[xxxiv] Halfhill, Tom R.; “Beyond Pentium II”; Byte, Vol. 22, No. 12; p. 80
[xxxv] Halfhill, p. 81
[xxxvi] Halfhill, p. 81
Last updated on December 31, 2005