My 1969 New Scientist article Dinosaur among the Data? shows my early thinking on the need to move away from pure
Von Neumann. As the technology advances, it offers to take us further and
further along the path of parallel processing. The first (1969) step was
Content Addressable Memory. Later would come processing in situ in each memory
location within a prescribed set.
The technology then advanced further, first
offering us a linear array of processor/memory nodes (embodied in my
patent, called "Property 1a"), then the presently achievable 2D array, the
Kernel Machine, with high speed (100Mb/s) global instructions plus high speed
(100Mb/s) intercommunication between adjacent nodes, on which I held worldwide
patents, now lapsed. Not yet achievable is the 3D array of
processor/memory nodes with globals and also high
speed intercommunication between adjacent nodes.
It seems that the dead hand of "total
Von Neumann" is too strong, and none of these machines will be built. Ivor Catt 5jan01
Professor Fred Heath, who died perhaps
fifteen years ago, said I should have received the IEE Prize for first pointing
out the Von Neumann Bottleneck, which is most clearly stated in my book Computer
Worship, pub. Pitman 1973. Some other chap
received the prize a decade or two later.
The straitjacket continues into today's
computer, which is identical in architecture to that of 1969 and indeed of
1945. It cannot do array processing, for instance the simulation of global
warming, or air traffic control for Europe. What we need is processing
distributed around memory, something that is easy to do today, when both memory
and processing use the same (semiconductor) technology. However, the Japanese
took the American 4kbit memory and ran with it unchanged, to the tune of
billions in investment, currently reaching 256Mbit. It still contains no
processing (except, irritatingly, for internal test). The machine's
architecture has been frozen for more than half a century, with the Von Neumann
Bottleneck (separation of processing from memory) strictly enforced. A beautiful example of design by a Japanese committee, not
excluding the tea boy. They all understand the concept of increasing
memory size by four times. (However, the Americans are just as bad.)
The processing at each memory node needs to
be primitive, giving us a fine grain machine. However, the lust for MIPS, FLOPS
and the like means that attempts to insert a million primitive processors into
a memory array (= the Kernel Machine) are obstructed by those who feel it
is macho to have far more powerful processors (and so inevitably far fewer of
them, and much more difficult to test). For situation analysis and manipulation,
like weather forecasting, this is inappropriate. The Z80 is far too powerful.
The most primitive processing in memory was
content addressing. Very soon after my 1969 article, as semiconductor
technology improved, we could have had manipulation of each memory location in
parallel (in a prescribed memory field) according to some criterion input
globally from outside the memory. That is the way we should have developed, not
like the Americans and then the Japanese, who froze out processing from memory.
- Ivor Catt, jan99.
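As an illustration of that first step (a sketch only, not a description of any actual device), content addressing with a little processing at each location can be modelled as follows: a globally supplied mask and pattern select words by their contents within a prescribed field, and a globally supplied operation is then applied to every matching word, in what the hardware would perform as a single parallel step.

    # Sketch only: a software model of content-addressable memory with
    # a small amount of processing distributed through the memory.
    class ComplexMemory:
        def __init__(self, words):
            self.words = list(words)

        def match_and_modify(self, mask, pattern, update):
            # Content addressing: words are selected by what they contain,
            # not by their address; `update` is broadcast to every hit.
            for i, word in enumerate(self.words):
                if (word & mask) == pattern:
                    self.words[i] = update(word)

    memory = ComplexMemory([0x12, 0x15, 0x22, 0x32])
    # Clear the low nibble of every word whose high nibble equals 1.
    memory.match_and_modify(mask=0xF0, pattern=0x10, update=lambda w: w & 0xF0)
    print([hex(w) for w in memory.words])    # ['0x10', '0x10', '0x22', '0x32']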
"Optimists expect the performance
of a single processor to double by the early 1990s, a modest increase that
falls a long way short of satisfying the needs of the larger users. Only
parallel processing - the concurrent use of more than one processor to carry
out a single job - offers the prospect of meeting these requirements." - Edwin
Galea, Supercomputers and the need for speed,
New Scientist, 12nov88, p50-55. Galea
discusses the various applications that we have more or less given up on now,
since we have stopped developing parallel processor systems. These applications
are also described in my article The Kernel Logic Machine, Electronics
& Wireless World March 1989, p254-259 - Ivor Catt, jan99.
Dinosaur among the Data? - Ivor Catt, New Scientist, 6 March 1969, pp501/502
Is the computer becoming fossilized in the
stratum of data processing? The present high speed machines are still concerned
with sequential processing of information. But far greater possibilities lie in
store if the computer memory is designed to consider whole 'situations'.
When Charles Babbage, the British
mathematician, designed the first computer or calculator in the middle of the nineteenth
century, his interests included subjects, such as the calculation of
logarithms, which demanded that a great deal of tedious, repetitive arithmetic
calculations be carried out. He called his machine a calculating machine.
The modern electronic computer was brought
into being during the second world war with the need
to do many tedious calculations on such things as shell trajectories, and was
likewise originally thought of as just a calculating machine. Data processing
was a completely separate industry, basically concerned with punched card
sorting, and the first data-processing machines were just overgrown punched-card sorters.
The two industries have now merged and today
a computer is generally regarded as a machine primarily concerned with processing
data. The electronic computer has thus made a radical change in the space of a
few years, from arithmetic calculator to data processor.
The question now arises as to whether data
processing is the main function that we want electronic computers to perform
now and in the future. Will future generations looking back say that the
electronic computer started off as an arithmetic calculator, very soon turned
into a data processor, and there it remained? This does not seem like a
plausible sequence of events, and we can surely expect the electronic computer
to go through further mutations. Its present position as a hybrid between the
arithmetic calculator and the data processor seems to be rather unstable.
Seeing the whole situation
It would surely be to our advantage to
attempt to foresee future developments, rather than let our industry grow like Topsy, with no ideas within or without it as to where we
are heading. That way, we could run into a blind alley, which would be bad for
our self esteem, our prestige and our pockets. As I see it, the only positive
ideas at present are in the ICL Stevenage basic language machine, and in the
theory that the computer will link up with the communications industry.
However, these ideas seem to be just extensions of the data-processing idea,
and not very fundamental.
In the basic language machine, we try to
unscramble the Tower of Babel that is the modern data-processing machine so
that it may process data a little more efficiently. By linking up with the
communications industry, we transfer the data from punched cards to telephone
or other communication lines, so that the cards do not have to be carried about.
At the risk of being dubbed impractical, let
us reculer pour mieux sauter (step back, the better to leap). And while we are about it, let us do a
thorough job by retracing our steps as far back as our minds can conceive. If
we do so, reminding ourselves what our business is supposed to be about, there
appear to be two basic factors: human and sociological needs and the
capabilities of electronic technology.
The common denominator of most human and
sociological needs is what I call "Situation Analysis and
Manipulation". The clearest example of a "situation" as I use
the word in this phrase, is a topographical or geographical environment, where
the altitude varies across a two-dimensional surface, and the altitude at any
point can be specified - a well-mapped piece of land, for example.
"Manipulation" in this case would be achieved by altering the
altitudes by means of a bulldozer. A more complex example of a situation is a
geographic environment which is altering with time, so that a complete
description of the situation must contain the time dimension as well as the two
horizontal dimensions. Again, manipulation could be achieved with a bulldozer.
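To make the example concrete (a toy sketch, with made-up figures), such a situation can be held as a two-dimensional grid of altitudes. Analysis then means asking a question of every point on the surface; manipulation, the bulldozer, means altering every selected point.

    # Sketch only: a "situation" held as a 2D grid of altitudes.
    terrain = [
        [3, 5, 9, 4],
        [2, 8, 7, 3],
        [1, 4, 6, 2],
    ]

    # Analysis: ask a question of every point of the surface.
    high_points = [(r, c) for r, row in enumerate(terrain)
                   for c, height in enumerate(row) if height > 6]

    # Manipulation (the "bulldozer"): level every selected point.
    for r, c in high_points:
        terrain[r][c] = 6

    print(high_points)    # [(0, 2), (1, 1), (1, 2)]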
Closed system design
What about our data-processing machine, the
digital computer of today? Does its organization lend itself to the analysis
and manipulation of situations? It would certainly be surprising if it did,
since the machine is rooted in the sorting of punched cards (sequential data
processing) and arithmetic calculations (again sequential). This tendency is
reinforced by the fact that up to the present time all memories which were
technically practicable, such as magnetic core memories, have been passive.
This means that to extract data for analysis from a memory requires the
insertion of energy into the memory, and we can only insert a limited amount of
energy into the memory at one instant. The result is that all the time we are
blind to most of the information in the memory, and data has to be extracted
sequentially, bit by bit, from the memory. When analysing a situation we, like
a short-sighted man, would have only enough vision (energy) to see a very small
area at any one time.
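The penalty can be put in concrete terms (round numbers chosen only for illustration): to test every word of a passive memory against some criterion, a sequential machine must spend one memory cycle per word, whereas an active memory could in principle let every location test itself in the same step.

    # Sketch only: the cost of scanning a passive memory sequentially.
    memory = list(range(1024))          # 1024 words, one per address

    cycles = 0
    matches = []
    for address in range(len(memory)):
        word = memory[address]          # each fetch costs one memory cycle
        cycles += 1
        if word > 1000:                 # the criterion being tested
            matches.append(address)

    print(cycles, len(matches))         # 1024 cycles to find 23 matching words

    # An active memory would let all 1024 locations apply the same test at
    # once, so the whole search would take on the order of one step.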
The two requirements of sequential data
processing and sequential arithmetic calculation, backed up by the historical
accident that we had no active "parallel" memories (so that our
machine memories were oriented towards sequential work), created what Koestler called a "closed system". This then
proceeded to reject any evidence which did not indicate that all we needed to
do was process data sequentially. A "closed system" will reject new
developments, and oppose any attempt to profit from them. The unreasonably slow
development of semiconductor integrated circuit and large-scale integration
memory is not due to technical barriers; it must be caused by other
factors, such as the ideology of a "closed system".
The first integrated circuit memory, the
Honeywell Transitron 16 bit memory, has had all the weaknesses of a passive
memory (in this case core memory) forced into its design. The first emitter
coupled logic (as opposed to transistor/transistor
logic) 16 bit memory, made by Motorola, has been similarly "gelded".
However there is the possibility of progress, in spite of the fact that we seem
to be trapped in the "closed system" of sequential data processing.
This is because the modern data-processing machine has become so highly
developed that the problems of its internal organization have taken on, to a
small degree, the appearance of the larger problems of the real world outside.
The problem of efficient time sharing -
running more than one programme on the machine at the same time - is a
situation, albeit a rather simple one, calling for analysis and manipulation.
The problem of memory usage in a big machine is again a situation. The
data-processing machine, for reasons of speed, cost and efficiency, has a
hierarchy of memories, ranging from very large, slow ones to very small, fast
ones. The problem of where information is and should be stored is a situation
calling for analysis and manipulation.
Not surprisingly, the tool that is being
developed to deal with these situations is an active memory, a semiconductor
integrated-circuit memory. These have only recently become technically
practicable. They are generally described as "associative memory",
"content addressable memory" (CAM) or slave memory. There is almost
no literature on the subject, either in published papers or in text books.
My suggestion is that we should do all we can
to encourage designers to allow the content addressable memory to look outward
at the world outside the machine and try to deal with external situations, as
well as look inward and handle situations internal to the machine. I feel
confident that this will happen in time of its own
accord. But in the absence of a conscious effort on our part it will take 10
years instead of two, and our increasingly sophisticated and complicated
data-processing machines will meanwhile come to be regarded with justifiable
cynicism as dinosaurs which are very good at solving the wrong problem.
This article is not a discussion of content
addressable memories. However, it has to be said that the coming of active
memories ushers in a new era of memory, where we have a whole range of
possibilities. At the one extreme is the equivalent of passive (core) memory,
where only a small portion of the information can be analysed and manipulated
at one time. Then we proceed to content addressable systems, which have a small
amount of processing (analysing) capability distributed throughout the memory.
Then come the more complex types of memory, with a
greater degree of processing capability distributed around the memory.
I like to call the old passive type of memory
"simple memory", and memory with any distributed processing
capability "complex memory". We shall have to feel our way towards
more sophisticated, complex memory, and our minds cannot at present conceive
the direction of further development. But let us make an effort to get onto the
first rung of the ladder by pushing forward in the use of content addressable
memories, so that we have the chance, with more complex memories, to develop
further towards a machine with the capacity to analyse and manipulate
situations in general.