Dave Morton - Jan 2, 2013:
Back in the early to mid 1980s, there was a lot of fanfare and hype over a computing project called “The Connection Machine”, in which Danny Hillis and others from MIT strung together a huge collection of single-bit microprocessors into one immense supercomputer.
I remember it was much touted for a while. I bought the book about it when it came out, but I never read the whole thing, and it ultimately turned out to be of little interest to me. Around that time I talked to a friend of mine who was an expert in the computer field, and he said that although the idea was good, the Connection Machine was badly engineered, whatever that means: probably either the physical structure was poor, or certain specific design decisions were.
Here’s another such multiprocessor project I learned about recently:
Economically, the system cannot support this growth in the cost of factories. At the beginning of this year, 1998, the construction of about 20 fabrication plants was started. Before the first devices have come out of any of them, at least four have been closed. The much-talked-about physics issues, revolving around the problems of shrinking transistors, will probably not be the limiting factor; economic issues are going to slow Moore’s Law before physics does, unless a new manufacturing paradigm is found.
So, given that there may be a window of opportunity to come in and replace integrated circuits, what other opportunities are there?
To research this, I decided to learn a lot more about computers and computer architecture. For over a year we had detailed discussion sessions on how computer architecture, chemistry and physics could work together. It took six months before we were speaking the same language, to the point where the words we used registered roughly equivalent meanings for the different participants.
The computer architects had made a machine that they called ‘Teramac’: it performs 10^12 operations per second, and it is a Multi-Architecture Computer. The details are given in Science, 12 June 1998, Volume 280, pp. 1716-1721. It was based on standard silicon integrated circuit technology. It was quite large, of refrigerator size, and it weighed about 400 pounds. It had the equivalent of 256 processors, but the logic operations were performed with look-up tables, i.e. memory, rather than standard processors.
Teramac contained 220,000 mistakes; in effect, it was built out of defective parts. Its inventors designed the chips they wanted, sent the design to the fabrication plant, got costings for so many good chips, and took the defective ones too; in this architecture the defective chips were every bit as usable as the non-defective ones. They did the same with several other components: in the end they had 800 chips and miles of wire, and the entire system was assembled as quickly and cheaply as possible. Thus, there were lots of manufacturing mistakes in both the components and the way they were connected. Even though it had a clock speed of only 1 MHz, for some applications it managed to outperform the best workstations available at the time, which operated at clock speeds of more than 100 MHz, by a factor of 100.
The Teramac team was surprised at how quick and efficient it could be. The communication network put in to handle the defects had also made the computer much more efficient at doing some things than the designers had expected. One issue: how long does it take to compile a program on this computer?
(“Nanocircuitry, Defect Tolerance and Quantum Computing: Architectural and Manufacturing Considerations”, R. Stanley Williams, pp. 63-69, in “Quantum Computing and Communications”, ed. Michael Brooks, 1999; the passage quoted above is from pages 67-68.)
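What struck me is the look-up-table part: doing logic by reading memory instead of running bits through processors is essentially what FPGAs do, and it is also what makes the defect tolerance work, since the “compile” step can just load truth tables into whichever table slots are known to be good. Below is a minimal Python sketch of that idea; it is not Teramac’s actual design, and the LutArray class, the defect rate, and the slot-allocation scheme are all illustrative assumptions on my part.

import random

def make_lut(func, n_inputs):
    # Tabulate an n-input boolean function: the input bits are packed
    # into an integer index, so evaluating the "gate" is a memory read.
    return [func(*[(i >> b) & 1 for b in range(n_inputs)])
            for i in range(2 ** n_inputs)]

class LutArray:
    # A pool of look-up-table slots, a fraction of which are defective,
    # standing in for Teramac's mass of imperfect chips and wires.
    def __init__(self, size, defect_rate, seed=0):
        rng = random.Random(seed)
        self.slots = [None] * size
        self.defective = [rng.random() < defect_rate for _ in range(size)]

    def map_function(self, func, n_inputs):
        # "Compile" step: find a known-good, unused slot and load the
        # truth table into it. This up-front mapping is where a
        # defect-tolerant machine spends its time.
        for i, bad in enumerate(self.defective):
            if not bad and self.slots[i] is None:
                self.slots[i] = make_lut(func, n_inputs)
                return i
        raise RuntimeError("no defect-free LUT slots left")

    def evaluate(self, slot, *bits):
        # Run time: a single table lookup, no ALU involved.
        index = sum(bit << pos for pos, bit in enumerate(bits))
        return self.slots[slot][index]

if __name__ == "__main__":
    array = LutArray(size=64, defect_rate=0.3)   # ~30% bad parts, still usable
    xor_slot = array.map_function(lambda a, b: a ^ b, 2)
    and_slot = array.map_function(lambda a, b: a & b, 2)
    # A half adder built purely from table lookups in known-good slots.
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "sum:", array.evaluate(xor_slot, a, b),
                  "carry:", array.evaluate(and_slot, a, b))

Seen this way, the closing question about compile time makes sense: all the real work of mapping a design onto the surviving good resources happens up front, before anything runs.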