Computers are designed to be programmable, not evolvable
It is anticipated that computers, if properly programmed, can achieve human-level artificial intelligence and eventually superintelligence. However, the computer architectures we have today are designed for maximum programmability. Source code or a hardware design is fed into a compiler or synthesis tool, which produces an executable or synthesized logic. The output is therefore a programmed system, in software or hardware, that can execute the given code or instructions. The point is this: the computer will only ever execute the program. It is designed to achieve maximum execution speed. In other words, the computer or IC is designed to possess speed intelligence only.
What does this imply for achieving true artificial intelligence? The requirement for artificial intelligence is that it has to learn, not just execute instructions; ideally, it should rewrite its own code. The system has to possess quality intelligence, not just speed intelligence, as Nick Bostrom defines the terms in his popular book “Superintelligence”. What happens now is that the best we can do on these programmable computers is to compile source code that emulates learning on top of a frozen program, right? My opinion is that some aspects of machine learning are innately inefficient on von Neumann architectures and other conventional hardware, and that the penalty may well worsen with every increase in machine learning complexity. In other words, machine learning cannot be truly efficient, because the computer does not see the program as intelligent, but merely as a finite set of instructions, however flexible.
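To make the “learning emulated on top of frozen code” point concrete, here is a minimal sketch (the perceptron, data, and names are my own illustration, not from any particular system). The program text below never changes while it “learns”; only mutable data does:

```python
# Minimal illustration: "learning" emulated on top of frozen code.
# The instructions below never change; only the data (weights) does.
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights for a 2-input perceptron; the logic itself is fixed."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred            # the only thing that "adapts"
            w[0] += lr * err * x1     # ...is this mutable state,
            w[1] += lr * err * x2     # not the program's instructions
            b += lr * err
    return w, b

# Learn logical AND: the code is byte-for-byte identical before and
# after training; only w and b differ.
w, b = train_perceptron([(0, 0), (0, 1), (1, 0), (1, 1)], [0, 0, 0, 1])
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

The hardware executes the same frozen instruction stream in both the untrained and trained states; all the “intelligence” lives in the data it shuffles around, which is exactly the limitation described above.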
There’s a chance that adding features of more generalized intelligence will create a computational bottleneck that greatly reduces “speed intelligence” to compensate for “quality intelligence”, cancelling out the combined effect of the two kinds of intelligence mentioned above. Or, to a lesser extent, the expected speedup of machine intelligence could end up, say, two orders of magnitude lower, since general learning would be carried out in an emulated environment. Convolutional neural network models of visual recognition are an example: they demand intensive computational power partly because of the inefficiency of the underlying hardware. A fortunate immediate solution is to use GPUs, which increase throughput by an order of magnitude. That is an architectural consideration at the hardware level, but it may still not be enough for higher cognitive tasks.
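A rough sketch of why convolution is so demanding on sequential hardware (the sizes and function name here are arbitrary illustration values, not from any real network): a naive 2D convolution is nothing but nested loops of multiply-accumulates, and the operation count grows with image area times kernel area. Each inner-loop step is independent of the others, which is exactly the kind of work a GPU parallelizes and a sequential CPU cannot:

```python
# Naive 2D convolution, counting multiply-accumulate (MAC) operations
# to show how the cost scales on a purely sequential machine.
def conv2d_naive(image, kernel):
    H, W = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = [[0.0] * (W - kw + 1) for _ in range(H - kh + 1)]
    ops = 0
    for i in range(H - kh + 1):           # slide the kernel over every
        for j in range(W - kw + 1):       # valid output position
            acc = 0.0
            for u in range(kh):
                for v in range(kw):
                    acc += image[i + u][j + v] * kernel[u][v]
                    ops += 1              # one MAC per inner iteration
            out[i][j] = acc
    return out, ops

# Even a tiny 32x32 input with one 3x3 kernel needs
# (32-3+1)^2 * 9 = 8100 MACs; real CNNs stack many such layers.
image = [[1.0] * 32 for _ in range(32)]
kernel = [[1.0] * 3 for _ in range(3)]
out, ops = conv2d_naive(image, kernel)
```

Every one of those 8100 accumulations could in principle run in parallel, which is why throwing GPU cores at the problem helps so much, and also why it is a workaround at the software-mapping level rather than a redesign of the underlying architecture.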
The bottom line is this: for the envisioned artificial general intelligence to evolve, assuming the correct AGI model is achieved and simulated, what must immediately follow is an architectural redesign of our computer systems aimed at evolvability, or “true learning”, and not just programmability. This would fully realize the speed intelligence that computers exhibit today, coupled with the high quality intelligence that humans possess.
What are your opinions on this? I want to know whether the hardware development of computers will ultimately be the stumbling block for the most intensive AI research today. Quantum computers and memristors are under development to push computers to better performance, but I think a more direct approach is needed. Or is it possible that, if evolvability is integrated into a system, speed will naturally and significantly suffer?