Since its inception, FinalSpark has been actively doing fundamental research using digital computers. We now believe relying exclusively on digital computing to reach a General Artificial Intelligence is a dead end, and here is why:
From the beginning of AI research, say 50 years ago, the leading research strategy has fundamentally relied on digital computing (and so did we). Very few alternative approaches have been actively pursued. Among those alternatives, one could cite analog electronic implementations of ANNs, or liquid state machines (also called reservoir computing) built with real water (or with biological neurons…).
Three basic reasons can be given to explain why AI research mainly relies on digital computers:
- Flexibility: digital computers make experiments easy to design and run, which makes them an effective tool if the goal is to write publications
- Proven track record: digital computing has proven effective in many automation tasks, and cognitive processes can be considered yet another automation task
- Understandability: the computer engineer understands what is going on when writing and running a program. This particular point can be debated, however: understandability has already been lost for some classes of approaches. For instance, a deep learning artificial neural network is usually seen as a black box, since its internal computations are too complex for a human being to follow. Another example is genetic programming, where the algorithm itself is invented by the computer and can sometimes hardly be understood by a human being
This research has produced a number of useful tools, like Bayesian networks, fuzzy logic or artificial neural networks, to name a few… but they are just that: tools. Intelligence itself remains elusive: each time a cognitive process is successfully automated (playing chess or Go, recognizing images), we realize we have not gotten any closer to intelligence.
Let’s consider the most successful tool used in AI, artificial neural networks: nobody serious in the field regards them as a realistic approach to building a GAI.
Why is that?
We believe there are two unsolved problems with ANNs:
a) Lack of a training algorithm: yes, we can simulate 100 billion neurons, but we don’t know how to connect them to achieve any high-level cognitive process. Even worse, it is not even clear which ANN models should be used, so we are left with many alternatives that researchers have been testing for decades now:
- which training strategy? (global, local, supervised, error-based, Wissner-Gross-like, etc.)
- which connectivity model? (electrical, chemical, quantum, etc.)
- which neuron model? (sigmoid activation, spiking, biologically realistic, etc.)
Actually, it is not even known whether choosing one of these models over another matters at all! (At least we know that neurons need to exhibit non-linear behavior.) The sketch below contrasts two of the neuron models from this list.
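To make the neuron-model question concrete, here is a minimal sketch (our illustration, with arbitrary parameters, not code we claim anyone runs in production) contrasting two of the alternatives above: a classical sigmoid-activation neuron and a simple leaky integrate-and-fire spiking neuron. Both are non-linear, which is the only property we can safely say is required.

```python
import math

def sigmoid_neuron(inputs, weights, bias=0.0):
    """Classical rate-based neuron: weighted sum passed through a sigmoid."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

class LIFNeuron:
    """Leaky integrate-and-fire neuron: a minimal spiking model.
    Parameter values are illustrative, not biologically calibrated."""
    def __init__(self, tau=20.0, threshold=1.0, dt=1.0):
        self.v = 0.0              # membrane potential
        self.tau = tau            # leak time constant (ms)
        self.threshold = threshold
        self.dt = dt

    def step(self, input_current):
        # Leaky integration of the input current
        self.v += self.dt * (-self.v / self.tau + input_current)
        if self.v >= self.threshold:  # fire and reset
            self.v = 0.0
            return 1                  # spike
        return 0                      # no spike
```

Note that the two models do not even share an interface: one maps a vector of reals to a real value, the other maps a current trace to a spike train. This is part of why results obtained with one family transfer so poorly to the other.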
Worst of all, brain dynamics are so mysterious that 30 years of work on neurodegenerative diseases have basically led to nothing. The situation is such that some neurobiologists are even starting to wonder whether the importance of neurons has been overestimated (and neurons have been the focal point of AI since the beginning of connectionism), and whether we should rather focus on glial cells… (see for instance L’Homme glial by Yves Agid and Pierre Magistretti).
b) The computing power available today is just not sufficient: the problem with this argument is that it was the same 30 years ago, and in between, computing power has increased by several orders of magnitude. Based on our own experience at FinalSpark, computing power is a constant issue: even with a simple neuron model and simple training rules, as soon as you start using recurrent topologies (which we can at least assume are more biologically plausible than feedforward ones…) and looping over a few parameters, you end up waiting a long time (we use our 5 kW HP server of 16 blades) for ridiculously small networks of a few thousand neurons.
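A rough back-of-the-envelope calculation shows why recurrent topologies hurt so much: with dense recurrence, the cost of one simulation step grows with the square of the network size, and parameter sweeps multiply that cost again. All numbers below are illustrative assumptions, not measurements from our server.

```python
# Back-of-the-envelope cost of simulating a dense recurrent network.
# Every number here is an assumption chosen for illustration.

n_neurons = 5_000            # a "ridiculously small" network
n_steps = 1_000_000          # simulation steps per run
n_param_combos = 1_000       # looping over a few parameters
flops_per_synapse = 2        # one multiply + one add per connection

# Dense recurrence: every neuron connects to every other neuron.
flops_per_step = n_neurons ** 2 * flops_per_synapse
total_flops = flops_per_step * n_steps * n_param_combos

# Assume ~1e11 FLOP/s sustained: training rules and memory traffic
# keep real workloads far below a blade server's peak throughput.
sustained_flops = 1e11

print(f"Total work: {total_flops:.1e} FLOPs")
print(f"Wall-clock: {total_flops / sustained_flops / 3600:.0f} hours")
```

Under these assumptions the sweep takes on the order of 140 hours. Doubling the network size quadruples the work, and scaling from a few thousand neurons to the brain’s 10^11 multiplies it by a factor of roughly 10^14, which is why raw hardware progress alone has never closed the gap.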
So given all that, we decided to find a radical solution that solves points a) and b) simultaneously (while instantly creating a myriad of new issues…): instead of simulating artificial neural networks, let’s culture biological neural networks and interact with them using electrophysiology technologies. This way we hope to get the built-in training capabilities for free, along with a realistic system offering virtually boundless, scalable computing power: remember, dear reader, your 100 billion neurons consume only 20 W while reading these lines…
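To put a number on that efficiency gap, here is a worked comparison using only the figures quoted in this post: the 5 kW server, our few-thousand-neuron simulations, and the brain’s roughly 100 billion neurons running on about 20 W.

```python
# Energy-per-neuron comparison, using the figures quoted above.
# The simulation-side numbers are our own rough estimates, not benchmarks.

# Biological side: ~100 billion neurons running on ~20 W.
brain_neurons = 1e11
brain_power_w = 20.0
brain_w_per_neuron = brain_power_w / brain_neurons        # ~2e-10 W/neuron

# Digital side: a 5 kW server simulating a few thousand neurons.
server_power_w = 5_000.0
simulated_neurons = 5_000
server_w_per_neuron = server_power_w / simulated_neurons  # ~1 W/neuron

print(f"Brain:  {brain_w_per_neuron:.1e} W per neuron")
print(f"Server: {server_w_per_neuron:.1e} W per neuron")
print(f"Ratio:  {server_w_per_neuron / brain_w_per_neuron:.1e}x")
```

Even granting the crudeness of the comparison, the digital side spends on the order of a billion times more power per neuron, and the biological side comes with its connectivity and learning machinery already built in.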