It is only a matter of time before humanity reaches the singularity. For us, the singularity means the creation of a machine that passes the Turing test. FinalSpark wants to be the first to build this machine, an Artificial Intelligence (AI).

Mankind has pursued the AI dream for about 70 years, and the most striking outcomes have been in smart automation, with applications such as robotics, image recognition and expert systems. Nevertheless, the gap between these accomplishments and a machine that passes the Turing test does not seem to have narrowed substantially.

We believe that today’s mainstream research strategies are not well suited to AI. We have identified two pitfalls in traditional research strategies that we believe are responsible for the lack of progress.

1) Research strategy in major programs worldwide relies primarily on a fundamental drive to understand the processes underlying human intelligence in order to replicate them. However, it is not a given that humans can model and understand human thinking, and it has not even been shown that this understanding is a prerequisite for replicating it artificially. The longing to understand may well create a large and unnecessary barrier to success.

2) A significant fraction of research projects are financed by industry and therefore target very specialized applications, with a requirement for short-term return on investment. The applications are generally defined for a very specific situation, and when the research succeeds, the best outcome is just yet another “smart automation” appliance. For instance, it is quite unlikely to see project objectives set as “Develop a system to discuss, exchange ideas and joke with a computer”. Less ambitious R&D objectives with a clearer return-on-investment potential, such as “Develop a system that minimizes the length of the path followed by a robot cleaning a house”, are far more common.

Of course, we cannot be sure that the slow progress of AI research is caused only by the two pitfalls above. The reasons may also be scientific rather than methodological. For instance, the human brain is essentially a biological multi-core system and therefore does not lend itself to modeling with continuous mathematics. If you want to compute the maximum acceptable load of a steel bar, you can write equations giving the stress at any point of the bar; using symbolic computation of primitive functions, it is even possible to compute how much it heats up under load. Of course, the bar is then assumed to be an “ideal, homogeneous bar”, simple enough to be modeled that way. If we had to take into account each and every defect of a real bar, such a simple model would not work. The point is that there is no such thing as an “ideal brain” that would naturally lend itself to modeling with continuous equations. We have to fall back on discrete equations, in which, for example, summation signs replace integrals. As a consequence, most of the powerful mathematics used in the greatest scientific models, such as quantum mechanics or general relativity, is of no help here. This argument deliberately leaves aside the ongoing research on quantum cognition, on which we have not yet formed an opinion.
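
To make “summation replacing integration” concrete, here is a generic, purely illustrative contrast (the symbols below are textbook notation, not taken from any FinalSpark model): an idealized, continuous description of the input arriving at a point of neural tissue would integrate over a continuum of connections, whereas a network of discrete neurons imposes a sum over individual synapses.

```latex
% Idealized continuous (field-like) description of synaptic input at position x:
I(x,t) = \int w(x,y)\, s(y,t)\, dy
% What a brain made of discrete neurons actually imposes: a sum over synapses j onto neuron i
I_i(t) = \sum_j w_{ij}\, s_j(t)
```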

A second reason is the limited computational power available. Since all neurons work intrinsically in parallel, we are left in the uncomfortable situation of competing against 100 billion processors. Yet that is not exactly where the problem lies, because we could cope if each neuron required only 1 Flops of computing power to simulate. The real problem is that we do not know how accurate the neuron simulation needs to be: if it is more like 100 MFlops per neuron, then it becomes impossible. Indeed, the best computers today deliver around 1E16 Flops, while 100E9 neurons × 100E6 Flops = 1E11 × 1E8 = 1E19 Flops would be required, about 1000 times more than the most powerful computer in the world provides. Even a non-real-time simulation would be too slow.
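
The arithmetic behind this estimate can be checked in a few lines; the sketch below simply reuses the round numbers quoted above (100 billion neurons, 100 MFlops per neuron in the pessimistic case, 1E16 Flops available), not measured figures.

```python
# Back-of-the-envelope check of the estimate above.
# All numbers are the round figures quoted in the text, not measurements.
neurons = 100e9           # ~100 billion neurons in a human brain
flops_per_neuron = 100e6  # pessimistic case: 100 MFlops to simulate one neuron
available = 1e16          # rough order of magnitude of today's best supercomputers

required = neurons * flops_per_neuron   # 1e11 * 1e8 = 1e19 Flops
print(f"required : {required:.0e} Flops")
print(f"available: {available:.0e} Flops")
print(f"shortfall: about {required / available:.0f}x")   # ~1000x
```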

There is no short-term remedy for these two problems, so we can only hope, and therefore assume, that the core of the difficulty lies in the methodological pitfalls 1 and 2 described above. We strive to stay away from these two pitfalls by adopting the following two approaches: 1) rejecting approaches that rely on understanding the human brain, and even favoring approaches that contribute little to improving that understanding; 2) focusing solely and strictly on our objective: having our machine pass the Turing test.

Right now, we are focusing on genetic programming (tree-based and linear) coupled with recurrent spiking artificial neural networks. We have also designed a new programming language, which we call CPL (Continuous Programming Language), tuned for mutation and crossover operators on instructions. Two test tasks are used to assess the performance of our approach: the inverted pendulum problem and a prime-number finder. The purpose of these simple tests is to validate the core approach by making it work on a simple problem, and then to scale it, under human supervision, to more complex problems. A rough sketch of the kind of evolutionary loop involved is given below.
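
To illustrate that loop, here is a minimal, hypothetical sketch of linear genetic programming in Python. It is not CPL: the register machine, the instruction set, the mutation and crossover operators and the toy fitness task are all our own illustrative assumptions, and the toy target function merely stands in for tasks such as the inverted pendulum or the prime-number finder.

```python
import random

# Hypothetical sketch of a linear genetic-programming loop (this is NOT CPL).
# Programs are flat lists of register instructions; evolution applies mutation
# and two-point crossover to instruction sequences. The toy fitness task
# (reproduce y = x*x + x) stands in for tests such as the inverted pendulum
# or a prime-number finder.

OPS = ("add", "sub", "mul")
N_REG = 4        # registers r0..r3; r0 holds the input and the output
PROG_LEN = 12    # fixed program length keeps crossover simple

def random_instr():
    # (operation, destination register, source register a, source register b)
    return (random.choice(OPS), random.randrange(N_REG),
            random.randrange(N_REG), random.randrange(N_REG))

def random_program():
    return [random_instr() for _ in range(PROG_LEN)]

def run(program, x):
    regs = [x] + [1] * (N_REG - 1)      # integer registers, no overflow issues
    for op, dst, a, b in program:
        if op == "add":
            regs[dst] = regs[a] + regs[b]
        elif op == "sub":
            regs[dst] = regs[a] - regs[b]
        else:                           # "mul"
            regs[dst] = regs[a] * regs[b]
    return regs[0]

def fitness(program):
    # Lower is better: squared error against the toy target y = x*x + x.
    total = 0
    for x in range(-5, 6):
        err = run(program, x) - (x * x + x)
        total += err * err
    return total

def mutate(program):
    child = list(program)
    child[random.randrange(PROG_LEN)] = random_instr()
    return child

def crossover(p1, p2):
    i, j = sorted(random.sample(range(PROG_LEN), 2))
    return p1[:i] + p2[i:j] + p1[j:]

def evolve(pop_size=200, generations=100):
    population = [random_program() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)
        if fitness(population[0]) == 0:     # perfect program found
            break
        survivors = population[: pop_size // 4]
        children = []
        while len(children) < pop_size - len(survivors):
            p1, p2 = random.sample(survivors, 2)
            children.append(mutate(crossover(p1, p2)))
        population = survivors + children
    return min(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best squared error:", fitness(best))
```

In practice the fitness function is where the real tasks would plug in: evaluating a candidate program would mean simulating the pendulum or scoring primality predictions instead of comparing against a toy polynomial.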
