One of the primary goals of artificial intelligence (AI) research is to understand how neural networks compare to the human brain in making decisions. For example, are artificial pattern recognition systems a viable path to more advanced, more reliable AI?
Currently, neural networks are being used to analyse vast volumes of data in applications such as face recognition, driverless cars and other autonomous machines, and language translation.
It seems AI researchers “have limited insight into why neural networks make particular decisions”, and this is where a major concern lies. If we don’t understand how AI and deep learning systems progress towards a decision, how can we trust them to have our best interests at heart? It seems there are no guarantees with deep learning.
To overcome the weaknesses of neural networks, AI developers are broadening their methods, applying Bayesian and Gaussian techniques. This means taking a more scientific, hypothesis-based approach to AI learning, rather than relying on data alone to drive conclusions.
Bayesian Approach
The learner starts with a hypothesis, then uses data analysis to update that hypothesis, following a more structured learning pathway.
Using a technique called probabilistic programming, AI developers are aiming to automate this learning and decision process.
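To make that concrete, here is a minimal sketch of a Bayesian update in Python (a hypothetical coin-flip example of my own, not one from the article). The prior encodes the starting hypothesis, the data updates it, and probabilistic programming tools automate exactly this kind of calculation for far more complex models.

```python
# Minimal Bayesian updating sketch: a Beta-Binomial coin-flip model.
# Hypothetical example; probabilistic programming tools automate this
# kind of inference for much richer models.

# Prior hypothesis: the coin is probably fair.
# Beta(alpha, beta) encodes our belief about the heads probability.
alpha, beta = 2.0, 2.0  # weakly peaked around 0.5

# Observed data: 10 flips, 7 heads.
heads, tails = 7, 3

# Bayesian update: with a conjugate prior, the posterior is again a Beta.
alpha_post = alpha + heads
beta_post = beta + tails

posterior_mean = alpha_post / (alpha_post + beta_post)
print(f"Posterior mean of P(heads): {posterior_mean:.3f}")  # ~0.643
```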
There are many ways that machines can learn, and they don’t all require big data. In fact, Geometric Intelligence’s founder and fellow researchers are developing systems that learn from limited data, more akin to the human approach. They claim this alternative approach “could exceed the powers of deep neural networks”. Rather than the brute-force approach adopted by many big data applications, small data systems rely on a conversational approach and incremental learning that constantly updates.
Another kind of statistical model, called a Gaussian process (GP), also plays a role in this more refined AI approach.
The Gaussian Process
In simple terms, a Gaussian process seeks the optimal solution to certain problems, and it underpins Bayesian optimisation. GPs are currently used on websites to determine which ads to serve and how pages are displayed. Crucially, GPs quantify uncertainty, helping humans recognise what they don’t know.
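As a rough illustration of that uncertainty-awareness (a hypothetical sketch using scikit-learn’s GP implementation, not a system described in the article), a GP fitted to a few data points returns not just a prediction but a standard deviation telling us how much to trust it:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# A handful of observations of an unknown function.
X_train = np.array([[1.0], [3.0], [5.0], [6.0]])
y_train = np.sin(X_train).ravel()

# Fit a GP with a standard RBF (squared-exponential) kernel.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0))
gp.fit(X_train, y_train)

# Predict at new points; the GP returns both a mean and a standard
# deviation, so it reports *how uncertain* each prediction is.
X_test = np.array([[2.0], [10.0]])
mean, std = gp.predict(X_test, return_std=True)
for x, m, s in zip(X_test.ravel(), mean, std):
    print(f"x={x:4.1f}  prediction={m:+.3f}  uncertainty={s:.3f}")
# Near the training data (x=2) the uncertainty is small; far from it
# (x=10) the GP honestly reports that it doesn't know.
```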
Designing a neural network is a time-consuming exercise in trial and error, coaxing a result from a sea of data. GPs and Bayesian optimisation help automate that task.
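As a sketch of what that automation can look like (a hypothetical example using the scikit-optimize library, my choice here rather than a tool named in the article), Bayesian optimisation fits a GP to the trials run so far and picks the next hyperparameter setting where improvement seems most likely:

```python
from skopt import gp_minimize
from skopt.space import Real

# Stand-in for an expensive training run: returns a validation loss
# for a given learning rate. In practice this would train a network.
def validation_loss(params):
    learning_rate = params[0]
    return (learning_rate - 0.01) ** 2  # pretend the best rate is 0.01

# Bayesian optimisation: gp_minimize fits a Gaussian process to the
# trials so far and chooses the next learning rate to evaluate where
# the GP expects the biggest improvement.
result = gp_minimize(
    validation_loss,
    dimensions=[Real(1e-4, 1e-1, prior="log-uniform", name="learning_rate")],
    n_calls=20,
    random_state=0,
)
print(f"Best learning rate found: {result.x[0]:.4f}")
```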
So now we can use machine learning to improve machine learning, which is why some researchers also believe GPs can play a vital role in the push toward autonomous AI. GPs help AI adapt to its environment very quickly, and they enable AI to learn in a more data-efficient way than neural networks. They also make problem identification possible, something that has not been resolved in neural networks, hence the term ‘black-box problem’.
These mathematical tools are helping AI researchers gain a better understanding of AI and, in doing so, a greater sense of control over it.
This article was inspired by wired.com