I just started a course on Artificial Intelligence (AI) this morning, and a few comments in the lecture really made me think.
What does it mean to build an artificial intelligence? Well, it seems there are a few main schools of thought on this. We could focus on building machines that:
- Emulate human thinking – machines that somehow go through the same thinking processes that people do. To do that we really have to understand the brain, which will require insights from a mix of cognitive science and computational neuroscience.
- Act like people – after all, who cares how they think; it's the action, the behaviour, that has to be human-like. This is close to Alan Turing's early test for machine intelligence.
- Think rationally – machines that have 'correct' thought processes. But what does it mean for a thought process to be correct? It's a prescriptive notion, and a hard one to pin down.
Our ability to write down the rules of logical deduction, and to incorporate uncertainty into them, is relatively fragile. In the end it's not about how we think, but about the actions we take. So the real aim of AI is to make machines that act rationally. They don't need to think like humans, or to emulate the same brain processes – they only need to achieve their goals optimally. The input to an AI is a goal, and rationality means achieving it in the best possible way. For rationality, only what you do matters; the thought process you go through to get there does not.
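To make that concrete, here is a minimal sketch (my own illustration, not from the lecture) of what acting rationally can mean computationally: given a goal expressed as utilities, and actions with uncertain outcomes, the agent simply picks the action with the highest expected utility. The scenario and numbers are hypothetical.

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def rational_action(actions):
    """Pick the action whose expected utility is highest."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

# Hypothetical goal: a delivery robot choosing a route.
# Utilities encode progress toward the goal; probabilities encode uncertainty.
actions = {
    "short_route": [(0.7, 10), (0.3, -5)],  # fast, but sometimes blocked
    "long_route":  [(1.0, 6)],              # slower, but certain
}

print(rational_action(actions))  # -> "long_route" (6.0 beats 0.7*10 + 0.3*-5 = 5.5)
```

Nothing here resembles human thought – the agent is 'rational' purely because its choice maximises progress toward the stated goal.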
We don’t need to replicate the human brain, even if we knew how the brain worked. We don’t need to distil detailed definitions of intelligence and consciousness, or to rationalise philosophical concepts. AI is more of a utility – we only need to describe the functional outputs required to attain a specific goal. Trying to recreate a complete logical chain was never going to work – there would always be one weak link, or a missing part, that breaks the chain. We don’t need to approach things the way humans do – we just need to produce the right outcomes.
When applying AI to robotics, simulating robots is much easier than actually deploying them – everything is more complex in reality than in simulation. But if we focus on behaviour rather than the intellectual computations that go on in a brain, human or otherwise, we are more likely to reach our goal.
From this I recognised that there are three main areas of interest in AI:
- Making decisions, and the behaviour that results from those decisions
- Reasoning when there is uncertainty
- Learning, from the perspective of building up knowledge and the scaffolding that supports rationality (see the sketch after this list)
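As a rough illustration of the last two points (again my own sketch, not course material): an agent can build up knowledge by updating a belief from observations using Bayes' rule, and that belief then feeds back into decisions like the route choice above. The scenario and probabilities are invented for the example.

```python
def bayes_update(prior, p_obs_given_h, p_obs_given_not_h):
    """Posterior P(H | observation) via Bayes' rule."""
    evidence = p_obs_given_h * prior + p_obs_given_not_h * (1 - prior)
    return p_obs_given_h * prior / evidence

# Toy question: is the short route usually blocked? Start from a 50/50
# prior and observe three clear days in a row.
p_blocked = 0.5
for _ in range(3):
    # P(clear day | blocked) = 0.2, P(clear day | not blocked) = 0.9
    p_blocked = bayes_update(p_blocked, p_obs_given_h=0.2, p_obs_given_not_h=0.9)

print(round(p_blocked, 3))  # belief the route is blocked drops to about 0.011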
Author: Gail La Grouw. Insight Mastery Program Director and Strategic Performance Consultant for Coded Vision Ltd.