Philosophers have been considering how the human mind works, and whether non-humans can have minds, since the dawn of time (sorry about the platitude, but I have to start somewhere). On one side of the argument are those who believe that machines can do everything humans can; on the other, those who believe that the complex and sophisticated behaviour commonplace in humans (e.g. love, creativity, morality) will never be attainable by machines.
The objective of the field of Artificial Intelligence is to create machines that can perform tasks which require intelligence when performed by humans. Consequently, the question of whether machines can think for themselves becomes a vitally important one. Fundamentally, considering this question forces us to define very precisely what ‘intelligence’ and ‘thinking’ are, and to ask whether these concepts only make sense in the context of the human brain.
An early yet significant paper on machine intelligence was published by Alan Turing in 1950, entitled “Computing Machinery and Intelligence”. Turing is the iconic father of AI and computer science; his achievements have shaped the discipline ever since. Turing wisely didn’t attempt to define what ‘intelligence’ is or how it could be embodied within a machine. He instead defined a test to see if a machine could fool a human into believing that it, too, was human. He also described an abstract machine able to manipulate symbols on an infinite tape as it moves from state to state.
(Statue of Alan Turing, by Stephen Kettle)
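The abstract machine Turing described can be sketched in a few lines of Python. The simulator below, and the example machine that flips the bits of a binary string, are purely illustrative inventions (the names, transition table, and halting convention are my assumptions, not anything from Turing’s paper):

```python
# Minimal Turing machine simulator (illustrative sketch).
# A machine is a transition table: (state, symbol) -> (new_state, new_symbol, move).

def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=1000):
    """Simulate a Turing machine; the tape is stored as a position -> symbol dict."""
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, blank)
        if (state, symbol) not in transitions:   # no applicable rule: halt
            break
        state, cells[head], move = transitions[(state, symbol)]
        head += 1 if move == "R" else -1
    # Read the tape back in order, dropping blanks at the ends
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example machine: flip every bit, moving right, then halt at the first blank.
flip_bits = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
}

print(run_turing_machine(flip_bits, "1011"))  # -> 0100
```

Despite its simplicity, this model is the theoretical basis for what any digital computer can compute.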
The field of AI as we know it today was established by successive generations of groundbreaking scientists. Some of the key first-generation founders are introduced below:
- Alan Turing
- Warren McCulloch and Walter Pitts defined a neural network model in which each neuron in the network has a binary (on/off) state. They showed that their neural networks were computationally equivalent to Turing Machines, and hence could become the medium through which AI could be developed.
- John von Neumann, a friend and colleague of Alan Turing, helped design some of the earliest stored program machines and formalised computer architecture concepts which are still used to the present day.
- Claude Shannon demonstrated (through the example of a computer playing chess) that a brute-force approach to AI would never be computationally feasible, and that a heuristic approach was needed.
- John McCarthy was instrumental in organising the 1956 AI workshop at Dartmouth College, sponsored by IBM. During this workshop the field of AI was established as a science. Though attendance at the workshop was sparse, the next 20 years of AI research would be dominated by its attendees and their students. He also designed LISP, one of the oldest computer languages.
- Marvin Minsky worked on a non-logic-based approach to knowledge representation and reasoning.
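The McCulloch–Pitts binary neuron mentioned above is simple enough to sketch directly. Each neuron sums its weighted binary inputs and fires (outputs 1) only if the sum reaches a fixed threshold; the logic-gate weights and thresholds below are the classic textbook examples, chosen here for illustration:

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts neuron: fires (returns 1) iff the weighted sum meets the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Classic logic gates, each built from a single neuron:
AND = lambda x1, x2: mp_neuron([x1, x2], [1, 1], threshold=2)
OR  = lambda x1, x2: mp_neuron([x1, x2], [1, 1], threshold=1)
NOT = lambda x:      mp_neuron([x],      [-1],   threshold=0)

print(AND(1, 1), OR(0, 1), NOT(1))  # -> 1 1 0
```

Because such neurons can implement AND, OR, and NOT, networks of them can realise any Boolean function, which is what links this model to the Turing Machine.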
The early days of AI focused on simulating cognitive processes by defining general methods for solving a wide range of problems. The emphasis was on general-purpose search and reasoning approaches. Unfortunately, this resulted in weak performance from the AI programmes produced. However, the great minds attracted to the field helped set the foundations of knowledge representation, learning algorithms, neural computing, and natural language processing.
This post is based on the first chapter of “Artificial Intelligence – A Guide to Intelligent Systems” (2nd Edition) by Michael Negnevitsky.