
Looking back, significant Artificial Intelligence breakthroughs have been promised 'in 10 years' for the past 60 years. By 2020, we believed we'd have flying cars, but no, all we have is a robot called Sophia who holds citizenship and wants to start a family. This isn't to discourage development in A.I, but our expectations of where it would be by now just don't quite match reality yet.

In 1936, Alan Turing published 'On Computable Numbers, with an Application to the Entscheidungsproblem' (Turing, 1936), now recognised as the foundation of computer science. Within the paper, Turing analysed what it meant for a human to follow a definite method or procedure to perform a task. For this purpose, he invented the idea of a 'universal machine' that could decode and perform any set of instructions. A few years later, during the Second World War, Turing, with help from other mathematicians, developed a new machine, the 'bombe', used to crack Nazi ciphers. Turing also worked on other technical innovations during the war, including a system to encrypt and decrypt spoken telephone conversations. Although it was successfully demonstrated with a recorded speech by Winston Churchill, it was never used in action, but it gave Turing hands-on experience of working with electronics. After the war, Turing designed the 'Automatic Computing Engine' (Negnevitsky, 2010), an early design for a computer that stored its programs in memory. In 1950, Turing published a philosophical paper asking "Can machines think?" (Turing, 1950), along with the idea of an 'imitation game' for comparing human and machine outputs, now called the Turing Test. This paper remains his best-known contribution to the field of A.I. However, this was at a time when the first general-purpose computers had only just been built, so how could Turing already be questioning artificial intelligence?
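
To make the idea of a 'universal machine' a little more concrete, here is a minimal sketch of a Turing-style machine in Python: a tape of symbols, a read/write head and a table of rules. The rule table below (which simply flips 1s and 0s) is invented purely for illustration and is not taken from Turing's paper.

    # A tape, a head position, a current state and a rule table: each rule says
    # what to write, which way to move the head and which state to enter next.
    RULES = {
        ("scan", "0"): ("1", 1, "scan"),   # write 1, move right, keep scanning
        ("scan", "1"): ("0", 1, "scan"),   # write 0, move right, keep scanning
        ("scan", "_"): ("_", 0, "halt"),   # blank cell: stop
    }

    def run(tape, state="scan", head=0):
        cells = list(tape) + ["_"]
        while state != "halt":
            write, move, state = RULES[(state, cells[head])]
            cells[head] = write
            head += move
        return "".join(cells).rstrip("_")

    print(run("10110"))  # prints 01001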

It was only in 1956 that John McCarthy, an American computer scientist, coined the term "artificial intelligence". McCarthy defined A.I as "the science and engineering of making intelligent machines" (Peart, 2018) as the topic of the Dartmouth Conference, the first conference devoted to the subject. This conference marked the beginning of A.I research. Top scientists debated how to tackle A.I; cognitive scientist Marvin Minsky dominated with his top-down approach: pre-programming a computer with the rules that govern human behaviour. Minsky and McCarthy then won substantial funding from the US government, which hoped that A.I might give it the upper hand in the Cold War.

Considered by many to be the first successful A.I programming language, LISP dates back to 1958. It was originally created as a practical mathematical notation for computer programs, but it quickly became the favoured programming language for artificial intelligence research. LISP also had a critical influence far beyond A.I on the theory and design of languages, including functional programming languages as well as object-oriented languages such as Java, which we still use today. Another well-known early development in A.I was the General Problem Solver (GPS), which was capable of solving a wide array of problems that challenged human intelligence; more importantly, it solved these problems by simulating the way a human being would solve them.

Come 1969, A.I was lagging far behind the predictions made by its advocates, even though the first general-purpose mobile robot, named Shakey, was able to make decisions about its own actions by reasoning about its surroundings. Although Shakey was clever, building a spatial map of what it saw before moving, the robot was painfully slow; a moving object in its view could easily bewilder it, sometimes stopping it for an hour while it planned its next move.
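
To give a flavour of that 'map first, then plan' approach, here is a minimal sketch in Python of planning a route across a small occupancy grid with breadth-first search. The grid and coordinates are invented for illustration; this is not Shakey's actual software.

    from collections import deque

    GRID = [            # 0 = free space, 1 = obstacle
        [0, 0, 0, 1],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
    ]

    def plan(start, goal):
        rows, cols = len(GRID), len(GRID[0])
        frontier = deque([[start]])      # each entry is a path so far
        seen = {start}
        while frontier:
            path = frontier.popleft()
            r, c = path[-1]
            if (r, c) == goal:
                return path
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < rows and 0 <= nc < cols and GRID[nr][nc] == 0 and (nr, nc) not in seen:
                    seen.add((nr, nc))
                    frontier.append(path + [(nr, nc)])
        return None                      # no route exists

    print(plan((0, 0), (2, 3)))  # shortest obstacle-free route, cell by cell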

By the early 1970s, millions had been spent on A.I, with little to show for it. A.I was in trouble, and the Science Research Council of Britain commissioned Professor Sir James Lighthill to review the state of affairs within the A.I field. The council was concerned that it was not seeing much return on its funding and wanted to know whether it was advisable to continue. Lighthill reported: "In no part of the field have the discoveries made so far produced the major impact that was promised." Which was fair; back in the 50s Turing himself had predicted that machines would be able to pass his test by 2000, and other A.I researchers were promising to build all-purpose intelligent machines on a human-scale knowledge base by the 80s. The 70s, however, brought the big realisation that the problem domain for intelligent machines had to be sufficiently restricted, which is a development in itself, really.

Then came the 80s, and what did we get? The expert system: a big step for Artificial Intelligence. In A.I, an expert system is a computer system that emulates the decision-making of a human expert; put simply, it is software that attempts to act like a human expert in a particular subject area (a toy version is sketched below). The first successful commercial expert system began operation at the Digital Equipment Corporation, helping to configure orders for new computer systems; by 1986 it was saving the company an estimated $40 million a year.
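
As a minimal sketch of that idea, here is a tiny rule-based system in Python: a set of known facts plus if-then rules, applied repeatedly until nothing new can be inferred. The rules and facts are invented for illustration and are far simpler than anything in the real Digital Equipment Corporation system.

    # Each rule pairs a set of required facts with a conclusion to add.
    RULES = [
        ({"order_includes_disk_drive"}, "needs_disk_controller"),
        ({"needs_disk_controller", "cabinet_has_free_slot"}, "fit_controller_in_cabinet"),
        ({"needs_disk_controller", "no_free_slot"}, "add_expansion_cabinet"),
    ]

    def infer(facts):
        facts = set(facts)
        changed = True
        while changed:                   # keep firing rules until nothing changes
            changed = False
            for conditions, conclusion in RULES:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(infer({"order_includes_disk_drive", "cabinet_has_free_slot"}))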

At the beginning of the 90s, the roboticist Rodney Brooks published a paper: Elephants Don't Play Chess. Brooks argued that the top-down approach was wrong and that the bottom-up approach was more effective. The bottom-up strategy, also known as behaviour-based robotics, is a style of robotics in which robots are programmed with many independent behaviours that are coupled together to produce coordinated action (see the sketch after this paragraph). The paper helped drive a revival of the bottom-up approach, though that doesn't mean supporters of top-down A.I weren't going to succeed too. In 1997, IBM's chess computer, Deep Blue, shocked the world of chess, and many in computer science, by defeating Garry Kasparov in a six-game match. Capable of evaluating an average of 200,000,000 positions per second, it reinforced the belief that chess could serve as the ultimate test of machine intelligence; as Martin Ford put it, 'computers are machines that can — in a very limited and specialised sense — think' (Ford, 2017). Although this was a revolutionary moment for A.I, it also triggered alarmist fears of an era when machines would take over, excel at human mental processes and render us redundant.
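
Here is a minimal sketch, in Python, of how a handful of independent behaviours can be coupled into a single action through a simple priority order. The sensor readings, behaviour names and actions are all invented for illustration; real behaviour-based controllers are considerably richer.

    import random

    # Each behaviour looks at the sensors and either proposes an action or stays quiet.
    def avoid_obstacle(sensors):
        return "turn_left" if sensors["obstacle_ahead"] else None

    def recharge(sensors):
        return "seek_charger" if sensors["battery"] < 0.2 else None

    def wander(sensors):
        return random.choice(["forward", "turn_right"])

    BEHAVIOURS = [avoid_obstacle, recharge, wander]   # highest priority first

    def decide(sensors):
        for behaviour in BEHAVIOURS:                  # first behaviour to speak wins
            action = behaviour(sensors)
            if action is not None:
                return action

    print(decide({"obstacle_ahead": True, "battery": 0.9}))   # turn_left
    print(decide({"obstacle_ahead": False, "battery": 0.1}))  # seek_charger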

Rodney Brooks' company, iRobot, created the first commercially successful home robot in 2002: an autonomous vacuum cleaner. Selling around 1 million units annually and still around today, the Roomba combines a powerful cleaning system with intelligent sensors to move seamlessly through homes, adapting to its surroundings to thoroughly vacuum your floors. Admittedly, cleaning a carpet was a far cry from the early A.I pioneers' ambitions, but it is still revolutionary for those who may not have the time, or are physically unable, to hoover up themselves. iRobot has even helped US and international coalition forces in Iraq and Afghanistan by providing them with bomb-disposal robots: of the 6,000 PackBots shipped, almost 4,500 are with the US Armed Forces, and the remainder are spread across 35 partner nations, including the UK and countries in the Middle East and Asia Pacific.

Speech recognition came next, and Google managed it with 80% accuracy in 2008. Speech recognition has always come down to the availability of data and the ability to process it efficiently. Google's app adds to its analysis the data from billions of search queries, to better predict what you're probably saying (a toy version of that idea is sketched below). Over time, like everything else, there have been improvements; Google's English Voice Search system now incorporates 230 billion words from actual user queries and is nearly on par with humans, having achieved a 95% word accuracy rate for the English language as of May 2017.
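
As a minimal sketch of predicting what someone is probably saying from past queries, here is a tiny bigram model in Python. The query list is invented for illustration; real systems combine acoustic models with language models built from billions of words.

    from collections import Counter, defaultdict

    queries = [
        "weather in london",
        "weather in paris",
        "weather forecast tomorrow",
        "flights to paris",
    ]

    # Count which word tends to follow which.
    next_words = defaultdict(Counter)
    for q in queries:
        words = q.split()
        for current, following in zip(words, words[1:]):
            next_words[current][following] += 1

    def predict(word):
        counts = next_words.get(word)
        return counts.most_common(1)[0][0] if counts else None

    print(predict("weather"))  # "in" (seen twice, versus "forecast" once)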

Remember the unbeaten Turing Test? Well, in 2014 a program named Eugene Goostman fooled judges 33% of the time into thinking it was actually a 13-year-old boy, and so was declared to have passed the test. However, very few A.I experts saw this as a defining moment, especially alongside other developments such as Google's billion-dollar investment in driverless cars and Skype's launch of real-time voice translation. We were now seeing intelligent machines become an everyday reality that could change all of our lives…

Where are we today?
The machines haven't taken over. Yet. However, they are much more involved in our day-to-day lives now. They affect how we live, work and entertain ourselves; from voice-powered personal assistants like Siri and Alexa, to more underlying and fundamental technologies such as behavioural algorithms, suggestive searches and self-driving vehicles. In 1949, Popular Mechanics said: 'Computers in the future may weigh no more than 1.5 tons.' Yet computers today are largely invisible. They're everywhere: in walls, tables, chairs, desks, clothing, jewellery and bodies. In 1965, Gordon Moore predicted that "the number of transistors incorporated in a chip will approximately double every 24 months", and Moore's Law has held true; computers now fit in your pocket, all while becoming far more powerful. Technology is only going to get better too. As Ray Kurzweil said: 'the only way for our species to keep up the pace will be for humans to gain greater competence from the computational technology we have created, that is, for the species to merge with its technology' (Kurzweil, 1999), meaning we should embrace the new technology rather than doubt it.

Being in the 21st century, we can now leave "real" reality and enter a virtual reality environment. A person using Virtual Reality puts on a 3D headset and is then able to "look around" the artificial world, and with high-quality VR you're able to move around in it and interact with virtual features or items. This has changed not just the gaming world but even the health care industry, which uses the computer-generated images for diagnosis and treatment. VR is also helping surgeons, as it uses actual images from CAT scans or ultrasounds to construct 3D models of a patient's anatomy. The models help determine the safest and most efficient way to locate tumours, place surgical incisions or practise difficult procedures ahead of time (Science, 2018).
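
As a back-of-the-envelope illustration of the Moore quote above, here is a short Python calculation of what doubling every 24 months implies. The 1971 starting figure of roughly 2,300 transistors (a commonly cited count for the Intel 4004) is used purely as an assumption.

    def projected_transistors(start_count, start_year, year, months_per_doubling=24):
        doublings = (year - start_year) * 12 / months_per_doubling
        return start_count * 2 ** doublings

    # Roughly 2,300 transistors in 1971, doubled every 24 months thereafter.
    for year in (1971, 1981, 1991, 2001, 2011, 2021):
        print(year, f"{projected_transistors(2300, 1971, year):,.0f}")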