Hitting the Books: the Brooksian revolution that led to rational robots


We are experiencing an AI renaissance that would have seemed unimaginable just a few decades ago: automobiles are becoming increasingly autonomous, machine learning systems can compose prose nearly as well as human poets, and almost every smartphone on the market now comes equipped with an AI assistant. Michael Wooldridge, a professor at Oxford, has spent the last quarter century studying the technology. In his new book, A Brief History of Artificial Intelligence, Wooldridge guides readers on an engaging tour of the history of AI, its current capabilities, and where the field is headed.

Flatiron Books

Excerpted from A Brief History of Artificial Intelligence. Copyright © 2021 by Michael Wooldridge. Excerpted with permission from Flatiron Books, a division of Macmillan Publishers. No part of this excerpt may be reproduced or reprinted without written permission from the publisher.

Robots and rationality

In his 1962 book The Structure of Scientific Revolutions, the philosopher Thomas Kuhn argued that, as scientific understanding advances, there will be times when established scientific orthodoxy can no longer withstand the strain of evident failures. In such times of crisis, he argued, a new orthodoxy will emerge and replace the established order: the scientific paradigm will shift. By the late 1980s, the days of the expert systems boom were over and another AI crisis was looming. Once again, the AI community came under fire for overselling its ideas: too many promises, too few results. This time, the paradigm under challenge was not just the "knowledge is power" doctrine that had driven the expert systems boom, but the basic assumptions that had underpinned AI since the 1950s, symbolic AI in particular. The fiercest critics of AI in the late 1980s were not outsiders, however; they came from within the field itself.

The most eloquent and influential critic of the AI paradigm was the roboticist Rodney Brooks, born in Australia in 1954. Brooks' main interest was building robots capable of performing useful tasks in the real world. Through the early 1980s, he grew frustrated with the prevailing idea that the key to building such robots was to encode knowledge about the world in a form the robot could use as a basis for reasoning and decision-making. He took a professorship at MIT in the mid-1980s and began his campaign to rethink AI at its most basic level.

THE BROOKSIAN REVOLUTION

To understand Brooks' arguments, it helps to return to the Blocks World. Recall that the Blocks World is a simulated domain consisting of a tabletop on which a number of different objects are stacked; the task is to rearrange the objects in certain specified ways. At first glance, the Blocks World seems perfectly reasonable as a testbed for AI techniques: it looks like a warehouse environment, and I dare say exactly this point has been made in many grant proposals over the years. But for Brooks, and for those who came to adopt his ideas, the Blocks World was bogus for the simple reason that it is simulated, and the simulation glosses over everything that would be hard about a task like rearranging blocks in the real world. A system capable of solving problems in the Blocks World, however clever it might seem, would be of no value in a warehouse, because the real difficulty in the physical world comes from dealing with problems like perception, which the Blocks World ignores entirely. The Blocks World became a symbol of all that was false and intellectually bankrupt about the AI orthodoxy of the 1970s and 1980s. (This hasn't stopped research on the Blocks World, however: you can still find research articles using it to this day; I confess I have written some myself.)

Brooks had become convinced that meaningful advances in AI could only be achieved with systems situated in the real world: that is, systems directly embedded in an environment, perceiving it and acting on it. He argued that intelligent behavior can be generated without the explicit knowledge and reasoning promoted by knowledge-based AI in general and logic-based AI in particular, and he suggested instead that intelligence is an emergent property arising from the interaction of an agent with its environment. The point here is that when we contemplate human intelligence, we tend to focus on its more glamorous and tangible aspects: reasoning, for example, or problem solving, or playing chess. Reasoning and problem solving may well play a role in intelligent behavior, but Brooks and others argued that they are not the right place to start building AI.

Brooks also challenged the divide-and-conquer assumption that had underpinned AI from its very beginnings: the idea that progress in AI research could be made by breaking intelligent behavior down into its constituent components (reasoning, learning, perception) without any attempt to examine how those components worked together.

Finally, he stressed the naivety of ignoring the question of computational effort. In particular, he challenged the idea that all intelligent activity should be reduced to operations such as logical reasoning, which are computationally expensive.

As a student working on AI in the late 1980s, it seemed to me that Brooks was questioning everything I thought I knew about my field. It felt like heresy. In 1991, a young colleague returning from a major AI conference in Australia told me, eyes wide with excitement, about a screaming match that had broken out between doctoral students from Stanford (McCarthy's home institution) and MIT (Brooks'). On one side stood the established tradition: logic, knowledge representation, and reasoning. On the other, the outspoken and irreverent adherents of a new AI movement, not only turning their backs on the sacred tradition but loudly ridiculing it.

While Brooks was arguably the new movement's greatest advocate, he was by no means alone. Many other researchers reached similar conclusions, and while they didn't necessarily agree on the finer details, a number of themes recurred across their different approaches.

Most important was the idea that knowledge and reasoning were stripped of their role at the heart of AI. McCarthy's vision of an AI system that maintains a central symbolic, logical model of its environment, around which all intelligent activity revolves, was firmly rejected. Some moderate voices argued that reasoning and representation still had a role to play, though perhaps not a leading one; more extreme voices rejected them altogether.

It is worth exploring this point in a little more detail. Recall that McCarthy's vision of logical AI assumes that an AI system continually follows a particular loop: perceive its environment, reason about what to do, and then act. But a system that works this way is decoupled from its environment.

Take a second to stop reading this book and look around you. You might be in an airport departure lounge, in a cafe, on a train, at home, or lying beside a river in the sun. When you look around, you are not disconnected from your surroundings and the changes the environment is undergoing. You are in the moment. Your perception and your actions are integrated and in tune with your environment.

The problem is, the knowledge-based approach doesn't seem to reflect this. Knowledge-based AI assumes that an intelligent system operates through a continual perceive-reason-act loop: it processes and interprets the data it receives from its sensors; uses this perceptual information to update its beliefs; reasons about what to do; performs the action it selects; and then restarts its decision loop. But a system built this way is inherently decoupled from its environment. In particular, if the environment changes after it has been observed, that change makes no difference to our knowledge-based system, which stubbornly carries on as if nothing had happened. You and I are not like that. For these reasons, another key theme at the time was the idea that there should be a close coupling between the situation a system finds itself in and the behavior it exhibits.
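The contrast can be made concrete with a toy sketch (my illustration, not from the book; all names here are hypothetical). A "deliberative" agent perceives once, builds a complete plan, and then executes it blindly, while a "reactive" agent re-senses the world on every step. When the environment changes mid-execution, only the reactive agent notices:

```python
class World:
    """A toy environment: a target position that may move at any time."""
    def __init__(self, target):
        self.target = target


class DeliberativeAgent:
    """Perceive-reason-act done once up front: snapshot the world,
    build a full plan, then execute it without looking again."""
    def __init__(self, world, position=0):
        self.position = position
        target = world.target                      # perceive (once)
        direction = 1 if target >= position else -1
        self.plan = [direction] * abs(target - position)  # reason: full plan

    def step(self, world):
        if self.plan:                              # act: world is now ignored
            self.position += self.plan.pop(0)


class ReactiveAgent:
    """Situated behavior: sense the world afresh on every step,
    so action stays coupled to the current environment."""
    def __init__(self, position=0):
        self.position = position

    def step(self, world):
        if world.target > self.position:
            self.position += 1
        elif world.target < self.position:
            self.position -= 1


world = World(target=5)
deliberative = DeliberativeAgent(world)
reactive = ReactiveAgent()

for _ in range(3):                  # both head toward the target...
    deliberative.step(world)
    reactive.step(world)

world.target = 0                    # ...then the environment changes

for _ in range(2):
    deliberative.step(world)        # keeps executing the stale plan
    reactive.step(world)            # turns around immediately
```

After the change, the deliberative agent ends up at position 5, faithfully completing a plan for a world that no longer exists, while the reactive agent has already reversed course toward the new target. This is, in miniature, the decoupling problem the situated-AI critics were pointing at.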


