
Interview: artificial intelligence, autonomous cars and understanding the "music" of speech

Prof. David Harel, President of the Israel Academy of Sciences and Humanities, shares with the science website his research interests at the Weizmann Institute, among them computer science and the modeling of biological systems

Prof. David Harel, President of the Academy of Sciences. PR photo

Professor David Harel, currently President of the Israel Academy of Sciences and Humanities, received his doctorate from the Massachusetts Institute of Technology (MIT) in 1978, in a record time of one year and nine months. He has been a faculty member at the Weizmann Institute of Science since 1980, where he served as head of the Department of Computer Science and Applied Mathematics (1989-1995) and as Dean of the Faculty of Mathematics and Computer Science (1998-2004). He worked for periods at IBM Research in New York, spent sabbatical years at Carnegie Mellon, Cornell and Edinburgh universities, and co-founded the company I-Logix, which eventually became part of IBM. Harel has won many awards, including the Israel Prize in 2004 and the EMET Prize in 2010. In addition to the Israeli academy, he is a member of several academies around the world, including the US National Academy of Sciences, the US National Academy of Engineering, the Chinese Academy of Sciences and the British Royal Society.

Prof. Harel's main areas of specialization in the past were in theoretical computer science (logic, computability, automata and database theory); in recent decades he has focused mainly on software and systems engineering, on the modeling and reproduction of smell, and on the modeling and analysis of biological systems. He is the inventor of the Statecharts language, co-inventor of Live Sequence Charts (LSCs), and co-creator of the Statemate, Rhapsody, Play-Engine and PlayGo tools.

In an interview with the science website, he explains: "The modeling of biological systems that we carry out uses methods and tools taken from systems engineering and software engineering. In the last two years I have also been researching the 'music' of speech, or intonation (prosody), in collaboration with the linguist Dr. Teresa Biron. Today it is possible to record conversations between people, and there are programs that can produce an exact transcript of what was said, assuming the conversation is clear enough. These programs can even distinguish between my voice and yours. Give the transcript to another person, and he or she will get a pretty good sense of what was said. However, many things that pass between us in conversation go beyond the words themselves and are conveyed by the music of the speech. It is enough for me to emphasize one part of a sentence rather than another and the meaning already changes. I can say to you, for example, 'Who is knocking at the door' in a neutral tone, in a tone of wonder, or in fear and panic, and you will immediately distinguish between the three. But computing what we humans do so easily is very difficult.

"Of course, it is possible to record all the details of the sound wave, but no one yet knows how to extract from the recording the fact, obvious to any human listener, that a sentence was uttered with skepticism, surprise, hesitation or joy. The task is important because, as the number of tasks involving communication with computerized systems grows, this ability will become essential. When you call your health fund and a machine answers, and you say that your test results haven't arrived, that machine will have to distinguish between tones of voice - for example, whether you merely mean to report that the results haven't arrived or whether you are genuinely angry - in order to respond to you reasonably."
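One of the simplest acoustic cues that carries the "music" of speech is the pitch (fundamental frequency, F0) contour of an utterance. The following toy sketch - an illustration only, not the method described in the interview - labels a pitch contour as question-like or statement-like from the slope of a fitted line; the function name and thresholds are invented for the example:

```python
import numpy as np

def intonation_label(f0_hz):
    """Toy heuristic: label a pitch (F0) contour as 'question-like'
    (pitch rising toward the end) or 'statement-like' (pitch falling).
    f0_hz: 1-D array of fundamental-frequency samples in Hz."""
    t = np.arange(len(f0_hz))
    slope = np.polyfit(t, f0_hz, 1)[0]  # Hz per frame, from a linear fit
    return "question-like" if slope > 0 else "statement-like"

# "Who is knocking at the door" said flatly vs. with rising surprise:
falling = np.linspace(180, 140, 50)   # pitch drifts downward
rising = np.linspace(140, 220, 50)    # pitch climbs at the end
print(intonation_label(falling))      # statement-like
print(intonation_label(rising))       # question-like
```

Real prosody analysis is far richer - it involves energy, timing and pitch range, not just slope - which is exactly why the problem the interview describes remains open.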

Speech recognition - the next step. Illustration: depositphotos.com

General artificial intelligence

From this task the conversation moves to what is now called general artificial intelligence. Prof. Harel refers to an article he wrote together with Assaf Marron of his research group at the Weizmann Institute and Prof. Joseph Sifakis of Université Grenoble Alpes in France. The article is titled "Autonomics: In search of a foundation for next-generation autonomous systems".

In the introduction to the article, the researchers write: "Autonomous systems are replacing humans in a wide variety of tasks, and in the future these systems will be central and decisive for human society. They will come in many forms: various types of vehicles, medical equipment, industrial robots, agricultural robots, transportation-management systems and many more."

 "Many organizations are already busy trying to develop the next generation of autonomous systems, so that they will be reliable and financially viable. But the enormous complexity of such systems and their critical importance create new fundamental challenges. There is a crucial need to define a fundamental scientific and engineering infrastructure that will form the basis for the development of these systems. We call such an infrastructure "Autonomics", and believe that its availability can dramatically accelerate the assimilation and acceptance by the public of high-quality autonomous systems, for the benefit of human society."

Black box

"In recent years there has been a dramatic improvement in the methods of machine learning, which are at the base of the prevailing approaches of artificial intelligence. These methods use what are called deep neural networks, which consist of layers upon layers of 'parts' that simulate types of neurons in software. The inputs are received on one side of the network and the outputs (that is, the answers) come out on the other side. It is difficult to explain things on one foot, but it is important to note that the problem with this process is that there is not enough transparency regarding what is happening inside the network during the calculation. Unlike a normal computer program, where you can see the lines of code to be executed and follow what it is doing while running, here it is a system that learns through training, and the network itself changes while learning. For example, if we 'train' the network to distinguish between a dog and a cat by giving it a large number of images and showing it who is who, we still cannot easily prove that in the future it will know how to make the distinction well, with a high probability. It will also be difficult for us to explain to ourselves why she decided that this one was a cat and that one was a dog. For the same reasons, it is difficult to go inside the web and change it so that you can, for example, distinguish between a dog and a snake. This is no longer something that can be done through logical code changes. You have to teach the network from the beginning."

 "Of course, this is not only about non-critical and lighter tasks, such as distinguishing between a cat and a dog, but about weapons systems, airplanes, autonomous cars and robot systems that help in hospitals and perform actions that can be critical.

"My research is related to these problems. How can you verify the software, make sure that the system really does what you expect it to do, and this from the aspect of a thorough understanding of the system's operations while running, and not just relying on the final results."

Do you mean reconstructing the actions of the system, i.e., how it arrived at the result?

General artificial intelligence. Illustration: depositphotos.com

Professor Harel "Of course, it is possible to perform simulations of runs in different situations, thus 'reproducing' what the system would do in a real run in the field. But I didn't mean only that, but the possibility to prove mathematically, formally, that the system will indeed always do what you want it to do and not do what it should not do. In principle, it is possible to reach such proofs for software or models that are written in a classical way, with the help of sophisticated and complex verification methods. But in software produced by machine learning there is currently no way to mathematically prove that the system will correctly identify 99% percent of the pictures of cats shown to it. It is possible to run again and again and extract statistics from these runs, but there is still no "closed" mathematical proof that this will always be the case. And it's much worse in the really critical applications of machine learning. I'm not sure that any of us would be ready to have a pacemaker implanted using a system that learns without being able to prove that it makes the right decisions, so there is a problem."
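The gap between "statistics from runs" and a proof can be made concrete. One standard tool (my illustrative choice, not one named in the interview) is a Hoeffding-style confidence bound: from a finite number of test runs it yields only a probabilistic lower bound on the true accuracy, never the certainty a formal proof would give:

```python
import math

def accuracy_lower_bound(successes, trials, delta=1e-6):
    """Hoeffding bound: with probability at least 1 - delta, the true
    accuracy is at least the observed rate minus this margin.
    This is statistical evidence from repeated runs - NOT a proof
    that the system is always correct."""
    p_hat = successes / trials
    margin = math.sqrt(math.log(1 / delta) / (2 * trials))
    return p_hat - margin

# Example: 9,950 correct answers out of 10,000 test runs.
print(round(accuracy_lower_bound(9950, 10_000), 4))  # about 0.9687
```

Even after ten thousand flawless-looking runs, the bound stays strictly below the observed rate and says nothing about the specific inputs never tested - which is why critical systems demand the formal verification methods Prof. Harel describes.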

Prof. Harel emphasizes that "the problem is not with artificial intelligence itself, but with the methods of programming critical systems in which artificial intelligence is used so extensively that the 'programming' itself is done by the learning machine."

Does the use of artificial intelligence require considerations that go beyond the classical sciences?

 Professor Harel: "Yes. First, as soon as you attach the title 'artificial intelligence' to these systems, which use learning techniques, there will be many who will think that these are systems that understand everything like humans, for example the considerations of drivers and pedestrians on the road.

"Secondly, matters from the fields of social sciences and the humanities, such as psychology, social psychology, ethics, jurisprudence, etc., are definitely included here. The typical example of this issue is what is known as the 'car dilemma': "You are driving, and suddenly in a split second you have to decide whether you run over a young family who suddenly crosses the road, and which you did not see ahead of time, or whether you turn right (because there is a wall on the left) and run over Two elderly people standing on the sidewalk. In court you can explain the decision you made. You could say that maybe you saw the family for a fraction of a second and instinctively turned to the right, or maybe the other way around, that you saw the elderly couple who were unable to escape easily, and continued straight so as not to hurt them. In any case, you won't be in prison for life because you chose one way or the other. On the other hand, if an autonomous car has to make such a decision, they will immediately shut down all the cars for a year, start a long and tedious investigation, check the software in depth, and try to find out why the programmer did this or that, and why he didn't think about such dilemmas in advance."

So ethical considerations also come into play here, right?

The trolley problem, BBC

"Yes. We cannot easily apply the morals, ethics and life experience of the human driver to the future autonomous car, in such a way that the responsibility of the vehicle manufacturer will also be explained in terms of artificial intelligence. It is not enough for the car to know how to recognize traffic lights and stop before crossing a crosswalk. For it to be called an intelligent system it has to acquire something approaching general human intelligence and this is an extremely difficult task.

"My car has a system that knows how to keep a distance from the vehicle in front of me and stick to the driving lane, and I have been told many times, 'What a beautiful car! Automatically she will be autonomous. It doesn't seem like this will happen anytime soon, and not only because of the ethical issues: putting into the autonomous car's software everything that concerns the relationship between it and relevant human beings, pedestrians and other drivers, is extremely complex and complicated. For example, let's say I'm slowing down before a crosswalk and there are two people standing next to them talking to each other, and one of them suddenly slightly tilts her body to the side. As a human driver, I can usually tell intuitively if the swerve is to get something out of my pocket and show the other person or if she is about to cross the road. We currently have no idea how to give an autonomous system similar intuition. It is a task whose scientific and technological complexity is difficult to describe." Prof. Harel concludes.

The trolley problem - Wikipedia
