Video: MIT scientists make autopilot more human-like

Making self-driving cars capable of deciding like a human driver has been a longstanding goal for companies such as Waymo, GM Cruise and Uber. Intel's Mobileye offers Responsibility-Sensitive Safety (RSS), a mathematical model the company describes as a "common sense" approach: the autopilot is programmed for "good" behavior, such as yielding the right of way to other cars. NVIDIA, for its part, is actively developing Safety Force Field, a decision-making technology that watches for unsafe actions by surrounding road users, analyzing vehicle sensor data in real time. Now a group of scientists from the Massachusetts Institute of Technology (MIT) has joined this research, proposing a new approach built on simple GPS-like maps and visual data from cameras mounted on the car, so that the autopilot can navigate unfamiliar roads the way a person does.

People are exceptionally good at driving on roads they have never seen before. We simply compare what we see around us with what appears on our navigation app to work out where we are and where we need to go. Self-driving cars, on the other hand, find it extremely difficult to navigate unknown stretches of road. For each new location, the autopilot must carefully analyze the route, and automated control systems often rely on complex 3D maps prepared for them in advance by suppliers.

In a paper presented this week at the International Conference on Robotics and Automation, MIT researchers describe an autonomous driving system that "learns" and remembers the decision-making patterns of a human driver steering through a small area of a city, using only data from video cameras and a simple GPS-like map. The trained autopilot can then drive a driverless car in a completely new location, mimicking human driving.

Just like a human, the autopilot also detects any inconsistencies between its map and the features of the road it sees. This helps the system determine whether its position on the road, its sensors, or the map itself is wrong, so it can correct the vehicle's course.

To initially train the system, a human operator drove an automated Toyota Prius equipped with several cameras and a basic GPS navigation system, collecting data from local suburban streets with a variety of road layouts and obstacles. The system then successfully steered the vehicle along a pre-planned route in another, forested area designated for autonomous vehicle testing.

"With our system, you don't have to pre-train on every road," says study author Alexander Amini, an MIT graduate student. "You can download a new map for a car to navigate roads it has never seen before."

"Our goal is to create autonomous navigation that is resilient to driving in new environments," adds co-author Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory (CSAIL). "For example, if we train an autonomous vehicle to drive in an urban environment such as the streets of Cambridge, the system should also be able to drive smoothly in a forest, even if it has never seen such an environment before."

Traditional navigation systems process sensor data through multiple modules, each configured for a task such as localization, mapping, object detection, motion planning, or steering. For years, Rus's group has been developing "end-to-end" navigation systems that process sensor data and steer the car without any specialized modules. Until now, however, these models have been used strictly for following the road safely, without a specific destination. In the new work, the researchers extended their end-to-end system to drive from a starting point to a goal in a previously unknown environment. To do this, they trained the autopilot to predict the full probability distribution over all possible control commands at any given moment while driving.
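The paper itself specifies the actual network design; purely as a rough illustration of that last idea, here is a minimal PyTorch sketch of a network that maps a camera frame to a probability distribution over steering commands instead of a single steering value. The layer sizes, input resolution, and the 15-bin steering discretization are all assumptions invented for this example, not details of the MIT system.

```python
# Illustrative sketch only -- layer sizes, input resolution, and the
# 15-bin steering discretization are assumptions, not the MIT design.
import torch
import torch.nn as nn

N_STEERING_BINS = 15  # discretized steering angles, e.g. -0.7..0.7 rad

class SteeringDistributionNet(nn.Module):
    """CNN that outputs a probability distribution over steering commands."""
    def __init__(self, n_bins: int = N_STEERING_BINS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(48 * 4 * 4, 128), nn.ReLU(),
            nn.Linear(128, n_bins),  # logits over steering bins
        )

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        # Return a probability distribution over all steering commands.
        return torch.softmax(self.head(self.features(frame)), dim=-1)

model = SteeringDistributionNet()
probs = model(torch.randn(1, 3, 120, 160))   # one camera frame
print(probs.shape, probs.sum())              # (1, 15), sums to 1.0
```

Outputting a distribution rather than a single command is what later lets the system compare the map's instruction against how much probability the cameras assign to each maneuver.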

The system uses a machine learning model called a convolutional neural network (CNN), widely used for image recognition. During training, the network watches the human driver's behavior, correlating turns of the steering wheel with the curvature of the road as seen through the cameras and on its small map. In this way, the system learns the most likely steering commands for various driving situations, such as straight roads, four-way or T-junctions, forks and turns.
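Training as described amounts to behavior cloning: the network's predicted distribution is fitted to the steering the human driver actually applied. Below is a hedged sketch of one such training step, reusing SteeringDistributionNet and N_STEERING_BINS from the previous snippet; the angle range and the synthetic data stand in for a real pipeline.

```python
# Behavior-cloning sketch: fit the predicted distribution to the human
# driver's observed steering. Reuses SteeringDistributionNet and
# N_STEERING_BINS from the sketch above; the data here is synthetic.
import torch
import torch.nn as nn

def angle_to_bin(angle_rad: torch.Tensor, max_angle: float = 0.7,
                 n_bins: int = N_STEERING_BINS) -> torch.Tensor:
    """Map continuous steering angles to discretized bin indices."""
    frac = (angle_rad.clamp(-max_angle, max_angle) + max_angle) / (2 * max_angle)
    return (frac * (n_bins - 1)).round().long()

model = SteeringDistributionNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.NLLLoss()  # the model already outputs probabilities

# One synthetic batch standing in for (camera frame, human steering) pairs.
frames = torch.randn(8, 3, 120, 160)
human_angles = torch.empty(8).uniform_(-0.7, 0.7)

optimizer.zero_grad()
probs = model(frames)                                 # (8, N_STEERING_BINS)
loss = loss_fn(torch.log(probs + 1e-9), angle_to_bin(human_angles))
loss.backward()
optimizer.step()
print(f"negative log-likelihood: {loss.item():.3f}")
```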

"Initially, at a T-junction, there are many different directions the car could turn," Rus says. "The model starts by considering all of them, and as the CNN gets more and more data on what people do in particular situations on the road, it sees that some drivers turn left and others turn right, but nobody drives straight ahead. Going straight is ruled out as a possible direction, and the model learns that at T-junctions it can only move left or right."

While driving, the CNN also extracts visual features of the road from the cameras, which lets it predict upcoming route changes. For example, it recognizes a red stop sign or a broken line at the side of the road as signs of an approaching intersection. At each moment, it uses the predicted probability distribution over control commands to choose the most likely correct command.
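At inference time, choosing "the most likely correct command" from such a distribution can be as simple as taking its mode. A minimal sketch, again reusing the illustrative model above (the real system also conditions on the map, which is omitted here):

```python
# Inference sketch: choose the most probable steering command (the mode
# of the predicted distribution). Mirrors angle_to_bin from above.
import torch

def bin_to_angle(bin_idx: int, max_angle: float = 0.7,
                 n_bins: int = N_STEERING_BINS) -> float:
    """Map a bin index back to a steering angle in radians."""
    return -max_angle + 2 * max_angle * bin_idx / (n_bins - 1)

model.eval()
with torch.no_grad():
    probs = model(torch.randn(1, 3, 120, 160))   # current camera frame
steer = bin_to_angle(int(probs.argmax(dim=-1)))
print(f"steering command: {steer:+.2f} rad")
```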

It is important to note that, according to the researchers, their autopilot uses maps that are exceptionally easy to store and process. Autonomous control systems typically rely on lidar-generated maps, which take approximately 4,000 GB of data to store the city of San Francisco alone; for each new destination, the car must create and use new maps, which demands an enormous amount of memory. The map used by the new autopilot, by contrast, covers the entire world while occupying only 40 gigabytes.

During autonomous driving, the system also constantly compares its visual data with the map data and flags any inconsistencies. This helps the vehicle pin down where it is on the road, and it keeps the car on the safest path even when it receives conflicting input: if, say, the car is driving on a straight road with no turns and the GPS indicates that it should turn right, the car knows to continue straight ahead or to stop.
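The article does not spell out how this arbitration is implemented, so the following sketch is only a guess at the logic it describes: check how much probability mass the camera-driven distribution assigns to the maneuver the map requests, and fall back to a safe action when the two disagree. The bin grouping, the 10% threshold, and the fallback policy are all invented for illustration.

```python
# Hypothetical arbitration sketch: cross-check the route instruction
# from the coarse map against the camera-driven steering distribution.
# Bin groups, the 10% threshold, and the fallback policy are invented.
import torch

ROUTE_TO_BINS = {                 # which steering bins realize each maneuver
    "left":     list(range(0, 5)),
    "straight": list(range(5, 10)),
    "right":    list(range(10, 15)),
}

def arbitrate(probs: torch.Tensor, route_cmd: str,
              min_support: float = 0.10) -> str:
    """Follow the map only if the visual evidence supports its maneuver."""
    support = probs[0, ROUTE_TO_BINS[route_cmd]].sum().item()
    if support >= min_support:
        return route_cmd          # map and cameras agree
    # Cameras contradict the map (e.g. GPS says "right" on a straight road
    # with no visible turn): ignore the bad input and stay safe.
    best = max(ROUTE_TO_BINS, key=lambda c: probs[0, ROUTE_TO_BINS[c]].sum())
    return best if best == "straight" else "stop"

model.eval()
with torch.no_grad():
    probs = model(torch.randn(1, 3, 120, 160))
print(arbitrate(probs, route_cmd="right"))
```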

"In the real world, sensors fail," says Amini. "We want to make sure our autopilot is immune to various sensor failures by creating a system that can accept noisy inputs and still navigate the road correctly."

Source: 3dnews.ru
