Driverless driving develops in four phases. Which stage have we reached now?

Recently, three patents related to Google's self-driving car have been made public: one for deciding when to switch out of driverless mode, one for alerting pedestrians, and one for gesture control inside the car. Google's driverless car is getting closer and closer to us.

Since Google launched its driverless car project, the industry has followed it with great interest. Driverless cars were supposed to be the territory of the auto giants, yet Google, an internet company, has stepped into the field, and the surfacing of its patents suggests the project is not far from success. How much are these patents really worth? What will future driverless cars look like? Let's take a look.


Google's driverless car

First, the origin of driverless cars

Driverless cars did not fall from the sky. As the automobile industry developed and electronic control technology advanced, they evolved step by step out of driver-assistance systems.

The industry divides the path from manned to unmanned driving into four phases.

The first stage is driver assistance. A driver-assistance system provides the driver with the information needed for driving and gives clear, precise warnings at critical moments. Relevant technologies include lane departure warning (LDW), forward collision warning (FCW), and blind spot warning systems.

The second stage is semi-autonomous driving. When the driver fails to respond after being warned, the semi-automatic system can intervene on its own. Related technologies include autonomous emergency braking (AEB) and emergency lane assist (ELA).

The third stage is highly automated driving. Under the driver's supervision, the system can take over control of the car for longer or shorter periods. This stage is still at a fairly early point.

The fourth stage is fully automatic driving. Without any need for driver monitoring, the car drives itself entirely, which means the driver can do other things in the car, such as working online, entertainment, or rest.
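As a compact restatement of this four-phase taxonomy, here is a small sketch in Python; the stage names and example technologies simply mirror the phases described above and are not an official standard.

```python
from enum import Enum

class AutomationPhase(Enum):
    """The four industry phases described above, with example technologies."""
    DRIVER_ASSISTANCE = "warnings only: LDW, FCW, blind spot warning"
    SEMI_AUTONOMOUS = "system intervenes if the driver does not: AEB, ELA"
    HIGHLY_AUTOMATED = "system drives for periods under driver supervision"
    FULLY_AUTOMATED = "no driver monitoring needed at all"

for phase in AutomationPhase:
    print(f"{phase.name}: {phase.value}")
```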

By this classification, the first phase is already basically universal: higher-spec models now generally come with these features, including early BYD models costing just over 100,000 yuan.

The second phase is becoming more common. EU legislation requires vehicles to be equipped with AEB from November 2013 onward. Volvo's City Safety system, Honda's CMBS, and Mercedes-Benz's Pre-Safe are all at this level. Infiniti's newest models can even turn the steering wheel automatically without driver input.

The third stage is currently in its infancy. The new Mercedes-Benz S-Class can automatically follow the car ahead in traffic jams without the owner having to control the vehicle. Some manufacturers, including Chinese ones, are also carrying out related experiments and explorations.

The fourth stage is what Google has been working toward. Google's driverless car, which looks a bit like the Prism Tank from the game Red Alert 2, may seem odd, but it has been tested for a long time, its maturity is already high, and it is the driverless car closest to practical use.

Second, how driverless cars work and Google's technical route

A driverless car can really be thought of as a robot. In principle, the sensors perceive the road and the surroundings and pass that information to the CPU; the CPU evaluates the situation with artificial intelligence and sends commands to the drive-by-wire system; the drive-by-wire system controls the mechanical components according to those signals; and finally the mechanical components make the vehicle perform each action.

In this chain, drive-by-wire control is essentially a solved problem, because that is where most of the auto industry's progress over the past 20 years has gone. Most cars today are controlled by wire: your braking, throttle, gear shifts, and even steering are electronic signals, with your inputs interpreted and then relayed to the mechanical systems. Purely mechanical, direct linkages survive only on a very small number of utility vehicles. Driverless driving simply hands these signals over from human control to a computer.
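To make the pipeline concrete, here is a minimal sketch of that sense, decide, act loop; the class names, thresholds, and toy decision rule are invented for illustration and do not describe Google's actual software.

```python
from dataclasses import dataclass

@dataclass
class Perception:
    """What the sensors report about the surroundings (hypothetical, simplified)."""
    obstacle_distance_m: float   # distance to the nearest obstacle ahead
    lane_offset_m: float         # lateral offset from the lane centre

@dataclass
class Command:
    """The signals handed to the drive-by-wire layer."""
    throttle: float   # 0.0 .. 1.0
    brake: float      # 0.0 .. 1.0
    steering: float   # -1.0 (full left) .. +1.0 (full right)

def decide(p: Perception) -> Command:
    """Toy stand-in for the 'artificial intelligence' step: brake hard for close
    obstacles, otherwise steer gently back towards the lane centre."""
    if p.obstacle_distance_m < 10.0:
        return Command(throttle=0.0, brake=1.0, steering=0.0)
    steer = max(-1.0, min(1.0, -0.5 * p.lane_offset_m))
    return Command(throttle=0.3, brake=0.0, steering=steer)

def control_loop(sense, actuate, steps=100):
    """sense() returns a Perception; actuate(Command) pushes signals to drive-by-wire."""
    for _ in range(steps):
        perception = sense()            # 1. sensors perceive the road and surroundings
        command = decide(perception)    # 2. the CPU evaluates the situation
        actuate(command)                # 3. drive-by-wire moves the mechanical parts

# Minimal usage with fake sensing and a print-based actuator.
control_loop(lambda: Perception(50.0, 0.2), print, steps=1)
```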

The technical difficulty lies in the first two steps: how do the sensors accurately perceive the surroundings, and how does the artificial intelligence make its judgments?

Technically, there are several ways to perceive the surroundings. Google has chosen laser sensors (lidar). Lidar measures distance very accurately, but it is expensive and its usefulness is limited in bad weather. The high cost of Google's driverless cars today comes mainly from the laser sensors.

A cheaper option is optical cameras. The Mercedes-Benz S-Class's "magic carpet" suspension (Magic Body Control), for example, uses a camera as its information source to control the air suspension. But using optical cameras for driverless driving places very high demands on the artificial intelligence's image recognition, and judging distance and speed from images is troublesome.

The radar commonly used in today's driver-assistance features is relatively cheap, but its detection capability is quite limited: the transmit power of automotive radar does not allow it to see very far, and it struggles when objects are blocked. How practical it would be for driverless driving is questionable.

For now, Google's laser-sensor route looks the most reliable.
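To make the trade-off concrete, here is a toy sketch that fuses distance estimates from the three sensor types by weighting each one; the weights and readings are invented to reflect the discussion above (lidar most trusted, camera least, radar in between), not real specifications.

```python
def fuse_distance(lidar_m, camera_m, radar_m, weights=(0.6, 0.15, 0.25)):
    """Toy weighted average of three distance estimates (metres).
    The weights are made up for illustration: lidar is most precise, the
    camera's image-based estimate is least reliable, radar sits in between."""
    w_lidar, w_camera, w_radar = weights
    total = w_lidar + w_camera + w_radar
    return (lidar_m * w_lidar + camera_m * w_camera + radar_m * w_radar) / total

# Three slightly disagreeing readings of the same obstacle.
print(round(fuse_distance(lidar_m=24.8, camera_m=27.5, radar_m=25.6), 1))
```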

Artificial intelligence is where the real test lies. Driving on the freeway is actually not that hard: Google has no problem with it, nor do Mercedes-Benz, Audi, or Volvo, and even a Chinese driverless car has run 286 kilometres on the expressway without trouble. That is because conditions on an expressway are relatively simple, so the demands on the artificial-intelligence algorithms are not very high.

Driving in the city is completely different.

Urmson, the director of Google's driverless car project, said: "A mile of city driving is far more complex than a mile on the freeway, because in a small area there are hundreds of different objects moving according to different road rules. We have spent a lot of time improving our software, so it can now distinguish hundreds of different objects in real time, such as pedestrians, buses, a stop sign held up by a crossing guard, or a cyclist signaling with gestures to cross the road."

In artificial intelligence, the other manufacturers are still far behind Google. Google's artificial intelligence is close to practical use, while the others are still at the demonstration stage.

Third, the value of the exposed patents

The first of the three patents exposed this time is Google's "Determining when to drive autonomously," filed in March 2014 and actually a continuation of a 2012 patent. Concretely, the car collects information through its various onboard sensors and compares it with map data and a traffic model. Once the deviation is large enough, the car automatically issues a warning asking the driver to take manual control. If the occupant does not intervene, the self-driving car automatically slows down or moves into a safer lane.

This is essentially a safety patent: it asks for manual intervention, and if no human intervenes, the car reduces speed and moves into a relatively safe lane to avoid an accident.

At the artificial-intelligence level this is a matter of a few lines of code rather than advanced technology, so it is not a core patent.
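As a rough illustration of how little logic the described behaviour requires, here is a minimal sketch; the threshold, timeout, and function name are invented for illustration and are not taken from the patent.

```python
DEVIATION_THRESHOLD = 0.3   # hypothetical limit on sensor/map mismatch
WARNING_TIMEOUT_S = 5.0     # hypothetical time the driver has to respond

def decide_fallback(deviation: float, driver_responded: bool) -> str:
    """Return the action implied by the behaviour described above."""
    if deviation <= DEVIATION_THRESHOLD:
        return "continue autonomous driving"
    if driver_responded:
        return "hand control to the driver"
    # No intervention within the timeout: degrade gracefully.
    return "slow down and move to a safer lane"

# Usage: a large mismatch with no driver response triggers the safe fallback.
print(decide_fallback(deviation=0.8, driver_responded=False))
```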

The second is a patent Google applied for in 2013. When the car detects a pedestrian crossing ahead, it responds appropriately, for example by slowing down or even stopping, and then signals its intention to the pedestrian. The signal may be lights, an electronic sign, or sound.

The purpose of this patent is to let the car communicate with pedestrians and to solve the problem that a driverless car's intentions are otherwise unclear. From an artificial-intelligence standpoint, it amounts to little more than detecting a pedestrian and sending a signal.
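A minimal sketch of that behaviour follows; the distance cut-offs and signal names are invented placeholders, not details from the patent.

```python
from enum import Enum

class Signal(Enum):
    """Hypothetical ways the car could show its intention (lights, sign, sound)."""
    YIELD_LIGHTS = "flash yield lights"
    SIGN_GO_AHEAD = "show 'safe to cross' on the electronic sign"
    CHIME = "play a short chime"

def react_to_pedestrian(distance_m: float):
    """Sketch of the 2013 patent's behaviour: slow or stop, then signal the pedestrian."""
    if distance_m < 5.0:
        return "stop", Signal.SIGN_GO_AHEAD     # very close: stop and wave them across
    return "slow down", Signal.YIELD_LIGHTS     # further away: slow and show intent to yield

print(react_to_pedestrian(distance_m=3.0))
```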

The third is gesture control. Google has applied for a patent called "3D gesture automatic control": a 3D camera mounted above and to the left of the driver's seat monitors and recognizes the driver's gestures to activate certain functions. According to the patent description, gestures can adjust the seats, change fan speed, adjust air-conditioning temperature, open and close windows, and so on. For safety, functions that involve actual driving operations still require physical buttons.

In addition, drivers can learn the preset gestures, and since the system supports face recognition, different drivers can each set up their own personal gestures.
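The idea can be sketched as a simple lookup with per-driver overrides keyed by face recognition; the gesture and function names below are invented for illustration and are not from the patent.

```python
# Hypothetical gesture-to-function mapping with per-driver profiles,
# loosely following the patent description above (names are invented).
DEFAULT_GESTURES = {
    "swipe_up": "raise fan speed",
    "swipe_down": "lower fan speed",
    "circle_clockwise": "raise air-conditioning temperature",
    "tap_window": "open window",
}

PERSONAL_GESTURES = {
    # Face recognition identifies the driver, so each driver can override the defaults.
    "alice": {"swipe_up": "raise seat"},
}

def handle_gesture(driver_id: str, gesture: str) -> str:
    profile = {**DEFAULT_GESTURES, **PERSONAL_GESTURES.get(driver_id, {})}
    return profile.get(gesture, "ignore: driving-related actions need a physical button")

print(handle_gesture("alice", "swipe_up"))   # -> raise seat
print(handle_gesture("bob", "swipe_up"))     # -> raise fan speed
```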

This is essentially a copy of the face recognition and motion-sensing controls already found on phones; the technology has long been commercialized. Google has simply moved it into the car and patented it there, presumably to make the driverless car easier to operate. Its practicality is limited, it is arguably inferior to voice control and recognition, and more to the point, it is not the core technology.

So rather than calling the exposure of these three Google patents a disclosure of technical information, it would be more accurate to call it an advertisement. The real core of Google's driverless car lies in sensor data acquisition and the artificial-intelligence algorithms, and the three patents exposed this time are not worth much.

Fourth, the future of the car

From the analysis above, we can see that Google currently stands at the forefront of the journey from assisted driving to driverless driving.

While other vendors are still exploring the third phase, highly automated driving, Google has been working on the fourth phase for several years. While other manufacturers can only put on demonstrations on the highway, Google has already tackled the much harder problem of driving on city roads.

From how driverless systems work, it is also clear that Google's current work is not tied to Google's own cars; it can be transplanted to any new car.

As long as a car is drive-by-wire, Google's sensing and artificial intelligence can be transplanted onto it; only the power, braking, and steering parameters of each car need to be adapted.

Whether it is an electric car, a gasoline car, or a hybrid, porting Google's driverless system onto it presents no fundamental difficulty.
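A minimal sketch of what such an adaptation layer could look like follows; the interface and parameter names are invented for illustration and do not describe Google's actual software.

```python
from dataclasses import dataclass

@dataclass
class VehicleProfile:
    """Hypothetical per-model calibration for a drive-by-wire car."""
    max_brake_decel: float    # m/s^2 the brakes can deliver
    max_steer_angle: float    # degrees of steering lock
    throttle_gain: float      # how aggressively throttle maps to acceleration

def adapt_command(throttle: float, brake: float, steering: float,
                  profile: VehicleProfile) -> dict:
    """Scale a generic driving command to one specific car's drive-by-wire parameters."""
    return {
        "throttle": throttle * profile.throttle_gain,
        "brake_decel": brake * profile.max_brake_decel,
        "steer_angle": steering * profile.max_steer_angle,
    }

# The same high-level command, adapted to two different (made-up) cars.
sedan = VehicleProfile(max_brake_decel=8.0, max_steer_angle=35.0, throttle_gain=0.8)
ev = VehicleProfile(max_brake_decel=9.5, max_steer_angle=30.0, throttle_gain=1.0)
print(adapt_command(0.4, 0.0, 0.1, sedan))
print(adapt_command(0.4, 0.0, 0.1, ev))
```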

Once Google succeeds, any manufacturer, whether BMW, Mercedes-Benz, Audi, Tesla, Bentley, Toyota, Honda, Nissan, Mazda, Great Wall, Chang'an, BYD, or Chery, will be able to offer driverless functionality simply by adopting Google's program, licensing Google's algorithms, and buying Google-supplied hardware.

In the future, driverless capability will be like today's automatic car-following and automatic parking: as long as you are willing to pay, the high-spec version of almost any brand's model can offer it. Driverless driving will not be a separate batch of new cars; it can be fitted to any new model.

When driverless driving becomes widespread, new functions and industries will follow. Widespread driverless cars mean every car carries a high-performance computer. Over future 5G networks they will be able to talk to each other, to the city traffic-management centre, and even to sensors embedded in the road, so cars can be managed the way high-speed rail is managed, and traffic accidents become a thing of the past.

Once vehicles can communicate with restaurants, hotels, gas stations, and charging posts, a car will be able to refuel or recharge itself automatically and deliver its owner to the restaurant or hotel.

Once the vehicle is connected to your phone, tablet, and computer, you will be able to send it instructions at any time and unlock even more advanced functions.

Future life looks like this: at home you simply tell your phone, "I want to eat French escargot at noon today." The phone recommends a restaurant, plans the route, and schedules the trip. All you have to do is leave the house at the appointed time, get into the car, and then get out at the restaurant's door. The car drives itself to the gas station to refuel and returns to the restaurant entrance at the agreed time to pick you up.

This seemingly science-fiction life may be realized within 10 years; the future is beautiful and within reach.
