Tesla's Autopilot cleared of blame in driver's death
In May 2016, a Tesla Model S operating on Autopilot was involved in a crash in which the driver was killed. Since then, the US National Highway Traffic Safety Administration has investigated the accident and concluded that the car's Autopilot functioned properly and was not at fault in the crash. The report, published as a PDF, notes that the automatic emergency braking system is designed primarily to prevent rear-end collisions and could not have averted the situation when a truck suddenly pulled out across the Tesla's path from an adjacent road.
Moreover, Autopilot is not designed to drive the car on its own and requires constant supervision from the driver. After the accident, Tesla added another safeguard: the car now monitors whether the driver's hands are on the steering wheel, and if the driver keeps them off the wheel even after a warning, Autopilot is disabled for the rest of the trip. The investigation also found that after the introduction of Autosteer mode in Tesla cars (which keeps the car within its lane), the number of accidents fell by 40%.
The next version of Tesla's Autopilot could become fully autonomous, provided that incidents like this one do not cast a shadow on the company's expertise in developing the system. Last summer, we concluded that artificial intelligence had begun killing ahead of schedule and that we would have to live with it. Does this mean our conclusion was premature and that, for now, the autopilot is not to blame, so all we can do is brace ourselves and wait for the next disaster, which may already be on its way?