After a long list of positive and negative reactions to the current Autopilot feature, Tesla has decided to update the system to increase the level of safety offered by the Model S and Model X.
The most significant upgrade to Autopilot will be the use of more advanced signal processing to create a picture of the world from the onboard radar. The radar was added to all Tesla vehicles in October 2014 as part of the Autopilot hardware suite, but it was only ever meant to be a supplementary sensor to the primary camera and image-processing system.
With this update, the radar can be used as a primary control sensor without requiring the camera to confirm visual image recognition. This is a non-trivial and counter-intuitive problem, because of how strange the world looks in radar. Photons of that wavelength travel easily through fog, dust, rain and snow, but anything metallic looks like a mirror. The radar can see people, but they appear partially translucent. Something made of wood or painted plastic, though opaque to a person, is almost as transparent as glass to radar.
On the other hand, any metal surface with a dish shape is not only reflective, but also amplifies the reflected signal to many times its actual size. A discarded soda can on the road, with its concave bottom facing towards you, can appear to be a large and dangerous obstacle, but you would definitely not want to slam on the brakes to avoid it.
Therefore, the big problem in using radar to stop the car is avoiding false alarms. Slamming on the brakes is critical if you are about to hit something large and solid, but not if you are merely about to run over a soda can. Having lots of unnecessary braking events would at best be very annoying and at worst cause injury.
The first part of solving that problem is having a more detailed point cloud. Software 8.0 unlocks access to six times as many radar objects with the same hardware, and with a lot more information per object.
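Tesla has not published the format of these radar objects, but a minimal sketch of what one might carry, with entirely illustrative field names, looks like this:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class RadarObject:
    """One radar return per measurement cycle. Field names are
    illustrative guesses, not Tesla's actual (unpublished) format."""
    range_m: float           # distance to the object, in metres
    azimuth_rad: float       # bearing relative to the car's heading
    radial_velocity: float   # closing speed along the beam, m/s (+ = closing)
    rcs_dbsm: float          # radar cross-section, dB relative to 1 m^2

# A snapshot is simply every object the radar reports in one 0.1 s
# measurement cycle; software 8.0 raises this per-cycle object count
# roughly sixfold on the same hardware.
Snapshot = List[RadarObject]
```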
The second part is assembling those radar snapshots, which are taken every tenth of a second, into a 3D "picture" of the world. From a single frame it is hard to tell whether an object is moving or stationary, or to distinguish a spurious reflection. By comparing several contiguous frames against vehicle velocity and expected path, the car can tell whether something is real and assess the probability of collision.
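As a rough illustration of how frame-to-frame comparison separates real objects from spurious reflections, the sketch below, reusing the RadarObject fields from above and assuming straight-line travel with the x-axis pointing forward, checks whether an object reappears where the car's own motion says a real, stationary object should be:

```python
import math

FRAME_DT = 0.1  # seconds between radar snapshots

def to_xy(obj: RadarObject) -> tuple:
    """Polar radar return -> Cartesian point in the car's frame
    (x forward, y to the left)."""
    return (obj.range_m * math.cos(obj.azimuth_rad),
            obj.range_m * math.sin(obj.azimuth_rad))

def persists(track: list, ego_speed_mps: float, tol_m: float = 1.0) -> bool:
    """Crude persistence test over consecutive frames of one object.
    After subtracting the car's own motion, a real stationary object
    should reappear each frame near where the last frame predicted;
    spurious reflections jump around and fail the tolerance."""
    for prev, curr in zip(track, track[1:]):
        px, py = to_xy(prev)
        cx, cy = to_xy(curr)
        predicted_x = px - ego_speed_mps * FRAME_DT  # world slides toward us
        if math.hypot(cx - predicted_x, cy - py) > tol_m:
            return False
    return True

def time_to_collision_s(obj: RadarObject) -> float:
    """Seconds until impact if the closing speed stays constant;
    infinite if the object is not closing at all."""
    if obj.radial_velocity <= 0:
        return float("inf")
    return obj.range_m / obj.radial_velocity
```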
When the car approaches an overhead highway road sign positioned on a rise in the road, or a bridge where the road dips underneath, the object often looks like it is on a collision course. The navigation data and the height accuracy of the GPS are not good enough to determine whether the car will pass under the object or not. By the time the car is close and the road pitch changes, it is too late to brake.
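A back-of-the-envelope calculation shows why that late disambiguation comes too late for braking; the 9 m/s² deceleration figure below is an assumed near-limit value for a passenger car on dry asphalt, not a Tesla specification:

```python
def stopping_distance_m(speed_mps: float, decel_mps2: float = 9.0) -> float:
    """Distance needed to brake to a halt from speed_mps, assuming
    constant deceleration: d = v^2 / (2a)."""
    return speed_mps ** 2 / (2.0 * decel_mps2)

# At highway speed (30 m/s, about 108 km/h) the car needs ~50 m just to
# stop, so a sign that only resolves as "overhead, not in-path" once the
# road pitch changes at short range is discovered far too late.
print(stopping_distance_m(30.0))  # 50.0
```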
This is where fleet learning comes in handy. Initially, the vehicle fleet will take no action except to note the position of road signs, bridges and other stationary objects, mapping the world according to radar. Each car's computer will then silently compare when it would have braked against what the driver actually did, and upload that to the Tesla database. If several cars drive safely past a given radar object, whether Autopilot is turned on or off, that object is added to a geocoded whitelist.
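Server-side, the whitelist logic could be as simple as counting safe passes per geocoded object. The threshold of five passes and the geohash keying below are assumptions for illustration; Tesla has not disclosed either:

```python
from collections import defaultdict

SAFE_PASS_THRESHOLD = 5  # assumed; the real fleet-learning threshold is not public

safe_passes: dict = defaultdict(int)  # geohash of radar object -> safe pass count
whitelist: set = set()                # objects cleared as harmless fixtures

def report_safe_pass(object_geohash: str) -> None:
    """Record that a fleet vehicle drove past a stationary radar object
    without incident, with Autopilot on or off."""
    safe_passes[object_geohash] += 1
    if safe_passes[object_geohash] >= SAFE_PASS_THRESHOLD:
        whitelist.add(object_geohash)  # e.g. an overhead sign or bridge

def is_whitelisted(object_geohash: str) -> bool:
    """True once the fleet has established this radar return as harmless."""
    return object_geohash in whitelist
```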
When the data shows that false braking events would be rare, the car will begin mild braking using radar, even if the camera doesn’t notice the object ahead. As the system confidence level rises, the braking force will gradually increase to full strength when it is approximately 99.99% certain of a collision. This may not always prevent a collision entirely, but the impact speed will be dramatically reduced to the point where there are unlikely to be serious injuries to the vehicle occupants.
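The article gives only the endpoint, full braking at roughly 99.99% certainty, so the ramp below is a guessed linear interpolation from an assumed mild-braking onset, purely to make the graduated behaviour concrete:

```python
def braking_fraction(collision_confidence: float) -> float:
    """Map collision confidence to a fraction of maximum braking force.
    FULL_BRAKE comes from the stated ~99.99% figure; RAMP_START and the
    linear shape in between are illustrative assumptions."""
    RAMP_START = 0.99    # assumed confidence at which mild braking begins
    FULL_BRAKE = 0.9999  # "approximately 99.99% certain of a collision"
    if collision_confidence < RAMP_START:
        return 0.0
    if collision_confidence >= FULL_BRAKE:
        return 1.0
    return (collision_confidence - RAMP_START) / (FULL_BRAKE - RAMP_START)
```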
Taking this one step further, a Tesla will also be able to bounce the radar signal under a vehicle in front – using the radar pulse signature and photon time of flight to distinguish the signal – and still brake even when trailing a car that is opaque to both vision and radar.
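Time of flight is what makes the bounced echo separable from the direct one: the under-car path is longer, so its echo arrives measurably later. The 30 m and 45 m distances below are invented for illustration:

```python
C = 299_792_458.0  # speed of light, m/s

def round_trip_s(range_m: float) -> float:
    """Time for a radar pulse to reach an object and return: 2d / c."""
    return 2.0 * range_m / C

# Assumed geometry: the car directly ahead is 30 m away; the hidden car
# beyond it, reached by bouncing the pulse off the road surface
# underneath, is effectively 45 m away along the signal path.
direct = round_trip_s(30.0)
bounced = round_trip_s(45.0)
print(f"extra delay of the bounced echo: {(bounced - direct) * 1e9:.0f} ns")  # ~100 ns
```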