Introducing Tesla 9.0

Tesla has announced a new update that includes new autonomous driving features.
Recently the Tesla project has begun to falter, hit by financial problems, lawsuits and, not least, the fatal crash of a car controlled by Autopilot. Now Tesla has announced a new update to manage the turnaround.
Tesla’s Autopilot was one of the first such systems to reach series production, offering a Level 2 semi-autonomous driving solution. The cars can drive by themselves, but the manual requires the human driver to keep their hands on the wheel and stay alert. The driver must be able to take over control at any time, because the technology is not yet ready to master all traffic situations.

Autopiloted accidents

Lately there has been an increase in incidents involving the Tesla Autopilot. The system seems to tempt drivers to lose their attentiveness in traffic and take up other activities. In Great Britain, a driver lost his license for sitting in the passenger seat while his car drove autonomously.
On March 23, 2018 in California, a Tesla Model X crashed into a freeway divider, killing the driver and causing a fire. The National Transportation Safety Board (NTSB) of the USA recently took over the investigation of the case; its report stated that the driver was not paying attention. The car had been tracking the vehicle ahead and lost its point of reference at the divider. At that point the driver needed to take over control of the vehicle; unfortunately, he didn’t. The car accelerated, pulled to the left and crashed into the divider.
There have been further incidents in which Tesla models caused collisions by accelerating for no apparent reason. One possible cause is a faulty update. Many users sued Tesla for selling life-threatening technology; these cases have been settled out of court. However, the company is facing more lawsuits initiated by consumer protection organizations.

Update 9.0

Elon Musk announced on Twitter that a new update will roll out in August, bringing new autonomous driving features. Which features these are, and what level of autonomy Tesla models will reach with them, was not communicated. The software update does not add a lidar system. A lidar produces a 3D image of the vehicle’s surroundings and is used for orientation and localization. Last year, a GM manager claimed Tesla could not build autonomous cars because its vehicles are not equipped with lidar. In fact, Tesla relies above all on camera sensors.
However, researchers at Audi and MIT have developed algorithms that allow 3D information to be calculated from camera images. Whether that is Tesla’s plan is unclear, but it cannot be ruled out. We can only hope that the August update brings more safety: not just Tesla’s reputation is at stake, but that of autonomous driving technology as a whole.
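The underlying idea, estimating a 3D scene from ordinary camera frames with a neural network, can be illustrated with an off-the-shelf monocular depth model. Below is a minimal sketch using the publicly available MiDaS model via PyTorch Hub; it illustrates the general technique only, not Tesla’s system nor the Audi/MIT research code, and the image path is a placeholder.

```python
import cv2
import torch

# Load a small, publicly available monocular depth model via PyTorch Hub.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transforms = torch.hub.load("intel-isl/MiDaS", "transforms")

# "road.jpg" is a placeholder for any dashcam-style frame.
img = cv2.cvtColor(cv2.imread("road.jpg"), cv2.COLOR_BGR2RGB)
batch = transforms.small_transform(img)

with torch.no_grad():
    pred = midas(batch)
    # Upsample the prediction back to the input resolution.
    depth = torch.nn.functional.interpolate(
        pred.unsqueeze(1), size=img.shape[:2],
        mode="bicubic", align_corners=False,
    ).squeeze().numpy()

print(depth.shape)  # one relative-depth value per image pixel
```

A single camera yields only relative depth; research systems recover metric 3D by combining such estimates with multiple views, motion or known camera geometry.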

About the author:

David Fluhr is a journalist and owner of the digital magazine “Autonomes Fahren & Co”. He reports regularly on trends and technologies in the fields of Autonomous Driving, HMI, Telematics and Robotics. Link to his site: http://www.autonomes-fahren.de

Interview with Prof. Dr. Daniel Cremers: Technical Challenges in Developing HAD?

During the Automotive Tech.AD 2018 we had a chat with Prof. Dr. Daniel Cremers, who holds various roles at the Technical University of Munich. Prof. Cremers gives us a better understanding of his roundtable discussion around the question “What are the most serious technical challenges in developing HAD?”. He addresses the many requirements on the way to autonomous cars: vehicles need to understand what is happening around them, which requires reconstructing the car’s surroundings in order to carry out tasks like path planning, decision making and obstacle avoidance. Moreover, he speaks about the role of deep neural networks in future self-driving cars.
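As a toy illustration of that reconstruct-then-plan pipeline (our own minimal sketch, not software from Prof. Cremers’ group): represent the reconstructed surroundings as an occupancy grid and search it for an obstacle-free path.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search on an occupancy grid (0 = free, 1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    parents = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:  # walk back through parents to recover the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and nxt not in parents:
                parents[nxt] = cell
                queue.append(nxt)
    return None  # no obstacle-free path exists

# A tiny "scene": the planner must drive around the wall in row 1.
scene = [[0, 0, 0, 0],
         [1, 1, 1, 0],
         [0, 0, 0, 0],
         [0, 1, 1, 0]]
print(plan_path(scene, (0, 0), (3, 0)))
```

Real systems replace the grid with learned 3D reconstructions and the search with far more sophisticated planners, but the division of labour, perceive, reconstruct, then plan, is the same.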

About Prof. Dr. Daniel Cremers

Since 2009, Daniel Cremers has held the Chair for Computer Vision and Pattern Recognition at the Technical University of Munich. His publications have received several awards, including the ‘Best Paper of the Year 2003’ (Int. Pattern Recognition Society), the ‘Olympus Award 2004’ (German Soc. for Pattern Recognition) and the ‘2005 UCLA Chancellor’s Award for Postdoctoral Research’. Professor Cremers has served as associate editor for several journals, including the International Journal of Computer Vision, the IEEE Transactions on Pattern Analysis and Machine Intelligence and the SIAM Journal on Imaging Sciences. He has served as area chair (associate editor) for ICCV, ECCV, CVPR, ACCV and IROS, and as program chair for ACCV 2014. He serves as general chair for the European Conference on Computer Vision 2018 in Munich. In December 2010 he was listed among “Germany’s top 40 researchers below 40” (Capital). On March 1st, 2016, Prof. Cremers received the Leibniz Award 2016, the biggest award in German academia. He is Managing Director of the TUM Department of Informatics. According to Google Scholar, Prof. Cremers has an h-index of 71 and his papers have been cited 19,173 times.

Ad Punctum Case Study: Autonomous Vehicles will be slower than you think

How does sensor or AI performance affect the potential top speed of an autonomous car? And what is the current maximum speed of an autonomous car, taking sensor reliability into account? Ad Punctum conducted a Case Study to work out answers to these questions and draw conclusions about future mobility.

Case Study Conclusions:

  • Autonomous vehicle top speed is a function of sensor / AI reliability (see the back-of-the-envelope sketch after this list).
  • Based on current sensor performance, autonomous vehicles are unlikely to travel above 70 mph for passenger cars and 60 mph for large trucks.
  • Lower top-speed requirements enable different engineering trade-offs (cheaper & lighter components).
  • Vehicle designs will need to package-protect for step changes in sensor performance.
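To see why sensor reliability caps speed, consider the simplest version of the argument: the car must be able to come to a halt within the distance at which its sensors reliably detect an obstacle. The sketch below is our own back-of-the-envelope illustration with assumed numbers (detection range, reaction time, braking deceleration), not Ad Punctum’s actual model.

```python
import math

def max_speed_mph(sensor_range_m, reaction_time_s=0.5, decel_ms2=6.0):
    """Highest speed at which the vehicle can stop within its sensor range.

    Solves sensor_range = v * t_react + v**2 / (2 * decel) for v.
    All default values are illustrative assumptions.
    """
    t, a = reaction_time_s, decel_ms2
    v = a * (-t + math.sqrt(t * t + 2.0 * sensor_range_m / a))  # m/s
    return v * 2.23694  # convert m/s to mph

# Assume ~100 m of reliable detection range for a passenger car:
print(f"{max_speed_mph(100.0):.0f} mph")  # ~71 mph with these assumptions
```

Shorter reliable detection ranges, longer reaction times or gentler braking (as for a loaded truck) pull the cap down quickly, which is the direction of the case study’s 70 mph and 60 mph figures.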

Read the full Case Study and find out why it won’t be as easy as one might think to build fast autonomous vehicles. You can download the whole Case Study here.

OPTIS Webinar: Smart Headlamps – Light & Sensors Integration for Autonomous Driving

In the context of Autonomous Driving, sensors will appear everywhere in the car, including inside the headlamps. Several Tier-1 suppliers have already shown concepts with LiDAR or camera integration in the headlamp. A headlamp is a tiny box already containing several modules for lighting and signalling functions. But how can a sensor be added without impacting the visual signature or the sensor’s performance?
In this webinar, we will study three aspects of smart headlamps:

  • Smart Design with Light Guide optimisation
  • Smart Lighting thanks to Digital Lighting
  • Smart Headlamp with Sensor integration

Webinar time and date:

The webinar will be held twice on June 26th to give you the chance to join us when it suits you most.

June 26th, 2018 – 10 to 10:30 AM CEST

June 26th, 2018 – 5 to 5:30 PM CEST

Signalling functions can be used for communication with other vehicles and pedestrians. Light guides will probably remain a cost-effective solution for displaying a homogeneous lit appearance. It is essential to optimize the workload and time required to reach an effective design. We will explain how Smart Design optimizes Light Guides using SPEOS Optical Shape Design and Optimizer.
Pixel Beam appears to be the ultimate lighting solution for a wide range of applications. Even with autonomous driving, glare-free beams and lane-marking projections will be needed to boost drivers’ confidence in the intelligence of the car. Smart Lighting can adapt to different driving conditions. If you want to evaluate different technologies (LCD, MEMS, µAFS) or test ideas such as crosswalk or lane-marking projections, dynamic simulation with OPTIS VRX is the key to identifying the relevant parameters and justifying technological choices.
Autonomous cars will require more sensors, which need to find space in the vehicle. A headlamp is an electronic box with a transparent cover, positioned at the corner of the car, so integrating a camera and a LiDAR into the headlamp seems a promising idea. Smart Headlamps will be an essential component of autonomous driving. Optical simulation is needed to design the optical components and, more importantly, to verify the opto-mechanical interactions between sources, lenses and support structures, taking into account variations in the different parts. Because SPEOS is CAD-integrated, mechanical components can easily be moved and re-simulated to quickly assess the impact of changes. It can also be used to understand the eye-safety classification of the LiDAR emitter.
Across these three topics, we cover the challenges for which OPTIS offers predictive simulation to design smarter products in less time.

Webinar speakers:

Cedric Bellanger

Product Marketing Manager
OPTIS

Julien Muller

Product Owner SPEOS & VRX-Headlamp
OPTIS

Find out more about OPTIS' VRX 2018 - the driving simulator that virtually reproduces the performance and behaviour of advanced headlighting systems: