Introducing Tesla 9.0

Tesla has announced a new update that includes autonomous driving features.
Recently the Tesla project has begun to falter, hit by economic problems, lawsuits and, not least, the fatal crash of a car controlled by Autopilot. Now Tesla has announced a new update to manage the turnaround.
Tesla’s Autopilot was one of the first systems of its kind sold in series production and offers a Level 2 semi-autonomous driving solution. The cars are able to drive by themselves, but the manual requires the human driver to keep their hands on the wheel and stay alert. The driver has to be able to take over control at any time because the technology cannot yet master all traffic situations.

Autopiloted accidents

Lately there has been an increase in incidents involving the Tesla Autopilot. The system seems to tempt drivers to lose their attentiveness in traffic and take up other activities. In Great Britain, a driver lost his license for sitting in the passenger seat while his car drove itself.
On March 23, 2018 in California, a Tesla Model X crashed into a freeway divider, killing the driver and causing a fire. Recently the National Transportation Safety Board (NTSB) of the USA took over the investigation of the case; its report stated that the driver was not paying attention. The Autopilot had been tracking the car ahead and lost its point of reference at the divider. At that point the driver needed to take over control of the vehicle; unfortunately, he didn’t. The car accelerated, pulled to the left and crashed into the divider.
There have been more incidents in which Tesla models caused collisions by accelerating for no apparent reason. One possible cause is a faulty update. Several owners sued Tesla for selling life-threatening technology; these cases have been settled out of court. However, the company is facing further lawsuits initiated by consumer protection organizations.

Update 9.0

Elon Musk announced on Twitter that a new update involving new autonomous driving features will start rolling out in August. What those features are, and what autonomy level Tesla models will reach with them, was not communicated. The software update does not add a lidar system. A lidar generates a 3D image of the vehicle’s surroundings and is used for orientation and localization. Last year a GM manager argued that Tesla cannot build autonomous cars because its vehicles aren’t equipped with lidar; Tesla relies above all on camera sensors.
However, researchers at Audi and MIT have developed algorithms that allow 3D information to be calculated from camera images. Whether that is Tesla’s plan is unclear, but it cannot be ruled out. We can only hope that the August update will bring more safety. Not only Tesla’s reputation is at stake, but also that of autonomous driving technology as a whole.

About the author:

David Fluhr is a journalist and owner of the digital magazine “Autonomes Fahren & Co”. He reports regularly on trends and technologies in the fields of Autonomous Driving, HMI, Telematics and Robotics. Link to his site: http://www.autonomes-fahren.de

Interview with Prof. Dr. Daniel Cremers: Technical Challenges in Developing HAD?

During Automotive Tech.AD 2018 we had a chat with Prof. Dr. Daniel Cremers, who holds various roles at the Technical University of Munich. Prof. Cremers gives us a better understanding of his roundtable discussion around the question “What are the most serious technical challenges in developing HAD?”. He addresses the many requirements on the way to autonomous cars: vehicles need to understand what is happening around them, which requires reconstructing the car’s surroundings in order to carry out tasks like path planning, decision making and obstacle avoidance. He also speaks about the role of deep neural networks in future self-driving cars.

About Prof. Dr. Daniel Cremers

Since 2009, Daniel Cremers has held the chair for Computer Vision and Pattern Recognition at the Technical University of Munich. His publications have received several awards, including the ‘Best Paper of the Year 2003’ (Int. Pattern Recognition Society), the ‘Olympus Award 2004’ (German Soc. for Pattern Recognition) and the ‘2005 UCLA Chancellor’s Award for Postdoctoral Research’. Professor Cremers has served as associate editor for several journals, including the International Journal of Computer Vision, the IEEE Transactions on Pattern Analysis and Machine Intelligence and the SIAM Journal of Imaging Sciences. He has served as area chair (associate editor) for ICCV, ECCV, CVPR, ACCV, IROS, etc., and as program chair for ACCV 2014. He serves as general chair for the European Conference on Computer Vision 2018 in Munich. In December 2010 he was listed among “Germany’s top 40 researchers below 40” (Capital). On March 1st, 2016, Prof. Cremers received the Leibniz Award 2016, the biggest award in German academia. He is Managing Director of the TUM Department of Informatics. According to Google Scholar, Prof. Cremers has an h-index of 71 and his papers have been cited 19,173 times.

Ad Punctum Case Study: Autonomous Vehicles will be slower than you think

How does sensor or AI performance affect the potential top speed of an autonomous car? And what is the current maximum speed of an autonomous car once sensor reliability is taken into account? Ad Punctum conducted a case study to carve out answers to these questions and draw conclusions about future mobility.

Case Study Conclusions:

  • Autonomous vehicle top speed is a function of sensor / AI reliability.
  • Based on current sensor performance, autonomous vehicles are unlikely to travel above 70 mph for passenger cars and 60 mph for large trucks.
  • Lower performance requirements enable different engineering trade-offs (cheaper and lighter components).
  • Vehicles will need to package-protect (reserve space) for step changes in sensor performance.

Read the full Case Study and find out why it won’t be as easy as one might think to build fast autonomous vehicles. You can download the whole Case Study here.

OPTIS Webinar: Smart Headlamps – Light & Sensors Integration for Autonomous Driving

In the context of autonomous driving, sensors will appear everywhere in the vehicle, including inside the headlamps. Several Tier-1 suppliers have already shown concepts with LiDAR or camera integration in the headlamp. A headlamp is a tiny box that already contains several modules for lighting and signalling functions. So how can a sensor be added without impacting the visual signature or the sensor’s performance?
In this webinar, we will study 3 aspects of smart headlamps:

  • Smart Design with Light Guide optimisation
  • Smart Lighting thanks to Digital Lighting
  • Smart Headlamp with Sensor integration

Webinar time and date:

The webinar will be held twice on June 26th to give you the chance to join us at the time that suits you best.

June 26th, 2018 – 10 to 10:30 AM CEST

June 26th, 2018 – 5 to 5:30 PM CEST

Signalling functions can be used for communication with other vehicles and pedestrians. Light guides will probably remain a cost-effective solution for displaying a homogeneous lit appearance, so it is essential to optimise the workload and time required to reach an effective design. We will explain how Smart Design optimises light guides using SPEOS Optical Shape Design and Optimizer.
Pixel Beam appears to be the ultimate lighting solution for a wide range of applications. Even with autonomous driving, glare-free beams and lane-marking projections will be needed to boost drivers’ confidence in the intelligence of the car, and Smart Lighting could adapt to different driving conditions. If you want to evaluate different technologies (LCD, MEMS, µAFS) or test ideas such as crosswalk or lane-marking projections, dynamic simulation with OPTIS VRX is the key to identifying the relevant parameters and justifying technological choices.
Autonomous cars will require more sensors, and those sensors need to find their space in the car. The headlamp is an electronic box with a transparent cover, positioned at the corner of the car, so integrating a camera and a LiDAR in the headlamp seems a promising idea. Smart Headlamps will be an essential component of autonomous driving. Optical simulation is needed to design the optical components, but more importantly to account for the opto-mechanical interactions between sources, lenses and support structures, considering variations in the different parts. Because SPEOS is CAD-integrated, mechanical components can easily be moved and re-simulated to quickly assess the impact of changes. Simulation can also be used to understand the eye-safety classification of the LiDAR emitter.
Through these three topics, we cover different challenges where OPTIS offers predictive simulation to design smarter products in less time.

Webinar speakers:

Cedric Bellanger

Product Marketing Manager
OPTIS

Julien Muller

Product Owner SPEOS & VRX-Headlamp
OPTIS

Find out more about OPTIS' VRX 2018 - the driving simulator that virtually reproduces the performance and behaviour of advanced headlighting systems.

Auto.AI: Conference Summary and Key Takeaways by the Chair

Executive Summary

Auto.AI 2017 was held in Berlin on the 28th and 29th of September. Attendees representing academia, original equipment manufacturers (OEMs), suppliers, start-ups and consultants shared ideas through presentations, peer-to-peer round table sessions and informal discussions. Attendees were asked questions using anonymous live polls throughout the event. There were four broad topics relevant to understanding the development of artificial intelligence in the automotive sector:

  • Artificial intelligence outlook including machine learning & deep learning techniques
  • Computer vision, imaging & perception
  • Sensor fusion & data
  • Simulation, testing & validation of artificial intelligence

Key Takeaways

  • No one seems to be advocating a rules-based approach for autonomous driving.
    Although possible in concept, a set of detailed rules appears impractical in the real world because the volume of instructions needed to cover all driving conditions would be massive and debugging would likely fail to spot all errors or contradictions.
  • There needs to be a common view of the acceptable safety level for deployment of autonomous vehicles.
    This requires the input of regulators, industry, academia and probably 3rd party consumer groups too. In addition to the safe level, we need to determine how to measure it and confirm compliance — the second part is challenging without a test fleet covering tens of billions of miles.
  • Safety is related to the use case.
    The more complex you make the driving conditions, the more corner cases you’ll encounter. The scale of the problem can be reduced by steps such as: learning a particular area (geo-fencing) and not driving in extreme conditions or at very high speeds. Although this diminishes the value for the retail user, there are plenty of industrial applications that can operate within those constrained operating conditions.
  • There needs to be widespread collaboration on a shared framework for testing and validation of AI.
    Governments, companies and academia should all be involved, and it would ideally use open data not tied to specific simulation tools or sensor sets. The experts at the event were wary of letting driverless cars transport their families without seeing safety data beforehand (government and manufacturer assurances weren’t trusted).
  • Work needs to be done on explaining AI.
    There are big differences between the capabilities non-technical people think AI has and what it is capable of — there should be no talk of killer robots. At the same time, deep learning techniques mean that the system function cannot be explained in the same way as traditional coding. New ways to explain how the system operates are required and without them building trust will be very difficult. It could even be necessary for the AI to learn how to explain itself using natural language or other tools.
  • Virtual testing is vital.
    This is for three reasons: firstly, simulation dramatically decreases real world miles; secondly, because AI techniques like reinforcement learning need crashes to take place in order for the AI to learn; and thirdly because even real-world data becomes a simulation once you interact with it in a different way to the original situation. It’s better to do that virtually! For a virtual environment to be successful it must be something that can be replicated in the real world with the same results.
  • There is plenty of disagreement over the right approach to many areas.
    The event live poll highlighted differences of opinion regarding how AI should act, how much information it will be capable of processing and what level of redundancy was required for safe operation. More consistent was the high burden of proof that AI systems will be faced with and a view that currently no one really knows how to convincingly do that.
  • Implementation timing remains uncertain.
    In the event live polling, over a quarter of respondents believe that self-driving will be widespread by 2023 or even earlier. The majority believe that we will be waiting beyond 2025 — a healthy difference of opinion. Health Warning: informal discussions revealed that in general the timescale comes down if the question is about level 4 specific use case vehicles on public roads (they operate on private land already) and goes further out if asked about go-anywhere level 5 vehicles.

Artificial intelligence outlook including machine learning & deep learning techniques

Key Takeaways

  • Vehicle electronics are growing in value but require further standardisation and reductions in power consumption
  • Data storage is a major issue — techniques from traditional big data do not work very well with images and video
  • Image recognition is improving but research would benefit from wider availability of labelled video datasets
  • Further work is required to create greater depth of scenarios and improve simulation processing times
  • Realistic visualisation of simulations for humans is different to modelling the sensor inputs vehicle AI interprets
  • Understanding machine learning isn’t always hard… sometimes it comes up with simpler rules than we expect!

Market Growth… is forecast at 6% compound annual growth rate (CAGR) for electronics — reaching $1,600 of average vehicle content. For semi-conductors the figures are even more impressive — 7.1% CAGR. The specifics of market development are less clear — these growth figures include L1/2 systems but not full autonomy. Although there is a definite role for the technology, standardisation is a must, requiring a yet-to-be-established framework. Safety is a big challenge: without clear agreement on what safety level is acceptable, definite technical standards cannot be set. Another open issue is the degree to which the car will have to make decisions for itself versus interaction with infrastructure and other vehicles. The problem is the latency (response time) of large data sets. Finally, self-driving chipsets must consume significantly less power than current prototypes.
Researchers have gained new insights by translating real world crash data into a virtual environment… the information came from records collected by the regulators. Technology in production today sometimes makes facile errors (e.g. lane keeping recognising a bike path rather than the kerbside). Research has shown that it is possible to correlate the virtual models with real world data (for instance replicating a collision with a pedestrian) but the challenge of testing thoroughly remains substantial. Multiple different environments are needed; there are thousands of types of crash situation; and each vehicle has unique attributes. Through all of this, it is vital that the results correlate to the real world. Researchers aim to reduce modelling times from days (currently) to hours — real time is the ultimate goal. Without improvements in processing speed, virtual sample sets are in danger of remaining too small or too slow to be usable.
The challenge of staying aware of the state of the art in data processing and artificial intelligence… large OEMs are interested in AI in the broadest sense — self-driving, handling customer data and improving business efficiency. The first challenge is the data itself. Not only will the car become a massive source of data, but much of it does not easily fit into existing data structures — images and video are more complicated and unstructured than traditional inputs. With images, it may be necessary to pre-process and store the resulting output, rather than the image itself, to reduce storage space and retrieval time. Image capture and object recognition were identified as a definite area where more work is required and where machine learning is already relevant; for instance, recognising brands of truck trailer may help build broader recognition of what a trailer looks like. By studying a whole range of machine learning activities (a huge resource undertaking), organisations can develop an understanding of the best fit between problems, data collection methods and analysis tools.
There are different ways of obtaining image data in real time… dedicated chips can translate lidar traces (compatible with multiple lidar types) into an instantly available augmented image. This allows object identification from the raw data and for less expensive lidar units to be used. Examples showed a 16-line lidar unit being augmented for higher resolution.
Machine learning already has applications in ADAS feature sets… it has been put to use in two frequently encountered highway situations: roadworks and other drivers cutting in and out. Video and radar example data was combined with machine learning and human guidance about acceptable limits of driving behaviour. Interestingly, in both cases although the machine learning was given multiple data inputs, only a few key elements were required to provide very good accuracy in use. This reduces the sensor inputs and complexity of processing. For example, machine learning identified a high correlation between the angle of the vehicle in front and whether it was intending to cut in, in preference to more complicated rules combining relative speeds and side to side movement. Multiple sensors should be used for decision making: although a camera is better for monitoring many of the situations, its limited field of view means that radar needs to be used in camera blind spots.
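To make the cut-in example above concrete, the following sketch shows how a single feature such as the relative heading angle of the vehicle ahead could feed a simple classifier. It is purely illustrative: the feature values, class labels and use of scikit-learn are assumptions, not details from the case study.
```python
# Illustrative sketch only: a minimal cut-in classifier using one feature,
# the relative heading angle of the vehicle ahead. All numbers are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: heading angle in degrees (positive = nosing
# towards our lane) and whether the vehicle actually cut in (1) or not (0).
angles = np.array([[0.2], [0.5], [1.0], [2.5], [3.0], [4.2], [5.0], [6.1]])
cut_in = np.array([0, 0, 0, 0, 1, 1, 1, 1])

model = LogisticRegression().fit(angles, cut_in)

# Predict the cut-in probability for a newly observed vehicle ahead.
print(model.predict_proba([[3.5]])[0, 1])
```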
The car will become one of the biggest generators of natural language data… and its proper use will enable manufacturers to create a personalised experience that the customer values. For relatively complex commands (“when X happens, then please do Y”), contemporary techniques have 95% correct transcription of what the customer is saying and mid-80s% task completion. This is encouraging but shows further development is needed. OEMs will also have to create ecosystems that allow them to control the customer experience inside the cabin, yet are seamless with the personal assistants the customer might have on their mobile phone or home speaker system.
New techniques are improving image recognition… Using industry benchmark tests, computer image recognition is now superior to humans. In some specific use cases this already has practical uses, for example a smartphone app that assesses melanomas. However, at around 97% correct identification of a random image (versus about 95% for humans), improvement is required. Different methods are being tested, with greater progress on static images than video; partly due to difficulty but also because video has less training data: smaller libraries and fewer labelled categories. Video identification accuracy can be improved by running several different methods in parallel. One of the most promising approaches is turning the video into a set of 2D images with time as the 3rd dimension — a technique pioneered by Deep Mind (now of Google). Combining this process with different assessment algorithms (such as analysing the first and nth frame rather than all frames), teams have achieved accuracy of nearly 90% for gesture recognition. Generally, late fusion (a longer gap between frames) gives better results than early fusion — there is variation in what combination of processing algorithms yields the best accuracy. Progress is happening all the time. New ways of addressing machine learning problems sometimes create step changes, so improvement may not be at a linear rate.
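As a rough illustration of the early versus late fusion distinction mentioned above, the sketch below contrasts stacking frames before feature extraction with extracting features from the first and n-th frame and combining them afterwards. The feature extractor here is a toy stand-in, not one of the benchmark models discussed at the event.
```python
# Minimal sketch (not the presenters' code) of early vs. late fusion for
# video clips, using a placeholder feature extractor.
import numpy as np

def extract_features(frame: np.ndarray) -> np.ndarray:
    """Placeholder per-frame feature extractor (toy: per-channel mean)."""
    return frame.mean(axis=(0, 1))

def early_fusion(clip: np.ndarray) -> np.ndarray:
    """Stack all frames along the channel axis, then extract features once."""
    stacked = np.concatenate(list(clip), axis=-1)
    return extract_features(stacked)

def late_fusion(clip: np.ndarray, gap: int) -> np.ndarray:
    """Extract features from the first and the n-th frame, then combine them."""
    first, nth = clip[0], clip[min(gap, len(clip) - 1)]
    return np.concatenate([extract_features(first), extract_features(nth)])

clip = np.random.rand(16, 64, 64, 3)   # 16 frames of 64x64 RGB video
print(early_fusion(clip).shape, late_fusion(clip, gap=10).shape)
```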
It is hard to create different virtual environments for vehicle testing… Using video game tools and very experienced developers, near photo realistic models can be created, but this appears to be the easy part! Because the structure of computer graphical data is different to real life, models need to be adjusted to create the correct type of artificial sensor inputs. This is even more challenging with radar and lidar input data as the model must accurately simulate the “noise” — a factor of both the sensor and the material it is detecting. Perfecting this could take several years. More useful immediately is the ability to create virtual ground truth (e.g. that is a kerb) that can serve SAE Level 2 development. Because L1/2 inputs are more binary, sophisticated sensor simulation issues are less relevant. Researchers believe that a virtual environment of 10 km to 15 km is sufficient to assist development of these systems, assuming the ability to impose different weather conditions (snow, heavy rain etc.).
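The noise problem described above can be pictured with a small sketch: ideal ranges from the virtual scene are perturbed by a noise term that depends on range and on the material being hit. The specific noise model below is a hypothetical placeholder, not the simulation approach used by the researchers.
```python
# Toy illustration of why simulated lidar needs a noise model: perfect ranges
# from a virtual scene are perturbed before being fed to the perception AI.
import numpy as np

rng = np.random.default_rng(0)

def add_lidar_noise(true_ranges_m, reflectivity, base_sigma_m=0.02):
    """Hypothetical noise model: noise grows with range, drops with reflectivity."""
    sigma = base_sigma_m * (1.0 + true_ranges_m / 50.0) / np.clip(reflectivity, 0.1, 1.0)
    return true_ranges_m + rng.normal(0.0, sigma)

ranges = np.array([5.0, 20.0, 60.0, 120.0])   # metres, from the virtual scene
refl = np.array([0.9, 0.5, 0.3, 0.1])         # material reflectivity, 0..1
print(add_lidar_noise(ranges, refl))
```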

Computer vision, imaging & perception

Key Takeaways

  • Different options emerging to help machine vision check against pre-determined ground truth
  • Calibration and synchronisation of multiple sensor types is a challenge
  • Using hardware specific processing techniques may improve performance — but also impairs plug & play
  • Virtual testing will be valuable but requires a robust feedback loop and real-world validation

The state of the art in mapping… Mapping companies have several layers of information such as: road layout; traffic information; photographic information; and lidar traces of the road and its surroundings. At present, vehicle navigation relies on basic inputs and workable levels of accuracy (a few metres). High definition mapping allows a car to be more precise about its surroundings and relieves sensors of the need to fully determine the environment for themselves. Detailed lidar-based maps of the roadside can be used to build a “Road DNA” that autonomous systems can reference. AI can also be used to update maps. Crowd-sourced position data helps to determine where road layouts have changed (because everyone appears to be going off-road). Currently, AI sends these problems for human investigation but in future it could make decisions for itself. There may be value in collecting images from user-vehicles to update maps, both for ground truth and road sign interpretation.
An in-depth review of the StixelNet image processing technique… This method breaks images down into lines of pixels (columns in the case study) and then determines the closest pixel, allowing identification of ground truth (kerbside, people and cars) and free space. Camera data can be combined with lidar traces from the same vehicle to allow the system to train the camera recognition using the laser data. The positive of this approach is that it is continuous and scalable — more cars added to the fleet equals faster learning. The downside is that it is difficult to calibrate and synchronise cameras and lidar on the same vehicle to the accuracy required. It is also difficult to write the algorithms — several processing options are available, all with different weaknesses. Studies indicate that systems trained on the camera and lidar data showed better results than stereo cameras and better than expected performance on close-up images.
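A minimal sketch of the column-wise idea (not the actual StixelNet network): given a per-pixel range image, the closest pixel in each column marks the boundary between obstacles and free space, and lidar-derived ranges can supply the training labels for a network that predicts that boundary from the camera alone.
```python
# Simplified, hypothetical sketch of the column-wise principle described above.
import numpy as np

def closest_pixel_per_column(depth_map: np.ndarray) -> np.ndarray:
    """depth_map: HxW array of per-pixel range in metres (e.g. projected lidar).
    Returns one row index per column: the closest point in that column."""
    return depth_map.argmin(axis=0)

# Toy example: a 480x640 depth image with ranges between 5 m and 80 m.
depths = np.random.uniform(5.0, 80.0, size=(480, 640))
boundary_rows = closest_pixel_per_column(depths)
print(boundary_rows.shape, boundary_rows[:5])

# In the approach above, lidar ranges projected into the camera frame would
# supply training labels for a network predicting these rows from the image alone.
```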
Simulation and assessment of images captured by different types of camera… Measures of image quality that go beyond traditional quantitative techniques and are relevant to machine learning can be identified. Software has been developed that can simulate the images a camera will create in particular lighting conditions and using a range of filters. One of the findings is that image processing for machine vision may be able to forego some of the steps currently performed in digital photography (e.g. sRGB). This could save time and processing power. Research has a wider scope than autonomous vehicles but the application looks promising. A transparent and open source platform looks beneficial.
Implications of deep learning… Neural networks bring high purchasing costs and energy consumption. Some of these costs do not provide a proportional increase in accuracy; there is a law of diminishing returns. It may be better to reduce the cost of one part of the system and purchase additional elements, whilst retaining some of the saving. For instance, going to a single 5mm x 5mm processor rather than two 3mm x 3mm acting in series reduces the power consumption by about half.
Creating virtual environments… researchers used scenarios designed for video games to look at visual detection. The process developed creates grid information to assess the underlying physics and then layers skins on top to create realistic images. Driving characteristics and reaction of both the target vehicle and other vehicles can therefore be modelled, including the effect of collisions. The same information can also be used to create artificial data such as a lidar trace.
The importance of hardware in artificial intelligence… test vehicles can now drive autonomously without reference to lane markings because of their free space detection ability. Power consumption is reducing — a chip commonly used in prototype AVs uses 80W but its replacement requires 30W despite an increase in processing capacity. It may be beneficial to use processing software that is designed around the hardware. Some studies indicate that properly matched chipsets and processing suites can reduce latency and improve performance. An example end-to-end research project, which has several neural layers and multiple sensors including camera and lidar, still finds decision making difficult in the following areas: gridlock, extreme weather, temporary road layouts, muddy tracks and difficult turns. There are also numerous edge cases and multi-party trolley problems.
The challenges of developing the next generation of technologies… Although lane keeping can be further improved, it is necessary to develop systems with a different approach, especially because in many cases there are no clear lane markings. One potential method is training networks to detect extreme weather and tune object recognition based on that information — for instance a camera may be best in sunny conditions, but 3D data may be best where lane markings become nearly invisible due to snow. There is also a downside to sensor improvement… for instance, as camera resolution improves, the data labelling may become unusable and need replacement (possibly manually). Power consumption of prototype chips is too high. Some concept demonstration chips draw 250W, whereas in series production a processor needs to be below 4W.
The challenges of helping self-driving AI to learn and be tested… Driving situations have a long tail — there are typical situations that recur with high probability and critical situations that have low probability, such as a child running into the road chasing a ball. Despite the child running into the road being low probability, it is important to replicate multiple situations (big child, small child, from the left, from the right etc). Although difficult to get perfect digital realities, it is possible to get close and then to validate against real world experiments. It is important to get feedback from the AI about why it performed a certain way during the simulated event — this will identify bugs in the model where unrealistic situations are created, preventing mis-training of the AI. Considerable challenges to achieving this vision remain: creating the models; constructing realistic scenarios automatically; ensuring full coverage of likely circumstances; getting realistic sensor data; and creating a feedback loop to real world tests. Not to mention getting everyone involved and doing it quickly! Happily, some simulation data is already usable, despite not being perfect. When one group evaluated how ADAS systems on the market today reacted to children running into the road (physical trials using dummies), the results suggested performance could improve through combining existing analysis and decision-making techniques. Although at least one vehicle passed each test, none passed them all.

Sensor fusion & data

Key Takeaways

  • There are many opportunities to reuse knowledge on other applications but the transfer is unlikely to be seamless
  • There are ways around complex problems without solving them — for instance vehicle re-routing or copying others
  • Still unclear whether self-driving AI should be a number of small modules or a centralised system
  • Large evolutions in hardware capability likely to reduce the value of previously collected data

Examples of autonomous driving technology being applied across different use cases… A fully autonomous sensor set is likely to contain five cameras, five radars, lidar covering 360° and ultrasonic sensors. This creates plenty of challenges, including integration problems and extreme weather conditions. Real world experience from production and research projects is useful here. The first case study was the execution of ADAS in trucks. The initial translation of passenger car technology (using the same type of camera, but mounting it higher) meant that the field of vision over short distances was reduced. In the second generation, a fish eye camera was added alongside the original camera. This provides enhanced short distance recognition whilst preserving the strengths of the existing system over longer distances. The second example was a prototype automated tractor-trailer coupling system where the vehicle lines up with the trailer at the same time as detecting any objects (e.g. humans) in the way. This is done in conditions where sensor input may be impaired, for example when the camera gets mud on it.
Practical learnings from driverless research and production implementation… full autonomy in all situations is a massive challenge. If it can be limited, perhaps to an urban use case such as robo taxis, then it becomes much more attainable. The downside is that limiting the application is likely to reduce private vehicle take-up. There remains a substantial difference of opinion among many in the driverless development community about how important different AI capabilities are. For instance, Waymo has dedicated significant resource to understanding the hand gestures of traffic policemen, whilst others assign this lower importance. There still appears to be no concrete approach for getting from conventional programming with limited machine learning to decision-making AI that can deal with complex edge cases (such as Google’s famous example of ducks being chased by a woman in a wheelchair). If one does exist, it has not been submitted for peer review. Cross domain learning seems like a big part of the answer. Instead of AI trying to understand hand gestures by policemen, why not give control to a remote operator, or even re-route and avoid the problem altogether? It seems almost certain that V2V and V2G communication is necessary. Extreme weather conditions, domain shift (changes in road layout) and complex traffic may all be too difficult for a single vehicle operating alone to overcome. It is also unclear whether the right approach is a small number of very capable systems or a larger grouping of software modules with clearly defined roles. It also seems that even today’s state of the art may not be good enough for the real world. Due to closure speeds, cameras rated for object identification on highways could need frame rates of 100 Hz to 240 Hz to be capable — this requires more powerful hardware. At the same time, OEMs want components that use less power. Selecting the right componentry also requires appropriate benchmarking to be developed. Camera systems cannot simply be assessed in terms of frame rate and resolution; latency and power consumption are also important. Audi is undertaking extensive comparison of learning techniques, hardware and processing methods. Some correlations are appearing: hardware-specific processing appears better than generic methods; using real, virtual and augmented learning together seems to improve decision making, but not in all analysis models.
Lessons learned from different object recognition experiments… self-parking systems are in production, as are research fleets testing automated highway and urban driving technologies. AI’s task can be divided into four elements. First is classification (recognising an object on its own). Second is object detection (spotting it within a scene). Third is scene understanding. Finally comes end-to-end (working out how to safely drive through the scene). Training is difficult due to limited availability of off-the-shelf data. There is none for ultrasonic, cameras or fish eye, and only a small amount for laser scans. Experiments continue on the best way to develop scene understanding. Lidar products can detect most vehicles at a range of 150m, but how should that data be handled — as single points or pre-clustered? Should object detection be 2D or 3D? Researchers are trying to develop processing that can spot patterns in point clouds to identify objects but is also intelligent enough to interpolate for point clouds of lower resolution (e.g. recognising objects with a 16-line lidar that were learned using 32-line data). Determining how best to use lidar data in concert with other sensors is a subject of ongoing research. For instance, OEM opinions differ on the minimum points-per-metre requirement.
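One way to explore the resolution question raised above is to emulate a sparser sensor from denser training data. The sketch below, a hypothetical illustration rather than any OEM's pipeline, simulates a 16-line scan from 32-line data by dropping alternate rings so that recognition trained on the dense cloud can be evaluated on the sparse one.
```python
# Hypothetical sketch: emulate a 16-line lidar from 32-line training data.
import numpy as np

def drop_alternate_rings(points: np.ndarray, ring_ids: np.ndarray) -> np.ndarray:
    """points: Nx3 array (x, y, z); ring_ids: N array of laser ring indices 0..31.
    Keep only even rings to emulate a 16-line sensor."""
    return points[ring_ids % 2 == 0]

# Toy cloud: 10,000 points with random ring assignments.
cloud = np.random.uniform(-75.0, 75.0, size=(10000, 3))
rings = np.random.randint(0, 32, size=10000)
sparse_cloud = drop_alternate_rings(cloud, rings)
print(cloud.shape, sparse_cloud.shape)
```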

Simulation, testing & validation of artificial intelligence

Key Takeaways

  • Self-driving AI requires both innovation and functional safety — may need to tilt today’s balance more towards safety
  • With a greater volume of properly annotated data, new machine learning techniques can be employed

Safety considerations for self-driving vehicles… Autonomy depends on collaboration between two very different disciplines: functional safety and artificial intelligence. Functional safety is associated with strong discipline, conformance to protocols, standardisation and making things purposefully boring. Artificial intelligence is highly disruptive, innovative and uses multiple approaches. Recent crashes by vehicles in driverless mode show that AI could be more safety conscious — that is not to say that AI is to blame for the example accidents, but that if the vehicle had driven more cautiously the accident might have been avoided. For AI systems to be approved by regulators it is likely that they need to: lower their exposure to accidents; act in ways that other road users find highly predictable; reduce the likely severity of collisions; and increase their control over the actions of surrounding 3rd parties. Self-driving vehicle creators must be able to explain the AI to regulators and the public. AI must be robust to domain shift (e.g. either be able to drive equally well in San Francisco and New York or be prevented from doing actions it cannot complete properly). AI must act consistently and take decisions that can be traced to the inputs it receives.
Research into machine learning techniques that improve image recognition… performance is improving but machines still find it very difficult to recognise objects because they do not look at pictures in the same way as humans. Machines see images as a large string of numbers, within which are combinations of numbers that form discrete objects within the picture. Learning what objects are is therefore not intuitive; it is pattern based and machines require large datasets of well annotated data in order to gain good recognition skills. Researchers have been developing a concept called “learning by association”. The key innovation is that beyond a comparison of a new and unlabelled image to an existing labelled image, the associations identified by the AI are then compared to a second labelled image to determine the confidence of a match. Training in this way led to enhanced results in tests and an improvement in recognition of a brand-new dataset that was added without a learning stage.
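The "learning by association" idea can be sketched as a round trip between labelled and unlabelled embeddings: an association is only trusted if walking from a labelled example to an unlabelled one and back lands on an example of the same class. The code below is a simplified illustration of that confidence check, not the researchers' implementation.
```python
# Minimal, hypothetical sketch of association-based confidence between
# labelled and unlabelled embeddings.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def round_trip_probabilities(labelled, unlabelled):
    """labelled: LxD embeddings, unlabelled: UxD embeddings.
    Returns an LxL matrix of probabilities of walking labelled -> unlabelled -> labelled."""
    sim = labelled @ unlabelled.T        # LxU similarity scores
    p_lu = softmax(sim, axis=1)          # labelled -> unlabelled step
    p_ul = softmax(sim.T, axis=1)        # unlabelled -> labelled step
    return p_lu @ p_ul                   # round-trip probabilities

labelled = np.random.randn(6, 16)        # 6 labelled embeddings
unlabelled = np.random.randn(20, 16)     # 20 unlabelled embeddings
p_round = round_trip_probabilities(labelled, unlabelled)
# Training would reward round trips that start and end on the same class.
print(p_round.shape)
```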

Live Poll Results

Attendees were periodically asked multiple choice questions which they answered using a smartphone app.

  • Most attendees want to judge a self-driving solution for themselves
  • Although some attendees believe self-driving will be a reality soon, the majority think it is post 2025
  • The biggest challenges were seen to be the lack of clarity from regulators and the need to test more
  • Attendees overwhelmingly believe that no one knows how much real-world testing will be required
  • In the face of a new and difficult situation, most attendees thought the vehicle should simply stop safely
  • Attendees expressed a clear preference for object recognition by multiple sensor types over camera or lidar alone
  • Only a minority see all the data being collected as constantly necessary, but most thought it was sometimes required
  • There was complete agreement on the need for redundancy, but a 50:50 split between high and low capability

In Closing: A Summary Of This Report

  • No one seems to be advocating a rules-based approach for autonomous driving.
    Although possible in concept, a set of detailed rules appears impractical in the real world because the volume of instructions needed to cover all driving conditions would be massive and debugging would likely fail to spot all errors or contradictions.
  • There needs to be a common view of the acceptable safety level for deployment of autonomous vehicles.
    This requires the input of regulators, industry, academia and probably 3rd party consumer groups too. In addition to the safe level, we need to determine how to measure it and confirm compliance — the second part is challenging without a test fleet covering tens of billions of miles.
  • Safety is related to the use case.
    The more complex you make the driving conditions, the more corner cases you’ll encounter. The scale of the problem can be reduced by steps such as: learning a particular area (geo-fencing) and not driving in extreme conditions or at very high speeds. Although this diminishes the value for the retail user, there are plenty of industrial applications that can operate within those constrained operating conditions.
  • There needs to be widespread collaboration on a shared framework for testing and validation of AI.
    Governments, companies and academia should all be involved, and it would ideally use open data not tied to specific simulation tools or sensor sets. The experts at the event were wary of letting driverless cars transport their families without seeing safety data beforehand (government and manufacturer assurances weren’t trusted).
  • Work needs to be done on explaining AI.
    There are big differences between the capabilities non-technical people think AI has and what it is capable of — there should be no talk of killer robots. At the same time, deep learning techniques mean that the system function cannot be explained in the same way as traditional coding. New ways to explain how the system operates are required and without them building trust will be very difficult. It could even be necessary for the AI to learn how to explain itself using natural language or other tools.
  • Virtual testing is vital.
    This is for three reasons: firstly, simulation dramatically decreases real world miles; secondly, because AI techniques like reinforcement learning need crashes to take place in order for the AI to learn; and thirdly because even real-world data becomes a simulation once you interact with it in a different way to the original situation. It’s better to do that virtually! For a virtual environment to be successful it must be something that can be replicated in the real world with the same results.
  • There is plenty of disagreement over the right approach to many areas.
    The event live poll highlighted differences of opinion regarding how AI should act, how much information it will be capable of processing and what level of redundancy was required for safe operation. More consistent was the high burden of proof that AI systems will be faced with and a view that currently no one really knows how to convincingly do that.
  • Implementation timing remains uncertain.
    In the event live polling, over a quarter of respondents believe that self-driving will be widespread by 2023 or even earlier. The majority believe that we will be waiting beyond 2025 — a healthy difference of opinion.

About Auto.AI
Auto.AI is Europe’s first platform bringing together all stakeholders who play an active role in the deep driving, imaging, computer vision, sensor fusion, perception and Level 4 automation scene. The event is run by we.CONECT Global Leaders, a young, owner-managed, medium-sized company based in the heart of Berlin with a subsidiary office in London. The next Auto.AI USA conference runs from March 11th to 13th, 2018, and the next European Auto.AI conference takes place from September 16th to 18th, 2018.
About Ad Punctum
Ad Punctum is a consulting and research firm founded by an ex-automotive OEM insider. We bring focused interest, an eye for the story and love of detail to research. Intellectual curiosity is at the centre of all that we do and helping companies understand their business environment better is a task that we take very seriously.
About The Author
Thomas Ridge is the founder and managing director of Ad Punctum, based in London. You may contact him by email at tridge@adpunctum.co.uk.

To Mirror or not to Mirror: How Camera Monitoring Systems are expanding the Driver’s Perspective

This article was authored by Jeramie Bianchi – Field Applications Manager at Texas Instruments.
Objects in the mirror are closer than they appear – this message is the tried-and-true safety warning that has reminded drivers for decades that their rearview mirrors reflect a slightly distorted view of reality. Despite their limitations, mirrors are vital equipment on the car, helping drivers reverse or change lanes. But today, advanced driver assistance systems (ADAS) are going beyond a mirror’s reflection to give drivers an expanded view from the driver’s seat through the use of cameras. See ADAS domain controller integrated circuits and reference designs here.
Camera monitoring systems (CMS), also known as e-mirrors or smart mirrors, are designed to provide the experience of mirrors but with cameras and displays. Imagine looking into a rearview mirror display and seeing a panoramic view behind your vehicle. When you look to your side mirror, you see a high-resolution display showing the vehicles to your side. These scenarios are becoming reality, as are other features such as blind-spot detection and park assist.
It’s important to understand the current transition from mirrors to CMS. It’s no surprise that systems in today’s vehicles are already leveraging ADAS features for mirrors. Most new vehicles in the past decade have added a camera to the back of the vehicle or attached a camera to the existing side mirror, with a display inside the vehicle to give drivers a different perspective of what’s behind or at the side of the vehicle.
Figure 1 shows the routing of this rearview camera and display system. The backup display is embedded in the rearview mirror and a cable routes to the rear of the vehicle.

Figure 1: Rearview mirror display and rearview camera for panoramic or backup views

The side mirror is different because the camera is located on the mirror. The side mirror still exists for viewing, but typically its camera activates when the driver uses a turn signal or shifts into reverse. During a turn or a lane change, the camera outputs a video feed to the infotainment display in the dashboard and may show a slightly different angle than the side mirror itself, as shown in Figure 2.

Figure 2: Side mirror with camera viewed on an infotainment display

Now that I’ve reviewed current CMS configurations that incorporate a mirror with a camera and display, it’s worth noting it’s possible to achieve a CMS rearview mirror replacement through the addition of one or two cameras installed on the rear of the vehicle.
From the rear of the vehicle, video data from an imager is input to TI’s DS90UB933 parallel interface serializer or DS90UB953 serializer with Camera Serial Interface (CSI)-2. This data is then serialized over a flat panel display (FPD)-Link III coax cable to a DS90UB934 or DS90UB954 deserializer, and then output to an application processor for video processing, such as a Jacinto™ TDAx processor, before being shown on a rearview mirror display. If the display is located far from the Jacinto applications processor, you will need a display interface serializer and deserializer to route the data over a coax cable again. You could use the DS90UB921 and DS90UB922 red-green-blue (RGB) format serializer and deserializer, respectively, or, if you’re implementing higher-resolution displays, the DS90UB947 and DS90UB948 Open Low-Voltage Differential Signaling Display Interface (LDI) devices.
Figure 3 shows the connections between these devices when using a display onboard with an applications processor.

Figure 3: Rearview mirror system block diagram

The second CMS is the side mirror replacement portion. The camera must be located in the same place where the mirror used to be. This camera’s video data displays a view of what the driver would see in the mirror. To achieve this, the camera data is serialized and sent over an FPD-Link III coax cable to a remote display located in the upper part of the door panel or included in the rearview mirror display. With a camera and display, the side view can now be placed in a more direct line of sight for the driver. For example, if the displays for both the side view and the rear view are included in the rearview mirror, the driver only needs to look in one location.
Another option available in a side mirror replacement is to add a second co-located camera with the first, but at a different viewing angle. The benefit of this setup versus a standard mirror is that with two differently angled cameras, one camera can be used for the view that a side mirror would have provided and the second camera can provide a wider field of view for blind-spot detection and collision warning features. Figure 4 shows a two-camera side mirror replacement system.

Figure 4: Side mirror replacement system block diagram

The question you may be asking now is why drivers need cameras and displays if they can achieve most of the same functionality with a mirror. The answer lies in the features that cameras can provide over mirrors alone. If only a side mirror exists, side collision avoidance is solely up to the driver. With a camera, the detection of a potential collision before a lane change could activate vehicle warning alerts that prevent drivers from taking an unwise action. Panoramic rear views with wide field-of-view (FOV) rear cameras or a separate narrowly focused backup camera can provide a driver with different lines of sight and reduce or eliminate blind spots in a way that would not be possible with mirrors alone.
This is just the beginning, though, because in order for vehicles to move from driver assistance systems to autonomous systems, a CMS can be incorporated into sensor fusion systems. CMS has the opportunity to incorporate ultrasonics and possibly even radar. The fusion of rear and side cameras with ultrasonics adds the capability to assist drivers in parking or can even park the vehicle for them. Radars fused with side mirrors will add an extra measure of protection for changing lanes and even side collision avoidance.
To learn more about how to implement sensor fusion, check out the follow-up blog posts on park assist sensor fusion using CMS and ultrasonics or front sensor fusion with front camera and radar for lane departure warning, pedestrian detection and even assisted braking.

A better LiDAR for the Future?

Dozens of startups are striving to enter the autonomous vehicle market, hoping to bring innovation to the next generation of automobiles. From software companies that focus on artificial intelligence to hardware manufacturers that are building essential components, AV technology has been a boon for entrepreneurs.
Not all startups are created equal, however. While there are dozens of companies tackling the same group of problems, very few are ready to deploy their offerings. This is due to the immense complexities associated with self-driving technology, whether it’s the algorithm under the hood or the sensor on top.
Innoviz Technologies, a startup out of Kfar Saba, Israel, has zeroed in on one key challenge: LiDAR. The company is developing low-cost solid state LiDAR solutions for the AV market. Its products are not yet ready to be deployed in motor vehicles, but the firm’s progress has already attracted $82 million from investors, including Aptiv (formerly Delphi Automotive), Magna International, Samsung Catalyst and Softbank Ventures Korea.
“We got amazing feedback from companies that visited us (at CES) and realized what we have is already working well,” said Omer Keilaf, co-founder and CEO of Innoviz.

Solving Problems

Keilaf said that one of the problems with LiDAR technology is that it can be degraded by cross interference if multiple cars are close together. From the beginning he wanted Innoviz’s solution to work seamlessly in a multi-LiDAR environment.
“When we started this company the first thing we did was try to understand what are the main challenges behind LiDAR,” said Keilaf. “We had a certain kind of cost on one side. We decided no matter what we do, if you want to have a solution that’s viable for the mass market, we need to make sure that we can sell it at one point for $100. Maybe not in the first two or three years where the volume is not that high, but you cannot design a LiDAR if you do not believe that at some point you can’t sell it for $100 because it won’t go to mass market.”
Innoviz also wanted to build a sensor that is sensitive enough to collect light from a long distance, but strong enough to endure the blinding rays of the sun, which have been problematic for some LiDAR solutions.
Many companies say they can overcome these obstacles, but Innoviz wanted to prove their accomplishments were applicable beyond the testing phase.
“I think the biggest challenge is not only to show a demo but to actually deliver a product that’s automotive-grade,” said Keilaf. “A product that’s very high-quality and is actually reliable. This is what the OEMs want. They want to count on you to actually deliver a product in two years, which is already automotive-grade and meets all the performance and quality needs. They don’t have any appetite for risk.”

More Than LiDAR

LiDAR may receive the most attention, but Keilaf realizes that self-driving cars will need to equip multiple technologies before they’re deployed.
“I think it’s clear that in order to have high functional safety, you need to rely on different technologies like LiDAR, cameras, radar, sonar and GPS,” he said. “And there’s so much data that OEMs need to analyze. You see cars today that are running those platforms with computers that drive a lot of power and heat. It’s not what we do, but it’s one of the hurdles today.”

About the author:

Louis Bedigian is an experienced journalist and contributor to various automotive trade publications. He is a dynamic writer, editor and communications specialist with expertise in the areas of journalism, promotional copy, PR, research and social networking.

Algolux Interview: Enabling Autonomous Vision

Dave Tokic
VP Marketing & Strategic Partnerships at Algolux

About Dave:

Dave Tokic is vice president of marketing and strategic partnerships at Algolux, with over 20 years of experience in the semiconductor and electronic design automation industries. Dave most recently served as senior director of worldwide strategic partnerships and alliances for Xilinx, driving solution and services partnerships across all markets. Previously, he held executive marketing and partnership positions at Cadence Design Systems, Verisity Design, and Synopsys and has also served as a marketing and business consultant for the Embedded Vision Alliance. Dave has a degree in Electrical Engineering from Tufts University.
What drives Dave in the job? Helping make cars safer by enabling the auto industry to provide next-generation computer vision and imaging capabilities today.

The Interview:

As part of his presentation at Auto.AI 2017, we had a chat with Dave about his vision of autonomous driving and its connection to Algolux. The company was part of our startup lounge, showcasing applications and technologies for autonomous and connected cars.

Autonomous Driving Innovations at IAA 2017

This year’s International Motor Show Germany (IAA) was opened by German Chancellor Angela Merkel on September 14th in Frankfurt am Main and once again attracted thousands of visitors. One of the most in-demand topics was autonomous driving. Both established car manufacturers and newcomers introduced their innovations, market-ready or not. Here are some of the show’s highlights, listed alphabetically.

Audi: towards Level 5 with Aicon

Audi covered all autonomy levels from 3 to 5. The Bavarians presented the semi-autonomous series-production A8 and a concept car based on the SUV e-tron Sportback intended for highly autonomous driving (Level 4). The vehicle is guided by an empathic AI that learns continuously and enables autonomous driving at up to 80 mph.
However, Audi’s true eye-catcher goes by the name Aicon, a concept study for Level 5 automation, i.e. fully autonomous driving. This electric vehicle, designed for the luxury sector, is meant to have a range of 800 kilometers. Audi also focused on the car’s interior, equipping it with a comfortable couch instead of a back seat and rotatable seats in the front row. Each passenger has a personal display; the main screen is situated below the windshield.
The car understands three types of communication: voice command, eye control and, of course, haptic control. In general, Audi put great emphasis on light effects, e.g. for visualizing information about the current driving mode or when somebody enters the car. Another innovation: people can enter the vehicle via their smartphone.

Bosch: parking & assistance systems

At the International Motor Show Germany, Bosch presented some concepts that the company had already unveiled before. First and foremost is the valet-parking system used in the parking lot of the Mercedes Museum: Bosch developed the sensors that enable a driverless Mercedes to navigate through the parking lot. Regarding connectivity, Bosch relies on over-the-air (OTA) updates.

Continental: BEE & CUbE

Automotive supplier Continental followed suit and showcased two projects: CUbE (Continental Urban mobility Experience), an autonomous electric shuttle developed in cooperation with EasyMile, and BEE (Balanced Economy and Ecology). The latter is Continental’s vision of connected traffic that works through collective intelligence and could be experienced in a virtual reality demo at Continental’s stand. Visitors could also marvel at the new 3D Flash Lidar sensor, which has no mechanical parts; these sensors are much more robust and cheaper than previous models.

Magna: platform for Level 4

The name says it: Magna introduced MAX4, a scalable platform for Level 4 automation including interfaces for all relevant sensors. It is intended to be compatible with most manufacturers’ systems. During previous tests in Berlin, Magna used a Jeep Grand Cherokee as its test vehicle.

Valeo: learning how to park

French supplier Valeo mainly drew attention with its Park4U Home system. The driver can teach the system how to park correctly; it collects data and learns the pattern. Once learned, Park4U can replicate the process by itself. Also worth mentioning: the new Lidar “SCALA” and MyMobius, a customization application that analyzes driving behavior.

VW: news from SEDRIC

Obviously Volkswagen was also present at the show. VW went straight for Level 5, introducing SEDRIC, its concept for full autonomy. No matter where you are or where you want to go: according to VW’s vision, people can use their smartphone to order a self-driving vehicle (SEDRIC) to get around in urban areas or travel long distances. That would mean a radical change in private transport.
These are only a few examples of what the International Motor Show Germany had in store. Nonetheless, I think these are great contributions to what future mobility may look like.

About the author:

David Fluhr is a journalist and owner of the digital magazine “Autonomes Fahren & Co”. He reports regularly on trends and technologies in the fields of Autonomous Driving, HMI, Telematics and Robotics. Link to his site: http://www.autonomes-fahren.de

Magneti Marelli invests in solid-state LiDAR expert LeddarTech

Magneti Marelli acquires a stake in the company and enters into a technical and commercial cooperation agreement.
LeddarTech is a Canadian company that develops a proprietary LiDAR (Light Detection And Ranging) technology integrated into semiconductors and sensor modules for self-driving cars and driver assistance systems. LeddarTech specializes in solid-state LiDAR systems that use infrared light to monitor the area around them.
LiDAR technology naturally aligns with the evolution of Magneti Marelli Automotive Lighting technologies towards autonomous driving purposes, building on the innovative “Smart Corner” solution presented at the 2017 Consumer Electronics Show (CES) in Las Vegas.

Magneti Marelli has acquired a stake in LeddarTech Inc., a worldwide leader in solid state LiDAR (Light Detection And Ranging) sensing technology. Magneti Marelli and LeddarTech have also entered into a technical and commercial cooperation agreement to jointly develop complete LiDAR systems aimed at autonomous driving applications and destined for integration in automotive lighting products for OEMs worldwide.
LiDAR is a sensor technology based on laser light that – in combination with cameras and radar – will enable autonomous driving levels 2 through 5 (from partial to full driving automation, as defined by SAE International’s standard).
“Our know-how in high-end lighting technologies paves the way to new applications aimed at autonomous driving systems, which will be crucial in the future of automotive” – said Pietro Gorlier, CEO of Magneti Marelli. “The investment in LeddarTech, with its proven and proprietary competence in solid-state LiDAR sensors, provides Magneti Marelli with the opportunity to partner with a technology leader with key expertise in this strategically important sector”.
Autonomous driving requires a range of sensors to provide information which enables the car to understand and navigate the environment. LiDAR technology is particularly suitable due to its advanced features, including precise object localization and multiple objects detection even if they are moving, or in difficult lighting and weather conditions.
Founded in 2007 in Quebec City, LeddarTech is a reference in optical detection and ranging technology. Thanks to its unique approach in remote object detection, the company has gained a leadership position in LiDAR.
In particular, LeddarTech develops solid-state LiDAR systems, which provide reliable, high-performing and cost-effective sensors. LeddarTech uses patented signal acquisition and processing techniques that generate cleaner return signals with better range and sensitivity than other solid-state LiDAR solutions.
Magneti Marelli has already started to exploit its electronics and lighting know-how, coupling them in the perspective of autonomous driving technology. The first development is the “Smart Corner” solution, already exhibited at CES Las Vegas 2017, that uses the areas of the car traditionally reserved for lighting systems – i.e. the corners – to bring together the various sensors useful for autonomous driving into one unit. Integrated into the advanced projector headlamps and tail lamps are solid state LiDAR, cameras, radar and ultrasonic sensors to create a modular, self-contained, efficient solution to package and locate the many sensors required to support autonomous capability.

About Magneti Marelli:

Magneti Marelli designs and produces advanced systems and components for the automotive industry. With 86 production units, 14 R&D centres in 21 countries, approximately 43,000 employees and a turnover of 7.9 billion Euro in 2016, the group supplies all the major carmakers in Europe, North and South America and the Asia Pacific region. The business areas include Electronic Systems, Lighting, Powertrain, Suspension and Shock Absorbing Systems, Exhaust Systems, Aftermarket Parts & Services, Plastic Components and Modules, Motorsport. Magneti Marelli is part of FCA.