OPTIS Webinar: Smart Headlamps – Light & Sensors Integration for Autonomous Driving

In the context of Autonomous Driving, sensors will appear everywhere in the vehicle, including inside car headlamps. Several Tier-1 suppliers have already shown concepts with LiDAR or camera integration in the headlamp. A headlamp is a tiny box that already contains several modules for lighting and signalling functions. So how can a sensor be added without compromising the visual signature or the sensor's performance?
In this webinar, we will study 3 aspects of smart headlamps:

Smart Design with Light Guide optimisation
Smart Lighting thanks to Digital Lighting
Smart Headlamp with Sensor integration

Webinar time and date:

The webinar will be held twice on June 26th, giving you the chance to join at the time that suits you best.

June 26th, 2018 – 10 to 10:30 AM CEST

June 26th, 2018 – 5 to 5:30 PM CEST

Signalling functions can be used for communication with other vehicles and pedestrians. Light guides will probably remain a cost-effective solution for displaying a homogeneous lit appearance. It is essential to minimize the workload and time required to reach an effective design. We will explain how Smart Design optimizes light guides using SPEOS Optical Shape Design and Optimizer.
Pixel Beam appears to be the ultimate lighting solution for a wide range of applications. Even with Autonomous Driving, glare-free beams and lane-marking projections will be needed to boost drivers’ confidence in the intelligence of the car. Smart Lighting can adapt to different driving conditions. If you want to evaluate different technologies (LCD, MEMS, µAFS) or test ideas (crosswalk projection, lane marking), dynamic simulation with OPTIS VRX is the key to identifying the relevant parameters and justifying technological choices.
Autonomous cars will require more sensors, and those sensors need to find space in the car. A headlamp is an electronics box with a transparent cover, positioned at the corner of the car, so integrating a camera and a LiDAR into the headlamp seems a promising idea. Smart Headlamps will be an essential component of autonomous driving. Optical simulation is needed to design the optical components, but more importantly to assess the opto-mechanical interactions between sources, lenses and support structures, considering variations in the different parts. As SPEOS is CAD-integrated, mechanical components can easily be moved and re-simulated to quickly assess the impact of changes. It can also be used to understand the eye-safety classification of the LiDAR emitter.
Through these three topics, we cover different challenges where OPTIS offers predictive simulation to design smarter products in a shorter time.

Webinar speakers:

Cedric Bellanger

Product Marketing Manager
OPTIS

Julien Muller

Product Owner SPEOS & VRX-Headlamp
OPTIS

Find out more about OPTIS' VRX 2018, the driving simulator that virtually reproduces the performance and behaviour of advanced headlighting systems.

Auto.AI: Conference Summary and Key Takeaways by the Chair

Executive Summary

Auto.AI 2017 was held in Berlin on the 28th and 29th of September. Attendees representing academia, original equipment manufacturers (OEMs), suppliers, start-ups and consultants shared ideas through presentations, peer-to-peer round table sessions and informal discussions. Attendees were asked questions using anonymous live polls throughout the event. There were four broad topics relevant to understanding the development of artificial intelligence in the automotive sector:

  • Artificial intelligence outlook including machine learning & deep learning techniques
  • Computer vision, imaging & perception
  • Sensor fusion & data
  • Simulation, testing & validation of artificial intelligence

Key Takeaways

  • No one seems to be advocating a rules-based approach for autonomous driving.
    Although possible in concept, a set of detailed rules appears impractical in the real world because the volume of instructions needed to cover all driving conditions would be massive and debugging would likely fail to spot all errors or contradictions.
  • There needs to be a common view of the acceptable safety level for deployment of autonomous vehicles.
    This requires the input of regulators, industry, academia and probably 3rd party consumer groups too. In addition to the safe level, we need to determine how to measure it and confirm compliance — the second part is challenging without a test fleet covering tens of billions of miles.
  • Safety is related to the use case.
    The more complex you make the driving conditions, the more corner cases you’ll encounter. The scale of the problem can be reduced by steps such as: learning a particular area (geo-fencing) and not driving in extreme conditions or at very high speeds. Although this diminishes the value for the retail user, there are plenty of industrial applications that can operate within those constrained operating conditions.
  • There needs to be widespread collaboration on a shared framework for testing and validation of AI.
    Governments, companies and academia should all be involved, and it would ideally use open data not tied to specific simulation tools or sensor sets. The experts at the event were wary of letting driverless cars transport their families without seeing safety data beforehand (government and manufacturer assurances weren’t trusted).
  • Work needs to be done on explaining AI.
    There are big differences between the capabilities non-technical people think AI has and what it is capable of — there should be no talk of killer robots. At the same time, deep learning techniques mean that the system function cannot be explained in the same way as traditional coding. New ways to explain how the system operates are required and without them building trust will be very difficult. It could even be necessary for the AI to learn how to explain itself using natural language or other tools.
  • Virtual testing is vital.
    This is for three reasons: firstly, simulation dramatically decreases real world miles; secondly, because AI techniques like reinforcement learning need crashes to take place in order for the AI to learn; and thirdly because even real-world data becomes a simulation once you interact with it in a different way to the original situation. It’s better to do that virtually! For a virtual environment to be successful it must be something that can be replicated in the real world with the same results.
  • There is plenty of disagreement over the right approach to many areas.
    The event live poll highlighted differences of opinion regarding how AI should act, how much information it will be capable of processing and what level of redundancy was required for safe operation. More consistent was the high burden of proof that AI systems will be faced with and a view that currently no one really knows how to convincingly do that.
  • Implementation timing remains uncertain.
    In the event live polling, over a quarter of respondents believe that self-driving will be widespread by 2023 or even earlier. The majority believe that we will be waiting beyond 2025 — a healthy difference of opinion. Health Warning: informal discussions revealed that in general the timescale comes down if the question is about level 4 specific use case vehicles on public roads (they operate on private land already) and goes further out if asked about go-anywhere level 5 vehicles.

Artificial intelligence outlook including machine learning & deep learning techniques

Key Takeaways

  • Vehicle electronics are growing in value but require further standardisation and reductions in power consumption
  • Data storage is a major issue — techniques from traditional big data do not work very well with images and video
  • Image recognition is improving but research would benefit from wider availability of labelled video datasets
  • Further work is required to create greater depth of scenarios and improve simulation processing times
  • Realistic visualisation of simulations for humans is different to modelling the sensor inputs vehicle AI interprets
  • Understanding machine learning isn’t always hard… sometimes it comes up with simpler rules than we expect!

Market Growth… is forecast at 6% compound annual growth rate (CAGR) for electronics — reaching $1,600 of average vehicle content. For semiconductors the figures are even more impressive — 7.1% CAGR. The specifics of market development are less clear — these growth figures include L1/2 systems but not full autonomy. Although there is a definite role for the technology, standardisation is a must, requiring a yet-to-be-established framework. Safety is a big challenge: without clear agreement on what safety level is acceptable, definite technical standards cannot be set. Another open issue is the degree to which the car will have to make decisions for itself versus interacting with infrastructure and other vehicles; the problem with the latter is the latency (response time) of exchanging large data sets. Finally, self-driving chipsets must consume significantly less power than current prototypes.
Researchers have gained new insights by translating real world crash data into a virtual environment… the information came from records collected by the regulators. Technology in production today sometimes makes simple errors (e.g. lane keeping locking onto a bike path rather than the kerbside). Research has shown that it is possible to correlate the virtual models with real world data (for instance replicating a collision with a pedestrian), but the challenge of testing thoroughly remains substantial: multiple different environments are needed; there are thousands of types of crash situation; and each vehicle has unique attributes. Through all of this, it is vital that the results correlate to the real world. Researchers aim to reduce modelling times from days (currently) to hours — real time is the ultimate goal. Without improvements in processing speed, virtual sample sets are in danger of remaining too small or too slow to be usable.
The challenge of staying aware of the state of the art in data processing and artificial intelligence… large OEMs are interested in AI in the broadest sense — self-driving, handling customer data and improving business efficiency. The first challenge is the data itself. Not only will the car become a massive source of data, but much of it does not easily fit into existing data structures — images and video are more complicated and unstructured than traditional inputs. With images, it may be necessary to pre-process and store the resulting output, rather than the image itself, to reduce storage space and retrieval time. Image capture and object recognition were identified as a definite area where more work is required and where machine learning is already relevant; for instance, recognising brands of truck trailer may help build broader recognition of what a trailer looks like. By studying a whole range of machine learning activities (a huge resource undertaking), organisations can develop an understanding of the best fit between problems, data collection methods and analysis tools.
There are different ways of obtaining image data in real time… dedicated chips can translate lidar traces (compatible with multiple lidar types) into an instantly available augmented image. This allows object identification from the raw data and for less expensive lidar units to be used. Examples showed a 16-line lidar unit being augmented for higher resolution.
Machine learning already has applications in ADAS feature sets… it has been put to use in two frequently encountered highway situations: roadworks and other drivers cutting in and out. Video and radar example data was combined with machine learning and human guidance about acceptable limits of driving behaviour. Interestingly, in both cases, although the machine learning was given multiple data inputs, only a few key elements were required to provide very good accuracy in use. This reduces the sensor inputs and the complexity of processing. For example, machine learning identified a high correlation between the angle of the vehicle in front and whether it was intending to cut in, in preference to more complicated rules combining relative speeds and side-to-side movement. Multiple sensors should still be used for decision making: although a camera is better for monitoring many of the situations, its limited field of view means that radar needs to be used in camera blind spots.
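As a toy illustration of that finding, the sketch below compares a classifier trained on a single lead-vehicle-angle feature against one trained on several features. The data is synthetic and the feature names are invented for the example; this is not the production system described above.

    # Minimal sketch: single-feature vs multi-feature cut-in classifier.
    # All data is synthetic; real features would come from fused tracks.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 2000
    # Hypothetical features for the vehicle ahead: yaw angle relative to
    # our lane (deg), lateral speed (m/s), relative longitudinal speed (m/s).
    yaw = rng.normal(0, 2.0, n)
    lat_speed = rng.normal(0, 0.3, n)
    rel_speed = rng.normal(0, 3.0, n)
    # Synthetic ground truth: cut-ins correlate strongly with yaw angle.
    cut_in = (yaw + rng.normal(0, 0.5, n)) > 2.5

    X_all = np.column_stack([yaw, lat_speed, rel_speed])
    X_yaw = yaw.reshape(-1, 1)

    for name, X in [("yaw only", X_yaw), ("all features", X_all)]:
        X_tr, X_te, y_tr, y_te = train_test_split(X, cut_in, random_state=0)
        acc = LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te)
        print(f"{name}: accuracy {acc:.2f}")

On data like this, the single-feature model performs nearly as well as the full model, which is the shape of the result reported at the event.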
The car will become one of the biggest generators of natural language data… and its proper use will enable manufacturers to create a personalised experience that the customer values. For relatively complex commands (“when X happens, then please do Y”), contemporary techniques achieve 95% correct transcription of what the customer is saying and task completion in the mid-80s percent. This is encouraging but shows further development is needed. OEMs will also have to create ecosystems that allow them to control the customer experience inside the cabin, yet are seamless with the personal assistants the customer might have on their mobile phone or home speaker system.
New techniques are improving image recognition… Using industry benchmark tests, computer image recognition is now superior to humans. In some specific use cases this already has practical uses, for example a smartphone app that assesses melanomas. However, at around 97% correct identification of a random image (versus about 95% for humans), improvement is still required. Different methods are being tested, with greater progress on static images than video; partly due to difficulty but also because video has less training data: smaller libraries and fewer labelled categories. Video identification accuracy can be improved by running several different methods in parallel. One of the most promising approaches is turning the video into a set of 2D images with time as the 3rd dimension — a technique pioneered by DeepMind (now part of Google). Combining this process with different assessment algorithms (such as analysing the first and nth frame rather than all frames), teams have achieved accuracy of nearly 90% for gesture recognition. Generally, late fusion (a longer gap between frames) gives better results than early fusion, though there is variation in which combination of processing algorithms yields the best accuracy. Progress is happening all the time. New ways of addressing machine learning problems sometimes create step changes, so improvement may not be at a linear rate.
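A minimal sketch of the frame-fusion idea, using the summary's definition of early versus late fusion (the gap between the frames being combined). Per-frame feature vectors stand in for the output of a 2D network; the data is random and purely illustrative.

    # Sketch of early vs late fusion over video frames, assuming each frame
    # has already been reduced to a feature vector (e.g. by a 2D CNN).
    import numpy as np

    rng = np.random.default_rng(1)
    n_frames, feat_dim = 30, 128
    clip = rng.normal(size=(n_frames, feat_dim))   # per-frame features

    def fuse(features, gap):
        """Concatenate the first frame with the frame `gap` steps later."""
        return np.concatenate([features[0], features[gap]])

    early = fuse(clip, gap=1)            # early fusion: adjacent frames
    late = fuse(clip, gap=n_frames - 1)  # late fusion: first and nth frame

    # The fused descriptor would then feed a classifier; the summary above
    # reports that larger gaps (late fusion) tended to score better.
    print(early.shape, late.shape)       # (256,) (256,)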
It is hard to create different virtual environments for vehicle testing… Using video game tools and very experienced developers, near photo-realistic models can be created, but this appears to be the easy part! Because the structure of computer graphical data is different to real life, models need to be adjusted to create the correct type of artificial sensor inputs. This is even more challenging with radar and lidar input data as the model must accurately simulate the “noise” — a factor of both the sensor and the material it is detecting. Perfecting this could take several years. More useful immediately is the ability to create virtual ground truth (e.g. that is a kerb) that can serve SAE Level 2 development. Because L1/2 inputs are more binary, sophisticated sensor simulation issues are less relevant. Researchers believe that a virtual environment of 10–15 km is sufficient to assist development of these systems, assuming the ability to impose different weather conditions (snow, heavy rain etc).
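To make the “noise” point concrete, here is an illustrative sketch that degrades an ideal simulated lidar trace with Gaussian range jitter and random dropouts. The noise model is a placeholder assumption; real models depend on the sensor and the target material, which is exactly why this is hard.

    # Illustrative noise injection for a simulated lidar trace. The noise
    # model (jitter sigma, dropout rate) is an assumption for the example.
    import numpy as np

    rng = np.random.default_rng(2)

    def degrade_lidar(ranges, sigma_m=0.03, dropout_p=0.05):
        """Add range jitter and randomly drop returns (NaN = no return)."""
        noisy = ranges + rng.normal(0.0, sigma_m, ranges.shape)
        lost = rng.random(ranges.shape) < dropout_p
        noisy[lost] = np.nan
        return noisy

    ideal = np.linspace(5.0, 50.0, 10)   # ideal ranges from a renderer, metres
    print(degrade_lidar(ideal))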

Computer vision, imaging & perception

Key Takeaways

  • Different options emerging to help machine vision check against pre-determined ground truth
  • Calibration and synchronisation of multiple sensor types is a challenge
  • Using hardware specific processing techniques may improve performance — but also impairs plug & play
  • Virtual testing will be valuable but requires a robust feedback loop and real-world validation

The state of the art in mapping… Mapping companies have several layers of information such as: road layout; traffic information; photographic information; and lidar traces of the road and its surroundings. At present, vehicle navigation relies on basic inputs and workable levels of accuracy (a few metres). High definition mapping allows a car to be more precise about its surroundings and relieves sensors of the need to fully determine the environment for themselves. Detailed lidar-based maps of the roadside can be used to build a “Road DNA” that autonomous systems can reference. AI can also be used to update maps. Crowd-sourced position data helps to determine where road layouts have changed (because everyone appears to be going off-road). Currently, AI sends these problems for human investigation but in future it could make decisions for itself. There may be value in collecting images from user-vehicles to update maps, both for ground truth and road sign interpretation.
An in-depth review of the StixelNet image processing technique… This method breaks images down into lines of pixels (columns in the case study) and then determines the closest pixel, allowing identification of ground truth (kerbside, people and cars) and free space. Camera data can be combined with lidar traces from the same vehicle, allowing the system to train the camera recognition using the laser data. The positive of this approach is that it is continuous and scalable — more cars added to the fleet equals faster learning. The downside is that it is difficult to calibrate and synchronise cameras and lidar on the same vehicle to the accuracy required. It is also difficult to write the algorithms — several processing options are available, all with different weaknesses. Studies indicate that systems trained on the camera and lidar data showed better results than stereo cameras and better-than-expected performance on close-up images.
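A toy illustration of the column-wise idea (not the actual network): for each image column, find the nearest return, which marks the base of the closest obstacle; everything between the vehicle and that obstacle is treated as free space. A synthetic depth map stands in for real sensor data.

    # Toy column-wise nearest-obstacle search in the spirit of StixelNet.
    import numpy as np

    rng = np.random.default_rng(3)
    h, w = 60, 80
    depth = rng.uniform(5.0, 80.0, size=(h, w))  # synthetic per-pixel depth, metres

    closest_row = depth.argmin(axis=0)    # per column: row of the nearest pixel
    closest_range = depth.min(axis=0)     # per column: distance to that obstacle

    # Everything nearer than closest_range in a column is drivable free space
    # in this simplified reading of the technique.
    print(closest_row[:5], np.round(closest_range[:5], 1))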
Simulation and assessment of images captured by different types of camera… Measures of image quality that go beyond traditional quantitative techniques and are relevant to machine learning can be identified. Software has been developed that can simulate the images a camera will create in particular lighting conditions and using a range of filters. One of the findings is that image processing for machine vision may be able to forego some of the steps currently performed in digital photography (e.g. sRGB). This could save time and processing power. Research has a wider scope than autonomous vehicles but the application looks promising. A transparent and open source platform looks beneficial.
Implications of deep learning… Neural networks bring high purchasing costs and energy consumption. Some of these costs do not provide a proportional increase in accuracy; there is a law of diminishing returns. It may be better to reduce the cost of one part of the system and purchase additional elements, whilst retaining some of the saving. For instance, going to a single 5mm x 5mm processor rather than two 3mm x 3mm processors acting in series reduces the power consumption by about half.
Creating virtual environments… researchers used scenarios designed for video games to look at visual detection. The process developed creates grid information to assess the underlying physics and then layers skins on top to create realistic images. Driving characteristics and reaction of both the target vehicle and other vehicles can therefore be modelled, including the effect of collisions. The same information can also be used to create artificial data such as a lidar trace.
The importance of hardware in artificial intelligence… test vehicles can now drive autonomously without reference to lane markings because of their free-space detection ability. Power consumption is reducing — a chip commonly used in prototype AVs uses 80W but its replacement requires 30W despite an increase in processing capacity. It may be beneficial to use processing software that is designed around the hardware; some studies indicate that properly matched chipsets and processing suites can reduce latency and improve performance. An example end-to-end research project, which has several neural layers and multiple sensors including camera and lidar, still finds decision making difficult in the following areas: gridlock, extreme weather, temporary road layouts, muddy tracks and difficult turns. There are also numerous edge cases and multi-party trolley problems.
The challenges of developing the next generation of technologies… Although lane keeping can be further improved, it is necessary to develop systems with a different approach, especially because in many cases there are no clear lane markings. One potential method is training networks to detect extreme weather and tune object recognition based on that information — for instance a camera may be best in sunny conditions, but 3D data may be best where snow makes the lane markings nearly invisible. There is also a downside to sensor improvement… for instance, as camera resolution improves, existing data labelling may become unusable and need replacement (possibly manually). Power consumption of prototype chips is too high: some concept demonstration chips draw 250W, while in series production a processor needs to be below 4W.
The challenges of helping self-driving AI to learn and be tested… Driving situations have a long tail — there are typical situations that recur with high probability and critical situations that have low probability, such as a child running into the road chasing a ball. Despite the child running into the road being low probability, it is important to replicate multiple situations (big child, small child, from the left, from the right etc). Although difficult to get perfect digital realities, it is possible to get close and then to validate against real world experiments. It is important to get feedback from the AI about why it performed a certain way during the simulated event — this will identify bugs in the model where unrealistic situations are created, preventing mis-training of the AI. Considerable challenges to achieving this vision remain: creating the models; constructing realistic scenarios automatically; ensuring full coverage of likely circumstances; getting realistic sensor data; and creating a feedback loop to real world tests. Not to mention getting everyone involved and doing it quickly! Happily, some simulation data is already usable, despite not being perfect. When one group evaluated how ADAS systems on the market today reacted to children running into the road (physical trials using dummies), the results suggested performance could improve through combining existing analysis and decision-making techniques. Although at least one vehicle passed each test, none passed them all.
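Enumerating variants of a single critical scenario is mostly combinatorics; a hedged sketch is below. The parameter names and values are invented for illustration and are not from the event.

    # Sketch of enumerating variants of one critical scenario ("child runs
    # into the road"). Parameters are illustrative assumptions.
    from itertools import product

    child_size = ["small", "medium", "large"]
    entry_side = ["left", "right"]
    entry_speed_mps = [1.5, 3.0, 5.0]      # walking to sprinting
    occlusion = ["none", "parked_car"]

    scenarios = [
        {"size": s, "side": d, "speed": v, "occlusion": o}
        for s, d, v, o in product(child_size, entry_side, entry_speed_mps, occlusion)
    ]
    print(len(scenarios), "variants, e.g.", scenarios[0])

Each generated variant would then be run in simulation, with the AI's reasoning logged to catch unrealistic model artefacts before they mis-train the system.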

Sensor fusion & data

Key Takeaways

  • There are many opportunities to reuse knowledge on other applications but the transfer is unlikely to be seamless
  • There are ways around complex problems without solving them — for instance vehicle re-routing or copying others
  • Still unclear whether self-driving AI should be a number of small modules or a centralised system
  • Large evolutions in hardware capability likely to reduce the value of previously collected data

Examples of autonomous driving technology being applied across different use cases… A fully autonomous sensor set is likely to contain five cameras, five radars, lidar covering 360° and ultrasonic sensors. This creates plenty of challenges including integration problems and extreme weather conditions. Real world experience from production and research projects is useful here. The first case study was the execution of ADAS in trucks. The initial translation of passenger car technology (using the same type of camera, but mounting it higher) meant that the field of vision over short distances was reduced. In the second generation, a fish-eye camera was added alongside the original camera. This provides enhanced short distance recognition whilst preserving the strengths of the existing system over longer distances. The second example was a prototype automated tractor-trailer coupling system where the vehicle lines up the trailer at the same time as detecting any objects (e.g. humans) in the way. This is done in conditions where sensor input may be impaired, for example the camera is likely to get mud on it.
Practical learnings from driverless research and production implementation… full autonomy in all situations is a massive challenge. If it can be limited, perhaps to an urban use case such as robo-taxis, then it becomes much more attainable. The downside is that limiting the application is likely to reduce private vehicle take-up. There remains a substantial difference of opinion in the driverless development community about how important different AI capabilities are. For instance, Waymo has dedicated significant resource to understanding the hand gestures of traffic policemen, whilst others assign this lower importance. There still appears to be no concrete approach for moving from conventional programming with limited machine learning to decision-making AI that can deal with complex edge cases (such as Google’s famous encounter with a woman in a wheelchair chasing ducks); if one does exist, it has not been submitted for peer review. Cross-domain learning seems like a big part of the answer. Instead of AI trying to understand hand gestures by policemen, why not give control to a remote operator, or even re-route and avoid the problem altogether? It seems almost certain that V2V and V2G communication is necessary. Extreme weather conditions, domain shift (changes in road layout) and complex traffic may all be too difficult for a single vehicle operating alone to overcome. It is also unclear whether the right approach is a small number of very capable systems or a larger grouping of software modules with clearly defined roles. It also seems that even today’s state of the art may not be good enough for the real world. Due to closure speeds, cameras rated for object identification on highways could need frame rates of 100 Hz to 240 Hz to be capable — this requires more powerful hardware. At the same time, OEMs want components that use less power. Selecting the right componentry also requires appropriate benchmarking to be developed: camera systems cannot simply be assessed in terms of frame rate and resolution; latency and power consumption are also important. Audi is undertaking extensive comparison of learning techniques, hardware and processing methods. Some correlations are appearing: hardware-specific processing appears better than generic methods; combining real, virtual and augmented learning seems to improve decision making, but not in all analysis models.
Lessons learned from different object recognition experiments… self-parking systems are in production, as are research fleets testing automated highway and urban driving technologies. AI’s task can be divided into four elements. First is classification (recognising an object on its own). Second is object detection (spotting it within a scene). Third is scene understanding. Finally comes end-to-end (working out how to safely drive through the scene). Training is difficult due to limited availability of off-the-shelf data: there is none for ultrasonic, cameras or fish-eye, and only a small amount for laser scans. Experiments continue on the best way to develop scene understanding. Lidar products can detect most vehicles at a range of 150m, but how should that data be handled — as single points or pre-clustered? Should object detection be 2D or 3D? Researchers are trying to develop processing that can spot patterns in point clouds to identify objects but is also intelligent enough to interpolate for point clouds of lower resolution (e.g. recognising objects with a 16-line lidar that were learned using 32-line data). Determining how best to use lidar data in concert with other sensors is a subject of ongoing research; for instance, OEM opinions differ on the minimum points-per-metre requirement.
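One way to test that interpolation claim is to synthesise lower-resolution input from existing high-resolution recordings. A minimal sketch, under the assumption that points are grouped by ring: keep every second ring of a 32-line cloud to emulate a 16-line unit.

    # Simulate a 16-line lidar from 32-line data by keeping alternate rings,
    # so a detector trained on 32-line clouds can be tested on sparser input.
    # The (rings, points, xyz) layout is an assumption for the example.
    import numpy as np

    rng = np.random.default_rng(4)
    rings, pts_per_ring = 32, 360
    cloud32 = rng.normal(size=(rings, pts_per_ring, 3))  # x, y, z per point

    cloud16 = cloud32[::2]               # keep rings 0, 2, 4, ... -> 16 rings
    print(cloud32.shape, cloud16.shape)  # (32, 360, 3) (16, 360, 3)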

Simulation, testing & validation of artificial intelligence

Key Takeaways

  • Self-driving AI requires both innovation and functional safety — may need to tilt today’s balance more towards safety
  • With a greater volume of properly annotated data, new machine learning techniques can be employed

Safety considerations for self-driving vehicles… Autonomy depends on collaboration between two very different disciplines: functional safety and artificial intelligence. Functional safety is associated with strong discipline, conformance to protocols, standardisation and making things purposefully boring. Artificial intelligence is highly disruptive, innovative and uses multiple approaches. Recent crashes by vehicles in driverless mode show that AI could be more safety conscious — that is not to say that AI is to blame for the example accidents, but that if the vehicle had driven more cautiously the accident might have been avoided. For AI systems to be approved by regulators it is likely that they need to: lower their exposure to accidents; act in ways that other road users find highly predictable; reduce the likely severity of collisions; and increase their control over the actions of surrounding 3rd parties. Self-driving vehicle creators must be able to explain the AI to regulators and the public. AI must be robust to domain shift (e.g. either be able to drive equally well in San Francisco and New York or be prevented from doing actions it cannot complete properly). AI must act consistently and take decisions that can be traced to the inputs it receives.
Research into machine learning techniques that improve image recognition… performance is improving but machines still find it very difficult to recognise objects because they do not look at pictures in the same way as humans. Machines see images as a large string of numbers, within which are combinations of numbers that form discrete objects within the picture. Learning what objects are is therefore not intuitive; it is pattern based, and machines require large datasets of well annotated data in order to gain good recognition skills. Researchers have been developing a concept called “learning by association”. The key innovation is that beyond a comparison of a new and unlabelled image to an existing labelled image, the associations identified by the AI are then compared to a second labelled image to determine the confidence of a match. Training in this way led to enhanced results in tests and an improvement in recognition of a brand-new dataset that was added without a learning stage.
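A toy numeric version of that round trip, assuming it matches the published “learning by association” idea: walk from labelled embeddings to unlabelled ones and back, and check that the walk returns to the starting class. Random vectors stand in for learned embeddings.

    # Toy "learning by association" round trip on random embeddings.
    import numpy as np

    rng = np.random.default_rng(5)
    a = rng.normal(size=(6, 16))   # labelled embeddings
    b = rng.normal(size=(10, 16))  # unlabelled embeddings

    def softmax(m, axis):
        e = np.exp(m - m.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    sim = a @ b.T                  # similarity labelled <-> unlabelled
    p_ab = softmax(sim, axis=1)    # walk A -> B
    p_ba = softmax(sim.T, axis=1)  # walk B -> A
    p_aba = p_ab @ p_ba            # round-trip probabilities, shape (6, 6)

    # Training would push p_aba towards "start and end on the same class",
    # which is what gives the confidence measure described above.
    print(np.round(p_aba.sum(axis=1), 3))  # each row is a distribution (sums to 1)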

Live Poll Results

Attendees were periodically asked multiple choice questions which they answered using a smartphone app.

  • Most attendees want to judge a self-driving solution for themselves
  • Although some attendees believe self-driving will be a reality soon, the majority think it is post 2025
  • The biggest challenges were seen to be the lack of clarity from regulators and the need to test more
  • Attendees overwhelmingly believe that no one knows how much real-world testing will be required
  • In the face of a new and difficult situation, most attendees thought the vehicle should simply stop safely
  • Attendees expressed a clear preference for object recognition by multiple sensor types over camera or lidar alone
  • Only a minority see all the data being collected as constantly necessary, but most thought it was sometimes required
  • There was complete agreement on the need for redundancy, but a 50:50 split between high and low capability

In Closing: A Summary Of This Report

  • No one seems to be advocating a rules-based approach for autonomous driving.
    Although possible in concept, a set of detailed rules appears impractical in the real world because the volume of instructions needed to cover all driving conditions would be massive and debugging would likely fail to spot all errors or contradictions.
  • There needs to be a common view of the acceptable safety level for deployment of autonomous vehicles.
    This requires the input of regulators, industry, academia and probably 3rd party consumer groups too. In addition to the safe level, we need to determine how to measure it and confirm compliance — the second part is challenging without a test fleet covering tens of billions of miles.
  • Safety is related to the use case.
    The more complex you make the driving conditions, the more corner cases you’ll encounter. The scale of the problem can be reduced by steps such as: learning a particular area (geo-fencing) and not driving in extreme conditions or at very high speeds. Although this diminishes the value for the retail user, there are plenty of industrial applications that can operate within those constrained operating conditions.
  • There needs to be widespread collaboration on a shared framework for testing and validation of AI.
    Governments, companies and academia should all be involved, and it would ideally use open data not tied to specific simulation tools or sensor sets. The experts at the event were wary of letting driverless cars transport their families without seeing safety data beforehand (government and manufacturer assurances weren’t trusted).
  • Work needs to be done on explaining AI.
    There are big differences between the capabilities non-technical people think AI has and what it is capable of — there should be no talk of killer robots. At the same time, deep learning techniques mean that the system function cannot be explained in the same way as traditional coding. New ways to explain how the system operates are required and without them building trust will be very difficult. It could even be necessary for the AI to learn how to explain itself using natural language or other tools.
  • Virtual testing is vital.
    This is for three reasons: firstly, simulation dramatically decreases real world miles; secondly, because AI techniques like reinforcement learning need crashes to take place in order for the AI to learn; and thirdly because even real-world data becomes a simulation once you interact with it in a different way to the original situation. It’s better to do that virtually! For a virtual environment to be successful it must be something that can be replicated in the real world with the same results.
  • There is plenty of disagreement over the right approach to many areas.
    The event live poll highlighted differences of opinion regarding how AI should act, how much information it will be capable of processing and what level of redundancy was required for safe operation. More consistent was the high burden of proof that AI systems will be faced with and a view that currently no one really knows how to convincingly do that.
  • Implementation timing remains uncertain.
    In the event live polling, over a quarter of respondents believe that self-driving will be widespread by 2023 or even earlier. The majority believe that we will be waiting beyond 2025 — a healthy difference of opinion.

About Auto.AI
Auto.AI is Europe’s first platform bringing together all stakeholders who play an active role in the deep driving, imaging, computer vision, sensor fusion, perception and Level 4 automation scene. The event is run by we.CONECT Global Leaders, a young, owner-managed, medium-sized company based in the heart of Berlin with a subsidiary office in London. The next Auto.AI USA conference runs from March 11th to 13th 2018 and the next European Auto.AI conference takes place from September 16th to 18th 2018.
About Ad Punctum
Ad Punctum is a consulting and research firm founded by an ex-automotive OEM insider. We bring focused interest, an eye for the story and love of detail to research. Intellectual curiosity is at the centre of all that we do and helping companies understand their business environment better is a task that we take very seriously.
About The Author
Thomas Ridge is the founder and managing director of Ad Punctum, based in London. You may contact him by email at tridge@adpunctum.co.uk.

Deloitte Study: Trust in AD Technology is on the Rise

Acceptance of autonomous car technology has been increasing recently, which is important for the industry: if nobody trusts the technology, there will be no one to buy it. Autonomous driving brings not only benefits but also challenges. Above all, driverless cars are meant to contribute to higher road safety, yet experts fear the threat of hacking attacks and car manipulation.
In this environment, studies asking for consumer opinions appear frequently. Recent research shows that consumer confidence is climbing, possibly as a result of growing PR and advertising spending on autonomous vehicles. Through showcasing and testing events, more and more people get to experience the technology first hand, and it can be assumed that this is another factor behind the growing trust.

Deloitte Study on Consumer Trust

Deloitte recently published the 2018 edition of its annual “Global Automotive Consumer Study”. Comparing last year’s and this year’s data suggests that consumer trust in autonomous driving is on the way up. The requirement for consumer confidence stays the same as in 2017: users demand proven functionality. If that can be assured, 59% of those polled would ride in an autonomous vehicle. In 2017, 67% were rather sceptical towards autonomous driving; that number has shrunk to 41%. On average, 64% of surveyed consumers are willing to spend extra money on driverless car technology, although the US and Japan are well below the average at around 40%. In Germany, at least every second person could imagine paying extra for autonomous vehicle technology.

Obstacles on the Road towards Autonomous Driving

The study also identified several obstacles that delay the full implementation of the technology, including the large investments required from companies and the missing legal basis for autonomous driving. Without clear legal principles, autonomous cars are unlikely to achieve society-wide adoption. However, Deloitte expects the movement to pick up speed quickly once the legal issues are solved.

Brand Loyalty and Autonomous Driving?

Autonomous cars won’t break brand loyalty – so says the study. Especially in countries where many large car manufacturers have their origin, such as Germany, Japan and the US, people pledge loyalty to traditional carmakers. Nevertheless, apart from Japan, Asian consumers tend to consider less established companies: only a third of the surveyed users from Asia would consider a traditional car manufacturer.

About the author:

David Fluhr is a journalist and owner of the digital magazine “Autonomes Fahren & Co”. He reports regularly on trends and technologies in the fields of Autonomous Driving, HMI, Telematics and Robotics. Link to his site: http://www.autonomes-fahren.de

When Car Trunks become Mailboxes

German car manufacturer Daimler plans to use car trunks in the postal distribution process. In the future, letters and parcels shall be delivered to the addressee’s car trunk: the postman gets a code to open the trunk and deposit the package. The idea of using a car trunk as a mailbox is not entirely new. Volvo has been working on the concept since 2016, collaborating with the compatriot startup urb-it. Volvo simply calls it the In-Car Delivery Service, while Daimler chose the name Smart Ready to Drop+.

Smart Ready to Drop+

As the name suggests, the concept is not designed for every Daimler model but only for the Smart. Daimler implemented the Smart Ready to Drop+ system in Berlin, Bonn and Cologne together with its partner Liefery, which processes the deliveries. Apparently the concept was well received by users, and there are plans to extend the program to Hamburg. The city has now sealed collaborations with Daimler and Volkswagen to develop and implement innovative mobility concepts, which is also why the famous ITS World Congress is going to take place in Hamburg in 2021.

How does the concept work?

When a customer makes an online purchase, he gets the option to specify his smart HUB address – the area where the car will be parked during the delivery. The exact location can be entered in a so-called “Dropzone”. The delivery agent receives a one-time code to open the trunk and drop off the delivery. As soon as the delivery is complete, the customer receives a confirmation message via the Mercedes me App.
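The article does not describe the underlying mechanics, but the essence of a one-time code flow can be sketched in a few lines. The class and method names below are invented for illustration; this is not Daimler’s or Liefery’s actual API.

    # Hypothetical single-use trunk-access code flow (names invented).
    import secrets

    class TrunkAccess:
        def __init__(self):
            self._codes = set()

        def issue_code(self):
            """Issue a single-use code for one delivery."""
            code = secrets.token_hex(4)
            self._codes.add(code)
            return code

        def open_trunk(self, code):
            """Open only if the code is valid, then invalidate it immediately."""
            if code in self._codes:
                self._codes.discard(code)
                return True
            return False

    trunk = TrunkAccess()
    c = trunk.issue_code()
    print(trunk.open_trunk(c))   # True: first use succeeds
    print(trunk.open_trunk(c))   # False: the code is single-use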

And the downsides?

The German Automobile Club (ADAC) warns about such keyless access systems: technical security is not guaranteed throughout, as regular thefts of vehicles with non-analog locking mechanisms show. It therefore remains questionable whether keyless systems – and thus car trunks as mailboxes – will prevail.

About the author:

David Fluhr is a journalist and owner of the digital magazine “Autonomes Fahren & Co”. He reports regularly on trends and technologies in the fields of Autonomous Driving, HMI, Telematics and Robotics. Link to his site: http://www.autonomes-fahren.de

Outside Tech becomes vital to Autonomy

Believe it or not, some of the best automotive technologies had nothing to do with cars when they were invented. One of the most recent came from Intel, which designed the RealSense camera technology for personal computers. It can sense 3D objects, allowing users to interact with their PCs in an entirely new way.
As cool as it was, the tech didn’t resonate with consumers shopping for new PCs. They stuck with the mouse and keyboard/touchpad, tried-and-true control methods that have been around for decades.
Intel searched for a new market and found that drone makers could use RealSense to improve crash avoidance. This got the attention of automakers, which are now looking to RealSense to improve active safety.
Derek Kerton, founder and chairman of the Autotech Council in Silicon Valley, used this example to illustrate why automobiles will evolve faster than anticipated.
“Things are going to happen quicker than people believe because a multitude of exponential laws are in place here,” said Kerton. “There is also R&D in inventions that we don’t think are relevant to automotive that are going to be [in the future].”

Knowing where to look

If you want the inside scoop regarding which technologies will prove to be vital to autonomous cars, think about the problems inherent to their deployment.
“Identity is an important one,” said Mark Thomas, VP of marketing at RideCell, a developer of car- and ride-sharing platforms. “When there’s no driver [inside your Uber], the car can display your name so I know that’s my car. But how does it know that somebody isn’t just hijacking it and getting a free ride? Being able to know who the person is – in an authenticated manner – so that only you can unlock the autonomous car that comes up, that’s definitely one of the critical pieces.”
This could make consumers uncomfortable if they feel that autonomous Ubers are invading their privacy, but that may not be avoidable.
“It’s one of those opt-in things,” he said. “When people have a direct benefit to opting in to having their identity known, that’s an important piece of the equation. If you want to use the service, you benefit from the fact that it authenticates you, so nobody else can get in and take your car. There’s nothing worse than watching your Uber drive off with somebody else in it.”

Securing your next ride

Authentication could be taken to a whole new level with autonomous automobiles. Whether owned, rented or shared, these cars will need a way to securely identify each passenger, while passengers will need a way to get inside.
Gail Gottehrer, an attorney and partner at Akerman LLP, believes that biometrics will play a key role in self-driving cars.
“It’s something that’s uniquely you,” said Gottehrer. “It’s harder to hack than a four- or eight-digit code or password. People like it because it’s something you don’t have to think about and can’t forget, as we all do with our passwords.”
Gottehrer added that as autonomy approaches, consumers are going to want more security.
“I think the more we can show people we have these extra levels of security, and especially with Uber having concerns about some of their drivers and security protocols, we can reassure people that the car [is secure],” she said.

About the author:

Louis Bedigian is an experienced journalist and contributor to various automotive trade publications. He is a dynamic writer, editor and communications specialist with expertise in the areas of journalism, promotional copy, PR, research and social networking.

Will Ransomware Invade the Cars of Tomorrow?

The WannaCry ransomware strike was a strong reminder of the dangers of connectivity – and, more importantly, the risk of lax security. No corporation is too large or too small to become a victim of ransomware when vulnerabilities are exploited. If a business is not properly secured, it could become a target at any moment.
“You don’t hear about it because they pay,” said Brian Balow, a member and partner at the law firm Dawda, Mann, Mulcahy & Sadler PLC. “They don’t want it publicized.” Balow said that if a company is being ransomed, it means two things: (1) its security isn’t very effective and (2) the firm does not have any backups. “Ransomware is a big deal and part of the reason it’s not going away is because people pay,” he added.

What does this mean for the auto industry, which is at the onset of connectivity?

“It’s like the whole idea of the Internet,” Balow explained. “If you want to use it robustly, then [you’re] giving up a certain amount of privacy. It’s kind of the same idea with vehicle technology. We want to install the latest and greatest technology, and we want to be able to update the vehicle wirelessly, right? So if [an automaker] improves something in the brake module or in the infotainment system, [it can] push that down to every one of those vehicles.”
Connectivity may be more convenient, but it could bring a new set of challenges. At the very least, passenger data could be at risk. In the worst-case scenario, the vehicle could be hacked with malicious intent. Automakers might be tempted to implement a five-star rating system to highlight why their cars are safe, but Argus Cyber Security’s Meg Novacek is concerned this could invite future attacks.
“That in and of itself could still be a challenge,” said Novacek, who serves as Argus’ executive director of business development in North America. “The minute you give a rating and say, ‘Here’s the test you have to pass to get the rating,’ it basically says, ‘Oh, these are the things you need to be able to hack.’”

Consumers may also be leery of any hype surrounding vehicle security

“Sometimes telling a customer that you’ve got the safest car on the road opens up a couple cans of worms,” said Novacek. “Why are you talking about it? And what about your product before you started talking about it? Okay, so your 2018 model is safe – what about 2016!?”
Gail Gottehrer, an attorney and partner at Akerman LLP, expects automakers to combat the risk of ransomware and other malicious threats by enlisting the help of white hat hackers. She expects new bounty programs that invite individuals to test their systems, find vulnerabilities and contribute any and all creative solutions.
“I think it’s going to be more of that, more of an effort to get out in front and tell people, ‘You don’t need to go behind the scenes and deal with ransomware, come to us. In all likelihood we’ll pay you or hire you,’” said Gottehrer. “A lot of these white hat hackers are in demand. It’s really become quite an industry if you’re good at what you do. I think you’re going to see more of that – embracing help from wherever it can come and not being proud about, ‘It has to be someone who works in our company.’”

About the author:

Louis Bedigian is an experienced journalist and contributor to various automotive trade publications. He is a dynamic writer, editor and communications specialist with expertise in the areas of journalism, promotional copy, PR, research and social networking.

USA: Light-duty vehicles are getting connected

Two years ago, The National Highway Traffic Safety Administration (NHTSA) started developing the guidelines for commercial vehicle communication. At the time the authorities focused on vehicle-infrastructure communication. A few days ago a proposal for vehicle-to-vehicle communication (V2V) was published online.

In the US, transport policy falls within the sovereignty of the individual states, so federal authorities only issue guidelines, which each state interprets for itself. The NHTSA (National Highway Traffic Safety Administration), a sub-agency of the US Department of Transportation, has recently issued three directives.

3 directives for road safety

In September the NHTSA published official guidelines for the operation of level 3 and 4 autonomous vehicles in accordance with SAE standards. The SAE scale splits vehicle automation into levels from 0 to 5, with level 5 being the highest degree of automation. The guideline covers 15 aspects manufacturers have to take care of, including data about application areas, safeguarding in case of system failures and HMI (Human Machine Interface) design. It also has to be clarified how the vehicle would respond to an unavoidable accident – a so-called ethical dilemma.

This October, the NHTSA issued additional guidelines for cybersecurity. These focus on a security architecture that enables critical systems to communicate on separate channels; manufacturers need to assure vehicle security for the entire life of the vehicle, until scrapping. Guidelines on driver attention followed in November, requiring that vehicle providers integrate a system to reduce driver distraction through smartphones: during a journey, particular functions like messaging or video playback shall be disabled.

Light-duty vehicles: connection required

The draft will first be discussed by stakeholders from politics, industry and research. The NHTSA guidelines for V2V communication relate to light-duty vehicles. US Transportation Secretary Anthony Foxx sees vehicle communication as a huge opportunity to save human lives. Just last year, 35,000 people died in road accidents in the US. If cars could “speak”, they would warn each other of dangers. US authorities have therefore launched the “Road to Zero” campaign, with the goal of reducing deaths due to road accidents to zero; the NHTSA has provided $1 million for the campaign.

With V2V and V2I communication, the transportation authority primarily expects to reduce accident severity. Manufacturers will be obliged to integrate V2V into their light-duty models, and here the NHTSA is expressly proposing industry-wide standards. Data transmission between vehicles shall be realised via DSRC (Dedicated Short Range Communication) technology. The data would cover the vehicle’s position, driving direction and speed, in order to calculate potential dangers; this information is transmitted to nearby vehicles ten times per second.
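For illustration, the broadcast loop implied by those numbers might look like the sketch below. The message fields are assumptions loosely modelled on DSRC-style safety messaging, not an implementation of the actual standard.

    # Sketch of a 10 Hz V2V broadcast loop; fields and names are assumptions.
    import time
    from dataclasses import dataclass

    @dataclass
    class V2VMessage:
        lat: float         # degrees
        lon: float         # degrees
        heading_deg: float
        speed_mps: float

    def broadcast(msg):
        print("broadcast:", msg)   # stand-in for the DSRC radio transmit

    for _ in range(3):             # three ticks of the 10 Hz loop
        broadcast(V2VMessage(lat=52.52, lon=13.405, heading_deg=90.0, speed_mps=13.9))
        time.sleep(0.1)            # 10 messages per second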

Additional guidelines for vehicle-infrastructure communication are to follow soon. Vehicle communication is especially useful in cities, for example at intersections, when turning or changing lanes. V2V communication should not involve any privacy issues because it does not involve data about the persons in the vehicle, but only about the vehicle itself. Further regulations regarding privacy and data security are not formulated yet.

About the author:

David Fluhr is journalist and owner of the digital magazine “Autonomes Fahren & Co”. He is reporting regularly about trends and technologies in the fields autonomous driving, HMI, Telematics and Robotics. Link to his site: http://www.autonomes-fahren.de

Driver workload assessments – Case Study by Denso

Have a look at Denso’s presentation on driver workload management, held at the Automotive Apps Evolution conference. The vehicle is becoming an increasingly complex environment as more devices are brought into the car for drivers to interact with. The consequence: many reported crashes involve distraction related to electronic devices. Driver workload management is one approach to coping, via HMI systems, with the increasing demands on drivers on the road.

Key Learning Points

  • Road infrastructure is becoming increasingly demanding (traffic density, greater use of signage, road structure complexity).
  • Society’s concern: increased driver workload and distraction lead to more accidents.
  • A smart HMI solution would be a workload manager with 2 key functions: assessing the driver’s current workload level and controlling the HMI to support but not overload the driver (see the sketch after this list).
  • Minimizing driver distraction whilst optimizing the driver and passenger experience is a real opportunity for intelligent design concepts.
  • GENIVI Alliance is already driving the broad adoption of an in-vehicle infotainment open-source development platform.
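
A minimal sketch of those two workload-manager functions, using invented signal names and thresholds. This is an illustration of the concept only, not Denso’s design.

    # Toy workload manager: estimate workload, then gate non-critical HMI output.
    def workload_level(speed_kmh, steering_rate_dps, traffic_density):
        """Crude 0..1 workload estimate from a few driving signals."""
        return (min(speed_kmh / 130, 1.0)
                + min(steering_rate_dps / 90, 1.0)
                + min(traffic_density / 10, 1.0)) / 3

    def hmi_allows(message_priority, workload):
        """Suppress low-priority messages when the driver is busy."""
        return message_priority == "critical" or workload < 0.6

    w = workload_level(speed_kmh=110, steering_rate_dps=45, traffic_density=8)
    print(hmi_allows("infotainment", w), hmi_allows("critical", w))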

View the full Case Study “Driver workload assessment and safe management of in-car apps” and learn how Denso approaches the potential threat of increasing driver workload.

Hyundai: Integration between active and passive safety systems

we.CONECT invited Dr. Baro Hyun, Senior Research Engineer in the Eco-Vehicle Control System Development Team, R&D Division at Hyundai Motor Group, to talk about the coexistence of active and passive safety systems in cars. He also mentioned trust as one of the most important requirements for autonomous vehicles to prevail in reality: consumers need to be convinced that autonomous cars are a safe mobility alternative – by manufacturers, solution providers and the government.

Continental's fight against driver distraction

During his interview at Car HMI Europe, Continental Project Engineer Zack Bolton questioned the assumption that hands-free mobile use whilst driving is less distracting than handheld cell phone use, stating that there is no data or evidence proving the advantage of hands-free use. Safety systems and their machine interfaces are still too simplistic; for better HMI with fewer distractions, driver intention should also be taken into account.