Rio Tinto: Where Autonomous Driving Is Already a Reality

Mine operator Rio Tinto runs a fleet of autonomous trucks that transport mining material to increase efficiency and lower costs. It has already been ten years since the first self-driving dump truck hauled overburden to a nearby rail loading station. The 2008 trials were so successful that the company decided to expand its fleet. Over the years the number of autonomous trucks has steadily increased – today more than 80 autonomous vehicles operate at the company’s Pilbara mining operations in Australia.

AHS – Autonomous Haulage System

The dump trucks were originally produced by the construction machinery manufacturers Komatsu and Caterpillar. They run the Autonomous Haulage System (AHS) and are controlled from the site’s operations centre. Each truck is equipped with a GPS transmitter so it can be located at any time.
Unlike conventionally driven dump trucks, the autonomous ones are in near-permanent use – a difference of about 700 hours per year, or roughly 15% more operating time. They do not need breaks and cannot be distracted like humans behind the wheel. Rio Tinto representatives also underline the safety aspect – in the whole past decade there has not been a single reported injury.

New Milestones

Last year the autonomous trucks carried about a quarter of all overburden moved at the operations and loaded it onto trains. The trains are also automated – once loaded, they automatically start their journey through the Australian Outback.
This year Rio Tinto reached the next milestone: the autonomous trucks have now carried one billion tonnes of material across the sites. Because of this success, Rio Tinto plans to deploy additional trucks, with the goal of having 140 autonomous trucks operating in its Pilbara mines by the end of 2019.

About the author:

David Fluhr is a journalist and the owner of the digital magazine “Autonomes Fahren & Co”. He reports regularly on trends and technologies in the fields of Autonomous Driving, HMI, Telematics and Robotics. Link to his site: http://www.autonomes-fahren.de

Auto.AI: Conference Summary and Key Takeaways by the Chair

Executive Summary

Auto.AI 2017 was held in Berlin on the 28th and 29th of September. Attendees representing academia, original equipment manufacturers (OEMs), suppliers, start-ups and consultants shared ideas through presentations, peer-to-peer round table sessions and informal discussions. Attendees were asked questions using anonymous live polls throughout the event. There were four broad topics relevant to understanding the development of artificial intelligence in the automotive sector:

  • Artificial intelligence outlook including machine learning & deep learning techniques
  • Computer vision, imaging & perception
  • Sensor fusion & data
  • Simulation, testing & validation of artificial intelligence

Key Takeaways

  • No one seems to be advocating a rules-based approach for autonomous driving.
    Although possible in concept, a set of detailed rules appears impractical in the real world because the volume of instructions needed to cover all driving conditions would be massive and debugging would likely fail to spot all errors or contradictions.
  • There needs to be a common view of the acceptable safety level for deployment of autonomous vehicles.
    This requires the input of regulators, industry, academia and probably 3rd party consumer groups too. In addition to defining the safe level, we need to determine how to measure it and confirm compliance — the second part is challenging without a test fleet covering tens of billions of miles.
  • Safety is related to the use case.
    The more complex you make the driving conditions, the more corner cases you’ll encounter. The scale of the problem can be reduced by steps such as: learning a particular area (geo-fencing) and not driving in extreme conditions or at very high speeds. Although this diminishes the value for the retail user, there are plenty of industrial applications that can operate within those constrained operating conditions.
  • There needs to be widespread collaboration on a shared framework for testing and validation of AI.
    Governments, companies and academia should all be involved and it would ideally use open data that was not tied to specific simulation tools or sensor sets. The experts at the event were wary of letting driverless cars transport their families without seeing safety data beforehand (government and manufacturer assurances weren’t trusted).
  • Work needs to be done on explaining AI.
    There are big differences between the capabilities non-technical people think AI has and what it is capable of — there should be no talk of killer robots. At the same time, deep learning techniques mean that the system function cannot be explained in the same way as traditional coding. New ways to explain how the system operates are required and without them building trust will be very difficult. It could even be necessary for the AI to learn how to explain itself using natural language or other tools.
  • Virtual testing is vital.
    This is for three reasons: firstly, simulation dramatically decreases real world miles; secondly, because AI techniques like reinforcement learning need crashes to take place in order for the AI to learn; and thirdly because even real-world data becomes a simulation once you interact with it in a different way to the original situation. It’s better to do that virtually! For a virtual environment to be successful it must be something that can be replicated in the real world with the same results.
  • There is plenty of disagreement over the right approach to many areas.
    The event live poll highlighted differences of opinion regarding how AI should act, how much information it will be capable of processing and what level of redundancy was required for safe operation. More consistent was the high burden of proof that AI systems will be faced with and a view that currently no one really knows how to convincingly do that.
  • Implementation timing remains uncertain.
    In the event live polling, over a quarter of respondents believe that self-driving will be widespread by 2023 or even earlier. The majority believe that we will be waiting beyond 2025 — a healthy difference of opinion. Health Warning: informal discussions revealed that in general the timescale comes down if the question is about level 4 specific use case vehicles on public roads (they operate on private land already) and goes further out if asked about go-anywhere level 5 vehicles.

Artificial intelligence outlook including machine learning & deep learning techniques

Key Takeaways

  • Vehicle electronics are growing in value but require further standardisation and reductions in power consumption
  • Data storage is a major issue — techniques from traditional big data do not work very well with images and video
  • Image recognition is improving but research would benefit from wider availability of labelled video datasets
  • Further work is required to create greater depth of scenarios and improve simulation processing times
  • Realistic visualisation of simulations for humans is different to modelling the sensor inputs vehicle AI interprets
  • Understanding machine learning isn’t always hard… sometimes it comes up with simpler rules than we expect!

Market Growth… is forecast at 6% compound annual growth rate (CAGR) for electronics — reaching $1,600 of average vehicle content. For semi-conductors the figures are even more impressive — 7.1% CAGR. The specifics of market development are less clear — these growth figures include L1/2 systems but not full autonomy. Although there is a definite role for the technology, standardisation is a must, requiring a yet-to-be-established framework. Safety is a big challenge: without clear agreement on what safety level is acceptable, definite technical standards cannot be set. Another open issue is the degree to which the car will have to make decisions for itself versus interacting with infrastructure and other vehicles; the obstacle to the latter is the latency (response time) involved in exchanging large data sets. Finally, self-driving chipsets must consume significantly less power than current prototypes.
Researchers have gained new insights by translating real world crash data into a virtual environment… the information came from records collected by the regulators. Technology in production today sometimes makes facile errors (e.g. lane keeping recognising a bike path rather than the kerbside). Research has shown that it is possible to correlate the virtual models with real world data (for instance replicating a collision with a pedestrian) but the challenge of testing thoroughly remains substantial. Multiple different environments are needed; there are thousands of types of crash situation; and each vehicle has unique attributes. Through all of this, it is vital that the results correlate to the real world. Researchers aim to reduce modelling times from days (currently) to hours — real time is the ultimate goal. Without improvements in processing speed, virtual sample sets are in danger of remaining too small or too slow to be usable.
The challenge of staying aware of the state of the art in data processing and artificial intelligence… large OEMs are interested in AI in the broadest sense — self-driving, handling customer data and improving business efficiency. The first challenge is the data itself. Not only will the car become a massive source of data, but much of it does not easily fit into existing data structures — images and video are more complicated and unstructured than traditional inputs. With images, it may be necessary to pre-process and store the resulting output, rather than the image itself, to reduce storage space and retrieval time. Image capture and object recognition is a definite area where more work is required and where machine learning is already relevant; for instance, recognising brands of truck trailer may help build broader recognition of what a trailer looks like. By studying a whole range of machine learning activities (a huge resource undertaking), organisations can develop an understanding of the best fit between problems, data collection methods and analysis tools.
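To make the storage point concrete, here is a rough sketch (in Python, with invented field names) of keeping a compact pre-processed record instead of the raw frame; the processing step is a stand-in, and a real pipeline would run its actual perception models here.

```python
# Rough sketch of the storage idea above: instead of archiving every raw
# frame, pre-process it once and keep only a compact record. The fields
# below are illustrative placeholders, not a production schema.
import json
import numpy as np

def preprocess_frame(frame: np.ndarray) -> dict:
    thumbnail = frame[::8, ::8].mean(axis=-1)              # crude 1/8-scale grayscale
    return {
        "mean_brightness": float(frame.mean()),
        "thumbnail_shape": list(thumbnail.shape),
        # In practice: object detections, lane geometry, embeddings, ...
    }

raw_frame = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)   # ~2.7 MB raw
record = preprocess_frame(raw_frame)
print(json.dumps(record), "-> a few hundred bytes instead of megabytes")
```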
There are different ways of obtaining image data in real time… dedicated chips can translate lidar traces (compatible with multiple lidar types) into an instantly available augmented image. This allows object identification from the raw data and for less expensive lidar units to be used. Examples showed a 16-line lidar unit being augmented for higher resolution.
Machine learning already has applications in ADAS feature sets… it has been put to use in two frequently encountered highway situations: roadworks and other drivers cutting in and out. Video and radar example data was combined with machine learning and human guidance about acceptable limits of driving behaviour. Interestingly, in both cases although the machine learning was given multiple data inputs, only a few key elements were required to provide very good accuracy in use. This reduces the sensor inputs and complexity of processing. For example, machine learning identified a high correlation between the angle of the vehicle in front and whether it was intending to cut in, in preference to more complicated rules combining relative speeds and side to side movement. Multiple sensors should be used for decision making: although a camera is better for monitoring many of the situations, its limited field of view means that radar needs to be used in camera blind spots.
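As an illustration of the kind of analysis described above, the sketch below trains a simple classifier on a few hypothetical cut-in features and inspects which one carries most of the signal. The feature names, synthetic data and model choice are assumptions for illustration, not the presenter’s actual system.

```python
# Hypothetical sketch: which of a few candidate features best predicts a
# cut-in? Synthetic data is generated so that heading angle dominates,
# mirroring the reported finding that angle alone is highly predictive.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

heading_angle = rng.normal(0.0, 2.0, n)    # degrees relative to lane direction (assumed)
lateral_speed = rng.normal(0.0, 0.3, n)    # m/s towards our lane (assumed)
relative_speed = rng.normal(0.0, 3.0, n)   # m/s, ego minus target (assumed)

logit = 1.5 * heading_angle + 0.8 * lateral_speed + 0.05 * relative_speed - 1.0
cut_in = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = np.column_stack([heading_angle, lateral_speed, relative_speed])
model = LogisticRegression(max_iter=1000).fit(X, cut_in)

for name, coef in zip(["heading_angle", "lateral_speed", "relative_speed"], model.coef_[0]):
    print(f"{name:>15}: weight = {coef:+.2f}")
print(f"training accuracy: {model.score(X, cut_in):.2f}")
```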
The car will become one of the biggest generators of natural language data… and its proper use will enable manufacturers to create a personalised experience that the customer values. For relatively complex commands (“when X happens, then please do Y”), contemporary techniques have 95% correct transcription of what the customer is saying and mid-80s% task completion. This is encouraging but shows further development is needed. OEMs will also have to create ecosystems that allow them to control the customer experience inside the cabin, yet are seamless with the personal assistants the customer might have on their mobile phone or home speaker system.
New techniques are improving image recognition… Using industry benchmark tests, computer image recognition is now superior to humans. In some specific use cases this already has practical uses, for example a smartphone app that assesses melanomas. However, at around 97% correct identification of a random image (versus about 95% for humans), improvement is required. Different methods are being tested, with greater progress on static images than video; partly due to difficulty but also because video has less training data: smaller libraries and fewer labelled categories. Video identification accuracy can be improved by running several different methods in parallel. One of the most promising approaches is turning the video into a set of 2D images with time as the 3rd dimension — a technique pioneered by DeepMind (now part of Google). Combining this process with different assessment algorithms (such as analysing the first and nth frame rather than all frames), teams have achieved accuracy of nearly 90% for gesture recognition. Generally, late fusion (a longer gap between frames) gives better results than early fusion — there is variation in what combination of processing algorithms yields the best accuracy. Progress is happening all the time. New ways of addressing machine learning problems sometimes create step changes, so improvement may not be at a linear rate.
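A minimal sketch of the frame-fusion variants mentioned above, assuming the clip is available as a numpy array; it only prepares the early- and late-fusion inputs and leaves the recognition network itself out of scope.

```python
# Minimal sketch: "early fusion" stacks a run of adjacent frames along the
# channel axis, while "late fusion" pairs the first frame with the n-th,
# giving the downstream network a longer temporal baseline.
import numpy as np

video = np.random.rand(40, 224, 224, 3)   # placeholder clip: 40 RGB frames

def early_fusion(clip, start=0, window=5):
    """Stack `window` consecutive frames along the channel axis."""
    frames = clip[start:start + window]                     # (window, H, W, 3)
    return np.concatenate(list(frames), axis=-1)            # (H, W, 3*window)

def late_fusion(clip, gap=15):
    """Pair the first frame with one `gap` frames later."""
    return np.concatenate([clip[0], clip[gap]], axis=-1)    # (H, W, 6)

print(early_fusion(video).shape)   # (224, 224, 15)
print(late_fusion(video).shape)    # (224, 224, 6)
```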
It is hard to create different virtual environments for vehicle testing… Using video game tools and very experienced developers, near photo realistic models can be created, but this appears to be the easy part! Because the structure of computer graphical data is different to real life, models need to be adjusted to create the correct type of artificial sensor inputs. This is even more challenging with radar and lidar input data as the model must accurately simulate the “noise” — a factor of both the sensor and the material it is detecting. Perfecting this could take several years. More useful immediately is the ability to create virtual ground truth (e.g. that is a kerb) that can serve SAE Level 2 development. Because L1/2 inputs are more binary, sophisticated sensor simulation issues are less relevant. Researchers believe that a virtual environment of 10–15 km is sufficient to assist development of these systems, assuming the ability to impose different weather conditions (snow, heavy rain etc.).
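The sensor-noise point can be illustrated with a small sketch: ideal ranges from the rendered scene are perturbed with sensor- and material-dependent noise and dropouts. All figures below are invented placeholders, not measured sensor characteristics.

```python
# Illustrative sketch: a rendered scene gives ideal lidar ranges, but a
# usable virtual sensor must also model noise and dropouts that depend on
# the sensor and the surface material. Numbers are assumptions only.
import numpy as np

rng = np.random.default_rng(1)

ideal_ranges = np.array([4.2, 12.7, 35.0, 80.0])           # metres, from the 3D model
materials = np.array(["asphalt", "car_paint", "glass", "foliage"])

material_dropout = {"asphalt": 0.02, "car_paint": 0.05, "glass": 0.40, "foliage": 0.15}
material_sigma = {"asphalt": 0.02, "car_paint": 0.03, "glass": 0.10, "foliage": 0.08}

def simulate_returns(ranges, mats, base_sigma=0.01, sigma_per_metre=0.0005):
    noisy = []
    for r, m in zip(ranges, mats):
        if rng.random() < material_dropout[m]:
            noisy.append(np.nan)                            # no return at all
            continue
        sigma = base_sigma + sigma_per_metre * r + material_sigma[m]
        noisy.append(r + rng.normal(0.0, sigma))
    return np.array(noisy)

print(simulate_returns(ideal_ranges, materials))
```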

Computer vision, imaging & perception

Key Takeaways

  • Different options emerging to help machine vision check against pre-determined ground truth
  • Calibration and synchronisation of multiple sensor types is a challenge
  • Using hardware specific processing techniques may improve performance — but also impairs plug & play
  • Virtual testing will be valuable but requires a robust feedback loop and real-world validation

The state of the art in mapping… Mapping companies have several layers of information such as: road layout; traffic information; photographic information; and lidar traces of the road and its surroundings. At present, vehicle navigation relies on basic inputs and workable levels of accuracy (a few metres). High definition mapping allows a car to be more precise about its surroundings and relieves sensors of the need to fully determine the environment for themselves. Detailed lidar-based maps of the roadside can be used to build a “Road DNA” that autonomous systems can reference. AI can also be used to update maps. Crowd-sourced position data helps to determine where road layouts have changed (because everyone appears to be going off-road). Currently, AI sends these problems for human investigation but in future it could make decisions for itself. There may be value in collecting images from user-vehicles to update maps, both for ground truth and road sign interpretation.
An in-depth review of the StixelNet image processing technique… This method breaks images down into lines of pixels (columns in the case study) and then determines the closest pixel, allowing identification of ground truth (kerbside, people and cars) and free space. Camera data can be combined with lidar traces from the same vehicle to allow the system to train the camera recognition using the laser data. The positive of this approach is that it is continuous and scalable — more cars added to the fleet equals faster learning. The downside is that it is difficult to calibrate and synchronise cameras and lidar on the same vehicle to the accuracy required. It is also difficult to write the algorithms — several processing options are available, all with different weaknesses. Studies indicate that systems trained on the camera and lidar data showed better results than stereo cameras and better than expected performance on close-up images.
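A toy sketch of the column-wise idea, assuming a per-pixel depth map is available: a real StixelNet learns the boundary from camera pixels supervised by lidar depth, whereas this simply scans the depth values directly.

```python
# Toy sketch: for each image column, find the nearest obstacle and treat the
# region below it (towards the camera) as free space. Scene layout and the
# 50 m obstacle threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
H, W = 120, 160
depth = np.full((H, W), 60.0)                   # metres; 60 m = "far / open road"
depth[40:80, 50:90] = 12.0                      # a parked car 12 m away
depth[30:70, 120:140] = 7.5                     # a pedestrian 7.5 m away
depth += rng.normal(0.0, 0.2, (H, W))           # measurement noise

def freespace_boundary(depth_map, obstacle_threshold=50.0):
    """Per column: row index and range of the closest obstacle (or None)."""
    boundary = []
    for col in range(depth_map.shape[1]):
        column = depth_map[:, col]
        obstacle_rows = np.where(column < obstacle_threshold)[0]
        if obstacle_rows.size == 0:
            boundary.append((None, None))        # whole column is free space
        else:
            row = obstacle_rows.max()            # lowest obstacle pixel in the image
            boundary.append((int(row), float(column[row])))
    return boundary

b = freespace_boundary(depth)
print(b[60])    # column crossing the parked car
print(b[10])    # column with open road
```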
Simulation and assessment of images captured by different types of camera… Researchers are identifying measures of image quality that go beyond traditional quantitative techniques and are relevant to machine learning. Software has been developed that can simulate the images a camera will create in particular lighting conditions and using a range of filters. One of the findings is that image processing for machine vision may be able to forego some of the steps currently performed in digital photography (e.g. sRGB). This could save time and processing power. The research has a wider scope than autonomous vehicles, but the application looks promising. A transparent and open source platform looks beneficial.
Implications of deep learning… Neural networks bring high purchasing costs and energy consumption. Some of these costs do not provide a proportional increase in accuracy; there is a law of diminishing returns. It may be better to reduce the cost of one part of the system and purchase additional elements, whilst retaining some of the saving. For instance, going to a single 5mm x 5mm processor rather than two 3mm x 3mm acting in series reduces the power consumption by about half.
Creating virtual environments… researchers used scenarios designed for video games to look at visual detection. The process developed creates grid information to assess the underlying physics and then layers skins on top to create realistic images. Driving characteristics and reaction of both the target vehicle and other vehicles can therefore be modelled, including the effect of collisions. The same information can also be used to create artificial data such as a lidar trace.
The importance of hardware in artificial intelligence… test vehicles can now drive autonomously without reference to lane markings because of their free space detection ability. Power consumption is reducing — a chip commonly used in prototype AVs uses 80W but its replacement requires 30W despite an increase in processing capacity. It may be beneficial to use processing software that is designed around the hardware. Some studies indicate that properly matched chipsets and processing suites can reduce latency and improve performance. An example end-to-end research project, which has several neural layers and multiple sensors including camera and lidar, still finds decision making difficult in the following areas: gridlock, extreme weather, temporary road layouts, muddy tracks and difficult turns. There are also numerous edge cases and multi-party trolley problems.
The challenges of developing the next generation of technologies… Although lane keeping can be further improved, it is necessary to develop systems with a different approach, especially because in many cases there are no clear lane markings. One potential method is training networks to detect extreme weather and tune object recognition based on that information — for instance a camera may be best in sunny conditions but 3D data may be best where due to snow the lane markings become nearly invisible. There is also a downside to sensor improvement… for instance, as camera resolution improves, the data labelling may become unusable and need replacement (possibly manually). Power consumption of prototype chips is too high. Some concept demonstration chips draw 250W and in series production a processor needs to be below 4W.
The challenges of helping self-driving AI to learn and be tested… Driving situations have a long tail — there are typical situations that recur with high probability and critical situations that have low probability, such as a child running into the road chasing a ball. Despite the child running into the road being low probability, it is important to replicate multiple situations (big child, small child, from the left, from the right etc). Although difficult to get perfect digital realities, it is possible to get close and then to validate against real world experiments. It is important to get feedback from the AI about why it performed a certain way during the simulated event — this will identify bugs in the model where unrealistic situations are created, preventing mis-training of the AI. Considerable challenges to achieving this vision remain: creating the models; constructing realistic scenarios automatically; ensuring full coverage of likely circumstances; getting realistic sensor data; and creating a feedback loop to real world tests. Not to mention getting everyone involved and doing it quickly! Happily, some simulation data is already usable, despite not being perfect. When one group evaluated how ADAS systems on the market today reacted to children running into the road (physical trials using dummies), the results suggested performance could improve through combining existing analysis and decision-making techniques. Although at least one vehicle passed each test, none passed them all.
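The variation requirement can be made concrete with a small sketch that expands one logical scenario into many parameterised test cases; the parameter names and value ranges are assumptions for illustration, not a standardised scenario catalogue.

```python
# Small sketch: one logical situation ("child runs into the road") expanded
# into many concrete test cases by sweeping its parameters.
from dataclasses import dataclass
from itertools import product

@dataclass
class ChildCrossingScenario:
    child_height_m: float
    entry_side: str          # "left" or "right"
    entry_speed_mps: float   # running speed of the child
    ego_speed_kph: float
    occluded_by_parked_car: bool

heights = [1.0, 1.2, 1.4]
sides = ["left", "right"]
entry_speeds = [1.5, 3.0]
ego_speeds = [30, 50]
occlusions = [False, True]

scenarios = [
    ChildCrossingScenario(h, s, v, e, o)
    for h, s, v, e, o in product(heights, sides, entry_speeds, ego_speeds, occlusions)
]

print(f"{len(scenarios)} concrete variants of one logical scenario")
print(scenarios[0])
```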

Sensor fusion & data

Key Takeaways

  • There are many opportunities to reuse knowledge on other applications but the transfer is unlikely to be seamless
  • There are ways around complex problems without solving them — for instance vehicle re-routing or copying others
  • Still unclear whether self-driving AI should be a number of small modules or a centralised system
  • Large evolutions in hardware capability likely to reduce the value of previously collected data

Examples of autonomous driving technology being applied across different use cases… A fully autonomous sensor set is likely to contain five cameras, five radars, lidar covering 360° and ultrasonic sensors. This creates plenty of challenges including integration problems and extreme weather conditions. Real world experience from production and research projects is useful here. The first case study was the execution of ADAS in trucks. The initial translation of passenger car technology (using the same type of camera, but mounting it higher) meant that the field of vision over short distances was reduced. In the second generation, a fish eye camera was added alongside the original camera. This provides enhanced short distance recognition whilst preserving the strengths of the existing system over longer distances. The second example was of a prototype automated tractor trailer coupling system where the vehicle lines up the trailer at the same time as detecting any objects (e.g. humans) in the way. This is done in conditions where sensor input may be impaired, for example the camera is likely to get mud on it.
Practical learnings from driverless research and production implementation… full autonomy in all situations is a massive challenge. If it can be limited, perhaps to an urban use case such as robo taxis, then it can become much more attainable. The downside is that limiting the application is likely to reduce private vehicle take-up. There remains a substantial difference of opinion among many in the driverless development community about how important different AI capabilities are. For instance, Waymo has dedicated significant resource to understanding the hand gestures of traffic policemen, whilst others assign it lower importance. There still appears to be no concrete approach for moving from conventional programming with limited machine learning to decision-making AI that can deal with complex edge cases (such as Google’s famous duck being chased by a woman in a wheelchair). If one does exist, it has not been submitted for peer review. Cross domain learning seems like a big part of the answer. Instead of AI trying to understand hand gestures by policemen, why not give control to a remote operator, or even re-route and avoid the problem altogether? It seems almost certain that V2V and V2G communication is necessary. Extreme weather conditions, domain shift (changes in road layout) and complex traffic may all be too difficult for a single vehicle operating alone to overcome. It is also unclear whether the right approach is a small number of very capable systems or a larger grouping of software modules with clearly defined roles. It also seems that even today’s state of the art may not be good enough for the real world. Due to closure speeds, cameras rated for object identification on highways could need frame rates of 100 Hz to 240 Hz to be capable — this requires more powerful hardware. At the same time, OEMs want components that use less power. Selecting the right componentry also requires appropriate benchmarking to be developed. Camera systems cannot simply be assessed in terms of frame rate and resolution; latency and power consumption are also important. Audi is undertaking extensive comparison of learning techniques, hardware and processing methods. Some correlations are appearing: hardware-specific processing appears better than generic methods; using real, virtual and augmented learning seems to improve decision making, but not in all analysis models.
Lessons learned from different object recognition experiments… self-parking systems are in production, as are research fleets testing automated highway and urban driving technologies. AI’s task can be divided into four elements. First is classification (recognising an object on its own). Second is object detection (spotting it within a scene). Third is scene understanding. Finally comes end-to-end (working out how to safely drive through the scene). Training is difficult due to limited availability of off-the-shelf data. There is none for ultrasonic, cameras or fish eye and only a small amount for laser scans. Experiments continue on the best way to develop scene understanding. Lidar products can detect most vehicles at a range of 150m, but how should that data be handled — as single points or pre-clustered? Should object detection be 2D or 3D? Researchers are trying to develop processing that can spot patterns in point clouds to identify objects but is also intelligent enough to interpolate for point clouds of lower resolution (e.g. recognising objects with a 16-line lidar that were learned using 32-line data). Determining how best to use lidar data in concert with other sensors is a subject of ongoing research. For instance, OEM opinions differ on the minimum points-per-metre requirement.
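One simple way to explore the resolution question is sketched below: thinning a synthetic 32-beam point cloud down to 16 beams so that detectors trained on dense scans also see sparse ones. The ring indexing and the random cloud are illustrative assumptions only.

```python
# Hedged sketch: simulate a 16-line lidar from a 32-line cloud by keeping
# only points from even-numbered rings, a cheap augmentation for training
# models that must generalise across sensor resolutions.
import numpy as np

rng = np.random.default_rng(3)
n_points = 5000
points = np.column_stack([
    rng.uniform(-50, 50, n_points),      # x (m)
    rng.uniform(-50, 50, n_points),      # y (m)
    rng.uniform(-2, 3, n_points),        # z (m)
])
ring = rng.integers(0, 32, n_points)     # which of the 32 beams produced each point

def downsample_to_16_beams(xyz, ring_index):
    """Keep only points from even-numbered rings, mimicking a 16-line unit."""
    mask = (ring_index % 2) == 0
    return xyz[mask]

sparse = downsample_to_16_beams(points, ring)
print(f"32-beam cloud: {len(points)} points -> simulated 16-beam cloud: {len(sparse)} points")
```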

Simulation, testing & validation of artificial intelligence

Key Takeaways

  • Self-driving AI requires both innovation and functional safety — may need to tilt today’s balance more towards safety
  • With a greater volume of properly annotated data, new machine learning techniques can be employed

Safety considerations for self-driving vehicles… Autonomy depends on collaboration between two very different disciplines: functional safety and artificial intelligence. Functional safety is associated with strong discipline, conformance to protocols, standardisation and making things purposefully boring. Artificial intelligence is highly disruptive, innovative and uses multiple approaches. Recent crashes by vehicles in driverless mode show that AI could be more safety conscious — that is not to say that AI is to blame for the example accidents, but that if the vehicle had driven more cautiously the accident might have been avoided. For AI systems to be approved by regulators it is likely that they need to: lower their exposure to accidents; act in ways that other road users find highly predictable; reduce the likely severity of collisions; and increase their control over the actions of surrounding 3rd parties. Self-driving vehicle creators must be able to explain the AI to regulators and the public. AI must be robust to domain shift (e.g. either be able to drive equally well in San Francisco and New York or be prevented from doing actions it cannot complete properly). AI must act consistently and take decisions that can be traced to the inputs it receives.
Research into machine learning techniques that improve image recognition… performance is improving but machines still find it very difficult to recognise objects because they do not look at pictures in the same way as humans. Machines see images as a large string of numbers, within which are combinations of numbers that form discrete objects within the picture. Learning what objects are is therefore not intuitive; it is pattern based and machines require large datasets of well annotated data in order to gain good recognition skills. Researchers have been developing a concept called “learning by association”. The key innovation is that beyond a comparison of a new and unlabelled image to an existing labelled image, the associations identified by the AI are then compared to a second labelled image to determine the confidence of a match. Training in this way led to enhanced results in tests and an improvement in recognition of a brand-new dataset that was added without a learning stage.
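The round-trip idea behind learning by association can be sketched in a few lines of numpy on made-up embeddings: walk from labelled to unlabelled samples via similarity and back again, and trust associations whose walks return to the starting class. This illustrates the concept only and is not the published training code.

```python
# Conceptual sketch of the "learning by association" round trip: labelled ->
# unlabelled -> labelled. High probability of returning to the same class
# indicates a confident association. Embeddings are made up for illustration.
import numpy as np

rng = np.random.default_rng(4)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Made-up embeddings: 6 labelled samples (3 per class), 4 unlabelled samples.
labelled = np.vstack([rng.normal(0, 0.1, (3, 8)), rng.normal(1, 0.1, (3, 8))])
labels = np.array([0, 0, 0, 1, 1, 1])
unlabelled = np.vstack([rng.normal(0, 0.1, (2, 8)), rng.normal(1, 0.1, (2, 8))])

sim = labelled @ unlabelled.T            # similarity labelled -> unlabelled
p_lu = softmax(sim, axis=1)              # step out to the unlabelled set
p_ul = softmax(sim.T, axis=1)            # step back to the labelled set
round_trip = p_lu @ p_ul                 # P(start at i, return to j)

# Probability of returning to the *same class* you started from:
same_class = labels[:, None] == labels[None, :]
consistency = (round_trip * same_class).sum(axis=1)
print(np.round(consistency, 3))          # high values = confident associations
```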

Live Poll Results

Attendees were periodically asked multiple choice questions which they answered using a smartphone app.

  • Most attendees want to judge a self-driving solution for themselves
  • Although some attendees believe self-driving will be a reality soon, the majority think it is post 2025
  • The biggest challenges were seen to be the lack of clarity from regulators and the need to test more
  • Attendees overwhelmingly believe that no one knows how much real-world testing will be required
  • In the face of a new and difficult situation, most attendees thought the vehicle should simply stop safely
  • Attendees expressed a clear preference for object recognition by multiple sensor types over camera or lidar alone
  • Only a minority see all the data being collected as constantly necessary, but most thought it was sometimes required
  • There was complete agreement on the need for redundancy, but a 50:50 split between high and low capability

In Closing: A Summary Of This Report

  • No one seems to be advocating a rules-based approach for autonomous driving.
    Although possible in concept, a set of detailed rules appears impractical in the real world because the volume of instructions needed to cover all driving conditions would be massive and debugging would likely fail to spot all errors or contradictions.
  • There needs to be a common view of the acceptable safety level for deployment of autonomous vehicles.
    This requires the input of regulators, industry, academia and probably 3rd party consumer groups too. In addition to defining the safe level, we need to determine how to measure it and confirm compliance — the second part is challenging without a test fleet covering tens of billions of miles.
  • Safety is related to the use case.
    The more complex you make the driving conditions, the more corner cases you’ll encounter. The scale of the problem can be reduced by steps such as: learning a particular area (geo-fencing) and not driving in extreme conditions or at very high speeds. Although this diminishes the value for the retail user, there are plenty of industrial applications that can operate within those constrained operating conditions.
  • There needs to be widespread collaboration on a shared framework for testing and validation of AI.
    Governments, companies and academia should all be involved and it would ideally use open data that was not tied to specific simulation tools or sensor sets. The experts at the event were wary of letting driverless cars transport their families without seeing safety data beforehand (government and manufacturer assurances weren’t trusted).
  • Work needs to be done on explaining AI.
    There are big differences between the capabilities non-technical people think AI has and what it is capable of — there should be no talk of killer robots. At the same time, deep learning techniques mean that the system function cannot be explained in the same way as traditional coding. New ways to explain how the system operates are required and without them building trust will be very difficult. It could even be necessary for the AI to learn how to explain itself using natural language or other tools.
  • Virtual testing is vital.
    This is for three reasons: firstly, simulation dramatically decreases real world miles; secondly, because AI techniques like reinforcement learning need crashes to take place in order for the AI to learn; and thirdly because even real-world data becomes a simulation once you interact with it in a different way to the original situation. It’s better to do that virtually! For a virtual environment to be successful it must be something that can be replicated in the real world with the same results.
  • There is plenty of disagreement over the right approach to many areas.
    The event live poll highlighted differences of opinion regarding how AI should act, how much information it will be capable of processing and what level of redundancy was required for safe operation. More consistent was the high burden of proof that AI systems will be faced with and a view that currently no one really knows how to convincingly do that.
  • Implementation timing remains uncertain.
    In the event live polling, over a quarter of respondents believe that self-driving will be widespread by 2023 or even earlier. The majority believe that we will be waiting beyond 2025 — a healthy difference of opinion.

About Auto.AI
Auto.AI is Europe’s first platform bringing together all stakeholders who play an active role in the deep driving, imaging, computer vision, sensor fusion, perception and Level 4 automation scene. The event is run by we.CONECT Global Leaders, a young, owner-managed, medium-sized company based in the heart of Berlin with a subsidiary office in London. The next Auto.AI USA conference runs from March 11th to 13th 2018 and the next European Auto.AI conference takes place from September 16th to 18th 2018.
About Ad Punctum
Ad Punctum is a consulting and research firm founded by an ex-automotive OEM insider. We bring focused interest, an eye for the story and love of detail to research. Intellectual curiosity is at the centre of all that we do and helping companies understand their business environment better is a task that we take very seriously.
About The Author
Thomas Ridge is the founder and managing director of Ad Punctum, based in London. You may contact him by email at tridge@adpunctum.co.uk.

Competition & Collaboration – Who is teaming up for Autonomous Driving?

Competition is good for business, and this also applies to autonomous driving. Nevertheless, every organization has to keep its eyes open for promising collaboration partners. Both traditional car manufacturers and new market players like Waymo (Google) are looking for suitable partners to fill the gaps in their capabilities.

Autonomous driving drives the change

The race towards autonomous driving has driven this trend in particular. Autonomous driving is not just about introducing a new way of driving; it is disrupting the industry. Current automotive profit margins are nothing compared to what companies could earn with driverless taxis. Just imagine the variety of services that could be offered inside an autonomous car.
Car manufacturers are slowly transforming into mobility service providers. But they need help from other companies to manage the change. The outcome: several strong company blocs in competition with each other.

Bloc building in automotive

The different blocs are fighting for the lead in the race towards autonomous driving. The goal is to offer a robot taxi service by 2021. BMW, for example, has gathered several companies such as Continental, Mobileye, Intel and Hyundai to pool their knowledge, whereas Waymo has been buying cars from FCA and modifying them on its own.
Like BMW, Mercedes-Benz counts on strong partnerships. The Swabians teamed up with Bosch and the Chinese company Geely to give Waymo a hard time. A Mercedes-Benz representative recently announced that the company plans to build autonomous cars from scratch instead of using modified versions of existing models, as Waymo does.
More blocs worth mentioning are:
– General Motors with Cruise Automation
– Aptiv (Delphi) with nuTonomy and Lyft
– Volvo and Autoliv (marketed as Zenuity)
Another notable bloc has emerged around Baidu in China. The company has more than 70 partners and is currently working on optimizing its Apollo 2.0 platform.

About the author:

David Fluhr is a journalist and the owner of the digital magazine “Autonomes Fahren & Co”. He reports regularly on trends and technologies in the fields of Autonomous Driving, HMI, Telematics and Robotics. Link to his site: http://www.autonomes-fahren.de

American Center for Mobility Gives Automakers a Safer Venue for AV Testing

Autonomous vehicles were given a boost this spring when the American Center for Mobility opened in Michigan. Located at the historic former site of the Willow Run Bomber Plant in Ypsilanti Township, ACM is hoping to be the premier destination for AV testing.
“We built ACM on a collaborative approach, working with industry, government and academia on a local, regional, national and even international level,” said John Maddox, president and CEO of the American Center for Mobility. He spoke to attendees at ACM’s ribbon cutting ceremony, which brought together a number of political supporters and auto industry execs.
Michigan Governor Rick Snyder referred to ACM as “another huge step forward” for the state as it strives to maintain its leadership as the auto capital of the world. “[Mobility] does three things,” said Snyder. “It’s going to bring us a safer world in terms of saving lives and quality of life. It’s going to create opportunities for people – people that may be economically disadvantaged, have disabilities and other challenges in their lives. It will provide options to their lives they have not seen in the past.” Snyder added that as mobility evolves it will also bring a new level of efficiency to the infrastructure. “This is a place to be on the forefront of improving lives, of creating opportunities for our citizens in this state, but also the entire world,” Snyder continued.
Lieutenant Governor Brian Calley concurred, adding, “This is going to make such a big difference for our infrastructure, for our safety, but especially mobility for people that don’t have the same types of opportunities that many of the rest of us have.” Calley praised the way corporations, associations, state representatives and others came together to build ACM from the ground up. “It’s so special, so important,” Calley added. “It’s going to have such a profound impact on the entire world and it’s happening right here.”
U.S. Congresswoman Debbie Dingell, one of many staunch supporters of AV technology, expressed the importance of building a place where self-driving cars can be tested and validated. “One of the things that has surprised me is the public resistance to autonomous vehicles,” said Dingell. “Let’s be honest, the Uber accident [in March] has made people concerned. That’s why we need this test site.”
Kevin Dallas, corporate vice president, artificial intelligence and intelligent cloud business development at Microsoft, also joined the stage to discuss how the company will serve ACM as its exclusive data and cloud provider. “We see it as an opportunity to invest deeply in the first safe environment where we can test, simulate and validate connected autonomous vehicles,” said Dallas. “And then accelerate the delivery of applications and services around autonomous systems. We’re taking that very seriously.”
After the ceremony, William “Andy” Freels, President of Hyundai America Technical Center, Inc. (HATCI), took a moment to share his thoughts on ACM. “We became founding members at the end of last year,” said Freels. “We are literally about 15 minutes from this facility. It’s a real investment in our local R&D facility here. Initially we will start using ACM for sensor development and sensor fusion testing. Connectivity is obviously a very important part.”
While ACM is designed to serve many areas of autonomous car development, Freels thinks the primary benefits will come from testing the potential interactivity and communication between cars (V2V) and infrastructure (V2I).
“Like never before, vehicles are going to need to work together to communicate [with each other] and the infrastructure,” Freels added. “That’s really quite different from the way it has been done in the past, where we could do something completely independently. I think that’s a key point of this facility – being able to collaborate with the industry, as well as the government and the academia side of it.”

About the author:

Louis Bedigian is an experienced journalist and contributor to various automotive trade publications. He is a dynamic writer, editor and communications specialist with expertise in the areas of journalism, promotional copy, PR, research and social networking.

To Mirror or not to Mirror: How Camera Monitoring Systems are expanding the Driver’s Perspective

This article was authored by Jeramie Bianchi – Field Applications Manager at Texas Instruments.
Objects in the mirror are closer than they appear – this message is the tried and true safety warning that has reminded drivers for decades that their rearview mirrors reflect a slightly distorted view of reality. Despite their limitations, mirrors are vital equipment on the car, helping drivers reverse or change lanes. But today, advanced driver assistance systems (ADAS) are going beyond a mirror’s reflection to give drivers an expanded view from the driver’s seat through the use of cameras.
Camera monitoring systems (CMS), also known as e-mirrors or smart mirrors, are designed to provide the experience of mirrors but with cameras and displays. Imagine looking into a rearview mirror display and seeing a panoramic view behind your vehicle. When you look to your side mirror, you see a high-resolution display showing the vehicles to your side. These scenarios are becoming reality, as are other features such as blind-spot detection and park assist.
It’s important to understand the current transition from mirrors to CMS. It’s no surprise that systems in today’s vehicles are already leveraging ADAS features for mirrors. Most new vehicles in the past decade have added a camera to the back of the vehicle or attached a camera to the existing side mirror, with a display inside the vehicle to give drivers a different perspective of what’s behind or at the side of the vehicle.
Figure 1 shows the routing of this rearview camera and display system. The backup display is embedded in the rearview mirror and a cable routes to the rear of the vehicle.

The side mirror is different because the camera is located on the mirror. The side mirror still exists for viewing, but typically its camera works when the driver activates a turn signal or shifts in reverse. During a turn or a lane change, the camera outputs a video feed to the infotainment display in the dashboard and may show a slightly different angle than the side mirror itself, as shown in Figure 2.

Now that I’ve reviewed current CMS configurations that incorporate a mirror with a camera and display, it’s worth noting it’s possible to achieve a CMS rearview mirror replacement through the addition of one or two cameras installed on the rear of the vehicle.
From the rear of the vehicle, video data from an imager is input to TI’s DS90UB933 parallel interface serializer or DS90UB953 serializer with Camera Serial Interface (CSI)-2. This data is then serialized over a flat panel display (FPD)-Link III coax cable to a DS90UB934 or DS90UB954 deserializer, and then output to an applications processor for video processing, such as a Jacinto TDAx processor, before being shown on a rearview mirror display. If the display is located far from the Jacinto applications processor, you will need a display interface serializer and deserializer to route the data over a coax cable again. You could use the DS90UB921 and DS90UB922 red-green-blue (RGB) format serializer and deserializer, respectively, or, if you’re implementing higher-resolution displays, the DS90UB947 and DS90UB948 Open LVDS Display Interface (OpenLDI) devices.
Figure 3 shows the connections between these devices when using a display onboard with an applications processor.

The second CMS is the side mirror replacement portion. The camera must be located in the same location where the mirror used to be. This camera’s video data displays a view of what the driver would see in the mirror. To achieve this, the camera data is serialized and sent over an FPD-Link III coax cable to a remote display located in the upper part of the door panel or included in the rearview mirror display. With a camera and display, now the side view can be in more direct line-of-sight locations for the driver. For example, if both the displays for side view and rear view are included in the rearview mirror, the driver only needs to look in one location.
Another option available in a side mirror replacement is to add a second co-located camera with the first, but at a different viewing angle. The benefit of this setup versus a standard mirror is that with two differently angled cameras, one camera can be used for the view that a side mirror would have provided and the second camera can provide a wider field of view for blind-spot detection and collision warning features. Figure 4 shows a two-camera side mirror replacement system.

The question you may be asking now is why drivers need cameras and displays if they can achieve most of the same functionality with a mirror. The answer lies in the features that cameras can provide over mirrors alone. If only a side mirror exists, side collision avoidance is solely up to the driver. With a camera, the detection of a potential collision before a lane change could activate vehicle warning alerts that prevent drivers from making an unwise action. Panoramic rear views with wide field-of-view (FOV) rear cameras or a separate narrowly focused backup camera can provide a driver with different lines of sight and reduce or eliminate blind spots in a way that would not be possible with mirrors alone.
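As a rough illustration (not a production algorithm), the sketch below shows the kind of decision logic such a system might apply: a detection in an assumed blind-spot zone, combined with an active turn signal, raises a warning. All thresholds and field names are invented.

```python
# Toy sketch of camera-based lane-change warning logic. Zone boundaries and
# speed thresholds are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class TrackedObject:
    lateral_offset_m: float     # distance from ego's side, positive = adjacent lane
    longitudinal_m: float       # position along ego's travel direction, 0 = ego rear axle
    closing_speed_mps: float    # positive = approaching ego

def lane_change_warning(turn_signal_on: bool, objects: list[TrackedObject]) -> bool:
    if not turn_signal_on:
        return False
    for obj in objects:
        in_blind_spot = 0.5 < obj.lateral_offset_m < 4.0 and -8.0 < obj.longitudinal_m < 2.0
        approaching_fast = obj.closing_speed_mps > 2.0 and obj.longitudinal_m < -8.0
        if in_blind_spot or approaching_fast:
            return True
    return False

detections = [TrackedObject(lateral_offset_m=2.0, longitudinal_m=-5.0, closing_speed_mps=0.5)]
print(lane_change_warning(turn_signal_on=True, objects=detections))   # True: car in blind spot
```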
This is just the beginning, though, because in order for vehicles to move from driver assistance systems to autonomous systems, a CMS can be incorporated into sensor fusion systems. CMS has the opportunity to incorporate ultrasonics and possibly even radar. The fusion of rear and side cameras with ultrasonics adds the capability to assist drivers in parking or can even park the vehicle for them. Radars fused with side mirrors will add an extra measure of protection for changing lanes and even side collision avoidance.
To learn more about how to implement sensor fusion, check out the follow-up blog posts on park assist sensor fusion using CMS and ultrasonics or front sensor fusion with front camera and radar for lane departure warning, pedestrian detection and even assisted braking.

What Happens When Autonomous Vehicles Turn Deadly?

Consumers and auto execs alike were horrified by the news that a self-driving Uber vehicle had hit and killed a pedestrian. The incident prompted Uber to ground its fleet of self-driving cars while the National Transportation Safety Board (NTSB) and the National Highway Traffic Safety Administration (NHTSA) reviewed the accident to determine who was at fault.

Uber is only one part of the growing autonomous vehicle sector, but the accident sent shockwaves throughout the entire industry. It’s the kind of incident that could thwart plans for AV deployment, attract a new level of scrutiny from lawmakers, and erode consumer confidence in a vehicle’s ability to drive itself.

Many in the auto industry wouldn’t even respond to a request for comment, but Nicolas de Cremiers, head of marketing at Navya, shared his reaction to what happened last March.

“As with any sector, human error is a possibility,” said de Cremiers, whose company produces autonomous shuttles and cabs. “It is crucial that we, as suppliers of autonomous mobility solutions, come together with communities and municipalities to begin taking steps towards creating safety standards and comprehensive measures for the upcoming Autonomous Vehicle Era in smart cities.”

de Cremiers remained optimistic about the future of AVs, adding, “In working towards a more fluid and sustainable future, by improving traffic flow and reducing congestion in urban centers, we will ultimately upgrade the quality [of] life while raising safety standards for a world in perpetual motion.”

As far as regulations are concerned, Danny Atsmon, CEO of Cognata, a startup specializing in AV simulations, said there needs to be some “common procedures” before these vehicles are publicly deployed.

“It’s not a bad idea to have some commonality and standards among the different AV providers,” said Atsmon. “I do believe that after this incident, there are high chances that it will lead to some regulations.”

Gil Dotan, CEO of Guardian Optical Technologies, said it is the industry’s responsibility to “make sure we learn the most and make our tech smarter and more robust.”

“This will push carmakers and tech providers to be more cautious and responsible,” said Dotan, whose company is developing sensing tech for inside the cabin. “This has precedents in other industries, like aviation and space travel, where unfortunate events have occurred. The last thing we should take out of this is to stop our efforts.”

Dotan is among those who see the greater good in what AVs could achieve by eventually reducing the number of fatal car accidents. Atsmon agrees, but he said the incident is a reminder that AVs “still have years of development and a long validation process before it can be released on the road.”

Where does this leave Uber, the company at the center of it all? Jeffrey Tumlin, principal at Nelson\Nygaard, a transportation planning consultancy, said the video released by the Tempe Police Department is “remarkably damning.”

“Yes, the victim crossed a high-speed road – in the dark, on a curve,” said Tumlin. “But all the tech on the vehicle did nothing to slow the car or alert the human observer. While I still believe that AV tech can result in huge safety benefits, whatever tech was out there on the roads should not have yet been cleared for human trials.”

About the author:

Louis Bedigian is an experienced journalist and contributor to various automotive trade publications. He is a dynamic writer, editor and communications specialist with expertise in the areas of journalism, promotional copy, PR, research and social networking.

Say Farewell to Private Car Ownership

Autonomous driving is not yet established, but the day may still come when private cars disappear from the streets completely. It is one of the many debates within future mobility: will privately owned cars exist in the future? Many studies conclude that the traditional status symbol will vanish – slowly but surely.

Transportation Services & Private Cars

Last year the University of Michigan Transportation Research Institute (UMTRI) reported that demand for new cars is decreasing, especially in places where transport services like Uber and Lyft are established. Several car manufacturers are currently setting up similar models, pushing their transition to mobility service providers. The study shows that people tend to buy new cars where no transport services are available. This means that the mobility offering in a specific area shapes the locals’ relationship to mobility.

Changing Insurance

The insurance industry has spotted this trend as well. Various insurance companies already offer specific telematics rates, including discounts for people who avoid accidents over a long period of time or a certain number of driven kilometers. This has led many critics to forecast the quick end of traditional motor vehicle insurance. The picture changes further with the development of autonomous driving. Experts expect that once autonomous cars are established there will be fewer accidents, but the ones that do happen will cause higher repair costs because of the sensors that have to be repaired or replaced for vehicle control.

Swiss Re: Fall of Insurance Fees?

Swiss Re already expects the number of privately insured cars to drop sharply – its analysis predicts a decrease of about 15%. This also impacts the car insurance model, which, according to Swiss Re, soon won’t be profitable anymore. Just like the car manufacturers, Swiss Re sees the big money in the data collected by autonomous cars. As a result, Swiss Re has started to collaborate with the Japanese conglomerate SoftBank on telematics. More details on the deal are yet to be provided.

About the author:

David Fluhr is a journalist and the owner of the digital magazine “Autonomes Fahren & Co”. He reports regularly on trends and technologies in the fields of Autonomous Driving, HMI, Telematics and Robotics. Link to his site: http://www.autonomes-fahren.de

A better LiDAR for the Future?

Dozens of startups are striving to enter the autonomous vehicle market, hoping to bring innovation to the next generation of automobiles. From software companies that focus on artificial intelligence to hardware manufacturers that are building essential components, AV technology has been a boon for entrepreneurs.
Not all startups are created equal, however. While there are dozens of companies tackling the same group of problems, very few are ready to deploy their offerings. This is due to the immense complexities associated with self-driving technology, whether it’s the algorithm under the hood or the sensor on top.
Innoviz Technologies, a startup out of Kfar Saba, Israel, has zeroed in on one key challenge: LiDAR. The company is developing low-cost solid state LiDAR solutions for the AV market. Its products are not yet ready to be deployed in motor vehicles, but the firm’s progress has already attracted $82 million from investors, including Aptiv (formerly Delphi Automotive), Magna International, Samsung Catalyst and Softbank Ventures Korea.
“We got amazing feedback from companies that visited us (at CES) and realized what we have is already working well,” said Omer Keilaf, co-founder and CEO of Innoviz.

Solving Problems

Keilaf said that one of the problems with LiDAR technology is that it can be degraded by cross interference if multiple cars are close together. From the beginning he wanted Innoviz’s solution to work seamlessly in a multi-LiDAR environment.
“When we started this company the first thing we did was try to understand what are the main challenges behind LiDAR,” said Keilaf. “We had a certain kind of cost on one side. We decided no matter what we do, if you want to have a solution that’s viable for the mass market, we need to make sure that we can sell it at one point for $100. Maybe not in the first two or three years where the volume is not that high, but you cannot design a LiDAR if you do not believe that at some point you can’t sell it for $100 because it won’t go to mass market.”
Innoviz also wanted to build a sensor that is sensitive enough to collect light from a long distance, but strong enough to endure the blinding rays of the sun, which have been problematic for some LiDAR solutions.
Many companies say they can overcome these obstacles, but Innoviz wanted to prove their accomplishments were applicable beyond the testing phase.
“I think the biggest challenge is not only to show a demo but to actually deliver a product that’s automotive-grade,” said Keilaf. “A product that’s very high-quality and is actually reliable. This is what the OEMs want. They want to count on you to actually deliver a product in two years, which is already automotive-grade and meets all the performance and quality needs. They don’t have any appetite for risk.”

More Than LiDAR

LiDAR may receive the most attention, but Keilaf realizes that self-driving cars will need to be equipped with multiple technologies before they’re deployed.
“I think it’s clear that in order to have high functional safety, you need to rely on different technologies like LiDAR, cameras, radar, sonar and GPS,” he said. “And there’s so much data that OEMs need to analyze. You see cars today that are running those platforms with computers that drive a lot of power and heat. It’s not what we do, but it’s one of the hurdles today.”
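Keilaf does not go into how those sensor streams get combined, but his point maps onto standard sensor fusion. As a hedged illustration only (not any OEM’s actual stack), the Python sketch below fuses independent range estimates by inverse-variance weighting, so the most trustworthy sensor dominates the result; the readings and variances are invented for the example.

# Minimal sensor-fusion sketch: combine independent distance estimates from
# different sensors, weighting each by the inverse of its measurement
# variance so the most precise sensor has the largest influence.
def fuse_estimates(measurements):
    # measurements: list of (value, variance) pairs from different sensors
    weights = [1.0 / var for _, var in measurements]
    fused = sum(w * v for w, (v, _) in zip(weights, measurements)) / sum(weights)
    fused_var = 1.0 / sum(weights)
    return fused, fused_var

# Hypothetical range-to-obstacle readings in metres: (estimate, variance)
readings = [
    (25.3, 0.04),   # LiDAR: precise
    (24.8, 0.50),   # radar: noisier but weather-robust
    (25.9, 1.00),   # camera depth estimate: least certain here
]
distance, uncertainty = fuse_estimates(readings)
print(round(distance, 2), round(uncertainty, 3))   # 25.29 0.036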

About the author:

Louis Bedigian is an experienced journalist and contributor to various automotive trade publications. He is a dynamic writer, editor and communications specialist with expertise in the areas of journalism, promotional copy, PR, research and social networking.

Could In-Car Ads Lead to Cheaper Mobility?

Ride-hail services like Uber and Lyft are expected to be some of the biggest beneficiaries of autonomous vehicles. The cars could theoretically pick up passengers 24 hours a day. On the downside, these services will then be required to purchase or lease new vehicles from automakers, an expense they currently avoid by having drivers use their own automobiles.
With so much technology going into them, self-driving cars are likely to be very expensive. How will Uber and Lyft – or any taxi-type service – pay for them without increasing their fees?

Is In-car Advertising the Solution?

One solution is personalized in-car advertising. Using data gathered from customers’ phones, the car could deliver targeted ads that provide steady revenue for ride-hail services.
“I’m sure everything will be tried,” said Marc Weiser, managing director at RPM Ventures, which invests in seed and early stage companies, including automotive IT. “We’re not doing it yet, so why would we start? As long as the economics work…[but] if they don’t, that’s when you could start to see people paying for ads not to be there.”
Higher fees for no ads? That sounds like the model used by some streaming video services, including YouTube.
Jeffrey Tumlin, principal at Nelson\Nygaard, a transportation planning consultancy, thinks this model is inevitable.
“Of course that’s the market we’re heading to,” said Tumlin. “And given the business model for mobility, I would expect it’s going to be much more like the Hulu model, where the only thing you pay for is to turn the ads off. Think about advertising: when you are [in a car], you’re in a confined space. You’re surrounded by surfaces. The [car] knows who you are, where you are, where you’re going. It has your credit card information. It has anything that’s available about you online.”
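Neither Weiser nor Tumlin describes an implementation, but the targeting model Tumlin sketches can be illustrated in a few lines. The Python below is a toy, hypothetical example of scoring sponsored offers against rider context; every field, weight and filtering rule is an assumption made for illustration, not a description of any real ride-hail system.

# Toy illustration: a hypothetical ride-hail backend scoring sponsored offers
# against what it knows about the rider and the trip. All fields and weights
# are invented for this sketch.
from dataclasses import dataclass

@dataclass
class Offer:
    brand: str
    category: str        # e.g. "food", "retail"
    detour_km: float     # detour from the planned route
    bid: float           # what the advertiser pays per impression

def score(offer, rider_interests, max_detour_km=1.0):
    # Higher score = more likely to be shown; 0 filters the offer out.
    if offer.detour_km > max_detour_km:
        return 0.0
    relevance = 1.0 if offer.category in rider_interests else 0.3
    proximity = 1.0 - offer.detour_km / max_detour_km
    return offer.bid * relevance * proximity

offers = [
    Offer("Chipotle", "food", 0.4, 0.50),
    Offer("BigBox", "retail", 2.5, 1.20),   # too far off route, filtered out
]
rider_interests = {"food", "coffee"}
best = max(offers, key=lambda o: score(o, rider_interests))
print(best.brand)   # -> Chipotle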

Fearing the Ad Invasion

Grayson Brulte, co-founder and president of Brulte & Company, hopes that is not the case. He is not looking forward to a ride-hail service that bombards passengers with any form of advertising.
“[But] I am very fascinated and utterly interested in hyper-local experiences that can be monetized,” said Brulte, whose company develops and implements technology strategies for a global marketplace. “If you’re driving through a city and you have an augmented reality windshield, and it pops up an offer for Chipotle or something else – a paid or sponsored advertisement inside the augmented reality world – I think that’s interesting.”
Brulte said he is not bullish on the idea of a car that’s covered in ads or forces passengers to watch a screen. “It’s too invasive,” he said, comparing it to the loud and repetitive commercials that are displayed in some taxis. “It takes away from the whole experience. People try to turn it off all the time. They hate it. I don’t see that transferring to autonomous vehicles because there will be a backlash.”
Before that happens, Weiser theorized that ride-hail services might experiment with offering faster pickups for a monthly fee.
“Will I pay $500 a month to ensure that when I hit the ‘Lyft’ button to hail a ride, the car is there within five minutes every time?” Weiser questioned. “And if not, my fare is entirely free? We haven’t seen any of that. Right now the only differentiating payment models are [based on the] vehicle or the number of people inside the vehicle.”
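Weiser’s hypothetical subscription amounts to a simple conditional pricing rule: a flat monthly fee buys a pickup-time guarantee, and any ride where the guarantee is missed is free. The short Python sketch below merely restates that rule with the numbers from his quote; the ride data is made up.

# Sketch of the guarantee Weiser floats above: $500/month, and a ride is free
# whenever the car takes longer than five minutes to arrive. Ride data below
# is hypothetical.
MONTHLY_FEE = 500.00        # USD, from the quote
GUARANTEE_MINUTES = 5.0

def ride_charge(base_fare, pickup_wait_minutes):
    # Fare for a single ride under the guarantee: free if the car was late.
    return 0.0 if pickup_wait_minutes > GUARANTEE_MINUTES else base_fare

rides = [(14.50, 3.2), (22.00, 6.1), (9.75, 4.8)]   # (fare, wait in minutes)
month_total = MONTHLY_FEE + sum(ride_charge(f, w) for f, w in rides)
print(month_total)   # 500 + 14.50 + 0 + 9.75 = 524.25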

About the author:

Louis Bedigian is an experienced journalist and contributor to various automotive trade publications. He is a dynamic writer, editor and communications specialist with expertise in the areas of journalism, promotional copy, PR, research and social networking.

Deloitte Study: Trust in AD Technology is on the Rise

Acceptance of autonomous car technology has been increasing recently, which is important for the industry: if nobody trusts the technology, there will be no one to buy it. Autonomous driving brings not only benefits but also challenges. Above all, driverless cars are supposed to contribute to higher road safety, yet experts fear hacking attacks and car manipulation.
In this environment, studies asking for consumer opinions appear frequently. Recent research shows that consumer confidence is climbing – this could be a result of growing PR and advertising spending on autonomous vehicles. Through showcase and testing events, more and more people get to experience the technology first-hand, which is likely another factor behind the growing trust.

Deloitte Study on Consumer Trust

Deloitte recently published the 2018 edition of its annual “Global Automotive Consumer Study”. Comparing this year’s data with last year’s, one can conclude that consumer trust in autonomous driving is on the way up. The requirement for consumer confidence stays the same as in 2017 – users demand functionality. If that can be assured, 59% of the polled people would ride in an autonomous vehicle. While 67% were rather sceptical towards autonomous driving in 2017, that number shrank to 41%. On average, 64% of the surveyed consumers are willing to spend extra money on driverless car technology – the US and Japan are well below that average at around 40%, while in Germany at least every second person could imagine paying extra for autonomous vehicle technology.

Obstacles on the Road towards Autonomous Driving

The study also identified several obstacles that delay the full implementation of the technology, including the large investments required from companies and the missing legal basis for autonomous driving. Without clear legal principles, autonomous cars are unlikely to reach society as a whole. However, Deloitte expects the movement to pick up speed quickly once the legal issues are solved.

Brand Loyalty and Autonomous Driving?

According to the study, autonomous cars won’t break brand loyalty. Especially in countries that are home to many large car manufacturers, such as Germany, Japan and the US, people pledge loyalty to traditional carmakers. Asian consumers outside Japan, however, tend to consider less established companies: only a third of the surveyed users from Asia would consider a traditional car manufacturer.

About the author:

David Fluhr is a journalist and owner of the digital magazine “Autonomes Fahren & Co”. He reports regularly on trends and technologies in the fields of Autonomous Driving, HMI, Telematics and Robotics. Link to his site: http://www.autonomes-fahren.de