Startup PerceptIn developed an Autonomous Vehicle for less than $10,000

Until now, the race for autonomous driving has mainly been about vehicles that can drive on public roads. But there are also alternative models that target other locations.

According to most studies, Waymo (Google’s subsidiary in the Alphabet Group) and General Motors are leading the competition to introduce autonomous driving. Both want to develop a robot-taxi driving service built on vehicles that can drive on all roads.

Competition for Speed

However, this still requires some development work. The faster a car drives, the better the technology around it must be – and this applies especially to autonomous driving. At high speed, accidents can be fatal, as the recent crashes involving Tesla and Uber have shown. Fast-driving autonomous cars require efficient sensors that are able to “see” what is happening far ahead. This applies to all sensor types, whether radar, camera or Lidar.

Additionally, the data collected by the sensors has to be processed promptly in order to initiate the right actions in an emergency at high speed – not to mention the braking distance. This, too, demands high computing capacity.
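The interplay of speed, reaction latency and braking distance follows from basic kinematics. The sketch below illustrates it; the reaction time and deceleration values are assumptions for illustration, not figures from any manufacturer:

```python
def stopping_distance_m(speed_kmh: float,
                        reaction_s: float = 0.5,
                        decel_ms2: float = 6.0) -> float:
    """Distance travelled while the system reacts, plus braking distance.

    reaction_s lumps together sensing and compute latency; decel_ms2 is an
    assumed braking deceleration on a dry road (illustrative value).
    """
    v = speed_kmh / 3.6                 # convert km/h to m/s
    reaction_dist = v * reaction_s      # distance covered before braking starts
    braking_dist = v * v / (2 * decel_ms2)
    return reaction_dist + braking_dist

# A slow campus shuttle stops within a few metres; a highway car needs ~20x more:
print(round(stopping_distance_m(20), 1))   # → 5.3
print(round(stopping_distance_m(120), 1))  # → 109.3
```

The quadratic braking term is why slow vehicles can get away with shorter sensor ranges and cheaper, slower compute.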

Why not go slower?

The problem is that powerful sensors and high computing capacity are very expensive, and this is an obstacle to the technology. That’s why startup PerceptIn follows another path, making it possible to significantly reduce the cost of an autonomous vehicle. Former Baidu employee Shaoshan Liu founded PerceptIn in 2016 with the goal of creating a reliable vehicle intended not for public roads but for confined areas – e.g. university campuses, company grounds or parks.

To this end, the company developed an electric car that can be produced for less than $5,000 in China. The hardware and software for autonomous driving double the price. Here the speed factor comes into play again: the slower the vehicle, the less computing capacity is needed. The car also does without a Lidar system, relying instead on inexpensive camera, radar and ultrasound technology.

The omission of Lidar sensors is compensated for by camera technology that builds a 3D image from point clouds. This is certainly not suitable for high velocities, but it is perfectly fine for vehicles no faster than 20 km/h. The vehicle’s position is determined to an accuracy of 20 centimeters using GPS and an odometry sensor. The medium-range radar sees only 50 meters, and the inexpensive ultrasonic sensor has a range of five meters.
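The geometry behind camera-based 3D point clouds can be illustrated with classic stereo triangulation: depth = focal length × baseline / disparity. This is a generic textbook relation, not a description of PerceptIn’s actual pipeline, and the numbers below are purely illustrative:

```python
def stereo_depth_m(focal_px: float, baseline_m: float,
                   disparity_px: float) -> float:
    """Pinhole stereo relation: depth = f * B / d.

    focal_px: focal length expressed in pixels; baseline_m: distance
    between the two cameras; disparity_px: horizontal pixel shift of the
    same feature between the left and right images.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 700 px focal length, 30 cm camera baseline.
# A feature shifted 21 px between the images lies 10 m away:
print(stereo_depth_m(700, 0.3, 21.0))  # → 10.0
```

Because depth error grows as disparity shrinks, camera-only ranging degrades with distance – acceptable at 20 km/h, problematic at highway speed, which matches the article’s point.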

Sales and Customers

PerceptIn has already found its first customer: ZTE. The Chinese telecommunications company purchased five units, which will be used on its own premises. Overall, PerceptIn hopes for sales figures in the six-figure range.

About the author:

David Fluhr is a journalist and owner of the digital magazine “Autonomes Fahren & Co”. He reports regularly on trends and technologies in the fields of Autonomous Driving, HMI, Telematics and Robotics. Link to his site:

Introducing Tesla 9.0

Tesla introduced a new update – autonomous driving features included.
Recently the Tesla project started to falter, hit by economic problems, lawsuits and, not least, the fatal crash of a car controlled by Autopilot. Now Tesla has announced a new update to manage the turnaround.
Tesla’s Autopilot was one of the first systems of its kind sold in series-production cars, offering a Level 2 semi-autonomous driving solution. The cars were able to drive by themselves, but the manual required the human driver to keep their hands on the wheel and stay alert. The driver had to be able to take over control at any time, because the technology was not ready to master all traffic situations.

Autopiloted accidents

Lately there has been an increase in incidents involving the Tesla Autopilot. The system seems to tempt drivers to lose their attentiveness in traffic and take up other activities. In Great Britain, a driver lost his license for sitting in the passenger seat while the car drove itself.
On March 23, 2018, in California, a Tesla Model X crashed into a freeway divider, killing the driver and causing a fire. The National Transportation Safety Board (NTSB) of the USA took over the investigation of the case – its report stated that the driver was not paying attention. The car had been orienting itself by the car ahead and lost its point of reference at the divider. At that point, the driver had to take over control of the vehicle – unfortunately, he didn’t. The car accelerated, pulled to the left and crashed into the divider.
There have been more incidents in which Tesla models caused collisions by accelerating for no apparent reason. One possible cause is a faulty update. Many users sued Tesla for selling life-threatening technology; these cases have been settled out of court. However, the company is facing further lawsuits initiated by consumer protection organizations.

Update 9.0

Elon Musk announced on Twitter that there will be a new update starting in August, involving new autonomous driving features. What those features are, and what autonomy level Tesla models will reach with them, was not communicated. The software update does not provide for the integration of a lidar system. A lidar produces a 3D image of the vehicle’s environment and serves for orientation and position finding. Last year, a GM manager denied that Tesla could build autonomous cars at all, because Tesla’s vehicles aren’t equipped with lidars. Indeed, Tesla relies above all on camera sensors.
However, researchers at Audi and MIT have developed algorithms that allow 3D information to be calculated from camera images. Whether that is Tesla’s plan is unclear, but it cannot be ruled out. We can only hope that the August update will bring more safety. Not only Tesla’s reputation is at stake, but also that of autonomous driving technology as such.


Waymo takes the Gloves off

Google’s subsidiary Waymo is considered the market leader for a technology that is not yet marketable. That means Waymo is closest to commercializing autonomous driving, according to several research studies. The company plans to set up on-demand transportation services as a competitor to conventional cabs. Originally based in California, Waymo obtained authorization to take the project to Arizona.

Waymo’s vehicles

Initially Waymo wanted to build its own cars but eventually moved away from the idea. Today Waymo’s fleet consists of Fiat Chrysler (FCA) models, more precisely Chrysler Pacifica vehicles. Additional collaborations with Jaguar Land Rover and Honda follow the same model. It’s likely that the Jaguar models will cover the luxury segment, FCA the mid-range segment and Honda the compact car division. Lately Waymo ordered 62,000 new Chrysler Pacifica vehicles, which are currently being equipped with autonomous car technology. Waymo said nothing about acquisition and modification costs, but industry experts estimated the costs at about 31 billion USD – for modification purposes alone!

Waymo & Uber?

Meanwhile, Uber raised its hand and proposed a collaboration with Waymo – which seems utopian after the two companies settled a bitter legal dispute in which Waymo had demanded as much as 2.6 billion USD from Uber. The subject of the lawsuit: stolen trade secrets, in this case information about a Lidar.

Moreover, Uber has to take responsibility for a fatal crash in Arizona, where an autonomous car ran over a woman after its emergency braking system had been deactivated. Uber subsequently aborted all testing activities in Arizona and won’t resume testing in the state. The incident cost the technology a lot of trust – and led to stricter testing regulations in the USA.

When is the time?

It is questionable whether Waymo will be able to offer an autonomous transport service in 2018. The modification of more than 60,000 FCA vehicles is expected to take more than a year. However, Waymo should not take its foot off the gas: main competitor GM plans to mass-produce a highly autonomous (Level 4) vehicle in 2019.


Mobileye and Intel test Autonomous Vehicles in Jerusalem

In March 2017, Intel bought the Israeli tech company Mobileye – not Intel’s first step towards autonomous driving, but definitely one of the most important. Mobileye provides the computing capacity and sensors for the test operation in the area around Jerusalem. The company also uses its expertise to advise several countries on the implementation of autonomous driving. Mobileye’s expertise has been indisputable at the latest since it sent 100 self-driving test vehicles onto the streets of Jerusalem.

The operation’s slogan: if you can do it in Jerusalem, you can do it anywhere – the city’s traffic is said to be extremely heavy and exhausting for human drivers. Apart from that, Jerusalem is also Mobileye’s home base.

Camera Sensors & True Redundancy

For the moment the test fleet is equipped with camera sensors only. Eight cameras capture images to detect obstacles and traffic signs and to support positioning and mapping. This allows the vehicle to work out optimal routes by itself. The approach of using camera sensors only is called “true redundancy”. Its advantage over combining different kinds of sensors (“real redundancy”) is the small amount of data to be processed.


The data is processed by AI and converted into corresponding actions. To prevent the AI from commanding dangerous maneuvers, Mobileye developed the so-called Responsibility-Sensitive Safety (RSS) model – a mathematical model that checks the AI’s orders against internal safety protocols. If a certain action or maneuver is not covered by the safety protocols, RSS prevents its execution. Intel has published the standards behind these protocols.
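At the core of the published RSS model is a formula for the minimum safe longitudinal distance to the car ahead, assuming the rear car may accelerate during its response time and must then brake. Here is a minimal sketch of that rule; the parameter values are illustrative assumptions, not Mobileye’s calibrated figures:

```python
def rss_safe_distance_m(v_rear: float, v_front: float,
                        rho: float = 0.5, a_max: float = 3.0,
                        b_min: float = 4.0, b_max: float = 8.0) -> float:
    """Minimum safe following distance per the published RSS formula.

    v_rear, v_front: speeds in m/s; rho: response time in s;
    a_max: worst-case acceleration of the rear car during the response;
    b_min: the rear car's minimum guaranteed braking; b_max: the front
    car's maximum braking. All default values are illustrative.
    """
    v_after = v_rear + rho * a_max  # rear car's speed after response time
    d = (v_rear * rho
         + 0.5 * a_max * rho ** 2
         + v_after ** 2 / (2 * b_min)
         - v_front ** 2 / (2 * b_max))
    return max(0.0, d)

# Even following a car at the same 20 m/s, a substantial gap is required,
# because the front car might brake harder than the rear car can:
print(round(rss_safe_distance_m(20.0, 20.0), 1))  # → 43.2
```

If a planned maneuver would shrink the gap below this value, an RSS-style layer vetoes it, which is the “prevents the execution” behavior described above.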

Computing Power

Today, all test vehicles are equipped with the EyeQ4 chip. However, Mobileye has already unveiled its successor, the EyeQ5, with ten times the computing power of the current chip. The EyeQ5 will be in full mass production by 2020 and has already been ordered by BMW for 2021.

First Troubleshooting

Shortly after the test fleet was sent out, the first issues emerged. One car ran a red light despite the efforts of a safety driver. Mobileye has at least already discovered and fixed the cause of the malfunction: a TV camera had interfered with the transponder signal of the traffic lights. Because of the missing signal, the car crossed the intersection as if there were no traffic lights.


VW to introduce Autonomous Parking in Hamburg

Volkswagen is approaching autonomous parking, testing the technology in a parking garage in Hamburg, Germany.
Last year Bosch and Daimler presented their joint valet-parking concept at the IAA, implemented inside the Mercedes-Benz Museum in Stuttgart. Recently Bosch presented eGo, a similar concept developed at the RWTH Aachen Campus. Now Volkswagen has also jumped on the trend, testing an autonomous valet service in Hamburg.

Parking today: time- and money-consuming

Looking for a parking spot costs not only nerves but also time and money. According to an INRIX study, searching for parking is one of the major hidden costs of driving, devouring more than $3,000 per driver in the US in 2017. This includes the fuel consumed while hunting for a space. In New York, a driver spends on average 107 hours per year looking for a space. A possible solution would be to interlink vehicles and parking spaces in order to save drivers’ time and money.
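Figures like these can be roughly sanity-checked by tallying wasted time plus fuel. The rates below (time value, fuel burn while cruising, fuel price) are assumptions chosen for illustration, not INRIX’s methodology:

```python
def yearly_parking_search_cost(hours: float, hourly_value: float,
                               fuel_l_per_h: float, fuel_price: float) -> float:
    """Rough annual cost of cruising for parking: wasted time valued at an
    hourly rate, plus fuel burned while creeping around the block.
    All parameters are illustrative assumptions."""
    time_cost = hours * hourly_value
    fuel_cost = hours * fuel_l_per_h * fuel_price
    return time_cost + fuel_cost

# 107 h/year (the New York figure), $25/h time value, 2 l/h, $0.90/l:
print(round(yearly_parking_search_cost(107, 25.0, 2.0, 0.9), 2))  # → 2867.6
```

Even with conservative assumed rates, the result lands in the same ballpark as the study’s $3,000-per-driver figure.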

Parking in the future

The valet-service concept is based on these goals. Autonomous cars are able to communicate with free parking spaces and can park much closer to each other, because no human has to exit the car. The search for a spot may still take time, but the driver doesn’t have to care at all, having left the car before it entered the parking garage. Watching the cars do their thing won’t be possible anyway – humans won’t be permitted to enter the autonomous parking garages.
VW plans to test autonomous valet parking for the next two years. First, the parking garage in Hamburg was mapped and equipped with signs that guide the autonomous vehicles. The innovative parking lot will work with autonomous car models by VW, Audi and Porsche. Drivers can exit their cars at the entrance gate and initiate the search for a parking space with a single swipe on their smartphone. Future plans include mixed traffic, i.e. manually controlled vehicles and autonomous vehicles looking for spaces in the same parking lot.

Road to ITS 2021: Autonomous trucks and more

Hamburg and VW have formed a strategic partnership, so there are more projects in the pipeline for the period up to 2021, when the Intelligent Transport Systems World Congress (ITS) takes place in the city. Soon there will also be autonomous trucks operating around the Port of Hamburg. VW wants to use trucks from its subsidiaries MAN and Scania to foster the development of autonomous driving. Nissan and Renault are taking a similar road to push the development of autonomous driving technology.


When Driverless Car Demonstrations Are Less Than They Seem

This report was inspired by the many and varied demonstrations of self-driving vehicle technology over the past few years and the widening gulf between the appearance of capability and the reality. The aim is to inform non-specialists about some of the different methods used to enhance the apparent driving proficiency of prototype driverless vehicles.
Self-driving vehicles form an understanding of where they are and where they want to go using advanced versions of contemporary mapping and navigation systems — mature technology. This includes dynamic route planning that changes course based on traffic conditions and road closures. Ideal paths derived from mapping are the foundation stone of nearly all (if not every) self-driving system. The disparity in capability between projects lies in how the car copes with differences between the ideal route and the actual environment. The best systems recognise objects and create an understanding of their real-time situation, together with predictions of how the scene might unfold. Lesser systems do not have this ability, or are capable only in simpler scenarios. This inadequacy can be disguised by the design of the demonstration (not that anyone would do such a thing). To explain the background clearly, this report covers the following areas:
A beginner’s guide to object recognition — a brief overview of what a self-driving artificial intelligence (AI) tries to identify in its surroundings and why.
An introduction to scene understanding and prediction — an overview of how the artificial intelligence can use its understanding of the local environment to make driving decisions.
An overview of different demonstration events; relative difficulty and how to spot fakes — four complexity levels:
• The parking lot demonstration
• The closed course demonstration
• The carefully selected on-road demonstration
• The high-confidence on-road Level 4 demonstration
This includes examples of how a demonstration can be simplified to make the vehicle appear more capable, and some ways that you can investigate further. The issue is that, as shown in the table below, nearly all demonstrations appear sensational, so it is important to bring greater objectivity to the near-certain euphoria felt on exiting the vehicle.

The only conclusion is buyer beware — look carefully behind the curtain. Very few people have travelled in a driverless vehicle and the experience remains impressive, even in circumstances where it is heavily staged. This report simply aims to assist objectivity in the face of thrilling and often seemingly compelling technology demonstrations.

The full report including all insights and graphics is available for download.

About Ad Punctum
Ad Punctum is a consulting and research firm founded by an ex-automotive OEM insider. We bring focused interest, an eye for the story and a love of detail to our research. Intellectual curiosity is at the centre of all that we do, and helping companies understand their business environment better is a task that we take very seriously.
About The Author
Thomas Ridge is the founder and managing director of Ad Punctum, based in London. You may contact him by email at
For Further Contact
If you would like to discuss this report, please contact the author.