What Does Cloud Robotics Mean for Driverless Cars?

When you think of autonomous cars or driverless vehicles, you probably don’t associate them with cloud computing and data analytics. Yet that is exactly the technology that makes autonomy possible, at least in modern implementations.

You see, a driverless vehicle needs to make decisions in a split second. To make those decisions, it must scan and analyze everything in its local environment. While it is certainly possible to build a computer — housed inside the vehicle — that does all these complex calculations, stores all the necessary data and handles the analytics, that’s not what you would call efficient.

Instead, these vehicles tap into a remote cloud computing network to access boatloads of stored data. This data includes roadmaps, instructions and strategies, visual indicators or identifiers and much more.

They do make calculations on their own, and of course there is computer equipment embedded in the vehicle. It’s the data these systems tap into, though, that makes the whole thing truly possible – data stored primarily in the cloud.
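The division of labor just described – onboard computation backed by cloud-hosted data – can be sketched in a few lines of Python. Everything here (the class names, the cached-map idea, the sample data) is a hypothetical illustration, not any manufacturer’s actual architecture:

```python
# Hypothetical sketch: the onboard computer does its own calculations but
# consults a local cache first, falling back to a (stubbed) cloud store
# for map data it does not yet have.

class CloudMapStore:
    """Stand-in for a remote map service; a real system would make a network call."""
    def __init__(self):
        self._maps = {"route-66": {"speed_limit_mph": 65, "lanes": 2}}

    def fetch(self, segment_id):
        return self._maps.get(segment_id)

class OnboardComputer:
    def __init__(self, cloud):
        self.cloud = cloud
        self.cache = {}  # local copy of recently used map segments

    def map_for(self, segment_id):
        if segment_id not in self.cache:  # cache miss -> ask the cloud
            self.cache[segment_id] = self.cloud.fetch(segment_id)
        return self.cache[segment_id]

car = OnboardComputer(CloudMapStore())
print(car.map_for("route-66")["speed_limit_mph"])  # prints 65
```

The point of the cache is exactly the efficiency trade-off described above: the heavy data lives remotely, while the vehicle keeps only what it needs for split-second decisions.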

What if the entire system was streamed from the cloud? What if remote machines handled the processing and fed everything back to the vehicle? How would this change the technology?

Security Is a Major Concern

Whether a vehicle is controlled by a local system or a cloud-based one, security is critical in both scenarios. A vehicle that has to wait for commands from a cloud system is especially vulnerable, though.

Can you imagine the havoc one could wreak by gaining control of these systems? Vehicles could be shut down in the middle of a busy highway, steered into oncoming traffic or even made to run down pedestrians. This is an absolute worst-case scenario – a real nightmare – and nothing even remotely like it has ever happened. That doesn’t mean it can’t.

In July 2015, a pair of hackers remotely took control of a Jeep – ultimately running it off the road – after gaining access to the vehicle’s software through a vulnerability in the entertainment system.

In fact, even conventional vehicles that are merely connected can be hacked in many ways – through technology like Wi-Fi, Bluetooth, GPS and more.


Liability Gets Even More Confusing

Not everything about the technology is a boon. Cloud computing – especially a system that handles the processing and work remotely – opens up a host of new questions, particularly around liability.

For example, if there’s a defect in the technology that results in lives lost, who is at fault? Is it the original manufacturer of the vehicle? Is it the company responsible for the cloud equipment and computing tech? Is it the company that wrote the self-driving software used in the vehicle?

Things are confusing enough with autonomous vehicles as it is, but once you add the cloud computing aspects, it gets even more convoluted. Lawmakers are still trying to iron out the specifics, and only a handful of states have regulations in place for driverless vehicles.

Nearly every auto manufacturer, from Audi to GM, is working on self-driving technology, and yet there are few, if any, laws in place to guide it or set precedent. Most laws assume a human being is controlling the vehicle. This caveat is also why Google’s and Tesla’s engineers must keep their hands near the wheel and their eyes on the road when traveling in these vehicles.

But it’s not just the laws. Look at the roadways you use to commute. They were specifically designed for human vision and control, not that of computers. Our transportation infrastructure needs to be updated to accommodate this new technology.

All these factors make the concept of liability extremely confusing. If an autonomous vehicle chooses to avoid a collision with pedestrians in a way that injures the driver, can the driver take action against the manufacturer? If the condition of the road is to blame, who is at fault? There are no easy answers and no longer a single party to point the finger at.

A 2009 D.C. Metro train crash caused by an automatic control system failure resulted in a slew of claims and lawsuits: 84 out-of-court claims worth a total of $1.6 million. But fault was never formally assigned. The company operating the trains took responsibility, but was it really something the operator did — or the software engineers who designed the technology?

The best way to solve this is to put regulations in place not only for the use of autonomous vehicles, but also for what happens when something goes wrong. It sounds a little crazy, but rules of blame should be defined beforehand — before we’re in a dire situation where everyone is demanding answers and emotions are running high.

Consumers, businesses and emergency officials should know exactly who is at fault in the event of an accident. Computers cannot lie — though perhaps someday they could, if we keep inching toward artificial intelligence. Until then, computers are not tempted to lie or deceive. You can pull the necessary data, assess it and make an informed decision.

But before we can do that, we need regulations in place that tell everyone who is truly at fault.

Are Driverless Cars Safer With Cloud Robotics Onboard?

Despite the concerns, driverless vehicles have incredible potential to clean up our roadways, especially if the processing is handled via cloud systems. The president of the Insurance Information Institute (III) estimates that autonomous cars could reduce accidents by 80 percent.

Ultimately, once the legal gaps and the ethics of driverless vehicles are worked out, these vehicles will be safer than today’s. A computer can make a smarter, more informed decision, and it can do so far faster than a human can. That, however, is only possible through cloud computing and data analytics technology.

Every time a driverless vehicle encounters a scenario or event, the data is recorded and stored for later. Imagine a human with instant access to hundreds of terabytes of data. You’d have the knowledge, skill and experience to make virtually any informed decision. That’s the unfettered, advanced access autonomous vehicles will have — at least if they are tapping into a cloud system.
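The fleet-wide experience described above can be sketched as follows. The event format and function names are purely illustrative, and a real system would store the log in the cloud rather than in memory:

```python
# Hypothetical sketch: each vehicle logs the scenarios it encounters to a
# shared (here, in-memory) fleet store, so every car can draw on the
# experience of all of them.

fleet_log = []  # stand-in for cloud storage shared by the whole fleet

def record_event(vehicle_id, scenario, outcome):
    event = {"vehicle": vehicle_id, "scenario": scenario, "outcome": outcome}
    fleet_log.append(event)
    return event

def known_outcomes(scenario):
    """What has any vehicle in the fleet learned about this scenario?"""
    return [e["outcome"] for e in fleet_log if e["scenario"] == scenario]

record_event("car-1", "black-ice", "reduced speed, no incident")
record_event("car-2", "black-ice", "engaged traction control")

# car-3 has never seen black ice, but the fleet has:
print(known_outcomes("black-ice"))
```

This is the sense in which one vehicle’s encounter becomes every vehicle’s experience — but only if they all tap into the same cloud store.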

They won’t hesitate or choose poorly when faced with a split-second decision. Instead, they’ll make the necessary adjustments, and the accident — or its avoidance — will play out exactly as it should. It’s a matter of deciding when a vehicle should save lives or sacrifice them — as morbid as that sounds.

These questions need to be understood before these vehicles enter our roadways. But when they do, the nearly 90 percent of accidents that result from human error could largely be eliminated. Those are some good odds.


Warning & Disclaimer: The pages, articles and comments on IPWatchdog.com do not constitute legal advice, nor do they create any attorney-client relationship. The articles published express the personal opinion and views of the author as of the time of publication and should not be attributed to the author’s employer, clients or the sponsors of IPWatchdog.com.

Join the Discussion

4 comments so far.

  • Ternary
    January 21, 2017 11:40 am

    Many software updates in vehicle applications are to be performed “over the air” and not exclusively in the shop. The consequences of hacking vehicle systems can be enormous. Hacking driverless vehicle systems could be even more disastrous.

    Vehicle security will rely greatly on cryptographic methods for issues such as source authentication. Cryptography, by its nature, has become largely a mathematical discipline. It is a field where independent inventors can make huge and unexpected contributions.

    A patent system should be part of an infrastructure that stimulates and rewards invention in technology that is mission-critical to an important developing sector of our economy, not one that creates barriers and disincentivizes inventors by rejecting cryptographic inventions as “abstract ideas.”

  • Night Writer
    January 21, 2017 08:46 am

    The reality is that most accidents are caused by speeding, tailgating, inattentive driving, reckless lane changes, etc. – things the machines won’t do.

    As soon as the machines get at least as good as humans and affordable, cities will start to require their use during rush hour times on heavily used highways.


  • Night Writer
    January 21, 2017 08:44 am

    You know why the whole country is going to go driverless? Efficiency in the use of our crowded highways.

    You won’t be allowed on crowded highways during rush hour unless you have a certified driverless vehicle.

    My guess is the improvement in efficiency will be 20-40 percent. That is a lot of savings on highways and there isn’t even room to build new ones in some places.

  • Joachim Martillo
    January 21, 2017 07:27 am

    Kobayashi Maru for autonomous street vehicles.

    For driverless cars, a moral dilemma: Who lives or dies?

    BOSTON (AP) — Imagine you’re behind the wheel when your brakes fail. As you speed toward a crowded crosswalk, you’re confronted with an impossible choice: veer right and mow down a large group of elderly people or veer left into a woman pushing a stroller.

    Now imagine you’re riding in the back of a self-driving car. How would it decide?

    Researchers at the Massachusetts Institute of Technology are asking people worldwide how they think a robot car should handle such life-or-death decisions. Their findings so far show people prefer a self-driving car to act in the greater good, sacrificing its passenger if it can save a crowd of pedestrians. They just don’t want to get into that car.

    To tell the truth, a robot car with well-crafted software might have options not available to humans. It could throw itself into reverse and effectively destroy its engine to come to a stop. (It’s a solution that does not occur to most people, and one that can be somewhat risky if the airbag inflates late. An autonomous vehicle can improve the timing of such safety measures.)

    The robot car should also react faster. A good simulation environment should provide a testbed to work out some of the worst and most improbable scenarios that human drivers are generally ill-equipped or too slow to handle.
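The source authentication Ternary mentions in the comments above can be illustrated with a minimal sketch using Python’s standard hmac module: the vehicle accepts an over-the-air update only if the update’s authentication tag checks out. The key, payload and function names are hypothetical, and real OTA systems would typically use public-key signatures rather than a shared secret:

```python
# Hypothetical sketch of source authentication for over-the-air updates:
# the manufacturer tags the firmware with an HMAC; the vehicle recomputes
# the tag with the shared key and rejects anything that doesn't match.
import hmac
import hashlib

SHARED_KEY = b"factory-provisioned-secret"  # illustrative only

def sign_update(firmware: bytes) -> bytes:
    """Manufacturer side: compute the authentication tag for an update."""
    return hmac.new(SHARED_KEY, firmware, hashlib.sha256).digest()

def accept_update(firmware: bytes, tag: bytes) -> bool:
    """Vehicle side: accept the update only if the tag verifies."""
    expected = hmac.new(SHARED_KEY, firmware, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)  # constant-time comparison

update = b"brake-controller v2.1"
tag = sign_update(update)
print(accept_update(update, tag))               # prints True
print(accept_update(b"tampered payload", tag))  # prints False
```

Note the use of a constant-time comparison, which avoids leaking information through timing — one small example of why, as the comment says, vehicle security leans so heavily on careful cryptographic engineering.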