Serge Lambermont

How to train your AI systems 100,000 times cheaper

Updated: Jul 19, 2020

by John Koetsier, Forbes


How do you beat Tesla, Google, Uber, and the entire multi-trillion-dollar automotive industry, with massive brands like Toyota, General Motors, and Volkswagen, to a full self-driving car? Just maybe, by finding a way to train your AI systems that is 100,000 times cheaper.


It’s called Deep Teaching.





Perhaps not surprisingly, it works by taking human effort out of the equation.

And Helm.ai says it’s the key to unlocking autonomous driving. Including cars driving themselves on roads they’ve never seen ... using just one camera.

“Our Deep Teaching technology trains without human annotation or simulation,” Helm.ai CEO Vladislav Voroninski told me recently on the TechFirst podcast. “And it’s on a similar level of effectiveness as supervised learning, which allows us to actually achieve higher levels of accuracy as well as generalization ... than the traditional methods.”

Artificial intelligence runs on data the way an army marches on its stomach. Most self-driving car projects use annotated data, Voroninski says.


That means thousands upon thousands of images and videos that a human has viewed and labeled, perhaps identifying things like “lane” or “human” or “truck.” Labeling costs dollars per image at a minimum, which means the cost of annotation becomes the bottleneck.


“The cost of annotation is about a hundred thousand X more than the cost of simply processing an image through a GPU,” Voroninski says.

And that means that even with budgets of tens of billions of dollars, you’re going to be challenged to drive enough training data through your AI to make it smart enough to approach level five autonomy: full capability to drive anywhere at any time in any conditions.
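
To make that ratio concrete, here is a back-of-envelope sketch. Only the roughly 100,000x ratio comes from Voroninski’s quote; the dataset size and per-image GPU cost below are assumed placeholders, not published figures.

```python
# Back-of-envelope illustration of why annotation, not compute, becomes the
# bottleneck. The dataset size and per-image GPU cost are assumptions made up
# for this sketch; only the ~100,000x ratio comes from the quote above.
images = 10_000_000                      # hypothetical training-set size
gpu_cost_per_image = 0.00001             # assumed: a tiny fraction of a cent
annotation_cost_per_image = gpu_cost_per_image * 100_000  # the quoted ratio

print(f"GPU processing only: ${images * gpu_cost_per_image:,.0f}")         # ~$100
print(f"Human annotation:    ${images * annotation_cost_per_image:,.0f}")  # ~$10,000,000
```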

The other problem with level five?

You pretty much have to invent artificial general intelligence to make it happen.

“If you mean Level five like literally going anywhere in a sense of being able to go off-roading in a jungle or driving on the moon ... then I think that an AI system that can do that would be on par with a human in many ways,” Voroninski told me. “And potentially could be AI complete, meaning that it could be as hard as solving general intelligence.”


Fortunately, a high-functioning level four self-driving system is pretty much all we need: the ability to drive most places at most times in most conditions.

That will unlock our ability to get driven: to reclaim thousands of hours spent in cars for leisure and work. That will also unlock fractional car ownership and much more cost-effective ride-sharing, plus a host of other applications.

And multiple other trillion dollar markets, including autonomous robots, delivery robots, and more.

So how does Deep Teaching work?

Deep Teaching uses “compressive sensing” and “sophisticated priors” to extract rich insight from limited data. It’s essentially a shortcut to a form of intelligence. Similar techniques helped massively drop the cost of mapping the human genome, played a role in discovering the structure of DNA, and have been used to speed up MRI (magnetic resonance imaging) by a factor of ten.


“Science is full of these kinds of reconstruction problems where you observe information, indirect information about some object of interest, and you want to recover the structure of that object from that indirect information,” Voroninski says. “Compressive sensing is an area of research which solves these reconstruction problems with a lot less data than people previously thought possible, by incorporating certain structural assumptions about the object of interest into the reconstruction process.”

Those structural assumptions include “priors”: a priori assumptions that a system can take for granted about the nature of reality.
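
As a rough illustration of the underlying idea, and not Helm.ai’s actual method, the sketch below recovers a sparse signal from far fewer measurements than its length. The structural prior here is simply that the signal is sparse; all names and numbers are synthetic.

```python
# Minimal compressive-sensing sketch: recover a sparse signal from indirect,
# undersampled measurements by exploiting a structural prior (sparsity).
# Synthetic toy example, not Helm.ai's method.
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 200, 80, 5                      # signal length, measurements, nonzeros
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)  # random measurement matrix
y = A @ x_true                            # m << n indirect observations

def ista(A, y, lam=0.01, steps=3000):
    """Iterative soft-thresholding for L1-regularized least squares."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of the gradient
    for _ in range(steps):
        x = x - A.T @ (A @ x - y) / L                          # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # sparsity prior
    return x

x_hat = ista(A, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

With 80 measurements of a 200-sample signal, it is the sparsity prior that makes the reconstruction well posed at all, which is exactly the logic the quote above describes: bake in what you already know about the object, and you need far less data.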


One example: object permanence. A car doesn’t just stop existing when it passes behind a truck, but a self-driving AI system without knowledge of this particular prior — one that human babies learn in their infancy — wouldn’t necessarily know that. Supplying these priors speeds up training, and that makes autonomous driving systems smarter.
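
To picture how such a prior might be encoded in software, here is a purely illustrative toy, not Helm.ai’s implementation: a tracker that keeps an occluded object alive by coasting it forward under a constant-velocity assumption instead of deleting it the moment detections stop.

```python
# Toy illustration of the object-permanence prior in tracking: when detections
# stop (e.g., a car is occluded behind a truck), the track is coasted forward
# under a constant-velocity assumption rather than deleted immediately.
# Hypothetical example code, not Helm.ai's implementation.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Track:
    x: float            # last estimated position (1-D for simplicity)
    v: float            # last estimated velocity
    missed: int = 0     # consecutive frames with no detection

def update(track: Track, detection: Optional[float],
           dt: float = 0.1, max_missed: int = 30) -> Optional[Track]:
    predicted = track.x + track.v * dt
    if detection is None:
        # Object permanence: assume the object still exists and coast the track.
        track.x = predicted
        track.missed += 1
        return None if track.missed > max_missed else track
    track.v = (detection - track.x) / dt   # crude velocity estimate
    track.x = detection
    track.missed = 0
    return track

# A car is visible, disappears behind a truck for three frames, then reappears.
t = Track(x=0.0, v=10.0)
for z in [1.0, 2.0, None, None, None, 5.0]:
    t = update(t, z)
    print(f"position={t.x:.1f} missed={t.missed}")
```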

There are about 20 similar concepts that our brains use to infer the state of the world from what our eyes see, Voroninski says. Supplying enough of these repeatedly useful concepts is critical to Deep Teaching.


That’s enabled Helm.ai’s system to drive Page Mill Road, near Skyline Boulevard in the Bay Area, with just one camera and one GPU. It’s a curvy, steep mountain road that the system wasn’t trained on — it received no data or images from that route, Voroninski says — but was able to navigate with ease and at reasonable speed.

And frankly, that’s mostly what we need.


We don’t need a system that can off-road or work in the worst blizzard-and-ice conditions. For effective and useful self-driving, we need a system that can handle 99% of roads and conditions, which probably covers a much higher percentage of our overall driving — especially when commuting.


In that sense, making a system that’s safer than humans is not insanely difficult, Voroninski says. After all, AI doesn’t drink and drive.


But the autonomous bar is actually higher than that. “Simply achieving a system that has safety levels on par with a human is actually fairly tractable, in part because human failure modes are somewhat preventable, you know, things like inattention or aggressive driving, etc,” Voroninski told me. “But in truth even achieving that level of safety is not sufficient to launch a scalable fleet. Really what you need is something that’s much safer than a human.”


After all, lawyers exist. And liability for robotic autonomous systems is going to be an issue.


“We currently still lack the legal and regulatory frameworks to deploy L5 technologies at scale both nationally and internationally,” says Katrin Zimmermann, a managing director at automotive consulting group TLGG Consulting. “Technology might enable you to drive in theory, but policy will allow you to drive in practice.”

Once autonomy is solved, however, there are multiple trillion-dollar industries to address. Helm.ai is building technology for self-driving cars, naturally, but the technology is not only for personal vehicles or self-driving taxis. It’s also for shipping. Delivery robots for last-mile service. Service vehicles like street cleaners. Industrial machines that can navigate autonomously.

Solving safe and reliable autonomy unlocks Pandora’s box of capability, and none too soon. We need autonomous systems for environmental reclamation on a global scale, safer manufacturing at lower cost, and a hundred other applications.

Pandora’s box, of course, is a mixed blessing. Unlocking autonomy puts hundreds of millions of jobs at risk. Engineering a solution for that will require politicians as well as scientists.

For now, Helm.ai is focused on self-driving — and focused on shipping its technology to any car brand that wants it.

“What we’re looking to do is really to solve the critical AI piece of the puzzle for self-driving cars and license the resulting software to auto manufacturers and fleets,” Voroninski says. “So you can sort of think about what we’re doing as kind of an Android model for self-driving cars.”
