AI is not “magic dust” for your company, says Google’s Cloud AI boss

Andrew Moore is the new head of Google’s Cloud AI business, a unit that is striving to make machine learning tools and techniques more accessible and useful for ordinary businesses.

To that end, his team announced several new tools today. These include AI Hub, a catalog of reusable, plug-and-play machine-learning components, and Kubeflow Pipelines, software that makes machine-learning workflows more portable.
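For context, here is a minimal sketch of what a Kubeflow Pipelines workflow can look like in the kfp Python SDK (v1-style API). The container images, arguments, and paths below are hypothetical placeholders, not anything from the announcement.

```python
# Minimal sketch of a Kubeflow Pipelines workflow (kfp v1-style API).
# The images and paths below are hypothetical placeholders.
import kfp
from kfp import dsl


@dsl.pipeline(
    name="train-and-evaluate",
    description="Two-step workflow: train a model, then evaluate it.",
)
def train_and_evaluate(data_path: str = "gs://example-bucket/data.csv"):
    # Each step runs in its own container, which is what makes the
    # workflow portable across Kubernetes clusters.
    train = dsl.ContainerOp(
        name="train",
        image="gcr.io/example-project/trainer:latest",  # placeholder image
        arguments=["--data", data_path, "--out", "/tmp/model"],
        file_outputs={"model": "/tmp/model"},
    )
    dsl.ContainerOp(
        name="evaluate",
        image="gcr.io/example-project/evaluator:latest",  # placeholder image
        arguments=["--model", train.outputs["model"]],
    )


if __name__ == "__main__":
    # Compile to a portable spec that any Kubeflow deployment can run.
    kfp.compiler.Compiler().compile(train_and_evaluate, "pipeline.yaml")
```

The key idea is that each step is an isolated container, so the same compiled workflow can move between clusters and environments.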

Efforts to make AI more accessible will likely define the technology’s impact. They will also prove very important to the future of companies like Google.

Moore spoke with MIT Technology Review’s senior editor for AI, Will Knight, ahead of today’s announcement.

Like you, lots of AI researchers are being sucked into big companies. Isn’t that bad for AI?

It’s healthy for the world to have people who are thinking about 25 years into the future—and people who are saying “what can we do right now?”

There’s one project at Carnegie Mellon that involves a 70-foot-tall robot designed to pick up huge slabs of concrete and rapidly create levees against major flooding. It’s really important for the world that there are places that are doing that—but it’s kind of pointless if that’s all that’s going on in artificial intelligence.

While I’ve been at Carnegie Mellon, I’ve had hundreds of meetings with principals in large organizations and companies who are saying, “I am worried my business will be completely replaced by some Silicon Valley startup. How can I build something to counter that?”

I can’t think of anything more exciting than being at a place that is not just doing AI for its own sake anymore, but is determined to bring it out to all these other stakeholders who need it.

How big of a technology shift is this for businesses?

It’s like electrification. And it took two or three decades for electrification to pretty much change the way the world worked. Sometimes I meet very senior people with big responsibilities who have been led to believe that artificial intelligence is some kind of “magic dust” that you sprinkle on an organization and it just gets smarter. In fact, implementing artificial intelligence successfully is a slog.

When people come in and say, “How do I actually implement this artificial intelligence project?” we immediately start breaking the problem down in our brains into the traditional components of AI: perception, decision-making, and action. The decision-making component is a critical part of it now; you can use machine learning to make decisions much more effectively. Then we map those components onto different parts of the business. One of the things Google Cloud has in place is a set of building blocks that you can slot together.
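As an illustration (not from the interview), here is a minimal sketch of what slotting in one such perception building block can look like, assuming the google-cloud-vision Python client library; the image filename is a placeholder.

```python
# Minimal sketch: using a prepackaged "perception" building block
# (the Google Cloud Vision API) instead of building a model from scratch.
# Assumes the google-cloud-vision library and application credentials
# are already set up; the image file is a placeholder.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("factory_floor.jpg", "rb") as f:  # placeholder image
    image = vision.Image(content=f.read())

# Label detection is one off-the-shelf perception component; its output
# can feed a downstream decision-making step in the business workflow.
response = client.label_detection(image=image)
for label in response.label_annotations:
    print(f"{label.description}: {label.score:.2f}")
```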

Solving artificial intelligence problems involves a lot of tough engineering and math and linear algebra and all that stuff. It very much isn’t a magic-dust type of solution.

What mistakes do companies make in adopting AI?

There are a couple of mistakes I see being made over and over again. When people come and say, “I’ve got this massive amount of data, surely there’s some value I can get out of it,” I sit them down and have a strong talk with them.

What you really need to be doing is working on a problem your customers or your workers have. Just write down the solution you’d like to have, then work backwards: figure out what kind of automation might support that goal, then whether you have the data you need, and how you would collect it.

What makes a good AI practitioner?

The problem is, it’s a kind of artisanal skill. There’s no real playbook for it. But the big push is to find the problem and work backwards from it. And it’s actually fun, because there’s creativity in thinking about how the business could change, and creativity in understanding which pieces of technology are really feasible as opposed to a blue-sky, crazy science project. But it’s really rare to find people who are able to use both parts of their brain at once.

AI is about using math to make machines make really good decisions. At the moment it has nothing to do with simulating real human intelligence. Once you understand that, it kind of gives you permission to think about how you can take a set of data tools (things like deep learning, automated machine learning, and, say, natural-language translation) and put them into situations where you can solve problems, rather than just saying, “Wouldn’t it be good if the computer replaced the brains of all my employees so it could run my company automatically?”

What do you think of MIT’s plan to build a new college for AI?

I was really pleased to see what MIT is doing. At Carnegie Mellon, when we created our big AI initiative two years ago, more than half of the people involved were from outside the school of computer science.

AI by itself is an abstract concept that, to me personally, isn’t that exciting. It gets exciting when you say, “How are we going to make cosmology vastly more effective through massive automation?” or, “How are we going to give kids studying literature tools to find out whether something was written by a person in the same state of mind as someone else?”

What MIT is doing is very sensible. It’s my personal belief that there won’t be many opportunities for students who want to avoid AI.

At CMU you organized an AI conference with the Obama administration. Does the current US government need to pay more attention to AI?

There are huge sectors of the commercial world [that will be affected], but there are also things in the public sector, from education to effectively managing healthcare for veterans to automation for controlling massive fires. I would be horrified at any country that decided it was not going to be bringing artificial intelligence into the public sector—we have opportunities in so many verticals to save lives and improve lives.

Google Cloud was at the center of a recent controversy over a contract with the US Air Force. Will you continue to work with the US military?

We will continue our work with governments and the military in many areas, as we have for years. These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue. We will also work to provide tools to enhance government efficiency.

Collaboration in these areas is important, and we’ll actively look for more ways to augment the critical work of these organizations. One recent example is our partnership with the Drug Enforcement Administration to fight opioid addiction.

Google’s plans regarding China have also been controversial, and there is a Google AI research lab in Beijing. How important is China to Google’s AI plans?

Google’s really serious when it says, “AI first,” and that’s what has attracted so many of us to Google in the first place. There is AI happening in pretty much every Google engineering office around the world, including China.

How do you plan to integrate employees’ concerns over how AI is used into plans for the future?

Sundar wrote a blog post about AI principles in June, and we also just published a post about working with people on “steering the right path for AI.” Sundar set us on this path because it’s the right thing to do, but I also think it makes very sound business sense.

I want to see organizations choosing to work with Google specifically because we are so systematically organized around making sure AI projects avoid the many ethical pitfalls that new AI practitioners can fall into.
