How Will Computers Serve Us in 2020?

Live-blogging from the IBM Watson University Symposium at Harvard Business School and MIT Sloan School of Management. Additional coverage is on the Smarter Planet Blog.

Panel discussion: What Can Technology Do Today, and in 2020?

Moderator: Andrew McAfee – MIT Sloan, Center for Digital Business

Panelists: Alfred Spector, Google; Rodney Brooks, MIT and Heartland Robotics; David Ferrucci, IBM

Alfred Spector, Google

Spector: For many years we focused in computer science on solving problems where accuracy and repeatability were critical. You can’t charge a credit card with 98% probability. We’re now focusing on problems where precision is less important. Google search results don’t have to be 100% accurate, so search can take on a much bigger set of problems.

When I started in computer science, it was either a mathematical or an engineering discipline. What has changed is that the field is now highly empirical, because of all of that data and what we can learn from it. In the early days of AI we would never have thought of getting 4 million chess players to train a computer. You can do that today.
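
As a concrete, if simplified, illustration of training a computer from the games of millions of players, here is a minimal sketch that fits a logistic position evaluator to game outcomes. The features, data, and training loop are invented for illustration and are not any panelist’s actual system.

```python
# Illustrative sketch (not any specific Google or IBM system): learning a chess
# position evaluator from the outcomes of games played by humans.
# Each position is reduced to a few hand-picked features; the model learns
# weights that predict the probability that the side to move eventually won.
import math
import random

def predict(weights, features):
    """Logistic model: probability that the side to move wins."""
    z = sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def train(positions, outcomes, epochs=200, lr=0.05):
    """positions: list of feature vectors; outcomes: 1 if the side to move won."""
    weights = [0.0] * len(positions[0])
    for _ in range(epochs):
        for features, won in zip(positions, outcomes):
            error = won - predict(weights, features)   # gradient of the log-loss
            weights = [w + lr * error * f for w, f in zip(weights, features)]
    return weights

# Toy data standing in for millions of recorded games:
# features = (material balance, mobility difference, king safety score)
random.seed(0)
positions = [(random.uniform(-3, 3), random.uniform(-1, 1), random.uniform(-1, 1))
             for _ in range(500)]
outcomes = [1 if 0.8 * m + 0.3 * mob + 0.2 * k + random.gauss(0, 0.5) > 0 else 0
            for m, mob, k in positions]

weights = train(positions, outcomes)
print("learned feature weights:", [round(w, 2) for w in weights])
```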

Brooks: Here at MIT, all students take machine learning because it’s that important.

McAfee: Was there a turning point when you decided the time was right to take these empirical approaches?

Brooks: It was in the 90s. The Web gave us the data sets.

Ferrucci: Watson learned over heuristic information. The space of possibilities was too big to plow through by sheer trial and error. You have to combine inductive and deductive reasoning.
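
To illustrate what combining the two modes of reasoning can look like in code, here is a toy sketch, not the actual Watson pipeline: a deductive step filters candidate answers against a hard type constraint, and an inductive step ranks the survivors with weights that would normally be learned from past question-answer pairs (hard-coded here; all names and numbers are hypothetical).

```python
# Minimal sketch (not the real Watson pipeline): combine deductive filtering
# with an inductively learned scorer instead of exhaustive trial and error.

# Hypothetical candidate answers with features gathered from evidence passages.
candidates = [
    {"answer": "Toronto", "type": "city",    "evidence_hits": 3, "popularity": 0.4},
    {"answer": "Chicago", "type": "city",    "evidence_hits": 9, "popularity": 0.7},
    {"answer": "O'Hare",  "type": "airport", "evidence_hits": 6, "popularity": 0.5},
]

def deduce(cands, required_type):
    """Deductive step: discard candidates that violate a hard constraint."""
    return [c for c in cands if c["type"] == required_type]

# Inductive step: weights that would normally be fit on past question/answer
# pairs; hard-coded here purely for illustration.
LEARNED_WEIGHTS = {"evidence_hits": 0.8, "popularity": 0.2}

def confidence(cand):
    """Score a surviving candidate with the learned weights."""
    return sum(LEARNED_WEIGHTS[k] * cand[k] for k in LEARNED_WEIGHTS)

viable = deduce(candidates, required_type="city")
best = max(viable, key=confidence)
print(best["answer"], round(confidence(best), 2))   # -> Chicago 7.34
```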

Brooks: It’s easy to get a plane to fly from Boston to Los Angeles. What’s hard is to get a robot to reach into my pocket and retrieve my keys.

McAfee: Why does the physical world present such challenges?

Brooks: In engineering, you have to set up control loops and you can’t afford for them to be unstable. Once a plane is in the air, the boundary conditions of the differential equations don’t change that much. But when a robot reaches into my pocket, the boundary conditions change every few milliseconds.
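
A small sketch may make Brooks’s point concrete. Below, a fixed-gain proportional controller is tuned for one setting of a toy plant; when the plant’s parameter jumps mid-run (as it might when fingers meet cloth and keys), the same loop goes unstable. The model and numbers are invented purely for illustration.

```python
# Illustrative sketch: a fixed-gain proportional controller on a simple
# discrete-time plant x[k+1] = a*x[k] + b*u[k]. The gain K is tuned for one
# value of b ("cruising flight"); when b shifts abruptly ("hand in a pocket"),
# the same loop diverges.

def simulate(a, b_schedule, K, x0=1.0):
    """Run the closed loop u = -K*x and return the state trajectory."""
    x, trajectory = x0, [x0]
    for b in b_schedule:
        u = -K * x                 # proportional feedback
        x = a * x + b * u          # plant update
        trajectory.append(x)
    return trajectory

a, K = 1.0, 1.0                    # K chosen so |a - b*K| < 1 when b = 0.5

steady = simulate(a, [0.5] * 10, K)               # plant parameters barely move
shifting = simulate(a, [0.5] * 5 + [5.0] * 5, K)  # parameters jump mid-run

print("stable loop:   ", [round(v, 3) for v in steady])
print("unstable loop: ", [round(v, 2) for v in shifting])
```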

McAfee: The things that 2-year-old humans can do machines find very difficult, and the things that computers can do humans find very difficult.

Rodney Brooks, MIT

Brooks: One thing we have to solve is the object recognition capability of a two-year-old child. A child knows what a pen or a glass of water is. There is progress here, but it’s mainly in narrow sub-fields. Google’s self-driving cars are an example: they understand road conditions well enough to drive pretty well.

Spector: We’re looking to attack every barrier to communication. Example: with Google Translate, we eventually want to get to every language.

Another is how to infer descriptions for items that lack them. How do you infer a description from an image? We’re at the point where, if you ask for pictures of the Eiffel Tower, we’re pretty good at delivering that.

A third thing is to make sure that information is always available from every corpus, whether it’s your personal information, information in books or information that’s on the Web. We want to break down those barriers while also preserving property rights. How many times have you searched for something and you can’t find it? It turns out it’s in a place where you weren’t looking. When you combine that with instantaneity of access, you can be on the street and communicate with someone standing next to you in the right language and the right context. You can go to a new city where you’ve never been before and enjoy that city no matter where it is.

McAfee: You think in five years I’ll be able to go to Croatia and interact comfortably with the locals?

Spector: Yes.

Brooks: We think manufacturing is disappearing from the US, but in reality there is still $2 trillion of manufacturing in the US. What we’ve done is go after the high end. We have to find things to manufacture that the Chinese can’t. What this has led to is manufacturing jobs getting higher tech. If we can build robotic tools that help people, we can get incredible productivity. The PC didn’t get rid of office workers; it made them do things differently. We have to do that with robots.

We can take jobs back from China but they won’t be the same jobs. That doesn’t mean people have to be engineers to work. Instead of a factory worker doing a repetitive task, he can supervise a team of robots doing repetitive tasks.

My favorite example is automobiles. We’ve made them incredibly sophisticated but ordinary people can still drive.

Spector: It’s machines and humans working together to build things we couldn’t build separately. At Google, we learn how to spell from the spelling mistakes of our users.
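
Spector doesn’t describe the mechanism here, but one common way to ground the idea is to mine query logs for quick self-corrections, where a user immediately retypes a query as a close variant. The sketch below counts such pairs and picks the most frequent correction; the log format, edit-distance threshold, and function names are assumptions, not Google’s actual spelling system.

```python
# Illustrative sketch: mining correction pairs from a query log. When a user
# quickly retypes a query as a close variant, treat (first, second) as a
# possible misspelling -> correction pair; the most frequent correction wins.
from collections import Counter, defaultdict

def edit_distance(a, b):
    """Plain Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def build_corrections(reformulations):
    """reformulations: (original_query, retyped_query) pairs from a log."""
    counts = defaultdict(Counter)
    for first, second in reformulations:
        if first != second and edit_distance(first, second) <= 2:
            counts[first][second] += 1
    return {misspelling: c.most_common(1)[0][0] for misspelling, c in counts.items()}

# Hypothetical log excerpt: users correcting themselves.
log = [("recieve", "receive"), ("recieve", "receive"), ("recieve", "relieve"),
       ("definately", "definitely")]
corrections = build_corrections(log)
print(corrections["recieve"])      # -> receive
```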

Ferrucci: The notion is that collaboration among the health care team, the patient and the computer can result in a diagnostic system that is more effective and that produces more options. Everyone is well informed about the problems, the possibilities and why. I think we’re capable of doing that today much better than we did in the past. This involves exploiting the knowledge that humans already use to communicate with each other. It gets you as a patient more involved in making better decisions faster. It’s collaborating better with the experts.

McAfee: Don’t we need to shrink the caregiver team to improve the productivity of the system?

Ferrucci: The way you make the system more productive is to make people healthier. Does that involve a smaller team? I don’t know, but I do know you get there by focusing on the right thing, which is the health of the patient.

Andrew McAfee, MIT Sloan

McAfee: If you could wave a wand and get either much faster computers, a much bigger body of data or a bunch more Ph.D.s on your team, which would you want?

Brooks: Robotics isn’t limited by the speed of computers. We’ve got plenty of data, although maybe not the right data. Smart Ph.D.s are good, but you’ve got to orient them in the right direction. The IBM Watson team changed the culture to direct a group of Ph.D.s the right way. I think we’d be better off if universities were smaller and did more basic research that companies like IBM would never do.

Spector: When many of us in industry go to the universities, we’re often surprised that the research isn’t bolder. Perhaps that has to do with faculty reward issues. We envision that there’s going to be a need for vastly more computation. I’m sure Google data centers will continue to grow. If you stay anywhere near Moore’s law, these numbers will become gigantic. The issues will relate to efficiency: using the minimum amount of power and delivering maximum sustainability.

With respect to people, there’s a tremendous amount of innovation that needs to be done. Deep learning is a way to iteratively learn more from the results of what you’ve already learned; language processing is one place to do that. We learn from the results of what we do. Finally, data is going to continue to grow. We bought a company with a product called Freebase, where people are creating data by putting semantic variables together. Just learning the road conditions in New York from what commuters are telling us is crowdsourced data, and that’s enormously useful.
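
As a rough illustration of what learning road conditions from commuter reports could look like, here is a minimal sketch that averages crowdsourced speed observations per road segment and labels congestion. The segments, speeds, and thresholds are invented; this is not how Google actually processes traffic data.

```python
# Illustrative sketch: turning raw commuter reports into road conditions by
# averaging recent speed observations per segment.
from collections import defaultdict
from statistics import mean

# Hypothetical crowdsourced reports: (road_segment, observed_speed_mph)
reports = [
    ("FDR Drive northbound", 12), ("FDR Drive northbound", 9),
    ("FDR Drive northbound", 14), ("Brooklyn Bridge", 31),
    ("Brooklyn Bridge", 28),
]

FREE_FLOW_MPH = {"FDR Drive northbound": 40, "Brooklyn Bridge": 35}

def road_conditions(reports):
    """Average reported speeds per segment and label the congestion level."""
    by_segment = defaultdict(list)
    for segment, speed in reports:
        by_segment[segment].append(speed)
    conditions = {}
    for segment, speeds in by_segment.items():
        ratio = mean(speeds) / FREE_FLOW_MPH[segment]
        conditions[segment] = ("heavy" if ratio < 0.5 else
                               "moderate" if ratio < 0.8 else "clear")
    return conditions

print(road_conditions(reports))
# -> {'FDR Drive northbound': 'heavy', 'Brooklyn Bridge': 'clear'}
```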

David Ferrucci, IBM Research

Ferrucci: We need all three, but in order it’s researchers, data, machines. Parallel processing is important, but it’s less important than smart people.

McAfee: Do computers ultimately threaten us?

Brooks: The machines are going to get better, but for the foreseeable future we’ll evolve faster. There’s a lot of work going on in the area of putting machines into the bodies of people. I think we’re going to be merging and coupling machines to our bodies. A hundred years from now? Who the hell knows?

Spector: There will be more instantaneity, faster information. We can embrace that, like we did central heating, or reject it. I think we’re on a mostly positive track.

Audience question: What’s the next grand challenge?

Ferrucci: I think the more important thing is to continue to pursue projects that further the cause of human-computer cooperation. We tend to go off after new projects that require entirely different architectures, and that hurts us. I’d rather we focus on extending and generalizing the architectures we’ve established and on applying them to new problems.

Brooks: I’d like to see us focus on the four big problems we need to solve.

  • The visual object recognition of a two-year-old
  • The spoken language capabilities of a four-year-old
  • The manual dexterity of a six-year-old (tying shoelaces is a huge problem for a machine)
  • The social understanding of an eight-year-old