Applied Machine Learning with Peter Norvig, Director of Research, Google AI

[MUSIC PLAYING] PETER NORVIG:
Welcome, everybody. Thanks for being here and
awake so early in the morning. I want to give you a little
rundown of what we’re seeing in terms of
how companies approach the problems of
machine learning. And you all know
the standard model. You get some data,
some pictures, and you put some labels
on it; and then you throw in some of that TensorFlow
stuff; and then profit.
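
[Editor’s note: as a rough sketch of that standard model, the “throw in some TensorFlow stuff” step amounts to something like the following. The data here is a random stand-in, not a real dataset.]

    import tensorflow as tf

    # Hypothetical stand-ins for "some pictures" and "some labels":
    # 1,000 tiny 28x28 images, each with one of 10 class labels.
    images = tf.random.uniform((1000, 28, 28))
    labels = tf.random.uniform((1000,), maxval=10, dtype=tf.int32)

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    model.fit(images, labels, epochs=5)  # ...and then profit

And how does that happen? Well, it’s not always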
quite that direct. It’s sometimes a little
bit more complicated. Why is that? Well, there’s some
surprises along the way. So the first thing is
we’ve been going out, and we’ve been talking to
companies like you and others, and saying what do you
expect the difficulties are going to be? Here’s all the parts
you have to do. You have to define your
objectives, collect the data, build the infrastructure,
optimize the ML algorithm, and then integrate
it into your product. And people’s expectation
is, well, the hard part is going to be that
algorithm stuff. All that math, that’s going
to be really hard, right? And then they go out
and actually do it. And the reality
is more like this, that the math part
was the easy part. And the hard part
is getting the data and building the
infrastructure, and then doing the integration of fitting
together the machine learning part with the
rest of your product. And the math part is
just a tiny little bit. So it really flips
the work you have to do from what you
expected you have to do. So what we were
trying to put together is a methodology that
matches this reality rather than the expectation. Here are some of the pitfalls
we see over and over again. So, one, everybody says,
well, machine learning things would go faster,
better, cheaper. It’ll all be good, right? And sometimes it really is. And sometimes you do things
hundreds of times faster than you could have otherwise. Sometimes you do
things that you just couldn’t have done at all
without machine learning. But other times it
ends up being longer because you’re doing something
new in a methodology you didn’t know about. And so it’s hard to do
something the first time. There’s issues of do
you have the right data and have you curated
it properly and so on. Sometimes people forget that. This idea that it’s
all or nothing, that you’re either doing
machine learning or you’re doing manual. That’s probably not the
right way to look at it. The way you should
be looking at it is, what’s appropriate
for what subproblem? And don’t be afraid to say
it’s not 100% automated, that there’s still
some humans in the loop at the appropriate point.
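
[Editor’s note: a minimal sketch of that human-in-the-loop routing. The model API here is hypothetical; the point is only the confidence threshold.]

    CONFIDENCE_THRESHOLD = 0.9  # tune per subproblem

    def route(item, model, human_review):
        """Automate the confident cases; send the rest to a person."""
        probs = model.predict_probabilities(item)  # hypothetical API: {label: prob}
        label = max(probs, key=probs.get)
        if probs[label] >= CONFIDENCE_THRESHOLD:
            return label               # automated path
        return human_review(item)      # the human in the loop

Figuring out what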
the right product is. So it’s tempting to say here’s
a metric that I can optimize up to 99%. But it doesn’t help
me build my product– that’s not what you
want to optimize. And then the questions
of what you’re going to do in-house, what
tools you’re going to use, and what level you’re
going to use that at. And there’s a lot
of choices here. You can build
everything by yourself. You can take these
pre-trained models that various companies
are offering. And figuring out
that right mix has been another stumbling point. Now I want to just
step back a little bit and look at the difference
between traditional software and machine learning software. So in traditional
software, we start off, and we got an engineer
or a team of engineers sitting at their desks. And they get an idea. And then they manually code
all the decision points of every possible thing
we have to describe to the computer what to do. And the difficulty
is that there’s a lot of different
paths, and you have to make sure every one works. And that makes traditional
software a mathematical science. We’re basically trying to
prove our program is correct. Of course, in
real life, we only do that for
little toy programs. We don’t actually prove
our real programs. But we’re aiming
in that direction. And we use Boolean logic. And we strive for certainty. In machine learning,
it’s quite different. So first of all, it’s
not the programmers writing the program. It’s the computer that’s
writing the program. There’s still a
place for the human. And that’s to be a teacher,
to load in the data, and teach the
system how to learn. And that makes machine
learning software an empirical science. So it’s more like doing
biology than like doing math. You make theories
about the world. You test your theories. They’re never going
to be quite precise. They’re not logical and Boolean. They’re probabilistic. And you embrace the
uncertainty rather than trying to eliminate it. So it’s a completely
different mindset. And it can be hard for people
to get used to that change. Now in addition to the way
you look at the problem, we’ve also made progress
in the methodology for how we build products. So we started off, software was
kind of a studio or an artisan field. And here’s two guys named
Steve in a garage building a great company by themselves. And, of course, they had
a lot of help later on, but you started out, you
had very small teams. There wasn’t much methodology. You did it all by the
seat of the pants. But then we realized
you can only get so far with those small teams. And so we invented sort
of the factory model, or the assembly line
model, where we said, how are we going to make
teams of thousands of software engineers work together? Well, to do that, we need to
impose a lot of discipline and methodology. And we built that up over
basically a half century. But now we’re saying maybe we
don’t need that factory model. And maybe it’s more
like a school model where now we’re going to
have teachers teaching the computers what to do. And we’re going to need a new
methodology for doing that. So what are the tools
we have along the way? So I’m old enough that
when I started programming, I actually had one of these. And you drew out with a
pencil little flowcharts. I never thought that
was useful, but you had to do it if you wanted
to get an A on your homework. So I said, OK. But then over time we built up
a much stronger set of tools. And now we have all
this great stuff. And we have educational
materials and methodologies. And programmers can
be more productive because we have this half
century of building up a methodology. Now in machine learning,
we’re just getting started. So we’re more at
this level of tools. We have these ancient tools. And, yeah, they’re going to get
more sophisticated over time. And you can see
the beginnings of– there’s a hammer there
and a saw and so on. And you know they’re
going to get better. But we’re really just
right at the beginning. And, yeah, we do have some
cool tools that do help. Here’s a TensorBoard. But we haven’t built out
the whole ecosystem the way we have for
traditional software. So how are we
going to get there? How are we going to have
a methodology for machine learning success? Well, that’s one of the
things we’re working on. I want to point you
to this blog we’ve been working on called
“The Lever,” which helps move things faster, and
go through a couple of the blogs we have. And we don’t have to read
all these words right now, but it’ll be available, and
you can go and look at it. So one is this idea of
experimentation, of saying, we don’t know exactly
where we’re going, but we shouldn’t
worry about that. And we should try to
make progress quickly by doing quick experiments,
looking at the results, analyzing them, and changing
the direction you’re going. This idea of how product
managers integrate with machine learning is different
because product managers are used to a discipline
of saying, this is what we’re going to build. Here’s how we’re going
to break it into pieces. I know how long
it’s going to take to build each of those pieces. I can envision
the final product. And with machine learning,
It’s not always like that because there’s much
more variance in how long it’s going to take
or whether it’s even going to be possible at
all to build a component. And then, where does
the data come from? If data is the key, data is
the new capital, the machinery on the factory floor
that makes everything go, where do we get it? What’s the process
for curating it? You can also think of it
as data is the new gold. You want to take
good care of that. You’re making an investment. You want to make the
right investments and know how to use it properly. And you can look at
those blogs for more on each of those ideas. Again, a lot of words. Here you can refer back to it. But we think of the challenge in
terms of these five categories. So, first, what’s
the technology? Getting used to using all these
new tools, using the data, making it flow
through, figuring out which models make
sense and so on, and then deploying
that technology. How does having
a cool technology fit into a product that makes
sense for your customers? So we’ve had issues
with this where there’s a lot going on at
Google within the company, and a lot of great
research going on, and a lot of great
product development. But if we don’t bring
those two sides together, we’re not going to have success. So I remember, as a
director of research, the photos team
came to us and said, we have this terrible
success disaster. People are using
photos, and they’re taking lots and
lots of pictures, and now they’re confused
and they can’t organize them and they can’t find
their own photos. What can we do? And they said, I
think what we need is some human factors
resource to make it easier for people to sort
all their photos into folders. And we said to them,
yeah, OK, we’ve got some of those
human factors experts. And we think we could probably
make that a little bit better. But how about instead
if we automatically labeled all the pictures so
nobody had to waste any time putting them into folders? And they said, you can do that? I thought that was
science fiction. So it was that conversation of
the technologist recognizing these guys have a need
and the product team recognizing these guys
have a technology. And you got to have
those conversations. You’ve got to put
those guys together, or else we would have
wasted all that effort. The design issues can change. So that was a great example
of where the design was going to change a lot,
where we no longer had to worry about what’s the
folder structure look like and what are all
the hashtags like. Rather, we could just say you’re
going to have a search box. And now we figure out
what the results look like when you do that search. So the user experience is
going to be quite different. And then the people problem. I get a lot of complaints from
companies like you saying, how can we hire these machine
learning experts because you and Facebook got them all? So they’re out there. But the other thing
to think about is some of the experts we have
are not the people you want. So you don’t need
somebody who’s going to be writing papers
in the top conferences and inventing new algorithms. What you need are people that
can take the existing tools and put them together
into a product. And sometimes those two types
of people are quite different. And somebody who’s the
world’s expert on algorithms might not be the right
person to build a product. But somebody who
understands that can do it. And then there’s a lot of
problems around growth, that we see companies
making the first steps, getting some success, and
then stumbling a little bit as they try to go forward. And there’s a lot of
issues around this. So part of the problem
is when you first get started, you throw in some
data and everything works. But then you say– well, now all of a sudden, as
we grow, we have privacy issues or we have regulatory issues. And these are things
you don’t think about when you’re just
developing the algorithm and trying to get
something going, but that can be crucial to
the success of your company. And so there’s another class of
people that are important here. So one of the amazing things to
me I learned through Launchpad was talking to a company who
said, yeah, for our next hire, I think we’d rather have
a lawyer than an engineer. And it was the first
time in my career that I actually agreed
with that sentiment. [LAUGHTER] So I told you all the issues. And so far it’s been
kind of negative. It’s been tough. But there are cool
things you can do. There’s just an amazing
number of things. And there’s more coming. So I’m going to go
through some of them. I chose some of the engagements
that we’re involved with. I didn’t choose any of
your guys’ companies because that would be like
saying which one of your kids is a favorite. I can’t do that. So I left you guys out. Here’s a team I met at
Stanford of astrophysicists. And they’re tackling
this problem called gravitational lensing. So what does that mean? So there’s a galaxy
someplace way out there. And light is shining from
that galaxy to Earth. And in between them and
us, there’s another galaxy. And that galaxy is heavy. And so that actually
bends the light. And if you could measure
exactly what was going on, essentially that
would be like putting this galaxy in the
middle on a scale to see how heavy it
is to bend the light. And then you could
learn something about dark matter and
all that cool stuff that these guys care about. And it turns out
physicists know how to make that calculation
by saying let’s start here. Apply the rules of physics
in the forward direction. And that will tell us– if we
know what this galaxy is like, that will tell us what
the light looks like. And then if that
doesn’t match, then tweak this one a little
bit and try again. And it takes a long, long
time on supercomputers to do that because they
have to make lots of trials. And what these guys said
is, we’re physicists. We understand math. We don’t know anything about
this machine learning stuff. But we heard that
in deep learning, you can differentiate
and go backwards rather than going forwards. And that seems like that’s
exactly what we need.
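
[Editor’s note: a minimal sketch of “differentiate and go backwards”: write the forward physics as a differentiable function and let gradient descent recover the parameters, instead of searching forward by trial and error. The forward function below is a toy stand-in, not real lensing physics.]

    import tensorflow as tf

    def forward(params):
        # Toy stand-in for the forward physics model: any differentiable
        # function from galaxy parameters to a predicted observation.
        return tf.sin(params) + 0.5 * params

    observed = tf.constant([0.3, 1.2, 0.7])  # hypothetical measurements
    params = tf.Variable(tf.zeros(3))        # what we want to recover
    optimizer = tf.keras.optimizers.Adam(learning_rate=0.1)

    for step in range(200):
        with tf.GradientTape() as tape:
            loss = tf.reduce_mean(tf.square(forward(params) - observed))
        grads = tape.gradient(loss, [params])
        optimizer.apply_gradients(zip(grads, [params]))

So in a couple of months,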
they taught themselves everything they needed to do. They tried it. It worked better than
the existing techniques. And it ran 10
million times faster. So that’s a pretty cool success. And they got from zero to
success in a couple of months. Now, of course,
these are physicists. So a normal person, when
you say the word tensor, they get a little bit nervous. But physicists eat
that for breakfast. So maybe it’s a little
bit easier for them. They’re literally
rocket scientists. Here’s another
example– similar– of looking for planets. And I used to be at NASA
before I was at Google. And I was involved in a
precursor to this mission. So I really like it. And the idea is you
look at a star far away. And a planet circles that star. And there is an eclipse. So the eclipse blocks out
the light a little bit. And the Kepler mission
looked for that. And all the really,
really big planets that pass in front of a
distant star and block off a lot of the light,
they found all of those. And that was cool. But now we wanted
to go back and say, can we find more,
smaller planets? And the existing techniques
didn’t do that because there’s a lot of uncertainty. It’s not just one factor. But the machine
learning techniques were able to pick those out.
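
[Editor’s note: one plausible shape for such a detector, with hypothetical sizes: a small 1D convolutional network that reads a light curve (the star’s brightness over time) and outputs the probability of a transit.]

    import tensorflow as tf

    # Input: a light curve of 2,000 brightness measurements over time.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv1D(16, 5, activation='relu', input_shape=(2000, 1)),
        tf.keras.layers.MaxPooling1D(4),
        tf.keras.layers.Conv1D(32, 5, activation='relu'),
        tf.keras.layers.GlobalMaxPooling1D(),
        tf.keras.layers.Dense(1, activation='sigmoid'),  # transit / no transit
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy')

And now we’ve found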
many more planets than the ones that have
already been found. And I just saw this morning
there was another similar kind of thing, with
looking at old data where the standard techniques
were able to pull out the easy examples, then going
back with machine learning and finding more examples. We’ve done a lot of
work in medicine. This was a problem of
looking at the retina and diagnosing eye disease. And we showed we can do that
better than regular doctors do. But then we wanted to say,
what else can we do with that? And we said, well, maybe we
can detect high blood pressure. Turns out, yeah, we can
do that really well. And then the
engineers kept going. They were on a roll. And they said,
what other columns do we have in the database? One of the columns is sex. Let’s see if we
can predict that. And the doctor says,
oh, hold on a minute. There’s no difference between
a male and a female retina. You’re not going to be
able to predict that. And the engineers
said, well, if that’s so, why did I get 95% accuracy? [LAUGHTER] And the doctors still don’t
know why we’re able to do that. Their theory of the
eye was incomplete. And the medical
applications go all the way down to high school students. So I talked about
rocket scientists that have two decades of experience. Here’s some kids
who don’t even have two decades of being
alive, and yet they were able to contribute to this. It’s more than just sick people. So we can also look
at sick plants. So you go out into
the rainforest and detect some browning
on the leaves or something. And if you’re an experienced
farmer, you know what to do. But maybe you’re not
an experienced farmer, or maybe through
climate change you’re getting a different
sort of disease that you haven’t seen before,
so we can help with that. And one of the real
challenges here was to say, well, we can’t
put a supercomputer out into the field. And we probably don’t have
any Wi-Fi connectivity. So it’s got to run
locally on the phone. And so one of the
big challenges is to say, how can we take these
compute-heavy applications and scale them down
to the size of a phone and make them still work? And that we were able
to do in this case.
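
[Editor’s note: one common route from a server-sized model to a phone-sized one is post-training quantization with the TensorFlow Lite converter. A minimal sketch; the saved-model path is hypothetical.]

    import tensorflow as tf

    model = tf.keras.models.load_model('plant_disease_model')  # hypothetical path

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]  # quantize the weights
    tflite_model = converter.convert()

    with open('plant_disease.tflite', 'wb') as f:
        f.write(tflite_model)  # small enough to ship on a phone

Here’s another one. And every company has got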
to have an elevator pitch. You invent a company. And you say, what’s
your company? And the answer is, it’s
like Tinder, but for cats. So this company,
called Connecterra, it’s got an elevator pitch,
which is it’s like Fitbit, but for cows. And that seems silly. Do cows really need to boast
about how many steps they did today? No, they don’t. But the farmer really wants
to know what’s going on. The farmer wants to know,
are any of my cows sick? Are any of them doing
anything unusual? How much water
are they drinking? How much are they
walking around? Where are they going
from here to there? And you put a device with a
GPS and accelerometer on it. And now you can get the total
picture for your full herd. Here’s another example where
there’s no Wi-Fi connectivity. So we wanted to detect
illegal deforestation. So the main thing
you can think of– and there’s a couple
different things going on– but the main
one is, can you hear the sounds of saws
cutting down trees? So it’s power saws
if they’re loud. But there’s nobody there. So you put a lot of
sensors out into the field. The sensors basically
you see here, it’s a cell phone
with some solar cells to keep it powered over time. And then the phones form a
mesh network with each other. And they’re listening. And they alert us to when
something’s going on. Captioning videos–
so, on the one hand, we’ve been doing speech
recognition for a long time. This should be easy. But on the other hand,
captioning videos is a little bit harder than
regular speech recognition. So if I’m talking
directly into a microphone and there’s just
one person talking, that’s a pretty easy problem. But if there’s a
video and there’s lots of things happening,
multiple people talking at once, car crashes and so
on, bad microphones and so on, it becomes a lot harder. And a hundred
different languages– so we wanted to tackle that
and make it all automated. And we had success
in that field. Interfacing with the hardware– so I take a lot
pictures with my phone. I also take a lot of pictures
with a big, heavy DSLR. And one of the reasons
you want a big, heavy lens is that allows you to
blur out the background. But we can do that
in software as well. And here you see examples
of being able to use that.
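
[Editor’s note: once a model has segmented the subject from the background, the blur itself is simple. A sketch with stand-in arrays; in practice the mask comes from a segmentation model.]

    import numpy as np
    from scipy.ndimage import gaussian_filter

    image = np.random.rand(480, 640, 3)  # stand-in for the photo
    mask = np.zeros((480, 640, 1))       # 1.0 where the subject is
    mask[100:380, 200:440] = 1.0         # hypothetical subject region

    background = gaussian_filter(image, sigma=(6, 6, 0))  # blur space, not channels
    portrait = mask * image + (1 - mask) * background     # sharp subject, soft backdrop

Here’s another example. We worked with the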
Geena Davis Institute. They’re interested in
bias in the film industry. And so we built a
system that would analyze who’s on
screen for how long, and who has what speaking
roles, and then say is that fair across
male versus female. And here we see
basically a chart that’s saying in movies for
which the female is the lead, the female appears more
often, but not by that much, whereas in movies
where the male is the lead, it’s much farther skewed. So we’ve identified bias here. And you can break that
down into components and so on, and see more. And that was all
done automatically. Whereas previously they’re
making very slow progress in hand annotating
every frame, we’re able to do that all at once. So let me stop here and
open it up for questions. But I just wanted to give
you the idea that there is a powerful opportunity here. There’s so many
things you can do. And we have a set of tools
which is continuing to evolve. And you’ve got to
meet that halfway. You’ve got to be
able to say I want to adopt this technology
and this methodology. And there’s going to be
some bumps along the way. I’m going to be
discovering a lot, some that’s new to me, some
that’s new to everybody. But with that, we can see
a lot of opportunities for making success
and doing things you couldn’t do otherwise. So why don’t we open it
up for questions now? Yeah? AUDIENCE: What is your most
overrated and underrated application of machine learning
that you see in the field currently? PETER NORVIG: Well, one
that might be a candidate for both is these assistants. So we have this
idea that you should be able to talk and
have a conversation. And a lot of the emphasis
now is on a little speaker that you can put on the table
that doesn’t have a keyboard, doesn’t have a screen,
but will listen to you. And these have proven
to be pretty popular. And people buy them and
say, oh, this is so cool. I can ask it to play music,
and it plays the right song. And I can ask it
for the weather, and it tells me the weather. And wait a minute,
what else can I ask it? [LAUGHTER] So it’s a great success in
terms of what it can do. But it’s been a failure so
far in that we haven’t quite figured out everything
that you can do. Whereas in other applications,
say in Google Search, we gave the user a pretty
good model of what works and what doesn’t work. It’s basically you
type in some keywords, and we’ll show you pages that
are relevant to those keywords. People understand that model. And they know what
the balance of power is– that Google is
going to do this amount, but the user also has
to bring their savvy to asking the right questions
and analyzing the result pages. With these assistants,
we’re not quite there yet. So we’re sort of halfway
saying it’s just like a person. You talk to it the way
you talk to a person. But we’re also saying, well,
no it’s not really a person. It doesn’t understand
everything. Well, what does it
understand then? We haven’t made that clear yet. And I think over time– well, one, capabilities
will expand. It will be able to
do more and more. But we’ll also need a
better user interface to say here’s how you
should think of it in terms of what it can do. Well, it’s not a grown person. It’s an eight-year-old. Is that the way to think of it? No, that’s not quite
the right model. So we’ll need some way
to make that more clear. Thank you. Yeah? AUDIENCE: Are there problems
that people are trying to solve [INAUDIBLE]? PETER NORVIG: Yeah,
so are there problems that people will try
to use machine learning and they don’t have to? That’s certainly the case. And there are some
advantages to keeping things as simple as possible. So don’t employ a
heavy-duty technology when an easy one would do. There’s certainly
a lot of examples where the benefit is so
small that it’s easier to do it by hand, even
if it’s not as automated. And maybe there’s a cost
to doing it by hand, but still that’s easier than
investing in a big effort. There’s also the
explainability type issue. So I know in doing
Google search, historically we were
a little resistant to handing too much control
to machine learning. So we always had this
idea that lots of factors are involved in
search, and we’re going to invent new
factors, a thing on the page that you should care about or
a way that the user interacts with the page in their
history and so on. That’s data. We’re going to use that
data to improve the product. And we would often do
things like, say, well, we figured this out. Now we’re going to add this in. How much should we add it in? Well, the machine
learning algorithm will figure out the right
values of those parameters. But we were always
reluctant to say let’s have the whole
thing be one deep learning network the way we do in
machine translation, say. And I think there’s a
couple of reasons for that. One is that we’re kind of
creating our own training data as we go along. So there is no natural
data out there. When you want to do
image processing, there are natural
pictures out in the world, but there are no
natural examples of searches and results other
than the ones that we create. So if we’re training
on that data, and then we change
what we’re doing, now the data is no longer valid. And the other thing
that concerned us is we felt like we had to
think ahead several steps because in the early days
we felt like our job was to observe the web. We were like a library catalog. Somebody else publishes the
stuff, and we just catalog it. Then we realized we’re
actually interacting with it. Every time we changed
our algorithm, the webmasters study
what we did and try to change what they’re doing. So that means we can’t just
optimize on the current data. We have to say if we make
a change, what’s going to happen into the future? And there is no data
on that unknown future. So we felt like we really had
to understand what was going on. And if everything was just
a machine learned model, it was harder for us to predict
how the future would change. Whereas if it was handwritten
code with a few machine learned parameters, it
was easier to do that.
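
[Editor’s note: a sketch of that “handwritten code with a few machine-learned parameters” style. Everything here is a made-up illustration: the structure is readable, hand-designed code, and only the weights would be fit from data.]

    def term_overlap(text, query):
        # Hand-coded signal: fraction of query words appearing in the text.
        words = query.lower().split()
        return sum(word in text.lower() for word in words) / len(words)

    def score(page, query, w_title=2.1, w_body=1.0):
        # The combination rule is hand-written and inspectable; a learning
        # algorithm only tunes the two weights.
        return (w_title * term_overlap(page['title'], query) +
                w_body * term_overlap(page['body'], query))

    page = {'title': 'Machine learning basics', 'body': 'An introduction...'}
    print(score(page, 'machine learning'))

So that’s one example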
where we were reluctant. And now over time, as we do
more and more machine learning, we gain more confidence in
it, and more is sneaking into the search algorithm. Yeah? AUDIENCE: [INAUDIBLE] PETER NORVIG: Yeah, so what
should your relation to data be in general? And that’s a hard question. There’s not one answer. So you have to figure out
where can I get the data? How much of it do I need? How does it have to be curated
or manipulated and so on? And there are multiple
paths to that. So sometimes the data
is already out there and you can just go
find it and collect it. And sometimes you can get
it from somebody else. Sometimes you have to create it. And there’s lots
of examples where you create an initial product
to get some interaction with the users. And then you kind of
bootstrap on that. So we did that, for example,
in our speech recognition. We wanted to have lots
of examples of people talking and getting results. So what we did is we
offered a free service which was directory assistance
for telephone numbers. And people would call
up [INAUDIBLE] business. And then we’d give an answer. And then we could say– we could figure out whether
we got the right answer or not because did they then
connect to the business? So we invested in
data collection. And lots of times
you’ll have to do that. Then there’s also this
question of to what extent is your
application special to you versus generic to
everybody else. And you can see Google
and other companies now are offering these
pre-trained models to do speech recognition or
image recognition or text processing and so
on that are trained over everything in the world. And one approach is use that
and if it works you’re done. Another is to say use that
and if it doesn’t quite work, modify it by adding
in some of your data. And the other
approach is saying, no, my application is
so different from what that was that using it
doesn’t help at all, and I’ve got to start from
scratch with my own data.
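
[Editor’s note: the middle option, using a pre-trained model and adding in some of your data, is usually called transfer learning. A minimal sketch with a publicly pre-trained image model; the five-class problem and the data are hypothetical.]

    import tensorflow as tf

    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights='imagenet')
    base.trainable = False  # keep the generic pre-trained features frozen

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(5, activation='softmax'),  # your own 5 classes
    ])
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
    # model.fit(your_images, your_labels, epochs=3)      # hypothetical data

And there’s no one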
answer to that question. You’ve got to do the
investigation yourself to see what the right path is. AUDIENCE: So on the
interpretability piece, what do you think
about machine learning as it relates to the
advance of human knowledge? As in the retina
example, the doctors have the results as if
their model of the eye improved, but it
didn’t, in fact. They don’t actually
have that knowledge. What do you think about that? PETER NORVIG: Yeah. So that’s a good question. So let’s see. So this gives you hints. And I think there’s lots of
ways that we do experiments. We get some data. We don’t quite understand it. And then you have to go
back and create new theory. And I think it’s
always like that. And there’s this leapfrogging
of experimentation and theory. And maybe what’s
different now is machine learning gives you a
much more powerful tool to do the experimentation. So maybe in the past, there’s
a lot of clever humans around, and theory may have
led to experimentation more often than not. But lots of
discoveries have always been made by somebody
saying, huh, that’s funny, rather than by saying I
have a theory. That is, here’s some unexpected results; now can I go explain that? So machine learning will help
trigger that curiosity. Can it help do the
explanation itself? I think that’s an area where
we need a lot of improvement. So I showed some of
these charts where you could look at your space
of data and decision boundaries and so on. And that gives you some idea. But we need a lot better tools
to have a better conversation with the machine
learned algorithm to understand what
it’s really doing. And then I think the other
issue is people get confused sometimes with this issue
of understandability, and they blame the
algorithm when they should be blaming the problem. And so things that
are understandable are things that can be
described in simple terms. So if I want to
balance a checkbook, I know how to describe
what the right answer is, and I know how to write
a program to do that. And maybe it’s complicated
to get it exactly right, but inherently that’s
a simple problem. And so the code to solve
it should be understandable regardless of what programming
language you write it in or what system you
use to write it. Whereas recognizing
somebody’s face, that’s just an inherently hard
problem, hard in the sense that there’s often no
definitively correct answer. Here’s a face. Who does that belong to? Experts can disagree on
what the right answer is. There is no one right answer. And then, secondly, because
the process for discovering it is unconscious, there is no
expert that can tell you, this is how I made
that decision. Rather, it’s I don’t
know how I did it. My subconscious mind did it
rather than my conscious mind. And so people blame the
machine learning algorithm for not solving those two
problems when it’s not the algorithm’s
fault. It’s the fact that the problem was
difficult to begin with. AUDIENCE: Hi. I have a very basic question. How do I identify a
machine learning problem? There are certain
[INAUDIBLE] problems in [INAUDIBLE] there are
multiple places where we can apply the machine learning. So I’m still figuring out
how to find out a real use case for machine learning. PETER NORVIG: Yeah. I think that comes
down to experience. So I mentioned the case
where our product managers on photos team hadn’t
identified the problem. They didn’t know that
that was a possibility that they could try. So you’ve got to be up to date
on what some similar problems that people have worked on. That’s why I wanted
to show you today a wide range of
possibilities, just to get you thinking about
here’s the types of things that can be done. So you need to get
that spark of an idea. Hey, here’s something
somebody else did. This seems similar
to what I’m doing. And then you need to be able
to analyze that, to say, what do I need in order
to have success there? Well, I need the
right kind of data. I need the right kind
of objective function. I need that hook-up to the
users and be useful to them. So figuring out those steps of
what could define a product. And the more often you do
it, the better you get at it. SPEAKER 1: We have time
for one more question. PETER NORVIG: Yeah. AUDIENCE: How often is
it viable to actually reduce the model down to
something that runs usefully on a mobile phone
versus being too complex and needing to go off
to the cloud to compute? PETER NORVIG: Yeah, so
we’re definitely spending a lot more time doing that. And, fortunately, many
models are much easier to run than they are to train. So the training
process is complex, but then if you can
get it onto the phone, I think there’s a
wide variety of things that are covered pretty well. Phones are really,
really powerful now as long as they don’t
have to be on all the time. AUDIENCE: Yeah, as long
as it’s in short bursts of interactions. PETER NORVIG: We haven’t
yet got to the point where you can be running your
video on your phone all day long and analyzing all the
scenes in front of you. Your battery dies
long before that. But we’re working
in that direction. So I think you should say,
by default, the answer is going to be, yes,
I can make that work. And then the question is,
how do I make it work? So what’s the process
of compiling the model and making it small enough,
making it not too power hungry? How often do I have
to update the model and recompute it in the
cloud and download it? What do I download? What are the privacy concerns? So one advantage of having a
model that runs on the phone is that you can be immune
from the liability of holding somebody’s private data. So if it’s only on
their phone, then it’s their problem and
not your problem. And you can’t be sued. You can’t have that data
recovered and so on. But the disadvantage then is if
you aren’t collecting the data, you aren’t taking advantage of
possibility for improvements. And so there is
a lot of research now on ways in which
you can share parameters for your network
without revealing any of the secrets of
the data underneath.
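
[Editor’s note: the bare idea behind that line of research, federated averaging, in a sketch. Real systems add secure aggregation and other protections; every name here is illustrative.]

    import numpy as np

    def federated_round(global_weights, client_updates):
        # Each phone trains locally and sends back only a weight update;
        # the server averages the updates and never sees the raw data.
        return global_weights + np.mean(client_updates, axis=0)

    weights = np.zeros(10)                                      # toy model
    updates = [np.random.randn(10) * 0.01 for _ in range(100)]  # from 100 phones
    weights = federated_round(weights, updates)

And I think those are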
the real issues of what’s the architecture of what
runs where and when does it get updated and how
does the data flow. But you should probably
assume that if I can solve those problems, I
can get it to run on the phone. AUDIENCE: Cool. Thank you. PETER NORVIG: OK. Yeah. Thank you. [APPLAUSE] [MUSIC PLAYING]
