Everything You Think You Know About AI Is Wrong
Robots are coming for our jobs. Terminators will soon murder us all. There is no escape. Resistance is futile.
These doom-laden predictions probably sound familiar to anyone who's read the news or seen a
movie lately involving artificial intelligence. Sometimes they're invoked with genuine alarm,
as in the case of Elon Musk and Stephen Hawking warning against the
danger of killer automatons. Other times, the anxiety comes across as a kind of detached,
ironic humor masking the true depths of our dread, as if tweeting nervous jokes about
#Skynet will somehow forestall its rise.
AI raises unsettling questions about our place in the economy and society; even if by some
miracle 99 percent of employers agree not to use robots to automate labor, that still leaves
many hardworking people potentially in the lurch. That's why it's important to talk about the
impact AI will have on our future now, while we have a chance to do something about it. And
the questions are complicated: Whose jobs will be at stake, exactly? How do we integrate
those people back into the economy?
But the more I learn about artificial intelligence, the more I've come to realize how little most
of us - myself included - really understand about how the technology is actually developing,
which in turn has a direct impact on the way we experience AI in the real world. It's one thing
to get excited about Siri and chatbots. It's something else entirely to hear that certain fields of
AI research are progressing much more rapidly than others, with implications for the way that
technology will shape our culture and institutions in the years to come.
Killer robots may be much further off than you think
For something like the Terminator to become reality, a whole bunch of technologies would need to be sufficiently advanced at the same time. What's really happening is that AI researchers are making rapid progress on some ideas, such as natural-language processing (i.e., understanding plain English) and data analysis, and much slower progress on other branches of AI, such as decision-making and deductive reasoning. Why? Because starting in the mid-to-late 2000s, scientists achieved a breakthrough in the way they thought about neural networks, or the systems that allow AI to interpret data.
Along with the explosion of raw data made possible by the Internet, this discovery allowed
machine learning to take off at a near-exponential rate, whereas other types of AI research
are plodding along at merely a linear pace, said Guruduth Banavar, an IBM executive who
oversees the company's research on cognitive computing and artificial intelligence.
"What is not talked about much in the media is that AI is really a portfolio of technologies," said Banavar. "Don't just look at one field and assume that all of the remaining portions of
the AI field are moving at the same pace."
This doesn't mean scientists won't make breakthroughs in those other AI fields that eventually
make killer robots possible. But it does mean, for now, that the limits of our research may be
putting important constraints on our ability to create the fully sentient machines of our
nightmares. This is vital, because in the meantime, the other advances we've made are
pushing us toward creating very specific kinds of artificial intelligence that do not resemble
the Terminator robot at all.
For instance, consumers are already seeing our machine learning research reflected in the
sudden explosion of digital personal assistants like Siri, Alexa and Google Now - technologies
that are very good at interpreting voice-based requests but aren't capable of much more than
that. These "narrow AI" systems have been designed with a specific purpose in mind: to help people
do the things regular people do, whether it's looking up the weather or sending a text message.
Narrow, specialized AI is also what companies like IBM have been pursuing. It includes, for
example, algorithms to help radiologists pick out tumors much more accurately by "learning"
all the cancer research we've ever done and by "seeing" millions of sample X-rays and MRIs.
These robots act much more like glorified calculators - they can ingest way more data than a
single person could hope to do with his or her own brain, but they still operate within the
confines of a specific task like cancer diagnosis. These robots are not going to be launching
nuclear missiles anytime soon. They wouldn't know how, or why. And the more pervasive this
type of AI becomes, the more we'll understand about how best to build the next generation of
robots.
So who is going to lose their job?
Partly because we're better at designing these limited AI systems, some experts predict that
high-skilled workers will adapt to the technology as a tool, while lower-skill jobs are the ones
that will see the most disruption. When the Obama administration studied the issue, it found
that as many as 80 percent of jobs currently paying less than $20 an hour might someday be
replaced by AI.
"That's over a long period of time, and it's not like you're going to lose 80 percent of jobs and
not reemploy those people," Jason Furman, a senior economic adviser to President Obama,
said in an interview. "But [even] if you lose 80 percent of jobs and reemploy 90 percent or 95
percent of those people, it's still a big jump up in the structural number not working. So I think
it poses a real distributional challenge."
Policymakers will need to come up with inventive ways to meet this looming jobs problem.
But the same estimates also hint at a way out: Higher-earning jobs stand to be less negatively
affected by automation. Compared to the low-wage jobs, roughly a third of those who earn between $20 and $40 an hour are expected to fall out of work due to robots, according to Furman. And only a sliver of high-paying jobs, about 5 percent, may be subject to robot replacement.
Those numbers might look very different if researchers were truly on the brink of creating
sentient AI that can really do all the same things a human can. In this hypothetical scenario,
even high-skilled workers might have more reason to fear. But the fact that so much of our AI
research right now appears to favor narrow forms of artificial intelligence at least suggests we
could be doing a lot worse.
How to live with your robot
The trick, then, is to move as many low-skilled workers as we can into higher-skilled jobs.
Some of these jobs are currently held by people; other jobs have yet to be invented. So how
do we prepare America's labor force for work that doesn't exist yet?
Part of the answer involves learning to be more resilient and flexible, according to Julia Ross,
the dean of engineering and IT at the University of Maryland Baltimore County. We should be
nurturing our children to interact with people from different backgrounds and to grapple with
open-ended questions, teaching them how to be creative and how to think critically -
and doing it all earlier and better.
"How do we get people to understand and embrace that concept?" said Ross at a recent
event hosted by The Washington Post. "That you need to be a lifelong learner, that the things
that you're learning today may be obsolete in 5 years - and that's okay? You can get
comfortable with that idea if you're comfortable with your capacity to learn. And that's
something we have to figure out how to instill in every student today."
Soon, teachers themselves may come to rely on narrow AI that can help students get the
most out of their educational experiences, guiding their progress in the way that's best for
them and most efficient for the institution. We're already seeing evidence of this in places like
Georgia Tech, where a professor recently revealed - much to the surprise of his students -
that one of his teaching assistants was a chatbot he had built himself.
Making artificial intelligence easy for regular people to use and love depends on a field of
research called human-computer interaction, or HCI. And for Ben Shneiderman, a computer
science professor at the University of Maryland, HCI is all about remembering the things that
make people human.
This means giving people some very concrete ways to interact with their AI. Large,
high-definition touchscreens help create the impression that the human is in control, for
example. And designers should emphasize choice and context over a single "correct" answer
for every task. If these principles sound familiar, that's because many of them are already
baked into PCs, smartphones and tablets.
"People want to feel independent and like they can act in the world," said Shneiderman,
author of "Designing the User Interface: Strategies for Effective Human-Computer Interaction."
"The question is not 'Is AI good or bad?' but 'Is the future filled with tools designed to
supplement and empower people?'"
That's not to say narrow AI is the only kind researchers are working on; indeed, academics
have long been involved in a debate about the merits of narrow AI versus general artificial
intelligence. But the point is that there's nothing predetermined about general AI when so
much of our current research effort is being poured into very specific branches of the
field - buckets of knowledge that do more to facilitate the use of AI as a friendly helper rather
than as the object of our undoing.
Reviewed by sariyugurta7@gmail.com on July 14, 2017