The brains behind IBM’s Jeopardy-winning, disease-tracking, weather-mapping Watson supercomputer plan to embark on a lobbying blitz in Washington, D.C., this week, hoping to show federal lawmakers that artificial intelligence isn’t going to kill jobs — or humans.
To hear IBM tell it, much of the recent criticism around machine learning, robotics and other kinds of AI amounts to merely “fear mongering.” The company’s senior vice president for Watson, David Kenny, aims to convey that message to members of Congress beginning with a letter on Tuesday, stressing the “real disaster would be abandoning or inhibiting cognitive technology before its full potential can be realized.”
Labor experts and reams of data released in recent months argue otherwise: They foretell vast economic consequences upon the mass-market arrival of AI, as entire industries are displaced — not just blue-collar jobs like trucking, as self-driving vehicles replace humans at the wheel, but white-collar positions like stock trading too.
Others fear the privacy, security and safety implications as more tasks, from managing the country’s roads to reading patients’ X-ray results, are automated — and the most dire warnings, from the likes of SpaceX and Tesla founder Elon Musk, include the potential arrival of “robots capable of destroying mankind.”
But as IBM seeks to advance and sell its AI-driven services, like Watson, the company plans to tell lawmakers those sorts of concerns are “fantasy.” In addition to a private meeting with some lawmakers near Capitol Hill on Wednesday, Kenny is urging Congress not to react out of fear by pursuing proposals like Bill Gates’ idea of taxing robots, as regulators debate how to handle this fast-growing field.
“The impact of AI is evident in the debate about its societal implications — with some fearful prophets envisioning massive job loss, or even an eventual AI ‘Overlord’ that controls humanity,” Kenny wrote. “I must disagree with these dystopian views.”
For IBM, the stakes are high: Watson and the future of what it calls “cognitive technology” are critical to Big Blue’s business. Beyond Watson’s existing work — from aiding in cancer research to lighter tasks, like writing a cookbook — IBM has sought to bring its famed supercomputer to bear on some of the sprawling, data-heavy tasks of the federal government.
In some ways, though, the most vexing challenges facing AI aren’t technological — they’re political.
Self-driving cars, trucks and drones, for example, can’t just take to the roads and skies without permission from local and federal regulators, which are only just beginning to loosen restrictions on those industries.
Others fear that automation might lead to discrimination: Under President Barack Obama, the White House spent months warning that highly powerful algorithms could share the biases of their authors, leading to unfair treatment of minorities or other disadvantaged communities in everything from obtaining a credit card to buying a house. That’s why his administration in October explicitly urged Congress to help it hire more AI specialists in key government oversight roles.
And more challenging still are the economic implications of AI. It will be up to federal officials — including President Donald Trump or his successors — to grapple with untold numbers of Americans who might someday find themselves out of a job and in need of training to find new careers. (Trump’s own Treasury secretary, however, has previously said AI is more than 50 years away from causing such disruptions.)
Sensing potential political hurdles, companies like Apple, Facebook, Google and IBM chartered a new organization, the Partnership on AI, last year. In doing so, they hoped to craft ethical standards around the safe and fair operation of machine learning and robotics before government regulators — in the United States or elsewhere — sought to target AI more aggressively with consumer protection regulations. AI now figures among some of those tech companies’ regular lobbying expenses.
And IBM, in particular, has spent years trying to tell a friendlier, less economically catastrophic story about AI in the nation’s capital — a campaign that it will continue this week.
“Technological advancements have always stoked fears and concern over mass job loss,” Kenny wrote in his Tuesday letter to Congress. “But history suggests that AI, similar to past revolutionary technologies, will not replace humans in the workforce.”
In many cases, like cybersecurity and medicine, Kenny told lawmakers, it is still humans at the end of the day who can “choose the best course of action when an AI system has identified a problem.” He stressed that government should instead focus its attention on fixing a “shortage of workers with the skills needed to work in partnership with AI systems.”
For all the scrutiny facing the industry, however, some in Congress are still getting up to speed. That’s why lawmakers like Rep. John Delaney, D-Md., co-founded the Congressional Artificial Intelligence Caucus, an informal organization of Democrats and Republicans studying the issue. His group is joining IBM’s Kenny and other tech leaders at a private, off-the-record event at the Capitol Hill Club on Wednesday.
Delaney told Recode he comes to AI from the perspective that the “sky is not falling” — that even if industries change, old jobs might still be replaced by new ones in emerging fields. With the caucus, he said, the goal is to “make sure Congress is as informed on this issue as possible, so when it inevitably has some sort of knee-jerk reactions” on AI, the final response from Capitol Hill is “more measured” in scope.
Asked if the tech industry similarly appreciates the economic implications of its inventions, the congressman replied: “I think the [tech] industry focuses, as it should, on being at the cutting edge of innovation and creating products and services that enhance productivity and improve people’s lives.”
“So when they’re thinking about driverless cars, do they spend more time thinking about how this will enhance productivity, and how it will [protect] safety, than the jobs that will be affected? I think the answer is probably yes, but I don’t think they do it in some kind of nefarious way,” Delaney said.
This article originally appeared on Recode.net.