Democracy Needs a Reboot for the Age of Artificial Intelligence

A revolution in machine learning will transform society. We can’t let tech companies use it to usurp power.

Katharine Dempsey

November 8, 2017

Louis Buckley, Content Developer at London's Science Museum, plays rock, paper, scissors with Berti the Robot in London, UK, February 2009. (Ian Nicholson / AP Photo)

While speaking on a panel earlier this year, I watched an expert in artificial intelligence reassure a group of anxious professionals with an analogy: Many people can't explain how a car runs, but they don't hesitate to get into one. In other words, consumers will embrace the benefits of technology without asking too many questions. The audience nodded sagely; no one seemed to pick up on the analogy's disturbing implications.

Equating an inability to describe the internal-combustion engine with our poor grasp of the potential impacts of artificial intelligence creates a false sense of security. One doesn't need to be a gearhead to form opinions on congestion pricing, seat belts, or a gas tax. Likewise, you don't need to be a coder to grapple with AI. By not asking questions about artificial intelligence and its related fields, we relinquish a massive amount of control to giant profit-seeking firms.

A healthy modern democracy requires ordinary citizens to participate in public discussions about rapidly advancing technologies. We desperately need new policies, regulations, and safety nets for those displaced by machines. With computing power accelerating exponentially, the scale of AI's significance has yet to be fully internalized. The 2017 McKinsey Global Institute report "A Future That Works" predicts that AI and advanced robotics could automate roughly half of all work globally by 2055, but, McKinsey notes, "this could happen up to 20 years earlier or later depending on the various factors, in addition to other wider economic conditions."

Granted, the media are producing more articles focused on artificial intelligence, but too often these pieces veer into hysterics. Wired magazine labeled this year’s coverage “The Great Tech Panic of 2017.” We need less fear-mongering and more rational conversation. Dystopian narratives, while entertaining, can also be disorienting. Skynet from the Terminator movies is not imminent. But that doesn’t mean there aren’t hazards ahead.

Yoshua Bengio, a Canadian computer scientist and one of the world’s eminent deep-learning experts, argues that we should focus on the ways technology is set to compound existing problems, especially inequality. Bengio wrote in an e-mail, “AI will probably exacerbate inequalities, first with job disruptions—a few people will benefit greatly from the wealth created, [while] a large number will suffer because of job loss—and second because wealth created by AI is likely to be concentrated in a few companies and a few countries.”

Another risk of AI is its propensity to reinforce racial and gender bias. Earlier this year, a report published in Science showed “that applying machine learning to ordinary human language results in human-like semantic biases.” The researchers input a “corpus of text from the World Wide Web,” and the machine-learning program associated women’s names more with words like “wedding” and “parents” and men’s names with “professional” and “salary.”
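To make the study's finding concrete, here is a minimal sketch of the kind of word-association test it describes. The word vectors below are invented toy values chosen purely for illustration; a real test, like the one the researchers ran, would use embeddings learned from billions of words of web text.

```python
# A toy illustration of measuring semantic bias in word embeddings.
# The vectors here are made-up values, NOT real trained embeddings;
# they exist only to show the shape of the association test.
import numpy as np

# Hypothetical 3-dimensional word embeddings (illustrative only).
vectors = {
    "alice":        np.array([0.9, 0.1, 0.2]),
    "john":         np.array([0.1, 0.9, 0.2]),
    "wedding":      np.array([0.8, 0.2, 0.1]),
    "parents":      np.array([0.7, 0.3, 0.2]),
    "professional": np.array([0.2, 0.8, 0.1]),
    "salary":       np.array([0.1, 0.7, 0.3]),
}

def cosine(a, b):
    """Cosine similarity: how closely two word vectors point the same way."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(name, attribute_words):
    """Mean similarity between a name and a set of attribute words."""
    return np.mean([cosine(vectors[name], vectors[w]) for w in attribute_words])

family = ["wedding", "parents"]
career = ["professional", "salary"]

for name in ["alice", "john"]:
    bias = association(name, career) - association(name, family)
    print(f"{name}: career-vs-family association = {bias:+.3f}")
```

On embeddings actually trained on web text, the researchers found exactly this pattern: women's names scored higher on the family side of the scale, men's names on the career side.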

With this in mind, it’s easy to imagine a woman being passed over for a traditionally male job because of an algorithm. This is the same type of technology that can determine university admission or a bank-loan application. The decisions will be fair only if the data is unbiased, and we don’t have to look too far to be reminded that our world, and therefore our data, is far from even-handed.

Artificial intelligence is a heterogeneous field that draws on computer science, neuroscience, even philosophy. Ordinary citizens have many ways to approach or understand it, but the easiest is to recognize the role AI plays in topics they already care about.

When I interviewed Ben Scott, a senior adviser to the Open Technology Institute at New America, it was the day after Facebook’s chief operating officer, Sheryl Sandberg, announced the company’s intention to tighten controls on its targeting of ads. ProPublica had revealed that Facebook’s tools could be used to direct advertisements to “Jew haters.” Facebook’s advertising-targeting platform is a sophisticated AI system. Unfortunately, as Scott told me, “people don’t necessarily think of this as an AI problem—it absolutely is.” The way Russia has exploited social media to sow confusion and discontent across the world—that’s also an AI problem. Artificial intelligence is becoming tightly woven into nearly every aspect of society.

Increasingly, to thoughtfully discuss ethics, politics, or business, the general population needs to pay attention to AI. In 1989, Ursula Franklin, the distinguished German-Canadian experimental physicist, delivered a series of lectures titled “The Real World of Technology.” Franklin opened her lectures with an important observation: “The viability of technology, like democracy, depends in the end on the practice of justice and on the enforcements of limits to power.”

For Franklin, technology is not a neutral set of tools; it can’t be divorced from society or values. Franklin further warned that “prescriptive technologies”—ones that isolate tasks, such as factory-style work—find their way into our social infrastructures and create modes of compliance and orthodoxy. These technologies facilitate top-down control.

Franklin proposes that to better understand issues in the "real world of technology," one must think not only in terms of economics but of justice too. In doing so, we can transcend the "barriers that technology puts up against reciprocity and human contact." Although she was speaking nearly 30 years ago, her concerns about technology map prophetically onto current apprehensions about AI. "You see, if somebody robs a store, it's a crime, and the state is all set and ready to nab the criminal. But if somebody steals from the commons and from the future, it's seen as entrepreneurial activity and the state gives them tax concessions…. We badly need an expanded concept of justice and fairness that takes mortgaging the future into account." Shifting thinking about AI from profits to principles, as Franklin recommends, would allow for the possibility of change from the bottom up.

To think about AI in terms of justice, academics from all disciplines ought to nurture the public's interest in the ethics of AI. Interdisciplinary groups of thinkers such as Data & Society and AI Now offer a model: they produce fascinating, valuable guides to ethics and inclusion in this field. In AI Now's 2017 report, the organization explores such crucial questions as "What happens to workers after their jobs have been automated?," "What effects will these systems have on vulnerable individuals and minorities?," and "How will AI systems be used by law enforcement or national security agencies?"

Explanations of the likely costs and benefits of AI should also be made accessible and relevant to community issues. Informed citizens, for instance, should be aware of the connections between AI advancements and changes in the farming industry. Hands Free Hectare in the UK completed a fully automated harvest this year. When that technology becomes widely adopted, it could increase productivity but eliminate a staggering number of jobs. Personalized medicine and developments in diagnostics can improve treatments for diabetes and cancer, but patient datasets are also being sold, often without patients' knowledge, by powerful health-data brokers in a billion-dollar industry. Advancements in AI can help combat forest fires and climate change, but society cannot take full advantage if the information is kept proprietary.

With all of this in mind, we must make AI a prominent issue on the campaign trail and demand more from our policy-makers. Prior to the Trump administration, advancements in AI and machine learning were beginning to be taken seriously in Washington. But at present, no one appears to be incorporating the ethics of AI into government strategy. It is much easier to blame overseas workers than to confront the inevitable restructuring of our society.

"The data shows that you are more likely to lose your job to mechanization than to a Mexican," said Alec Ross, formerly the senior adviser for innovation under Secretary of State Hillary Clinton. When I spoke with Ross, who is now running for governor of Maryland, about generating public interest in AI, he pointed out that "the most important stakeholders of artificial intelligence are the ones least likely to understand it."

One tenet of Ross’s campaign is the idea that “talent is everywhere, but opportunity is not.” He is an advocate of tech literacy beginning at a young age and of integrating it into public-school systems. “The lower the level of education you have, the more likely your job will be impacted by AI,” he said.

The Ross campaign is a harbinger of the role technology will play in politics. Earlier this year, Tom Perriello, who had endorsed the idea that robots should be taxed, lost the Democratic nomination for governor of Virginia. Following his defeat, Perriello told The New Yorker: “The single biggest thing that I took away from this campaign is that whichever party ends up figuring out how to speak about two economic issues—automation and monopoly—will not only be doing right by the country but will have a massive electoral advantage.”

Educating the public on the impacts of AI is an important challenge. Steps can be taken to bring the necessary questions out of the margins and into diverse corners of society. The objective must be to create wide-ranging interest in, and at least a basic orientation to, the societal, ethical, and economic effects of artificial intelligence. To restrict this discussion to exclusive silos is to squander the opportunity to build a better, more inclusive world.

Katharine Dempsey is an editor and writer based in Montreal, Quebec, who explores the intersection of advancing technologies, society, and culture.

