As artificial intelligence proliferates, so does concern about its use in areas ranging from criminal justice to hiring to insurance. Many people worry that tools built using “big data” will perpetuate or worsen past inequities, or threaten civil liberties. Over the past two years, for instance, Amazon has been aggressively marketing a facial-recognition tool called Rekognition to law-enforcement agencies. The tool, which can identify faces in real time against a database of tens of millions of faces, has raised troubling questions about bias: Researchers at the ACLU and MIT Media Lab, among others, have shown that it is significantly less accurate in identifying darker-skinned women. Equally troubling is the technology’s potential to erode privacy.
Privacy advocates, legislators, and even some tech companies themselves have called for greater regulation of tools like Rekognition. While regulation is certainly important, thinking through the ethical and legal implications of technology shouldn’t happen only after it is created and sold. Designing and implementing algorithms are far from merely technical matters, as projects like Rekognition show. To that end, there’s a growing effort at many universities to better prepare future designers and engineers to consider the urgent questions raised by their products, by incorporating ethical and policy questions into undergraduate computer-science classes.
“The profound consequences of technological innovation…demand that the people who are trained to become technologists have an ethical and social framework for thinking about the implications of the very technologies that they work on,” said Rob Reich, a political scientist and philosopher who is co-teaching a course called “Computers, Ethics, and Public Policy” at Stanford this year.
Coursework on the ethics of technology is not entirely new: It emerged in universities in the 1970s and ’80s, with engineers collaborating with philosophers and others to develop course materials. ABET, an organization that accredits engineering programs, has required for decades that programs provide students with “an understanding of professional and ethical responsibility.” But how the requirement is carried out varies widely.
Casey Fiesler, a faculty member in the Department of Information Science at the University of Colorado Boulder, said that a common model in engineering programs is a stand-alone ethics class, often taught toward the end of a program. But there’s increasingly a consensus among those teaching tech ethics that a better model is to discuss ethical issues alongside technical work. Evan Peck, a computer scientist at Bucknell University, writes that separating ethical from technical material means that students get practice “debating ethical dilemmas…but don’t get to practice formalizing those values into code.” This is particularly a problem, said Fiesler, if an ethics class is taught by someone from outside a student’s field and the professors in their computer-science courses rarely mention ethical issues. On the other hand, classes focused squarely on the ethics of technology allow students to dig deeply into complicated questions. “I think the best solution is to do both…but if you can’t do both, incorporating [ethics material into regular coursework] is the best option,” Fiesler said.
The new generation of tech-ethics courses covers topics like data privacy, algorithmic bias and accountability, and job automation, often drawing on concrete, real-world cases. For example, some classes consider criminal-justice algorithms, which many jurisdictions across the United States use to predict the chance that someone accused of a crime will be rearrested or fail to appear in court if released pending their court hearings. Pretrial risk-assessment algorithms often recommend that judges detain those with “high” risk scores in jail and release those with “low” risk scores. Proponents of such tools argue that they can help reduce the number of unconvicted people who are held in jail. (Currently that number is nearly half a million on any given day across the country.) But exactly how such algorithms are designed, implemented, and overseen is a major area of concern among advocates and researchers. For instance, by drawing on data like past arrests, these programs can perpetuate the racial skew that is already present in the criminal-justice system. Fixing this issue is far from straightforward: Work by academic researchers suggests that it’s not always easy to determine what algorithmic “fairness” means in the first place.
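To see where the difficulty comes from, consider the toy sketch below (the Python code, the two groups, and their numbers are hypothetical illustrations, not drawn from any real risk-assessment tool or from the courses described in this article). When two groups have different rearrest rates in the historical data, a score can have identical error rates for both groups and still flag one group far more often, so two intuitive definitions of “fairness” pull in opposite directions.

```python
# A deliberately simplified, illustrative sketch: the groups, numbers, and
# metric choices are hypothetical, not taken from any real risk-assessment
# tool or course.

def summarize(records):
    """records: list of (flagged_high_risk, was_rearrested) boolean pairs.
    Returns the overall flag rate and the false-positive rate, i.e. the
    share of people who were NOT rearrested but were flagged anyway."""
    flag_rate = sum(flagged for flagged, _ in records) / len(records)
    not_rearrested = [flagged for flagged, rearrested in records if not rearrested]
    fp_rate = sum(not_rearrested) / len(not_rearrested)
    return flag_rate, fp_rate

# Two hypothetical groups whose recorded rearrest rates differ; that gap may
# itself reflect skewed historical arrest data.
group_a = [(True, True), (False, False), (False, False), (False, False)]
group_b = [(True, True), (True, True), (False, False), (False, False)]

for name, group in [("Group A", group_a), ("Group B", group_b)]:
    flag_rate, fp_rate = summarize(group)
    print(f"{name}: flagged {flag_rate:.0%}, false-positive rate {fp_rate:.0%}")

# Prints:
#   Group A: flagged 25%, false-positive rate 0%
#   Group B: flagged 50%, false-positive rate 0%
# The error rates are identical across groups (one notion of fairness, often
# called "equalized odds"), yet Group B is flagged twice as often (violating
# another notion, "demographic parity"). When recorded base rates differ, no
# informative score can satisfy both definitions at once, which is one reason
# researchers say there is no single formula for algorithmic fairness.
```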
Several new programs are designed to assist computer-science professors and others with technical backgrounds who may not feel prepared to teach on philosophical and policy issues. Embedded EthiCS is an initiative at Harvard University in which philosophy faculty and grad students develop and teach ethics course modules in computer-science classes, in close collaboration with computer scientists. Another initiative, the Responsible Computer Science Challenge, a partnership between Omidyar Network, Mozilla, Schmidt Futures, and Craig Newmark Philanthropies, will award up to $3.5 million in grants for “promising approaches to embedding ethics into undergraduate computer-science education.”
Philosopher Shannon Vallor of Santa Clara University and computer scientist Arvind Narayanan of Princeton University have co-authored modules to embed ethics material in software-engineering classes, and made them available to other educators. Professors at more than 100 universities have requested to use them, according to Vallor. The goal of this kind of embedded ethics is to make such considerations part of students’ routine. “Habits are powerful: Students should be in the habit of considering how the code they write serves the public good, how it might fail or be misused, who will control it; and their teachers should be in the habit of calling these issues to their attention,” Narayanan and Vallor write.
Other academic programs are incorporating tech ethics by offering classes co-taught by faculty members from multiple fields. An example of this approach is this year’s “Computers, Ethics, and Public Policy” course at Stanford University, developed by Reich, computer scientist Mehran Sahami, political scientist Jeremy Weinstein, and research fellow and course manager Hilary Cohen. The course dates back to the late 1980s, but this year’s version, with 300 students, includes faculty members and teaching assistants from a range of disciplines, allowing for deeper dives into ethical, policy, and technical topics. Students are given assignments in three areas: coding exercises, a philosophy paper, and policy memos.
Part of the impetus for developing a new version of the class, Reich said, is the huge popularity of computer science at Stanford in recent years. The class aims both to give engineering students familiarity with ethical and policy questions and to impart a technical understanding of algorithms and other tools to students in non-technical fields. The latter is critical as well, Reich emphasized. Students who go to work in public policy and other fields need to be familiar with technology—as demonstrated by recent congressional hearings at which some legislators seemed unaware of basic facts about Facebook, like how the company makes money.
In approaching philosophical questions, Reich said, the course emphasizes that ethical awareness is an ongoing habit, not a fixed set of rules. “One of the big-picture messages that we want to get across through considering the complicated ethical terrain is that there’s no such thing as having an ethical checklist. [It’s not the case] that after you go through some exercise, you’re done with ethical compliance and you can stop thinking of the ethical dimensions of your work,” Reich said.
Vallor sees ethics education as helping students develop moral awareness. In her book Technology and the Virtues, she argues that to be prepared to navigate the ethical challenges posed by new technology, students need to develop “practical wisdom”—a concept from the work of Aristotle and other philosophers. This involves cultivating a disposition to judge and respond wisely, rather than leaning on moral scripts. When a technology we didn’t anticipate comes out five years from now, Vallor said, what will matter is having developed the skills to navigate the ethical issues it raises.
In addition to the movement toward ethical training in universities, researchers are forming new venues for socially and ethically aware research on technology. Solon Barocas, a faculty member in the Department of Information Science at Cornell University, co-founded a group called Fairness, Accountability, and Transparency in Machine Learning in 2014. Over the past two years, the group has held interdisciplinary conferences in which hundreds of people from fields including computer science, social sciences, law, media and information studies, and philosophy—many of them students—gather to discuss tech-related ethical and policy issues.
Just a few years ago, Barocas says, work on these topics was relegated to workshops within larger machine-learning conferences. But now ethical questions have entered the mainstream and are seen as urgent research questions for the field to grapple with. More than ever, Barocas says, students are motivated to combine their interest in social change with their interest in computing. Compared with even a few years ago, when he was in graduate school, there is now a much clearer path for them to do so.
Even with the increased attention and support for the ethics of technology, much work remains to be done. There’s a need, for example, to reach people beyond those involved in higher education. Within the tech industry, Barocas and Vallor said, some companies are looking for ways to engage with the ethical implications of their work, and are hiring people to focus on these issues. (Both have personal experience with this: Barocas is taking a year off from Cornell to work at Microsoft Research, and Vallor is an AI ethicist and visiting researcher at Google.) In collaboration with colleagues at Santa Clara University’s Markkula Center for Applied Ethics, Vallor has also developed tech-ethics modules for companies to use in staff training. And nearly 400 people—many of whom work for tech companies in Silicon Valley—have enrolled in the Ethics of Technological Disruption, a Stanford continuing-studies course, featuring many guest speakers and taught by the same group of faculty that developed the tech-ethics course for undergraduates.
There’s also a need to bridge the gap between the academic and industry conversation about tech ethics and the wider community, particularly those most likely to be impacted by potential bias in tools such as predictive policing and risk-assessment algorithms. While advocates and researchers are doing some work to reach out to impacted communities, Barocas said, “there’s still not nearly enough interaction between the communities that are most deeply affected by these kinds of problems, and the people doing research on them.”
Finally, some experts argue that tech ethics must also consider cases in which technology shouldn’t be emphasized as a solution. In a recent session of the Ethics of Technological Disruption, guest speaker Safiya Umoja Noble brought up the enormous and disproportionate toll that the mortgage crisis took on African-American wealth. Noble, author of the book Algorithms of Oppression and a faculty member at the University of Southern California, said, “We could say that we need to tweak the algorithm. But I think there’s a different set of conversations that we need to have, about the morality of projects that are just everyday business.” As Noble and other researchers like Virginia Eubanks argue, part of the conversation, and part of tech-ethics education, should include thinking through how to put technology in its proper place.
Stephanie Wykstra (@swykstr) is a freelance writer and researcher based in New York.