That corporations are people in the eyes of the law has been a great subject of liberal indignation. This legal framework appears to have sanctified corporate money as speech and to be indicative of an era when democracy has been swallowed whole by the “free market” and those who control it. But the foundation for corporate personhood was laid in Dartmouth College v. Woodward, decades before the Civil War, for practical reasons: because a corporation is a collective of people doing business together, its constituent persons should not be deprived of their constitutional rights when they act as such. This decision facilitated the stabilization and expansion of the early American economy; it allowed corporations to sue and be sued, provided a unitary entity for taxation and regulation, and made possible intricate transactions that would otherwise have involved a multitude of shareholders. Only by assigning legal personhood to corporations could judges make reasonable decisions about contracts.
Judges work by analogy all the time. When they seek to answer a new question using the body of existing law, they often look within that corpus for analogies that would allow a similar treatment to be extended to the case at hand. The most baffling regulatory frontier now confronting legislators and judges is technology. When trying to figure out how Google’s self-driving car should be regulated for insurance and liability purposes, for instance, should it be treated like a pet, a child or something else? It is, after all, partially autonomous and partially under the control of its owner. Here, the courts are making up the law as they go along, grasping for pre–Web 2.0 analogies that will allow them to adjudicate sophisticated new threats to privacy. So I’d like to propose one that hasn’t been tried, one that could revolutionize the way companies like Google and Facebook approach privacy: treat programs as people too.
Imagine the following situation: your credit card provider uses a risk-assessment program that monitors your financial activity. Using the information it gathers, and applying a secret, proprietary algorithm, it decides that your purchases are following a “high-risk pattern.” The assessment program, acting on its own, cuts off the use of your credit card, though it is courteous enough to e-mail you a warning. Thereafter, you find that actions that were possible yesterday—like making electronic purchases—no longer are. No humans at the credit card company were involved in this decision; its representative program acted autonomously on the basis of pre-set risk thresholds.
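To make the scenario concrete, here is a minimal sketch, in Python, of how such an autonomous agent might be structured. Everything in it is invented for illustration: the transactions, the scoring rule and the threshold describe no real issuer’s system, only the shape of the decision: data in, threshold crossed, card blocked, e-mail sent, no human in the loop.

```python
from dataclasses import dataclass

# Illustrative only: the scoring rule and threshold are invented, not any issuer's actual model.
RISK_THRESHOLD = 0.6


@dataclass
class Transaction:
    merchant_category: str
    amount: float
    foreign: bool


def risk_score(history: list[Transaction]) -> float:
    """Toy stand-in for a proprietary risk model: treats a high share of
    large or foreign purchases as a 'high-risk pattern'."""
    if not history:
        return 0.0
    risky = sum(1 for t in history if t.amount > 500 or t.foreign)
    return risky / len(history)


def block_card() -> None:
    print("[agent] card blocked")  # stand-in for the issuer's internal card-control API


def send_email(message: str) -> None:
    print(f"[agent] e-mail sent: {message}")  # the courteous warning, sent after the fact


def review_account(history: list[Transaction]) -> str:
    """The 'agent' acts autonomously: no human reviews or approves the decision."""
    if risk_score(history) >= RISK_THRESHOLD:
        block_card()  # actions possible yesterday no longer are
        send_email("Your card has been suspended due to unusual activity.")
        return "blocked"
    return "ok"


if __name__ == "__main__":
    # Invented purchase history; with it the score crosses the threshold and the card is blocked.
    recent = [
        Transaction("electronics", 899.0, True),
        Transaction("jewelry", 1250.0, True),
        Transaction("groceries", 42.0, False),
    ]
    print(review_account(recent))
```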
We interact with the world through programs all the time. Very often, the programs we use interact with a network of unseen programs so that their ends, and ours, may be realized. And we are constantly acted on by programs. These entities are not just agents in the sense of being able to take actions; they are also agents in the representative sense, taking autonomous actions on our behalf.
Now consider the following: in 1995, following the detection of hacker attacks routed through Harvard University’s e-mail system, federal prosecutors—looking for network packets meeting specific criteria—obtained a wiretap order to inspect all e-mails on Harvard’s intranet. In defending this reading of millions of personal messages, the US attorney claimed there was no violation of Harvard users’ privacy, because the scanning had been carried out by programs. A few years later, when asked whether it was reading users’ e-mail in Gmail, Google argued there was no breach of privacy because humans were not doing the reading. And in 2013, the National Security Agency reassured US citizens that their privacy had not been violated, because programs—mere instantiations of abstract algorithms—had carried out its wholesale snooping and eavesdropping.
There is a pattern visible in these defenses of invasive surveillance and monitoring: if you want to violate Internet users’ privacy, get programs—artificial, not human, agents—to do your dirty work, then wash your hands of the matter by invoking the Google defense: there was no invasion of privacy because no human accessed your personal details. Thus, the raw, untutored intuition that “programs don’t know what your e-mail is about” allows corporate and governmental actors to put up an automation screen between themselves and culpability for the privacy violations in which they routinely engage.
We should take a closer look at our intuition that only humans can know things. Programs take actions; they can use and employ information—they can know something—to further corporate and governmental interests contrary to our own. Humans don’t perform the intrusive scanning carried out by the NSA or Google, but they do make use of its results: Google collects user information and uses it to increase advertising revenue; for the US government, the scanning programs enable criminal prosecutions. Google acknowledges that if Gmail data were forwarded to third parties without appropriate consent, a violation of privacy would occur, and thus recognizes that automation is not a defense against all charges. Still, the common notion that artificial agents cannot “know” anything lets them serve as a convenient surrogate for intrusion.
The problem with devising legal protections against privacy violations by artificial agents is not that current statutory regimes are weak. Rather, they have not been interpreted in a way that will protect users’ privacy from today’s sophisticated programs. Privacy laws should apply to the surveillance done by Google and the National Security Agency, and indeed, if the actions of the programs were simply treated as the actions of the corporate and governmental actors who deploy them, it would be clear that extant privacy laws are being violated. Many statutes and constitutional protections currently on the books—like the often-amended and supplemented Electronic Communications Privacy Act, the Wiretap Act and, of course, the Fourth Amendment—seem to provide us and our electronic communications with sufficient privacy. (The Supreme Court’s so-called “third-party doctrine” does not; e-mails stored with Internet service providers are not afforded the same protection as e-mails in transit or stored on our machines.) But automation somehow deludes some people—besides Internet users, many state and federal judges—into imagining our privacy has not been violated. We are vaguely aware something is wrong, but are not quite sure what. The programs deployed by corporate and governmental actors are used as humans would be, but are not regarded as such by the law or by those subject to their use.
I suggest we fit artificial agents like smart programs into a specific area of law—one a little different from that which makes corporations people, but in a similar spirit of rational regulation. We should consider programs to be the legal agents of their corporate or governmental principals, capable, like humans, of acquiring information and knowledge. The Google defense—your privacy was not violated because humans didn’t read your e-mail—would be especially untenable if Google’s and the NSA’s programs were regarded as their legal agents: under agency law and its central doctrine of respondeat superior (let the master answer), the agents’ activities and knowledge would become those of their legal principals and could not be disowned; the automation shield between the principal (the corporation or the government) and the agent (the program) would be removed.
Changing the status of programs to legal agents would produce two salutary effects. First, it would recognize that programs take actions, that they are not inert. It is not a problem if my data simply sits on a corporation’s hard drive, merely read by a machine; the problem is that programs do things with our data. As the credit card risk-assessment example shows, programs can take actions directed against us, actions that can block our ends from being realized. (In the credit card example, no human ever interacted with us.) Second, it would clarify the program’s legal relationship with us, drawing a reasonable analogy with human agents and drawing on an established body of law that protects our rights.
Currently, programs—no matter how “smart” they might be, and no matter how they are used by human or organizational users—are regarded by the law as things, not as legal subjects or legal persons. But legal scholars have debated their status for years, ever since programs began to be used in e-commerce. In particular, they have wondered how programs used in contracting scenarios should be understood; the common understanding now is that things—like programs—cannot enter into contracts because they lack the intention to do so. But some legal theorists, including myself, have argued that artificial agents—like website shopping agents or high-speed currency traders—can and should be understood, on legal, philosophical and economic grounds, not as mere objects or tools but as legal agents capable of entering into, and completing, contracts.
There is a straightforward legal precedent for such treatment: the aforementioned corporate personhood. Corporations, too, are attributed knowledge and are said to take actions. They are understood to have intentions and are allowed to enter into contracts; the change in status suggested here for programs and artificial agents would mean that they would be treated similarly. It is important to remember—even in this post–Citizens United era—that the change in the legal status of corporations was made to facilitate commerce, to allow a form of limited liability for humans engaging in business. That change, arguably, protected our interests too.
Now, human legal agents incur liabilities for their principals on the basis of their actions and their knowledge: if a doctor knows a patient is ill and does not treat him, his employer—a hospital, say—is liable for this act of knowing negligence. A program knows something if it shows it can use the information, just as a human demonstrates knowledge of a safe’s combination by opening the lock. Treating smart programs as the legal agents of their deployers and users would allow the attribution of their actions and their knowledge to their principals.
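The functional sense of “knowing” at work here can be put in programmatic terms. The toy safe below is invented purely for illustration, but it captures the point: the program demonstrates knowledge of the combination the same way a person would, by using it.

```python
# Toy illustration of "knowledge as use": the safe and its combination are invented.
class Safe:
    def __init__(self, combination: str):
        self._combination = combination
        self.is_open = False

    def try_open(self, attempt: str) -> bool:
        """The lock opens only when the correct combination is used."""
        self.is_open = (attempt == self._combination)
        return self.is_open


def agent_opens(safe: Safe, combination: str) -> bool:
    """The program shows that it 'knows' the combination the same way a
    human would: not by merely storing the string, but by using it to
    open the lock."""
    return safe.try_open(combination)


if __name__ == "__main__":
    print(agent_opens(Safe("31-17-42"), "31-17-42"))  # True: knowledge demonstrated through use
```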
Imagine the sea change that this legal interpretation would produce in our most hotly debated privacy issues. Viewing programs engaged in monitoring or surveillance as legally recognized “knowers” of personal information on behalf of their principals—as legal agents capable of knowledge acquisition—denies the legitimacy of the Google defense. Consider the US Wiretap Act, which criminalizes the intentional interception of electronic communications and the subsequent use or disclosure of their contents. If an artificial legal agent capable of acquiring knowledge engages in these acts, its principal is also in violation of the statute. Google’s deployment of AdSense in Gmail would violate the Wiretap Act because the scanning of each e-mail by its artificial agents constitutes an “interception” for the purposes of the act; the agent’s knowledge of the contents of the e-mail would be attributed to Google, whose use of that content would be a use by a legal person—the corporation—that knows the information was obtained illegally. In 2004, the Electronic Privacy Information Center (EPIC) argued to the California attorney general that Google was in breach of California Penal Code §631(a) by “reading or attempting to read or learning the contents or meaning” of Gmail messages. EPIC had a perfectly good case to make against Google; viewing Gmail’s AdSense as Google’s legal agent, whose knowledge is attributed to its corporate principal, would only have strengthened it. Similar considerations, obviously, would apply to the NSA’s notoriously broad collection of Americans’ communications.
Another key practice would be disrupted by program personhood. Internet service providers (ISPs) currently use “sniffer programs” to carry out deep content inspection of the information that passes through their networks, the better to monetize it. This is like the Post Office opening our credit card bills to see where we shop and dine, then putting coupons for stores and restaurants in the mailbox. If we treat content-inspection tools as legal agents of the ISPs, then as the programs acquire the contents of e-mails, that material becomes part of the knowledge attributed to the ISP—which would violate the US Wiretap Act and similar statutes.
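For illustration only, here is a rough sketch of what such a sniffer program does in spirit. The keyword rules and the intercepted message are invented, not any ISP’s actual system, but the essential step is the same: the program must read and act on the contents of the communication to do its job, and that is precisely the knowledge that agency law would attribute to the ISP.

```python
# Illustrative sketch only: invented traffic and keyword rules, not any ISP's actual system.
COUPON_RULES = {
    "restaurant": "20% off at local restaurants",
    "flight": "discount airfare offers",
    "mortgage": "refinancing offers",
}


def inspect_payload(payload: str) -> list[str]:
    """The 'sniffer' reads the message body and returns the offers to target.
    It cannot do this without acquiring the contents of the communication."""
    text = payload.lower()
    return [offer for keyword, offer in COUPON_RULES.items() if keyword in text]


if __name__ == "__main__":
    intercepted = "Shall we try that new restaurant on Friday before the flight?"
    print(inspect_payload(intercepted))
    # ['20% off at local restaurants', 'discount airfare offers']
```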
Current law, by treating programs as things—dumb tools equivalent to toasters and knives—creates an ambiguity: it lets users deploy them for smart activities like “reading” and acquiring “knowledge,” while letting those same users defend them as dumb entities when challenged. Corporations and governments get to have their cake—use smart programs—and eat it too, by denying their smartness; we, whose personal data is “read” and “known” by programs, find ourselves with no defense, conceptual or legal. Meanwhile, we are subject to increasingly targeted marketing based on our personal conversations, and our data remains endlessly vulnerable to surveillance, just so long as a layer of automation exists between the watcher and the watched. But by treating these programs in a manner that acknowledges their capacities, their use and their relationship to those who use them—by treating an artificial agent as a legal subject rather than an inanimate object—we categorize these entities more appropriately and better protect our privacy rights. If packet-sniffing and e-mail-reading programs were treated as legal agents of those who use them, most of the scanning taking place today would have to confront legal arguments indicting it as a privacy violation under extant law.
One small step for law; one giant leap for privacy rights and common sense.