Following President Trump’s calls for “extreme vetting” of immigrants from seven Muslim-majority countries, then–Secretary of Homeland Security John F. Kelly hinted that he wanted full access to visa applicants’ social-media profiles. “We may want to get on their social media, with passwords. It’s very hard to truly vet these people in these countries, the seven countries,” Kelly told the House Homeland Security Committee, adding, “If they don’t cooperate, they can go back.”
Such a proposal, if implemented, would expand the department’s secretive social media–monitoring capacities. And as the Department of Homeland Security moves toward grabbing more social-media data from foreigners, such information may be increasingly interpreted and emotionally characterized by sophisticated data-mining programs. What should be constitutionally protected speech could now hinder the mobility of travelers because of a secretive regime that subjects a person’s online words to experimental “emotion analysis.” According to audio leaked to The Nation, the Department of Homeland Security is currently building up data sets with social media–profile information that are searchable by “tone.”
At an industry conference in January, Michael Potts, then–Deputy Under Secretary for Enterprise and Mission Support at the DHS’s Office of Intelligence and Analysis, told audience members that the DHS’s unclassified-data environment today has four data sets that are “searchable by tone,” with plans for 20 more such data sets before the end of this year. This data environment, known as Neptune, includes data from US Customs and Border Protection’s Electronic System for Travel Authorization database, which currently retains publicly available social media–account data from immigrants and travelers participating in the Visa Waiver Program.
According to Potts, whose office has been charged with designing the president’s “extreme vetting” program, these search capabilities are being built for numerous Department of Homeland Security agencies’ data streams: “What we’re trying to do with a project that within the department we call ‘The Data Framework’ is to break down those stovepipes by taking near real-time feeds by the various systems…processing that information appropriately, tagging it, making sure it’s replicating what [is] in that CBP system, Coast Guard system, or ICE system, and then making it available in the data lake so that we can put search and other analytic tools on top of that data.”
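Potts did not describe the underlying machinery, but the pattern he outlines (near real-time feeds, tagging on ingest, a shared searchable “data lake”) is a standard one in data engineering. Below is a minimal sketch of that pattern in Python; the tagging rule, class names, and sample records are all hypothetical and do not reflect any actual DHS system.

```python
from dataclasses import dataclass

# Hypothetical sketch of the "Data Framework" pattern Potts describes:
# ingest records from separate agency feeds, tag each one on the way in
# (here, with a crude tone label), and land everything in one store
# that can be searched on the tag. Names and logic are illustrative.

@dataclass
class Record:
    source: str       # e.g. "CBP", "Coast Guard", "ICE"
    subject_id: str
    text: str
    tone: str = ""    # filled in during ingest

def tag_tone(text: str) -> str:
    """Toy stand-in for a real tone classifier."""
    hostile_words = {"destroy", "attack", "hate"}
    return "hostile" if hostile_words & set(text.lower().split()) else "neutral"

class DataLake:
    def __init__(self) -> None:
        self._records: list[Record] = []

    def ingest(self, record: Record) -> None:
        record.tone = tag_tone(record.text)  # tag at ingest time
        self._records.append(record)

    def search_by_tone(self, tone: str) -> list[Record]:
        # The "searchable by tone" capability: filter on the stored tag.
        return [r for r in self._records if r.tone == tone]

lake = DataLake()
lake.ingest(Record("CBP", "A123", "going to destroy America this week"))
lake.ingest(Record("ICE", "B456", "visiting family in Madrid next month"))
print([r.subject_id for r in lake.search_by_tone("hostile")])  # ['A123']
```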
The collection of social-media data in data sets that can be searched by tone suggests that authorities could be turning to the emerging fields of tone and emotional analysis to vet immigrants and travelers. Such processes use natural language–processing algorithms to identify emotional values in large amounts of text data. It is unclear which programs immigration authorities would be using for this aspect of social-media monitoring, but one former Customs and Border Protection agent told The Nation that the agency uses a mix of third-party and in-house software tools for data-mining projects. Neither Potts nor the Department of Homeland Security responded to The Nation’s numerous requests for comment for this article.
The push to mine immigrants’ social data began well before Trump’s election, on the heels of the terror attack in December 2015, when Tashfeen Malik and her husband, Syed Rizwan Farook, shot and killed 14 people and wounded nearly two dozen others in San Bernardino, California. According to the FBI, the two had previously shared private social-media messages sympathetic to violent jihad and martyrdom, which the DHS failed to uncover when Malik was screened through the K-1 visa program for foreign-citizen fiancé(e)s of US citizens. Shortly after the attacks, the DHS created a Social Media Task Force—led by the department’s Office of Intelligence and Analysis—that initiated a pilot program in social-media analytics for K-1 visa applicants and Syrian refugees, and also solicited information from private vendors offering open-source and social-media analytical capabilities.
The department reviewed hundreds of tools pertinent to vetting foreigners and conducting criminal investigations, and by September 2016 it was using social media for “30 different operational and investigative purposes within the department,” according to then-Secretary Jeh Johnson, who also noted that the department’s research-and-development division was leveraging billions of dollars in private-sector investment in social-media analytics.
But such social media–monitoring efforts have drawn criticism even from within the government. In February, the DHS Office of Inspector General issued a memorandum questioning DHS social media–screening practices, including a lack of unified criteria for measuring whether they actually keep out national-security threats.
According to the report, US Citizenship and Immigration Services and Immigration and Customs Enforcement have piloted social media–screening tools for vetting refugees and certain nonimmigrant visa applicants, respectively, that “lack criteria for measuring performance to ensure they meet their objectives.” And beyond these standards issues, the report also noted that components like USCIS have found automated social media–screening tools technically lacking. For example, USCIS used a social media–screening platform developed by the Department of Defense’s Defense Advanced Research Projects Agency (DARPA) to screen certain foreigners, but concluded “that the tool [is] not a viable option for automated social media screening” because social-media accounts found by the automated program did not always belong to the individuals being scrutinized.
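The inspector general’s complaint that surfaced accounts “did not always belong to the individuals being scrutinized” points to a well-known record-linkage problem: tying a person to an online handle from thin signals like a display name is guesswork. A toy sketch, with entirely made-up data, shows how naive name matching misfires:

```python
# Illustrative only: naive name-based account matching of the kind that
# can misattribute social-media profiles. All data and the scoring rule
# are made up for this sketch.
from difflib import SequenceMatcher

applicant = {"name": "Omar Hassan", "city": "Detroit"}

candidate_accounts = [
    {"handle": "@omar_hassan1", "display_name": "Omar Hassan", "city": "Cairo"},
    {"handle": "@ohassan22",    "display_name": "Omar Hasan",  "city": "Detroit"},
]

def name_similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# A name-only rule scores the Cairo account as a perfect match, even
# though the Detroit account (a one-letter spelling variant) is the
# more plausible owner. Without corroborating signals, a "match" is a guess.
for account in candidate_accounts:
    score = name_similarity(applicant["name"], account["display_name"])
    print(account["handle"], round(score, 2))
```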
Several industry experts consulted by The Nation felt that the capabilities Potts described pointed to automated tone or emotion analysis. This use of artificial intelligence to discern patterns in text and speech is becoming so popular that it is difficult to keep pace with newer companies providing such services, according to Joanna Bryson, who studies the use of artificial intelligence to understand natural intelligence at the University of Bath. “The fact that [DHS] is going on the record acknowledging their use of this stuff is significant. It’s a program, and it’s a growing program. And it’s worrying, either because they can use your mood to affect your immigration experience or because some charlatan could be selling them stuff that could affect your immigration experience.”
Emotional analysis attempts to discern the particular emotions undergirding sentiments, such as fear, anger, happiness, or sadness, according to Rama Akkiraju, a master inventor at the IBM Almaden Research Center; IBM’s Watson computer system is regarded as being at the forefront of the natural language–processing field. Because it targets granular information about human feeling, emotional analysis is more tailored to individual speech and interactions than the more prominently known “sentiment analysis,” which law-enforcement agencies and private companies use to aggregate and gauge the collective feelings of large crowds in broad strokes. Tone analysis seeks to understand the ways people express their emotions to others, primarily through written language. IBM’s Tone Analyzer breaks down written tone into three major areas: emotion, social, and language (i.e., perceived writing style, such as analytical, confident, or tentative). In a blog post, Akkiraju said the most salient emotional tones found by the company’s research included cheerfulness, negative emotions (including fear, disgust, and despair), and anger (marked by hostile intensity and potential for aggressiveness).
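Akkiraju’s description stops short of implementation detail, but a common baseline behind tools of this kind is a lexicon lookup: each word carries precomputed emotion associations, and a text’s tone profile is aggregated from the words it contains. A minimal sketch, with a toy lexicon standing in for a full psycholinguistic word list:

```python
import re
from collections import Counter

# Minimal lexicon-based emotion scoring, a common baseline behind tone
# analyzers. The word-to-emotion table is a toy stand-in for a real
# psycholinguistic resource; production systems are far larger and
# weight words rather than simply counting them.
EMOTION_LEXICON = {
    "afraid": "fear", "scared": "fear",
    "furious": "anger", "hate": "anger",
    "happy": "joy", "glad": "joy",
    "miserable": "sadness", "despair": "sadness",
}

def emotion_profile(text: str) -> dict[str, float]:
    """Return each detected emotion's share of all emotion-word hits."""
    words = re.findall(r"[a-z']+", text.lower())
    hits = Counter(EMOTION_LEXICON[w] for w in words if w in EMOTION_LEXICON)
    total = sum(hits.values()) or 1
    return {emotion: count / total for emotion, count in hits.items()}

print(emotion_profile("I hate waiting, and I'm scared of what comes next"))
# {'anger': 0.5, 'fear': 0.5}
```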
One study that IBM cites in its explanation of the science behind its tone-analyzer service discusses an analytics system, known as PEARL, used to interpret and visualize emotional styles and moods, largely from a person’s Twitter timeline. While it’s unclear whether the system is similar to the one DHS may be using to vet travelers, one of the study’s authors, Dr. Jian Zhao, said PEARL is “one of the top kinds of collaboration projects between IBM and DARPA,” the Defense Department agency whose social media–screening platforms have been tested by US Citizenship and Immigration Services.
The study explicitly suggests that words gleaned from a person’s tweets could be referenced by a customer-care representative to determine what kind of mood a person is normally in, how often or easily they get upset, and the variables that trigger their emotional states. After extracting particular “emotional words” from a user’s tweets, the PEARL system rated the tweets based on their emotional “dimensions” as well as their emotional type (anger-fear, anticipation-surprise, joy-sadness, and trust-disgust). It then segmented the stream of tweets into particular time chunks to gauge a person’s long-term mood.
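The study describes this pipeline only at a high level. A compressed sketch of the same idea, extracting emotion words, scoring each tweet along the four paired axes, and bucketing scores into time windows, might look like the following; the axis names come from the study, while the word table and monthly windowing are illustrative guesses.

```python
from collections import defaultdict
from datetime import datetime

# Sketch of a PEARL-style pipeline as the study describes it: pull
# "emotional words" from tweets, score them on four paired axes, then
# aggregate by time window to estimate longer-term mood. The tiny
# word-to-axis table below is illustrative, not the study's lexicon.
AXES = {
    "furious":   ("anger-fear", +1.0),
    "terrified": ("anger-fear", -1.0),
    "eager":     ("anticipation-surprise", +1.0),
    "stunned":   ("anticipation-surprise", -1.0),
    "delighted": ("joy-sadness", +1.0),
    "heartbroken": ("joy-sadness", -1.0),
    "loyal":     ("trust-disgust", +1.0),
    "revolted":  ("trust-disgust", -1.0),
}

def score_tweet(text: str) -> dict[str, float]:
    scores: dict[str, float] = defaultdict(float)
    for word in text.lower().split():
        if word in AXES:
            axis, value = AXES[word]
            scores[axis] += value
    return dict(scores)

def mood_by_window(tweets: list[tuple[datetime, str]]) -> dict[str, dict[str, float]]:
    # Bucket tweets by calendar month so momentary spikes are smoothed
    # into a longer-term mood estimate, per the study's time segmentation.
    windows: dict[str, dict[str, float]] = defaultdict(lambda: defaultdict(float))
    for when, text in tweets:
        key = when.strftime("%Y-%m")
        for axis, value in score_tweet(text).items():
            windows[key][axis] += value
    return {month: dict(axes) for month, axes in windows.items()}

tweets = [
    (datetime(2017, 3, 2),  "furious about the delays"),
    (datetime(2017, 3, 20), "heartbroken and terrified tonight"),
    (datetime(2017, 4, 5),  "delighted to be home and eager for spring"),
]
print(mood_by_window(tweets))
# {'2017-03': {'anger-fear': 0.0, 'joy-sadness': -1.0},
#  '2017-04': {'joy-sadness': 1.0, 'anticipation-surprise': 1.0}}
```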
According to Zhao, a research scientist at FX Palo Alto Laboratory, the system relies on previous studies that gauged subjects’ emotional reactions to words presented to them in isolation in order to determine which “emotional words” to extract from a person’s tweets. IBM’s public tone analyzer also relies on a psycholinguistic dictionary, called the Linguistic Inquiry and Word Count, as a foundational text for determining the emotionality of particular words. Both Akkiraju and Zhao say the most common applications for these kinds of systems would fall into fields like customer relations, where live emotion estimation could be used when a customer phones into a call center.
While IBM and other companies in the field of emotion and tone analysis claim with increasing certitude that the categories of words expressed through social media can help predict aspects of a person’s personality, such systems are fundamentally biased by the information fed into them, says Jason Baldridge, an artificial-intelligence scientist who has designed sentiment-analysis algorithms for brands. That means a nuanced and historically significant term like “jihad” could be flattened to its contemporary meaning in the context of the war on terror, and the only way to avoid getting flagged as a security risk would be to not use the word at all—in other words, to self-censor.
“These are static systems,” he told The Nation. “They are systems constructed by some person at some point, and the more things look like what they were trained to do, the better they work, but the more they diverge, the worse they do.” As an example, he noted that in most sentiment-analysis lexicons, the term “trump” indicates positive feeling, something that is likely no longer true for a sizable number of Americans.
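Baldridge’s “trump” example is easy to reproduce: a lexicon’s polarity scores are fixed when it is built, so a drifted word keeps its old value indefinitely. A toy illustration, with a made-up frozen lexicon:

```python
# Toy illustration of Baldridge's point: a sentiment lexicon frozen at
# build time keeps scoring "trump" (the verb: to beat, to outrank) as
# positive no matter how usage shifts. All values here are made up.
STATIC_LEXICON = {"trump": +0.8, "great": +0.6, "worse": -0.7}

def sentiment(text: str) -> float:
    return sum(STATIC_LEXICON.get(word, 0.0) for word in text.lower().split())

print(sentiment("nothing can trump this view"))             # 0.8 -> "positive"
print(sentiment("the news gets worse and trump is everywhere"))  # 0.1 -> still "positive"
```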
The degree to which these technologies have been used to study individuals’ digital footprints for national-security purposes is even less well understood. Vlad Teichberg, a Russian-born US citizen who is part of the activist livestreaming collective Global Revolution TV, believes that it would be hard for authorities to filter his protest-focused communications from those of others calling for violence. “People who are advocating for violent acts against police, let’s say, would likely be writing about the same things I am reporting on,” notes Teichberg, “but the difference would be the nuance—we are referring to these themes to call for accountability and transparency, not violence.” Teichberg, who used to travel regularly to Spain, where his wife is from, claims that every time he has landed in the United States in the last year he has been interrogated and asked for his social-media handles. The activist, whose background is in mathematics, worries that the programs DHS could be using to emotionally tag his social-media posts could categorize him as a risk.
When The Nation explained the government’s plans to expand social-media data sets searchable by tone for vetting immigrants, Jennifer Lynch, a staff attorney at the Electronic Frontier Foundation, said it was legally perilous for the government to use such technology when making major decisions about a person’s life, especially when done in secret.
“How do we know that [tone-analysis services] are actually doing what they say they’re doing, and how could we ever know that?” she said. “We don’t have data sets representative of all people who have committed acts of terror on the US, so how do we predict what the future is going to look like? What are the things a person would possibly say on Twitter or social media to indicate they would commit violence against the US?”
Lynch added that because such technology could be used on foreigners, who are often not native English speakers, their choice of words on social media could be mislabeled by a tone-analysis algorithm.
Teichberg echoes this point: “Language is so subjective. Your algorithm would have to be really smart to understand different cultural contexts…. People assume the danger of false negatives, like letting a terrorist into the country, is far greater than that of false positives, but a false positive here means denying people their fundamental rights to travel and be with their families.” Teichberg says his wife has been unable to come to the United States over the last year because of a so-called “administrative processing” hold on her travel visa.
One former Customs and Border Protection agent, who requested anonymity, told The Nation that agents have an obligation to look at whatever is available, and stressed that mining public data alone could provide many valuable insights. “It’s my personal opinion that you don’t need the private data, because the good guys will give up their passwords, and the bad guys would give you their dummy accounts,” said the former agent. “But a lot of people do put out public information that they don’t think is going to be seen.”
Civil-liberties advocates, however, assert that public social-media data can also cause problems, since it is frequently misunderstood, sometimes with major consequences. “When you’re trying to glean information from social media, you aren’t going to understand context of local social norms—detecting sarcasm, for example, is really hard,” says Drew Mitnick, policy counsel at the civil-liberties organization Access Now. “Turning that over to automated systems to draw conclusions that are already that hard for people is worrying, because these conclusions have a huge impact on people’s lives.”
Even humans interpreting social media without proper cultural context can make huge mistakes. According to the Daily Mail, in 2012 immigration authorities barred two British tourists from entering the United States, keeping one handcuffed and in a cell for 12 hours, because of two misinterpreted tweets. Leigh Van Bryan, a 26-year-old bar manager from Coventry, England, was flagged by the Department of Homeland Security as a threat for tweeting, “Free this week, for quick gossip/prep before I go and destroy America?” (“destroy” being British slang for partying). Authorities also questioned Van Bryan about a tweet riffing off a Family Guy bit about “diggin’ Marilyn Monroe up,” even searching his suitcases for spades and shovels.
Despite these concerns, intelligence agencies like the NSA and the CIA have been pushing ahead with using sentiment-analysis techniques. And such institutional knowledge may be helping the DHS get access to even more data and analytical capabilities. Speaking of these four tone-searchable data sets, Potts continued, “We just had a breakthrough with an access regime to NSA to at least some of their technology solutions, which gets us out ahead of the sure expectations of this administration. We can find more data in the IC [intelligence cloud], so we have a lot more work to do.”
Aaron Miguel Cantú is a reporting fellow at Type Investigations.
George Joseph is a reporting fellow at Demos.