Nicole Keplinger, 22, had long seen ads on Facebook promising financial relief, but she always ignored them and assumed that they were scams. Keplinger was drowning in student debt after obtaining a worthless degree from the for-profit Everest College, whose parent corporation, Corinthian Colleges Inc., had recently collapsed under accusations of fraud and predatory lending. But when an offer arrived in her e-mail inbox in April—“Cut your student loan payment or even forgive it completely!”—she thought it seemed more legitimate than the rest, so she called the number.
The person on the other end was aggressive. “They wanted my banking information, my Social Security number, my parents’ number and their information. I was like, ‘Wait a minute,’” Keplinger recalled. Even after she said that she lived on a fixed income (on disability due to a kidney transplant), the telemarketer kept up the pressure. “They said I needed to get a credit card. I don’t know if they were going to take money off it or what… but why do I need to get a credit card if I’m trying to reduce my student loans?”
Keplinger lied and said she’d call back, but not everyone gets away. Had she disclosed her bank information, her loans almost certainly would not have been cut or forgiven. At best, she would have been charged a large fee for something she could do herself: enroll in government programs such as forbearance or deferment. At worst, she might have had money debited from her bank account each month without any benefit provided in return, or been ensnared by a “phantom-debt collector”—part of a distressingly common racket that involves telling people they owe phony debts and scaring them into paying. It’s the perfect ploy to attempt on people who have already been preyed upon by unscrupulous outfits like Corinthian and who, having been misled and overcharged, are understandably confused about how much money they owe. At the same time, the fact that Keplinger was e-mailed in addition to seeing ads on Facebook suggests that her information was in the hands of a “lead generator,” part of a multibillion-dollar industry devoted to compiling and selling lists of prospective customers online.
Welcome to a new age of digital redlining. The term conjures up the days when banks would draw a red line around certain areas of a city—typically neighborhoods where blacks, Latinos, Asians, or other minorities lived—to mark the places where they would not lend money, at least not at fair rates. “Just as neighborhoods can serve as a proxy for racial or ethnic identity, there are new worries that big data technologies could be used to ‘digitally redline’ unwanted groups, either as customers, employees, tenants, or recipients of credit,” a 2014 White House report on big data warns.
Thus, rather than overt discrimination, companies can smuggle proxies for race, sex, indebtedness, and so on into big-data sets and then draw correlations and conclusions that have discriminatory effects. For example, Latanya Sweeney, former chief technologist at the Federal Trade Commission, uncovered racial bias on the basis of Google searches: black-identifying names yielded a higher incidence of ads associated with “arrest” than white-identifying names. It’s discrimination committed not by an individual ad buyer, banker, or insurance broker, but by a bot. This is likely what happened to Nicole: Facebook’s huge repository of data has strong indicators of users’ socioeconomic status—where they attend school, where they work, who their friends are, and more—and the company targets them accordingly. In May, Facebook and IBM announced a partnership that will result in the two tech giants combining their vast data troves and analytics in order to achieve “personalization at scale.”
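To see how a proxy can carry a protected trait into an ostensibly neutral model, consider the toy simulation below. It is a minimal sketch with entirely invented numbers and variable names, not a description of Facebook’s, Google’s, or any lender’s actual system: a scoring formula that never sees race, but that leans on a zip-code variable correlated with it, still produces sharply different approval rates.

```python
# Illustrative sketch only: synthetic data showing how a "race-blind" model
# can still discriminate through a correlated proxy (here, a zip-code group).
# All numbers and variable names are hypothetical, not drawn from any real lender.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Protected attribute (never shown to the scoring formula): 0 = group A, 1 = group B.
group = rng.integers(0, 2, n)

# Proxy: because of historical segregation, group B is far more likely to live
# in zip codes the formula associates with past defaults.
high_default_zip = rng.random(n) < np.where(group == 1, 0.8, 0.2)

# Income, identical in distribution for both groups in this toy example.
income = rng.normal(50_000, 15_000, n)

# A "race-blind" score built only from income and zip code.
score = (income / 1_000) - 40 * high_default_zip
approved = score > 10

for g, name in [(0, "group A"), (1, "group B")]:
    print(f"{name}: approval rate = {approved[group == g].mean():.1%}")
# Race never enters the formula, yet approval rates diverge sharply,
# because the zip-code proxy carries the protected attribute into the decision.
```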
The authors of this article saw this firsthand when one of us (Astra) opened a second Facebook account to communicate with Corinthian students: Her newsfeed was overrun by the sorts of offers Nicole sees regularly—in stark contrast to the ads for financial services, such as PayPal and American Express, that she normally gets. Such targeting isn’t obvious to most users, but opening a new profile and associating primarily with students from a school known to target low-income people of color, single mothers, veterans, and other vulnerable groups cracks a window onto another Facebook entirely.
* * *
Longstanding consumer protections should, in principle, apply to the digital landscape. The use of data-driven methods for judging people’s creditworthiness goes back a century. Before the passage of the Fair Credit Reporting Act in 1970, consumer-reporting bureaus would gather everything they could find about people—whether true or fabricated, fair or unfair, relevant or irrelevant—and then provide it to creditors. Your dossier was likely to contain whatever they could get away with collecting, or making up, about you: if you were considered a sexual deviant, a drunk, a troublemaker, or an adulterer, it was all fair game, so long as a creditor was willing to pay for the information. The FCRA was meant to limit these practices by putting an end to the collection of “irrelevant” information and establishing rules for the “permissible” uses of consumer reports. In 1974, Congress passed the Equal Credit Opportunity Act, which added more bite to financial regulations by making it illegal for creditors to discriminate against applicants on the basis of race, religion, national origin, sex, marital status, age, or receipt of public assistance. Cases brought under ECOA have often focused on human bias in credit decisions—think of an African-American woman walking into a lender’s office and receiving unfair rates based on her race or gender.
Of course, the days when creditworthiness was assessed in one-on-one meetings are long gone. Today, lenders, employers, and landlords rely on credit-scoring systems like the widely used FICO score, which take data from an individual’s consumer report and derive a metric of his or her risk. These scores allow for automated decision-making, yet there’s evidence that such systems have not eliminated bias but rather enshrined socioeconomic disparities in a technical process.
Though deeply flawed, credit scores and consumer reports are immensely consequential in many facets of our lives, from obtaining a loan to finding a job to renting a home. The lack of a score—or a lower score than one actually deserves—can mean higher interest rates within the mainstream banking system, or being forced into the arms of check-cashing services and payday lenders. Scores can become “self-fulfilling prophecies, creating the financial distress they claim merely to indicate,” as legal scholars Danielle Citron and Frank Pasquale have observed. The worse your score, the more you’re charged—and the more you’re charged, the harder it is to make your monthly payments, which leaves you ranked even worse the next time around.
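A stylized simulation can make that loop concrete. The sketch below uses invented parameters (interest rates, penalty sizes, and probabilities chosen purely for illustration, not drawn from any real scoring model) to show how two borrowers who differ only in their starting score can drift apart as higher charges and missed payments compound.

```python
# Stylized simulation of the feedback loop Citron and Pasquale describe:
# a lower score means higher charges, which make missed payments more likely,
# which lowers the score further. All parameters here are invented for illustration.
import random

def simulate(start_score: int, years: int = 5, seed: int = 1) -> int:
    random.seed(seed)
    score = start_score
    for _ in range(years):
        # Hypothetical pricing: worse scores pay steeper interest.
        rate = 0.05 + max(0, 700 - score) * 0.0005
        # Higher payments raise the chance of a missed payment.
        p_miss = min(0.9, rate * 4)
        if random.random() < p_miss:
            score -= 60   # a delinquency drags the score down
        else:
            score += 10   # on-time payments rebuild it slowly
        score = max(300, min(850, score))
    return score

print(simulate(720))  # a borrower who starts near prime
print(simulate(580))  # a borrower who starts subprime and tends to sink further
```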
With the sheer quantity of data that can be collected online, FICO scores are just the tip of the iceberg. “Now the system has exploded, where you’ve got all these actors that you don’t actually have a relationship with: network advertisers, data brokers, companies that are vacuuming up information,” says Ed Mierzwinski, consumer-program director at the United States Public Interest Research Group (USPIRG). This information comes from sources both online and off-line: Thousands of data brokers keep tabs on everything from social-media profiles and online searches to public records and retail loyalty cards; they likely know things including (but not limited to) your age, race, gender, and income; who your friends are; whether you’re ill, looking for a job, getting married, having a baby, or trying to buy a home. Today, we all swim in murky waters in which we’re constantly tracked, analyzed, and scored, without knowing what information is being collected about us, how it’s being weighted, or why it matters—much of it as irrelevant and inaccurate as the hearsay assembled during the early days of consumer reporting.
Making things even more muddled, the boundary between traditional credit scoring and marketing has blurred. The big credit bureaus have long had sidelines selling marketing lists, but now various companies, including credit bureaus, create and sell “consumer evaluation,” “buying power,” and “marketing” scores, which are ingeniously devised to evade the FCRA (a 2011 presentation by FICO and Equifax’s IXI Services was titled “Enhancing Your Marketing Effectiveness and Decisions With Non-Regulated Data”). The algorithms behind these scores are designed to predict spending and whether prospective customers will be moneymakers or money-losers. Proponents claim that the scores simply facilitate advertising, and that they’re not used to approve individuals for credit offers or any other action that would trigger the FCRA. This leaves those of us who are scored with no rights or recourse. While federal law limits the use of traditional credit scores and dictates that people must be notified when an adverse decision is made about them, the law does not cover the new digital evaluation systems: You are not legally entitled to see your marketing score, let alone ensure its accuracy.
The opacity and unaccountability of online consumer-credit marketing hurt more than the individuals who get a bad deal or a dubious financial offer; there is evidence that these personalized predatory practices played a role in the subprime-mortgage bubble and subsequent financial crisis. “From 2005 to 2007, the height of the boom in the United States, mortgage and financial-services companies were among the top spenders for online ads,” write Mierzwinski and Jeff Chester in a scholarly article on digital decision-making and the FCRA. Companies like Google, Yahoo, Facebook, and Bing make billions a year from online financial marketing. Lead generation specifically “played a critical, but largely invisible, role in the recent subprime-mortgage debacle.” Since the crash in 2008, the capabilities for tracking and targeting have only become more sophisticated.
Proof of discrimination in online microtargeting is notoriously hard to come by. How can you tell if you’re being targeted with an advertisement for an inferior or bogus financial product because data brokers have deemed you part of the “rural and barely making it,” “probably bipolar,” or “gullible elderly” market segments? Or if you’re being offered a jacked-up interest or insurance rate based on your race, gender, neighborhood, or health condition? Or whether you’re receiving offers for a subprime financial product because a marketing score flags you as a risk, or you’ve been caught in a lead generator’s snare?
“Measurement is an enormous challenge,” says Aaron Rieke of Upturn, a technology-policy-and-law consulting firm. “You see one ad and I see another. It’s often impossible for a researcher on the outside to find out why. Maybe an ad buyer ran out its budget. Maybe we were profiled differently. Maybe the ads were geotargeted.”
To date, the best indication of potentially discriminatory practices is the marketing literature and public boasting that companies produce themselves. A recent white paper, “Civil Rights, Big Data, and Our Algorithmic Future,” which Rieke helped write, contains a prime example: “At an annual conference of actuaries, consultants from Deloitte explained that they can now use thousands of ‘non-traditional’ third party data sources, such as consumer buying history, to predict a life insurance applicant’s health status with an accuracy comparable to a medical exam.” The company does this partly by incorporating the health of an applicant’s neighbors.
* * *
As many as 70 million Americans do not have a credit score, or have low scores due to “sparse” or “thin” files. A variety of start-ups are trying to exploit this situation under the banner of magnanimously extending credit to individuals disadvantaged by the traditional financial model. They do this by bulking up consumer credit files—crunching large amounts of data and feeding it into proprietary scoring formulas. The data comes from traditional sources (such as credit reports) and from what some experts call “fringe alternative data,” which can include information about shopping habits, web and social-media usage, government records, music tastes, location, and just about anything else. These big-data-fueled techniques are supposed to be the ingredients needed to “disrupt” the traditional business of consumer finance and “innovate” new types of products and services.
ZestFinance, which declined to be interviewed or comment for this article, leads the pack with a troubling motto emblazoned on its website: “All data is credit data.” LendUp, an online lender that specializes in short-term, small-dollar, high-interest credit—like the kind offered by payday lenders, pawnbrokers, and title lenders—targets people without access to other forms of credit who need fast cash to make ends meet. On the other side of the socioeconomic spectrum, Earnest—another venture-capital-backed lending start-up—proclaims that it’s trying to “build the modern bank for the next generation, and the mission is better access to credit to millions of people at earlier ages and at cheaper prices—and we do that using software and data,” according to Louis Beryl, the CEO and founder. The project was born out of Beryl’s difficulty in getting a loan as a Harvard graduate student: Earnest caters to middle-class college graduates and offers them low rates (anywhere from 4.25 to 9.25 percent for personal loans) and personalized customer service.
These start-ups all tell very similar stories—common in Silicon Valley—about using technology to benefit their target population, this time by expanding opportunities for financial inclusion. But given the sky-high rates that some of them charge—the annual interest rate for loans offered by LendUp and other similar big-data lenders ranges from 134 percent to 749 percent—they seem to be little more than high-tech loan sharks. And while traditional loan sharks can only prey on the people who walk through their doors, online creditors and marketers have an enormous (and unsuspecting) population at their fingertips. In myriad ways, these companies represent the ongoing shift of power away from consumers and the erosion of longstanding protections, yet there has been little regulatory scrutiny. “In addition to whether they’re covered by the laws,” says USPIRG’s Mierzwinski, “there is also the question of whether some of their algorithms are trying to evade the laws by creating illegal proxies—and that’s absolutely something that we’re hoping the [Consumer Financial Protection Bureau] can use its supervisory authority to figure out.”
* * *
Officials we spoke to from the CFPB paid tribute to innovation when asked about the potential impact of digital technologies on fair lending. But while there’s no denying that big data, new credit-scoring models, and financial-services start-ups could be beneficial, in theory, for disadvantaged communities, market incentives in practice ensure that stigmatization and exclusion prevail.
Scoring systems are technologies of risk management, and new digital data collection and micro-targeting further shift the risk—and expense—to those who are most vulnerable. For example, compare Earnest and LendUp. The former shows how big data can be used to benefit consumers—but for Earnest, the risk is only worthwhile when dealing with a subset of people whose privilege and financial soundness have yet to be recognized by the mainstream banking system. The latter reveals the more common and exploitative uses of big data to entangle a financially insecure population with few, if any, alternatives available to them.
As the digital revolution unfolds, already limited consumer protections will come under increasing stress. Both the Consumer Financial Protection Bureau and the Federal Trade Commission lean on the FCRA and ECOA, yet plenty of evidence suggests that new, expanded safeguards are needed, given the laws’ limits and loopholes. That said, both the CFPB’s assistant director of fair lending, Patrice Ficklin, and the FTC’s Julie Brill insist that current laws are up to today’s challenges. “Whether they’re a large bank or a small start-up, it is illegal for lenders to discriminate against consumers,” Ficklin says.
No doubt, the laws currently on the books provide crucial protections—but experts and advocates warn that they don’t take into account the disparate impacts of the new technology. For example, it’s not illegal for companies to discriminate based on a potential customer’s or employee’s personal network—the people they know and the interests they share with others online. “The legal fight against discrimination (or, rather, the legal fight for equality) may be a long distance from the fight to safeguard ordinary people—and especially members of historically marginalized groups—from encountering unfairness and injustice due to data-driven discrimination,” says Seeta Peña Gangadharan, a researcher focused on data profiling and inclusion.
Given this fact, more fundamental reforms are needed. Brill, for one, has been extremely vocal about the need for more robust privacy protections in the form of data-broker regulations that curb data collection at the source. “I think that we need to give tools to consumers so that they can control their information used for marketing, to suppress it or correct it if they want,” Brill says. “I want to add, though, that I don’t think consumers can manage all of this on their own. And that’s why there need to be some rules around sensitive information—for instance, health information, information about race, financial status, geolocation. If that’s going to be used, consumers need to be told, and they need to say, ‘Yes, OK, you can use it.’”
In other words, we need to move from an opt-out model, where the default setting makes our private information freely available to thousands of invisible and unaccountable actors, to one that’s opt-in—a move that would inevitably constrict the flow of private data. This is what Brill calls the “right to obscurity,” a right that will become even more essential as more and more everyday devices get networked through the so-called “Internet of things.” (Imagine a future in which your auto insurer collects data from a device in your vehicle; this data, because it isn’t acquired through a third party, isn’t covered by the FCRA, and consumers have no right to access or correct the information.)
Data brokers and marketers, not to mention advertising-dependent tech giants like Google and Facebook, would not be pleased if such legislation came to pass. Lobbying associations like the Consumer Data Industry Association would no doubt spend huge sums to squelch any reforms, invoking their First Amendment right to use the data for whatever purpose they please, and arguing that advertising and scoring are tantamount to speech, and privacy equivalent to censorship. In his book The Black Box Society: The Secret Algorithms That Control Money and Information, Frank Pasquale points out that some lawyers are even using First Amendment cases “as a shield to protect credit rating agencies accused of wrongdoing during the subprime debacle.”
Strong data-broker legislation or, better yet, a baseline, cross-sector privacy law would be an enormously positive (if unlikely) development in the United States. Even so, the frame of privacy/obscurity needs to be expanded. Consider Nicole Keplinger again. When she and other for-profit-college students are targeted by scammers on Facebook, the problem isn’t simply that their privacy has been violated through the collection of personal information. The fact that they are treated as quarry by financial predators raises a deeper issue of fairness. Even in a scenario in which privacy protections are strong, data brokers regulated, and the FCRA and ECOA aggressively enforced, there would be no restrictions against targeting people who are poor. Discriminating based on income is as American as apple pie: Unlike race, religion, sex, or marital status, class is not a protected class.
Right now, many people insist that a combination of digital technology and the free market will solve the problem of financial inclusion, even though it’s a problem caused by the market itself. Perhaps the very concept of “consumer protections”—which inevitably leads to individualized solutions to systemic failures—is part of the problem. Looking back on the role that online consumer-credit marketing played in the 2008 crash, it’s clear that consumer protections are in fact citizen protections. Nothing less than the health of our entire society is at stake.
Astra Taylor is cofounder of the Debt Collective and the director of the documentary films What Is Democracy?, Zizek!, and Examined Life. She has written for The New York Times, The L.A. Times, The Baffler, n+1, and other outlets. She is the author of The People’s Platform: Taking Back Power and Culture in the Digital Age and Democracy May Not Exist, but We'll Miss It When It's Gone (Metropolitan Books).
Jathan Sadowski is a PhD candidate in the human and social dimensions of science and technology at Arizona State University.