Facebook Needs to Kill Microtargeted Ads Now
A recent civil rights settlement is just the first step to finally forcing Facebook to respect users’ privacy and take responsibility for the ways its customers leverage private data.
By Max Eddy
A hallmark of the last few years has been seeing Facebook hauled before the powers that be and grilled for its failings. The results are sometimes mixed and have too often given us an opportunity to see US officials failing to ask the right questions of one of the most powerful companies on the planet. The cringe-worthy moments when Facebook’s founder Mark Zuckerberg had to explain to a room full of senators how Facebook works won’t soon be forgotten.
Despite that, there’s been a sense that a reckoning is coming for the social network as it fumbles again and again on issues of privacy. March 20, 2019, was the day Facebook settled a lawsuit with civil rights groups and agreed to change how its ads work—at least, in a very narrow sense. It could be the biggest moment to date in the chastening of Facebook.
The case centered on ads for housing, credit-related products, and employment. Using Facebook’s platform, companies could exclude whole groups they deemed undesirable—or, conversely, target particular groups with especially predatory products. That’s because Facebook, like many other companies, deals in data and in finding ways to put specific ads in front of specific people.
All the information you explicitly share, and some that Facebook divines from what you post, is used to place you into various boxes. Do you have children? Are you rich? What’s your ethnic background? Facebook gathers all this information up and then sells companies the ability to target you with specific ads. This can be as innocuous as trying to sell me rat food, for my wonderful pet rats, or as insidious as shuttling predatory financial schemes to marginalized communities.
If you feel like that’s a violation of privacy, you’re not wrong. It’s also one of the very few times when a tech company’s decision to profit from users’ data has been met not only with resistance but also with real consequences.
The outcome is frankly surprising to me as someone who has watched Facebook remain unrepentant in its violations of the public trust: The company agreed to change its ways. In its settlement with the ACLU and other civil rights organizations, the company agreed to limit what advertisers in these product categories can do, how their ads are targeted, and what advertisers can learn about users. Facebook even says it’s creating a tool that will let those targeted by ads in these categories demystify the process: You will be able to see housing, credit, and employment ads targeted at people other than yourself.
A week after this initial announcement, Facebook also said it would “ban praise, support, and representation of white nationalism and white separatism on Facebook and Instagram,” another surprising move for the company. Around the same time, the US Department of Housing and Urban Development (HUD) charged Facebook over its discriminatory advertising practices.
This is all good, and I’m glad to see it. Too many new technologies that could bring people together and create a fairer world aren’t being used that way. Facebook might keep you in touch with your family, but it’s also bringing in enormous profits by harvesting your data. A bigger example is how AI and machine learning, which should be used to shore up shortcomings in human decision-making, can end up reinforcing our own biases. How? Because a computer is only as good as its inputs: garbage in, garbage out. We pour all our biases into these systems and then find ourselves pleased when the results match our expectations. Don’t believe me? Ask Tay.
What struck me about the Facebook settlement was one particular sentence: “Anyone who wants to run housing, employment or credit ads will no longer be allowed to target by age, gender, or zip code.”
Again, great. It’s the kind of thing privacy advocates and civil rights advocates want. But why stop there? A lot has been written about how microtargeting through social platforms was a major part of the Russian influence campaign that muddled the 2016 US election. We now know that Russian trolls used the same tools in Facebook to target specific groups for misinformation campaigns. Fake Black Lives Matter events were created; phony pro-Trump groups, too.
All of this was possible because Facebook — and Twitter, and others — built a business around getting an ad where advertisers want it. When you hear security fanatics like me screeching about privacy and how “if the product is free, then you’re the product,” this is what we mean. This is how companies turn you, and your online activity, into money.
The ubiquity of this technology belies its strangeness. It has also driven down the price enormously. At the RSA security conference, a presenter estimated that it would cost just $77 to target all the individuals needed to swing the vote from one party to another in Michigan, based on 2016 election data. In Pennsylvania, where the margin between the parties in 2016 was in the hundreds of thousands of votes, the price tag was still only $250,000. That’s well within the means of a nation-state bent on sowing discontent in an election, and it’s built on the privacy-invading marketplace of products that has become enormously valuable in recent years. It’s the direct consequence of the commoditization of your private data.
If Facebook took the same action on political advertising that it’s taking today, that could go a long way toward preventing another US election filled with foreign misinformation — as well as domestic misinformation. The strategies Russian trolls used simply would not work if the technology for targeting specific individuals wasn’t available to them. Being able to look up and see what other political messages were sent to other people might also help cut through some of the mistrust that has so saturated our politics.
Maybe preventing our legitimate political parties from using this technology would be a good thing, too. Mass communication should bring us together. But these specific messages whispered in the ears of voters have broken us into smaller and smaller factions. Maybe it’s worth making the parties work a little harder to get our votes by holding our privacy a little tighter.
I know Facebook isn’t likely to consider expanding its moratorium on targeted advertising. There’s money to be made, obviously, regardless of what it means for our privacy or for people targeted by predatory advertising. But if it makes financial sense to turn off the tap on the ads Facebook has given up in its settlement, maybe the company could afford to change how its ads work in general. With billions of people on its platform and some of the best minds in Silicon Valley working in its offices, surely Facebook can find a way to make money while respecting users’ privacy and not being complicit in worsening society’s ills.
Originally published at www.pcmag.com.