Data Privacy Is a Human Right
They say the small bit of data you don’t give away, companies will find a way to take anyway.
“Data privacy”: It’s a phrase that has increasingly bubbled up in news headlines, conversations, podcasts, and interviews. It’s become a death sentence for some companies, a savior for others, and has snuck into our vocabulary as we begin to consider, for the first time ever, the darker side of the spectrum of personalization.
Naturally, I, like many others, have started being more thoughtful about clicking “accept” when modals ask permission to track my behavior. I’ve changed my passwords to ones that are long and weird. I’ve updated my security settings and deactivated Facebook. I use a fake email address for online forms, I’ve opted out of location settings, and I’ve stopped storing my personal information in public places. I’m doing the bare minimum, I’m sure, but I’m getting increasingly freaked out by the barrage of news about people’s personal information being compromised. So I’m taking some sort of action. And, as I’ve started to get a sense of how damaging a breach of privacy can be, I’ve realized that everyone should take action too.
It’s not that hard. Data privacy is a human right. Everyone can take action.
Right?
The reason not everyone can take action to protect their data privacy starts with the growing economic disparity between upper and lower classes, which has further manifested itself as a digital divide. To summarize: lower-income communities rarely enjoy the same access, education, opportunity, and quality of technology that higher-income groups do.
It’s no surprise, then, that people in low-income communities are less aware that data privacy is a real concern, and they may not fully understand the implications it can have on their lives.
In fact, because low-income communities are disproportionately plagued with real-life physical safety threats, they are unsurprisingly left with far less concern for privacy threats, which feel much more theoretical.
This lack of tech literacy becomes especially troubling as the privacy threat to these households becomes increasingly less theoretical. They absolutely should be concerned about it, given that the technology that is more affordable and accessible tends to be more susceptible to privacy threats than the more expensive technology found in higher-income households. According to a brilliant study by the Pew Research Center:
The security risk that comes with a heavy reliance on mobile devices makes these populations more likely to be targeted by cross-device tracking, cell site simulators, and even in-store tracking by retailers. This, combined with a lack of literacy around tracking and data mining, makes members of low-income households prime candidates for having their data bought and sold by third parties.
While mobile devices have made huge strides in accounting for privacy and security, there are clear winners when it comes to what kind of technology is likely to be more secure. iPhones have been crushing the privacy game lately, but given that Android has a roughly 80% market share in some parts of the world (in part because of affordability), low-income families are understandably more likely to use Android devices, which are targeted by hackers far more often.
The communities that are the least equipped to take ownership of their data privacy, understand its impact, and take preventative measures are also the ones with the most to lose from having their privacy compromised.
If a data breach or leak were to happen, low-income communities would be hit hardest due to the amount and type of information required by all kinds of forms, applications, and tests they need to fill out to get access to the programs that allow them to, well, survive: applications for housing assistance, food benefits, and medical aid routinely require Social Security numbers, income histories, and home addresses.
And this is only considering overt attacks to users’ privacy and security. What about the more latent effects of a system that continues to subtly perpetuate structural inequality?
Data collected on these lower-income families is used for ad targeting that quite literally affects their long-term health and wellbeing, which in turn can affect the lives of these families for generations. Tobacco companies and makers of junk food disproportionately target ads to poorer communities. Research shows that “black children and teens viewed 70 percent more food ads than white youth in 2013, and they saw 86 percent more ads than white children and teens by 2017.”
The long-term effect of this kind of ad targeting is not so subtle. As we know, eating a poor-quality diet high in junk food is linked to a variety of health issues. Poor health is exacerbated by low income, and low income is exacerbated by poor health. This cycle of disease, poverty, and inequality is further compounded by a digital divide that has failed to protect these segments of our population from the growing attacks on their data rights, data privacy, and online security.
If we agree that data privacy is a human right, then the GDPR (General Data Protection Regulation), the CCPA (California Consumer Privacy Act), and other pieces of data privacy legislation are steps in the right direction. These laws propose that meaningful consent, data privacy, and data autonomy are rights to be protected and honored, and that data is to be dignified with safekeeping and proper care.
A lot of this consent-management legislation, however, is based on the assumption that users understand the tracking they are opting in and out of. And as we’ve seen, not only is that not always the case, especially for lower-income communities, it’s also harder for them to prioritize it. But it has to be a priority, especially for communities who are more at risk.
The responsibility is on us, as designers and builders of digital tools, to build teams that represent a wide range of populations and backgrounds, and to build tools to serve all people when it comes to privacy and data security. We can’t let parts of the population be deprived of their fundamental right to data protection simply because their circumstances preclude them from the privileges we, as designers, are more likely to enjoy.
As a product designer, I can’t help but see this as a substantial opportunity to think critically about the tools we design and ways we can democratize information, allowing all users to make informed decisions. How can we design for use cases where communities have more to lose than we might possibly understand? How do we design for data privacy with the understanding that not all communities have the means to upgrade to the latest device or navigate complicated security settings? How do we design tools that help companies uphold the rights to privacy and security for all people?
How do we ensure that basic data privacy isn’t a luxury product?
When we’re lucky, this can be as simple as designing good content. As one tweet demonstrates, consent management is good, but well-written consent management is a game changer: “63% of people think ads are fine…but that changes if you explain adtech.” It really makes you wonder what kinds of tracking people would opt in and out of if they truly understood what was happening to their data.
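To make that idea concrete, here is a minimal sketch in TypeScript of what “well-written consent management” could look like as data. All names here (ConsentCategory, renderConsentPrompt, the category list) are hypothetical and illustrative; this assumes no real consent-management library or API. The point is simply that each tracking category carries a plain-language description of what opting in actually does, instead of legalese.

// A minimal, hypothetical sketch: pairing each consent category
// with jargon-free copy that says what opting in actually means.

interface ConsentCategory {
  id: string;
  label: string; // short, plain name shown to the user
  plainLanguage: string; // concrete consequences of opting in
  defaultEnabled: boolean;
}

const categories: ConsentCategory[] = [
  {
    id: "functional",
    label: "Make the site work",
    plainLanguage:
      "Remembers your login and settings. Parts of the site may break without this.",
    defaultEnabled: true,
  },
  {
    id: "advertising",
    label: "Personalized ads",
    plainLanguage:
      "Lets ad networks build a profile of you across this and other sites, and sell access to that profile.",
    defaultEnabled: false,
  },
];

// Render the text a user would actually read in the consent prompt.
function renderConsentPrompt(cats: ConsentCategory[]): string {
  return cats
    .map((c) => `[${c.defaultEnabled ? "x" : " "}] ${c.label}: ${c.plainLanguage}`)
    .join("\n");
}

console.log(renderConsentPrompt(categories));

Nothing about this structure is technically novel; the design choice is that the plain-language copy lives alongside the category itself, so the honest explanation ships everywhere the toggle does.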
This is just the beginning. Exposing these fault lines in data privacy and digital equity allows us to start thinking critically about the people our work supports, and, more critically, those it doesn’t.
At Segment, my team has started thinking more seriously about what investing in data privacy and security looks like through comprehensive user research. One emerging theme in our research, exploration, and iteration thus far has been clear: No one is solving this problem well or comprehensively today.
At least… not yet.
For more nonsense, follow me on Twitter and Medium.