The Threat of Artificial Intelligence to POC, Immigrants, and War Zone Civilians
Trigger warning: mentions of violence, racism, and xenophobia. Specific trigger warnings also noted in each section.
This is part of a series on how design and technology perpetuate structural inequality and oppression. I’m calling this exclusionary design, the opposite of inclusive design.
Right now, every big tech company is investing heavily in the development of Artificial Intelligence (AI), and we need to pay attention. Especially from a social justice perspective, understanding the applications of AI is important because it poses a dangerous threat to the livelihoods of immigrants, refugees, people in war zones, and people of color. And yet, as much as it endangers these marginalized communities, it can also be completely liberating and empowering for others, especially for people with disabilities.
I recognize that as soon as I publish this article it’s going to be outdated, because this field is changing incredibly fast. I’m also going to focus more on what I view as problematic applications and relationships — such as facial recognition for the police or automated vetting systems for ICE — and less on the technical aspects of AI. This article includes heavy criticism of law enforcement, immigration officials, the criminal justice prison complex, and the military establishment. So, disclaimer: if you like cops, you probably won’t like this article. This is what we’ll get into:
Like most tech and computer-y things, Artificial Intelligence (AI) seems intimidating. But also like most tech things, it can be boiled down to the essentials and become a simple concept to grasp, even though its development remains complex. I’m not a computer programmer, so I have found A People’s Guide to AI created by Allied Media Projects super helpful in understanding and explaining AI.
AI is essentially the theory and development of computer systems that are able to perform tasks that normally require human intelligence. Basically, computers that can think and act like humans. It doesn’t have to be fancy. I used to think of AI as a far-future concept that would come in the form of a cyborg attempting to seduce me. In reality, a lot of AI resides in our homes, like Alexa or Google Home (which, depending on what you like, might leave you already seduced). Facial recognition has been used by Facebook to help tag photos for years, and recommendation engines drive a lot of the advertising and sales campaigns we see on a regular basis.
This commercially available AI is commonly known as “Narrow AI” because it’s developed for a limited, predetermined set of functions. The Verge talks about this in their video: Why artificial intelligence has no common sense. Even Google Duplex, which creeped out the entire world when we heard it schedule a haircut appointment, sounds very realistic but is not a truly intelligent, autonomous being the way a human is. It is simply programmed to carry out a conversation to book appointments, and for extra flair it throws in a bunch of human-sounding noises like uh, um, and mm-hmm.
I have a confession: I think narrow AI is kind of shitty and I hate using it. I could never complete a task with Siri because she would always listen to what I asked, like ‘look up directions’ or ‘call my dad,’ and then reply, ‘here are Google search results for call my dad.’ Completely useless. But the eventual goal is for AI to be much more advanced and think like humans, and that’s why machine learning and humanizing AI are among the main focuses right now.
Machine learning is a branch of AI in which a computer generates rules and predictions based on the raw data it’s fed. Basically, school for robots. For example, we (the general public) have been helping train Google’s image recognition AI for years, and Facebook uses the billions of photos we upload to train its algorithms. People aren’t customers of social media; we’re factories of raw data for companies to use for machine learning. The real customers are the people willing to pay for that data or AI.
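To make that concrete, here is a minimal sketch of supervised machine learning in Python, using scikit-learn and a small public handwritten-digits dataset. This is not Facebook’s or Google’s actual pipeline (those are far larger and proprietary); it just shows the basic loop: the model is shown labeled examples, infers rules from them, and then predicts labels for images it has never seen.

```python
# A toy illustration of machine learning, not any company's real system:
# the model learns patterns from labeled examples, then makes predictions
# on data it has never seen before.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # small labeled images of handwritten digits

# Split into data the model learns from and data it is tested on.
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)  # "school for robots": learn rules from labeled data

print("accuracy on unseen images:", model.score(X_test, y_test))
```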
There are literally hundreds of companies out there focused on developing AI. Looking at some of the top companies, I became increasingly concerned by a growing trend: they all have some relationship or deal with the military, police, or immigration enforcement. For example:
So this is a problem.
Trigger warning: this section has imagery of violence against black people inflicted by police officers, and racist and anti-Semitic remarks made by an AI chatbot
To get us all on the same page, I’m going to state: systemic racism against Black and Latino people of color (POC) is incredibly prevalent in the way they are policed. It includes but is not limited to the headlines we constantly see: young, unarmed Black men being gunned down and assaulted while the offending police officers are consistently not held accountable for their actions (USA Today). The Washington Post has compiled an extensive list of the ways police and the criminal justice system are inherently racially biased against Black and Latino POC.
Note that this isn’t about the moral standing and ethics of individual police officers, or any individual person. Every person has the capacity to be kind, caring, and do the work to combat their inherent racial bias (which everyone has!). This is about the system of power and violence that is embedded in the history and culture of policing.
TLDR; black lives matter but not to the police state
Now that being said, we can take a critical look at the way companies are developing AI. Initially, the goal sounds great: we want to prevent and solve crime using AI. Yeah of course I want to prevent crime, who doesn’t want to do that? What’s the problem?
To start, the technology is already flawed with a racial bias against dark-skinned and Black POC. Amazon’s facial recognition AI, Rekognition, was publicly introduced in November 2016 and marketed heavily to cops and immigration officials. This faced backlash from within Amazon, with a letter signed by over 450 employees delivered to Jeff Bezos and other executives. Unfortunately that didn’t do much to stop the project. And after a test run of the system in the summer of 2018, the ACLU found inaccuracies that resulted in 28 false matches between members of Congress and the mugshots of people arrested for a crime.
Their methodology was as follows: “Using Rekognition, we built a face database and search tool using 25,000 publicly available arrest photos. Then we searched that database against public photos of every current member of the House and Senate. We used the default match settings that Amazon sets for Rekognition.”
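For context, a sketch of what that kind of workflow looks like against the real Rekognition API (via the boto3 SDK) is below. The bucket names and image keys are placeholders I made up for illustration; the relevant detail is that no FaceMatchThreshold is passed, so Amazon’s default similarity threshold (80 percent) applies, which is the default setting the ACLU says it used.

```python
# A hedged sketch of a Rekognition face-search workflow using the real boto3
# API. Bucket names and image keys are hypothetical placeholders.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")
COLLECTION_ID = "arrest-photo-collection"  # hypothetical collection name

rekognition.create_collection(CollectionId=COLLECTION_ID)

# 1. Index a database of publicly available arrest photos.
for key in ["mugshots/photo_0001.jpg", "mugshots/photo_0002.jpg"]:
    rekognition.index_faces(
        CollectionId=COLLECTION_ID,
        Image={"S3Object": {"Bucket": "example-arrest-photos", "Name": key}},
        ExternalImageId=key.split("/")[-1],
    )

# 2. Search the collection against a public photo of a lawmaker.
# No FaceMatchThreshold is passed, so Amazon's default (80% similarity) applies.
response = rekognition.search_faces_by_image(
    CollectionId=COLLECTION_ID,
    Image={"S3Object": {"Bucket": "example-public-photos",
                        "Name": "member_of_congress.jpg"}},
)

for match in response["FaceMatches"]:
    print(match["Face"]["ExternalImageId"], match["Similarity"])
```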
They also found that these false matches were disproportionately made against POC. As terrifying as this is, it doesn’t surprise me. Despite claims that computers cannot have human bias, computers and algorithms are developed by humans and therefore inherit our biases — in age, gender, race, and more. Congresswoman Alexandria Ocasio-Cortez (AOC) recently spoke out about this and faced significant backlash online, with people insisting it’s impossible for algorithmic programs to be racially biased.
My reaction is: where have people been, if they think algorithms are completely neutral? This article from Nature explains why AI has the potential to be sexist and racist due to aspects of its development, such as skewed data and built-in fixes. And who could forget that when Microsoft unleashed its chatbot Tay on Twitter in 2016, Tay went on an anti-Semitic and anti-Black Twitter rant before being quickly shut down (Gizmodo). That’s what happens when Twitter is your data set for machine learning.
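The “skewed data” problem is easy to demonstrate. Below is a small, contrived Python example (all of the data is synthetic and the two “groups” are my own invention, not anything from the Nature article or a real face dataset): a model is trained mostly on examples from one group, and when it is applied to an underrepresented group whose data looks different, its false-positive rate for that group explodes.

```python
# Synthetic demonstration of how a skewed training set can produce unequal
# error rates across groups. All data here is made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two "features" per example; the true label depends on the group's own
    # baseline (shift), mimicking data whose statistics differ by group.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(0, 0.5, n) > 2 * shift).astype(int)
    return X, y

# Group A dominates the training data; group B is badly underrepresented.
X_a, y_a = make_group(5000, shift=0.0)
X_b, y_b = make_group(250, shift=1.5)
X_train = np.vstack([X_a, X_b])
y_train = np.concatenate([y_a, y_b])

model = LogisticRegression().fit(X_train, y_train)

# False-positive rate = how often someone is wrongly "matched", per group.
for name, (X_g, y_g) in {"A (well represented)": (X_a, y_a),
                         "B (underrepresented)": (X_b, y_b)}.items():
    pred = model.predict(X_g)
    fpr = ((pred == 1) & (y_g == 0)).sum() / max((y_g == 0).sum(), 1)
    print(f"Group {name}: false-positive rate = {fpr:.1%}")
```

The model fits the dominant group’s patterns and misreads the minority group’s data, so errors are not evenly distributed; that is the same failure mode the ACLU observed with Rekognition.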
Not to be outdone, IBM secretly used footage from NYPD CCTV cameras to develop surveillance technology that could search for people by physical characteristics (The Verge). That alone is a privacy violation against the pedestrians whose footage was captured. And though the NYPD claims it never used the skin tone or race search capabilities for fear of racial profiling claims, a former IBM researcher said the team still developed them, which indicates there was clearly interest in the area.
Apparently, IBM declined to comment on this collaboration to various news outlets. As far as I know, there is no letter or petition coming from within the company either. This lack of transparency and dialogue is highly concerning to me. IBM provided key aid to the Nazi party back in the day, producing punch-card technology that helped the Nazis create a population census and identification system, which ultimately enabled them to funnel people into concentration camps, where they were tortured, worked to death, and gassed (Huff Post). In order to prevent a similar innovation-fueled genocide, I would hope that tech companies are willing to engage in an open dialogue with the public about their work, especially companies with a history like IBM’s.
So our reality is that tech companies have provided, and will probably continue to provide, AI to law enforcement. Taking my personal feelings out of the equation for a sec, I believe this is problematic because these companies are developing flawed systems to help an already flawed system. For example, AI might incorrectly identify objects in video footage, like mistaking talking on the phone for smoking a cigarette, or a woman wearing a burqa for a person wearing a ski mask (IEEE Spectrum). The system is not perfect, but police will assume it is, and they will use this technology to perpetuate the racism that already exists within the criminal justice system.
Instead of pouring more money into the police, let’s fund youth and communities. No Cop Academy is a Chicago-based campaign, supported by over 85 community organizations, that demands the $95 million intended for a police academy go instead to schools and programs that benefit Black and brown youth. Support #NoCopAcademy
Trigger warning: mentions of death and the abuse of children at the hands of border patrol.
When I began writing this, we were still in the middle of a partial government shutdown that forced thousands of federal employees to forgo their paychecks in favor of a giant border wall.
I remember when “Build the Wall” first started; like many others, I thought it was a complete joke. Regardless of your opinion on immigration, I hope we can agree that building a giant cement wall that stretches along the entire US southern border is massively inefficient. Are we really supposed to believe that something that will take years to build and billions of dollars to fund is an effective way to allocate resources? Especially when it’s vulnerable to uneven terrain and underground tunnels. Seriously, every project manager’s worst nightmare.
Because I thought the wall was so ineffective, I relaxed, thinking it would never pose a real threat to anyone’s safety. That is, until I learned that Palmer Luckey, the 25-year-old founder of the virtual reality company Oculus, has devised a scheme for a border wall that uses AI to be several times more effective at a fraction of the cost. (I swear, when I learned about this I almost shit my pants.)
Again, to get us all on the same page with immigration: we are all living on stolen Indigenous land that was taken through the genocide of entire nations of people. Indigenous people were not even granted citizenship on their own land until 1924 (Racial Equity Tools). So already, the concept of gatekeeping America is flawed.
A quick and super fun history lesson: in 1882, Congress passed the Chinese Exclusion Act, which restricted the entry of Chinese laborers for the next ten years out of concern that these “degraded, exotic, dangerous” people would take away jobs. That was the first race-based immigration legislation in America. The Immigration Act of 1924 severely restricted immigrants who were non-Protestant, Mexican, or from Southern and Eastern Europe, while welcoming immigrants from Britain, Ireland, and Northern Europe. In 1986, the Immigration Reform and Control Act criminalized the employment of undocumented workers, making it difficult for Latino workers to find jobs. (Sources: Britannica and Racial Equity Tools)
On top of all of this, the naturalization process is intentionally difficult to complete: it takes on average 5–10 years and thousands of dollars for someone to become an American citizen (ABC News). Not to mention that most natural-born Americans can’t even pass the citizenship test that naturalized citizens are required to pass (Biz Women).
TLDR; Immigration law in the US has been and continues to be structurally racist and xenophobic.
Okay, now I’m going to be honest: I’m losing my mind over the threat of tech companies weaponizing AI for immigration control. It doesn’t really matter what happens with the battle between Trump and Congress, because in many areas we already have a border wall, and it’s going to become more intelligent no matter what publicity stunts distract us. Luckey has founded a company called Anduril Industries, whose Lattice system can detect and identify motion within a 2-mile radius using 32-foot towers packed with radar, communications antennae, and a laser-enhanced camera. This system is being used on the Mexican border as we speak (CNN Business).
The problem with applying this technology to border patrol is the same problem with applying it to policing — there is a huge potential for error, and the consequence of that error is human lives. For the most part with this system, we are talking about endangering Latino POC lives.
Anduril is apparently not the first company to make a bid with Homeland Security. A failed project with Boeing, started in 2006, cost about $7 billion and was canceled for taking too much time and money while remaining completely ineffective (that’s $7 billion of taxpayer money down the drain, by the way). It looks like the failure that came from working with a more “traditional” company, along with the fear that other international powers like Russia and China are developing smarter tools, is pushing US military and immigration officials to work more closely with Silicon Valley.
According to CNN, the Department of Homeland Security has declined to comment on how effective Anduril’s technology is, but I’ll keep an eye out for it. In the meantime, let’s take a look at Microsoft’s contract with Immigration and Customs Enforcement (ICE). Microsoft won a contract in January 2018 that has its cloud computing platform Azure handling sensitive unclassified information for ICE. Specifically, their initial blog post boasts that Azure Government will help ICE modernize its IT and enable it to “process data on edge devices or utilize deep learning capabilities to accelerate facial recognition and identification.”
Last spring, following the zero-tolerance policy implemented at the border, ICE started separating children (some younger than 5) from their families and keeping them at detention centers (NPR). That NPR article also details how various facilities handle the children inhumanely — such as improperly administering psychotropic drugs, not allowing physical affection like hugging (or only under certain circumstances), and keeping them in tightly packed spaces behind chain-link fences. The poor treatment has resulted in the deaths of a toddler, Mariee Juárez; 7-year-old Jakelin Caal Maquin; and 8-year-old Felipe Gomez Alonzo, who happened to die on Christmas Day. According to The Guardian, at least 12 people died at adult detention centers in 2018.
Last summer, a coalition of advocacy groups gathered 300,000 signatures, many of them from Microsoft employees, speaking out against the contract (GeekWire). The company declined to comment on the petition, and the CEO sent out a memo denouncing the treatment of children but saying Microsoft’s work with ICE was limited to “supporting legacy mail, calendar, messaging, and document management workloads” and that the company was not taking part in any activity that separated children from their families. Well, my guess is border patrol officers need to send email too, and Microsoft made that process a whole lot easier. I don’t believe it’s possible for Microsoft to completely remove itself from responsibility in this situation. And to my knowledge and disappointment, their contract with ICE continues today and has not been terminated.
This particular issue is less about the effectiveness of AI and more about the type of alliances and relationships tech companies are building when providing their cloud and AI services. Especially considering how many people in tech are talking about the importance of inclusion, accessibility, and diversity — how can an organization be inclusive while accepting clients who perpetuate structural violence? Something to think about.
Trigger warning: mentions and imagery of death and injury of civilians, including children, by U.S. drone strikes
Okay, let’s lay out the facts: in the past few years the U.S. has carried out hundreds of strikes in Pakistan, Afghanistan, Yemen, and Somalia. These have resulted in at least 8,400 deaths, including nearly 800 civilians, around 300 of whom were children (The Bureau of Investigative Journalism). Why are we bombing these countries? The simple answer is that it’s a series of counter-terrorism strikes that began under President George W. Bush and expanded significantly under President Obama due to evolving militant threats and the greater availability of the technology required (see the history of drone warfare). President Trump has expanded it even further, launching at least 238 drone strikes in his first two years in office, compared to Obama’s 186 over the same period (The Daily Beast).
The real reasons we are bombing these countries run much deeper and are entangled with civil war and unrest, multiple terrorist threats, demand for natural gas and oil, and alliances with other countries. Since I don’t have as much knowledge in this area, I’m not going to go that deep into it. But I think we can objectively agree on this as our foundational problem:
TLDR; U.S. drone strikes have caused thousands of POC civilian deaths
Last year, Google announced a contract with the Pentagon that faced significant backlash. Called Project Maven, the project’s goal was to develop AI that could study and analyze drone footage in order to improve drone strikes on the battlefield (Global News). Apparently Google’s senior leadership was very enthusiastic about the project because it could win Google Cloud larger Pentagon contracts, spurring millions of dollars in revenue (Gizmodo). The backlash came from inside the company too: about 3,000 employees said in an open letter, “We believe that Google should not be in the business of war,” and argued that the project would compromise Google’s informal “Don’t be Evil” motto (Independent). Haha, um, it’s a little too late for that, Google.
While Project Maven could arguably improve accuracy and therefore result in fewer civilian deaths, that doesn’t answer the position that the U.S. should not be launching drone strikes at all, and that these counter-terrorism attacks are actually fueling rather than resolving foreign terrorism (my opinion).
The good news is that Google decided to let its contract lapse when it expires this year, in 2019. They also bid on, but then decided not to pursue, JEDI, a $10 billion contract to provide cloud computing services to the Department of Defense (Vox). In June 2018, Google published a set of AI ethics principles establishing that it will not develop AI for weapons, technology that causes harm, surveillance technology that violates internationally accepted norms, or technology that violates international human rights law in general.
HOWEVER, we should note that Google still maintains that it can continue to work with the government and the military in other areas. And let’s be real: this is a highly profitable area for them. The U.S. defense budget, at $610 billion, is bigger than those of the next 7 countries combined (PGPF). There’s no way a major corporation can easily say no to that type of money. So while Google has taken steps to reduce its impact, I question its motivations and wonder if it is merely trying to avoid bad PR while continuing to work for the military in the shadows.
This MIT Technology Review article asks a great question about the role of ethics in the tech industry: specifically, who defines ethics and who enforces them? It notes how easily Google set aside its newly minted ethics principles in favor of the JEDI contract. And that’s because creating a set of principles and emailing them out does not actually set expectations, provide accountability, or enact change. As NYU scholar Philip Alston says in the article, we need to start thinking of AI ethics in terms of human rights, and treat them as such.
(I’m not going to go into China’s whole military-civil fusion deal, it’s too scary and I’m too depressed about it. But you can read about it in the Financial Times, ISPI, and CNAS).
I wanted to reserve a space for me to be emotional about my personal attachment to these issues. In the process of writing this article, I cried a few tears, I questioned my role in the tech industry, and I experienced maybe 5–7 existential crises.
When I decided to become a designer in tech about 5 years ago, I immersed myself in organizations and resources that implied that inclusive design and ethical principles could change the world. Looking at my industry, I see a lot of people with this mindset; it’s clear when designers write their own codes of ethics or write about how design can solve the world’s biggest challenges. We care, we want to make a difference, and we believe we can create a positive social impact.
Honestly, who are we kidding? What positive impact can we make when the decisions with the greatest impact are made behind closed doors by company executives, in pursuit of huge profits? I can recycle my little cardboard boxes all day; that’s not going to stop giant corporations from pouring toxic waste into streams as an industrial byproduct, effectively canceling out my effort.
But I still want to acknowledge that people within these companies — at Amazon, Microsoft, and Google — had the courage to speak out against these human rights violations and in some cases made an impact. So I don’t think it’s a completely lost cause. But as long as the U.S. government, military, and police organizations continue to court tech companies with million-dollar contracts, we need to maintain an open dialogue, hold those companies accountable, and demand that AI abide by human rights. And for now, I think that’s the most we can do.
Thanks for reading. This is part of a series I’m working on about exclusionary design. If you have any suggestions for what to write about, feel free to comment or tweet at me @thetuttingtutor.