Impact of Automation on the Economy
This post is written for Dr. Darakhshan Mir’s class on Computing and Society at Bucknell University. We discuss problems in tech and analyze them using ethical frameworks.
We’ve seen a tremendous rise in Artificial Intelligence (AI) in recent times. AI systems are becoming increasingly commonplace. Oftentimes, discussions concerning AI revolve around how automation may impact the job market. In this post, I try to answer this question, drawing primarily from Furman et al. [1]. Below, I discuss the trends in the job market, followed by a few ways in which automation is influencing the general economic landscape.
In 1942, the economist Joseph Schumpeter coined the phrase creative destruction to refer to a process through which an existing production system is replaced by a more innovative one, thereby boosting labor productivity [2]. Such a process creates “economic losers” who stand to lose from the change: some may see their wages reduced, while others may lose their jobs altogether. Keynes described the latter phenomenon as “technological unemployment” [3].
Historical evidence suggests that while certain jobs may be at imminent risk, new jobs usually appear in complementary sectors in the long run [4]. Though easier said than done, it’s important that people adapt to this evolving market. Unfortunately, we’re already seeing a “long-term decline” in labor-force participation because individuals are unable to keep up with the skills required in the workplace [5]. This raises a crucial question.
Is the tech sector to blame for using one set of people (e.g., engineers) as a means to build systems that exclude another set of people (e.g., low-skilled laborers) from jobs? This exclusion seems hardly avoidable in a capitalist economy, given that firms are always vying to maximize their profits by innovating. Is it then our education system that is unable to prepare individuals for the new jobs? Or is it the government that is too lenient in regulating advances in AI?
These are tough questions to answer, and there’s no one-size-fits-all solution. Even if a society with automation produces the same economic output as one without it, or even a larger one, it’s unjust to disregard, albeit unintentionally, the needs of the most vulnerable stakeholders: low-skilled workers. In this case, the best we can do is invest in retraining workers to promote the public good.
Since the turn of the decade, AI-related start-ups have been receiving more funding by the day [7]. However, tech giants like Google and Baidu account for most of the investment, while the remaining companies (tech or otherwise) are left to play catch-up. This poses a few challenges to the creation of a “mature AI economy” that, to an extent, levels the playing field for both incumbents and entrants.
As the internet has matured, we’ve observed greater “switching costs” [1] for customers to stop using an established platform and move to a new one. Platforms such as Google Search enjoy first-mover advantage and are able to collect their users’ data to further their market dominance. A majority of AI applications rely on machine learning [7], so the unavailability of large datasets presents a significant barrier for AI start-ups.
Some people argue that companies that spend their resources to craft a good dataset deserve the right to distribute it as desired. On the other hand, it’s reasonable to believe that the data itself belongs to the users. To tackle these contrasting stakeholder interests, [1] proposes the notion of data portability which “allows customers to take their data from one provider to another.” While it’s a step in the right direction, the authors also acknowledge that further work is needed to establish how large datasets impact the market.
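To make the idea of data portability concrete, here is a minimal sketch in code. Every name in it is hypothetical (no real provider exposes this API): the point is simply that one provider exports a user’s records in a neutral, agreed-upon format that another provider can ingest.

```python
import json

# Hypothetical illustration of data portability: provider A exports a
# user's records in a neutral JSON format that provider B can import.
def export_user_data(records: list) -> str:
    """Serialize a user's records into a portable, provider-neutral format."""
    return json.dumps({"format_version": 1, "records": records})

def import_user_data(payload: str) -> list:
    """Ingest records exported by another provider."""
    data = json.loads(payload)
    if data["format_version"] != 1:  # both sides must agree on a common schema
        raise ValueError("unsupported export format")
    return data["records"]

# A user moves their search history from one service to another.
history = [{"query": "creative destruction", "timestamp": "2019-04-01"}]
payload = export_user_data(history)
assert import_user_data(payload) == history
```

The hard part, as the authors note, is not the serialization itself but getting competing providers to agree on (and be required to support) a common schema.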
With regard to data portability, there are open questions about data security and about whether customers should be able to “own” the inferences that an AI-based application makes about their behavior [8]. Thus, the role of any future regulatory third-party agency is to ensure that companies are held to a standard of ethical data practice and that users’ well-being (physical, social, and emotional) is kept at the forefront.
Income Inequality
Skeptics of automation are concerned that AI will lead to a wide disparity in income levels, a concern that Furman et al. share [1].
This concern has led people to revisit proposals such as Universal Basic Income (UBI), wage supplements, and guaranteed employment. Understandably, none of these proposals is bulletproof; otherwise, they would have been put into practice long ago. UBI, in particular, seems highly ambitious, as it would require a nearly 50% tax hike in addition to $1 trillion in annual financing [1].
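A quick back-of-envelope calculation shows why the price tag climbs so fast. The figures below are purely illustrative assumptions of mine (not estimates from [1]), and they compute only the gross cost, before offsetting any existing transfer programs:

```python
# Back-of-envelope gross cost of a universal basic income.
# Both figures are illustrative assumptions, not values from the literature.
adults = 250_000_000     # assumed number of adult recipients
annual_payment = 12_000  # assumed UBI of $1,000 per month

gross_cost = adults * annual_payment
print(f"Gross annual cost: ${gross_cost / 1e12:.1f} trillion")  # → $3.0 trillion
```

Even after netting out programs a UBI might replace, a sum of this magnitude makes the scale of the required tax increase easy to see.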
As we’ve seen time and again, income inequality can lead to a vicious feedback loop for the already-disadvantaged section of society [10]. In the context of AI substituting low-skilled labor, what then is the best policy to minimize the divide between the rich and the poor?
Our society is far from an “AI takeover”: general AI remains out of reach, and we have yet to see such a breakthrough, even though task-specific advances in machine learning have captured the public imagination. Nevertheless, this shouldn’t be an excuse not to think about how we may want to regulate AI.
Low-skilled workers are especially vulnerable to automation, and some of them never recover from job loss. In addition, it’s increasingly difficult for start-ups to break into the field, given a market dominated by a few incumbents and the unavailability of “standardized datasets.” All of this has led to a host of ethical discussions about who is to blame for the loss of jobs and who holds the rights to user data, on which every AI application crucially depends.
Policymakers across the world have attempted to devise frameworks to regulate AI. However, as [9] suggests, all of them fall short of discussing their vision for the future society. Only by understanding our priorities for the future, the authors of this study argue, would we be able to comprehensively address the questions brought up in this post.
[1] Furman, Jason, and Robert Seamans. “AI and the Economy.” Innovation Policy and the Economy (2019).
[2] Schumpeter, Joseph. “Creative Destruction.” Capitalism, Socialism and Democracy (1942).
[3] Keynes, John Maynard. “Economic Possibilities for our Grandchildren” (1930). Essays in Persuasion (2010).
[4] Autor, David H. “Why Are There Still So Many Jobs? The History and Future of Workplace Automation.” Journal of Economic Perspectives (2015).
[5] Council of Economic Advisers (CEA). Economic Report of the President (2016).
[6] Frey, Carl Benedikt, and Michael A. Osborne. “The future of employment: how susceptible are jobs to computerisation?” Technological forecasting and social change (2017).
[7] Bughin, Jacques, et al. “Artificial Intelligence: The Next Digital Frontier?” MGI Report, McKinsey Global Institute (June 2017).
[8] Tucker, Catherine. “Privacy and Innovation.” In Innovation Policy and the Economy, vol. 11 (2012). Chicago: University of Chicago Press.
[9] Cath, Corinne, et al. “Artificial intelligence and the ‘good society’: the US, EU, and UK approach.” Science and engineering ethics (2018).
[10] Durlauf, Steven N. “A Theory of Persistent Income Inequality.” Journal of Economic Growth 1.1 (1996).
If you liked this, please check out my other Medium posts and my personal blog. Comment below on how I can improve.