I came across two items of note on the topic of automation, which will occupy a good portion of Part II of Global Trends, devoted to the rise of new technologies.
The first is a Forbes article by my fellow economist Effi Benmelech, from Kellogg. In this article Professor Benmelech takes a contrarian view on the rise of robots, arguing that expenditures on robots make up a tiny fraction of overall investment, and that these expenditures tend to be confined to the manufacturing sector (itself a modest share of GDP and overall employment). This leads him to conclude that the disruptive effects of automation (displacement of labor, rising income inequality) might not be as severe as is often anticipated. However, as he recognizes at the end of the article, robotization is not the only manifestation of automation. And the other manifestations of automation (AI, cloud computing, etc.) may indeed have disruptive effects, replacing workers and leading to more inequality. Broadly speaking, I agree that robotization is unlikely to be the major issue. Automation more broadly, however, has the potential to really affect productivity (positively) and employment (negatively, at least for low-skilled labor). (For a full-blown academic study of this issue, I direct you to this link, where the ideas in the Forbes article are much more fleshed out.)
The second is an intriguing article from the Wall Street Journal on the growing use of automation and AI for job applicant screening and recruitment activities. Such technologies, in my view, have the potential to greatly improve the allocation of workers to jobs, resulting in efficiency gains. What gets programmed into the algorithms, however, raises very deep and interesting ethical questions, and it is not obvious that the resulting matching process will lead to more equity and inclusion (a WSJ article from September 2021 goes into these issues in more detail). To be pondered...
I believe that robots, automation and AI will have disruptive effects in the workplace. However, they will arrive at a much slower pace than predicted, as we learn more about the complexities of implementing these technologies.
I work at a cell therapy company that manufactures autologous therapies, and I have seen the impact of trying to introduce automation into the manufacturing process. These technologies and processes are difficult to implement because they require validation, qualification, and training. Automated systems are also less flexible than manual processes: any system change must be revalidated, so every variation takes longer to implement. Some step changes that are easy to make in a manual process may also be limited by the capabilities of the automated systems. Therefore, although we are making progress, there are still trade-offs in fully automating more complex manufacturing processes.
I also don’t believe using AI to screen job applicants will be disruptive to the industry anytime soon. My company uses AI to schedule interviews, which is useful. However, AI for more subjective filtering can cause issues: it has been found to discriminate when filtering job applications. AI makes choices based on past data, which is generated by human behavior. When past human decisions contain discriminatory elements, those elements are replicated by the algorithms. An article from the ACLU supports this: “Rather than help eliminate discriminatory practices, AI has worsened them — hampering the economic security of marginalized groups that have long dealt with systemic discrimination” (https://www.aclu.org/news/privacy-technology/how-artificial-intelligence-can-deepen-racial-and-economic-inequities/). Before we can safely implement AI in widespread use, we need to ensure that we’re eliminating these biases through policy and further research.
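The replication mechanism described above can be made concrete with a toy sketch. The data below is entirely hypothetical and the "model" is deliberately simplistic (it just learns each group's historical hire rate), but it illustrates the point: a system trained on biased past decisions reproduces the bias, even for equally qualified applicants.

```python
# Hypothetical illustration: a screening "model" that learns from past
# hiring decisions inherits whatever disparity those decisions contain.
from collections import defaultdict

# Made-up historical decisions: (group, qualified, hired).
# Group A was hired even when unqualified; group B rarely hired even when qualified.
history = [
    ("A", True, True), ("A", True, True), ("A", False, True), ("A", True, True),
    ("B", True, False), ("B", True, True), ("B", True, False), ("B", False, False),
]

# "Training": record the hire rate observed for each group.
by_group = defaultdict(list)
for group, _qualified, hired in history:
    by_group[group].append(hired)
learned = {g: sum(h) / len(h) for g, h in by_group.items()}

# The model now scores equally qualified applicants differently by group.
print(learned["A"])  # 1.0  -- group A was always hired in the past
print(learned["B"])  # 0.25 -- group B rarely was, despite similar qualifications
```

Real screening models are far more complex, but the failure mode is the same: the disparity lives in the training data, not in any explicit rule, which is why it is hard to audit out after the fact.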
Posted by: Pauline Yang | 01/15/2022 at 06:30 PM
Thank you for bringing up these topics and sharing these articles, Professor! I agree that automation, such as AI, can have disruptive effects, replacing workers and leading to more inequality. One of the biggest sources of these effects, in my opinion, is algorithmic bias. I see this most clearly in job applicant screening and in facial recognition technologies.
With respect to job applicant screening, many of these tools are created to eliminate human error and increase efficiency. By doing this, we anticipate the displacement of labor as automation replaces the need for workers. However, these technologies are not free from error themselves. A quote from a Washington Post article (https://www.washingtonpost.com/technology/2019/12/19/federal-study-confirms-racial-bias-many-facial-recognition-systems-casts-doubt-their-expanding-use/) said it best: “algorithms often carry all the biases and failures of human employees, but with even less judgment.” As mentioned in one of the WSJ articles you shared, many of these AI tools exhibit hiring biases based on racial and/or gender stereotypes buried within the algorithms. While they may be successful at screening applicants based on education, hard skills, etc., they are not capable of understanding broader hiring goals such as culture fit. These algorithms often encode biases that filter out applicants the system treats as unqualified when in actuality they may have the necessary skills, such as applicants who can perform the work but have criminal records. This only furthers pre-existing inequalities.
Facial recognition remains a powerful technology with significant implications in both criminal justice and everyday life. But these technologies exhibit significant racial bias. The 2018 Gender Shades project highlighted discrepancies in the classification accuracy of face recognition technologies across skin tones and sexes. As referenced in this Harvard article (https://sitn.hms.harvard.edu/flash/2020/racial-discrimination-in-face-recognition-technology/), the algorithms consistently demonstrated the poorest accuracy for darker-skinned females and the highest for lighter-skinned males. I have seen this firsthand in Apple’s facial recognition technology for their iPhones: my sister has been able to unlock my mother’s phone multiple times, and I have unlocked my sister’s phone a few times, even though my sister and I look quite different.
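The kind of audit Gender Shades performed can be sketched in a few lines: instead of reporting one overall accuracy figure, you disaggregate the results by subgroup. The numbers below are invented for illustration and are not the study's actual results; only the method (per-subgroup accuracy) reflects the audit described above.

```python
# Hypothetical per-subgroup accuracy audit, in the spirit of Gender Shades.
from collections import defaultdict

# Made-up classifier results: (subgroup, predicted_correctly).
records = [
    ("lighter_male", True), ("lighter_male", True),
    ("lighter_male", True), ("lighter_male", True),
    ("darker_female", True), ("darker_female", False),
    ("darker_female", False), ("darker_female", True),
]

# Group the outcomes by subgroup, then compute accuracy within each.
outcomes = defaultdict(list)
for subgroup, correct in records:
    outcomes[subgroup].append(correct)
accuracy = {g: sum(v) / len(v) for g, v in outcomes.items()}

overall = sum(c for _, c in records) / len(records)
print(overall)                  # 0.75 -- the single headline number
for g, acc in sorted(accuracy.items()):
    print(g, acc)               # the gap the headline number hides
```

The design point is that the overall figure (75% here) conceals a large gap between subgroups, which is exactly why disaggregated reporting, or ethical auditing more generally, matters for these systems.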
The biases in facial recognition technologies have implications well beyond the everyday use of unlocking phones, such as the facial recognition networks used by law enforcement. Within the law enforcement system, there is significant racial bias, particularly against Black Americans. Biases and inaccuracies within facial recognition technologies, and other automated systems, only widen pre-existing inequalities. So, while AI can be quite helpful in increasing efficiency, it can have quite disruptive effects if we do not strengthen regulation, such as ethical auditing of these technologies.
Posted by: Mareena Haseeb | 01/17/2022 at 03:38 PM