
01/11/2022

Comments


I believe that robots, automation, and AI will have disruptive effects in the workplace. However, I think these effects will arrive at a much slower pace than predicted, as we learn more about the complexities of implementing these technologies.

I work at a cell therapy company that manufactures autologous therapies, and I have seen firsthand what it takes to introduce automation into the manufacturing process. Implementing these technologies is difficult because every system must be validated and qualified, and staff must be trained. Automated systems are also less flexible than manual processes: any change to the system triggers another round of validation, so every variation takes longer to implement. Some step changes that are easy to make in a manual process may also be limited by the capabilities of the automated system. So although we are making progress, there are still trade-offs in fully automating more complex manufacturing processes.

I also don’t believe using AI to screen job applicants will disrupt the industry anytime soon. My company uses AI to schedule interviews, which is useful. However, using AI for more subjective filtering can cause problems: it has been found to discriminate when filtering job applications. AI makes choices based on past data, which is generated by human behavior, so when past human decisions contain discriminatory elements, the algorithms replicate those elements. An article from the ACLU argues that “Rather than help eliminate discriminatory practices, AI has worsened them—hampering the economic security of marginalized groups that have long dealt with systemic discrimination” (https://www.aclu.org/news/privacy-technology/how-artificial-intelligence-can-deepen-racial-and-economic-inequities/). Before we can safely deploy AI for widespread use, we need to ensure that we are eliminating these biases through policy and further research.
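To make that replication mechanism concrete, here is a minimal sketch with entirely made-up data and feature names: a screening model that is never shown the protected attribute still learns the historical bias through a correlated proxy.

```python
# Sketch: a model trained on biased historical decisions reproduces
# the bias. All data, features, and coefficients are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

skill = rng.normal(size=n)               # true qualification
group = rng.integers(0, 2, size=n)       # 0/1 protected attribute

# Historical decisions: driven by skill, but penalizing group 1
# regardless of skill -- the human bias baked into the labels.
hired = (skill - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

# The model never sees `group`, only an innocuous-looking proxy
# (think zip code) that correlates with it.
proxy = group + rng.normal(scale=0.3, size=n)
X = np.column_stack([skill, proxy])

model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted hire rate = {pred[group == g].mean():.2f}")
# Both groups are equally skilled on average, yet the model recommends
# group 1 far less often: the past bias is replicated via the proxy.
```

Dropping the protected attribute from the inputs, as this sketch shows, is not enough; the bias rides in on whatever correlates with it.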

Thank you for bringing up these topics and sharing these articles, Professor! I agree that automation, including AI, can have disruptive effects, replacing workers and deepening inequality. One of the biggest sources of these effects, in my opinion, is algorithmic bias, which I see most clearly in job-applicant screening and facial recognition technologies.

With respect to job-applicant screening, many of these tools are built to eliminate human error and increase efficiency, and we anticipate that they will displace labor by replacing the need for human screeners. However, the technologies are not free from error themselves. A Washington Post article (https://www.washingtonpost.com/technology/2019/12/19/federal-study-confirms-racial-bias-many-facial-recognition-systems-casts-doubt-their-expanding-use/) put it best: “algorithms often carry all the biases and failures of human employees, but with even less judgment.” As mentioned in one of the WSJ articles you shared, many of these AI tools exhibit hiring biases based on racial and/or gender stereotypes buried in the algorithms. While they may be effective at screening applicants on education, hard skills, and so on, they cannot understand broader hiring goals such as culture fit. The algorithms often encode rules that filter out applicants the system treats as unqualified when in actuality they may have the necessary skills, such as applicants who can do the work but have criminal records (see the sketch below). This only deepens pre-existing inequalities.
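As a toy illustration of that last point, here is a hard screening rule rejecting the strongest candidate outright; the applicant records and the rule itself are hypothetical.

```python
# Sketch: a blanket screening rule discards a qualified applicant.
applicants = [
    {"name": "A", "skills_match": 0.92, "criminal_record": False},
    {"name": "B", "skills_match": 0.95, "criminal_record": True},
    {"name": "C", "skills_match": 0.40, "criminal_record": False},
]

def passes_screen(app, min_match=0.75):
    # The blanket record filter rejects B, the strongest skills match;
    # the rule has no way to weigh context or rehabilitation.
    return app["skills_match"] >= min_match and not app["criminal_record"]

shortlist = [a["name"] for a in applicants if passes_screen(a)]
print(shortlist)  # ['A'] -- B is screened out; C fails on skills
```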

Facial recognition remains a powerful technology with significant implications in both criminal justice and everyday life, but these systems exhibit significant racial bias. The 2018 Gender Shades project highlighted discrepancies in the classification accuracy of face recognition systems across skin tones and sexes. As this Harvard article describes (https://sitn.hms.harvard.edu/flash/2020/racial-discrimination-in-face-recognition-technology/), the audited algorithms were consistently least accurate on darker-skinned females and most accurate on lighter-skinned males. I have seen something similar firsthand with Face ID on the iPhone: my sister has unlocked my mother’s phone multiple times, and I have unlocked my sister’s phone a few times, even though my sister and I look quite different.
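For readers curious what such an audit looks like mechanically, here is a minimal sketch of a disaggregated evaluation in the spirit of Gender Shades; every prediction and label below is invented for illustration.

```python
# Sketch: per-subgroup accuracy for a gender-classification task.
from collections import defaultdict

# (subgroup, true label, predicted label) -- all rows are made up.
results = [
    ("lighter_male",   "male",   "male"),
    ("lighter_male",   "male",   "male"),
    ("lighter_female", "female", "female"),
    ("lighter_female", "female", "male"),
    ("darker_male",    "male",   "male"),
    ("darker_male",    "male",   "female"),
    ("darker_female",  "female", "male"),
    ("darker_female",  "female", "male"),
]

correct, total = defaultdict(int), defaultdict(int)
for subgroup, truth, pred in results:
    total[subgroup] += 1
    correct[subgroup] += (truth == pred)

# One overall accuracy hides the gap; per-subgroup rates expose it.
for subgroup in total:
    print(f"{subgroup:15} accuracy = {correct[subgroup] / total[subgroup]:.0%}")
```

This is exactly the kind of breakdown an ethical audit would demand before deployment: a single aggregate accuracy number can look excellent while one subgroup fares far worse.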

The biases in facial recognition have implications well beyond unlocking phones, such as the facial recognition systems used by law enforcement. The law enforcement system already exhibits significant racial bias, particularly against Black Americans, and inaccuracies in facial recognition and other automated systems only widen pre-existing inequalities. So while AI can be quite helpful in increasing efficiency, it can be quite disruptive if we do not strengthen oversight, for example through ethical audits of these technologies.
