Laborera Risk Management Case Study of Artificial Intelligence for Human Resources

DOI: 10.4018/979-8-3693-1634-4.ch010

Abstract

Laborera is a fictional company: a cloud-based software vendor that specializes in human capital management applications. Laborera is entangled in a class action discrimination suit over its artificial intelligence tools, which allegedly prescreen and disqualify applicants in protected categories. Porter's five forces model and McKinsey's strategic horizons offer a framework for strategic review. The Deal and Kennedy framework and the Denison organizational culture model are used to strengthen Laborera's organizational culture. The Bridges transition model and the McKinsey 7-S model enable transition and thoughtful organizational change management. Laborera improves ethical decision making, including decision processes involving a human-machine team, using the fairness/justice approach and the common good approach. This case study demonstrates theoretical applications of risk management for software companies seeking to leverage artificial intelligence within innovative software applications.

Literature Review

Bernd Schmitt, a marketing expert from Columbia University interviewed by Frieda Klotz (2016), explains that artificial intelligence may subsume many human resources functions that require analytical decision-making, and he implores companies to start planning now. Artificial intelligence systems are powerful prediction engines, according to Silverman (2020). At its root, artificial intelligence is essentially applied statistics: computers applying statistics to data. Machines trained on biased training data, such as data reflecting historical correlations between race and opportunity, may statistically and systematically emulate human bias (Kennedy, 2021). Corporate leaders and boards must go beyond following current regulations and statutes and anticipate the potential risks and consequences of how employers will use artificial intelligence systems; in particular, artificial intelligence technologies introduce risks to brand and reputation when used in ways that worsen racial inequity (Silverman, 2020).
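
The claim that models trained on biased data can statistically reproduce human bias can be made concrete with a simple outcome audit. The sketch below is not drawn from the chapter; it illustrates the widely used "four-fifths" (80%) adverse impact test that employers commonly apply to screening outcomes. All group names and counts are hypothetical.

```python
# Hypothetical sketch of a disparate impact audit for an AI prescreening tool,
# using the "four-fifths" (80%) adverse impact guideline. Group labels and
# applicant counts are illustrative, not data from the Laborera case.

def selection_rate(passed: int, applied: int) -> float:
    """Share of applicants in a group that the screening tool advanced."""
    return passed / applied if applied else 0.0

def adverse_impact_ratio(protected_rate: float, reference_rate: float) -> float:
    """Protected group's selection rate relative to the reference group's."""
    return protected_rate / reference_rate if reference_rate else 0.0

# Illustrative screening outcomes per applicant group (made-up counts).
outcomes = {
    "group_a": {"applied": 400, "passed": 240},  # reference group
    "group_b": {"applied": 300, "passed": 120},  # protected group under review
}

rate_a = selection_rate(outcomes["group_a"]["passed"], outcomes["group_a"]["applied"])  # 0.60
rate_b = selection_rate(outcomes["group_b"]["passed"], outcomes["group_b"]["applied"])  # 0.40

ratio = adverse_impact_ratio(rate_b, rate_a)  # 0.40 / 0.60 = 0.67
if ratio < 0.8:
    print(f"Adverse impact ratio {ratio:.2f} falls below 0.8; review the screening model.")
else:
    print(f"Adverse impact ratio {ratio:.2f} meets the four-fifths guideline.")
```

In this illustrative run the ratio is roughly 0.67, below the 0.8 threshold, which is the kind of signal that would prompt the human-machine review process the case study advocates.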

Key Terms in this Chapter

Responsible Artificial Intelligence: A multi-disciplinary effort to design and build AI systems with careful consideration of their fairness, accountability, and transparency.

Strategic Risk Management: Evaluating the risks and likely outcomes of business decisions related to a company's plans to achieve its business objectives, and choosing the decisions that will enable the company to succeed.

Change Management: A structured process for planning and implementing new ways of operating.

Artificial Intelligence: The ability of a digital computer to use a mathematical model to make inferences from data of the kind commonly associated with intelligent beings.

Discrimination: The unjust or prejudicial treatment of different categories of people, especially on the grounds of ethnicity, age, sex, or disability.

Transition Management: A process of getting the best out of change and managing the conversion from one state to another.

Inclusion and Diversity: A framework through which an organization ensures, through deliberate effort and policy, that people with different characteristics all feel respected, accepted, supported, and valued in the work environment.
