Data scientists continue to pursue machine learning and AI algorithms that can assess people without bias. From selecting variables that eliminate bias to delivering nudges that encourage decision-makers to think differently, the new technologies are making great inroads.
— By Jill Motley
Artificial intelligence and machine learning algorithms are only as smart as the data fed into them. Amazon learned that when it had to scrap its AI hiring tool because the algorithm had learned to make the same biased decisions humans were making. The reason was that the data came from history – the hires made in the past – and those historical hires favored men for certain jobs. Since then, technology experts have worked on developing AI algorithms that can detect bias in the data so they do not replicate it. Even more hopeful is the work being done to use AI algorithms to nudge decision-makers toward unbiased decisions about job candidates, and other talent decisions, that are fairer to everyone. Developing AI and machine learning algorithms requires continual deep analysis of the algorithms' design and learning process to ensure they do not perpetuate bias.
Nudging Towards No Bias
The concept of a nudge was popularized by authors Richard H. Thaler and Cass R. Sunstein in their book, appropriately named "Nudge." People are susceptible to biases that lead to poor decisions, but nudges can steer them in the right direction without restricting their freedom of choice. A start-up company called Humu Inc. developed nudge technology that uses a proprietary Nudge Engine to deliver small, scientifically based interventions to employees. The technology can gently influence decision-making in a variety of areas, including organizational resilience and manager effectiveness. It also addresses diversity and inclusion by delivering helpful suggestions designed to change behaviors through small incremental steps.
The system collects data from a variety of sources and uses that data to develop management prompts. The nudges are not making decisions, like Amazon's AI hiring tool that culled job applications. They suggest alternatives that help people think about a person or a decision; they are not intended to change minds, but to offer new perspectives and suggestions for action.
Removing Personal Choice
Bias is a process in which a person selectively chooses facts that confirm existing beliefs, then focuses on the things that confirm those beliefs. As Stuart Nisbet, chief data scientist at Cadient Talent, a talent acquisition platform, explains, removing bias requires removing the personal choice of which data is included. All data points contributing to the hiring of an applicant (positive choice) and the decline of an applicant (negative choice) are included. The data points are selected and weighted through an objective statistical analysis. Computer algorithms can help with this process by drawing on the experience and human judgment embodied in prior hiring decisions that resulted in good hires. A good hire can be objectively defined in a variety of ways, e.g., longevity or productivity. Variables like gender and race are excluded because they have no bearing on work performance, so they cannot influence the hiring decision.
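The exclusion step Nisbet describes can be sketched in a few lines: every recorded data point goes into the analysis, but protected attributes are stripped before any model sees them. This is a minimal illustration, not Cadient Talent's actual implementation; the field names and record are hypothetical.

```python
# Hypothetical applicant record; protected fields are removed before modeling.
PROTECTED = {"gender", "race", "age"}

def strip_protected(candidate: dict) -> dict:
    """Return a copy of the candidate record without protected attributes."""
    return {k: v for k, v in candidate.items() if k not in PROTECTED}

applicant = {
    "years_experience": 6,
    "certifications": 2,
    "tenure_last_job_months": 40,
    "gender": "F",   # excluded: no bearing on work performance
    "race": "X",     # excluded
}

features = strip_protected(applicant)
print(sorted(features))  # only job-relevant fields remain
```

Note that dropping the explicit variables is only a first step: as the article observes, bias can remain embedded in the rest of the data, so the statistical weighting itself still needs auditing.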
Developing a bias-free algorithm has proven challenging because so much data has bias embedded in it. One of the important lessons that has emerged from the effort to design effective and efficient algorithms is to have diverse teams work on the designs and analyze the results. It is difficult for people to look at results and spot bias when their own biases are in play. An algorithm may exclude job candidates based on certain words, for example, that a white or male reviewer may not recognize as biased against women or minority candidates.
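One simple way to surface the word-level problem described above is to audit text against a list of terms known to skew applicant pools. The sketch below is illustrative only: the word list is hypothetical, not a vetted lexicon, and a real audit would be built and reviewed by a diverse team as the article recommends.

```python
import re

# Illustrative, not a vetted lexicon of exclusionary language.
FLAGGED_TERMS = {"ninja", "rockstar", "aggressive", "dominant"}

def audit_wording(text: str) -> list:
    """Return flagged terms found in the text, sorted for stable output."""
    words = re.findall(r"[a-z]+", text.lower())
    return sorted(set(words) & FLAGGED_TERMS)

job_ad = "We want an aggressive rockstar engineer."
print(audit_wording(job_ad))  # ['aggressive', 'rockstar']
```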
Utilizing at Every Step
One of the questions that continually comes up is when to utilize algorithms in the recruiting and hiring process, especially since there is still much work to do in ensuring the algorithms are unbiased and producing desired results. The answer is that algorithms can drive value during recruiting, screening, interviewing, and hiring. Balancing the opportunity to create value against the risks is challenging. An interview with Vivek Ravisankar, CEO of HackerRank, discussed ways AI algorithms can make a difference in recruiting, and many of the same principles apply to the other talent management steps. HackerRank is a technical assessment and remote interviewing solution for identifying and hiring developers. The company offers the tech industry a means of finding qualified developers without regard to race, ethnicity, gender, or any of the other traits that exclude people from consideration.
Ravisankar identified four steps in the recruiting process where AI algorithms can help: unbiased sourcing, screening, interviewing, and selection. He points out that job descriptions and programmatic recruitment advertising can be adjusted so they do not skew toward a specific demographic, making sourcing more inclusive. AI can be used to verify and screen candidate qualifications, including administering online tests. AI can also conduct automated video interviews and run background checks on selected interviewees.
All of this comes with a caution. Employers must continually assess the data fed to AI to ensure bias is not creeping in. They must also review the data analytics to make sure diversity and inclusion remain principles embedded in the algorithm. The bottom-line job of algorithms is to prioritize people based on their capabilities and skills, not their demographics. This is something that should have been occurring all along, but bias has been persistent.
AI algorithms and machine learning may be the means of finally overcoming bias, but it is going to take diligent attention to the algorithmic process. It is tempting to implement a program and not fully assess what it produces. Self-regulation is key to developing algorithms that are unbiased.