Independent Review Certifies SkyHive’s Skills Models as Free of AI Bias

An independent assessment shows that SkyHive’s Skills Models are free of racial and gender bias in analyzing candidate qualifications, answering one of the biggest questions hiring managers and workers have about the use of artificial intelligence in talent management systems.

SkyHive’s Skills Models are now Armilla Verified, with the assessment showing that:

  • Skills Models remain robust and accurate when demographic information is added, meaning that the models don’t show unconscious racial and gender bias. 
  • Specifically, the models meet the standard set by New York City’s Local Law 144, which has become the de facto compliance standard for employers using automated hiring tools.
  • The Skills Models also remain robust when irrelevant information is added, meaning that the models successfully ignore text that doesn’t matter and focus on the real skills of a candidate.

A skills-based approach, when adopted as a talent management strategy, can improve employee retention, enhance internal mobility, and guide critical reskilling and upskilling strategies. But to do that, employers need an accurate and, most importantly, unbiased skill inventory.

The verification is completely voluntary, but SkyHive is committed to putting ethics first when developing talent management solutions, said Mohan Reddy, SkyHive co-founder and CTO.

“SkyHive submitted our technology for an Armilla Verification Badge for the same reason we chose to become a Certified B Corporation: to ensure we’re living up to our values as a company,” said Reddy. “We’ve worked hard to build the world’s most ethical AI people technology, and we wanted an independent assessment to demonstrate that we have succeeded.”

Unconscious AI bias is a major concern as more and more employers use automated tools and artificial intelligence to manage hiring. AI has enormous potential to make hiring both more efficient and more equitable by enabling skills-first talent approaches. But that can’t happen if longstanding biases are baked into the AI’s algorithms.

There are a number of ways bias can be inadvertently introduced into automated tools, even for well-intentioned employers. One way is for bias to be built into the datasets that employers use to “train” AI to identify potential candidates. 

An AI system only knows what is in its dataset. If there are blind spots in how a company hires and promotes, such as an imbalance between men and women, then those biases will be reflected in the HR data, and the AI tool will end up sharing them. Biased AI results could end up reinforcing existing problems, denying opportunities to workers and undermining skills-based hiring strategies for HR teams.

SkyHive’s technology allows employers to move from a job-based to a skills-based hiring strategy. By having accurate and up-to-date information about skills, employers can find talent more easily and not be limited by outdated job descriptions or the “paper ceiling” of broad education requirements. SkyHive’s skill ontology also allows workers to better understand what skills are in demand and how to advance in their careers.

Bias was assessed by taking a sample of resumes parsed by SkyHive in the United States, Canada, India, and the United Kingdom. The reviewers ran the resumes through the SkyHive Skill Model Inference twice: once anonymously and again with demographic data including race, gender, age, and years of work experience. The skills extracted from the two analyses had a 97.5% overlap, meaning that demographic data had little influence on the results.
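The report does not publish the exact overlap formula, but the core of the check is easy to illustrate. Below is a minimal Python sketch of one plausible version; the function name and toy skill sets are hypothetical, not SkyHive’s or Armilla’s actual code.

```python
# Hypothetical sketch of the overlap check described above. The two
# sets stand in for skills extracted from the same resume, once
# parsed anonymously and once with demographic fields added.

def skill_overlap(anonymous: set, augmented: set) -> float:
    """Fraction of anonymously extracted skills that survive when
    race, gender, age, and experience fields are added; 1.0 means
    the demographic data had no influence on extraction."""
    if not anonymous:
        return 1.0
    return len(anonymous & augmented) / len(anonymous)

# Toy example (invented skills, not data from the report):
base = {"python", "sql", "project management", "data analysis"}
with_demographics = {"python", "sql", "project management", "data analysis"}
print(f"{skill_overlap(base, with_demographics):.1%}")  # 100.0% here
```

A figure like the reported 97.5% would then be this overlap averaged across the full resume sample.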

In addition, the review tested whether the model could be confused by irrelevant text in a resume by inserting text that had nothing to do with skills, such as “lorem ipsum” or sections from novels in the public domain. The parser correctly ignored these sections and still identified skills with 95% accuracy, the report said. 

The report did find that using synonyms of skill names in a resume could have an impact on results: substituting synonyms resulted in 19% less overlap in skill sets. SkyHive teams will use this data to continue improving the company’s patented approach to an ontology that identifies and classifies skills.
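Both robustness checks follow the same pattern: perturb the resume text, re-run extraction, and measure how much of the original skill set survives. Here is a minimal, self-contained sketch of such a harness; the keyword-matching extract_skills is a toy stand-in for the real parser, and the noise and synonym lists are invented.

```python
# Hypothetical harness for the perturbation tests described above.
# extract_skills() is a toy keyword matcher, NOT the real SkyHive model.

KNOWN_SKILLS = {"python", "sql", "project management",
                "data analysis", "data analytics"}
NOISE = "Lorem ipsum dolor sit amet, consectetur adipiscing elit."
SYNONYMS = {"data analysis": "data analytics"}  # invented mapping

def extract_skills(resume: str) -> set:
    text = resume.lower()
    return {skill for skill in KNOWN_SKILLS if skill in text}

def with_noise(resume: str) -> str:
    # Append an irrelevant paragraph; a robust parser should ignore it.
    return resume + "\n\n" + NOISE

def with_synonyms(resume: str) -> str:
    # Swap skill names for synonyms; the report saw ~19% less overlap.
    for original, synonym in SYNONYMS.items():
        resume = resume.replace(original, synonym)
    return resume

def overlap(a: set, b: set) -> float:
    return len(a & b) / len(a) if a else 1.0

resume = "Skilled in Python, SQL, project management, and data analysis."
base = extract_skills(resume)
print("noise overlap:  ", overlap(base, extract_skills(with_noise(resume))))    # 1.0
print("synonym overlap:", overlap(base, extract_skills(with_synonyms(resume)))) # 0.75
```

Even this toy version reproduces the qualitative pattern in the report: appended noise changes nothing, while synonym substitution erodes the overlap.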

The review also included a bias audit as required by New York City Local Law 144, identifying the 30 most-frequently extracted skills and the “selection rate” for candidates. The audit found that the “impact ratio” was within acceptable limits and there was no evidence of bias.
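The arithmetic behind such an audit is straightforward: each group’s selection rate is divided by the rate of the most-selected group to get the impact ratio. A minimal sketch with invented counts follows; the 0.8 threshold is the EEOC four-fifths benchmark commonly used in these audits, not a figure from the report.

```python
# Illustrative Local Law 144-style impact-ratio calculation.
# All counts below are invented for demonstration only.

selected = {"group_a": 120, "group_b": 95, "group_c": 40}
total    = {"group_a": 200, "group_b": 170, "group_c": 75}

# Selection rate: fraction of each group's candidates who were selected.
rates = {g: selected[g] / total[g] for g in selected}
best = max(rates.values())

# Impact ratio: each group's rate relative to the most-selected group.
for group, rate in rates.items():
    ratio = rate / best
    flag = "ok" if ratio >= 0.8 else "review"  # EEOC four-fifths rule
    print(f"{group}: rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")
```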

The New York law has an impact far beyond the city itself, because so many major companies are headquartered or do business there. Realistically, any large employer has to consider the New York standards, since it is much easier to make an entire application compliant than to maintain a separate version for New York.

Verification of AI ethics can also make it easier for SkyHive clients to respond to RFPs, giving them a simple, independent answer to questions about ethical AI and bias in their applications.

To find out how to use SkyHive to solve your talent and workforce development problems, contact us today.
