Self Help

The Algorithm - Hilke Schellmann

Matheus Puppe · 48 min read

“If you liked the book, you can purchase it using the links in the description below. By buying through these links, you contribute to the blog without paying any extra, as we receive a small commission. This helps us bring more quality content to you!”

BOOK LINK:

CLICK HERE

  • The passage introduces the use of AI tools like HireVue in hiring, monitoring, promoting and firing employees. It notes how this area has become the “Wild West” without much regulation of algorithms.

  • It shares the story of Lizzie, a makeup artist who was laid off during the pandemic based on a poor score from HireVue, despite good performance reviews. She later discovered that a final HireVue rating had never actually been recorded for her.

  • Experts warn that many AI tools used in hiring are not ready and some companies in this space are overpromising what their technologies can do, similar to disgraced company Theranos.

  • As algorithms make more impactful decisions about people’s careers, there is a need to better understand how these tools work and identify potential issues like bias or other flaws. The book aims to shed light on this important and growing issue.

  • AI is now widely used throughout the hiring process and world of work, from screening resumes to conducting video interviews to monitoring employee performance. Companies use AI to deal with the huge volume of applications and make these processes more efficient.

  • However, there are also risks if these AI systems incorporate and scale up existing biases. Discrimination could potentially affect hundreds of thousands of people without transparency. It’s a major civil rights issue to have algorithms making high-stakes decisions.

  • The author investigated AI in hiring and employment after learning about a job applicant who was interviewed by a “robot.” She interviewed more than 200 people with a range of perspectives on the topic.

  • While AI could address some human biases, systems now reject qualified candidates and don’t necessarily improve hiring outcomes. We need oversight to ensure opportunities are based on merit rather than attributes like gender or ethnicity.

  • The book aims to expose what’s really happening with workplace AI, warn about overhyped claims, and provide practical advice for job seekers, employees and companies on evaluating and improving how these systems are developed and used.

  • AI and algorithms are increasingly being used by companies to screen and evaluate job applications at scale, as hiring managers are overwhelmed by large volumes of applications.

  • The book profiles “Sophie”, a Black female veteran software engineer who applied to 146 jobs over 6 months, mostly through online job boards and company websites that used AI screening.

  • Sophie found it frustrating to have little human interaction during the hiring process. Algorithms may fail to surface less traditional candidates like Sophie due to how they are trained on existing hiring patterns.

  • While technology was meant to democratize hiring, some argue AI screening instead marginalizes groups and makes it difficult for many qualified people to find jobs, despite companies complaining of talent shortages.

  • The chapter questions whether AI tools used to screen candidates actually help companies or cause more problems and discrimination against women, people of color and others.

  • Résumé screeners that use AI/machine learning to analyze applicant résumés can potentially discriminate based on gender, race, nationality etc. if the training data is skewed or contains biases.

  • Amazon had to scrap their résumé screener AI after it systematically downgraded applicants whose résumés mentioned words associated with women’s groups, due to biases in the original (male-dominated) training data.

  • Other companies likely have similar issues with their hiring AIs, but they are not widely known as companies do not want to disclose flaws that could result in lawsuits.

  • Two whistleblowers (John Scott and Ken Willner) inspected résumé screening tools and found they used arbitrary or biased variables like names, locations, hobbies etc. to predict candidate success, rather than actual job qualifications.

  • Variables correlated with gender, race, nationality etc. in the data could amount to illegal discrimination if used as predictive factors by the algorithms.

  • Relying solely on word correlations without a logical link to the job can get employers into legal trouble for indirect discrimination. More oversight is needed of these hiring AIs and their potential biases.

Here are the key points about potential adverse impacts from algorithmic résumé screening:

  • Algorithms can pick up on spurious correlations in training data that have nothing to do with job performance, like shoe size, but correlate with protected attributes like gender. This could discriminate against certain groups.

  • Some résumé screeners use words/keywords that are not actually relevant to the job being screened for, but relate to applicants’ identities or demographic groups. This treats applicants differently based on attributes like race or gender.

  • Problematic keywords included “Afric*”, and activities such as softball (weighted negatively) versus baseball (weighted positively). These have no meaningful relevance to the job but could discriminate.

  • Résumé screeners often just pick up random correlations in training data without understanding why, like playing lacrosse or having a certain name. This likely reflects biases in the original data.

  • Simply removing discriminatory keywords may not fix the underlying problems if algorithms are still learning inappropriate correlations. Oversight is needed to ensure fairness.

  • Hobbies and extracurriculars on résumés often reflect social/cultural factors rather than job skills, so algorithms should exclude them from decisions.

The key risk is unsupervised algorithms learning and acting on biases in their training data, even for attributes legally protected from consideration in hiring. Employers must vet algorithms for potential unfair impacts and relevance to jobs.
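
To make that mechanism concrete, here is a deliberately tiny sketch (invented résumés and labels, with scikit-learn assumed to be available) of how a screener trained on past hiring decisions can attach weight to a token like “women” even though it says nothing about qualifications:

```python
# Hypothetical illustration (invented data): a screener trained on past
# hiring decisions learns a proxy token that correlates with gender, not skill.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain of chess club, python developer",         # hired in the past
    "python developer, built trading systems",         # hired
    "java developer, women's coding club mentor",      # rejected (historical bias)
    "women's softball team, sql and java developer",   # rejected (historical bias)
]
hired = [1, 1, 0, 0]  # labels reflect past (biased) decisions, not ability

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect which tokens the model rewards or penalizes.
for token, weight in sorted(zip(vectorizer.get_feature_names_out(), model.coef_[0]),
                            key=lambda pair: pair[1]):
    print(f"{token:>10}  {weight:+.3f}")
# Tokens such as "women" end up with negative weight purely because of the
# training labels -- the kind of proxy discrimination an audit should catch.
```

Accuracy metrics alone would not surface this; it only becomes visible when the learned weights, or selection rates by group, are examined.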

Here are the key points from this section:

  • The EEOC sued a Chinese company, iTutorGroup, in 2022 for using algorithms to automatically reject applications from women over 55 and men over 60. This was the EEOC’s first lawsuit involving a company’s use of AI. iTutorGroup settled for $365,000.

  • In 2023, the EEOC investigated DHI, which operates the Dice.com job site, for allowing job descriptions that potentially discriminated against non-visa holders. DHI agreed to use AI to remove discriminatory keywords from job postings.

  • Job platforms’ algorithms can discriminate against women by amplifying small gender differences in user behavior data. For example, men tend to apply to jobs they’re only partly qualified for, while women only apply if highly qualified. AI learns these patterns and may recommend more men.

  • At LinkedIn, former VP John Jersin developed “representative results” AI to intervene and maintain diversity in recommendation results sent to recruiters, countering potential bias from behavioral data. Limited evidence suggests this was effective.
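
As a rough illustration of the re-ranking idea behind “representative results” (a simplified sketch, not LinkedIn’s actual system), a recommender can rebuild each slate so a group’s share does not fall below its share of the qualified pool, instead of letting small behavioral differences compound:

```python
# Simplified diversity-aware re-ranking (illustrative only). Groups "A"/"B"
# and all scores are invented; target_share would come from group B's share
# of the qualified applicant pool.
from collections import deque

def representative_rerank(candidates, target_share, slate_size=10):
    """candidates: list of (name, group, score) tuples, higher score = better."""
    by_group = {
        g: deque(sorted((c for c in candidates if c[1] == g), key=lambda c: -c[2]))
        for g in ("A", "B")
    }
    slate, b_count = [], 0
    while len(slate) < slate_size and (by_group["A"] or by_group["B"]):
        current_share = b_count / len(slate) if slate else 0.0
        prefer = "B" if current_share < target_share else "A"
        if not by_group[prefer]:              # fall back if one group runs out
            prefer = "A" if prefer == "B" else "B"
        name, group, _score = by_group[prefer].popleft()
        slate.append(name)
        b_count += group == "B"
    return slate

# Group B's scores are only marginally lower -- the small behavioral gap the
# passage says recommendation algorithms tend to amplify.
pool = [(f"a{i}", "A", 0.90 - 0.005 * i) for i in range(20)] + \
       [(f"b{i}", "B", 0.84 - 0.005 * i) for i in range(10)]
print(representative_rerank(pool, target_share=1 / 3))
# Ranking purely by score would fill the first slate entirely with group A.
```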

  • AI can also skew the job opportunities individuals see on platforms based on past application patterns, potentially limiting views of certain types of roles.

  • Even simple programming errors can cause issues, like an adaptive testing company accidentally building tests that gave women harder questions than men with the same qualifications.

  • Job applicant screening tools and algorithms are commonly used by companies to filter large candidate pools, but they often reject qualified candidates. They tend to be overly narrow and strict in only seeking exact matches to job description criteria.

  • Some common issues identified are tools rejecting candidates for arbitrary reasons like gaps in employment longer than 6 months, tools scoring candidates incorrectly which led to wrong hiring decisions, and ballooning generic job descriptions that filter out good candidates.

  • Researchers found that over 90% of surveyed companies use these tools, even though they know the tools exclude qualified candidates most of the time. Hiring managers then often complain about the quality of the candidates they do see.

  • The tools prioritize efficiency over effectiveness and end up narrowing the candidate pool too much. A human recruiter may be able to better assess candidates that the tools reject.

  • When companies start directly hiring previously rejected candidates, they often find them to be high performers, suggesting the tools are over-filtering talent pools. Changes are needed to hiring processes and better understanding tool limitations and biases.

  • Companies are developing AI tools that can analyze people’s social media feeds and predict their personality traits and behavior. This goes beyond just sorting resumes - the tools claim to provide hidden insights into someone’s “real persona.”

  • One company, Humantic AI, advertises that with just an email address, its algorithm can scan social feeds and predict things like culture fit, personality, and how much supervision someone needs. This gives companies access to personal data without users’ consent.

  • Predicting personality from social media is controversial as it tips the power balance in hiring decisions. While insights into soft skills could be useful, the tools may not be very accurate and risk judging people unfairly based on limited online data.

  • As jobs change rapidly, employers are looking more at traits like curiosity, adaptability and teamwork over specific skills. Personality tests have become a $2 billion industry but traditional tests are time-consuming. AI tools promise instant insights without candidates knowing they are being assessed.

  • However, there are open questions about how well these tools actually work and whether people should have some say over personal data being analyzed this way without permission, especially when it could impact their careers and livelihoods. More testing and oversight of these predictive hiring algorithms may be needed.

  • New AI tools claim to analyze people’s social media profiles and online behavior to assess personality traits and predict suitability for jobs or likelihood of certain behaviors.

  • They promise to flag things like toxic language, bullying, self-harm indications, politics, violence, or racy content through automated analysis of public social media posts and profiles.

  • However, critics argue these tools lack context and common sense. They often flag innocuous posts or song lyrics out of context. It’s unclear how well they truly measure complex human qualities.

  • One person, Kai Moore, had their social media analyzed by the tool Fama. It flagged many benign likes and tweets in a 300+ page report, showing the limitations of keyword searches without understanding context.

  • There are questions around how the tools were trained, what exactly they can predict, potential harms of exclusion, and lack of transparency from vendors on methodology.

  • The author plans to test one such tool, Humantic AI, on their own social media to see what personality assessment it provides, raising concerns about accuracy and implications of being judged by an algorithm.

In summary, the passage discusses the emerging use of AI tools to analyze social media for hiring and employment decisions, but also raises questions about their capabilities and potential harms given limitations in understanding context and humanity.

  • The author interviews Tomas Chamorro-Premuzic, a psychology professor who believes AI can be objectively helpful for assessing personalities in hiring.

  • They test two different AI tools (Humantic and Crystal) on Chamorro-Premuzic’s social media profiles to predict his personality.

  • Humantic gives very different predictions based on his Twitter vs LinkedIn - calling him cautious/deliberate on Twitter but influential/energizing on LinkedIn.

  • Crystal provides a more consistent prediction of direct/assertive/competitive across LinkedIn profiles.

  • Chamorro-Premuzic acknowledges some accuracy but also inconsistencies between the tools’ predictions and his self-perception.

  • This highlights the limitations of current AI tools in reliably and consistently predicting human personalities from limited online data, especially when predictions differ significantly based on small changes to the analyzed data (Twitter vs LinkedIn). Personality is thought to be relatively stable.

  • The author questions how ready these AI tools are for high-stakes uses like employment decisions given their inaccuracies even when assessing someone specifically studying the topic like Chamorro-Premuzic.

  • The author tested two AI-based personality assessment tools, Humantic AI and Crystal, on herself and with Tomas Chamorro-Premuzic. Both tools gave very different personality profiles based on the author’s LinkedIn vs. Twitter data.

  • The author and Chamorro-Premuzic noted this shows the algorithms may not be very accurate or consistent. The companies acknowledged the predictions may vary with limited data.

  • The author questioned why the algorithms don’t provide warnings about low accuracy predictions. There is pressure for results over accuracy.

  • The algorithms’ methods - comparing users’ words to label them based on others’ personality tests - seem flawed and were not independently validated (a toy version of this word-matching approach is sketched below).

  • In a follow up study with 94 people, the author found the tools again gave inconsistent results across profiles and disagreed with each other.

  • This implies the tools may not measure personality reliably as advertised and could impact important decisions like hiring.

  • The author remains skeptical of unchecked AI being used without understanding how predictions are made or accuracy levels. Human judgment is still needed to properly evaluate AI results.
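
The word-matching approach described in these bullets is easy to make concrete. The sketch below is not any vendor’s real model; it is a toy lexicon scorer with invented word lists, and it shows why one handful of words (a LinkedIn bio) can yield a completely different “personality” than another handful (a few tweets):

```python
# Toy lexicon-based personality scoring (invented word lists, not a real
# vendor model): count how often a user's words appear in trait word lists
# supposedly derived from other people's personality-test results.
TRAIT_LEXICON = {
    "extraversion": {"excited", "team", "energizing", "party", "talk"},
    "conscientiousness": {"deadline", "plan", "detail", "organized", "deliver"},
    "openness": {"curious", "novel", "research", "imagine", "art"},
}

def score_text(text):
    words = [w.strip(".,!").lower() for w in text.split()]
    total = max(len(words), 1)
    return {trait: sum(w in lexicon for w in words) / total
            for trait, lexicon in TRAIT_LEXICON.items()}

linkedin_bio = "Organized leader who loves to plan, deliver on every deadline, and support the team."
tweets = "Curious about novel research. Imagine better art. Excited to talk!"

print(score_text(linkedin_bio))   # skews heavily toward conscientiousness
print(score_text(tweets))         # the same person now looks open and extraverted
# Tiny text samples drive the scores, which is one reason two profiles of the
# same person can disagree so sharply.
```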

  • Some companies use AI-powered video games to assess job applicants instead of traditional resume reviews or interviews. The games are meant to test things like cognitive abilities, social skills, personality traits, etc. that may indicate job fit and potential.

  • Companies like Plum and Pymetrics have developed suites of games that applicants can play as part of the hiring process. The games aim to measure things like sociability, pattern recognition, decision making under pressure.

  • Proponents argue games reduce bias compared to resume screening and are more enjoyable for applicants than skills tests. Games also make it harder to fake answers compared to interviews.

  • Skeptics question whether games that test abstract skills and traits really indicate someone’s ability to do a specific job. Some applicants feel the games add anxiety since they don’t directly relate to the job skills.

  • The passage provides an example of one person, Martin Burch, who was asked to play a Plum sorting game as part of applying for a data analyst role at Bloomberg. He wondered what it had to do with the actual job skills.

So in summary, the passage discusses the emerging use of AI games in hiring assessments and both the arguments for and critiques against this approach.

  • Martin Burch applied for a job at Bloomberg but was rejected after completing an online assessment called Plum. The rejection came very quickly, within a day, which surprised him.

  • When he asked the recruiter why he was rejected, she said it was due to not meeting the benchmarks on the Plum assessment. This didn’t make sense to Burch as he was qualified for the role based on his experience.

  • Burch used his rights under GDPR to request his data from Plum. He became known as “Patient Zero”, the first to use this law to understand an algorithmic job rejection.

  • Plum’s CEO said their tools measure traits like problem solving, personality and social intelligence that are better predictors of future performance than past experience alone.

  • Burch received his scores from 12 categories measured by Plum but didn’t understand how they related to the job or what the scores meant.

  • Experts question if assessments like Plum truly have “face validity” or a clear link to predicting success in specific roles. More holistic considerations of candidates and teams may be needed.

So in summary, Burch challenged his algorithmic rejection by using GDPR to access his data, but still had doubts about how Plum’s assessments accurately predict job fit and performance. Experts also question their validity and consideration of other important factors.

  • Pymetrics uses games and assessments to measure cognitive, social and emotional traits like generosity, fairness, emotion and attention from a job applicant’s mouse movements, clicks, etc.

  • It compares the applicant’s performance to that of existing successful employees at a company to see if their traits align and predict job success (a simplified sketch of this benchmarking approach appears below).

  • However, using only high performers as a benchmark has risks like biases in who gets labeled a high performer and small/non-diverse sample sizes.

  • Comparing high performers only against the general population may surface traits that distinguish the two groups but don’t actually predict job success, as one example involving healthcare workers showed.

  • Experts question if games truly measure traits or just game-playing ability, and whether traits like risk-taking are actually relevant for jobs.

  • There is limited evidence that the games can predict real-world job performance as they were not designed for that purpose originally.

  • Reading about the balloon game, the applicant realized their strategic behavior was seen as risk-taking, but they are more cautious in real life, showing a potential disconnect.

The summary focuses on the concerns raised about using games to measure traits and predict job performance, as well as potential issues with only benchmarking against high performers instead of a fuller analysis.
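
To make the benchmarking concern concrete, here is a deliberately simple sketch (invented trait numbers, not Pymetrics’ actual model): average the incumbents’ game-derived traits and rank applicants by closeness to that average. Whatever makes a small incumbent sample homogeneous, including demographics or mere game-playing habits, becomes part of the benchmark.

```python
# Benchmarking applicants against a handful of incumbent "high performers"
# (all numbers invented for illustration).
import math

TRAITS = ["risk_taking", "attention", "planning", "fairness"]

high_performers = [
    {"risk_taking": 0.8, "attention": 0.6, "planning": 0.7, "fairness": 0.5},
    {"risk_taking": 0.7, "attention": 0.7, "planning": 0.6, "fairness": 0.6},
]
benchmark = {t: sum(p[t] for p in high_performers) / len(high_performers)
             for t in TRAITS}

def fit_score(applicant):
    """Higher (closer to zero) = more similar to the incumbent profile."""
    return -math.sqrt(sum((applicant[t] - benchmark[t]) ** 2 for t in TRAITS))

lookalike = {"risk_taking": 0.75, "attention": 0.65, "planning": 0.65, "fairness": 0.55}
cautious  = {"risk_taking": 0.20, "attention": 0.90, "planning": 0.90, "fairness": 0.80}
print(fit_score(lookalike), fit_score(cautious))   # roughly 0.0 vs. -0.7
# A cautious but strong applicant is penalized simply for not resembling two
# incumbents -- nothing here checks whether the traits cause good performance.
```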

  • The passage discusses concerns with using AI-based games and assessments in hiring. It focuses on Pymetrics and Cognify as two companies that use games.

  • With Pymetrics, the narrator and Claudia Prostov felt self-conscious playing the games and uncertain what the results really said about their skills and personalities. Claudia disagreed with how Pymetrics assessed her traits.

  • Cognify takes a different approach by designing games to directly assess problem-solving and processing abilities, which research shows correlate with job performance. The games measure speed and accuracy on puzzles, math problems, and document analysis.

  • A key concern is whether game results truly assess skills relevant to the job. Sophie felt a Cognify assessment was irrelevant for the software developer role she applied to.

  • It’s important AI tools are designed and validated to only use predictive data points that are fair, unbiased, and actually measure job-relevant qualifications rather than extraneous personal factors. Cognify carefully analyzes which variables predict performance and ensures fairness.

  • Too much data or non-job related data raises risks of unfairness, while transparent interpretation of results is needed for employers to trust AI assessments. Cognify takes a responsible, psychologist-led approach to address these challenges.

  • Sophie had anxiety taking a cognitive test called Cognify during a job application process, even though she felt confident in her software engineering skills. She felt the test was not related to the job and did poorly, which depressed her for weeks.

  • Matthew Neale from the test company acknowledged it’s disappointing when candidates have a negative experience, but said the test aims to assess learning, problem-solving, and other cognitive skills relevant for software design roles.

  • Martin Burch had a similar experience taking the Plum assessment for a Bloomberg job, questioning why cognitive skills were being tested when his current job involved different skills like web scraping.

  • Experts questioned how predictive these types of abstract cognitive tests are for specific job duties, compared to previous job experience. At best they may account for 18.5% of job performance prediction.

  • There is a whole industry helping candidates prepare for these types of cognitive assessments, but tools can be calibrated in ways candidates can’t anticipate.

  • Lawyers recounted flagging adverse impacts against women in some cognitive assessments, where differences weren’t clearly job-related and alternative assessments were recommended. Accuracy and the potential for discrimination are legal risks companies consider.

  • The Human Resources Research Organization (HumRRO) published a study in December 2021 showing that AI games they developed to assess conscientiousness did not reliably predict it, despite their extensive experience building assessment tools.

  • Multiple clients of organizational psychologist Charles Handler tried Pymetrics and then abandoned it because the results did not meet their expectations.

  • David Futrell of Walmart also tested AI games but found they did not work well and were slightly negatively correlated with job performance.

  • Martin Burch was automatically rejected for a Bloomberg job based solely on his scores from a Plum assessment, without human review of other application materials. He filed a complaint and Bloomberg admitted they use these assessments for automatic rejections.

  • Studies show personality assessments via games or other methods only correlate with about 5% of job performance. Other factors are more important.
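
Reading those percentages as the share of variance in job performance explained (r squared), which is an assumption since the passage does not spell out the statistic, the implied correlations are modest:

```python
# Convert variance explained (r^2) to a correlation, under the assumption
# stated above; the framing in the underlying studies may differ.
for label, r_squared in [("cognitive tests, at best", 0.185),
                         ("personality assessments", 0.05)]:
    r = r_squared ** 0.5
    print(f"{label}: r ≈ {r:.2f}, with {1 - r_squared:.0%} "
          "of performance variation left unexplained")
```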

  • One-way video interviews assessed by AI algorithms are now common, but questions remain about how well these technologies actually predict job fit and performance.

So in summary, the article raises doubts about the predictive capabilities of AI-based games and assessments and questions their use for automatic rejection of candidates without meaningful human review. It also shares one candidate’s experience challenging his rejection on these grounds.

The person describes doing a one-way video interview with an AI system called Retorio. They find the experience weird, as there is no one on the other end to interact with. They are nervous about how they are coming across in the video responses. While reviewing their answers, they cringe at some of their responses.

The system analyzes their facial expressions, tone of voice, and words used to generate a score of their suitability. To their surprise, they score very highly. The summary generated says they are curious, deliberate, candid, compatible, and relaxed.

The person interviews another individual, Alex Huang, who had a negative experience with video interviews. Huang felt they did not allow him to show his interpersonal skills and mostly resulted in rejection without feedback. He was also unaware the videos may have been analyzed by AI rather than humans.

Overall, the person seems positive about their own experience but acknowledges many find one-way video interviews unnatural and anxiety-inducing. University career centers are helping students prepare for them but they still present challenges, especially for international and neurodiverse students. Based on the details provided, the person does not express overt negativity and seems open to both benefits and drawbacks of this evaluation method.

  • Video interviews can increase anxiety for job applicants as they don’t know how they are being assessed. They also don’t allow disclosure of any disabilities like speech impairments. This has led some students to refuse video interviews altogether.

  • In a tight labor market, employers should reconsider their overreliance on virtual assessments that don’t allow proper evaluation of cultural fit. Younger generations value mission alignment, which is hard to assess virtually.

  • Despite criticisms, use of AI-based video interview platforms like HireVue is growing rapidly. They aim to make hiring more democratic by casting a wider net for applicants.

  • HireVue uses structured interviews and AI analysis to reduce bias compared to human interviews. However, critics argue lack of transparency in AI systems makes it hard to properly evaluate fairness.

  • When the author took a HireVue practice interview, her score was only 37%, indicating a poor match. However, HireVue claimed employers still review videos, despite evidence some prioritize only top scores to reduce review time.

  • Ultimately, vendors like HireVue say the final hiring decision is up to employers, not the AI system alone, but transparency remains an issue.

  • Kevin Parker, former CEO of HireVue, said companies have autonomy over how they use the technology and it’s a personal decision.

  • However, an email from HireVue’s chief psychologist to Atlanta Public Schools showed an intention to use AI scores to filter out job applicants scoring below a 33% cutoff.

  • A former APS HR VP said they only used HireVue’s AI as a pilot and hid scores from principals to test its effectiveness without impacting hiring. However, the school district later received a DOJ inquiry about their use of HireVue.

  • Shelton Banks, CEO of a nonprofit helping underserved minorities, initially hesitated about AI but found volunteers scored interviews inconsistently. Testing HireVue’s scores, the top-scored candidates got jobs while the lowest scored did not.

  • Banks now uses HireVue scores along with his judgment to select a mix of candidates for his job training program. He also uses HireVue to track students’ interview skill improvements. While not perfect, he finds the AI provides better standardization than human raters alone.

  • HireVue is an AI-based hiring platform that assesses candidates through video interviews and provides scores on various soft skills like communication, drive, and persuasion.

  • It claims its algorithm can accurately measure traits like these by analyzing what candidates say, how they say it, and their facial expressions/tones.

  • The algorithm is trained on video interviews of past employees rated as high, middle, or low performers and looks for patterns in words, expressions, tones that correlate with performance.

  • It analyzes candidates similarly and provides scores indicating likelihood of performance levels. But success can’t be predicted with 100% certainty as interview performance doesn’t guarantee job performance.

  • Bias is a concern with AI tools. HireVue aims to reduce bias by not directly using factors like age, gender, race. But biases can still enter indirectly through proxies correlated with these factors.

  • Common bias testing like the four-fifths rule has limitations and may not catch biases against intersectional groups (a worked example of the rule appears below). Proxies like zip codes can also introduce hidden biases.

So in summary, while aiming to reduce bias, questions remain around how well these tools truly measure skills and performance without bias, given the limitations of interviews and potential for biases to enter indirectly.
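
For reference, the four-fifths rule mentioned above is usually computed like this (applicant and selection counts below are invented): compare each group’s selection rate to the highest group’s rate, and flag ratios below 0.8.

```python
# Worked example of the four-fifths (80%) rule with invented numbers.
applicants = {"group_a": 200, "group_b": 150}
selected   = {"group_a": 60,  "group_b": 30}

rates = {g: selected[g] / applicants[g] for g in applicants}   # a: 0.30, b: 0.20
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    verdict = "potential adverse impact" if ratio < 0.8 else "passes"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} -> {verdict}")
# group_b's ratio is 0.20 / 0.30 = 0.67 < 0.8, so it gets flagged. The check
# runs per group, so an intersectional subgroup (e.g. older women) can fail
# badly while each broad group passes -- the limitation noted above.
```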

  • A hiring company was using a tool that predicted job success based on whether candidates knew someone who already worked there.

  • When they checked results by racial group, they found Asian Americans were more likely to know someone at the company, while African Americans usually did not.

  • This introduced racial bias, since it was really acting as a proxy for race/ethnicity rather than job performance. The company decided not to use this criterion.

  • You can only detect hidden biases like this by closely examining results and being aware of what biases might exist. It requires continuous review and improvement.

  • HireVue’s tools analyzed facial expressions in interviews using AI. But interpreting expressions accurately is difficult and expressions may mean different things across groups.

  • They claimed expressions linked to traits like engagement could predict job success. But the high performers used to train the AI may have been biased samples not representative of all who could succeed.

  • Overreliance on matching expressions to a biased existing sample risks perpetuating biases rather than promoting diversity. Facial expressions alone may have little to do with skills or job performance.

  • More transparency is needed so results can be audited to ensure algorithms are equitable and fair for all applicants. Outcomes and reasons for decisions should be explained.

In short, the case highlights how hidden biases can influence hiring tools and the importance of carefully reviewing tools for fairness and for their potential to perpetuate historical biases. Relying solely on proxies like facial expressions risks basing decisions on signals unrelated to the job.

  • The passage discusses concerns about using facial expression analysis in job interviews, as conducted by the company HireVue. It cites research showing expressions may vary across cultures and individuals and not reliably indicate emotions.

  • An expert, Lisa Feldman Barrett, explained recent research finding little evidence for universal facial expressions of emotions. Facial movements don’t inherently translate into meaningful interpretations of emotions or thinking styles.

  • Facing public questioning of the science and complaints, HireVue first removed facial expression analysis in early 2021. They claimed it overlapped with speech analysis, though they had defended it for years.

  • Tone-of-voice analysis was also removed in 2022, with a similar justification that it provided little additional benefit. This raises questions about why these techniques were used for so long without evidence they worked.

  • The passage criticizes using unproven AI methods that could erroneously screen out job applicants. It argues for more transparent and scientifically-valid techniques like manual reviewer analysis, which HireVue now employs.

In summary, the passage outlines scientific doubts about facial/vocal emotion analysis in interviews and questions the responsible use of AI, given companies like HireVue relied on uncertain methods for a long time before removing them.

  • New AI tools are being developed that analyze facial expressions and voice tone in job interviews and video calls to assess applicants. However, some experts argue these tools are being introduced before properly establishing what problems they aim to solve.

  • Curious Thing AI and myInterview are two companies that offer automated AI phone/video interviews. They analyze speech transcripts to assess English proficiency and personality traits.

  • The author tested Curious Thing by doing interviews in German. Surprisingly, it still rated her English as “competent.”

  • With myInterview, the author set up a fake job posting and interviewed himself. The AI rated him an 83% match.

  • Experts worry these tools are being rushed to market before fully understanding their capabilities and limitations. Repeated issues show vendors don’t adequately learn from past problems. The technology is outpacing understanding of how to apply it responsibly in HR.

  • The author tests an AI-based video interview tool called myInterview by answering questions in German, Chinese, and random English phrases. Surprisingly, the tool still produced scores matching her to jobs, including transcripts that were gibberish.

  • When she shared this with the myInterview founders, they said this type of edge case testing was useful feedback to improve the tool. However, a psychology professor questioned whether voice intonation alone can provide valid or reliable data for hiring decisions.

  • Further tests using an AI-generated voice still produced high matching scores, showing the tool failed to detect there was no actual human present.

  • Remote hiring has led to more candidate fraud like pretending to be someone else or hiring others to take interviews/assessments. The FBI has warned companies about potential “deepfake” videos in hiring processes.

  • Hiring tools still have a long way to go to reliably and fairly assess candidates. Job applicants are also trying different tactics like using AI to generate answers or alter their voices to outperform these algorithms. More identity verification may be needed going forward.

  • Huang had interviewed and was hired for a job as a credit manager through a traditional in-person interview.

  • When the authors visited Huang at work two months later, they jokingly asked his boss Tony Velasquez how Huang was doing.

  • Velasquez gave Huang high praise, saying in his 30 years of experience Huang was among the top 3-5 candidates who had worked there. Velasquez said Huang had come very far very quickly and was doing very well in the role.

So in summary, Huang found the job through a traditional interview process and had made a strong positive impression in his first couple months based on feedback from his boss who was impressed by Huang’s performance.

  • In the past, physiognomy and graphology were pseudosciences used to justify discriminatory judgments of people based on attributes like race, with no scientific evidence they were accurate or meaningful. Physiognomy was used to label slaves as inferior.

  • Today, AI systems could replicate these issues if they learn patterns from biased data or “overfit” training data without meaning. Claims that AI can extract hidden truths are dubious without understanding how the systems actually work.

  • Graphology, assessing personality through handwriting analysis, was popular in French hiring despite no evidence it is valid or accurately predicts job performance. Like current AI marketing, it seemed intuitively appealing but was pseudoscience.

  • Many disabilities are invisible, so AI tools that claim to treat all applicants equally may not actually work fairly for people with disabilities. Human biases have historically disadvantaged those with disabilities in hiring. Proper testing is needed to ensure fair treatment.

  • In general, past pseudosciences caution that new assessment methods require rigorous scientific validation before using in high-stakes decisions like hiring, to avoid potential discrimination and harm. Intuitive appeals are not evidence of accuracy or fairness.

  • The unemployment rate for people with disabilities is roughly double that of people without disabilities.

  • AI-based assessment tools used in hiring often take a one-size-fits-all approach, which can perpetuate biases against people with disabilities rather than overcome them.

  • Henry Claypool, who has a disability himself, tested out some AI hiring games and tools to evaluate potential pitfalls for people with disabilities.

  • He expressed concerns about having to disclose a disability early in the process for accommodations, fearing it could disadvantage or disqualify his application.

  • Playing the games caused him stress and anxiety as someone with slower cognitive processing. He questioned how well the tasks measured skills actually relevant for jobs.

  • Others with disabilities echoed concerns that AI tools could inaccurately assess them based on traits outside their control, like facial expressions or motor skills.

  • There is a lack of transparency around how AI makes decisions. Experts call for better regulations from the EEOC and for companies to rethink overreliance on AI tools without human judgment.

Sophie Powell and Patti Sanchez are vocational counselors who help people with disabilities, especially those who are deaf or hard of hearing, find employment. However, they have faced significant challenges with AI and digital tools used in the hiring process.

Job applications and assessments on sites like Indeed are often in English and difficult for deaf clients to understand due to language and cultural barriers. The assessments are also typically timed, leaving too little time for Powell to translate and communicate with clients.

Video interviews like those on HireVue do not work at all for clients who do not speak. When Powell and Sanchez have requested accommodations, they never received responses from employers.

Even closed captions can cause issues if they run too fast. Personality tests sometimes include hundreds of questions that take too long to translate back and forth. Powell and Sanchez feel the tests are often irrelevant to the jobs their clients are applying for.

As a result, the counselors have sometimes just completed the assessments themselves because the process is such an obstacle. They want clients to be able to meet with actual hiring managers in person instead of relying on AI tools that can’t understand disabilities. The experience highlights ongoing challenges for people with disabilities in navigating AI systems used in hiring.

  • Sophie Powell, a deaf job applicant, was rejected for a warehouse position after only a brief phone call with HR. The manager cited safety concerns but Powell argued a light could serve as a reasonable accommodation under the ADA.

  • Powell believes companies do not truly follow the ADA’s requirements, seeing it as more of an optional law. The law is intended to ensure fair hiring and access to opportunity.

  • Critics argue AI hiring tools could disadvantage people with disabilities in unfair ways. Games may screen out people with motor impairments, and a lack of disability representation in training data means tools likely perpetuate existing exclusion.

  • Individual accommodation discussions are needed but tools frontload rejections, preventing many from reaching human interviews. Critics advocate routing some applicants to humans to incentivize considering accommodations.

  • Experts report employers show a careless, unvalidated approach to tools, prioritizing innovation over potential discrimination. More scrutiny is needed given tools’ scale and how biases can be hidden. Just because technology is possible does not mean it should be used without consideration of consequences.

  • Companies need to do more than just follow anti-discrimination laws when it comes to AI systems - they should actively work to ensure the tools are accessible, tested on people with disabilities, and not biased or discriminatory.

  • Strict productivity algorithms used in warehouses like Amazon can lead to exhaustion, injuries or disabilities if they don’t accommodate people who need more breaks. This raises legal and ethical issues.

  • The EEOC commissioner acknowledges AI could unintentionally discriminate worse than traditional methods. The commission addresses intentional and unintentional discrimination.

  • Unintentional discrimination cases are guided by a 1971 Supreme Court ruling that found tests unfairly passing different racial groups at different rates are illegal unless proven job-relevant.

  • Problems include AI tools systematically downgrading women or certain groups. Facially neutral factors like zip codes could also unintentionally discriminate.

  • Unlike past tests, people often don’t know they are being assessed by AI or on what criteria, making discrimination harder to detect and address. This particularly harms people with disabilities.

  • There is a lack of transparency in how AI tools analyze and score people that raises legal and ethical concerns regarding discrimination and accommodating disabilities.

  • The EEOC had initially been hesitant to investigate AI tools used in hiring, noting they hadn’t received any complaints specifically about AI discrimination.

  • Advocates pushed the EEOC to take a more proactive role, as it’s difficult for applicants to prove AI discrimination.

  • In 2021, after MIT Technology Review stories called out the EEOC, they announced an AI task force to look into the issue.

  • In 2022, the EEOC and DOJ finally released guidance on AI and hiring, though it is guidance rather than an enforcement action. They said they will monitor companies for discrimination cases.

  • Issues identified include tools not working well for people with disabilities, like speech impairments, and games testing motor skills not relevant to jobs.

  • Changing laws may be needed to better address AI discrimination, as evidence is hard to obtain from “black box” tools.

So in summary, the EEOC moved from hesitant to starting to investigate AI tools, though more action is still needed like potential legal changes to address issues unique to AI assessments.

  • The EEOC and DOJ released guidance warning that AI and other technologies used by employers could disadvantage job applicants and employees with disabilities, potentially violating anti-discrimination laws.

  • The guidance took a broad view of what constitutes discrimination, stating that rejecting a candidate with a disability who could do the job with accommodation could be considered discriminatory.

  • The guidance warned against using AI tools that assess abilities like writing based on comparing applicants’ personalities to current employees, as this could discriminate against people with uncommon personalities due to disabilities.

  • Critics want more transparency from vendors about how AI tools assess candidates. They argue proprietary claims prevent proper scrutiny.

  • Independent audits of vendors like Pymetrics and HireVue raised conflicts of interest concerns, as the vendors paid for and promoted the audits. The audits also had limitations in scrutinizing algorithms and data.

  • Experts call for government testing of tools before use to ensure compliance, similar to pre-approval processes for new medications. This would replace reliance on vendor-paid private audits.

  • AI tools are increasingly being used by companies for predictive analytics to evaluate and monitor employees, in addition to using AI for hiring.

  • Vendors like Eightfold, Gloat, and Fuel50 provide platforms for internal talent mobility and skills-based matching of employees to new roles within a company.

  • Eightfold’s AI tool in particular analyzes data on employees’ skills and predicts their potential to acquire new skills and take on different roles in the future. It can infer additional skills beyond what individuals list on their profiles.

  • Vodafone uses Eightfold to help transition the company from telecom to technology and identify “hidden gems” - employees with untapped potential. Their goal is to upskill and reskill many employees to fill new technical roles as the company transforms.

  • Supporters argue these tools can increase diversity and internal mobility. However, there are also privacy and bias concerns since the tools extensively monitor and analyze employees. More regulation may be needed to ensure such predictive analytics do not harm workers.

  • Eightfold’s AI tool uses skills data from employees’ resumes and profiles to match them to new roles within their company. It shows the skills required for a role and which employees currently have those skills or could develop them quickly.
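
A bare-bones version of that kind of matching (hypothetical skill lists; Eightfold’s real system infers skills and adjacencies with far more machinery) is just the overlap between a role’s required skills and each employee’s known skills, plus the gap list a manager would see:

```python
# Minimal skills-overlap matcher: score = fraction of the role's required
# skills an employee already has, plus a list of what they'd need to learn.
# Role and employee data are invented for illustration.
role_requirements = {"python", "sql", "data modelling", "cloud"}

employees = {
    "dana": {"python", "sql", "excel"},
    "lee": {"java", "cloud", "sql", "data modelling"},
    "sam": {"retail operations", "scheduling"},
}

def match(required, skills):
    coverage = len(required & skills) / len(required)
    gaps = sorted(required - skills)
    return coverage, gaps

for name, skills in employees.items():
    coverage, gaps = match(role_requirements, skills)
    print(f"{name}: {coverage:.0%} of required skills, would need {gaps}")
# Real platforms also try to predict which gaps are quick to close -- and that
# is where career-history assumptions, and the biases they carry, come back in.
```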

  • This helps managers look internally first and potentially lower requirements rather than seeking external hires. It can also suggest skill transfers between roles and industries.

  • Seeing the impact on the talent pipeline provides transparency for managers to make informed decisions. They often opt to focus on core skills rather than extensive experience requirements.

  • Some concerns are that the AI may penalize people who don’t follow typical career trajectories, for example due to caregiving responsibilities. Eightfold says it provides the same information to employees for transparency.

  • Adoption by employees is high because the tool provides clear career guidance and pathways. But the data it collects on employees’ skill development and career progressions could also be used to flag those deemed “slow.”

  • Critics argue skills data from resumes is a weak predictor of actual job performance and outcomes. Résumés don’t capture skill proficiency or true job success over many years.

  • Critics argue that skills matching technologies like Eightfold reduce people to just a few keywords on their resume and don’t account for true potential. However, with improvements, skills matching could help employees advance their careers.

  • Some companies like Unilever and Vodafone use internal talent marketplaces matched by AI to help employees learn new skills and find new projects/roles within the company, saving on external hiring costs. Unilever’s system matched 300,000 employee hours to new projects.

  • People data from benefits, reviews, emails, calendars etc. is used by vendors to predict employee flight risk, burnout risk, and identify “quiet quitters”. Predictive analytics was in high demand from companies in late 2022 as resignations remained high.

  • PepsiCo uses Visier’s predictive analytics tools to analyze people data and identify potential issues like attrition risk, gender gaps, and signs of quiet quitting. Accurate data is important for meaningful insights.

  • Visier monitors various data signals like emails, surveys and business metrics to predict outcomes like employee commitment, performance and retention. The more granular individual data, the better the predictions.

  • Visier’s tools help managers benchmark teams, identify potential connections between employees, and detect changes in behavior that may indicate burnout risk over time. The goal is to help companies make better people-related decisions to improve outcomes.

  • Various hospital groups and finance departments used employee and absence data provided by Visier’s people analytics software to better allocate resources and determine optimal staffing levels. One hospital was able to improve processes and reduce absences without spending more money.

  • Visier’s software aims to predict employee flight risk/turnover by analyzing patterns in past leavers and assigning scores based on factors like engagement, absenteeism, performance reviews, tenure, commute time, LinkedIn activity. However, accurately predicting individual behavior is difficult.

  • At Cushman & Wakefield, people analytics leader Enpei Lam built a model using over 130 employee attributes. It was 80-90% accurate at broadly predicting groups leaving but less than 50% for individuals. The top factors connected to leaving were lack of internal mobility, high meeting loads, pay issues, performance, manager tenure.
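
A stripped-down sketch of such a flight-risk model (synthetic data and invented coefficients; the actual model reportedly used more than 130 attributes) shows how per-employee scores are produced and why aggregate predictions can look reasonable while many individual calls are wrong:

```python
# Synthetic flight-risk sketch (not Visier's or Cushman & Wakefield's model):
# a classifier over a few of the factors the passage lists.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
# Features: months since last internal move, weekly meeting hours,
# pay relative to market (1.0 = at market), last review score (1-5).
X = np.column_stack([
    rng.integers(0, 60, n),
    rng.normal(15, 5, n),
    rng.normal(1.0, 0.1, n),
    rng.integers(1, 6, n),
])
# Simulated ground truth loosely tied to the same factors, plus lots of
# noise -- which is why individual predictions stay unreliable.
signal = 0.03 * X[:, 0] + 0.05 * X[:, 1] - 3.0 * X[:, 2] - 0.3 * X[:, 3]
left = (signal + rng.normal(0, 1.5, n) > np.median(signal)).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, left)
probs = model.predict_proba(X)[:, 1]
print("predicted leavers:", int(probs.round().sum()), "actual leavers:", int(left.sum()))
print("share of individuals called correctly:", (probs.round() == left).mean())
# Aggregate counts tend to land in the right ballpark while a large share of
# individual calls are wrong -- the "accurate for groups, not for individuals"
# pattern described above.
```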

  • While flight risk scores aim to inform retention efforts, there are also risks if managers change behaviors towards perceived flight risks or if scores are inaccurate or biased. Additionally, important external factors are often excluded from analyses.

  • Overall, people analytics can provide useful insights when contextualized but overreliance on predictive modeling poses risks if nuance is lost and decisions impact individuals unfairly. The tools supplement but do not replace human judgment.

  • Emily Smith worked as a medical coder for an insurance company remotely. She faced intense surveillance to track her productivity, including monitoring how many charts she reviewed per hour, tracking her computer activity like idle time, websites visited, and flagging unauthorized sites.

  • There were daily reports ranking employees’ productivity that made her feel demeaned. She had to constantly move her mouse or type even while working to avoid being flagged as idle.

  • The monitoring caused her stress, like being afraid to take bathroom breaks. She had to calculate limited unpaid time off when a family member passed away.

  • After over a year, she found a new job without surveillance where she self-reports hours and has supportive management.

  • Companies are increasingly using surveillance tools to monitor remote employees like keystrokes, websites, productivity levels. Eight of the top 10 US employers track individual worker metrics in real-time.

  • While surveillance may boost productivity short-term, experts warn it can negatively impact health and well-being if used to excessively monitor breaks or work pace. Microsoft also patented tracking after-hours work.

  • The benefits of surveillance for employers need to be weighed against potential downsides for employee satisfaction, well-being and trust in management.

  • Tara Behrend and her team studied 76 research studies and found that increased employee monitoring and surveillance does not actually improve performance. Instead, it increases stress, negative attitudes, and burnout. More intense monitoring leads to even lower performance.

  • Major tech companies like Microsoft acknowledge in internal reports that closely tracking productivity is counterproductive and undermines trust between employers and employees. It leads workers to engage in “productivity theater” like unnecessary meetings rather than actual work.

  • Microsoft originally offered a “productivity score” tool but removed individual scoring after public backlash, instead reporting adoption metrics at the organizational level only. Other tools like Zoom’s attention tracking also faced backlash.

  • However, Microsoft tools still allow granular monitoring of individuals if companies activate those settings, as demonstrated in online videos. While Microsoft advises against close surveillance, the tools enable it.

  • In summary, research shows increased monitoring hurts performance, but tech companies continue developing and selling surveillance capabilities even while advising against their use internally. Close tracking remains enabled in their products if companies choose to implement it.

  • Microsoft advocates a “zero trust” approach where companies consider all workers a potential insider risk and monitor communications for threats like harassment, profanity, discrimination.

  • However, there is no data backing the claim that things like venting due to burnout directly lead to a toxic workplace or harassment. Correlation does not equal causation.

  • Monitoring employees raises ethical concerns, but Microsoft does not seem to limit how employers use these tools. Data may not truly be anonymous and safeguards have loopholes.

  • Productivity/performance metrics based on monitoring are questionable and difficult to define for knowledge work. Past attempts using algorithms to evaluate teachers faced many issues and lawsuits.

  • Data collection risks mission creep where it gets misused, like trying to identify “low performers” for layoffs.

  • Surveillance could interfere with workers’ rights to unionize, and the NLRB general counsel has signaled plans to challenge practices that do so.

  • In summary, while framed as protecting organizations, unchecked employee monitoring raises serious ethical concerns and risks negatively impacting workers.

  • Some employers closely monitor their employees’ physical health and activities using devices like fitness trackers, which track metrics like steps, workouts, sleep patterns. This data is sometimes shared with health insurers.

  • Regal Plastics, a company profiled, encourages employees to use fitness trackers and shares employees’ health data on monitors in offices to foster friendly competition. However, once collected, this data could potentially be misused, such as to make layoff decisions.

  • Predictive algorithms using employee benefits data can reveal personal details like divorces, disabilities, or impending pregnancies. This data is often collected and shared without employee consent.

  • While presented as helping employees optimize their health and well-being, extensive health monitoring at work raises privacy and discrimination concerns. Data collected for one purpose could potentially be used against employees in other ways.

  • The NLRB chair believes extensive workplace surveillance could chill union organizing efforts by exposing such activities. She wants employers to disclose monitoring practices to address these concerns. However, implementing new regulations would likely face legal challenges.

  • Alight is a company that provides wellness and benefits software to large employers. Their software analyzes employee data from various sources like health insurance, 401k, HR systems, etc. to generate personalized recommendations and nudges.

  • In a demo, Alight showed how they identified a fictional employee “Aiden” was going through a divorce and financial struggles based on drops in insurance and 401k loans. They recommended mental health and financial support.

  • They also profiled “Ellen” as a new parent based on knee surgery, MRI and adding a baby to insurance. They offered childcare support and recommended doctors.

  • The author expressed privacy concerns over employers and vendors having so much sensitive personal health data. Alight gets data from insurance carriers and other sources.

  • While Alight said recommendations prioritize quality and cost, the author noted the demo instead emphasized suggesting the “best” doctors. There is a lack of transparency over how recommendations are generated.

  • The extensive data collection and profiling by third parties like Alight without explicit consent goes against expectations of privacy. The author remains unconvinced such an invasive system is actually beneficial for employees.

  • Some large tech companies like Google directly offer mental health services to employees in addition to health insurance benefits to get help quickly without co-pays. However, this has created conflicts of interest, as demonstrated by a lawsuit filed by former Google employee Chelsey Glasson.

  • Glasson felt the company-provided therapist abandoned her when she sued Google. She believes employers providing these services directly impacts transparency and creates risks for victims of workplace misconduct who complain.

  • Some companies are now questioning how effective corporate wellness programs really are and trying new approaches like tracking benefits through data sharing between employers. However, privacy and ethical issues are not part of these new initiatives.

  • Some workplaces monitor employee safety and performance more directly through brainwave tracking tools. This includes caps that detect fatigue in truck drivers and miners, as well as headbands used on factory workers and students to track focus and attention.

  • Emerging vocal biomarker technology aims to detect mental health issues from voice analysis, with potential applications in remote work formats, classrooms, and with smart home devices. One startup partnered with a college to test this during the pandemic. While it could help address mental health needs, it also raises privacy concerns.

  • Lina Lakoczky-Torres is a wellness representative at Menlo College who struggled during the pandemic when her therapy sessions stopped once she returned home to Las Vegas. Providing mental health support to students remotely was challenging.

  • About half of Menlo College students are from out of state and lost access to on-campus mental health services during the pandemic, increasing mental health needs.

  • Ellipsis Health approached Menlo College with an AI tool that could assess anxiety and depression levels through daily voice messages from students. The college agreed to let students use it for free.

  • Lakoczky-Torres used the Ellipsis app and found it asked questions about home life, school, and how she was feeling.

  • Vocal biomarker companies claim analyzing speech patterns can objectively monitor mental health over time and eventually diagnose issues, but the technology is still new.

  • There are concerns about privacy, lack of consent for data collection, and potential biases in assessments. More validation studies are needed before clinical use.

  • While AI systems analyze multiple vocal attributes, one short recording may not be enough for an accurate assessment on its own. The technology is currently best for monitoring changes over time, not singular diagnoses.

  • Companies like Ellipsis Health and Sonde Health are developing technology to analyze vocal biomarkers in speech to assess mental health and other medical conditions.

  • Users can download an app and record themselves speaking for 30 seconds. The app then provides a score indicating their state of mental well-being and recommends exercises or hotlines as needed.
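
To make “analyzing speech patterns” slightly less abstract, here is a deliberately crude sketch of the kind of low-level features such systems might start from (pause ratio, loudness variability), computed with numpy on a synthetic waveform. Real vocal-biomarker models use far richer features and trained classifiers, and whether those generalize clinically is exactly what the passage questions.

```python
# Crude, illustrative acoustic features from a raw waveform (numpy only).
import numpy as np

def speech_features(waveform, sample_rate=16_000, frame_ms=30):
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(waveform) // frame_len
    frames = waveform[: n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt((frames ** 2).mean(axis=1))          # loudness per frame
    silence_threshold = 0.1 * rms.max()
    return {
        "pause_ratio": float((rms < silence_threshold).mean()),
        "loudness_variability": float(rms.std() / (rms.mean() + 1e-9)),
    }

# Synthetic 5-second "recording": bursts of a tone separated by silences.
sr = 16_000
t = np.arange(sr * 5) / sr
recording = np.sin(2 * np.pi * 220 * t) * (np.sin(2 * np.pi * 0.7 * t) > 0)
print(speech_features(recording, sr))
# Whether numbers like these track depression or anxiety is an empirical
# question that still needs validation, as the bullets below note.
```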

  • A pilot program using the Ellipsis app at Menlo College found that some students found it therapeutic just to talk, even if it felt weird talking to no one. The results were mixed, with some students only using it once while others used it more regularly.

  • Privacy was not a major concern for students. One noted that privacy doesn’t really exist anymore and sharing feelings on social media is common.

  • However, the science of using vocal biomarkers for medical diagnosis is still emerging. More rigorous testing is needed, since a faulty diagnosis could have serious consequences, and factors like voice-modulating AI present additional challenges.

  • Experts think vocal biomarkers have potential if developed properly, but diagnosis of many conditions from voice alone is still difficult and limited in accuracy compared to in-person exams.
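
The bullets above do not describe Ellipsis Health’s or Sonde Health’s actual models. Purely as an illustration of what a vocal-biomarker pipeline of this general shape can look like, here is a minimal Python sketch: extract a few acoustic features from a short recording, map them to a 0-100 well-being score with a stand-in for a trained model, and turn that score into a recommendation. Every feature, weight, and threshold below is invented.

```python
# Illustrative sketch only: NOT Ellipsis Health's or Sonde Health's actual model.
# Features, weights, and thresholds are made up for demonstration.
import numpy as np
import librosa

SR = 16000  # assume a 16 kHz, roughly 30-second voice recording


def extract_features(y: np.ndarray, sr: int = SR) -> np.ndarray:
    """Summarise a recording as a small acoustic feature vector."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # timbre-related coefficients
    rms = librosa.feature.rms(y=y)                      # loudness / energy
    zcr = librosa.feature.zero_crossing_rate(y)         # rough noisiness proxy
    return np.concatenate([mfcc.mean(axis=1), [rms.mean()], [zcr.mean()]])


def wellbeing_score(features: np.ndarray) -> float:
    """Stand-in for a trained model: squash a weighted feature sum into 0-100."""
    weights = np.random.default_rng(42).normal(size=features.shape[0])  # placeholder weights
    raw = float(weights @ (features / (np.abs(features).max() + 1e-9)))
    return 100.0 / (1.0 + np.exp(-raw))


def recommend(score: float) -> str:
    """Map the score to the kind of follow-up the summary describes."""
    if score < 30:
        return "share crisis hotline and suggest contacting a counselor"
    if score < 60:
        return "suggest guided breathing or journaling exercises"
    return "no action; keep checking in"


if __name__ == "__main__":
    # Synthetic 30-second "recording" so the sketch runs without an audio file.
    y = np.random.default_rng(0).normal(scale=0.1, size=SR * 30).astype(np.float32)
    score = wellbeing_score(extract_features(y))
    print(f"score={score:.1f} -> {recommend(score)}")
```

A real system would replace the placeholder scoring function with a model trained and validated against clinical labels, which is exactly the validation work the experts quoted above say is still missing.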

  • A company used HireVue video interviews and previous performance metrics to make layoff decisions during the COVID-19 pandemic.

  • Lizzie, a makeup artist, was laid off allegedly due to low scores on her HireVue assessment.

  • With help from pro bono lawyers, Lizzie challenged the decision and reviewed the data the employer had on her.

  • She found that the HireVue results placed her score in the 0-33 range out of 100, but the rating and recommendation sections were blank, and the status field read “further review.”

  • Lizzie believes this meant a technical error occurred and the results needed a second look. Her managers may have misinterpreted it.

  • Seeing this confirmed Lizzie’s sense that something wasn’t right with how her assessment was handled. She believes the decision to lay her off was unfair.

  • The story raises questions about relying solely on algorithms and metrics to make employment decisions, particularly around layoffs. Technical issues or biases could negatively impact workers.

  • HR departments are increasingly using AI tools to help make decisions around layoffs and terminations. A survey found that 98% of HR leaders said they would rely at least in part on algorithms and software when making layoff decisions in 2023.

  • However, many HR managers don’t fully trust these algorithms. Only 50% were completely confident algorithms would make unbiased recommendations.

  • Experts warn companies need to exercise caution when using AI for layoffs and be careful of potential biases in data or tools. Proper oversight and understanding of tools is important to avoid bad outcomes.

  • It’s unclear whether companies have processes to incorporate qualitative information, such as exemplary performance or helping colleagues, that may not be captured in data systems. This kind of context could provide important perspective on employees.

In short, AI is taking a bigger role in termination decisions even though the tools may not be fully trusted or understood, which underscores the need for human oversight and consideration of all relevant factors in an employee’s performance and contributions.

  • Cornell Causey, a former military contractor who became an Amazon warehouse supervisor, saw firsthand how the company uses algorithms and technology to track, reprimand and terminate workers.

  • For warehouse employees such as pickers, metrics like work rate and packages scanned are closely monitored digitally. Workers face quotas on the order of 300-350 tasks per hour.

  • If workers fall below quotas, they get automatic warnings from the system. Continued low performance leads to automatic termination even without a human supervisor deciding.

  • Claire Grove and Vicky Graham were both Amazon Flex delivery drivers who believe they were fired by automated systems after technical issues led to problems with their work records.

  • They had no ability to speak to humans at Amazon to appeal and don’t know the exact reasons for their terminations. The automated nature of Amazon’s systems left them feeling helpless.

  • Amazon claims humans make all termination decisions but former insiders like Causey saw how algorithms are used to monitor workers and initiate warnings and firings without direct human oversight.

In summary, former Amazon employees describe how the company closely tracks workers digitally and uses algorithms to enforce quotas, issue warnings and recommend firings, with terminations sometimes delivered via automated emails and no way to appeal to an actual human.

  • Amazon warehouse workers are monitored by algorithms that track their productivity and set rate quotas they must meet each hour (known as “make rate”). If workers do not meet the rate, they can receive warnings or eventually be fired.

  • The rules and quotas are designed to be one-size-fits-all and do not account for individual circumstances like health issues or temporary accommodation needs. This has raised concerns about fairness.

  • Managers have little discretion and must follow what the algorithms recommend, even if they feel a warning or firing is unjust in a particular case. It is difficult to override the algorithm.

  • Workers have little visibility into how they are being monitored and assessed, and little ability to challenge algorithmic decisions. This level of algorithmic control and monitoring concerns some advocates.

  • Both productivity rates and “time off task” (TOT) are tracked, and workers can be terminated for accumulating too much TOT, even when it is caused by equipment issues outside their control (a sketch of this kind of rule-based escalation follows this set of bullets).

  • Constant surveillance via algorithms is also used as evidence in some firings for policy violations discovered through computer logs.

  • However, Amazon states it does not have fixed quotas and considers tenure, peer performance, and safety in assessing expectations.
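
Amazon’s internal systems are not public, and as the last bullet notes, the company disputes parts of this characterization. The following is therefore only a hypothetical sketch of the kind of rule-based rate/TOT escalation former workers describe; all thresholds, field names, and policies are invented for illustration.

```python
# Hypothetical sketch of rate/"time off task" escalation logic as described by
# former workers. All thresholds, field names, and policies are invented.
from dataclasses import dataclass, field

RATE_QUOTA = 300                  # assumed tasks per hour ("make rate")
TOT_LIMIT_MIN = 30                # assumed daily "time off task" allowance, in minutes
WARNINGS_BEFORE_TERMINATION = 3   # assumed escalation threshold


@dataclass
class WorkerRecord:
    worker_id: str
    warnings: int = 0
    flagged_for_termination: bool = False
    log: list = field(default_factory=list)


def evaluate_hour(rec: WorkerRecord, tasks_done: int, tot_minutes: float) -> None:
    """Apply the quota/TOT rules for one hour of work and escalate automatically."""
    if tasks_done < RATE_QUOTA:
        rec.warnings += 1
        rec.log.append(f"auto-warning: rate {tasks_done}/{RATE_QUOTA}")
    if tot_minutes > TOT_LIMIT_MIN:
        rec.warnings += 1
        rec.log.append(f"auto-warning: time off task {tot_minutes:.0f} min")
    if rec.warnings >= WARNINGS_BEFORE_TERMINATION:
        # The critique in the bullets above: this step can fire with no human
        # review and no way for the worker to explain equipment failures,
        # health issues, or other context the system never captures.
        rec.flagged_for_termination = True
        rec.log.append("auto-generated termination notice")


if __name__ == "__main__":
    rec = WorkerRecord("picker-001")
    for tasks, tot in [(280, 10), (310, 45), (250, 5)]:
        evaluate_hour(rec, tasks, tot)
    print(rec.flagged_for_termination, rec.log)
```

The workers’ core complaint maps onto the last branch: in the systems they describe, nothing between the warning counter and the termination flag requires a human to review the context.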

  • Modern Hire was bought by HireVue in May 2023. Eric Sydell, a co-founder of Modern Hire, left the company after the acquisition.

  • Sydell is skeptical of how AI hiring tools are being used and questions whether companies are properly vetting and validating these tools. He notes it’s difficult for companies to properly evaluate complex algorithms.

  • Sydell argues companies need to continually monitor AI tools to ensure they remain fair, unbiased and effective over time as conditions change. He is starting an auditing company to provide these services.

  • Other experts interviewed expressed similar concerns that AI hiring tools often don’t actually predict job success well and vendors can’t be fully trusted to self-audit. Employers need to do their own validation for specific roles.

  • The future could see more personal data being used by algorithms to continuously evaluate people’s career potential without their consent or knowledge. This threatens privacy and could unfairly disadvantage people. Independent oversight is needed.

  • Sydell wants companies to ask tough questions of vendors and continually monitor tools, rather than take vendor claims at face value. But proper evaluation of complex algorithms remains a challenge without independent experts.

The passage discusses the limitations of using AI and personality tests to predict job success and social outcomes. It argues that predicting the future is inherently difficult due to the many complex factors involved in people’s lives and careers.

As evidence, it outlines a large study where researchers were unable to accurately predict various social outcomes for families, even when using massive datasets and advanced machine learning algorithms. The best predictions remained quite weak, with accuracy rates under 30%.

Traditional predictive models that used only four variables performed almost as well as the best AI methods. This suggests AI may not offer meaningful improvements over traditional approaches for predicting social and job outcomes.
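
The summary does not name the study, but the pattern it describes is easy to reproduce in miniature: when only a few variables carry real signal and the outcome is noisy, a simple model using those variables performs about as well as a far more complex model fed many more features. The sketch below uses synthetic data, not the study’s data.

```python
# Toy reproduction of the pattern described above: when only a few variables
# carry real signal, a simple model using them performs about as well as a
# complex model fed many more features. Synthetic data; not the study's data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 4000
X_signal = rng.normal(size=(n, 4))    # four informative variables
X_noise = rng.normal(size=(n, 96))    # ninety-six irrelevant ones
logits = X_signal @ np.array([0.8, -0.6, 0.5, 0.4]) + rng.normal(scale=2.0, size=n)
y = (logits > 0).astype(int)          # noisy binary "outcome"

simple = LogisticRegression(max_iter=1000)
complex_model = GradientBoostingClassifier(random_state=0)

simple_acc = cross_val_score(simple, X_signal, y, cv=5).mean()
complex_acc = cross_val_score(complex_model, np.hstack([X_signal, X_noise]), y, cv=5).mean()

print(f"4-variable logistic regression: {simple_acc:.3f}")
print(f"100-feature gradient boosting:  {complex_acc:.3f}")
# Typically the two scores land within a couple of points of each other,
# and neither gets close to perfect because the outcome is mostly noise.
```

The point is not that complex models are useless, but that added complexity cannot conjure signal the data does not contain, which is the heart of the argument against over-trusting predictive hiring scores.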

Given these limitations, the passage questions whether companies should really have so much trust in AI hiring tools. It argues we need more regulation and transparency to properly assess these technologies and ensure they don’t negatively impact people’s careers or autonomy.

  • A study found that traditional statistical methods like regression analysis performed as well as, or better than, AI/machine learning models at predicting outcomes while requiring much less data. However, vendors continue pushing for more data and more complex AI approaches.

  • Opaque algorithms and lack of explainability make it hard to verify that models aren’t introducing biases or ensuring fair outcomes, especially for protected attributes like race. Auditors cannot reproduce or validate results.

  • Systems that continuously track people’s data over time to make high-stakes decisions, like driving privileges, lack transparency and accountability. Similar opaque systems are used in hiring.

  • Traditional methods are more transparent and raise fewer privacy/legal issues while potentially reducing discrimination. But commercial interests incentivize vendors to push “elaborate random number generators” disguised as effective AI tools.

  • Hiring is a difficult domain for AI as the targets like “good employee” are fuzzy and hard to measure. Studies show AI does no better than chance or manual methods. Transparency is needed to validate claims of effectiveness.

  • Randomized controlled trials, like in medicine, should become standard to properly evaluate these systems by comparing outcomes over time between those labeled as “high/low potential” hires by the models.

  • Increased regulation, auditing requirements and testing of impact are recommended to provide needed oversight of these high-stakes systems.
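
The bullets above call for auditing and impact testing without spelling out specific tests. One widely used check in U.S. employment settings, not specific to this book, is the EEOC’s four-fifths rule: a group’s selection rate should be at least 80 percent of the highest group’s rate. A minimal version an auditor might run over a tool’s outputs could look like this (all group labels and counts are made up):

```python
# Minimal adverse-impact ("four-fifths rule") check an auditor might run on a
# hiring tool's outputs. Group labels and counts below are made up.
from collections import Counter


def selection_rates(records):
    """records: iterable of (group, was_selected) pairs -> {group: selection rate}."""
    applied, selected = Counter(), Counter()
    for group, was_selected in records:
        applied[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / applied[g] for g in applied}


def adverse_impact(records, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold` times the
    best group's rate (the EEOC's four-fifths guideline)."""
    rates = selection_rates(records)
    best = max(rates.values())
    return {g: {"rate": r, "ratio": r / best, "flagged": r / best < threshold}
            for g, r in rates.items()}


if __name__ == "__main__":
    # Hypothetical screening outcomes from an AI resume screener.
    outcomes = [("group_a", True)] * 60 + [("group_a", False)] * 40 \
             + [("group_b", True)] * 35 + [("group_b", False)] * 65
    for group, stats in adverse_impact(outcomes).items():
        print(group, stats)
```

Passing such a check is necessary but nowhere near sufficient; as the bullets above note, auditors also need enough access to reproduce and validate the scores themselves, which opaque vendors often do not provide.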

  • The passage discusses concerns about AI tools and neuroscience being used in hiring and at work in invasive ways that threaten privacy and autonomy.

  • Two vendors that hired their own third-party auditors to review their AI tools still had controversial outcomes, showing the conflicts of interest inherent in self-commissioned audits. Past crises like the 2008 financial crisis also involved self-auditing that missed serious problems.

  • An effective system would have independent parties or the government set standards for transparency, bias testing, and effectiveness testing before tools are used. This could help prevent discrimination and allow researchers to independently validate the tools.

  • If governments don’t regulate, non-profits could help test and build public-interest AI. Employee input on technology use could also help, similar to workers’ councils in Europe.

  • Advancing neuroscience raises concerns that thoughts could be read, which has huge workplace and privacy implications. Regulations are needed to prevent abusive use of these technologies.

  • Predictive algorithms and thought-reading tools challenge the idea of individual autonomy and risk unfairly limiting opportunities. More human-centered approaches to hiring and work are needed.

  • Recruiting and hiring processes have become highly automated with AI and algorithms playing a major role in screening candidates and deciding who gets interviews. Companies like HireVue use video interviews and AI analysis to assess candidates.

  • Applicant tracking systems (ATSs) are used by 99% of Fortune 500 companies to filter candidates. IBM claims its AI can predict which employees will quit with 95% accuracy. Google received over 3 million job applications in 2019.

  • Recruiting is a $200 billion industry and firms like Google are disrupting it with tools like Google for Jobs that uses AI to match candidates to openings.

  • Companies are also using AI to track existing employees, with some monitoring productivity, sentiment in messages, and external social signals to determine flight risks (a toy sketch of such a model follows this list).

  • This level of automation and tracking raises concerns about bias, fairness, lack of transparency, and the overlooking of human qualities that are difficult for AI to assess but important for many roles. Overall, the use of AI in hiring and employment is a profound societal shift that merits scrutiny and safeguards.
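
Neither IBM’s attrition model nor any vendor’s flight-risk tracker is public, and the 95% figure above is the company’s own claim. Purely to make concrete what “predicting who will quit” means in practice, here is a toy model on synthetic data; every feature name and relationship is invented.

```python
# Toy "flight risk" classifier on synthetic data. This is NOT IBM's system;
# feature names and relationships are invented to show the general shape.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 5000
tenure_years = rng.exponential(3.0, n)
pay_vs_market = rng.normal(0.0, 0.1, n)    # fraction above/below market pay
msgs_sentiment = rng.normal(0.0, 1.0, n)   # e.g. internal-message sentiment score
recent_promotion = rng.integers(0, 2, n)

# Synthetic ground truth: quitting is more likely with low pay, low sentiment,
# short tenure, and no recent promotion (plus plenty of noise).
risk = -0.3 * tenure_years - 4.0 * pay_vs_market - 0.5 * msgs_sentiment \
       - 0.8 * recent_promotion + rng.normal(0, 1.5, n)
quit = (risk > np.quantile(risk, 0.8)).astype(int)   # top 20% leave

X = np.column_stack([tenure_years, pay_vs_market, msgs_sentiment, recent_promotion])
X_tr, X_te, y_tr, y_te = train_test_split(X, quit, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))
```

Even when such a model scores well on its own synthetic benchmark, the concern raised throughout the book stands: the inputs (message sentiment, social signals, productivity traces) come from surveillance that employees rarely consented to have scored.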

#book-summary