Science fiction has been trying to warn us for years that we are heading down a dangerous path. 2001: A Space Odyssey, The Matrix, I, Robot, The Terminator, WALL-E... and the list goes on and on and on. Our culture is full of stories where tech is not always working in our best interest. To be fair, it is not the technology’s fault – humans created the technology, programmed the algorithms (or AI), and set it loose on the world. They just failed to see the consequences of their actions until it was too late.
But, this is all science fiction, not reality… right?
Gunpowder was first invented in China around the 9th century for use in fireworks – a few centuries later, someone figured out it could be used for other, more lethal applications. I wonder what the original inventors would think of how their amazing technology has been used.
For that matter, what about the Wright brothers – do you think they considered dropping bombs from their amazing flying machine? Or Tesla – was he thinking of fighter or spy drones when he introduced the first remote-controlled vehicle?
It’s not the technology, it’s how you use it
The truth is most technology is ‘dual use’ – it can be used to create or destroy. A knife can cut steak or stab someone. A molecular compound can be injected into the body to deliver life-saving drugs or a lethal poison. Social media platforms can keep people connected, share news, and raise awareness for amazing causes, or they can be used to manipulate the populace, steal information, and scam thousands.
What is most striking is not when bad people do bad things, but when good people, or people with good intentions, end up down a bad path because they misused technology. How many mistakes could we have avoided just by understanding the ‘what’ and ‘how’ of the tools at our fingertips?
The evidence is all around us: we are falling victim to our tech.
Consider the last time you were in a public place – how many phones and devices did you see? How many people were looking at them while they walked, ate, talked? How many great introductions, conversations, experiences were being missed in favor of email, texts, and browsing? And as for those devices – how many misunderstandings, confusions, and misrepresentations were sent through those emails and texts and comment boxes because they weren’t the right format for the communication?
The road to hiring hell is paved with good intentions
Hiring is no exception to the dangers of technology misuse and the resulting unintended consequences.
For example: Consider Artificial Intelligence (AI) based talent acquisition solutions. Many of these offerings tout that they provide the perfect candidates fast and without bias. But is that what they are really doing?
AI or Machine Learning is a set of algorithms (math-based instructions) that analyze data sets to find patterns, then act upon those patterns. It’s the technology behind suggested items you may want to buy when shopping on a website, or other shows you may like to watch when using a streaming service. For talent acquisition, the patterns are used to filter candidates or match candidates from a database.
But what AI can’t do is tell if the data it’s using to find patterns is good or bad, or if the patterns that it finds are predictive of a candidate’s success, or not. So, for example, if the data set of engineers is predominantly male because the industry was traditionally predominantly male, what the algorithm will see is the pattern of male for engineer, and therefore females are not a good match and are filtered out. It sounds ridiculous, but it happened.
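The failure mode above can be shown with a deliberately naive, toy ‘pattern finder’ (all names and numbers here are hypothetical – real systems are far more complex, but the logic of the mistake is the same): trained on a historically male-dominated set of hires, it learns gender as the pattern and filters accordingly.

```python
from collections import Counter

# Hypothetical historical data: 95 male and 5 female past engineering hires,
# reflecting an industry that was traditionally male-dominated.
historical_hires = [{"gender": "M"}] * 95 + [{"gender": "F"}] * 5

def learn_pattern(records, field):
    """Naively 'learn' the most common value of a field among past hires."""
    return Counter(r[field] for r in records).most_common(1)[0][0]

def filter_candidates(candidates, field, pattern):
    """Keep only candidates who match the learned pattern."""
    return [c for c in candidates if c[field] == pattern]

pattern = learn_pattern(historical_hires, "gender")  # learns "M"
applicants = [{"name": "Alex", "gender": "M"},
              {"name": "Dana", "gender": "F"}]
shortlist = filter_candidates(applicants, "gender", pattern)
# Only Alex survives: the bias in the data has become the rule.
```

Real machine learning systems rarely use gender directly – they pick up subtler proxies like word choices, hobbies, or school names – which makes the same failure much harder to spot.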
So, for all those ‘easy button’ solutions that select your candidates, evaluate or write your job descriptions, chat and engage with your candidates, and make sure every task along the process gets done, the full story may be hidden just below the surface-level sales pitch. There are many paths that will land your company or your candidates in hiring hell.
Choose the red pill – know the tech so you can use it for good
Luckily, the paths to hell are avoidable and it starts with awareness and intention.
In addition to knowing what you need and ensuring that the vendor is a good fit for your organization (both topics of other blogs), do your due diligence on the technology.
Here are some simple questions to get you started.
1) What is it doing?
Dive beneath the sleek sales pitch and pretty marketing material to the details of what the technology does - step by step. Place yourself in the shoes of the users: recruiters, IT, hiring managers, candidates, and you.
Walk through the experience to understand what happens at each stage and what triggers each next step.
Beyond the technical steps, consider the experience and the communication and messaging, and the perception it leaves on the users.
Evaluate what happens when things don’t go as planned – how it handles exceptions, edge cases, and course changes.
Examine the data and analytics to make sure you are getting the information you need to measure the performance of the technology, process, and team.
Don’t forget to look at the ‘negative’ – what it’s not doing. Every technology has limits. Every solution has strengths, weaknesses, target uses, and edge cases it doesn’t handle. Know both what it does and what it does not do.
Tip: if your vendor says it can do it all and it’s the perfect fit for every organization’s every need – they are lying.
For example: if you are evaluating an assessment provider, take the assessment like a candidate. Then access and review the results like a recruiter or hiring manager, and examine how the results are compiled and reported like a business leader. Go through problem scenarios, such as what happens if a candidate contests the results or the hiring manager changes their mind about what they want to measure. Ask questions such as when it’s not appropriate to use the assessments or under what conditions the assessments fail.
2) How does it do it?
Understand how every step works. There is no such thing as technology or science ‘magic’, and ‘proprietary’ should never be used to mask the approach. You don’t need to understand the code, know the equations behind the algorithms, or examine the lines of data driving the analysis. But you must understand the concepts and techniques behind them. Because if you don’t, how can you evaluate whether the technology is doing what you intend for you, your organization, and your candidates?
Ask where the data comes from and how it’s collected, protected, and validated. Data size, scope, and quality dictate the accuracy of the results.
Understand why it works. What is the reasoning behind the technology that makes it produce the intended results?
Ask how it may produce different results for users of different demographics, backgrounds, cultures, languages, and those with disabilities.
Understand how the technology handles exceptions and outliers.
Tip: a common technique to hide weak or unproven technology is to obscure the ‘how’ with overly complex terminology, jargon, claims of ‘proprietary’ or ‘secret’ approaches, and a lack of evidence of results. Algorithms, code, and lines of data are protected, not the concepts behind them. Never use a technology without a basic understanding of what it’s doing, because the liability for the result is on you.
For example: If evaluating an AI solution that automatically filters candidates to give you the best ones, understand how it’s doing it. Where is it getting its definition of ‘good’ vs. ‘bad’ candidates? What patterns is it applying to your candidates? How does it know those patterns are predictive of a candidate’s success? What is the adverse impact of the outcomes (the difference between the results of one demographic group vs. another)? How is the technology ensuring good candidates aren’t missed, and how can it be adjusted if good candidates are slipping through?
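One concrete check you can run yourself is the selection-rate comparison behind adverse impact. A minimal sketch (the numbers are hypothetical; the 0.8 threshold is the EEOC’s ‘four-fifths’ rule of thumb):

```python
def adverse_impact_ratio(selected_a, total_a, selected_b, total_b):
    """Ratio of the lower group's selection rate to the higher one.

    Values below 0.8 are commonly treated as evidence of adverse
    impact under the 'four-fifths' guideline.
    """
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical screening results: 50 of 100 men vs. 30 of 100 women passed.
ratio = adverse_impact_ratio(50, 100, 30, 100)
print(round(ratio, 2))  # 0.6 -- below the 0.8 threshold, a red flag
```

Ask your vendor whether they compute and report this kind of number for your candidate pool, not just for their benchmark data.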
3) Can you prove it?
Technology claims should all be backed with proof, otherwise, how do the vendors know what to claim? Ask for proof of results, accuracy, adverse impact, or any other claims of performance and/or outcome. Proof comes in the form of data sets (graphs and charts and tables showing results) or data points (case studies, testimonials, customer referrals).
When examining proof:
Know the data: what exactly was measured to show this ‘success’, how was it collected, and how was it verified to be correct.
Know the ‘n’. For data sets, n stands for the number of data points used. For example, if looking at a graph of how much time is saved, are you looking at an n = 1 (one customer or one example), n = 10, or n = 100? The bigger the n value, the more evidence there is behind the claimed success.
Understand the full graph. Some graphs can be very misleading, either by accident or intent. For example, missing axis labels, hard-to-read legends explaining the colors or symbols, and unclear explanation of what data is included. If you don’t understand, ask.
Beyond what you see, identify what may be missing. The data you are not seeing may be more important than the data presented.
For example, if you are looking at a graph showing performance and the n value is far lower than the claimed number of customers or successes, it could be a sign that they are hiding a bunch of bad data. Or, if you are seeing average time to fill but not seeing it against the number of reqs open per recruiter, it may not be accurate to your team’s circumstances.
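The effect of n on how much weight a claim deserves can be sketched with a rough margin-of-error calculation (illustrative numbers only; assumes a roughly normal spread in the measurements):

```python
import math

def margin_of_error(stdev, n, z=1.96):
    """Approximate 95% margin of error for a sample mean."""
    return z * stdev / math.sqrt(n)

# Same spread in claimed 'hours saved per hire' (stdev = 5),
# different sample sizes:
print(round(margin_of_error(5, 10), 1))   # n = 10  -> +/- 3.1 hours
print(round(margin_of_error(5, 100), 1))  # n = 100 -> +/- 1.0 hours
```

The uncertainty shrinks with the square root of n, which is why a chart built on ten data points says far less than one built on a hundred, no matter how polished it looks.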
4) Can I prove it?
Explanations and proof can only get you so far. Ultimately, your organization will have specific needs. Who you hire, how many you hire, the size and experience of your team, the demands of your organization and industry, your hiring process and approaches, etc. To truly measure if the technology solution is a good fit, prove it for yourself with a trial, pilot, or other form of test period.
How do you engage in a pilot to truly test the value of a technology? A great question for a future blog.
Bringing it all together
Not all technology or technology vendors are c