Behind the Hiring Curtain: What’s Really Happening
Episode 330 - Patrick Dunlop, organisational psychologist and Future of Work professor, shares what he learned from studying recruiters, what’s overhyped, what’s still painfully manual, and how experienced candidates can move with confidence through modern selection processes.
Spend enough time around job seekers and you will hear the same diagnosis: “Hiring is broken.”
Spend enough time around recruiters and you will hear a different one: “We’re drowning.”
Both can be true. What has changed in the last few years is not simply the technology inside recruitment. It’s the volume, the noise, and the mismatch between what candidates think is happening and what is actually happening inside organisations.
In my conversation with Professor Patrick Dunlop, an organisational psychologist at Curtin University, one theme kept resurfacing: the biggest misunderstanding is not about AI. It’s about realism. Hiring varies wildly from one organisation to the next, and much of what candidates assume is “automated” is still surprisingly manual, uneven, and dependent on human judgement.
What follows is a structured, evidence-informed way to think about modern hiring if you are an experienced professional, particularly in your 40s and beyond.
The hype gap: Why your LinkedIn feed isn’t a reflection of the labour market
Patrick described running a survey of Australian recruiters during COVID after watching his LinkedIn feed fill with confident claims: AI that analyses resumes, cover letters, even personality from photos.
Then he looked at what recruiters were actually doing.
Many were using “AI” in name only: Excel spreadsheets with cells coloured green or red. Others were printing stacks of resumes and reading them by hand.
This matters because the candidate advice economy tends to assume a uniform, high-tech hiring machine. But the real world still includes legacy systems, under-resourced teams, and inconsistent capability. The practical takeaway is simple: do not build your entire job search strategy around what you think “the algorithm” is doing. Often, there is no algorithm worth optimising for.
Resumes and cover letters: Two “mystery boxes” that still dominate
Despite decades of innovation in selection science, the resume remains the most widely used assessment tool. Patrick called it “one of the worst assessment tools we’ve ever created,” while also noting it’s under-researched and stubbornly persistent.
Why does it survive? Because it is cheap, familiar, and scalable. And, in many roles, it still carries signals hiring managers care about: progression, credibility, industry context, and scope.
Cover letters are even more ambiguous. Patrick described them as a “mystery box,” with organisations split across three positions: cover letters signal commitment, cover letters are ignored, or cover letters are now meaningless because they can be generated at scale and “all sound the same.”
If you are an experienced candidate, this ambiguity can feel like a trap. My view is that the solution is not to guess what a specific recruiter prefers. It is to design your materials so they work under both conditions:
- If the cover letter is read, it should add signal that the resume cannot.
- If it isn’t read, nothing important should be trapped inside it.
This means shifting your cover letter away from generic enthusiasm and toward specifics: your “fit thesis,” a clear view of the commercial or operational problem you solve, and proof in the form of outcomes.
Why assessment is expanding: the logic behind the “bottleneck”
Many candidates dislike psychometric tests because they experience them as hurdles, not evidence. Patrick made an important point: from an organisational psychology perspective, anything you are judged on is an assessment, including the resume and interview. But psychometric testing was developed to solve a problem that human judgement cannot solve well: inconsistency.
People are poor intuitive judges. They change their minds from day to day, disagree with each other, and latch onto irrelevant details. Standardised assessments aim to reduce this noise by measuring pre-defined characteristics in a consistent way.
There is another force at play. The easier it becomes for candidates to produce polished applications using AI tools, the more employers look for new bottlenecks: structured interviews, work samples, simulations, and tests that are harder to “polish.”
So while many candidates want assessments to disappear, the more likely reality is that they will evolve and proliferate.
What good assessment looks like for senior roles
Patrick’s answer was blunt: the step that gets skipped most often is job analysis. Before anyone selects a test or a case study, they need clarity on what success looks like in that role, in that organisation.
Once you know what you’re selecting for, there are three broad tool families:
- Ability tests (reasoning, numeracy, problem solving)
- Questionnaires (personality, motivation, values, interests)
- Structured interviews and simulations (behavioural interviews, case studies, work samples)
For senior roles, he emphasised structured interviews as “very defensible,” with strong evidence linking performance in structured interviews to performance on the job.
The word “defensible” is doing work here. It signals a principle that many candidates intuit but rarely hear said plainly: assessments are not only about prediction. They are also about justification. Leaders need a process they can explain internally, and sometimes externally, if challenged.
The real problem with assessments is not the tools. It’s lazy design.
In my practice, I see two extremes. Some organisations run rigorous, multi-step selection processes that clearly map to the role. Others bolt assessments onto a process without explaining why, offering no feedback, and treating results as if they are self-explanatory.
Patrick described this as “chuck all that into the pot and stir it up” testing: standard packages used because “it’s just what you do,” not because they map to the role.
The same happens with case studies. People underestimate the expertise required to design them well: you need a clear definition of what “good” looks like, consistent rating criteria, and trained assessors.
For senior candidates, this is often where frustration spikes. When you are asked to complete an assessment that does not resemble the work, it can feel like your experience is being dismissed.
Patrick offered a simple lens: candidates value assessments that allow them to demonstrate capability relevant to the role. That’s why abstract reasoning tests can irritate senior applicants: you are not applying for a job to solve diagram puzzles, you are applying to lead.
“Faking” and the uncomfortable truth about self-presentation
One of the most useful parts of the conversation was Patrick’s distinction between “faking good” and presenting your “good work self.”
Faking good is when you deliberately answer in ways that misrepresent you, because you believe it will produce a better score.
But he also described a grey zone: we all adapt our behaviour to context. You do not speak to your boss the way you speak to friends. That is not deception, it is professionalism. In a personality questionnaire, he recommends answering from the perspective of your “good work day” self: the version of you that shows up when you’re productive, engaged, and doing your job well.
This is an important reframing for experienced professionals. Many fear that any deliberate self-presentation is “dishonest.” In reality, hiring has always required self-presentation. The question is whether you are presenting a credible, sustainable version of yourself.
Patrick also pointed out a double standard: critics sometimes argue personality tests are useless because candidates can fake them, while praising interviews and resumes as if faking does not happen there. It does. Interviewers simply overestimate their ability to detect it.
Candidate experience is deteriorating because systems were not built for this scale
Candidates tell me they feel ghosted, neglected, and processed. Patrick’s explanation was not sentimental. It was structural.
Applicant tracking systems are often legacy tools. They were not designed for a world where candidates submit dozens of applications a week, and where employers receive extraordinary volumes.
At the same time, AI has increased noise. Recruiters now receive what Patrick called “AI generated slop,” including scattergun applications that may not even come from real people.
When you are looking at a spreadsheet of applicants and “three quarters of them aren’t real people,” personalised communication becomes unrealistic.
He went further, speculating that bot activity could be used intentionally to clog systems, whether by malicious actors or by services that promise to apply on your behalf at scale.
The consequence is bleak but clarifying: if you are applying into high-volume systems with high noise, the odds of thoughtful review fall. In that environment, strategic targeting becomes less optional.
The senior candidate’s dilemma: adapt without losing status
One of the best moments in our conversation came from a story I shared: two senior executives asked to submit a video as part of a CEO application. One was furious and underperformed. The other adapted fast and won the role.
Patrick’s response was nuanced. He drew a line between two types of video screening:
- The “upload two minutes of yourself saying whatever you want” approach, which he finds risky and biased because criteria are undefined and assessors may simply reward what feels familiar.
- The asynchronous structured video interview, which can be fairer because every candidate gets the same questions and the same time, and assessors can be trained with rating scales to focus on content, not appearance or background.
He also acknowledged the human cost: one-way interviews feel awkward, and they reduce the candidate’s ability to ask questions. Employers who use them need to provide a separate channel for candidate questions and make that explicit.
For candidates, he offered practical advice: practise speaking to a webcam, rehearse a small set of role-relevant stories, and get each story under two minutes. Recruiters are reviewing many submissions and do not want rambling answers.
This is where the “status” issue becomes real. Senior candidates often interpret new formats as disrespectful. But the labour market does not reward indignation. It rewards fluency.
What comes next: stricter policing and more invasive verification
Patrick expects a near-term tightening around assessment integrity as AI becomes normalised among candidates. Some employers will respond with aggressive bans and loose detection thresholds, because they can reject candidates without explaining why.
Others may take a more pragmatic approach: encouraging candidates to use AI to improve clarity, while insisting that the underlying content remains true. He gave a strong example: using AI to cut a ten-minute story down to a two-minute answer, without inventing experience.
He also flagged growth in proctoring software that monitors browser behaviour and uses webcams to detect assistance, while noting public tolerance for invasive technology may have limits, especially in Europe.
Finally, he described emerging “AI interviews” and multi-agent simulations: AI avatars conducting interviews, AI in the room nudging interviewers, and even role-play simulations run by multiple AI agents.
This future is not evenly distributed. Some organisations will embrace it quickly. Others will lag behind with spreadsheets and printed resumes.
What I want experienced candidates to take away
If you are job searching right now, the goal is not to become a hiring-tech expert. The goal is to build an approach that is robust across different realities.
Here are five principles I would hold onto:
- Do not over-optimise for an imagined algorithm. Many processes are still manual and inconsistent.
- Treat your resume as an assessment instrument, not a biography. Make the key signals easy to find.
- Expect assessments to increase, not decrease. Prepare a repeatable method for tests, interviews, and work samples.
- Aim for “good enough,” not perfection. Credible self-presentation beats gaming.
- Choose strategy over volume. In a market clogged with low-quality applications, targeting and clarity are advantages.
Hiring is not a single system. It is thousands of systems under strain. That can be demoralising, or it can be freeing. Once you stop trying to solve “the” hiring process, you can start solving the one in front of you, with the best information available.
If you’d like help with your job search, reach out to me. I'd love to help you navigate everything and come out on top.
About Our Guest, Patrick Dunlop
About the Host, Renata Bernarde
Hello, I’m Renata Bernarde, the Host of The Job Hunting Podcast. I’m also an executive coach, job hunting expert, and career strategist. I teach professionals (corporate, non-profit, and public) the steps and frameworks to help them find great jobs, change, and advance their careers with confidence and less stress.
If you are an ambitious professional who is keen to develop a robust career plan, if you are looking to find your next job or promotion, or if you want to keep a finger on the pulse of the job market so that when you are ready, and an opportunity arises, you can hit the ground running, then this podcast is for you.
Resources Mentioned in This Episode
Timestamps to Guide Your Listening
- 00:00 Understanding Assessment Tools
- 00:52 The Importance of Job Analysis
- 03:48 Designing Effective Assessment Processes
- 06:53 The Role of Simulations and Case Studies
- 09:59 Concerns About Psychometric Testing
- 12:56 Faking in Assessments and Its Implications
- 15:50 Cultural Differences in Assessment Responses
- 26:44 Cross-Cultural Assessment in Personality Testing
- 30:57 Candidate Experience and Recruitment Processes
- 36:10 The Impact of AI on Job Applications
- 39:04 Adapting to New Technologies in Job Search
- 49:19 Future Trends in Recruitment and Assessment
Transcript
Renata Bernarde (00:00)
This episode is for the serious job seeker, the job seeker that leaves no stone unturned and wants to geek out on all of the details that can make their application float to the top of the pile. It is also a must-listen for HR directors, who are the executives that are making strategic decisions and investments on talent acquisition.
And also, of course, all talent acquisition professionals, all recruiters and headhunters who should always be on top of the latest research on talent acquisition and recruitment best practices.
Because today I’m speaking with Professor Patrick Dunlop from Curtin University’s Future of Work Institute.
He’s an organizational psychologist whose research focuses on recruitment, assessment and selection, and how technology is reshaping the way organizations hire.
He’s also an associate editor for the International Journal of Selection and Assessment, which means he spends his days reading and shaping the latest research on how people are chosen for jobs.
So in this conversation, we walk through what really happens when you apply for a role that uses tests, online interviews or AI-based screening.
Patrick will explain to us why employers still rely on personality and aptitude assessments, and what the evidence says about candidates who exaggerate or fake good on those tests,
and how that behavior shows up later in performance and conduct at work. We then move into AI, and Patrick shares what he’s seen in the research on so-called traditional AI in hiring, such as algorithms that rank CVs or score video interviews, and the newer wave of generative AI, including ChatGPT and Claude, that candidates themselves are using.
We talk about what is considered fair use of AI as a candidate, what the ethical red lines are, and how regulators are starting to respond. If you are a corporate professional who has spent years building your career and you now find yourself sending applications into automated systems, this episode will help you understand what is happening behind the scenes.
Patrick will offer you very practical guidance on how to approach assessments, what to do if you dislike tests, how to prepare for structured interviews, and how leaders can design hiring processes that are both evidence-based and humane.
So if you have ever wondered about any of these things and you think, does any of this actually measure who I am and what I can do, then this conversation is for you.
Renata Bernarde (03:05)
Look, Patrick, when you look at the way that hiring is done today, what do you think candidates most misunderstand about recruitment and assessment?
Patrick Dunlop (03:19)
Yeah, that’s a really interesting question. The truth is it’s hard to know because hiring is done so differently by so many different organizations. And so it’s sort of hard to pin down one golden truth that applies to hiring generally. So I mean, a few years ago, it was during COVID, I decided to do a research study of Australian recruiters and people working in talent acquisition across Australia.
I was browsing LinkedIn a lot because we’re isolated in Western Australia. And all I could see on my LinkedIn feed was, you know, recruitment tech, AI here, AI this, we can analyze cover letters and resumes automatically. And, you know, we can interpret personality based on people’s photographs and using AI algorithms. And I just thought, my God, like the world out there is moving so quickly. And I needed, I felt like I needed to get my finger on the pulse.
So I did this survey of talent acquisition leaders expecting them to tell me all about how much AI they’re using and how brilliant it is and how efficient it is. And then I actually discovered that the real world is quite different, right? So most recruiters, their AI was using Microsoft Excel and getting it to automatically color certain cells green or red, depending on whether the person was marked as shortlisted. Other people talked to me about stacks of resumes that they printed and that they were trawling through individually.
So there was just this huge gap between the hype on, I guess, the LinkedIn bubble and the real world. And so I think it’s hard to sort of pin down one thing and say, oh, there’s a misunderstanding. Probably the misunderstanding I had when I think about it now is just this, you know, I just bought into the hype a little bit too much. And I know when you’re looking for work, often you’re drawn into these ecosystems like LinkedIn or maybe some online forums if you’re working in a particular specialist area or maybe you’ll go to a networking event.
They always wheel out these presenters from the talent tech companies to talk about the latest gadgets in that space. But actually, the reality is a lot grittier and there’s a lot more grunt work being done even now with generative AI, I suspect. So I think there’s a couple of things though that have stuck. So one thing that’s really stuck is the use of resumes, or CVs, and we can get into that later. It’s probably one of the worst assessment tools we’ve ever created, and yet it’s used most frequently.
And I would say it’s probably the most under-researched tool. We actually haven’t done a lot of systematic research into resumes, partly because they’re not psychologically very interesting things. But when you see AI, you see rumors of AI being used, quite often it’s used to analyze resumes at scale, because, you know, printing them out and looking at them manually is painful. And so people are looking for shortcuts there, which is understandable.
Otherwise, the other mystery box to me is cover letters. Like should you write a cover letter or not? Again, there’s not a lot of research on cover letters. And I speak to people that say, yeah, if there’s no cover letter, that’s a signal that somebody’s not invested. I see other people say, I don’t even look at cover letters. Half our applicants don’t write them. They’re not standardized. You can’t tell. And then another group now are saying, well, people just use AI to write cover letters. So there’s almost no point in looking at them because they all sound the same. So yeah, it’s hard to pin it down. But those are the things
Renata Bernarde (06:26)
Mm.
Patrick Dunlop (06:29)
that surprised me, I suppose.
Renata Bernarde (06:31)
Yeah, that’s very true. ⁓ What I have noticed is when I’m talking to the very senior leaders in recruitment, they are the ones that tell me, yeah, no, we don’t use AI that well yet. We have tested a couple of tools and they don’t really work for us yet. But then if I’m talking to recruiters on LinkedIn, like you said, it seems like the vibe is much bigger than the reality.
It’s the same for candidates. We talk a lot about people using ChatGPT and Claude and Perplexity to write their resumes, but what I hear from recruiters is that there’s still a lot of people that are not using AI to do their job applications. So it goes both ways.
Patrick Dunlop (07:17)
Yeah, indeed. Yeah, yeah. And also
in a university, which is where I work, we’re in a bubble. There’s always an email a day, maybe two, talking about AI. Mostly it’s about students using it or academics using it for their teaching. But you’re absolutely right. A lot of people don’t want to touch it. Some people haven’t heard of it. Some people have heard of it, but can’t access it. So yeah, it’s very important not to make assumptions about it based on hype.
Renata Bernarde (07:43)
Okay.
You know, one of the things that I said maybe a year and a half ago on the show here on the podcast is that I thought assessments were going to grow in the recruitment and selection process, because as more people use AI to do their resumes and cover letters, the quality of those applications gets a little bit higher, and then we need to bottleneck things along the recruitment and selection process, and assessments is a great way to do that.
But from your perspective, what is it that assessment tools are trying to measure? Because most job candidates, frankly, hate doing assessments.
Patrick Dunlop (08:24)
Yeah, assessments are generally not much fun. Basically what we’re trying to do, if we’re talking about psychometric testing, I suppose, so from an organizational psychologist point of view, we would call anything that a candidate is being judged on an assessment. So a resume would be a type of assessment, or an interview would be a type of assessment. But I think what you’re talking about here is the psychometric tests, which a lot of people don’t like. I mean, no one particularly enjoys being tested.
You do tend to find people that are good at tests don’t mind it so much because they just see it as a challenge, but by and large it’s seen as a hurdle to overcome, to make it to the next stage of a selection process. Yeah, so what are people trying to do with these assessments? Well, psychometric testing is something that we developed in psychology, and there’s a lot of work in education as well because there’s a lot of assessment that goes on in schools. But we developed it to try and solve a problem,
which is it’s really hard to make judgments about people using our intuition. There’s tons and tons of research that shows when people try to make intuitive judgments, they’re hopeless at it. They’re inconsistent. So on two different days, they might arrive at two different conclusions about a job candidate’s suitability. They’re inconsistent with other judges. So you and I might evaluate the same candidate, but you know.
reach totally different conclusions. And again, if we wait one more day, we might even flip in terms of our evaluation. We’re not consistent in what sorts of information we latch onto and judge as being important. And because of all of this, it’s very hard to get nice robust assessments of people unless you develop a standardized method to do that. And that’s really the origin of psychometric testing, or the purpose of psychometric testing: to provide us a way we can
assess something on a predefined set of characteristics in a standardized way, which means that if you and I have different scores, we can interpret that difference as representing differences in our standing on that characteristic. So we’ve made tests that measure all kinds of things. So you’ve got one category of tests, which is the mental aptitude or cognitive ability type test. And we’ve probably all had to go through some of these.
And within there, there’s types of tests that measure things like verbal reasoning. Some people might have done some of those diagrammatic or abstract reasoning tests that are trying to get at people’s abstract problem solving skills. Numeracy assessments are quite common. In certain roles, you might see mechanical reasoning tests being used; 3D spatial reasoning is another example. There’s lots and lots of examples. And on the other side, you’ve got what we, we tend not to use the word test. We usually say assessment or questionnaire, but this sort of captures
things like personality questionnaires or a vocational interest inventory, or maybe a motivation questionnaire or a values questionnaire. And what they’re trying to get at is, it’s meant to be standardized ways to capture those individual differences that aren’t so much ability-based, but are more motivational-based or more about our preferred behaviors or thinking styles and so forth. And so for example, if you…
Let’s say for your organization, you’ve worked out that being really attentive to detail, it’s just super important for a particular role, right? Like maybe it’s an executive assistant role and they just need someone who can cross the T’s and dot the I’s. So we might use a personality assessment to try to capture that. You could use an ability test. You could give somebody a test where you throw a bunch of errors at them and see if they can identify them quickly and accurately. But you might also throw in a personality questionnaire which captures something that’s more like the motivation to be…
detail oriented, right? So if you give me a 10 minute test, I could probably do really well on a test looking for errors. But if you told me I had to do that all day, I just don’t have the temperament for it, right? I’d get frustrated, I’d get bored, I’d start making mistakes. And that’s probably what the personality or motivation type assessments are trying to capture.
Renata Bernarde (12:24)
Okay. When you think about my audience, they’re usually experienced professionals. This is kind of new to them. I don’t think that assessments for senior professionals were as popular maybe 10 years ago as they are today. I think 10, 20 years ago, you would correlate assessments with graduate roles. And now we’re seeing more and more assessments being used for
middle managers and definitely for senior execs, and it’s becoming more and more common. If you were to design an assessment process for a senior corporate role, what would a good process look like?
Patrick Dunlop (13:09)
Yeah, so I would say the most important step, which gets skipped a lot, is you start with doing a proper analysis of the job. Right. So, you know, it sounds boring, but when people come to us with questions about assessment, we always say, stop, wait, wait, wait, before you get into assessment, we’ve got to figure out what exactly we’re looking for. And so the first step is to, you know, figure out what exactly is this job about? What is the person in that job going to be doing? And that determines what characteristics we need them to have.
And then you can start thinking about how to assess those characteristics. There’s loads of ways to do job analysis. For example, there’s a great website, an American website called O*NET, which contains a huge database of position descriptions for all kinds of different roles. So they’re a great starting point, because every individual role will be different, but O*NET can give you a really good start. I found generative AI is also very good for developing the first template, if you like.
But at the end of the day, you really do need to look in the context of your organization. So that might involve observing people in that role, or asking them to just make notes of what they do. There’s also questionnaire-based approaches you can use. So once you do that, you identify the characteristics that are essential for success, and then you start to design an assessment structure that aims to tap into those different characteristics. And that’s when you’ve got decisions to make about, well, do we need to measure mental aptitude for this role?
And if so, which types of mental aptitudes are important? Do we need somebody who’s good at problem solving in abstract situations? Or is this particular role very focused on understanding how numerical information relates to each other? Like, are they going to be trawling through Excel sheets trying to make sense of financial records and so forth? And then the other side to it is, what kind of competencies or dispositional characteristics or potential do we need to capture?
So are they going to be working in a team? And if so, what different approaches might we use to assess people’s ability to work well with people? Are they expected to lead? Are they expected to influence? Or are they expected to motivate? Or are they expected to sort of take charge and issue instructions, where following orders is not up for debate? It really depends a bit on the context. And so once you’ve identified those sorts of characteristics, there’s a whole range of options. So one
method is, you might use a personality questionnaire if you want something that’s very standardized and you know exactly what characteristics are relevant. Or you might opt for something more like a structured interview. There’s a lot of research that’s been done on structured interviewing, and it’s one of the best, and when I say best, I mean it’s very defensible in the sense that if somebody does well on a structured interview, there’s a lot of evidence to suggest that they will probably do well on the job as well.
But there’s a lot of caveats to that. We can get into that later. But I would say, one of the great things about structured interviews is they can be designed to assess all kinds of different competencies or types of job knowledge. And the other thing with structured interviews is you can tailor them to suit the amount of experience that your candidates likely have. So what you typically find for graduate level roles, you tend to see more situational style questions. So a situational question is, OK,
such and such is going on. There’s an angry customer, how do you respond? Whereas the other type of structured interview question is a behavioral based question, which is like, okay, tell us about a time when you had to deal with an angry customer or a disgruntled staff member or whatever. How did you resolve that? And so for those more experienced-level roles, you’re probably more likely to see behavioral questions, because they sort of compel people to draw from their memory banks and describe exactly what they did and what impact that had.
So those are also, yeah, they’re quite good. And the last category, sorry, I know I’m rambling a bit, but the last category you sometimes see is things like case studies or simulations. And with those, particularly at the senior executive level, I understand they’re quite commonly used. You also see them in graduate level roles as well. But basically the philosophy behind this is: if we can somehow take a chunk of the job and watch people do that chunk of the job in real time,
Renata Bernarde (16:56)
Yeah.
No, no, go ahead.
Patrick Dunlop (17:23)
then we get a pretty good indicator of what they're likely to do when they're doing that same type of work after they're employed. That's generally what simulations are designed to do. The trouble is they're not easy to do, to do well, to standardize, and all that. We can unpack that later too.
Renata Bernarde (17:39)
Yeah. I have had clients who have done all of the above: simulations, case studies. I find that if an organization includes those as part of the recruitment process for a senior executive, they take everything really seriously. You know, if you get that job, you're a good fit for that team; you've been analyzed thoroughly before you were engaged.
What concerns me as a career coach supporting people through these processes is when they are just haphazardly included, you know, and we don't really see how they're going to be used. Candidates are just asked to sit for a psychometric test and then they don't get the report; no feedback is given. They don't know what they've done wrong. Especially because, as part of my coaching program,
I use a strengths assessment, TalentPredix, which you have done. And when we think about their results in that talent wheel from TalentPredix, my clients may get concerned if, for example, they are going for a leadership position but leadership is not one of their top talents. You know, it's teamwork, or it's organization, or it's problem solving. And they're worried that…
Not that they cannot lead. I think my clients are pretty confident about their ability to lead with those talents. What they're not confident about is the recruiter's ability to read between the lines of the psych test that's just been run. Do you worry about that as well?
Patrick Dunlop (19:17)
Hmm.
I mean, yeah, absolutely. It's often the case that, and I'll just say psychometric testing, but it could be anything, this applies to simulations as well, it's often the case that these sorts of assessments get used without being grounded in a job analysis. In some organizations, it's just what you do: you're a team leader trying to recruit a team member, or you're trying to recruit a senior manager, or whatever. Sorry, I just have to
Renata Bernarde (19:23)
Hmm.
Patrick Dunlop (19:47)
There we go. That'll happen again. Yeah, you're trying to recruit somebody, and your company's policy is that there is a psychometric testing process, and no one bothers to explain what it's for. No one explains that before you start, you need to think about which parts of the psychometric assessment are relevant and which parts are not. How should you weigh those different parts up? How should they be weighed against what goes on in the interview?
And very often the tests are just chosen because they're the same three we always use, you know, verbal, numerical, abstract, a pretty hackneyed combination, and then a personality questionnaire. And that's it. We just chuck all that into the pot and stir it up without actually being strategic about it. With simulations, what I've often come across is someone thinking, how hard can it be to write a case study? Surely it can't be that hard. And so they just smash one out,
and they don't think much about: well, hang on, we've got to actually observe somebody responding to this, and we have to have a strong understanding in advance of what a good response is and what a bad response is, and which behaviors we think are important and which we don't. I think people underestimate this. There's a category of tasks that seem easy to do,
but are really hard to do well, and it's hard to appreciate that, right? Designing surveys is another one. Anyone can write a survey; how hard can it be? But actually designing a good one requires a lot of training and expertise. The same is true for assessments, especially case studies. The other thing is, when you're dealing with people who have been working in organizations in senior roles, of course they have a lot of expertise. They've worked in these roles for a long time, so they'll understand those roles better than an organizational psychologist like me. But
what they sometimes underestimate is the expertise in assessment design that they don't have. What you really need to do is pull those two bits of expertise together to come up with something good. So I suspect what's happening with some of your clients is that they're running into some of these cobbled-together simulations, which lack the stuff on the other side that keeps them robust, that makes sure the judges are all on the same page and everyone agrees on what is effective or ineffective behavior.
Renata Bernarde (22:03)
Yeah, if we flip the coin a little bit and think about it from an employer's perspective, one of the concerns raised with using psychometric tests or personality tests is this situation where somebody could be faking it. I want you to talk about that, because for my audience, I think it's important to understand the importance of
sitting down to do these 30-, 40-minute, sometimes two-hour-long assessments at a time when they are, I mean, you tell me, relaxed, not in a hurry, not trying to manipulate the information. Because at the other end, there will be a dashboard that might be flashing some red alerts if they perceive the candidate as
faking or trying to manipulate the information. Can you explain to my audience what that looks like?
Patrick Dunlop (23:07)
Yeah, absolutely. So faking on these self-report assessments is something we've been interested in in psychology for a long time, because we use assessments in a lot of high-stakes situations. One type of faking, which is what we'll talk more about, is called faking good, and that's where somebody is trying to make themselves look like, you know, the perfect individual. There's another type, which we call faking bad, which sometimes shows up in situations like a clinical assessment for, say, a WorkCover claim. So somebody might want to claim that they're suffering from a mental illness caused by something that happened at work. They're not really; they're just malingering. But they get given this clinical assessment, and so they try to fake that. A lot of clinical assessments have little measures in there designed to detect people who might be malingering. But in the job application setting, organizations are most typically concerned about people trying to make themselves look really good.
It's interesting. It's a problem that's very hard for researchers to crack. What we do know about faking is that it does damage the integrity of the assessment. It's certainly true that some applicants out there do fake good. What that looks like in practice, for example: let's say I've been given a personality questionnaire and one of the questions is "my desk is neat and tidy", and I have to agree or disagree with that.
You can't see my desk, thankfully, but if you could, you would see books, a shopping bag over here, a webcam that I decided not to use just randomly sitting there. So if I agree with that item, I'm faking good. I'm trying to make myself look organized and diligent, knowing there's no way you could catch me out. And that's bad, because what's happening is you're getting a misrepresentation of my true level of tidiness or organization. Obviously, if I want a job, there's a strong incentive for me to make myself look like a
tidier, more diligent, hardworking, nicer, more collaborative, interested, curious person. That's the threat of faking. But on the other side of it, there is this kind of gray zone where we wear different hats, right? The way I would describe my personality in the context of hanging out with my friends would be a bit different from the way I'd describe it if my boss were standing behind me. You put on different hats depending on your context. It's not like we're chameleons that totally change color, but we do adapt, right? And that's part of what we're supposed to do; the societal expectation is that we adapt to the situations we're in. There's a reason I don't talk to my boss the way I talk to my friends: he would think it's weird, and I would probably have a hard time getting promoted if I did. So when you're filling in a personality questionnaire, it is helpful to think about which hat you should be wearing. Most typically, I would recommend wearing your good-work-days hat. When you're trying to do a good job and you're trying to get on with people, on those days at work where you're having a good time and feeling productive, that's the personality you should project in those personality tests. It's not faking, because it's still you. It's just the version of you that you're very likely to deploy when you get the job. That's the version of you that will be working.
And interestingly, there's some research showing that when you design personality questionnaires with instructions that really try to tap into that kind of work-hat personality, as opposed to a more general personality, you actually get more valid assessments. The validity you get from those tailored instructions improves a bit, because obviously my work personality is going to best reflect how I'll behave at work, whereas my home, everyday personality…
There'll be a bit of that at work, but most of it shows up when I'm not at work, when I'm hanging out with my friends or my partner or my family. So that's not problematic faking; that's just showing us who you are. The other thing I'll say about faking, because it's super interesting that you mentioned it in relation to personality tests: I've studied faking on personality tests for a long time, and I've seen lots of people say, you can't use these tests, people fake them. And then,
five minutes later, they'll talk about the wonderful interview that they do, or the wonderful resume screening process that they do. And you're like: but people can fake interviews and resumes as well. Why are you holding personality tests to this standard? I don't know why that is; it seems inconsistent. My hunch is that a lot of interviewers believe they can tell when someone's faking.
Renata Bernarde (27:28)
Yeah.
Patrick Dunlop (27:40)
In an interview, I have control: I can see your face, I can read between the lines, I can somehow read your mind. I can't do that with a personality test, so I'm going to be suspicious of it; I trust my judgment. But the reality is there aren't many people out there who can detect when someone's lying to them. So that would be my takeaway on faking: don't be too worried about being caught faking if all you're trying to do is showcase your best working self.
Right? If you take the test very cynically, if you're just like: look, I'm not going to think about who I am at all, I'm going to treat this thing like a game, and my goal is to get the highest score. Okay, I see a question about keeping my desk tidy; I'm not even going to look at my desk, I'm just going to tell them I've got the tidiest desk in the world. If you take a test like that, then it's possible you'll set off a few faking alarms, because you'll come out the other end looking like the most implausibly brilliant person.
And yeah, okay, you might actually end up getting a job offer out of that. But the problem is you'll get found out, because at some point you actually have to do the job, and people will realize: actually, that person's disorganized, not very hardworking, et cetera. So I think it's a bit of a mug's game trying to fake your way through the test, but some people do it. And when they do, it does undermine the integrity of the process.
Renata Bernarde (29:00)
And I believe part of your research shows that if somebody overclaims or lies in their test, there's a higher chance of them not doing well on the job in terms of following instructions. Can you explain that?
Patrick Dunlop (29:18)
Yeah, yeah, absolutely.
I forgot I'd done that study until you mentioned it. It was a really fun study to do. So when it comes to faking on personality assessments, one of the questions practitioners always ask us, and it's a good question, is: can we tell? Is there a way to detect when somebody has faked? There are a few different methods that test developers now use. In the study you're mentioning, we were looking at a different type of method, where we would ask people to self-report
knowledge, right? So I might say: okay, here's a topic, science and technology. Here are 15 things that relate to science and tech. Some of them everyone will know, and some of them will be super obscure. And I want you to tell me: how familiar are you with each one? So if I say gravity, you'll probably say, oh yeah, I know what gravity is, and rate that as something you're very familiar with. But if I say something like quantum physics, you might say, well, I've heard of it, but I don't know much about it.
Anyway, with this method we're not actually that interested in measuring how well you know science and technology. The catch is there'll be a few items in there that aren't real. They'll sound like something scientific or technological, but actually they're just bogus things we made up. So the theory is: if you're out there telling me you know all about this bogus scientific method, you know, the Dunlop method of solving matrix determinants or something, there's no such thing, but if you say, yeah, I know what that is, then…
We were interested to see whether that is a sign that somebody's trying to trick the employer into hiring them, or trying to impress them. In that study, we designed a questionnaire measuring knowledge about political concepts. We had a bunch of obvious ones, like voting and Parliament House and majority, some really obscure things that I'd never heard of until I Googled them, and then some made-up ones.
What we found was that the people who said they were familiar with these made-up concepts, when we then gave them a task that involved actually using political knowledge, were more likely to behave deviantly in that task. That's right: we asked people to give their opinions about various political concepts or ideas. For example, what's your opinion on people having to show their ID before they're allowed to vote?
And we didn't care whether people were for or against it. We just wanted to know: is the opinion coherent? Does it make sense? Is it logical? Is it thought out? In the task, we also asked them to form opinions about things that are actually nonsensical, and we instructed everybody: if you don't have an opinion on something, just skip the question, but if you do, tell us what it is. The people who claimed to know things that didn't exist were more likely to give us strong opinions
about concepts that didn't make any sense at all, which was deviant behavior. What we took away from that was that maybe a hallmark of lying is somebody pretending to know things that don't exist. It wasn't a strong signal. It's not something we would feel comfortable recommending test developers put into their tests just now. But there was a little bit of a signal.
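The bogus-item method Patrick describes can be sketched in a few lines. This is a toy illustration only, not the scoring used in his study; the item names, the 1-7 rating scale, and the flag threshold are all invented for the example:

```python
# Toy sketch of a bogus-item ("overclaiming") check, loosely inspired by
# the approach described above. Items, ratings, and the flag threshold
# are invented for illustration; real scoring is more sophisticated.

REAL_ITEMS = ["gravity", "quantum physics", "photosynthesis"]
BOGUS_ITEMS = ["the Dunlop inference method", "retroplex calibration"]  # made up

def overclaiming_index(ratings: dict[str, int]) -> float:
    """Mean familiarity (1-7 scale) claimed for items that do not exist."""
    scores = [ratings[item] for item in BOGUS_ITEMS]
    return sum(scores) / len(scores)

def flag_candidate(ratings: dict[str, int], threshold: float = 4.0) -> bool:
    """Flag a profile whose bogus-item familiarity is implausibly high."""
    return overclaiming_index(ratings) >= threshold

# A candidate who claims deep familiarity with everything, real or not:
cynical = {item: 7 for item in REAL_ITEMS + BOGUS_ITEMS}
# A candidate who admits unfamiliarity with the made-up items:
honest = {**{item: 6 for item in REAL_ITEMS}, **{item: 1 for item in BOGUS_ITEMS}}

print(flag_candidate(cynical))  # True
print(flag_candidate(honest))   # False
```

As Patrick notes, the signal in the real study was weak, so a flag like this would at most prompt a closer look, never a decision on its own.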
Renata Bernarde (32:30)
Before we move on from assessments, because I want to talk to you about other things, I want to share an experience I had a few years ago with a client who was employed and on track for a promotion. So the work I was doing with him was to help him get that promotion under his belt. And one thing we didn't even discuss in our coaching sessions was the fact that he had done a
psychometric test run by a large, global, well-known HR consulting firm, I will not name the name, in this global organization he worked for. And he scored really poorly on that assessment. Now, one thing I have noticed, especially as I use TalentPredix with such a diverse range of individuals and cultures,
I do talent projects with all of my university students, 50% of them international, so it gives me a lot of anecdotal information about how people use the 10-point scale. And what I have found is that somebody like myself, a flamboyant Latin American woman, can go extremely agree, extremely disagree with a lot of things, and I feel comfortable using that sort of format.
When I work with other cultures, especially in Asia-Pacific and some countries in Europe, people are much more reserved about their opinions, so they tend to gravitate towards the center of that 10-point scale. And I think that's what happened to him. This being an American psychometric test, I did a little bit of research at the time, and him not being from that culture, he basically just
undersold himself, you know, a lot. And his promotion was in jeopardy; they pulled it. Then we built a case around the fact that he did not see this as linked to the promotion. It was a test that everybody at the level he operated at had to do. It wasn't necessarily linked to the promotion; it wasn't part of the recruitment process for that role. And
he comes from a culture where people just don't go extremely agree, extremely disagree on that 10-point scale. It wasn't TalentPredix, but it was a similar assessment with a lot of 10-point-scale results. And he got the promotion in the end, because he was on track and had enough visibility and reputational equity within the organization to balance off what happened with that test.
What's your assessment of this? Did we do the right thing in fighting it?
Patrick Dunlop (35:27)
Yeah, yeah, yeah, there’s a lot
going on there. I mean, the cross-cultural question is interesting. To be honest, I'm not super familiar with the cross-cultural research. I know there is a big literature on cross-cultural assessment of personality, and a lot of work does get done when taking a test and translating it into another language. My intuition is that most of the variability in personality test scores is between individuals within cultures. Does that make sense?
Even though culturally you might get some differences, you still get a lot of variation regardless of culture: there'll be some people who strongly agree or strongly disagree and others who are more moderate. But there is always a risk when you combine norm groups. So if you have one norm group that's, say, American entrepreneurs, and then you get undergraduate psychology students to take the test, even in Australia, psychology students are usually quite introverted and not out there pushing themselves,
and compare them to American entrepreneurs, it's going to look like everyone has a very low score on, say, extraversion. So that's a kind of classic mistake. Another classic mistake: somebody's in line for promotion, and the organization has already committed, saying, okay, we've observed your behavior for the past four or five years, whatever it is, and we've been pretty impressed, so we'd like to recommend you for promotion. It's then weird to go,
Renata Bernarde (36:31)
Yes.
Patrick Dunlop (36:54)
but actually, you've just done this test, and that invalidates all those observations of your behavior. Because ultimately, the best predictor of how somebody will behave tomorrow is how they behaved yesterday, the day before, and so forth. People are surprisingly consistent. The whole point of a personality test is that you use it when you don't have that data, when you don't have ten years of historical data. That's when a personality test is useful, because essentially a personality test is somebody
trying to summarize, in a very structured way, who they've been for the last however many years. When you've got the luxury of access to how somebody has worked in the past, you should never throw that away in favor of some standardized test. And the third factor is: when you give somebody a test for one particular reason, it's not really fair to then use those test scores for a different purpose. Something like a cognitive test,
Renata Bernarde (37:49)
Hmm.
Patrick Dunlop (37:52)
you know, cognitive testing is pretty hard to fake. You can deliberately do badly on a cognitive test, but you can't fake good. You can't temporarily inject yourself with some ability serum, ace the test, and then revert to your natural state afterwards. But perhaps if you've been given a cognitive test for development reasons and you're not that invested in the development part, you might not take the test too seriously, right? So already there's a bit of a bait-and-switch problem. But with personality especially, I mean,
Renata Bernarde (38:15)
Mm-hmm.
Patrick Dunlop (38:20)
just thinking about what I was saying before about which hat you should wear when applying for a job: when you're applying for a job, you're trying to signal your best self. Your ideal worker self is what the organization wants to hire you on. But when you're thinking about development, you're more inclined to think about the areas where you're not your ideal self, and you might think more about those when taking the test, in the hope that there'll be something in the test results that
lends itself to a growth opportunity, right? So if you feel like you're not networking very well, even though you might be quite extroverted, you might see the networking items and go: actually, in this company, I need to do better. And you might disagree with those a bit more than you would if you were asked abstractly, are you a good networker? Well, yeah, I am good at it; it's just that in this company I'm struggling a bit. The idea would be that your networking score comes out lower, and then maybe the organizational coach will create an opportunity for you, a company conference or a leadership program or some other way,
to build your social network. But to then take those test scores and say, no, no, sorry, surprise, this is for a job application, it just happens to be an internal job, that feels a bit underhanded. I don't know, maybe there was malice, but I would say this is one of those innocent mistakes that somebody could make without thinking through the consequences.
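The norm-group mistake Patrick mentioned earlier, comparing a raw score against the wrong reference population, can be sketched with a toy standardization. All the groups and numbers below are invented for illustration:

```python
# Toy sketch of the norm-group mistake: the same raw extraversion score
# looks very different depending on which norm group it is compared
# against. All numbers are invented for illustration.
from statistics import mean, stdev

def z_score(raw: float, norm_group: list[float]) -> float:
    """Standardize a raw score against a norm group's mean and SD."""
    return (raw - mean(norm_group)) / stdev(norm_group)

# Hypothetical extraversion raw scores (0-50 scale) for two norm groups:
entrepreneurs = [38, 42, 40, 45, 41, 39, 44, 43]   # outgoing, self-promoting
psych_students = [22, 25, 28, 24, 27, 23, 26, 25]  # more introverted sample

candidate_raw = 30.0

print(round(z_score(candidate_raw, entrepreneurs), 2))   # negative: looks far below average
print(round(z_score(candidate_raw, psych_students), 2))  # positive: looks above average
```

The same raw score of 30 comes out strongly negative against the entrepreneur norms and clearly positive against the student norms, which is exactly why scoring candidates against an unrepresentative norm group misleads.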
Renata Bernarde (39:24)
Mm-hmm.
Yeah.
Yes.
I know. I feel like sometimes people overestimate what global companies do: oh, it's a global company, it's so well known, of course they have the best practices and their managers know what they're doing. No, they don't. Just this morning I spoke with a client who is, again, applying for a job at a very well-known global brand, and
the process is horrible. The whole recruitment process has been really draining and long. You would imagine that with a large HR department they wouldn't do that sort of thing, but it happens all the time. And talking about candidate experience, I hear this from my
clients all the time: the process seems more opaque and weird now than it was maybe three or four years ago, when they were looking for work before the pandemic, or even during it. Now the candidate experience has become so poor that they feel ghosted, they feel neglected as candidates. It seems like the recruiters really do not have time for them anymore.
What do you think are the simple things that employers, organizations can do that will make a difference to that candidate experience moving forward?
Patrick Dunlop (41:16)
Yeah, yeah, there are a few pieces there. In the context of, say, an assessment, putting aside how long it takes to get to that assessment and to get feedback and so forth, what candidates seem to value most is having an opportunity to show off their capabilities, to perform. So if you give them an interview and then ask strange questions that have nothing to do with their work history, they'll probably be very frustrated by that. And this is why things like abstract reasoning tests often
Renata Bernarde (41:34)
Yeah.
Patrick Dunlop (41:45)
irritate people: it's like, I've applied for a job as a senior executive; am I really going to be sitting there solving diagram puzzles when I'm at work? No, I'm not. So I'm not really getting the opportunity to showcase my skills. That's in the assessment context. But in terms of the process, I think what is often missing is the communication piece, right? And I don't just mean the template emails: dear, insert name,
we apologize, but we could not shortlist you because of, insert reason. It's super insulting getting a letter like that. And what's probably driving it, look, I don't know, it'll vary by organization. One thing I will say about candidate experience is that it has, paradoxically, received more attention. There's now a company in Australia called Benchmark CX, which sells a dashboard product for benchmarking your candidate experience.
And I know a guy, Kevin Wheeler I think, who ran an organization that used to be called the Talent Board, though they've rebranded. They would have a CandE Award every year to celebrate the company with the best candidate experience. And there's a whole bunch of researchers from the user experience discipline, from technology, who are obsessed with UX, right? A lot of UX probably went into this podcast platform, for example, so they know where to put buttons and things. Some of that has bled into candidate experience.
I think a lot of that relates to the design of the tech in applicant tracking systems and so forth. So it's less about the psychology of it and more about, well, one thing I see people complaining about online is when an application form asks you for all these details and then asks for your resume. You give it your resume, which has all those details in it, and then the rest of the form asks you to repeat the same details that are already in the resume. That drives people crazy, obviously, because it takes a long time to fill these stupid forms in.
So that kind of basic candidate experience stuff is more UX stuff; it doesn't take an org psych to tell you why it's annoying. A lot of these systems are legacy. They weren't designed for scale, right? They weren't designed for a world where people need to send 50 applications off in a week. And then on the other side, you've got
AI being a real cause of noise. A lot of recruiters are now dealing with AI-generated slop. They're not real applicants; somebody's fired off 10,000 scattergun AI applications. If you get enough of those, your systems just get clogged up. You look at a spreadsheet, three quarters of them aren't real people, and you can't send individualized replies with detailed feedback to all of those people, especially knowing
half or three quarters of them might not be real. So I also have sympathy for recruiters, particularly if they're in an organization that doesn't see candidate experience as a priority. They might just see it as a cost. I understand recruitment is often regarded as this annoying thing that has to be tolerated, rather than as a resource or a strategic function.
Renata Bernarde (44:33)
Yes.
Tell me about these bots that are sending out job applications. Why are they clogging up the system?
Patrick Dunlop (44:55)
Yeah, I mean, probably some people would just like to watch the world burn, so they just do it. I don't have any data to support this, but my hunch is, you know, hostile governments, for example, that don't like democratic countries. There's already evidence of bot farms sitting around social media causing trouble, playing both sides of a debate and polarizing people. But another way you can cause democratic societies to grind to a halt is to just
Renata Bernarde (45:00)
Really?
Patrick Dunlop (45:24)
fill all their systems with crap. So there are probably individuals who engage in that kind of behavior because it's cheap and easy to do. There are also job seekers who don't really buy into the strategic approach: rather than thinking, okay, what do I want to do, where do I want to be, where should I apply, it's just, no, there's an ad, I will apply for it, because I just have to press a button and it's done. And there are also third-party services that specifically offer this as a product.
I won't name them, but there are a few out there that, well, not guarantee, but they promise you higher chances of a job with the help of AI. So I think it's a combination of these things. There's also probably a bit of desperation. We're in this weird stage where we don't really know how the economy is doing. The latest figures were positive but not mind-blowing. There's a lot of anxiety about entry-level jobs: will they be replaced by AI, or are we just going through an economic downturn, and that's why the
numbers are falling in certain countries? We just don't know. So I totally understand why, as a job seeker in this environment, you get desperate and think, stuff it, I'll just send my AI-written cover letter to hundreds of companies and hopefully one or two of them will stick. And as a recruiter dealing with that, it must be incredibly frustrating.
Renata Bernarde (46:44)
Yes, it's interesting you mention that. A few months ago, I was on a webinar panel with another podcaster who is very well known, has a very good podcast, and his whole spiel is: send hundreds and hundreds of applications. I'm the opposite of that, so it was weird to be on the panel with him. We were trying so hard to be respectful of each other's points of view, but clearly we have very different
ways of working with our clients. I haven’t seen that work for my clients, frankly, you know, the hundreds and hundreds of applications, maybe because my clients are more experienced and they need to sort of think about where they’re niching down in terms of sending their applications to. And if they send hundreds, it’s just going to be, even though it seems like it’s low effort because you can automate a few tasks, it’s still a lot of work.
Patrick Dunlop (47:39)
Mmm.
Renata Bernarde (47:40)
It’s always a lot of work. So I want to talk to you about these new technologies. You mentioned AI. Another thing that I think you have worked on as well is asynchronous video interviews, and I find it really interesting. I actually interviewed a Perth-based founder, you’re in Perth, this guy was in Perth as well, the founder of a platform that only does asynchronous video interviews. And I want to tell you another story that I think is really telling. When I tell people that they cannot age out of their professions, this is a great example. I had two very senior executives working with me. Both of them were asked to do a video as part of the initial application for CEO roles, right?
There is an executive search firm in Australia that has that as one of their standard requirements. One of them had a massive breakdown. She’s like, I cannot believe that I need to do a video, this is ridiculous. She was so upset and annoyed about this. And look, I get it. She has a reputation. She was already a CEO, well known in her field. Why do I have to do a video selling myself for this opportunity? You know, she didn’t refuse to do it, but she did a very poor job. She didn’t put any effort into it at all. The other one, bless her heart, did the best interview I’ve ever seen. It was better than my interviews. I’m like, how did you learn how to do this so quickly? It was basically an overnight learning process for this woman, who
found a way to make it work. She even included banners underneath for each of the topics she was talking about. I thought it was brilliant. I wish I could show people. And that’s the difference when it comes to ageism in our profession, in job search and talking to recruiters: one adapted super quickly and sent it out. And guess what? She got the job. She’s actually now the CEO of
a quite well-known industry association. The other one did not get the job, which was also a CEO role at a large organization. What do you make of this? How can people adapt and not feel so… I don’t want my listeners to feel like they’re going to lose their status quo if they need to adapt to different things like…
Patrick Dunlop (50:11)
Mm.
Yeah.
Renata Bernarde (50:34)
an asynchronous video interview.
Patrick Dunlop (50:37)
Yeah, yeah, this is always tough, because the assessment world is often, it’s not led by academics, right? People like me don’t come up with the latest and greatest assessments, publish them in an academic journal, and then all the companies take our research and use it for design. No, that’s not how it works. Usually it’s some tech company that comes up with some wacky new method, and then we come in on our high horses about two years too late to say, hey, this doesn’t actually work, we shouldn’t use this, right? So it’s all back to front. You’d think innovation happens at universities, but not in this space, unfortunately. I think the video thing’s interesting, right? I sort of see two things. One is the “upload two minutes of yourself saying whatever you feel you need to say” type. That gives me the heebie-jeebies, that type of assessment. And the reason it does is because it’s a little bit like having a coffee with your recruiter and that’s the interview. Just having a nice chat at a cafe, it’s very unstructured. If you’re not giving people a very clear understanding of what specifically you’re looking for in the video, what will be attended to and what will not be attended to, and how you will be judged, you’re taking a huge risk of introducing an assessment that is incredibly biased in many ways that you won’t even know, and
also may not actually indicate anything at all that’s useful. So, for example, knowing to put that little banner in. I mean, when the company designed this, did they sit around a table and go, yeah, we should give five points to anyone that puts a banner there and no points if they don’t, but also we’re not going to tell them there are five points for a banner? I bet you they didn’t even talk about banners, and instead what happened was they saw the banner and went, that’s clever.
Renata Bernarde (52:01)
Mm.
Patrick Dunlop (52:25)
And that, to me, is incredibly alarming, because it means they’re just building the plane while they’re flying it. You know, they don’t even know what they’re looking for; they’ll just know it when they see it, and that’s a really terrible way to design assessments. The risk there is you might discover that people like to see white men, for example, right? They don’t realize it preemptively, but that’s just their mental model of a CEO. Or maybe they like young women, right? And that’s the thing, that’s just what the video shows. So make sure you’re a young woman, and
Renata Bernarde (52:42)
Yes.
Patrick Dunlop (52:51)
especially one that makes banners, right? So no disrespect to the person that got the job. No doubt she is capable for many other reasons, but that type of assessment is incredibly risky. And I think age probably isn’t even the biggest factor there, right? Especially for executives, they’re not spending their days making videos and working with this kind of tech. If they’re doing that, it’s got to be a hobby, because you don’t want to be paying your executives to make videos.
Renata Bernarde (53:18)
Yes.
Patrick Dunlop (53:19)
So it’s totally understandable that they’re not on top of all this tech. But going to the asynchronous video interview, that is a piece of recruitment tech that I do have a bit more time for. And again, it comes down to how well it’s designed by the users. One of the best assessment tools we’ve ever invented is the structured interview, as I mentioned before. And what I didn’t talk about was what the ingredients of a structured interview are. The most obvious ingredient is that everyone gets asked the same questions, right? And I think a lot of people think,
Renata Bernarde (53:33)
Mm-hmm.
Patrick Dunlop (53:48)
As long as they’re doing that, they’re doing a structured interview. But it’s actually a spectrum, right? You can have no structure at all, or a ridiculously extreme amount of structure, and somewhere in the middle is where most structured interviews fall. But one of the things about asynchronous video interviews is they are very structured. They ensure that every candidate gets exactly the same questions as every other candidate. They also ensure that every candidate gets exactly the same amount of time to think about their response and to deliver it. So on that front, they’re
Renata Bernarde (54:15)
Mm-hmm.
Patrick Dunlop (54:17)
They’re very fair and systematic. Also, a lot of these asynchronous interview companies have set up systems which allow you to create rating scales before the interviews start. You and I could write a rating scale for an interview and then give it to a team of recruiters and say, when you look at the answers, I want you to judge everybody using this rating scale. And we can even do some training. We can explain, okay, this is what a five means, this is what a four means, this is what a three means. And also, these are the things you should pay attention to. You should pay attention to what they’re saying, but don’t pay attention to how they’re dressed. Don’t pay attention to anything going on in the background; that’s not important. We don’t know what lives people live; they’re doing these things at home. So it’s really important that you just focus on their words, and not how they look or whether they’ve got an accent and all that. So if you do all that training, you can actually improve the standardization and rigor of these interviews.
And the other great thing about them is you can reach lots of candidates at once. You can send one invitation out to a thousand people, and they can log in whenever it’s convenient for them. Whereas with traditional interviews, you know: are you free Friday at 12? Sorry, can’t do 12, what about one? And you have to fly to Perth from Sydney for this, and there’s a time difference. So the one-way interviews get around that problem very nicely. But there is a price you pay, and one of the prices is,
first of all, they feel awkward, right? The first one you do always feels like the worst, but most of the people I speak to about these now say that after you’ve done one or two, it’s not so bad. And most platforms now offer a practice question or two. The other thing that does, I have to say, really suck about them is you don’t get your chance to speak, right? You don’t really get your chance to ask questions. So I would recommend to any employers that are using them, make sure you allow that somehow, right? Somewhere else in the process, you have to…
Renata Bernarde (56:02)
Hmm.
Patrick Dunlop (56:08)
create an opportunity for candidates to ask questions. And also, make sure you reassure them that that opportunity exists, so they don’t worry about it while they’re doing their interview. But if you’re going into a one-way interview, the first tip I would give is to train yourself to get used to the experience of speaking into a webcam and making eye contact. When these things first came out, we weren’t doing Teams meetings as routinely as we are now, but thankfully I think a lot of us are a bit more used to
Renata Bernarde (56:27)
Yes.
Patrick Dunlop (56:36)
collaborating virtually, so that certainly helps. But get practice. It’s great when someone’s nodding, like it’s really helpful that you’re nodding as I speak, for example, but when you’re speaking into the void, it’s hard to get that feedback. So it’s definitely worth practicing that. And the other thing is most of these interviews don’t give you a lot of time. Typically an interview question will require you to describe a situation, what you did, what effect it had,
you know, and so forth. So I think it’s a good idea to rehearse those, right? Think about what three or four examples, or what one example, you want to give to showcase your problem solving or your communication or your leadership, and get your script down to under two minutes, because you’ve got to remember a recruiter is probably looking at a lot of these, and they’re not going to want to hear a rambling, lengthy story. So it’s all about practice, I think, and getting over that initial awkwardness.
Renata Bernarde (57:27)
Yes, all of these things are so new to my audience, right? We sometimes forget that ChatGPT has only been around for three years, and asynchronous video interviews are maybe four or five years old. And I think there’s just so much else coming our way. What do you think will be the trends we need to look out for?
Patrick Dunlop (57:55)
Yeah, it’s an interesting question. I think with AI in the hands of candidates, employers are a bit worried about the integrity of online assessments, and so they may start doing things about that. Some of those things will be ham-fisted and careless; others will be a bit more thoughtful and considered. So I’ll talk about that first, and then I’ll talk about the opportunities I see with AI. One of the trends is you’ll start seeing organizations being quite menacing and threatening about AI use, right? They might say things like, oh, you cannot use AI; if we detect that you’re using it, you will be excluded. And they’ll be quite unforgiving, right? They probably won’t wait until they’re 95% sure. They’ll probably say, we’re 50% sure, that’s enough for us, it looks like you were reading from a script. Because they can reject you for any reason; they don’t have to be public about why. I think we’ll see more of that.
You might see some employers be a bit more philosophical about it. Already there are some examples where they say, no, you should use it, we want you to use it, just don’t lie with it, right? Use it to upsell yourself. For example, before I said, try to get your stories down to two minutes. Well, I would think a reasonable use of AI would be to take your story that’s ten minutes, get it down to two minutes, extract the important bits, and then use that as your response. The key there is that the story is true
and the behavior I’m telling you about is real. And that’s what the employer wants to know. So we might see more organizations embracing that kind of philosophy. We might also see a pivot away from online assessment, with more verification and more face-to-face stuff, like in the good old days, you know, when we all had to go into lecture theaters and testing centers to take tests.
The other thing we might see, I’m not sure at the level you’re mainly thinking of, but certainly for high-volume assessments, is more use of proctoring software. So, say you’re taking an aptitude test in your web browser: there’ll be software that’s capturing, have you got other tabs running? Have you switched between different windows? How are you moving your mouse around? It’ll take photos with your webcam to see if there’s a second person in the room, or what have you.
I know that’s big business right now, and the concern is around the integrity of assessment. But I wonder a bit about the half-life of that. I kind of feel like at some point people are going to get fed up with this invasive technology. Australia doesn’t tend to move very quickly in the tech space, but you might see some action in Europe. I think some of these technologies are already not legal there, while in other countries they’re used routinely. So I think that will be another trend. And on the more opportunistic side,
I mean, already there are AI interviews, right? And I’m not talking about AI scoring the interview, although that’s definitely a thing that’s taken off, and we can talk about that as well if you want. I mean being interviewed by an AI avatar, right? So it’s not Renata asking me questions, it’s a digital version of you that’s got ChatGPT under the hood, and it’s generating questions and then I’m responding. It’s interesting. One of my friends works for a company that does AI interviewing.
And he was telling me he went to Japan recently and discovered that the Japanese love this kind of technology. Obviously there’ll be individual differences, but in the main, people there were really positive about being interviewed by this anime-style avatar, whereas when it speaks to people here, it gives them the creeps, right? They find it a bit dumb. So yeah, that was an interesting cross-cultural difference, but we’ll see more of that. And we may even see multi-agent assessments. Already I know there is recruitment tech that…
Renata Bernarde (1:01:12)
Really?
Yeah.
Patrick Dunlop (1:01:34)
puts an AI in the room whose job is just to nudge the interviewer: push on this a little bit further, or ask the candidate to clarify that. Or it might give feedback to the interviewer afterwards: hey, look, you rambled a bit, make your questions a bit shorter. But we might even see systems where the AI agents are interacting with each other and the candidate. Think about how hard it is to create a high-fidelity simulation involving multiple parties. It’s hard enough to get one actor in the room to role-play, you
know, a disgruntled customer. But imagine you want to see how somebody leads or manages a team that’s in the middle of a fight, with the members arguing with each other. Well, you can imagine a room with four AI agents, where two of them are yelling at each other and the other two are stressing out, and all that. So I think it’s possible that this sort of assessment could be done at scale. I mean, look, it’s easy to say; there are companies out there designing this stuff now, and they’re probably discovering right now that it’s not that easy to do, because…
there are lots of problems with AI: it hallucinates, someone could jailbreak it, and just saying the right thing might trigger one of the agents to start spouting far-right propaganda instead of playing a disgruntled customer. So I think there are a lot of interesting problems these companies will be tackling. But, you know, I don’t know how many years away we are from that. Probably not many.
Renata Bernarde (1:02:52)
All right. So if we think about 2026, when this episode is going out, it’s the beginning of the year, and you’re an experienced manager getting ready to look for work. If you were in that situation, Patrick, how would you spend your time? What would you do to increase your chances of succeeding in your job search?
Patrick Dunlop (1:03:17)
Yeah, yeah, I think, well, unfortunately, resumes are still used. So I would certainly want to polish the resume and get lots of feedback on it. You know, feedback is a funny thing, because everyone will have an opinion; at some point, you’ve just got to make a decision. I’d probably run it through some keyword algorithms, see if you can find some websites that try to identify: what is my resume screaming? What are my skills, according to your analysis? Those sorts of services might be helpful.
I think the other thing we didn’t talk much about is the power of social networks, right? We don’t like to admit this because it’s quite a disturbing truth, but who you know matters a lot. There are some types of roles where you can apply from another country, get the role, and in you go, those kind of high-volume things, where you don’t need a big social network. Actually, when I think about it, my first job was flipping burgers for a fast food organization. I applied for it because I knew people who worked there.
That’s probably not why I got the job, but it certainly was the reason I applied. For a job like that, yeah, it’s probably less about who you know. But as you move up the hierarchy, organizations are quite risk averse. And there is actually evidence to suggest that recommended candidates do tend to outperform candidates who have just come out of nowhere. And if you think about it, it makes sense. When I recommend somebody, I’m putting skin in the game; if that person’s no good, it makes me look like a bit of a muppet.
And also, they know I recommended them and they don’t want to disappoint me, so they will put in more effort, and so forth. So the mechanisms make sense, but they create this horrible, unfair, kind of nepotistic system which nobody publicly wants to acknowledge. But it is definitely real. So probably my best advice would be: if you don’t know many people, change that, right? Find ways to meet people. If you’re a migrant, find your expat community and socialize with them. Get affiliated with a professional body, attend their networking events, volunteer with them, find ways to get to know people. I know it’s easy to say this, and you may be under a lot of financial pressure and working already, but these are the things that I think help.
Yeah, I think the social network is probably way more important.
Renata Bernarde (1:05:35)
Okay, all right. Patrick, this has been so good, this conversation with you. I am so happy that we connected through our network. Somebody introduced us, so there you go. I think you have such an interesting take on assessments and this whole recruitment process. I loved what you said about the future and what potentially is to come. I love this idea of
Patrick Dunlop (1:05:47)
There you go.
Renata Bernarde (1:06:03)
the AIs interacting and providing that opportunity to do simulations with a candidate. That sounds really interesting. Thank you so much for coming on the podcast and sharing all of this with us. You can come back anytime if you have new research and new things you want to discuss. I don’t think we tackled even half of the questions we were supposed to today, so we can definitely do another episode in the future.
Patrick Dunlop (1:06:31)
Wonderful. Thanks so much for having me. I’ll be very happy to return.
Renata Bernarde (1:06:35)
Thanks, Patrick.
Patrick Dunlop (1:06:37)
Excellent, thank you, bye bye.
Renata Bernarde (1:06:38)
Okay.