We all know that it is important to make a good first impression, especially in a job interview. Still, it is slightly disturbing to see scientific proof that (a) that impression is formed by the time we have spoken just seven words and (b) we are doomed unless those words come out in an accent like Jacob Rees-Mogg’s. Not that I imagine the arch-Brexiter has been to many job interviews (though who knows what December might hold).
According to new research from Yale University, when we hear someone speak we form near-instantaneous conclusions about their social class. Michael Kraus, assistant professor of organisational behaviour at the Yale School of Management, reported that, even during brief interactions, speech patterns shape our perceptions of competence. And people are able to judge social class with reliable accuracy merely from hearing seven random words.
“While most hiring managers would deny that a job candidate’s social class matters, in reality, the socio-economic position of an applicant or their parents is being assessed within the first seconds they speak,” he explained.
A chilling thought. No one actually intends to take their parents into a job interview, but it seems that they sneak in anyway. How should we counter this? Make everyone go the full Marcel Marceau? “Tell us about your previous experience using mime, Play-Doh or the medium of interpretative dance.”
In one part of the study, 274 managers with hiring experience compared audio recordings and transcripts from 20 candidates without knowing any details about the applicants’ qualifications. The managers were far more likely to guess a candidate’s socio-economic background correctly if they heard their voice than if they read their written words. Worse: the higher an applicant’s perceived social class, the more likely the managers were to judge them a good fit and award them a higher salary.
Ouch. Imagine that — free money just for sounding posh. No wonder my Mancunian grandmother used to try to make me repeat “How now, brown cow” in Margaret Thatcher’s plummy tones.
The proof of the power of these biases is mounting not only because of studies like these but also because of advances in artificial intelligence and digital metrics. I spent this week at the Professional Speechwriters’ Association’s annual conference in Washington, where Noah Zandan, head of Quantified Communications, gave a fascinating — and slightly terrifying — presentation. His team has pioneered human-trained AI technology that teaches machines to measure the impact of how we communicate, using over 1,400 metrics (voice, accent, gestures, choice of words).
His research suggests a more generous 15 seconds to make a first impression. Worse, though, for the speechwriters (who were turning white by this point), the machine judges that content counts for only 11 per cent of our impact when we talk. Passion, expertise, voice and presence are all twice as important.
Conclusion: no one cares much what you say — they care how you look and how you sound. I was judged to have “cheated” in my speech because I have a British accent. Apparently this gives you a “perceived intelligence” bounce with American audiences. If only they knew.
So how do we get around this bias? We could look to the many observable exceptions to the rule. In the 21st century, some non-posh voices stand out as more authentic and distinctive. Witness the lucrative careers of television personalities Danny Dyer, Stacey Dooley, or the cast of Love Island.
It’s wise also to remember that job interviews are a two-way street. You are interviewing the interviewers too. Would you really want to work for a company stupid enough to hire someone just because they sound as if they have a plum stuck up their bottom?
The writer is the author of ‘How to Own the Room: Women and the Art of Brilliant Speaking’