March 25, 2026

52% of NZ workers distrust AI hiring tools your company already uses


The Woolworths incident is the symptom, not the disease

When a Woolworths job applicant recently received AI-generated personality feedback that read like a horoscope written by a malfunctioning chatbot, it became a viral moment. But the real story is not one bad candidate experience. It is the growing body of evidence that automated hiring tools, adopted to save time and money, are quietly undermining the employers who use them.

The numbers are stark. A GoGetta and Ipsos survey of 1,000 New Zealand workers found 52% are uncomfortable or very uncomfortable with AI being used to decide who gets hired. That is not a fringe concern. It is a majority of the workforce telling employers they do not trust the process.

GoGetta founder and CEO Colleen Getley put it bluntly: “When you are applying for a job, it’s your life… to get screened out by a non-human is really uncomfortable.” She warned that “disengagement and mistrust can quickly become a business problem.”

Employers already know the tools are failing

Robert Half research reveals that 37% of employers acknowledge their automated screening misses strong candidates, while 36% say AI-generated CVs are making it harder to assess talent quality. Nearly 98% of Kiwi employers report challenges distinguishing top candidates.

Ronil Singh, director at Robert Half, is direct about the cause: “With advances in technology prone to error, along with uniform formatting and templated language driven by the rise of AI-generated content, distinguishing candidates and accurately assessing their true skills and suitability has become increasingly difficult.”

The downstream costs compound. Careerminds research found only 21.4% of HR teams reported AI fully replaced roles without operational issues. A third lost critical skills and expertise. And 41.2% said they would approach AI-driven decisions differently if given the chance. That is a remarkable admission of regret from the people who signed off on the tools.

Machines talking to machines

The absurdity deepens when you consider what is actually happening in the screening pipeline. Most AI tools extract keywords from job ads and match them against keywords in CVs. Candidates now use generative AI to write CVs. Employers use it to write job ads. The result, as the GoGetta survey noted, is machines talking to machines with no meaningful human intervention.
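To see why this matters, consider a minimal sketch of naive keyword-overlap scoring (the function and sample text below are invented for illustration, not any vendor's actual system): a CV generated from the same templated language as the job ad echoes its phrasing almost verbatim and scores highly, while a strong candidate describing real work in their own words can score near zero.

```python
def keyword_overlap(job_ad: str, cv: str) -> float:
    """Naive screening score: fraction of job-ad keywords that appear in the CV."""
    stopwords = {"the", "a", "an", "and", "or", "to", "of", "in", "for", "with"}
    ad_terms = {w.strip(".,").lower() for w in job_ad.split()} - stopwords
    cv_terms = {w.strip(".,").lower() for w in cv.split()} - stopwords
    if not ad_terms:
        return 0.0
    return len(ad_terms & cv_terms) / len(ad_terms)

ad = "Seeking a detail-oriented analyst with stakeholder engagement experience"

# A CV written with the same generative tooling echoes the ad's phrasing:
templated_cv = "Detail-oriented analyst offering stakeholder engagement experience"

# A genuine candidate describing the same skills in their own words:
genuine_cv = "Built reporting pipelines and briefed executives on findings"

print(keyword_overlap(ad, templated_cv))  # high score from echoed phrasing
print(keyword_overlap(ad, genuine_cv))    # near zero, despite relevant skills
```

The toy scorer cannot distinguish skill from phrasing, which is exactly the failure mode the survey describes: when both sides of the pipeline are machine-written, the match score measures template similarity, not suitability.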

Victoria University of Wellington professor Dr Simon McCallum has warned that once an automated decision is made, humans rarely override it: “They’ll just use whatever the computer says or default to a ‘computer said no’ situation.”

AI expert Dr Karaitiana Taiuru raised a sharper concern, noting that training data could easily reflect bias against Māori, Pasifika and immigrant communities. For employers, that is not just an ethical problem. It is a human rights liability waiting to materialise.

The right way exists but most employers are not using it

This is not an anti-AI argument. Used properly, these tools deliver real gains. Alan Price, global head of talent acquisition at Deel, says AI can cut initial screening time by up to 90%, freeing HR to focus on structured, skills-based conversations. Emma Davison, CEO of Virtual Headquarters, says AI “doesn’t replace judgement, it sharpens it” by handling initial sorting so leaders can think deliberately about who they hire.

But personality assessments, the kind that produced the Woolworths debacle, are a different beast. Frog Recruitment’s analysis warns that businesses introduce AI expecting to reduce headcount quickly, “only to find that service quality slips, review processes become more cumbersome and decision-making suffers.”

New Zealand is more exposed than it realises

Treasury’s July 2024 analytical note found that New Zealand’s higher-skilled economy is more exposed to AI impacts than many peers. But Treasury also noted the country’s traditionally slow technology diffusion and low investment in intangible capital, meaning employers risk getting the disruption without the productivity gain.

A 2024 survey of government identified 108 AI use cases across 37 agencies. The technology is embedding fast, with no AI-specific employment legislation to govern it.

For business owners, the calculus is straightforward. AI screening that produces garbage personality assessments does not save money. It costs you the candidates you actually wanted, damages your brand with everyone who applied, and leaves you exposed when regulation catches up. The employers who win from here will be the ones who treat AI as a tool that needs supervision, not a replacement for thinking.
