It’s Not the Tech, It’s the Fear
This article is inspired by a conversation I had recently while doing some homework on AI technology evaluation within HR teams. Someone shared a really powerful anecdote with me about the importance of psychological safety (and, sometimes, the lack of it) during technology implementations. The conversation was incredibly brief, no more than a minute or two, yet I have found myself going back to it repeatedly.
I think I finally figured out why that story is so sticky for me: I talk A LOT about how technology makes us more human by freeing us to be more creative and to focus on uniquely human strengths (here’s the link to that newsletter edition; call me an eternal optimist). I have yet to talk about the other aspect of being human: fear.
As a species, we have been wired by millennia of evolution to survive. That survival instinct means fear is a condition of our existence. We are afraid of things that threaten our perceived identity in society; we are afraid of things that may make us feel less valued and esteemed; and we are afraid of things that threaten our ability to provide food and shelter for ourselves and our loved ones.
The one thing in 2025 that all of those fears have in common? Artificial Intelligence. The way AI is theorized, portrayed, and applied in today’s work environment makes us instinctively afraid for our survival.
I believe part of the reason we see limited ROI on AI investments is that the premise of AI tech implementations goes against the very instincts of the people tasked with implementing them. Put yourself in this scenario for a moment: you’re working on an AI tech implementation plan, and you get a ping on your phone about how company X has eliminated thousands of positions and is placing its bets on AI. How would you feel? Would you be as inclined to put your best effort into the technology you’re implementing?
Whatever you felt just now is exactly why psychological safety matters more in 2025 than ever. No modern coach or OD guru will outsmart millennia of human evolution for your entire organization, and your workforce is much smarter than you give them credit for. So, instead of selling the usual “productivity is good for you and the company” line and fighting an uphill battle against human nature, we should figure out how to work with it, and psychological safety at work is a pretty good starting point.
Psychological safety is the belief that one will not be punished or humiliated for speaking up with ideas, questions, concerns, or mistakes. The topic has been widely studied since the 1990s, and research has consistently shown that:
It is a key driver of performance in team environments (read: your teams get stuff done faster and better in a psychologically safe working environment)
It positively impacts individual task performance (read: better execution, more accountability, fewer delays)
It drives innovation in organizations (read: problems are solved faster and more proactively, and you get better error detection)
TLDR: it’s a thing.
While this topic has been well studied for decades, and workplaces have tried a few band-aid solutions along the way, the rapid rise of AI has made psychological safety both harder to achieve and more critical. Here’s why:
Job-loss anxiety: This is the most obvious one, as 41% of US workers believe AI will make their duties obsolete. It’s hard to implement technology, let alone drive its adoption, when people are worried about that technology’s potential impact on their ability to keep food on the table and a roof overhead.
Missing accountability: Imagine shaking a Magic 8-Ball to answer every question you get during the day. Without explainability, without an understanding of how the AI tool works, that is how it feels for an employee asked to hand parts of their typical workflow over to AI. Who is responsible for a decision or action when “the system told you to do it”?
Data privacy & confidentiality: When someone entered voluntary information into a benefits administration system previously, they got helpful benefit selection recommendations. Now, most users will pause and wonder how, by whom, and what this information is used for. And they will probably skip all optional data fields.
Broken trust equation: Your workforce doesn’t trust your intentions with AI, especially when AI-enabled monitoring systems (keystroke trackers, webcam analytics, and the like) keep making news headlines. In the relentless pursuit of productivity and profitability, we have made choices and sacrifices along the way, and the unintended consequence is eroded trust between companies and their workforces.
Change fatigue: Digital transformation used to be a once-every-few-years effort. Now it seems like we are undergoing one every quarter. However necessary for the business, people are exhausted, and that exhaustion translates directly into lower psychological safety and weaker adoption of whatever technology comes next.
Increasing overall organizational psychological safety is an ongoing effort (and, IMO, we have a lot of deficits to recover from on that front). But if you are tasked with selecting and rolling out AI-enabled HR technology for your organization, there are a few immediate things you can do at each stage to boost psychological safety and, with it, adoption.
Discovery & Evaluation: Those cross-functional RFP calls with 10+ people and less than 5 minutes of talking time per person on average can feel really “ranked”, where SMEs with valuable inputs may not feel comfortable bringing up their perspectives. Try using an anonymized Q&A board or channel during this stage. Give people the space and comfort to voice their questions and concerns without fearing judgment or retribution.
Demos & Selection: Sales teams often focus on understanding the buyers’ needs, but not all buyers truly understand their end-users’ needs. Instead of following the vendor’s script and having senior voices in the room raise questions, have your vendors walk through one real prediction (using scrubbed or masked data from your organization) and let the frontline users poke holes in it.
Design & Configuration: 100% clean data doesn’t exist (well, unless you have a system freeze in place). Instead of creating hesitancy in data sharing during the configuration phase (which may impact the quality of your model training) by insisting on having perfect data in the system, try treating dirty data as a system issue that needs to be resolved as part of the implementation phase, and not a finger-pointing exercise.
UAT / Pilot: Repeat after me: “System bugs are good. System bugs discovered during UAT are even better.” UAT testers and pilot users may feel that reporting too many bugs will make the system owner look bad, so they report only what is absolutely necessary. That helps no one: the bugs are still there post go-live, and by then you no longer have a dedicated implementation team to fix them. Next time, try a “bug bounty” hunt and reward those who report the most bugs during testing.
Go-Live & Hypercare: This section deserves its own airtime, but for now, start with coaching the correct end user behavior by setting the expectation that the AI tool is not always right, and provide nudges to end users on when to trust and when to challenge the algorithm. Also, anonymous and live feedback channels can be helpful here.
Post Go-Live: Dedicate time to pulse-check users’ experience, and put together cross-functional teams to review model drift, surfaced biases, and known adoption blockers (a simple drift check is sketched below). Go-live is often when the real work starts, so make sure you allocate the right amount of attention and time to it.
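To make the “system issue, not finger-pointing” framing from the Design & Configuration item concrete, here is a minimal sketch of an automated data-quality punch list. The column names and sample records are entirely hypothetical; the point is that gaps get logged against the dataset and triaged like any other defect, not pinned on whoever entered the data.

```python
import pandas as pd

# Hypothetical HR extract; column names and values are illustrative only.
records = pd.DataFrame({
    "employee_id": [101, 102, 103, 103, 104],
    "job_code": ["ENG1", None, "HR2", "HR2", "FIN1"],
    "hire_date": ["2021-03-01", "2022-07-15", None, None, "2020-01-10"],
})

def data_quality_punch_list(df: pd.DataFrame) -> pd.DataFrame:
    """Summarize data issues per column so they can be tracked as system defects."""
    issues = pd.DataFrame({
        "missing_values": df.isna().sum(),
        "missing_pct": (df.isna().mean() * 100).round(1),
    })
    # Duplicate rows get logged as one more line item on the punch list.
    issues.loc["(duplicate rows)"] = [int(df.duplicated().sum()), None]
    return issues

print(data_quality_punch_list(records))
```

A report like this turns “your data is bad” into “here are the items we fix together during implementation,” which is exactly the reframing the phase calls for.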
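On the Go-Live point about when to trust and when to challenge the algorithm, here is one minimal sketch of a confidence-based nudge. The 0.75 threshold and the UI copy are my assumptions, not anything a specific vendor ships; the idea is simply that the interface itself coaches verification instead of blind acceptance.

```python
def nudge_for(prediction: str, confidence: float, threshold: float = 0.75) -> str:
    """Return UI copy that coaches users to verify, not blindly accept, AI output.

    `threshold` is a hypothetical cut-off; calibrate it against your own model.
    """
    if confidence >= threshold:
        return (f"Suggested: {prediction} ({confidence:.0%} confidence). "
                "Looks solid, but give it a quick sanity check before applying.")
    return (f"Draft only: {prediction} ({confidence:.0%} confidence). "
            "The model is unsure here; please verify against the source data.")

# Example: a low-confidence suggestion triggers the "challenge it" nudge.
print(nudge_for("Map role to job family 'Analytics'", confidence=0.52))
```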
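And for the post-go-live model-drift review, one common, lightweight metric is the Population Stability Index (PSI), which compares the distribution of model scores at go-live against the distribution today. This sketch assumes you can export raw scores from your tool; the 0.2 alert level is a widely used rule of thumb, not a universal law.

```python
import numpy as np

def population_stability_index(baseline: np.ndarray, current: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between two score distributions; ~0.1 = minor shift, >0.2 = investigate."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf           # catch scores outside the baseline range
    base_pct = np.histogram(baseline, edges)[0] / len(baseline)
    curr_pct = np.histogram(current, edges)[0] / len(current)
    base_pct = np.clip(base_pct, 1e-6, None)        # avoid dividing by or logging zero
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Example with simulated data: go-live scores vs. scores six months later.
rng = np.random.default_rng(7)
at_go_live = rng.beta(2, 5, 10_000)   # hypothetical score distribution at launch
six_months = rng.beta(3, 4, 10_000)   # shifted distribution later on
print(f"PSI = {population_stability_index(at_go_live, six_months):.3f}")
```

A check like this gives your cross-functional review team a shared, objective starting point before anyone debates what the drift means.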
When all is said and done, you can’t expect people to act against their instincts in the name of organizational productivity.
AI can bring out the best (creativity) and the worst (fear) in us. The beauty of being human is that we don’t have to label either end of that spectrum as good or bad; both are core to our existence. Instead of placating, or worse, fighting against these instincts during this period of technology-driven disruption, we should understand and appreciate them for what they are, so we can work with them instead of against them. Technology is neither good nor bad; what matters is what we are trying to do with it, how we convey that goal, and who we bring along on the journey to accomplish it.