My Problem with Ethical AI

Confession time: Ever since I came across the words “ethical AI” a year or so back, something about it didn’t sit well with me. I wasn’t quite sure if it was because of how it was framed in marketing literature, how it was thrown into panel discussions and newsletters as a buzzword, or how few academic sources get cited in conversations around ethical AI. Whatever it was, there was something about the logic behind the conversation that bugged me.

Fast-forward a year, plus a lot of research and discovery into how AI tech is trained and built for HR, and I can finally put my finger on why “ethical AI” doesn’t make sense to me.

This topic became an article because of a panel I was on last week. I was asked the question: Is AI ethical, and if not, how can we make it more ethical?

My response to that on the panel was, “We are asking the wrong question here entirely.”

Here’s why I said that:

AI is a Reflection of Us

All of the LLMs and decision-making algorithms that enable AI functionality in modern HR tech are trained on historical data and patterns. You can choose a generic model trained on broader, publicly available data or a custom model trained on your company- or industry-specific data, but either way, you cannot bypass historical data as the foundation of any Gen AI tool.

None of that historical data appeared out of the blue. It was captured from previous human decisions, processes, and structural setups. It is the abundance of data we generated in the past that has made AI possible.

So when we ask if AI is ethical, we are effectively asking whether our past actions, decisions, and systems have been ethical. And I am sure we all have our own answers to that particular question.

AI Amplifies Us

One of the reasons I believe ethics has come to the foreground of AI conversations is the technology’s capability to amplify the not-so-great decisions and designs that have lurked in how we approached people and talent management in the past. AI is forcing us to confront our historical decisions in a way we rarely had to before.

Humans are biased (consciously or unconsciously); bias is part of our survival mechanism. But now that we have automated many of our HR processes with AI in the name of efficiency and productivity, we get to see patterns emerge from our historical decisions at scale, and they are making us all say, “Wait a minute, this doesn’t look right…”

With this in mind, I would say that ethical AI isn’t about the technology. We are not debugging code here; we are effectively trying to debug culture.

Sidebar

As I am writing this, a new hypothesis is developing: I wonder if AI’s ability to amplify not-so-great past decisions is itself a barrier to AI adoption across the HR industry.

We hear concerns about whether the AI is ethical or biased quite a bit. But with this new lens on, I do wonder if “ethics” and “bias” are simply the two closest English words we have to describe the idea that, unlike previous technology iterations, AI implementation is not just about putting the tech in place. It is about the technology’s ability to call us out on all of the broken things, skeletons in closets, and cans kicked down the road, and about our ability (and capacity) to deal with the aftermath of that en masse.

Where do we go from here?

As much as I would like to end this article with “Are we afraid the AI is unethical, or that it’s too honest about how we’ve been operating?”, I can’t help but be practical. Let’s be honest: a system that has been broken for decades will not be fixed in mere months.

That said, I think more dialogue and education around this topic are needed so we don’t see it as a human-vs.-tech problem, but rather as a present-day-humans-trying-to-do-the-right-thing problem. So, here is where I think we can start reframing the conversation:

  • Instead of asking if AI is ethical, try exploring: whose values shaped the data and decisions this tool was built and trained on, and are you aligned with those values? Whose definition of fairness are we optimizing for, and can the tool actually deliver on it?

  • Shift the focus from AI, the technology, to the systems and values the AI is reflecting. Instead of saying “AI made a bad decision”, dig into how those decisions were made in the original dataset (see the sketch after this list).

  • Reframe AI in your organization from a risk to a revealer of biases and historical challenges. The technology can help surface, and start correcting, deeper structural issues at scale.
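
To make that second point concrete, here is a minimal sketch of what “understanding how those decisions were made in the original dataset” can look like in practice. It is written in Python with pandas, and the dataset, column names, and threshold are all made up for illustration; the idea is simply to compare historical selection rates across groups before asking whether a model is biased.

    # A hypothetical audit of the historical data a model would be trained on.
    # The goal: check whether past human decisions already show a pattern,
    # before any "AI decision" gets blamed.
    import pandas as pd

    # Made-up historical hiring outcomes (illustrative only)
    history = pd.DataFrame({
        "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
        "hired": [1,   1,   0,   1,   0,   0,   1,   0],
    })

    # Selection rate per group: how often each group was hired in the past
    rates = history.groupby("group")["hired"].mean()
    print(rates)

    # Compare the lowest rate to the highest; a ratio below ~0.8 is a common
    # (if crude) signal that the historical process itself deserves scrutiny
    ratio = rates.min() / rates.max()
    if ratio < 0.8:
        print(f"Selection-rate ratio is {ratio:.2f}: audit the historical data first.")

The point is not the code; it is that whatever pattern an audit like this surfaces existed long before any model was trained on it.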

With all that said, I will leave you with one last thought: You don’t need a perfect system or set of processes to start using AI. You do need an approach that is self-aware enough to admit when systems are broken and brave enough to make better decisions going forward.
