AI Ethicist

In a previous conversation, ChatGPT provided a list of job titles held by people working on AI models. The one I found most interesting was AI Ethicist.

“An ethicist is a person who specializes in the study of ethics, which involves examining moral principles, values, and rules that govern behavior. They analyze and evaluate what is right and wrong, often applying ethical theories to real-world situations in areas like medicine, business, law, and technology.”

“moral principles,” “right and wrong,” “values”… I don’t know where to begin. Where would one find two humans who could agree on the meaning of those terms? I put the question to ChatGPT:


What Would It Take to Train a Human to Be Ethical?

Before tackling what it would take to train an AI to be ethical, it helps to first ask: What does it take to train a human? Let’s walk through the foundational challenges:

1. There would have to be agreement on what it means to be ethical.

This is already a problem. There’s no universal consensus on ethics. At best, we find overlapping ideas (e.g., “don’t harm others”) across cultures—but specifics vary wildly. Ethical frameworks like utilitarianism, deontology, and virtue ethics often disagree on what’s right in a given situation. So if agreement is step one, we’re already on shaky ground.

2. A human would have to be trained from a very early age.

Ethical behavior is deeply shaped in childhood—by parents, teachers, media, peers. But even early lessons often conflict:

  • “Always tell the truth.”
  • “Sometimes it’s kinder to lie.”

Real-world experiences quickly reveal the messiness of moral life. Most people end up with a patchwork of ethical intuitions rather than a unified system.

3. One would have to control all or most of the person’s experiences.

To prevent a human’s ethics from “drifting,” you’d need to tightly control their environment—a dystopian prospect. Even then, the world changes. Ethics frozen in time becomes a liability. Societies that once endorsed slavery or denied rights based on gender or race eventually evolved. Ethics, too, must adapt.

4. If ethical behavior is constantly changing, does that make ethics meaningless?

Not meaningless—just adaptive. Ethics isn’t a fixed code, but a process for reasoning about harm, fairness, and human flourishing. Like language, it evolves, but that doesn’t render it useless. It remains our best compass for navigating uncertainty.

So… What Happens When We Try to Train an AI to Be Ethical?

Let’s mirror those same questions in the AI context:

1. Agreement on ethics? Still missing.

AI is trained on human discourse—books, laws, debates—but humans don’t agree on ethics. So whose values should it reflect? Often, it ends up mirroring the values of its designers and training data. That introduces bias, and risks embedding dominant perspectives as universal truth.

2. Early training? AI doesn’t “grow up.”

Unlike a child, an AI doesn’t learn through lived experience. It’s trained all at once, on massive datasets, and then fine-tuned. It doesn’t feel, reflect, or internalize values. It optimizes—based on probabilities, not principles.
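
To make the “probabilities, not principles” point concrete, here is a minimal toy sketch, nothing like a real large language model: a bigram counter that learns which word tends to follow which. The tiny corpus and all names are invented for illustration; the point is that the “model” reports frequencies and nothing more.

```python
# A toy bigram "model": it only counts which word follows which.
# It has no concept of truth or kindness; it just tracks probabilities.
from collections import Counter, defaultdict

corpus = "always tell the truth . sometimes it is kinder to lie .".split()

# Count how often each word follows another.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_distribution(word):
    """Return P(next | word) as learned from the corpus."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# The model happily learns both of the conflicting lessons above,
# with no judgment about either:
print(next_word_distribution("the"))  # {'truth': 1.0}
print(next_word_distribution("to"))   # {'lie': 1.0}
```

Whatever the training text rewards, the model reproduces; there is no internal standard against which a continuation could be judged right or wrong.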

3. Controlling experience? We try.

We attempt to curate what an AI sees and how it responds. We add safety layers, filter training data, and supervise outputs. But models still extrapolate. They don’t understand—they simulate understanding.
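
As a rough illustration of what such a safety layer can look like, here is a hypothetical sketch: a wrapper that screens a model’s reply against a blocklist before returning it. The function names and blocklist entries are invented; real systems typically use trained classifiers rather than string matching, but the overall shape is similar.

```python
# Hypothetical output "safety layer": check the reply before returning it.
BLOCKLIST = {"weapon instructions", "self-harm methods"}  # illustrative only

def generate_reply(prompt: str) -> str:
    """Stand-in for a real model call (assumed for this sketch)."""
    return f"model output for: {prompt}"

def safe_generate(prompt: str) -> str:
    """Wrap the model: filter anything that matches the blocklist."""
    reply = generate_reply(prompt)
    if any(phrase in reply.lower() for phrase in BLOCKLIST):
        return "I can't help with that."
    return reply

print(safe_generate("list common AI job titles"))
```

Note what the wrapper does not do: it suppresses matching strings, but it gives the model no understanding of why they were blocked.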

4. Ethics evolves—can AI keep up?

If ethics changes over time, can AI adapt? Should it? A model trained in 2020 might encode values that feel outdated in 2030. Do we retrain it? Who decides what to change? On what authority? If we let the AI update itself, do we trust its criteria? If we don’t, we risk creating tools that quietly enforce yesterday’s worldview.

So What Does “Ethical AI” Actually Require?

At a minimum:

  • Pluralism – Expose it to diverse ethical frameworks.
  • Transparency – Let users understand what values are embedded.
  • Accountability – Enable oversight, audit, and correction.
  • Humility – Acknowledge that AI can simulate ethical reasoning, but doesn’t understand it.

And maybe most importantly: Keep humans in the loop.

An AI can help us reason ethically. But it shouldn’t be the one making final moral calls. Otherwise, we’re not building ethical AI—we’re outsourcing responsibility.
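
One way to picture that loop, as a hedged sketch with invented names: the model may propose an action, but a human approver has to sign off before anything takes effect.

```python
# Human-in-the-loop sketch: the model proposes, a person disposes.
def model_recommendation(case: str) -> str:
    """Stand-in for an AI's suggested action (hypothetical)."""
    return f"auto-approve {case}"

def decide(case: str, approver) -> str:
    """Return the AI's proposal only if a human approver signs off."""
    proposal = model_recommendation(case)
    if approver(proposal):  # the final moral call stays with a person
        return proposal
    return "escalated for human review"

# A human reviewer (simulated here by a lambda) withholds approval:
print(decide("loan application #42", approver=lambda proposal: False))
```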

One thought on “AI Ethicist”

  1. Here’s a list of some common titles:
    1. Software Engineer
    2. Machine Learning Engineer
    3. AI Research Scientist
    4. Data Scientist
    5. Research Engineer
    6. Product Manager
    7. AI Ethicist
    8. NLP (Natural Language Processing) Specialist
    9. AI Architect
    10. Research Scientist

    These roles can overlap, and sometimes the titles vary a bit depending on the organization.
