
The Quiet Work of Emotional AI

What Designing with Care Taught Me About Building with Bots.

Every week, it feels like a new AI tool launches. A co-pilot, a chatbot, a plugin to help you think faster or feel smarter. And I get it, there’s a lot to be excited about. But lately, I’ve found myself circling back to one question: What kind of relationship do we actually want with our AI tools?

I’ll be honest: I’m an AI skeptic.
Back in 2016, I was studying computer science while tech was moving fast: self-driving cars, chatbots, VR/AR. What stood out most to me was how little we talked about people. That realization led me to design, not to ditch tech, but to build it with more care.

Lately, I’ve been working on tools meant for emotional moments as part of my thesis: grief, long-distance family, and reconnection. Some worked. Most didn’t. But they taught me a lot about what happens when AI enters personal spaces.

Here are four takeaways that now shape how I approach emotional AI:

1. You’re not just designing the interface; you should be involved in feeding the model.

You’re not just teaching the AI what to say — you’re teaching it how and when to care.

When I first learned how to build a GPT, I was fascinated by what it could do. But I was just as curious about what it was built on. Most large language models are trained on publicly scraped text: Reddit threads, Wikipedia articles, old blogs, and news sites. It’s a ton of content, but it isn’t neutral. It carries a tone, usually Western, often a little argumentative, and rarely an emotionally sensitive one.

There’s a phrase in tech: GIGO — Garbage In, Garbage Out. And yeah, it holds true here too.

If your tool is meant to support nuance, softness, or cultural specificity, generic data won’t cut it. It may even do active harm.

In one of my projects, I experimented with training a custom GPT using emotionally resonant material: literature, journal entries, essays, and even poetry. Scientific articles alone made it sound robotic. But a mix of roughly 80% emotionally rich content made the responses feel more… human: calmer and less sterile.
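To make that mix concrete, here is a minimal sketch of how such a reference corpus could be weighted before being fed to a custom GPT. The file names, the 80/20 ratio as a default, and the sampling approach are all illustrative assumptions, not my actual setup.

```python
import random

# Hypothetical reference files; the names are placeholders, not my actual corpus.
EMOTIONAL = ["journal_entries.txt", "personal_essays.txt", "poetry.txt", "short_fiction.txt"]
SCIENTIFIC = ["grief_research_summary.txt", "attachment_theory_notes.txt"]

def build_reference_list(n_docs: int = 10, emotional_ratio: float = 0.8) -> list[str]:
    """Assemble a reference list that is roughly 80% emotionally rich material."""
    n_emotional = round(n_docs * emotional_ratio)
    n_scientific = n_docs - n_emotional
    docs = (
        random.choices(EMOTIONAL, k=n_emotional)      # sampled with replacement, just for the sketch
        + random.choices(SCIENTIFIC, k=n_scientific)
    )
    random.shuffle(docs)
    return docs

if __name__ == "__main__":
    for doc in build_reference_list():
        print(doc)
```

The point isn’t the sampling code itself; it’s that the ratio of tones in what you feed the model is a design decision, and one worth making on purpose.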

Poor data leads to poor responses.
(Image: hand sketch + ChatGPT)

2. AI is good at patterns, not people.

Even when the words are right, the response’s vibe can be wrong.

There’s this moment you have when testing a tool and someone tells you, “It’s not wrong, it just feels off.” That happened a lot early on.

Another participant put it more bluntly: “It kind of sounds like a therapist that forgot I’m a person of colour […] We would never do any of the actions that this is suggesting to me, it’s so awkward.”

That moment made it clear: even culturally sensitive training data isn’t enough. The model might have understood the context, but it still needed to learn how to speak the language of care, as it’s actually practiced in diasporic families.

So I tuned my custom GPT not just with diverse source material, but also with specific tone instructions. I added adjectives like: soft-spoken, non-directive, warm, emotionally literate, never presumptuous. I fed it example phrases that felt familiar — like something a cousin or close friend might text when they’re gently checking in, not prescribing a fix.

Not “Try opening up again,” but “Would it feel okay to send a photo?”
Not “Here’s what to do next,” but “There’s no rush. I’m just here.”
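In practice, those tone instructions and example phrases lived in the model’s configuration. Here’s a minimal sketch of what that kind of setup can look like with the OpenAI Python SDK; the model name and the exact wording are placeholders, not my production prompt.

```python
from openai import OpenAI  # assumes openai >= 1.0 and OPENAI_API_KEY set in the environment

# Tone instructions plus a few example rewrites, so the model learns the register,
# not just the content. Wording here is illustrative.
TONE_INSTRUCTIONS = """
You are a gentle companion for reconnecting with family across distance.
Voice: soft-spoken, non-directive, warm, emotionally literate, never presumptuous.
Prefer invitations over advice. Never diagnose feelings.

Examples of the register to use:
- Instead of "Try opening up again", say "Would it feel okay to send a photo?"
- Instead of "Here's what to do next", say "There's no rush. I'm just here."
"""

client = OpenAI()

def gentle_reply(user_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": TONE_INSTRUCTIONS},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(gentle_reply("I haven't talked to my mom in months and I don't know how to start."))
```

The few-shot rewrites matter as much as the adjectives: they show the model what “checking in like a cousin” actually sounds like.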

I realized I wasn’t just training an AI to understand people — I was training it to talk like someone from home.

And in emotional design, tone isn’t decoration. It’s the product.

Expert vs. Collaborator.
(Image: hand sketch + ChatGPT)

3. The more personal the moment, the simpler the AI.

In emotionally sensitive spaces, people don’t want tools that talk a lot.

For a while, I thought a more informed AI meant a better experience: more data meant more language, and more language meant more “aliveness.” But through testing, I saw the opposite. People didn’t want long replies. They didn’t want advice. And they definitely didn’t want to feel like they were talking to something pretending to know everything.

Instead of building a back-and-forth chatbot, I focused on simple, single-line prompts. Openers, not explainers. Things like: “Want to send a photo from your walk yesterday?”

Through crits, I learned that maybe the prompt isn’t even the point. It was suggested that the photo should just appear, already selected, ready to send. So I tested it, and found that people were 40% more likely to share when the action was already halfway done.

That insight led me to think about metadata: the things people already have on their phones that say a lot without asking a lot. For example: photos, playlists, voice notes, journals.

So I wondered: What if AI didn’t prompt users to explain themselves, but simply surfaced familiar things they’d already created?
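Here’s a minimal sketch of that surfacing idea, under assumed names and fields (MemoryItem and suggest_opener are hypothetical, not part of any real platform API): the tool offers at most one recent, shareable item as a half-finished action, and stays quiet when nothing fits.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical on-device items the person has already created;
# nothing here is generated, only surfaced.
@dataclass
class MemoryItem:
    kind: str        # "photo", "playlist", "voice_note", "journal"
    title: str
    created_at: datetime
    shareable: bool  # respects the user's own sharing settings

def suggest_opener(items: list[MemoryItem], now: datetime) -> str | None:
    """Surface at most one recent, shareable item; return None to stay quiet."""
    recent = [
        i for i in items
        if i.shareable and now - i.created_at < timedelta(days=2)
    ]
    if not recent:
        return None
    item = max(recent, key=lambda i: i.created_at)
    return f"Want to send '{item.title}' ({item.kind}) from your day? It's ready to go."

if __name__ == "__main__":
    now = datetime(2024, 5, 10, 18, 0)
    items = [
        MemoryItem("photo", "Walk by the river", now - timedelta(hours=20), True),
        MemoryItem("journal", "Private entry", now - timedelta(hours=5), False),
    ]
    print(suggest_opener(items, now))
```

Notice that the “do nothing” path is an explicit return value, which matters for the next point.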

But I also learned: just because AI can pull something up doesn’t mean it should. There’s a line between helpful and invasive, and in emotional spaces, that line moves with every person, every moment.

So now my question is less about what AI can do and more about when to stay quiet. The best tools don’t just act, they know when to step forward and when to step back.

What is the AI pulling prompts from?
(Image: hand sketch + ChatGPT)

4. A/B test your UX writing.

Trust isn’t built in one sentence, but it can fall apart in one.

While I was tuning the GPT to speak more like someone from home, I realized something else: I was also shaping how I, as a designer, spoke through the interface.

At first, I thought of prompts and CTAs as utility — just things the app had to say to keep things moving. But over time, I saw that every line of text was carrying emotional weight. It wasn’t just content; it was tone, posture, and trust, all rolled into a sentence.

The tiniest phrases made a difference. “Try being more open” felt pushy. “Would it feel okay to share?” was softer, but sometimes too hesitant.

I started treating each prompt like it had to balance meaning and permission. Could it gently hold someone without assuming how they feel? Could it invite participation, without creating pressure?

Some CTAs I loved ended up feeling off in context. Some that felt too plain at first ended up being the most comforting.

Eventually, I built a set of tone rules for myself: never assume emotion, never over-ask, always stay humble.
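For the A/B testing itself, something very small is enough. This is a sketch of the shape of it; the two variant texts come from my testing above, but the assignment and logging helpers here are assumptions for illustration.

```python
import hashlib

# Two phrasings of the same CTA; assignment is deterministic per user,
# so the same person always sees the same wording.
VARIANTS = {
    "A": "Try being more open",
    "B": "Would it feel okay to share?",
}

def assign_variant(user_id: str) -> str:
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 2
    return "A" if bucket == 0 else "B"

def share_rate(events: list[dict]) -> dict[str, float]:
    """events look like [{'variant': 'A', 'shared': True}, ...]; returns share rate per variant."""
    totals: dict[str, list[int]] = {"A": [0, 0], "B": [0, 0]}
    for e in events:
        totals[e["variant"]][0] += int(e["shared"])
        totals[e["variant"]][1] += 1
    return {v: (shared / n if n else 0.0) for v, (shared, n) in totals.items()}

if __name__ == "__main__":
    uid = "user-123"
    print(assign_variant(uid), "→", VARIANTS[assign_variant(uid)])
    print(share_rate([
        {"variant": "A", "shared": False},
        {"variant": "B", "shared": True},
        {"variant": "B", "shared": False},
    ]))
```

The mechanics are trivial; the discipline is treating a single sentence of UX copy as something worth measuring at all.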

How do we write prompts that feel light but still emotionally valuable?
(Image: hand sketch + ChatGPT)

So, I’m still an AI skeptic, but…

I’m not skeptical because I don’t believe in AI. I’m skeptical because I believe in people. I think we deserve more caring tools, not just faster or smarter ones.

As AI continues to show up in our daily lives — in our notes apps, our inboxes, even our relationships — it’s worth asking not just what it can do, but how it should behave. Will it be an active facilitator or observer? Will it explain or listen? Where is it pulling information from? Where are our boundaries with AI? How do we design with permission in AI?

This is all part of the experience, and the more we as designers understand how AI is built and trained, the better the experiences we can design.

