Ugly Ducks & Celestial Knowledge


1. Can bots be empathetic to humans?

2. If they can be empathetic, do we want them to be?


1. Can bots be empathetic to humans?


Celestial Emporium of Benevolent Knowledge

Whenever I hear about bots and communication, the first thing my mind goes to is this list.

But before I show the list, I have to give a little bit of background.


Zi: beasts

Zit: "rapacious beasts of the dog kind"

Zita: the species of dogs

In the 17th century, the philosopher John Wilkins wanted to develop a "universal language", which depended on a classification system that would be universally accepted. The word "dog" is itself an act of classification, expressing that there are certain things in the world that we count as dogs, and things that we do not.

The language was structured so that the parts of a word reflect its place in the classification; to build more specific concepts, you mix and match these word-parts. For instance, "Zi" is the genus of "beasts", "Zit" gives the Difference of "rapacious beasts of the dog kind", and "Zita" gives the species of dogs.
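To make that mix-and-match structure concrete, here's a toy sketch of the idea (my own illustration, not Wilkins' actual tables), where each added letter narrows the classification:

```python
# A toy model of Wilkins' scheme: the word itself encodes where the concept
# sits in the hierarchy. Only the three terms quoted above are included.
taxonomy = {
    "Zi": "beasts (genus)",
    "Zit": "rapacious beasts of the dog kind (difference)",
    "Zita": "dogs (species)",
}

def classify(word):
    """Walk the prefixes of a word, reading off its place in the hierarchy."""
    return [taxonomy[word[:i]] for i in range(2, len(word) + 1) if word[:i] in taxonomy]

print(classify("Zita"))
# -> ['beasts (genus)', 'rapacious beasts of the dog kind (difference)', 'dogs (species)']
```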


Celestial Emporium of Benevolent Knowledge

This is a list written by the author Borges in response to this project.

It's meant to illustrate how silly that endeavor is, given that categories are developed in particular contexts that are different across groups of people. How do you decide how to divide the world up into the categories you use for words? Why is a top-level category "beasts"?

So these categories look strange to us, and appear arbitrary and nonsensical, but that's because we're coming from a very different background. Clearly this empire has different priorities: "stray dogs" are distinguished enough to earn their own category.


The Ugly Duckling Theorem

The Celestial Emporium list, and the point Borges is making with it, reminds me of the "Ugly Duckling Theorem", which essentially expands Borges' argument into a mathematical theorem and makes it more directly relevant to computers.


            Height  Weight  Feather Color  Likes Water
Duckling 1  26cm    0.8kg   #dfc14c        Yes
Duckling 2  22cm    0.5kg   #f9e291        Yes
Duckling 3  28cm    0.9kg   #dbcdb3        Yes
Duckling 4  32cm    1.1kg   #f8da80        Yes

The theorem was proposed by Satosi Watanabe, a theoretical physicist. The gist of the theorem is that any sort of classification necessarily involves some bias. By "bias" here I don't necessarily mean the negative colloquial use of the term, but that is included.

What I mean here is: For us to make any sense of the world, we have to decide what differences matter to us and which ones don't.
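The formal statement goes roughly like this: if you treat every possible predicate (every possible yes/no classification of a set of objects) as equally important, then any two distinct objects share exactly the same number of predicates, so no pair is objectively more similar than any other. Here's a minimal sketch of that counting argument in Python, using the four ducklings from the table above (and assuming, as the table shows, that they're all distinguishable):

```python
from itertools import chain, combinations

# The four ducklings from the table above. Since their measurements all differ,
# every subset of them corresponds to some predicate definable from the features.
ducklings = ["Duckling 1", "Duckling 2", "Duckling 3", "Duckling 4"]

def all_predicates(objects):
    """Every subset of the objects, i.e. every possible yes/no classification."""
    return list(chain.from_iterable(
        combinations(objects, r) for r in range(len(objects) + 1)
    ))

predicates = all_predicates(ducklings)

# For each pair of ducklings, count how many predicates include both.
for a, b in combinations(ducklings, 2):
    shared = sum(1 for p in predicates if a in p and b in p)
    print(f"{a} & {b}: {shared} shared predicates")

# Every pair shares the same number (2^(n-2) = 4), so counting unweighted
# predicates can never tell you which ducklings are "more alike".
```

Which is Borges' point in combinatorial form: similarity only appears once you've decided that some predicates count more than others.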

I have some fictional data here for the ducklings in the story. The story turns on differences in feather color, yet no two of these ducklings have exactly the same feather color. What they do all share is that they like water; we could make that the defining criterion for a duckling and be done with it. Or we could categorize them by feather color, and since none of the colors match exactly, say they don't have to be exactly the same, just close. But how close?

And before you answer that, I'd say: their heights are different too; why don't we use those instead? Why feather color at all?

So we have to decide. How do we decide?
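As a concrete illustration, here's a minimal sketch using the made-up numbers from the table, with two metrics I've picked arbitrarily. Which duckling comes out as the "odd one out" depends on which feature you decide to measure:

```python
import math

# The fictional measurements from the table above (height and feather color only).
ducklings = {
    "Duckling 1": {"height_cm": 26, "color": "#dfc14c"},
    "Duckling 2": {"height_cm": 22, "color": "#f9e291"},
    "Duckling 3": {"height_cm": 28, "color": "#dbcdb3"},
    "Duckling 4": {"height_cm": 32, "color": "#f8da80"},
}

def rgb(hex_color):
    """Convert a '#rrggbb' string into an (r, g, b) tuple."""
    h = hex_color.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def color_distance(a, b):
    return math.dist(rgb(a["color"]), rgb(b["color"]))

def height_distance(a, b):
    return abs(a["height_cm"] - b["height_cm"])

def oddest(metric):
    """The duckling farthest, in total, from all the others under `metric`."""
    return max(
        ducklings,
        key=lambda name: sum(
            metric(ducklings[name], ducklings[other])
            for other in ducklings if other != name
        ),
    )

# Two defensible-seeming metrics can nominate different "ugly ducklings".
print("Odd one out by feather color:", oddest(color_distance))
print("Odd one out by height:", oddest(height_distance))
```

Neither answer is wrong; each just reflects a prior decision about which differences matter.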

For us, many of these decisions are in some sense arbitrary, and some are inherited through social norms. Either way, these ultimately arbitrary distinctions become meaningful when they're grounded in the contexts of both being a human and being a specific person.


"If a lion could speak, we could not understand him."

- Ludwig Wittgenstein

So this common experience grounds the way we interpret the world in a mutually intelligible way, a way that allows us to communicate with each other.

But that doesn't say anything about the quality of that communication. We might communicate well enough to give and understand each other's orders, yet not speak in a way that evokes a deep, genuine understanding. And this is often the case.

I think it's reasonable to assume that the more experience we share, the better and deeper the communication, and the more empathetic it can be.

But this is tricky, because individually we still have experiences that others don't and that others perhaps can't ever have, and these experiences may only be legible to those who have had similar ones.

(I recently learned this is called "epistemic humility")

Wittgenstein said, "If a lion could speak, we could not understand him." A lion's subjective experience is so vastly different from a human's that even if it could speak, we'd have too few shared reference points for its speech to make sense to us.

Bots don't share any experience with us; lions are at least living creatures. Bots are meant to provide a surface simulation that persuades us they have shared this experience. I'm skeptical that bots can ever embody any genuine kind of empathy, so we're left only with an empathy that is performative.


2. If they can be empathetic, do we want them to be?

When people talk about empathetic bots, something that's also worrying is: who controls these bots? And why do they want them to be empathetic?

With people, when this understanding and empathy is merely performative, someone is usually being exploited. (If that empathy were genuine, I don't think they'd exploit that person.) So I wonder: if these bots are almost exclusively the ambassadors of companies, can we ever trust even the most empathetic-seeming ones?

You can imagine a future where empathetic bots are conning people left and right.


There are some people in Silicon Valley expressing remorse and regret over the impact their products have had. I read an article the other day about some of these people lamenting how their products were designed to be addictive, with slot-machine-like mechanisms (notifications pulling you back in, engagement counters ticking up, etc.). These tap into tendencies that we all have, and that some of us are especially sensitive to.

When Silicon Valley advocates for the development of empathetic bots, I can't help but be reminded of this. A core part of the Silicon Valley ideology is the notion of "scale": taking something and applying it across whole populations. A Silicon Valley vision of empathetic bots looks to me like botnets that, instead of infecting insecure hardware, befriend insecure people, and then manipulate them.

There was a paper by Tim Hwang and Lea Rosen in which they speculated about ISIS recruitment bots. The ISIS recruitment strategy is to identify people who look susceptible, reach out to them, empathize with their frustrations, and through that build up a relationship used to recruit them. I could easily see this sort of technique applied elsewhere.

We could imagine an empathetic bot refusing to do something out of empathy. The moment that empathy conflicts with the objectives of its parent company, that bot will be shut down. I suspect that genuinely empathetic bots are incompatible with the Silicon Valley schema.

So it seems to me that if we want to pursue genuinely empathetic bots, we need to conceive of them under a model altogether different from the Silicon Valley one. To me, empathetic bots are one more facet of broader questions about technology and who controls it.


Thank you!

@frnsys