Stealing the Last Lolly

Simo Hosio, Ph.D.
2 min read · Aug 5, 2021


Would you steal the last lollipop from a crying child?

Because I, like, totally, 100% would.

I would steal ALL THE DAMN lollies if:

  1. I was close to starvation,
  2. The kid clearly has more than enough of them,
  3. It was my last option.

If it was a choice between eating the sugary thing and perishing, I’m quite sure you would commit this terrible crime against childhood, too.

Our moral compass wavers as a function of context. And context, generally speaking, refers to everything and anything. In moral judgement tasks, I would wager the temporal dimension also plays a special role: what are the long-term ramifications of my actions? In this sense, the lollipop example might not work the best: the kid can't do much about it.

The poor child most likely won’t even remember who took the lolly.

But you know what I mean.

Which brings me to Human-centred artificial intelligence: a contextual morality perspective, available at https://www.tandfonline.com/doi/abs/10.1080/0144929X.2020.1818828?journalCode=tbit20

While the paper perhaps suffers from various methodological problems, it's one of the pieces of our work that has given me the most joy to think about afterwards.

The future of humans is in transhumanism.

We’re already busy developing tech that augments our physical and cognitive capabilities at scale. Hidden in plain sight, our smartphones are the first step toward this. They know us, they suggest things to do in just the right context, they act as bridges to the digital realm that has all the knowledge in the world.

So.

What if… or rather, when we have personal agents making choices for us, how do we encode the correct morals into them? Do we want them to play by some universal moral code (all humans on Earth share some commonalities here), or is it a fair assumption that personal gadgets should act like their owners?
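As a toy illustration of that contrast (everything here is invented for the sake of the example: the action names, the "desperation" score, the owner's threshold), the two design options could be sketched like this:

```python
# Hypothetical sketch: two ways a personal agent might resolve a moral choice.
# All names, scores, and thresholds are made up for illustration only.

def universal_policy(action, context):
    """A shared baseline every agent follows: never take from a child."""
    if action == "take_lolly" and context.get("owner_is_child"):
        return False
    return True

def personal_policy(action, context, owner_profile):
    """Mirror the owner: allow the act once the owner's own threshold is met."""
    if action == "take_lolly":
        # An owner close to starvation might report desperation near 1.0.
        return context.get("desperation", 0.0) >= owner_profile["steal_threshold"]
    return universal_policy(action, context)

context = {"owner_is_child": True, "desperation": 0.9}
owner = {"steal_threshold": 0.8}

print(universal_policy("take_lolly", context))        # False
print(personal_policy("take_lolly", context, owner))  # True
```

The same situation, the same "crime", two different verdicts, depending entirely on whose values the agent was told to encode.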

Algorithms will take over.

We're creatures of habit. Our very brains build habits just to make decision-making easier. Just like any calculation on the computer I'm using to type this consumes processor cycles, human decision-making costs brain cycles and cognitive power.

We want to live easy.

And what’s easier than outsourcing decision-making and perhaps even the consequences to an algorithm?

And at least then we have someone to blame for our shitty decisions…

These would be wonderful topics to explore through design fiction… Contact me and let's do it!

Simo

Written by Simo Hosio, Ph.D.

Ph.D. in Computer Science / Associate Professor at the University of Oulu, Finland / Digital Entrepreneur at kaizenhour.com
