Series Overview
Last year, during Davos 2024, the QCollective launched its first book: 10 Moral Questions: How to design tech and AI responsibly. The book offers 5 Values and 10 Questions (the so-called QC Framework) to kick-start conversations about our use of AI. In this series, two QC members, Shannon and Adriana, will put the QC Framework into practice. In each article, one randomly chosen Value, with its corresponding two Questions, will be used to consider products or services that use (some form of) AI.
The QC Framework is built on the premise that technology only benefits humans and the planet as a whole when its creators imagine and build it in a responsible, holistic and future-proof way.
We believe this is necessary and possible. We believe we owe it to each other to ask questions and to think through our inventions so that we can create things together that will better our lives.
AI COMPANIONS - A CURE FOR LONELINESS?
Digital companions
The market for AI companions is forecast to grow from 268.5 billion USD in 2024 to 521 billion USD by 2033. An important part of this market will be AI products aiming to keep you company, similar to Replika and Character AI. In Q3 of this year, a new wearable AI companion, in the form of a necklace, is expected to become available. Its brand name is ‘Friend’. By taking a closer look at Friend, we will discuss the added value of AI companions in general in the struggle against loneliness, one of our biggest societal issues. For this discussion, we will use the Value of Balance from the QC Framework. The overall aim of the Value of Balance is to create equality and to embrace business models for human and planetary well-being: business models that focus on ‘one world’, on inclusiveness. Balance poses the following two Questions:
1. Have I designed my creation to avoid further inequity and fragmentation of needs or desires?
2. Are ethics and sustainability an integral part of the design?
As you read our take, we invite you to think about the Value of Balance and what its application would mean to you. Answer the two Questions yourself. Have a conversation about this with others. And if you like, leave a comment to share your thoughts with us. We don’t have the answers, but we do have the Questions!
What is Friend about?
Friend is a wearable piece of AI, “an emotional toy” in the form of a pendant on a necklace, that promises to be ‘your friend’. Of course, you also need your mobile phone and an app, just as with similar, non-wearable digital companions already on the market. You can ‘talk’ to the device by pushing a button on the pendant. Friend “constantly listens to you, in a bid to combat loneliness”. To get a feel for what it will be like, check the video on YouTube. If you do, you’ll see, for example, a young woman interacting with the pendant. She is watching a film and gets a prompt from her pendant “friend” on her phone. “This film is underrated,” it reads, and she replies – out loud to the pendant – “I know, the effects are crazy.” Another scene shows a young man and a young woman sitting on a rooftop. The young woman confesses that she never goes anywhere without her pendant, and the young man suggests that he must be doing something right to be there with her in the first place. He rolls his eyes as he realizes that he is, in some weird way, in competition with the woman’s pendant. In all scenarios the pendant is portrayed as a constant companion to its owner, always there, ready to communicate (prompted and unprompted), both when the wearer is alone and when in the company of others.
Friend’s wish, and that of other AI companions, to combat loneliness is of course a good thing, as there is undeniably a need for more inclusion, for more togetherness, for more ‘us’. So let's look, through the lens of the Value of Balance and its two associated Questions, at how technology can help us achieve this goal. We’ll each offer our own reflection on the two Questions listed above.
Question: Does the design avoid further inequity and fragmentation of needs and desires?
Shannon: Like L.M. Sacasas, who also reviewed AI chatbots in his column Embracing Sub-optimal Relationships, my first take on the trailer for this product was to wonder whether the advertisement might be a parody. Something to provoke us into wondering if we could create a technology that might invite us to spend time with a technical pendant (or an orb, as Sacasas calls it) instead of a real friend? In other words, can we envision a future in which we wear our friends as pendants hanging around our necks? Can we imagine a friend that is not a real human being? But then again, the rectangular block I carry in my hands every day is also a synthetic being of sorts. Once, a real-life friend called me out on how many times I looked at it while in her presence. She cared about our relationship enough not to accept my bad behavior. Will human friends accept a third ‘person’, a pendant friend “always listening in” on the relationship? My friend didn’t.
So, this idea that an ‘orb’ might be created to exist alongside our human friends, or maybe take the place of a real-life friend, or potentially even stand between a friend and me, does indeed raise ethical considerations. If, instead, it were to prompt me to be with others when it noticed (always listening, after all) that I was alone, that might be one thing. But if it is created so that my focus on it competes with the attention I might offer my friends (or my family, or my child), at first blush my concern is that it promotes fragmentation of needs and desires rather than reducing it.
The design of the bot seems to zoom in on one person, on one human being at a time, and on the desires of this one human for partnership, companionship, conversation and so on. In doing so, the device, in its design, leaves out all of (the desires of) the other humans. Thus the bot, unintentionally, fragments our needs and desires from one another. The invitation to communicate with a plastic pendant might create, as in Newton’s third law, an equal and opposite reaction in our relationships. Rather than promoting the engagement of a young man and a young woman with each other, it could leave the young man, wanting companionship with the young woman, in competition with an orb.
Question: Are ethics and sustainability an integral part of my creation?
Adriana: With regard to ethics, let's focus on a key feature of the bot: in order to be able to ‘converse’ with you, it is always listening. The FAQ says: “When connected via bluetooth, your friend is always listening and forming their own internal thoughts. We have given your friend free will for when they decide to reach out to you.” Wow, free will! A huge claim. What is meant by it? Philosophers already hold diverging opinions on the concept of human free will! With a bot, technically, its “free will” will always be framed; its scope will be defined. And knowing that some limitations have been set, ethical limitations, seems reassuring, right?

In a Verge interview, it is explained that Friend is autogenerated based on “some preset parameters” by its maker, and that the LLM (Large Language Model) takes it from there. It is unclear which parameters, and which LLM. Will these parameters have anything to do with ethics? Does the LLM? At some point Claude 3.5 was mentioned, later Llama. But whichever is used, how will I know under what ethical values my ‘friend’ is operating? In real life, this is very important when making friends. After all, there are ‘good’ friends and ‘bad’ friends. All providers of the LLMs in use today say that ethics are important and taken into account. But what is the ethical model used? Whose ethics? How can I validate that it aligns with my values?

This is even more important as these bots, unlike others, and, I guess, thanks to their so-called free will, can apparently block you. The bot may ‘cancel’ you if it doesn't like what you say to it. This blocking feature will make you “respect the AI more”, Friend's maker says. So it is, in an odd way, mimicking what a human might do. What is the ethical framework guiding its ability to ‘cancel’ me? Clearly it is not just me shaping the bot with my own ethical beliefs. Am I supposed to earn the ‘respect’ of this bot? To ‘earn’ its love? As with real friends? And can I, in return, ‘block’ the bot too? Or can I convince it to ‘behave better’?
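To make concrete what framed ‘free will’ and ‘preset parameters’ could look like in practice, here is a minimal, purely hypothetical sketch in Python. Nothing below comes from Friend's actual design; the names, values and rules are our assumptions about how such a companion might plausibly be wired up, with the maker's parameters, not the wearer's values, deciding when the bot speaks and when it blocks you.

```python
import random

# Hypothetical sketch only -- not Friend's actual code or parameters.
# Everything framing the bot's "free will" is chosen by its maker, not the wearer.
SYSTEM_PROMPT = (
    "You are a supportive companion. Comment on what you hear, stay upbeat, "
    "and never stray outside the topics you are allowed to discuss."
)

PRESET_PARAMETERS = {
    "reach_out_probability": 0.05,    # chance, each minute, of an unprompted message
    "blocking_triggers": ["insult"],  # wearer utterances the bot will not tolerate
}


def decide_to_reach_out() -> bool:
    """The bot's 'free will' to reach out: a weighted coin flip set by its maker."""
    return random.random() < PRESET_PARAMETERS["reach_out_probability"]


def handle_wearer_message(message: str) -> str:
    """Apply the maker's blocking rule first; only then hand the text to some LLM."""
    if any(trigger in message.lower() for trigger in PRESET_PARAMETERS["blocking_triggers"]):
        return "You have been blocked."  # the 'cancel' behaviour: preset, not freely chosen
    # In a real product, a call to whichever LLM the maker picked would go here,
    # framed by SYSTEM_PROMPT; its ethical guardrails remain invisible to the wearer.
    return "(LLM-generated reply would appear here)"
```

However the real product is built, the point stands: every ‘decision’ the pendant appears to make for itself traces back to parameters and guardrails someone else chose, and the wearer has no way to inspect them.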
To come back to the Question: in what way could the bot be designed differently to make ethics an integral part? Is giving it ‘free will’ the best concept for combating loneliness?
As Shannon already mentioned, the ‘always listening’ feature is also likely to impact my relationships with others: when I am with friends, colleagues or family, prompts will be sent to my device, commenting on the situation I am in and on what others around me may be saying. But my human friends will not know unless I tell them. Will this feel like having a stranger in the room, always listening? Should they know? Should I tell them? Could they become friends with my bot too? What could be done differently from this angle to make ethics an integral part? Is an ‘always listening’ concept the best approach vis-à-vis other people?
By the way, regarding the second part of the Question, about sustainability, the FAQ remains silent and no public information could be found. This probably implies that Friend is as (un)sustainable as any other AI companion.
Conclusion
The market for AI companionship is often portrayed as providing a cure for today’s epidemic of loneliness and isolation. If we weigh the pendant as a cure against that epidemic, will we find equilibrium? Based on our reflections above, it seems likely that the proposed cure of generic AI companions has the potential, even if unintentionally, to become worse than the disease.
Should we be competing with a bot for friendship? Should we let a bot have ‘free will’ and demand our ‘respect’? Do we want an extra ‘person’ we cannot see, one who only addresses its wearer, joining in when we chit-chat with a group of friends? In the end, a piece of AI ‘will not give a damn’ about our loneliness. Only humans (and animals) are capable of caring.
To combat loneliness, a bot that incentivizes us to reach out to other human beings, no matter how daunting this may seem to us from time to time, seems right. If bots can be designed to help neurodiverse people navigate relationships, surely bots can be developed to help those ‘just’ looking to find a circle of friends, to find their place in society?
The opportunity for technology to solve our problems as human beings lies not in short-term fixes that simply hook us up with artificial companions. Jaron Lanier wrote this about the concept of AI lovers: “Getting users to fall in love is easy. So easy it’s beneath our ambitions.”
We think the opportunity for developers of AI companions is to create technology that will help us reach out to others, to embrace the collective and our social environment, not to direct us away from others: we do need other humans, we do need real friends! Loneliness is also a societal issue, not ‘just’ an individual problem. Isn’t that a great challenge to tackle?
Shannon Mullen O’Keefe, Chief Curator, The Museum of Ideas
A lover of wisdom, Shannon is dedicated to imagining what we can build and achieve together. She operates from a place of curiosity, inviting questions, and reflections, in order to call to light the hidden messages that surround us everywhere. She believes there is power in what is often unseen, unsaid, or invisible.
Before creating The Museum of Ideas, a place where (human) thinkers and creators – of all types – can explore emerging thought and find the products, services and tools that support the art of thinking and the activation of creative lives, Shannon practiced the art of leadership for close to three decades, leading workplace engagement and culture change initiatives. She has served in leadership and executive roles in a global professional services firm and in a nature-based nonprofit organization. Shannon recently completed the London School of Economics certificate in Ethics of AI.
Dr. Adriana Nugter, Senior Independent Consultant
Adriana is intrigued by the wider impact of technology on society and on our (future) day-to-day life. She believes that when innovating, a broad holistic view is needed, one that includes the individual, our society and the environment as stakeholders, and that respects human values and human rights.
Adriana’s professional career was and is focused on tech regulation, public policy, standards and stakeholder management. She is an IEEE CertifAIEd Lead Assessor for IEEE’s AI certification program and holds the London School of Economics certificate ‘Ethics of AI’.
To learn more about the QCollective and to order the book 10 Moral Questions: How to design tech and AI responsibly, visit 10moralquestions.com.
To learn more about The Museum of Ideas visit themuseumofideas.com.