
Artificial intelligence could give us digital immortality, but we probably don't want it

In the 1990 fantasy drama Truly Madly Deeply, the protagonist Nina (Juliet Stevenson) is mourning the recent death of her boyfriend Jamie (Alan Rickman). Sensing her deep sadness, Jamie returns as a ghost to help her process her loss. If you've seen the film, you'll know that his reappearance forces her to question her memory of him and, in turn, to accept that perhaps he wasn't as perfect as she remembered. Here in 2023, a new wave of AI-powered grief tech offers us all the chance to spend time with loved ones after their deaths, in various forms. But unlike Jamie (who benevolently misleads Nina), we're being asked to let AI provide its own version of those we outlive. What could go wrong?

While generative tools like ChatGPT and Midjourney dominate the AI conversation, the broader ethical questions around topics like grief and bereavement are largely ignored. The Pope in a puffer jacket is cool, after all, but thinking about your loved ones after they die? Not so much. And if you believe that generative AI avatars for the dead are still a ways out, you'd be wrong. At least one company already offers digital immortality, and it's as expensive as it is creepy.

Re;memory, for example, is a service offered by Deepbrain AI, a company whose core business includes those virtual-assistant-style interactive screens, along with AI news anchors. The Korean company has taken its experience with the marriage of chatbots and generative AI video to a grisly conclusion. For just $10,000 and a few hours in a studio, you can create an avatar of yourself that your family can visit (at an additional cost) at an off-site facility. Deepbrain is based in Korea, and Korean mourning traditions include Jesa, an annual visit to the resting place of the deceased.

Right now, even by the company's own admission, the service doesn't try to replicate personalities too deeply; the training process really only allows the avatar to have one mood. "If I want to be a very funny Michael, then I have to read very hyper or funny entries for 300 lines. So every time I enter text [to the avatar] I will have a very exciting Michael," Michael Jung, Business Development and Strategy Lead at Deepbrain, told Engadget. Re;memory isn't currently trying to create a true facsimile of its subjects; it's something you can visit occasionally and have basic interactions with, one that hopefully has a little more character than a virtual hotel receptionist.

While Re;memory has the added benefit of being a video avatar that can answer your questions, the audio-based HereAfter AI tries to capture a little more of your personality via a series of interview questions. The result is an audio chatbot that friends and family can interact with, receiving verbal answers and even stories and anecdotes from the past. By all accounts, these pre-trained chatbots provide convincing answers in their owners' voices, until the illusion is unceremoniously broken when they respond with a robotic "Sorry, I didn't get it. You can try asking another way or move on to another topic" to any question they don't have an answer for.
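That failure mode hints at the pattern likely underneath these systems: a library of pre-recorded answers matched against incoming questions, with a canned fallback when nothing matches well enough. Here's a minimal Python sketch of that retrieval-and-fallback idea; the corpus, threshold and string-matching approach are all invented for illustration, and HereAfter's actual system is certainly more sophisticated.

```python
# Illustrative sketch only, not HereAfter's actual code: pre-recorded answers
# are matched against an incoming question, and a canned fallback is returned
# when nothing matches well enough.

from difflib import SequenceMatcher

# Hypothetical corpus: questions the subject answered during recording sessions.
RECORDED_ANSWERS = {
    "where did you grow up": "I grew up on a farm just outside of town...",
    "how did you meet grandma": "We met at a dance in the summer of 1962...",
}

FALLBACK = ("Sorry, I didn't get it. You can try asking another way "
            "or move on to another topic.")

def similarity(a: str, b: str) -> float:
    """Crude character-level similarity; a real system would use
    speech-to-text plus semantic matching instead."""
    return SequenceMatcher(None, a, b).ratio()

def reply(question: str, threshold: float = 0.6) -> str:
    question = question.lower().strip("?! .")
    best_q = max(RECORDED_ANSWERS, key=lambda q: similarity(q, question))
    if similarity(best_q, question) >= threshold:
        return RECORDED_ANSWERS[best_q]
    return FALLBACK  # the moment the illusion breaks

if __name__ == "__main__":
    print(reply("Where did you grow up?"))       # matched story
    print(reply("What's your favorite planet?")) # falls back
```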

Whether or not these technologies can create a realistic avatar today isn't the main concern; AI is moving at such a pace that it will certainly improve. The trickier questions revolve around who owns this avatar once you're gone. Are your memories and data safe? And what impact might all this have on those we leave behind, anyway?

Joanna Bryson, a professor of ethics and technology at the Hertie School of Governance, compares the current wave of grief tech to the era when Facebook was more popular with young people. Back then, it was a common destination to commemorate friends who had passed, and the emotional impact of this was striking. "It was such a new and immediate form of communication that kids couldn't believe their friends were gone. They seriously believed that their dead friends were reading what they posted. They're like, 'I know you're seeing this,'" she said.

In this photo illustration a virtual friend is seen on an iPhone screen on April 30, 2020 in Arlington, Virginia. (Olivier Douliery via Getty Images)

The extra dimension that AI avatars bring only fuels concerns about the impact these creations might have on our grieving brains. "What does it do to your life, that you spend your time remembering [...]? Maybe it's good to have some period of time to process it. But it can turn into an unhealthy obsession," Bryson said.

Bryson also thinks this same technology could end up being used in ways it was never intended. "What if you're a teenager or a tween, and you spend all your time on the phone with your best friend? And then you figure out that you prefer, like, an [AI] blend of your best friend and Justin Bieber or something. And you stop talking to your real best friend," she said.

Of course, that scenario is beyond current capabilities, not least because creating an AI version of a living best friend would require so much data that we'd need their participation in, and consent to, the process. But that may not be the case for much longer. The recent spate of fake AI-generated songs in the style of famous artists shows what's already possible, and it won't be long before you won't need to be a celebrity for there to be enough publicly available material to feed a generative AI. Microsoft's VALL-E, for example, can already do a decent job of cloning a voice from just three seconds of source material.

If you've ever had the misfortune of sorting through the possessions of a dead relative, you'll know you often learn things about them you never knew. Perhaps it's a fondness for a certain type of poetry, revealed by the underlinings in a book. Or maybe something more sinister, like bank statements showing crippling debts. We all have details that make us complex, complete human beings. Details that, often intentionally, remain hidden from our public persona. This raises another age-old ethical conundrum.

The internet is awash with stories of parents and loved ones trying to access the email or messaging accounts of the deceased in order to remember them. For better or worse, we may not feel comfortable telling our immediate family about our sexuality or our politics, or that our spouse was having an affair, all of which our private digital messages might reveal. And if we're not careful, this could be data we inadvertently hand over to an AI for training, only for it to blurt out those secrets posthumously.

Even with the consent of the person being recreated as an AI, there are no guarantees that someone else can't get their hands on your digital version and abuse it. And right now, that falls broadly into the same crime bucket as someone stealing your credit card details. That is, until they make it public, at which point other laws, such as the right of publicity, might apply, though those protections are generally only for the living.

Bryson suggests that the logical answer for data protection might be something we already know: locally stored data, like the biometrics we use to unlock our phones. "Apple has never trusted anyone. So they're really very privacy-oriented. So I tend to think that this is the kind of organization that will invent stuff, because they want it themselves," she said. (The main problem with this, as Bryson points out, is that if your house burns down, you risk losing your grandmother's avatar forever.)
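As a rough illustration of what that device-local approach might look like, here's a short Python sketch that keeps an avatar's data encrypted with a key that never leaves the device. The file names and the avatar format are hypothetical, it uses the third-party cryptography library purely as an example, and it isn't how any actual grief-tech product is known to work.

```python
# Illustrative sketch only: keeping an avatar's training data encrypted on a
# local device, in the spirit of Bryson's locally-stored-biometrics comparison.
# Requires the third-party "cryptography" package (pip install cryptography).

from pathlib import Path
from cryptography.fernet import Fernet

KEY_PATH = Path("avatar.key")   # in practice the key would live in a secure
DATA_PATH = Path("avatar.enc")  # enclave or keychain, not a plain file

def save_avatar(raw_bytes: bytes) -> None:
    """Encrypt the avatar data with a freshly generated device-local key."""
    key = Fernet.generate_key()
    KEY_PATH.write_bytes(key)
    DATA_PATH.write_bytes(Fernet(key).encrypt(raw_bytes))

def load_avatar() -> bytes:
    """Decrypt the avatar data; only works on the device holding the key."""
    key = KEY_PATH.read_bytes()
    return Fernet(key).decrypt(DATA_PATH.read_bytes())

# The downside Bryson notes: lose the device (or the house burns down) and
# the key is gone, and with it, grandma's avatar.
```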

Front view portrait of a sad teenager complaining in a bar at night. (AntonioGuillem via Getty Images)

Data will always be at risk, no matter where or how it's stored; that's a hazard of modern life. And all these privacy concerns can feel like tomorrow's problem (in the same way we tend to worry about online fraud only once it happens to us). The cost, the accuracy and the general creepiness of AI and our future digital avatars might be off-putting, but the technology also feels like an overwhelming inevitability. That doesn't mean our future is destined to be an ocean of Max Headrooms dishing up our innermost secrets to any hacker who'll listen.

"It's going to be an immediate problem; it probably already is a problem," Bryson said. "But hopefully a good, high-quality version has transparency, and you can check it. And I'm sure Bing and Google are working on this right now, to be able to see where chat programs get their ideas from." Until then, though, we risk finding out the hard way.

Bryson is keen to point out that there are some positives, and they're available to the living. "If you place too much importance on death, you're not thinking about it properly," she said. This technology forces us to confront our mortality in a new, albeit curious, way, and that can only help us think about the relationships we have right here in the world of the living. An AI version of someone is always going to be a poor facsimile; so, as Bryson suggests, why not get to know the real person better while you can? "I would like people to try conversations with a chatbot and then talk to a real person and find out what the differences are."

