A leading Holocaust campaigner used AI to return from the grave to answer relatives’ questions

The whole world is turning into a “Black Mirror” episode. And we’re embracing it.

Marina Smith, who died in June aged 87, speaks to mourners at her funeral via artificial intelligence


15 September 2022 – Over the summer I had my media team join me at my home in Crete to do a deep dive into the approaching tsunami of addictive AI-created content which will soon (well, it has already started) overwhelm us. We are simply unready for the coming deluge of video, audio, photos and even text generated by machine learning to grab and hold our attention: BlenderBot 2.0, DeepMind, GANs (generative adversarial networks), Midjourney (an AI-based graphics app), TikTok, and so on. All of these rely on AI systems capable of producing limitless amounts of content; some produce believable-looking pictures of humans (and lots of other things) and are already being used for fake profile pics in marketing or, worse, for disinformation and espionage.

Ah, the deluge.

For this post I want to discuss just one item which has gained much press attention: Holocaust campaigner Marina Smith appeared to answer questions at a funeral celebration of her life, thanks to new technology. Mrs Smith died in June, aged 87, but video technology built by her son’s firm meant those attending her funeral could watch her respond to their questions about her life. Stephen Smith said it enabled his mother to be “present, in a sense”. His company predicts many uses for its “conversational video technology”.

Mrs Smith co-founded the National Holocaust Centre in Nottinghamshire, from where she ran a successful Holocaust education programme. She was awarded an MBE in 2005 for her work. The founders of StoryFile hit upon the idea for the company while working on creating interactive holograms of Holocaust survivors for the USC Shoah Foundation. You can read more about her life and work here.

Smith, the chief executive and co-founder of StoryFile, which I profiled earlier this year, told the BBC in an interview that the technology meant, once a person had died, it was possible to have a conversation with them “as if they are there, and they will answer you”. He said it meant his mother had brought “the aspects of her life that were most important to her to the people who loved her most. And it was very meaningful to them”. His mother’s words were her own, and not the creation of artificial intelligence, Smith stressed.

Some of the actual interaction at the funeral can be seen in a one-hour edit of the service, though not all of the questions or interactions were made public.

I have seen most of the clips, and this is more than just pre-recorded data left behind. No, she is not really saying anything new through an AI; what is striking is the manner in which the AI makes the interaction possible.

So how does it work?

It’s quite fascinating. It is a highly flexible, “no-code” platform. You can use any video camera, from smartphones to 12K to 360-degree rigs, to capture responses to questions. You can even upload video clips, transcripts, and other media, and use them to train the AI that builds a storyfile. What StoryFile does is take mixed-reality immersion a step further, using conversational artificial intelligence and natural language processing with video to create a question-and-answer dynamic that recreates a conversation. The video assembly is intricate and quite distinctive.

The AI is quite complex (which I will fully explain in my longer piece), but for the purposes of this post I’ll simplify it. To make a conversational video, a person must make a recording while still alive, answering numerous questions about their life. Later, after that person’s death, an AI system selects appropriate clips to play in response to questions from people viewing a remembrance video; the person in the video appears to listen and reply. The system does not construct its own replies and does not use AI to invent answers. It simply selects from a pre-recorded set of sequences and cleverly allows people to trigger them. And these are not set questions: you can ask almost anything, and the AI retrieves what it determines is the appropriate response.
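StoryFile has not published its internals, but the behaviour described above maps onto a familiar retrieval pattern: transcribe each pre-recorded answer, embed the transcripts, embed each incoming question, and play the clip whose transcript is semantically closest, falling back to a generic clip when nothing matches well. Here is a minimal sketch of that idea in Python. The encoder choice, the Clip structure, the similarity threshold, and the file names are all my own illustrative assumptions, not StoryFile’s design.

```python
# Minimal sketch of clip retrieval for a conversational video, as described
# above. Everything here (model choice, threshold, file names) is an
# illustrative assumption; this is not StoryFile's actual implementation.
from dataclasses import dataclass

import numpy as np
from sentence_transformers import SentenceTransformer


@dataclass
class Clip:
    video_path: str   # pre-recorded video answer
    transcript: str   # what the subject says in that clip


# Clips recorded while the subject was alive, each answering one question.
clips = [
    Clip("clip_001.mp4", "I co-founded the National Holocaust Centre in Nottinghamshire."),
    Clip("clip_002.mp4", "I was awarded an MBE in 2005 for my education work."),
    Clip("clip_003.mp4", "We lived in Nottinghamshire for most of our lives."),
]

# Embed every transcript once, up front. Unit-normalised vectors mean a
# plain dot product equals cosine similarity.
model = SentenceTransformer("all-MiniLM-L6-v2")
index = model.encode([c.transcript for c in clips], normalize_embeddings=True)


def answer(question: str, threshold: float = 0.35) -> str:
    """Return the best-matching pre-recorded clip for a free-form question."""
    q = model.encode([question], normalize_embeddings=True)[0]
    scores = index @ q                 # cosine similarity against every clip
    best = int(np.argmax(scores))
    if scores[best] < threshold:
        # Nothing matches well enough: play a generic "I can't answer that" clip.
        return "fallback.mp4"
    return clips[best].video_path


print(answer("What honours did you receive?"))   # likely clip_002.mp4
```

The key point sits in that final branch: the system only ever plays back words the person actually recorded; the “intelligence” is entirely in the matching.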

The firm sees a wide range of possible commercial applications for the technology, from customer service to sales. It has also encouraged some well-known figures to document their careers using the tech, including the Star Trek actor William Shatner, whose video can be interacted with on the company’s website.

Looking ahead, Smith envisages a world in which people document their lives on a continuous basis, suggesting that users could “speak to your 18-year-old self, when you’re 50, or introduce your children to your 16-year-old self”.

Previously, it has been suggested that AI could be used to create fully synthetic versions of dead people. But Smith rejects the idea that current technology is capable of this: “Everything about us is so absolutely unique to us,” he said. “There is no way you can create a synthetic version of me, even though it may look like me.” Point taken: using current AI technology to create a “computer-generated” person would risk putting words into the deceased person’s mouth; worse, those words could be believed by the audience.

And the limits of “AI conversationalists” were demonstrated by Meta’s BlenderBot 3, which was criticised for making offensive remarks and for saying unflattering things about the company’s co-founder Mark Zuckerberg. Meta said it was “a prototype created for research purposes”, adding that it had warned users “they should expect it to say things it ideally should not”. Ah, the metaverse – falling apart already.

POSTSCRIPT

One of the lessons I absorbed from a few decades of technology journalism is that conceiving what will happen when things scale up is really, really difficult. We can see a lone tree and grasp it; but imagining how a forest of them will change the ecosystem is incredibly hard. The iPhone and Android made it easy to get email out of the office. But they also prompted an explosion of apps. Which created a new economy of people making apps. Which encouraged apps that weren’t restricted just to doing things on the phone, but were useful in the physical world, such as Uber. Meanwhile, the connectedness meant that photos and videos could be uploaded and even streamed – for good, for bad.

The point being that all the disparate bits above might look like, well, disparate parts, but they’re available now. The trees are here, and the forest might be starting to take shape. We all remember Arthur C. Clarke’s comment that “any sufficiently advanced technology is indistinguishable from magic”. Well, the magic is among us now, seeping into the everyday. The tide is rising. But the real wave is yet to come.
