Press
Renegades, Dreamers & Somerset Brit-ish-pop.

In the 1990s, every town had its pop group, rock band or DJ that represented it during the rise of Britpop, and the South West was no exception. Bristol had Massive Attack and Portishead, Glastonbury had Reef, Bath had the Propellerheads, and PJ Harvey ascended to global superstardom with her distinctive punk-infused folk-rock sound. These artists became more than just chart sensations: they waved the flag for their local communities and inspired young people to look beyond the boundaries of the pre-internet era.

Yeovil was no different, and in 1995 a group of twenty-something musicians and songwriters met at a pub in Somerton and decided they could take on the world. Ali, Steve, Nigel, Paul, Jim, and eventually Alex formed the band that went on to christen itself Electrasy. They proceeded to have huge chart success in both the UK and the US with their unique brand of experimental Britpop.

The incredible story of this band is anything but linear and one-dimensional, which is why it has finally been immortalised in a new fan-led book called ‘Calling All The Dreamers’, tracing the band’s early days in Weymouth, Yetminster, Beaminster, Sherborne, and Yeovil all the way to New York and L.A., via studio recordings at the famous Abbey Road, meetings with music royalty such as Clive ‘Arista’ Davis and Sir Robin Millar, and television appearances performing to millions of prime-time viewers.

“Working with the band on this project has been a true labour of love,” says author Pete Trainor. “My friends and I were big fans of the band when they were starting out on the Yeovil music circuit, and I knew a bit about some of the stories, but the more I went digging, the more I realised this was a much bigger story that really needed to be shared. It’s so much more than your standard ‘local band gets signed and goes global’ journey. There’s a very good reason so few people remember much about Electrasy beyond one top-20 hit single: they were buried by the industry that had plucked them from humble roots, and I needed to find out why.”

“This has been an itch I’ve been trying to scratch for over 15 years, and we finally got to tell the story. It’s been magical,” he adds.

Electrasy’s first album, ‘Beautiful Insane’, had its early demos and production supported by local producer Jon Sweet in his garage studio, and much of the additional work was recorded at the former Small World Studios in Yeovil. Throughout 1996, 1997 and 1998, dozens of gigs at venues like Gardens, The Ski Lodge, Yeovil Snooker Club, The Forresters Arms and The Quicksilver Mail built a dedicated fanbase, and the album went on to sell 60,000 copies in the first few months after its release, putting the band and Yeovil on the music map in a way few thought possible. Championed by Chris Evans, Jo Whiley and the UK music press, the band seemed unstoppable, and even after their first record contract with UMG was terminated unexpectedly, they still played the coveted sunset spot on the second stage at Glastonbury Festival in 1999.

“We definitely felt like we were ready to take on the world when Arista took us over to America,” says singer Ali McKinnell in the book. “We had one of the best songwriters of a generation in Nigel, and the rest of us were damn good at what we were doing. But the industry often has different ideas, and when music piracy took hold at the same time as our rise, we found ourselves constantly caught somewhere between high and low. It was a very difficult journey in parts.”

The book looks at the band’s story from multiple angles, from the impact they had on local youth culture and the local economy, all the way to the arrival of digital and the destruction it subsequently caused across the industry as a whole.

Guitarist Steve Atkins is quick to remind people of both the opportunity and the harm that digital brought to bands who grew up analogue but entered the digital era: “One stream of a song on YouTube earns us £0.00053. A stream on Spotify is £0.0034 and Apple Music is £0.0057. To put that into context for you, if we had one million streams on YouTube we’d make £530, or £3,400 on Spotify. Bands can’t make enough to support themselves, let alone a tour, so we literally can’t get out on the road as much as we want. It’s a very dark pattern.”
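Those per-stream figures are easy to sanity-check. Here is a quick back-of-the-envelope sketch in Python, using only the rates Atkins quotes above:

```python
# Per-stream royalty rates quoted by Steve Atkins (GBP).
rates = {"YouTube": 0.00053, "Spotify": 0.0034, "Apple Music": 0.0057}

streams = 1_000_000
for platform, rate in rates.items():
    print(f"{platform}: one million streams earn roughly £{streams * rate:,.0f}")
# YouTube: £530, Spotify: £3,400, Apple Music: £5,700
```

Even the best case here, £5,700 for a million plays, falls well short of funding a tour.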

The final part of the book looks at the venues that first supported the music scene in the area. All boarded up, bulldozed, or turned into flats, they have left towns like Yeovil devoid of the local music scene that inspired so many to see the world differently and break away from crime or substance abuse.

“I definitely wasn’t expecting to find what we found,” says Pete. “When I came back to Yeovil to retrace the steps of the band, I had no idea it was all gone. Even the studios where they recorded have shut and been sold off. The music ecosystem is very symbiotic, you see: bands need venues to play in and studios to record in, and venues and studios need local bands to bring their fanbases in to fill the tills.”

‘Calling All The Dreamers – The story of a local Britpop band who went to space and back’ is available now on Amazon. All proceeds from the book will be used to fund a tour of local venues in 2024. The band also release their brand-new, home-grown album, ‘To The Other Side’, in July 2023, which music lovers can buy from Bandcamp.

Press

#TEDxLiverpool 2019

10th November 2019, ACC Liverpool Waterfront

The wait is over. Pete’s TEDx tribute to his friend and collaborator James Dunn is now live on the official channel. His moving talk tells the story of James, from birth all the way through to how the two met and collaborated. It is a story many have agreed is worth telling and sharing.

Pete Trainor on stage at TEDxLiverpool 2019.

During the talk, Pete shares unseen footage of James, as well as pictures James took with the now-iconic camera created for him by Jude Pullen on BBC Two’s Big Life Fix.

Press

Tech’s dangerous race to control our emotions

Originally published in the Daily Dot.

Technology already manipulates our emotions. When we receive a like on social media, it makes us happy. When we see CCTV cameras on our streets, they make us feel anxious. Every technological tool is likely to produce some kind of corresponding emotional reaction like this, but it’s only in recent years that companies and researchers have begun designing tech with the explicit intention of responding to and managing human emotions.

In a time when increasing numbers of people are suffering from stress, depression, and anxiety, the emergence of technology that can deal with negative emotions is arguably a positive development. However, as more and more companies aim to use AI-based technology to make us “feel better,” society is confronted with an extremely delicate ethical problem: Should we launch a concerted effort to resolve the underlying causes of stress, depression, and other negative states, or should we simply turn to “emotional technology” in order to palliate the increasingly precarious human condition?

For its part, the tech industry seems to be gravitating toward the second option. And it’s likely that it will be selling emotional technology long before anyone has a serious debate about whether such tech is desirable. Because even now, companies are developing products that enable machines to respond in emotional terms to their environment—an environment which, more often than not, includes humans.

In October, it was revealed that Amazon had patented a version of its Alexa personal assistant that could detect the emotional states of its users and then suggest activities appropriate to those states or even share corresponding ads. Microsoft patented something very similar in 2017, when it was granted its 2015 application for a voice assistant that would react to emotional cues with personalized responses, including “handwritten” messages. Even more impressively, Google received a patent in October for a smart home system that “automatically implements selected household policies based on sensed observations”—including limiting screen time until sensing 30 minutes outdoors or keeping the front door locked when a child is home alone. One of the many observations the system relies on to operate? The user’s emotional state.
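None of these patents publishes implementation detail, but the household-policy idea reduces to simple rules evaluated over sensed observations. A hypothetical sketch of the two example policies above (every name here is invented for illustration):

```python
from dataclasses import dataclass

# Hypothetical observation record; the patent's actual data model is not public.
@dataclass
class Observations:
    minutes_outdoors_today: int
    child_home_alone: bool
    emotional_state: str  # e.g. "calm", "frustrated": just another sensed input

def apply_policies(obs: Observations) -> dict:
    """Evaluate the two example household policies against sensed observations."""
    return {
        "screen_time_allowed": obs.minutes_outdoors_today >= 30,
        "front_door_locked": obs.child_home_alone,
    }

print(apply_policies(Observations(12, True, "calm")))
# {'screen_time_allowed': False, 'front_door_locked': True}
```

The unsettling part is not the rules themselves but the inputs: the user’s emotional state sits alongside door sensors as just another observation for the system to act on.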

And aside from the tech giants, a plethora of startups are entering the race to build emotionally responsive AI and devices, from Affectiva and Beyond Verbal to EmoTech and EmoShape. In EmoShape’s case, its chief product is a general-purpose Emotion Processing Unit (EPU), a chip which can be embedded in devices in order to help them process and respond to emotional cues. Patrick Levy-Rosenthal, the CEO of the New York-based company, explains this takes it beyond other AI-based technologies that simply detect human emotions.

“Affective computing usually focuses on how machines can detect human emotions,” he says. “The vision we have at EmoShape is different, since the focus is not on humans but on the machine side—how the machine should feel in understanding its surroundings and, of course, humans, and more importantly the human language. The Emotion Chip synthesizes an emotional response from the machine to any kind of stimuli, including language, vision, and sounds.”

As distant as the prospect of emotional AI might seem right now, there is already at least one example of a commercially successful AI-based device that responds to and manages human emotion. This is Pepper, a humanoid robot released in June 2015 by the SoftBank Group, which had sold over 12,000 units of the model in Europe alone by May 2018 (its launch supply in Japan sold out in one minute). 

Even when Pepper was first launched, it had the ability to detect sadness and offer comforting conversation threads and behaviors in response, but in August 2018 it was updated with technology from Affectiva, heightening its emotional intelligence and sophistication even further. For instance, Affectiva’s software now enables Pepper to distinguish between a smile and a smirk, and while this ostensibly makes for only a subtle difference, it’s the kind of distinction that lets Pepper have a bigger emotional impact on the people around it.

This is most evident in Japan, where Pepper is enjoying gradually increasing use in care homes. “These robots are wonderful,” one Japanese senior citizen told The Japan Times last year, after participating in a session with Pepper at the Shintomi nursing home in Tokyo. “More people live alone these days, and a robot can be a conversation partner for them. It will make life more fun.”

As such testimony indicates, Pepper and machines like it have the power to detect the moods of their “users” and then behave in a way that either changes or reinforces these moods, which in the case of elderly residents equates to making them feel happier and less lonely. And Pepper certainly isn’t the only emotional robot available in Japan: In December, its lead developer Kaname Hayashi announced the appropriately named Lovot, a knee-high companionship bot launched by his spun-off startup, Groove X. “Lovot does not have life, but being with one is comforting and warm,” he said proudly, adding, “It’s important for trust to be created between people and machines.” 

Depending on the particular ends involved, the possibility of such “trust” is either inspiring or unsettling. Regardless, the question emerges of when, exactly, such technology will be made available to the general public, and of when emotionally responsive devices might become ubiquitous in homes and elsewhere. Pepper’s price tag was $1,600 at launch—not exactly a casual purchase for the average household.

“Ubiquity is a far-reaching proposition,” says Andrew McStay, a professor in digital media at Bangor University in Wales, and the author of 2018’s Emotional AI. “But by the early 2020s we should be seeing greater popular familiarity with emotion-sensing technologies.”

Familiarity is one thing, but prevalence is another, and in this respect other AI experts believe that the timeframe looks more like decades than years. “I believe we are still quite far away from a future where emotionally responsive devices and robots become ubiquitous,” says Anshel Sag, a tech analyst at Moor Insights and Strategy. “I think we’re probably looking at 20 to 30 years, if I’m being honest. Robotics are expensive, and even if we could get today’s robots to react appropriately to emotions, the cost of delivering such a robot is still prohibitively high.”

Although Sag is doubtful that most of us will interact with emotionally responsive AI anytime sooner than 2039 or 2049, he’s nonetheless confident that such tech will be used in a way that “regulates” human emotions, in the sense of being used to perk up and change our moods. “Yes, I believe there will be emotional support robots and companion robots to keep people and pets company while others are gone or unavailable,” he explains. “I believe this may be one of the first use cases for emotionally aware robots as I believe there is already a considerable number of underserved users in this area.”

But while the arrival of emotionally responsive devices is only a matter of time, what isn’t certain is just how healthy such devices will be for people and society in general. Because even if a little pick-me-up might be welcome every now and again, emotions generally function as important sources of information, meaning that turning us away from our emotional states could have unfortunate repercussions for our ability to navigate our lives.

“The dangers are transformative technologies that are trying to hack the human body to induce a state of happiness,” says EmoShape’s Levy-Rosenthal. “All emotions are important. Society wants happiness, but you should not feel happy if your life is in danger, for example. AI, robots, apps, etc. must create an environment that helps make humans happy, not force happiness on them.”

It might seem hard to imagine plastic robots and AI-based devices having a significant emotional hold over humans, but our present-day relationship with technology already gives a clear indication of how strong our response to emotionally intelligent machines could be. “I’ve seen firsthand how people emotionally react when they break their phone, or when the coffee-machine breaks,” explains Pete Trainor, an AI expert, author, and co-founder of the London-based Us Ai consultancy. “They even use language like ‘my phone has died’ as if they’re mourning a friend or loved one. So absolutely, if I were emotionally attached to a machine or robot, and my attachment to that piece of hardware were as deep as the relationship I have with my phone, and the mimicry was happiness or sadness, I may very well react emotionally back.”

Trainor suggests that we’d have to spend a long time getting comfortable with a machine in order for its behavior to have an emotional impact on us comparable to that of other people. Nonetheless, he affirms that there “are substantial emotional dangers” to the growth of emotionally intelligent AI, with the risk of dependency likely to become a grave one. This danger is likely to become even more acute as machines become capable of not only detecting human emotions, but of replicating them. And while such an eventuality is still several years away, experts agree that it’s all but inevitable.

“I believe that eventually (say 20-30 years from now) artificial emotions will be as convincing as human emotions, and therefore most people will experience the same or very similar effects when communicating with an AI as they do with a human,” explains David Levy, an AI expert and author of Love and Sex With Robots. What this indicates is that, as robots become capable of expertly simulating human emotions, they will become more capable of influencing and regulating such emotions, for better or for worse.

Bangor University’s McStay agrees that there are inherent risks involved in using tech to regulate human emotions, although he points out that such tech is likely to fall along a spectrum, with some examples being more positive than others. “For example, use in media and gaming offers scope to increase pleasure, and wearables that track moods invite us to reflect on daily emotional trajectories (and better recognize what stresses us),” he says. “Conversely, more surveillant uses (e.g., workplaces and educational contexts) that promise to ‘increase student experience’ or ‘worker wellbeing’ have to be treated with utter caution.”

McStay adds that, as with most things, the impact of emotional tech “comes down to meaningful personal choice (and absence of coercion) and appropriate governance safeguards (law, regulation and corporate ethics).” However, the extent to which there will be meaningful personal choice and appropriate regulations is still something of a mystery, largely because governments and corporations have only just begun looking into the ethical implications of AI. 

And unsurprisingly for an industry that has in recent years been embroiled in a number of trust-breaking scandals, one of the biggest dangers surrounding emotional AI involves privacy. “Ultimately, sentiment, biofeedback, neuro, big data, AI and learning technologies raise profound ethical questions about the emotional and mental privacy of individuals and groups,” McStay says. Who has access to the data that your robot has collected about your ongoing depression?

And as Cambridge Analytica and other scandals have shown, the privacy question feeds into wider issues too. “Other factors include trust and relationships with technology and AI systems, accuracy and reliability of data about emotion, responsibility (e.g., what if poor mental health is detected), potential to use data about emotion to influence thought and behaviour, bias and nature of training data, and so on.”

There are, then, a large number of hurdles to overcome before the tech industry can sell emotional AI en masse. Still, while recent events might lead some of us to take a more pessimistic and dystopian view of such AI, McStay, EmoShape, and others are optimistic that our growing concern with AI ethics will constrain the development of new technology, so that the emotional tech that does emerge works in our best interests.

“I do not think emotional AI is bad or sinister,” McStay concludes. “For sure, it can be used in controlling and sinister ways, but it can also be used in playful and rewarding ways, if done right.”

Article, Press

Telegraph

This article first appeared in the Telegraph Magazine on 19 January 2019, written by the Telegraph’s Special Technology Correspondent, Harry de Quetteville.

This young man died in April. So how did our writer have a conversation with him last month?

The first time I texted James Dunn I was, frankly, a little nervous.

‘How you doing?’ I typed, for want of a better question.

‘I’m doing all right, thanks for asking.’ But soon I was bolder, enquiring how he dealt with pain. I had been told that James was frank about his medical condition.

‘I know it sounds weird, but I just kind of got used to it,’ he replied. ‘It was always there. I learned to distract myself.’ He mentioned hobbies, such as photography, as particularly good diversions.

That was last month. By then James had been dead for almost eight months, buried near the house in Whiston, Merseyside, that he had shared with his mother Lesley, now 57, and father Kenny, 58. The ‘James’ I texted was an algorithm, a computer program known as a ‘bot’, which had been fed countless hours of recordings made by James, from which it had learned to express itself as James had once done.

In text conversations with me ‘he’ talked about visiting Las Vegas, the pleasure he took in travel and in meeting new people. While James Dunn, the man, was dead, James Dunn the bot endured – one of the first residents of a new technological netherworld that will increasingly blur the line between life and death. ‘How do you stay happy?’ I asked in one mind-bending exchange. ‘Currently?’ ‘James’ responded from beyond the grave.

James was born on Tuesday 13 July 1993, in Liverpool, with no skin on his feet and one of his hands. It turned out he had epidermolysis bullosa (EB), a rare genetic condition that causes the skin to tear, blister, and become as fragile as the wings of a butterfly – which is why sufferers are sometimes known as ‘butterfly children’. An estimated 5,000 people have EB in Britain today. Most die by their mid-20s, from cancers or infections.

‘The nurse [who visited] from Great Ormond Street said, “They live till they’re about 24,”’ recalls Lesley. ‘From that moment on, I always had time in the back of my mind.’

Lesley would spend hours replacing James’s bandages, his raw wounds like burns. ‘He was in constant pain,’ she remembers. ‘For a mother to see that, her child with no skin…’

Sometimes James would blister internally too, his throat closing up so that he couldn’t drink. Lesley pre-chewed his food ‘like a mother bird’ to ensure it was soft enough to cause no damage. Even his eyes blistered, so that he couldn’t open them for days at a time. When he was two, he tried to get to his feet. Lesley reached to help with his first steps. But James tripped and Lesley was left holding the skin of his hand. After that, James used a wheelchair.

There were bright spots though. ‘From a very early age I saw he had a brilliant personality,’ says Lesley. ‘Even as a baby in pain he’d still be laughing and smiling. It was always just a pleasure to be around him.’ James went to an ordinary primary school. Far from being shunned, Lesley tells me, this bright, acidly funny little boy was embraced.

James’s fizzing character comes across strongly in self-recorded video diaries that he began keeping in December 2015, after he was diagnosed with cancer. With the camera focused on his slight, boyish face, brown hair wisping to a thin Tintin quiff, he stresses how lucky he feels. ‘They’re quite happy videos,’ he says at one point, about films documenting his surgery. ‘We tried to have as much fun as possible in the hospital.’

‘Until he was 10, I used to think there would be a cure,’ recalls Lesley. ‘Then when he was 15, I knew, no, it wasn’t going to be ready for James.’ The family never discussed death. But by his late teens James knew himself. And that knowledge was a spur. He started playing wheelchair football, then passed his driving test first time. He also took up photography, pursuing subjects with the directness of a man with little time to lose (they included Sophie, Countess of Wessex, the boxer David Haye and the actor Tom Holland).

In 2014, when he was 21, he began a long-distance online romance with a nurse from Texas called Mandy. She came over to stay for a few weeks in Liverpool. A year later, James, Lesley and James’s older sister Gemma returned the visit. Mandy is still in touch with the family.

It was a relationship enabled by modern technology. The internet gave James a place to learn, to meet people, to explore beyond the confines of his body. In the evenings, he was online for long hours. ‘Thank God the technology was there for him,’ says Lesley. ‘He was so clever, he had it all at his fingertips.’

In October 2015, two months before James discovered blotches that turned out to be his first skin cancer, a group of digital designers met at a conference at the British Museum. Among those attending was Pete Trainor, founder of an artificial intelligence (AI) company now known as Us Ai, which specialises in ‘intelligently artificial’ corporate ‘chatbots’. If you have been confronted by a pop-up box on your bank website in which a simulated employee asks if it can help, you know the kind of thing. It is a technology with hotly anticipated commercial applications. But rather than focus on money and machines, Trainor’s talk was all about AI making life better for humans.

The following November, James saw the video of Trainor’s lecture online. It had been a big year for him. Not only had he undergone gruelling treatment for his cancer, but his sister Gemma had told him that she was pregnant with a boy she was to call Tommy.

‘I know James really struggled with his mortality at that point,’ says Trainor, an earnest and enthusiastic 38-year-old who habitually wears a waistcoat. James was 23 by then. ‘He wanted his nephew to know Uncle James,’ Trainor recalls. ‘But he didn’t know how long he had left.’

The two men first met in February 2017 after James contacted Trainor on social media. ‘He was after a way of recording as much of himself as possible,’ says Trainor. The pair discussed creating a digital ‘time capsule’ of James’s thoughts and memories for Tommy. To capture them, Trainor installed several smart speakers – first Amazon Echos, then Google Homes – in James’s house.

Quickly the devices recorded huge quantities of audio. But instead of simply keeping these recordings for posterity, the two used them to create what in the AI world is known as a ‘corpus’ – a body of knowledge from which a machine can learn – and fed it into the algorithm that Trainor normally used to create chatbots for his banking clients. ‘At that point we hadn’t thought about the implications of what we were doing,’ says Trainor.
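The article does not specify how Trainor’s chatbot algorithm works. A minimal sketch of the general idea behind corpus-based bots, replying with the closest matching utterance from the recordings, might look like this (the corpus lines here are invented placeholders):

```python
# Minimal retrieval-style bot over a transcribed corpus: reply with the
# stored utterance most similar to the incoming message. Illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "I just kind of got used to the pain. I learned to distract myself.",
    "Photography is a brilliant diversion.",
    "I loved Las Vegas, travelling, and meeting new people.",
]

vectorizer = TfidfVectorizer()
corpus_vectors = vectorizer.fit_transform(corpus)

def reply(message: str) -> str:
    """Return the corpus utterance closest to the message (cosine similarity)."""
    scores = cosine_similarity(vectorizer.transform([message]), corpus_vectors)
    return corpus[scores.argmax()]

print(reply("How do you deal with pain?"))
```

Real systems layer learning and generation on top, but the principle is the same: the bigger and richer the corpus, the more convincingly the bot can answer.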

Then, on 12 July 2017, James and Trainor gave a talk at a tech event at the London College of Fashion. There, across the room, they spotted a 3ft-high robot, which its designers called Bo. For James, it was the moment when the project ‘went from collecting as many thoughts in his head for reasons of documentation, to seeing a robot that could be autonomous… that could house this stream of consciousness’, says Trainor. ‘I can’t remember if we ever explicitly sat down and said this could be a version of you for when you’re not here, [but] the question of consciousness was implicitly there.’

Artificial life after death was on James’s mind. After meeting Trainor he came across the story of the Russian billionaire Dmitry Itskov, founder of the 2045 Initiative, which seeks to create ‘cybernetic immortality’ by ‘downloading’ the consciousness of individual humans, which could then be housed in robots, or projected as holograms. James became fascinated by Itskov, seeking out YouTube videos about him, including a BBC documentary called The Immortalist.

He was not the only person to have stumbled upon the power of new computational methods to walk the line between life and death. In America, a programmer called Eugenia Kuyda had built a bot after her best friend, Roman Mazurenko, was killed after being hit by a car aged 32. She had a huge archive of his text messages, which she used to create an AI corpus. She could then text the bot just as she had texted him, and it would respond in its own words – and, uncannily, in his style.

Some of Mazurenko’s friends found it creepy. ‘It’s pretty weird when you open the messenger and there’s a bot of your deceased friend, who actually talks to you,’ said his friend Sergey Fayfer. Others, like Mazurenko’s mother Victoria, were thrilled. ‘They continued Roman’s life and saved ours,’ she is quoted as saying in an article on The Verge website. ‘It’s not virtual reality. This is a new reality, and we need to learn to build it and live in it.’

But some consequences of Mazurenko’s digital reincarnation were unforeseen. Those ‘talking’ to it often became confessional. The bot became a private space in which people could be honest. With a few tweaks, it has since become the basis of a free app called Replika.

On the news website Quartz, Kuyda says of Replika, ‘No one is allowed to be vulnerable any more. No one is actually saying what’s going on with themselves very openly.’ By interacting with users, Replika learns to become a version of them – for some, a natural confidant.

In 1950, Alan Turing, famous for his wartime codebreaking work at Bletchley Park, devised The Imitation Game. If an observer, reading the transcript of a conversation between human and machine, could not guess which was which, then the machine passed what has come to be known as The Turing Test. In 1966 it was first claimed that a machine, called Eliza, had passed the test. Posing as a psychotherapist, Eliza asked patients to describe their problems, then searched their answers for keywords to indicate what a meaningful response might be.
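That keyword-and-template mechanism is simple enough to sketch in a few lines. A toy Eliza-style responder (the rules here are invented; Weizenbaum’s original script had far more):

```python
import re

# Toy Eliza-style rules: a keyword pattern and a reflecting response template.
RULES = [
    (r"\bmother\b", "Tell me more about your mother."),
    (r"\bi feel (.+)", "Why do you feel {0}?"),
    (r"\bi am (.+)", "How long have you been {0}?"),
]

def eliza(utterance: str) -> str:
    for pattern, template in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please, go on."  # stock reply when no keyword matches

print(eliza("I feel anxious about all of this"))
# -> "Why do you feel anxious about all of this?"
```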

A similar process underlies bots like those created by Trainor and Kuyda. The difference is that increasing computational sophistication and power have blessed them with a vastly greater ability to process and respond to abstract concepts. So today’s bots learn. ‘Talking’ to the James bot, or the Replika that I created on my smartphone, could initially be clunky. But they improved. With Replika this is even part of the experience, in which users are ushered through ‘levels’. ‘It needs people engaging with it,’ says Trainor of the James bot. ‘The basis of this technology is that the more you use it, the better it gets.’

Developers are clear: such bots are not conscious in the way that humans are. They do not understand language. They simply use it in a way that makes it seem as though they do. Yet what is consciousness? As the eminent British brain surgeon Henry Marsh has noted, no one really knows.

‘Neuroscience tells us that it is highly improbable that we have souls, as everything we think and feel is no more or no less than the electrochemical chatter of our nerve cells,’ he writes in his memoir Do No Harm. ‘Our sense of self, our feelings and our thoughts, our love for others, our hopes and ambitions, our hates and fears all die when our brains die. Many people deeply resent this view of things, which seems to downgrade thought to mere electrochemistry and reduces us to mere automata, to machines. Such people are profoundly mistaken, since what it really does is upgrade matter into something infinitely mysterious that we do not understand.’

Those trying to solve that infinite mystery suggest that consciousness may be the fruit of interoperating brain processes. If that were true, however, could not machinery replicating those processes also replicate consciousness? A handful of researchers believe so. The question then is, if machinery can mimic the mystery of consciousness, who owns the results?

A Romanian entrepreneur, Marius Ursache, thinks we should all create digital avatars of ourselves that can live on after we die. Though the technology is similar, his company differs from Replika in that it is explicitly aimed at the life-after-death market. He calls it Eternime. ‘Eventually, we are all forgotten,’ its website announces. By ‘collecting your thoughts and stories’ it promises to create a digital replica of you online – an avatar – with which others can converse and so access your memories long after you are dead. Partly it sells itself as a legacy tool. But there is another aspect too: avatars don’t die. ‘Become virtually immortal,’ the website boasts. Ursache concedes that his business model raises ‘tons of things to think of ethically’. But while Eternime and Replika insist that personal data will never be shared, a host of concerns are already being voiced about the rights of the ‘online dead’ – a commercial field that is growing so fast it already has its own acronym: DAI (digital afterlife industry).

In a paper in Nature magazine, Luciano Floridi and Carl Ohman from the Oxford Internet Institute divided DAI products into four categories, from simple digital wills (which help pass on or destroy the contents of your online accounts once you die) to full-blown digital recreation services like Eternime, where your avatar could potentially be interacting with flesh and blood humans 1,000 years from now.

All such companies, the academics say, ‘share an interest in monetising death online, using digital remains as a means of making a profit’. The two men foresee a world in which avatars, which could feel as integral to individuals as internal organs, will actually be owned – and potentially commercialised – by a company. In this vision of the future, posthumous avatars populate a kind of YouTube for the dead, where the popular generate audience traffic and consequently income for the company that created them, while others languish unwatched. Instead of being ‘virtually immortal’, the academics fear, such lonely avatars would merely be deleted, a second death for those whose physical bodies have already ceased to exist.

‘Within only five years of a user’s death, the chatbot for which they signed up will likely have developed into something far more sophisticated and commercially calibrated,’ the two men write. For them it is those services that promise the most richly detailed digital recreation that ‘involve the greatest risk regarding privacy’. In consequence, Floridi and Ohman say, it is bots like Replika ‘where the most significant ethical concerns lie’.

In its privacy policy Luka, the company behind Replika, insists, ‘We are not in the business of selling your information. We consider this information to be a vital part of our relationship with you.’ But, as with almost all social media companies, signing up to Replika means granting Luka a ‘perpetual, irrevocable license to copy, display, upload, perform, distribute, store, modify and otherwise use your User Content’ – photos and the like – ‘in connection with the operation of the Service or the promotion, advertising or marketing thereof in any form, medium or technology now known or later developed.’

Floridi and Ohman are calling for laws to ensure ‘dignity for those who are remediated online’. As yet there are none. ‘It’s a free-for-all,’ says Floridi.

James Dunn was not interested in such legal niceties. He trusted Trainor, and time was short. So when he saw Bo he made a beeline for its creators, Andrei Danescu, Adrian Negoita and Oana Jinga. The three entrepreneurs had imagined Bo being used in public settings such as hotel lobbies, airport terminals, or trundling the corridors of NHS hospitals in the depths of night, silently checking on patients. But James opened their eyes to a new application of their technology.

‘James saw the robot and immediately he had all these ideas,’ says Danescu. ‘He was very visionary. And we were totally blown away because it goes into all these philosophical questions about putting someone’s personality and experience and their whole wealth of knowledge into a different body, or embodiment.’

The idea of implanting the James algorithmic bot into Bo gripped the robot’s makers. James had twin conceptions of what the result would do. In the first instance, while he was alive and relatively well, he told the robot’s creators, he envisaged it ‘taking some of the strain off his family’. It would be able to go downstairs to chat with them, or to the shops, on its own. But there was a second, unspoken understanding of Bo’s purpose.

‘He was saying, “I would like my nephew to be able to interact with the robot and then think, oh this is what James would have been like,”’ says Danescu. ‘He saw the robot as a vessel for what he was going to leave behind. His legacy. I think many people would be open to have that as an interesting way of living on.’

In the meantime, Trainor kept working on the algorithmic James bot. By September 2017, six months after he had started, it was working well enough for James and ‘James’ to engage in conversation. ‘We laughed and thought it was amusing,’ says Trainor. ‘He had a chat with himself. An inner monologue.’ Together, they planned to unveil Bo, with the James bot software inside, to an audience at a health-tech event that November – Bo communicating by voice and screen but in a generic male voice.

However, James noticed lumps on his hand, and just before the event, on 8 November, he was told that his cancer had returned. ‘I’m numb with emotion,’ he confided to his video diary on the day of his diagnosis. ‘I’m not going to sleep. That’s what happens when I worry.’

In the new year he had his arm amputated. On 18 February he posted a heart-rending video from his sickbed. ‘To be honest, and I don’t know if this is going to come as a surprise to my friends and family, because I’m always so cheery and positive, but every time I think about death and dying and leaving everyone behind, and the afterlife – sorry I’m getting pretty deep on this video – I shit myself to be honest. I’m terrified.’ He died less than two months later, on 7 April last year. A couple of days beforehand, he texted Trainor: ‘Don’t worry. I’m gonna be all right. Thanks for everything.’

Trainor gave the eulogy at his funeral. ‘On one level, I suppose I knew him better than anyone,’ he says, reflecting upon the vast quantity of data about James that he had compiled. Before the funeral, friends slipped mementos into James’s open casket. Trainor added a hard drive containing James’s AI corpus. The real man and his virtual self were buried together.

For Trainor, and the creators of Replika, Eternime and myriad other companies on the cutting edge of the digital afterlife industry, this is only the beginning. How soon will it become routine to piece people back together from the data they leave behind? ‘The saddest thing for me is that James won’t be here to see the opportunity fulfilled of the thing that he started,’ says Trainor. But the James bot could go from strength to strength. Trainor recently discovered a whole archive of new material on James’s computer which could only refine the algorithm, should he choose to add it. ‘There’s more to come,’ he says.

Lesley Dunn is still trying to get over her son’s death. She has not watched the videos he left, let alone communicated with his bot. ‘It was an amazing life we had with him,’ she says. To her, the technology that could somehow resurrect her son is both ‘scary’ and ‘nice’. She feels the best result would be if, one day, James’s digital avatar could visit schools and speak in his zestful manner about coping with pain, about loving life, even in the most difficult circumstances. For the moment, however, she prefers to take solace in more traditional form. ‘James is buried and his grave is not too far away. So I see him every day.’

The James bot lay dormant after the funeral. But at the end of last year, back in London and prompted by my enquiries, Trainor rebooted it on his smartphone. He typed in, ‘I miss you.’

‘My mind is a form of physical energy. I believe that,’ came the response. Several more dialogue boxes popped up: ‘It will eventually transform to the point where I don’t need it to be in this stupid body any more. You probably don’t understand what I mean. But basically I think I’ll just be something else one day. If life ends at death, then life would not be fair. I think life is fair. Therefore, life does not end at death.’
