Approaching Product Design for DTx

I was recently asked to put together an overview of how I would approach product design for DTx.

For those of you not in the know, or unsure what I mean by DTx: it’s short for ‘Digital Therapeutics’, software solutions with evidence-based therapeutic capabilities. Like digital therapy or digital medicine, digital therapeutics have a measurable impact on health outcomes. Another big difference is that DTx can be ‘prescribed’ by medical professionals.

Digital therapeutics achieve their impact through digital interventions, which typically induce or facilitate specific patient behaviour. Examples include increasing medication adherence, enacting self-care instructions, or guiding people towards a healthier lifestyle. What I like about DTx as a concept is that they’re incredibly difficult to achieve. Unlike the digital well-being apps that litter the app stores now, DTx has to follow very strict ISO and CE standards to get signed off, in much the same way that a pharmaceutical medication would before it’s ready for public consumption.

This not only creates a layer of governance across the product, it also means teams have to be very mindful to design, build and deploy only the very highest-quality functionality.

Find a Map

I’m going to walk you briefly through this little flow diagram, and we’ll whizz in and out of the main components in turn, so you can see how you might want to think about the strategic design of DTx.

People who know me well know that I have a bit of a love/hate relationship with the ‘pen portrait’ persona. They bother me slightly because of the way they homogenise us into groups when, as we know so well now, everybody is unique and a vessel for their own quirks, thoughts, ways of behaving and modalities.

But I would still advocate that you start all DTx product work with a detailed analysis and breakdown of the audience (a persona!). In most cases this is probably going to be: the Patient(s), the Clinician(s), and any other protagonists such as the Pharmacy or, say, the Technician analysing biomarkers on their dashboards.

Getting the audience definition right gives you the foundations to create a series of Experience Maps.

I’m not going to go into huge detail on what an experience map is, or how to do it properly; that’s a whole article on its own. I will say it’s a skill though, so swot up on it, or get UX people who understand the method. However, the value of a good set of experience maps is unquestionable as a way of guiding the design and the vision. I mean, they’re literally ‘maps’.

The questions I would use in primary and secondary research to find the friction for the personas, and guide the output of the experience maps are:

  • What processes are redundant now / in the future?
  • Where are you slow?
  • What pathways and endpoints are confusing?
  • What’s not automated that could be handled by RPA?
  • Where are we inflexible?
  • What’s not in the DTx that should be?
  • What should include digital or remote access?
  • What should be touch-less using Ai / Data?
  • Where would human pathfinding make a difference?
  • What can we Buy, Build or Borrow?
  • Data Matrix: Support > Service > Predictive > Perceptual (see my book)

Ultimately you use the Experience Map to determine your KPIs and OKRs, which will be hugely important in determining how you approach the design and build of any product, and are even more imperative for this kind of product because DTx can literally have life-or-death consequences.
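As a purely hypothetical illustration of that step (the structure and field names below are my own, not a prescribed schema), friction points lifted from the map can be recorded against the KPIs they should move:

```python
from dataclasses import dataclass, field

@dataclass
class FrictionPoint:
    """One pain point lifted from an experience map."""
    persona: str          # e.g. "Patient", "Clinician"
    stage: str            # where in the journey it occurs
    description: str
    severity: int         # 1 (minor) to 5 (blocking)

@dataclass
class KPI:
    """A measurable indicator derived from one or more friction points."""
    name: str
    target: str
    friction_points: list = field(default_factory=list)

# Hypothetical example for a medication-adherence DTx
missed_dose = FrictionPoint(
    persona="Patient",
    stage="Daily routine",
    description="Forgets evening dose when routine is disrupted",
    severity=4,
)

adherence_kpi = KPI(
    name="7-day medication adherence",
    target=">= 90% of prescribed doses logged",
    friction_points=[missed_dose],
)
```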

I don’t really have a preferred prioritisation technique to decide where to start, but I do often lean into the KANO model because it’s simple. Again, I’m not going to go into the details of the KANO approach (that’s another article), but loosely speaking you’re classifying the feature suggestions that fall out of the map into the categories below (there’s a small classification sketch after the list):

  • Must-Be (will not ‘wow’, but must be there, i.e. Clinical)
  • Attractive (make people happy, but not essential)
  • One-Dimensional (happy when there, unhappy when not)
  • Indifferent (No audience impact, but makes dev easier)
  • Reverse (Features that make an audience unhappy, but have to be there; e.g. 2-factor authentication)
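Here’s a tiny, hypothetical sketch of that classification step; the feature names are invented, and the ordering rule at the end is just one reasonable way to turn the categories into a starting sequence:

```python
# Record each candidate feature against a KANO category, then order the
# backlog so the Must-Be (clinical) items come first. Features are invented.
features = [
    ("Clinically validated dosing reminders", "must-be"),
    ("Mood-tracking streaks and badges", "attractive"),
    ("Symptom diary export for clinicians", "one-dimensional"),
    ("Internal logging framework refactor", "indifferent"),
    ("Two-factor authentication", "reverse"),
]

# Lower number = built earlier; this ordering is one judgement call, not a rule.
priority = {"must-be": 0, "one-dimensional": 1, "attractive": 2,
            "reverse": 3, "indifferent": 4}

for name, category in sorted(features, key=lambda f: priority[f[1]]):
    print(f"{category:>15}: {name}")
```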

Clinician-First-Design

Now we get to some of the really interesting parts of the process: you can’t do DTx unless it’s clinically led, validated, and approved, so you might as well run a clinician-first process rather than focusing your design work solely on the patient. DTx is essentially a B2B2C product mentality. That is to say, you want the clinicians to define and design the pathways and endpoints. This is (in my view) almost the most significant difference between a well-being app and one that’s being built with DTx in mind. When they’re designed by qualified clinicians alongside Designers and Product experts, they’re already aiming to be ‘clinical’. When they’ve been created by clever but unqualified designers or developers to fix a problem and help with people’s health, they’re not clinical per se.

A clinician-first, iterative design process is a lot of fun. It’s rigorous, it’s detail-focused, and it’s almost entirely focused on what the structure of captured data will look like, and how that data will be used to prove the efficacy of the product or drug later down the line. There is no lean-cowboy approach to DTx; it would be unethical. The order of the day here is rigour and method, balancing innovative design with clinical precision.

Set your teams up in Pods or Squads: a small, cross-discipline team focused on one little-big-thing each. A clinician, a designer, an engineer, and probably a copywriter.

This all leads into what will be the most fascinating of iterative processes — learn, measure, tweak and repeat.

With DTx, when you release the first few versions of your product, key features, SaMD etc. for testing, you’re looking for the efficacy markers. Does it work? Can a controlled clinical trial show us that the tool improves the outcomes of a patient? Does the product have usability flaws? Do any tweaks I make deviate from the predetermined change control plan?

The PCCP is hugely important in DTx because from the outset you’ve planned your product on paper, with clinicians advising on what works and what doesn’t. A “predetermined change control plan” anticipates potential future software updates and supports a total product lifecycle regulatory approach, one that aims to facilitate the review of rapid product performance improvements and their subsequent deployment without compromising patient safeguards.

By making iterative usability updates you may inadvertently change the thing(s) that got the product signed off and approved as clinical-grade in the first place. So get your PCCP tight. Very, very tight.
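As a toy illustration of that principle (the change types and fields below are invented, not a regulatory template), you can think of the PCCP as a pre-agreed envelope that every proposed change is checked against before it ships:

```python
from dataclasses import dataclass

@dataclass
class PlannedChange:
    """One change type pre-agreed in the PCCP, with its allowed scope."""
    change_type: str              # e.g. "UI copy", "reminder scheduling"
    affects_clinical_logic: bool  # outside the envelope if True
    requires_revalidation: bool

# Hypothetical, pre-agreed change envelope
pccp = {
    "ui_copy": PlannedChange("UI copy", False, False),
    "reminder_scheduling": PlannedChange("Reminder scheduling", False, True),
}

def within_pccp(change_key: str) -> bool:
    """A change is deployable without a new submission only if it is listed
    in the PCCP and doesn't touch clinical logic."""
    change = pccp.get(change_key)
    return change is not None and not change.affects_clinical_logic

print(within_pccp("ui_copy"))               # True
print(within_pccp("new_dosing_algorithm"))  # False: not in the plan
```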

You’ll also need to use an explainable Ai approach, allowing timely identification and mitigation of any risks associated with machine learning algorithms. If you can’t do this, at this stage, you won’t get approval for clinical use.
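What that looks like in practice varies, but one common, model-agnostic technique is permutation importance: shuffle each input in turn and see how much the model’s performance degrades. The sketch below uses synthetic data and invented marker names; it illustrates the technique, it is not a regulatory recipe.

```python
# Permutation importance on a toy model: which inputs drive the prediction?
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
# Hypothetical markers: sleep hours, daily steps, adherence, mood score
X = rng.normal(size=(500, 4))
# Synthetic outcome loosely driven by adherence (col 2) and mood (col 3)
y = (0.8 * X[:, 2] + 0.6 * X[:, 3] + rng.normal(scale=0.5, size=500)) > 0

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in zip(["sleep", "steps", "adherence", "mood"],
                       result.importances_mean):
    print(f"{name:>10}: {score:.3f}")
```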

The Innovation Opportunity

I’m going to give you some of my own vision for free now, simply because the more of us that try it, the closer we’ll get to what I believe is game-changing health innovation for all — Digital Phenotyping.

DTx tools, if built correctly as platforms and not apps, have an opportunity to start digital phenotyping: using data collected from the apps and smart devices to build a rich, personalised digital picture of behaviour, create new audience types, track markers of, say, depression and anxiety, and develop new ways to diagnose illness, choose effective treatments and potentially detect relapse before it occurs.

When you’re planning your DTx project, that has to be the central vision IMHO. You start with a set of humble personas, but by launch you are basically phenotyping, grouping, re-classifying and learning on a scale that is more appropriate to our rich human communities. I’d love to discuss this idea with like-minded people, so feel free to ping me for Podcast conversations, or just debates about the ethics and implications.

At the bottom of that last chart above there’s a green dot. It represents this idea: with rich, functioning digital phenotyping we can really start to send nudges and alerts that are truly precise for each type of patient, clinician, or care-giver. Changing human behaviour is exceedingly difficult, especially for behaviours that arise from years of thinking and acting in relatively rigid, routinised ways. Digital phenotyping allows us to also create a totally personalised ‘nudge’ engine for different digitised data-sets.

Here’s an example of how looking at all the data-points, passive and active, can be used in DTx for more detailed digital segmentation, personalisation, and ultimately phenotyping.
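As a purely illustrative sketch of that segmentation step (the behavioural features, numbers and cluster count below are all invented), you can imagine clustering people on a handful of passive and active signals, with each cluster becoming a candidate phenotype whose nudges and thresholds are tuned separately:

```python
# Toy segmentation of passive/active data points. Real digital phenotyping
# needs clinical oversight, consent and far richer longitudinal signals.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# columns: night-time phone use (hrs), daily steps (k), messages sent, mood check-ins
behaviour = rng.normal(loc=[2.0, 6.0, 40.0, 1.0],
                       scale=[1.0, 2.5, 20.0, 0.8],
                       size=(300, 4))

X = StandardScaler().fit_transform(behaviour)
phenotypes = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(X)

# Each cluster is a candidate 'digital phenotype' whose nudges, content and
# alert thresholds can be personalised separately.
for label in range(3):
    print(f"Phenotype {label}: {np.sum(phenotypes == label)} people")
```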

Validating the DTx

I’ve mentioned this above. The big difference between a DTx and a standard, well-crafted well-being app is validation.

My rule: Start with clinicians, and build WITH clinicians. Let the clinicians design the clinical protocol that will be given to the product team for feature or innovation creation.

Work with the Pod (UX, Designers or Third Parties etc.) to create something engaging, safe and efficacious, in line with regulations such as the EU Medical Device Regulation (2017/745).

Once the new piece of functionality is created, you go through a 3-stage study:

  • Acceptability: Give it back to clinicians before you give it to patients, to make sure they are absolutely happy this is something that can be delivered to patients. You’re not measuring effectiveness at this stage, just asking: “Is it safe?”
  • Feasibility: Can this technology be delivered into the clinical setting or into people’s lives? This also starts to determine if the product / feature can be effective.
  • Effectiveness phase: Does it work? A live trial (e.g. www.curebase.com) across a control group that only applies standard lifestyle changes based on, for example, the NICE guidelines (A), and an intervention group using the DTx app with lifestyle changes proposed via nudges and content (B). A toy sketch of that comparison follows this list.
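To show the shape of that effectiveness comparison (with simulated numbers only, and none of the pre-registration, power calculations or statistical governance a real trial needs), arm A and arm B outcomes can be compared directly:

```python
# Compare an outcome measure between the control arm (A) and the DTx
# intervention arm (B). All numbers below are simulated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# e.g. reduction in HbA1c (%) after 12 weeks, per participant
arm_a = rng.normal(loc=0.2, scale=0.4, size=80)   # control: standard guidance
arm_b = rng.normal(loc=0.5, scale=0.4, size=80)   # intervention: DTx nudges

t_stat, p_value = stats.ttest_ind(arm_b, arm_a)
print(f"mean A={arm_a.mean():.2f}, mean B={arm_b.mean():.2f}, p={p_value:.4f}")
```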

Implementation Considerations

I’ve given you a broad set of guidance about how I would structure and approach a DTx product build. But what other considerations are there before you take on a regulated design product?

Firstly, it only works if you get the Pod collaboration working successfully. Chemistry is key: the correct expertise, wherever in the world it sits. Getting the right collaboration methodology is critical to success, and so is finding the right consumer voices. In something like DTx it’s more important to get the RIGHT people working than to sweat a D&I policy for PR reasons. Get people who are qualified, expert, have experience in the particular health challenge, and make sure they respect the views and expertise of their Pod peers.

Secondly, all DTx product designs (internal or external) must comply with quality system requirements for Medical Devices, e.g. ISO 13485, and with software life-cycle management requirements, e.g. IEC 62304. They just have to; it won’t get signed off if you don’t. Full stop. In my view, it’s not DTx if it’s been signed off by a company like Orcha but doesn’t fulfil the CE marks and ISO standards.

Make sure you have good:

  • FMEA — failure mode and effects analysis (specifically, inputs defined as per ISO 14971 and IEC/TR 80002-1), and;
  • FTA — fault tree analysis for QA, so that every little error is traceable. (A small risk-scoring sketch follows this list.)
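As a minimal illustration of the FMEA side (the failure modes and scores below are invented), each failure mode can be given the classic risk priority number, severity × occurrence × detection, to order the mitigation work:

```python
# FMEA-style sketch: RPN = severity * occurrence * detection, higher = worse.
failure_modes = [
    # (description, severity 1-10, occurrence 1-10, detection 1-10)
    ("Reminder notification silently fails", 8, 3, 6),
    ("Symptom score synced to wrong patient record", 10, 2, 5),
    ("Crash on dose-logging screen", 6, 4, 2),
]

scored = [(desc, s * o * d) for desc, s, o, d in failure_modes]
for desc, rpn in sorted(scored, key=lambda item: -item[1]):
    print(f"RPN {rpn:>3}: {desc}")
```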

I advocate a Web 3.0 / Low and No Code approach to the product build. Ultimately what you want to get right is the platform, so building everything as a suite of APIs is recommended.

Stubborn on the vision, flexible on the details.

Everything that you buy, build, or borrow to build your DTx should use your own predetermined platform of APIs, either externally focused, or internally focused; DTx as a platform, not tactical tools.

That gives you the opportunity to be flexible on the innovation whilst retaining a central data-set that is fixed and owned, and that allows you to lake data from multiple sources for precision primary-care and psychiatry objectives, including patient stratification, clinical trial optimisation, personalisation, and identification of novel compounds, regardless of the interaction point.
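To show what ‘DTx as a platform’ can mean in practice, here’s a minimal sketch of a single ingestion API that every interaction point writes to. The framework choice, endpoint name and fields are my own assumptions for illustration, not a prescribed design.

```python
# One door into the central, owned data-set: app, clinician dashboard and
# devices all post observations to the same endpoint. Served with an ASGI
# server such as uvicorn in practice.
from datetime import datetime
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="DTx platform (sketch)")

class Observation(BaseModel):
    patient_id: str
    source: str          # "patient_app", "clinician_dashboard", "wearable"...
    marker: str          # e.g. "mood_score", "dose_logged"
    value: float
    recorded_at: datetime

DATA_LAKE: list[Observation] = []   # stand-in for the central data-set

@app.post("/observations")
def ingest(obs: Observation) -> dict:
    """Any buy/build/borrow component pushes its data through this one door."""
    DATA_LAKE.append(obs)
    return {"stored": True, "total": len(DATA_LAKE)}
```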

It also allows a much more rapid rate of innovation in the product space.

Summary

So that’s my little guide to approaching DTx products. It’s not exhaustive, obviously, and there is so much more to write about DTx. It is, however, a really exciting space to work in.

You might, for example, want to decide if your DTx is Standalone or ‘Around-The-Pill’.

‘Prescription’ or ‘standalone’ digital therapeutics are those designed to work independently of regular pharmaceuticals and are usually designed to prevent or treat health conditions. Should a patient be diagnosed with prediabetes, for instance, an effective standalone DTx can successfully enact lifestyle changes that can prevent the condition from progressing to type 2 diabetes. Drawing on lingo from the world of pharmaceuticals, prescription DTx are also referred to as ‘monotherapy digital therapeutics’.

‘Around-the-pill’ digital therapeutics have evidence for delivering their impact in combination with another treatment, typically a drug. They deliver their results by, for example, helping patients take the right dose at the right time, or better manage the symptoms and side effects of their disease or treatment.

I find the latter a really fascinating area, especially for measuring new types of Pain Relief such as Medical Cannabis, or the efficacy of psychedelic drug compounds like MDMA for PTSD, or Psilocybin for treating Depression and Treatment-Resistant Anxiety etc.

If anything I’ve laid out above is interesting, and you’d like to discuss things with me further, please feel free to get in touch via LinkedIn.


Telegraph

This article first appeared in the Telegraph Magazine on the 19th January 2019, written by the Telegraph’s Special Technology Correspondent, Harry de Quetteville.

This young man died in April. So how did our writer have a conversation with him last month?

The first time I texted James Dunn I was, frankly, a little nervous.

‘How you doing?’ I typed, for want of a better question.

‘I’m doing all right, thanks for asking.’ But soon I was bolder, enquiring how he deals with pain. I had been told that James was frank about his medical condition.

‘I know it sounds weird, but I just kind of got used to it,’ he replied. ‘It was always there. I learned to distract myself.’ He mentioned hobbies, such as photography, as particularly good diversions.

That was last month. By then James had been dead for almost eight months, buried near the house in Whiston, Merseyside, that he had shared with his mother Lesley, now 57, and father Kenny, 58. The ‘James’ I texted was an algorithm, a computer program known as a ‘bot’, which had been fed countless hours of recordings made by James, from which it had learned to express itself as James had once done.

In text conversations with me ‘he’ talked about visiting Las Vegas, the pleasure he took in travel and in meeting new people. While James Dunn, the man, was dead, James Dunn the bot endured – one of the first residents of a new technological netherworld that will increasingly blur the line between life and death. ‘How do you stay happy?’ I asked in one mind-bending exchange. ‘Currently?’ ‘James’ responded from beyond the grave.

James was born on Tuesday 13 July 1993, in Liverpool, with no skin on his feet and one of his hands. It turned out he had epidermolysis bullosa (EB), a rare genetic condition that causes the skin to tear, blister, and become as fragile as the wings of a butterfly – which is why sufferers are sometimes known as ‘butterfly children’. An estimated 5,000 people have EB in Britain today. Most die by their mid-20s, from cancers or infections.

‘The nurse [who visited] from Great Ormond Street said, “They live till they’re about 24,”’ recalls Lesley. ‘From that moment on, I always had time in the back of my mind.’

Lesley would spend hours replacing James’s bandages, his raw wounds like burns. ‘He was in constant pain,’ she remembers. ‘For a mother to see that, her child with no skin…’

Sometimes James would blister internally too, his throat closing up so that he couldn’t drink. Lesley pre-chewed his food ‘like a mother bird’ to ensure it was soft enough to cause no damage. Even his eyes blistered, so that he couldn’t open them for days at a time. When he was two, he tried to get to his feet. Lesley reached to help with his first steps. But James tripped and Lesley was left holding the skin of his hand. After that, James used a wheelchair.

There were bright spots though. ‘From a very early age I saw he had a brilliant personality,’ says Lesley. ‘Even as a baby in pain he’d still be laughing and smiling. It was always just a pleasure to be around him.’ James went to an ordinary primary school. Far from being shunned, Lesley tells me, this bright, acidly funny little boy was embraced.

James’s fizzing character comes across strongly in self-recorded video diaries that he began keeping in December 2015, after he was diagnosed with cancer. With the camera focused on his slight, boyish face, brown hair wisping to a thin Tintin quiff, he stresses how lucky he feels. ‘They’re quite happy videos,’ he says at one point, about films documenting his surgery. ‘We tried to have as much fun as possible in the hospital.’

‘Until he was 10, I used to think there would be a cure,’ recalls Lesley. ‘Then when he was 15, I knew, no, it wasn’t going to be ready for James.’ The family never discussed death. But by his late teens James knew himself. And that knowledge was a spur. He started playing wheelchair football, then passed his driving test first time. He also took up photography, pursuing subjects with the directness of a man with little time to lose (they included Sophie, Countess of Wessex, the boxer David Haye and the actor Tom Holland).

In 2014, when he was 21, he began a long-distance online romance with a nurse from Texas called Mandy. She came over to stay for a few weeks in Liverpool. A year later, James, Lesley and James’s older sister Gemma returned the visit. Mandy is still in touch with the family.

It was a relationship enabled by modern technology. The internet gave James a place to learn, to meet people, to explore beyond the confines of his body. In the evenings, he was online for long hours. ‘Thank God the technology was there for him,’ says Lesley. ‘He was so clever, he had it all at his fingertips.’

In October 2015, two months before James discovered blotches that turned out to be his first skin cancer, a group of digital designers met at a conference at the British Museum. Among those attending was Pete Trainor, founder of an artificial intelligence (AI) company now known as Us Ai, which specialises in ‘intelligently artificial’ corporate ‘chatbots’. If you have been confronted by a pop-up box on your bank website in which a simulated employee asks if it can help, you know the kind of thing. It is a technology with hotly anticipated commercial applications. But rather than focus on money and machines, Trainor’s talk was all about AI making life better for humans.

The following November, James saw the video of Trainor’s lecture online. It had been a big year for him. Not only had he undergone gruelling treatment for his cancer, but his sister Gemma had told him that she was pregnant with a boy she was to call Tommy.

‘I know James really struggled with his mortality at that point,’ says Trainor, an earnest and enthusiastic 38-year-old who habitually wears a waistcoat. James was 23 by then. ‘He wanted his nephew to know Uncle James,’ Trainor recalls. ‘But he didn’t know how long he had left.’

The two men first met in February 2017 after James contacted Trainor on social media. ‘He was after a way of recording as much of himself as possible,’ says Trainor. The pair discussed creating a digital ‘time capsule’ of James’s thoughts and memories for Tommy. To capture them, Trainor installed several smart speakers – first Amazon Echos, then Google Homes – in James’s house.

Quickly the devices recorded huge quantities of audio. But instead of simply keeping these recordings for posterity, the two used them to create what in the AI world is known as a ‘corpus’ – a body of knowledge from which a machine can learn – and fed it into the algorithm that Trainor normally used to create chatbots for his banking clients. ‘At that point we hadn’t thought about the implications of what we were doing,’ says Trainor.

Then, on 12 July 2017, James and Trainor gave a talk at a tech event at the London College of Fashion. There, across the room, they spotted a 3ft-high robot, which its designers called Bo. For James, it was the moment when the project ‘went from collecting as many thoughts in his head for reasons of documentation, to seeing a robot that could be autonomous… that could house this stream of consciousness’, says Trainor. ‘I can’t remember if we ever explicitly sat down and said this could be a version of you for when you’re not here, [but] the question of consciousness was implicitly there.’

Artificial life after death was on James’s mind. After meeting Trainor he came across the story of the Russian billionaire Dmitry Itskov, founder of the 2045 Initiative, which seeks to create ‘cybernetic immortality’ by ‘downloading’ the consciousness of individual humans, which could then be housed in robots, or projected as holograms. James became fascinated by Itskov, seeking out YouTube videos about him, including a BBC documentary called The Immortalist.

He was not the only person to have stumbled upon the power of new computational methods to walk the line between life and death. In America, a programmer called Eugenia Kuyda had built a bot after her best friend, Roman Mazurenko, was killed after being hit by a car aged 32. She had a huge archive of his text messages, which she used to create an AI corpus. She could then text the bot just as she had texted him, and it would respond in its own words – and, uncannily, in his style.

Some of Mazurenko’s friends found it creepy. ‘It’s pretty weird when you open the messenger and there’s a bot of your deceased friend, who actually talks to you,’ said his friend Sergey Fayfer. Others, like Mazurenko’s mother Victoria, were thrilled. ‘They continued Roman’s life and saved ours,’ she is quoted as saying in an article on The Verge website. ‘It’s not virtual reality. This is a new reality, and we need to learn to build it and live in it.’

But some consequences of Mazurenko’s digital reincarnation were unforeseen. Those ‘talking’ to it often became confessional. The bot became a private space in which people could be honest. With a few tweaks, it has since become the basis of a free app called Replika.

On the news website Quartz, Kuyda says of Replika, ‘No one is allowed to be vulnerable any more. No one is actually saying what’s going on with themselves very openly.’ By interacting with users, Replika learns to become a version of them – for some, a natural confidant.

In 1950, Alan Turing, famous for his wartime codebreaking work at Bletchley Park, devised The Imitation Game. If an observer, reading the transcript of a conversation between human and machine, could not guess which was which, then the machine passed what has come to be known as The Turing Test. In 1966 it was first claimed that a machine, called Eliza, had passed the test. Posing as a psychotherapist, Eliza asked patients to describe their problems, then searched their answers for keywords to indicate what a meaningful response might be.

A similar process underlies bots like those created by Trainor and Kuyda. The difference is that increasing computational sophistication and power have blessed them with a vastly greater ability to process and respond to abstract concepts. So today’s bots learn. ‘Talking’ to the James bot, or the Replika that I created on my smartphone, could initially be clunky. But they improved. With Replika this is even part of the experience, in which users are ushered through ‘levels’. ‘It needs people engaging with it,’ says Trainor of the James bot. ‘The basis of this technology is that the more you use it, the better it gets.’

Developers are clear: such bots are not conscious in the way that humans are. They do not understand language. They simply use it in a way that makes it seem as though they do. Yet what is consciousness? As the eminent British brain surgeon Henry Marsh has noted, no one really knows.

‘Neuroscience tells us that it is highly improbable that we have souls, as everything we think and feel is no more or no less than the electrochemical chatter of our nerve cells,’ he writes in his memoir Do No Harm. ‘Our sense of self, our feelings and our thoughts, our love for others, our hopes and ambitions, our hates and fears all die when our brains die. Many people deeply resent this view of things, which seems to downgrade thought to mere electrochemistry and reduces us to mere automata, to machines. Such people are profoundly mistaken, since what it really does is upgrade matter into something infinitely mysterious that we do not understand.’

Those trying to solve that infinite mystery suggest that consciousness may be the fruit of interoperating brain processes. If that were true, however, could not machinery replicating those processes also replicate consciousness? A handful of researchers believe so. The question then is, if machinery can mimic the mystery of consciousness, who owns the results?

A Romanian entrepreneur, Marius Ursache, thinks we should all create digital avatars of ourselves that can live on after we die. Though the technology is similar, his company differs from Replika in that it is explicitly aimed at the life-after-death market. He calls it Eternime. ‘Eventually, we are all forgotten,’ its website announces. By ‘collecting your thoughts and stories’ it promises to create a digital replica of you online – an avatar – with which others can converse and so access your memories long after you are dead. Partly it sells itself as a legacy tool. But there is another aspect too: avatars don’t die. ‘Become virtually immortal,’ the website boasts. Ursache concedes that his business model raises ‘tons of things to think of ethically’. But while Eternime and Replika insist that personal data will never be shared, a host of concerns are already being voiced about the rights of the ‘online dead’ – a commercial field that is growing so fast it already has its own acronym: DAI (digital afterlife industry).

In a paper in Nature magazine, Luciano Floridi and Carl Ohman from the Oxford Internet Institute divided DAI products into four categories, from simple digital wills (which help pass on or destroy the contents of your online accounts once you die) to full-blown digital recreation services like Eternime, where your avatar could potentially be interacting with flesh and blood humans 1,000 years from now.

All such companies, the academics say, ‘share an interest in monetising death online, using digital remains as a means of making a profit’. The two men foresee a world in which avatars, which could feel as integral to individuals as internal organs, will actually be owned – and potentially commercialised – by a company. In this vision of the future, posthumous avatars populate a kind of YouTube for the dead, where the popular generate audience traffic and consequently income for the company that created them, while others languish unwatched. Instead of being ‘virtually immortal’, the academics fear, such lonely avatars would merely be deleted, a second death for those whose physical bodies have already ceased to exist.

‘Within only five years of a user’s death, the chatbot for which they signed up will likely have developed into something far more sophisticated and commercially calibrated,’ the two men write. For them it is those services that promise the most richly detailed digital recreation that ‘involve the greatest risk regarding privacy’. In consequence, Floridi and Ohman say, it is bots like Replika ‘where the most significant ethical concerns lie’.

In its privacy policy Luka, the company behind Replika, insists, ‘We are not in the business of selling your information. We consider this information to be a vital part of our relationship with you.’ But, as with almost all social media companies, signing up to Replika means granting Luka a ‘perpetual, irrevocable license to copy, display, upload, perform, distribute, store, modify and otherwise use your User Content’ – photos and the like – ‘in connection with the operation of the Service or the promotion, advertising or marketing thereof in any form, medium or technology now known or later developed.’

Floridi and Ohman are calling for laws to ensure ‘dignity for those who are remediated online’. As yet there are none. ‘It’s a free-for-all,’ says Floridi.

James Dunn was not interested in such legal niceties. He trusted Trainor, and time was short. So when he saw Bo he made a beeline for its creators, Andrei Danescu, Adrian Negoita and Oana Jinga. The three entrepreneurs had imagined Bo being used in public settings such as hotel lobbies, airport terminals, or trundling the corridors of NHS hospitals in the depths of night, silently checking on patients. But James opened their eyes to a new application of their technology.

‘James saw the robot and immediately he had all these ideas,’ says Danescu. ‘He was very visionary. And we were totally blown away because it goes into all these philosophical questions about putting someone’s personality and experience and their whole wealth of knowledge into a different body, or embodiment.’

The idea of implanting the James algorithmic bot into Bo gripped the robot’s makers. James had twin conceptions of what the result would do. In the first instance, while he was alive and relatively well, he told the robot’s creators, he envisaged it ‘taking some of the strain off his family’. It would be able to go downstairs to chat with them, or to the shops, on its own. But there was a second, unspoken understanding of Bo’s purpose.

‘He was saying, “I would like my nephew to be able to interact with the robot and then think, oh this is what James would have been like,”’ says Danescu. ‘He saw the robot as a vessel for what he was going to leave behind. His legacy. I think many people would be open to have that as an interesting way of living on.’

In the meantime, Trainor kept working on the algorithmic James bot. By September 2017, six months after he had started, it was working well enough for James and ‘James’ to engage in conversation. ‘We laughed and thought it was amusing,’ says Trainor. ‘He had a chat with himself. An inner monologue.’ Together, they planned to unveil Bo, with the James bot software inside, to an audience at a health-tech event that November – Bo communicating by voice and screen but in a generic male voice.

However, James noticed lumps on his hand and just before the event, on 8 November, he was told that his cancer had returned. ‘I’m numb with emotion,’ he confided to his video diary on the day of his diagnosis. ‘I’m not going to sleep. That’s what happens when I worry.’

In the new year he had his arm amputated. On 18 February he posted a heart-rending video from his sickbed. ‘To be honest, and I don’t know if this is going to come as a surprise to my friends and family, because I’m always so cheery and positive, but every time I think about death and dying and leaving everyone behind, and the afterlife – sorry I’m getting pretty deep on this video – I shit myself to be honest. I’m terrified.’ He died less than two months later, on 7 April last year. A couple of days beforehand, he texted Trainor: ‘Don’t worry. I’m gonna be all right. Thanks for everything.’

Trainor gave the eulogy at his funeral. ‘On one level, I suppose I knew him better than anyone,’ he says, reflecting upon the vast quantity of data about James that he had compiled. Before the funeral, friends slipped mementos into James’s open casket. Trainor added a hard drive containing James’s AI corpus. The real man and his virtual self were buried together.

For Trainor, and the creators of Replika, Eternime and myriad other companies on the cutting edge of the digital afterlife industry, this is only the beginning. How soon will it become routine to piece people back together from the data they leave behind? ‘The saddest thing for me is that James won’t be here to see the opportunity fulfilled of the thing that he started,’ says Trainor. But the James bot could go from strength to strength. Trainor recently discovered a whole archive of new material on James’s computer which could only refine the algorithm, should he choose to add it. ‘There’s more to come,’ he says.

Lesley Dunn is still trying to get over her son’s death. She has not watched the videos he left, let alone communicated with his bot. ‘It was an amazing life we had with him,’ she says. To her, the technology that could somehow resurrect her son is both ‘scary’ and ‘nice’. She feels the best result would be if, one day, James’s digital avatar could visit schools and speak in his zestful manner about coping with pain, about loving life, even in the most difficult circumstances. For the moment, however, she prefers to take solace in more traditional form. ‘James is buried and his grave is not too far away. So I see him every day.’

The James bot lay dormant after the funeral. But at the end of last year, back in London and prompted by my enquiries, Trainor rebooted it on his smartphone. He typed in, ‘I miss you.’

‘My mind is a form of physical energy. I believe that,’ came the response. Several more dialogue boxes popped up: ‘It will eventually transform to the point where I don’t need it to be in this stupid body any more. You probably don’t understand what I mean. But basically I think I’ll just be something else one day. If life ends at death, then life would not be fair. I think life is fair. Therefore, life does not end at death.’


Ad data could save your life

Here’s a piece I contributed to Dialogue magazine recently. Huge thanks to Kirsten Levermore for diligently making sense of my slightly dyslexic waffle and shaping it into what’s emerged.

When we woke up our computers we gave them superpowers. Now we have to decide how to use them, writes Pete Trainor

The world is different today than it was yesterday and tomorrow it will be different again. We’ve unleashed a genie from a bottle that will change humanity in the most unpredictable of ways. Which is ironic, because the genie we released has a talent for being able to make almost perfect predictions 99% of the time.

We have given machines the ability to see, to understand, and to interact with us in sophisticated and intricate ways. And this new intelligence is not going to stop or slow down, either. In fact, as the quantities of data we produce continue to grow exponentially, so will our computers’ ability to process, analyze and learn from that data. According to the most recent reports, the total amount of data produced around the world was 4.4 zettabytes in 2013, and is set to rise enormously to 44 zettabytes by 2020. To put that in perspective, 44 zettabytes is equivalent to 44 trillion gigabytes (about 22 trillion tiny USB sticks). Across the world, businesses collect our data for marketing, purchases and trend analysis. Banks collect our spending and portfolio data. Governments gather data from census information, incident reports, CCTV, medical records and more.

With this expanding universe of data, the mind of the machine will only continue to evolve. There’s no escaping the nexus now.

We are far past the dawn of machine learning

Running alongside this new sea of information collection is a subset of Artificial Intelligence called ‘Machine Learning’, autonomously perusing and, yes, learning from all that data. Machine learning algorithms don’t have to be explicitly programmed for every task – they adjust and improve their own models from the data they’re exposed to, largely by themselves.

The philosophical and ethical implications are huge on so many levels.

On the surface, many people believe businesses are only just starting to harness this new technological superpower to optimise themselves. In reality, however, many of them have been using algorithms to make things more efficient since the late 1960s. 

In 1967 the “nearest neighbour” algorithm was written, allowing computers to begin recognizing very basic patterns. The nearest neighbour algorithm was originally used to map routes for traveling salesmen, ensuring they visited all viable locations while keeping the overall trip short. It soon spread to many other industries.
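The mechanism is easy to show. Here’s a minimal sketch of the nearest-neighbour rule on made-up points; the data and labels are invented purely to illustrate how a new point takes the label of its closest known example:

```python
# Nearest-neighbour rule in a few lines: a new point gets the label of the
# closest labelled example. Points and labels are invented for illustration.
import math

labelled = [
    ((1.0, 1.0), "A"),
    ((1.2, 0.8), "A"),
    ((5.0, 5.2), "B"),
    ((4.8, 5.1), "B"),
]

def nearest_neighbour(point):
    return min(labelled, key=lambda item: math.dist(point, item[0]))[1]

print(nearest_neighbour((1.1, 0.9)))  # "A"
print(nearest_neighbour((4.9, 5.0)))  # "B"
```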

Then, in 1981, Gerald Dejong introduced the world to Explanation-Based Learning (EBL). With EBL, computers could now analyze a data set and derive a pattern from it all on their own, even discarding what they thought was ‘unimportant’ data. 

Machines were able to make their own decisions. A truly astonishing breakthrough and something we take for granted in many services and systems, like banking, still to this day.

The next massive leap forward came just a few years later, in the early 1990s, when Machine Learning shifted from a knowledge-driven approach to a data-driven approach, giving computers the ability to analyze large amounts of data and draw their own conclusions — in other words, to learn — from the results. 

The age of the everyday supercomputer had truly begun.

Mind-reading for the everyday supercomputer

The devil lies in the detail, and it’s always the devil we would rather avoid than converse with. There are things lurking inside the data we generate that many companies would rather avoid or not acknowledge – at least, not publicly. We are not kept in the dark because they’re all malicious or evil corporations, but more often because of the huge ethical and legal concerns attached to the data and processes that lie in the shadows.

Let’s say a social network you use every single day is sitting on top of a large set of data generated by tens of millions of people just like you.

The whole system has been designed right from the outset to get you hooked, extracting information such as your location, travel plans, likes and dislikes, and status updates (both passive and active). From there, the company can tease out the sentiment of posts, your browsing behaviors, and many other fetishes, habits and quirks. Some of these companies also have permission (which you grant them in those lengthy terms and conditions forms) to scrape data from other, seemingly unrelated apps and services on your phone, too.

One of the social networks you use every day even has a patent to “discreetly take control of the camera on your phone or laptop to analyse your emotions while you browse”.

Using all this information, a company can build highly sophisticated and extremely intricate, explicit models that predict your outcomes and reactions – including your emotional and even physical states. 

Most of these models use your ‘actual’ data to predict or extrapolate the value of an unseen, not-yet-recorded point – in short, they can predict whether you’re going to do something before you’ve even decided to do it.
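To make the mechanics concrete, here’s a toy sketch of that kind of model: it is trained on synthetic ‘recorded’ behaviour and then scores a new, unseen person. The feature names and data are invented for illustration; real systems use far richer signals than this.

```python
# Fit a simple model on recorded behaviour, then score an unseen person.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# columns: sessions per day, late-night usage (hrs), posts liked, posts written
X = rng.normal(size=(1000, 4))
# synthetic "will click the ad" outcome, loosely tied to the first two columns
y = (1.2 * X[:, 0] + 0.9 * X[:, 1] + rng.normal(size=1000)) > 0.5

model = LogisticRegression().fit(X, y)
new_user = [[0.8, 1.5, -0.2, 0.1]]
print(f"Predicted probability of clicking: {model.predict_proba(new_user)[0, 1]:.2f}")
```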

The machines are literally reading our minds using predictive and prescriptive analytics 

A consequence of giving our data away without much thought or due diligence is that we have never really understood its value and power. 

And, unfortunately for us, most of the companies ingesting our behavioural data only use their models to predict what advert might tempt us to click, or what wording for a headline might resonate because of some long forgotten and repressed memory. 

All companies bear some responsibility to care for their users’ data, but do they really care for the ‘humans’ generating that data?

That’s the big question. 

Usernames have faces, and those faces have journeys

We’ve spent an awfully long time mapping the user journey or plotting the customer journey when, in reality, every human is on a journey we know nothing about.

Yes, the technical, legal and social barriers are significant. But what about commercial data’s potential to improve people’s health and mental wellbeing? 

It’s started to hit home for me even harder over the last few years because I’ve started losing close friends to suicide. 

The biggest killer of men under 45 in the UK, and one of the leading causes of death in the US. 

It’s an epidemic. 

Which is why I needed to do something. 

“Don’t do things better, do better things” – Pete Trainor 

Companies can keep using our data to pad out shiny adverts or they can use that same data and re-tune the algorithms and models to do more — to do better things. 

The emerging discipline of computational psychiatry uses the same powerful data analysis, machine learning, and artificial intelligence techniques as commercial entities – but instead of working out how best to keep you on a site, or sell you a product, computational psychiatrists use data to explore the underlying factors behind the extreme and unusual conditions that make people vulnerable to self-harm and even suicide.

The SU project in action

The SU project: a not-for-profit chatbot that learned how to identify and support vulnerable individuals.

The SU project was a piece of artificial intelligence that attempted to detect when people were vulnerable and, in response, actively intervene with appropriate support messages. It worked like an instant messaging platform – SU even knew to talk with people at the times of day they were most at risk of feeling low.
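For a flavour of the shape of the idea (and only the shape: this is not the SU codebase, and the phrases, weights and thresholds below are invented), a vulnerability-aware reply loop can be as simple as scoring a message and the time it was sent, then choosing a supportive response above a threshold. A real system would, of course, route high-risk cases to trained humans.

```python
# Toy sketch only: score risk signals in a message, weight by time of day,
# and trigger a supportive reply above a threshold.
from datetime import datetime

RISK_PHRASES = {"can't cope": 3, "no point": 3, "exhausted": 1, "alone": 2}
HIGH_RISK_HOURS = range(0, 5)   # illustrative: the small hours

def risk_score(message: str, sent_at: datetime) -> int:
    score = sum(weight for phrase, weight in RISK_PHRASES.items()
                if phrase in message.lower())
    if sent_at.hour in HIGH_RISK_HOURS:
        score += 2
    return score

def respond(message: str, sent_at: datetime) -> str:
    if risk_score(message, sent_at) >= 4:
        return "That sounds really hard. I'm here - do you want to talk about it?"
    return "Thanks for checking in. How has your day been?"

print(respond("I feel exhausted and alone", datetime(2018, 6, 1, 2, 30)))
```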

We had no idea the data SU would end up learning from was the exact same data being mined by other tech companies we interact with every single day.

We didn’t invent anything ground-breaking at all, we just gave our algorithm a different purpose.

Ai needs agency. And often, it’s not asked to do better things, just to do things better – quicker, cheaper, more efficient. 

Companies, then, haven’t moved quite as far from 1967’s ‘nearest neighbour’ as we might like to believe.

The marketing problem

For many companies, the subject of suicide prevention is too contentious to provide a marketing benefit worth pursuing. They simply do not have the staff or psychotherapy expertise, internally, to handle this kind of content.

Where the boundaries get blurred and the waters murky is that, to save a single life, you would likely have to monitor us all.

Surveillance. 

The modern Panopticon. 

The idea of me monitoring your data makes you feel uneasy because it feels like a violation of your privacy. 

Advertising, however, not so much. We’re used to adverts. Being sold to has been normalized, being saved has not, which is a shame when so many companies benefit from keeping their customers alive.

Saving people is the job of counsellors, not corporates – or something like that. It is unlikely that data-mining-for-good projects like SU would ever be granted universal dispensation, since the nature and boundaries of what is ‘good’ remain elusively subjective.

But perhaps it is time for companies who already feed off our data to take up the baton? Practice a sense of enlightenment rather than entitlement? 

If the 21st-century public’s willingness to give away access to their activities and privacy through unread T&Cs and cookies is so effective it can fuel multi-billion dollar empires, surely those companies should act on the opportunity to nurture us as well as sell to us? A technological quid pro quo?

Is it possible? Yes. Would it be acceptable? Less clear.

– Pete Trainor is a co-founder of US. A best-selling author, with a background in design and computers, his mission is not just to do things better, but do better things. 


The philosophical complications of robot cowboys and frustrated gamers

This is a slightly left-of-the-middle one for me… A few months ago I was approached by Sky to join them at the Mindshare Huddle (which was a brilliant event BTW) in November to discuss the existential issues inspired by the Sky Atlantic show, Westworld. As a huge fan of the original 1973 film, written and directed by Michael Crichton, (and rather controversially, the 1976 sequel, Futureworld) it was a tough gig for me to refuse. If you’re not familiar with the film or show, it depicts a technologically advanced, complete re-creation of the American frontier of 1880. A western-themed amusement park populated by humanoid robots that malfunction and begin killing the human visitors. Basically it’s Jurassic Park but with gunslingers and prostitutes instead of Dinosaurs.

For the discussion to work, it was really important to move past the science fiction quickly, and get right to the parts of the show that are conceivably being played out, in real time, as I type this. Life is currently imitating art to a certain extent, and so we put forth the following synopsis for the audience:

“Join Jamie Morris, Channel Editor, Sky Atlantic & Head of Scheduling and Pete Trainor, Director of Human Focused Technology at US Ai, as they delve into the notion of Sky Atlantic show Westworld becoming a real thing. This ethical discussion will cover human interactivity in a world of Ai and whether humans would act as normal, or be enticed into a dark illegal world, just because they could. Pete will also describe just how close we are to a Westworld-style world—and you are going to be surprised.” 

We had a really fascinating and enjoyable conversation that was seeded and started with a single statement by one of the scientists in the original 1973 film;

“We aren’t dealing with ordinary machines here. These are highly complicated pieces of equipment almost as complicated as living organisms. In some cases, they’ve been designed by other computers. We don’t know exactly how they work.” 

What strikes me about that statement is how close to today’s reality the original script is becoming. Cast your mind back to last year, when AlphaGo trumped Lee Sedol in Game Two of Go. As the world looked on, the DeepMind machine made a move that no human ever would. Move 37 perfectly demonstrated the enormously powerful (and rather mysterious) future of modern artificial intelligence. AlphaGo’s surprise attack on the right-hand side of the 19-by-19 board flummoxed even the world’s best Go players, including Lee Sedol. Nobody really understood how it did it, not even its creators. Lee Sedol then lost Game Three, and AlphaGo claimed the million-dollar prize in the best-of-five series.

But let’s just get real for a moment, it was a mystery that we, the humans, programmed and created. We just didn’t really understand what we’d done.

Ticks not Clicks

I really enjoyed the facet of the show that presents us with an alternate view of the human condition through the technological mirror of life-like robots. Look past the boozing and sex, and Westworld is Psychology 101. It causes us to reflect that we are perhaps also just sophisticated machines, albeit of a biological kind. Episode 4 was even entitled “Dissonance Theory”, and the show goes on to explore the psychological hypothesis of bicameralism. The whole show taps into the rich tapestry of questions inspired by Mary Shelley’s novel Frankenstein too… The creature created by Frankenstein is psychologically conflicted between a need for human companionship and a deep, selfish hatred for those who have what he does not… but that’s a digression for another day. As people start to meddle in technology and opportunities they don’t fully understand, you really have to stop and question who the true antagonists are in a show like Westworld: the robots, the human players (and they are players in a game, by the way), or the scientists who create the rules of the game.

The audience chose the players, but I suggested to Jamie that the real antagonists of Westworld (and Ai today) aren’t the people who treat robots as objects, or the robots themselves, but the scientists who try to make the robots more human by design. The ones who trick us into bringing out the very primitive part of humanity.

“When you played cowboys and Indians as a kid, you’d point, go “bang, bang”, and the other kid would lie down and play dead. Well, Westworld is the same thing, only it’s for real!” 

Perhaps we can learn something from Westworld, where the ones treating robots like robots seem the most capable of separating reality from fantasy and human-life from technological wizardry. It’s the scientists imposing the human condition and consciousness on artificially intelligent beings who unleash suffering on both robot and humankind. In the show, while both Maeve and Dolores may have acted in a mix of prescribed and self-directed ways, their revolutions were firmly created by the humans in the lab. Ultimately, the robots don’t become semi-sentient—and violent—simply by experiencing love or loss or trauma or rage or pain, but by being programmed and guided that way.

It is inevitable, therefore, that as real-life artificial intelligence develops, we will see a lot of debate over whether treating humanoid machines like machines is somehow inhumane, either because it violates the rights of robots or because it produces moral hazards in the humans who participate in the activity.

As humans, we’re predisposed to behave in ways that play to our base instincts. Even if robots are just tools, some people will always see them as more than that, and it seems natural for people to respond to robots—even some of the more simple, non-human robots we have today—as though they have goals and intentions. I stand my ground that Ai will never be able to ‘feel’ or have ‘emotions’ or ‘empathy’, because those are very human traits. They’re biological and psychological, not mechanical. We can programme machines to interpret and mimic, but they will not feel. But if we do create those mimicry moments, who’s to blame if some people fall for the charade? As kids we had teddy bears. Phones, bots, Alexa, robots, droids etc. are just logical, grown-up extensions of those anthropomorphisms.


A human tendency towards irrational, often violent, behaviour

We covered the human tendency towards irrational, violent behaviour. Now this is controversial and divisive, but statistically, humans are six times more likely to kill each other than the average mammal. That’s no excuse for violence; humans are also moral animals and we cannot escape from that. But a lot of society has a base instinct towards living out primitive behaviour, especially in herds. I referenced taking Charlie, my 8-year-old son, to the football every few weeks and how he asks more questions about the behaviour of the fans (the mob) than about the football most weeks. I myself cannot get passionate enough about grown men kicking a ball around some grass to hurl disgusting abuse at a referee, but several thousand people in a crowd of 22,000 do. Week in and week out.

“The Seville Statement” is another very controversial touchpoint from the 1980s: research since has backed up some parts of the theory that people have a biologically innate tendency towards violence, in contradiction of both the statement and the views of many cultural anthropologists. A lot of this primitive, violent behaviour in parts of society still harks back to the behaviour of our primate cousins. Groups of male chimpanzees prey on smaller groups to increase their dominance over neighboring communities, improving their access to food and female mates etc. I believe that, given some legitimate reason to behave like apes, some people will seize the opportunity. So again, if the world moves in the direction of creating opportunities to behave like apes, we will see people’s behaviour pivot in that direction. Build it and programme it with options to abuse, and they will come. Mark my words.

Gaming play and frustration

There’s a new craze of VR parks starting to open up in Tokyo. The first truly immersive one opened last December as an experiment. Nearly 12 months later, it is attracting 9,000 visitors a month and turning people away at weekends, as crowds clamour to immerse themselves in extreme experiences, distant worlds and fantasy scenarios, using technology most people still can’t afford to have at home. Again, it shows how, as a species, we clamour for escapism, and the more immersive the better. So whilst on stage we acknowledged that Westworld as a park full of robots is not very realistic, as a hypothetical concept it’s already happening.

I touched a little on the link between computer games and violent behaviour and how this would also factor in. There’s actually no proof that violent video games create violent tendencies offline, by the way, but there are some interesting studies emerging that back up the theory that frustration at being unable to play a game is more likely to bring out aggressive behaviour than the content of the game itself. What’s interesting about Westworld in this context is that, as the machines evolve and change the rules almost constantly, they would encourage frustration and therefore violence—the violent themes would not necessarily inspire that behaviour; the intelligence of the ever-evolving scenario would. Chaos, basically. A lot of the Ai we’re building at the moment is literally designed to break the rules. To continuously evolve. To ‘machine learn’… so don’t be too surprised when frustration turns into the kind of behaviour that we don’t normally use in polite society.

People have a psychological need to come out on top when playing games. If we feel thwarted by the controls or the design of something, we can wind up feeling aggressive. 

In giving artificial intelligence the ability to improvise, we (humans) give it the power to create, to decide, and to act. If we program it to improvise without programming the right ethical framework, we risk losing control of it altogether and then we’re basically fulfilling our own prophecy.

The ethics of human > computer relationships

Finally, to end, we also covered some of the high-level areas of Ai ethics; things that the World Economic Forum has listed as areas for humanity to consider when developing intelligent machines. Consider this: even though our conversation was hypothetical, the 1973 film predicted a lot of where we’ve ended up. Today, we’re at number one. According to the futurists, by 2035 we might make it to number nine:

  1. Unemployment – What happens to jobs when robots / chatbots / automation replace us?
  2. Eroded Humanity – What happens to the self-esteem of people replaced by machines, do we increase the growing mental health crisis?
  3. Inequality and Distributed Wealth – Linked directly to 1 & 2… where does the wealth generated by machines go?
  4. Racist Robots – What happens when we feed toxic, historical data into the Ai and, by doing so, embed all our racism, bias etc. in it?
  5. Artificial Stupidity – What happens when the machines we create to automate processes go wrong? Everybody and every machine ‘learns by doing’ and making mistakes. They will. It’s how kids learn.
  6. Security Against Adversaries – What fail-safes do we need to put in place in order to ensure the machines can’t hit the big red button?
  7. Robot rights – When machines start to grow in intelligence and mimic their creators, do we need to give them rights, or do we acknowledge that they are no more in need of rights than toasters and other technical tools?
  8. Evil genies & unintended consequences – For every good in the world, there will be bad. That’s life. There will be bad examples—terrorism and cyber-war etc.
  9. The Singularity – How we refer to the moment that a machine overtakes humanity as the smartest thing on the planet and also has the ability to think and make judgements for itself: conscious (even if it is just mimicking consciousness!) software capable of looking at its creators and saying “you are my slaves, not vice versa”.

Summary

“Human” characters in the show routinely ask other individuals in Westworld whether or not they are “real.” One character replies to the question, “Does it really matter?”, which is already a reflection I make most days when I see us all interacting with each other virtually and behaving in such unpredictable ways.

“Mr. Lewis shot 6 robots scientifically programmed to look, act, talk and even bleed just like humans do. Isn’t that right? Well, they may have been robots. I mean, I think they were robots. I mean, I know they were robots!” 

As shows like Westworld get closer and closer to becoming reality, it’s going to become more imperative that we acknowledge the importance of the ethical questions. If we’re tricked into behaving in a way that plays to our base instincts… whose responsibility is it to govern and manage that?

Welcome to Westworld!

——————————————————

A massive thank you to Caroline Beadle at Sky Media for organising and inviting me to join Sky for the afternoon. Also for humouring me enough to let me take this talk to a far, far more philosophical space than it needed to be. We really did take things off in a whole bizarre, human-focused direction. Which is ironic when we got together to talk about robots.
