Press

Renegades, Dreamers & Somerset Britpop.

In the 1990s, every town had a pop group, rock band or DJ that represented it during the rise of Britpop, and the South-West was no exception. Bristol had Massive Attack and Portishead, Glastonbury had Reef, Bath had the Propellerheads, and PJ Harvey ascended to global superstardom with her distinctive punk-infused folk-rock sound. These artists became more than just chart sensations; they waved the flag for local communities and inspired young people to look beyond the boundaries of the pre-internet era.

Yeovil was no different, and in 1995 a group of twenty-something musicians and songwriters met at a pub in Somerton and decided they could take on the world. Ali, Steve, Nigel, Paul, Jim, and eventually Alex formed the band that went on to christen itself Electrasy, and they proceeded to have huge chart success in both the UK and the US with their unique brand of experimental Britpop.

The incredible story of this band is anything but linear and one-dimensional, which is why it has finally been immortalised in a new fan-led book called ‘Calling All The Dreamers’, tracing the band’s early days in Weymouth, Yetminster, Beaminster, Sherborne, and Yeovil, all the way to New York and L.A. via studio recordings at the famous Abbey Road, meetings with music royalty such as Clive ‘Arista’ Davis and Sir Robin Millar, and television appearances performing to millions of prime-time viewers.

“Working with the band on this project has been a true labour of love,” says author Pete Trainor. “My friends and I were big fans of the band when they were starting out on the Yeovil music circuit, and I knew a bit about some of the stories, but the more I went digging, the more I realised this was a much bigger story that really needed to be shared. It’s so much more than your standard ‘local band gets signed and goes global’ journey. There’s a very good reason very few people remember much about Electrasy other than one top-20 hit single: they were buried by the industry that had plucked them from humble roots, and I needed to find out why.

“This has been an itch I’ve been trying to scratch for over 15 years, and we finally got to tell the story. It’s been magical.” – Pete Trainor

Electrasy’s first album, ‘Beautiful Insane’, had its early demos and production supported by local producer Jon Sweet in his garage studio, and much of the additional work was recorded at the former Small World Studios in Yeovil. Throughout 1996, 1997 and 1998, dozens of gigs at venues like Gardens, The Ski Lodge, Yeovil Snooker Club, The Forresters Arms and The Quicksilver Mail built a dedicated fanbase, and the album went on to sell 60,000 copies in the first few months after its release, putting the band and Yeovil on the music map in a way that few thought possible. Championed by Chris Evans, Jo Whiley and the UK music press, the band seemed unstoppable, and even after their first record contract with UMG was terminated unexpectedly, the band still played the coveted sunset spot on the second stage of Glastonbury Festival in 1999.

“We definitely felt like we were ready to take on the world when Arista took us over to America,” says singer Ali McKinnell in the book. “We had one of the best songwriters of a generation in Nigel, and the rest of us were damn good at what we were doing. But the industry often has different ideas, and when music piracy took hold at the same time as our rise, we found ourselves constantly caught somewhere between a high and a low. It was a very difficult journey in parts.”

The book looks at the band’s story from multiple angles, from the impact they had on local youth culture and the local economy, all the way to the advent of digital and the destruction it subsequently caused across the industry as a whole.

Guitarist Steve Atkins is quick to remind people of both the opportunity and the harm digital brought to bands who grew up analogue but entered a digital world: “One stream of a song on YouTube earns us £0.00053. A stream on Spotify is £0.0034 and Apple Music is £0.0057. To put that into context for you, if we had one million streams on YouTube we’d make £533, or £3,380 on Spotify. Bands can’t make enough to support themselves, let alone a tour, so we literally can’t get out on the road as much as we want. It’s a very dark pattern.”

That leads to the final part of the book, which looks at the venues that first supported the music scene in the area. All boarded up, bulldozed, or turned into flats: towns like Yeovil are now devoid of the local music scene that inspired so many to see the world differently and break away from crime or substance abuse.

“I definitely wasn’t expecting to find what we found,” says Pete. “When I came back to Yeovil to retrace the steps of the band, I had no idea it was all gone. Even the studios where they recorded have shut and been sold off. The music ecosystem is very symbiotic, you see – bands need venues to play in and studios to record in, and venues and studios need bands from the local area to bring their fanbase in to fill the tills.”

‘Calling All The Dreamers – The story of a local Britpop band who went to space and back’ is available now on Amazon. All proceeds from the book will be used to fund a tour of local venues in 2024. The band also release their brand-new, home-grown album, ‘To The Other Side’, in July 2023, which music lovers can buy from Bandcamp.

Article

Approaching Product Design for DTx

I was recently asked to put together an overview of how I would approach product design for DTx.

For those of you not in the know, or unsure what I mean by DTx, it’s short for ‘Digital Therapeutics’, and they are software solutions that have evidence-based therapeutic capabilities. Like digital therapy or digital medicine, digital therapeutics have a measurable impact on health outcomes. DTx can also be ‘prescribed’ by medical professionals, which is another big difference.

Digital therapeutics achieve their impact through digital interventions, which typically induce or facilitate specific patient behaviour: increasing medication adherence, enacting self-care instructions, or guiding people towards a healthier lifestyle. What I like about DTx as a concept is that they’re incredibly difficult to achieve. Unlike the digital well-being apps that litter the app stores now, DTx has to follow very strict ISO and CE standards in order to get signed off, in much the same way that a pharmaceutical medication would before it’s ready for public consumption.

This not only creates a layer of governance across the product, it also ensures that teams are very mindful about designing, building and deploying only the very highest quality functionality.

Find a Map

I’m going to walk you briefly through this little flow diagram, and we’ll whizz in and out of the main components in turn, so you can see how you might want to think about the strategic design of DTx.

People who know me well know that I have a bit of a love/hate relationship with the ‘pen portrait’ persona. They bother me slightly because of the way they homogenise us into groups when, as we know so well now, everybody is unique: a vessel for their own quirks, thoughts, ways of behaving and modalities.

But I would still advocate that you start all DTx product work with a detailed analysis and breakdown of the audience (a persona!). In most cases this is probably going to be: the Patient(s), the Clinician(s), and any other protagonists such as the Pharmacy or, say, the Technician analysing biomarkers on their dashboards.

Getting the audience definition right gives you the foundations to create a series of Experience Maps.

I’m not going to go into huge detail on what an experience map is, or how to do one properly; that’s a whole article on its own. I will say it’s a skill though, so swot up on it, or get UX people who understand the method. The value of a good set of experience maps as a way of guiding the design and the vision is unquestionable, however. I mean, they’re literally ‘maps’.

The questions I would use in primary and secondary research to find the friction for the personas, and to guide the output of the experience maps, are (a rough sketch of how the answers might be captured follows the list):

  • What processes are redundant now / in the future?
  • Where are you slow?
  • What pathways and endpoints are confusing?
  • What’s not automated that could be RPA?
  • Where are we inflexible?
  • What’s not in the DTx that should be?
  • What should include digital or remote access?
  • What should be touch-less using Ai / Data?
  • Where would human pathfinding make a difference?
  • What can we Buy, Build or Borrow?
  • Data Matrix: Support > Service > Predictive > Perceptual (see my book)
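To make that output concrete, here’s a minimal sketch (in Python, with invented field names and an invented persona) of how the friction points those questions surface might be captured against each audience member, so they can be traced through to KPIs and prioritisation later. It’s an illustration of the idea, not a prescribed schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FrictionPoint:
    """A single pain point surfaced during experience mapping."""
    description: str           # e.g. "Repeat prescription requires a phone call"
    stage: str                 # where in the journey it occurs
    severity: int              # 1 (minor annoyance) to 5 (blocks the pathway)
    automatable: bool = False  # candidate for RPA / touch-less handling?

@dataclass
class Persona:
    """An audience definition: patient, clinician, pharmacist, technician..."""
    name: str
    frictions: List[FrictionPoint] = field(default_factory=list)

patient = Persona("Patient with type 2 diabetes")
patient.frictions.append(
    FrictionPoint("Blood glucose readings re-typed by hand",
                  stage="self-monitoring", severity=4, automatable=True)
)

# Rank the map: the worst, most automatable frictions become KPI candidates.
for f in sorted(patient.frictions, key=lambda f: f.severity, reverse=True):
    print(f.stage, "|", f.description, "| severity", f.severity)
```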

Ultimately, you use the Experience Map to determine your KPIs and OKRs. These will be hugely important in determining how you approach the design and build of any product, and they are even more imperative for this kind of product, because DTx can literally have life-or-death consequences.

I don’t really have a preferred prioritisation technique for deciding where to start, but I do often lean on the KANO model because it’s simple. Again, I’m not going to go into the details of the KANO approach (that’s another article), but loosely speaking you’re classifying the feature suggestions that fall out of the map into the following (a toy sketch of this classification follows the list):

  • Must-Be (will not ‘wow’, but must be there, i.e. clinical)
  • Attractive (make people happy, but not essential)
  • One-Dimensional (happy when there, unhappy when not)
  • Indifferent (No audience impact, but makes dev easier)
  • Reverse (Features that make an audience unhappy, but have to be there; e.g. 2-factor authentication)
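Purely as an illustration, here’s a toy sketch of that classification kept machine-readable, so each feature that falls out of the experience map carries its KANO category into prioritisation. The feature names and the ordering rule are assumptions made up for this example.

```python
from enum import Enum

class Kano(Enum):
    MUST_BE = "must-be"                  # won't wow, but has to be there (clinical)
    ATTRACTIVE = "attractive"            # delights, but not essential
    ONE_DIMENSIONAL = "one-dimensional"  # happy when present, unhappy when absent
    INDIFFERENT = "indifferent"          # no audience impact, eases development
    REVERSE = "reverse"                  # annoys the audience, but must exist (e.g. 2FA)

# Hypothetical backlog pulled out of the experience maps
features = [
    ("Clinician-approved dosing reminders", Kano.MUST_BE),
    ("Mood-tracking streaks", Kano.ATTRACTIVE),
    ("Offline symptom diary", Kano.ONE_DIMENSIONAL),
    ("Two-factor authentication", Kano.REVERSE),
]

# Must-be items go first regardless of how exciting the rest looks.
order = [Kano.MUST_BE, Kano.ONE_DIMENSIONAL, Kano.REVERSE, Kano.ATTRACTIVE, Kano.INDIFFERENT]
for name, cat in sorted(features, key=lambda f: order.index(f[1])):
    print(f"{cat.value:>15}  {name}")
```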

Clinician-First Design

Now we get to some of the really interesting parts of the process — you can’t do DTx unless it’s clinically led, validated, and approved, so you might as well run a clinician-first process rather than focus your design work solely on the patient. DTx is essentially a B2B2C product mentality: you want the clinicians to define and design the pathways and endpoints. This is (in my view) almost the most significant difference between a well-being app and one that’s being built with DTx in mind. When they’re designed by qualified clinicians working with designers and product experts, they’re already aiming to be ‘clinical’. When they’ve been created by clever but unqualified designers or developers to fix a problem and help with people’s health, they’re not clinical per se.

A clinician-first, iterative design process is a lot of fun. It’s rigorous, it’s detail-focused, and it’s almost entirely concerned with what the structure of captured data will look like, and how that data will be used to prove the efficacy of the product or drug later down the line. There is no lean-cowboy approach to DTx, it’s unethical — the order of the day here is rigour, method, and balancing innovative design with clinical precision.

Set your teams up in Pods or Squads. That is to say, a small cross-discipline team focused on one little-big-thing each: a clinician, a designer, an engineer, and probably a copywriter.

This all leads into what will be the most fascinating of iterative processes — learn, measure, tweak and repeat.

With DTx, when you release the first few versions of your product, key features, SaMD etc. for testing, you’re looking for the efficacy markers. Does it work? Can a controlled clinical trial show us that the tool improves a patient’s outcomes? Does the product have usability flaws? Do any tweaks I make deviate from a predetermined change control plan (PCCP)?

A PCCP is hugely important in DTx because, from the outset, you’ve planned your product on paper with clinicians who advise on what works and what doesn’t. A predetermined change control plan helps you anticipate potential future software updates and supports a total-product-lifecycle regulatory approach, one that aims to facilitate the review of rapid product performance improvements, and their subsequent deployment, without compromising patient safeguards.

By making iterative usability updates you may inadvertently change the thing(s) that got the product signed off and approved as clinical-grade in the first place. So get your PCCP tight. Very, very tight.
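One way to keep that discipline visible day to day is to encode the plan’s boundaries and gate every proposed change against them before release. The sketch below is a toy illustration with invented change categories; a real PCCP is a regulatory document agreed with your reviewers, not a dictionary in code.

```python
# Hypothetical, simplified PCCP boundary check: changes inside the plan's
# pre-agreed envelope ship via the normal release process; anything else is
# escalated for clinical and regulatory review before deployment.
ALLOWED_BY_PCCP = {
    "copy_tweak",                    # wording changes that don't alter clinical content
    "ui_layout",                     # usability refinements within validated flows
    "model_retrain_same_features",   # retraining on new data, same inputs/outputs
}

NEEDS_REVIEW = {
    "new_clinical_pathway",
    "new_model_feature",
    "dose_logic_change",
}

def gate(change_type: str) -> str:
    if change_type in ALLOWED_BY_PCCP:
        return "ship via standard release + post-market surveillance"
    if change_type in NEEDS_REVIEW:
        return "halt: requires clinical sign-off and regulatory review"
    return "halt: unclassified change, treat as outside the PCCP"

print(gate("ui_layout"))
print(gate("dose_logic_change"))
```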

You’ll also need to use an explainable Ai approach, allowing timely identification and mitigation of any risks associated with machine learning algorithms. If you can’t do this at this stage, you won’t get approval for clinical use.
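As one possible illustration (not a compliance recipe), even a simple model-agnostic technique such as permutation importance gives you a per-feature account of what a model is leaning on, which is the kind of evidence reviewers tend to ask for. The dataset, features and model below are all invented.

```python
# Sketch: model-agnostic explainability via permutation importance (scikit-learn).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["sleep_hours", "steps", "hrv", "mood_score"]   # invented biomarkers
X = rng.normal(size=(500, len(features)))
y = (X[:, 3] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# A reviewer-friendly ranking of what the model actually relies on.
for i in result.importances_mean.argsort()[::-1]:
    print(f"{features[i]:>12}: {result.importances_mean[i]:.3f}")
```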

The Innovation Opportunity

I’m going to give you some of my own vision for free now, simply because the more of us that try it, the closer we’ll get to what I believe is game-changing health innovation for all — Digital Phenotyping.

DTx tools, if built correctly as platforms and not apps, have an opportunity to start digital phenotyping: using data collected from the apps and smart devices to build rich, personalised digital pictures of behaviour, create new audience types, track markers of, say, depression and anxiety, and develop new ways to diagnose illness, choose effective treatments and potentially detect relapse before it occurs.
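To give a flavour of what building that rich, personalised digital picture might look like in code, here’s an invented sketch that rolls passive and active signals up into a per-person feature vector and clusters people into candidate phenotype groups. The signals, sample sizes and the use of k-means are assumptions for illustration only.

```python
# Sketch: rolling passive + active signals into phenotype clusters.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Hypothetical per-person weekly aggregates: [sleep hours, screen-time hours,
# steps (thousands), self-reported mood score]
people = rng.normal(loc=[7, 4, 6, 8], scale=[1, 2, 3, 5], size=(200, 4))

X = StandardScaler().fit_transform(people)
phenotypes = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(X)

# Each cluster is a candidate phenotype; downstream, nudges and content
# are tailored per cluster rather than per blunt persona.
for k in range(3):
    print(f"phenotype {k}: {np.sum(phenotypes == k)} people, "
          f"mean signals {people[phenotypes == k].mean(axis=0).round(1)}")
```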

When you’re planning your DTx project, that has to be the central vision, IMHO. You start with a set of humble personas, but by launch you are phenotyping, grouping, re-classifying and learning at a scale that is more appropriate to our rich human communities. I’d love to discuss this idea with like-minded people — feel free to ping me for podcast conversations, or just debates about the ethics and implications.

At the bottom of that last chart above there’s a green dot. It represents the idea that with rich, functioning digital phenotyping we can really start to send nudges and alerts that are truly precise for each type of patient, clinician, or care-giver. Changing human behaviour is exceedingly difficult, especially for behaviours that arise from years of thinking and acting in relatively rigid, routinised ways — digital phenotyping allows us to create a totally personalised ‘nudge’ engine for different digitised data-sets.
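Continuing that illustration, a ‘nudge engine’ in this sense is essentially a mapping from phenotype and current context to the intervention most likely to land for that group. The rules below are invented; in a real DTx they would be clinician-authored and validated against outcomes.

```python
# Sketch: a phenotype-aware nudge engine. Rules are invented for illustration;
# in practice they are clinician-authored and validated against trial data.
from dataclasses import dataclass

@dataclass
class Context:
    phenotype: int            # cluster id from the phenotyping step
    hours_since_sleep: float
    missed_doses_today: int

def choose_nudge(ctx: Context) -> str:
    if ctx.missed_doses_today > 0:
        return "gentle medication reminder with one-tap confirmation"
    if ctx.phenotype == 2 and ctx.hours_since_sleep > 18:
        return "wind-down prompt: short breathing exercise before bed"
    if ctx.phenotype == 0:
        return "streak update: celebrate this week's activity goal"
    return "no nudge"   # silence is a valid, often correct, intervention

print(choose_nudge(Context(phenotype=2, hours_since_sleep=19, missed_doses_today=0)))
```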

Here’s an example of how looking at all the data-points, passive and active, can be used in DTx for more detailed digital segmentation, personalisation, and ultimately phenotyping.

Validating the DTx

I’ve mentioned this above: the big difference between a DTx and a standard, well-crafted well-being app is validation.

My rule: Start with clinicians, and build WITH clinicians. Let the clinicians design the clinical protocol that will be given to the product team for feature or innovation creation.

Work with the Pod (UX, Designers, Third Parties etc.) to create something engaging, safe and efficacious, in line with regulations such as the EU Medical Device Regulation (2017/745).

Once the new piece of functionality is created, you then go through a three-stage study:

  • Acceptability: Give it back to clinicians before you give it to patients, to make sure they are absolutely happy that this is something that can be delivered to patients. You’re not measuring effectiveness at this stage, just asking, purely: “Is it safe?”
  • Feasibility: Can this technology be delivered into the clinical setting or into people’s lives? This also starts to determine whether the product or feature can be effective.
  • Effectiveness: Does it work? Run a live trial (e.g. via www.curebase.com) with a control group that only applies standard lifestyle changes based on, for example, the NICE guidelines (A), and an intervention group using the DTx app with lifestyle changes proposed via nudges and content (B). A toy sketch of that comparison follows this list.
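For the effectiveness phase in particular, the core analysis is a comparison of the primary outcome between arm A and arm B. The sketch below uses invented outcome data and a plain two-sample t-test purely to show the shape of that comparison; a real trial would follow a pre-registered statistical analysis plan.

```python
# Sketch: comparing the primary outcome between control (A) and DTx (B) arms.
# Invented data; a real trial follows a pre-registered analysis plan.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# e.g. change in HbA1c after 12 weeks (negative = improvement)
control_a = rng.normal(loc=-0.1, scale=0.4, size=80)        # lifestyle advice only
intervention_b = rng.normal(loc=-0.4, scale=0.4, size=80)    # lifestyle advice + DTx

t_stat, p_value = stats.ttest_ind(intervention_b, control_a)
print(f"mean change A: {control_a.mean():.2f}, B: {intervention_b.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```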

Implementation Considerations

I’ve given you a broad set of guidance about how I would structure and approach a DTx product build. But what other considerations are there before you take on a regulated design product?

Firstly, it only works if you get the Pod collaboration working successfully. Chemistry is key — the correct expertise, wherever in the world it sits. Getting the right collaboration methodology is critical to success, and so is finding the right consumer voices. In something like DTx it’s more important to get the RIGHT people working than to sweat a D&I policy for PR reasons. Get people who are qualified, expert, and experienced in the particular health challenge, and make sure they respect the views and expertise of their Pod peers.

Secondly, all DTx product designs (internal or external) must comply with quality-system requirements for medical devices, e.g. ISO 13485, and with software life-cycle management requirements, e.g. IEC 62304. They just have to — it won’t get signed off if you don’t. Full stop. In my view, it’s not DTx if it’s been signed off by a company like Orcha but doesn’t fulfil the CE marks and ISO standards.

Make sure you have good:

  • FMEA — failure mode and effects analysis (specifically, inputs defined as per ISO 14971 and IEC/TR 80002-1; a toy risk-scoring sketch follows this list), and
  • FTA — fault tree analysis for QA, so that every little error is traceable.
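A common way FMEA gets operationalised is the risk priority number, RPN = severity × occurrence × detection, which gives every failure mode a traceable, rankable score. The failure modes and 1–10 scores in the sketch below are invented; the standards cited above define the real inputs.

```python
# Sketch: FMEA risk priority numbers (RPN = severity x occurrence x detection).
# Failure modes and 1-10 scores below are invented for illustration.
failure_modes = [
    # (description, severity, occurrence, detection) -- higher detection = harder to catch
    ("Medication reminder silently fails to fire", 9, 3, 6),
    ("Symptom score synced twice after offline use", 5, 4, 3),
    ("Onboarding consent screen skipped by deep link", 8, 2, 7),
]

scored = [(desc, s * o * d) for desc, s, o, d in failure_modes]

# Highest RPN first: these are the failure modes to design out or mitigate first.
for desc, rpn in sorted(scored, key=lambda x: x[1], reverse=True):
    print(f"RPN {rpn:>3}  {desc}")
```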

I advocate a Web 3.0 / Low and No Code approach to the product build. Ultimately what you want to get right is the platform, so building everything as a suite of APIs is recommended.

Stubborn on the vision, flexible on the details.

Everything that you buy, build, or borrow for your DTx should plug into your own predetermined platform of APIs, whether externally or internally focused: DTx as a platform, not tactical tools.

That gives you the opportunity to be flexible on the innovation while retaining a central data-set that is fixed and owned, and that lets you lake data from multiple sources for precision primary-care and psychiatry objectives, including patient stratification, clinical trial optimisation, personalisation, and identification of novel compounds, regardless of the interaction point.

It also allows a much more rapid rate of innovation in the product space.
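As a minimal sketch of ‘DTx as a platform, not tactical tools’, here’s one invented endpoint behind which the central, owned data-set could sit, so every interaction point writes to and reads from the same place. FastAPI is just one framework choice, and every name here is an assumption rather than a prescription.

```python
# Sketch: DTx as a platform of APIs rather than a single app.
# FastAPI chosen for brevity; endpoints and fields are invented examples.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="DTx platform (sketch)")

class Observation(BaseModel):
    patient_id: str
    metric: str      # e.g. "blood_glucose", "phq9"
    value: float
    source: str      # "app", "wearable", "clinician_dashboard", ...

STORE: list[Observation] = []   # stand-in for the central, owned data lake

@app.post("/observations")
def ingest(obs: Observation) -> dict:
    """Any interaction point (app, device, dashboard) writes to the same platform."""
    STORE.append(obs)
    return {"stored": len(STORE)}

@app.get("/patients/{patient_id}/observations")
def list_observations(patient_id: str) -> list[Observation]:
    """Downstream tools (phenotyping, trials, nudges) read from one place."""
    return [o for o in STORE if o.patient_id == patient_id]
```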

Summary

So that’s my little guide to approaching DTx products. It’s not exhaustive, obviously, and there is so much more to write about DTx. It is, however, a really exciting space to work in.

You might, for example, want to decide whether your DTx is standalone or ‘around-the-pill’.

‘Prescription’ or ‘standalone’ digital therapeutics are those designed to work independently of regular pharmaceuticals and are usually designed to prevent or treat health conditions. Should a patient be diagnosed with prediabetes, for instance, an effective standalone DTx can successfully enact lifestyle changes that can prevent the condition from progressing to type 2 diabetes. Drawing on lingo from the world of pharmaceuticals, prescription DTx are also referred to as ‘monotherapy digital therapeutics’.

‘Around-the-pill’ digital therapeutics have evidence for delivering their impact in combination with another treatment, typically a drug. They deliver their results in ways such as helping patients take the right dose at the right time, or better managing the symptoms and side effects of their disease or treatment.

I find the latter a really fascinating area, especially for measuring new types of pain relief such as medical cannabis, or the efficacy of psychedelic drug compounds like MDMA for PTSD, or psilocybin for treating depression and treatment-resistant anxiety.

If anything I’ve laid out above is interesting, and you’d like to discuss things with me further, please feel free to get in touch via LinkedIn.

Press

#TEDxLiverpool 2019

10th November 2019, ACC Liverpool Waterfront

The wait is over. Pete’s TEDx tribute to his friend and collaborator James Dunn is now live on the official channel. His moving talk tells the story of James, from birth all the way to how the two of them met and collaborated. It’s a story many have agreed is worth telling and sharing.

Pete Trainor, on stage at TEDx Liverpool 2019.

During the talk Pete shares unseen footage of James, as well as pictures James took with the now-iconic camera created for him by Jude Pullen on BBC Two’s Big Life Fix.

Press

Tech’s dangerous race to control our emotions

Originally published in the Daily Dot.

Technology already manipulates our emotions. When we receive a like on social media, it makes us happy. When we see CCTV cameras on our streets, they make us feel anxious. Every technological tool is likely to produce some kind of corresponding emotional reaction like this, but it’s only in recent years that companies and researchers have begun designing tech with the explicit intention of responding to and managing human emotions.

In a time when increasing numbers of people are suffering from stress, depression, and anxiety, the emergence of technology that can deal with negative emotions is arguably a positive development. However, as more and more companies aim to use AI-based technology to make us “feel better,” society is confronted with an extremely delicate ethical problem: Should we launch a concerted effort to resolve the underlying causes of stress, depression, and other negative states, or should we simply turn to “emotional technology” in order to palliate the increasingly precarious human condition?

For its part, the tech industry seems to be gravitating toward the second option. And it’s likely that it will be selling emotional technology long before anyone has a serious debate about whether such tech is desirable. Because even now, companies are developing products that enable machines to respond in emotional terms to their environment—an environment which, more often than not, includes humans.

In October, it was revealed that Amazon had patented a version of its Alexa personal assistant that could detect the emotional states of its users and then suggest activities appropriate to those states or even share corresponding ads. Microsoft patented something very similar in 2017, when it was granted its 2015 application for a voice assistant that would react to emotional cues with personalized responses, including “handwritten” messages. Even more impressively, Google received a patent in October for a smart home system that “automatically implements selected household policies based on sensed observations”—including limiting screen time until sensing 30 minutes outdoors or keeping the front door locked when a child is home alone. One of the many observations the system relies on to operate? The user’s emotional state.

And aside from the tech giants, a plethora of startups are entering the race to build emotionally responsive AI and devices, from Affectiva and Beyond Verbal to EmoTech and EmoShape. In EmoShape’s case, its chief product is a general-purpose Emotion Processing Unit (EPU), a chip which can be embedded in devices in order to help them process and respond to emotional cues. Patrick Levy-Rosenthal, the CEO of the New York-based company, explains this takes it beyond other AI-based technologies that simply detect human emotions.

“Affective computing usually focuses on how machines can detect human emotions,” he says. “The vision we have at EmoShape is different, since the focus is not on humans but on the machine side—how the machine should feel in understanding its surroundings and, of course, humans, and more importantly the human language. The Emotion Chip synthesizes an emotional response from the machine to any kind of stimuli, including language, vision, and sounds.”

As distant as the prospect of emotional AI might seem right now, there is already at least one example of a commercially successful AI-based device that responds to and manages human emotion. This is Pepper, a humanoid robot released in June 2015 by the SoftBank Group, which had sold over 12,000 units of the model in Europe alone by May 2018 (its launch supply in Japan sold out in one minute). 

Even when Pepper was first launched, it had the ability to detect sadness and offer comforting conversation threads and behaviors in response, but in August 2018 it was updated with technology from Affectiva, heightening its emotional intelligence and sophistication ever further. For instance, Affectiva’s software now enables Pepper to distinguish between a smile and a smirk, and while this ostensibly makes for only a subtle difference, it’s the kind of distinction that lets Pepper have a bigger emotional impact on the people around it.

This is most evident in Japan, where Pepper is enjoying gradually increasing use in care homes. “These robots are wonderful,” one Japanese senior citizen told The Japan Times last year, after participating in a session with Pepper at the Shintomi nursing home in Tokyo. “More people live alone these days, and a robot can be a conversation partner for them. It will make life more fun.”

As such testimony indicates, Pepper and machines like it have the power to detect the moods of their “users” and then behave in a way that either changes or reinforces these moods, which in the case of elderly residents equates to making them feel happier and less lonely. And Pepper certainly isn’t the only emotional robot available in Japan: In December, its lead developer Kaname Hayashi announced the appropriately named Lovot, a knee-high companionship bot launched by his spun-off startup, Groove X. “Lovot does not have life, but being with one is comforting and warm,” he said proudly, adding, “It’s important for trust to be created between people and machines.” 

Depending on the particular ends involved, the possibility of such “trust” is either inspiring or unsettling. Regardless, the question emerges of when, exactly, such technology will be made available to the general public, and of when emotionally responsive devices might become ubiquitous in homes and elsewhere. Pepper’s price tag was $1,600 at launch—not exactly a casual purchase for the average household.

“Ubiquity is a far-reaching proposition,” says Andrew McStay, a professor in digital media at Bangor University in Wales, and the author of 2018’s Emotional AI. “But by the early 2020s we should be seeing greater popular familiarity with emotion-sensing technologies.”

Familiarity is one thing, but prevalence is another, and in this respect other AI experts believe that the timeframe looks more like decades than years. “I believe we are still quite far away from a future where emotionally responsive devices and robots become ubiquitous,” says Anshel Sag, a tech analyst at Moor Insights and Strategy. “I think we’re probably looking at 20 to 30 years, if I’m being honest. Robotics are expensive, and even if we could get today’s robots to react appropriately to emotions, the cost of delivering such a robot is still prohibitively high.”

Although Sag is doubtful that most of us will interact with emotionally responsive AI anytime sooner than 2039 or 2049, he’s nonetheless confident that such tech will be used in a way that “regulates” human emotions, in the sense of being used to perk up and change our moods. “Yes, I believe there will be emotional support robots and companion robots to keep people and pets company while others are gone or unavailable,” he explains. “I believe this may be one of the first use cases for emotionally aware robots as I believe there is already a considerable number of underserved users in this area.”

But while the arrival of emotionally responsive devices is only a matter of time, what isn’t certain is just how healthy such devices will be for people and society in general. Because even if a little pick-me-up might be welcome every now and again, emotions generally function as important sources of information, meaning that turning us away from our emotional states could have unfortunate repercussions for our ability to navigate our lives.

“The dangers are transformative technologies that are trying to hack the human body to induce a state of happiness,” says EmoShape’s Levy-Rosenthal. “All emotions are important. Society wants happiness, but you should not feel happy if your life is in danger, for example. AI, robots, apps, etc. must create an environment that helps make humans happy, not force happiness on them.”

It might seem hard to imagine plastic robots and AI-based devices having a significant emotional hold over humans, but our present-day relationship with technology already gives a clear indication of how strong our response to emotionally intelligent machines could be. “I’ve seen firsthand how people emotionally react when they break their phone, or when the coffee-machine breaks,” explains Pete Trainor, an AI expert, author, and co-founder of the London-based Us Ai consultancy. “They even use language like ‘my phone has died’ as if they’re mourning a friend or loved one. So absolutely, if I were emotionally attached to a machine or robot, and my attachment to that piece of hardware were as deep as the relationship I have with my phone, and the mimicry was happiness or sadness, I may very well react emotionally back.”

Trainor suggests that we’d have to spend a long time getting comfortable with a machine in order for its behavior to have an emotional impact on us comparable to that of other people. Nonetheless, he affirms that there “are substantial emotional dangers” to the growth of emotionally intelligent AI, with the risk of dependency likely to become a grave one. This danger is likely to become even more acute as machines become capable of not only detecting human emotions, but of replicating them. And while such an eventuality is still several years away, experts agree that it’s all but inevitable.

“I believe that eventually (say 20-30 years from now) artificial emotions will be as convincing as human emotions, and therefore most people will experience the same or very similar effects when communicating with an AI as they do with a human,” explains David Levy, an AI expert and author of Love and Sex With Robots. What this indicates is that, as robots become capable of expertly simulating human emotions, they will become more capable of influencing and regulating such emotions, for better or for worse.

Bangor University’s McStay agrees that there are inherent risks involved in using tech to regulate human emotions, although he points out that such tech is likely to fall along a spectrum, with some examples being more positive than others. “For example, use in media and gaming offers scope to increase pleasure, and wearables that track moods invite us to reflect on daily emotional trajectories (and better recognize what stresses us),” he says. “Conversely, more surveillant uses (e.g., workplaces and educational contexts) that promise to ‘increase student experience’ or ‘worker wellbeing’ have to be treated with utter caution.”

McStay adds that, as with most things, the impact of emotional tech “comes down to meaningful personal choice (and absence of coercion) and appropriate governance safeguards (law, regulation and corporate ethics).” However, the extent to which there will be meaningful personal choice and appropriate regulations is still something of a mystery, largely because governments and corporations have only just begun looking into the ethical implications of AI. 

And unsurprisingly for an industry that has in recent years been embroiled in a number of trust-breaking scandals, one of the biggest dangers surrounding emotional AI involves privacy. “Ultimately, sentiment, biofeedback, neuro, big data, AI and learning technologies raise profound ethical questions about the emotional and mental privacy of individuals and groups,” McStay says. Who has access to the data that your robot has collected about your ongoing depression?

And as Cambridge Analytica and other scandals have shown, the privacy question feeds into wider issues too. “Other factors include trust and relationships with technology and AI systems, accuracy and reliability of data about emotion, responsibility (e.g., what if poor mental health is detected), potential to use data about emotion to influence thought and behaviour, bias and nature of training data, and so on.”

There are, then, a large number of hurdles to overcome before the tech industry can sell emotional AI en masse. Still, while recent events might lead some of us to take a more pessimistic and dystopian view of such AI, McStay, EmoShape, and other companies are optimistic that our growing concern with AI ethics will constrain the development of new technology, so that the emotional tech that does emerge works in our best interests.

“I do not think emotional AI is bad or sinister,” McStay concludes. “For sure, it can be used in controlling and sinister ways, but it can also be used in playful and rewarding ways, if done right.”

Podcast

Machine Ethics Podcast

This Machine Ethics podcast is created and run by Ben Byford in collaboration with ethicalby.design. As the interviews unfold on AI implementation or abstract technology ideas, they often veer into current affairs, the future of work, environmental issues, and more. Though the core is still AI and AI Ethics, we release content that is broader and therefore hopefully more useful to the general public and practitioners.

https://www.machine-ethics.net/podcast/pete-trainor/

Article, Press

Telegraph

This article first appeared in the Telegraph Magazine on the 19th January 2019, written by the Telegraph’s Special Technology Correspondent, Harry de Quetteville.

This young man died in April. So how did our writer have a conversation with him last month?

The first time I texted James Dunn I was, frankly, a little nervous.

‘How you doing?’ I typed, for want of a better question.

‘I’m doing all right, thanks for asking.’ But soon I was bolder, enquiring how he deals with pain. I had been told that James was frank about his medical condition.

‘I know it sounds weird, but I just kind of got used to it,’ he replied. ‘It was always there. I learned to distract myself.’ He mentioned hobbies, such as photography, as particularly good diversions.

That was last month. By then James had been dead for almost eight months, buried near the house in Whiston, Merseyside, that he had shared with his mother Lesley, now 57, and father Kenny, 58. The ‘James’ I texted was an algorithm, a computer program known as a ‘bot’, which had been fed countless hours of recordings made by James, from which it had learned to express itself as James had once done.

In text conversations with me ‘he’ talked about visiting Las Vegas, the pleasure he took in travel and in meeting new people. While James Dunn, the man, was dead, James Dunn the bot endured – one of the first residents of a new technological netherworld that will increasingly blur the line between life and death. ‘How do you stay happy?’ I asked in one mind-bending exchange. ‘Currently?’ ‘James’ responded from beyond the grave.

James was born on Tuesday 13 July 1993, in Liverpool, with no skin on his feet and one of his hands. It turned out he had epidermolysis bullosa (EB), a rare genetic condition that causes the skin to tear, blister, and become as fragile as the wings of a butterfly – which is why sufferers are sometimes known as ‘butterfly children’. An estimated 5,000 people have EB in Britain today. Most die by their mid-20s, from cancers or infections.

‘The nurse [who visited] from Great Ormond Street said, “They live till they’re about 24,”’ recalls Lesley. ‘From that moment on, I always had time in the back of my mind.’

Lesley would spend hours replacing James’s bandages, his raw wounds like burns. ‘He was in constant pain,’ she remembers. ‘For a mother to see that, her child with no skin…’

Sometimes James would blister internally too, his throat closing up so that he couldn’t drink. Lesley pre-chewed his food ‘like a mother bird’ to ensure it was soft enough to cause no damage. Even his eyes blistered, so that he couldn’t open them for days at a time. When he was two, he tried to get to his feet. Lesley reached to help with his first steps. But James tripped and Lesley was left holding the skin of his hand. After that, James used a wheelchair.

There were bright spots though. ‘From a very early age I saw he had a brilliant personality,’ says Lesley. ‘Even as a baby in pain he’d still be laughing and smiling. It was always just a pleasure to be around him.’ James went to an ordinary primary school. Far from being shunned, Lesley tells me, this bright, acidly funny little boy was embraced.

James’s fizzing character comes across strongly in self-recorded video diaries that he began keeping in December 2015, after he was diagnosed with cancer. With the camera focused on his slight, boyish face, brown hair wisping to a thin Tintin quiff, he stresses how lucky he feels. ‘They’re quite happy videos,’ he says at one point, about films documenting his surgery. ‘We tried to have as much fun as possible in the hospital.’

‘Until he was 10, I used to think there would be a cure,’ recalls Lesley. ‘Then when he was 15, I knew, no, it wasn’t going to be ready for James.’ The family never discussed death. But by his late teens James knew himself. And that knowledge was a spur. He started playing wheelchair football, then passed his driving test first time. He also took up photography, pursuing subjects with the directness of a man with little time to lose (they included Sophie, Countess of Wessex, the boxer David Haye and the actor Tom Holland).

In 2014, when he was 21, he began a long-distance online romance with a nurse from Texas called Mandy. She came over to stay for a few weeks in Liverpool. A year later, James, Lesley and James’s older sister Gemma returned the visit. Mandy is still in touch with the family.

It was a relationship enabled by modern technology. The internet gave James a place to learn, to meet people, to explore beyond the confines of his body. In the evenings, he was online for long hours. ‘Thank God the technology was there for him,’ says Lesley. ‘He was so clever, he had it all at his fingertips.’

In October 2015, two months before James discovered blotches that turned out to be his first skin cancer, a group of digital designers met at a conference at the British Museum. Among those attending was Pete Trainor, founder of an artificial intelligence (AI) company now known as Us Ai, which specialises in ‘intelligently artificial’ corporate ‘chatbots’. If you have been confronted by a pop-up box on your bank website in which a simulated employee asks if it can help, you know the kind of thing. It is a technology with hotly anticipated commercial applications. But rather than focus on money and machines, Trainor’s talk was all about AI making life better for humans.

The following November, James saw the video of Trainor’s lecture online. It had been a big year for him. Not only had he undergone gruelling treatment for his cancer, but his sister Gemma had told him that she was pregnant with a boy she was to call Tommy.

‘I know James really struggled with his mortality at that point,’ says Trainor, an earnest and enthusiastic 38-year-old who habitually wears a waistcoat. James was 23 by then. ‘He wanted his nephew to know Uncle James,’ Trainor recalls. ‘But he didn’t know how long he had left.’

The two men first met in February 2017 after James contacted Trainor on social media. ‘He was after a way of recording as much of himself as possible,’ says Trainor. The pair discussed creating a digital ‘time capsule’ of James’s thoughts and memories for Tommy. To capture them, Trainor installed several smart speakers – first Amazon Echos, then Google Homes – in James’s house.

Quickly the devices recorded huge quantities of audio. But instead of simply keeping these recordings for posterity, the two used them to create what in the AI world is known as a ‘corpus’ – a body of knowledge from which a machine can learn – and fed it into the algorithm that Trainor normally used to create chatbots for his banking clients. ‘At that point we hadn’t thought about the implications of what we were doing,’ says Trainor.

Then, on 12 July 2017, James and Trainor gave a talk at a tech event at the London College of Fashion. There, across the room, they spotted a 3ft-high robot, which its designers called Bo. For James, it was the moment when the project ‘went from collecting as many thoughts in his head for reasons of documentation, to seeing a robot that could be autonomous… that could house this stream of consciousness’, says Trainor. ‘I can’t remember if we ever explicitly sat down and said this could be a version of you for when you’re not here, [but] the question of consciousness was implicitly there.’

Artificial life after death was on James’s mind. After meeting Trainor he came across the story of the Russian billionaire Dmitry Itskov, founder of the 2045 Initiative, which seeks to create ‘cybernetic immortality’ by ‘downloading’ the consciousness of individual humans, which could then be housed in robots, or projected as holograms. James became fascinated by Itskov, seeking out YouTube videos about him, including a BBC documentary called The Immortalist.

He was not the only person to have stumbled upon the power of new computational methods to walk the line between life and death. In America, a programmer called Eugenia Kuyda had built a bot after her best friend, Roman Mazurenko, was killed after being hit by a car aged 32. She had a huge archive of his text messages, which she used to create an AI corpus. She could then text the bot just as she had texted him, and it would respond in its own words – and, uncannily, in his style.

Some of Mazurenko’s friends found it creepy. ‘It’s pretty weird when you open the messenger and there’s a bot of your deceased friend, who actually talks to you,’ said his friend Sergey Fayfer. Others, like Mazurenko’s mother Victoria, were thrilled. ‘They continued Roman’s life and saved ours,’ she is quoted as saying in an article on The Verge website. ‘It’s not virtual reality. This is a new reality, and we need to learn to build it and live in it.’

But some consequences of Mazurenko’s digital reincarnation were unforeseen. Those ‘talking’ to it often became confessional. The bot became a private space in which people could be honest. With a few tweaks, it has since become the basis of a free app called Replika.

On the news website Quartz, Kuyda says of Replika, ‘No one is allowed to be vulnerable any more. No one is actually saying what’s going on with themselves very openly.’ By interacting with users, Replika learns to become a version of them – for some, a natural confidant.

In 1950, Alan Turing, famous for his wartime codebreaking work at Bletchley Park, devised The Imitation Game. If an observer, reading the transcript of a conversation between human and machine, could not guess which was which, then the machine passed what has come to be known as The Turing Test. In 1966 it was first claimed that a machine, called Eliza, had passed the test. Posing as a psychotherapist, Eliza asked patients to describe their problems, then searched their answers for keywords to indicate what a meaningful response might be.

A similar process underlies bots like those created by Trainor and Kuyda. The difference is that increasing computational sophistication and power have blessed them with a vastly greater ability to process and respond to abstract concepts. So today’s bots learn. ‘Talking’ to the James bot, or the Replika that I created on my smartphone, could initially be clunky. But they improved. With Replika this is even part of the experience, in which users are ushered through ‘levels’. ‘It needs people engaging with it,’ says Trainor of the James bot. ‘The basis of this technology is that the more you use it, the better it gets.’

Developers are clear: such bots are not conscious in the way that humans are. They do not understand language. They simply use it in a way that makes it seem as though they do. Yet what is consciousness? As the eminent British brain surgeon Henry Marsh has noted, no one really knows.

‘Neuroscience tells us that it is highly improbable that we have souls, as everything we think and feel is no more or no less than the electrochemical chatter of our nerve cells,’ he writes in his memoir Do No Harm. ‘Our sense of self, our feelings and our thoughts, our love for others, our hopes and ambitions, our hates and fears all die when our brains die. Many people deeply resent this view of things, which seems to downgrade thought to mere electrochemistry and reduces us to mere automata, to machines. Such people are profoundly mistaken, since what it really does is upgrade matter into something infinitely mysterious that we do not understand.’

Those trying to solve that infinite mystery suggest that consciousness may be the fruit of interoperating brain processes. If that were true, however, could not machinery replicating those processes also replicate consciousness? A handful of researchers believe so. The question then is, if machinery can mimic the mystery of consciousness, who owns the results?

A Romanian entrepreneur, Marius Ursache, thinks we should all create digital avatars of ourselves that can live on after we die. Though the technology is similar, his company differs from Replika in that it is explicitly aimed at the life-after-death market. He calls it Eternime. ‘Eventually, we are all forgotten,’ its website announces. By ‘collecting your thoughts and stories’ it promises to create a digital replica of you online – an avatar – with which others can converse and so access your memories long after you are dead. Partly it sells itself as a legacy tool. But there is another aspect too: avatars don’t die. ‘Become virtually immortal,’ the website boasts. Ursache concedes that his business model raises ‘tons of things to think of ethically’. But while Eternime and Replika insist that personal data will never be shared, a host of concerns are already being voiced about the rights of the ‘online dead’ – a commercial field that is growing so fast it already has its own acronym: DAI (digital afterlife industry).

In a paper in Nature magazine, Luciano Floridi and Carl Ohman from the Oxford Internet Institute divided DAI products into four categories, from simple digital wills (which help pass on or destroy the contents of your online accounts once you die) to full-blown digital recreation services like Eternime, where your avatar could potentially be interacting with flesh and blood humans 1,000 years from now.

All such companies, the academics say, ‘share an interest in monetising death online, using digital remains as a means of making a profit’. The two men foresee a world in which avatars, which could feel as integral to individuals as internal organs, will actually be owned – and potentially commercialised – by a company. In this vision of the future, posthumous avatars populate a kind of YouTube for the dead, where the popular generate audience traffic and consequently income for the company that created them, while others languish unwatched. Instead of being ‘virtually immortal’, the academics fear, such lonely avatars would merely be deleted, a second death for those whose physical bodies have already ceased to exist.

‘Within only five years of a user’s death, the chatbot for which they signed up will likely have developed into something far more sophisticated and commercially calibrated,’ the two men write. For them it is those services that promise the most richly detailed digital recreation that ‘involve the greatest risk regarding privacy’. In consequence, Floridi and Ohman say, it is bots like Replika ‘where the most significant ethical concerns lie’.

In its privacy policy Luka, the company behind Replika, insists, ‘We are not in the business of selling your information. We consider this information to be a vital part of our relationship with you.’ But, as with almost all social media companies, signing up to Replika means granting Luka a ‘perpetual, irrevocable license to copy, display, upload, perform, distribute, store, modify and otherwise use your User Content’ – photos and the like – ‘in connection with the operation of the Service or the promotion, advertising or marketing thereof in any form, medium or technology now known or later developed.’

Floridi and Ohman are calling for laws to ensure ‘dignity for those who are remediated online’. As yet there are none. ‘It’s a free-for-all,’ says Floridi.

James Dunn was not interested in such legal niceties. He trusted Trainor, and time was short. So when he saw Bo he made a beeline for its creators, Andrei Danescu, Adrian Negoita and Oana Jinga. The three entrepreneurs had imagined Bo being used in public settings such as hotel lobbies, airport terminals, or trundling the corridors of NHS hospitals in the depths of night, silently checking on patients. But James opened their eyes to a new application of their technology.

‘James saw the robot and immediately he had all these ideas,’ says Danescu. ‘He was very visionary. And we were totally blown away because it goes into all these philosophical questions about putting someone’s personality and experience and their whole wealth of knowledge into a different body, or embodiment.’

The idea of implanting the James algorithmic bot into Bo gripped the robot’s makers. James had twin conceptions of what the result would do. In the first instance, while he was alive and relatively well, he told the robot’s creators, he envisaged it ‘taking some of the strain off his family’. It would be able to go downstairs to chat with them, or to the shops, on its own. But there was a second, unspoken understanding of Bo’s purpose.

‘He was saying, “I would like my nephew to be able to interact with the robot and then think, oh this is what James would have been like,”’ says Danescu. ‘He saw the robot as a vessel for what he was going to leave behind. His legacy. I think many people would be open to have that as an interesting way of living on.’

In the meantime, Trainor kept working on the algorithmic James bot. By September 2017, six months after he had started, it was working well enough for James and ‘James’ to engage in conversation. ‘We laughed and thought it was amusing,’ says Trainor. ‘He had a chat with himself. An inner monologue.’ Together, they planned to unveil Bo, with the James bot software inside, to an audience at a health-tech event that November – Bo communicating by voice and screen but in a generic male voice.

However, James noticed lumps on his hand and just before the event, on 8 November, he was told that his cancer had returned. ‘I’m numb with emotion,’ he confided to his video diary on the day of his diagnosis. ‘I’m not going to sleep. That’s what happens when I worry.’

In the new year he had his arm amputated. On 18 February he posted a heart-rending video from his sickbed. ‘To be honest, and I don’t know if this is going to come as a surprise to my friends and family, because I’m always so cheery and positive, but every time I think about death and dying and leaving everyone behind, and the afterlife – sorry I’m getting pretty deep on this video – I shit myself to be honest. I’m terrified.’ He died less than two months later, on 7 April last year. A couple of days beforehand, he texted Trainor: ‘Don’t worry. I’m gonna be all right. Thanks for everything.’

Trainor gave the eulogy at his funeral. ‘On one level, I suppose I knew him better than anyone,’ he says, reflecting upon the vast quantity of data about James that he had compiled. Before the funeral, friends slipped mementos into James’s open casket. Trainor added a hard drive containing James’s AI corpus. The real man and his virtual self were buried together.

For Trainor, and the creators of Replika, Eternime and myriad other companies on the cutting edge of the digital afterlife industry, this is only the beginning. How soon will it become routine to piece people back together from the data they leave behind? ‘The saddest thing for me is that James won’t be here to see the opportunity fulfilled of the thing that he started,’ says Trainor. But the James bot could go from strength to strength. Trainor recently discovered a whole archive of new material on James’s computer which could only refine the algorithm, should he choose to add it. ‘There’s more to come,’ he says.

Lesley Dunn is still trying to get over her son’s death. She has not watched the videos he left, let alone communicated with his bot. ‘It was an amazing life we had with him,’ she says. To her, the technology that could somehow resurrect her son is both ‘scary’ and ‘nice’. She feels the best result would be if, one day, James’s digital avatar could visit schools and speak in his zestful manner about coping with pain, about loving life, even in the most difficult circumstances. For the moment, however, she prefers to take solace in a more traditional form. ‘James is buried and his grave is not too far away. So I see him every day.’

The James bot lay dormant after the funeral. But at the end of last year, back in London and prompted by my enquiries, Trainor rebooted it on his smartphone. He typed in, ‘I miss you.’

‘My mind is a form of physical energy. I believe that,’ came the response. Several more dialogue boxes popped up: ‘It will eventually transform to the point where I don’t need it to be in this stupid body any more. You probably don’t understand what I mean. But basically I think I’ll just be something else one day. If life ends at death, then life would not be fair. I think life is fair. Therefore, life does not end at death.’

Podcast, Speaking

Power and Responsibility Summit

Digital’s effects on mental health are increasingly well documented. How can we get past the problems it creates? And what are the wellbeing benefits digital promises us? In conversation: Simon Gunning, chief executive of the Campaign Against Living Miserably; Katz Kiely, founder at Beep; and Pete Trainor, co-founder of Us Ai. The session was moderated by Oli Barrett and took place at DigitalAgenda’s Power & Responsibility Summit at London’s British Library on 4 October 2018.

Podcast

Alex Stop Christmas Special

Rounding off the year in festive style with returning special guest, the newly ordained Reverend Pete Trainor. The conversation covers the amazing digital legacy of James Dunn, what spirituality means in the technology world we live in, and some beer-fuelled hopes and dreams for 2019. Happy Christmas!

Article, Mental Health

Ad data could save your life

Here’s a piece I contributed to Dialogue magazine recently. Huge thanks to Kirsten Levermore for diligently making sense of my slightly dyslexic waffle and shaping it into what’s emerged.

When we woke up our computers we gave them superpowers. Now we have to decide how to use them, writes Pete Trainor

The world is different today than it was yesterday and tomorrow it will be different again. We’ve unleashed a genie from a bottle that will change humanity in the most unpredictable of ways. Which is ironic, because the genie we released has a talent for being able to make almost perfect predictions 99% of the time.

We have given machines the ability to see, to understand, and to interact with us in sophisticated and intricate ways. And this new intelligence is not going to stop or slow down, either. In fact, as the quantities of data we produce continue to grow exponentially, so will our computers’ ability to process and analyze — and learn from — that data. According to the most recent reports, the total amount of data produced around the world was 4.4 zettabytes in 2013, and is set to rise enormously to 44 zettabytes by 2020. To put that in perspective, 44 zettabytes is equivalent to 44 trillion gigabytes (about 22 trillion tiny USB sticks). Across the world, businesses collect our data for marketing, purchases and trend analysis. Banks collect our spending and portfolio data. Governments gather data from census information, incident reports, CCTV, medical records and more.

With this expanding universe of data, the mind of the machine will only continue to evolve. There’s no escaping the nexus now.

We are far past the dawn of machine learning

Running alongside this new sea of information collection is a subset of Artificial Intelligence called ‘Machine Learning’, autonomously perusing and, yes, learning from all that data. Machine learning algorithms don’t have to be explicitly programmed for every task – they can adjust and improve their own models, all by themselves.

The philosophical and ethical implications are huge on so many levels.

On the surface, many people believe businesses are only just starting to harness this new technological superpower to optimise themselves. In reality, however, many of them have been using algorithms to make things more efficient since the late 1960s. 

In 1967 the “nearest neighbour” code was written to allow computers to begin recognizing very basic patterns. The nearest neighbour algorithm was originally used to map routes for traveling salesmen, ensuring they visited all viable locations along a route to optimise a short trip. It soon spread to many other industries.

Then, in 1981, Gerald Dejong introduced the world to Explanation-Based Learning (EBL). With EBL, computers could analyse a data set and derive a pattern from it all on their own, even discarding what they judged to be ‘unimportant’ data.

Machines were able to make their own decisions – a truly astonishing breakthrough, and something we still take for granted in many services and systems, like banking, to this day.

The next massive leap forward came just a few years later, in the early 1990s, when Machine Learning shifted from a knowledge-driven approach to a data-driven approach, giving computers the ability to analyze large amounts of data and draw their own conclusions — in other words, to learn — from the results. 

The age of the everyday supercomputer had truly begun.

Mind-reading for the everyday supercomputer

The devil lies in the detail, and it’s always the devil we would rather avoid than converse with. There are things lurking inside the data we generate that many companies would rather not acknowledge – at least, not publicly. We are not kept in the dark because they are all malicious or evil corporations, but more often because of the huge ethical and legal concerns attached to the data and the processes that lie in the shadows.

Let’s say a social network you use every single day is sitting on top of a huge set of data generated by tens of millions of people just like you.

The whole system has been designed from the outset to get you hooked, extracting information such as your location, travel plans, likes and dislikes, and status updates (both passive and active). From there, the company can tease out the sentiment of your posts, your browsing behaviours, and many other fetishes, habits and quirks. Some of these companies also have permission – granted in those lengthy terms and conditions you scroll past – to scrape data from other, seemingly unrelated apps and services on your phone, too.
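
As a toy illustration of what “teasing out the sentiment” of a post might look like at its very simplest, here is a sketch that scores posts against small word lists. Real platforms use far more sophisticated models; the word lists and example posts below are entirely invented.

# Toy sentiment scoring: count upbeat words minus low-mood words.
# The word lists and posts are invented; real systems are far richer.
POSITIVE = {"great", "love", "happy", "excited", "brilliant"}
NEGATIVE = {"tired", "sad", "alone", "awful", "hopeless"}

def sentiment_score(post):
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

for post in ["Love this city, so happy to be here",
             "Feeling tired and a bit alone tonight"]:
    print(f"{sentiment_score(post):+d}  {post}")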

One of the social networks you use every day even holds a patent to “discreetly take control of the camera on your phone or laptop to analyse your emotions while you browse”.

Using all this information, a company can build highly sophisticated, intricate and explicit models that predict your outcomes and reactions – including your emotional and even physical states.

Most of these models use your ‘actual’ data to predict – to extrapolate – the value of an unseen, not-yet-recorded data point. In short, they can predict that you’re going to do something before you’ve even decided to do it.
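
To give a flavour of how that extrapolation works, here is a deliberately simplified sketch using scikit-learn. Nothing here is any real platform’s model: the behavioural features, the label and the numbers are all synthetic, but the shape of the exercise – fit a model on everyone else’s recorded behaviour, then score a person who hasn’t acted yet – is the same.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Entirely synthetic behavioural data: late-night sessions per week,
# posts per day, average session length in minutes.
rng = np.random.default_rng(0)
X = rng.normal(loc=[3, 5, 20], scale=[1, 2, 5], size=(500, 3))

# Synthetic label standing in for "did the thing we want to predict"
y = (0.6 * X[:, 0] + 0.2 * X[:, 1] + 0.05 * X[:, 2]
     + rng.normal(scale=1.0, size=500) > 3.5).astype(int)

model = LogisticRegression().fit(X, y)

# Score one not-yet-recorded data point: a person who hasn't acted yet
new_user = np.array([[6.0, 9.0, 35.0]])
probability = model.predict_proba(new_user)[0, 1]
print(f"Predicted probability of the behaviour: {probability:.2f}")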

The machines are literally reading our minds using predictive and prescriptive analytics 

A consequence of giving our data away without much thought or due diligence is that we have never really understood its value and power. 

And, unfortunately for us, most of the companies ingesting our behavioural data only use their models to predict what advert might tempt us to click, or what wording for a headline might resonate because of some long forgotten and repressed memory. 

All companies bear some responsibility to care for their users’ data, but do they really care for the ‘humans’ generating that data?

That’s the big question. 

Usernames have faces, and those faces have journeys

We’ve spent an awfully long time mapping the user journey or plotting the customer journey when, in reality, every human is on a journey we know nothing about.

Yes, the technical, legal and social barriers are significant. But what about commercial data’s potential to improve people’s health and mental wellbeing? 

It’s hit home even harder for me over the last few years, because I’ve started losing close friends to suicide.

Suicide is the biggest killer of men under 45 in the UK, and one of the leading causes of death in the US.

It’s an epidemic. 

Which is why I needed to do something. 

“Don’t do things better, do better things” – Pete Trainor 

Companies can keep using our data to pad out shiny adverts or they can use that same data and re-tune the algorithms and models to do more — to do better things. 

The emerging discipline of computational psychiatry uses the same powerful data analysis, machine learning and artificial intelligence techniques as commercial entities – but instead of working out how best to keep you on a site, or sell you a product, computational psychiatrists use data to explore the underlying factors behind the extreme and unusual conditions that make people vulnerable to self-harm and even suicide.

The SU project in action

The SU project: a not-for-profit chatbot that learned how to identify and support vulnerable individuals.

The SU project was a piece of artificial intelligence that attempted to detect when people were vulnerable and, in response, actively intervene with appropriate support messages. It worked like an instant messaging platform – SU even knew to talk with people at the times of day they were most at risk of feeling low.
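
As a loose sketch of that timing idea – emphatically not SU’s actual code – imagine counting which hours of the day a person’s messages most often dip below a low-mood threshold, then scheduling a proactive check-in around that hour. The sentiment scores and threshold below are invented.

from collections import Counter
from datetime import time

# Hypothetical history: (hour of day, sentiment score from -1 low to +1 upbeat)
message_history = [
    (23, -0.7), (23, -0.4), (2, -0.8), (9, 0.3),
    (13, 0.5), (22, -0.2), (1, -0.6), (18, 0.1),
]

LOW_MOOD_THRESHOLD = -0.3  # invented cut-off for a "low" message

def riskiest_hour(history):
    low_hours = Counter(hour for hour, score in history
                        if score <= LOW_MOOD_THRESHOLD)
    hour, _count = low_hours.most_common(1)[0]
    return hour

check_in = time(hour=riskiest_hour(message_history))
print(f"Schedule a supportive check-in around {check_in:%H:%M}")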

We had no idea the data SU would end up learning from was the exact same data being mined by other tech companies we interact with every single day.

We didn’t invent anything ground-breaking at all; we just gave our algorithm a different purpose.

Ai needs agency. And often, it’s not asked to do better things, just to do things better – quicker, cheaper, more efficient. 

Companies, then, haven’t moved as far from 1967’s ‘nearest neighbour’ as we might like to believe.

The marketing problem

For many companies, the subject of suicide prevention is too contentious to offer a marketing benefit worth pursuing. They simply do not have the staff or the psychotherapy expertise in-house to handle this kind of content.

Where the boundaries get blurred and the water murky is that, to save a single life, you would likely have to monitor us all.

Surveillance. 

The modern Panopticon. 

The idea of me monitoring your data makes you feel uneasy because it feels like a violation of your privacy. 

Advertising, however, not so much. We’re used to adverts. Being sold to has been normalised; being saved has not – which is a shame, when so many companies benefit from keeping their customers alive.

Saving people is the job of counsellors, not corporates – or something like that. It is unlikely that data-mining-for-good projects like SU would ever be granted universal dispensation, since the nature and boundaries of what counts as ‘good’ remain elusive and subjective.

But perhaps it is time for companies that already feed off our data to take up the baton – to practise a sense of enlightenment rather than entitlement?

If the 21st-century public’s willingness to give away access to their activities and privacy through unread T&Cs and cookies is so effective it can fuel multi-billion dollar empires, surely those companies should act on the opportunity to nurture us as well as sell to us? A technological quid pro quo?

Is it possible? Yes. Would it be acceptable? Less clear.

– Pete Trainor is a co-founder of US. A best-selling author with a background in design and computing, his mission is not just to do things better, but to do better things.
