In the 1990s every town had its pop group, rock band or DJ representing it during the rise of Britpop, and the South-West was no different. Bristol had Massive Attack and Portishead, Glastonbury had Reef, Bath had the Propellerheads, and PJ Harvey ascended to global superstardom with her distinctive punk-infused, folk-rock sound. These artists became more than just chart sensations; they waved the flag for their local communities and inspired young people to look beyond pre-internet-era boundaries.
Yeovil was no exception, and in 1995 a group of twenty-something musicians and songwriters met at a pub in Somerton and decided they could take on the world. Ali, Steve, Nigel, Paul, Jim, and eventually Alex formed the band that went on to christen themselves Electrasy, and proceeded to have huge chart success in both the UK and the US with their unique brand of experimental Britpop.

The incredible story of this band is anything but linear and one-dimensional, which is why it has finally been immortalised in a new fan-led book called ‘Calling All The Dreamers’, tracing the band’s early days in Weymouth, Yetminster, Beaminster, Sherborne, and Yeovil all the way to New York and L.A., via studio recordings at the famous Abbey Road, meetings with music royalty such as Clive ‘Arista’ Davis and Sir Robin Millar, and television appearances performing to millions of prime-time viewers.
“Working with the band on this project has been a true labour of love,” says author Pete Trainor. “My friends and I were big fans of the band when they were starting out on the Yeovil music circuit, and I knew a bit about some of the stories, but the more I went digging, the more I realised this was a much bigger story that really needed to be shared. It’s so much more than your standard ‘local band gets signed and goes global’ journey. There’s a very good reason very few people remember much about Electrasy beyond one top-20 hit single: they were buried by the industry that had plucked them from humble roots, and I needed to find out why.”
“This has been an itch I’ve been trying to scratch for over 15 years, and we finally got to tell the story. It’s been magical.” – Pete Trainor
Electrasy’s first album, ‘Beautiful Insane,’ had its early demos and production supported by local producer Jon Sweet in his garage studio, and much of the additional work was recorded at the former Small World Studios in Yeovil. Throughout 1996, 1997 and 1998, dozens of gigs at venues like Gardens, The Ski Lodge, Yeovil Snooker Club, The Forresters Arms and The Quicksilver Mail built a dedicated fanbase, and the album went on to sell 60,000 copies in the first few months after its release, putting the band and Yeovil on the music map in a way that few thought possible. Championed by Chris Evans, Jo Whiley, and the UK music press, the band seemed unstoppable, and even after their first record contract with UMG was terminated unexpectedly, the band still played the coveted sunset spot on the second stage of Glastonbury Festival in 1999.
“We definitely felt like we were ready to take on the world when Arista took us over to America,” says singer Ali McKinnell in the book. “We had one of the best songwriters of a generation in Nigel, and the rest of us were damn good at what we were doing. But the industry often has different ideas, and when music piracy took hold at the same time as our rise, we found ourselves constantly caught somewhere between high and low. It was a very difficult journey in parts.”

The book looks at the band’s story from multiple angles, from the impact they had on local youth culture and the local economy, all the way to the advent of digital and the destruction it subsequently caused across the industry as a whole.
Guitarist Steve Atkins is quick to remind people of both the opportunity and the harm digital brought to bands who grew up analogue but entered the digital era: “One stream of a song on YouTube earns us £0.00053. A stream on Spotify is £0.0034, and Apple Music is £0.0057. To put that into context for you, if we had one million streams on YouTube we’d make £530, or £3,400 on Spotify. Bands can’t make enough to support themselves, let alone a tour, so we literally can’t get out on the road as much as we want. It’s a very dark pattern.”
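The arithmetic behind Steve’s point can be sketched in a few lines of Python. This is a rough illustration using only the per-stream rates he quotes; real payouts vary by territory, subscription type, and label deal, and the platform names here are just dictionary keys, not an official API.

```python
# Per-stream royalty rates (GBP) as quoted by Steve Atkins in the interview.
# These are illustrative figures, not official platform rates.
RATES_GBP = {
    "YouTube": 0.00053,
    "Spotify": 0.0034,
    "Apple Music": 0.0057,
}

def payout(platform: str, streams: int) -> float:
    """Estimated payout in GBP for a given number of streams on a platform."""
    return RATES_GBP[platform] * streams

# One million streams on each platform (≈ £530, £3,400 and £5,700 respectively)
for platform in RATES_GBP:
    print(f"1,000,000 streams on {platform}: £{payout(platform, 1_000_000):,.0f}")
```

Even a million YouTube streams, an enormous number for an unsigned local band, comes to roughly the price of a second-hand van, which is the point Atkins is making about touring economics.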
This leads to the final part of the book, which looks at the venues that first supported the music scene in the area. All boarded up, bulldozed, or turned into flats, their loss has left towns like Yeovil devoid of the local music scene that inspired so many to see the world differently and break away from crime or substance abuse.
“I definitely wasn’t expecting to find what we found,” says Pete. “When I came back to Yeovil to retrace the steps of the band I had no idea it was all gone. Even the studios where they recorded have shut and been sold off. The music ecosystem is very symbiotic, you see: bands need venues to play in and studios to record in, and venues and studios need local bands to bring their fanbases in and fill the tills.”
‘Calling All The Dreamers – The story of a local Britpop band who went to space and back’ is available now on Amazon. All proceeds from the book will be used to fund a tour of local venues in 2024. The band also release their brand-new, home-grown album, ‘To The Other Side’, in July 2023, which music lovers can buy on Bandcamp.
10th November 2019, ACC Liverpool Waterfront
The wait is over. Pete’s TEDx tribute to his friend and collaborator James Dunn is now live on the official channel. His moving talk tells the story of James, from birth all the way to how the two of them met and collaborated – a story many have agreed is worth telling and sharing.
During the talk Pete shares previously unseen footage of James, as well as pictures James took with the now-iconic camera created for him by Jude Pullen on BBC Two’s The Big Life Fix.
Originally published in the Daily Dot.
Technology already manipulates our emotions. When we receive a like on social media, it makes us happy. When we see CCTV cameras on our streets, they make us feel anxious. Every technological tool is likely to produce some kind of corresponding emotional reaction like this, but it’s only in recent years that companies and researchers have begun designing tech with the explicit intention of responding to and managing human emotions.
In a time when increasing numbers of people are suffering from stress, depression, and anxiety, the emergence of technology that can deal with negative emotions is arguably a positive development. However, as more and more companies aim to use AI-based technology to make us “feel better,” society is confronted with an extremely delicate ethical problem: Should we launch a concerted effort to resolve the underlying causes of stress, depression, and other negative states, or should we simply turn to “emotional technology” in order to palliate the increasingly precarious human condition?
For its part, the tech industry seems to be gravitating toward the second option. And it’s likely that it will be selling emotional technology long before anyone has a serious debate about whether such tech is desirable. Because even now, companies are developing products that enable machines to respond in emotional terms to their environment—an environment which, more often than not, includes humans.
In October, it was revealed that Amazon had patented a version of its Alexa personal assistant that could detect the emotional states of its users and then suggest activities appropriate to those states or even share corresponding ads. Microsoft patented something very similar in 2017, when it was granted its 2015 application for a voice assistant that would react to emotional cues with personalized responses, including “handwritten” messages. Even more impressively, Google received a patent in October for a smart home system that “automatically implements selected household policies based on sensed observations”—including limiting screen time until sensing 30 minutes outdoors or keeping the front door locked when a child is home alone. One of the many observations the system relies on to operate? The user’s emotional state.
And aside from the tech giants, a plethora of startups are entering the race to build emotionally responsive AI and devices, from Affectiva and Beyond Verbal to EmoTech and EmoShape. In EmoShape’s case, its chief product is a general-purpose Emotion Processing Unit (EPU), a chip which can be embedded in devices in order to help them process and respond to emotional cues. Patrick Levy-Rosenthal, the CEO of the New York-based company, explains this takes it beyond other AI-based technologies that simply detect human emotions.
“Affective computing usually focuses on how machines can detect human emotions,” he says. “The vision we have at EmoShape is different, since the focus is not on humans but on the machine side—how the machine should feel in understanding its surroundings and, of course, humans, and more importantly the human language. The Emotion Chip synthesizes an emotional response from the machine to any kind of stimuli, including language, vision, and sounds.”
As distant as the prospect of emotional AI might seem right now, there is already at least one example of a commercially successful AI-based device that responds to and manages human emotion. This is Pepper, a humanoid robot released in June 2015 by the SoftBank Group, which had sold over 12,000 units of the model in Europe alone by May 2018 (its launch supply in Japan sold out in one minute).
Even when Pepper was first launched, it had the ability to detect sadness and offer comforting conversation threads and behaviors in response, but in August 2018 it was updated with technology from Affectiva, heightening its emotional intelligence and sophistication even further. For instance, Affectiva’s software now enables Pepper to distinguish between a smile and a smirk, and while this ostensibly makes for only a subtle difference, it’s the kind of distinction that lets Pepper have a bigger emotional impact on the people around it.
This is most evident in Japan, where Pepper is enjoying gradually increasing use in care homes. “These robots are wonderful,” one Japanese senior citizen told The Japan Times last year, after participating in a session with Pepper at the Shintomi nursing home in Tokyo. “More people live alone these days, and a robot can be a conversation partner for them. It will make life more fun.”
As such testimony indicates, Pepper and machines like it have the power to detect the moods of their “users” and then behave in a way that either changes or reinforces these moods, which in the case of elderly residents equates to making them feel happier and less lonely. And Pepper certainly isn’t the only emotional robot available in Japan: In December, its lead developer Kaname Hayashi announced the appropriately named Lovot, a knee-high companionship bot launched by his spun-off startup, Groove X. “Lovot does not have life, but being with one is comforting and warm,” he said proudly, adding, “It’s important for trust to be created between people and machines.”
Depending on the particular ends involved, the possibility of such “trust” is either inspiring or unsettling. Regardless, the question emerges of when, exactly, such technology will be made available to the general public, and of when emotionally responsive devices might become ubiquitous in homes and elsewhere. Pepper’s price tag was $1,600 at launch—not exactly a casual purchase for the average household.
“Ubiquity is a far-reaching proposition,” says Andrew McStay, a professor in digital media at Bangor University in Wales, and the author of 2018’s Emotional AI. “But by the early 2020s we should be seeing greater popular familiarity with emotion-sensing technologies.”
Familiarity is one thing, but prevalence is another, and in this respect other AI experts believe that the timeframe looks more like decades than years. “I believe we are still quite far away from a future where emotionally responsive devices and robots become ubiquitous,” says Anshel Sag, a tech analyst at Moor Insights and Strategy. “I think we’re probably looking at 20 to 30 years, if I’m being honest. Robotics are expensive, and even if we could get today’s robots to react appropriately to emotions, the cost of delivering such a robot is still prohibitively high.”
Although Sag is doubtful that most of us will interact with emotionally responsive AI anytime sooner than 2039 or 2049, he’s nonetheless confident that such tech will be used in a way that “regulates” human emotions, in the sense of being used to perk up and change our moods. “Yes, I believe there will be emotional support robots and companion robots to keep people and pets company while others are gone or unavailable,” he explains. “I believe this may be one of the first use cases for emotionally aware robots as I believe there is already a considerable number of underserved users in this area.”
But while the arrival of emotionally responsive devices is only a matter of time, what isn’t certain is just how healthy such devices will be for people and society in general. Because even if a little pick-me-up might be welcome every now and again, emotions generally function as important sources of information, meaning that turning us away from our emotional states could have unfortunate repercussions for our ability to navigate our lives.
“The dangers are transformative technologies that are trying to hack the human body to induce a state of happiness,” says EmoShape’s Levy-Rosenthal. “All emotions are important. Society wants happiness, but you should not feel happy if your life is in danger, for example. AI, robots, apps, etc. must create an environment that helps make humans happy, not force happiness on them.”
It might seem hard to imagine plastic robots and AI-based devices having a significant emotional hold over humans, but our present-day relationship with technology already gives a clear indication of how strong our response to emotionally intelligent machines could be. “I’ve seen firsthand how people emotionally react when they break their phone, or when the coffee-machine breaks,” explains Pete Trainor, an AI expert, author, and co-founder of the London-based Us Ai consultancy. “They even use language like ‘my phone has died’ as if they’re mourning a friend or loved one. So absolutely, if I were emotionally attached to a machine or robot, and my attachment to that piece of hardware were as deep as the relationship I have with my phone, and the mimicry was happiness or sadness, I may very well react emotionally back.”
Trainor suggests that we’d have to spend a long time getting comfortable with a machine in order for its behavior to have an emotional impact on us comparable to that of other people. Nonetheless, he affirms that there “are substantial emotional dangers” to the growth of emotionally intelligent AI, with the risk of dependency likely to become a grave one. This danger is likely to become even more acute as machines become capable of not only detecting human emotions, but of replicating them. And while such an eventuality is still several years away, experts agree that it’s all but inevitable.
“I believe that eventually (say 20-30 years from now) artificial emotions will be as convincing as human emotions, and therefore most people will experience the same or very similar effects when communicating with an AI as they do with a human,” explains David Levy, an AI expert and author of Love and Sex With Robots. What this indicates is that, as robots become capable of expertly simulating human emotions, they will become more capable of influencing and regulating such emotions, for better or for worse.
Bangor University’s McStay agrees that there are inherent risks involved in using tech to regulate human emotions, although he points out that such tech is likely to fall along a spectrum, with some examples being more positive than others. “For example, use in media and gaming offers scope to increase pleasure, and wearables that track moods invite us to reflect on daily emotional trajectories (and better recognize what stresses us),” he says. “Conversely, more surveillant uses (e.g., workplaces and educational contexts) that promise to ‘increase student experience’ or ‘worker wellbeing’ have to be treated with utter caution.”
McStay adds that, as with most things, the impact of emotional tech “comes down to meaningful personal choice (and absence of coercion) and appropriate governance safeguards (law, regulation and corporate ethics).” However, the extent to which there will be meaningful personal choice and appropriate regulations is still something of a mystery, largely because governments and corporations have only just begun looking into the ethical implications of AI.
And unsurprisingly for an industry that has in recent years been embroiled in a number of trust-breaking scandals, one of the biggest dangers surrounding emotional AI involves privacy. “Ultimately, sentiment, biofeedback, neuro, big data, AI and learning technologies raise profound ethical questions about the emotional and mental privacy of individuals and groups,” McStay says. Who has access to the data that your robot has collected about your ongoing depression?
And as Cambridge Analytica and other scandals have shown, the privacy question feeds into wider issues too. “Other factors include trust and relationships with technology and AI systems, accuracy and reliability of data about emotion, responsibility (e.g., what if poor mental health is detected), potential to use data about emotion to influence thought and behaviour, bias and nature of training data, and so on,” he adds.
There are, then, a large number of hurdles to overcome before the tech industry can sell emotional AI en masse. Still, while recent events might lead some of us to take a more pessimistic and dystopian view of such AI, experts like McStay and companies like EmoShape are optimistic that our growing concern with AI ethics will constrain the development of new technology, so that the emotional tech that does emerge works in our best interests.
“I do not think emotional AI is bad or sinister,” McStay concludes. “For sure, it can be used in controlling and sinister ways, but it can also be used in playful and rewarding ways, if done right.”
This Machine Ethics podcast is created and run by Ben Byford in collaboration with ethicalby.design. As the interviews unfold on AI implementation or abstract technology ideas, they often veer into current affairs, the future of work, environmental issues, and more. Though the core is still AI and AI ethics, we release content that is broader and therefore hopefully more useful to the general public and practitioners.

