Articles Tagged with ‘Ai’

Press

#TEDxLiverpool 2019

10th November 2019, ACC Liverpool Waterfront

The wait is over. Pete’s tribute TEDx talk to his friend and collaborator, James Dunn, is now live on the official channel. His moving talk tells the story of James, from birth all the way through to how the two of them met and collaborated. A story many have agreed is worth telling and sharing.

Pete Trainor, on stage at TEDx Liverpool 2019.

During the talk Pete shares previously unseen footage of James, as well as pictures James took with the now-iconic camera created for him by Jude Pullen on BBC Two’s The Big Life Fix.

Article, Mental Health

Ad data could save your life

Here’s a piece I contributed to Dialogue magazine recently. Huge thanks to Kirsten Levermore for diligently making sense of my slightly dyslexic waffle and shaping it into what’s emerged.

When we woke up our computers we gave them superpowers. Now we have to decide how to use them, writes Pete Trainor

The world is different today than it was yesterday, and tomorrow it will be different again. We’ve unleashed a genie from a bottle that will change humanity in the most unpredictable of ways. Which is ironic, because the genie we released has a talent for making almost perfect predictions 99% of the time.

We have given machines the ability to see, to understand, and to interact with us in sophisticated and intricate ways. And this new intelligence is not going to stop or slow down, either. In fact, as the quantities of data we produce continue to grow exponentially, so will our computers’ ability to process, analyse and learn from that data. According to the most recent reports, the total amount of data produced around the world was 4.4 zettabytes in 2013 – set to rise enormously to 44 zettabytes by 2020. To put that in perspective, one zettabyte is a trillion gigabytes, so 44 zettabytes is 44 trillion gigabytes (about 22 trillion tiny USB sticks). Across the world, businesses collect our data for marketing, purchases and trend analysis. Banks collect our spending and portfolio data. Governments gather data from census information, incident reports, CCTV, medical records and more.
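
If you want to sanity-check those numbers, the conversion fits in a few lines (a minimal sketch, assuming decimal SI units and, for the USB comparison, 2 GB per stick):

```python
# Sanity-checking the zettabyte arithmetic above (decimal SI units assumed).
GB_PER_ZB = 10**21 // 10**9     # 1 zettabyte = 1 trillion gigabytes
total_gb = 44 * GB_PER_ZB       # 44 ZB forecast for 2020 -> 44 trillion GB
usb_sticks = total_gb // 2      # assuming 2 GB per tiny USB stick
print(f"{total_gb:,} GB is about {usb_sticks:,} USB sticks")
# -> 22,000,000,000,000 sticks, i.e. the 22 trillion mentioned above
```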

With this expanding universe of data, the mind of the machine will only continue to evolve. There’s no escaping the nexus now.

We are far past the dawn of machine learning

Running alongside this new sea of information collection is a subset of Artificial Intelligence called ‘Machine Learning’, autonomously perusing and, yes, learning from, all that data. Machine learning systems don’t have to be explicitly programmed for every task – they tune their own internal parameters and improve with experience, without a human ever rewriting the code.
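
To illustrate, here’s a toy sketch of that idea (the numbers are invented): the program below is never rewritten, yet its one parameter w ends up encoding the pattern hidden in the data, simply by being nudged after each example.

```python
# A toy "learning" loop: the code never changes, but the parameter w does.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # invented (x, y) pairs, y ~ 2x

w = 0.0                        # the whole model: y_hat = w * x
for _ in range(200):           # repeated passes over the data
    for x, y in data:
        error = w * x - y      # how wrong the current guess is
        w -= 0.01 * error * x  # small gradient step on the squared error
print(round(w, 2))             # ends up close to 2.0, the slope in the data
```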

The philosophical and ethical implications are huge on so many levels.

On the surface, many people believe businesses are only just starting to harness this new technological superpower to optimise themselves. In reality, however, many of them have been using algorithms to make things more efficient since the late 1960s. 

In 1967 the “nearest neighbour” algorithm allowed computers to begin recognising very basic patterns. The nearest neighbour heuristic was originally used to map routes for travelling salesmen, visiting every location on a route while keeping the overall trip as short as possible. It soon spread to many other industries.
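
As a rough illustration of how simple that heuristic is, here’s a minimal sketch in Python; the cities and coordinates are invented for the example:

```python
import math

# Nearest-neighbour route heuristic: from wherever you are, always hop to
# the closest place you haven't visited yet. The cities here are made up.
cities = {
    "A": (0.0, 0.0),
    "B": (1.0, 5.0),
    "C": (5.0, 2.0),
    "D": (6.0, 6.0),
    "E": (2.0, 3.0),
}

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def nearest_neighbour_route(start):
    route, unvisited = [start], set(cities) - {start}
    while unvisited:
        here = cities[route[-1]]
        # Greedily pick the closest unvisited city.
        nxt = min(unvisited, key=lambda name: distance(here, cities[name]))
        route.append(nxt)
        unvisited.remove(nxt)
    return route

print(nearest_neighbour_route("A"))  # -> ['A', 'E', 'B', 'C', 'D']
```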

Then, in 1981, Gerald DeJong introduced the world to Explanation-Based Learning (EBL). With EBL, computers could now analyse a data set and derive a pattern from it all on their own, even discarding what they thought was ‘unimportant’ data.

Machines were able to make their own decisions – a truly astonishing breakthrough, and something we still take for granted in many services and systems, like banking, to this day.

The next massive leap forward came just a few years later, in the early 1990s, when Machine Learning shifted from a knowledge-driven approach to a data-driven approach, giving computers the ability to analyse large amounts of data and draw their own conclusions – in other words, to learn – from the results.

The age of the everyday supercomputer had truly begun.

Mind-reading for the everyday supercomputer

The devil lies in the detail, and it’s always the devil we would rather avoid than converse with. There are things lurking inside the data we generate that many companies would rather not acknowledge – at least, not publicly. We are not kept in the dark because they’re all malicious or evil corporations, but more often because of the huge ethical and legal concerns attached to the data and processes that lie in the shadows.

Let’s say a social network you use every single day is sitting on top of a large set of data generated by tens of millions of people just like you.

The whole system has been designed right from the outset to get you hooked, extracting information such as your location, travel plans, likes and dislikes, and status updates (both passive and active). From there, the company can tease out the sentiment of posts, your browsing behaviours, and many other fetishes, habits and quirks. Some of these companies also have permission (which you grant them in those lengthy terms and conditions) to scrape data from other seemingly unrelated apps and services on your phone, too.

One of the social networks you use every day even has a patent to “discreetly take control of the camera on your phone or laptop to analyse your emotions while you browse”.

Using all this information, a company can build highly sophisticated, extremely intricate models that predict your outcomes and reactions – including your emotional and even physical states.

Most of these models use your ‘actual’ data to predict, or extrapolate, the value of an unseen, not-yet-recorded point – in short, they can predict that you’re going to do something even before you’ve decided to do it.
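
To make that concrete, here’s a minimal, hypothetical sketch using scikit-learn’s k-nearest-neighbours classifier; the behavioural features, values and labels are invented purely for illustration:

```python
from sklearn.neighbors import KNeighborsClassifier

# Invented behavioural features per user:
# [posts_per_day, late_night_sessions, avg_session_minutes]
history = [
    [12, 5, 42],
    [2, 0, 8],
    [9, 4, 35],
    [1, 1, 6],
]
clicked_ad = [1, 0, 1, 0]  # what each of those users actually did

model = KNeighborsClassifier(n_neighbors=3)
model.fit(history, clicked_ad)

# The "unseen, not-yet-recorded point": a user who hasn't acted yet.
new_user = [[10, 3, 30]]
print(model.predict(new_user))        # -> [1], i.e. likely to click
print(model.predict_proba(new_user))  # -> how confident the model is
```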

The machines are literally reading our minds using predictive and prescriptive analytics 

A consequence of giving our data away without much thought or due diligence is that we have never really understood its value and power. 

And, unfortunately for us, most of the companies ingesting our behavioural data only use their models to predict what advert might tempt us to click, or what wording for a headline might resonate because of some long forgotten and repressed memory. 

All companies bear some responsibility to care for their users’ data, but do they really care for the ‘humans’ generating that data?

That’s the big question. 

Usernames have faces, and those faces have journeys

We’ve spent an awfully long time mapping the user journey or plotting the customer journey when, in reality, every human is on a journey we know nothing about.

Yes, the technical, legal and social barriers are significant. But what about commercial data’s potential to improve people’s health and mental wellbeing? 

It’s started to hit home for me even harder over the last few years because I’ve started losing close friends to suicide. 

Suicide is the biggest killer of men under 45 in the UK, and one of the leading causes of death in the US.

It’s an epidemic. 

Which is why I needed to do something. 

“Don’t do things better, do better things” – Pete Trainor 

Companies can keep using our data to pad out shiny adverts or they can use that same data and re-tune the algorithms and models to do more — to do better things. 

The emerging discipline of computational psychiatry uses the same powerful data analysis, machine learning and artificial intelligence techniques as commercial entities – but instead of working out how best to keep you on a site, or sell you a product, computational psychiatrists use data to explore the underlying factors behind the extreme and unusual conditions that make people vulnerable to self-harm and even suicide.

The SU project in action

The SU project: a not-for-profit chatbot that is learning how to identify and support vulnerable individuals.

The SU project was a piece of artificial intelligence that attempted to detect when people were vulnerable and, in response, actively intervene with appropriate support messages. It worked like an instant messaging platform – SU even learned to talk with people at the times of day they were most at risk of feeling low.
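
For a flavour of the mechanics, here is a deliberately simplified, hypothetical sketch of those two behaviours (not SU’s actual code): the keyword list, weights, threshold and check-in hours are invented, and a real system would rely on trained models and clinical oversight rather than a word list.

```python
from datetime import datetime

# Invented, illustrative risk terms and weights - not a clinical instrument.
RISK_TERMS = {"hopeless": 3, "worthless": 3, "can't cope": 3, "alone": 1}
THRESHOLD = 3

def risk_score(message):
    """Crude stand-in for a learned risk model: weighted keyword matches."""
    text = message.lower()
    return sum(w for term, w in RISK_TERMS.items() if term in text)

def reply(message, low_hours):
    """Support message if risk looks high; a check-in during low hours."""
    if risk_score(message) >= THRESHOLD:
        return "That sounds really hard. Do you want to tell me more?"
    if datetime.now().hour in low_hours:
        return "Just checking in - how are you feeling right now?"
    return None  # no intervention needed

# low_hours would be learned per person, from when their mood tends to dip.
print(reply("I feel so hopeless today", low_hours={1, 2, 3}))
```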

We had no idea the data SU would end up learning from was the exact same data being mined by other tech companies we interact with every single day.

We didn’t invent anything ground-breaking at all, we just gave our algorithm a different purpose.

Ai needs agency. And often, it’s not asked to do better things, just to do things better – quicker, cheaper, more efficient. 

Companies, then, haven’t moved quite as far from 1967’s ‘nearest neighbour’ as we might like to believe.

The marketing problem

For many companies, the subject of suicide prevention is too contentious to provide a marketing benefit worth pursuing. They simply do not have the staff or psychotherapy expertise, internally, to handle this kind of content.

Where the boundaries get blurred and the water gets murky is that, to save a single life, you would likely have to monitor us all.

Surveillance. 

The modern Panopticon. 

The idea of me monitoring your data makes you feel uneasy because it feels like a violation of your privacy. 

Advertising, however, not so much. We’re used to adverts. Being sold to has been normalized, being saved has not, which is a shame when so many companies benefit from keeping their customers alive.

Saving people is the job of counsellors, not corporates – or something like that. It is unlikely that ‘data mining for good’ projects like SU would ever be granted universal dispensation, since the nature and boundaries of what is ‘good’ remain elusively subjective.

But perhaps it is time for companies who already feed off our data to take up the baton? Practice a sense of enlightenment rather than entitlement? 

If the 21st-century public’s willingness to give away access to their activities and privacy through unread T&Cs and cookies is so effective it can fuel multi-billion dollar empires, surely those companies should act on the opportunity to nurture us as well as sell to us? A technological quid pro quo?

Is it possible? Yes. Would it be acceptable? Less clear.

– Pete Trainor is a co-founder of US. A best-selling author, with a background in design and computers, his mission is not just to do things better, but do better things. 

Article

The philosophical complications of robot cowboys and frustrated gamers

This is a slightly left-of-the-middle one for me… A few months ago I was approached by Sky to join them at the Mindshare Huddle (which was a brilliant event, by the way) in November to discuss the existential issues inspired by the Sky Atlantic show Westworld. As a huge fan of the original 1973 film, written and directed by Michael Crichton (and, rather controversially, of the 1976 sequel, Futureworld), it was a tough gig for me to refuse. If you’re not familiar with the film or the show, it depicts a technologically advanced, complete re-creation of the American frontier of 1880: a western-themed amusement park populated by humanoid robots that malfunction and begin killing the human visitors. Basically, it’s Jurassic Park but with gunslingers and prostitutes instead of dinosaurs.

For the discussion to work, it was really important to move past the science fiction quickly and get right to the parts of the show that are conceivably being played out, in real time, as I type this. Life is currently imitating art to a certain extent, and so we put forth the following synopsis for the audience:

“Join Jamie Morris, Channel Editor, Sky Atlantic & Head of Scheduling and Pete Trainor, Director of Human Focused Technology at US Ai, as they delve into the notion of Sky Atlantic show Westworld becoming a real thing. This ethical discussion will cover human interactivity in a world of Ai and whether humans would act as normal, or be enticed into a dark illegal world, just because they could. Pete will also describe just how close we are to a Westworld-style world—and you are going to be surprised.” 

We had a really fascinating and enjoyable conversation, seeded with a single statement by one of the scientists in the original 1973 film:

“We aren’t dealing with ordinary machines here. These are highly complicated pieces of equipment almost as complicated as living organisms. In some cases, they’ve been designed by other computers. We don’t know exactly how they work.” 

What strikes me about that statement is how close to today’s reality the original script is becoming. Cast your mind back to last year, when AlphaGo trumped Lee Sedol in Game Two of Go. As the world looked on, the DeepMind machine made a move that no human ever would. Move 37 perfectly demonstrated the enormously powerful (and rather mysterious) future of modern artificial intelligence. AlphaGo’s surprise attack on the right-hand side of the 19-by-19 board flummoxed even the world’s best Go players, including Lee Sedol. Nobody really understood how it did it, not even its creators. Lee Sedol then lost Game Three, and AlphaGo claimed the million-dollar prize in the best-of-five series.

But let’s just get real for a moment, it was a mystery that we, the humans, programmed and created. We just didn’t really understand what we’d done.

Ticks not Clicks

I really enjoyed the facet of the show that presents us with an alternate view of the human condition through the technological mirror of life-like robots. Look past the boozing and sex, and Westworld is Psychology 101. It causes us to reflect that we are perhaps also just sophisticated machines, albeit of a biological kind. Episode 4 was even entitled “Dissonance Theory” and explored the psychological hypothesis of bicameralism. The whole show taps into the rich tapestry of questions inspired by Mary Shelley’s novel Frankenstein, too… The creature created by Frankenstein is psychologically conflicted between a need for human companionship and a deep, selfish hatred for those who have what he does not… but that’s a digression for another day. As people start to meddle in technology and opportunities they don’t fully understand, you really have to stop and question who the true antagonists in a show like Westworld are: the robots, the human players (and they are players in a game, by the way), or the scientists who create the rules of the game.

The audience chose the players, but I suggested to Jamie that the real antagonists of Westworld (and Ai today) aren’t the people who treat robots as objects, or the robots themselves, but the scientists who try to make the robots more human by design. The ones who trick us into bringing out the most primitive parts of humanity.

“When you played cowboys and Indians as a kid, you’d point and go ‘bang, bang’ and the other kid would lie down and play dead. Well, Westworld is the same thing, only it’s for real!”

Perhaps we can learn something from Westworld, where the ones treating robots like robots seem the most capable of separating reality from fantasy, and human life from technological wizardry. It’s the scientists imposing the human condition and consciousness on artificially intelligent beings who unleash suffering on both robot and humankind. In the show, while both Maeve and Dolores may have acted in a mix of prescribed and self-directed ways, their revolutions were firmly created by the humans in the lab. Ultimately, the robots don’t become semi-sentient – and violent – simply by experiencing love or loss or trauma or rage or pain, but by being programmed and guided that way.

It is inevitable, therefore, that as real-life artificial intelligence develops, we will see a lot of debate over whether treating humanoid machines like machines is somehow inhumane – either because it violates the rights of robots, or because it produces moral hazards in the humans who participate.

As humans, we’re predisposed to behave in ways that play to our base instincts. Even if robots are just tools, some people will always see them as more than that, and it seems natural for people to respond to robots – even some of the simpler, non-human robots we have today – as though they have goals and intentions. I stand my ground that Ai will never be able to ‘feel’ or have ‘emotions’ or ‘empathy’, because those are very human traits. They’re biological and psychological, not mechanical. We can programme machines to interpret and mimic, but they will not feel. But if we do create those mimicry moments, who’s to blame if some people fall for the charade? As kids we had teddy bears; phones, bots, Alexa, robots, droids and the like are just logical, grown-up extensions of those anthropomorphisms.

Have a look at the following video and see how it makes you feel:

A human tendency towards irrational, often violent, behaviour

We covered the human tendency towards irrational, violent behaviour. Now, this is controversial and divisive, but statistically humans are six times more likely to kill each other than the average mammal. That’s no excuse for violence – humans are also moral animals, and we cannot escape from that. But a lot of society has a base instinct towards living out primitive behaviour, especially in herds. I referenced taking Charlie, my 8-year-old son, to the football every few weeks, and how most weeks he asks more questions about the behaviour of the fans (the mob) than about the football. I myself cannot get passionate enough about grown men kicking a ball around some grass to hurl disgusting abuse at a referee, but several thousand people in a crowd of 22,000 do. Week in and week out.

The “Seville Statement” is another very controversial document from the 1980s; some research has backed up parts of the theory that people have an innate biological tendency towards violence, in contradiction of both the statement and the views of many cultural anthropologists. A lot of this primitive, violent behaviour in parts of society still harks back to the behaviour of our primate cousins. Groups of male chimpanzees prey on smaller groups to increase their dominance over neighbouring communities, improving their access to food and mates. I believe that, given some legitimate reason to behave like apes, some people will seize the opportunity. So again, if the world moves in the direction of creating opportunities to behave like apes, we will see people’s behaviour pivot in that direction. Build it and programme it with options to abuse, and they will come. Mark my words.

Gameplay and frustration

There’s a new craze of VR parks starting to open up in Tokyo. The first truly immersive one opened last December as an experiment. Nearly 12 months later, it is attracting 9,000 visitors a month and turning people away at weekends, as crowds clamour to immerse themselves in extreme experiences, distant worlds and fantasy scenarios, using technology most people still can’t afford to have at home. Again, it shows how, as a species, we clamour for escapism – and the more immersive the better. So whilst on stage we acknowledged that Westworld as a park full of robots is not very realistic, as a hypothetical concept it’s already happening.

I touched a little on the link between computer games and violent behaviour and how this would also factor in. There’s actually no proof that violent video games create violent tendencies offline, by the way, but there are some interesting studies emerging that back up the theory that frustration at being unable to play a game is more likely to bring out aggressive behaviour than the content of the game itself. What’s interesting about Westworld in this context is that, as the machines evolved to change the rules almost constantly, they would encourage frustration and therefore violence – the violent themes would not necessarily inspire that behaviour; the intelligence of the ever-evolving scenario would. Chaos, basically. A lot of the Ai we’re building at the moment is literally designed to break the rules. To continuously evolve. To ‘machine learn’… so don’t be too surprised when frustration turns into the kind of behaviour we don’t normally see in polite society.

People have a psychological need to come out on top when playing games. If we feel thwarted by the controls or the design of something, we can wind up feeling aggressive. 

In giving artificial intelligence the ability to improvise, we (humans) give it the power to create, to decide, and to act. If we program it to improvise without programming the right ethical framework, we risk losing control of it altogether and then we’re basically fulfilling our own prophecy.

The ethics of human-computer relationships

Finally, to end, we also covered some of the high-level areas of Ai ethics – things the World Economic Forum has listed as areas for humanity to consider when developing intelligent machines. Consider this: even though our conversation was hypothetical, the 1973 film predicted a lot of where we’ve ended up. Today, we’re at number one. According to the futurists, by 2035 we might make it to number nine:

  1. Unemployment – What happens to jobs when robots / chatbots / automation replace us?
  2. Eroded Humanity – What happens to the self-esteem of people replaced by machines? Do we worsen the growing mental health crisis?
  3. Inequality and Distributed Wealth – Linked directly to 1 & 2… where does the wealth generated by machines go?
  4. Racist Robots – What happens when we feed toxic, historical data into the Ai and, by doing so, imbue it with all our racism, bias etc.?
  5. Artificial Stupidity – What happens when the machines we create to automate processes go wrong? Everybody and every machine ‘learns by doing’ and making mistakes. They will. It’s how kids learn.
  6. Security Against Adversaries – What fail-safes do we need to put in place to ensure the machines can’t hit the big red button?
  7. Robot Rights – When machines start to grow in intelligence and mimic their creators, do we need to give them rights, or do we acknowledge that they are no more in need of rights than toasters and other technical tools?
  8. Evil genies & unintended consequences – For every good in the world, there will be bad. That’s life. There will be bad examples—terrorism and cyber-war etc.
  9. The Singularity – How we refer to the moment a machine overtakes humanity as the smartest thing on the planet and gains the ability to think and make judgements for itself. Conscious (even if it is just mimicking consciousness!) software capable of looking at its creators and saying “you are my slaves, not vice versa”.

Summary

“Human” characters in the show routinely ask other individuals in Westworld whether or not they are “real.” One character replies to the question, “Does it really matter?”, which is already a reflection I make most days when I see us all interacting with each other virtually and behaving in such unpredictable ways.

“Mr. Lewis shot 6 robots scientifically programmed to look, act, talk and even bleed just like humans do. Isn’t that right? Well, they may have been robots. I mean, I think they were robots. I mean, I know they were robots!” 

As shows like Westworld get closer and closer to becoming reality, it’s going to become ever more imperative that we acknowledge the importance of the ethical questions. If we’re tricked into behaving in ways that play to our base instincts… whose responsibility is it to govern and manage that?

Welcome to Westworld!

——————————————————

A massive thank you to Caroline Beadle at Sky Media for organising and inviting me to join Sky for the afternoon, and for humouring me enough to let me take this talk into a far, far more philosophical space than it needed to be. We really did take things off in a whole bizarre, human-focused direction – which is ironic, given we got together to talk about robots.
