Will artificial intelligence solve more problems than it creates?

Artificial intelligence is very clearly the flavour of the month in the technology industry, but unlike many other 'hyped' technologies that struggle to break out of the sector press, AI has provoked strong opinions far beyond it.

Whether it is Tesla's Autopilot crashes, Microsoft's politically incorrect chatbots, or robots taking our jobs, AI seems to be a proxy for many of our fears about the modern world. And it's not just sensationalist headlines: prominent figures such as Stephen Hawking and Elon Musk have warned of our impending doom.

The truth is, we're still a long way from an AI that can pose genuine harm to humanity. We need to remember that AI is created and controlled by humans, and it is ultimately human error that causes AI to malfunction. Worrying over the direction of AI misses a bigger, more crucial point: rather than fearing software that is too self-aware or too intelligent, we should be concerned about software that isn't clever enough.

Software isn’t clever enough

Relatively basic coding errors occur more frequently than we might hope, and sometimes they break into mainstream consciousness. The Heartbleed bug in 2014, for example, made the world aware that a single coding error, in that case a missing bounds check in OpenSSL, could let attackers remotely read sensitive memory, including passwords and private keys, from millions of servers.
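To make the point concrete, here is a deliberately simplified C sketch of the class of error behind Heartbleed. The function name and shape are hypothetical, not the actual OpenSSL code; what it shares with the real bug is trusting a length field supplied by the other end of the connection.

```c
#include <stdlib.h>
#include <string.h>

/* Deliberately simplified, hypothetical echo handler in the spirit of
 * Heartbleed: it trusts a length field supplied by the remote peer. */
unsigned char *echo_reply(const unsigned char *payload,
                          size_t actual_len,
                          size_t claimed_len)
{
    unsigned char *reply = malloc(claimed_len);
    if (reply == NULL)
        return NULL;

    /* BUG: nothing checks claimed_len against actual_len. If the peer
     * claims a larger length, memcpy reads past the payload into
     * adjacent process memory (keys, passwords, session data) and the
     * caller sends it back. The fix is a single bounds check:
     *     if (claimed_len > actual_len) return NULL;              */
    memcpy(reply, payload, claimed_len);
    return reply;
}
```

A two-line oversight like this sat in widely deployed code for years before anyone noticed, which is exactly the scale of the problem.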

The scale of the issue is only growing. Recent reports show that the UK is targeted by dozens of serious cyber attacks each month, and that's just what is counted at a national level. Corporate software is growing more complex, more interdependent, and more vulnerable, making hacks and breaches regular occurrences.

Software has become too difficult for humans to test and manage properly. Testing still happens, and most developers dedicate significant time to it, but as codebases grow it becomes ever harder to catch the minute errors that leave users vulnerable.

If an application has hundreds of millions of lines of code, it is simply too much to ask humans to guarantee its safety. The recent Tesla crash illustrates the point: the driver was travelling above the speed limit, but the Autopilot's sensors also failed to distinguish a white trailer against a bright sky. So ultimately it wasn't driver error per se, or "malevolent AI", that caused the accident, but human error embedded in the original code.

How do we get it right?

The good news is that, perhaps ironically, AI has the potential to insure us against this risk, although not in the way you might think. AI that talks directly to computer programs, replacing the human element, can test our software for us, and even fix the bugs it finds, producing software that is more secure and far more resistant to attack.
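Early forms of this machine-driven testing already exist. Coverage-guided fuzzing, for example, has one program invent millions of test inputs for another with no human in the loop. As an illustration, here is a minimal libFuzzer-style harness for the hypothetical echo_reply function sketched above; built with clang -fsanitize=fuzzer,address, the tooling would surface the memory over-read automatically.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* The buggy function from the earlier sketch, assumed to be linked in. */
unsigned char *echo_reply(const unsigned char *payload,
                          size_t actual_len, size_t claimed_len);

/* libFuzzer entry point: the fuzzer calls this millions of times with
 * generated inputs, and AddressSanitizer flags the over-read the moment
 * a claimed length exceeds the real payload size. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
    if (size < 2)
        return 0;

    /* Interpret the first two bytes as the attacker-controlled length. */
    size_t claimed_len = ((size_t)data[0] << 8) | data[1];

    free(echo_reply(data + 2, size - 2, claimed_len));
    return 0;
}
```

This is still brute force rather than intelligence, but it illustrates the principle: software probing software at a scale no human team could ever match.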

We are definitely not at the point where computers have free will, but we have made amazing strides towards autonomous software that interacts only with other code. Here at DiffBlue we have, in fact, created computers that can write code independently. This is a giant step towards autonomous software production, which could prove to be one of the most important areas of computer science today.

In a future where the Internet of Things is the norm, we need to know that the technology we use is safe and secure. Whilst it may seem that giving machines the ability to make decisions poses an existential threat, in fact it is these developments that will let us keep extending the realm in which we can make decisions, handing over the roles that are becoming too complex for us. In effect, AI could well be our saviour.

This is a guest blog and may not represent the views of Virgin.com. Please see virgin.com/terms for more details. Image from Getty Images.
