AI and ethics are no longer a theoretical debate. AI is all around us, in our work and our personal lives – if you Google a question, you use AI; if you use predictive text, you use AI.
The challenge with AI is that it’s only as intelligent as the data we feed it – and if this data is laced with gender, racial or age (...I could go on) bias, then unfortunately this is the lesson AI will learn. This is fixable, but we need to address the root of the issue and develop solutions at the same time.
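To make the point concrete, here is a minimal sketch (with hypothetical, deliberately skewed hiring data – not real figures) of how a purely data-driven model reproduces whatever bias its training data contains:

```python
# Hypothetical historical hiring records: (applicant_group, hired).
# The data is deliberately skewed -- group "A" was hired far more often.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 30 + [("B", False)] * 70

def hire_rate(group, records):
    """A frequency-based 'model': hire probability learned purely from the data."""
    outcomes = [hired for g, hired in records if g == group]
    return sum(outcomes) / len(outcomes)

# The model faithfully reproduces the bias baked into its training data.
print(hire_rate("A", history))  # 0.8
print(hire_rate("B", history))  # 0.3
```

Nothing in the model is prejudiced; the skew comes entirely from the history it was shown – which is exactly why the data, not just the algorithm, has to be scrutinised.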
As a technologist I have spent years trying to solve problems that make human lives easier, whether that means building a robot to get me a Snickers from the vending machine or giving businesses of all sizes access to the most up-to-date tech – regardless of the size of their budget.
AI can solve many of the challenges we face today – and I don’t mean confectionery-based ones – I mean the big humanitarian issues around productivity, health care and the democratisation of education. But, as a technologist, I am also hyper-aware of the blind spots. If we are serious about using AI for good (rather than just serving up clever marketing that gets people like you and me to spend or gamble money we don’t have), then we need to work together to fix this challenge at scale.
We know that businesses of all sizes could make huge productivity gains by embracing AI, but we need industry and government to help clear a way through the ethical issues and move the global conversation forward. At Sage we recently shared a paper titled “Building a Competitive, Ethical AI Economy,” which uncovers the important and unanswered questions surrounding AI.
Core to solving business and humanitarian problems is behaving in a responsible and ethical way. At Sage we have thought long and hard about what this looks like and have published the Ethics of Code. This is a guide for our in-house developers to follow when building AI, and a call to our peers in the tech industry to act and build responsibly for the future. These ethics are:
- AI needs to reflect the diversity of the users it serves
- AI must be held to account – and so must its users
- AI should be rewarded for showing its workings
- AI should level the playing field
- AI will replace, but it must also create
When I was learning about AI as a child and during my early years at university, new and exciting tech was reserved for people with deep domain expertise and very deep pockets. Recent progress has seen costs fall, and the resulting democratisation of this tech gives us as much power in our pockets today as NASA had to launch rockets in the 1960s.
This evolution means we are seeing people with diverse skill sets, outside of engineering, adding as much value to the world of tech as people with PhDs. They are building careers in conversational design and front-end design, and translating this software into tangible benefits that get people excited about the possibilities – without being scared off by the tales of robot apocalypse that Hollywood (…very unhelpfully) keeps promising.
If we focus on the bigger picture and solve the longer-term issues, we can use smart technology to resolve some of the deeper issues affecting humanity today, such as productivity, equality, economic opportunity and health care – simply by playing smart with this new tech. Now that’s exciting.
- This is a guest blog and may not represent the views of Virgin.com. Please see virgin.com/terms for more details.