If you work in IT like me, you’ll know what a struggle it can be to explain to your family and friends exactly what you do for a living. More specifically, that your job is so much more than fixing laptops – although this is something that, I must admit, people still ask me to do now and then…
Not being able to explain easily what I do, and the value of it, was frustrating, but this is changing along with the nature of my job. In the past two years I have been heavily involved in Artificial Intelligence (AI) and the Internet of Things (IoT). Now the solutions that I’m developing and the projects that I’m working on have a direct impact on people, communities, and potentially even society more broadly. From flood prevention to waste optimisation and robotic process automation, the application of technology is changing the way people live and work. So now, when I give examples of my work, people can relate to it, or even be amazed by how far technology has come.
Moving past repetitive tasks…
The impact of technology, and particularly the development of AI, on businesses and society is something I spend a lot of time discussing with customers and partners. The advent of cloud computing has enabled us to analyse vast quantities of data with ever more powerful computers, extending the possibilities of what we can do. Recently, there has been a lot of discussion around AI programs that have excelled at a single, narrow task – most prominently, playing games. The AI can be trained to execute a specific task to a level beyond human capability. We see this approach being applied within manufacturing, where robots can be programmed to repeat a particular task with a precision and speed that a human would struggle to match, and can learn from experience to develop more efficient ways of working.
The focus now for companies like DeepMind is to develop a general AI that can be good at more than one task. For DeepMind, this work also initially focused on games and on machines learning independently until they could outperform humans. The hope is that these general AI programs will be able to apply the same approach to multiple real-world problems, reducing the time they need to become effective. The concern, though, is that we will reach a point – sometimes called the Singularity – when we develop an AI that can learn and apply itself to any problem without needing input from human programmers. It is easy to see that this could have huge benefits, but for me, it is also a cause for concern.
Autonomous cars are a good example of this dilemma. We know that autonomous cars could have a huge impact on society. For the elderly or disabled, they could open up much greater freedom and mobility than is currently possible. For governments and councils, they would allow greater efficiency from the infrastructure already in place and help manage the increased demands of a growing population. But they will never be entirely safe, and they also raise a fundamental problem: how do we program an AI with the rules and ethics that we as humans live by?
The most widely known example of this is the runaway car. A car is travelling down a hill; it cannot be stopped, but it can be steered. On one side is a cliff, on the other side is a child, and in the middle of the road are two adults. What should the AI in the car do? Swerve over the cliff and kill you, as you are in the car, but save the child and the adults? Swerve and kill the child, but save you and the adults? Or not move at all, killing the two adults but saving you and the child? What if there were two children rather than one, or the two adults were pregnant women, or there were children in the car? All of these situations would need to be defined in a set of rules. Even then, you might need a setting in the car that allows you, as the occupant, to decide whether it should put more value on your life or on the lives of others, since not everyone’s values are the same.
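To make the point concrete, such a rule set would end up looking like an explicit, auditable cost table. The sketch below is purely illustrative – the option names, casualty counts, and weights are my own assumptions, not any real autonomous-driving policy – but it shows how an occupant-priority setting would directly change the car’s choice.

```python
# Illustrative sketch only: encoding collision-choice rules as an explicit
# cost table. All names, weights, and outcomes here are assumptions made
# up for this example, not a real autonomous-driving policy.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    occupant_deaths: int  # people inside the car
    child_deaths: int     # children outside the car
    adult_deaths: int     # adults outside the car

def cost(option: Option, occupant_priority: float) -> float:
    """Lower cost is preferred. occupant_priority is the hypothetical
    'setting in the car': how heavily the occupant's life is weighted
    relative to others. The 1.5 weighting of children is also assumed."""
    return (occupant_priority * option.occupant_deaths
            + 1.5 * option.child_deaths
            + 1.0 * option.adult_deaths)

def choose(options: list[Option], occupant_priority: float = 1.0) -> Option:
    # Pick the option with the lowest total cost under the current weights.
    return min(options, key=lambda o: cost(o, occupant_priority))

# The three choices from the runaway-car scenario above.
options = [
    Option("swerve_off_cliff", occupant_deaths=1, child_deaths=0, adult_deaths=0),
    Option("swerve_at_child",  occupant_deaths=0, child_deaths=1, adult_deaths=0),
    Option("straight_ahead",   occupant_deaths=0, child_deaths=0, adult_deaths=2),
]

# Neutral weighting sacrifices the occupant; raising occupant_priority
# flips the decision onto someone outside the car.
neutral = choose(options, occupant_priority=1.0)
selfish = choose(options, occupant_priority=3.0)
print(neutral.name, selfish.name)
```

The uncomfortable part is not the code – it is that someone has to pick the numbers, which is exactly the ethical problem the scenario exposes.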
So what’s next?
This, for me, highlights one of the key points about the development of technology in the next few years. Technology will not suddenly take over, nor will AI dominate and compete directly with humans, unless we let it. Stephen Hawking, the world-renowned theoretical physicist, expressed his fear that AI might replace humans altogether. He believed we should still move forward with AI development, but that we also need to be mindful of its very real dangers.
It is up to each business to decide whether AI should replace a human and, if it does, whether the employees affected can be given more rewarding work or will be made redundant. It is up to society and governments to decide whether AI should be applied to urban mobility, and how to mitigate the impact on those who currently work in those sectors. What we can do with technology will have a direct impact on all of us, but I strongly believe that it is humans who will determine how it develops and how it is applied. Perhaps, in trying to replicate how humans behave in AI solutions, we will actually learn a lot more about what makes us human in the first place.