When AI Comes Alive, Ethical Concerns Arise

Artificial intelligence has a strong foothold in various industries – optimising logistics, detecting fraud, composing art, conducting research, and providing translations are just some of the many ways AI has made our lives better. Though the good has outweighed the bad, serious questions are being asked about this technology. So which issues and conversations keep AI experts up at night when it comes to the moral, social, and political implications of new technologies?

On Security. Is Artificial Intelligence Truly Safe to Use?

Many appreciate how capable AI can be, but people worry about what happens if it falls into the wrong hands. This applies not only to robots built to replace human soldiers, or to autonomous weapons, but also to AI systems that can cause damage if used maliciously. Future battles will not be fought on the battlefield alone, which is why many organisations and countries around the world are investing in cybersecurity.

Intelligence. How Can Humanity Control a Technology That Is More Intelligent Than We Are?

Humans have sat at the top of the food chain for quite a while now, largely thanks to their ingenuity and intelligence. That intelligence lets humans create tools to benefit themselves and others, and even control animals lower on the food chain.

This raises the question: what if artificial intelligence turns against its master? What if what we see in movies becomes a reality? It is a question worth pondering, because we can’t simply “turn it off” – a sufficiently advanced AI could have predicted that outcome and taken steps to defend itself. What should we do then?

Rights. Does AI Deserve Humane Treatment?

While neuroscientists are still working on unlocking the secrets of conscious experience, we understand more about the basic mechanisms of reward and aversion. We share these mechanisms with even simple animals. In a way, we are building similar mechanisms of reward and aversion in systems of artificial intelligence. For example, reinforcement learning is similar to training a dog: improved performance is reinforced with a virtual reward.
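The dog-training analogy can be sketched in code. Below is a minimal, hypothetical reinforcement-learning toy (the action names, reward values, and learning parameters are all made up for illustration): a “dog” tries tricks at random at first, and the trick that earns a virtual reward gradually accumulates a higher value estimate.

```python
import random

# Hypothetical toy example: reinforcing one behaviour with a virtual reward.
ACTIONS = ["sit", "bark"]
REWARDS = {"sit": 1.0, "bark": 0.0}  # "sit" is the behaviour we reward

def train(episodes=500, alpha=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {a: 0.0 for a in ACTIONS}  # estimated value of each action
    for _ in range(episodes):
        # Epsilon-greedy: mostly repeat the best-known action, sometimes explore.
        if rng.random() < epsilon:
            action = rng.choice(ACTIONS)
        else:
            action = max(q, key=q.get)
        reward = REWARDS[action]
        # Nudge the value estimate toward the observed reward.
        q[action] += alpha * (reward - q[action])
    return q

q = train()
print(q)  # "sit" ends up valued far above "bark"
```

The incremental update (`q[action] += alpha * (reward - q[action])`) is the “reinforcement”: behaviour followed by reward becomes more likely to be chosen again, which is the mechanism the paragraph above compares to training a dog.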

Right now, these systems are fairly superficial, but they are becoming more complex and life-like. Could we consider a system to be suffering when its reward functions give it negative input? Once we consider machines as entities that can perceive, feel and act, it’s not a huge leap to ponder their legal status. Should AI be treated like humans or animals of comparable intelligence? Will we consider the suffering of “feeling” machines?

Bias. Is It Possible for AI to Discriminate?

Deep learning depends heavily on data. So what happens if biased training data is used to teach these systems? Data is not always clean, so we have to take extra precautions to ensure that AI does not learn from our flawed perceptions. Google’s well-publicised debacle with an image-recognition system misidentifying minorities comes to mind as an example of AI discrimination bias.
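To see how a model can inherit bias from its data, consider this deliberately simplified sketch (the group names, outcomes, and counts are invented for illustration): a trivial “classifier” that learns the majority historical outcome for each group will faithfully reproduce whatever skew exists in the records it was trained on.

```python
from collections import Counter

# Hypothetical, made-up historical records: group_b was rarely "hired".
historical_decisions = (
    [("group_a", "hired")] * 90 + [("group_a", "rejected")] * 10 +
    [("group_b", "hired")] * 10 + [("group_b", "rejected")] * 90
)

def train(records):
    outcomes = {}
    for group, decision in records:
        outcomes.setdefault(group, Counter())[decision] += 1
    # The learned "rule" is simply the majority outcome per group,
    # so any imbalance in the data becomes the model's policy.
    return {g: c.most_common(1)[0][0] for g, c in outcomes.items()}

model = train(historical_decisions)
print(model)  # {'group_a': 'hired', 'group_b': 'rejected'}
```

Real deep-learning systems are far more sophisticated, but the failure mode is the same in spirit: a model optimised to fit historical data will encode the historical bias unless the data is audited and corrected.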

Ethics in AI: It’s a Tricky Path

Ethics exist to ensure that what we build in the future benefits humankind. If these questions are left unanswered, the implications down the road could be far grimmer than people realise. Enterprises, organisations, and citizens should keep asking questions, keep working towards building ethical AI, and keep fighting automated bots and malicious attacks, because artificial intelligence is coming whether or not we’re ready.