EU Commission’s AI Expert Group is Concerned about the Future of AI

After some time deliberating, the EU Commission’s Artificial Intelligence expert group has released its assessment of AI technology and just how fast it is advancing. According to the group, the ethical risks posed by the rapidly growing technology are “unimaginable.” The most notable risks cited include lethal autonomous systems, individual tracking, and citizen scoring. All of these concerns are, unfortunately, quite valid.

The first of these concerns, lethal autonomous systems, is an obvious one. We’ve seen this risk played out in a dozen different movies portraying artificial intelligence. Sure, we (hopefully) won’t let AI control all of our nuclear weapons à la Skynet, leading to humanity’s assured destruction, but the fact remains that an artificial intelligence in control of any lethal system could decide to eliminate individuals or entire groups of people without any human input at all. And considering that artificial intelligence is unlikely to be imbued with any sense of morality or mercy anytime soon, trusting anyone’s life to its judgment seems like a recipe for disaster.

Tracking individuals is a somewhat more varied concern. Many people in today’s society are already tracked in some form or another, but the means through which artificial intelligence could do so raise particular concerns for the Commission’s group. The group is fairly convinced that biometric data will be used by artificial intelligence in ways people have not consented to, specifically in criminal investigation or identification. During an investigation, an AI could use a person’s biometric data to function, more or less, as a lie detector, or draw on data it has gathered about individuals across society to try to read microexpressions. Granted, this risk doesn’t sound nearly as bad as AI controlling weapons and killing people without human oversight, but people already resent having their data used without permission. Imagining the level to which AI could take such surveillance is frankly quite worrying.

The final concern is ‘citizen scoring,’ a concept brought to mainstream attention by a recent episode of the show Black Mirror. The idea is a simple one: an AI’s job is to score the citizens under its supervision. How, and on what, they would be scored is exactly the ethical issue at hand. Will citizens be scored on how civilly they behave in public? On the kind of merchandise they purchase in stores? Will their overall value as citizens somehow be assessed, judging their worth as members of the country? Will citizens even be told how they are scored? Black Mirror may be a work of fiction, but the prospect of an AI being used to evaluate a person’s worth in these different lights is entirely possible.

Ultimately, artificial intelligence is nowhere near the level it would need to reach to do any of these very scary things. But the EU Commission’s group has made it clear that a firmly ethical approach to developing AI will be needed if we want to avoid serious issues like these.
