Artificial Intelligence: Man at the Crossroads
January 29, 2019 | Thales
Estimated reading time: 3 minutes
What is there to say about Artificial Intelligence that hasn't already been said? Is it some evil monster intent on devouring our hard-won freedoms? Or a magic chest with the power to deliver humankind from its lowly insufficiencies? In truth it's neither: AI in our lifetime will be anything and everything we decide to make it.
The battle between man and machine is nothing new. Remember the Luddites in 19th-century England, who destroyed factory machinery for fear that their ancestral crafts would be lost. But the arrival of Artificial Intelligence takes the battle to a whole new level, because this time the fear is that machines will make us humans quite simply irrelevant.
The AI we know today is called weak AI because it is focused on one narrow task. But it works incredibly fast and can analyse unfathomably huge amounts of data. It also has cognitive powers, observing its surroundings and spotting specific details, learning from what it sees and evolving from what it learns. It can make sense of its observations and draw intelligent conclusions. But does that mean AI can replace humans?
It's unlikely. And for a variety of reasons, it really isn't a very good idea.
First, AI can replicate certain human functions, but it cannot fully replicate a person because humans are "multi-taskers" while AI only handles one task at a time. Another essential difference is that AI lacks the sentience, free will and consciousness that define us as human beings.
Second, this weak AI needs data — lots of data — to learn from. So it's entirely dependent on the data that's provided, and ultimately, therefore, on the people who provide it.
That brings us to the question of the veracity and integrity of the data used in the learning process. There have been cases of AI systems producing racist results because the data they learned from was racially biased. And in one experiment, a machine proved incapable of recognising a lion outside its natural habitat. To teach a computer to recognise a lion, you show it millions of pictures of different lions in different poses, and in this experiment it turned out the machine had learned to recognise the habitat, not the lion. So analysis based on AI is clearly far from infallible and, intentionally or not, it may be biased.
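The lion experiment is an instance of what machine-learning researchers call "shortcut learning": when a background feature correlates perfectly with the label in the training data, a model can latch onto that feature instead of the thing it was meant to recognise. Here is a minimal toy sketch of the effect, with entirely invented features and examples, using a one-rule "decision stump" as a stand-in for a real classifier:

```python
def best_stump(samples):
    """Pick the single boolean feature whose value best predicts
    the label on the training set (a one-rule 'decision stump')."""
    features = samples[0][0].keys()

    def accuracy(f):
        return sum(feats[f] == bool(label) for feats, label in samples) / len(samples)

    return max(features, key=accuracy)

# Toy training set: every lion photo happens to be taken on the savannah,
# so 'savannah' predicts the label perfectly, while a genuine lion feature
# like 'mane' does not (lionesses have no mane).
train = [
    ({"mane": True,  "savannah": True},  1),  # male lion, savannah
    ({"mane": False, "savannah": True},  1),  # lioness, savannah
    ({"mane": False, "savannah": False}, 0),  # housecat, living room
    ({"mane": True,  "savannah": False}, 0),  # maned wolf, forest
]

shortcut = best_stump(train)  # the stump keys on the background, not the animal

# A lion photographed in a zoo lacks the shortcut feature, so a model
# built on it would miss the lion, yet flag an empty savannah as one.
zoo_lion = {"mane": True, "savannah": False}
```

The stump is of course a caricature of a deep network, but the failure mode is the same: the training data, not the algorithm, decides what gets learned.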
These weaknesses may not be particularly worrisome in the consumer applications that use AI today. But they could have dramatic consequences in defence, security or transport systems, for example. It would be unrealistic and unconscionable to leave AI to its own devices when the lives and security of millions of people are at stake.
This is why Thales advocates a form of AI that is explainable, verifiable and ethical. This is our overriding concern. As a corollary, another key area of research at Thales is better understanding how humans and machines interact.
Technology is neither good nor bad — it all depends on how people use it. It's not all right to take a knife on board an aircraft, but you can own as many knives as you want if you're a butcher! AI needs to explain to humans how it has reached its conclusions, and what the consequences of its analysis might be. So machines and humans need to interact constantly and in real time — and it must be the human, and only the human, who makes the final decision.