Maajid Nawaz thundered against AI as he highlighted the scary side effects that occur when machine-learning algorithms are trained on what humans post online.
The rant came after a new study found that AI technology can become prejudiced by learning from humans.
He referenced a Microsoft chatbot called Tay, which was given its own Twitter account and allowed to interact with the public last year.
The project, however, quickly took an unexpected turn when the bot became a pro-Hitler troll that peddled conspiracy theories.
The LBC host said: “Microsoft's cyber simulation of a teenage girl's Twitter account went from naive teenage platitudes, as one would expect, to Hitler worship in a few hours.”
The LBC host suggested people were right to think AI is scary
"This is an AI attempting to simulate a teenage girl. It went from naive platitude to Hitler worship, within a few hours,” he continued.
"A software linked the word 'woman' with homemaker, and 'man' with programmer.
"It gets worse. AIs have indulged in Holocaust denial, or indeed have advertised for highly paid jobs that are exclusive to men. AI is indulging in anti-semitism, in sexism, and in racism."
Mr Nawaz added: "If that is what AI is learning from us, human beings, I'm wondering what does that say about us as human beings?"
Researchers conducted a massive study of millions of words online, measuring how closely different terms appeared to one another in the text, an analysis that revealed bias.
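In studies of this kind, "closeness" is typically measured with word embeddings: each word is mapped to a vector, and words that appear in similar contexts end up with similar vectors, so learned associations register as similarity scores. As a minimal sketch of the idea (the toy vectors and scores below are invented purely for illustration, not taken from the study):

```python
import numpy as np

# Toy three-dimensional word vectors, invented for illustration only.
# The actual study used embeddings trained on millions of words of web text.
vectors = {
    "man":        np.array([0.9, 0.1, 0.3]),
    "woman":      np.array([0.1, 0.9, 0.3]),
    "programmer": np.array([0.8, 0.2, 0.4]),
    "homemaker":  np.array([0.2, 0.8, 0.4]),
}

def cosine_similarity(a, b):
    """Closeness of two word vectors: 1.0 means they point the same way."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Words used in similar contexts get similar vectors, so any biased
# associations in the training text surface as higher similarity scores.
print(cosine_similarity(vectors["man"], vectors["programmer"]))    # high
print(cosine_similarity(vectors["woman"], vectors["programmer"]))  # lower
print(cosine_similarity(vectors["woman"], vectors["homemaker"]))   # high
```

With real embeddings trained on online text, this is how associations such as 'woman' with homemaker and 'man' with programmer show up as measurably stronger similarities.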
In a paper about the new study in the journal Science, the researchers wrote: "Our work has implications for AI and machine learning because of the concern that these technologies may perpetuate cultural stereotypes.
"Our findings suggest that if we build an intelligent system that learns enough about the properties of language to be able to understand and produce it, in the process it will also acquire historical cultural associations, some of which can be objectionable.
“Already, popular online translation systems incorporate some of the biases we study. Further concerns may arise as AI is given agency in our society.
“If machine-learning technologies used for, say, résumé screening were to imbibe cultural stereotypes, it may result in prejudiced outcomes.”
Professor Joanna Bryson, one of the authors of the study, said the key finding of the research was not so much about AI as about humans.
She said: “I think the most important thing here is we have understood more about how we are transmitting information, where words come from and one of the ways in which implicit biases are affecting us all."