MIT Researchers Trained a Psychopathic AI on Reddit Content


The AI is named "Norman" after Norman Bates, the disturbed killer in Alfred Hitchcock's Psycho, and was trained using image captions posted on a subreddit dedicated to death and graphic images. Unsurprisingly, the results are spine-chilling.

The Rorschach test is a standard psychological technique that involves showing patients a series of inkblots to gauge their underlying thought patterns.

The researchers then showed Norman the inkblots and compared its answers with those of a standard image-captioning AI trained on conventional data, CNN reported. As expected, the results were disturbing.

In one inkblot test, a standard AI saw "a black and white photo of a red and white umbrella", while Norman saw "man gets electrocuted while attempting to cross busy street". Norman's responses are alarming, to say the least. In 2016 and 2017, the same team worked on two AIs that could generate horror imagery and tell ghost stories.

The underlying method is sometimes called "statistical learning" because a vast amount of data is fed through the system, example by example, to teach it to make predictions.
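To make that idea concrete, here is a minimal, hypothetical sketch in Python: a single-parameter model is nudged toward better predictions as made-up examples are fed through it one at a time. The data, model, and learning rate are all invented for illustration; real systems like Norman use deep neural networks trained on far larger datasets.

```python
import random

random.seed(0)

# Invented data: y is roughly 3*x plus a little noise.
data = [(x, 3.0 * x + random.gauss(0, 0.1)) for x in range(100)]

w = 0.0       # the single learnable parameter
lr = 0.0001   # learning rate (step size)

for epoch in range(5):
    random.shuffle(data)
    for x, y in data:           # feed examples through one at a time
        error = w * x - y       # how far off the current prediction is
        w -= lr * error * x     # gradient step for squared error
    print(f"epoch {epoch}: w = {w:.3f}")  # w drifts toward the true value, 3
```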

Thankfully, the team had a goal behind this mad experiment beyond terrifying humanity with its nightmare creation.

In another inkblot, the standard AI saw "a close-up of a vase with flowers", while Norman saw "a man is shot dead". By creating Norman, the team wanted to pinpoint what is actually responsible when an AI goes off the rails.

In other words, the researchers at MIT trained an AI exclusively on violent and gruesome content from Reddit.

While the Norman project immediately brings to mind the psychopathic robots of the "Terminator" franchise, the researchers say its value lies in showing that machines are not inherently biased, and that the people supplying the data can significantly alter their behaviour. Norman "passed" with flying colours (judged by the experiment's aims), seeing baroque displays of death and destruction where most onlookers would perceive more prosaic scenes.

Norman serves as a reminder that, as its creators put it, "when people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it".
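As a hypothetical illustration of that point, the sketch below trains two copies of the same trivial "captioning" model (a nearest-centroid classifier) on identical inputs but differently labelled captions; every feature and number in it is made up, and the captions are borrowed from the article's examples.

```python
from collections import defaultdict
import math

def train_nearest_centroid(examples):
    """Average the feature vectors seen for each caption."""
    sums = defaultdict(lambda: [0.0, 0.0, 0.0])
    counts = defaultdict(int)
    for features, cap in examples:
        for i, x in enumerate(features):
            sums[cap][i] += x
        counts[cap] += 1
    return {c: [x / counts[c] for x in s] for c, s in sums.items()}

def describe(model, features):
    """Pick the caption whose centroid lies closest to the input."""
    return min(model, key=lambda c: math.dist(model[c], features))

# Toy "inkblot" features: (darkness, symmetry, edge density).
blots = [(0.2, 0.9, 0.3), (0.7, 0.4, 0.8), (0.5, 0.6, 0.5)]

# Same algorithm, same inputs -- only the training captions differ.
standard = train_nearest_centroid(zip(blots, [
    "a vase with flowers", "a red and white umbrella", "a small bird"]))
norman = train_nearest_centroid(zip(blots, [
    "a man is shot dead", "man gets electrocuted", "a fatal accident"]))

new_blot = (0.45, 0.65, 0.5)
print("standard model sees:", describe(standard, new_blot))
print("norman model sees:  ", describe(norman, new_blot))
```

Run the sketch and the two models describe the identical input in very different ways, mirroring the experiment's conclusion: the algorithm never changed, only the data did.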
