Google plans to use machine learning technology to spot terror-related content on its video sharing site.
(CCM) — Google has launched a crackdown on terror-related content on its YouTube video sharing site after conceding that more needs to be done to prevent terrorists from using the site to spread their messages, according to a BBC report.
The company plans to commit more engineering resources to training machine-learning software to identify videos that glorify terrorism and violence, according to the report. It also plans to hire more human reviewers to decide whether videos should be removed in cases where the software cannot make a clear determination.
In addition, the company promised to deal more strictly with videos that violate YouTube's policies and to step up its counter-radicalization initiatives.
Google will also collaborate with Facebook, Microsoft, and Twitter to produce software that other companies can use to monitor content to weed out terror-related material. The move comes just days after Facebook announced that it would be deploying artificial intelligence software to scrutinize what its users post online.
Separately, Google is facing a potential fine of up to $9 billion after the EU accused it of skewing product-search results in favor of its own service.
Image: © Denis-Linine - Shutterstock.com