The Home Office today announced its own AI program designed to stop extremist content. Specifically, the AI will focus on finding, mitigating, and removing Islamic State (IS) propaganda. In a statement, the Home Office said the tool works with 99.99 percent accuracy. In taking this proactive approach, the office also condemned tech giants for "consciously failing" to take enough action to stop extremism spreading online. However, some opponents have criticized the measure, claiming it will lead to false accusations and even wrongful arrests. Even 99.99 percent is not total accuracy, so there will be some false positives.

According to the Home Office, the technology scans video-streaming services, download platforms, and social media in real time. The AI hooks into the upload process and is designed to prevent the content from ever being uploaded. This is the kind of pre-filtering measure the European Commission has been eager to implement across the continent. While critics point to an erosion of freedom of speech, the UK government insists it is doing a job tech companies should have already performed.

Last year the government urged tech companies to decrease the time it takes to detect and remove extremist content. Detection times stood at an average of 36 hours, but the UK wanted them reduced to two hours. As companies failed to meet UK demands, the government moved ahead with its own solution. Using £600,000 in public funds, the Home Office commissioned ASI Data Science to create the AI tool.

The 99.99 percent accuracy figure is undoubtedly impressive, but the margin for error is still potentially huge. For example, roughly 300 hours of video are uploaded to YouTube every minute. Even a 0.01 percent error rate would wrongly flag around two minutes of that footage every minute, which works out to more than 40 hours of misclassified video per day on YouTube alone.
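The base-rate arithmetic behind that margin-for-error claim can be sketched in a few lines. This is a rough illustration using the figures quoted in the article (300 upload-hours per minute, 99.99 percent accuracy), not data published by the Home Office:

```python
# Back-of-the-envelope false-positive estimate for a 99.99%-accurate filter.
# Both inputs are the article's figures, used here purely for illustration.

UPLOAD_HOURS_PER_MINUTE = 300   # YouTube upload volume cited in the article
ACCURACY = 0.9999               # Home Office's claimed accuracy rate

error_rate = 1 - ACCURACY                                  # 0.01% misclassified
flagged_hours_per_minute = UPLOAD_HOURS_PER_MINUTE * error_rate
flagged_hours_per_day = flagged_hours_per_minute * 60 * 24

print(f"Wrongly flagged per minute: {flagged_hours_per_minute * 60:.1f} minutes of video")
print(f"Wrongly flagged per day:    {flagged_hours_per_day:.0f} hours of video")
```

At that scale, even a vanishingly small error rate compounds into dozens of hours of misclassified footage every day, which is the nub of the critics' false-positive objection.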
Handing Off to Tech Companies
It is also worth remembering the AI tool is only designed to flush out IS extremist content. Of course, there are many other variants of extremism and hate, but these will go unchecked. It is clear the Home Office wants to send a message to tech companies: they need to do more.

Home Secretary Amber Rudd, speaking to the BBC, suggested legislation is not out of the question to force tech companies to use the tool. While she said she prefers companies to take the fight on themselves, the threat of government intervention is now clearly stated. Discussing the extremism-blocking tool, Rudd told the BBC:

"It's a very convincing example that you can have the information that you need to make sure that this material doesn't go online in the first place.

"We're not going to rule out taking legislative action if we need to do it, but I remain convinced that the best way to take real action, to have the best outcomes, is to have an industry-led forum like the one we've got. This has to be in conjunction, though, of larger companies working with smaller companies."

"We have to stay ahead. We have to have the right investment. We have to have the right technology. But most of all we have to have industry on our side — with industry on our side, and none of them want their platforms to be the place where terrorists go, with industry on side, acknowledging that, listening to us, engaging with them, we can make sure that we stay ahead of the terrorists and keep people safe," she said.