Machine learning tool developed to detect fake news domains when they register

November 12, 2020, 04:57
  • Social media companies and governments need to take a more active role in tackling fake news
  • Techniques observed could be used to target corporates if not tackled

Academics at UCL and other institutions have collaborated to develop a machine learning tool that identifies new domains created to promote false information, so that they can be stopped before the ‘fake news’ can be spread through social media and online channels. “Real-Time Prediction of Online False Information Purveyors and their Characteristics” is a working paper co-authored by Anil R. Doshi (UCL School of Management), Sharat Raghavan (University of California, Berkeley) and William Schmidt (Cornell University).

To counter the proliferation of false information, it is important to move fast, before the creators begin to post and broadcast that information across multiple channels. Anil Doshi and his fellow academics set out to develop an early detection system that highlights the domains most likely to belong to bad actors. Details contained in the domain registration record, for example whether the registering party is kept private, are used to identify the sites.
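
To give a rough sense of the kind of signal described here, the sketch below checks a parsed registration record for a privacy-protected registrant. It is a minimal illustration only, assuming a hypothetical record format, field names and privacy-service list; the paper's actual features are not reproduced here.

```python
# Illustrative only: the field names, privacy-service list and record format
# below are invented for this example; they are not taken from the paper.
PRIVACY_SERVICES = {"whoisguard", "domains by proxy", "privacy protect"}

def registrant_is_hidden(record: dict) -> bool:
    """Return True if a parsed registration record hides who registered the domain."""
    registrant = record.get("registrant_name", "").lower()
    return registrant == "" or any(s in registrant for s in PRIVACY_SERVICES)

# Hypothetical parsed registration record.
example = {"domain": "example-news-site.com", "registrant_name": "WhoisGuard Protected"}
print(registrant_is_hidden(example))  # True
```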

Anil Doshi, Assistant Professor at the UCL School of Management, commented: “Many models that predict false information use the content of articles or behaviours on social media channels to make their predictions. By the time that data is available, it may be too late. These producers are nimble and we need a way to identify them early. By using domain registration data, we can provide an early warning system using data that is arguably difficult for the actors to manipulate. Actors who produce false information tend to prefer remaining hidden and we use that in our model.”

By applying a machine-learning model to domain registration data, the tool was able to correctly identify 92 percent of the false information domains and 96.2 percent of the non-false information domains set up in relation to the 2016 US election before they started operations.
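
As an illustration of how a classifier might be trained on registration-level features of this kind, the sketch below fits a simple model to a toy dataset. Everything in it is an assumption made for demonstration: the feature names and data are invented, and a scikit-learn random forest stands in for whichever model the authors actually used.

```python
# Illustrative sketch: toy data, invented features and a random-forest
# stand-in; the authors' actual model and training data are not shown here.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical registration-level features and labels.
data = pd.DataFrame({
    "private_registration": [1, 0, 1, 0, 1, 0, 1, 0],
    "registration_years":   [1, 5, 1, 10, 1, 3, 2, 7],
    "low_cost_registrar":   [1, 0, 1, 0, 1, 1, 0, 0],
    "is_false_info_domain": [1, 0, 1, 0, 1, 0, 1, 0],  # label
})

X = data.drop(columns="is_false_info_domain")
y = data["is_false_info_domain"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# On real data, accuracy would be reported separately for false-information
# and legitimate domains, as in the figures quoted above.
print(classification_report(y_test, model.predict(X_test), zero_division=0))
```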

Doshi and his co-authors propose that the tool be used to help regulators, platforms and policy makers mount an escalated response: increasing monitoring of suspect domains, sending warnings or imposing sanctions, and ultimately deciding whether the domains should be shut down. The academics behind the research also call on social media companies to invest more effort and money in addressing a problem that is largely facilitated by their platforms.

Doshi continued: “Fake news promoted through social media is common in elections, and it continues to proliferate in spite of the somewhat limited efforts of social media companies and governments to stem the tide and defend against it. Our concern is that this is just the start of the journey. We need to recognise that it is only a matter of time before these tools are redeployed on a more widespread basis to target companies; indeed, there is evidence of this already happening. Social media companies and regulators need to be more engaged in dealing with this very real issue, and corporates need to have a plan in place to quickly identify when they become the target of this type of campaign.”

The research is ongoing, in recognition that the environment is constantly evolving: while the tool works well now, bad actors will respond to it. This underscores the need for ongoing innovation and research in this area.

Media Contact
Company Name: Pressat
Contact Person: Alison Lancaster
Country: United Kingdom
Website: https://pressat.co.uk/
