
Research Team Receives Grant to Form AI System to Debunk False COVID Information

Yu-Ru Lin, associate professor in the School of Computing and Information (SCI); Adriana Kovashka, assistant professor in SCI; and Wen-Ting Chung, research assistant professor in the School of Education, have been awarded a RAPID Grant from the National Science Foundation to develop a debunking system for COVID-19-related misinformation.

In response to the COVID-19 pandemic, RAPID Grants have been awarded to research teams to “mobilize the scientific community to better understand and develop measures to respond to the virus.”

“We rely so much on mass media and social media to get information, even more so during the pandemic,” said Lin, the project’s principal investigator, whose research focuses on using data science to understand collective behavior and social movements. “The mission of this project is to reduce the harmful impact of misinformation.”

Using machine learning and data mining, the team will create an AI system that identifies which false information is most influential, who is most affected by it, and how to “debunk” the problematic information automatically on social media. Their debunking system will rely heavily on citizen journalism and crowdsourced images that counter misinformation on Twitter.

“When people are used to consuming the same media sources or discussing news with people strictly in their social circles, they lose out on the opportunity to see alternative information, or other points of view,” said Chung, whose research interests include group bias and the influence of sociocultural factors on learning and motivation. “The system could be a learning device that helps people cultivate a more critical view in discerning the features of problematic information.”

Kovashka, whose expertise is in computer vision and machine learning, added, “What makes this interesting is how it taps into the work of advertisers. It’s been shown that people are most likely to click on something when a post prompts an emotion, in this case fear. Of course, computationally modeling what specific aspect of visual or textual content will evoke an emotion and what kind of behavior it will prompt is challenging, so part of the goal of this proposal is to advance how we computationally analyze persuasion.”

The team expects to complete their project within the year.