
Video Working Group: Visual Misinformation

By: Mich Donovan

This month’s Duke Video Working Group topic centered on visual misinformation and the work the Duke Reporters’ Lab is doing to address a media landscape where truth is harder and harder to discern. Joel Luther showcased how schemas like ClaimReview can help create a common language for fact-checking and for identifying mistruths in the media. Particularly interesting was how platforms that use machine learning are being developed to provide real-time automated fact-checking. Since politicians repeat themselves so often, AI models can be trained to recognize a statement as it is being said and then display previously cited sources that prove, disprove, or clarify that claim for the viewer.
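To make that concrete, here is a minimal sketch of both ideas: a fact check expressed with the schema.org ClaimReview vocabulary, and a naive matcher that surfaces it when a repeated claim is heard again. The field names come from the published schema.org type, but the sample claim, the URL, the similarity threshold, and the string-matching approach are illustrative assumptions, not the Reporters’ Lab’s actual implementation (which would likely use far more robust semantic matching).

```python
# A minimal sketch, not the Reporters' Lab's actual pipeline.
# ClaimReview fields follow the schema.org/ClaimReview vocabulary;
# the sample data, URL, and fuzzy-match threshold are invented for illustration.
import difflib

# A previously published fact check, expressed as a ClaimReview record.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example.org/fact-checks/1",  # hypothetical URL
    "claimReviewed": "Unemployment is at a fifty-year low",
    "reviewRating": {"@type": "Rating", "alternateName": "Half True"},
    "author": {"@type": "Organization", "name": "Example Fact-Checkers"},
}

# Toy database of claims that have already been checked.
database = [claim_review]

def match_claim(spoken: str, db: list, threshold: float = 0.8):
    """Return the stored fact check whose reviewed claim most resembles
    the spoken statement, if the similarity clears the threshold."""
    best, best_score = None, 0.0
    for review in db:
        score = difflib.SequenceMatcher(
            None, spoken.lower(), review["claimReviewed"].lower()
        ).ratio()
        if score > best_score:
            best, best_score = review, score
    return best if best_score >= threshold else None

# A politician repeats a familiar line; the matcher surfaces the prior ruling.
heard = "unemployment is at a 50-year low"
hit = match_claim(heard, database)
if hit:
    print(f'Previously rated "{hit["reviewRating"]["alternateName"]}": {hit["url"]}')
```

Because the claims and ratings live in structured fields rather than free text, any platform that understands the schema can look them up and display them in real time, which is the point of a common fact-checking language.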

We also discussed the role of deepfakes and the digital manipulation of video. With some basic editing tools, a bad actor can distort an otherwise ordinary video of someone to make them appear drunk or unflattering. With more advanced tools built on machine learning, a bad actor can map a famous person’s face onto almost anyone else’s. While deepfake technology has not yet reached the point of being totally seamless, many universities and institutions are pursuing not only how to create the “perfect deepfake” but also how to detect one. In the meantime, the technology has emboldened people to question the veracity of any kind of video. If any video could be fake, how will we know when something is actually real?
