After a fake image of the Pentagon on fire went viral recently, Twitter Inc. has decided that the platform needs to arm itself better against such deepfake skullduggery.
The artificial intelligence-generated image was seen, and apparently believed, by enough people to briefly unsettle the stock market. Several news websites reported the incident as real and later apologized once it became clear there had been no explosion at the Pentagon.
The whole debacle was a testament to how realistically AI can fabricate phony events and how such events can, even if only for a short time, rock the world. There has been much talk in recent years about the danger of deepfake technology in the age of information warfare. Recently, an image of the Pope dressed like a rap star went viral. Though harmless, it was so convincing that one can only imagine the chaos this technology could cause now and in the near future.
Twitter announced today that it’s adding a new feature that may help. There’s little chance such images will never appear on the platform again, but Twitter said one way of dealing with deepfakes is to expand its Community Notes tool. Until now, Community Notes has let members of the public, with no input from Twitter, add context to misleading tweets. For regular Twitter users, the feature has proved useful, given how many inflammatory tweets omit important context. Twitter will now extend the feature to images.
“From AI-generated images to manipulated videos, it’s common to come across misleading media,” the company explained. “Today we’re piloting a feature that puts a superpower into contributors’ hands: Notes on Media.”
When users believe an image is potentially misleading, they will now be able to click on the “About the image” tab and write additional information specifically related to the image. That note will then appear below matching images posted across the platform.
“It’s currently intended to err on the side of precision when matching images, which means it likely won’t match every image that looks like a match to you,” the company explained. “We will work to tune this to expand coverage while avoiding erroneous matches.”
It also said that at some point soon, the same feature will cover tweets with multiple images as well as videos. There’s little doubt this button is going to get worn out very quickly.