A few weeks ago, Twitter, the world's largest microblogging platform, was the scene of a strange incident. A fake image generated with artificial intelligence was shared from a verified account that appeared to belong to Bloomberg. The post claimed there had been an explosion near the US Department of Defense (the Pentagon), and it reportedly triggered a swing of around 500 billion dollars in the US stock markets. Twitter suspended the account once the incident came to light, but by then it was too late.
Now there has been a new development on this front. Twitter has introduced a security measure that allows users to fact-check shared images. A similar system already existed for the text of misleading posts; the scope of that verification system has now been extended to images. For the time being, Twitter says the feature will only apply to posts containing a single photo. Support for videos and multi-image posts will take a while longer, and work on it continues.
Users will verify whether photos and videos are real
The new feature, called "Notes on Media," will let users assess whether a photo or video is genuine. A user who believes an image is fake will be able to attach a note explaining the situation. In this way, ordinary Twitter users will have a chance to recognize that what they are seeing is fabricated.
It has to be said that this new feature matters, because the widespread use of AI-focused tools has let fake images roam the internet freely. Some of you may remember that a few months ago a fake image of Pope Francis exploded onto the agenda: it showed the pope wearing a strikingly stylish puffer coat and a necklace with a cross pendant. Many people believed it before the truth came out. Twitter's new feature should curb the spread of fake images, at least to some extent.