
A labelling system could help tell AI images apart and address ethical concerns
The problem is simple: we can no longer be sure whether a photo is real. Photo manipulation tools are so good, so common and so easy to use that a picture's truthfulness is no longer guaranteed.
The situation got trickier with the uptake of generative artificial intelligence (AI). Anyone with an internet connection can cook up just about any image, plausible or fantastical, with photorealistic quality, and present it as real. This affects our ability to discern truth in a world increasingly influenced by images.
I study the ethics of artificial intelligence, including how we use and understand digital images.
Many people ask how we can tell whether an image has been altered, but that is fast becoming too difficult. Instead, I suggest a system in which the creators and users of images openly state what changes they have made. Any similar system would do, but new rules are needed if AI images are to be used ethically – at least by those who want to be trusted, especially the media.
Doing nothing isn’t an option, because what we believe about media affects how much we trust each other and our institutions. There are several ways forward. Clear labelling of photos is one of them.
Deepfakes and fake news

Photo manipulation...