Nikon and Agence France-Presse (AFP) are collaborating on a practical verification system to ensure image provenance in the field of photojournalism. Recently, I spoke to Nikon about their plans for the feature, as well as how it might help viewers and photographers in the modern age.
Undoubtedly, you’ve heard some variation of the following saying: “Believe none of what you hear, and only half of what you see.” Well, I think we can all agree that in the modern age, even believing half of what we see is getting harder and harder. It used to be much easier. A photographer would capture a moment on film, and there you had it: proof of an event. Over the decades, stories would occasionally emerge about a photographer not having followed the most ethical protocols in obtaining an image (think of a photojournalist posing a supposedly unscripted event), but the image itself was largely accepted as a representation of something that actually happened.
Then came digital. Suddenly, the chemical reactions of film were replaced by an assortment of ones and zeros, and every image created was now easily malleable through software like Photoshop, which could transform a genuine moment into something decidedly ungenuine in a matter of seconds. Journalism, a field founded on reporting the truth, faced a new threat to its integrity. Readers needed to believe that what they were being told was the truth, so photojournalists needed to devise and follow strict guidelines to hold their work accountable to that special bond they have with their readers. Aside from a bit of dodging and burning, and maybe some cropping, altering an image in photojournalism is forbidden. It needs to be. People need to trust what they see with their own eyes if you are going to call it “news.”
But then came the rise of artificial intelligence. Now, not only is it possible to Photoshop an event after the fact to shape the narrative dishonestly, it is also possible to create, whole cloth, photorealistic depictions of events that never occurred, all from the comfort of one’s own home and without needing to own a camera in the first place. The very concept of truth faces more obstacles than ever. I think about this especially now, in an election year in my own country. No matter what side of the political divide you find yourself on, I’m pretty sure we can all agree that the truth should matter. Facts should matter. So we need journalists more than ever to produce verified information so that we can find a way forward through a world full of mirages.
To help address this issue, Nikon collaborated with AFP, a news agency based in Paris, to develop a practical verification system for use in the field of photojournalism. The goal is to protect against the falsification and unauthorized use of imagery and to streamline the fact-checking process. So, how exactly would this work?
To get some answers, I reached out to Mr. Hiroshi Yamagata, Assistant Department Manager of the Software Development Department in the Development Sector, and Ms. Mayumi Nakayama, Assistant Department Manager of the UX Planning Department at Nikon, to chat about the collaboration. Mr. Yamagata joined Nikon in March of 2011. Since then, he has worked on in-house camera development and has been responsible for product development, primarily software development. He is currently in charge of software development for connectivity and security functions. When she first joined Nikon, Ms. Nakayama was a software engineer working on the development of digital photography and the beginning of firmware development for a 35mm film direct color transmitter. After a brief stint away, she returned to Nikon and engaged in PC application development. She now works in the planning department and is in charge of planning work to improve the authenticity of images.
I started by asking about Nikon and AFP’s overall objective for the project. In a digital world, where one can alter an image drastically in just a matter of seconds, what are the unique challenges they see professional photographers facing that can be addressed with the image provenance function? They responded that “there are many challenges related to images, but regarding photographers, we believe that the issue lies in the unauthorized use of images such as plagiarism and copyright infringement. In addition to that, for news agencies that particularly prioritize the reliability of images, there are challenges related to the burden of time, cost, and effort in the fact-checking process.”

And, in practical terms, how would they go about the process of image provenance? If I were a photojournalist in the field, how would I engage with the function? It turns out the function would be built into the camera to ensure that the verification process begins at the source: the initial capture. “With an image provenance function based on the C2PA specification, metadata, which can be detected when it is tampered with, is added to the image file at the time of capture. In the metadata, data in a tamper-proof format is recorded to prove that it was generated with a Nikon camera (see the image below). Photojournalists either send the captured images directly to the news agency or edit them using C2PA-compliant tools for cropping and adjustments such as brightness before sending them to the agency. The record of image adjustments made by the photojournalists is added to the image file as provenance information. The news agency can verify the provenance of the received image data and determine if it is a trustworthy image.”

Since not every photographer is a photojournalist, the image verification system would be optional and could be turned off by other users. But for journalists who would welcome the verification, it would be as simple as activating the function in the camera’s menu system.
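To make the “tamper-evident metadata” idea concrete, here is a minimal sketch in Python. This is not Nikon’s implementation or the actual C2PA format; it simply reduces the core idea, a hash of the image data recorded at the moment of capture, to a plain SHA-256 comparison, with a hypothetical device string.

```python
import hashlib

def capture(image_bytes: bytes) -> dict:
    """Toy stand-in for the in-camera step: record a hash of the
    image data in a provenance manifest at the moment of capture."""
    return {
        "claim_generator": "Nikon camera",  # hypothetical device string
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
    }

def verify(image_bytes: bytes, manifest: dict) -> bool:
    """Any later change to the image data breaks the recorded hash."""
    return hashlib.sha256(image_bytes).hexdigest() == manifest["content_hash"]

photo = b"...raw image bytes..."
manifest = capture(photo)
print(verify(photo, manifest))         # True: untouched image checks out
print(verify(photo + b"x", manifest))  # False: any alteration is detected
```

In the real specification, edits made with C2PA-compliant tools append new, signed entries to the provenance record rather than breaking it, which is how legitimate cropping and brightness adjustments stay distinguishable from tampering.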
Personally, I am not a photojournalist. I am an advertising photographer, and my images are regularly retouched. But there is still value in activating the image provenance function for someone like me. Specifically, my income is based not only on a day rate but also on licensing fees. In the digital age, when many hold the misguided belief that every image they find on the internet is free, photographers face an entirely new challenge in fighting back against copyright infringement and the unsanctioned use of our work. So, I asked the team how the security they are developing could address unauthorized sharing and copyright infringement. For instance, suppose a commercial photographer licenses an image to Person A, but then an unknown Person Z copies the image from the internet and tries to pass it off as their own. Does the image provenance function help to identify or prevent such instances from occurring?
The team responded that “when adding C2PA metadata in a camera, Nikon performs digital signing using its camera-specific private key on the metadata. This means that while a plagiarist may be able to manipulate the image data, they cannot disguise the C2PA metadata, including the digital signature specific to Nikon cameras. In the example you provided, Person A can prove his/her ownership of the image through the C2PA metadata, allowing Person A to claim that the image plagiarized by Person Z is counterfeit.”
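As an illustration of what signing with a camera-specific private key buys you, here is a short sketch using Python’s cryptography library. The key handling is hypothetical (in practice, the private key would be provisioned inside the camera and never leave it), but the sign-and-verify mechanics are the standard ones.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# Hypothetical camera-specific keypair; in a real camera the private
# key never leaves the device.
camera_key = ec.generate_private_key(ec.SECP256R1())
public_key = camera_key.public_key()

metadata = b'{"claim_generator": "Nikon camera", "content_hash": "..."}'
signature = camera_key.sign(metadata, ec.ECDSA(hashes.SHA256()))

# A plagiarist can copy the pixels, but without the private key they
# cannot produce a valid signature over altered metadata.
tampered = metadata.replace(b"Nikon camera", b"Person Z")
try:
    public_key.verify(signature, tampered, ec.ECDSA(hashes.SHA256()))
    print("metadata intact and signed by this camera")
except InvalidSignature:
    print("metadata altered or not signed by this camera")
```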
I then asked about dishonest actors who might try to strip the original metadata, a threat in both photojournalism and commercial imagery. To counter that, the team is developing an invisible electronic watermark that would be merged directly into the image data itself. Therefore, even if a plagiarist manipulates the image and removes the metadata, it can still be detected through the watermark.
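Nikon has not published how its watermark works, and production watermarks are engineered to survive compression and editing. Purely to illustrate the principle of carrying a mark inside the pixels rather than in the metadata, here is a naive least-significant-bit example in Python with NumPy; everything about it is a simplification.

```python
import numpy as np

def embed(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide watermark bits in the least significant bit of each pixel.
    A real, robust watermark would survive resizing and re-encoding;
    this toy shows only where the mark lives: in the image data itself."""
    out = pixels.copy()
    flat = out.ravel()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return out

def extract(pixels: np.ndarray, n: int) -> np.ndarray:
    """Read back the first n watermark bits."""
    return pixels.ravel()[:n] & 1

image = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
mark = np.array([1, 0, 1, 1], dtype=np.uint8)
stamped = embed(image, mark)
# The mark persists even if every byte of EXIF/C2PA metadata is stripped.
print(extract(stamped, mark.size))  # [1 0 1 1]
```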
But what about those at the end of the process, the ones who receive our imagery, such as the press agencies or the person viewing our work in a news story? How does this make it easier for news organizations to validate the authenticity of an image? How do we see this trickling down so that end users, the consumers of the content, are also able to identify the source and validity of the image? Will there be some kind of blue-checkmark equivalent, metadata, or some other identifier that consumers can use to ensure the image they are looking at is genuine?
On that side of the ledger, news organizations would be able to “define the permissible range of editing for image data, and make decisions based on whether the provenance information within the image meets those conditions. For example, they may establish that resizing, cropping, and adjustments to brightness fall within the acceptable range, and then verify the provenance data from the time of capture.” That would allow them to quickly and reliably check to see that a submitted image meets their journalistic standards.
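That “permissible range” check is easy to picture in code. The sketch below borrows action names from the C2PA vocabulary (c2pa.created, c2pa.resized, c2pa.cropped, c2pa.color_adjustments), but the manifest shape and the policy itself are hypothetical newsroom choices, not a published AFP implementation.

```python
# Hypothetical newsroom policy: which recorded edits are acceptable.
ALLOWED_ACTIONS = {
    "c2pa.created",            # original capture
    "c2pa.resized",
    "c2pa.cropped",
    "c2pa.color_adjustments",  # e.g., brightness
}

def meets_policy(manifest: dict) -> bool:
    """Accept an image only if every edit recorded in its provenance
    data falls within the agency's permissible range."""
    recorded = {entry["action"] for entry in manifest.get("actions", [])}
    return recorded <= ALLOWED_ACTIONS

submitted = {"actions": [{"action": "c2pa.created"},
                         {"action": "c2pa.cropped"}]}
print(meets_policy(submitted))   # True: capture plus an allowed crop

suspicious = {"actions": [{"action": "c2pa.created"},
                          {"action": "c2pa.edited"}]}  # generic edit
print(meets_policy(suspicious))  # False: falls outside the policy
```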
On the side of the consumer, “they can assess the source and validity of an image through indicators such as the display of icons indicating the presence of C2PA data and the display of detailed provenance information. To promote this system, it is necessary for companies which distribute images, including social media platforms, to support and participate in the C2PA ecosystem.”
As the companies continue to develop this security protocol, it will be key for distribution platforms to work with them to ensure end-to-end coverage. For example, platforms should give visitors an easy way to check the C2PA data of the imagery they consume. And in the age of AI-generated imagery, this is going to be even more difficult. The AI imagery ecosystem has largely been built on the backs of photographers’ work, without the consent of the artists themselves, and one of the biggest fights in the industry now is how those artists are to be compensated. I asked if the image provenance system would be able to help identify when an AI-generated image was based on an artist’s previous work, but that currently remains a difficult challenge. In order to establish the connection between the original image and the one generated by an AI system, the creator of the AI system would need to grant the public access to the data used to train it. So “resolving this issue requires collaboration from vendors providing AI tools. Therefore, we believe that building an ecosystem for verifying the authenticity of images across the entire imaging business is necessary.”
This partnership between Nikon and AFP is a definite step in the right direction. We live in an increasingly digital world where it can seem impossible to know what is real and what is fiction. Tools like this, which are designed to help provide clarity to the public and protection for artists, are going to be increasingly necessary as we go forward. The stronger the bond between those who create the images and those who consume them, the stronger our level of trust will be going into the future.
Thank you for writing about this and answering some of the burning questions we have about how the Content Authenticity Initiative would function for end users and agencies. This is so incredibly important, and we need it ASAP. I cannot wait until Nikon activates this ability in our cameras. It's hyper important for the work of journalists and documentary photographers like myself, especially at a time when the public needs to see the gargantuan efforts journalistic platforms put into maintaining journalistic integrity all the way to publication.
The other side of this, however, is, "what about the journalist or whistleblower who needs to preserve their anonymity?"
Good point. I believe in cases like that, you should be able to turn the function off in the camera menu. However, that would be an issue journalists and outlets would need to navigate.