Essay
The Week Photography Contests Lost the Plot
Two competitions caught with AI images in seven days. But the real problem isn't the cheating. It's that we still don't have a framework for telling the difference between a cheat and a phone's auto-upscaler.
On Sunday, Tokina pulled the overall winner of their 2025 monthly photo contest after Reddit users spotted a SynthID watermark on the image. On Tuesday, Hasselblad Masters 2026, one of the most prestigious competitions in photography, was accused of shortlisting an AI-generated image in their Street category. A garbled Coca-Cola bottle label gave it away. 108,000 entries, 70 finalists, and nobody at Hasselblad caught it before the public did.
Two contests. One week. Both caught by Reddit, not by judges.
The internet's reaction was predictable: outrage at the entrants, mockery of the judges, demands for AI bans. And I understand the frustration. But having spent the week reading the coverage, the comments, and the technical breakdowns, I think the conversation is stuck in the wrong place.
The Coca-Cola Bottle Problem
The Hasselblad image is the more interesting case, and not for the reasons most people are discussing.
The garbled text on the bottle label is being treated as a smoking gun for generative AI. And it might be. But garbled text on a label is also a known artefact of upscaling algorithms. If you run an image through Topaz Gigapixel, or if your phone's computational pipeline applies an aggressive upscale, fine text is one of the first things to break. The AI isn't generating a fake bottle. It's reconstructing pixels it doesn't have enough data for, and text is the hardest thing to reconstruct accurately.
Does that mean the image is innocent? No. Does it mean the image is definitely AI-generated? Also no. And that ambiguity is the actual problem.
We don't have a reliable way to tell the difference between "a photographer took this on their phone and the phone's built-in processing upscaled and sharpened it" and "a person typed a prompt and generated this from nothing." Both can produce artefacts that look identical to a casual observer. Both can trigger a SynthID watermark. Both live on a spectrum that the photography industry has not defined, categorised, or agreed on.
The Tokina case is even murkier. The winning image had a SynthID watermark, which means it was either fully generated by Google's tools or it was processed through one of Google's editing features like Magic Editor. A Reddit user found video from the photographer's YouTube channel that appears to show the same scene. If the photographer was there, shot the scene, and then replaced the sky using Google's tools, is that AI fraud? Or is it the same sky replacement that thousands of photographers do in Photoshop every day, just using a different tool that happens to leave a watermark?
I don't know. And neither does anyone else right now. That's the problem.
The Spectrum Nobody Wants to Define
Here's what I think the photography industry needs to confront honestly. There is no clean line between "real photograph" and "AI-generated image." There is a spectrum, and pretending otherwise is making these controversies worse, not better.
On one end, you have the raw capture. Sensor data, straight out of camera, no adjustments. The purest form of a photograph.
Then you have the standard edits that every working photographer makes. Exposure correction, white balance, cropping, lens corrections. Nobody argues these aren't photography.
Then things start getting grey. Healing and cloning out distractions. Frequency separation for skin retouching. Focus stacking. Sky replacement in Photoshop. Panoramic stitching. Content-aware fill. These are all manipulations. Some of them add pixels that weren't in the original capture. The industry has broadly accepted them as part of the craft, but they're all on the spectrum.
Then you move further along. AI-powered noise reduction. AI upscaling. AI masking. AI sky replacement. AI object removal. These are the tools that most modern photographers use daily, often without even thinking about it. Lightroom's latest update added background AI processing for bulk workflows, and Adobe is building AI deeper into every tool we use. Are photographers who use these features cheating? Obviously not. But several of those tools are generative by any reasonable definition: they invent pixels the sensor never recorded.
And at the far end, you have fully AI-generated images. No camera, no sensor, no photographer present. A prompt and an algorithm.
The problem with photo contests right now is that they're trying to draw a binary line (real vs AI) across a spectrum that has at least five gradations. And every time they get it wrong, which is often, it damages the credibility of the entire competition system and, worse, casts suspicion on the 69 other finalists in the Hasselblad Masters who did nothing wrong.
What Actually Needs to Happen
I think the answer is straightforward, even if the implementation is hard: RAW file auditing and transparent categorisation.
Every serious photo competition should require RAW file submission for finalists. Not all 108,000 entries. Just the shortlist. The RAW file is the closest thing photography has to a chain of custody. It contains the original sensor data, the camera serial number, the lens information, the timestamp and, if location tagging was on, the GPS coordinates. It's not foolproof, but it's the baseline evidence that a camera was involved.
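To make that concrete, here's a rough sketch of what a first-pass audit could look like. I'm assuming the Python exifread library and a TIFF-based RAW format; a real competition would use proper forensic tooling, and the exact tag names vary by manufacturer, so treat this as illustrative rather than a working verdict machine.

```python
# A minimal sketch of a chain-of-custody check on a finalist's RAW file.
# Assumes the "exifread" library and a TIFF-based RAW format (NEF, CR2, ARW).
# Tag names vary by manufacturer, so treat these keys as illustrative.
import json
import sys

import exifread

def audit_raw(path: str) -> dict:
    """Extract the basic provenance fields a judge would want to see."""
    with open(path, "rb") as f:
        tags = exifread.process_file(f, details=False)

    def tag(name: str):
        value = tags.get(name)
        return str(value) if value is not None else None

    return {
        "camera_model":  tag("Image Model"),
        "serial_number": tag("EXIF BodySerialNumber"),  # not every maker fills this in
        "lens":          tag("EXIF LensModel"),
        "captured_at":   tag("EXIF DateTimeOriginal"),
        "gps_latitude":  tag("GPS GPSLatitude"),        # only present if location tagging was on
        "software":      tag("Image Software"),         # editors sometimes leave a trace here
    }

if __name__ == "__main__":
    report = audit_raw(sys.argv[1])
    print(json.dumps(report, indent=2))
    missing = [field for field, value in report.items() if value is None]
    if missing:
        print(f"Flag for review: missing {', '.join(missing)}")
```

None of this proves a photograph is honest. But a shortlisted entrant who can't produce a file that passes even this check has a question to answer.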
Beyond that, competitions need to stop pretending there are only two categories. The spectrum is real. Instead of "photography" and "not photography," define the categories honestly. Straight capture. Edited capture. Composite or digitally enhanced. AI-assisted. AI-generated. Let photographers enter in the category that matches what they did. Let judges evaluate work within its proper context. Let the audience understand what they're looking at.
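The machinery for this is trivial; the hard part is the will. Here's a hypothetical sketch of what an honest entry declaration could look like, using the category names above. The class and field names are invented for illustration, not drawn from any real competition's rules.

```python
# A hypothetical entry declaration for a contest that names the spectrum
# instead of forcing a binary. Category names follow the list above;
# everything else is invented for illustration.
from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    STRAIGHT_CAPTURE = "Straight capture"                 # global edits only: exposure, white balance, crop
    EDITED_CAPTURE   = "Edited capture"                   # local edits, cloning, stitching, focus stacking
    COMPOSITE        = "Composite or digitally enhanced"  # sky replacement, content-aware fill
    AI_ASSISTED      = "AI-assisted"                      # AI upscaling, denoising, generative removal
    AI_GENERATED     = "AI-generated"                     # no camera involved

@dataclass
class Entry:
    title: str
    declared_category: Category
    raw_available: bool  # finalists must be able to produce the RAW file

entry = Entry("Rush hour, rain", Category.AI_ASSISTED, raw_available=True)
print(f"'{entry.title}' judged as: {entry.declared_category.value}")
```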
This isn't about banning AI. It's about transparency.
The Tension I Should Address
I build AI tools for photographers. I'm not going to pretend that isn't relevant here.
The tools I'm building use AI to help photographers understand and evaluate their existing images. The camera captures the photo. The AI analyses it. At no point does the AI generate, alter, or add anything to the image. The photograph is the photograph. The AI is a lens for looking at it differently.
I'm not against AI-generated imagery. I think it's a legitimate creative medium with its own value and its own audience. But it is a different thing to photography. A photograph is a record of light that actually hit a sensor at a specific moment in time. An AI-generated image is a statistical prediction of what an image might look like. Both can be beautiful. Both can be art. But they are not the same thing, and treating them as interchangeable is dishonest to both.
The fix isn't to ban AI from photography. The fix is to label it. Transparently, consistently, and without shame. A photographer who uses AI upscaling shouldn't have to hide it. A digital artist who generates images from prompts shouldn't have to pretend they used a camera. The deception is the problem, not the technology.
The Rest of the Week
Canon R6 V confirmed for May 13. The under-$3K hybrid shootout with Sony's A7 V is about to get very real.
Nikon ZR firmware added Log3G10 in H.265. If you shoot Nikon for video, this is the update that actually matters. H.265 log recording changes the file size and grading equation significantly.
Blackmagic showed the URSA Cine 12K LF at NAB. Plus DaVinci Resolve 21, which I wrote about two weeks ago. Blackmagic continues to be the most interesting company in imaging right now.
OM System OM-1 shooting the Ethiopian wolf, one of the world's rarest canids. I own an OM-1, so this one caught my eye. Proof that Micro Four Thirds in the right hands is still a wildlife tool.
Kodak is taking back its film distribution. Worth watching. When a brand pulls distribution in-house, it usually means either they're about to invest heavily or they're about to raise prices. Sometimes both.
What I'm Actually Thinking
The AI-in-contests problem isn't going away. It's going to get worse as the tools get better. A year from now, there won't be a garbled Coca-Cola bottle to give the game away. The artefacts will be invisible. SynthID watermarks can be stripped. The only reliable defence is the RAW file and a culture of transparency.
Competitions that don't adapt will lose credibility. Photographers who do honest work will get caught in the crossfire of suspicion. And the conversation will keep circling the same drain until someone builds a framework that acknowledges the spectrum instead of pretending it doesn't exist.
The technology isn't the villain here. The lack of transparency is.
If you want to go deeper on this, Fred Ritchin's The Synthetic Eye: Photography Transformed in the Age of AI is the best thing I read last year. Ritchin is the dean emeritus of the International Center of Photography and he frames the AI transformation of photography without nostalgia or panic. He sees it as both positive and terrifying, which is the only honest position. His Substack is worth following too.
Same time next week.
Alex Kesselaar is a photographer, drone operator, and the person behind Pixelfetch. He shoots for government and infrastructure clients through Kess Media in Sydney.