I don’t take pictures of the moon. Call it naive optimism, boredom at the mundane, or simple past experience, but I know for a fact that a picture of the moon will never, ever be as good as you think it will be.
Oh sure, a professional photographer with loads of experience and the largest telephoto lens in creation can indeed get something worth turning into a background photo for your high resolution display. But for the rest of us, the end result is always pretty much the same: a white dot, perhaps with some blurry spots here and there, in the middle of inky blackness, because the camera phone is incapable of capturing anything else.
So, when someone on Reddit demonstrated that Samsung’s “Scene Optimizer” AI enhancements were adding detail that simply did not exist in the original photos, I immediately…thought it was something put into the news cycle hoping it would take some of the heat off the other, considerably more important piece of tech news that has been in the spotlight this week. Especially because, broadly speaking, that’s what the AI enhancement is supposed to do.
As we’re now very much in the “AI” era, we’re well on our way to making anything and everything have “AI” in the marketing materials in one way or another. It’s nice; it reminds me of a time far in the past when everything was “Something 2000”. But in this particular case, it’s not entirely inaccurate. 99% of moon images taken with a cell phone will be absolute crap. Faced with this reality, and knowing that the subject is going to look pretty much the same no matter where you take the picture, beyond some changes in position and tone, it’s much simpler to train an AI on, say, several hundred thousand high-res shots of the moon and let it go to town once it detects that’s the subject of the image, adding things it knows are there even if the viewfinder didn’t actually capture them.
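To make that concrete, here’s a toy sketch of the idea: plain NumPy, grayscale frames as floats in [0, 1], with every name and threshold invented for illustration, so it has nothing to do with Samsung’s actual pipeline. A crude brightness-blob detector stands in for the trained scene classifier, and a stored `lunar_texture` stands in for whatever detail the model memorized from its training photos:

```python
import numpy as np

def looks_like_the_moon(image: np.ndarray) -> bool:
    """Crude stand-in for a trained scene classifier: a mostly dark
    frame with one small, very bright blob reads as "moon"."""
    bright = image > 0.8
    coverage = bright.mean()
    dark_mean = image[~bright].mean() if (~bright).any() else 1.0
    # A moon shot is almost all black sky plus a small bright disc.
    return 0.001 < coverage < 0.05 and dark_mean < 0.1

def optimize_scene(image: np.ndarray, lunar_texture: np.ndarray) -> np.ndarray:
    """If the frame is classified as a moon shot, blend stored lunar
    detail into the bright disc; otherwise return it untouched."""
    if not looks_like_the_moon(image):
        return image
    enhanced = image.copy()
    mask = image > 0.8
    # "Go to town": overwrite the washed-out disc with detail the
    # camera never captured, modulated by the captured brightness.
    enhanced[mask] = 0.3 * image[mask] + 0.7 * lunar_texture[mask]
    return enhanced

# Usage: a washed-out white square comes back with "craters" it never had.
frame = np.zeros((64, 64))
frame[28:36, 28:36] = 0.95                            # the white dot
texture = np.random.default_rng(0).random((64, 64))   # pretend craters
result = optimize_scene(frame, texture)
```

The point of the sketch is that nothing after the `if` depends on your photo beyond a rough mask and brightness; once the detector fires, the “detail” comes from the texture, not the sensor.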
Nevertheless, the story gained enough traction to make Samsung respond to the allegations, claiming that it still uses the picture you took as the base for the enhancements (one expects even the most optimistic photographer would be able to tell if they were presented with an entirely different picture from the one they took), and reassuring potential customers that it will “refine” the scene optimizing feature to prevent something like this from happening again. Which to me sounds like taking the same piece of software and telling it to do the minimum amount of processing once it detects the moon. The internet has spoken, and it wants authenticity in its pictures of the moon. Authentically washed out, authentically relying on optical and digital zoom, with as little software processing as possible, so that they remain a constant reminder that some things are really, really hard to photograph.
Everyone else can just, you know, google a picture of the moon.