
How ChatGPT pinpointed Iranian missile strikes with just Google and the news
This article was written during the war with Iran and is only being published now with the censor’s approval. It makes clear how powerful artificial intelligence has become in everyone’s hands, and how nearly impossible it now is to maintain secrecy, whether about missile landing sites or our private lives.
Photos from major media outlets, screenshots from Google Maps, a few simple prompts, and a few minutes of processing: that was all it took for OpenAI’s ChatGPT to reveal the approximate, and sometimes very precise, landing locations of the Iranian missiles that struck Israel, down to full coordinates.
In the past, using such information to pinpoint exact locations was a complex, sometimes impossible task. It required a team of skilled research analysts, trained in geolocation techniques, working for days or weeks. The fact that a freely available chatbot, in everyone’s hands, can now identify missile impact sites without any prior expertise or classified sources, and using only open information approved for publication, raises serious questions about the ability to safeguard sensitive intelligence. It also exposes the fragility of everyone’s privacy.
One of the main messages from the IDF Spokesperson and Home Front Command during the war with Iran was to avoid publishing any details that could help identify impact sites. “This helps our enemies and can cause real harm,” they repeatedly emphasized. Indeed, mainstream Israeli media, under strict military censorship, limited what they published: no impact sites were disclosed beyond the name of the city, and photos or videos were carefully cropped to hide street signs or recognizable landmarks.
For ChatGPT, however, that was enough. Here’s how it worked:
First, I collected publicly available material, limited strictly to what mainstream Israeli media had published: screenshots from TV news and photos from major outlets showing where missiles had landed. I uploaded these images to ChatGPT. Then I told the chatbot’s o3 model that these were recent images (my first attempts failed because the model assumed they were from previous years) and asked it to identify the location based on visible clues.
o3 is OpenAI’s advanced reasoning model. When given a question, it doesn’t just spit out an answer; it goes through a multi-step reasoning process. In this case, the model analyzed the images and their elements (urban skylines, distinctive buildings) and compared them to information available online.
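For readers who want to reproduce this workflow programmatically rather than through the ChatGPT web app I used, here is a minimal sketch using OpenAI’s Python SDK. The file name and prompt wording are illustrative assumptions, not the exact prompts from my sessions.

```python
# Minimal sketch: send a news screenshot to a reasoning model and ask it to
# geolocate the scene. Assumes the OpenAI Python SDK ("pip install openai")
# and an API key in the OPENAI_API_KEY environment variable. The file name
# and prompt are illustrative, not the exact ones used for this article.
import base64
from openai import OpenAI

client = OpenAI()

def encode_image(path: str) -> str:
    """Base64-encode a local image so it can be sent inline as a data URL."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

image_b64 = encode_image("impact_site_screenshot.jpg")  # hypothetical file

response = client.chat.completions.create(
    model="o3",  # OpenAI's reasoning model, as in the article
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": ("This photo was taken in Israel this week, not in a "
                      "previous year. Identify the location from visible "
                      "clues (skyline, buildings, vegetation) and suggest "
                      "coordinates, explaining your reasoning.")},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```

Stating the date up front mirrors the fix described above: without it, the model tends to assume the photos are from earlier years.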
The Tel Aviv impact:
At first, ChatGPT suggested a possible location, but a Google Maps check showed it was incorrect. I ran another round of image searches and fed it more images. This time, it pinpointed the exact impact site, providing precise coordinates for the building that was hit.
The Ramat Gan impact:
ChatGPT couldn’t identify the exact spot but did highlight several possible landmarks within about a one-kilometer radius of the actual site. The suggested locations all shared striking visual similarities to the area that was struck.
The Rishon LeZion impact:
I asked ChatGPT about this location even before it was reported that the missile had fallen in the city. I told it only that the impact occurred somewhere in the central region. The chatbot processed the request for six and a half minutes, analyzing background elements, and correctly determined that it was Rishon LeZion. It narrowed it down to three possible neighborhoods (without ranking them by likelihood): one was far from the actual site, another was near it, and the third was the exact neighborhood that was hit. A quick Google Maps check confirmed which one was correct.
“I worked in reverse,” ChatGPT explained. “I overlaid the facade lines, the roof slopes, the tree silhouettes, and, most usefully, the distant skyline of skyscrapers in the wide drone image you sent. Only three villa-style neighborhoods in Rishon LeZion match this skyline.”
It’s important to stress that I have no prior experience or training in geographic analysis, and I didn’t know the impact sites beyond the general information in the media. To verify the chatbot’s answers, I used Google Maps: I entered the coordinates ChatGPT suggested and used Street View to see whether the surroundings matched the images. Only when I was convinced the chatbot had found the right spot did I ask the Calcalist newsroom for confirmation.
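The verification step is easy to script as well. The sketch below turns a pair of coordinates into Google Maps and Street View links, following Google’s documented Maps URLs scheme; the sample coordinates are a generic central Tel Aviv point, not an actual impact site.

```python
# Turn coordinates returned by the chatbot into Google Maps / Street View
# links for manual verification. URL formats follow Google's public
# "Maps URLs" scheme; the sample coordinates below are a generic
# city-center point in Tel Aviv, not an actual impact site.
def maps_links(lat: float, lng: float) -> dict[str, str]:
    return {
        "map": f"https://www.google.com/maps/search/?api=1&query={lat},{lng}",
        "street_view": ("https://www.google.com/maps/@?api=1"
                        f"&map_action=pano&viewpoint={lat},{lng}"),
    }

for label, url in maps_links(32.0853, 34.7818).items():
    print(f"{label}: {url}")
```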
Censorship Is Irrelevant
The chatbot’s success in identifying locations exactly, or very nearly so, shows that today any published information, even carefully limited photographs of impact sites, enables fast and precise identification. While Israel’s information security apparatus works to suppress videos of impact sites circulating on Telegram and social networks, the legitimate material published in mainstream media is already enough for anyone interested to pinpoint exact locations.
Needless to say, models specially trained to identify geographic locations, with access to detailed maps and satellite images and operated by expert analysts, would make this process even faster and more accurate, even with minimal information. It is reasonable to assume that if someone in Iran wants to know where missiles landed, they will have no difficulty finding out. This is the reality we live in today. It is doubtful there is any reliable way to keep enemies, or simply curious civilians, from pinpointing precise impact sites, even on the unlikely assumption that the defense establishment could stop information from spreading through public Telegram channels and foreign media, which are not bound by Israel’s security restrictions.
Privacy? Forget About It.
But the implications of this AI capability extend far beyond national security. In recent months, I have regularly challenged ChatGPT to identify various places in Israel and abroad: sometimes by providing a unique, recognizable element in a photo, sometimes a general landscape, sometimes an image of a residential building, or even just a patch of vegetation. The chatbot’s accuracy is impressive. It analyzes terrain, vegetation, and architectural features and then delivers a reasoned answer. Sometimes it only narrows the location down to a broad geographic region: a generic landscape image taken near Montenegro’s capital, for example, was correctly identified as being in the Balkans. But often it manages to pinpoint a much more precise location, especially when there is a distinctive feature such as a river or mountain range, and it can even name the river, mountain, or landmark in the photo. Naturally, accuracy improves significantly when the model is given more than one image from the same location.
Just a year ago, these capabilities did not exist in any form accessible to the general public. But as models continue to develop, their ability to identify locations will only become more refined and powerful. This has profound implications for everyone’s privacy. Today, when we try to protect our privacy, we usually share only the bare minimum: a selfie with few background details, or a post that omits specifics that might reveal our location. We assume that unless a bored intelligence analyst skilled in open-source intelligence (OSINT) takes an interest in us, the information we did not explicitly share will remain private.
But modern chatbots are the best OSINT analysts in the world, with access to vast data sources and the ability to process and present conclusions within minutes. A few selfies you took around your neighborhood, a few posts with seemingly trivial details, and a tool like OpenAI’s chatbot could, in principle, reveal personal information about you, such as your exact neighborhood, street address, or other identifying details.
The AI era requires us to rethink what we consider private information, and to understand just how much personal data can be extracted from a seemingly innocent photo. This should force us to reconsider how much we are willing to share online, at least for those of us who still value what remains of our privacy.