The latest viral ChatGPT trend is doing ‘reverse location search’ from photos

There’s a somewhat concerning new trend going viral: People are using ChatGPT to figure out the location shown in pictures.

This week, OpenAI released its newest AI models, o3 and o4-mini, both of which can uniquely “reason” through uploaded images. In practice, the models can crop, rotate, and zoom in on photos — even blurry and distorted ones — to thoroughly analyze them.

These image-analyzing capabilities, paired with the models’ ability to search the web, make for a potent location-finding tool. Users on X quickly discovered that o3, in particular, is quite good at deducing cities, landmarks, and even restaurants and bars from subtle visual clues.

In many cases, the models don’t appear to be drawing on “memories” of past ChatGPT conversations, or on EXIF data, the metadata attached to photos that reveals details such as where a photo was taken.
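
That distinction matters because readers can check EXIF for themselves: if a photo carries no GPS tags and a model still pins down the location, the deduction came from visual clues alone. Below is a minimal, illustrative sketch, not from our testing, that reads any embedded GPS tags using the widely available Pillow library; the file name is a placeholder.

```python
# Illustrative sketch: list any GPS EXIF tags embedded in a photo.
# Requires Pillow (pip install Pillow). "photo.jpg" is a placeholder path.
from PIL import Image
from PIL.ExifTags import GPSTAGS

GPS_IFD_POINTER = 0x8825  # standard EXIF tag pointing to the GPS sub-IFD


def gps_metadata(path: str) -> dict:
    """Return the photo's GPS EXIF tags by name, or an empty dict if none."""
    with Image.open(path) as img:
        gps_ifd = img.getexif().get_ifd(GPS_IFD_POINTER)
        return {GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}


if __name__ == "__main__":
    print(gps_metadata("photo.jpg") or "No GPS metadata found.")
```

An empty result means any correct location guess by the model came from the pixels, not the file’s metadata.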

X is filled with examples of users giving ChatGPT restaurant menus, neighborhood snaps, facades, and self-portraits, and instructing o3 to imagine it’s playing “GeoGuessr,” an online game that challenges players to guess locations from Google Street View images.

It’s an obvious potential privacy issue. There’s nothing preventing a bad actor from screenshotting, say, a person’s Instagram Story and using ChatGPT to try to doxx them.

Of course, this could be done even before the launch of o3 and o4-mini. TechCrunch ran a number of photos through o3 and an older model without image-reasoning capabilities, GPT-4o, to compare the models’ location-guessing skills. Surprisingly, GPT-4o arrived at the same correct answer as o3 more often than not, and took less time.

There was at least one instance during our brief testing when o3 found a place GPT-4o couldn’t. Given a picture of a purple, mounted rhino head in a dimly lit bar, o3 correctly answered that it was from a Williamsburg speakeasy rather than, as GPT-4o guessed, a U.K. pub.

That’s not to suggest o3 is flawless in this regard. Several of our tests failed — o3 got stuck in a loop, unable to arrive at an answer it was reasonably confident about, or volunteered a wrong location. Users on X noted, too, that o3 can be pretty far off in its location deductions.

But the trend illustrates some of the emerging risks presented by more capable, so-called reasoning AI models. There appear to be few safeguards in place to prevent this sort of “reverse location lookup” in ChatGPT, and OpenAI, the company behind ChatGPT, doesn’t address the issue in its safety report for o3 and o4-mini.

We’ve reached out to OpenAI for comment and will update this piece if we hear back.

Updated 10:19 p.m. Pacific: Hours after this story was published, an OpenAI spokesperson sent TechCrunch the following statement:

“OpenAI o3 and o4-mini bring visual reasoning to ChatGPT, making it more helpful in areas like accessibility, research, or identifying locations in emergency response. We’ve worked to train our models to refuse requests for private or sensitive information, added safeguards intended to prohibit the model from identifying private individuals in images, and actively monitor for and take action against abuse of our usage policies on privacy.”


