Reported a flaw to OpenAI that lets users peek at others' chat responses. Got an auto-reply on May 29th, radio silence since. Issue remains unpatched :(
Avoided their bug bounty due to permanent NDAs preventing disclosure even after fixes. Following standard 45-day disclosure window—users should avoid sharing sensitive data until this is resolved.
No, definitely not the empty string hallucination bug. These are clearly real user conversations. They start like proper replies to requests, sometimes reference the original question, and appear in different languages.
I ran into the exact same behavior back in 2023. It looked like a clear leak of user conversations, but it turned out to be a bug in the API calls made by the software I was using.
It was the classic "oh no we did caching wrong" bug that many startups bump into. It didn't expose actual conversations though, only their titles: https://openai.com/index/march-20-chatgpt-outage/
In one of the responses, it provided a financial analysis of a little-known company with a non-Latin name, located in a small country. I found this company; it is real, and the numbers in the response are real. When I asked my own ChatGPT to provide a financial report for this company without using web tools, it responded: `Unfortunately, I don’t have specific financial statements for “xxx” for 2021 and 2022 in my training data, and since you’ve asked not to use web search, I can’t pull them live.`
It's not just about sensitive data like passwords, contracts, or IP. It's also about the personal conversations people have with ChatGPT. Some are depressed, some are dealing with bullying, others are trying to figure out how to come out to their parents. For them, this isn't just sensitive, it's life-changing if it gets leaked. It's like Meta leaking their WhatsApp messages.
I really hope they fix this bug and start taking security more seriously. Trust is everything.
Everything is vulnerable. The question is whether this researcher has demonstrated that they discovered and successfully exploited such a vulnerability. What exactly in this post makes you believe that's the case?
After some hemming and hawing, my most cromulent thought is that having a good security posture isn't synonymous with accepting every claim you get from the firehose.
This is going to be subject to the legal discovery process with the usual safeguards to prevent leaks; in particular, the judge will directly supervise the decision of who needs access to these logs, and if someone discloses information derived from them for an improper purpose, there's a very good chance they'll go to jail for contempt of court, which is much more stringent than you can usually expect for data privacy. You can still quite reasonably be against it, but you cannot reasonably call it "plain text logs available for everyone at the company to view".
My understanding is that all Bugcrowd bounties do by default.
You can shame it all you want, but you can also just publish your bugs directly. Nobody has to use the Bugcrowd platform. You don't even have to wait 45 days; I don't buy these "CERT/CC" rules.
The bug bounty world is a funny one. I remember one researcher complaining that their bug was dismissed and fixed after they signed an NDA: no payout, nothing. Another got $100 instead of $5,000 because the company downgraded the severity from high to low. So they ended up with little or no money, and no recognition either. Not sure if these were edge cases, but it does make you wonder how fair the process really is.
If you're dealing with large companies, a good rule of thumb is that the bounty program is incentivized to pay you out. Their internal metrics improve the more they pay; the point is to turn up interesting bugs, and the figure of merit for that is "how much did we have to spend". At a large company, a bounty that isn't paying anything out is a failure.
All bets are off with small random startups that do bug bounties because they think they're supposed to (most companies should not run bounties). But that's not OpenAI. Dave Aitel works at OpenAI. They're not trying to stiff you.
Simultaneous discovery (either with other researchers or, even more often, with internal assessments) is super common. What's more, you're not going to get any corroboration or context for them (sets up a crazy bad incentive with bounty seekers, who litigate bounty results endlessly). When you get a weird and unfair-seeming response to a bounty from a big tech company, for the sake of your own sanity (and because you'll probably be right), just assume someone internal found the bug before you did, and you reported it in the (sometimes long) window during which they were fixing it.
Hi all, I work on security at OpenAI. We have looked into this report and the model response does not contain outputs from any other users nor does it reflect a security vulnerability, compromise, or exploit.
The original report was that submitting an audio message close to (but not quite) 1500 seconds long to the audio transcription API would result in weird, unrelated, off-topic responses that look like they might be replies to someone else’s query. This is not what’s happening. Our API has a bug where if the tokenization of the audio (which is not strictly correlated with the audio length) exceeds a limit, the entire input is truncated and the model effectively receives a blank query. We’re working with our API team to get this fixed and to produce more useful error messages.
When the model receives an empty query, it generates a response by selecting one random token, then another (which is influenced by the first token), and another, and so on until it has completed a reply. It might seem odd that the responses are coherent, but this is a feature of how all LLMs work: each preceding token influences the probability of the next token, so the model generates a response containing words, phrases, code, etc. in a way that appears humanlike but is in fact solely a creation of the model. It’s just that in this case, the output started in a random (but likely) place and the responses were generated without any input. Our text models display the same behavior if you send an empty query, or you can try it yourself by directly sampling an open source model without any inputs.
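To see what "sampling without any input" looks like in practice, here is a minimal sketch using an open-source model via Hugging Face transformers; the model choice and generation settings are illustrative assumptions, not a description of OpenAI's systems:

```python
# Sample from a causal LM with an "empty" query: only the BOS token.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # example open model; any small causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# No user input at all: the prompt is just the beginning-of-sequence token.
input_ids = torch.tensor([[tokenizer.bos_token_id]])

# Each sampled token conditions the next, so the output reads as fluent,
# humanlike text even though nothing was asked.
output = model.generate(
    input_ids,
    do_sample=True,
    temperature=1.0,
    max_new_tokens=100,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Run it a few times and you get plausible-looking replies, code snippets, or mid-conversation fragments, each starting from a random but likely opening token.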
We took a while to respond to this. Our goal is to provide a reasonable response to reports. If you have found a security vulnerability, we encourage you to report it via our bug bounty program: https://bugcrowd.com/engagements/openai.
> If you have found a security vulnerability, we encourage you to report it via our bug bounty program
It seems like reporting bugs/issues via that program forces you to sign a permanent NDA preventing disclosure even after the reported issue has been fixed. I'm guessing the author of this disclosure isn't the only one who avoided it because of the NDA. Is that something you could reconsider? Otherwise you'll probably continue to see people disclosing these things publicly, and as an OpenAI user that sounds like a troublesome approach.
(Note: I also work for OpenAI Security, though I haven’t worked on our bounty program for some time. These are just my thoughts and experiences.)
I believe the author was referring to the standard BugCrowd terms, which as far as I know are themselves fairly common across the various platforms. In my experience we are happy for researchers to publish their work within the normal guidelines you’d expect from a bounty program — it’s something I’ve worked with researchers on without incident.
> The leaked responses show clear signs of being real conversations: they start with contextually appropriate replies, sometimes reference the original user question, appear in various languages, and maintain coherent conversational flow. This pattern is inconsistent with random model hallucinations but matches exactly what you'd expect from misdirected user sessions.
A model like GPT-4o can hallucinate responses that are indistinguishable from real user interactions. This is easy to confirm for yourself: just ask it to make one up.
I’m certainly willing to believe OpenAI leaks real user messages, but this is not proof of that claim.
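For example, a quick sketch with the OpenAI Python SDK; the prompt wording here is just an illustration, not the researcher's setup:

```python
# Ask GPT-4o to fabricate something that looks like a leaked reply.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": (
            "Write a realistic assistant reply to an unseen user question, "
            "as if it were pulled from the middle of someone else's chat. "
            "Pick any topic and language; reference the original question."
        ),
    }],
)
print(resp.choices[0].message.content)
```

The output will reference a question that was never asked, in whatever language and register the model picks, which matches the pattern the disclosure describes.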
GPT-4o's writing style is so specific that I find it hard to believe it could fake a user query.
You can spot anyone using AI writing a mile away. It stopped saying "delve" but started saying stuff like "It's not X–it's Y" and "check out the vibes (string of wacky emoji)" constantly.
LLMs are trained and fine-tuned on real conversations, so resembling a real conversation doesn't really rule out hallucination.
If the story in OP about getting a company's private financial data is true (i.e. the numbers are correct and nonpublic) that could be a smoking gun.
Either way, it's a bad look for OpenAI not to have responded to this. Even if the resolution turns out to be that these are just hallucinations, it should have been investigated and responded to by now if OpenAI actually cares about security.
> I am issuing this limited, non‑technical disclosure:
> No exploit code, proof‑of‑concept, or reproduction steps are included here.
Then why bother? I feel a bit cynical here, but if the goal is to get this fixed, they're not going to care unless it becomes a zero day and is given to the masses, otherwise it's going to quietly be exploitable by the few unsavory groups who know of it and will never be patched. Isn't the whole point of responsible disclosures to give them a time clock to get this situated before actual publication? Forgive me if I'm wrong, I haven't been in that field in a long time.
It adds some pressure, we know now what the bug is about so we can guess which endpoints to poke at, then it's only a matter of time before it leaks. It would be unethical for the researcher to just publish it.
Reminds me of a time I found a serious issue with Mailgun. Messaged them, no reply. Had to spam their Twitter to get a response. Basically you could have stolen tons of API keys from users without their knowledge, and Mailgun never disclosed it.
I could have actually gone to their office in person if I wanted to be pedantic but it actually seemed like a pretty weird office space lol.
I don't think disclosure of reported security issues is really a norm, unless the firm finds evidence the bug was exploited (by someone other than the reporter). It's a good thing to do, but I think the majority of stuff that gets reported everywhere is never disclosed --- with the major and obvious exception of consumer or commercial software that needs to be updated "on prem".
The problem I have with it is that there's no way they could have determined if an API key was stolen or not, even to this day.
Basically, their docs (which seemed auto-generated) pointed to a domain they did not own (verified this). So if you ran any API examples you sent your keys to a 3rd party. I know because I did this. There's no way to know that the domain in the docs is simply wrong.
I tried explaining this to the support people, that I needed to talk with a software engineer but they kept stonewalling. I think it was fixed after 24 hours or so.
> A single misconfiguration can leak thousands of sensitive conversations in seconds. Treating privacy as an afterthought is untenable when the blast radius is this large.
Massive security bug, well spotted. It's like Bank of America showing other people my transactions, or Meta leaking my WhatsApp messages.
This raises some serious questions about security.
I believe it is extremely important to disclose that the 'response leaks' you obtained did not originate from the LLM models themselves, but rather through other insecure systems, i.e. in a more conventional manner.
Just to avoid yet another case of hallucinated outputs getting misinterpreted.
https://jarbon.medium.com/gpt-prompt-bug-94322a96c574
https://snipboard.io/FXOkdK.jpg
I felt like it was a huge deal at the time but it’s surprisingly hard to quickly google it.
OpenAI very well may have a bug, but I'm not clear on this part. How do you know the numbers are real?
I understand you've confirmed that the named company is real, but how do you know the numbers are real?
It's way more than anyone should need to do, but the only way I can see someone verifying this is by contacting the owners of the company.
Accurate financial data?
How do we know?
What does the model not having the data when web search is disabled have to do with the claim that private chats containing the data are being leaked?
???
A lot of AI products straight up have plain text logs available for everyone at the company to view.
Software quality is... minimal nowadays.
Mozilla's program, which has been around longer than most, doesn't. Google and Microsoft don't. Meta and Apple don't.
This is water carrying, intentional or not, for a terrible practice that should be shamed, so that it doesn't become standard.
Right now there is no real proof, unless you can confirm that the data it provided could not have been hallucinated (which may not be feasible).
Also, given the response from OpenAI staff dismissing it, would you mind sharing the PoC?
For real? At least it doesn't match the one on https://keybase.io/requilence
I certainly wouldn't sign an indefinite NDA for a chance to win:
Average payout: $836.36
OpenAI should be grateful; after all, they want all information to be free.