The 6-hour claim is interesting, but I highly doubt Avelo (or any airline) could handle 100k requests/sec
If we consider that the real majors move about 400k-500k passengers/day, let's be really optimistic and say that each checks their booking 6 times a day for the week before they fly. That's around 250 requests/sec.
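That estimate can be sketched as a quick back-of-envelope calculation (the passenger count, check frequency, and window are the comment's assumptions, not measured figures):

```python
# Back-of-envelope check of the ~250 req/s estimate above. All inputs are
# the comment's assumptions: ~500k passengers/day for a big carrier, each
# checking their booking 6 times/day during the 7 days before the flight.
passengers_per_day = 500_000
checks_per_passenger_per_day = 6
days_before_flight = 7

# At steady state, about 7 days' worth of departing passengers are in the
# "checking their booking" window at any given time.
requests_per_day = passengers_per_day * days_before_flight * checks_per_passenger_per_day
requests_per_sec = requests_per_day / 86_400
print(round(requests_per_sec))  # ~243, i.e. roughly 250 req/s
```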
Anyone know about the consumer-facing tech stacks at airlines these days? Seems unlikely that they'd have databases that would auto-scale 400x...
I doubt their API would handle 100k requests per second. That math was roughly indicative of what the cost of sending 100k requests per second would look like. He did mention that it assumed the target had no rate limiting, whether intentional or just the practical limits of the hardware.
Do we know what GDS Avelo is using? In other GDSes, is the confirmation code always sufficient to fully identify a booking? I was under the impression that record locators could be re-used as long as the passenger surname was different.
The space of all possible record locators is about 2 billion; I can imagine a really big airline moving that many passengers.
Confirmation codes are not sufficient on their own; they cycle through them relatively quickly, so they have to be combined with things like the passenger's family name to actually identify the booking.
They use a service of Sabre, but not the Sabre GDS. It's called Radixx.
Yes, in other GDSes it can be enough to identify a full booking. That's why airlines prefer the ticket or coupon number, since the first three digits are the airline's ticket stock / identifier, followed by fare codes, etc.
Requiring the last name and other info is more or less a security measure, since any PSS can query the airline first for that combination before requiring more info to return a match.
6 alphanumeric, case-insensitive characters only allow for about 2.2 billion unique combinations. I'd have guessed there were more reservations made than that?
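The ~2 billion figure follows directly from the code format (assuming the full 36-character alphabet is usable; real reservation systems may exclude some characters, which would only shrink the space):

```python
# The code space: 6 positions, each drawn from 26 letters + 10 digits
# (case-insensitive), assuming no characters are reserved or excluded.
alphabet = 26 + 10
code_length = 6
combinations = alphabet ** code_length
print(f"{combinations:,}")  # 2,176,782,336, i.e. about 2.2 billion
```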
Not requiring the last name might have allowed a hacker to brute-force the whole list; but it seems that even with a last name, it could expose a lot of PII. Just pair codes with popular last names (Smith, Jones, Nelson, etc.) and it seems like it would spit out a bunch of reservations.
>The Avelo team was responsive, professional, and took the findings seriously throughout the disclosure process. They acknowledged the severity, worked quickly to remediate the issues, and maintained clear communication. This is a model example of how organizations should handle security disclosures.
Sounds like no bug bounty?
It's great if OP is happy with the outcome, but it's so infuriating that companies are allowed to leak everyone's data with zero accountability and rely on the kindness of security researchers to do free work to notify them.
I wish there was a law that assigned a dollar value to different types of PII leaks and fined the organization that amount with some percentage going to the whistleblower. So a security researcher could approach a vendor and say, "Hi! I discovered vulnerabilities in your system that would result in a $500k fine for you. For $400k, I'll disclose it to you privately, or you can turn me down and I'll receive $250k from your fines."
Yeah, as an American, I'm jealous of many aspects of GDPR. I really appreciate you blogging / tooting about experiences protecting your rights under GDPR. I wish we had 1/10th of the consumer privacy protections you have.
How does security research like this work out in practice, in the EU?
I read a lot of vulnerability writeups like this and don't recall seeing any where the author is European and gets a better outcome. Are security researchers actually compensated for this type of work in the EU?
> it's so infuriating that companies are allowed to leak everyone's data with zero accountability and rely on the kindness of security researchers to do free work to notify them.
This is a matter for lawmakers and law enforcement. Campaign for it. Nothing will change otherwise.
Always consider rate limiting if you deploy a public endpoint. Always require authentication to perform resource-consuming and/or privacy leaking requests.
(Requiring authentication makes rate limiting more practical since even a distributed attacker would need many credentials, which they probably don't have).
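As a minimal sketch of the per-key rate limiting suggested above, here is a generic token bucket. This is not any specific framework's API; the rate, capacity, and keying by IP are illustrative choices:

```python
# Minimal per-key token-bucket rate limiter (illustrative, not from any
# specific framework). Each key (e.g. client IP or account) gets a bucket
# that refills at `rate` tokens/sec up to `capacity`; a request spends one.
import time
from collections import defaultdict

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        # key -> (tokens remaining, timestamp of last update)
        self.buckets = defaultdict(lambda: (capacity, time.monotonic()))

    def allow(self, key: str) -> bool:
        tokens, last = self.buckets[key]
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        tokens = min(self.capacity, tokens + (now - last) * self.rate)
        if tokens >= 1.0:
            self.buckets[key] = (tokens - 1.0, now)
            return True
        self.buckets[key] = (tokens, now)
        return False

limiter = TokenBucket(rate=5, capacity=10)  # 5 req/s steady, bursts of 10
allowed = sum(limiter.allow("203.0.113.7") for _ in range(100))
print(allowed)  # only the initial burst (about 10 of 100) gets through
```

Against a brute-force of billions of codes, even a generous per-key limit like this turns a hours-long enumeration into years, which is the point of the comment above.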
> They were responsive, professional, and took the findings seriously, patching the issues promptly.
The "issue" is that they're returning the entire PNR dataset to the front-end in the first place. He doesn't detail how they fixed it, but there's no reason in the world that this entire dataset should be dumped into JavaScript. I got into pretty heated arguments with folks about this at Travelocity, and this shit is exactly why I was so adamant.
This is about a non-rate-limited endpoint that returns ticket data given only a booking code (and not the last name, as is usually required), which makes it feasible to brute-force the entire search space.
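To see why the full search space is bruteforceable, here is the rough enumeration time at a few hypothetical request rates (assuming one request per candidate code and no rate limiting, per the comment above):

```python
# Rough time to try every 6-character booking code at a few hypothetical
# request rates. ~100k req/s is where a "6 hours" figure comes out; a real
# endpoint would likely be far slower or rate limited.
space = 36 ** 6  # ~2.18 billion candidate codes
for rate in (1_000, 10_000, 100_000):  # requests per second (illustrative)
    hours = space / rate / 3600
    print(f"{rate:>7} req/s -> {hours:,.1f} hours")
```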
(unfortunately, I feel like AI was overused in authoring the writeup)
Is it really AI slop if someone leverages AI to improve / transform their novel experiences and ideas into a rendition that they prefer?
I'm not suggesting whether or not the article is AI assisted. I'm wondering if the ease of calling someone's work "AI slop" is a step along the slippery slope towards trivializing this sort of drive-by hostility that can be toxic in a community.
You are right about the toxicity; I will edit my comment.
There's a difference between leveraging AI to proofread or improve parts of their writing and this: I feel like AI was overused here; it gave the whole article that distinctive smell and significantly reduced its information density.
Overuse of bulleted lists, unnecessary sensationalism, sentences like "The requests flew. There was no WAF, no IP blocking, no CAPTCHA." and so on. It reeks of someone pasting some notes into a chat prompt and asking it to spruce them up for publication.
Maybe just try having confidence in yourself. Trust your instincts. I'm not going to impugn my own abilities based on some purported flaw in an abstract amorphous blob called "humanity", whatever that is. A lot of individuals of distinction have many characteristics better than the average; why wouldn't I trust myself more than other people?
Pattern recognition is an ability evolved over many millions of years and best exemplified in the "human" species, by the way, so I basically disagree with your whole premise anyway.
(b.) they practically demonstrate the point: while, yes, AI uses em-dashes, the entire corpus of em-dashes is still largely human, too, so using that as a sole signal is going to have a pretty high false positive rate.
> The space of all possible record locators is about 2 billion; I can imagine a really big airline moving that many passengers.
Or are PNR locators recycled after a while?
> I wish there was a law that assigned a dollar value to different types of PII leaks and fined the organization that amount with some percentage going to the whistleblower.
There is. It is called GDPR.
Plenty of companies have been fined for leaks like this.
Some countries also have whistleblower bounties but, as you might expect, there are some perverse incentives there.
How about fining individual developers with poor coding practices?
> There's a difference between leveraging AI to proofread or improve parts of their writing and this: I feel like AI was overused here; it gave the whole article that distinctive smell and significantly reduced its information density.
"The fallout"
This flaw was critical.
And other vibes. You know it when you see it, though it may be hard to define.
How do you know your perception is accurate? One of humanity's biggest weaknesses is trusting that kind of response.
?
"A stark reminder" is itself a stark reminder of the existence of AI slop. You see the phrase a lot in social media comment spam.
Which really makes me wonder how we ended up training an AI…
https://news.ycombinator.com/item?id=46236514
https://news.ycombinator.com/item?id=46273466