It’s being sorted not once per frame, but once per item.
If you have 50 items in the list, then the list gets sorted 50 times. If you have 200 items in the list, the list is sorted 200 times.
This is unnecessary. The obvious alternative is a binary heap… which is what the fix does. Although it would also be obvious to reuse an existing binary heap implementation, rather than inventing your own.
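For readers who haven't opened the issue, here is a hypothetical sketch of the pattern being described. The names (`Task`, `currentQueue`, `drainAsDescribed`) are illustrative, not the actual VS Code source:

```ts
// Hypothetical illustration of the reported pattern -- not the real VS Code code.
interface Task { priority: number; execute(): void; }

function drainAsDescribed(currentQueue: Task[]): void {
  while (currentQueue.length > 0) {
    // One O(n log n) sort per item processed: n items => O(n^2 log n) worst case.
    currentQueue.sort((a, b) => b.priority - a.priority);
    const top = currentQueue.shift()!; // shift() itself is O(n) as well
    top.execute();
  }
}
```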
> It’s being sorted not once per frame, but once per item.
Even if that were the case, sorting a list that's already sorted is basically free. Any reasonable sort implementation (like the built-in one in a JS engine, which is TimSort in V8) detects the presorted run and finishes in roughly linear time instead of doing a full O(n log n) sort.
> The obvious alternative is a binary heap… which is what the fix does.
The overhead of creating a heap structure out of JS objects will dwarf any possible benefit of avoiding a couple of calls to Array.sort().
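A quick way to sanity-check that claim, assuming a TimSort-based engine such as current V8 (illustrative numbers, not a benchmark of VS Code):

```ts
// Rough check of the "already sorted is basically free" claim: count how many
// times the comparator runs for random vs. presorted input. A TimSort-style
// sort detects the existing run and does roughly a single linear pass.
function comparisonsToSort(values: number[]): number {
  let calls = 0;
  values.slice().sort((a, b) => { calls++; return a - b; });
  return calls;
}

const n = 10_000;
const random = Array.from({ length: n }, () => Math.random());
const presorted = random.slice().sort((a, b) => a - b);

console.log('random:    ', comparisonsToSort(random));    // on the order of n * log2(n)
console.log('presorted: ', comparisonsToSort(presorted)); // roughly n - 1
```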
> I don't work in JS-land.. but are Electron apps difficult to do performance profiling on?

No. Browser dev tools are available and make it pretty easy to do performance profiling, get a flamegraph, etc.
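For anyone who does want to check, the User Timing API is enough to get a labelled span onto that flamegraph. A sketch, with made-up helper and call-site names:

```ts
// Hypothetical helper: wrap a suspect region in User Timing marks so it shows
// up, labelled, on the DevTools Performance timeline (in VS Code:
// Help > Toggle Developer Tools, then record in the Performance panel).
function measured<T>(label: string, fn: () => T): T {
  performance.mark(`${label}-start`);
  const result = fn();
  performance.mark(`${label}-end`);
  performance.measure(label, `${label}-start`, `${label}-end`);
  const entry = performance.getEntriesByName(label).pop();
  console.log(`${label}: ${entry?.duration.toFixed(2)} ms`);
  return result;
}

// Usage at the hypothetical call site:
// measured('layout-queue-drain', () => drainLayoutQueue());
```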
Just seems like the reality of things is that the number of extensions or widgets or whatever has remained low enough that this extra sorting isn't actually that punitive in most real-world use cases. As a long-time developer working mainly in VSCode, I notice no difference between performance/snappiness in VSCode compared to JetBrains Rider, which is the main other IDE I have meaningful experience with these days.
Yeah the issue reads as if someone asked Claude Code "find the most serious performance issue in the VSCode rendering loop" and then copied the response directly into GitHub (without profiling or testing anything).
Even if it is a real performance issue, the reasonable fix would be to move the sort call out of the loop - implementing a new data structure in JS is absolutely not the way to fix this.
Adding a new data structure just for this feels like such an AI thing. I've added to our agents.md a rule to prefer using existing libraries and types, otherwise Gemini will just happily generate things like this.
There’s clearly functionality to push more work to the current window’s queue, so I would not be surprised if the data structure needs to be continually kept sorted.
(Somewhere in the pile of VSCode dependencies you’d think there’d be a generic heap data structure though)
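It is a small amount of code either way. A sketch of a generic, comparator-based binary heap (illustrative only, not an existing VS Code utility), which also shows why pushing new work mid-drain wouldn't require re-sorting:

```ts
// Generic binary heap sketch; the comparator decides min- vs. max-ordering.
class BinaryHeap<T> {
  private items: T[] = [];
  constructor(private compare: (a: T, b: T) => number) {}

  get size(): number { return this.items.length; }

  push(item: T): void {           // O(log n), so new work can arrive mid-drain
    this.items.push(item);
    let i = this.items.length - 1;
    while (i > 0) {
      const parent = (i - 1) >> 1;
      if (this.compare(this.items[i], this.items[parent]) >= 0) break;
      [this.items[i], this.items[parent]] = [this.items[parent], this.items[i]];
      i = parent;
    }
  }

  pop(): T | undefined {          // O(log n)
    const top = this.items[0];
    const last = this.items.pop();
    if (this.items.length > 0 && last !== undefined) {
      this.items[0] = last;
      let i = 0;
      for (;;) {
        const l = 2 * i + 1, r = 2 * i + 2;
        let best = i;
        if (l < this.items.length && this.compare(this.items[l], this.items[best]) < 0) best = l;
        if (r < this.items.length && this.compare(this.items[r], this.items[best]) < 0) best = r;
        if (best === i) break;
        [this.items[i], this.items[best]] = [this.items[best], this.items[i]];
        i = best;
      }
    }
    return top;
  }
}

// Usage: highest priority first, even if push() happens while draining.
// const heap = new BinaryHeap<Task>((a, b) => b.priority - a.priority);
```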
Right, and also this would show up in the profiler if it were a time sink — and I'm 100% certain this code has been profiled in the 10 years it's been in the codebase.
also no discussion of measured runtimes for the rendering code. (if it saves ~1.3ms that sounds cool, but how many ms away is that code from actually going over the supposed 16ms frame budget?)
If you work with LLM agents, you will immediately be able to tell this issue is written by one. The time cost of this sort is almost certainly not real, as others have pointed out.
I’ve had agents find similar “performance bottlenecks” that are indeed BS.
I hate ai sometimes — an AI generated pull request (really some rando found a way of shaving 12% off the run loop?) responded to by an ai comment bot:
> This feature request is now a candidate for our backlog. The community has 60 days to upvote the issue. If it receives 20 upvotes we will move it to our backlog. If not, we will close it. To learn more about how we handle feature requests, please see our documentation.
I would have expected V8 sort() to be optimized for runs of presorted input, like other implementations nowadays. So O(n²) total seems more likely than O(n² log n). Not that it matters much.
But then again, probably AI slop with "performance gain" numbers taken out of thin air. Who knows if the numbers 50 and 1-2ms are based on fantasy novels or not.
Like when I used Claude to build a door video intercom system, and first asked it to create a plan. It inserted how many weeks each milestone would take, and it was an order of magnitude off. But I guess milestone documents have time estimates, so that's how it's supposed to look, information accuracy be damned.
I'm confused: Does top.execute() modify currentQueue in some way, like pushing new elements to it?
If it doesn't, then why not simply move the sort out of the loop? This is simpler and faster than maintaining a binary heap.
> If it doesn't, then why not simply move the sort out of the loop?
Yup, they should definitely move the sort outside of the loop. shift() is O(N), so the overall complexity would still be O(N²), but they could avoid the shifting by reverse-sorting once outside the loop and then iterating backwards with pop().
One more thing: nowadays sort() functions are usually heavily optimized and recognize already-sorted subsequences. If currentQueue isn't modified during the loop, then the sort() call should run in O(n) after the first iteration, instead of O(n log n). Still worse than not having it inside the loop at all, of course.
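Put together, a minimal sketch of that suggestion, assuming top.execute() does not add new items to the queue (the open question upthread); the names are illustrative:

```ts
interface Task { priority: number; execute(): void; }

// Sort once, lowest priority first, then take items from the end with pop():
// no per-iteration sort, and no O(n) shift() per item.
function drainQueue(currentQueue: Task[]): void {
  currentQueue.sort((a, b) => a.priority - b.priority); // single O(n log n) sort
  while (currentQueue.length > 0) {
    const top = currentQueue.pop()!; // highest remaining priority, O(1)
    top.execute();
  }
}
```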
I see they also contributed a fix to the OnlyFans notification robot. Clearly doing the important work that the internet needs.
Like if Batman turned out to be bad at fighting criminals so had to fight null pointer exceptions instead.
Good thing to find...
> The overhead of creating a heap structure out of JS objects will dwarf any possible benefit of avoiding a couple of calls to Array.sort().

Yes, that's indeed the approach I'd take.
This just seems like an AI slop GitHub issue from beginning to end.
And I'd be very surprised if VS Code performance could be boosted that much by a supposedly trivial fix.
> an AI generated pull request (really some rando found a way of shaving 12% off the run loop?) responded to by an ai comment bot

also it’s an issue, not a PR