I interned at zed during the summer of 2022, when the editor was pre-alpha. Nathan, Max, and Antonio are great guys who build software with care. I'm happy to see the editor receive the success it deserves, because the team has poured so much world-class engineering work into it.
I worked with Antonio on prototyping the extensions system[0]. In other words, Antonio got to stress test the pair programming collaboration tech while I ran around in a little corner of the zed codebase and asked a billion questions. While working on zed, Antonio taught me how to talk about code and make changes purposefully. I learned that the best solution is the one that shows the reader how it was derived. It was a great summer, as far as summers go!
I'm glad the editor is open source and that people are willing to pay for well-engineered AI integrations; I think originally, before AI had taken off, the business model for zed was something along the lines of a per-seat model for teams that used collaborative features. I still use zed daily and I hope the team can keep working on it for a long time.
[0]: Extensions were originally written in Lua, which didn't have the properties we wanted, so we moved to Wasm, which is fast + sandboxed + cross-language. After I left, it looks like Max and Marshall picked up the work and moved from the original serde+bincode ABI to Wasm interface types, which makes me happy: https://zed.dev/blog/zed-decoded-extensions. I have a blog post draft about the early history of Zed and how extensions with direct access to GPUI and CRDTs could turn Zed from a collaborative code editor into a full-blown collaborative application platform. The post needs a lot of work (and I should probably reach out to the team) before I publish it. And I have finals next week. Sigh. Some day!
I wish they had stayed with the collaborative part a bit longer. Once the AI wave hit, it feels abandoned, with various bugs and hard-to-reproduce issues. I am a full-time zed user who converted from Sublime only for the collaborative features, but by now we don't even use them anymore because they're unreliable (broken connections, sounds, overwriting changes, weird history/undo behavior), so I will probably go back to Sublime again. Note that all of us are sitting on fiber connections, so I don't believe the issues are network-related.
I've been trying to be active, create issues, help in any way I can, but the focus on AI tells me Zed is no longer an editor for me.
Yeah, we plan to revisit the collaboration features; it was painful but we decided we needed to pause work on it while we built out some more highly-requested functionality. We still have big plans for improving team collaboration!
It would be interesting to (optionally) make the AI agent more like an additional collaborative user, sharing the chat between users, allowing collaborative edits to prompts, etc.
Not sure what your budget looks like, but maybe it's time to look for a new developer, if it's feasible? So you don't neglect a feature that's already in production and broken.
It's absolutely remarkable that these folks are writing this from scratch in Rust. That'll be a long-term killer feature for the editor.
Do you think GPL3 will serve as an impediment to their revenue or future venture fundraising? I assume not, since Cursor and Windsurf were forks of MIT-licensed VS Code. And both of them are entirely dependent on Microsoft's goodwill to continue developing VS Code in the open.
Tangentially, do you think this model of "tool" + "curated model aggregator" + "open source" would be useful for other, non-developer fields? Would an AI art tool with sculpting and drawing benefit from being open source? I've talked with VCs that love open developer tools and they hate on the idea of open creative tools for designers, illustrators, filmmakers, and other creatives. I don't quite get it, because Blender and Krita have millions of users. Comfy is kind of in that space, it's just not very user-friendly.
> entirely dependent on Microsoft's goodwill to continue developing VS Code in the open.
The premise of many open source licenses, including MIT, is that the user is _not_ dependent on the developer. No matter what MS does, the latest pulled version of VS Code will remain working and available. MS could license future VS Code versions under more restrictive licenses, however the Cursor devs can continue to use and themselves develop the code they already have.
To be clear, by "the user" I'm referring to the Cursor devs. This is the terminology of many F/OSS licenses.
They are dependent to the extent that they won't put in the work to maintain those extensions at the level Microsoft was pumping resources into them.
In theory everyone can fork Chrome and Android, in practice none of the forks can keep up with Google's resources, unless they are Microsoft or Samsung.
Isaac, that email that you sent to us (long after your internship ended) when Wasmtime first landed support for the WASM Component model was actually very helpful! We were considering going down the path of embedding V8 and doing JS extensions. I'm really glad we ended up going all in on Wasmtime and components; it's an awesome technology.
Yes, Wasm components rock! I'm amazed to see how far you've taken Wasm and run with it. I'm at a new email address now, apologies if I've missed any mail. We should catch up sometime; I'll be in SF this summer, I might also visit a friend in Fort Collins, CO. (Throwing distance from Boulder :P)
Hey Isaac! I was intrigued by the way Zed added extensions, and I think it turned out great! I ended up taking that pattern of WASM, WIT, and Rust traits to add interactive hot reloading in a few projects. It feels like it has a strong future in gamedev where you could load and execute untrusted user mods without having all your players getting hacked left and right.
Thank you Brian! I miss tonari, I hope you're well. Game mods seem like a great fit for Wasm! I'm excited about Wasm GC, etc., because it means you can use e.g. a lightweight scripting language like Lua to sketch a mod, with the option of using something heavier if you need that performance.
Hey! I was reading your extensions code a lot. The backwards compatibility is done in a smart way: several layers of WIT, and the editor chooses which one to use based on the wasm headers.
I learned something from that code, cool stuff!
One question: how do you handle cutting a new breaking change in wit? Does it take a lot of time to deal with all the boilerplate when you copy things around?
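My mental model of what your loader does, as a toy sketch (every name here is invented for illustration; I haven't checked this against the actual extension host):

```rust
// Toy sketch of dispatching an extension to the right WIT ABI version.
// All names are made up; this is not Zed's actual code.

#[derive(Debug, PartialEq)]
enum Abi {
    V1, // old interface
    V2, // after a breaking WIT change
}

/// Pretend we read a version string out of a custom section in the
/// wasm binary's header; a breaking WIT change bumps the major
/// version, and the host keeps one adapter per major it still supports.
fn select_abi(declared_version: &str) -> Option<Abi> {
    match declared_version.split('.').next()? {
        "1" => Some(Abi::V1),
        "2" => Some(Abi::V2),
        _ => None, // too new (or garbage): refuse to load
    }
}

fn main() {
    assert_eq!(select_abi("1.4.0"), Some(Abi::V1));
    assert_eq!(select_abi("2.0.1"), Some(Abi::V2));
    assert_eq!(select_abi("3.0.0"), None);
}
```

If that's roughly right, the boilerplate cost of a breaking change would be one copied adapter module per supported major version, which is mechanical but tedious, hence my question.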
> The entire Zed code editor is open source under GPL version 3, and scratch-built in Rust all the way down to handcrafted GPU shaders and OS graphics API calls.
When I saw this, I immediately wondered what strange rendering bugs Zed might run into. This was before reading your comment.
In my opinion, this type of graphics work is not the core functionality of a text editor, and the same problems have already been solved in libraries. There is no reason to reinvent that wheel... or if there is, then please mention why.
Thank you, I bring this up in every Zed thread on the internet, hopefully the devs will eventually fix it. Until they do, Zed is simply unusable on regular-DPI displays, at least in light mode. See these screenshots:
Zed defaults to a font weight that's a little thin for my taste; increase it and it will probably solve your issue. I don't see anything really wrong with the first screenshot; it might just be a matter of what you are used to.
I think it's a combination of using a Zed theme with insufficiently high text contrast, missing subpixel font rendering in Zed, and possibly more gamma correction and less stem darkening than you're used to.
Not sure if it's related, but I've built Zed from source to try on Windows (I haven't tried it on other platforms), and it sadly does not look good; it's also quite a bit "uncrisp" or something - I don't really have the words to describe it.
Does your monitor have a nonstandard rgb pattern? If zed is trying to do its own subpixel rendering then getting the pattern wrong is going to mess up your results.
shrink the zed window by one pixel horizontally and one pixel vertically. there's a video on that issue page which shows resizing making the font go in and out of focus, and that tells me that there's something dividing the window height and width by 2 and starting the font rendering there. if you divide by 2 and you get .5, you'll see the blurriness. if you make the window 1 pixel wider you won't get x.5 anymore, you'll get a whole number.
try it and see. i bet that helps/fixes at least some of you suffering from this.
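for the curious, the arithmetic behind my guess, as a toy sketch (the numbers are made up; the point is the parity):

```rust
// Sketch of the half-pixel hypothesis: centering a window whose size
// has odd parity relative to the screen yields an x.5 origin, and
// rasterizing glyphs from a fractional origin blurs them.

fn centered_origin(screen: f32, window: f32) -> f32 {
    (screen - window) / 2.0
}

fn is_pixel_aligned(v: f32) -> bool {
    v.fract() == 0.0
}

fn main() {
    // 2560-wide screen, 1001-wide window: origin lands on 779.5
    assert!(!is_pixel_aligned(centered_origin(2560.0, 1001.0)));
    // shrink the window by one pixel and the origin is whole again
    assert!(is_pixel_aligned(centered_origin(2560.0, 1000.0)));
}
```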
If you are using MacOS, unfortunately, your issue is that you are using a 1440p monitor, not an issue with any one program.
Apple has removed support for font rendering methods which make text on non-integer scaled screens look sharper. As a result, if you want to use your screen without blurry text, you have to use 1080p (1x), 4k (2x 1080p), 5k (2x 1440p) or 6k screens (or any other screens where integer scaling looks ok).
To see the difference, try connecting a Windows/Linux machine to your monitor and comparing how text looks compared to the same screen with a MacOS device.
This comment is incorrect. I have tried the editor on both MacOS and Linux, and text looks like crap on both if you're using your screen at its native resolution. The difference is easily visible in screenshots.
native resolution on any monitor should work fine on MacOS.
using pixel fonts on any non-integer multiplier of the native resolution will always result in horrible font rendering, I don't care what OS you're on.
I use MacOS on all kinds of displays as I move throughout the day, some of them are 1x, some are 2x, and some are somewhere in between. using a vector font in Zed looks fine on all of them. It did not look fine when I used a pixel font that I created for myself, but that's how pixel fonts work, not the fault of MacOS.
A 1440p monitor would probably be why the text is blurry: there simply aren't enough pixels to make things smooth without resorting to special hacks to improve low-DPI text rendering, which many apps don't bother with now that more and more displays are HiDPI.
I am amazed people consider 1440p low resolution. My knee-jerk reaction was to assume you were being sarcastic. I use a monitor with roughly that many lines of pixels and have never observed blurry text in the tools I use (and I use fairly small fonts).
Look, you can insist that a 1440p monitor can only show blurry text all you like, but the problem that people are talking about is that the text is even blurrier than that.
I have native 1440p 120Hz on my main screen which is more than 30inches across (ultrawide). I can see pixels if I look close enough, but I do not see any pixels at usual reading distance.
I have used retina displays of various sizes -- but after a while I just set them down to half their resolution usually (i.e. I do not use the 200% scaling from the OS, rather set them to be 1440p (or lower on 13inch laptops)). I have not seen an advantage to retina displays.
Biggest draw for me with 1440p 32" is being the same DPI as a 1080p 24". I like to have one big monitor and then 2 small flank vertical monitors and having them all be the same DPI just makes headaches go away on every operating system I use them with.
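The matching-DPI claim checks out with a bit of arithmetic (assuming 16:9 panels):

```rust
// Pixel density of a 16:9 panel: horizontal pixels divided by the
// physical width, where width = diagonal * 16 / sqrt(16^2 + 9^2).

fn ppi(h_pixels: f64, diagonal_in: f64) -> f64 {
    let width_in = diagonal_in * 16.0 / (16.0f64 * 16.0 + 9.0 * 9.0).sqrt();
    h_pixels / width_in
}

fn main() {
    let a = ppi(2560.0, 32.0); // 1440p at 32"
    let b = ppi(1920.0, 24.0); // 1080p at 24"
    // ~92 PPI in both cases, so UI elements render at the same size
    assert!((a - b).abs() < 0.1);
    // by contrast, 1440p at 27" is noticeably denser (~109 PPI)
    assert!(ppi(2560.0, 27.0) > 100.0);
}
```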
Text on 1440p looks great with full hinting and subpixel rendering. Unfortunately, macOS does neither, so the jump to retina feels more significant than it is.
How big is your screen? At 27", I can clearly see pixels on 1440p. A 4k display with 150% scaling (effectively 1440p) looks much better. Maybe you haven't used a higher resolution? If so, you might not know what you're missing.
Not GP. But I have a 24" 4k (close to retina), the MBA screen and while they're nicer than the 27" 1440p I have, the latter is essentially worthless on macOS. With Linux, it's more than fine. Not super sharp, but quite readable. On macOS, the blurred text is headache inducing.
(cross-posting on both subthreads): I have native 1440p 120Hz on my main screen which is more than 30inches across (ultrawide). I can see pixels if I look close enough, but I do not see any pixels at usual reading distance.
I have used retina displays of various sizes -- but after a while I just set them down to half their resolution usually (i.e. I do not use the 200% scaling from the OS, rather set them to be 1440p (or lower on 13inch laptops)). I have not seen an advantage to retina displays.
I mean, the last time I saw anyone with a 1440p display was back in the early 2010s, so... nowadays most people that I know buy 4k 27"/32" displays at minimum, with 5k displays gaining popularity as their price comes down. Macbooks, for example, come with a very high resolution display, and so do most high-end PC laptops.
1440p is high enough for anything depending on screen size.
If they're running everything on the GPU then their SDF text rendering needs more work to be resolution independent. I'm assuming they use SDFs, or some variant of that.
Really, the screen isn't the issue given that on other editors OP says it is fine.
I was using Zed up until a few months ago. I got fed up with the entire AI panel being an editable area, so sometimes I ended up clobbering it. I switched to Cursor, but now I don't "trust" the editor and its undo stack; I've lost code as a result of it, particularly when you're mid-review of an agentic edit but decide to edit the edit. The undo/redo gets difficult to track. I wish there were some hierarchical tree view of history.
The restore checkpoint/redo is too linear for my lizard brain. Am I wrong to want a tree-based agentic IDE? Why has nobody built it?
Interesting. I actually like the editable format of the chat interface because it allows fixing small stuff on the fly (instead of having to talk about it with the model) and de-cluttering the chat after a few back and forths make it a mess (instead of having to start anew), which makes the context window smaller and less confusing to the model, esp for local ones. Also, the editable form makes more sense to me, and it feels more natural and simple to interact with an LLM assistant with it.
Yes! Editing the whole buffer is a major feature because the more you keep around failed attempts and trash the dumber the model gets (and more expensive).
If you're working on stuff like marketing websites that are well represented in the model dataset then things will just fly, but if you're building something that is more niche it can be super important to tune the context -- in some cases this is the differentiating feature between being able to use AI assistance at all (otherwise the failure rate just goes to 100%).
> I actually like the editable format of the chat interface because it allows fixing small stuff on the fly
Fully agreed. This was the killer feature of Zed (and locally-hosted LLMs). Delete all tokens after the first mistake spotted in generated code. Then correct the mistake and re-run the model. This greatly improved code generation in my experience. I am not sure if cloud-based LLMs even allow modifying assistant output (I would assume not since it becomes a trivial way to bypass safety mechanisms).
The only issue I would imagine is not being able to use prompt caching, which can increase the cost of API calls, but I am not sure prompt caching is even used in such a context in the first place. Otherwise you just send the "history" as JSON; there is nothing mystical about LLM chats, really. If you use an API you can send whatever you want to the completion endpoint.
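To make that concrete, a toy sketch of "the history is just a list you control" (not any particular provider's API; `Msg` and `truncate_at_mistake` are made up for illustration):

```rust
// Sketch: a chat "thread" is an ordered list of role-tagged messages
// that gets re-sent in full with every request, so editing the
// assistant's past output is ordinary list manipulation.

#[derive(Clone, Debug, PartialEq)]
struct Msg {
    role: &'static str, // "user" or "assistant"
    content: String,
}

/// Return the history truncated just before the first assistant
/// message containing a known mistake, so a corrected message can be
/// appended and the whole list re-sent.
fn truncate_at_mistake(history: Vec<Msg>, mistake: &str) -> Vec<Msg> {
    match history
        .iter()
        .position(|m| m.role == "assistant" && m.content.contains(mistake))
    {
        Some(i) => history[..i].to_vec(),
        None => history,
    }
}

fn main() {
    let history = vec![
        Msg { role: "user", content: "write a sort for me".into() },
        Msg { role: "assistant", content: "here is a bubble sort".into() },
        Msg { role: "user", content: "now make it faster".into() },
    ];
    let mut edited = truncate_at_mistake(history, "bubble");
    assert_eq!(edited.len(), 1); // everything from the mistake onward is gone
    // append a hand-corrected assistant turn, then re-send `edited`
    edited.push(Msg { role: "assistant", content: "here is a merge sort".into() });
}
```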
Ah, that's a bummer. You can still add threads as context, but you cannot use slash commands there, so the only way to add them or other stuff to the context is to click buttons with the mouse. It would be nice if at least slash commands worked there.
edit: actually it is still possible to include text threads in there
It actually seems to work for me. I have an active text thread and it was added automatically to my inline prompt in the file. There was this box on the bottom of the inline text box. I think I had to click it the first time to include the context, but the subsequent times it was included by default.
Yeah, It was great because you were in control of where and when the edits happened.
So you could manage the context with great care, then go over to the editor and select specific regions and then "pull in" the changes that were discussed.
I guess it was silly that I was always typing "use the new code" in every inline assist message.
A hotkey to "pull new code" into a selected region would have been sweet.
I don't really want to "set it and forget it" and then come back to some mega diff that is like 30% wrong. Especially right now where it keeps getting stuck and doing nothing for 30m.
omg. "the entire AI panel being an editable area" is the KILLER feature for me!
I have complete control, use my vim keys, switch models at will and life is awesome.
What I don't like in the last update is that they removed the multi-tabs in the assistant. Previously I could have multiple conversations going and switch easily, but now I can only do one thing at a time :(
Haven't tried the assistant2 much, mostly because I'm so comfy with my current setup
Been using cline and their snapshot/rewind/remove context (even out-of-order) features are really shining especially with larger projects and larger features+changes becoming more commonplace with stronger LLMs.
I would recommend you check it out if you've been frustrated by the other options out there - I've been very happy with it. I'm fairly sure you can't have git-like dag trees, nor do I think that would be particularly useful for AI based workflow - you'd have to delegate rebasing and merge conflict resolution to the agent itself... lots of potential for disaster there, at least for now.
You will not catch me using the words "agentic IDE" to describe what I'm doing because its primary purpose isn't to be used by AI any more than the primary purpose of a car is to drive itself.
But yes, what I am doing is creating an IDE where the primary integration surface for humans, scripts, and AIs is not the 2D text buffer, but the embedded tree structure of the code. Zed almost gets there and it's maddening to me that they don't embrace it fully. I think once I show them what the stakes of the game are they have the engineering talent to catch up.
The main reason it hasn't been done is that we're still all basically writing code on paper. All of the most modern tools that people are using, they're still basically just digitizations of punchcard programming. If you dig down through all the layers of abstractions at the very bottom is line and column, that telltale hint of paper's two-dimensionality. And because line and column get baked into every integration surface, the limitations of IDEs are the limitations of paper. When you frame the task of programming as "write a huge amount of text out on paper" it's no wonder that people turn to LLMs to do it.
For the integration layer using the tree as the primary means you get to stop worrying about a valid tree layer blinking into and out of existence constantly, which is conceptually what happens when someone types code syntax in left to right. They put an opening brace in, then later a closing brace. In between a valid tree representation has ceased to exist.
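To make that concrete, even a toy balance check shows how many intermediate keystroke states have no valid tree at all:

```rust
// Sketch: while typing "fn f() {}" left to right, the buffer passes
// through states where no balanced (hence no parseable) tree exists.

fn brackets_balanced(src: &str) -> bool {
    let mut depth: i32 = 0;
    for c in src.chars() {
        match c {
            '{' | '(' => depth += 1,
            '}' | ')' => depth -= 1,
            _ => {}
        }
        if depth < 0 {
            return false; // a close arrived before its open
        }
    }
    depth == 0
}

fn main() {
    assert!(brackets_balanced("fn f() {}"));
    // every prefix between the opening and closing brace is invalid
    assert!(!brackets_balanced("fn f() {"));
    assert!(!brackets_balanced("fn f() { let x = (1;"));
}
```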
Representing undo/redo history as a tree is quite different from representing the code structure as a tree. On the one hand I'm surprised no one seems to care that a response has nothing to do with the question... on the other hand, these AI tooling threads are always full of people talking right past each other and being very excited about it, so I guess it fits.
They certainly can be quite different things and in all current systems I know of the two are unrelated, but in my system they are one and the same.
That's possible because the source of truth for the IDE's state is an immutable concrete syntax tree. It can be immutable without ruining our costs because it has btree amortization built into it. So basically you can always construct a new tree with some changes by reusing most of the nodes from an old tree. A version history would simply be a stack of these tree references.
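A minimal sketch of that idea using `Rc` for structural sharing (illustrative only, not my actual implementation, which amortizes with btrees rather than a plain child vector):

```rust
use std::rc::Rc;

// Immutable tree with structural sharing: "editing" builds a new path
// to the root while reusing untouched subtrees, and version history is
// just a stack of cheap root references.

#[derive(Debug)]
enum Node {
    Leaf(String),
    Branch(Vec<Rc<Node>>),
}

/// Return a new root where child `idx` is replaced; all other children
/// are shared (Rc::clone is a pointer copy, not a deep copy).
fn replace_child(root: &Rc<Node>, idx: usize, new_child: Rc<Node>) -> Rc<Node> {
    match root.as_ref() {
        Node::Branch(children) => {
            let mut next: Vec<Rc<Node>> = children.iter().map(Rc::clone).collect();
            next[idx] = new_child;
            Rc::new(Node::Branch(next))
        }
        Node::Leaf(_) => Rc::clone(root),
    }
}

fn main() {
    let a = Rc::new(Node::Leaf("fn main() {}".into()));
    let b = Rc::new(Node::Leaf("mod tests;".into()));
    let v1 = Rc::new(Node::Branch(vec![Rc::clone(&a), Rc::clone(&b)]));

    let edited = Rc::new(Node::Leaf("fn main() { run() }".into()));
    let v2 = replace_child(&v1, 0, edited);

    // Undo history: a stack of root pointers.
    let history = vec![Rc::clone(&v1), Rc::clone(&v2)];
    assert_eq!(history.len(), 2);

    // The unedited subtree is shared between versions.
    if let (Node::Branch(c1), Node::Branch(c2)) = (v1.as_ref(), v2.as_ref()) {
        assert!(Rc::ptr_eq(&c1[1], &c2[1]));
        assert!(!Rc::ptr_eq(&c1[0], &c2[0]));
    }
}
```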
I'm very interested in this, and completely agree we are still trying to evolve the horse carriage without realizing we can move away from it.
How can I follow up on what you're building? Would you be open to having a chat? I've found your GitHub, but let me know if there's a better way to contact you.
Zed is exactly how software should be made. Granted, I don't agree with all of their UX decisions (i think the AI panel is really bad compared to Cursor's), but good lord is the thing fast. These guys are the real deal. They built a rendering system (GPUI) in Rust before building Zed on top of it, and so it is one of the fastest (if not the fastest) pieces of software that resides on my computer. I can't wait until GPUI becomes a bit more mature/stable so I can build on top of it, because the other Rust GUI libraries/frameworks aren't great.
> I can't wait until GPUI becomes a bit more mature/stable so I can build on top of it
Man, so true. I tried this out a while back and it was pretty miserable to find docs, apis, etc.
IIRC they even practice a lot of bulk reexports and glob imports and so it was super difficult to find where the hell things come from, and thus find docs/source to understand how to use something or achieve something.
Super frustrating because the UI of Zed was so damn good. I wanted to replicate hah.
part of me wants to dedicate time to making something with it and then creating examples/PRs -- but it's too unstable given how fast they're moving for now IMO. if anyone from Zed team can chime in and confirm, that'd be awesome.
i have and it's more of the same (unless i'm missing something). the fact that the entire thing is editable is weird to me. i really think they should just clone cursor's in this one case because they really nailed the UX with the checkpoint restoration
edit: yes i missed something. i see the new feature. hell yeah!
It's the opposite for me. I really liked that the AI panel was a fully featured text editor buffer like any other. The new agent panel makes it too much like "the rest" haha. I guess I'll get used to it over time. The important thing is that we finally have agentic editing, which is extremely powerful ofc.
Ah, that's still the old one - the whole thing is no longer editable in the new one we launched today. (You can still access the old one, but the new one is the default as of today.)
Check out the video in the blog post to see the new one in action!
I really miss the everything is editable panel, it felt like a superpower. There’s a bit of a learning curve, but after it’s amazing and everything else feels limited.
I’ve been using Zed a few months on my fedora laptop (thinkpad x230) and haven’t had any performance issues. Definitely faster than any other graphical editor I’ve used. Perhaps a driver issue would be slowing it down?
You should report an issue with your specs, not just say “other applications don’t have this problem” — especially as a Linux user.
For one, not all applications are GPU accelerated.
Two, their UX may need to be improved for a specific hardware configuration. I have used Zed with good performance on Intel dGPU, AMD dGPU, and Intel iGPU without issue — my guess is a missing dependency?
Meh, it's not worth the trouble. I don't care enough about using Zed to fix their Linux distribution problems or debug something for them. This isn't some volunteer backed FOSS project where they get a free pass or free QA work from me.
What's the point of commenting that it's slow if you don't care about using the program and switched to something else? Also, how is whether the project is volunteer-run relevant? Would you file a support ticket for commercial software you use saying "it's slow" and then when they follow up asking for details about your setup, you say "sorry, you don't get free QA work from me"? Do you really think that would lead to them fixing your performance problem?
The point was contradicting another comment with my own experience, not to putz around with bug reports or troubleshooting.
I don't care about Zed fixing anything - they're Zed's issues, not mine. All I'm saying is that contrary to what someone else said about the software being "fast" I tried it and at startup, it was unusably slow. I'm what you would call a failed conversion.
> Also, how is whether the project is volunteer-run relevant? Would you file a support ticket for commercial software you use saying "it's slow" and then when they follow up asking for details about your setup, you say "sorry, you don't get free QA work from me"
So this is kind of needlessly antagonistic imo - the point between the lines is that FOSS projects run by volunteers get a lot more grace than venture backed companies that go on promotion blitzes talking about their performance.
But you run Linux, with its myriad of software configurations. And if this thread is correct Linux support is already far along, if it runs well on something old like the X230. It is not a realistic expectation for any project to work on your hardware if you are not at least willing to report an issue, or rather: No software will run flawlessly on all hardware always, that's not realistic.
Error message, hardware configuration, done.
From my perspective that is not something you do for zed, but something you do for your distro and hardware.
And ofc, your first comment was fine either way. But the attitude of the latter is just poor.
Once you get a knack for it, you can see that the original comment of "So I guess it's only fast on macos?" already has an attitude, and the rest of the thread comes as no surprise.
How about "I'm getting <1FPS perf on {specs}" instead of the snark.
This. Honestly, judging by Zed's issue tracker, their issue is likely NVIDIA driver inconsistencies, which ironically are due to the closed-source nature of NVIDIA drivers (it's workarounds all the way down, bringing pain to app and driver developers), not Zed (which is indeed FOSS, just not "volunteer" driven).
You’re both being antagonistic. While Zed may be VC backed, they’re providing a world class open source editor experience for free. There are no expectations in either direction. You’re not a special customer paying them to care about Linux. And you also don’t owe them volunteer effort to help resolve some Linux issue you encountered. They failed to convert. You missed out on honestly possibly the best editor out there right now. That’s that.
The antagonistic part is assuming your specific Linux configuration is innately Zed’s issue. It’s possible simply mentioning it to them would lead you quickly and easily to a solution, no free labor needed. It’s possible Zed is prepared to spend their vast VC resources on fixing your setup, even—which seems to be what you expect. Point being there’s a middle ground where you telling Zed “hey it didn't work well for me” gives Zed the chance to resolve any issues on their end in order to properly convert you, if you truly are interested in trying their editor. You don’t need to respond to the suggestion with a lecture on how companies exploit free volunteer labor and anything short of software served up on a silver platter would make you complicit. It’s really a little absurd.
If I had to guess, your system globally or their rendering library specifically is probably stuck on llvmpipe.
You don't need a high-end GPU; Zed runs perfectly fine on embedded graphics. There is no shortage of software configurations on Linux that result in CPU graphics rendering, which is the problem.
There is great enthusiasm for the editor in this thread. A personal anecdote indicating subpar performance on a common developer environment (Linux) is a useful signal that took a few seconds of effort.
Putting together a high quality, actionable bug report is a much higher bar that can often feel like screaming at the clouds.
So, only positive feedback allowed in this thread?
As a Linux user, I am sadly accustomed to some software working in only a just-so configuration. A datapoint that the software is still Mac first development is useful to know. Zed might still be worth trying, but I have to temper my enthusiasm from the headline announcement of, “everything is great”.
Is it even Zed’s fault if your linux system/setup over-eagerly prefers cpu rendered graphics because of old political and religious driver licensing issues?
That's interesting[1], what was slow when you tried it on MacOS?
[1]: people experiencing sluggishness on Linux are almost certainly hit by a bug that makes the rendering fall back to llvmpipe (that is, CPU rendering) instead of Vulkan rendering, but MacOS shouldn't have this kind of problem.
I'm on Debian with i3wm/X11; sometimes it does some stuff that blocks input for a while, so I can't drive the window manager until it's done.
At least it did a month or so ago, and at that time I couldn't figure out a practical use for the LLM-integration either so I kind of just went back to dumb old vim and IDEA Ultimate.
When it's fast it's pretty snappy though. I recently put revisiting emacs on my todo-list; I should add taking Zed out for another round as well.
I think that's the same issue I've had with i3, and the sole reason why I switched to bspwm. I think it happens when the cursor is on a GPU accelerated window and you quit the app - it's like i3's keyboard input gets trapped in that pane and can't escape (my workaround was to create a terminal and kill the GPU app with xkill, using my mouse)
That sounds a lot like a CPU fallback of rendering that should otherwise happen on the GPU. Aren't there any logs that could suggest that this is the case?
Edit: I just saw your edit to your reply here[1] and that's indeed what's happening. Now the question is “why does that happen?”.
> I can't wait until GPUI becomes a bit more mature/stable so I can build on top of it
I wouldn’t hold my breath. GPUI is built specifically for Zed. It is in its monorepo without separate releases and lots of breaking changes all the time. It is pretty tailored to making a text editor rather than being a reusable GUI framework.
That repo is to download a small template (why do we need a crate for that?), and it still pulls `gpui` directly from the Zed monorepo via a git dependency.
That kind of setup is fine for internal use, but it’s not how you'd structure a library meant for stable, external reuse. Until they split it out, version it properly, and stop breaking stuff all the time, it's hard to treat GPUI as a serious general-purpose option.
there's basically zero documentation for Iced as it stands. They even wrote that if you're not a great Rust dev, you're going to have a bad time and that all docs are basically "read the code" until their book is written. I'm glad System76 is able to build using Iced, but you need a great manual for a tool to be considered mature and useful.
IMO Slint is milestones ahead and better. They've even built out solid extensions for using their UI DSL, and they have pages and pages of docs. Of course everything has tradeoffs, and their licensing is funky to me.
> what specific documentation do you think are lacking? Tutorials?
examples beyond tiny todo app/best practices would be a great start.
> Tutorials? That's for users to write.
sure, and how's that going for them? there are near zero tutorials out there, and as someone looking to build a desktop tool in rust, they've lost me. maybe i'm not important enough for them and their primary goal is to intellectually gatekeep this tool from the vast majority for a long time, in which case, mission accomplished
there are literally dozens of examples, including many apps you can reference. come join the discord and check out the showcase channel. I've written and published probably 50-100 examples to show best practices to people who want to learn more. I basically leave zero questions unanswered on that server, unless they are so far out of my wheelhouse that I can't answer them, but even then I might point you to the right resource or person...and I'm not even part of the team. the community is just wonderful IMHO
> sure, and how's that going for them? there are near zero tutorials out there, and as someone looking to build a desktop tool in rust, they've lost me. maybe i'm not important enough for them and their primary goal is to intellectually gatekeep this tool from the vast majority for a long time, in which case, mission accomplished
26.5k stars on github and a flourishing community of users, which grows noticeably larger every day. new features basically every week. bugs sometimes fixed in literal minutes.
it's not a matter of gatekeeping, but a matter of resources. iced is basically the brainchild of a single developer (plus core team members who tackle some bits and pieces of the codebase, but not frequently), who already has a day job and is doing this for free. would you rather he write documentation—which you and I could very well write—or keep adding features so the library can get to 1.0?
I encourage you to look for evidence that invalidates your biases, as I'm confident you'll find it. and you might just love the library and the community. I promise you a warm welcome when you join us on discord ;-)
here are a few examples of bigger apps you can reference:
this is cool! i appreciate the warm invite. I really like your repo! They should include these examples in their primary repo. I did bump into halloy/icebreaker, etc but i just don't really find reading through massive repos a great entrypoint into whether a library/framework makes sense for me. I'll have to seriously look into it again, i do really like a vibrant community, and a lively discord is a nice close second. Thanks!
> iced is basically the brainchild of a single developer (plus core team members who tackle some bits and pieces of the codebase but not frequently), who already has a day time job and is doing this for free.
This single-handedly convinced me not to rely on anything using Iced. I have no patience left for projects with that low a bus factor.
At some point you will need to realize that the endless stream of people commenting about the lack of documentation points to a real issue with Iced, and the proverbial head-in-the-sand approach will not help.
UI frameworks typically need more than just the type of documentation that Rust docs provide. We see this with just about every UI framework around.
I'm not a maintainer or a member of the project, just an interested user.
Tutorials might be nice, but the library is evolving fast. I'm happier the core team spent time working on an animations API and debugging (including time travel) since the last release instead of working on guides for beginners.
Maybe that changes after 1.0.
Until then, countless users have learned to use it. Also iced is more a library than a framework. There's no right answer to the problems you'll be trying to solve, so writing guides on "best practices" is generally unhelpful if not downright harmful.
egui is nice but its API changes a lot between versions which makes it hard to rely on. Slint is stable and well documented. Its license is open source and also free to use in many cases so there is no real issue there.
Firefox rendering is based on WebRender, which runs on OpenGL. The internals of WebRender are similar to gpui but with significantly more stuff to cover the landscape of CSS.
Meanwhile I'm checking the Helix editor every 6 months to see if the authors became any less hostile to the idea of thinking about considering starting to think about potentially adding copilot support.
Why should an open source editor support some single commercial product API in their core? Why copilot and not another product?
It's completely reasonable to me that this should be a third party plugin or that they should wait for some standard that supports many products.
As @adriangalilea recently aptly wrote in Helix's 2nd-longest discussion thread (#4037):
> For the nth time, it's about enabling inline suggestions and letting anything, either LSP or Extensions use it, then you don't have to guess what the coolest LLM is, you just have a generic useful interface for LLM's or anything else to use.
An argument I would agree with is that it's unreasonable to expect Helix's maintainers to volunteer their time toward building and maintaining functionality they don't personally care about.
It's not about it being locked to a commercial product — whatever they built would be provider-agnostic. My understanding is the decision is more about not wanting to build things into core that are evolving so quickly and not wanting to rely on experimental LSP features (though I think inline completions are becoming standard soon[1]). Zed itself is perfect evidence of that -- they built an AI integration and then basically had to throw it away and rebuild it because the consensus best practice design changed. The Helix maintainers don't have time for that kind of churn and aren't trying to keep up with the hype cycle. When the plugin system is ready people will be able to choose their preferred implementation, and maybe eventually some aspects of it will make it into core.
Interestingly enough, this is exactly why I've started using Zed – while simultaneously eagerly waiting for Helix PR #8675 (Steel plugin system) to get merged. It's not far off, but then again, many Helix PRs seem that way, only to stay in limbo for months if not years.
These last two months I've been trialing both Neovim and Zed alongside Helix. I know I should probably just use Neovim since, once set up properly, it can do anything and everything. But configuring it has brought little joy. And once set up to do the same as Helix out of the box, it's noticeably slower.
Zed is the first editor I've tried that actually feels as fast as Helix while also offering AI tooling. I like how integrated everything is. The inline assistant uses context from the chat assistant. Code blocks are easy to copy from the chat panel to a buffer. The changes made by the coding agent can be individually reviewed and accepted or rejected. It's a lot of small details done right that add up to a tool that I'm genuinely becoming confident about using.
Also, there's a Helix keymap, although it doesn't seem as complete as the Vim keymap, which is what I've been using.
Still, I hope there will come a time when Helix users can have more than just Helix + Aider, because I prefer my editor inside a terminal (Helix) rather than my terminal inside an editor (Zed).
The authors don’t seem hostile at all. They’re firmly against putting work into a feature they don’t care for but welcome pro-AI users to make it happen. For some reason the latter group hasn’t seemed to accomplish it.
Unless something's changed, every AI-backed language server I've tried in Helix suffers from the same limitation when it comes to completions: Suggestions aren't shown until the last language server has responded or timed-out. Your slowest language server determines how long you'll be waiting.
The only project I know of that recognizes this is https://github.com/SilasMarvin/lsp-ai, which pivoted away from completions to chat interactions via code actions.
I feel like an LSP is very insufficient for the ideal UX of AI integrations. LSP would be fine for AI autocompletes of course, but i think we want a custom UX that we don't quite yet know. Eg what Zed offers here seems useful. I also really like what Claude Code does.
I don't know the LSP spec well enough to know if these sort of complex interactions would work with it, but it seems super out of scope for it imo.
This seems more in scope for those same people who want to make their editor into an IDE. And just like most other things, the editor is a poor integration point for AI. The shell and inter-process communication are the gold standard for integration and are where the best integrations are emerging from. Things that work with your editor instead of trying to replace it. Aider is the best example I've seen so far... though I'd love to hear about others.
This rings so true for me! Helix is beautiful and works fantastic, I'm pretty happy not having AI integrated into my editor so Helix is basically exactly as I want without any extras I don't!
I can understand that, and it's great if it fits your needs. It's annoying when apps that just have to do one thing and do it well instead are focusing on hype features. My recent rant is about Warp terminal that has a "different font size for different tabs" issue open for years, but silently integrated all sorts of AI into the terminal.
And yet, it's hard to ignore the fact that coding practices are undergoing a once-in-a-generation shift, and experienced programmers are benefiting most from it. Many of us had to ditch the comfort of terminal editors and switch to Microsoft's VSCode clones just to have these new incredible powers and productivity boosts.
Having AI code assistants built into the fast terminal editor sounds like a dream. And editors like Helix could totally deliver here if the authors were a bit more open to the idea.
Went from Atom, to VSC, to Vim and finally to Zed. Never felt more at home. Highly recommend giving it a try.
AFAIK there is overlap between Atom's and Zed's developers. They built Electron to build Atom. For Zed they built gpui, which renders the UI on the GPU for better performance. In case you are looking for an interesting candidate for building multi-platform GUIs in rust, you can try gpui yourself.
People have been using editors that look comparable for decades that don't need fancy GPU rendering to be fast and responsive. What is happening that makes stuff like that necessary now?
GPU rendering in some form or another (mostly just bitblitting characters, I guess) has also been common for decades. Classic hardware text mode is basically just that. Also, display densities and sizes have gone up (for some of us).
The first time I used Zed, I really noticed and appreciated the very low latency between a keypress and visual result. It created a great sense of “connection” to the experience. (I’ve previously used VS Code, Emacs, Sublime Text, JetBrains, and others)
I'm not sure, it might have changed since, but my personal experience was different.
Tried using zed on Linux (pop os, Nvidia) several months ago, was terribly slow, ~1s to open right click context window.
I've spent some time debugging this, and turns out that my GPU drivers are not the best with my current pop os release, but I still don't understand how it might take so long and how GPU is related to right clicking.
Switched back to emacs, love every second. :)
I'm not sure if the title is referring to actual development speed or to editor performance.
p.s. I play top games on Linux, all is fine with my GPU & drivers.
I also tried Zed on Linux a few months back, and had GPU/driver issues, so it was either slow or didn't run. Tried it just now and it worked right out of the box, and it's incredibly fast.
I will keep playing around with it to see if it's worth switching (from JetBrains WebStorm).
I know they started on MacOS and their Linux support is relatively new, so I wonder if that "fastest" label is really only applicable to MacOS currently.
Tried Zed and Cursor, but they always felt too magical to me. I ended up building a minimal agent framework that only uses seven tools (even for code edits):
read, write, diff, browse, command, ask, and think.
These simple, composable tools can be utilized well enough by increasingly powerful LLMs, especially Gemini 2.5 Pro, to achieve most tasks in a consistent, understandable way.
More importantly - I can just switch off the 'ask' tool for the agent to go full turbo mode without frequent manual confirmation.
I just released it yesterday, have a look at https://github.com/aperoc/toolkami for the implementation if you think it is useful for you!
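For the curious, the core idea fits in a few lines. This is just an illustrative sketch of a tool registry with a toggleable 'ask' gate, not the actual toolkami implementation (class and tool names here are made up):

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    # Each tool is just a name mapped to a function the LLM can call.
    tools: dict[str, Callable[[str], str]] = field(default_factory=dict)

    def register(self, name: str, fn: Callable[[str], str]) -> None:
        self.tools[name] = fn

    def disable(self, name: str) -> None:
        # e.g. drop "ask" so the agent runs without manual confirmation
        self.tools.pop(name, None)

    def dispatch(self, name: str, arg: str) -> str:
        if name not in self.tools:
            return f"tool '{name}' unavailable"
        return self.tools[name](arg)

agent = Agent()
agent.register("read", lambda path: f"<contents of {path}>")
agent.register("ask", lambda q: input(q))  # human-in-the-loop gate
agent.disable("ask")                       # "full turbo mode"
print(agent.dispatch("read", "main.py"))
print(agent.dispatch("ask", "proceed?"))   # gate is gone, call is refused
```

The nice property is that "turbo mode" isn't a special code path; it's just the absence of one tool from the registry.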
Counterpoint: Zed wins me over because the LLM calls don't feel like magic - I maintain control over API calls unlike Cursor, which seems to have a mind of its own and depletes my API quota unexpectedly. Plus, Zed matches Sublime's performance unlike Cursor's laggy Electron VS Code foundation.
> I ended up building a minimal agent framework that only uses seven tools
you can choose which tools are used in zed by creating a new "tools profile" or editing an existing one (you can also add new tools using the MCP protocol)
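as a rough sketch, a custom profile in settings.json looks something like this (exact key and tool names may differ between Zed releases, so treat it as illustrative and check the current agent docs):

```json
{
  "agent": {
    "profiles": {
      "chat-only": {
        "name": "Chat Only",
        "tools": {
          "read_file": true,
          "edit_file": false,
          "terminal": false
        }
      }
    }
  }
}
```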
Zed and Cursor are very different; I wouldn’t put them in the same bucket myself. I’ve been using the Zed AI assistant panel for a while (manual control over the context window by including files and diagnostics) — will try the new agentic panel soon.
Unfortunately the new agent panel completely nerfs the old workflow.
I also love the old version (now called "Text Threads") for its transparency.
Even though they brought back text threads, the context is no longer included (or include-able!) as context in the inline assist. That means that you can no longer select code, hit ctrl+enter, and type "use the new code" or whatever.
I wish there was a way to just disable the agent panel entirely. I'm so uninterested in magical shit like cursor (though claude code is tasteful IMO).
Actually, I just checked and an active text thread is added to the inline prompt context (you may need to click on the box at the bottom of the inline prompt to include it, but then it is added by default for the next). So it looks fine to me (and it is nicer that it is more explicit this way).
There is also the "+" button to add files, threads etc, though it would be nice if it could also be done through slash commands.
Yes, and it followed the instructions in my text thread.
I opened a previous agent thread and it gave me the option to include both threads to the context of the inline prompt (the old text thread was included and I had to click to exclude it, the new thread was grayed out and I had to click to include it).
You can still include Text Threads as context in the inline assist prompt with @thread "name of thread", or using the `+` button. And it should suggest the active text thread for you, so it's one click. Let us know if that isn't working; we wanted to preserve the old workflow (very explicit context curation) for people who enjoyed the previous assistant panel.
Maybe once all of this is a bit more mature we can just get down to the minimal subset of features that are really important.
I’d love a nvim plugin that is more or less just a split chat window that makes it easy to paste code I’ve yanked (like yank to chat) add my commentary and maybe easily attach other files for context. That’s it really.
I can highly recommend gp.nvim, it has a few features but by default it's just a chat window with a yank-to-chat function. It also supports a context file that gets pasted into every chat automatically (for telling the AI about the tools you use etc)
It's amazing. Just like you keep repeating full turbo, I hope we all go full turbo, all the time! Who needs thoughtful care in these things anyway, that's for another day! Lets goooo
I generally use Neovim, but Zed was the first code editor that made me go, "Wow, I can see myself actually using this." My only gripe is the "Sign In" button at the top that I can't seem to remove.
But apropos TFA, it's nice to see that telemetry is opt-in, not opt-out.
Yeah. I've been using vim since the 90's. A bit of emacs here and there, and more recently some helix too. Zed was the first GUI editor that won me over. I've always hated VSCode, but Zed is so fast and its UI just clicks with me, so I've been using it as my main editor for months now.
Subscribed to their paid plan just to keep the lights on and hoping it will get even better in the future.
I tried the agent mode with sonnet 3.7 (not the thinking one). When it started trying to create a file, it kept getting a "Failed to connect to API: 400 Bad Request" error. After a few attempts to create files, it "touch"ed a file and tried to edit this, which also failed. It checked permissions with "ls -la", then it tried to "cat" the code into it but failed because of syntax errors (to do with quoting). Then it tried nano(?!?!) and failed, and then it started "echo"ing the code into the file chunk by chunk, which started working. After 4 chunks it got an error and then it made the following chunks smaller. It took it a dozen "echo"s or so.
While the initial 400 error is a bummer, I am actually surprised by and admire its persistence in trying to create the file and in the end finding a way to do so. It forgot to define a couple of things in the code, which was trivial to fix; after that the code was working.
Zed employee here - that sounds like a bug, sorry about that!
If you're okay sharing the conversation with us, would you mind pressing the thumbs-down button at the bottom of the thread so that we can see what input led to the 400?
(We can't see the contents of the thread unless you opt into sharing it with the thumbs-down button.)
No problem, and thanks for the response here! I sent two feedback threads (added a second test where a file succeeded to edit and the rest failed). It was actually quite entertaining seeing the model try troubleshoot stuff anyway.
I used github copilot's sonnet 3.7. I now tried copilot's sonnet 3.5 and it seems to work, so it was prob a 3.7 issue? It did not let me try zed's sonnets, so I don't know if there is a problem with zed's 3.7 (I thought I could still do 50 prompts with a free account, but maybe that's not for the agent?).
I am amazed how well it works. Yesterday I spent a full day on a new macOS project with an idea in my head. Spent half a day writing basic features, and after that opened the project in Zed to add more. Even for not-very-well-documented things like AppKit + SwiftUI integration: no issues, and I mean I was getting about 500 new lines from my questions and the code compiled (and worked). A few times after review I modified a few things to make it compile or to make it better. But still.
And I had one interesting problem with objc/swift and javascript integration - and Zed AI delivered some masterpiece in JavaScript, that is definitely outside my knowledge.
This technology is definitely going to change how we program now.
(not the one you asked, but can chime in with some info)
This was a long time ago, but the way I did it was to use XcodeGen (1) and a simple Makefile. I have an example repo here (2) but it was before Swift Package Manager (using Carthage instead). If I remember correctly XcodeGen has support for Swift Package Manager now.
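For anyone wanting to try this route, a minimal XcodeGen project.yml is roughly this (target and folder names are illustrative); running `xcodegen generate` then produces the .xcodeproj, which a Makefile target can wrap with `xcodebuild`:

```yaml
name: MyApp
options:
  bundleIdPrefix: com.example
targets:
  MyApp:
    type: application
    platform: macOS
    sources: [Sources]
```

The win is that the .xcodeproj becomes a build artifact you never commit or hand-edit, so merge conflicts in project files disappear.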
On top of that I was coding in VS Code at the time, and just ran `make run` in the terminal pane when I wanted to run the app.
Now, with SwiftUI, I'm not sure how it would be to not use Xcode. But personally, I've never really vibed with Xcode, and very much prefer using Zed...
It’s funny that they lead with AI tools. I love zed because it’s fast, customizable, has a clean interface and it’s easy to pair program. The LLM bit is just an annoying thing I shut off because imo I’m too junior at the moment to use LLMs
I was interested in zed as I was looking for a performant VSCode replacement, but its inability to fully remove AI integration and disable the prominent sign-in button made me lose any interest. Judging by the project’s response or lack of it on these topics, I am worried about adopting zed in my workflows.
It's a small button in the top right corner. Very out of the way unless interacted with. I use zed because it's faster and cleaner than vscode. I don't want AI in my editor.
For people who have tried the new agent panel in Zed, how does it compare to something like Cursor or Windsurf?
(I've yet to dive deep into AI coding tools and currently use Zed as an 'open source Sublime Text alternative' because I like the low latency editing.)
I'd say it's closer to Claude Code than to either of the two IDE-oriented ones. I say this because it actually does the right thing more often than either Cursor or Windsurf. It gathers the right context, asks for feedback when needed and has yet to "go back and forth between two failing solutions" like I've seen Cursor do.
I don't know what Zed's doing under the hood but the diffing tool has yet to fail on me (compared to multiple times per conversation in Cursor). Compared to previous Zed AI iterations, this one edits files much more willingly and clearly communicates what it's editing. It's also faster than Claude Code at getting up to speed on context and much faster than Cursor or Windsurf.
I've used zed both for work and personal projects. I'm not a fan of the direction they're going. I don't want "agentic" tools, I just want to go back and forth with an LLM to get a better idea. I've specified in my markdown to not write actual code to files. Would be nice if this was toggleable in settings.
Apart from that, it's a hell of a lot better than alternatives, and my god is it fast. When I think about the perfect IDE (for my taste), this is getting pretty close.
In the assistant panel there is a mode option - write / ask / minimal, it sounds like what you are looking for.
Anyway, you can always craft your prompts to do or not do certain actions. They are adding more features, and if you want you can ignore some of them; this is not contradictory.
> I don't want "agentic" tools, I just want to back and forth with an LLM to get a better idea.
Ah! So you can get that experience with the agent panel (despite "agent" being in the name).
If you click the dropdown next to the model (it will say "Write" by default) and change it from "Write" to "Minimal" then it disables all the agentic tool use and becomes an ordinary back-and-forth chat with an LLM where you can add context manually if you like.
Also, you can press the three-dots menu in the upper-right and choose New Text Thread if you want something more customizable but still not agentic.
From my experience the Zed agent often just goes and edits your files without you asking it to. Even if you ask questions about the codebase, it assumes you want it changed. For it to be useful it must be better at understanding prompts. I would also like it to generate diffs like it does now, but prompt me first about whether to apply them.
I wanted Zed to work as a high performance dev-enabled Markdown editor as I was looking to replace Obsidian that doesn't scale well (memory use and cursor latency degrades with document size, and generally what I suspect is a weak foundation).
My use case is active note taking and reviewing (PKM), including a few large (1-2+ MB) markdown files.
However, to my surprise, the performance for Markdown was much worse - it's practically unusable.
This is clearly a Markdown backend problem, but not really relevant in the editor arena, except maybe to realize that the editor "shell" latency is just a part of the overall latency problem.
I still keep it around as I do with other editors that I like, and sometimes use it for minor things, while waiting to get something good.
On this note, I think there's room for an open source pluggable PKM as an alternative to Obsidian and think Zed is a great candidate. Unfortunately I don't have time to build it myself just yet.
> On this note, I think there's room for an open source pluggable PKM as an alternative to Obsidian and think Zed is a great candidate. Unfortunately I don't have time to build it myself just yet.
I'm also super interested in building this. OTOH Obsidian has a huge advantage for its plugin ecosystem because it is just so hackable.
One of the creators of Zed talked about their experience building Atom - at the time the plugin API was just wide open (which resulted in a ton of cool stuff, but also made it harder to keep building). They've taken a much stricter Plugin API approach in Zed vs. Atom, but I think the wide-open approach is working out well for Obsidian's plugin ecosystem.
+1 for Typora. It also has a nice option to automatically copy any media that is pasted into a doc into a $doc.assets subfolder so you can keep everything organized.
If you like Zed's collaboration features, I wrote a plugin that makes Obsidian real-time collaborative too. We were very much inspired by their work (pre agent panel...). The plugin is called Relay [0].
Important note - you should not assume Zed is on par with vscode in terms of functionality. Nothing really is, since MS started as early as when Atom was born, and perhaps they were considering some SublimeText-style approach to the editor, as that is what started this type of editor, more or less.
But Zed is a complete rewrite, which on one hand makes it super fast, but on the other it is still super lacking in integration with the existing vsix extensions, language servers, and what not. Many authors in this forum totally fail to see that SublimeText4 is also super ultra fast compared to Electron-based editors, but is not even close in terms of supported extensions.
The whole Cursor hysteria may abruptly end with CoPilot/Cline/Continue advancing, and honestly, having used both - there isn't much difference in the final result, should you know what you are doing.
I use both Zed and JB. The thing with the jetbrains stuff is (1) it costs money and (2) you only need the fancy refactoring features occasionally. With modern LSP you can “find usages” and “goto definition” and you get live symbol completion and documentation in Zed just as naturally as JB. Zed covers 98% of your editing needs. A proper Debugger would be awesome, I use JB for debugging every time.
Not exactly answering your question, but as a user of JetBrains IDEs the Windsurf Plugin with Cascade[0] for JB IDEs is the best solution I've found so far to get a good agentic AI integration without giving up JetBrains goodies.
It's pretty tough to give up the good things that Jetbrains IDEs can bring, when they exist for a given lang. The obvious example is Java - IntelliJ is just leaps and bounds better than whatever stack of plugins you need in VSCode (or Cursor).
This isn't a great solution, but in cases where I've wanted to try out Cursor on a Java code base, I just open the project in both IDEs. I'll do AI-based edits with Cursor, and if I need to go clean them up or, you know, write my own code, I'll just switch over to IntelliJ.
Again, that's not the smoothest solution, but the vast majority of my work lately has been in Javascript, so for the occasional dip into Java-land, "dual-wielding" IDEs has been workable enough.
Cursor/Code handle JS codebases just fine - Webstorm is a little better maybe, but not the "leaps and bounds" difference between Code and IntelliJ - so for JS, I just live in Cursor these days.
Java is the original “it needs an IDE” language. Always has been. If you’re not using jetbrains, eclipse, or whatever other monstrosity to write Java, you’re going to have a bad time. I wouldn’t consider this a mark against Zed; I’d wager very few people write Java in it.
I was a RubyMine and later IDEA user for many many years. I agree with everything you've said but I got so tired of the IDE using excessive RAM and constantly making my fan spin (2019 Intel MBP). Switching to Zed made my experience on this laptop enjoyable again, the downside being that I miss out on some of the features from the JetBrains editors.
I've learned to work around the loss of some functionality over the past 6 months since I've switched and it hasn't been too bad. The AI features in Zed have been great and I'm looking forward to the debugger release so I can finally run and debug tests in Zed.
I used to have one of these and recently got an M1 Max machine - the performance boost is seriously incredible.
The throttling on those late-game intel macs is hardcore - at one point I downloaded Hot[1], which is a menu bar app that shows you when you're being throttled. It was literally all the time that the system was slowing itself down due to heat. I eventually just uninstalled it because it was a constant source of frustration to know I was only ever getting 50% performance out of my expensive dev laptop.
I got an M4 as a new work machine and it is absolutely bonkers how much faster and quieter it is. And the battery lasts forever, even when running my dev setup. I can actually go and work at a coffee shop for a couple hours without taking the charger now.
Same here. My slightly older M2 MacBook Air seems to be allergic to electricity. I wouldn't be afraid of leaving the house without a charger before a full day's work, as long as I'm not planning to run compute-heavy stuff the entire time.
I get the high performance component, but it seems most of their speed comes from underlying models supporting "diff style" edits, and only a small percentage from code-level text-editing optimizations and improvements.
vscode running a typescript extension (cline, gemini, cursor, etc) to achieve LLM-enhanced coding is probably the least efficient way to do it in terms of cpu usage, but the features they bring are what actually speeds up your development tasks - not the "responsiveness" of it all. It seems that we're making text editing and html rendering out to be a giant lift on the system when it's really not a huge part of the equation for most people using LLM tooling in their coding workflows.
Maybe I'm wrong but when I looked at zed last (about 2 months ago) the AI workflow was surprisingly clunky and while the editor was fast, the lack of tooling support and model selection/customization left me heading back to vscode/cline which has been getting nearly two updates per week since that time - each adding excellent new functionality.
agreed! I was not a huge fan of the AI integration before in Zed and would always switch to Cursor (or, lately Claude Code) to actually get something done. Now that Zed can target specific pieces of code from the sidebar and edit them directly, it's been my goto for the last 24 hours. I've yet to "eject" to my old tools.
The pricing page was not linked on the homepage. Maybe it was, maybe it wasn't but it surely was not obvious to me.
Regardless of how good a piece of software it is or pretends to be, I just do not care about landing pages anymore. The pricing page essentially tells me what I am actually dealing with. I knew about Zed when it was being advertised as "written in rust because it makes us better than everyone", the trend everyone was doing. Now, it is LLM based.
Absolutely not complaining about them. Zed positioned themselves well to take the crown of the multi-billion-dollar industry AI code editors have become. I had to write this wall of text because I just wanted to drop the pricing page link and help people make their own decision, but I have to reply to "what's your point" comments, and this should demonstrate I have no point aside from dropping a link.
Wow that's one awkwardly pompous introduction. Nevertheless Zed never fails to impress. Aside from all the AI fireworks it really goes to show how building software "from scratch" pays off in the long run.
I hate chat and panels in my IDE; they are distracting and have a worse UX. I'm using Cursor because of cursor prediction, but lately the trend has been the opposite, even in the development of Cursor. Apparently people are more interested in chatting than in doing super-powered editing supported by AI, the kind that lets you lead the game and makes you feel much more productive. I'd like to see something like that in Zed and I'd pay for it. Agent mode and chats never worked for me; they've always been the worse experience.
Awesome, thank you. I've been trying it for the last few minutes, really liking it. Pretty impressed with how well it works using ollama + qwen3:32b
Two nitpicks:
1) the terminal is not picking up my regular terminal font, which messes up the symbols for my zsh prompt (is there a way to fix this?)
2) the model, even though it's suggesting very good edits, and gives very precise instructions with links to the exact place in the code where to make the changes, is not automatically editing the files (like in the video), even though it seems to have all the Write tools enabled, including editing - is this because of the model I'm using (qwen3:32b)? or something else?
Edit: 3rd, big one: I had a JS file, made a small one-line change, and when I saved the file, the editor automatically, and without warning, changed all the single quotes to double quotes. I didn't notice at first; I committed, made other commits, then opened a PR. That's when I realized all the quotes had changed, and it took me a while to figure out how that happened, until I started a new branch with the original file, made one change, saved, and then I saw it.
Can this behavior be changed? I find it very strange that the editor would just change a whole file like that
1. You can specify your terminal font via terminal.font_family in zed settings.json
2. Not sure.
3. For most languages, the default is to use prettier for formatting. You can disable `format_on_save` globally, per-language, or per-project, depending on your needs. If you ever need to save without triggering a format, use the "workspace: save without formatting" command.
Prettier is /opinionated/, and its default of `singleQuote` = false can be quite jarring if unexpected. Prettier will look for and respect various configuration files (.prettierrc, .editorconfig, via package.json, etc.), so projects can set their own defaults (e.g. `singleQuote = true`). Zed can also be configured to further override prettier config via Zed settings, but I usually find that's more trouble than it's worth.
If you have another formatter you prefer (a language server or an external cli that will format files piped to stdin) you can easily have zed use those instead. Note, you can always manually reformat with `editor: format` and leave `format_on_save` off by default if that's more your code style.
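For points 1 and 3, here's a minimal `settings.json` sketch; the font name is a placeholder and the key names are from my reading of the Zed docs, so double-check them against your version:

```json
{
  // 1) Terminal font, independent of the editor font
  "terminal": {
    "font_family": "JetBrains Mono"
  },
  // 3) Turn off format-on-save globally...
  "format_on_save": "off",
  // ...or keep the global default and override only specific languages
  "languages": {
    "JavaScript": {
      "format_on_save": "off"
    }
  }
}
```

(Zed's settings file tolerates comments, so the annotations above are legal as-is.)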
Thank you for the excellent and informative reply. Will try it out
It would be nice for prettier to throw a user warning before making a ton of changes on save for the first time, and also let them know where they can configure it
I'm sure it's fast, but is it accessible? Am I as a screen reader user going to get fired if the company I work at decides one day to have all of their devs use this? And if not, any plans to make it so?
Might sound a little brusque but this really is the stakes we're playing with these days.
I've had a lot of issues with AI hallucinating the API surface for libraries, to the point where I kinda gave up using it for a lot of purposes.
But I got back on the horse & broke out Zed this weekend, deciding that I'd give it another shot, and this time be more deliberate about providing context.
My first thought was that I'd just use Zed's /fetch and slam some crates.io docs into context. But there were dozens and dozens of pages to cover the API surface, and I decided that while this might work, it wasn't a process I would ever be happy repeating.
So, I went looking for some kind of crates.io or Rust MCP. It's a pretty early-looking effort, but I found cratedocs-mcp. It can search crates, look up docs for crates, and look up specific members in crates; that seems like it might be sufficient, maybe it might help. Pulled it down, built it...
https://github.com/d6e/cratedocs-mcp
Then I checked the Zed docs for how to use this MCP server. Oh man, I need to create my own Zed extension to use an MCP service? Copy-paste this postgres-context-extension? Doesn't seem horrendous, but I was pretty deflated at this side-quest continuing to tack on new objectives, and gave up on the MCP idea. It feels like there should be some kind of built-in glue that lets Zed add MCP servers via configuration, instead of via creating a whole new extension!
On the plus side, I did give DeepSeek a try and it kicked out pretty good code on the first try. Definitely some bits to fix, but pretty manageable I think, seems structurally reasonably good?
I don't really know how MCP tool integration works in the rest of the AI ecosystem, but this felt sub-ideal.
Here's what I want: a keyboard-only code editor (ideally with all the vim keybindings and features) that has built-in LSP support for popular languages like C, C++, and Go and just works out of the box (even better if it has good code completion with Copilot or something similar). I can't seem to get this anywhere. I'm still sticking with neovim for now, but its code completion works so poorly that I've turned it off, and I have to maintain its config every few months.
I want to love Zed. The UX is absolutely amazing - it just feels like you're on turbo mode all the time. And the AI is excellent too. I don't know how far it can go, though, without the VSCode ecosystem.
Yes and no. An ecosystem can grow, even grow exponentially. But it is sensitive to competitive pressures. VSCode today is so much more widely adopted than anything before it. More than emacs, vim, anything really. So Zed has a good chance because it's simply excellent and many people (myself included) will be motivated to make it happen. But it's not determined that it will succeed in growing a comparable ecosystem.
One thing that works in favour of Zed, which previous IDEs didn't have, is that it's a lot easier to program things today, because of AI. It may even be possible to port many of the more popular extensions from VSCode to Zed with relatively low investment.
I would question that logic and offer the following example: Panic released Nova, a very nice editor that included a JS plugin system to make porting extensions easier. It was not originally (but might be now) set up for easy porting of VSCode extensions.
To date I know of barely anyone using it.
VSCode kind of had Atom’s audience to build off of, and other editors don’t always have that runway.
I have been using Zed as my main editor for the past ~5 months and I have been very happy with it. It's actually fast and snappy. I hope they become sustainable.
VS Code forks (Cursor and Windsurf) were extremely slow and buggy for me (much more so than VS Code, despite using only the most vanilla extensions).
I tried Zed out and successfully got Claude chat working with my API key.
But I'm not sure how to get predictions working.
When the predictions on-ramp window popped up asking if I wanted to enable it, I clicked yes and then it prompted me to sign in to Github. Upon approving the request on Github, an error popover over the prediction menubar item at the bottom said "entity not found" or something.
Not sure if that's related (Zed shows that I'm signed in despite that) but I can't seem to get prediction working. e.g. "Predict edit at cursor" seems to no-op.
Anyways, the onboarding was pretty sweet aside from that. The "Enable Vim mode" on the launch screen was a nice touch.
I was trying to figure this out too. It seems like you need to sign up for their "Zed Pro" subscription for it to work at all, and nothing indicates this anywhere. It just appears broken. Not the best first impression.
I've been recently using Zed, and I'm in love. It's the best editor experience I've ever had. Everything's snappy like notepad.exe, but it's also quite featureful, and quickly becoming even better.
It's fast-paced, yet it doesn't gloss over anything I'd find important. It shows clearly how to use it and shows a realistic use case, e.g. the model adding some nonsense, but catching something the author might have missed, etc. I don't think I've seen a better AI demo anywhere.
Maybe the bar is really low that I get excited about someone who demos an LLM integration for programmers to actually understand programming, but hey.
This doesn't seem ready for real use. Testing it right now, it's miles slower than Cursor for simple edits in the AI panel, and it behaves in what seems to be a broken way. I gave up after it started to type out my entire file from scratch for every edit, presenting a diff for the entire file rather than the few lines that were changed.
Does it not do incremental edits like Cursor? It seems like the LLM is typing out the whole file internally for every edit instead of diffs, and then re-generates the whole file again when it types it out into the editor.
We actually stream edits and apply them incrementally as the LLM produces them.
Sometimes we've observed the architect model (what drives the agentic loop) decide to rewrite a whole file when certain edits fail for various reasons.
It would be great if you could press the thumbs-down button at the end of the thread in Zed so we can investigate what might be happening here!
The way context management in Zed works is really well-done imo. I haven't found a different place which does it this way.
Basically, by default:
- You have the chat
- Inline edits use the chat as context
And that is extremely powerful. You can easily dump stuff into the chat, and talk about the design, and then implement it via surgical inline edits (quickly).
That said, I wasn't able to switch to Zed fully from Goland, so I was switching between the two, and recently used Claude Code to generate a plugin for Goland that does chat and inline edits similarly to how the old Zed AI assistant did it (not this newly launched one) - with a raw markdown editable chat, and inline edits using that as context.
I might be wrong, I last used Cline in December, but I believe we're talking about different things.
Cline's an Agent, and you chat with it, based on which it makes edits to your files. I don't think it has manual inline edit support?
What I'm talking about is that you chat with it, you're done chatting, you select some text and say "rewrite this part as discussed" and only that part is edited. That's what I mean with inline edits.
I'm using it without any AI stuff, just turned everything off. I like it. But in day-to-day usage I don't really see any difference with VSCode. And I'm using it out of pure curiosity and interest to try something new.
I like it way more than the old one (I really didn't like how I couldn't just copy code blocks in one click). The new one seems interesting but useless without agentic tool calling, which seems to be unsupported for most models (even tool-capable ones like o4-mini or gemini-2.5-pro). I would like to be able to supply debug info like before, as well as import regular text threads into the context.
I have launched Zed probably once a week for the past few months to try it out (each time running updates to see what's improved). I so badly want a native editor like this one to succeed. But the UX just seems very difficult to use. For a while, there were no settings UIs, and you had to edit everything in JSON. I think that's still true. But fine.
Now I'm excited that they actually have a Cursor-like agentic mode.
But the suggestions are still just nowhere near as "smart" as the ones from Cursor. I don't know if that's model selection or what. I can't even tell which model is being used for the suggestions.
Today I'm trying to use the Agentic stuff, I added an MCP server, and I keep getting non-stop errors even though I started the Pro trial.
First error: It keeps trying to connect to Copilot even though I cancelled my Copilot subscription. So I had to manually kill the Copilot connection.
Second Error: Added the JIRA MCP (it's working since Zed lists all the available tools in the MCP) and then asked a basic question (give me the 5 most recent tickets). Nope. Error interacting with the model, some OAuth error.
Third Weirdness (not error): Even though I'm on a Pro trial, the "Zed" agent configuration says "You have basic access to models from Anthropic through the Zed Free AI Plan" – aren't I on a Pro trial? I want to give you money guys, please, let me do that. I want to encourage a high performance editor to grow.
I'm not even trying to do anything fancy. I just am on a pro trial. Shouldn't this be the happiest of happy paths? Zed should use whatever the Pro stuff gives you, without any OAuth errors, etc. How can I help the Zed team debug this stuff? Not even sure where to start.
Update: The JIRA MCP seems to be working now, I'm not sure what happened.
I also added an elixir RuleSet (I THINK it's being used, but can't easily tell).
Still missing the truly fast and elegant suggestions from Cursor (especially when Cursor suggests _removing_ lines, haven't seen that in Zed yet). But I can see it getting there.
Some agents stuff also worked well. I had it fix two elixir warnings and a rust warning in our NIF.
Unrelated to Zed, I find myself in the awkward position of maintaining a (very small) rust file in our code base without ever having coded rust. And any changes, upgrades, etc are done via AI.
So far it seems to work (according to our unit tests) and the library isn't in any critical path. But it's a new world :-)
Also, Zed still seems to only give me access to "basic" models even though I'm in the pro tier trial. Not sure if that's a bug.
I'm curious what others' experiences have been with this. I haven't tried it out yet. Is it comparable to Cursor's capabilities? More on par with VS Code Copilot? Something else entirely?
In my personal experience I couldn't use Zed for editing python.
Firstly, when navigating in a large python repository, looking up references was extremely slow (sometimes on the order of minutes).
Secondly, searching for a string in the repo would sometimes be incorrect (e.g. I know the string exists but Zed says there aren't any results, as if a search index hasn't been updated). These two issues made it unusable.
I've been using PyCharm recently and found it to be far superior to anything else for Python. JetBrains builds really solid software.
Yes... but all that really matters in a zero-sum game is that VSCode and Cursor have good Python integration and Zed's is not good. I love Zed otherwise, but I have to do Python all day, so I don't use it.
Slightly OT: Why aren't AI coding assistants implemented as plugins (like through an LSP), rather than being a standalone AI-first editor (like Cursor)?
I might be missing the obvious, and I get no standard exists, but why aren't AI coding assistants just plugins?
Do you mean for Zed in particular? Because there are lots of AI agent plugins for VSC and Jetbrains IDEs both. I’m currently using one now, namely Augment. And AFAIK, copilot is still a plugin.
I don't think it's obvious but in my own experiments with this it's things like permissions that can get a little hairy. It's all doable and IMO preferable to have an agent-as-daemon. aider is kind of like this.
I'd love to be able to configure it to use OpenAI API but with a custom hostname. I found some comments in a forum about how to configure that, but they were pretty old and didn't seem to work anymore.
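For what it's worth, recent Zed docs describe an `api_url` override for the OpenAI provider in `settings.json`. The hostname below is a placeholder and the exact key layout may differ by version, so treat this as a sketch:

```json
{
  "language_models": {
    "openai": {
      "api_url": "https://my-llm-proxy.example.com/v1"
    }
  }
}
```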
My favorite part of Zed is the problems/errors view. It's great seeing everything in one place and being able to edit multiple files with context at the same time.
That feature + native Git support has fully replaced VSCode for me.
Been using Zed as my daily driver (without AI) since sometime in late 2023, when I decided I wanted to ditch VSCode for something leaner and faster. Love it.
I switched to cursor earlier this year to try out LLM assisted development and realised how much I now despise vscode. It’s slow, memory hungry, and just doesn’t work as well (and in a keyboard centric way) as Zed.
Then a couple of weeks ago, I switched back to Zed, using the agents beta. AI in Zed doesn't feel quite as polished as Cursor (at least, edit predictions don't feel as good or as fast), but the agent mode works pretty well now. I still use Cursor a little because anything that isn't VSCode or PyCharm has, IMHO, a pretty bad Python LSP experience (those two do better because they use proprietary language servers), but I'm slowly migrating to full-stack TypeScript (and some Gleam), so I hope to fully ditch Cursor in favour of Zed soon.
I was a sublime user as well, Zed is the first editor to get me off it. It’s nice using an editor that gets modern improvements again. The writing was on the wall for ST when LSPs happened and support for them was relegated to a community plugin.
I don't feel the need for Sublime to step it up, because nothing is on their level yet. I tried Zed and went back to Sublime pretty quickly as it is by default lighter weight (the main Zed process is fine but it automatically runs a bunch of nodejs bloat) and doesn't have a bunch of useless AI crap. You can turn those things off in Zed (kudos to them for that at least), but I prefer to stick with an editor that doesn't need to be tweaked to get rid of bloat.
Honestly, being able to just turn off all the AI junk and have a fast editor is the main thing I want. You think AI features or speed actually matter more for most devs day to day?
Tried using it, but I’ve found it to be unreliable on my Linux laptop. Maybe it’s a sway thing or an amdgpu thing but last time I tried it was just too prone to crashing.
It's honestly frustrating to see such a promising product make _so many_ sub-par product decisions. It's probably one of the only products that I have genuinely tried using more than 5 times and gave up on right after. I would grant this if they were new on the block, but it's been a while now.
If they had focused on
1. Reached feature parity with the top 10 VSCode extensions (for the most common beaten path: vim keybindings, popular LSPs, etc.),
2. Implemented Cursor's Tab, and
3. Built a simple chat interface that lets me easily add context from the currently loaded repo,
I would switch in a heartbeat.
I _really_ want something better than VSCode and nvim. But this ain't it. While "agentic coding" is a nice feature, especially so for vibe-coding projects, I (and most of my peers) don't rely on it that much for daily work. It's nice for having less critical things going on at once, but as long as I'm expected to produce code, the two features highlighted above are what _effectively_ make me more productive.
1. Zed has been working great for me for ~1.5 years while I ignored its AI features (I only started using Zed's AI features in the past 2 weeks). Vim keybindings are better IMHO than every other non-vim editor and the LSP's I've used (typescript, clangd, gleam) have worked perfectly.
2. The edit prediction feature is almost there. I do still prefer Cursor for this, but its not so far ahead that I feel like I want to use Cursor and personally I find Zed to be a much more pleasant editor to use than vscode.
3. When you switch the agent panel from "write" to "ask" mode, it's basically that, no?
I'm not into vibe coding at all; I think AI code is still 90% trash. But I do find it useful for certain tasks: repetitive edits, boilerplate, or just generating a first pass at a React UI while I do the logic. For this, Zed's agent feature has worked very well, and I quite like "follow mode" as a way to see what the AI is changing so I can build a better mental model of the changes I'm about to review.
I do wish there was a bit more focus on some core editor features: ligatures still don't fully work on Linux; why can't I pop the agent panel (or any other panel for that matter) into the center editor region, or have more than one panel docked side by side on one of the screen sides? But overall, I largely have the opposite opinion and experience from you. Most of my complaints from last year have been solved (various vim compatibility things), or are in progress (debugger support is on the way).
Been trying out Zed here and there for a few months. There's lots to like about it. Nice clean UI. Extremely fast. No endless indexing (looking at you, PyCharm). It doesn't include a python debugger UI, and the type inspection is far behind PyCharm, so it doesn't take me away from PyCharm yet for serious work, but it might once it's competitive on those features.
I have run into some problems with it on both Linux and Mac where Zed hangs if the computer goes to sleep (meaning when the computer wakes back up, Zed is hung and has to be forcibly quit).
Haven't tried the AI agent much yet though. Was using CoPilot, now mostly Claude Code, and the Jetbrains AI agent (with Claude 3.7).
I hate the direction they're going, to be honest, with this giga-focus on AI bullshit. The only good part added was Zeta (their own predictive-editing model that jumps across the file to where it predicts you want to fix your typo etc., AND has a "subtle" mode), but they price it at $20/month, which is absurd.
>scratch-built in Rust _all the way down to handcrafted GPU shaders and OS graphics API calls_
Is this what happens to people who choose to learn Rust?
Joking aside, this is interesting, but I'm not sure what the selling point is versus most other AI IDEs out there? While it's great that you support ollama, practically speaking, approximately nobody is getting much mileage out of local models for complex coding tasks, and the privacy issues for most come from the LLM provider rather than the IDE provider.
I've been trying to be active, create issues, help in any way I can, but the focus on AI tells me Zed is no longer an editor for me.
A feature that people are paying $0 for?
Do you think GPL3 will serve as an impediment to their revenue or future venture fundraising? I assume not, since Cursor and Windsurf were forks of MIT-licensed VS Code. And both of them are entirely dependent on Microsoft's goodwill to continue developing VS Code in the open.
Tangentially, do you think this model of "tool" + "curated model aggregator" + "open source" would be useful for other, non-developer fields? Would an AI art tool with sculpting and drawing benefit from being open source? I've talked with VCs that love open developer tools and they hate on the idea of open creative tools for designers, illustrators, filmmakers, and other creatives. I don't quite get it, because Blender and Krita have millions of users. Comfy is kind of in that space, it's just not very user-friendly.
To be clear, by "the user" I'm referring to the Cursor devs. This is the terminology of many F/OSS licenses.
In theory everyone can fork Chrome and Android, in practice none of the forks can keep up with Google's resources, unless they are Microsoft or Samsung.
Good luck on finals!
I learned something from that code, cool stuff!
One question: how do you handle cutting a new breaking change in wit? Does it take a lot of time to deal with all the boilerplate when you copy things around?
I check back on the GitHub issue every few months and it just has more votes and more supportive comments, but no acknowledgement.
Hopefully someone can rescue us from the sluggish VS Code.
https://github.com/zed-industries/zed/issues/7992
I have a 1440p monitor and seeing this issue.
In my opinion, this type of graphics work is not the core functionality of a text editor, and the same problem has already been solved in libraries. There is no reason to reinvent that wheel... or if there is, then please mention why.
Example Zed screenshot, using "Ayu Light": https://i.ibb.co/Nr8SjvR/Screenshot-from-2024-07-28-13-11-10...
Same code in VS Code: https://i.ibb.co/YZfPXvZ/Screenshot-from-2024-07-28-13-13-41...
https://github.com/waydabber/BetterDisplay
(Or are you using it in vertical orientation?)
try it and see. i bet that helps/fixes at least some of you suffering from this.
It looks like the relevant work needs to be done upstream.
I don't know the internals of Zed well, but it seems entirely plausible they're doing text rendering from scratch.
I have the same issue with macOS in general, and I don't understand how anyone can use it on a normal DPI monitor.
I'm guessing zed implemented their own text rendering without either hinting or subpixel rendering or both.
Apple has removed support for font rendering methods which make text on non-integer scaled screens look sharper. As a result, if you want to use your screen without blurry text, you have to use 1080p (1x), 4k (2x 1080p), 5k (2x 1440p) or 6k screens (or any other screens where integer scaling looks ok).
To see the difference, try connecting a Windows/Linux machine to your monitor and comparing how text looks compared to the same screen with a MacOS device.
Using pixel fonts on any non-integer multiplier of the native resolution will always result in horrible font rendering; I don't care what OS you're on.
I use MacOS on all kinds of displays as I move throughout the day, some of them are 1x, some are 2x, and some are somewhere in between. using a vector font in Zed looks fine on all of them. It did not look fine when I used a pixel font that I created for myself, but that's how pixel fonts work, not the fault of MacOS.
1) No hinting
2) No subpixel rendering
Apparently every editor bothered doing those, except Zed.
From the Issue:
> Zed looks great on my MacBook screen, but looks bad when I dock to my 1080p monitor. No other editor has that problem for some reason.
(not parent commenter, but hold same opinion)
I have used retina displays of various sizes -- but after a while I just set them down to half their resolution usually (i.e. I do not use the 200% scaling from the OS, rather set them to be 1440p (or lower on 13inch laptops)). I have not seen an advantage to retina displays.
If they're running everything on the GPU then their SDF text rendering needs more work to be resolution independent. I'm assuming they use SDFs, or some variant of that.
Really, the screen isn't the issue given that on other editors OP says it is fine.
Knuth would be angry reading this :)
The restore checkpoint/redo is too linear for my lizard brain. Am I wrong to want a tree-based agentic IDE? Why has nobody built it?
They fixed that with the new agent panel, which now works more like the other AI sidebars.
I was (mildly) annoyed by that too. The new UI still has rough edges but I like the change.
If you're working on stuff like marketing websites that are well represented in the model dataset then things will just fly, but if you're building something that is more niche it can be super important to tune the context -- in some cases this is the differentiating feature between being able to use AI assistance at all (otherwise the failure rate just goes to 100%).
Fully agreed. This was the killer feature of Zed (and locally-hosted LLMs). Delete all tokens after the first mistake spotted in generated code. Then correct the mistake and re-run the model. This greatly improved code generation in my experience. I am not sure if cloud-based LLMs even allow modifying assistant output (I would assume not since it becomes a trivial way to bypass safety mechanisms).
In general they do. For each request, you include the complete context as JSON, including previous assistant output. You can change that as you wish.
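To make that concrete, here's a small sketch (plain data, no real API call) of the "delete everything after the first mistake, fix it, re-run" flow against an OpenAI-style message list. The helper name is made up for illustration:

```python
# Chat APIs are stateless: the client resends the full message list on each turn,
# so the "history" is whatever we choose to send, including edited assistant turns.
def truncate_and_fix(messages, bad_index, corrected_text):
    """Drop everything after the first flawed assistant message and replace its
    content with a corrected version, so the model continues from the fixed
    code instead of building on the mistake."""
    fixed = [dict(m) for m in messages[: bad_index + 1]]  # shallow-copy up to the bad turn
    fixed[bad_index]["content"] = corrected_text
    return fixed

history = [
    {"role": "user", "content": "Write a function that sums a list."},
    {"role": "assistant", "content": "def total(xs): return sum(x)"},  # bug: sum(x)
    {"role": "user", "content": "Now add type hints."},
]

# Correct the assistant's turn and discard everything after it.
new_history = truncate_and_fix(history, 1, "def total(xs): return sum(xs)")
# new_history now ends with the corrected assistant message;
# the next API request would simply send this list as the context.
```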
edit: actually it is still possible to include text threads in there
Oops, I guess.
So you could manage the context with great care, then go over to the editor and select specific regions and then "pull in" the changes that were discussed.
I guess it was silly that I was always typing "use the new code" in every inline assist message. A hotkey to "pull new code" into a selected region would have been sweet.
I don't really want to "set it and forget it" and then come back to some mega diff that is like 30% wrong. Especially right now where it keeps getting stuck and doing nothing for 30m.
What I don't like in the last update is that they removed the multi-tabs in the assistant. Previously I could have multiple conversations going and switch easily, but now I can only do one thing at a time :(
Haven't tried the assistant2 much, mostly because I'm so comfy with my current setup
I would recommend you check it out if you've been frustrated by the other options out there - I've been very happy with it. I'm fairly sure you can't have git-like dag trees, nor do I think that would be particularly useful for AI based workflow - you'd have to delegate rebasing and merge conflict resolution to the agent itself... lots of potential for disaster there, at least for now.
Vote/read-up here for the feature on Zed: https://github.com/zed-industries/zed/issues/17455
And here on VSCode: https://github.com/microsoft/vscode/issues/20889
You will not catch me using the words "agentic IDE" to describe what I'm doing because its primary purpose isn't to be used by AI any more than the primary purpose of a car is to drive itself.
But yes, what I am doing is creating an IDE where the primary integration surface for humans, scripts, and AIs is not the 2D text buffer, but the embedded tree structure of the code. Zed almost gets there and it's maddening to me that they don't embrace it fully. I think once I show them what the stakes of the game are they have the engineering talent to catch up.
The main reason it hasn't been done is that we're still all basically writing code on paper. All of the most modern tools that people are using are still basically just digitizations of punchcard programming. If you dig down through all the layers of abstraction, at the very bottom is line and column, that telltale hint of paper's two-dimensionality. And because line and column get baked into every integration surface, the limitations of IDEs are the limitations of paper. When you frame the task of programming as "write a huge amount of text out on paper," it's no wonder that people turn to LLMs to do it.
With the tree as the primary integration layer, you get to stop worrying about a valid tree representation blinking into and out of existence constantly, which is conceptually what happens when someone types code syntax left to right. They put an opening brace in, then later a closing brace; in between, a valid tree representation has ceased to exist.
That's possible because the source of truth for the IDE's state is an immutable concrete syntax tree. It can be immutable without ruining our costs because it has btree amortization built into it. So basically you can always construct a new tree with some changes by reusing most of the nodes from an old tree. A version history would simply be a stack of these tree references.
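The structural-sharing idea is easy to sketch. Here's a toy persistent tree (not the actual data structure described above, which adds B-tree-style chunking on top) where an edit builds a new root but reuses every untouched subtree, so a version history is just a stack of old root references:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)  # frozen = immutable nodes, safe to share between versions
class Node:
    label: str
    children: Tuple["Node", ...] = ()

def replace_child(root: Node, index: int, new_child: Node) -> Node:
    """Return a new root with one child swapped; siblings are shared, not copied."""
    kids = root.children[:index] + (new_child,) + root.children[index + 1:]
    return Node(root.label, kids)

v1 = Node("fn", (Node("params"), Node("body")))
v2 = replace_child(v1, 1, Node("body'"))  # edit only the body subtree

# v1 is untouched (undo is free), and the params subtree is shared by identity.
assert v1.children[1].label == "body"
assert v2.children[1].label == "body'"
assert v2.children[0] is v1.children[0]
```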
How can I follow up on what you're building? Would you be open to having a chat? I've found your github, but let me know if there's a better way to contact you.
edit: they updated the AI panel! looking good!
Man, so true. I tried this out a while back and it was pretty miserable to find docs, apis, etc.
IIRC they even practice a lot of bulk reexports and glob imports and so it was super difficult to find where the hell things come from, and thus find docs/source to understand how to use something or achieve something.
Super frustrating because the UI of Zed was so damn good. I wanted to replicate hah.
Have you had a chance to try the new panel? (The OP is announcing its launch today!)
The announcement is about it reaching prod release, but they emailed people to try it out in the preview version.
edit: yes i missed something. i see the new feature. hell yeah!
Check out the video in the blog post to see the new one in action!
Editing and deleting not only your messages but also the LLM's messages should be trivial.
One of the coolest things about LLM tech is that it's stateless, yet we leave that value on the floor when UIs act like it's not.
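Concretely: a chat-completion API receives the entire message list on every request, so nothing stops a UI from letting you rewrite any turn before resending. A minimal sketch (plain data only; no actual API call or provider assumed):

```python
# Because chat APIs are stateless -- the full message list is resent on
# every call -- a UI can freely edit or delete *any* message, including
# the model's, before the next request. No real API is called here.
history = [
    {"role": "user", "content": "Write a sort function."},
    {"role": "assistant", "content": "Here is bubble sort..."},
]

# Rewrite the assistant's turn to steer the next completion:
history[1]["content"] = "Here is merge sort..."

# Or drop a bad exchange entirely and branch from an earlier point:
history = history[:1]
```

Any UI that treats the transcript as append-only is imposing a restriction the underlying API doesn't have.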
Press the 3-dots menu in the upper right of the panel, and then choose "New Text Thread" instead of "New Thread".
EDIT: just gave it a shot and I get "unsupported GPU" as an error, informing me that my GPU needs Vulkan support.
Their detection must be wrong because this is not true. And like I said, other applications don't have this problem.
For one, not all applications are GPU accelerated.
Two, their UX may need to be improved for a specific hardware configuration. I have used Zed with good performance on Intel dGPU, AMD dGPU, and Intel iGPU without issue — my guess is a missing dependency?
I don't care about Zed fixing anything - they're Zed's issues, not mine. All I'm saying is that contrary to what someone else said about the software being "fast" I tried it and at startup, it was unusably slow. I'm what you would call a failed conversion.
> Also, how is whether the project is volunteer-run relevant? Would you file a support ticket for commercial software you use saying "it's slow" and then when they follow up asking for details about your setup, you say "sorry, you don't get free QA work from me"
So this is kind of needlessly antagonistic imo - the point between the lines is that FOSS projects run by volunteers get a lot more grace than venture backed companies that go on promotion blitzes talking about their performance.
Error message, hardware configuration, done.
From my perspective that is not something you do for zed, but something you do for your distro and hardware.
And ofc, your first comment was fine either way. But the attitude of the latter is just poor.
How about "I'm getting <1FPS perf on {specs}" instead of the snark.
The antagonistic part is assuming your specific Linux configuration is innately Zed’s issue. It’s possible simply mentioning it to them would lead you quickly and easily to a solution, no free labor needed. It’s possible Zed is prepared to spend their vast VC resources on fixing your setup, even—which seems to be what you expect. Point being there’s a middle ground where you telling Zed “hey it didn't work well for me” gives Zed the chance to resolve any issues on their end in order to properly convert you, if you truly are interested in trying their editor. You don’t need to respond to the suggestion with a lecture on how companies exploit free volunteer labor and anything short of software served up on a silver platter would make you complicit. It’s really a little absurd.
If I had to guess, your system globally or their rendering library specifically is probably stuck on llvmpipe.
seems like you needing a GPU would be your issue
Putting together a high-quality, actionable bug report is a much higher bar, one that can often feel like screaming at the clouds.
I’m genuinely curious what you are getting out of it
As a Linux user, I am sadly accustomed to some software working in only a just-so configuration. A datapoint that the software is still Mac first development is useful to know. Zed might still be worth trying, but I have to temper my enthusiasm from the headline announcement of, “everything is great”.
I'm on PopOS and the issue ended up being DRI_PRIME.
Might be worth trying `DRI_PRIME=0 zed`.
[1]: people experiencing sluggishness on Linux are almost certainly hit by a bug that makes the rendering fall back to llvmpipe (that is, CPU rendering) instead of Vulkan rendering, but macOS shouldn't have this kind of problem.
At least it did a month or so ago, and at that time I couldn't figure out a practical use for the LLM-integration either so I kind of just went back to dumb old vim and IDEA Ultimate.
When it's fast it's pretty snappy though. I recently put revisiting emacs on my todo-list; I should add taking Zed out for another round as well.
Edit: I just saw your edit to your reply here[1] and that's indeed what's happening. Now the question is “why does that happen?”.
[1]
I wouldn’t hold my breath. GPUI is built specifically for Zed. It is in its monorepo without separate releases and lots of breaking changes all the time. It is pretty tailored to making a text editor rather than being a reusable GUI framework.
i think there's some desire from within zed to make this a real thing for others to reuse.
That kind of setup is fine for internal use, but it’s not how you'd structure a library meant for stable, external reuse. Until they split it out, version it properly, and stop breaking stuff all the time, it's hard to treat GPUI as a serious general-purpose option.
Waiting for Robius / Makepad to mature a bit more. Looks very promising.
Iced, being used by System76's COSMIC EPOCH, is not great in what regards? Serious question.
IMO Slint is miles ahead and better. They've even built out solid extensions for using their UI DSL, and they have pages and pages of docs. Of course everything has tradeoffs, and their licensing is funky to me.
Calling iced not useful reads like an uninformed take
examples beyond tiny todo app/best practices would be a great start.
> Tutorials? That's for users to write.
sure, and how's that going for them? there are near zero tutorials out there, and as someone looking to build a desktop tool in rust, they've lost me. maybe i'm not important enough for them and their primary goal is to intellectually gatekeep this tool from the vast majority for a long time, in which case, mission accomplished
> sure, and how's that going for them? there are near zero tutorials out there, and as someone looking to build a desktop tool in rust, they've lost me. maybe i'm not important enough for them and their primary goal is to intellectually gatekeep this tool from the vast majority for a long time, in which case, mission accomplished
26.5k stars on github and a flourishing community of users, which grows noticeably larger every day. new features basically every week. bug fixes sometimes fixed in literal minutes.
it's not a matter of gatekeeping, but a matter of resources. iced is basically the brainchild of a single developer (plus core team members who tackle some bits and pieces of the codebase but not frequently), who already has a day time job and is doing this for free. would you rather him write documentation—which you and I could very well write—or keep adding features so the library can get to 1.0?
I encourage you to look for evidence that invalidates your biases, as I'm confident you'll find it. and you might just love the library and the community. I promise you a warm welcome when you join us on discord ;-)
here are a few examples of bigger apps you can reference:
https://github.com/squidowl/halloy
https://github.com/hecrj/icebreaker
https://github.com/hecrj/holodeck
and my smaller-scale examples (I'm afraid my own big app is proprietary):
https://github.com/airstrike/iced_receipts a simple app showing how to manage multiple screens for CRUD-like flows
https://github.com/airstrike/pathfinder/ a simple app showing how to draw on a canvas
https://github.com/airstrike/iced_openai a barebones app showing how to make async requests
https://github.com/airstrike/tabular a somewhat complex custom widget example
I'll be waiting for you on Discord ;-) my username is the same there so ping me if you need anything
and I forgot to link to a ridiculously cool new feature that dropped last week: time travel debugging for free
https://github.com/iced-rs/iced/pull/2910
check out the third and fourth videos!
This single-handedly convinced me not to rely on anything using Iced. I have no patience left for projects with that low a bus factor.
UI frameworks typically need more than just the type of documentation that Rust docs provide. We see this with just about every UI framework around.
Just write some tutorials already.
Tutorials might be nice, but the library is evolving fast. I'm happier the core team spent time working on an animations API and debugging (including time travel) since the last release instead of working on guides for beginners.
Maybe that changes after 1.0.
Until then, countless users have learned to use it. Also iced is more a library than a framework. There's no right answer to the problems you'll be trying to solve, so writing guides on "best practices" is generally unhelpful if not downright harmful.
https://github.com/helix-editor/helix/discussions/4037
> For the nth time, it's about enabling inline suggestions and letting anything, either LSP or Extensions use it, then you don't have to guess what the coolest LLM is, you just have a generic useful interface for LLM's or anything else to use.
An argument I would agree with is that it's unreasonable to expect Helix's maintainers to volunteer their time toward building and maintaining functionality they don't personally care about.
[1]: https://microsoft.github.io/language-server-protocol/specifi...
These last two months I've been trialing both Neovim and Zed alongside Helix. I know I should probably just use Neovim since, once set up properly, it can do anything and everything. But configuring it has brought little joy. And once set up to do the same as Helix out of the box, it's noticeably slower.
Zed is the first editor I've tried that actually feels as fast as Helix while also offering AI tooling. I like how integrated everything is. The inline assistant uses context from the chat assistant. Code blocks are easy to copy from the chat panel to a buffer. The changes made by the coding agent can be individually reviewed and accepted or rejected. It's a lot of small details done right that add up to a tool that I'm genuinely becoming confident about using.
Also, there's a Helix keymap, although it doesn't seem as complete as the Vim keymap, which is what I've been using.
Still, I hope there will come a time when Helix users can have more than just Helix + Aider, because I prefer my editor inside a terminal (Helix) rather than my terminal inside an editor (Zed).
https://github.com/helix-editor/helix/pull/8675
Also, the Helix way, thus far, has been to build an LSP for all the things, so I guess you'd make a copilot LSP (I bet there already is one).
The only project I know of that recognizes this is https://github.com/SilasMarvin/lsp-ai, which pivoted away from completions to chat interactions via code actions.
I don't know the LSP spec well enough to know if these sort of complex interactions would work with it, but it seems super out of scope for it imo.
And yet, it's hard to ignore the fact that coding practices are undergoing a once-in-a-generation shift, and experienced programmers are benefiting most from it. Many of us had to ditch the comfort of terminal editors and switch to Microsoft's VSCode clones just to have these incredible new powers and productivity boosts.
Having AI code assistants built into the fast terminal editor sounds like a dream. And editors like Helix could totally deliver here if the authors were a bit more open to the idea.
Went from Atom, to VSC, to Vim and finally to Zed. Never felt more at home. Highly recommend giving it a try.
AFAIK there is overlap between Atom's and Zed's developers. They built Electron to build Atom. For Zed they built GPUI, which renders the UI on the GPU for better performance. In case you are looking for an interesting candidate for building multi-platform GUIs in Rust, you can try GPUI yourself.
Tried using zed on Linux (Pop!_OS, Nvidia) several months ago; it was terribly slow, ~1s to open the right-click context menu.
I've spent some time debugging this, and it turns out that my GPU drivers are not the best with my current Pop!_OS release, but I still don't understand how it might take so long and how the GPU is related to right-clicking.
Switched back to emacs, love every second. :)
I'm not sure if the title is referring to actual development speed or the editor's performance.
p.s. I play top games on Linux, all is fine with my GPU & drivers.
I will keep playing around with it to see if it's worth switching (from JetBrains WebStorm).
Nvidia drivers in particular are terrible on Linux, so what OP is describing is likely some compatibility/version issue.
that is why I commented, since I was disappointed a bit
These simple, composable tools can be utilized well enough by increasingly powerful LLMs, especially Gemini 2.5 Pro, to achieve most tasks in a consistent, understandable way.
More importantly - I can just switch off the 'ask' tool for the agent to go full turbo mode without frequent manual confirmation.
I just released it yesterday, have a look at https://github.com/aperoc/toolkami for the implementation if you think it is useful for you!
you can choose which tools are used in zed by creating a new "tools profile" or editing an existing one (you can also add new tools using the MCP protocol)
Even though they brought back text threads, the context is no longer included (or include-able!) as context in the inline assist. That means that you can no longer select code, hit ctrl+enter, and type "use the new code" or whatever.
I wish there was a way to just disable the agent panel entirely. I'm so uninterested in magical shit like cursor (though claude code is tasteful IMO).
There is also the "+" button to add files, threads etc, though it would be nice if it could also be done through slash commands.
I opened a previous agent thread and it gave me the option to include both threads to the context of the inline prompt (the old text thread was included and I had to click to exclude it, the new thread was grayed out and I had to click to include it).
edit: yup, they fixed it 2 days ago
It looks like I was 2 days out of date, and updating fixed it for me.
I’d love a nvim plugin that is more or less just a split chat window that makes it easy to paste code I’ve yanked (like yank to chat) add my commentary and maybe easily attach other files for context. That’s it really.
https://github.com/yetone/avante.nvim
Yours is the full agent, though... Nice.
[1] https://github.com/karthink/gptel
[2] https://github.com/dolmens/gptel-aibo
[3] https://github.com/lizqwerscott/mcp.el
It's like lisp's original seven operators: quote, atom, eq, car, cdr, cons and cond.
And I still can't stop smiling just watching the agent go full turbo mode when I disable the `ask` tool.
The goal is composable semantic routing -- seamless traversal between different tools through things like saved outputs and conversational partials.
Routing similar to pipewire, conversation chains similar to git, and URI addressable conversations similar to xpath.
This is being built application down to ensure usability, design sanity and functionality.
Then, connect it using this line: `client = MCPClient(server_url=server_url)` (https://github.com/aperoc/toolkami/blob/e49d3797e6122fb54ddd...)
Happy to help further if you run into issues.
MCP Clients and servers can support both sse or stdio
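Whichever transport is used, the payload is the same JSON-RPC 2.0 framing; only the channel carrying the strings differs. A hypothetical helper (not from any MCP SDK) showing the shape of a request a client would send:

```python
import json

def jsonrpc_request(method, params=None, req_id=1):
    """Build one JSON-RPC 2.0 message -- the framing MCP uses over both
    stdio and SSE. Helper name and structure are illustrative only."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# e.g. ask a server which tools it exposes:
req = jsonrpc_request("tools/list")
```

Over stdio this string is written to the server process's stdin; over SSE it goes out as an HTTP POST while responses stream back as events.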
But apropos TFA, it's nice to see that telemetry is opt-in, not opt-out.
Subscribed to their paid plan just to keep the lights on and hoping it will get even better in the future.
It's open source, builds extremely well out of the box, and the UI is declarative.
While the initial 400 error is a bummer, I'm actually surprised by and admire its persistence in trying to create the file and in the end finding a way to do so. It forgot to define a couple of things in the code, which was trivial to fix; after that the code was working.
If you're okay sharing the conversation with us, would you mind pressing the thumbs-down button at the bottom of the thread so that we can see what input led to the 400?
(We can't see the contents of the thread unless you opt into sharing it with the thumbs-down button.)
I used github copilot's sonnet 3.7. I now tried copilot's sonnet 3.5 and it seems to work, so it was prob a 3.7 issue? It did not let me try zed's sonnets, so I don't know if there is a problem with zed's 3.7 (I thought I could still do 50 prompts with a free account, but maybe that's not for the agent?).
This was a long time ago, but the way I did it was to use XcodeGen (1) and a simple Makefile. I have an example repo here (2) but it was before Swift Package Manager (using Carthage instead). If I remember correctly XcodeGen has support for Swift Package Manager now.
On top of that I was coding in VS Code at the time, and just ran `make run` in the terminal pane when I wanted to run the app.
Now, with SwiftUI, I'm not sure how it would be to not use Xcode. But personally, I've never really vibed with Xcode, and very much prefer using Zed...
1: https://github.com/yonaskolb/XcodeGen 2: https://github.com/LinusU/Soon
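For reference, XcodeGen reads a `project.yml` and generates the `.xcodeproj` for you. A minimal sketch might look like the following (target name, bundle prefix, and paths are made up; check the XcodeGen docs for the full schema):

```yaml
# Hypothetical project.yml for a single iOS app target
name: MyApp
options:
  bundleIdPrefix: com.example
targets:
  MyApp:
    type: application
    platform: iOS
    deploymentTarget: "15.0"
    sources: [Sources]
```

Running `xcodegen generate` rebuilds the project file, so the `.xcodeproj` itself can stay out of version control.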
One way you could use LLMs w/o inducing brain mush would be for code or design reviews, testability, etc.
If you see codebases you like, stash them away for AI explanation later.
https://github.com/zed-industries/zed/issues/12325
https://github.com/zed-industries/zed/issues/6756
(I've yet to dive deep into AI coding tools and currently use Zed as an 'open source Sublime Text alternative' because I like the low latency editing.)
I don't know what Zed's doing under the hood but the diffing tool has yet to fail on me (compared to multiple times per conversation in Cursor). Compared to previous Zed AI iterations, this one edits files much more willingly and clearly communicates what it's editing. It's also faster than Claude Code at getting up to speed on context and much faster than Cursor or Windsurf.
Apart from that, it's a hell of a lot better than alternatives, and my god is it fast. When I think about the perfect IDE (for my taste), this is getting pretty close.
Anyway you can always make your prompts to do or not do certain actions, they are adding more features, if you want you can ignore some of them - this is not contradictory.
Ah! So you can get that experience with the agent panel (despite "agent" being in the name).
If you click the dropdown next to the model (it will say "Write" by default) and change it from "Write" to "Minimal" then it disables all the agentic tool use and becomes an ordinary back-and-forth chat with an LLM where you can add context manually if you like.
Also, you can press the three-dots menu in the upper-right and choose New Text Thread if you want something more customizable but still not agentic.
This is clearly a Markdown backend problem, but not really relevant in the editor arena, except maybe to realize that the editor "shell" latency is just a part of the overall latency problem.
I still keep it around as I do with other editors that I like, and sometimes use it for minor things, while waiting to get something good.
On this note, I think there's room for an open source pluggable PKM as an alternative to Obsidian and think Zed is a great candidate. Unfortunately I don't have time to build it myself just yet.
I'm also super interested in building this. OTOH Obsidian has a huge advantage for its plugin ecosystem because it is just so hackable.
One of the creators of Zed talked about their experience building Atom - at the time the plugin API was just wide open (which resulted in a ton of cool stuff, but also made it harder to keep building). They've taken a much stricter Plugin API approach in Zed vs. Atom, but I think the former approach is working out well for Obsidian's plugin ecosystem.
So far the only editor I've found that does this is Typora.
If you like Zed's collaboration features, I wrote a plugin that makes Obsidian real-time collaborative too. We are very inspired by their work (pre agent panel...). The plugin is called Relay [0].
[0] https://relay.md
I’ve been using PyCharm Professional for over a decade (after an even longer time with emacs).
I keep trying to switch to vscode, Cursor, etc. as they seem to be well liked by their users.
Recently I’ve also tried Zed.
But the Jetbrains suite of tools for refactoring, debugging, and general “intelligence” keep me going back. I know I’m not the only one.
For those of you that love these vscode-like editors that have previously used more integrated IDEs, what does your setup look like?
But Zed is a complete rewrite, which on one hand makes it super-fast, but otherwise it's still super-lacking in integration with the existing vsix extensions, language servers, and whatnot. Many authors in this forum totally fail to see that Sublime Text 4 is also super ultra fast compared to Electron-based editors, but is not even close in terms of supported extensions.
The whole Cursor hysteria may abruptly end with Copilot/Cline/Continue advancing, and honestly, having used both, there isn't much difference in the final result, should you know what you are doing.
https://aider.chat/docs/usage/watch.html
[0] https://plugins.jetbrains.com/plugin/20540-windsurf-plugin-f...
I've heard decent things about the Windsurf extension in PyCharm, but not being able to use a local LLM is an absolute non-starter for me.
At the moment I’m using Claude Code in a dedicated terminal next to my Jetbrains IDE and am reasonably happy with the combination.
This isn't a great solution, but in cases where I've wanted to try out Cursor on a Java code base, I just open the project in both IDEs. I'll do AI-based edits with Cursor, and if I need to go clean them up or, you know, write my own code, I'll just switch over to IntelliJ.
Again, that's not the smoothest solution, but the vast majority of my work lately has been in Javascript, so for the occasional dip into Java-land, "dual-wielding" IDEs has been workable enough.
Cursor/Code handle JS codebases just fine - Webstorm is a little better maybe, but not the "leaps and bounds" difference between Code and IntelliJ - so for JS, I just live in Cursor these days.
I've learned to work around the loss of some functionality over the past 6 months since I've switched and it hasn't been too bad. The AI features in Zed have been great and I'm looking forward to the debugger release so I can finally run and debug tests in Zed.
I used to have one of these and recently got an M1 Max machine - the performance boost is seriously incredible.
The throttling on those late-game intel macs is hardcore - at one point I downloaded Hot[1], which is a menu bar app that shows you when you're being throttled. It was literally all the time that the system was slowing itself down due to heat. I eventually just uninstalled it because it was a constant source of frustration to know I was only ever getting 50% performance out of my expensive dev laptop.
[1]: https://github.com/macmade/Hot
vscode running a typescript extension (cline, gemini, cursor, etc) to achieve LLM-enhanced coding is probably the least efficient way to do it in terms of cpu usage, but the features they bring are what actually speeds up your development tasks - not the "responsiveness" of it all. It seems that we're making text editing and html rendering out to be a giant lift on the system when it's really not a huge part of the equation for most people using LLM tooling in their coding workflows.
Maybe I'm wrong but when I looked at zed last (about 2 months ago) the AI workflow was surprisingly clunky and while the editor was fast, the lack of tooling support and model selection/customization left me heading back to vscode/cline which has been getting nearly two updates per week since that time - each adding excellent new functionality.
Does responsiveness trump features and function?
I'm curious what you think of this launch! :D
We've overhauled the entire workflow - the OP link describes how it works now.
The pricing page was not linked on the homepage. Maybe it was, maybe it wasn't, but it surely was not obvious to me.
Regardless of how good a piece of software it is or pretends to be, I just do not care about landing pages anymore. The pricing page essentially tells me what I am actually dealing with. I knew about Zed when it was being advertised during the "written in rust because it makes us better than everyone" trend everyone was doing. Now, it is LLM-based.
Absolutely not complaining about them. Zed did position themselves well to take the crown of the multi-billion-dollar industry AI code editors have become. I had to write this wall of text because I just wanted to drop the pricing page link and help people make their own decision, but I have to reply to "what's your point" comments, and this should demonstrate I have no point aside from dropping a link.
The free pricing is a bit confusing: it says 50 prompts/month, but also BYO API keys.
So even if I use my own API keys, the prompts will stop at 50 per month?
Also, since it’s open source, couldn’t just someone remove the limit? (I guess that wouldn’t work if the limit is of some service provided by Zed)
Two nitpicks:
1) the terminal is not picking up my regular terminal font, which messes up the symbols for my zsh prompt (is there a way to fix this?)
2) the model, even though it's suggesting very good edits, and gives very precise instructions with links to the exact place in the code where to make the changes, is not automatically editing the files (like in the video), even though it seems to have all the Write tools enabled, including editing - is this because of the model I'm using (qwen3:32b)? or something else?
Edit: 3rd, big one: I had a js file, made a small 1-line change, and when I saved the file, the editor automatically, and without warning, changed all the single quotes to double quotes. I didn't notice at first, committed, made other commits, then a PR - that's when I realized all the quote changes - which took me a while to figure out, until I started a new branch with the original file, made 1 change, saved, and then I saw it.
Can this behavior be changed? I find it very strange that the editor would just change a whole file like that
2. Not sure.
3. For most languages, the default is to use prettier for formatting. You can disable `format_on_save` globally, per-language, or per-project depending on your needs. If you ever need to save without triggering formatting, use "workspace: save without formatting".
Prettier is /opinionated/ -- its default is `singleQuote = false`, which can be quite jarring if unexpected. Prettier will look for and respect various configuration files (.prettierrc, .editorconfig, via package.json, etc.) so projects can set their own defaults (e.g. `singleQuote = true`). Zed can also be configured to override prettier config via Zed settings, but I usually find that's more trouble than it's worth.
If you have another formatter you prefer (a language server or an external cli that will format files piped to stdin) you can easily have zed use those instead. Note, you can always manually reformat with `editor: format` and leave `format_on_save` off by default if that's more your code style.
- https://zed.dev/docs/configuring-zed#terminal-font-family
- https://zed.dev/docs/configuring-zed#format-on-save
- https://prettier.io/docs/configuration
- https://zed.dev/docs/languages/yaml#prettier-formatting
- https://zed.dev/docs/configuring-zed#formatter
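Putting those docs together, a settings sketch addressing both nitpicks might look like this (the font name is just an example; verify the key names against the pages linked above):

```jsonc
// Excerpt of Zed's settings.json (Zed accepts comments in this file).
{
  "terminal": {
    "font_family": "JetBrainsMono Nerd Font" // match your terminal emulator's font
  },
  "format_on_save": "off" // or scope this per-language instead of globally
}
```

A project-level `.prettierrc` with `{ "singleQuote": true }` is the other way to stop the quote rewriting while keeping format-on-save enabled.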
It would be nice for prettier to throw a user warning before making a ton of changes on save for the first time, and also let them know where they can configure it
But I got back on the horse & broke out Zed this weekend, deciding that I'd give it another shot, and this time be more deliberate about providing context.
My first thought was that I'd just use Zed's /fetch and slam some crates.io docs into context. But there were dozens and dozens of pages to cover the API surface, and I decided that while this might work, it wasn't a process I would ever be happy repeating.
So, I went looking for some kind of Crates.io or Rust MCP. Pretty early-looking effort, but I found cratedocs-mcp. It can search crates, look up docs for crates, and look up specific members in crates; that seems like it might be sufficient, maybe it might help. Pulled it down, built it... https://github.com/d6e/cratedocs-mcp
Then I checked the Zed docs for how to use this MCP server. Oh man, I need to create my own Zed extension to use an MCP service? Copy-paste this postgres-context-extension? Doesn't seem horrendous, but I was pretty deflated at this side-quest continuing to tack on new objectives & gave up on the MCP idea. It feels like there should be some kind of builtin glue that lets Zed add MCP servers via configuration, instead of creating a whole new extension!!
On the plus side, I did give DeepSeek a try and it kicked out pretty good code on the first try. Definitely some bits to fix, but pretty manageable I think, seems structurally reasonably good?
I don't really know how MCP tool integration works in the rest of the AI ecosystem, but this felt sub-ideal.
You can sign up for the beta here - https://zed.dev/debugger - or build from source right now.
I also laughed at the dig on VSCode at the start. For the unaware, the team behind Zed was originally working on Atom.
Here's a nice recent post about it: https://felix-knorr.net/posts/2025-03-16-helix-review.html
I'm catching up on Zed architecture using deepwiki: https://deepwiki.com/zed-industries/zed
Starting out with a much smaller ecosystem than already-popular alternatives is a totally normal part of the road to success. :)
One thing that works in favour of Zed, which previous IDEs didn't have, is that it's a lot easier to program things today, because of AI. It may even be possible to port many of the more popular extensions from VSCode to Zed with relatively low investment.
To date I know of barely anyone using it.
VSCode kind of had Atom’s audience to build off of, and other editors don’t always have that runway.
VS Code forks (Cursor and Windsurf) were extremely slow and buggy for me (much more so than VS Code, despite using only the most vanilla extensions).
But I'm not sure how to get predictions working.
When the predictions on-ramp window popped up asking if I wanted to enable it, I clicked yes and then it prompted me to sign in to Github. Upon approving the request on Github, an error popover over the prediction menubar item at the bottom said "entity not found" or something.
Not sure if that's related (Zed shows that I'm signed in despite that) but I can't seem to get prediction working. e.g. "Predict edit at cursor" seems to no-op.
Anyways, the onboarding was pretty sweet aside from that. The "Enable Vim mode" on the launch screen was a nice touch.
https://zed.dev/blog/fastest-ai-code-editor
It's fast-paced, yet it doesn't gloss over anything I'd find important. It shows clearly how to use it, shows a realistic use case - e.g. the model adding some nonsense, but catching something the author might have missed, etc. I don't think I've seen a better AI demo anywhere.
Maybe the bar is really low that I get excited about someone who demos an LLM integration for programmers to actually understand programming, but hey.
Does it not do incremental edits like Cursor? It seems like the LLM is typing out the whole file internally for every edit instead of diffs, and then re-generates the whole file again when it types it out into the editor.
We actually stream edits and apply them incrementally as the LLM produces them.
Sometimes we've observed the architect model (what drives the agentic loop) decide to rewrite a whole file when certain edits fail for various reasons.
It would be great if you could press the thumbs-down button at the end of the thread in Zed so we can investigate what might be happening here!
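For anyone curious what "apply them incrementally" means mechanically, here's a purely illustrative sketch (not Zed's actual implementation, which works on CRDT-backed buffers rather than a plain `String`): each edit the model streams back is a range plus replacement text, applied to the buffer as soon as it arrives instead of waiting for a full-file rewrite.

```rust
// Illustrative only: names and types here are invented for the example.
struct StreamedEdit {
    range: std::ops::Range<usize>, // byte range in the current buffer
    new_text: String,
}

// Apply one edit as soon as it arrives from the model's stream.
fn apply_incremental(buffer: &mut String, edit: StreamedEdit) {
    buffer.replace_range(edit.range, &edit.new_text);
}

fn main() {
    let mut buffer = String::from("fn add(a: i32, b: i32) -> i32 { a + b }");
    // Edits arrive one at a time; each is applied immediately, so the user
    // sees progress without the whole file being regenerated.
    let edits = vec![StreamedEdit { range: 3..6, new_text: "sum".into() }];
    for e in edits {
        apply_incremental(&mut buffer, e);
    }
    assert_eq!(buffer, "fn sum(a: i32, b: i32) -> i32 { a + b }");
    println!("{buffer}");
}
```

The upside of this approach is latency: the editor can show (and the user can read) the first edits while the model is still producing later ones.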
Basically, by default:
- You have the chat
- Inline edits you make use the chat as context
And that is extremely powerful. You can easily dump stuff into the chat, and talk about the design, and then implement it via surgical inline edits (quickly).
That said, I wasn't able to switch to Zed fully from Goland, so I kept switching between the two. Recently I used Claude Code to generate a plugin for Goland that does chat and inline edits the way the old Zed AI assistant did (not this newly launched one): a raw, editable markdown chat, plus inline edits that use it as context.
Cline's an Agent, and you chat with it, based on which it makes edits to your files. I don't think it has manual inline edit support?
What I'm talking about is that you chat with it, you're done chatting, you select some text and say "rewrite this part as discussed" and only that part is edited. That's what I mean with inline edits.
For Agentic editing I'm happy with Claude Code.
Personally, I just use the terminal for my build tools and Zed talks to clangd just fine for autocomplete etc.
It supports extensions for languages such as Java and seemingly that extension can build code, too.
Zed also ships Git support out of the box, which makes it sound pretty much like a lightweight IDE.
Now I'm excited that they actually have a Cursor-like agentic mode.
But the suggestions are still just nowhere near as "smart" as the ones from Cursor. I don't know if that's model selection or what. I can't even tell which model is being used for the suggestions.
Today I'm trying to use the Agentic stuff, I added an MCP server, and I keep getting non-stop errors even though I started the Pro trial.
First error: It keeps trying to connect to Copilot even though I cancelled my Copilot subscription. So I had to manually kill the Copilot connection.
Second error: I added the JIRA MCP (it's working, since Zed lists all the available tools in the MCP) and asked a basic question (give me the 5 most recent tickets). Nope. Error interacting with the model, some OAuth error.
Third weirdness (not an error): even though I'm on a Pro trial, the "Zed" agent configuration says "You have basic access to models from Anthropic through the Zed Free AI Plan". Aren't I on a Pro trial? I want to give you money, guys; please, let me do that. I want to encourage a high-performance editor to grow.
I'm not even trying to do anything fancy; I'm just on a Pro trial. Shouldn't this be the happiest of happy paths? Zed should use whatever the Pro plan gives you, without any OAuth errors. How can I help the Zed team debug this stuff? I'm not even sure where to start.
I also added an elixir RuleSet (I THINK it's being used, but can't easily tell).
Still missing the truly fast and elegant suggestions from Cursor (especially when Cursor suggests _removing_ lines, haven't seen that in Zed yet). But I can see it getting there.
Some agents stuff also worked well. I had it fix two elixir warnings and a rust warning in our NIF.
Unrelated to Zed, I find myself in the awkward position of maintaining a (very small) rust file in our code base without ever having coded rust. And any changes, upgrades, etc are done via AI.
So far it seems to work (according to our unit tests) and the library isn't in any critical path. But it's a new world :-)
Also, Zed still seems to only give me access to "basic" models even though I'm in the pro tier trial. Not sure if that's a bug.
Firstly, when navigating a large Python repository, looking up references was extremely slow (sometimes on the order of minutes).
Secondly, searching for a string in the repo would sometimes be incorrect (e.g. I know the string exists but Zed says there aren't any results, as if a search index hasn't been updated). These two issues made it unusable.
I've been using PyCharm recently and found it to be far superior to anything else for Python. JetBrains builds really solid software.
That's nice for the chat panel, but the tab completion engine surprisingly still doesn't officially support a local, private option.[0]
Especially with Zed's Zeta model being open[1], it seems like there should be a way to use that open model locally, or what's the point?
[0]: https://github.com/zed-industries/zed/issues/15968
[1]: https://zed.dev/blog/edit-prediction
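For the assistant/agent side (as opposed to edit prediction), Zed can already talk to a local model. As a rough sketch, something like this in `settings.json` points it at a locally running Ollama server (key names from memory, so double-check against the current docs):

```
{
  "language_models": {
    "ollama": {
      "api_url": "http://localhost:11434"
    }
  }
}
```

Edit prediction is a different pipeline, though, which is presumably why the linked issue is still open.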
I might be missing the obvious, and I get no standard exists, but why aren't AI coding assistants just plugins?
That feature + native Git support has fully replaced VSCode for me.
I switched to cursor earlier this year to try out LLM assisted development and realised how much I now despise vscode. It’s slow, memory hungry, and just doesn’t work as well (and in a keyboard centric way) as Zed.
Then a couple of weeks ago, I switched back to Zed, using the agents beta. AI in Zed doesn't feel quite as polished as Cursor (at least, edit predictions don't feel as good or fast), but agent mode works pretty well now. I still use Cursor a little, because anything that isn't VSCode or PyCharm has, IMHO, a pretty bad Python LSP experience (those two do better because they use proprietary LSPs), but I'm slowly migrating to full-stack TypeScript (and some Gleam), so I hope to fully ditch Cursor in favour of Zed soon.
Other than that, it's a beautiful editor.
> ... 3. Baked into a closed-source fork of an open-source fork of a web browser
I laughed out loud at this one.
[0] : https://github.com/zed-industries/zed/pull/29496
```
"openai": {
  "api_url": "https://openrouter.ai/api/v1",
  "version": "1",
  "available_models": [
    { "name": "anthropic/claude-3.7-sonnet:beta", "max_tokens": 200000 },
    ...
```
Just change api_url in the zed settings and add models you want manually.
https://openrouter.ai/models?fmt=cards&providers=OpenAI
Huh?
Yes it's not the modern human but I think that's close enough.
If they had focused on
1. feature parity with the top 10 VSCode extensions (for the most common beaten path: vim keybindings, popular LSPs, etc.),
2. implementing Cursor's Tab, and
3. a simple chat interface where I can easily add context from the currently loaded repo,
I would switch in a heartbeat.
I _really_ want something better than VSCode and nvim, but this ain't it. While "agentic coding" is a nice feature, especially for "vibe coding" projects, I (and most of my peers) don't rely on it that much for daily work. It's nice for keeping less critical things going at once, but as long as I'm expected to produce code, the two features above are what _effectively_ make me more productive.
1. Zed has been working great for me for ~1.5 years while I ignored its AI features (I only started using them in the past 2 weeks). Its vim keybindings are better, IMHO, than every other non-vim editor's, and the LSPs I've used (TypeScript, clangd, Gleam) have worked perfectly.
2. The edit prediction feature is almost there. I do still prefer Cursor for this, but it's not so far ahead that I feel I want to use Cursor, and personally I find Zed a much more pleasant editor to use than VSCode.
3. When you switch the agent panel from "write" to "ask" mode, it's basically that, no?
I'm not into vibe coding at all; I think AI code is still 90% trash. But I do find it useful for certain tasks: repetitive edits, boilerplate, or generating a first pass at a React UI while I do the logic. For this, Zed's agent feature has worked very well, and I quite like "follow mode" as a way to watch what the AI is changing, so I can build a better mental model of the changes I'm about to review.
I do wish there was a bit more focus on some core editor features: ligatures still don't fully work on Linux; why can't I pop the agent panel (or any other panel for that matter) into the center editor region, or have more than one panel docked side by side on one of the screen sides? But overall, I largely have the opposite opinion and experience from you. Most of my complaints from last year have been solved (various vim compatibility things), or are in progress (debugger support is on the way).
I have run into some problems with it on both Linux and Mac, where Zed hangs if the computer goes to sleep (meaning that when the computer wakes back up, Zed is hung and has to be forcibly quit).
Haven't tried the AI agent much yet, though. I was using Copilot, and now mostly Claude Code and the JetBrains AI agent (with Claude 3.7).
I work at Zed and I like using Rust daily for my job, but outside work I also like Elm, and Zig, and am working on https://www.roc-lang.org
sorry, but to me it is just pure garbage.
Is this what happens to people who choose to learn Rust?
Joking aside, this is interesting, but I'm not sure what the selling point is versus most other AI IDEs out there? While it's great that you support ollama, practically speaking, approximately nobody is getting much mileage out of local models for complex coding tasks, and the privacy issues for most come from the LLM provider rather than the IDE provider.