Same, I've added a .#screenshots derivation. High up-front effort but almost zero maintenance afterwards.
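For readers unfamiliar with the flake side, a `.#screenshots` output is just another package. A rough sketch (attribute names and the capture step are illustrative, not the commenter's actual code):

```nix
# flake.nix excerpt: `nix build .#screenshots` drops the images in ./result
{
  outputs = { self, nixpkgs }: {
    packages.x86_64-linux.screenshots =
      nixpkgs.legacyPackages.x86_64-linux.runCommand "screenshots" { } ''
        mkdir -p $out
        # launch the app headless here and write each capture into $out
      '';
  };
}
```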
Bonus: since you're generating screenshots programmatically anyway, you can generate a pair of each with your app's light/dark theme, and swap them in/out depending on prefers-color-scheme: dark. <picture> elements work in GitHub READMEs, too: https://github.com/CyberShadow/CyDo#readme
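A minimal sketch of what that looks like in a README (the file names here are illustrative):

```html
<picture>
  <!-- served when the viewer's OS/browser prefers a dark theme -->
  <source media="(prefers-color-scheme: dark)" srcset="docs/screenshot-dark.png">
  <!-- fallback, also used for light mode -->
  <img alt="App main window" src="docs/screenshot-light.png">
</picture>
```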
+1 for this approach. For a mobile app, I made Nix spawn an ephemeral Android emulator instance for generating up-to-date screenshots, requiring no prior setup and leaving no lingering data around after running. Setting it up wasn't that high-effort in my case either; coming up with the idea was the hard part, the Nix code was one-shot by your favorite LLM.
Granted manually updating the screenshots isn't the most laborious task in the world, but the "upload-apk + take-screenshot + transfer-back-to-PC + edit" process is usually barely annoying enough that you end up almost never doing it otherwise (similar to the OP's experience in the closing paragraph).
The only problem I can foresee with this idea is that the application, and therefore the screenshots, can change while the documentation does not. For example, if the documentation says to press "Options > Customize" but the application is updated so this becomes "Preferences > Advanced", the screenshot will show the new labels while the text still shows the old ones. That would be very confusing, as it would be hard to correlate what the screenshot shows with the text. With an old screenshot, the user could at least more easily tell that they were looking at out-of-date documentation.
Having said that, having a process to automatically grab screenshots makes it significantly easier for a developer to update the docs, so the motivation to keep the text up to date should be much higher.
If the author is reading this, please note your code blocks don't scroll (and in fact overflow the white text onto the white background) on mobile layouts. You need an "overflow-x: scroll" or such.
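For reference, the usual fix looks something like this (the `pre` selector is a guess at the site's markup):

```css
/* let long code lines scroll sideways instead of spilling off-screen */
pre {
  overflow-x: auto;
  max-width: 100%;
}
```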
For the small casual games I've been vibe coding, I always start from a place where the application has a CLI so it can run headless, rendering to an offscreen texture, with a screenshot command as well as performance instrumentation. It takes no time to include all this, and it gives the agent a way to automate the UI and inspect important things. It also lets me trivially have the agent update screenshots.
Not as neat as being part of the build process, but I will now add that.
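For what it's worth, the entry point for that kind of harness can be tiny. A sketch in Python (flag names are hypothetical and the actual rendering is stubbed out):

```python
import argparse

def parse_cli(argv):
    # Flags an agent or build script can drive without a display attached.
    p = argparse.ArgumentParser(description="game harness")
    p.add_argument("--headless", action="store_true",
                   help="render to an offscreen texture instead of a window")
    p.add_argument("--screenshot", metavar="PATH",
                   help="render one frame, write it to PATH, then exit")
    p.add_argument("--perf-report", metavar="PATH",
                   help="dump frame-time statistics after the run")
    return p.parse_args(argv)

if __name__ == "__main__":
    args = parse_cli(["--headless", "--screenshot", "docs/title.png"])
    print(args.headless, args.screenshot)
```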
I was toying with a DragonRuby game a while back and did something like that. But DR also comes with reproducible playback recording, screenshotting, etc. built in; couple that with hot reloading and being able to easily inject code into the running game, and it was great putting instructions in place so the agent could run the game fully and show things off for me, in addition to letting it test things. I think we'll see more and more frameworks built to enable this: it's nice for human development, but it really pays off when you're working with an agent to have everything nicely runnable from a CLI and fully introspectable.
I have an offscreen screenshot path, as well as a CLI arg for world pos/camera view vector, and scripted benchmark runs with a simple text-based input format that has rows of named segments of n game ticks length with control inputs per segment. Use that extensively for A/B testing of visuals and performance while working on the game code.
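As a sketch, a script format like that can take only a few lines to parse (the exact syntax below is my guess, not the commenter's actual format):

```python
def parse_segments(text):
    """Parse 'name ticks [inputs...]' rows into (name, ticks, inputs) tuples."""
    segments = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        name, ticks, *inputs = line.split()
        segments.append((name, int(ticks), inputs))
    return segments

script = """
# name   ticks  held inputs
warmup   120
orbit    600    rotate_left
zoom     300    forward zoom_in
"""
print(parse_segments(script))
```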
Would you mind sharing a link to some of these casual games? I ask cuz I'm also interested in how vibe coding can make game development easier.
We had such a vibrant indie game scene when Adobe Flash was around, and since then nothing's really touched that level of ease of development. I think vibe coding is the first tool that actually exceeds it.
App stores require screenshots, but generating NUMBER_OF_SCREEN_SIZES × NUMBER_OF_LOCALIZATIONS images can be a chore.
In the past I wrote my own scripts for that; today, tools like Fastlane[1] help.
I use Fastlane for my logic puzzle game Nonoverse[2]; you can see sample screenshots on its App Store page.
I also automated App Preview video recording, complete with multiple scenes. If anyone wants to read more let me know, perhaps this is a good topic for an article.
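For anyone curious, the heart of a fastlane `snapshot` setup is a small Snapfile; something like this (the scheme name is a placeholder, while `devices`, `languages`, and `output_directory` are real Snapfile options):

```ruby
# Snapfile: run the UI-test scheme once per device x language combination
devices([
  "iPhone 15 Pro",
  "iPad Pro (12.9-inch) (6th generation)"
])
languages(["en-US", "de-DE", "ja"])
scheme("MyAppUITests")           # placeholder scheme name
output_directory("./fastlane/screenshots")
clear_previous_screenshots(true)
```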
Can we please agree that the OS should not send any event to applications while a screenshot is being made?
It is very annoying if you press a screenshot button and suddenly menus disappear. Or much worse, the application sends a "screenshot taken" message back to the social media platform.
Super cool! Love that you can declare the screenshots inline in the markdown document.
This is neat. I wrote https://github.com/zombocom/rundoc. It has a similar feature. The main driver is to produce tutorials so it also puts the output of commands run back in the document.
Wouldn’t a real live-render approach work in this case? Have a live preview of your tool inside a rectangle. If the tool is lightweight, it should be optimal visually: it will respect browser rendering settings like accessibility parameters or custom addons.
I’ve wondered about generating screenshots from the e2e test run, even keeping docs/ together in the same repo, so that when you update the documentation and need a new screenshot, you add a new test.
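One lightweight way to keep the two in lockstep is a naming convention shared by the test suite and docs/, so every documented screenshot is owned by exactly one e2e test. A sketch (the paths and the convention itself are made up):

```python
from pathlib import Path

DOCS_IMAGES = Path("docs/images")

def screenshot_path(test_name: str) -> str:
    # Convention: test_login_flow -> docs/images/login-flow.png.
    # The e2e runner saves its capture here; the docs reference the same path.
    slug = test_name.removeprefix("test_").replace("_", "-")
    return (DOCS_IMAGES / f"{slug}.png").as_posix()

print(screenshot_path("test_login_flow"))  # docs/images/login-flow.png
```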
Nice! I actually started to build this exact thing a couple years back, and ended up abstracting it out to something more generic with https://picshift.io/. That said, I still love the screenshot use case - the original name of this project was ScreenSync ;)
Bravo. This is incredibly useful, and really improves the quality of documentation, especially for many applications whose design and UI are always in flux.
I don't know why, but from the title I was sure this would involve something like a webserver that updates a static file it serves via some external webhook.
Why wouldn't you want to version the screenshots along with the text? That's a feature, not a bug.
At best, this seems to require an unpublished draft state for all automatic screenshot updates until explicitly approved so that mistakes don't leak out to everyone else.
At worst, this is an unrealistic level of discipline to keep things in sync that is far greater than just updating the docs normally with the next major version release.
My alternative suggestion would be to make sure your test suite takes screenshots with every build. They're already perfectly organized and in the context of what you're documenting.
I wrote a GUI app once that ran on a safety-critical platform. I ended up stuffing a rendering of the GUI (rendered offscreen) into shmem at, I think, 24 Hz, and rendered that screenshot into the safety-critical application. I passed clicks (no typing for this GUI) back from the statically rendered image, updating on a cadence, to the offscreen GUI.
Worked well. Not quite the same as this, but that’s what this reminds me of.
I don't think I follow. What is that giving you that you wouldn't get by just having the user click in the application and see its real interface directly? Or are you saying you were embedding one application inside another?
The rendering of the safety-critical application was written completely in C using OpenGL SC (https://www.khronos.org/openglsc/) to render the GUI, and had to pass a formal validation suite (MISRA was the big one, IIRC). Simply put, the safety-critical application essentially was not allowed to "fail in an unsafe manner" in the DO-178 sense. Using JavaScript or some C++ GUI library was very much out of the question.
Fortunately, this was not an airborne platform, so failing safely was much simpler than what a true aviation stack or medical stack would need to do.
Awesome!
Now you could even go a step further and add Satori to the pipeline to add content to the fresh screenshot. This way annotations could easily be added to the screenshot.
nice, embedding the capture instructions right in the markdown as comments is a dead-simple solution that'll age way better than any fancy external tooling
> Then you change the UI slightly – tweak a colour, move a button, update some copy – and suddenly every screenshot that includes that element is stale. You know they’re stale. Your users might not notice, but you know, and it gnaws at you.
And for those of you: https://XCancel.com/search?q=%23vibejam&src=typed_query
In some cases it does. Which engine?
[1]: https://fastlane.tools/
[2]: https://apps.apple.com/us/app/nonoverse-nonogram-puzzles/id6...
> 100% open source under the MIT license
See: https://docs.fastlane.tools/
It doesn’t support App Preview automation, this is something that I had to script myself.
For my desktop app I created a solution that generates screenshots in multiple languages, light/dark mode, removes noise and adds Windows/macOS window frames.
Wrote about it here: https://maxschmitt.me/posts/cakedesk-website-redesign#screen...
It's currently a separate script (which is a pain to maintain). I should look into making it a part of the markdown/mdx. Thanks for the inspiration!
https://github.com/ericfortis/mockaton/tree/main/pixaton-tes...
NoMethodError at /self-updating-screenshots: undefined method `name' for nil:NilClass
(Ruby: title-for, in handle, line 12; Web: GET interblah.net/self-updating-screenshots)
followed by a very detailed traceback when I try to access the page
The docs for Textual (TUI library for Python) build screenshots along with the docs. Technically not really screenshots, they are SVGs, but principle is the same. They never get out of date.
https://textual.textualize.io/widgets/markdown/#example
If author sees this: Turn off Django debug mode
You can get responsive design in "screenshots" with this. Super nice, and people can copy paste, look at the code (useful for dev tools), etc.
The users WILL DEFINITELY notice if the screenshots don't match what they have in front of their eyes.
https://github.com/simonw/shot-scraper
F
Related: Sabotaging projects by overthinking, scope creep, and structural diffing – https://news.ycombinator.com/item?id=47890799