IMO the denoising looks rather unnatural and emphasizes the remaining artifacts, especially the color fringing around details. Personally I'd leave it turned off. Also, with respect to the demosaic step, I wonder if it's possible to implement a version of RCD [1] for improved resolution without the artifacts that seem to result from the current process.
It's neat that it captured the shadow of the subway train, too, which arrived just ahead of the train itself. This virtual shadow is thrown against a sort of extruded tube with the profile of the slice of track and wall that the slit was pointed at.
The video [https://www.magyaradam.com/wp/?page_id=806] blew my mind. I can only imagine he reconstructed the video by first reconstructing one frame's worth of slits, then shifting them over by one column and adding the next slit's data.
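If that guess is right, the reconstruction is basically a sliding window over the slit columns. Here's a rough sketch of the idea in Python (the array shape, the 1920-column frame width, and the frames_from_slits name are all made up for illustration, not anything from his write-up):

    # Minimal sketch, assuming `slits` is a NumPy array of shape
    # (height, num_slits, 3), where every column is one slit capture,
    # ordered by time.
    import numpy as np

    def frames_from_slits(slits: np.ndarray, frame_width: int = 1920):
        """Yield frames by sliding a frame-wide window one slit at a time."""
        height, num_slits, channels = slits.shape
        for t in range(num_slits - frame_width + 1):
            # Each new frame drops the oldest column and picks up the newest slit.
            yield slits[:, t:t + frame_width, :]

Each successive frame would then differ from the previous one by exactly one column, which would give that sliding look.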
Imagine a camera that only takes pictures one pixel wide. Now make it take a picture, for example, 60 times a second and append every pixel-wide image together in order. That's what's happening here: it's a bunch of one-pixel-wide images ordered by time. The background stays still because it's always the same area captured by that one pixel, resulting in the streaked lines, but moving objects end up looking correct because they're spread out over time.
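In code it's little more than stacking columns. A toy sketch of that idea (read_slit here is a hypothetical stand-in for whatever grabs one column off the sensor, not a real API):

    import numpy as np

    def build_slitscan(read_slit, num_captures: int) -> np.ndarray:
        """Stack single-column captures side by side, in time order."""
        columns = [read_slit() for _ in range(num_captures)]  # each: (height, 3)
        # The static background repeats identically in every column (the streaks);
        # anything moving past the slit gets spread out along the time axis.
        return np.stack(columns, axis=1)  # result shape: (height, time, 3)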
At first, I thought this explanation would make sense, but then I read back what I just wrote and I'm not sure it really does. Sorry about that.
Yeah, like walking past a door that's cracked open just a bit, so you can only see a slit of the office inside. Now reconstruct the whole office from that traveling slit you saw.
Okay I was stumped about how this works because it's not explained, as far as I can tell. But I guess the sensor array has its long axis perpendicular to the direction the train is traveling.
You use a single vertical line of sensors and resample "continuously". When doing this with film, the aperture is a vertical slit and you continuously advance the film during the exposure.
For "finish line" cameras, the slit is located at the finish line and you start pulling film when the horses approach. Since the exposure is continuous, you never miss the exact moment of the finish.
Line scan sensors are basically just scanners; heck, people make 'em out of scanners.
Usually the issue is that they need rather still subjects, but in this case, rather than the sensor doing a scanning sweep, they're just capturing the subject as it moves by, keeping the background pixels static.
Absolutely fascinating stuff! Thank you so much for adding detailed explanations of the math involved and your process. Always wondered how it worked but never bothered to look it up until today. Reading your page pushed it beyond idle curiosity for me. Thanks for that. And thanks also to HN for always surfacing truly interesting reading material on a daily basis!
What's your FPS/LPS in this setup? I've experimented with similar imaging with an ordinary camera, but LPS was limiting, and I know line-scan machine vision cameras can output some amazing numbers, like 50k+ LPS.
Probably would be worth asking a train driver about this, e.g. "where is there a place with smooth track and constant speed?"
[1] https://github.com/LuisSR/RCD-Demosaicing
Very cool.
Lightbulb on.
Aha achieved. (Don’t you love Aha? I love Aha.)
For "finish line" cameras, the slit is located at the finish line and you start pulling film when the horses approach. Since the exposure is continuous, you never miss the exact moment of the finish.