27 comments

  • superfish 8 hours ago
    "Unsplash > Gen3C > The fly video" is nightmare fuel. View at your own risk: https://apple.github.io/ml-sharp/video_selections/Unsplash/g...
    • uwela 1 hour ago
      Goading companies into improving image and video generation by showing them how terrible they are is only going to make them go faster, and personally I’d like to enjoy the few moments I have left thinking that maybe something I watch is real.

      It will evolve into people hooked into entertainment suits most of the day, where no one has actual relationships or does anything of consequence, like some sad mashup of Wall-E and Ready Player One.

      If we’re lucky, some will want to meatspace with augmented reality.

      Maybe we’ll get really nice holovisions, where we can chat with virtual celebrities.

      Who needs that?

      We’re already having to shoot up weight-loss drugs because we binge watch streaming all the time. We’ve all given up, assuming AI will do everything. What good will come from having better technology when technology is already doing harm?

    • Traubenfuchs 5 hours ago
      Early AI „everything turns into dog heads“ vibes. Beautiful.
    • schneehertz 5 hours ago
      san check, 1d10
    • ghurtado 7 hours ago
      Seth Brundle has entered the chat.
  • rcarmo 1 hour ago
    Well, I got _something_ to work on Apple Silicon:

    https://github.com/rcarmo/ml-sharp (has a little demo GIF)

    I am looking at ways to approximate Gaussian splats without having to reinvent the wheel, but I'm a bit out of my depth since I haven't been paying a lot of attention to them in general.

    • 7moritz7 44 minutes ago
      The example doesn't look particularly impressive, to say the least. Look at the bottom 20%.
  • Leptonmaniac 7 hours ago
    Can someone ELI5 what this does? I read the abstract and tried to find differences in the provided examples, but I don't understand (and don't see) what the "photorealistic" part is.
    • emsign 7 hours ago
      Imagine history documentaries where they take an old photo and free objects from the background and move them round giving the illusion of parallax movement. This software does that in less than a second, creating a 3D model that can be accurately moved (or the camera for that matter) in your video editor. It's not new, but this one is fast and "sharp".

      Gaussian splatting is pretty awesome.

      • kurtis_reed 6 hours ago
        What are free objects?
        • ferriswil 6 hours ago
          The "free" in this case is a verb. The objects are freed from the background.
          • Retr0id 5 hours ago
            Until your comment I didn't realise I'd also read it wrong (despite getting the gist of it). Attempted rephrase of the original sentence:

            Imagine history documentaries where they take an old photo, free objects from the background, and then move them round to give the illusion of parallax.

            • necovek 5 hours ago
              I'd suggest a different verb like "detach" or "unlink".
            • nashashmi 2 hours ago
              Free objects in the background.
              • Sharlin 1 hour ago
                No, free objects in the foreground, from the background.
            • tzot 5 hours ago
              > Imagine history documentaries where they take an old photo, free objects from the background

              Even using commas, if you leave the ambiguous “free” I suggest you prefix “objects” with “the” or “any”.

    • ares623 7 hours ago
      Takes a 2D image and lets you simulate moving the camera angle with a correct-ish parallax effect and proper subject isolation (it seems to be able to handle multiple subjects in the same scene as well)

      I guess this is what they use for the portrait mode effects.

    • zipy124 2 hours ago
      Basically depth estimation to split the scene into various planes, then inpainting to fill in the obscured parts of those planes, and then free movement of them to allow for parallax. Think of 2D side-scrolling games that use several background layers at different depths to give the illusion of motion and depth.
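
      A rough toy sketch of that side-scroller idea (not what the paper actually does; the inpainting step is waved away with np.roll):

        import numpy as np

        def parallax_frame(layers, depths, cam_shift_px):
            # layers: list of HxWx4 uint8 RGBA planes, ordered far -> near
            # depths: one representative depth per layer (arbitrary units)
            # cam_shift_px: virtual camera shift, in pixels at depth == 1
            h, w, _ = layers[0].shape
            out = np.zeros((h, w, 3), dtype=np.float32)
            for layer, depth in zip(layers, depths):
                dx = int(round(cam_shift_px / depth))   # nearer planes move more
                shifted = np.roll(layer, dx, axis=1)    # stand-in for real reprojection + inpainting
                alpha = shifted[..., 3:4] / 255.0
                out = out * (1 - alpha) + shifted[..., :3] * alpha
            return out.astype(np.uint8)
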
    • derleyici 7 hours ago
      It turns a single photo into a rough 3D scene so you can slightly move the camera and see new, realistic views. "Photorealistic" means it preserves real textures and lighting instead of a flat depth effect. Similar behavior can be seen with Apple's Spatial Scene feature in the Photos app: https://files.catbox.moe/93w7rw.mov
    • eloisius 7 hours ago
      From a single picture it infers a hidden 3D representation, from which you can produce photorealistic images from slightly different vantage points (novel views).
      • avaer 7 hours ago
        There's nothing "hidden" about the 3D representation. It's a point cloud (in meters) with colors, and a guess at the "camera" that produced it.

        (I am oversimplifying).
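
        To make "novel view" concrete: once you have that colored point cloud plus camera guess, a new view is basically reprojection through a shifted pinhole camera. A minimal sketch (not the repo's renderer; the intrinsics are assumed known):

          import numpy as np

          def reproject(points, colors, fx, fy, cx, cy, w, h, shift=(0.05, 0.0, 0.0)):
              # points: Nx3 in meters (camera coords), colors: Nx3 uint8
              p = points - np.asarray(shift)              # move the virtual camera
              z = p[:, 2]
              keep = z > 1e-3                             # drop points behind the camera
              u = (fx * p[keep, 0] / z[keep] + cx).astype(int)
              v = (fy * p[keep, 1] / z[keep] + cy).astype(int)
              ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
              img = np.zeros((h, w, 3), dtype=np.uint8)
              order = np.argsort(-z[keep][ok])            # far first, so near points overwrite
              img[v[ok][order], u[ok][order]] = colors[keep][ok][order]
              return img

        The holes you'd see where nothing was visible in the original photo are exactly what the Gaussians (and any generative fill) are there to cover.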

        • uh_uh 5 hours ago
          "Hidden" or "latent" in a context like this just means variables that the algo is trying to infer because it doesn't have direct access to them.
        • eloisius 7 hours ago
          Hidden in the sense of neural net layers. I mean intermediary representation.
          • avaer 7 hours ago
            Right.

            I just want to emphasize that this is not a NeRF, where the model magically produces an image from an angle and then you ask "ok but how did you get this?" and it throws up its hands and says "I dunno, I ran some math and I got this image" :D.

    • avaer 7 hours ago
      It makes your picture 3D. The "photorealistic" part is "it's better than these other ways".
    • carabiner 5 hours ago
      Black Mirror episode portraying what this could do: https://youtu.be/XJIq_Dy--VA?t=14. If Apple ran SHARP on this photo and compared it to the show, that would be incredible.

      Or if you prefer Blade Runner: https://youtu.be/qHepKd38pr0?t=107

    • p-e-w 7 hours ago
      Agreed, this is a terrible presentation. The paper abstract is bordering on word salad, the demo images are meaningless and don’t show any clear difference to the previous SotA, the introduction talks about “nearby” views while the images appear to show zooming in, etc.
  • supermatt 2 hours ago
    I note the lack of human portraits in the example cases.

    My experience with all of these solutions to date (including whatever Apple is currently using) is that, when viewed stereoscopically, the people end up looking like 2D cutouts against the background.

    I haven't seen this particular model in use stereoscopically so I can't comment as to its effectiveness, but the lack of a human face in the example set is likely a bit of a tell.

    Granted, they do call it "Monocular View Synthesis", but I'm unclear as to what its accuracy or real-world use would be if you can't combine two views to form a convincing stereo pair.

  • moondev 7 hours ago
    • delis-thumbs-7e 7 hours ago
      Interestingly, Apple's own models don't work on MPS. Well, I guess you just have to wait a few years...
    • matthewmacleod 6 hours ago
      This is specifically only for video rendering. The model itself works across GPU, CPU, and MPS.
    • diimdeep 4 hours ago
      No, the model works without CUDA; you then get a .ply that you can drop into a Gaussian splat viewer like https://sparkjs.dev/examples/#editor

      CUDA is only needed to render the side-scrolling video, but there are many ways to do other things with the result.
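
      If you'd rather poke at the result in Python than in a browser viewer, plyfile will read it; property names vary between splat exporters, so just dump the header (the filename here stands in for whatever the repo wrote out):

        from plyfile import PlyData  # pip install plyfile

        ply = PlyData.read("output.ply")
        for element in ply.elements:
            print(element.name, element.count, [p.name for p in element.properties])

        verts = ply["vertex"]
        print(verts["x"][:5], verts["y"][:5], verts["z"][:5])  # positions, assuming the usual x/y/z vertex properties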

  • nashashmi 2 hours ago
    I could not find any mention of it, but does this use generative AI? I can't imagine it accomplishing anything like this without using a large graphical model behind it.
  • yodon 8 hours ago
    > photorealistic 3D representation from a single photograph in less than a second
  • derleyici 7 hours ago
    Apple's Spatial Scene in the Photos app shows similar behavior, turning a single photo into a small 3D scene that you can view by tilting the phone. Demo here: https://files.catbox.moe/93w7rw.mov
    • Traubenfuchs 5 hours ago
      It's awful and often creates a blurry mess in the imagined space behind the object.

      Photoshop's content-aware fill could do as well or better many years ago.

  • avaer 7 hours ago
    Is there a link with some sample gaussian splat files coming from this model? I couldn't find it.

    Without that, it's hard to tell how cherry-picked the NVS video samples are.

    EDIT: I did it myself, if anyone wants to check out the result (caveat, n=1): https://github.com/avaer/ml-sharp-example

  • Dumbledumb 4 hours ago
    In Chapter D.7 they describe: "The complex reflection in water is interpreted by the network as a distant mountain, therefore the water surface is broken."

    This is really interesting to me because the model would have to encode the reflection as both the depth of the reflecting surface (for texture, scattering etc) as well as the "real depth" of the reflected object. The examples in Figure 11 and 12 already look amazing.

    Long tail problems indeed.

  • tartoran 7 hours ago
    Impressive, but something doesn't feel right to me... Possibly too much sharpness, possibly a mix of clichés, all amplified at once.
    • a3w 33 minutes ago
      For me, TMPI and SHARP look great. TMPI is consistently brighter, though, with me having no clue which is more correct.
  • arjie 8 hours ago
    This is incredibly cool. It's interesting how it fails in the section where you need to in-paint. SVC seems to do that better than all the rest, though not anywhere close to the photorealism of this model.

    Is there a similar flow to transform a video/photo/NeRF of a scene into a tighter, minimal-polygon approximation of it? The reason I ask is that it would make some things really cool. To make my baby monitor mount I had to break out the calipers and measure the pins and this and that, but if I could take a couple of photos and iterate in software, that would be sick.

    • necovek 3 hours ago
      You'd still need at least one real measurement: this might get proportions right if the background can be clearly separated, but the absolute size of an object can be worlds apart.
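
      That one measurement then amounts to a uniform rescale, e.g.:

        import numpy as np

        def rescale(points: np.ndarray, measured_mm: float, reconstructed_mm: float) -> np.ndarray:
            # Scale the whole reconstruction so one known real-world length matches.
            return points * (measured_mm / reconstructed_mm)

        # e.g. the calipers say a pin is 12.0 mm, but it spans 8.7 units in the model:
        # points = rescale(points, 12.0, 8.7)
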
  • Geee 8 hours ago
    This is great for turning a photo into a dynamic-IPD stereo pair + allows some head movement in VR.
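
    i.e. render the scene twice with the camera slid half the IPD each way; a sketch, where render_view is a stand-in for whatever splat renderer you use:

      import numpy as np

      def stereo_pair(render_view, ipd_m=0.063):
          # render_view: callable taking a 4x4 camera pose matrix -> image
          # ipd_m: interpupillary distance in meters ("dynamic IPD" just varies this)
          def eye_pose(offset_x):
              pose = np.eye(4)
              pose[0, 3] = offset_x      # translate along the camera's x axis
              return pose
          return render_view(eye_pose(-ipd_m / 2)), render_view(eye_pose(+ipd_m / 2))
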
    • SequoiaHope 8 hours ago
      Ah and the dynamic IPD component preserves scale?
  • pmontra 2 hours ago
    So Deckard got lucky that the picture enhancement machine hallucinated the correct clue? But that was bound to happen 6 years ago, no AI yet.
  • brcmthrowaway 8 hours ago
    So this is the secret sauce behind Cinematic mode. The fake bokeh insanity has reached its climax!
    • duskwuff 8 hours ago
      As well as their "Spatial Scene" mode for lock screen images, which synthesizes a mild parallax effect as you move the phone.
      • Terretta 7 hours ago
        It's available for everyday photos, portraits, everything, not just lock screens.
      • spike021 7 hours ago
        You can also press the button while viewing a photo in the Photos app to see this.
  • remh 7 hours ago
    • mvandermeulen 7 hours ago
      I thought this was going to be the Super Troopers version
  • harhargange 7 hours ago
    TMPI looks just as good if not better.
    • wfme 7 hours ago
      Have a look through the rest of the images. TMPI has some pretty obvious shortcomings in a lot of them.

      1. Sky looks jank
      2. Blurry/warped behind the horse
      3. The head seems to move a lot more than the body. You could argue that this one is desirable
      4. Bit of warping and ghosting around the edges of the flowers. Particularly noticeable towards the top of the image.
      5. Very minor, but the flowers move as if they aren't attached to the wall

    • jjcm 7 hours ago
      Disagree - look at the sky in the seaweed shot. It doesn't quite get the depth right in anything, and the edges of things look off.
      • shwaj 7 hours ago
        Agreed. The head of the fly also seems to have weird depth.
  • BoredPositron 3 hours ago
    The paper is just a word salad and it's not better than previous sota? I might be missing a key element here.
  • yieldcrv 3 hours ago
    I want to see it with people.
  • codebyprakash 2 hours ago
    Quite cool!
  • yodon 8 hours ago
    See also Spaitial [0], which today announced full 3D environment generation from a single image.

    [0] https://www.spaitial.ai/

    • boguscoder 7 hours ago
      Requires email to view anything, that’s sad
    • avaer 5 hours ago
      The best I've seen so far is Marble from World Labs, though that gives you a full 360 environment and takes several minutes to do so.
    • dag11 7 hours ago
      I'm confused, does it actually generate environments from photographs? I can't view the galleries since I didn't sign up for emails but all of the gallery thumbnails are AI, not photos.
      • jrflowers 6 hours ago
        > I'm confused, does it actually generate environments from photographs?

        It’s a website that collects people’s email addresses

    • andsoitis 7 hours ago
      Why are all their examples of rooms?

      Why no landscape or underwater scenes or something in space, etc.?

      • jaccola 7 hours ago
        Constrained environments are much simpler.

        I believe this company is doing image (or text) -> off-the-shelf image model to generate more views -> some variant of Gaussian splatting.

        So they aren't really "generating" the world as one might imagine.

  • diimdeep 7 hours ago
    Works great. The model file is 2.8 GB; on an M2, rendering took a few seconds. The result is a Gaussian .ply file, but the repo implementation requires a CUDA card to render video, so I used one of the WebGL live renderers from here: https://github.com/scier/MetalSplatter?tab=readme-ov-file#re...
  • benatkin 8 hours ago
    That is really impressive. However, it was a bit confusing at first because in the koala example at the top, the zoomed in area is only slightly bigger than the source area. I wonder why they didn't make it 2-3x as big in both axes like they did with the others.
  • calvinmorrison 8 hours ago
    I understand AI for reasoning, knowledge, etc. I haven't figured out why anyone wants to spend money on this visual and video stuff. It just seems like a bad idea.
    • netsharc 1 hour ago
      Photo apps on phones (can you still call them cameras?) already have a lot of "AI" to enhance photos and videos taken. Some of it is technological necessity, since you're capturing something through a tiny hole, a lot of it is sexying it up to appeal to people, because hey, people would prefer a cinema-quality depiction of their memories rather than the reality...
    • accurrent 8 hours ago
      Simulation. It takes a lot of effort today to bring up simulations in various fields. 3D programming is very nontrivial and asset development is extremely expensive. If I have a workspace I can take a photo of and use to generate a 3D scene, I can then use it in simulations to test ideas out. This is particularly useful in robotics and industrial automation already.
      • jijijijij 3 hours ago
        I don't see any examples of 3D scene information usable for simulation. If you want to simulate something hitting a table, you need the whole table (surface) in space, not just some spatial illusion effect extrapolated from an image of a table. I also think modelling the 3D objects for simulation is the least expensive part of a simulation... the simulation is the expensive thing.

        I doubt this will be useful for robotics or industrial automation, where you need an actual spatial, or functional understanding of the object/environment.

        • accurrent 1 hour ago
          With research like this you need to start somewhere. The fact that we can get 3D information helps. There are people looking into making splats capture collision information [1].

          I have worked on simulation and in my day job do a lot of simulation. While physics is often hard and expensive, you only need to write the code once.

          Assets? You need to commission 3D artists and then spend hours wrangling file formats. It's extremely tedious. If we could take a photo and extract meshes (roughly like the sketch below), I'm sure we'd have a much easier time.

          [1] https://trianglesplatting.github.io/
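
          Rough sketch of that extraction step with Open3D, assuming the output .ply loads as a plain colored point cloud (splat-specific attributes would just be ignored):

            import open3d as o3d  # pip install open3d

            pcd = o3d.io.read_point_cloud("scene.ply")   # positions (+ colors) from the reconstruction
            pcd.estimate_normals(
                search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))

            # Poisson reconstruction, then decimate to a sim-friendly triangle budget.
            mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
            mesh = mesh.simplify_quadric_decimation(target_number_of_triangles=20000)
            o3d.io.write_triangle_mesh("scene_collision.obj", mesh)

          You'd of course still only get the visible surface from a single photo, which is the limitation raised above.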

    • rv3392 7 hours ago
      This specific paper is pretty different to the kind of photo/video generation that has been hyped up in recent years. In this case, I think this might be what they're using for the iOS spatial wallpaper feature, which is arguably useless but is definitely an aesthetic differentiator to Android devices. So, it's indirectly making money.
    • re-thc 8 hours ago
      Do people not spend on entertainment? Commercials? It's probably less of a bad idea than knowledge. AI giving a bad visual has fewer negatives than giving the wrong knowledge, leading to the wrong decision.