6 comments

  • Jaxkr 3 hours ago
    This guy is a genius; for those who don’t know he also brought us ControlNet.

    This is the first decent video generation model that runs on consumer hardware. Big deal and I expect ControlNet pose support soon too.

    • artninja1988 12 minutes ago
      He also brought us IC-Light! I wonder why he's still contributing to open source... Surely all the big companies have made him huge offers. He's so talented
    • msp26 2 hours ago
      I haven't bothered with video gen because I'm too impatient but isn't Wan pretty good too on regular hardware?
      • dragonwriter 6 minutes ago
Wan 2.1 (and Hunyuan and LTXV, in descending order of overall video quality, though each has unique strengths) works well on consumer hardware for short videos, single-digit seconds at their usual frame rates (16 fps for Wan, 24 for LTXV, I forget for Hunyuan), though slowly, except for LTXV. But this blows them entirely out of the water on the length it can handle, so if it does so with coherence and quality across general prompts (especially if it's competitive with Wan and Hunyuan on trainability for concepts it may not handle normally), it's potentially a radical game changer.
      • dewarrn1 1 hour ago
        LTX-Video isn't quite the same quality as Wan, but the new distilled 0.9.6 version is pretty good and screamingly fast.

        https://github.com/Lightricks/LTX-Video

      • vunderba 1 hour ago
Wan 2.1 is solid, but you start to get pretty bad continuity/drift issues when genning more than 81 frames (about 5 seconds of video), whereas FramePack lets you generate a minute or more.
  • IshKebab 3 hours ago
    Funny how it really wants people to dance. Even the guy sitting down for an interview just starts dancing sitting down.
    • jonas21 1 hour ago
      Presumably they're dancing because it's in the prompt. You could change the prompt to have them do something else (but that would be less fun!)
    • Jaxkr 2 hours ago
There's a massive open TikTok training set that lots of video researchers use.
  • ZeroCool2u 4 hours ago
    Wow, the examples are fairly impressive and the resources used to create them are practically trivial. Seems like inference can be run on previous generation consumer hardware. I'd like to see throughput stats for inference on a 5090 too at some point.
  • WithinReason 2 hours ago
    Could you do this spatially as well? E.g. generate the image top-down instead of all at once
  • modeless 2 hours ago
    Could this be used for video interpolation instead of extrapolation?
    • yorwba 2 hours ago
      Their "inverted anti-drifting" basically amounts to first extrapolating a lot and then interpolating backwards.
  • fregocap 3 hours ago
    looks like the only motion it can do...is to dance
    • jsolson 1 hour ago
      It can dance if it wants to...

      It can leave LLMs behind...

      'Cause LLMs don't dance, and if they don't dance, well, they're no friends of mine.

      • rhdunn 40 minutes ago
        That's a certified bop! ;) You should get elybeatmaker to do a remix!

        Edit: I didn't realize that this was actually a reference to Men Without Hats - The Safety Dance. I was referencing a different parody/allusion to that song!

      • MyOutfitIsVague 20 minutes ago
        The AI Safety dance?