FlashAttention-T: Towards Tensorized Attention

(dl.acm.org)

67 points | by matt_d 3 hours ago

7 comments

  • jmward01 1 hour ago
    I built guided window attn (literally predict the position of the window) a while ago and that works great. Why are we still stuck on any form of attn that looks at the entire context in any meaningful way? Do humans work this way? Do I need a whole book to predict the next word? Who out there is working on really new unique ways to deal with infinite history, other than me of course :)
    • cs702 42 minutes ago
      > Who out there is working on ... infinite history?

      Many people are still working on improving RNNs, mostly in academia. Examples off the top of my head:

      * RWKV: https://arxiv.org/abs/2006.16236 / https://arxiv.org/abs/2404.05892 / https://arxiv.org/abs/2305.13048

      * Linear attention: https://arxiv.org/abs/2503.14456

      * State space models: https://arxiv.org/abs/2312.00752 / https://arxiv.org/abs/2405.21060

      * Linear RNNs: https://arxiv.org/abs/2410.01201

      Industry OTOH has gone all-in on Transformers.

      • jmward01 16 minutes ago
        RNNs have two huge issues:

        - Long context: recurrence degrades the signal, for the same reason that 'deep' NN architectures don't go much past 3-4 layers before you need residual connections and the like.

        - (This is the big one) training performance is terrible, since you can't parallelize them across a sequence like you can with causal masked attn in transformers.

        On the huge benefit side though you get:

        - Guaranteed state size, so perfect batch packing, perfect memory use, easy load/unload from a batch, and O(1) token generation, which generally means massive performance gains in inference.

        - Unlimited context (well, no need for a concept of a position embedding or similar system).

        Taking the best of both worlds is definitely where it's at for the future: an architecture that can train parallelized across the sequence, has a fixed state size so you can load/unload and pack batches perfectly, unlimited context (with perfect recall), etc. That is the real architecture to go for.
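
        A rough numpy sketch of that inference-cost difference (my illustration, not from the thread or paper; function names and shapes are made up): a recurrent decode step touches a fixed-size state, while a causal-attention decode step has to scan an ever-growing KV cache.

          import numpy as np

          def rnn_step(state, x, W_h, W_x):
              # One recurrent decode step: O(1) per token, fixed-size state regardless of history length.
              return np.tanh(state @ W_h + x @ W_x)

          def attention_step(q, K_cache, V_cache):
              # One causal-attention decode step: cost and memory grow with the cached sequence length n.
              scores = K_cache @ q / np.sqrt(q.shape[-1])   # n dot products against the cache
              w = np.exp(scores - scores.max())
              w /= w.sum()
              return w @ V_cache                            # weighted sum over n cached values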

      • viraptor 22 minutes ago
        > Industry OTOH has gone all-in on Transformers.

        It's so annoying. Transformers keep improving and recurrent networks are harder to train, so until we hit some real wall, companies don't seem eager to diverge. It's like lithium batteries improving easily enough that it was never profitable to work on sodium ones, even though, unfortunately, we want the sodium ones to be better.

  • simianwords 1 hour ago
    OT but instead of quadratic attention can we not have n^10 or something crazier? I feel like we are limiting the intelligence just to save cost. But I can imagine that there might be some questions that may be worth paying higher cost for.

    I feel like n^10 attention can capture patterns that lower complexity attention may not. So it seems arbitrary that we have n^2 attention.

    • crystal_revenge 31 minutes ago
      What you're missing is that there's no need to do extra work in the kernel smoothing step (what attention essentially is) because all the fancy transformation work is already happening in learning the kernel.

      The feedforward networks prior to the attention layer are effectively learning sophisticated kernels. For those unfamiliar: a kernel is just a generalization of the dot product, which is the most fundamental way of defining "similarity" between two points.

      By learning a kernel, the transformer is learning the best way to define what "similar" means for the task at hand, and then we simply apply some basic smoothing over the data. This handles all sorts of interesting ways to compare points, and that comparison allows every point to contribute a little bit of information.

      Anything you could hope to achieve by performing more comparisons would be better solved by a better similarity function.
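
      A minimal numpy sketch of that view (my illustration, not from the paper): the learned projections define the kernel, and the attention step itself is just normalized kernel smoothing over the values.

        import numpy as np

        def kernel_smoothing_attention(Q, K, V):
            # Attention as Nadaraya-Watson smoothing: each output is a kernel-weighted
            # average of the values, with kernel exp(q . k / sqrt(d)) shaped by the learned projections.
            d = Q.shape[-1]
            scores = Q @ K.T / np.sqrt(d)                         # pairwise similarities under the kernel
            w = np.exp(scores - scores.max(axis=-1, keepdims=True))
            w /= w.sum(axis=-1, keepdims=True)                    # normalize into smoothing weights
            return w @ V                                          # smooth the values

        rng = np.random.default_rng(0)
        x = rng.normal(size=(5, 8))                               # 5 tokens, 8-dim representations
        Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))  # stand-ins for the learned "kernel"
        out = kernel_smoothing_attention(x @ Wq, x @ Wk, x @ Wv)
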

    • jsenn 59 minutes ago
      You can find papers discussing "cubic" attention, i.e. each token gets to interact with each pair of other tokens, but always in very theoretical settings with single-layer transformers on contrived synthetic tasks.

      Keep in mind that LLMs have many many layers, so they have plenty of opportunity to model higher-order interactions without needing to brute force every possible combination of 10 previous tokens, of which the vast majority will be useless. Empirically, even full "quadratic" attention is not always necessary, as evidenced by the existence of linear/sparse attention variants that perform almost as well.

    • storus 1 hour ago
      Aren't layers basically doing n^k attention? The attention block is n^2 because it allows 1 number per input/output pair. But nothing prevents you from stacking these on top of each other and getting k-th order "attentioness", with each layer encoding a different order.
    • noosphr 1 hour ago
      Yes, and it works in theory.

      Less so in practice. You saturate the memory of a B200 with a few dozen tokens on attentions higher than order 4. Training is even worse.

      To paraphrase Knuth: high order polynomials are much more unimaginably large than mere infinity.
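
      Back-of-envelope illustration (my numbers, per head and per layer, before activations, batching, or any training state): the order-k score tensor alone has n^k entries.

        def score_tensor_gb(n_tokens, order, bytes_per_entry=2):
            # Size of one order-k attention score tensor in fp16, per head per layer.
            return n_tokens ** order * bytes_per_entry / 1e9

        for order in (2, 3, 4, 5):
            print(order, round(score_tensor_gb(256, order), 4), "GB")
        # order 2: ~0.0001 GB, order 3: ~0.03 GB, order 4: ~8.6 GB,
        # order 5: ~2,200 GB -- far past a single GPU's HBM before training even starts.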

    • eldenring 1 hour ago
      This is a common way of thinking. In practice this type of thing is more like optimizing flop allocation. Surely with an infinite compute and parameter budget you could have a better model with more intensive operations.

      Another thing to consider is that transformers are very general computers. You can encode many, many more complex architectures in simpler, multi-layer transformers.

    • refulgentis 1 hour ago
      n^2 isn't a setting someone chose, it's a mathematical consequence of what attention is.

      Here's what attention does: every token looks at every other token to decide what's relevant. If you have n tokens, and each one looks at n others, you get n * n = n^2 operations.

      Put another way: n^2 is when every token gets to look at every other token. What would n^3 be? n^10?

      (sibling comment has the same interpretation as you, then handwaves that transformers can emulate more complex systems)

      • measurablefunc 1 hour ago
        There are lots of more complicated operations than comparing every token to every other token & the complexity increases when you start comparing not just token pairs but token bigrams, trigrams, & so on. There is no obvious proof that all those comparisons would be equivalent to the standard attention mechanism of comparing every token to every other one.
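
        A hypothetical order-3 variant, just to make the counting concrete (the pair feature here is an arbitrary choice of mine, not a proposal from the paper): scoring every (query, key, key) triple gives an n^3 tensor instead of n^2.

          import numpy as np

          def third_order_scores(Q, K):
              # One score per (query, key_i, key_j) triple: an (n, n, n) tensor rather than (n, n).
              pair = K[:, None, :] * K[None, :, :]          # (n, n, d) features for every key pair
              return np.einsum('qd,ijd->qij', Q, pair)      # (n, n, n) scores
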
        • vlovich123 1 hour ago
          While you are correct at a higher level, comparing bigrams/trigrams would be less compute, not more, because there are fewer of them in a given text
        • refulgentis 40 minutes ago
          That skips an important part: the "deep" in "deep learning".

          Attention already composes across layers.

          After layer 1, you're not comparing raw tokens anymore. You're comparing tokens-informed-by-their-context. By layer 20, you're effectively comparing rich representations that encode phrases, relationships, and abstract patterns. The "higher-order" stuff emerges from depth. This is the whole point of deep networks, and attention.

          TL;DR for rest of comment: people have tried shallow-and-wide instead of deep, it doesn't work in practice. (rest of comment fleshes out search/ChatGPT prompt terms to look into to understand more of the technical stuff here)

          A shallow network can approximate any function (universal approximation theorem), but it may need exponentially more neurons. Deep networks represent the same functions with way fewer parameters. There's formal work on "depth separation": functions that deep nets compute efficiently but that shallow nets need exponential width to match.

          Empirically, people have tried shallow-and-wide vs. deep-and-narrow many times, across many domains. Deep wins consistently for the same parameter budget. This is part of why "deep learning" took off; the depth is load-bearing.

          For transformers specifically, stacking attention layers is crucial. A single attention layer, even with more heads or bigger dimensions, doesn't match what you get from depth. The representations genuinely get richer in ways that width alone can't replicate.
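
          A small numpy sketch of that composition (illustrative only; the random projections are stand-ins for learned weights): by layer 2, the things being compared are already mixtures of the whole context, not raw tokens.

            import numpy as np

            def attend(X, W):
                # One single-head self-attention layer.
                Q, K, V = X @ W['q'], X @ W['k'], X @ W['v']
                S = Q @ K.T / np.sqrt(Q.shape[-1])
                A = np.exp(S - S.max(axis=-1, keepdims=True))
                A /= A.sum(axis=-1, keepdims=True)
                return A @ V

            rng = np.random.default_rng(0)
            X = rng.normal(size=(6, 16))                    # 6 raw token vectors
            W1, W2 = ({k: rng.normal(size=(16, 16)) for k in 'qkv'} for _ in range(2))
            H1 = attend(X, W1)   # layer 1 compares raw tokens; each row of H1 now mixes the whole context
            H2 = attend(H1, W2)  # layer 2's pairwise scores therefore involve higher-order token combinations
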

  • sigbottle 1 hour ago
    Oh wow, there's still work being done on Ampere?

    I was wondering - I've been thinking about switching to AI systems programming (I know, easy task), but from what I understand, industry cloud GPUs are the main winners, right? Nobody's going to pay me (assuming I even had the skills) to optimize for consumer GPUs?

    From what I understand, it's not just number + capacity + performance, it's literal core primitives. I don't think any of the "Blackwell" chips like the Grace one or RTX 5090 have, for example, SM pairs in their ISA? And likewise there are similar fundamental differences between consumer and cloud Hopper (where the majority of the perf is in the cloud one's ISA?)

    So I guess I'm wondering if I should buy a GPU myself or should I just rent on the cloud if I wanted to start getting some experience in this field. How do you even get experience in this normally anyways, do you get into really good schools and into their AI labs which have a lot of funding?

    • coolsunglasses 26 minutes ago
      I do CUDA for a living (not inference) and for the life of me (and a couple of LLMs for that matter) I cannot figure out what you mean by "SM pairs".

      Do you mean the coupled dies on stuff like the B200? An NVidia chip die has many SMs if so.

      Do you mean TMEM MMA cooperative execution? I'm guessing that must be it given what the paper is about.

    • vlovich123 53 minutes ago
      Look at the email addresses. If you'll recall, there's an embargo on China.
    • storus 1 hour ago
      I still have 2x NVLinked A6000 and they aren't that bad compared to a single RTX 6000 Pro.
    • Maxious 1 hour ago
      yep, https://github.com/poad42/cuda-fp8-ampere is another recent attempt at squeezing whatever's left from Ampere
  • semiinfinitely 2 hours ago
    Tri Dao isn't on the paper, is it even allowed to call it "FlashAttention"???
  • saagarjha 2 hours ago
    Less annoying link directly to the paper: https://dl.acm.org/doi/pdf/10.1145/3774934.3786425?download=...
  • verytrivial 59 minutes ago
    TL;DR: 5%-17% speedup from removing a bottleneck by juggling where on a GPU/compute core a computation is done during FlashAttention.
  • measurablefunc 3 hours ago
    [flagged]