Highlights
- Pro
Popular repositories
- flash-attention-w-tree-attn (Public, forked from Dao-AILab/flash-attention)
  Fast and memory-efficient exact attention
  Python · 4 stars
- nano-patch-sequence-pack (Public)
  Just a few lines to combine 🤗 Transformers, Flash Attention 2, and torch.compile: simple, clean, fast ⚡ (a rough sketch of this combination follows the list)
  Python · 2 stars
- fast-hadamard-transform (Public, forked from Dao-AILab/fast-hadamard-transform)
  Fast Hadamard transform in CUDA, with a PyTorch interface
  C · 1 star
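
As a rough illustration of what the nano-patch-sequence-pack description points at, here is a minimal sketch using only the public 🤗 Transformers and PyTorch APIs. The model name is a placeholder, and the repository's actual patch may go further than this (e.g. sequence packing); this is not the repo's code.

```python
# Minimal sketch (placeholder model id): load a 🤗 Transformers model with
# Flash Attention 2 as the attention backend, then wrap it with torch.compile.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B"  # placeholder; any FA2-compatible checkpoint works

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,               # Flash Attention 2 needs fp16/bf16
    attn_implementation="flash_attention_2",  # use the flash-attn kernels
    device_map="cuda",
)

model = torch.compile(model)  # compile the rest of the model graph

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```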

