Nous Research Open-Sources Lighthouse Attention with 17x Speedup on B200 for 512K Context

According to Beating, Nous Research has open-sourced Lighthouse Attention, a long-context training mechanism that achieves a 17x speedup when processing 512K-token sequences on a single B200 GPU, and 1.4–1.7x end-to-end training acceleration at 98K context length. The technique uses a coarse-to-fine approach: it first scans compressed summaries at multiple levels to identify the core segments, then passes only the filtered text to FlashAttention for exact attention. In tests on a 5.3-billion-parameter model trained on 50 billion tokens, the approach not only reduced training time but also matched or exceeded the performance of a full-attention training baseline.
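The article does not publish implementation details, but the coarse-to-fine idea it describes can be sketched as follows: pool the keys into per-block summaries, score the summaries against each query to pick the most relevant blocks, then run exact softmax attention only over the tokens in those blocks. Everything below (function name, block pooling by mean, top-k selection) is an illustrative assumption, not Nous Research's actual code; a real implementation would dispatch the fine stage to FlashAttention kernels.

```python
import numpy as np

def coarse_to_fine_attention(q, k, v, block_size=4, top_k=2):
    """Illustrative sketch of a coarse-to-fine attention pass.

    Coarse stage: mean-pool keys into one summary vector per block and
    score each summary against each query. Fine stage: keep only the
    top-k scoring blocks per query and run dense softmax attention over
    just those tokens. Hypothetical design, not the released code.
    """
    n, d = k.shape
    n_blocks = n // block_size

    # Coarse stage: one summary key per block (mean pooling).
    k_blocks = k[: n_blocks * block_size].reshape(n_blocks, block_size, d)
    summaries = k_blocks.mean(axis=1)                    # (n_blocks, d)
    block_scores = q @ summaries.T / np.sqrt(d)          # (m, n_blocks)

    out = np.zeros((q.shape[0], v.shape[1]))
    for i, scores in enumerate(block_scores):
        keep = np.argsort(scores)[-top_k:]               # top-k blocks per query
        idx = np.concatenate(
            [np.arange(b * block_size, (b + 1) * block_size) for b in keep]
        )
        # Fine stage: exact attention restricted to the kept tokens
        # (this is where FlashAttention would be called in practice).
        logits = q[i] @ k[idx].T / np.sqrt(d)
        w = np.exp(logits - logits.max())
        w /= w.sum()
        out[i] = w @ v[idx]
    return out
```

With `top_k` equal to the total number of blocks, this reduces to ordinary full attention; the speedup comes from keeping `top_k` small so the fine stage touches only a fraction of the 512K-token context.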
