Sakana AI and Nvidia have open-sourced TwELL, a sparse data format that lets H100 GPUs skip 80% of unnecessary computation in large language models without sacrificing accuracy. The approach delivers up to 30% faster inference and 24% faster training on H100s while reducing peak memory usage. In tests on a 1.5-billion-parameter model, lightweight regularization during training reduced the share of active neurons to below 2%, with no performance degradation across seven downstream tasks.
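The announcement does not specify which "lightweight regularization" TwELL uses; one common way to drive activation sparsity during training is an L1 penalty on hidden activations, sketched below with NumPy. The layer sizes, penalty weight `lambda_sparse`, and the penalty itself are illustrative assumptions, not details from the release.

```python
import numpy as np

# Hypothetical sketch: an L1 penalty on hidden activations is one
# common "lightweight regularization" for inducing neuron sparsity.
# The source does not specify TwELL's actual regularizer.

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Tiny MLP hidden layer: batch of 8 tokens, 4096 hidden neurons.
W = rng.normal(scale=0.02, size=(256, 4096))
x = rng.normal(size=(8, 256))
h = relu(x @ W)

# Penalty term that would be added to the task loss during training;
# it pushes most activations toward exactly zero.
lambda_sparse = 1e-3
sparsity_penalty = lambda_sparse * np.abs(h).sum() / h.size

# Fraction of active (nonzero) neurons: skipping the zeros is what a
# sparse format exploits to save compute at inference time.
active_fraction = (h > 0).mean()
print(f"active fraction: {active_fraction:.2%}")
print(f"penalty term:    {sparsity_penalty:.6f}")
```

Once most activations are exactly zero, a sparse storage format can record only the nonzero neurons and their indices, which is where the claimed inference and memory savings come from.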