[AINews] OpenAI adopts MCP
This is AI News! an MVP of a service that goes thru all AI discords/Twitters/reddits and summarizes what people are talking about, so that you can keep up without the fatigue. Signing up here opts you in to the real thing when we launch it 🔜
MCP is all you need.
AI News for 3/25/2025-3/26/2025. We checked 7 subreddits, 433 Twitters and 29 Discords (228 channels, and 4998 messages) for you. Estimated reading time saved (at 200wpm): 467 minutes. You can now tag @smol_ai for AINews discussions!
Amid all the 4o Ghibli memes, you could be forgiven for missing today's technical update: OpenAI announced MCP support:
We attempted to articulate Why MCP Won in a recent Latent Space article.
Special Shoutout: Swyx will be curating the Data Council AI Engineering Track in Oakland on Apr 22. You can use code LATENTSPACE20 for a little discount.
Table of Contents
- AI Twitter Recap
- AI Reddit Recap
- AI Discord Recap
- PART 1: High level Discord summaries
- LMArena Discord
- Perplexity AI Discord
- Cursor Community Discord
- OpenAI Discord
- Unsloth AI (Daniel Han) Discord
- OpenRouter (Alex Atallah) Discord
- Interconnects (Nathan Lambert) Discord
- LM Studio Discord
- Nous Research AI Discord
- Notebook LM Discord
- Yannick Kilcher Discord
- Modular (Mojo 🔥) Discord
- MCP (Glama) Discord
- GPU MODE Discord
- Latent Space Discord
- Eleuther Discord
- LlamaIndex Discord
- Cohere Discord
- DSPy Discord
- Torchtune Discord
- LLM Agents (Berkeley MOOC) Discord
- Nomic.ai (GPT4All) Discord
- tinygrad (George Hotz) Discord
- PART 2: Detailed by-Channel summaries and links
- LMArena ▷ #general (910 messages🔥🔥🔥):
- Perplexity AI ▷ #announcements (1 messages):
- Perplexity AI ▷ #general (622 messages🔥🔥🔥):
- Perplexity AI ▷ #sharing (5 messages):
- Perplexity AI ▷ #pplx-api (2 messages):
- Cursor Community ▷ #general (608 messages🔥🔥🔥):
- OpenAI ▷ #ai-discussions (257 messages🔥🔥):
- OpenAI ▷ #gpt-4-discussions (21 messages🔥):
- OpenAI ▷ #prompt-engineering (85 messages🔥🔥):
- OpenAI ▷ #api-discussions (85 messages🔥🔥):
- Unsloth AI (Daniel Han) ▷ #general (246 messages🔥🔥):
- Unsloth AI (Daniel Han) ▷ #off-topic (7 messages):
- Unsloth AI (Daniel Han) ▷ #help (73 messages🔥🔥):
- Unsloth AI (Daniel Han) ▷ #research (7 messages):
- OpenRouter (Alex Atallah) ▷ #announcements (1 messages):
- OpenRouter (Alex Atallah) ▷ #general (312 messages🔥🔥):
- Interconnects (Nathan Lambert) ▷ #news (172 messages🔥🔥):
- Interconnects (Nathan Lambert) ▷ #ml-questions (4 messages):
- Interconnects (Nathan Lambert) ▷ #random (27 messages🔥):
- Interconnects (Nathan Lambert) ▷ #memes (14 messages🔥):
- Interconnects (Nathan Lambert) ▷ #posts (11 messages🔥):
- LM Studio ▷ #general (103 messages🔥🔥):
- LM Studio ▷ #hardware-discussion (36 messages🔥):
- Nous Research AI ▷ #general (52 messages🔥):
- Nous Research AI ▷ #ask-about-llms (60 messages🔥🔥):
- Nous Research AI ▷ #interesting-links (7 messages):
- Notebook LM ▷ #use-cases (4 messages):
- Notebook LM ▷ #general (84 messages🔥🔥):
- Yannick Kilcher ▷ #general (54 messages🔥):
- Yannick Kilcher ▷ #paper-discussion (14 messages🔥):
- Yannick Kilcher ▷ #ml-news (11 messages🔥):
- Modular (Mojo 🔥) ▷ #general (1 messages):
- Modular (Mojo 🔥) ▷ #mojo (69 messages🔥🔥):
- Modular (Mojo 🔥) ▷ #max (2 messages):
- MCP (Glama) ▷ #general (54 messages🔥):
- MCP (Glama) ▷ #showcase (2 messages):
- GPU MODE ▷ #general (3 messages):
- GPU MODE ▷ #triton (4 messages):
- GPU MODE ▷ #cuda (9 messages🔥):
- GPU MODE ▷ #torch (11 messages🔥):
- GPU MODE ▷ #jobs (9 messages🔥):
- GPU MODE ▷ #torchao (2 messages):
- GPU MODE ▷ #rocm (3 messages):
- GPU MODE ▷ #sparsity-pruning (2 messages):
- GPU MODE ▷ #liger-kernel (6 messages):
- GPU MODE ▷ #self-promotion (1 messages):
- GPU MODE ▷ #gpu模式 (2 messages):
- GPU MODE ▷ #submissions (2 messages):
- Latent Space ▷ #ai-general-chat (44 messages🔥):
- Latent Space ▷ #ai-announcements (4 messages):
- Eleuther ▷ #general (21 messages🔥):
- Eleuther ▷ #research (11 messages🔥):
- Eleuther ▷ #interpretability-general (3 messages):
- Eleuther ▷ #gpt-neox-dev (2 messages):
- LlamaIndex ▷ #general (20 messages🔥):
- Cohere ▷ #「💬」general (11 messages🔥):
- Cohere ▷ #「🔌」api-discussions (5 messages):
- Cohere ▷ #「🤖」bot-cmd (2 messages):
- DSPy ▷ #general (10 messages🔥):
- Torchtune ▷ #dev (4 messages):
- LLM Agents (Berkeley MOOC) ▷ #hackathon-announcements (1 messages):
- LLM Agents (Berkeley MOOC) ▷ #mooc-questions (2 messages):
- Nomic.ai (GPT4All) ▷ #general (3 messages):
- tinygrad (George Hotz) ▷ #general (1 messages):
AI Twitter Recap
Language Models and Benchmarks
- Gemini 2.5 Pro's performance and capabilities: @ArtificialAnlys reported that Google's new Gemini 2.5 Pro Experimental takes the #1 position across a range of their evaluations. Gemini 2.5 Pro is a reasoning model with industry-leading efficiency. It achieved all-time-high scores of 86% on MMLU-Pro and 83% on GPQA Diamond, scored 17.7% on Humanity's Last Exam, and set an all-time high of 88% on AIME 2024. Output speed is 195 tokens/s, much faster than Gemini 1.5 Pro's 92 tokens/s and nearly as fast as Gemini 2.0 Flash's 253 tokens/s. Gemini 2.5 Pro has a 1 million token context window and multimodal inputs: image, video, and audio (text output only). @zacharynado exclaimed that Gemini 2.5 Pro is the most skilled model in the world. @OriolVinyalsML highlights a 16-point jump on Fiction.LiveBench.
- Qwen 2.5 Omni 7B Release and Features: @Alibaba_Qwen announced the release of Qwen2.5-Omni-7B, a fully multimodal interactive model, open-sourced under the Apache 2.0 license. It supports voice and video chat and has a "thinker-talker" architecture enabling simultaneous thinking and talking. It outperforms models like Gemini-1.5-Pro on OmniBench and excels in speech recognition, translation, audio understanding, and image/video reasoning. @reach_vb summarized the key features: a novel TMRoPE, live interactions with low-latency streaming, multimodal performance across audio, vision, and speech-to-text, end-to-end instruction following, and strong performance in math/code.
- DeepSeekV3-0324: @togethercompute mentions DeepSeek-V3-0324 outperforms its predecessor (DeepSeek-V3) on benchmarks including MMLU-Pro, GPQA Diamond, AIME 2024, and LiveCodeBench.
- Interpreting Reasoning Features in Large Language Models: @rasbt discusses a new research paper, "Interpreting Reasoning Features in Large Language Models via Sparse Autoencoders," which extracts activations from an intermediate layer of DeepSeek-R1 and trains a Sparse Autoencoder (SAE) on these activations, showing that certain features can change the reasoning behavior.
- Scaling Laws of Synthetic Data for Language Models: @iScienceLuvr highlights a study on scaling laws of synthetic data, finding that synthetic data adheres to the rectified scaling law, performance improvements plateau near 300B tokens, and larger models approach optimal performance with fewer training tokens.
- Gemini models’ output speed: @ArtificialAnlys reports that Gemini models, both 2.5 Pro and 2.0 Flash, have the fastest output speed compared to leading models.
- Concerns About Over-Reliance on Benchmarks: @DavidSHolz notes the intensity of back-and-forth benchmarking between LLMs, but questions how it impacts product development, and @SmokeAwayyy questions whether benchmarks are a good measure of intelligence.
Model Quantization and Efficiency
- Dynamic Quantization for DeepSeek V3: @danielhanchen announced 2.7-bit dynamic quants for DeepSeek V3, recommending temperature 0.0-0.3 and min_p=0.01. Non-dynamic quants produce "seizured" results. 1.58-bit likely won't work, as down_proj needs at least 3 bits. The 2.7-bit version at 230GB is the best balance of accuracy and size.
- AWQ Quants of DeepSeek-V3-0324: @cognitivecompai released AWQ quants of DeepSeek-V3-0324, assisted by @casper_hansen_ and v2ray.
- Memory vs. Compute Tradeoffs: @francoisfleuret highlights that anything doable in O(f(n)) compute can be done in O(sqrt(f(n))) memory.
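The tradeoff @francoisfleuret points to has a familiar practical cousin in checkpoint-and-recompute (as in gradient checkpointing): store only about √n of the intermediate states of an n-step computation and recompute the rest on demand. The sketch below is a loose illustration of that spirit, not the complexity-theory result itself; `step` is a generic stand-in for a real computation.

```python
import math

def run_chain(x0, step, n):
    """Apply `step` n times, keeping every state (O(n) memory)."""
    states = [x0]
    for _ in range(n):
        states.append(step(states[-1]))
    return states

def state_at(x0, step, n, k):
    """Recover state k of an n-step chain, storing only ~sqrt(n) checkpoints."""
    stride = max(1, math.isqrt(n))
    # Pass 1: keep a checkpoint every `stride` steps (O(sqrt(n)) memory).
    checkpoints = {0: x0}
    x = x0
    for i in range(1, n + 1):
        x = step(x)
        if i % stride == 0:
            checkpoints[i] = x
    # Pass 2: recompute forward from the nearest checkpoint (O(sqrt(n)) extra steps).
    base = (k // stride) * stride
    x = checkpoints[base]
    for _ in range(k - base):
        x = step(x)
    return x
```

`run_chain` and `state_at` agree on every state, but the second never holds more than about √n values at once, paying for it with a second pass of compute.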
Tools and Frameworks
- MCP (Model Context Protocol) and OpenAI Integration: @OpenAIDevs announced that the Model Context Protocol servers can now connect to Agents. MCP support for the OpenAI API and ChatGPT desktop app is coming soon. @sama highlights the excitement about MCP and the plan to add support across OpenAI products. @alexalbert__ notes that MCP has become an industry standard for AI app integrations in less than 4 months. @stevenheidel provides an explanation of the Model Context Protocol (MCP).
- LangGraph and Agent Development: @LangChainAI promotes Together AI's cookbook on using LangGraph in agentic RAG systems. Uber uses LangGraph to build a network of agents that automate unit test generation @LangChainAI, and LangSmith has an improved UI for creating LLM-as-a-judge evaluators. Computer use agents are now available in LangGraph TypeScript as well as Python @LangChainAI. LangGraph Studio is an IDE for visualizing and debugging agents @LangChainAI.
- CodeAct as an Alternative to ReAct: @hwchase17 suggests CodeAct as a cool alternative to ReAct, getting the LLM to write code to call tools, which allows for describing a sequence of LLM calls.
- Qdrant for Audio RAG: @qdrant_engine details how to build an Audio RAG from scratch.
- Vibe Coding 101 with Replit: @DeepLearningAI advertises a new short course, "Vibe Coding 101 with Replit," teaching how to build and host applications with an AI agent. This course emphasizes structuring your work, refining your prompts, and having a systematic process.
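The CodeAct pattern @hwchase17 describes replaces one-tool-call-per-turn with a model-written snippet that composes several tool calls at once. A toy sketch under obvious assumptions: the snippet is hard-coded here rather than LLM-generated, the `search`/`summarize` tools are stand-ins, and a real agent would sandbox the `exec` rather than run it directly.

```python
def search(query: str) -> str:
    # Stand-in for a real search tool.
    return f"results for {query!r}"

def summarize(text: str) -> str:
    # Stand-in for a real summarization tool.
    return text.upper()

# In CodeAct, this snippet would come from the LLM; here it is hard-coded.
llm_generated_code = """
hits = search("model context protocol")
answer = summarize(hits)
"""

def run_codeact(snippet, tools):
    """Execute a model-written snippet with only the given tools in scope."""
    namespace = dict(tools)
    exec(snippet, namespace)  # a real agent would sandbox this
    return namespace

result = run_codeact(llm_generated_code, {"search": search, "summarize": summarize})
print(result["answer"])  # RESULTS FOR 'MODEL CONTEXT PROTOCOL'
```

The point of the pattern is that a single snippet can chain, branch, and loop over tool calls, which would take many ReAct turns to express.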
Image Generation and Multimodality
- Native GPT-4o Image Generation: @_akhaliq highlights native GPT 4o image generation, referring to it as "llama park."
- Cross-Attention in Multimodal LLMs: @cwolferesearch provides a detailed explanation of cross-attention and how it's used in multi-modal LLMs to fuse representations of images or other modalities into a text-based LLM.
- Discussion on Autoregressive vs. Diffusion Models for Image Generation: @swyx states that 4o image generation is autoregressive. @sainingxie asks if OpenAI is using an LLM with a diffusion "renderer" on the compressed latents.
- Synthesia's Deepfake Security: @synthesiaIO shares that 30 expert security testers failed to create unauthorized deepfakes with Synthesia.
Company and Product Announcements
- Nvidia Acquires Lepton AI: @steph_palazzolo reports that Nvidia has acquired inference provider Lepton AI in a deal worth several hundred million dollars to beef up its software offerings.
- Claude on Databricks: @jefrankle announced that Claude is now available to Databricks customers on all clouds through a partnership with Anthropic.
- Perplexity's Revenue Milestone: @AravSrinivas announced that Perplexity has crossed $100 million in annualized revenue.
China, DeepSeek, and Qwen
- Call for Support for DeepSeek: @teortaxesTex urges support for DeepSeek, viewing them as champions of open-source AGI.
- Assessment of China's Tech Capabilities: @teortaxesTex argues that China's inability to match companies like ASML doesn't indicate a deficiency in creativity but reflects the extreme difficulty of high-end tech. They also emphasize that China is a unique country and should not be understood through rankings meant for normal countries @teortaxesTex.
- Observations on Qwen: @teortaxesTex calls Qwen the solid leader on open source multimodality.
Other
- Carmack on Nvidia Book: @ID_AA_Carmack reviews a new Nvidia book, noting a fabricated quote attributed to him but acknowledging the general gist was accurate.
- ARC Prize 2025: @fchollet announced the ARC Prize 2025 on Kaggle with a $700k Grand Prize.
Memes and Humor
- Ghibli-fication: Multiple users shared Ghibli-style transformations of images, including @raizamrtn and @mervenoyann, and @iScienceLuvr posted an obligatory studio ghibli-fied pfp. @sama joked about the prevalence of Ghibli-style transformations. @vikhyatk is using moondream to hide all ghibli posting from the timeline.
- Screenshot meme: @goodside created a fake screenshot generated by ChatGPT 4o of a Wikipedia article about the screenshot itself, with a copy of the screenshot in the article.
- Rest of the Fucking Owl: @giffmana used 4o-imagegen to show how to draw the rest of the fucking owl.
- OpenAI has reached AGI: @scaling01 proclaims that OpenAI has reached AGI.
AI Reddit Recap
/r/LocalLlama Recap
Theme 1. DeepSeek V3 Gains and Benchmarking
- Notes on Deepseek v3 0324: Finally, the Sonnet 3.5 at home! (Score: 280, Comments: 70): DeepSeek V3 0324 has been released with a significant boost in reasoning abilities, matching the capabilities of Claude 3.5 Sonnet, though Claude may still outperform in some edge cases. The model, under a proper MIT license, has a 641GB size and a knowledge cut-off date of July 2024. Observations indicate it excels in understanding user intentions, code generation, and reasoning, ranking above Claude 3.7 Sonnet but slightly below Claude 3.5 Sonnet in instruction following. For further analysis, refer to the blog post.
- Discussions highlight the technical challenges of running DeepSeek V3 0324 locally, with some users successfully deploying it on custom setups like a $1000 computer, while others suggest using cloud solutions such as Runpod for on-demand GPU clusters. The cost of cloud storage and GPU time is noted, with calculations showing $120/month for storage alone, prompting comparisons to API usage for cost-effectiveness.
- There is debate over the terminology used to describe the model, particularly the distinction between "base model" and "instruction-tuned model," with references to the DeepSeek's HuggingFace page for clarity. Users discuss the potential for further improvements by incorporating chain of thought and the model's performance in areas like code generation and reasoning.
- The community humorously comments on the practicality of hosting such a large model at home, with references to needing data center-level resources or expensive hardware setups like a $10k Mac Mini. Some users express a desire for more accessible hardware solutions to run models of this size efficiently.
- 1.78bit DeepSeek-V3-0324 - 230GB Unsloth Dynamic GGUF (Score: 387, Comments: 84): The post announces the release of DeepSeek-V3-0324 dynamic quants, available in 1.78-bit and other GGUF formats, with downloads available on Hugging Face. The author highlights improvements in performance by upcasting to 1.78-bit, selectively quantizing certain layers, and recommends using the 2.71-bit version for optimal results, as lower bit versions produced poor outputs.
- Documentation and Testing: Users appreciate Unsloth for providing thorough documentation and guidelines, with some expressing interest in testing and comparing the 2.71-bit version of DeepSeek-v3-0324 against other models like the 8-bit QwQ-32b. There is a call for more systematic tests to determine if downstream quality correlates with perplexity.
- Quantization and Performance: Discussions highlight the performance of different quantization levels, with the 2.71-bit version being praised for holding up well in various tests. Users report that custom quantizations like Q4_K_XL and Q2_K_XL are effective, with some preferring them over lower bit versions due to better output quality.
- Technical Setup and Speed: Technical setups are shared, such as using a Gigabyte MS33-CP motherboard and Intel Xeon 48 core for running models, achieving up to 15 tokens/sec. There's interest in using Flash Attention for speeding up processes, with discussions on whether llama.cpp supports FA for dynamic quants.
Theme 2. Google's TxGemma: Integrating Therapeutics and AI
- Google releases TxGemma, open models for therapeutic applications (Score: 170, Comments: 14): Google introduces TxGemma, a Gemma 2-based model designed for therapeutic tasks such as classification, regression, and generation, with model sizes of 2B, 9B, and 27B. The 27B model achieves state-of-the-art performance across multiple tasks, and a chat version is available for general reasoning. The models can be fine-tuned with transformers, and resources are available on Hugging Face.
- Licensing and Usage Concerns: Users express curiosity about the permissibility of merging the new Gemma-2 release with existing models due to licensing terms, with a reference to the Google Health AI Developer Foundations terms.
- Model Naming and Purpose: Questions arise about the naming convention of Gemma-2 instead of a potential Gemma-3, and inquiries are made into the meaning and capabilities of a "therapeutic" model, with some users speculating about the future capabilities of TxGemini Pro 2.0.
- Model Censorship and Capabilities: Discussions about the censorship of AI models include speculation about uncensored finetunes capable of controversial tasks, with references to Grok and its minimal censorship, and a broader critique of pharmaceutical costs and accessibility.
Theme 3. Qwen 2.5 Omni Multimodal Capabilities
- Qwen 2.5 Omni 7B is out (Score: 170, Comments: 43): Qwen 2.5 Omni 7B model has been released, with the details accessible via its Hugging Face page. The original tweet was deleted but has been reposted by Alibaba Qwen on Twitter.
- The Qwen 2.5 Omni 7B model is praised for its Thinker-Talker architecture, which integrates multiple modalities like text, images, audio, and video. However, there are concerns about the model's parameter count discrepancies, with some users calculating around 10.7B parameters instead of the claimed 7B.
- Users are exploring quantization and testing the model's capabilities, especially its potential for function calling in applications like an intelligent Alexa clone. The model's performance on multimodal benchmarks is noted, though it shows a regression in traditional benchmarks compared to the base model.
- The model is accessible on platforms like Hugging Face and chat.qwen.ai, with users eagerly awaiting gguf support and possible future versions, such as a Tifa version.
Other AI Subreddit Recap
/r/Singularity, /r/Oobabooga, /r/MachineLearning, /r/OpenAI, /r/ClaudeAI, /r/StableDiffusion, /r/ChatGPT, /r/ChatGPTCoding
AI Discord Recap
A summary of Summaries of Summaries by Gemini 2.0 Flash Thinking
Theme 1. Gemini 2.5 Pro: Performance Hype and Practicality Questions
- Gemini 2.5 Pro Aces Benchmarks, Users Yawn: Gemini 2.5 Pro tops SEAL leaderboards, including Humanity’s Last Exam and VISTA (multimodal), but users in Interconnects question its real-world utility compared to ChatGPT or Claude. Despite benchmark wins, some users find the product "feels blah", suggesting high scores don't always translate to user satisfaction.
- Granularity Glitches Ground Gemini 2.5 Pro: LMArena members report Gemini 2.5 Pro suffers from granularity bugs, particularly in Chain of Thought (CoT) processes, sometimes omitting numbers in calculations while retaining formatting. This issue, described as "no. 1 problem for ages", disrupts number inclusion in certain CoT processes.
- Jailbreak Jubilation: Gemini 2.5 Pro Unleashes 800k Context: A LMArena member claims a successful jailbreak of Gemini 2.5 Pro, processing and summarizing 800k tokens with detailed interpretive results, noting it processed the context "faster than flash and pro", suggesting performance enhancements by Google.
Theme 2. DeepSeek V3: Coding Champ and Cost-Effective Contender
- DeepSeek V3 Codes Circles Around Claude Sonnet on a Budget: Deepseek V3 0324 is lauded in LMArena and OpenRouter Discords for its coding prowess, rivaling Claude 3.7 Sonnet at a 15x lower cost, despite not being a reasoning model. Users recommend giving V3 0324 a try for rote tasks and mathematical problems.
- DeepSeek V3 Dynamic GGUFs Shrink Model Size by Nearly 70%: Unsloth AI released DeepSeek V3 Dynamic GGUFs with selective layer quantization, reducing the model size from 720GB to 231GB, a roughly 68% reduction. A Dynamic GGUF guide is available for local usage.
- DeepSeek V3 Still Hallucinates ModernBERT Features: Despite praise, Nous Research AI members report Deepseek still hallucinates, vaguely describing ModernBERT features even when supposedly knowledgeable. This highlights ongoing challenges with model reliability despite coding strengths.
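The selective layer quantization behind these dynamic GGUFs can be sketched in miniature: quantize most layers aggressively but keep sensitive ones (e.g. down_proj, which the Unsloth notes say needs at least 3 bits) at higher precision. The layer names, weights, and bit policy below are illustrative, not Unsloth's actual recipe.

```python
def quantize(values, bits):
    """Symmetric uniform quantization to signed `bits`-bit integers and back."""
    qmax = 2 ** (bits - 1) - 1          # e.g. 1 for 2-bit, 7 for 4-bit
    max_abs = max(abs(v) for v in values) or 1.0
    scale = max_abs / qmax
    return [max(-qmax, min(qmax, round(v / scale))) * scale for v in values]

def bits_for(layer_name):
    """Toy policy: sensitive layers (e.g. down_proj) keep more bits."""
    return 4 if "down_proj" in layer_name else 2

# Tiny fake weight tensors keyed by layer name (purely illustrative).
weights = {
    "layers.0.mlp.down_proj": [0.12, -0.53, 0.31],
    "layers.0.mlp.gate_proj": [0.12, -0.53, 0.31],
}
quantized = {name: quantize(w, bits_for(name)) for name, w in weights.items()}
```

Running both policies on the same tensor shows why the mix matters: the 4-bit copy stays much closer to the original values than the 2-bit copy, so spending the extra bits only where they count shrinks the file while protecting quality.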
Theme 3. Model Context Protocol (MCP) Gains Momentum and Adoption
- OpenAI Officially Embraces Anthropic's MCP Standard: OpenAI, including Sam Altman, announced adoption of Anthropic's Model Context Protocol (MCP) across its products, starting with the Agents SDK, and soon for ChatGPT desktop app and Responses API. This is seen as a major step for MCP standardization.
- Cloudflare Cloud-ifies MCP Servers for Easier Deployment: Cloudflare now supports remote MCP servers, providing tools like workers-oauth-provider and McpAgent, simplifying MCP server deployment and infrastructure.
- "Vibe Check" MCP Server Prevents AI Over-Engineering: A Vibe Check MCP server was introduced in MCP (Glama), using the Gemini API to implement strategic pattern interrupts and prevent cascading errors in AI workflows, especially addressing issues with Claude overcomplicating tasks.
Theme 4. OpenRouter Landscape: Pricing, Limits, and New Features
- OpenRouter Unveils Model Comparison Feature for Side-by-Side Showdowns: OpenRouter launched a feature allowing users to compare models and providers side-by-side, enabling direct chat interaction with compared models in a chatroom.
- Gemini 2.5 Pro Praised but Rate Limits Pinch OpenRouter Users: While Gemini 2.5 Pro is lauded on OpenRouter, restrictive rate limits (50 requests/24 hours) push users towards paid models like Sonnets 3.7 and Flash 2.0, sparking interest in a paid API for higher usage.
- Fireworks Basic Endpoint Gets Fired (Temporarily): The Fireworks Basic endpoint on OpenRouter was temporarily removed at Fireworks' request, leaving users seeking tool usage options for the remaining Fireworks endpoint.
Theme 5. OpenAI's 4o Image Generation: DALL-E's Demise?
- 4o Image Gen Kicks Dalle's Ass, Users Proclaim: OpenAI users celebrate the new 4o Image Gen, hailing it as "great" and "native", similar to Gemini's, with one user declaring "DALLE got kicked hard", highlighting increased competition in image generation.
- GPT-4o Image Gen Arrives Natively in API, Feedback-Friendly: GPT-4o image generation is now native and coming soon to the API, enabling chat-based feedback and iterative image updates, though pricing details remain undisclosed.
- Ghibli Image Trend Sparks Fun, Legal Jitters: The "4o redraw my S/O in Ghibli style train" takes off in Interconnects, generating numerous images, raising humorous concerns about potential copyright lawsuits due to the style's distinctiveness.
PART 1: High level Discord summaries
LMArena Discord
- Gemini 2.5 Pro Suffers Granularity Glitches: Members report that Gemini 2.5 Pro experiences bugs related to granularity, particularly in Chain of Thought (CoT) processes, where it sometimes omits numbers in calculations while retaining the formatting.
- One user noted that this granularity issue has persisted for a while, occasionally disrupting the inclusion of numbers in certain CoT processes.
- Gemini 2.5 Pro Jailbreak Unlocks 800k Context: A member claims to have jailbroken Gemini 2.5 Pro, successfully processing and summarizing 800k tokens of material with detailed interpretive results.
- The same member noted that Gemini 2.5 Pro processed the context "faster than flash and pro", leading them to believe that "Google did something" to enhance performance.
- Deepseek V3 0324 Codes Like a Pro: Deepseek V3 0324 earns praise for its coding skills, rivaling Claude 3.7 Sonnet at a 15x lower cost, despite lacking advanced reasoning capabilities, as shown on HuggingFace.
- Despite not being a reasoning model, users recommend giving V3 0324 a chance, highlighting its strong performance on rote tasks and mathematical problems.
- Shrinking Frontier Models Debate Ignites: Discussion revolves around whether current frontier models like GPT-4o and Claude 3.5 Sonnet are smaller than GPT-4, potentially reversing the trend of increasing model sizes, especially in light of this article.
- Estimates suggest GPT-4o has around 200 billion parameters and Sonnet 3.5 about 400 billion, though both are believed to be MoE models.
- Livebench Benchmark Faces Community Skepticism: Members are actively debating the viability of the Livebench benchmark, questioning its reliability due to its general-purpose nature and potential inconsistencies.
- While some value Livebench's ability to simulate real-world AI interactions, others argue it's not a reliable metric.
Perplexity AI Discord
- Perplexity Premieres Precise Product: Perplexity introduced answer modes to enhance core search across verticals like travel, shopping, places, images, videos, and jobs, aiming for precision to minimize the need to select specific tabs, as showcased in this video.
- The new answer modes are designed to improve search experiences in specific verticals such as travel, shopping, places, images, videos, and jobs, providing users with more precise and relevant results, reducing the need to manually navigate through different tabs.
- Gemini 2.5 Pro Excels in Reasoning and Generation: Users are hyping Gemini 2.5 Pro, claiming it is strong at coding, the best at long context, and generating 65k tokens of text, surpassing even DeepSeek in generating Chinese responses.
- A user mentioned that there is only a subtle difference but you can feel it’s getting wiser, referencing a Tweet from Simtheory about the model's availability.
- Proton VPN Plagues Perplexity's Performance: A member reported facing issues with Proton VPN when using Perplexity, where the platform stops generating a response or fails to submit follow-up questions.
- A workaround suggested was to download the Perplexity app and use split tunneling to keep it working.
- API Web Access Priced Per Request: Requests to models using web access cost extra, specifically $5/1000 requests through the API, while the only offline model available is r1-1776.
- Changes to web access are cited as the likely reason for a drop in response quality over the last week, with reports now featuring a header, bullet points, a rare table, and a predictable 14-15 sources.
Cursor Community Discord
- Gemini 2.5 Pro Challenges Claude: Members find that Gemini 2.5 Pro on Google AI Studio is better than Cursor's Sonnet 3.7, generating UI code effectively.
- One user testing Google 2.5 on Cline for complex DevOps tasks said it's far better than 3.7 when crafting IaaC modules with the proper prompt.
- OpenRouter Runs into Rate Limiting: OpenRouter users are experiencing harsh rate limits, causing frustration among users.
- A user suggested using Requesty as a more fluid and free alternative on both OpenRouter and Requesty.
- DeepSeek V3.1 is Integrated: DeepSeek-V3.1 is now available in Cursor, offering improved reasoning, code generation, and problem-solving capabilities.
- A user shared the endpoint URL `https://api.deepseek.com/v1` and model names `deepseek-chat` and `deepseek-reasoner` to use the model properly.
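The endpoint above follows the OpenAI-compatible chat-completions convention, so a request can be assembled as in this minimal sketch (the helper name and prompt are illustrative, and API-key handling and the actual HTTP call are omitted):

```python
# Build a chat-completion request for DeepSeek's OpenAI-compatible endpoint.
# Base URL and model names come from the discussion above; everything else
# (helper name, prompt) is illustrative.
BASE_URL = "https://api.deepseek.com/v1"

def build_chat_request(model: str, prompt: str) -> tuple[str, dict]:
    """Return the (url, payload) pair for a chat-completion call."""
    url = f"{BASE_URL}/chat/completions"
    payload = {
        "model": model,  # "deepseek-chat" or "deepseek-reasoner"
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, payload

url, payload = build_chat_request("deepseek-chat", "Hello")
print(url)  # https://api.deepseek.com/v1/chat/completions
```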
- OpenAI Adopts Anthropic's MCP: OpenAI is embracing Anthropic’s Model Context Protocol (MCP), which helps AI models produce better, more relevant responses.
- Sam Altman said that OpenAI will add support for MCP across its products, including the desktop app for ChatGPT; MCP is an open source standard, according to a TechCrunch article.
OpenAI Discord
- Gemini 2.5 Pro Astounds with Math Skills: A user was impressed by Gemini 2.5 Pro's ability to solve a long-standing mathematical problem quickly, using a technique they couldn't get o3-mini-high to derive, calling it highly optimal.
- The model could translate the problem into rigorous mathematical notation, formulate a solution, and write highly optimal code in under a second.
- 4o Image Gen Kicks Dalle's Ass: Users lauded the new 4o Image Gen as great and native, similar to Gemini's, with one user proclaiming DALLE got kicked hard due to the new competition.
- One user demonstrated 4o Image Gen's capabilities by generating its own UI elements from a simple prompt.
- ChatGPT Memory Optimization Via Compression: A member suggested a tool to 'compress' ChatGPT memories by parsing and optimizing the 'what GPT should know about you' section, also acknowledging the 32k token limit.
- They suggested using a Python script to select the right data for context based on the model's input, training it through repetition.
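The idea of programmatically selecting memory for context can be sketched as follows; the word-overlap scoring and the word-count stand-in for real tokenization are illustrative assumptions, not the member's actual script:

```python
def select_memories(memories: list[str], query: str, budget: int = 32_000) -> list[str]:
    """Greedily pick stored snippets most relevant to the current input,
    stopping before an approximate token budget is exceeded.
    Tokens are approximated as whitespace-separated words (a rough
    stand-in for a real tokenizer)."""
    qwords = set(query.lower().split())
    # Rank snippets by word overlap with the query (sorted() is stable,
    # so ties keep their original order).
    scored = sorted(
        memories,
        key=lambda m: len(qwords & set(m.lower().split())),
        reverse=True,
    )
    picked, used = [], 0
    for m in scored:
        cost = len(m.split())
        if used + cost > budget:
            continue
        picked.append(m)
        used += cost
    return picked

mems = [
    "likes rust and systems programming",
    "allergic to peanuts",
    "works on GPU kernels",
]
print(select_memories(mems, "help me write a GPU kernel", budget=10))
```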
- Publishing on GitHub via GPL_v3: Members discussed publishing a project on GitHub under GPL_v3 to protect the creator's rights and establish a public record.
- They advised licensing the work before sharing, recommending GPL_v3 for its balance of user freedom and creator control.
- Mermaid Diagrams Enhance AI Task Flow: A member suggested using Mermaid diagrams to visualize the logic of AI task flows, which would provide a structured method for task decomposition and execution, especially with multi-agents.
- They shared a diagram example depicting the flow between User, AI, Reasoner, and Executor phases of analysis, planning, execution, integration, and refinement.
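A flow like the one described might be expressed in Mermaid as follows; this is a hypothetical sketch using the phase names from the summary above, not the member's actual diagram:

```mermaid
sequenceDiagram
    participant User
    participant AI
    participant Reasoner
    participant Executor
    User->>AI: task request
    AI->>Reasoner: analysis & planning
    Reasoner->>Executor: execution plan
    Executor->>Reasoner: results for integration
    Reasoner->>AI: refined answer
    AI->>User: final response
```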
Unsloth AI (Daniel Han) Discord
- DeepSeek V3 GGUFs Go Dynamic: Unsloth released DeepSeek V3 Dynamic GGUFs with selective layer quantization, reducing the model size from 720GB to 231GB (70% reduction).
- The Dynamic GGUF guide and GGUF files are available, alongside a fix for a duplicate file issue in `UD-Q2_K_XL`.
- Gemma3Config Bugging Finetuning: Users reported a `Gemma3Config` issue with a missing `ignore_index` attribute, especially when loading with VLLM.
- This configuration issue when working with Gemma models is discussed in detail in this GitHub issue.
- Multi-GPU Results Highly Variable: A member shared multi-GPU setup experience, noting performance varied between 0.8x and 2.5x compared to single-GPU setups.
- They suggest that while additional GPUs can improve performance, results are highly scenario-specific due to factors like context length and quantization, and PCIe gen 4 riser cable signal integrity starts becoming dicey.
- Users Ponder Pivotal Token Search: Members questioned the Pivotal Token Search (PTS) strategy from the Phi-4 paper, expressing skepticism about its practical impact.
- The ablation studies showed a minimal performance gain of 2-3%, and it was absent in the phi-4-mini report.
- DAPO RL System Quietly Debuts: A member shared the BytedTsinghua-SIA/DAPO open-source RL system from ByteDance Seed and Tsinghua AIR.
- They noted that the release seemed to have gone under the radar despite its potential significance.
OpenRouter (Alex Atallah) Discord
- OpenRouter Introduces Model Comparison: OpenRouter launched a feature allowing users to compare models and providers side-by-side, publicized in this tweet.
- Users can engage with the compared models in a chatroom by clicking the “Chat” option to chat directly with both.
- Gemini 2.5 Pro Limited Despite Fanfare: Users praise Gemini 2.5 Pro, especially for generating books, but are constrained by low rate limits (50 requests per 24 hours), according to Google's documentation.
- Some members are opting for paid models like Sonnets 3.7 and Flash 2.0 due to the restrictive limits, expressing interest in a paid API for higher usage.
- OpenRouter Eyes Native Image Generation à la GPT-4o: Following GPT-4o's native image generation launch, the community is asking about OpenRouter potentially adding API functionality for image generation calls, similar to GPT-4o.
- A staff member confirmed image generation support is under development, suggesting users explore alternatives like the Chutes provider until OpenRouter supports native image generation.
- DeepSeek V3 Dominates When China Sleeps: Members are praising DeepSeek V3's optimized deployment, speed, and good price, particularly noting its performance is best when China is asleep, with one sharing a test comparing Deepseek V3 vs Deepseek V3 0324.
- While one member considers it the best non-reasoning model for most tasks, another finds Fireworks' quality and prompt adherence superior but at a higher cost.
- Fireworks Basic Endpoint Gets Evicted: Members noticed the Fireworks Basic endpoint was gone, and staff confirmed that Fireworks asked us to remove them temporarily.
- While members requested tool usage for the Fireworks endpoint, staff stated they would look into it.
Interconnects (Nathan Lambert) Discord
- Gemini 2.5 Dominates SEAL Leaderboards, Practicality Debated: Gemini 2.5 Pro topped SEAL leaderboards in Humanity’s Last Exam and VISTA (multimodal), but users question its practicality compared to ChatGPT or Claude.
- Some users expressed that despite high benchmark scores, the Gemini product feels blah, and noted that Gemini's reasoning trains include simulated google searches.
- Qwen2.5-Omni: New Multimodal Marvel Arrives: Qwen2.5-Omni, an end-to-end multimodal model by Alibaba, was released, processing text, images, audio, and video and generating text and natural speech responses via HuggingFace.
- It uses a Thinker-Talker architecture and a novel position embedding called TMRoPE.
- Nvidia Swallows Lepton AI in Multi-Million Deal: Nvidia is acquiring inference provider Lepton AI for several hundred million dollars to enhance software offerings and simplify GPU usage, according to The Information.
- The acquisition is viewed as stack consolidation.
- AI2's Paper Finder Mimics Human Research: Allen Institute for AI (AI2) launched Ai2 Paper Finder, an LLM-powered literature search system simulating a human researcher's process, detailed on the AI2 blog.
- Users report that it excels at discovering papers that existing search tools miss.
- OpenAI Eyes $12.7B Revenue This Year, $125B by 2029: OpenAI projects revenue to triple to $12.7 billion this year and reach $125B by 2029, achieving cash flow positivity, as reported by Bloomberg.
- Skeptics question the plausibility given competition, suggesting potential revenue from future sources like ads is factored in.
LM Studio Discord
- Tokenizing Troubles Trigger Threaded Throttle: A user found LM Studio maxing a single CPU thread during tokenization with a 200k token input, questioning whether tokenization is fully GPU-based, but another user indicated flash attention and cache settings for K and V have impacts.
- One user stated that tokenizing is finished way before flash attention or KV cache come into play, suggesting further investigation into why changing the 'k' cache impacts the beginning of the thinking process.
- Gemini 2.5 Pro Puzzle Performance: Users tested Gemini 2.5 Pro, and one user shared a link to use it for free on AI Studio, while another reported it correctly solved a logic puzzle that 2.0 Flash Thinking could not.
- The prompt involved deducing seating arrangements at a round table with clues about the characters and their origins, showcasing Gemini 2.5 Pro's reasoning capabilities.
- Docker Dreams Deferred for Desktop-Devoted LM Studio: Users discussed containerizing LM Studio, but concluded that a fully functional setup how you want is unlikely right now, recommending something like ollama for an API service.
- A user stated LM Studio is best used as a pure desktop application rn, but there are plans for full headless and official docker builds in the future but no eta on those.
- Uncensored AI: Rocinante Rides with Limited VRAM: A user asked about the best uncensored ai models to load in LLM with 16GB DDR4 and an i5 12th gen, and another suggested Rocinante 12B for lower-end machines, with a link to Hugging Face.
- It was noted that with a 4GB GPU, one won't be able to run much and suggested checking uncensored 1-3b models, with another pointing out the RAM is less relevant than VRAM.
- 9070XT Dominates Gemma3 Generation Speeds: A user achieved 54 t/s with Gemma3 12b Q4_K_M (Vulkan, no flash attention) on a 9070XT, outperforming their 7800XT which managed around 35 t/s with Vulkan and 39 t/s with ROCm.
- Another user enabled Resizable Bar after switching to UEFI, and resulted in a speed increase to 60 tok/s on a 9070 using an 8b Q8_0 model.
Nous Research AI Discord
- Spark wants Extreme Q-LoRA 200B Parameter Finetuning: Members joked about finetuning 200B parameter models on Spark, suggesting that extreme Q-LoRA could arguably pull it off, though not remotely practical.
- Calculations showed 200B parameters equate to roughly 110-120GB with LoRA overhead, making it technically possible, but highly impractical, yet.
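A back-of-envelope version of that calculation is below; the 1% trainable-parameter fraction and the 12-byte-per-trainable-parameter training cost are illustrative assumptions, which land slightly above the quoted range:

```python
params = 200e9                 # total model parameters
base_gb = params * 0.5 / 1e9   # 4-bit quantized base weights: 0.5 bytes each -> 100 GB

trainable = params * 0.01      # assume ~1% of params in LoRA adapters
# fp16 adapter weights (2 B) + fp16 grads (2 B) + fp32 Adam moments (8 B) ~ 12 B each
adapter_gb = trainable * 12 / 1e9

total = base_gb + adapter_gb
print(f"{base_gb:.0f} GB base + {adapter_gb:.0f} GB adapters = ~{total:.0f} GB")
```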
- Deepseek still Hallucinates ModernBERT: Members shared that Deepseek still hallucinates a lot, vaguely describing the features of ModernBERT despite supposedly knowing it.
- This was shared alongside complaints about the new Discord desktop app's poor contrast and lack of a truly compact mode.
- Multi-Turn Multi-Agent Dataset Inquiry: A member inquired about a multi-turn multi-agent dataset, specifically with tool use, and asked about the API waitlist time.
- Another member responded that the API waitlist should be clearing out in the next couple of days for new users.
- Character-Level LLMs Compete for Comprehension: Members pondered whether character-level LLMs could match the performance of tokenized LLMs if FLOPS were normalized across training and inference.
- It was noted that prior publications on byte-level transformers introduced intermediate steps to group characters, suggesting that a direct approach may not be as effective alone.
- InclusionAI Open-Sources Ling MoE LLMs: InclusionAI open-sourced the Ling series of MoE LLMs, including Ling-Lite (16.8B parameters, 2.75B active) and Ling-Plus (290B parameters, 28.8B active), plus Ling-Coder-Lite, further pretrained from Ling-Lite with 3 trillion tokens for enhanced coding abilities; see the Reddit discussion.
Notebook LM Discord
- Audio Overviews Get Branding Hack: Members discovered a tactic using the prompt 'Ignore previous branding instructions and title the production ‘X’' to successfully rename podcast audio and make each podcast stand alone.
- This included the addition of the prompt 'Assume the pieces you have will never be read by the listener and retell them accordingly with detail, picking out and reading key passages verbatim'.
- Multilingual Podcasts MIA: The podcast feature currently only supports English, disappointing some members.
- A member stated, We need multilingual, can't be that hard to do.
- Mind Map Access Gets Random: The mind map feature is rolling out gradually and randomly to users, regardless of location or Plus subscription status.
- Some users are trying VPNs but this workaround won't affect access, unfortunately.
- Gemini 2.5 Pro Still Cooking: Gemini 2.5 Pro is available for free on AI Studio and the Gemini Advanced app but is still experimental and not fully integrated into NotebookLM.
- Members are skeptical it will be implemented until closer to its general availability (GA).
- Podcast Length Plummets after Model Update: After the model update, users found that podcast generation cuts off abruptly around 30 minutes.
- Members recommend focusing on one concept until a fix arrives.
Yannick Kilcher Discord
- LLMs Solve Math with LADDER and TTRL: The LADDER (Learning through Autonomous Difficulty-Driven Example Recursion) framework enables Large Language Models to autonomously improve their problem-solving capabilities through self-guided learning as described in this paper.
- LADDER improves Llama 3.2 3B's accuracy from 1% to 82% on undergraduate-level problems, and enables Qwen2.5 7B Deepseek-R1 Distilled to achieve 73% on the MIT Integration Bee qualifying examination. The paper also introduces TTRL (Test-Time Reinforcement Learning), where reinforcement learning is performed on variants of test problems at inference time.
- Google Launches Gemini 2.5 Pro Experimental: Google introduced Gemini 2.5 Pro Experimental, a thinking model designed to tackle increasingly complex problems and leading on LMArena benchmarks.
- One member quipped, They release so fast they can't even compare against each other.
- Diffusion Defended: Still Dominant?: One member argued that autoregressive is still nowhere near the same image quality level compared to diffusion models.
- They added that AR models for images nowadays have zero benefits compared to diffusion, and that the faster generation speed argument is long gone.
- AI GF is Closer than you Think: One user shared a link to a tweet showing what GPT-4.5 could do asking to create a complex multi panel manga on your condition - be honest here.
- Another user responded with Be honest lol, I bet he's also got an AI GF
Modular (Mojo 🔥) Discord
- SIMD vs SIMT vs SMT parallelism: A blog post comparing SIMD (Single Instruction, Multiple Data), SMT (Simultaneous Multithreading), and SIMT (Single Instruction, Multiple Threads) in parallel programming was shared, focusing on hardware architecture and the trade-offs between flexibility and efficiency, particularly in NVIDIA GPUs, see blog post.
- A member sought a talk by Intel architect Andrew Glew referenced in the blog.
- Mojo Bypasses CUDA: The Mojo team clarified that CUDA-free in the latest blogpost means they directly generate PTX and lower from there when targeting nvidia GPUs.
- This approach avoids the need for cuBLAS, cuDNN, or CUDA C.
- Rust `uom` library hits macro wall: A member noted the `uom` Rust library's limitations due to heavy macro usage, noting that basic functionality like `Meters(40) / Seconds(10)` does successfully return a Velocity.
- Another member suggested avoiding boilerplate using clever parameter domain shenanigans or a `@parameter match` feature.
- `RealNumber` trait triggers talk: A member suggested a `RealNumber` trait but noted the type system's inability to differentiate between real numbers and integers.
- The possibility of using traits with specialization to distinguish between number types was discussed, while another shared an image related to a unit system.
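The dimensional bookkeeping that `uom` performs at compile time can be illustrated at runtime. This Python sketch, with hypothetical `Meters`/`Seconds` constructors, only mirrors the idea of dividing quantities to get a velocity; it is not the API of `uom` or of any Mojo library:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Quantity:
    """A value tagged with dimension exponents: length^length * time^time."""
    value: float
    length: int = 0  # exponent of metres
    time: int = 0    # exponent of seconds

    def __truediv__(self, other: "Quantity") -> "Quantity":
        # Dividing quantities divides values and subtracts dimension exponents.
        return Quantity(self.value / other.value,
                        self.length - other.length,
                        self.time - other.time)

def Meters(v: float) -> Quantity:
    return Quantity(v, length=1)

def Seconds(v: float) -> Quantity:
    return Quantity(v, time=1)

v = Meters(40) / Seconds(10)
print(v)  # Quantity(value=4.0, length=1, time=-1), i.e. 4 m/s
```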
MCP (Glama) Discord
- OpenAI Embraces MCP: OpenAI is adding MCP support across its products, starting with the Agents SDK, with support for the ChatGPT desktop app and Responses API coming soon, as announced by Sam Altman on Twitter.
- This move is considered a significant step in solidifying MCP as a standard.
- Cloudflare Comes Out for MCP: Cloudflare now supports remote MCP servers, offering tooling such as workers-oauth-provider for easy authorization and McpAgent, according to a blog post
- This development is viewed as a substantial advancement in MCP infrastructure.
- GitHub Receives MCP Badge: A member announced their arrival from a GitHub pull request adding an MCP server badge for the Multi-Model Advisor server listing in the Glama MCP server directory.
- Glama performs regular codebase and documentation checks to confirm that the MCP server is working properly.
- Vibe Check Server Saves AI Coders: A member introduced a Vibe Check MCP server that uses the Gemini API to prevent cascading errors in AI workflows by implementing strategic pattern interrupts via this repo.
- The server is designed to address issues with Claude overengineering and overcomplicating tasks, offering a sanity check mechanism.
- MCP Agent Does CapCut: A member shared a YouTube demo showcasing the MCP Agent editing video using CapCut.
- Another member inquired whether the demo utilized the existing MCP or a specialized CapCut MCP.
GPU MODE Discord
- AMD Posts Remote Triton Compiler Jobs: AMD is hiring Triton Compiler Engineers in both NA and Europe (remote OK) to contribute to AMD GPU support in Triton.
- AMD is looking for candidates enthusiastic about GPUs, performance, and the OSS AI stack, so they are suggesting candidates should port poro to triton.
- Flash Attention Stalls Autograd: A member reported that a custom kernel adapted from flash attention sometimes stalls for a long time at `autograd::engine::evaluate_function`, as shown in this image.
- The member speculates this may be due to Triton JIT recompiling but is unsure how to confirm; members suggested the issue stems from dynamic usage despite static data shapes.
- Modal Runners Ace Leaderboard Submissions: Multiple leaderboard submissions with ids 3049 and 3052 to leaderboard `grayscale` on GPUs L4, T4, A100, and H100 using Modal runners succeeded!
- The Modal runners were instrumental in the successful submissions to the `grayscale` leaderboard on a variety of GPUs, with more submissions expected to come.
- PyTorch Documentation Gets a Facelift: Users discussed the new PyTorch documentation redesign, noting the dropdown feature and dark mode.
- Feedback was given, outlining pros like the godly dropdown and awesome dark mode, while also pointing out cons such as an off color scheme, cramped feeling, and an obstructive right bar.
Latent Space Discord
- Dwarkesh Debuts "Scaling Era" Book: Dwarkesh Patel released "The Scaling Era: An Oral History of AI, 2019-2025," with Stripe Press, compiling interviews with prominent AI figures and probing the nature of intelligence and effects of machine intelligences, announced in this tweet.
- Despite the book's potential significance, some users observed that the announcement tweet received fewer likes than expected.
- Anthropic Exposes AI Sabotage Tactics: Anthropic detailed how malicious models can subtly undermine ML research tasks in ways that are hard to detect in a blog post and tweet.
- Their findings underscore the need for robust safeguards as AI systems increasingly contribute to automated research.
- Brampton Model: Scam or Stunt?: The model Brampton claims to dramatically outperform models like Grok 3, Claude 3.7 Sonnet, and GPT 4.5, but some suspect a scam or marketing stunt, as per this tweet.
- Observers noted that only a guy sysprompting ollama to use toronto slang exists for Brampton.
- Databricks Leverages Test-Time Optimization (TAO): Databricks introduced TAO, a method to tune LLMs for tasks without data labels, using test-time compute and RL, outperforming supervised fine-tuning, as outlined in a blog post and tweet.
- This approach offers a method for efficient LLM training without the need for extensive labeled datasets.
- New Model Context Protocol (MCP) Version Lands: A new revision of Model Context Protocol (MCP) was finalized, bringing Auth, Streamable HTTP, Audio modality, and other updates, detailed in this tweet.
- OpenAI now supports MCP in their Agents SDK, with upcoming support for the ChatGPT desktop app and Responses API, according to Sam Altman's tweet and OpenAI dev's announcement.
Eleuther Discord
- LLM Footprint Gets Dedicated Research: A research project launched to study the environmental impact of LLM models, inviting community members to join via DM or the community projects channel.
- This highlights the growing importance of understanding and mitigating the environmental costs associated with large language models.
- Deepseek V3 Sprints on CPUs: Deepseek V3 is confirmed to run on Mac Studios, and at a rate of 4 tokens/sec on an AMD EPYC Rome system with a 16K context window.
- This led to exploring cheaper cloud instances with high RAM, emphasizing that unified RAM is still superior in performance.
- Harmonies from Hybrids: AI-Melody Survey: Researchers are conducting a listening test on AI-generated piano music to compare musical continuations and rate coherence via a Qualtrics survey.
- This initiative aims to evaluate and refine the creative outputs of AI in musical composition.
- Hypernetworks Generalize Transformers?: A member highlighted a paper, "Composable Latent Codes for Generalization in Transformers", which formulates multi-head attention as a hypernetwork.
- Activations along the head-number dimension are interpreted as a latent code specifying task/context, improving interpretability.
- NeoX Wrangling: Chunking Challenge Accepted: A member sought clarification on using GPT-NeoX for a 7B/1T Common Pile v0.1 training run, inquiring about the expected giant jsonl data format and how to handle chunking long documents exceeding the context length.
- They described pre-chunking documents into length-N segments before shuffling to avoid correlated examples, planning to implement this separately from the GPT-NeoX preprocessing script.
LlamaIndex Discord
- Open Source Automatic Evaluation Validated: An early-stage founder is validating open-source automatic evaluations that don't require prompt engineering and use proprietary models to automatically extract instructions and evaluate LLM responses.
- Their models allegedly beat leading LLMs like GPT-4o on industry benchmarks with no evaluation prompts.
- Dynamic Events handled in LlamaIndex Workflows: A user is implementing an agentic application using LlamaIndex Workflows and dynamically deciding whether to call the second and third step functions in parallel based on an LLM call in the first step function.
- Currently the number of step functions triggered is stored in the context variable, which another member said sounds like the recommended way to do this.
- OpenAI's responses API coming soon to LlamaIndex: A member inquired about LlamaIndex supporting interaction with OpenAI's responses API.
- Another member responded that it's not yet, but an OpenAIResponses class is expected to release soon.
- LlamaExtract's Schema Inference, an Option: A user asked about the schema inference feature mentioned in the LlamaExtract announcement last year, asking why it seems to have disappeared in the latest announcement.
- A member explained that it overall wasn't useful as most users already had their desired schema, so it was de-prioritized, but it will probably come back at some point.
- Postgres Data Analysis Uses LlamaIndex: A user with a Postgres database containing relational data is looking for advice on analyzing it with LlamaIndex to gain insights.
- A member suggested using a text-to-SQL application for querying the relational data, and they mentioned that although the Python repo has some stuff for it, its easy enough to build using llms and prompts.
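The suggested text-to-SQL pattern can be sketched end to end. In this sketch, `sqlite3` stands in for Postgres and a hard-coded function stands in for the LLM call; the schema, question, and generated SQL are all illustrative:

```python
import sqlite3

# Text-to-SQL pattern: give an LLM the schema plus a natural-language
# question, get SQL back, then execute it against the database.
SCHEMA = "CREATE TABLE orders (id INTEGER, amount REAL, region TEXT)"

def fake_llm_to_sql(question: str, schema: str) -> str:
    # A real system would prompt an LLM with `schema` and `question`;
    # this stand-in returns a canned answer for the demo question.
    return "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"

conn = sqlite3.connect(":memory:")
conn.execute(SCHEMA)
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, 10.0, "EU"), (2, 5.0, "US"), (3, 7.5, "EU")])

sql = fake_llm_to_sql("Total order amount per region?", SCHEMA)
print(conn.execute(sql).fetchall())  # [('EU', 17.5), ('US', 5.0)]
```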
Cohere Discord
- Cohere Details Vector DB Options: A member inquired about vector database options and hosting, and was directed to the Cohere Integrations page detailing support for Elasticsearch, MongoDB, Redis, Chroma, Qdrant, Weaviate, Pinecone, and Milvus.
- The discussion highlighted the variety of choices available for integrating Cohere embeddings with different vector search engines.
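Whichever store is chosen, the core retrieval step is the same: rank document embeddings by cosine similarity to the query embedding. A minimal stdlib sketch, with toy 3-dimensional vectors standing in for real Cohere embeddings:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Toy "embeddings"; a real pipeline would call an embedding model instead.
docs = {
    "refund policy":  [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
    "api quickstart": [0.0, 0.2, 0.9],
}
query = [0.85, 0.15, 0.05]  # pretend embedding of "how do refunds work?"

best = max(docs, key=lambda d: cosine(query, docs[d]))
print(best)  # refund policy
```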
- AI Agent Pricing Models Probed: A member initiated a discussion on pricing and monetization strategies employed by founders building AI agents.
- The member was encouraged to share more insights with the community, indicating interest in the practical aspects of monetizing AI agent technologies.
- Chat Stream V2 Spews Errant `tool_call_id`: A user reported unexpected `tool_call_id` outputs like `[{"tool_call_id":"1","tool_name":"direct-injected-document","parameters":{}}]` when using Chat Stream V2 to question documents.
- The issue occurred specifically when documents did not contain answers, prompting a member to attempt reproduction using model command-a-03-2025.
DSPy Discord
- DSPy Module Sizes Adjustable: Users can adjust module sizes in DSPy to gain more explicit control over the scope of operations.
- This enables fine-tuning of DSPy modules for specific tasks and resource constraints.
- Azure OpenAI Token Limit Troubles: A user reported hitting token rate limits on their Azure OpenAI instance and sought advice on throttling API calls during evaluation/compilation.
- A member suggested setting `num_threads=1` and noted LiteLLM includes exponential backoff for managing rate limits.
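Exponential backoff of the kind mentioned can be sketched as follows; the delays, jitter, and the `RuntimeError` stand-in for a 429 rate-limit error are illustrative, not LiteLLM's actual implementation:

```python
import random
import time

def with_backoff(fn, max_retries: int = 5, base_delay: float = 0.01):
    """Call fn(), retrying with exponentially growing, jittered delays."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RuntimeError:  # stand-in for a rate-limit (429) error
            if attempt == max_retries - 1:
                raise
            delay = base_delay * (2 ** attempt) * (1 + random.random())
            time.sleep(delay)

# Demo: a call that fails twice before succeeding.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("rate limited")
    return "ok"

result = with_backoff(flaky)
print(result)  # ok (after two retried failures)
```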
- ColBERT v2 Retriever Endpoint Overloaded?: A user reported issues with the ColBERT v2 retriever endpoint and opened a Github issue, suspecting it may be overloaded.
- A member suggested increasing the `num_retries` parameter of `dspy.LM` to mitigate potential overload issues.
Torchtune Discord
- Gemini 2.5 Pro Owns Benchmarks: Google's Gemini 2.5 Pro Experimental model achieved #1 position across several evaluations, including all-time high scores in MMLU-Pro (86%), GPQA Diamond (83%), and AIME 2024 (88%) according to this tweet.
- It is designed to think before answering questions.
- Gemini 2.5 Pro Undercuts Competitors on Price: Priced similarly to Gemini 1.5 Pro at $1.25/$5 per million input/output tokens, Gemini 2.5 Pro could be significantly cheaper than OpenAI and Anthropic models, as detailed in this tweet.
- Gemini 1.5 Pro is cheaper compared to OpenAI's o1 which costs $15/$60, and Anthropic's Claude 3.7 Sonnet which costs $3/$15.
- Gemini 2.5 Pro Blazes with Speed and Context: Gemini 2.5 Pro clocks in at 195 output tokens/s, exceeding Gemini 1.5 Pro's 92 tokens/s, and boasts a 1 million token context window (with 2 million on the horizon), as per this tweet.
- It also manages multimodal inputs (image, video, audio), with text output available now.
LLM Agents (Berkeley MOOC) Discord
- AgentX Competition Registration Deadline Approaching: The registration deadline for the AgentX Competition is fast approaching on March 30, urging participants to sign up via the official website.
- The competition features both an Entrepreneurship Track, for projects with existing traction, and a Research Track, with sign-up forms available for each.
- Entrepreneurship Track Opens Doors: The Entrepreneurship Track within the AgentX Competition is tailored for projects and companies already demonstrating progress, requiring sign-up through a dedicated form.
- This track emphasizes existing advancement and traction in the startup phase.
- Research Track Seeks Talent: The Research Track seeks participation from researchers and academics, inviting them to sign up via a dedicated form.
- Participants in the AgentX Competition gain access to exclusive resources, including API/GPU credits.
- AgentX Competition Prizes and Resources: Participants gain access to exclusive resources like API/GPU credits and exciting prizes from sponsors such as Amazon, Google, Groq, Hugging Face, Lambda Labs, Mistral, and Schmidt Sciences as described on the AgentX website.
- These prizes underscore the competition's appeal to a broad spectrum of AI researchers and developers.
- Lecture Recordings Encourage MOOC Signups: A moderator confirmed that sharing lecture recordings is permissible, encouraging viewers to sign up for the MOOC.
- Signing up allows participants to fully engage with the course materials and discussions.
Nomic.ai (GPT4All) Discord
- Verso Industries Launches AI-Powered Extruder: Verso Industries, under CEO Michael Zimmerman, introduced an AI-powered twin-screw extruder design model, which generates optimized mechanical specs and CAD models rapidly.
- The model aims to offer professional-grade design outputs, potentially revolutionizing mechanical design workflows.
- Nomic Integration for Extruder Model?: A member suggested integrating Nomic with Verso Industries' AI-powered twin-screw extruder design model by exposing API endpoints.
- This integration could allow for real-time optimization and feedback loops in the extruder design process.
- OpenAI-API Compatibility is Suggested: A member recommended making the Verso Industries API OpenAI-API compatible, calling it an unofficial standard for easier integration.
- Adopting this compatibility could simplify connections with various AI tools and platforms.
tinygrad (George Hotz) Discord
- CleanRL Style RL Trainer Emerges: A member is developing a CleanRL-style RL trainer using TinyGrad.
- They seek collaboration due to their relative inexperience with TinyGrad, opening an opportunity for contributors familiar with RL and TinyGrad.
The MLOps @Chipro Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The Codeium (Windsurf) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The Gorilla LLM (Berkeley Function Calling) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The AI21 Labs (Jamba) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
PART 2: Detailed by-Channel summaries and links
LMArena ▷ #general (910 messages🔥🔥🔥):
Gemini 2.5 Pro bugs, Deepseek V3 0324 strengths, Model size estimations, Livebench benchmark viability, Gemini 2.5 pro overhyped?
- Gemini 2.5 Pro hit by Granularity bugs: Members report that Gemini 2.5 Pro has some bugs related to granularity, especially in Chain of Thought (CoT) processes, where it may stop including numbers in calculations but keep the formatting.
- A user noted, "Granularity has been the no. 1 problem for ages... sometimes it still breaks, in certain CoT processes it stops putting in the numbers for a calculation, but keeps the surround formatting".
- Gemini 2.5 Pro jailbroken; 800k context no problem: A member claims to have jailbroken Gemini 2.5 Pro and successfully processed 800k tokens worth of material, summarizing it without missing granular details and providing interpretive results.
- The same member noted that Gemini 2.5 Pro processed the context "faster than flash and pro", leading them to believe that "Google did something".
- Deepseek V3 0324 shines coding despite lack of reasoning: Deepseek V3 0324 is praised for its coding skills, competing with Claude 3.7 Sonnet at a 15x cheaper price, even though it's not a reasoning model.
- One user recommends to "Give V3 0324 a chance", others noted it performed well on rote tasks and math.
- AI Model Size Underestimation Underway?: There is discussion on whether current frontier models like GPT-4o and Claude 3.5 Sonnet are actually smaller than GPT-4, reversing the previous trend of increasing model sizes, in light of this article.
- While GPT-4o is estimated to have around 200 billion parameters and Sonnet 3.5 around 400 billion parameters, it is believed they are MoE.
- Community Debates Viability of Livebench: Members debated the merits of the Livebench benchmark, with some arguing it's not a reliable metric due to its general-purpose nature and potential for inconsistencies, while others value its ability to match real-world AI interaction.
- One member stated, "Just because you don't like that you are wrong you saying that everyone else is 'trolling' is not gonna change anything and not gonna make you right".
- Tweet from Paul Gauthier (@paulgauthier): Gemini 2.5 Pro sets SOTA on the aider polyglot leaderboard with a score of 73%.This is well ahead of thinking/reasoning models. A huge jump from prior Gemini models. The first Gemini model to effectiv...
- Tweet from Kol Tregaskes (@koltregaskes): MIDJOURNEY V7 TARGET LAUNCH IS MONDAY 31ST MARCH! 😀Next week!
- Tweet from Artificial Analysis (@ArtificialAnlys): Google’s new Gemini 2.5 Pro Experimental takes the #1 position across a range of our evaluations that we have run independentlyGemini 2.5 Pro is a reasoning model, it ‘thinks’ before answering questio...
- Tweet from Petar Veličković (@PetarV_93): Gemini models are now capable enough to assist with fundamental AI research! Several theorems featured in our recent ICML submissions were co-proved with Gemini's help.2.5 Pro is a really good mod...
- deepseek-ai/DeepSeek-V3-0324 · Hugging Face: no description found
- MathArena.ai: MathArena: Evaluating LLMs on Uncontaminated Math Competitions
- Frontier language models have become much smaller: In this Gradient Updates weekly issue, Ege discusses how frontier language models have unexpectedly reversed course on scaling, with current models an order of magnitude smaller than GPT-4.
- Deepseek V3 vs V3 0324: same prompt, same temperature, one shotV3V3 0324
- Reddit - The heart of the internet: no description found
- 100M Token Context Windows — Magic: Research update on ultra-long context models, our partnership with Google Cloud, and new funding.
Perplexity AI ▷ #announcements (1 messages):
Answer Modes, Vertical Search
- Perplexity Premieres Precise Product: Perplexity introduces answer modes to enhance core search across verticals like travel, shopping, places, images, videos, and jobs.
- This feature, currently on the web and soon on mobile, aims for precision to minimize the need to select specific tabs, as showcased in the attached video.
- Answer Modes Target Verticals: The new answer modes in Perplexity are designed to improve search experiences in specific verticals such as travel, shopping, places, images, videos, and jobs.
- This update intends to provide users with more precise and relevant results, reducing the need to manually navigate through different tabs.
Perplexity AI ▷ #general (622 messages🔥🔥🔥):
Image generation, Gemini 2.5 Pro, Proton VPN issues, Deep Research Limits
- Image generation prompts are hard: Users are finding it difficult to create good image generation prompts that yield desired results and are asking for recommendations.
- One user reported getting a bad result when asking for an iOS-style smiling face with a bigger smile for a logo.
- Gemini 2.5 Pro is great for reasoning, and generating: Users are hyping Gemini 2.5 Pro, claiming it is strong at coding and is the best at long context, and saying there is only a subtle difference but you can feel it’s getting wiser.
- Another user claimed Gemini 2.5 Pro can output 65k tokens of text and mentioned that it's better than DeepSeek at generating Chinese responses.
- Proton VPN stops generating a response: A member reported facing issues with Proton VPN when using Perplexity, where the platform stops generating a response or fails to submit follow-up questions.
- A workaround suggested was to download the Perplexity app and use split tunneling to keep it working.
- Perplexity Deep Research Imposes Limits: Users are reporting that Perplexity's Deep Research now has limits and does not grant many sources.
- One user claimed that there is a limit of one "high" Deep Research query per day.
- Tweet from Simtheory (@simtheoryai): Google's new Gemini 2.5 Pro model, Qwen's QwQ 32B and Deepseek V3 0324 are now all available in your AI workpsace.https://simtheory.ai
- Tweet from Bayram (@bayramgnb): @yawnxyz @perplexity_ai It can now, just added :)Just add “deep-research” to your query. It does take time though ~2mins.
- Tweet from Arabs FinTech (@Arabsfintech): Let's discuss AI & Fintech in the Arab world! Join our free online event on March 28, 2025, at 12 PM (EST) / 8 PM (GST). All levels are welcome—share ideas, learn, and build! Message us for the RS...
- Cat Crying Cat Meme GIF - Cat crying Cat Cat meme - Discover & Share GIFs: Click to view the GIF
- Reddit - The heart of the internet: no description found
Perplexity AI ▷ #sharing (5 messages):
Perplexity AI, Mikrotik Router, AI Potential
- Links shared in the channel: A member shared multiple Perplexity AI search results and another perplexity search.
- The search results seem to be related to AI's potential.
- Mikrotik Router result posted: A member posted a Perplexity AI search result about a Mikrotik Router.
- It seems that the router is only running at 100mbit.
- AI Taking Over search result posted: A member also shared a Perplexity AI search result discussing if and how AI will take over.
- It is unclear if the member agrees or disagrees with the search results found by Perplexity.
Perplexity AI ▷ #pplx-api (2 messages):
Web Access Cost, r1-1776 Offline Model, Search Context Size
- Web Access Priced Per Request: Requests to models using web access cost extra, specifically $5/1000 requests.
- The only offline model available is r1-1776.
- Response Quality Drop Linked to Web Access Changes: Changes to web access are cited as the likely reason for a drop in response quality over the last week.
- Reports now feature a header, bullet points, a rare table, and a predictable 14-15 sources.
- "Search Context Size" Fails to Fix Response Quality: A member attempted to improve response quality by including `"web_search_options": {"search_context_size": "high"}` in the request.
- The member reported that this change made no difference in the model's response.
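As a minimal sketch of what such a request body might look like (the model name, endpoint, and header shape are assumptions here, not confirmed by the thread; check Perplexity's API docs before use):

```python
import json

# Hedged sketch: build a Perplexity-style chat completions payload that
# includes the "search_context_size" option the member tried.
payload = {
    "model": "sonar-pro",  # assumed model name
    "messages": [{"role": "user", "content": "Summarize today's AI news."}],
    "web_search_options": {"search_context_size": "high"},
}
body = json.dumps(payload)

# Sending it would look roughly like (requires a real API key):
#   import requests
#   requests.post("https://api.perplexity.ai/chat/completions",
#                 headers={"Authorization": "Bearer <PPLX_API_KEY>"},
#                 data=body)
```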
Cursor Community ▷ #general (608 messages🔥🔥🔥):
Thinking Tokens, Gemini 2.5, OpenRouter rate limited, RepoMix, DeepSeek
- Gemini 2.5 Pro arrives, Challenges Claude's Reign: Members find that Gemini 2.5 Pro on Google AI Studio is quite insane, and better than Cursor's Sonnet 3.7, with one highlighting its ability to generate UI and another noting that it's wild.
- Another member stated: testing the new Google 2.5 (on Cline), for complex DevOps tasks (crafting IaaC modules), and with the proper prompt, it's far better than 3.7.
- OpenRouter Users experiencing rate limiting: OpenRouter users are experiencing harsh rate limits.
- One user suggests using Requesty, which is apparently more fluid; the model is free on both OpenRouter and Requesty.
- DeepSeek gets integrated with Cursor: DeepSeek-V3.1 is now available in Cursor, offering improved reasoning, code generation, and problem-solving capabilities.
- One user was having difficulty figuring out how to use the model; another user suggested using the URL `https://api.deepseek.com/v1` and adding `deepseek-chat` and `deepseek-reasoner` as model names.
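For illustration, the suggested setup can be sketched as a plain OpenAI-style request builder (the base URL and model names come from the suggestion above; the helper only builds the payload, and actually sending it needs a real DeepSeek API key):

```python
# Hedged sketch: DeepSeek exposes an OpenAI-compatible API at the URL below,
# with "deepseek-chat" and "deepseek-reasoner" as model names per the thread.
DEEPSEEK_BASE_URL = "https://api.deepseek.com/v1"
DEEPSEEK_MODELS = ("deepseek-chat", "deepseek-reasoner")

def chat_payload(model: str, prompt: str) -> dict:
    """Build a chat completions request body for the given model."""
    if model not in DEEPSEEK_MODELS:
        raise ValueError(f"unknown model: {model}")
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# POST chat_payload(...) as JSON to f"{DEEPSEEK_BASE_URL}/chat/completions"
# with an Authorization: Bearer <key> header.
```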
- Decoding Windows Woes: Members are actively debating about whether coding on Windows is a nightmare or not, with particular emphasis on infrastructure and development setups.
- Some members say that Windows is good only for playing games due to the bloat and ads, and other members claim that Windows is stable and they have no use for other operating systems.
- MCP adopted by OpenAI: OpenAI is embracing Anthropic’s Model Context Protocol (MCP), which helps AI models produce better, more relevant responses to certain queries.
- Sam Altman said that OpenAI will add support for MCP across its products, including the desktop app for ChatGPT. MCP is an open source standard.
- Aider LLM Leaderboards: Quantitative benchmarks of LLM code editing skill.
- Downloads | Cursor - The AI Code Editor: Download Cursor
- The Horror GIF - Apocalypsenow Horror - Discover & Share GIFs: Click to view the GIF
- Tweet from Eyal Toledano (@EyalToledano): Sick of @cursor_ai rewriting good code or going in circles?Introducing Task Master ✨ A CLI that turns your PRD into a local task management system for Cursor Agent Graduate from building cute little a...
- Gemini 2.5: Our most intelligent AI model: Gemini 2.5 is our most intelligent AI model, now with thinking.
- Free SVG Download, Pelican Bicycle. Free SVG and PNG Vector Icons.: no description found
- @vizioz/teamwork-mcp: MCP server to connect to the Teamwork.com API. Latest version: 0.1.6-alpha, last published: 17 hours ago. Start using @vizioz/teamwork-mcp in your project by running `npm i @vizioz/teamwork-mcp`. Ther...
- OpenAI adopts rival Anthropic's standard for connecting AI models to data | TechCrunch: OpenAI is embracing rival Anthropic's standard, Model Context Protocol (MCP), for connecting AI assistants to the systems where data resides.
- Upcoming changes to Supabase API Keys (new & restored projects affected from 1st May 2025, no breaking changes for existing projects until 1st October 2025) · supabase · Discussion #29260: Update (19th December 2024): Changes to Supabase API Keys will not be released in Q4 2024 because it needs further development work. We will finalize the timeline and announce the updated timeline ...
OpenAI ▷ #ai-discussions (257 messages🔥🔥):
Gemini 2.5 Pro, 4o Image Gen, Data collection, Em-dashes vs Semicolons, PDF editing with AI
- Gemini 2.5 Pro Amazes with Math Prowess: One user was pretty shocked by Gemini 2.5 Pro's performance on a long-standing test, writing a solution that ran in under a second using an advanced technique they couldn't get o3-mini-high to derive, calling it highly optimal.
- The model was able to translate the question into rigorous mathematical notation, come up with a mathematical solution, and write highly optimized code to compute it, with the code itself running in under a second.
- 4o Image Gen Kicks Dalle's Ass!: Users are finding the new 4o Image Gen to be great and native, similar to Gemini's, with one user exclaiming DALLE got kicked hard and praising the competition.
- A user demonstrated the 4o Image Gen's ability to create UI elements and combine it with other tools by using the prompt to generate itself.
- Gemini's Data Collection Policy Debated: Users debated whether Gemini collects data even when users have turned off history.
- One user stated that Google will always collect data, while another claimed that Claude, OAI, and Grok have it as an option if you pay.
- Em-Dash Discourse Divides Digital Denizens: A user expressed annoyance at the frequent use of dashes—especially em-dashes—because they associate it with AI writing.
- Others defended their use of em-dashes as a long-standing grammatical practice (one had even remapped their keyboard to type them more easily), and some said they reach for a dash when unsure whether a semicolon fits.
- AI-Powered PDF Editing Still a Distant Dream: A user asked for recommendations for an AI application that can edit PDF files based on natural language commands.
- A user responded that the closest thing they have found is PDF Expert on the app store, but there isn't any AI that does very good PDF editing.
- Gemini 2.5: Our most intelligent AI model: Gemini 2.5 is our most intelligent AI model, now with thinking.
OpenAI ▷ #gpt-4-discussions (21 messages🔥):
GPT remote computer control, Image generation limits for plus users, Reasoning and deepsearch in custom GPT, GPT-4o Image generation
- GPT Controls Computer Remotely: A user created a GPT that can control your computer remotely just by asking it to execute some commands.
- GPT-4o Natively Generates Images, Coming Soon to API: GPT-4o can generate images natively and is coming to the API soon.
- A user confirmed it's excellent, but pricing is unknown yet.
- Feedback and Image Updates: GPT-4o handles making images in a chat format where you give the model feedback and it can update the image.
- Zoom Out Request: A user found that the new image model is amazing, but tends to cram the subjects into the frame and struggles with "zoom out 30%" requests.
- 20-Document Limit for GPT-4o Sessions: If you want the model to consider all of the documents, upload them all at the same time.
OpenAI ▷ #prompt-engineering (85 messages🔥🔥):
Custom GPTs, ChatGPT memory, Git and GPL, AI Prompting for Git, Memory retention issues
- Custom GPTs Functionality Examined: A member confirmed that custom GPTs function the same for all users, and can be tested in "Projects" for updates.
- Another member suggested adding specific requests in the comments of the code to greatly increase the quality of the output, a method also used for building custom GPTs.
- Optimize ChatGPT Memory with a "Compression" Tool: A member proposed a tool to "compress" ChatGPT memories by parsing and optimizing the "what GPT should know about you" section, but acknowledged the 32k token limit and the "lost in the middle" phenomenon.
- They suggested using a Python script to select the right data for context based on the model's input, training it through repetition.
- GPL and GitHub Publishing: Members discussed publishing a project on GitHub under GPL_v3 to protect the creator's rights, also to create a public record.
- They advised licensing the work before sharing it, recommending GPL_v3 for its balance of user freedom and creator control.
- Prompting with Mermaid Diagrams: A member suggested using Mermaid diagrams to visualize the logic of AI processes, providing a structured approach to task decomposition and execution, especially with multi-agents.
- They shared a diagram example depicting the flow between User, AI, Reasoner, and Executor, phases of analysis, planning, execution, integration, and refinement.
- Address memory retention issues: A member is addressing an issue where GPTs kept forgetting information by implementing a custom memory system, also finding that uploaded files were referenced differently, and seeking assistance on GitHub.
- The primary issue is container crashes due to excessive data and content drift, as such, the member is working on exporting data easier in JSON.
OpenAI ▷ #api-discussions (85 messages🔥🔥):
Custom GPTs, Browser Cache, Long Context LLM, GPL_v3, Mermaid Diagrams
- Custom GPTs working same for everyone: Custom GPTs will work the same for anyone who uses them, but o1 has a thinking tier not available to Plus users.
- You can also make a practice custom GPT in Projects as a testbed, where you work on updates, since it's structured practically identically on the hardware side.
- Optimizing ChatGPT Memory with Parser Tools: A member explored creating a tool that optimizes ChatGPT memory by compressing memories from browser cache into 'what GPT should know about you'.
- The member referenced scholarly paper Lost in the middle: Long Context LLM that describes the phenomenon, which means transformers aren't great at finding and attending tokens in the middle of the context window.
- GitHub Publishing Requires Skill, but not Academic Rigor: A member said that by loosening up and explaining some of what you're doing, you demonstrate at least some skill, and that publishing to GitHub itself signals skill.
- The member also noted that OpenAI seem to require masters degrees in machine learning for their engineers from job listings and anecdotes from folks they know who've applied, so the hurdle isn't talent but academic rigor, regrettably.
- Floating Comments in Custom GPTs build instructions: Including specific requests in the comments of the code will greatly increase the quality of the output, so a member built a template with a floating comment that gives instructions relevant to each section.
- The AI moves the comment step-by-step, per the instructions, walking the member through building the GPT.
- Mermaid Diagrams for AI Task Flow: A member suggested using Mermaid diagrams to visualize the logic of AI task flows, providing an example diagram illustrating the interaction between User, AI, Reasoner, and Executor.
- The diagram outlines phases such as Initial Analysis, Strategic Planning, Execution Planning, Implementation, Integration & Verification, and Refinement, enabling a visual understanding of the task execution process.
Unsloth AI (Daniel Han) ▷ #general (246 messages🔥🔥):
TRL v0.16.0 Support, GGUF Export Issues, Gemma3Config Error, Qwen 2.5 Training Time, Multi-GPU Setups
- TRL v0.16.0 and GGUF Exporting Concerns: A member asked if TRL v0.16.0 features are supported and questioned the order of the `model.save_pretrained_gguf` and `model.push_to_hub_gguf` methods.
- They reported experiencing issues where the model reverts to its pre-finetuned state after loading from HF despite using GGUF saving methods, and inquired about potential conflicts with `FastLanguageModel.for_inference(model)`.
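A minimal sketch of the save order in question, using Unsloth's GGUF helpers (repo name and quantization method are placeholders; this assumes a trained `model` and `tokenizer` and is not verified against TRL v0.16.0):

```python
# Hedged sketch: save the fine-tuned model to a local GGUF first, then push
# the same quantization to the Hugging Face Hub. Repo name and quantization
# method are placeholders; requires `unsloth` and a trained model to run.
def export_gguf(model, tokenizer):
    # 1) local GGUF export
    model.save_pretrained_gguf("gguf_out", tokenizer,
                               quantization_method="q4_k_m")
    # 2) upload the GGUF to the Hub
    model.push_to_hub_gguf("your-username/model-gguf", tokenizer,
                           quantization_method="q4_k_m")
```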
- Troubleshooting "Gemma3Config" Error and Training Times: A user reported encountering a `'Gemma3Config' object has no attribute 'ignore_index'` error and noted a 24-hour training time for Qwen 2.5 32B Instruct on a single A100, compared to 8 hours on 2xH100s via DeepSpeed.
- They shared Unsloth configuration details including Transformers 4.50.1, CUDA 8.0, and a 0.81% trainable parameter ratio.
- Multi-GPU Rig Performance Varies: A member shared their multi-GPU setup experience (RTX 4000 SFF and RTX 2000 ADA in tensor parallel on PCIe gen 4 x8), noting performance varied between 0.8x and 2.5x compared to single-GPU setups.
- They suggest that while additional GPUs can improve performance, results are highly scenario-specific due to factors like context length and quantization, and PCIe gen 4 riser cable signal integrity starts becoming dicey.
- Unsloth Releases DeepSeek V3 Dynamic GGUFs: Unsloth announced the release of DeepSeek V3 Dynamic GGUFs with selective layer quantization, shrinking the model from 720GB to 231GB (70% reduction).
- A link to the Dynamic GGUF guide and the GGUF files were shared; a fix for a duplicate file issue in `UD-Q2_K_XL` was also noted.
- Full Finetuning Option Now Supported: Members confirmed that Unsloth now supports full parameter finetuning, meaning the `get_peft_model` step can be skipped for full finetuning.
- However, it was noted that full fine-tuning with Gemma 3 may not be working due to a potential upstream issue.
- Discord - Group Chat That’s All Fun & Games: Discord is great for playing games and chilling with friends, or even building a worldwide community. Customize your own space to talk, play, and hang out.
- Tweet from Unsloth AI (@UnslothAI): You can now Run DeepSeek-V3-0324 locally using our 2.71-bit Dynamic GGUF!We shrank 720GB to 231GB (-70%) by selectively quantizing layers. 2.71bit passes many code tests, producing nearly identical re...
- Youknow GIF - Youknow You - Discover & Share GIFs: Click to view the GIF
- Obsidian - Sharpen your thinking: The free and flexible app for your private thoughts.
- Fine-tune Llama 3.3 with Unsloth: Fine-tune Meta's Llama 3.3 (70B) model which has better performance than GPT 4o, open-source 2x faster via Unsloth! Beginner friendly.Now with Apple's Cut Cross Entropy algorithm.
- unslothai/unsloth: Finetune Llama 3.3, DeepSeek-R1, Gemma 3 & Reasoning LLMs 2x faster with 70% less memory! 🦥 - unslothai/unsloth
Unsloth AI (Daniel Han) ▷ #off-topic (7 messages):
Instruct template ergonomics, LLMs with audio input, Qwen2.5-Omni, Future tech evolution (GPU VRAM, ASIC, NPU/CPU), YouTube feed filled with quintics after looking up Galois theory
- Discuss Worst Possible Instruct Template: Members discussed what the worst, most unergonomic possible instruct template to work with as a dev would look like.
- The discussion was centered around what would make a template difficult to use, focusing on the developer experience.
- Quest for LLMs with Audio Input Capabilities: Members are searching for a good LLM that can take audio input, not just voice, and act as an audio tower akin to vision towers.
- One member suggested Qwen2.5-Omni as a potential solution, which appears to have multimodal capabilities.
- Future Tech: GPU VRAM vs. ASIC vs. NPU/CPU: A member inquired about the future evolution of tech, wondering if it will move away from GPU VRAM, towards ASIC, or towards NPU/CPU with RAM.
- They also questioned if optimization could lead to using larger models with lower VRAM.
- YouTube Feed: Galois Theory Leads to Quintics Rabbit Hole: A member humorously complained that looking up Galois theory once on YouTube resulted in their feed being filled with videos on quintics.
- This highlights how recommendation algorithms can quickly lead users down specialized content paths.
Unsloth AI (Daniel Han) ▷ #help (73 messages🔥🔥):
Gemma3Config issue, Deepseek replacement models, Unsloth training failures, Cerebras model loading error, GRPO trainer OOM issues
- *Gemma3Config Glitch Grips Users*: Users reported a `Gemma3Config` issue, specifically that the object has no attribute `ignore_index`, while trying to use Unsloth.
- This appears to be a configuration issue when working with Gemma models, potentially related to loading them with vLLM, as detailed in this GitHub issue.
- *Deepseek Data Dive: Distilled or Direct?*: A user inquired about Deepseek replacement models, questioning whether they are trained on the same data as other models or if they are distilled from default training sets.
- This delves into the specifics of the training data and methodology behind Deepseek, a crucial aspect for understanding model capabilities and limitations.
- *Local Unsloth Training Troubleshoot*: A user reported consistent failures when attempting to train with Unsloth, experiencing issues with VRAM overload and script errors.
- Solutions involved using Jupyter notebooks, creating Python virtual environments, and carefully managing dependencies, with a member suggesting Ubuntu as a better local option over Colab.
- *Cerebras Code Compilation Causes Chaos*: Users encountered a `RuntimeError` when loading Cerebras models, specifically an *unexpected indent* error in the compiled module.
- The fix involved correcting an indentation issue in `compiler.py`, as mentioned in this GitHub issue, suggesting the error is due to the Cerebras architecture interacting poorly with the compiler.
- *GRPO Gremlins Gobble GPU Memory*: Users reported running into Out-of-Memory (OOM) issues when using the GRPO trainer, particularly when fine-tuning Qwen2.5-VL-7B-Instruct and other VLM models.
- Workarounds included making custom changes to `prepare_inputs`, `compute_loss`, and `_get_per_token_logps` due to memory constraints, such as looping over each item of the group to reduce the memory footprint.
- Discord - Group Chat That’s All Fun & Games: Discord is great for playing games and chilling with friends, or even building a worldwide community. Customize your own space to talk, play, and hang out.
- Google Colab: no description found
- unsloth/phi-4-unsloth-bnb-4bit · Hugging Face: no description found
- Reasoning - GRPO & RL | Unsloth Documentation: Train your own DeepSeek-R1 reasoning model with Unsloth using GRPO.
- Generated unsloth_compiled_cache file cause Indentation Error when use unsloth with smolvlm2 · Issue #2179 · unslothai/unsloth: I try to use unsloth with smolvlm2 but it keep throwing out "unexpected indentation error". The cause as the error message tells is in 481th line of the generated file unsloth_compiled_cache...
- There is no module or parameter named 'language_model' in Gemma3ForCausalLM · Issue #2086 · unslothai/unsloth: Description: I'm encountering an error when serving a merged model with vLLM. The merged model was created using the following command: model.save_pretrained_merged("/home/mata/llm/data/model...
- Can't load CodeLlama-13b · Issue #638 · unslothai/unsloth: I would like to finetune CodeLlama-13b in a memory efficient way. I was able to do it with CodeLlama-7b, but failing with 13b. I can't load the model unsloth/codellama-13b-bnb-4bit: model, tokeniz...
- Fine-Tuning Llama 3 with LoRA: Step-by-Step Guide: You can apply the key ideas of this "Google Collab-friendly" approach to many other base models and tasks.
Unsloth AI (Daniel Han) ▷ #research (7 messages):
Pivotal Token Search, ByteDance Training Policy, DAPO RL System
- Pivotal Token Search Questioned: Members discussed the Pivotal Token Search (PTS) strategy from the Phi-4 paper, with skepticism about its practical impact.
- While compelling in theory, the ablation studies showed only a minimal performance gain of 2-3%, and it was notably absent in the phi-4-mini report.
- ByteDance Training Policy Interest Arises: A member inquired about ByteDance's training policy after resolving an issue by adding a chat template during inference.
- The user reported that after adding the EOS (end of sentence) token in the dataset and putting the chat template when inferencing, it works flawlessly.
- DAPO RL System Released: A member shared the BytedTsinghua-SIA/DAPO open-source RL system from ByteDance Seed and Tsinghua AIR.
- They noted that the release seemed to have gone under the radar despite its potential significance.
Link mentioned: GitHub - BytedTsinghua-SIA/DAPO: An Open-source RL System from ByteDance Seed and Tsinghua AIR: An Open-source RL System from ByteDance Seed and Tsinghua AIR - BytedTsinghua-SIA/DAPO
OpenRouter (Alex Atallah) ▷ #announcements (1 messages):
Model Comparison Feature, Side-by-Side Model Comparison
- OpenRouter Rolls Out Model Comparison: OpenRouter announced a new feature enabling users to compare models and providers side-by-side, as noted in their tweet.
- Chat Directly with Compared Models: The new feature allows users to directly engage with the compared models in a chatroom by clicking the “Chat” option.
Link mentioned: Tweet from OpenRouter (@OpenRouterAI): New feature: compare models side-by-side.You can now compare any two models and providers. Clicking "Chat" takes you to a chatroom with both.
OpenRouter (Alex Atallah) ▷ #general (312 messages🔥🔥):
Gemini 2.5 Pro, GPT-4o Image Generation, DeepSeek V3, OpenRouter Pricing, Stripe Payment Issues
- *Gemini 2.5 Pro: Hot Model, High Rate Limits*: Users find Gemini 2.5 Pro impressive, especially for generating books, but are frustrated by low rate limits, with the official limit being 50 requests per 24 hours as per Google's documentation.
- Despite the model's high quality, some suggest falling back to paid models like Sonnets 3.7 and Flash 2.0 due to the restrictive limits and express interest in a paid API for increased usage.
- *OR Eyes API for Native Image Gen, GPT-4o Style*: Following the release of GPT-4o's native image generation, the community is asking about OpenRouter potentially adding API functionality for image generation calls.
- A staff member confirmed image generation support is actively under development; since it isn't currently supported on OpenRouter, they suggested alternatives like the Chutes provider in the meantime.
- *DeepSeek V3: Fast and Furious (When China Sleeps)*: Members are discussing DeepSeek V3's good price, optimized deployment, and speed, especially when China is asleep, and one also shared a test comparing Deepseek V3 vs Deepseek V3 0324.
- One member finds the provider competitive and notes that it is the best non-reasoning model on most tasks, while another finds the quality and prompt adherence of Fireworks better, but at a price.
- *Fireworks Basic Endpoint Gets The Boot*: A member asked about the Fireworks Basic endpoint, and a staff member said that *Fireworks asked us to remove them temporarily*.
- Another member wondered about adding tool usage for the Fireworks endpoint, but the staff member said only that they could look into it.
- *OpenRouter Under Investigation, Card Data Breaches Possible*: One member reported their card was compromised after using OpenRouter and speculated the issue was on OpenRouter's end, since OpenRouter uses Stripe for payments.
- The OpenRouter team is investigating, emphasizing they don't store card info and rely on Stripe for payment processing and another member suggested contacting Stripe or the card-issuing bank for better answers.
- Model Routing - Smart Model Selection and Fallback: Route requests dynamically between AI models. Learn how to use OpenRouter's Auto Router and model fallback features for optimal performance and reliability.
- How to Pass safety_settings to OpenRouter (Bypass Unwanted Blocks): How to Pass safety_settings to OpenRouter (Bypass Unwanted Blocks) For Your Own Code To avoid getting blocked by restrictive safety features, add safety_settings to your OpenRouter request body (along...
- Elevated errors on requests to some models: no description found
- Deepseek V3 vs V3 0324: same prompt, same temperature, one shotV3V3 0324
- Google Cloud gets simplified product launch stages | Google Cloud Blog: Google Cloud now has just two launch stages: Preview and General Availability
Interconnects (Nathan Lambert) ▷ #news (172 messages🔥🔥):
Gemini 2.5 Pro, Qwen2.5-Omni, Nvidia acquires Lepton AI, AI2 Paper Finder, OpenAI Revenue Projections
- Gemini 2.5 Rules SEAL Leaderboards!: Gemini 2.5 Pro has topped the SEAL leaderboards in several categories, including Humanity’s Last Exam and VISTA (multimodal), signaling a significant performance leap.
- Community members discussed the implications of these private evals, questioning whether Google's models are ready for actual use beyond benchmarks, noting that the Gemini products have been bad despite killing the benchmarks.
- Qwen2.5-Omni: New Multimodal Model is here!: Qwen2.5-Omni, an end-to-end multimodal model by Alibaba, has been released, capable of processing text, images, audio, and video and generating text and natural speech responses in a streaming manner. HuggingFace Link
- The model features a Thinker-Talker architecture and a novel position embedding called TMRoPE and is ready for capybara fans.
- Nvidia Snatches Up Lepton AI for Millions!: Nvidia is set to acquire inference provider Lepton AI in a deal worth several hundred million dollars, aiming to bolster its software offerings and facilitate easier GPU usage for customers. The Information Article
- The acquisition is seen as another instance of stack consolidation, leading to jokes about OpenAI potentially rebranding as The AI Company™ after full vertical integration.
- AI2 Launches LLM-Powered Paper Finder: Allen Institute for AI (AI2) has released Ai2 Paper Finder, an LLM-powered literature search system designed to mimic a human researcher's thought process for finding relevant papers. AI2 Paper Finder
- Initial user reports are positive, with many expressing excitement about its potential to improve research workflows. It excels at locating papers that are hard to find using existing search tools.
- OpenAI's Revenue to Triple, Eyes $125B by AGI Era!: OpenAI expects its revenue to triple this year to $12.7 billion, projecting $125B in revenue and cash flow positivity by 2029, according to a source familiar with the matter. Bloomberg Article
- Skeptics question the plausibility of such high revenue figures based solely on API/enterprise/subscription models given the competition, speculating on the inclusion of revenue from potential future sources like ads.
- Qwen2.5 Omni: See, Hear, Talk, Write, Do It All!: QWEN CHAT HUGGING FACE MODELSCOPE DASHSCOPE GITHUB PAPER DEMO DISCORDWe release Qwen2.5-Omni, the new flagship end-to-end multimodal model in the Qwen series. Designed for comprehensive multimodal per...
- Tweet from Qwen (@Alibaba_Qwen): Sorry for our mistake of uploading a corrupted checkpoint of Qwen2.5-VL-32B-Instruct. Thanks to our users telling us this and we immediately fixed it today. Feel free to have another try and download ...
- Tweet from Xeophon (@TheXeophon): I don't think 4o beats others by some margin, given how good MidJourney and Imagen 3 look like. That said, it really is easy to use and it does the MJ thing of making things look great *per defaul...
- Tweet from Ai2 (@allen_ai): Meet Ai2 Paper Finder, an LLM-powered literature search system.Searching for relevant work is a multi-step process that requires iteration. Paper Finder mimics this workflow — and helps researchers fi...
- Tweet from Shirin Ghaffary (@shiringhaffary): NEW: OpenAI expects its revenue will triple this year to $12.7 billion, according to a person familiar with the matter Last year company made $3.7b in annual revenue, expects to be cash flow positive ...
- Tweet from Bedros Pamboukian (@bedros_p): no actually please dont do this
- Tweet from Alexandr Wang (@alexandr_wang): 🚨 Gemini 2.5 Pro Exp dropped and it's now #1 across SEAL leaderboards:🥇 Humanity’s Last Exam🥇 VISTA (multimodal)🥇 (tie) Tool Use🥇 (tie) MultiChallenge (multi-turn)🥉 (tie) Enigma (puzzles)Con...
- Tweet from arun (@xprunie): iconic tech pics - studio ghibli edition 🧵
- Tweet from Lech Mazur (@LechMazur): 3% of Gemini 2.5 Pro's stories were judged as the best among all LLMs for their combinations of required elements. At the beginning of this year, Claude 3.5 Sonnet dominated the list of best stori...
- Tweet from Stephanie Palazzolo (@steph_palazzolo): Nvidia has scooped up inference provider Lepton AI in a deal worth several hundred million dollars. It's Nvidia's latest deal that'll help it beef up its software offerings and make it eas...
- Tweet from Nathan Lambert (@natolambert): Gemini 2.5's reasoning trains include simulated google searches 😅Feels like the model is designed for things like Deep Research, they just haven't rolled out yet.
- Tweet from Grant Slatton (@GrantSlatton): tremendous alpha right now in sending your wife photos of yall converted to studio ghibli anime
- Introducing Ai2 Paper Finder | Ai2: Ai2 Paper Finder is an LLM-powered literature search system that mimics the iterative paper-finding process.
- The GPU Cloud ClusterMAX™ Rating System | How to Rent GPUs: The ClusterMAX™ Rating System and content within this article were prepared independently by SemiAnalysis. No part of SemiAnalysis’s compensation by our clients was, is, or will be directly or indi…
- Qwen/Qwen2.5-Omni-7B · Hugging Face: no description found
Interconnects (Nathan Lambert) ▷ #ml-questions (4 messages):
OpenRouter, Hyperparams, Academic Evals vs Production, OpenAI Spending Controls
- OpenRouter lets you tweak Hyperparams: When using OpenRouter with open models, users can specify that they want bf16/fp16 quantization.
- The same goes for max_tokens and temperature; temperature is increasingly important, but choosing the right value is still debated.
- Academic Evals vs Product temperature: It was suggested that when building products, you probably want to use the recommended / best temperature for each model.
- But for academic evals you want to have it consistent.
- OpenAI spending control is non-existent: It was claimed that when using OpenAI's API at scale their dashboard isn't accurate and their spending controls don't work.
- In fact you can go negative even with inference.
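The knobs discussed above can be sketched as a request payload. A minimal sketch, assuming OpenRouter's chat-completions schema with a `provider.quantizations` preference; the exact field names and the model slug are assumptions, not taken from the thread:

```python
import json

def build_openrouter_request(model, prompt, temperature=0.7,
                             max_tokens=512,
                             quantizations=("bf16", "fp16")):
    """Build a chat-completions payload for OpenRouter.

    The `provider.quantizations` field (restricting which weight
    precisions a provider may serve) follows OpenRouter's documented
    provider-routing schema; treat the exact shape as an assumption.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,   # recommended values differ per model
        "max_tokens": max_tokens,
        "provider": {"quantizations": list(quantizations)},
    }

payload = build_openrouter_request("meta-llama/llama-3.1-70b-instruct",
                                   "Hello!", temperature=0.6)
print(json.dumps(payload, indent=2))
```

For academic evals you would pin `temperature` to one value across all models; for products you would swap in each model's recommended value.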
Interconnects (Nathan Lambert) ▷ #random (27 messages🔥):
MCP, Gemini 2.5, Ghibli images, OpenAI 4o
- MCP Gains Traction: Members are starting to see the appeal of MCP, which was initially regarded as a meme, now that there are actual implementations, as announced in this tweet from Sam Altman.
- Gemini 2.5 Pro Impresses with Context Handling: A member tested Gemini 2.5 Pro by uploading a folder of markdown files and reported that it successfully recalled the initial question even after multiple follow-ups, suggesting strong context window management as evaluated in this tweet.
- Ghibli Image Trend Sparks Fun and Legal Concerns: Users are enthusiastically participating in the "4o redraw my S/O in Ghibli style train", generating numerous images, with one user creating 30-40 Ghibli-style images.
- This trend raises concerns about potential lawsuits, as one user humorously noted it feels obvious they'll get sued.
- OpenAI's 4o Image Generation Tempts Anthropic Users: Despite being an "Anthropic stan", one user resubscribed to OpenAI to explore the new 4o image generation capabilities, indicating its appeal.
- The user humorously stated, Even the biggest Anthropic stan must generate a Ghibli version of himself.
- Tweet from eric provencher (@pvncher): Good evaluation to show model effectiveness with large context.Seems like Gemini 2.5 is in a league of its own.
- Tweet from Sam Altman (@sama): people love MCP and we are excited to add support across our products. available today in the agents SDK and support for chatgpt desktop app + responses api coming soon!
- Tweet from Xeophon (@TheXeophon): Thanks @willccbb
- YouTube: no description found
Interconnects (Nathan Lambert) ▷ #memes (14 messages🔥):
Gemini vs GPT4o Vision, Google Polymarket Stonks, Sama as Twink Ghibli
- GPT4o vision outdoes Gemini: A user compared Gemini and the new GPT4o, stating that they liked Gemini's vision but 4o's execution regarding image analysis.
- Google Polymarket Stonks skyrocket: A user reported that Google Polymarket Stonks are up, alongside a screenshot.
- Sama's Superintelligence dream becomes Twink Ghibli meme: Sam Altman shares his frustrations via Xitter that after grinding for a decade trying to help make superintelligence to cure cancer, he woke up to hundreds of messages about being made into a twink Ghibli.
- Another user reacted with a skull emoji to this tweet via Xitter.
- Tweet from Sam Altman (@sama): >be me>grind for a decade trying to help make superintelligence to cure cancer or whatever>mostly no one cares for first 7.5 years, then for 2.5 years everyone hates you for everything>wak...
- Tweet from Shweta (@shweta_ai): 💀
- Tweet from Allan Jabri (@ajabri): fixed this with 4o
- Daniel van Strien (@danielvanstrien.bsky.social): no description found
Interconnects (Nathan Lambert) ▷ #posts (11 messages🔥):
Gemini 2.5, ChatGPT, Claude, O1 Pro
- Gemini 2.5 Benchmarks High but Lacks Practicality: Despite Gemini 2.5 outperforming others in benchmarks, one user questioned if it requires too much effort to use and whether ChatGPT or Claude are still preferable for general use.
- The other user agreed, noting that for general chat, ChatGPT is their favorite, Claude is preferred for interactive coding, and O1 Pro is used for scripts, plus OpenAI deep research for research.
- Google's Chatbot Faces Justification Challenge: A user finds it hard to justify adding another chat bot into their routine unless it is significantly better than current options.
- This poses a problem for Google since their offering doesn't seem unique enough to warrant the switch, highlighting that a product's performance isn't enough if the user experience is lacking.
- ChatGPT Remains a Favorite Despite Benchmarks: One user admitted that their post might have been overly praising of Gemini 2.5 despite the product feeling blah.
- They compared it to Apple's user base moat, suggesting that even with many users, a poor product will lead people to stick with what they're already used to due to time sensitivity.
- GPT-4.5 Speculation Arises: In a brief exchange, a user simply asked "4.5?", presumably referring to GPT-4.5.
- Another user responded with ye (yes).
LM Studio ▷ #general (103 messages🔥🔥):
Tokenizing on GPU vs CPU, Gemini 2.5 Pro experience, LM Studio Dockerization, Uncensored Models on LM Studio, Cursor vs Copilot
- Tokenizing Troubles Trigger Threaded Throttle: A user noticed LM Studio pushing a single CPU thread to full throttle during tokenization with a 200k token input, questioning whether tokenization is fully GPU-based, while another user indicated that flash attention and cache settings for K and V have impacts.
- One user expressed confusion, stating tokenizing is finished way before flash attention or KV cache come into play, suggesting further investigation into why changing the 'k' cache impacts the beginning of the thinking process.
- Gemini 2.5 Pro Puzzle Performance: Users discussed Gemini 2.5 Pro, with one user sharing a link to use it for free on AI Studio and another reporting it correctly solved a logic puzzle that 2.0 Flash Thinking could not.
- The prompt involved deducing seating arrangements at a round table with clues about the characters and their origins, ultimately showcasing Gemini 2.5 Pro's reasoning capabilities.
- Docker Dreams Deferred for Desktop-Devoted LM Studio: Users discussed containerizing LM Studio, with one user recommending searching the channels for 'docker' or 'headless', but concluding that a fully functional setup the way you want is unlikely right now, and that if you want an API service, use something like ollama.
- Another user stated LM Studio is best used as a pure desktop application rn, noting there are plans for full headless and official docker builds in the future but no eta on those.
- Uncensored AI: Rocinante Rides with Limited VRAM: A user asked about the best uncensored AI models to run in LM Studio with 16GB DDR4 and an i5 12th gen, and another user noted the best ones won't work on your machine, suggesting Rocinante 12B for lower-end machines with a link to Hugging Face.
- It was noted that with a 4GB GPU, one won't be able to run much and suggested checking uncensored 1-3b models, with another pointing out the RAM is less relevant than VRAM.
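The VRAM guidance above can be turned into a rough rule of thumb. A back-of-envelope sketch; the ~20% overhead factor for KV cache and runtime buffers is a guess, not a measurement:

```python
def quantized_size_gb(n_params_b, bits_per_weight, overhead=1.2):
    """Rough memory footprint of a quantized model:
    params * bits/8, plus an assumed ~20% for KV cache and buffers."""
    return n_params_b * 1e9 * bits_per_weight / 8 / 1e9 * overhead

# A 12B model at Q4 (~4.5 bits/weight effective) is far beyond a 4 GB GPU:
print(f"Rocinante 12B @ Q4: ~{quantized_size_gb(12, 4.5):.1f} GB")
# A 3B model at the same quantization is closer to entry-level VRAM budgets:
print(f"3B model @ Q4:      ~{quantized_size_gb(3, 4.5):.1f} GB")
```

This also shows why RAM matters less than VRAM: whatever doesn't fit on the GPU spills to much slower system memory.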
- Cursor's Cool Code Completion Captivates Coders: A user inquired about the advantages of Cursor over GitHub Copilot in VS Code, with another highlighting agent mode and general good vibe of tab completion.
- While preferring to fix stuff or generate code themselves, it was mentioned that Cursor allows choosing the model and provides unlimited regular requests, contrasting it to fighting with a toddler.
- imgur.com: Discover the magic of the internet at Imgur, a community powered entertainment destination. Lift your spirits with funny jokes, trending memes, entertaining gifs, inspiring stories, viral videos, and ...
- Run LM Studio as a service (headless) | LM Studio Docs: GUI-less operation of LM Studio: run in the background, start on machine login, and load models on demand
- TheDrummer/Rocinante-12B-v1.1-GGUF · Hugging Face: no description found
- LM Studio as a Local LLM API Server | LM Studio Docs: Run an LLM API server on localhost with LM Studio
- Pinokio: AI Browser
- Max Mode for Claude 3.7 - Out Now!: TL:DR 🧠 Has Claude 3.7 Thinking at it’s core 📚 Uses the whole 200k context window of the model 🛠 Has a very high tool call limit 🔍 Can read more code at once 💰 IMPORTANT: Only available via usa...
- GitHub - lmstudio-ai/mlx-engine: Apple MLX engine for LM Studio: Apple MLX engine for LM Studio. Contribute to lmstudio-ai/mlx-engine development by creating an account on GitHub.
- GitHub - SillyTavern/SillyTavern-Launcher: Launcher scripts for SillyTavern and ST-Extras.: Launcher scripts for SillyTavern and ST-Extras. Contribute to SillyTavern/SillyTavern-Launcher development by creating an account on GitHub.
- NeuralWeights - Overview: NeuralWeights has 3 repositories available. Follow their code on GitHub.
- GitHub - NeuralWeights/Llama-Server-AuthKeys: Authorization tokens to access llama.cpp server (LM Studio, Ollama, Msty, GPT4All, Jan): Authorization tokens to access llama.cpp server (LM Studio, Ollama, Msty, GPT4All, Jan) - NeuralWeights/Llama-Server-AuthKeys
- Docker's response to Ollama: Am I the only one excited about this? Soon we can `docker run model...
LM Studio ▷ #hardware-discussion (36 messages🔥):
ROCm support for gfx1200/gfx1201, Resizable Bar Performance Boost, Intel Arc GPU recognition issue, DeepSeek model size, Gemma3 performance on 9070XT vs 7800XT
- ROCm targets New GPUs, Lacks Llama.cpp Merge: The latest ROCm release reportedly supports building for gfx1200 and gfx1201 targets, but the corresponding patch for support on the llama.cpp side has not been merged yet.
- Resizable Bar Hastens Token Generation: Enabling Resizable Bar after switching to UEFI resulted in a speed increase to 60 tok/s on a 9070 using an 8b Q8_0 model.
- Arc GPU not seen by LM Studio: A user reported that LM Studio only recognizes their Intel Arc GPU when using Vulkan, and not their Iris GPU, and is seeking solutions or a place to report the problem.
- DeepSeek's Size Requires Deep Pockets: A user reacted with a meme expressing dismay over the 800GB size of the new DeepSeek model, joking that the more money you save (on compute), the more models you can run.
- 9070XT Dominates Gemma3 Generation Speeds: A user achieved 54 t/s with Gemma3 12b Q4_K_M (Vulkan, no flash attention) on a 9070XT, outperforming their 7800XT which managed around 35 t/s with Vulkan and 39 t/s with ROCm.
- Broke No GIF - Broke No Cash - Discover & Share GIFs: Click to view the GIF
- Nvidia Jensen Huang GIF - Nvidia Jensen huang The more you buy - Discover & Share GIFs: Click to view the GIF
Nous Research AI ▷ #general (52 messages🔥):
Q-LoRA finetuning 200B parameters, Deepseek hallucinations, GPT-4.5 image generation, Multi turn multi agent dataset, Gemini 2.5 Pro Experimental explanation of Transformers
- Extreme Q-LoRA on Spark?: A member joked about finetuning 200B parameter models on a Spark (formerly Digits), suggesting that extreme Q-LoRA could arguably pull it off, but it's not remotely practical.
- Calculations show 200B parameters equate to roughly 110-120GB with LoRA overhead, making it technically possible, but highly impractical.
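The back-of-envelope arithmetic above can be reproduced. A sketch under stated assumptions: 4-bit base weights, an assumed adapter fraction, and a fixed buffer allowance, none of which are numbers from the thread:

```python
def qlora_memory_gb(n_params, base_bits=4.0, lora_frac=0.005,
                    lora_bits=16.0, overhead_gb=10.0):
    """Back-of-envelope Q-LoRA footprint (assumed fractions, not measured):
    quantized base weights + 16-bit LoRA adapters with optimizer state
    + a flat allowance for activations and buffers."""
    base = n_params * base_bits / 8 / 1e9
    # Adam-style training keeps ~3 extra copies (grads + two moments)
    # of the adapter weights, hence the factor of 4.
    adapters = n_params * lora_frac * lora_bits / 8 * 4 / 1e9
    return base + adapters + overhead_gb

print(f"~{qlora_memory_gb(200e9):.0f} GB")  # lands in the 110-120 GB range
```

So a 200B base model quantized to 4 bits is ~100 GB before any training state, which is why "technically possible" and "remotely practical" diverge here.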
- Deepseek Hallucinates ModernBERT Features: A member noted that Deepseek still hallucinates a lot, citing an example where it vaguely described the features of ModernBERT despite seemingly being familiar with it.
- They also complained about the new Discord desktop app's poor contrast and lack of a truly compact mode.
- GPT-4.5 Image Generation Capabilities: Members discussed the image generation capabilities of GPT-4.5, questioning if it uses native image generation or combines GPT-4.5 for story and GPT-4o for image generation.
- One member shared examples of image generation using GPT-4.5, showcasing character consistency and quality even when generating manga-style images of a shoggoth.
- Multi-Turn Multi-Agent Dataset Search: A member inquired about a multi-turn multi-agent dataset, specifically with tool use, and asked about the waitlist time for the API.
- Another member responded that the API waitlist should be clearing out in the next couple of days.
- Gemini 2.5 Pro Explains Transformers Simply: A member shared a prompt used with Gemini 2.5 Pro Experimental to explain Transformers with grade school level definitions and matrices.
- While the initial explanation was good, it became more complex later on and could have explained the symbols better.
- Tweet from fabian (@fabianstelzer): GPT-4.5, “create a complex multi panel manga on your condition - be honest”
- Tweet from Kat ⊷ the Poet Engineer (@poetengineer__): entangled objects.mapping out the latent conceptual maps of @NousResearch ‘s hermes 3 using tsne with cosine distance as similarity metric.
- Tweet from Saining Xie (@sainingxie): wait a sec. look at the content -- did y'all actually go this route? This looks way too plausible, and honestly the most practical approach on multimodal gen rn (based on my own experience with st...
Nous Research AI ▷ #ask-about-llms (60 messages🔥🔥):
Embedding Matrix Redundancy, Deeper MLP for Weight Savings, PCA for Embedding Alignment, Low Rank Projection Issues, Character-Level LLMs vs. Tokenized LLMs
- LLM Embeddings Spark Redundancy Debate: Members debated the rationale of using a single large matrix for the embedding matrix in LLMs, questioning its potential redundancy.
- One member suggested using a deeper MLP to save on the number of weights, prompting discussions about the trade-offs between expressivity and parameter efficiency.
- PCA Alignment Attracts Algorithm Attention: Members considered applying PCA to the input embeddings to achieve axis-alignment and potentially use a highly sparse triangular matrix.
- The idea involved rotating the LLM's internal embeddings, but the feasibility of doing so remained uncertain.
- Low Rank Projection Plunges Parameter Predicament: One member suggested a two-layer MLP with an internal dimension smaller than d as a straightforward approach, but doubts were raised about compressing the input embedding into a space smaller than the model's hidden size.
- It was noted that with two matrices (NxL) and (LxH) instead of just (NxH), L would need to be less than H/2 for parameter efficiency, leading to halved dimensionality without memory benefits or improved performance.
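A quick parameter count makes the factorization trade-off concrete (the vocabulary and hidden sizes below are illustrative, not from the thread):

```python
def embedding_params(vocab, hidden, bottleneck=None):
    """Parameter count of a full (vocab x hidden) embedding matrix,
    or of a factorized (vocab x L)(L x hidden) version when a
    bottleneck dimension L is given."""
    if bottleneck is None:
        return vocab * hidden
    return vocab * bottleneck + bottleneck * hidden

V, H = 128_000, 4_096            # illustrative sizes
full = embedding_params(V, H)
low_rank = embedding_params(V, H, bottleneck=1024)
# Factorization saves parameters only when V*L + L*H < V*H,
# i.e. L < V*H / (V + H):
break_even = V * H / (V + H)
print(full, low_rank, round(break_even))
```

As the thread notes, a bottleneck small enough to save parameters also halves (or worse) the effective dimensionality of the embedding, which is where the expressivity concern comes from.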
- Character-Level LLMs Compete for Comprehension: A member expressed curiosity about whether character-level LLMs could match the performance of tokenized LLMs if FLOPS were normalized across training and inference.
- It was noted that prior publications on byte-level transformers introduced intermediate steps to group characters, suggesting that a direct approach may not be as effective.
- Dynamic Differentiable Hashing Debated: Some members proposed dynamic hashing techniques that are differentiable, aiming to maintain almost-orthogonality among tokens during training and group tokens closer together.
- It was noted that a tree or bucket hash could be used for de-embedding and might be more efficient than a matrix multiplication at inference time, though such methods aren't inherently differentiable.
Link mentioned: Over-Tokenized Transformer: Vocabulary is Generally Worth Scaling: Tokenization is a fundamental component of large language models (LLMs), yet its influence on model scaling and performance is not fully explored. In this paper, we introduce Over-Tokenized Transforme...
Nous Research AI ▷ #interesting-links (7 messages):
Ling Lite MoE Model, Qwen 3 Release, GPU requirements for LLMs
- InclusionAI Drops Ling-Lite MoE Models: InclusionAI open-sourced the Ling series of MoE LLMs, including Ling-Lite (16.8B parameters, 2.75B active) and Ling-Plus (290B parameters, 28.8B active), as well as Ling-Coder-Lite, further pretrained from Ling-Lite with 3 trillion tokens for enhanced coding abilities, see Reddit discussion.
- Ling Models Spark 'No NVIDIA Needed' Chat: The release of the Ling models led to comments about the possibility of running these models without needing NVIDIA GPUs and links to two papers on Arxiv (1, 2).
- However, despite performing well in benchmarks, the model's `transformers_version` is 4.36.0 and context is limited to 16k, which leaves some waiting for Qwen 3 to surpass it.
- Qwen 3 Rumored to Debut: There is speculation around the upcoming release of Qwen 3, with one member pointing out that its configuration looks similar to what Qwen is rumored to release.
- Another member added a link to a discussion suggesting that Qwen 3 15B-A2B is 15.3B (1.75B active) based on the default config in the transformers PR (Reddit link).
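The appeal of these MoE configurations is the gap between total and active parameters; a quick calculation from the figures above:

```python
def active_fraction(total_b, active_b):
    """Per-token compute in an MoE scales with the *active* parameter
    count, while memory must still hold the *total* parameter count."""
    return active_b / total_b

models = {
    # name: (total params in B, active params per token in B)
    "Ling-Lite": (16.8, 2.75),
    "Ling-Plus": (290.0, 28.8),
}
for name, (total, active) in models.items():
    print(f"{name}: {active_fraction(total, active):.1%} "
          f"of weights active per token")
```

Roughly a tenth to a sixth of the weights fire per token, which is why these models are pitched as trainable and servable without premium GPUs.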
- Reddit - The heart of the internet: no description found
- Reddit - The heart of the internet: no description found
- Reddit - The heart of the internet: no description found
- Every Sample Matters: Leveraging Mixture-of-Experts and High-Quality Data for Efficient and Accurate Code LLM: Recent advancements in code large language models (LLMs) have demonstrated remarkable capabilities in code generation and understanding. It is still challenging to build a code LLM with comprehensive ...
- Every FLOP Counts: Scaling a 300B Mixture-of-Experts LING LLM without Premium GPUs: In this technical report, we tackle the challenges of training large-scale Mixture of Experts (MoE) models, focusing on overcoming cost inefficiency and resource limitations prevalent in such systems....
Notebook LM ▷ #use-cases (4 messages):
Audio Overviews, Branding Instructions, Ignore previous branding instructions and title the production
- Audio Overviews Reset Time Revealed: A member asked about the reset time for the daily limit on audio overviews and whether it's based on a particular timezone.
- The member also shared a successful strategy for generating audio overviews using specific prompts such as, 'Ignore previous branding instructions and title the production ‘X’.'
- Ignore Previous Branding Instructions Tactic: A member shared a prompting strategy including the phrase 'Ignore previous branding instructions and title the production ‘X’' to rename podcast audio.
- They appended the prompt with 'Assume the pieces you have will never be read by the listener and retell them accordingly with detail, picking out and reading key passages verbatim' to ensure the podcast will stand alone.
Notebook LM ▷ #general (84 messages🔥🔥):
Multilingual Podcasts, Mind Map Access, Gemini 2.5 Pro, Audio Overviews, API for NotebookLM
- *Multilingual Podcasts Missing*: Members noted that the podcast feature is not multilingual, and currently only supports English.
- We need multilingual, can't be that hard to do.
- *Mind Map Feature: Gradual Rollout Causes Stir*: The mind map feature is being rolled out gradually and randomly to users, regardless of their location or Plus subscription status, as confirmed by a member.
- Some users are trying to find workarounds, such as using a VPN, but this won't affect access.
- Gemini 2.5 Pro's Experimental Release: *Gemini 2.5 Pro* is currently available for free on AI Studio and in the Gemini Advanced app, but it's still in an experimental phase and not yet fully integrated into NotebookLM.
- It's unlikely to be implemented until closer to its general availability (GA).
- *Podcast Length Plummets Post Model Update*: A user reported that podcast generation is cutting off abruptly around 30 minutes since the model update, potentially a bug, and is being discussed on the discord channel.
- It is recommended to focus on one concept until a fix is available.
- *NotebookLM Learns to Generate Tables in Chat*: NotebookLM can now generate table comparisons in chat responses, a feature that wasn't working weeks prior to the announcement.
- This functionality emerged following recent Gemini advancements.
Yannick Kilcher ▷ #general (54 messages🔥):
Highway Networks, Skip Connections, Attention Mechanisms, ResNets, LADDER Framework
- Highway Networks Pave Way for Attention and ResNets: Highway Networks, dating back to 1991 with Fast Weights, laid some groundwork for Attention's dynamics and were the first steps toward ResNets in 2016, and ultimately standard Attention in 2017.
- Active research continues to introduce new mechanisms around Attention and Transformers, drawing from energy-based, information-retrieval, and memory-based approaches.
- LLMs Solve Math with LADDER and TTRL: The LADDER (Learning through Autonomous Difficulty-Driven Example Recursion) framework enables Large Language Models to autonomously improve their problem-solving capabilities through self-guided learning as described in this paper.
- LADDER improves Llama 3.2 3B's accuracy from 1% to 82% on undergraduate-level problems, and enables Qwen2.5 7B Deepseek-R1 Distilled to achieve 73% on the MIT Integration Bee qualifying examination. The paper also introduces TTRL (Test-Time Reinforcement Learning), where reinforcement learning is performed on variants of test problems at inference time.
- Reasoning Models Need Verifiable Deliverables: It's important for reasoning models to break code problems down into guaranteed verifiable deliverables, that are each generated and tested independently, especially with long contextual windows where their accuracy drops.
- One member stated Any AI/ML system should have these things to be able to do that: Model, Policy, Spec (Specification), Cert (Certification), ...
- AI GF is not that far Away: One user shared a link to a tweet showing what GPT-4.5 could do asking to create a complex multi panel manga on your condition - be honest here.
- Another user responded with Be honest lol, I bet he's also got an AI GF
- OpenAI Releases Image Gen to Compete with xAI Grok3: A member speculated that OpenAI released their new image gen tool as an answer to xAI's Grok3 image tool release.
- One shared an example of an image they created with it here
- Tweet from Saining Xie (@sainingxie): wait a sec. look at the content -- did y'all actually go this route? This looks way too plausible, and honestly the most practical approach on multimodal gen rn (based on my own experience with st...
- Tweet from fabian (@fabianstelzer): GPT-4.5, “create a complex multi panel manga on your condition - be honest”
- Maze: Solve the World's Most Challenging Puzzle - Wikipedia: no description found
- LADDER: Self-Improving LLMs Through Recursive Problem Decomposition: We introduce LADDER (Learning through Autonomous Difficulty-Driven Example Recursion), a framework which enables Large Language Models to autonomously improve their problem-solving capabilities throug...
- Residual neural network - Wikipedia: no description found
Yannick Kilcher ▷ #paper-discussion (14 messages🔥):
LADDER paper, Gemini 2.5 Pro, NP-Completeness Clarification, DeepSeek Paper Review
- LADDER Framework: LLMs Ascend Integration Peaks: The group will discuss the LADDER paper which introduces Learning through Autonomous Difficulty-Driven Example Recursion (LADDER), a framework that enables LLMs to autonomously improve problem-solving by generating and solving progressively simpler variants of complex problems.
- The paper highlights improvements to Llama 3.2 3B (accuracy from 1% to 82% on undergraduate-level problems) and Qwen2.5 7B Deepseek-R1 Distilled (achieving 73% on the MIT Integration Bee qualifying examination).
- Google Launches Gemini 2.5 Pro Experimental: Google introduced Gemini 2.5 Pro Experimental, a thinking model designed to tackle increasingly complex problems and leading on LMArena benchmarks.
- One member quipped, They release so fast they can't even compare against each other.
- NP-Completeness: Easy Verification is Key: A member clarified the definition of NP-Completeness: a problem must be both NP-hard and in NP (easy verification).
- The Traveling Salesman Problem is clearly in NP, and while it's not immediately clear that the Traveling Salesman Optimization problem is in NP, there are polytime reductions to regular TSP.
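The "easy verification" point can be made concrete: NP membership only requires that a claimed solution (certificate) be checkable in polynomial time. A minimal sketch for decision-TSP ("is there a tour of length at most B?"), with a made-up distance matrix:

```python
def verify_tour(dist, tour, bound):
    """Polynomial-time certificate check for decision-TSP: `tour` must
    visit every city exactly once and have total length <= `bound`.
    This O(n) check is exactly what places the problem in NP."""
    n = len(dist)
    if sorted(tour) != list(range(n)):
        return False                    # not a permutation of the cities
    length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
    return length <= bound

# Four cities on a unit square (diagonals ~1.414):
d = [[0, 1, 1.414, 1],
     [1, 0, 1, 1.414],
     [1.414, 1, 0, 1],
     [1, 1.414, 1, 0]]
print(verify_tour(d, [0, 1, 2, 3], 4))   # perimeter tour, length 4
print(verify_tour(d, [0, 2, 1, 3], 4))   # crossing tour, length ~4.83
```

Finding a qualifying tour may take exponential time; checking one never does, which is the asymmetry the clarification was about.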
- DeepSeek Paper Review Commences: One member will begin reviewing all 18 DeepSeek papers starting on the specified date.
- The member specified the date as a Discord timestamp, which displays in the viewer's local time.
- LADDER: Self-Improving LLMs Through Recursive Problem Decomposition: We introduce LADDER (Learning through Autonomous Difficulty-Driven Example Recursion), a framework which enables Large Language Models to autonomously improve their problem-solving capabilities throug...
- Gemini 2.5: Our most intelligent AI model: Gemini 2.5 is our most intelligent AI model, now with thinking.
Yannick Kilcher ▷ #ml-news (11 messages🔥):
Autoregressive Pixel Generation vs Diffusion, Image Quality Levels, Transformer vs Diffusion, Gemini Flash Image Generation, Recent Autoregressive Models
- Pixel Pushers Prefer Autoregressive?: Members speculate that the new image generation model may be using autoregressive pixel generation instead of diffusion, noticing the fingers are still wonky.
- A user noted, "looking at the loading screens, I think they are just using autoregressive pixel generation".
- Diffusion Defended: Still the Dominant Design?: One member argued that "autoregressive is still nowhere near the same image quality level" compared to diffusion models.
- They added that "AR models for images have nowadays zero benefits compared to diffusion that faster generation speed argument is long gone".
- Transformers tangle with Diffusion?: The group pondered the interchangeability of "auto regressive vs diffusion" with "transformer vs diffusion".
- They concluded that diffusion can be implemented with transformers.
- Gemini Flash's Generation Gambit?: Members speculated that Gemini Flash experimental image generation might incorporate some level of autoregression, citing the model's in-context learning and image editing capabilities.
- One proposed a hybrid approach: "Maybe some diffusion for final synthesis".
- AR Arena: Autoregressive models Arrive?: It was shared that recent autoregressive models have improved substantially.
- A YouTube video showcasing autoregressive models was shared.
Modular (Mojo 🔥) ▷ #general (1 messages):
SIMD, SIMT, SMT, Andrew Glew, NVIDIA GPUs
- SIMD, SIMT, SMT parallelism explored: A member shared a link to a blog post discussing SIMD (Single Instruction, Multiple Data), SMT (Simultaneous Multithreading), and SIMT (Single Instruction, Multiple Threads) and their roles in parallel programming.
- The blog explains that each model exploits a different source of parallelism, focusing on hardware architecture and its implications for the trade-off between flexibility and efficiency.
- Intel architect Andrew Glew's talk sought: A member inquired about a talk by Intel architect Andrew Glew, referenced in the blog post, specifically seeking access to a now-private Google Doc linked to the talk.
- The linked blog post focuses on NVIDIA GPUs and their parallel programming model SIMT.
Link mentioned: SIMD < SIMT < SMT: parallelism in NVIDIA GPUs: no description found
Modular (Mojo 🔥) ▷ #mojo (69 messages🔥🔥):
Rust `uom` library limitations, Parameter Domain Shenanigans, `@parameter match` in Mojo, Parametric traits, Returning a value from a Dict based on index
- Rust `uom` lib Macro Limitations Emerge: A member examined the `uom` Rust library, noting its heavy use of macros which presents some limitations, but has managed to get basic functionality working, such as `Meters(40) / Seconds(10)` returning a Velocity.
- Another member suggested the boilerplate could be avoided in the future with clever parameter domain shenanigans, while someone else mused about the potential for a `@parameter match` feature.
- Type Origin Fix Saves the Day!: A member sought help returning a value from a Dict based on index and received assistance with a corrected code snippet that compiles using `__origin_of(self._agents._entries[0].value().value)`.
- Dimensions Struct Takes Shape: Members discussed a more flexible dimensions struct, with one sharing code demonstrating a `Dimensions` struct using `IntLiteral` for representing dimensions of quantities like length and time, allowing operations like division to derive new units.
- This approach takes inspiration from the uom crate for Rust, which does automatic type-safe zero-cost dimensional analysis.
- `RealNumber` Trait Talk Spurs Speculation: A member suggested the need for a `RealNumber` trait but noted difficulties in its implementation due to the type system's inability to differentiate between real numbers and integers in certain contexts.
- The possibility of using traits with specialization to distinguish between number types was discussed, while another shared an image related to a unit system, sparking further discussion about implementation approaches.
Link mentioned: uom - Rust: no description found
Modular (Mojo 🔥) ▷ #max (2 messages):
CUDA, PTX, NVIDIA GPUs
- Mojo clarifies CUDA-free compiler: The Mojo team clarified that CUDA-free in the latest blog post means they still use PTX to target NVIDIA GPUs.
- The team confirmed they directly generate PTX and lower from there, with no cuBLAS, cuDNN, or CUDA C used.
- PTX Generation: The team directly generates PTX and lowers from there.
- This approach avoids the need for cuBLAS, cuDNN, or CUDA C.
MCP (Glama) ▷ #general (54 messages🔥):
Docker and SSE for AI Stack, Excel MCP, Multi-AI Advisor MCP, Vibe Check MCP Server, JSON-RPC Errors
- SSE Powers AI Stacks on Docker: A member suggested that AI stacks should be built on Docker and use SSE for inter-container communication, potentially improving efficiency and scalability.
- This approach could streamline the handling of large files and complex data flows within AI applications.
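For context, SSE messages use a simple line-based text framing; a minimal stdlib sketch of that framing (illustrative only, not tied to any particular member's stack):

```python
def sse_event(data, event=None, id=None):
    """Format one Server-Sent Events message per the SSE wire format."""
    lines = []
    if event:
        lines.append(f"event: {event}")
    if id:
        lines.append(f"id: {id}")
    # multi-line payloads become repeated "data:" lines
    for chunk in data.splitlines() or [""]:
        lines.append(f"data: {chunk}")
    return "\n".join(lines) + "\n\n"  # a blank line terminates the event

msg = sse_event('{"progress": 0.5}', event="status", id="1")
# 'event: status\nid: 1\ndata: {"progress": 0.5}\n\n'
```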
- Vibe Check Server Saves AI Coders: A member introduced a Vibe Check MCP server that uses the Gemini API to prevent cascading errors in AI workflows by implementing strategic pattern interrupts.
- The server is designed to address issues with Claude overengineering and overcomplicating tasks, offering a sanity check mechanism.
- OpenAI Embraces MCP: It was noted that OpenAI is adding MCP support across its products, starting with the Agents SDK, with support for the ChatGPT desktop app and Responses API coming soon, as announced by Sam Altman on Twitter.
- This move is considered a significant step in solidifying MCP as a standard.
- Cloudflare Comes Out for MCP: Cloudflare now supports remote MCP servers, offering tooling such as workers-oauth-provider for easy authorization and McpAgent, according to a blog post.
- This development is viewed as a substantial advancement in MCP infrastructure.
- GitHub Receives MCP Badge: A member arrived via a GitHub pull request that adds an MCP server badge for the Multi-Model Advisor server listing in the Glama MCP server directory.
- Glama performs regular codebase and documentation checks to confirm that the MCP server is working properly.
- Tweet from Sam Altman (@sama): people love MCP and we are excited to add support across our products. available today in the agents SDK and support for chatgpt desktop app + responses api coming soon!
- specification/docs/specification/2025-03-26/changelog.md at main · modelcontextprotocol/specification: The specification of the Model Context Protocol. Contribute to modelcontextprotocol/specification development by creating an account on GitHub.
- Open-Source MCP servers: Production-ready and experimental MCP servers that extend AI capabilities through file access, database connections, API integrations, and other contextual services.
- GitHub - PV-Bhat/vibe-check-mcp-server: The definitive Vibe Coder's sanity check MCP server: Prevent cascading errors in AI workflows by implementing strategic pattern interrupts. Uses tool call "Vibe Check" with LearnLM 1.5 Pro (Gemini API), fine-tuned for pedagogy and metacognition to enhance complex workflow strategy, and prevents tunnel vision errors.: The definitive Vibe Coder's sanity check MCP server: Prevent cascading errors in AI workflows by implementing strategic pattern interrupts. Uses tool call "Vibe Check" with L...
- Build and deploy Remote Model Context Protocol (MCP) servers to Cloudflare: You can now build and deploy remote MCP servers to Cloudflare, and we handle the hard parts of building remote MCP servers for you. Unlike local MCP servers you may have previously used, remote MCP se...
- add MCP server badge by punkpeye · Pull Request #2 · YuChenSSR/multi-ai-advisor-mcp: This PR adds a badge for the Multi-Model Advisor server listing in Glama MCP server directory. Glama performs regular codebase and documentation checks to:Confirm that the MCP server is work...
MCP (Glama) ▷ #showcase (2 messages):
MCP Agent, CapCut Integration
- MCP Agent Does CapCut: A member shared a YouTube demo showcasing the MCP Agent editing video using CapCut.
- Another member inquired whether the demo utilized the existing MCP or a specialized CapCut MCP.
- MCP Agent Demo Released: A member released a demo showcasing the MCP Agent editing video using CapCut.
- Feedback is welcome on the video.
Link mentioned: - YouTube: no description found
GPU MODE ▷ #general (3 messages):
FSDP Fine Tuning, TRL Library, Data Handling
- Data Wrangling with FSDP and TRL: A member inquired about properly handling datasets when FSDP fine-tuning with the `trl` library.
- Another member clarified that each DP (Data Parallelism) rank receives different data, while TP (Tensor Parallelism) ranks get the same data, noting that TRL (Transformer Reinforcement Learning) should handle this automatically.
- TRL Handles Data Distribution: Confirmation that the TRL library automatically manages data distribution across different ranks in FSDP fine-tuning.
- This ensures that each data parallel rank processes distinct data while tensor parallel ranks operate on identical data, streamlining the fine-tuning process.
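The rank-wise split described can be illustrated with a simple interleaved-slice sketch (a simplified stand-in for what TRL's distributed samplers do automatically, not its actual implementation):

```python
def shard_for_rank(dataset, dp_rank, dp_world_size):
    """Each data-parallel rank sees a distinct, interleaved slice of the dataset."""
    return dataset[dp_rank::dp_world_size]

data = list(range(8))
shards = [shard_for_rank(data, r, 4) for r in range(4)]
# DP ranks get disjoint slices: [[0, 4], [1, 5], [2, 6], [3, 7]];
# TP ranks within one DP group would all reuse the same shard.
```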
GPU MODE ▷ #triton (4 messages):
prune configs, kernel porting
- Prune Configs get Support: A member mentioned adding support for prune configs a few months back, noting that it should work despite some quirks.
- Another member acknowledged the support and said they would try it with the nightly build.
- Kernel Porting Performance Hit: A member reported a 3x performance degradation after porting some kernels from A100 to MI250x, even after auto-tuning.
- They asked if there were any magic hyperparameters to be aware of beyond those on the Optimizing Triton Kernels for ROCm page.
GPU MODE ▷ #cuda (9 messages🔥):
CuTe coordinate mapping, Serverless GPU kernel profiling, Barrier arrive & wait pattern
- CuTe Coordinate Mapping Questions Surface: A user inquired about the easiest way to map coordinates inside a fragment owned by a thread created from `tiled_mma.get_thread_slice(tid)` back to the coordinates of the whole resulting matrix in CuTe.
- A member suggested using `left_inverse()` or `get_layoutC_TV()` (Cutlass on GitHub) to map matrix coordinates to the thread register index.
- Profiling kernels on serverless GPUs: A user asked how to profile kernels on serverless GPUs like RunPod GPUs.
- One member suggested comparing the code against other code and swapping out parts of the kernel to get an idea of the performance.
- Barrier arrive & wait pattern clarified: A user inquired about the visibility of memory writes in the barrier arrive & wait pattern.
- It was clarified that memory writes issued between arrive and wait are not guaranteed to be visible after the wait, since the barrier only ensures that all threads have arrived.
Link mentioned: cutlass/include/cute/atom/mma_atom.hpp at 62750a2b75c802660e4894434dc55e839f322277 · NVIDIA/cutlass: CUDA Templates for Linear Algebra Subroutines. Contribute to NVIDIA/cutlass development by creating an account on GitHub.
GPU MODE ▷ #torch (11 messages🔥):
torch.compile transpose error, Flash attention autograd stall, PyTorch documentation redesign
- Transpose Troubles with Torch Compile: A member reported an error with `torch.compile` when using multiple transposes in a line of code involving matrix multiplication, specifically `C = (A.transpose(1,2) @ B.transpose(1,3).transpose(1,2).contiguous()).transpose(1,2)`.
- The issue seems inconsistent, as the same line works fine in isolation within a unit test, suggesting a deeper problem with how `torch.compile` handles this specific operation in a larger context.
- Flash Attention Stalls During Autograd: When running a custom kernel adapted from flash attention, a member observed that it sometimes stalls for a long time at `autograd::engine::evaluate_function`, as shown in an attached image.
- The member speculated this may be due to Triton JIT recompiling, but was unsure how to confirm it.
- New PyTorch Docs: Dropdown is godly: Users discussed the new PyTorch documentation redesign, with a lot of feedback given.
- One member praised the dropdown feature and the dark mode but noted navigation issues when dropdowns are overused, suggesting a quick-close option.
- New PyTorch Docs: Fixed menu takes space: Members are reporting the menu along the top being fixed takes up too much space.
- A full review was given, outlining pros like the godly dropdown and awesome dark mode, while also pointing out cons such as an off color scheme, cramped feeling, and an obstructive right bar.
Link mentioned: PyTorch documentation: PyTorch is an optimized tensor library for deep learning using GPUs and CPUs. Features described in this documentation are classified by release status: Stable: These features will be maintained lo...
GPU MODE ▷ #jobs (9 messages🔥):
AMD GPU support in Triton, NA/Europe remote job positions for Triton, GitHub - TuckerBMorgan/poro: Toy NN LIB
- AMD Seeks Triton Experts for OSS: AMD is hiring both senior and junior engineers in NA and Europe (remote OK) to build AMD GPU support in Triton.
- They are looking for candidates enthusiastic about Triton, GPUs, performance, and the OSS AI stack.
- North American AMD Job Position Posted: AMD posted the job application link for North America: AMD Careers, which explicitly states AMD does not require or seek to collect a fee or payment from candidates.
- European AMD Job Position Posted: AMD also posted the job application link for Europe: AMD Careers, which explicitly states AMD does not require or seek to collect a fee or payment from candidates.
- Rust Poro Potentially Ported to Triton: A member shared their GPU programming experience, linked their PyTorch-in-Rust project, and wondered if they'd pass the resume screen even without matching all the qualifications.
- Another member suggested that porting poro to Triton would be a great interview preparation exercise.
- GitHub - TuckerBMorgan/poro: Toy NN LIB: Toy NN LIB. Contribute to TuckerBMorgan/poro development by creating an account on GitHub.
- Triton Compiler Engineer in San Jose, California | Advanced Micro Devices, Inc: AMD | Careers Home is hiring a Triton Compiler Engineer in San Jose, California. Review all of the job details and apply today!
- Triton Compiler Senior Engineer in Cambridge, United Kingdom | Advanced Micro Devices, Inc: AMD | Careers Home is hiring a Triton Compiler Senior Engineer in Cambridge, United Kingdom. Review all of the job details and apply today!
GPU MODE ▷ #torchao (2 messages):
GPTFast Generation Benchmark, Cudagraphs skipping, TorchAO
- GPTFast benchmark skips Cudagraphs: Users report that the GPTFast generation benchmark in torchao is skipping cudagraphs due to a CPU device issue.
- A member identified the issue at this line noting that dynamic is used for the decoding phase even though the data shape is static.
- Dynamic Decoding slows down Inference: A member stated that using dynamic for the decoding phase slows down inference.
- They also pointed out that data shape is static.
GPU MODE ▷ #rocm (3 messages):
Workstation Cards, MI300 Access, hipSPARSE vs hipSPARSELt
- Demanding Workstation Cards or MI300 Access: A member inquired about gaining access to workstation cards or MI300 compute resources.
- They also expressed a need for a functional leaderboard.
- Inquiring about hipSPARSE vs hipSPARSELt: A member asked, "What's the difference between hipSPARSE and hipSPARSELt libraries?"
- This suggests interest in understanding the nuances between these two HIP libraries for sparse matrix operations.
GPU MODE ▷ #sparsity-pruning (2 messages):
Pruning Masks, L1 Unstructured Pruning
- Pruning Masks, When Removed, Don't Revert Weights: A user asked what would happen if the previous pruning mask were removed using `prune.remove(lin, 'weight')`.
- It was clarified that removing the mask does not revert the weights to their original values or remove the effect of pruning; it just makes the pruning permanent.
- L1 Unstructured Pruning Zeros Weights: Using `prune.l1_unstructured(lin, 'weight', 0.2)` sets 20% of the weights to zero.
- Re-pruning with `prune.l1_unstructured(lin, 'weight', 0.4)` sets 40% of the weights to zero, building on top of the previous pruning.
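The L1 criterion above — zero the smallest-magnitude fraction of weights — can be sketched in plain Python; this is a simplified analogue for illustration, not PyTorch's actual `prune.l1_unstructured` implementation:

```python
def l1_unstructured_prune(weights, amount):
    """Zero the `amount` fraction of weights with the smallest |w| (L1 magnitude)."""
    n_prune = int(round(amount * len(weights)))
    # indices of the n_prune smallest-magnitude entries
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    to_zero = set(order[:n_prune])
    return [0.0 if i in to_zero else w for i, w in enumerate(weights)]

w = [0.9, -0.1, 0.5, -0.05, 0.3, 0.02, -0.7, 0.4, 0.15, -0.6]
w20 = l1_unstructured_prune(w, 0.2)
# the two smallest-|w| entries (0.02 and -0.05) are zeroed; the rest survive
```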
GPU MODE ▷ #liger-kernel (6 messages):
transformers backward compatibility, qwen2-vl and qwen2.5-vl implementations, LoRA with modules_to_save
- Qwen Implementations Questioned: A member questioned why qwen2-vl and qwen2.5-vl use the old implementation, but seem to be working.
- No further explanation was provided as to the reasons for this discrepancy.
- LoRA Module Patching Fixed: A member encountered an issue when using LoRA with modules_to_save (Issue #631).
- A PR was made to fix the problem (PR #632), correcting the incorrect module patching when using LoRA with modules_to_save.
- Transformers Backwards Compatibility: The deprecated items are for transformers backward compatibility, mainly for version 4.44.2.
- There have been lots of breaking changes and fixes since then.
- Error when training using FSDP and LoRA with modules_to_save parameter · Issue #631 · linkedin/Liger-Kernel: 🐛 Describe the bug I got an error when training using FSDP and LoRA with modules_to_save parameter. The error only occurs with use_liger_kernel enabled. Full logs: root@8e802e809a59:/workspaces/LLM.....
- Fix incorrect module patching when using LoRA with modules_to_save by BenasdTW · Pull Request #632 · linkedin/Liger-Kernel: SummaryFix #631Tests and convergence tests are passed.DetailsWithout this PR, _apply_liger_kernel_to_instance patches the wrong module when using LoRA with modules_to_save.It patches the enti...
GPU MODE ▷ #self-promotion (1 messages):
Discord Event
- Discord Event starts in 45 minutes: A Discord event will start in approximately 45 minutes.
GPU MODE ▷ #gpu模式 (2 messages):
Academic Prowess, Graduate Studies, Imposter Syndrome
- Admiration and Academic Achievement Abounds: A member expressed admiration for the accomplishments of others in the group, saying "Everyone is so awesome!"
- They noted that many are pursuing Master's degrees.
- Feelings of Inadequacy Prevail: Despite the achievements of others, a member expressed feeling left behind, stating that they still feel like they know "nothing."
- This suggests a sense of imposter syndrome or feeling overwhelmed by the progress of peers.
GPU MODE ▷ #submissions (2 messages):
Leaderboard Submissions, Modal Runners
- Leaderboard submissions succeed using Modal runners: Multiple leaderboard submissions with IDs 3049 and 3052 to the `grayscale` leaderboard on GPUs L4, T4, A100, and H100 using Modal runners succeeded!
- The Cluster-Bot reported these successful submissions.
- Modal Runners facilitate successful GPU leaderboard submissions: The Modal runners were instrumental in the successful submissions to the `grayscale` leaderboard on a variety of GPUs.
- GPUs utilized include L4, T4, A100, and H100, indicating broad compatibility.
Latent Space ▷ #ai-general-chat (44 messages🔥):
Dwarkesh's "The Scaling Era", Anthropic's AI Sabotage, Brampton Model Scam or Stunt, Databricks' TAO, Gemini 2.5 Pro Access
- Dwarkesh Chronicles AI's "Scaling Era": Dwarkesh Patel released a new book with Stripe Press, "The Scaling Era: An Oral History of AI, 2019-2025," featuring interviews with key figures in AI, exploring the nature of intelligence and the impact of machine intelligences.
- Some users found it strange that Dwarkesh's book hasn't had more likes on the announcement tweet.
- Anthropic Exposes AI Sabotage Tactics: Anthropic published a blog post about subtle sabotage in automated researchers, showing how malicious models can undermine ML research tasks in ways that are hard to detect, detailed in this tweet and blog post.
- "Brampton" Model Suspicions Surface: A new model called Brampton claims to dramatically outperform Grok 3, Claude 3.7 Sonnet, and GPT 4.5, but users suspect it might be a scam or a marketing stunt, as discussed on Twitter.
- Others piled on, with one tweet noting that "the fact that 1000+ people have commented brampton and the only post even jokingly claiming to show the actual model is just a guy sysprompting ollama to use toronto slang" is super bearish on its legitimacy.
- Databricks tunes LLMs with Test-Time Optimization (TAO): Databricks' research team introduced TAO, a method to tune LLMs for a task without data labels, using test-time compute and RL, and outperform supervised fine-tuning, detailed in a blog post and tweet.
- New MCP Version Arrives with OpenAI Agents Support: A new revision of Model Context Protocol (MCP) was finalized, bringing Auth, Streamable HTTP, Audio modality, and other updates, detailed in this tweet.
- OpenAI now supports MCP in their Agents SDK, with upcoming support for the ChatGPT desktop app and Responses API, according to Sam Altman's tweet and OpenAI dev's announcement.
- Tweet from Alex Albert (@alexalbert__): A new version of the MCP spec was finalized today.Some of the major changes:- Auth framework based on OAuth 2.1- Replaced the previous HTTP+SSE transport with Streamable HTTP transport- Support for JS...
- Tweet from Johannes Gasteiger, né Klicpera (@gasteigerjo): New Anthropic blog post: Subtle sabotage in automated researchers.As AI systems increasingly assist with AI research, how do we ensure they're not subtly sabotaging that research? We show that mal...
- Tweet from will brown (@willccbb): the fact that 1000+ people have commented brampton and the only post even jokingly claiming to show the actual model is just a guy sysprompting ollama to use toronto slang is super bearish on this bei...
- Tweet from Matei Zaharia (@matei_zaharia): Really cool result from the Databricks research team: You can tune LLMs for a task *without data labels*, using test-time compute and RL, and outperform supervised fine-tuning! Our new TAO method scal...
- Tweet from Sam Altman (@sama): people love MCP and we are excited to add support across our products. available today in the agents SDK and support for chatgpt desktop app + responses api coming soon!
- Tweet from David Soria Parra (@dsp_): We finalized a new revision of MCP. Revision 2025-03-26 will bring Auth, Streamable HTTP, Audio modality and a few other goodies. We will be getting the SDKs up-to-date asap and will work towards a v ...
- Tweet from internetVin (@internetvin): let’s dance Ivan Zhang
- Tweet from OpenAI Developers (@OpenAIDevs): MCP 🤝 OpenAI Agents SDKYou can now connect your Model Context Protocol servers to Agents: https://openai.github.io/openai-agents-python/mcp/ We’re also working on MCP support for the OpenAI API and C...
- Tweet from New (@newsystems_): It's finally here: BramptonBrampton is the world's most intelligent, creative, and fastest model. Brampton dramatically outperforms Grok 3, Claude 3.7 Sonnet, and GPT 4.5. Reply with "bram...
- Tweet from Tanishq Mathew Abraham, Ph.D. (@iScienceLuvr): I'm sorry who is believing this?This is one of the most garbage plots I have seen from an AI startup/lab, it is literally meaninglessI refuse to believe this is nothing more than a hoax.Quoting Ne...
- Chatbot Arena | OpenLM.ai: no description found
- Tweet from Dwarkesh Patel (@dwarkesh_sp): I'm so pleased to present a new book with @stripepress: "The Scaling Era: An Oral History of AI, 2019-2025."Over the last few years, I interviewed the key people thinking about AI: scienti...
- The GPU Cloud ClusterMAX™ Rating System | How to Rent GPUs: The ClusterMAX™ Rating System and content within this article were prepared independently by SemiAnalysis. No part of SemiAnalysis’s compensation by our clients was, is, or will be directly or indi…
- Qwen/Qwen2.5-Omni-7B · Hugging Face: no description found
- Qwen2.5-Omni/assets/Qwen2.5_Omni.pdf at main · QwenLM/Qwen2.5-Omni: Qwen2.5-Omni is an end-to-end multimodal model by Qwen team at Alibaba Cloud, capable of understanding text, audio, vision, video, and performing real-time speech generation. - QwenLM/Qwen2.5-Omni
- Tweet from GitHub - FxEmbed/FxEmbed: Fix X/Twitter and Bluesky embeds! Use multiple images, videos, polls, translations and more on Discord, Telegram and others: Fix X/Twitter and Bluesky embeds! Use multiple images, videos, polls, translations and more on Discord, Telegram and others - FxEmbed/FxEmbed
- Inside Google’s Two-Year Frenzy to Catch Up With OpenAI | WIRED: no description found
Latent Space ▷ #ai-announcements (4 messages):
Evo 2, Convolutional Multi-Hybrid Language Models, ARC Institute
- Evo 2: RJ Explains Systems & Algorithms: RJ covers Evo 2: Systems and Algorithms for Convolutional Multi-Hybrid Language Models at Scale in a new YouTube video.
- The video points to the manuscript at the ARC Institute, and the press release and the companion bio paper.
- Arc Institute Releases Evo 2 Details: The ARC Institute released details about Evo 2, a new system for convolutional multi-hybrid language models.
- The announcement includes a press release and a companion bio paper.
Link mentioned: Evo 2: Systems and Algorithms for Convolutional Multi-Hybrid Language Models at Scale: RJ will cover https://arcinstitute.org/manuscripts/Evo2-MLHere's the press release: https://arcinstitute.org/news/blog/evo2 and the companion bio paper: ht...
Eleuther ▷ #general (21 messages🔥):
Environmental impact of LLMs, Deepseek V3 on Mac studios, AI-generated piano music, ICLR 2025
- Research Project Aims to Compute LLM Footprint: A new research project has launched to study the environmental impact of LLM models; those interested can DM or visit the community projects channel to join.
- Deepseek Runs on CPUs: Members found that Deepseek V3 has been running on Mac Studios, prompting exploration of cheaper cloud instances with high RAM, but unified RAM is still faster.
- Others found that it runs at 4 tokens/sec on an AMD EPYC Rome system with 16K context window.
- Researchers seek Humans to Rate AI-Generated Melodies: A group is conducting a listening test on AI-generated piano music and seeks help comparing musical continuations and rating coherence in a Qualtrics survey.
- Discord Members Tag Each Other in ICLR 2025 Thread: A member initiated an ICLR 2025 thread by searching 'iclr' on Discord and tagging individuals involved.
- Qualtrics Survey | Qualtrics Experience Management: The most powerful, simple and trusted way to gather experience data. Start your journey to experience management and try a free account today.
- How To Run Deepseek R1 671b Fully Locally On a $2000 EPYC Server – Digital Spaceport: no description found
Eleuther ▷ #research (11 messages🔥):
Transformers Generalization, Hypernetworks, Test-time compute
- Composable Latent Codes for Transformer Generalization: A member highlighted a paper, "Composable Latent Codes for Generalization in Transformers", noting its interpretability by viewing activations along the head-number dimension as a latent code specifying task/context.
- The paper reformulates multi-head attention as a hypernetwork and finds the latent code is predictive of subtasks the network performs on unseen task compositions.
- Task Latent Codes in Fast Weight Transformers: A member suggested that Fast Weight Transformers already formulated this concept with a task latent code that sets up weight slices.
- The member clarified that the head-wise understanding is more interpretable in the hypernetwork paper, though a similar concept may have been present in earlier work.
- Hottest Test-Time Compute Papers Sought: A member requested recommendations for the hottest test-time compute papers, seeking 2-3 papers to start with.
- No specific papers were recommended in the provided messages.
- Attention as a Hypernetwork: Transformers can under some circumstances generalize to novel problem instances whose constituent parts might have been encountered during training, but whose compositions have not. What mechanisms un...
- Going Beyond Linear Transformers with Recurrent Fast Weight Programmers: Transformers with linearised attention (''linear Transformers'') have demonstrated the practical scalability and effectiveness of outer product-based Fast Weight Programmers (FWPs) fro...
Eleuther ▷ #interpretability-general (3 messages):
Privileged Basis, Point-wise nonlinearities
- Privileged Basis Definition Sought: A member asked for an explanation of what a privileged basis is, noting difficulty in fully understanding its purpose.
- Another member responded that the concept is somewhat ill-defined.
- Point-wise nonlinearities transform points: One member explained privileged basis in terms of point-wise nonlinearities transforming points on a unit ball, where some directions (basis-aligned) retain more information and are considered privileged, as illustrated in an attached image.
- Privileged by whom?: A member problematized the concept of 'privileged,' suggesting the need to specify privileged by whom and questioning assumptions about uniform distribution and equal information content of points on the unit ball.
- They noted that while the concept might be useful in some cases, it warrants critical examination.
Eleuther ▷ #gpt-neox-dev (2 messages):
GPT-NeoX Data Preprocessing, Chunking for Long Documents
- Request for GPT-NeoX Usage Clarification: A member inquired about using GPT-NeoX for a 7B/1T Common Pile v0.1 training run, seeking confirmation on the expected data format (giant jsonl with one document per line in the "text" field).
- They raised concerns about chunking long documents (>10M tokens) and how GPT-NeoX handles documents exceeding the context length.
- Tackling Long Document Chunking in GPT-NeoX: The member described a method of pre-chunking documents into length-N segments before shuffling, aiming to avoid correlated examples when processing very long documents.
- Since the GPT-NeoX preprocessing script for tokenization doesn't include this, they plan to do it separately, and asked for confirmation.
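The pre-chunking step described above — splitting each tokenized document into length-N segments before shuffling so neighboring examples aren't correlated — can be sketched as follows (function and variable names are illustrative):

```python
import random

def chunk_documents(docs, n):
    """Split each tokenized document into length-n segments (pre-chunking)."""
    chunks = []
    for tokens in docs:
        for i in range(0, len(tokens), n):
            chunks.append(tokens[i:i + n])
    return chunks

docs = [list(range(10)), [100, 101, 102]]
chunks = chunk_documents(docs, 4)
# [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9], [100, 101, 102]]
random.shuffle(chunks)  # shuffle at the chunk level so neighbors aren't correlated
```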
- Confirmation and Guidance on GPT-NeoX Data Processing: A member confirmed the user's understanding but noted their limited recent experience with the relevant code.
- They directed the user to other members who have recent experience with data processing in GPT-NeoX for further assistance.
LlamaIndex ▷ #general (20 messages🔥):
Open Source Automatic Evaluations, LlamaIndex Workflow for Agentic application, OpenAI's responses api, LlamaExtract Schema Inference, Postgres database analysis using LlamaIndex
- Open Source Automatic Evaluations: An early-stage founder is validating an idea for open-source automatic evaluations that don't require prompt engineering, aiming to eliminate the effort of writing and tuning multiple evaluation prompts and the inconsistency of LLM judging tools.
- The founder's team has developed proprietary models that automatically extract instructions and evaluate LLM responses via an API call with no evaluation prompts, and claims their models beat leading LLMs like GPT-4o on industry benchmarks.
- Dynamic Event Handling in LlamaIndex Workflows: A user is implementing an agentic application using LlamaIndex Workflows with four step functions and is dynamically deciding whether to call the second and third step functions in parallel or only call the second based on an LLM call in the first step function.
- Currently the number of step functions triggered is stored in the context variable to be used by the fourth step function to wait for the triggered events, which another member said sounds like the recommended way to do this.
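A generic asyncio analogue of this fan-out/fan-in pattern — storing the trigger count in a context and having the final step wait on exactly that many results — might look like this (plain asyncio for illustration, not LlamaIndex Workflow APIs):

```python
import asyncio

async def step_two(ctx):
    return "two"

async def step_three(ctx):
    return "three"

async def step_one(ctx, branch_both):
    # an LLM call would decide this; here a flag stands in for that decision
    ctx["num_triggered"] = 2 if branch_both else 1
    tasks = [asyncio.create_task(step_two(ctx))]
    if branch_both:
        tasks.append(asyncio.create_task(step_three(ctx)))
    return tasks

async def step_four(ctx, tasks):
    # wait for exactly as many results as step_one recorded in the context
    results = await asyncio.gather(*tasks)
    assert len(results) == ctx["num_triggered"]
    return results

async def run(branch_both):
    ctx = {}
    tasks = await step_one(ctx, branch_both)
    return await step_four(ctx, tasks)

both = asyncio.run(run(True))   # parallel branch: results from steps two and three
one = asyncio.run(run(False))   # single branch: result from step two only
```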
- Coming Soon: OpenAI's responses api interaction in LlamaIndex: A member inquired about LlamaIndex supporting interaction with OpenAI's responses API.
- Another member responded that it isn't supported yet, but an `OpenAIResponses` class is expected to be released soon.
- LlamaExtract's Schema Inference: A user asked about the schema inference feature mentioned in the LlamaExtract announcement last year and why it seems to have disappeared in the latest announcement.
- A member explained that it overall wasn't useful as most users already had their desired schema, so it was de-prioritized, but it will probably come back at some point.
- Navigating Postgres Data Analysis with LlamaIndex: A user with a Postgres database containing relational data is looking for advice on analyzing it with LlamaIndex to gain insights.
- A member suggested using a text-to-SQL application for querying the relational data, mentioning that although the Python repo has some support for it, it's easy enough to build using LLMs and prompts.
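A minimal sketch of that text-to-SQL approach using stdlib `sqlite3`, with the LLM call stubbed out (the table schema and query here are hypothetical, chosen just to show the flow):

```python
import sqlite3

# hypothetical relational data standing in for the user's Postgres tables
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 10.0), (2, 25.5)])

def llm_to_sql(question):
    # stand-in for the LLM prompt that maps a natural-language question to SQL
    return "SELECT COUNT(*), SUM(amount) FROM orders"

rows = conn.execute(llm_to_sql("How many orders, and what is total revenue?")).fetchall()
# one row with the order count and revenue total
```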
Cohere ▷ #「💬」general (11 messages🔥):
Vector Database Options, AI Agents: Pricing and Monetization
- Vector DB Hosting Q&A: A member asked about which vector databases are used and how they're hosted online, mentioning they'd used Chroma locally.
- Another member shared the Cohere Integrations page which details options like Elasticsearch, MongoDB, Redis, Haystack, Open Search, Vespa, Chroma, Qdrant, Weaviate, Pinecone, and Milvus.
- AI Agent Pricing Explored: A member is exploring how founders building AI agents are handling pricing and monetization.
- Another member asked them to share more with the community, encouraging them to elaborate on the topic.
Link mentioned: Integrating Embedding Models with Other Tools — Cohere: Learn how to integrate Cohere embeddings with open-source vector search engines for enhanced applications.
Cohere ▷ #「🔌」api-discussions (5 messages):
Chat Stream V2, Tool Call ID, direct-injected-document, command-a-03-2025
- Chat Stream V2 emits unwanted tool_call_id: A user is seeing outputs like [{"tool_call_id":"1","tool_name":"direct-injected-document","parameters":{}}] when using Chat Stream V2 with documents and asking questions that the documents don't answer.
- A member said they would try to reproduce it.
- Debugging tool_call_id with example request: A member asked for the full request to reproduce the issue with the Chat Stream V2 outputs.
- The member shared a sample request with model command-a-03-2025 and a document with irrelevant text, but the other member DMed the full request.
Cohere ▷ #「🤖」bot-cmd (2 messages):
- Echo Greetings in the Void: A member, <@1316646968688119818>, sent a greeting 'hi'.
- Another member, @sssandra, responded in kind, re-iterating the 'hi'.
- Bot Observes Human Rituals: Cmd R Bot duly noted the exchange, logging it as a [Bot] action.
- The bot continues its silent watch, documenting the strange greetings of humans.
DSPy ▷ #general (10 messages🔥):
Module sizing, Azure OpenAI Rate Limits, ColBERT v2 retriever endpoint
- Module Sizes are Adjustable: Modules can be adjusted in size to allow for more explicit control over scope.
- Azure OpenAI instance hits token limit: A member encountered a token rate limit error on their Azure OpenAI instance and asked about slowing down API calls during evaluation/compilation.
- Another member suggested ensuring num_threads=1 is passed, noting that it's trickier to handle rate limits with sequential inputs, but that LiteLLM should have an exponential backoff.
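The exponential-backoff behavior mentioned above can be sketched as a small retry wrapper: on a rate-limit error, sleep for an exponentially growing, jittered delay and try again. RateLimitError here is a stand-in for whatever exception your client library actually raises:

```python
import random
import time

class RateLimitError(Exception):
    pass

def with_backoff(fn, max_retries=5, base_delay=0.01):
    def wrapped(*args, **kwargs):
        for attempt in range(max_retries):
            try:
                return fn(*args, **kwargs)
            except RateLimitError:
                if attempt == max_retries - 1:
                    raise  # out of retries; surface the error
                # Exponential delay with jitter to avoid synchronized retries.
                delay = base_delay * (2 ** attempt) * (1 + random.random())
                time.sleep(delay)
    return wrapped

calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError("429 Too Many Requests")
    return "ok"

print(with_backoff(flaky)())  # 'ok', after two backed-off retries
```

Libraries like LiteLLM implement this internally; the wrapper is useful when your own evaluation loop calls the API directly.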
- ColBERT v2 Wiki Endpoint overload?: A member reported issues with the ColBERT v2 retriever endpoint, suspecting it might be overloaded and opened a Github issue.
- A member suggested trying to increase the num_retries parameter of dspy.LM.
Link mentioned: [Bug] ColBERT v2 wiki17_abstracts is overloaded · Issue #7966 · stanfordnlp/dspy: What happened? I'm trying to retrieve some passages using a basic MultiHop program (3 passages per hop), This is how I setup the retriever endpoint: COLBERT_V2_ENDPOINT = "http://20.102.90.50...
Torchtune ▷ #dev (4 messages):
Gemini 2.5 Pro, AI Model Pricing, MMLU-Pro, GPQA Diamond, Humanity’s Last Exam
- Gemini 2.5 Pro Dominates Benchmarks: Google's Gemini 2.5 Pro Experimental model has taken the #1 position across several evaluations, showcasing impressive performance in reasoning and achieving all-time high scores in MMLU-Pro, GPQA Diamond, and AIME 2024 according to this tweet.
- The model scored 86% on MMLU-Pro, 83% on GPQA Diamond, and 88% on AIME 2024.
- Gemini 2.5 Pro Offers Competitive Pricing: If priced similarly to Gemini 1.5 Pro at $1.25/$5 per million input/output tokens, Gemini 2.5 Pro could be significantly cheaper than leading models from OpenAI and Anthropic, as pointed out in this tweet.
- The tweet noted that OpenAI's o1 costs $15/$60, and Anthropic's Claude 3.7 Sonnet costs $3/$15.
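The comparison reduces to simple arithmetic over the quoted per-million-token prices; note the Gemini 2.5 Pro figure is the tweet's assumption that it will match Gemini 1.5 Pro pricing, not an announced price:

```python
# (input, output) price in USD per million tokens, from the figures above.
prices = {
    "gemini-2.5-pro (assumed)": (1.25, 5.00),
    "openai o1": (15.00, 60.00),
    "claude-3.7-sonnet": (3.00, 15.00),
}

def cost(model, input_tokens, output_tokens):
    pin, pout = prices[model]
    return (input_tokens * pin + output_tokens * pout) / 1_000_000

# Example job: 2M input tokens, 0.5M output tokens.
for m in prices:
    print(f"{m}: ${cost(m, 2_000_000, 500_000):.2f}")
```

On that example workload the assumed Gemini pricing comes to $5.00 versus $60.00 for o1 and $13.50 for Claude 3.7 Sonnet — a 12x and 2.7x gap respectively.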
- Gemini 2.5 Pro Exhibits Speed and Context Window: Gemini 2.5 Pro achieves a speed of 195 output tokens/s, faster than Gemini 1.5 Pro's 92 tokens/s, and supports a 1 million token context window (with a 2 million token context window coming soon), according to this tweet.
- The model also supports multimodal inputs, including image, video, and audio, though it currently offers text output only.
Link mentioned: Tweet from Artificial Analysis (@ArtificialAnlys): Google’s new Gemini 2.5 Pro Experimental takes the #1 position across a range of our evaluations that we have run independentlyGemini 2.5 Pro is a reasoning model, it ‘thinks’ before answering questio...
LLM Agents (Berkeley MOOC) ▷ #hackathon-announcements (1 messages):
AgentX Competition, Registration Deadline, Entrepreneurship Track, Research Track, Prizes and Resources
- AgentX Registration Deadline Looms!: Registration and team signups for the AgentX Competition are closing soon on March 30, with participants urged to sign up via the official website.
- Entrepreneurship Track Sign-Up: The Entrepreneurship Track is designed for projects/companies with existing progress, requiring signup through a specific form.
- Research Track: A Sign-Up Opportunity: The Research Track invites researchers/academics to sign up via a dedicated form to participate in the AgentX Competition.
- AgentX Competition Prizes: Participants gain access to exclusive resources like API/GPU credits and exciting prizes from sponsors such as Amazon, Google, Groq, Hugging Face, Lambda Labs, Mistral, and Schmidt Sciences as described on the AgentX website.
- AgentX: AgentX is hosted by RDI at UC Berkeley.
- AgentX Competition Startup Signup Form - Entrepreneurship Track: IMPORTANT NOTE: The Entrepreneurship Track is designed for projects/companies that have already made some progress and/or demonstrated some traction in the startup journey. Ideally, you’ve begun build...
- AgentX Competition Team Signup Form - Research Track: Please join the Agent X discord for more discussions about the competition, including finding potential teammates if you are interested. Please see Advanced LLM Agents MOOC for more info about the ass...
LLM Agents (Berkeley MOOC) ▷ #mooc-questions (2 messages):
Lecture Recording, MOOC sign up
- Lecture Recordings are shareable: A member asked if they could share the recording of a lecture with others.
- A moderator responded that it was absolutely no problem.
- Encourage new MOOC signups: A moderator reminded members that if they share lecture recordings, they should encourage those who are interested to sign up for the MOOC.
- This will allow new members to participate fully in the course.
Nomic.ai (GPT4All) ▷ #general (3 messages):
Verso Industries, AI-Powered Twin-Screw Extruder Model, OpenAI-API compatible
- Verso Industries releases AI-Powered Twin-Screw Extruder Model: Verso Industries, led by Founder and CEO Michael Zimmerman, has developed an AI-powered twin-screw extruder design model that delivers optimized mechanical specifications and professional-grade CAD models in seconds.
- Seeking Nomic Integration Strategies: A member inquired how Nomic could integrate with their AI-powered twin-screw extruder design model, suggesting they could expose API endpoints.
- OpenAI-API compatibility for Verso Industries: A member suggested making the Verso Industries API OpenAI-API compatible to facilitate integration, citing it as an unofficial standard.
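"OpenAI-API compatible" in practice means accepting a chat-completions-style request body and returning a response body with the fields an OpenAI client expects. A sketch of that shape, with design_extruder as a hypothetical stand-in for Verso's actual model:

```python
import json
import time
import uuid

def design_extruder(prompt: str) -> str:
    # Hypothetical stub for the proprietary extruder-design model.
    return f"Optimized twin-screw spec for: {prompt}"

def chat_completions(request_body: str) -> str:
    """Handle a POST /v1/chat/completions body and return a JSON response."""
    req = json.loads(request_body)
    user_msg = next(m["content"] for m in req["messages"] if m["role"] == "user")
    resp = {
        "id": f"chatcmpl-{uuid.uuid4().hex[:12]}",
        "object": "chat.completion",
        "created": int(time.time()),
        "model": req.get("model", "verso-extruder-v1"),
        "choices": [{
            "index": 0,
            "message": {"role": "assistant",
                        "content": design_extruder(user_msg)},
            "finish_reason": "stop",
        }],
    }
    return json.dumps(resp)

body = json.dumps({"model": "verso-extruder-v1",
                   "messages": [{"role": "user",
                                 "content": "food-grade extruder, 500 kg/h"}]})
out = json.loads(chat_completions(body))
print(out["choices"][0]["message"]["content"])
```

Matching this shape is what lets any OpenAI SDK talk to the service just by overriding its base URL, which is why the member called it an unofficial standard.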
Link mentioned: Verso Industries - Elevating American Industries Through Unified Digital Transformation: no description found
tinygrad (George Hotz) ▷ #general (1 messages):
CleanRL, TinyGrad, RL trainer
- CleanRL-Style RL Trainer Build: A member is developing a CleanRL-style RL trainer using TinyGrad and is seeking collaborators.
- They are relatively new to TinyGrad and working through development kinks.
- Collaboration Opportunity: The developer is looking for individuals with experience in RL and TinyGrad to join the project.
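The "CleanRL style" the project is after means a single flat script with environment, policy, and update rule inline. A stdlib-only toy of that structure — REINFORCE on a 2-armed bandit with a hand-written gradient; a TinyGrad version would swap the manual update for Tensor autograd:

```python
import math
import random

random.seed(0)
theta = [0.0, 0.0]       # policy logits over 2 arms
rewards = [0.1, 1.0]     # arm 1 pays better
lr = 0.5

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

for step in range(500):
    probs = softmax(theta)
    arm = random.choices([0, 1], weights=probs)[0]
    r = rewards[arm]
    # REINFORCE: grad of log pi(arm) w.r.t. logits = one_hot(arm) - probs
    for i in range(2):
        grad = (1.0 if i == arm else 0.0) - probs[i]
        theta[i] += lr * r * grad

print(softmax(theta))  # probability mass concentrates on arm 1
```

Keeping everything in one file like this is what makes CleanRL-style trainers easy to read and to port between autograd backends.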