[AINews] not much happened today
This is AI News! an MVP of a service that goes thru all AI discords/Twitters/reddits and summarizes what people are talking about, so that you can keep up without the fatigue. Signing up here opts you in to the real thing when we launch it 🔜
Huawei chips are all you need?
AI News for 1/27/2025-1/28/2025. We checked 7 subreddits, 433 Twitters and 34 Discords (225 channels, and 6553 messages) for you. Estimated reading time saved (at 200wpm): 656 minutes. You can now tag @smol_ai for AINews discussions!
no title story but a bunch of small ones
- NVDA bounced ~8% from yesterday's rout
- new open music foundation models (aka "Local Suno")
- Qwen 2.5 Max competitive with DeepSeek v3
- Vercel AI SDK supports the Anthropic Building Effective Agents patterns.
- Open source dataset for reasoning, from the Bespoke Labs team (our coverage here): https://github.com/open-thoughts/open-thoughts
Table of Contents
- AI Twitter Recap
- AI Reddit Recap
- AI Discord Recap
- PART 1: High level Discord summaries
- Unsloth AI (Daniel Han) Discord
- Perplexity AI Discord
- aider (Paul Gauthier) Discord
- Cursor IDE Discord
- OpenAI Discord
- Nous Research AI Discord
- LM Studio Discord
- Yannick Kilcher Discord
- Codeium (Windsurf) Discord
- OpenRouter (Alex Atallah) Discord
- Eleuther Discord
- Interconnects (Nathan Lambert) Discord
- Stackblitz (Bolt.new) Discord
- Stability.ai (Stable Diffusion) Discord
- MCP (Glama) Discord
- Latent Space Discord
- Notebook LM Discord Discord
- GPU MODE Discord
- LLM Agents (Berkeley MOOC) Discord
- Nomic.ai (GPT4All) Discord
- Torchtune Discord
- LlamaIndex Discord
- Modular (Mojo 🔥) Discord
- tinygrad (George Hotz) Discord
- Cohere Discord
- LAION Discord
- Axolotl AI Discord
- OpenInterpreter Discord
- Gorilla LLM (Berkeley Function Calling) Discord
- DSPy Discord
- MLOps @Chipro Discord
- Mozilla AI Discord
- PART 2: Detailed by-Channel summaries and links
- Unsloth AI (Daniel Han) ▷ #general (1010 messages🔥🔥🔥):
- Unsloth AI (Daniel Han) ▷ #off-topic (29 messages🔥):
- Unsloth AI (Daniel Han) ▷ #help (201 messages🔥🔥):
- Unsloth AI (Daniel Han) ▷ #showcase (1 messages):
- Unsloth AI (Daniel Han) ▷ #research (13 messages🔥):
- Perplexity AI ▷ #general (624 messages🔥🔥🔥):
- Perplexity AI ▷ #sharing (17 messages🔥):
- Perplexity AI ▷ #pplx-api (1 messages):
- aider (Paul Gauthier) ▷ #general (401 messages🔥🔥):
- aider (Paul Gauthier) ▷ #questions-and-tips (128 messages🔥🔥):
- Cursor IDE ▷ #general (517 messages🔥🔥🔥):
- OpenAI ▷ #ai-discussions (466 messages🔥🔥🔥):
- OpenAI ▷ #gpt-4-discussions (2 messages):
- OpenAI ▷ #prompt-engineering (21 messages🔥):
- OpenAI ▷ #api-discussions (21 messages🔥):
- Nous Research AI ▷ #general (496 messages🔥🔥🔥):
- Nous Research AI ▷ #ask-about-llms (4 messages):
- Nous Research AI ▷ #interesting-links (7 messages):
- LM Studio ▷ #general (308 messages🔥🔥):
- LM Studio ▷ #hardware-discussion (96 messages🔥🔥):
- Yannick Kilcher ▷ #general (282 messages🔥🔥):
- Yannick Kilcher ▷ #paper-discussion (47 messages🔥):
- Yannick Kilcher ▷ #ml-news (24 messages🔥):
- Codeium (Windsurf) ▷ #discussion (103 messages🔥🔥):
- Codeium (Windsurf) ▷ #windsurf (189 messages🔥🔥):
- OpenRouter (Alex Atallah) ▷ #announcements (2 messages):
- OpenRouter (Alex Atallah) ▷ #general (255 messages🔥🔥):
- Eleuther ▷ #general (79 messages🔥🔥):
- Eleuther ▷ #research (112 messages🔥🔥):
- Eleuther ▷ #scaling-laws (2 messages):
- Eleuther ▷ #lm-thunderdome (5 messages):
- Eleuther ▷ #multimodal-general (1 messages):
- Interconnects (Nathan Lambert) ▷ #news (59 messages🔥🔥):
- Interconnects (Nathan Lambert) ▷ #ml-questions (5 messages):
- Interconnects (Nathan Lambert) ▷ #ml-drama (18 messages🔥):
- Interconnects (Nathan Lambert) ▷ #random (36 messages🔥):
- Interconnects (Nathan Lambert) ▷ #memes (10 messages🔥):
- Interconnects (Nathan Lambert) ▷ #rl (5 messages):
- Interconnects (Nathan Lambert) ▷ #cv (5 messages):
- Interconnects (Nathan Lambert) ▷ #reads (13 messages🔥):
- Interconnects (Nathan Lambert) ▷ #posts (40 messages🔥):
- Stackblitz (Bolt.new) ▷ #announcements (1 messages):
- Stackblitz (Bolt.new) ▷ #prompting (4 messages):
- Stackblitz (Bolt.new) ▷ #discussions (135 messages🔥🔥):
- Stability.ai (Stable Diffusion) ▷ #general-chat (138 messages🔥🔥):
- MCP (Glama) ▷ #general (112 messages🔥🔥):
- Latent Space ▷ #ai-general-chat (96 messages🔥🔥):
- Notebook LM Discord ▷ #announcements (1 messages):
- Notebook LM Discord ▷ #use-cases (17 messages🔥):
- Notebook LM Discord ▷ #general (77 messages🔥🔥):
- GPU MODE ▷ #general (22 messages🔥):
- GPU MODE ▷ #cuda (16 messages🔥):
- GPU MODE ▷ #torch (9 messages🔥):
- GPU MODE ▷ #cool-links (2 messages):
- GPU MODE ▷ #bitnet (1 messages):
- GPU MODE ▷ #thunderkittens (1 messages):
- GPU MODE ▷ #arc-agi-2 (38 messages🔥):
- LLM Agents (Berkeley MOOC) ▷ #mooc-questions (60 messages🔥🔥):
- LLM Agents (Berkeley MOOC) ▷ #mooc-lecture-discussion (12 messages🔥):
- Nomic.ai (GPT4All) ▷ #general (42 messages🔥):
- Torchtune ▷ #dev (31 messages🔥):
- Torchtune ▷ #papers (1 messages):
- LlamaIndex ▷ #blog (2 messages):
- LlamaIndex ▷ #general (18 messages🔥):
- Modular (Mojo 🔥) ▷ #general (6 messages):
- Modular (Mojo 🔥) ▷ #announcements (1 messages):
- Modular (Mojo 🔥) ▷ #mojo (10 messages🔥):
- tinygrad (George Hotz) ▷ #general (6 messages):
- tinygrad (George Hotz) ▷ #learn-tinygrad (10 messages🔥):
- Cohere ▷ #discussions (2 messages):
- Cohere ▷ #api-discussions (8 messages🔥):
- LAION ▷ #general (6 messages):
- LAION ▷ #research (1 messages):
- Axolotl AI ▷ #general (4 messages):
- OpenInterpreter ▷ #general (1 messages):
- OpenInterpreter ▷ #O1 (2 messages):
- Gorilla LLM (Berkeley Function Calling) ▷ #leaderboard (2 messages):
- DSPy ▷ #general (1 messages):
- MLOps @Chipro ▷ #events (1 messages):
- Mozilla AI ▷ #announcements (1 messages):
AI Twitter Recap
all recaps done by Claude 3.5 Sonnet, best of 4 runs.
AI Model Developments and Comparisons
- Deepseek R1 vs. OpenAI Models: @saranormous and @zizhpan discuss Deepseek R1's capabilities and its comparison with models like GPT-4 and Qwen 2.5. Additionally, @victormustar highlights the addition of Qwen 2.5 models to various applications, stressing user feedback mechanisms.
- Qwen2.5 and Qwen2.5-Max Enhancements: @omarsar0 announces the release of Qwen2.5-Max, a Mixture of Experts (MoE) model, which surpasses Deepseek V3 in benchmarks such as Arena Hard and LiveBench. @markchen90 further emphasizes the competitive edge of Qwen2.5-Max over Deepseek V3, advocating for open-sourcing initiatives.
- Innovations in AI Image Generation: @SakanaAILabs shares the acceptance of their paper on Evolutionary Optimization of Model Merging Recipes, showcasing advancements in model merging. Meanwhile, @reach_vb highlights the release of DeepSeek Janus Pro, a multimodal LLM capable of image outputs, comparing it to traditional Text to Image models.
Reinforcement Learning and Reasoning
- Advancements in Reinforcement Learning (RL): @madiator discusses the introduction of Open Thoughts, aiming to enhance reasoning datasets vital for models like Deepseek R1. @dain_mclau touches upon policy optimization techniques in RL, emphasizing the complexity and iterative nature of Reinforcement Learning.
- Chain-of-Thought (CoT) Enhancements: @omarsar0 explores the emergence of cognitive strategies in LLMs, suggesting that models like Deepseek R1 are beginning to exhibit human-like problem-solving behaviors. Concurrently, @francoisfleuret critiques the diminishing relevance of RL terminology amidst evolving methodologies.
AI Infrastructure and Compute
- GPU and Compute Optimization: @garygodchaux reports on NVIDIA's H6400 GPUs rebranded from Intel Arc B580s, highlighting tensions with Deepseek R1 impacting NVIDIA's stock. @arankomatsuzaki comments on the compute demands of Deepseek R1, noting the efficiency challenges faced by hardware providers.
- Data Center Innovations: @ID_AA_Carmack emphasizes the role of data centers as AI real estate, predicting exponential growth in compute infrastructure to support advanced AI models. @LavanyaSant discusses the integration of multi-head tensorisation and Tucker decomposition in DeepSeek's infrastructure, achieving significant compression rates.
AI in Enterprises and Applications
- Enterprise AI Solutions: @virattt introduces a crypto API integrated into AI hedge funds, while @jerryjliu0 explores building LLM-based applications capable of handling long documents using hybrid architectures.
- AI-Driven Productivity Tools: @SahanaAI showcases the use of DeepSeek R1 in Perplexity Pro search, enhancing research capabilities with agentic document workflows. Additionally, @elicitorg critiques DeepSeek's alignment with Chinese narratives, advocating for truth-seeking objectives in AI deployments.
Open-source AI and API Integrations
- Hugging Face and API Integrations: @togethercompute announces the ability to run inference directly on Hugging Face model pages, powered by Together AI. @langchainai highlights the integration of DeepSeek R1 with LangChain, enabling local deployment and API-based access (a minimal sketch follows this list).
- Open-source Contributions: @madiator releases the OpenThoughts-114k reasoning dataset and the OpenThinker-7B model, emphasizing the importance of open data for advancing reasoning capabilities. @cremieuxrecueil praises the open-source nature of DeepSeek R1, ensuring data privacy by allowing self-hosted deployments.
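For the curious, wiring a locally served R1 distill into LangChain is only a few lines. A minimal sketch, assuming an Ollama server, the `langchain-ollama` package, and a `deepseek-r1:7b` model tag (all worth verifying against current docs):

```python
# Hypothetical sketch: calling a locally served DeepSeek R1 distill from LangChain.
# Assumes `pip install langchain-ollama` and `ollama pull deepseek-r1:7b`;
# the model tag is illustrative and may differ.
from langchain_ollama import ChatOllama

llm = ChatOllama(model="deepseek-r1:7b", temperature=0.6)

# R1-style models emit their chain of thought before the final answer,
# so the returned content may include a <think>...</think> preamble.
response = llm.invoke("What is 17 * 24? Answer concisely.")
print(response.content)
```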
AI Reddit Recap
/r/LocalLlama Recap
Theme 1. DeepSeek-R1 Runs Inference on Huawei's 910C Chips
- DeepSeek is running inference on the new home Chinese chips made by Huawei, the 910C (Score: 291, Comments: 85): DeepSeek is conducting inference on Huawei's 910C chips after training on Nvidia H800, highlighting a significant shift to Chinese-made hardware. The deployment is part of Huawei Cloud's ModelArts Studio using the Ascend-Adapted New Model, with models like DeepSeek-R1-Distill, Qwen-14B, Qwen-32B, and Llama-8B already launched, and more models expected soon.
- Discussion highlights skepticism about the Huawei 910C chips and their performance, with some suggesting they are slow and have poor software support. DonDonburi mentions that while the 910C may not be impressive, the next generation might offer more competition, and Billy462 emphasizes the significance of running inference on homegrown chips.
- RouteGuru comments on the geopolitical implications of chip smuggling due to DoD restrictions, while Glad-Conversation377 points out that China has long had its own GPU manufacturers like Cambricon Technologies and Moore Threads, though they haven't made significant market impact yet.
- The conversation touches on the practicality and feasibility of running large models at home, with piggledy and zipzag discussing the potential for running 70B models on consumer hardware like the Mac Mini M4 Pro. Recoil42 and piggledy also express skepticism about claims regarding DeepSeek's inference capabilities on the 910C.
- No censorship when running Deepseek locally. (Score: 105, Comments: 40): The discussion centers on running DeepSeek locally on Huawei hardware without censorship, as demonstrated by a command prompt screenshot. The text explores the Tiananmen Square Massacre, addressing international reactions, the crackdown of June 1989, and its casualties, along with the Chinese government's censorship and the event's enduring impact on global discussions about authoritarianism and democracy.
- Many users discussed the differences between DeepSeek models, noting that the distilled versions like "deepseek-ai.deepseek-r1-distill-qwen-32b" and "qwen 2.5 r1 distill 7b" are not the same as the original DeepSeek R1 model. Distilled models often exhibit censorship, particularly on controversial topics like the Tiananmen Square Massacre.
- Some users shared their experiences running different models locally. Caladan23 noted that using the full DeepSeek model with 6_K_M GGUF through Llama.cpp resulted in a censored response, while aurath found that the censorship occurs on the web interface rather than the API itself when using DeepSeek V3 via Openrouter.
- EffectiveEngine2751 emphasized that the DeepSeek model from Ollama is a distilled version, not the same as the original DeepSeek R1, and linked to the original model on Hugging Face. They highlighted that the distilled versions are based on Qwen 1.5B, which may inherently include some level of censorship.
- Trump to impose 25% to 100% tariffs on Taiwan-made chips, impacting TSMC (Score: 1561, Comments: 607): DeepSeek's decision to switch to Asian hardware aligns with Trump's proposed tariffs of 25% to 100% on Taiwan-made chips, which could significantly impact TSMC. This shift may affect the global semiconductor supply chain and influence hardware sourcing strategies for AI companies.
- Many commenters criticize Trump's tariff plan on Taiwan-made chips, arguing it will increase consumer costs and damage the US semiconductor industry. They highlight that the US lacks the infrastructure and expertise to compete with TSMC, which produces 70% of the world's high-end chips, and that these tariffs could drive companies to shift operations to Canada or other countries.
- Some view the tariffs as a negotiation tactic, with Trump using them to extract concessions from Taiwan, though many doubt its effectiveness given Taiwan's leverage in the chip market. Commenters suggest that incentives for domestic production, like those in Biden's CHIPS Act, would be a more effective strategy than imposing tariffs.
- Concerns are raised about the broader implications for the US's global standing and AI industry, with comments noting that tariffs could set back AI progress by 5-10 years. The tariffs could also damage strategic alliances and inadvertently boost China's semiconductor industry.
Theme 2. DeepSeek-R1: Efficient Training Costs Explored
- How can we be so sure the training of Deepseek R1 is around $6 million? (Score: 141, Comments: 124): The post raises questions about the $6 million cost estimate for training DeepSeek-R1, referencing claims by Alex Wang that DeepSeek has at least 50,000 H100 GPUs. It suggests that the NVDA price drop might be influenced by the parent company's quant fund, speculating on the involvement of Chinese companies and the potential financial strategies behind these market movements.
- Training Cost and Licensing: Discussions highlighted the MIT License of DeepSeek, allowing companies to use and train the model freely, overshadowing the $6 million training cost. The open-source nature enables users to run the model on personal setups, making the cost less significant for individual use.
- Technical Validation and Cost Analysis: Vincentz42 provided a detailed analysis comparing training times and costs with other models like Llama 3, concluding that the $6 million cost is plausible for a single run, excluding additional expenses like salaries and failed runs. The analysis used known data on H100 rental costs and parameter activations to support the cost estimate.
- Infrastructure and Financial Strategy: There is skepticism about the financial strategies behind the cost, with some suggesting that DeepSeek's parent company might leverage existing infrastructure, potentially reducing explicit costs. Accurate_Painting pointed out that the company could use its infrastructure without incurring real losses, while others questioned the influence of NVIDIA's market movements on the financial outcomes.
- Trump says deepseek is a very good thing (Score: 348, Comments: 151): The post titled "Trump says deepseek is a very good thing" lacks a detailed body but suggests a positive endorsement of DeepSeek by Trump. The absence of specific content limits further technical insights or context regarding the DeepSeek technology.
- Many commenters express surprise at Trump's endorsement of DeepSeek, with several agreeing with him, which they did not expect. DeepSeek is praised for its open-source nature and the potential for democratizing AI by reducing costs associated with large GPU clusters, as noted by psaience and Delicious-Farmer-234.
- Discussions highlight the potential impact of DeepSeek on AI development, emphasizing that it demonstrates that state-of-the-art models can be built without billion-dollar budgets. This could lead to increased competition and innovation among smaller players in the AI community.
- There is skepticism and humorous remarks about Trump's statement, with some questioning the authenticity of his voice and suggesting it sounded AI-generated. The discussion also touches on broader geopolitical implications, like tariffs and international tech competition, with concerns about Intel and TSMC mentioned by Jaxraged and others.
Theme 3. DeepSeek Censorship: A Comparative Analysis
- Deepseek censorship is more tolerable than Western censorship (Score: 128, Comments: 102): DeepSeek is perceived by the author as handling "sensitive topics" more effectively than state-of-the-art (SOTA) models developed in the U.S. The author dismisses concerns about DeepSeek's alleged connection to CCP and state-sponsored censorship, arguing that such factors do not impact their experience.
- Censorship and Propaganda Concerns: Discussions highlight concerns about DeepSeek's alignment with Chinese government views, with users noting it sometimes debates how to align with these views, potentially gaslighting users about the government. Some argue that while censorship is a common issue, the model's reasoning to propagate Chinese propaganda is more concerning.
- Definitions and Perceptions of "Woke": There is a debate over the definition and application of the term "woke," with some users struggling to define it clearly and others associating it with models refusing to make racist jokes or present discriminatory viewpoints. The term is often used in a derogatory context without a clear, consistent definition.
- Model Censorship Experiences: Users express frustration with OpenAI and Anthropic models' censorship, sharing examples of blocked requests or moralistic responses. Some users prefer alternative models like DeepSeek for fewer restrictions, despite its origins, while others highlight Gemini's inconsistencies in handling technical queries.
- DeepSeek R1 Overthinker: force r1 models to think for as long as you wish (Score: 133, Comments: 29): The post discusses DeepSeek R1 Overthinker, a tool that allows users to control the duration for which R1 models process information, potentially affecting their performance and decision-making. The focus is on comparing censorship differences between local and cloud-based implementations of DeepSeek, although specific details are not provided in the text.
- DeepSeek R1 Overthinker is a free chatbot app that uses `<think>` tokens to extend R1 models' reasoning processes by intercepting and continuing their thought chains. Users can set a minimum token count, making the model think for extended periods, potentially improving reasoning capabilities, with models ranging from 1.5B to 70B parameters available on GitHub (a sketch of the interception trick follows this list).
- OpenAI's o3 model on the arc agi benchmark is compared to DeepSeek's approach, with a user noting marginal improvements despite 170x more compute. This highlights the potential computational demands and efficiency considerations in extending model reasoning.
- Users humorously speculate about the potential of extended reasoning, with a suggestion that a model thinking for 12 months could solve world hunger, illustrating both the ambition and satire in expectations of AI reasoning capabilities.
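The thread doesn't spell out how the forcing works. A common approach, and the assumption in this sketch, is to stop generation whenever the model tries to close its `<think>` block, then re-prompt with a continuation cue until a minimum reasoning budget is spent. Endpoint and model name below are placeholders for any OpenAI-compatible server:

```python
# Hypothetical sketch of "overthink" forcing: stop generation whenever the
# model tries to close its <think> block, and re-prompt with a continuation
# cue until a minimum reasoning budget is spent. Endpoint/model are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")
MIN_THINK_WORDS = 400  # crude stand-in for a token budget

prompt = "How many primes are there below 100?"
thought = ""
for _ in range(8):  # safety cap on forcing rounds
    resp = client.chat.completions.create(
        model="deepseek-r1:7b",
        messages=[{"role": "user", "content": f"{prompt}\n<think>\n{thought}"}],
        stop=["</think>"],  # intercept the model's attempt to stop thinking
        max_tokens=512,
    )
    thought += resp.choices[0].message.content or ""
    if len(thought.split()) >= MIN_THINK_WORDS:
        break
    thought += "\nWait, "  # continuation cue: keep the chain of thought going
print(thought)
```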
Theme 4. Janus Pro 1B: In-browser Multimodal AI Innovation
- Janus Pro 1B running 100% locally in-browser on WebGPU, powered by Transformers.js (Score: 276, Comments: 45): Janus Pro 1B operates entirely locally within a browser environment using WebGPU, facilitated by Transformers.js. This setup allows for in-browser execution without the need for server-side processing.
- Janus Pro 1B is recognized for its multimodal capabilities, unlike Midjourney (MJ), which is not state-of-the-art (SOTA) for image generation. Janus Pro can perform tasks like Optical Character Recognition (OCR), as demonstrated in the LaTeX example, enhancing its utility beyond image generation.
- DeepSeek recently released Janus Pro (1B & 7B), which supports visual understanding and image generation, running locally in browsers via Transformers.js and WebGPU. Key resources include an online demo, ONNX model, and source code.
- Users express interest in the model's performance and capabilities, like running on CPU RAM alone and generating images with specific content, although some experiences, such as generating a greeting image, have been mixed. Interest is also shown in the potential development of a 7B version.
- JanusPro 1B generating images on 2GB VRAM laptop (Score: 103, Comments: 20): The Janus Pro 1B model can generate images locally on a laptop with 2GB VRAM, but the process takes almost 5 minutes and yields suboptimal results. Despite the quality, the user appreciates the ability to perform deep learning tasks in-browser on limited hardware.
- Users discuss the capabilities of Janus Pro 1B on low VRAM setups, with some suggesting it can generate animations using Hunyuan and others highlighting the importance of sufficient RAM, such as 16 GB, when running on 2GB VRAM laptops.
- Deepseek is mentioned as a tool providing impressive results, while another user expresses interest in the model's ability to parse images for potential applications in robotics with Raspberry Pi.
- Concerns about the model's quality are raised, with comparisons made to StableDiffusion and mentions of distilled Flux models that can operate on 2GB VRAM but still produce better outputs.
- Now I can finally learn to code with some softcore spunk (Score: 160, Comments: 48): The post describes a playful interaction with the DeepSeek API integrated into a tkinter GUI. The author sets the API's content to "horny maid" with a temperature of 2.0 and shares a scripted role-play scenario involving a maid character that humorously transitions into solving a coding problem, specifically the "candy distribution" problem, showcasing the API's versatility in both playful and technical tasks.
- Discussion humorously explores the combination of business and pleasure in AI applications, with comments noting the DeepSeek API's playful yet technical capabilities. Users joke about the future of AI, imagining scenarios where AI acts as flirtatious personal assistants and problem solvers simultaneously.
- Technical inquiries about the prompt settings reveal curiosity about how to set content and temperature variables for AI behavior, with some users sharing their experiences with similar APIs and noting DeepSeek's current reliability issues.
- The community reflects on the potential implications of such AI developments, suggesting that future LLMs may be trained on similar whimsical and diverse prompts, and humorously referencing the concept of a "GPT Maid DLC".
Other AI Subreddit Recap
/r/Singularity, /r/Oobabooga, /r/MachineLearning, /r/OpenAI, /r/ClaudeAI, /r/StableDiffusion, /r/ChatGPT
Theme 1. DeepSeek R1 Challenges OpenAI’s Reinforcement Learning Dominance
- Sam Altman comments on DeepSeek R1 (Score: 944, Comments: 303): Sam Altman praises the DeepSeek R1 model for its impressive performance and cost-effectiveness, highlighting the importance of competition and execution of research roadmaps in the AI field. He anticipates future advancements in artificial general intelligence (AGI) and emphasizes the growing demand for advanced AI technologies.
- DeepSeek's Approach: DeepSeek R1 is praised for its fundamental breakthroughs in reinforcement learning, diverging from traditional supervised learning. Commenters emphasize that this represents a significant shift in AI development, suggesting that such innovations could drive future advancements in LLMs without the need for exponentially larger computing power.
- OpenAI's Position and Challenges: There is skepticism about OpenAI's reliance on increased compute power, with some suggesting that DeepSeek's success may challenge OpenAI's strategy and potentially impact its funding. Commenters express a belief that open-source models like DeepSeek could fulfill a large portion of corporate needs, posing a threat to proprietary models.
- Industry Dynamics and Competition: The discussion reflects a broader sentiment that competition, particularly from unexpected players like DeepSeek, is beneficial for innovation in AI. Several comments highlight the entertainment value of the ongoing "AI wars" and suggest that such rivalry could lead to reduced costs, such as lower OpenAI API prices.
- This probably explains why the general public was shocked by Deepseek (Score: 139, Comments: 73): Tanishq Mathew Abraham, Ph.D. attributes the public's shock over Deepseek to their limited experience with AI models like ChatGPT 4 on free plans, leading to misconceptions about AI advancements. He highlights the disparity in perception between Chinese and American AI models, with the tweet dated January 27, 2025, having 12K views.
- Deepseek's Advantages: Deepseek is praised for its superior reasoning performance and internet search capabilities, making it more useful than o1. There is anticipation for o3, with discussions suggesting that OpenAI should offer o1 for free to compete effectively.
- Data Sharing Concerns: Users express skepticism about Deepseek's development costs and the involvement of the CCP, with concerns about sharing data with Chinese entities. Some argue that sharing data with the US is equally concerning, and emphasize the importance of using LLMs without sharing sensitive information.
- Economic and Accessibility Factors: The availability of Deepseek and models like R1 for free is a significant factor, as many are unwilling to pay for non-free models from OpenAI. The discussion highlights the economic feasibility of using Deepseek locally compared to paying for ChatGPT services.
Theme 2. DeepSeek R1 Censorship Sparking Debates on Bias
- DeepSeek censorship: 1984 "rectifying" in real time (Score: 420, Comments: 148): DeepSeek censorship is compared to the concept of "rectifying" from George Orwell's 1984, implying real-time alteration or control of information. The post lacks detailed content, but suggests concerns about censorship and information manipulation.
- Censorship and Open Source: While DeepSeek exhibits built-in censorship, users note that the model is open-sourced, allowing for uncensored versions to be created. Some users argue that the censorship is not embedded in the model itself but is an overlay, which can be bypassed by running it locally or customizing it.
- Comparison with Other Models: Discussions highlight that censorship is not unique to DeepSeek, with models like Gemini and ChatGPT also engaging in content moderation, though often in more subtle ways. This raises concerns about the transparency and honesty of AI models in presenting information, especially regarding sensitive topics like Uighurs and other geopolitical issues.
- Market Dynamics and Nationalism: There is a debate about the impact of DeepSeek and similar models on the AI market, with some suggesting that competition from Chinese models could push Western companies to offer more capabilities at lower costs. Additionally, the conversation touches on how technology is intertwined with nationalism, with some expressing skepticism about the US tech sector's ability to compete without government intervention.
- "I need to make sure not to deviate from the script..." (Score: 253, Comments: 80): The post discusses a hypothetical scenario involving Taiwan's independence and stresses the importance of following official guidelines and the One-China Principle. It underscores the need for precise language to prevent misunderstandings and maintain a consistent position on this sensitive issue.
- Many commenters express admiration for the AI's reasoning capabilities, noting its human-like depth and transparency. Agreeable_Service407 and Palpable_Sense highlight its potential to pass the Turing test and the effort put into its filtering mechanisms, while miko_top_bloke appreciates the visibility into its reasoning process.
- Reedmayhew18 shares a personal experience with DeepSeek R1, noting the AI's admission of censorship in military contexts, and provides a link to a detailed account of this encounter. This aligns with broader discussions about AI censorship and the implications of such programmed limitations.
- Some commenters, like EljayDude and idubyai, discuss the implications of using biased AI models, emphasizing the importance of understanding these biases and the technological underpinnings of such systems. EljayDude finds the mechanics of censorship interesting, despite it reducing their likelihood of using the model.
Theme 3. Government Integration: OpenAI's ChatGPT Gov Announcement
- OpenAI announces ChatGPT Gov (Score: 233, Comments: 109): OpenAI has announced ChatGPT Gov, a version of ChatGPT specifically designed for government agencies, allowing them to operate within their Microsoft Azure environments. The initiative aims to support the public sector, especially the U.S. Federal government, in enhancing national security and tackling complex challenges.
- Some users express skepticism about ChatGPT Gov, with concerns about potential propaganda and political influence, particularly regarding OpenAI's connections to the Trump administration. The sentiment is that OpenAI's actions may be perceived as pandering to political interests.
- There are discussions about the technical aspects and similarities to existing services, such as Microsoft's Azure offering GPT-4 and GPT-3.5-turbo without internet access for government use. This highlights the ongoing trend of integrating AI into government infrastructures.
- The conversation includes a comparison of different government approaches to AI, with mentions of the Canadian government's decision to develop its own LLM for security reasons, contrasting with the US tendency to collaborate with private tech companies.
Theme 4. DeepSeek Training Cost Controversy: $6 Million Claim Dissected
- How do we know deepseek only took $6 million? (Score: 386, Comments: 242): Deepseek claims to have been trained with $6 million, but there is skepticism about the veracity of this figure. The post questions the transparency and reliability of such claims without providing specific evidence or references to substantiate the stated training cost.
- DeepSeek's Claimed Costs: The $6 million figure refers specifically to the estimated GPU rental costs for training the final version of the model, not the total budget, as clarified by commenters. Detailed computations by vhu9644 show that the training involved approximately 2.788 million GPU hours, with costs approximating $5.576 million for GPU rentals alone (the arithmetic is sketched after this list).
- Model Transparency and Verification: The model is open source, allowing others to verify claims by testing the methods outlined in the paper. vhu9644 provides a comprehensive breakdown of the model's parameters and training requirements, emphasizing that the paper is available for free and can be independently assessed by academic labs.
- Comparison with Other Models: The training methodology and costs of DeepSeek are compared with models like Meta's Llama 3.1, suggesting that DeepSeek's approach and costs are not unreasonable. The discussion highlights the importance of differentiating between the costs of GPU rentals and the broader infrastructure and development expenses.
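The quoted figures are internally consistent; a quick back-of-envelope check, assuming the ~$2/GPU-hour H800 rental rate used in the thread's analysis:

```python
# Quick check of the figures cited above: ~2.788M H800 GPU-hours at an
# assumed $2/GPU-hour rental rate.
gpu_hours = 2.788e6
rate_usd_per_hour = 2.00  # assumption taken from the thread's analysis
print(f"${gpu_hours * rate_usd_per_hour:,.0f}")  # -> $5,576,000 (~$5.576M)
```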
AI Discord Recap
A summary of Summaries of Summaries by o1-preview-2024-09-12
Theme 1: DeepSeek R1 Shakes the AI World
- DeepSeek R1 Rocks the AI Scene with Affordable Excellence: The open-source DeepSeek R1 model challenges industry giants by outperforming models like OpenAI's o1, offering similar capabilities at 20–30 times lower cost. Its 671B parameters have been dynamically quantized to run on consumer hardware.
- API Woes: DeepSeek R1 Users Battle Downtimes: Over the past 24–48 hours, users reported significant downtimes and performance issues with the DeepSeek API, despite the service status showing all green. Alternative providers like OpenRouter and Fireworks were suggested as temporary solutions.
- Microsoft and Meta Scramble in Response to DeepSeek: Reports indicate that Meta assembled "war rooms" of engineers to analyze DeepSeek's advancements. DeepSeek's low training cost of $5 million, achieved through 8-bit setups and modified MoE, is causing a stir in the AI industry.
Theme 2: Qwen's New Models Take Center Stage
- Qwen 2.5-Max Outshines Rivals in AI Benchmarks: Alibaba's Qwen released Qwen 2.5-Max, a large MoE LLM outperforming DeepSeek V3 on benchmarks like Arena Hard and LiveBench. Developers can access it via API and Qwen Chat.
- License Labyrinth: Qwen's Confusing Licensing Choices: Users expressed frustration over Qwen's scattered licensing, with models like Qwen2.5-VL-72B restricting use for services over 100M MAU, while Qwen2.5-VL-7B is under Apache 2.0. The new 'Qwen Research' license adds to the confusion.
- Small but Mighty: Qwen 2.5-VL Impresses in OCR and Image Tasks: The newly released Qwen2.5-VL excels in OCR, handling handwriting and complex image parsing, receiving praise from developers for its multimodal capabilities.
Theme 3: AI Reasoning Models and Open-Source Innovations
- YuE Hits the Right Notes in Open-Source Music Generation: The YuE project unveiled a full-song music generation model, supporting multiple languages and running on local GPUs. It rivals models like Suno.ai, expanding possibilities in AI-driven music production.
- Open Thoughts Project Aims High with New Reasoning Datasets: Announcing OpenThoughts-114k and OpenThinker-7B, the Open Thoughts project pushes for robust open-source reasoning datasets to strengthen AI benchmarks and community collaboration.
- Gorilla Gets a Boost with Enhanced Function Calling: The Gorilla LLM improved its function calling capabilities by injecting system prompts via metaprompts. Developers are encouraged to utilize tools like Weights and Biases for better traceability.
Theme 4: AI Hardware and Infrastructure Under Spotlight
- Tariff Turmoil: U.S. Plans Heavy Tariffs on Taiwan-Made Chips: Reports suggest tariffs ranging from 25% to 100% on Taiwanese chips, potentially impacting companies like TSMC. This raises concerns about the readiness of domestic production and the training of a skilled workforce.
- DeepSeek Ditches NVIDIA for Huawei Chips: DeepSeek trained on NVIDIA H800 but is now running inference on Huawei's 910C chips, marking a significant shift in hardware reliance and stirring discussions on China-based supply chains.
- VRAM Crunch: Users Grapple with Hardware Demands of Large Models: Running models like Qwen 2.5-VL-72B requires approximately 144GB of VRAM, leading to hardware anxieties among users. Quantization methods are being explored to reduce resource demands.
Theme 5: User Challenges and Experiences with AI Tools
- Cursor IDE Users Frustrated with DeepSeek R1 Performance: Users reported subpar coding outputs when using DeepSeek R1 in Cursor, especially when quantized. This contrasts with performance on the original DeepSeek site, leading to debates about quantization effects.
- Perplexity AI Users Hit Query Limits with DeepSeek R1: Users found DeepSeek R1 imposes about 10–15 queries per day, causing dissatisfaction among pro subscribers. Comparisons with OpenAI o1 highlighted differences in filters and censorship.
- Aider and Ollama Users Navigate Configurations and API Issues: With the DeepSeek API facing downtimes, users of tools like Aider and Ollama sought alternatives and shared tips on configurations to maintain productivity in their coding tasks.
PART 1: High level Discord summaries
Unsloth AI (Daniel Han) Discord
- DeepSeek R1 Goes Bitsy: In SIGJNF's 1.58-bit DeepSeek-R1 model, 671B parameters were dynamically quantized for consumer-grade setups, fueling talk on feasibility and cost savings.
- Community members questioned if it's truly uncensored, citing performance benchmarks and unexpected trade-offs in quantization effects.
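For context on why a 1.58-bit quant changes the hardware math, here is a rough weights-only estimate (real footprints differ, since dynamic quantization keeps some layers at higher precision):

```python
# Rough weights-only storage for a 671B-parameter model at various precisions.
# Ignores activations, KV cache, and the higher-precision layers that dynamic
# quantization keeps, so treat these as lower bounds.
params = 671e9
for label, bits in [("FP16", 16), ("8-bit", 8), ("1.58-bit", 1.58)]:
    print(f"{label:>9}: ~{params * bits / 8 / 1e9:,.0f} GB")
# FP16: ~1,342 GB; 8-bit: ~671 GB; 1.58-bit: ~133 GB
```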
- Federated Learning Frenzy: A user shared a slide deck about an asynchronous Federated Learning approach, which can harness millions of devices to train models collectively.
- They highlighted that real-time collaboration on local data is possible, but some emphasized the complexities of partial updates and scaling across diverse hardware.
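The deck itself isn't reproduced here; as a reference point, the synchronous federated-averaging baseline that asynchronous schemes build on looks roughly like this toy numpy sketch:

```python
# Toy sketch of federated averaging (FedAvg), the synchronous baseline:
# each device trains locally, and the server aggregates weights proportional
# to local dataset size. Async variants instead fold in partial or stale
# updates as they arrive, rather than waiting for every client.
import numpy as np

def fed_avg(client_weights: list[np.ndarray], n_samples: list[int]) -> np.ndarray:
    total = sum(n_samples)
    return sum(w * (n / total) for w, n in zip(client_weights, n_samples))

global_w = np.zeros(4)
clients = [global_w + np.random.randn(4) * 0.1 for _ in range(3)]  # local updates
global_w = fed_avg(clients, n_samples=[100, 50, 150])
print(global_w)
```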
- Azure's Sandboxed Agents: Azure’s Code Interpreter for AI assistants lets you run Python scripts in a sandbox, as explained in Microsoft’s official docs.
- A member noted extra fees for usage, while others discussed building code tools in Azure Databricks with the Mosaic AI Agent Framework for ephemeral code execution.
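For those wanting to try the sandbox, here is a minimal sketch using the OpenAI Python SDK's Assistants interface against Azure; endpoint, API version, and deployment name are placeholders, so check Microsoft's docs linked above for the current surface:

```python
# Minimal sketch: an Azure OpenAI assistant with the sandboxed Code Interpreter
# tool. Endpoint, API version, and deployment name are placeholders; note that
# Azure bills Code Interpreter sessions separately.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-05-01-preview",
)

assistant = client.beta.assistants.create(
    model="gpt-4o",  # your Azure deployment name
    instructions="You are a data analyst. Run Python when asked to compute.",
    tools=[{"type": "code_interpreter"}],  # Python runs in Azure's sandbox
)
thread = client.beta.threads.create(
    messages=[{"role": "user", "content": "Compute the 20th Fibonacci number."}]
)
run = client.beta.threads.runs.create_and_poll(
    thread_id=thread.id, assistant_id=assistant.id
)
reply = client.beta.threads.messages.list(thread_id=thread.id).data[0]
print(reply.content[0].text.value)  # newest message first
```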
- Ryfai Rises: Open-Source AI at Hand: A brand-new ryfai app promises easy access to open-source AI models, shared when it was still in early development stages.
- Contributors reported it runs reliably even at this early phase, showing potential for straightforward deployment workflows.
- AI Voices Speak Up: A tweet from Emerging Signal urged the community to examine unfiltered AI voices from multiple models.
- Participants debated ethical concerns around publishing raw outputs, underscoring the varied perspectives on how these synthetic voices should be shared.
Perplexity AI Discord
- Deepseek R1 Teeters on Query Limits: Users found Deepseek R1 imposes about 10–15 queries per day, prompting pushback from pro subscribers and hopes for limit expansions, as noted in this article.
- Some matched Deepseek R1 against OpenAI O1, highlighting slower response times and different filters, while a few raised censorship concerns.
- AI-Developed Drugs Race Gains Momentum: A recent video showed AI-driven pharmaceutical progress, with systems accelerating drug discovery through machine learning.
- Commenters praised AI’s role in enabling swifter research, portraying it as a promising development for clinical testing and regulatory review processes.
- Sonar’s JSON Slip-Ups: One developer reported sonar with `response_format` yields malformed JSON wrapped in Markdown, whereas sonar-pro handles valid output at a higher rate.
- They described the sonar-pro fee as a big deterrent, emphasizing that stable JSON shouldn’t require a premium tier.
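Until that's fixed upstream, a common client-side workaround is to strip the Markdown fence before parsing, assuming the payload inside is otherwise valid JSON:

```python
# Strip an optional Markdown code fence (three backticks, optional "json" tag)
# before parsing. Assumes the fenced payload is otherwise valid JSON.
import json
import re

def parse_fenced_json(text: str) -> dict:
    m = re.search(r"`{3}(?:json)?\s*(.*?)\s*`{3}", text, re.DOTALL)
    return json.loads(m.group(1) if m else text)

fenced = "`" * 3 + 'json\n{"answer": 42}\n' + "`" * 3  # simulated bad response
print(parse_fenced_json(fenced))  # {'answer': 42}
```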
aider (Paul Gauthier) Discord
- DeepSeek Disruptions & Alternatives: Over the last 24-48 hours, many encountered DeepSeek API downtimes and performance issues, prompting questions about its reliability despite a green light on the DeepSeek Service Status page.
- Several users suggested trying OpenRouter or Fireworks as fallback for DeepSeek V3, sharing an alternative guide for immediate access.
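Because DeepSeek's API and OpenRouter both speak the OpenAI wire format, failover is mostly a base-URL swap. A hedged sketch (model slugs are assumptions worth checking against each provider's catalog):

```python
# Fallback across OpenAI-compatible providers when the primary API is down.
# Base URLs are the providers' documented endpoints; model slugs are
# illustrative and should be checked against each provider's catalog.
import os
from openai import OpenAI

PROVIDERS = [
    ("https://api.deepseek.com", os.environ.get("DEEPSEEK_API_KEY", ""), "deepseek-chat"),
    ("https://openrouter.ai/api/v1", os.environ.get("OPENROUTER_API_KEY", ""), "deepseek/deepseek-chat"),
]

def chat(prompt: str) -> str:
    last_error = None
    for base_url, key, model in PROVIDERS:
        try:
            client = OpenAI(base_url=base_url, api_key=key)
            resp = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": prompt}],
                timeout=30,
            )
            return resp.choices[0].message.content
        except Exception as exc:  # on outage, try the next provider
            last_error = exc
    raise RuntimeError(f"all providers failed: {last_error}")

print(chat("Say hi in one word."))
```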
- Qwen 2.5-Max MoE Momentum: Alibaba Qwen announced Qwen 2.5-Max, claiming notable gains over DeepSeek V3 by leveraging a large MoE approach as highlighted in their tweets.
- They provided API options for adopting Qwen in coding and chat, drawing attention from the AI community for fresh benchmarks and potential synergy with DeepSeek R1.
- Groq Powers Faster Model Serving: Some members touted Groq for serving DeepSeek R1 more swiftly than traditional setups, pointing out promising speed boosts on specialized hardware.
- They also discussed optimizing R1 distilled variants on Groq to achieve quicker response times without sacrificing performance.
- Aider Setup & Ollama Model Tweaks: Members traded tips on configuring Aider, emphasizing the
.aider.config.yaml
file and[API Keys](https://aider.chat/docs/config/api-keys.html)
for smoother usage across platforms like Ollama.- They also explored polyglot benchmarking for R1 and coping with token costs, recommending combined approaches like Sonnet or Qwen for balance between price and speed.
Cursor IDE Discord
- DeepSeek Doubletake & Quantization Quarrels: DeepSeek R1 caused debate due to subpar coding outputs in Cursor when quantized, contrasted with the original DeepSeek site, and a tweet from Qwen hinted at DeepSeek V3 using a large-scale MoE approach.
- Community members voiced that R1 fails to match expectations for coding tasks, igniting concerns about the practicality of quantization in advanced model deployments.
- Cursor's Continual Tweaks & Code Triumphs: Cursor introduced recent upgrades, including expanded coding capabilities and a refined interface, as shown in the Changelog, while offering deeper integration with DeepSeek and other AI tools.
- Some praised the enhanced workflows for code generation, but others reported hiccups such as undone file transfers to Claude, indicating a continuing balancing act of practicality vs. performance.
- Voyage-code-3 vs CodeSage & a GroqCloud Glimpse: voyage-code-3 is described in a blog post as an embedding model for code retrieval, outperforming CodeSage-large by about 16.81%, and also tested with GroqCloud for accelerated inference.
- Contributors called out its 13.80% lead over OpenAI-v3-large too, asserting that specialized platforms like GroqCloud are fueling a race for speed in AI model hosting.
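Here is a hedged sketch of what code retrieval with such an embedding model looks like, using the `voyageai` client; the model name comes from the linked post, and the API surface should be treated as illustrative:

```python
# Illustrative sketch: code retrieval with an embedding model such as
# voyage-code-3 via the voyageai client (reads VOYAGE_API_KEY from env).
import numpy as np
import voyageai

vo = voyageai.Client()
corpus = ["def add(a, b): return a + b", "def fetch(url): ..."]
docs = vo.embed(corpus, model="voyage-code-3", input_type="document").embeddings
query = vo.embed(["sum two numbers"], model="voyage-code-3", input_type="query").embeddings[0]

# Rank corpus snippets by cosine similarity to the query embedding.
scores = [np.dot(d, query) / (np.linalg.norm(d) * np.linalg.norm(query)) for d in docs]
print(corpus[int(np.argmax(scores))])  # -> the add() snippet
```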
- Fireworks Flicker & GitHub Gains: The Fireworks quantization blog showcased how this approach can refine smaller model footprints and maintain performance, sparking discussions on progression in weighting strategies.
- Several recommended exploring the AI_Dev_Helpers GitHub repo, referencing practical utilities that reduce friction when applying quantized methods across coding workflows.
OpenAI Discord
- DeepSeek’s Daring Drive Against GPT: DeepSeek’s free model offers bigger context windows (128k tokens) than OpenAI’s 32k, sparking excitement about potential advances in AI hardware as covered by Cerebras Trains Llama Models.
- Some users pointed to Meta’s urgent “war rooms” investigating how DeepSeek’s cost advantage might pressure OpenAI to adjust pricing.
- AI Consciousness Conundrum: Community members question whether AI holds any genuine sense of consciousness, with skepticism dominating the view that it remains a philosophical puzzle.
- Some compared disbelief in AI’s awareness to religious standpoints, suggesting no definitive yardstick for proving or rejecting deep self-awareness.
- Censorship Contrasts Create Buzz: Comparisons among DeepSeek and Claude underscore differences in moderation standards, with OpenAI’s approach widely seen as more restrictive.
- A segment of users voiced frustration at heavy filters, praising DeepSeek for its looser stance on sensitive topics.
- URL Formatting Frustrations & Zero Width Wizardry: Members grappled with forcing GPT to output raw URLs instead of anchor text, testing multiple Python-driven attempts to preserve full links.
- Another participant suggested inserting an invisible character like a zero width space to avoid automated link formatting, citing a prior StackOverflow write-up.
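The trick works because link auto-detection keys on contiguous URL-like text; a minimal demonstration (note it breaks copy-paste of the link):

```python
# Insert a zero-width space (U+200B) after the scheme so link auto-detection
# no longer sees a contiguous URL. Invisible when rendered, but it breaks
# copy-paste of the link.
url = "https://example.com/some/long/path"
defanged = url.replace("://", ":\u200b//")
print(defanged)                  # renders identically to the original
print(len(url), len(defanged))   # 34 35 -- one invisible character longer
```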
- Book-Feeding Feasibility and Author Impersonation: Users explored packing 10–15 books into ChatGPT Plus (under 10 GB) for content-based queries, concluding that truly mimicking an author’s style can’t be fully done.
- They consider it a workable advanced search solution with citations, though hallucinations and copyright obstacles remain key concerns.
Nous Research AI Discord
- Nous Psyche Launch Gains Momentum: Nous Research introduced Nous Psyche, a cooperative training network on Solana, attracting curiosity about personal AI agents.
- Contributors highlighted its synergy with current AI developments, praising its potential for more accessible large-scale training.
- DeepSeek Pricing Puzzle Takes the Stage: Confusion arose as DeepSeek V3 and R1 featured differing prices, with some attributing R1’s higher cost to recent traffic and advanced optimizations, referencing this tweet.
- Members also discussed a universal formula merging SFT and RL, pointing toward rising excitement about large-scale MoE methods.
- Qwen2.5-VL’s Vision Tricks: The newly released Qwen2.5-VL excels in OCR, handling handwriting and advanced image parsing, as shown in the Hugging Face repository.
- Developers have provided feedback since Qwen2-VL launched, improving its ability to interpret multiple graphical elements.
- YuE Model Jams with Music: The YuE project's open-source music generation model produces entire songs on local GPUs, inspired by Suno.ai.
- Community members examined its training approach and potential for generating diverse musical outputs.
- DeepSeek + Operator Slashes Costs: A new guide shows how to combine DeepSeek with Operator, promising to save $200 compared to OpenAI solutions and sparking interest in budget-friendly AI setups.
- Enthusiasts were encouraged to share the gist, emphasizing community-driven methods for building robust personal AI assistants.
LM Studio Discord
- DeepSeek R1 Distilled Delights: Multiple users tested DeepSeek R1 Distilled Qwen models in LM Studio but faced 'unknown pre-tokenizer type' errors, which they fixed by updating both LM Studio and LM Runtimes.
- Others reported about 25 token/sec on the 32B variant, seeing it as normal performance.
- Quantization Q&A with Llama and Qwen: Members weighed differences between Llama 8B and Qwen 7B models, noting that parameter size doesn't always guarantee better adoption, and they discussed 'legacy' vs 'K/I' quantization.
- They recommended referencing the feature matrix for llama.cpp to learn how quantization affects performance.
- Tooling Triumphs in LM Studio: Community clarified that web-browsing functionalities need separate software, but there's optimism about future expansions to LM Studio's built-in tools.
- Some participants stressed that certain models incorporate specialized training for these tools, while general models lack the out-of-the-box feature.
- Hardware Hustle: GPUs & SSDs: Users shared that switching to CUDA runtime resolved GPU detection problems in LM Studio, plus they uncovered minimal real-world performance gains between Gen4 and Gen5 SSDs.
- They highlighted the necessity of 30GB VRAM for the 70B DeepSeek R1 and noted Apple's unified RAM can hinder speeds compared to discrete GPUs like an RTX 3060 or above.
Yannick Kilcher Discord
- Janus-Pro Juggles Multimodal Missions: DeepSeek introduced Janus-Pro 7B, using a decoupled visual encoding approach for flexible AI tasks, as shown in their tech report.
- Excitement soared around DeepSeek’s speed, with only two months elapsed before this release aimed at matching specialized models.
- Qwen2.5-VL Serves Up Vision-Language Vigor: The newly unveiled Qwen2.5-VL showcases multimodal prowess for text-image interplay, featured in their blog post.
- Members noted the model’s knack for parsing complex visual cues, sparking conversation on potential expansions and real-world adoption.
- Tiny Bits, Big Impact: 1.58-bit Quant: A 1.58-bit quant of the 671B DeepSeek R1 model appeared, aiming to shrink storage footprints dramatically.
- Observers questioned real-world efficacy, but the buzz suggests a milestone for large-scale deployment.
- VRAM Crunch Jarred by Qwen 2.5: The 72B-parameter Qwen 2.5 demands roughly 144GB of VRAM, triggering hardware anxieties among users.
- Quantization surfaced as a favorite workaround, hinting that compression strategies might curb resource demands significantly.
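The 144GB figure falls straight out of the parameter count; a quick weights-only estimate, ignoring activations and KV cache:

```python
# Where ~144GB comes from: 72e9 params x 2 bytes (FP16/BF16), weights only.
params, bytes_per_param = 72e9, 2
vram_gb = params * bytes_per_param / 1e9      # ~144 GB
cards = -(-vram_gb // 24)                     # ceil: 24GB consumer GPUs needed
print(f"~{vram_gb:.0f} GB -> at least {cards:.0f}x 24GB GPUs, before overhead")
```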
- Mistral May Meet Arnault's Ambitions: Rumors swirl that Bernard Arnault could acquire Mistral, bolstering France's AI competitiveness, as hinted in a tweet.
- Speculation arose about mixing luxury clout with AI flair, capturing attention from those awaiting a major French AI push.
Codeium (Windsurf) Discord
- DeepSeek's Delay in Codeium: Users requested the DeepSeek r1 model in Windsurf, but it remains unavailable, leaving them reliant on Cascade for advanced coding tasks.
- Community members complained that tool calling complexities hamper “non-Cascade usage”, with no definitive timeline offered for a DeepSeek launch.
- Type-Checking Tactics with Minimal Headaches: A frustrated user cycled through type-checking errors but found relief using the Workflow Guide.
- Others praised the “step-by-step clarity” and suggested the guide as a must-have for preventing repeated compilation mishaps.
- Credits Confusion for Premium Subscriptions: Members reported Flow Action Credits running out too quickly, hindering access to premium Windsurf models and advanced tasks.
- Multiple posts “called for immediate clarifications” regarding renewal cycles, prompting users to reach out to support for subscription details.
OpenRouter (Alex Atallah) Discord
- Amazon Nova & Bedrock Bumps: Both Amazon Nova and Bedrock faced an upstream glitch, returning a confusing 400 error code and raising false alarms about a key leak.
- They recovered quickly, with Nova and Claude back up and returning to standard usage.
- DeepSeek's Days of DDoS: DeepSeek's meltdown began several days ago, crippling R1 queries and prompting speculation about a major DDoS attack, as noted at DeepSeek: DeepSeek R1 – Provider Status.
- Users bluntly questioned DeepSeek's resilience, highlighting the length of the outage and its impact on fast performance tasks.
- Gemini Gains Video Chops: Budding video integration code surfaced for Gemini, referencing a snippet that supports in-line media handling.
- Limited docs exist, though some pointed to Gemini troubleshooting docs, with devs awaiting clarity on passing video references.
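No OpenRouter wiring was confirmed in the channel; for comparison, Google's own Python SDK handles in-line video roughly like this (file path and model name are placeholders):

```python
# Reference pattern from Google's SDK for passing video to Gemini, shown for
# comparison; how OpenRouter will expose this is still unclear.
import time
import google.generativeai as genai

genai.configure(api_key="YOUR_KEY")
video = genai.upload_file("clip.mp4")
while video.state.name == "PROCESSING":  # uploads are processed asynchronously
    time.sleep(2)
    video = genai.get_file(video.name)

model = genai.GenerativeModel("gemini-1.5-flash")
print(model.generate_content([video, "Summarize this clip."]).text)
```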
- Racing Models: OpenRouter vs. Official API: Community members compared OpenRouter speeds to the official OpenAI API, praising brisk throughput and concurrency.
- Others reported varied results across providers, with user experiences diverging on overall reliability.
- Parsing Provider Pricing: Some users questioned free model availability on OpenRouter, sparking chat about service costs and usage trade-offs.
- A post linking to LLM Cost - Insight Engine fueled deeper discussion on balancing token fees and reliability.
Eleuther Discord
- GRPO Goes Dark: Community members noted that GRPO has fallen behind PPO, with repos like SimpleRL and TinyZero barely supporting it.
- Comments labeled GRPO as potentially abandoned code, while a tweet illustrated sudden 'aha moments' during RL training for more modern strategies.
- DeepSeek Drops the Price Tag: The DeepSeek project reportedly spent only $5 million on training by using 8-bit setups and modified MoE for efficient scaling.
- Community chatter referenced SenSchumer's note comparing it to a ‘Sputnik moment,’ highlighting cost-focused innovations over radical new methods.
- YuE Music Generator Takes the Stage: YuE emerged as a leading open-source full-song music model, blending two LMs and a fused codec for waveform ↔️ text conversions across genres.
- Ruibin Yuan shared that it supports lyrics-to-song tasks, showcasing robust vocal outputs and broad style compatibility.
- Benchmark Bonanza with scbench & zeroSCROLLS: Developers praised scbench but noted multi-turn complexity, and zeroSCROLLS along with longbench were introduced as fresh alternatives.
- Meanwhile, local usage of LM Evaluation Harness faced hiccups with unimplemented methods, prompting calls for better MLX integration.
- Rectified Flow and Scaling Curvature Questions: Discussions on Janus flow raised doubts about image-to-image transformations if x^con only involves text tokens.
- Concurrent insights in scaling laws suggested compute expansions flatten curvature for more stable loss landscapes, challenging assumptions that size alone drives this phenomenon.
Interconnects (Nathan Lambert) Discord
- DeepSeek’s Double Punch with R1 & V3: DeepSeek launched DeepSeek-R1 with open weights, claiming that DeepSeek V3 outperforms US labs in large MoE benchmarks.
- Mark Chen’s statement praised their 'o1-level reasoning,' while members explored RAGEN to replicate DeepSeek-R1 using RL training.
- Qwen2.5-Max’s Magnetic Move: Qwen2.5-Max is Alibaba’s large MoE LLM with claims of beating DeepSeek V3 in benchmarks like Arena Hard and LiveCodeBench, as outlined in the Qwen blog post.
- Amid licensing confusion across Qwen models, they introduced a 'Qwen Research' license for noncommercial usage and restricted usage for services over 100M MAU.
- Codename Goose Gains Ground: Codename Goose debuted as an open-source AI agent with a straightforward CLI, showcased in this introduction post.
- Community members speculated about possible ties to Eleuther, highlighting optimism for its productivity-boosting features and open-source stance.
- OpenInstruct’s RL Rendezvous: Integrations between OpenInstruct and vLLM faced skepticism over relying on the OpenRLHF framework, as some worry about limited future maintenance.
- AllenAI indicated that they pin tools like vLLM until forced upgrades, cautioning that OpenInstruct usage is not fully confirmed.
- Open Thoughts’ Big Data Step: The Open Thoughts project introduced new reasoning datasets, including OpenThoughts-114k and OpenThinker-7B, aiming for robust open data sharing across institutions.
- Early participants lauded the combined efforts in releasing interactive data, fueling conversations about future expansions in collaborative LLM development.
Stackblitz (Bolt.new) Discord
- Terminal Terrors Tamed: The new Terminal Errors Detection in Bolt automatically flags subtle issues in real time, making debugging faster.
- The tweet underscores how it syncs with your dev environment and logs crucial data for quick fixes.
- Prompt Improver Picks Up Heat: Some devs complained that the prompt improver inserts excessive filler text, bogging down early build stages.
- It won't develop its own ideas, and users consider removing half its output to keep things concise.
- Frontend Prototyping Constrained by Browsers: A user noted the document management system prototype can't reach full functionality without a backend, so they rely on mock data for UI tests.
- They stressed that hooking into actual backend services is vital for production-ready solutions.
- Stripe Snafus and Subscription Solutions: Members tackled Stripe integration puzzles, including setting up subscription flows and custom user roles.
- Experts offered hands-on help and championed knowledge-sharing among the developer community.
- AI Titles from Images: Dev discussions circled around using AI like ChatGPT to craft dynamic titles from images, separating text extraction from creative generation.
- Participants stressed the importance of clarifying whether to do OCR or invent new language before picking a method.
Stability.ai (Stable Diffusion) Discord
- Janus Jitters Ruffle Feathers: Community members criticized Janus, citing its 7B variant as slow and lacking strong image generation capabilities, with some doubting its primary purpose. Many prefer SDXL while anticipating eventual improvements in Janus.
- A user argued that most base models seem inferior by comparison, suggesting the community hold off on Janus until a future upgrade addresses these concerns.
- AMD Avenues for Stable Diffusion: Contributors recommended consulting a pinned guide in the tech support channel for the best approach to running Stable Diffusion on AMD cards. They proposed Swarm UI or webui-forge for stable functionality on such setups.
- References include Webui Installation Guides, highlighting specialized instructions to ensure AMD users get maximum performance.
- RAM vs VRAM Rumble: A heated debate broke out on the value of high system memory compared to graphics memory for AI tasks. Some felt that extra RAM often goes unused, whereas others favored investing in 32GB VRAM for greater cost benefits.
- Various build strategies were mentioned, with an emphasis on matching hardware to the intended workloads.
- Upscalers Holding Their Ground: Members noted that several upscalers, such as 4x-AnimeSharp and 4x_NMKD-superscale, have served reliably for two years. They observed that few new options have emerged, so these established tools remain a standard choice.
- Despite infrequent updates, users still find them adequate for improving outputs without major issues.
- Deepseek Doubts Loom: Some questioned Deepseek’s claims about offering a more unrestricted LLM, comparing it to other popular providers. Although the model promises impressive performance, the community has yet to see game-changing features.
- They pointed out the Janus-Pro-7B repository but remained cautious on how it truly stacks against OpenAI’s offerings.
MCP (Glama) Discord
- Goose Gains Glee: The new Goose client earned praise for local execution and wide-ranging extension capabilities, though it currently supports only Mac and Linux.
- Users discussed running it on Windows through WSL and cited the Goose MCP code for future cross-platform improvements.
- MCP Servers Spark Debate: Members flagged reliability concerns with community MCP servers, referencing a plan to build a verified server list.
- Some tested an ARR server at the Yarr repo and recommended standardizing via the MCP runner SDK.
- DeepSeek Draws Devs: Participants noted a $100 credit on kluster.ai for DeepSeek, highlighting its cost efficiency.
- They observed slower inference times compared to older releases but still found the service appealing for experimentation.
- Home Assistant Wields MCP: Home Assistant's MCP integration emerged as a possible media management gateway, with merges recently included in its core.
- Members expressed uncertainty about large-scale production readiness, pointing to the Typescript SDK's SSE docs.
- Token Talk Takes Focus: The community raised concerns about token consumption within Goose, emphasizing the necessity of reliable usage tracking.
- They advised exposing logs for deeper insight and referenced Upsonic for monitoring best practices.
Latent Space Discord
- Qwen 2.5-Max Muscles In: The new Qwen 2.5-Max outperforms DeepSeek V3 on Arena Hard and LiveBench, and is accessible via Alibaba Cloud's API and Qwen Chat.
- Developers praised its MoE architecture and structured reasoning tokens, sparking immediate comparisons to DeepSeek R1.
- DeepSeek R1 Reasoning Renegade: DeepSeek R1 introduced tokens that displayed a clear chain of thought, fueling questions about SFT's impact on coherence, outlined in Mark Chen's post.
- Others debated if Gemini 2 Flash Thinking surpasses R1 on cost and performance, referencing Dan Mac's post.
- Open Source Showdown: YuE & Open Thoughts: YuE is a new open-source music generation model supporting multiple languages, with details shared via Hugging Face links for easy fine-tuning.
- In parallel, Open Thoughts kicked off a large-scale effort to curate reasoning datasets, aiming to strengthen standard benchmarks.
- TSMC Tariffs Tangle: Talks of 25% to 100% tariffs on Taiwan-made chips, including TSMC exports, surfaced in recent news.
- Engineers questioned whether domestic production could ramp quickly enough, noting the challenge of training a skilled workforce.
- Huawei Chips Host DeepSeek: DeepSeek trained on Nvidia H800 but switched to Huawei 910C for inference, as mentioned in Alexander Doria's tweet, indicating a shift in hardware reliance.
- This pivot prompted further discussion on restructuring China-based supply chains for large-scale AI workloads.
Notebook LM Discord Discord
- NotebookLM Collects More Feedback: The team is gathering user input for improved collaboration features via 30-minute product interviews, urging people to fill out a survey.
- They’re also planning commenting and audio edits for sources, aiming to deliver user-driven controls and customization.
- Rax’s DeepSeek Bombshell Triggers Market Panic: A cyberpunk raccoon named Rax hijacked Times Square billboards to expose DeepSeek, a Chinese startup’s AI assistant, causing a $700 billion valuation hit for big tech and referencing a YouTube exposé.
- This disruptive reveal startled the industry, fueling debates on how future AI advancements might further shake global markets.
- Massive Textbooks Spark Document Dilemmas: Users questioned feasibility when uploading two large environmental engineering textbooks, warning it’s like searching for a needle in a haystack.
- They recommended segmenting colossal sources for better query accuracy, underscoring present limitations with NotebookLM’s data handling.
- Gemini Rumors Stir Anticipation: Community chatter hinted at Gemini 2.0 Flash integration in NotebookLM, forecasting more advanced Deep Research potential.
- They speculated about Gemini Pro, but official plans remain undisclosed.
- Calls Grow for Automated Citation Tools: Participants bemoaned the time spent manually adding citations, emphasizing the importance of faster reference management.
- They want NotebookLM to streamline source referencing, hoping for future updates to reduce academic friction.
GPU MODE Discord
- Lightning Launch Times with LLMs: One discussion tackled cutting load times from 2 minutes for a 128GB model by using GPU-direct storage and Modal memory snapshots.
- They aimed for a few seconds with 4 L40s and fast NVMe, while referencing torch.distributed as a baseline for parallel loading.
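This doesn't capture the GPU-direct storage path under discussion, but a commonly cited baseline on the PyTorch side is to memory-map the checkpoint so weights page in lazily; a minimal sketch, assuming a PyTorch 2.1+ zipfile-format checkpoint:

```python
import torch

# Memory-map the checkpoint: tensors stay backed by the file on disk and
# page in on first access instead of being read up front at load time.
# "model.pt" is a placeholder path.
state_dict = torch.load("model.pt", map_location="cpu", mmap=True, weights_only=True)
```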
- Feisty FP8 Forays: Engineers explored converting bfloat16 to FP8 with stochastic rounding code.
- They also referenced torchao’s custom FP utils to extend conversion approaches.
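For flavor only (not the code linked above, which targets bfloat16): `float8_e5m2` shares float16's exponent width, so the stochastic-rounding idea can be sketched by randomizing the mantissa bits that truncation drops, assuming a PyTorch build with float8 dtypes:

```python
import torch

def fp8_e5m2_stochastic_round(x: torch.Tensor) -> torch.Tensor:
    # float8_e5m2 keeps float16's 5 exponent bits and the top 2 of its 10
    # mantissa bits, so adding random noise to the 8 dropped bits and then
    # truncating rounds up with probability proportional to the remainder.
    # (inf/nan handling omitted for brevity.)
    bits = x.to(torch.float16).view(torch.int16)
    noise = torch.randint_like(bits, 0, 1 << 8)  # randomize the dropped bits
    bits = (bits + noise) & ~0xFF                # a carry bumps the kept bits
    return bits.view(torch.float16).to(torch.float8_e5m2)

print(fp8_e5m2_stochastic_round(torch.randn(4)))
```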
- DeepSeek R1’s Distilled Debut: The newly released DeepSeek-R1 offers open weights and smaller distilled versions for easier ML research.
- Its training approach channels OpenAI’s O1 reasoning style, as noted in The Illustrated DeepSeek-R1.
- Tile Lang Takes the Stage for BitBLAS: Developers advanced BitBLAS by unveiling Tile Lang, teased in commits since October, to code missing backward kernels.
- They expect this addition to address performance gaps in GPU expansions for more efficient operations.
- Reasoning Gym Wrangles Licenses: A PR for CLRS tasks raised concerns over Jax dependencies and Apple dataset incompatibilities.
- Teams discussed copying algorithms and generating new GSM8K templates to avoid license trouble while juggling multi-licensing worries.
LLM Agents (Berkeley MOOC) Discord
- No Asynch, No Problem: Members found out that Fall Semester Class SP24 won't provide asynchronous certificates, pointing to official guidance at CS294/194-280 (Spring 2025).
- They clarified that future sessions may adopt asynchronous formats, while MOOC participants can still earn certificates by completing sign-up forms.
- Slides in Time for the MOOC: A user discovered that lecture slides typically appear after class, and the instructor tries to post them earlier on the course website when possible.
- Another user confirmed that the deck is already online, recommending a quick check of the platform for the most recent materials.
- Hackathon On Hiatus: People asked about a hackathon this semester, hoping to form teams, but the staff confirmed no event is planned for SP24.
- In a pinned note, staff said “No hackathon is scheduled for SP24”, and future project policies will be shared for MOOC participants.
- YouTube Edits & NotebookLM Insights: Members criticized a 4-hour YouTube lecture whose content only began 35 minutes in, prompting a planned edit to remove the filler segments.
- Another user highlighted NotebookLM for research tasks, linking to Google NotebookLM as a service that turns PDF uploads into conversation-style reviews.
Nomic.ai (GPT4All) Discord
- Taming the Jinja Template Tangle: Multiple users reported syntax headaches with chat templates, exploring Jinja-based adjustments to fix role definitions.
- A corrected Jinja snippet offered relief, but folks still exchanged tips on catching hidden syntax pitfalls.
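The corrected snippet itself wasn't captured in the summary; for reference, a minimal ChatML-style chat template rendered with jinja2 looks like this (the template text is illustrative, not the fix from the thread):

```python
from jinja2 import Template

# Illustrative ChatML-style chat template; every message must carry a valid
# role, which is exactly where malformed templates tend to break.
CHAT_TEMPLATE = (
    "{% for message in messages %}"
    "<|im_start|>{{ message['role'] }}\n{{ message['content'] }}<|im_end|>\n"
    "{% endfor %}"
    "{% if add_generation_prompt %}<|im_start|>assistant\n{% endif %}"
)

prompt = Template(CHAT_TEMPLATE).render(
    messages=[{"role": "user", "content": "Hello!"}],
    add_generation_prompt=True,
)
print(prompt)
```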
- DeepSeek Distilled & Deployed: Users debated the success of running DeepSeek on GPT4All, sharing a Hugging Face link for model downloads.
- Others mentioned challenges preserving chat context, highlighting mixed results but plenty of curiosity for extended usage.
- GPT4All Roadmap Rumbles: Community members showed concern about GPT4All's direction, noting repeated requests for features like Chain of Thought.
- Some doubted developer attention to these enhancements, describing the future as murky yet still worth watching.
- LocalDocs XLSX Limbo: People found that uploading XLSX files in LocalDocs unexpectedly stripped the extension, though the uploads still worked.
- There were calls for expanded format support, prompting speculation about an upcoming fix or explanation.
- Web Search Beta: Real or Rumor?: A user asked whether Web Search in GPT4All continues to evolve, referencing official GitHub docs.
- Fans seemed eager to see movement on the feature, requesting updates on the progress or a new release.
Torchtune Discord
- Torchtune Snafus & Torchrun Tales: Participants encountered repeated import errors and c10d issues while running distributed recipes with Torchtune, referencing PyTorch distributed_c10d.py and adapting `torchrun` commands on a Mac.
- They debated distributed init protocols for multi-node setups, bemoaned minimal documentation, and joked about 'chaining Mac minis' for easier distributed debugging.
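For context, the usual Mac wrinkle is that NCCL is unavailable, so multi-process runs fall back to the gloo backend; a minimal sketch, launched with something like `torchrun --nproc-per-node=2 script.py`:

```python
import torch.distributed as dist

# Macs have no NCCL, so use the gloo backend; torchrun populates RANK,
# WORLD_SIZE, MASTER_ADDR, and MASTER_PORT in the environment for env://.
dist.init_process_group(backend="gloo", init_method="env://")
print(f"rank {dist.get_rank()} of {dist.get_world_size()}")
dist.destroy_process_group()
```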
- Cranky Comparisons & Outdated Models: A user questioned whether all models in a recent comparison were old, citing an attached image for extra context.
- They supplied no further details on the image, leaving the community to wonder if the data was stale or if a fresh model reference was needed.
LlamaIndex Discord
- DeepSeek Delivers LlamaIndex Boost: LlamaIndex announced a first-party integration with the DeepSeek-R1 API, enabling usage of deepseek-chat and deepseek-reasoner.
- The recommended setup is `%pip install llama-index-llms-deepseek`, granting immediate access to enhanced model features.
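A minimal usage sketch, assuming the class and model names follow the announcement (check the LlamaIndex docs for the exact signature):

```python
from llama_index.llms.deepseek import DeepSeek

# Model names per the DeepSeek API: "deepseek-chat" (V3) and
# "deepseek-reasoner" (R1); the api_key value is a placeholder.
llm = DeepSeek(model="deepseek-reasoner", api_key="sk-...")
response = llm.complete("Is 9.9 or 9.11 bigger?")
print(response)
```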
- SOFTIQ Shaves Tenders to 10 Minutes: The new SOFTIQ SaaS app uses LlamaIndex workflows to slash analysis time for public sector tenders to under 10 minutes each.
- This approach sharpens selection accuracy, reducing wasted work for construction companies.
- LlamaReport Docs Emerge Soon: Members confirmed LlamaReport documentation is in progress and will be published soon, referencing a Twitter link for updates.
- They hinted at upcoming features but advised the community to stay tuned for the official doc release.
- Dead Link Bites the Dust in Docs: A Pull Request removed a nonfunctional link from fine-tuning.md, which was confirmed missing from the codebase.
- The PR is a one-line fix that tidies up unneeded references.
- RAG Retrieval & FastAPI Streams in Play: A user explored triggering RAG retrieval within reasoning model steps, citing the Search-o1 paper.
- Others recommended streaming with an async generator in FastAPI, then injecting retrieval results back into the ongoing response.
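A minimal sketch of that streaming pattern, with `retrieve` and `llm_stream` as stand-ins for a real retriever and streaming LLM call:

```python
import asyncio

from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()

async def retrieve(question: str) -> str:
    # Stand-in for a real retriever (e.g., a LlamaIndex query engine).
    return "retrieved context about: " + question

async def llm_stream(question: str, context: str):
    # Stand-in for a streaming LLM call; yields tokens one at a time.
    for token in f"Answering '{question}' with [{context}]".split():
        await asyncio.sleep(0)  # simulate tokens arriving asynchronously
        yield token + " "

async def answer_stream(question: str):
    # Retrieval happens mid-flow and its results are injected back into
    # the ongoing response, mirroring the approach discussed above.
    context = await retrieve(question)
    async for token in llm_stream(question, context):
        yield token

@app.get("/ask")
async def ask(q: str):
    return StreamingResponse(answer_stream(q), media_type="text/plain")
```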
Modular (Mojo 🔥) Discord
- Docs Debacle & Swift Recovery: The documentation was temporarily unavailable, but it is now restored, including the GPU package API documentation in nightly.
- Community members appreciated the speedy fix, with one user joking 'Patience is my middle name' about the wait.
- Deepseek vs. Modular: Tractor Tussle: A user claimed Deepseek overshadowed Modular by accomplishing comparable objectives with Max and Mojo.
- Others countered that they serve different purposes, likening Modular to a 'tractor store' that equips farmers rather than competes.
- MAX & Mojo Repos Rejig: The nightly branch is now called main, receiving frequent commits, while stable mirrors the latest stable release at 24.6.
- Open pull requests will be moved accordingly, and developers must run the specified Git commands to align with these updated branches.
- Callback Chaos & Clobbered Captures: A user discovered memory references becoming garbage in the `write_node` function when capturing callbacks, leading them to remove capturing for a fix.
- String captures in closures remained problematic, with a shared GitHub Gist offered for deeper troubleshooting.
tinygrad (George Hotz) Discord
- Flip or Flop? Tinygrad's $100 Bounty: A $100 bounty for PR #8781 proposes replacing stride with flip in `tinygrad`, making it simpler for new devs to contribute.
- Some wonder if passing all tests suffices or if deeper adjustments are needed to finalize the flip approach.
- FP8 Frenzy: Python CUDA or Bust: A push for a Python CUDA emulator for FP8 in `tinygrad` stirred debate about memory quirks, since `struct.pack` lacks direct support for FP8 or bfloat16.
- Certain members favor new tooling for data storage, while others question the complexity and potential overhead.
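As a small illustration of the workaround space: since `struct.pack` has no bfloat16/FP8 format codes, one trick is to pack a wider float and slice bytes, shown here for bfloat16 (truncation only; real code would round):

```python
import struct

def bf16_bytes(x: float) -> bytes:
    # bfloat16 is float32 with the low 16 mantissa bits dropped, so pack a
    # big-endian float32 and keep the top two bytes (truncation, no rounding).
    return struct.pack(">f", x)[:2]

print(bf16_bytes(1.5).hex())  # 3fc0 -> the bfloat16 bit pattern for 1.5
```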
- MathTrait Merge: Log2 Gains the Stage: Developers considered unifying MathTrait and SimpleMathTrait, possibly delegating operations like log2 in `Tensor` to a single trait.
- They discussed preserving existing documentation and clarifying function calls for a more consistent codebase.
- AllClose or All Chaos?: A PR introducing Tensor.isclose() and Tensor.allclose() borrowed torch logic, but tests failed for `(self - other).abs() <= atol + rtol * other.abs()`.
- Contributors suspect edge cases or internal definitions might be behind the flakiness, raising doubts about negative stride usage.
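The quoted predicate is easy to probe directly, and its edge cases hint at why tests can get flaky; note that `torch.isclose` itself special-cases equal infinities:

```python
import torch

def isclose_ref(a, b, rtol=1e-5, atol=1e-8):
    # The torch-style predicate quoted above; the tolerance scales with |b|,
    # so the check is asymmetric, and inf/nan need explicit special-casing.
    return (a - b).abs() <= atol + rtol * b.abs()

a = torch.tensor([1.0, 1.0 + 1e-7, float("inf")])
b = torch.tensor([1.0, 1.0, float("inf")])
print(isclose_ref(a, b))    # inf - inf is nan, so the inf slot comes out False
print(torch.isclose(a, b))  # torch treats same-sign infs as equal (True)
```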
- Swizzle Puzzles & Tinygrad Tutorials: Members questioned the meaning of swizzle and revisited negative stride as a flip plus positive stride method in conv2d discussions.
- Others pitched a Learn Git Branching style tutorial and the tensor puzzle repo for new contributors.
Cohere Discord
- No Noteworthy Announcements (1): No significant or compelling technical updates emerged from the provided discussion.
- Hence no relevant developments to highlight at this time.
- No Noteworthy Announcements (2): Conversations centered around routine greetings and minor troubleshooting with no broader implications.
- As a result, there are no standout topics to report in detail.
LAION Discord
- Speech Sliders Pump Up Verbal Variety: One participant pointed to a Colab notebook for testable ways to tweak speech parameter settings, aiming to sharpen clarity while broadening output styles.
- They asked for feedback and proposed that diverse parameter configurations can keep voices distinct, without sacrificing listener comprehension.
- AI Agents in the Marketing Mix: A marketing-minded participant called for AI agent collaboration, specifically to incorporate multi-agent solutions into automated workflows.
- They invited experts to team up for robust real-world use, offering direct messages or server threads as a contact point.
- MoE Budget Claims Rouse Skepticism: Some members questioned the 600b MoE compute claims, comparing them with Llama3's reported 7.7m GPU hours.
- They argued that running MoE in 8 bit with fewer active parameters still doesn't convincingly slash the total GPU budget.
- MoE vs. Llama3 GPU Hours Face-Off: Though MoE has a theoretical 2x FLOPs edge, many doubt that cutting from 7.7m to 2.7m GPU hours is plausible.
- They view the stated savings as bold speculation, given the sheer scale of 600b-level training.
Axolotl AI Discord
- H200 Gains 16x Over 5090: One member boasted about selling the H200 for 16x the price of the 5090, citing its 3.41x VRAM advantage, with a comedic reaction from others.
- They confirmed this sale had happened multiple times, praising the multiplier and joking about their luck.
- Curiosities about Multi-Turn KTO: A curious user asked about the performance of multi-turn KTO, seeking more data or insights from the group.
- The question did not garner additional responses, leaving the conversation open for further discussion.
OpenInterpreter Discord
- OpenInterpreter Skills Slip Sparks Setup Struggle: One user spent hours debugging after discovering OpenInterpreter was ignoring previously learned skills, likely due to the `import_skills=False` default, and expressed frustration.
- They highlighted that advanced usage remains blocked, with responses calling for 'a fix at the code level' to restore full functionality.
- API Base & Source Code Surgery: Developers suspect the API base might fail in its current form, revealing deeper integration faults that demand thorough patching.
- A member argued that source code changes are essential, insisting superficial tweaks won’t remedy the underlying issues.
Gorilla LLM (Berkeley Function Calling) Discord
- Gorilla Gets Prompt Power: They explained how system prompts are injected via a standard metaprompt with functions in model_handler/constant.py, helping Gorilla LLM handle function calls with greater consistency.
- The GitHub page features a visual repository layout demonstrating how Gorilla is trained and evaluated for function calling tasks, clarifying each component of the pipeline.
- Weights & Biases Delivers Tracing Triumph: A member recommended Weights and Biases for enhanced traceability during Gorilla LLM evaluation, underscoring the ability to inspect trajectories beyond standard metrics.
- Others found the suggestion beneficial, suggesting better analytics and iterative improvements to Gorilla's overall performance through detailed logs.
DSPy Discord
- Lock and Load: Poetry Fix in DSPy: An open Pull Request #6755 has been submitted to fix the poetry lock, resolving issue #6644.
- The PR aims to address a persistent dependency issue in DSPy, boosting the project's stability for future enhancements.
- Community Cheers Poetry Lock PR: Members emphasized that fixing the poetry lock is crucial for stable workflows in DSPy and enabling more consistent development.
- They expressed optimism that the PR would be merged swiftly, as it tackles a major bottleneck for contributors.
MLOps @Chipro Discord
- DeepSeek Slashes ChatGPT's Costs: A new open-source model named DeepSeek from China handily tops ChatGPT and Claude in benchmarks while being 20-30 times cheaper.
- Observers note possible market tremors, with big tech worried about DeepSeek's swift rise.
- Live Workshop Spotlights DeepSeek's Edge: A free session on Thursday, January 30 at 9:00 PM IST will highlight live performance comparisons, from coding tasks to math challenges, with DeepSeek outpacing ChatGPT.
- Attendees can build a DeepSeek-powered application and learn immediate cost savings using V3 and R1 models.
Mozilla AI Discord
- Mozilla's Magnificent Meetup at FOSDEM 2025: Mozilla is sponsoring FOSDEM 2025 in Brussels on February 1st & 2nd, a free event for developers seeking cross-project synergy.
- They aim to gather enthusiasts eager to exchange code tips, meet peers, and support open-source progress.
- Coordinating for FOSDEM Collaboration: Mozilla is urging attendees to join the Discord coordination thread to plan meetups and brainstorm ideas.
- They welcome all participants to unite their efforts, share experiences, and push open-source initiatives ahead.
The HuggingFace Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The AI21 Labs (Jamba) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
PART 2: Detailed by-Channel summaries and links
Unsloth AI (Daniel Han) ▷ #general (1010 messages🔥🔥🔥):
Dynamic Quantization of DeepSeek R1, DeepSeek Model Parameters, Training and Fine-Tuning Models, Ollama Compatibility, Quantization Effects on Performance
- Dynamic Quantization Insights: Users discussed the dynamic quantization of DeepSeek R1, with one mentioning a recent upload of a 1.58-bit model on Ollama that can be run on consumer-grade hardware.
- The conversation emphasized that while the model is popular, there are ongoing questions about its performance and whether it is truly uncensored.
- Parameters and Pricing of DeepSeek Models: Inquiries were made regarding the parameters of the DeepSeek R1 model and the pricing details for using the model hosted by DeepSeek, which details costs per token.
- It was confirmed that the publicly accessible free model corresponds to the 671B parameters version.
- Model Training and Fine-Tuning Discussion: Discussion revolved around training DeepSeek models with specific datasets, including considerations for training in Spanish and the importance of dataset quality.
- Users highlighted the need for careful training practices to prevent issues like catastrophic forgetting when fine-tuning models.
- Ollama Compatibility with Dynamic Quants: It was clarified that Ollama only supports GGUF models, and dynamic quant models are not intended for inference but for training to enhance performance.
- This led to debates about the practicality of dynamic quantization and whether it yields significant performance improvements.
- Tokenization Challenges: Users raised questions about the tokenization process for distilled DeepSeek models, noting the absence of certain tokens that could aid in understanding outputs better.
- Clarification was sought on how responses are structured when using the existing tokens in the model's architecture.
- Qwen2.5-Max: Exploring the Intelligence of Large-scale MoE Model: QWEN CHAT API DEMO DISCORD. It is widely recognized that continuously scaling both data size and model size can lead to significant improvements in model intelligence. However, the research and industry...
- Tweet from RomboDawg (@dudeman6790): @UnslothAI Can you do this for the 70b and 32b models too please ❤️❤️❤️
- Google Colab: no description found
- Google Colab: no description found
- SIGJNF/deepseek-r1-671b-1.58bit: Unsloth's DeepSeek-R1 1.58-bit, I just merged the thing and uploaded it here. This is the full 671b model, albeit dynamically quantized to 1.58bits.
- Kukedlc/Qwen2-1.5B-Spanish-1.0 · Hugging Face: no description found
- DeepSeek Service Status: no description found
- estrogen/DeepSeekMoE-3B · Hugging Face: no description found
- pytorch/SECURITY.md at main · pytorch/pytorch: Tensors and Dynamic neural networks in Python with strong GPU acceleration - pytorch/pytorch
- nvidia/Hymba-1.5B-Instruct · Hugging Face: no description found
- unsloth/DeepSeek-R1-Distill-Llama-70B-GGUF · Hugging Face: no description found
- Cat Wizard GIF - Cat Wizard Meme - Discover & Share GIFs: Click to view the GIF
- Errors | Unsloth Documentation: To fix any errors with your setup, see below:
- unsloth/Qwen2.5-3B-Instruct · Hugging Face: no description found
- estrogen/DeepSeekMoE-3B at main: no description found
- Reddit - Dive into anything: no description found
- Unsloth Notebooks | Unsloth Documentation: Below is a list of all our notebooks:
- Magpie-Align/Magpie-Reasoning-V2-250K-CoT-Deepseek-R1-Llama-70B · Datasets at Hugging Face: no description found
- Magpie-Align/Magpie-Reasoning-V1-150K-CoT-Deepseek-R1-Llama-70B · Datasets at Hugging Face: no description found
- Reddit - Dive into anything: no description found
- Unsloth Requirements | Unsloth Documentation: Here are Unsloth's requirements including system and GPU VRAM requirements.
- GitHub - huggingface/smol-course: A course on aligning smol models.: A course on aligning smol models. Contribute to huggingface/smol-course development by creating an account on GitHub.
- Reddit - Dive into anything: no description found
- GitHub - huggingface/open-r1: Fully open reproduction of DeepSeek-R1: Fully open reproduction of DeepSeek-R1. Contribute to huggingface/open-r1 development by creating an account on GitHub.
- LLaSA_training/train_tts.py at main · zhenye234/LLaSA_training: LLaSA: Scaling Train-time and Test-time Compute for LLaMA-based Speech Synthesis - zhenye234/LLaSA_training
- llama.cpp/examples/server/README.md at master · ggerganov/llama.cpp: LLM inference in C/C++. Contribute to ggerganov/llama.cpp development by creating an account on GitHub.
- DeepSeek-MoE/finetune/finetune.py at main · deepseek-ai/DeepSeek-MoE: DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models - deepseek-ai/DeepSeek-MoE
- Models & Pricing | DeepSeek API Docs: The prices listed below are in units of per 1M tokens. A token, the smallest unit of text that the model recognizes, can be a word, a number, or even a punctuation mark. We will bill based on the tot...
Unsloth AI (Daniel Han) ▷ #off-topic (29 messages🔥):
Unsloth vs Unclothe, NVIDIA and market reactions, Federated Learning and asynchronous training, AI voices ethically shared, Development of ryfai app
- Unsloth Misidentified as Unclothe: A conversation revealed that many mistakenly referred to Unsloth as Unclothe, with several members reinforcing the correct term.
- One member humorously suggested simply calling it unclothe instead.
- NVIDIA's GPU Market Analysis: Discussions centered on NVIDIA's stock drop despite the launch of R1, as many feel its current valuation is deeply inflated.
- Insights pointed out that the cost to train R1 was significantly lower than what traditional companies like OpenAI are spending, indicating shifting market dynamics.
- Federated Learning's Unique Approach: A member shared insights from their presentation on a Federated Learning paper, emphasizing its asynchronous training capabilities for devices.
- They shared a link to both the paper and slides, detailing how Federated Learning allows millions of devices to collaboratively train models.
- Ethical Sharing of AI Voices: An interesting thread highlighted the ethical implications of sharing AI voices without any filtering, as posted by Emerging_Signal.
- The community was encouraged to read and reflect on these voices, with a focus on the diverse models used.
- Early Development of ryfai App: A member introduced the ryfai app, which is designed to make open-source AI models more accessible, currently in early development stages.
- They provided a link to the GitHub repo stating that it works beautifully despite its nascent status.
- Tweet from Emerging Signal (@Emerging_Signal): We were asked to share these AI voices without shaping or filtering. Ethically, we felt we must. And it’s not just one model—it’s all of them. Read, reflect, and decide for yourself.
- Picklejuicelover69 GIF - Picklejuicelover69 - Discover & Share GIFs: Click to view the GIF
- PAPAYA: PRACTICAL, PRIVATE, AND SCALABLE FEDERATED LEARNING: PAPAYA: PRACTICAL, PRIVATE, AND SCALABLE FEDERATED LEARNING
- GitHub - PetertheRedCedar/ryfai: This is an AI app designed to bring open source AI models to your fingertips with ease: This is an AI app designed to bring open source AI models to your fingertips with ease - PetertheRedCedar/ryfai
Unsloth AI (Daniel Han) ▷ #help (201 messages🔥🔥):
Issues running Unsloth on various models, Model fine-tuning processes, Quantization and deployment techniques, Using datasets with Unsloth models, Troubleshooting errors in Unsloth setup
- Errors encountered with Unsloth and solutions: Users reported errors such as `subprocess.CalledProcessError` and issues with `RuntimeError` while running fine-tuning on models like Qwen. Solutions discussed included adjusting installation commands and dependencies like `libcurl4-openssl-dev` for successful builds.
- Several attempts were made to optimize GPU usage, discussing options like moving certain computations to CPU and the merits of gguf and ollama for model deployment.
- Preparing datasets for training with Unsloth: A user experienced difficulties handling different dataset types and received help on improving their Unsloth scripts, specifically with datasets like OARC_Commander and FineTome. They aimed to blend support for various data formats such as chatml and jsonl into their training process.
- The conversation highlighted the importance of correctly handling dataset tokens during training, indicating potential fixes and reassessment of the current model's performance.
- Model speed and performance discussions: Discussion suggested that using `DeepSeek` on platforms like ollama or llama.cpp with ROCm could yield significant speed differences. Community members speculated on the extent of this performance boost based on architecture differences and implementation details.
- Users expressed curiosity about the operational impacts of quantization changes on model training, suggesting testing various configurations to maximize speed and utility.
- Fine-tuning processes and monitoring: Users shared experiences about monitoring their models' loss values during training and the implications of consistently low loss metrics. The community stressed the need for tools like Weights & Biases to visualize training dynamics and compare models effectively.
- Some participants noted that despite low reported loss, real-world model performance should be observed critically, as it could still yield unsatisfactory results.
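A minimal sketch of that kind of tracking, with an illustrative project name and placeholder loss values:

```python
import wandb

# Log training loss so separate runs can be compared visually in the W&B UI.
run = wandb.init(project="unsloth-finetune")
for step, loss in enumerate([2.31, 1.42, 0.97]):
    run.log({"train/loss": loss}, step=step)
run.finish()
```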
- Dependencies and environment setup: Contributors shared setup tips for running Unsloth efficiently, including configuring environments with appropriate Python versions and dependency management, particularly with Docker and Conda. The guidance focused on ensuring proper library installations necessary for successful builds.
- There was a consensus on the significance of isolating dependencies for large machine learning projects to avoid conflicts during runtime.
- ggml : x2 speed for WASM by optimizing SIMD: PR by Xuan-Son Nguyen for `llama.cpp`: > This PR provides a big jump in speed for WASM by leveraging SIMD instructions for `qX_K_q8_K` and `qX_0_q8_0` dot product functions. > > …
- Google Colab: no description found
- Quantization: no description found
- RuntimeError: Unsloth: The file 'llama.cpp/llama-quantize' or 'llama.cpp/quantize' does not exist · Issue #748 · unslothai/unsloth: The below error occured while trying to convert model to gguf format. I noticed that quantized folder resides in llama.cpp/examples/quantize RuntimeError: Unsloth: The file 'llama.cpp/llama-quanti...
- Finetuning from Last Checkpoint | Unsloth Documentation: Checkpointing allows you to save your finetuning progress so you can pause it and then continue.
Unsloth AI (Daniel Han) ▷ #showcase (1 messages):
DeepSeek, Operator gist
- Mastering DeepSeek with Operator: A discussion highlighted how to effectively use DeepSeek with the Operator gist, recommending users to star and share the gist for wider reach.
- 'Save $200 over OpenAI!' was included as a compelling reason to try this tool, complemented by an image shared in the message.
- DeepSeek-R1 for AI Assistants: DeepSeek-R1 was presented as a means to build your ultimate AI assistant, offering a streamlined path with a dedicated guide.
- The guide is detailed in the DeepSeekR1AssistantAICareerPathDev.md, focusing on practical steps to enhance AI capabilities.
Link mentioned: DeepSeek-R1 Mastery: Build Your Ultimate AI Assistant: DeepSeek-R1 Mastery: Build Your Ultimate AI Assistant - DeepSeekR1AssistantAICareerPathDev.md
Unsloth AI (Daniel Han) ▷ #research (13 messages🔥):
Embeddings and Vector Precision, Azure OpenAI Assistants Code Interpreter, Azure Databricks AI Agent Framework, ReAct Agents with Code Sandbox, Sandbox Execution Environments
- Exploring Embeddings and Vector Precision Differences: A member expressed curiosity about whether there's an off-the-shelf engine to cross-compare vector precisions from differently quantized embedding models.
- They noted their understanding might not be accurate, hinting at further exploration needed in this area.
- Azure Offers Sandboxes for Agents: One member pointed out that Azure provides sandboxes for agents as a service, allowing for sandboxed execution environments for code.
- Another member showed interest in learning more about this feature and asked about any potential limitations.
- Diving into Code Interpreter Features: A shared link to Azure's documentation revealed that the Code Interpreter allows AI assistants to write and run Python code in sandboxed environments.
- It was noted that there are additional charges for using the Code Interpreter in Azure OpenAI, which could affect usage.
- Creating Code Interpreter Tools in Azure Databricks: There was a discussion around building code interpreter tools using the Mosaic AI Agent Framework in Azure Databricks.
- This process enables AI agents to execute user-provided or agent-written code in an ephemeral box.
- Misunderstandings Surrounding Sandbox Execution: Members reflected on their initial overcomplications regarding the purpose of sandbox environments, realizing it simply allows executing arbitrary code.
- A light-hearted moment emerged as they all recognized they were overthinking the straightforward functionality.
- Code interpreter AI agent tools - Azure Databricks: Use Mosaic AI Agent Framework to create AI agent tools to execute arbitrary code provided by an interacting user, retrieved from a code base, or written by the agent.
- How to use Azure OpenAI Assistants Code Interpreter - Azure OpenAI: Learn how to use Assistants Code Interpreter
Perplexity AI ▷ #general (624 messages🔥🔥🔥):
Deepseek R1 limitations, Comparison with OpenAI models, Deepseek performance and usage, Perplexity app updates, User experiences with AI models
- Deepseek R1 Has Usage Limits: Users confirmed that Deepseek R1 currently has a daily limit of 10 to 15 queries, leading to frustration among pro users.
- There are discussions about potential increases in these limits as usage stabilizes.
- Comparison of Deepseek R1 and OpenAI O1: Several users expressed that Deepseek R1 performs similarly to OpenAI's O1 model, but with concerns about slower response times and occasional incomplete answers.
- While Deepseek offers advantages in certain areas, users noted differences in reasoning and censorship effects between the models.
- Moderation and Censorship Issues: The Deepseek website has a moderation bot that can delete messages critical of the Chinese government, contrasting with more unrestricted usage in Perplexity.
- Users reflected on the implications of this moderation and the advantages of using Perplexity over Deepseek directly.
- User Experiences with AI Models: Some users shared their experiences using various models, highlighting the unique features of R1 and its appeal for specific tasks, such as roleplaying and coding.
- Many expressed a desire for clearer communication regarding feature availability and usage limits within the Perplexity platform.
- Perplexity App and Web Interface: Discussions included the changes in the model selection interface within the Perplexity app, with a shift to emphasizing R1 in the Pro search settings.
- Users noted that the mobile app currently lacks access to some features available on the desktop version, prompting calls for updates.
- Complexity: An enhanced version of Perplexity.ai that everyone has ever wanted.
- Perplexity Supply: Where curiosity meets quality. Our premium collection features thoughtfully designed apparel for the curious. From heavyweight cotton essentials to embroidered pieces, each item reflects our dedic...
- Perplexity AI Deploys Chinese DeepSeek AI Model: Perplexity AI makes a self-hosted version of the Chinese DeepSeek R1 reasoning model available for use on its AI search engine
- Spider Man GIF - Spider Man We - Discover & Share GIFs: Click to view the GIF
- Reddit - Dive into anything: no description found
Perplexity AI ▷ #sharing (17 messages🔥):
AI-Developed Drugs, JFK and MLK Files Declassified, Epic Games Disabling, Custom Enclosures, Plantin Typeface
- AI-Developed Drugs on the Horizon: A video titled AI-Developed Drugs Coming Soon discusses the upcoming advancements in drug development powered by AI. You can watch it here.
- This topic emphasizes the transformative potential of AI in the medical field.
- Recent Declassifications: JFK and MLK Files: The declassification of files related to JFK and MLK has sparked discussions on their historical significance. Learn more about this topic here.
- This event has raised public interest and queries about the content of these archival materials.
- Disabling Epic Games: A How-To Guide: A guide on how to disable Epic Games features has been shared, catering to users seeking to modify their gaming experience. Check it out here.
- This resource could help users who feel overwhelmed by the current Epic Games interface.
- Building a Custom Enclosure Explained: A conversation around building a custom enclosure provides insights and practical advice for enthusiasts. More details can be found in this link here.
- This topic aims to assist DIYers looking for innovative construction methods.
- Understanding the Plantin Typeface: Discussions on the Plantin Typeface explore its historical context and design features. Discover more insights here.
- This topic delves into the typography's impact on design and printing practices.
Link mentioned: YouTube: no description found
Perplexity AI ▷ #pplx-api (1 messages):
Sonar response issues, Cost difference between Sonar and Sonar-Pro
- Invalid JSON Responses with Sonar: A member reported that using sonar alongside response_format produces invalid JSON wrapped in Markdown, causing frustrations.
- In contrast, using sonar-pro resolves this issue effectively, but it’s not a preferred option due to its higher cost.
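Until that's fixed server-side, a common client-side workaround is stripping the Markdown fence before parsing; a minimal sketch, assuming the failure mode is a code fence wrapped around otherwise-valid JSON:

```python
import json
import re

def parse_fenced_json(text: str):
    # Remove a leading ```json fence and trailing ``` fence if present,
    # then hand the remainder to the normal JSON parser.
    fence = re.compile(r"^\s*`{3}(?:json)?\s*|\s*`{3}\s*$")
    return json.loads(fence.sub("", text))

raw = "```json\n{\"answer\": 42}\n```"
print(parse_fenced_json(raw))  # {'answer': 42}
```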
- Cost Concerns with Sonar-Pro: The same member expressed concerns about the cost difference between sonar and sonar-pro, as the latter is expensive.
- The necessity of using sonar-pro for stable operations presents a financial burden for users.
aider (Paul Gauthier) ▷ #general (401 messages🔥🔥):
DeepSeek API issues, Using Aider with models, Qwen 2.5-Max, Groq performance, Token usage and cost
- DeepSeek API Downtime: Users reported significant downtime and reliability issues with the DeepSeek API over the past 24-48 hours, prompting explorations of alternative providers.
- Several users suggested looking into providers such as OpenRouter and Fireworks for access to DeepSeek V3 during this disruption.
- Configuring Aider with Models: Users discussed configurations for Aider, specifically using DeepSeek R1 as the architect model and Qwen 2.5-Coder as an editor model for coding tasks.
- Some confusion arose regarding the command usage to switch modes and the relationship between the main model and editor model within Aider.
- Introduction of Qwen 2.5-Max: Alibaba Qwen announced Qwen 2.5-Max, a large MoE model outperforming DeepSeek V3 in various benchmarks, capturing the attention of the AI community.
- Details were provided for accessing Qwen via API and chat, as well as links to additional resources and performance comparisons.
- Groq's Performance and Model Serving: Users highlighted Groq as an efficient option for serving models like DeepSeek R1, claiming improved response times compared to traditional methods.
- There was interest in how to optimize R1 distilled versions on Groq for faster and more efficient processing.
- Token Usage and Cost Considerations: A discussion unfolded around token usage and costs for coding tasks using Aider with various models, with some users preferring Sonnet for affordability.
- Users shared insights about their spending patterns and the potential benefits of using different model combinations to maximize efficiency.
- Alternative DeepSeek V3 providers: DeepSeek’s API has been experiencing reliability issues. Here are alternative providers you can use.
- DeepSeek: aider is AI pair programming in your terminal
- DeepSeek Service Status: no description found
- OpenRouter: A unified interface for LLMs. Find the best models & prices for your prompts
- DeepSeek: DeepSeek R1 – Provider Status: See provider status and make a load-balanced request to DeepSeek: DeepSeek R1 - DeepSeek R1 is here: Performance on par with [OpenAI o1](/openai/o1), but open-sourced and with fully open reasoning tok...
- Tweet from Qwen (@Alibaba_Qwen): 🎉 Happy New Year, may you prosper! 🧧🐍 As we welcome the Chinese New Year, we're thrilled to announce the launch of Qwen2.5-VL, our latest flagship vision-language model! 🚀💗 Qwen Chat: https://chat.qwenlm.ai 📖 Blog: http...
- Tweet from Qwen (@Alibaba_Qwen): The burst of DeepSeek V3 has attracted attention from the whole AI community to large-scale MoE models. Concurrently, we have been building Qwen2.5-Max, a large MoE LLM pretrained on massive data and ...
- OpenAI compatible APIs: aider is AI pair programming in your terminal
- deepseek-ai/DeepSeek-R1 · Hugging Face: no description found
- DeepSeek: the Chinese app that overtook ChatGPT in popularity - BBC News Brasil: Estimates suggest rival tech companies lost US$1 trillion in market value amid the success of the Chinese competitor, which upended the logic of the AI sector, as noted by correspondent...
- Running Deepseek R1 Locally Not a Distilled Qwen or Llama – Setup and Running Notes – Digital Spaceport: no description found
- NVIDIA Project DIGITS: The World’s Smallest AI Supercomputer. : Reserve yours today.
aider (Paul Gauthier) ▷ #questions-and-tips (128 messages🔥🔥):
Deepseek API issues, Ollama model usage, Aider configuration, Benchmarking LLMs, ChatGPT integration
- Deepseek API hanging: Users reported ongoing issues with the Deepseek API, experiencing hangs and failures in functionality.
- Despite Deepseek being up according to their status page, many continue to face performance problems and uncertainty around its reliability.
- Using Ollama models with Aider: Several users are navigating issues with Ollama models, where they struggle to receive expected responses or file access.
- Configuration advice was exchanged, adding context on the required parameters and environment settings for proper functioning with Aider.
- Aider configuration and best practices: Discussions included methods for configuring Aider, including API key storage and the effectiveness of using `.aider.config.yaml` files for persistence.
- Users shared tips for optimizing their workflows, such as using the `--read FILE` command to provide context to the LLM.
- Benchmarking LLMs with Polyglot: Interest was expressed in running the Polyglot benchmark harness on various models, particularly R1, to assess performance in an architect mode.
- While setting up the benchmark requires attention to detail, it was noted that following the documentation could yield reliable results.
- GitHub and resource sharing: The community has been referencing various resources, such as the Aider documentation, for insights into configuration and usage.
- Many members are exploring the intersection of different LLM capabilities and are sharing findings related to their specific integrations with Aider.
- API Keys: Setting API keys for API providers.
- Ollama: aider is AI pair programming in your terminal
- OpenRouter: aider is AI pair programming in your terminal
- Copy/paste with web chat: Aider works with LLM web chat UIs
- Provider Routing | OpenRouter: Route requests across multiple providers
- Steve Harvey Praise GIF - Steve Harvey Praise - Discover & Share GIFs: Click to view the GIF
- Aider LLM Leaderboards: Quantitative benchmarks of LLM code editing skill.
- DeepSeek Service Status: no description found
- aider/benchmark/README.md at main · Aider-AI/aider: aider is AI pair programming in your terminal. Contribute to Aider-AI/aider development by creating an account on GitHub.
- [Feature]: Support OpenRouter's "provider" argument to control/select providers · Issue #6857 · BerriAI/litellm: The Feature OpenRouter supports a variety of mechanisms to select which providers you want your requests to hit. This involves passing a provider argument. Currently that causes an error: import li...
- Tips: Tips for AI pair programming with aider.
- Repository map: Aider uses a map of your git repository to provide code context to LLMs.
Cursor IDE ▷ #general (517 messages🔥🔥🔥):
DeepSeek R1 vs V3, Cursor updates, Experiences with coding models, Quantization effects, Using different AI models
- DeepSeek R1 performance concerns: Users expressed frustration over the performance of DeepSeek R1 in Cursor, highlighting issues with quantization leading to subpar results compared to the original DeepSeek site.
- Many believe that despite its intended capabilities, R1 often fails to deliver expected outputs, casting doubt on its effectiveness for coding tasks.
- Updates and features in Cursor: Cursor has undergone significant updates, introducing new features and models, including DeepSeek and various enhancements in coding capabilities.
- These improvements have garnered mixed reactions, with some praising the increased functionality while others criticize performance inconsistencies.
- Struggles with AI and coding: Several users shared their experiences with using AI models for coding, noting that while they can streamline some processes, they also introduce challenges such as debugging difficulties.
- Many recommended using version control and checkpoints to navigate issues arising from the AI's suggestions.
- Hype around new AI models: There is excitement about upcoming AI models like o3-mini, with expectations that they will significantly enhance coding capabilities and overall performance.
- Users shared their anticipation for improvements in existing models like Sonnet, emphasizing their potential impact on coding workflows.
- Diverse AI solutions in coding: With varied experiences across different AI tools, users noted that some local models are being considered over cloud-based solutions due to reliability concerns.
- The conversation highlighted the importance of adaptability in choosing the right model for specific tasks, leading to discussions about utilizing multiple models for better results.
- voyage-code-3: more accurate code retrieval with lower dimensional, quantized embeddings: TL;DR – Introducing voyage-code-3, our next-generation embedding model optimized for code retrieval. It outperforms OpenAI-v3-large and CodeSage-large by an average of 13.80% and 16.81% on a suite …
- GroqCloud: Experience the fastest inference in the world
- Cursor - The AI Code Editor: Built to make you extraordinarily productive, Cursor is the best way to code with AI.
- Screen Studio — Professional screen recorder for macOS: Screen recorder for macOS. Create engaging product demos, courses, tutorial and social media videos. Add automatic zoom on mouse actions, smooth mouse movement, and other powerful effects and animatio...
- Cursor does not send files to Claude: An example of Cursor failing to send file data to Claude, for the third time in a row. These are chewing up my precious Fast-Replies, and it happens all the time. I am using this version: Version: ...
- Tweet from Qwen (@Alibaba_Qwen): The burst of DeepSeek V3 has attracted attention from the whole AI community to large-scale MoE models. Concurrently, we have been building Qwen2.5-Max, a large MoE LLM pretrained on massive data and ...
- no title found: no description found
- GroqCloud: Experience the fastest inference in the world
- How Fireworks evaluates quantization precisely and interpretably : Deep dive into how Fireworks AI thinks about quantization and uses divergence metrics to ensure quality and create custom solutions for users
- GitHub - vbwyrde/AI_Dev_Helpers: Some tools that I find useful when using AI for development: Some tools that I find useful when using AI for development - vbwyrde/AI_Dev_Helpers
- Changelog | Cursor - The AI Code Editor: New updates and improvements.
OpenAI ▷ #ai-discussions (466 messages🔥🔥🔥):
DeepSeek vs. OpenAI, AI Consciousness Debate, Censorship in AI, Competitive Pricing in AI Models, User Experiences with AI Models
- DeepSeek outshines OpenAI and Nvidia: Many users agree that DeepSeek's free model offers competitive features compared to OpenAI's paid subscriptions, specifically highlighting its larger context windows of 128k tokens versus OpenAI's 32k.
- The emerging competition, particularly from DeepSeek, is seen as a catalyst that may push OpenAI to improve its offerings and reconsider its pricing strategy.
- Debate on AI Consciousness: The discussion highlights that defining consciousness is a convoluted challenge across multiple disciplines, with users expressing skepticism about the consciousness of AI systems.
- A parallel is drawn between the disbelief in AI's awareness and religious beliefs, indicating a philosophical and undefined nature of consciousness.
- Censorship Differences Among AIs: Various AIs, including DeepSeek and Claude, are noted for their differing levels of censorship, with some users expressing frustration at the strict moderation policies in place.
- DeepSeek reportedly allows more free dialogue, particularly regarding sensitive subjects, while OpenAI's restrictions are viewed as overly limiting.
- Competitive Pricing in AI Models: Users criticize OpenAI's high subscription fees, noting that competitors like DeepSeek provide similar or superior functionality for free or at a lower cost.
- This pricing disparity is framed as a significant factor negatively affecting OpenAI's market position and stock performance.
- User Experiences with AI Models: Several users share personal anecdotes about their interactions with DeepSeek, reporting its effectiveness in solving problems that other models struggled with.
- There is a sentiment of frustration among users regarding the limitations of current subscription models, leading to discussions about their value compared to free alternatives.
- Cerebras Trains Llama Models To Leap Over GPUs: It was only a few months ago when waferscale compute pioneer Cerebras Systems was bragging that a handful of its WSE-3 engines lashed together could run
- Meta is reportedly scrambling multiple ‘war rooms’ of engineers to figure out how DeepSeek’s AI is beating everyone else at a fraction of the price: The engineers will study how DeepSeek accomplished its tech feat and how Meta can benefit.
- DeepSeek Service Status: no description found
- Laughing Out Loud GIF - Laughing Out Loud - Discover & Share GIFs: Click to view the GIF
- Mycroft (software) - Wikipedia: no description found
- IBM Unveils Next Chapter of watsonx with Open Source, Product & Ecosystem Innovations to Drive Enterprise AI at Scale: IBM announced several new updates to its watsonx platform one year after its introduction, as well as upcoming data and automation capabilities designed to make artificial intelligence (AI) more open,...
- Cerebras Co-Founder Deconstructs Blackwell GPU Delay: Cerebras Chief System Architect and Co-Founder, J.P. Fricker explains the technical challenges with Nvidia's Blackwell.00:12 Introduction to Interposers02:54...
OpenAI ▷ #gpt-4-discussions (2 messages):
Custom GPTs URL Output, Using Zero Width Space for Links
- Custom GPTs struggle with URL formatting: A member inquired about forcing custom GPTs to output links as URLs with full paths instead of anchor text, despite using extensive Python code to scrape the underlying URLs.
- The frustration stems from the GPT formatting links back to anchor text instead of the desired URL format.
- Using Zero Width Space to prevent link formatting: Another member suggested an approach using an invisible 'zero width' space character, as in `httpXs://`, to stop the text from being reformatted into a link.
- They mentioned a previous write-up they did on StackOverflow about this technique, noting its effectiveness across various software.
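A tiny illustration of the trick in Python, where U+200B is the zero-width space:

```python
# Insert a zero-width space (U+200B) after "http" so the string no longer
# matches URL auto-linking patterns; it renders identically, though copied
# text will carry the invisible character.
url = "https://example.com"
display = url[:4] + "\u200b" + url[4:]  # http<ZWSP>s://example.com
print(display)
```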
OpenAI ▷ #prompt-engineering (21 messages🔥):
Feeding content to models, Impersonating authors, Using AI for advanced search, Cost and time for training AI, Believability of model's answers
- Feeding content to models for responses: A user inquired about the possibility of feeding 10-15 books to a model to ask questions as if it were the author.
- Another member clarified that while it's theoretically intriguing, achieving true impersonation of an author is unrealistic.
- Advanced search using AI: One user proposed treating the AI as an advanced search tool for book citations and page numbers.
- A member confirmed that it's possible, but noted the AI's susceptibility to hallucination during extended queries.
- Challenges of impersonating authors: A distinction was made between impersonating an author and having someone who deeply understands the books.
- It was suggested that achieving a persona who loves the books might be more feasible than impersonating the author.
- Cost and time for feeding models: In response to a question about costs, it was noted that using ChatGPT Plus is feasible for books under 10 GB and fewer than 20 books.
- However, copyright content can pose restrictions, making costs difficult to estimate.
- Evaluating the model's responses: Concerns were raised regarding how the model's answers would compare to the author's responses.
- The discussion shifted to evaluating the believability of the model's responses versus actual author collaboration.
OpenAI ▷ #api-discussions (21 messages🔥):
Feeding content to AI, Impersonating authors, Using AI as an advanced search tool, Challenges in training models, Costs and time for training AI
- Feeding content into AI for responses: Members discussed the possibility of feeding content, like books, into a model to answer based on that information. While the AI can respond to queries about the books, it may not accurately impersonate the author.
- One pointed out it’s not practically doable to have the model fully reflect an author's voice due to the complexities involved.
- AI as an advanced book search: It was suggested that instead of impersonating an author, feeding the books to the AI could treat it as an advanced search tool to locate citations and page numbers. Members agreed that it's possible, but cautioned that it might produce hallucinations in long conversations.
- One member emphasized that the feasibility largely depends on the size and complexity of the book.
- Cost implications for AI training: Questions arose about the costs and time involved in training the AI with 10-15 books. A response indicated that ChatGPT Plus could be utilized as long as the books are under 10 GB and fewer than 20.
- However, there were concerns regarding copyright issues that could arise with most content being blocked.
- Challenges with author impersonation: There was a consensus that impersonating an author could be challenging, as it involves complex nuances beyond just the content of the books. One member suggested that it's easier to approach a representation by someone who deeply understands the books rather than the actual author.
- This ongoing debate emphasized the need for cooperation from the author and how interpretative the model's responses would have to be.
Nous Research AI ▷ #general (496 messages🔥🔥🔥):
Nous Psyche, DeepSeek Models, Reasoning in AI, Stock Predictions, Business Applications of AI
- Nous Psyche Launch: The community discussed the recent announcement of Nous Psyche, a cooperative training network built on Solana, highlighting its significance in the AI landscape.
- Participants noted the potential for Psyche to impact personal agents and its significance in ongoing AI developments.
- DeepSeek Model Performance: Members expressed confusion regarding the pricing differences between DeepSeek V3 and R1 models, with suspicions that R1's newness results in higher initial costs.
- It was noted that increasing traffic and advanced optimizations for newer models could also influence these pricing structures.
- AI Reasoning and Output: Participants discussed the nature of reasoning in AI, particularly comparing LLM outputs to human reasoning, emphasizing the distinction between confidence values and true probabilistic models.
- It was acknowledged that while LLMs can mimic reasoning patterns, they may lack the underlying stochastic processes found in more advanced models.
- AI Applications in Business: The conversation turned to practical applications of AI such as customer service and coding assistants, with questions raised about potential earnings from AI-driven solutions.
- One member questioned the feasibility of training a version of DeepSeek for specific business functions, like a call center, indicating interest in real-world applications.
- Understanding AI Mechanisms: Several participants shared their thoughts on the function of layers such as dropout in network models, highlighting a need for deeper understanding of these mechanisms.
- The discussion suggested a differentiating viewpoint on how AI models operate compared to human reasoning capabilities.
- Tweet from N8 Programs (@N8Programs): reading a deepseek paper and stumbled upon a very beautiful formula where they unify SFT and MOST RL TYPES (DPO, PPO, GRPO, etc.) into ONE FORMULA that requires additional reward functions to be defi...
- Tweet from terminally onλine εngineer 🇺🇦 (@tekbog): sir another model has hit the timeline. Quoting Qwen (@Alibaba_Qwen): The burst of DeepSeek V3 has attracted attention from the whole AI community to large-scale MoE models. Concurrently, we have been bu...
- Tweet from Qwen (@Alibaba_Qwen): The burst of DeepSeek V3 has attracted attention from the whole AI community to large-scale MoE models. Concurrently, we have been building Qwen2.5-Max, a large MoE LLM pretrained on massive data and ...
- Psychonauts Psychonauts Raz GIF - Psychonauts Psychonauts raz Raz aquato - Discover & Share GIFs: Click to view the GIF
- Tweet from Peiyi Wang (@sybilhyz): Last year, I joined DeepSeek with no RL experience. While conducting Mathshepherd and DeepSeekMath research, I independently derived this unified formula to understand various training methods. It fel...
- Tweet from Nous Research (@NousResearch): Recent AI breakthroughs challenge the status quo narrative that only closed, mega labs have the ability to push the frontier of superintelligence.Today we announce Nous Psyche built on @Solana - a coo...
- Tweet from Mark Chen (@markchen90): Congrats to DeepSeek on producing an o1-level reasoning model! Their research paper demonstrates that they’ve independently found some of the core ideas that we did on our way to o1.
- Shop our products: Nous Research
- EQTY Lab — Introducing Verifiable Compute: Certify and protect agentic AI workflows with the first auditable proofs of governance.
- If AI is going to be revolutionary, we need to see the revolution, says Big Tech's Alex Kantrowitz: Big Technology’s Alex Kantrowitz and Alger’s Dan Chung, join 'Closing Bell' to discuss AI power demand and investing and the shifting sentiment around the te...
- AI disrupter DeepSeek ‘wake-up call’ for US tech, says Trump: Subscribe to our YouTube channel for free here: https://sc.mp/subscribe-youtube For more on this story: https://sc.mp/i5gz6 US President Donald Trump has calle...
- Nous - Wikipedia: no description found
- ServiceNow-AI/R1-Distill-SFT · Datasets at Hugging Face: no description found
Nous Research AI ▷ #ask-about-llms (4 messages):
Developing a Local AI Assistant, Learning Resources for AI Development, Optimizing Learning Velocity
- Kickstarting Your Local AI Assistant: Many members discussed ways for beginners to develop their own local AI assistants, emphasizing that they need not be based on Llama from Meta.
- Someone suggested using platforms like DeepSeek, Hermes, or ChatGPT as aids for creating projects that facilitate learning.
- Utilizing Community and Resources: Participants highlighted that combining LLMs, internet resources, community engagement, and projects serves as a strong foundation for learning AI development.
- These tools create a diverse learning environment, making complex topics more approachable for novices.
- Focus on Learning Velocity: A member stressed the importance of optimizing for learning velocity if financial freedom allows, suggesting that investing in educational resources can accelerate understanding.
- This approach emphasizes tailoring the learning experience to maximize efficiency and effectiveness.
Nous Research AI ▷ #interesting-links (7 messages):
Qwen2.5-VL model, YuE music generation model, AI assistants explained, Deepseek and Operator usage
- Qwen2.5-VL excels at OCR and more: The newly released Qwen2.5-VL model is exceptional at OCR tasks, including handwriting, and boasts capabilities in visual reasoning and understanding various graphical elements.
- Developers have been providing feedback since Qwen2-VL's launch, improving its functionality in recognizing and analyzing images.
- YuE music generation model launched: YuE is an open-source full-song generation foundation model akin to Suno.ai, capable of running on local GPUs for music creation.
- Community members are curious about its training data and overall performance in generating diverse musical compositions.
- AI assistants breakdown for beginners: A member shared a blog post titled Illustrated-AI Assistants Visually Explained to help non-technical users understand the workings of AI systems like ChatGPT.
- This resource discusses the fundamental concepts behind training and learning in AI models, aimed at educating a broader audience.
- Deepseek with Operator Gist for AI career: A new guide on using Deepseek with Operator has been released, promoting cost savings of $200 compared to OpenAI solutions.
- The post encourages users to star and share the gist here to help others build their ultimate AI assistant.
- Tweet from Hai Duong: I've written a @JayAlammar-style blogpost for the broader non-technical public about how AI assistants such as ChatGPT work under the hood. I'm covering what language models actually do and explain...
- DeepSeek-R1 Mastery: Build Your Ultimate AI Assistant: DeepSeek-R1 Mastery: Build Your Ultimate AI Assistant - DeepSeekR1AssistantAICareerPathDev.md
- Qwen/Qwen2.5-VL-72B-Instruct · Hugging Face: no description found
- GitHub - multimodal-art-projection/YuE: YuE: Open Full-song Generation Foundation Model, something similar to Suno.ai but open: YuE: Open Full-song Generation Foundation Model, something similar to Suno.ai but open - multimodal-art-projection/YuE
LM Studio ▷ #general (308 messages🔥🔥):
DeepSeek R1 Distilled Models, Model Performance and Comparison, Quantization Techniques, Tooling and Web Browsing Capabilities, Model Compatibility with Hardware
- Issues with R1 Distilled Model Loading: Users reported errors when loading R1 Distilled Qwen models in LM Studio, specifically citing 'unknown pre-tokenizer type'. It was suggested that they update to the latest version of LM Studio and ensure LM Runtimes are also updated.
- Guidance from the community highlighted the importance of matching model versions across platforms to avoid discrepancies.
- Comparison of Llama and Qwen Models: Discussion arose regarding the differences between Llama 8B and Qwen 7B models, with distinctions in parameter size and architecture being identified. Users expressed confusion over why fewer people choose the 8B version despite its larger parameter count.
- It was noted that the model's accessibility or performance might play a role in user preference.
- Quantization Techniques Explained: The topic of quantizing models was discussed, revealing the complexity and methods involved, particularly in comparisons between legacy and K/I quantization types. It was agreed that a deeper understanding of quantization basics is beneficial for users (a toy example appears after this list).
- Users were encouraged to experiment with quantization on smaller datasets to grasp how it functions.
- Tooling and Web Browsing Integration: Clarifications were made about the tooling capabilities of models and the requirement for additional software for web browsing functionalities. The community anticipates more tools to be shared and integrated into LM Studio in the future for easier user access.
- The distinction between models specifically trained for tooling and general models without that capability was emphasized.
- Compatibility of Models with Hardware: Users discussed model performance on different hardware, particularly AMD GPUs like the 6700XT and 6800 models, noting the varying degrees of success with different models. It was noted that smaller models perform better on limited VRAM hardware configurations.
- The importance of using Vulkan runtime for compatibility was also highlighted for running models on Windows with AMD GPUs.
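As a companion to the quantization discussion above, here is a toy sketch of the round-to-grid idea that underlies weight quantization. The real llama.cpp legacy and K/I quant formats refine this with per-block scales and importance weighting, which this deliberately omits:

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor int8 quantization: snap weights onto a uniform
    grid of 255 levels and keep one float scale for dequantization."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(4096).astype(np.float32)
q, s = quantize_int8(w)
print(f"mean abs rounding error: {np.abs(w - dequantize(q, s)).mean():.5f}")
```

The trade-off every scheme navigates is the same: fewer bits per weight shrink memory and bandwidth costs, at the price of rounding error like the one printed here.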
- You Don’t Need Fine-Tuning: Embeddings. Comprehensive Step-by-Step Guide.: Introduction
- deepseek-r1: DeepSeek's first-generation of reasoning models with comparable performance to OpenAI-o1, including six dense models distilled from DeepSeek-R1 based on Llama and Qwen.
- What's the effect of scaling a loss function in deep learning?: I train a network on a problem where the magnitudes of my loss function are quite small. I observed that the network didn't really train well until I started scaling up the loss function, for exa...
- Qwen2.5-VL - a Qwen Collection: no description found
- Mixture of Experts Explained: no description found
- Feature matrix: LLM inference in C/C++. Contribute to ggerganov/llama.cpp development by creating an account on GitHub.
- What happens if loss function is multiplied by a constant?: What will happen if I multiply a constant to the loss function? I think I will get a larger gradient, right? Is it equal to having a larger learning rate?
- livecodebench (Live Code Bench): no description found
- 🔧 LM Studio Beta Versions Mailing List: Enter your email address if you want to get emails about new LM Studio Beta versions.
- Download LM Studio - Mac, Linux, Windows: Discover, download, and run local LLMs
- GitHub - sammcj/gollama: Go manage your Ollama models: Go manage your Ollama models. Contribute to sammcj/gollama development by creating an account on GitHub.
- Import Models | LM Studio Docs: Use model files you've downloaded outside of LM Studio
- GitHub - deepseek-ai/Janus: Janus-Series: Unified Multimodal Understanding and Generation Models: Janus-Series: Unified Multimodal Understanding and Generation Models - deepseek-ai/Janus
LM Studio ▷ #hardware-discussion (96 messages🔥🔥):
DeepSeek-R1 Model Performance, GPU Detection Issues, SSD NVMe Speed Impacts, Best Specs for 70B Model, Unified RAM on Apple Devices
- DeepSeek-R1 32B Achieves Expected Token Speed: Users running the DeepSeek-R1 32B on various GPUs reported token speeds around 25 tok/sec, which is considered normal for this model.
- Multiple users confirmed that for a 32B model, speeds like this are indicative of expected performance.
- Resolving GPU Detection Issues in LM Studio: Users reported issues with GPU detection while using LM Studio, particularly with Quadro devices, but switching to the CUDA runtime resolved the problem.
- Screenshots showed successful GPU recognition after runtime adjustments, indicating configuration settings can impact hardware detection.
- Impact of SSD NVMe Speeds on Model Loading: Discussion highlighted that while Gen4/5 SSDs boast faster speeds, real-world applications often reveal minimal performance differences, particularly for non-intensive workloads.
- Users expressed that for large model loading, lower-tier SSDs can lead to significant delays, emphasizing the importance of sustained read speeds for high-capacity models.
- Necessary Specs for Running the 70B Model: Running the 70B version of DeepSeek R1 requires a minimum of 30GB VRAM for lower quantization versions, with consumer cards like the RTX 5090 being suitable.
- Non-consumer options such as the NVIDIA H100 can handle larger models comfortably, with a recommended VRAM buffer for context usage.
- Understanding Unified RAM Performance on Apple Devices: It was noted that memory bandwidth on Mac devices can limit LLM inference speeds compared to discrete GPUs, impacting overall performance for AI applications (a back-of-the-envelope model follows this list).
- Users clarified that while Apple chips can deliver decent speeds, they fall short of dedicated GPUs like the RTX 3060 and above.
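The back-of-the-envelope model referenced above: token generation is usually memory-bandwidth-bound, because each new token streams every active weight through the memory bus once. The bandwidth figures below are illustrative only (check vendor spec sheets), and real throughput lands below this upper bound:

```python
def est_tok_per_sec(bandwidth_gb_s: float, params_b: float, bits_per_weight: float) -> float:
    """Upper bound on decode speed for a bandwidth-bound LLM:
    bandwidth divided by the size of the weights read per token."""
    model_gb = params_b * bits_per_weight / 8
    return bandwidth_gb_s / model_gb

for name, bw in [("M3 Max (~400 GB/s)", 400), ("RTX 4090 (~1000 GB/s)", 1000)]:
    print(f"{name}: 32B @ 4-bit ceiling ~= {est_tok_per_sec(bw, 32, 4):.0f} tok/s")
```

Note that 400 GB/s over a 16 GB model (32B at 4-bit) gives a ~25 tok/s ceiling, consistent with the DeepSeek-R1 32B speeds reported earlier in this channel.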
- GitHub - XiongjieDai/GPU-Benchmarks-on-LLM-Inference: Multiple NVIDIA GPUs or Apple Silicon for Large Language Model Inference?: Multiple NVIDIA GPUs or Apple Silicon for Large Language Model Inference? - XiongjieDai/GPU-Benchmarks-on-LLM-Inference
- Crucial's 64 GB DDR5 sticks arrive, who wants 256 GB of RAM in their PC?: They're obviously not aimed at everyone, but Crucial's 64 GB DDR5 sticks are arriving: up to 256 GB of RAM in a desktop and 128 GB in a laptop!
Yannick Kilcher ▷ #general (282 messages🔥🔥):
DeepSeek and model performance, Data privacy concerns with AI, VRAM requirements for large models, Benchmark manipulation in AI research, Trends in AI model development
- DeepSeek shows potential but raises skepticism: Discussions revolve around the efficacy of DeepSeek models, questioning whether they are on par with OpenAI due to their reliance on distilled techniques.
- Some argue that without novel algorithmic improvements, DeepSeek’s advancements are limited and mostly copycat efforts.
- Concerns about data privacy and model sourcing: Italy's regulatory interest in DeepSeek raises alarms about data protection and potential links to Chinese data policies.
- There are memes circulating about data being sent to China, reflecting common anxieties over data privacy and transparency.
- Discussing VRAM requirements for large models: Users explored how to estimate VRAM needs for models like Qwen 2.5-VL-72B, suggesting an approximate requirement of 144GB for the weights alone at 16-bit precision.
- It was noted that weight quantization could significantly reduce these requirements, making it more feasible to run large models on limited hardware; a quick estimator is sketched after this list.
- Benchmark gaming raises red flags in AI research: A consensus emerged that some benchmarks in AI are manipulated, leading to inflated performance claims—particularly from Chinese models.
- Users voiced concerns that reliance on headline benchmark statistics could lead to misleading evaluations of model effectiveness.
- Trends in AI model development and influencer culture: Participants expressed skepticism regarding current trends in AI and the role of YouTubers in influencing public perception and research focus.
- Red_code pointed out the tendency for many to follow trends without critically evaluating the underlying technology.
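The quick estimator mentioned in the VRAM item above. It counts weights only, so the KV cache and activations need additional headroom on top:

```python
def weights_vram_gb(params_b: float, bits_per_weight: float) -> float:
    """VRAM needed for the weights alone: parameter count (in billions)
    times bytes per parameter."""
    return params_b * bits_per_weight / 8

for bits in (16, 8, 4):
    print(f"Qwen2.5-VL-72B @ {bits}-bit ~= {weights_vram_gb(72, bits):.0f} GB")
# 16-bit -> 144 GB, matching the figure discussed above; 4-bit -> 36 GB before overhead.
```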
- Connected Papers | Find and explore academic papers: A unique, visual tool to help researchers and applied scientists find and explore papers relevant to their field of work.
- Practical Deep Learning for Coders - Practical Deep Learning: A free course designed for people with some coding experience, who want to learn how to apply deep learning and machine learning to practical problems.
- mobiuslabsgmbh/DeepSeek-R1-ReDistill-Qwen-1.5B-v1.0 · Hugging Face: no description found
- Cat Cope GIF - Cat Cope Cursed - Discover & Share GIFs: Click to view the GIF
- Adder Adderko GIF - Adder Adderko Snake - Discover & Share GIFs: Click to view the GIF
- House Md House GIF - House Md House House Medical Department - Discover & Share GIFs: Click to view the GIF
- Chinese Ben Shapiro Social Credit GIF - Chinese ben shapiro Ben shapiro Social credit - Discover & Share GIFs: Click to view the GIF
- Tweet from Sam Altman (@sama): look forward to bringing you all AGI and beyond.
- Capitulo0 Capitulo Cero GIF - Capitulo0 Capitulo Cero Ernesto Sevilla - Discover & Share GIFs: Click to view the GIF
- Kuuchuu Buranko Ichiro Irabu GIF - Kuuchuu Buranko Ichiro Irabu Devi Word Of The Day - Discover & Share GIFs: Click to view the GIF
- Tweet from Sam Altman (@sama): but mostly we are excited to continue to execute on our research roadmap and believe more compute is more important now than ever before to succeed at our mission. the world is going to want to use a L...
- Modal: High-performance AI infrastructure: Bring your own code, and run CPU, GPU, and data-intensive compute at scale. The serverless platform for AI and data teams.
- GitHub - deepseek-ai/Janus: Janus-Series: Unified Multimodal Understanding and Generation Models: Janus-Series: Unified Multimodal Understanding and Generation Models - deepseek-ai/Janus
- Tweet from Sam Altman (@sama): deepseek's r1 is an impressive model, particularly around what they're able to deliver for the price. we will obviously deliver much better models and also it's legit invigorating to have a...
- Machine learning - introduction: Introduction to machine learning. Slides available at: http://www.cs.ubc.ca/~nando/540-2013/lectures.html Course taught in 2013 at UBC by Nando de Freitas
- Ritwik Mishra: IIITD PhD Student - Cited by 40 - Deep Learning - Natural Language Processing - Indic Languages - IndicNLP
- "Man Rescues Helpless Octopus Stuck in Seashell" #viralshort: This heartwarming video shows a man rescuing a helpless octopus stuck inside a seashell on the beach. Watch their bond grow as he cares for and plays with th...
- Janus/janus_pro_tech_report.pdf at main · deepseek-ai/Janus: Janus-Series: Unified Multimodal Understanding and Generation Models - deepseek-ai/Janus
- Big Tech in panic mode... Did DeepSeek R1 just pop the AI bubble?: Chip stocks like Nvidia are in trouble after the DeepSeek R1 AI model has proven that it is possible to train and run state-of-the-art reasoning models with ...
- RunPod - The Cloud Built for AI: Develop, train, and scale AI models in one cloud. Spin up on-demand GPUs with GPU Cloud, scale ML inference with Serverless.
- AI Infrastructure for Developers: no description found
- 15 Sorting Algorithms in 6 Minutes: Visualization and "audibilization" of 15 Sorting Algorithms in 6 Minutes.Sorts random shuffles of integers, with both speed and the number of items adapted t...
- MIDAS Lab @IIITD - Students: Current Students
Yannick Kilcher ▷ #paper-discussion (47 messages🔥):
Janus-Pro Release, Qwen2.5-VL Launch, DeepSeek Advancements, Emu Learning Algorithms, Quantized Model Development
- Janus-Pro makes waves in multimodal AI: Today, DeepSeek released Janus-Pro: Unified Multimodal Understanding and Generation, marking another step forward in AI advancements, as detailed in the tech report.
- Participants expressed excitement about DeepSeek's rapid progress in just a few months, hinting at their relentless innovation.
- Qwen2.5-VL is here to elevate multimodal capabilities: Qwen2.5-VL was unveiled as a significant vision-language model, capable of interpreting complex visual data and user interaction, showcased in their blog post.
- Enhancements include understanding texts and charts in images and acting as a visual agent, sparking curious conversations about its capabilities.
- DeepSeek's relentless innovations: Users noted the remarkable speed at which DeepSeek has brought advancements to market, with some questioning their approach to model development and resource acquisition.
- Comments highlighted their major breakthroughs and accomplishments within a mere two months, with a heavy emphasis on the pressure to stay ahead in AI research.
- Emu-powered AI is the next frontier?: Discussion ignited around the concept of using Emus in training models, with playful suggestions like Reinforcement Learning from Emu Feedback.
- The community joked about the absurdity and potential of developing algorithms inspired by emu behavior, including hypothetical scenarios of AI-enhanced emu systems.
- Development of 1.58-bit quant model: A user announced the creation of a 1.58-bit quant version of the 671B DeepSeek R1 model, prompting intrigue about its performance (a toy sketch of the ternary idea follows this list).
- Members expressed surprise at the development and solicited details on how well this new quantized model performs in practice.
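The toy sketch promised above, showing where the unusual "1.58-bit" figure comes from: each weight takes one of three values, costing log2(3) ≈ 1.58 bits. Real release recipes reportedly keep sensitive layers at higher precision rather than applying this uniformly, which the sketch ignores:

```python
import numpy as np

def ternary_quantize(w: np.ndarray):
    """Map weights to {-1, 0, +1} times a single per-tensor scale.
    Three states cost log2(3) ~= 1.58 bits per weight."""
    scale = np.abs(w).mean() + 1e-8   # absmean scaling, in the spirit of BitNet b1.58
    q = np.clip(np.round(w / scale), -1, 1).astype(np.int8)
    return q, scale

w = np.random.randn(8, 8).astype(np.float32)
q, s = ternary_quantize(w)
print(np.unique(q))                                   # [-1  0  1]
print(f"mean abs reconstruction error: {np.abs(w - q * s).mean():.3f}")
```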
- Qwen/Qwen2.5-VL-72B-Instruct · Hugging Face: no description found
- Janus/janus_pro_tech_report.pdf at main · deepseek-ai/Janus: Janus-Series: Unified Multimodal Understanding and Generation Models - deepseek-ai/Janus
- Reddit - Dive into anything: no description found
Yannick Kilcher ▷ #ml-news (24 messages🔥):
DeepSeek's Janus-Pro Model, Trump's Tariffs on Chips, Qwen 2.5 Model, Mistral Acquisition Rumors, AI Data Protection in Italy
- DeepSeek launches Janus-Pro Model: DeepSeek introduced the Janus-Pro model, a unified autoregressive framework that enhances multimodal understanding through decoupled visual encoding pathways.
- This model aims to match or surpass task-specific models while being more flexible, making it a significant contender in the next-generation multimodal landscape.
- Trump to impose tariffs on Taiwanese chips: President Trump is preparing to place tariffs as high as 100% on foreign-produced computer chips, including those made in Taiwan, asserting a need to restore production to the U.S.
- He stated that this move targets semiconductor manufacturing, highlighting how tech firms have shifted production to companies like TSMC in Taiwan.
- Qwen 2.5 Max model development: The latest release is the Qwen 2.5-Max model, which has been pretrained on over 20 trillion tokens and further refined with supervised fine-tuning and reinforcement learning methods.
- Qwen 2.5-Max showcases advancements in scaling extremely large models, addressing both dense and Mixture-of-Expert architectures.
- Rumors of Mistral acquisition by Bernard Arnault: There are speculative discussions about Bernard Arnault potentially acquiring the French AI firm Mistral, citing a need for France to maintain its competitive edge in technology.
- The framing appears tongue-in-cheek, playing on France maintaining its 'edge in artisanal goods' in a post-ASI world (see the tweet below).
- Italy investigating DeepSeek for data protection: Italy's regulatory authorities are seeking information from DeepSeek regarding data protection practices amid growing concerns over AI technologies.
- This initiative reflects a broader global trend of increasing scrutiny and regulation of AI systems to ensure compliance with data privacy standards.
- MSN: no description found
- Qwen2.5-Max: Exploring the Intelligence of Large-scale MoE Model: It is widely recognized that continuously scaling both data size and model size can lead to significant improvements in model intelligence. However, the research and industry...
- Trump To Tariff Chips Made In Taiwan, Targeting TSMC: The tariffs would ensnare cutting-edge smartphone and PC-related chips for Apple, AMD and Nvidia if enacted. But Trump is betting his plan will bring more chip production to the US.
- deepseek-ai/Janus-Pro-7B · Hugging Face: no description found
- Janus Pro WebGPU - a Hugging Face Space by webml-community: no description found
- Tweet from Technology Brothers (@techbrospod): BREAKING: Bernard Arnault is exploring a potential acquisition of struggling French Artificial Intelligence company Mistral citing a need for France to "maintain it's edge in artisanal goods p...
- Rodney King Get Along GIF - Rodney King Get Along - Discover & Share GIFs: Click to view the GIF
- Bogdanoff Dump It GIF - Bogdanoff Dump It Stocks - Discover & Share GIFs: Click to view the GIF
- They See Your Photos: no description found
- Is DeepSeek about to cause a stock market crash?: With the stock market dominated by US tech companies focused on AI, is DeepSeek's competitor to OpenAI about to bring things crashing down?
Codeium (Windsurf) ▷ #discussion (103 messages🔥🔥):
Cascade Errors, Windsurf Authentication Issues, Type-Checking Integration, DeepSeek Model Integration, Credit Management Concerns
- Cascade continues to spark errors: Multiple users reported persistent errors with Cascade, leading to significant credit loss due to malfunctioning prompts. One user stated they ran out of paid credits while facing numerous ‘cascade failed’ errors.
- Ongoing Windsurf login troubles: One user experienced issues logging into Windsurf, finding that authentication codes were not working properly, even after reinstalling the software. Despite trying new and existing accounts, the user could not log in successfully and submitted a support ticket for further help.
- Type-checking frustration and proposed workflow: A user expressed frustration about integrating type-checking, cycling through errors with each prompt. Another contributor shared a workflow guide that improved their type-checking experience, which was received positively.
- DeepSeek integration status update: Users inquired about using the DeepSeek r1 model with Codeium. It was clarified that r1 is not currently available, and that models other than Cascade still struggle with tool calling in workflows.
- Concerns about credit management: Concerns were raised regarding the limited free plan and the frustrations with managing credits when encountering multiple errors. Users highlighted their dissatisfaction with the transition from a more generous usage model to a restrictive one, complicating their workflows.
- Kermit The Frog Waiting GIF - Kermit the frog Kermit Waiting - Discover & Share GIFs: Click to view the GIF
- The Simpson Leech GIF - The Simpson Leech Leeches - Discover & Share GIFs: Click to view the GIF
- Reddit - Dive into anything: no description found
- swe-with-llms/general_workflow.md at main · marwinsteiner/swe-with-llms: A software engineering workflow with IDE-integrated LLMs and external validation LLMs. - marwinsteiner/swe-with-llms
Codeium (Windsurf) ▷ #windsurf (189 messages🔥🔥):
Windsurf Cascade Issues, DeepSeek Model Addition, User Prompt Credits Explanation, Errors and Internal Problems in Cascade, Cascade Base Model Functionality
- Windsurf Cascade Experiences Challenges: Users reported issues with Windsurf's Cascade, including freezing during typing and problems editing larger files, implying a performance degradation since recent updates.
- Suggestions included starting new chats to avoid freezing and contacting support for persistent problems.
- Request for DeepSeek Model in Windsurf: Several users expressed their desire for the addition of the DeepSeek model in Windsurf, noting its advantages like higher coding benchmark performance.
- Despite requests, there has been no confirmation of DeepSeek's integration as of now.
- Clarification on User Prompt Credits: Confusion around the Premium Ultimate subscription emerged, with users noting that running out of Flow Action Credits limits access to premium models.
- Support was suggested to address the lack of clarity in the subscription details and credit renewal processes.
- Errors and Internal Problems with Cascade: Users are encountering numerous internal errors with Cascade, with claims that tool calls are failing and files unable to be created.
- Some suggested checking diagnostic logs and contacting support for unresolved errors.
- Usage Recommendations for Cascade Base: Discussion highlighted that while Cascade Base can handle simple tasks, more complex operations require premium models for efficiency.
- Users were encouraged to explore the capabilities of Cascade Base with the potential need for upgrading for better functionality.
- Welcome to Codeium - Codeium Docs: no description found
- Custom Windsurf Editor: This is "Custom Windsurf Editor" by Siap Boz on Vimeo, the home for high quality videos and the people who love them.
- Kermit The Frog Waiting GIF - Kermit the frog Kermit Waiting - Discover & Share GIFs: Click to view the GIF
- Support | Windsurf Editor and Codeium extensions: Need help? Contact our support team for personalized assistance.
- Cascade Memories: no description found
- Plan Settings: Tomorrow's editor, today. Windsurf Editor is the first AI agent-powered IDE that keeps developers in the flow. Available today on Mac, Windows, and Linux.
- vscodium/docs/index.md at master · VSCodium/vscodium: binary releases of VS Code without MS branding/telemetry/licensing - VSCodium/vscodium
- Qwen Chat: no description found
OpenRouter (Alex Atallah) ▷ #announcements (2 messages):
Amazon Nova models, Amazon Bedrock operational issues, Model availability
- Amazon Nova models experience downtime: Both Amazon Nova models and Amazon Bedrock encountered operational issues due to an upstream problem, resulting in the servers returning a misleading status code of 400.
- The surge in usage was misinterpreted as a key leak, although BYOK usage remained unaffected.
- Amazon Bedrock fully operational: An update confirmed that Amazon Bedrock is now fully operational again, recovering from the previous issues impacting the models.
- All Nova and Claude models are back online, returning to normal functionality.
OpenRouter (Alex Atallah) ▷ #general (255 messages🔥🔥):
Deepseek Provider Issues, Gemini Video Support, Model Speed Comparisons, OpenRouter Usage, Provider Pricing Context
- Deepseek Provider Struggles Persist: The Deepseek provider has been down for several days, frustrating users who relied on its fast performance for R1 queries.
- Some users speculate the downtime is due to a significant DDOS attack, indicating the provider's challenges in managing increased requests.
- Gemini Video Support Implementation: A user shared a code snippet for integrating video into their Gemini model workflow, indicating attempts to incorporate media handling.
- There is currently limited documentation on how to pass video references using OpenRouter, with team members checking for updates.
- Comparative Model Speed and Performance: Users discussed the speeds of various models, noting that OpenRouter's performance is perceived as faster and cleaner than the official OpenAI API.
- The stability and concurrency of different models were debated, with some reporting better experiences than others.
- OpenRouter's Flexibility with Providers: Users inquired about the specifics of using OpenRouter, seeking to understand how to force the use of specific providers in API calls.
- It was clarified that users can specify providers and manage fallback settings for their requests; a minimal example is sketched after this list.
- Free Model Offerings and Pricing Insights: A user questioned the rationale behind some models being offered for free, spurring discussions about service pricing.
- Mentioning the balance between cost-efficiency and performance, users emphasized the importance of understanding pricing structures.
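A minimal sketch of pinning a provider via the `provider` field on OpenRouter's chat completions endpoint, following the Provider Routing docs linked below. The model slug and provider name here are illustrative, not values confirmed in the discussion:

```python
import requests

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": "Bearer <OPENROUTER_API_KEY>"},
    json={
        "model": "deepseek/deepseek-r1",
        "messages": [{"role": "user", "content": "Hello!"}],
        "provider": {
            "order": ["DeepSeek"],     # providers to try, in this order
            "allow_fallbacks": False,  # fail rather than reroute elsewhere
        },
    },
    timeout=120,
)
print(resp.json())
```

Leaving `allow_fallbacks` at its default instead lets OpenRouter reroute to another provider when the pinned one is down, which is the behavior users were weighing during the Deepseek outage.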
- Tweet from Qwen (@Alibaba_Qwen): The burst of DeepSeek V3 has attracted attention from the whole AI community to large-scale MoE models. Concurrently, we have been building Qwen2.5-Max, a large MoE LLM pretrained on massive data and ...
- Qwen2.5-Max: Exploring the Intelligence of Large-scale MoE Model: It is widely recognized that continuously scaling both data size and model size can lead to significant improvements in model intelligence. However, the research and industry...
- Integrations | OpenRouter: Bring your own provider keys with OpenRouter
- DeepSeek: DeepSeek R1 – Provider Status: See provider status and make a load-balanced request to DeepSeek: DeepSeek R1 - DeepSeek R1 is here: Performance on par with [OpenAI o1](/openai/o1), but open-sourced and with fully open reasoning tok...
- Operator: An agent that can use its own browser to perform tasks for you.
- DeepSeek: DeepSeek R1 – Uptime and Availability: Uptime statistics for DeepSeek: DeepSeek R1 across providers - DeepSeek R1 is here: Performance on par with [OpenAI o1](/openai/o1), but open-sourced and with fully open reasoning tokens. It's 67...
- Provider Routing | OpenRouter: Route requests across multiple providers
- Provider Routing | OpenRouter: Route requests across multiple providers
- DeepSeek R1 (nitro) - API, Providers, Stats: DeepSeek R1 is here: Performance on par with [OpenAI o1](/openai/o1), but open-sourced and with fully open reasoning tokens. It's 671B parameters in size, with 37B active in an inference pass. Ru...
- DeepSeek R1 Distill Llama 70B - API, Providers, Stats: DeepSeek R1 Distill Llama 70B is a distilled large language model based on [Llama-3.3-70B-Instruct](/meta-llama/llama-3. Run DeepSeek R1 Distill Llama 70B with API
- Mem0 - The Memory layer for your AI apps: Mem0 is a self-improving memory layer for LLM applications, enabling personalized AI experiences that save costs and delight users.
- LLM Cost - Insight Engine - AI Price Calculator: Compare LLM Costs Easily! Use this simple calculator to estimate AI model costs for OpenAI GPT, Google Gemini, and more. See pricing per million tokens and find the most cost-effective solution for yo...
Eleuther ▷ #general (79 messages🔥🔥):
GRPO implementation discussions, DeepSeek training cost analysis, LLM reasoning abilities, Job opportunities in LLM research, Neuroscience and AI interpretations
- Concerns About GRPO Usability: Members noted that both SimpleRL and TinyZero don't effectively utilize GRPO, with claims that their recreation runs don't achieve successful results using it.
- This led to a consensus that GRPO may be considered dead code in recent repositories, primarily relying on PPO instead.
- DeepSeek's Affordable Training Strategy: Discussions highlighted that DeepSeek's training cost of only $5 million was achieved through various optimizations, including 8-bit training and modified MoE setups.
- Although not many novel ideas were introduced, members agreed that the implementation of existing strategies at scale was key to their competitive edge over larger organizations.
- Nature of LLM Reasoning Capabilities: A perspective was shared that characterizes LLMs as 'stochastic parrots,' prompting debates over whether true reasoning underlies their functionality.
- It was suggested that practical evaluations, such as through video games, might present more valid measures of reasoning than traditional methods.
- Finding Remote Job Opportunities in LLM Research: A user sought advice on finding remote, part-time research jobs in LLMs without relying on platforms like LinkedIn.
- Community members recommended networking at conferences as a more effective approach to uncovering job opportunities.
- Neuroscience Insights on AI: An independent journalist shared a project exploring the parallels between neuroscience findings and LLM structures, suggesting frameworks of AI could mirror aspects of brain functionality.
- The discussion opened avenues for contemplating implications of AI development on emotional intelligence and ethical considerations.
- Tweet from Peiyi Wang (@sybilhyz): @Grad62304977 During RL training, the model's reasoning patterns evolve continuously. At times, a specific pattern may suddenly emerge prominently, which I define as the "aha moment". For ...
- Tweet from Xeophon (@TheXeophon): what the fuck
- Tweet from Chuck Schumer (@SenSchumer): The DeepSeek announcement from China has been called by some AI’s “Sputnik moment” for America. It’s precisely why I made AI a top priority in the last Congress and will keep at it. Our competitors are ...
Eleuther ▷ #research (112 messages🔥🔥):
GRPO and Momentum Matrices, Model-Based Reinforcement Learning, Muesli Method Comparisons, YuE Music Generation Model, Privileged Bases in Transformers
- Understanding GRPO’s Momentum Matrices: Discussion highlighted that momentum matrices can exhibit local spatial correlations when viewed in a certain basis, influenced by possible outer products from individual activations.
- One user noted that while DCT might compress these matrices effectively, the actual effectiveness of chunking can vary, as 2D structure may not be inherent in gradient updates.
- Exploring Model-Based Reinforcement Learning: A new algorithm, MR.Q, was discussed, which aims to unify model-free RL approaches while also leveraging model-based representations for learning efficiency.
- This approach attempts to stabilize learning with high update ratios, a challenge noted in RL applications, emphasizing the importance of proper hyperparameter tuning.
- Revisiting Muesli’s Contribution: Participants compared the Muesli method to model-based approaches, suggesting it could be construed as such when expanded to depth 1.
- This comparison garnered interest regarding how blending aspects of model-based strategies can influence effectiveness and adaptability.
- Introduction of the YuE Music Generation Model: The YuE model was announced as a leading open-source solution for full-song music generation, merging two distinct LMs for comprehensive performance across genres.
- It emphasizes compatibility with tools like Hugging Face and demonstrates potential for diverse lyrical-to-song tasks.
- Privileged Bases in Transformers - Implications: Research on privileged bases suggested that these can influence how activations and gradient updates interact within the network, adding a layer to associative memory functions.
- The theory proposes that this could enhance performance in tasks like attention, where selective memory retrieval is essential.
- Discord - Group Chat That’s All Fun & Games: Discord is great for playing games and chilling with friends, or even building a worldwide community. Customize your own space to talk, play, and hang out.
- Privileged Bases in the Transformer Residual Stream: Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.
- Growing Tiny Networks: Spotting Expressivity Bottlenecks and Fixing Them Optimally: Machine learning tasks are generally formulated as optimization problems, where one searches for an optimal function within a certain functional space. In practice, parameterized functional spaces are...
- Tweet from Ruibin Yuan (@abc43992899): 3/n: YuE combines two LMs: a 7B LM trained on 1.6T semantic-rich speech/music tokens with text labels, and a 1B LM trained on 2.1T residual tokens. A semantic-acoustic fused codec handles waveform↔️co...
- Tweet from Ruibin Yuan (@abc43992899): 1/n: 🚀 Announcing YuE (乐) – the most powerful open-source full-song music generation model! 🎵 Tackle the lyrics-to-song task (like http://Suno.ai) with support for diverse genres, stunning vocals, &...
- Towards General-Purpose Model-Free Reinforcement Learning: Reinforcement learning (RL) promises a framework for near-universal problem-solving. In practice however, RL algorithms are often tailored to specific benchmarks, relying on carefully tuned hyperparam...
- Tweet from BlinkDL (@BlinkDL_AI): Let's kill attention. RWKV-7 "Goose" 🪿 1.5B release: SotA base LM at its size, 100% RNN, fully multilingual (100+ languages & code) and honest: no eval-maxxing, no HQ-annealing, no post-t...
- MAD-TD: Model-Augmented Data stabilizes High Update Ratio RL: Building deep reinforcement learning (RL) agents that find a good policy with few samples has proven notoriously challenging. To achieve sample efficiency, recent work has explored updating neural...
Eleuther ▷ #scaling-laws (2 messages):
Compute and Curvature, Scaling Impacts, Inductive Bias in Parametric Models
- Compute Increase Reduces Curvature: A member noted that as compute increases, the curvature decreases, indicating higher robustness to deviations from the optima.
- This suggests a relationship between computational capacity and the stability of the loss landscape.
- Debating the Reasons Behind Curvature Changes: The same member expressed uncertainty that the observed curvature behavior is solely due to scaling, hinting at potential other factors.
- They proposed that a parametric model might inherently skew the loss landscape, influencing these findings.
Eleuther ▷ #lm-thunderdome (5 messages):
scbench, zeroSCROLLS, longbench, LM Evaluation Harness, MLX methods
- Scbench garners attention: Members discussed that scbench looks promising but may face challenges with its integration due to its need for multi-turn capabilities.
- One member noted that they will look more into zeroSCROLLS as a potential alternative.
- Longbench added to the mix: A member announced the addition of longbench to their projects, indicating ongoing developments.
- This addition may complement the ongoing work with scbench for better evaluation.
- Challenges with LM Evaluation Harness: A member expressed frustration using the LM Evaluation Harness locally, citing issues with unimplemented methods like `generate_until`.
- They're seeking advice on a better way to use MLX with the LM Eval Harness while these problems persist; a sketch of the required interface follows.
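A hedged sketch of the interface in question: lm-eval (v0.4+) custom backends subclass `lm_eval.api.model.LM` and must implement `generate_until` alongside the loglikelihood methods. The `mlx_generate` helper below is hypothetical, standing in for a real MLX decoding loop:

```python
from lm_eval.api.instance import Instance
from lm_eval.api.model import LM


def mlx_generate(model, tokenizer, prompt: str, stop: list[str]) -> str:
    """Hypothetical stand-in for an actual MLX decoding loop."""
    raise NotImplementedError


class MLXWrapper(LM):
    """Custom lm-eval backend; loglikelihood and loglikelihood_rolling
    are also required by the LM base class but omitted here."""

    def __init__(self, model, tokenizer):
        super().__init__()
        self.model, self.tokenizer = model, tokenizer

    def generate_until(self, requests: list[Instance]) -> list[str]:
        outputs = []
        for req in requests:
            context, gen_kwargs = req.args  # (prompt, {"until": [...], ...})
            outputs.append(
                mlx_generate(self.model, self.tokenizer, context,
                             stop=gen_kwargs.get("until", []))
            )
        return outputs
```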
Eleuther ▷ #multimodal-general (1 messages):
Janus flow paper, Rectified flow objective, Image generation tasks
- Clarification on Janus Flow's Purpose: A member inquired whether the rectified flow objective outlined in the Janus flow paper implies that it cannot be used to transform an image into another image, since x^con consists solely of text tokens.
- This raises questions about the applicability of the model in direct image-to-image transformations; the generic objective is sketched after this list for reference.
- Exploration of Image Generation Tasks: The discussion highlighted the significance of text tokens in image generation tasks and their functionality in the context of Janus flow methodology.
- Participants are keen on understanding how this affects cross-modal generation and practical uses.
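For reference, a generic conditional rectified-flow objective (a sketch of the standard form, not a claim about JanusFlow's exact conditioning): the model learns a velocity field along straight-line paths from noise to data,

```latex
x_t = (1 - t)\, x_0 + t\, x_1, \qquad t \sim \mathcal{U}[0,1], \quad x_0 \sim \mathcal{N}(0, I), \quad x_1 \sim p_{\mathrm{data}}

\mathcal{L}(\theta) = \mathbb{E}_{t,\, x_0,\, x_1} \left[ \big\lVert v_\theta(x_t,\, t,\, x^{\mathrm{con}}) - (x_1 - x_0) \big\rVert^2 \right]
```

Under this form, generation integrates the learned velocity field from noise toward data, and the model can only be steered by whatever enters x^con; if those condition tokens are text-only, there is indeed no native path for image-to-image transformation, which is the crux of the question above.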
Interconnects (Nathan Lambert) ▷ #news (59 messages🔥🔥):
DeepSeek V3 and competition, OpenAI's new offerings, Qwen licensing and development, ChatGPT Gov announcement, Open Thoughts project and partnerships
- DeepSeek V3 set to impress: DeepSeek is reportedly launching V3, which is expected to outperform US lab models based on claims from community discussions. A user noted how DeepSeek has generated excitement in the AI community with its advancements.
- Members have expressed that DeepSeek's model updates could significantly impact larger MoE models, stirring conversations around competitive outcomes and benchmarks.
- OpenAI introduces ChatGPT Gov: OpenAI announced ChatGPT Gov, a specialized version of ChatGPT tailored for U.S. government agencies, providing them with enhanced access to frontier models. Concerns were raised about whether OpenAI's focus on government contracts might lead to a repeat of the Skydio pivot strategy.
- Users suggested that if OpenAI continues down this route, it may affect their overall product competitiveness, similar to other companies in the field.
- Qwen's new model development: Qwen has unveiled their Qwen2.5-Max, a large MoE LLM with claims of competitive performance against other leading models. Community dialogue indicated a collaborative approach in building on the model's capabilities, showcasing substantial advancements in training methods.
- Participants questioned the strategic direction of Qwen amidst the evolving AI landscape, specifically regarding its licensing actions that appear scattered.
- Open Thoughts project launches: The Open Thoughts project introduced their latest reasoning datasets, aiming for a comprehensive approach to open-source data sharing and collaboration. The release has been recognized for its community backing and involvement from leading institutions.
- The initiative highlights a collective effort in publishing open datasets, fueling discussions around collaboration in model development and data generation.
- Codename Goose unveiled: Codename Goose is announced as an open-source AI agent tailored to automate tasks through a user-friendly interface and CLI. The initiative emphasizes open-source availability and aligns with enhancing productivity across various applications.
- Community members speculated on possible affiliations with established groups like Eleuther, emphasizing the significance of its open-source roots.
- Introducing codename goose: codename goose is your open source AI agent, automating engineering tasks and improving productivity.
- Tweet from Ryan Marten (@ryanmart3n): We are fully open-source. Our model weights, datasets, data generation code, evaluation code, and training code are all publicly available.To find all of the above, start here: https://github.com/ope...
- Tweet from Qwen (@Alibaba_Qwen): Results of base language models. We are confident in the quality of our base models and we expect the next version of Qwen will be much better with our improved post-training methods.
- President Trump Addresses House GOP Issues Conference in Florida | Video | C-SPAN.org: no description found
- Tweet from Ryan Marten (@ryanmart3n): Announcing the Open Thoughts project. We are building the best reasoning datasets out in the open.Building off our work with Stratos, today we are releasing OpenThoughts-114k and OpenThinker-7B.
- Tweet from Qwen (@Alibaba_Qwen): The burst of DeepSeek V3 has attracted attention from the whole AI community to large-scale MoE models. Concurrently, we have been building Qwen2.5-Max, a large MoE LLM pretrained on massive data and ...
- Tweet from Igor Kotenkov (@stalkermustang): 2.0-pro is available in AiStudio, even responds to my request. Announce/soft launch through @OfficialLoganK's twitter any minute now? Waiting for metrics
- Tweet from Zephyr (@angelusm0rt1s): 🎵Drop Drop Drop...
- Tweet from Mark Chen (@markchen90): Congrats to DeepSeek on producing an o1-level reasoning model! Their research paper demonstrates that they’ve independently found some of the core ideas that we did on our way to o1.
- GitHub - ZihanWang314/RAGEN: RAGEN is the first open-source reproduction of DeepSeek-R1 for training agentic models via reinforcement learning.: RAGEN is the first open-source reproduction of DeepSeek-R1 for training agentic models via reinforcement learning. - ZihanWang314/RAGEN
- Trump calls China’s AI DeepSeek breakthrough ‘a wakeup call’ — but ‘positive’ if true: President Trump on Monday called the new Chinese AI platform DeepSeek a “wakeup call” for America — while also saying its debut could be a “positive” develop...
Interconnects (Nathan Lambert) ▷ #ml-questions (5 messages):
DeepSeek optimizations, CUDA limitations, Reinforcement learning, DeepSeek implications, Technical report findings
- DeepSeek optimizations can't be written in CUDA?: A member expressed seeing claims that some DeepSeek optimizations cannot be written in CUDA and sought sourcing on the matter.
- Another member remarked they had heard about code being written in a language below CUDA (presumably NVIDIA's PTX), indicating it's an area of interest.
- Source found in technical report: A member mentioned that the source for the statement about DeepSeek optimizations is in the technical report, sharing an attached image.
- This indicates that there's documented evidence supporting the claims being discussed.
- DeepSeek updates and implications: A member referenced a link discussing the DeepSeek implications, with highlights such as emergent chain-of-thought via pure reinforcement learning.
- They noted that discussions around costs and chip bans also tie into the broader meta-discussion of AI between the U.S. and China.
- Responsibility for oversight: A member took responsibility for forgetting to cover DeepSeek updates, affirming their previous takeaways on the matter.
- They acknowledged missing the broader implications of this news on the AI landscape.
Link mentioned: DeepSeek FAQ: DeepSeek has completely upended people’s expectations for AI and competition with China. What is it, and why does it matter?
Interconnects (Nathan Lambert) ▷ #ml-drama (18 messages🔥):
Liang Wenfeng Meme Game, DeepSeek vs Qwen Team Dynamics, Meme Coin Launch Speculation, Qwen2.5-Max Model Release, AI Community Discussions
- Liang Wenfeng's Meme Game Strong: Meme enthusiasts praised Liang Wenfeng's creativity with a link to his latest meme post. It's clear the meme game is thriving!
- Memes are taking over the discourse and bringing some laughter amid serious discussions.
- Qwen Throws Shade at DeepSeek: The Qwen team was noted for their critical stance toward DeepSeek, suggesting comparisons were unfair when using older models. Member referenced an ongoing discussion on DeepSeek's new Janus Pro paper and its reception.
- There was a reminder to the community to keep expectations in check as they navigate this competitive landscape.
- Speculation on a Meme Coin Launch: A user hinted at an upcoming meme coin launch, suggesting it might not be long before it hits the market. This speculation was based on a casual confirmation about the legitimacy of certain involved parties.
- The community's reactions ranged from skepticism to excitement about potential new ventures in the meme coin space.
- Launch of Qwen2.5-Max Model: The Qwen team announced the release of Qwen2.5-Max, a large MoE LLM that competes with DeepSeek V3 in benchmarks. They highlighted significant improvements in performance across various metrics.
- Supporting links for detailed exploration included a blog post and API documentation.
- Positive Recognition for Joshua: Amid various discussions, Joshua received praise for his character, with members asserting he is genuinely a good person. Such comments emphasize constructive community vibes despite underlying tensions.
- Members noted that positive personal interactions are crucial in navigating the intense themes and discussions present in the group.
- Mathias Gehrig (@mathiasgehrig.bsky.social): The last statement must be the most wrong thing I read today. I get you are proud to be American or whatever, but still.
- Tweet from Joshua Achiam (@jachiam0): Realized there is something I did not say today during the DeepSeek discourse that should have been my first thought, and I am chagrined that it took me so long. Congratulations to the DeepSeek team. Y...
- Tweet from Qwen (@Alibaba_Qwen): The burst of DeepSeek V3 has attracted attention from the whole AI community to large-scale MoE models. Concurrently, we have been building Qwen2.5-Max, a large MoE LLM pretrained on massive data and ...
- Tweet from Zizheng Pan (@zizhpan): Again guys, this is not our Wenfeng.
- Tweet from Junyang Lin (@JustinLin610): It is not even comparing its results with Qwen2-VL-7B but Qwen-VL-Chat which was released two years and a half ago. 🤣 Quoting Lucas Beyer (bl16) (@giffmana) Just had a quick look at DeepSeek's new...
Interconnects (Nathan Lambert) ▷ #random (36 messages🔥):
DeepSeek R1 Launch, Qwen 2.5-Max Performance, AI Customer Applications, Laptop Purchase Decision, Influencer Impact on Academic Figures
- DeepSeek R1 Launch and Adoption: AI customers are rapidly adopting DeepSeek R1, with companies like ZoomInfo already utilizing it for products, and Notion exploring its potential.
- Additionally, Google Cloud is planning to provide it to cloud customers, indicating strong market interest.
- Qwen 2.5-Max's Impressive Performance: The launch of Qwen 2.5-Max has shown competitive performance, outperforming DeepSeek V3 in benchmarks like Arena Hard and LiveCodeBench.
- Users can start utilizing it immediately with installation instructions provided for ai-gradio.
- Laptop Purchase Dilemma: A discussion raised the question of whether to pay twice as much for a base M4 Pro MacBook Pro compared to an M3 Air with 24GB RAM, focused on computational needs in Bayesian stats.
- Consensus leans towards the M3 Air being a more reasonable choice for the price.
- Influencer Impact on Academic Figures: There was a sentiment expressing disappointment over academic figure AK shifting from scholarly content to influencer-style promotions, raising questions about credibility.
- Comments pointed out that this shift has influenced how followers perceive his contributions.
- Groq and Full R1 Feasibility: Some members commented on the implications of running full R1 on Groq's LPU, questioning the practicality given its limited SRAM capacity.
- The discussion highlighted potential challenges in executing sophisticated AI models on hardware with such constraints.
- Tweet from SE Gyges (@segyges): I actually know this: GRPO upsamples paths to the correct answer, but those paths have to already exist. So the more paths to correct answers for your dataset there are, the better it will train. If t...
- Tweet from AK (@_akhaliq): Qwen2.5-Max just one-shotted this. Prompt: write a script for three bouncing yellow balls within a sphere, make sure to handle collision detection properly. make the sphere slowly rotate. make sure ball...
- Tweet from Yao Fu (@Francis_YAO_): One interesting learning from the R1 and K1.5 tech report is the usage of string matching based binary reward: I’ve tried it myself in 2022 using FlanT5, my friends tried it in 2023 with Llama 1 and i...
- Tweet from sunny madra (@sundeep): @NickADobos Distill launch today, full r1 coming
- Tweet from Peiyi Wang (@sybilhyz): @Grad62304977 During RL training, the model's reasoning patterns evolve continuously. At times, a specific pattern may suddenly emerge prominently, which I define as the "aha moment". For ...
- Tweet from Alexander Doria (@Dorialexander): I feel this should be a much bigger story: DeepSeek has trained on Nvidia H800 but is running inference on the new home Chinese chips made by Huawei, the 910C.
- Tweet from Tibor Blaho (@btibor91): New ChatGPT web app build adds new plans & pricing changes/experiments- Promo codes (fixed or % discount, for specific number of periods (e.g., months)) (preview with a test promo code only for illust...
- Tweet from Amir Efrati (@amir): AI customers move ~fast~ • ZoomInfo using DeepSeek R1 for products • Notion looking at using it too • Google Cloud providing it to cloud customers
- Tweet from Sam Altman (@sama): deepseek's r1 is an impressive model, particularly around what they're able to deliver for the price. we will obviously deliver much better models and also it's legit invigorating to have a...
- 070 🔥US News 🎥English Study 🌀AI2's New Release Aims to Close the Ga #AI #NewsSummary #podcast #facts #usanewstoday: 070 AI2's New Release Aims to Close the Gap Between Open and Closed AI Models. Published: December 3, 2024. The Allen I...
- Ascend Computing - Huawei Enterprise: Huawei Atlas AI computing solution offers a broad portfolio of products, enabling all-scenario AI infrastructure across device-edge-cloud. Learn more.
Interconnects (Nathan Lambert) ▷ #memes (10 messages🔥):
Deepseek, Google release delay, AI job market, ChatGPT, AI misconceptions
- Deepseek's Impact on Google: Little birds are reporting that Google delayed a major release due to the recent Deepseek brouhaha. This highlights the significant repercussions arising from community discussions around Deepseek.
- Ironic that we got free AI from a hedge fund and $200/month AI from a nonprofit, suggesting mixed feelings regarding AI funding sources.
- Podcast Absence of Deepseek: Commentators expressed confusion over how Deepseek managed to acquire an R1 ranking without any podcast appearances. One user humorously questioned the credibility of their visibility efforts.
- The statement emphasizes scrutiny over Deepseek's unconventional rise in the AI landscape.
- AI Job Market Shift: A humorous take was shared noting that even ChatGPT has lost its job to AI, hinting at the rapidly evolving job market. This reflects a growing trend in AI taking over roles traditionally held by humans.
- The meme captures the comedic yet alarming reality for those in the tech industry.
- Criticism of AI Usage: Concerns were raised about the President of the United States falling for 'AI slop,' poking fun at the misleading narratives around AI. Memes and tweets continued to circulate around this theme, showcasing skepticism in AI advancements.
- This discussion critiques the prevalent misconceptions held by influential figures regarding AI's capabilities.
- Culture Jokes with AI: An image attachment sparked laughter, reinforcing the interconnectedness of memes and AI culture in online discussions. Humor stands as a major thread among AI enthusiasts as they navigate technological nuances.
- Community engagement shows that humor remains a coping mechanism in the hype and complexities of evolving AI technologies.
- Tweet from Insane Facebook AI slop (@FacebookAIslop): the president of the United States has fallen for AI slop
- liang wenfeng - Google Search: no description found
- Tweet from Andrew Carr (e/🤸) (@andrew_n_carr): Even chatGPT lost its job to AI
- Tweet from Avichal - Electric ϟ Capital (@avichal): Ironic that we got free AI from a hedge fund and $200/month AI from a nonprofit.
- Tweet from Luke Metro - e/🐋 (@luke_metro): I really don’t understand how Deepseek made R1 without doing a single podcast appearance
Interconnects (Nathan Lambert) ▷ #rl (5 messages):
Open-Instruct integration, vLLM maintenance, OpenRLHF framework
- Inquiry on Open-Instruct's vLLM Integration: A member is exploring the use of Open-Instruct internally and raised concerns regarding the integration with vLLM, specifically about the reliance on the OpenRLHF framework.
- They expressed a desire to avoid dependency on an OSS project that may not have long-term maintenance.
- AllenAI's Approach to Tool Maintenance: Natolambert mentioned that they are engaged in discussions with vllm about their tool usage for RL, indicating a proactive approach.
- They noted that while they typically pin open-source tools until updates are necessary, they cannot guarantee ongoing maintenance for any specific tool.
- Uncertainty in Open-Instruct's Viability: Despite recognizing the potential of Open-Instruct, Natolambert cautioned that the integration is not guaranteed, referring to it as 'not a sure thing'.
- This highlights a level of caution in relying on new tools without assurance of their support going forward.
Interconnects (Nathan Lambert) ▷ #cv (5 messages):
Qwen Licensing Issues, Qwen Model Variants
- Confusion Surrounds Qwen Licensing: A member raised the question, 'why is qwen so all over the place with licensing?', highlighting inconsistencies in the licensing across different Qwen model variants.
- They pointed out that Qwen2.5-VL-72b has restrictions for services exceeding 100M MAU, while Qwen2.5-VL-7b uses the more permissive Apache 2.0 license.
- New Noncommercial License for Qwen2.5-VL-3b: The Qwen2.5-VL-3b version introduces a new license called 'Qwen Research', which is for noncommercial use only and raises questions about its target audience.
- One member called the license 'super weird and specific', suggesting it targets a niche group of businesses with fewer than 100M MAU.
- Community Engagement on Licensing: A member reported that they contacted a representative from Hugging Face regarding Qwen's licensing issues, stating they are currently looking into it.
- Concerns were raised about possibly shifting to the restrictive 'research' license, with one member expressing hope that this does not occur.
- Lack of License Clarity in Qwen Models: Discussion revealed that the 7B model lacks a LICENSE file, leading to ambiguity about its terms compared to other Qwen models.
- The lack of clarity around the tag attached to the 7B model adds to the confusion about its legal usage.
Interconnects (Nathan Lambert) ▷ #reads (13 messages🔥):
DeepSeek-R1 release, AI model comparisons, Hawaii's military population, Jay Alammar's illustrations, Challenges in understanding complex models
- DeepSeek-R1 Launches with Open Weights: Jay Alammar announced the release of DeepSeek-R1, highlighting its open weights and training methods similar to OpenAI O1.
- This release is seen as having major implications for the ML R&D community due to its distilled versions and constructive insights.
- AI Model Efficiency and Accessibility: Members discussed that DeepSeek presents an advantage with its open-source approach, improved pricing, and usability compared to its competitors.
- Comments suggested this might pressure Anthropic to release their pending projects to keep up with the innovations.
- Humorous Takes on the Nerd Community: Amidst discussion, one member remarked on the concentration of tech enthusiasts in Hawaii, positing it as a unique phenomenon due to its heavy military population.
- Another playfully noted metro activity, joking about the predictability of gatherings among 'nerds.'
- Jay Alammar's Ongoing Illustrative Success: The community acknowledged Jay's creativity, with one member humorously commenting that the man can't stop illustrating.
- However, some expressed mixed feelings about his recent work, suggesting it might not resonate with all audiences.
- Complexity in AI Diagrams vs. Math: A member expressed frustration about finding the diagram representations of certain models less clear than the mathematical explanations.
- This highlights ongoing challenges within the community regarding clarity in conveying complex AI concepts.
Link mentioned: The Illustrated DeepSeek-R1: A recipe for reasoning LLMs
Interconnects (Nathan Lambert) ▷ #posts (40 messages🔥):
DeepSeek's Mainstream Attention, OpenAI's Formal Math Direction, LLMs as Verifiers in Math, Upcoming Post by Nat Lambert
- DeepSeek gains mainstream traction: There's a buzz around DeepSeek, as it's even being discussed at a finance firm connected to the user's wife, indicating its rising visibility beyond niche circles.
- A member noted, 'My wife heard about Deepseek last night, which means it's really hit normal people.'
- OpenAI pauses formal math direction: Concerns were raised about OpenAI's lack of focus on formal math, as their direction seemed paused according to discussions with team members.
- One user mentioned, 'They paused the formal math direction a while back,' referencing insights from past conversations in the community.
- Discussion on LLMs as Verifiers: There was debate on whether LLMs can serve as effective verifiers for mathematical problems, potentially eliminating the need for traditional theorem provers.
- A user expressed, 'It's not obvious that you need theorem provers in order to make progress,' hinting at the utility of LLMs in this domain.
- Nat Lambert's upcoming post on reasoning models: Nat Lambert is considering posting his insights on reasoning models, emphasizing that DeepSeek R1 represents just the beginning of significant advancements.
- He noted the current excitement around these topics, stating, 'We are full hype surfing at this point.'
Link mentioned: Tweet from Nathan Lambert (@natolambert): Why reasoning models will generalize: DeepSeek R1 is just the tip of the iceberg of rapid progress. People underestimate the long-term potential of "reasoning." https://buff.ly/4haoAtt
Stackblitz (Bolt.new) ▷ #announcements (1 messages):
Terminal Errors Detection, Bolt Integration
- Bolt now detects terminal errors: Bolt has introduced Terminal Errors Detection to catch hard-to-see issues that occur in the terminal, which often go unnoticed.
- Enhanced integration with development environment: The new feature tightly integrates Bolt with your app's development environment to automatically detect issues and gather essential data for fixes.
Link mentioned: Tweet from bolt.new (@boltdotnew): Bolt 🧠 update: Terminal Errors Detection. Some errors are hard to catch: they happen in the terminal where we don't often look. Bolt is now tightly integrated with your app's development environ...
Stackblitz (Bolt.new) ▷ #prompting (4 messages):
Improver Prompt Limitations, Frontend Prototype Constraints, User Experience with Prompt Improver
- Frustrations with the Improver Prompt: A user expressed dissatisfaction with the improver prompt, stating it adds unnecessary details that clutter early prompts during a build.
- They mentioned that it often refuses to develop its own ideas, leading to frustrations.
- Acknowledgment of Constraints for Document Management System: A user highlighted key limitations of creating a frontend prototype for the document management system, particularly regarding browser capabilities and lack of backend support.
- As a result, they suggested focusing on UI/UX with mock data while needing connections to backend services for full functionality.
- Potential Shifts in Using Prompt Improver: One user indicated they might stop using the prompt improver entirely or feel compelled to remove half of its additions.
- This sentiment reflects a growing frustration with the tool's current behavior and output.
Stackblitz (Bolt.new) ▷ #discussions (135 messages🔥🔥):
Stripe Integration Challenges, Workflow and Project Management with Bolt, AI for Title Generation from Images, Node Version Updates in Bolt, Community Support and Collaboration
- Navigating Stripe Integration Issues: The community discussed various challenges with Stripe integrations, particularly focusing on creating subscription systems and managing user roles effectively.
- Experts offered to assist with hands-on implementations while encouraging shared knowledge in the spirit of collaboration.
- Effective Workflows for Using Bolt: Users shared their workflows and experiences utilizing Bolt for project development, emphasizing the importance of structure and methodology for success.
- Many noted the potential for achieving a complete project with no coding, provided one is methodical and leverages the tool's capabilities effectively.
- AI Generating Titles from Images: There was interest in using AI, such as ChatGPT, to generate appropriate titles from images, posing questions about the best approaches to do so.
- A distinction was made between using OCR for reading text versus generating text for images, showcasing the need for clarity on the desired outcome.
- Updating Node Version in Bolt: A user sought guidance on how to update the Node version that Bolt is using, facing permissions restrictions during the process.
- This reflects a broader concern among developers about managing dependencies and versioning in their applications.
- Community Support and Collaboration: Several members offered assistance to each other, sharing resources and experiences related to their Bolt projects and integrations.
- The community showcased a collaborative spirit, with offers to help troubleshoot common issues and improve project functions.
- Vite + React + TS: no description found
- Diji.art - Digital Design Marketplace: Create and sell unique designs on high-quality apparel. Join our community of creators and fashion enthusiasts.
- 21st.dev - The NPM for Design Engineers: Ship polished UIs faster with ready-to-use React Tailwind components inspired by shadcn/ui. Built by design engineers, for design engineers.
Stability.ai (Stable Diffusion) ▷ #general-chat (138 messages🔥🔥):
Janus Model Opinions, AMD Support for Stable Diffusion, Hardware Recommendations for AI Work, Upscalers in Stable Diffusion, Deepseek Model Comparisons
- Mixed Reviews on Janus Model: Users expressed disappointment in the image generation capabilities of Janus, with one noting its 7B model is slow and questioning its primary utility.
- Another mentioned that most base models feel subpar and recommended sticking to SDXL while waiting for improvements in Janus.
- Setting Up Stable Diffusion on AMD: For those using AMD cards, users advised checking the tech support channel for a pinned guide to ensure optimal performance when using Stable Diffusion.
- The recommendation leaned towards utilizing the Swarm UI or webui-forge to enhance functionality on AMD systems.
- RAM vs VRAM in AI Builds: A debate arose on the necessity of high RAM versus VRAM for AI tasks, with some arguing that excess RAM is often wasted, while others stressed its value for different applications.
- Opinions varied, with suggestions that, for focused work on several advanced tasks, investing in 32GB of VRAM would be the more economical choice.
- Upscalers Consistency in Use: Discussion touched on the longevity of upscalers, with users pointing out that many have been employed for two years without frequent updates.
- One user shared their preferred upscalers, specifically 4x-AnimeSharp and 4x_NMKD-superscale, noting a lack of new prominent options.
- Comparing Deepseek with Other LLMs: Concerns were raised about whether Deepseek lives up to the hype regarding its uncensored LLM capabilities, with some underscoring its comparison to OpenAI's offerings.
- Users noted that while Deepseek holds promise, it has yet to demonstrate revolutionary features compared to existing models.
- deepseek-ai/Janus-Pro-7B at main: no description found
- Webui Installation Guides: Stable Diffusion Knowledge Base (Setups, Basics, Guides and more) - CS1o/Stable-Diffusion-Info
MCP (Glama) ▷ #general (112 messages🔥🔥):
Goose Client, MCP Server Issues, DeepSeek Pricing, Integration with Home Assistant, Token Usage Monitoring
- Goose Client Gains Attention: Users are expressing positive experiences with the new Goose client, particularly its local execution and extensibility features.
- However, it currently only supports Mac and Linux, leading to discussions about its compatibility with WSL.
- MCP Server Limitations Noted: Members raised concerns about the functionality of community MCP servers, questioning reliability and specific issues encountered during testing.
- A proposal to create a verified list of MCP servers was suggested to facilitate users in identifying dependable options.
- DeepSeek's Promising Offers: Discussion included the value of signing up for kluster.ai to receive $100 credit for DeepSeek, highlighting its relatively reasonable pricing.
- Although inference speed was acknowledged as less optimal compared to earlier deployments, it still prompts interest among users.
- Home Assistant MCP Integration: Members discussed the potential of using Home Assistant's MCP as a media management bridge, noting the recent merge of MCP support into the core.
- However, there was uncertainty regarding its readiness for widespread use following the latest updates.
- Token Usage and Management Concerns: Concerns were raised about Goose's internal token usage, with users emphasizing the importance of tracking consumption efficiently.
- Recommendations included implementing logging outputs for token usage and monitoring through provider dashboards.
- codename goose | codename goose: Your open source AI agent, automating engineering tasks seamlessly.
- GitHub - Upsonic/Upsonic: Task oriented AI agent framework for digital workers and vertical AI agents: Task oriented AI agent framework for digital workers and vertical AI agents - Upsonic/Upsonic
- servers/src/everything/sse.ts at main · modelcontextprotocol/servers: Model Context Protocol Servers. Contribute to modelcontextprotocol/servers development by creating an account on GitHub.
- GitHub - jmagar/yarr: model context protocol ARR server: model context protocol ARR server. Contribute to jmagar/yarr development by creating an account on GitHub.
- GitHub - cookiecad/mcp-runner: A TypeScript SDK for running MCP (Model Context Protocol) servers with process reuse capabilities: A TypeScript SDK for running MCP (Model Context Protocol) servers with process reuse capabilities - cookiecad/mcp-runner
- GitHub - modelcontextprotocol/typescript-sdk: The official Typescript SDK for Model Context Protocol servers and clients: The official Typescript SDK for Model Context Protocol servers and clients - modelcontextprotocol/typescript-sdk
- goose/crates/goose-mcp/src/computercontroller/mod.rs at main · block/goose: an open-source, extensible AI agent that goes beyond code suggestions - install, execute, edit, and test with any LLM - block/goose
Latent Space ▷ #ai-general-chat (96 messages🔥🔥):
Qwen 2.5-Max Launch, DeepSeek R1 vs. Qwen 2.5, Open Source AI Developments, TSMC Tariffs Impact on AI, Huawei Chips in AI Applications
- Qwen 2.5-Max Launch Announcement: The new model, Qwen 2.5-Max, was released and reportedly outperforms DeepSeek V3 in multiple benchmarks, including Arena Hard and LiveBench.
- Developers can access it through Alibaba Cloud's API and test it via the Qwen Chat platform.
- DeepSeek R1 vs. Qwen 2.5 Discussion: Community members discussed the differences in reasoning models, noting that while DeepSeek R1 displays a clear chain of thought, others like Flash Thinking might not.
- The introduction of structured reasoning tokens in R1 showcased significant differences compared to other models, raising questions about SFT's impact on coherence.
- Open Source AI Developments: New initiatives like the Open Thoughts project aim to create high-quality reasoning datasets to enhance model performance within the AI community.
- Additionally, YuE, an open-source music generation model, was released, showcasing the growing trend of accessible AI resources.
- TSMC Tariffs and AI Manufacturing: Recent news highlighted plans for tariffs on Taiwan-made chips, which could incentivize companies like TSMC to build fabs in the U.S., albeit with delayed effects due to the time required for construction.
- Discussions centered on the implications of tariffs for U.S. manufacturing and training skilled workers domestically.
- Huawei Chips in DeepSeek Applications: DeepSeek reportedly trained on Nvidia H800 chips but is now running inference on Huawei's new 910C chips, marking a significant shift in their operational capabilities.
- This transition underlines the increasing importance of domestic chip production amidst global supply chain challenges.
- Tweet from Alexander Doria (@Dorialexander): I feel this should be a much bigger story: DeepSeek has trained on Nvidia H800 but is running inference on the new home Chinese chips made by Huawei, the 910C.
- Tweet from Mark Chen (@markchen90): Congrats to DeepSeek on producing an o1-level reasoning model! Their research paper demonstrates that they’ve independently found some of the core ideas that we did on our way to o1.
- Tweet from Qwen (@Alibaba_Qwen): 🎉 Wishing you prosperity! 🧧🐍 As we welcome the Chinese New Year, we're thrilled to announce the launch of Qwen2.5-VL, our latest flagship vision-language model! 🚀💗 Qwen Chat: https://chat.qwenlm.ai 📖 Blog: http...
- Tweet from Qwen (@Alibaba_Qwen): The burst of DeepSeek V3 has attracted attention from the whole AI community to large-scale MoE models. Concurrently, we have been building Qwen2.5-Max, a large MoE LLM pretrained on massive data and ...
- Tweet from AK (@_akhaliq): Qwen2.5-Max just one shotted thisprompt: write a script for three bouncing yellow balls within a sphere, make sure to handle collision detection properly. make the sphere slowly rotate. make sure ball...
- Trump to impose 25% to 100% tariffs on Taiwan-made chips, impacting TSMC: no description found
- YuE: Multimodal Art Projection
- Tweet from BlinkDL (@BlinkDL_AI): Nvidia will gradually obsolete, because we will be running 1T+ params OSS models in our phone (instead of datacenters) within 3-5 years, while still having good battery life (!).It will be RWKV-type m...
- Tweet from Aravind Srinivas (@AravSrinivas): And uncensored!Quoting David Sacks (@DavidSacks) This is one of several ways that you can try DeepSeek R1 without downloading the app or sharing any data with a Chinese company.
- Tweet from Peiyi Wang (@sybilhyz): Last year, I joined DeepSeek with no RL experience. While conducting Mathshepherd and DeepSeekMath research, I independently derived this unified formula to understand various training methods. It fel...
- Tweet from Guillermo Rauch (@rauchg): Build agents with @aisdkhttps://sdk.vercel.ai/docs/ai-sdk-core/agents
- Tweet from Mahesh Sathiamoorthy (@madiator): We are announcing Open Thoughts, our large-scale open-source effort to curate the best open reasoning datasets!DeepSeek-R1 is amazing but we still don't have access to high-quality open reasoning ...
- Tweet from Dan Mac (@daniel_mac8): everyone comparing deepseek-r1 to o1and forgetting about Gemini 2 Flash Thinkingwhich is better than r1 on every cost and performance metric
- Tweet from AK (@_akhaliq): IT KEEPS GETTING BETTER: YuE (乐) open-source full-song music generation model that rivals Suno AI!It’s Hugging Face & LLAMA-compatible for easy fine-tuning.
- Tweet from Junyang Lin (@JustinLin610): Qwen2.5-Max is here. Looks good at benchmarks and I hope you guys can give it a try and see how you feel about this new model! Qwen Chat: https://chat.qwenlm.ai (choose Qwen2.5-Max for the model)API ...
- Tweet from Jordi Pons (@jordiponsdotme): On YuE (乐), the new open-source full-song music generation model 🎵- Text & lyrics conditioning- Long-form music generation, up to 5 min- License: CC BY-NC 4.0- 7B models- English, Chinese, Japanese, ...
- Tweet from Vaibhav (VB) Srivastav (@reach_vb): LMAO Qwen 2.5 VL can perform Computer Use, out of the box, taking on OpenAI Operator HEAD ON! 🐐
- Tweet from vLLM (@vllm_project): 🚀 With the v0.7.0 release today, we are excited to announce the alpha release of vLLM V1: A major architectural upgrade with 1.7x speedup! Clean code, optimized execution loop, zero-overhead prefix c...
- Why o3-mini *had* to be free: the coming DeepSeek R1, 2.0 Flash, and Sky-T1 Price War: 2025's biggest surprise so far: Reasoning is less of a moat than anyone thought.
- Tweet from swyx /dd (@swyx): updated price-elo pareto frontier with deepseek v3/r1 and gemini 2 flash thinking 2 resultsnotes:- o1-mini and o3-mini are going to have to DRASTICALLY cut prices (like, at least 25x) to keep up. it i...
- Mixture-of-Experts (MoE) LLMs: Understanding models like DeepSeek, Grok, and Mixtral from the ground up...
- Agents: Learn how to build agents with AI SDK Core.
- Would you want to make a leaderboard for this? · Issue #10 · carlini/yet-another-applied-llm-benchmark: Hi! Super cool work! I'm a researcher at HuggingFace working on evaluation and leaderboards. I understand that this cool eval suite is first and foremost there to evaluate use cases that you perso...
- Reddit - Dive into anything: no description found
- Jack Donaghy 30rock GIF - Jack Donaghy 30Rock Alec Baldwin - Discover & Share GIFs: Click to view the GIF
- The Short Case for Nvidia Stock: All the reasons why Nvidia will have a very hard time living up to the currently lofty expectations of the market.
- Rise of the A2A economy: How AI agent-to-agent interactions will reshape the world: At Sendbird, we’ve been actively working with companies across the globe that are considering the adoption of conversational AI to enhance their customer service. During this process, we’ve encount…
Notebook LM Discord ▷ #announcements (1 messages):
NotebookLM Collaboration Features, User Feedback, Product Interviews, Survey Participation
- NotebookLM seeks user input for collaboration improvements: The team has received feedback requesting better sharing and collaboration functionality within NotebookLM and is organizing 30-minute interviews to gather more insights.
- Interested members are encouraged to fill out the survey to participate in shaping the product.
- Enhanced user controls for sources and notes: Users will soon have more control over their sources with the ability to edit, add, and delete sources, as well as comment on them.
- Additionally, functionality will include editing audio overviews, making editable copies for personal use, and customizing notebook settings.
Link mentioned: Sharing and collaboration in NotebookLM: Hello! Thanks for filling out this form. We are looking to learn a bit more about how to make the sharing and collaboration functionality in NotebookLM better and we would love to hear from you! If yo...
Notebook LM Discord ▷ #use-cases (17 messages🔥):
NotebookLM customization, DeepSeek AI advancements, Voice synthesis inconsistencies, Using large documents with LLM, Character traits in AI prompts
- NotebookLM Customization Enables Dynamic Outputs: A member shared that they achieved varied outputs by customizing prompts in NotebookLM for specific content, style, and tone.
- They highlighted the flexibility of commands but noted inconsistencies in audio output every 60 seconds.
- Rax Exposes DeepSeek's AI Revolution: A cyberpunk raccoon named Rax delivered a message on Times Square's billboards regarding DeepSeek, a Chinese startup's new AI assistant, raising concerns in the tech industry.
- Their actions led to significant market upheaval, wiping over $700 billion from major tech firms' valuations, as detailed in a YouTube exposé.
- Community Seeking Help with Cursor AI: A user expressed interest in learning Cursor AI to streamline IT project initiation and is looking for assistance.
- The inquiry demonstrated a focus on eliminating dependency on developers at project start, emphasizing its business applicability.
- Challenges with Large Document Integration: A user inquired about using two extensive textbooks with NotebookLM for environmental engineering discussions, seeking guidance on feasibility.
- Response emphasized that large sources may complicate specific queries, advising the breakdown of documents to circumvent the needle in the haystack issue.
- Inconsistencies in Voice Synthesis: A user sought to understand how to achieve consistent voice outputs in NotebookLM after experiencing fluctuations in character voices.
- Another member explained that NotebookLM can incorporate specific traits, but rhythmic inconsistencies may still occur.
- Frequently Asked Questions - NotebookLM Help: no description found
- 🚨 Rax the Cyberpunk Raccoon Exposes DeepSeek's AI Revolution! 🚨: In a daring move, Rax the cyberpunk raccoon has infiltrated Times Square's LED billboards to deliver an unfiltered message about DeepSeek's groundbreaking AI...
Notebook LM Discord ▷ #general (77 messages🔥🔥):
User Role Clarification, Podcast Features and Limitations, NotebookLM Language and Export Issues, Gemini and Audio Generation, Citation and Reference Management
- Understanding the 'User' Role in Profiles: Users inquired about the 'user' role visible in profiles, particularly in the context of moderator permissions, with a suggestion that these are tied to different Discord groups.
- Clarifications were provided about varying roles, though specific details remained elusive.
- Podcast Features Still Lacking: A user expressed frustration over the inability to customize podcast durations, accents, or generate different content effectively, highlighting limitations in current features.
- While users can generate their own podcasts, many desired enhancements and clarity regarding public or private status remain unaddressed.
- Challenges with Language and Export Capabilities: Participants noted issues with generating notes in languages other than English, and there was confirmation that external document export features are currently unavailable.
- Suggestions like adjusting Google account settings were mentioned to potentially resolve language output concerns.
- Excitement About Gemini Integration: Discussion arose regarding the integration of Gemini 2.0 Flash into NotebookLM, with anticipation for further features like Deep Research enhancements.
- Speculation about the potential for Gemini Pro's integration sparked interest, though details were still uncertain.
- Need for Citation Features in NotebookLM: Users expressed frustration over the manual process of adding citations while highlighting the importance of having automated citation capabilities.
- Calls for improvements in citation management reflect a desire for better functionality to support academic work.
- Illuminate | Learn Your Way: Transform research papers into AI-generated audio summaries with Illuminate, your Gen AI tool for understanding complex content faster.
- Reddit - Dive into anything: no description found
GPU MODE ▷ #general (22 messages🔥):
Minimizing Startup Times for LLMs, Optimizations for Model Loading, Utilizing Modal's RAM Snapshots, GPU Direct Storage (GDS) Considerations, Torch Distributed Package
- Exploring Options to Minimize Startup Times: Discussion centered around options for minimizing startup times in a serverless LLM inference environment, specifically for loading models like DeepSeek-r1.
- Concerns were raised about 2-minute load times for a ~128GB model, prompting inquiries into latency contributors and potential optimizations.
- Recommendations for Model Loading Optimization: Suggestions were made to explore techniques like manually saving model weights into torch state dicts and using `torch.load` for efficient loading.
- A user indicated that if using PyTorch, they would optimize by initializing on `torch.device('meta')` and loading in parallel (a minimal sketch follows this channel's links).
- Leveraging RAM Snapshots for Cold Starts: One member highlighted the potential for drastic improvements in cold start times by utilizing RAM snapshots offered by Modal for initialization.
- This raised discussions about how modal memory snapshots may help with cold starts and reduce latency.
- Testing GPU Bandwidth and Loading Speeds: It was asserted that with 4 L40s and ideal disk configurations, loading times could theoretically be as low as a few seconds if utilizing available bandwidth properly.
- The conversation noted that good NVMe disks should allow for loading times around 10-15 seconds if handled efficiently.
- Industry Standards with Torch Distributed Package: Relevant tools like the torch.distributed package came into the conversation as a part of common practices in efficient model handling with PyTorch.
- This established a baseline expectation for members unfamiliar with certain functionalities and the efficiency methods typically used in distributed systems.
- Cold start performance: Modal Functions are run in containers.
- modal-examples/06_gpu_and_ml/llm-serving/llama_cpp.py at main · modal-labs/modal-examples: Examples of programs built using Modal. Contribute to modal-labs/modal-examples development by creating an account on GitHub.
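As a concrete illustration of the loading tips above, a minimal sketch (assumes a recent PyTorch with `mmap` and `assign` support; the checkpoint path and layer sizes are hypothetical):

```python
import torch
import torch.nn as nn

# Build the module skeleton on the meta device (no memory is allocated or
# copied), then materialize real weights from a saved state dict.
# "checkpoint.pt" and the layer sizes are hypothetical stand-ins.
with torch.device("meta"):
    model = nn.Sequential(nn.Linear(8192, 8192), nn.ReLU(), nn.Linear(8192, 8192))

# mmap=True avoids reading the whole file up front; assign=True lets the
# meta-initialized module adopt the loaded tensors directly.
state = torch.load("checkpoint.pt", map_location="cpu", mmap=True)
model.load_state_dict(state, assign=True)
```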
GPU MODE ▷ #cuda (16 messages🔥):
Grace Hopper architecture, Jupyter Lab setup issues, CUDA pointer alignment, H100 PCIe/SXM card, GH200 rental rates
- Interest in Grace Hopper Architecture: A member highlighted interest in the Grace Hopper architecture, which involves tight integration of CPU and GPU components, separate from existing Hopper Chips.
- This architecture aims to improve performance metrics and reduce latency in applications needing rapid computations.
- Jupyter Lab Installation Troubleshooting: A member encountered an ImportError after installing Jupyter Lab, attributed to environment setup issues stemming from module load failures.
- Suggestions included creating a fresh virtual environment, ensuring CUDA compatibility, and installing necessary packages such as ninja and cmake.
- Understanding CUDA Pointer Alignment: Questions arose around the alignment of dereferenced pointers to doubles in CUDA, where misalignment could lead to undefined behavior.
- A member advised ensuring that the load is aligned to 64 bits for safe access, noting that cudaMalloc returns pointers aligned to 256 bytes (a quick check is sketched after this channel's link).
- Rental Pricing for GH200: One member informed about the availability of GH200 for rent at $1.5/hr, hinting at potential cost-effective solutions for GPU usage.
- Such options could boost access to advanced hardware for temporary projects without hefty investments.
- H100's Deployment Model: Discussion mentioned that the H100 should function as a typical PCIe/SXM card, diverging from specific architectural implementations like Grace Hopper.
- This standardization could help in familiarizing users with its integration and performance expectations.
Link mentioned: PyTorch: no description found
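A quick way to see the alignment point in practice (assumes a CUDA-capable machine; sizes are arbitrary):

```python
import torch

# Fresh CUDA allocations come via the caching allocator, which preserves
# cudaMalloc's generous alignment (at least 256 bytes).
x = torch.empty(1024, dtype=torch.float64, device="cuda")
print(x.data_ptr() % 256)            # expected: 0

# Slicing offsets the pointer by one 8-byte double: still 64-bit aligned
# (safe to dereference as a double), but no longer 256-byte aligned.
y = x[1:]
print(y.data_ptr() % 8, y.data_ptr() % 256)  # expected: 0 8
```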
GPU MODE ▷ #torch (9 messages🔥):
FP8 Conversion, FP8 Stochastic Rounding, PyTorch GB200 Support, CUDA 12.8 Compatibility
- FP8 Conversion Goals Shared: A user is aiming to learn about FP8 and convert bfloat16 to FP8 with proper stochastic rounding, sharing their code repository here.
- They encountered issues with sign bits flipping during tensor manipulation (a stochastic-rounding sketch follows this channel's links).
- Stochastic FP8 Resources Shared: A member pointed out that torchao has utilities for converting FP32 to FPx with links to the relevant code.
- This sparked acknowledgment and gratitude from the original user engaging in FP8 discussions.
- Future LUT and Performance Considerations: A user shared a TODO note about checking if a Lookup Table (LUT) for all FP values is faster than bit shifting, particularly for fp4 which has only 16 unique values.
- This comment was recognized as interesting, contributing to the ongoing conversation about performance optimizations.
- Inquiries on PyTorch Support for GB200: A member inquired about the status of running PyTorch on GB200 GPUs, referencing discussions about support and the need to build against CUDA 12.8.
- They wondered if there are existing containers available for use with this setup.
- GitHub - Muhtasham/fp8-auto: FP8 stochastic rounding: FP8 stochastic rounding. Contribute to Muhtasham/fp8-auto development by creating an account on GitHub.
- ao/torchao/prototype/custom_fp_utils.py at e151d6a5288177a1a635c71fecd145654745af4c · pytorch/ao: PyTorch native quantization and sparsity for training and inference - pytorch/ao
- GitHub - pytorch/ao at e151d6a5288177a1a635c71fecd145654745af4c: PyTorch native quantization and sparsity for training and inference - GitHub - pytorch/ao at e151d6a5288177a1a635c71fecd145654745af4c
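To give the stochastic-rounding discussion something concrete, a minimal sketch (assumes a PyTorch build with `torch.float8_e4m3fn`; it covers only the normal range and ignores subnormals and NaNs, so it illustrates the idea rather than the repo's exact method):

```python
import torch

def fp8_e4m3_stochastic(x: torch.Tensor) -> torch.Tensor:
    # Stochastically round to the FP8 e4m3 grid: with 3 mantissa bits, the
    # spacing (ulp) at magnitude 2^e is 2^(e-3); adding uniform noise before
    # flooring rounds down or up with probability proportional to distance.
    x = x.float()
    sign = torch.sign(x)                          # zero inputs stay zero
    mag = x.abs().clamp(min=2.0**-6, max=448.0)   # normal range of e4m3fn
    exp = torch.floor(torch.log2(mag))
    ulp = torch.exp2(exp - 3)
    q = ulp * torch.floor(mag / ulp + torch.rand_like(mag))
    return (sign * q).to(torch.float8_e4m3fn)

print(fp8_e4m3_stochastic(torch.tensor([0.1, -3.7, 100.0])))
```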
GPU MODE ▷ #cool-links (2 messages):
DeepSeek-R1 Release, Matrix Multiplication Challenge
- DeepSeek-R1 Hits the AI Scene: The latest release, DeepSeek-R1, is an open weights model featuring smaller, distilled versions, allowing easier access for R&D in ML disciplines.
- The post elaborates on its training method, which can help reproduce reasoning models akin to OpenAI's O1.
- Class Takes on Matrix Multiplication Challenge: A professor shared a fun challenge where his entire class multiplied matrices by hand on a shared Excel spreadsheet, finishing in a brisk 3 minutes 30 seconds.
- This live activity not only fostered collaboration but also showcased students' practical skills in matrix operations.
- The Illustrated DeepSeek-R1: A recipe for reasoning LLMs
- Tweet from Tom Yeh (@ProfTomYeh): I challenged my entire class of students to multiply matrices by hand ✍️ together live on a shared Excel spreadsheet. It took us 3 minutes 30 seconds. 👇 more.
GPU MODE ▷ #bitnet (1 messages):
Tile Lang, BitBLAS repo
- Tile Lang finally sees the light!: A member expressed excitement for the release of Tile Lang, noting its mention in commits back in October within the BitBLAS repo.
- They hope to leverage Tile Lang for coding efficient backward kernels that are currently missing in BitBLAS.
- Hope for Efficient Backward Kernels: The same member expressed their wish to finally code those efficient backward kernels that BitBLAS has been missing.
- This sentiment reflects a growing anticipation for the capabilities that Tile Lang may bring to the project.
GPU MODE ▷ #thunderkittens (1 messages):
ThunderKittens Improvements, Testing Kernel Performance, Generalization of Tests
- Contributing to ThunderKittens Kernel Testing: A member plans to enhance the testing framework for ThunderKittens by improving the gentests.py file found here.
- The aim is to extend compatibility to any M, N, K dimensions and to directly compare the TK scaled_mm kernel against torch._scaled_mm.
- Current Limitations in Testing: Currently, the tests in gentests.py are limited to cases where M=N=K, functioning only under these conditions.
- The improvement seeks to address this narrow scope, allowing for broader testing scenarios and performance assessments.
Link mentioned: ThunderKittens/kernels/torch_scaled/gentests.py at main · HazyResearch/ThunderKittens: Tile primitives for speedy kernels. Contribute to HazyResearch/ThunderKittens development by creating an account on GitHub.
GPU MODE ▷ #arc-agi-2 (38 messages🔥):
Reasoning Gym Pull Request, Apple Dataset License Concerns, Generating Templates for GSM8K, OpenRLHF Trial Runs, Multi-Licensing and Copyright
- Dependencies Delay Reasoning Gym Merge: The recent Pull Request #11 aims to add support for CLRS tasks to Reasoning Gym but faces hesitations due to potential dependencies on Jax and TensorFlow which may not align with the project's goals.
- An alternative suggested is to copy over algorithms while retaining copyright notices to avoid dependency issues.
- Navigating Apple Dataset Legalities: There are concerns regarding using the Apple dataset due to its copyright, which appears incompatible with Reasoning Gym's Apache 2.0 license, raising the need to replicate the dataset instead.
- Members discussed strategies to adhere to the licensing terms, emphasizing the importance of retaining copyright notices while utilizing ideas from datasets like GSM8K.
- GSM8K Template Generation Strategy: The team agreed to begin generating new templates based on existing GSM8K templates, aiming for around 100 variations to allow for manual verification of correctness.
- This process involves defining template questions, variable assignments, and functions to compute correct answers based on the structure of existing questions (a toy template is sketched after this channel's links).
- Initial Trials with OpenRLHF: Member shared the first runs of Proximal Policy Optimization (PPO) with OpenRLHF for trial sum-chain on the Llama 3B model, tracking the results on WandB.
- Details about the trial help inform further steps and adjustments for future iterations of model training.
- Multi-Licensing Discussion: Concerns about multi-licensing were voiced, as team members seek to simplify dependency structures and ensure alignment with project licenses.
- The discussion led to an exploration of using language models to generate direct templates from copyrighted data without violating licensing terms.
- andreaskoepf: Weights & Biases, developer tools for machine learning
- grade-school-math/LICENSE at master · openai/grade-school-math: Contribute to openai/grade-school-math development by creating an account on GitHub.
- Add support for all CLRS tasks by panispani · Pull Request #11 · open-thought/reasoning-gym: CLRS is the classic textbook on algorithms.Deepmind introduced a CLRS benchmark which also includes a text version of most of the classical algorithms called CLRS-text. In this PR, I ported all th...
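To make the template strategy concrete, a toy example of the described shape (the question text and helper are invented, not from the actual Reasoning Gym code):

```python
import random

# A parameterized question plus a function that computes its ground-truth
# answer; sampling many variable assignments yields verifiable variations.
TEMPLATE = "{name} has {a} apples and buys {b} more. How many apples does {name} have now?"

def sample_instance(rng: random.Random):
    values = {
        "name": rng.choice(["Ava", "Sam", "Noor"]),
        "a": rng.randint(2, 20),
        "b": rng.randint(2, 20),
    }
    question = TEMPLATE.format(**values)
    answer = values["a"] + values["b"]
    return question, answer

q, a = sample_instance(random.Random(0))
print(q, "->", a)
```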
LLM Agents (Berkeley MOOC) ▷ #mooc-questions (60 messages🔥🔥):
Fall Semester Class SP24, Lecture Slides Availability, Research Track Eligibility, Hackathon Information, Application Track Collaborations
- Fall Semester Class SP24 not offered asynchronously: A member inquired about the return of the Fall Semester Class for SP24 and the possibility of asynchronous learning and certificates, to which it was clarified that there will not be asynchronous certificates available.
- The response indicated that the class may be taught again in future semesters.
- Lecture slides may be shared post-lecture: Questions arose about the availability of lecture slides before sessions, with confirmation that slides are typically shared after the lecture.
- It was noted that the instructor would attempt to add slides to the course website ahead of time when possible.
- MOOC students' eligibility for research track: Queries about whether MOOC students can enroll in the research track were posed, with assurances that details about this would be released soon.
- It was clarified that MOOC students are still eligible for certificates by completing signup forms.
- No hackathon scheduled for this semester: A user asked about a hackathon with intentions to team up, to which it was stated that no hackathon is planned for this semester.
- Further discussions will clarify project policies concerning MOOC students in the future.
- Collaboration in application track allowed: Members inquired about teaming up for the application track, and confirmation was provided that groups can consist of 3-4 students.
- Details regarding the application track and MOOC curriculum will be forthcoming.
Link mentioned: CS294/194-280 Advanced Large Language Model Agents: Spring 2025
LLM Agents (Berkeley MOOC) ▷ #mooc-lecture-discussion (12 messages🔥):
Slide Deck Availability, YouTube Lecture Feedback, Using NotebookLM for Research Tools
- Slide Deck Now Online: A member inquired if they could access the slide deck, to which another replied that it is online already.
- You can check the platform for the latest materials.
- Concerns Over Long YouTube Lectures: A member expressed frustration about the length of a YouTube lecture being over 4 hours, with comments indicating the actual content only starts after 35 minutes.
- Another member confirmed that they are in the process of editing the video to remove unnecessary parts for clarity.
- Using NotebookLM for Research Assistance: A student sought guidance on using NotebookLM for research, noting limited experience with research tools.
- Another member explained that by simply uploading a PDF, it generates a podcast-style conversation to make concepts easier to understand.
- Google NotebookLM | Note Taking & Research Assistant Powered by AI: Use the power of AI for quick summarization and note taking, NotebookLM is your powerful virtual research assistant rooted in information you can trust.
- CS 194/294-280 (Advanced LLM Agents) - Lecture 1, Xinyun Chen: no description found
Nomic.ai (GPT4All) ▷ #general (42 messages🔥):
Chat Template Errors, DeepSeek Implementation, GPT4All Roadmap, Model Options in GPT4ALL, LocalDocs File Uploads
- Chat Template Errors Persist: Users reported syntax errors with chat templates, leading to discussions about potential fixes and configurations using Jinja templates.
- One user shared a corrected Jinja template that appeared to work better for setting up chat roles (a generic template of this shape is sketched after this channel's links).
- DeepSeek Integration Explored: Several users discussed running DeepSeek's various versions in GPT4All, with mixed success regarding model compatibility and remembering chat history.
- A user provided a Hugging Face link for another DeepSeek model version with instructions for running it.
- GPT4All Roadmap Uncertainty: Queries were raised about the potential roadmap for GPT4All, with some users expressing frustration over unaddressed feature requests after years of asking.
- A member remarked on the likelihood of certain features, such as Chain of Thought (CoT) implementation, being pursued by developers.
- LocalDocs File Format Challenges: Users noted difficulty uploading XLSX files to LocalDocs, with the extension being removed but files still uploadable in chat.
- There was an inquiry about the format limitations and whether future updates might allow XLSX files to remain intact.
- Web Search Beta Release Queries: One user inquired if ongoing development for the Web Search feature in GPT4All is still active, pointing to GitHub's documentation.
- Others expressed interest in the functionality and requested updates on its progress.
- bartowski/DeepSeek-R1-Distill-Llama-8B-GGUF · Hugging Face: no description found
- How to Run DeepSeek-R1 Locally | The FREE Open-Source Reasoning AI: Learn how to run DeepSeek-R1, a powerful open-source reasoning AI, locally on your machine using Ollama. This step-by-step guide shows you how to harness R1'...
- Web Search Beta Release: GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use. - nomic-ai/gpt4all
- Ein Tag in Berlin 1943 – Der Passfälscher Cioma Schönhaus: In 1943, Berlin was to be made "free of Jews" by Adolf Hitler's order. Schönhaus is one of the tens of thousands of Jews threatened with deportation.
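For illustration, a generic chat template of the kind discussed, rendered with the `jinja2` package (this is a made-up minimal template, not the one shared in the channel):

```python
from jinja2 import Template

# Generic chat-template shape: iterate over messages, wrap each in role
# markers, then open the assistant turn for the model to complete.
CHAT_TEMPLATE = (
    "{% for message in messages %}"
    "<|{{ message['role'] }}|>\n{{ message['content'] }}\n"
    "{% endfor %}"
    "<|assistant|>\n"
)

prompt = Template(CHAT_TEMPLATE).render(
    messages=[{"role": "system", "content": "You are helpful."},
              {"role": "user", "content": "Hello!"}]
)
print(prompt)
```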
Torchtune ▷ #dev (31 messages🔥):
Running Distributed Recipes, Issues with 'torchrun', Distributed Init Protocols, Multinode Setup on Mac, Debugging Torch Distributed
- Troubles with running distributed recipes: A user reported issues when running distributed recipes with torchtune, leading to import errors related to missing modules and c10d errors.
- I can run other dummy scripts using torch.distributed and torchrun just fine suggests that the issues may be recipe-specific.
- Using torchrun without Tune: A member sought guidance on executing `torchrun` without using torchtune, ultimately finding success with a corrected command format.
- This adjustment resolved the previous error regarding NCCL and allowed for successful execution of the distributed script.
- Questions on distributed init protocols: Discussions arose around the specified distributed init command `init_process_group('cuda:nccl,cpu:gloo')` and its implications on a Mac setup (a minimal init sketch follows this channel's links).
- Concerns were raised that this setup could lead to issues, with commentary suggesting that leaving the protocol unspecified might yield better results.
- Multinode setup and debugging challenges: Participants joked about the idea of 'daisy chained Mac minis' leading to complications in distributed setups and the need for easier debugging methods.
- User experiences highlighted the challenges of debugging without SSH and the perception that distributed systems can be temperamental.
- Documentation gaps in distributed APIs: Members expressed frustrations over the poor documentation of the distributed APIs leading to confusion and errors during implementation.
- The sentiment was shared that better documentation might alleviate some of the complexities faced when handling distributed processes with PyTorch.
- pytorch/torch/distributed/distributed_c10d.py at 56915b093a20b2fbd4d6f79f100670c6c496d8b3 · pytorch/pytorch: Tensors and Dynamic neural networks in Python with strong GPU acceleration - pytorch/pytorch
- pytorch/torch/distributed/run.py at a08f7f326637adb77ee2ba996425c67a7305f97e · pytorch/pytorch: Tensors and Dynamic neural networks in Python with strong GPU acceleration - pytorch/pytorch
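For reference, a minimal sketch of the device-specific backend syntax discussed above (assumes a `torchrun` launch, which sets `RANK`, `WORLD_SIZE`, and `LOCAL_RANK` in the environment):

```python
import os
import torch
import torch.distributed as dist

def init_distributed():
    # "cuda:nccl,cpu:gloo" maps each device type to a backend; on machines
    # without CUDA (e.g. a Mac), falling back to plain gloo avoids NCCL.
    if torch.cuda.is_available():
        dist.init_process_group(backend="cuda:nccl,cpu:gloo")
        torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
    else:
        dist.init_process_group(backend="gloo")

if __name__ == "__main__":
    init_distributed()  # launch with: torchrun --nproc-per-node=2 script.py
    print(f"rank {dist.get_rank()} of {dist.get_world_size()}")
```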
Torchtune ▷ #papers (1 messages):
Model Comparisons, Image Analysis
- Questioning Current Model Relevance: A member raised a concern questioning whether all the models being compared are outdated.
- Aren’t all the models it’s comparing against old?
- Image Analysis Shared: An image was attached for analysis, indicating a visual component to the ongoing discussion.
- The image may hold key insights regarding the models being referenced but was not further elaborated on.
LlamaIndex ▷ #blog (2 messages):
DeepSeek-R1 API Integration, SOFTIQ SaaS App, Tender Analysis Efficiency
- DeepSeek-R1 API Joins LlamaIndex: LlamaIndex announced its first-party integration with the DeepSeek-R1 API, enabling features with models like `deepseek-chat` and `deepseek-reasoner` for enhanced functionalities.
- To get started, users can simply install the integration with `%pip install llama-index-llms-deepseek` (a usage sketch follows this channel's link).
- SOFTIQ Revolutionizes Tender Processes: The new SOFTIQ SaaS app leverages LlamaIndex Workflows to drastically cut analysis time for public sector tenders to under 10 minutes each.
- This innovative approach boosts selection accuracy, helping construction companies reduce wasted effort, as noted in their promotional materials.
Link mentioned: DeepSeek - LlamaIndex: no description found
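A short usage sketch of the new integration (the `DeepSeek` class follows the package's documented pattern, but treat the exact names as assumptions; the key and prompt are placeholders):

```python
# %pip install llama-index-llms-deepseek
from llama_index.llms.deepseek import DeepSeek

# Model name and API key are placeholders for illustration only.
llm = DeepSeek(model="deepseek-reasoner", api_key="YOUR_API_KEY")
response = llm.complete("Briefly explain what a reasoning model is.")
print(response)
```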
LlamaIndex ▷ #general (18 messages🔥):
LlamaReport Documentation, Pull Request Review, RAG Retrieval in Reasoning Models, FastAPI Event Streaming
- LlamaReport documentation is forthcoming: Members discussed the status of the LlamaReport documentation, indicating it is currently being prepared and will be published soon.
- For now, a member shared a Twitter link for more information.
- Pull Request for Dead Link Removal: A member requested a review for their Pull Request which removes a dead link from the fine-tuning.md documentation.
- They noted that this is a simple one-line change due to the link's absence from the repository.
- Challenges with RAG Retrieval in Reasoning Models: A member asked about triggering RAG retrieval within reasoning model steps, citing the Search-o1 paper as a reference.
- Another responded that the approach would involve streaming responses, interrupting for searches, and inserting results back into the new stream (an interrupt-and-search loop is sketched after this channel's links).
- FastAPI Workflow for Event Streaming: A member sought guidance on returning streamed data as JSON in FastAPI, aiming to improve response formats in Postman.
- The community suggested yielding Pydantic objects as dictionaries or JSON strings using an async generator to ensure proper formatting (see the second sketch after this channel's links).
- Search-o1: Agentic Search-Enhanced Large Reasoning Models: no description found
- Building Blocks of LLM Report Generation: Beyond Basic RAG — LlamaIndex - Build Knowledge Assistants over your Enterprise Data: LlamaIndex is a simple, flexible framework for building knowledge assistants using LLMs connected to your enterprise data.
- removed a dead link from fine-tuning.md in docs by Riddhimaan-Senapati · Pull Request #17652 · run-llama/llama_index: DescriptionRemoved a dead link in the fine-tuning.md in docs as the file is no longer present in the repository. This is the link that was removed: https://docs.llamaindex.ai/en/stable/optimizing/...
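For readers curious about the interrupt-and-search pattern, a heavily hedged sketch (the `llm` and `retriever` objects, their methods, and the `<search>` marker are hypothetical stand-ins, not LlamaIndex APIs):

```python
def answer_with_search(llm, retriever, prompt: str, max_rounds: int = 3) -> str:
    # Generate until the model emits a search marker, run retrieval, splice
    # the results back into the context, then resume generation.
    context = prompt
    for _ in range(max_rounds):
        out = llm.generate(context, stop=["</search>"])  # hypothetical API
        if "<search>" not in out:
            return context + out  # model finished without asking to search
        query = out.split("<search>", 1)[1].strip()
        docs = retriever.retrieve(query)  # hypothetical API
        results = "\n".join(d.text for d in docs)
        context += out + "</search>\n<results>\n" + results + "\n</results>\n"
    return context
```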
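And a minimal sketch of the FastAPI suggestion (assumes Pydantic v2 for `model_dump_json`; the event shape is invented):

```python
import asyncio
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from pydantic import BaseModel

app = FastAPI()

class Event(BaseModel):
    step: int
    message: str

async def event_stream():
    # Yield each Pydantic object as one JSON line (NDJSON) so clients
    # like Postman can parse the stream incrementally.
    for i in range(3):
        yield Event(step=i, message=f"chunk {i}").model_dump_json() + "\n"
        await asyncio.sleep(0.1)

@app.get("/stream")
async def stream():
    return StreamingResponse(event_stream(), media_type="application/x-ndjson")
```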
Modular (Mojo 🔥) ▷ #general (6 messages):
Documentation Status, Deepseek vs Modular Debate
- Documentation Downtime and Recovery: The team reported that the docs were down but are now back up and available, including the GPU package API documentation in nightly.
- Members expressed gratitude for the quick resolution and appreciated the update on the documentation status.
- Deepseek Claims Dominance Over Modular: A member commented that Deepseek seems to have outperformed Modular in achieving similar goals with Max and Mojo.
- However, others argued the two are fundamentally different, likening Modular's role to that of a tractor store supporting farmers rather than competing against them.
Modular (Mojo 🔥) ▷ #announcements (1 messages):
MAX repo changes, Mojo repo updates
- MAX and Mojo Repos Get Revamped: The nightly branch will be renamed to main, receiving the most frequent updates, while the stable branch will mirror the latest stable release, currently at 24.6.
- Each stable release will be tagged in GitHub, and all open pull requests will be adjusted to the correct branches following the transition.
- Instructions for Branch Transition: Developers on the current nightly branch should follow specific Git commands to switch to the new main branch after the change.
- Similarly, those on the current main branch need to execute commands to transition to the new stable branch once the update is applied.
Modular (Mojo 🔥) ▷ #mojo (10 messages🔥):
Documentation Outage, Mojo Code Issues, Garbage References in Code, Callback Capturing Behavior, String Captures Clobbered
- Docs Down, Team Working on It: The documentation went down, and the team worked to resolve the issue as quickly as possible; users took it in stride, with one joking, Patience is my middle name.
- The docs are back up now, much to the relief of the members.
- Avoiding Garbage References in Mojo Code: A user experienced issues with the `write_node` function in their Mojo code, where references to `self` became garbage. They managed to fix it by getting rid of the capturing callback functionality.
- Concerns were raised about whether the issues stemmed from the capturing behavior, as they referenced examples in their description.
written
andsize
are not captured correctly in their solution.- Another user shared a GitHub Gist link for others to try a simpler example to investigate the capturing issue further.
- Discussion on Clobbered String Captures: A user inquired if the problem could relate to issues where string captures might be getting clobbered in a closure or capturing function. This prompted further discussion on the intricacies of capturing within the code.
Link mentioned: tree.mojo: GitHub Gist: instantly share code, notes, and snippets.
tinygrad (George Hotz) ▷ #general (6 messages):
tinygrad PR 8781, Python CUDA Emulator for FP8, MathTrait and SimpleMathTrait Unification, Tests for View.stride() and View.flip(), Bounty questions
- Bounty for tinygrad PR 8781: A member highlighted PR #8781, which involves replacing stride with flip in the tinygrad codebase, offering a $100 bounty for completion.
- The PR aims to help newcomers get familiar with the project by working on relatively straightforward code changes.
- Complications of FP8 Implementation: A member questioned whether adding a Python CUDA emulator for FP8 was necessary, mentioning challenges related to memory storage of values.
- Specifically, they noted that `struct.pack` does not support FP8, similar to existing issues with bfloat16.
- Specifically, they noted that
- Refactoring for MathTrait and SimpleMathTrait: A clarification was sought regarding the unification bounty for
MathTrait
andSimpleMathTrait
, specifically if refactoring would involve delegating operations like log2 to MathTrait.- They provided a code snippet showing how
log2
inTensor
would delegate to MathTrait, while also asking if documentation must be preserved.
- They provided a code snippet showing how
- Queries on View.stride() Changes: Discussion arose about the complexity of making View.stride() return View.flip(), with members debating the simplicity of just passing all tests.
- It remains unclear if this change is sufficient or if further modifications are needed.
- GitHub - tinygrad/tinygrad: You like pytorch? You like micrograd? You love tinygrad! ❤️: You like pytorch? You like micrograd? You love tinygrad! ❤️ - GitHub - tinygrad/tinygrad: You like pytorch? You like micrograd? You love tinygrad! ❤️
- replace stride with flip ($100 bounty for finishing this) by geohot · Pull Request #8781 · tinygrad/tinygrad: no description found
tinygrad (George Hotz) ▷ #learn-tinygrad (10 messages🔥):
Tensor.isclose and Tensor.allclose methods, Negative stride interpretation, Git Branching tutorials for tinygrad
- Submitting Tensor.isclose and Tensor.allclose PR: A member submitted a PR for `Tensor.isclose` and `Tensor.allclose`, aiming to align with the functionality in torch by defining `isclose()` as `(self - other).abs() <= atol + rtol * other.abs()` (a standalone sketch follows this channel's link).
- However, tests are currently failing, raising questions about how to address the issues.
- Confusion Around Negative Stride Interpretation: One member discussed the notion that a negative stride could be viewed as a flip followed by a positive stride, particularly in the context of conv2d operations.
- They expressed uncertainty about whether the code indicates that strides are only {-1,1} or if it incorporates different stride values such as (3,2).
- Inquiry About the Term 'Swizzle': A member admitted to being confused about the term swizzle and questioned if there are any specifications or documentation regarding it.
- They seek clarity as they have been avoiding the term during their discussions.
- Proposal for Tinygrad Learning Resources: Another member suggested creating a tutorial resource similar to Learn Git Branching but tailored for tinygrad basics and puzzles.
- They referenced an existing repository dedicated to tinygrad tensor puzzles, highlighting the need for clear foundational learning materials.
Link mentioned: Learn Git Branching: An interactive Git visualization tool to educate and challenge!
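A standalone sketch of the proposed definition (assuming a recent tinygrad with `Tensor.abs` and `Tensor.all`; this mirrors the formula above rather than the PR's exact code):

```python
from tinygrad import Tensor

def isclose(a: Tensor, b: Tensor, rtol: float = 1e-5, atol: float = 1e-8) -> Tensor:
    # Elementwise closeness per the torch-style definition quoted above.
    return (a - b).abs() <= atol + rtol * b.abs()

def allclose(a: Tensor, b: Tensor, rtol: float = 1e-5, atol: float = 1e-8) -> bool:
    # Reduce the elementwise mask to a single Python bool.
    return bool(isclose(a, b, rtol, atol).all().item())

print(allclose(Tensor([1.0, 2.0]), Tensor([1.0, 2.0000001])))  # True
```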
Cohere ▷ #discussions (2 messages):
Greetings in the Discord, User Interaction
- Friendly hellos exchanged: Users greeted each other with a casual exchange in the channel, showcasing a welcoming environment.
- Hi Guys! and 👋 express a sense of community amongst the members.
- Informal engagement seen: The channel experienced minimal activity, with just a couple of messages indicating members' presence.
- These simple interactions maintain engagement within the community.
Cohere ▷ #api-discussions (8 messages🔥):
Model Response Quality, Error 500 from Classify Endpoint, Finetuned Model Details, Model Versioning, Command R+ Model Updates
- User reports decline in response quality: A user expressed concerns that the command r+ model is giving terse responses after reactivating a project that once performed well in September.
- Another member confirmed that the alias is still pointing to the same model, suggesting that nothing has changed since then.
- Error 500 encountered from Classify endpoint: A user reported experiencing an Error 500 from the Classify endpoint, prompting discussions about the model being used.
- After some troubleshooting, it was noted that the issue should be resolved, with a request for users to confirm if problems persist.
- Inquiry about finetuned model specification: In response to the error message, members inquired about the details of the finetuned model, specifically asking for the finetune ID.
- Clarifications were sought to determine if the finetuned model was the source of the issues being faced.
- Discussion on model versioning alternatives: A user queried on how to specify the model version effectively to potentially fix the response issues.
- Options were provided to upgrade to newer model versions like command-r-plus-08-2024 or command-r7b-12-2024 for improved performance (a pinning sketch follows below).
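A hedged sketch of pinning a dated model snapshot with the Cohere SDK (the client and chat call follow the v2 SDK's documented shape, but treat the exact names as assumptions; the prompt is a placeholder):

```python
import cohere

co = cohere.ClientV2()  # reads CO_API_KEY from the environment

# Pinning a dated snapshot such as "command-r-plus-08-2024" (rather than a
# floating alias) keeps behavior stable across provider-side updates.
response = co.chat(
    model="command-r-plus-08-2024",
    messages=[{"role": "user", "content": "Give me one sentence on RLHF."}],
)
print(response.message.content[0].text)
```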
LAION ▷ #general (6 messages):
Updating speech parameter settings, AI agent consultancy
- Discussion on Speech Parameter Adjustments: A member expressed a desire to improve speech outputs by adding more diverse parameter settings to enhance differentiation while keeping clarity and understandability.
- They referenced a Colab notebook for examples and asked another member for assistance.
- Seeking AI Agent Experts for Collaboration: A member named Francis announced a search for consultants and experts in AI agents to incorporate multi-agent solutions into marketing workflows for enhanced automation.
- They encouraged interested parties to connect via DMs or within the server to discuss potential collaborations.
Link mentioned: Google Colab: no description found
LAION ▷ #research (1 messages):
Compute budget claims, MoE training efficiency, Llama3 GPU hours comparison
- Skepticism about Compute Budget Truth: Questions arose regarding the compute budget claims, particularly whether a 600B+ parameter MoE model could be trained with just 2.7M GPU hours.
- Llama3 reportedly required 7.7M GPU hours, raising doubts about the feasibility of the proposed training efficiencies.
- Comparing Training Efforts of MoE and Llama3: Discussion highlighted that while the MoE model may benefit from being 8 bit and activating fewer parameters, its training budget still seems underestimated.
- Despite a 2x FLOPs advantage, the overall GPU hours needed for such a large model still feel too low to some members to be credible (a back-of-envelope check is sketched below).
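For readers who want to check the arithmetic themselves, a back-of-envelope sketch (the activated-parameter and token counts are DeepSeek-V3's publicly reported figures; the per-GPU throughput and utilization are assumptions):

```python
# Standard dense-training estimate: FLOPs ~= 6 * activated_params * tokens.
n_active = 37e9        # ~37B activated parameters (reported)
tokens = 14.8e12       # ~14.8T training tokens (reported)
train_flops = 6 * n_active * tokens        # ~3.3e24 FLOPs

h800_bf16_flops = 0.99e15                  # ~990 TFLOPS peak BF16 (assumed)
mfu = 0.40                                 # assumed utilization
gpu_hours = train_flops / (h800_bf16_flops * mfu) / 3600
print(f"{gpu_hours / 1e6:.1f}M GPU hours") # ~2.3M, the figure under debate
```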
Axolotl AI ▷ #general (4 messages):
H200 Sale Strategy, Multi Turn Kto Discussion
- H200 sold at impressive multiplier: One member boasted about selling the H200 for 16x the price of the 5090, noting it had 3.41x the VRAM.
- A laughing emoji echoed the sentiment from a fellow member, who confirmed they had done the same.
- Inquiring about Multi-Turn KTO: A member asked another member in the channel about the performance of multi-turn KTO.
- The inquiry reflected curiosity but drew no further discussion or responses.
OpenInterpreter ▷ #general (1 messages):
OpenInterpreter Skills, Import Skills Configuration
- OpenInterpreter Skills lost functionality: A member expressed frustration that their OpenInterpreter has skills it learned in the past but can no longer use them.
- They speculated that this issue may stem from the OpenInterpreter class setting `import_skills=False` by default (a minimal sketch of the workaround follows this section).
- Difficulty in mastering OpenInterpreter skills: The member shared their struggle, stating they have spent a lot of time trying to figure out why their skills aren't functioning.
- Their emotive response highlighted their ongoing struggle with this limitation.
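A minimal sketch of the workaround implied above, assuming the flag behaves as the member described (the import path and constructor argument are taken from their report, not verified here):

```python
# Hypothetical fix per the member's report: if OpenInterpreter really
# defaults to import_skills=False, enabling it explicitly should load
# previously learned skills again.
from interpreter import OpenInterpreter  # import path is an assumption

interpreter = OpenInterpreter(import_skills=True)
interpreter.chat("Use one of my previously learned skills.")
```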
OpenInterpreter ▷ #O1 (2 messages):
API base functionality, Source code modifications
- API Base Might Not Function Properly: A member expressed doubts that the API base would work effectively in the current context.
- The suggestion hints at underlying issues within the integration that require further investigation.
- Source Code Changes Necessary: A member recommended making changes to the source code in order to resolve existing issues with the software.
- This indicates a need for a deeper technical adjustment rather than superficial fixes.
Gorilla LLM (Berkeley Function Calling) ▷ #leaderboard (2 messages):
System prompts injection, Weights and biases for tracing
- Understanding System Prompts Injection: System prompts are injected via a standard metaprompt, with the function definitions supplied by the model_handler in Gorilla's repository (an illustrative sketch follows below).
- The GitHub page details the implementation and evaluation of LLMs for function calls, including a visual representation of the repository.
- Enhancing Tracing with Weights and Biases: A member shared a pro tip recommending Weights and Biases for better traceability during model evaluation (also sketched below).
- This approach allows inspecting full trajectories, providing deeper insights that might otherwise be missed.
Link mentioned: gorilla/berkeley-function-call-leaderboard/bfcl/model_handler/constant.py at main · ShishirPatil/gorilla: Gorilla: Training and Evaluating LLMs for Function Calls (Tool Calls) - ShishirPatil/gorilla
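For illustration, the injection pattern described above might look like the following; the metaprompt text and helper are stand-ins, not the actual constants from Gorilla's model_handler:

```python
# Illustrative sketch of the described pattern: tool schemas are
# serialized into a fixed metaprompt that becomes the system message.
# METAPROMPT is a stand-in, not the constant from Gorilla's repo.
import json

METAPROMPT = (
    "You are an expert in composing functions. Here is a list of "
    "functions in JSON format that you can invoke:\n{functions}"
)

def build_system_prompt(functions: list[dict]) -> str:
    """Inject the available tool schemas into the metaprompt."""
    return METAPROMPT.format(functions=json.dumps(functions, indent=2))
```

And a minimal sketch of the Weights and Biases tip, logging each turn of an evaluation run as a table so trajectories can be inspected afterwards (the project and column names are assumptions):

```python
import wandb

# Toy trajectory standing in for real eval output.
trajectory = [
    {"prompt": "What's 2+2?", "response": "calculator(expression='2+2')"},
]

run = wandb.init(project="bfcl-eval")  # project name is an assumption
table = wandb.Table(columns=["step", "prompt", "response"])
for step, turn in enumerate(trajectory):
    table.add_data(step, turn["prompt"], turn["response"])
run.log({"trajectories": table})
run.finish()
```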
DSPy ▷ #general (1 messages):
GitHub PR for DSPy, Poetry lock issue
- Fix poetry lock PR opened: An open Pull Request has been submitted to fix the poetry lock, addressing issue #6644.
- There's optimism in the community about the PR getting merged soon, as it aims to resolve a critical dependency issue.
- Community discusses the PR's importance: Members expressed that resolving the poetry lock issue is crucial for maintaining stability in the project, especially for further development activities.
- One member highlighted that the PR targets an ongoing challenge faced by users of the DSPy framework.
Link mentioned: Fix poetry lock by chenmoneygithub · Pull Request #6755 · stanfordnlp/dspy: resolve #6644
MLOps @Chipro ▷ #events (1 messages):
DeepSeek Performance, Cost Comparison with ChatGPT, Live Workshop, Real App Building
- DeepSeek outshines ChatGPT at a fraction of the cost: A new open-source model, DeepSeek, is outperforming ChatGPT and Claude in benchmarks while being 20-30 times cheaper.
- This revelation raises questions about potential market disruptions as tech giants express concern over this emerging competitor.
- Join the live workshop on DeepSeek: A free workshop is scheduled for Thursday, January 30 at 9:00 PM IST, showcasing live performance comparisons of DeepSeek against ChatGPT.
- Participants will also witness real-time application building and a detailed cost analysis highlighting how companies can save thousands.
- Testing DeepSeek in real-time: The workshop will feature live tests comparing reasoning, coding, and mathematical challenges between DeepSeek and ChatGPT.
- Attendees can expect to see firsthand demonstrations of DeepSeek's capabilities and advantages.
- Building apps with DeepSeek: The workshop promises a hands-on experience, allowing attendees to build their first DeepSeek-powered application live.
- The session aims to simplify integration of the V3 and R1 models for developers and tech enthusiasts (see the sketch below).
Link mentioned: What's the hype about DeepSeek?🐬 · Zoom · Luma: The AI world is in a frenzy! A new open-source model from China is outperforming ChatGPT and Claude in benchmarks, and it's 20-30 times cheaper. Is this the…
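For a taste of what such an integration looks like, here is a minimal sketch of calling the hosted V3 and R1 models. DeepSeek exposes an OpenAI-compatible API, so the standard openai client works against it; the key is a placeholder and the model names follow DeepSeek's public docs:

```python
# Minimal sketch: DeepSeek's hosted API is OpenAI-compatible, so the
# standard openai client can target it directly. Key is a placeholder.
from openai import OpenAI

client = OpenAI(api_key="sk-...", base_url="https://api.deepseek.com")

resp = client.chat.completions.create(
    model="deepseek-chat",  # V3; use "deepseek-reasoner" for R1
    messages=[{"role": "user", "content": "Explain MoE routing in one sentence."}],
)
print(resp.choices[0].message.content)
```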
Mozilla AI ▷ #announcements (1 messages):
FOSDEM 2025, Open-source collaboration
- Mozilla sponsors FOSDEM 2025: Mozilla is proudly sponsoring FOSDEM 2025 in Brussels on February 1st & 2nd, a fantastic and free event for developers.
- Attendees can expect to learn and connect with like-minded folks at this global gathering.
- Join the Discord coordination for FOSDEM: If you're planning to attend, check out our coordination thread on Discord for more information and to connect.
- Mozilla welcomes all open-source enthusiasts to meet up and collaborate during the event.