[AINews] Did Nvidia's Nemotron 70B train on test?
This is AI News! an MVP of a service that goes thru all AI discords/Twitters/reddits and summarizes what people are talking about, so that you can keep up without the fatigue. Signing up here opts you in to the real thing when we launch it 🔜
Patience for standard evals is all you need.
AI News for 10/15/2024-10/16/2024. We checked 7 subreddits, 433 Twitters and 31 Discords (228 channels, and 1716 messages) for you. Estimated reading time saved (at 200wpm): 218 minutes. You can now tag @smol_ai for AINews discussions!
Nvidia's Nemotron line has consistently succeeded at getting attention: we covered Nemotron 340B, Mistral-Nemo, and Minitron in recent months.
However, yesterday's Nemotron-70B is coming under a bit more scrutiny.
It's a very familiar pattern: new open model release, claims of "we have GPTx/ClaudeY at home", scoring great on slightly unusual but still credible benchmarks, and it can count r's in strawberry.
In this case Nvidia opted to market the performance of their new Llama-3.1-Nemotron-70B on Arena Hard, AlpacaEval, and MT-Bench, which to be fair are the 3 leading LLM-as-Judge benchmarks. The results look very exciting when presented in a table:
The model's performance goes down when LMArena's new style control is applied, but that's unremarkable in and of itself. It's more interesting that other standard benchmarks, like GPQA, MMLU Pro, and aider, come in unchanged or worse compared to the base 70B Llama 3.1 model, causing some disappointment among the excited /r/LocalLlama crew.
The truth is likely benign: no training on test, but the new HelpSteer2-Preference dataset, which unifies Bradley-Terry and regression-based reward models, happens to improve performance on those 3 benchmarks with ~minimal loss on the others. Absent proper LMArena Elo scores, this mostly just reduces the value of those automated benchmarks, and not much else.
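For the curious, the two reward-modeling objectives that HelpSteer2-Preference unifies look roughly like this (a sketch: $r_\theta$ is the reward model, $(y_w, y_l)$ a chosen/rejected pair, and $s$ a scalar rating such as a HelpSteer attribute score):

```latex
% Bradley-Terry: learn from pairwise preferences
\mathcal{L}_{\mathrm{BT}} = -\,\mathbb{E}_{(x,\,y_w,\,y_l)}
  \left[ \log \sigma\!\left( r_\theta(x, y_w) - r_\theta(x, y_l) \right) \right]

% Regression: learn from scalar ratings
\mathcal{L}_{\mathrm{reg}} = \mathbb{E}_{(x,\,y,\,s)}
  \left[ \left( r_\theta(x, y) - s \right)^2 \right]
```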
The entropix-sampled version of Nemotron is impressive though; that's a developing story we've covered lightly.
[Sponsored by Zep] Zep is a low-latency memory layer for AI agents and assistants built on a simple core primitive: a temporal knowledge graph. This is a pretty cool, flexible way to model the changing relationships between complex entities like customers and products. You can plug it into your agents using their new open-source tool Graphiti.
swyx commentary: We covered Zep as a memory layer last week, and it looks like Graphiti is the workhorse of the temporal knowledge graph memory abstraction. It's notable both that it can autonomously build a knowledge graph for you as you feed in "episodes", and that it builds on Neo4j under the hood!
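To make "temporal knowledge graph" concrete, here's a minimal sketch of the core primitive in Python (ours, not Graphiti's actual API): edges carry validity intervals, so a changed fact is closed out rather than overwritten.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class TemporalEdge:
    source: str            # e.g. "customer:alice"
    relation: str          # e.g. "PREFERS"
    target: str            # e.g. "product:pro-plan"
    valid_from: datetime
    valid_to: Optional[datetime] = None  # None = still true

def current_facts(edges: list[TemporalEdge], at: datetime) -> list[TemporalEdge]:
    """Return the facts that held at a given point in time."""
    return [e for e in edges
            if e.valid_from <= at and (e.valid_to is None or at < e.valid_to)]
```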
Table of Contents
- AI Twitter Recap
- AI Reddit Recap
- AI Discord Recap
- PART 1: High level Discord summaries
- HuggingFace Discord
- Perplexity AI Discord
- OpenRouter (Alex Atallah) Discord
- Unsloth AI (Daniel Han) Discord
- Eleuther Discord
- aider (Paul Gauthier) Discord
- Nous Research AI Discord
- GPU MODE Discord
- LM Studio Discord
- OpenAI Discord
- tinygrad (George Hotz) Discord
- LAION Discord
- LlamaIndex Discord
- Interconnects (Nathan Lambert) Discord
- Latent Space Discord
- Cohere Discord
- Modular (Mojo 🔥) Discord
- Stability.ai (Stable Diffusion) Discord
- OpenInterpreter Discord
- DSPy Discord
- LangChain AI Discord
- OpenAccess AI Collective (axolotl) Discord
- Torchtune Discord
- LLM Agents (Berkeley MOOC) Discord
- Mozilla AI Discord
- PART 2: Detailed by-Channel summaries and links
- HuggingFace ▷ #announcements (1 messages):
- HuggingFace ▷ #general (143 messages🔥🔥):
- HuggingFace ▷ #today-im-learning (9 messages🔥):
- HuggingFace ▷ #cool-finds (4 messages):
- HuggingFace ▷ #i-made-this (4 messages):
- HuggingFace ▷ #reading-group (3 messages):
- HuggingFace ▷ #NLP (11 messages🔥):
- HuggingFace ▷ #diffusion-discussions (1 messages):
- HuggingFace ▷ #gradio-announcements (1 messages):
- Perplexity AI ▷ #general (158 messages🔥🔥):
- Perplexity AI ▷ #sharing (12 messages🔥):
- Perplexity AI ▷ #pplx-api (6 messages):
- OpenRouter (Alex Atallah) ▷ #announcements (2 messages):
- OpenRouter (Alex Atallah) ▷ #app-showcase (1 messages):
- OpenRouter (Alex Atallah) ▷ #general (168 messages🔥🔥):
- Unsloth AI (Daniel Han) ▷ #general (82 messages🔥🔥):
- Unsloth AI (Daniel Han) ▷ #off-topic (1 messages):
- Unsloth AI (Daniel Han) ▷ #help (58 messages🔥🔥):
- Unsloth AI (Daniel Han) ▷ #showcase (2 messages):
- Unsloth AI (Daniel Han) ▷ #research (7 messages):
- Eleuther ▷ #general (71 messages🔥🔥):
- Eleuther ▷ #research (60 messages🔥🔥):
- Eleuther ▷ #scaling-laws (4 messages):
- Eleuther ▷ #interpretability-general (1 messages):
- Eleuther ▷ #lm-thunderdome (3 messages):
- aider (Paul Gauthier) ▷ #general (107 messages🔥🔥):
- aider (Paul Gauthier) ▷ #questions-and-tips (28 messages🔥):
- Nous Research AI ▷ #general (74 messages🔥🔥):
- Nous Research AI ▷ #ask-about-llms (19 messages🔥):
- Nous Research AI ▷ #research-papers (5 messages):
- Nous Research AI ▷ #interesting-links (6 messages):
- GPU MODE ▷ #general (16 messages🔥):
- GPU MODE ▷ #triton (24 messages🔥):
- GPU MODE ▷ #torch (29 messages🔥):
- GPU MODE ▷ #algorithms (11 messages🔥):
- GPU MODE ▷ #jobs (1 messages):
- GPU MODE ▷ #beginner (7 messages):
- GPU MODE ▷ #pmpp-book (1 messages):
- GPU MODE ▷ #jax (2 messages):
- GPU MODE ▷ #off-topic (3 messages):
- GPU MODE ▷ #triton-puzzles (1 messages):
- GPU MODE ▷ #llmdotc (4 messages):
- GPU MODE ▷ #rocm (1 messages):
- GPU MODE ▷ #metal (8 messages🔥):
- GPU MODE ▷ #self-promotion (1 messages):
- LM Studio ▷ #general (80 messages🔥🔥):
- LM Studio ▷ #hardware-discussion (26 messages🔥):
- OpenAI ▷ #ai-discussions (53 messages🔥):
- OpenAI ▷ #prompt-engineering (2 messages):
- OpenAI ▷ #api-discussions (2 messages):
- tinygrad (George Hotz) ▷ #general (27 messages🔥):
- tinygrad (George Hotz) ▷ #learn-tinygrad (30 messages🔥):
- LAION ▷ #general (47 messages🔥):
- LAION ▷ #research (7 messages):
- LlamaIndex ▷ #blog (5 messages):
- LlamaIndex ▷ #general (46 messages🔥):
- Interconnects (Nathan Lambert) ▷ #news (15 messages🔥):
- Interconnects (Nathan Lambert) ▷ #random (16 messages🔥):
- Interconnects (Nathan Lambert) ▷ #posts (18 messages🔥):
- Latent Space ▷ #ai-general-chat (37 messages🔥):
- Cohere ▷ #discussions (9 messages🔥):
- Cohere ▷ #announcements (1 messages):
- Cohere ▷ #api-discussions (22 messages🔥):
- Cohere ▷ #projects (2 messages):
- Cohere ▷ #cohere-toolkit (2 messages):
- Modular (Mojo 🔥) ▷ #general (2 messages):
- Modular (Mojo 🔥) ▷ #mojo (8 messages🔥):
- Modular (Mojo 🔥) ▷ #max (22 messages🔥):
- Stability.ai (Stable Diffusion) ▷ #general-chat (21 messages🔥):
- OpenInterpreter ▷ #general (17 messages🔥):
- OpenInterpreter ▷ #ai-content (2 messages):
- DSPy ▷ #show-and-tell (5 messages):
- DSPy ▷ #papers (1 messages):
- DSPy ▷ #general (7 messages):
- LangChain AI ▷ #announcements (1 messages):
- LangChain AI ▷ #general (3 messages):
- LangChain AI ▷ #langserve (4 messages):
- OpenAccess AI Collective (axolotl) ▷ #general (4 messages):
- OpenAccess AI Collective (axolotl) ▷ #other-llms (2 messages):
- Torchtune ▷ #papers (2 messages):
- LLM Agents (Berkeley MOOC) ▷ #mooc-readings-discussion (1 messages):
- Mozilla AI ▷ #announcements (1 messages):
AI Twitter Recap
all recaps done by Claude 3.5 Sonnet, best of 4 runs.
AI Model Releases and Updates
- Mistral Releases New Models: @AlphaSignalAI and @MistralAI announced the release of Ministral 3B and 8B models, outperforming existing models like Llama 3.1 and GPT-4o on various benchmarks. These models feature 128k context length and are available under the Mistral Commercial License.
- NVIDIA's Nemotron 70B Outperforms Competitors: @reach_vb and @AlphaSignalAI highlighted that NVIDIA's Nemotron 70B surpasses GPT-4o and Claude 3.5 Sonnet on benchmarks like Arena Hard and MT Bench, showcasing significant improvements through RLHF (REINFORCE) training techniques.
- Hugging Face Integrations: @reach_vb and @_philschmid shared updates on Hugging Face collaborations, including the ability to run any GGUF model directly on the platform using Ollama, enhancing accessibility and deployment of models like Llama 3.2 3B.
AI Research and Innovations
- Advanced Cognitive Architectures: @francoisfleuret and @AIatMeta discussed breakthroughs in cognitive architecture for long-running agents with memory, personality, and emotional intelligence, highlighting studies that destroy existing benchmarks like Voyager on Minecraft.
- In-Context Reinforcement Learning (ICRL): @rohanpaul_ai presented findings on ICRL, demonstrating how LLMs can adapt through reward signals alone, with Explorative ICRL lifting accuracy on tasks like Banking-77 by 66.0%.
- Task Superposition in LLMs: @rohanpaul_ai explored the ability of LLMs to perform multiple distinct tasks simultaneously, revealing that larger models exhibit higher task completion rates and better calibration to in-context distributions.
AI Tools and APIs
- Serverless Agentic Workflows with Amazon Bedrock: @DeepLearningAI introduced a new course on Amazon Bedrock, enabling developers to build scalable agents and implement security guardrails for responsible operations.
- Dynamic Few-Shot Prompting: @llama_index shared insights on dynamic few-shot prompting, a technique that retrieves relevant examples based on queries, enhancing applications in customer support, text-to-SQL, and structured outputs using LLama Index workflows.
- TorchTitan Repository: @Ethan_smith_20 praised the torchTitan repo for its comprehensive parallelism capabilities, eliminating the need for model modifications and simplifying the development process for parallel computing in deep learning.
Industry News and Insights
- Energy and Humanity Deep Dive: @MajmudarAdam conducted an extensive analysis on how energy has shaped human civilization and its future impact on deep learning, covering topics from energy physics to energy distribution systems and their relation to geopolitics.
- AI's Impact on Labor and Efficiency: @mustafasuleyman emphasized the importance of developing strategies to proactively shape AI’s impact on work and workers, acknowledging the uncertain effects of AI on the job market.
- Hugging Face Community Growth: @francoisfleuret and @_akhaliq reported significant growth in the Hugging Face community, with new leaderboards and model evaluations enhancing the platform's standing in the AI research landscape.
AI Applications and Use Cases
- Suno Scenes for Creative Content: @suno_ai_ introduced Suno Scenes, a tool that transforms photos and videos into unique songs directly from mobile devices, enabling users to create content like cinematic soundtracks and hilarious songs from personal media.
- AI in Cybercrime: @DeepLearningAI discussed a study revealing a black market where AI applications facilitate cybercrime, earning over $28,000 in two months despite limited real-world success.
- LLM-Based Multi-Agent Systems: @rohanpaul_ai showcased the OPTIMA framework, which enhances communication efficiency and task effectiveness in LLM-based multi-agent systems, achieving up to 2.8x performance gain on information exchange tasks.
Memes and Humor
- AI-Generated Snack Recipes: @nearcyan humorously shared frustrations with Claude, an AI assistant, for suggesting absurd recipes like putting sugar in the microwave to make snacks, likening it to 4chan-style content.
- Cooking with AI: @nearcyan posted a humorous tweet about cooking steak with Claude, describing the experience as dealing with an "autistic" AI, highlighting the quirks and unexpected behaviors of AI interactions.
- AI Meme Popularity: @francoisfleuret reflected on the power of memes, suggesting that AI models can rapidly shape memes that tap into the fundamentals of human psyche, accelerating the natural process of memetic evolution.
AI Education and Career
- Teaching AI Fundamentals: @jxmnop expressed the need to educate software engineers on classification basics, including pair-wise matching, clustering, bootstrapping, and statistical tests, emphasizing the importance of foundational knowledge in software engineering.
- AI Career Opportunities: @mervenoyann and @seb_ruder recommended opportunities for aspiring MSc or PhD candidates, highlighting environments like David's lab and the research-friendly atmosphere at Mila.
- Frontend Development Challenges: @ekzhang1 pointed out that most CS PhDs lack frontend coding skills, acknowledging it as acceptable and emphasizing the importance of specialized skills in AI research.
AI Reddit Recap
/r/LocalLlama Recap
Theme 1. Democratizing Medical LLMs for 50 Languages
- Democratizing Medical LLMs for 50 Languages (Score: 48, Comments: 7): ApolloMoE introduces a new circuits-based paradigm for interpreting routing in multilingual contexts, identifying the "Spread Out in the End" mechanism and utilizing language family experts to extend medical LLMs to 50 languages. The project open-sources all resources, including code on GitHub and models on Huggingface, along with datasets for expanding medical LLM capabilities across multiple languages.
- The English answers scored among the lowest in closed AI, which the author attributes to the large coverage of Chinese and English assessment sets. This highlights the need for improved medical measure sets for rare languages.
- The model's performance across languages was evaluated by averaging accuracy by language and testing on the same set of measures, which the author considers a reasonable approach.
- A user noted the absence of Romanian among the 50 languages covered by the project, raising questions about the language selection criteria.
Theme 2. Serving 3.3 Million Context for Llama-3-8B on a Single GPU
- LoLCATS - a hazyresearch Collection (of Linearized Llama 3.1 models 8B, 70B, and 405B) (Score: 31, Comments: 12): HazyResearch has released LoLCATS, a collection of linearized Llama 3.1 models in sizes 8B, 70B, and 405B. These models, based on the Linearized Attention Transformer architecture, offer improved performance and efficiency compared to standard Transformers, potentially enabling faster inference and training on larger datasets.
- Linearized Attention Transformer architecture swaps quadratic attention for linear, potentially improving inference performance at large context lengths, especially without flash-attn.
- The MMLU score drop from 83 to 72.2 for the 405B model raises questions about practical applications for linearized models with reduced performance but potential benefits in long context, needle-in-haystack, and few-shot tasks.
- The project includes ThunderKittens kernels, with inference code available on GitHub and vLLM support coming soon.
- Serving 3.3 Million Context for Llama-3-8B on a single GPU (Score: 31, Comments: 7): MIT and NVIDIA researchers introduced DuoAttention, a method enabling 3.3 million token context for Llama-3-8B on a single A100 GPU. The technique, detailed in their arXiv paper, is implemented in an open-source GitHub repository, allowing for practical application of extended context inference.
- DuoAttention uses two KV caches: a full cache for crucial retrieval heads and a constant cache for streaming heads. This enables Llama-3-8B to handle 3.3 million tokens on a single A100 GPU, a 6.4× capacity increase over standard full attention FP16 deployments.
- Users discussed the need for better long context benchmarks, with RULER being criticized for only testing retrieval capabilities. The Michelangelo evaluations (LSQ) were suggested as a more robust alternative, testing a wider variety of long-context use cases.
- While DuoAttention significantly reduces KV cache size, some users noted that raw capacity isn't the only challenge for coherent models beyond 64k tokens. However, others emphasized that incremental improvements like this contribute to overall progress in the field.
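To make the two-cache idea concrete, here's a toy sketch (not the paper's code; the head split and cache sizes are illustrative assumptions):

```python
import torch

SINK, WINDOW = 4, 1024  # illustrative sizes

def append_kv(cache: torch.Tensor, kv: torch.Tensor, is_retrieval_head: bool) -> torch.Tensor:
    """Append new key/value states; streaming heads keep a constant-size cache."""
    cache = torch.cat([cache, kv], dim=0)  # cache, kv: [seq, head_dim]
    if is_retrieval_head:
        return cache  # full cache: grows with context, preserves retrieval ability
    if cache.shape[0] > SINK + WINDOW:
        # streaming head: keep attention-sink tokens plus a recent window
        cache = torch.cat([cache[:SINK], cache[-WINDOW:]], dim=0)
    return cache
```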
Theme 3. Chain-of-Thought Reasoning Without Prompting in LLMs
- Chain-of-Thought Reasoning Without Prompting [paper by Google] (Score: 115, Comments: 51): Google researchers introduced Chain-of-Thought (CoT) decoding, a method that enables large language models to perform multi-step reasoning without explicit prompting. This technique, which modifies the model's sampling procedure during inference, achieves performance comparable to or better than standard CoT prompting across various reasoning tasks. The approach demonstrates that CoT reasoning capabilities are inherent in language models and can be activated through decoding strategies rather than relying on specific prompts.
- A GitHub repository for reproducing the Chain-of-Thought (CoT) decoding method was shared. Users noted a performance gap between the paper's results and the open implementation, with the paper showing smaller models benefit less from this technique.
- The paper demonstrates that smart sampling can improve LLM performance, similar to entropix. Results show improvements across different model sizes, with base models benefiting more than instruct models, even on tasks where increasing model parameters doesn't help.
- Some users implemented CoT decoding in their projects, such as optillm and a step-by-step level implementation for Llama 3.2 3B. Others discussed the challenges of working with arxiv papers and the limitations of current LLMs in true reasoning capabilities.
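For the gist in code, a rough sketch of CoT decoding (not Google's implementation; `gpt2` stands in for a real model, and the paper computes the confidence margin over answer tokens only, not the whole continuation):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")           # stand-in model
model = AutoModelForCausalLM.from_pretrained("gpt2")

@torch.no_grad()
def cot_decode(prompt: str, k: int = 5, max_new: int = 64) -> str:
    ids = tok(prompt, return_tensors="pt").input_ids
    first_logits = model(ids).logits[0, -1]           # distribution over the 1st new token
    best, best_margin = None, -1.0
    for t in torch.topk(first_logits, k).indices:     # branch on top-k first tokens
        seq = torch.cat([ids, t.view(1, 1)], dim=-1)
        out = model.generate(seq, max_new_tokens=max_new, do_sample=False)
        probs = model(out).logits[0, ids.shape[1]:-1].softmax(-1)
        top2 = probs.topk(2, dim=-1).values           # top-1 vs top-2 prob per step
        margin = (top2[:, 0] - top2[:, 1]).mean().item()
        if margin > best_margin:                      # prefer the most "confident" path
            best, best_margin = out, margin
    return tok.decode(best[0], skip_special_tokens=True)
```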
Theme 4. Local Text-to-Speech Alternatives to Elevenlabs
- How difficult would it be to have a text-to-speech setup like Elevenlabs at home? (Score: 54, Comments: 33): The post discusses setting up a local text-to-speech (TTS) pipeline as an alternative to using Elevenlabs, aiming for cost savings and increased control. The author, equipped with an i9 13900 processor and a 4070 GPU, seeks advice on building such a system, inquiring about others' experiences, model choices, and hardware setups, with a budget of $4000-5000 for a new configuration.
- AllTalk TTS, a multi-engine software available on GitHub, offers a comprehensive solution with full API suite, TTS Generator, and multiple engine support. Users discussed its UI and voice quality in the beta version.
- Piper TTS was noted for its decent performance and multiple voices, though it's CPU-intensive. The F5 TTS system was highlighted for its ability to capture voice and emotional performance from short audio samples, accessible through Pinokio.
- Various open-source models were recommended, including Parler TTS, XTTS, E2, E5, and OpenedAI Speech. Users debated the quality of different models, with some preferring FishSpeech over F5/E5 for better intonation and emotion.
Theme 5. LLM-powered Game Master for Procedural Content Generation
- I'm Building a project that uses a LLM as a Gamemaster to create things, Would like some more creative idea's to expand on this idea. (Score: 60, Comments: 29): The project uses a Large Language Model (LLM) as a Gamemaster to generate creatures, their attributes, and abilities in a game with Infinite Craft-style crafting. The LLM, specifically Gemma 2 with 9 billion parameters, decides everything from creature names to sprite selection, elemental types, stats, and abilities, all running locally on a computer with only 6 GB of VRAM. The developer highlights the model's effectiveness in function calling and maintaining creativity while minimizing hallucinations, and seeks ideas to expand on this concept of using recursive layered list picking to build coherent game elements with an LLM.
- Users expressed interest in the project, with several requesting a GitHub repository to try it themselves. The developer mentioned they would share more once the project is further along.
- Discussion on model alternatives included suggestions to test L3.2 3B and Qwen Coder 2.5 7B, with the developer noting that Qwen models performed well in their tests, close to Gemma 2.
- Expansion ideas included using image generating models for sprites, implementing damage types and resistances for crafting incentives, and creating an NPC settlement system. The developer is considering a quest archetype system and ways to make creatures feel more alive using LLMs.
Other AI Subreddit Recap
/r/machinelearning, /r/openai, /r/stablediffusion, /r/ArtificialInteligence, /r/LLMDevs, /r/Singularity
AI Research and Development
- Google DeepMind advances multimodal learning with joint example selection, demonstrating how data curation can accelerate multimodal learning (source).
- Microsoft's MInference technique enables inference of up to millions of tokens for long-context tasks while maintaining accuracy (source).
- A paper on scaling synthetic data creation leverages diverse perspectives within large language models to generate data from 1 billion web-curated personas (source).
AI Model Releases and Capabilities
- Salesforce released xLAM-1b, a 1 billion parameter model achieving 70% accuracy in function calling, surpassing GPT-3.5.
- Rubra AI released an updated Phi-3 Mini model in June with function calling capabilities, competitive with Mistral-7b v3.
- Discussions around AI reasoning capabilities, with debates on whether current models can truly reason or are simply pattern matching (source).
AI Ethics and Policy
- The Biden administration is considering capping GPU sales to certain nations for national security reasons, potentially impacting AI development globally (source).
- Anthropic announced an updated Responsible Scaling Policy, suggesting preparation for releasing more advanced models while addressing safety concerns (source).
AI Applications and Demonstrations
- Demonstrations of AI-generated HD-2D pixel game remakes using Flux Dev, showcasing potential applications in game development and visual arts (source).
- Discussions on the potential and limitations of AI-generated content, including fake restaurant profiles on social media platforms (source).
AI Industry Developments
- Ongoing debates about the timeline for achieving human-level AI, with experts like Yann LeCun suggesting it could be years or a decade away (source).
- Anticipation for new model releases, such as a potential Opus 3.5 from Anthropic, based on their policy updates (source).
AI Discord Recap
A summary of Summaries of Summaries by o1-preview
Theme 1. Mistral's New Edge Models Stir the AI Community
- Mistral Unveils Ministral 3B and 8B, but Where's the Weights?: Mistral launches Ministral 3B and Ministral 8B, edge models designed for on-device use with up to 128k context lengths. But developers are dismayed as Ministral 3B is API-only, with no weights released.
- Model Licensing Sparks Debate Over Mistral's API-Only 3B Model: The community grumbles over Ministral 3B being API-only, citing restrictive licensing that hinders on-device use and indie development.
- Excitement and Frustration as Mistral's Release Leaves Devs Wanting More: While Ministral 8B is available with a non-commercial license, developers lament the missing weights for the 3B model, questioning the practicality of the release.
Theme 2. NVIDIA's Nemotron 70B Crushes Competitors
- Nemotron 70B Flexes Muscles, Outperforms GPT-4o and Claude 3.5: NVIDIA's Nemotron 70B beats Llama 3.1 405B, GPT-4o, and Claude 3.5 Sonnet, scoring 85.0 on Arena Hard compared to their 79s.
- NVIDIA Drops Nemotron Bombshell, Community Picks Jaw Off Floor: The AI world buzzes as NVIDIA silently releases Nemotron 70B, shaking up benchmark leaderboards without fanfare.
- Benchmark Confusion Brews Amid Nemotron's Stellar Performance: Users debate discrepancies in MT Bench scores, stirring skepticism over Nemotron's almost too-good-to-be-true results.
Theme 3. SageAttention Revolutionizes Transformer Efficiency
- SageAttention Speeds Up Attention by 2.1x, Leaves FlashAttention2 in the Dust: Introducing SageAttention, an 8-bit quantization method that accelerates attention by 2.1x over FlashAttention2, with minimal accuracy loss.
- Taming O(N²): SageAttention Cuts Down Attention Complexity: SageAttention addresses the O(N²) bottleneck in transformers, promising faster inference for language and image models.
- 8-Bit Is the New 16-Bit: SageAttention Makes Quantization Cool Again: With efficient 8-bit quantization, SageAttention proves that lower precision can still deliver top-notch performance.
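In spirit, the quantization trick looks like this toy version (our sketch, not the paper's kernels: per-tensor int8 scales only, whereas SageAttention adds K-smoothing and block-wise scales, and keeps the P·V product in higher precision):

```python
import torch

def quantize_int8(x: torch.Tensor):
    scale = x.abs().max() / 127.0
    return (x / scale).round().clamp(-127, 127).to(torch.int8), scale

def int8_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    q8, qs = quantize_int8(q)                # [seq, d] int8 plus an fp scale
    k8, ks = quantize_int8(k)
    # integer matmul, then dequantize with the product of the scales
    scores = (q8.to(torch.int32) @ k8.to(torch.int32).T).float() * (qs * ks)
    attn = torch.softmax(scores / q.shape[-1] ** 0.5, dim=-1)
    return attn @ v                          # P·V left in full precision here
```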
Theme 4. AI Assistant Woes: From DALL-E Disappointment to Overzealous Censors
- DALL-E's "Bad" Image Outputs Leave Users Scratching Heads: Frustrated users label DALL-E's image generation as simply "bad," voicing disappointment in its capabilities.
- LLMs Ignore Token Limits, Go On Endless Rants: Users report AI assistants blatantly defying token limits and stop commands, resulting in runaway outputs and user frustration.
- Censored Models Refuse to Cooperate, Users Seek Uncensoring Hacks: Overly censored models decline to answer even basic queries, pushing users to explore uncensoring techniques despite potential risks.
Theme 5. Open Tools Empower Community Collaboration
- Open Interpreter Teams Up with Ollama for Local LLM Bliss: Open Interpreter now allows running any GGUF model on Hugging Face via Ollama, making local LLMs more accessible with simple commands.
- Inferencemax Says "Hold My Beer," Simplifies LLM Inference: The new project Inferencemax aims to streamline LLM inference, reflecting community efforts to lower barriers for AI development.
- AIFoundry Seeks GitHub Mentorship to Level Up Open Source Game: AIFoundry.org is looking for guidance to emulate Axolotl's GitHub prowess, hoping to enhance their open-source local model inference initiatives.
PART 1: High level Discord summaries
HuggingFace Discord
- Gradio 5.0 Enhancements: Gradio 5.0 launched with updates for security and user interface, with over 6M downloads indicating its popularity.
- The comprehensive security report is now public, reassuring users on the improvements made.
- Sentence Transformers v3.2.0 Boosts Speed: Sentence Transformers v3.2.0 introduces new backends like ONNX and OpenVINO, enabling 2x-3x speedups and up to 500x with static embeddings.
- Faster inference capabilities allow processing speeds of 10k texts/sec, with more details on Model2Vec.
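Switching backends is a one-liner, assuming the optional extras are installed (e.g. `pip install sentence-transformers[onnx]`):

```python
from sentence_transformers import SentenceTransformer

# v3.2's new `backend` argument; "openvino" works the same way
model = SentenceTransformer("all-MiniLM-L6-v2", backend="onnx")
embeddings = model.encode(["Attention is all you need."])
print(embeddings.shape)  # (1, 384)
```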
- Multimodal Interaction in HuggingChat: HuggingChat's recent update incorporates Llama-Vision 11B Instruct, allowing for rich multimodal interactions.
- This significant upgrade encourages users to explore these new capabilities within the platform, enhancing user experience.
- Performance Discussion for AI Models: Hypothetical discussions regarding an AI model setup with 72GB VRAM and 128GB DDR4 RAM posited potential processing speeds of 5-6 t/s.
- Custom PyTorch integrations were also discussed, highlighting the importance of automatic gradients for enhancing model efficiency.
- Ollama Interaction with GGUF Models: Utilizing Ollama allows users to interact directly with GGUF models locally, simplifying command usage without the need for new `Modelfiles`.
- Ollama supports running any of the 45K GGUF checkpoints on Hugging Face, increasing accessibility.
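Per the Hugging Face docs, the pattern is just the repo path (any community GGUF repo works):

```text
ollama run hf.co/{username}/{repository}
```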
Perplexity AI Discord
- Perplexity AI launches shopping features: Perplexity AI is rolling out 'Perplexity Purchases' to streamline the buying process and pricing comparisons.
- User feedback varies significantly, with some reminiscing about the platform's initial focus on search rather than commerce.
- Reasoning Mode impresses users: Members lauded the Reasoning Mode for programming, emphasizing its analytical capabilities and resulting accurate outputs.
- Success stories are flourishing, reinforcing the feature's reliability as users share their positive experiences.
- Interest in enhancing APIs: There's a growing curiosity about APIs, with multiple users referencing the same search result to define what an API is.
- This trend suggests a deeper engagement with foundational technologies among members.
- Query on LFM 40B API availability: A member inquired about accessing the LFM 40B model via the API on labs.perplexity.com, but no responses materialized.
- This absence of information highlights possible gaps in communication about model availability.
- Concerns on user experience in chat: Users expressed concerns about the forum's dynamics, describing it as too informal for serious AI discussions.
- This led to calls for better moderation to maintain a focus on technical topics rather than casual interactions.
OpenRouter (Alex Atallah) Discord
- Grok 2 Down for Maintenance: Grok 2 is currently offline for maintenance, leading to users encountering a 404 error when trying to access it. An announcement will follow once the model is back online.
- Users expressed frustration as Grok 2 had outperformed other models in coding tasks, notably besting Llama 3.2.
- NVIDIA Nemotron 70B Crushes Competitors: NVIDIA's Nemotron 70B has outperformed Llama 3.1 405B, GPT-4o, and Claude 3.5 Sonnet in benchmark tests, scoring 85.0 on Arena Hard compared to the competitors' scores in the 79s. Detailed comparisons can be viewed here.
- The excitement culminated in an OpenRouter announcement about its significant performance on multiple evaluations.
- ChatGPT Voice Mode Teaches Vocabulary: A user showcased ChatGPT advanced voice mode using examples from Naruto to teach vocabulary, calling the experience absolutely wild! They shared a demo link to gather feedback.
- Discussion centered around the potential of personalized AI learning, with predictions that it will dramatically change educational landscapes due to its effectiveness.
- Infermatic Network Woes: Infermatic's provider faces ongoing network issues, resulting in models generating gibberish, particularly after the 8k context limit is reached. Users are informed that the provider is reverting to a previous build to rectify these vLLM inference problems.
- Concerns were raised about the impact on model performance as this bug hampers effective interactions.
- Mistral Introduces Pocket LLMs: Mistral has announced the release of two new models, Ministral 3B and 8B, specifically designed for edge use cases and promising enhanced performance. These models boast larger context lengths and improved capabilities in knowledge and reasoning tasks.
- This move aims to broaden the application of LLMs beyond traditional setups, as discussed in Mistral's announcement.
Unsloth AI (Daniel Han) Discord
- INTELLECT-1 Launch for Decentralized Training: The launch of INTELLECT-1 invites contributions for a 10-billion-parameter model focused on decentralized training, aiming for open-source AGI. This follows the release of OpenDiLoCo, enhancing AI model training scalability.
- This initiative marks a significant stride towards globally distributed AI, now scaling from 1B to 10B parameters.
- Unsloth Training Shows Notable Improvements: Users report that `unsloth_train` converges significantly better than previous methods, showing promise for support of `resume_from_checkpoint=True`. However, inquiries arose concerning the absence of extended functionalities in the old `UnslothTrainer`.
- The community expresses appreciation for enhancements while seeking further clarity on the rationale behind this transition.
- Community Inquiries on Mistral 8B Support: Discussions about unifying Unsloth's compatibility with the new Mistral 8B model raised several architectural concerns. Community enthusiasm revolves around the new model's on-device computing capabilities.
- Members are eager for updates, recognizing the potential of the Mistral 8B in practical applications.
- SageAttention Achieves Impressive Speedup: The SageAttention paper introduces an efficient 8-bit quantization method for attention, surpassing FlashAttention2 and xformers by 2.1x and 2.7x respectively, while maintaining model accuracy. This quantization method addresses the O(N²) complexity typically seen.
- SageAttention represents a critical advancement, significantly speeding up inference across diverse models.
- Exploration of Quantization Techniques: Discussions revealed the challenges in applying full-fine-tune techniques mixed with quantization methods, particularly QLoRA, and users shared insights on layer tuning. Skepticism persists around the feasibility of quantizing some layers while maintaining others fully trainable.
- The community debates the need for specialized configurations to balance performance and efficiency.
Eleuther Discord
- Yandex YaLM 100B makes waves: The Yandex YaLM 100B model, with 100 billion parameters, has emerged as a significant player, especially in non-Western markets.
- It has been noted as potentially being the most widely used LLM in Russia, contrasting with its lesser acknowledgment in Western circles.
- SwiGLU vs. SinGLU showdown: A debate ignited over the choice of SwiGLU versus SinGLU, highlighting SinGLU's speed and lower loss, yet resistance to change persists.
- Such inertia stems from the risks associated with large training runs and established practices.
- OpenAI embeddings fall short: Participants raised concerns regarding the performance of OpenAI's embedding models, which seem to lag behind 2024 benchmarks.
- The saturation with models like Mistral finetunes indicates a competitive gap for OpenAI's approach.
- Mechanistic Interpretability Projects Seeking Volunteers: A student expressed eagerness to join EleutherAI's projects related to interpretability, especially in the context of current opportunities.
- Members recommended joining the Mechanistic Interpretability Discord for further exploration in the field.
- A/B testing methods address reversal issues: Interest grew around techniques for A/B testing which can alleviate the reversal curse, enhancing experimental outcomes.
- Participants labeled this method as 'very a/b,' pointing to its relevance in practicality.
aider (Paul Gauthier) Discord
- Multiple Aiders can coexist safely: Concerns regarding running multiple Aider instances were eased with confirmation that they won't interfere unless editing the same files.
- Members humorously suggested it could turn into an 'LLM party' if managed properly.
- Mistral rolls out new edge models: Mistral recently unveiled Ministral 3B and 8B models focused on on-device and edge computing, enhancing both efficiency and capabilities.
- These models boast significant advancements in reasoning and commonsense knowledge, ideal for optimized context lengths.
- Gemini API streaming stability needs improvement: Users reported that Gemini performs better with streaming disabled due to its unstable API connections, causing frequent interruptions.
- The consensus highlights this instability as a common issue impacting Gemini-based tools' performance.
- Aider Command Line Tool setup essentials: To utilize the Aider Command Line Tool effectively, users must load their `.env` file or configure it through `load_dotenv()` to ensure correct functionality.
- Proper environment setup is crucial for running scripts smoothly in Aider.
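A minimal sketch with python-dotenv (the env var name depends on your provider):

```python
import os
from dotenv import load_dotenv  # pip install python-dotenv

# Load keys from ./.env before driving aider programmatically
load_dotenv()  # searches for ".env" by default; pass a path to override
print("key loaded:", bool(os.getenv("OPENAI_API_KEY")))
```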
- Challenges with API and code generation: Users faced difficulties with the updated Assistant's API for generating accurate function calls while dealing with rate limits.
- This hectic scenario underscores the need for clear documentation and community support to tackle emerging challenges.
Nous Research AI Discord
- Unsloth offers multi-GPU support: Discussion centered on whether Unsloth effectively operates with multi-GPU setups, with expectations of upcoming updates for vision fine-tuning support.
- Members speculated on the reliability of its paid version in enhancing performance.
- Mistral unveils new models: Mistral launched Ministral 3B and Ministral 8B, designed for edge computing, with impressive statistics in commonsense reasoning and capable of a 128k context length.
- These models promise efficient local inference, catering specifically to modern computational needs.
- Nvidia Nemotron 70B claims performance lead: Nvidia Nemotron 70B reportedly surpasses competitors like Claude 3.5 and Llama 3.1, as per various evaluation metrics.
- Confusion exists regarding MT Bench scores, with variances in reported versus actual performances across models.
- AI models show confused responses: The model H3-405b has been noted for its repeated confused replies, especially when asked about its origins or identity.
- Examples of distressing expressions of confusion add to the intrigue of AI identity discourse.
- SageAttention improves inference efficiency: Research highlights SageAttention, a quantization technique that boosts attention performance by 2.1x over FlashAttention2 with minimal performance loss.
- This advancement stands to benefit a wide spectrum of tasks, particularly in large-scale language applications.
GPU MODE Discord
- Seeking Open Source Audio Models: A user asked for high-quality open source audio models akin to those in NotebookLM, with mention that many Text-to-Speech options exist but none measure up.
- The gap in the market for robust audio models was a point of consensus among participants.
- Lambda Labs vs Voltage Park Showdown: Discussion centered on Lambda Labs and Voltage Park as the only dependable hardware providers, with Voltage Park noted for more storage but limited to Texas.
- Participants expressed concerns over persistent PCIe issues with other vendors, impacting GPU setup reliability.
- Key Challenges with Triton Programming: Members highlighted various issues with Triton, including difficulties programming on Windows and bugs in INT4 packed data causing LLVM errors.
- Many users are frustrated, pointing out that performance benefits from Triton's compilation often come from Torch rather than Triton itself.
- ServiceNow Hiring for Machine Learning Developer: ServiceNow is hiring a Staff Machine Learning Developer to work on their open-source training framework supporting Starcoder2, which is faster than Megatron-LM.
- Job details can be found on Smart Recruiters.
- Generative AI Book Announcement: Yuri Plotkin announced his upcoming book on Generative AI, covering foundational algorithms including Bayesian inference and latent variable models, which can be found on the book website.
- He encouraged following him on Twitter for ongoing updates, sharing insights on key concepts in the field.
LM Studio Discord
- SageAttention Boosts Performance: The new method, SageAttention, accelerates inference in transformer models by providing efficient quantization for attention, achieving a 2.1x speedup over existing methods.
- This technique shows superior accuracy compared to FlashAttention3, with implications for both language and image generation.
- Llama 8B Tokens per Second Variability: Users report a wide range of tokens per second (TPS) for Llama 8B, with Q6_K setups on 1070 Ti GPUs achieving around 28-35 TPS.
- Performance is closely linked to factors like context length, quantization, and GPU VRAM bandwidth.
- GPU Performance Matters: New generation GPUs like the 4080 or 4090 drastically outperform older models such as the 1070 Ti but need correct configurations to maximize capabilities.
- Utilizing tensor cores and enhanced memory bandwidth are essential for achieving notable performance gains.
- Challenges in Compiled Models: Users questioned the current support for custom compiled versions of Llama.cpp with LM Studio, leading responses to suggest using the command line tool `lms` for model loading.
- This solution promotes persistence across reboots, mitigating some of the challenges faced with compiled models.
- Token Generation Speeds Under Fire: Members highlighted sluggish token generation speeds with high-capacity models, with some setups peaking at 0.25 tokens/sec, illustrating CPU bottlenecks.
- With many local setups feeling these limits, there’s a push to consider cloud services for better performance if needed.
OpenAI Discord
- Grok 2 Shows Potential: Members expressed interest in experimenting with Grok 2, indicating a growing fascination with newer models.
- Although specific performance details were lacking, the buzz suggests Grok 2 could be a noteworthy development.
- DALL-E's Image Generation Falls Short: DALL-E's capabilities were criticized, with one member simply labeling its image output as bad.
- Expectations are high for image generation, and this feedback underscores disappointment in its performance.
- The Mystery of Model Parameters: There was lively debate over the parameter sizes of models like 4o-mini and GPT-3.5, with some speculating that 4o-mini has around 1 billion parameters.
- Varying opinions indicate confusion in the community regarding the relationship between model size and performance.
- GPTs Struggle with PDF Comprehension: Members noted that GPTs fail to read entire PDFs before responding, often leading to incomplete information being referenced.
- Including key information in the main instructions was suggested to help improve response accuracy.
- Guidelines for Using ChatGPT to Create Website Content: A user sought advice on using ChatGPT to build a website centered on controlling, asking for strategies on effective prompt crafting.
- Emphasis was placed on sourcing content from trustworthy and scientific materials, highlighting a focus on quality.
tinygrad (George Hotz) Discord
- Tinygrad ML Library War Winning Streak: A member described three key reasons why tinygrad is set to excel: its efficient kernel search using BEAM and MCTS, a concise codebase of under 10k lines, and a lazy execution model.
- 'This avoids the combinatorial nightmare of one kernel per combination of device...', stressing improved performance through its streamlined approach.
- Tinybox Preorder Puzzles: Discussions around the tinybox preorder sparked inquiries about payment methods and associated costs, especially if it would adopt Stripe like previous models.
- Members voiced curiosity on how to navigate the preorder payment process with existing methods.
- OpenCL Handling Makes Waves: Concerns regarding Out Of Memory (OOM) handling emerged after facing all-black outputs in Stable Diffusion, with questions about OpenCL's capability.
- A member sought clarity on whether the implementation effectively addresses these out-of-memory conditions in tinygrad.
- MSE and MAE Implementation Simplified: A proposal to integrate MSE and MAE functions directly into tensors was made, claiming it can be executed in a few lines of code.
- They referenced a GitHub pull request showcasing the implementation along with testing.
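The "few lines" claim checks out; a sketch in terms of existing Tensor ops (written as free functions here, where the PR adds them as Tensor methods):

```python
from tinygrad import Tensor

def mse(pred: Tensor, target: Tensor) -> Tensor:
    # mean squared error from existing primitives
    return (pred - target).square().mean()

def mae(pred: Tensor, target: Tensor) -> Tensor:
    # mean absolute error
    return (pred - target).abs().mean()

print(mse(Tensor([1.0, 2.0]), Tensor([2.0, 2.0])).item())  # 0.5
```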
- Windows Compatibility Concerns Do Surface: Issues with Windows 11 arose when Python installation led users to Microsoft Store, indicating compatibility hurdles.
- Attention was drawn to sqlite issues from earlier discussions, emphasizing the necessity of using the correct Python version.
LAION Discord
- Microdiffusion Implementation Progress: The community is eagerly awaiting the implementation of the microdiffusion paper, which could significantly reduce training costs with a $2k training goal and 7 days of H100 compute secured.
- Discussions focus on preprocessing help and seeking short-term improvements following experiment preparation.
- Data Preprocessing Challenges Highlighted: A member noted issues in uploading large datasets to Hugging Face due to its 300GB limit, proposing chunking the data into parts or using a webdataset hosted on an S3.
- They aim to preprocess data and stream it efficiently by categorizing images into multiple datasets based on aspect ratios.
- Webdataset for Efficient Data Handling: Participants discussed the use of webdataset as a workaround for large dataset management, allowing streamlined usage with PyTorch.
- One member insisted webdataset bundling would enhance management for their anticipated 1TB dataset.
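A sketch of that streaming setup (bucket, shard pattern, and sample keys are placeholders):

```python
import webdataset as wds  # pip install webdataset
from torch.utils.data import DataLoader

# Stream sharded .tar files from S3 rather than uploading one giant dataset
urls = "pipe:aws s3 cp s3://my-bucket/shard-{000000..000999}.tar -"
dataset = (
    wds.WebDataset(urls)
    .decode("pil")            # decode image bytes with PIL
    .to_tuple("jpg", "json")  # yield (image, metadata) per sample
    .batched(32)
)
loader = DataLoader(dataset, batch_size=None, num_workers=4)
```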
- Dinov2 gets optimized in layers: Discussion centered on distilling Dinov2 into the early layers, enhancing efficiency for downstream tasks related to images.
- Notably, this method shows superior performance compared to merely relying on cross attention with CLIP embedding.
- Introduced EchoPrime for Echocardiography: EchoPrime emerges as a multi-view, contrastive learning-based model trained on over 12 million video-report pairs, tackling traditional echocardiography AI challenges.
- This new foundation model enhances performance and application scope in cardiac imaging.
LlamaIndex Discord
- Experimenting with Dynamic Few-shot Prompting: Dynamic few-shot prompting enhances LLM fine-tuning by retrieving relevant examples based on queries instead of a fixed set (more details here). This method improves prompt contextualization for various applications.
- Participants pointed to a related discussion thread that emphasizes the importance of pertinent examples in this approach.
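Stripped of framework specifics, the technique is nearest-neighbor retrieval over a pool of labeled examples (a generic sketch, not LlamaIndex's API; the embedding model and example pool are illustrative):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")
pool = [
    ("How do I reset my password?", "Go to Settings > Security and ..."),
    ("Where can I find my invoice?", "Invoices live under Billing ..."),
]
pool_vecs = encoder.encode([q for q, _ in pool], normalize_embeddings=True)

def build_prompt(query: str, k: int = 1) -> str:
    """Pick the k most similar labeled examples and prepend them as shots."""
    qv = encoder.encode([query], normalize_embeddings=True)[0]
    top = np.argsort(pool_vecs @ qv)[::-1][:k]  # cosine similarity (normalized)
    shots = "\n\n".join(f"Q: {pool[i][0]}\nA: {pool[i][1]}" for i in top)
    return f"{shots}\n\nQ: {query}\nA:"
```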
- Mistral Rolls Out New Edge-Class Models: Mistral has launched notable edge-class models with day 0 support available via 'pip install llama-index-llms-mistralai' (installation link). This allows developers to quickly integrate these models into their systems.
- The announcement received attention in the community, highlighting its relevance in the current AI landscape (link to the announcement).
- Enhancing Multimodal RAG Systems Using Azure: A guide illustrates how to create a multimodal RAG system leveraging Azure AI Search and Azure OpenAI with LlamaIndex, guiding improvements in retrieval accuracy (see the guide). This comprehensive documentation includes benchmarks for practical implementation.
- The walkthrough focuses on maximizing contextual retrieval across different AI systems, providing valuable techniques as shared in this tweet.
- Optimizing Neo4jPropertyGraphStore Creation: Creating a Neo4jPropertyGraphStore can be time-consuming, particularly when handling 64,322 nodes, prompting discussions on memory optimization and schema simplifications. Suggestions included setting `refresh_schema` to false to mitigate costly schema-related calls.
- Community feedback indicated that these adjustments could significantly enhance performance during initialization.
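The suggested mitigation, sketched with placeholder connection details:

```python
from llama_index.graph_stores.neo4j import Neo4jPropertyGraphStore

graph_store = Neo4jPropertyGraphStore(
    username="neo4j",
    password="password",          # placeholder credentials
    url="bolt://localhost:7687",
    refresh_schema=False,         # skip the costly schema refresh on init
)
```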
- Investigating Multi-Agent Orchestration Workflows: Users inquired about replicating OpenAI's Swarm capabilities within LlamaIndex, focusing on workflows as a core approach. The discussions led to examples of multi-agent communication being provided, backed by blog articles and GitHub repositories.
- This exploration aims to develop efficient solutions for orchestrating actions across multiple agents using existing workflows.
Interconnects (Nathan Lambert) Discord
- Mistral celebrates launch of Ministral models: On the first anniversary of Mistral 7B, Mistral launched two edge models: Ministral 3B and Ministral 8B, designed for on-device use with privacy-first inference capabilities, featuring up to 128k context lengths.
- The community expressed disappointment over the absence of weights for Ministral 3B, raising questions about its potential performance compared to Ministral 8B which does have non-commercial weights.
- AI2 OLMo Internship offers competitive salaries: AI2 is hiring research interns for the OLMo project, with salaries from $86,520 to $123,600 and an opportunity to lead significant research in NLP and machine learning over a 12-week internship.
- Interns can define research projects and publish in high-profile journals, making this opportunity quite coveted in the competitive landscape.
- Snailbot expands its capabilities: Snailbot is now being utilized for audio feed posts, reflecting its enhanced functionality in content sharing.
- This has been perceived as a twofer, with users expressing excitement about the bot's new use case.
- Challenges in audio distribution: Users are expressing challenges with audio content distribution, stressing the need for effective strategies.
- One user humorously compared their issues to a meme from a popular note-taking app, indicating widespread frustration within the community.
- Hackernews visibility struggles: There are ongoing concerns about the pitfalls of posting on Hackernews, especially regarding link visibility and potential penalties for direct links.
- Members discussed how navigating visibility issues is complicated, suggesting strategies that avoid direct linking to bolster content engagement.
Latent Space Discord
- Gemini free tier struggles to deliver: Users have reported timeouts and failures with the Gemini free tier, raising doubts about its claimed capacity of 1.5B tokens per day.
- Effective usage could be closer to 0.05B tokens, as speculated by several members.
- Mistral enters the edge model race: Mistral introduced Ministral 3B and Ministral 8B models aimed at on-device applications, enhancing commonsense and reasoning in the sub-10B range.
- However, the 3B model is API-only, limiting its on-device applicability and drawing critiques regarding restrictive licensing.
- Nvidia's Llama 3.1 Nemotron raises eyebrows: Nvidia's Llama 3.1 Nemotron 70B reportedly surpasses both GPT-4o and Claude Sonnet 3.5 across various benchmarks, stirring community excitement.
- Debate arises over whether Sonnet 3.5 users can still claim relevance against this cutting-edge model.
- E2B's SDK gets a funding boost: E2B launched the v1.0 SDK alongside an impressive $11.5M seed round, targeting AI code interpreting with secure sandboxing.
- The startup claims to run millions of sandboxes monthly, with notable partnerships including Perplexity.
- Call for LLM performance benchmarking tool: A member put forth the idea of a CPUBenchmark-style tool dedicated to LLM comparisons to improve existing leaderboards, which are currently limited.
- Current tools, such as lmsys/hugging face leaderboards, don't allow for effective direct comparisons between models.
Cohere Discord
- Cohere Community Inspires Daily: Members find daily motivation from the Cohere community, appreciating its supportive atmosphere.
- A lot of things, honestly this whole community each day everyday! reflects the positive sentiment shared.
- Job Opportunities Clarified: A reminder surfaced that inquiries regarding jobs at Cohere should be directed to the careers page instead.
- The member highlighted the team's passion for addressing real-world challenges with ML/AI technologies.
- Join the RAG++ AMA Tomorrow!: Another AMA with Ayush Thakur and Meor Amer on RAG development kicks off tomorrow at 11:00 AM ET, following great interest from the community.
- The session links back to the RAG++ course, promising insights into current developments.
- Cohere Embed API error handling explained: Inquiries on error handling in the Cohere Embed API prompted suggestions for implementing retry logic based on specific error codes when documents fail to embed.
- Errors could result in an overall failure for the batch, advising care in managing embeddings.
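A sketch of that retry logic with the Python SDK (the model name, backoff policy, and broad exception handling are illustrative choices):

```python
import time
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key

def embed_with_retry(texts: list[str], retries: int = 3):
    """Retry transient failures; one bad call fails all documents in the batch."""
    for attempt in range(retries):
        try:
            resp = co.embed(
                texts=texts,
                model="embed-english-v3.0",
                input_type="search_document",
            )
            return resp.embeddings
        except Exception:  # in practice, match the SDK's error codes (429/5xx)
            if attempt == retries - 1:
                raise
            time.sleep(2 ** attempt)  # exponential backoff
```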
- Text-to-Speech is here for chatbots!: Excitement brews over the introduction of text-to-speech functionality for chatbot responses, with a setup guide shared for users.
- Sick! was the enthusiastic response from a user, indicating effective adoption of the new feature.
Modular (Mojo 🔥) Discord
- Playground Gets Love from Users: Members expressed much needed love for the Playground feature, thanking Modular for its improvements and support. For more information, you can read about it in the Playground documentation.
- This positive feedback highlights the importance of community input in refining tools.
- Save the Date for Community Showcase: A community meeting is scheduled for October 21st, featuring a live showcase where participants can demo their MAX and Mojo projects. Slots will last between 5-10 minutes, allowing sharing of learnings and feedback.
- Engagement like this helps catalyze collaboration and knowledge sharing among developers.
- Weird Mojo Bug Fixed: A member identified a reproducible Mojo bug and later fixed it themselves, offering to add any contributions to the changelog. They encouraged others to report similar issues to enhance the platform.
- This proactive approach can lead to quicker bug resolution and better software stability.
- Inferencemax Project Simplifies API: A member shared their new project called Inferencemax, aimed at simplifying LLM inference, although it may not fully meet existing requests. The code is in Python, with planned improvements for performance.
- This project reflects ongoing efforts to create a more accessible inference API landscape.
- Jakub's Python API for MAX Sparks Interest: Inquiry about Jakub's contributions to the Python API for MAX led to a link being shared to a community meeting where he spoke. Although the API isn't fully released yet, its presence in nightly builds aims to showcase ease of use.
- Such discussions emphasize anticipation for API developments that improve usability.
Stability.ai (Stable Diffusion) Discord
- Mineral Resources Poster Assistance Needed: A member sought help on creating a poster about mineral resources for their college project, asking for guidance from the community.
- Another member advised them to share specific needs in the chat for a more direct support approach.
- SD3 Fails at Human Poses: Discussion centered on the performance drawbacks of SD3 with human figures in lying or upside-down positions, noted to be generally poor.
- A participant highlighted that deformations occur frequently regardless of pose, indicating a consistent issue.
- LLM Token Limits Ignored: A user vented frustrations about LLMs failing to abide by token limits or stop commands, causing chaotic outputs.
- They speculated potential problems with prompt templating, inviting insights from more seasoned users.
- Clearing Up LyCORIS vs LoRA Confusion: A member inquired about the purpose of the LyCORIS folder since everything moved to LoRA, expressing confusion.
- Another user responded, explaining the folder's historical necessity for extensions now subsumed by newer interfaces like Auto1111.
- New Web3 Project Job Roles Available: An update shared the launch of a new Web3 project that is looking to fill various positions, including Developer and Moderator, with competitive pay.
- Interested candidates were encouraged to reach out directly for more specific information on available roles.
OpenInterpreter Discord
- Open Interpreter GitHub Copilot extension suggestion: A member proposed creating an Open Interpreter GitHub Copilot extension, while another indicated they lacked the bandwidth to pursue it but would guide community efforts.
- They encouraged collaboration within the community to bring this project to life.
- Excitement for Mozilla AI Talk: Members expressed anticipation for an upcoming talk from Mozilla AI, urging everyone to add it to their calendars.
- A link to the event was shared for easy access.
- Kernel panic reported on app closure: A member reported a kernel panic when closing the Open Interpreter app, prompting MikeBirdTech to recommend creating a dedicated troubleshooting post.
- Details about the version used should accompany the report for effective resolution.
- New Local LLMs functionality: A recent update now enables easy execution of any GGUF model on Hugging Face via Ollama, just by pointing to the repository.
- Users can run Llama 3.2 3B with a simple command, making local LLMs much more accessible.
- Positive feedback on the Local LLMs update: Members expressed enthusiasm for the new ability to run models directly, highlighting it as a significant enhancement for local LLMs.
- An appreciation for previously missing features was noted, particularly in connection to Jan.
DSPy Discord
- Unit Testing DSPy Workflow System: A member announced they are unit testing a DSPy powered Workflow system in the Discord channel. Check the channel for progress updates and feedback on the testing process.
- This testing aims to refine the workflow and ensure reliability, encouraging community input on findings.
- Major Update to dspygen Framework: A recent major update has been made to the dspygen framework, built for improvements outside of dslmodel. This aims to enhance the DSPy workflow for language models like GPT, BERT, and LLaMA.
- The updates focus on bringing more features and optimizations, allowing better integration within existing systems.
- LightRAG outshines GraphRAG: Recent claims suggest LightRAG offers significant enhancements in effectiveness and cost efficiency compared to GraphRAG as detailed in this paper. The authors propose that LightRAG addresses limitations of existing RAG systems, improving contextual awareness and information retrieval through innovative graph structures.
- They assert that these innovations result in reduced operational costs and improved overall system performance.
- DSPy integration into GPT-O1+ progresses: Updated documentation introduced a long-form RAG example for building a question answering system about tech topics using DSPy. Users can install DSPy with `pip install -U dspy`, and a tutorial is available in the DSPy documentation.
- This integration is expected to streamline workflows and improve the user experience within the DSPy framework.
- Revamping documentation approaches: Discussion emerged about the upcoming revamp of DSPy documentation, focusing on improving rhythm and style. Participants are considering whether to use HTML documentation versus detailed notebooks, mentioning the usefulness of having caches for execution.
- This revamp aims to enhance clarity and accessibility for users, allowing easier navigation through the documentation.
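For readers who want to try the tutorial's flow, a minimal question-answering sketch on recent DSPy versions might look like the following; the model string and question are illustrative assumptions, and an API key is expected in the environment.

```python
# Hedged sketch of a tiny DSPy QA module in the spirit of the new RAG
# tutorial; model name and question are illustrative.
import dspy

lm = dspy.LM("openai/gpt-4o-mini")  # any supported provider/model string
dspy.configure(lm=lm)

qa = dspy.ChainOfThought("question -> answer")  # declarative signature
result = qa(question="What does a retriever contribute to a RAG pipeline?")
print(result.answer)
```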
LangChain AI Discord
- LangChain Community to Close: On October 31, 2024, the current LangChain Discord community will shut down to make way for a revamped user space aimed at being more engaging and fun.
- Members can keep up with updates by filling out the form here and are encouraged to provide feedback via community@langchain.dev.
- Advice Needed for API Routing: A member seeks guidance on using agents for routing user inquiries to different APIs, mentioning they have 5 APIs set up in Docker Compose.
- This inquiry aims to enhance their project structure and optimize user interaction with APIs.
- Playground Blank Page Woes: Members flagged a significant issue in the Playground where input types with Optional fields cause the page to load blank with errors in the console.
- The problem likely stems from the input schema's null type conflicting with jsonforms, hampering functionality.
- GitHub Issue Logged for Playground Trouble: A member opened GitHub Issue #782 to track the Playground issue relating to Optional fields leading to load failures.
- This is part of ongoing efforts to resolve key usability problems within the LangChain Playground.
- Remote Runnable Tools Binding Inquiry: A member questioned the absence of a bind_tools() method for tool binding in the Remote Runnable, creating an opportunity for improvements.
- This discussion could lay the groundwork for better tool management within LangChain's environment.
OpenAccess AI Collective (axolotl) Discord
- AIFoundry Seeks GitHub Mentorship: AIFoundry.org is looking for mentorship regarding their GitHub organization and design, aiming to emulate Axolotl's streamlined approach.
- Yulia expressed a desire for guidance to enhance their open-source initiative focused on local model inference.
- Mistral's Access Rules Explained: To access the new Mistral-8B-Instruct-2410 model on Hugging Face, users must provide contact details and obtain permission for non-standard uses.
- Accessibility is contingent on consent from Mistral AI, with a call to review their privacy policy for compliance.
- L3.1 Ethereal Rainbow Launch Dangers: The L3.1 Ethereal Rainbow repository has been flagged for containing sensitive and potentially harmful content, necessitating caution for users.
- The repository has prompted warnings because of its sensitive material, and users should carefully consider the implications of the content.
- Finetuning the L3.1 Model: The L3.1 model has been finetuned with over 250 million tokens and maintains a sequence length capability of 16k, enhancing its performance for creative writing applications.
- This focus on RP and creative writing signifies a targeted effort to bolster the model's practical usability in sensitive contexts.
Torchtune Discord
- Members Buzz About New Paper: Enthusiasm surged around the paper titled arxiv:2410.06511, with members deeming it a fantastic read.
- One member affirmed they're still reviewing the paper, underscoring its quality and engagement from the community.
- Unanimous Praise for Paper Quality: The overall sentiment regarding the paper was strongly positive, with multiple members highlighting its impressive content.
- Some noted they are still working through the details, reflecting a shared interest in its insights.
LLM Agents (Berkeley MOOC) Discord
- LLMs excel in zero-shot optimization: Recent research shows that Large Language Models (LLMs) can perform zero-shot optimization across complex problems like multi-objective optimization.
- This application could be instrumental in engineering tasks such as rocket nozzle design and windfarm layout optimization.
- Meet the Language-Model-Based Evolutionary Optimizer (LEO): LEO is introduced as a novel population-based approach leveraging LLMs for numerical optimization, performing equally well against gradient-based and gradient-free methods.
- However, concerns about the potential for hallucination in outputs suggest a necessity for meticulous management in its applications.
- Community buzzes about LLM design applications: Discussions in the community reflect a keen interest in the practical uses of LLMs for engineering designs, particularly focusing on reasoning capabilities.
- Members are enthusiastic about collaborating on how LLMs can tackle real-world engineering challenges.
Mozilla AI Discord
- Pilot Program for AI Stewardship Practice: The MaRS Discovery District is offering free slots for the AI Stewardship Practice Program, targeting professionals in AI fields.
- This initiative provides a microcredential aimed at researchers, entrepreneurs, and educators looking to influence AI positively; more info here.
- Participants Wanted for AI Course Pilot: There’s an opportunity to join the course pilot for the program, valued at 500 CAD, with interested participants encouraged to respond quickly.
- Seats will fill based on threaded replies, making swift action crucial for those wanting to participate.
The Alignment Lab AI Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The LLM Finetuning (Hamel + Dan) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The MLOps @Chipro Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The DiscoResearch Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The Gorilla LLM (Berkeley Function Calling) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The AI21 Labs (Jamba) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
PART 2: Detailed by-Channel summaries and links
HuggingFace ▷ #announcements (1 messages):
Gradio 5.0 Launch
Sentence Transformers v3.2.0
HuggingChat Multimodal Update
FLUX LoRA Lab Introduction
LLM Evaluation Guidebook
- Gradio 5.0 Launch with Enhanced Security: We just launched Gradio 5.0, making it easier to create production-ready machine learning web applications, accompanied by a comprehensive security overhaul. Over 6M monthly downloads emphasize its growing popularity.
- In the spirit of transparency, we're making the full security report public for all to see.
- Sentence Transformers v3.2.0 Debuts: Sentence Transformers v3.2.0 is out now, introducing 2 new backends for embeddings: ONNX and OpenVINO, promising speedups of 2x-3x. This update marks the biggest release for inference in 2 years, allowing for up to 500x speedups with static embeddings.
- Check out the new capabilities described in-depth, including faster inference at 10k texts/sec with Model2Vec; a sketch of the new backend switch follows this list.
- HuggingChat Goes Multimodal: HuggingChat is now multimodal with the addition of Llama-Vision 11B Instruct, enhancing interaction possibilities. This brings exciting new dimensions to user experiences within the platform.
- This is not a drill, as users are encouraged to explore this significant upgrade to utilize multimodal capabilities.
- Exciting Launch of FLUX LoRA Lab: Introducing the FLUX LoRA Lab, where users can mix and combine multiple FLUX LoRAs into unique configurations. The *🎲 function* offers random merges for a touch of surprise and creativity.
- This playful approach encourages experimentation with LoRAs and is designed to spark innovation and fun.
- New LLM Evaluation Guidebook Released: A new LLM evaluation guidebook has been published to provide practical insights and theoretical knowledge. This resource aims to support users in better managing the Open LLM Leaderboard and designing evaluations.
- The guidebook is part of an effort to share best practices and insights gathered by the @huggingface evaluation team.
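As a rough sketch of the new backend switch (model name is illustrative; the ONNX extra must be installed, e.g. `pip install "sentence-transformers[onnx]"`):

```python
# Hedged sketch: selecting the new ONNX backend introduced in v3.2.0.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2", backend="onnx")
embeddings = model.encode(["ONNX and OpenVINO promise 2x-3x speedups."])
print(embeddings.shape)  # (1, 384) for this model
```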
Links mentioned:
- Tweet from Gradio (@Gradio): 🔒 Gradio 5 Just Got Even More Secure! 🔒 Following the launch of Gradio 5, we're excited to share one of its most significant enhancements: a comprehensive Security overhaul! 🛡️ With Gradio be...
- Tweet from tomaarsen (@tomaarsen): 📣 Sentence Transformers v3.2.0 is out, marking the biggest release for inference in 2 years! 2 new backends for embedding models: ONNX (+ optimization & quantization) and OpenVINO, allowing for speed...
- Tweet from tomaarsen (@tomaarsen): Model2Vec distills a fast model from a Sentence Transformer by passing its vocabulary through the model, reducing embedding dims via PCA and applying Zipf weighting. Inference with the resulting stat...
- Tweet from @GoogleDevExpert (@GoogleDevExpert): 300k+ Hugging Face Transformers models are now available in KerasNLP 🤗 Spearheaded by GDE @ariG23498, this lets devs load models like Gemma & Llama2 directly into KerasNLP - a universe of new possib...
- Tweet from Argilla (@argilla_io): 🚀 Argilla ❤️ LlamaIndex: you can now monitor llama-index pipelines and improve them with human and AI feedback! - Ideal for RAG - Full traceability - Rich metadata collection Check it out and try t...
- Tweet from Sayak Paul (@RisingSayak): Fine-tuning 5B param video models should be possible with a SINGLE 24GB GPU 🍓 We're releasing CogVideoX-Factory, a repository containing memory-optimized scripts to fine-tune Cog family of video...
- Tweet from Daniel Vila Suero (@dvilasuero): 📢 Thinking LLMs data pipeline replication Today, Jason Weston from @AIatMeta shared their new work: train LLMs to think & respond and apply iterative DPO I have just implemented the data generation...
- Tweet from merve (@mervenoyann): This is not a drill 💥 @huggingface HuggingChat is now multimodal with Llama-Vision 11B Instruct by @Meta ! 🤗
- Tweet from apolinario 🌐 (@multimodalart): ✨ Introducing FLUX LoRA Lab 🧪🔬 Mix-up and combine multiple FLUX LoRAs and do your own crazy LoRA alchemy (You can also use the 🎲 function to randomly merge 2 LoRAs for some surprise, novelty and ...
- Tweet from Quentin Lhoest 🤗 (@qlhoest): New blog post: Scale AI-based Data Processing EASY The FineWeb-Edu dataset comes from processing 45TB (🤯) of FineWeb And it uses a Language Model to classify the educational level of the text 😭😭 ...
- Tweet from merve (@mervenoyann): so we have a leaderboard for LLMs with video input, and most of them are open models 🔥👏 we are so back fam
- Tweet from Quentin Lhoest 🤗 (@qlhoest): My new app is out !! ✨The Common Crawl Pipeline Creator ✨ Create your pipeline easily: ✔Run Text Extraction✂️ ✔Define Language Filters🌐 ✔Customize text quality💯 ✔See Live Results👀 ✔Get Python cod...
- Tweet from Clémentine Fourrier 🍊 (@clefourrier): Dear LLM twitter, I made an evaluation guidebook for you! 🥳 https://github.com/huggingface/evaluation-guidebook Goal: sharing both practical insights and theoretical knowledge the @huggingface eval...
HuggingFace ▷ #general (143 messages🔥🔥):
AI Model Performance
Use of Ollama with Hugging Face
Gradio Documentation Issues
TTS Model Recommendations
Role of AI in Workforce
- Discussion on AI Model Performance: Members discussed the hypothetical performance of a model setup with 72GB of VRAM and 128GB of DDR4 RAM, questioning if it could achieve 5-6 t/s in processing speed.
- Additionally, there was a mention of a custom PyTorch linear layer and its integration with Autograd for automatic gradients; see the sketch after this list.
- Using Ollama with Hugging Face: An introduction to using Ollama, which allows interaction with GGUF models directly from the computer without creating new `Modelfiles`, was shared, emphasizing its simple command syntax.
- The discussion highlighted accessibility, as Ollama claims to run any of the 45K public GGUF checkpoints on Hugging Face.
- Issues with Gradio Documentation: A user raised concerns about the usability of the Gradio documentation, noting readability issues with text on a dark background and lack of replies from the maintainers.
- This highlighted a need for better engagement from the community and maintainers regarding documentation improvements.
- Recommendations for TTS Models on Hugging Face: A community member asked for recommendations for Text-to-Speech (TTS) models from the Hugging Face library, prompting a user to point to trending models.
- Specifically, the SWivid/F5-TTS model was highlighted as an updated option available for TTS tasks.
- AI's Role in the Job Market: A conversation about AI roles in the workplace emphasized that while AI tools, like large language models, are emerging, the job market will always evolve.
- Members noted the importance of adapting to new tools and technologies similar to the transition seen with spreadsheet software in various jobs.
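The custom-linear-layer point above is easiest to see in code; below is a minimal, self-contained sketch of a hand-written linear op plugged into Autograd (shapes and values are illustrative, not taken from the discussion).

```python
# Sketch: a custom linear layer wired into PyTorch's Autograd by defining
# forward and backward explicitly via torch.autograd.Function.
import torch

class MyLinear(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, weight):
        ctx.save_for_backward(x, weight)
        return x @ weight.t()

    @staticmethod
    def backward(ctx, grad_out):
        x, weight = ctx.saved_tensors
        # gradients w.r.t. x and weight, matching the forward's matmul
        return grad_out @ weight, grad_out.t() @ x

x = torch.randn(4, 3, requires_grad=True)
w = torch.randn(5, 3, requires_grad=True)
MyLinear.apply(x, w).sum().backward()
print(x.grad.shape, w.grad.shape)  # torch.Size([4, 3]) torch.Size([5, 3])
```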
Links mentioned:
- CogVLM2: Bringing Deeper Visual and Language Understanding to AI: AI has come a long way in understanding text, but when it comes to merging visual data — like images and videos — with language, we’ve…
- Hugging Face - Learn: no description found
- Use Ollama with any GGUF Model on Hugging Face Hub: no description found
- Models - Hugging Face: no description found
- openr/reports/OpenR-Wang.pdf at main · openreasoner/openr: OpenR: An Open Source Framework for Advanced Reasoning with Large Language Models - openreasoner/openr
- Home: An Open-Sourced Framework for Advancing Reasoning in Large Language Models
HuggingFace ▷ #today-im-learning (9 messages🔥):
AI Influencer Development
Image Generation Techniques
Language Models with Personality
- Building an AI Influencer akin to Aitana Lopez: A member is attempting to create an AI Influencer inspired by the Spanish AI model Aitana Lopez and is facing challenges in generating diverse images with varying attributes.
- They seek guidance on achieving more realistic Instagram content instead of repetitive camera shots.
- Using ControlNet for Image Consistency: A suggestion was made to utilize ControlNet to maintain consistency while generating images for the AI Influencer's profile.
- This potential solution was posed as a way to assist in the creation of unique images that align with the desired vision.
- Challenges in AI Development: A member pointed out the difficulty of expecting assistance from others on complex tasks, indicating a lack of clarity on the requests for help.
- This comment reflects the broader challenges faced by those new to the field in seeking practical help.
- Questions About Language Models: One member inquired whether Hugging Face can generate phrases with a distinct personality based on a large dataset of prompts.
- They expressed interest in developing a language model that not only generates text but also embodies a personality reflective of the input data.
- Exploring Transformers for Phrase Generation: A discussion was initiated around the capability of using transformers for generating similar phrases based on specific prompts.
- This reflects an ongoing interest in leveraging advanced AI techniques for creative applications.
HuggingFace ▷ #cool-finds (4 messages):
GroupFi-Chatbox
PaliGemma GitHub Repository
- GroupFi-Chatbox: An AI-Enhanced Messaging Solution: A member shared a link to the GroupFi-Chatbox repository on GitHub, noting that it's a messaging solution they are looking to enhance with AI features.
- The shared repository features a detailed README and invites contributions to its development.
- Discovering PaliGemma for More Fun: Another member highlighted the PaliGemma repository, stating that if you loved GroupFi-Chatbox, you would adore this one.
- A user confirmed their interest by mentioning they just starred the repository on GitHub, showing appreciation for the shared link.
Links mentioned:
- GroupFi-Chatbox/packages/sdk/README.md at dev · TanglePay/GroupFi-Chatbox: Contribute to TanglePay/GroupFi-Chatbox development by creating an account on GitHub.
- GitHub - ThinamXx/PaliGemma: Reading PaliGemma paper ...: Reading PaliGemma paper ... Contribute to ThinamXx/PaliGemma development by creating an account on GitHub.
HuggingFace ▷ #i-made-this (4 messages):
Video Inference using Vision Transformers
Accelerating LLM Training
In-Depth Question Answering Evaluation App
- Resources for Video Inferences: A member shared a collection of resources for learning about video inferences using vision transformers, available on their GitHub.
- The resource aims to guide users through the process of implementing video inference techniques in their projects.
- LLM Training Process Platform: A member is excited about a platform built to accelerate the LLM training process by managing data across various storage solutions including Hugging Face and S3.
- They offered a demo and are keen to tailor the platform according to community needs; contact them via Mail or LinkedIn.
- New Medium Article on Learning App: A member announced their first article on Medium discussing the In-Depth Question Answering Evaluation App, designed to provide real-time feedback for learners.
- The app utilizes Gemini 1.5 Pro for question answering and aims to enhance users' learning experiences, with thanks to Dr. Fady AlNajjar for the idea.
Links mentioned:
- Enhancing Learning Through Real-Time Feedback: In-Depth Question Answering Evaluation App: In the world of online learning and self-improvement, having effective tools to evaluate one’s progress is crucial. Whether you’re studying…
- 0xD4rky - Overview: a boy living in a loop of compilation and termination - 0xD4rky
HuggingFace ▷ #reading-group (3 messages):
Reading Group Reminder
Participant Excitement
- Reading Group Scheduled for Tomorrow: A reminder was posted that the next reading group is queued up for tomorrow, inviting everyone to join the discussion.
- Participants were encouraged to attend, highlighting the community's enthusiasm for upcoming events.
- Excitement from Attendees: A participant expressed their excitement about the upcoming reading group, affirming their attendance.
- This showcases the positive engagement and anticipation within the group for collaborative discussions.
HuggingFace ▷ #NLP (11 messages🔥):
Fine-tuning LLMs
Transformers library contribution
Special tokens usage
Attention masks
GPU requirement for debugging
- Clarifying Fine-Tuning of LLMs: A member questioned how LLMs, unlike BERT, identify where a response starts in prompts during fine-tuning since labels aren't used.
- Another member suggested that adding special tokens helps signify user/system turns in the sequence.
- Using Attention Masks for System Responses: A user pointed out the utility of attention masks to focus updates solely on the system response in sequences.
- This approach is beneficial for ensuring alignment even when handling malicious user inputs; a minimal masking sketch follows this list.
- Contributing to Transformers Library: A member expressed interest in contributing to the transformers library on GitHub but inquired if a GPU was necessary.
- It was clarified that a GPU isn't required unless debugging specific cases, and free GPU hours on Colab are a viable option.
- Debugging Edge Cases and Model Sizes: Discussion arose regarding edge cases where larger batch sizes might alter outputs, particularly related to GPU compilation issues.
- However, it was noted that many contributions could be made using smaller models and CPU resources.
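A minimal sketch of the masking idea discussed above (token IDs and lengths are toy values): copy `input_ids` into `labels` and set the prompt positions to -100, the index PyTorch's cross-entropy loss ignores, so gradients flow only through the response tokens.

```python
# Toy sketch of response-only loss masking for causal LM fine-tuning.
import torch

input_ids = torch.tensor([[101, 7592, 2088, 102, 3449, 2003, 2307, 102]])
prompt_len = 4  # tokens belonging to the user/system turns (toy value)

labels = input_ids.clone()
labels[0, :prompt_len] = -100  # ignored by torch.nn.CrossEntropyLoss

# A Hugging Face causal LM called as model(input_ids=input_ids, labels=labels)
# would then compute loss only over the assistant's response tokens.
```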
HuggingFace ▷ #diffusion-discussions (1 messages):
Hugging Face tutorial
DiffusionPipeline
DDPM model
- Hugging Face tutorial on Diffusers: A member shared a Hugging Face tutorial that explains how to utilize the Diffusers toolbox for building custom diffusion systems.
- The tutorial emphasizes the user-friendly nature of the toolbox while introducing the core components like models and schedulers.
- Understanding the DiffusionPipeline: The DiffusionPipeline can be used to bundle models and schedulers but can also be unbundled to create new systems, offering flexibility in design.
- This allows users to customize their diffusion processes according to specific requirements.
- Running a basic pipeline with ease: Users can generate an image with as few as four lines of code utilizing the DDPMPipeline, highlighting the approachable syntax for running models.
- For example, the original model from the DDPM paper can be accessed and used directly in the code with minimal setup, as shown in the sketch after this list.
- Classroom consideration for model size: A member mentioned that the DDPM model (approximately 450MB) should work well for classroom environments, ensuring accessibility for all students.
- They humorously noted the importance of reliable WiFi in facilitating model usage during lessons.
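The "four lines of code" claim maps to roughly this sketch, using the cat checkpoint associated with the DDPM paper (the step count here is an arbitrary choice for a quick demo):

```python
# Sketch: generating one image with DDPMPipeline from diffusers.
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("google/ddpm-cat-256")
image = pipeline(num_inference_steps=25).images[0]  # fewer steps = faster demo
image.save("ddpm_sample.png")
```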
Link mentioned: Understanding pipelines, models and schedulers: no description found
HuggingFace ▷ #gradio-announcements (1 messages):
Gradio 5 themes
- Gradio 5 Showcases New Themes: Gradio 5 includes several new themes, enhancing the visual experience for users. Check out the video showcasing all the themes to see what's new.
- The themes promise to offer users a more personalized and engaging interface.
- Inclusion of Various Visual Styles: The new Gradio 5 themes feature a range of visual styles, catering to different user preferences. This addition allows for greater customization in how applications appear to end users.
- Members expressed excitement about how these themes can improve user interface design.
Perplexity AI ▷ #general (158 messages🔥🔥):
Perplexity AI Features
Reasoning Mode
Perplexity Purchases
User Experience with AI Models
UI Improvements
- Perplexity AI introduces shopping features: Perplexity is integrating shopping capabilities with its new feature 'Perplexity Purchases', which aims to streamline purchases and check pricing effectively.
- This has drawn mixed reactions, with some users preferring the platform's original search functionality rather than evolving into a shopping service.
- Positive feedback on Reasoning Mode: Users are praising the effectiveness of the Reasoning Mode for programming tasks, noting that it demonstrates deep analytical capabilities.
- Several members shared success stories with this feature, emphasizing its reliability in generating accurate results.
- Discussion around AI extensions: The availability of extensions like 'Complexity' for enhancing Perplexity's performance was highlighted, alongside suggestions for effective alternatives like 'vettedai' and 'gigabrain'.
- Users shared positive experiences with these tools for cross-referencing various sources, including social media and reddit threads.
- Concerns over user experience in AI chat spaces: Concerns were raised about the chat space's dynamic, with users feeling the environment sometimes resembles a kindergarten rather than a serious AI forum.
- This led to discussions on the need for better moderation and user interactions to maintain focus on AI topics.
- Improvements in user interface and features: Members expressed optimism about ongoing improvements to Perplexity's user interface, particularly in making features like model changes more accessible.
- While some users noted the need for a simplified process, they appreciated the thoughtfulness behind the new UI enhancements.
Links mentioned:
- The New York Times warns AI search engine Perplexity to stop using its content: Perplexity argues it’s “surfacing factual content.”
- Redesigned Spaces and Purchases coming soon to Perplexity: Discover Perplexity's upcoming features: revamped Spaces and Perplexity Purchases. Enjoy free shipping and a sleek UI. Stay tuned for the release!
- Tweet from Perplexity (@perplexity_ai): Perplexity for Finance: Real-time stock quotes. Historical earning reports. Industry peer comparisons. Detailed analysis of company financials. All with delightful UI. Have fun researching the marke...
- Arangutan Monkey GIF - Arangutan Monkey Dancing - Discover & Share GIFs: Click to view the GIF
- Tweet from Aravind Srinivas (@AravSrinivas): Naming a meeting room after my favorite mathematician was a long standing dream come true. Ramanujan is an epitome of curiosity, the core idea perplexity stands for.
- GitHub - pnd280/complexity: ⚡ Supercharge your Perplexity.ai: ⚡ Supercharge your Perplexity.ai. Contribute to pnd280/complexity development by creating an account on GitHub.
Perplexity AI ▷ #sharing (12 messages🔥):
Green Power Ranger
Understanding APIs
Starlink Gigabit Speed Plan
TikTok AI Moderators
Oura Ring 4 Review
- Curious about the Green Power Ranger: A member linked a search result discussing the Green Power Ranger, sparking interest in the character's background.
- What made this character so popular?
- Revisiting APIs: Multiple members shared the same search result related to the question, 'What is an API?'.
- This reflects a growing interest in understanding this essential technology.
- Starlink Gigabit Speed Enthusiasm: Information about the Starlink Gigabit Speed Plan was shared, raising questions about its effectiveness.
- Members discussed the implications of this upgrade on internet connectivity.
- TikTok's AI Moderation Shift: A member highlighted a video featuring news on TikTok's pivot towards AI moderators, showcasing the evolving landscape of content moderation.
- This move raises discussions about the balance between automation and human oversight.
- Oura Ring 4 Review Sparks Interest: A member posted a link to an Oura Ring 4 review, generating intrigue about its features.
- Users are curious about how this updated version compares to earlier models.
Perplexity AI ▷ #pplx-api (6 messages):
search_domain_filter issue
Healthcare use case inquiries
LFM 40B API availability
- search_domain_filter not working as expected: A member expressed frustration that adding a domain to the `search_domain_filter` did not yield the expected results, generating unrelated content instead; a hedged request sketch follows this list.
- Another member confirmed that the parameter is functional, clarifying that if no relevant information is found, the model may still rely on its general knowledge.
- Inquiries about BAAs for healthcare projects: A member asked whether Perplexity signs Business Associate Agreements (BAAs) for healthcare use cases as they plan to build an enterprise solution in the US.
- There was no direct response to this inquiry in the gathered messages.
- Availability of LFM 40B via API: In the discussion, a member inquired about the possibility of accessing the LFM 40B model from labs.perplexity.com through the API.
- No responses were provided regarding this specific query.
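For context, a hedged sketch of how `search_domain_filter` is passed to the Perplexity chat completions endpoint; the model name, domain, and `PPLX_API_KEY` environment variable are illustrative assumptions, not taken from the discussion.

```python
# Hedged sketch: restricting Perplexity's online search to one domain.
import os
import requests

payload = {
    "model": "llama-3.1-sonar-small-128k-online",
    "messages": [{"role": "user", "content": "Latest CUDA graph docs?"}],
    "search_domain_filter": ["developer.nvidia.com"],  # illustrative domain
}
resp = requests.post(
    "https://api.perplexity.ai/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['PPLX_API_KEY']}"},
    json=payload,
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```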
OpenRouter (Alex Atallah) ▷ #announcements (2 messages):
Grok 2 Maintenance
NVIDIA Nemotron 70B Performance
- Grok 2 Temporarily Down for Maintenance: xAI has taken Grok 2 down for temporary maintenance lasting a couple of hours, resulting in users receiving a 404 error if they attempt to access it.
- An announcement will be made when the models are ready to return.
- NVIDIA Nemotron 70B Dominates Benchmark Tests: Nemotron 70B outperformed Llama 3.1 405B, GPT-4o, and Claude 3.5 Sonnet across multiple evaluations: Arena Hard score of 85.0 compared to 79.2 and 79.3.
- For further details, try it here and check out the announcement for the big day in open source.
Link mentioned: Tweet from OpenRouter (@OpenRouterAI): Big day for open source: NVIDIA Nemotron 70B Nemotron beat Llama 405B, GPT-4o & Claude 3.5 Sonnet on several evals: Nemotron 70B vs Claude 3.5 vs GPT4o: > Arena Hard: 85.0 | 79.2 ...
OpenRouter (Alex Atallah) ▷ #app-showcase (1 messages):
ChatGPT advanced voice mode
Personalized AI learning
Self-learning with AI
Vocabulary teaching examples
- ChatGPT voice mode teaches vocab from Naruto: A user demonstrated using ChatGPT advanced voice mode to teach new vocabulary with examples from Naruto, declaring the experience to be 'absolutely wild!'
- They shared a demo link for feedback on the effectiveness of this teaching method.
- Future of personalized AI learning: The user expressed excitement for personalized AI learning, predicting it to be a revolutionary force in education.
- They noted that these new voice models are 'shockingly effective,' hinting at significant innovations on the horizon.
- Impact of AI on self-learning: The conversation highlighted the transformative power of AI in self-learning processes, emphasizing advancements in voice models.
- 'It's very interesting to see what comes up soon,' one noted, indicating a future filled with potential developments in educational technology.
Link mentioned: Tweet from Ahmet ☕ (@ahmetdedeler101): ChatGPT voice mode teaching me vocabulary with examples from Naruto Personalized AI learning is the future. It's shockingly effective 😂
OpenRouter (Alex Atallah) ▷ #general (168 messages🔥🔥):
Grok 2 Issues
Infermatic Provider Problems
Yi Lightning and Model Performance
OpenRouter Credit and API Key Questions
Mistral's New Models
- Grok 2 off-line: Grok 2 seems to be down as xAI has pulled the API, leaving users frustrated with no access.
- A user lamented its absence, claiming it outperformed other models like Llama 3.2 in Python coding and chatbots.
- Infermatic provider faces network issues: Infermatic's provider is experiencing network issues, causing models to produce gibberish responses, especially after reaching an 8k context limit.
- Users are advised that the provider is working on reverting their build to address the VLLM inference issues affecting service.
- Yi Lightning model performance under scrutiny: Some users are skeptical about Yi Lightning's reported performance, noting possible discrepancies in evaluation results compared to expected outputs.
- Discussions arose regarding whether the model's success is legitimate or if it's a product of gaming the evaluation metrics.
- OpenRouter credits and API key confusion: New users reported difficulties in purchasing credits and managing API keys, with mixed messages about credit expiration and usage.
- Clarifications were provided around the functionality of keys versus credits, with users expressing frustration over the system's complexity.
- Mistral launches new 'Pocket LLMs': Mistral introduced two new models, Ministral 3B and 8B, aimed at edge use cases and promising enhanced performance.
- These models support larger context lengths and aim to extend capabilities in knowledge and reasoning tasks.
Links mentioned:
- Tweet from Vaibhav (VB) Srivastav (@reach_vb): We're soo back!: Nvidia Nemotron 70B - beats Llama 3.1 405B, GPT4o & Claude 3.5 Sonnet! 🔥 Evals (Nemotron 70B vs Claude 3.5 vs GPT4o) > Arena Hard - 85.0 vs 79.2 vs 79.3 > AlpacaEval 2 LC...
- Un Ministral, des Ministraux: Introducing the world’s best edge models.
- OpenRouter | docs.ST.app: Don't have access to OpenAI / Claude APIs due to geolocking or waitlists? Use OpenRouter.
- Tweet from Rohan Paul (@rohanpaul_ai): Andrej Karpathy on the importance of extremely smaller-sized distilled models (even 1Bn param model should be good enough) Video Credit - Original video from "No Priors: AI, Machine Learning, Tec...
- OAuth PKCE | OpenRouter: Secure user authentication via OAuth
- Llama 3.1 Nemotron 70B Instruct - API, Providers, Stats: NVIDIA's Llama 3.1 Nemotron 70B is a language model designed for generating precise and useful responses. Run Llama 3.1 Nemotron 70B Instruct with API
Unsloth AI (Daniel Han) ▷ #general (82 messages🔥🔥):
INTELLECT-1 Launch
Unsloth Training Improvements
Mistral 8B Model Support
Quantization Techniques in Training
Modelscope and Swift Discussion
- INTELLECT-1 Launch for Decentralized Training: The launch of INTELLECT-1 invites contributions for a 10-billion-parameter model focused on decentralized training, aiming for open-source AGI.
- This development follows the release of OpenDiLoCo, enhancing globally distributed AI model training by scaling from 1B to 10B parameters.
- Unsloth Training Shows Notable Improvements: Users confirm that `unsloth_train` converges significantly better than previous methods, expressing hopes for support of `resume_from_checkpoint=True` in the future.
- Feedback suggests the community values enhancements, yet there are inquiries about why the old `UnslothTrainer` wasn't extended for added functionalities.
- Community Inquiries on Mistral 8B Support: Questions arose regarding Unsloth's compatibility with the new Mistral 8B model, with responses indicating architectural differences requiring examination.
- Community members appreciate the new models' sizes and capability for on-device computing, anticipating further updates.
- Exploration of Quantization Techniques: A discussion highlighted challenges in applying full-fine-tune techniques while mixing quantization methods like QLoRA, with users sharing their experiences on layer tuning.
- Some skepticism exists over whether quantizing certain layers while keeping others fully trainable is feasible without extensive customization.
- Modelscope and Swift Framework Evaluated: Members provided mixed feedback on the Modelscope repos and the Swift framework, mentioning issues but also recommending their extensive documentation for beginner learning.
- Concerns about stability were raised, with users pointing out ongoing issues despite the usefulness of these platforms.
Links mentioned:
- Un Ministral, des Ministraux: Introducing the world’s best edge models.
- INTELLECT–1: Launching the First Decentralized Training of a 10B Parameter Model: We're excited to launch INTELLECT-1, the first decentralized training run of a 10-billion-parameter model, inviting anyone to contribute compute and participate. This brings us one step closer to...
- Tweet from Daniel Han (@danielhanchen): Fixed a bug which caused all training losses to diverge for large gradient accumulation sizes. 1. First reported by @bnjmn_marie, GA is supposed to be mathematically equivalent to full batch training...
- Swift DOCUMENTATION — swift 2.5.0.dev0 documentation: no description found
Unsloth AI (Daniel Han) ▷ #off-topic (1 messages):
Open-source data generation packages
Claude workspace utilities
- Quest for Open-source Data Generation Tools: A member sought recommendations for any open-source packages or projects to enhance their high-quality data generation pipeline.
- They mentioned using Claude workspace along with various utility scripts, indicating a need for more integrated solutions.
- Using Claude Workspace for Data Utilities: The discussion highlighted the use of Claude workspace which facilitates various utility scripts for managing data processes.
- This indicates that members are relying on Claude as a foundation but are looking for more robust open-source solutions to elevate their data generation tasks.
Unsloth AI (Daniel Han) ▷ #help (58 messages🔥🔥):
Model Saving Issues
Installation Problems
Fine-Tuning Llama Models
Windows Setup Requirements
Handling Long Contexts
- Model Saving Issues Confound Users: Users encountered complications when saving models using `model.save_pretrained_gguf` with unexpected outputs, leading to confusion about model integrity; a hedged export sketch follows this list.
- Another user suggested the problem might relate to the method of merging LoRA adapters, noting that incorrect saving procedures often led to degraded performance.
- Installation Problems Plague Unsloth Users: New users faced circular dependency issues during the installation of Unsloth, specifically with PyTorch version requirements that varied across dependencies.
- Assistance was sought about which PyTorch version was compatible, leading to confirmations that version 2.4 was necessary for successful installation.
- Fine-Tuning Llama Models Sparks Questions: Discussion arose regarding fine-tuning Llama 3.1 on various datasets, including initial tests and the effectiveness of using non-textual numeric data for training.
- Queries about using sequence lengths based on GPU capabilities hinted at a broader concern regarding memory limitations when implementing fine-tuning strategies.
- Windows Setup Requires Additional Configuration: Windows users noted the necessity of installing WSL 2 with a Linux distribution to properly set up their environments for training models.
- Guidance was provided to install additional tools like Miniconda and relevant dependencies on Ubuntu to mitigate setup issues.
- Handling Long Contexts in Training: It was noted that larger context lengths in models necessitate more VRAM and performance capacity, especially for users with high memory limits.
- Suggestions emphasized that filling the context limit could strain system resources, particularly for those training on consumer-grade GPUs.
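A hedged sketch of the GGUF export path under discussion, following the pattern in Unsloth's docs; the base model and quantization method below are illustrative choices, not the users' exact setup.

```python
# Hedged sketch: load a 4-bit model with Unsloth, then export to GGUF.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # illustrative base model
    max_seq_length=2048,
    load_in_4bit=True,
)
# ... fine-tune here ...
model.save_pretrained_gguf("gguf_model", tokenizer, quantization_method="q4_k_m")
```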
Links mentioned:
- Google Colab: no description found
- Unsloth Notebooks | Unsloth Documentation: See the list below for all our notebooks:
- Unsloth Documentation: no description found
- Load: no description found
- unsloth/unsloth/models/gemma.py at main · unslothai/unsloth: Finetune Llama 3.2, Mistral, Phi & Gemma LLMs 2-5x faster with 80% less memory - unslothai/unsloth
- Add support for passing in `inputs_embeds` into `generate` function · Issue #862 · unslothai/unsloth: I need to use the generate function by passing in inputs_embeds for a multi-modal model I'm building, I can't use input_ids. I see Unsloth doesn't currently support this. Would it be possi...
Unsloth AI (Daniel Han) ▷ #showcase (2 messages):
Llama-3.1-70B
NVIDIA's Llama-3.1-Nemotron
Token generation speed
AI model risks
- Llama-3.1-70B boasts impressive token speed: The `llama-3.1-70b-instruct` model achieves a remarkable 230 tokens per second, showcasing its efficiency in processing.
- This performance sets a high standard for future language model benchmarks.
- NVIDIA customizes Llama-3.1-Nemotron-70B: NVIDIA has released the Llama-3.1-Nemotron-70B-Instruct model tailored to enhance helpfulness in generated responses.
- This model aims to address the common inaccuracies and biases found in AI-generated outputs.
- Cautions on AI output reliability: Users are warned that AI models may produce inaccurate, harmful, or biased responses during testing.
- A disclaimer reminds users not to upload confidential or personal data, as responses are logged for security.
Link mentioned: llama-3_1-nemotron-70b-instruct | NVIDIA NIM: Experience the leading models to build enterprise generative AI apps now.
Unsloth AI (Daniel Han) ▷ #research (7 messages):
SageAttention Quantization Method
Gradient Accumulation Fixes
OpenR Framework for LLM Reasoning
Iterative Thought Training for LLMs
- SageAttention Achieves Impressive Speedup: The paper proposes SageAttention, an efficient 8-bit quantization method for attention, which outperforms FlashAttention2 and xformers by 2.1x and 2.7x respectively, while maintaining accuracy.
- SageAttention significantly accelerates model inference without incurring loss in performance across diverse models, addressing the O(N^2) complexity seen in traditional attention mechanisms.
- Fixing Gradient Accumulation Issues: A recent blog post discusses a fix for a Gradient Accumulation issue that affected training accuracy, which was discovered to diverge due to a denormalization error in cross-entropy loss.
- The updated method ensures that all training losses now align across multiple GPUs, directly impacting large scale training runs, with implementations available via `pip install --upgrade --no-cache-dir unsloth`; a toy numerical sketch of the underlying normalization issue follows this list.
- OpenR Framework Enhances LLM Reasoning: The OpenR framework integrates key components aimed at enhancing reasoning in large language models through reinforcement learning and non-autoregressive decoding.
- Initial evaluations on the MATH dataset show notable performance improvements, motivating the establishment of a community around this open-source platform to accelerate LLM reasoning development.
- Training LLMs to Think Explicitly: A new paper proposes a novel training method that equips existing LLMs with explicit thinking abilities through iterative search and optimization, without requiring additional human data.
- This approach addresses complex user instructions by scoring thought candidates with a judge model, leading to enhanced performance in instruction following tasks.
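The normalization bug is easy to reproduce with toy numbers: averaging per-micro-batch mean losses weights each micro-batch equally, while full-batch training weights each non-padded token equally. The values below are illustrative, not from the blog post.

```python
# Toy sketch of the gradient-accumulation denormalization issue.
import torch

losses = [torch.tensor(2.0), torch.tensor(1.0)]    # per-micro-batch mean CE
tokens = [torch.tensor(10.0), torch.tensor(90.0)]  # non-padded token counts

naive = sum(losses) / len(losses)                                   # 1.50
correct = sum(l * n for l, n in zip(losses, tokens)) / sum(tokens)  # 1.10
print(naive.item(), correct.item())  # the two disagree unless batches match
```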
Links mentioned:
- Thinking LLMs: General Instruction Following with Thought Generation: LLMs are typically trained to answer user questions or follow instructions similarly to how human experts respond. However, in the standard alignment framework they lack the basic ability of explicit ...
- SageAttention: Accurate 8-Bit Attention for Plug-and-play Inference Acceleration: The transformer architecture predominates across various models. As the heart of the transformer, attention has a computational complexity of O(N^2), compared to O(N) for linear transformations. When ...
- Bug Fixes in LLM Training - Gradient Accumulation: Unsloth's Gradient Accumulation fix solves critical errors in LLM Training.
- Tweet from Daniel Han (@danielhanchen): Fixed a bug which caused all training losses to diverge for large gradient accumulation sizes. 1. First reported by @bnjmn_marie, GA is supposed to be mathematically equivalent to full batch training...
- openr/reports/OpenR-Wang.pdf at main · openreasoner/openr: OpenR: An Open Source Framework for Advanced Reasoning with Large Language Models - openreasoner/openr
- Home: An Open-Sourced Framework for Advancing Reasoning in Large Language Models
Eleuther ▷ #general (71 messages🔥🔥):
Yandex YaLM 100B
SwiGLU vs. SinGLU
OpenAI embeddings
Open Source Model Licensing
Re-ranking Techniques
- Discussion on Yandex's YaLM 100B Model: Members discussed the Yandex YaLM 100B model, which leverages 100 billion parameters trained on diverse sources, both English and Russian.
- One noted its performance in the context of non-Western models, highlighting that it might be the most widely used LLM in Russia but is less recognized in Western circles.
- Choosing Between SwiGLU and SinGLU: A member questioned the preference for SwiGLU over SinGLU, despite SinGLU's advantages in speed and lower loss reported by tests.
- Inertia in established practices keeps many from testing alternatives, as large training runs carry a significant risk if they fail.
- Critique of OpenAI's Embedding Models: Participants expressed dissatisfaction with OpenAI's embedding models, stating they do not perform well relative to 2024 standards.
- Benchmark saturation with models like Mistral finetunes implies that OpenAI's embeddings have become less competitive.
- Clarification on Open Source Model Licensing: The distinction between what constitutes open source was clarified, emphasizing the use-based restrictions in licenses and how it impacts discussions around projects like Llama 405B.
- Discrepancies in opinions about licensing arise especially with major companies like Meta, leading to confusion in the community.
- Embedding Approaches for Semantic Search: A discussion on using decoder-only models for generating embeddings revealed that they can be effective, similar to encoder-based approaches.
- It was noted that while attention masking is different, methods for extracting embeddings can still yield useful results from either model type; see the sketch after this list.
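A minimal sketch of the decoder-only embedding approach mentioned above, mean-pooling the final hidden states over non-padded positions; GPT-2 is used purely as a small illustrative checkpoint.

```python
# Sketch: sentence embeddings from a decoder-only LM via masked mean pooling.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
tok.pad_token = tok.eos_token  # GPT-2 ships without a pad token
model = AutoModel.from_pretrained("gpt2")

batch = tok(["semantic search query"], return_tensors="pt", padding=True)
with torch.no_grad():
    hidden = model(**batch).last_hidden_state    # (batch, seq, hidden)

mask = batch["attention_mask"].unsqueeze(-1)      # (batch, seq, 1)
embedding = (hidden * mask).sum(1) / mask.sum(1)  # masked mean pool
print(embedding.shape)  # torch.Size([1, 768])
```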
Links mentioned:
- Enhancing RAG Pipelines with Re-Ranking | NVIDIA Technical Blog: In the rapidly evolving landscape of AI-driven applications, re-ranking has emerged as a pivotal technique to enhance the precision and relevance of enterprise search results.
- NVIDIA Technical Blog | News and tutorials for developers, data scientists, and IT admins: News and tutorials for developers, scientists, and IT admins
- yandex/yalm-100b · Hugging Face: no description found
Eleuther ▷ #research (60 messages🔥🔥):
Mechanistic Interpretability Projects
Algorithmic Improvements in LLMs
Discord Communities in ML
ICLR 2025 Paper Rankings
Sparse Autoencoders for Knowledge Unlearning
- Mechanistic Interpretability Projects Seeking Volunteers: A student expressed eagerness to join EleutherAI's projects related to interpretability, especially in the context of current opportunities.
- Members recommended joining the Mechanistic Interpretability Discord for further exploration in the field.
- Algorithmic Improvements in Learning Models: Discussion highlighted the progress of algorithmic efficiency in LLMs, noting that improvements have been around doubling every 8 months.
- One contributor emphasized that optimal performance requires a focus on algorithms rather than merely increasing computational power.
- Discord Communities in Machine Learning: Inquiry into useful Machine Learning Discords led to mentions of CUDA Mode and private research servers, indicating a scarcity of quality communities.
- Users noted the potential of the Mechanistic Interpretability Discord for sharing knowledge and resources.
- ICLR 2025 Paper Rankings for Mechanistic Interpretability: A member shared links to a ranked list of ICLR submissions focusing on mechanistic interpretability, including keywords for analysis.
- Suggestions were made to expand the key terms to include explainability for a more comprehensive search of relevant papers.
- Sparse Autoencoders for Knowledge Unlearning: Discussion surrounding a paper on using sparse autoencoders to unlearn knowledge in language models revealed mixed excitement ratings, with some querying about the reasons.
- Members expressed curiosity about the project's application and potential effectiveness in AI safety.
Links mentioned:
- Model Swarms: Collaborative Search to Adapt LLM Experts via Swarm Intelligence: We propose Model Swarms, a collaborative search algorithm to adapt LLMs via swarm intelligence, the collective behavior guiding individual systems. Specifically, Model Swarms starts with a pool of LLM...
- Persistent Topological Features in Large Language Models: Understanding the decision-making processes of large language models (LLMs) is critical given their widespread applications. Towards this goal, describing the topological and geometrical properties of...
- Simplifying, Stabilizing and Scaling Continuous-Time Consistency Models: Consistency models (CMs) are a powerful class of diffusion-based generative models optimized for fast sampling. Most existing CMs are trained using discretized timesteps, which introduce additional hy...
- NNsight and NDIF: Democratizing Access to Foundation Model Internals: We introduce NNsight and NDIF, technologies that work in tandem to enable scientific study of the representations and computations learned by very large neural networks. NNsight is an open-source...
- Applying Sparse Autoencoders to Unlearn Knowledge in Language Models: We investigate whether sparse autoencoders (SAEs) can be used to remove knowledge from language models. We use the biology subset of the Weapons of Mass Destruction Proxy dataset and test on the...
- Tweet from Yaroslav Bulatov (@yaroslavvb): The point of "bitter lesson" is that simple things often work at scale. You can "muscle your way" past bad algorithms by scaling up compute. But if you fix your compute, the only way t...
- Reading group — Mechanistic Interpretability: no description found
- woog interp paper review: no description found
- MI Reading Group Paper Suggestions: no description found
Eleuther ▷ #scaling-laws (4 messages):
Reversal trick
Reversal curse
A/B testing techniques
- Curiosity about the Reversal Trick: Members expressed interest in understanding the weird trick with reversals, prompting a discussion about its implications.
- A member questioned its nature, asking, 'what's that?'
- Discussion on Reversal Curse: The term reversal curse surfaced, prompting inquiries about its effects and how to manage it.
- One participant seemed to confirm its relevance by simply saying, 'yes.'
- A/B Testing Methodologies Addressing Reversals: A member shared that there was a technique to ameliorate the reversal curse, which showed promise in A/B testing scenarios.
- They emphasized that this approach was notably described as 'very a/b.'
Eleuther ▷ #interpretability-general (1 messages):
Sparse Autoencoders
InceptionV1
Mechanistic Interpretability
Polysemantic Neurons
Vision Interpretability
- Sparse Autoencoders shine in InceptionV1: This paper highlights how sparse autoencoders (SAEs) effectively extract interpretable features from the early vision layers of InceptionV1.
- SAEs succeed in revealing new curve detectors and decomposing polysemantic neurons into more straightforward components, advancing our grasp of vision interpretability.
- Polysemantic neurons get simplified: The findings reveal that SAEs help address issues associated with polysemantic neurons caused by superposition, resulting in clearer single-feature representations.
- This enhancement in feature decomposition indicates that SAEs are a critical asset for understanding layer dynamics in convolutional neural networks.
- Curve detectors discovery fills gaps: The application of SAEs led to the identification of additional curve detectors, highlighting previously unnoticed features in the InceptionV1 framework.
- This progress in feature extraction showcases the efficacy of mechanistic interpretability methods in neural network analysis.
Link mentioned: The Missing Curve Detectors of InceptionV1: Applying Sparse...: Recent work on sparse autoencoders (SAEs) has shown promise in extracting interpretable features from neural networks and addressing challenges with polysemantic neurons caused by superposition. In...
Eleuther ▷ #lm-thunderdome (3 messages):
Instruct Dataset Command
Turkish MMLU Regex Fix
- Config Command for Instruct Dataset: A user shared their command for loading the instruct dataset: `ds = instruct_dataset(tokenizer=any_tokenizer, source="allenai/ai2_arc", split="train")`.
- This snippet details the configuration setup for accessing the training split of the AI2 ARC dataset.
- Turkish MMLU Regex Pattern Fixed: A small regex fix for Turkish MMLU was issued in a pull request, correcting an earlier pattern that caused issues in experiments.
- This fix was necessary to ensure that subsequent experiments ran smoothly, highlighting the importance of accurate regex patterns.
Link mentioned: Fix: Turkish MMLU Regex Pattern by ArdaYueksel · Pull Request #2393 · EleutherAI/lm-evaluation-harness: In the rerun of the experiments, we noticed that we uploaded the prior iteration of the regex pattern. I made sure to replace the incorrect pattern and ensure that the experiments ran smoothly for ...
aider (Paul Gauthier) ▷ #general (107 messages🔥🔥):
Aider usage with multiple instances
Commit message conventions
Local LLM performance
VSCode Aider extension updates
New model announcements from Mistral
- Multiple Aiders won't cause issues unless editing the same files: Concerns were raised about running multiple Aiders simultaneously, to which it was clarified that as long as they don't edit the same files, it should be fine.
- LLM party! was humorously hinted by a member after the clarification.
- Commit message conventions not supported: A member asked if code conventions apply to `/commit` messages, and it was confirmed that using `--commit-prompt` is necessary instead of modifying `CONVENTIONS.md`.
- Direct documentation link regarding this was shared for users looking for a comprehensive guide.
- Local LLMs vs. Online APIs performance discussion: Discussion revealed that many users found local LLMs less efficient for coding tasks compared to online solutions, citing time-wasting experiences.
- Specific local models like Deepseek and Qwen 2.5 were compared, highlighting Deepseek 2.5's better benchmark but questioning overall usability.
- Updates on the VSCode Aider extension: A member announced they had published a fork of the VSCode Aider extension, planning to improve functionality with architect mode and better OpenRouter integration.
- Community support for the extension was encouraged, with plans for a feedback topic discussed to aid in user interaction.
- Mistral's new models announced: Mistral announced two new models, Ministral 3B and 8B, aimed at on-device and edge computing, touting efficiency and improved capabilities.
- These models offer significant advancements in reasoning and commonsense knowledge, with a promising context length optimization.
Links mentioned:
- Un Ministral, des Ministraux: Introducing the world’s best edge models.
- Options reference: Details about all of aider’s settings.
- Not Diamond: Not Diamond is the world's most powerful AI model router.
- mattf - Overview: mattf has 98 repositories available. Follow their code on GitHub.
- The plugin currently doesn't work with Windows · Issue #3 · MattFlower/vscode-aider-extension: Currently, the plugin doesn't work with windows.
- Claude 3.5 Sonnet (self-moderated) - API, Providers, Stats: Claude 3.5 Sonnet delivers better-than-Opus capabilities, faster-than-Sonnet speeds, at the same Sonnet prices. Run Claude 3.5 Sonnet (self-moderated) with API
aider (Paul Gauthier) ▷ #questions-and-tips (28 messages🔥):
Aider Command Line Tool
Gemini API Performance
Code Generation with Aider
Using Azure with Aider
Installation Issues with Aider
- Aider Command Line Tool Requirements: The aider command line tool loads the `.env` file, and users need to set the environment or load it via `load_dotenv()` for proper scripting; a minimal scripting sketch follows this list.
for proper scripting.- This setup is crucial to ensure that the required configurations are correctly recognized by the tool.
- Gemini's Streaming Stability Concerns: Some members reported better results with Gemini when disabling streaming due to its unstable API connections, which can lead to interruptions.
- Comments indicated that this instability is common and can impact the performance of Gemini-based tools.
- Code Generation Challenges with New API: A member described difficulties in getting Chat GPT to generate correct function calls for the new beta Assistant's API, even after providing documentation links.
- They noted the challenges of hitting rate limits while trying to provide context through source code additions.
- Azure Configuration for Aider: Users discussed using their Azure accounts for generating API keys, noting that the max advised tokens for chat interactions is around 20k.
- Detailed configuration steps for pointing Aider to Azure services were shared, including installation commands and environment variable settings.
- Installation Troubleshooting for Aider: A user faced errors while installing Aider, specifically during the download of the NumPy package, prompting requests for assistance.
- Members were urged to specify their installation method and error messages for better troubleshooting support.
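A minimal scripting sketch in the spirit of that advice, using aider's documented Python entry points; the file name, model, and instruction are illustrative assumptions.

```python
# Hedged sketch: load .env first, then drive aider programmatically.
from dotenv import load_dotenv

load_dotenv()  # pulls OPENAI_API_KEY etc. from .env into the environment

from aider.coders import Coder
from aider.models import Model

coder = Coder.create(main_model=Model("gpt-4o"), fnames=["app.py"])
coder.run("Add a --verbose flag to the CLI")  # illustrative instruction
```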
Link mentioned: Azure: aider is AI pair programming in your terminal
Nous Research AI ▷ #general (74 messages🔥🔥):
Unsloth multi-GPU support
New Mistral models
Nvidia Nemotron 70B
Control vector generation in llama.cpp
Lambda.chat deployment features
- Unsloth's Multi-GPU Functionality: Discussion arose about whether Unsloth works efficiently with multi-GPU setups, with mentions of a paid version that is said to support it.
- Members noted potential updates for vision fine-tuning support were anticipated soon.
- Mistral Introduces New Models: Mistral launched two new state-of-the-art models, Ministral 3B and Ministral 8B, aimed at on-device computing and edge use cases with leading stats in commonsense and reasoning.
- Both models can handle up to 128k context length and are tailored for efficient local inference.
- Nvidia Nemotron 70B Performance: Nvidia Nemotron 70B reportedly outperforms several competitors, including Claude 3.5 and Llama 3.1, based on various evaluation metrics.
- Confusion arose over MT Bench scores, with discrepancies noted between reported and actual performances from different models.
- Control Vector Successes and Challenges: A member successfully implemented a control vector generator in llama.cpp, experimenting with scaling and inversing its effects while learning the system.
- After adjusting parameters, they were able to achieve desired responses, focusing on refining it for their application.
- Updates on Lambda.chat Features: The recent changes on Lambda.chat included adding models and system prompts but raised questions about the absence of major updates like the 70B model.
- Conversations ensued regarding enhancing steerability through possible system message injections in ongoing dialogs.
Links mentioned:
- Tweet from Vaibhav (VB) Srivastav (@reach_vb): We're soo back!: Nvidia Nemotron 70B - beats Llama 3.1 405B, GPT4o & Claude 3.5 Sonnet! 🔥 Evals (Nemotron 70B vs Claude 3.5 vs GPT4o) > Arena Hard - 85.0 vs 79.2 vs 79.3 > AlpacaEval 2 LC...
- Tweet from Rohan Paul (@rohanpaul_ai): Andrej Karpathy on the importance of extremely smaller-sized distilled models (even 1Bn param model should be good enough) Video Credit - Original video from "No Priors: AI, Machine Learning, Tec...
- Un Ministral, des Ministraux: Introducing the world’s best edge models.
- Ovis1.6 Gemma2 9B - a Hugging Face Space by AIDC-AI: no description found
- Pytorch Matrix Multiplication - a Hugging Face Space by not-lain: no description found
- Tweet from xjdr (@_xjdr): Nemotron-70B entropix edition is pretty fucking good
- EleutherAI: EleutherAI has 151 repositories available. Follow their code on GitHub.
- GitHub - EleutherAI/lm-evaluation-harness: A framework for few-shot evaluation of language models.: A framework for few-shot evaluation of language models. - EleutherAI/lm-evaluation-harness
Nous Research AI ▷ #ask-about-llms (19 messages🔥):
Confused Responses from AI Models
Qwen and WizardLM's Creator Responses
Transformer Block Dynamics
Sampling Parameters Impact
AI Model Identity and Mythological References
-
- AI Models Show Confusion Responses: A member noted that H3-405b frequently responds with *looks around confused* when asked about its creator, with a few odd instances reported for different query methods.
- Another member mentioned an example of a confused response where the model expresses distress and confusion about its identity.
- Qwen and WizardLM Acknowledge Creators: Discussions highlighted that Qwen and WizardLM are unique for explicitly naming OpenAI and Anthropic as their creators, raising questions about Qwen's training data origins.
- A member speculated whether Qwen's data comes from synthetic data of flagship models or if it simply suffers from contamination.
- Clarifying Transformer Block Mechanisms: A novice member sought clarification about whether transformer blocks remain static during inference, questioning the possibility of backtracking in activation states.
- They also inquired if their description aligns with the concept of a KV cache and where such data is typically stored (e.g., VRAM); a small sketch follows this list.
- Effects of Sampling Parameters: In response to varying experiences with AI responses, a member suggested checking sampling parameters like temperature and top-p settings to understand discrepancies better.
- Another member indicated that despite similar parameters, confusion responses were still not consistently observed.
- AI Models identified with Mythological Figures: Members humorously identified AI models with mythological figures, likening Opus to Prometheus and Hermes-3 to Odin, raising questions about AI identity.
- The ongoing discussion reflects the playful exploration of AI personalities and characteristics, juxtaposed with mythical attributes.
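On the KV-cache and sampling questions above, a minimal Hugging Face transformers sketch (using gpt2 purely as a stand-in checkpoint): the weights stay static during inference, and only the per-layer key/value cache grows token by token, living on whatever device the model is on (VRAM when on GPU).

```python
# Minimal sketch (gpt2 as a stand-in): static weights, growing KV cache.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The creator of this model is", return_tensors="pt").input_ids
out = model(ids, use_cache=True)
# past_key_values holds K/V tensors for every layer; on a GPU these sit
# in VRAM and are reused instead of recomputing earlier positions.
print(len(out.past_key_values), out.past_key_values[0][0].shape)

# Sampling parameters like temperature/top_p change which token is drawn,
# which is one reason identical prompts can yield different replies.
gen = model.generate(ids, do_sample=True, temperature=0.8, top_p=0.9,
                     max_new_tokens=20, pad_token_id=tok.eos_token_id)
print(tok.decode(gen[0], skip_special_tokens=True))
```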
Nous Research AI ▷ #research-papers (5 messages):
SageAttention
OpenR Framework
RF Inversion Techniques
Selective Attention
Attention Mechanism Optimization
-
SageAttention accelerates inference: The authors introduced SageAttention, a quantization method for attention that shows a significant performance boost, delivering 2.1x better OPS than FlashAttention2.
- It achieved improved accuracy without end-to-end metrics loss across various models, showcasing substantial potential for large language tasks.
- OpenR integrates powerful reasoning techniques: The paper presents OpenR, an open-source framework aimed at enhancing reasoning in large language models by combining data acquisition and reinforcement learning OpenR documentation.
- Initial experiments on the MATH dataset reveal significant improvements in reasoning capabilities through its unique architecture and test-time computation methods.
- RF inversion tackles challenges of diffusion models: In a new approach, researchers propose using RF inversion with dynamic optimal control for image editing and recovery tasks, presenting a robust alternative to conventional diffusion models Hugging Face paper.
- Despite existing challenges in editability and faithfulness, this method shows promise by leveraging the benefits of rectified flow models.
- Selective Attention boosts transformer performance: A recent study on Selective Attention reveals that it effectively minimizes unneeded elements in context, improving performance across various model sizes, achieving near equivalency to larger transformer setups Hugging Face paper.
- This method significantly reduces memory and compute requirements during inference, with up to 47x less memory usage, making it a valuable optimization technique.
- Exploring attention mechanism optimizations: The latest research highlights that unneeded context elements degrade model performance, calling for changes in the standard attention mechanism to combat this issue arXiv paper.
- The findings emphasize the relevance of attention optimization in enhancing model efficiency and effectiveness in language tasks.
Links mentioned:
- Model Swarms: Collaborative Search to Adapt LLM Experts via Swarm Intelligence: We propose Model Swarms, a collaborative search algorithm to adapt LLMs via swarm intelligence, the collective behavior guiding individual systems. Specifically, Model Swarms starts with a pool of LLM...
- SageAttention: Accurate 8-Bit Attention for Plug-and-play Inference Acceleration: The transformer architecture predominates across various models. As the heart of the transformer, attention has a computational complexity of O(N^2), compared to O(N) for linear transformations. When ...
- Paper page - Selective Attention Improves Transformer: no description found
- Paper page - Semantic Image Inversion and Editing using Rectified Stochastic Differential Equations: no description found
- openr/reports/OpenR-Wang.pdf at main · openreasoner/openr: OpenR: An Open Source Framework for Advanced Reasoning with Large Language Models - openreasoner/openr
- Home: An Open-Sourced Framework for Advancing Reasoning in Large Language Models
Nous Research AI ▷ #interesting-links (6 messages):
Ollama Application
GGUF Models on Hugging Face
Model Running Commands
-
Ollama simplifies LLM interactions: Ollama is an application based on llama.cpp that allows users to interact with LLMs directly through their computers, supporting community-created GGUF quants from Hugging Face.
- Users can seamlessly execute public GGUF checkpoints using a simple command: ollama run hf.co/{username}/{repository} (a Python sketch follows this list).
- Proper model naming for Ollama: A user suggested using NousResearch/Hermes-3-Llama-3.1-8B-GGUF as the correct model name for running in Ollama instead of the original input.
- This highlights the need for precise naming conventions to successfully utilize models from Hugging Face.
- User experimentation with model running: After a naming clarification, another user expressed intent to attempt running the model with the corrected name in Ollama.
- This demonstrates user engagement and adaptation to utilizing available resources effectively.
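A minimal sketch of that flow with the official ollama Python client (assumes `pip install ollama` and a running Ollama server):

```python
# Chat with a community GGUF pulled straight from the Hugging Face Hub.
import ollama

resp = ollama.chat(
    model="hf.co/NousResearch/Hermes-3-Llama-3.1-8B-GGUF",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
# Newer client versions also allow attribute access: resp.message.content
print(resp["message"]["content"])
```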
Links mentioned:
- Tweet from AI Notkilleveryoneism Memes ⏸️ (@AISafetyMemes): This story is fucking insane 3 months ago, Marc Andreessen sent $50,000 in Bitcoin to an AI agent to help it escape into the wild. Today, it spawned a (horrifying?) crypto worth $150 MILLION. 1) Tw...
- Use Ollama with any GGUF Model on Hugging Face Hub: no description found
Nous Research AI ▷ #research-papers (5 messages):
SageAttention quantization
OpenR framework for LLMs
RF inversion with dynamic control
Selective Attention mechanism
New model by Feng et al.
-
SageAttention accelerates attention mechanisms: The paper introduces SageAttention, a quantization method that enhances attention efficiency in transformer models, outperforming FlashAttention2 and xformers by approximately 2.1x and 2.7x respectively.
- SageAttention delivers nearly no end-to-end performance loss, benefiting various applications including large language processing and image generation.
- OpenR framework revolutionizes reasoning with LLMs: The OpenR framework integrates reinforcement learning, data acquisition, and decoding to improve reasoning capabilities in LLMs, inspired by OpenAI's o1 model.
- Initial evaluations on the MATH dataset indicate significant performance enhancements facilitated by its innovative design and approach.
- Rectified Flows offer new inversion methods: This paper proposes an innovative method for diffusion image inversion using rectified flows via dynamic optimal control, addressing editability challenges found in traditional methods.
- This approach offers a promising alternative to the recently dominant Diffusion Models, expanding the possibilities in generative modeling.
- Selective Attention optimizes performance: Selective Attention is presented as a parameter-free modification to traditional attention mechanisms, significantly enhancing language modeling performance across various model sizes.
- This technique also allows for substantial reductions in memory requirements, achieving up to 47x less memory in specific configurations.
- New model insights from Feng et al.: A paper from authors including Shangbin Feng and others presents research that contributes to understanding advanced concepts in the field, with a focus on current developments.
- More details are available in the linked PDF for those interested in their findings.
Links mentioned:
- SageAttention: Accurate 8-Bit Attention for Plug-and-play Inference Acceleration: The transformer architecture predominates across various models. As the heart of the transformer, attention has a computational complexity of O(N^2), compared to O(N) for linear transformations. When ...
- Paper page - Selective Attention Improves Transformer: no description found
- Model Swarms: Collaborative Search to Adapt LLM Experts via Swarm Intelligence: We propose Model Swarms, a collaborative search algorithm to adapt LLMs via swarm intelligence, the collective behavior guiding individual systems. Specifically, Model Swarms starts with a pool of LLM...
- Paper page - Semantic Image Inversion and Editing using Rectified Stochastic Differential Equations: no description found
- openr/reports/OpenR-Wang.pdf at main · openreasoner/openr: OpenR: An Open Source Framework for Advanced Reasoning with Large Language Models - openreasoner/openr
- Home: An Open-Sourced Framework for Advancing Reasoning in Large Language Models
GPU MODE ▷ #general (16 messages🔥):
Open source audio models
Reliable hardware options
Lambda Labs vs Voltage Park
Multi-node clusters
Infiniband vs Ethernet
-
Searching for Open Source Audio Models: A user inquired about open source audio models similar to the one used in NotebookLM, with mrdragonfox mentioning that while many Text-to-Speech options exist, none are comparable.
- The discussion highlights a noticeable gap in the market for high-quality open source audio models.
- Finding Reliable Hardware Options: Bghira identified Lambda Labs and Voltage Park as the only reliable hardware providers, citing persistent PCIe issues elsewhere.
- Concerns raised include the reliability of GPU setups, network issues, and disk crashes across other providers.
- Comparing Lambda Labs and Voltage Park: In the comparison of hardware providers, Bghira stated that Voltage Park offers more storage, but operates solely in Texas compared to Lambda's diverse locations.
- This choice limits the deployment options for users compared to Lambda's broader geographical coverage.
- Inquiring about Multi-node Clusters: Kashimoo confirmed interest in setting up a cluster of 4 V100s across a network, questioning if Lambda provides such options.
- Bghira clarified that while multi-node clusters are achievable, it's recommended to select Infiniband for optimal performance during setup.
- Discussion on Infiniband vs Ethernet for Clusters: While discussing performance requirements, Bghira noted that using Infiniband at signup ensures the best performance for multi-node clusters.
- Kashimoo expressed a preference for Ethernet for his experiments, showcasing a flexible approach to networking in cluster setups.
GPU MODE ▷ #triton (24 messages🔥):
Triton on Windows
Meta-programming in Triton
INT4 Packed Data Issues
Triton Compilation Process
Performance Benefits of Torch Compilation
-
Triton on Windows poses challenges: Members expressed that achieving Triton on Windows requires a significant amount of effort, implying that compiling it is one thing, but making it function is another.
- Given this complexity, there are doubts about any meaningful speedups being observed so far.
- Meta-programming proposals generate discussion: The conversation on Triton meta-programming revealed varied opinions, with one member disliking the reliance on jinja templates, proposing a more structured approach using Triton's internal data structures.
- There is excitement around potential proposals aimed at improving these methodologies; a minimal `tl.constexpr` sketch follows this list.
- INT4 packed data causing LLVM errors: There is a serious bug regarding INT4 packed data in the latest Triton release, causing issues when performing operations with dequantized tensors, leading to LLVM errors.
- This is linked to scheduled stages, where lowering stages resolves the issue for Ampere GPUs but not Ada GPUs.
- Triton's compilation process raises concerns: It was clarified that Triton produces LLVM IR first and any observed performance benefits are derived from Torch's compilation mechanisms rather than intrinsic improvements with Triton.
- Concerns were raised about the limited performance increases and questions about why larger entities have not prioritized Windows support.
- Performance benefits from Torch compile methods: Invoking `torch.compile` correctly is suggested as a method to expose shortcomings in the Triton backend, highlighting that many operations aren't lowering properly.
- Despite past patches enabling compilation, the overall performance improvements appear small and sporadic.
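For the meta-programming point above: the structured alternative to jinja templates that Triton already ships is compile-time `tl.constexpr` parameters, which specialize a kernel per constant value. A minimal vector-add sketch:

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n, BLOCK: tl.constexpr):
    # BLOCK is a compile-time constant: each distinct value produces a
    # separately specialized kernel -- Triton's built-in meta-programming.
    pid = tl.program_id(axis=0)
    offs = pid * BLOCK + tl.arange(0, BLOCK)
    mask = offs < n
    x = tl.load(x_ptr + offs, mask=mask)
    y = tl.load(y_ptr + offs, mask=mask)
    tl.store(out_ptr + offs, x + y, mask=mask)

x = torch.randn(4096, device="cuda")
y = torch.randn(4096, device="cuda")
out = torch.empty_like(x)
grid = (triton.cdiv(x.numel(), 1024),)
add_kernel[grid](x, y, out, x.numel(), BLOCK=1024)
assert torch.allclose(out, x + y)
```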
Links mentioned:
- LLVM ERROR: mma16816 data type not supported · Issue #4922 · triton-lang/triton: The latest Triton build (3.1.0) throws the following error when using bitpacked data inside a loop with tl.dot: LLVM ERROR: mma16816 data type not supported This error happens on Ampere and Hopper,...
- LLVM ERROR: mma16816 data type not supported when invoking `tl.dot` with dequantized tensor · Issue #4652 · triton-lang/triton: Problem Statement I am trying to dequantize the quantized tensor (packed into int32) and perform multiplication to another tensor in fp16. However, I observed a weird error: LLVM ERROR: mma16816 da...
- Comparing triton-lang:release/3.1.x...woct0rdho:v3.1.x-windows · triton-lang/triton: Development repository for the Triton language and compiler - Comparing triton-lang:release/3.1.x...woct0rdho:v3.1.x-windows · triton-lang/triton
- gemlite/gemlite/triton_kernels/gemm_A16fWnO16f_int32packing.py at master · mobiusml/gemlite: Simple and fast low-bit matmul kernels in CUDA / Triton - mobiusml/gemlite
GPU MODE ▷ #torch (29 messages🔥):
torch.optim.SGD and Fused Implementation
DDP and Multithreading Issues
Graph Break Overhead in torch.compile
foreach vs. Fused Performance
-
Confusion over SGD's Fused Implementation: Members discussed the absence of a fusion option for torch.optim.SGD, leading to confusion about whether it was just a default implementation with its documentation possibly lagging behind.
- One user noted they tried using `fused=True`, but it failed, confirming SGD does not have a fused implementation.
- DDP Can Cause Threading Warnings: Concerns were raised about DDP causing errors like 'Unable to join threads to shut down before fork()', highlighting potential issues with multithreading and DDP when torch.compile is used.
- The user expressed that it doesn't break functionality, but they found it bothersome and were looking for solutions.
- Understanding Graph Break Overhead: Members discussed how graph breaks in torch.compile contribute to performance overhead, primarily due to additional time entering compile regions and losing fusion opportunities.
- It was noted that this overhead could reach hundreds of microseconds, impacting model execution speed.
- The Difference between foreach and Fused: A member explained that 'foreach' implements horizontal parallelism with tensors, while 'fused' incorporates both horizontal and vertical parallelism.
- This discussion highlighted the nuances of these two approaches in optimizing performance in PyTorch; a small sketch contrasting the two flags follows this list.
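A small sketch contrasting the two flags, using AdamW since it exposes both (per the discussion, SGD lacked a fused path at the time); assumes a CUDA device:

```python
# foreach vs fused, as described above. AdamW exposes both flags.
import torch

model = torch.nn.Linear(512, 512).cuda()

# foreach=True: horizontal parallelism -- each step of the update is
# applied across all parameter tensors at once via multi-tensor kernels.
opt_foreach = torch.optim.AdamW(model.parameters(), lr=1e-3, foreach=True)

# fused=True: horizontal + vertical parallelism -- the whole update is
# fused into a single CUDA kernel per parameter group.
opt_fused = torch.optim.AdamW(model.parameters(), lr=1e-3, fused=True)

loss = model(torch.randn(8, 512, device="cuda")).sum()
loss.backward()
opt_fused.step()
```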
Links mentioned:
- SGD — PyTorch 2.4 documentation: no description found
- Frequently Asked Questions — PyTorch 2.4 documentation: no description found
- torch.optim — PyTorch 2.4 documentation: no description found
GPU MODE ▷ #algorithms (11 messages🔥):
Sketchy Figures
Confusing Communication Styles
Emerging Sampling Techniques
AI Influencer Dynamics
-
Identify Sketchy Figures on Twitter: Discussion around @untitled01.ipynb and @_xjdr on Twitter raised some eyebrows due to their lack of clear explanations in math/code terms.
- One member expressed that their communication style indeed seems sketchy, given the absence of detailed write-ups and use of poems instead.
- Frustration with Communicative Clarity: Members criticized the opaque communication of some creators as a waste of time, reflecting a broader sentiment about effective technical explanations.
- One participant asserted that if a concept can't be articulated simply, it likely indicates a deeper misunderstanding, echoing sentiments shared by others.
- Emerging Techniques Under Scrutiny: Despite the confusion, there's a hint that @_xjdr might have discovered a new sampling technique, but verification will depend on future developments.
- The uncertainty surrounding their methods raises questions, especially as they engage in casual discussions about AGI sprinkled with emojis.
- AI Influencer Meme Dynamics: The dialogue included implications that the first Twitter user's use of emojis appears to leverage the AI influencer meme to gain attention from the second user's releases.
- This dynamic complicates the perception of credibility and transparency in discussing AI developments.
GPU MODE ▷ #jobs (1 messages):
open source training framework
Starcoder2
ServiceNow hiring
AI technology
machine learning developer
-
ServiceNow seeks a Staff Machine Learning Developer: ServiceNow is looking to hire a Staff Machine Learning Developer to work on their open source training framework used for training Starcoder2, which reportedly is faster than Megatron-LM.
- Interested candidates can find the position details on Smart Recruiters.
- Origins of ServiceNow in San Diego: ServiceNow originated in 2004 in San Diego, California, founded by Fred Luddy, aiming to revolutionize work processes.
- Today, the company serves over 8,100 clients, including 85% of Fortune 500 companies, with its innovative AI-enhanced technology.
- ServiceNow's mission to improve work: The company's platform connects people, systems, and processes, driving smarter and faster work methods.
- ServiceNow invites professionals to join them in their journey to make the world a better place.
Link mentioned: Staff Machine Learning Developer: Company Description: It all began under the sun of San Diego, California, in 2004, when a visionary engineer, Fred Luddy, saw the potential to transform how we work. Toda...
GPU MODE ▷ #beginner (7 messages):
GPU programming beginner projects
GPU acceleration on Raspberry Pi
ARM development
Community support for beginners
-
Explore Beginner Projects for GPU Programming: A user inquired about good beginner projects for learning GPU programming with CUDA or OpenCL.
- Another user suggested checking working groups for needed project help and referenced a specific channel for resources.
- Raspberry Pi's Graphics Acceleration Possibilities: A user questioned the potential to GPU accelerate graphics on the RPi3 series.
- A response highlighted that while the Pi 5+ supports eGPU connections, focusing on ARM development might provide more immediate value.
- Benefits of CPU Optimization over GPU on Raspberry Pi: A user discussed the balancing act between CPU and GPU when considering workloads on the Raspberry Pi, stating that optimizing for CPU could be simpler.
- They noted that AVX / NEON optimizations make CPU programming much more straightforward due to inherent processing efficiencies.
Link mentioned: ao/torchao/experimental at main · pytorch/ao: PyTorch native quantization and sparsity for training and inference - pytorch/ao
GPU MODE ▷ #pmpp-book (1 messages):
Matrix Multiplication Kernels on A100
Shared-memory Kernel Performance
-
Skepticism on A100 Matrix Kernel Speeds: Concerns were raised about the matrix multiplication kernels on A100, specifically regarding the supposed speeds which seem to exclude L1 cache effects for the naive kernel.
- The question was previously discussed in the community regarding the real-world performance of shared-memory kernels and potential issues like warp stalls.
- Suggestion for Educational Footnotes: It was suggested to include a footnote in the new edition regarding real-world performance considerations when discussing these kernels.
- While this detail is important, presenting the naive kernel's speed without caveats may lead to misconceptions about practical application.
GPU MODE ▷ #jax (2 messages):
Flash Attention kernel comparison
Pallas and Triton kernels
-
Flash Attention kernel in Ring Attention outperforms JIT version: A member pointed out that the Flash Attention kernel used in the ring_attention repo is faster than the JIT Flash Attention.
- This repository focuses on Transformers with Arbitrarily Large Context.
- Lack of Pallas/Triton kernels in the repo: Another member noted that they do not see any Pallas/Triton kernels in the ring_attention repository, even though Pallas is imported multiple times.
- They commented that Pallas appears to be included without actual usage in the code.
Link mentioned: GitHub - haoliuhl/ringattention: Transformers with Arbitrarily Large Context: Transformers with Arbitrarily Large Context. Contribute to haoliuhl/ringattention development by creating an account on GitHub.
GPU MODE ▷ #off-topic (3 messages):
Microplastics in brain tissue
Microplastics effects on human health
-
Microplastics invade human brains: A new study published in JAMA Network Open reveals that scientists in Brazil found microplastics in the brain tissue of cadavers.
- Researchers have documented microplastics in nearly every organ in the body, sparking concerns about their impact, especially in the brain.
- Microplastics seep into the bloodstream: Mounting evidence shows that microplastics are present in the bloodstream and are also found in plaque that clogs arteries, which can lead to heart disease.
- This highlights the urgent need to understand the health implications of these ubiquitous pollutants.
Link mentioned: Microplastics found in the human brain: The tiny scraps of plastic were found in the olfactory bulb, the part of the brain responsible for processing smell.
GPU MODE ▷ #triton-puzzles (1 messages):
Triton Puzzles error
Google Colab issues
-
Encountering Triton Puzzles Error in Google Colab: A user reported facing an error while trying to run Triton Puzzles on Google Colab, referring to a specific GitHub issue.
- I did not change any code, raising concern whether others have experienced similar issues.
- Seeking Help for Google Colab Triton Issues: Additionally, some members expressed interest in collaborating to troubleshoot the reported Triton Puzzles error in Google Colab.
- The user specifically mentioned needing assistance as they were unsure of the cause given that no code changes were made.
Link mentioned: Issues · srush/Triton-Puzzles: Puzzles for learning Triton. Contribute to srush/Triton-Puzzles development by creating an account on GitHub.
GPU MODE ▷ #llmdotc (4 messages):
Loss Increase from Removing Variables
Linear Layer Bias Adjustments
Optimizer Update Requirements
-
Loss unexpectedly spikes after variable removal: A member noted that after removing unused variables, their loss increased, reaching around 10 from the usual 7 loss after 100 training iterations.
- No errors were reported, only an unexpected increase in loss was observed, suggesting a complex interaction between variables and model performance.
- Adjustments to Linear Layer Bias: The same member clarified that they set the linear layer bias to None, along with the bias gradient adjustments, impacting the training dynamics.
- This change specifically targets linear layers, excluding the bias in layer normalization, indicating a focused optimization effort.
- Implications of Tensor Deletion on Optimizer: In a discussion, another member pointed out that when tensors are deleted, it becomes essential to update the optimizer accordingly to ensure proper weight decay usage.
- They noted that using the actual index of the tensor can determine the necessity of weight decay, hinting at careful management of tensor lists.
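On that last point, a common pattern (a generic sketch, not the member's actual code) is to rebuild weight-decay parameter groups whenever the tensor list changes, excluding biases and norm weights from decay:

```python
# Sketch of the weight-decay bookkeeping discussed above: biases and
# normalization weights are commonly excluded from decay, so deleting or
# renaming tensors means rebuilding these groups.
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(64, 64, bias=False),  # bias set to None, as discussed
    torch.nn.LayerNorm(64),
    torch.nn.Linear(64, 10),
)

decay, no_decay = [], []
for name, p in model.named_parameters():
    (no_decay if p.ndim < 2 else decay).append(p)  # 1-D => bias/norm weight

opt = torch.optim.AdamW([
    {"params": decay, "weight_decay": 0.1},
    {"params": no_decay, "weight_decay": 0.0},
], lr=3e-4)
```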
GPU MODE ▷ #rocm (1 messages):
elliotarledge: tetears
GPU MODE ▷ #metal (8 messages🔥):
MPS Programming Resources
Learning Metal Programming
Simple Kernel Implementation
-
Resources for MPS Programming: A member sought beginner resources for getting up to speed on MPS programming to contribute to PyTorch support, leading to several helpful links being shared, including a YouTube video.
- Additional resources included tutorials on Metal programming and relevant GitHub repositories.
- Learning Metal with Simple Kernels: Another member shared their experience of learning Metal programming and mentioned they're trying to get a simple add kernel up and running.
- They humorously acknowledged the complexity of the task, with light banter about diving into more efficient techniques like efficient flash attention.
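For anyone following the same path, the PyTorch-side equivalent of a "simple add kernel" is just exercising the `mps` device; a minimal sanity check before writing custom Metal shaders:

```python
# Minimal MPS sanity check: an elementwise add dispatched to a Metal
# kernel under the hood. Requires an Apple-silicon Mac.
import torch

assert torch.backends.mps.is_available(), "requires an Apple-silicon Mac"
dev = torch.device("mps")

a = torch.randn(1024, device=dev)
b = torch.randn(1024, device=dev)
c = a + b
print(c.device, float(c.sum()))
```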
Links mentioned:
- Performing Calculations on a GPU | Apple Developer Documentation: Use Metal to find GPUs and perform calculations on them.
- GitHub - smrfeld/pytorch-cpp-metal-tutorial: Tutorial for (PyTorch) + (C++) + (Metal shader): Tutorial for (PyTorch) + (C++) + (Metal shader). Contribute to smrfeld/pytorch-cpp-metal-tutorial development by creating an account on GitHub.
- llm_experiments/metal-perf at main · malfet/llm_experiments: Contribute to malfet/llm_experiments development by creating an account on GitHub.
GPU MODE ▷ #self-promotion (1 messages):
Generative AI
Foundational Algorithms
Bayesian Inference
Latent Variable Models
-
Yuri Plotkin's Comprehensive Guide on Generative AI: Yuri Plotkin, an ML scientist, introduced his upcoming book focused on Generative AI which reviews foundational algorithms and techniques like latent models, VAEs, and GANs. More details can be found on the book website.
- He emphasized the importance of this field in uniting key machine learning concepts, stating 'Everything in one concise, explanatory book.'
- Follow for Updates on Twitter: Yuri encouraged readers to follow his account on X for additional insights related to the book. His posts promise to share extra tidbits surrounding Generative AI concepts.
- He also directed users to his Twitter for continuous updates and knowledge sharing on the topic.
- Snapshot of Key Topics in the Book: The book will cover various essential aspects of Generative AI, including uncertainty, Bayesian inference, and model selection. Readers can expect discussions on exponential family distributions and KL-divergence.
- Plotkin highlights certain sections such as 'Mean-Field approximations' and mentions that the text will provide a 'Snapshot of covered topics.'
Link mentioned: The Variational Inference Book: A comprehensive review and explanation of generative AI in one concise book. @TheVariational
LM Studio ▷ #general (80 messages🔥🔥):
SageAttention
Quantum Compression of Models
Llama.cpp Compiled Models
Local Models and Memory Usage
Token Generation Speed and GPU Requirements
-
SageAttention Accelerates Model Inference: A new method, SageAttention, accelerates inference in transformer models by offering efficient quantization for attention, outperforming existing methods by 2.1 times.
- The approach achieves superior accuracy over FlashAttention3 and shows potential for various applications in language and image generation.
- Discussion on Quantization Compression: Members raised the potential for compressing GGUF format files to improve efficiency, noting that traditional archiving methods can yield significant file size reductions.
- There was speculation on compressing data at the format level, possibly making loads from HDD up to 10x faster.
- Challenges with Custom Compiled Models: Users inquired about using custom compiled versions of Llama.cpp with LM Studio, with responses indicating that there isn't current support for this functionality.
- An alternative suggestion was to automate server startup and model loading using the command line tool `lms`, which offers a solution for persistence across reboots.
- Local Models: GPU and Memory Constraints: Discussion highlighted the necessity of significant GPU memory for running large models, with requirements ranging upwards of 90GB of VRAM to handle higher context windows effectively (a rough sizing sketch follows this list).
- Users shared insights on running 70B Q8 models on systems with varying setups, alongside the implications of system RAM usage, leading to slower performance.
- Token Generation Speed Insights: Members reported slow token generation speeds when using high capacity models, with one user noting a max of 0.25 tokens/sec with their setup, underlining the CPU bottlenecks involved.
- Discussions suggested that many local setups are limited by processing delays, prompting users to consider cloud services for faster output when needed.
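As a rough sizing sketch for the 70B Q8 discussion above (weights only, so a lower bound; KV cache and runtime overhead push the real figure toward the ~90GB mentioned):

```python
# Back-of-envelope memory for a 70B model at Q8 (~1 byte per weight).
params = 70e9
bytes_per_weight = 1.0  # Q8 quantization ~= 8 bits per weight
weights_gb = params * bytes_per_weight / 1e9
print(f"~{weights_gb:.0f} GB for weights alone")  # ~70 GB, before KV cache
```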
Links mentioned:
- Un Ministral, des Ministraux: Introducing the world’s best edge models.
- SageAttention: Accurate 8-Bit Attention for Plug-and-play Inference Acceleration: The transformer architecture predominates across various models. As the heart of the transformer, attention has a computational complexity of O(N^2), compared to O(N) for linear transformations. When ...
- Reddit - Dive into anything: no description found
- mistralai/Ministral-8B-Instruct-2410 · Hugging Face: no description found
- lms — LM Studio's CLI - CLI | LM Studio Docs: Get starting with the lms command line utility.
LM Studio ▷ #hardware-discussion (26 messages🔥):
Tokens per second (TPS) performance
GPU performance comparisons
Mining rack setups
Llama model configurations
Benchmarking AI performance
-
Llama 8B performance insights: Users are reporting a wide range of tokens per second (TPS) for Llama 8B models, with configurations like Q6_K achieving 28-35 TPS on older GPUs like 1070 Ti.
- Discussions suggest that performance variations heavily depend on context length, quantization, and GPU capabilities, emphasizing VRAM bandwidth as a critical factor (a back-of-envelope sketch follows this list).
- Newer GPUs promise better TPS: It's noted that newer generation GPUs, such as the 4080 or 4090, are significantly faster for AI tasks than older models like the 1070 Ti, but require proper configuration to realize this potential.
- Users highlighted that tensor cores and higher memory bandwidth lead to substantial performance boosts, asserting a 4080 can outperform a 1070 Ti under correct settings.
- Potential challenges of mining rack setups: Concerns about building a mining rack, such as using the Asus Pro WS WRX90E-Sage with PCIe riser cables, revolve around cost, noise, power, and cooling issues.
- Users advised using PCIe5 riser cables over PCIe4 to mitigate errors, ensuring a more stable setup for high-performance tasks.
- Benchmarking insights from real-world performance: A user shared experiences on performance testing with Ollama on various AI models, emphasizing real-world performance comparisons rather than academic benchmarks.
- Their findings reflected that certain models like Llama3.1 show similar performance across different GPU generations, underlining the importance of running through consistent configurations.
- Community discussing high-performance Llama models: Several users shared their experiences running larger models like 70B, looking for optimal quantizations and hardware capable of supporting them effectively.
- For example, one user achieved 66 TPS on Llama 3.1 8B Q8 with a 7900XTX, inviting discussions on optimal setups for handling large models.
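A back-of-envelope check on those numbers: decoding is typically memory-bandwidth-bound, so each generated token streams roughly the whole quantized model from VRAM once, giving TPS ≈ bandwidth / model size. Using the 7900 XTX's ~960 GB/s spec and ~8 GB for an 8B Q8 model (both approximate):

```python
# Crude roofline for single-stream decode: memory-bound, one full pass
# over the weights per token.
bandwidth_gb_s = 960  # 7900 XTX spec, approximate
model_gb = 8          # Llama 3.1 8B at Q8, approximate
print(f"upper bound ~{bandwidth_gb_s / model_gb:.0f} tok/s")  # ~120
# The reported 66 TPS sits at roughly half this ceiling, which is typical
# once kernel overheads and KV-cache traffic are included.
```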
Links mentioned:
- Reddit - Dive into anything: no description found
- GitHub - XiongjieDai/GPU-Benchmarks-on-LLM-Inference: Multiple NVIDIA GPUs or Apple Silicon for Large Language Model Inference?: Multiple NVIDIA GPUs or Apple Silicon for Large Language Model Inference? - XiongjieDai/GPU-Benchmarks-on-LLM-Inference
OpenAI ▷ #ai-discussions (53 messages🔥):
Grok 2 Performance
DALL-E Image Capabilities
Model Parameter Comparisons
GPT-4 vs GPT-4o vs GPT-4 Turbo
Voice Dictation Tool Integration
-
Grok 2 Shows Potential: One member shared their experience trying out Grok 2, although no specific details were provided.
- This suggests a growing interest in experimenting with newer models.
- DALL-E's Image Generation Falls Short: Concerns were raised regarding DALL-E's image capabilities, with a member calling it simply bad.
- It's clear there are high expectations around image generation performance.
- The Mystery of Model Parameters: Discussion revolved around the parameter sizes of models like 4o-mini and GPT-3.5, with varying opinions on their relative sizes.
- One member questioned if 4o-mini could be confirmed to have only 1 billion parameters, which remains speculative.
- Debate Over Model Performance: Several users debated whether GPT-4o is indeed smaller or performs better compared to GPT-4 Turbo, with varying opinions on their performance differences.
- The conversation reflected the complexity in understanding model capabilities amidst differing user reports and experiences.
- Seeking Solutions for Product Similarity: A member expressed a need for tools to validate product similarities using supervised learning, particularly for product names.
- Conversation highlighted challenges in identifying identical products with varying names, emphasizing the importance of training data.
Link mentioned: Wispr Flow | Effortless Voice Dictation: Flow makes writing quick and clear with seamless voice dictation. It is the fastest, smartest way to type with your voice.
OpenAI ▷ #prompt-engineering (2 messages):
GPTs PDF comprehension
Building a website with ChatGPT
-
GPTs miss key info in PDFs: A member noted that GPTs do not read the entire PDF before replying; instead, they search for relevant snippets, which can often lead to missing crucial information.
- The recommendation is to include key information in main instructions to ensure better responses from the model.
- Guidelines for Using ChatGPT to Create Website Content: Another member expressed interest in building a website about controlling using ChatGPT and sought advice on crafting effective prompts to generate content.
- They emphasized the importance of sourcing information from trustworthy and scientific sources while planning to iterate on the text until satisfaction is achieved.
OpenAI ▷ #api-discussions (2 messages):
GPTs PDF reading limitations
Building a website with ChatGPT
-
GPTs struggle with PDF comprehension: A member noted that GPTs don't read the entire PDF before replying, instead searching for relevant bits, which can lead to missing key information.
- According to this member, key information should be included in the main instructions to ensure it is referenced.
- Formulating prompts for website content: Another member expressed interest in using ChatGPT to help build a website focused on controlling, seeking guidance on prompt formulation.
- They emphasized the need for trustworthy and scientific sources and wished to train ChatGPT to improve the content over time.
tinygrad (George Hotz) ▷ #general (27 messages🔥):
Tinygrad's ML Library Potential
Tinybox Preorder Discussion
OpenCL Handling Issues
MSE and MAE Implementation
Windows Compatibility
-
Tinygrad poised to win ML library war: A member detailed three reasons why tinygrad will excel in the ML library landscape: its efficient kernel search using BEAM and MCTS, its concise codebase of under 10k lines, and its lazy execution model.
- 'This avoids the combinatorial nightmare of one kernel per combination of device...' emphasizing that tinygrad's approach results in faster performance.
- Tinybox preorder inquiries emerge: Inquiries about preordering the tinybox model surfaced, specifically regarding payment methods and the costs involved.
- Members expressed curiosity about how to complete the preorder payment, particularly if it would use Stripe like previous models.
- OpenCL OOM concerns raised: Concerns about OOM handling arose after encountering all-black outputs in Stable Diffusion, prompting questions about OpenCL's operation.
- A member questioned whether the current implementation adequately addresses out-of-memory conditions within tinygrad.
- Implementation of MSE and MAE: A member proposed adding MSE and MAE functionality to tensors, stating it can be achieved in just a couple of lines of code.
- They shared a link to a GitHub pull request that showcases this implementation with tests included; a similar sketch follows this list.
- Discussion on Windows compatibility: A member noted that navigating Python installation prompted the Microsoft Store when using cmd on Windows 11, highlighting compatibility issues.
- They also reported on sqlite issues found in previous discussions, emphasizing the importance of using the right version of Python.
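The "couple of lines" claim is easy to illustrate; a sketch mirroring (not quoting) the linked PR:

```python
# MSE/MAE on tinygrad tensors -- a few lines each.
from tinygrad import Tensor

def mse(pred: Tensor, target: Tensor) -> Tensor:
    return ((pred - target) ** 2).mean()

def mae(pred: Tensor, target: Tensor) -> Tensor:
    return (pred - target).abs().mean()

a, b = Tensor([1.0, 2.0, 3.0]), Tensor([1.5, 2.0, 2.0])
print(mse(a, b).item(), mae(a, b).item())
```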
Links mentioned:
- Tweet from Alex Cheema - e/acc (@ac_crypto): 3 Reasons Why @tinygrad Will Win The ML Library War. 1. tinygrad searches its kernels. tinygrad uses BEAM search and soon MCTS to search for optimal kernels. You only have to write a small amoun...
- MSE in tensors.py and tests implemented by littlemountainman · Pull Request #7107 · tinygrad/tinygrad: MSE with testing implemented
tinygrad (George Hotz) ▷ #learn-tinygrad (30 messages🔥):
Disabling Gradient Calculations
Dynamic Input Tensors in JIT
TD-MPC Implementation
Learning Rate Schedulers
Backpropagation Success
-
Disabling Gradient Calculations Techniques: To disable gradients when running the model, you can use `with Tensor.test():` as an alternative to setting `Tensor.no_grad = True` manually. This ensures more streamlined evaluation, particularly emphasized for easier function decoration and usage (a short sketch follows this list).
- One user observed that printing tensors nicely might require `.numpy()` or `.tolist()`, while direct tensor prints do not realize the tensor by design.
- JIT Input Size Consistency Requirement: JIT requires the input tensor to have consistent sizes, indicated by an error message about mismatched shapes when sizes vary. Another user confirmed the design expectation for the input sizes to be the same to avoid issues.
- To introduce dynamic axes, it's suggested to use a `Variable`, but users noted that this functionality is not fully user-friendly yet.
- Progress on TD-MPC Implementation: A user reported that their implementation of TD-MPC in Tinygrad is now operational, needing only backprop to finalize the training cycle. They shared insights on the anticipated extensive training time, predicting a long process for runs using video data.
- The user emphasized the need for a more powerful setup for efficiency, mentioning cloud solutions to tackle intensive training sessions more effectively.
- Discussion on Learning Rate Schedulers: The inclusion of learning rate schedulers in the main repository has been suggested, requiring improvements in code quality and better testing practices. Users expressed eagerness for the integration of these features into neural network components.
- One member noted the functionality of `update_ema_parameters` and inquired about the rationale for decay in these parameters, seeking insights from others regarding its commonality in practice.
- Backpropagation Functionality Working: A user confirmed that backpropagation is successfully functioning in their current setup, allowing progress on their TD-MPC implementation. They plan to experiment with `.safetensors` files for quick tests while continuing to refine their loss function.
- Another user hinted at possibly setting up a shared cloud resource to expedite development processes for others in the community, proposing enhanced hardware utilization.
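A short sketch of the evaluation pattern described at the top of this list, using the API names as given in the discussion:

```python
# Sketch of the evaluation pattern described above; Tensor.test() is the
# context manager named in the discussion, Tensor.no_grad the manual flag.
from tinygrad import Tensor

W = Tensor.randn(4, 4)
x = Tensor.randn(1, 4)

with Tensor.test():        # gradients disabled inside this block
    y = (x @ W).relu()
print(y.numpy())           # .numpy()/.tolist() realize the lazy tensor
```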
Links mentioned:
- GitHub - mdaiter/tdmpc-tinygrad: TD-MPC, Tinygrad: TD-MPC, Tinygrad. Contribute to mdaiter/tdmpc-tinygrad development by creating an account on GitHub.
- tdmpc2/tdmpc2/common/layers.py at a7890b69857c402ef19edea494e210068e3ec363 · nicklashansen/tdmpc2: Code for "TD-MPC2: Scalable, Robust World Models for Continuous Control" - nicklashansen/tdmpc2
- tinygrad/tinygrad/tensor.py at master · tinygrad/tinygrad: You like pytorch? You like micrograd? You love tinygrad! ❤️ - tinygrad/tinygrad
LAION ▷ #general (47 messages🔥):
Microdiffusion Implementation
Data Preprocessing Challenges
Webdataset Usage
Hugging Face Dataset Limits
Potential for Further Experimentation
-
Microdiffusion Implementation Progress: The community is eagerly awaiting the implementation of the microdiffusion paper, which could significantly reduce training costs with a $2k training goal and 7 days of H100 compute already secured.
- Discussions involve preprocessing help and potential short-term improvements post-experiment preparation.
- Data Preprocessing Challenges: A member expressed difficulties in uploading large datasets to Hugging Face, which has a 300GB limit for datasets, suggesting chunking it into three parts or using a webdataset hosted on an S3 for efficient data handling.
- They plan to preprocess data and stream it efficiently by possibly categorizing images into multiple datasets based on aspect ratios for better organization.
- Webdataset for Efficient Data Handling: The conversation highlighted the use of webdataset as an alternative for large dataset management, which allows efficient streaming and usage with PyTorch.
- One member emphasized that webdataset bundling would facilitate better management of their anticipated 1TB dataset; a minimal sketch follows this list.
- Navigating Hugging Face Dataset Limits: Concerns about Hugging Face's uploading policies were raised, particularly regarding the potential risks of bypassing their dataset limits by splitting large datasets into smaller parts.
- One member suggested reaching out to Hugging Face support for clarification, while another joked about the possibility of being 'banned from HF'.
- Collaborative Improvement Suggestions: Participants shared thoughts on replication strategies from other successful repositories, indicating a willingness to improve efficiency and data management processes.
- Ideas included converting to MDS format for streaming data from Cloudflare, which would expedite training and reduce costs.
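A minimal webdataset streaming sketch for that setup; the shard URL pattern is a placeholder:

```python
# Stream (image, caption) pairs from tar shards over HTTP/S3.
# Shards are plain tars of paired files (000123.jpg / 000123.txt), which
# is what makes object-store streaming cheap for ~1TB datasets.
import webdataset as wds

url = "https://example-bucket.example.com/data-{000000..000999}.tar"
ds = (
    wds.WebDataset(url)
    .decode("pil")            # decode images with PIL
    .to_tuple("jpg", "txt")   # yield (image, caption) pairs
)

for image, caption in ds:
    print(image.size, caption[:40])
    break
```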
Links mentioned:
- StableCascade/train at master · Stability-AI/StableCascade: Official Code for Stable Cascade. Contribute to Stability-AI/StableCascade development by creating an account on GitHub.
- GitHub - victorchall/llama32vlm-caption: Contribute to victorchall/llama32vlm-caption development by creating an account on GitHub.
- GitHub - webdataset/webdataset: A high-performance Python-based I/O system for large (and small) deep learning problems, with strong support for PyTorch.: A high-performance Python-based I/O system for large (and small) deep learning problems, with strong support for PyTorch. - webdataset/webdataset
- GitHub - SwayStar123/microdiffusion: Contribute to SwayStar123/microdiffusion development by creating an account on GitHub.
- common-canvas/commoncatalog-cc-by · Datasets at Hugging Face: no description found
LAION ▷ #research (7 messages):
Dinov2 Optimization
Echocardiography AI
EchoPrime Model
EchoCLIP vs New Model
AI in Cardiac Imaging
-
Dinov2 gets optimized in layers: Discussion emerged around distilling Dinov2 into the early layers, leveraging its training on meaningful downstream tasks related to images for efficiency.
- It was noted that this approach performs better than simply using cross attention with CLIP embedding.
- Introduction of EchoPrime for echocardiography: EchoPrime is a new multi-view, view-informed, video-based vision-language foundation model trained on over 12 million video-report pairs, addressing limitations in traditional echocardiography AI models.
- This model utilizes contrastive learning to create a unified embedding model, enhancing the performance and application scope in cardiac imaging.
- Enhancements on EchoCLIP model: A member announced the preprint release from a coworker who has significantly improved upon their earlier EchoCLIP model by scaling it up and refining the experimental design.
- This new model exhibits much better capabilities compared to the original one created about six months prior.
Link mentioned: EchoPrime: A Multi-Video View-Informed Vision-Language Model for Comprehensive Echocardiography Interpretation: Echocardiography is the most widely used cardiac imaging modality, capturing ultrasound video data to assess cardiac structure and function. Artificial intelligence (AI) in echocardiography has the po...
LlamaIndex ▷ #blog (5 messages):
SkySQL for AI apps
Dynamic few-shot prompting
Mistral new edge-class models
Multimodal RAG system with Azure
LlamaIndex with Elastic
-
Experiment with Dynamic Few-shot Prompting: Dynamic few-shot prompting allows the retrieval of relevant examples based on the query instead of relying on a fixed set, enhancing the method of fine-tuning LLMs (more details here). This approach aims to provide more pertinent examples, as mentioned in this thread.
- Adopting this technique can improve the contextual understanding of prompts for various applications; a hedged sketch follows this list.
- Mistral Releases New Edge-Class Models: Mistral has launched impressive new edge-class models and announced day 0 support with installation via 'pip install llama-index-llms-mistralai' (installation link). This signifies a continuation of support for cutting-edge AI models (link to the announcement).
- Developers are encouraged to integrate these models into their systems without delay.
- Build Multimodal RAG Systems with Azure: A step-by-step guide explains how to create a multimodal RAG system using Azure AI Search and Azure OpenAI with LlamaIndex, enhancing retrieval accuracy through contextual information (see the guide). This guide provides benchmarks and techniques for effective implementation as shared in this tweet.
- The detailed walkthrough focuses on methods to improve contextual retrieval between different AI systems.
- LlamaIndex with Elastic Talk Tomorrow: A session featuring @seldo will discuss using LlamaIndex in conjunction with Elastic, promising insights for developers (find out more). This talk is anticipated to showcase practical applications and integration techniques.
- Catch the discussion to learn about optimizing LlamaIndex workflows with Elastic.
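In spirit, dynamic few-shot prompting is retrieval over a bank of worked examples. A hedged LlamaIndex-flavored sketch (assumes a configured embedding model, OpenAI by default; the example strings and query are placeholders):

```python
# Dynamic few-shot prompting: retrieve the most relevant worked examples
# per query instead of hardcoding a fixed set.
from llama_index.core import Document, VectorStoreIndex

examples = [
    Document(text="Q: Refund policy? A: 30 days, itemized receipt required."),
    Document(text="Q: Reset password? A: Use the 'forgot password' link."),
    Document(text="Q: Shipping time? A: 3-5 business days domestically."),
]
retriever = VectorStoreIndex.from_documents(examples).as_retriever(
    similarity_top_k=2
)

query = "How long until my order arrives?"
shots = "\n".join(n.node.get_content() for n in retriever.retrieve(query))
prompt = f"Use these examples:\n{shots}\n\nNow answer: {query}"
print(prompt)
```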
LlamaIndex ▷ #general (46 messages🔥):
Neo4jPropertyGraphStore
LlamaIndex Typescript API calls
LlamaIndex partnership process
Warnings in module loads
Multi-agent orchestration in LlamaIndex
-
Speeding up Neo4jPropertyGraphStore creation: A user noted that creating the Neo4jPropertyGraphStore takes a long time, especially with 64322 nodes in the store, and wondered about optimizing memory and schema simplifications.
- Discussion revealed potential ways to improve performance, like setting `refresh_schema` to false to avoid expensive calls related to schema counts (a hedged sketch follows this list).
- Monitoring API calls in LlamaIndex Typescript: A user inquired about monitoring API calls made to OpenAI through LlamaIndex Typescript, seeking a way to log these actions effectively.
- Another member shared that using the observability feature in LlamaIndex can help log events and monitor LLM/prompt inputs and outputs.
- Understanding LlamaIndex partnership process: A user asked about the process of becoming an official LlamaIndex partner and the qualification criteria.
- It was clarified that there aren't official partners, but various companies with LlamaIndex integrations can assist in RAG application development.
- Module load warnings are not fatal: A user raised concerns about warnings during module loads, questioning their severity.
- The response indicated that these warnings can be safely ignored, as they are not fatal.
- Implementing multi-agent orchestration using workflows: A user asked if the capabilities of OpenAI's Swarm could be replicated in LlamaIndex, with an emphasis on workflows being the main approach.
- Examples of multi-agent communication with workflows were provided, including blog articles and GitHub repositories for guidance.
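A hedged sketch of that workaround (connection details are placeholders; the keyword follows the `refresh_schema` discussion in the linked issue):

```python
# Skip the expensive schema-refresh queries when constructing the store
# for a large graph. Connection details are placeholders.
from llama_index.graph_stores.neo4j import Neo4jPropertyGraphStore

graph_store = Neo4jPropertyGraphStore(
    username="neo4j",
    password="password",
    url="bolt://localhost:7687",
    refresh_schema=False,  # avoid schema/count queries on large graphs
)
```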
Links mentioned:
- Partners — LlamaIndex, Data Framework for LLM Applications: Get to production faster with LlamaIndex Experts
- Observability | LlamaIndex.TS: LlamaIndex provides one-click observability 🔭 to allow you to build principled LLM applications in a production setting.
- [Bug]: Extremely long time initializing Neo4jPropertyGraphStore for larger graphs · Issue #16204 · run-llama/llama_index: Bug Description It takes about 14 min to initiate the graph store with 3558 entities. I feel this is because refresh_schema() does not handle large graphs well. Maybe not using async? I pasted the ...
Interconnects (Nathan Lambert) ▷ #news (15 messages🔥):
Mistral's new models
Chatbot Arena Updates
Yi-Lightning performance
Ministral weights availability
-
Mistral introduces Ministral models: On the first anniversary of Mistral 7B, the company launched two new edge models: Ministral 3B and Ministral 8B, designed for on-device use with privacy-first inference capabilities.
- These models, boasting features like up to 128k context lengths, are positioned as leading contenders in the sub-10B category for diverse applications.
- Ministral 3B lacks weights: Discussion arose around the absence of weights for Ministral 3B, sparking questions about its potential performance compared to Ministral 8B which does offer non-commercial weights.
- The community expressed disappointment and curiosity regarding the decision to withhold weights for this model.
- Performance of Yi-Lightning surges: The newly released model Yi-Lightning from @01AI_YI has garnered attention in the Chatbot Arena, ranking #6 overall with strong performances in Math and Coding.
- This model's rise was recognized with over 13K community votes, indicating robust capabilities, as it matches prominent peers like Grok-2.
- Concerns over model evaluations: In discussions about model performance, it was noted that Gemma2 9B was left out of comparison tables, possibly highlighting inconsistencies in benchmark evaluations.
- Comments suggested a need for a more uniform evaluation codebase, as fluctuations in performance metrics were observed.
Links mentioned:
- Un Ministral, des Ministraux: Introducing the world’s best edge models.
- Tweet from Armand Joulin (@armandjoulin): A great set of models have entered the arena! Sadly Gemma2 9B dropped from one of their tables so I had to add it. Would be even better if all models were evaluated with same codebase as I see fluctu...
- Tweet from lmarena.ai (formerly lmsys.org) (@lmarena_ai): Big News from Chatbot Arena! @01AI_YI's latest model Yi-Lightning has been extensively tested in Arena, collecting over 13K community votes! Yi-Lightning has climbed to #6 in the Overall ranking...
Interconnects (Nathan Lambert) ▷ #random (16 messages🔥):
AI Internships
Doomsday Clock for AGI
Competition among Intern Candidates
-
Doomsday Clock Launch by Saudi-backed Business School: A Saudi-backed business school in Switzerland launched a Doomsday Clock to warn about the dangers of 'uncontrolled artificial general intelligence,' calling it a 'god-like' AI, paralleling past societal fears about nuclear threats.
- Michael Wade, clock creator and professor, detailed this initiative in a recent op-ed for TIME.
- AI2 OLMo Internship Opportunity: The AI2 is hiring research interns for the OLMo project, offering competitive salaries ranging from $86,520 to $123,600, and an opportunity to lead significant research in NLP and machine learning.
- Interns can define research projects, collaborate with team members, and publish in high-profile journals over a 12-week internship starting flexibly.
- Intense Competition for AI Internships: A conversation arose about the competitive nature of the OLMo internships, especially with applicants like a 'post training lead' from a top AI lab.
- It was discussed how this level of competition makes the internship notably challenging for graduate students.
- Concerns about Grad Student Competition: One member expressed concern about the pressure on grad students when competing against highly experienced candidates in the internship selection process.
- This sentiment was echoed by others, highlighting the difficulties faced in landing these coveted opportunities.
Links mentioned:
- For the Love of God, Stop Making Inscrutable Doomsday Clocks: A business school is using AI doomerism, money from Saudi Arabia, and a dusty Cold War metaphor to get people hyped about AI’s future.
- Tweet from Matt Shumer (@mattshumer_): http://x.com/i/article/1846205240728588288
- Job Application for Research Internship, OLMo at The Allen Institute for AI: no description found
Interconnects (Nathan Lambert) ▷ #posts (18 messages🔥):
Snailbot's dual function
Audio distribution challenges
Hackernews posting issues
-
Snailbot takes on double duty: The discussion highlights Snailbot being utilized for audio feed posts, showcasing its expanded functionality.
- A user remarked about this new use as a twofer, indicating excitement and novelty.
- Struggles with audio distribution: Creating an effective strategy for distributing audio content remains unclear, with multiple users expressing their difficulties.
- One user humorously likened their situation to a popular note-taking app meme, conveying frustration.
- Hackernews posting pitfalls: Concerns were raised about the challenges of posting to Hackernews, especially regarding link visibility and upvoting dynamics.
- A member pointed out that direct links may face penalties, complicating the sharing process and discouraging users from soliciting upvotes directly.
- Finding solutions to visibility issues: Participants discussed strategies to maintain link viability, suggesting users inform others where to find content instead of directly linking.
- The process of getting noticed on Hackernews was described as fickle, creating further barriers to distribution.
Latent Space ▷ #ai-general-chat (37 messages🔥):
Gemini free tier performance
Mistral's new models
Nvidia's Llama 3.1 Nemotron
E2B's SDK launch and funding
AI compute and nuclear energy
-
- Gemini free tier faces performance issues: Users have reported timeouts and failures with the Gemini free tier, casting doubt on the practicality of the 1.5B-tokens-per-day claims, especially under the current rate limits.
- A member speculated that real effective usage might be much lower, potentially around 0.05B tokens instead.
- Mistral releases new edge models: Mistral unveiled Ministral 3B and Ministral 8B models designed for on-device applications, pushing the boundaries in commonsense and reasoning capabilities in the sub-10B category.
- However, critiques indicated that 3B is API-only, limiting its on-device utility and raising concerns about restrictive licensing for indie developers.
- Nvidia's Llama 3.1 Nemotron steals the spotlight: Nvidia's new Llama 3.1 Nemotron 70B model reportedly outperforms both GPT-4o and Claude Sonnet 3.5 on various benchmarks, according to a recent release announcement.
- The community is buzzing, questioning whether Sonnet 3.5 enjoyers are truly in the same league as this newly released model.
- E2B launches SDK with significant funding: E2B announced the launch of its v1.0 SDK along with a $11.5M seed round, aimed at providing infrastructure for AI code interpreting with secure sandboxes.
- The startup is already running millions of sandboxes monthly, highlighted by partnerships with notable customers like Perplexity.
- Suggesting LLM performance benchmarks: A member proposed the idea of creating a CPUBenchmark-style comparison tool specifically for LLMs, as existing leaderboards do not facilitate direct model comparisons.
- Current tools, such as lmsys/hugging face leaderboards, have limitations that hinder effective model comparison.
Links mentioned:
- Un Ministral, des Ministraux: Introducing the world’s best edge models.
- Tweet from Yohei (@yoheinakajima): introducing "ditto" the simplest self-building coding agent 📄 ~500 lines of code 🛠️ can build multi-file apps 🔁 a simple LLM loop with 5 tools github/replit/more below 👇
- 3b is API-only so you won’t be able to run it on-device, which is the killer ... | Hacker News: no description found
- Tweet from maharshi (@mrsiipa): nvidia casually dropped an open 70B model that beats gpt-4o and claude 3.5 sonnet
- Tweet from Vasek Mlejnsky (@mlejva): Today, I'm excited to launch @e2b_dev, release v1.0 of our SDK, and announce our $11.5M seed round! We're building infrastructure for AI code interpreting. Secure E2B Sandboxes for running A...
- Tweet from morgan — (@morqon): openai gets a 100k B200 cluster with an initial 206 MW of renewable energy, leased from oracle, designed, built and operated by crusoe, online in the first half of 2025 the press release: “a record-s...
- Tweet from Philipp Schmid (@_philschmid): Did NVIDIA silently release a Llama 3.1 70B fine-tune that outperforms @OpenAI GPT-4o and @AnthropicAI Claude Sonnet 3.5? Yesterday, @nvidia added Llama 3.1 Nemotron 70B Instruct a further RLHFed mode...
- Tweet from Find anything. Protect everything | Dropbox Dash: Dropbox Dash for Business combines AI universal search and organization with universal content access control. Find, organize, share, and secure content across apps effortlessly—so you can focus on th...
- Amazon, Google make dueling nuclear investments to power data centers with clean energy: Tech giants Amazon and Google are investing in the next generation of nuclear reactors. Both companies are seeking new sources of carbon-free electricity to meet increasing demand from data centers an...
- New nuclear clean energy agreement with Kairos Power: Google’s first nuclear energy deal is a step toward helping the world decarbonize through investments in advanced clean energy technologies.
Cohere ▷ #discussions (9 messages🔥):
Community Inspiration
Job Opportunities at Cohere
- Community Inspires Daily: One member expressed that they find inspiration from the Cohere community every day, highlighting its positive impact.
- Another member agreed, stating, A lot of things, honestly this whole community each day everyday!
- Job Opportunities Clarified: A member reminded others that this channel is not the right place to look for jobs at Cohere and provided a link to the careers page for applications.
- They emphasized the passion of the Cohere team in solving real-world problems with ML/AI technology, working from multiple locations.
Link mentioned: Careers: Our team of ML/AI experts is passionate about helping developers solve real-world problems. From our offices in Toronto, London, and Palo Alto, we work at the cutting edge of machine learning to unloc...
Cohere ▷ #announcements (1 messages):
RAG++ course
AMA with RAG experts
- Round 2 AMA with RAG Experts: Due to the great response to the first AMA, join us tomorrow at 11:00 AM ET for another live chat with Ayush Thakur and Meor Amer, experts on RAG development.
- This session promises to offer behind-the-scenes insights from the RAG++ course by Weights & Biases and Cohere.
- Event Link for the AMA: The event link for tomorrow's AMA is available here.
- Make sure to mark your calendars and prepare any questions you have regarding advanced RAG development!
Cohere ▷ #api-discussions (22 messages🔥):
Cohere Embed API Error Handling
Reducing RAG Retrieved Chunks
Trial Key Rate Limits
Model Usage on Trial Keys
- Cohere Embed API error handling explained: A user inquired about handling errors when using the Cohere Embed API, particularly if one document fails to embed within a batch of 96.
- One failed document can fail the whole batch, so it's suggested to build retry logic keyed on specific error codes; a sketch of this pattern follows the list below.
- Speeding up RAG functionality: To improve citation speed in RAG, switching `citations_quality` to FAST can significantly enhance performance.
- One can reduce overall citations by either truncating manually after n citations or implementing a ranking system for top_n chunks.
- Trial Key rate limits discussed: Another member faced a TooManyRequestsError with trial keys and was informed that trial keys allow for up to 1,000 API calls per month.
- Users noted that rate limits are tied to accounts, not individual trial keys, and suggested upgrading to production keys for higher limits.
- Issues with trial key usage in Cohere: A user reported that they could use trial keys in the dashboard but faced issues accessing models through the Cohere dependency.
- Despite being able to use trial keys on the dashboard, the limitation with API access was concerning, and guidance was given to wait for the trial keys to reset.
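For the retry suggestion above, here's a minimal sketch assuming the official Cohere Python SDK; the error class name, model id, and backoff schedule are illustrative and may vary by SDK version:

```python
import time

import cohere  # assumes the official cohere Python SDK

co = cohere.Client("YOUR_API_KEY")  # placeholder key

def embed_with_retry(texts: list[str], max_retries: int = 3) -> list:
    """Embed a batch of up to 96 texts, retrying on transient errors.

    One bad document can fail the whole batch, so once retries are
    exhausted the batch is split in half and each half is retried,
    isolating the offending document.
    """
    for attempt in range(max_retries):
        try:
            resp = co.embed(
                texts=texts,
                model="embed-english-v3.0",
                input_type="search_document",
            )
            return resp.embeddings
        except cohere.TooManyRequestsError:
            time.sleep(2 ** attempt)  # 429: back off, then retry
        except Exception:
            break  # other error codes: fall through to the split path
    if len(texts) == 1:
        raise RuntimeError(f"document failed to embed: {texts[0][:60]!r}")
    mid = len(texts) // 2
    return embed_with_retry(texts[:mid]) + embed_with_retry(texts[mid:])
```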
Link mentioned: Http status codes — Cohere: Understand Cohere's HTTP response codes and how to handle errors in various programming languages.
Cohere ▷ #projects (2 messages):
System Prompt Templates
In-Depth Question Answering Evaluation App
- Excitement Around System Prompt Templates: A member expressed enthusiasm about the variety of system prompt templates available, stating that there are many to choose from.
- This excitement highlights the engagement and interest in optimizing AI interactions.
- Launch of In-Depth Question Answering Evaluation App: A member shared their first article on Medium detailing the In-Depth Question Answering Evaluation App, which utilizes Streamlit and Gemini 1.5 Pro.
- The app aims to enhance learning through real-time feedback, transforming how users evaluate their knowledge, with a nod to Dr. Fady AlNajjar for the idea.
Link mentioned: Enhancing Learning Through Real-Time Feedback: In-Depth Question Answering Evaluation App: In the world of online learning and self-improvement, having effective tools to evaluate one’s progress is crucial. Whether you’re studying…
Cohere ▷ #cohere-toolkit (2 messages):
Text-to-speech availability
Chatbot responses
- Text-to-speech is here for chatbots!: Text-to-speech is now available for chatbot responses, with a detailed setup guide provided for users.
- This new feature aims to enhance user interaction through more dynamic audio responses.
- User excitement about new feature: One user expressed excitement by stating, 'sick!' after the announcement of text-to-speech functionality.
- Such enthusiasm indicates a positive reception toward the new capabilities being introduced.
Modular (Mojo 🔥) ▷ #general (2 messages):
Playground updates
Community meeting showcase
- Playground Receives Enthusiastic Praise: Members expressed much-needed love for the Playground feature, thanking Modular for its improvements and support.
- For more information, you can read about it in the Playground documentation.
- Mark Your Calendars for the Community Showcase: A community meeting is scheduled for October 21st, featuring a live showcase where participants can demo their MAX and Mojo projects.
- Slots will last between 5-10 minutes, providing opportunities for sharing learnings and gathering feedback.
Link mentioned: Modular Docs: no description found
Modular (Mojo 🔥) ▷ #mojo (8 messages🔥):
Mojo Bugs
Deque Code Contribution
Storing SIMD in YMM Register
Using OpenCV in Mojo
Mojo Standard Library
- Weird Mojo Bug Fixed: A member identified a reproducible Mojo bug and later fixed it themselves, offering to add any filed contributions to the changelog.
- They encouraged others to report similar issues to improve the platform.
- Deque Code Contribution in Mojo: Another member confirmed that the issue was resolved in the latest nightly and expressed excitement about contributing deque code to Mojo.
- They mentioned that Joe Loser would be looking at the deque code soon.
- Inquiry about SIMD Storage in YMM Register: A question was raised about the specific method to store a SIMD in the YMM register and whether Mojo handles this automatically if the size fits.
- This sparked discussion around SIMD storage implementations in Mojo.
- OpenCV Usability in Mojo: One member inquired whether it is possible to use OpenCV within Mojo, highlighting a need for image processing capabilities.
- No definitive answer was provided yet, leading to further curiosity.
- Mojo's Standard Library Goals: A member questioned if Mojo aims to reimplement the entire Python standard library as it seeks to be a superset of Python.
- Another member theorized it would be a long time before such an extensive feature is realized.
Modular (Mojo 🔥) ▷ #max (22 messages🔥):
Higher Level API for LLM Inference
Inferencemax Development
Mojo vs. Python Implementations
Jakub's Python API Work for MAX
- Searching for High-Level API for LLMs: A member expressed the need for a higher level API for LLM inference, similar to the HF Transformers library, as current MAX examples require extensive code.
- Another member mentioned that Max might already support this, but the existing examples are too low-level with hundreds of lines of code needed.
- Introduction to Inferencemax Project: A member shared their new project called Inferencemax aimed at simplifying LLM inference, though they noted it may not fully align with the request.
- The code is currently written in Python, and while it’s considered suboptimal, improvements for performance are planned.
- Potential Mojo Implementations: A discussion ensued about the possibility of implementing a Mojo version of Inferencemax, with a focus on the novelty of Mojo for some developers.
- Members encouraged looking at example codes, particularly those listed for Llama3, as a resource.
- Jakub’s Work on Python API for MAX: A member inquired about Jakub's contributions to the Python API for MAX, and another provided a link to a community meeting where Jakub was the first speaker.
- It was noted that the API isn't fully released yet, present only in nightly builds, but it aims to showcase ease of use.
- Continued Learning and Resource Sharing: Members expressed eagerness to learn more about the new API developments and indicated that they would revisit the community video multiple times for better understanding.
- Discussion highlighted the value of community resources and collaboration in improving individual projects and understanding.
Links mentioned:
- MAX Examples | Modular Docs: Ready-to-use code examples using MAX APIs
- max/examples/graph-api/pipelines/llama3 at main · modularml/max: A collection of sample programs, notebooks, and tools which highlight the power of the MAX Platform - modularml/max
- GitHub - teilomillet/inferencemax: Contribute to teilomillet/inferencemax development by creating an account on GitHub.
Stability.ai (Stable Diffusion) ▷ #general-chat (21 messages🔥):
Mineral Resources Poster
SD3 Human Pose Limitations
LLM Token Limit Issues
LyCORIS vs LoRA Explained
Web3 Project Job Openings
- Need Help Creating a Mineral Resources Poster: A member requested assistance in making a poster on mineral resources for their college project, asking others for help.
- Another member encouraged them to post their needs directly in the chat for support.
- Understanding SD3 Limitations with Humans: Discussion emerged regarding SD3's performance with human figures when lying down or upside down, with one member commenting on it being generally poor.
- Another participant argued that issues arise regardless of position, with frequent deformations in images.
- Frustrations with LLM Token Limit Respect: A user expressed frustration over LLM models not adhering to token limits or stop commands, leading to incoherent repeats and spurious outputs.
- They speculated about potential issues with prompt templating as cause, seeking input from those more experienced.
- Clarification on LyCORIS vs LoRA Folders: A member questioned the reason for the existence of a LyCORIS folder despite moving everything to LoRA.
- Another user clarified that it originated from historical need for extensions, now incorporated into interfaces like Auto1111.
- Job Positions for New Web3 Project: An announcement was made about the launch of a new Web3 project with various roles available, including Developer and Moderator, offering competitive salaries.
- Interested candidates are encouraged to reach out directly for more details.
OpenInterpreter ▷ #general (17 messages🔥):
Open Interpreter GitHub Copilot extension
Mozilla AI talk announcement
Kernel panic issue
Understanding bandwidth
GitHub Marketplace extension listing
- Open Interpreter GitHub Copilot extension idea: A member suggested creating an Open Interpreter GitHub Copilot extension, to which another member indicated they don’t have the bandwidth to pursue it but support community efforts.
- They encouraged the community to take on the project while providing guidance where possible.
- Mozilla AI talk on the horizon: MikeBirdTech announced excitement for an upcoming talk from Mozilla AI and urged members to add the event to their calendars.
- The link to the event was also shared for easy access.
- Kernel panic when closing the app: A member reported experiencing a kernel panic when attempting to close the Open Interpreter app.
- MikeBirdTech advised creating a dedicated post with version details in a specific channel to troubleshoot the issue effectively.
- Clarification on bandwidth: The term bandwidth was discussed, with one member explaining it refers to their available time and resources for new projects.
- Another member humorously acknowledged their error in understanding the term and valued the discussion's insights.
- GitHub Marketplace extension listing criteria: A member clarified that there are no specific bandwidth requirements for listing an extension on the GitHub Marketplace, focusing instead on meeting the platform's criteria.
- They outlined essential steps for creating and publishing an extension, emphasizing the importance of providing user value and integration.
Link mentioned: Tweet from Mike Bird (@MikeBirdTech): pip install --upgrade open-interpreter A π release!
OpenInterpreter ▷ #ai-content (2 messages):
Local LLMs
Hugging Face
Ollama Integration
Llama 3.2 3B
- Local LLMs Now Easier to Use: A major update allows users to easily run any GGUF model hosted on Hugging Face directly via Ollama by just pointing to the repository and running it.
- For example, users can run Llama 3.2 3B with the command `ollama run hf.co/hugging-quants/Llama-3.2-3B-Instruct-Q4_K_M-GGUF`; a client-side sketch follows this list.
- Feature Appreciation: One member expressed enthusiasm about the new feature, stating it was an exciting advancement for local LLMs.
- They also noted that this was a feature they appreciated about Jan that was previously missing on Ollama.
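As a companion to the CLI one-liner, here is a small sketch using the ollama Python client; the client package and prompt are my assumptions, but the model reference is the same one shown above:

```python
# assumes `pip install ollama` and a running local Ollama server,
# which pulls the GGUF from Hugging Face on first use
import ollama

response = ollama.chat(
    model="hf.co/hugging-quants/Llama-3.2-3B-Instruct-Q4_K_M-GGUF",
    messages=[{"role": "user", "content": "In one sentence, what is a GGUF file?"}],
)
print(response["message"]["content"])
```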
Link mentioned: Tweet from Philipp Schmid (@_philschmid): Big Update for Local LLMs! Excited to share that you can now easily use any GGUF model on @huggingface directly with @ollama! Just point to the Hugging Face repository and run it! Here is how to run @...
DSPy ▷ #show-and-tell (5 messages):
DSPy Workflow System
dspygen Framework Update
Livecoding DSPy Signatures
Unit Testing DSPy
Loom Recordings
- Unit Testing a DSPy Powered Workflow System: A member announced they are unit testing a DSPy powered Workflow system in the Discord channel.
- Check the channel for progress updates and feedback on the testing process.
- Major Update to dspygen Framework: A recent major update has been made to the dspygen framework, built for improvements outside of dslmodel.
- dspygen aims to enhance the DSPy workflow for language models like GPT, BERT, and LLaMA.
- Livecoding Decorator for DSPy Signatures: A member hosted livecoding sessions focusing on creating a decorator to merge DSPy signatures with Custom GPTs.
- Participants can join the session in the Discord channel for live updates and demonstrations.
- Loom Recordings on DSPy Topics: Two Loom recordings were shared, showcasing different aspects of DSPy development.
- These recordings can provide further insights into the ongoing work and strategies being employed.
Link mentioned: GitHub - seanchatmangpt/dspygen: A Ruby on Rails style framework for the DSPy (Demonstrate, Search, Predict) project for Language Models like GPT, BERT, and LLama.: A Ruby on Rails style framework for the DSPy (Demonstrate, Search, Predict) project for Language Models like GPT, BERT, and LLama. - seanchatmangpt/dspygen
DSPy ▷ #papers (1 messages):
LightRAG improvements
GraphRAG limitations
Retrieval-Augmented Generation systems
- LightRAG outshines GraphRAG: Recent claims suggest LightRAG offers significant enhancements in effectiveness and cost efficiency compared to GraphRAG as detailed in this paper.
- The authors propose that LightRAG addresses limitations of existing RAG systems, improving contextual awareness and information retrieval through innovative graph structures.
- RAG systems face challenges: Current RAG systems struggle with limitations like flat data representations and poor contextual understanding, leading to fragmented responses.
- The proposed LightRAG framework seeks to resolve these issues by incorporating graph structures in text indexing and retrieval processes.
Link mentioned: LightRAG: Simple and Fast Retrieval-Augmented Generation: Retrieval-Augmented Generation (RAG) systems enhance large language models (LLMs) by integrating external knowledge sources, enabling more accurate and contextually relevant responses tailored to user...
DSPy ▷ #general (7 messages):
DSPy integration with GPT-O1+
Documentation revamp discussions
HTML vs. notebooks for documentation
DSPy integration into GPT-O1+ progresses: Updated documentation introduced a long-form RAG example for building a question answering system about tech topics like Linux or iPhone apps using DSPy.
- Users can install DSPy with `pip install -U dspy`; a tutorial is available in the DSPy documentation, and a minimal sketch follows this list.
- Revamping documentation approaches: Discussion emerged about the upcoming revamp of DSPy documentation, focusing on improving rhythm and style.
- Participants are considering whether to use HTML documentation versus detailed notebooks, mentioning the usefulness of having caches for execution.
- Preference for HTML documentation: A member expressed a strong preference for using an HTML format for documentation instead of standalone notebooks.
- They suggested retaining detailed code samples in the repository while providing a straightforward starter guide in the docs.
- Building on existing documentation framework: Following the suggestion for HTML documentation, focus was shifted to existing building blocks documenting DSPy capabilities.
- The community feels that enhancing existing documentation will adequately cover primitives, optimizers, metrics, and common techniques.
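To make the quick-start concrete, here's a minimal sketch in the spirit of the linked tutorial; the provider/model string is an assumption (any LiteLLM-style id should work):

```python
import dspy

# configure a language model; "openai/gpt-4o-mini" is illustrative
lm = dspy.LM("openai/gpt-4o-mini")
dspy.configure(lm=lm)

# declare the task as a signature: question in, answer out
qa = dspy.ChainOfThought("question -> answer")

pred = qa(question="How do I check which process is using port 8080 on Linux?")
print(pred.answer)
```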
Link mentioned: Getting Started I: Basic Question Answering | DSPy: Let's walk through a quick example of basic question answering in DSPy. Specifically, let's build a system for answering Tech questions, e.g. about Linux or iPhone apps.
LangChain AI ▷ #announcements (1 messages):
New Community Launch
Feedback Request
Moderator Opportunities
Discord Closure
- LangChain Community Closing on October 31, 2024: On October 31, 2024, LangChain will be closing the current Discord community to build a new and improved space for users.
- The goal is to create a community that is more helpful, engaging, and fun.
- Stay Updated on New Community: To stay informed about the upcoming community, members can fill out the form located here.
- For the best experience, users are encouraged to use the latest versions of Chrome, Firefox, Safari, or Edge.
- Suggestions for Improvement: LangChain seeks feedback on ways to enhance the new community space and welcomes thoughts and suggestions.
- Members can share their feedback via community@langchain.dev to help shape the new environment.
- Call for Moderators: LangChain is also looking for interested individuals to join as moderators in the new community.
- Anyone willing to support in this capacity is encouraged to reach out and express their interest.
Link mentioned: Airtable | Everyone's app platform: Airtable is a low-code platform for building collaborative apps. Customize your workflow, collaborate, and achieve ambitious outcomes. Get started for free.
LangChain AI ▷ #general (3 messages):
API Routing with Agents
Docker Compose
- Seeking Advice on API Routing: A member is looking for guidance on the best way to use agents to route user questions to different APIs based on each agent's description.
- They have 5 APIs running in Docker Compose, indicating a structured setup for their project; a routing sketch follows this list.
- General Inquiry in Chat: A user inquired about receiving 10 pings, suggesting possible issues with notifications or system alerts.
- This raised questions about communication effectiveness and preferences among the group.
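On the routing question above, one common pattern is a lightweight LLM classifier over the agents' descriptions; here's a minimal sketch, where the API names and descriptions are hypothetical stand-ins for the member's five services:

```python
from langchain_openai import ChatOpenAI

# hypothetical services standing in for the 5 Docker Compose APIs
APIS = {
    "billing": "invoices, payments, and refunds",
    "catalog": "product details and availability",
    "support": "technical issues and troubleshooting",
}

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

def route(question: str) -> str:
    """Ask the model to pick the best-matching API from its description."""
    menu = "\n".join(f"- {name}: {desc}" for name, desc in APIS.items())
    prompt = (
        "Pick the single best API for this question.\n"
        f"{menu}\nQuestion: {question}\n"
        "Answer with only the API name."
    )
    choice = llm.invoke(prompt).content.strip().lower()
    return choice if choice in APIS else "support"  # default fallback

print(route("Why was I charged twice last month?"))  # expected: billing
```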
LangChain AI ▷ #langserve (4 messages):
Remote Runnable Tools Binding
Playground Blank Page Issue
GitHub Issue Tracking
- Inquiring about Remote Runnable Tool Binding: A member asked whether it's possible to bind tools to the Remote Runnable, noting it lacks a bind_tools() method.
- This request opens the door for potential enhancements on handling tool bindings effectively.
- Playground encounters issues with Optional fields: Members identified a significant problem in the Playground when an input type includes an Optional field, leading to a blank page and an error logged in the console.
- The input schema's null type is believed to cause compatibility issues with jsonforms, hindering its functionality; a minimal repro sketch follows this list.
- GitHub Issue #782 opened on Playground Problem: A member reported the Playground issue on GitHub, detailing that chains with optional fields lead to loading failures and console errors.
- The issue has been documented in GitHub Issue #782 to track the resolution process.
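Here is a minimal repro sketch of the reported behavior, assuming a langserve + FastAPI setup; the schema and chain are illustrative, not the original reporter's code:

```python
from typing import Optional

from fastapi import FastAPI
from langchain_core.runnables import RunnableLambda
from langserve import add_routes
from pydantic import BaseModel

class Question(BaseModel):
    text: str
    hint: Optional[str] = None  # the Optional field implicated in the bug

def echo(q) -> str:
    # depending on version, the input may arrive as a dict or a parsed model
    text = q["text"] if isinstance(q, dict) else q.text
    return f"echo: {text}"

app = FastAPI()
chain = RunnableLambda(echo).with_types(input_type=Question)

# /echo/playground/ is the page that reportedly renders blank for this schema
add_routes(app, chain, path="/echo")
```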
Link mentioned: Input type with Optional field breaks Playground · Issue #782 · langchain-ai/langserve: If a chain has an input type containing an optional field, the Playground page fails to load (blank page), and the following error is logged in the browser console: index-400979f0.js:150 Uncaught E...
OpenAccess AI Collective (axolotl) ▷ #general (4 messages):
AIFoundry start-up
Mistral AI model
Mistral license requirements
- AIFoundry seeks mentorship on GitHub design: Yulia from AIFoundry.org expressed admiration for Axolotl's organized GitHub and is looking for mentorship in a similar process.
- She inquired if there was someone available who could assist their open-source start-up focused on local model inference.
- Mistral AI model access requirements: A link to the new Ministral-8B-Instruct-2410 model on Hugging Face was shared, emphasizing that contact information must be provided to access the model.
- Permission is required from Mistral AI for any uses outside standard access, and individuals are encouraged to review the privacy policy.
Link mentioned: mistralai/Ministral-8B-Instruct-2410 · Hugging Face: no description found
OpenAccess AI Collective (axolotl) ▷ #other-llms (2 messages):
L3.1 Ethereal Rainbow
Finetuning on L3.1
Sensitive Content
- L3.1 Ethereal Rainbow Repository Launched: The L3.1 Ethereal Rainbow repository has been marked as containing sensitive content and may have harmful information.
- Users are advised to view content with caution due to the sensitive nature of its materials.
- Finetuning Details for L3.1: The L3.1 model has been finetuned on over 250 million tokens with a sequence length of 16k.
- This fine-tuning process focuses on RP and creative writing, enhancing the model's capabilities in these areas.
Link mentioned: invisietch/L3.1-EtherealRainbow-v1.0-rc1-8B · Hugging Face: no description found
Torchtune ▷ #papers (2 messages):
New Paper Discussion
- Excitement over the new paper: Members expressed enthusiasm about the paper arXiv:2410.06511, calling it a great read.
- In a show of affirmation, one member added that they are also still going through the paper, emphasizing its quality.
- Consensus on Paper Quality: The sentiment regarding the paper was unanimous, with multiple members noting its impressive content.
- One member highlighted that it is still in progress as they go through the details, marking common interest.
LLM Agents (Berkeley MOOC) ▷ #mooc-readings-discussion (1 messages):
LLM optimization
Language-Model-Based Evolutionary Optimizer (LEO)
Zero-shot optimization applications
Design applications in engineering
- LLMs shine in optimization tasks: Recent research highlights that Large Language Models (LLMs) can perform zero-shot optimization across various challenging problems, including multi-objective ones.
- This line of work could prove significant for practical applications in areas such as rocket nozzle design and windfarm layout optimization.
- Introducing the Language-Model-Based Evolutionary Optimizer (LEO): The paper introduces LEO, a novel population-based approach using LLMs for numerical optimization, performing comparably to both gradient-based and gradient-free methods.
- However, the creative nature of LLMs raises concerns about hallucination, necessitating careful management.
- Applications spark discussions in the community: Community members expressed interests in the deeper applications of LLMs with reasoning, especially related to engineering designs.
- They are eager to exchange thoughts on the implications of applying LLMs to practical engineering challenges.
Link mentioned: Large Language Model-Based Evolutionary Optimizer: Reasoning with elitism: Large Language Models (LLMs) have demonstrated remarkable reasoning abilities, prompting interest in their application as black-box optimizers. This paper asserts that LLMs possess the capability for ...
Mozilla AI ▷ #announcements (1 messages):
AI Stewardship Practice Program
Microcredentialing in AI
MaRS Discovery District
- Pilot Program for AI Stewardship Practice: The MaRS Discovery District is offering a few free slots to pilot the AI Stewardship Practice Program aimed at individuals working with AI.
- This program provides a microcredential for researchers, entrepreneurs, educators, and others looking to positively influence the evolution of AI; more info here.
- Chance to Join AI Course Pilot: Participants interested in the course pilot can reply in the thread for a chance to secure a seat valued at 500 CAD.
- Seats will be distributed based on the order of replies until slots fill up, encouraging quick response from potential participants.