[AINews] The AI Nobel Prize
This is AI News! an MVP of a service that goes thru all AI discords/Twitters/reddits and summarizes what people are talking about, so that you can keep up without the fatigue. Signing up here opts you in to the real thing when we launch it 🔜
Artificial Neural Networks are all you need to be a physicist.
AI News for 10/7/2024-10/8/2024. We checked 7 subreddits, 433 Twitters and 31 Discords (226 channels, and 2556 messages) for you. Estimated reading time saved (at 200wpm): 277 minutes. You can now tag @smol_ai for AINews discussions!
We could talk about the new Differential Transformer paper, or the new AdderLM paper, but who are we kidding, the big story of the day is Geoff Hinton and John Hopfield's Nobel Prize in Physics.
The 14-page citation covers their greatest hits, while the memes from AI people and the reactions from career physicists have been... interesting.
Of course, Hopfield is not new to physics prizes.
[Sponsored by Zep]: Zep is a low-latency memory layer for AI agents and assistants. They continuously update their internal graph of user interactions to deliver fast, deterministic fact retrieval. They just released their new community edition; check it out on GitHub!
Swyx commentary: The use of Knowledge Graphs for Memory was one of the hottest topics at the AI Engineer conference - other popular frameworks are also launching "long term memory" support, but this is an open source solution that isn't tied to LangChain, Autogen, et al. Readme includes a lovely FAQ which we love to see. Memory layers seem to be as hot in 2024 as Vector databases were in 2023.
Table of Contents
- AI Twitter Recap
- AI Reddit Recap
- AI Discord Recap
- PART 1: High level Discord summaries
- HuggingFace Discord
- LM Studio Discord
- Unsloth AI (Daniel Han) Discord
- aider (Paul Gauthier) Discord
- Eleuther Discord
- OpenAI Discord
- OpenRouter (Alex Atallah) Discord
- Stability.ai (Stable Diffusion) Discord
- Cohere Discord
- Latent Space Discord
- Nous Research AI Discord
- GPU MODE Discord
- LlamaIndex Discord
- Modular (Mojo 🔥) Discord
- Interconnects (Nathan Lambert) Discord
- Perplexity AI Discord
- DSPy Discord
- OpenInterpreter Discord
- LLM Agents (Berkeley MOOC) Discord
- tinygrad (George Hotz) Discord
- LangChain AI Discord
- Torchtune Discord
- LAION Discord
- PART 2: Detailed by-Channel summaries and links
- HuggingFace ▷ #announcements (1 message):
- HuggingFace ▷ #general (566 messages🔥🔥🔥):
- HuggingFace ▷ #today-im-learning (7 messages):
- HuggingFace ▷ #i-made-this (1 message):
- HuggingFace ▷ #computer-vision (2 messages):
- HuggingFace ▷ #NLP (1 message):
- HuggingFace ▷ #diffusion-discussions (6 messages):
- LM Studio ▷ #announcements (2 messages):
- LM Studio ▷ #general (227 messages🔥🔥):
- LM Studio ▷ #hardware-discussion (81 messages🔥🔥):
- Unsloth AI (Daniel Han) ▷ #general (246 messages🔥🔥):
- Unsloth AI (Daniel Han) ▷ #help (60 messages🔥🔥):
- aider (Paul Gauthier) ▷ #general (157 messages🔥🔥):
- aider (Paul Gauthier) ▷ #questions-and-tips (101 messages🔥🔥):
- aider (Paul Gauthier) ▷ #links (4 messages):
- Eleuther ▷ #general (36 messages🔥):
- Eleuther ▷ #research (188 messages🔥🔥):
- Eleuther ▷ #multimodal-general (1 message):
- OpenAI ▷ #ai-discussions (138 messages🔥🔥):
- OpenAI ▷ #prompt-engineering (23 messages🔥):
- OpenAI ▷ #api-discussions (23 messages🔥):
- OpenRouter (Alex Atallah) ▷ #announcements (2 messages):
- OpenRouter (Alex Atallah) ▷ #general (112 messages🔥🔥):
- Stability.ai (Stable Diffusion) ▷ #general-chat (95 messages🔥🔥):
- Cohere ▷ #discussions (19 messages🔥):
- Cohere ▷ #questions (25 messages🔥):
- Cohere ▷ #api-discussions (22 messages🔥):
- Cohere ▷ #projects (9 messages🔥):
- Latent Space ▷ #ai-general-chat (58 messages🔥🔥):
- Nous Research AI ▷ #general (39 messages🔥):
- Nous Research AI ▷ #ask-about-llms (4 messages):
- Nous Research AI ▷ #research-papers (6 messages):
- Nous Research AI ▷ #interesting-links (3 messages):
- GPU MODE ▷ #general (7 messages):
- GPU MODE ▷ #triton (3 messages):
- GPU MODE ▷ #torch (19 messages🔥):
- GPU MODE ▷ #beginner (1 message):
- GPU MODE ▷ #pmpp-book (4 messages):
- GPU MODE ▷ #youtube-recordings (3 messages):
- GPU MODE ▷ #torchao (5 messages):
- GPU MODE ▷ #off-topic (5 messages):
- GPU MODE ▷ #rocm (2 messages):
- GPU MODE ▷ #webgpu (1 message):
- GPU MODE ▷ #metal (3 messages):
- GPU MODE ▷ #avx (2 messages):
- LlamaIndex ▷ #blog (5 messages):
- LlamaIndex ▷ #general (49 messages🔥):
- Modular (Mojo 🔥) ▷ #general (13 messages🔥):
- Modular (Mojo 🔥) ▷ #mojo (19 messages🔥):
- Modular (Mojo 🔥) ▷ #max (13 messages🔥):
- Interconnects (Nathan Lambert) ▷ #news (5 messages):
- Interconnects (Nathan Lambert) ▷ #ml-questions (14 messages🔥):
- Interconnects (Nathan Lambert) ▷ #ml-drama (14 messages🔥):
- Interconnects (Nathan Lambert) ▷ #random (6 messages):
- Perplexity AI ▷ #general (31 messages🔥):
- Perplexity AI ▷ #sharing (6 messages):
- Perplexity AI ▷ #pplx-api (2 messages):
- DSPy ▷ #show-and-tell (1 message):
- DSPy ▷ #general (13 messages🔥):
- OpenInterpreter ▷ #general (13 messages🔥):
- LLM Agents (Berkeley MOOC) ▷ #mooc-questions (6 messages):
- LLM Agents (Berkeley MOOC) ▷ #mooc-lecture-discussion (1 message):
- tinygrad (George Hotz) ▷ #general (5 messages):
- tinygrad (George Hotz) ▷ #learn-tinygrad (1 message):
- LangChain AI ▷ #general (3 messages):
- LangChain AI ▷ #langchain-templates (1 message):
- LangChain AI ▷ #share-your-work (1 message):
- Torchtune ▷ #general (4 messages):
- LAION ▷ #general (2 messages):
- LAION ▷ #research (2 messages):
AI Twitter Recap
all recaps done by Claude 3.5 Sonnet, best of 4 runs.
AI and Language Models
- OpenAI's DevDay introduced new features like real-time voice API, vision model fine-tuning, and cost-saving prompt caching. @_philschmid noted a 50% discount on reused tokens.
- Anthropic's Claude 3.5 Sonnet model was highlighted as the current best model by consensus. @alexalbert__ shared this insight from a podcast episode.
- Reka AI Labs announced updates to their Reka Flash model, including improved multimodal capabilities and function calling support. @RekaAILabs detailed the enhancements across image, video, and audio modalities.
- The GOT (Generic OCR Transformer) model was praised for its OCR capabilities. @mervenoyann shared that it achieved a 98.79% accuracy score on a benchmark dataset.
- Discussions around open-source AI models continued, with @ClementDelangue arguing that open-source creates healthy competition and fights against power concentration in AI.
Software Development and Engineering
- @svpino provided a detailed explanation of how Single Sign-On (SSO) works, emphasizing its importance in modern authentication systems.
- The importance of thorough testing in software development was stressed by @svpino, who stated that untested code is essentially non-working code.
- @bindureddy suggested that allowing candidates to use AI tools during interviews is a form of resourcefulness rather than cheating.
- An internal milestone was reported by @bindureddy, where their AI engineer can now look at stack traces, solve issues, and submit pull requests with varying degrees of human intervention.
AI Ethics and Societal Impact
- @ylecun criticized Trump's tax plan, claiming it would lower taxes for the top 5% while increasing taxes for everyone else.
- The appointment of the world's first Minister of AI in France was noted as a historic move by @rohanpaul_ai.
- @RichardMCNgo shared thoughts on the fragility of civilization and the importance of upholding standards and deescalating conflicts in the face of technological pressures.
AI Research and Development
- The Mixture of Experts (MoE) architecture was explained in a visual guide shared by @_philschmid, highlighting its efficiency in parameter usage; a toy routing sketch follows this list.
- A new benchmark called SWE-bench Multimodal was announced by @OfirPress, featuring 617 tasks with images to challenge AI agents in realistic scenarios.
- @rohanpaul_ai shared research on Inverse Painting, which can generate time-lapse videos of the painting process for any artwork.
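For readers who want the routing idea in code, here is a toy sketch of top-k expert gating in plain numpy. The expert and gate weights are random stand-ins invented for illustration, not any production MoE implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_experts, top_k = 16, 4, 2

# Toy stand-ins: each "expert" is a small linear map, plus a linear gating network.
experts = [rng.standard_normal((d, d)) * 0.1 for _ in range(n_experts)]
gate_w = rng.standard_normal((d, n_experts)) * 0.1

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route a token to its top-k experts and mix their outputs by gate weight."""
    logits = x @ gate_w
    top = np.argsort(logits)[-top_k:]                        # indices of the top-k experts
    probs = np.exp(logits[top]) / np.exp(logits[top]).sum()  # softmax over the chosen experts
    return sum(p * (x @ experts[i]) for p, i in zip(probs, top))

token = rng.standard_normal(d)
print(moe_forward(token).shape)  # (16,) -- only 2 of the 4 experts were evaluated
```

The capacity/compute trade-off is visible directly: parameter count grows with `n_experts`, but per-token compute grows only with `top_k`.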
AI Tools and Applications
- @mickeyxfriedman announced that FlairAI now supports generating brand-consistent video advertisements by combining models trained on brand aesthetics and products.
- @_akhaliq shared information about openai-gradio, a Python package for easily creating web apps powered by the OpenAI API.
- @jerryjliu0 discussed using contextual retrieval for better chunking strategies in slide decks, improving question-answering capabilities.
Memes and Humor
- @ylecun joked about periodic AC failures preventing AGI from going rogue for long.
- @karpathy humorously referred to Sydney (likely Bing's chatbot) as the "AI Harambe."
- @lateinteraction made a pun about the GIL-free mode in Python, saying they could write two threads about it, but not in parallel.
AI Reddit Recap
/r/LocalLlama Recap
Theme 1. Energy-Efficient AI: Addition-Based Algorithm Claims 95% Reduction
- A Visual Guide to Mixture of Experts (MoE) (Score: 73, Comments: 7): Mixture of Experts (MoE) is an efficient model architecture that uses multiple specialized neural networks (experts) and a gating network to route inputs to the most appropriate expert. This approach allows for larger models with increased parameter counts while maintaining computational efficiency, as only a subset of experts is activated for each input. MoE architecture has been successfully applied in various domains, including language models like Google's Switch Transformer and Microsoft's Turing-NLG, demonstrating improved performance and scalability compared to traditional dense models.
- Addition is All You Need for Energy-Efficient Language Models: Reduce energy costs by 95% using integer adders instead of floating-point multipliers. (Score: 318, Comments: 65): Researchers propose a novel approach called AdderLM that replaces floating-point multiplications with integer additions in language models, potentially reducing energy consumption by up to 95%. The method, detailed in a paper on arXiv, maintains comparable performance to traditional models while significantly decreasing computational costs and power requirements for AI systems.
- AdderLM's implementation faces challenges as major corporations aren't developing models outside traditional transformer boundaries. The Jamba-1.5 model shows promise for long context sizes but lacks widespread adoption and requires 80GB+ VRAM to run.
- Users debate the performance of Jamba models, with some finding the 398B model underwhelming for its size, while others praise the 1.5 version for handling large context lengths. The lack of easy quantization for local hosting remains an issue.
- The paper's poor grammar raised concerns, but the concept of replacing multiplications with additions intrigued readers. Some speculate this approach could lead to CPU-focused solutions and potentially challenge Nvidia's monopoly if implemented in tools like llama.cpp.
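To make the "replace multiplies with adds" idea concrete, here is a classic bit-arithmetic demonstration; note this is a Mitchell-style logarithmic approximation chosen for illustration, not the paper's exact algorithm. For positive floats, adding IEEE-754 bit patterns approximates multiplication because the exponent field behaves like a log, at the cost of a few percent of relative error:

```python
import struct

def f2i(x: float) -> int:
    """Reinterpret a float32's bits as an unsigned 32-bit integer."""
    return struct.unpack("<I", struct.pack("<f", x))[0]

def i2f(i: int) -> float:
    """Reinterpret a 32-bit integer as a float32."""
    return struct.unpack("<f", struct.pack("<I", i & 0xFFFFFFFF))[0]

ONE = 0x3F800000  # bit pattern of 1.0f

def approx_mul(x: float, y: float) -> float:
    """Approximate x*y for positive floats with a single integer addition."""
    return i2f(f2i(x) + f2i(y) - ONE)

for x, y in [(1.5, 2.25), (3.1, 0.7), (10.0, 0.125)]:
    print(f"{x} * {y} = {x * y:.4f}, approx = {approx_mul(x, y):.4f}")
```

An integer add is far cheaper in silicon than a floating-point multiply, which is where the claimed energy savings come from.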
Theme 2. Zamba 2: New Mamba-based Models Outperform Larger Competitors
- Zamba 2 2.7B & 1.2B Instruct - Mamba 2 based & Apache 2.0 licensed - beats Gemma 2 2.6B & Mistral 7B Instruct-v0.1 (Score: 125, Comments: 30): Zamba 2, a Mamba 2-based model with 2.7B and 1.2B parameter versions, outperforms Gemma 2 2.6B and Mistral 7B Instruct-v0.1 in benchmarks, as shown in the provided images. The models, available on Hugging Face under an Apache 2.0 license, are accessible at Zamba2-2.7B-instruct and Zamba2-1.2B-instruct, though support for llama.cpp is pending.
- Where do you actually rank LLaMA 3.2 405B among the big boys? (Score: 56, Comments: 58): The post compares the performance of several leading large language models, including LLaMA 3.1 405B, Gemini 1.5 Pro, GPT-4, Claude 3.5 Sonnet, Grok 2, Mistral Large 2, Qwen 110B, Deepseek 2.5, and Command R+. The author seeks to understand where LLaMA 3.1 405B ranks among these "big boys" in terms of performance and capabilities.
- Claude 3.5 Sonnet and GPT-4 variants consistently rank highly for reasoning and performance, with Claude 3.5 Sonnet often placed in the top 3. Users report mixed experiences with GPT-4o, some finding it excellent while others describe it as "overcooked" or frustrating to use.
- LLaMA 3.1 405B is generally ranked in the top 5 models, with some users placing it above Mistral Large 2. It's noted for being "absurdly hard to run" but performs well in long-context tasks and general use.
- The recent update to Gemini 1.5 Pro has significantly improved its performance, with users now ranking it alongside top models. It excels in long-context tasks, handling up to 100k tokens effectively, making it particularly useful for legal documentation and other extensive text processing.
Theme 3. Open WebUI 0.3.31: New Features Rivaling Commercial AI Providers
- Try my open-source browser assistant that works with local models. (Score: 64, Comments: 21): The post introduces an open-source browser assistant that works with local LLM models, offering predefined prompts and custom options. The extension supports various websites including YouTube, Reddit, Slack, Gmail, X, Telegram, and GitHub, and operates 100% locally with page data sent directly to the selected assistant through a background process running on port 8080 by default. The extension is available for Firefox and Chrome, with links provided to the GitHub repository and browser extension stores.
- The extension operates 100% locally with no telemetry or account required. It supports custom endpoints for various AI models and can work with locally run Open WebUI.
- Users expressed interest in the YouTube transcription functionality, which distills the transcript into timestamped segments every 30 seconds. The developer clarified that the minimum supported Firefox version is currently set to 129.
- Discussion around compatibility with LM Studio revealed limitations, as the extension can only work within the browser. The developer recommended using Open WebUI for web-based tasks and LM Studio for other purposes.
- Open WebUI 0.3.31 adds Claude-like ‘Artifacts’, OpenAI-like Live Code Iteration, and the option to drop full docs in context (instead of chunking / embedding them). (Score: 484, Comments: 80): Open WebUI 0.3.31 introduces several new features, including Claude-like 'Artifacts' for live rendering of HTML, CSS, and JS in a resizable window, a Svelte Flow interface for chat branch navigation, and a "full document retrieval" mode allowing entire documents to be loaded into context without chunking. The update also adds editable code blocks with live updates in Artifacts and an ask/explain feature for LLM responses, bringing Open WebUI closer to features offered by commercial AI providers.
- Open WebUI 0.3.31 introduces live rendering of HTML, CSS, and JS in a resizable window, which users find "1000x better than chatgpt UI". The update also includes the ability to run Python code in the UI.
- A user demonstrated the new features by generating a landing page for a cat library using L3.1 8B zero-shot. The prompt "Build me a landing page for a cat library" produced a basic but functional design.
- Users expressed excitement about the update and inquired about upcoming features in version 0.4. A public milestone suggests further improvements, though some features were released earlier than expected.
Theme 4. AntiSlop Sampler: Reducing Repetitive Language in LLM Outputs
- Prompt-Writing Burnout? How Do You Cope? (Score: 79, Comments: 87): Prompt-writing burnout is described as an all-consuming cycle of crafting, refining, and testing prompts, with the author estimating they've written "a thousand pages" worth of content. The poster experiences fluctuating success rates with their prompts, leading to frequent revisions and occasional complete restarts. To cope with this fatigue, they've found relief in taking breaks, going for walks, and playing video games like Helldivers and Valheim, as suggested by AI, but are seeking additional strategies from the community.
- AntiSlop Sampler gets an OpenAI-compatible API. Try it out in Open-WebUI (details in comments) (Score: 120, Comments: 46): The AntiSlop Sampler, a tool for reducing repetitive language in AI-generated text, now has an OpenAI-compatible API. This update allows users to integrate AntiSlop Sampler into applications that support OpenAI's API, potentially improving the quality of AI-generated content by reducing redundancy and repetition. The new feature can be tested in Open-WebUI, with further details provided in the comments of the original post.
- Users expressed interest in the AntiSlop Sampler's implementation, with discussions about its multilingual capabilities and potential integration with other backends like llama.cpp and ExllamaV2. The developer provided a GitHub link for computing slop phrases.
- The project creator shared detailed setup instructions for running AntiSlop Sampler with Open-WebUI, including installation steps and configuration settings. Users can adjust the slop phrase probabilities in a JSON file to customize the tool's behavior.
- Some users reported mixed results when testing the tool, with concerns about coherence loss in generated text. The developer addressed these issues, suggesting adjustments to the strength parameter and providing benchmark comparisons between baseline and AntiSlop-enhanced models.
Theme 5. Optimizing AI Agents: DSPy and Argilla for Improved Search and Prompts
- Optimizing Prompt Usage for Search Agent with DSPy and Argilla (Score: 108, Comments: 2): The post describes optimizing an ArXiv agent using DSPy, Langchain tools, and Argilla to improve its ability to search and answer questions from scientific papers. The author used DSPy's AvatarOptimizer to enhance prompt structuring for the ArXiv API, resulting in more efficient and accurate information extraction, and evaluated the improvements using Argilla's UI for detailed response review. The optimized agent demonstrated better understanding of questions and more relevant information extraction from ArXiv, with the example notebook available at GitHub.
- Try my open-source browser assistant that works with local models. (Score: 64, Comments: 21): The open-source browser assistant, Taaabs, works with local LLMs and offers predefined prompts along with custom options for various websites including YouTube, Reddit, Slack, Gmail, and GitHub. The extension operates 100% locally, sending page data directly to the selected assistant through a background process, with OpenWebUI running on port 8080 by default, and supports a vision mode for image analysis. Users can install Taaabs from the GitHub repository or download it for Firefox and Chrome browsers through provided links.
- Users expressed enthusiasm for Taaabs, with questions about data privacy, Firefox compatibility, and YouTube transcription. The developer confirmed 100% local processing, no account requirement, and distilled transcripts every 30 seconds.
- The extension offers flexibility in AI model selection, including predefined chatbots and custom endpoints. Users can set up local instances with Open WebUI or use external APIs like Groq for prioritizing speed.
- Some users encountered issues with LM Studio integration and the new tab override feature. The developer addressed these concerns, promising to remove the new tab functionality in the next update and clarifying that LM Studio, as a standalone app, isn't directly compatible with browser extensions.
Other AI Subreddit Recap
/r/machinelearning, /r/openai, /r/stablediffusion, /r/ArtificialInteligence, /r/LLMDevs, /r/Singularity
AI Model Releases and Improvements
- Salesforce's "tiny giant" xLAM-1b model surpasses GPT 3.5 in function calling: Salesforce released xLAM-1b, a 1 billion parameter model that achieves 70% accuracy in function calling, surpassing GPT 3.5. It is dubbed a "function calling giant" despite its relatively small size.
- Phi-3 Mini (June) with function calling: Rubra AI released an updated Phi-3 Mini model in June with function calling capabilities. It is competitive with Mistral-7b v3 and outperforms the base Phi-3 Mini.
- Microsoft/OpenAI crack multi-datacenter distributed training: According to analyst Dylan Patel, Microsoft and OpenAI have achieved multi-datacenter distributed training, potentially enabling more efficient large-scale model training.
AI Research and Techniques
- Inverse Painting generates time-lapse videos of painting process: A new technique called Inverse Painting can generate time-lapse videos showing the painting process for any artwork, learning from diverse drawing techniques.
- MonST3R estimates geometry in presence of motion: Researchers developed MonST3R, an approach for estimating 3D geometry in scenes with motion, which could improve 3D reconstruction from video.
- New LLM sampling method may reduce hallucinations: Engineers are evaluating a new sampling method for LLMs based on entropy that could reduce hallucinations and allow for dynamic inference-time compute similar to OpenAI's O1 model.
AI Capabilities and Impact
- AI images taking over Google search results: A post shows AI-generated images increasingly appearing in Google image search results, highlighting the growing prevalence of AI content online.
- Rapid AI progress predicted by Max Tegmark: AI researcher Max Tegmark states that significant AI advancements will occur in the next 2 years, making long-term planning difficult and potentially "blowing our minds".
- Accelerating rate of change compared to history: A post compares the rate of technological change today to historical periods, arguing that change is accelerating rapidly compared to previous centuries.
AI Image Generation Techniques
- File path prompts for realistic photo generation: Users discovered that including Windows file paths in prompts (e.g. "C:\Users\name\Pictures\Photos\") can produce more realistic-looking AI-generated photos.
- Generating image, 3D, and video from sketches: A demo shows generating image, 3D model, and video from a single sketch input using AI in ComfyUI.
- 90s Asian photography style: A user shared AI-generated images mimicking 90s Asian photography styles, demonstrating the ability to replicate specific aesthetic periods.
AI Discord Recap
A summary of Summaries of Summaries, brought to us by O1-preview
Theme 1. Cutting-Edge AI Models Unveiled and Explored
- Nvidia Doubles Down with Llama-3.1-Nemotron-51B: Nvidia launched the Llama-3.1-Nemotron-51B, a NAS-optimized model achieving 2x throughput on a single H100 GPU while maintaining accuracy. Users can experiment with the model via the API at Nvidia AI or download it from Hugging Face.
- Meta Tracks 70k Points with CoTracker 2.1: Meta released CoTracker 2.1, enhancing video motion prediction by jointly tracking 70,000 points on a single GPU. The accompanying paper detailing these advancements is available here.
- Google Merges Models Up to 64B Parameters: A Google intern's research explores model merging at scale, combining language models up to 64B parameters. The study addresses questions about performance and generalization when merging large models, raising both excitement and skepticism in the community.
Theme 2. Nobel Prize Controversy: AI Meets Physics
- Hinton and Hopfield Bag Nobel, Physics Community Reacts: The 2024 Nobel Prize in Physics was awarded to Geoffrey Hinton and John J. Hopfield for their work on artificial neural networks, sparking debates. Critics argue the award may dilute the prestige of the prize by prioritizing AI over traditional physics achievements.
- Physicists Question Nobel's Focus on AI: Members in physics forums express frustration, suggesting that awarding AI work in physics overlooks more deserving physics research. Some see it as a sign of hype overshadowing impactful science.
- AI Ethics Discussed at Nobel Level: The Royal Swedish Academy of Sciences shifts focus to include AI ethics and safety, indicating a broader consideration of AI's impact. This move reflects societal concerns about the intersection of AI and traditional sciences.
Theme 3. Fine-Tuning Frenzy and Optimization Obstacles
- Unsloth Studio Aims to Simplify Fine-Tuning: Anticipation builds for the release of Unsloth Studio, expected to streamline the fine-tuning process on Windows without complex setups like Docker. Users express frustration over current difficulties and hope for a seamless installer experience.
- Aider Users Demand Control Over Auto-Commits: Developers request that Aider prompts for commit confirmations instead of auto-committing code changes. Clarity on cost estimations and better labeling in the interface are also hot topics among users seeking more control.
- LM Studio 0.3.4 Boosts Mac Performance with MLX: The release of LM Studio 0.3.4 introduces an MLX engine for Apple Silicon Macs, offering 10-50% speed improvements. Users note enhanced efficiency, especially when running larger models.
Theme 4. GPU Gossip: Hardware Headaches and Hints
- GPU Showdown: Tesla P40 vs. RTX 4060 Ti Sparks Debate: Members weigh the pros and cons of a Tesla P40 with 24GB VRAM against an RTX 4060 Ti with 16GB VRAM. While the P40 offers more memory, concerns include slower performance and limited inference capabilities compared to the 4060 Ti.
- NVIDIA vs. AMD: Performance Disparities Discussed: Users agree that combining an RTX 3060 with an RX 6600 leads to inefficiencies, advocating for sticking with NVIDIA GPUs for better speed and compatibility. Dual 3060s might increase VRAM but won't significantly boost processing speed.
- HBM and SRAM Scaling Scrutinized: Skepticism arises over HBM's cost-effectiveness, with discussions highlighting that it constitutes a significant portion of devices like the H100. Issues with SRAM scaling not keeping pace with logic scaling are also noted, pointing to potential design oversights.
Theme 5. AI Tools and APIs: User Triumphs and Trials
- Cohere API Charms Developers with Simplicity: New users praise the Cohere API for its ease of use, enabling multi-tool agent setups with minimal code. The introduction of Dark Mode also excites users, enhancing the developer experience.
- OpenRouter Saves Costs with Prompt Caching: OpenAI prompt caching on OpenRouter enables up to 50% savings on inference costs. Users can audit their savings on the activity page, and the feature currently supports eight OpenAI models.
- Anthropic's Message Batches API Offers Bulk Processing: Anthropic introduces the Message Batches API, allowing up to 10,000 queries processed asynchronously within 24 hours. While some users appreciate the cost-effectiveness, others voice concerns about response delays.
PART 1: High level Discord summaries
HuggingFace Discord
- Nvidia's Llama-3.1-Nemotron-51B Launch: Nvidia introduced Llama-3.1-Nemotron-51B, a NAS-optimized model achieving 2x throughput on a single H100 GPU while maintaining accuracy.
- Users can experiment with the model via the API at Nvidia AI or download it from Hugging Face.
- Meta Enhances Video Motion Prediction: Meta released CoTracker 2.1, capable of tracking 70k points on a single GPU, improving on motion prediction abilities.
- The accompanying paper details the advancements and can be found here.
- Hugging Face Accelerate 1.0 Features: Hugging Face launched Accelerate 1.0, introducing new features aimed at optimizing model training processes.
- Users can explore the announcement in greater detail by visiting the announcement blog.
- LLMs Bound by Training Scope: Members highlighted that LLMs like GPT-2 and GPT-3 are confined to their training distribution, limiting their ability to solve unfamiliar problems.
- While they can assist in various tasks, they lack true understanding and independent output filtering.
- Importance of Tokenizer Accuracy: Discussions confirmed the need for using the correct tokenizer specific to models, as mismatched ones yield ineffective outcomes.
- Efficiency increases as many models share tokenization approaches, making it a critical aspect for developers.
LM Studio Discord
- LM Studio 0.3.4 enhances Mac performance: The release of LM Studio 0.3.4 introduces an MLX engine for improved on-device LLMs on Apple Silicon Macs, allowing for simultaneous model execution and structured JSON responses.
- Users report 10-20% speed boosts for larger models and up to 50% for smaller ones when using MLX, distinguishing it from previous versions.
- Auto-update confusion plagues users: Users expressed frustration over version 0.3.4 not being available through auto-update, necessitating manual downloads from the site, which has led to bugs in existing workflows.
- This unintended migration of chats has resulted in mixed experiences, highlighting the transition difficulties faced by users.
- Debate on GPU VRAM advantages: In the ongoing discussion of VRAM Options, members evaluated the benefits of a Tesla P40 with 24GB versus an RTX 4060 Ti with 16GB, emphasizing the P40's memory but noting its slower performance.
- Concerns arose regarding the P40's limited inference applications compared to the more versatile 4060 Ti.
- Performance disparities: NVIDIA vs AMD: The group concurred that using an RTX 3060 in tandem with an RX 6600 leads to inefficiencies, advocating for a dedicated NVIDIA setup for optimal speed.
- One member highlighted that dual 3060s could increase VRAM but might not improve processing pace effectively.
- User experiences reveal hardware limitations: In discussions around Stable Diffusion, users noted considerable limitations concerning VRAM usage with different models, pointing out the impact on processing speeds.
- Concerns were raised regarding the viability of running newer models efficiently on current hardware setups, particularly when comparing high-end GPUs.
Unsloth AI (Daniel Han) Discord
- Anticipation Builds for Unsloth Studio Launch: Users eagerly await the release of Unsloth Studio, which promises to simplify the fine-tuning process on Windows while skipping complicated setups like Docker.
- Frustration surfaced over Docker and GPU driver setups, driving hope for a smooth experience with an installer.
- Fine-Tuning LLMs for Content Moderation Explored: A proposal for fine-tuning an LLM for content moderation was made, targeting a dataset of 50k entries focused on short texts.
- Suggestions pointed to Llama Guard and Gemma Shield as possible tools for effective classification.
- Unpacking Model Merging Strategies: Participants discussed a new paper on model merging at scale, emphasizing methodologies across various model sizes and configurations.
- Skepticism arose regarding the practicality of merging larger models, amidst concerns highlighted in previous leaderboards.
- Performance Questions on Inference Methods: Users raised queries about whether vllm's inference competes effectively against Unsloth on consumer hardware.
- A need for clarity emerged, weighing setup effort versus performance gains in the community discussions.
- Colab Resources for Training Models Highlighted: A member shared a link to a Colab notebook designed to assist with ShareGPT and Llama training, which received positive feedback.
- This resource helped alleviate some prior frustrations, aiming to streamline the training process for users.
aider (Paul Gauthier) Discord
- Aider prompts for Commit Confirmation: Users want Aider to prompt for commit confirmation instead of auto-committing after coding, along with clearer labeling of estimated costs in the interface.
- Many believe disabling auto-commits could enhance control over code changes, while the management of costs remains a critical topic.
- Embeddings fuel Semantic Search: Discussion revealed that embeddings play a key role in semantic search, aiding LLMs in retrieving relevant documents based on vector representations; a minimal retrieval sketch follows this section's bullets.
- Maintaining consistent embeddings across platforms is crucial to prevent relevance loss in document retrieval.
- Python 3.13 makes waves: Python 3.13 is out, featuring a better REPL and support for mobile platforms, signaling broader accessibility efforts.
- The release also includes the introduction of an experimental JIT compiler, which could optimize performance significantly.
- Podcasting with AI using NotebookLM: A member detailed their experience with Google NotebookLM to create an episode about the SmartPoi project, sharing their overview episode.
- Despite some content confusion, the AI-generated podcast was convincing enough for family members to believe it was authentic.
- Introducing Message Batches API: The introduction of the Message Batches API by Anthropic was praised as a cost-effective solution for processing large queries asynchronously.
- While some raised concerns about response delays, others saw its potential for generating training data more efficiently.
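As a concrete illustration of the retrieval idea, here is a minimal sketch with a toy hashing "embedder" standing in for a real embedding model (which in practice would be a model or API call). Because the vectors are unit-norm, a dot product is cosine similarity:

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy stand-in for a real embedding model: hash words into a fixed vector."""
    v = np.zeros(dim)
    for word in text.lower().split():
        v[hash(word) % dim] += 1.0
    return v / (np.linalg.norm(v) + 1e-9)  # unit norm: dot product = cosine similarity

docs = [
    "Aider auto-commits code changes after each edit",
    "Python 3.13 ships an experimental JIT compiler",
    "Embeddings map text into vectors for semantic search",
]
doc_vecs = np.stack([embed(d) for d in docs])

query = embed("semantic search with vectors")
scores = doc_vecs @ query            # one cosine score per document
print(docs[int(np.argmax(scores))])  # retrieves the embeddings document
```

This also shows why consistency matters: use a different `embed` function for queries than for documents and the scores become meaningless, which is exactly the relevance loss noted above.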
Eleuther Discord
- Controversy Brews Over AI Nobel Winners: Debate ignites within the physics community about the appropriateness of awarding the Nobel Prize in Physics to Hinton and Hopfield for their AI work, raising concerns about hype overshadowing impactful research.
- Members argue significant recognition should prioritize traditional physics achievements, and a prize for neural networks may dilute the award's prestige.
- Exciting advances in Normalized Transformer: The new nGPT architecture introduces a hypersphere representation that normalizes vectors, claiming a 20x boost in training efficiency through enhanced representation learning.
- This approach could potentially streamline the learning process by maintaining unit norm vectors at each layer, optimizing training dynamics.
- Model Merging Performance Scrutiny: A new study on model merging from Google explores performance implications for large-scale models, examining scalability issues up to 64B parameters; a toy weight-averaging sketch follows this section's bullets.
- Key findings address common questions about held-in performance, raising awareness about performance inconsistencies when merging models beyond conventional boundaries.
- Generative Reward Models Gain Traction: Research emphasizes the significance of Generative Reward Models, which combine human and AI feedback to enhance LLM training performance.
- Discussions on implementation underscore the necessity of reasoning in decision-making within AI systems to achieve effective post-training performance.
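For intuition about what "merging" means mechanically, here is a minimal sketch of the simplest baseline in this line of work, a weighted average of matching parameters. The two-tensor state dicts are invented for illustration; the paper studies such merges at far larger scale and with more methods than plain averaging:

```python
import numpy as np

def merge(state_dicts, weights=None):
    """Parameter-space merging: a weighted average of matching tensors."""
    if weights is None:
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    return {
        k: sum(w * sd[k] for w, sd in zip(weights, state_dicts))
        for k in state_dicts[0]
    }

sd_a = {"layer.weight": np.ones((2, 2)), "layer.bias": np.zeros(2)}
sd_b = {"layer.weight": np.full((2, 2), 3.0), "layer.bias": np.ones(2)}
print(merge([sd_a, sd_b])["layer.weight"])  # every entry is 2.0
```

The open question the study probes is whether this kind of averaging keeps paying off as the merged models grow toward tens of billions of parameters.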
OpenAI Discord
- AI for Document Categorization Enthralls Users: Members discussed the potential for an AI to categorize documents effectively, despite skepticism regarding current capabilities that make manual organization sometimes preferred.
- They proposed several tools that could handle large file collections, leading to an interesting debate over how to manage extensive datasets efficiently.
- Cloud Costs vs Local AI Analysis: Concerns about AI costs emerged, particularly with cloud analysis for 18,478 files estimated to reach around $12,000.
- Members weighed the server expenses for cloud solutions against the costs associated with local hardware, debating the best route for data analysis.
- AVM and Multi-modal AI Capabilities Excite Engineers: Discussions around AVM highlighted the exciting convergence of multi-modal AI technologies, pointing out how it could significantly alter user interactions.
- Members expressed anticipation for upcoming features that might enhance the functionality of AVM tools.
- Prompt Leaderboards Ignite Debate: The possibility of a leaderboard for prompts sparked humorous discussions about how to objectively score prompt effectiveness.
- Questions arose regarding the feasibility and methods for maintaining consistency in prompt evaluations across varied outputs.
- Success with Gemini Advanced Prompts: A member reported consistent success with a well-crafted prompt for Gemini Advanced, generating high-quality responses across different interactions.
- They were reminded about community guidelines, stressing the necessity of adhering to regulations regarding discussions of other AIs.
OpenRouter (Alex Atallah) Discord
- OpenAI Prompt Caching Launch Hits the Mark: Last week, OpenAI prompt caching was launched, enabling significant cost savings on inference costs, potentially up to 50%. It works seamlessly with 8 OpenAI models and integrates with providers like Anthropic and DeepSeek.
- Users can audit their savings from caching on the openrouter.ai/activity page, with details on benefits viewable through the `/generation` API.
- Double Generation Issues Disrupt the Flow: Users reported experiencing double generation per request in OpenRouter, stirring a discussion on potential setup issues and timeout management. Recommendations surfaced to increase timeouts for better performance.
- While some attributed the issue to their configurations, collective feedback indicated a need for further troubleshooting.
- Anthropic API Moderation Battle: A user faced challenges with Claude 3.5 Sonnet moderation, discovering that the `:beta` endpoint might alleviate some imposed moderation issues. The standard endpoint enforces mandatory moderation, while the beta option allows for self-moderation.
- This raised important questions about best practices when working with Anthropic APIs under varied conditions.
- Insights into Provider Selection for Efficiency: Members exchanged strategies on how to effectively route requests to specific providers, particularly Anthropic, to mitigate rate limit errors. Default load balancing options and manual provider pinning were highlighted as viable alternatives.
- This sparked queries on optimizing request handling further to prevent disruptions.
- Frequency of 429 Errors Raises Eyebrows: Concerns about frequent 429 errors while using Sonnet prompted discussions about resource exhaustion and suggested avoiding fallback options directing traffic to Anthropic instead. Users emphasized the necessity of maintaining consistent API access.
- This touches upon the need for robust error handling and rate management strategies in high-traffic scenarios.
Stability.ai (Stable Diffusion) Discord
- GPU Showdown: RX 6900 XT vs RTX 4070: Users discussed GPU performance, comparing the RX 6900 XT to the RTX 4070 and highlighting that AMD cards may lag due to CUDA dependencies.
- VRAM emerged as crucial, with most recommending Nvidia cards for better efficiency and fewer memory issues during image generation.
- Styling Images with Inpainting Techniques: Discussion erupted around inpainting techniques for applying specific styles to images, using methods like ipadapter and ControlNet.
- Members urged sharing images for improved feedback on style transfers without altering original elements.
- ControlNet Models Gain Attention: A user’s inquiry about ControlNet models led to a shared GitHub link offering insights and examples.
- The shared resource emphasized controlling diffusion models, making it easier to grasp with visual aids.
- Automatic1111 UI Confusions for Newbies: New users flooded the chat with queries about the Automatic1111 UI, seeking setup support and optimal configurations.
- Suggestions included exploring the Forge WebUI as a potential fix for common Automatic1111 issues.
- Community Rallies for Image Generation Help: Members actively sought assistance regarding various aspects of image generation using Stable Diffusion, discussing workflow optimizations.
- There was a strong emphasis on community support, particularly for troubleshooting challenges like local connection issues.
Cohere Discord
- Cohere API charms new users: A new member raved about the Cohere API, emphasizing its simplicity for setting up a multi-tool agent with minimal code.
- Developer experience is a big factor for them while integrating AI into their team's workflow.
- Dark Mode excitement buzzes: Users expressed enthusiasm over Cohere's new Dark Mode, leading to lively chatter within the channel.
- The introduction of this feature was a welcomed change that many noted enhances user experience.
- Concerns arise over data retention: Users inquired about restricting Cohere from storing user prompts, leading to discussions on the data retention settings.
- A member provided a link detailing how to opt out, emphasizing the importance of data privacy.
- Fine-Tuning with extensive examples: One member shared that they used 67,349 examples in fine-tuning, splitting them into batches of 96 for the API due to restrictions.
- Their remark "Not sure if this was the right way to go about it or not" echoed their uncertainty regarding the process.
- Rerank API struggles with data: A user noted that the Rerank API was not returning documents as expected when using the Python SDK, particularly with the 'return_documents: True' parameter.
- Testing via Thunder Client indicated a possible bug in the SDK, leading to further investigation.
Latent Space Discord
- Voice Mode Woes: Members reported frustrations with advanced voice mode; reinstalling the app on iOS fixed their issue, but not on Mac OS.
- One member mentioned that this mode is time-limited, with shorter responses leading to a feeling of inefficacy.
- Hinton and Hopfield Claim Nobel Glory!: John J. Hopfield and Geoffrey E. Hinton won the 2024 Nobel Prize in Physics for pivotal work in machine learning.
- Discussions arose questioning the intersection of machine learning and physics, reflecting skepticism about recognizing AI contributions.
- Anthropic's Cost-Effective API: Anthropic launched the Message Batches API, allowing up to 10,000 queries for asynchronous processing within 24 hours.
- A member noted its similarities to OpenAI’s batching, hinting at the growing competitive landscape.
- Salesforce's Generative UX Takes Flight: Salesforce introduced the Generative Lightning UX, which aims to dynamically tailor enterprise app layouts to user needs.
- Currently in pilot phase, Salesforce is actively seeking user feedback ahead of the anticipated 2025 release.
- Cursor Tips Uncovered at Weights & Biases: Insights from a Cursor tips & tricks meeting at Weights & Biases emphasized sharing effective usage strategies among teams.
- A follow-up thread was initiated for deeper discussions on these helpful tricks.
Nous Research AI Discord
- Knowledge Graphs amplify LLM capabilities: A recent demo highlighted a knowledge graph that integrates with LLMs, showcasing its potential benefits and leaving attendees eager for practical applications.
- Discussions focused on augmenting Transformers for compatibility with these graphs without flattening, emphasizing the need to retain structured data.
- OpenAI introduces o1 reasoning system: OpenAI released their new reasoning system, o1, building on models like Q* and promising online search capabilities.
- Despite its promise, it's currently a prototype with inference scaling laws indicating high processing costs.
- Diff Transformer improves attention mechanisms: The Diff Transformer employs a differential attention mechanism that amplifies focus on relevant context while cancelling noise, enhancing performance in long-context modeling; a single-head sketch follows this section's bullets.
- This approach is particularly effective in hallucination prevention, outperforming traditional models in specific applications.
- Google's insights on large-scale model merging: Research from Google investigates model merging at large scales with experiments on language models up to 64B parameters, sharing findings via arXiv.
- The study raises questions about the generalization and longevity of performance benefits from merging larger models.
- Interest in free text-to-video models: A user asked about the availability of free text-to-video models, animated or otherwise, with AnimateDiff mentioned as a possible resource.
- The community expressed a desire to gather more insights on this topic, seeking contributions from fellow members.
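A minimal single-head sketch of the differential attention idea, assuming the published formulation (two softmax attention maps whose difference attends); λ is a fixed constant here, whereas the paper learns it and adds per-head normalization:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def diff_attention(x, wq1, wk1, wq2, wk2, wv, lam=0.5):
    """Differential attention: subtract a second attention map to cancel noise."""
    d = wq1.shape[1]
    a1 = softmax((x @ wq1) @ (x @ wk1).T / np.sqrt(d))  # primary attention map
    a2 = softmax((x @ wq2) @ (x @ wk2).T / np.sqrt(d))  # "common-mode noise" map
    return (a1 - lam * a2) @ (x @ wv)

rng = np.random.default_rng(0)
n, d = 8, 16
x = rng.standard_normal((n, d))
wq1, wk1, wq2, wk2, wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(5))
print(diff_attention(x, wq1, wk1, wq2, wk2, wv).shape)  # (8, 16)
```

The subtraction plays the role of a differential amplifier: attention mass that both maps assign (background noise) cancels, sharpening what remains.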
GPU MODE Discord
- Inference Optimisation Adventure Begins: A new user expressed a desire to begin their inference optimisation journey using Triton and CUDA-based optimisations, which reflects growing interest in advanced engine optimisations.
- It's essential for newcomers to tap into community knowledge for successful navigation in this area.
- Skepticism Around HBM Effectiveness: HBM remains a significant cost factor for devices like the H100, sparking discussions about its utility and comparative energy efficiency with LPDDR5.
- The community is evaluating if the benefits justify the costs, especially regarding power consumption.
- SRAM Scaling Issues Emerge: Community members highlighted that SRAM scaling has not kept pace with logic scaling, surprising contributors from firms like Graphcore.
- Concerns were voiced about design oversights dating back to 2015.
- Exploring GPU Acceleration for DataLoaders: A lively discussion established that DataLoaders could be accelerated on GPUs, but challenges with multiprocessing appear to hinder performance.
- Less reliance on multiprocessing could potentially enhance GPU efficiency.
- INT8 Mixed Precision Yields Performance Boost: INT8 mixed precision training delivered a 1.7x speedup on a 4090 GPU, potentially rivaling A100 performance without tradeoffs.
- Further experiments are encouraged to validate these results.
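To see why INT8 matmuls can stand in for FP32 ones, here is a minimal sketch of symmetric per-tensor quantization with an int32 accumulator; real INT8 mixed-precision training (as in libraries like torchao) layers per-channel scales and gradient handling on top of this idea:

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor quantization: int8 values plus one float scale."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 8)).astype(np.float32)
w = rng.standard_normal((8, 4)).astype(np.float32)

qa, sa = quantize_int8(a)
qw, sw = quantize_int8(w)

# Multiply in int8 (accumulating in int32), then dequantize with the product of scales.
y_q = qa.astype(np.int32) @ qw.astype(np.int32)
y = y_q.astype(np.float32) * (sa * sw)

print(np.abs(y - a @ w).max())  # small quantization error
```

The speedup comes from the hardware: int8 throughput on modern tensor cores is a multiple of FP16/FP32 throughput, so trading a little precision buys a lot of speed.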
LlamaIndex Discord
- LlamaIndex Hackathon Launches: The second-ever LlamaIndex hackathon starts this Friday for #SFTechWeek, offering over $12,000 in cash prizes for innovators.
- Participants can sign up and gain insights on building complex multi-agent systems here.
- LlamaParse Premium Rises to the Occasion: LlamaParse premium is positioned as a powerful document parser tailored for context-augmented LLM applications, adept at handling complex documents.
- Its capability to process interleaved scanned documents and multi-table Excel sheets is well detailed in this link.
- Oracle Integrates with New Capabilities: A big update reveals that Oracle has added four new integrations: data loader, text splitter, embeddings, and vector search.
- Documentation on these tools highlights their capabilities, especially the data loader's functionalities.
- Docstore Supports Chunks and Full Documents: Members confirmed that the docstore is capable of accommodating both chunks and full documents, as they operate under the same class.
- cheesyfishes highlighted its adaptability, proving beneficial for varied storage needs.
- Contextual Retrieval and Metadata Enrichment: Insights emerged on contextual retrieval from Anthropic, emphasizing the importance of metadata and chunk enrichment to enhance model interactions.
- The discussion indicated potential in leveraging prompt caching to bolster scalability moving forward.
Modular (Mojo 🔥) Discord
- Mojo makes it to the TIOBE Top 50!: The October 2024 TIOBE index showcases Mojo climbing into the top 50 programming languages, emphasizing its appeal as a fast and secure language.
- Members noted Mojo's rapid rise within a year, attracting attention away from more established languages like Python.
- Mojo Keywords Need Clarity: Discussions emerged over re-evaluating keywords like 'inout' and 'borrowed' for Mojo to enhance clarity in the references subsystem, linked to a GitHub proposal.
- Participants echoed that clearer keyword conventions could significantly aid beginners in navigating the language.
- WebAssembly vs JavaScript Controversy: A debate sparked over whether WebAssembly can replace JavaScript for DOM access, with varying opinions from the community emphasizing need for improved garbage collection.
- The discussion revealed an ongoing interest in the efficiency of using WebAssembly and highlighted potential shortcomings in current execution models.
- Max Inference Engine Cry for Help!: A user reported problems using the max inference engine on their Intel NUC, particularly through TorchScript and ONNX, until they switched to a version earlier than 2.4.
- This resolution encouraged more users to examine their version compatibility to prevent similar issues.
- Graph Compilation Times Into Question: Concerns about lengthy graph compilation for multiple tensor operations emerged, estimating around 400-500 ms for completion.
- Discussions proposed creating reusable operations, like a generic reshape, as a method to streamline the graph creation process.
Interconnects (Nathan Lambert) Discord
- Nobel Prize Awarded for Neural Networks: The 2024 Nobel Prize in Physics was awarded to John J. Hopfield and Geoffrey E. Hinton for their foundational work on artificial neural networks. This recognition emphasizes their pivotal contributions to machine learning.
- The community expressed feelings of wholesomeness regarding this honorable acknowledgement.
- OpenAI Secures Independent Compute Power: OpenAI is securing its own compute capacity through data center agreements with Microsoft competitors due to slow response times from Microsoft, according to CFO Sarah Friar. This move is viewed as spicy but unsurprising given Microsoft’s trust issues.
- One alternative strategy discussed includes the implications of these agreements on OpenAI's autonomy in a competitive market.
- 8B Model Outperforms 11B for Text: The 8B model is reportedly more effective in text-only tasks compared to its 11B Vision counterpart, designed primarily for images. Users noted that all the additions are for handling images, indicating a trade-off in text performance.
- The community is curious about how such performance discrepancies will affect future model development.
- Growing Importance of Explainability in AI: A blog post highlighted the escalating significance of explainability in large language models (LLMs) as they evolve from individual task performance to complex system-level productivity. This need for auditable reasoning keeps gaining traction in discussions surrounding AI accountability.
- As models become more complex, establishing transparency is crucial for fostering user trust and understanding in AI applications.
- Sampling Insights and Industry Perceptions: Participants discussed that many large companies perceive sampling methods as a black box, focusing largely on beam/nucleus techniques with inadequate exploration of alternatives. This has raised concerns among Bayesians regarding the quality of sampling methods currently used.
- There is a call for better sampling techniques and a broader exploration of the landscape beyond dominant methods.
Perplexity AI Discord
- Discord Experience Issues Cause Frustration: Members expressed frustration over being removed from Discord, questioning if it's a psyop, while others highlighted varied performance across devices.
- These issues prompted discussions about potential solutions and the need for improved communication from support.
- Merch and Referral Speculations Bubble: A newcomer inquired about announcements regarding referral-related merchandise, but no current offers were detailed in the chat.
- Speculation about potential rewards lingered as an unclear topic of interest among members.
- China's Powerful Sound Laser Bombshell: An exciting video revealed that China has developed the world's most powerful sound laser, showcasing impressive technology.
- You can catch the action in the video that sparked numerous conversations around advancements in acoustic tech.
- Cerebras IPO Faces Off Against Nvidia: A discussion unfolded around the challenges that Cerebras might encounter during its IPO process, especially competing with Nvidia.
- Detailed insights are available in an article that sheds light on this significant industry event, read more here.
- Rate Limit Increase Request Ignites Urgency: A member urgently sought guidance on requesting a rate limit increase, noting multiple emails to support without a response.
- Clarification on whether to contact the correct support email suggested potential oversights in communication processes.
DSPy Discord
- Creating Tools That Create Tools: A member emphasized the need for tools that create tools to boost efficiency in future development.
- Such tools represent a growing trend towards enhanced automation and community engagement.
- Assistants Develop Assistants: Members explored the exciting potential of developing assistants that can create other assistants.
- This concept of meta-development promises to significantly advance productivity.
- Custom LM vs Adapter Showdown: Discussion emerged around the need for clearer documentation on when to choose a custom Adapter over a custom LM.
- Members suggested reviewing the existing language models documentation for improvements.
- Custom LM Clients Phasing Out: DSPy 2.5 has deprecated all custom LM clients except `dspy.LM`, and they will be removed entirely in DSPy 2.6; migration is encouraged (a minimal migration sketch follows this list).
- Helpful migration guidance can be found in this notebook.
- LM Configuration Confusion: An issue arose with `lm_kwargs` not populating in the MIPROv2 optimizer, raising questions about expected behavior.
- A member confirmed that `lm.kwargs` should contain the kwargs unless the predictor is explicitly configured otherwise.
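A minimal sketch of the migration pattern, assuming an OpenAI-backed model string (the model name below is illustrative, not from the discussion): construct a `dspy.LM` and register it with `dspy.configure`:

```python
import dspy

# DSPy >= 2.5: one unified client instead of per-provider custom LM classes.
# "openai/gpt-4o-mini" is an illustrative model string, not from the source.
lm = dspy.LM("openai/gpt-4o-mini", temperature=0.0, max_tokens=512)
dspy.configure(lm=lm)

qa = dspy.Predict("question -> answer")
print(qa(question="What replaces custom LM clients in DSPy 2.5?").answer)
```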
OpenInterpreter Discord
- Open-Interpreter Maintains Tool Calling Consistency: A member asked how Open-Interpreter ensures accurate tool calling, learning that it's largely consistent thanks to the system message paired with LLMs.
- Mikebirdtech clarified that while it's not strictly deterministic, the system message supports a reliable performance.
- Exploring Potential of Structured Output: Discussion emerged on structured output for custom tool calling, as past experiments hinted at significant untapped potential.
- There was general agreement that enhancements from tools like Ollama and llamacpp could make such developments feasible.
- Mozilla AI Talk Set to Inspire: Mikebirdtech reminded everyone about next week's talk from Mozilla AI focusing on open source initiatives, urging attendance through a link in the Discord event.
- The excitement was palpable, highlighting the talk's potential relevance and interest for AI enthusiasts.
LLM Agents (Berkeley MOOC) Discord
- In-person Lecture Attendance Restricted: Due to room size limitations, only Berkeley students can attend the lectures in person, leaving others to participate remotely.
- This decision sparked discussions regarding access and community involvement in the Berkeley MOOC.
- Debate on Autogen for AI Agents: Members debated the use of Autogen in production environments versus using raw API calls for implementing AI agents in their startups.
- This dialogue emphasized the importance of optimizing Autogen for real-world applications.
- Building Frameworks with Redis: A user shared insights about developing their own framework using Redis to connect workers, aiming to streamline operations.
- This approach targets trimming down abstraction and improving control over complex use cases.
- Omar's Exciting DSPy Lecture: A member expressed excitement for an upcoming DSPy lecture by Omar, marking it as a significant event in the community.
- Their dedication to contributing to DSPy development showcases a strong interest in advancing this framework's capabilities.
- Contributions Being Made to DSPy: The same member plans to actively contribute to DSPy, reinforcing their commitment to its development.
- Such involvement illustrates the growing interest in enhancing DSPy tools and features.
tinygrad (George Hotz) Discord
- Tinygrad Website Navigation Issues Highlighted: A member raised concerns that users might struggle to find specific pages on the tinygrad website unless they click a small button, pointing to possible navigation flaws.
- Upon further reflection, they confirmed that clicking the button would indeed guide users to the intended page.
- Bounty Challenge for Swift Compilation: A user is pursuing a bounty from exo to compile tinygrad to Swift, sharing a link to the GitHub issue for reference.
- They aim to retain exo's Python roots while seeking advice from moderators on achieving this goal.
- Tensor.sum() Workaround Developed: A workaround using qazalin's additional buffer count PR was created to address errors arising from Tensor.sum(), which struggled with excessive buffers.
- This method is noted as very inefficient, requiring operations to be added and split iteratively to avoid issues.
- Improved Norm Calculation Method: A new script processes gradients by iteratively calculating norms and squaring them to optimize memory usage.
- This method involves creating groups of norm1_squared and norm2_squared, enhancing stability but sacrificing some efficiency.
- George Hotz Stresses Documentation Value: George Hotz emphasized the significance of reading the questions document, guiding users towards leveraging existing resources effectively.
- This advice aims to improve user clarity and reduce confusion surrounding tinygrad’s functionalities.
LangChain AI Discord
- Travel Plans in Question: A member expressed interest in attending an event but was unsure about their ability to travel at that time.
- This concern reflects the complexities involved in scheduling and commitment when travel is a factor.
- ChatPromptTemplate Utilization: A user detailed their approach using `ChatPromptTemplate` for generating messages in a chat application, including an example prompt setup.
- This implementation showcases how to construct both `example_prompt` and `example_selector` for enhanced chat interactions (see the sketch at the end of this section).
- Escaping Quotes in Messages Causes JSON Issues: Multiple users reported that their `messages` object had double quotes encoded as `&quot;`, leading to invalid JSON.
- They sought guidance on preventing this escaping issue to ensure valid JSON is transmitted in chat (a small unescaping sketch also appears at the end of this section).
- Integrating FewShotChatMessagePromptTemplate: A user demonstrated how to implement `FewShotChatMessagePromptTemplate` with a specified example selector and prompt.
- This setup aims to enhance context and improve responses during chat interactions.
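The two prompt-template items above map to a standard LangChain pattern; this sketch uses illustrative examples and message text, not the user's actual setup:

```python
from langchain_core.prompts import (
    ChatPromptTemplate,
    FewShotChatMessagePromptTemplate,
)

examples = [
    {"input": "2+2", "output": "4"},
    {"input": "2+3", "output": "5"},
]
# example_prompt formats a single example as a human/AI message pair.
example_prompt = ChatPromptTemplate.from_messages(
    [("human", "{input}"), ("ai", "{output}")]
)
# A fixed example list is used here; an example_selector can be passed
# instead of `examples` to pick examples dynamically per query.
few_shot = FewShotChatMessagePromptTemplate(
    example_prompt=example_prompt,
    examples=examples,
)
final_prompt = ChatPromptTemplate.from_messages(
    [("system", "You are a concise assistant."), few_shot, ("human", "{input}")]
)
print(final_prompt.format(input="2+4"))
```

For the `&quot;` escaping issue, one plausible fix (assuming the HTML entities are the only corruption) is to unescape before parsing:

```python
import html
import json

raw = '{&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: &quot;hi&quot;}'
messages = json.loads(html.unescape(raw))  # {'role': 'user', 'content': 'hi'}
print(messages)
```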
Torchtune Discord
- BF16 Training Issues Demand Attention: Adjusting the learning rate (LR) is crucial for proper BF16 training as BF16 weights may not update correctly with minimal changes, possibly leading to suboptimal performance. Implementing BF16 mixed-precision training was suggested to address this, despite the increased memory burden from additional FP32 gradients.
- Another member emphasized that without proper rate adjustments, BF16 training could lead to significant inefficiencies.
- Understanding BF16 Effects in 1B Models: Discussions emerged about the more pronounced effects of BF16 in 1B models, potentially because fewer parameters respond less to small updates. One member noted that the BF16 weight update underflow could be traced back to the relationship between `weight` and `weight_delta`.
- Verification against results from BF16 mixed-precision training was proposed as a way to clarify these observations.
- Experimenting with Stochastic Rounding: Interest sparked around introducing stochastic rounding in the optimizer for weight updates, with aims to evaluate its potential impact on Torchtune. A member expressed readiness to run experiments, carefully considering the trade-offs between benefits and complications.
- The team aims to explore practical implications of this approach while remaining cognizant of any resulting complexities (a minimal sketch of the rounding trick follows below).
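A minimal sketch of the stochastic-rounding trick under discussion, using the common bit-manipulation implementation; this is not Torchtune code:

```python
import torch

def bf16_stochastic_round(x: torch.Tensor) -> torch.Tensor:
    # BF16 keeps only the top 16 bits of an FP32 value, so a plain cast
    # drops small weight deltas. Adding uniform noise to the discarded
    # low bits before truncating rounds up with probability proportional
    # to the dropped fraction, preserving tiny updates in expectation.
    assert x.dtype == torch.float32
    bits = x.view(torch.int32)
    noise = torch.randint_like(bits, 0, 1 << 16)
    rounded = (bits + noise) & -65536  # clear the low 16 bits
    return rounded.view(torch.float32).to(torch.bfloat16)
```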
LAION Discord
- Hinton's Nobel Award Foresight: A member speculated that in 50 years, awarding a Nobel to Geoffrey Hinton may be judged like the 1949 prize given to Moniz for the lobotomy, reflecting a significant misalignment with today's machine learning advancements.
- The discussion characterized Hinton's understanding of modern techniques as profoundly disconnected from the current landscape.
- Large-Scale Model Merging Insights: New research from Google discusses model merging approaches for language models up to 64 billion parameters, emphasizing factors affecting performance and generalization.
- Referenced in a tweet, the findings raise critical inquiries about merging efficacy in larger architectures.
- Curiosity Surrounds Autoarena Tool: A user introduced the Autoarena tool, accessible at autoarena.app, highlighting its potential features for technical users.
- This tool has sparked interest, leading to speculation about its possible applications in the field.
The Alignment Lab AI Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The LLM Finetuning (Hamel + Dan) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The MLOps @Chipro Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The OpenAccess AI Collective (axolotl) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The Mozilla AI Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The DiscoResearch Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The Gorilla LLM (Berkeley Function Calling) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The AI21 Labs (Jamba) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
PART 2: Detailed by-Channel summaries and links
HuggingFace ▷ #announcements (1 messages):
Nvidia models
Meta's VLMs
Open Source Hugging Face Accelerate 1.0
Video language models
ColPali multimodal retrieval
- Nvidia Launches High-Efficiency Llama-3.1-Nemotron-51B: Nvidia introduced the Llama-3.1-Nemotron-51B, a NAS-optimized model achieving 2x throughput on a single H100 GPU while retaining accuracy.
- Users can experiment with the model through the API at Nvidia AI or download it from Hugging Face.
- Meta's CoTracker 2.1 Takes Video Motion Prediction Up a Notch: Meta released CoTracker 2.1, an enhanced version of its model capable of tracking 70k points jointly on a single GPU.
- The corresponding paper can be found here, detailing the advancements in video motion prediction.
- Hugging Face Accelerate 1.0 Opens New Possibilities: Hugging Face announced the release of Accelerate 1.0, packed with new features for optimizing model training.
- For additional details, users are encouraged to read the announcement blog.
- Video-to-Text Models Finally Arrive: Hugging Face unveiled a new task for video language models, enabling video-text-to-text capabilities.
- The new functionality comes with comprehensive documentation available in the transformers library.
- ColPali Revolutionizes Multimodal Document Retrieval: ColPali, an innovative approach for multimodal document retrieval, offers seamless integration with Qdrant for efficient indexing.
- Despite some skepticism about its practicality, it presents a simple method to index and search ColPali embeddings efficiently.
- Tweet from NVIDIA AI Developer (@NVIDIAAIDev): 👀 Experience high-efficiency NVIDIA Llama-3.1-Nemotron-51B - a NAS-optimized model achieving 2x throughput while preserving accuracy runs on a single H100 GPU. ✨Try out the Llama-3.1-Nemotron-51B N...
- Tweet from Niels Rogge (@NielsRogge): Meta has released CoTracker 2.1, an improved version of its Transformer-based model for video motion prediction, on @huggingface! Capable of tracking 70k points jointly on a single GPU Paper (with l...
- Tweet from Tris Warkentin (@triswarkentin): Gemma 2 just got even better! 🚀 New Japanese-tuned 2B model AND a $150K Kaggle competition to build Gemma models for every language. Great to have @sundarpichai here to share the excitement! Read m...
- Tweet from Zach Mueller (@TheZachMueller): The day has finally arrived, @huggingface Accelerate 1.0 is now out! There are tons of new goodies to explore and plenty more to come. I'll quickly talk about my favorites 🧵 For a refresher, g...
- Tweet from merve (@mervenoyann): Your LLM can't understand videos and images? How sad 😔 Luckily we shipped a new task for video language models 🤗 look for video-text-to-text in left tab at @huggingface /models ⏯️ It also comes...
- Tweet from Adina Yakup (@AdinaYakup): Here is a collection for leaderboards and Arenas from the Chinese community on @huggingface 🔥🏆🇨🇳 https://huggingface.co/collections/zh-ai-community/leaderboards-and-arenas-664b6913bfd9b93ba4ac242...
- Tweet from Julian Bilcke (@flngr): How it looks like right now (I'm the only user of the server so it's smooth 😂)
- Tweet from Daniel van Strien (@vanstriendaniel): ColPali is an exciting new approach to multimodal document retrieval, but some doubt its practical use with existing vector DBs. It turns out it's super easy to use @qdrant_engine to index and se...
- Tweet from JB Delbrouck (@IAMJBDEL): Paper Central is a new 🤗 Hugging Face space designed to provide the most up-to-date information on the latest research papers. It's the first portal to bring together all key sources in one place...
HuggingFace ▷ #general (566 messages🔥🔥🔥):
LLMs Performance & Limitations
Tokenization Importance
AI Advancements & Research
Connectome Replication in ML
GPU Use and Compatibility
- LLMs Limited by Training Distribution: Discussions emphasized that language models operate within their trained distribution, meaning they cannot generate groundbreaking discoveries or solve novel problems outside their training scope.
- Participants noted that while LLMs like GPT-2 and GPT-3 can assist in tasks, they lack true understanding and agency to filter outputs independently.
- Significance of Correct Tokenizers: Participants agreed on the necessity of using the correct tokenizer specific to the model family, as mismatched tokenizers would not yield effective results (see the short demonstration after these summaries).
- It was pointed out that while learning every tokenizer in depth may not be essential, knowing that many models share tokenization leads to efficiency.
- Research in AI and Possible Applications: Members expressed curiosity about ongoing AI research and the potential application of open-source models in various categories.
- Suggestions were made to replicate connectome structures in ML, noting prior works on simpler organisms as potential starting points.
- Querying GPU Compatibility and Performance: The channel explored compatibility of GPU setups across different OS environments, especially in relation to AI model usage and performance.
- Participants inquired about using their CPUs efficiently for model inference on both Windows and Linux setups, with emphasis on power efficiency and output effectiveness.
- Model Upgrades and Feasibility: Users discussed the feasibility of upgrading their current LLM models to newer versions, weighing the performance improvements against their existing setups.
- The discussion highlighted ongoing developments in LLM technology, urging users to consider advancements that enhance capability in both local and cloud environments.
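As a quick demonstration of the tokenizer point above (model names illustrative):

```python
from transformers import AutoTokenizer

# The same string maps to different token ids under different model
# families, which is why a mismatched tokenizer silently breaks a model.
for model_id in ["gpt2", "bert-base-uncased"]:
    tok = AutoTokenizer.from_pretrained(model_id)
    print(model_id, tok("Hello world!")["input_ids"])
```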
- Tweet from Building WOPR: A 7x4090 AI Server: no description found
- Bugs Bunny Looney Tunes GIF - Bugs Bunny Looney Tunes Cartoons - Discover & Share GIFs: Click to view the GIF
- ShaderMatch - a Hugging Face Space by Vipitis: no description found
- Hehe Hee GIF - Hehe Hee Smile - Discover & Share GIFs: Click to view the GIF
- FlyWire: no description found
- Tags · unclemusclez/unsloth-llama3.2: Llama 3.2 with Unsloth
- GitHub - TuragaLab/flyvis: A connectome constrained deep mechanistic network (DMN) model of the fruit fly visual system: A connectome constrained deep mechanistic network (DMN) model of the fruit fly visual system - TuragaLab/flyvis
HuggingFace ▷ #today-im-learning (7 messages):
Jailbreaking LLMs
Alpaca Dataset and Fine-tuning
Model Merging at Scale
Google's Research Contributions
- Exploring Jailbreaking Techniques for LLMs: A member is learning how to jailbreak LLMs, a growing area of research and experimentation in AI.
- This focus on jailbreaking is becoming more prevalent as the community seeks to understand model limitations.
- Practicing with Alpaca Dataset and Stable Models: A member engaged with the Alpaca dataset for fine-tuning, but indicated that progress with the Qwen 2 1.5B instruct model is not going well.
- They are currently facing challenges and are seeking advice from peers about improving their implementation.
- Google's Large Scale Model Merging Insights: New work from Google on model merging at scale addresses important questions regarding performance across larger models, up to 64B parameters.
- The research explores how different factors like model size and merging methods affect generalization and held-in performance, with a link to the full paper (a baseline-averaging sketch follows these summaries).
- Discussion about Visibility for Model Merging Research: A community member expressed interest in increasing visibility for the model merging work, suggesting a potential presentation during a discussion session.
- Another member offered to conduct a reading group talk, similar to previous presentations they've given, to share insights from their research.
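For readers new to merging, the simplest baseline such work builds on is uniform parameter averaging; this sketch assumes checkpoints of a single shared architecture and is not the paper's method:

```python
import torch

def average_state_dicts(state_dicts: list) -> dict:
    # Uniform averaging ("model soup" style); all checkpoints must share
    # identical state_dict keys and shapes.
    keys = state_dicts[0].keys()
    return {
        k: torch.stack([sd[k].float() for sd in state_dicts]).mean(dim=0)
        for k in keys
    }
```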
- Tweet from Prateek Yadav (@prateeky2806): Ever wondered if model merging works at scale? Maybe the benefits wear off for bigger models? Maybe you considered using model merging for post-training of your large model but not sure if it genera...
HuggingFace ▷ #i-made-this (1 messages):
pelolisu: Diffusers Logo
HuggingFace ▷ #computer-vision (2 messages):
Extending Character Set in TrOCR
- Challenges in Extending TrOCR Character Set: A member inquired about the difficulty in extending the character set/dictionary of the TrOCR model and asked for advice on how to achieve this.
- There were no responses provided in the chat regarding this specific query, leaving it open for future discussion (one possible approach is sketched below).
- Request for Responses: The member requested to be tagged if anyone responds to their inquiry about TrOCR character set extension.
- This highlights the need for collaboration and sharing of insights among team members.
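Since the TrOCR question went unanswered, here is one plausible approach, hedged throughout: the checkpoint and characters are illustrative, and the new embedding rows would still need fine-tuning before they are useful.

```python
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

model_id = "microsoft/trocr-base-handwritten"  # illustrative checkpoint
processor = TrOCRProcessor.from_pretrained(model_id)
model = VisionEncoderDecoderModel.from_pretrained(model_id)

new_chars = ["§", "µ"]  # illustrative additions to the character set
processor.tokenizer.add_tokens(new_chars)
# Grow the decoder's embedding/output layers to cover the new token ids.
model.decoder.resize_token_embeddings(len(processor.tokenizer))
```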
HuggingFace ▷ #NLP (1 messages):
T5 model ONNX files
Conversion methods to ONNX
ONNX export with torch
- Check T5 model's ONNX folder: A member suggested checking the T5 model's page on Hugging Face for required ONNX files located in the onnx folder.
- If necessary files are missing, they recommended downloading the onnx folder into your local structure.
- Exploration of Conversion to ONNX: The discussion included a link to a Hugging Face blog that outlines three methods to convert Transformers models to ONNX.
- Each method, whether using the low-level `torch` API or the high-level API of optimum, accomplishes the same export task.
- Exporting with torch.onnx: For low-level conversion, the member described how to use `torch.onnx` to convert model checkpoints to an ONNX graph, emphasizing the need for specific parameters.
- They provided a snippet demonstrating how to load the model and tokenizer using the transformers and torch libraries (a hedged reconstruction follows below).
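A hedged reconstruction of the kind of snippet described; the checkpoint name, axis names, and opset are assumptions, not the member's actual code:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "t5-small"  # illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
model.eval()
model.config.return_dict = False  # tuples trace more cleanly than ModelOutput

enc = tokenizer("translate English to German: hello", return_tensors="pt")
decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])

torch.onnx.export(
    model,
    (enc["input_ids"], enc["attention_mask"], decoder_input_ids),
    "t5-small.onnx",
    input_names=["input_ids", "attention_mask", "decoder_input_ids"],
    output_names=["logits"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "seq"},
        "attention_mask": {0: "batch", 1: "seq"},
        "decoder_input_ids": {0: "batch", 1: "dec_seq"},
    },
    opset_version=14,
)
```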
Link mentioned: Convert Transformers to ONNX with Hugging Face Optimum: no description found
HuggingFace ▷ #diffusion-discussions (6 messages):
Image Model Identification
Diffusion Models
- Identifying Image Creation Models: A user inquired about which model was used to create a particular low-resolution image. Responses suggested it could be a Flux or a pony model with a griffin LoRA, but noted the image's generic nature affected identification.
- It's important to highlight that without clearer resolution, identifying the model remains challenging.
- Confusion About Diffusers Library: The conversation clarified that while diffusers is a library that can load various models, it doesn't directly specify image creation. Users mentioned that both Stable Diffusion XL and Flux are types of diffusion models compatible with diffusers.
- One user remarked that many images, like the one discussed, might also be created using paid models that lack specific LORAs or character knowledge.
LM Studio ▷ #announcements (2 messages):
LM Studio 0.3.4 release
New features in LM Studio
Bug fixes in LM Studio
- LM Studio 0.3.4 launches with Apple MLX support: LM Studio 0.3.4 features an MLX engine for efficiently running on-device LLMs on Apple Silicon Macs, alongside the ability to download models from Hugging Face.
- The update allows multiple models to run simultaneously and enforces structured JSON responses, making it easier for developers to work with diverse LLMs.
- New tools streamline model management: New keyboard shortcuts, like `Cmd+Shift+M` for searching models and `Cmd+Shift+R` for managing LM Runtimes, enhance the user experience in LM Studio.
- The update also includes a feature for setting structured output via the UI, simplifying the process of configuring model responses (a client-side sketch follows below).
- Key bug fixes improve stability: The update addresses critical bugs, including a fix for the black screen issue after prolonged use, documented in issue #98.
- Other fixes ensure that additional ports work for the local server and resolve issues with the embedding API in Obsidian, improving overall functionality.
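A sketch of the structured-output feature from the client side, assuming LM Studio's OpenAI-compatible local server at its default port; the URL, model name, and schema are illustrative:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
resp = client.chat.completions.create(
    model="local-model",  # whichever model is loaded locally
    messages=[{"role": "user", "content": "Tell me a joke as JSON."}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "joke",
            "schema": {
                "type": "object",
                "properties": {
                    "setup": {"type": "string"},
                    "punchline": {"type": "string"},
                },
                "required": ["setup", "punchline"],
            },
        },
    },
)
print(resp.choices[0].message.content)  # text should conform to the schema
```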
- Issues · lmstudio-ai/lmstudio-bug-tracker: Bug tracking for the LM Studio desktop application - Issues · lmstudio-ai/lmstudio-bug-tracker
- Issues · lmstudio-ai/lms: LM Studio CLI. Contribute to lmstudio-ai/lms development by creating an account on GitHub.
- GitHub - lmstudio-ai/mlx-engine: Apple MLX engine for LM Studio: Apple MLX engine for LM Studio. Contribute to lmstudio-ai/mlx-engine development by creating an account on GitHub.
- LM Studio 0.3.4 ships with Apple MLX: Super fast and efficient on-device LLM inferencing using MLX for Apple Silicon Macs.
- Download LM Studio - Mac, Linux, Windows: Discover, download, and run local LLMs
LM Studio ▷ #general (227 messages🔥🔥):
LM Studio updates
MLX Engine introduction
Performance comparisons of models
Issues with LM Studio
User experiences with LLM models
- LM Studio version updates cause confusion: Users have reported issues with LM Studio auto-updating, specifically with version 0.3.4 not being available through the auto-update feature yet, only downloadable from the website.
- The new version brings changes such as migration of chats, but some users have experienced bugs in their existing workflows.
- Introduction of MLX Engine for Macs: The MLX engine is a new inference engine designed exclusively for Apple Silicon Macs, providing significant performance boosts for supported models.
- Users note that MLX-supported models show roughly 10-20% speed increases for larger models and up to 50% for smaller ones.
- Performance comparisons highlight model capabilities: Participants discussed their experiences with different models, comparing the potential and performance of Llama 3.2, Gemma 2, and other models on compatible hardware.
- Llama 3.1 is noted as performing well, while Gemma 2 does not show a significant difference with MLX.
- User experiences reveal limitations: Some users shared difficulties with using LM Studio, including restrictions on updating and issues with previous conversations when transitioning to new versions.
- Users also expressed dissatisfaction with their hardware's capability to run newer models efficiently, particularly in comparison to high-end GPUs.
- Power consumption discussions: Users compared the power consumption of their GPUs during inference, noting differences such as an RTX 4090 drawing significantly more power than an M3 Max during similar tasks.
- These discussions highlighted the varying efficiencies and performance expectations between different hardware setups.
- bunnycore/Llama-3.2-3B-Mix-IQ4_XS-GGUF · Hugging Face: no description found
- rombodawg/Rombos-LLM-V2.5-Qwen-14b · Hugging Face: no description found
- meta-llama/Llama-3.2-11B-Vision · Hugging Face: no description found
- Reddit - Dive into anything: no description found
- GitHub - ml-explore/mlx: MLX: An array framework for Apple silicon: MLX: An array framework for Apple silicon. Contribute to ml-explore/mlx development by creating an account on GitHub.
- Llama-3.2 11B Vision Support · Issue #9643 · ggerganov/llama.cpp: Is it working right now in any way?
LM Studio ▷ #hardware-discussion (81 messages🔥🔥):
Linux Resource Usage
GPU VRAM Options
Multi-GPU Configurations
Performance of AMD vs NVIDIA
Stable Diffusion Model Efficiency
- Linux Should Perform Better on Low Resources: A member expressed the opinion that Linux should utilize fewer resources effectively, suggesting personal preference over technical capabilities.
- "You need more RAM" was the suggestion for improving performance, rather than changing the OS.
- Comparing VRAM Options: Tesla vs 4060 Ti: When considering additional VRAM, members debated whether to get a Tesla P40 with 24GB or a RTX 4060 Ti with 16GB, highlighting the P40's memory advantages.
- Concerns about P40's slower performance and limited use cases for inference only were noted with comparisons to the 4060 Ti.
- Challenges of Multi-GPU Setups: Discussion on multi-GPU setups emphasized the need for proper configuration to manage cooling and power with 4090's size and PCI-e lanes being major considerations.
- A member considered using an A6000 for better VRAM utilization while ensuring scaling for larger models.
- Performance Disparities Between NVIDIA and AMD: Members agreed that combining an RTX 3060 with an RX 6600 is not ideal due to inefficiencies, with preference for sticking to NVIDIA for speed.
- Combining GPUs may not yield the desired result, as one member highlighted that using dual 3060s would effectively offer more VRAM but at a similar processing pace.
- Grappling with Stable Diffusion Performance: One member shared experiences with Stable Diffusion and the limitations of VRAM with different models, noting that larger models could affect processing speed.
- They hinted at the possibility of leveraging AIs for specific coding tasks, providing insights on QR codes being effectively used but with varying performance outcomes.
- Server kaufen bei LS Computersysteme - Die Serverspezialisten (Buy servers at LS Computersysteme - The Server Specialists): no description found
- Reddit - Dive into anything: no description found
- AMD Radeon Instinct MI60 32GB HBM2 Graphics Accelerator Mining Card 80MH @ 160W | eBay: no description found
Unsloth AI (Daniel Han) ▷ #general (246 messages🔥🔥):
Unsloth Studio Release
Fine-Tuning Models
Model Merging Research
Performance of LLMs
RAG and Fine-Tuning
- Unsloth Studio anticipated launch: Users eagerly await the release of Unsloth Studio, hoping it simplifies the fine-tuning process on Windows without complicated installations.
- One user expressed frustration with setting up Docker and GPU drivers, hoping for a seamless experience with an executable installer.
- Fine-Tuning Techniques Discussion: It is acceptable to fine-tune a fine-tune, especially when using the same chat template for instruct models.
- However, extending this to multiple levels of fine-tuning can become excessive, with cautions shared about the potential pitfalls.
- Research on Model Merging: A new paper on model merging at scale was shared, discussing methodologies and evaluations with various model sizes and configurations.
- Participants expressed skepticism regarding the practical benefits of merging larger models, referencing past leaderboard concerns.
- Performance Comparison Between Inference Methods: Questions arose about whether vllm's inference is significantly faster than Unsloth's on consumer hardware.
- Users are weighing the setup effort against potential performance gains, signaling a need for clarity on efficiency.
- Challenges with DPO Fine-Tuning: One user faced OOM issues when attempting DPO fine-tuning on llama 3.1 with an 8k context length, raising concerns about VRAM consumption.
- Discussion highlighted the differing resource requirements between SFT and DPO methods, with a call for community support.
- Google Colab: no description found
- Chat Templates | Unsloth Documentation: no description found
- All Hands AI: no description found
- LoRA Learns Less and Forgets Less: Low-Rank Adaptation (LoRA) is a widely-used parameter-efficient finetuning method for large language models. LoRA saves memory by training only low rank perturbations to selected weight matrices. In t...
- Wow GIF - Wow - Discover & Share GIFs: Click to view the GIF
- Ohearn Sad Mike Ohearn Sad GIF - Ohearn sad Ohearn Mike ohearn sad - Discover & Share GIFs: Click to view the GIF
- What Matters for Model Merging at Scale?: Model merging aims to combine multiple expert models into a more capable single model, offering benefits such as reduced storage and serving costs, improved generalization, and support for decentraliz...
- Tweet from Prateek Yadav (@prateeky2806): Ever wondered if model merging works at scale? Maybe the benefits wear off for bigger models? Maybe you considered using model merging for post-training of your large model but not sure if it genera...
- unclemusclez/unsloth-llama3.2: Llama 3.2 with Unsloth
- GitHub - chigkim/Ollama-MMLU-Pro: Contribute to chigkim/Ollama-MMLU-Pro development by creating an account on GitHub.
- macadeliccc/opus_samantha · Datasets at Hugging Face: no description found
Unsloth AI (Daniel Han) ▷ #help (60 messages🔥🔥):
Unsloth functionality on Windows
Fine-tuning LLMs for content moderation
ShareGPT format in Colab
Prompt design for completion tasks
Colab resources for model training
- Unsloth Compatibility with Windows: A user inquired if Unsloth could operate on Windows without using WSL.
- No direct responses were provided on this topic.
- Fine-tuning LLMs for Content Moderation: A member proposed fine-tuning an LLM for a content moderation task involving short texts, with a dataset of 50k entries.
- Suggestions included possibly using Llama Guard and Gemma Shield for managing classification.
- Clarification on Dataset Formats: A discussion arose about the necessary dataset format for model training, with mentions of ShareGPT and HF's generic format.
- Members highlighted the need to normalize datasets for compatibility with multiturn formats.
- Prompt Design for Code Merging Tasks: User presented a prompt template aimed at merging code snippets and sought advice on utilizing an Instruct model.
- Questions arose regarding the use of special tokens and prompt structure for optimal performance.
- Sharing Colab Resources: A member shared a link to a Colab notebook for help with ShareGPT and Llama training.
- This resource was received positively, with one user apologizing for earlier frustrations.
- Google Colab: no description found
- Google Colab: no description found
aider (Paul Gauthier) ▷ #general (157 messages🔥🔥):
Aider's Features
Embeddings and Semantic Search
Message Batches API
Free LLM Options
Cost Estimations in Aider
- Aider's Edition Options and Preferences: Users expressed a need for Aider to prompt for commit confirmation instead of auto-committing after coding, with some advocating for disabling automatic commits altogether.
- Concerns also arose about handling estimated cost visibility, with suggestions for clearer labeling to indicate these figures are estimates.
- Understanding Embeddings in AI Systems: Discussion centered around how embeddings are utilized for semantic search, allowing LLMs to retrieve relevant documents by matching vector representations, while LLMs only process the actual text input.
- Users acknowledged the importance of maintaining consistent embeddings across systems to prevent loss of relevance in document retrieval (a retrieval sketch appears after these summaries).
- Exploration of Message Batches API: The introduction of the Message Batches API by Anthropic was discussed, highlighting its cost-effectiveness for processing large volumes of queries asynchronously with reduced pricing.
- While some questioned the practicality of delayed responses, others recognized its potential for applications like generating training data (a batch-submission sketch also appears after these summaries).
- Free LLMs for Coding: Users sought recommendations for reliable free LLMs suitable for coding tasks, with suggestions including Llama 3.1 and Hermes models noted for their capabilities.
- Concerns about affordability led users to compare various free and paid options, emphasizing the desire to avoid costs associated with premium models like GPT-4o.
- Cost Management in Aider: Users discussed the challenge of tracking project costs within Aider, emphasizing a desire for a more detailed cost breakdown per project rather than session estimates.
- While some valued the cost display feature, others suggested improvements to clarify that costs are merely estimates.
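The retrieval flow described above reduces to embedding once and ranking by cosine similarity; a minimal sketch (the embedding model itself is out of scope here):

```python
import numpy as np

def cosine_top_k(query_vec: np.ndarray, doc_vecs: np.ndarray, k: int = 3):
    # Normalize once, rank documents by cosine similarity; only the text
    # of the retrieved documents is then passed to the LLM.
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    return np.argsort(-(d @ q))[:k]
```

And a hedged sketch of submitting work to the Message Batches API via the anthropic SDK's beta surface; field values are illustrative and the exact response shape may differ:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env
batch = client.beta.messages.batches.create(
    requests=[
        {
            "custom_id": f"req-{i}",  # hypothetical ids
            "params": {
                "model": "claude-3-5-sonnet-20240620",
                "max_tokens": 256,
                "messages": [{"role": "user", "content": q}],
            },
        }
        for i, q in enumerate(["first question", "second question"])
    ]
)
print(batch.id, batch.processing_status)  # results arrive asynchronously
```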
- Installing aider: aider is AI pair programming in your terminal
- Tweet from Anthropic (@AnthropicAI): Introducing the Message Batches API—a cost-effective way to process vast amounts of queries asynchronously. You can submit batches of up to 10,000 queries at a time. Each batch is processed within 24...
- Config with .env: Using a .env file to store LLM API keys for aider.
- In-chat commands: Control aider with in-chat commands like /add, /model, etc.
- Tweet from Harrison Chase (@hwchase17): 🚀We're launching "long-term memory" support in LangGraph At its core, long-term memory is "just" a persistent document store that lets you *put*, *get*, and *search* for memories...
- Tips: Tips for AI pair programming with aider.
- FAQ: Frequently asked questions about aider.
- Options reference: Details about all of aider’s settings.
- Hybrid full-text search and vector search with SQLite: Combine SQLite's builtin FTS5 full-text search extension with the sqlite-vec vector search extension for hybrid search!
aider (Paul Gauthier) ▷ #questions-and-tips (101 messages🔥🔥):
Aider Confusion on API Key Usage
Command Line Options for Aider
Context Management in Aider
Deepseek Model Usage
Feedback and Feature Requests
- Aider Confusion on API Key Usage: Users clarified that Aider prioritizes values from `.env` files over YAML config when querying APIs, leading to potential confusion regarding API keys.
- This situation occurs particularly when `.env` files and config files contain overlapping key names, necessitating careful management of environment variables.
- Command Line Options for Aider: Commands like `--no-suggest-shell-commands` and specifying models like `claude-3.5-sonnet:beta` were discussed to enhance Aider's functionality.
- Aliases and wrapper scripts were recommended as methods to streamline command usage and prioritize command-line arguments for API keys (an illustrative invocation appears after these summaries).
- Context Management in Aider: Users sought methods to selectively include parts of large files in Aider's context, suggesting the use of external scripts or commands to extract necessary code fragments.
- Users were encouraged to utilize `/run` execution to manage context effectively, but caution was advised regarding potential confusion for the LLM.
- Deepseek Model Usage: A new user inquired about using the deepseek/deepseek-coder model in Aider, expressing interest in combining it with gpt4o for enhanced functionality in architect mode.
- Users discussed the conceptualization of using Aider as a chat interface mixed with automation, where models operate within defined roles to enhance coding tasks.
- Feedback and Feature Requests: Users were invited to share feedback or feature requests related to Aider's functionalities, encouraging constructive exploration of the application.
- The necessity for thoughtful interaction with Aider was emphasized, as users navigate through its command and configuration complexities.
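An illustrative invocation tying the discussed options together; the file argument and key are placeholders, and the OpenRouter-prefixed model name is an assumption about how the `:beta` variant is reached:

```bash
export OPENROUTER_API_KEY=sk-or-...  # command-line/env values win over config here
aider --no-suggest-shell-commands \
      --model openrouter/anthropic/claude-3.5-sonnet:beta \
      src/main.py
```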
- Installing aider: aider is AI pair programming in your terminal
- Scripting aider: You can script aider via the command line or python.
- FAQ: Frequently asked questions about aider.
- Home: aider is AI pair programming in your terminal
- Chat modes: Using the chat, ask and help chat modes.
aider (Paul Gauthier) ▷ #links (4 messages):
Python 3.13 Release
Google NotebookLM Podcast
- Python 3.13 brings significant updates: Today marks the release of Python 3.13, featuring a better REPL with improved error messages, an option to run without the GIL, and the introduction of an experimental JIT compiler. Additionally, iOS and Android are now Tier 3 supported platforms thanks to the Beeware project.
- A member highlighted the implications of these updates, noting that support for mobile platforms indicates serious commitment to widening accessibility.
- Creating an AI Podcast with Google NotebookLM: A member shared their experience using Google NotebookLM to create a podcast episode focused on the SmartPoi project, describing it as a mix-up of ideas but with a good format. They provided a link to their overview episode as well as a guide on how to get started with NotebookLM.
- They noted that even though there were some confusions in the content, their family was convinced it was a real podcast, demonstrating the potential of AI-generated audio.
- What’s New In Python 3.13: It's Python 3.13 release day today. The big signature features are a [better REPL](https://docs.python.org/3.13/whatsnew/3.13.html#whatsnew313-better-interactive-interpreter) with improved error ...
- SmartPoi AI Podcast Episode 1 - Circus Scientist: The Overview episode: This episode was generated by AI! How To: I used Google NotebookLM for this, and uploaded all of my blog posts and web pages relating to the SmartPoi and Magic Poi projects. Chec...
Eleuther ▷ #general (36 messages🔥):
Nobel Prize in Physics
AI research and recognition
Hinton and Hopfield's contributions
Physics community reactions
Model merging research
- Controversy over Nobel Prize for AI research: Many in the physics community are debating the appropriateness of awarding the Nobel Prize in Physics to Hinton and Hopfield for their AI work, calling it a stretch or an indication of a degraded situation in valuing rigorous science.
- One member remarked it might serve to promote 'hype' work over impactful research, arguing that significant recognition should focus on traditional physics achievements.
- Mixed feelings on Hinton's recognition: While some acknowledge the rational connection to Hinton due to his historic role in AI development, others deem the prize distribution as overly expansive and a diversion from fundamental physics.
- A sentiment was shared that if neural networks weren't as overhyped, the prize could have recognized more impactful physics contributions instead.
- AI Ethics discussed at Nobel Prize event: Participants noted that the Swedish Academy of Sciences is shifting its focus to include discussions on AI ethics and safety, reflecting broader societal concerns.
- This change in dialogue may indicate a desire for the prize to encompass emerging fields that intersect with traditional science.
- Excitement around model merging research: A new model merging study from Google sheds light on the implications and performance of merging large-scale models, presenting interesting findings on their scalability.
- The research addresses common questions and explores the effects of various factors on held-in performance and generalization in models up to 64B parameters.
- Reactions from the physics community: The r/physics community expressed their frustration regarding the Nobel award, labeling it a Very Online phenomenon and questioning what constitutes 'real physicists'.
- Some voiced concerns that the award could detract from the prestige of the prize, suggesting it emphasizes applications over traditional physics excellence.
- Tweet from Prateek Yadav (@prateeky2806): Ever wondered if model merging works at scale? Maybe the benefits wear off for bigger models? Maybe you considered using model merging for post-training of your large model but not sure if it genera...
- Neural networks and physical systems with emergent collective computational abilities.: OpenRead Reading & Notes Taking
- Reddit - Dive into anything: no description found
Eleuther ▷ #research (188 messages🔥🔥):
Normalized Transformer (nGPT)
MuP and Initialization
Diff Transformer
Gradient Descent Behavior
Generative Reward Models
- nGPT proposes hypersphere representation: The paper introduces a novel transformer architecture called nGPT, which normalizes all vectors on the hypersphere, claiming up to 20x improvement in training efficiency.
- This architecture focuses on representation learning, where each layer modifies its output while maintaining unit-norm vectors, potentially enhancing learning speed (a normalization sketch appears after these summaries).
- Concerns about MuP's effectiveness: Discussion highlights that MuP fails in networks outside transformers, with its theoretical assumptions not holding, particularly regarding the alignment between gradients and parameters.
- Critiques include its performance when biases and weight initializations scale, leading to misrepresentations of its supposed benefits.
- Diff Transformer amplifies relevant context: The Diff Transformer reduces attention to irrelevant context by subtracting two softmax attention maps, showing improved performance in language modeling tasks.
- It emphasizes sparse attention patterns and offers notable benefits in long-context modeling, hallucination mitigation, and in-context learning (see the attention sketch after these summaries).
- Understanding Scaling Laws in Gradient Descent: A thread discusses the power-law behavior observed in gradient descent, supported by a blog post from Francis Bach analyzing the real behavior of optimization.
- The conversation reflects on mathematical insights into how these scaling laws manifest during training, contrasting them with traditional optimization literature.
- Generative Reward Models and their significance: The research community shares insights on Generative Reward Models, which leverage human and AI feedback to improve LLM training performance.
- Discussions on implementation and achieving effective post-training performance highlight the importance of reasoning in decision-making within AI systems.
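Two of the papers above reduce to compact formulas. Diff Transformer's core step, per its abstract, is the difference of two softmax maps; a sketch, where `lam` stands in for the learned scalar in the paper:

```python
import torch.nn.functional as F

def diff_attention(q1, k1, q2, k2, v, lam):
    # Shapes (..., n, d); common-mode attention noise cancels in a1 - lam*a2.
    d = q1.shape[-1]
    a1 = F.softmax(q1 @ k1.transpose(-1, -2) / d**0.5, dim=-1)
    a2 = F.softmax(q2 @ k2.transpose(-1, -2) / d**0.5, dim=-1)
    return (a1 - lam * a2) @ v
```

And nGPT's hypersphere constraint amounts to renormalizing vectors to unit norm after each update; a sketch of the constraint only, not the full architecture:

```python
def unit_norm(w, dim=-1, eps=1e-8):
    # Keep embedding/weight vectors on the unit hypersphere.
    return w / w.norm(dim=dim, keepdim=True).clamp_min(eps)
```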
- Generative Reward Models that Unify RLHF and RLAIF Approaches: A novel framework that combines RLHF and RLAIF to better align LLMs with human preferences, outperforming classical methods by up to 45%.
- Selective Attention Improves Transformer: Unneeded elements in the attention's context degrade performance. We introduce Selective Attention, a simple parameter-free change to the standard attention mechanism which reduces attention to un...
- Fira: Can We Achieve Full-rank Training of LLMs Under Low-rank Constraint?: Low-rank training has emerged as a promising approach for reducing memory usage in training Large Language Models (LLMs). Previous methods either rely on decomposing weight matrices (e.g., LoRA), or s...
- Erasing Conceptual Knowledge from Language Models: Erasing concepts from language models by addressing innocence, seamlessness and specificity through low-rank model editing
- nGPT: Normalized Transformer with Representation Learning on the Hypersphere: We propose a novel neural network architecture, the normalized Transformer (nGPT) with representation learning on the hypersphere. In nGPT, all vectors forming the embeddings, MLP, attention matrices ...
- Differential Transformer: Transformer tends to overallocate attention to irrelevant context. In this work, we introduce Diff Transformer, which amplifies attention to the relevant context while canceling noise. Specifically, t...
- Tweet from Taha Yassine (@taha_yssne): I like visualizations so I tried reproducing this one. Here I also show the second and third candidates for tokens above a certain entropy threshold. Top-p would make more sense tho. According to gpt...
- A Spectral Condition for Feature Learning: The push to train ever larger neural networks has motivated the study of initialization and training at large network width. A key challenge is to scale training so that a network's internal repre...
- Adaptive skip connections improve training: SUMMARY Applying a single, linear, learnable weight to the skip (identity) component of residual connections slightly improves training performance. It also reveals interesting training dynamics: d…
- Tweet from main (@main_horse): @papers_anon Their code was published ~4hrs ago. I imported it and executed some full rank (fft) and fira runs. I noticed that the FFT runs would start better than fira, but quickly blow up from inst...
- Wolfram LLM Benchmarking Project: Results from Wolfram's ongoing tracking of LLM performance. The benchmark is based on a Wolfram Language code generation task.
- Tweet from Yaroslav Bulatov (@yaroslavvb): Was happy to see Francis Bach looking at the real behavior of gradient descent. This is in contrast to "hypothetical" behavior which is what optimization literature traditionally studies. Quo...
- Showing $\sum_{i=1}^k i^{-2}(1-i^{-2})^t\approx \frac{\sqrt{\pi }}{2 \sqrt{t}}$ for large $k$: For large $k$, I observe the following: $$f(t)=\sum_{i=1}^k i^{-2}(1-i^{-2})^t \approx \frac{\sqrt{\pi }}{2 \sqrt{t}}$$ What's the easiest way to show this? Notebook
- unilm/Diff-Transformer at master · microsoft/unilm: Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities - microsoft/unilm
- [PDF] Position: LLM Unlearning Benchmarks are Weak Measures of Progress | Semantic Scholar: An academic search engine that utilizes artificial intelligence methods to provide highly relevant results and novel tools to filter them with ease.
Eleuther ▷ #multimodal-general (1 messages):
zackt1234: https://discord.com/channels/729741769192767510/1214931475850469426/1292977027254583397
OpenAI ▷ #ai-discussions (138 messages🔥🔥):
Document Categorization AI
AI Tools for File Management
Cloud vs Local AI Costs
AVM and Multi-modal AI
AI Subscriptions Comparison
- AI for Document Categorization Sparks Interest: Members discussed the potential for an AI to categorize data by reading content, with proposals for tools that can handle their extensive file collections.
- Some expressed skepticism about current capabilities, suggesting that it would be easier to organize files manually or use multiple specialized tools.
- Cloud vs Local AI Cost Discussion: The cost of analyzing vast amounts of data with AI tools was a key concern, with estimates suggesting that analyzing 18,478 files could cost around $12,000 using API services.
- Questions arose about the feasibility of local AI versus cloud-based services, with users weighing server costs against local hardware expenses.
- Interest in AVM and Multi-modal AI: Discussions around AVM highlighted its potential for multi-modal interactions, emphasizing how it might revolutionize user experience with future enhancements.
- Members noted a convergence of various AI technologies and expressed excitement about the future capabilities of AVM-like tools.
- Recommendations for AI Subscriptions: Users debated whether to subscribe to ChatGPT or Claude, noting that ChatGPT’s o1-preview has limitations, while Claude has proven useful for existing users.
- The consensus suggested that subscriptions should focus on accessing full versions of tools rather than limited previews.
- Nvidia's Dominance in AI Hardware: Discussion around Nvidia highlighted how the company has become central to AI hardware, particularly in training and running large models due to its CUDA toolkit.
- Members noted challenges with AMD GPUs in AI applications and expressed frustration over the high costs and limited support for older hardware.
Link mentioned: 70 Best Automated File Organization AI tools - 2024: Discover the best 70 paid and free AI Automated File Organization, and find their features and pricing. Find the best AI tools for Automated File Organization.
OpenAI ▷ #prompt-engineering (23 messages🔥):
Learning Styles in Dog Training
Prompt Leaderboard Discussion
Curiosity About Prompt Creation
Gemini Advanced Prompt Success
- Learning Styles in Dog Training: Members discussed the analogy between dog training and understanding AI models, emphasizing that understanding anatomy may help some while others may not find it necessary.
- One noted that different individuals have unique ways of learning, and it’s essential to recognize these diverse approaches.
- Debate on Prompt Leaderboards: The topic of whether a leaderboard for prompts could exist was raised, with members questioning how prompts could be objectively scored.
- The discussion included thoughts on the feasibility of evaluating and hashing prompts to produce consistent outputs.
- Yoshijpn's Insights on Prompts: A member expressed a desire for a finite set of prompts with specific documentation, noting the vast number of potential prompts given a 4096 character limit.
- They remarked that the total number of possible prompts far exceeds human storage capabilities, speculating it at 5 x 10^9982.
- Seeking Prompt Engineer Assistance: A member requested assistance from a prompt engineer for a project, indicating they are willing to pay for help.
- This highlights the ongoing need for skilled prompt engineers within the community.
- Success with Gemini Advanced Prompt: A member shared their success in creating a prompt for Gemini Advanced that consistently generated high-quality outputs across multiple chats.
- However, they were reminded to refrain from discussing other AIs in the channel, adhering to community guidelines.
OpenAI ▷ #api-discussions (23 messages🔥):
Learning Styles in Training
Prompt Engineering Queries
Interest in Prompt Leaderboards
Limitations of Prompt Length
Collaboration for Prompt Creation
- Learning Styles Vary in Training: Members discussed how learning preferences can differ, likening it to dog training where some might require extensive knowledge of anatomy, while others thrive without it.
- One member expressed enjoyment in engaging with model outputs over technical understanding.
- Prompt Leaderboards Spark Interest: A member humorously proposed the idea of a leaderboard for prompts, questioning how such prompts could be scored.
- Another member expressed curiosity regarding the potential community responses to this idea.
- Exploring Prompt Creation Parameters: Discussions arose about the prerequisites for creating a standardized prompt set, including length limits and output consistency.
- Interest was shown in the complexity of generating unique prompts, with speculation on the vast total number of possible prompts.
- Collaboration for Prompt Engineering: A member sought assistance from skilled prompt engineers for a project, indicating readiness to pay for quality work.
- Another member shared their success in crafting a high-quality prompt for consistent outputs with the Gemini Advanced model.
- Fan of AI Primaries: A member humorously introduced themselves as Sam Altman, expressing admiration for Dario Amodei.
- This playful comment highlights the engaging and varied personalities within the discussion.
OpenRouter (Alex Atallah) ▷ #announcements (2 messages):
OpenAI prompt caching
Prompt caching audits
Cost savings with caching
Updates on model endpoints
Anthropic beta endpoints
- OpenAI Prompt Caching Launched: Last week, OpenAI prompt caching was launched, enabling significant cost savings on inference costs, potentially up to 50%.
- It is automatically enabled for 8 OpenAI models and works with providers like Anthropic and DeepSeek, with more to follow.
- Audit Your Caching Savings: Users can now audit their savings from prompt caching directly from the openrouter.ai/activity page.
- This feature can also be accessed via the `/generation` API to track how much was saved on each generation.
- Cache Usage Insights Available: The `cache_discount` field in the response body reveals how much the response saved on cache usage, aiding user decisions.
- However, some providers like Anthropic may show a negative discount on cache writes, impacting the overall cost savings (an audit sketch appears after these summaries).
- Model Endpoint Updates Rolled Out: Free model endpoints will now display the accurate endpoint context length on the model page to enhance user clarity.
- Feedback is requested on the Anthropic `:beta` self-moderated endpoints as they approach a planned exit from beta soon.
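A hedged sketch of auditing savings programmatically via the `/generation` endpoint; the response schema beyond the `cache_discount` field mentioned above is an assumption:

```python
import os
import requests

resp = requests.get(
    "https://openrouter.ai/api/v1/generation",
    params={"id": "gen-..."},  # hypothetical generation id
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
)
data = resp.json().get("data", {})
print("cache_discount:", data.get("cache_discount"))
```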
- Prompt Caching | OpenRouter: Optimize LLM cost by up to 90%
OpenRouter (Alex Atallah) ▷ #general (112 messages🔥🔥):
OpenRouter Performance Issues
Anthropic API Usage
Prompt Caching Details
Model Provider Selection
Rate Limits
- OpenRouter encountering double generation issues: Users reported experiencing double generation per request. One member suspected it might be related to their setup, while another suggested increasing timeouts for better handling.
- Challenges with Anthropic 3.5 Sonnet moderation: A member faced moderation issues with Claude 3.5 Sonnet, realizing that using the :beta endpoint could avoid some of these problems. The regular endpoint imposes mandatory moderation, while the :beta variant allows for self-moderation.
- Insights on prompt caching mechanics: Discussion emerged around prompt caching, covered in detail by OpenRouter's documentation. Members noted it automates caching for several providers but required manual activation for Anthropic, costing +25% for input tokens.
- Provider selection strategies for requests: User queries led to explanations on how to route requests to specific providers like Anthropic to avoid rate limit errors. The default load balancing behavior and manual pinning of providers were highlighted as options.
- Rate limit concerns with Google Vertex: A user reported frequent 429 errors while using Sonnet, indicating resource exhaustion. Suggestions were made to disallow fallback options to redirect requests to Anthropic instead (an illustrative request appears after these summaries).
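A sketch of pinning a provider and disabling fallbacks, following the Provider Routing docs linked below; the model slug and message are illustrative:

```python
import os
import requests

payload = {
    "model": "anthropic/claude-3.5-sonnet",
    "messages": [{"role": "user", "content": "hello"}],
    # Route only to Anthropic; fail rather than fall back elsewhere.
    "provider": {"order": ["Anthropic"], "allow_fallbacks": False},
}
r = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json=payload,
)
print(r.json()["choices"][0]["message"]["content"])
```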
- Provider Routing | OpenRouter: Route requests across multiple providers
- Prompt Caching | OpenRouter: Optimize LLM cost by up to 90%
- Limits | OpenRouter: Set limits on model usage
- Requests | OpenRouter: Handle incoming and outgoing requests
Stability.ai (Stable Diffusion) ▷ #general-chat (95 messages🔥🔥):
Stable Diffusion WebUI setup
GPU performance comparison
Image generation and modification tips
ControlNet models
General help and resource sharing
- Choosing the Right GPU for Stable Diffusion: When comparing the RX 6900 XT and RTX 4070, users discussed the performance implications, noting that AMD cards like the 6900 XT can be slower due to CUDA dependencies.
- The importance of VRAM was highlighted, with recommendations leaning towards Nvidia for fewer memory issues and better efficiency, particularly for generating images.
- Inpainting Techniques for Specific Styles: Users shared queries on how to apply specific styles to existing images without altering original elements, with methods like ipadapter and ControlNet being considered.
- Community members suggested posting images for better assistance in achieving desired style transfers without generating unwarranted changes.
- ControlNet Models Explained: A user inquired about ControlNet models, prompting another member to share a GitHub link for detailed explanations and examples.
- The shared link emphasizes the control over diffusion models and provided visual examples for better understanding.
- Exploring Auto1111 UI for New Users: New users posed questions about the Automatic1111 UI and where to find support for setup issues, seeking guidance on optimal configurations.
- The chat included suggestions for the Forge WebUI as a potential alternative that may fix some of the common issues faced with Automatic1111.
- Community Support for Image Generation: Members reached out for assistance on various aspects of image generation using Stable Diffusion, discussing workflow optimizations and sharing insights.
- The conversation encompassed the importance of community support for troubleshooting issues like local connection problems with the web UI.
Link mentioned: GitHub - lllyasviel/ControlNet: Let us control diffusion models!: Let us control diffusion models! Contribute to lllyasviel/ControlNet development by creating an account on GitHub.
Cohere ▷ #discussions (19 messages🔥):
Cohere API Performance
Cohere Dark Mode
Data Retention Settings
AI Club Collaboration
Cohere Outage
- Cohere API impresses new user: A new member expressed appreciation for the Cohere API, highlighting its simplicity in setting up a multi-tool agent with minimal code.
- Developer experience is a big factor for the member as they evaluate integrating AI into their team's workflow.
- Excitement for Cohere's new Dark Mode: A user enthusiastically announced that Cohere now features Dark Mode.
- This revelation sparked excitement among others in the channel.
- Restricting Cohere's data storage: A user inquired about how to restrict Cohere from storing user prompts, asking if such settings are available.
- Another member confirmed that users can opt out through their dashboard, providing a link to the data retention settings.
- Potential for student club collaboration: A student from SRM India sought a partnership with Cohere to start an AI and data science club on campus.
- The response directed them to Cohere's community resources for potential support.
- Cohere experiences an outage: Users reported encountering 503 errors, indicating that Cohere is down.
- This issue was noted in the channel as it affected access to the platform.
- Login | Cohere: Login for access to advanced Large Language Models and NLP tools through one easy-to-use API.
- Cohere For AI (C4AI): Cohere For AI (C4AI) is Cohere's non-profit research lab that seeks to solve complex machine learning problems.
Cohere ▷ #questions (25 messages🔥):
Fine Tuning Cohere API
Commercial Use of Cohere APIs
Changing Frequency and Presence Penalties
Data Usage and Privacy Controls
Crafting Effective Prompts
- Fine Tuning with Batch Processing: A member mentioned using a total of 67,349 examples for fine-tuning, splitting them into batches of 96 for the API due to restrictions.
- "Not sure if this was the right way to go about it or not" was their sentiment.
- Cohere APIs for Commercial Use: One member inquired about using Cohere APIs for commercial purposes, to which another confirmed that it is targeted towards the enterprise market.
- They were directed to the FAQs for further information.
- Adjusting Frequency and Presence Penalties: A user needed help with changing Frequency Penalty and Presence Penalty, and was pointed towards the advanced option in the dashboard to locate settings.
- A Python example was shared demonstrating how to add these penalties directly in the `co.chat` function (a hedged reconstruction appears after these summaries).
- Data Privacy and Usage Policies: A member asked about restricting user prompts stored in Cohere; another highlighted the data opt-out policy available for customers.
- More details were provided through a link regarding how Cohere maintains data control and the options available to customers for data privacy.
- Guidelines for Crafting System Role Prompts: In response to a query about the language structure for the system role, it was confirmed that it follows the standard markdown approach.
- Members were directed to the documentation for a clearer understanding of effective prompt crafting.
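A hedged reconstruction of the kind of `co.chat` call described above; the model name and penalty values are illustrative:

```python
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key
response = co.chat(
    model="command-r",
    message="Summarize the latest release notes.",
    frequency_penalty=0.3,  # discourages verbatim repetition
    presence_penalty=0.2,   # nudges toward new topics
)
print(response.text)
```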
- Crafting Effective Prompts — Cohere: This page describes different ways of crafting effective prompts for prompt engineering.
- Chat — Cohere: Generates a message from the model in response to a provided conversation. To learn more about the features of the Chat API follow our [Text Generation guides](https://docs.cohere.com/v2/docs/chat-api...
- Enterprise Data Commitments: Cohere maintains robust controls to protect enterprise data and respect our enterprise customers’ rights regarding their data.
- Cohere FAQs — Cohere: Cohere is a powerful platform for using Large Language Models (LLMs). This page covers FAQs related to functionality, pricing, troubleshooting, and more.
Cohere ▷ #api-discussions (22 messages🔥):
Rerank API with Semi-structured Data
Python SDK issues
API v1 and v2 differences
Janitor AI Proxy link
Advanced settings in documentation
- Rerank API struggles with Semi-structured Data: A user noted issues with the Rerank API not returning documents sent in requests with the 'return_documents: True' parameter while using the Python SDK.
- They successfully tested the API directly via Thunder Client, suggesting a possible bug in the SDK (a minimal repro sketch appears after these summaries).
- Python SDK facing installation problems: One member indicated they are using the Python SDK v5.10.0 due to installation issues with v5.11, although their previous code worked in API v1.
- The member plans to test the SDK in isolation later to narrow down the issue.
- API v1 and v2 usage clarified: Discussion took place regarding the link to the /v1/chat/completions endpoint, with one member requesting clarity on using it in Janitor AI.
- It was suggested that while not explicitly mentioned, a proxy can still be utilized with the SDK.
- Search for Proxy link for Janitor AI: A user requested a specific proxy link format (like /v1/chat/completions) for Janitor AI, expressing difficulty in finding it.
- Members confirmed that while it's not explicitly available, proxy usage is possible and the SDK includes it.
- Need for advanced settings documentation: One member emphasized the importance of adding advanced settings to the documentation for better user guidance.
- This suggestion was made to enhance user experience and functionality.
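A minimal repro sketch for the `return_documents` report above; the query and documents are illustrative:

```python
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key
results = co.rerank(
    model="rerank-english-v3.0",
    query="What is the capital of France?",
    documents=["Paris is the capital of France.", "Berlin is in Germany."],
    top_n=2,
    return_documents=True,  # the flag reportedly ignored by the SDK
)
for r in results.results:
    print(r.relevance_score, r.document)
```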
- Rerank Overview — Cohere: This page describes how Cohere's ReRank models work.
- Chat (v1 API) — Cohere: Generates a text response to a user message. To learn how to use the Chat API and RAG follow our [Text Generation guides](https://docs.cohere.com/docs/chat-api).
- Rerank — Cohere: This endpoint takes in a query and a list of texts and produces an ordered array with each text assigned a relevance score.
Cohere ▷ #projects (9 messages🔥):
Companion Discord Bot
Moderation Tools
Identifying Proper Models
Hugging Face Resources
- Companion Bot Introduced: A member introduced the Companion Discord bot, powered by Cohere, designed for dynamic persona modeling and enriched interactions within server communities. The bot not only offers moderation capabilities but evolves to provide authentic conversational experiences, as detailed in the GitHub repository.
- Another member expressed excitement about the project, calling it amazing and acknowledging the shared information.
- Discussion on Moderation Capabilities: Members discussed the potential for the Companion bot to add value as a moderation tool in server environments. One remarked that it could serve as a 'spice' for existing moderation bots, enhancing user engagement.
- This exchange acknowledged the importance of having effective tools to foster respectful communication within communities.
- Searching for Proper Models: A user inquired about finding the right models for their project, pointing to a specific need within the community. Another member recommended Hugging Face as a valuable resource for various models suitable for different applications.
- The discussion highlighted the accessibility of resources available for users looking to enhance their projects.
Link mentioned: GitHub - rapmd73/Companion: A discord chat bot utilizing AI in a fun and whimsical way. Provides some moderation tools as well.: A discord chat bot utilizing AI in a fun and whimsical way. Provides some moderation tools as well. - GitHub - rapmd73/Companion: A discord chat bot utilizing AI in a fun and whimsical way. Provid...
Latent Space ▷ #ai-general-chat (58 messages🔥🔥):
Advanced voice mode frustrations
Nobel Prize for Hinton and Hopfield
Anthropic Message Batches API
Salesforce Generative Lightning UX
Cursor tips and tricks
- Advanced voice mode frustrations: A member expressed frustration about not having access to advanced voice mode, sharing that reinstalling the app on iOS resolved their issue but not on Mac OS.
- Another member noted that advanced voice mode is time-limited, with shorter responses making it feel less effective.
- Hinton and Hopfield win the Nobel Prize: The Royal Swedish Academy of Sciences announced the 2024 Nobel Prize in Physics awarded to John J. Hopfield and Geoffrey E. Hinton for their foundational work in enabling machine learning.
- Comments reflected skepticism about the overlap of machine learning and physics, citing disputes over the recognition of foundational contributions in AI.
- Anthropic introduces Message Batches API: Anthropic launched the Message Batches API, which allows submission of up to 10,000 queries at a reduced cost, processed asynchronously within 24 hours.
- One member compared this to OpenAI’s batching, noting the similarities based on quick documentation reviews.
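A minimal sketch of the endpoint as described at launch (SDK namespacing and model name may differ by version):

```python
import anthropic

client = anthropic.Anthropic()

batch = client.beta.messages.batches.create(
    requests=[
        {
            "custom_id": f"query-{i}",
            "params": {
                "model": "claude-3-5-sonnet-20240620",
                "max_tokens": 256,
                "messages": [{"role": "user", "content": question}],
            },
        }
        for i, question in enumerate(["What is HBM?", "Explain SRAM scaling."])
    ]
)
print(batch.id)  # poll the batch until it finishes (within 24 hours)
```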
- Salesforce Generative Lightning UX launch: Salesforce unveiled the Generative Lightning UX, which dynamically creates layouts for enterprise applications, aiming to enhance user experience by tailoring information needs.
- The initiative is currently in pilot phase, with encouragement for user feedback to refine the product for next year’s release.
- Cursor tips shared at Weights & Biases: A member shared insights from a Cursor tips & tricks meeting at Weights & Biases, emphasizing the value of discussing usage strategies within teams.
- A follow-up thread was provided for more detailed exploration of the shared tips.
- Tweet from The Nobel Prize (@NobelPrize): BREAKING NEWS The Royal Swedish Academy of Sciences has decided to award the 2024 #NobelPrize in Physics to John J. Hopfield and Geoffrey E. Hinton “for foundational discoveries and inventions that en...
- Tweet from Alex Konrad (@alexrkonrad): Exclusive: Braintrust, which helps Airtable, Brex, Notion and Stripe build AI products, has raised $36M in a Series A led by a16z. The one-year-old startup offering LLM evaluations and monitoring is ...
- Tweet from Anthropic (@AnthropicAI): Introducing the Message Batches API—a cost-effective way to process vast amounts of queries asynchronously. You can submit batches of up to 10,000 queries at a time. Each batch is processed within 24...
- Tweet from Jürgen Schmidhuber (@SchmidhuberAI): How 3 Turing awardees republished key methods and ideas whose creators they failed to credit. More than a dozen concrete AI priority disputes under https://people.idsia.ch/~juergen/ai-priority-dispute...
- The Subprime AI Crisis: None of what I write in this newsletter is about sowing doubt or "hating," but a sober evaluation of where we are today and where we may end up on the current path. I believe that the artifi...
- Tweet from Clara Shih (@clarashih): Last week @OpenAI launched ChatGPT Canvas, an interface that displays text, code, and visualization outputs. In the enterprise, we rely on more structured, trusted UX elements -- record details, lists...
- Tweet from Rick Lamers (@RickLamers): Congratulations John J. Hopfield and Geoffrey E. Hinton 🙇♂️
- Reddit - Dive into anything: no description found
- Tweet from Alex Volkov (Thursd/AI) (@altryne): We had a "Cursor tips & tricks" meeting today with my colleagues at @weights_biases and I figured I'd share what we 'discovered' & shared between us in a 🧵 If you haven't cha...
- Tweet from Tsarathustra (@tsarnick): Geoffrey Hinton says he is "flabbergasted" about being awarded the Nobel Prize in Physics and he believes AI will exceed people in intellectual ability so we should worry about it "getting...
- Tweet from Emma Strubell (@strubell): I don't come on this website much anymore, but when I do, it's because @Google's Chief Scientist is on here choosing to spend his time trying to discredit a single number in our >5yo pa...
Nous Research AI ▷ #general (39 messages🔥):
Knowledge Graphs in AI
Hermes Model Datasets
Free Compute for Competitions
LLM Evaluation Services
Prototyping with LLMs
- Knowledge Graphs blow minds: A recent demo showcased a knowledge graph that integrates seamlessly with LLMs, leaving attendees amazed at its potential benefits.
- Members expressed eagerness for practical implementations, with discussions about augmenting Transformers to work with these graphs without flattening.
- Hermes Model Datasets Discussed: A member inquired whether Nous Research will release the dataset used for Hermes 2, which was confirmed to be the openhermes 2.5 dataset.
- Additionally, it was mentioned that the function calling datasets for Hermes 2 Pro were also released recently.
- Interest in Free Compute for Testing: A member proposed offering free compute for a potential competition, sparking interest among others in the channel.
- However, there was uncertainty about whether others would take up the offer for such resources.
- Evaluation Framework for LLMs: A service called lm-evaluation-harness was recommended for conducting evaluations on language models.
- It aims to facilitate few-shot evaluations, appealing to those interested in performance metrics for various models.
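For those curious, a minimal sketch of driving the harness from its Python API (model and task are illustrative):

```python
# pip install lm-eval
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",  # Hugging Face backend
    model_args="pretrained=EleutherAI/pythia-160m",
    tasks=["lambada_openai"],
    num_fewshot=0,
)
print(results["results"])
```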
- Prototyping LLMs with Graphs: One member expressed plans to prototype with LLMs, particularly regarding fine-tuning existing models.
- Questions arose about fine-tuning techniques for handling unordered graphs with Transformers.
- Nous: Hermes 3 405B Instruct (free) – Provider Status: See provider status and make a load-balanced request to Nous: Hermes 3 405B Instruct (free) - Hermes 3 is a generalist language model with many improvements over Hermes 2, including advanced agentic c...
- GitHub - EleutherAI/lm-evaluation-harness: A framework for few-shot evaluation of language models.: A framework for few-shot evaluation of language models. - EleutherAI/lm-evaluation-harness
- Rubrics by ForceMultiplier.AI - Advanced 3D Force GraphRAG: Experience the future of data analysis with Rubrics, a revolutionary 3D Force GraphRAG system. Visualize, analyze, and understand complex data structures in an immersive environment.
- Neo4j documentation - Neo4j Documentation: Neo4j documentation - Neo4j Documentation
- Reference — NetworkX 3.3 documentation: no description found
- Discrete Math : no description found
- PyTorch-BigGraph: A Large-scale Graph Embedding System - Meta Research: We present PyTorch-BigGraph (PBG), an embedding system that incorporates several modifications to traditional multi-relation embedding systems that allow it to scale to graphs with billions of nodes a...
Nous Research AI ▷ #ask-about-llms (4 messages):
Nous masking attention
Creating eval datasets
LLM Judge experience
Llama file location
Llama-stack
- Nous handles attention masking for packed samples: A member inquired how Nous managed masking attention across packed samples, noting that the stock Llama implementation breaks with larger causal masks.
- Despite adding sample packing to the training regime, the member reported a significant efficiency hit due to the loss of batched inference.
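To illustrate the masking problem, here is a minimal sketch (not Nous's implementation) of a block-diagonal causal mask that stops packed samples from attending across sample boundaries:

```python
import torch

def packed_causal_mask(seq_lens: list[int]) -> torch.Tensor:
    """Block-diagonal causal mask for packed samples: True = attention allowed."""
    total = sum(seq_lens)
    mask = torch.zeros(total, total, dtype=torch.bool)
    start = 0
    for n in seq_lens:
        # Each sample gets its own lower-triangular (causal) block.
        mask[start:start + n, start:start + n] = torch.tril(
            torch.ones(n, n, dtype=torch.bool)
        )
        start += n
    return mask

print(packed_causal_mask([2, 3]).int())
```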
- Collaboration on eval dataset creation: A member expressed interest in collaborating on creating and working with evals, mentioning they are working on something exciting.
- They specifically seek input from anyone experienced using LLMs as Judges for specific evaluation datasets.
- Llama file location after download: A user asked about the location where Meta hides the Llama file after downloading models from llama-stack.
- Another member replied they had never heard of llama-stack, leaving the question unanswered.
Nous Research AI ▷ #research-papers (6 messages):
Diff Transformer
Model Merging at Scale
Text to Video Models
- Introducing Diff Transformer: The Diff Transformer amplifies attention to relevant context while canceling noise through a differential attention mechanism, offering a solution to Transformer’s over-allocation of attention to irrelevant context. Experimental results indicate that it significantly enhances performance in long-context modeling and hallucination mitigation, as detailed in this paper.
- You can explore the Diff Transformer code on GitHub for practical applications.
- Google's Internship Research on Model Merging: An intern, in collaboration with Google, investigates large-scale model merging, exploring its effects on language models up to 64B parameters. They share insights into how merging methods and model size impact performance and generalization, detailed in their internship work on arXiv.
- The discussion raises questions about the longevity of benefits when merging larger models, which is a significant consideration for future model training strategies.
- Inquiry on Free Text to Video Models: A user inquired about the availability of free text-to-video models, both animated and non-animated. There was a suggestion to explore animate2diff, with others in the community potentially having more insights.
- Tweet from Prateek Yadav (@prateeky2806): Ever wondered if model merging works at scale? Maybe the benefits wear off for bigger models? Maybe you considered using model merging for post-training of your large model but not sure if it genera...
- Differential Transformer: Transformer tends to overallocate attention to irrelevant context. In this work, we introduce Diff Transformer, which amplifies attention to the relevant context while canceling noise. Specifically, t...
- unilm/Diff-Transformer at master · microsoft/unilm: Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities - microsoft/unilm
Nous Research AI ▷ #interesting-links (3 messages):
OpenAI o1 model
Open O1 project
Large-scale model merging
- OpenAI unveils new reasoning system o1: OpenAI released their new reasoning system, o1, which builds on previous models like Q* and promises online search capabilities for challenging tasks.
- Despite its potential, o1 is currently a prototype with inference scaling laws confirming high costs in processing.
- Open O1 aims to democratize AI power: The newly launched Open O1 website seeks to match the capabilities of OpenAI's proprietary o1 model with advanced open-source alternatives.
- Their mission includes achieving o1-like performance in code generation and mathematical problem-solving.
- Exploring Large-scale Model Merging: An exciting internship project has explored large-scale model merging, discussing impacts of model size and merging methods on performance.
- This work questions whether model merging remains effective for larger language models (up to 64B parameters), and findings are accessible in a research paper.
- Tweet from Prateek Yadav (@prateeky2806): Ever wondered if model merging works at scale? Maybe the benefits wear off for bigger models? Maybe you considered using model merging for post-training of your large model but not sure if it genera...
- Open-Source O1: no description found
- Reverse engineering OpenAI’s o1 : What productionizing test-time compute shows us about the future of AI. Exploration has landed in language model training.
GPU MODE ▷ #general (7 messages):
Inference optimisation
HBM concerns
SRAM scaling issues
DGX-1 performance comparison
3D stacking solutions
- Kickstarting Inference Optimisation Journey: A new user expressed a desire to begin their inference optimisation journey and sought help with Triton and CUDA-based optimisations.
- This highlights a growing interest in advanced optimisations among community members, potentially beneficial for newcomers.
- Concerns Over HBM Effectiveness: Discussions revealed skepticism about HBM's cost-effectiveness for devices like the H100, highlighting that it remains a substantial percentage of the total cost.
- Moreover, its power consumption is not significantly lower than LPDDR5, raising questions about its utility.
- SRAM Scaling Problems: Participants pointed out issues with SRAM scaling, noting that it is not keeping pace with logic scaling, which was unexpected for firms like Graphcore.
- The conversation pointed to a lack of foresight regarding this problem during the design phase around 2015.
- Comparison of NVIDIA DGX-1 Systems: A user compared the compute power capabilities between GP100 and GV100 GPUs, with notable differences in FP32 Compute and Memory Bandwidth metrics.
- This highlights the ongoing relevance of hardware efficiency in current discussions.
- Mitigation Strategies through 3D Stacking: Moving forward, a proposed solution for memory architecture issues involves 3D stacking, similar to the approach used in MI300X.
- This strategy aims to optimise performance by using leading-edge processes for SMs while offloading SRAM and I/O to older process dies.
Link mentioned: Don’t Move The Data! | : no description found
GPU MODE ▷ #triton (3 messages):
TMA Descriptor Initialization
Batch Matrix Multiplication
Compilation Issues with tl.dot
- TMA Descriptor Initialization Overhead Issue: Current implementation using host-level TMA descriptor initialization incurs about 80% overhead for smaller matrices like 4k x 4k, while it's around 10% for 32k x 32k matrices.
- Pre-caching descriptors for defined BLOCK_SIZES slightly improves performance but is limited to weights only, and issues arise with device-level descriptor initialization in nightly builds.
- BMM Implementation Challenges: A member faced an assertion error with tl.dot while performing BMM on a 2D input a with shape [B_M, B_K] and modified_weight of shape [B_M, B_K, B_N].
- The workaround using tl.broadcast_to increases required shared memory by 16x, which is seen as sub-optimal.
- Compilation Issue with tl.dot: Another member highlighted that the BMM code only works in interpret mode, referencing a related issue on GitHub about tl.dot with 3D shapes compilation errors.
- The issue, tracked under GitHub Issue #4867, specifies compilation errors when using 3D tensor shapes.
Link mentioned: tl.dot with 3D shapes compilation error · Issue #4867 · triton-lang/triton: This code import triton import triton.language as tl from torch import Tensor @triton.jit def get_three_d_weights( currents, # [B_M, B_N] weight_block, # [B_K, B_N] BLOCK_SIZE_K: tl.constexpr, ): p...
GPU MODE ▷ #torch (19 messages🔥):
GPU Acceleration for DataLoaders
DALI Challenges
CUDA Operations and Performance
PyTorch Conference Insights
SPDL for Efficient Data Loading
- GPU Acceleration for DataLoaders is Possible: Discussions highlighted that DataLoaders could be accelerated on the GPU, with some arguing that the design limitations due to multiprocessing impair performance.
- A member noted, 'the main problem… is that it uses multiprocessing,' suggesting a need for less multiprocessing to optimize GPU usage.
- Mixed Feelings on DALI: One member expressed frustration over their experience with DALI, stating it was 'super challenging to get going' and felt like a waste of time.
- This sentiment of difficulty was echoed by others, pointing to steep learning curves and hurdles during implementation.
- Exploring CUDA Operations for Processing: The potential for CUDA operations to enhance pre-processing efficiency was discussed, with suggestions to reduce kernel launch overhead using torch.cuda.graph().
- One member noted that registering custom CUDA operations would help, emphasizing that Inductor won't be able to reason about custom code as effectively as with built-in torch operations.
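A minimal sketch of the torch.cuda.graph() suggestion, following the pattern in the PyTorch CUDA graphs docs (the preprocess chain and shapes are illustrative):

```python
import torch

def preprocess(x: torch.Tensor) -> torch.Tensor:
    # Stand-in for a chain of small pre-processing kernels.
    return (x / 255.0 - 0.5).clamp(-1.0, 1.0)

static_in = torch.zeros(64, 3, 224, 224, device="cuda")

# Warm up on a side stream before capture, as the docs advise.
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s):
    for _ in range(3):
        preprocess(static_in)
torch.cuda.current_stream().wait_stream(s)

g = torch.cuda.CUDAGraph()
with torch.cuda.graph(g):
    static_out = preprocess(static_in)

# Replay: copy fresh data into the captured buffer; one replay() relaunches
# the whole captured kernel sequence with a single CPU-side call.
static_in.copy_(torch.rand(64, 3, 224, 224, device="cuda") * 255)
g.replay()
result = static_out.clone()
```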
- Insights from PyTorch Conference: A member shared insights from the PyTorch Conference, mentioning throughput improvements from using CUDA operations for pre-processing tasks.
- Details about the conference were shared, including dates and links for registration, inviting more participants to join the sessions.
- SPDL for Efficient Data Loading Implementation: The SPDL framework was highlighted as an innovative approach for achieving efficient data loading by parallelizing pipelines stage-by-stage instead of process-based loading.
- A member discussed how this could lead to higher throughput, with a promise of upcoming features to improve the PyTorch data loading experience.
- Paradigm Shift - SPDL 0.0.6 documentation: no description found
- PyTorch Conference 2024: Blobs to Clips: Efficient End-to-End Vid...: View more about this event at PyTorch Conference 2024
GPU MODE ▷ #beginner (1 messages):
vayuda: do macs with m series chips use arm sve instructions
GPU MODE ▷ #pmpp-book (4 messages):
CUDA architecture details
Persistent threads
Occupancy and register usage
- Understanding 4 Processing Blocks in CUDA: A member highlighted that execution units in CUDA are allocated 'per quarter SM', necessitating balanced workloads across those 4 processing blocks for optimal performance.
- Persistent threads can complicate scheduling, especially if thread blocks like 'producers' and 'consumers' aren't handled properly, which could lead to deadlocks.
- Impact of Processors on Occupancy: A member explained that an A100/H100 SM supports up to 64 warps, and that the division into 4 processors affects occupancy, as it reduces available warps in multiples of 4.
- For instance, if a kernel requires 200 registers, it can effectively run 8 warps instead of the expected 10, thus complicating the use of cudaLaunchCooperativeKernel.
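To make the arithmetic concrete (a back-of-the-envelope sketch using published A100 figures): an SM has 65,536 registers, or 16,384 per processing block. At 200 registers per thread, a 32-thread warp needs 200 × 32 = 6,400 registers, so each block fits floor(16,384 / 6,400) = 2 warps, giving 4 × 2 = 8 warps per SM, versus floor(65,536 / 6,400) = 10 if registers were allocated SM-wide.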
- Strategizing Kernel Launch Parameters: To ensure successful kernel launches, the launch_bounds feature can be specified to allocate the required warps, but may lead to wasted resources if not optimized correctly.
- Using such bounds can help prevent deadlocks and ensure that register limits are properly managed by the CUDA compiler to align with available processing units.
GPU MODE ▷ #youtube-recordings (3 messages):
dtype Clarification
Quantized Training
Mixed-Precision Training
INT8 Speedup on 4090 GPU
- Clarifying dtype Types: A member discussed the two types of dtype: tensor/storage dtype and compute dtype, emphasizing the use of quantized/low-bit tensor dtype for memory savings and low-bit compute dtype for faster calculations.
- For instance, while 8-bit Adam requires an 8-bit optim state for memory efficiency, the computations are handled in FP32 for accuracy.
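A small illustration of the distinction (values are illustrative, not the 8-bit Adam implementation):

```python
import torch

# Storage dtype: an INT8 tensor plus a scale, for a 4x memory saving.
w_int8 = torch.randint(-128, 128, (1024, 1024), dtype=torch.int8)
scale = 0.02

# Compute dtype: dequantize to FP32 before the actual math, for accuracy.
x = torch.randn(16, 1024)
w_fp32 = w_int8.to(torch.float32) * scale
y = x @ w_fp32.t()
print(w_int8.element_size(), "byte vs", w_fp32.element_size(), "bytes per element")
```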
- Insights on INT8 Quantized Training: INT8 quantized training utilizes tensor dtype as INT8 while the compute dtype remains BF16, aiming for memory savings; however, the results have been disappointing in memory savings and accuracy.
- In contrast, INT8 mixed-precision training employs BF16 for tensor dtype and INT8 for compute, yielding promising results in speed and accuracy.
- Remarkable Speedup with INT8 Mixed Precision: A member expressed surprise at discovering that INT8 mixed precision could achieve a 1.7x speedup on a 4090 GPU without any tradeoffs, making its performance comparable to that of an A100.
- They emphasized the need for more experiments to confirm these findings, while thanking a member for their contributions.
GPU MODE ▷ #torchao (5 messages):
ViT Sparsity Experiment
WeightNormSparsifier
Model Inference Time
- ViT Sparsity Experiment Shows Slowdown: A member conducting a sparsity experiment on a pretrained ViT noted increased inference time when applying the WeightNormSparsifier to MLP and attention linear layers, as mentioned in the torchao/sparsity readme.
- They questioned whether this slowdown is typical and sought feedback on potential oversights in their implementation.
- Discussion on ViT Slowdown Expectations: Another member expressed disbelief that a slowdown should occur on a ViT, challenging the initial observation and suggesting a review of the code shared.
- They indicated that a performance hit would be unexpected merely from using the WeightNormSparsifier.
- Need for Sparsification Call to Improve Speed: A member clarified that the expected slowdown with just the WeightNormSparsifier is due to its masking application, which does not inherently speed up the model.
- They advised that the member should call sparsify_ to utilize sparse kernels and potentially enhance performance.
GPU MODE ▷ #off-topic (5 messages):
Raspberry Pi compatibility
Nobel Prize in Physics
Hinton's Nobel Prize win
- Raspberry Pi Might Support New Software: A member mentioned they haven't compiled for Raspberry Pi, but there’s a chance it might work as it has previously shown compatibility.
- They humorously added that they feel like a grandpa navigating Discord, just hitting random buttons to see what happens.
- ptrblock Wins Nobel Prize in Physics: A breaking announcement highlighted that ptrblock received the Nobel Prize in Physics for 'fundamental contributions to physics' which was shared in a tweet.
- Another member joked that ptrblock deserves a Nobel Peace Prize for keeping their mind at peace while debugging PyTorch code.
- Hinton Recognized as First 'Pure CS' Nobel Prize Winner: It was noted in the chat that Hinton is the first winner of a 'pure computer science' Nobel Prize, marking a significant milestone.
- This recognition has sparked conversations about the implications and recognition of computer science in prestigious awards.
Link mentioned: Tweet from jack morris @ COLM (@jxmnop): BREAKING: The Nobel Prize in Physics has been awarded to ptrblock for “fundamental contributions to physics”
GPU MODE ▷ #rocm (2 messages):
Raspberry Pi 5
External GPU gaming
Amdgpu Linux kernel patch
GLmark2 performance
- Raspberry Pi 5 Rocks External GPU Support: After witnessing Pineboards demo 4K gaming using an external GPU at Maker Faire Hanover, a member shares plans for their GPU test rig.
- The excitement for testing is palpable as they prepare to document the progress of the amdgpu Linux kernel patch on Raspberry Pi.
- Documenting GPU Patch Efforts: A member provided a livestream demonstration about setting up the external GPU patch for Raspberry Pi 5 to enhance gaming experiences.
- They intend to detail the implementation process and remaining tasks for achieving full external GPU support, sparking anticipation in the community.
- Benchmarking with GLmark2: The member showcased GLmark2 running on Raspberry Pi 5 utilizing an AMD RX 460 external GPU, emphasizing performance improvements.
- An image documenting the setup illustrates the potential of external GPU gaming on the Raspberry Pi platform.
Link mentioned: Use an External GPU on Raspberry Pi 5 for 4K Gaming | Jeff Geerling: no description found
GPU MODE ▷ #webgpu (1 messages):
ORT Min JS examples
WebGPU backend
- Seeking open source examples of ORT Min JS with WebGPU: A member is searching for any open source examples that utilize ORT Min JS with a WebGPU backend.
- They specifically highlighted the need for practical implementations to reference.
- Request for additional resources: The same member is also interested in broader resources related to WebGPU and ORT Min JS integration.
- They are hoping to gather a collection of useful links and guides from the community as well.
GPU MODE ▷ #metal (3 messages):
BFloat16 conversion
Model performance on Mac
GPU integer shifts
- BFloat16 Conversion for T5 Model: A user successfully converted a T5 family model to BFloat16 and is running it with MLX.
- This shift is aimed at optimizing model performance despite compatibility concerns.
- BFloat16 Emulation on Mac: Concerns were raised about BFloat16 not being natively supported on Mac, leading to emulation, which may affect performance.
- The suggestion was made that using float16 instead might be a better option if range is not a critical factor.
- Questioning Performance Impact of Float Conversion: A member questioned the perceived slow performance of conversions to and from float, noting it's simply a 16-bit shift.
- They inquired whether GPUs do not have vectorized integer shifts, hinting at potential optimizations.
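The "16-bit shift" point is easy to verify on the CPU: BF16 is just the top 16 bits of an FP32 value, so the round trip is a pair of shifts (a truncating sketch, ignoring rounding modes):

```python
import numpy as np

x = np.array([3.14159], dtype=np.float32)
bits = x.view(np.uint32)

bf16 = (bits >> 16).astype(np.uint16)                        # FP32 -> BF16: drop low 16 bits
restored = (bf16.astype(np.uint32) << 16).view(np.float32)   # BF16 -> FP32: shift back
print(x[0], "->", restored[0])                               # 3.14159 -> ~3.140625
```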
GPU MODE ▷ #avx (2 messages):
vpternlogd instruction
AVX-512 ISA
Logic design
Amiga programming
- Discovering vpternlogd: The Ternary Logic Wonder: An intriguing post about the vpternlogd instruction was shared, highlighting its ability to perform complex bitwise Boolean logic using three inputs.
- The operation can utilize 512-bit registers, making it a powerful tool for SIMD CPU programmers looking to simplify logic operations.
- Nostalgic Reflections on Logic Design Concepts: A member recalled minterms and maxterms from their logic design course, drawing parallels to the complexities of programming.
- They humorously noted that it seemed like the Amiga chip designer wrote the docs for software developers, blending nostalgia with technical discussions.
Link mentioned: AVX Bitwise ternary logic instruction busted!: How a modern AVX instruction shares a similar design with a 1985 blitter chip, by Arnaud Carré
LlamaIndex ▷ #blog (5 messages):
LlamaIndex Hackathon
LlamaParse premium
Oracle integrations
LlamaIndex Workflows Tutorial
- LlamaIndex Hackathon Kicks Off: Don't miss the second-ever LlamaIndex hackathon starting this Friday for #SFTechWeek, with over $12,000 in cash prizes and more!
- Come and learn how to build complex, multi-agent systems for fun and profit by signing up here.
- LlamaParse Premium Shines: LlamaParse premium is touted as the best document parser for context-augmented LLM applications, capable of handling complex documents like slide decks and multi-table Excel sheets.
- It efficiently processes interleaved scanned documents and other document types with substantial text and visuals, detailed in this link: more info.
- New Oracle Integrations Arrive: A big arrival from Oracle includes four new integrations: data loader, text splitter, embeddings, and vector search.
- Detailed updates on these tools' capabilities can be found in the respective documentation, including support for the data loader.
- Comprehensive LlamaIndex Workflows Tutorial: Check out a detailed tutorial on LlamaIndex Workflows that compares them to LangGraph and guides users on getting started.
- The tutorial also includes tips on building an AI research agent and debugging strategies, accessible here.
- no title found: no description found
- no title found: no description found
LlamaIndex ▷ #general (49 messages🔥):
Docstore Functionality
Contextual Retrieval from Anthropic
Ingestion Pipeline for Qdrant
DuckDB Vector Store Limitations
RAG Pipeline Query Handling
- Docstore Can Handle Both Chunks and Full Documents: Members clarified that the docstore can store both chunks and full documents as they are effectively the same class under the hood.
- cheesyfishes emphasized that its versatility allows it to accommodate both types seamlessly.
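A minimal sketch of that versatility using the in-memory docstore (document text is illustrative):

```python
from llama_index.core import Document
from llama_index.core.node_parser import SentenceSplitter
from llama_index.core.storage.docstore import SimpleDocumentStore

doc = Document(text="Full document text goes here. It spans several sentences.")
chunks = SentenceSplitter(chunk_size=64).get_nodes_from_documents([doc])

docstore = SimpleDocumentStore()
docstore.add_documents([doc])   # a full document...
docstore.add_documents(chunks)  # ...and its chunks, via the same API
```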
- Contextual Retrieval Insights: cheesyfishes noted that contextual retrieval from Anthropic is mostly about metadata and chunk enrichment, highlighting similarities with existing models.
- The notable aspect discussed was leveraging prompt caching for scalability, indicating a continued evolution in the approach to retrieval mechanisms.
- Qdrant Ingestion Pipeline Usage: tharak_ asked about using the ingestion pipeline for directly indexing processed documents, aiming for efficiency across stages such as summary extraction and embedding generation.
- The discussion reflected on simplifying the indexing process by integrating it directly within the ingestion phases, enhancing overall system performance.
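A minimal sketch of that pattern with the documented LlamaIndex APIs (an in-memory Qdrant and illustrative transformations):

```python
import qdrant_client
from llama_index.core import Document
from llama_index.core.ingestion import IngestionPipeline
from llama_index.core.node_parser import SentenceSplitter
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.vector_stores.qdrant import QdrantVectorStore

client = qdrant_client.QdrantClient(location=":memory:")
vector_store = QdrantVectorStore(client=client, collection_name="docs")

# Transform, embed, and write to the vector store in one pass.
pipeline = IngestionPipeline(
    transformations=[SentenceSplitter(chunk_size=512), OpenAIEmbedding()],
    vector_store=vector_store,
)
nodes = pipeline.run(documents=[Document(text="Some processed document text.")])
```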
- DuckDB Vector Store Functionality: There was confusion regarding capabilities in the DuckDB vector store, particularly around add vs. delete methods, with a focus on the lack of an upsert function.
- whitefang_jr explained that the add method requires a list of nodes, emphasizing the need for clarity between node and document identifiers.
- Handling 'And' Queries in RAG Pipeline: Members discussed strategies for breaking down queries containing the word 'and', such as separating the terms for independent context retrieval.
- tharak_ proposed using entity recognition to differentiate between entities and operators to enhance query handling in the RAG pipeline.
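A naive sketch of the splitting idea (a real version would use entity recognition, as proposed, so phrases like "research and development" survive intact):

```python
def split_conjunctive_query(query: str) -> list[str]:
    # Split on the conjunction and retrieve context per sub-query.
    return [part.strip() for part in query.split(" and ") if part.strip()]

print(split_conjunctive_query("LlamaParse pricing and Qdrant setup"))
# ['LlamaParse pricing', 'Qdrant setup']
```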
- no title found: no description found
- Google Colab: no description found
- Redirecting...: no description found
- Documents / Nodes - LlamaIndex: no description found
- Add Files To Pipeline | LlamaCloud Documentation: Add files to a pipeline.
- Import Pipeline Metadata | LlamaCloud Documentation: Import metadata for a pipeline.
- Document Stores - LlamaIndex: no description found
- Ingestion Pipeline - LlamaIndex: no description found
- llama_index/llama-index-integrations/vector_stores/llama-index-vector-stores-duckdb/llama_index/vector_stores/duckdb/base.py at d5b7511a3c51937abf7b21402b826e28de58aabd · run-llama/llama_index: LlamaIndex is a data framework for your LLM applications - run-llama/llama_index
Modular (Mojo 🔥) ▷ #general (13 messages🔥):
TIOBE Index
Mojo Programming Language
WebAssembly
Rust Frontend Frameworks
Data Attributes in DOM
- TIOBE Index Shows Mojo Rising: The October 2024 TIOBE index highlights Mojo entering the top 50 languages and underscores the demand for fast, secure, and easy-to-learn programming languages.
- Given its rapid ascent within a year, Mojo outperforms several established languages, making it a promising new contender in programming.
- Excitement for Mojo's Future: Community members expressed enthusiasm for Mojo, particularly regarding its potential role in a lightning talk for dotAI and its appeal over Python.
- Mojo is seen as a fast alternative potentially attracting those previously only interested in Python.
- WebAssembly vs JavaScript Debate: Discussion about WebAssembly and its ability to access the DOM sparked debates on whether it can replace JavaScript.
- While opinions vary, one member pointed out that garbage collection needs to be addressed for smoother interaction with the DOM via JavaScript.
- Rust Frontend Frameworks Insights: Questions arose regarding how Rust frontend frameworks operate, with comparisons to JavaScript glue code managing DOM interactions.
- This led to a focus on the mechanics of how languages like Rust interface through JavaScript for frontend development.
- Utilization of Data Attributes in DOM: A user highlighted a feature in the DOM where data can be stored directly in elements as attributes beginning with data-, such as data-myattribute.
- This functionality presents an efficient way to enhance elements with custom data in web development.
- TIOBE Index - TIOBE: no description found
- Issues · modularml/mojo: The Mojo Programming Language. Contribute to modularml/mojo development by creating an account on GitHub.
Modular (Mojo 🔥) ▷ #mojo (19 messages🔥):
Mojo Keywords Reevaluation
ECS Implementation Challenges
Feedback on Mojo Proposal
Display of Keywords for Beginners
Game Development Discussions
- Mojo Keywords Reevaluation Discussion: Members are discussing the need to reconsider core keywords in Mojo such as 'inout' and 'borrowed', linked to a GitHub proposal.
- The discussion indicates a desire for clearer argument conventions in the Mojo references subsystem.
- Challenges Implementing ECS in Mojo: A member expressed the need for a mapping from types to indices in Mojo dictionaries for ECS (Entity Component System) access. Discussions highlighted that Mojo lacks advanced support for introspecting types, making this a complicated task.
- Another user suggested using Variant and noted the potential messiness involved in implementing such a mapping.
- Feedback on Idea for Mojo Keywords: A member submitted substantial feedback on the proposal regarding keyword lists in Mojo, highlighting the importance of having an accessible overview for developers. Support for this kind of list was echoed by others, emphasizing its utility for beginners in learning the language swiftly.
- A sentiment was shared that limiting the list to one page could significantly enhance the learning experience for new users.
- Exploration of Access Control Terms in Mojo: In the context of the Mojo proposal, a user advocated for adopting access control terms to aid comprehension for programmers less familiar with Rust/C++. This feedback aligns with the goal of improving user experience in the coding environment.
- Such terminology is suggested to simplify reasoning about code structures and permissions for all users.
- Game Development Community Engagement: In response to ECS discussions, one member offered to connect others interested in game development within the Mojo community. They encouraged collaboration and sharing ideas among members who are developing projects related to game development.
- Despite some members wanting to avoid ECS, the opportunity for networking and knowledge-sharing in the community is warmly welcomed.
- 2. Lexical analysis: A Python program is read by a parser. Input to the parser is a stream of tokens, generated by the lexical analyzer. This chapter describes how the lexical analyzer breaks a file into tokens. Python...
- [Discuss] Resyntaxing argument conventions and References · Issue #3623 · modularml/mojo: The design of the Mojo references subsystem is starting to come together. To finalize the major points, it helps to come back and re-evaluate several early decisions in Mojo to make the design more...
Modular (Mojo 🔥) ▷ #max (13 messages🔥):
Max Inference Engine Issues
Custom Operations Tutorial
PyTorch Version Compatibility
Graph Compilation Times
- Max Inference Engine Troubleshooting: A user reported issues using the max inference engine on their Intel NUC, encountering errors with both TorchScript and ONNX routes.
- After discussions, a resolution was found by switching to the official PyTorch channel and using a version under 2.4.
- Custom Ops Undergoing Changes: Inquiry was made about the location of the custom ops tutorial after recent updates, which included examples for creating custom Gelu operations.
- It was noted that custom ops are being reworked, affecting the availability of related documentation.
- Graph Compilation Challenges Discussed: Concerns were raised about the time-consuming nature of graph compilation when performing multiple tensor operations, estimating costs around 400-500 ms.
- Suggestions were made to create a generic reshape operation that could be reused, thus reducing overhead in graph creation.
- PyTorch Version Conflicts Resolved: A user clarified their installation of PyTorch via the conda-forge channel, which led to compatibility issues with the max inference engine.
- Switching to the official channel and meeting version restrictions resolved the problems.
Interconnects (Nathan Lambert) ▷ #news (5 messages):
2024 Nobel Prize in Physics
OpenAI's Compute Capacity
Microsoft Competition
- Nobel Prize Awarded for Neural Networks: The Royal Swedish Academy of Sciences announced the 2024 Nobel Prize in Physics awarded to John J. Hopfield and Geoffrey E. Hinton for their groundbreaking work enabling machine learning with artificial neural networks. This recognition highlights their foundational discoveries and inventions that significantly contribute to the field.
- Wow very wholesome is the sentiment from the community in response to this prestigious acknowledgment.
- OpenAI Secures Independent Compute Power: OpenAI is starting to secure its own compute capacity, entering into data center agreements with Microsoft competitors due to concerns over Microsoft's response times. CFO Sarah Friar noted that Microsoft hasn't moved fast enough to provide the necessary computing power.
- The announcement was met with comments on the competitive landscape, including remarks that this move is spicy but unsurprising given Microsoft’s trust issues.
- Tweet from The Nobel Prize (@NobelPrize): BREAKING NEWS The Royal Swedish Academy of Sciences has decided to award the 2024 #NobelPrize in Physics to John J. Hopfield and Geoffrey E. Hinton “for foundational discoveries and inventions that en...
- Tweet from Anissa Gardizy (@anissagardizy8): Scoop: OpenAI is beginning to secure its own compute capacity, signing data center deals with MSFT competitors CFO Sarah Friar has said MSFT hasn’t moved fast enough to supply the company with comput...
Interconnects (Nathan Lambert) ▷ #ml-questions (14 messages🔥):
Llama 3.2 11B vs 8B performance
Vision integration in text models
Research on PRMs/Verifiers
State-of-the-art audio models
- 8B Model Outperforms 11B for Text: Discussions indicate that the 8B model likely performs better in text-only tasks compared to the 11B Vision model, which is primarily designed for handling images.
- All the additions are for handling images, suggesting a trade-off in text performance.
- Vision Enhancements May Boost Text Models: A participant noted that adding a vision backbone could potentially improve text-only performance, provided that the text backbone remains unfrozen during training.
- Questions arose about the allocation of the 3B parameters in this fusion, highlighting uncertainty about the integration method.
- Debate on LRMs and Planning Performance: A shared thread discussed the effectiveness of LRMs like o1 in enhancing performance on planning tasks, albeit at a higher cost per call.
- Notably, accuracy improvements were reported, for example, accuracy increased from 24% to 98% on hard blocks world instances after back prompting.
- Inquiry About State-of-the-Art Audio Models: A member posed a question about experiences with state-of-the-art audio models, seeking insights from the community.
- Another member recommended reaching out to reach_vb on Twitter for expert advice in this area.
- Limited Research on PRMs: A user highlighted the scarcity of papers on PRMs, contrasting it with the abundance of resources available for LLMs as judges.
- Choose your poison, the user humorously remarked about the overwhelming literature on LLMs.
Link mentioned: Tweet from Subbarao Kambhampati (కంభంపాటి సుబ్బారావు) (@rao2z): LRM-Modulo? Our initial experiments with o1 showed that while LRMs like it do seem to lift the floor for performance on planning problems, they are still far from robust. One idea is to view LRMs as ...
Interconnects (Nathan Lambert) ▷ #ml-drama (14 messages🔥):
Discrediting Claims
Energy Use in AI Research
Internal Disputes
History with Emma and Jeff
- Jeff Dean's Brutal Post Raises Eyebrows: Jeff Dean's recent critique of the energy emissions claims in AI papers has sparked debate, with some feeling it may be too aggressive, prompting discussions about the validity of his arguments.
- Concerns were raised about whether publishing these critiques provides casus belli for excessive energy use in the industry, questioning the implications for accountability.
- Surprise at Emma's Reaction to Jeff: There was surprise expressed over Emma's response to Jeff's statements, implying she feels he is attempting to discredit her work despite her usually reasonable demeanor.
- Some discussions highlighted a perceived historical context in the tensions between them, suggesting previous interactions may have influenced current perceptions.
- Internal Numbers Bring Skepticism: Jeff Dean's access to internal metrics has been mentioned, with subtleties implying this gives him an edge in his arguments, yet also raises questions about transparency.
- This has prompted mixed feelings about the legitimacy of claims made based on proprietary data versus external assessments.
- Email Pushback on Diplomatic Responses: Attempts to respond diplomatically to Jeff's claims were met with negativity, including an anonymous email expressing disbelief at acceptance of Jeff's points.
- This pushback signifies the polarized opinions surrounding the discussion, reflecting deeper divides in the community.
- General Sentiment on Energy Research: The overall sentiment around the energy use in AI research remains bleak, with one member noting it as a 'depressing area of research' but acknowledging the need for continued exploration.
- This perception illustrates the challenges faced in addressing the environmental impacts of AI without adequate resolutions or consensus.
- Tweet from Jeff Dean (@🏡) (@JeffDean): I'm just looking at the timeline: 2019 - honest mistakes in Strubell et al. paper (https://arxiv.org/abs/1906.02243) in assessing emissions cost of Evolved Transformer neutral architecture search...
- Tweet from GitHub - FixTweet/FxTwitter: Fix broken Twitter/X embeds! Use multiple images, videos, polls, translations and more on Discord, Telegram and others: Fix broken Twitter/X embeds! Use multiple images, videos, polls, translations and more on Discord, Telegram and others - FixTweet/FxTwitter
Interconnects (Nathan Lambert) ▷ #random (6 messages):
Toy features
Sampling techniques
Explainability in AI
- Toy impresses yet needs tweaking: A user remarked that the new toys are impressive, although they're unsure if the fit is perfect.
- They plan to continue playing with them and sharing experiences to gauge their performance.
- Sampling insights and industry perceptions: A discussion highlighted that many in big companies view sampling as a black box, primarily focusing on beam/nucleus methods without proper exploration.
- This led to comments on the need for better sampling techniques and how Bayesians are particularly concerned about this topic.
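For readers to whom sampling is a black box, here is a minimal sketch of nucleus (top-p) sampling, one of the two methods named above:

```python
import torch

def nucleus_sample(logits: torch.Tensor, p: float = 0.9) -> int:
    """Sample from the smallest set of tokens whose cumulative
    probability exceeds p (top-p / nucleus sampling)."""
    probs = torch.softmax(logits, dim=-1)
    sorted_probs, sorted_idx = torch.sort(probs, descending=True)
    cumulative = torch.cumsum(sorted_probs, dim=-1)
    # Keep tokens up to and including the first that crosses p.
    cutoff = int(torch.searchsorted(cumulative, p).item()) + 1
    kept = sorted_probs[:cutoff]
    choice = torch.multinomial(kept / kept.sum(), num_samples=1)
    return int(sorted_idx[choice].item())

print(nucleus_sample(torch.randn(50_000)))
```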
- Explainability in AI models gains attention: A shared blog post emphasized the growing importance of explainability in the context of large language models (LLMs) and their reasoning capabilities.
- The post discussed the evolution of product form factors from individual tasks to more complex system-level productivity, highlighting the need for auditable reasoning.
- Tweet from Lucas Beyer (bl16) (@giffmana): @_xjdr @waterruupto @doomslide Also you would be surprised how many people at bigco just see sampling as a blackbox and think beam/nucleus are everything. It’s not helped by most established codebases...
- Normal Computing: no description found
Perplexity AI ▷ #general (31 messages🔥):
Discord Experience Issues
Merch Announcement for Referrals
Image Generation in Perplexity
Perplexity Web vs Mobile Performance
Perplexity Profit Concerns
- Members facing Discord removal issues: A user expressed frustration about being removed from Discord, questioning if it's a psyop and others chimed in about performance problems.
- Several users noted that the experience varies greatly across devices and platforms.
- Inquiry on merchandise for referrals: A newcomer asked about announcements regarding merchandise linked to referrals, seeking clarity on current offers.
- No announcements were mentioned in the chat, leading to further speculation about potential rewards.
- Questions about image generation capabilities: A member requested information on generating images using Perplexity, particularly for Pro users, but found functionality lacking in apps.
- Related discussions pointed to difficulties with features being limited across different platforms.
- Discord's performance raises concerns: Multiple users reported slow performance when using Perplexity on web browsers, stating it works more smoothly on mobile.
- Frustration grew over the usability challenges on desktops, leading to questions around potential fixes.
- Profitability questions arise for Perplexity: A user questioned how Perplexity generates profits amidst student discounts and affordability initiatives.
- Concerns hovered over whether the pricing model is sustainable for Perplexity's financial health going forward.
Perplexity AI ▷ #sharing (6 messages):
China's Sound Laser
5MVA Design
Small Circuit Design
Cerebras IPO Challenges
Generating Descriptions
- China unveils world's most powerful sound laser: A recent video highlights that China has developed the world's most powerful sound laser, showcasing its innovative technology.
- You can watch the video here.
- Designing a 5MVA Dat: A user sought guidance on how to effectively design a 5MVA Dat, leading to discussions on appropriate methodologies.
- For more details, check out the conversation here.
- Creating a Small Circuit: A query arose about how to create a small circuit, prompting users to share their designs and approaches.
- The discussion can be found here.
- Cerebras faces IPO challenges against Nvidia: An article discusses the challenges Cerebras may face in its IPO, particularly in competition with Nvidia.
- Read more about it here.
- Seeking useful description generation: Two users inquired about ways to generate useful descriptions, indicating demand for content-creation tools.
- The related discussions can be accessed here.
Perplexity AI ▷ #pplx-api (2 messages):
Rate Limit Increase Request
Support Response Issues
- Request for Rate Limit Increase: A member inquired about how to request an increase in rate limit usage, mentioning that they have emailed support multiple times without a response.
- They expressed urgency in their request, seeking someone to assist in escalating the issue.
- Clarification on Support Contact: Another member asked if the original poster had emailed the correct support address, specifically api@perplexity.ai or the general support email.
- This indicates a potential oversight in communication, prompting the need for clearer instructions on reaching out for support.
DSPy ▷ #show-and-tell (1 messages):
Tool Creation
Assistants Development
- Creating Tools to Create Tools: A member emphasized the importance of creating tools that create tools in future development.
- This approach aims to enhance efficiency and foster innovation within the community.
- Assistants That Create Assistants: Discussion revolved around developing assistants that can create other assistants, echoing a growing trend in automation.
- This kind of meta-development could lead to significant advancements in productivity and functionality.
DSPy ▷ #general (13 messages🔥):
Custom LM vs Custom Adapter
Deprecation of LM Clients
LM Configuration and Adapters
Optimizer and Adapter Issues
Communication in DSPy Community
- Clarifying Custom LM and Adapter Usage: Members discussed the value of documenting reasons for choosing a custom Adapter versus a custom LM, particularly noting the challenges of selecting the right model when using different LM configurations.
- It was suggested the existing language models documentation should be assessed for clarity on this matter.
- Deprecation Notice for Custom LM Clients: DSPy 2.5 has deprecated all custom LM clients except dspy.LM, and they will be phased out entirely in DSPy 2.6; migration to dspy.LM is recommended for improved consistency with new features like Adapters.
- A link to a migration notebook was shared, guiding users on transitioning to the updated standards.
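For context, migration is a few lines with the unified client; a minimal sketch (model name illustrative):

```python
import dspy

lm = dspy.LM("openai/gpt-4o-mini", max_tokens=512)  # replaces custom LM clients
dspy.configure(lm=lm)

qa = dspy.Predict("question -> answer")
print(qa(question="What changes in DSPy 2.6?").answer)
```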
- Understanding LM Configuration in Adapters: An issue was raised regarding the lm_kwargs not being populated in the MIPROv2 optimizer, leading to confusion over whether it's intended behavior.
- Another member clarified that lm.kwargs will contain the kwargs from the LM unless specific configuration is passed to the predictor.
- Engagement from the DSPy Community: Members introduced themselves and shared insights, with one stating they would experiment with optimizers and provide feedback.
- There was a friendly exchange confirming excitement about ongoing efforts and contributions within the DSPy community.
Link mentioned: Creating a Custom Local Model (LM) Client | DSPy: no description found
OpenInterpreter ▷ #general (13 messages🔥):
Open-Interpreter Tool Calling
Structured Output for Tools
Mozilla AI Talk Announcement
- Open-Interpreter Tool Calling Explained: A member inquired about how Open-Interpreter maintains deterministic and accurate tool calling, receiving an explanation that it isn't purely deterministic due to LLMs, but mostly consistent thanks to the system message.
- Mikebirdtech confirmed that while it employs LLMs, the system message aids in achieving a level of consistency.
- Exploring Structured Output for Custom Tools: Another member suggested exploring structured output for custom tool calling, highlighting untapped potential and past experiments with it.
- There was a consensus that further research and improved support from tools like Ollama and llamacpp would facilitate its implementation.
- Upcoming Mozilla AI Talk Reminder: Mikebirdtech excitedly reminded members about an upcoming talk by a representative from Mozilla AI focused on open source initiatives, suggesting it would be quite interesting.
- The event is set for next week, and members were encouraged not to miss it, with a link provided to the Discord event.
LLM Agents (Berkeley MOOC) ▷ #mooc-questions (6 messages):
In-person attendance at lectures
AI agent startups using Autogen
Building frameworks with Redis
- In-person Lecture Attendance Restricted: Due to the size of the room, only Berkeley students are invited to attend the lectures in person.
- A user inquired about attending the session in person but was informed about the attendance restrictions.
- Debate on Using Autogen in Production: One member questioned if others in AI agent startups are using Autogen in production or prefer to flatten out Python files with raw API calls.
- The conversation revolved around optimizing the implementation of AI agents in real-world applications.
- Custom Framework Development with Redis: A member mentioned building their own framework with Redis, connecting nodes (workers) for enhanced functionality.
- They clarified that this approach aims for both trimming down abstraction and achieving better control over complex use cases.
- Control Over State in AI Frameworks: The user emphasized developing a framework for better control and addressing more complicated use cases by managing state (memory).
- This reflects ongoing discussions about the architecture and efficiency of AI applications.
LLM Agents (Berkeley MOOC) ▷ #mooc-lecture-discussion (1 messages):
DSPy Lecture by Omar
Contributions to DSPy
- Omar's Exciting DSPy Lecture: A member expressed enthusiasm about an upcoming lecture on DSPy presented by Omar, highlighting its importance.
- Excited about this lecture signifies a strong interest in the recent developments within the DSPy framework.
- Contributions being made to DSPy: The same member is actively working with DSPy and aims to contribute to its development and improvement.
- Their commitment showcases a keen interest in enhancing the capabilities of the DSPy tool.
tinygrad (George Hotz) ▷ #general (5 messages):
tinygrad Website Navigation
Exo Bounty Challenge
Tinygrad Documentation
- Users navigate tinygrad website concerns: A member noted that most users may not land on a certain page unless they click a small button, indicating potential navigation issues on the tinygrad website.
- They later reconsidered, affirming that users would indeed be directed there if they clicked the button.
- Discussion on exo bounty challenge: A user is attempting a bounty from exo to compile tinygrad to Swift, sharing a link to the GitHub issue for reference.
- They expressed a desire to maintain exo's Python foundation while seeking guidance from moderators or contributors on tackling the issue.
- George Hotz highlights documentation importance: George Hotz advised others to read the questions document, emphasizing the importance of consulting available resources.
- This guidance underscores a common theme of directing users towards comprehensive documentation for clarity.
- [BOUNTY - $1000] Compile tinygrad to swift · Issue #238 · exo-explore/exo: I want to keep exo 100% python if possible Would like to compile swift inference code in tinygrad The deliverable here is a merged PR in tinygrad and a small demonstration in exo of how this would ...
- tinygrad documentation - tinygrad docs: no description found
tinygrad (George Hotz) ▷ #learn-tinygrad (1 messages):
Buffer Count Workaround
Inefficiencies in Tensor Operations
- Workaround for Tensor.sum() Issues: A workaround was created using qazalin's additional buffer count PR to bypass errors caused by Tensor.sum(), which had trouble loading too many buffers at once.
- Iteratively adding and splitting up the operations was necessary to navigate around the issue, despite being noted as very inefficient in its current version.
- Norm Calculation Adjustments: The script processes gradients by calculating norms, squaring them and summing them iteratively to manage memory better.
- This approach involved creating groups of norm1_squared and norm2_squared, enhancing stability at the cost of efficiency.
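A hypothetical sketch of that chunked pattern (group size illustrative; not the member's script):

```python
from tinygrad import Tensor

grads = [Tensor.randn(1024) for _ in range(64)]  # stand-ins for gradient buffers

total = Tensor.zeros(1)
for i in range(0, len(grads), 8):          # process 8 buffers at a time
    partial = Tensor.zeros(1)
    for g in grads[i:i + 8]:
        partial = partial + (g * g).sum()  # squared norm of one buffer
    total = (total + partial).realize()    # force evaluation per group

global_norm = total.sqrt()
```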
LangChain AI ▷ #general (3 messages):
Travel Concerns
ChatPromptTemplate Usage
Invalid JSON Formatting
- Travel Plans in Question: A member expressed interest in attending an event but wasn't sure about their ability to travel at that time.
- This concern highlights the challenges of scheduling and commitment when travel is involved.
- Using ChatPromptTemplate for Messaging: A member shared their implementation using `ChatPromptTemplate` with a few-shot prompt setup for message generation in a chat application.
- They detailed how they constructed the `example_prompt` and `example_selector` for their chat model.
- Escaping Double Quotes Leading to JSON Issues: Another member reported an issue with their `messages` object where double quotes were being replaced with `&quot;`, causing invalid JSON format (a workaround sketch follows below).
- They sought advice on how to prevent this escaping so that valid JSON reaches the chat.
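The thread didn't resolve the root cause. One common way to keep quotes intact — a sketch under the assumption that the mangling happens when raw JSON is embedded in the template text, not necessarily the member's fix — is to pass pre-serialized JSON as an input variable:

```python
import json
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("system", "Reply with valid JSON only."),
    ("human", "{payload}"),      # JSON arrives as an input variable,
])                               # not as literal template text

payload = json.dumps({"user": "alice", "action": "greet"})
messages = prompt.invoke({"payload": payload}).to_messages()
print(messages[-1].content)      # quotes intact: {"user": "alice", ...}
```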
LangChain AI ▷ #langchain-templates (1 messages):
Escaping Quotes in Messages
ChatPromptTemplate Usage
FewShotChatMessagePromptTemplate
- Escaping quotes in messages causes JSON issues: A user reported that their `messages` object had all double quotes escaped to `&quot;`, resulting in invalid JSON format when sent to the chat.
- They inquired about how to disable this escaping to maintain the original format.
- Using ChatPromptTemplate effectively: The user shared their implementation of `ChatPromptTemplate.from_messages()` to format chat messages with a specified template format.
- This approach includes both human and AI message configurations for better template management.
- Integrating FewShotChatMessagePromptTemplate: The user demonstrated how to set up a `FewShotChatMessagePromptTemplate` using an example selector and an example prompt.
- This integration is intended to enhance input handling and improve the chat's contextual responses.
LangChain AI ▷ #share-your-work (1 messages):
gustaf_81960_10487: <@387269332142391298> update your certs!
Torchtune ▷ #general (4 messages):
BF16 Training Challenges
Learning Rate Adjustments
Stochastic Rounding in Optimizers
- BF16 Training Issues Demand Attention: A member highlighted that the need to adjust the learning rate (LR) is tied to full BF16 training, since BF16 weights fail to update when per-step changes are too small to be representable.
- This member suggested employing BF16 mixed-precision training to mitigate the issue, although it results in higher memory usage due to additional FP32 gradients and optimization states.
- Understanding BF16 Effects in 1B Models: A question was raised regarding why the BF16 effect appears more significant in 1B models, suggesting that fewer parameters might mean less gets updated.
- Another member pointed out that the BF16 weight-update underflow is influenced by the relation between `weight` and `weight_delta`, and proposed verifying against BF16 mixed-precision training results (see the sketch below).
- Experimenting with Stochastic Rounding: A member expressed interest in adding stochastic rounding to the optimizer for weight updates to assess its utility for Torchtune.
- They indicated a willingness to conduct experiments, weighing the experimental benefits against potential complications.
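A small sketch of the underflow being discussed: bf16's machine epsilon near 1.0 is 2^-7 (~0.0078), so a `weight_delta` much smaller than the weight rounds away entirely in pure BF16, while an FP32 master copy (as in mixed-precision training) retains it. The values are illustrative:

```python
import torch

# bf16 keeps only 8 significant bits, so its epsilon near 1.0 is 2**-7
print(torch.finfo(torch.bfloat16).eps)             # 0.0078125

w = torch.tensor(1.0, dtype=torch.bfloat16)        # a weight
delta = torch.tensor(1e-3, dtype=torch.bfloat16)   # e.g. lr * grad

print((w + delta) == w)                            # tensor(True): update rounds away

# an FP32 master copy keeps the small update
w32 = w.float() + delta.float()
print(w32 - 1.0)                                   # ~1e-3 survives
```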
LAION ▷ #general (2 messages):
Geoffrey Hinton Nobel Comparison
Model Merging at Scale
- Geoffrey Hinton's future Nobel Award perception: A member argued that in 50 years, the Nobel awarded to Geoffrey Hinton will be viewed the way we now view the 1949 Nobel given to Moniz for the supposed therapeutic value of lobotomy in psychoses.
- They claimed Hinton profoundly misunderstands modern machine learning, indicating a disconnect with current advancements.
- Insights on Large-Scale Model Merging: New research from a Google intern discusses large-scale model merging scenarios involving models up to 64 billion parameters.
- The study explores how model size, merging methods, and expert count affect held-in performance and generalization, with findings detailed in their arXiv paper (a toy sketch of the simplest merging method follows below).
- Tweet from Prateek Yadav (@prateeky2806): Ever wondered if model merging works at scale? Maybe the benefits wear off for bigger models? Maybe you considered using model merging for post-training of your large model but not sure if it genera...
- Tweet from Jenia Jitsev 🏳️🌈 🇺🇦 (@JJitsev): In 50 years from now, giving Nobel award to Geoffrey Hinton will be seen same like today Nobel award for lobotomy is perceived Moniz got 1949 "for his discovery of the therapeutic value of leucoto...
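The thread didn't include code; for intuition, here is a toy sketch of the simplest baseline the merging literature starts from (uniform parameter averaging), assuming the experts were fine-tuned from a common base so their state dicts align. The paper itself evaluates more sophisticated methods at far larger scale:

```python
import torch

def average_merge(state_dicts: list[dict]) -> dict:
    """Uniform parameter averaging: the simplest model-merging baseline."""
    merged = {}
    for name in state_dicts[0]:
        stacked = torch.stack([sd[name].float() for sd in state_dicts])
        merged[name] = stacked.mean(dim=0)
    return merged

# Usage (hypothetical experts fine-tuned from the same base model):
# merged = average_merge([expert_a.state_dict(), expert_b.state_dict()])
# base_model.load_state_dict(merged)
```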
LAION ▷ #research (2 messages):
Model Merging at Scale
Autoarena Tool
- Exploring Model Merging at Scale: New work from Google investigates large-scale model merging, examining combinations of language models up to 64B parameters and how various factors affect performance and generalization.
- As mentioned in this thread, the research addresses important questions about the effectiveness of merging larger models and its implications.
- Introducing the Autoarena Tool: A user highlighted a tool called Autoarena, found at autoarena.app, sharing only the link without further detail.
- The share drew curiosity about what the tool offers.
- Tweet from Prateek Yadav (@prateeky2806): Ever wondered if model merging works at scale? Maybe the benefits wear off for bigger models? Maybe you considered using model merging for post-training of your large model but not sure if it genera...
- AutoArena: no description found