[AINews] not much happened today
This is AI News! an MVP of a service that goes thru all AI discords/Twitters/reddits and summarizes what people are talking about, so that you can keep up without the fatigue. Signing up here opts you in to the real thing when we launch it 🔜
a spooky quiet weekend is all you need.
AI News for 10/31/2024-11/1/2024. We checked 7 subreddits, 433 Twitters and 32 Discords (231 channels, and 2436 messages) for you. Estimated reading time saved (at 200wpm): 254 minutes. You can now tag @smol_ai for AINews discussions!
Not much happened today, but a month's worth of launches happened in the past two days that you may want to keep up on.
Alternatively you may wish to tune in to the latest LS pod on LMSys/Chatbot Arena!
Table of Contents
- AI Twitter Recap
- AI Reddit Recap
- AI Discord Recap
- PART 1: High level Discord summaries
- Nous Research AI Discord
- Unsloth AI (Daniel Han) Discord
- Perplexity AI Discord
- OpenAI Discord
- LM Studio Discord
- OpenRouter (Alex Atallah) Discord
- Notebook LM Discord
- aider (Paul Gauthier) Discord
- Stability.ai (Stable Diffusion) Discord
- Eleuther Discord
- Latent Space Discord
- GPU MODE Discord
- Interconnects (Nathan Lambert) Discord
- Torchtune Discord
- DSPy Discord
- OpenInterpreter Discord
- LAION Discord
- LlamaIndex Discord
- Cohere Discord
- Modular (Mojo 🔥) Discord
- OpenAccess AI Collective (axolotl) Discord
- Alignment Lab AI Discord
- LLM Agents (Berkeley MOOC) Discord
- tinygrad (George Hotz) Discord
- Mozilla AI Discord
- PART 2: Detailed by-Channel summaries and links
- Nous Research AI ▷ #general (379 messages🔥🔥):
- Nous Research AI ▷ #ask-about-llms (4 messages):
- Nous Research AI ▷ #research-papers (1 message):
- Nous Research AI ▷ #interesting-links (6 messages):
- Unsloth AI (Daniel Han) ▷ #general (249 messages🔥🔥):
- Unsloth AI (Daniel Han) ▷ #help (25 messages🔥):
- Perplexity AI ▷ #general (196 messages🔥🔥):
- Perplexity AI ▷ #sharing (7 messages):
- Perplexity AI ▷ #pplx-api (6 messages):
- OpenAI ▷ #ai-discussions (165 messages🔥🔥):
- OpenAI ▷ #gpt-4-discussions (1 message):
- OpenAI ▷ #prompt-engineering (13 messages🔥):
- OpenAI ▷ #api-discussions (13 messages🔥):
- LM Studio ▷ #general (142 messages🔥🔥):
- LM Studio ▷ #hardware-discussion (17 messages🔥):
- OpenRouter (Alex Atallah) ▷ #announcements (2 messages):
- OpenRouter (Alex Atallah) ▷ #app-showcase (1 message):
- OpenRouter (Alex Atallah) ▷ #general (137 messages🔥🔥):
- OpenRouter (Alex Atallah) ▷ #beta-feedback (6 messages):
- Notebook LM Discord ▷ #use-cases (31 messages🔥):
- Notebook LM Discord ▷ #general (60 messages🔥🔥):
- aider (Paul Gauthier) ▷ #announcements (2 messages):
- aider (Paul Gauthier) ▷ #general (53 messages🔥):
- aider (Paul Gauthier) ▷ #questions-and-tips (22 messages🔥):
- aider (Paul Gauthier) ▷ #links (2 messages):
- Stability.ai (Stable Diffusion) ▷ #general-chat (55 messages🔥🔥):
- Eleuther ▷ #general (9 messages🔥):
- Eleuther ▷ #research (29 messages🔥):
- Latent Space ▷ #ai-general-chat (34 messages🔥):
- Latent Space ▷ #ai-announcements (1 message):
- GPU MODE ▷ #general (2 messages):
- GPU MODE ▷ #triton (1 message):
- GPU MODE ▷ #beginner (3 messages):
- GPU MODE ▷ #off-topic (2 messages):
- GPU MODE ▷ #irl-meetup (1 message):
- GPU MODE ▷ #triton-puzzles (1 message):
- GPU MODE ▷ #rocm (3 messages):
- GPU MODE ▷ #liger-kernel (3 messages):
- GPU MODE ▷ #self-promotion (2 messages):
- GPU MODE ▷ #🍿 (4 messages):
- GPU MODE ▷ #thunderkittens (3 messages):
- Interconnects (Nathan Lambert) ▷ #news (14 messages🔥):
- Interconnects (Nathan Lambert) ▷ #ml-questions (4 messages):
- Interconnects (Nathan Lambert) ▷ #ml-drama (1 message):
- Interconnects (Nathan Lambert) ▷ #reads (3 messages):
- Interconnects (Nathan Lambert) ▷ #posts (3 messages):
- Torchtune ▷ #general (3 messages):
- Torchtune ▷ #dev (20 messages🔥):
- DSPy ▷ #show-and-tell (6 messages):
- DSPy ▷ #papers (1 message):
- DSPy ▷ #general (14 messages🔥):
- OpenInterpreter ▷ #general (17 messages🔥):
- OpenInterpreter ▷ #O1 (1 message):
- OpenInterpreter ▷ #ai-content (2 messages):
- LAION ▷ #general (16 messages🔥):
- LAION ▷ #research (2 messages):
- LlamaIndex ▷ #blog (3 messages):
- LlamaIndex ▷ #general (13 messages🔥):
- Cohere ▷ #discussions (3 messages):
- Cohere ▷ #questions (4 messages):
- Cohere ▷ #api-discussions (4 messages):
- Cohere ▷ #projects (1 message):
- Modular (Mojo 🔥) ▷ #general (2 messages):
- Modular (Mojo 🔥) ▷ #mojo (7 messages):
- OpenAccess AI Collective (axolotl) ▷ #general (5 messages):
- Alignment Lab AI ▷ #ai-and-ml-discussion (1 message):
- Alignment Lab AI ▷ #general (1 message):
- LLM Agents (Berkeley MOOC) ▷ #mooc-questions (2 messages):
- tinygrad (George Hotz) ▷ #learn-tinygrad (1 message):
- Mozilla AI ▷ #announcements (1 message):
AI Twitter Recap
all recaps done by Claude 3.5 Sonnet, best of 4 runs.
ChatGPT Search and AI-Powered Search
- ChatGPT Search Launch: @sama announced the launch of ChatGPT Search, noting positive early reviews from friends. He also stated that search is his favorite feature launched in ChatGPT since the original launch, and that it has doubled his usage over the past few weeks.
- Comparison with Other Search Tools: @_akhaliq shared a comparison between ChatGPT search and Perplexity. @AravSrinivas highlighted improvements in Perplexity's navigational queries, making it easier to navigate the web.
- Google's Grounding Feature: Google launched a "Grounding" feature with Google Search in the Gemini API & AI Studio, allowing Gemini models to access up-to-date information from web searches at runtime, as noted by @labenz.
- Developer Adoption: Despite Gemini's high performance on leaderboards, @labenz questioned why it seems to be the third priority for most developers, behind OpenAI and Anthropic.
AI Model Releases and Updates
- SmolLM2: @LoubnaBenAllal1 announced the release of SmolLM2, a new set of small, powerful language models optimized for on-device use, outperforming Meta's Llama 3.2 1B.
- Claude Desktop App: @alexalbert__ announced the release of a Claude desktop app for Mac and Windows.
- Meta's Robotics Developments: @AIatMeta announced three new developments in robotics and touch perception: Meta Sparsh, Meta Digit 360, and Meta Digit Plexus.
- Stable Diffusion 3.5 Medium: @mervenoyann mentioned the release of Stable Diffusion 3.5 Medium, a 2B model with a commercially permissive license.
AI Research and Insights
- AGI Development: @fchollet shared thoughts on the development of AGI, suggesting it will initially be worse than previous AI systems at most tasks but will improve rapidly.
- AI Regulation: @AnthropicAI published a piece advocating for targeted AI regulation sooner rather than later.
- Future of ML Specialization: @StasBekman discussed the future of ML specialization, suggesting that training LLMs will become the domain of a few companies, while inference expertise may become commoditized.
AI Tools and Applications
- Suno AI Personas: @suno_ai_ introduced Personas, a feature allowing users to save the essence of a song and reimagine it across creations.
- PromptQL: @svpino described PromptQL, a natural language API that executes Python and SQL-like queries on top of structured, unstructured, and API data.
- Agent S: @rohanpaul_ai shared information about Agent-S, an AI system that uses a computer like a human to solve diverse desktop tasks on different systems.
Memes and Humor
- @HamelHusain joked about upgrading their Python version in their base conda env, wishing for luck.
- @HamelHusain later updated that they're buying a new laptop.
- @jxnlco humorously asked why everyone at cafe lyria is so beautiful.
AI Reddit Recap
/r/LocalLlama Recap
Theme 1. AI Real-Time Game Generation Breakthrough
- This is fully ai generated, realtime gameplay. Guys. It's so over isn't it (Score: 612, Comments: 179): The post contains no body text to summarize; beyond the linked gameplay footage, there are no specific details or discussion points to recap.
Theme 2. Ollama Framework Security: Multiple CVEs Discovered
- More Models, More ProbLLMs: New Vulnerabilities in Ollama (Score: 71, Comments: 6): Six vulnerabilities were discovered in the Ollama framework that could allow attackers to cause denial of service, poison models, or steal models from hosts running the AI models through path traversal and malformed-request techniques. Four of the issues received CVE identifiers (CVE-2024-39719 through CVE-2024-39722).
- Ollama endpoint exposure concerns were discussed, with clarification that OpenWebUI implements its own OpenAI-compatible endpoint requiring API key authentication rather than directly proxying the Ollama API.
- Research by Oligo revealed that of the 6 vulnerabilities, 4 received CVEs while 2 were disputed as shadow vulnerabilities by maintainers. The flaws could enable DoS attacks, model poisoning, and model theft with a single HTTP request.
- Community members highlighted the benefits of open source security, noting how increased visibility leads to faster discovery and remediation of vulnerabilities, ultimately improving software quality.
Theme 3. Meta's MobileLLM: 125M Model Matches 500M Performance
- Minimum viable LLM (Score: 47, Comments: 19): Meta's 125M MobileLLM demonstrates unexpectedly coherent text generation capabilities, challenging previous assumptions about minimum model sizes needed for basic language tasks compared to the 1.5B parameter GPT-2. The post questions the theoretical minimum parameters needed for an LLM to produce grammatically correct and contextually relevant responses, suggesting potential parameter ranges from 50M down to 100K parameters.
- RAG and masking approaches could enable training smaller models focused on knowledge retrieval and logic rather than memorization, with implementations like optillm demonstrating unbounded context capabilities. Similar concepts appear in Google's REALM and RETRO models.
- Discussion explored minimal parameter requirements, with suggestions that 100K parameters could handle coherent text with a limited 40-70 word vocabulary, while others proposed even simpler solutions using basic programming constructs.
- Qwen2.5 0.5B was highlighted as an effective small-scale mobile LLM implementation. The model demonstrates practical viability of compact architectures for local deployment.
- MobileLLM (Meta - 125M, 350M, 600M, 1B models) (Score: 160, Comments: 29): Meta released a new family of MobileLLM models ranging from 125M to 1B parameters, specifically engineered for mobile device deployment and optimized for low-latency inference. The models achieve competitive performance against larger models while maintaining efficiency, with the 1B variant reaching 90% of the performance of a 7B model on standard benchmarks while using significantly less computational resources.
- Initial concerns about benchmark comparisons excluding Qwen 2.5 and Gemma 2 were addressed by noting the paper was published in February 2024, predating these models. Benchmark data shows MobileLLM 125M outperforming Qwen 2.5 0.5B on Hellaswag (65.3 vs 52.1).
- Discussion focused on model architecture and implementation, with suggestions for training two sub-models: one on a Knowledge Graph for logic and reasoning, another for prompt-to-graph transformation. The custom architecture makes it unlikely to work as a draft model for speculative decoding.
- Users expressed interest in mobile deployment capabilities, noting that llama.cpp doesn't yet support the new MobileLLMForCausalLM architecture. The 125M model shows promise for basic tasks like rewriting and summarization.
Theme 4. QTIP: Next-Gen 2-bit Quantization for 405B Models
- New Quantization Method -- QTIP: Quantization with Trellises and Incoherence Processing (Score: 124, Comments: 29): QTIP, a new LLM quantization algorithm using trellis coded quantization and incoherence processing, achieves state-of-the-art performance with 2-bit precision on models including a 405B Instruct model, outperforming QuIP# in quality while maintaining similar speed. The method, presented in a NeurIPS 2024 Spotlight paper, runs 2-3x faster than PV-Tuning with comparable or better quality, and is available through their GitHub repository and pre-quantized models on HuggingFace.
- QTIP integration into llama.cpp appears straightforward by replacing the QuIP#-based E8P vector quantizer with QTIP's trellis quantizer. The developer confirms compatibility and ease of implementation for potential future GGUF model improvements.
- The 405B model runs at $1.6/hour, with special TP8 models designed for 8-way tensor parallelism setups. These models perform random Hadamard transforms per-GPU instead of across all activations to optimize data transfer.
- Memory requirements for quantized models can be estimated from the bits per parameter: at 2-bit precision each weight takes ~0.25 bytes, making a 70B model require approximately 17.5GB VRAM when quantized.
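As a quick sanity check on that arithmetic, here is a minimal sketch (raw weight footprint only; KV cache and activation overhead come on top):

```python
def quantized_weight_gb(params_billion: float, bits_per_param: float) -> float:
    """Raw weight footprint in GB: parameters * (bits / 8) bytes each.
    Real deployments add KV cache and activation memory on top."""
    return params_billion * bits_per_param / 8

print(quantized_weight_gb(70, 2))    # 17.5 -> matches the ~17.5GB figure above
print(quantized_weight_gb(405, 2))   # ~101 GB for the 405B model
```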
Other AI Subreddit Recap
/r/machinelearning, /r/openai, /r/stablediffusion, /r/ArtificialInteligence, /r/LLMDevs, /r/Singularity
AI Development and Research
- Meta FAIR announced three new robotics developments including Meta Sparsh, a general-purpose encoder for vision-based tactile sensing trained on 460K+ tactile images, and Meta Digit 360, an artificial fingertip sensor with 18+ sensing features.
- A 3B parameter pre-trained generalist model was trained on 8+ robot platforms, demonstrating advances in robotics AI.
- Google quietly released "Learn about", a new AI tool for interactive learning on any topic.
AI Gaming and Graphics
- Completely AI-generated gameplay demonstrated real-time AI video game generation, though lacking object permanence.
- Technical details: Uses Oasis model (500M parameters)
- Demo available at oasis.decart.ai
- A LucasArts-style game was created using SDXL, demonstrating AI's capability in generating retro game assets.
- Workflow included using Fooocus with SDXL at 1408×704 resolution
- Used img2img for sprite animations
Product Updates and Announcements
- OpenAI released a new web search tool for ChatGPT, enabling up-to-date information access.
- Sam Altman discussed AI agents that could act as senior co-workers, collaborating on tasks for extended periods.
Memes and Humor
- An AI-generated image showing a finger in the camera demonstrated unintended artifacts in image generation.
- Various posts about Sam Altman's comments and tweets, including his apology for hyping products.
AI Discord Recap
A summary of Summaries of Summaries by O1-mini
Theme 1. AI Model Performance and Optimization
- Optimize AI Models on Local Hardware for Speed: Running a 70B model on a workstation with a 4090/7800x3D and dual 2080Ti setups achieves 6-12 tokens/sec. Concerns about CPU offloading creating performance bottlenecks highlight the need for optimized hardware configurations.
- FlashAttention-2 Boosts GPU Memory Efficiency: FlashAttention-2 enhances the attention mechanism by improving I/O operations and integrating hardware-aware features. Techniques like kernel fusion and tiling optimize memory access, achieving higher performance without sacrificing accuracy.
- SmolLM2 Models Deliver Lightweight Performance: The SmolLM2 family offers models with 135M, 360M, and 1.7B parameters, optimized for on-device applications. SmolLM2-1.7B enhances instruction following and reasoning, though it occasionally generates nonsensical outputs.
Theme 2. AI Deployment, APIs, and Cost Efficiency
- Explore Serverless Deployment for Hermes 3: A member seeks alternatives to together.ai for deploying Hermes 3 serverless since the platform only supports dedicated hardware. The search focuses on platforms offering serverless solutions tailored to specific deployment needs.
- Perplexity API Lacks Native Citation Support: The Perplexity API does not support obtaining citations, unlike other APIs. Users are exploring methods to incorporate citation capabilities effectively without native support, balancing functionality with cost-efficiency.
- Perplexity API Offers Cost-Effective Alternatives to OpenAI: Members highlighted that the Perplexity API is cheaper than OpenAI's offerings, sparking discussions about using Perplexity for cost-effective projects. This makes the Perplexity API an attractive option for developers balancing cost and feature availability.
Theme 3. AI Frameworks, Finetuning, and Tool Development
- Unsloth Finetuning Framework Enhances Custom Models: The Unsloth Finetuning Framework excels in tokenizer finetuning on domain-specific datasets, increasing model adaptability. Community members are eager to share their reusable work, fostering collaborative improvements.
- Aider v0.61.0 Adds File Command Features: The latest Aider v0.61.0 enables users to load and save slash-commands using `/save` and `/load`, facilitating complex command management. Aider also introduced anonymous, opt-in analytics, respecting user privacy while gathering usage insights.
- DSPy Integrates Typed Outputs for Simplified Implementation: DSPy signatures with types allow typed outputs to be obtained directly, streamlining implementation. The upcoming streaming DSPy completions by end of October will further enhance functionality, with users encouraged to provide feedback on desired use cases.
Theme 4. Research Innovations in AI
- Introducing the Forgetting Transformer for Long-Context Tasks: A member unveiled the Forgetting Transformer, which integrates a forget gate into the traditional Transformer architecture to improve performance on long-context tasks. This model outperforms standard Transformers and manages information retention without relying on position embeddings.
- TokenFormer Reshapes LLM Scalability with Tokenized Parameters: TokenFormer leverages the attention mechanism for interactions between tokens and model parameters, reducing the need for extensive retraining. This architecture addresses the unsustainable computational costs associated with scaling large transformer models.
- SAEs Decompose Text-to-Image Models for Better Control: Sparse Autoencoders (SAEs) can break down the generative processes of text-to-image models into interpretable components. This allows for enhanced control over aspects like image composition, local detail, and color management, pivotal for future developments.
Theme 5. Community Events, Announcements, and Giveaways
- Join the Llama Impact Hackathon for Prizes: The 3-day Llama Impact Hackathon in San Francisco from November 8-10 offers a $15,000 prize pool. Participants can win a $1,000 prize for the best use of LlamaIndex, encouraging innovative AI solution development using Llama 3.2 models.
- Meta FAIR Unveils Innovative Robotics Tools: At Meta FAIR, three new developments in robotics and touch perception were introduced, including Meta Sparsh. These tools are designed to empower the open source community in fields like medical research and manufacturing, fostering collaborative advancements.
- Steam Gift Card Giveaway for Alignment Lab AI Members: User tpojd is offering a $50 Steam Gift Card to the Alignment Lab AI community. Members were notified through both ai-and-ml-discussion and general channels, engaging the community with the giveaway.
PART 1: High level Discord summaries
Nous Research AI Discord
- Optimizing AI Model Performance on Local Hardware: A member detailed running a 70B model using a workstation with a 4090/7800x3D and a friend's dual 2080Ti setup, achieving 6-12 tokens per second with effective pipeline parallelism.
- Concerns were raised about CPU offloading potentially creating performance bottlenecks, emphasizing the need for optimized hardware configurations.
- Gemma2B's Extensive Tokenizer Vocabulary Enhances Complexity: Gemma2B is rated at 2.6B parameters due to its large tokenizer vocabulary, allowing it to handle diverse inputs more effectively.
- This complexity underscores the model's ability to process varied data, making it a versatile tool for complex AI engineering tasks.
- SmolLM2 Models Deliver Lightweight Performance for Devices: The SmolLM2 family offers models with 135M, 360M, and 1.7B parameters, optimized for on-device applications.
- SmolLM2-1.7B demonstrates improved instruction following and reasoning, despite occasionally generating nonsensical outputs.
- Meta Introduces Tiny LLMs for Efficient On-device Applications: Meta's Tiny LLMs are sub-billion parameter models designed for effective on-device use, accommodating hardware limitations.
- Supporting documentation includes the arXiv paper 2402.14905, detailing the models' capabilities and optimization strategies.
- Exploring Serverless Deployment Options for Hermes 3: A member is seeking alternatives to together.ai for deploying Hermes 3 serverless, as the platform only supports dedicated hardware.
- This search aims to identify platforms that offer serverless solutions, catering to specific deployment requirements.
Unsloth AI (Daniel Han) Discord
- Unsloth Finetuning Framework Excels in Customization: Participants praised the Unsloth Finetuning Framework for its ability to perform tokenizer finetuning on domain-specific datasets, enhancing model adaptability.
- Many members are eager to share their reusable work and insights with the community, fostering collaborative improvements.
- RAG Preferred Over Fine-Tuning for Chatbots: The community leaned towards using RAG instead of fine-tuning for a coding language chatbot due to its capability for more accurate queries.
- Discussions highlighted that RAG's effectiveness in handling complex queries makes it a superior choice despite initial preferences for fine-tuning.
- Optimal CUDA Versions Identified for Pretraining: CUDA 12.1 and 11.8 were identified as the best versions for supporting libraries required in continued pretraining and implementing RAG.
- Backward compatibility concerns were raised, particularly the lack of a compatible PyTorch version for CUDA 12.6.
- Addressing Deprecated Tokenizer Warnings: A member inquired about the deprecation warning: `Trainer.tokenizer is now deprecated. You should use Trainer.processing_class instead.`
- Another member clarified that this warning can be safely ignored, reducing concerns over immediate action.
- Resolving Llama 3.1 Notebook ImportError: An error, `ImportError: cannot import name 'EntryNotFoundError'`, was reported when using the Llama 3.1 notebook.
- Another member acknowledged the issue and committed to investigating a solution, ensuring smooth notebook operations.
Perplexity AI Discord
- Perplexity Pro Cancellation: A user expressed frustration over their Perplexity Pro subscription cancellation, questioning the reasons behind it. This led to a discussion about subscription value and recent updates in Perplexity's offerings.
- The cancellation raised concerns regarding the stability of Perplexity's premium services and prompted users to evaluate the benefits versus costs of maintaining their subscriptions.
- Comparisons with ChatGPT: Debate emerged about the advantages of Perplexity's model switching capability compared to ChatGPT's offerings following the launch of GPT Search. Users appreciate Perplexity's aesthetics and features but note potential challenges as competition increases.
- Some users highlighted the flexibility of model switching in Perplexity, while others pointed out that advancements in ChatGPT's functionalities could overshadow Perplexity's current offerings.
- Perplexity API Features: A member noted that the Perplexity API does not currently support obtaining citations, unlike features available in other APIs. This has raised questions about how to implement citation functionality effectively without that support.
- Users are exploring alternative methods to incorporate citation capabilities in their applications, given the absence of native citation features in the Perplexity API.
- Implementing RAG Functionality in Perplexity API: A member queried whether it was possible to implement RAG (Retrieval-Augmented Generation) functionality using the Perplexity API. They acknowledged that OpenAI supports RAG but have not tried it with Perplexity yet.
- This sparked discussions on the feasibility and potential approaches to replicate OpenAI's RAG features within the Perplexity framework, with some members expressing interest in experimenting further.
- Cost Comparison of Perplexity and OpenAI APIs: A member humorously pointed out that the Perplexity API is cheaper than OpenAI's API offerings. This sparked discussions about cost-effective API implementations for developers.
- Users are considering the Perplexity API as a more economical alternative for their projects, balancing cost savings with feature availability compared to OpenAI's solutions.
OpenAI Discord
- ChatGPT Search Launched with Subscription: Members discussed the new ChatGPT Search feature, which is included with the ChatGPT subscription at no extra cost, contrasting it with Perplexity which requires additional charges.
- Perplexity is praised for delivering richer results, sparking a debate on the advantages of each tool for various use cases.
- Advancements in AI-Generated Playable Games: Excitement surrounds the development of AI that can generate playable iterations of games like Minecraft, highlighting its potential in generative gaming.
- The Oasis model has been used to create a basic playable version of Minecraft, demonstrating foundational functionality for players.
- Challenges in Configuring D&D GPT for User Actions: Members reported difficulties in setting up their D&D GPT to restrict responses strictly to user-driven actions, such as spellcasting during battles.
- Suggestions include informing the model of expected game responses to maintain control over the gameplay narrative.
- Understanding Context Windows and Tokenization in LLMs: Discussions clarified that the context window defines the model's memory limit for tokens, while tokenization refers to how text is broken down into units for processing.
- Members emphasized that both prompt tokens and contextual tokens are treated similarly by the LLM, impacting response generation.
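To make the token-budget point concrete, a small sketch using the tiktoken library (cl100k_base is the encoding used by GPT-4-era models; treat the exact encoding choice as an assumption):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("Context windows are measured in tokens, not characters.")
print(len(tokens))  # number of tokens this sentence consumes

# Prompt tokens and accumulated conversation tokens draw from the same budget:
# everything sent to the model must fit within its context window.
```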
- Impact of Token Weighting on Model Responses: The concept of weighted tokens in responses was highlighted, noting that outputs from the Python tool have a weight of 1, equal to the system prompt due to their recency.
- Members discussed using browser inspector tools to verify token weightings during model interactions to ensure desired response prioritization.
LM Studio Discord
- LM Studio Drops Context at Capacity: Users highlighted that LM Studio starts losing contextual information once it reaches 100% capacity, impacting session continuity.
- One user proposed using a system prompt summary to preserve more relevant context during prolonged interactions.
- Open WebUI Faces API Hurdles with LM Studio: A user reported successful integration of Open WebUI with LM Studio but encountered difficulties in retrieving the model list due to API endpoint configurations.
- Another member pointed out that exposing Docker containers to the local network is essential for seamless access.
- HTML Rendering Glitches in LM Studio Models: There were reports of intermittent HTML rendering issues within LM Studio, causing confusion among users about its reliability.
- Concerns about security were raised, with suggestions to verify `htmlspecialchars` before execution, hinting at potential bugs in model iterations.
- IBM's Granite 1b-A400m Setup Requires Flash Attention: A user faced challenges generating responses with IBM's granite 1b-A400m q4_0 model in LM Studio, suspecting issues related to model quantization.
- Another user clarified that enabling Flash Attention is necessary for the model to function correctly, emphasizing critical setup steps.
- LM Studio's Multi-GPU Support Shows Varied Performance: Discussions emerged about whether LM Studio effectively supports multiple GPUs, with some users leveraging both GPUs for loading Codestral 22B.
- While multi-GPU support is present, performance inconsistencies were noted, especially across different vendor combinations.
OpenRouter (Alex Atallah) Discord
- Hermes 3 Consolidates 405B Version: The Hermes 3 405B extended version has been removed and merged into the standard variant, as announced on OpenRouter. This move aims to streamline model options for users.
- This consolidation reflects a strategic shift to enhance user experience by offering a unified model, reducing complexity in model selection.
- API v1 Models Migration Enhances Speed: The /api/v1/models API is migrating to a new cloud provider today, which is expected to improve caching and significantly boost response times.
- Post-migration, `per_request_limits` will always be set to null, particularly affecting users who are logged out or do not provide an API key; feedback is being solicited in the dedicated channel (a request sketch appears at the end of this section).
- Rubik's AI Search Interface Overhauled: The updated Rubik's AI search interface has been released, enhancing the advanced research assistant capabilities notably. Feedback is being sought through offered beta testing opportunities.
- Participants in the beta testing can receive 1 month free premium access to models like Mistral Large and Gemini-1.5 Pro using promo code `NEW24` at checkout.
- Hermes 3 Free Version Downtime: Users have reported that the free version of `hermes-3-llama-3.1-405b` is currently unresponsive in OpenRouter chat, while the standard version remains operational.
- The issue is considered temporary as models are still listed on OpenRouter, with ongoing discussions about potential resolutions.
- ChatGPT Model Updates Lack Search API: Users are discussing changes in performance with the latest chatgpt-4o model, noting the absence of search capabilities via API following recent releases.
- OpenAI admits that the model is frequently updated without user notifications, leading to concerns about consistency.
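Relating to the `/api/v1/models` migration noted above, a hedged request sketch (the field names follow the response shape at the time of writing and should be treated as assumptions):

```python
import requests

resp = requests.get("https://openrouter.ai/api/v1/models")
for model in resp.json()["data"][:3]:
    # Post-migration, per_request_limits comes back as null when no API key
    # is supplied, per the announcement above.
    print(model["id"], model.get("per_request_limits"))
```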
Notebook LM Discord
- Podcast Source Errors Cause Confusion: Users shared frustrations with the 'Add Source' feature and difficulties locating generated audio files post-podcast creation.
- A Geography teacher detailed challenges in implementing new tools for educational content and requested guidance on the process.
- Enhancements in Python Audio Processing: A participant discussed improvements to a Python utility for audio processing, including looping over timestamps to create segments and integrating with avatars.
- Ongoing development of 'Pause' and 'Resume' features for playback was highlighted to better manage audio cuts.
- Analyzing Google TTS Voice Quality: Google TTS voice quality varies across languages, with recommendations to use Google Cloud's Text-to-Speech for more natural sound in English.
- Users discussed creating dialogues with multiple speakers and noted constraints on audio length using Google Cloud's TTS features.
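For those following the Google Cloud TTS discussion, a minimal sketch using the google-cloud-texttospeech client (the voice name is illustrative; any English Neural2/WaveNet voice works similarly):

```python
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()
response = client.synthesize_speech(
    input=texttospeech.SynthesisInput(text="Welcome back to the deep dive."),
    voice=texttospeech.VoiceSelectionParams(
        language_code="en-US", name="en-US-Neural2-C"  # illustrative voice
    ),
    audio_config=texttospeech.AudioConfig(
        audio_encoding=texttospeech.AudioEncoding.MP3
    ),
)
with open("segment.mp3", "wb") as f:
    f.write(response.audio_content)
```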
- Excitement Over NotebookLM Podcast Features: Users are enthusiastic about NotebookLM's podcast feature, discussing the creation of multiple episodes and requesting deep dives into specific sources.
- A new user inquired about the podcast feature's capabilities and the process for conducting episodes.
- User Feedback on NotebookLM Performance: Members provided mixed feedback on NotebookLM’s automatic citation formats for web searches and questioned its audio extraction and transcription capabilities.
- Concerns were raised about the inability to import certain videos, with users seeking clarification on audio processing functionalities.
aider (Paul Gauthier) Discord
- Aider v0.61.0 Enhances File Command Features: The latest release, Aider v0.61.0, enables users to load and save slash-commands to files using `/save` and `/load`, facilitating complex command management during chats.
- New launch options like `--load` allow executing commands upon startup, improving the interactive experience for engineers.
- Aider Sets Coding Milestone with Code Contributions: In v0.61.0, Aider contributed 860 new lines of code, representing 68% of the release's new codebase, showcasing significant self-improvement capabilities.
- This substantial code addition highlights Aider's evolving role in its own development process.
- Anonymous Analytics Integrated to Respect Privacy: Aider introduced anonymous, opt-in analytics that excludes personal data, aiming to gather usage insights without compromising user privacy.
- This feature encourages participation to enhance Aider’s performance while maintaining trust among users.
- Patched.codes Enhances Custom AI Workflows: Patched.codes was introduced as a tool for customizable AI workflows, offering features like automatic documentation generation and summarized PR reviews to optimize post-code tasks.
- Users expressed interest in leveraging this tool to automate routine chores and streamline their development processes.
- Anthropic API's Token Counting Feature Added: A new token counting endpoint from Anthropic API, accessible here, allows users to send a request and receive a token count, aiding in managing token usage.
- This addition helps users prevent overspending on tokens caused by rapid automated requests, addressing usage management concerns.
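A hedged sketch of calling that endpoint (at launch it sat behind a beta header; check Anthropic's docs for the current path and headers):

```python
import os
import requests

resp = requests.post(
    "https://api.anthropic.com/v1/messages/count_tokens",
    headers={
        "x-api-key": os.environ["ANTHROPIC_API_KEY"],
        "anthropic-version": "2023-06-01",
        "anthropic-beta": "token-counting-2024-11-01",  # beta flag at launch
    },
    json={
        "model": "claude-3-5-sonnet-20241022",
        "messages": [{"role": "user", "content": "Hello, Claude"}],
    },
)
print(resp.json())  # e.g. {"input_tokens": 10}
```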
Stability.ai (Stable Diffusion) Discord
- Seeking ComfyUI Optimizations: A user with a Mac Studio M2 Max is seeking optimal setups for ComfyUI and requested community advice and experiences.
- Members recommended starting with Scott's ComfyUI tutorial videos to get familiar with the software.
- Questions About FP16 Model Availability: A community member inquired about the possibility of FP16 editions of the Stable Diffusion 3.5 models, reporting that FP16 runs 8x faster on their hardware.
- Another member confirmed that the Stable Diffusion 3.5 large model is available in FP16 and provided a link to access it on Hugging Face.
- Accessing Lora Trigger Words: A user asked how to check trigger words for the Lora they are using with ComfyUI, seeking efficient methods for access.
- Community advice directed them to the original download locations of the Lora to find detailed information regarding trigger words.
- Video Generation Model Recommendations: A discussion highlighted the use of Mochi-1 and CogVideoX for video generation, with a suggestion based on VRAM limitations.
- Members indicated that smaller models like the 5b and 2b variants could fit on systems with limited resources, while emphasizing that CogVideoX is best suited for lower VRAM.
- Lora-based Image Styling Template Needs: A user expressed a need for a Lora-based image styling template for ComfyUI, specifically one that generates images based on a selected Lora.
- They noted the difficulty in finding a template that isn't only for using multiple Loras simultaneously.
Eleuther Discord
- DEQ Models Wrestle with Instability: Training DEQ models presents significant challenges, including exploding train losses that require frequent restarts. Members discussed the dynamics of an 'infinitely deep' network contributing to these issues.
- One member humorously noted praying to rnjesus to avoid model failures, highlighting the community's frustration with the instability.
- Hypernetworks: Just Input Transformations?: Hypernetworks sparked debate as one member classified them solely as input-dependent transformations. Discussions included practical challenges like generating models with more parameters than the base.
- Others shared their implementation experiences, emphasizing the complexities and resource demands associated with deploying hypernetworks.
- Introducing the Forgetting Transformer: A member unveiled the Forgetting Transformer, which integrates a forget gate into the traditional Transformer architecture to boost long-context task performance. This model reportedly outperforms standard Transformers without relying on position embeddings.
- The community recognized the innovation, noting that the forget gate enables the model to better manage and retain relevant information over extended contexts.
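A rough sketch of the idea (not the paper's exact formulation): each position emits a gate in (0, 1), and attention logits are damped by the accumulated log-gates between key and query positions, so stale information decays data-dependently without position embeddings.

```python
import numpy as np

def forgetting_attention(q, k, v, f):
    """Causal attention with a forget gate: the logit for query t attending
    to key s is reduced by sum(log f) over positions s+1..t."""
    n, d = q.shape
    cum = np.cumsum(np.log(f))              # prefix sums of log-gates
    decay = cum[:, None] - cum[None, :]     # sum_{u=s+1..t} log f_u
    scores = q @ k.T / np.sqrt(d) + decay
    scores = np.where(np.tril(np.ones((n, n), dtype=bool)), scores, -np.inf)
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    return (w / w.sum(axis=1, keepdims=True)) @ v

n, d = 64, 32
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((n, d)) for _ in range(3))
f = 1 / (1 + np.exp(-rng.standard_normal(n)))  # sigmoid gates in (0, 1)
out = forgetting_attention(q, k, v, f)
```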
- Exploring Flow Matching and Speculative Decoding: Members explored flow matching and speculative decoding as alternatives to DEQs and UTs, aiming to optimize the accuracy-latency trade-off. These methods are touted for their efficient compute usage.
- While not direct competitors, the group agreed that flow matching and speculative decoding offer promising avenues for enhancing computational efficiency in model inference.
Latent Space Discord
- SmolLM2 is the new SOTA: SmolLM2, an open 1B-parameter language model, was introduced with training on up to 11 trillion tokens from various curated datasets, fully open-source under Apache 2.0.
- Members discussed its performance, where SmolLM2 1.7B outperformed other models, raising excitement for upcoming demos and community testing.
- Anthropic pushes for AI regulations: Anthropic published a blog post advocating for targeted AI regulation, highlighting the urgency of establishing guidelines sooner rather than later.
- This release is notably timed ahead of elections, leading to discussions about its implications for startup competition.
- Claude 3.5 Sonnet benchmarks break records: Frameworks powered by Claude 3.5 Sonnet have achieved a staggering 49% on SWE-bench Verified, surpassing the previous SOTA of 45%.
- This milestone has sparked interest in seeing further advancements and comparisons with other systems like Aider.
- Exciting new AI tools emerge: Blockade Labs introduced Blendbox, simplifying AI art creation with direct control over visuals, while Runway ML announced Advanced Camera Control for more intentional scene navigation.
- These innovations signal a trend towards user-friendly interfaces that enhance creative expression in AI-generated content.
- OpenAI's AMA reveals compute challenges: During a Reddit AMA, OpenAI CEO Sam Altman acknowledged that compute limitations are delaying product releases, complicating the path for deploying complex AI models.
- This discussion sheds light on the infrastructural challenges facing significant advancements in AI technology.
GPU MODE Discord
- FlashAttention-2 Enhances GPU Memory Optimization: FlashAttention-2 (2023) introduces advancements in the attention mechanism by improving I/O operations and integrating hardware-aware features, optimizing performance without compromising accuracy.
- These enhancements address redundant memory accesses between GPU HBM and SRAM, utilizing techniques like kernel fusion and tiling to ensure efficient operation within modern GPU architectures.
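To illustrate the tiling idea (a simplified sketch of the technique, not FlashAttention's CUDA implementation): keys and values are processed in blocks with an online softmax, so the full score matrix never has to live in memory at once.

```python
import numpy as np

def tiled_attention(q, k, v, block=64):
    """Single-head attention computed block-by-block over K/V with the
    online-softmax trick; on a GPU each tile would live in SRAM."""
    n, d = q.shape
    out = np.zeros_like(v)
    row_max = np.full(n, -np.inf)          # running max per query row
    row_sum = np.zeros(n)                  # running softmax denominator
    for s in range(0, n, block):
        kb, vb = k[s:s + block], v[s:s + block]
        scores = q @ kb.T / np.sqrt(d)     # (n, block) partial scores
        new_max = np.maximum(row_max, scores.max(axis=1))
        corr = np.exp(row_max - new_max)   # rescale previous partial state
        p = np.exp(scores - new_max[:, None])
        row_sum = row_sum * corr + p.sum(axis=1)
        out = out * corr[:, None] + p @ vb
        row_max = new_max
    return out / row_sum[:, None]

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((128, 32)) for _ in range(3))
s = q @ k.T / np.sqrt(32)
ref = np.exp(s - s.max(axis=1, keepdims=True))
assert np.allclose(tiled_attention(q, k, v), (ref / ref.sum(axis=1, keepdims=True)) @ v)
```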
- Massive Triton Kernels Dataset Released: A new Triton Kernels Dataset comprising over 2.5 million tokens and 3000 Triton kernels has been released, sourced from GitHub repository scraping and executing Torch Inductor on various models.
- Future plans include expanding the dataset by analyzing 200 GitHub repositories, adding explicit docstrings, performing deduplication, and ensuring all kernels are runnable to facilitate supervised finetuning.
- Discrepancies Between Triton and vLLM Outputs: Members have identified inconsistencies between Triton and vLLM outputs, particularly with the first entry's expected values, where Triton rounds to 18 compared to vLLM's 20 as seen in the vLLM repository.
- These discrepancies suggest potential numeric errors or differences in implementation, prompting further investigation to ensure computational consistency between the two frameworks.
- Composable Kernel Performance Strategies: The Composable Kernel (CK GEMM) targets achieving approximately 135TFlops, though performance may vary based on specific kernel settings.
- To mitigate bank conflicts, members are implementing an XOR-based permutation strategy, as demonstrated in the Composable Kernel GitHub, optimizing tensor operations and reducing register spills.
Interconnects (Nathan Lambert) Discord
- SmolLM2 Launch Integrates Open-Source Agility: Introducing SmolLM2, a 1B-parameter model trained on up to 11T tokens of curated datasets, released under the Apache 2.0 license with all datasets and scripts available.
- This model aims to establish a robust baseline for evaluating language models by incorporating exciting new features into NLP, fostering enhanced development and benchmarking.
- OpenAI o1-preview Unveiled: OpenAI announced the release of the `o1-preview` model on September 12, 2024; the effort was previously known as Q* and later as Project Strawberry.
- The launch seeks to clarify OpenAI o1 functionalities and improve user comprehension through a series of experiments and discussions.
- Decoding Reasoning in Language Models: A blog post explores Daniel Kahneman's System 1 and System 2 thinking, correlating them with language model inference processes, where traditional inference aligns with System 1 and reasoning involves analytical System 2 processes.
- Community members debated the implications of introducing 'reasoning tokens', questioning the feasibility of paralleling MCTS in practice due to potential increases in token consumption.
- Shift in Traditional NLP Evaluations: Discussions raised concerns about the decline in traditional NLP evaluations, especially within Natural Language Generation (NLG), as models are expected to excel without standardized benchmarks.
- Participants noted a transformation in the evaluation landscape, particularly impacting areas like summarization and machine translation, suggesting a need for updated benchmarks.
- Exploring Diffusion Techniques in Robotics: A participant initiated a discussion on the intersection of diffusion methods and robotics, highlighting potential applications and seeking collaborator interest.
- The inquiry led to further debates on the feasibility and existing research in applying diffusion-based approaches to enhance robotic functionalities.
Torchtune Discord
- Llama 4 Training on 100k H100: Llama 4 is currently undergoing training with 100k H100 units, demonstrating significant strides in AI development.
- A member highlighted the rapid progress by stating, 'what a crazy world we live in.'
- Meta's Potential Nuclear Ventures: Meta is humorously speculated to announce plans for building nuclear power plants.
- Another member suggested that such announcements could occur as soon as 2025.
- Graph Breaks during Activation Offloading: There are concerns regarding graph breaks and activation offloading when utilizing PPO, with reports of decreased performance and unchanged memory usage.
- A potential reason identified is the increased activations causing processing bottlenecks.
- PPO Configuration Issues Impacting Performance: Activation checkpoints must be enabled for activation offloading to function correctly, but some configurations may miss essential checks, affecting PPO performance.
- One member proposed examining the model’s output heads as a possible source of these issues during offloading.
- Profiling Techniques for GPU Time Analysis: Members are discussing the use of tlparse for identifying graph breaks and the importance of profiling GPU time to gain deeper insights into performance problems.
- Assistance was offered by a member to help with profiling and analyzing configurations once they are set up.
DSPy Discord
- DSPy Signatures Streamline Implementation: A member highlighted that using DSPy signatures with types allows for directly obtaining typed outputs, simplifying the implementation process.
- This approach reduces coding complexity by leveraging dspy.LM and dspy.JsonAdapter for scheme compliance.
- vLLM Enhances Server Generation: Another member suggested utilizing a server like vLLM that supports Outlines for constrained generation to request specific types such as bool.
- They demonstrated this by implementing `dspy.Predict("text -> is_factual: bool")`, ensuring seamless integration with existing frameworks (a minimal sketch appears at the end of this section).
- Streaming DSPy Completions Launch: Streaming DSPy completions are expected to be available natively by the end of October, following the preparation of the Async PR.
- Discussions are ongoing, with a GitHub issue inviting user feedback on desired use cases for dspy.Predict() functionalities.
- Synthetic Data Generation Challenges: A member inquired about using pre-trained base models in DSPy for synthetic data generation without extensive ICL examples.
- Another member explained that base models are difficult to prompt effectively due to the lack of instruction-tuning, making practical ICL examples crucial.
- Textgrad Integration Timeline: Users expressed interest in the integration timeline of Textgrad into DSPy, though specific details were not provided.
- A GitHub comment discussed current setups and potential streaming capabilities post-integration.
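As promised above, a minimal sketch of the typed-signature pattern (module and configuration calls follow recent DSPy releases; the model name is illustrative):

```python
import dspy

# Any LiteLLM-style backend works here; the model name is a placeholder.
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

# The `: bool` annotation makes DSPy parse the output into a typed field.
classify = dspy.Predict("text -> is_factual: bool")

result = classify(text="The Eiffel Tower is in Berlin.")
print(result.is_factual)  # a Python bool, e.g. False
```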
OpenInterpreter Discord
- Anthropic API Support Issues: After the latest update introducing Anthropic API Support, a member reported that scripts failed to run correctly compared to the previous version, leading to frustration.
- They suggested making the API integration optional and re-enabling the local model option that previously worked without problems.
- Meta FAIR Robotics Developments: Today at Meta FAIR, three innovative developments in robotics and touch perception were unveiled to empower the community.
- Meta Sparsh was highlighted as a versatile encoder for tactile sensing, enhancing the capabilities of robotic systems.
- Meta Sparsh Innovation: Meta Sparsh is introduced as the first general-purpose encoder for vision-based tactile sensing, trained on 460K+ tactile images using self-supervised learning for diverse applications.
- This technology is compatible with various tactile sensors and tasks, paving the way for more advanced robotics integrations.
- Open Source Community Impact: The new robotics tools from Meta are set to significantly impact the open source community, benefiting fields like medical research and manufacturing.
- Community engagement is encouraged to explore and apply these technologies, fostering collaborative advancements.
LAION Discord
- Patch Artifacts Frustrate Generators: A member expressed frustration about dealing with patch artifacts in autoregressive image generation, noting a potential necessity to use a VAE despite disliking them.
- "Still dealing with these patch artifacts. I HATE VAEs but it seems like I may be forced to use one."
- TokenFormer Reimagines Model Scalability: A new architecture called TokenFormer enhances flexibility by leveraging the attention mechanism for interactions between tokens and model parameters, thus mitigating the need for retraining entire models with architectural modifications.
- This approach addresses the unsustainable computational costs associated with scaling traditional transformer models as their sizes grow. Refer to TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters for more details.
- SAEs Unlock Inner Workings of Text-to-Image Models: A study revealed that Sparse Autoencoders (SAEs) can decompose the generative processes of text-to-image models into interpretable components, allowing for better control and analysis.
- These features relate to aspects such as image composition, local detail enhancement, and color management, making them pivotal for future model developments. See Unboxing SDXL Turbo with SAEs for more information.
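A toy sketch of what "decomposing into interpretable components" means mechanically (random weights stand in for a trained SAE; feature index 123 is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_dict = 512, 4096                    # 8x overcomplete dictionary
W_enc = rng.standard_normal((d_model, d_dict)) / np.sqrt(d_model)
W_dec = rng.standard_normal((d_dict, d_model)) / np.sqrt(d_dict)

acts = rng.standard_normal((16, d_model))      # stand-in for diffusion-block activations
features = np.maximum(acts @ W_enc, 0)         # sparse, interpretable features
reconstruction = features @ W_dec              # activations rebuilt from features

# "Control" is editing features before decoding, e.g. amplifying one that
# (hypothetically) encodes local detail:
features[:, 123] *= 3.0
steered = features @ W_dec
```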
- Lack of Attention in Diffusion Steps: Discussion pointed out that the diffusion step consists solely of a single MLP and does not have attention or awareness of adjacent patches, leading to continuity issues.
- "...the prediction of masked tokens provides the continuous vector to denoise."
- Meta's New Video Model: A member mentioned that Meta has rolled out a new model for generating video, hinting at innovations in the field.
- They encouraged others to refer to the paper linked for more information: Kaiming He et al..
LlamaIndex Discord
- Log Traces with Open Telemetry: Now, BrainTrustData allows you to log traces directly from LlamaIndex using Open Telemetry, enhancing your observability capabilities.
- This integration ensures that telemetry is clear and effective in complex production applications.
- Prepare for the Llama Impact Hackathon: The 3-day Llama Impact Hackathon in San Francisco is set to take place from November 8-10, offering a chance to win a $15,000 prize pool.
- Participants will build AI solutions using Meta's Llama 3.2 models with a special $1,000 prize for the best use of LlamaIndex.
- LlamaParse Introduces Exciting New Features: LlamaParse now boasts two new features: Continuous mode (in beta) for stitching together multi-page tables and an Excel spreadsheet output option for easy data extraction.
- Continuous Mode ensures that lengthy tables are presented seamlessly, improving the overall user experience.
- Conversion of Workflow to Tool is Possible: Members discussed the idea that any workflow can be converted into a tool using `FunctionTool`, as illustrated with a code snippet and sketched below.
- This allows workflows to be utilized in various query engines seamlessly.
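A minimal sketch of that pattern (`my_workflow` is a hypothetical, already-built Workflow instance; the wrapper follows LlamaIndex's FunctionTool API):

```python
from llama_index.core.tools import FunctionTool

async def run_my_workflow(query: str) -> str:
    """Run the (hypothetical) workflow and return its result as a string."""
    result = await my_workflow.run(query=query)
    return str(result)

workflow_tool = FunctionTool.from_defaults(
    async_fn=run_my_workflow,
    name="my_workflow_tool",
    description="Executes the custom workflow for a given query.",
)
```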
- Questions Arise About Workflows: A member inquired if workflows must be async and whether high-level engines will eventually be entirely reimplemented using workflows.
- Responses confirmed that workflows are inherently async, while future reimplementations might not be a focus, instead emphasizing better documentation and pre-built workflows.
Cohere Discord
- Framework Frenzy: LLM Component Builder: A member is developing an LLM framework that enables constructing components based on user prompts, aiming to enhance component generation for various applications.
- Currently, the framework supports Tailwind CSS exclusively, with plans to expand to other styling options. Issues with random text output are being addressed to refine the framework's performance.
- Thesis Thrust: Seeking Advisors: A member is seeking a collaborator or advisor for their master thesis and is looking for ways to expedite the process.
- Concerns were raised about the high volume of applications in the Cohere for AI Discord, potentially causing delays. The member asked, "Could there be a way to speed this up?", and encouraged sharing email addresses for better coordination.
- Command R Cost Cuts & Performance Boost: Inquiry was made about where to check reliability scores for Command R, leading to a reference to Cohere's blog on Command R fine-tuning.
- Command R fine-tuning offers superior performance on enterprise use cases and reduces costs by up to 15x compared to the largest models, highlighting significant economic benefits.
- Agent Application Assessment: The team is conducting a thorough review of agent building acceptance applications, focusing on candidates' relevant experience.
- Candidates can expect feedback as the team carefully evaluates each application to ensure qualified experience in agent building.
Modular (Mojo 🔥) Discord
- Mojmelo Project Invites Contributions: A member is actively working on Mojmelo, focusing on a native Matrix type and ML algorithms.
- An example usage with Logistic Regression is available here.
- Mojo's Parametric Power Ponders Limits: A discussion emerged on the parametric capability of Mojo, questioning what it cannot do.
- This reflects on Mojo's potential boundaries in its powerful feature set.
- Mojo Tests Hang on macOS GitHub Actions: A member reported issues with `mojo test` hanging during macOS GitHub Actions executions.
- This points out potential environment-specific challenges faced by developers.
- Syntactic Macros Lose Spark: A member expressed reduced enthusiasm for syntactic macros due to libraries creating small DSLs with limited documentation.
- This highlights a conflict with Mojo’s goal of simplicity.
- Malloc Faults Disrupt Mojo Inputs: A member reported malloc faults when Mojo's input method handles multiple user inputs.
- Despite a GitHub workaround, the problem persists, causing developer frustration.
OpenAccess AI Collective (axolotl) Discord
- Axolotl Docker Tagging Confusion: Users raised concerns over Axolotl's dynamic tags like `main-latest` and stable tags such as `main-20241031-py3.10-cu121-2.3.1`, questioning their suitability for production environments.
- There was a request for comprehensive documentation on the Axolotl docker image release strategy to clarify tagging practices.
- Stable Release Timeline: A member confirmed plans to initiate a stable release once recent PRs are merged, outlining the current progress of build tags.
- The upcoming stable release will be preceded by extensive testing to ensure its reliability for end-users.
- Axolotl Docker Release History: It was noted that the last stable release tag of the Axolotl docker image is outdated due to unreleased upstream dependencies.
- Optimism was expressed about updating these dependencies to facilitate a proper release to PyPI.
- Latest Build Stability Assurance: Assurances were made that the latest builds are stable, having undergone numerous end-to-end tests.
- This validation process aims to mitigate concerns regarding the use of current tags in production environments.
Alignment Lab AI Discord
- Steam Gift Card Giveaway: User tpojd is offering a $50 Steam Gift Card via this link.
- The announcement was made in both the ai-and-ml-discussion and general channels, notifying all members.
LLM Agents (Berkeley MOOC) Discord
- Member Seeks Guidance on Course Structure: A new member expressed enthusiasm about joining and requested guidance on the course structure and workflow.
- Community members responded warmly, offering support and detailed information to help the new member participate effectively.
- Course Website Provides Comprehensive Information: A member shared the course website to give access to all course information and assignments.
- This resource ensures that new members can easily locate necessary details to participate effectively.
tinygrad (George Hotz) Discord
- Wrap IOCTL or Use CUDA for Device Drivers?: A discussion revolves around whether it's better to wrap raw IOCTL commands or adopt a CUDA approach by loading a `.so` for command issuance.
- The nuances of the Hailo environment are highlighted, including its proprietary methods for interfacing.
- Hailo's C Library Wrapped in Python: The Hailo library utilizes a Python wrapper over its C code, offering a unique method for command execution.
- This approach enhances accessibility but raises questions about the underlying architecture and performance trade-offs.
- Proprietary Compilation of Neural Networks: Hailo requires neural networks to be compiled into HEF, a proprietary protobuf-based format, instead of traditional programs like CL shaders.
- Users must compile ONNX files specifically for this purpose, indicating a significant shift from conventional development practices.
Mozilla AI Discord
- Limited Spaces for Mozilla Builders Demo Day: Only limited spaces are available for the Mozilla Builders Demo Day on December 5th in San Francisco, California. Interested community members should submit their info through this form to apply.
- Attendees' information will be handled according to the Mozilla Privacy Policy.
- Event Timeline for December 5th: The event will take place at Convene, 40 O’Farrell St, from 8:30 AM to 3:00 PM with registration, breakfast, and live pitches of open-source AI projects.
- The schedule includes networking opportunities, a lunch break, and an AI Demo Science Fair in the afternoon. Participants are encouraged to submit their registration by next week as space is limited.
- Questions About the Event: For any inquiries regarding the event, members can reach out to Maite via Discord. Questions can also be posted here.
- This event marks the culmination of the Builders Accelerator program that began in mid-September.
The LangChain AI Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The LLM Finetuning (Hamel + Dan) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The MLOps @Chipro Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The DiscoResearch Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The Gorilla LLM (Berkeley Function Calling) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The AI21 Labs (Jamba) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
PART 2: Detailed by-Channel summaries and links
Nous Research AI ▷ #general (379 messages🔥🔥):
Running AI Models on Clusters
Encoder-Decoder vs. Decoder-Only Models
Creative Writing with LLMs
Goliath 120B Model Insights
Networking Considerations for Clusters
- Running AI Models Efficiently on Available Hardware: One member discussed leveraging their workstation (4090/7800x3D) and a friend's dual 2080Ti setup to run a 70B model, noting that they might achieve around 6-12 tokens per second with proper pipeline parallelism.
- Concerns were raised about the performance of CPU offloading and its implications on speed, highlighting potential performance bottlenecks when using CPU resources.
- Understanding Encoder-Decoder Architecture: A clarification was made regarding the structure of encoder-decoder models: encoders compress input into a vector while decoders decompress that vector into a related output.
- Discussion revealed that cross-attention is not exclusive to either the encoder or the decoder but serves as the mechanism connecting the two components (see the sketch after this list).
- Insights into Creative Writing with LLMs: Creative writing capabilities of various LLMs were discussed, with observations that smaller models tend to produce more creative outputs compared to larger counterparts that may feel rigid.
- The Goliath 120B model was recommended for its consistent performance and ability to resist obsolescence despite newer models emerging.
- Quantization Challenges in LLMs: There were comments on the quantization issues faced by the Goliath model, specifically the varying success of different quantizations due to original creation circumstances.
- Members noted potential quantization errors that led to inconsistent outputs across models, urging caution in model evaluation under different quantization methods.
- Networking Options for AI Clusters: For cluster networking, it was suggested to prioritize physical connections like Ethernet or M.2 -> Occulink connectors over Wi-Fi to avoid issues related to connectivity and latency.
- Using Wi-Fi was deemed acceptable for experimental setups, but long-term reliability concerns were highlighted, urging the use of wired connections for consistent performance.
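For readers unfamiliar with the encoder-decoder wiring discussed above, here is a minimal sketch of cross-attention using PyTorch's built-in MultiheadAttention; the dimensions and sequence lengths are illustrative assumptions, not any specific model's configuration.

```python
import torch
import torch.nn as nn

d_model = 64
cross_attn = nn.MultiheadAttention(embed_dim=d_model, num_heads=4, batch_first=True)

enc_out = torch.randn(1, 10, d_model)    # encoder: compressed representation of the input
dec_hidden = torch.randn(1, 5, d_model)  # decoder: hidden states for the output so far

# Cross-attention: queries come from the decoder, keys/values from the encoder.
# This is the mechanism that connects the two components.
out, weights = cross_attn(query=dec_hidden, key=enc_out, value=enc_out)
print(out.shape)      # torch.Size([1, 5, 64])
print(weights.shape)  # torch.Size([1, 5, 10]): each decoder position attends over encoder positions
```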
Links mentioned:
- MNIST Latent Space: no description found
- EQ-Bench Creative Writing Leaderboard: no description found
- Tweet from Alex Cheema - e/acc (@ac_crypto): 2 MacBooks is all you need. Llama 3.1 405B running distributed across 2 MacBooks using @exolabs_ home AI cluster
- Mac mini: Mac mini with the M4 and M4 Pro chips. Built for Apple Intelligence. With front and back ports. Financing options available. Buy now from apple.com.
- Peach And Goma Love Me GIF - Peach and goma Love me Self care - Discover & Share GIFs: Click to view the GIF
- Tweet from Andriy Burkov (@burkov): This is the system prompt for Apple Intelligence. Turns out Apple's prompt engineers are as clueless about how LLM work as all the others.
- Dark Souls Collapse GIF - Dark Souls Collapse Defeated - Discover & Share GIFs: Click to view the GIF
- GitHub - exo-explore/exo: Run your own AI cluster at home with everyday devices 📱💻 🖥️⌚: Run your own AI cluster at home with everyday devices 📱💻 🖥️⌚ - exo-explore/exo
- GitHub - lyogavin/airllm: AirLLM 70B inference with single 4GB GPU: AirLLM 70B inference with single 4GB GPU. Contribute to lyogavin/airllm development by creating an account on GitHub.
- Run LLAMA 3.1 405b on 8GB Vram: Script : https://www.patreon.com/posts/114566125Revolutionize your AI workflow with AIR-LLM - the game-changing tool that's breaking hardware barriers in LLM...
Nous Research AI ▷ #ask-about-llms (4 messages):
Gemma2B Tokenizer Vocabulary
Open-source Vector to Language Models
Hermes 3 Serverless Deployment
- Gemma2B's Huge Tokenizer Vocabulary: A member clarified that Gemma2B actually has about 2.6B parameters due to its extensive tokenizer vocabulary (a worked count follows this list).
- This highlights the model's complexity and its capability to handle diverse inputs effectively.
- Seeking Open-source Vector to Language Models: A member inquired about open-source models that can input a vector embedding and produce natural language, emphasizing its usefulness for their projects.
- This discussion underscores the growing interest in models that bridge embeddings with human-readable outputs.
- Searching for Better Hermes 3 Deployment Options: A member expressed a need to run something on Hermes 3 serverless, mentioning that together.ai only offered dedicated hardware.
- They are now exploring alternative platforms that may provide serverless options for their requirements.
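As a rough sanity check on that claim, here is the arithmetic for the embedding table alone; the vocabulary and hidden sizes are the commonly cited Gemma-2B figures and should be treated as assumptions, since the exact config may differ slightly.

```python
# Back-of-the-envelope parameter count for the embedding table alone,
# assuming Gemma-2B's commonly cited config: ~256K vocab, 2048 hidden size.
vocab_size = 256_000
hidden_size = 2_048

embedding_params = vocab_size * hidden_size
print(f"{embedding_params / 1e9:.2f}B parameters just for embeddings")  # ~0.52B

# With tied input/output embeddings, roughly half a billion of the ~2.6B
# total sits in the vocabulary table, which is why a "2B" model with a
# huge tokenizer ends up noticeably larger than its name suggests.
```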
Nous Research AI ▷ #research-papers (1 messages):
trre: https://openreview.net/forum?id=q2Lnyegkr8
Nous Research AI ▷ #interesting-links (6 messages):
SmolLM2 family
Tiny LLMs by Meta
BART model optimization
- SmolLM2 models impress with lightweight capability: The SmolLM2 family includes compact models with sizes of 135M, 360M, and 1.7B parameters, optimized for on-device tasks, generating valid but sometimes nonsensical outputs.
- Generating valid text that doesn't always make sense highlights the limits of such small models, while the 1.7B version showed advances in instruction following and reasoning.
- Meta's Tiny LLMs for efficient on-device use: Meta recently introduced Tiny LLMs, optimizing sub-billion parameter models for effective on-device applications.
- This approach aims to facilitate tasks while accommodating device limitations, with papers detailing the models' capabilities, including one published under arXiv 2402.14905.
- Speedy BART model gains traction: The GitHub project BARTZ offers a super-fast, GPU-capable implementation of Bayesian Additive Regression Trees, the statistical BART rather than the NLP model of the same name.
- This development promises increased efficiency for users needing fast Bayesian tree ensembles in Python.
Links mentioned:
- MobileLLM - a facebook Collection: no description found
- HuggingFaceTB/SmolLM2-1.7B-Instruct · Hugging Face: no description found
- GitHub - Gattocrucco/bartz: Super-fast BART (Bayesian Additive Regression Trees) in Python: Super-fast BART (Bayesian Additive Regression Trees) in Python - Gattocrucco/bartz
Nous Research AI ▷ #research-papers (1 messages):
trre: https://openreview.net/forum?id=q2Lnyegkr8
Unsloth AI (Daniel Han) ▷ #general (249 messages🔥🔥):
Unsloth Finetuning Framework
Continual Pre-Training
Dataset Size for Training
Gradient Accumulation
Instruction Tuning
- Unsloth Shines in Customization: Participants shared their appreciation for Unsloth's customizable finetuning framework, specifically for tokenizer finetuning on domain-specific datasets.
- Many expressed excitement about sharing their own reusable work and insights with the community.
- Challenges with Continual Pre-Training: Users discussed challenges with continual pre-training on small datasets like Tiny Stories, where the model struggled to retain specific information.
- Suggestions included enhancing dataset quality, adding instructions, and increasing the dataset size for better context.
- Optimal Settings for Training: Discussion emerged regarding optimal values for parameters like rank (r) during training, with recommendations for smaller ranks such as 32 or 128 for certain models.
- Users debated the significance of the dataset size and how it impacts model performance and recall of domain-specific knowledge.
- Fixing Errors in DPO Training: Users encountered errors related to DPO training, specifically indicating a required upgrade for the TRL library to version 0.12.
- Suggestions were made to troubleshoot the issues by reviewing error messages and ensuring compatibility with the latest library versions.
- Future Model Integration: Participants expressed interest in potential future integrations, such as the Pixtral model, and the possibility of using vision converters for fine-tuning.
- The conversation highlighted a collaborative approach to exploring new models and enhancing existing frameworks.
Links mentioned:
- unsloth/SmolLM2-1.7B-Instruct-GGUF · Hugging Face: no description found
- Tweet from elie (@eliebakouch): Hey babe, wake up, we just dropped a new SmolLM 🫡 Fully open-source. We’ll release a blog post soon to detail how we trained it. I'm also super excited about all the demos that will come in the ...
- Aya Expanse: Connecting Our World: Cohere For AI launches Aya Expanse, a state-of-the-art multilingual family of models to help close the language gap with AI.
- Bug Fixes in LLM Training - Gradient Accumulation: Unsloth's Gradient Accumulation fix solves critical errors in LLM Training.
- unsloth/SmolLM2-1.7B-bnb-4bit · Hugging Face: no description found
- unsloth/SmolLM2-1.7B · Hugging Face: no description found
- Tweet from Daniel Han (@danielhanchen): Found more bugs for #Gemma: 1. Must add 2. There's a typo for model 3. sqrt(3072)=55.4256 but bfloat16 is 55.5 4. Layernorm (w+1) must be in float32 5. Keras mixed_bfloa...
- Tweet from Daniel Han (@danielhanchen): Fixed a bug which caused all training losses to diverge for large gradient accumulation sizes. 1. First reported by @bnjmn_marie, GA is supposed to be mathematically equivalent to full batch training...
- Fixing bugs in Gemma, Llama, & Phi 3 (Daniel Han): The story behind our 8 bug fixes for Gemma, multiple tokenization fixes for Llama 3, a sliding window bug fix and Mistral-fying Phi-3, and learn about how we...
- Lecture 32: Unsloth: no description found
- Hacks to Make LLM Training Faster - Daniel Han, Unsloth AI: As open-source LLMs have become more capable, a substantial ecosystem has developed around the fine...
- Unsloth.ai: Easily finetune & train LLMs: no description found
Unsloth AI (Daniel Han) ▷ #help (25 messages🔥):
CUDA Version Support
Deprecated Tokenizer Warning
RAG vs Fine-Tuning for Chatbots
Graph RAG vs Light RAG
Llama 3.1 Notebook Errors
- Best CUDA Version for Pretraining: Discussions highlighted that CUDA 12.1 and 11.8 have the best support for libraries needed for continued pretraining and implementing RAG.
- Backward compatibility was debated, particularly regarding the absence of a compatible PyTorch version for CUDA 12.6.
- Understanding Deprecated Tokenizer: A member queried about the deprecation warning stating Trainer.tokenizer is now deprecated. You should use Trainer.processing_class instead.
- Another member indicated that this is merely a warning that can be ignored.
- Debate on RAG vs Fine-Tuning: A member asked about the best process for fine-tuning a model, weighing RAG against fine-tuning for a coding language chatbot.
- Consensus leaned towards RAG for its ability to ground answers in retrieved sources, despite initial thoughts favoring fine-tuning (a minimal retrieval sketch follows this list).
- RAG Framework Recommendations: Suggestions surfaced regarding RAG frameworks, with one member stating that Graph RAG is reputable, but Light RAG might be comparable or superior.
- This sparked interest in finding the best RAG methodology for a shared chatbot project.
- Errors in Llama 3.1 Notebook: A member reported running into an error, specifically an ImportError: cannot import name 'EntryNotFoundError' when using the Llama 3.1 notebook.
- Another member acknowledged the error and promised to look into it.
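As a companion to the RAG-vs-fine-tuning discussion, here is a minimal retrieval sketch; the model name and toy corpus are illustrative assumptions, and a production system would use a proper vector store rather than in-memory cosine search.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Toy corpus standing in for domain docs (e.g. a coding language's reference).
docs = [
    "The `spawn` keyword starts a lightweight task.",
    "Use `match` for exhaustive pattern matching.",
    "Errors are values; propagate them with `?`.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_emb = model.encode(docs, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    q = model.encode([query], normalize_embeddings=True)
    scores = (doc_emb @ q.T).ravel()  # cosine similarity (vectors are normalized)
    return [docs[i] for i in np.argsort(-scores)[:k]]

query = "How do I handle errors?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # feed this to the chat model of your choice
```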
Perplexity AI ▷ #general (196 messages🔥🔥):
Perplexity Pro Cancellation
Comparisons with ChatGPT
Model Switching Benefits
Claude Sonnet Performance
Real-time Data and API Usage
- Perplexity Pro Cancellation Causes Confusion: A user expressed frustration over their Perplexity Pro subscription cancellation, questioning the reasons behind it.
- This led to a discussion about subscription value and the recent updates in Perplexity's offerings.
- Many Users Analyze Perplexity vs ChatGPT: Debate emerged about the advantages of Perplexity's model switching capability compared to ChatGPT's offerings following the launch of GPT Search.
- While some users appreciate the aesthetics and features of Perplexity, they also note potential challenges ahead as competition increases.
- Comparing Model Performance: Perplexity vs Competitors: Conflicting experiences noted as a user mentioned inconsistent performance with Perplexity's models, especially after the recent Claude Sonnet refresh.
- Others shared positive experiences while utilizing the new model but acknowledged discrepancies in results compared to competitors.
- User Insights on Real-time API Data: A user inquired about using the Perplexity API for real-time statistics, particularly regarding accuracy and hallucinations in outputs.
- This sparked interest around the structure of outputs provided by Perplexity AI and its potential for live data queries.
- Discussion on Perplexity's Future Features: Users commented on the desire for Perplexity to incorporate features similar to Claude AI for improved functionality.
- This included suggestions for incorporating new artifacts and improving the competitive stance against other AI models and search products.
Link mentioned: Tweet from Aravind Srinivas (@AravSrinivas): Been enjoying using the Grok 2 model. Now on Perplexity iOS app too for Pro users. (Restart app if you don’t see it on “Settings->AI Model”)
Perplexity AI ▷ #sharing (7 messages):
Google's Decillion Fine
Toxic Black Plastic
Ecuador's Forest Song
Volume, Form, and Mass
Aluminium Price Predictions
- Google hit with a $20 Decillion fine: Google faces a staggering $20 Decillion fine for undisclosed reasons, raising eyebrows in the tech community. More details can be explored in this YouTube video.
- This fine is touted as one of the largest in tech history and has significant implications for Google's future operations.
- New concerns over toxic black plastic from e-waste: A study reveals alarming findings about toxic black plastic derived from e-waste, which poses serious environmental risks. People are urged to dig deeper into the issues surrounding e-waste management.
- The report emphasizes the need for improved recycling practices to mitigate these hazardous materials.
- Ecuador's forest inspired a song: In a whimsical turn of events, Ecuador's forest wrote a song, showcasing the connection between nature and culture. This unique project aims to raise awareness about forest conservation.
- Exploration of this project highlights how art can serve as a catalyst for environmental action.
- Exploring the concept of Volume, Form, and Mass: A discussion on the concept of Volume, Form, and Mass sheds light on fundamental art and design principles critical for creators. More insights can be found here.
- Understanding these concepts is vital in shaping perceptions of space and materials in artistic endeavors.
- Predictions for global aluminium prices: Market analysts have speculated on the future of global aluminium prices, focusing on supply chain impacts. Details on these predictions can be found in the report.
- These predictions are influenced by various global economic factors, indicating a potentially volatile market ahead.
Link mentioned: YouTube: no description found
Perplexity AI ▷ #pplx-api (6 messages):
Pplxity API Features
Implementing RAG Functionality
Cost Comparison
Chatbot Functionality
- Pplxity API lacks citation feature: A member noted that the Pplxity API does not currently support obtaining citations, in contrast to features available in other APIs.
- This has raised questions about how to implement citation functionality effectively without that support.
- Querying chatbot functionalities: Another member expressed interest in implementing chatbot functionality similar to OpenAI's but sought clarification on its feasibility with the Pplxity API.
- They targeted functionalities that could mimic the features available in OpenAI's offerings.
- Exploring RAG functionality: A member queried whether it was possible to implement RAG (Retrieval-Augmented Generation) functionality using the Pplxity API.
- They acknowledged that OpenAI supports RAG, but they had not tried it using Pplxity yet.
- Cost benefits of using Pplxity: A member humorously pointed out that the Pplxity API is cheaper than OpenAI's API offerings.
- This sparked discussions about cost-effective API implementations for developers.
- Referencing Pplxity Documentation: A member directed others to the official Pplxity FAQ for more detailed information.
- This resource is expected to clarify further questions regarding API capabilities and functionalities.
Link mentioned: no title found: no description found
OpenAI ▷ #ai-discussions (165 messages🔥🔥):
ChatGPT Search
Image Generation Models
AI and Human Interaction
Community Contributions
AI Impact on Employment
- Exploring ChatGPT Search Functionality: Members discussed their experiences with ChatGPT Search, noting that it came with the ChatGPT subscription without extra charges, unlike Perplexity.
- Perplexity is considered to deliver richer results, creating a conversation around the pros and cons of both tools for different use cases.
- New Image Generating Capabilities: A member highlighted the excitement around AI that generates playable iterations of games like Minecraft, showing potential in generative gaming.
- It was also mentioned that a company called Oasis created a version of Minecraft that currently has basic functionality for players.
- The Future of AI and Employment: Concerns were raised about AI potentially taking over all jobs, leading to questions about sustainable economic models and how society would manage resources.
- A hypothetical discussion suggested that while AI could run all jobs sustainably, humanity's flawed nature casts doubt on achieving such a utopia.
- Community Engagement and Puzzler Role: Members shared insights into the criteria for becoming a puzzler in the Discord community, noting the importance of positive contributions.
- Expressions of desire for the puzzler role were common, with discussions on how to earn recognition within the community.
- AI Sentience and Ethical Considerations: A philosophical discussion emerged around the nature of AI and sentience, questioning what it means for AI to be 'freed' and its implications.
- Comparative analogies were drawn between human embryos and AI, emphasizing the dependency and control aspects of both.
Link mentioned: LiveBench: no description found
OpenAI ▷ #gpt-4-discussions (1 messages):
Nouswise Multi-File Connection
- Nouswise succeeds in multi-file connections: A member suggested that the team at Nouswise has successfully figured out how to connect multiple files.
- They encouraged others to try it out for themselves.
- Potential benefits of multi-file connections: The discussions highlighted the potential advantages of connecting multiple files together, such as improved efficiency and organization in workflows.
- Members expressed curiosity about how this feature could enhance their projects.
OpenAI ▷ #prompt-engineering (13 messages🔥):
D&D GPT limitations
Context windows and system prompts
Tokenization in LLMs
Message history importance
Weighting in model responses
- D&D GPT struggles with user-directed actions: A member expressed challenges with configuring their D&D GPT to limit responses strictly to the effects of user actions, like casting a spell in a battle.
- Another member suggested informing the model about expected game responses to maintain control over the gameplay flow.
- Understanding context windows vs. prompting: A member asked for clarification on context windows and system prompts, querying whether message history is distinct from actual prompting.
- It was explained that the context window defines the model's memory limit, while system prompts set behavioral guidelines for the model.
- Clarifying tokens in LLM interactions: Discussion arose around the nature of tokens, leading to clarification that tokens are units of text that can vary in length, and both prompts and contextual tokens are treated similarly by the LLM.
- One member highlighted the importance of understanding tokenization and its impact on responses (see the tokenizer example after this list).
- Response weighting in LLM interactions: A member brought up the concept of weighted tokens in responses, pointing out that Python tool returns take priority over standard prompts due to their recent context.
- The conversation included insights on using a browser inspector tool to verify token weightings during model interactions.
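To ground the tokenization points above, here is a quick demonstration with OpenAI's tiktoken library; the encoding name is the one commonly used for GPT-4-class models, and the sample strings are arbitrary.

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

for text in ["cat", "caterpillar", "D&D", "fireball!"]:
    ids = enc.encode(text)
    pieces = [enc.decode([i]) for i in ids]
    print(f"{text!r} -> {len(ids)} tokens: {pieces}")

# Tokens vary in length: common words may be a single token, while rarer
# strings split into several. Prompt tokens and context tokens are encoded
# the same way before the model ever sees them.
```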
OpenAI ▷ #api-discussions (13 messages🔥):
D&D GPT Interaction
Context Windows and Tokenization
Message History vs. Prompting
Weight of System Prompts
Python Tool Returns
- D&D GPT limits user action responses: In discussions about creating a D&D DM GPT, members addressed the need to limit AI responses to the effects of user actions, such as casting a spell during a battle.
- One member emphasized that the AI observes and follows user directions, which can prevent premature narrative conclusions.
- Understanding Context Windows and Tokens: It was clarified that the context window represents the model's maximum memory for tokens, while message history pertains to the ongoing flow of conversation.
- Members agreed that while both context tokens and prompt tokens are fundamentally the same, pasting a whole conversation history does not preserve the natural dialogue flow.
- Weighting of Tokens in AI Responses: Discussions highlighted that there are weights applied to message tokens, typically set to 0 for recent messages, with priority given to recent context.
- In particular, outputs from the Python tool carry a weight of 1, giving them the same importance as the system prompt due to their recency.
LM Studio ▷ #general (142 messages🔥🔥):
LM Studio context issues
Open WebUI for LM Studio
HTML rendering in models
IBM's Granite model requirements
- LM Studio struggles with context management: Users discussed context length limitations in LM Studio, noting that it starts dropping old information after reaching 100% capacity.
- One user suggested that utilizing a system prompt summary could help retain more relevant context during extended sessions.
- Integrating Open WebUI with LM Studio: A user shared that they successfully connected Open WebUI to LM Studio, but faced issues retrieving a model list due to API endpoint configuration.
- Another user mentioned that proper setup requires exposing Docker containers to the local network for seamless access (a minimal client sketch follows this list).
- HTML rendering capability in models: Some users experienced intermittent HTML rendering from the AI during sessions, expressing confusion about when it would function properly.
- Concerns were raised about security and verifying htmlspecialchars before execution, suggesting it might have been a bug in model iterations.
- Requirements for IBM's Granite model: A user reported issues generating responses with IBM's granite 1b-A400m q4_0 model in LM Studio, questioning if it was due to model quantization.
- Another user clarified that Flash Attention must be enabled for the model to function correctly, highlighting important setup considerations.
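For anyone wiring up a client like Open WebUI, here is a minimal sketch of talking to LM Studio's local OpenAI-compatible server; the port (1234) and the placeholder API key are assumptions based on LM Studio's usual defaults, so check the server tab in your own install.

```python
from openai import OpenAI

# LM Studio's local server speaks the OpenAI API; base_url and key are
# assumptions based on its usual defaults.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

models = client.models.list()  # the model list a frontend like Open WebUI needs
print([m.id for m in models.data])

resp = client.chat.completions.create(
    model=models.data[0].id,
    messages=[{"role": "user", "content": "Say hello in five words."}],
)
print(resp.choices[0].message.content)
```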
Link mentioned: GitHub - open-webui/open-webui: User-friendly AI Interface (Supports Ollama, OpenAI API, ...): User-friendly AI Interface (Supports Ollama, OpenAI API, ...) - open-webui/open-webui
LM Studio ▷ #hardware-discussion (17 messages🔥):
LM Studio limitations
CPU performance issues
Multi-GPU support
Inference speed on cards
- Hyperthreading struggles with LM Studio: Concerns were raised about a potential soft cap in LM Studio that limits hyperthreading performance, especially with a 24c/48t CPU configuration.
- Some members reported that threading sliders have minimal effect if inference is on the GPU, while others found benefits on CPU for large models.
- Inference on CPU is sluggish: Members noted that inference performance on CPU is often hindered, citing potential limitations stemming from RAM speed.
- One user with a 5950X and 128GB RAM reported performance issues during CPU inference, suggesting that larger model constraints affect usability.
- Confusion about multi-GPU support: Questions arose regarding whether LM Studio truly supports multiple GPUs, as some reported using both cards for loading Codestral 22B.
- Others confirmed that while it offers multi-GPU support, performance may vary, particularly with different vendor combinations.
- Preference for compute on powerful GPUs: A user expressed frustration at LM Studio defaulting to a weaker GPU instead of the more powerful 3080 for computation.
- This sentiment echoed the group's desire for improved performance and usability over competing frameworks like kobold.cpp.
- Appealing to Apple users regarding CPU models: A light-hearted comment was made that only smaller models (<=3B) are practical on CPU, particularly for Apple users.
- Another member humorously expressed interest in seeing high-channel Epyc processors handle inference tasks, highlighting memory bandwidth concerns.
OpenRouter (Alex Atallah) ▷ #announcements (2 messages):
Hermes 3 405B removal
/api/v1/models API speedup
- Hermes 3 405B Version Consolidation: The Hermes 3 405B extended version has been removed and consolidated into the standard variant, as detailed in the official announcement on OpenRouter.
- This change reflects a shift towards streamlining the available models for a better user experience.
- API v1 Models Speeds Up: The /api/v1/models API is migrating to a new cloud provider today, which will improve caching and significantly enhance speed.
- Post-migration, `per_request_limits` will always be set to null, particularly affecting users who are logged out or send no API key; feedback is sought in the dedicated channel (see the example after this list).
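A quick way to observe the post-migration behavior described above, sketched against the public models endpoint; whether `per_request_limits` appears at the top level of this response is an assumption based on the announcement, so the `.get()` guards against it living elsewhere.

```python
import requests

# Public endpoint; no API key sent, mimicking a logged-out user.
resp = requests.get("https://openrouter.ai/api/v1/models", timeout=30)
resp.raise_for_status()
payload = resp.json()

print(len(payload["data"]), "models listed")
# Per the announcement, this should now always be null (None in Python)
# for unauthenticated requests:
print("per_request_limits:", payload.get("per_request_limits"))
```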
Link mentioned: Hermes 3 405B Instruct - API, Providers, Stats: Hermes 3 is a generalist language model with many improvements over Hermes 2, including advanced agentic capabilities, much better roleplaying, reasoning, multi-turn conversation, long context coheren...
OpenRouter (Alex Atallah) ▷ #app-showcase (1 messages):
Rubik's AI search interface
Beta testing opportunity
Promotional offer for premium access
- Rubik's AI Search Interface Gets a Makeover: The updated Rubik's AI search interface has been launched, focusing on enhancing the advanced research assistant functionality significantly.
- The team is eager for feedback on the new interface and is offering an opportunity to participate in beta testing.
- Call for Beta Testers: The community is invited to become beta testers for the revamped interface over the coming weeks, with participants receiving 1 month free premium access to top models including Mistral Large and Gemini-1.5 Pro.
- Interested users can utilize the promo code `NEW24` at checkout to experience the new features.
- Explore More About Rubik's AI: For detailed information about the updates and offers, users can visit Rubik's AI and review the Terms and Privacy Policy.
- Additionally, there's an option to join the Discord community for ongoing discussions and support.
Link mentioned: Rubik's AI - AI Research Assistant & Search Engine: Access powerful AI models for NLP, computer vision, and more. Get instant answers from Groq, Claude-3.5 Sonnet, and GPT-4o.
OpenRouter (Alex Atallah) ▷ #general (137 messages🔥🔥):
Hermes 3 Issues
OpenRouter Setup
Alternatives to Hermes 3
ChatGPT Model Changes
Novel Writing Tools
- Hermes 3 free version currently down: Users report that the free version of `hermes-3-llama-3.1-405b` is hanging and not returning responses in OpenRouter chat, while the standard version is functioning correctly.
- The issue is believed to be temporary, as the models are still listed on OpenRouter.
- Setting up OpenRouter account for novel writing: New users are encouraged to use their OpenRouter API key in conjunction with tools like Novel Crafter for writing support.
- Novel Crafter allows seamless integration, letting users manage their stories effectively.
- Searching for alternatives to Hermes 3: Users are looking for free alternatives to Hermes 3, with `llama-3.1-405b-instruct` suggested as a potential option.
- However, some users express that no other models match the user experience provided by Hermes 3.
- Concerns about ChatGPT model updates: Users discuss changes in performance with the latest `chatgpt-4o` model, noting the lack of search capabilities via API following recent releases.
- OpenAI admits that the model is frequently updated without user notifications, leading to concerns about consistency.
- Discussion on model parameters and performance: A dialogue indicates that higher parameter counts generally lead to better performance in models, with Hermes 3 being favored over other alternatives.
- It is suggested that while parameter counts are important, the specific formatting for roleplay applications also plays a significant role in user satisfaction.
Links mentioned:
- Limits | OpenRouter: Set limits on model usage
- The Novel Writing Toolbox: no description found
- Activity | OpenRouter: See how you've been using models on OpenRouter.
- Llama 3.1 405B Instruct (free) - API, Providers, Stats: The highly anticipated 400B class of Llama3 is here! Clocking in at 128k context with impressive eval scores, the Meta AI team continues to push the frontier of open-source LLMs. Meta's latest c...
- Hang First Time GIF - Hang First Time Smiles - Discover & Share GIFs: Click to view the GIF
- Tweet from Shannon Code (@shannonNullCode): 👀 30 seconds from signup to live decentralized AI accessible wallet. Quoting Emblem Vault (@EmblemVault) 🏛️Emblem Vault September Town Hall🏛️ Let's review our September highlights with a qu...
- OAuth PKCE | OpenRouter: Secure user authentication via OAuth
- Anthropic Status: no description found
OpenRouter (Alex Atallah) ▷ #beta-feedback (6 messages):
Access to Integrations
Beta Access Requests
- Multiple requests for Integration Access: Several members expressed their desire to gain access to the integrations feature, with requests stated in various forms.
- Ahoy, I would get access to integrations was a common phrase used, demonstrating a collective interest.
- Inquiries on Requesting Integration Access: A member asked, how do we request for integration access? indicating a need for clarity on the process.
- This reflects a greater demand for guidance on accessing these features.
- Requests for Beta Access: One member expressed enthusiasm by stating, Would love to get beta access in a lighthearted manner.
- This highlights a growing interest in participating in upcoming integrations.
Notebook LM Discord ▷ #use-cases (31 messages🔥):
Podcast Source Errors
Python Utility for Audio Processing
Voice to Avatar Integration
Google TTS Voice Quality
Long Audio Synthesis with Google Cloud
- Podcast Source Errors cause confusion: Users expressed frustration regarding difficulties with the 'Add Source' feature and locating generated audio files after podcast creation.
- A Geography teacher shared their experience implementing new tools for educational content and sought guidance on the process.
- Python Utility Enhancements for Audio: A participant discussed a Python utility for audio processing, including looping over timestamps to create audio segments and plans for integration with avatars.
- They noted ongoing work on 'Pause' and 'Resume' features for playback, highlighting the need for better management of audio cuts.
- Enhancing Voice Interaction with Avatars: Discussion arose about the Python annotation module's capability to separate multiple speakers' voices during simultaneous speech, including filler sounds.
- It was noted that while the avatar playback relies on WebRTC with dedicated channels, the system may still struggle with minor voice sounds.
- Google TTS Voice Quality Analysis: Google TTS voice quality varied across languages, with some users recommending the use of Journey voices for more natural sound, especially in English.
- Resources were shared on how to utilize Google Cloud's TTS features, including creating dialogue with multiple speakers and constraints on audio length (a minimal synthesis sketch follows this list).
- Discussion on Deep Dive Creation: An individual shared their YouTube video on BYD's EV strategy, seeking feedback on creating high-quality deep dives using NotebookLM.
- Participants shared knowledge on tools and methodologies to enhance podcast production and audio synthesis for better user experience.
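For the Google Cloud TTS workflow mentioned above, here is a minimal synthesis sketch using the official Python client; the specific Journey voice name is an illustrative assumption (several exist), and the snippet assumes your Google Cloud credentials are already configured in the environment.

```python
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()

resp = client.synthesize_speech(
    input=texttospeech.SynthesisInput(text="Hello from the deep dive."),
    voice=texttospeech.VoiceSelectionParams(
        language_code="en-US",
        name="en-US-Journey-D",  # assumed Journey voice id; list voices to confirm
    ),
    audio_config=texttospeech.AudioConfig(
        audio_encoding=texttospeech.AudioEncoding.MP3
    ),
)

with open("out.mp3", "wb") as f:
    f.write(resp.audio_content)
```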
Links mentioned:
- no title found: no description found
- Don't Tell Me About Halloween Script - Quiet Please: no description found
- BYD's 3Q24 Triumph: The New Frontier in Global EV Domination?Can They Become the Next Global Leader?: Dive into BYD's latest 3Q24 results and their bold strategy to lead the global EV revolution! In this deep analysis, we uncover how BYD not only beat profit ...
- Text-to-Speech AI: Lifelike Speech Synthesis | Google Cloud: Turn text into natural-sounding speech in 220+ voices across 40+ languages and variants with an API powered by Google’s machine learning technology.
- no title found: no description found
Notebook LM Discord ▷ #general (60 messages🔥🔥):
NotebookLM Podcast Features
User Feedback on NotebookLM
Language Support for Podcasts
CSV Upload Functionality
Technical Limitations and User Inquiries
- Excitement Over NotebookLM Podcast Feature: Users expressed enthusiasm for the NotebookLM podcast feature, with discussions on how to create multiple episodes and request deep dives into specific sources.
- A new user inquired about the capabilities of the podcast feature and how to conduct episodes.
- Diverse Language Support and Limitations: Many users are curious about the languages supported by NotebookLM's podcast generation; currently, audio overviews are only in English, though some reported success with Spanish sources.
- One user suggested a workaround for generating podcasts in different languages by specifying the target language in the prompt.
- User Feedback on Performance and Limitations: Members shared mixed feedback regarding NotebookLM’s automatic citation formats for web searches and experience with video imports, questioning its capabilities in audio extraction and transcription.
- Concerns were raised about why certain videos cannot be imported, with users seeking clarification on NotebookLM's audio processing abilities.
- CSV Upload Inquiry: A user requested assistance with uploading a CSV of links to NotebookLM, hoping to have each link added as a source quickly.
- This sparked further interest in how to optimize content input within the application.
- Community Engagement Suggestions: There was a suggestion to establish a weekly live chat with the community to enhance engagement.
- This proposal reflects a desire for more interactive communication among users.
Link mentioned: June 6 2024 - Help: no description found
aider (Paul Gauthier) ▷ #announcements (2 messages):
Aider v0.61.0 features
Aider's code contributions
Anonymous analytics
Model support enhancements
Launch command options
- Aider v0.61.0 introduces new file command features: The latest release, Aider v0.61.0, allows users to load and save slash-commands to files using `/save` and `/load`. This enables complex commands and context recreation during chats.
- New options like `--load` allow users to execute commands upon launch, enhancing the interactive experience.
- Aider sets a new coding milestone: In this release, Aider wrote 860 new lines of code, marking a record at 68% of the new code in the release. This significant contribution showcases Aider's self-improvement capabilities.
- Aider wrote 68% of the code in this release emphasizes the model's evolving contribution to its development.
- Enhanced support for models: Aider now properly supports all o1 models, regardless of the provider, ensuring broader compatibility. Furthermore, it follows litellm's supports_vision attribute to enable image support for models.
- This improvement addresses concerns with API error handling, particularly when accessing weak models.
- Anonymous analytics introduced: The release includes anonymous, opt-in analytics that do not share personal data, allowing for better insights without compromising user privacy. This approach encourages user participation in improving Aider’s performance.
- Members can understand how their interaction influences the model without needing to worry about privacy issues, enhancing overall trust.
- Interface and usability tweaks improve user experience: New features like the `--no-fancy-input` switch disable prompt toolkit input, simplifying the user interface. Additionally, filenames are now displayed in sorted order for commands such as `/add` and `/read-only`.
- These adjustments help streamline user interactions, making it easier to manage command inputs effectively.
Link mentioned: Release history: Release notes and stats on aider writing its own code.
aider (Paul Gauthier) ▷ #general (53 messages🔥):
Aider analytics
Customizable AI workflows
Continue VS Code Alternative
GitHub Copilot
Image processing errors
- Aider Analytics Feedback Request: A user pushed analytics code to the main branch and requested others to opt in to data collection to improve Aider. They emphasized Aider respects privacy, not collecting personal information.
- Another user expressed concern over being charged for excessive token use, indicating potential issues with Aider's handling of large contexts.
- Exploring Customizable AI Workflows: A user introduced Patched.codes as a tool for customizable AI workflows, optimizing post-code tasks to enhance productivity. Features include automatic documentation generation and summarized PR reviews.
- Other users expressed interest in automating chores and streamlining their coding processes using this tool.
- Continue vs Other Code Assistants: Users discussed experiences with coding assistants like Continue, Cursor, and GitHub Copilot, noting mixed opinions on their performance. Some preferred free tools like Aider and Codeium over paid options.
- Users agreed that while Copilot excels in autocomplete features, Aider's utility increases as one becomes more accustomed to its capabilities.
- Challenges with Image Processing in Aider: A user encountered an error while uploading a .png file to Aider, indicating it was rejected as a valid image by the Anthropic API. In contrast, another user's png file worked without issues.
- This discrepancy led to discussions about the robustness of the image handling in Aider and the potential for minor bugs.
- Rate Limits and Token Counting: Users discussed Aider's handling of rate limits during API calls and its impact on token usage. A new token counting feature from Anthropic was introduced as a potentially beneficial tool for users managing usage.
- Concerns about overspending on tokens due to rapid automated requests were raised, reflecting a need for clearer feedback in the system (a token-counting sketch follows this list).
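For the token counting endpoint mentioned above, here is a minimal sketch via raw HTTP; the endpoint path and beta header follow Anthropic's announcement from around this time, the model id is an illustrative assumption, and later SDK versions may expose this more directly.

```python
import os
import requests

resp = requests.post(
    "https://api.anthropic.com/v1/messages/count_tokens",
    headers={
        "x-api-key": os.environ["ANTHROPIC_API_KEY"],
        "anthropic-version": "2023-06-01",
        # At launch this was gated behind a beta header; newer API
        # versions may not require it.
        "anthropic-beta": "token-counting-2024-11-01",
        "content-type": "application/json",
    },
    json={
        "model": "claude-3-5-sonnet-20241022",  # illustrative model id
        "messages": [{"role": "user", "content": "How many tokens is this?"}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # e.g. {"input_tokens": ...}
```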
Links mentioned:
- Repository map: Aider uses a map of your git repository to provide code context to LLMs.
- Analytics: Opt-in, anonymous, no personal info.
- Tweet from Alex Albert (@alexalbert__): There's finally an easy way to count tokens with the Anthropic API. With our new token counting endpoint, you can send a request and get a token count back in response. This endpoint is free to ...
- Patched: Open source workflow automation for dev teams
- no title found: no description found
- Continue: Amplified developers, AI-enhanced development · The leading open-source AI code assistant. You can connect any models and any context to build custom autocomplete and chat experiences inside the IDE
- GitHub Copilot · Your AI pair programmer: GitHub Copilot works alongside you directly in your editor, suggesting whole lines or entire functions for you.
aider (Paul Gauthier) ▷ #questions-and-tips (22 messages🔥):
Aider Documentation
Sonnet File Handling
Aider UX Limitations
Read-Only Context in Aider
Test Command Differences
- Aider Documentation is Helpful: A member expressed gratitude for Aider's good documentation, noting it has been helpful in recent usage.
- My bad, they acknowledged, admitting they had missed some aspects of the docs but were thankful for the tool.
- Sonnet's File Request Behavior: Concerns were raised regarding Sonnet asking for files more frequently, even when they were available in chat.
- One member mentioned having only two files loaded, but Sonnet still seems to forget one of them.
- Exploring Aider's Code Patch Capabilities: A member inquired about using Aider's capabilities for generating code patches outside of Aider itself.
- Another member noted they saw many heuristics used but did not find a proper API exposed by Aider for this purpose.
- Using Read-Only Files in Aider: It was confirmed that to specify files for only context in Aider, one should use the command /read-only.
- Another member added, just tell it 'fix the tests, not the code' for clarity.
- Difference Between /test and /run in Aider: A member queried how /test differs from /run pytest, seeking clarification.
- The response indicated that /test automatically shares command output with the LLM and prompts for fixes on non-zero exit statuses.
Link mentioned: REQ: Ability to toggle --verbose function on and off from within aider, perhaps /verbose-debug ? · Issue #1870 · Aider-AI/aider: Issue It would be really useful, to aid in debugging issues and understanding the "behind the curtain" work aider is doing, to be able to toggle the verbose output seen with --verbose, on an...
aider (Paul Gauthier) ▷ #links (2 messages):
Electron App
Browser Functionality
- Electron App Availability Sparks Discussion: A member noted the availability of a new service but expressed disappointment that it's just a browser wrapped as an Electron app.
- This raised questions about the value of this format compared to existing solutions.
- Comparing Electron App to Browser Installation: Another member questioned whether the Electron app is truly better than simply installing as an app in Chrome or Safari.
- Is there any benefit to this implementation? was the underlying concern expressed.
Stability.ai (Stable Diffusion) ▷ #general-chat (55 messages🔥🔥):
ComfyUI Setup for SD
FP16 Model Performance
Lora Trigger Word Access
Video Generation Models
Lora Training Methods
- Seeking ComfyUI Optimizations: A user with a Mac Studio M2 Max is looking for optimal setups for generating with ComfyUI and requested community advice and experiences.
- Members recommended starting with Scott's ComfyUI tutorial videos to get familiar with the software.
- Questions About FP16 Model Availability: A community member inquired about the possibility of FP16 editions of the stable diffusion 3.5 models; they reported FP16 performance is 8x on their hardware.
- Another member confirmed that the Stable Diffusion 3.5 large model is available in FP16 and provided a link to access it on Hugging Face.
- Accessing Lora Trigger Words: A user asked how to check trigger words for the Lora they are using with ComfyUI, seeking efficient methods for access.
- Community advice directed them to the original download locations of the Lora to find detailed information regarding trigger words.
- Video Generation Model Recommendations: A discussion highlighted the use of Mochi-1 and CogVideoX for video generation, with a suggestion based on VRAM limitations.
- Members indicated that smaller models like the 5b and 2b variants could fit on systems with limited resources, while emphasizing that CogVideoX is best suited for lower VRAM.
- Lora-based Image Styling Template Needs: A user expressed a need for a Lora-based image styling template for ComfyUI, specifically one that generates images based on a selected Lora.
- They noted the difficulty in finding a template that isn't only for using multiple Loras simultaneously.
Links mentioned:
- Kitty Cat GIF - Kitty Cat Hello Chat - Discover & Share GIFs: Click to view the GIF
- stabilityai/stable-diffusion-3.5-large at main: no description found
- GitHub - pythongosssss/ComfyUI-Custom-Scripts: Enhancements & experiments for ComfyUI, mostly focusing on UI features: Enhancements & experiments for ComfyUI, mostly focusing on UI features - pythongosssss/ComfyUI-Custom-Scripts
- GitHub - jitcoder/lora-info: Contribute to jitcoder/lora-info development by creating an account on GitHub.
Eleuther ▷ #general (9 messages🔥):
Attention Weights in Inference
Post Removal Discussion
User Ban Considerations
- Questioning Attention Weights Application: A member is experimenting with changing the attention weights of the latest block during inference to enhance focus on specific past tokens.
- Is it too late to matter, or the only place that matters? Some suggest changing weights across all layers might lead to better results, based on past implementations (see the sketch after this list).
- Discussion on Post Removals: Concerns were raised about a member's repeated posts being removed, prompting questions about their legitimacy.
- Another member echoed suspicions about similar prior posts, suggesting a ban may be necessary.
- Concerns Over Attention Weight Adjustment: One member noted challenges when trying to adjust attention weights across all layers, resulting in gibberish outputs.
- There is uncertainty on the best approach, as initial tokens may suffer from low attention values.
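A minimal, framework-agnostic sketch of the kind of intervention discussed: boosting the pre-softmax attention scores for chosen past tokens in a single toy attention computation. Where this hooks into a real model, and whether it helps at the last block only, is exactly the open question in the thread.

```python
import torch
import torch.nn.functional as F

def attention_with_boost(q, k, v, boost_positions, boost=2.0):
    """Toy single-head causal attention that up-weights selected past tokens.

    q, k, v: (T, d) tensors; boost_positions: indices of past tokens whose
    pre-softmax scores get an additive bias before normalization.
    """
    scores = q @ k.T / k.shape[-1] ** 0.5          # (T, T) attention logits
    scores[:, boost_positions] += boost            # focus on chosen tokens
    causal = torch.triu(torch.ones_like(scores, dtype=torch.bool), diagonal=1)
    scores = scores.masked_fill(causal, float("-inf"))
    return F.softmax(scores, dim=-1) @ v

T, d = 8, 16
q, k, v = (torch.randn(T, d) for _ in range(3))
out = attention_with_boost(q, k, v, boost_positions=[2, 3])
print(out.shape)  # torch.Size([8, 16])
```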
Eleuther ▷ #research (29 messages🔥):
DEQ model challenges
Hypernetworks discussion
Forgetting Transformer
Training dynamics
Innovative classification methods
- DEQ Models Face Significant Instability: Members discussed the challenges faced when training DEQ models, noting that the dynamics of an 'infinitely deep' network can lead to exploding train losses, requiring numerous restarts.
- One member humorously lamented, praying to rnjesus that the model wouldn't fail.
- Hypernetworks Seen as Input Transformations: A heated debate about hypernetworks arose, with one member asserting that they are merely a form of input-dependent transformations.
- Others chimed in with personal experiences implementing hypernetworks, noting challenges such as generating models with more parameters than the base.
- Introduction of the Forgetting Transformer: A member introduced the Forgetting Transformer, a model that incorporates a forget gate into the traditional Transformer architecture to enhance performance on long-context tasks.
- This model reportedly outperforms standard Transformers on long-context tasks and retains its advantages without needing position embeddings (a minimal sketch of the gating follows this list).
- Flow Matching and Speculative Decoding as Alternatives: Discussion shifted to the potential of methods like flow matching and speculative decoding to provide better options on the accuracy-latency curve compared to DEQs and UTs.
- Members agreed these alternatives may not be direct competitors but aim for efficient compute usage in computations.
- Validating Interests in Research Ideas: A member noted it's valid to pursue ideas simply because they seem neat, even mentioning that hobby-horses can be mistaken for significant research.
- This led to a broader conversation on the value of different approaches and personal preferences in model design.
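Based on the paper's description, here is a minimal sketch of the forget-gate idea: pre-softmax attention logits are biased by cumulative log forget-gate values, so older tokens decay in a data-dependent way. This is a single-head toy under those assumptions, not the paper's implementation.

```python
import torch

def forgetting_attention(q, k, v, fgate):
    """q, k, v: (T, d); fgate: (T,) forget-gate values in (0, 1),
    e.g. sigmoid of a learned projection of each token."""
    T, d = q.shape
    logits = (q @ k.T) / d ** 0.5                  # (T, T)
    cum = torch.cumsum(torch.log(fgate), dim=0)    # sum_{l<=t} log f_l
    # bias[i, j] = sum over l in (j, i] of log f_l = cum[i] - cum[j]:
    # the further back token j is, the more its logit is pushed down.
    logits = logits + (cum[:, None] - cum[None, :])
    causal = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
    logits = logits.masked_fill(causal, float("-inf"))
    return torch.softmax(logits, dim=-1) @ v

T, d = 8, 16
q, k, v = (torch.randn(T, d) for _ in range(3))
fgate = torch.sigmoid(torch.randn(T))
out = forgetting_attention(q, k, v, fgate)
```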
Links mentioned:
- Recurrent Spectral Network (RSN): shaping the basin of attraction of a discrete map to reach automated classification: A novel strategy to automated classification is introduced which exploits a fully trained dynamical system to steer items belonging to different categories toward distinct asymptotic attractors. These...
- Forgetting Transformer: Softmax Attention with a Forget Gate: An essential component of modern recurrent sequence models is the forget gate. While Transformers do not have an explicit recurrent form, we show that a forget gate can be naturally incorporated...
- Understanding Warmup-Stable-Decay Learning Rates: A River Valley Loss Landscape Perspective: Training language models currently requires pre-determining a fixed compute budget because the typical cosine learning rate schedule depends on the total number of steps. In contrast, the Warmup-Stabl...
- Marge I Just Think Theyre Neat GIF - Marge I Just Think Theyre Neat The Simpsons - Discover & Share GIFs: Click to view the GIF
Latent Space ▷ #ai-general-chat (34 messages🔥):
SmolLM2 Release
AI Regulation Discussion
Claude 3.5 Sonnet Benchmarks
AI Tool Announcements
OpenAI AMA Highlights
- SmolLM2 is the new SOTA: SmolLM2, an open 1B-parameter language model, was introduced with training on up to 11 trillion tokens from various curated datasets, fully open-source under Apache 2.0.
- Members discussed its performance, where SmolLM2 1.7B outperformed other models, raising excitement for upcoming demos and community testing.
- Anthropic pushes for AI regulations: Anthropic published a blog post advocating for targeted AI regulation, highlighting the urgency of establishing guidelines sooner rather than later.
- This release is notably timed ahead of elections, leading to discussions about its implications for startup competition.
- Claude 3.5 Sonnet benchmarks break records: Frameworks powered by Claude 3.5 Sonnet have achieved a staggering 49% on SWE-bench Verified, surpassing the previous SOTA of 45%.
- This milestone has sparked interest in seeing further advancements and comparisons with other systems like Aider.
- Exciting new AI tools emerge: Blockade Labs introduced Blendbox, simplifying AI art creation with direct control over visuals, while Runway ML announced Advanced Camera Control for more intentional scene navigation.
- These innovations signal a trend towards user-friendly interfaces that enhance creative expression in AI-generated content.
- OpenAI's AMA reveals compute challenges: During a Reddit AMA, OpenAI CEO Sam Altman acknowledged that compute limitations are delaying product releases, complicating the path for deploying complex AI models.
- This discussion sheds light on the infrastructural challenges facing significant advancements in AI technology.
Links mentioned:
- Tweet from Apoorv Khandelwal (@apoorvkh): Wondering how long it takes to train a 1B-param LM from scratch on your GPUs? 🧵 See our paper to learn about the current state of academic compute and how to efficiently train models! Use our code t...
- Tweet from Blockade Labs (@BlockadeLabs): Introducing Blendbox: a fresh way to create with AI No more wrestling with long prompts or random results. Blendbox Alpha brings simplicity & control to AI art, so you can shape your vision directly....
- Tweet from Alvaro Cintas (@dr_cintas): What a crazy day in AI 🤯 • Claude Dictation • Synthflow Voice 2.0 • Claude Desktop app • ElevenLabs X to Voice • RedPanda Image Model • OpenAI launches SearchGPT • Google Learn About experiment • Se...
- Tweet from Alex Albert (@alexalbert__): One day since this thread and we already have new frameworks powered by the new 3.5 Sonnet on the leaderboard. The 50% barrier has been crossed - you love to see it. Quoting Alex Albert (@alexalbert...
- LangGraph Platform: New deployment options for scalable agent infrastructure: We've rebranded our service for deploying and scaling LangGraph apps as LangGraph Platform. Learn about the multiple deployment options and what LangGraph Platform entails.
- Tweet from Vaibhav (VB) Srivastav (@reach_vb): Fuck it - it’s raining smol LMs - SmolLM2 1.7B - beats Qwen 2.5 1.5B & Llama 3.21B, Apache 2.0 licensed, trained on 11 Trillion tokens 🔥 > 135M, 360M, 1.7B parameter model > Trained on FineWeb...
- Tweet from Anthropic (@AnthropicAI): We've published a short piece making the case for targeted AI regulation sooner rather than later. Read it here: https://www.anthropic.com/news/the-case-for-targeted-regulation
- Tweet from Loubna Ben Allal (@LoubnaBenAllal1): Introducing SmolLM2: the new, best, and open 1B-parameter language model. We trained smol models on up to 11T tokens of meticulously curated datasets. Fully open-source Apache 2.0 and we will release...
- Tweet from Runway (@runwayml): Advanced Camera Control is now available for Gen-3 Alpha Turbo. Choose both the direction and intensity of how you move through your scenes for even more intention in every shot. (1/8)
- Tweet from morgan — (@morqon): devday london: o1 release “soon” Quoting Olivier Godement (@oliviergodement) @tarekayed00 @romainhuet We're giving audience access to o1-preview! We're still working on the full o1 and plan...
- Tweet from Alexander Doria (@Dorialexander): Some of theses model releases are not like the others. Not only is SmolLM2 the new SOTA for language models running on edge device, but it is an actual open science project providing the code, the dat...
- Reddit - Dive into anything: no description found
- Reddit - Dive into anything: no description found
- OpenAI CEO Sam Altman says lack of compute capacity is delaying the company's products | TechCrunch: OpenAI CEO Sam Altman admitted a lack of compute capacity is one factor preventing the company from shipping products as often as it'd like.
Latent Space ▷ #ai-announcements (1 messages):
LM Arena Podcast
Audio Quality Challenges
Statistics of Subjectivity
ELO Tracking
4o-mini Ranking Drama
- New LM Arena Podcast Episode Released: The latest podcast episode featuring @infwinston and @ml_angelopoulos discusses the history and future of LM Arena, albeit with some audio quality issues.
- Listeners can catch the discussion on topics like the Statistics of Subjectivity and the 4o-mini ranking drama on Latent Space.
- ELO System Explored for Intelligence Tracking: A key highlight in the episode is the use of ELO ratings to track the Pareto Frontier of $/Intelligence, providing unique insights into efficiency (see the update-rule sketch after this list).
- This approach offers an interesting angle on measuring performance and relevance in AI development.
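For context on the ELO mechanics referenced above, the standard head-to-head update rule in a few lines; the K-factor and ratings are illustrative, and note that Chatbot Arena itself fits a Bradley-Terry-style model over all battles rather than running raw online ELO.

```python
def elo_update(r_a, r_b, score_a, k=32):
    """One head-to-head update. score_a: 1 win, 0 loss, 0.5 tie."""
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    r_a_new = r_a + k * (score_a - expected_a)
    r_b_new = r_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return r_a_new, r_b_new

# Model A (1200) beats model B (1300): the upset moves ratings more.
print(elo_update(1200, 1300, score_a=1))  # -> (~1220.5, ~1279.5)
```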
Link mentioned: Tweet from Alessio Fanelli (@FanaHOVA): In the arena, generating tokens 🏟️ @infwinston and @ml_angelopoulos came on the pod to talk about the history and future of LM Arena: - The Statistics of Subjectivity - Using ELO to track the Paret...
GPU MODE ▷ #general (2 messages):
CUDA Optimization for Matrix Multiplication
GPU Scheduling with Kubernetes
NVIDIA Device Plugin
Deep Learning Performance
GPU Pod Resource Recognition
- CUDA Optimization for Matrix Multiplication Explained: In a detailed post, the author iteratively optimizes a matrix multiplication implementation in CUDA, focusing on performance characteristics of modern GPUs used in deep learning.
- The post highlights key techniques such as coalescing global memory accesses and shared memory caching, providing links to GitHub for kernel code and a related benchmarking repo (a toy tiling illustration follows this list).
- Seeking Help for GPU Scheduling in Kubernetes: A member is looking for assistance with scheduling GPU resources in a Kubernetes cluster using the NVIDIA device plugin, detailing their setup on worker-node and master-node.
- Despite having the gpu drivers and CUDA toolkit installed, they still face issues as the GPU pod shows it does not recognize the GPU resource.
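The tiling idea behind the post's shared-memory caching can be illustrated in plain NumPy: computing the product block by block lets each tile of A and B be reused across a whole tile of C, which is what staging tiles in shared memory achieves on a GPU. This is a pedagogical toy, not performant code.

```python
import numpy as np

def blocked_matmul(A, B, tile=32):
    """Toy blocked matmul: each (tile x tile) block of A and B is reused
    for a whole tile of C, mirroring shared-memory caching in CUDA."""
    M, K = A.shape
    K2, N = B.shape
    assert K == K2
    C = np.zeros((M, N), dtype=A.dtype)
    for i in range(0, M, tile):
        for j in range(0, N, tile):
            for kk in range(0, K, tile):
                C[i:i+tile, j:j+tile] += (
                    A[i:i+tile, kk:kk+tile] @ B[kk:kk+tile, j:j+tile]
                )
    return C

A = np.random.rand(128, 96).astype(np.float32)
B = np.random.rand(96, 64).astype(np.float32)
assert np.allclose(blocked_matmul(A, B), A @ B, atol=1e-4)
```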
Link mentioned: How to Optimize a CUDA Matmul Kernel for cuBLAS-like Performance: a Worklog: In this post, I’ll iteratively optimize an implementation of matrix multiplication written in CUDA.My goal is not to build a cuBLAS replacement, but to deepl...
GPU MODE ▷ #triton (1 messages):
Triton casting strategies
Rescale Kernel Implementation
vLLM Baseline Comparison
FP8 Quantization
Triton vs vLLM outputs
- Triton Casting vs Static Casting: A member inquired whether Triton's casting strategies correlate with what static casting achieves while exploring a simple rescale kernel implementation.
- They included a code snippet for a rescale function and sought clarity on the casting mechanisms in Triton (a minimal sketch follows this list).
- Rescale Kernel Implementation Details: The provided kernel, `rescale_bf16_to_fp8`, scales bfloat16 inputs to float8 by applying activation scales through a multiplication before casting.
- The offsets are calculated based on the kernel's parameters, and the output is stored accordingly.
- Benchmarking Against vLLM Code: The member is using `torch.ops._C.static_scaled_fp8_quant` from vLLM as a reference point for evaluating the new Triton kernel.
- They shared a GitHub link to the relevant section of the vLLM repository that outlines the scaling operation involved.
- Output Discrepancies Identified: Discrepancies between Triton outputs and vLLM outputs were noted, specifically regarding the first entry's expected value compared to actual results.
- The calculations suggested Triton rounds to 18, while vLLM's method yields 20, raising questions about potential numeric errors or differences in implementations.
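A minimal sketch of what such a rescale kernel can look like; the function and parameter names are assumptions modeled on the discussion, and, relevant to the 18-vs-20 discrepancy above, the rounding behavior of the final fp8 cast is exactly where a Triton kernel and vLLM's CUDA path can diverge.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def rescale_bf16_to_fp8_kernel(x_ptr, scale_ptr, out_ptr, n, BLOCK: tl.constexpr):
    pid = tl.program_id(axis=0)
    offs = pid * BLOCK + tl.arange(0, BLOCK)
    mask = offs < n
    x = tl.load(x_ptr + offs, mask=mask).to(tl.float32)
    scale = tl.load(scale_ptr)      # single static activation scale
    y = x * (1.0 / scale)           # rescale before the cast
    y = y.to(tl.float8e4nv)         # fp8 e4m3 cast; rounding mode matters here
    tl.store(out_ptr + offs, y, mask=mask)

x = torch.randn(4096, device="cuda", dtype=torch.bfloat16)
scale = torch.tensor([0.5], device="cuda", dtype=torch.float32)
out = torch.empty_like(x, dtype=torch.float8_e4m3fn)
grid = (triton.cdiv(x.numel(), 1024),)
rescale_bf16_to_fp8_kernel[grid](x, scale, out, x.numel(), BLOCK=1024)
```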
Link mentioned: vllm/csrc/quantization/fp8/common.cu at 55650c83a0c386526ed04912a0c60eccca202f3e · vllm-project/vllm: A high-throughput and memory-efficient inference and serving engine for LLMs - vllm-project/vllm
GPU MODE ▷ #beginner (3 messages):
Colpali model usage
Model quantization
Inference speed with LoRas
Performance at different bit widths
- Colpali model leads to resourceful hackathon workaround: During a hackathon, compute limitations forced a team to fall back on the lesser-known Colpali model and to borrow a member's company GPUs for processing.
- Next time, we'll have to plan better! The aim was faster inference for tasks like text-to-image generation and the use of various LoRAs.
- Bit precision affects model performance: A member explained how running models in different formats, like FP16 or Int8, can depend on hardware capabilities and optimization features of the backend.
- Typically, most GPUs and CPUs support these formats, but as you drop below certain precisions, like FP4, you need specialized hardware and operations to manage computations effectively.
- Dequantization challenges in low bit widths: If a native operation isn't available at a specific bit width, models may need to dequantize to a higher precision to ensure functionality, which can impact performance.
- In cases like GGUF 6-bit, lowering precision can bring performance trade-offs, making it less preferable (see the sketch below).
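A toy illustration of the dequantize-to-higher-precision fallback described above (generic code, not any specific library's API):

```python
import torch

def quantize_int8(w: torch.Tensor):
    # Symmetric per-tensor int8 quantization: one fp scale for the tensor.
    scale = w.abs().max() / 127.0
    q = torch.clamp((w / scale).round(), -127, 127).to(torch.int8)
    return q, scale

def matmul_with_fallback(x: torch.Tensor, q: torch.Tensor, scale: torch.Tensor):
    # No native int8 matmul assumed here, so dequantize back to the input
    # dtype first -- this extra cast is the performance cost under discussion.
    w = q.to(x.dtype) * scale
    return x @ w.T

w = torch.randn(64, 64)
q, s = quantize_int8(w)
x = torch.randn(8, 64)
err = (matmul_with_fallback(x, q, s) - x @ w.T).abs().max()
print(f"max abs error vs fp32: {err:.4f}")
```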
GPU MODE ▷ #off-topic (2 messages):
Asking Dumb Questions
Advanced Topics
Google Search Answers
- Never a Dumb Question: A discussion highlighted the sentiment that there's never a dumb question, only a dumb answer, indicating a supportive environment for inquiries.
- Questions that seem simple often arise more frequently in advanced topics, showing the complexity can intimidate some individuals.
- Advanced Questions Often Lead to Apologies: It's noted that as topics get more advanced, members tend to preface their questions with apologies, signaling discomfort with asking.
- However, it was noted that such questions are usually worthwhile precisely because they can't be answered by a quick Google search.
- Frustration with Easy Questions: A member expressed frustration with questions that are easily searchable online, noting they rarely come with apologies.
- This emphasizes a preference for more thoughtful inquiries that contribute to deeper discussions.
GPU MODE ▷ #irl-meetup (1 messages):
lavawave03: I would be so down!
GPU MODE ▷ #triton-puzzles (1 messages):
Triton Learning
Triton Puzzle Visualization
- Gratitude for Visualization Patch: A member expressed appreciation for the recent patch that restored the visualization functionality in the Triton puzzles.
- Feeling excited, they noted that this change significantly aided their efforts in returning to learning Triton.
- Return to Learning Triton: Another member highlighted their return to learning Triton and going through the Triton puzzle after some time off.
- They found the changes to be beneficial for re-engaging with the material.
GPU MODE ▷ #rocm (3 messages):
Composable Kernel Performance
XOR based Permutation Strategy
- Composable Kernel aims for 135 TFLOPS: A user suggested that CK GEMM can achieve around 135 TFLOPS, but noted that performance varies depending on settings.
- Higher or lower performance can be experienced even with the same kernel, indicating fluctuations in results based on parameters.
- Avoiding Bank Conflicts with XOR: It was discussed that using XOR might lead to register spills, prompting a careful strategy for implementing an XOR-based permutation to avoid bank conflicts.
- This approach aims to optimize performance by mitigating bank conflicts during kernel execution (see the toy model below).
- Code Resource for Bank Conflict Solutions: A user shared a link to the composable_kernel GitHub code as a resource to help avoid bank conflicts.
- The code serves as a reference for implementing strategies that enhance performance in machine learning tensor operations.
Link mentioned: composable_kernel/include/ck/tensor_operation/gpu/grid/gridwise_gemm_xdl_cshuffle_v3.hpp at 03c6448ba3c854195c61c817036b66af1fa0e844 · ROCm/composable_kernel: Composable Kernel: Performance Portable Programming Model for Machine Learning Tensor Operators - ROCm/composable_kernel
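A CPU-only toy model of the XOR swizzle being discussed (bank arithmetic only; real implementations live in the kernel's LDS addressing):

```python
# Store element (row, col) at column (col XOR row) so a column-wise sweep
# no longer maps every row to the same shared-memory/LDS bank.
NUM_BANKS = 32

def bank(row: int, col: int, swizzle: bool) -> int:
    c = (col ^ row) if swizzle else col
    return c % NUM_BANKS

# Column 0 accessed across 32 rows: without the swizzle every access hits
# bank 0 (a 32-way conflict); with it, each row lands in a distinct bank.
print(sorted({bank(r, 0, swizzle=False) for r in range(32)}))  # [0]
print(sorted({bank(r, 0, swizzle=True) for r in range(32)}))   # all 32 banks
```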
GPU MODE ▷ #liger-kernel (3 messages):
Learning Triton
Accessing GPU Resources
Cloud Services for GPU
AI Development Environments
- Exploring GPU Access Options: A member expressed interest in learning Triton and Liger but lacked access to a GPU.
- Another member suggested using lightning.ai or Google Cloud for free GPU hours, or considering vast.ai and Lambda Cloud for paid options.
- Cloud Platforms for GPU Learning: It was proposed that members can look into cloud platforms to learn effectively without a personal GPU.
- Using such platforms can ease the learning curve for GPU-intensive frameworks like Triton.
GPU MODE ▷ #self-promotion (2 messages):
FlashAttention
FlashAttention-2
GPU Memory Optimization
- FlashAttention Revolutionizes Attention Mechanism: FlashAttention (2022) introduced a breakthrough by eliminating redundant memory accesses between GPU HBM and SRAM, achieving significant speedups without sacrificing accuracy.
- This innovation combined techniques such as kernel fusion and tiling, offering an exact speedup at a time when the field focused on FLOPs reduction (see the sketch below).
- FlashAttention-2 Further Optimizes Performance: FlashAttention-2 (2023) continues the momentum by enhancing hardware-aware features and improving I/O operations, highlighting ongoing advancements in attention computation.
- The evolution reflects a persistent effort to streamline performance in contrast to traditional approximation methods.
Link mentioned: FlashAttention-2 | DigitalOcean: no description found
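A NumPy sketch of the tiling-plus-online-softmax idea behind FlashAttention (illustrative only; the real kernels fuse this into on-chip SRAM and never materialize the full attention matrix):

```python
import numpy as np

def attention_tiled(Q, K, V, block=64):
    """Process K/V in blocks, maintaining a running row-max and softmax
    denominator so the full n x n score matrix is never materialized."""
    n, d = Q.shape
    out = np.zeros((n, d))
    m = np.full(n, -np.inf)   # running row max
    l = np.zeros(n)           # running softmax denominator
    for j in range(0, n, block):
        S = Q @ K[j:j+block].T / np.sqrt(d)   # scores against one K/V tile
        m_new = np.maximum(m, S.max(axis=1))
        corr = np.exp(m - m_new)              # rescale previous partial sums
        P = np.exp(S - m_new[:, None])
        l = l * corr + P.sum(axis=1)
        out = out * corr[:, None] + P @ V[j:j+block]
        m = m_new
    return out / l[:, None]

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((128, 32)) for _ in range(3))
S = Q @ K.T / np.sqrt(32)
P = np.exp(S - S.max(axis=1, keepdims=True))
ref = (P / P.sum(axis=1, keepdims=True)) @ V
assert np.allclose(attention_tiled(Q, K, V), ref)
```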
GPU MODE ▷ #🍿 (4 messages):
Triton Kernels Dataset
Github Repository Scanning
Cudabench Schema Definition
Submission Scoring Criteria
- Massive Triton Kernels Dataset Released: A new dataset of 3,000 Triton kernels, comprising over 2.5 million tokens, has been produced by scraping GitHub repositories and running Torch Inductor on various models.
- The dataset will continue to grow with future plans for annotations, deduplication, and ensuring all kernels are runnable.
- Next Steps for Data Enhancement Discussed: Next steps include generating more data by analyzing 200 GitHub repositories and extracting Triton kernels alongside corresponding PyTorch code to facilitate supervised finetuning.
- Additionally, adding explicit docstrings to the extracted code was proposed to enhance clarity.
- Inquiry on Cudabench Schema: Interest was expressed in the status of a defined schema for Cudabench to ensure the competition aspect is effective by providing developers with a baseline to compete against.
- Exploration of possible composable elements of the schema was suggested for improved functionality.
- Deliberation on Scoring Criteria for Submissions: Discussion revolved around determining how to score submissions based on latency, throughput, and memory usage, with a focus on defining criteria for what makes a submission better.
- Throughput was suggested as the leading candidate for scoring due to its ability to encompass both latency and memory metrics.
Links mentioned:
- sahancpal/triton_kernels · Datasets at Hugging Face: no description found
- Possible Spam Detected - Pastebin.com: Pastebin.com is the number one paste tool since 2002. Pastebin is a website where you can store text online for a set period of time.
- You are an AI assistant who helps software engineers write triton kernels which - Pastebin.com: Pastebin.com is the number one paste tool since 2002. Pastebin is a website where you can store text online for a set period of time.
GPU MODE ▷ #thunderkittens (3 messages):
Session Start Delay
Prerequisite Stream
- Session Start Delayed: The start time for the session was pushed back to 1:15 due to a slight delay.
- Stay tuned for further updates as the session begins shortly.
- Inquiry about Prerequisite Stream: A member inquired about the prerequisite stream that was mentioned earlier, specifically referencing its timing.
- Another member asked if it was the stream scheduled for the 29th.
Interconnects (Nathan Lambert) ▷ #news (14 messages🔥):
SmolLM2 launch
Traditional NLP evaluations
Changes in model expectations
Outdated evaluation metrics
New evaluation rubric development
- SmolLM2 Launch Promises Open-Source Freedom: SmolLM2 is a new 1B-parameter model trained on up to 11T tokens of curated datasets, fully open-source under Apache 2.0, with all datasets and training scripts to be released.
- The release aims to set a strong open baseline for small language models.
- Decline in Traditional NLP Evaluations: Concerns were raised about the decrease in traditional NLP evaluations, especially in Natural Language Generation (NLG), as models are increasingly expected to perform well without standardized evaluations.
- Participants noted that the evaluation landscape seems to have shifted, particularly in areas like summarization and machine translation.
- Evolving Expectations from Language Models: Discussion highlighted that people's expectations from language models have significantly changed, reflecting the advancements in AI.
- A member noted, 'What people expect from models has changed a lot,' emphasizing that the bar has been raised.
- Outdated Metrics in Evaluations: A member shared insights from an evaluation project in 2022 which led to the removal of fluency as a metric, stating that all models were found to be 'perfectly fluent.'
- They mentioned that similar trends of obsolescence in evaluation metrics have been noted across other areas as well.
Link mentioned: Tweet from Loubna Ben Allal (@LoubnaBenAllal1): Introducing SmolLM2: the new, best, and open 1B-parameter language model. We trained smol models on up to 11T tokens of meticulously curated datasets. Fully open-source Apache 2.0 and we will release...
Interconnects (Nathan Lambert) ▷ #ml-questions (4 messages):
Diffusion and Robotics
Style Transfer Techniques
- Exploring Diffusion Techniques in Robotics: A participant expressed curiosity about the intersection of diffusion methods and robotics, suggesting potential applications.
- The question prompted further discussion about whether there were others interested in this area.
- Simplifying Style Transfer Approaches: Another user suggested exploring style transfer as it doesn’t necessarily require fine-tuning and is a viable option.
- However, they noted a lack of available code for this specific technique, indicating a potential gap in resources.
- Shifting Thoughts on Style Transfer: One member reflected on their initial idea to use style transfer, contemplating extracting style modifiers into text for prompts.
- They later concluded that using image-image style transfer techniques would likely be more effective after generating the appropriate content image.
Interconnects (Nathan Lambert) ▷ #ml-drama (1 messages):
xeophon.: https://x.com/sahir2k/status/1852064158830989757
Interconnects (Nathan Lambert) ▷ #reads (3 messages):
OpenAI o1-preview
Reasoning in Language Models
Token Billing and Latency
Search Algorithms in AI
- OpenAI o1-preview Launch Announcement: OpenAI released the long-awaited `o1-preview` model on September 12, 2024; the project was previously known as Q* before being superseded by Project Strawberry.
- The blog post aims to clarify how OpenAI o1 functions through a series of experiments and discussions.
- Understanding Reasoning in Models: The blog post discusses Daniel Kahneman's concepts of System 1 and System 2 thinking, correlating them with language model inference processes.
- The traditional inference is likened to System 1, while reasoning involves slower, more analytical System 2 processes.
- Confusion about Token Billing and Latency: A member expressed confusion over the claim that latency grows sub-linearly with reasoning tokens when using algorithms like MCTS.
- They pointed out potential issues with this claim, questioning how parallelization of MCTS is feasible in practice.
- Token Generation in Search Algorithms: In response to the previous confusion, another member clarified that if search algorithms generate multiple nodes, the number of generated tokens would increase significantly.
- This emphasizes the complexity and the growth in token consumption that search processes introduce (see the back-of-envelope sketch below).
Link mentioned: Reasoning Series, Part 1: Understanding OpenAI o1: OpenAI's o1-preview, released in September 2024, introduces "reasoning tokens" to enhance complex problem-solving capabilities. This post explores the model's reasoning process, debu...
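A back-of-envelope sketch of that token-growth point (all numbers are hypothetical):

```python
# A search that expands `branch` candidate continuations per step generates
# tokens for every node, so billed "reasoning tokens" grow with the whole
# tree even if parallel expansion keeps wall-clock latency closer to depth.
def tree_tokens(branch: int, depth: int, tokens_per_node: int) -> int:
    return sum(branch ** level for level in range(1, depth + 1)) * tokens_per_node

print(tree_tokens(branch=1, depth=6, tokens_per_node=100))  # linear chain: 600
print(tree_tokens(branch=4, depth=6, tokens_per_node=100))  # 4-way search: 546,000
```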
Interconnects (Nathan Lambert) ▷ #posts (3 messages):
Discord Community History
Friendship Origins
Engagement Expressions
- OG Discord Friendships Reminisced: A member noted that they have known another member since their time in the Wavelength chat, indicating a long-standing friendship.
- This highlights the roots of community ties that have formed over the years.
- Identity Check: That’s My Name!: A member, identified as andrewnc, responded affirmatively to a mention of their name, showing engagement within the group.
- This simple acknowledgment adds a personal touch to interactions.
- Excitement Expressed: Let’s Go!: Another member expressed enthusiasm with a message filled with emojis, signaling eagerness and positive energy.
- This reflects the community's vibrant atmosphere and camaraderie.
Torchtune ▷ #general (3 messages):
Llama 4 Training
Meta's unexpected projects
- Llama 4 Training on 100k H100s: Llama 4 is already in training on 100k H100 GPUs, showcasing the rapid escalation of training scale.
- One member remarked on the extraordinary pace of development, exclaiming, 'what a crazy world we live in.'
- Meta's Possible Nuclear Ventures: A member humorously speculated about Meta potentially announcing plans to build nuclear power plants.
- Another concurred, suggesting that this could happen as soon as 2025.
Torchtune ▷ #dev (20 messages🔥):
Graph Breaks and Activation Offloading
PPO Performance Issues
Profiling Techniques
Checkpoints and Activation State
- Troubles with Graph Breaks during Activation Offloading: There are concerns regarding graph breaks and activation offloading when using PPO, with one member noting that it was noticeably slower and did not reduce memory usage.
- A potential issue may be due to the increased activations hitting a bottleneck during processing.
- PPO Configuration Might Cause Issues: Activation checkpoints need to be enabled for activation offloading to work, but there could be additional checks missed in the PPO setup that affect performance.
- One member suggested exploring the model’s output heads as a potential source of the problems when offloading.
- Need for Profiling to Analyze GPU Time: Members discussed utilizing tlparse for identifying graph breaks and suggested profiling GPU time for deeper insights into performance issues.
- One member offered assistance with profiling and analysis of the configuration once it was set up.
- Identifying Graph Break Causes: An identified graph break was linked to a no-op in the output layer which triggers during forward passes when no_grad mode is applied.
- Community members wondered if there’s a way to prevent activation triggers when they aren't needed.
- Sharing Profiler Configurations for Better Insights: A request was made for a profiler configuration to assist a member who is new to profiling techniques.
- The exchange of configurations and troubleshooting assistance was encouraged to facilitate debugging (see the profiler sketch below).
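For the member new to profiling, here is a generic torch.profiler setup of the sort requested (a sketch, not the specific config shared in the channel; the toy model stands in for a real PPO step):

```python
import torch
from torch.profiler import ProfilerActivity, profile

model = torch.nn.Sequential(torch.nn.Linear(512, 512), torch.nn.ReLU(),
                            torch.nn.Linear(512, 512))
batch = torch.randn(64, 512)

activities = [ProfilerActivity.CPU]
if torch.cuda.is_available():
    model, batch = model.cuda(), batch.cuda()
    activities.append(ProfilerActivity.CUDA)

# Capture one forward+backward step with shapes, memory, and stacks,
# then inspect the hottest ops and export a trace for a timeline view.
with profile(activities=activities, record_shapes=True,
             profile_memory=True, with_stack=True) as prof:
    model(batch).sum().backward()

print(prof.key_averages().table(sort_by="self_cpu_time_total", row_limit=15))
prof.export_chrome_trace("trace.json")  # open in chrome://tracing or Perfetto
```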
DSPy ▷ #show-and-tell (6 messages):
DSPy Signatures
Typed Outputs
Server Generation with vLLM
Constrained Generation
- DSPy Signatures Simplify Implementation: A member highlighted that using DSPy signatures with type annotations yields typed outputs directly, making the implementation simpler.
- This method streamlines the process, reducing complexity in coding.
- Leveraging vLLM for Typed Booleans: Another member suggested using a server like vLLM, which can use Outlines for constrained generation, to request types like `bool` directly.
- They shared that `dspy.Predict("text -> is_factual: bool")` achieves schema compliance with dspy.LM + dspy.JSONAdapter (see the sketch below).
- They humorously noted the overwhelming nature of ongoing developments in the field.
Link mentioned: Tweet from Omar Khattab (@lateinteraction): @karthikkalyan90 @dottxtai Hey Karthik! Super cool! But btw you can just ask for type bool directly and use a server like vLLM that uses Outlines for constrained generation — and you’ll get scheme adh...
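A minimal sketch of that typed-output pattern (the model name and local vLLM endpoint below are hypothetical placeholders, not from the discussion):

```python
import dspy

lm = dspy.LM(
    "openai/meta-llama/Llama-3.1-8B-Instruct",  # any OpenAI-compatible model id
    api_base="http://localhost:8000/v1",         # hypothetical local vLLM server
    api_key="EMPTY",
)
dspy.configure(lm=lm, adapter=dspy.JSONAdapter())

# A string signature with a typed output field: the result comes back
# as a real Python bool rather than free text to be parsed.
check = dspy.Predict("text -> is_factual: bool")
result = check(text="The Eiffel Tower is in Berlin.")
print(result.is_factual, type(result.is_factual))  # e.g. False <class 'bool'>
```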
DSPy ▷ #papers (1 messages):
js7772219: <@738704828494118953> good feedback!
DSPy ▷ #general (14 messages🔥):
Streaming DSPy completions
Synthetic data generation with pre-trained models
Textgrad integration
User feedback on streaming needs
- Streaming DSPy Completions Nearing Launch: Chatter suggests that streaming DSPy completions may be available natively by the end of October, with active discussions ongoing following the preparation of the Async PR.
- A post in the discussion invites users to share feedback on their desired use cases, particularly focusing on dspy.Predict() functionalities.
- Base Models for Synthetic Data Generation: A member questioned if pre-trained base models could be utilized in DSPy for synthetic data generation without needing many ICL examples. Another member elaborated that base models are tricky to prompt effectively.
- They emphasized the challenges faced when working with base models, particularly the absence of instruction-tuning which makes practical ICL examples important.
- Textgrad Integration Timeline Inquiry: A user expressed interest in knowing when Textgrad would be integrated into DSPy. Specific details on the timeline were not provided in the discussion.
Link mentioned: streaming after LiteLLM integration · Issue #1715 · stanfordnlp/dspy: In my current setup, I write everything in DSPy, then I extract the prompt from the dspy module. Then, I use that prompt with litellm to stream the output to the user (if the module is chain of thou...
OpenInterpreter ▷ #general (17 messages🔥):
Anthropic API Support Issues
Beta Testing Opportunities
Invite Link Problems
Open Interpreter Desktop Subscription Upgrade
Request Size Error
- Anthropic API Support causing issues: A member reported that after the latest update introducing Anthropic API Support, scripts failed to run correctly compared to the previous version, leading to frustration.
- They suggested making the API integration optional and re-enabling the local model option that previously worked without problems.
- Seeking beta testing participation: A member expressed interest in becoming a beta tester for Linux and Windows, mentioning their experience in cybersecurity and APIs.
- They also offered to assist with updating website documentation to contribute to the project.
- Invite link invalid for event: A member reached out, stating that the invite link for an event was invalid and asked if there was a different link available.
- Another user responded, directing them to find the link in the 'events' channel.
- Upgrading Open Interpreter Desktop subscription: One user asked for guidance on how to upgrade their Open Interpreter Desktop subscription to continue their development work.
- They humorously referred to wanting to resume their 'god-like status' with the tool.
- Request size error encountered: A user described experiencing a 413 error when attempting to use the API, indicating that their request exceeded the maximum size allowed.
- They noted that after initially resolving issues, further attempts led to the same error when using the model flag.
OpenInterpreter ▷ #O1 (1 messages):
ngrok issues
busy harvest season
- Ngrok Troubleshooting in Progress: A member mentioned ongoing issues with ngrok, which were clarified with help from another member, Kai.
- They plan to address the ngrok concerns this weekend when they have more time.
- Busy Harvest Season Keeps Member Occupied: The member expressed that it has been an amazing and busy harvest season, indicating they are currently preoccupied.
- They have not had time to resolve the issues due to their busy schedule related to the harvest.
OpenInterpreter ▷ #ai-content (2 messages):
Meta FAIR robotics developments
Meta Sparsh
Meta Digit 360
Meta Digit Plexus
Open source community impact
- Meta FAIR Launches Breakthrough Robotics Solutions: Today at Meta FAIR, three cutting-edge developments in robotics and touch perception were unveiled, aiming to empower the community.
- These advancements include the innovative Meta Sparsh, which serves as a versatile encoder for tactile sensing.
- Meta Sparsh Revolutionizes Tactile Sensing: Meta Sparsh is introduced as the first general-purpose encoder, trained on 460K+ tactile images using self-supervised learning for versatile applications.
- This technology works across various tactile sensors and tasks, opening a path for enhanced robotics.
- Meta Digit 360 Offers Human-Level Touch Sensation: Meta Digit 360 presents a significant breakthrough with an artificial fingertip-based tactile sensor that features 18+ sensing capabilities.
- This ensures human-level precision in touch data, which is crucial for advanced interactive systems.
- Meta Digit Plexus Enhances Robotic Integration: Meta Digit Plexus acts as a standardized platform for connecting robotic sensors, streamlining hardware and software for tactile integration.
- It enables seamless control and data collection across multiple components, simplifying robotic applications.
- Open Source Community to Benefit from New Robotics Tools: The new capabilities introduced at Meta promise significant potential impacts for the open source community in areas ranging from medical research to manufacturing.
- Community engagement is encouraged to facilitate further exploration and application of these technologies.
Link mentioned: Tweet from AI at Meta (@AIatMeta): Today at Meta FAIR we’re announcing three new cutting-edge developments in robotics and touch perception — and releasing a collection of artifacts to empower the community to build on this work. Deta...
LAION ▷ #general (16 messages🔥):
Autoregressive Image Generation
Patch Artifacts in Image Generation
MAR Model
VAE Usage
Meta's New Video Technology
- Patch Artifacts Frustrate Generators: A member expressed frustration about dealing with patch artifacts in autoregressive image generation, noting a potential necessity to use a VAE despite disliking them.
- "Still dealing with these patch artifacts. I HATE VAEs but it seems like I may be forced to use one."
- MAR Model Explained: It was established that the model operates as a MAR (Masked Autoregressive Model), with a reference to a related paper for further understanding.
- "It's weird that the generated image is not continuous at patch boundaries... the information is just failing to transfer."
- Lack of Attention in Diffusion Steps: Discussion pointed out that the diffusion step consists solely of a single MLP and does not have attention or awareness of adjacent patches, leading to continuity issues.
- "...the prediction of masked tokens provides the continuous vector to denoise."
- Meta's New Video Model: A member mentioned that Meta has rolled out a new model for generating video, hinting at innovations in the field.
- They encouraged others to refer to the paper linked for more information: Kaiming He et al..
- Concerns Over Future Training of DiTs: Concerns were raised that if current metrics and scaling papers are accurate, no one will be able to train DiTs within the next six months.
- This highlights an upcoming challenge in the field where existing models may quickly become obsolete.
LAION ▷ #research (2 messages):
TokenFormer architecture
Sparse Autoencoders (SAEs)
SDXL Turbo
Text-to-image models
- TokenFormer Reimagines Model Scalability: A new architecture called TokenFormer leverages the attention mechanism for interactions between tokens and model parameters, mitigating the need to retrain entire models after architectural modifications.
- This approach addresses the unsustainable computational costs associated with scaling traditional transformer models as their sizes grow.
- Harnessing SAEs for SDXL Turbo: Researchers have explored using Sparse Autoencoders (SAEs) to extract interpretable features from the generative process of SDXL Turbo, showcasing their capability to control image generation.
- Their findings demonstrate that features learned through SAEs can causally influence the creation of images and reveal specialized roles among the transformer's blocks.
- SAEs Unlock Inner Workings of Text-to-Image Models: A study revealed that SAEs can decompose the generative processes of text-to-image models into interpretable components, allowing for better control and analysis.
- These features relate to aspects such as image composition, local detail enhancement, and color management, making them pivotal for future model development (see the SAE sketch below).
Links mentioned:
- TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters: Transformers have become the predominant architecture in foundation models due to their excellent performance across various domains. However, the substantial cost of scaling these models remains a si...
- Paper page - Unpacking SDXL Turbo: Interpreting Text-to-Image Models with Sparse Autoencoders: no description found
- Unboxing SDXL Turbo with SAEs: no description found
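For orientation, a textbook sparse-autoencoder sketch of the kind such papers train on intermediate activations (generic code, not the authors'; the dimensions are made up):

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Overcomplete encoder + linear decoder; the ReLU code `f` is the
    sparse, (hopefully) interpretable feature vector."""
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.enc = nn.Linear(d_model, d_hidden)
        self.dec = nn.Linear(d_hidden, d_model)

    def forward(self, x):
        f = torch.relu(self.enc(x))
        return self.dec(f), f

sae = SparseAutoencoder(d_model=1280, d_hidden=16 * 1280)
x = torch.randn(4, 1280)  # stand-in for block activations
recon, feats = sae(x)
# Reconstruction loss plus an L1 penalty that pushes the code toward sparsity.
loss = (recon - x).pow(2).mean() + 1e-3 * feats.abs().mean()
```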
LlamaIndex ▷ #blog (3 messages):
Open Telemetry
Llama Impact Hackathon
LlamaParse new features
- Log Traces with OpenTelemetry: @braintrustdata now allows you to log traces directly from LlamaIndex using OpenTelemetry, enhancing your observability capabilities; check out their documentation for more details.
- This integration ensures that telemetry is clear and effective in complex production applications.
- Prepare for the Llama Impact Hackathon: The 3-day Llama Impact Hackathon in San Francisco is set to take place from November 8-10, offering a chance to win a $15,000 prize pool. Participants will build AI solutions using Meta's Llama 3.2 models with a special $1000 prize for the best use of LlamaIndex.
- Don't miss this opportunity to showcase innovative AI applications and collaborate with fellow developers!
- LlamaParse Introduces Exciting New Features: LlamaParse now boasts two new features: Continuous mode (in beta) for stitching together multi-page tables and an Excel spreadsheet output option for easy data extraction. This update is designed to enhance the usability and flexibility of data processing.
- Continuous Mode ensures that lengthy tables are presented seamlessly, improving the overall user experience.
LlamaIndex ▷ #general (13 messages🔥):
Neo4j PropertyGraphIndex Issues
Changelog Location
Workflows to Tools Conversion
Workflow Queries
- Neo4j PropertyGraphIndex creates id conflicts: A user reported a unique-constraint violation in Neo4j's PropertyGraphIndex where nodes shared the same id as their name, leading to a conflict.
- This issue suggests that the graph's ontology may not support multiple nodes with identical names across different sections.
- Changelog for llama-index-graph-stores-neo4j found: One member shared the changelog for the Neo4j package, offering valuable insights on version changes.
- Another user expressed their appreciation for the availability of the changelog.
- Conversion of workflow to tool is possible: Members discussed the idea that any workflow can be converted into a tool using `FunctionTool`, as illustrated with a code snippet (see the sketch below).
- This allows workflows to be utilized in various query engines seamlessly.
- Questions arise about workflows: A member inquired if it’s mandatory for workflows to be async and whether high-level engines will eventually be entirely reimplemented using workflows.
- Responses confirmed that workflows are inherently async, while future reimplementations might not be a focus, instead emphasizing better documentation and pre-built workflows.
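A minimal sketch of the workflow-to-tool pattern (names here are illustrative, and this assumes a recent llama-index-core with the workflow API):

```python
from llama_index.core.tools import FunctionTool
from llama_index.core.workflow import StartEvent, StopEvent, Workflow, step

# A toy workflow; a real one would have multiple steps and events.
class EchoWorkflow(Workflow):
    @step
    async def process(self, ev: StartEvent) -> StopEvent:
        return StopEvent(result=f"processed: {ev.query}")

workflow = EchoWorkflow()

async def run_echo_workflow(query: str) -> str:
    """Run the echo workflow on a query string."""
    return await workflow.run(query=query)

# Wrap the (async) workflow entry point as a tool usable by agents/engines.
tool = FunctionTool.from_defaults(
    async_fn=run_echo_workflow,
    name="echo_workflow",
    description="Run the echo workflow on a query.",
)
```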
Cohere ▷ #discussions (3 messages):
LLM Framework
Component Building
Tailwind Support
Output Issues
- Building a framework for LLMs: A member is currently developing a framework that enables LLMs to construct components based on user prompts.
- This framework aims to enhance component generation capabilities for various applications.
- Current Tailwind support only: As of now, the framework supports Tailwind CSS exclusively, indicating a focused initial implementation.
- The member is actively working on expanding support to other styling options in the future.
- Random text output issue: The framework is generating random, non-component text outside the intended component output, which is a point of concern.
- The member is making efforts to address and fix this issue for a more refined output.
Cohere ▷ #questions (4 messages):
Master Thesis Collaboration
Expediting Applications
- Seeking Advisor for Master Thesis: A member expressed interest in finding a collaborator or advisor for their master's thesis and sought advice on expediting the process.
- Could there be a way to speed this up? was the main inquiry as they looked for support from the community.
- Concerns over Application Volume: Another member highlighted that the Cohere for AI Discord receives numerous applications, raising concerns about the potential delays.
- They asked a specific member if it was possible to expedite the applications and encouraged another member to share their email for better coordination.
Cohere ▷ #api-discussions (4 messages):
Command R Reliability Scores
Command R Fine-Tuning
Benchmarks for AI Models
- User Inquiry on Command R Reliability Scores: A member asked where to check for reliability scores for Command R.
- Another member asked if they meant benchmarks, linking to Cohere's blog on Command R fine-tuning.
- Command R Fine-Tuning Offers Cost-Effectiveness: The blog cited by the member claims that Command R fine-tuning offers superior performance on enterprise use cases and costs up to 15x less than the largest models on the market.
- This aspect signifies the potential for economic advantages when adopting Command R for advanced applications.
Link mentioned: Introducing Command R Fine-Tuning: Industry-Leading Performance at a Fraction of the Cost: Command R fine-tuning offers superior performance on enterprise use cases and costs up to 15x less than the largest models on the market.
Cohere ▷ #projects (1 messages):
Agent Building Experience
Application Review Process
- Ongoing Review of Agent Building Applications: Applications have been thoroughly reviewed, focusing on the candidates' experience with building agents.
- The team is committed to providing feedback on applications once the review process is complete.
- Candidate Communication Assurance: Candidates can expect a response as the team diligently goes through each application.
- The statement emphasized the careful evaluation to ensure qualified experience in agent building.
Modular (Mojo 🔥) ▷ #general (2 messages):
Mojmelo Project
Level Advancement
- Congrats on Level 3 Advancement!: <@435478813598679041> just advanced to level 3! This achievement showcases their engagement in the community.
- Mojmelo welcomes contributions!: A member is currently working on Mojmelo and invites contributions, mentioning the focus on a native Matrix type and ML algorithms.
- An example usage with Logistic Regression is available here.
Link mentioned: Mojmelo/tests/LogisR_test.mojo at main · yetalit/Mojmelo: Machine Learning algorithms in pure Mojo 🔥. Contribute to yetalit/Mojmelo development by creating an account on GitHub.
Modular (Mojo 🔥) ▷ #mojo (7 messages):
Mojo parametric capability
mojo test issues in GitHub Actions
Syntactic macros concerns
Support for custom decorators
Malloc fault issues
- Mojo Societal Impact: What Can't It Do?: A member pondered how the powerful parametric capability of Mojo leads to speculation about its limitations.
- It becomes a question of what it can not do - an interesting perspective on Mojo's capabilities.
- Hanging Mojo Tests on macOS GitHub Actions: One member inquired whether others have seen `mojo test` hang during macOS GitHub Actions runs.
- This highlights potential environment-specific challenges that developers might face.
- Concerns Over Syntactic Macros: A member expressed diminishing enthusiasm for syntactic macros, as libraries tend to create small DSLs with limited documentation.
- This contributes to a painful development experience, highlighting a potential conflict with Mojo’s goals of simplicity.
- Desire for Custom Decorators: There’s curiosity about when custom decorators will be supported in Mojo, signalling a common request among developers.
- The community is keen on enhancing Mojo's functionality to suit more advanced programming needs.
- Malloc Fault Issues in Mojo Input: A member reported malloc faults with Mojo's input method when handling multiple user inputs in a program.
- Though a GitHub issue indicated a workaround, they still experience the problem regardless, creating frustration.
Link mentioned: Issues · modularml/mojo: The Mojo Programming Language. Contribute to modularml/mojo development by creating an account on GitHub.
OpenAccess AI Collective (axolotl) ▷ #general (5 messages):
Axolotl Docker Image Release Strategy
Stable Release Plans
Previous Release Information
Testing Procedures
- Understanding Axolotl Docker Tagging: Users highlighted confusion over dynamic tags like main-latest and date-pinned tags like main-20241031-py3.10-cu121-2.3.1, questioning their suitability for production use.
- There was a call for documentation regarding the Axolotl docker image release strategy.
- Upcoming Stable Release Imminent: A member confirmed plans to push for a stable release once recent PRs are merged, clarifying the current state of build tags.
- The stable release will follow after thorough testing to ensure reliability.
- Historical Context of Previous Releases: A member noted that the last stable release tag of the Axolotl docker image is quite old due to upstream dependencies that haven't been released.
- They expressed optimism about replacing those dependencies to enable a proper release to PyPI.
- Confidence in Latest Build Stability: It was emphasized that the latest builds are not unstable, with numerous tests, including end-to-end tests, validating functionality.
- This assurance aims to alleviate concerns about using current tags in production environments.
LLM Agents (Berkeley MOOC) ▷ #mooc-questions (2 messages):
Course Guidance
Website Navigation
- New Member Seeks Course Guidance: A new member expressed excitement about joining the channel and asked for guidance on how the course works.
- Members are welcoming and willing to share information about navigating the course.
- Course Information Available on Website: Another member provided a link to the course website for access to all information and assignments: Course Website.
- This resource ensures new members can easily find the necessary details to participate effectively.
tinygrad (George Hotz) ▷ #learn-tinygrad (1 messages):
Device Driver Methods
Hailo Python Wrapper
Proprietary Compilation Process
- Wrap IOCTL or Use CUDA for Device Drivers?: There is a discussion on whether it's better to wrap raw IOCTL commands or to adopt a CUDA-style approach of loading a `.so` for command issuance (see the toy sketch below).
- The nuances of the Hailo environment are noted, including its proprietary methods for interfacing.
- Hailo's C Library Wrapped in Python: The Hailo library employs a Python wrapper over its C code, providing a unique method for command execution.
- This enables greater accessibility, but raises questions about the underlying architecture and performance trade-offs.
- Proprietary Compilation of Neural Networks: A discussion highlights that Hailo requires neural networks to be compiled into HEF, a proprietary protobuf format, instead of writing traditional programs like CL shaders.
- Users must compile ONNX files specifically for this purpose, indicating a significant shift from conventional development practices.
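A toy sketch of the wrap-raw-IOCTLs approach (the device path, request code, and struct below are hypothetical placeholders, not Hailo's real API):

```python
import ctypes
import fcntl
import os

# Invented info struct for illustration -- not a real driver interface.
class DeviceInfo(ctypes.Structure):
    _fields_ = [("version", ctypes.c_uint32), ("num_cores", ctypes.c_uint32)]

HYPOTHETICAL_GET_INFO = 0xC0084800  # made-up ioctl request code

def query_device(path: str = "/dev/accel0") -> DeviceInfo:
    fd = os.open(path, os.O_RDWR)
    try:
        info = DeviceInfo()
        fcntl.ioctl(fd, HYPOTHETICAL_GET_INFO, info)  # kernel fills the struct in place
        return info
    finally:
        os.close(fd)
```

The alternative is the CUDA-style route: load the vendor's `.so` with `ctypes.CDLL` and call its exported functions, trading transparency for the stability of a supported API.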
Mozilla AI ▷ #announcements (1 messages):
Mozilla Builders Demo Day
Builders Accelerator program
Event Registration
Open Source AI Projects
- Limited Space for Mozilla Builders Demo Day: Only limited spaces are available for the Mozilla Builders Demo Day on December 5th in San Francisco, California. Interested community members should submit their info through this form to apply.
- Attendees' information will be handled according to the Mozilla Privacy Policy.
- Event Timeline for December 5th: The event will take place at Convene, 40 O’Farrell St, from 8:30 AM to 3:00 PM with registration, breakfast, and live pitches of open-source AI projects. The schedule includes networking opportunities, a lunch break, and an AI Demo Science Fair in the afternoon.
- Participants are encouraged to submit their registration by next week as space is limited.
- Questions About the Event: For any inquiries regarding the event, members can reach out to Maite via Discord. Questions can also be posted here.
- This event marks the culmination of the Builders Accelerator program that began in mid-September.
Link mentioned: San Francisco Builders Demo Day Event Application: We have limited space available to attend Mozilla Builders Demo Day on December 5th. The provided email, name, and GitHub profile will be submitted with this form. We use your information only to pr...