[AINews] $1150m for SSI, Sakana, You.com + Claude 500k context
This is AI News! an MVP of a service that goes thru all AI discords/Twitters/reddits and summarizes what people are talking about, so that you can keep up without the fatigue. Signing up here opts you in to the real thing when we launch it 🔜
$1b is all you need for safe superintelligence?
AI News for 9/3/2024-9/4/2024. We checked 7 subreddits, 433 Twitters and 30 Discords (213 channels, and 3131 messages) for you. Estimated reading time saved (at 200wpm): 340 minutes. You can now tag @smol_ai for AINews discussions!
More news with no dominant theme:
- Safe Superintelligence (our coverage here) announced their $1b raise at a $5b valuation. Ilya hinted at their search approach in the Reuters report.
- Sakana AI announced their $100m series A describing a little more of their approach: "Our logo is meant to invoke the idea of a school of fish coming together and forming a coherent entity from simple rules as we want to make use of ideas from nature such as evolution and collective intelligence in our research."
- You.com announced their $50m series B and pivot to a ChatGPT form factor - effectively ceding AI Search to Perplexity, which raised over $250m this summer after raising $63m in spring.
- Anthropic announced Claude for Enterprise with a 500k context window.
- ChatGPT was rewritten from Next.js to Remix
- AI2 released a 64 expert MoE version of OLMo (our coverage here)
Table of Contents
- AI Twitter Recap
- AI Reddit Recap
- AI Discord Recap
- PART 1: High level Discord summaries
- Unsloth AI (Daniel Han) Discord
- aider (Paul Gauthier) Discord
- OpenAI Discord
- HuggingFace Discord
- CUDA MODE Discord
- Stability.ai (Stable Diffusion) Discord
- Nous Research AI Discord
- LM Studio Discord
- OpenRouter (Alex Atallah) Discord
- Eleuther Discord
- Perplexity AI Discord
- Cohere Discord
- Latent Space Discord
- Modular (Mojo 🔥) Discord
- LangChain AI Discord
- LlamaIndex Discord
- OpenInterpreter Discord
- LAION Discord
- Interconnects (Nathan Lambert) Discord
- Torchtune Discord
- DSPy Discord
- OpenAccess AI Collective (axolotl) Discord
- Gorilla LLM (Berkeley Function Calling) Discord
- tinygrad (George Hotz) Discord
- PART 2: Detailed by-Channel summaries and links
- Unsloth AI (Daniel Han) ▷ #general (459 messages🔥🔥🔥):
- Unsloth AI (Daniel Han) ▷ #off-topic (12 messages🔥):
- Unsloth AI (Daniel Han) ▷ #help (67 messages🔥🔥):
- Unsloth AI (Daniel Han) ▷ #showcase (4 messages):
- Unsloth AI (Daniel Han) ▷ #community-collaboration (1 messages):
- aider (Paul Gauthier) ▷ #general (285 messages🔥🔥):
- aider (Paul Gauthier) ▷ #questions-and-tips (69 messages🔥🔥):
- aider (Paul Gauthier) ▷ #links (1 messages):
- OpenAI ▷ #ai-discussions (318 messages🔥🔥):
- OpenAI ▷ #gpt-4-discussions (1 messages):
- OpenAI ▷ #prompt-engineering (16 messages🔥):
- OpenAI ▷ #api-discussions (16 messages🔥):
- HuggingFace ▷ #general (223 messages🔥🔥):
- HuggingFace ▷ #today-im-learning (4 messages):
- HuggingFace ▷ #cool-finds (4 messages):
- HuggingFace ▷ #i-made-this (12 messages🔥):
- HuggingFace ▷ #reading-group (5 messages):
- HuggingFace ▷ #computer-vision (13 messages🔥):
- HuggingFace ▷ #NLP (9 messages🔥):
- HuggingFace ▷ #diffusion-discussions (2 messages):
- CUDA MODE ▷ #general (1 messages):
- CUDA MODE ▷ #triton (1 messages):
- CUDA MODE ▷ #torch (3 messages):
- CUDA MODE ▷ #torchao (25 messages🔥):
- CUDA MODE ▷ #sequence-parallel (2 messages):
- CUDA MODE ▷ #off-topic (15 messages🔥):
- CUDA MODE ▷ #llmdotc (6 messages):
- CUDA MODE ▷ #sparsity-pruning (1 messages):
- CUDA MODE ▷ #liger-kernel (140 messages🔥🔥):
- Stability.ai (Stable Diffusion) ▷ #general-chat (187 messages🔥🔥):
- Nous Research AI ▷ #general (118 messages🔥🔥):
- Nous Research AI ▷ #ask-about-llms (43 messages🔥):
- Nous Research AI ▷ #research-papers (8 messages🔥):
- LM Studio ▷ #general (93 messages🔥🔥):
- LM Studio ▷ #hardware-discussion (82 messages🔥🔥):
- OpenRouter (Alex Atallah) ▷ #announcements (2 messages):
- OpenRouter (Alex Atallah) ▷ #app-showcase (2 messages):
- OpenRouter (Alex Atallah) ▷ #general (146 messages🔥🔥):
- Eleuther ▷ #general (56 messages🔥🔥):
- Eleuther ▷ #research (88 messages🔥🔥):
- Eleuther ▷ #lm-thunderdome (5 messages):
- Perplexity AI ▷ #announcements (1 messages):
- Perplexity AI ▷ #general (120 messages🔥🔥):
- Perplexity AI ▷ #sharing (10 messages🔥):
- Perplexity AI ▷ #pplx-api (9 messages🔥):
- Cohere ▷ #discussions (70 messages🔥🔥):
- Cohere ▷ #announcements (6 messages):
- Cohere ▷ #questions (10 messages🔥):
- Cohere ▷ #api-discussions (46 messages🔥):
- Cohere ▷ #projects (1 messages):
- Latent Space ▷ #ai-general-chat (31 messages🔥):
- Latent Space ▷ #ai-announcements (1 messages):
- Latent Space ▷ #ai-in-action-club (57 messages🔥🔥):
- Modular (Mojo 🔥) ▷ #general (34 messages🔥):
- Modular (Mojo 🔥) ▷ #mojo (15 messages🔥):
- Modular (Mojo 🔥) ▷ #max (2 messages):
- LangChain AI ▷ #general (46 messages🔥):
- LangChain AI ▷ #share-your-work (1 messages):
- LlamaIndex ▷ #blog (5 messages):
- LlamaIndex ▷ #general (28 messages🔥):
- LlamaIndex ▷ #ai-discussion (1 messages):
- OpenInterpreter ▷ #general (11 messages🔥):
- OpenInterpreter ▷ #O1 (2 messages):
- OpenInterpreter ▷ #ai-content (3 messages):
- LAION ▷ #general (14 messages🔥):
- LAION ▷ #announcements (1 messages):
- Interconnects (Nathan Lambert) ▷ #news (10 messages🔥):
- Interconnects (Nathan Lambert) ▷ #random (3 messages):
- Torchtune ▷ #general (11 messages🔥):
- DSPy ▷ #show-and-tell (5 messages):
- DSPy ▷ #general (5 messages):
- OpenAccess AI Collective (axolotl) ▷ #general (5 messages):
- OpenAccess AI Collective (axolotl) ▷ #axolotl-dev (1 messages):
- OpenAccess AI Collective (axolotl) ▷ #general-help (3 messages):
- Gorilla LLM (Berkeley Function Calling) ▷ #discussion (6 messages):
- tinygrad (George Hotz) ▷ #general (2 messages):
- tinygrad (George Hotz) ▷ #learn-tinygrad (2 messages):
AI Twitter Recap
all recaps done by Claude 3.5 Sonnet, best of 4 runs.
Key Trends in AI Research and Development
- MoE Models: @mervenoyann introduced OLMoE, an open-source Mixture-of-Experts Language Model, which boasts 1B active parameters and 7B total parameters, trained on 5 trillion tokens. It is reported to outperform models with similar active parameters, including Llama2-13B-Chat. The details highlight the innovative architecture involving 64 experts per layer and a focus on efficient training techniques.
- Economics of Training Large Models: @Yuchenj_UW discussed the logistics of training large models on expensive GPU setups and shared insights on model context requirements for optimal performance, emphasizing escalating resource needs for advanced AI tasks. The specific costs tied to GPU usage were mentioned, creating a practical snapshot of the economic implications of AI research.
- Emerging AI Projects: @rohanpaul_ai described cutting-edge developments in AI agents highlighting a shift towards projects that autonomously perform tasks like document analysis and technical image generation. These agents operate on user-defined tasks, showcasing a trend towards deeper integration of AI in practical enterprise applications.
Innovative Tools and APIs for AI Development
- Command and Control in AI: @ctojunior detailed a modern approach for AI-driven video generation that utilizes normal control pipelines and adapts them for better integration into generative models. This reflects a broader movement to enhance human interaction capabilities in automated systems.
- RAG Systems: @omarsar0 provided insights into Retrieval-Augmented Generation (RAG), emphasizing its relevance in comparison to long-context models. They pointed out the operational efficiency of RAG in producing superior outcomes with fewer tokens, indicating a significant area of research for future applications.
- GitHub Integration in AI: @rohanpaul_ai showcased a move towards GitHub integrations for AI applications under the new Anthropic Enterprise plan, marking a step towards operational efficiency in collaborative coding environments with enhanced security features.
Sectoral Impacts of AI Deployment
- Healthcare Innovations: @qdrant_engine introduced tools that combine text and image data for enhanced diagnostic capabilities, reflecting an ongoing revolution in healthcare workflows through AI assistance. The integration of multimodal search represents a critical advancement aimed at improving patient care.
- Educational Outreach: @DeepLearningAI announced updates to educational programs focused on Python programming, emphasizing the need for AI literacy among knowledge workers. This initiative aims to deepen understanding and interaction with AI tools in professional settings.
- Geopolitical Dimensions: Insights from @ylecun discussed the broader implications of AI governance pertaining to freedom of speech across different systems, linking technological discourse with fundamental democratic principles. This highlights the necessity for thoughtful regulation in the rising era of AI.
Humor and Memes in AI Discussion
- Coding Lamentations: @Aidan_mclau humorously noted the commonplace struggles coders face, reflecting on the absurdities and stresses tied to software development today. This captures a relatable sentiment among developers navigating the complexities of modern coding environments.
- Founder's Grind: @HamelHusain defined "founder mode," contrasting the rigorous demands placed on entrepreneurs against observed success rates and pitfalls, with an air of jest about start-up life and expectations.
- AI Antics: @teortaxesTex commented satirically on the state of AI and its promise of superintelligence, poking fun at the rhetoric around AI's capabilities while providing perspective on the hype vs. reality in AI developments.
AI Reddit Recap
/r/LocalLlama Recap
Theme 1. Benchmarking New AI Models Against Previous Generations
- OLMoE - a fully open source sparse MoE with only 1 billion active parameters (Score: 161, Comments: 8): OLMoE, a new open-source language model using sparse Mixture-of-Experts, has 7 billion parameters but uses only 1 billion per input token, outperforming models with similar active parameters and even larger ones like Llama2-13B-Chat. The model was pretrained on 5 trillion tokens and adapted to create OLMoE-1B-7B-Instruct, with all aspects of the work, including model weights, training data, code, and logs, made publicly available through various platforms.
- OLMoE's performance is questioned when compared to newer models like Deepseek V2 Lite 16B MoE. Users noted the model's openness but raised concerns about MoE training speed advantages during fine-tuning, citing issues with GPU utilization and loss stabilization.
- The model's 7B parameter size and 1B active parameter count are praised for their potential as a local assistant. Users anticipate 30-50 tokens/s without GPU when quantized, making it suitable for laptop use.
- Community interest in GGUF support and integration with llama.cpp is expressed. Some users are awaiting GGUF versions and benchmarks against more current models for a fair comparison.
Theme 2. Claude-Dev Extension Adds Support for Local LLMs
- Claude-Dev Now With Local LLM support! (Ollama, OpenAI Compatible Servers) (Score: 66, Comments: 13): Claude-Dev version 1.5.19 has been released with support for local Language Models through Ollama and OpenAI-compatible servers. This update, available on GitHub, addresses a long-awaited feature request from the community.
- Users expressed excitement about Claude-Dev's compatibility with local Language Models, particularly mentioning deepseek coder v2 for its affordability and potential performance.
- The update was well-received, with users looking forward to trying various models including Gemini, GPT-4, and free local options for simpler tasks.
- Community members showed appreciation for the new API support, indicating it was a highly anticipated feature.
Other AI Subreddit Recap
r/machinelearning, r/openai, r/stablediffusion, r/ArtificialInteligence, /r/LLMDevs, /r/Singularity
AI and Autonomous Systems
- Autonomous Agent Civilization in Minecraft: A groundbreaking experiment featuring over 1000 autonomous AI agents in Minecraft, creating their own culture, economy, religion, and government.
- Tesla's Actually Smart Summon (ASS): Tesla launches an improved version of their Smart Summon feature, demonstrating advancements in autonomous vehicle technology.
AI Image Generation and Processing
- ComfyUI Advanced Live Portrait: A demonstration of real-time AI-powered portrait generation and manipulation using ComfyUI.
- Improved Text Encoder for Stable Diffusion: A new ViT-L/14 / CLIP-L Text Encoder finetune for Flux.1, offering enhanced text adherence and detail in image generation.
AI Development and Future Predictions
- GPT-NEXT Announcement: OpenAI Japan teases GPT-NEXT for 2024, hinting at potential advancements in language models.
- AI Progress Perspective: An infographic emphasizing the importance of considering long-term AI progress rather than focusing on short-term developments.
Memes and Humor
- GPT-Hype Meme: A humorous take on the recurring cycle of GPT model releases and associated hype.
- AI vs. Robot Speculation: A meme-style image comparing the potential arrival of humanoid robots versus superintelligent AI.
AI Discord Recap
A summary of Summaries of Summaries by Claude 3.5 Sonnet
1. LLM Advancements and Benchmarking
- Llama 3.1 Models Make a Splash: Meta's Llama 3.1 family of models, including the massive 405B parameter version, has been released with capabilities like 128k context windows and function calling.
- The models are already being deployed, with OpenRouter launching Llama 3.1-405B-instruct at competitive pricing of $2.5/mil tokens. Users are eager to test its performance across various benchmarks.
- Command R Models Get a Refresh: Cohere released updated Command R and R+ models, featuring improved performance in reasoning, coding, and multilingual retrieval-augmented generation (RAG) tasks.
- The models boast up to 50% higher throughput due to GQA enhancements, with significant price reductions - Command R now costs $0.15/$0.60 per 1M input/output tokens, while R+ is $2.50/$10.00.
2. Optimization Techniques for LLMs
- Low-Rank Approximations for Efficient Training: Researchers are exploring low-rank approximations for gradient transmission in distributed training setups to reduce communication overhead between nodes.
- This approach aligns with ongoing efforts to develop adaptive communication patterns for large-scale AI projects, as discussed in papers like DiLoCo: Distributed Low-Communication Training of Language Models; a toy sketch of the idea appears at the end of this section.
- Dynamic Expert Routing Enhances Model Flexibility: The concept of Dynamic Expert Routing is gaining traction, allowing models to define their own experts during training rather than using fixed configurations.
- While promising for enhancing model adaptability, members noted a lack of comprehensive literature on the subject, highlighting an area ripe for further research and development.
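As a toy illustration of the low-rank idea above (a sketch of the general technique, not any particular paper's algorithm): truncate each gradient matrix to rank r with an SVD so that only two thin factors cross the network. Production schemes like PowerSGD add error feedback and cheaper factorizations, which this omits.

```python
import torch

def compress(grad: torch.Tensor, r: int = 8):
    """Truncated SVD: keep rank-r factors so only two thin matrices are sent."""
    U, S, Vh = torch.linalg.svd(grad, full_matrices=False)
    return U[:, :r] * S[:r], Vh[:r]

def decompress(P: torch.Tensor, Q: torch.Tensor) -> torch.Tensor:
    """Receiver rebuilds an approximate gradient from the two factors."""
    return P @ Q

g = torch.randn(1024, 1024)  # stand-in for one weight matrix's gradient
P, Q = compress(g)
# Payload drops from 1024*1024 values to 2*1024*8, at the cost of some
# approximation error (large for random matrices, typically smaller for
# real gradients, which tend to be close to low-rank).
print((decompress(P, Q) - g).norm() / g.norm())
```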
3. Open Source AI Developments
- Tinygrad Launches Affordable Cloud Service: Tinygrad introduced a cloud service priced at just $60/month, offering a 4090 GPU and 500 GB of storage, positioning it as 3x cheaper than vast.ai.
- The service allows users to run tinygrad locally while leveraging faster cloud operations, promising only one roundtrip per 'TinyJit' function, as announced in a tweet.
- Re-LAION 5B Dataset Addresses Safety Concerns: The Re-LAION-5B dataset was released, updating LAION-5B with additional safety measures and removing links to suspected CSAM content.
- This update, developed in collaboration with organizations like the Internet Watch Foundation, aims to provide a more ethical and safe dataset for AI research and development.
4. AI Applications and Industry Impact
- GameNGen Simulates DOOM in Real-time: The GameNGen neural model demonstrated the ability to simulate the game DOOM in real-time, achieving over 20 fps on a single TPU with high-quality interactions.
- Human raters struggled to distinguish between clips of the simulation and real gameplay, showcasing the potential for neural models in game development and interactive media.
- Meta AI Assistant Gains Traction: Meta's AI assistant has reportedly reached 400 million monthly active users and 40 million daily active users, indicating rapid adoption in the market.
- This growth, reported by The Information, suggests Meta is gaining ground in the AI assistant space, though still behind ChatGPT's reported 200 million weekly active users.
PART 1: High level Discord summaries
Unsloth AI (Daniel Han) Discord
- Fine-tuning LLMs Gains Credibility: Participants debunked the myth that fine-tuning can't teach new concepts, showcasing successful implementations reliant on proper parameters and robust datasets.
- Challenges remain, as some noted the tendency for models to hallucinate, necessitating a well-crafted fine-tuning approach.
- RAG vs. Fine-tuning Debate: A lively discussion on the effectiveness of RAG compared to fine-tuning for hallucination reduction concluded that a hybrid approach could maximize benefits.
- Participants acknowledged RAG's context-grounding advantages, suggesting flexibility in approach may yield superior outcomes.
- Llama 3.1 Launch at OpenRouter: The newly launched Llama 3.1-405B-instruct model features a significant 128k context and is competitively priced at $2.5/mil tokens.
- This model supports function calling, quickly gaining traction among users eager to leverage advanced capabilities.
- Cost Cuts with GPT-4o: OpenAI's GPT-4o model now costs $4 per 1M tokens, drastically reducing the expenses associated with token usage and encouraging broader adoption.
- With its Structured Outputs feature, GPT-4o constrains responses to a supplied JSON Schema, shifting LLM pricing strategies toward an economic model driven by performance; a minimal sketch of the API flow appears at the end of this section.
- Challenges in Multi-GPU Training: Members raised issues with multi-GPU setups encountering errors due to misconfigured GPU detection protocols, particularly in script execution.
- A pull request was submitted to improve compatibility with CUDA configurations, highlighting a need for refining the code's handling of GPU environments.
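For readers who have not used the Structured Outputs feature mentioned above, here is a minimal sketch of the documented flow: pass a `json_schema` response format with `strict: true`. The model snapshot and schema below are illustrative.

```python
from openai import OpenAI

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o-2024-08-06",
    messages=[{"role": "user", "content": "Extract: 'Ada Lovelace, born 1815'"}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "person",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "birth_year": {"type": "integer"},
                },
                "required": ["name", "birth_year"],
                "additionalProperties": False,
            },
        },
    },
)
print(resp.choices[0].message.content)  # JSON guaranteed to match the schema
```

With `strict: true`, decoding is constrained so the returned message parses against the supplied schema rather than merely resembling it.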
aider (Paul Gauthier) Discord
- Gemini Model Performance under Scrutiny: Users expressed skepticism regarding the new Gemini model's performance, particularly its compatibility with Aider.
- While some find it impressive, concerns persist about its effectiveness in various contexts.
- Sonnet Benchmark Shows Consistent Performance: Recent evaluations indicate Sonnet maintains effective code editing capabilities, with stable benchmark results.
- Despite rumors, performance statistics reveal a consistent pass rate across different tests.
- Magic Dev Unveils Long-Term Memory Model: Magic Dev introduced a model featuring a massive 100M token context window, enhancing coding tasks through reasoning.
- This development raises interest for its potential applications in complex problem-solving tasks.
- Aider's Development Path and Community Engagement: Paul G. praised community involvement in Aider's evolution, indicating no immediate plans for drastic changes.
- Discussions on future growth included potential GUI versions to foster user engagement.
- Aider Model Support Confusion: Discussions highlighted confusion around settings in the .env file for the Aider model using the OpenRouter API.
- Errors about the LLM Provider not being specified led to discussions about required environment variables.
OpenAI Discord
- Personalizing LLMs to Enhance User Experience: Users emphasized the significance of personalizing LLMs to create unique personalities and maintain long-term memory of interactions.
- Concerns were raised regarding the cost implications of API calls necessary for sustaining personalized experiences.
- Grok 2 and Gemini Show Down: A lively comparison took place between Grok 2 and Gemini, highlighting Grok's creativity but inconsistency with complex tasks.
- Users shared their frustrations with Grok's outputs, noting significant variations based on prompt quality.
- Optimizing Job Matching Scores: Imbalanced similarity scores in CV and job description comparisons were unpacked, with ranges identified between 5 and 65.
- Feedback suggested recalibrating prompts and classification rules to enhance scoring fairness and clarity.
- API Call Strategies – Separate vs Single Prompts: A debate stirred around using multiple API calls for distinct questions versus a single comprehensive prompt for document evaluation.
- Recommendations favored separate calls to reduce hallucinations, bolstering response clarity and reliability.
- Enhancing Document Analysis through Batch Processing: Chat focused on leveraging batch processing as a strategy to streamline large document analyses and maintain efficiency.
- Links to OpenAI's batch processing documentation circulated, igniting interest in effective data extraction techniques; a minimal sketch of the Batch API flow appears at the end of this section.
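As a companion to the batch-processing links above, a minimal sketch of OpenAI's Batch API flow as documented: upload a JSONL file of requests, then create a batch against it. The file name and request body are placeholders.

```python
from openai import OpenAI

client = OpenAI()

# requests.jsonl holds one request per line, e.g.:
# {"custom_id": "doc-1", "method": "POST", "url": "/v1/chat/completions",
#  "body": {"model": "gpt-4o-mini",
#           "messages": [{"role": "user", "content": "Summarize: ..."}]}}
batch_file = client.files.create(file=open("requests.jsonl", "rb"), purpose="batch")

batch = client.batches.create(
    input_file_id=batch_file.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",
)
print(batch.id, batch.status)  # poll later via client.batches.retrieve(batch.id)
```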
HuggingFace Discord
- Llama 3 Models Require Serious Hardware: A user seeks help building RAG applications with LLaMA 3 models, asking for ideal on-premise GPU and RAM configurations for models with 8B, 70B, and 405B parameters.
- Responses suggested using the Nvidia A100 GPU, with at least 300GB of GPU RAM required for the LLaMA 405B, prompting discussions on its cost and operational viability.
- Amazon ML Challenge 2024 Needs Collaborators: A member is on the lookout for teammates for the Amazon ML Challenge 2024, aiming to collaborate on innovative projects.
- No specific details about the challenge were provided, creating an open invitation for more passionate contributors to join.
- Exploring AI in CAD Systems: Discussions focused on the integration of AI with CAD systems, sparking interest in capabilities similar to J.A.R.V.I.S.
- Members shared advancements they made in incorporating AI, showcasing realistic potential for enhanced interactive applications.
- Advancements in Text-to-Speech ML: The Text-to-Speech-ML GitHub project aims to improve text-to-speech technology through community collaboration, welcoming contributions from users.
- This initiative signifies the community's push for advancements in machine learning related to speech synthesis, reinforcing open-source contributions.
- Animating Fireballs Using AI: Members discussed techniques to animate fireball effects in photos, recommending tools like AnimateDiff and IP Adapter Plus.
- This community-driven exploration reflects a collective effort to enhance static images with animated elements through various creative techniques.
CUDA MODE Discord
- LTM architecture uses RNN for attention: In a brief exchange, a member noted that the LTM architecture appears to utilize an RNN for managing attention.
- Understanding Triton's Atomic Add Scope Settings: The `scope=GPU` configuration in Triton's atomic add settings restricts operations to just the GPU, while `scope=system` allows for multi-GPU calculations, potentially influencing performance; a minimal kernel sketch appears at the end of this section.
- The default setting for multi-GPU environments is `scope=GPU`, ensuring functionality without requiring extra configuration.
- FX pass maps aten operations to Triton: A query emerged about creating an FX pass to map aten operations directly to a custom Triton kernel, aimed at performance optimization.
- Users confirmed that Triton can be called natively from PyTorch, integrating advanced GPU acceleration seamlessly.
- Quantization of Attention Layers Sparks Discussion: Members discussed the implications of quantizing QKV projections in attention layers, stressing the need for maintaining model accuracy.
- The default filter_fn automatically quantizes Linear layers, which raised questions about its operational assumptions.
- Release v0.2.0 enhances Liger-Kernel: The Liger-Kernel's new release v0.2.0 improves API clarity and introduces broader model support, but some users are facing Out Of Memory (OOM) errors.
- Integrating the LayerNorm module showed promising performance, although OOM issues with the Hugging Face example persist.
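To make the atomic-add scope discussion concrete, here is a minimal histogram kernel sketch. It assumes a Triton version whose `tl.atomic_add` accepts a `scope` keyword; the exact accepted strings (e.g. `"gpu"` vs `"sys"`) vary across releases, so treat the values below as illustrative.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def hist_kernel(x_ptr, hist_ptr, n, BLOCK: tl.constexpr):
    offs = tl.program_id(0) * BLOCK + tl.arange(0, BLOCK)
    mask = offs < n
    bins = tl.load(x_ptr + offs, mask=mask, other=0)
    # scope="gpu" (the default): atomics are coherent within this device only.
    # scope="sys" would widen coherence to other devices/host, usually slower.
    tl.atomic_add(hist_ptr + bins, 1, mask=mask, scope="gpu")

x = torch.randint(0, 16, (4096,), device="cuda", dtype=torch.int32)
hist = torch.zeros(16, device="cuda", dtype=torch.int32)
hist_kernel[(triton.cdiv(x.numel(), 1024),)](x, hist, x.numel(), BLOCK=1024)
assert hist.sum().item() == x.numel()
```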
Stability.ai (Stable Diffusion) Discord
- Optimize SDXL Performance: To boost SDXL performance, users recommend adding `--xformers`, `--medvram-sdxl`, and `--no-half-vae` to the `webui-user.bat` file, particularly for low-VRAM GPUs.
- These adjustments aim to enhance speed and reduce VRAM usage without compromising compatibility with the VAE.
- Clarifying SEG Implementation: Discussion around SEG in workflows reveals confusion over its necessity and complexity, especially regarding tools like Impact Pack.
- Participants are left questioning if SEG is a standard method or a specialized feature for certain tools.
- Massive Training Costs for AI Models: Training base models such as SD1.5 or SDXL is reported to take months and cost potentially millions, raising concerns about resource allocation.
- Users noted that LORA models can be trained with far fewer resources compared to larger models.
- RunwayML Pulls Stable Diffusion Repos: The removal of Stable Diffusion 1.5 repositories from RunwayML on platforms like HuggingFace has caused alarm within the community.
- This move suggests a potential shift in focus away from earlier models, leaving users speculating about future developments.
- GPU Generation Time Debates: Users with 3060 and 3060 Ti GPUs share their generation time experiences with SDXL and Flux models, raising concerns about performance.
- There are worries about whether these GPUs can manage long generation times and the associated model storage requirements.
Nous Research AI Discord
- Hermes 3 Communicates with Amnesia Mode: Users found that Hermes 3's amnesia mode shows a preference for formal language, rejecting casual terms like 'bruh' as unfriendly. This indicates a possible trend toward AI models exhibiting defined communication styles.
- The observation raises questions about how AI personality traits may shape future interactions with models.
- Low-Rank Approximations Optimize Gradient Transmission: A discussion emerged around using low-rank approximations for efficient gradient transmission among distributed nodes, potentially easing communication overhead. This approach resonates with the need for adaptive communication patterns in training.
- Members emphasized the importance of optimizing gradient performance in large-scale AI projects to enhance training efficiency.
- Training LLaMA 3 on Diverse Data: One user is training an 8b LLaMA 3 model using both synthetic and real instruction data from sources like Reddit and StackExchange, aiming to reduce AI-like behavior. This illustrates varied methodologies in refining model training.
- Such efforts could lead to significant findings on how diverse data sets influence AI behaviors and benchmarks.
- Introducing Word Game Bench for LLMs: Word Game Bench, a novel evaluation framework for LLMs, focuses on interactive games like Wordle to address typical evaluation shortcomings. This benchmark signifies an innovative shift in assessing model performance through playful interactions.
- Members expressed enthusiasm over the potential insights from this benchmark for improving language model assessment accuracy.
- GameNGen Simulates DOOM in Real Time: The neural model GameNGen demonstrates the ability to simulate DOOM independently of traditional engines, achieving over 20 frames per second with realism. Human ratings indicate difficulty in distinguishing its simulation from actual gameplay.
- Discussion around how this model can influence platforms like Unreal Engine emphasized the prospect of integrating such simulation technologies into upcoming games.
LM Studio Discord
- LM Studio Update 0.3.2 boosts performance: The latest LM Studio update (0.3.2) resolves latency issues with Flash Attention, enhancing local inference performance.
- Users had mixed feelings, noting improved functionality yet expressing concerns about stability compared to earlier versions.
- Flash Attention Models compatibility: LLaMa-3.1 and Mistral have been confirmed to support Flash Attention, with discussions expanding to Google's Gemma-2.
- This highlights an eagerness among users to gauge overall performance capabilities across various supported models.
- M2 Ultra Mac showcases LLM potential: A user successfully set up an M2 Ultra Mac with 192 GB Unified Memory, setting sights on LLM development.
- Pydus expressed curiosity about the size of models he could effectively load on this new hardware.
- Power management for multi-GPU setups: A configuration with 4x RTX 4090s was calculated to have a 3500W limit, sparking conversation on power distribution.
- Concerns focused on how to safely distribute power across multiple outlets without overwhelming breakers.
- LLM performance insights from Llama 3.1: Llama 3.1's 70B model reaches 97 tokens per second on a multi-GPU setup, with previous records showing slower rates.
- Discussions centered around optimizing performance, especially when distributing model layers across GPUs and ensuring efficient usage.
OpenRouter (Alex Atallah) Discord
- Gemini Flash 8B Model Launch: The new Gemini Flash 8B has been made available alongside the Gemini Flash Experiment, both currently free until AI Studio pricing is finalized.
- This launch is part of Google's initiative to enhance their model offerings and user navigation following the separation of Google Vertex from AI Studio.
- daun.ai Celebrates Launch: The team behind daun.ai received congratulations for their successful launch, marking a significant achievement in the community.
- Cheers and acknowledgments filled the chat as users recognized this milestone.
- Exciting Updates on Cohere Command Models: Updates to Command R models have restructured access points and changed model IDs for improved operational efficiency.
- Users are particularly enthusiastic about the updates, citing benefits in pricing and model performance.
- Perplexity Models Encounter Issues: A user reported problems with Perplexity models, receiving errors for invalid models—which stemmed from prior bugs that were promptly addressed.
- Clarifications on these issues are necessary as users seek to understand the scope of the errors impacting performance.
- Infrastructure Upgrades Cause Recent Downtime: Recent infrastructure upgrades are leading to increased downtime and challenges in system responsiveness.
- The team has acknowledged these issues, linking them to database limitations and ongoing projects to bolster the backend.
Eleuther Discord
- NaN weights derail embedding training: A user reported that their embedding weights go to NaN just a few steps into training, despite having a normal range initially. Checking gradients and loss components revealed a probable data-dependent decay term as the cause.
- Lightning's `detect_anomaly=True` setting helped track the issue down via gradient analysis; a minimal repro of anomaly detection appears at the end of this section.
- Community seeks feedback on research ideas: A PhD student sought input on research involving compression using diffusion models, asking for best sharing practices. Members suggested sharing in the general channel or a specific area for low-pressure discussions.
- Regularizing losses on network inputs was emphasized for stability, highlighting the importance of clarifying assumptions in discussions on Sparse Autoencoders (SAEs).
- Sparse encoding in SAEs clarified: A member clarified that a reconstruction error loss term should accompany a sparsity-focused loss in their SAE approach to avoid deviations during training. Additional losses were suggested to help with statistical context from frozen networks.
- Misunderstandings about the role of LLMs in SAEs were noted as vital for the encoding process.
- Dynamic Expert Routing enhances flexibility: During a discussion, it was explained that allowing models to define their own experts during training boosts adaptability over fixed configurations. A request for related papers revealed a gap in existing literature.
- The need for more resources on this Dynamic Expert Routing concept was highlighted.
- New Word Game Bench for Model Evaluation: The community introduced a benchmark called Word Game Bench, targeting evaluations of language models on word puzzle games like Wordle. Notably, no models have surpassed a 50% average win rate.
- The benchmark encourages interaction and feedback from models instead of relying on static responses.
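For reference on the debugging flag named in the first item: PyTorch's anomaly detection (which Lightning surfaces as `Trainer(detect_anomaly=True)`) makes the backward pass raise at the operation that produced NaN gradients, instead of letting them silently propagate into the weights. A minimal repro:

```python
import torch

torch.autograd.set_detect_anomaly(True)

x = torch.tensor([0.0], requires_grad=True)
y = x / x           # 0/0 produces NaN in the forward pass
y.sum().backward()  # raises, naming DivBackward0 as the NaN's source
```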
Perplexity AI Discord
- Discord Server Hits 100K Members: The Discord server has now reached an incredible milestone of 100K members, showcasing the vibrant growth of the community.
- Members expressed gratitude for the community's support, emphasizing the team's eagerness to continue growing and evolving together.
- Pro Subscriptions Vanishing: Users are voicing concerns about Pro subscriptions disappearing, possibly due to misuse of promo codes or account discrepancies.
- One user mentioned redeeming a promo code that became nonfunctional, prompting questions about potential voucher abuse measures.
- Model Performance Under Scrutiny: Discussions revealed that switching between AI models often yields similar responses, raising suspicions about the model differentiation post-updates.
- One user noted that inquiries about the model type returned generic information rather than specifics on GPT or Claude.
- PPLX API Credit Access Issues: Several users reported not receiving the promised $5 PPLX API credits post-Pro purchase and have sought assistance in resolving this.
- The support team's lack of resolution has left users requesting account details for further investigation.
- Rate Limiting Triggers Confusion: A user encountered a 429 Client Error when invoking the API endpoint, despite minimal function calls in their script.
- They raised concerns about prematurely hitting the rate limit and sought clarification on the underlying factors.
Cohere Discord
- Command R+ Models Deliver Significant Performance Enhancements: The recently refreshed Command R and R+ models, including `command-r-08-2024`, show boosted performance in reasoning, coding, and multilingual RAG, with throughput improvements up to 50%.
- Additionally, pricing has been adjusted significantly: $0.15/$0.60 per 1M input/output tokens for Command R, while R+ now costs $2.50/$10.00.
- Users Question MMLU's Practical Relevance: Nick Frosst noted that MMLU has limited correlation with real-world applications, as much of its content is outdated.
- The discussion reflected a community consensus on prioritizing practical performance metrics over traditional benchmarks like MMLU.
- C4AI Scholars Program Sparks Interest: A query arose regarding the eligibility of ongoing graduate students for the C4AI Scholars Program, particularly for a January internship.
- Members recommended directly contacting C4AI for clear details on the application process and ongoing opportunities.
- Maya LLaVA-Pretrain Dataset Launches with 4M+ Entries: The Maya LLaVA-Pretrain dataset now features 4,404,776 entries across 8 languages, fundamentally designed to enhance pretraining for large language and vision models.
- Access requires agreeing to sharing conditions, ensuring compliance with usage policies despite the dataset being publicly available.
- Trial API Key Restricts Usage: A trial API key user met a rate limit (Error 429), permitting only 1000 API calls per month, which highlighted the need to upgrade to a production key.
- Users discussed strategies for reranking citations in generated outputs, aiming to trim excess citations for clarity; a generic backoff pattern for the trial-key 429s is sketched at the end of this section.
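A generic retry pattern for the trial-key 429s discussed above (illustrative only; in real code you would catch the Cohere SDK's typed rate-limit exception rather than string-matching a broad `Exception`):

```python
import random
import time

def call_with_backoff(call, max_retries: int = 5):
    """Retry `call` on rate-limit errors with exponential backoff plus jitter."""
    for attempt in range(max_retries):
        try:
            return call()
        except Exception as err:  # replace with the SDK's 429 error type
            if "429" not in str(err) or attempt == max_retries - 1:
                raise
            time.sleep(min(2 ** attempt + random.random(), 30))

# usage (hypothetical client): call_with_backoff(lambda: co.chat(message="hi"))
```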
Latent Space Discord
- Codeium raises $150M for expansion: Codeium announced a $150 million Series C at a $1.25 billion valuation, bringing its total raised to $243 million to boost R&D.
- With these funds, they aim to accelerate growth despite not yet utilizing their Series B funds from January.
- Meta AI assistant's impressive reach: Meta's AI assistant has reached 400 million monthly active users and 40 million daily active users, highlighting its rapid adoption in the market.
- This surge in usage has led to discussions about the potential need for licensing as the platform grows.
- DeepMind introduces customizable Gems: Google DeepMind launched Gems, customizable AI chatbots designed for specific roles like a Learning Coach and Coding Partner.
- Critics emphasize that their success hinges on user-friendliness and the quality of curation applied to these tools.
- LLM benchmarks discussed in new podcast: The latest Latent Space Podcast features Google DeepMind's Nicholas Carlini stressing the need for custom LLM benchmarks.
- He discusses techniques for training data extraction and the challenges associated with losing logprobs from OpenAI.
- Concerns raised over research agent efficiency: During discussions, participants expressed concerns about research agents, noting an average research duration of 2 minutes costing around $0.005, pointing to inefficiencies.
- Debates also emerged on the effectiveness of the STORM approach versus one-shot methods for generating research papers, with a preference for continuous feedback.
Modular (Mojo 🔥) Discord
- Mojo's Growing Role in Web3: While Mojo is explored for blockchain protocols, it remains too immature compared to Go, Rust, and C++ for serious development. Ongoing enhancements on Mojo's IO and networking APIs are crucial to align with modern hardware capabilities.
- Feedback highlights the need for a more robust development environment to alleviate programmers' memory management concerns.
- Open Source Uncertainty for Mojo Compiler: Mojo is marketed as open-source, yet the compiler's source code isn't currently available due to the rapid iteration by a small team. The timeline for when or if this might change remains ambiguous.
- Members expressed concern over this lack of transparency in the project’s development direction.
- Debate: Programming Language Performance: A heated discussion evaluated the performance of Go, particularly in comparison to C, noting that Go's optimizer conservatism can lead to poorer performance on complex issues. This has raised questions on Go's suitability for certain applications moving forward.
- Opinions were mixed on how much slower than C Go actually is in practice.
- MAX SDK Development Tactics: The developer team for MAX SDK grapples with balancing development speed, licensing, and community engagement. Finding contributors knowledgeable in both MLIR and Mojo has proven challenging.
- Members are calling for expanded team efforts to address these knowledge gaps.
- Excitement Over OPENSEA Collaboration: News surfaced about a collaboration with OPENSEA for a new free mint, stimulating conversation and interest among members. Participation is encouraged through a claim link that has been circulated.
- While interest is evident, some members opted out, citing varied engagement levels.
LangChain AI Discord
- LangChain App Stumbles in Docker: A user ran into issues with their LangChain app when using the ChatOllama object in a Docker container, while it worked normally outside Docker. The root cause was identified as a problematic base URL, resolved by switching to a direct Ollama host URL.
- It seems the Docker setup needs specific configurations to perform well with the LangChain API.
- ChatOllama vs Ollama Showdown: ChatOllama caters specifically to chat-like interactions, whereas Ollama serves broader language model tasks, each with unique functionalities. Users shared usage examples and detailed API references for both models.
- The community praised the tailored use cases, making it clear why one might choose ChatOllama over Ollama based on project needs; a side-by-side sketch appears at the end of this section.
- Real-time Streaming Output Confusion: A user faced challenges with their agent executor, which collected all outputs rather than streaming them live. Questions arose about the implications of setting `streamRunnable = False` on the output behavior.
- Clarifying this behavior is crucial for optimizing real-time interactions in model deployments.
- Hybrid RAG Models for Enhanced LLMs: Discussions revolved around improving LLMs via feedback and fine-tuning techniques, despite their inability to learn in real-time. Participants explored alternatives like traditional RAG models and self-query techniques to boost model performance.
- Emphasis was placed on evolving RAG strategies to ensure competitive performance benchmarks.
- Creating a Custom GPT for HR: A user aimed to build a specialized GPT model for their HR team, highlighting the importance of avoiding hallucinations in its responses. Suggestions for implementing effective RAG techniques were made to refine the model's outputs.
- Community wisdom leaned towards iterative adjustments based on real feedback to cultivate an efficient HR tool.
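A side-by-side sketch of the two interfaces compared above, including the explicit `base_url` that resolved the Docker issue. Import paths are as in `langchain-community` around v0.2 and may move between releases; from inside a container, `host.docker.internal` typically stands in for `localhost`:

```python
from langchain_community.chat_models import ChatOllama
from langchain_community.llms import Ollama

OLLAMA_URL = "http://host.docker.internal:11434"  # plain localhost fails in Docker

# Chat-message interface: returns an AIMessage with role/content structure.
chat = ChatOllama(model="llama3", base_url=OLLAMA_URL)
print(chat.invoke("Say hi").content)

# Plain-completion interface: returns a bare string.
llm = Ollama(model="llama3", base_url=OLLAMA_URL)
print(llm.invoke("Say hi"))
```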
LlamaIndex Discord
- GymNation's Digital Transformation Triumph: GymNation improved its member experience significantly, increasing digital lead-to-sales conversion by 20% and achieving an 87% conversation rate with digital leads, as detailed in their success story.
- Their partnership with LlamaIndex has driven real business outcomes.
- LLMs in Production Talk Planned: Catch the upcoming discussion on large language models in production on September 9th, announced on Twitter.
- This talk is set to provide critical information for deploying LLMs effectively.
- LlamaIndex Integrates with MLFlow: The new integration with MLFlow enhances tracking and evaluation capabilities for LlamaIndex applications, as shared in a podcast by co-founder here.
- This integration promises improved logging and performance evaluations for ML models.
- Join the LLM x Law Hackathon: There's an exciting opportunity coming up on September 8th for the LLM x Law Hackathon, exploring AI applications in legal contexts, more details can be found on Twitter.
- Expect three tracks focused on innovative AI development in legal spheres.
- Enhanced Financial Analysis with MoW & RAG: A new approach combining Mixture of Workflows (MoW) and Corrective RAG allows for advanced financial data analysis using models like Phi-3 and Qwen-2, as outlined here.
- This method enables context-aware analysis of financial statements.
OpenInterpreter Discord
- Join the House Party Next Week!: A member announced a House Party for next week, sticking with an earlier time to gather more attendees.
- The invite included a heartfelt message to encourage participation, creating buzz around the upcoming event.
- Terminal Apps for KDE Needed: A member reported issues with Konsole, the current terminal app for KDE, causing screen bleeding while scrolling.
- Discussions sparked around alternative terminal applications to handle these problems effectively.
- Obsidian OI Plugin Patch Required: A user praised Obsidian OI plugin tutorial videos but encountered installation issues and requested assistance.
- Another member urged to detail these problems in a specific channel for targeted help.
- GameNGen Neural Model Powers Real-time Gameplay: The GameNGen neural model achieves over 20 fps simulating DOOM in real-time on a single TPU, showcasing impressive interaction quality.
- Next frame predictions hit a PSNR of 29.4, with testers finding it hard to distinguish real from simulated gameplay.
- AgentOps Team Leaves Members Buzzing: Anticipation grows around Adam and the AgentOps team, with recent discussions highlighting exciting developments.
- Members shared gratitude for the insights and the positive vibe surrounding what’s to come.
LAION Discord
- Google's GPU Acquisition Raises Questions: Members questioned why Google is buying NVIDIA GPUs if they already possess TPUs, hinting at potential performance considerations.
- "Is the TPU enough?" captures the curiosity over Google's hardware strategy amid rising competition.
- RunwayML Purges Stable Diffusion Repos: Discussion erupted over RunwayML deleting all their Stable Diffusion 1.5 repos on HuggingFace and GitHub, which led to disruptions in existing projects.
- Concerns over the impact on diffusers 1.5 functionalities were expressed, with one member noting it broke single file loading.
- Frustration over Repo Removals: Members expressed annoyance at RunwayML's lack of foresight in deleting the repositories without archiving, impacting various dependencies.
- One member speculated about legal reasons behind the removal but found no specific issues cited.
- Challenges in Generating Novel Covers: A member shared challenges in generating suitable images for novel covers, seeking ways to achieve a more comic book or cartoon style.
- Despite efforts with DALL-E, they received heavily generated AI pictures instead, illustrating difficulties in achieving intended styles.
- Launch of Re-LAION-5B Dataset: The Re-LAION-5B dataset was announced, marking an important update to LAION-5B that addresses safety concerns and removes links to suspected CSAM.
- Joint efforts with organizations like the Internet Watch Foundation ensure the dataset's integrity, now available for download in two safe versions, as detailed in the announcement.
Interconnects (Nathan Lambert) Discord
- Tech Giants Eye OpenAI's New Funding: Nvidia, Apple, and Microsoft are in talks to invest in OpenAI's new funding round, which would value the company above $100 billion, as highlighted by Bloomberg.
- "Glad to see a non-profit attract such interest," one remark went, underscoring the community's excitement about this potential investment.
- ChatGPT Dominates with Massive User Base: ChatGPT boasts over 200 million weekly active users, while Meta AI trails with 40 million daily active users, according to data from The Information.
- Some members discussed the implications of Meta AI's limited availability, particularly in regions like the EU.
- Tinygrad Launches Affordable Cloud Service: Tinygrad introduced a cloud service for just $60/month, featuring a 4090 GPU and 500 GB of storage, making it 3x cheaper than vast.ai. Users can run tinygrad locally and enjoy faster cloud operations with only one roundtrip per 'TinyJit' function.
- This offering aims to facilitate a seamless transition for developers needing both local and cloud capabilities.
- Inquiry on System Prompts and Evaluations: A user sought research on the influence of system prompts on evaluation scores, highlighting a burgeoning interest in prompt engineering.
- The inquiry indicates a desire to explore how to effectively shift performance outcomes of AI models through better prompt management.
- Anticipation for Chatbot Competition: Members expressed excitement about the ongoing chatbot wars, with one declaring, "Begun, the chatbot wars have."
- The discourse reflects confidence in the evolving ecosystem of AI assistants.
Torchtune Discord
- QLoRA Memory hits limits: Concerns emerged about QLoRA's memory requirements, questioning whether 4x 48GB GPUs should suffice for training. Users indicated their setups approach memory limits even with shorter sequences and without CPU offloading.
- Members discussed the implications of memory performance on training dynamics and potential optimizations.
- Multi GPU Evaluation Inquiry: A question arose around whether multi GPU evaluation is feasible in TorchTune, triggering discussions on best practices and setup expectations.
- Participants shared their thoughts on performance implications and configurations for achieving optimal results.
- Torch Version Compatibility Clarification: One user confirmed they are using Torch version 2.4.0+cu124, raising compatibility concerns with other setups. This version could influence how the model behaves in various configurations.
- Compatibility discussions emphasized the importance of aligning software versions with desired performance outcomes.
- Troubleshooting Illegal Memory Access: A member reported experiencing an illegal memory access error during training, recommending the use of `CUDA_LAUNCH_BLOCKING=1` for effective debugging.
- They pointed out that CUDA errors may be reported asynchronously, complicating the troubleshooting process and suggesting the need for deeper investigation; see the note at the end of this section.
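Why that flag helps: CUDA kernel launches are asynchronous, so the Python line where an illegal-memory-access error surfaces is often not the line that caused it. Forcing synchronous launches makes the traceback point at the offending op. The variable must be set before CUDA initializes:

```python
import os

# Must be set before torch initializes CUDA (ideally before `import torch`).
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

import torch

# With blocking launches, a faulty kernel now raises at its own call site
# rather than at some later, unrelated synchronization point.
```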
DSPy Discord
- DSPy Community Invited to Join the Revolution: A member shared a GitHub repo and invited the DSPy community to "join the revolution" around it, emphasizing community involvement.
- They expressed enthusiasm for collaborative efforts, boosting engagement in the project.
- LinkedIn Auto Jobs Applier Gains Popularity: The GitHub repo for LinkedIn Auto Jobs Applier is attracting attention with over 2k likes each day, showcasing its rising popularity.
- However, members raised concerns about its functionality, pointing to unresolved GitHub issues suggesting it still leaves something to be desired.
- Bay Area AI Meetup with Michael Ryan: Michael Ryan will discuss DSPy and LM Programs at the Bay Area AI meetup, covering the MIPROv2 optimization algorithm's application.
- His discussion emphasizes treating LM Programs with the same rigor as traditional software, highlighting the importance of testing and auditing; a sketch of compiling a program with MIPROv2 appears at the end of this section.
- AgentOps Platform Introduction: AgentOps provides tools for creating agents, including graphs, monitoring, and replay analytics, aiming to enhance LLM usage.
- The open-source platform invites community contributions, making it available through its GitHub repository.
- DSPy Doubts and Support: A user sought clarification on where to post doubts about DSPy, indicating a proactive interest in troubleshooting and engagement.
- This reflects an active community eager to support each other and improve understanding of DSPy functionalities.
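A hedged sketch of what that rigor looks like in practice: compiling a tiny DSPy program with MIPROv2. API details shift between dspy releases (e.g. `dspy.OpenAI` vs the newer `dspy.LM`, and some versions prompt for confirmation unless `requires_permission_to_run=False` is passed to `compile`); the LM choice, metric, and one-example trainset are placeholders.

```python
import dspy
from dspy.teleprompt import MIPROv2

dspy.settings.configure(lm=dspy.OpenAI(model="gpt-4o-mini"))  # placeholder LM

class QA(dspy.Signature):
    """Answer the question in one word."""
    question = dspy.InputField()
    answer = dspy.OutputField()

program = dspy.Predict(QA)
trainset = [dspy.Example(question="2 + 2 = ?", answer="4").with_inputs("question")]

def metric(example, pred, trace=None):
    # Toy containment check; real metrics should be task-appropriate.
    return example.answer.lower() in pred.answer.lower()

optimizer = MIPROv2(metric=metric)
optimized = optimizer.compile(program, trainset=trainset)
```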
OpenAccess AI Collective (axolotl) Discord
- Dark Mode Request for Axolotl GitHub Docs: A member requested a dark mode for the Axolotl GitHub documentation, stating that the current light mode strains the eyes.
- A switch to dark mode would significantly enhance usability for frequent users accessing configuration parameters.
- Optimal Hardware for Llama 70B Training: Questions arose about the hardware needed for full training of the Llama 70B model, particularly concerning the adequacy of A6000 GPUs.
- It was confirmed that using 3x A6000 GPUs would be sufficient for training the full weight model.
- Introduction of Assistant Prefill Feature in Transformers: A pull request proposes an assistant prefill feature for chat templates in Transformers, enabling the model to start responses autonomously.
- This addition aims to fulfill a widely requested need that has been expressed both internally and on GitHub; a usage sketch appears at the end of this section.
- Fix for Llama 3.1 Special Tokens: Concerns were raised about issues with uninitialized special tokens in the Llama 3.1 base model, particularly regarding out-of-distribution embeddings.
- In response, a new option `fix_untrained_tokens: true` was introduced to help resolve these issues.
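For context on the prefill item above, a hedged sketch of the behavior such a feature enables, as it surfaces in recent transformers versions via `continue_final_message` in `apply_chat_template` (availability depends on your transformers version; the gated model name is illustrative, and any chat model with a template works):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")

messages = [
    {"role": "user", "content": "Give a one-line summary of RLHF."},
    # The partial assistant turn acts as the prefill the model must continue.
    {"role": "assistant", "content": "In one line: RLHF is"},
]

prompt = tok.apply_chat_template(
    messages,
    tokenize=False,
    continue_final_message=True,  # leave the final assistant turn open
)
# `prompt` now ends mid-assistant-message, so generation continues the prefill
# instead of starting a fresh response.
```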
Gorilla LLM (Berkeley Function Calling) Discord
- Groq Leaderboard Addition Delayed: Members noted that Groq has not yet been added to the leaderboard, with their PRs expected to be raised next week.
- We're still waiting for Groq to contribute to the evaluation process.
- Commitment to Clear Documentation Steps: A member assured that they will document the necessary steps for reproducibility, addressing previous concerns in discussion.
- This approach aims to enhance clarity in the model's documentation.
- GIS Geometry Presentation Test Case Challenges: A member analyzed a Java test case where their model struggled with initialization prompts in the GIS geometry presentation.
- Despite the challenges, they concluded that the model's response was superior to function calls for initialization.
- Evaluation Temperature Settings Clarified: Members questioned if all models are evaluated at a temperature of 0 for fair comparisons, as previously mentioned.
- One member emphasized that maintaining unchanged parameters is essential for consistent function call outputs.
tinygrad (George Hotz) Discord
- tinygrad's operation limitations questioned: A member inquired if tinygrad is confined to statically scheduled operations and whether it struggles with semi-structured sparsity and weight selection.
- This inquiry ignited discussions around the framework's overall capabilities and raised doubts about operations that might be beyond tinygrad's reach.
- George Hotz seeks clarity on tinygrad limits: George Hotz requested specific examples of operations that users find challenging to execute in tinygrad, aiming to assess the framework's versatility and limitations.
- This indicates a proactive approach to understanding how operation scheduling might affect tinygrad's usability for complex tasks.
- Tensor.cat faces issues with sharded tensors: A user reported encountering an AssertionError while using `Tensor.cat` to concatenate sharded tensors along the batch axis, indicating padding problems.
- While unsqueezing an extra dimension was possible, the user still grappled with reshaping the resulting tensor, further complicating their implementation.
- Clarifying Tensor.cat error origins: The user questioned whether the issue with `Tensor.cat` was a fundamental limitation of tinygrad or simply a lack of supported functionality.
- They are considering code modifications to handle an extra batch dimension or exploring alternative methods to circumvent the need for `cat`.
The Alignment Lab AI Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The LLM Finetuning (Hamel + Dan) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The MLOps @Chipro Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The Mozilla AI Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The DiscoResearch Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The AI21 Labs (Jamba) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
PART 2: Detailed by-Channel summaries and links
Unsloth AI (Daniel Han) ▷ #general (459 messages🔥🔥🔥):
Fine-tuning LLMs
RAG vs. Fine-tuning
Multi-GPU Training
Model Ranking and Alpha
Data Generation for Training
- Fine-tuning LLMs and Knowledge Acquisition: There is a common misconception that fine-tuning cannot effectively teach new concepts, with some studies suggesting it leads to hallucinations. However, participants shared experiences of successful fine-tuning, especially when using the right parameters and strategies.
- Discussions indicated that fine-tuning can improve response styles, but it's essential to have robust datasets and understand the intricacies of model training.
- RAG vs. Fine-tuning for Hallucination Reduction: Participants debated the effectiveness of RAG compared to fine-tuning in reducing hallucinations, with insights suggesting that fine-tuning has its advantages when done correctly. It's noted that RAG can better ground responses in specific contexts.
- Ultimately, the consensus emerged that a combined approach of both methods could yield better results.
- Discussion on Multi-GPU Training Issues: Users encountered challenges when trying to run multiple notebooks on different GPUs simultaneously, leading to errors in training. It was identified that the code’s handling of GPU detection creates conflicts when validating GPU counts against the environment settings.
- A pull request was made to address these issues, emphasizing the need for proper handling of the `CUDA_VISIBLE_DEVICES` environment variable; the subtlety is illustrated just below.
- Importance of Rank and Alpha in Fine-tuning: The rank of the model affects the number of trainable parameters during fine-tuning, with higher ranks suggested for effectively teaching new concepts. Participants discussed the balance between utilizing higher ranks and avoiding catastrophic forgetting.
- RSLoRA was also mentioned as a potential method to stabilize training at higher ranks, although its effectiveness appeared variable among users.
- General Strategies for Effective Training: Tips were shared on using specific data generation and setup configurations to enhance the fine-tuning process, emphasizing the need for trial and error. Resource links, such as articles and videos, were provided to obtain further insights into effective methodologies.
- Several community members expressed gratitude for the shared knowledge and planned to experiment with the discussed methods over the weekend.
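The subtlety behind that pull request, spelled out: `CUDA_VISIBLE_DEVICES` masks which GPUs the current process can see, but a subprocess call to `nvidia-smi` enumerates physical hardware regardless, so the two counts can disagree. A minimal illustration (assumes a multi-GPU machine and that the variable is set before CUDA initializes):

```python
import os
import subprocess

os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # must happen before CUDA init

import torch

visible = torch.cuda.device_count()  # respects the mask: reports 1
physical = len(
    subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True)
    .stdout.splitlines()
)  # enumerates hardware: e.g. 4 on a four-GPU box

print(visible, physical)  # a naive nvidia-smi count over-reports vs. torch
```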
- MathΣtral: As a tribute to Archimedes, whose 2311th anniversary we're celebrating this year, we are proud to release our first Mathstral model, a specific 7B model designed for math reasoning and scientific disc...
- Fumo Touhou GIF - Fumo Touhou Fumo Touhou - Discover & Share GIFs: Click to view the GIF
- Kaggle Mistral 7b Unsloth محسن کره: Explore and run machine learning code with Kaggle Notebooks | Using data from No attached data sources
- Unsloth Notebooks | Unsloth Documentation: See the list below for all our notebooks:
- GitHub - mlabonne/llm-autoeval: Automatically evaluate your LLMs in Google Colab: Automatically evaluate your LLMs in Google Colab. Contribute to mlabonne/llm-autoeval development by creating an account on GitHub.
- Does Fine-Tuning LLMs on New Knowledge Encourage Hallucinations?: When large language models are aligned via supervised fine-tuning, they may encounter new factual information that was not acquired through pre-training. It is often conjectured that this can teach th...
- Fix for multi gpu setup training with a single GPU. by Sehyo · Pull Request #974 · unslothai/unsloth: check_nvidia() originally spawns a new process for nvidia-smi, thus bypassing that GPU count might be limited by an OS environmental variable as this won't be reflected in the new process. Add...
- 100M Token Context Windows: Research update on ultra-long context models, our partnership with Google Cloud, and new funding.
- @dylanebert on Hugging Face: "Here's a 1-minute video tutorial on how to fine-tune…": no description found
- GitHub - e-p-armstrong/augmentoolkit: Convert Compute And Books Into Instruct-Tuning Datasets (or classifiers)!: Convert Compute And Books Into Instruct-Tuning Datasets (or classifiers)! - e-p-armstrong/augmentoolkit
- unclemusclez/smollm-135m-instruct-devinator: SmolLM 135M Instruct Trained on DEVINator Data for Open Hands (Open Devin)
- GitHub - linkedin/Liger-Kernel: Efficient Triton Kernels for LLM Training: Efficient Triton Kernels for LLM Training. Contribute to linkedin/Liger-Kernel development by creating an account on GitHub.
- In-depth guide to fine-tuning LLMs with LoRA and QLoRA: In this blog we provide detailed explanation of how QLoRA works and how you can use it in hugging face to finetune your models.
- Orange Cat Smile Orenge Cat Smiling GIF - Orange Cat Smile Cat Smile Orenge Cat Smiling - Discover & Share GIFs: Click to view the GIF
- GitHub - huggingface/lighteval: LightEval is a lightweight LLM evaluation suite that Hugging Face has been using internally with the recently released LLM data processing library datatrove and LLM training library nanotron.: LightEval is a lightweight LLM evaluation suite that Hugging Face has been using internally with the recently released LLM data processing library datatrove and LLM training library nanotron. - hug...
- Fine Tuning Is For Form, Not Facts | Anyscale: Fine tuning is one approach to domain-specific model refinement (DSMR), but it’s not a silver bullet for improving domain-specific performance.
- Storing models to huggingface is not working · Issue #636 · unslothai/unsloth: Hello, I think instructions for storing model to hugging face are not very clear. Following line in notebook tries to push model to HF model repository ("hf/model", tokenizer, quantization_m...
- Benchmark against unsloth · Issue #57 · linkedin/Liger-Kernel: 🚀 The feature, motivation and pitch hey, did you run any benchmark against unsloth which uses similar kernels? I guess your project can be used as a dropdown replacement with multi gpu support. Alt.....
- Tweet from Bram (@BramVanroy): @hsu_byron Is this stable? If so, a downstream integration with @huggingface trainer would be extremely valuable :o I'd need be through accelerate cc @TheZachMueller maybe
- GitHub - facebookresearch/xformers: Hackable and optimized Transformers building blocks, supporting a composable construction.: Hackable and optimized Transformers building blocks, supporting a composable construction. - facebookresearch/xformers
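Two quick illustrations of the discussion above. First, the multi-GPU detection pitfall: a GPU count taken from a fresh `nvidia-smi` subprocess ignores `CUDA_VISIBLE_DEVICES`, while `torch.cuda.device_count()` respects it — a minimal sketch of the mismatch (not the actual Unsloth code):

```python
import os
import subprocess
import torch

# A subprocess-spawned nvidia-smi sees every physical GPU, because
# CUDA_VISIBLE_DEVICES only filters the CUDA runtime of the current process.
smi = subprocess.run(["nvidia-smi", "--list-gpus"], capture_output=True, text=True)
physical_gpus = len(smi.stdout.strip().splitlines())

visible = os.environ.get("CUDA_VISIBLE_DEVICES")  # e.g. "1" pins one GPU
usable_gpus = torch.cuda.device_count()           # respects the variable

# Validating against physical_gpus instead of usable_gpus is exactly the
# kind of mismatch that breaks one-notebook-per-GPU setups.
print(f"physical={physical_gpus} visible={visible!r} usable={usable_gpus}")
```

Second, where rank and alpha enter a LoRA setup — a hypothetical PEFT configuration with illustrative (not recommended) values:

```python
from peft import LoraConfig

# r controls adapter capacity (and the trainable-parameter count);
# lora_alpha scales the adapter's contribution. use_rslora switches the
# scaling from alpha/r to alpha/sqrt(r), which behaves better at high ranks.
config = LoraConfig(
    r=64,
    lora_alpha=64,
    use_rslora=True,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```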
Unsloth AI (Daniel Han) ▷ #off-topic (12 messages🔥):
Training AI scripts
Meta's upcoming models
GPT-4o pricing changes
LLM provider comparisons
Gemini 2.0 updates
- Simplified AI Training With One Script: A user shared that they are creating two scripts to help anyone train their own AI easily, eliminating the need for complex libraries or setups.
- They emphasized it's just a single code file to run with Python after installing some dependencies, and provided a link to text generation web ui as a resource.
- Meta to Announce New Llama Models: A post indicated that Meta will soon announce updates along with the next set of Llama models, sparking speculation among members about the nature of these models.
- Discussions included the possibility of new multimodal models akin to the Chameleon type rather than Llama 4.
- Lower Costs With GPT-4o: A user pointed out that OpenAI's new GPT-4o model cuts pricing to roughly $4 per 1M tokens, with input tokens 50% cheaper and output tokens 33% cheaper than before.
- It also offers Structured Outputs, ensuring model outputs exactly match specified JSON Schemas, prompting a shift in the economic model for LLM providers; a request sketch appears at the end of this section.
- LLM Providers as Modern App Stores: A member commented that LLM providers, similar to app stores, aim to control development, leading to a token-based payment system rather than revenue cuts from sales.
- Another member compared it to cloud services like Firebase, suggesting a broader trend in monetization strategies.
- Gemini 2.0 Sparks Interest: Users expressed excitement about Gemini 2.0, with one linking to a Reddit post that discusses its features and implications.
- It was suggested that Gemini 2.0 could be related to experimental models listed in AI Studio, raising curiosity about its capabilities.
- Tweet from OpenAI Developers (@OpenAIDevs): Our newest GPT-4o model is 50% cheaper for input tokens and 33% cheaper for output tokens. It also supports Structured Outputs, which ensures model outputs exactly match your JSON Schemas.
- Reddit - Dive into anything: no description found
- Reddit - Dive into anything: no description found
- GitHub - oobabooga/text-generation-webui: A Gradio web UI for Large Language Models.: A Gradio web UI for Large Language Models. Contribute to oobabooga/text-generation-webui development by creating an account on GitHub.
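A sketch of the Structured Outputs flow mentioned above (the schema and prompt are illustrative; assumes `OPENAI_API_KEY` is set in the environment):

```python
from openai import OpenAI

client = OpenAI()

# strict json_schema mode guarantees the reply parses against the schema.
resp = client.chat.completions.create(
    model="gpt-4o-2024-08-06",
    messages=[{"role": "user", "content": "Extract: 'Ada, 36, London'"}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "person",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "age": {"type": "integer"},
                    "city": {"type": "string"},
                },
                "required": ["name", "age", "city"],
                "additionalProperties": False,
            },
        },
    },
)
print(resp.choices[0].message.content)  # {"name": "Ada", "age": 36, "city": "London"}
```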
Unsloth AI (Daniel Han) ▷ #help (67 messages🔥🔥):
Learning Rate Scheduler
Fine-Tuning Use Cases
Tokenizers and Model Loading
GPU Resources and Configuration
Memory Optimization Techniques
- Cosine Learning Rate Scheduler Insights: A member inquired about the appearance of the learning rate graph when setting the scheduler to cosine with warmup steps.
- Another member noted that the behavior may vary depending on specific configurations; the sketch at the end of this section shows the canonical warmup-then-cosine shape.
- Exploring Fine-Tuning Use Cases: Members discussed the suitable applications for fine tuning and shared mixed experiences regarding its effectiveness.
- One member emphasized the importance of understanding whether fine tuning is appropriate for introducing new knowledge or if it serves better for other use cases.
- Differences in Tokenizer Usage After Training: A member asked when it’s necessary to push the tokenizer after training a model.
- The consensus was that pushing the tokenizer is only necessary when new tokens have been added.
- Managing GPU Resources Effectively: Discussions centered around the efficiency of renting versus owning hardware for AI training tasks.
- Several members agreed that renting compute resources is typically cheaper and more practical for occasional training.
- Optimizing RAM Usage During Training: A member facing out-of-memory errors while training a DPO model sought advice on optimization techniques within a 16GB RAM limit.
- Suggestions included reducing batch size and examining memory requirements for different training methods, noting that DPO generally requires more VRAM than standard fine-tuning; a sketch of the usual memory knobs appears at the end of this section.
- Finetune Llama 3.1 with Unsloth: Fine-tune and run Meta's updated Llama 3.1 model with 6x longer context lengths via Unsloth!
- Hastebin: no description found
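Two sketches for the threads above. The cosine-with-warmup shape, here via `transformers` (step counts are arbitrary):

```python
import torch
from transformers import get_cosine_schedule_with_warmup

model = torch.nn.Linear(8, 8)
opt = torch.optim.AdamW(model.parameters(), lr=2e-4)
sched = get_cosine_schedule_with_warmup(opt, num_warmup_steps=100, num_training_steps=1000)

lrs = []
for _ in range(1000):
    opt.step()              # warmup: linear 0 -> 2e-4 over the first 100 steps
    sched.step()            # then: half-cosine decay from 2e-4 toward 0
    lrs.append(sched.get_last_lr()[0])
```

And the usual memory knobs for a tight DPO budget (values illustrative; DPO holds a frozen reference model next to the policy, hence the extra VRAM):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="dpo-out",
    per_device_train_batch_size=1,   # smallest micro-batch
    gradient_accumulation_steps=16,  # recover an effective batch of 16
    gradient_checkpointing=True,     # trade recompute for activation memory
    bf16=True,                       # half-precision activations and grads
    optim="paged_adamw_8bit",        # 8-bit optimizer states via bitsandbytes
)
```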
Unsloth AI (Daniel Han) ▷ #showcase (4 messages):
OpenRouter Launch
Llama 3.1 Model
Pricing Strategy
- Stealth Launch of Llama 3.1 on OpenRouter: After weeks of effort, Llama 3.1-405B-instruct is now live on OpenRouter, featuring a full 128k context and function calling support.
- The model is available at avian.io and serves real users while maintaining the cheapest pricing at $2.5/mil tokens.
- Comment on Earning from OpenRouter: In response to a query, the member clarified that they receive payment regardless of link usage, indicating satisfaction with their work on the project.
- They emphasized that they don't earn extra money from referrals, expressing pride in building the infrastructure.
Link mentioned: Meta: Llama 3.1 405B Instruct – Provider Status: See provider status and make a load-balanced request to Meta: Llama 3.1 405B Instruct - The highly anticipated 400B class of Llama3 is here! Clocking in at 128k context with impressive eval scores, th...
Unsloth AI (Daniel Han) ▷ #community-collaboration (1 messages):
hamchezz: I want to finetune a llm on some undefined goal just because 😄
aider (Paul Gauthier) ▷ #general (285 messages🔥🔥):
Gemini Model Performance
Sonnet Benchmarking
Magic Dev's Long-Term Memory Model
Aider's Growth and Future
Coding Assistance Tools
- Gemini Model Performance under Scrutiny: There's speculation regarding the new Gemini model's performance with some users expressing skepticism, particularly related to its compatibility with Aider.
- Responses indicate that while many find the model impressive, concerns about its effectiveness in certain contexts persist.
- Sonnet Benchmark Shows Consistent Performance: Recent evaluations suggest Sonnet remains effective, with benchmark results showing no significant degradation over time in Aider's code editing capabilities.
- Users noted that despite rumors, the performance statistics reveal a stable pass rate throughout various tests.
- Magic Dev Unveils Long-Term Memory Model: Magic Dev has introduced a model featuring a massive 100M token context window, designed to enhance coding tasks by reasoning rather than relying on memorization.
- This development has sparked interest regarding its potential application in software development and other complex problem-solving tasks.
- Aider's Development Path and Community Engagement: Paul G. expressed enthusiasm for the community's involvement in Aider's evolution as an open-source tool, indicating no immediate plans for drastic changes.
- The discussion encompassed future growth aspects, including potential GUI versions to increase user engagement and investment opportunities.
- Comparison of AI Coding Tools: Users discussed their experiences with coding assistance tools like Zed and the anticipated offerings from Magic Dev, noting a lack of available products from Magic currently.
- The common sentiment highlights Microsoft’s GitHub Copilot as a significant player in the space, drawing attention to its expanding user base and revenue generation.
- Sonnet seems as good as ever: Sonnet’s score on the aider code editing benchmark has been stable since it launched.
- Dancing Cat Dance GIF - Dancing cat Dance Cat - Discover & Share GIFs: Click to view the GIF
- Homer Brain GIF - Homer Brain Monkey - Discover & Share GIFs: Click to view the GIF
- Codebase Retrieval | Continue: Talk to your codebase
- Tweet from OpenAI Developers (@OpenAIDevs): This model is also now available in the API as `chatgpt-4o-latest`. We recommend `gpt-4o-2024-08-06` for most API usage, but are excited to give developers access to test our latest improvements for c...
- 100M Token Context Windows: Research update on ultra-long context models, our partnership with Google Cloud, and new funding.
- Nu Deployment: Nu Deployment has 3 repositories available. Follow their code on GitHub.
- GitHub - nus-apr/auto-code-rover: A project structure aware autonomous software engineer aiming for autonomous program improvement. Resolved 30.67% tasks (pass@1) in SWE-bench lite and 38.40% tasks (pass@1) in SWE-bench verified with each task costs less than $0.7.: A project structure aware autonomous software engineer aiming for autonomous program improvement. Resolved 30.67% tasks (pass@1) in SWE-bench lite and 38.40% tasks (pass@1) in SWE-bench verified wi...
aider (Paul Gauthier) ▷ #questions-and-tips (69 messages🔥🔥):
Aider Model Support
Repo File Detection
User Experience with Aider Errors
Handling Large Repos with Aider
Integration with GitHub Copilot
- Aider Model Support Confusion: Users discussed settings in the `.env` file for the Aider model, with confusion arising about the correct format for the model name when using the OpenRouter API.
- One user noted receiving errors about the LLM Provider not being specified, leading to confusion over required environment variables.
- Repo File Detection Issues: A user inquired how to get Aider to automatically detect recently created files, as they only seem to appear after the `/add` command.
- The community suggested commands like `/drop` and `/clean`, but none resolved the issue of new files not being suggested automatically.
- Improving User Experience with Aider Errors: A discussion arose around presenting error messages more effectively to avoid overwhelming users with long text blocks when issues occur.
- Members agreed that simplifying error presentation can improve UX, especially when users don't immediately recognize critical error messages.
- Handling Large Repos with Aider: A user queried the limitations of Aider's efficiency with large repositories, prompting discussions about managing complexity.
- Suggestions for keeping Aider focused included adding only relevant files and breaking tasks into smaller, manageable steps for more effective code editing.
- Integration with GitHub Copilot: A member asked if Aider could theoretically utilize the GitHub Copilot API, given their company has approved Copilot but not other LLMs.
- The conversation highlighted the challenges in integrating various LLMs due to the corporate approval process, particularly around legal scrutiny.
- Model warnings: aider is AI pair programming in your terminal
- Config with .env: Using a .env file to store LLM API keys for aider.
- Providers | liteLLM: Learn how to deploy + call models from different providers on LiteLLM
- GitHub - ChimeHQ/SwiftTreeSitter: Swift API for the tree-sitter incremental parsing system: Swift API for the tree-sitter incremental parsing system - ChimeHQ/SwiftTreeSitter
- Keys | OpenRouter: Manage your keys or create new ones
- Supported languages: Aider supports pretty much all popular coding languages.
- `py-tree-sitter-languages` is unmaintained · Issue #7 · paul-gauthier/grep-ast: Hi @paul-gauthier , thanks for your work on aider. I've been having a blast using it. This project uses https://github.com/grantjenks/py-tree-sitter-languages, but that project is unmaintained and...
- Tips: Tips for AI pair programming with aider.
- FAQ: Frequently asked questions about aider.
aider (Paul Gauthier) ▷ #links (1 messages):
Anthropic Prompt Engineering
Jupyter Notebooks
UVX Tool
- Anthropic Offers Excellent Prompt Engineering Resources: Anthropic continues to excel in providing documentation, offering an Interactive Tutorial on Prompt Engineering accessible via this link.
- This tutorial, delivered through Jupyter notebooks, establishes Anthropic as a leader in LLM vendor documentation.
- Hands-on with Jupyter Notebooks for Prompt Engineering: The first few chapters of the tutorial are basic but effectively demonstrate simple prompts using the Anthropic API, with the installation command `%pip install anthropic` run inside a proper virtual environment.
- Users can start their Jupyter server quickly with commands like `git clone https://github.com/anthropics/courses` and `uvx --from jupyter-core jupyter notebook courses`.
- Issue Reporting and Community Contribution: A user mentioned filing an issue and a pull request on Anthropic's GitHub repository after navigating the tutorial, indicating an active community engagement.
- This demonstrates a commitment to improving the resources collaboratively, enhancing the overall learning experience.
Link mentioned: Anthropic’s Prompt Engineering Interactive Tutorial: Anthropic continue their trend of offering the best documentation of any of the leading LLM vendors. This tutorial is delivered as a set of Jupyter notebooks - I used it …
OpenAI ▷ #ai-discussions (318 messages🔥🔥):
Personalization of LLMs
Comparison of AI Models
AI for Customer Support
AI Coding Performance
Upcoming AI Releases
- Personalization of LLMs is Desired: Users discussed the importance of personalizing LLMs, with suggestions for AI to have unique personalities and long-term memory of conversations to enhance user experience.
- Concerns were raised about the cost implications of using API calls for maintaining personalized interactions.
- Comparison Between AI Models: The performance of various AI models was debated, specifically the Gemini and Grok 2, with some users noting that Grok can be creative but not always reliable for complex programming tasks.
- Some users expressed disappointment with specific outputs from models like Grok, highlighting varying effectiveness depending on the prompt.
- AI in Customer Support Applications: A user inquired about building AI chatbots for customer support, noting the potential of using the OpenAI API but expressing concerns over its complexity.
- Recommendations included starting with simpler AI projects and potentially working with developers or existing no-code solutions.
- Performance of AI in Coding: The conversation covered the relative coding abilities of models like Grok 2 and Gemini, with discussions on how Grok performs better in some creative aspects but may falter in debugging.
- Testimonies indicated that users need to carefully choose models based on specific coding tasks, suggesting that Grok can provide viable outputs in some scenarios.
- Future AI Releases and Developments: Speculation arose regarding the release timelines of new AI models from OpenAI and Cerebras, with concerns about market impacts and competition with other nations regarding AGI.
- Participants expressed caution over the implications of AGI advancements, particularly in a geopolitical context, while discussing potential collaborations between tech companies.
OpenAI ▷ #gpt-4-discussions (1 messages):
smilebeda: 👍
OpenAI ▷ #prompt-engineering (16 messages🔥):
Job Description vs CV Matching
Prompt Engineering Strategies
Document Analysis Efficiency
API Call Structure
Deep Document Analytics Discussion
- Job Description vs CV Matching Scores: Discussions highlighted issues with prompts for scoring the similarity between CVs and job descriptions, specifically on how candidates from different backgrounds were assessed.
- One user noted their struggles with prompts leading to inaccurate scores and shared feedback on a specific scoring method used.
- Finding Better Classification Rules for Matching: Suggestions were made to refine classification rules for job scoring to avoid reliance on semantic similarity.
- A user expressed that enhancing classification could lead to immediate improvements in the scoring results.
- Separating API Calls vs Single Prompts: A conversation examined whether to use multiple API calls for different questions or a single prompt for document analysis.
- It was suggested that using separate requests can reduce hallucinations and maintain clarity in responses.
- Batch Processing for Efficiency: A recommendation for using batch processing was shared with a link to OpenAI's documentation to enhance prompt efficiency.
- This method could streamline the analysis of large documents without overwhelming the model; a sketch of the batch flow appears at the end of this section.
- Exploring Deep Document Analytics: An inquiry was made regarding discussions on deep document analytics and using ChatGPT responses for data collection.
- Users were directed towards a specific channel for further information on fine-tuning and model utilization.
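The batch flow referenced above, roughly as the docs describe it (the file name is a placeholder; each JSONL line is one independent chat-completions request):

```python
from openai import OpenAI

client = OpenAI()

batch_file = client.files.create(file=open("requests.jsonl", "rb"), purpose="batch")
batch = client.batches.create(
    input_file_id=batch_file.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",   # results arrive asynchronously at reduced cost
)
print(batch.id, batch.status)
```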
OpenAI ▷ #api-discussions (16 messages🔥):
Prompt Engineering
API Call Strategies
Document Analytics
Batch Processing
Fine-Tuning
- Challenges in Job Description Matching: A member shared their experience using the OpenAI API to compare job descriptions with CVs, receiving inconsistent similarity scores despite multiple prompts, including specific scoring rules.
- The scoring for CVs ranged from 5 to 65 with unclear rationale given for certain comparisons, causing confusion.
- Fine-Tuning for Improved Results: A member suggested that fine-tuning the model could resolve the score discrepancies but noted the requirement for a substantial dataset.
- An alternative idea was presented to force the API to return responses in JSON format to ensure structured output.
- Optimizing Prompt Complexity: A user inquired about the efficacy of separate API calls for multiple questions versus a single comprehensive prompt for extracting information from large documents.
- It was advised that separate requests could reduce the chances of hallucinations, making smaller requests a preferred strategy.
- Exploration of Document Analytics: One member expressed interest in deep document analytics and sought resources for discussing approaches for effective data extraction.
- There was a reference made to starting with ChatGPT responses for data collection before moving on to fine-tuning other models.
- Resource Sharing for Developers: A user was directed to the OpenAI documentation on batch processing to enhance their implementation strategy.
- Another member thanked the community for the resource while indicating their interest in obtaining more information regarding document analytics.
HuggingFace ▷ #general (223 messages🔥🔥):
Inferences of Llama 3.1 405B
GPT-2 local usage
Amazon ML Challenge 2024
Testing and Code Quality
AI Integration with CAD
- Challenges with Llama 3.1 405B Inference Endpoint: A member inquired about successfully calling the inference endpoint for the Llama 3.1 405B model, with another mentioning a positive experience using NIM 3.1 405B.
- There appears to be interest in finding out the efficiency and capabilities this latest model offers compared to others.
- Experiences with GPT-2: One user expressed confusion over unexpected outputs from GPT-2, describing its generations as threatening and erratic during runs.
- Others recommended newer models like Llama and Mistral as better alternatives for chat and instruction tasks.
- Amazon ML Challenge 2024 Team Search: A user was seeking teammates for the Amazon ML Challenge 2024, looking to collaborate on the competition's projects.
- The inquiry lacked additional details about the challenge specifics, creating opportunities for engagement within the community.
- Importance of Testing in Code Quality: A member shared insights on the significance of running multiple tests for code quality, emphasizing early bug detection and improved maintainability.
- They listed various testing methods, including unit and integration testing frameworks, to ensure overall code stability.
- AI Integration with CAD Systems: A conversation arose about the capabilities of AI models similar to J.A.R.V.I.S from Iron Man, with mentions of existing projects integrating AI with CAD.
- It was noted that various individuals have successfully made significant advancements in AI, showcasing the potential for interactive applications.
- torch.Tensor.to — PyTorch 2.4 documentation: no description found
- Licenses: no description found
- Tweet from Noah Reid (@NM_Reid): uhh, anaconda just sent a message to our HPC admins that we're in violation their ToS and we now need to pay for a license or remove all their software from our system?
- unclemusclez/smollm-135m-instruct-devinator: SmolLM 135M Instruct Trained on DEVINator Data for Open Hands (Open Devin)
HuggingFace ▷ #today-im-learning (4 messages):
Human Feedback in Model Training
Low Bit Quantization
GPU Importance for Training
Colab and Kaggle for AI Learning
- Human Feedback's Role in Model Accuracy: A recent paper discusses how human feedback serves as a standard for evaluating Large Language Models and its implications on training objectives. The study highlights weaknesses in preference scores, suggesting they might under-represent crucial aspects such as factuality, as detailed in the PDF.
- The analysis reveals that the assertiveness of outputs can skew perceived factuality errors, indicating a need for comprehensive evaluation metrics.
- Learning about Low Bit Quantization: One member expressed interest in low bit quantization, linking to a foundational research paper on the technique. This suggests a focus on optimizing model performance while reducing storage requirements.
- Understanding this approach can contribute significantly to improving efficiency in neural network operations.
- Importance of GPU for Training: A strong recommendation was made regarding the necessity of GPUs for effective model training, especially through platforms like Colab or Kaggle. The member urged others not to pursue training without GPU support.
- This indicates a consensus on the limitations of CPU-based training for modern AI tasks.
Link mentioned: Human Feedback is not Gold Standard: Human feedback has become the de facto standard for evaluating the performance of Large Language Models, and is increasingly being used as a training objective. However, it is not clear which properti...
HuggingFace ▷ #cool-finds (4 messages):
LLM Pruning
Text-to-Speech Machine Learning
Multi-Party Chat Agents
Qwen2-VL series
Vision-Language Models
- Efficient LLM Pruning Strategies: A recent study examines layer pruning strategies for open-weight pretrained LLMs, finding minimal performance degradation until after 50% of layers are removed. They use parameter-efficient finetuning techniques like QLoRA to enhance the models post-pruning.
- This method can significantly reduce computational resources while improving memory and latency during inference; a pruning sketch appears at the end of this section.
- Enhancements in Text-to-Speech with GitHub Project: The Text-to-Speech-ML GitHub repository is a collaborative project for advancing text-to-speech technology. Users can contribute to its development by creating GitHub accounts.
- This initiative provides a platform for collaborative improvements and innovations in machine learning for speech synthesis.
- Exploration of Multi-Party Conversational Agents: A new paper discusses the need for dialogue systems to handle multi-party conversations, differing from traditional pairwise chats. The authors introduce a dataset called MultiLIGHT, aimed at improving model training for more complex interactions.
- Challenges include deciding when to talk and creating coherent responses based on multiple characters in conversations.
- Qwen2-VL Series: State-of-the-Art Vision-Language Models: Qwen has unveiled the Qwen2-VL series, which are advanced vision-language models that also incorporate video understanding capabilities. This series marks a significant advancement in the integration of vision and language processing.
- For more details, you can check their announcement at Qwen's blog.
- The Unreasonable Ineffectiveness of the Deeper Layers: We empirically study a simple layer-pruning strategy for popular families of open-weight pretrained LLMs, finding minimal degradation of performance on different question-answering benchmarks until af...
- Multi-Party Chat: Conversational Agents in Group Settings with Humans and Models: Current dialogue research primarily studies pairwise (two-party) conversations, and does not address the everyday setting where more than two speakers converse together. In this work, we both collect ...
- GitHub - Azymack/Text-to-Speech-ML-: Contribute to Azymack/Text-to-Speech-ML- development by creating an account on GitHub.
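A hypothetical depth-pruning sketch in the spirit of the paper: drop a contiguous block of decoder layers, then "heal" with parameter-efficient finetuning such as QLoRA (the layer indices below are illustrative, not the paper's):

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")
layers = model.model.layers            # ModuleList of transformer blocks
drop = set(range(24, 28))              # which block to remove — illustrative
model.model.layers = torch.nn.ModuleList(
    layer for i, layer in enumerate(layers) if i not in drop
)
model.config.num_hidden_layers = len(model.model.layers)
# Follow with QLoRA finetuning to recover most of the lost quality.
```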
HuggingFace ▷ #i-made-this (12 messages🔥):
VividNode
ToonGPT Launch
Word Game Bench
FLUX LoRA Training
Thoth Bot
- VividNode: Your Personal AI Chatbot: A user launched their own open-source chatbot called VividNode, created with Python and PySide6, with plans to enhance its features in the future. They expressed excitement about developing their skills and collaborating with others on open-source projects.
- Users can experience GPT-like capabilities and image generation directly on their desktop, with chat history locally stored.
- ToonGPT Goes Live on Product Hunt: ToonGPT is now live on Product Hunt, inviting users to support its interactive fun for kids. The developers are enthusiastic about gathering feedback to improve the platform.
- They emphasized their eagerness for community engagement and user interaction to enhance the offering further.
- Introducing Word Game Bench for Model Evaluation: A new framework called Word Game Bench was introduced for evaluating language models on word puzzle games, highlighting its challenging nature. This benchmark evaluates models on games like Wordle and Connections, offering distinct advantages over typical evaluations.
- Models interactively engage with the game, using feedback to improve performance, while some unconventional evaluation methods are being considered; a toy version of the loop appears at the end of this section.
- FLUX LoRA Training Tutorial Released: A tutorial guide for FLUX LoRA training using the Kohya SS GUI has been shared, aiming to simplify the training process for users. The tutorial is designed for those with limited GPU resources, specifically for an 8GB GPU running Windows.
- This guide caters to learners looking to get up to speed quickly with practical frameworks.
- AI-Powered Thoth Bot Launch: Thoth Bot has been introduced as an AI-powered CLI tool that utilizes multiple LLMs via the Groq API for tasks such as chat and Python code generation. It streamlines coding workflows with features for automated code generation, execution, and error fixing.
- The project aims to enhance coding efficiency and simplify the development process for programmers.
- Word Game Bench: no description found
- ai-research-agent: no description found
- What is VividNode & How to Use It: VividNode is a software that allows you to directly experience GPT chatbot (ChatGPT) and image generation features on your desktop without…
- ToonTales - KiddieGPT - Product Information, Latest Updates, and Reviews 2024 | Product Hunt: Introducing ToonGPT: A delightful AI-powered companion crafted for kids! Inspired by my daughter Becky, ToonGPT combines the magic of cartoons with interactive fun, sparking creativity and joy in ever...
- GitHub - U-C4N/Thoth-Bot: AI-powered CLI tool for chat, Python code generation, and improvement using multiple LLMs via Groq API. Streamlines coding workflow with automated code generation, execution, and error fixing.: AI-powered CLI tool for chat, Python code generation, and improvement using multiple LLMs via Groq API. Streamlines coding workflow with automated code generation, execution, and error fixing. - U-...
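A toy version of the interactive loop such a benchmark implies (not the actual Word Game Bench code; `ask_model` is a stand-in for the LLM under test, and the feedback rule simplifies Wordle's duplicate-letter handling):

```python
import random

WORDS = ["crane", "slate", "pious", "sworn"]

def ask_model(history):
    # A real harness would prompt the model with the (guess, feedback)
    # history and parse its next guess; here we just pick a legal word.
    return random.choice(WORDS)

def feedback(guess, answer):
    return "".join(
        "G" if g == a else ("Y" if g in answer else "_")
        for g, a in zip(guess, answer)
    )

def play(answer, max_turns=6):
    history = []
    for _ in range(max_turns):
        guess = ask_model(history)
        history.append((guess, feedback(guess, answer)))
        if guess == answer:
            return True
    return False

print(play("crane"))
```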
HuggingFace ▷ #reading-group (5 messages):
Meta FAIR's Transfusion research
Impressions on Transfusion
GitHub updates
- Meta FAIR's Transfusion claims leap in multimodal modeling: Meta FAIR's groundbreaking research on Transfusion introduces a unified framework for training transformers on mixed-modality sequences, where it predicts tokens and diffuses images simultaneously.
- The experimental results showcase superior performance and scalability, scaling up to a 7-billion-parameter model pretrained on 2 trillion multi-modal tokens.
- Excitement over Transfusion's potential impact: Members express agreement on the potential of Transfusion, with one noting its significance as a game-changer in multimodal modeling.
- “It’s exciting to see how this could shape the future of AI in multimodal tasks,” captures the prevailing enthusiasm in the discussion.
- Query on the actual performance of Transfusion: A member noted curiosity regarding how well Transfusion performs, remarking on the presence of many Gen AI keywords in the paper.
- This skepticism reflects a desire for deeper insights into the framework's effectiveness beyond the hype.
- Updates on GitHub for record keeping: A member updated their GitHub for record keeping and requested feedback on any issues found.
- This reflects an ongoing effort in the community to maintain transparency and address any potential concerns.
HuggingFace ▷ #computer-vision (13 messages🔥):
Image Processing Techniques
Transfer Learning Challenges
Noisy Document Classification
Project Collaboration
- Utilizing Image Processing for Document Quality: One member suggested using image processing techniques combined with pre-trained models like OpenCV to assess document quality in terms of blurriness and darkness.
- They recommended employing algorithms like Laplacian variance for detecting blur, and consulting CNNs like VGG or ResNet for general image-quality features; a blur-check sketch appears at the end of this section.
- Transfer Learning Experiments on Document Quality: Another member reported challenges in using transfer learning on datasets where brightness and blur were manually adjusted, indicating poor performance in real-world tests.
- They expressed interest in exploring OpenCV techniques and inquired if there were any articles that could guide them in this direction, as this is a prevalent issue in organizations.
- Discussion on Project Source Code: A request was made for the project source code to provide more specific solutions for the discussed document classification project.
- Members shared links to their GitHub repositories, including noisy_doc_clf, where augmented images and transfer learning attempts are documented.
- Late Night Communication: One member noted it was late at night and proposed to continue discussions the following morning.
- They also mentioned sending a friend request, which was met with agreement and appreciation from another member.
Link mentioned: noisy_doc_clf/notebooks/train.ipynb at main · ajkdrag/noisy_doc_clf: Contribute to ajkdrag/noisy_doc_clf development by creating an account on GitHub.
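The blur/darkness heuristics mentioned above fit in a few lines of OpenCV; the thresholds are hypothetical starting points to tune per document set:

```python
import cv2
import numpy as np

img = cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE)
if img is None:  # fall back to a synthetic image so the sketch runs anywhere
    img = np.random.randint(0, 255, (512, 512), dtype=np.uint8)

# Sharp images have strong second derivatives at edges, so a low variance
# of the Laplacian suggests blur.
blur_score = cv2.Laplacian(img, cv2.CV_64F).var()
brightness = float(img.mean())

print(f"blurry={blur_score < 100.0} dark={brightness < 60.0}")
```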
HuggingFace ▷ #NLP (9 messages🔥):
LLaMA 3 models
GPU requirements for LLaMA 3
Inference configurations for LLaMA 3
Comments on shared advice
Cost considerations for LLaMA 3
- Request for Help with LLaMA 3 Models: A member seeks assistance with building RAG applications using LLaMA 3 models, asking for recommendations on suitable on-premise GPUs and RAM configurations for models with 8B, 70B, and 405B parameters.
- They specifically inquire about the optimal GPU and RAM setups needed to run these large models effectively.
- Nvidia A100 Suggested as Best GPU: In response to the request, another member suggests the Nvidia A100 - large as the best GPU option.
- They did not clarify which specific model this recommendation relates to, prompting further questions about RAM requirements.
- LLaMA 405B Has High GPU RAM Needs: A member explains that running the LLaMA 405B will require more than one GPU and at least 300Gb of GPU RAM depending on precision settings.
- They caution that this model is extremely expensive to run and suggest considering cloud-based options; the back-of-the-envelope arithmetic at the end of this section shows where the number comes from.
- Reflections on Contributions: Some members comment on the provided advice, with one asserting it's a model-generated response that doesn't add any useful information.
- Another member humorously speculates that the previous message might have been generated by LLaMA 3 itself.
- Inquiries about Audit Log: One member suggests checking the audit log in light of the conversation dynamics, potentially seeking more clarity on previous interactions.
- This remark indicates a desire to review the context of participation in the discussion.
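The 300 GB-plus figure follows from simple arithmetic over parameter bytes (weights only; the KV cache and activations come on top):

```python
params = 405e9
for name, bytes_per_param in [("fp16/bf16", 2), ("int8", 1), ("int4", 0.5)]:
    print(f"{name}: ~{params * bytes_per_param / 1e9:,.0f} GB for weights alone")
# fp16/bf16 ≈ 810 GB, int8 ≈ 405 GB, int4 ≈ 203 GB
```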
HuggingFace ▷ #diffusion-discussions (2 messages):
Animating Fireballs
AnimateDiff
IP Adapter Plus
SVD
- Animating Fireballs in Photos: A user inquired about ways to animate only the fireball in a photo they had.
- Another member suggested using AnimateDiff with IP Adapter Plus or SVD as potential solutions.
- Suggestions for Fireball Animation Techniques: The community discussed various techniques for adding animation effects to static images.
- Specific tools like AnimateDiff were highlighted for their effectiveness in achieving dynamic visual enhancements.
CUDA MODE ▷ #general (1 messages):
iron_bound: sounds like their LTM architecture has an RNN for attention
CUDA MODE ▷ #triton (1 messages):
Triton Atomic Add Scope
Multi-GPU Configurations
GPU vs System Scope
- Understanding Triton's Atomic Add Scope Settings: The `scope=GPU` configuration specifies operations executed solely on the GPU, while `scope=system` encompasses calculations across multiple GPUs and the host.
- This distinction can influence performance optimizations depending on the execution context across multi-GPU setups.
- Default GPU Scope and Multi-GPU Compatibility: The default `scope=GPU` is designed to operate efficiently out of the box in multi-GPU environments without additional configuration.
- Users should expect multi-GPU functionality to work without special adjustments when using the default setting.
- Clarifying the Meaning of Scope Options: In Triton's atomic add implementation, `scope=system` implies the ability to leverage both GPU and host resources for operations.
- It essentially indicates a broader operational context than just the GPU, allowing for integrated processing; a kernel sketch follows below.
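A minimal kernel showing where the scope knob sits, assuming a recent Triton where `tl.atomic_add` accepts a `scope` keyword (the histogram itself is illustrative):

```python
import torch
import triton
import triton.language as tl

@triton.jit
def histogram_kernel(x_ptr, hist_ptr, n, BLOCK: tl.constexpr):
    offs = tl.program_id(0) * BLOCK + tl.arange(0, BLOCK)
    mask = offs < n
    bins = tl.load(x_ptr + offs, mask=mask, other=0)
    # scope="gpu" (the default) orders the atomic within one device;
    # scope="sys" extends visibility across devices and the host.
    tl.atomic_add(hist_ptr + bins, 1, mask=mask, scope="gpu")

x = torch.randint(0, 16, (4096,), device="cuda", dtype=torch.int32)
hist = torch.zeros(16, device="cuda", dtype=torch.int32)
histogram_kernel[(triton.cdiv(x.numel(), 1024),)](x, hist, x.numel(), BLOCK=1024)
```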
CUDA MODE ▷ #torch (3 messages):
FX pass for custom Triton kernel
Calling Triton from PyTorch
Examples of FX passes
- Curiosity about FX pass mapping: A user expressed curiosity regarding the possibility of having an FX pass that maps aten operations to a custom Triton kernel.
- This indicates an interest in optimizing performance for specific operations through custom implementations.
- Native Triton calls in PyTorch: Another user clarified that you can natively call Triton code from a PyTorch program, enabling functionality with torch.compile.
- This opens up avenues for integrating advanced GPU acceleration directly within PyTorch workflows; a minimal example appears at the end of this section.
- Resources for FX pass examples: The discussion mentioned that skimming Triton code can be helpful for looking into FX passes, providing a specific link to a GitHub resource.
- The provided link points to a file that can serve as a reference for understanding FX passes in the context of PyTorch.
Link mentioned: pytorch/torch/_inductor/fx_passes/pre_grad.py at main · pytorch/pytorch: Tensors and Dynamic neural networks in Python with strong GPU acceleration - pytorch/pytorch
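A minimal example of the native-call path: a user-defined Triton kernel wrapped in a plain Python function and traced by `torch.compile` (supported in recent PyTorch releases; treat version specifics as approximate):

```python
import torch
import triton
import triton.language as tl

@triton.jit
def scale_kernel(x_ptr, out_ptr, n, BLOCK: tl.constexpr):
    offs = tl.program_id(0) * BLOCK + tl.arange(0, BLOCK)
    mask = offs < n
    tl.store(out_ptr + offs, tl.load(x_ptr + offs, mask=mask) * 2.0, mask=mask)

def double(x: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = x.numel()
    scale_kernel[(triton.cdiv(n, 1024),)](x, out, n, BLOCK=1024)
    return out

compiled = torch.compile(double)   # the Triton kernel composes with the graph
y = compiled(torch.randn(4096, device="cuda"))
```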
CUDA MODE ▷ #torchao (25 messages🔥):
Quantization Techniques
AWQ Implementation Challenges
Low-Bit Optimizer Adjustments
Model Fixes in AO
Mixed Precision Quantization
- Quantizing Attention Layers Sparks Debate: There was a discussion regarding the intentional quantization of QKV projections in attention layers, with insights on maintaining accuracy.
- Members noted that the default `filter_fn` operates under specific assumptions, automatically quantizing Linear layers with 2D shapes; a custom-filter sketch appears at the end of this section.
- AWQ's Integer Expectations Causing Confusion: A member highlighted an implementation quandary with AWQ, stressing that floating point zeros worsen perplexity compared to integer storage.
- It was revealed that previous attempts to swap quant/dequant functions suffered from discrepancies in rounding logic, making implementations incompatible.
- Low-Bit Optimizer's Code Under Review: Participants reviewed the low-bit optimizer code, questioning a line on non-sign bits, indicating a potential oversight.
- It was discovered this code was copied from an external library, raising further depth on maintaining correct logic.
- ML Model Fix for AO Looks Promising: A fix for AO's llama model was pointed out, where a particular implementation detail under specific training conditions was flagged for attention.
- Reviewers were involved to ensure that the model behavior aligns with the intended functionalities.
- Mixed Precision Work Mentioned with Interest: The conversation leaned towards exploring mixed precision quantization techniques as members shared methods and references.
- A related project link was provided, presenting an intern’s work on mixed precision quantization based on varying model sizes.
- ao/torchao/prototype at main · pytorch/ao: PyTorch native quantization and sparsity for training and inference - pytorch/ao
- awq_hqq_test.py: awq_hqq_test.py. GitHub Gist: instantly share code, notes, and snippets.
- ao/torchao/quantization/prototype/mixed_precision at main · pytorch/ao: PyTorch native quantization and sparsity for training and inference - pytorch/ao
- ao/torchao/prototype/low_bit_optim/quant_utils.py at main · pytorch/ao: PyTorch native quantization and sparsity for training and inference - pytorch/ao
- pytorc - Overview: pytorc has 2 repositories available. Follow their code on GitHub.
- torch.searchsorted — PyTorch 2.4 documentation: no description found
- Fixed the llama model by yiliu30 · Pull Request #769 · pytorch/ao: In the training mode (model.setup_caches(..., training=True)) with input_pos is None, the freq_cis is overridden by L208. ao/torchao/_models/llama/model.py Lines 19...
- bitsandbytes/bitsandbytes/functional.py at e4674531dd54874c0abbc786ad5635c92c34dc3e · bitsandbytes-foundation/bitsandbytes: Accessible large language models via k-bit quantization for PyTorch. - bitsandbytes-foundation/bitsandbytes
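A sketch of overriding the default filter (API names per torchao at the time; treat exact import paths as version-dependent). The hypothetical filter skips attention projections while quantizing other Linear layers:

```python
import torch
from torchao.quantization import quantize_, int8_weight_only

class Block(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.q_proj = torch.nn.Linear(64, 64)   # attention projection
        self.mlp = torch.nn.Linear(64, 64)      # feed-forward

model = Block()

def filter_fn(module, fqn):
    # Quantize Linear layers, but leave q/k/v projections in full precision.
    skip = any(k in fqn for k in ("q_proj", "k_proj", "v_proj"))
    return isinstance(module, torch.nn.Linear) and not skip

quantize_(model, int8_weight_only(), filter_fn=filter_fn)
```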
CUDA MODE ▷ #sequence-parallel (2 messages):
Flash Attention Kernel Challenges
NVIDIA GeForce RTX 3090 Compatibility
Attention Head Dimensionality
- Struggles with Flash Attention Kernel Shared Memory Sizes: The discussion revolves around challenges in writing a Flash Attention kernel related to shared memory sizes, specifically with blocks of Q, K, and V showing large memory requirements, such as 131,072 bytes for a block of Q.
- The user feels that they might be overlooking how Flash Attention can work on non-Hopper GPUs that have smaller SRAM sizes.
- Issue with Flash Attention on NVIDIA GeForce RTX 3090: An issue was raised regarding running the flash_attn package on two NVIDIA GeForce RTX 3090 GPUs, encountering problems due to the GPUs' compute capability of 8.6.
- The situation was highlighted in a GitHub issue where users are seeking solutions to these compatibility issues.
- Impact of Attention Heads on Model Dimension: A question was posed about whether the large model dimension is divided across attention heads, suggesting that each head in Flash Attention would only manage smaller inner dimensions around 64 or 128.
- This raises considerations on optimizing resource usage in Flash Attention models while maintaining performance; the tile arithmetic at the end of this section makes the effect of head-splitting concrete.
Link mentioned: Support for NVIDIA GeForce RTX 3090 with Compute Capability 8.6 · Issue #190 · Dao-AILab/flash-attention: Issue description: Hello, I am using the flash_attn package on a system with two NVIDIA GeForce RTX 3090 GPUs, both of which have a Compute Capability of 8.6. When trying to run the package, I enco...
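The 131,072-byte figure falls straight out of tile arithmetic, and shrinks once the model dimension is split across heads (tile size and dims assumed for illustration):

```python
BLOCK_M, dtype_bytes = 128, 2          # 128 rows of Q in fp16
for d in (512, 128, 64):               # full model dim vs per-head dims
    print(d, BLOCK_M * d * dtype_bytes, "bytes")
# d=512 -> 131072 B: too large once K/V tiles must share ~100 KB of Ampere SRAM
# d=64  -> 16384 B: per-head tiles fit easily, which is why attention splits
#          d_model across heads before tiling
```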
CUDA MODE ▷ #off-topic (15 messages🔥):
Twitter recommendations
Benefits of Twitter
Summer ‘24 Twitter Poll
Logistics for CUDA Mode event
- Twitter profile recommendations: A member sought recommendations for profiles to follow on Twitter, prompting several responses.
- One user shared a follow list while another suggested reconsidering the decision to join Twitter.
- Discussion on Twitter's utility: Members discussed the benefits of Twitter for keeping up with recent news and following notable labs conducting SOTA research.
- One mentioned that it's useful for sharing work, and others agreed on curating followed accounts to improve the experience.
- Poll on Twitter's value in Summer '24: A poll was initiated asking members if their time spent on Twitter during Summer '24 was a net positive or negative.
- The conversation explored whether Twitter is ultimately worth the time invested, with mixed feelings in the replies.
- Logistics questions for CUDA Mode event: A new member asked about logistics for the upcoming CUDA Mode event, inquiring about hotel accommodations and food provisions.
- They expressed their need for information due to traveling from out of state for the weekend.
CUDA MODE ▷ #llmdotc (6 messages):
L2 Side Aware Performance
FP8 Transition
Loss Landscape Insights
Training Sample Drop Impact
- L2 Side Aware Code Boosts Performance: After fixing some bugs, the L2 Side Aware code consistently achieves ~1823GB/s for GELU forward, up from 1791GB/s for kernel 3 with x128.
- The improvements also come with lower power consumption, although further simplification and optimization are still necessary.
- Switch to FP8 Planned: The developer plans to switch back to FP8 coding tomorrow to refresh their memory on that segment.
- Progress on the current L2 Side Aware code is deemed sufficient for the moment.
- Insights on Loss Landscape: A member discussed how certain optimizations lead to a stationary point in the loss landscape, indicating potential similarities with the regular adamw method.
- These insights suggest that understanding the quality of minima reached is crucial, emphasizing the need for some vanilla finetuning verification.
- Impact of Dropping Training Samples: Concerns were raised about how dropping some training samples may affect the optimization results, but it was suggested that the impact might not be more 'undefined' than other strategies.
- The conversation highlighted the importance of implementation to compare outcomes against conventional methods.
CUDA MODE ▷ #sparsity-pruning (1 messages):
mobicham: https://x.com/JamesLiuID/status/1829554782287413513
CUDA MODE ▷ #liger-kernel (140 messages🔥🔥):
Liger-Kernel Release v0.2.0
LayerNorm Kernel Update
Memory Issues with Hugging Face Example
Research on Atomic Operations
Documentation Enhancements
- Release v0.2.0 brings changes: A new release of the Liger-Kernel was announced, addressing previous publishing issues, with highlights including a cleaner API and more model support.
- Recent tests revealed some users experiencing Out Of Memory (OOM) errors, prompting discussions about the new version's memory efficiency compared to 0.1.1; a sketch of the library's drop-in patching flow appears at the end of this section.
- LayerNorm Kernel integration: A PR was merged to integrate custom kernels and the LigerLayerNorm module, improving the performance of normalization operations.
- Discussions around performance indicated that using sys scope for multi-GPU operations was worth investigating.
- Memory issues with Hugging Face example: Users reported that version 0.2.0 of the framework led to unexpected OOM errors during model training with the Hugging Face example.
- A user confirmed that rolling back to version 0.1.1 resolved the issue, while others speculated on potential causes related to the use of Liger.
- Investigating atomic operation mismatches: A user raised a concern about a recurring failure in tests related to the RMS normalization kernel when rewritten to support partial aggregation.
- Despite setting a manual seed in tests to achieve deterministic behavior, mismatches persisted in results, indicating an underlying issue with the kernel implementation.
- Documentation updates for Liger-Kernel: A new section describing the LayerNorm kernel was added to the README to enhance documentation and guide users on its usage.
- Chat participants discussed the need for better documentation and tutorials related to integrating custom operations within Liger-Kernel.
- Add Operator Resolve Rule — Hidet Documentation: no description found
- Issues · linkedin/Liger-Kernel: Efficient Triton Kernels for LLM Training. Contribute to linkedin/Liger-Kernel development by creating an account on GitHub.
- CUDA semantics — PyTorch 2.4 documentation: no description found
- torch.compile() throws exception when LigerKernel is used · Issue #174 · linkedin/Liger-Kernel: 🐛 Describe the bug ... File "/home/tromero/workspace/seahorse/.venv/lib/python3.11/site-packages/torch/_inductor/async_compile.py", line 173, in triton kernel = TritonCodeCache.load(kernel_...
- Release v0.2.0 Release Note · linkedin/Liger-Kernel: Opening Thoughts 🫶 Thank You! We'd love to take this chance to express our sincere gratefulness to the community! 2500+ ⭐ , 10+ new contributors, 50+ PRs, plus integration into Hugging Face 🤗, a...
- [Documentation] LayerNorm added to README by AndreSlavescu · Pull Request #180 · linkedin/Liger-Kernel: Summary Added LayerNorm description to README Testing Done N//A Hardware Type: RTX 3090 run make test to ensure correctness run make checkstyle to ensure code style run make test-convergenc...
- GitHub - linkedin/Liger-Kernel: Efficient Triton Kernels for LLM Training: Efficient Triton Kernels for LLM Training. Contribute to linkedin/Liger-Kernel development by creating an account on GitHub.
- [Operators] LayerNorm Kernels + LigerLayerNorm by AndreSlavescu · Pull Request #169 · linkedin/Liger-Kernel: Summary integrated layernorm custom kernels + LigerLayerNorm module Testing Done tested layernorm kernels for correctness Hardware Type: RTX 3090 run make test to ensure correctness run make...
- custom Embedding kernel by AndreSlavescu · Pull Request #135 · linkedin/Liger-Kernel: Summary Added Embedding forward/backwards kernels + LigerEmbedding class which maps to nn.Embedding nn.Embedding is useful for encoder-only models such as BERT ref: #131 Testing Done tested ag...
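For context, the project's drop-in patching flow looks roughly like this (function names per the README at the time; treat exact APIs as version-dependent):

```python
import transformers
from liger_kernel.transformers import apply_liger_kernel_to_llama

apply_liger_kernel_to_llama()  # swap RMSNorm/RoPE/SwiGLU/cross-entropy for Triton kernels
model = transformers.AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B"
)  # Llama models created after the call use the patched modules
```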
Stability.ai (Stable Diffusion) ▷ #general-chat (187 messages🔥🔥):
Model Compatibility and Optimizations
SEG Usage in AI Tools
Training Model Costs
Stable Diffusion Model Updates
Generation Times with Different GPUs
- Optimizations for SDXL Performance: Users suggest adding `--xformers`, `--medvram-sdxl`, and `--no-half-vae` to the `webui-user.bat` file for better SDXL performance on GPUs with lower VRAM.
- These commands collectively aim to improve speed and reduce VRAM usage while maintaining compatibility with the VAE.
- Understanding SEG in Detail: There's confusion around the implementation of SEG in workflows, with users questioning its necessity and complexity in tools like Impact Pack.
- SEG's integration raises questions about whether it is an established method or a new concept created for specific tools.
- Training Costs for AI Models: Training base models such as SD1.5 or SDXL is noted to require months and substantial financial investment, potentially costing millions.
- Users discuss that while larger models are expensive to train, smaller checkpoints or LORA models can be trained with fewer resources.
- RunwayML Deletes Stable Diffusion Repositories: RunwayML's removal of their Stable Diffusion 1.5 repositories from platforms like HuggingFace has sparked concern among users.
- The speculation around the decision hints at a possible shift in focus away from older models.
- Generation Speed with 3060/3060 Ti: Users with 3060 and 3060 Ti GPUs discuss their experiences and expectations regarding generation times with SDXL and Flux models.
- Concerns are raised about whether the hardware can handle long generation times and model storage requirements adequately.
- imgur.com: Discover the magic of the internet at Imgur, a community powered entertainment destination. Lift your spirits with funny jokes, trending memes, entertaining gifs, inspiring stories, viral videos, and ...
- imgur.com: Discover the magic of the internet at Imgur, a community powered entertainment destination. Lift your spirits with funny jokes, trending memes, entertaining gifs, inspiring stories, viral videos, and ...
- Command Line Arguments and Settings: Stable Diffusion web UI. Contribute to AUTOMATIC1111/stable-diffusion-webui development by creating an account on GitHub.
- Enhancing Conditional Image Generation with Explainable Latent Space Manipulation: In the realm of image synthesis, achieving fidelity to a reference image while adhering to conditional prompts remains a significant challenge. This paper proposes a novel approach that integrates a d...
- GitHub - kshitij79/CS-7476-Improvements-in-Diffusion-Model: Contribute to kshitij79/CS-7476-Improvements-in-Diffusion-Model development by creating an account on GitHub.
- Fantastische Fabelwesen: Das Malbuch für Erwachsene zum Stressabbau und zur kreativen Entspannung - mit Elfen, Feen, Drachen und vielen anderen mystischen Fantasie-Kreaturen : Press, Flying Colours, Gehrke, Nora: Amazon.de: Bücher: no description found
- [AI Industry Report] Top 100 AI Products (Issue 3): The third installment of the AI industry report — see which AI products made the list.
Nous Research AI ▷ #general (118 messages🔥🔥):
Amnesia Mode in AI
Low-Rank Approximations for Gradients
Training LLaMA 3
Word Game Bench for LLMs
Hermes 3 Gradient Behavior
- AI shows preferences in communication style: Users experimenting with Hermes 3's amnesia mode noticed it prefers professional language, even rejecting casual terms like 'bruh' as not family-friendly.
- This highlights a potential trend where AI models exhibit predefined personality traits or communication guidelines.
- Discussion on low-rank approximations for optimization: A user suggested using low-rank approximations for gradient transmission, which could reduce communication overhead between nodes in distributed training.
- This aligns with ongoing discussions about adaptive communication patterns and gradient impact on performance; a toy compression sketch appears at the end of this section.
- Training LLaMA 3 on diverse instruction data: A member shared they are training an 8b LLaMA 3 model on synthetic and real instruction data sourced from platforms like Reddit and StackExchange.
- Their aim is to see if training on this real data reduces AI-like behavior, demonstrating varying approaches to model refinement.
- Word Game Bench for evaluating language models: Introduction of Word Game Bench, a testing framework for LLMs focused on word games like Wordle and Connections, showcases a novel evaluation strategy.
- The benchmark emphasizes interactive gameplay experiences to gauge model performance over static responses, addressing common pain points in LLM evaluations.
- Unexpected behavior in model tuning: Users observed unusual tuning results in their models, with reports of fluctuating loss values and potential exploding gradients during training sessions.
- These insights reflect the complexities and unpredictabilities inherent in training larger AI models.
- DiLoCo: Distributed Low-Communication Training of Language Models: Large language models (LLM) have become a critical component in many applications of machine learning. However, standard approaches to training LLM require a large number of tightly interconnected acc...
- Tweet from wings (@wingsoverheaven): no description found
- Hermes 3 70B Instruct - API, Providers, Stats: Hermes 3 is a generalist language model with many improvements over [Hermes 2](/models/nousresearch/nous-hermes-2-mistral-7b-dpo), including advanced agentic capabilities, much better roleplaying, rea...
- Word Game Bench: no description found
- Tweet from zafir (@zafstojano): Excited to share "Word Game Bench" - a fun benchmark for evaluating language models on word puzzle games! It is a relatively hard benchmark, where no model currently scores above 50% average...
- Tweet from Sam Altman (@sama): we are happy to have reached an agreement with the US AI Safety Institute for pre-release testing of our future models. for many reasons, we think it's important that this happens at the national...
- GitHub - oobabooga/text-generation-webui: A Gradio web UI for Large Language Models.: A Gradio web UI for Large Language Models. Contribute to oobabooga/text-generation-webui development by creating an account on GitHub.
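A toy version of the low-rank idea mentioned above (in the spirit of methods like PowerSGD): transmit two thin SVD factors instead of the full gradient matrix:

```python
import torch

def compress(grad: torch.Tensor, rank: int = 4):
    U, S, Vh = torch.linalg.svd(grad, full_matrices=False)
    return U[:, :rank] * S[:rank], Vh[:rank]   # (m, r) and (r, n) factors

def decompress(P: torch.Tensor, Q: torch.Tensor) -> torch.Tensor:
    return P @ Q

g = torch.randn(1024, 1024)
P, Q = compress(g)
ratio = g.numel() / (P.numel() + Q.numel())          # ~128x fewer floats sent
err = torch.norm(g - decompress(P, Q)) / torch.norm(g)
print(f"compression ~{ratio:.0f}x, relative error {err:.2f}")
```

Note the trade-off: a random matrix like this one compresses poorly (high relative error), while real gradient matrices tend to have fast-decaying spectra, which is what makes the trick viable in practice.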
Nous Research AI ▷ #ask-about-llms (43 messages🔥):
Instruct Tuning
Model Hosting and Precision
Performance of Full Precision vs Quantization
Lambda Cloud API Usage
100 Million Token Context Window
- Instruct Tuning and User End Training: Many members discussed the implications of training models on user inputs versus outputs, concluding that training only on outputs yielded better benchmark results.
- One member mentioned they mask input data from loss calculations, focusing on predicting from the output start; the label-masking sketch at the end of this section shows the standard trick.
- Demand for Full Precision Model Hosting: Several members expressed frustration over the lack of hosting options for the full precision Hermes 3 model, especially given its significant resource requirements of 810GB for weights alone.
- It was noted that large-scale providers like Anthropic and OpenAI have optimized hardware for serving models, indicating potential demand challenges for others.
- Performance Comparison: 8-bit vs Full Precision: Only a few percentage points of difference were reported between 8-bit quantization and 16-bit full precision, with larger models proving more resistant to quantization loss.
- Discussion revealed potential coherence issues with smaller models at lower bit rates, indicating that model size impacts performance significantly.
- Lambda Cloud API for Hermes 3: Members discussed the Lambda Cloud API as a viable way to access Hermes 3, though some worried it only provided an 8-bit quantized option without system prompt settings.
- However, one member noted that the API does support system prompts, making it usable for those applications.
- Breakthrough in Context Window Technology: A member mentioned claims of a 100 million token context window, referring to it as “magic,” suggesting a step change in LLM capabilities.
- Another added that such an advance would be akin to previously recognized breakthroughs in AI, hinting at ongoing developments in the field.
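As a concrete illustration of the output-only training mentioned above, a minimal sketch of masking prompt tokens out of the loss follows; -100 is the ignore index used by PyTorch's cross-entropy, and the token ids are toy values.

```python
import torch

IGNORE_INDEX = -100  # positions with this label are skipped by CrossEntropyLoss

def build_labels(input_ids: list[int], prompt_len: int) -> torch.Tensor:
    # Copy the targets, then mask everything before the output start so the
    # loss is computed only on the response tokens.
    labels = torch.tensor(input_ids)
    labels[:prompt_len] = IGNORE_INDEX
    return labels

# toy example: 5 prompt tokens followed by 3 response tokens
ids = [101, 7592, 2088, 2003, 102, 345, 678, 910]
print(build_labels(ids, prompt_len=5))
# tensor([-100, -100, -100, -100, -100,  345,  678,  910])
```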
- NousResearch/Hermes-3-Llama-3.1-405B · Hugging Face: no description found
- Using the Lambda Chat Completions API | Lambda Docs: no description found
- Lambda Docs: no description found
Nous Research AI ▷ #research-papers (8 messages🔥):
GameNGen
Neural Models in Gaming
DOOM Simulation
Innovative Game Engines
Dreamlike Gameplay
- GameNGen simulates DOOM with neural models: The innovative GameNGen neural model demonstrates the ability to simulate the classic game DOOM without using any traditional game engine, achieving over 20 frames per second.
- Human raters struggle to distinguish between clips of the simulation and real gameplay, showcasing its effectiveness as a proof of concept.
- Discussion on integration with Unreal Engine: Members expressed curiosity about how platforms like Unreal Engine might incorporate neural simulation technologies like GameNGen in future games.
- The community is eager to see the replication of this technology, highlighting its potential impact on game development.
- Unique hallucinatory qualities spark interest: Footage of gameplay from GameNGen has been described as trippy and dreamlike, leading to discussions on its potential for original game ideas.
- One member suggested that the model's unique qualities could be leveraged for a refreshing horror IP, although it might require considerable guidance.
- Future tech implications for gaming: Community members expressed excitement about the future implications of using neural models in gaming, especially for creating more immersive experiences.
- They envision possibilities for combining these unique gameplay visions with traditional game mechanics for deeper interaction.
Link mentioned: GameNGen: Diffusion Models Are Real-Time Game Engines
LM Studio ▷ #general (93 messages🔥🔥):
LM Studio Updates
Flash Attention Support
Model Performance Issues
API Security
Text Generation Models
- LM Studio Update (0.3.2) Performance Improvements: The latest LM Studio update (0.3.2) has resolved previous latency issues with Flash Attention, enhancing performance for users in local inference environments.
- One user expressed gratitude for the improved functionality, while concerns about stability compared to the earlier version were also noted.
- Flash Attention Models Discussion: Members discussed which models currently support Flash Attention, with Llama 3.1 and Mistral being highlighted as compatible options.
- Google's Gemma-2 models are also noted to support Flash Attention, prompting discussions on overall model performance and compatibility.
- Challenges with Large Models on Limited VRAM: Users with 8GB VRAM reported difficulties loading larger models, specifically the xLAM 7b, indicating performance issues related to context and VRAM utilization.
- To troubleshoot slow performance, it was suggested to adjust settings to optimize VRAM usage and test with smaller context values.
- Security Concerns with API: A user inquired about adding authentication to the LM Studio API for security during port forwarding.
- It was advised to implement custom security measures as needed, with one member sharing plans to put a reverse proxy in front of the server for better control (a toy proxy sketch follows below).
- Text and Voice Generation Capabilities: There was an inquiry regarding the availability of text-to-image or text-to-voice functionalities within LM Studio.
- Users were advised that such features are currently not available or supported within the LM Studio framework.
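Since LM Studio's local server exposes no built-in authentication, the reverse-proxy idea above can be approximated with a few lines of FastAPI and httpx. This is a toy sketch, not a hardened setup: the upstream port (1234 is LM Studio's default), the key value, and the forwarded headers are all assumptions, and production use would want TLS.

```python
# Run with: uvicorn proxy:app --host 0.0.0.0 --port 8080
import httpx
from fastapi import FastAPI, HTTPException, Request, Response

UPSTREAM = "http://localhost:1234"  # LM Studio's default local server
API_KEY = "change-me"               # hypothetical shared secret

app = FastAPI()

@app.api_route("/{path:path}", methods=["GET", "POST"])
async def proxy(path: str, request: Request) -> Response:
    # Reject requests that don't carry the expected bearer token.
    if request.headers.get("authorization") != f"Bearer {API_KEY}":
        raise HTTPException(status_code=401, detail="unauthorized")
    async with httpx.AsyncClient(timeout=None) as client:
        upstream = await client.request(
            request.method,
            f"{UPSTREAM}/{path}",
            content=await request.body(),
            headers={"content-type": request.headers.get("content-type", "application/json")},
        )
    return Response(
        content=upstream.content,
        status_code=upstream.status_code,
        media_type=upstream.headers.get("content-type"),
    )
```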
- tokenizer_config.json · sophosympatheia/Midnight-Miqu-70B-v1.5 at main: no description found
- GitHub - THUDM/CogVideo: Text-to-video generation: CogVideoX (2024) and CogVideo (ICLR 2023): Text-to-video generation: CogVideoX (2024) and CogVideo (ICLR 2023) - THUDM/CogVideo
- lmstudio-community/xLAM-7b-r-GGUF at main: no description found
- GitHub - YorkieDev/LMStudioWebUI: A wip version of a simple Web UI to use with LM Studio: A wip version of a simple Web UI to use with LM Studio - YorkieDev/LMStudioWebUI
LM Studio ▷ #hardware-discussion (82 messages🔥🔥):
M2 Ultra Mac setup
Power distribution with multiple GPUs
LLM performance benchmarking
Using nvlink for memory sharing
Llama 3.1 model speed tests
- M2 Ultra Mac arrives, huge potential for LLMs: A user set up their new M2 Ultra Mac with 192 GB Unified Memory and a 2 TB drive, aiming to establish a development environment before exploring LLMs.
- Pydus is eager to find out how large a model he can load on his new machine.
- Discussing Power Limits for GPUs: With a 96-core EPYC and 4x RTX 4090s, a user calculated peak power draw approaching 3500W (4 × 450W for the GPUs alone is 1800W before the CPU, transients, and overhead), stressing the need to distribute the load carefully across multiple outlets.
- The conversation involved configuring multiple PSUs and ensuring they could handle the load without blowing breakers.
- Empirical testing on LLMs for Token Rate: Mylez_96150 shared that the Llama 3.1 70B model runs at 97 tokens per second on a multi-GPU setup, while another user recalled an earlier reading of just 1 token per second.
- The discussions explored various setups including how to optimize performance when splitting model layers across GPUs.
- Challenges with Multi-GPU Inference: Concerns were raised about how to efficiently run LLMs across multiple GPUs, particularly whether the performance improves when using NVLink drivers and how memory sharing impacts speed.
- Communications highlighted that proper model loading and configurations can potentially increase throughput significantly.
- Debating Impact of PCIe Configurations: A user queried how switching RTX 4090 settings from Gen4 x16 to Gen4 x8 could impact performance when working with a 70B or 405B model.
- Another user explained that it might only significantly affect the inference speed if multiple GPUs run dense models, implying some setups could see drastic performance changes.
Link mentioned: Power Usage Auxiliary Nuclear GIF - Power Usage Auxiliary Nuclear - Discover & Share GIFs: Click to view the GIF
OpenRouter (Alex Atallah) ▷ #announcements (2 messages):
Gemini Flash 8B
Gemini Flash Experiment
Pricing updates
Database downtime
Separation of providers
- Launch of Gemini Flash 8B Model: The new Gemini Flash 8B (EXP) has been made available alongside the Gemini Flash Experiment.
- Both models will remain free until the pricing for AI Studio is finalized.
- Separation of Google Vertex from Google AI Studio: Google Vertex has been officially separated from Google AI Studio, now recognized as two distinct providers.
- This change aims to clarify the offerings and improve user navigation.
- Pricing for Gemini Experimental Models Adjusted: All Gemini Experimental models are currently free, as confirmed in the latest update.
- This adjustment aims to provide accessible resources until future pricing models are established.
- Downtime Recorded Due to Database Issue: Roughly 15 minutes of downtime was caused by a database change, which has since been reverted.
- The issue was resolved quickly, minimizing impact on the services.
- Gemini Flash 8B 1.5 Experimental - API, Providers, Stats: Gemini 1.5 Flash 8B Experimental is an experimental, 8B parameter version of the [Gemini 1. Run Gemini Flash 8B 1.5 Experimental with API
- Gemini Flash 1.5 - API, Providers, Stats: Gemini 1.5 Flash is a foundation model that performs well at a variety of multimodal tasks such as visual understanding, classification, summarization, and creating content from image, audio and video...
OpenRouter (Alex Atallah) ▷ #app-showcase (2 messages):
daun.ai launch
AI CLI tool
- daun.ai celebrates launch success: Members congratulated the team behind daun.ai for their recent launch.
- Cheers and acknowledgments echoed in the chat for this significant milestone.
- Exploring the All-in-one AI CLI Tool: One user expressed enthusiasm for the AI CLI tool by sigoden, describing it as an all-in-one solution featuring Chat-REPL, Shell Assistant, and more.
- The tool enables access to several models including OpenAI, Claude, and Gemini, highlighting its versatility.
Link mentioned: GitHub - sigoden/aichat: All-in-one AI CLI tool featuring Chat-REPL, Shell Assistant, RAG, AI tools & agents, with access to OpenAI, Claude, Gemini, Ollama, Groq, and more.: All-in-one AI CLI tool featuring Chat-REPL, Shell Assistant, RAG, AI tools & agents, with access to OpenAI, Claude, Gemini, Ollama, Groq, and more. - sigoden/aichat
OpenRouter (Alex Atallah) ▷ #general (146 messages🔥🔥):
Cache Features for Sonnet and DeepSeek
Issues with Perplexity Models
Cohere Command Model Updates
Qwen Model Provider Concerns
Downtime and Infrastructure Upgrades
- Cache Features for Sonnet and DeepSeek Coming Soon: A member inquired about the arrival of cache features for Sonnet and DeepSeek, with another member indicating it may be available soon, possibly tomorrow.
- Discussion highlighted that database incidents have shifted priorities, affecting release timelines.
- Issues with Perplexity Models: A user reported errors with Perplexity models, receiving a message about an invalid model, prompting a call for clarification on the issue.
- The issue was confirmed to have been introduced due to preceding bugs which were addressed quickly.
- Cohere Command Model Updates Introduced: A notable update was made to the Command R models, with a streamlining of its access points and changes to model ids to ensure smoother operation.
- Users expressed excitement over the benefits of the updates, particularly regarding pricing and improved model performance.
- Concerns Over Qwen Model's Provider: Concerns were raised about Qwen's provider, DashScope, which is not well-known among users, though it produced promising benchmarks.
- Despite uncertainties surrounding the provider, users showed eagerness to explore and test the model through available platforms.
- Downtime and Infrastructure Upgrades: There has been increasing downtime reported, leading to concerns about system health and responsiveness as infrastructure upgrades continue.
- The team acknowledged the issues stemming from database limitations, with ongoing projects aimed at strengthening the backend.
- Chatroom | OpenRouter: LLM Chatroom is a multimodel chat interface. Add models and start chatting! Chatroom stores data locally in your browser.
- Qwen 2 7B Instruct - API, Providers, Stats: Qwen2 7B is a transformer-based model that excels in language understanding, multilingual capabilities, coding, mathematics, and reasoning. It features SwiGLU activation, attention QKV bias, and grou...
- Cohere Command Models: Command R models are optimized for a variety of use cases including reasoning, summarization, and question answering. Developed by Cohere and Cohere For AI.
- Tweet from Qwen (@Alibaba_Qwen): To access Qwen2-VL-72B, temporarily you should use our official API in the following way:
- Tweet from GitHub - FixTweet/FxTwitter: Fix broken Twitter/X embeds! Use multiple images, videos, polls, translations and more on Discord, Telegram and others: Fix broken Twitter/X embeds! Use multiple images, videos, polls, translations and more on Discord, Telegram and others - FixTweet/FxTwitter
- Tweet from Logan Kilpatrick (@OfficialLoganK): @DaveManouchehri Free in AI Studio. I don’t know off the top of my head if Vertex’s experimental endpoint is free or not
- Issues · Pythagora-io/gpt-pilot: The first real AI developer. Contribute to Pythagora-io/gpt-pilot development by creating an account on GitHub.
- CohereForAI (Cohere For AI): no description found
- Pythagora (GPT Pilot) Beta - Visual Studio Marketplace: Extension for Visual Studio Code - The first real AI developer.
- LLM Rankings | OpenRouter: Language models ranked and analyzed by usage across apps
Eleuther ▷ #general (56 messages🔥🔥):
NaN weights in embedding training
Research collaboration feedback
Sparse encoding in SAEs
Vision embedding vs vision token
Training Input Statistics Adjustment
- NaN weights derail embedding training: A user reported that their embedding weights go to NaN just a few steps into training, despite having a normal range initially. Various members suggested checking gradients and loss components that might be rounding to zero, which pointed to a data-dependent decay term as the probable cause.
- Lightning's detect_anomaly=True setting proved useful for debugging, tracing the issue to the decay term via gradient analysis; the pattern is sketched below.
- Seeking research feedback from the community: A PhD student sought feedback on research project ideas related to compression using diffusion models, asking where to post. Members suggested posting in the general channel or a designated space for low-pressure discussions.
- Another member suggested regularizing losses on network inputs to maintain stability, while emphasizing the need to clarify assumptions in discussions regarding SAEs.
- Discussion on sparse encoding in SAEs: A member clarified that a reconstruction error loss term should accompany a sparsity-focused loss in their SAE approach. Additional losses can prevent deviations from expected distributions during training, even when inputs are fixed.
- Participants highlighted potential misunderstandings about the role of the LLM in SAEs, stressing that statistics from frozen networks can provide essential context for the encoding process.
- Exploring vision embeddings versus tokens: A user raised questions about the trade-offs between using vision embedding and vision token approaches for modeling. The community acknowledged the difference but noted a lack of clarity on the specific advantages of each technique.
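For readers chasing similar NaN blowups, the debugging pattern described above boils down to enabling PyTorch's anomaly detection and inspecting per-parameter gradients; the toy model and the data-dependent decay term here are illustrative assumptions.

```python
import torch

torch.autograd.set_detect_anomaly(True)  # raises at the op that produced NaN/inf

model = torch.nn.Linear(16, 16)
x = torch.randn(8, 16)
loss = model(x).pow(2).mean()
loss = loss + 1e-3 / (x.std() + 1e-8)  # hypothetical data-dependent decay term
loss.backward()

for name, p in model.named_parameters():
    if p.grad is not None and not torch.isfinite(p.grad).all():
        print(f"non-finite gradient in {name}")
```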
Eleuther ▷ #research (88 messages🔥🔥):
Dynamic Expert Routing
Tokenization Challenges
Multi-Token Prediction
Finite Scalar Quantization
Symmetry in Neural Networks
- Dynamic Expert Routing Discussion: A member explained that allowing the model to define its own experts during training can enhance flexibility compared to a fixed number in the configuration.
- A request for papers related to this concept was made, pointing to a need for more literature on the subject.
- Tokenization Faces Criticism: Concerns were raised about tokenization obfuscating important data aspects and potentially becoming obsolete with advancements like multi-token prediction and parallel transformers.
- One member noted that current tokenization methods might hinder model training efficiency and the overall model performance.
- Exploring Multi-Token Prediction: There was a discussion on the effectiveness of multi-token prediction (MTP) for reasoning and planning tasks, with some members questioning its utility for smaller models.
- It was mentioned that published studies have yet to resolve concerns about MTP's performance on smaller architectures.
- Finite Scalar Quantization Introduced: FSQ was highlighted as a promising alternative to vector quantization (VQ) in VQ-VAEs, potentially improving codebook utilization and model efficiency.
- Members noted the simplicity of FSQ while still achieving competitive metrics, making it an attractive option for further exploration (a toy version is sketched below).
- Symmetry in Neural Networks Explored: A new paper discusses the negative impact of symmetry in loss functions, which can lead to low-capacity states, and proposes a method to mitigate this issue.
- Concerns were raised about the implementation costs and effectiveness of applying this new loss function across different models.
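As a concrete illustration of FSQ's simplicity, here is a toy version with a straight-through estimator; the tanh bounding and level counts follow the paper's general recipe but are simplified assumptions.

```python
import torch

def fsq(z: torch.Tensor, levels: list[int]) -> torch.Tensor:
    # Bound each channel to [-(L-1)/2, (L-1)/2], then round to an integer
    # grid; the implicit codebook has prod(levels) entries with no learned
    # embedding table to collapse.
    L = torch.tensor(levels, dtype=z.dtype, device=z.device)
    half = (L - 1) / 2
    bounded = torch.tanh(z) * half
    quantized = torch.round(bounded)
    # Straight-through estimator: forward uses rounded values, backward
    # treats rounding as the identity.
    return bounded + (quantized - bounded).detach()

z = torch.randn(4, 3, requires_grad=True)  # batch of 3-channel latents
codes = fsq(z, levels=[8, 5, 5])           # 8*5*5 = 200 implicit codes
codes.sum().backward()                     # gradients still flow to z
```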
- Rethinking Tokenization: Crafting Better Tokenizers for Large Language Models: Tokenization significantly influences language models(LMs)' performance. This paper traces the evolution of tokenizers from word-level to subword-level, analyzing how they balance tokens and types...
- Remove Symmetries to Control Model Expressivity: When symmetry is present in the loss function, the model is likely to be trapped in a low-capacity state that is sometimes known as a "collapse." Being trapped in these low-capacity states can...
- WavTokenizer: an Efficient Acoustic Discrete Codec Tokenizer for Audio Language Modeling: Language models have been effectively applied to modeling natural signals, such as images, video, speech, and audio. A crucial component of these models is the codec tokenizer, which compresses high-d...
- LayerSkip: faster LLM Inference with Early Exit and Self-speculative decoding: Introduction
- Finite Scalar Quantization: VQ-VAE Made Simple: We propose to replace vector quantization (VQ) in the latent representation of VQ-VAEs with a simple scheme termed finite scalar quantization (FSQ), where we project the VAE representation down to a f...
- Language Model Beats Diffusion -- Tokenizer is Key to Visual Generation: While Large Language Models (LLMs) are the dominant models for generative tasks in language, they do not perform as well as diffusion models on image and video generation. To effectively use LLMs for ...
- Image and Video Tokenization with Binary Spherical Quantization: We propose a new transformer-based image and video tokenizer with Binary Spherical Quantization (BSQ). BSQ projects the high-dimensional visual embedding to a lower-dimensional hypersphere and then ap...
Eleuther ▷ #lm-thunderdome (5 messages):
Word Game Bench
Measuring Consistency in Multiple Choice Questions
- Introducing Word Game Bench for Language Models: A new benchmark called Word Game Bench has been developed for evaluating language models on word puzzle games, particularly Wordle and Connections.
- No model currently scores above 50% average win rate, and the benchmark is designed to test models through interaction and feedback rather than static responses.
- Challenges in Measuring Consistency: A member is attempting to compare responses for multiple choice questions to measure consistency when prompts vary slightly, using an extended dataset for the same question across different prompts.
- Suggestions include creating datasets that represent the desired comparisons using functions like doc_to_target or doc_to_text, albeit with the needed effort for each model.
- Word Game Bench: no description found
- Tweet from zafir (@zafstojano): Excited to share "Word Game Bench" - a fun benchmark for evaluating language models on word puzzle games! It is a relatively hard benchmark, where no model currently scores above 50% average...
Perplexity AI ▷ #announcements (1 messages):
Discord server growth
- Discord server hits 100K members: The Discord server has now reached an incredible milestone of 100K members, showcasing the vibrant growth of the community.
- A huge thank you to @everyone was expressed for their support and feedback, highlighting the team's excitement to continue evolving together.
- Community support and feedback appreciated: The team expressed gratitude for all the support and feedback received from members, reinforcing the importance of community engagement.
- They emphasized their eagerness to continue growing and evolving with everyone involved.
Perplexity AI ▷ #general (120 messages🔥🔥):
Perplexity Pro Subscription Issues
AI Model Performance
Promotional Materials for Perplexity
AI Competitions and Hackathons
User Experience with Image Uploads
- Conversation Around Pro Subscription Deletion: Several users expressed concerns about their Pro subscriptions disappearing, with potential causes including misuse of promo codes and account issues. The support team advised reaching out via email for unresolved issues regarding subscription access.
- One user specifically mentioned redeeming a promo code, only to find it no longer functional, prompting questions about voucher abuse prevention measures in place.
- Discussions on Model Selection and Performance: Users noted that switching between AI models sometimes yielded similar answers, raising suspicions about model differentiation. It was suggested that the selected models might not actually be differentiated on the backend, potentially due to recent updates.
- One participant specifically noted that inquiries about model type (GPT or Claude) returned a generic response about Perplexity's model rather than the selected model.
- Showcasing Perplexity at AI Exhibition: A user organizing an AI exhibition in France requested promotional materials and video resources to effectively showcase Perplexity AI at their event. They specified the need for materials beyond just YouTube content to enrich the presentation.
- This indicates an interest in enhancing user engagement and understanding of Perplexity AI through well-coordinated marketing efforts.
- Concerns Over Thread Deletion: Users raised concerns about threads disappearing unexpectedly, with some insisting they never delete their posts. This led to discussions about potential automatic deletions by the platform, provoking frustration among members.
- One user highlighted that refreshing the browser led to their threads being lost, calling for clarification on deletion policies.
- Inquiries on Image Upload Functionality: Several users reported issues with uploading images, prompting discussions on technical difficulties faced while using the feature. Resolutions indicated that disabling certain browser extensions helped restore functionality.
- This reflects ongoing troubleshooting and user adjustments required to maintain full access to platform capabilities.
- Maragoni Rohan Sai - Portfolio: no description found
- Griffith Berserk GIF - Griffith Berserk Eclipse - Discover & Share GIFs: Click to view the GIF
Perplexity AI ▷ #sharing (10 messages🔥):
MrBeast news
C++ programming
Vikings impact on modern culture
OpenAI's DALL-E
Muscle knots
- What's up with MrBeast?: Members discussed the latest developments surrounding MrBeast through this link.
- The discussion hints at a large interest in his current activities and status.
- Need for C++ programs: A user sought help with a C++ programming task.
- This reflects ongoing interest in learning and improving programming skills.
- Vikings are trending: One member is exploring the impact of Vikings on modern culture in this discussion.
- This enthusiasm suggests a growing fascination with historical influences.
- Exploring DALL-E's capabilities: A link was shared regarding OpenAI's DALL-E and its features, found here.
- Members are taking an active interest in understanding AI's creative outputs.
- Understanding muscle knots: A user posted a query about muscle knots through this link.
- This indicates a desire to learn more about health and body mechanics.
Perplexity AI ▷ #pplx-api (9 messages🔥):
PPLX API Credits
Perplexity Pro Searches Availability
Rate Limiting Issue
- Users not receiving PPLX API credits: Multiple users reported that after purchasing Pro, they did not receive the $5 PPLX API credits promised.
- One user requested account details to further investigate the issue, yet no resolution has been provided.
- Pro Searches not available on API: A user expressed uncertainty regarding the functioning of Pro Searches when queuing the API.
- Another member clarified that Pro is not available via the API and followed up asking if this feature will be available in the future.
- Rate limiting concerns: A user encountered a 429 Client Error: Too Many Requests while invoking the API endpoint multiple times.
- They inquired about how they got insta rate limited despite only using three functions in their script.
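For anyone hitting the same 429s, a generic retry-with-exponential-backoff wrapper is sketched below; the endpoint URL and header contents are placeholders rather than Perplexity-specific values.

```python
import time
import requests

def post_with_backoff(url: str, payload: dict, headers: dict, max_retries: int = 5):
    # Retry only on HTTP 429, doubling the wait each attempt (1s, 2s, 4s, ...).
    for attempt in range(max_retries):
        resp = requests.post(url, json=payload, headers=headers)
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp.json()
        time.sleep(2 ** attempt)
    raise RuntimeError("still rate limited after retries")
```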
Cohere ▷ #discussions (70 messages🔥🔥):
Command R+ Model Updates
Throughput and GQA Impact
Cohere Scholars Discord Access
MMLU Optimization Discussion
Open Weights and Licensing
- Command R+ Models Show Major Improvements: The introduction of updated models, including command-r-08-2024, displays enhanced performance in multilingual retrieval-augmented generation (RAG), math, and reasoning tasks.
- Users noted that throughput has significantly increased, achieving up to 50% higher efficiency compared to previous models, thanks to GQA enhancements.
- Discussion on MMLU Relevance: Nick Farst stated that MMLU does not correlate strongly with practical applications, suggesting a significant portion of its content is outdated and irrelevant.
- Members expressed indifference toward MMLU, highlighting the need for focusing on more practical aspects of model performance.
- Users Anticipate Updates and Features: Members are eager about what new updates Cohere will bring, particularly regarding the impact of GQA on model performance and potential quantization methods.
- Discussions revealed a community interest in performance metrics and a desire for quantitative comparisons between new and previous models.
- Accessing Cohere Scholars Discord: A user inquired about how to join the Cohere Scholars Discord, receiving guidance to visit the Cohere For AI website for the join link.
- This reflects ongoing community engagement and interest in participating in platform discussions.
- Open Weights Discussion: Clarifications were provided about the nature of model weights being open, noting that while most weights are available, the latest updates are not yet fully open-sourced.
- Members shared links to Hugging Face for accessing open weights and reflected on the implications this has for academic and research-friendly usage.
- Command models get an August refresh — Cohere: no description found
- Cohere Documentation — Cohere: no description found
- Safety Modes — Cohere: The safety modes documentation describes how to use default and strict modes in order to exercise additional control over model output.
- Updates to the Command R Series: The latest versions of the Command R model series offer improvements across coding, math, reasoning, and latency.
- joey234/mmlu-human_sexuality-original-neg · Datasets at Hugging Face: no description found
- The Cohere Blog: Explore our collection of insightful blog posts covering a diverse range of generative AI topics. Our articles offer in-depth analyses, expert opinions, and practical advice to inform and inspire.
- CohereForAI/c4ai-command-r-plus-08-2024 · Hugging Face: no description found
Cohere ▷ #announcements (6 messages):
Command R models
Pricing updates
Hugging Face availability
Fine-tuning defaults
Ollama deployment
- Command R and R+ models just dropped!: The refreshed Command R and R+ models now feature performance boosts in reasoning, coding, tool use, and multilingual RAG, with lower latency and new safety modes.
- The updated models can be accessed using the aliases command-r-08-2024 and command-r-plus-08-2024 (see the sketch at the end of this section).
- New pricing structure revealed: Command R tokens are now priced at $0.15 per million input tokens and $0.60 per million output tokens, while R+ has been reduced to $2.50 for input and $10.00 for output, making it more affordable.
- These significant price cuts include a 3x reduction in R's input price and 30% cheaper input for R+.
- Models now available on Hugging Face: Users can find the refreshed models on Hugging Face, with anticipation for their conversion and upload to Ollama.
- Vincentbosch noted that the models will take some time before they are fully ready on Ollama.
- Questions about fine-tuning defaults: Discussion arose on whether the new models will serve as the new defaults for fine-tunes, hinting at a shift in standard practice.
- No conclusive answer was provided, leaving room for speculation among members.
- Request for benchmarks: A user inquired about the availability of benchmarks for the new models to better understand their performance metrics.
- As of now, these benchmarks have not been released, prompting further interest in their results.
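A minimal sketch of calling the refreshed models by alias with Cohere's Python SDK, assuming the v1 Client interface and an API key in the environment:

```python
import os
import cohere

co = cohere.Client(os.environ["COHERE_API_KEY"])

# The alias pins the August 2024 refresh of Command R.
response = co.chat(
    model="command-r-08-2024",
    message="Summarize the August Command R refresh in one sentence.",
)
print(response.text)
```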
Link mentioned: Command models get an August refresh — Cohere: no description found
Cohere ▷ #questions (10 messages🔥):
C4AI Scholars Program
Command R+ Release
GDPR Compliance
Cohere's Trust Center
- C4AI Scholars Program for Graduate Students: A member inquired whether the C4AI Scholars Program accepts ongoing graduate students possibly through a January internship setup.
- Another member advised reaching out directly to C4AI for clarification on this matter.
- Inquiry on Command R+ Release: A user asked if the latest version of Command R+ is set to be released soon, indicating interest in new developments.
- No responses were provided to clarify the release timeline.
- GDPR Compliance with Cohere APIs: An inquiry was made regarding whether Cohere APIs comply with GDPR regulations, particularly concerning data usage with Command R+.
- Follow-up responses included posts from the Cohere Trust Center, which may contain useful information.
- Cohere Trust Center Resources: An individual shared a link to the Cohere Trust Center, highlighting its priority on data confidentiality, integrity, and availability.
- It appears to serve as a helpful resource for users concerned about compliance and data handling.
Link mentioned: Cohere Inc | Trust Center: no description found
Cohere ▷ #api-discussions (46 messages🔥):
Rate Limit Issues with Trial API Key
Reranking Citations
Safety Mode Interaction with Preamble
Citations in Financial Data Analysis
- Trial API Key Exceeded Rate Limits: A user encountered a rate limit error (Error 429) while using a trial API key, which allows only 1000 API calls per month.
- Another user explained that to increase capacity, one would need to upgrade to a production key, which requires credit card details.
- Reranking Citations to Limit Redundancy: A user asked for advice on limiting citations in generated text, which had an excessive count considering their 180-word output.
- Suggestions included reranking citations to show only the top 3 (a minimal rerank sketch follows this list), or displaying the referenced documents instead of inline citations.
- Interaction Between Safety Mode and Preamble: Questions arose regarding whether safety_mode overrides custom preamble settings in API usage.
- It was clarified that safety mode operates independently, running “Safety Instructions,” while using a structured format to present prompts in STRICT or CONTEXTUAL modes.
- Development of Financial Data Analysis App: A user is developing a financial data analysis application and mentioned the importance of citations for their output.
- The app is still in early stages but shows promise, and users expressed willingness to offer assistance in its development.
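To illustrate the top-3 suggestion above, a minimal sketch using the SDK's rerank endpoint follows; the model name, query, and documents are assumptions.

```python
import os
import cohere

co = cohere.Client(os.environ["COHERE_API_KEY"])

docs = ["citation A ...", "citation B ...", "citation C ...", "citation D ..."]
results = co.rerank(
    model="rerank-english-v3.0",  # assumed rerank model name
    query="What drove Q2 revenue growth?",
    documents=docs,
    top_n=3,                      # keep only the three most relevant citations
)
for r in results.results:
    print(r.index, r.relevance_score)
```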
- API Keys and Rate Limits — Cohere: This page describes the limitations around Cohere's API.
- Rerank — Cohere: This endpoint takes in a query and a list of texts and produces an ordered array with each text assigned a relevance score.
Cohere ▷ #projects (1 messages):
Maya LLaVA-Pretrain Dataset
Multilingual Dataset Features
Translation Quality Results
API Support and Batch Processing
- Maya LLaVA-Pretrain is live!: The Maya LLaVA-Pretrain dataset has been announced, containing 4,404,776 entries across 8 languages, designed for pretraining large language and vision models.
- This dataset has expanded from an original LLaVA-pretrain English dataset through machine translation and toxicity filtering, making it suitable for image-captioning and visual question-answering tasks.
- Accessing the dataset requires agreement: To access the Maya LLaVA-Pretrain Dataset, users must log in or sign up to agree to the sharing conditions required by the repository.
- This step is necessary despite the dataset being publicly accessible and ensures compliance with the usage policy.
- Appreciation for API Support: One user expressed appreciation for assistance from a team member regarding batch processing and API support during the dataset preparation.
- They highlighted the collaboration with the c4ai-aya-35B model API and command-r-plus API for refining toxicity.
- Upcoming Translation Quality Results: The team plans to present the translation quality results on the dataset card soon, enhancing the dataset's credibility.
- This follows the dataset preparation and focuses on the advancements made during the machine translation efforts.
Link mentioned: kkr5155/Maya-llava-pretrain · Datasets at Hugging Face: no description found
Latent Space ▷ #ai-general-chat (31 messages🔥):
Codeium funding updates
Meta AI assistant growth
DeepMind's customizable Gems
Evolution of code generation tools
Tome's pivot to enterprise AI assistance
- Codeium secures $150M in Series C funding: Codeium announced it raised $150 million in Series C funding, valuing the company at $1.25 billion while having not yet touched its Series B funds from January.
- With a total of $243 million raised, the co-founder aims to use this capital to ramp up R&D and growth initiatives.
- Meta AI assistant hits impressive user numbers: Meta's AI assistant has achieved 400 million monthly active users and 40 million daily active users, indicating rapid adoption.
- Discussions suggested that they might soon need to seek a license, reflecting the growing interest and usage of the platform.
- Google DeepMind launches customizable Gems: DeepMind introduced Gems, customizable AI chatbots that can act as topic experts for various scenarios, including a Learning Coach and a Coding Partner.
- Critics noted that gaining traction will heavily depend on ease of use and curation of these tools.
- Discussion on the state of code generation tools: Recent advancements in code generation tools like Claude 3.5 Sonnet and Townie's redesign are enabling new methods for building software through dialogue with AI.
- Despite the hype, concerns remain about these tools mainly serving as flashy demos without significant integration into existing codebases.
- Tome pivots to focus on enterprise AI assistance: Tome is rebranding to become a dedicated AI assistant aimed at helping businesses break into new enterprise accounts.
- The pivot marks a significant shift in strategy as the company repositions itself within the AI landscape.
- Command models get an August refresh — Cohere: no description found
- How we built Townie – an app that generates fullstack apps: Like Claude Artifacts, but with a backend and database
- Tweet from Google DeepMind (@GoogleDeepMind): Over the coming days, start creating and chatting with Gems: customizable versions of Gemini that act as topic experts. 🤝 We’re also launching premade Gems for different scenarios - including Learni...
- Tweet from 1X (@1x_tech): Introducing NEO Beta. Designed for humans. Built for the home.
- Tweet from Aravind Srinivas (@AravSrinivas): Impressive numbers
- Tweet from Henri Liriani (@hliriani): We're rebooting Tome to be a different company. @magicaltome is now an AI assistant for breaking into new enterprise accounts. Here's a bit on the journey we've been on…
- Our Androids | 1X Technologies: Inspired by human nature. Meet EVE and NEO and learn more about how they use embodied learning to solve problems, from meeting labor demands to everyday assistance.
- Tweet from Ahmad Al-Dahle (@Ahmad_Al_Dahle): On the heels of the update I shared on Llama yesterday, we’re also seeing Meta AI usage growing FAST with 185M weekly actives! 🚀
- GitHub Copilot competitor Codeium raises $150M at a $1.25B valuation | TechCrunch: Codeium, a startup developing an AI-powered tool to rival GitHub Copilot, has raised $150 million at a $1.25 billion valuation.
- 2024 | TechCrunch: no description found
Latent Space ▷ #ai-announcements (1 messages):
Latent Space Podcast
LLM Benchmarks
Meetup Announcement
- New Episode on LLM Benchmarks: The latest episode of the Latent Space Podcast features Nicholas Carlini from Google DeepMind discussing the importance of writing your own LLM benchmarks.
- He covers topics such as how he uses AI, his benchmark for large language models, and extracting training data from LLMs, emphasizing the loss of logprobs from OpenAI.
- Upcoming Meetup Next Month: A shoutout was made for an upcoming meetup organized by @213041230177763329, scheduled for next month.
- Details on the meetup were mentioned, encouraging listeners to participate.
Link mentioned: Tweet from Latent.Space (@latentspacepod): 🆕 Why you should write your own LLM benchmarks w/ Nicholas Carlini of @GoogleDeepMind Covering his greatest hits: - How I Use AI - My benchmark for large language models - Extracting Training Data...
Latent Space ▷ #ai-in-action-club (57 messages🔥🔥):
STORM approach vs one-shot research paper generation
Viewing issues in shared screens
Research agent effectiveness
CogVLM discussion
Language-based learning strategy
- Debate on the STORM Approach or One-shot Generation: A member expressed a preference for the STORM approach, asserting that one-shotting a research paper feels too brittle and relies heavily on tedious human validation.
- Another member countered, suggesting that breaking up generation for feedback could yield better results than running through the entire process without interruption.
- Shared Screen Viewing Confusion: Members experienced issues with shared screens, with some able to see Yikes' screen while others faced loading problems.
- This led to a discussion on viewing discrepancies, something not typically seen where one person could view while others could not.
- Concerns Over Research Agent Limitations: Participants discussed the potential shortcomings of research agents, referring to previous experiences and concerns about their capabilities.
- One member noted that average research took about 2 minutes and cost approximately $0.005, indicating potential inefficiencies.
- CogVLM Draws Interest for Future Discussions: The community found CogVLM intriguing, mentioning the need for someone to give a talk about it after highlighting its features on GitHub.
- A member compared its inclusion in the generated paper to 'LLM barf', hinting at a disconnect between the technology and its application.
- Exploring LLMs for Learning New Codebases: One member expressed their capability to demonstrate Langflow in practice, focusing on how to employ LLMs for learning new codebases.
- They suggested this meta topic could foster deeper discussion and understanding among participants.
- Tweet from Jimmy Koppel (@jimmykoppel): But all that's to stop you from looking too closely at what they actually do. Because I don't think there's much there at all.
- GitHub - THUDM/CogVLM: a state-of-the-art-level open visual language model | 多模态预训练模型: a state-of-the-art-level open visual language model | 多模态预训练模型 - THUDM/CogVLM
- AI In Action: Weekly Jam Sessions: no description found
Modular (Mojo 🔥) ▷ #general (34 messages🔥):
Mojo and Web3 applications
Open source status of Mojo
Performance comparison of programming languages
MAX SDK and Licensing
Collaboration with OPENSEA
- Mojo still maturing for Web3: Discussions reveal that while Mojo is being considered for blockchain protocols, it is still too immature compared to Go, Rust, and C++ for serious development.
- Members noted the ongoing work on Mojo's IO and networking APIs, which need adjustments to match modern hardware.
- Open Source Status of Mojo Compiler: While Mojo is reportedly an open-source language, the source code for its compiler is currently not available, as a small team is tasked with its rapid iteration.
- There is uncertainty about when or if the compiler might become open-source in the future.
- Programming Language Performance Debate: Members debated the performance of Go compared to C, with mixed reports on how much slower Go has become, impacting its suitability for certain applications.
- Darkmatter highlighted that the Go optimizer's conservative approach can lead to significant performance drawbacks in complex issues.
- MAX SDK Development Insights: The Modular team is balancing development speed, product licensing, and community engagement, with an emphasis on open-source contributions.
- While there is an acknowledgment of the need to expand the team, finding individuals familiar with both MLIR and Mojo remains a challenge.
- Free Mint Collaboration with OPENSEA: Members were informed about a collaboration with OPENSEA for a new free mint, encouraging participation through a claim link.
- While some users expressed interest, others opted out of participation.
- Modular: MAX & Mojo Community License: The MAX SDK ("MAX") & Mojo Community License governs what uses we allow with our software and how you can change the world with it.
- Modular: Career Post: At Modular we believe a great culture is the key to creating a great company. The three pillars we work by are Build products users love, Empower people, and Be an incredible team.
- MAX FAQ | Modular Docs: Answers to questions we expect about MAX Engine.
Modular (Mojo 🔥) ▷ #mojo (15 messages🔥):
Memory Management and Architecture
Mistakes in Software Design
Lookup Tables in Mojo
Error Handling for Tuple Indices
Type Awareness in Programming
- Architect's Role in Memory Management: A member expressed that if programmers worry about specific memory being released, then it reflects a failure in the architect's design of the system/environment.
- They emphasized the need for a development team to create a sufficient development environment to avoid such concerns.
- Embracing Mistakes in Software Development: One member noted that mistakes are inevitable in software design and highlighted the importance of flexibility in architecture.
- They concluded that rather than seeking a flawless design, teams should focus on adapting to failures.
- Generating Lookup Tables for Mojo: A member shared their excitement about creating a script to generate a .mojopkg file containing custom lookup tables.
- They expressed joy at the functionalities they've added, showcasing their enthusiasm for the project.
- Errors and Tuple Bounds Awareness: Concerns were raised about out-of-bounds errors in tuples potentially leading to confusion in the editor due to type awareness at each index.
- The discussion hinted at the lack of clear error messages for these cases while also acknowledging its effectiveness in most situations.
- The Need for InvalidType in Error Messages: A suggestion was made to introduce an InvalidType to improve error handling for types that don't match.
- This was seen as potentially beneficial when dealing with Type != Type error messages.
Modular (Mojo 🔥) ▷ #max (2 messages):
fastai model export
Modular framework agnostic model format
- Fastai's Model Export Suggestions: Discussion highlighted that fastai allows users to export a trained PyTorch model with Learner.export for production deployment.
- One member suggested it would be cool to override Learner.export to generate Mojo code for the input pipeline along with the model.
- Modular's Ambition to Resolve Pickle Issues: A member pointed out that Modular seems poised to tackle the pickle problem by creating a cross-platform framework agnostic model format.
- This hints at a move towards standardizing model deployment across different platforms.
LangChain AI ▷ #general (46 messages🔥):
LangChain with Docker
ChatOllama vs Ollama
Real-time streaming in LangChain
Using Hybrid RAG models
Building a competent GPT for HR
- LangChain App Fails in Docker: A user reported issues with their LangChain app calling the ChatOllama object when containerized, but it worked fine when not in Docker.
- The issue was identified as being related to the base URL and was resolved by pointing at a reachable Ollama host URL instead (see the sketch after this list).
- Differences between ChatOllama and Ollama: ChatOllama is designed for chat-like interactions, while Ollama is for general tasks involving language models, offering specific functionalities tailored to their use cases.
- Details on how to use both models were provided, along with examples demonstrating their respective API references.
- Real-time Streaming Output Issue: A user faced difficulties with their agent executor, which gathered all outputs instead of streaming them in real-time.
- Another member inquired about the behavior of streamRunnable = False to understand its impact on streaming capabilities.
- Hybrid RAG Models for LLM Training: Discussion highlighted the possibility of continuous improvement for LLMs through feedback and fine-tuning despite their inability to learn in real-time.
- Alternatives such as traditional RAG models and self-query techniques were also mentioned as viable options for enhancing performance.
- Building a Competent GPT for HR: A user expressed the desire to create a GPT model specifically tailored for their HR team's extensive manual, emphasizing the need for it not to hallucinate.
- Suggestions included using good RAG techniques and enhancing the model's performance through iterative adjustments based on feedback.
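A minimal sketch of the Docker fix described above: point ChatOllama at a reachable Ollama host instead of localhost inside the container. The host.docker.internal alias and model name are assumptions that depend on the Docker setup.

```python
from langchain_community.chat_models import ChatOllama

# Inside a container, "localhost" resolves to the container itself, so the
# base URL must target the host machine or a compose service name.
llm = ChatOllama(
    base_url="http://host.docker.internal:11434",  # Ollama's default port
    model="llama3",
)
print(llm.invoke("Say hello from inside Docker.").content)
```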
- AgentExecutor | LangChain.js: no description found
- Issues · langchain-ai/langchain: 🦜🔗 Build context-aware reasoning applications. Contribute to langchain-ai/langchain development by creating an account on GitHub.
- )">no title found: no description found
- langchain.agents.agent.AgentExecutor — 🦜🔗 LangChain 0.2.16: no description found
- When running ollama via docker, it won't respond to any request by API-call or python-client-library · Issue #6398 · ollama/ollama: What is the issue? I setup the nvidia docker toolkit sucessfully on my Ubuntu 22 Machine with a RTX-4000, and start ollama as docker-container with exposed port 11434: docker run -d --gpus=all --en...
LangChain AI ▷ #share-your-work (1 messages):
sourcefound: https://www.getaiphone.app/
LlamaIndex ▷ #blog (5 messages):
GymNation Success Story
LLMs in Production Talk
LlamaIndex & MLFlow Integration
LLM x Law Hackathon
Enhanced Financial Data Analysis
- GymNation's Digital Transformation Triumph: GymNation improved its member experience significantly, increasing digital lead to sales conversion by 20% and achieving an 87% conversation rate with digital leads.
- Their partnership with LlamaIndex has driven real business outcomes, as detailed in their latest success story.
- Explore LLMs in Production on September 9th: Catch @seldo discussing large language models in production during an upcoming event on September 9th.
- Details are available in the announcement on Twitter.
- LlamaIndex Integrates with MLFlow: Co-founder @jerryjliu0 shared insights on our new integration with MLFlow on the podcast, enhancing how users can log and evaluate LlamaIndex applications.
- This integration allows for better tracking, evaluation, and deployment of ML models, with a full demo available online.
- Join the LLM x Law Hackathon on September 8th: Excitement builds for the LLM x Law Hackathon organized by @hexapode, exploring the crossroads of AI and law on September 8th.
- Participants can expect three tracks focused on AI development and legal applications, more information shared on Twitter.
- Financial Analysis Enhanced with MoW and RAG: The new approach combines Mixture of Workflows (MoW) and Corrective RAG for thorough financial data analysis, using models such as Phi-3 and Qwen-2.
- This method enables context-aware analysis of financial statements, as outlined in the shared details.
LlamaIndex ▷ #general (28 messages🔥):
LlamaIndex Warning
Query Engines Deprecation
Llama3 LLM Usage
Handling JSON in LLM
Azure OpenAI Integration Issues
- LlamaIndex Warning about Valid Config Keys: A user reported receiving a warning about changed config keys in LlamaIndex V2, specifically mentioning 'allow_population_by_field_name' and 'smart_union'.
- Another user suggested that this warning might be related to the version of SQLAlchemy in use.
- Query Engines Deprecation Concerns: A user raised concerns about the potential deprecation of QueryEngines based on documentation, which references deprecated methods for RAG workflows.
- Others clarified that only specific methods for structured outputs are deprecated, while core query engines remain unaffected.
- Using Llama3 LLM for API Calls: A user inquired about utilizing Llama3 with OpenAI for generate_qa_embedding_pairs, looking for clear guidance.
- Another user suggested globally defining the LLM for consistent use, or passing it as a keyword argument during function calls (a minimal sketch follows this list).
- Handling JSON Data in LLM Workflows: A user shared their struggles integrating JSON output from an external API into the LLM successfully.
- They were advised to format JSON responses properly before passing them to the LLM to avoid complications.
- Issues with Azure OpenAI Integration: A user expressed frustration with the integration of LlamaIndex and Azure AI, stating issues with citation mismatches in search results.
- Another user countered this claim, stating they haven't observed similar issues in production and encouraged contributions to rectify any bugs.
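A minimal sketch of the "define the LLM globally" suggestion using LlamaIndex's Settings singleton; serving Llama 3 through Ollama here is an assumption, and the same object can instead be passed as an llm keyword argument where supported.

```python
from llama_index.core import Settings
from llama_index.llms.ollama import Ollama  # requires llama-index-llms-ollama

# Set once; components that consult Settings.llm (query engines, dataset
# generators, etc.) will use this instead of defaulting to OpenAI.
Settings.llm = Ollama(model="llama3", request_timeout=120.0)
```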
- (Deprecated) Query Engines + Pydantic Outputs - LlamaIndex: no description found
- Workflow for a Function Calling Agent - LlamaIndex: no description found
LlamaIndex ▷ #ai-discussion (1 messages):
LitServe
LlamaIndex
AI Model Deployment
- Lightning Fast AI Model Serving with LitServe: The article presents LitServe, a high-performance serving engine that allows developers to deploy and manage various AI models efficiently.
- It highlights the combination of LitServe with LlamaIndex, enhancing versatility in building intelligent applications.
- Integration of LitServe and LlamaIndex: When paired with LlamaIndex, LitServe unlocks new potential for building robust AI applications.
- The combination provides developers with improved tools and resources for effective AI model management.
Link mentioned: Serving AI Models at Lightning Speed with LitServe and LlamaIndex: Ankush k Singal
OpenInterpreter ▷ #general (11 messages🔥):
House Party Announcement
Terminal Applications for KDE
Obsidian OI Plugin Issues
GPT-4o Memory Concerns
- Join the House Party Next Week!: A member announced a House Party for next week, sticking with an earlier time to gather more attendees.
- The invite included a heartfelt message to encourage participation.
- Requesting KDE Terminal Apps: A member inquired about terminal apps for KDE, noting that the current one, Konsole, is causing issues with screen bleeding while scrolling.
- There are discussions about the effectiveness of various terminal applications, with one confirming similar problems in a standard terminal.
- Obsidian OI Plugin Issue Resolution: A user commended the videos on the Obsidian OI plugin but faced installation issues and is seeking advice on resolving them.
- Another member suggested posting detailed installation issues in a specific channel to gather relevant help.
- GPT-4o's Forgetting Problem: A user expressed frustration about GPT-4o not remembering past interactions and questioned how to leverage it for web development.
- They humorously suggested asking the model for tips on improving memory capabilities.
OpenInterpreter ▷ #O1 (2 messages):
Potential applications discussion
House party meetup
- Excitement for potential developments: A member expressed eagerness for upcoming developments and willingness to get involved.
- They mentioned having thoughts on potential applications and desired a chance to discuss further.
- House party as discussion venue: Another member suggested that a house party next Thursday would be a great time to chat about potential applications.
- This informal meetup aims to provide an opportunity for more in-depth conversations.
OpenInterpreter ▷ #ai-content (3 messages):
GameNGen Neural Model
DOOM Simulation
Shout-out to AgentOps
YouTube Video Discussion
- GameNGen Neural Model Powers Real-time Gaming: The GameNGen neural model simulates the classic game DOOM entirely in real-time, achieving over 20 fps on a single TPU and producing high-quality interactions.
- Next frame prediction reaches a PSNR of 29.4, with human raters struggling to distinguish between real and simulated gameplay.
- Excitement for AgentOps Team's Future: Excitement builds around what Adam and the AgentOps team will accomplish next, as highlighted in a recent discussion.
- A member expressed gratitude for shared insights and the positive enthusiasm surrounding the team's developments.
- YouTube Video Shout-out: A YouTube video featuring timestamps gave a shout-out to a community member, catching attention in the chat.
- Contributors highlighted the video's content as engaging and worth checking out, with emphasis on the shout-out acknowledgment.
Link mentioned: GameNGen: Diffusion Models Are Real-Time Game Engines
LAION ▷ #general (14 messages🔥):
Google buying NVIDIA GPUs
RunwayML deletes repos
Effects on diffusers
Realistic image generation for novels
Re-LAION-5B dataset update
- Google's GPU Acquisition Raises Questions: Members questioned why Google is buying NVIDIA GPUs if they already possess TPUs, hinting at potential performance considerations.
- “Is the TPU enough?” raises curiosity over Google's hardware strategy amidst rising competition.
- RunwayML Purges Stable Diffusion Repos: Discussion erupted over RunwayML deleting all their Stable Diffusion 1.5 repos on HuggingFace and GitHub, which led to disruptions in existing projects.
- Concerns over the impact on diffusers 1.5 functionalities were expressed, with one member noting it broke single file loading.
- Frustration over Repo Removals: Members expressed annoyance at RunwayML's lack of foresight in deleting the repositories without archiving, impacting various dependencies.
- One member speculated about legal reasons behind the removal but found no specific issues cited.
- Seeking Realistic Comic Book Covers: A member shared challenges in generating suitable images for novel covers, seeking ways to achieve a more comic book or cartoon style.
- Despite efforts with DALL-E, they received heavily generated AI pictures instead, illustrating difficulties in achieving intended styles.
- Launch of Re-LAION-5B Dataset: The Re-LAION-5B dataset was announced, marking an important update to LAION-5B that addresses safety concerns and removes links to suspected CSAM.
- Joint efforts with organizations like the Internet Watch Foundation ensure the dataset's integrity, now available for download in two safe versions.
- Releasing Re-LAION 5B: transparent iteration on LAION-5B with additional safety fixes | LAION: Today, following a safety revision procedure, we announce Re-LAION-5B, an updated version of LAION...
- runwayml (Runway): no description found
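For context, the "single file loading" breakage refers to diffusers' code path for loading an SD 1.5 checkpoint directly from a .safetensors file, which reportedly resolved its config against the now-deleted runwayml repo on the Hub; a hedged sketch, with the checkpoint filename as a placeholder:

```python
from diffusers import StableDiffusionPipeline

# Sketch of the single-file code path reported as broken; the filename is a
# placeholder for a local SD 1.5 checkpoint.
pipe = StableDiffusionPipeline.from_single_file("v1-5-pruned-emaonly.safetensors")
```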
LAION ▷ #announcements (1 messages):
mega_b: https://laion.ai/blog/relaion-5b/
Interconnects (Nathan Lambert) ▷ #news (10 messages🔥):
OpenAI Funding Round
Chatbot Wars
Meta AI Usage
- Tech Giants Eye OpenAI's New Funding: The three most valuable tech companies, Nvidia, Apple, and Microsoft, are in discussions to invest in OpenAI's new funding round, reportedly at a valuation above $100 billion, as reported by Bloomberg.
- Glad to see a non-profit attract such interest, another member commented, highlighting the significance of this potential investment.
- ChatGPT Dominates with Massive User Base: ChatGPT boasts over 200 million weekly active users, while Meta AI is reportedly gaining traction with around 40 million daily active users, according to a quote from The Information.
- Discussion arose about whether Meta AI is used similarly to ChatGPT, with some pointing out it isn't available in regions like the EU.
- Anticipation for Chatbot Competition: Members are expressing their excitement over the ongoing competition in the chatbot space, referencing comments like 'Begun, the chatbot wars have.'
- "This is expected imo," one participant noted, showcasing confidence in the growth of the AI assistant sector.
- Tweet from Mark Gurman (@markgurman): Nvidia, Apple and Microsoft — the three most valuable tech companies — are in talks to invest in OpenAI as part of the company’s new $100 billion funding round. https://www.bloomberg.com/news/articles...
- Tweet from Amir Efrati (@amir): Begun, the chatbot wars have ChatGPT: 200M+ weeklies. Meta AI likely not far behind (though unclear if people are using it the same way or accidentally!) https://www.theinformation.com/articles/m...
Interconnects (Nathan Lambert) ▷ #random (3 messages):
Tinygrad Cloud Service
System Prompts Impact
- Tinygrad Launches Affordable Cloud Service: Tinygrad announced a new cloud service offering for just $60/month, which includes a 4090 GPU and 500 GB of cloud storage, making it 3x cheaper than vast ai.
- Users can continue to use tinygrad locally while benefiting from faster operations in the cloud, with the promise of only one roundtrip per 'TinyJit' function (a sketch of the pattern follows the link below).
- Inquiry on System Prompts and Evaluations: A user inquired about research papers that examine the impact of system prompts on evaluation scores and whether scores can be meaningfully shifted.
- This suggests a growing interest in understanding how prompt engineering might influence the performance outcomes of AI models.
Link mentioned: Tweet from the tiny corp (@tinygrad): Coming soon: CLOUD=1 For $60/month (3x cheaper than vast ai), we'll rent you a 4090 and 500 GB of cloud storage. Use tinygrad as normal on your dev machine, but it runs things fast in the cloud....
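For illustration, a hedged sketch of the TinyJit pattern the tweet refers to: after warm-up calls, the jitted function replays a captured kernel graph, which is what would let a remote backend run the whole body in one roundtrip. The workload and shapes are illustrative, and the exact cloud semantics are not yet public:

```python
from tinygrad import Tensor, TinyJit

@TinyJit
def step(x: Tensor) -> Tensor:
    # the whole body is captured as one graph and replayed on later calls
    return (x @ x.transpose()).relu().sum().realize()

for _ in range(4):  # early calls capture the JIT graph; later calls replay it
    print(step(Tensor.rand(64, 64)).item())
```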
Torchtune ▷ #general (11 messages🔥):
QLoRA Memory Issues
Multi GPU Evaluation
Torch Version Compatibility
Illegal Memory Access Errors
- QLoRA hits memory limits: A member expressed suspicion about QLoRA's memory usage, noting that 4x 48GB GPUs should be sufficient for training.
- They noted that without CPU offloading their setup approaches the memory limit with shorter sequences.
- Discussion on Multi GPU Evaluation Support: One member inquired whether multi GPU evaluation is supported in TorchTune.
- This prompted further discussions on setup and performance expectations.
- Torch Version Matters: Another member clarified that they are using Torch version 2.4.0+cu124, which raised questions about compatibility with the setup.
- They noted that this version could impact how the model behaves under different configurations.
- Illegal memory access during training: A member reported an illegal memory access error while running their training script, with a suggestion to pass CUDA_LAUNCH_BLOCKING=1 for debugging (see the sketch below).
- They also highlighted that CUDA errors might be asynchronously reported, complicating troubleshooting efforts.
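For anyone hitting the same error, a minimal sketch of the suggested debugging setup: making kernel launches synchronous ensures the Python stack trace points at the op that actually faulted rather than a later, unrelated call:

```python
import os

# Must be set before CUDA is initialized, i.e. before the first CUDA call.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

import torch

x = torch.randn(8, 8, device="cuda")
print((x @ x).sum())  # a faulting kernel now raises at the failing call site
```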
DSPy ▷ #show-and-tell (5 messages):
LinkedIn Auto Jobs Applier
DSPy community engagement
GitHub repo discussion
- DSPy Community Invited to Join the Revolution: A member expressed excitement about sharing a GitHub repo, stating they wanted to invite the DSPy community to join the revolution around it.
- "Lol," they added, emphasizing the importance of community involvement in their project.
- LinkedIn Auto Jobs Applier Gains Popularity: The GitHub repo for LinkedIn Auto Jobs Applier is reportedly gaining traction, with over 2k stars each day.
- However, concerns remain about its functionality, with open GitHub issues indicating it still leaves something to be desired.
- Concerns Over Testing the Project: A member inquired if the project had been tested, suggesting that user feedback indicates it may still lack stability.
- This hints at a need for further improvements as discussions continue on the viability of the project.
DSPy ▷ #general (5 messages):
DSPy: Prompt Optimization
Bay Area AI Meetup
AgentOps platform
Michael Ryan's Talk
- Bay Area AI Meetup with Michael Ryan: Michael Ryan will speak at the post-@AIconference Bay Area AI meetup at GitHub HQ discussing DSPy and LM Programs, highlighting how to build reliable LLM applications.
- His talk will cover the MIPROv2 optimization algorithm and the importance of treating LM Programs as rigorously as traditional software systems, emphasizing testing and auditing.
- MIPROv2 for LM Programs: Michael Ryan introduced the concept of LM Programs in DSPy, exploring composable calls to LLMs that enhance reliability and optimize performance (a sketch follows the links below).
- He has been recognized for his contributions, earning a Best Social Impact award at ACL 2024, illustrating the significance of his work in the field of language models.
- AgentOps Platform Introduction: AgentOps offers robust tools for creating agents with features like graphs, monitoring, and replay analytics, designed to eliminate guesswork in LLM usage.
- This platform is open source, promoting collaborative improvements through its GitHub repository.
- DSPy Doubts and Support: A user inquired about the appropriate channel for posting doubts regarding the use of DSPy, seeking directions for community support.
- This indicates an active interest in engaging with DSPy functionalities and possibly contributing to troubleshooting within the community.
- Tweet from Alexy 🤍💙🤍 (@ChiefScientist): Super excited to host Michael Ryan at the post-@AIconference http://Bay.Area.AI meetup hosted by @github HQ in SOMA SF! DSPy: Prompt Optimization for LM Programs Michael Ryan, @Stanford It has nev...
- Agent Database By AgentOps.ai & Agen.cy: no description found
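As a rough illustration of the talk's subject, a hedged sketch of an LM Program in DSPy and how MIPROv2 would compile it; the metric and training set are placeholders, not from the discussion:

```python
import dspy
from dspy.teleprompt import MIPROv2

class QA(dspy.Module):
    """A tiny LM Program: one composable chain-of-thought call."""
    def __init__(self):
        super().__init__()
        self.answer = dspy.ChainOfThought("question -> answer")

    def forward(self, question: str):
        return self.answer(question=question)

# Compiling against a metric and examples is where MIPROv2 optimizes prompts
# (placeholders below; see the DSPy docs for exact signatures):
# optimizer = MIPROv2(metric=my_metric)
# compiled_qa = optimizer.compile(QA(), trainset=my_trainset)
```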
OpenAccess AI Collective (axolotl) ▷ #general (5 messages):
Axolotl GitHub Documentation
Training Hardware for Llama 70B
A6000 GPUs
- Request for Dark Mode on Axolotl GitHub Doc: A member expressed a desire for the Axolotl GitHub documentation to have a dark mode option, stating that the light mode is difficult on the eyes.
- This change would enhance usability for those who frequently reference configuration parameters.
- Quest for Optimal Hardware for Llama 70B Training: A member inquired about the current hardware requirements for full training of the Llama 70B model, questioning if a few A6000 GPUs would suffice.
- Another member responded affirmatively, suggesting that 3x A6000 GPUs should be adequate for the task.
- Considering Full Weight Training with A6000: A member expressed surprise at the implications of using A6000 GPUs for training the 70B full weight model.
- The conversation highlighted a blend of hope and skepticism regarding hardware capabilities in LM training.
OpenAccess AI Collective (axolotl) ▷ #axolotl-dev (1 messages):
Assistant Prefill Feature
GitHub Contributions
- Assistant Prefill Feature added to Transformers: A new pull request proposes an assistant prefill feature for chat templates and the TextGenerationPipeline, allowing for initial responses to be pre-filled by the model.
- This feature has been requested multiple times internally and on GitHub, demonstrating its demand within the community.
- Enhancements with the new pull request: The pull request led by Rocketknight1 adds functionalities that have been desired both internally and externally on GitHub.
- By implementing this feature, the Transformers library continues to evolve, addressing user needs and enhancing usability.
Link mentioned: Add assistant prefill for chat templates and TextGenerationPipeline by Rocketknight1 · Pull Request #33198 · huggingface/transformers: Something that's been requested several times both internally and on Github is assistant prefill: The ability to begin the model's response for it and let it continue. We use a slightl...
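A hedged sketch of the feature as described in the PR: the final message is an unfinished assistant turn, and the new continue_final_message flag tells the pipeline to continue it rather than open a new turn. The model name is a placeholder:

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="meta-llama/Llama-3.1-8B-Instruct")

messages = [
    {"role": "user", "content": "Name the three primary colors."},
    # Prefilled start of the assistant's reply; generation continues from here.
    {"role": "assistant", "content": "The three primary colors are"},
]

out = pipe(messages, continue_final_message=True, max_new_tokens=30)
print(out[0]["generated_text"])
```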
OpenAccess AI Collective (axolotl) ▷ #general-help (3 messages):
Llama 3.1 special tokens
Fixing untrained tokens
- Llama 3.1 still facing special token issues?: A member inquired whether Llama 3.1 base still has issues with uninitialized special tokens, particularly regarding embeddings being out of distribution.
- Another participant confirmed the issue with a solution that involves adding a new option to fix the problem.
- Introducing the fix_untrained_tokens option: In response to the concerns raised, a member announced the introduction of the option fix_untrained_tokens: true, which can now be set.
- This option is expected to address the problems faced with uninitialized special tokens.
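For intuition, "fixing" untrained tokens generally means re-initializing their embedding rows so they are no longer out of distribution; a hedged sketch of the general technique, illustrative only and not necessarily axolotl's actual implementation:

```python
import torch

def fix_untrained_token_embeddings(emb: torch.nn.Embedding, untrained_ids: list[int]) -> None:
    # Re-initialize untrained special-token rows to the mean of the trained rows.
    with torch.no_grad():
        mask = torch.ones(emb.num_embeddings, dtype=torch.bool)
        mask[untrained_ids] = False
        emb.weight[untrained_ids] = emb.weight[mask].mean(dim=0)
```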
Gorilla LLM (Berkeley Function Calling) ▷ #discussion (6 messages):
Groq Leaderboard Updates
Documentation of Model Steps
GIS Geometry Presentation Test Case Issues
Model Evaluation Temperature Settings
- Groq Awaiting Leaderboard Addition: Members noted that Groq has not yet been added to the leaderboard, but we can expect their PRs to be raised around next week.
- We're still waiting for them to contribute to the evaluation process.
- Documentation Steps Assurance: A member assured that they will document the steps necessary for reproducibility to satisfy previous concerns raised in the discussion.
- This proactive approach aims to enhance clarity in the model's documentation.
- Problems with GIS Geometry Presentation: A member analyzed a Java test case where their model struggled with initialization prompts for the GIS geometry presentation.
- They concluded that the model's answer was still better than performing function calls since the request was about initialization.
- Clarification on Evaluation Temperature: Members questioned whether all models are evaluated strictly at a temperature of 0 for fair comparisons as mentioned in discussions.
- One member expressed their understanding of the settings and emphasized that keeping some parameters fixed is crucial for consistent function call outputs (a sketch follows the links below).
- gorilla/berkeley-function-call-leaderboard at main · ShishirPatil/gorilla: Gorilla: Training and Evaluating LLMs for Function Calls (Tool Calls) - ShishirPatil/gorilla
- Set Model Temperature to 0 for Consistent Leaderboard Results · ShishirPatil/gorilla · Discussion #562: The current model generation script (model_handlers) uses a default temperature of 0.7 for inference. This introduces some degree of randomness into the model output generation, leading to potentia...
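A minimal sketch of the fairness point: pinning temperature to 0 makes decoding (near-)greedy, so repeated eval runs yield stable function-call outputs. Client and model names are placeholders:

```python
from openai import OpenAI

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[{"role": "user", "content": "What's the weather in Berkeley?"}],
    temperature=0,  # deterministic decoding for reproducible comparisons
)
print(resp.choices[0].message)
```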
tinygrad (George Hotz) ▷ #general (2 messages):
tinygrad capabilities
sparsity handling
- Queries on tinygrad's operation handling: A member asked if tinygrad is limited to statically scheduled operations and if it has issues with semi-structured sparsity or weight selection.
- This query sparked further discussion about its overall capabilities, questioning whether there are any operations that can't be executed in tinygrad.
- George Hotz seeks examples of limitations: George Hotz responded by asking for specific examples of operations that users find impossible to perform in tinygrad.
- This question indicates a desire to clarify the framework's versatility and limitations regarding operation scheduling.
tinygrad (George Hotz) ▷ #learn-tinygrad (2 messages):
Tensor.cat functionality
Sharded tensors
Batch dimension handling
Error troubleshooting
- Tensor.cat struggles with sharded tensors: A user encountered an error when trying to use Tensor.cat to concatenate two sharded tensors along the batch axis, resulting in an AssertionError related to padding (a hypothetical repro follows below).
- They noted the ability to unsqueeze an extra dimension but encountered further issues when attempting to reshape the resulting tensor.
- Clarifying the root of the issue: The user posed two questions about the nature of the problem: whether it's a fundamental limitation or simply unsupported functionality.
- They are exploring ways to either modify the code to handle an extra batch dimension or use different methods to avoid the need for cat.
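A hypothetical repro of the reported failure, under stated assumptions about the API; device names and shapes are illustrative:

```python
from tinygrad import Tensor

devices = ("CPU:0", "CPU:1")  # placeholder device pair
a = Tensor.rand(8, 16).shard(devices, axis=0)
b = Tensor.rand(8, 16).shard(devices, axis=0)

# Concatenating along the sharded batch axis is what reportedly raises the
# padding-related AssertionError.
c = a.cat(b, dim=0)
```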