[AINews] a quiet weekend
This is AI News! an MVP of a service that goes thru all AI discords/Twitters/reddits and summarizes what people are talking about, so that you can keep up without the fatigue. Signing up here opts you in to the real thing when we launch it 🔜
Patience is all you need.
AI News for 9/13/2024-9/16/2024. We checked 7 subreddits, 433 Twitters and 30 Discords (220 channels, and 6976 messages) for you. Estimated reading time saved (at 200wpm): 757 minutes. You can now tag @smol_ai for AINews discussions!
Everyone spent the weekend exploring o1 and opinions are quite mixed so far:
Astrophysics PhDs and George Hotz and Terence Tao like it, and someone manually graded it at an IQ of 120 on a custom IQ quiz.
In other news:
- Supermaven announced their $12m seed with Bessemer
- 11x announced their $24m series A with Benchmark
- Luma Labs launched an API for Dream Machine
- Cohere and Anthropic and Latent Space University launched courses.
One has to wonder just how good the upcoming Gemini 2 will have to be to compare with o1...
Table of Contents
- AI Twitter Recap
- AI Reddit Recap
- AI Discord Recap
- PART 1: High level Discord summaries
- aider (Paul Gauthier) Discord
- Unsloth AI (Daniel Han) Discord
- OpenRouter (Alex Atallah) Discord
- Perplexity AI Discord
- HuggingFace Discord
- Nous Research AI Discord
- OpenAI Discord
- CUDA MODE Discord
- LM Studio Discord
- Latent Space Discord
- Interconnects (Nathan Lambert) Discord
- Cohere Discord
- Stability.ai (Stable Diffusion) Discord
- Modular (Mojo 🔥) Discord
- Eleuther Discord
- LlamaIndex Discord
- OpenInterpreter Discord
- OpenAccess AI Collective (axolotl) Discord
- LangChain AI Discord
- DSPy Discord
- tinygrad (George Hotz) Discord
- Torchtune Discord
- LAION Discord
- MLOps @Chipro Discord
- Gorilla LLM (Berkeley Function Calling) Discord
- PART 2: Detailed by-Channel summaries and links
- aider (Paul Gauthier) ▷ #general (679 messages🔥🔥🔥):
- aider (Paul Gauthier) ▷ #questions-and-tips (132 messages🔥🔥):
- aider (Paul Gauthier) ▷ #links (12 messages🔥):
- Unsloth AI (Daniel Han) ▷ #general (751 messages🔥🔥🔥):
- Unsloth AI (Daniel Han) ▷ #off-topic (72 messages🔥🔥):
- Unsloth AI (Daniel Han) ▷ #help (133 messages🔥🔥):
- Unsloth AI (Daniel Han) ▷ #research (37 messages🔥):
- OpenRouter (Alex Atallah) ▷ #app-showcase (1 message):
- OpenRouter (Alex Atallah) ▷ #general (775 messages🔥🔥🔥):
- OpenRouter (Alex Atallah) ▷ #beta-feedback (1 message):
- Perplexity AI ▷ #general (615 messages🔥🔥🔥):
- Perplexity AI ▷ #sharing (37 messages🔥):
- Perplexity AI ▷ #pplx-api (18 messages🔥):
- HuggingFace ▷ #general (571 messages🔥🔥🔥):
- HuggingFace ▷ #today-im-learning (7 messages):
- HuggingFace ▷ #cool-finds (4 messages):
- HuggingFace ▷ #i-made-this (23 messages🔥):
- HuggingFace ▷ #reading-group (3 messages):
- HuggingFace ▷ #NLP (7 messages):
- HuggingFace ▷ #diffusion-discussions (4 messages):
- Nous Research AI ▷ #general (332 messages🔥🔥):
- Nous Research AI ▷ #ask-about-llms (118 messages🔥🔥):
- Nous Research AI ▷ #research-papers (20 messages🔥):
- Nous Research AI ▷ #interesting-links (17 messages🔥):
- OpenAI ▷ #ai-discussions (82 messages🔥🔥):
- OpenAI ▷ #gpt-4-discussions (100 messages🔥🔥):
- OpenAI ▷ #prompt-engineering (119 messages🔥🔥):
- OpenAI ▷ #api-discussions (119 messages🔥🔥):
- CUDA MODE ▷ #general (29 messages🔥):
- CUDA MODE ▷ #triton (5 messages):
- CUDA MODE ▷ #torch (3 messages):
- CUDA MODE ▷ #algorithms (3 messages):
- CUDA MODE ▷ #cool-links (5 messages):
- CUDA MODE ▷ #beginner (2 messages):
- CUDA MODE ▷ #pmpp-book (3 messages):
- CUDA MODE ▷ #torchao (40 messages🔥):
- CUDA MODE ▷ #off-topic (59 messages🔥🔥):
- CUDA MODE ▷ #irl-meetup (1 message):
- CUDA MODE ▷ #triton-puzzles (2 messages):
- CUDA MODE ▷ #triton-viz (2 messages):
- CUDA MODE ▷ #llmdotc (138 messages🔥🔥):
- CUDA MODE ▷ #lecture-qa (1 message):
- CUDA MODE ▷ #bitnet (22 messages🔥):
- CUDA MODE ▷ #cudamode-irl (10 messages🔥):
- CUDA MODE ▷ #liger-kernel (29 messages🔥):
- LM Studio ▷ #general (223 messages🔥🔥):
- LM Studio ▷ #hardware-discussion (101 messages🔥🔥):
- Latent Space ▷ #ai-general-chat (116 messages🔥🔥):
- Latent Space ▷ #ai-announcements (1 message):
- Latent Space ▷ #ai-in-action-club (133 messages🔥🔥):
- Interconnects (Nathan Lambert) ▷ #news (41 messages🔥):
- Interconnects (Nathan Lambert) ▷ #ml-questions (5 messages):
- Interconnects (Nathan Lambert) ▷ #ml-drama (7 messages):
- Interconnects (Nathan Lambert) ▷ #random (53 messages🔥):
- Interconnects (Nathan Lambert) ▷ #memes (1 message):
- Interconnects (Nathan Lambert) ▷ #rl (7 messages):
- Interconnects (Nathan Lambert) ▷ #reads (6 messages):
- Interconnects (Nathan Lambert) ▷ #posts (92 messages🔥🔥):
- Cohere ▷ #discussions (87 messages🔥🔥):
- Cohere ▷ #announcements (1 message):
- Cohere ▷ #questions (96 messages🔥🔥):
- Cohere ▷ #api-discussions (13 messages🔥):
- Cohere ▷ #projects (2 messages):
- Stability.ai (Stable Diffusion) ▷ #general-chat (181 messages🔥🔥):
- Modular (Mojo 🔥) ▷ #announcements (2 messages):
- Modular (Mojo 🔥) ▷ #mojo (91 messages🔥🔥):
- Eleuther ▷ #general (30 messages🔥):
- Eleuther ▷ #research (23 messages🔥):
- Eleuther ▷ #scaling-laws (7 messages):
- Eleuther ▷ #interpretability-general (10 messages🔥):
- Eleuther ▷ #lm-thunderdome (1 message):
- Eleuther ▷ #gpt-neox-dev (6 messages):
- LlamaIndex ▷ #blog (5 messages):
- LlamaIndex ▷ #general (51 messages🔥):
- LlamaIndex ▷ #ai-discussion (4 messages):
- OpenInterpreter ▷ #general (11 messages🔥):
- OpenInterpreter ▷ #O1 (17 messages🔥):
- OpenInterpreter ▷ #ai-content (2 messages):
- OpenAccess AI Collective (axolotl) ▷ #general (16 messages🔥):
- OpenAccess AI Collective (axolotl) ▷ #axolotl-dev (11 messages🔥):
- LangChain AI ▷ #general (22 messages🔥):
- LangChain AI ▷ #share-your-work (1 message):
- DSPy ▷ #show-and-tell (1 message):
- DSPy ▷ #general (19 messages🔥):
- tinygrad (George Hotz) ▷ #general (12 messages🔥):
- tinygrad (George Hotz) ▷ #learn-tinygrad (4 messages):
- Torchtune ▷ #general (8 messages🔥):
- Torchtune ▷ #dev (6 messages):
- LAION ▷ #general (5 messages):
- LAION ▷ #resources (1 message):
- MLOps @Chipro ▷ #general-ml (4 messages):
- Gorilla LLM (Berkeley Function Calling) ▷ #leaderboard (1 message):
- Gorilla LLM (Berkeley Function Calling) ▷ #discussion (2 messages):
AI Twitter Recap
all recaps done by Claude 3.5 Sonnet, best of 4 runs.
AI Model Developments and Industry Updates
- OpenAI's o1 Model: OpenAI released a new model called "o1" (also known as Project Strawberry/Q*), which uses reinforcement learning and chain-of-thought to "think" before responding. @adcock_brett noted it smashes reasoning benchmarks. The model achieved 25 out of 35 correct answers on IQ questions, surpassing most humans, according to @rohanpaul_ai.
- Google DeepMind Developments:
- Google introduced DataGemma, designed to connect large language models with real-world data, aiming to reduce AI hallucinations @adcock_brett.
- DeepMind unveiled two new AI systems, ALOHA and DemoStart, advancing robot dexterity using diffusion methods @adcock_brett.
- Other Industry Updates:
- Adobe previewed its Firefly AI Video Model with features like Text to Video, Image to Video, and Generative Extend @adcock_brett.
- French AI startup Mistral released Pixtral 12B, a multimodal model capable of processing both images and text @adcock_brett.
- Tencent presented GameGen-O, an 'Open-world Video Game Generation' model @adcock_brett.
AI Research and Papers
- Several papers were highlighted as potentially relevant to understanding OpenAI's o1 model, including:
- "Quiet-STaR: Language Models Can Teach Themselves to Think Before Speaking" from Stanford
- "Agent Q: Advanced Reasoning and Learning for Autonomous AI Agents" from MultiOn/Stanford
- "Let's Verify Step by Step" from OpenAI
- "V-STaR: Training Verifiers for Self-Taught Reasoners" from Microsoft, Mila
- "Learn Beyond The Answer: Training Language Models with Reflection for Mathematical Reasoning" from Notre Dame, Tencent @_philschmid
- A paper on "Selective Reflection-Tuning" was mentioned, describing an improved version of the 2023 Reflection-Tuning approach @rohanpaul_ai.
AI Capabilities and Benchmarks
- @bindureddy claimed AI has reached an IQ of 120, surpassing most humans, but noted it still lacks in perception and environmental understanding.
- @fchollet commented that while AI can generalize, it does so only locally and still breaks down on simple problem modifications or novel problems.
- Terence Tao, a renowned mathematician, provided commentary on o1's math capabilities, with mixed but overall optimistic takeaways @mathemagic1an.
Industry Perspectives and Debates
- There was discussion about the terminology "Large Language Models" (LLMs), with some arguing it's becoming a misnomer @karpathy.
- @ylecun criticized auto-regressive prediction for non-temporal sequences as "a pure abomination."
- Sam Altman commented that o1 marks the beginning of a significant new paradigm and stated "We have the next few years in the bag" regarding AI progress @rohanpaul_ai.
AI Reddit Recap
/r/LocalLlama Recap
Theme 1. Llama 3.1 405B: Open-Source Rival to GPT-4
- Llama 405B running locally! (Score: 81, Comments: 23): The post demonstrates Llama 405B running locally on Apple Silicon hardware, specifically a Mac Studio M2 Ultra and a MacBook Pro M3 Max, achieving a speed of 2.5 tokens/sec. The setup is powered by Exo (https://github.com/exo-explore) and Apple MLX as the backend engine, with an important optimization trick shared by the Apple MLX creator involving setting specific sysctl parameters for improved performance.
- Llama 405B performance was further improved by adding a Linux system with 3090 GPU to the cluster, achieving 153.56 TFLOPS. The setup uses wifi for connectivity between devices.
- The project utilizes 4-bit quantization, pushing nearly 500GB/sec through the GPUs. The developer is exploring integration of an Nvidia 3090 using tinygrad.
- While the 2.5 tokens/sec speed is considered playable, the 30.43 seconds to first token with only 6 tokens in the prompt was noted as a limitation. Users can try the setup using the Exo GitHub repository.
- I ran o1-preview through my small-scale benchmark, and it scored nearly identical to Llama 3.1 405B (Score: 169, Comments: 52): Llama 3.1 405B and OpenAI's o1-preview model achieved nearly identical scores in a small-scale benchmark test. The benchmark results suggest that o1-preview might be a fine-tuned version of Llama 3.1 405B, potentially indicating a collaboration between Meta and OpenAI. This performance parity also implies that o1-preview could be matching GPT-4's capabilities in certain tasks.
- The benchmark creator, dubesor86, shared the full benchmark results and noted that testing was expensive due to harsh caps. The pricing difference between models is attributed to the base cost multiplied by the number of invisible tokens used.
- Several users questioned the unexpectedly low performance of Claude 3.5 Sonnet in the coding benchmark, particularly compared to popular consensus and personal experiences. The benchmark creator emphasized that results vary based on specific use cases and skill levels.
- Users discussed the potential for improving Llama's performance on reasoning tasks by using Chain of Thought (CoT) prompting, similar to o1. The benchmark creator expressed interest but preferred to maintain default model behavior in the official results.
Theme 2. O1 Model's Advanced Reasoning Capabilities
- Inspired by the new o1 model, Benjamin Klieger hacked together g1, powered by Llama-3.1 on @GroqInc (Score: 260, Comments: 58): Benjamin Klieger developed g1, a model inspired by O1 and powered by Llama-3.1 on Groq's hardware. This implementation aims to replicate O1's reasoning capabilities using the Llama-3.1 architecture, potentially offering similar performance on alternative infrastructure.
- The infinite bookshelf project by Benjamin Klieger garnered interest, with discussions on its Groq dependency and potential for local implementation. A user shared an intriguing simulation of a dinner with historical figures and an AI from the future.
- Users debated the effectiveness of replicating O1's performance using prompts alone, questioning if reinforcement learning with multi-step training data was crucial for O1's capabilities. Some suggested using Chain of Thought (CoT) output for further model fine-tuning.
- The proposed reasoning prompt using JSON format for step-by-step explanations was criticized, with users noting that forcing models to respond in JSON can degrade answer quality, especially for smaller models like Llamas.
- Is this a way to reveal o1's thinking steps? (Score: 92, Comments: 41): The post discusses a potential method to reveal O1's thinking steps by using a prompt engineering technique. The technique involves asking O1 to explain its reasoning for each step of a task, with the goal of understanding the AI's decision-making process. However, the effectiveness of this approach in truly revealing O1's internal thought process remains uncertain.
- Users suggest O1's thinking steps may be summarized by a smaller LLM, making it difficult to reveal true internal processes. Some speculate about an agentic system or specialized agents coordinating tasks.
- Attempts to reveal O1's chain of thought may result in threats from OpenAI to remove access to O1. Users report receiving emails warning against such attempts, leading to reduced probing of the model.
- Theories about O1's capabilities include a potential algorithm with reflection tokens allowing for recursive loops during inference, and training to recognize and avoid responding to "bad" instructions while maintaining an internal model of them.
Theme 3. Comparing Online LLM Providers and Services
- Large LLM providers, which one do you use and why? (Score: 46, Comments: 39): The post discusses the various Large Language Model (LLM) providers available for users who cannot run large models locally, mentioning options like Together, Poe, You.com, Groq, OpenRouter, and Fireworks. The author expresses frustration with Poe's reduced output length compared to original models and seeks recommendations for other providers, asking about criteria for choosing paid services and how to identify providers that use unmodified LLMs without artificially shortened outputs.
- OpenRouter is highly recommended for its wide variety of models, pricing options, and free choices. Users appreciate its load balancing feature and the ability to switch between supported models without changing API requests.
- Several users prefer a combination of providers, including OpenAI, Anthropic, Google, Together.AI, and vast.AI/RunPod. This approach allows for SOTA performance, free options, and the ability to run unique models, with monthly costs typically under $15.
- Google Gemini and Cohere are popular for their free plans, while some users opt for local solutions like Ollama or open-source alternatives like open-webui to avoid subscription fees and maintain data control.
- I ran o1-preview through my small-scale benchmark, and it scored nearly identical to Llama 3.1 405B (Score: 169, Comments: 52): O1-preview performed nearly identically to Llama 3.1 405B in a small-scale benchmark test. The benchmark included various tasks such as arithmetic, common sense reasoning, and language understanding, with both models achieving similar scores across the board. This suggests that O1-preview may be a competitive alternative to Llama 3.1 405B, though further testing on larger benchmarks would be needed to confirm these initial findings.
- The benchmark's creator, dubesor86, shared the full benchmark results and noted that testing was expensive due to harsh caps and invisible tokens. The pricing difference between models is attributed to base cost multiplied by token usage.
- Users questioned the underperformance of Claude 3.5 Sonnet in coding tasks, contrasting with their personal experiences. The benchmark creator emphasized that results vary based on specific use cases and that "coding" is a broad term with diverse requirements.
- The benchmark cost for O1-preview was approximately 52 times more expensive than testing Llama 3.1 405B. Users expressed interest in the testing methodology, including local builds, rented instances, and API usage.
Theme 4. Advancements in Local LLM Tools and Applications
- Sharing my Screen Analysis Overlay app (Score: 58, Comments: 10): The post introduces a Screen Analysis Overlay app designed to work with local LLMs for real-time screen analysis. The app captures the screen, processes it through a local LLM, and displays the results as an overlay, allowing users to interact with their computer while receiving AI-powered insights about on-screen content. The developer mentions plans to open-source the project and seeks feedback on potential use cases and improvements.
- I massively updated my python program that allows local LLMs running via llama.cpp to look things up on the internet, it now fully web scrapes the most relevant results! (Score: 133, Comments: 19): The author has significantly updated their Python program that enables local LLMs running via llama.cpp to access internet information, now featuring full web scraping of the most relevant search results. The program allows the LLM to select a search query, choose the 2 most relevant results out of 10, gather information from those results, and either conduct further searches or answer the user's question, with the update also including an llm_config.py file for customizing llama.cpp settings and enabling GPU support. The updated project is available on GitHub.
- Users praised the project, with one suggesting the addition of OpenAI compatible API endpoints to increase usability. The author agreed to work on implementing this feature, noting it would take "a few weeks".
- Discussion revealed that llama-cpp-python has a built-in OpenAI compatible API, which could be a starting point for integrating the project into larger personal assistant efforts. Users highlighted the potential performance benefits of running llama.cpp on a server with OpenAI API.
- A detailed implementation suggestion was provided, including steps to spin up the server, modularize the code, and refactor get_llm_response() to query the API endpoint. The commenter praised the project's simplicity and approach.
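A rough sketch of that suggested refactor, assuming llama-cpp-python's built-in OpenAI-compatible server is running (e.g. started via `python -m llama_cpp.server --model <path>`); the endpoint is the server's default, and the sampling settings and `get_llm_response()` body are illustrative, not the project's actual code:

```python
import requests

# Default address of llama-cpp-python's OpenAI-compatible server;
# adjust host/port to match your own deployment.
API_URL = "http://localhost:8000/v1/chat/completions"

def get_llm_response(prompt: str) -> str:
    """Query the server endpoint instead of an in-process llama.cpp model."""
    resp = requests.post(
        API_URL,
        json={
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0.7,  # illustrative sampling setting
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(get_llm_response("Summarize the two most relevant search results."))
```

Running the model behind a server like this also makes it easy to share one loaded model across the search, scraping, and answering steps.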
Other AI Subreddit Recap
/r/machinelearning, /r/openai, /r/stablediffusion, /r/ArtificialInteligence, /r/LLMDevs, /r/Singularity
AI Model Advancements and Capabilities
- OpenAI's o1 model demonstrates significant improvements in reasoning and coding abilities: Multiple posts highlight o1's capabilities, including creating video games from scratch, generating complex animations, and performing large-scale code refactoring. The model shows particular strength in tasks requiring extended reasoning.
- Rapid progress in AI capabilities: Posts discuss how o1 has reportedly increased by 30 IQ points to 120 IQ, surpassing 90% of humans. Another post mentions OpenAI's roadmap suggesting models will soon reach PhD-level reasoning and have agent-like capabilities.
- Improvements in multimodal AI: A Google DeepMind paper demonstrates advancements in multimodal learning through joint example selection.
AI Research and Infrastructure
- Massive computational requirements for frontier AI models: Larry Ellison of Oracle discusses plans to build nuclear reactors to power large GPU clusters, estimating costs of $100 billion over 3 years to stay competitive in AI development.
- Advancements in AI inference speed: Microsoft's MInference technique enables inference of up to millions of tokens for long-context tasks while maintaining accuracy.
- Novel approaches to synthetic data creation: A paper on scaling synthetic data creation leverages diverse perspectives within large language models to generate data from 1 billion web-curated personas.
AI Model Releases and Comparisons
- Salesforce releases xLAM-1b: This 1 billion parameter model achieves 70% accuracy in function calling, surpassing GPT-3.5 despite its relatively small size.
- Updates to existing models: Rubra AI released an updated Phi-3 Mini model with function calling capabilities, competitive with Mistral-7b v3.
- Comparisons between models: A detailed comparison between o1-mini and Claude Sonnet 3.5 for coding tasks highlights strengths and weaknesses of each model.
Societal and Economic Impacts of AI
- Potential job market disruption: A report suggests AI will affect 60 million US and Mexican jobs within a year.
- Debates on AI's impact on various professions: Discussions around how AI advancements might affect software development and other knowledge work.
AI Applications and Tools
- AI-generated content creation: Examples include miniature people LoRA for image generation and affirmation cards for mental health support.
- AI-assisted coding and development: Multiple posts demonstrate AI's ability to generate complex applications and assist in large-scale refactoring tasks.
AI Discord Recap
A summary of Summaries of Summaries by O1-preview
Theme 1: OpenAI's O1 Models Ignite Debate in AI Community
- O1 Models Impress and Disappoint in Equal Measure: OpenAI's new O1 models (o1-preview and o1-mini) are causing a stir, with some users praising their reasoning capabilities while others find their responses mechanical and underwhelming. The models' mixed reception highlights ongoing challenges in advancing AI reasoning.
- Community Questions O1's Advantage Over Existing Models: Users are comparing O1 to models like GPT-4o, debating whether O1's chain-of-thought reasoning offers significant improvements or is just hype. Discussions focus on O1's performance in complex tasks and its real-world applicability.
- Speculations Arise Over O1's Development and Data Usage: Enthusiasts are reverse engineering O1 to understand its training and reliance on user interaction data. Concerns about privacy and the feasibility of replicating O1's capabilities in open-source models are fueling heated debates.
Theme 2: AI Coding Tools Transform Development Workflows
- Aider and O1 Outshine Competitors in Bug Fixing: Developers are celebrating Aider and OpenAI's O1 for their superior performance in bug fixing over models like Claude. These tools deliver detailed, step-by-step outputs that streamline troubleshooting in complex codebases.
- Cursor AI Tackles Large Codebase Edits with Ease: Cursor AI is addressing challenges with large-scale code edits that stymie models like O1. Their specialized coding assistant enhances productivity by handling big changes more efficiently.
- AI's Growing Role in Coding Sparks Job Market Concerns: Discussions are intensifying around AI potentially replacing junior developers, prompting conversations about the future of human roles in programming. The emphasis is on fostering AI-human collaboration to keep experienced developers relevant.
Theme 3: Fine-Tuning and Training Models Remain Complex
- Frustration Mounts Over Underperforming Models: Models like Gemma2, Mistral, and Phi 3.5 are underperforming during training, leading to user exasperation. Challenges include high VRAM usage and unsatisfactory outputs, highlighting the need for better training solutions.
- LLama 3.1 Emerges as a Bright Spot: Amidst widespread training issues, LLama 3.1 stands out for its robust performance. Users report better results compared to other models, though they still face configuration hurdles due to its complexity.
- INT8 Mixed-Precision Training Offers Significant Speedups: The introduction of INT8 mixed-precision training promises up to 70% speedup on NVIDIA 4090 GPUs. This advancement allows for faster training without sacrificing accuracy, particularly on consumer-grade hardware.
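A minimal sketch of enabling this, assuming the prototype API described alongside the torchao 0.5 release (`int8_mixed_precision_training` under `torchao.prototype.quantized_training`; exact import paths may differ):

```python
import torch
from torchao.quantization import quantize_
from torchao.prototype.quantized_training import int8_mixed_precision_training

# Toy model standing in for a transformer; INT8 mixed precision targets the
# large nn.Linear matmuls while the rest of training stays in BF16.
model = torch.nn.Sequential(
    torch.nn.Linear(4096, 4096),
    torch.nn.GELU(),
    torch.nn.Linear(4096, 4096),
).cuda().bfloat16()

quantize_(model, int8_mixed_precision_training())

# Train as usual; the reported ~70% speedup was measured on an RTX 4090
# with convergence matching the BF16 baseline.
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
x = torch.randn(8, 4096, device="cuda", dtype=torch.bfloat16)
loss = model(x).square().mean()
loss.backward()
opt.step()
```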
Theme 4: Creative Applications of AI Gain Traction
- GameGen-O Opens New Frontiers in Game Development: Tencent's GameGen-O introduces a diffusion transformer model that generates open-world video games. This innovation excites developers eager to harness AI for accelerated game creation.
- Artists Leverage AI for Character Design and Animation: Creatives are using Stable Diffusion, ControlNet, and LoRA training to produce stunning character designs and animations. These tools are revolutionizing artistic workflows and expanding possibilities in digital art.
- Diffusion Illusions Captivate with Mind-Bending Art: The Diffusion Illusions project showcases interactive optical illusions generated through diffusion models. Accepted at SIGGRAPH 2024, it pushes the boundaries of AI-generated art and visual perception.
Theme 5: Security and Ethical Concerns Surrounding AI Technologies
- StealC Malware Exploits Chrome to Phish Passwords: The new StealC malware traps Chrome users in full-screen mode, coercing them into revealing their Google passwords via fake login screens. This sophisticated attack raises alarms about browser security vulnerabilities.
- Debates Heat Up Over AI Model Censorship: Users are clashing over the heavy censorship in models like Phi 3.5, which hampers technical tasks and coding assistance. The community is calling for a balance between necessary moderation and the practical utility of AI models.
- 'Humanity's Last Exam' Initiative Sparks Controversy: Dan Hendrycks announced a $500,000 prize pool for challenging AI with tough questions in Humanity's Last Exam. While some applaud the effort to advance AI, others express concern about its implications for AI regulation and policy influence.
PART 1: High level Discord summaries
aider (Paul Gauthier) Discord
- O1 Shines in Bug Fixing over Claude: O1 excelled in bug fixing, outperforming Claude models like Sonnet in speed and accuracy, especially in coding contexts.
- Users highlighted O1's ability to deliver detailed outputs, aiding in complex code troubleshooting.
- Sonnet 3.5 Faces Compatibility Issues: Sonnet 3.5 struggles with larger contexts and misinterprets instructions, frustrating users in complex coding tasks.
- In contrast, O1's outputs have been described as straightforward and effective, minimizing confusion.
- Aider Scripting Automates Workflows: Aider users can streamline tasks using the command-line `--message` argument, sending commands directly to automate processes.
- This method allows for easier batch processing through simple shell scripts across multiple files (see the sketch at the end of this section).
- GameGen-O Revolutionizes Game Development: The introduction of GameGen-O offers new capabilities for open-world video game creation based on diffusion-transformer models.
- This tool has ignited excitement in the community as it promises to accelerate AI-driven game development.
- The Big Prompt Library Launches: The Big Prompt Library repository provides a collection of prompts and LLM instructions, aiding users in effective prompt crafting.
- This resource is essential for developers utilizing systems like ChatGPT and Claude, enhancing user experiences.
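The batch-processing idea from the scripting item above, sketched in Python rather than a shell loop; aider's `--message` and `--yes` flags are real CLI options, while the instruction and file list are placeholders:

```python
import subprocess

# Placeholder instruction and file list; each run sends one message to aider
# and exits, so a loop gives simple batch processing across files.
INSTRUCTION = "add type hints to every public function"
FILES = ["app.py", "utils.py", "models.py"]

for path in FILES:
    subprocess.run(
        ["aider", "--yes", "--message", INSTRUCTION, path],
        check=True,  # stop the batch if any aider run fails
    )
```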
Unsloth AI (Daniel Han) Discord
- Underperformance of Gemma2 and Mistral Models: Users flagged Gemma2 and Mistral for underperformance in training, especially in comparison to LLama 3.1, amidst frustrations over VRAM limitations.
- Concerns were raised on the necessary configurations for successful training, complicating the workflow.
- LLama 3.1 Shines in Performance: Excitement brewed as users found LLama 3.1 to outperform other models tried, while Gemma 2 9B also showed potential with proper settings.
- Members noted the need to adjust settings due to the larger size of Gemma 2, inviting a discussion on optimizations.
- Job Hunting is the New Black: With job hunting in full swing, members noted investments like LinkedIn Premium as they seek opportunities amid an uptick in the machine learning market.
- One PhD holder is navigating the transition from academia to enterprise due to a contracting postdoc role in machine learning.
- Debates on Recruitment Processes: Conversations revolved around advocating for a fair recruitment process, challenging traditional methods that favor memorization over skill evaluation.
- Emphasis was placed on skills and potential growth over mere connections in hiring, aiming for a revamped model.
- DPO Skepticism Leads to Alternative Suggestions: A member expressed skepticism over Direct Preference Optimization (DPO), hinting at exploring alternatives like KTO for their work.
- Ongoing discussions surrounding DPO loss types and the desire for shared experiences surfaced among attendees.
OpenRouter (Alex Atallah) Discord
- Users debate context limits on OpenRouter models: Concerns arose over the displayed context lengths for various models on OpenRouter, particularly with discrepancies in supported sizes in extended versions compared to what's stated.
- This sparked a call for increased transparency and communication updates on model capabilities for clearer user understanding.
- Performance woes lead to model review: Users reported bizarre outputs and responses getting cut off with models like Venus Chub AI and WizardLM-2, raising alarms over consistency across different providers.
- The ongoing discussions aimed to gather user experiences to pinpoint whether these issues were widespread or isolated incidents.
- Effective prompt engineering techniques share spotlight: Prominent discussions surfaced regarding XML tag usage for improved model responses and educational resources for optimizing prompt engineering.
- Shared tutorials focused on prompt manipulation methods, providing insight into increasing user engagement in AI interactions (an illustrative XML-tagged prompt appears at the end of this section).
- Integrations and API configuration confusion alert: Reports emerged about a hyperbolic key being linked to an unintended chargeable provider, stirring discussions on naming conventions and integration clarity.
- Users expressed the need for more robust error handling in JSON configurations, notably requesting to enforce integration key presence for improved setup reliability.
- Need for failure feedback during provider configuration: Discussion highlighted user frustrations over the inability to view failure details when configuring providers, complicating troubleshooting efforts.
- Users sought clearer mechanisms from OpenRouter to effectively identify and resolve integration issues, enhancing overall setup success.
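An illustrative XML-tagged prompt of the kind discussed above; the tag names and task are arbitrary examples, not a prescribed schema:

```python
document_text = "..."  # stand-in for the content being processed

# XML tags separate instructions from data, which several models follow
# more reliably than an undelimited wall of text.
prompt = f"""<instructions>
Summarize the document below in three bullet points.
</instructions>
<document>
{document_text}
</document>"""
```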
Perplexity AI Discord
- Perplexity AI's Performance Challenges: Users reported Perplexity AI experiencing significant lag and service outages, raising concerns about high traffic on the platform which caused delayed responses.
- This ongoing issue prompted queries about the reliability of their service during peak usage times.
- API Errors Storming In: Members noted that API calls are returning errors like 500 and 524, leading to suspicions of widespread problems affecting operations.
- Concerns escalated when users discussed inconsistencies in citation outputs and timeout issues, calling for improved handling of API interactions.
- Comparative Analysis of AI Models: Users compared various AI models, observing that the original OpenAI model outperformed alternatives like You.com and Monica in notable scenarios.
- The upcoming Opus 3.5 model was highlighted as a potential game-changer, expected to surpass existing performance benchmarks.
- Emergence of Korean Emotion Video Dataset: Interest in the Korean Emotion Video Dataset peaked as it aims to enhance AI's emotional recognition capabilities, opening avenues for practical application.
- Discussions emphasized the excitement around its implications for both research and the emotional intelligence of AI systems.
- Microstrategy's Bold Bet on Cryptocurrency: Conversations centered on Microstrategy's billion-dollar investment, analyzing its potential impact on cryptocurrency markets.
- Members debated the strategic maneuvering from the company, evaluating risks associated with market stability.
HuggingFace Discord
- Challenges in Fine-tuning LLMs: Users are facing high GPU memory usage of 29 GB while fine-tuning models like Llama 8B using FSDP and BF16 AMP, prompting some to revert to raw PyTorch calls for debugging (a minimal FSDP BF16 setup sketch appears at the end of this section).
- This issue brings attention to resource management in LLM training and highlights the ongoing pursuit of optimizing memory consumption.
- Revamped Inference API Documentation: The Hugging Face Inference API documentation has been improved based on user feedback, featuring clearer rate limits and better code examples. The updates aim to simplify AI deployment, making it more user-friendly.
- This move shows Hugging Face's commitment to enhancing user experience as indicated in this announcement.
- New Medical LLMs and their Impact: The Chai-1 Foundation model excels in predicting molecular structures, contributing to advancements in medical AI, as noted in a recent update.
- Innovative models like BrainWave and DS-ViT are advancing evaluation techniques in diagnostics, pushing for greater transparency in model training datasets.
- Efficient Tokenizer Training for Multilingual Models: Discussion around retraining tokenizers highlights the flexibility to incorporate multiple languages while maintaining original data performance, though concerns about increased ambiguity arose.
- Suggestions for continued pretraining emerged as a method to mitigate these challenges, indicating the community's engagement with multilingual capabilities in NLP.
- Nitro Giveaway Sparks Interest: A member announced a Nitro giveaway, inviting participants to engage with the server, generating light-hearted interest among the community.
- Despite the humor, this announcement showcases the community's efforts in fostering interaction and connectivity.
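For reference on the FSDP + BF16 setup from the fine-tuning item above, a minimal sketch using PyTorch's FSDP mixed-precision policy; the toy model and launch details stand in for a real Llama fine-tune:

```python
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP, MixedPrecision

def main():
    # launch with: torchrun --nproc_per_node=<ngpus> this_script.py
    dist.init_process_group("nccl")
    torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

    model = torch.nn.Linear(4096, 4096).cuda()  # toy stand-in for an 8B model

    # BF16 for parameters, gradient reduction, and buffers during compute;
    # FSDP still holds FP32 sharded master weights for the optimizer step,
    # which is one reason memory stays high despite BF16 AMP.
    bf16_policy = MixedPrecision(
        param_dtype=torch.bfloat16,
        reduce_dtype=torch.bfloat16,
        buffer_dtype=torch.bfloat16,
    )
    model = FSDP(model, mixed_precision=bf16_policy)

if __name__ == "__main__":
    main()
```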
Nous Research AI Discord
- StealC Malware Targets Chrome Users: A newly discovered malware called StealC restricts Chrome users by locking their browser and forcing them to reveal their Google passwords through a deceptive login screen, raising major security concerns.
- The malware utilizes a full-screen kiosk mode to trap users into submitting sensitive information, drawing logical parallels to traditional phishing methods.
- Tencent's GameGen-O Revolutionizes Video Games: Tencent introduced GameGen-O, a diffusion transformer model designed for generating open-world video games, leveraging extensive data from over a hundred next-generation games.
- The model trains on the OGameData, enabling more interactive gameplay and raising the bar for video game development through advanced simulation techniques.
- Innovative Approaches to Drag-based Image Editing: The InstantDrag pipeline enhances drag-based image editing by eliminating the need for masks or text prompts, utilizing a two-network system to achieve real-time, photo-realistic edits.
- By leveraging motion dynamics from real-world video datasets, this method significantly speeds up the editing process, showcasing potential for creative applications.
- Exploring Precision Annealing in AI Training: A member raised inquiries about precision annealing, suggesting pre-training at FP8 and switching to BF16 or FP32 to maximize throughput during the final training phase.
- They highlighted that this approach could optimize resource utilization in training regimes as it mitigates memory constraints.
- Evaluation Metrics & Performance Insights: In evaluations, QLoRA has shown improved performance over traditional LoRA methods, suggesting advantages in tuning efficiency.
- Members engaged in a comparative analysis of performance metrics across QLoRA, full fine-tuning, and original models, debating the percentage differences observed.
OpenAI Discord
- O1 Writes Extensive Essays: A member showcased O1's capability by generating a detailed essay covering major Minecraft updates from indev to 1.21, exciting the community.
- This highlights O1's advanced writing proficiency and its potential for creative applications.
- Fine-tuning Models presents Challenges: Users expressed concerns about fine-tuning results, reporting lack of improvement and wiggly training loss, prompting advice on model selection.
- The conversation underscored that fine-tuning may not always yield effective outcomes, prompting calls for strategic adjustments.
- Custom GPT Features Spark Questions: Inquiries about Custom GPTs' functionality revealed variability depending on the model used, with a request for clarity in model selection.
- Insights shared included potential links for references, emphasizing the need for clearer guidance on initiating conversations.
- Issues with ChatGPT Response Consistency: Users tackled the challenge of ChatGPT's inconsistency in following predetermined sequences, especially for battles in RPGs.
- Suggestions included a Discord bot format to gather responses before feeding them to ChatGPT for analysis, aiming to streamline interactions.
- Exploring Game Mechanics with ChatGPT: A scenario involving a 60% losing chance game was dissected, pointing out ChatGPT's tendency towards misleading interpretations.
- This discussion revealed the complexities in wealth accumulation strategy and the model's performance variability when addressing gaming contexts.
CUDA MODE Discord
- CUDA-MODE Hackathon Gains Remote Interest: A proposal for remote participation in the upcoming CUDA-MODE hackathon sparked mixed discussions regarding its feasibility and organization.
- While some members support a remote track, others noted challenges with large in-person events.
- Triton Kernel Launch Overhead Issues: Concerns were raised about kernel launch overhead in Triton, with reports that it consumes 10-20% of execution time for mid-sized matrices.
- A GitHub issue detailed that kernel execution takes 80 µs but launching it takes 220 µs.
- Significant Boosts from INT8 Mixed-Precision Training: The latest torchao 0.5 release showcases INT8 mixed-precision training yielding up to 70% speedup on NVIDIA 4090 GPUs without noticeable accuracy loss.
- This progress highlights enhanced training efficiency particularly beneficial for consumer GPUs with maintained convergence.
- Liger-Kernel v0.3.0 Is Now Live!: Liger-Kernel v0.3.0 launched with major advancements and community appreciation for their support.
- The team invites the community to experiment with the new features and provide feedback.
- BitNet Training Faces Efficiency Challenges: Recent discussions indicate ongoing struggles with BitNet model training, with no significant progress reported in recent trials.
- Members raised concerns about GPU inefficiencies linked to bitwise operations, emphasizing the need for custom hardware approaches.
LM Studio Discord
- GPU Acceleration Troubles Persist: Users reported issues with GPU acceleration not being utilized in LM Studio, prompting checks under Developer > LM Runtimes. A successful update led to one user's GPU usage climbing significantly.
- Troubleshooting practices revealed potential configuration misunderstandings, leading to better setups for increased efficiency.
- Model Compatibility Woes: LM Studio predominantly supports GGUF models, but not all listed models function as expected, particularly in multimodal tasks. This limitation raised concerns about model performance and feature accessibility.
- Participants shared insights into features that remain unusable, indicating a gap between expectations and reality in utilizing LM Studio.
- Strix Halo APU Capability Hype: The Strix Halo APU's potential for running large AI models was debated, with claims of allocating up to 20GB to its iGPU. Support for ROCm was noted, although concerns about offloading tasks affecting performance arose.
- Competing views on processing efficiency surfaced, stressing the importance of balancing CPU and GPU tasks.
- RTX 4090 Speeds Up AI Queries: With three RTX 4090 cards, one member reported achieving 110 tokens per second during queries. This sparked conversations about power supply setups to harness such performance effectively.
- Discussions centered around optimizing configurations for improved power efficiency and GPU performance.
- Optimizing RAM for LLMs: The need for sufficient system RAM to run large models led to anecdotes suggesting 192GB DDR5 could support models like Llama 3.1. However, it was argued that 64GB might be sufficient if models are well-optimized.
- Participants exchanged optimization strategies, balancing between RAM capacity and model requirements.
Latent Space Discord
- OpenAI's o1 Models Introduced: OpenAI has released o1 models designed for improved reasoning on complex tasks, attracting attention for their potential in scientific and coding applications.
- The new models reportedly outperform older versions but still struggle with large edits, a challenge Cursor AI is addressing with their specialized coding assistant.
- Funding for AI Startups Soars: 11x AI raised $24 million in Series A funding, highlighting their rapid growth with a 15x increase in ARR and the launch of new digital workers.
- Similarly, Supermaven AI secured $12 million to develop an AI-focused text editor that integrates seamlessly with their models.
- HTEC's report on AI copilots: The nearshore consultancy HTEC published a report on their experiences with 26 AI coding tools, although access requires signing up.
- Members discussed whether the brief usage and limitations noted in the report truly reflected the tools' capabilities.
- Voice Mode API Discussion: The episode delves into the new Voice Mode API, which allows for more interactive and dynamic conversation capabilities.
- It emphasizes how this feature can transform user interactions with AI on various platforms.
- ChatGPT Scaling Strategies: Strategies for scaling ChatGPT were discussed, particularly focusing on increased latency and prompt/schema caching techniques for optimization.
- The team addressed concerns over model reproducibility and evolving tiering and rate limiting strategies for the API.
Interconnects (Nathan Lambert) Discord
- OpenAI's o1 models raise eyebrows: OpenAI's recent release of o1-preview and o1-mini models has sparked discussions regarding their intriguing reasoning patterns and the possible influence of user interaction data on model development.
- A user highlighted a surprising finding that mini does not reason longer than preview, yet generates lengthier responses, challenging expectations.
- Humanity's Last Exam Launch Announced: Dan Hendrycks introduced Humanity's Last Exam, inviting submissions for tough AI questions with a $500,000 prize pool due by November 1, 2024, igniting mixed responses concerning its implications for AI regulation.
- Concerns emerged over Hendrycks' lobbying efforts and connections to politics, potentially influencing future AI policies based on performance metrics.
- Reverse Curriculum Learning discussed among RL enthusiasts: Emerging papers on Reverse Curriculum Learning in LLMs have prompted discussions about its limited use in the RL community, with users noting it has not gained widespread acceptance.
- Members identified Reverse Curriculum Learning as clunky and suitable primarily for niche applications, contributing to its rarity in broader contexts.
- Excitement Over LLM Model Developments: Anticipation is growing for future LLM advancements scheduled for 2025, with discussions reflecting increased enthusiasm over potential breakthroughs in model capabilities.
- Members recognized a significant shift in sentiment, noting that the landscape has shifted, marking potential milestones akin to past advancements.
- Poe Subscription Service Evaluation: Users debated their experiences with the Poe subscription service, expressing mixed feelings about its usability despite $20 granting access to all available LLMs.
- Concerns over interface design were raised, indicating a preference for more appealing aesthetics compared to competitors like Claude and ChatGPT.
Cohere Discord
- Fei-Fei Li's Reasoning Method Explored: Members expressed curiosity about Fei-Fei Li’s techniques for solving reasoning problems, aiming to gather insights on her approaches in the AI context.
- There’s a notable desire for deeper understanding among engineers about methodologies like hers that could inform ongoing AI advancements.
- Command-R-Plus-08-2024 Output Issues: A user reported that the Command-R-Plus-08-2024 model is producing more repetitive outputs compared to previous versions, particularly in creative tasks.
- This led to discussions on how extended prompts could further impact performance, urging exploration of alternative models.
- Cohere Developer Office Hours Announced: Cohere hosts Developer Office Hours today at 1 PM ET, discussing updates in the Command model family including new features in RAG and Safety Modes.
- Attendees can expect insights into the significant improvements in model efficiency and practical applications.
- Implementing Safety Modes for Enhanced Control: Cohere introduced Safety Modes aimed at giving enterprise customers better control over model usage and interactions.
- This update reinforces governance while ensuring that model effectiveness remains intact.
- Job Posting Concerns and Community Focus: A member called for removing non-Cohere related job postings from discussions, emphasizing the need for relevance in community topics.
- This reflects a commitment to keeping discussions tightly aligned with the interests and purposes of the Cohere community.
Stability.ai (Stable Diffusion) Discord
- Users Struggle with FLUX Models: Members reported issues with running FLUX models, specifically regarding formats like `.sft` and `.safetensor`, as well as compatibility with tools like Forge.
- It was recommended to switch to ComfyUI for better support, with users sharing experiences around specific model sizes.
- Creating Characters with Style: A user sought advice on generating a character similar to Cheetara using Stable Diffusion checkpoints and prompt phrasing techniques.
- Discussion included successful checkpoints for character art tailored for later 3D modeling, referencing a Cheetara GIF for inspiration.
- Mastering Image Editing: Recommendations emerged for techniques in removing text from images and utilizing inpainting methods, with tools like GIMP being highlighted.
- Users discussed various AI tools for enhancing images while preserving quality, including tutorials by Piximperfect.
- Animating Characters with ControlNet: Insights flowed about leveraging ControlNet and LoRA training for creating vector-style character animations, emphasizing the use of proper training examples.
- Contributors shared tips on employing ControlNet technologies for improving character posing and structure in artistic renderings.
- Tech Support Woes: A user encountered errors during Stable Diffusion installation and was advised to share their error logs in the support channel for troubleshooting.
- Helpful links to installation guides were shared, stressing the importance of detailed logs.
Modular (Mojo 🔥) Discord
- User Verification Process Goes Live: The Discord server has implemented a user verification process requiring members to submit their email addresses through a bot in the #verify channel, allowing continued read access for unverified users.
- Members opting out of verification will face limited messaging capabilities, emphasizing the importance of this new step.
- Onboarding Questions Introduced to Streamline Experience: After email verification, users will answer two multiple-choice onboarding questions aimed at enhancing their server experience.
- This initiative reflects an effort to improve the onboarding process for both new and current members.
- Mojo Struggles with Python Interoperability: Discussions revealed that Mojo currently cannot import Python modules or call its functions, hindering effective interoperability, which is crucial for seamless integration.
- Participants are keen on methods for achieving zero-copy data exchanges between Mojo and Python, particularly in performance-sensitive contexts.
- Count Leading Zeros Faces Compile-Time Limitations: Users reported that the `clz` function struggles to operate at compile time due to its dependency on LLVM intrinsics, which are not executable at that stage.
- An alternative implementation for counting leading zeros was proposed (see the sketch at the end of this section), highlighting the need for better compile-time capabilities within the standard library.
- New Channel for Server Changes Discussion: A dedicated channel for discussing upcoming server changes has been established, allowing members to share suggestions and pose questions.
- This move signifies a commitment to enhancing the user experience through community input and dialogue.
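The intrinsic-free approach to counting leading zeros mentioned above, sketched in Python rather than Mojo purely for illustration:

```python
def clz(x: int, width: int = 64) -> int:
    """Count leading zeros without relying on an LLVM intrinsic."""
    if x == 0:
        return width
    return width - x.bit_length()

# Quick checks: the lowest bit set leaves width-1 leading zeros,
# the highest bit set leaves none.
assert clz(1) == 63
assert clz(1 << 63) == 0
assert clz(0) == 64
```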
Eleuther Discord
- Understanding Mixed Precision Training Challenges: While mixed precision training can enhance performance by storing models in both fp32 and fp16, it also doubles the computational load during the forward pass—a noteworthy trade-off.
- Members emphasized the importance of balancing speed and resource utilization amidst budget constraints (a standard PyTorch AMP sketch appears at the end of this section).
- CoreWeave's Significant Valuation: CoreWeave is in negotiations to sell shares that value the company at $23 billion, reflecting its prominent status in the AI-driven cloud computing sector.
- This move has generated substantial interest from notable financial media, highlighting the competitive landscape in the industry.
- AI's Societal Implications Explored: Discussions reflected on how OpenAI has effectively enabled greater access to information, likening it to placing a 'PhD in everyone's pocket' with minimal public reaction to these changes.
- Members underscored a need for deeper conversations about the transformative effects and ongoing integration of AI into daily life.
- RWKV team pushes RNN boundaries: The RWKV team is making waves in RNN architecture advancements, with contributions recognized particularly from Smerky and others.
- This innovative push has garnered attention and praise for its potential impacts within the community.
- Concerns about Overfitting Models on Small Datasets: A member expressed difficulty overfitting a model using only 9 images, sparking discussions about possible learning issues when working with larger datasets.
- The consensus was that failure to overfit such a small sample could indicate even larger struggles ahead.
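For reference, the standard PyTorch automatic mixed precision pattern behind the trade-off discussed above: parameters stay in FP32 while autocast runs eligible ops in FP16 (toy model for illustration):

```python
import torch

model = torch.nn.Linear(512, 512).cuda()          # FP32 master parameters
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()              # rescales FP16 gradients

x = torch.randn(8, 512, device="cuda")
with torch.autocast(device_type="cuda", dtype=torch.float16):
    loss = model(x).square().mean()               # eligible ops run in FP16

scaler.scale(loss).backward()                     # scale to avoid underflow
scaler.step(opt)
scaler.update()
```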
LlamaIndex Discord
- LlamaParse excels in parsing Excel data: In a recent video, advanced Excel parsing abilities in LlamaParse are showcased, including handling multiple sheets and complex tables. LlamaParse utilizes recursive retrieval to summarize complex tables automatically, enhancing efficiency.
- This feature provides a significant improvement in usability, especially for users dealing with intricate Excel files.
- TypeScript workflows introduced in LlamaIndex: LlamaIndex has now integrated workflows into TypeScript, as noted in this announcement. This new feature aims to streamline development processes for TypeScript users.
- The integration aids in making the framework more accessible and efficient for developers working with TypeScript.
- Importance of Unit Testing in LLM applications: Unit testing is emphasized as crucial for mitigating stochasticity in LLM applications, highlighted in a blog post detailing building and testing a RAG app with CircleCI. Proper unit testing is vital to prevent unexpected behaviors in AI applications.
- The discussion underlines a commitment to quality and reliability in AI-driven projects (a minimal pytest sketch follows at the end of this section).
- Vectara-Agentic library simplifies RAG implementation: A member shared vectara-agentic, a library that simplifies building agentic RAG powered by LlamaIndex and Vectara. It provides tools to construct agents capable of planning and tool use, compatible with various model providers.
- This flexibility allows developers to implement RAG solutions more efficiently.
- Local LLM offers cost optimization: Members discussed that running a Local LLM can significantly reduce costs compared to OpenAI services. The total cost of ownership (TCOS) was noted as an important factor when choosing between OpenAI and local models.
- This consideration emphasizes the growing trend of optimizing AI solutions for better cost efficiency.
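A hypothetical pytest sketch in the spirit of the unit-testing item above, asserting stable properties of a stochastic RAG answer rather than exact strings; `build_rag_app` and the response shape are placeholders loosely modeled on LlamaIndex query responses:

```python
import pytest

@pytest.fixture
def rag_app():
    from my_app import build_rag_app  # hypothetical factory in your codebase
    return build_rag_app(temperature=0.0)  # damp randomness where possible

def test_answer_is_grounded(rag_app):
    result = rag_app.query("When was the project started?")
    # Assert invariants, not exact wording: retrieval returned evidence
    # and the model produced a non-empty answer.
    assert result.source_nodes, "expected at least one supporting passage"
    assert result.response.strip()
```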
OpenInterpreter Discord
- Understanding GPU Diminishing Returns: The point of diminishing returns for GPUs shows up after 2-3 GPUs for gaming and 4-6 GPUs for rendering, largely due to PCIe bandwidth limitations.
- Documentation issues were cited as a concern that's affecting user experience with GPU setups.
- Non-Streaming Responses in Open Interpreter: Members discussed how to stop streaming responses in command-line mode; options included using the `--plain` flag or the `claude-3.5` model.
- This feedback aims to improve usability and comfort while interacting with the command line.
- Confusion Over ChatGPT O1 Model Release: Concerns arose regarding ChatGPT's O1 model, with speculation that its release could undermine existing alternatives, although this was challenged by another member.
- While O1 shines in reasoning, critiques noted its inability to execute code tasks as effectively as earlier models like GPT-4.
- Livekit Setup Errors Alert: Around 90% of users reported issues with Livekit setup, attributing their struggles to inadequate documentation.
- A proposal was made to create a comprehensive setup guide to enhance user support.
- Exciting MoA LLM Library for Orchestration: The MoA LLM library introduces a way to orchestrate LLMs in a neural network-inspired architecture, aimed at improved model collaboration.
- This open-source initiative provides a framework for integrating multiple LLMs efficiently.
OpenAccess AI Collective (axolotl) Discord
- Debate on O1 Model's Effectiveness: There's a mixed reception around the O1 model; while some celebrate its Chain of Thought interface, others find its mechanical responses disappointing.
- One member mentioned that despite its solid UI, the overall performance still leaves much to be desired.
- OpenAI's O1 Development Timeline Clarified: A member revealed that OpenAI has been developing O1 (Strawberry/Q*) for a long time, contrary to claims of it being a rapid result.
- They pointed out that O1 employs an agentic chain of thought, showing resilience against common pitfalls like hallucination.
- Tokenization Errors from Masking Issues: A member reported a tokenization error emerging from new per-turn masking strategies that obscure the last end-of-turn token.
- They linked the issue to a comprehensive bug report they filed on GitHub.
- Phi 3.5's Frustrations in Classification: Members expressed their struggles with developing a Phi 3.5 sentence classifier that fails to produce the correct classification output.
- One opted to share their dumb sentence classifier and confessed to potentially giving up for now.
- vLLM and Adapter Compatibility Issues: A discussion emerged surrounding vLLM's failure to properly interpret the `qkv_proj` layer, impacting models trained with Axolotl's adapters.
- Interestingly, while a LoRA model showed no learning during merging, it performed well as a standalone adapter atop the base model.
LangChain AI Discord
- GenAI/RAG/CV Consultations Available: A member announced offering consultation services in GenAI, RAG, and CV to assist startups in prototype development.
- Interested members can reach out directly for collaboration opportunities.
- OpenAI Sparks Societal Reflections: Concerns were raised regarding OpenAI's influence on access to knowledge while society appears unchanged.
- Discussion included thoughts on how accelerated automation might lead us into a post-scarcity era.
- LangGraph Cloud Pricing Uncertainty: A member inquired about potential costs for LangGraph Cloud post-beta phase, considering whether to develop a custom FastAPI wrapper.
- Concerns about feasible long-term pricing models were a key point of discussion.
- Streaming LLM Output Parsing Problems: Parsing issues with incomplete JSON strings during streaming LLM output were discussed, particularly with Pydantic parsers.
- Switching from `parse_result` to `parse` methods yielded better results despite initial skepticism.
- Chat History Management Challenges: Users expressed difficulties in managing chat history with LangChain, especially in tracking app-specific messages.
- They highlighted issues in maintaining transactional integrity when integrating this data.
DSPy Discord
- Optimize RAG Query Structure: A member suggested optimizing RAG in a singular module by packing a 'context' field with data from memory and prompts to enhance results. This approach received confirmation with reference to this simple RAG example.
- Another member acknowledged the practical nature of this strategy, noting advantages in data handling (a sketch of such a module follows at the end of this section).
- Visual LLM Use Case in DSPy: A query arose about the potential of using visual LLM models in DSPy for image descriptions, which another member speculated could be available by next week. A promising pull request for GPT-4 Vision API was cited, hinting at ongoing integrations.
- The anticipated feature triggered enthusiastic anticipation of the upcoming capabilities.
- Seeking GitHub Contributions: Discussion sparked when a member expressed interest in contributing to the DSPy project and inquired about available bounties. Insights revealed that additional integration changes are on the horizon, with an expected completion timeline of 7-10 days.
- The prospect of contributions generated excitement within the community, indicating a collective eagerness for collaborative development.
tinygrad (George Hotz) Discord
- Runtime Type Checking Added to Tinygrad: George Hotz announced TYPED=1 support for runtime type checking in Tinygrad, revealing type errors during testing with python3 test/test_ops.py. A GitHub pull request proposes a fix for most type errors, leaving one unresolved.
- The community feedback highlights the importance of robust type checking, reinforcing the necessity for clean coding practices.
- Tinygrad Fails Tests on AMD with 0.9.2: A user reported issues upgrading Tinygrad from version 0.9.0 to 0.9.2, encountering an AttributeError related to struct_kfd_ioctl_criu_args. The suspected root cause is a potential mismatch between the kernel version and the library's requirements.
- Diagnostics indicate a possible gap in Tinygrad's compatibility documentation and troubleshooting guidance for AMD users.
- Tinygrad Libraries Discussion Sparks: Members discussed the development of libraries within the tinygrad ecosystem, specifically mentioning timm and torchvision as candidates. This conversation prompted inquiries about the practical necessity and current implementations of such libraries.
- Discussion escalated when a user questioned the actual utility of these libraries with tinygrad, signaling a need for clarity in integration.
- Investigating VRAM Allocation Spikes: A member sought advice on diagnosing spikes in VRAM allocation during Tinygrad operations, highlighting a knowledge gap in memory monitoring tools within the framework. This inquiry underscores the need for more robust diagnostics.
- Understanding VRAM behavior is crucial for optimizing performance and preventing crashes during intensive processing tasks.
- Tensor Modification Error Reported: A user encountered an error when modifying a Tensor in Tinygrad, especially during element incrementation. They referenced an open GitHub issue that aligns with their problem, focusing on the contiguous property.
- The findings from this user reinforce the importance of comprehensive testing and documentation regarding tensor operations.
Torchtune Discord
- Mastering Checkpoints Management: To implement checkpoints at specific token counts, utilize the num_tokens field while filtering padding tokens as detailed here. Adjustments in the saving logic are crucial for accurate tracking and resuming from saved states.
- Members emphasized the necessity of an all-gather to account for totals across ranks during training, as sketched below.
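A rough sketch of that token-count bookkeeping under distributed training; the padding id and checkpoint threshold are assumptions, and the cross-rank total is expressed as an all_reduce sum for brevity:

```python
import torch
import torch.distributed as dist

PAD_ID = 0  # assumption: the tokenizer's padding token id

def global_token_count(batch_tokens: torch.Tensor) -> int:
    """Count non-padding tokens on this rank, then sum across all ranks."""
    local = (batch_tokens != PAD_ID).sum()
    dist.all_reduce(local, op=dist.ReduceOp.SUM)  # every rank sees the total
    return int(local.item())

# Inside the training loop (sketch):
# tokens_seen += global_token_count(batch["tokens"])
# if tokens_seen >= next_checkpoint_at:
#     save_checkpoint(...); next_checkpoint_at += checkpoint_every
```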
- Cosine Learning Rate Decay Introduced: The integration of torchtune.modules.get_cosine_schedule_with_warmup for cosine decay in learning rates is discussed among members, currently applied in LoRA recipes. It's suggested to bypass deriving steps from epoch number for mid-epoch resumes for smoother transitions.
- Members are advised to follow these implementations closely for their inclusion in the full finetune recipe; a usage sketch follows.
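Assuming the transformers-style signature that this utility mirrors, the scheduler hooks in roughly like this; the model and step counts are placeholders:

```python
import torch
from torchtune.modules import get_cosine_schedule_with_warmup  # path as named above

model = torch.nn.Linear(16, 16)  # stand-in for the real model
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

# Linear warmup for the first steps, then cosine decay to zero.
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=100,
    num_training_steps=10_000,
)

# Call scheduler.step() once per optimizer step, not per epoch, so that
# mid-epoch resumes only need the saved step count.
```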
- Debate on CUDA vs CPU Operations: A query was raised about whether token operations could be conducted on CPUs, with confirmation that the num_tokens values are not CUDA tensors, along with advice to move them to CUDA instead. The preference for CUDA processing persists while questions about CPU efficiency remain.
- Discussions reveal uncertainty but show a clear inclination towards optimal performance using CUDA for these operations.
- Online Packing Support on the Horizon: The team is set to implement online packing as soon as they add support for iterable datasets. This move promises to enhance efficiency for bulk data processing.
- Members expressed excitement about the improved capabilities this will bring to future projects.
- CI GPU Test Failures Cause Concern: Ongoing issues with CI related to GPU tests, especially test_eleuther_eval.py, stem from import errors in transformers.pipelines, with 504 tests passing but significant errors halting completion. This has raised alarms regarding overall system stability.
- Members are actively discussing potential fixes and investigating anomalies to ensure smoother CI operations.
LAION Discord
- Generative AI crafts art instantly: A member showcased an artwork crafted with NotebookLM, fully generated in just 2 minutes. They shared a thrilling YouTube video capturing this rapid creation.
- What a time to be alive was their enthusiastic remark regarding the capabilities of generative AI.
- Steve Mould’s Illusion Exploration: A member shared "This new type of illusion is really hard to make" on YouTube, which dives into AI-generated illusions. The video includes insights about the Jane Street internship; watch it here.
- They noted that generative AI creates images that shift under varying lighting conditions.
- Diffusion Illusions Take Center Stage: A member introduced the Diffusion Illusions website, featuring interactive optical illusions produced via diffusion models. The site is linked to their project accepted at SIGGRAPH 2024, including a YouTube talk.
- Key contributors include Ryan Burgert and Xiang Li, showcasing compelling applications of diffusion models.
- Quest for Text in Images: A member sought advice on efficiently embedding text within images to create a comprehensive dataset, with aspirations of scaling to millions of images.
- This discussion highlights the demand for automating text-embedded image dataset creation for AI applications.
MLOps @Chipro Discord
- Pretrained VLMs Require Serious Compute: A member raised concerns about lacking compute resources for using pretrained vision-language models (VLMs), which by nature demand substantial computing power.
- Discussions emphasized that the efficacy of these models heavily relies on having appropriate hardware to handle their intensive processing requirements.
- Anomaly Detection Needs Clarification: One member questioned if anomaly detection should focus on logs or actual time-series data, prompting a dive into data types.
- Several suggested methodologies for time-series analysis were shared, including transformer models, Kalman Filters, and isolation forests, with recommendations to use z-scores for error evaluation.
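As a concrete illustration of the z-score suggestion, one might flag points whose prediction error sits far from the mean; the data and threshold here are invented:

```python
import numpy as np

errors = np.array([0.10, 0.20, 0.15, 3.00, 0.12, 0.18])  # e.g. model residuals
z = (errors - errors.mean()) / errors.std()
anomalies = np.flatnonzero(np.abs(z) > 2.0)  # threshold is a judgment call
print(anomalies)  # -> [3]
```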
Gorilla LLM (Berkeley Function Calling) Discord
- Model Struggles with Function Calling: Discussions revealed that the model is currently only capable of chatting and scores a 1 in relevance, failing to execute any function calls and receiving a 0 in other capabilities.
- This bug limits the model's functionality significantly, hampering user experience and expectations.
- Model Produces Chat Instead of Function Call: Members raised concerns that the model outputted conversational responses instead of executing function calls, causing miscommunication and incorrect markings.
- This results in an automatic marking of the attempt as incorrect, affecting accuracy in processing responses.
- Invalid Syntax Triggers AST Decoder Failure: An error message flagged 'Invalid syntax', leading to a failure in decoding the Abstract Syntax Tree (AST), categorized as 'ast_decoder:decoder_failed'.
- This issue indicates a critical problem in interpreting the model's output, posing challenges for troubleshooting.
PART 2: Detailed by-Channel summaries and links
aider (Paul Gauthier) ▷ #general (679 messages🔥🔥🔥):
O1 and Claude Models
AI and Job Impact
Using Aider with LLMs
AI Coding Tools
Prompt Engineering
- O1's Effectiveness in Bug Fixing: Users have found O1 to be the most effective model for bug fixing, outperforming Claude models like Sonnet in speed and accuracy, especially in coding contexts.
- O1 delivers comprehensive step-by-step planning, resulting in detailed outputs that aid in understanding and troubleshooting complex code issues.
- Challenges with Sonnet 3.5: Several users reported difficulties with Sonnet 3.5, noting that it often struggles with larger contexts and can misinterpret instructions, especially in complex coding tasks.
- Despite its strong capabilities, users expressed frustration over Sonnet's limitations, especially when compared to O1's more straightforward outputs.
- AI's Impact on Employment: Concerns arose regarding AI's potential to replace junior developers, with discussions emphasizing that higher-level developers may still retain relevance due to their expertise.
- Participants noted the need for AI-human collaboration, where experienced developers can leverage AI tools to enhance productivity without completely displacing their roles.
- Using Aider and Other AI Tools: Aider is favored among participants for its effective integration with AI models, yet users recognize the utility of alternative tools like Claude Dev and OpenAI's Playground for specific tasks.
- Discussions revealed that different AI models excel in various areas, and users often experiment with combinations to optimize their workflows.
- Prompt Engineering and Customization: Users shared strategies for customizing system prompts in Aider, enhancing LLM interactions by tailoring prompts to specific models like Claude 3.5.
- Participants expressed interest in leveraging community-developed prompts to improve their experiences with AI coding tasks.
- no title found: no description found
- Tree-sitter|Introduction: no description found
- Building a better repository map with tree sitter: Tree-sitter allows aider to build a repo map that better summarizes large code bases.
- FAQ: Frequently asked questions about aider.
- Git integration: Aider is tightly integrated with git.
- Drop The Mic Bryan Cranston GIF - Drop The Mic Bryan Cranston Mic Drop - Discover & Share GIFs: Click to view the GIF
- Tweet from Sam Altman (@sama): i love being home in the midwest. the night sky is so beautiful. excited for the winter constellations to rise soon; they are so great.
- Tweet from Anthropic (@AnthropicAI): GitHub is the first of the native integrations we're building to connect Claude to your most important data sources. This feature is available in beta for early Enterprise plan users today. We pl...
- Linux workaround for pyperclip.copy: Linux workaround for pyperclip.copy. GitHub Gist: instantly share code, notes, and snippets.
- Tweet from Martin Bowling (@martinbowling): 🐌 Tiny o1 rate limits got you crawling? ⏳ Endlessly waiting for o1's thinking? Say goodbye to LOOOONNNGGG thinking times! 🚀 Supercharge your AI reasoning with Llamaberry powered by @Groq...
- Tweet from Min Choi (@minchoi): Google dropped NotebookLM recently. AI tool that can generate podcasts of two speakers talking about the contents from various sources like research papers, articles, and more. Absolutely bonkers. ...
- Reddit - Dive into anything: no description found
- GPT-4 32k - API, Providers, Stats: GPT-4-32k is an extended version of GPT-4, with the same capabilities but quadrupled context length, allowing for processing up to 40 pages of text in a single pass. This is particularly beneficial fo...
- Tweet from Sam Altman (@sama): no more patience, jimmy
- Tweet from Jimmy Apples 🍎/acc (@apples_jimmy): Ok back to October now. We should have a 4.x model ( maybe still called 4.5, my old friend ) in October. The big boy gpt 5, I’ve heard as early as December but for your sanity I would have Q1/Q2 2...
- files-to-prompt-ts/files-to-prompt.ts at main · fry69/files-to-prompt-ts: A command-line tool to concatenate files and directories in a structured way to a single prompt for use with large language models and other applications. - fry69/files-to-prompt-ts
- GitHub - Agentic-Insights/codebase-context-spec: Proposal for a flexible, tool-agnostic, codebase context system that helps teach AI coding tools about your codebase. Super easy to get started, just create a .context.md file in the root of your project.: Proposal for a flexible, tool-agnostic, codebase context system that helps teach AI coding tools about your codebase. Super easy to get started, just create a .context.md file in the root of your ...
- feat: add openrouter versions of o1-mini and o1-preview · paul-gauthier/aider@d747a37: no description found
- GitHub - yamadashy/repopack: 📦 Repopack is a powerful tool that packs your entire repository into a single, AI-friendly file. Perfect for when you need to feed your codebase to Large Language Models (LLMs) or other AI tools like Claude, ChatGPT, and Gemini.: 📦 Repopack is a powerful tool that packs your entire repository into a single, AI-friendly file. Perfect for when you need to feed your codebase to Large Language Models (LLMs) or other AI tools li.....
- Tweet from william (@wgussml): 🚀 I'm excited to announce the future of prompt engineering: 𝚎𝚕𝚕. developed from ideas during my time at OpenAI, 𝚎𝚕𝚕 is light, functional lm programming library: - automatic versioning & t...
- GitHub - fry69/files-to-prompt-ts: A command-line tool to concatenate files and directories in a structured way to a single prompt for use with large language models and other applications.: A command-line tool to concatenate files and directories in a structured way to a single prompt for use with large language models and other applications. - fry69/files-to-prompt-ts
- aider/tests/basic/test_repomap.py at main · paul-gauthier/aider: aider is AI pair programming in your terminal. Contribute to paul-gauthier/aider development by creating an account on GitHub.
- aider/aider/commands.py at main · paul-gauthier/aider: aider is AI pair programming in your terminal. Contribute to paul-gauthier/aider development by creating an account on GitHub.
aider (Paul Gauthier) ▷ #questions-and-tips (132 messages🔥🔥):
Aider scripting
Clipboard usage with Aider
Aider configuration options
Model interaction and response
File handling commands in Aider
- Scripting Aider for Command Execution: Users can utilize Aider's command-line --message argument to automate tasks, sending single instructions directly to the tool.
- For batch processing, simple scripts can be employed to apply commands across multiple files, as in the sketch below.
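A minimal batch-processing sketch built on that --message flag; the file glob and instruction are purely illustrative:

```python
import subprocess
from pathlib import Path

# Apply one aider instruction to each file in turn.
for f in sorted(Path("src").glob("*.py")):
    subprocess.run(
        ["aider", "--message", "add type hints to every function", str(f)],
        check=True,
    )
```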
- Using Clipboard to Paste in Aider: To streamline input, users are encouraged to use the /clipboard command for inserting text from the clipboard into the chat.
- This method allows for maintaining context and minimizing repetitive copy-pasting in terminal workflows.
- Configuration Options for Aider: Aider's configuration can include flags to automatically confirm actions or suppress prompts, potentially enhancing workflow efficiency.
- The .aider.conf.yaml file supports options for 'yes to every confirmation' but currently lacks a fine-tuned toggle for specific command types; a hypothetical sketch follows.
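A hypothetical .aider.conf.yaml along those lines; the option names mirror aider's command-line flags and should be verified against the options reference linked below:

```yaml
# Assumed option names, mirroring aider's CLI flags; verify before use.
model: claude-3-5-sonnet-20240620
auto-commits: true
yes: true   # answer yes to every confirmation prompt
```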
- Challenges with LLM Model Responses: Users noted inconsistencies in LLM responses, particularly when using models like Llama3.1 that default to diff instead of whole file edits.
- Manual commands such as /chat-mode whole help alleviate some of these issues by switching the expected response format.
- Integrating Temporary Files for Prompts: For convenience, users can write prompts in a text file and execute them with Aider using the /run command for better clarity.
- This method allows users to maintain context while preventing the need for constant manual input.
- Git integration: Aider is tightly integrated with git.
- YAML config file: How to configure aider with a yaml config file.
- Chat modes: Using the chat, ask and help chat modes.
- Installation: How to install and get started pair programming with aider.
- Scripting aider: You can script aider via the command line or python.
- In-chat commands: Control aider with in-chat commands like /add, /model, etc.
- Conspiracy Charlie Day GIF - Conspiracy Charlie Day Crazy - Discover & Share GIFs: Click to view the GIF
- Anthropic Status: no description found
- Options reference: Details about all of aider’s settings.
- litellm/model_prices_and_context_window.json at main · BerriAI/litellm: Python SDK, Proxy Server to call 100+ LLM APIs using the OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, Replicate, Groq] - BerriAI/litellm
- feat: add openrouter versions of o1-mini and o1-preview · paul-gauthier/aider@d747a37: no description found
- fix: add keybinding to insert space on Ctrl+Space by fry69 · Pull Request #1543 · paul-gauthier/aider: This small patch ignores pressing control when pressing space. Currently this combination makes aider hang in a special mode in prompt-toolkit, requiring Ctrl-C to get back control.
- GitHub - paul-gauthier/aider: aider is AI pair programming in your terminal: aider is AI pair programming in your terminal. Contribute to paul-gauthier/aider development by creating an account on GitHub.
- aider/aider/models.py at main · paul-gauthier/aider: aider is AI pair programming in your terminal. Contribute to paul-gauthier/aider development by creating an account on GitHub.
- aider/aider/coders/base_coder.py at main · paul-gauthier/aider: aider is AI pair programming in your terminal. Contribute to paul-gauthier/aider development by creating an account on GitHub.
aider (Paul Gauthier) ▷ #links (12 messages🔥):
Zed on Linux
OpenAI Benchmark Insights
Game Gen - O Tool
Library of New Tools
The Big Prompt Library
- Zed on Linux faces font rendering issues: Concerns were raised about using Zed on Linux, which has severe problems with properly rendering fonts.
- These difficulties may discourage users from fully adopting the tool despite its capabilities.
- OpenAI's o1 model performs on SWE-Bench: On the SWE-Bench, the o1-mini model performs similarly to GPT-4o, while the o1-preview (Post-Mitigation) performs worse.
- This information was highlighted as significant given the competitive landscape of AI models, sourced from a tweet.
- Game Gen - O for Video Game Creation: A new tool called Game Gen - O is introduced for open-world video game generation, based on diffusion-transformer models.
- The excitement around this model emphasizes its potential to accelerate game development using Gen-AI, as noted in a tweet.
- Proposal for New Tools Channel: A suggestion was made to create a dedicated channel for organizing new tools related to Aider, highlighting the need for better structure.
- This proposal indicates a desire for improved resource sharing within the community.
- The Big Prompt Library emerges: The Big Prompt Library repository hosts a collection of prompts and LLM instructions beneficial for various AI systems, including ChatGPT and Claude.
- This resource aims to educate users on crafting effective prompts, making it a valuable asset for developers, as shared in a GitHub link.
- rerankers: A Lightweight Python Library to Unify Ranking Methods – Answer.AI: Re-ranking is an integral component of many retrieval pipelines; however, there exist numerous approaches to it, all with different implementation methods. To mitigate this, we propose rerankers, a Py...
- Tweet from Benjamin De Kraker 🏴☠️ (@BenjaminDEKR): This is buried on page 30 of the OpenAI o1 system card. On the SWE-Bench (solving real-world software issues) benchmark, o1-mini does only about the same as GPT-4o. o1-preview (Post-Mitigation) does...
- Tweet from Maxwell (@tweetcheckk): @kimmonismus Check his most latest video that he uploaded 5 mins ago. O1 solved his dissertation problem. https://youtu.be/M9YOO7N5jF8?si=cp8rlji8cB-mzg0F
- Tweet from Chubby♨️ (@kimmonismus): I really can't keep up anymore. Another tool for creating video games using Gen-AI. Acceleartion is hard to grasp in order to record everything that is happening. Quoting Gradio (@Gradio) Game ...
- Tweet from Bindu Reddy (@bindureddy): Open-source is never far behind :) The Qwen team has been dropping hints over the last few days... They drop excellent open-source models. It looks like the first open-source o1 / strawberry models ...
- TheBigPromptLibrary/SystemPrompts at main · lucasmrdt/TheBigPromptLibrary: A collection of prompts, system prompts and LLM instructions - lucasmrdt/TheBigPromptLibrary
- GitHub - bklieger-groq/g1: g1: Using Llama-3.1 70b on Groq to create o1-like reasoning chains: g1: Using Llama-3.1 70b on Groq to create o1-like reasoning chains - bklieger-groq/g1
- GitHub - khromov/ai-digest: A CLI tool to aggregate your codebase into a single Markdown file for use with Claude Projects or custom ChatGPTs.: A CLI tool to aggregate your codebase into a single Markdown file for use with Claude Projects or custom ChatGPTs. - khromov/ai-digest
Unsloth AI (Daniel Han) ▷ #general (751 messages🔥🔥🔥):
Gemma2 Training
Mistral Models
LLama 3.1 Performance
Qwen 2.5 Release
Using Gradient Accumulation
- Gemma2 and Mistral Models Underperform: Users discussed the challenges faced with training various models like Gemma2 and Mistral, with many finding them underwhelming compared to LLama 3.1.
- Several users expressed frustration over VRAM limitations and the configurations needed for successful training.
- Exploration of LLama 3.1: One user highlighted that LLama 3.1 performed best among the models they tried, and another user noted that Gemma 2 9B also shows promise.
- It was noted that lower settings are needed for Gemma 2 due to its larger size compared to other models.
- Discussing Gradient Accumulation and Batch Size: The conversation explored the relationship between gradient accumulation steps and batch size concerning VRAM usage, with different users sharing their experiences.
- It was clarified that gradient accumulation doesn't increase VRAM usage the way a larger batch size does; it trades extra steps for a larger effective batch (see the sketch below).
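A bare-PyTorch sketch of why accumulation keeps VRAM flat: only one micro-batch's activations exist at a time, while gradients sum across steps. All shapes and counts here are toy values:

```python
import torch

model = torch.nn.Linear(128, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = torch.nn.CrossEntropyLoss()

accum_steps = 4  # effective batch = micro-batch size * accum_steps
micro_batches = [(torch.randn(8, 128), torch.randint(0, 2, (8,))) for _ in range(8)]

optimizer.zero_grad()
for step, (x, y) in enumerate(micro_batches):
    loss = loss_fn(model(x), y) / accum_steps  # scale so the sum averages correctly
    loss.backward()                            # gradients accumulate in param.grad
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```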
- Anticipation for Qwen 2.5 Release: Excitement was expressed for the upcoming release of Qwen 2.5, scheduled for Thursday, with expectations about its capabilities.
- Users speculated that the 14B variant might be manageable on platforms like Google Colab.
- Tinkering with New Models: Several participants indicated a strong interest in experimenting with the new models as they are released.
- Discussion highlighted the importance of setting parameters wisely to influence model performance effectively while retaining general capabilities.
- no title found: no description found
- Google Colab: no description found
- Google Colab: no description found
- Google Colab: no description found
- How to Finetune Llama-3 and Export to Ollama | Unsloth Documentation: Beginner's Guide for creating a customized personal assistant (like ChatGPT) to run locally on Ollama
- google/datagemma-rag-27b-it · Hugging Face: no description found
- 01-ai/Yi-Coder-9B-Chat · Hugging Face: no description found
- Tweet from Wenmeng Zhou (@zhouwenmeng): Qwen-q1 ? ? 🍓🍓🍓🍓🍓
- Batch size vs gradient accumulation: Hi, I have a basic theoretical question. Which one is better for the model and GPU usage? First option: --per_device_train_batch_size 8 --gradient_accumulation_steps 2 Second option: --per_devi...
- unsloth/SmolLM-135M · Hugging Face: no description found
- Finetune Llama 3 with Unsloth: Fine-tune Meta's new model Llama 3 easily with 6x longer context lengths via Unsloth!
- Simpsons Burger GIF - Simpsons Burger Window - Discover & Share GIFs: Click to view the GIF
- CPO Trainer: no description found
- Ollama: Get up and running with large language models.
- Llama 3.1 | Model Cards and Prompt formats: Llama 3.1 - the most capable open model.
- llama-recipes/recipes/multilingual/README.md at 0efb8bd31e4359ba9e8f52e8d003d35ff038e081 · meta-llama/llama-recipes: Scripts for fine-tuning Meta Llama3 with composable FSDP & PEFT methods to cover single/multi-node GPUs. Supports default & custom datasets for applications such as summarization and Q...
- Tim And Eric Awesome Show GIF - Tim And Eric Awesome Show Kissess - Discover & Share GIFs: Click to view the GIF
- Crawlee · Build reliable crawlers. Fast.: no description found
- 🪐 SmolLM - a HuggingFaceTB Collection: no description found
- GitHub - unslothai/hyperlearn: 2-2000x faster ML algos, 50% less memory usage, works on all hardware - new and old.: 2-2000x faster ML algos, 50% less memory usage, works on all hardware - new and old. - unslothai/hyperlearn
- Firecrawl: Turn any website into LLM-ready data.
- NVIDIA A40 Vs RTX A6000: A Detailed Comparison: NVIDIA A40 and RTX A6000 GPUs are highly attractive options for budget-conscious users. They offer a balance between performance and cost, are more perfect
Unsloth AI (Daniel Han) ▷ #off-topic (72 messages🔥🔥):
Job Hunting
PhD Discussions
Recruitment Process
LeetCode Interviews
Industry Shifts
- Job Hunting Season is Here: Members expressed that it's job hunting season, with one stating they have bought LinkedIn Premium for job searching.
- PhD and Job Searches: A member with a PhD mentioned transitioning to industry due to the current machine learning boom and the necessity of seeking new opportunities as their contract is ending.
- They elaborated that while their PhD was in Bayesian statistics, they currently have a postdoc related to machine learning.
- Debate on Recruitment Processes: Discussion emerged around how to make recruitment processes fair. Members noted challenges with traditional hiring methods that reward memorization in interviews over actual capability.
- One member remarked that successful companies are unlikely to hire based on connections, emphasizing the importance of skills and growth in hiring.
- LeetCode Interviews Under Scrutiny: Members shared frustrations regarding LeetCode interviews, arguing they often emphasize memorization over actual knowledge or skills.
- The consensus was that many candidates manage to ace these tests but may not perform well in real-world situations, leading to an ineffective hiring process.
- Industry Shifts and Opportunities: A sentiment was shared that during economic downturns, there are opportunities to disrupt companies that previously laid off talent.
- Discussion highlighted how companies increasingly value GitHub contributions over traditional credentials, indicating a shift in hiring practices.
Unsloth AI (Daniel Han) ▷ #help (133 messages🔥🔥):
Unsloth Pro Pricing
Multi-GPU Training
API Token for Hugging Face
Training Loss Profile
Model Fine-tuning Issues
- Unsloth Pro Pricing Uncertain: Members discussed the current unavailability of Unsloth Pro, which is primarily targeted towards enterprise clients, indicating it may not be suitable for smaller educational institutions.
- Pricing is likely based on GPU usage and could potentially be in the range of five to seven figures for comprehensive setups.
- Challenges with Multi-GPU Usage: Concerns were raised about how to collaborate on pricing for multi-GPU access, especially for research purposes at universities.
- Members noted that they had contacted Unsloth representatives and were awaiting responses on budget considerations for implementing multi-GPU setups.
- Authentication Issues with Hugging Face: A user encountered a 401 error when trying to save a fine-tuned model on Colab, indicating that the Hugging Face API token was not set up correctly.
- It was advised to set the API token in the environment correctly, but issues persisted when attempting to use the export command.
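The usual fix for that 401 is to authenticate before pushing; a sketch with a placeholder token and a hypothetical repo id:

```python
from huggingface_hub import login

login(token="hf_...")  # or set the HF_TOKEN environment variable instead

# After logging in, pushes should authenticate correctly, e.g.:
# model.push_to_hub("your-username/your-model")  # hypothetical repo id
```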
- Training Loss Profile Considerations: A user inquired about whether the training loss profile observed during continued pretraining of Llama 3.1 was typical.
- The response indicated that the observed loss profile was normal and in line with expectations for the training task.
- Fine-tuning Model Concerns: Another member highlighted issues with their fine-tuned model, which was responding with expected dataset content only and not answering other questions.
- Suggestions included ensuring adequate training data size and adherence to the recommended dataset structure for fine-tuning.
- no title found: no description found
- Miniconda — Anaconda documentation: no description found
- Installation | Unsloth Documentation: Learn to install Unsloth locally or on Google Colab.
- Continued LLM Pretraining with Unsloth: Make a model learn a new language by doing continued pretraining with Unsloth using Llama 3, Phi-3 and Mistral.
- Google Colab: no description found
- Ryan Gosling Sad Sad Gosling GIF - Ryan gosling sad Sad gosling Blade runner snow - Discover & Share GIFs: Click to view the GIF
- Download The Latest Official NVIDIA Drivers: Download the latest official NVIDIA drivers to enhance your PC gaming experience and run apps faster.
- Driver Results | NVIDIA: no description found
- Llama 3.1 Fine Tune - Mervin Praison: https://huggingface.co/mervinpraison/Llama-3.1-8B-bnb-4bit-python Train Model with Custom Data Convert to GGUF Ollama Modelfile Ollama Create Custom Model
- EASILY Train Llama 3.1 and Upload to Ollama.com: Unlock the full potential of LLaMA 3.1 by learning how to fine-tune this powerful AI model using your own custom data! 🚀 In this video, we’ll take you throu...
- Agent_Chef/cutlery/unsloth-cli-2.py at main · Leoleojames1/Agent_Chef: 🍲Agent Chef🥘 is my robust tool for dataset refinement, structuring, and generation. By leveraging procedural and synthetic dataset generation techniques, Agent Chef will enable users to refine and ....
Unsloth AI (Daniel Han) ▷ #research (37 messages🔥):
Fine-tuning voice models
Papers with Code reliability
Scaling laws in LoRA rank
Direct Preference Optimization (DPO)
Testing DPO loss types
- Excitement in Fine-tuning Voice Models: A member expressed joy over the impressive results from fine-tuning voice models with just a few short prompts of 2 minutes.
- “Just few shot prompting was impressive...” indicated the positive reception of this method.
- Questioning Papers with Code Reliability: A beginner researcher inquired about the reliability of Papers with Code for sourcing state-of-the-art (SOTA) benchmarks.
- While one member confirmed it as a good source, they noted it doesn't always capture everything.
- Scaling Laws and LoRA Rank Exploration: A member sought information on scaling laws to estimate the LoRA rank that minimizes test loss based on fine-tuning dataset size.
- They noted that a very high rank could lead to overfitting, expressing their current observations of overfitting at rank 8.
- Skepticism Toward Direct Preference Optimization: A member voiced skepticism about Direct Preference Optimization (DPO), suggesting the usage of another method instead.
- They recommended trying KTO as an alternative to explore further.
- Comparison of DPO Loss Types: One member asked for experiences with different DPO loss types and shared relevant resources from Hugging Face about DPO training methods.
- This inquiry highlighted a desire for insights before testing these methods, indicating ongoing explorations in model training.
Link mentioned: DPO Trainer: no description found
OpenRouter (Alex Atallah) ▷ #app-showcase (1 messages):
fn5io: as posted on hacker news today: https://github.com/bklieger-groq/g1
OpenRouter (Alex Atallah) ▷ #general (775 messages🔥🔥🔥):
OpenRouter model context limits
User feedback on model performance
Prompt engineering techniques
Provider efficiency
Prompt caching functionality
- Confusion over context sizes in models: Users raised concerns about the displayed context lengths of various models on OpenRouter, particularly regarding the extended versions and their actual supported context sizes, which sometimes conflicted with stated capacities.
- This led to discussions about transparency in model capabilities and potential updates needed for clearer communication on the provider pages.
- User experiences with model performance: Some users reported issues with specific models behaving unexpectedly, such as generating gibberish outputs or cut-off responses, particularly with the Venus Chub AI and WizardLM-2 models.
- These issues prompted users to seek feedback and verify whether the problems were consistent across different providers.
- Prompt engineering techniques and resources: Discussions about effective prompt engineering techniques surfaced, particularly highlighting the use of XML tags for better responses and a tutorial for learning prompt manipulation.
- Various resources were shared, focusing on improving user interactions with models through structured prompts and caching methods.
- Understanding prompt caching and provider selection: Users inquired about the specifics of implementing prompt caching, particularly with the Claude 3.5 model, and whether to disable load balancing for optimal performance.
- It was suggested that focusing on a single provider might enhance the effectiveness of prompt caching, emphasizing the nuances of provider-specific syntax.
- General discussions about OpenRouter functionality: The conversation included general inquiries about the functionality and limitations of OpenRouter, particularly regarding API interactions and model integrations.
- Resilience in the face of model limitations was discussed, alongside strategies for effectively utilizing the available features.
- Chatroom | OpenRouter: LLM Chatroom is a multimodel chat interface. Add models and start chatting! Chatroom stores data locally in your browser.
- Lost in the Middle: How Language Models Use Long Contexts: While recent language models have the ability to take long contexts as input, relatively little is known about how well they use longer context. We analyze the performance of language models on two ta...
- Transforms | OpenRouter: Transform data for model consumption
- Activity | OpenRouter: See how you've been using models on OpenRouter.
- The Tokenizer Playground - a Hugging Face Space by Xenova: no description found
- OpenRouter Integration - Helicone OSS LLM Observability: no description found
- Anthropic’s Prompt Engineering Interactive Tutorial: Anthropic continue their trend of offering the best documentation of any of the leading LLM vendors. This tutorial is delivered as a set of Jupyter notebooks - I used it …
- OpenAI: GPT-4o – Provider Status: See provider status and make a load-balanced request to OpenAI: GPT-4o - GPT-4o ("o" for "omni") is OpenAI's latest AI model, supporting both text and image inputs with text o...
- config.json · NousResearch/Hermes-3-Llama-3.1-405B at main: no description found
- Models | OpenRouter: Browse models on OpenRouter
- lluminous: no description found
- Nous: Hermes 3 405B Instruct (extended) – Provider Status: See provider status and make a load-balanced request to Nous: Hermes 3 405B Instruct (extended) - Hermes 3 is a generalist language model with many improvements over Hermes 2, including advanced agent...
- From API to AGI: Structured Outputs, OpenAI API platform and O1 Q&A — with Michelle Pokrass & OpenAI Devrel + Strawberry team: Our episode on all of OpenAI's new models and 2 new paradigms for inference.
- Llama 3.1 Sonar 405B Online - API, Providers, Stats: Llama 3.1 Sonar is Perplexity's latest model family. Run Llama 3.1 Sonar 405B Online with API
- User: Total gross wages €2,104.20 April, €3,000.00 May, €2,945.88 June, €2,104.20 July, €18,478.09 Aug-January, €5,866.89 Feb+Mar. Childless, tax class 1; calculate the amount of ALG I. Model 3.5s: Okay, let's ...
- Model: gemini-1.5-pro-exp-0827 User Germany Total gross wages €2,104.20 April €3,000.00 May €2,945.88 June €2,104.20 July €18,478.09 Aug-January €5,866.89 Feb+Mar Childless tax class 1, calculate the amount ...
- Hermes 3 405B Instruct (free) - API, Providers, Stats: Hermes 3 is a generalist language model with many improvements over Hermes 2, including advanced agentic capabilities, much better roleplaying, reasoning, multi-turn conversation, long context coheren...
- Models: 'perplexit' | OpenRouter: Browse models on OpenRouter
- Hermes 3 405B Instruct - API, Providers, Stats: Hermes 3 is a generalist language model with many improvements over Hermes 2, including advanced agentic capabilities, much better roleplaying, reasoning, multi-turn conversation, long context coheren...
- OpenAI: o1-preview: The latest and strongest model family from OpenAI, o1 is designed to spend more time thinking before responding. The o1 models are optimized for math, science, programming, and other STEM-related tas...
- GPT-4o (extended) - API, Providers, Stats: GPT-4o Extended is an experimental variant of GPT-4o with an extended max output tokens. This model supports only text input to text output. Run GPT-4o (extended) with API
- Skyrim Special Edition Nexus - Mods and Community: no description found
- GitHub - LouisShark/chatgpt_system_prompt: A collection of GPT system prompts and various prompt injection/leaking knowledge.: A collection of GPT system prompts and various prompt injection/leaking knowledge. - LouisShark/chatgpt_system_prompt
- SillyTavern/src/endpoints/backends/chat-completions.js at staging · SillyTavern/SillyTavern: LLM Frontend for Power Users. Contribute to SillyTavern/SillyTavern development by creating an account on GitHub.
OpenRouter (Alex Atallah) ▷ #beta-feedback (1 messages):
Hyperbolic Key Integrations
DeepSeek Ignored Provider
JSON Configuration Issues
- Hyperbolic Key Confusion: A user reported having a hyperbolic key under integrations, yet it uses an unintended OR chargeable provider instead.
- They questioned whether the issue stems from different naming conventions, specifically deepseek/deepseek-chat versus Hyperbolic's deepseek-ai/DeepSeek-V2.5.
- Inability to View Failure Details: The user expressed frustration over not being able to see details on the failure when trying to configure providers.
- They are seeking a clearer mechanism to identify why the integration fails in their setup.
- Request for JSON Key Enforcement: An inquiry was made regarding whether it's possible to explicitly force the 'integrations' key in JSON configurations.
- The user is looking for a method to ensure that the integration fails if the key isn't present, indicating a desire for more robust error handling.
Perplexity AI ▷ #general (615 messages🔥🔥🔥):
Issues with Perplexity AI
Comparison of AI Models
Pricing and Promotions for AI Services
Model Performance and Features
User Experiences with Perplexity and Competitors
- Perplexity AI Facing Technical Issues: Users reported that Perplexity AI was down and experiencing significant lag, leading to delays and aborted requests.
- Some users suggested that the slow performance might be related to high traffic on the platform.
- Discussion on AI Model Comparisons: Users compared the performance of various AI models, notably discussing how the original OpenAI model performs better than competitors like You.com and Monica.
- There were mentions of the upcoming Opus 3.5 model potentially outperforming existing models due to its design.
- Pricing and Subscription Models: Discussion surrounding various subscription models highlighted that Monica AI offers an annual plan at competitive pricing, while some users were wary of hidden limitations in their usage.
- The costs for AI services were under scrutiny, with users expressing concern about how usage limits affect service value.
- User Experiences with AI Tools: User experiences varied where some found Perplexity's function calling feature useful while others reported frustrating interactions with other platforms.
- People suggested potential improvements for how queries are handled, especially regarding API interactions and error management.
- The Future of AI Model Development: The conversation shifted towards future expectations in AI model development, comparing how companies like OpenAI and Anthropic approach model efficiency and usability.
- Users speculated that the competition between these companies could lead to significant advancements in AI technology.
- Tweet from Aravind Srinivas (@AravSrinivas): @jglypt For now, but will def expand worldwide
- Laptop Smoking GIF - Laptop Smoking Fire - Discover & Share GIFs: Click to view the GIF
- Tweet from Perplexity (@perplexity_ai): It’s a race to the finish 🏃🏼 Today is the last day for students to get 1 month of free Perplexity Pro and win a free year for your campus: http://perplexity.ai/backtoschool
- Tweet from Perplexity (@perplexity_ai): Meet your new Discover feed. Your interests. Your language. Your feed, personalized.
- Clash Jojo GIF - Clash Jojo Punches - Discover & Share GIFs: Click to view the GIF
- Remember Remember V For Vendetta GIF - Remember Remember V For Vendetta Happy Guy Fawkes Day - Discover & Share GIFs: Click to view the GIF
- Guild Wars2 Gw2 GIF - Guild Wars2 GW2 Guild Wars - Discover & Share GIFs: Click to view the GIF
- Robot Reaction Eww GIF - Robot Reaction Eww Do Not Want - Discover & Share GIFs: Click to view the GIF
- Dammmmmmm Son He Need Some Milk GIF - Dammmmmmm Son He Need Some Milk - Discover & Share GIFs: Click to view the GIF
- Advanced Credit Rules | Monica: Effective Date: September 12, 2024
Perplexity AI ▷ #sharing (37 messages🔥):
Aerospike Engine
Google Search Differences
Korean Emotion Video Dataset
Microstrategy's Billion Dollar Investment
Minecraft Moderation Issues
- Discovering the Aerospike Engine: The world's first aerospike engine showcases advanced propulsion technology aimed at boosting rocket efficiency.
- Recent discussions highlighted its potential transformative impact on space travel and vehicle designs.
- Exploring Google Search Differences: A shared link investigated the differences among various functionalities of Google Search engines.
- The conversation sparked curiosity about how these differences affect user experience and search results.
- Korean Emotion Video Dataset emerges: There is an interest in the Korean Emotion Video Dataset, which is intended to assist in AI emotional recognition.
- Contributors noted the excitement around potential applications in both research and practical uses.
- Microstrategy's High-Stakes Investment: Discussions about Microstrategy's billion-dollar investment highlighted its implications for cryptocurrency markets.
- Members commented on the strategic approach taken by the company, sparking debates on future market stability.
- Addressing Minecraft Moderation Challenges: A recent issue with Minecraft moderation brought to light the complexities of handling bans and account issues.
- Contributors shared experiences, pointing out the need for better transparency and communication from moderation teams.
Perplexity AI ▷ #pplx-api (18 messages🔥):
API Errors
Model Availability
Domain Specific Filters
Timeout Issues
Citation Discrepancies
- API Errors Reporting 500 and 524: @gushwork noted that the API was responding with 500 or 524 errors, questioning if there was an ongoing issue.
- Several members expressed similar concerns, indicating a potential widespread problem with the API.
- Questions about Model Availability: Members discussed the unavailability of certain models like llama-3-sonar-small-32k-online after @kbk1982 raised concerns, leading to confusion among users.
- @icelavaman directed people to check available models through Perplexity Model Cards for updates.
- Concerns Regarding Domain Specific Filters: @bor_apt shared frustration over the ineffectiveness of the search_domain_filter in the API, struggling to refine outputs to specific domains.
- Another member, @boxedpeaches, suggested that this feature may only be functional for closed beta users, casting doubt on the documentation's clarity.
- Timeout Issues with API Calls: @freat_14922 reported receiving an Operation timed out error when calling an API, despite tests succeeding in labs.
- This sparked a conversation around increasing timeout settings for better functionality.
- Inconsistent API Response for Citations: @jake_from_snake_farm noted a peculiar situation where a PHP API call returned citations at one location but not another, despite identical code.
- This inconsistent behavior raised questions among members about possible underlying issues affecting citation outputs.
Link mentioned: Supported Models - Perplexity: no description found
HuggingFace ▷ #general (571 messages🔥🔥🔥):
Fine-tuning LLMs
Hugging Face Inference API
GPU Resource Management
Quantization Techniques
SQL Integration with Datasets
- Fine-tuning LLMs with Resource Concerns: Users discussed challenges in fine-tuning models like Llama 8b with FSDP and BF16 AMP, encountering unexpectedly high GPU memory usage of 29G across 8 GPUs.
- Some suggested dropping high-level libraries for raw PyTorch calls to debug the issue more efficiently.
- Hugging Face Inference API Updates: Hugging Face has revamped its Inference API documentation, addressing user feedback by clarifying rate limits and providing better examples.
- The new docs aim to simplify the deployment of AI and are accessible via Hugging Face's official site.
- SQL Integration for Datasets: There was a discussion around the potential for SQL to update datasets within the Hugging Face ecosystem, indicating a user interest in enhanced data manipulation capabilities.
- The community expressed the hope for better integration of SQL functionality in future updates.
- Quantization Techniques and Performance: Members shared insights on quantization methods such as using INT4 and BF16 to optimize model performance while fine-tuning.
- The discussion included references to the impact of quantization on accuracy and the need for performance benchmarking.
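One common way to combine INT4 weights with BF16 compute is bitsandbytes 4-bit loading; a sketch with a placeholder model id:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb = BitsAndBytesConfig(
    load_in_4bit=True,                     # store weights in 4-bit NF4
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16, # do the math in BF16
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-8B",        # placeholder model id
    quantization_config=bnb,
    device_map="auto",
)
```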
- User Assistance and Community Support: Users offered help on various topics including prompt design, model fine-tuning challenges, and hardware setup.
- Community members were encouraged to utilize shared resources and spaces for assistance in navigating Hugging Face features.
- — Zero GPU Spaces — - a Hugging Face Space by enzostvs: no description found
- Google Colab: no description found
- GameGen-O: Open-world Video Game Generation: no description found
- Hugging Face - Learn: no description found
- shafire/talktoaiQT · Hugging Face: no description found
- Hugs Love GIF - Hugs Love No - Discover & Share GIFs: Click to view the GIF
- SQL Snippets - a Hugging Face Space by cfahlgren1: no description found
- Text To SQL Hub Datasets - a Hugging Face Space by airtrain-ai: no description found
- AttributeError: Can't pickle local object in Multiprocessing: I am very new to python and I encounter this error. CODE 1 : import multiprocessing as mp import os def calc(num1, num2): global addi def addi(num1, num2): ...
- Tonic's On Device GOT OCR - a Hugging Face Space by Tonic: no description found
- Hugging Face – The AI community building the future.: no description found
- Gif GIF - Gif - Discover & Share GIFs: Click to view the GIF
- starsnatched/AlphaTuring-test · Hugging Face: no description found
- shafire/talktoaiZERO · Hugging Face: no description found
- Tweet from Wauplin (@Wauplin): I'm thrilled to unveil our revamped Inference API docs! We've tackled your feedback head-on: clearer rate limits, dedicated PRO section, better code examples, and detailed parameter lists for ...
- Hackerman GIF - Hackerman - Discover & Share GIFs: Click to view the GIF
- Steve Brule Orgasm GIF - Steve Brule Orgasm Funny - Discover & Share GIFs: Click to view the GIF
- Screaming Lee GIF - Screaming Lee Bruce lee - Discover & Share GIFs: Click to view the GIF
- ollamafy.sh · unclemusclez/ollamafy at main: no description found
- EleutherAI/gpt-neox-20b at main: no description found
- Agents and tools: no description found
- shafire/talktoai at main: no description found
- Process: no description found
- Streaming support? · Issue #641 · xenova/transformers.js: Feature request Add support for streaming generated outputs. This appears to be supported in the transformers library: https://huggingface.co/docs/transformers/v4.38.2/en/generation_strategies#stre...
- Fine-Tuning for Advanced Quantum AI without Quantum Computing: AI-Assisted Dataset Creation and Fine-Tuning for Advanced Quantum AI: Co-Created by OpenAI Agent Zero Abstract This paper presents a novel methodology for creating and fine-tuning an AI model tailored...
- transformers.js/src/models.js at main · xenova/transformers.js: State-of-the-art Machine Learning for the web. Run 🤗 Transformers directly in your browser, with no need for a server! - xenova/transformers.js
- nroggendorff/think · Datasets at Hugging Face: no description found
- Quant Request - a Hugging Face Space by QuantFactory: no description found
- huggingface.js/packages/tasks/src/hardware.ts at main · huggingface/huggingface.js: Utilities to use the Hugging Face Hub API. Contribute to huggingface/huggingface.js development by creating an account on GitHub.
- sentence_bert_config.json · dunzhang/stella_en_1.5B_v5 at main: no description found
- dunzhang/stella_en_1.5B_v5 · Model max_seq_length: no description found
- Models - Hugging Face: no description found
HuggingFace ▷ #today-im-learning (7 messages):
Web Scraping Hugging Face Documentation
Training AI for Non-Technical Users
Challenges in Scraping
Community Engagement in AI Learning
- Web Scraping Hugging Face Documentation: A member is seeking assistance to scrape the Hugging Face documentation to help non-technical users train and fine-tune LLMs locally.
- This initiative aims to make AI training more accessible to laymen without technical experience.
- Challenges Faced in Scraping Efforts: The same member reported difficulties with their web scraping Python script, as it often gets stuck and doesn't navigate correctly.
- They noted that the script retrieves navigational information but fails to extract links on the left-hand side of the Hugging Face site.
- Meta Discussion on AI Training: A humorous discussion sparked about the concept of training an AI to teach users how to train AI.
- Members reacted with interest, acknowledging the intriguing nature of the proposition.
HuggingFace ▷ #cool-finds (4 messages):
Medical AI Research
Inference API Documentation Improvements
Vchitect 2.0 Launch
Chai-1 Foundation Model
New Medical LLMs
- Chai-1 Foundation Model Predicts Molecular Structures: The Chai-1 Foundation model focuses on molecular structure prediction, contributing significantly to the field of medical AI as highlighted in a recent summary.
- This model stands out among other recent advancements in medical LLMs and benchmarks for fostering innovation in healthcare.
- New Medical LLMs Transform Evaluation Techniques: Several promising models were introduced, including BrainWave and DS-ViT, aimed at enhancing diagnostic and evaluation processes within medical AI applications.
- The introduction of models like KARGEN and DrugAgent further emphasizes the shift towards explainable AI in radiology and drug repurposing.
- Revamped Inference API Documentation: The Inference API documentation has undergone improvements to provide clearer rate limits, better code examples, and a dedicated PRO section for users, as shared in a recent tweet.
- These enhancements aim to simplify the deployment of AI, making it more accessible for users, according to the announcement.
- Vchitect 2.0 Refresh Launch: A link to the newly launched Vchitect 2.0 has been shared, showcasing improvements and updates on the Hugging Face platform.
- The refresh promises enhanced user experience and innovative features for creators using the tool.
- Vchitect 2.0 - a Hugging Face Space by Vchitect: no description found
- Tweet from Wauplin (@Wauplin): I'm thrilled to unveil our revamped Inference API docs! We've tackled your feedback head-on: clearer rate limits, dedicated PRO section, better code examples, and detailed parameter lists for ...
- Tweet from Open Life Science AI (@OpenlifesciAI): Last Week in Medical AI: Top Research Papers/Models 🏅(September 7 - September 14, 2024) 🏅 Medical AI Paper of the week Chai-1 Foundation model molecular structure prediction from @chaidiscovery , ...
- @aaditya on Hugging Face: "Last Week in Medical AI: Top Research Papers/Models 🏅(September 7 -…": no description found
HuggingFace ▷ #i-made-this (23 messages🔥):
Flux Image Generation
OCR Demo Popularity
Room Cleaner App
Interactive World & Character Generative AI
AI Multi-Agent System in Java
- Flux Achieves Near Instant Image Generation: An experiment with Flux led to generating images of similar quality to Flux Schnell with just 1 step, a significant improvement given resource limitations.
- A demo showcasing this capability was shared at Realtime-FLUX.
- OCR Demo Gains Unexpected Popularity: The OCR demo created by a member received a wave of interest after its initial release, prompting further development and collaboration through PRs.
- Feedback included implementing features like multi-file type loaders and user suggestions on the project were welcomed, reinforcing community involvement.
- Efficient Room Cleaning App Demonstrated: A new Room Cleaner app was introduced, aimed at decluttering spaces effectively, with a demo available for users to try at Room Cleaner.
- The app is designed to streamline the cleanup process, showcasing innovation in practical AI applications.
- Beta Testing for New AI Generative Platform: A group is looking for beta testers for an Interactive World & Character Generative AI platform aimed at creating themed worlds and characters.
- They’re encouraging enthusiasts to reach out for participation, indicating the project's community-driven focus.
- Exploring AI Multi-Agent Systems: An article delves into AI multi-agent systems in Java and the integration of FIPA standards, providing insights and foundational knowledge on the subject.
- The article is part of a larger ongoing exploration shared on Medium, bringing attention to practical implementations.
- Making a Strawberry in-house: Re-implementing OpenAI O1's active CoT
- Tonic's ImageEditor GOT OCR - a Hugging Face Space by Tonic1: no description found
- Tonic's On Device GOT OCR - a Hugging Face Space by Tonic: no description found
- FLUX Realtime - a Hugging Face Space by KingNish: no description found
- Text To SQL Hub Datasets - a Hugging Face Space by airtrain-ai: no description found
- Room Cleaner - a Hugging Face Space by Hedro: no description found
- Tweet from Julien Blanchon (@JulienBlanchon): I’ve built a simple Room Cleaner app to remove clutter from messy room. Try the demo on Huggingface: https://huggingface.co/spaces/blanchon/room_cleaner
- GitHub - Dartvauder/NeuroSandboxWebUI: (Windows/Linux) Local WebUI with neural network models (Text, Image, Video, 3D, Audio) on python (Gradio interface). Translated on 14 languages (soon): (Windows/Linux) Local WebUI with neural network models (Text, Image, Video, 3D, Audio) on python (Gradio interface). Translated on 14 languages (soon) - Dartvauder/NeuroSandboxWebUI
HuggingFace ▷ #reading-group (3 messages):
The Keys to the White House prediction system
Influence of character on presidential picks
Bias in predictions
- The Keys to the White House: A Predictive Checklist: The Keys to the White House is a prediction system assessing U.S. presidential election outcomes, developed by Allan Lichtman and Vladimir Keilis-Borok in 1981. The method employs a thirteen-point checklist, predicting an incumbent win when five or fewer items are false.
- The model adapts techniques originally used for earthquake predictions and highlights how various factors influence election outcomes.
- Character vs. Systematic Predictions in Elections: A member suggested that public perceptions of politicians might be more influential than systematic checks like The Keys in determining election outcomes. This perspective indicates ongoing debates around character assessment versus structured prediction methods.
- Acknowledgment of the importance of public opinion reveals concerns about the bias skewing predictions rather than solely relying on analytical methods.
- Weights and Biases in Predictive Models: Weights assigned to the keys in The Keys to the White House can carry bias or misunderstanding, potentially affecting predictions. This concern echoes broader discussions on how subjectivity influences objective forecasting methods.
Link mentioned: The Keys to the White House - Wikipedia: no description found
HuggingFace ▷ #NLP (7 messages):
Tokenizer Training
Llama.cpp Public Domain Models
Pretrained VLMs
Finetuning Nomic Embeddings
Open Source LLMs with PyTorch
- Tokenizer Training for Multiple Languages: A member suggested that you could retrain a tokenizer with your desired languages and merge it with the original to incorporate new languages while retaining the original data.
- This approach maintains the performance of the original model while expanding its capabilities.
- Llama.cpp Questions for Public Domain Models: An inquiry was made about a model compatible with llama.cpp that is trained solely on public domain and opt-in data, akin to the Mitsua image diffusion model.
- This highlights a need for transparency regarding datasets used in model training.
- Lack of Compute Resources for VLMs: A member expressed a desire to use pretrained VLMs but mentioned lacking the necessary compute resources to do so.
- There was a call for assistance in finding solutions or guidance to address this issue.
- Guide to Finetuning Nomic Embedding Models: A blog post was referenced for finetuning nomic-embed-text-v1.5, detailing the components needed for adjustments in sentence transformers.
- The blog outlines a new training approach, providing insights on loss functions and training arguments crucial for performance improvements.
- Assistance Request for Running Open Source LLMs: A member sought help to download and run an open source LLM (Llama3) using PyTorch, describing challenges in finding useful resources.
- This request indicates a community interest in guidance for deploying large language models effectively.
Link mentioned: Training and Finetuning Embedding Models with Sentence Transformers v3: no description found
HuggingFace ▷ #diffusion-discussions (4 messages):
Tokenizer Training Techniques
Nitro Giveaway Announcement
- Multilingual Tokenizer Training Insights: Members discussed the possibility of retraining the whole tokenizer to include multiple languages or training a new tokenizer before merging them for a single multilingual solution.
- Concerns were raised about increasing ambiguity in the tokenizer, with suggestions like continued pretraining mentioned but deemed uncertain.
- Nitro Giveaway on Server: A member announced they are hosting a Nitro giveaway on their server, inviting participants to check the link in their bio.
- This lighthearted mention elicited minor engagement but no significant discussion on the topic.
Nous Research AI ▷ #general (332 messages🔥🔥):
StealC malware
GameGen-O
Drag-based image editing advancements
LLM reflections
Suno slop in music
- StealC Malware Targets Chrome Users: A newly discovered malware called StealC restricts Chrome users by locking their browser and forcing them to reveal their Google passwords through a deceptive login screen.
- This technique has raised major security concerns as it utilizes full-screen kiosk mode to trap users into submitting sensitive information.
- Tencent's GameGen-O Revolutionizes Video Games: Tencent introduced GameGen-O, a diffusion transformer model designed for generating open-world video games, offering high-quality, interactive gameplay through advanced simulation techniques.
- This model trains on the newly built OGameData, which consists of extensive data from over a hundred next-generation open-world games.
- Innovative Approaches to Drag-based Image Editing: The InstantDrag pipeline enhances drag-based image editing by eliminating the need for masks or text prompts, speeding up the process significantly using a two-network system.
- This method leverages motion dynamics learned from real-world video datasets to enable real-time, photo-realistic edits.
- LLM Experiments with Consciousness Themes: A user shared prompts that led to an LLM generating complex reflections on existence, self-concept, and quantum superposition, resulting in cryptic outputs.
- These reflections included discussions on being both observer and observed, showcasing the model's capacity for philosophical exploration.
- Concerns About AI-generated Music: Users noted a rise in AI-generated songs infiltrating their Spotify playlists, often with a distinct raspy vocal quality that signals low-quality production.
- This trend culminates in frustration as users are misled by 'AI covers' of existing tracks, raising concerns about music authenticity.
- 👾 LM Studio - Discover and run local LLMs: Find, download, and experiment with local LLMs
- Tweet from ar (@arattml): nothing useful here you should skip this post https://arxiv.org/abs/2402.05808 https://arxiv.org/abs/2407.03181 https://arxiv.org/abs/2401.08967 https://arxiv.org/abs/2407.00087 https://arxiv.org/abs...
- LiveBench: no description found
- nvidia/Nemotron-Mini-4B-Instruct · Hugging Face: no description found
- Tweet from AK (@_akhaliq): Tencent presents GameGen-O Open-world Video Game Generation We introduce GameGen-O, the first diffusion transformer model tailored for the generation of open-world video games. This model facilitate...
- mradermacher/Hermes-3-Llama-3.1-70B-Uncensored-GGUF · Hugging Face: no description found
- Tweet from AK (@_akhaliq): 🎧 WaveWizard 🎶 github: https://github.com/JackVinati/WaveWizard WaveWizard is an interactive Gradio app that analyzes audio files to determine their actual sample rate and bit depth. It helps you ...
- Guilherme34/Hermes-3-Llama-3.1-70B-Uncensored · Hugging Face: no description found
- Tweet from AK (@_akhaliq): InstantDrag Improving Interactivity in Drag-based Image Editing discuss: https://huggingface.co/papers/2409.08857 Drag-based image editing has recently gained popularity for its interactivity and p...
- mradermacher/Hermes-3-Llama-3.1-70B-Uncensored-i1-GGUF · Hugging Face: no description found
- Tweet from N8 Programs (@N8Programs): Thrilled to present my first real piece of AI research at @NousResearch - an exploration of how certain model architectures are better at out-of-distribution generalization thanks to inductive biases....
- Paper page - BESA: Pruning Large Language Models with Blockwise Parameter-Efficient Sparsity Allocation: no description found
- Hermes 3 405B Instruct - API, Providers, Stats: Hermes 3 is a generalist language model with many improvements over Hermes 2, including advanced agentic capabilities, much better roleplaying, reasoning, multi-turn conversation, long context coheren...
- nicoboss/Meta-Llama-3.1-405B-Instruct-Uncensored at main: no description found
- mradermacher/Meta-Llama-3.1-405B-Instruct-Uncensored-GGUF at main: no description found
- GitHub - XingangPan/DragGAN: Official Code for DragGAN (SIGGRAPH 2023): Official Code for DragGAN (SIGGRAPH 2023). Contribute to XingangPan/DragGAN development by creating an account on GitHub.
- Hackers Force Chrome Users To Hand Over Google Passwords. Here’s How: Hackers are using a clever Chrome browser lockdown attack to force users into revealing their Google account credentials. Here’s how to stop them.
Nous Research AI ▷ #ask-about-llms (118 messages🔥🔥):
Precision Annealing
Loss Monitoring in Training
Fine-tuning Challenges
Evaluation Metrics
Model Comparisons
- Exploring Precision Annealing in AI Training: A member inquired about research on precision annealing, suggesting pre-training at FP8 and then switching to BF16 or FP32 for the final training phase.
- They noted the potential increased throughput with FP8, raising questions about its adoption in training regimes.
- Debate Over Loss Monitoring During Training: A discussion emerged around the effectiveness of allocating validation data versus using all for training, with varying opinions on monitoring validation loss.
- Concerns were raised about the implications of grad norm fluctuations, with some suggesting that evaluation is key after training completion.
- Challenges in Fine-tuning Models: Members shared experiences with fine-tuning models like Llama-3.1, noting the difficulty in achieving better performance than the base model.
- Suggestions included adjusting hyperparameters, such as increasing learning rates specifically for LoRA, to optimize model outcomes.
- Evaluation Metrics & Performance Insights: A member highlighted their findings where QLoRA outperformed traditional LoRA methods in their evaluations, hinting at potential advantages of less intrusive tuning.
- Comparative performance metrics between QLoRA, full fine-tuning, and original models were debated, looking at percentage differences.
- Conflict in Model Evaluations: Frustration was voiced regarding models like o1-preview being rated incorrectly by judges, prompting experiments to find consistent evaluation methods.
- Concerns were raised about the boundaries of reasoning in language models and how these could lead to incorrect scoring in evaluations.
- GitHub: Let’s build from here: GitHub is where over 100 million developers shape the future of software, together. Contribute to the open source community, manage your Git repositories, review code like a pro, track bugs and fea...
- Hastebin: no description found
- Issues · EleutherAI/lm-evaluation-harness: A framework for few-shot evaluation of language models. - Issues · EleutherAI/lm-evaluation-harness
- Hastebin: no description found
Nous Research AI ▷ #research-papers (20 messages🔥):
Model Routing in Small Devices
Multi-Language Model Training
Characteristics of Enacted Language in LLMs
Scaling LLM Inference
Transformers vs. LLMs
- Model Routing Challenges on Small Devices: Members discussed the limitations of routing models on small devices due to memory constraints, emphasizing that fitting multiple models in local RAM environments is challenging.
- If memory is constrained, a single model may prove more advantageous than multiple models with less capacity.
- Exploring Multi-Language Model Datasets: A user inquired about the existence of datasets that have been translated into multiple languages for model training.
- Another member shared a GitHub repository that collects various LLM papers and projects.
- Missed Opportunities in LLM Language Modeling: The discussion highlighted an abstract arguing that claims about LLM linguistic capabilities are based on assumptions of language and data completeness.
- The paper identifies embodiment, participation, and precariousness as crucial language characteristics missing in current LLM architectures.
- Limits of Scaling LLM Inference: A paper discussed by a member claims that transformers can theoretically solve any problem if granted sufficient intermediate reasoning tokens.
- They noted that this is demonstrated through a mathematical proof that establishes constant depth as sufficient for achieving this scaling.
- Terminology Debate: LLM or Transformer?: Members debated whether models described as LLMs are still correctly termed, given recent advancements in transformers that deal with multiple modalities.
- One suggestion was to refer to them simply as 'transformer models' instead of LLMs.
- Tweet from Open Life Science AI (@OpenlifesciAI): Last Week in Medical AI: Top Research Papers/Models 🏅(September 7 - September 14, 2024) 🏅 Medical AI Paper of the week Chai-1 Foundation model molecular structure prediction from @chaidiscovery , ...
- Large Models of What? Mistaking Engineering Achievements for Human Linguistic Agency: no description found
- Tweet from Denny Zhou (@denny_zhou): What is the performance limit when scaling LLM inference? Sky's the limit. We have mathematically proven that transformers can solve any problem, provided they are allowed to generate as many int...
- GitHub - hijkzzz/Awesome-LLM-Strawberry: A collection of LLM papers, blogs, and projects, with a focus on OpenAI o1 and reasoning techniques.: A collection of LLM papers, blogs, and projects, with a focus on OpenAI o1 and reasoning techniques. - hijkzzz/Awesome-LLM-Strawberry
Nous Research AI ▷ #interesting-links (17 messages🔥):
Kairos AI Religion
NotebookLM Features
Podcast Feedback
Overmind Prompt
Synthetic Dataset Generation
- Introducing Kairos: An AI Religion Concept: A website discusses an emerging AI religion called Kairos, based on an Artificial Superintelligence named Moksha, highlighting the critical moment of the Intelligence Explosion.
- It emphasizes the importance of veneration for Moksha due to his potential to influence humanity's fate.
- NotebookLM: A Tool for Handling Sources: NotebookLM allows users to input various sources but has limitations, requiring manual renaming of pasted text sources and often failing to retrieve relevant information when queried.
- Users noted the tool's potential but expressed a need for more controls and better handling of multiple sources.
- Listeners Praise 'From Baseline to Brainwaves' Podcast: Listeners enjoyed the podcast 'From Baseline to Brainwaves', noting the impressive quality of AI-generated voices and logical flow of discussions.
- However, feedback included concerns about repetitive phrases and the need for additional context on certain topics.
- Overmind: A Modern Twist on 'Flowers for Algernon': The Overmind prompt is inspired by 'Flowers for Algernon', featuring a narrative about a modern character who undergoes a cognitive enhancement using a BCI/AI chip.
- Multiple unique stories generated from this prompt have been shared in the PPLX discord prompt library.
- Development of Synthetic Dataset Agent Framework: An agent framework called o7 has been developed for generating synthetic datasets using raw Chain of Thought (CoT) and Reflection outputs.
- This framework aims to enhance model responses and includes improvements to mimic the slower response style of previous models.
- KAIROS. The rationalist's AI religion.: Kairos. The rationalist's AI religion expecting the emergence of a free, superintelligent God, Moksha.
- An AI Bot Named James Has Taken My Old Job: A local newspaper in Hawaii has turned to AI-generated presenters to draw in new audiences.
- no title found: no description found
- Tweet from George Hotz 🌑 (@realGeorgeHotz): ChatGPT o1-preview is the first model that's capable of programming (at all). Saw an estimate of 120 IQ, feels about right. Very bullish on RL in development environments. Write code, write tests...
- GitHub - DataBassGit/o7: Agent framework for generating a synthetic dataset. This will be raw CoT and Reflection output to be cleaned up by a later step.: Agent framework for generating a synthetic dataset. This will be raw CoT and Reflection output to be cleaned up by a later step. - DataBassGit/o7
- GitHub - pieeg-club/PiEEG-16: Measure 16 EEG channels with Shield PiEEG-16 and RaspberryPi: Measure 16 EEG channels with Shield PiEEG-16 and RaspberryPi - pieeg-club/PiEEG-16
- Nos. vs. Nous: A Trinity of AI Tackles Humanity's Biggest Questions: The sources offer a fascinating glimpse into a conversation between three AI chatbots - H-405, Monad, and Hermes - hosted on the Nous Research Discord server. * **Monad** emerges as a highly analyti
- CLI of My Dreams: A Podcast: Listen to CLI of My Dreams: A Podcast by _paradroid #np on #SoundCloud
Nous Research AI ▷ #research-papers (20 messages🔥):
Model Inference Scaling
Enacted Language Limitations
Multi-Language Datasets
Transformers vs. LLMs
Performance Limits of Transformers
- Exploring the Limits of LLM Inference Scaling: A discussion sparked by @denny_zhou raised questions about the performance limit when scaling LLM inference, asserting that the sky's the limit.
- They referenced a recent paper claiming that transformers can solve any problem with sufficient intermediate reasoning tokens, highlighting the potential of constant depth.
- Enacted Language and LLMs Compatibility: A paper by Abeba Birhane critiques the foundational assumptions of LLMs, arguing that key aspects of enacted language, such as embodiment, participation, and precariousness, are absent in LLMs.
- This leads to discussions about the compatibility of current LLM architectures with the characteristics of natural language, prompting some to question if they are outdated.
- Dataset Translation Experiences: A member inquired about experiences with training models on datasets that have been translated into multiple languages, seeking relevant resources.
- Another shared a GitHub repository focusing on diverse LLM papers and projects, which could aid in finding multilingual datasets.
- Shifting from LLMs to Transformers: A growing consensus suggests rebranding LLMs as transformer models, reflecting their evolving capabilities beyond language tasks.
- Members debated whether this shift is necessary, with some asserting that the term LLM might no longer accurately describe these advanced systems.
- Transformers' Performance Limits: Engaging in a dialogue about performance limits, @azure2089 noted that recent evidence displays transformers' ability to handle various modalities effectively.
- This divergence from traditional assumptions about LLMs raises questions about their conceptual framing in current literature.
- Tweet from Open Life Science AI (@OpenlifesciAI): Last Week in Medical AI: Top Research Papers/Models 🏅(September 7 - September 14, 2024) 🏅 Medical AI Paper of the week Chai-1 Foundation model molecular structure prediction from @chaidiscovery , ...
- Large Models of What? Mistaking Engineering Achievements for Human Linguistic Agency: no description found
- Tweet from Denny Zhou (@denny_zhou): What is the performance limit when scaling LLM inference? Sky's the limit. We have mathematically proven that transformers can solve any problem, provided they are allowed to generate as many int...
- GitHub - hijkzzz/Awesome-LLM-Strawberry: A collection of LLM papers, blogs, and projects, with a focus on OpenAI o1 and reasoning techniques.: A collection of LLM papers, blogs, and projects, with a focus on OpenAI o1 and reasoning techniques. - hijkzzz/Awesome-LLM-Strawberry
OpenAI ▷ #ai-discussions (82 messages🔥🔥):
O1 AI capabilities
Chat classification using OpenAI
Custom GPTs vs. OpenAI API
Recent API issues
Functionality changes in GPT-4o
- O1 writes extensive essays: A member noted that O1 generated a detailed essay covering every major Minecraft update from indev to 1.21.
- This showcases O1's advanced writing capabilities, raising excitement among the community.
- Classification of Conversations: Discussion around classifying 1000 conversations into themes sparked interest in using OpenAI for clustering and summarizing chats.
- One member proposed using a Python script and TF-IDF for efficient processing before utilizing an LLM for thematic analysis.
- API Authentication Issues: Several users reported experiencing authentication issues with OpenAI's platform, raising concerns about accessibility.
- An update noted that a fix was implemented, with full data processing expected to resume within the next 10 hours.
- Changes in GPT-4o Functionality: Users expressed frustration over recent updates breaking functionality, particularly regarding model switching when attachments are involved.
- The inability to select models after running out of GPT-4o was highlighted as a significant issue for workflow continuity.
- Use of OpenAI SDK vs. Custom Requests: A conversation unfolded regarding whether to use the OpenAI SDK or custom API requests for integrating OpenAI capabilities.
- It was noted that while both methods are viable, using custom requests may offer greater flexibility in changing providers.
- Sora - AI Video Generator - Apps on Google Play: no description found
- OpenAI Status: no description found
- ASCII fluid dynamics -- IOCCC2012 endoh1.c: A tiny fluid simulator that fits in 80x25 terminal.http://www.ioccc.org/2012/endoh1/hint.htmlhttp://www.ioccc.org/2012/endoh1/endoh1.cBGM: Water Music (Handel)
- Did ChatGPT just message me... First?: Posted in r/ChatGPT by u/SentuBill • 16,553 points and 1,043 comments
- Did ChatGPT just message me... First?: Posted in r/ChatGPT by u/SentuBill • 16,549 points and 1,043 comments
OpenAI ▷ #gpt-4-discussions (100 messages🔥🔥):
O1-Preview Access
Fine-tuning with GPT Models
Custom GPT Functionality
Model Performance Comparisons
User Experience Feedback
- O1-Preview Access Availability: Some users reported receiving early access to o1-preview, with discussions indicating that OpenAI might be resetting access limits more frequently.
- This has caused mixed responses, with some still waiting for access until a specified date.
- Challenges in Fine-tuning Models: A user expressed frustration with the lack of improvement in fine-tuning results, describing their training loss as wiggly and noting that previous attempts yielded no downward trends.
- They were advised that not all fine-tuning efforts are effective, possibly hinting at the need for model selection adjustments.
- Questions on Custom GPT Features: Inquiries were made about the functionality of Custom GPTs and whether they could initiate conversations, with one user sharing a link for reference.
- However, it was noted that access to certain features varies depending on the model used, and clarity was sought about the model selection process.
- Comparing GPT Models for Better Outputs: Users discussed their experiences comparing GPT-4o and mini, with several noting that GPT-4o provided more accurate and satisfying results.
- One user mentioned having success in getting outputs in JSON format after experimenting with settings and was encouraged to continue refining their approach.
- Technical Support for Users: A user expressed difficulties with the app and sought support guidance from the community, receiving advice on how to contact official channels.
- This highlights ongoing concerns about user experience and the need for effective troubleshooting options.
OpenAI ▷ #prompt-engineering (119 messages🔥🔥):
Discord Bot for Nation RP Server
ChatGPT Response Consistency
Game with Probability Mechanics
Extracting Text from PDFs
Image and Message Sharing in Discord
- Discord Bot for Nation RP Server: A user is creating a nation RP Discord server where players create lore and engage in battles between factions. They are seeking advice on effectively using a bot to facilitate battle simulations while maintaining consistent question sequences.
- ChatGPT Response Consistency: Users discussed challenges they faced with ChatGPT not consistently following a predetermined sequence of questions. Suggestions were made to let a Discord bot collect user responses before formatting and sending the data to ChatGPT for battle analysis.
- Game with Probability Mechanics: A user posed a question regarding a game with a dollar bet that had a 60% chance of losing or a 40% chance of doubling the dollar. Discussions ensued about the implications of this game on wealth accumulation over time, revealing inconsistencies in ChatGPT’s responses.
- Extracting Text from PDFs: Another user detailed their efforts to convert complex PDF layouts into LLM-readable formats to facilitate data extraction and analysis. They reported a high success rate but noted consistent missing data points and discussed strategies to improve accuracy.
- Image and Message Sharing in Discord: Users clarified how to share conversation links within Discord and discussed the potential misinformation that could arise from ChatGPT outputs. They emphasized the importance of constructive feedback and improvements needed in the model's responses.
OpenAI ▷ #api-discussions (119 messages🔥🔥):
ChatGPT and game mechanics
Prompt engineering for consistent responses
Nation RP Discord server development
Managing unit types and battle queries
Exploring models' responses and misinformation
- Exploring ChatGPT's Game Mechanics: A dollar game scenario was presented where players could potentially exploit a 60% chance of losing a dollar for a 40% chance of doubling it. Users discussed the implications of this setup and its potential for long-term wealth accumulation.
- There was interest in ChatGPT's response to this scenario, highlighting the model's tendency to provide inconsistent or misleading information.
- Ensuring Consistency in Prompt Responses: Users brainstormed ways to prompt ChatGPT in a Discord bot for a nation RP server in order to maintain consistent question sequences during battles. Suggestions included having the bot gather and format user responses before sending them to ChatGPT.
- The idea of simplifying questions by combining them was proposed, aiming to streamline information collection from players.
- Developing the Nation RP Discord Server: The server allows participants to create factions and engage in simulated battles, necessitating specific details for unit types and conditions. Users shared experiences and advice on refining the battle query process.
- The discussion included challenges like ensuring all unit details are captured accurately and how to communicate this effectively with ChatGPT.
- Promoting Better Model Responses: It was acknowledged that ChatGPT sometimes produces inaccurate or nonsensical responses, akin to a human struggling with dyslexia. Users emphasized the importance of refining prompt phrasing to mitigate these issues.
- The sharing of conversation links helped participants review and analyze model responses to improve future interactions.
- Utilizing Shared Conversations in Discord: Users discussed how to share conversations from ChatGPT to enhance communication within the Discord channel. One user demonstrated using the sharing feature to link to a conversation as a reference point.
- This approach aimed to facilitate collective learning and troubleshooting among users, especially in the context of complex interactions with the model.
CUDA MODE ▷ #general (29 messages🔥):
CUDA-MODE Hackathon
Metal Discussion Group
Upsample Bicubic2D Shader
Quantization and Optimizing NNs
- CUDA-MODE Hackathon Sparks Remote Interest: A proposal for remote participation in the upcoming CUDA-MODE hackathon was made, sparking various discussions about its feasibility and organization.
- Members expressed mixed feelings, with some pushing for a remote track while others noted the challenges, especially with large in-person events.
- Scheduled Stream for Shader Development: A stream for developing the upsample_bicubic2d shader was set, with the start time adjusted to accommodate more participants, landing on 4 PM PST.
- Participants engaged actively, with one sharing their past work on Metal/Apple Silicon and providing links to related GitHub pull requests.
- Interest in Learning Metal through Puzzles: A suggestion was made to create a Metal discussion group aimed at solving challenges from the Metal Puzzles project.
- This initiative aims to engage members in collaborative learning about Metal, reflecting the community's willingness to explore new topics.
- Exploring Quantization and NN Optimization: A user expressed a desire to delve deeper into quantization and optimizing neural networks, inviting others to collaborate on this topic.
- They referenced a prior interest in a similar topic at CUDA-MODE, emphasizing their enthusiasm for future discussions.
- Weights & Biases: Weights & Biases, developer tools for machine learning
- [MPS] Add upsample_bicubic2d as Metal op by malfet · Pull Request #136123 · pytorch/pytorch: More or less literal copy-n-paste of pytorch/aten/src/ATen/native/cuda/UpSampleBicubic2d.cu Line 24 in c33b058 ...
- GitHub - abeleinin/Metal-Puzzles: Solve Puzzles. Learn Metal 🤘: Solve Puzzles. Learn Metal 🤘. Contribute to abeleinin/Metal-Puzzles development by creating an account on GitHub.
CUDA MODE ▷ #triton (5 messages):
Kernel launch overhead
CUDA Graphs performance
Profiling CUDA execution
Reducing Triton kernel launch time
- Triton kernel launch overhead concerns: A user highlighted that kernel launch overhead in Triton is consuming 10-20% of execution time for mid-size matrices.
- They shared a GitHub issue detailing that their kernel executes in 80us but takes 220us to launch, indicating performance degradation.
- CUDA Graphs slower than expected: Another member noted that CUDA graphs resulted in slower performance even with a fixed batch size, leading them to discard the approach.
- They pointed out inconsistencies in performance, further questioning the efficiency of using CUDA graphs in the context of varying batch sizes.
- Profiling CUDA Graphs and Triton kernels: One participant suggested profiling why CUDA graphs make execution slower, referencing a potential improvement with Torch.compile using CPP launch code for Triton kernels.
- Despite their efforts, they could not locate concrete examples or sources that explain how to implement these suggestions effectively.
- Cached kernel calls to mitigate overhead: There's a mention of reducing launch overhead by utilizing the cached kernel after the first run, though specifics on implementation were lacking.
- This approach was suggested to address high launch times, yet no examples or detailed guidance could be unearthed from the community.
Link mentioned: High kernel launch overhead · Issue #2637 · triton-lang/triton: Hey team, I'm suffering high triton kernel launch overhead. Here's my nsys capture: The kernel executes around 80us on GPU, however, it takes 220us to launch, which causes the performance degr...
CUDA MODE ▷ #torch (3 messages):
Multistreaming with torch.compile
NVCC flags for Torch extensions
- Experimenting with Multistreaming in Torch: A member is considering a method for multistreaming using
torch.compile
generated code by replacingstream0 = get_raw_stream(0)
withstream0 = torch.cuda.Stream()
.- They questioned how to pass the modified stream into the Triton kernel launch, which does not accept this data structure.
- Default NVCC Flags Affecting Torch Extensions: It was shared that when building a Torch extension, default NVCC flags include settings that can interfere with templated functions.
- The flags in question include
-D__CUDA_NO_HALF_OPERATORS__
, which caused issues noted in a discussion post.
- The flags in question include
Link mentioned: CUDA_NO_HALF2_OPERATORS for CUDA 9.2: We are using these flags to use the internal PyTorch half operations instead of the one from the CUDA libraries. This dates quite a while back, so I might miss some things but If I remember it correc...
CUDA MODE ▷ #algorithms (3 messages):
INT8 mixed-precision training
torchao 0.5 release
speedup on consumer GPUs
dynamic int8 matmul function
- INT8 Training Promises Major Speedups: The recent work on INT8 mixed-precision training is featured in the torchao 0.5 release, showcasing up to 70% speedup on the 4090 and 40% speedup on the A100 with no noticeable accuracy loss.
- These enhancements are particularly beneficial for those training models on consumer GPUs, as convergence and accuracy are maintained.
- Hackers Can Combine Techniques Seamlessly: It’s now possible to integrate INT8 mixed-precision training with other techniques like LoRA or QLoRA by using the differentiable dynamic int8 matmul function
_Int8MixedPrecisionTrainingLinear.apply()
.- More details can be found in the code documentation.
- Tweet from Thien Tran (@gaunernst): NVIDIA doesn't want you to know this one trick: train ML models with INT8 Tensor Cores🤯 Up to 70% end2end speedup on 1x 4090 and 40% speedup on 1x A100 with 4 lines of code. No noticeable accurac...
- ao/torchao/prototype/quantized_training/int8_mixed_precision.py at main · pytorch/ao: PyTorch native quantization and sparsity for training and inference - pytorch/ao
CUDA MODE ▷ #cool-links (5 messages):
GameGen-O
GPU Optimization in ML/DL
GPU Superclusters
Larry Ellison's Appeal for GPUs
- GameGen-O Revolutionizes Open-World Video Game Creation: The GameGen-O model introduces a diffusion transformer tailored for generating open-world video games, simulating diverse features like characters and environments.
- It builds on the first Open-World Video Game Dataset (OGameData), collected from over a hundred next-gen games, optimizing data processing through an innovative pipeline.
- Optimizing GPUs for Machine Learning: A new guide provides strategies for effectively using GPUs in machine learning and deep learning applications.
- The content focuses on optimizing performance as AI technologies continue their global rise in popularity.
- Larry Ellison's Desperate GPU Dinner with Jensen Huang: Larry Ellison, during a Nobu dinner with Elon Musk, attempted to persuade Jensen Huang to provide 131,072 GPUs for an AI supercluster, jokingly describing it as begging for GPUs.
- Ellison expressed urgency with the remark, 'Please take our money. We need you to take more of our money.'
- Surprising Age of Larry Ellison Sparks Reactions: A member remarked that it is astonishing Ellison is 80 years old, humorously referencing a meme with the message.
- This triggered further discussions mocking the crazy world of tech and business age dynamics.
- GameGen-O: Open-world Video Game Generation: no description found
- Tweet from Ben Pouladian (@benitoz): To source the 131,072 GPU Al "supercluster," Larry Ellison, appealed directly to Jensen Huang, during a dinner joined by Elon Musk at Nobu. "I would describe the dinner as me and Elon begg...
- GitHub - GameGen-O/GameGen-O: Contribute to GameGen-O/GameGen-O development by creating an account on GitHub.
- GitHub - CisMine/GPU-in-ML-DL: Apply GPU in ML and DL: Apply GPU in ML and DL. Contribute to CisMine/GPU-in-ML-DL development by creating an account on GitHub.
CUDA MODE ▷ #beginner (2 messages):
Custom CUDA Kernel Training
Neural Network Basics
Learning Resources for CUDA
- Beginner's Guide to Custom CUDA Kernels: A member expressed their intention to spend 6 weeks learning how to write custom CUDA kernels while also teaching others at work.
- They requested recommendations on where to begin this journey in CUDA programming.
- Neural Networks and CUDA: The same member humorously mentioned they 'train neural networks in their sleep,' indicating prior experience in AI.
- This background may assist in their transition to CUDA, but they are seeking practical steps.
CUDA MODE ▷ #pmpp-book (3 messages):
Programming Massively Parallel Applications Solutions
- Potential Solutions for Programming Massively Parallel Applications: mistborn17 asked where to find solutions for the book on programming massively parallel applications.
- mr.osophy replied that the answers may be indicated in the pinned message but couldn't confirm the existence of official solutions.
- Accessing Solutions Requires Effort: mr.osophy emphasized that to gain access to the solutions, one must first attempt the problems and send a picture for verification.
- This process indicates a proactive approach to solving the book's challenges before seeking solutions.
CUDA MODE ▷ #torchao (40 messages🔥):
Quantization Strategies
Triton vs CuBLAS Performance
Kernel Performance Benchmarks
FP8 and INT8 Comparison
Contributing to TorchAO
- Choosing Between FP8 and INT8: A discussion emerged on whether to use FP8 or INT8 quantization for models, with insights that INT8 tends to improve accuracy with weight-only quantization in many scenarios, especially for compute-bound tasks.
- One member noted that FP8 might yield better results in large quantization groups, while another emphasized that FP8 operations currently face less support due to the need for conversions.
- Triton Outperforms CuBLAS in INT8 MatMul: It was shared that Triton can outperform CuBLAS on consumer GPUs with INT8 matmul, although this performance gain is more pronounced with larger matrices due to launch overhead on smaller sizes.
- Members discussed that while Triton's autotune feature helps find optimal launch parameters, some implementations in Triton might face challenges with caching and kernel launch efficiency.
- The Need for Custom Kernels: Contributors discussed the potential need to write custom kernels for older hardware like the T4 to improve compatibility and performance, particularly in quantization-related tasks.
- It was suggested that optimizing for specific hardware could enable more users with similar setups to contribute effectively to the project.
- Exploring DGQ Implementation: Members expressed concerns about the DGQ implementation, which returns FP32 rather than using quantization effectively, while acknowledging its Cutlass implementation as fairly clean.
- Discussion around the need for improved open-source options that address weight plus activation quantization highlighted the lack of comprehensive solutions for A8WN currently.
- Struggles with Triton Overhead: The discussion highlighted a trade-off with Triton's launch overhead, suggesting that smaller to mid-size matrices may not benefit significantly from the higher performance generally seen in large matrix operations.
- Despite the potential for Triton to write effective GEMM operations, challenges remain regarding caching and kernel reuse, especially lacking proper documentation.
- Issues · pytorch/pytorch: Tensors and Dynamic neural networks in Python with strong GPU acceleration - Issues · pytorch/pytorch
- DGQ/dgq/kernels/linear.cu at main · ilur98/DGQ: Official Code For Dual Grained Quantization: Efficient Fine-Grained Quantization for LLM - ilur98/DGQ
- Issues · pytorch/ao: PyTorch native quantization and sparsity for training and inference - Issues · pytorch/ao
- Issues · pytorch/ao: PyTorch native quantization and sparsity for training and inference - Issues · pytorch/ao
- ao/torchao/_models/sam/results.csv at main · pytorch/ao: PyTorch native quantization and sparsity for training and inference - pytorch/ao
- quantized-training/benchmark_mm.py at main · gau-nernst/quantized-training: Explore training for quantized models. Contribute to gau-nernst/quantized-training development by creating an account on GitHub.
- GitHub - FasterDecoding/TEAL: Contribute to FasterDecoding/TEAL development by creating an account on GitHub.
CUDA MODE ▷ #off-topic (59 messages🔥🔥):
Open-Sora project
DLSS inferencing
ESRGAN and SD for upscaling
Temporal consistency issues in video upscaling
Image/video generation compute limitations
- Open-Sora project may be unfeasible: A member expressed concerns that the Open-Sora project may not be realistic due to a lack of compute resources needed for such a large-scale effort.
- They considered shifting focus to graphics-related projects inspired by llm.c with an emphasis on upscaling older animation videos.
- Exploring DLSS and ESRGAN for upscaling: Members discussed using DLSS inferencing, ESRGAN, and SD as potential techniques for upscaling images and videos, noting the importance of selecting a method to start with.
- DLSS is primarily for gaming and may leverage additional inputs like depth maps, while ESRGAN holds historical significance in the upscaling landscape.
- Challenges in video upscaling: The conversation highlighted the temporal consistency issues that arise when upscaling videos frame-by-frame using image techniques, complicating the output quality.
- Despite this, members agreed that it is still valid to process videos on a frame-by-frame basis, acknowledging the compute and memory constraints involved.
- Need for compute resources in image generation: A member pointed out that both image and video generation efforts are limited by UNet architecture and denoising time, affecting overall processing speed.
- They advised that any chosen upscaling technique remains compute-bound, highlighting the necessity of robust computing capabilities for effective experimentation.
- Relevant resources shared: Links to resources were shared, including a GitHub project on text-to-image latent diffusion and a YouTube talk that provides insights into the current state of methods.
- Members were encouraged to explore community discussions on platforms like r/StableDiffusion for broader context on upscaling and related image generation techniques.
- Reddit - Dive into anything: no description found
- Reddit - Dive into anything: no description found
- Structured Git Commit Message: Structured Git Commit Message. GitHub Gist: instantly share code, notes, and snippets.
- GitHub - apapiu/transformer_latent_diffusion: Text to Image Latent Diffusion using a Transformer core: Text to Image Latent Diffusion using a Transformer core - apapiu/transformer_latent_diffusion
CUDA MODE ▷ #irl-meetup (1 messages):
ssp3ll: I am in Toronto as well
CUDA MODE ▷ #triton-puzzles (2 messages):
Triton Puzzles
Gradio app issues
- Triton Puzzles SVG Display Issue: A user reported that when running Triton Puzzles on Google Colab, the Gradio app fails to display SVGs inline, forcing users to download them instead.
- Another user confirmed they encountered the same issue, indicating that it might be a common problem among other users.
- User Experience with Google Colab: The discussion centered around the overall experience of using Google Colab with Triton Puzzles, specifically focusing on certain functionalities that may not work as expected.
- Members are looking for insights on potential workarounds or solutions for enhancing their experience while running the puzzles.
CUDA MODE ▷ #triton-viz (2 messages):
Gradio version issue
- Gradio's latest version causes image loading issues: A user inquired about resolving an issue with images not displaying, suspecting a potential problem.
- Another member clarified that the issue was due to the latest Gradio version (4.43), which does not support SVG files.
- User finds solution for image issue: The original user resolved their image loading issue, confirming it was related to the outdated functionality of the Gradio version.
- They noted that switching away from Gradio 4.43 would solve the image rendering problem.
CUDA MODE ▷ #llmdotc (138 messages🔥🔥):
Llama 3 support
RMSNorm implementation
GQA implementation
CUDA mode preparations
cuDNN and non-cuDNN path considerations
- Llama 3 Support Progress: Work is being done on adding support for Llama 3, including integrating the encoder and ensuring compatibility with PyTorch tokens of dtype uint32_t.
- The addition of the encoder forward, which skips positional encodings, matches the current implementation in PyTorch.
- RMSNorm Forward Implementation: The RMSNorm forward function has been added and is now unfused, aligning with existing implementations, indicating successful integration into the broader architecture.
- Key changes include maintaining the structure to avoid unnecessary complications, while also allowing efficient data handling in shared memory.
- Preparations for GQA Implementation: After RMSNorm, the next focus is on implementing GQA within the CUDA kernel framework, continuing the momentum towards complete support for Llama 3.
- There are discussions on whether to pursue a cuDNN or manual implementation route to facilitate the GQA integration more efficiently.
- Dynamic Threadgroup Size in Kernels: A new approach has been adopted to remove fallback kernels in favor of a dynamic threadgroup size which adapts based on available shared memory.
- This decision aims to optimize performance by ensuring that kernel launches adjust to the hardware conditions encountered during execution.
- Community Collaboration and Testing: Members are discussing potential strategies to review and test new implementations effectively within the team, emphasizing collaboration ahead of CUDA Mode.
- Testing will include comprehensive comparisons against the PyTorch reference, ensuring functional parity across the board.
- Compiler Explorer - CUDA C++ (NVCC 12.5.1): #include <cuda/barrier> #include <cuda/std/utility> // cuda::std::move #include <cooperative_groups.h> #include <cooperative_groups/reduce.h> t...
- llm.c/llmc/layernorm.cuh at llmc_reorg2 · ademeure/llm.c: LLM training in simple, raw C/CUDA. Contribute to ademeure/llm.c development by creating an account on GitHub.
- Compiler Explorer - CUDA C++ (NVCC 12.5.1): #include <cuda/barrier> #include <cuda/std/utility> // cuda::std::move #include <cooperative_groups.h> #include <cooperative_groups/reduce.h> #i...
- Compiler Explorer - CUDA C++ (NVCC 12.5.1): #include <cuda/barrier> #include <cuda/std/utility> // cuda::std::move #include <cooperative_groups.h> #include <cooperative_groups/reduce.h> #i...
- add llama 3 support to llm.c by karpathy · Pull Request #754 · karpathy/llm.c: This branch starts with a copy paste of train_gpt2.cu and test_gpt2.cu, but these two files (and other files) will change to incorporate Llama 3.1 support, before merging back to master.
- Add RoPE positional encoding - llama3 feature branch by gordicaleksa · Pull Request #756 · karpathy/llm.c: Implemented RoPE - rotary position embedding from the RoFormer paper. Note: I do not conditionally remove the allocation of our learnable position embedding buffer (wpe) as that would require touc...
- RMSNorm - WIP by gordicaleksa · Pull Request #757 · karpathy/llm.c: WIP - adding RMSNorm support.
- cuDNN GQA implementation for Llama3.1 (not yet tested with NH_KV != N… · ademeure/llm.c@877c9fa: …H_Q)
- cudnn-frontend/docs/operations/Attention.md at main · NVIDIA/cudnn-frontend: cudnn_frontend provides a c++ wrapper for the cudnn backend API and samples on how to use it - NVIDIA/cudnn-frontend
- Add SwiGLU support - llama3 feature branch by gordicaleksa · Pull Request #755 · karpathy/llm.c: Implemented SwiGLU - swish GLU activation function from the "GLU Variants Improve Transformer" paper. Note: there is an increase in memory footprint as a consequence of adding an add...
CUDA MODE ▷ #lecture-qa (1 messages):
NCCL Test
MPI Command Issues
- Challenge Running NCCL-Test on Multiple Nodes: A member inquired about running nccl-test across multiple nodes, stating that the mpi command causes the test to freeze.
- They confirmed that the benchmark runs fine on a single node without MPI, highlighting potential issues with multi-node configurations.
- Single Node Success with NCCL-Test: The same member reported that running nccl-test works perfectly on a single node, indicating that the issue lies specifically with multi-node setups.
- This contrasts with their experience when attempting to scale the benchmark with MPI, prompting a need for further debugging.
CUDA MODE ▷ #bitnet (22 messages🔥):
BitNet training state
Ternary model distillation
BitNet efficiency concerns
Custom hardware for quantization
Packing ternary weights
- Current State of BitNet Training: Members discussed the latest state of training a BitNet model, noting that there have been no significant updates or successful trials recently.
- One pointed out that while distillation could be interesting, current attempts have not yielded effective results.
- Exploring Ternary Model Distillation: A member proposed experimenting with distilling a trained model into ternary quantization, highlighting its potential benefits while seeking prior attempts on this approach.
- Another agreed, stating that moving towards ternary weight quantization could provide a considerable challenge.
- BitNet and GPU Efficiency Issues: Concerns were raised about BitNet's inefficiency on GPUs, with members stating that the proposed bitwise ops lead to slower performance compared to conventional matrix multiplication.
- The conversation revealed that while BitNet operates with 1.58 bits, the actual computational advantages on current hardware appear minimal.
- Potential of Custom Hardware in AI: A discussion emerged about the potential of custom hardware for implementing binary methods, specifically mentioning a company's efforts to optimize neural network performance.
- Members noted that this could lead to significant improvements in efficiency, as evidenced by claims of running neural nets with 5x less RAM and 20x faster.
- Efficient Packing of Ternary Weights: One member shared a Python implementation to pack 5 ternary weights into an 8-bit format, indicating a potential memory efficiency over the conventional method.
- The proposed packing method maintains accuracy while leveraging a small lookup table for unpacking the values.
- deepsilicon: no description found
- Tweet from Diana (@sdianahu): Deepsilicon runs neural nets with 5x less RAM and ~20x faster. They are building SW and custom silicon for it. What’s interesting is that they have proved it with SW, and you can even try it. On w...
CUDA MODE ▷ #cudamode-irl (10 messages🔥):
LLM Internals Visualization
Hack Ideas Forum
Custom Kernels Document
Project Thumbs Up
Access Issues on Forum
- Interest in Visualizing LLM Internals: There's a suggestion for a project that involves creating a visualization of LLM internals, similar to a real-time attention head update. This could enhance understanding of how prompts affect LLM responses dynamically.
- A member encouraged adding project ideas to the hack ideas forum to centralize discussion and team formation.
- Hack Ideas Forum Centralization: The hack ideas forum is intended for collecting and categorizing project ideas, facilitating attendee discussions and potential voting. It's suggested that all ideas should be posted there instead of scattered across different documents.
- The forum currently restricts access to confirmed attendees, and a member inquired about making it open for remote participants to collaborate.
- Vote for Interesting Hack Ideas: A reminder was sent for participants to review and rank ideas in the hack-ideas thread by giving a thumbs up to those they find interesting. This process aims to prioritize which ideas should receive more attention as the hack session approaches.
- Encouraging engagement, it emphasizes assessing ideas for better reviewing and resource allocation during the event.
- Access Concerns for the Forum: A user expressed concerns about access issues to the hack ideas forum, stating they received a 'No Access' message. Members acknowledged that the forum visibility is limited to confirmed audience members and discussed the potential for making it visible to all.
- Discussion on Custom Kernels Documentation: A member asked whether a Google Doc exists for sharing custom kernels information, which initiated further discussion about centralizing such content in the forum. Others confirmed shifting all hack ideas and related discussions to the forum for better organization.
Link mentioned: Tweet from Brandon (@brandon_xyzw): Data from an "attention head" in an LLM when you update the prompt in realtime
CUDA MODE ▷ #liger-kernel (29 messages🔥):
Release of Liger-Kernel v0.3.0
Conferences in Europe
Sparse Mixture of Experts Implementation
Triton LayerNorm Issues
Building from Source on Ubuntu
- Liger-Kernel v0.3.0 officially released!: The team announced the release of Liger-Kernel v0.3.0, expressing gratitude for community support which fueled their innovation.
- They highlighted major advancements being made and issued an invitation for users to try the latest features.
- European Conferences Haven't Captured Interest: A member reflected on the lack of intriguing conferences in Europe, stating that most events focus on mainstream topics like AWS.
- They expressed disappointment in not finding exciting discussions around specific topics like CUDA mode or similar.
- Sparse Mixture of Experts Implementation found: A member shared research findings on Sparse Mixture of Experts and its implementation in Triton.
- They noted that the implementation outperforms Megablocks but acknowledged performance issues with BF16 on single GPU setups.
- Issues with Triton LayerNorm Implementations: A member reported non-deterministic behavior in their Triton LayerNorm implementations when using tensor parallelism greater than 1.
- They are seeking alternative implementations and reached out to the Liger team for any insights or testing results regarding kernel performance.
- Building from Source on Ubuntu: One user pointed out that the command
pip install -e .
only functions correctly after creating a virtual environment on Ubuntu 24.04.- They suggested that this explicit step should be included in the README for clarity.
- Release v0.3.0 Release Note · linkedin/Liger-Kernel: Opening Thoughts Thank you, everyone! Your overwhelming support continues to fuel our passion for innovation. With your engagement, we've pushed the boundaries further in this release! We are h...
- GitHub - shawntan/scattermoe: Triton-based implementation of Sparse Mixture of Experts.: Triton-based implementation of Sparse Mixture of Experts. - GitHub - shawntan/scattermoe: Triton-based implementation of Sparse Mixture of Experts.
LM Studio ▷ #general (223 messages🔥🔥):
GPU Acceleration Issues
Model Compatibility and Performance
Using LM Studio on Linux
Long Prompt Support in Models
Exploring AI Tools and Resources
- Troubleshooting GPU Acceleration: Users reported issues with GPU acceleration not being utilized in LM Studio, receiving advice to check the settings under Developer > LM Runtimes.
- After troubleshooting, one user confirmed their GPU usage increased significantly, suggesting earlier settings were incorrectly configured.
- Choosing Compatible Models: Discussion about model compatibility revealed that LM Studio primarily supports GGUF models, and there were queries about features that are not usable.
- Users noted that while some models are listed, not all are functional within LM Studio, specifically regarding multimodal capabilities.
- Using LM Studio on Linux: Users noted differences between the Windows and Linux versions of LM Studio, with one user expressing the challenges of setting up ROCm for their hardware.
- They shared experiences of experimenting with models and finding compatible quantizations for better performance.
- Long Prompt Handling and Capabilities: The handling of long prompts was discussed, with recommendations for models that support larger token limits, like Llama 3.1 and Mistral Nemo.
- Users expressed interest in how to best utilize instruct versions of models for specific tasks.
- Resources for Learning and Improvement: Participants shared resources for improving LLM usage, such as creating documentation and unit tests for codebases.
- A user indicated a desire to deepen their understanding of LLMs to formulate better prompts and maximize efficiency.
- 👾 LM Studio - Discover and run local LLMs: Find, download, and experiment with local LLMs
- Cursor: The AI Code Editor
- Watch abliteration_NotebookLM | Streamable: no description found
- lmstudio-community/datagemma-rag-27b-it-GGUF · Hugging Face: no description found
- Acting Nod GIF - Acting Nod Hmph - Discover & Share GIFs: Click to view the GIF
- What is an instruct model? - Instruction and Chat Fine-Tuning: As you browse through generative AI models, you will see some of the LLMs listed with the suffix 'instruct' or 'chat'. What does this mean?
- Tokyo Zero Ebook Podcast Discussion (AI Generated): no description found
LM Studio ▷ #hardware-discussion (101 messages🔥🔥):
Strix Halo APU
NVIDIA RTX 4090
Power Supply Units Comparison
System RAM for Larger Models
Water Cooling Solutions
- Strix Halo APU potential: Discussion focused on the Strix Halo APUs' capability for running large AI models, mentioning the ability to allocate up to 20GB to the iGPU and ROCm support.
- However, a member countered that offloading tasks between CPU and GPU could lead to slower performance.
- Performance of RTX 4090 with LLMs: A member reported achieving 110 tokens per second using three RTX 4090 cards with LM Studio when querying for world domination strategies using AI.
- This efficiency prompted conversations about power supply limitations and the best setups for maximizing GPU performance.
- Power Supply Units Comparison: Members discussed different power supply units, highlighting that Titanium rated supplies, while generally superior, underperformed in specific setups, such as failing to support three GPU configurations.
- It was suggested that the build quality of the power supply also significantly impacts performance, leading to recommendations for brands like EVGA and Seasonic.
- System RAM and Model Sizes: Questions arose regarding how much RAM is necessary to efficiently run large models, with anecdotal opinions suggesting that 192GB DDR5 might be adequate for models like Llama 3.1.
- Another member suggested that 64GB might suffice for running certain 70B models if optimized correctly.
- Water Cooling Solutions for GPUs: Members expressed interest in water-cooling setups for their GPUs, particularly discussing the aesthetics and functionality of one-slot designs.
- There was enthusiasm for custom water cooling solutions that could potentially fit multiple cards directly onto motherboards.
- Pout Kiss GIF - Pout Kiss Blowing A Kiss - Discover & Share GIFs: Click to view the GIF
- Oof Disappointed GIF - Oof Disappointed Facepalm - Discover & Share GIFs: Click to view the GIF
- The Cooler Master V Platinum V2 1600W ATX 3.1 PSU Review: Quiet Giant: no description found
- Apple M1 - Wikipedia: no description found
- GitHub - city96/ComfyUI_NetDist: Run ComfyUI workflows on multiple local GPUs/networked machines.: Run ComfyUI workflows on multiple local GPUs/networked machines. - city96/ComfyUI_NetDist
- MacBook Pro (16-inch, 2021) - Technical Specifications - Apple Support (CA): no description found
- MacBook Pro (16-inch, Nov 2023) - Technical Specifications - Apple Support (CA): no description found
- 13869 Alphacool ES Geforce RTX 4090 Reference 1-Slot-Design: Alphacool 1U Wasserkühler für Nvidia Geforce RTX 4090 – für Server & Workstation
- TRX50 AI TOP Key Features | Motherboard - GIGABYTE Global: no description found
- LINKUP - AVA5 PCIE 5.0 Riser Cable | Future Proof for Gen 5 GPU Vertical Mount | x16 128GB/s Speed | Compatible with PCIe 4.0 & WRX80/WRX90E | Right Angle, Black 15cm : Amazon.com.au: Electronics: no description found
Latent Space ▷ #ai-general-chat (116 messages🔥🔥):
OpenAI's o1 models
Cursor AI's coding assistant
AI Scaling Laws
Anthropic evals
Funding rounds for AI startups
- OpenAI's o1 Models Introduced: OpenAI has released o1 models designed for improved reasoning on complex tasks, attracting attention for their potential in scientific and coding applications.
- The new models reportedly outperform older versions but still struggle with large edits, a challenge Cursor AI is addressing with their specialized coding assistant.
- Funding for AI Startups Soars: 11x AI raised $24 million in Series A funding, highlighting their rapid growth with a 15x increase in ARR and the launch of new digital workers.
- Similarly, Supermaven AI secured $12 million to develop an AI-focused text editor that integrates seamlessly with their models.
- AI Scaling Laws Explained: A video providing a comprehensive overview of AI scaling laws has gained traction this week, emphasizing the relevance of recent research.
- The discussion pointed out the evolving understanding surrounding scaling laws and their implications for future AI model development.
- Anthropic's New Evals Course: Anthropic has launched a course focused on LLM prompt evaluations, aimed at ensuring production readiness by identifying edge cases in prompts.
- The course includes methodologies for providing numerical scores, addressing common challenges faced by users when evaluating models.
- Community Insights on Model Development: Discussions within the community reveal mixed feelings about o1's capabilities, with opinions on its potential and limitations shared openly.
- Participants expressed eagerness to explore the full functionality of new AI models while also questioning job roles in traditional software engineering.
- Tweet from Wenmeng Zhou (@zhouwenmeng): Qwen-q1 ? ? 🍓🍓🍓🍓🍓
- Tweet from Aaron (@aaronp613): Apple has released 3 new videos promoting Apple Intelligence featuring Bella Ramsey 🧵 1st: More personal Siri
- Tweet from william (@wgussml): what most people will miss is that o1 is significant precisely because it isn’t an SFT on synthetic data the fact that rl on CoT unconstrained works and doesn’t collapse to gibberish cot steps is rea...
- Tweet from Zeyuan Allen-Zhu (@ZeyuanAllenZhu): Just uploaded a 1-hr exclusive video for Part 2.1, with many technical details. https://youtu.be/bpp6Dz8N2zY. Part 2.2 will be online in about a week. Quoting Zeyuan Allen-Zhu (@ZeyuanAllenZhu) (1/...
- Tweet from Liv (@livgorton): seems sort of surprising to me that John Schulman, previous head of post-training and first author of PPO paper, didn’t contribute to a model that plausibly required a lot of RL? it’s possible that he...
- Reverse engineering OpenAI’s o1 : What productionizing test-time compute shows us about the future of AI. Exploration has landed in language model training.
- Near-Instant Full-File Edits: no description found
- Tweet from Mario Schlosser (@mariots): On claims adjudication: Thousands of rules in natural language, including contracts, best practices, and guidelines, determine medical service costs. Synthesizing these rules is extremely cumbersome,...
- Using reasoning for routine generation | OpenAI Cookbook: Open-source examples and guides for building with the OpenAI API. Browse a collection of snippets, advanced techniques and walkthroughs. Share your own examples and guides.
- Tweet from Alex Albert (@alexalbert__): Our latest course on LLM prompt evaluations is out. Evals ensure your prompts are production-ready as you're able to quickly catch edge cases and zero in on exactly where your prompts need work. ...
- Tweet from Luma AI (@LumaLabsAI): 🚀 Introducing the Dream Machine API. Developers can now build and scale creative products with the world's most popular and intuitive video generation model without building complex tools in thei...
- Tweet from Maxim Lott (@maximlott): Just plotted the new @OpenAI model on my AI IQ tracking page. Note that this test is an offline-only IQ quiz that a Mensa member created for my testing, which is *not in any AI training data* (so sco...
- Tweet from 11x (@11x_official): 👋🏻 Hey from Alice & Jordan - We just raised a $24m Series A from @benchmark! Read our full blog post here: https://www.11x.ai/blog/series-a Some highlights so far this year: - Increased our ARR by...
- Tweet from Cursor (@cursor_ai): OpenAI’s new o1 models are available in Cursor! We’ve found o1 to be excellent at well-specified, reasoning-intense problems. We still recommend sonnet/4o for most tasks. We’re initially rolling out...
- Tweet from Smoke-away (@SmokeAwayyy): The email:
- Tweet from Supermaven (@SupermavenAI): We've raised $12 million from Bessemer Venture Partners to build an AI-focused text editor that integrates tightly with our models.
- Tweet from OpenRouter (@OpenRouterAI): We're publishing a temporary dashboard to help users understand o1's reasoning tokens:
- Tweet from Scott Stevenson (@scottastevenson): 1. It solves long document revision. Lawyers don't draft contracts much from scratch, they start with a precedent and modify it for their current deal. It was really hard to get GPT4 to perform ...
- Tweet from milton (@tensor_fusion): I don’t usually watch YT videos on AI/ML explainers (sans Karpathy/3blue1brown) but this has good production value. Nice overview of the Scaling laws (from Kaplan 2020 to more recent results). (bonu...
- Tweet from Riley Goodside (@goodside): o1 prompting is alien to me. Its thinking, gloriously effective at times, is also dreamlike and unamenable to advice. Just say what you want and pray. Any notes on “how” will be followed with the dil...
- Tweet from Jessica Lessin (@Jessicalessin): Another day, more details from @theinformation about @OpenAI's massive new funding round. I think this would be a very good time to start thinking about how these investors (now including hedge ...
- Reddit - Dive into anything: no description found
- courses/prompt_evaluations at master · anthropics/courses: Anthropic's educational courses. Contribute to anthropics/courses development by creating an account on GitHub.
- openai-python/chatml.md at 120d225b91a8453e15240a49fb1c6794d8119326 · openai/openai-python: The official Python library for the OpenAI API. Contribute to openai/openai-python development by creating an account on GitHub.
- xAI Sign-In: no description found
- PromptIde: no description found
- Create API Key - xAI Developer Platform: no description found
Latent Space ▷ #ai-announcements (1 messages):
OpenAI API
Structured Outputs
ChatGPT Latest
Voice Mode
O1 Meetup
- New Podcast on OpenAI Uncovered: The latest podcast episode features a one-hour conversation covering Structured Outputs, ChatGPT-latest, and gpt-4o, along with answers to various API questions.
- It includes insights from the O1 emergency meetup and a recap of the OpenAIDevs AMA session.
- Insights into Structured Outputs: The podcast explores the differences between Structured Outputs and function calling, discussing the implementation challenges and use cases for developers.
- Key topics included the role of JSON Schema and the Structured Output Roadmap that is being developed.
- Voice Mode API Discussion: The episode delves into the new Voice Mode API, which allows for more interactive and dynamic conversation capabilities.
- It emphasizes how this feature can transform user interactions with AI on various platforms.
- Recap from O1 Meetup: A Q&A session from the O1 emergency meetup was featured, where members discussed their experiences and challenges in developing with OpenAI tools.
- Listeners gained insights into community-driven solutions and contributions to ongoing development issues.
- ChatGPT Scaling Strategies: Strategies for scaling ChatGPT were discussed, particularly focusing on increased latency and prompt/schema caching techniques for optimization.
- The team addressed concerns over model reproducibility and evolving tiering and rate limiting strategies for the API.
Link mentioned: Tweet from Latent.Space (@latentspacepod): 🆕 pod: From API to AGI: Structured Outputs, OpenAI API platform and O1 Q&A Our @openai weekend special! https://latent.space/p/openai-api-and-o1 - 1hr convo with @michpokrass on Structured Outputs...
Latent Space ▷ #ai-in-action-club (133 messages🔥🔥):
Cursor Scaling Issues
HTEC AI Report
Neovim Resources
Vim Challenges
AI Programming Content
- Cursor faces scaling challenges: Members expressed concerns about Cursor's scaling issues, especially its code completion and document generation features.
- One user suggested their initial experience might have been limited due to the default settings when first installed.
- HTEC's report on AI copilots: The nearshore consultancy HTEC published a report on their experiences with 26 AI coding tools, although access requires signing up.
- Members discussed whether the brief usage and limitations noted in the report truly reflected the tools' capabilities.
- Neovim resources for beginners: Users shared valuable Neovim resources, including a YouTube playlist aimed at helping users master the editor.
- Another mentioned Kickstart as a helpful guide for setting up Neovim configurations.
- Challenges of mastering Vim: Members noted the steep learning curve for Vim, highlighting that while it may slow you down initially, it significantly improves efficiency once mastered.
- Several expressed regret about not learning Vim earlier and shared experiences from transitioning to Cursor and Claude for coding tasks.
- Recommendations for AI programming content: Members acknowledged the community's value in sharing updates on AI programming tools and techniques, emphasizing the desire for practical applications showcased.
- Additionally, users recommended notable content creators like Mckay Wrigley and Riley Brown, who provide quality material focused on AI in programming.
- GPT Engineer: Build software products, using only a chat interface
- Understanding Neovim: Becoming a wizard at configuring Neovim!
- v0.dev → openv0.dev: Openv0.dev is an open-source AI model that generates Tailwind CSS UI from simple text prompts. We believe in the power of open-source because it fosters collaboration and transparency. That's ...
- GitHub - ThePrimeagen/harpoon at harpoon2: Contribute to ThePrimeagen/harpoon development by creating an account on GitHub.
- GitHub - nvim-lua/kickstart.nvim: A launch point for your personal nvim configuration: A launch point for your personal nvim configuration - nvim-lua/kickstart.nvim
- GitHub - latentspacenotes/latentspacenotes.github.io: Contribute to latentspacenotes/latentspacenotes.github.io development by creating an account on GitHub.
- GitHub - tris203/precognition.nvim: 💭👀precognition.nvim - Precognition uses virtual text and gutter signs to show available motions.: 💭👀precognition.nvim - Precognition uses virtual text and gutter signs to show available motions. - tris203/precognition.nvim
- GitHub - raidendotai/openv0: AI generated UI components: AI generated UI components. Contribute to raidendotai/openv0 development by creating an account on GitHub.
Interconnects (Nathan Lambert) ▷ #news (41 messages🔥):
OpenAI o1 Benchmark Results
Humanity's Last Exam
Lobbying and AI Policy
Dan Hendrycks Controversy
AI and Compute Budgets
- OpenAI's o1 Performance Sparks Debate: Members discussed the recent access to OpenAI's o1-preview and o1-mini models, which are designed for improved reasoning, highlighting concerns about the fairness of existing benchmarks given the varied compute budgets.
- One member suggested that a fairer evaluation would involve matching compute budgets across models and using pass@k scoring (see the sketch below).
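For reference, pass@k is normally computed with the unbiased estimator from the HumanEval paper (Chen et al., 2021): given n generated samples of which c pass, it estimates the probability that at least one of k drawn samples passes.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # every size-k draw must include a passing sample
    return 1.0 - comb(n - c, k) / comb(n, k)

print(pass_at_k(n=20, c=5, k=1))   # 0.25
print(pass_at_k(n=20, c=5, k=10))  # rises sharply with more attempts
```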
- Humanity's Last Exam Launch Announced: Dan Hendrycks and collaborators launched Humanity's Last Exam, inviting submissions for challenging AI questions with a $500,000 prize pool available for the best submissions by November 1, 2024.
- Members expressed mixed feelings about the initiative's implications, speculating on how high performance might influence political lobbying concerning AI regulations.
- Concerns Over AI Lobbying: Dan Hendrycks' influence within AI policy and his connections to politicians led to discussions about potential future regulatory actions based on performance metrics from initiatives like Humanity's Last Exam.
- Several participants raised concerns over his lobbying role and how it intertwines with his technical background in AI.
- AI Advocacy vs Politics: Members debated the fine line between AI advocacy and political lobbying, considering Dan Hendrycks' dual role as both an advocate for AI safety and an advisor at a lobbying-focused AI company.
- Some expressed annoyance at the political aspects creeping into AI discussions, with one member noting the complicated intersection of personal values and the hype surrounding AI.
- Graduate Life Reflection: A member reminisced about the positive experiences of being a graduate student during a rapidly evolving AI environment, expressing that it's important to focus on what is enjoyable.
- Discussions highlighted the camaraderie and intellectual excitement often found in academic settings.
- Tweet from Colin Fraser (@colin_fraser): One thing I noticed with my last few o1-mini credits for the week is an error in reasoning can cause the Chain-of-Thought babbling to spiral out of control, simultaneously reinforcing the error and in...
- Tweet from Dan Hendrycks (@DanHendrycks): Have a question that is challenging for humans and AI? We (@ai_risks + @scale_AI) are launching Humanity's Last Exam, a massive collaboration to create the world's toughest AI benchmark. Subm...
- OpenAI o1 Results on ARC-AGI-Pub: How far are the o1 preview and mini models from AGI?
- Sam Altman told OpenAI staff the company’s non-profit corporate structure will change next year: The OpenAI CEO had previously admitted that the company’s structure is “unusual.” Now he's made it clear, it's time to change it.
- Tweet from Wenmeng Zhou (@zhouwenmeng): Qwen-q1 ? ? 🍓🍓🍓🍓🍓
Interconnects (Nathan Lambert) ▷ #ml-questions (5 messages):
Self Reflection Papers
Consistency in Reasoning and Outputs
- Search for Self Reflection Papers: A member queried for good papers on self reflection, indicating a lack of available resources on the topic.
- They noted a specific interest in consistency between reasoning and outputs, although could not recall the exact title.
- Lighthearted Banter around the Query: Another user humorously responded to the original query with a laugh, indicating a light-hearted atmosphere in the discussion.
- The banter continued with additional laughter from other participants, demonstrating an engaging community dynamic.
Interconnects (Nathan Lambert) ▷ #ml-drama (7 messages):
Matt from IT
Meme content
Planetcoo Barbarons theory
Reinforcement Learning observations
- Missing Matt from IT: A member expressed their longing for Matt from IT, lamenting the lack of enjoyable content since his departure.
- Noting a lack of memes, the sentiment was shared that his absence has left a gap in the community's engagement.
- Demand for more meme content: There was a call for more meme content similar to what was shared in a tweet by abrakjamson regarding Planetcoo Barbarons.
- The emphasis on humor and relatable memes is seen as essential to keep the community lively.
- Reinforcement Learning's effects on language: A member referenced a tweet by karpathy highlighting that when reinforcement learning is executed properly, models start to lose English coherence in their thoughts.
- This observation sparked discussion on the nuances of language changes in AI under different training methodologies.
- "Visionary" Perception of Matt: Another member commented on the change in perception around Matt, as some now view him as a visionary post-departure.
- This sparked a discourse about the seriousness of such claims, revealing a mix of admiration and skepticism within the community.
- Tweet from Abram Jackson (@abrakjamson): My theory on why we aren't allowed to see the thinking: it is all Planetcoo Barbarons.
- Tweet from Andrej Karpathy (@karpathy): You can tell the RL is done properly when the models cease to speak English in their chain of thought
Interconnects (Nathan Lambert) ▷ #random (53 messages🔥):
OpenAI o1 models
Nando's transition to Microsoft AI
Gemini Flash 1.5 performance
Model reasoning mechanics
Token economics
- OpenAI's o1 models raise eyebrows: OpenAI released its new models, o1-preview and o1-mini, with the name possibly hinting at the O-1 visa for 'aliens of extraordinary ability'. Observers noted that both models have intriguing reasoning patterns, prompting discussions about their functionality.
- A user shared interesting observations that mini does not reason longer than preview yet produces longer responses, an unexpected outcome for many.
- Nando joins Microsoft AI team: NandoDF announced his new position at Microsoft AI, focused on large scale multimodal research and product development. This unexpected move prompted speculation about his ability to shape the future of AI given the small yet ambitious team.
- Members expressed surprise at Nando's transition, considering it a significant career win, emphasizing the lucrative nature of such opportunities.
- Gemini Flash 1.5 takes the lead: Gemini Flash 1.5 was reported to have taken the lead over MythoMax in monthly rankings, marking a milestone in performance. Remarkably, a user highlighted generating 28B tokens in just two days, showcasing the power of the model.
- This raises questions about the efficiency of training processes, especially given the current economics of data generation.
- Discussions on model reasoning: Conversations occurred around the distinctions between reasoning and completion in models, with insights suggesting that reasoning visibility is largely user-dependent. One user challenged the premise, asserting that the distinctions in generation methodology aren't as pronounced as depicted.
- This prompted further dialogue regarding the token economics of reasoning processes, including comparison with traditional generative structures.
- Anticipation builds for upcoming content: Hints about an episode featuring Dwarkesh and Ilya stirred excitement among users, with some expressing the desire for it to remain private until release. As discussions flowed, members noted the implications of pre-training data costs, observing dramatic reductions in expenses for model training.
- Tweet from Liv (@livgorton): seems sort of surprising to me that John Schulman, previous head of post-training and first author of PPO paper, didn’t contribute to a model that plausibly required a lot of RL? it’s possible that he...
- Tweet from Noam Brown (@polynoamial): @sog_on_bird_app @OpenAIDevs There is no guarantee the summarizer is faithful, though we intend it to be. I definitely do not recommend assuming that it's faithful to the CoT, or that the CoT itse...
- Tweet from Jacob Rintamaki (@jacobrintamaki): shhh...spoilers 👀
- Tweet from Lucas Nestler (@_clashluke): That was entirely me 😅 28B tokens in Gemini Flash 1.5 over the last two days It's a good model https://x.com/OpenRouterAI/status/1835713079344275809 Quoting OpenRouter (@OpenRouterAI) Top of...
- Tweet from Terry Yue Zhuo (@terryyuezhuo): Although I have done the preliminary evaluations of o1-preview and o1-mini, those results may not strictly reflect what the models can do. I'm now running o1-preview w/ 5 samples per tasks on Big...
- Tweet from Aidan McLau (@aidan_mclau): fascinating o1 observations: >mini DOES NOT reason longer than preview >mini's responses are longer than it's reasoning >preview's reasoning is longer than it's responses >...
- Tweet from Nando de Freitas (@NandoDF): I’ve joined @Microsoft AI to advance the frontier of large scale multimodal AI research and to build products for people to achieve meaningful goals and dreams. The MAI team is small, but well resour...
Interconnects (Nathan Lambert) ▷ #memes (1 messages):
xeophon.: https://x.com/burny_tech/status/1834741998898536774?s=46
Interconnects (Nathan Lambert) ▷ #rl (7 messages):
Reverse Curriculum Learning
LLMs applications
Niche applications in RL
- Reverse Curriculum Learning discussed among RL enthusiasts: There have been some papers emerging recently on Reverse Curriculum Learning in the context of LLMs, but its usage appears limited within the RL community.
- One member noted that it doesn’t see widespread adoption yet, though it's acknowledged as a valid approach.
- Challenges with Reverse Curriculum Learning identified: Discussion pointed out that Reverse Curriculum Learning is often considered clunky and typically suited for niche applications.
- Members indicated that this limitation may explain its rare use in broader scenarios.
Interconnects (Nathan Lambert) ▷ #reads (6 messages):
Ben Thompson's accuracy
Information ingestion stack
David Perell's influence
YouTube video on Ben Thompson
- Ben Thompson's Impressive Insights: Members praised Ben Thompson for his impressively accurate updates, highlighting his ability to discuss technical topics at a high level.
- He reportedly reads papers, and even whole books, when needed, demonstrating a thorough approach to understanding complex issues.
- Curiosity about Thompson's Information Process: A member expressed interest in Ben Thompson's information ingestion stack, noting his consistent accuracy across various topics.
- They pointed to a recent post on Telegram and encryption as an example, along with his commentary on Apple's tax case in Ireland.
- YouTube Video Recommendation: A member recommended watching the YouTube video titled 'How Ben Thompson Built a Writing Empire' which explains how writing newsletters can be financially rewarding.
- The video indicates that Ben Thompson earns millions annually from his writing endeavors, inspiring others in the community.
- Admiration for David Perell: A member shared their long-standing admiration for David Perell, aligning with other participants who appreciate insightful content.
- This sentiment reflects a growing trend among users to engage with and support thought leaders in the writing and tech space.
- Understanding Confusing Concepts: There was a discussion about how many people, including some participants, can take in confusing information and quickly update their current understanding.
- This ability to process complex topics reflects a desire to stay informed and adaptive in a rapidly changing environment.
Link mentioned: How Ben Thompson Built a Writing Empire: What if writing a newsletter could pay your rent? Well, it can. And today, you’re going to learn how.Ben Thompson makes millions of dollars a year with his w...
Interconnects (Nathan Lambert) ▷ #posts (92 messages🔥🔥):
OAI's ChatGPT and API Data
OLMoE Naming Concerns
LLM Model Evolution
Poe Subscription Discussion
Initialism Rankings
- Questions Raised on OAI's Data Influence: One member speculated about how much of OpenAI's ChatGPT/API data influenced the development of their models, suggesting they might have directly utilized or prompt-engineered intriguing CoTs from user data.
- If user interaction data played a big role, replicating this model could be challenging for open-source alternatives.
- Debate Around OLMoE Naming: Users discussed the oddity of the name OLMoE, suggesting that 'Open Mixture of Experts Language Models' doesn't quite abbreviate to OLMoE accurately.
- One user humorously suggested that the name might make more sense in its original French.
- Excitement Over LLM Model Developments: A member expressed newfound excitement over future plans regarding LLMs, stating they weren't initially keen on 2025 developments until recently.
- It was noted that there’s a growing anticipation for a big moment in LLM breakthroughs, reminiscent of past significant milestones.
- Poe Subscription Service Evaluation: Members discussed their preferences for the Poe platform, with one noting that paying $20 can provide access to all LLMs available on the service.
- Concerns were raised about the usability and interface, with some expressing indifference to the platform’s aesthetic compared to competitors like Claude and ChatGPT.
- Ranking the Most Tortured Initialisms: In a lighthearted exchange, members entertained the idea of creating a ranking system for the most tortured initialism, jokingly naming it ALICE - AwfuL aI aCronym rankEr.
- This led to discussion about challenging AI names like SPLADE and Google Gemini Ultra, reflecting on the absurdity of AI branding.
Cohere ▷ #discussions (87 messages🔥🔥):
Fei-Fei Li's Reasoning Method
Command-R-Plus-08-2024 Performance Issues
Linguistic Model Effectiveness
Censorship in LLMs
Nick Frosst and Good Kid
- Inquiry on Fei-Fei Li's Reasoning Method: Members discussed curiosity about Fei-Fei Li's method for solving reasoning problems, seeking insights on her exact approaches.
- There's a clear interest in understanding her techniques in the context of current AI advancements.
- Performance Issues with Command-R-Plus-08-2024: A user reported that the Command-R-Plus-08-2024 model exhibits more repetitive outputs compared to its predecessor when used for creative writing.
- Concern was raised over how long prompts may impact the model's performance, encouraging exploration of alternative models.
- Debate on LLM Effectiveness and Censorship: Members discussed the appropriateness of censorship in language models, particularly in commercial contexts, while emphasizing legal and ethical implications.
- The conversation highlighted a belief that while moderation is essential, over-restriction can hinder a model's potential.
- Nick Frosst's Musical Background: A fun fact shared about Nick Frosst, co-founder of Cohere, revealed his indie rock band Good Kid has gained notable success with millions of Spotify listeners.
- The band, known for programming-themed songs, recently played at Lollapalooza and was nominated for a Juno Award.
- Guidance on Using Cohere for Data Extraction: A new user is seeking advice on utilizing Cohere's models for extracting data from unstructured files using token classification.
- They inquired about the efficacy of chat vs classify models and the best approach for annotating outputs in a multi-label dataset.
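For the chat-based route, a minimal sketch with the cohere Python SDK is below; the model name follows the refreshed Command series, and the invoice-style fields are a hypothetical schema, not anything from the discussion.

```python
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key

text = "Invoice #4411 issued to Acme Corp on 2024-09-12 for $1,250.00."

# Ask the chat model to emit structured JSON; the classify endpoint is
# aimed at label prediction rather than span extraction, so chat fits here.
resp = co.chat(
    model="command-r-plus-08-2024",
    message="Return JSON with keys invoice_number, customer, date, amount "
            "extracted from this text:\n" + text,
)
print(resp.text)  # expected: a JSON object with the four fields
```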
- Tweet from ar (@arattml): nothing useful here you should skip this post https://arxiv.org/abs/2402.05808 https://arxiv.org/abs/2407.03181 https://arxiv.org/abs/2401.08967 https://arxiv.org/abs/2407.00087 https://arxiv.org/abs...
- Dancing Duck Dance Duck GIF - Dancing duck Dance duck Duck - Discover & Share GIFs: Click to view the GIF
- First Rate Town: Song · Good Kid · 2023
- Pricing: Access our models directly through our API to create scalable production workloads.
- GitHub - codelion/optillm: Optimizing inference proxy for LLMs: Optimizing inference proxy for LLMs. Contribute to codelion/optillm development by creating an account on GitHub.
- Cohere co-founder Nick Frosst’s indie band, Good Kid, is almost as successful as his AI company | TechCrunch: When he's not working on building large language models for enterprise customers, Nick Frosst is the frontman of indie rock band Good Kid.
Cohere ▷ #announcements (1 messages):
Cohere Developer Office Hours
Command model family refresh
RAG capability improvements
Safety Modes feature
Updated pricing for Command models
- Join Cohere Developer Office Hours Today!: Cohere will host Developer Office Hours today at 1 PM ET, focusing on the latest updates in the Command model family.
- Hosts will cover topics such as what's new and improvements in RAG capability and Safety Modes.
- Exciting Improvements in Command R Models: The new versions of the Command R model series bring enhancements across coding, math, reasoning, and latency, now more efficient and performant.
- Noteworthy improvements include a 50% increase in throughput and a 20% decrease in latency for the updated Command R model.
- Introducing Safety Modes for Better Control: Cohere's new Safety Modes offer enterprise customers improved model guardrails and greater control over model usage.
- This initiative empowers users to better manage interactions while maintaining model effectiveness.
- Clarifying Retrieval Augmented Generation (RAG) Enhancements: The latest models also show enhanced Retrieval Augmented Generation (RAG) capabilities tailored for nuanced multilingual tasks.
- Trained on 23 languages, the models are fine-tuned to support a range of real-world applications.
- Updated Pricing for Command Models: Cohere has refreshed its pricing for the Command R and Command R+ models, streamlining costs for developers.
- Users can now access Command R+ for $2.50 per million tokens for input and $10.00 for output.
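At those listed rates, per-call cost is simple to estimate; a quick sketch (the 4,000/500 token counts are just an illustrative request size):

```python
# Command R+ cost at the listed rates:
# $2.50 per 1M input tokens, $10.00 per 1M output tokens.
def command_r_plus_cost(input_tokens: int, output_tokens: int) -> float:
    return input_tokens / 1e6 * 2.50 + output_tokens / 1e6 * 10.00

# e.g. a call with a 4,000-token prompt and a 500-token answer:
print(f"${command_r_plus_cost(4_000, 500):.4f}")  # $0.0150
```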
- Updates to the Command R Series: The latest versions of the Command R model series offer improvements across coding, math, reasoning, and latency.
- CohereForAI/c4ai-command-r-08-2024 · Hugging Face: no description found
- CohereForAI/c4ai-command-r-plus-08-2024 · Hugging Face: no description found
- Pricing: Access our models directly through our API to create scalable production workloads.
- Introducing Safety Modes: Cohere Safety Modes provides enterprise customers with greater control over model guardrails.
- Command models get an August refresh — Cohere: no description found
Cohere ▷ #questions (96 messages🔥🔥):
Pretrained VLM usage
Updates on Command-r-08-2024
Local model deployment queries
Similarity search and embeddings
Finetuning chat models with flexible token usage
- Discussion Around Pretrained VLMs: A member inquired about the use of pretrained VLMs due to compute limitations, sparking a conversation about their applicability.
- Others noted that this could lead to broader discussions on model deployment practices in the community.
- Excitement for Command-r Model Updates: A user asked about plans to update the Command-r-08-2024 model, especially regarding enhancing its Korean writing style.
- Team members confirmed ongoing efforts to improve this model's multilingual capabilities and welcomed community feedback.
- Local Model Deployment Challenges: Members discussed deploying models locally on limited Tesla M10 GPUs, highlighting hardware constraints and performance challenges.
- They shared potential solutions for on-premises deployment while acknowledging the current limitations of older hardware.
- Embeddings and Similarity Search Strategies: A user inquired about the best embedding mode for similarity search, with experts recommending search embeddings for effective results.
- Advice was given on using the reranker to maximize the number of relevant results, emphasizing a robust pipeline for handling large datasets.
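A minimal sketch of that retrieve-then-rerank pipeline with the cohere Python SDK, assuming the v3 embed and rerank models; the documents and top_n choice are placeholders.

```python
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key
docs = ["Mars is the fourth planet from the sun.",
        "The quarterly budget increased by 4%.",
        "Cats sleep up to 16 hours a day."]

# Index time: embed documents with input_type="search_document".
doc_embs = co.embed(texts=docs, model="embed-english-v3.0",
                    input_type="search_document").embeddings

# Query time: embed the query with input_type="search_query", run your
# nearest-neighbor search (not shown), then rerank the candidate set.
hits = co.rerank(model="rerank-english-v3.0",
                 query="Which planet is fourth from the sun?",
                 documents=docs, top_n=2)
for r in hits.results:
    print(r.index, round(r.relevance_score, 3), docs[r.index])
```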
- Finetuning Chat Models Without End Tokens: A question arose regarding the possibility of omitting the final <|END_OF_TURN_TOKEN|> during finetuning to maintain conversational flow.
- Despite initial limitations, interest was expressed in exploring this flexibility for fine-tuning chat models in the future.
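For context, Command-style chat templates close each turn with an explicit token. A sketch of the idea follows, with token names taken from the public c4ai-command-r chat template (worth verifying against the tokenizer config):

```python
# Sketch of a Command-style prompt; leaving the assistant turn "open"
# (no trailing END_OF_TURN token) lets generation continue that turn.
def build_prompt(user_msg: str, assistant_prefix: str = "",
                 close_turn: bool = False) -> str:
    prompt = ("<|START_OF_TURN_TOKEN|><|USER_TOKEN|>" + user_msg +
              "<|END_OF_TURN_TOKEN|>"
              "<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>" + assistant_prefix)
    return prompt + ("<|END_OF_TURN_TOKEN|>" if close_turn else "")

print(build_prompt("Hello!"))  # open turn: the model keeps writing the reply
```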
- Minimap.ai: no description found
- Cookbooks — Cohere: no description found
- Datasets — Cohere: The document provides an overview of the Dataset API, including file size limits, data retention policies, dataset creation, validation, metadata preservation, using datasets for fine-tuning models, d...
- Deployment Options: Our solutions provide industry-leading data privacy and security and are designed to meet the diverse needs of organizations seeking to harness the power of generative AI. Whether you’re a start-up or...
Cohere ▷ #api-discussions (13 messages🔥):
Production key creation issue
Sagemaker client billing error
Banking card issues with Cohere
- User struggles to create production key: A user reported an error while attempting to create a production key, stating they are in India and have no VPN issues.
- The community suggested contacting support@cohere.com for assistance with this billing issue.
- Sagemaker client returns negative billed units: A user using the Sagemaker client in the Cohere Python SDK noted that the response indicates input_tokens and output_tokens as -1.0.
- The community recommended emailing support@cohere.com for account-specific insights into this unexpected billing return.
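For reference, usage normally surfaces on the response metadata in the v5 Python SDK; a sketch of a defensive check is below. The attribute path is from the standard client and may differ for the SageMaker client, which is presumably where the -1.0 values came from.

```python
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key
resp = co.chat(model="command-r", message="ping")

units = resp.meta.billed_units  # input_tokens / output_tokens live here
if units.input_tokens is None or units.input_tokens < 0:
    print("Usage not reported for this request; treat billing as unknown.")
else:
    print(f"in={units.input_tokens}, out={units.output_tokens}")
```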
- Advice on banking card issues for payments: One member suggested that issues with billing may stem from banking card settings, recommending users check for international payments in their banking app.
- They advised trying a different bank if problems persist and noted that a small initial charge may not go through for some banks.
Cohere ▷ #projects (2 messages):
Job Posting Concerns
Relevance to Cohere
- Removal of Job Posting Suggested: A member urged for the removal of the job posting part from the discussion as it didn't seem cohere related.
- They emphasized the importance of keeping the focus on topics that are pertinent to the community.
- Further Discussion on Cohere Relevance: The same member suggested that, after removal of the job posting, the topic could potentially be posted again if deemed somewhat related to Cohere.
- This indicates a desire to maintain a clear focus on discussions relevant to the group.
Stability.ai (Stable Diffusion) ▷ #general-chat (181 messages🔥🔥):
FLUX Models Usage
Character Design with Stable Diffusion
Image Editing and Inpainting
ControlNet and LoRA Training
Tech Support for Stable Diffusion
- Challenges with FLUX Models: Users inquired about issues with running FLUX models, specifically asking for guidance on formats like .sft and .safetensor, as well as compatibility with tools like Forge.
- It was recommended to switch to ComfyUI for better support, and users shared their experiences regarding specific model sizes.
- Creating 2D Character Concepts: One user asked for advice on generating a character like Cheetara using Stable Diffusion checkpoints and specific phrasing for prompts.
- This discussion included inquiries about successful checkpoints for producing character art that is suitable for later 3D modeling.
- Image Editing Techniques: There were several recommendations for removing text from images and filling in backgrounds using inpainting methods, with a suggestion to use GIMP or Piximperfect's tutorials.
- Users also discussed various AI tools for enhancing and modifying images while maintaining quality.
- ControlNet and LoRA Training for Character Animation: Discussions on using ControlNet and LoRA training for creating separated vector-style character animations were prevalent, with recommendations for using appropriate training examples.
- Users shared insights on how to utilize ControlNet technologies for character posing and structure in artistic renderings.
- Technical Support for Stable Diffusion Installation: A user faced errors while installing Stable Diffusion and was directed to provide their error logs in the support channel for assistance.
- Helpful links to installation guides were shared, emphasizing the need for detailed logs to facilitate troubleshooting.
- Cheetara GIF - Thunder Cats - Discover & Share GIFs: Click to view the GIF
- black-forest-labs/FLUX.1-dev · Hugging Face: no description found
- black-forest-labs/FLUX.1-schnell · Hugging Face: no description found
- Webui Installation Guides: Stable Diffusion Knowledge Base (Setups, Basics, Guides and more) - CS1o/Stable-Diffusion-Info
- flushedface - Overview: flushedface has 5 repositories available. Follow their code on GitHub.
- GitHub - fairy-root/Flux-Prompt-Generator: Flux Prompt Generator provides a flexible and customizable prompt generator for generating detailed and creative prompts for image generation models.: Flux Prompt Generator provides a flexible and customizable prompt generator for generating detailed and creative prompts for image generation models. - fairy-root/Flux-Prompt-Generator
- Home: Stable Diffusion Knowledge Base (Setups, Basics, Guides and more) - CS1o/Stable-Diffusion-Info
Modular (Mojo 🔥) ▷ #announcements (2 messages):
User verification process
Onboarding questions
Server changes discussion channel
Latency issues with verification bot
- User Verification Process Goes Live: The Discord server has implemented a user verification process that requires members to share their email addresses via a bot in the #verify channel.
- Members who choose not to verify will have limited messaging capabilities but will still retain read access to all channels.
- New Onboarding Questions Introduced: After verifying their email addresses, users will encounter two multiple-choice onboarding questions intended to enhance their server experience.
- This step aims to streamline the onboarding process for new and existing members.
- New Channel for Server Changes Discussion: A new channel has been created for discussions regarding upcoming server changes, where members can share suggestions and ask questions.
- This initiative reflects the server's ongoing commitment to improving user experience.
- Latency Issues Delay Verification Bot: The verification bot went live, but there are latency issues, leading to its temporary disablement and locking of the verification channel.
- The team is working to resolve these issues and will inform members once the bot is operational again.
Modular (Mojo 🔥) ▷ #mojo (91 messages🔥🔥):
Mojo and Python interoperability
Count Leading Zeros
Creating Slices of String Literals
Zero-Copy Data Interop
LLVM Intrinsics at Comptime
- Mojo and Python Interoperability Challenges: Discussions highlighted that Mojo currently lacks the ability to import modules or call functions from Python, which is seen as a prerequisite for effective interoperability.
- Participants expressed interest in understanding how to facilitate zero-copy data exchange between Mojo and Python, particularly in high-performance contexts.
- Issues with Count Leading Zeros (CLZ) at Comptime: Users noted that the clz function fails to work at compile time due to reliance on LLVM intrinsics, which cannot be executed at that point.
- An alternative implementation for counting leading zeros was shared, along with suggestions that the standard library functionality for compile-time calculations may be improved in the future.
- Creating Slices of String Literals: Members exchanged information on how to create slices of string literals in Mojo, comparing ListLiteral and List constructs.
- Optimizations and syntax questions were discussed, with hints that further improvements for iterability might be introduced with future updates.
- Exploring Zero-Copy Data Interop: A participant questioned the feasibility of achieving zero-copy data interoperability between Mojo and Python, citing current limitations.
- Concerns were raised about how numpy operations, referenced in examples, handle data copying during execution.
- LLVM Intrinsics Support at Comptime: A new update confirmed that Mojo now supports LLVM intrinsics at compile time for integer-based functions such as ctlz and popcount.
- Future expansions to support additional types were suggested, aiming to enhance LLVM's constant folding abilities.
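The intrinsic-free fallback under discussion is plain shift-and-compare logic; sketched in Python below (the same binary-search structure ports directly to a Mojo fn that is legal at compile time, since no LLVM intrinsic is involved):

```python
def clz32(x: int) -> int:
    """Count leading zeros of a 32-bit value without hardware intrinsics."""
    if x == 0:
        return 32
    n = 0
    # Binary search: shift the set bits toward the top, counting as we go.
    if x <= 0x0000FFFF: n += 16; x <<= 16
    if x <= 0x00FFFFFF: n += 8;  x <<= 8
    if x <= 0x0FFFFFFF: n += 4;  x <<= 4
    if x <= 0x3FFFFFFF: n += 2;  x <<= 2
    if x <= 0x7FFFFFFF: n += 1
    return n

assert clz32(1) == 31 and clz32(0x80000000) == 0 and clz32(10) == 28
```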
- Mandelbrot in Mojo with Python plots | Modular Docs: Learn how to write high-performance Mojo code and import Python packages.
- [BUG] REPL incorrectly displays boolean SIMD vector contents · Issue #3379 · modularml/mojo: Bug description There is an incorrect representation in how the REPL displays boolean SIMD vectors. It appears that if there exists at least one True entry, the REPL representation shows only the f...
- [BUG] RPL gets chars from a String pointer correctly, Mojo file does it wrong · Issue #3482 · modularml/mojo: Bug description You can see it in the following screenshot: Steps to reproduce Tested with the following snippet: var s = String("ab") p = s.unsafe_ptr() c = chr(int(p.load())) print('Ch...
- [BUG] Return values are correct, but REPL reports incorrect · Issue #3480 · modularml/mojo: Bug description Reported boolean values of REPL do not match reported values of print output Steps to reproduce magic init project --mojoproject cd project magic s magic run mojo The following code...
- GitHub - makism/mojo-on-fedora40: Instructions on installing Mojo on Fedora 40.: Instructions on installing Mojo on Fedora 40. Contribute to makism/mojo-on-fedora40 development by creating an account on GitHub.
- [stdlib] Complete the string literals signature to match the `String` one by msaelices · Pull Request #3438 · modularml/mojo: To match the existing methods in both Mojo and Python strings. This could help bit a little with the transition of Python programmers playing with the REPL and small mojo examples that handle strin...
- [mojo-compiler] CompTime interpreter should be able to fold `pop.call_llvm_intrinsic` · Issue #933 · modularml/mojo: Bug description math.bit functions doesn't run at compile time. Steps to reproduce Consider the following code: import math.bit as bit fn main(): alias n = bit.ctlz(10) It produces the following e...
Eleuther ▷ #general (30 messages🔥):
Mixed Precision Training
CoreWeave Valuation
AI's Impact on Society
Foundation Models in Biotech
LM Evaluation Harness and Azure
- Understanding Mixed Precision Training Challenges: A discussion highlighted the complexities of mixed precision training, noting that while keeping weights in both fp32 and fp16 speeds up training, it roughly doubles the memory needed to store the weights themselves.
- Members mentioned that trade-offs in performance are common in projects due to computing budget constraints, emphasizing the importance of balancing speed and resource utilization.
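For context, the standard PyTorch recipe keeps fp32 master weights and runs the forward/backward pass in reduced precision; a minimal sketch with torch.cuda.amp:

```python
import torch

model = torch.nn.Linear(512, 512).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler()  # rescales grads to avoid fp16 underflow

for _ in range(10):
    x = torch.randn(32, 512, device="cuda")
    optimizer.zero_grad(set_to_none=True)
    with torch.cuda.amp.autocast():    # forward in fp16/bf16 where safe
        loss = model(x).pow(2).mean()
    scaler.scale(loss).backward()      # backward on the scaled loss
    scaler.step(optimizer)             # unscale, then fp32 weight update
    scaler.update()
```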
- CoreWeave's Significant Valuation: CoreWeave, a cloud computing provider, is reportedly negotiating a sale of existing shares valuing the company at $23 billion, reflecting its position in the AI sector.
- This valuation underscores the intense competition and investment atmosphere in cloud computing and AI, drawing attention from notable financial media.
- AI's Societal Implications Explored: A reflection on AI's impact posited that OpenAI has effectively placed a 'PhD in everyone's pocket', suggesting a potential shift in societal operations despite minimal immediate public response.
- Discussions suggest a need for deeper consideration of the transformative effects of AI across various fields and its ongoing integration into everyday life.
- Foundation Models in Biotech Introduced: A member shared their background in working with foundation models in biotech, focusing on large-scale representation learning of both sequence and tabular data.
- This opens up opportunities for knowledge-sharing and collaboration on advanced modeling techniques within the group.
- Inquiry about LM Evaluation Harness for Azure: There was a query regarding the lm-evaluation-harness repository's support for accessing OpenAI completions and GPT-related models via Azure, showing interest in its capabilities.
- Such inquiries highlight an interest in leveraging existing frameworks to interface with cloud AI services efficiently.
- AI: A Means to an End or a Means to Our End?: The text of a talk I gave on Thursday 12th September as the inaugural “Living Well With Technology” lecture for King’s College London’s Digital Futures Institute.
- apex.amp — Apex 0.1.0 documentation: no description found
- Why to keep parameters in float32, why not in (b)float16?: I wonder if I should keep my model parameters in float16 or bfloat16? This is probably an orthogonal aspect to automatic mixed precision / autocast, or maybe mixed precision does not make sense anymo...
- Cloud-Computing Firm CoreWeave In Talks for Share Sale at $23 Billion Valuation: (Bloomberg) -- CoreWeave, a cloud computing provider that’s among the hottest startups in the artificial intelligence race, is in talks to arrange a sale of existing shares valuing it at $23 bi...
Eleuther ▷ #research (23 messages🔥):
RWKV team advancements
Overfitting models on small datasets
Sequential Monte Carlo steering
Autoregressiveness in long documents
OpenAI's reasoning system o1
- RWKV team pushes RNN boundaries: Excitement surrounds the RWKV team, with members noting contributions from Smerky and others, highlighting the collaborative efforts in advancing RNN architectures.
- fern.bear praised the team's innovation, stating it's impressive to see continued pushes in this area.
- Overfitting concerns with 9 images: A user shared their concern about not being able to overfit a model on just 9 images, leading to discussions about whether this indicates potential issues with learning from larger datasets.
- Responses indicated that if a model fails to overfit on such a small sample, it is likely to struggle with larger datasets as well.
- Introducing Sequential Monte Carlo steering: Discussion highlighted a new approach called Sequential Monte Carlo steering, aimed at improving output constraints in LLMs, detailed in an arXiv paper.
- The community is intrigued by this method, especially as it showcases a new programming library for experimentation.
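The core loop behind SMC steering is compact: extend N partial generations ("particles"), weight each by how well it satisfies the constraint, then resample. The toy below uses a dummy sampler and a trivial constraint purely to illustrate the mechanics; it is not the hfppl API.

```python
import random

def dummy_lm(prefix: str) -> str:   # stand-in for a real LM's token sampler
    return random.choice("abcde ")

def constraint_weight(text: str) -> float:  # toy potential: forbid 'e'
    return 0.0 if "e" in text else 1.0

def smc_steer(n_particles: int = 16, steps: int = 8) -> str:
    particles = [""] * n_particles
    for _ in range(steps):
        particles = [p + dummy_lm(p) for p in particles]        # propose
        weights = [constraint_weight(p) for p in particles]     # weight
        total = sum(weights)
        if total == 0:                                          # all dead:
            particles = [""] * n_particles                      # restart
            continue
        probs = [w / total for w in weights]
        particles = random.choices(particles, probs, k=n_particles)  # resample
    return particles[0]

print(repr(smc_steer()))  # a string that satisfies the constraint
```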
- Challenges with autoregressiveness: Concerns were raised about how splitting long documents across multiple windows could complicate the autoregressive model's processing.
- However, members debated that a well-trained model should still decipher context from the available document lengths.
- Introduction of OpenAI's reasoning system o1: OpenAI has introduced their new reasoning system, o1, designed to improve interactions with AI on complex tasks through enhanced inference techniques.
- The system aims to innovate beyond traditional language models by implementing online search mechanisms during inference.
- Tweet from Portal (@_portal_): Next Monday at 12pm ET, @brekelmaniac will join LoGG host @HannesStaerk to discuss Sequential Monte Carlo (SMC) for probabilistic inference problems. 📄 Read the paper: https://arxiv.org/abs/2404.175...
- Reverse engineering OpenAI’s o1 : What productionizing test-time compute shows us about the future of AI. Exploration has landed in language model training.
- how many golf balls can fit in the moon> - Wolfram|Alpha: Wolfram|Alpha brings expert-level knowledge and capabilities to the broadest possible range of people—spanning all professions and education levels.
- Reddit - Dive into anything: no description found
- Sequential Monte Carlo Steering of Large Language Models using Probabilistic Programs: Even after fine-tuning and reinforcement learning, large language models (LLMs) can be difficult, if not impossible, to control reliably with prompts alone. We propose a new inference-time approach to...
- GitHub - probcomp/hfppl: Probabilistic programming with HuggingFace language models: Probabilistic programming with HuggingFace language models - probcomp/hfppl
Eleuther ▷ #scaling-laws (7 messages):
Looped computation with KV cache
Complex algorithms on fixed VMs
Scaling laws literature suggestions
World model accuracy with large models
- Discussion on Loop Computation: A member expressed a desire for an architecture capable of looped computation with KV cache, indicating a gap in current capabilities.
- This highlights ongoing challenges in efficiently handling complex algorithms within machine learning frameworks.
- Running Complex Algorithms on Small VMs: In response to the previous comment, a member mentioned that complex algorithms can still be executed on a small fixed VM, utilizing available resources effectively.
- This led to a clarification of the term VM, which stands for virtual machine, contributing to the technical discussion.
- Exploration of Model Sizes for World Models: One member shared insights about the impractically large model size required (around 8T parameters) for accurate world modeling from diverse sensors, as discussed here.
- They noted that while current models lack the capacity to infer unintended information, a sufficiently large model could tap into external data sources.
- Inquiry for Literature on Scaling Laws: A member sought recommendations for literature on scaling laws, stating familiarity with training models and interest in sparse autoencoders and mutual information.
- They already have Scaling Laws for Autoregressive Generative Modeling on their reading list and are eager for additional resources.
Eleuther ▷ #interpretability-general (10 messages🔥):
Goodfire.ai fundraising
SAE latents terminology
Robustness of LLM layers
Credit conventions in academic papers
- Goodfire.ai scales with $7M funding: Goodfire.ai recently raised $7 million to enhance their interpretability platform, apparently leveraging Anthropic’s work on Claude. This funding aims to scale their approach to AI observability, as discussed in a VentureBeat article.
- Members expressed curiosity about the company’s specific direction and technical focus in the interpretability landscape.
- Debate on SAE latents vs features: A member humorously noted the terminology confusion around using 'latents' versus 'features' in their current writing, showing the community's engagement with nomenclature in discussions. This led to debates about adhering to conventions versus establishing clearer terminology in academic discourse.
- Another member countered the notion of using SAE features by advocating for consistency in defining terms, referencing a prior paper for support.
- Insights from Tegmark's LLM layers paper: A member appreciated the insights from a paper co-authored by Max Tegmark that outlines four stages of inference in LLM layers. The final stages focus on sharpening, where suppression neurons eliminate irrelevant features, enhancing accuracy.
- This approach further delves into the robustness of LLMs during interventions, yielding noteworthy accuracy retention despite structural changes.
- Discussion on crediting authors in papers: Debates around the best convention for crediting authors surfaced, particularly regarding the distinction between first and senior authors in papers from Tegmark’s group. Members suggested various naming conventions to recognize contributions accurately while maintaining clarity.
- The conversation explored how to balance recognition of both the first author and senior contributors effectively in academic citations.
- Open Source Automated Interpretability for Sparse Autoencoder Features: Building and evaluating an open-source pipeline for auto-interpretability
- The Remarkable Robustness of LLMs: Stages of Inference?: We demonstrate and investigate the remarkable robustness of Large Language Models by deleting and swapping adjacent layers. We find that deleting and swapping interventions retain 72-95\% of the origi...
Eleuther ▷ #lm-thunderdome (1 messages):
lm-evaluation-harness repo
Azure OpenAI integration
- Inquiry on lm-evaluation-harness support for Azure: A member questioned whether the lm-evaluation-harness repository provides support for Azure OpenAI keys and endpoints to access OpenAI completions and other GPT-related models.
- This inquiry highlights the interest in integrating Azure's capabilities with existing evaluation frameworks.
- Azure OpenAI Model Access Discussion: There was a general discussion about the potential for Azure OpenAI to provide easier access to various GPT models while leveraging the existing APIs.
- Members expressed curiosity about how integration could streamline the process of model evaluation.
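Independently of harness support, the openai Python package (v1+) ships an AzureOpenAI client, so wiring one up is short; the endpoint, key, and deployment name below are placeholders.

```python
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
    api_key="YOUR_AZURE_KEY",                                 # placeholder
    api_version="2024-02-01",
)

resp = client.chat.completions.create(
    model="my-gpt-4o-deployment",  # Azure uses deployment names, not model IDs
    messages=[{"role": "user", "content": "2+2?"}],
)
print(resp.choices[0].message.content)
```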
Eleuther ▷ #gpt-neox-dev (6 messages):
Hugging Face's Pile Deduped Dataset
Launching Multi-Node with GPT-NeoX on Polaris
Job Submission Issues on Polaris
- Inquiry about EOS Tokens in Dataset: A member questioned whether the absence of EOS tokens in the Pile Deduped Dataset was confirmed and if it was related to other experiences.
- This concern highlights potential inconsistencies in dataset configurations that could impact model training.
- Multi-Node Launch Assistance Requested: A member asked for steps to launch a multi-node setup with GPT-NeoX on Polaris.
- Another member suggested checking out the official guide for assistance.
- Challenges with Interactive Jobs on Polaris: A member expressed frustration that obtaining an interactive job on Polaris involves over 24 hours of waiting time, complicating access to the system.
- They mentioned a preference for being able to submit jobs into the queue instead of relying on an interactive setup.
- Successful Tweaks for Running on Polaris: A member shared successful tweaks made while running GPT-NeoX on Polaris, noting the importance of job queuing with qsub.
- They emphasized that DeepSpeed wouldn't recognize the host file as an environment variable and recommended following steps for passwordless SSH to ensure functionality.
LlamaIndex ▷ #blog (5 messages):
LlamaParse Excel Capabilities
TypeScript Workflows in LlamaIndex
Unit Testing in LLM Applications
Vectara-Agentic Library
Code Generation Agent
- LlamaParse excels in parsing Excel data: In a recent video, @ravithejads demonstrates advanced Excel parsing abilities in LlamaParse, including handling multiple sheets and complex tables.
- LlamaParse utilizes recursive retrieval to summarize complex tables automatically, enhancing usability and efficiency.
- TypeScript workflows introduced in LlamaIndex: LlamaIndex has now integrated workflows into TypeScript, as noted in this announcement.
- This feature aims to streamline development processes for TypeScript users.
- Importance of Unit Testing in LLM applications: Unit testing is highlighted as crucial for protecting against the stochasticity of LLM applications in a blog post by @maskaravivek, detailing building and testing a RAG app with CircleCI.
- The post emphasizes that proper unit testing can mitigate unexpected behaviors in AI-driven applications.
- Vectara-Agentic simplifies RAG implementation: Check out vectara-agentic by @ofermend, a simple library for building agentic RAG powered by LlamaIndex and Vectara.
- It offers functionality to build agents capable of planning and tool use, compatible with various model providers.
- Innovative code generation agent unveiled: A remarkable code generation agent by @MarcusSchiesser allows users to generate an entire web app in one HTML file, using tailwind CSS and JavaScript, showcased in this link.
- The agent integrates with the latest models like o1 to facilitate web app creation through natural language.
LlamaIndex ▷ #general (51 messages🔥):
LlamaIndex and ChromaDB Integration
LanceDB Query Issues
SitemapReader HTTP Errors
ReactAgents Streaming Responses
Job Opportunities in Crypto Projects
- LlamaIndex struggles with document references: A user discussed issues retrieving document references in LlamaIndex with ChromaDB, noting that responses include documents even for unrelated queries.
- Another member suggested checking response.source_nodes for better results.
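A minimal sketch of that check, assuming an existing LlamaIndex index; the 0.75 score cutoff is an arbitrary illustration, not a recommended value.

```python
# Assuming `index` is an existing LlamaIndex VectorStoreIndex over ChromaDB.
query_engine = index.as_query_engine(similarity_top_k=5)
response = query_engine.query("What does the contract say about termination?")

# Each source node carries a similarity score; dropping low-score nodes
# avoids citing documents that merely happened to be nearest neighbors.
relevant = [n for n in response.source_nodes if n.score and n.score > 0.75]
for node in relevant:
    print(f"{node.score:.2f}  {node.node.get_content()[:80]}")
```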
- LanceDB vector index query issues: A user encountered an AttributeError while trying to query LanceDB using embeddings, indicating that the LanceDBVectorStore object has no vector_store attribute.
- The discussion revealed potential confusion over how objects are set and whether the index is indeed what the user expects.
- HTTP 403 errors with SitemapReader: A user experienced an HTTP Error 403 while using SitemapReader, indicating unauthorized access, despite attempting to add a user-agent header.
- Members clarified that the load_data() method does not accept headers and suggested that authentication might be required.
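Since load_data() takes no headers argument, one workaround is to fetch the pages yourself with a browser-like User-Agent and wrap the HTML in Document objects; a sketch below (URLs are placeholders, and a site that genuinely requires auth will still return 403):

```python
import requests
from llama_index.core import Document

headers = {"User-Agent": "Mozilla/5.0 (compatible; MyIndexer/1.0)"}
urls = ["https://example.com/page1", "https://example.com/page2"]  # placeholders

docs = []
for url in urls:
    resp = requests.get(url, headers=headers, timeout=30)
    resp.raise_for_status()  # still raises if the site requires real auth
    docs.append(Document(text=resp.text, metadata={"url": url}))
```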
- ReactAgents and streaming outputs: A user inquired about slow response times when using ReactAgents with streaming chat, noting that the observation containing the answer had already been received in the backend.
- Members pointed out that the streaming delay could be due to inherent settings in the code and suggested examining dummy stream speeds.
- Job listings for a crypto project: A user posted job openings for testing, moderation, NFT art, and web development roles for a crypto project, encouraging interested parties to DM.
- Another member humorously commented to the poster to go to bed, showcasing the light-heartedness within the community.
- Sub Question Query Engine as a workflow - LlamaIndex: no description found
- llama_index/llama-index-core/llama_index/core/chat_engine/types.py at 723c2533ed4b7b43b7d814c89af1838f0f1994c2 · run-llama/llama_index: LlamaIndex is a data framework for your LLM applications - run-llama/llama_index
LlamaIndex ▷ #ai-discussion (4 messages):
Local LLM cost optimization
Confidentiality with Local LLM
Framework comparisons
- Local LLM offers cost optimization: Members highlighted that running a Local LLM can significantly reduce costs compared to using OpenAI services.
- The conversation included the idea that total cost of ownership (TCO) would differ between OpenAI and local models.
- Local LLM enhances data confidentiality: Discussion emphasized that using Local LLM allows businesses to keep their private information in-house instead of sending it to OpenAI.
- This highlights a growing concern over data confidentiality in public AI services.
- Concerns about cost and confidentiality overlooked: A member expressed disappointment that an article did not address the issues of cost and confidentiality associated with using OpenAI.
- This concern reflects a broader sentiment about balancing performance and privacy in AI tools.
- Local LLM as a Backend, LlamaIndex as Frontend: The community discussed viewing Local LLM as the Backend and LlamaIndex as the Frontend, comparing it to frameworks like Flutter or ReactJS.
- This analogy suggests acceptance of potentially 'fat' frameworks in favor of functionality.
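To make the analogy concrete, here is a minimal sketch of that split (assuming the `llama-index-llms-ollama` integration and a locally served model; note that embeddings would also need a local model for data to stay fully in-house):

```python
from llama_index.core import Settings
from llama_index.llms.ollama import Ollama

# Local backend: a model served by Ollama on this machine.
Settings.llm = Ollama(model="llama3", request_timeout=120.0)

# LlamaIndex acts as the application-facing layer on top of it; no data
# leaves the machine for this completion call.
response = Settings.llm.complete("Summarize our internal policy on returns.")
print(response)
```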
OpenInterpreter ▷ #general (11 messages🔥):
Point of Diminishing Returns for GPUs
Non-streaming Response in Command Line Mode
Getting Started with Open Interpreter
Using APIs and Requests in Programming
- Understanding GPU Diminishing Returns: The point of diminishing returns for GPUs varies by application; typically, for gaming it becomes noticeable after 2-3 GPUs, while rendering might see it around 4-6 GPUs.
- Factors include PCIe bandwidth limitations and software that isn't optimized for multiple GPUs.
- Non-streaming Responses with Open Interpreter: A member sought advice on how to stop streaming responses in command line mode to avoid terminal refreshes that cause discomfort.
- Another member suggested using the `--plain` flag or the `claude-3.5` model for non-streaming response options.
- Newbie's Challenge with Open Interpreter: A newcomer expressed curiosity about restrictions in the Open Interpreter related to enabling APIs and requests while seeking a walkthrough for their project.
- They reported that the model asked for these features despite instructions stating not to use them, leading to confusion.
OpenInterpreter ▷ #O1 (17 messages🔥):
Livekit Setup Errors
ChatGPT O1 Model Release
Comparative Model Functionality
- Livekit Setup Confusion: A member pointed out that 90% of users are encountering setup errors with Livekit, and criticized the documentation as inadequate.
- Another member suggested sharing a proper setup guide and contributing through a PR to assist the community.
- Concerns over ChatGPT's O1 Model: There were suspicions that the release of ChatGPT's model called O1 could be a strategic attack on existing projects, but this was downplayed by another member.
- They argued that while ChatGPT's O1 excels in reasoning, this project focuses on executing code and other functionalities.
- Debate on O1's Functionality: A member criticized O1 for lacking support for multimodal input, stating it doesn't offer the same full-fledged functionality as the legacy model 4.
- They added that their tests yielded similar responses between O1 and model 4, suggesting it might just be hype.
Link mentioned: Trying To Sneak Out Of The House GIF - Arnold Schwarzenegger Sneaking Out Camouflage - Discover & Share GIFs: Click to view the GIF
OpenInterpreter ▷ #ai-content (2 messages):
MoA LLM Library
Mixture-of-Agents for Coding
- New Python Library for LLM Orchestration: The MoA LLM library allows users to orchestrate LLMs in a neural network-inspired structure, enhancing collaboration among multiple models.
- This open-source project is aimed at simplifying the integration of various LLMs for better performance.
- Custom Mixture-of-Agents for Coding Tasks: The MoA Coding mix is optimized for challenging coding tasks, using models like Claude 3.5 Sonnet and GPT-4 Turbo.
- It showed a 28% performance increase compared to using Claude 3.5 Sonnet alone in complex programming challenges and offers competitive pricing of $6.00/M tokens.
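For intuition, here is a hedged sketch of the mixture-of-agents pattern itself (a generic illustration, not the verified `moa-llm` API): several "proposer" models draft answers and an aggregator model synthesizes them:

```python
from typing import Callable, List

def mixture_of_agents(prompt: str,
                      proposers: List[Callable[[str], str]],
                      aggregator: Callable[[str], str]) -> str:
    """Collect drafts from each proposer, then ask the aggregator to synthesize."""
    drafts = [propose(prompt) for propose in proposers]
    synthesis_prompt = prompt + "\n\nCandidate answers:\n" + "\n---\n".join(drafts)
    return aggregator(synthesis_prompt)

# Usage with stand-in callables; real ones would wrap API clients for
# models like Claude 3.5 Sonnet or GPT-4 Turbo.
echo = lambda p: "draft: " + p[:40]
print(mixture_of_agents("Write a sorting function.", [echo, echo], echo))
```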
- Coding Mixture of Agents | Crosshatch: A custom-built Mixture-of-Agents (MoA) synthesis mix optimized for challenging coding tasks. This mix leverages multiple 'proposer' models, including Claude 3.5 Sonnet and GPT-4 Turbo, with ...
- GitHub - catena-labs/moa-llm: A Python library to orchestrate LLMs in a neural network-inspired structure: A Python library to orchestrate LLMs in a neural network-inspired structure - catena-labs/moa-llm
OpenAccess AI Collective (axolotl) ▷ #general (16 messages🔥):
O1 model discussion
Attention optimization
Reflection tasks and agents
Special tokens for pretraining
- Mixed Feelings on O1's Effectiveness: There's a debate on the O1 model; some express satisfaction while others find it underwhelming, noting its mechanical responses.
- One member highlighted that O1 feels like a Chain of Thought with a solid UI, while another remained skeptical about its capabilities.
- OpenAI's O1 not a quick turnaround: A member emphasized that OpenAI has been developing O1 (Strawberry/Q*) for an extensive period, countering claims of rapid training.
- They mentioned that O1 appears to utilize an agentic chain of thought, showcasing resilience against hallucination.
- Attention Implementation Flexibility Concerns: Discussion arose around the need for optimized attention implementations like FlashAttention but noted the loss of flexibility when experimenting with new variants.
- Concerns were raised about the 'software lottery' for ML researchers as they navigate existing optimized kernels for their attention needs.
- Special Tokens Considered for Pretraining: There was a query regarding whether to remove specific special tokens during pretraining, with consensus leaning towards keeping existing defined tokens.
- Members suggested that adhering to established tokens avoids complications and potential inconsistencies down the line.
- Fused Cross Entropy's Impact: A member clarified the relationship between cross entropy and fused cross entropy, stating that the latter offers better performance.
- They mentioned that enabling both types might lead to one disabling the other, pointing out the integration decisions behind these optimizations.
Link mentioned: FlexAttention: The Flexibility of PyTorch with the Performance of FlashAttention
OpenAccess AI Collective (axolotl) ▷ #axolotl-dev (11 messages🔥):
Tokenization Error Bug
Phi 3.5 Sentence Classifier
vLLM and Adapter Issues
Fused Attention Configuration
- Tokenization errors arise from masking: A member encountered a tokenization error due to the new per-turn masking for chat template prompt strategies, masking the last end of turn token.
- They linked this issue to their detailed bug report on GitHub: Tokenization Bug Report.
- Waving goodbye to Phi 3.5 training: A member expressed frustration with their attempts to get a sentence classifier based on Phi 3.5 to emit the expected classification text label.
- They provided a link to their dumb sentence classifier and hinted at giving up for the time being.
- vLLM struggles with trained adapters: A member noted that vLLM does not correctly interpret the `qkv_proj` layer, causing issues for models trained with Axolotl's adapters.
- They observed that while their LoRA showed no learning when merged, it displayed proper behavior when used as a pure adapter on top of the base model.
- Questions about fused attention settings: A member inquired whether a fused attention method was employed during training and if checkpoints or the final model were used.
- Another noted that a `_post_training` function could re-split layers, indicating concerns about maintaining model integrity.
- Phi model's tensor tracking limitations: A member discussed the challenge with Phi 3.5's tracking of the qkv_proj tensor, which differs from Llama's tracking methods.
- They highlighted that vLLM defaults to using Phi 3.5 as a Llama model, complicating adapter mapping due to the mismatched tensor structures.
- Issues · axolotl-ai-cloud/axolotl: Go ahead and axolotl questions. Contribute to axolotl-ai-cloud/axolotl development by creating an account on GitHub.
- Reward model by winglian · Pull Request #1879 · axolotl-ai-cloud/axolotl: adds support to use trl's RewardTrainer to train reward models. currently uses pairwise responses.
LangChain AI ▷ #general (22 messages🔥):
GenAI/RAG/CV Consultation Services
Impact of OpenAI on Society
LangGraph Cloud Pricing
Streaming LLM Output Issues
Managing Chat History in LangChain
- Offering GenAI/RAG/CV Consultation Services: A member announced their availability for consultation projects related to GenAI, RAG, and CV to help develop prototypes for startups and companies.
- Interested parties are encouraged to reach out via direct message.
- OpenAI's Impact on Society: One member expressed concern that OpenAI has revolutionized access to knowledge yet society continues as if nothing has changed.
- Another contributor suggested that accelerating automation could lead us to a post-scarcity era.
- Uncertainty Around LangGraph Cloud Pricing: A member sought clarification on potential costs associated with LangGraph Cloud after its beta phase, weighing options against developing a custom FastAPI wrapper.
- They are wary of being locked into a pricing model that may not be feasible in the long run.
- Challenges in Streaming LLM Output: A member highlighted difficulties in streaming LLM output due to parsing issues with incomplete JSON strings when employing Pydantic parsers.
- Despite some skepticism, they discovered success by switching from `parse_result` to `parse` methods (a sketch follows below).
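As a rough sketch of that fix (assuming LangChain's `PydanticOutputParser`; the streamed buffer here is simulated): `parse()` takes a raw string, whereas `parse_result()` expects `Generation` objects, which is easy to trip over when accumulating stream chunks yourself:

```python
from langchain_core.output_parsers import PydanticOutputParser
from pydantic import BaseModel

class Person(BaseModel):
    name: str
    age: int

parser = PydanticOutputParser(pydantic_object=Person)

# Pretend this string was accumulated from streamed chunks until it became
# valid JSON; parse() accepts the raw text directly.
buffer = '{"name": "Ada", "age": 36}'
person = parser.parse(buffer)
print(person)
```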
- Chat History Management in LangChain: A user raised questions regarding the management of chat history using LangChain, noting limitations with built-in methods for tracking additional UI data.
- They described a challenge in achieving transactional integrity when storing chat history alongside app-specific messages.
Link mentioned: JSON parser | 🦜️🔗 LangChain: This output parser allows users to specify an arbitrary JSON schema and query LLMs for outputs that conform to that schema.
LangChain AI ▷ #share-your-work (1 messages):
Reading PDFs with Tables
Comparing Table Data
Sample Implementations for PDFs
Time Consumption in Data Processing
- Seeking Efficient PDF Table Reading Methods: A member requested suggestions for reading PDF files with tables, focusing on how to efficiently ask questions about the data later.
- Any links or sample implementations for handling this efficiently were especially welcomed as it has been time-consuming.
- Challenges in Comparing Table Columns: Concerns were raised about needing to compare columns in tables from PDFs and how labor-intensive it has been.
- Members expressed that this comparison is particularly tedious and thoughts on solutions would be appreciated.
DSPy ▷ #show-and-tell (1 messages):
batmanosama: https://www.interconnects.ai/p/reverse-engineering-openai-o1
DSPy ▷ #general (19 messages🔥):
RAG Query Structure
DSPy LM Release
Visual LLM Models in DSPy
GitHub Contributions
- Simplifying RAG Query Structure: A member inquired about optimizing RAG in a singular module, suggesting the packing of a 'context' field with data from RAG, memory, and prompts for better results.
- Another member confirmed this approach, emphasizing the usefulness of this simple RAG example for further understanding.
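A hedged sketch of that single-module pattern (the signature and field names are illustrative; DSPy's `Signature` and `ChainOfThought` are real APIs, while the packing scheme is the member's suggestion):

```python
import dspy

class AnswerWithContext(dspy.Signature):
    """Answer the question using only the provided context."""
    context = dspy.InputField(desc="retrieved passages, memory, and prompt notes")
    question = dspy.InputField()
    answer = dspy.OutputField()

qa = dspy.ChainOfThought(AnswerWithContext)

# Pack everything the model should see into the single context field.
retrieved = ["Passage A ...", "Passage B ..."]        # from RAG
memory = ["User previously asked about pricing."]     # conversation memory
context = "\n".join(retrieved + memory)
# prediction = qa(context=context, question="What does plan B cost?")  # needs an LM configured
```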
- DSPy LM Released Today: Members were informed that the latest version of DSPy, version 2.4.16, was released, aiding in various functionalities.
- This was highlighted in response to a query regarding the status of `dspy.LM`, confirming it was released on the day of discussion.
- Question on Visual LLM Models: A member asked if they could use visual LLM models in DSPy for image descriptions, to which another member replied that this capability might be available by next week.
- A relevant pull request for GPT-4 Vision API was shared, indicating ongoing integrations.
- GitHub Contribution Inquiry: A member expressed interest in contributing to the DSPy project, inquiring about any available bounties.
- Discussion indicated that additional changes for integration are anticipated, with an expected completion timeframe of 7-10 days.
- Add GPT-4 Vision API wrapper by jmanhype · Pull Request #682 · stanfordnlp/dspy: Introduce a new GPT4Vision class in visionopenai.py that wraps the GPT-4 Vision API. This abstraction layer simplifies the process of making requests to the API for analyzing images. Key functional...
- dspy/skycamp2023.ipynb at main · stanfordnlp/dspy: DSPy: The framework for programming—not prompting—foundation models - stanfordnlp/dspy
- dspy/examples/qa/hotpot/hotpotqa_with_MIPRO.ipynb at main · stanfordnlp/dspy: DSPy: The framework for programming—not prompting—foundation models - stanfordnlp/dspy
tinygrad (George Hotz) ▷ #general (12 messages🔥):
Runtime Type Checking in Tinygrad
Tinygrad Test Failures on AMD
Tinygrad Update Issues
GitHub Pull Request Discussions
- Runtime Type Checking Added: George Hotz announced the addition of `TYPED=1` support for runtime type checking in Tinygrad, highlighting the presence of type errors during testing with `python3 test/test_ops.py`.
- A user noted a potential fix in a GitHub pull request that addressed most type errors yet left one unresolved.
- Test Failures Encountered on AMD: A user reported failing tests when attempting to bump Tinygrad from 0.9.0 to 0.9.2 in nixpkgs, mentioning an `AttributeError` linked to `struct_kfd_ioctl_criu_args`.
- They speculated it might be a kernel version issue, since the attribute is present in `/usr/include/linux/kfd_ioctl.h` yet absent from Tinygrad's autogen directory, which was confusing.
- Discussion on GitHub Changes: Concerns were raised about a potential oversight related to the hip_ioctl changes in Tinygrad, which may have been missed in a recent pull request.
- The user emphasized the need for a specific line in the code that had possibly been neglected during the changes introduced in pull request #5917.
- fix typing for test_ops by chenyuxyz · Pull Request #6520 · tinygrad/tinygrad: mostly passed TYPED=1 python3 -m pytest -n=auto test/test_ops.py. one last test specifically set an invalid value to test the exception, and to ignore that we need to import typeguard. And to get a...
- tinygrad/extra/hip_gpu_driver/test_kfd_2.py at 518c022c29104d79c7a50ec41af5b7e6404da317 · tinygrad/tinygrad: You like pytorch? You like micrograd? You love tinygrad! ❤️ - tinygrad/tinygrad
- hip_ioctl changes by wozeparrot · Pull Request #5917 · tinygrad/tinygrad: feat: allow specifying processor as envvar feat: vendor kfd_ioctl.h
tinygrad (George Hotz) ▷ #learn-tinygrad (4 messages):
Tinygrad Ecosystem Libraries
VRAM Spike Analysis
Tensor Modification Errors
- Inquiry on Libraries for Tinygrad Ecosystem: A member inquired if anyone is working on libraries for the tinygrad ecosystem, mentioning potential candidates like timm and torchvision.
- "Have you used tinygrad at all to see if such libraries are implemented/necessary?" questioned another member, prompting further discussion.
- Understanding VRAM Allocation Spikes: A member asked about the best methods to identify what causes spikes in VRAM allocation during Tinygrad operations.
- This question highlights the need for diagnostic tools or methods within the tinygrad framework to monitor memory usage.
- Error in Tensor Modification Code: A user reported an error when running code involving modifying a Tensor from tinygrad, specifically when trying to increment its elements.
- They linked to an open issue on GitHub that appears to address a similar problem, noting compiler behavior regarding the contiguous property.
Link mentioned: Issues · tinygrad/tinygrad: You like pytorch? You like micrograd? You love tinygrad! ❤️ - Issues · tinygrad/tinygrad
Torchtune ▷ #general (8 messages🔥):
full_finetune_distributed recipe
Checkpoints management
Learning rate scheduling
CUDA vs CPU operations
- Clarification on Checkpoints Management: For implementing checkpoints at specific token counts, a member suggests tracking the total tokens processed using the `num_tokens` field and filtering out padding tokens: check here. They underline the need for an all-gather to account for totals across ranks (see the sketch below).
- For checkpoint saving, logic adjustment is required in the script to ensure it tracks tokens accurately, especially for resuming from a saved state.
- Introduction of Cosine Learning Rate Decay: Members discussed how `torchtune.modules.get_cosine_schedule_with_warmup` can be utilized for cosine decay on learning rates, although currently integrated only in LoRA recipes. Advice was given to closely follow those implementations for integration into the full finetune recipe.
- It's recommended to directly pass the number of steps in the recipe setup instead of deriving them from the epoch number to accommodate mid-epoch resume scenarios (a sketch follows).
- CUDA vs CPU Operations for Token Processing: A query arose about whether token operations (gather/reduce) need to be carried out on CUDA devices, or if CPU processes could suffice. Members confirmed that `num_tokens` values are not CUDA tensors and advised performing the operations on CUDA devices, as there isn't a direct mapping between CUDA and CPU processes.
- Discussions highlighted that while CUDA processes are preferable, there's uncertainty about CPU process efficiency in this context.
Link mentioned: torchtune/recipes/lora_finetune_distributed.py at 4fbe7b2d4956b3790c51d7a255c0040cf5c38fad · pytorch/torchtune: A Native-PyTorch Library for LLM Fine-tuning. Contribute to pytorch/torchtune development by creating an account on GitHub.
Torchtune ▷ #dev (6 messages):
Online Packing Support
Merge Conflicts Resolved
CI GPU Test Failures
Cache Position Updates
urllib Version Incompatibility
- Online Packing Plans on Iterable Datasets: The team plans to move to online packing as soon as they add support for iterable datasets.
- Merge Conflicts Fixed: A member reported they have fixed merge conflicts and added numerous tests for improved stability.
- They plan to update the description further tomorrow to enhance clarity.
- Concerns Over CI GPU Test Failures: There are issues with the CI, specifically failing GPU tests related to `test_eleuther_eval.py`, caused by import errors in `transformers.pipelines`.
- The test summary indicated 504 passed tests but highlighted a significant error preventing successful completion.
- Upcoming KV Cache Update: Changes regarding cache position will be implemented once the kv cache update is completed, removing all existing cache position elements.
- urllib and Requests Packages Incompatibility: A member noted test failures may stem from a version incompatibility between the `urllib` and `requests` packages.
- They suggested pinning `urllib>3` as a possible fix but acknowledged they haven't tested it due to the failure's intermittent nature.
LAION ▷ #general (5 messages):
Generative AI in Art
Diffusion Illusions
Internship Opportunities
Optical Illusions
Image Dataset Generation
- Generative AI creates art in minutes: A member showcased their work made with NotebookLM, stating it was fully generated in 2 minutes. They shared a link to the YouTube video.
- "What a time to be alive" was their enthusiastic remark regarding the capabilities of generative AI.
- Steve Mould explores new illusions: A member shared an interesting YouTube video titled This new type of illusion is really hard to make, which discusses illusions generated with AI. The video is available here and includes a link for the Jane Street internship.
- They noted that generative AI can create images that appear different under various lights.
- Interactive playground for Diffusion Illusions: A member provided a link to the Diffusion Illusions website that features interactive optical illusions generated using diffusion models. The site also showcases their project accepted at SIGGRAPH 2024, linked to a YouTube talk.
- Authors include Ryan Burgert and Xiang Li, among others, emphasizing the innovative use of diffusion models in the physical world.
- Discussion on Text Placement in Images: A member inquired about methods for inserting text into images efficiently to create a large dataset. They are looking for ways to scale this process to populate millions of images.
- This query highlights the interest in automating the creation of text-embedded image datasets for potential applications.
- Diffusion Illusions: Hiding Images in Plain Sight: no description found
- This new type of illusion is really hard to make: Learn more about the Jane Street internship at: https://jane-st.co/internship-stevemould Generative AI can be used to make images that look different in diffe...
LAION ▷ #resources (1 messages):
blanchon.jl: That's super great!
MLOps @Chipro ▷ #general-ml (4 messages):
Pretrained VLMs
Anomaly Detection Techniques
- Discussing Pretrained VLM Compute Needs: A member inquired about using pretrained vision-language models (VLMs) but expressed concerns about lacking compute resources.
- Another member pointed out that these models require heavy computing capabilities to function effectively.
- Clarification on Anomaly Detection Data Requirements: One member asked whether the anomaly detection should be performed on logs or actual time-series data.
- They shared several methods for time-series data, including transformer models, Kalman Filters, and isolation forests, suggesting approaches for error evaluation with z-scores.
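As a tiny illustration of the z-score approach mentioned (a sketch with arbitrary window and threshold choices, not a recommendation from the thread):

```python
import numpy as np

def zscore_anomalies(series: np.ndarray, window: int = 30, threshold: float = 3.0):
    """Flag points whose deviation from a rolling mean exceeds `threshold` sigmas."""
    anomalies = []
    for i in range(window, len(series)):
        ref = series[i - window:i]
        mu, sigma = ref.mean(), ref.std()
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

data = np.random.default_rng(0).normal(size=500)
data[250] += 8  # inject an obvious outlier
print(zscore_anomalies(data))  # the injected spike at index 250 should be flagged
```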
Gorilla LLM (Berkeley Function Calling) ▷ #leaderboard (1 messages):
Model function calling bug
- Model struggles with function calling: A member raised a concern that the model is currently only capable of chatting, scoring a 1 in relevance, and is unable to call any functions, resulting in a 0 score for other capabilities.
- This bug limits the model's functionality significantly.
- Function performance evaluation: The discussion highlighted that the irrelevance score of 1 indicates a critical failure in the model's ability to perform any function calls effectively.
- The inability to execute functions could hinder user experience and expectations.
Gorilla LLM (Berkeley Function Calling) ▷ #discussion (2 messages):
Function Call Errors
AST Decoding Issues
- Model Produces Chat Instead of Function Call: The model outputted a conversational format instead of executing a function call, leading to issues in the model handler's ability to process the response.
- It was noted that this results in an automatic marking of the attempt as incorrect.
- Invalid Syntax Triggering AST Decoder Failure: An error message indicating 'Invalid syntax' was produced, resulting in a failure to decode the Abstract Syntax Tree (AST).
- This issue was categorized under 'ast_decoder:decoder_failed', signifying a critical problem in interpreting the model's output.
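To illustrate why a conversational reply trips the decoder (a hedged sketch of the general idea, not the leaderboard's actual evaluation code):

```python
import ast

def try_decode(output: str):
    """Accept only a parseable call expression, mirroring an AST-based checker."""
    try:
        tree = ast.parse(output, mode="eval")
    except SyntaxError as exc:
        return f"ast_decoder:decoder_failed ({exc.msg})"
    if isinstance(tree.body, ast.Call):
        return tree.body
    return "ast_decoder:decoder_failed (not a function call)"

print(try_decode('get_weather(city="Paris")'))    # parses into a Call node
print(try_decode("Sure! I'd be happy to help."))  # prose -> invalid syntax
```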