[AINews] Replit Agent - How did everybody beat Devin to market?
This is AI News! an MVP of a service that goes through all AI discords/Twitters/reddits and summarizes what people are talking about, so that you can keep up without the fatigue. Signing up here opts you in to the real thing when we launch it 🔜
A fully integrated Web IDE is all you need.
AI News for 9/4/2024-9/5/2024. We checked 7 subreddits, 384 Twitters and 30 Discords (214 channels, and 2723 messages) for you. Estimated reading time saved (at 200wpm): 303 minutes. You can now tag @smol_ai for AINews discussions!
A packed day. The annual Time 100 AI outrage piece. Maitai, AnythingLLM, Laminar launched. Melodio - new text-to-music model. Together AI announced kernel work and speculative decoding work. Andrej Karpathy on a podcast. $2000/mo ChatGPT. We very nearly featured Matt Shumer + Sahil Chaudhary's Reflection-Tuned finetune of Llama 3.1 70B as today's title story, but the 405B + paper is coming next week, so we will just give you a heads up that it is coming.
The big launch of the day is Replit Agent.
If you've been paying attention to the coding-agent company launches - like Claude Artifacts, Cursor Composer, Val.town's Townie, Cosine Genie, Honeycomb, and even the You.com pivot yesterday - this is pretty much what you'd expect Replit to do, just very, very well executed: full text-to-running-app generation with planning and self-healing. What's laudable is the lack of a waitlist - it is live today for paid users - and it can deploy to a live URL with a Postgres backend, usable by people who cannot code, including from your phone. Of course, Replit Agent can even make a Replit clone.
There are unfortunately no benchmarks or even blogposts to write about. Which makes our job simple. Watch video, try it, or scroll on.
Table of Contents
- AI Twitter Recap
- AI Reddit Recap
- AI Discord Recap
- PART 1: High level Discord summaries
- Stability.ai (Stable Diffusion) Discord
- Unsloth AI (Daniel Han) Discord
- OpenAI Discord
- HuggingFace Discord
- LM Studio Discord
- Nous Research AI Discord
- Interconnects (Nathan Lambert) Discord
- Modular (Mojo 🔥) Discord
- OpenRouter (Alex Atallah) Discord
- Perplexity AI Discord
- CUDA MODE Discord
- Eleuther Discord
- Latent Space Discord
- OpenInterpreter Discord
- Torchtune Discord
- Cohere Discord
- LlamaIndex Discord
- LangChain AI Discord
- LAION Discord
- tinygrad (George Hotz) Discord
- OpenAccess AI Collective (axolotl) Discord
- DSPy Discord
- DiscoResearch Discord
- LLM Finetuning (Hamel + Dan) Discord
- MLOps @Chipro Discord
- Gorilla LLM (Berkeley Function Calling) Discord
- PART 2: Detailed by-Channel summaries and links
- Stability.ai (Stable Diffusion) ▷ #general-chat (321 messages🔥🔥):
- Unsloth AI (Daniel Han) ▷ #general (254 messages🔥🔥):
- Unsloth AI (Daniel Han) ▷ #off-topic (35 messages🔥):
- Unsloth AI (Daniel Han) ▷ #help (18 messages🔥):
- Unsloth AI (Daniel Han) ▷ #showcase (1 messages):
- Unsloth AI (Daniel Han) ▷ #community-collaboration (2 messages):
- Unsloth AI (Daniel Han) ▷ #research (4 messages):
- OpenAI ▷ #ai-discussions (259 messages🔥🔥):
- OpenAI ▷ #gpt-4-discussions (3 messages):
- OpenAI ▷ #prompt-engineering (25 messages🔥):
- OpenAI ▷ #api-discussions (25 messages🔥):
- HuggingFace ▷ #announcements (1 messages):
- HuggingFace ▷ #general (193 messages🔥🔥):
- HuggingFace ▷ #today-im-learning (8 messages🔥):
- HuggingFace ▷ #i-made-this (6 messages):
- HuggingFace ▷ #reading-group (1 messages):
- HuggingFace ▷ #computer-vision (3 messages):
- HuggingFace ▷ #NLP (2 messages):
- HuggingFace ▷ #diffusion-discussions (2 messages):
- LM Studio ▷ #general (111 messages🔥🔥):
- LM Studio ▷ #hardware-discussion (60 messages🔥🔥):
- Nous Research AI ▷ #general (140 messages🔥🔥):
- Nous Research AI ▷ #ask-about-llms (20 messages🔥):
- Nous Research AI ▷ #research-papers (2 messages):
- Nous Research AI ▷ #interesting-links (1 messages):
- Nous Research AI ▷ #reasoning-tasks (1 messages):
- Interconnects (Nathan Lambert) ▷ #news (11 messages🔥):
- Interconnects (Nathan Lambert) ▷ #ml-questions (6 messages):
- Interconnects (Nathan Lambert) ▷ #random (42 messages🔥):
- Interconnects (Nathan Lambert) ▷ #posts (74 messages🔥🔥):
- Modular (Mojo 🔥) ▷ #announcements (1 messages):
- Modular (Mojo 🔥) ▷ #mojo (117 messages🔥🔥):
- Modular (Mojo 🔥) ▷ #max (8 messages🔥):
- OpenRouter (Alex Atallah) ▷ #app-showcase (3 messages):
- OpenRouter (Alex Atallah) ▷ #general (91 messages🔥🔥):
- OpenRouter (Alex Atallah) ▷ #beta-feedback (5 messages):
- Perplexity AI ▷ #general (77 messages🔥🔥):
- Perplexity AI ▷ #sharing (10 messages🔥):
- Perplexity AI ▷ #pplx-api (2 messages):
- CUDA MODE ▷ #general (36 messages🔥):
- CUDA MODE ▷ #triton (6 messages):
- CUDA MODE ▷ #beginner (2 messages):
- CUDA MODE ▷ #jax (1 messages):
- CUDA MODE ▷ #off-topic (5 messages):
- CUDA MODE ▷ #irl-meetup (1 messages):
- CUDA MODE ▷ #hqq-mobius (17 messages🔥):
- CUDA MODE ▷ #llmdotc (4 messages):
- CUDA MODE ▷ #cudamode-irl (1 messages):
- CUDA MODE ▷ #liger-kernel (8 messages🔥):
- Eleuther ▷ #general (19 messages🔥):
- Eleuther ▷ #research (52 messages🔥):
- Eleuther ▷ #lm-thunderdome (2 messages):
- Eleuther ▷ #multimodal-general (1 messages):
- Eleuther ▷ #gpt-neox-dev (1 messages):
- Latent Space ▷ #ai-general-chat (68 messages🔥🔥):
- OpenInterpreter ▷ #general (55 messages🔥🔥):
- OpenInterpreter ▷ #O1 (7 messages):
- Torchtune ▷ #general (27 messages🔥):
- Torchtune ▷ #dev (13 messages🔥):
- Cohere ▷ #discussions (13 messages🔥):
- Cohere ▷ #questions (8 messages🔥):
- Cohere ▷ #projects (3 messages):
- LlamaIndex ▷ #blog (3 messages):
- LlamaIndex ▷ #general (20 messages🔥):
- LangChain AI ▷ #general (14 messages🔥):
- LangChain AI ▷ #share-your-work (2 messages):
- LAION ▷ #general (10 messages🔥):
- LAION ▷ #research (6 messages):
- tinygrad (George Hotz) ▷ #general (8 messages🔥):
- tinygrad (George Hotz) ▷ #learn-tinygrad (6 messages):
- OpenAccess AI Collective (axolotl) ▷ #general-help (7 messages):
- OpenAccess AI Collective (axolotl) ▷ #community-showcase (5 messages):
- DSPy ▷ #show-and-tell (2 messages):
- DSPy ▷ #papers (1 messages):
- DSPy ▷ #general (1 messages):
- DiscoResearch ▷ #general (2 messages):
- LLM Finetuning (Hamel + Dan) ▷ #general (1 messages):
- MLOps @Chipro ▷ #events (1 messages):
- Gorilla LLM (Berkeley Function Calling) ▷ #discussion (1 messages):
AI Twitter Recap
all recaps done by Claude 3.5 Sonnet, best of 4 runs.
AI Development and Models
- Document Retrieval Techniques: @mervenoyann highlighted methods for multimodal RAG (retrieval-augmented generation), suggesting models like Donut or LayoutLM for improved structured responses from labeled data.
- AI Agents Functionality: @bindureddy explained that AI Agents can automate various tasks, such as document generation and technical image generation, enabling users to specify high-level tasks for execution by the AI.
- Image and Video Generation: @rohanpaul_ai detailed the development of JPEG-LM and AVC-LM, which utilize file encoding to enhance image and video generation. This method reduces data complexity while delivering impressive output quality.
AI Tools and Technologies
- New Enterprise Features: @rohanpaul_ai unveiled a new enterprise plan from AnthropicAI with significant features like a 500K context window and improved security measures, targeting specific use cases in marketing and engineering.
- GPU Market Trends: @LeptonAI discussed trends in the H100 GPU pricing model, predicting a drop in costs similar to that seen with the A100 GPUs, emphasizing the importance of monitoring and testing for reliability.
Philosophy and Ethics in AI
- Importance of Inquiry: @teortaxesTex criticized the lack of curiosity among scientists, suggesting a need for deeper inquiry into fundamental questions rather than accepting superficial explanations.
- Research Impact: @stanfordnlp shared recycled insights on how grad students can engage in impactful AI research, which aligns with broader discussions about meaningful contributions to the field.
Community and Collaboration
- Networking for NLP Events: A seminar announcement by @stanfordnlp promoted a talk on "The State of Prompt Hacking", inviting participation and emphasizing the importance of community engagement in discussions about NLP breakthroughs.
- Foundational Insights from Leadership: @RamaswmySridhar shared thoughts on scaling organizations, stressing the necessity for transparency and accountability as key drivers for high-growth companies.
- Mentoring and Opportunities: @aidan_mclau recognized the influence of community connections, advocating for younger engineers to leverage collaborative relationships for career growth.
AI Reddit Recap
/r/LocalLlama Recap
Theme 1. GitHub's Automated Flagging: Impact on AI Model Repositories
- Qwen repo has been deplatformed on github - breaking news (Score: 183, Comments: 75): GitHub temporarily flagged and removed the Qwen repository for unknown reasons, as reported by main contributor Junyang Lin. The project remained accessible on Gitee (Chinese GitHub equivalent) and Hugging Face, with documentation available at qwen.readthedocs.io. The post author urges the open-source community to create an archive to prevent future deplatforming incidents.
- The Qwen repository was restored on GitHub, as announced by contributor Justin Lin with the tweet: "We are fucking back!!! Go visit our github now!" Users discussed the need for backup solutions and distributed AI systems.
- Discussions arose about alternatives to GitHub, including AI-focused torrent trackers like aitracker.art and decentralized platforms such as Codeberg and Radicle. Users emphasized the importance of platform-independent solutions for code hosting and collaboration.
- Some users speculated about potential targeting of Chinese models or Microsoft's involvement, referencing the company's history of anticompetitive behavior. Others cautioned against jumping to conclusions and suggested waiting for GitHub's official explanation of the temporary removal.
All AI Reddit Recap
/r/machinelearning, /r/openai, /r/stablediffusion, /r/ArtificialInteligence, /r/LLMDevs, /r/Singularity
AI Research and Development
- Logan Kilpatrick suggests AI advancements are not slowing down if one is "paying close enough attention" (336.5 points)
- Comments note rapid improvements in AI video and image generation
- Some users express frustration with cryptic tweets and hype from AI researchers
- OpenAI co-founder Ilya Sutskever tweets "time to climb" (302.5 points)
- OpenAI tweets "we have so much to tell you" (233 points)
- Anthropic is "shipping so hard" according to a tweet (190.5 points)
- Christian Szegedy predicts superhuman AI mathematician by 2026, possibly even 2025 (140.5 points)
AI Funding and Competition
- Sutskever's new AI safety startup SSI has raised $1 billion (268 points)
- Reuters article on the funding (118 points)
- OpenAI and competitors are reportedly concerned about xAI's compute power (141 points)
AI Image Generation
- A 5-minute journey with Stable Diffusion video showcases the model's capabilities (366 points)
- Flux Icon Maker generates vector icon outputs using a custom-trained Lora and ComfyUI workflow (213 points)
- Allows direct conversion to vector graphics for scalability
- Uses the ComfyUI-ToSVG repository for vector conversion
AI Discord Recap
A summary of Summaries of Summaries by Claude 3.5 Sonnet
1. LLM Advancements and Benchmarking
- DeepSeek V2.5 Launch: DeepSeek V2.5 merges its Coder and Chat models, showing significant improvements in various performance metrics, such as an ArenaHard win rate increase from 68.3% to 76.3%. Read more here.
- Users appreciate these upgrades, enhancing overall usability while maintaining instruction-following capabilities. Change Log.
- Reflection 70B Model Announcement: The new Reflection 70B model introduces Reflection-Tuning for self-correction, generating excitement in the community. Announcement by Matt Shumer.
- Members eagerly anticipate the upcoming 405B version, projected to outperform existing alternatives. Tweet.
- This innovative approach could significantly improve model performance, sparking discussions on its potential applications and implications for model design. Research Paper.
2. AI Industry News and Funding
- xAI's Cluster Sparks Competitive Concerns: Elon Musk's progress in building xAI's 100k GPU cluster has raised concerns among rival model developers, with OpenAI's Sam Altman expressing worries over potential computing power disparities.
- The news sparked discussions about the escalating AI arms race, with one community member humorously noting: 'eventually we all become GPU poor'.
- OpenAI's Ambitious Pricing Strategy: Reports emerged that OpenAI is considering subscriptions up to $2,000 per month for access to their next-generation models, suggesting potential 100x capability increases over lower-tier versions.
- The community reacted with skepticism, with one member stating: 'This will be a Vision-Pro level disaster. I hope it's a joke'. Others speculated this might be more suitable for B2B pricing models.
3. Multimodal AI Innovations
- Transfusion Model Insights: Meta released a paper on the Transfusion model, a multitasking approach integrating language and diffusion training on 1T text tokens and 692M images. Transfusion Paper.
- It was highlighted that the methodology yields better scaling performance compared to traditional discrete token training. Transfusion Paper.
- Loopy: Audio-Driven Video Generation: The paper introduces Loopy, an end-to-end audio-conditioned video diffusion model aimed at synthesizing natural motion without manual spatial templates. Loopy Paper.
- Loopy enhances audio-portrait movement correlation and showcases significant improvements in performance based on extensive experimental results. Loopy Paper.
- Comfy Rewrite Project Gains Traction: Julien Blanchon announced a minimalist Comfy rewrite from scratch, seeking to create a highly extensible user interface with no dependencies. This project invites collaboration to simplify usage while maintaining flexibility.
- Members expressed interest in reforms to enhance user experience and reduce complexity, and more details are available here.
PART 1: High level Discord summaries
Stability.ai (Stable Diffusion) Discord
- Hash Rosin Model Madness: A user seeks the best model for generating realistic hash rosin images, referencing a specific Lora that provides detailed close macro shots.
- Suggestions include pairing the Lora with models like SDXL or Flux to enhance output quality.
- ControlNet Conundrum: A user struggles with ControlNet preprocessors in ComfyUI, specifically missing options beyond the tile preprocessor.
- Users recommend experimenting with tiled ksamplers and checking setup accuracy, with tutorial resources being suggested.
- Installation Insights: Discussions revolve around trying various model combinations, with a focus on using Flux and SDXL for superior image generation.
- Participants are keen to learn how to integrate different models with Lora to achieve desired results.
- GPU Performance Predicaments: Users discuss GPU performance limitations, particularly focused on VRAM while utilizing heavy models like SDXL and Flux.
- Concerns about lengthy generation times prompt suggestions to explore cloud services for enhanced capacity and efficiency.
- Cloud Computing Curiosities: Recommendations abound for using cloud platforms like Vast.ai to access high-performance GPUs for demanding models.
- The need for cloud solutions resonates, especially among users with lower-spec machines, such as laptops.
Unsloth AI (Daniel Han) Discord
- Unsloth gets Y Combinator backing: Unsloth announced being backed by Y Combinator, marking a significant milestone in their development.
- The team is excited about future developments, including their newly celebrated 2 million monthly downloads.
- New features in Unsloth unveiled: Unsloth will launch Unsloth Studio for model fine-tuning, and DoRA integration requires setting `use_dora = True`.
- Discussion also highlighted popular model recommendations like Gemma 2 27B and Llama 3.1 8B, with community members sharing insights from their experiments.
- Ilya raises $1 billion for AGI: Ilya Sutskever's recent $1 billion funding for Safe Superintelligence sparked confusion regarding its implications for scaling AGI and LLM reasoning.
- Members noted that there’s no evidence that scaling leads to AGI, pointing out that the investments are often driven by hype.
- Research on reasoning in LLMs: The community discussed the challenges of reasoning and planning in LLMs, asserting that scaling alone won't improve these capabilities.
- Insights suggested that effective reasoning may require architectural innovations or explicit reasoning mechanisms.
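The `use_dora = True` flag mentioned above can be sketched as part of a fine-tuning configuration. This is a hypothetical config fragment, not a recipe from the discussion: the model name, rank, and target modules are illustrative assumptions, and only the flag itself comes from the summary.

```python
# Hypothetical sketch of enabling DoRA in an Unsloth fine-tuning setup.
# Only `use_dora = True` is taken from the discussion above; the model
# identifier and remaining parameters are illustrative assumptions.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3.1-8b",  # assumed model identifier
    max_seq_length=2048,
    load_in_4bit=True,
)

model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # assumed LoRA rank
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_alpha=16,
    use_dora=True,  # the DoRA switch discussed above
)
```

DoRA (weight-decomposed low-rank adaptation) decomposes pretrained weights into magnitude and direction before applying low-rank updates, so the flag changes adapter behavior without changing the rest of the training loop.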
OpenAI Discord
- Debate on AI vs Human Cognition: A lively discussion revolved around the differences between AI reasoning and human understanding, emphasizing that LLMs utilize statistical predictions rather than authentic cognition.
- Participants pointed out that while AI simulates consciousness, it inherently lacks a true understanding that biological entities possess.
- Perplexity Emerges as a Favorite: Members frequently praised Perplexity for its speed and reliability, especially for tasks like research and projects, with the free tier deemed sufficient for many users.
- This makes Perplexity a competitive alternative to other paid subscription tools in the AI space.
- Gemini AI Performance Muddles Expectations: Users shared mixed experiences with Gemini AI, particularly noting unreliable outputs in programming tasks and hallucinations affecting response accuracy.
- Despite these setbacks, some users reported improvement in newer versions, leading them to continue exploring the tool.
- OpenAI Hits Major Subscription Milestone: OpenAI celebrated reaching 1 million paid users, driven by its business-focused offerings such as ChatGPT Team and Enterprise products.
- With subscription fees starting at $60 per user per month, this underscores significant revenue opportunities amid ongoing operational costs.
- Changing UI Draws User Confusion: Recent changes in ChatGPT’s user interface, particularly the absence of the regenerate button, have left users perplexed and uncertain about navigation.
- Some users speculate about interface elements being relocated to the model selection dropdown, affecting usability.
HuggingFace Discord
- Vision Language Models Overview: A new blogpost introduces the fundamentals of vision language models, aimed at newcomers in the field.
- It serves as a resource for understanding key principles that underpin the applications of visual and language integration.
- Streamlined Optimization for Tau LLM: The Tau LLM series examines methodologies to enhance training processes and performance metrics.
- Insights from community experts guide improvements in model efficiency and deployment strategies.
- InkubaLM-0.4B Expands Language Representation: The release of InkubaLM-0.4B addresses support for African languages, showcasing advancements in multilingual capabilities.
- This project represents a wider effort in the community to enhance diversity in AI applications.
- Kyber Odyssey Tackles Post-Quantum Encryption: The team announced acceptance of a submission at the AMA research challenge focusing on the implementation of NIST's post-quantum encryption protocols, available on GitHub.
- Their efforts prioritize accessibility for learners and communities, enhancing security protocols at minimal costs.
- Qwen2-VL-7B-Instruct Handler Released: A working handler.py and updated requirements.txt for Qwen2-VL-7B-Instruct showcase functionality on endpoints like T4, A100, and L4.
- These updates focus on maintaining compatibility and performance improvements, ensuring robust operation across different setups.
LM Studio Discord
- LM Studio 0.3.2 download error reported: Users encountered an 'unable to get local issuer certificate' error after the LM Studio 0.3.2 update, hindering model downloads. This issue may relate to corporate network security changes or SSL certificates.
- The inconvenience highlights connectivity challenges that could impact model deployment timelines in corporate environments.
- Image API exploration underway: Users seek free Image API providers with high limits, mentioning Stable Diffusion as a starting point. The request includes queries for alternatives offering advanced imaging tools.
- The search for expanded API capabilities reflects a growing demand for diverse imaging resources in project workflows.
- Reflection 70B model gains attention: The Reflection 70B model, known for correcting reasoning mistakes, is now available on Hugging Face. Users are eager for its integration into LM Studio following the recent upload.
- This model's capability is noted as a significant advancement for open-source LLM discussions within the community.
- User feedback on new LM Studio UI: Some users voiced criticism regarding the new UI in LM Studio 0.3.2, highlighting large elements and the lack of preset dropdowns as problems. Many expressed a desire for a more compact UI and the reintroduction of preset options.
- This feedback may guide future UI development to enhance user experience and functionality.
- Max RAM recommended for Mac users: Discussion emphasized that Apple users should aim for the largest RAM possible, with 64GB being a baseline for serious AI use. Users encouraged investing in NAS systems for efficient storage solutions.
- Ramping up RAM will facilitate enhanced model handling and performance for demanding workloads.
Nous Research AI Discord
- Reflection-Tuning for LLMs: The newly introduced method of Reflection-Tuning aims to enhance LLM capabilities by teaching models to self-correct during output generation using datasets intentionally crafted with errors.
- This innovative approach could significantly improve model performance, sparking discussions on its potential applications and implications for model design.
- Frustration with Mergekit Stalling: Users reported Mergekit stalling at 'Executing graph: 0% 0/1457' while merging fine-tuned Llama 3.1 models in Colab, preventing usable model creation.
- Guidance on resolving this issue seems essential for smooth model merging processes within the community.
- Ilya's $1 Billion AGI Fundraising: Ilya Sutskever successfully raised $1 billion for Safe Superintelligence, aiming to tackle AGI complexity through scaling efforts.
- Members remain puzzled about whether scaling alone can address the reasoning limitations of LLMs, reflecting ongoing debates in the AI community.
- Falcon Mamba Model Released: Falcon Mamba, launched by the Technology Innovation Institute under the TII Falcon Mamba 7B License 1.0, is now available on Hugging Face for open access.
- The launch blog emphasizes the model's competitive edge and integration within the Hugging Face ecosystem, inviting further exploration.
- Loopy: Advancements in Audio-Driven Video Generation: The paper introduces Loopy, an end-to-end audio-conditioned video diffusion model aimed at synthesizing natural motion without manual spatial templates.
- Loopy enhances audio-portrait movement correlation and showcases significant improvements in performance based on extensive experimental results.
Interconnects (Nathan Lambert) Discord
- xAI's GPU Cluster Raises Eyebrows: Elon Musk's 100k GPU cluster development for xAI is causing concern among rivals, with Sam Altman of OpenAI voicing his fears over competitive computing power disparities.
- One member quipped that we all inevitably become GPU poor, highlighting the escalating stakes in AI infrastructure.
- Unsloth Partners with YCombinator: Unsloth has secured backing from YCombinator to develop an integrated model creation solution, focusing on speed and accessibility using Triton and CUDA.
- Reflection Llama-3.1 Emerges as the Top Open-source LLM: Reflection Llama-3.1 70B is acclaimed as the leading open-source LLM, leveraging a technique named Reflection-Tuning for enhanced reasoning accuracy and trained with synthetic data by Glaive.
- Users can experiment with the model here.
- Quest for Effective Reasoning Datasets: A member sought recommendations for reasoning datasets, particularly those encompassing chain-of-thought reasoning, reflecting a crowded market of options.
- Prominent suggestions included the MATH and GSM8k benchmarks, revered for assessing LLM reasoning capabilities.
- OpenAI's Pricing Strategy Sparks Debate: Reports suggest that OpenAI may consider subscription fees reaching $2,000 per month, leading to skepticism regarding market viability given competitive pricing landscapes.
- Members are curious about potential B2B pricing models, questioning how such steep consumer costs could be justified in practice.
Modular (Mojo 🔥) Discord
- Magic Package Manager Takes Charge: The new Magic package manager officially supports MAX and Mojo projects with a single Conda package available now, streamlining virtual environment management.
- Users are urged to migrate to Magic or compatible tools, as the legacy `modular` CLI will cease updates starting Monday.
- Mojo Undergoes Performance Scrutiny: Testing reveals the ord() function in Mojo runs approximately 30 times slower than in C++ and Python, prompting calls for optimizations.
- Community discussions suggest inspecting the ord implementation and potential features like Small String Optimization to enhance performance.
- Uncertain Future for Model Serialization Format: The team has no ETA for the platform-independent model serialization format, characterized as a future enhancement expected to aid in containerization.
- Feedback highlights anticipation for this feature, which is hoped to smooth the deployment of models in Docker containers.
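The ~30x `ord()` gap reported above invites measurement. As a point of reference, here is a minimal Python micro-benchmark of the kind of tight loop such comparisons time; the Mojo and C++ counterparts are not shown, and the string and iteration count are arbitrary choices, not taken from the discussion.

```python
import timeit

# Time a tight loop over ord() on a short ASCII string - the kind of
# micro-benchmark behind the Mojo-vs-Python comparison above.
def sum_ords(s: str) -> int:
    total = 0
    for ch in s:
        total += ord(ch)
    return total

# 100k iterations is an arbitrary count chosen for a quick measurement.
elapsed = timeit.timeit(lambda: sum_ords("hello, mojo"), number=100_000)
print(f"{elapsed:.4f}s for 100k calls")
```

Comparing the same loop across languages is only meaningful if each version avoids interpreter-specific shortcuts (e.g., constant folding), which is why a function over a runtime string is used here.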
OpenRouter (Alex Atallah) Discord
- Infinite Bank Account Dilemma: A member humorously proposed the idea of condensing their bank account into an infinite amount, sparking lively debate about financial limits.
- This led to a philosophical discussion where another member questioned if condensing into an infinite amount truly implies expansion.
- Opus Outshines Sonnet in Specific Tasks: A member highlighted that Opus outperforms Sonnet on particular prompts, such as calculating angles on a digital clock display.
- However, many contend that comprehensive benchmarks still favor Sonnet, creating a split in performance evaluation.
- DeepSeek V2.5 Model Hits Higher Marks: The launch of DeepSeek V2.5, merging its Coder and Chat models, showcases significant metric improvements, like an ArenaHard win rate jump from 68.3% to 76.3%.
- Users appreciate these upgrades, enhancing overall usability while maintaining instruction-following capabilities.
- Reflection 70B Model Announcement: The new Reflection 70B model is set to introduce Reflection-Tuning for self-correction, generating excitement in the community.
- Members are eagerly anticipating the upcoming 405B version, projected to outperform existing alternatives, according to Matt Shumer's announcement.
- AI Studio Key Configuration Fails: AI Studio users reported a critical issue where the key entry does not save configurations, reverting back to Not Configured.
- While Hyperbolic and Lambda keys function properly, this inconsistency raises concerns among users regarding reliability.
Perplexity AI Discord
- Perplexity offers Free Membership for Students: Perplexity announced a free 1-year Pro membership for colleges reaching 500 student signups with `.edu` emails, raising questions about eligibility and sign-up criteria.
- Users must register by a specific date, and the conversation highlighted uncertainty about their university's participation.
- xAI's Colossus Steals the Show: Perplexity AI introduced the World's Most Powerful Supercomputer, xAI's Colossus, alongside discussions on the Oldest Known Board Game, Senet.
- For more about this groundbreaking discovery, check out the YouTube video here.
- File Uploads Made Easy with Perplexity API: A member outlined a method to implement file uploads in Flask using the Perplexity API, detailing both client-side and server-side configurations.
- This method modifies the /query route to accept file data, allowing for seamless integration into API prompts.
- Cold Showers Gain Traction: Members dived into the benefits of cold showers, highlighting health advantages like improved circulation and mood enhancement.
- This trend sparked discussions about daily routines and their mental benefits.
- Boosting Perplexity API Response Quality: A user sought advice on configuring Perplexity API requests to emulate the response quality of the Perplexity website.
- While no specific solutions were offered, the quest for enhanced API responses indicates a community interest in model performance.
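The file-upload method summarized above works by inlining uploaded file contents into the prompt sent through the Perplexity API's OpenAI-style chat-completions format. A minimal, framework-free sketch of the payload construction follows; the function name, model string, and system-prompt wording are assumptions, and the actual Flask `/query` route and HTTP call are omitted.

```python
def build_query_payload(question, file_text=None,
                        model="llama-3.1-sonar-small-128k-online"):
    """Build an OpenAI-style chat-completions payload for the Perplexity
    API, optionally inlining uploaded file contents into the user prompt.
    The model name here is an assumption, not taken from the discussion."""
    user_content = question
    if file_text:
        # Append the uploaded file's text to the question, as in the
        # server-side approach described above.
        user_content = f"{question}\n\nAttached file contents:\n{file_text}"
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "Be precise and concise."},
            {"role": "user", "content": user_content},
        ],
    }
```

A Flask route would read the uploaded file, call this helper, and POST the resulting dict to the Perplexity chat-completions endpoint with an API key.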
CUDA MODE Discord
- Cursor AI Tool Yields Mixed Reviews: While discussing the Cursor AI coding tool, several members expressed skepticism, saying it feels unhelpful, although its paid tier's code retrieval is a clear step up from the free tier.
- One member noted, 'Does anyone actually try to use it for tickets right?' questioning its effectiveness in practical scenarios.
- New Reflection 70B Marks Milestone in Open-Source LLMs: The launch of Reflection 70B, an open-source LLM refined through Reflection-Tuning, excited many, with a follow-up model, 405B, expected next week to set new standards.
- A community member shared a tweet from Matt Shumer, emphasizing the model's capabilities to self-correct mistakes.
- Diving into Pallas Kernels: Members explored various kernels implemented in Pallas, available on GitHub, showcasing transformations for Python+NumPy programs.
- The Splash Attention kernel was highlighted, with its implementation linked here for in-depth review.
- Exploring Open Sora's CUDA Implementation: A member is tackling the implementation of Open Sora in CUDA and C++, noting the difficulty and slow progress on this extensive project.
- They expressed a wish for more advancements in graphics, indicating a desire for progress in the technical domain.
- Memory-Bound Performance Analysis in Triton: Performance remains slow in memory-bound setups, while larger batch sizes approach FP16 speeds, indicating ongoing efficiency work.
- The conversation also leaned towards using autotuning to potentially enhance speed, as batch sizes grew.
Eleuther Discord
- MCTS in Image Generation: A Debate: The discussion on applying Monte Carlo Tree Search (MCTS) in image tasks opened questions about its logic reversal when compared to models like AlphaZero and AlphaProof.
- One participant emphasized how MCTS relies heavily on previous steps, pointing out its focus on enhancing policies rather than generating them.
- Creative AI Workshop Interest: Members are seeking information on upcoming creative AI workshops, aiming to leverage insights from their recent paper on diffusion models.
- Skepticism arose regarding their relevance for the ICCV timeframe, especially given looming submission deadlines.
- Scaling Parameters: A Pitfall: Concerns emerged about the inefficiencies in scaling parameter counts without a corresponding increase in dataset size, with references to the Chinchilla paper.
- One user suggested examining the paper's formulas for a clearer understanding of the implications of scaling.
- Transfusion Model Insights: Discussion centered around the Transfusion paper, which offers insights into training multi-modal models on both discrete and continuous data.
- It was highlighted that the methodology yields better scaling performance compared to traditional discrete token training.
- AI Boosts Developer Productivity: Findings from a paper titled The Effects of Generative AI on High Skilled Work showed a 26.08% increase in task completion among developers using AI tools like GPT 3.5.
- This suggests significant productivity improvements linked to the infusion of AI technologies in development.
Latent Space Discord
- SSI Inc secures massive $1B funding: SSI Inc has successfully acquired $1B in a funding round, alongside Sakana's $100M achievement.
- Speculation arose regarding potential allocations from this funding towards Nvidia in engineering discussions.
- You.com shifts strategies with $50M boost: You.com transitions from AI search ventures to focus on deeper productivity agents, powered by a recent $50M funding round.
- Founder Richard Socher emphasized that competing with Google on simple queries is less effective than enhancing complex query capabilities.
- Karpathy champions Tesla in autonomous driving: In a captivating podcast, Andrej Karpathy predicts that Tesla will lead in self-driving tech, despite Waymo's advancements, citing a vital software versus hardware challenge.
- He highlighted the transformative potential of Optimus, Tesla's humanoid robot, for future factory applications.
- OpenAI contemplates a $2000/month model: OpenAI is reportedly considering a $2000/month subscription for accessing their next-gen model, suggesting possible 100x capability increases over lower-tier versions.
- Discussions hint at either significant model performance enhancements or the need to cover escalating operational costs.
- Replit Agent automates dev tasks: Replit has launched the Replit Agent to automate software development tasks, including setting up development environments during early access.
- This initiative aims to strengthen Replit's offerings by integrating AI more deeply into programming workflows.
OpenInterpreter Discord
- Open Interpreter Marks Another Year: Members celebrated the birthday of Open Interpreter, highlighting its achievements in AI-human collaboration and prompting a humorous remark about 'AGI achieved, we can all go home now'.
- This reflective moment underscored the tool’s relevance in today’s AI discourse.
- Teaching the Open Interpreter New Tricks: Discussion centered around Teach Mode, where users can say, 'I want to teach you something' to help the system develop new skills based on user input.
- The system’s adaptability aligns with principles shared by Rabbit Tech, demonstrating its potential in diverse applications.
- Open Repos Encourage Collaboration: The Open Interpreter and 01 repositories are now open-source, inviting developers to integrate innovative functionalities into their applications.
- One user expressed aspirations to automate web tasks by leveraging these open resources.
- AGI Buzz in the Air: A curious member raised a question regarding the AGI announcement, provoking a mix of excitement and skepticism among participants, reiterated by 'AGI achieved, we can all go home now'.
- This chatter reflects a vibrant community engagement around advanced AI concepts.
- Fulcra App: Still Waiting to Explore: Interest simmered around the international launch of the Fulcra app, with expectations high from users outside New Zealand.
- The anticipated release timeline remains unclear, keeping users on edge.
Torchtune Discord
- PyTorch 2.4 Compile Errors Emerge: Members reported compile errors with PyTorch 2.4, particularly with fake tensors, suggesting use of `os.environ['TORCH_COMPILE_BACKEND'] = 'aot_eager'` to mask errors in CI.
- A possible CI issue regarding the default backend was raised, stressing the need for updated gcc installations for CI workers.
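The workaround amounts to selecting a lighter compile backend before anything calls `torch.compile`; a minimal sketch (whether torchtune's CI actually reads this variable is an assumption here):

```python
import os

# Force the aot_eager backend so torch.compile skips the failing
# inductor path (the fake-tensor errors reported above).
os.environ["TORCH_COMPILE_BACKEND"] = "aot_eager"

# A module would then be compiled as usual, e.g.:
# import torch
# compiled = torch.compile(model, backend=os.environ["TORCH_COMPILE_BACKEND"])
```

`aot_eager` traces the graph through AOTAutograd but executes it eagerly, which masks backend-specific codegen errors at the cost of losing inductor's speedups.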
- Input Padding Hits Performance Hard: Testing revealed that input padding with the Alpaca dataset incurred a substantial drop in speed, despite showing improved memory footprint.
- The suggestion to report both padded and unpadded tokens aimed to quantify the performance impact of padding more effectively.
- Enhancements to DeepFusionModel Tests: The latest updates for DeepFusionModel included added tests for kv caching, with a pull request shared for detailed review and feedback.
- Pull Request #1449 proposes overrides for max cache sequence length, prompting discussions on its necessity.
- Unsloth Gains Y Combinator Support: Unsloth has secured backing from Y Combinator, igniting excitement around prospective support for community initiatives.
- Anticipation grew as one member expressed hope for similar opportunities, highlighting the shifting landscape of community projects.
- Clarification on Meta Employment: A member clarified misconceptions regarding employment at Meta, emphasizing that not all participants are affiliated with the company.
- One member noted that Salman is doing it purely for the love of the game, dispelling assumptions of professional ties.
Cohere Discord
- Tackling System Prompt Errors: A user faced issues optimizing their system prompt, receiving errors stating Could not parse & validate the given body.
- Another member advised providing detailed prompts in a designated channel for focused help.
- What's Cooking with Cohere?: Members are eager to learn about the latest updates from Cohere, with one pointing to the Cohere blog for fresh insights.
- This resource highlights customer use cases and recent developments crucial for understanding ongoing improvements.
- Implementing Text Suggestions Like Gmail: A member sought advice on replicating a text suggestions feature akin to Gmail's Smart Compose using Cohere models.
- Another member suggested the importance of contextual prompting to make this feature feasible.
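One way to sketch that contextual-prompting suggestion; the prompt wording and the commented-out Cohere call are assumptions for illustration, not something confirmed in the discussion:

```python
def build_compose_prompt(draft: str) -> str:
    """Wrap the user's partial email in an instruction asking for a short completion."""
    return (
        "Continue the email below with a short, natural continuation, "
        "the way Gmail Smart Compose would. Return only the continuation.\n\n"
        "Email so far:\n" + draft
    )

# A call against a Cohere model might then look like (untested sketch):
# import cohere
# co = cohere.Client("YOUR_API_KEY")
# suggestion = co.chat(message=build_compose_prompt("Thanks for the up"))
```

In practice the draft would be re-sent on every few keystrokes with a low max-token limit, so the suggestion stays short and cheap to regenerate.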
- Using LLM Agents for Reports: There's interest in leveraging LLM agents to generate stakeholder reports, drawing from previous writing styles and meeting notes.
- Suggestions ranged from RAG with Nimble rerank for meeting notes to meta prompting techniques to retain writing style consistency.
- OpenSesame 2.0 Debuts Major Updates: OpenSesame 2.0 launched with enhancements like no longer requiring ground truth input and integration with vector DBs for semantic searches.
- It also supports multiple models, including functionalities for platforms like OpenAI, Gemini, and Cohere.
LlamaIndex Discord
- Netchex AI Revolutionizes Employee Support: Netchex implemented AskHR + Netchex AI using LlamaIndex, transforming employee support for small to medium-sized businesses in just one month with two engineers. They used advanced RAG pipelines for context-aware responses, showcasing rapid development in the HR sector. Read more here.
- This implementation demonstrates the effective use of AI in enhancing employee interactions, marking a significant evolution in the HR landscape.
- create-llama Introduces Multi-Agent Workflow: The latest update to create-llama offers a multi-agent workflow in Python, emphasizing its role in rapid deployment for various use cases. An example utilizes three agents to generate a blog post, demonstrating its flexibility and efficiency. Check it out!.
- This feature aims to streamline content creation processes, empowering developers to innovate with AI capabilities easily.
- Launch of llama-deploy for Microservices: llama-deploy enables seamless microservice deployment based on LlamaIndex Workflows, marking a substantial improvement in deployment efficiency. This launch builds on lessons from llama-agents and Workflows, enhancing capabilities for developers. Get details here.
- The system aims to simplify the deployment of AI-centric applications, crucial for scaling services quickly.
- Installing llama-index-experimental-param-tuner: To install the experimental package, run `pip install llama-index-experimental` for llama-index version 0.11.3. One user confirmed that this installation step is necessary for the functionality.
- This package is expected to offer advanced features for users seeking to leverage the latest improvements in LlamaIndex.
- Setting up Claude with LlamaIndex: A comprehensive guide was shared for utilizing Claude's latest models in LlamaIndex, including setup instructions and tokenizer settings. The models range from Claude 3 Opus to Claude 3 Haiku, emphasizing adherence to documentation.
- This integration opens opportunities for building sophisticated applications that utilize advanced language models.
LangChain AI Discord
- Community Input Sought for AI Agent Platform: A member is exploring a platform to build, deploy, and monetize AI agents and is requesting insights from other builders during the research phase.
- They are offering beta access in return for a brief chat, aiming to refine features based on community feedback.
- Document-Driven Chatbot Challenges: Assistance is requested for a chatbot that needs to interact using content from two PDF files, with an emphasis on user experience.
- Key requirements include document loading, response generation, and efficient conversation management.
- Exploring Advances in Vision Language Models: A blog post reveals the journey from early models like CLIP to sophisticated solutions such as Flamingo and LLaVA, emphasizing joint training with vision and text data.
- Gamified Learning with CodeMaster App: The CodeMaster app has launched, aimed at enhancing coding skills through gamification and science-backed learning techniques.
- Community feedback praises its spaced repetition feature, significantly boosting user engagement and knowledge retention.
- Shifting from SQLite to Cloud Solutions: Options for transitioning from SQLite to Postgres or MySQL for a ReAct agent deployed on GCP AppEngine were discussed.
- Concerns about losing local SQLite context with redeployments were also raised.
LAION Discord
- Comfy Rewrite Project Gains Traction: Julien Blanchon announced a minimalist Comfy rewrite from scratch, seeking to create a highly extensible user interface with no dependencies. This project invites collaboration to simplify usage while maintaining flexibility.
- Members expressed interest in reforms to enhance user experience and reduce complexity, and more details are available here.
- Reflection 70B Claims Self-Correction Ability: Reflection 70B is announced as the top open-source model capable of fixing its own mistakes through Reflection-Tuning. Reports indicate it outperforms models like GPT-4o across benchmarks, with a 405B version on the horizon.
- The AI community buzzes with excitement, as a noteworthy tweet highlights its revolutionary features.
- Transfusion Model Combines Modalities: Meta released a paper on the Transfusion model, a multitasking approach integrating language and diffusion training on 1T text tokens and 692M images. It shows potential for future extensions to audio and potentially video.
- The study proposes innovative use of VAE for seamless media transitions, which could have broad implications for multi-modal AI developments, as described in the arXiv paper.
- SwarmUI Focuses on Modular Accessibility: The SwarmUI project aims to provide a modular web user interface for Stable Diffusion, prioritizing user-friendliness and performance enhancements. A GitHub link was shared, highlighting its goal to make power tools easily accessible.
- Members noted its extensibility is a key feature, catering to users who seek streamlined operations in their AI applications. More can be explored on its GitHub page.
- Unified Multi-modal Model Proposed: Members discussed the vision of a Transfusion+GameNGen model that integrates language, vision, audio, and gaming engines into a singular framework. Such an advancement could redefine interactions across AI and modalities.
- This concept sparked debate on the future of integrated AI solutions, with many keen on exploring the practical implications of this type of model.
tinygrad (George Hotz) Discord
- Bounty Payments Completed: All individuals who emailed to claim bounties have been paid, and recipients are encouraged to report if they have not received their compensation.
- This promotes transparency and efficiency in managing user rewards within the tinygrad community.
- Tinyboxes Rental Proposal Takes Shape: A concept was shared regarding manufacturing tinyboxes for sale or rental from a data center, emphasizing an upgrade path for hardware.
- The plan aims to sell outdated hardware to keep stock fresh for consistent rentals.
- Discussion on Pricing Models for Performance: Members explored pricing models, recommending costs be expressed as $/exaflops and $/tflops*month.
- This highlights the complexity of pricing structures and how they cater to different user needs.
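The two proposed units convert directly: one sustained TFLOPS for a 30-day month delivers 2.592 exaFLOP of compute. A small sketch with illustrative numbers:

```python
SECONDS_PER_MONTH = 30 * 24 * 3600  # 2,592,000 s in a 30-day month

def dollars_per_exaflop(dollars_per_tflops_month: float) -> float:
    """Convert a $/(TFLOPS*month) rental rate into $/exaFLOP of delivered compute."""
    # One sustained TFLOPS for a month performs 1e12 * SECONDS_PER_MONTH FLOPs.
    exaflops_per_tflops_month = 1e12 * SECONDS_PER_MONTH / 1e18  # = 2.592
    return dollars_per_tflops_month / exaflops_per_tflops_month

# e.g. a hypothetical $500/(TFLOPS*month) rate works out to ~$193 per exaFLOP
rate = dollars_per_exaflop(500.0)
```

The $/exaFLOP framing prices completed work (useful for batch training jobs), while $/(TFLOPS*month) prices reserved capacity regardless of utilization.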
- Confusion Over phi Operation in IR: A member inquired about the phi operation in the IR, asking how it compares to LLVM IR's placements in loop bodies.
- Discussion clarified it's not a true phi operation, with suggestions to rename it to ASSIGN or UOps.UPDATE.
- Insights on cstyle Renderer: George Hotz directed attention to the cstyle renderer for a better understanding of its role in the ongoing discussion.
- This was acknowledged as a useful reference by members seeking deeper comprehension.
OpenAccess AI Collective (axolotl) Discord
- Unsloth Phi converts seamlessly to Llama: The Unsloth Phi architecture now converts to Llama, allowing for the use of a Llama3 configuration for more efficient experimental setups.
- This adjustment offers a potential boost in experimentation efficiency.
- Ongoing discussions about Phi3 challenges: While Phi3 is considered safe, there are challenges that need consistent attention highlighted in the Discord history.
- Members suggest that while it functions, it may require further investigation due to ambiguities in performance.
- Invisietch looks for a small model: Invisietch seeks a small model for rapid experimentation, reflecting a need for accessible resources in the community.
- This pursuit showcases a wider interest in agile development tactics.
- Dora support is officially confirmed: Axolotl now officially supports Dora via the parameter `peft_use_dora: true`, as noted in a GitHub issue.
- Members are encouraged to review prior discussions to explore similar feature requests.
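For orientation, the axolotl flag corresponds to PEFT's DoRA switch; a minimal sketch, assuming peft >= 0.9 and that axolotl forwards the flag to `LoraConfig` (the rank and target modules below are hypothetical):

```python
# PEFT-level equivalent of setting `peft_use_dora: true` in an axolotl config
# (assumption: axolotl passes this through to LoraConfig's `use_dora`).
adapter_kwargs = {
    "r": 16,
    "lora_alpha": 32,
    "target_modules": ["q_proj", "v_proj"],
    "use_dora": True,  # Weight-Decomposed Low-Rank Adaptation
}

# from peft import LoraConfig, get_peft_model
# model = get_peft_model(base_model, LoraConfig(**adapter_kwargs))
```

DoRA decomposes each adapted weight into magnitude and direction components, typically improving low-rank fine-tuning quality at a modest memory cost.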
- Llama-3.1-8B turns into a Molecular Design Engine: Fine-tuning and DPO successfully transformed Llama-3.1-8B into a model for generating molecules based on user-defined properties.
- This advancement enables on-demand molecule creation with minimal input instructions.
DSPy Discord
- DSPy Usecase List Revealed: The DSPy usecase list has been officially announced, detailing insights into nearly 100 products built with Large Models (LMs) in production, as shared in a tweet.
- This initiative, led by key contributors, aims to gather community input and explore current deployments within a DSPy context.
- ColPali Enhances Document Retrieval: A new method named ColPali has launched, efficiently enhancing document retrieval through a late interaction mechanism for visually rich documents, as described here.
- Developed by Manuel Faysse and Hugues Sibille, ColPali addresses limitations in existing systems by incorporating non-textual elements like tables and figures.
- Visual Document Retrieval Benchmark Introduced: The Visual Document Retrieval Benchmark (ViDoRe) has been introduced, designed to assess retrieval performance across diverse languages and document types.
- This benchmark aims to enhance evaluation methods by integrating a broader spectrum of document elements beyond plain text.
- Livecoding Sessions in Full Swing: A reminder about ongoing livecoding sessions encourages members to participate via this link.
- These sessions are intended to bolster hands-on coding skills within the community.
- New Paper Alert: A link to a new research paper was shared, found here, highlighting topics relevant to AI and model developments.
- This contribution adds to the ongoing discourse surrounding advancements in the field.
DiscoResearch Discord
- Member Seeks Experience with Multimodal LLMs: A member inquired about experiences with multimodal LLMs that incorporate both text and speech inputs, particularly focusing on training and finetuning efforts.
- This reflects an escalating interest in weaving speech capabilities into LLM frameworks.
- YouTube Video on Multimodal Insights: A member shared a YouTube video that presumably covers aspects of multimodal models.
- This resource could serve as a valuable introduction for those aiming to operationalize multimodal capabilities in their projects.
LLM Finetuning (Hamel + Dan) Discord
- Meeting Needs a Transcript: Participants emphasized the necessity of a transcript of the entire meeting, including attendee names, to improve accountability.
- This could enhance reference accuracy and accountability for future discussions.
- Focused Proof of Concept in Development: One member is developing a proof of concept for a report, indicating a hands-on approach to project implementation.
- This moves towards practical implementation while keeping the scope manageable.
- Complexities of Agent Workflows: The conversation included ideas about leveraging agents' workflows, hinting at a potential shift in project methodology.
- However, concerns emerged regarding the complexity of evaluating agents, stemming from a lack of established standards.
MLOps @Chipro Discord
- AI Enterprise Summit Set for SF: The AI Enterprise Summit is scheduled for October 2, 2024, in San Francisco, targeting executives and AI enthusiasts focused on scaling AI products. Use code AIR50 for a $50 discount on tickets to this exclusive event.
- Expected to draw a crowd of ambitious professionals, the summit aims to facilitate connection and learning opportunities among attendees.
- Industry Leaders to Take the Stage: Keynote speakers for the summit include Paul Baier (CEO of GAInsights), Ted Shelton (COO of Inflection AI), and Jeremiah Owyang (Blitzscaling Ventures), providing insights on practical business applications.
- These leaders will offer valuable perspectives from the industry, making it a significant learning experience for all participants.
- Networking for AI Professionals: The summit promotes a curated gathering where AI professionals can network and collaborate on AI product development. This environment aims to foster constructive dialogues among leaders in the field.
- Participants will have the chance to engage directly with thought leaders, ensuring a productive exchange of ideas and fostering potential collaborations.
Gorilla LLM (Berkeley Function Calling) Discord
- Gorilla LLM Issue Acknowledgment: A member acknowledged the issue regarding Gorilla LLM and assured they would take a look at it.
- No additional details were provided, but this indicates engagement in addressing potential improvements.
- Berkeley Function Calling Insights: Discussion around Berkeley Function Calling included inquiries about the utility of this approach in Gorilla LLM integration.
- Although specific comments were not available, the interest reflects a trend towards enhancing function calls and interfaces in newer models.
The Alignment Lab AI Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The Mozilla AI Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The AI21 Labs (Jamba) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
PART 2: Detailed by-Channel summaries and links
Stability.ai (Stable Diffusion) ▷ #general-chat (321 messages🔥🔥):
Model Recommendations for Hash Rosin
Using ControlNet with ComfyUI
Installations and Model Pairings
Technical Challenges and Performance
Cloud Computing Options
- Model Recommendations for Hash Rosin: A user seeks advice on the best model to generate realistic hash rosin images, referencing a specific Lora that recreates close macro shots of Hash Rosin.
- Suggestions include pairing the Lora with models like SDXL or Flux to achieve better quality outputs.
- Using ControlNet with ComfyUI: A user inquires about difficulties with ControlNet preprocessors in ComfyUI, specifically not seeing options beyond the tile preprocessor.
- Recommendations suggest that users try tiled ksamplers and ensure their setup is correct; tutorials may also be helpful.
- Installations and Model Pairings: There are discussions on experimenting with various models, emphasizing the use of Flux and SDXL for optimal image generation.
- Users express interest in understanding how to combine different models with Lora to get the desired results.
- Technical Challenges and Performance: Users discuss the performance of their GPUs, with a focus on VRAM limitations while running heavy models like SDXL and Flux.
- Concerns are raised about generation times, with some users suggesting cloud services for higher capacities and faster processing.
- Cloud Computing Options: Recommendations point towards using cloud services like Vast.ai for powerful GPU access to handle demanding models.
- Discussions highlight the advantages of cloud setups, particularly for users with lower-spec local machines, such as laptops.
- Tweet from Julien Blanchon (@JulienBlanchon): Trying to figure out how to fix Comfy 👀
- Civitai: The Home of Open-Source Generative AI: Explore thousands of high-quality Stable Diffusion models, share your AI-generated art, and engage with a vibrant community of creators
- h94/IP-Adapter · Hugging Face: no description found
- lllyasviel/ControlNet-v1-1 at main: no description found
- Stable Diffusion v1.5 [bf16/fp16] [no-ema/ema-only] [no-vae] [SafeTensors] [Checkpoint] - v1.5-no-ema | Stable Diffusion Checkpoint | Civitai: Stable Diffusion v1.5 [bf16/fp16] [no-ema/ema-only] [no-vae] [SafeTensors] [Checkpoint] ===================== Disclaimer: I'm just a Script kiddie....
- lllyasviel/sd_control_collection at main: no description found
- Hash Rosin - v1.0 | Stable Diffusion LoRA | Civitai: This Lora can recreate close up macro shots of Hash Rosin in jars and on Dabbers. it is also flexible enough to make things out of Rosin like anima...
Unsloth AI (Daniel Han) ▷ #general (254 messages🔥🔥):
Y Combinator backing
Unsloth Model Updates
Model Recommendations
Fine-tuning with Dora
Reflection Llama-3.1 70B
- Unsloth gains Y Combinator backing: Unsloth recently announced being backed by Y Combinator, marking a significant milestone for the team and their mission.
- The team shared their excitement about this achievement and their plans for future developments.
- Exciting new features in Unsloth: The team revealed an upcoming UI called Unsloth Studio for fine-tuning models and celebrated reaching 2 million monthly downloads.
- Users interested in multi-GPU testing are encouraged to show their interest for future opportunities.
- Model Recommendations for 4090: Popular models recommended for experimentation include Gemma 2 27B, Mistral Nemotron 12B, Phi-3 medium, and Llama 3.1 8B.
- The community was engaged in discussing these recommendations and sharing their experiences.
- Fine-tuning models with Dora: Dora integration for fine-tuning is available and may require setting `use_dora = True` for users to utilize.
- Users are reminded that it's possible to fine-tune models while considering memory constraints.
- Reflection Llama-3.1 70B model release: The Reflection Llama-3.1 70B model incorporates a new technique called Reflection-Tuning to enhance reasoning capabilities.
- The community is curious about its performance, inviting discussions on testing and comparisons.
- Unsloth x YCombinator: Unsloth, your favorite open source fine-tuning package is now backed by YCombinator and we intend to keep open source more lively than ever!
- Ollamafy (Work in Progress) - a Hugging Face Space by unclemusclez: no description found
- mattshumer/Reflection-70B · Hugging Face: no description found
- Republica De Fifidonia Rick GIF - Republica De Fifidonia Rick Idk - Discover & Share GIFs: Click to view the GIF
- Tweet from Daniel Han (@danielhanchen): Uploaded more 4bit bnb quants to http://huggingface.co/unsloth for 4x faster downloading! 1. @NousResearch Hermes 8, 70 & 405b 2. @cohere Command R 32b, R+104b 3. @akjindal53244 Llama 3.1 Storm 4. Reu...
- unsloth/unsloth-tutorial: Get up and running with large language models.
- OpenAI Compatible Web Server - llama-cpp-python: no description found
- get_chat_template using <|eot_id|> for tool use when <|eom_id|> should be used. · Issue #845 · unslothai/unsloth: With llama 3.1 <|eom_id|> has been introduced to help support multi-turn reasoning. End of message. A message represents a possible stopping point for execution where the model can inform the ex...
- unclemusclez/smollm-135m-instruct-devinator: SmolLM 135M Instruct Trained on DEVINator Data for Open Hands (Open Devin)
- microsoft/Phi-3.5-mini-instruct · Hugging Face: no description found
- unsloth/Phi-3.5-mini-instruct-bnb-4bit · Hugging Face: no description found
- unsloth/Phi-3.5-mini-instruct · Hugging Face: no description found
Unsloth AI (Daniel Han) ▷ #off-topic (35 messages🔥):
Back to school humor
Age perception in conversation
Discussion about age
Meme sharing
- Infinit3e's Back to School Excitement: Infinit3e announced going back to school for AI, sparking a humorous exchange about age perceptions.
- Theyruinedelise sarcastically remarked, thinking Infinit3e was considerably older, calling into question stereotypes around age.
- Age Misunderstandings Lead to Laughs: A funny discussion took place when members joked about Infinit3e's age, guessing he might be around 20-22.
- Infinit3e humorously reacted, stating he was actually 35, while others joined in jest about their own ages.
- Cool Old Men? A Theoretical Debate: MrDragonFox made a playful argument that old men can be cool, even if he himself doesn’t fit that description.
- The conversation continued with members teasing each other about their ages in a light-hearted manner.
- Sharing Memes to Express Humor: Infinit3e shared a meme featuring a character's friend requests, linking it to the ongoing laughter about age discrepancies.
- The gif expressed a humorous take on the number of friend requests, adding to the playful atmosphere of the chat.
Link mentioned: Fivie Kuu0001 GIF - Fivie Kuu0001 Lynxdenis - Discover & Share GIFs: Click to view the GIF
Unsloth AI (Daniel Han) ▷ #help (18 messages🔥):
Trailing Newline Issue in Vim
Using Unsloth for Text Summarization
Running Unsloth Locally for Private Data
Gemma Model Comparisons
Finetuning Chatbots with User Data
- Trailing newline fix implemented: A member highlighted a trailing newline issue added by Vim and submitted a PR to address it. This change is linked to another issue (#992) regarding chat formatting.
- Theyruinedelise responded, 'thank you we’ll check!', indicating the community's acknowledgment.
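The one-character diff in that PR is easy to see on a toy template string (the template text itself is hypothetical):

```python
chunk = "\n{{ .Prompt }}\n"

# Old behavior: lstrip only trims the left side, so the trailing
# newline leaks into the generated Ollama chat template.
left_only = chunk.lstrip()   # "{{ .Prompt }}\n"

# The PR's fix: strip trims both ends.
both_ends = chunk.strip()    # "{{ .Prompt }}"
```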
- Unsloth cannot summarize snippets: A user inquired if Unsloth could use AI to summarize text snippets, but a member clarified that Unsloth cannot do that. They recommended using any AI models, such as ChatGPT, for summarization tasks.
- This suggests users may not be aware of the model capabilities available in Unsloth and are encouraged to explore other AI solutions.
- Documentation aids local model training: A new user was advised to start by going through the Unsloth documentation for guidance on finetuning models locally. The documentation covers creating datasets and deploying custom models.
- Members highlighted essential resources to help navigate the finetuning process effectively.
- Comparison of Gemma models confirmed: A member asked if unsloth/gemma-2-9b-it is the same as google/gemma-2-9b-it, to which another member confirmed that they are indeed the same. This clarification helps prevent any confusion regarding model usage.
- The detailed discussion also indicates shared resources potentially are interchangeable.
- Building datasets for chatbot finetuning: A user expressed interest in finetuning a chatbot and sought advice on building a dataset from previous tickets and live chats. Another member suggested defining data format and focusing on specific tasks for effective training.
- The conversation reflects the importance of tailored datasets for achieving desired outcomes in chatbot performance.
- Welcome | Unsloth Documentation: New to Unsloth? Start here!
- Changing lstrip -> strip to address trailing spaces/newlines in chat formatting for Ollama (#992) by rodrigomeireles · Pull Request #993 · unslothai/unsloth: This is related to #992
Unsloth AI (Daniel Han) ▷ #showcase (1 messages):
rodrigo_meireles: Do you have some report comparing them somehow? Would be interesting to read.
Unsloth AI (Daniel Han) ▷ #community-collaboration (2 messages):
Channel Etiquette
Suggested Channels
- Avoiding Message Duplication: A member urged others not to post the same message multiple times across the server, promoting better channel usage.
- This call for moderation targets improved communication efficiency within the community.
- Best Channel for Posting: One member suggested that channel <#1257011997250424842> is likely the best place to share certain messages.
- This suggestion indicates an ongoing effort to streamline topic discussions in appropriate spaces.
Unsloth AI (Daniel Han) ▷ #research (4 messages):
Illya's billion-dollar funding
LLMs scaling and AGI
Reasoning and planning in LLMs
- Confusion over Illya's Funding for AGI: A member expressed confusion about the significance of Illya raising 1 billion dollars for Safe SuperIntelligence focused on scaling AGI, questioning if scaling truly enhances LLM reasoning.
- Another member responded, highlighting that there is no evidence that scaling LLMs leads to AGI and noted that the investments are primarily driven by hype.
- Impressive Research and Reasoning in LLMs: A member inquired about remarkable research that seems to effectively address the reasoning and planning challenges faced by LLMs.
- In response, it was noted that simply scaling up LLMs will not yield advanced reasoning capabilities, and true reasoning likely requires architectural innovations or explicit reasoning mechanisms.
OpenAI ▷ #ai-discussions (259 messages🔥🔥):
AI Consciousness Debate
Perplexity Use Case
Gemini Performance
OpenAI Subscription Growth
UI Changes in ChatGPT
- Debate on AI Consciousness and Cognition: A discussion highlighted the difference between AI reasoning and human understanding, emphasizing that LLMs operate based on statistical predictions rather than genuine cognition.
- Participants suggested that while AI can simulate consciousness, it lacks true understanding and self-preservation instincts inherent in biological organisms.
- Perplexity as a Preferred Tool: Members expressed their preference for using Perplexity, citing its speed and reliability as significant advantages for tasks like research and school projects.
- The free tier of Perplexity was highlighted as sufficient for users, making it an attractive alternative to paid subscriptions.
- Mixed Reviews on Gemini AI: Users reported inconsistent performance with Gemini AI, particularly in programming tasks, highlighting issues with hallucinations and unreliable responses.
- Despite these challenges, some users noted that newer versions of Gemini are showing improvement and are trying them out.
- OpenAI Hits 1 Million Paid Users: OpenAI announced reaching 1 million paid users for its business-focused products, which likely include ChatGPT Team and Enterprise services.
- The subscription model for enterprise can be quite expensive, with base prices around $60 per user monthly, highlighting significant revenue potential despite ongoing operational losses.
- Changes to ChatGPT's User Interface: Users noted the disappearance of the regenerate button in ChatGPT and were uncertain about the changes in the UI, with some suggesting it was moved to the model selection dropdown.
- Some users reported not seeing certain buttons, suggesting the interface is undergoing changes that have not been rolled out uniformly.
- Tweet from Jason Kneen (@jasonkneen): http://x.com/i/article/1831453865201340416
- Reflection 70B Playground: no description found
- Bloomberg - Are you a robot?: no description found
- Bloomberg - Are you a robot?: no description found
OpenAI ▷ #gpt-4-discussions (3 messages):
GPT response issues
Icons disappearing
Browser compatibility
App frustrations
- Random issues with GPT responses: A user reported experiencing random issues with GPT, where generating a new response overwrote the previous one and caused icons to vanish on the website.
- They expressed frustration, stating they couldn't view past responses and were unhappy with the app.
- Browser compatibility solutions: Another member suggested using Chrome to avoid the issues encountered by the user, recommending testing in different browsers.
- They also directed the user to OpenAI's help center to report bugs or seek assistance with the problem.
OpenAI ▷ #prompt-engineering (25 messages🔥):
Font Issues
AI Author Imitation
Exporting Outputs with Errors
Tool Calls in Prompts
- Font Issues Causing Weird Symbols: Members discussed a potential font issue causing weird symbols such as
youâre
in the generated output.- This was linked to a Flutter app making API requests, and possible escape character mistranslations were mentioned.
- AI Refuses to Imitate Recent Authors: A member noted that the AI is designed to avoid imitating recent or copyrighted authors, focusing instead on older figures like Shakespeare and Dante.
- They suggested that creating a style guide is easy and that defining one's own communication style can be more effective.
- Variable Output Responses in API Calls: A user reported inconsistent responses from the OpenAI API, occasionally receiving correct outputs while facing errors otherwise.
- Discussion suggested issues might relate to the wrapper used to interact with the API, and building a better one could help.
- Successful Tool Calls Implementation: Members shared their experiences with incorporating tool calls into prompts, stating that tool names must be correct for success.
- One member successfully resolved their issues by realizing they needed to include the tool result after calling a tool, ensuring proper structure.
- Sharing Resources for Better Advice: During discussions, links to outside resources were shared for users seeking assistance with their issues, particularly on tool calls.
- Members encouraged looking into community sources for more tailored advice on using OpenAI's functionalities effectively.
OpenAI ▷ #api-discussions (25 messages🔥):
Font Missing Issues
Incorporating Tool Calls
Character Encoding Errors
API Response Consistency
Creating Effective Tool Chains
- Identify Font Missing Issues: A user indicated a possible font missing issue affecting their prompts and responses, which sparked a discussion about language compatibility.
- One member suggested checking for available fonts in the app to resolve this issue.
- Incorporating Tool Calls into Prompts: A user inquired about successfully incorporating tool calls in their prompts and expressed frustration with error messages from OpenAI.
- Another member shared that they regularly create multiple tool calls in a single output and emphasized the importance of using the correct tool name.
- Character Encoding Errors in Responses: A user reported receiving weird symbols in API responses and identified that these issues sometimes involve escaped characters.
- It was suggested that these could be apostrophes getting mistranslated by their wrapper and noted that the issue is inconsistent.
- Consistency in API Responses: Users discussed the inconsistency of receiving API responses, with some being formatted correctly and others not.
- The possibility of needing to build a better wrapper was raised as a potential solution for consistent outputs.
- Clarifying Tool Call Structure: A member clarified their tool call structure, which includes an Assistant message with content followed by a matching Tool Message with results.
- This information was given as a solution to their previous struggles in implementing tool calls effectively.
HuggingFace ▷ #announcements (1 messages):
Vision Language Models
Optimization of Tau LLM
InkubaLM-0.4B Release
Shadowbox Tool
Selective Fine-tuning
- Introduction to Vision Language Models Explored: A new blogpost by a verified user provides a concise overview of vision language models.
- This resource aims to simplify understanding for newcomers to the concept.
- Optimization Ahead for Tau LLM: Check out the Tau LLM series that focuses on improving training processes and model performance.
- The series promises detailed insights from a leading member in the community.
- InkubaLM-0.4B Advances Language Support: The community welcomes the release of InkubaLM-0.4B, a model designed to support African languages.
- This initiative showcases a commitment to expanding representation in the AI space.
- No-Code AI With Shadowbox Tool: A no-code constructor called Shadowbox has been introduced, enabling users to create tasks using FOSS AI models.
- This tool aims to make AI more accessible to non-coders in the community.
- Fine-Tuning Language Models Made Simple: Explore the article on selective fine-tuning of language models using the Spectrum approach.
- The content highlights practical strategies for achieving tailored model performance.
HuggingFace ▷ #general (193 messages🔥🔥):
Comparison of Coding Models
Transformer Attention Explainer
Evaluation of Code Generation
Coding Benchmark Quality
Future Research Ideas in Code Generation
- Comparison of Coding Models: Members discussed identifying the best coding model, with suggestions like Llama 3.1 70B standing out as a top choice.
- One member asked for recommendations while others noted the presence of multiple models overfitting on benchmarks.
- Transformer Attention Explainer: A member requested clarification on how transformers represent attention as a single number for a given token.
- Questions focused on understanding the connection between distance in latent vector space and attention representation.
- Evaluation of Code Generation: The difficulty of establishing a 'correct' label for code outputs was addressed, with discussions around using error rates for evaluation.
- Members noted the importance of semantic correctness and pragmatics in code evaluation, pointing to the limitations of LLMs as judges.
- Coding Benchmark Quality: There was a consensus on the need for rigorous evaluation methods in current coding benchmarks, particularly the absence of good labels for correctness.
- Members discussed creating interactive comparisons of different model outputs, emphasizing the importance of pragmatically useful code.
- Future Research Ideas in Code Generation: Future research directions were discussed, including the idea of using visual models to assess code semantics and pragmatics.
- The potential for models to predict rendered frames from code and vice versa was highlighted as an exciting research avenue.
- Tweet from Omar Sanseviero (@osanseviero): Latent Navigation with Flux is here 🤯 https://huggingface.co/spaces/latentexplorers/latentnavigation-flux Check out a CEO going from "lawful good" to "chaotic evil"
- cyBERT: Neural network, that’s the tech; To free your staff from, bad regex
- Shocked Surprised GIF - Shocked Surprised Gasp - Discover & Share GIFs: Click to view the GIF
- Latent Navigation - a Hugging Face Space by latentexplorers: no description found
- Can Ai Code Results - a Hugging Face Space by mike-ravkine: no description found
- Aaaaaaa GIF - Aaaaaaa - Discover & Share GIFs: Click to view the GIF
- Deep Learning Ai Better The Devil You Know GIF - Deep Learning AI Better The Devil You Know Overkill - Discover & Share GIFs: Click to view the GIF
- Smg4 Mario GIF - Smg4 Mario You Dropped This King - Discover & Share GIFs: Click to view the GIF
- CopyleftCultivars/llama-3.1-natural-farmer-16bit · Hugging Face: no description found
HuggingFace ▷ #today-im-learning (8 messages🔥):
Residual Connections in AI
Jeeds Agent Models
Transformers and Attention Mechanism
Python Microservice with Ollama
- Exploring Residual Connections: One member mentioned learning about implementing residual connections and their underlying mechanics today.
- They aim to deepen their understanding of why residual connections are effective in model architectures.
- Coding New Jeeds Agent Models: Another user is focusing on coding new agent models featuring the Jeeds architecture today.
- This represents a notable effort to apply new methodologies in AI development.
- Understanding Attention in Transformers: A user raised a question regarding how a single number represents attention for a given token in the transformers architecture.
- They inquired if this value is derived from distance in latent vector space and requested further materials discussing this topic.
- Concerns Over Cross-Posting Messages: There was a concern from a user regarding cross-posting messages across channels, asking another member to refrain from doing so.
- The back-and-forth highlighted community guidelines on maintaining channel clarity in discussions.
- Building Python Microservice with Ollama: One participant is interested in creating a Python microservice with Ollama to paraphrase sentences in multiple ways.
- This endeavor hints at the application of language models in developing versatile text processing solutions.
HuggingFace ▷ #i-made-this (6 messages):
GPT4FREE
Kyber Odyssey Encryption Implementation
Yi-Coder Release
Advancements in Vision Language Models
Minimalist UI for Comfy
- Explore GPT4FREE!: A member discovered GPT4FREE and proposed creating an online version of the web UI.
- The initiative aims to make GPT access more user-friendly and accessible.
- Kyber Odyssey takes on Post-Quantum Encryption: A group proudly announced their acceptance of a submission on implementing NIST's new post-quantum encryption protocols to the AMA research challenge, emphasizing accessibility for learners through open-source code at GitHub.
- They aim to empower traditionally overlooked communities with minimal cost to enhance security and privacy.
- Yi-Coder is Live!: Yi-Coder has been released by 01ai, inviting users to try it out and contribute examples.
- This release offers a new tool and showcases community involvement through PRs.
- Recent Advancements in Vision Language Models: A member shared insights about a blog post on early contrastive approaches like CLIP transitioning to advanced models such as Flamingo and LLaVA, highlighting their joint training capabilities.
- Minimalist Comfy Rewrite Project: A member announced their experimental project aiming to rewrite Comfy from scratch, focusing on creating a minimalist UI and server without dependencies.
- They invited others who are interested in creating an extensible solution to contact them for collaboration.
- Tweet from Julien Blanchon (@JulienBlanchon): Trying to figure out how to fix Comfy 👀
- Yi Coder 9B - a Hugging Face Space by Tonic: no description found
- A Brief Introduction to Vision Language Models: Overview of recent advancements in the field of Vision Language Models. From early contrastive learning approaches like CLIP to more advanced models like Flamingo and LLaVA.
- GitHub - qompassai/KO: Kyber Odyssey: Charting a course for secure innovation in a post-Crowdstrike world: Kyber Odyssey: Charting a course for secure innovation in a post-Crowdstrike world - qompassai/KO
HuggingFace ▷ #reading-group (1 messages):
quantaussie99: Have to read this… I don’t get it
HuggingFace ▷ #computer-vision (3 messages):
Tracking Algorithms for Multi-Object Tracking
Retrieving Screen Items from Internal Data
Running BLIP-2 on AWS SageMaker
- Discussing Tracking Algorithms: Members mentioned using various tracking algorithms like ByteTrack and DeepSORT for multi-object tracking.
- They are exchanging insights about the pros and cons of these options.
- Question on Internal Data Retrieval: One member posed a question about the possibility of retrieving items on screen by reading some internal data.
- This sparked a discussion on the feasibility and methods of accessing such data.
- Inquiry about Running BLIP-2 on AWS SageMaker: A member sought advice on running the BLIP-2 model on AWS SageMaker for inference on 19,000 images.
- They requested tips on configuration, instance types, performance optimization, and integration steps.
HuggingFace ▷ #NLP (2 messages):
Qwen2-VL-7B-Instruct
requirements.txt update
fp16 performance
- Qwen2-VL-7B-Instruct Handler Created: A working handler.py and requirements.txt for Qwen2-VL-7B-Instruct has been shared, confirmed to work on dedicated endpoints like T4 64GB, A100 80GB, and L4 96GB.
- The commit is also linked, showing a recent update made just 1 day ago.
- Requirements.txt Being Updated: An update to the requirements.txt was noted with a specific commit link provided for reference.
- This update is part of the ongoing maintenance to ensure compatibility and functionality of the project.
- fp16 Implementation Lacks Flash-Attention: Currently, the implementation is using fp16 without flash-attention, which is noted as a limitation.
- This situation was acknowledged with an indication of future enhancements to expect.
Link mentioned: hperkins/Qwen2-VL-7B-Instruct at main: no description found
HuggingFace ▷ #diffusion-discussions (2 messages):
PixArt-Alpha performance
FluxImageToImagePipeline availability
- PixArt-Alpha demonstrates impressive performance: A member highlighted that PixArt-Alpha does a nice job, but specifics on its efficacy or use cases were not elaborated.
- This suggests that there may be notable features worth exploring further.
- FluxImageToImagePipeline missing from diffusers: A member inquired about the absence of FluxImageToImagePipeline in the diffusers, even though it appears in the HF documentation.
- This raised questions about potential discrepancies or updates that may not have been synchronized in the library.
LM Studio ▷ #general (111 messages🔥🔥):
LM Studio 0.3.2 Issues
Image API Providers
Reflection 70B Model
Change in UI Elements
Advanced Model Techniques
- LM Studio 0.3.2 download error: Users reported encountering an 'unable to get local issuer certificate' error after updating to LM Studio 0.3.2, causing issues with downloading models.
- It was suggested that this could be related to changes in corporate network security or SSL certificates affecting the software's connectivity.
- Exploring Image API options: A user expressed interest in finding free Image API providers with high limits, mentioning Stable Diffusion but seeking more options.
- They inquired if any providers offer API access to advanced imaging tools.
- Reflection 70B model discussion: The Reflection 70B model was highlighted as a leading open-source LLM trained to correct its reasoning mistakes, available on Hugging Face.
- There was anticipation about when this model would be accessible within LM Studio following its recent upload.
- Concerns about new UI elements: Some users criticized the new UI in LM Studio 0.3.2, citing large elements and the absence of preset dropdowns as inconvenient.
- Feedback indicated a desire for smaller UI elements and the return of preset options in future versions.
- Advanced quantizing AGI models: A humorous prediction regarding the future of AI indicated a potential battle over the quantization of AGI models.
- Users expressed optimism about advancements in AI and model techniques.
- Hello Hi GIF - Hello Hi Cute - Discover & Share GIFs: Click to view the GIF
- no title found: no description found
- mattshumer/Reflection-70B · Hugging Face: no description found
- Microsoft GraphRAG with an RDF Knowledge Graph — Part 1: Using a local LLM & Encoder to do Microsoft’s GraphRAG
LM Studio ▷ #hardware-discussion (60 messages🔥🔥):
Mac RAM and storage needs
Local server versus cloud options
Raspberry Pi and LMStudio compatibility
Performance of RTX 3060 for inference
NAS advantages for Apple users
- Mac users should max out RAM for models: Users discussed that for Apple hardware, one should aim to buy the biggest RAM possible, especially for handling large models.
- 64GB is considered a minimum for serious usage in AI, with suggestions to invest in NAS for storage solutions.
- Building AI-capable local servers debated: Some members debated whether to purchase a local server or use cloud options for AI purposes, highlighting the financial burden of setting up personal rigs.
- A member mentioned that cloud subscriptions could provide better capabilities for less cost compared to building a local machine.
- Raspberry Pi unable to run LMStudio: A member inquired about the feasibility of running LMStudio on a Raspberry Pi, but it was confirmed that this is currently not possible.
- The differences between LMStudio and Ollama were discussed, emphasizing Ollama's wider hardware compatibility.
- GPU performance discussions for models: A member with an RTX 3060 shared concerns about increasing context length with their current setup, which has 6GB VRAM and 64GB DDR4 RAM.
- Others suggested saving money to invest in a new GPU, emphasizing that the performance boost from upgraded hardware is crucial.
- NAS setup benefits for Apple users: Users shared experiences with NAS systems, expressing their love for better organization and efficiency in moving storage away from main desktops.
- A specific Asustor NAS was mentioned, along with the idea of using it for Time Machine backups for multiple iPhones.
- no title found: no description found
- Reddit - Dive into anything: no description found
Nous Research AI ▷ #general (140 messages🔥🔥):
Reflection-Tuning Techniques
Hermes Model Speculations
Fine-tuning LLMs
Dataset Creation for AI Models
Nvidia Driver Issues
- Reflection-Tuning Innovations: The new technique called Reflection-Tuning aims to improve LLMs' capabilities by teaching them to correct their own mistakes during output generation, reflecting on their responses.
- This method emphasizes using a dataset that intentionally includes errors to aid the model's self-correction abilities.
- Discussion on Hermes Model's Reflection Ability: During discussions about the Hermes model, members speculated that the original training data may not support immediate corrections, which presents a challenge for improving model responses.
- There was some confusion regarding how pretraining could account for immediate errors if they weren't in the text, leading to deeper discussions around fine-tuning strategies.
- Fine-tuning Techniques and Datasets: Participants shared methods for fine-tuning models, indicating a desire to see comparisons between different models like GPT-4o and Llama 70B.
- A suggestion was made to finetune models that include reflection tokens and revision techniques to enhance output evaluation.
- Nvidia Driver and Vulkan Compatibility Issues: Users experienced issues getting Vulkan to work with their Nvidia drivers, encountering a message that required the use of the nouveau driver instead of the proprietary Nvidia driver.
- There was a call for solutions on how to enable better performance with Vulkan while using the current Nvidia setup.
- General AI Community Engagement: Participants, including computer science students, shared resources and suggestions for starting in the AI field, emphasizing the importance of practical and theoretical knowledge.
- There was excitement about collaborative efforts in AI model experimentation, highlighting a proactive spirit in the community.
- Tweet from Matt Shumer (@mattshumer_): @EnricoShippole @binary_racoon @GlaiveAI Different reflection -- just don't want any confusion, we're doing something totally different
- Reflection-Tuning: Recycling Data for Better Instruction-Tuning: Recent advancements in Large Language Models (LLMs) have expanded the horizons of natural language understanding and generation. Notably, the output control and alignment with the input of LLMs can...
- Tweet from Matt Shumer (@mattshumer_): @abacaj Not quite — we found current models struggled to do this well (they don't know when to reflect). It required training it into the model via a dataset that intentionally makes mistakes ->...
- matts (Matt Szydlik): no description found
- Cat Cute Cat GIF - Cat Cute cat Yap - Discover & Share GIFs: Click to view the GIF
- Tweet from Zeyuan Allen-Zhu (@ZeyuanAllenZhu): (1/7) Physics of LM, Part 2.2 with 8 results on "LLM how to learn from mistakes" now on arxiv: https://arxiv.org/abs/2408.16293. We explore the possibility to enable models to correct errors i...
- mattshumer/Reflection-70B · Hugging Face: no description found
- Tweet from Matt Shumer (@mattshumer_): I'm excited to announce Reflection 70B, the world’s top open-source model. Trained using Reflection-Tuning, a technique developed to enable LLMs to fix their own mistakes. 405B coming next week ...
- GitHub - tianyi-lab/Reflection_Tuning: [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning: [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning - tianyi-lab/Reflection_Tuning
Nous Research AI ▷ #ask-about-llms (20 messages🔥):
Mamba API inquiries
Mergekit issues
Scaling and LLM reasoning
Llama 3.1 utilization
Open reasoning tasks
- Curiosity about Mamba API: Members inquired whether the Mamba API exists and discussed multiple free API alternatives beyond the usual suspects like Google and Hugging Face.
- Arthrod specifically asked about other free APIs while inviting community suggestions.
- Frustration with Mergekit Stalling: A member reported that Mergekit keeps stalling at 'Executing graph: 0% 0/1457' while trying to merge two fine-tuned Llama 3.1 models in Colab.
- The execution halts without creating a usable model in the HF hub repo, leading to confusion among users.
- Scaling and LLM Reasoning Query: One member raised questions about scaling in relation to Illya's $1 billion funding for AGI and whether it genuinely improves LLM reasoning.
- Kingsd sought insights from others who might have spent significant time exploring this topic for clarity.
- Practical Use of Llama 3.1 in Trading: A user shared their experience using Llama.cpp as an inference engine for trading infrastructure, specifically mentioning mistral-7B-instruct-v0.2.Q6_K.gguf for coding queries.
- They received recommendations for using Llama 3.1 8B Instruct if resources allow, with discussions around GPU specifications.
- Accessing Open Reasoning Task Resources: A member asked for datasets focused on reasoning tasks and was directed to an open reasoning tasks project that lists potential task types for training or evaluation.
- The project isn't a dataset itself, but participants were encouraged to develop their datasets based on the suggested tasks.
Link mentioned: bartowski/Meta-Llama-3.1-8B-Instruct-GGUF at main: no description found
Nous Research AI ▷ #research-papers (2 messages):
Falcon Mamba release
Loopy video diffusion model
- Falcon Mamba Launch by TII: Falcon Mamba, a new model by Technology Innovation Institute, has been released under the TII Falcon Mamba 7B License 1.0, available for open access on Hugging Face. The blog details the design decisions, the model's competitive edge against SoTA models, and its integration in the Hugging Face ecosystem.
- Innovative Model Loopy for Audio-Only Video Generation: The paper introduces Loopy, an end-to-end audio-conditioned video diffusion model that enhances natural motion and portrait synthesis without the need for manual spatial templates. This model employs a unique inter- and intra-clip temporal module to better correlate audio with human motion, improving overall performance in video generation.
- Paper page - Loopy: Taming Audio-Driven Portrait Avatar with Long-Term Motion Dependency: no description found
- Welcome Falcon Mamba: The first strong attention-free 7B model: no description found
Nous Research AI ▷ #interesting-links (1 messages):
adjectiveallison: https://github.com/Cognitive-AI-Systems/MAPF-GPT
Nous Research AI ▷ #research-papers (2 messages):
Falcon Mamba Model
Loopy Video Diffusion Model
- Falcon Mamba Introduced by TII: Falcon Mamba is a new model released by the Technology Innovation Institute in Abu Dhabi under the TII Falcon Mamba 7B License 1.0, designed for open access in the Hugging Face ecosystem.
- The blog discusses the model's design decisions and its competitiveness against existing SoTA models, highlighting that it is accessible for research and application purposes here.
- Loopy: A Breakthrough in Audio-Only Video Generation: The paper presents Loopy, an end-to-end audio-only conditioned video diffusion model that overcomes limitations in controlling human motion via audio signals by leveraging long-term motion information.
- Loopy improves audio-portrait movement correlation by removing the need for manually specified spatial motion templates, showing significant advancements in both natural motion synthesis and detail during extensive experiments.
- Welcome Falcon Mamba: The first strong attention-free 7B model: no description found
- Paper page - Loopy: Taming Audio-Driven Portrait Avatar with Long-Term Motion Dependency: no description found
Nous Research AI ▷ #reasoning-tasks (1 messages):
Illya's fundraising for AGI
Scaling and LLM reasoning
- Illya Raises $1 Billion for Safe Superintelligence: Illya successfully raised $1 billion for his venture Safe Superintelligence, which is geared towards achieving AGI through scaling efforts.
- Members expressed confusion over whether scaling can effectively solve issues related to LLM reasoning.
- Questioning Scaling's Impact on LLMs: One member questioned if scaling genuinely addresses the reasoning capabilities of large language models (LLMs) and how it functions.
- They inquired if others in the group have seriously invested time in exploring this topic.
Interconnects (Nathan Lambert) ▷ #news (11 messages🔥):
xAI's GPU Cluster
Unsloth backed by YCombinator
Reflection Llama-3.1
Intrinsic-self correction technique
- xAI raises concerns over GPU cluster power: Elon Musk's progress in building xAI’s 100k GPU cluster is causing concern among rival model developers, with OpenAI's Sam Altman expressing worries over potential computing power disparities.
- One member humorously remarked that eventually we all become GPU poor.
- Unsloth teams up with YCombinator: Unsloth announced its backing by YCombinator, aiming to create an all-in-one solution for model creators focused on speed and accessibility through software innovations.
- Reflection Llama-3.1 touted as top open-source LLM: Reflection Llama-3.1 70B is highlighted as the world's leading open-source LLM, employing a new technique called Reflection-Tuning to improve reasoning accuracy.
- Discussion on intrinsic-self correction: There was a mention of skepticism regarding the effectiveness of intrinsic-self correction without external tools, referencing the usual GDM paper.
- One user expressed surprise at this approach, questioning its viability in the context of the recently discussed Reflection Tuning.
- Tweet from X Daily News (@xDaily): NEWS: Elon’s progress in building out xAI’s 100k GPU cluster has some rival model developers worried. OpenAI CEO Sam Altman, for instance, has told some Microsoft executives that he is concerned tha...
- mattshumer/Reflection-70B · Hugging Face: no description found
- Tweet from Unsloth AI (@UnslothAI): We’re excited to share that Unsloth is now backed by @YCombinator! Building on our foundation in open-source fine-tuning, we’re creating the all-in-one solution so you can focus on making the models ...
Interconnects (Nathan Lambert) ▷ #ml-questions (6 messages):
Reasoning Datasets
HuggingFace Numina
MATH Benchmark
GSM8k Benchmark
CHAMP Dataset
- Hot Picks for Reasoning Datasets: A member sought recommendations for reasoning datasets/benchmarks, particularly those that include chain-of-thought reasoning trajectories.
- Another member humorously noted the abundance of options, suggesting they were overwhelmed by the choices.
- HuggingFace's Numina Garners Attention: A participant recommended the recent HuggingFace Numina resources as great for data in reasoning tasks.
- It's seen as a valuable addition to the pool of benchmarks for those interested in this space.
- Standard Benchmarks: MATH and GSM8k: When asked about notable benchmarks, several members pointed to the MATH and GSM8k as standard references in reasoning evaluations.
- These benchmarks are often used in assessments of large language models' reasoning capabilities.
- CHAMP Dataset Offers Unique Insights: A member highlighted the CHAMP dataset, which focuses on high school math problems with annotated hints, providing additional context for reasoning tasks.
- The benchmark aims to investigate the impact of problem-specific hints and concepts on LLM performance, as detailed in the paper.
- Quest for Off-the-Beaten-Path Insights: The original poster expressed a desire for lesser-known reasoning datasets while scouring HuggingFace for a research project.
- They were particularly interested in datasets that aren't commonly referenced in discussions.
Link mentioned: CHAMP: A Competition-level Dataset for Fine-Grained Analyses of LLMs' Mathematical Reasoning Capabilities: Recent large language models (LLMs) have shown indications of mathematical reasoning ability on challenging competition-level problems, especially with self-generated verbalizations of intermediate re...
Interconnects (Nathan Lambert) ▷ #random (42 messages🔥):
Cursor chat documentation
QwenLM GitHub disappearance
Model naming confusion with OpenAI
Vendors for Llama fine-tuning
Artificial analysis and new image models
- Cursor chat documentation gaining traction: Discussion highlights the lack of a standardized
chats.txt
file for logging AI interactions in software development, with a focus on how useful Cursor could make it.- Shocked at the absence of such a standard in the industry, members believe it could enhance documentation of codebases significantly.
- QwenLM mysteriously vanished from GitHub: Concerns arose as the QwenLM organization disappeared from GitHub, prompting speculation about unknown flags from the platform.
- Members express disbelief at the lack of communication from GitHub, reflecting on similar past incidents as ridiculous.
- Confusion over OpenAI model names: There was confusion regarding two different models, GPT-4o-latest and GPT-4o-2024-08-06, which are not the same despite similar naming schemes.
- Members humorously noted that OpenAI's naming strategy has puzzled many, with some joking that Scale was tripped up by it.
- Seeking Llama fine-tuning recommendations: A member asked for preferred vendors for fine-tuning Llama models, with suggestions made to hire a capable engineer.
- Responses included mentions of companies like Fireworks and Together, which were noted as fine but not 100% reliable.
- Discussion on upcoming image models: A participant raised a question about any organization preparing to release new image models, specifically mentioning Saturn-Next as a promising candidate.
- Speculation included that these models might be exclusive to artificial analysis, contrasting against expected updates from Midjourney.
- The /llms.txt file – llms-txt: A proposal to standardise on using an /llms.txt file to provide information to help LLMs use a website at inference time.
- How do I export chat with AI?: I consider this a very useful feature. I recently had a very long interaction with my friend Sonnet, and I would love to export it so I can format it, remove all the unnecessary parts, and save it as ...
- Tweet from Simon Willison (@simonw): @Lingster888 Well that’s weird, their whole GitHub organization has vanished https://github.com/qwenlm It was there yesterday, here’s the way back machine from a few days ago https://web.archive.org/...
- Tweet from Xeophon (@TheXeophon): Whoops, Scale was tripped up by OpenAIs amazing naming scheme as well. GPT-4o-latest and GPT-4o-2024-08-06 are two different models 🙃 Quoting Alexandr Wang (@alexandr_wang) SEAL Leaderboard Update...
- Tweet from Junyang Lin (@JustinLin610): We are still alive... GitHub flagged our org for unknown reasons and we are trying to approach them for some solutions. Quoting Simon Willison (@simonw) Anyone know why the @Alibaba_Qwen AI team ha...
- Tweet from Phil (@phill__1): Wait, are there stealth models in the artificialanalysis image arena? There seem to be three new models in testing, with Saturn-Next being really good.
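For reference, the llms-txt proposal describes a plain-Markdown file served at a site's root (`/llms.txt`): an H1 title, a blockquote summary, and sections of links to LLM-friendly pages. A minimal illustrative file (all names and URLs hypothetical) might look like:

```markdown
# Example Project

> Concise background an LLM needs to use this site: what the project does
> and where the machine-readable docs live.

## Docs

- [Quick start](https://example.com/quickstart.md): install and run in two minutes
- [API reference](https://example.com/api.md): full endpoint listing

## Optional

- [Changelog](https://example.com/changelog.md)
```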
Interconnects (Nathan Lambert) ▷ #posts (74 messages🔥🔥):
Autoformalization in AI
Superhuman AI Mathematicians by 2026
OpenAI's Pricing Strategy
Google's Challenges with AI Deployment
SnailBot's Performance
- Autoformalization as a Key Strategy: A member emphasized that autoformalization will be crucial for AI's advancement, particularly in the context of synthetic data regimes already hinted at by big labs.
- They noted that Google is actively pursuing this area, indicating competitive pressure in the market.
- Szegedy Predicts Superhuman AI by 2026: Christian Szegedy stated he now believes we'll have superhuman AI mathematicians by 2026, a significant shift from his earlier prediction of 2029.
- His assertion sparked debate about the feasibility of this target, particularly regarding informal reasoning required in mathematical proofs.
- OpenAI's Potential High Pricing: Reports surfaced that OpenAI might consider subscriptions up to $2,000 per month for new models, which many believe might be unrealistic given market competition.
- Members speculated that B2B pricing might be more palatable but questioned how families could justify such costs for consumer AI.
- Google Struggles with AI Strategy: Discussion highlighted Google's ongoing difficulties in effectively deploying their AI frameworks, with Vertex AI criticized for its lack of user-friendliness.
- Despite having top engineers, the organization seems to struggle with execution, raising concerns about their leadership in AI.
- SnailBot's Quirks: One member humorously dubbed SnailBot as the slowest rust program ever written, highlighting its entertaining nature.
- Despite its quirks, there was a sentiment that SnailBot remains a free and amusing addition to the community.
- Tweet from Tibor Blaho (@btibor91): OpenAI is reportedly considering high-priced subscriptions up to $2,000 monthly for new AI models like reasoning-focused Strawberry and flagship Orion LLMs (though final prices are likely to be lower)...
- Tweet from Christian Szegedy (@ChrSzegedy): My view on this has not changed in the past eight years: I have given many talks and written position paper in 2019 (link below). Progress is faster than my past expectation. My target date used to be...
Modular (Mojo 🔥) ▷ #announcements (1 messages):
Magic package manager
MAX and Mojo integration
Conda ecosystem
Virtual environment management
- Magic 🪄 Officially Takes the Helm: Today, we announced that Magic is the new official package manager and virtual environment manager for MAX and Mojo projects, with packages available as a single Conda package, `max`.
- Starting this Monday, users are encouraged to migrate to Magic or other tools that support Conda package management, as the `modular` CLI will not receive updates.
- Seamless Integration with Conda Ecosystem: The choice to adopt the Conda ecosystem as a standard aims to enhance compatibility with popular package management systems, improving code reproducibility while minimizing conflicts.
- With Magic, you can instantly launch code examples and create new projects, ensuring a streamlined experience for managing dependencies.
- Say Goodbye to Packaging Conflicts: Managing package dependencies and virtual environments is crucial for stability and compatibility, and Magic addresses this challenge effectively.
- The current stable release of `magic` is 0.2.3, bringing specific improvements for Modular pipelines and future enhancements for managing and deploying them.
- Check Out the New Magic Docs: For more information on getting started with Magic, users can visit our new magic docs page.
- Magic builds upon the Conda and PyPi ecosystems, providing access to thousands of packages and additional features tailored for MAX and Mojo projects.
- Community Support and Feedback Appreciated: A huge thanks was extended to the community for their feedback and support during this transition.
- Users are encouraged to share their questions and feedback in the designated channel <#1267269207372988597>.
Link mentioned: Get started with Magic | Modular Docs: Magic is a package manager and virtual environment manager for MAX and Mojo
Modular (Mojo 🔥) ▷ #mojo (117 messages🔥🔥):
Mojo performance
Async function support
Memory management in Mojo
Mojo standard library enhancements
Compiler and debugging tools
- Mojo performance concerns with ord() function: A user noted a significant performance difference between using ord() in Mojo compared to C++ and Python, indicating it is roughly 30 times slower in a benchmark scenario.
- Discussions included suggestions to use the debugger for inspecting the ord implementation and speculation about optimizations like Small String Optimization.
- Issues with async functions in Mojo: Attempts to utilize async fn and async def in Mojo resulted in various errors, primarily attributed to the user running a stable build rather than the nightly version where async support exists.
- It was clarified that marking fn main as async might not be supported, indicating current limitations in the language.
- Memory management and borrowing in Mojo: The conversation centered on how to handle partial borrows of objects with constructs like Arc and Weak, leading to considerations about the overhead involved.
- An alternative approach was suggested to implement weak references possibly through a separate type, as well as discussions regarding the use of Omit for optional fields.
- Utilizing debugging tools for Mojo: Suggestions arose for using compile tricks in Mojo to obtain assembly outputs, aiding in understanding generated code, and aiding debugging efforts.
- The potential to create a Mojo compiler explorer with support for MLIR was also discussed, emphasizing its educational benefits.
- Enhancements and features for Mojo library: Discussions included the possibility of adding the Omit type to the standard library, which could avoid overhead associated with unused fields.
- Improvements and refinements to types and constructors were discussed to ensure functionality without compromising code efficiency.
- Mojo🔥 roadmap & sharp edges | Modular Docs: A summary of our Mojo plans, including upcoming features and things we need to fix.
- Roadmap & known issues | Modular Docs: A summary of known issues and upcoming features for the MAX platform.
- Small string optimization in Mojo’s stdlib: Small string optimization in Mojo’s stdlib and small buffer optimization while we’re at it
- mojo/stdlib/src/builtin/string.mojo at main · modularml/mojo: The Mojo Programming Language. Contribute to modularml/mojo development by creating an account on GitHub.
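The Arc/Weak trade-off discussed above maps onto weak references generally: a weak handle lets one object observe another without keeping it alive. As a rough analogy in Python (not Mojo code; names are illustrative), the stdlib `weakref` module shows the semantics:

```python
import weakref

class Node:
    """Toy object we want to observe without owning."""
    def __init__(self, name: str) -> None:
        self.name = name

strong = Node("a")
weak = weakref.ref(strong)   # does not extend the object's lifetime

assert weak() is strong      # target alive -> dereference returns it
del strong                   # drop the only strong reference
assert weak() is None        # weak ref now points at a dead object
```

Implementing a similar `Weak` as a separate type in Mojo, as suggested in the discussion, would avoid paying the bookkeeping overhead on every `Arc`.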
Modular (Mojo 🔥) ▷ #max (8 messages🔥):
Model Serialization Format
Containerization Techniques
MAX Engine Support
- Awaiting Model Serialization Format: A user inquired about the ETA on the platform-independent model format, but the response indicated there is no current ETA for the model serialization format as it’s more of a feature enhancement.
- Feedback expressed excitement for the upcoming feature which aims to aid in containerization but emphasized that platform independence isn't a core need.
- Containerization Insights Requested: The user asked for recommended containerization methods, expressing interest in deploying models in Docker containers while noting issues with other tools like tvm.
- The response highlighted that model serialization will facilitate docker containerization, with hopes to release it within a month.
- MAX Engine and GGUF: It was clarified that gguf isn't supported by the MAX engine, and alternative pipelines can be referenced in the provided GitHub link.
- This provided context for users exploring similar functionalities or seeking workarounds with the MAX engine.
Link mentioned: max/examples/graph-api at main · modularml/max: A collection of sample programs, notebooks, and tools which highlight the power of the MAX Platform - modularml/max
OpenRouter (Alex Atallah) ▷ #app-showcase (3 messages):
Bank Account Expansion
Infinite Dilution Concept
- Infinite Bank Account Concept: A member humorously expressed a desire to condense their bank account into an infinite amount.
- This witty request sparked discussion about financial limits and possibilities.
- Confusion Over Expansion vs. Condensation: Another member questioned whether condensing into an infinite amount would actually mean expanding it.
- This provoked a thought-provoking moment, prompting deeper consideration of financial concepts.
- The Perils of Infinite Expansion: A member raised an important point stating that if you infinitely expand something, you can dilute it to nothingness.
- This comment cautioned against the potential downsides of pursuing infinite quantities in contexts like finance.
OpenRouter (Alex Atallah) ▷ #general (91 messages🔥🔥):
Opus vs Sonnet Performance
DeepSeek V2.5 Release
Reflection 70B Announcement
Claude Caching Feature
Model Throughput Comparisons
- Opus claims better task performance than Sonnet: A member noted that Opus outperforms Sonnet on specific prompts, such as calculating angles on a digital clock display.
- Conversely, others argue that most benchmarks consistently show Sonnet as superior overall.
- Launch of DeepSeek V2.5 Model: DeepSeek has merged and upgraded its Coder and Chat models into the new V2.5 version, which shows significant improvements in various performance metrics.
- For example, the ArenaHard win rate improved from 68.3% to 76.3%, enhancing both general capabilities and instruction following.
- Excitement over Reflection 70B model: The new Reflection 70B model has been announced, boasting self-correcting capabilities through a technique called Reflection-Tuning.
- With the promise of a 405B version launching next week, the community anticipates it will outperform existing models.
- Questions about Claude's context caching: There are inquiries about the availability of context caching in the Claude model, with some members sharing experiences of rate limits and costs.
- It was revealed that current conditions do not allow for reduced prices via caching, although plans for implementation in the future are expected.
- Concerns over model throughput: Concerns were raised about the throughput of DeepSeek models being lower than that of Sonnet 3.5, despite the advancements in the new V2.5 model.
- Some members remarked that while the model is great for personal use, its slower performance presents challenges for production cases.
- GPT-4o’s Memory Breakthrough! (NIAN code): no description found
- Tweet from Matt Shumer (@mattshumer_): I'm excited to announce Reflection 70B, the world’s top open-source model. Trained using Reflection-Tuning, a technique developed to enable LLMs to fix their own mistakes. 405B coming next week ...
- DeepSeek API introduces Context Caching on Disk, cutting prices by an order of magnitude | DeepSeek API Docs: In large language model API usage, a significant portion of user inputs tends to be repetitive. For instance, user prompts often include repeated references, and in multi-turn conversations, previous ...
- Change Log | DeepSeek API Docs: Version: 2024-09-05
OpenRouter (Alex Atallah) ▷ #beta-feedback (5 messages):
AI Studio key issues
Bug reports
Activity logging
- AI Studio key doesn’t save configuration: When entering an AI Studio key, the page updates successfully but reverts back to Not Configured after entry.
- Daun.ai identified this as a potential bug and is working on a fix.
- Hyperbolic and Lambda keys function properly: Despite issues with the AI Studio key, both Hyperbolic and Lambda keys are reported to have worked without problems.
- Users expressed concern regarding inconsistent behavior across different keys.
- Activity logging questions raised: A user inquired about the possibility of verifying if the AI Studio key was utilized under Activity.
- This raised questions on how effectively users can monitor their key usage.
Perplexity AI ▷ #general (77 messages🔥🔥):
Perplexity subscription offers
Referral program details
Changes in membership
Merchandise promotions for students
Technical support and inquiries
- Perplexity's Year Membership for Students: Perplexity announced a free 1-year Pro membership for colleges that reach 500 student signups with `.edu` emails, prompting discussions about eligibility and sign-up criteria.
- Users discussed needing to sign up by a specific date, with some expressing uncertainty about their university's status.
- Clarifications on Referral Links: Members inquired about finding their affiliate referral links and sharing membership benefits, with one noting that a specific URL provides access.
- Confusion arose regarding how many times a unique promo code can be utilized, with clarification that it can be used up to eight times.
- Merch Promotions for Student Referrals: Announcement of new merchandise for students that can be obtained through referrals was shared, encouraging members to participate in sharing their links.
- Specific instructions were provided on how to get these promotions by referring friends to Perplexity.
- Technical Issues with Language Settings: Users encountered problems with language settings not applying correctly across different browsers, leading one member to successfully resolve it by toggling options.
- The resolution indicated that switching to a different language and back could solve the display issue.
- Inquiries about Free Perplexity Access: There were questions regarding access to free Perplexity features for students, specifically tied to referrals and the university's registration numbers.
- Members expressed concerns about subscription expirations and the necessary conditions to unlock extended access.
- Tweet from Perplexity (@perplexity_ai): New merch for students 🔜 Just one way to get it: refer your friends to Perplexity! Share more, get more: http://perplexity.ai/backtoschool
- Tweet from Perplexity (@perplexity_ai): Never looked better. Thanks for the feature @tryramp! Quoting Aravind Srinivas (@AravSrinivas) Times Square is lit today
Perplexity AI ▷ #sharing (10 messages🔥):
World's Most Powerful Supercomputer
Benefits of Cold Showers
Memory Storage in the Brain
Oldest Known Board Game
Dark Souls Innovations
- Discover xAI's Colossus Supercomputer: Perplexity AI highlighted the World's Most Powerful Supercomputer, xAI's Colossus, alongside the Oldest Known Board Game, Senet.
- You can catch more about this incredible discovery in the YouTube video here.
- Cold Showers Bring Benefits: Multiple members shared links discussing the benefits of cold showers, showcasing various health advantages.
- These benefits include improved circulation and boosted mood, making them a popular topic of discussion.
- Brain's Memory Storage Mechanism: There was an interesting reference to how the brain stores memories in triplets, an intriguing research area.
- It elaborates on the connections between memories and how they form complex networks in our brains.
- Innovations in Dark Souls: The conversation touched on the latest innovations in Dark Souls games, prompting inquiries about their mechanics and design.
- A member sought to know more about these innovations in a linked discussion.
- User Interface Updates on Perplexity: A member received a reminder to make their thread Shareable, enhancing collaboration within the community.
- This emphasizes an ongoing effort to improve user engagement and accessibility in discussions.
Perplexity AI ▷ #pplx-api (2 messages):
File Upload Implementation with Perplexity API
Configuring Perplexity API Requests
- Integrating File Uploads in Flask with Perplexity API: A member shared a method to implement file uploads in a Python Flask app using the Perplexity API, detailing client-side and server-side components of the implementation.
- Key functionality includes modifying the /query route to accept file data and integrating the file content into the prompt sent to the API.
- Achieving High-Quality Responses from Perplexity API: A user inquired about configuring their Perplexity API requests to replicate the quality and style of answers from the Perplexity website.
- While specifics were not provided, they are looking for ways to enhance the API response quality based on existing reference models.
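The file-upload approach described above boils down to folding the uploaded file's text into the prompt sent to the chat-completions endpoint. A minimal sketch of that step (the helper name, truncation limit, and message wording are our own assumptions, not Perplexity's API):

```python
# Hypothetical helper: fold uploaded file text into a chat-style prompt.
# The messages structure follows the OpenAI-style chat format that the
# Perplexity API accepts; everything else here is an illustrative choice.
MAX_FILE_CHARS = 8_000  # crude guard against blowing the context window

def build_messages(question: str, file_name: str, file_text: str) -> list[dict]:
    snippet = file_text[:MAX_FILE_CHARS]
    return [
        {"role": "system",
         "content": "Answer using the attached file when relevant."},
        {"role": "user",
         "content": f"File `{file_name}`:\n{snippet}\n\nQuestion: {question}"},
    ]

msgs = build_messages("Summarize this.", "notes.txt", "line one\nline two")
assert msgs[1]["content"].startswith("File `notes.txt`")
```

A Flask `/query` route would then pass `msgs` as the `messages` field of its API request body.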
CUDA MODE ▷ #general (36 messages🔥):
Cursor game change
AI coding tools
vLLM Open Office Hours
Reflection 70B announcement
SaaS startups leveraging AI
- Cursor game changer receives mixed reviews: Several members expressed skepticism about the Cursor AI tool, with one stating they found it unhelpful and even termed it a 'skill issue'.
- Another member praised its code retrieval capabilities while ultimately deeming it not worth the investment compared to the free tier.
- Concerns over reliance on AI coding assistants: There are discussions about the potential negative effects of using AI coding tools, with some fearing reliance might lead to 'brainrot'.
- As one member put it, 'does anyone actually try to use it for tickets right?' indicating skepticism about their effectiveness.
- vLLM Open Office Hours providing insights: The vLLM team is hosting bi-weekly Open Office Hours, with today's session focusing on NVIDIA CUTLASS for high-performance inference.
- Participants can expect massive performance improvements in upcoming releases, with recordings available on YouTube.
- Reflection 70B: A new milestone in open-source LLMs: A new model, Reflection 70B, has been introduced as a leading open-source LLM trained using Reflection-Tuning to self-correct.
- It will be followed by the 405B model next week, touted to be the best in the world, developed alongside GlaiveAI.
- SaaS startups and AI tools: Members discussed the trend of SaaS startups claiming efficiency boosts through AI tools, though skeptical voices remain.
- One pointed out that the motivational content on social media often oversimplifies the potential advantages of these technologies.
Link mentioned: Tweet from Matt Shumer (@mattshumer_): I'm excited to announce Reflection 70B, the world’s top open-source model. Trained using Reflection-Tuning, a technique developed to enable LLMs to fix their own mistakes. 405B coming next week ...
CUDA MODE ▷ #triton (6 messages):
MLIR_DEBUGGING
Triton Environment Variables
- Enable MLIR Dumps for Debugging: A user suggested using `MLIR_ENABLE_DUMP=1` to output MLIR after each compiler pass, which helps in understanding how Triton compiles under the hood.
- They indicated that one could compare two dumps for effective debugging and noted that LLMs can assist in explaining MLIR better.
- Utilizing TRITON_INTERPRET for Enhanced Debugging: Another member highlighted that setting `TRITON_INTERPRET=1` is one of the best debugging tools available in Triton.
- This variable provides valuable insights during the debugging process.
- Referencing README for Debugging Variables: A user recommended referring to the README linked previously, which contains numerous helpful environment variables for debugging Triton.
- They mentioned that while most may not be necessary, certain variables can prove essential for resolving complex issues.
Link mentioned: GitHub - triton-lang/triton at 7480ef5028b724cb434b7841b016c6d6debf3b84: Development repository for the Triton language and compiler - GitHub - triton-lang/triton at 7480ef5028b724cb434b7841b016c6d6debf3b84
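Both variables are read when Triton compiles kernels, so they need to be in the environment before the kernels are built. The usual pattern is to export them on the command line (`TRITON_INTERPRET=1 python my_kernel.py`), or to set them at the very top of a script before importing triton:

```python
import os

# Set before importing triton / compiling any kernels, since Triton reads
# these at compile time.
os.environ["TRITON_INTERPRET"] = "1"    # run kernels via the Python interpreter
os.environ["MLIR_ENABLE_DUMP"] = "1"    # dump MLIR after each compiler pass

# import triton  # import afterwards so kernels pick the flags up
assert os.environ["TRITON_INTERPRET"] == "1"
```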
CUDA MODE ▷ #beginner (2 messages):
Optimization techniques for convolution
Memory access patterns in CUDA
- Optimizing Convolution with Constant Memory: A member reported that using constant memory for the convolution matrix decreased execution time from 850 ms to 705 ms, but expected a register count of 19 instead of the observed 20.
- They questioned why the register count didn't drop further, suggesting a need for clarity on the optimization process.
- Local Memory's Unexpected Impact: Utilizing local memory for the convolution matrix led to a runtime reduction from 850 ms to 702 ms, which was contrary to expectations, and registers per thread dropped to 19.
- The member inquired why local memory use resulted in lower constant load, prompting discussion on local vs. global memory effects.
- Compiler Behavior with Local Memory: Another member explained that the compiler may not fit local memory into registers and that local memory becomes interleaved global memory when dynamic addressing is involved.
- They provided a link to the NVIDIA documentation on local memory to guide further understanding of memory access patterns.
CUDA MODE ▷ #jax (1 messages):
Pallas kernels
Splash Attention kernel
Video primer on Pallas
- Explore Pallas Kernels from JAX: Members are sharing various kernels implemented in Pallas, available at this GitHub repository. This repository showcases composable transformations of Python+NumPy programs, including differentiation and JIT to GPU/TPU.
- An image of the repository is also included, providing a visual reference for contributors.
- Diving into Splash Attention Kernel: A specific kernel example shared is the Splash Attention kernel, with its implementation found here. This links directly to the code, highlighting important components of Pallas operations.
- Members are encouraged to review the kernel's details to better understand its function within the Pallas framework.
- Check out Video Primer on Pallas: A short primer video about Pallas, featuring one of its main inventors, Sharad, was shared via this link. The video serves as an introduction to the concepts and functionalities of Pallas.
- It's a useful resource for those looking to familiarize themselves with Pallas's features and use cases.
- jax/jax/experimental/pallas/ops/tpu at main · google/jax: Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more - google/jax
- jax/jax/experimental/pallas/ops/tpu/splash_attention/splash_attention_kernel.py at main · google/jax: Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more - google/jax
CUDA MODE ▷ #off-topic (5 messages):
Llm.c Alternatives
AI Summit in Mumbai
Burnout Prevention Strategies
- Hope for LLM Alternatives: A member expressed optimism for a potential llm.c alternative that isn’t related to large language models, indicating a desire for single-purpose solutions.
- Another member chimed in that PyTorch could already serve that broader functionality.
- NVIDIA AI Summit Announcement: The NVIDIA AI Summit will take place in Mumbai from October 23–25, 2024, with over 50 sessions covering various AI topics, including generative AI.
- Members were encouraged to register now and engage with industry leaders and exhibitors at the event.
- Insights on Burnout Prevention: A member shared insights on avoiding burnout, emphasizing the importance of knowing personal limits and maintaining a 95% effort over 100% for sustainability.
- They suggested focusing on what is in your control, setting realistic goals, and forgiving oneself for past mistakes to encourage continuous improvement.
Link mentioned: Join NVIDIA AI Summit 2024: October 23–25, Mumbai, India
CUDA MODE ▷ #irl-meetup (1 messages):
Joining Tenstorrent
CUDA kernel development
CUDA Mode IRL event
- Atul Krishnadas joins Tenstorrent as kernel developer: Atul Krishnadas announced his upcoming role as a kernel developer at Tenstorrent in Santa Clara.
- He expressed enthusiasm for CUDA, emphasizing his background in its development.
- Development of PyTorch/cuDNN clone: Atul shared his experience in creating a PyTorch/cuDNN clone, having written all the CUDA kernels from scratch for various functionalities.
- He offered a demo of his work, showcasing his proficiency in forward/backpropagation and mini-batch training.
- Inquiry about CUDA Mode IRL event: Atul inquired about available spots for the CUDA Mode IRL event happening on the 21st, mentioning he applied some time ago.
- He thanked the community in advance for any updates regarding the event's availability.
CUDA MODE ▷ #hqq-mobius (17 messages🔥):
Performance Insights on Batch Sizes
Autotune Configurations from PyTorch
Triton Code Limitations with GROUP_M
GemV Implementation Challenges
Memory-Bound Performance Analysis
- Performance Insights on Batch Sizes: Up to batch size 16-32, the speed-up remains consistent since a single 16x16 / 8x32 tensor core instruction is used, but it slows down afterwards while staying close to 1x at higher batch sizes.
- Mobicham noted that with more autotune parameters, there's potential for improved speed.
- Autotune Configurations from PyTorch: A member shared extra autotune configs found in the PyTorch repository, useful for int8 mm challenges.
- These configs can potentially aid Mobicham's performance tests, particularly in optimizing tensor core usage.
- Triton Code Limitations with GROUP_M: Mobicham indicated that reducing GROUP_M below 8 could negatively affect performance due to restrictions in `tl.dot`, which supports only specific tensor core shapes.
- The assertion error received when using smaller shapes highlights the challenge in achieving efficient implementations.
- GemV Implementation Challenges: After struggling with a good gemv implementation in Triton, Mobicham switched to CUDA, leading to the development of GemLite.
- Testing showed that using multiply + add was feasible, but ultimately, performance was subpar compared to using `tl.dot`.
- Memory-Bound Performance Analysis: The performance when using advanced configuration settings remains slower in memory-bound setups yet achieves speeds close to FP16 with large batches.
- This is particularly beneficial for large context prefill and training, indicating effective overall progress.
Link mentioned: pytorch/torch/_inductor/kernel/mm_common.py at main · pytorch/pytorch: Tensors and Dynamic neural networks in Python with strong GPU acceleration - pytorch/pytorch
CUDA MODE ▷ #llmdotc (4 messages):
Open Sora Implementation
Graphics Progress
- Open Sora Implementation Work in CUDA: One member shared their efforts in implementing Open Sora in CUDA and C++, noting that it's a huge task with slow progress.
- I really wish graphics would take off tho... reflects a sentiment for more advancements in this area.
- Inspiration Among Peers: A member suggested that the discussions may have inspired enough others to contribute or explore further.
- This comment highlights the collaborative atmosphere within the community despite challenges.
CUDA MODE ▷ #cudamode-irl (1 messages):
Third Wave Delays
Inbox Notifications
- Waiting for the Third Wave: A member expressed frustration about not receiving any updates in their inbox and mentioned having to wait for the third wave.
- They noted that the lack of notifications has led to a sense of delay in expected information.
- Frustration with Inbox Notifications: The same member indicated that their inbox remains empty, suggesting a disconnect regarding expected updates.
- This comment reflects a broader concern about timely communication within the group.
CUDA MODE ▷ #liger-kernel (8 messages🔥):
Jupyter Notebook Versioning
Python Script for Benchmark Visualizations
Implementation of MoE Models
- Jupyter Notebook versioning concerns: Members discussed the inefficiencies of versioning Jupyter Notebooks, citing that it's often heavy and cumbersome.
- They proposed creating a Python script to generate PNG visualizations, storing them in a folder included in .gitignore.
- Creating a PoC for PNG storage solution: s1r_o mentioned preparing a Proof of Concept (PoC) for the PNG storage solution and suggested it should be foolproof by placing images in a designated ignored folder.
- Byronhsu1230 agreed on the approach and indicated they would consult a colleague who had previously implemented a similar solution.
- Discussion moved to PR: s1r_o created a Pull Request (PR) and a branch to further discuss the implementation details for the proposed solution.
- They indicated that it would not take long to implement and encouraged continued discussion in the PR.
- Exploring MoE models from Huggingface: s1r_o raised thoughts on implementing MoE models such as Mixtral or Nllb_moe from Huggingface.
- The idea is to support several operations and then integrate the MoE kernel once the development is completed.
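The PNG-storage PoC described above can be sketched in a few lines: render each benchmark chart to a file in a dedicated folder and keep that folder out of version control. The folder name and helper below are hypothetical (the plotting step is elided; any library that can emit PNG bytes would slot in):

```python
from pathlib import Path

OUT_DIR = Path("benchmark_visualizations")  # hypothetical folder name

def save_benchmark_png(name: str, png_bytes: bytes) -> Path:
    """Write one rendered benchmark chart into the ignored output folder."""
    OUT_DIR.mkdir(exist_ok=True)
    # Keep the folder out of version control, as proposed in the discussion.
    gitignore = Path(".gitignore")
    entry = f"{OUT_DIR.name}/"
    existing = gitignore.read_text() if gitignore.exists() else ""
    if entry not in existing:
        gitignore.write_text(existing + entry + "\n")
    out = OUT_DIR / f"{name}.png"
    out.write_bytes(png_bytes)
    return out

path = save_benchmark_png("fused_linear_speedup", b"\x89PNG placeholder")
assert path.exists()
```

This keeps the notebooks themselves free of heavyweight embedded images, which was the versioning pain point raised.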
Eleuther ▷ #general (19 messages🔥):
MCTS Application in Image Tasks
Creative AI Workshops
Keyword-Driven Generative Models
Undergraduate Internships in Labs
Minimalist UI Development
- MCTS used in Image Recognition Discussion: There's a debate on how Monte Carlo Tree Search (MCTS) could be applied to image generation, comparing its logic reversal to models like AlphaZero and AlphaProof.
- One participant questioned how MCTS could be reversed, particularly when each step heavily relies on the previous one, emphasizing that MCTS enhances policies rather than generates them.
- Seeking Workshops on Creative AI: A member inquired about upcoming workshops focused on creative AI, looking to apply learnings from their paper on diffusion models and LoRA composition.
- Another member expressed skepticism about the relevance of such workshops for the ICCV timeframe, considering submission deadlines.
- Extracting Metadata from Captions: Keyword-driven generative models, like Stable Diffusion, require careful pre-processing of training data, prompting curiosity about their methodologies.
- One user is brainstorming ways to extract metadata tags from 1.2 million captions, linking the discussion to best practices in data curation.
- Undergraduate Internships in Academic Labs: The conversation highlighted that academic labs can hire undergraduate interns, particularly if the PI has bandwidth and the student has a suitable background.
- One intern shared their experience of starting part-time and transitioning to a full-time role, shedding light on potential career pathways.
- Development of Minimalist UI: A user announced their initiative to rewrite a minimalist UI, aiming for a super-extensible design without unnecessary dependencies.
- They expressed interest in collaboration, inviting others to join their project aimed at creating a customizable user interface and server.
Link mentioned: Tweet from Julien Blanchon (@JulienBlanchon): Trying to figure out how to fix Comfy 👀
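The point that MCTS enhances a policy rather than generating one is visible in its selection rule: each simulation just re-ranks existing children by balancing exploitation against exploration. A minimal UCB1 sketch with toy statistics (numbers are illustrative, not tied to any image model):

```python
import math

def ucb1(child_value_sum: float, child_visits: int,
         parent_visits: int, c: float = 1.4) -> float:
    """UCB1 score MCTS uses to pick which child to explore next."""
    if child_visits == 0:
        return float("inf")            # always try unvisited children first
    exploit = child_value_sum / child_visits
    explore = c * math.sqrt(math.log(parent_visits) / child_visits)
    return exploit + explore

# Toy stats: (total value, visit count) for three children of one node.
children = {"a": (9.0, 10), "b": (4.0, 4), "c": (0.0, 0)}
parent_n = sum(n for _, n in children.values())
best = max(children, key=lambda k: ucb1(*children[k], parent_n))
assert best == "c"                     # unvisited child is selected first
```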
Eleuther ▷ #research (52 messages🔥):
Inefficiency of Scaling Parameters
Transfusion Model Insights
Gradient Behavior During Training
Effects of Generative AI on Work
Numerical Stability in Optimizers
- Scaling parameters inefficiently impacts training: A member raised questions about the inefficiency of scaling parameter counts significantly without increasing dataset size, referencing the Chinchilla paper for calculations.
- Another member suggested looking into the paper's formulas directly to understand the consequences of scaling more accurately.
- Insights from Transfusion Paper: A discussion pointed towards the Transfusion paper that explores training multi-modal models over discrete and continuous data.
- It was noted that the authors achieve improved scaling performance compared to training a language model over discrete image tokens.
- Unusual patterns in training gradients: A member discussed observing spikes in Hamming similarity between gradients during distillation training, suggesting that certain sequences of data points may be beneficial.
- They considered the possibility of numeric precision impacting the gradients, prompting further examination of the optimizer's behavior, particularly in their Lion implementation.
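One plausible reading of "Hamming similarity between gradients" is the fraction of coordinates whose sign bits agree, which is the quantity a Lion-style sign-based optimizer actually consumes; the metric below is a sketch of that interpretation, not the member's exact implementation:

```python
def sign_hamming_similarity(g1, g2):
    # Fraction of coordinates where two gradient vectors share a sign.
    # 1.0 means identical sign patterns, 0.0 means fully opposed.
    # Zeros are treated as non-negative, a choice that matters for the
    # discrete jumps discussed in the numerical-stability thread.
    assert len(g1) == len(g2)
    matches = sum((a >= 0) == (b >= 0) for a, b in zip(g1, g2))
    return matches / len(g1)
```

Spikes in this quantity across consecutive batches would indicate runs of data points producing near-identical sign patterns, consistent with the observation above.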
- Generative AI boosts developer productivity: A shared paper titled The Effects of Generative AI on High Skilled Work revealed a 26.08% increase in tasks completed among developers using the AI tool (GPT-3.5).
- This finding suggests significant productivity gains attributed to the integration of AI technologies in software development.
- Numerical stability issues in optimizers: Concerns were raised about potential numerical stability issues within the Lion optimizer, especially about discrete jumps in gradients that could affect training consistency.
- It was suggested that adjusting parameters to standard 32-bit formats might help to alleviate some reported numerical inconsistencies in the training process.
- distily/distily_attn_mlp_sweep · Training metrics: no description found
- Diffusion Models Learn Low-Dimensional Distributions via Subspace Clustering: Recent empirical studies have demonstrated that diffusion models can effectively learn the image distribution and generate new samples. Remarkably, these models can achieve this even with a small numb...
- Transfusion: Predict the Next Token and Diffuse Images with One Multi-Modal Model: We introduce Transfusion, a recipe for training a multi-modal model over discrete and continuous data. Transfusion combines the language modeling loss function (next token prediction) with diffusion t...
- Parametric design - Wikipedia: no description found
Eleuther ▷ #lm-thunderdome (2 messages):
Leaderboard IFeval
IFeval differences
- Understanding Leaderboard IFeval: A member inquired about the difference between leaderboard_ifeval and ifeval.
- Clarification on their functions or purposes remains pending.
- Seeking clarity on system components: A member expressed a need for clarity regarding the distinction between two components: leaderboard_ifeval and ifeval.
- The discussion hints at differences in their roles, but further elaboration is awaited.
Eleuther ▷ #multimodal-general (1 messages):
bennib2407: what’s the SOTA video captioning model?
Eleuther ▷ #gpt-neox-dev (1 messages):
RoPE Compatibility
Attention Output Discrepancies
- RoPE Implementation Compatibility Question: A member inquired whether the Hugging Face implementation of RoPE for GPTNeoX / Pythia is compatible with those used in LLaMA and GPT-Fast models.
- They provided a snippet of the frequency and rotary embedding computation for reference.
- Comparative Analysis of Attention Outputs: The member noted significant differences (>95%) in the attention outputs between the Hugging Face Pythia implementation and their own implementation.
- This discrepancy prompted them to seek insights on potential incompatibility or implementation errors in the RoPE application.
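The compatibility question above often comes down to pairing convention: HF GPTNeoX and HF LLaMA both use the "rotate_half" layout (pair element `i` with element `i + dim/2`), while GPT-J-style implementations interleave adjacent elements, so outputs differ by a permutation even when the frequencies match. A minimal sketch of the shared frequency computation and the half-split rotation, written in plain Python for clarity rather than as either library's actual code:

```python
import math

def rope_inv_freq(dim, base=10000.0):
    # Inverse frequencies common to GPTNeoX/LLaMA-style RoPE:
    # theta_i = base^(-2i/dim) for i in [0, dim/2).
    return [base ** (-(2 * i) / dim) for i in range(dim // 2)]

def apply_rope_half_split(x, pos, base=10000.0):
    # "rotate_half" convention: split the head dim into two contiguous
    # halves and rotate the pair (x[i], x[i + dim/2]) by pos * theta_i.
    # Interleaved (GPT-J-style) variants rotate (x[2i], x[2i+1]) instead.
    d = len(x)
    out = [0.0] * d
    for i, f in enumerate(rope_inv_freq(d, base)):
        c, s = math.cos(pos * f), math.sin(pos * f)
        a, b = x[i], x[i + d // 2]
        out[i] = a * c - b * s
        out[i + d // 2] = a * s + b * c
    return out
```

Since rotation is norm-preserving and the identity at position 0, checking those two properties (and the pairing layout) is a quick way to localize the discrepancy the member observed.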
Latent Space ▷ #ai-general-chat (68 messages🔥🔥):
SSI Inc funding
You.com funding
Karpathy insights
OpenAI pricing
Replit Agent launch
- SSI Inc secures $1B in funding: SSI Inc has secured a staggering $1B funding round, while Sakana clinched $100M.
- Speculation on how much of the funding might be allocated to Nvidia arose in discussions.
- You.com refocuses with new funding: You.com has shifted from AI search products to developing deeper productivity agents with a new $50M funding round, aiming for innovative approaches in complex query handling.
- Founder Richard Socher emphasizes that competing with Google on simple queries is less viable than enhancing productivity-focused capabilities.
- Karpathy's views on Tesla and self-driving tech: In a recent podcast, Andrej Karpathy articulated that while Waymo has made strides, he believes Tesla will lead in self-driving technology long-term, citing a fundamental software versus hardware problem.
- He also discussed the transformative potential of Optimus, Tesla's humanoid robot, emphasizing its applications in factories.
- OpenAI considers high-tier subscription model: OpenAI is reportedly evaluating a $2000/month subscription for its next-gen model, suggesting a potential increase of 100x in capabilities compared to lower-tier offerings.
- The pricing discussion hints at either substantial enhancements in model performance or the need to cover operational costs amid rising expenses.
- Launch of Replit Agent: Replit introduced the Replit Agent, aimed at automating software development tasks like setting up dev environments, in early access for subscribers.
- The move is seen as a strategic effort to build upon Replit's offerings and potentially capitalize on AI's integration into programming workflows.
- Yi-Coder: A Small but Mighty LLM for Code | Hacker News: no description found
- GameNGen: Diffusion Models Are Real-Time Game Engines
- Tweet from Anna Monaco (@annarmonaco): Introducing Paradigm – a reimagined workspace with AI at its core. Centered around the primitive of a spreadsheet, Paradigm puts swarms of intelligent agents at your fingertips. The real power of Pa...
- Tweet from TIME (@TIME): TIME's new cover: The 100 most influential people in AI https://ti.me/4dQcJ1Q
- Tweet from Kennan Davison (@kennandavison): Excited to introduce Icon: we help brands create winning AI ads with real creators. We’re backed by Peter Thiel’s Founders Fund and founders of Ramp, Flexport, Pika, & Cognition (Devin). The future o...
- Tweet from TDM (e/λ) (@cto_junior): 1) What? Quoting TIME (@TIME) TIME's new cover: The 100 most influential people in AI https://ti.me/4dQcJ1Q
- Tweet from Bindu Reddy (@bindureddy): OpenAI is considering $2K / month to access their top models. All kidding aside, this will be a Vision-Pro level disaster. I hope it's a joke
- Tweet from Michael Lingelbach (@mjlbach): Looks like a SOTA open source text-to-music model (a rectified flow dit) is out. Paper here: https://arxiv.org/abs/2409.00587 Code here: https://github.com/feizc/FluxMusic The examples sound very s...
- Tweet from AI Explained (@AIExplainedYT): Would you pay $2000/month for ChatGPT? This is the highest price that's 'on the table', for a subscription, according to a just-released Information report on OpenAI. This would be the t...
- Tweet from Teortaxes▶️ (@teortaxesTex): Yep, vindicated again: @deepseek_ai do merge code and generalist models. (Change log retroactively changed). Tao and the art of compute recycling. You love to see it. Quoting Teortaxes▶️ (@teortaxes...
- Tweet from shawn swyx wang (@swyx): the @karpathy code: - be Team Human - choose things that scale - scale them to all of humanity things = { @tesla camera vision | humanoid robots | transformers | AI education @EurekaLabsAI} Favori...
- The Road to Autonomous Intelligence with Andrej Karpathy: Andrej Karpathy joins Sarah and Elad in this week of No Priors. Andrej, who was a founding team member of OpenAI and former Senior Director of AI at Tesla, need
- Tweet from Matt Shumer (@mattshumer_): I'm excited to announce Reflection 70B, the world’s top open-source model. Trained using Reflection-Tuning, a technique developed to enable LLMs to fix their own mistakes. 405B coming next week ...
- Tweet from Techmeme (@Techmeme): OpenAI says it now has 1M+ paid users for the corporate versions of ChatGPT, including ChatGPT Team, Enterprise, and Edu (@rachelmetz / Bloomberg) https://www.bloomberg.com/news/articles/2024-09-05/o...
- Tweet from Nathan Lambert (@natolambert): Ai2 released OLMoE today. It's our best model to date. - 1.3B active, 6.9B total parameters, 64 experts per layer - Trained on 5T tokens from DCLM baseline + Dolma - New preview of Tulu 3 post tra...
- Tweet from Together AI (@togethercompute): 🚀 NVIDIA H200 and the Together Kernel Collection (TKC) are coming to Together GPU Clusters: delivering accelerated performance, efficiency, and scalability for AI training, fine-tuning, and inference...
- Tweet from Nathan Lambert (@natolambert): Much like Q*, OpenAI's Strawberry system has been leaked enough where we have substantive interesting hypotheses on their training setup and use cases. Some ideas: * Self talk as reasoning with o...
- Tweet from Amjad Masad (@amasad): AI is incredible at writing code. But that's not enough to create software. You need to set up a dev environment, install packages, configure DB, and, if lucky, deploy. It's time to automate...
- Launch HN: Maitai (YC S24) – Self-Optimizing LLM Platform | Hacker News: no description found
- Show HN: AnythingLLM – Open-Source, All-in-One Desktop AI Assistant | Hacker News: no description found
- Show HN: Laminar – Open-Source DataDog + PostHog for LLM Apps, Built in Rust | Hacker News: no description found
- Tweet from Brendan Dolan-Gavitt (@moyix): OpenAI: pip install openai and set OPENAI_API_KEY Anthropic: yea same but s/openai/anthropic/g Google: oh boy. ok so you have a GCP account? no? ok go set that up. and a payment method. now make a ...
- With $50M in new funding, You.com thinks its AI can beat Google on hard questions | TechCrunch: If you build an AI search product, you compete with Google. But Google has a lot easier time answering queries with a single, simple answer, such as "how
- [AINews] SciCode: HumanEval gets a STEM PhD upgrade: PhD-level benchmarks are all you need. AI News for 7/15/2024-7/16/2024. We checked 7 subreddits, 384 Twitters and 29 Discords (466 channels, and 2228...
OpenInterpreter ▷ #general (55 messages🔥🔥):
Open Interpreter Birthday
Teach Mode
Open Interpreter Repositories
AGI Discussion
Fulcra App Availability
- Open Interpreter Celebrates Its Birthday: Members celebrated the birthday of Open Interpreter, noting its impact on AI-human interaction and innovation.
- One attendee humorously remarked, 'AGI achieved, we can all go home now'.
- Exploring Teach Mode Functionality: The Teach Mode on Open Interpreter was discussed; users can say, 'I want to teach you something' to engage the system in creating new skills.
- It can adapt its skill based on the tasks taught, with emphasis on flexible execution and aligning with Rabbit Tech's methodologies.
- Access to Open Interpreter Repositories: The Open Interpreter and 01 repositories are open-source, inviting users to build upon them for their own applications.
- One user expressed interest in integrating functionalities into their software, particularly for web automation instances.
- AGI Announcement Query: A member inquired about an AGI announcement, to which another humorously responded, 'AGI achieved, we can all go home now'.
- Members seemed engaged with the idea, reflecting a mix of excitement and skepticism in follow-up messages.
- Fulcra App Regional Availability: A member expressed interest in the Fulcra app and inquired about its release in regions outside of New Zealand.
- There was no direct response regarding the release timeline, indicating ongoing anticipation from users.
- Youre A Wizard Hagrid GIF - Youre A Wizard Hagrid Afirmation - Discover & Share GIFs: Click to view the GIF
- Clapping Yay GIF - Clapping Yay Excited - Discover & Share GIFs: Click to view the GIF
- Frankenstein Its Alive GIF - Frankenstein Its Alive Happy - Discover & Share GIFs: Click to view the GIF
- GitHub - OpenInterpreter/open-interpreter: A natural language interface for computers: A natural language interface for computers. Contribute to OpenInterpreter/open-interpreter development by creating an account on GitHub.
- GitHub - OpenInterpreter/01: The #1 open-source voice interface for desktop, mobile, and ESP32 chips.: The #1 open-source voice interface for desktop, mobile, and ESP32 chips. - OpenInterpreter/01
OpenInterpreter ▷ #O1 (7 messages):
O1 recent demos
O1 shipping date
House Party event
Discord links
- Request for Recent Demos of O1: Someone inquired about any recent demos of O1, indicating continued interest in the product's updates.
- This reflects a desire for tangible showcases of its functionality as it approaches shipment.
- Shipping Date Uncertainty for O1: A user expressed frustration regarding the shipping date of their preordered O1, which they mentioned had not yet arrived.
- This highlights concerns about delays, as preorders suggest a promise for earlier access.
- House Party Event Announcement: A member encouraged others to tune in to the House Party later, signifying its importance in the community.
- This indicates an upcoming opportunity for discussion and networking among members.
- Links Shared for House Party Event: A couple of Discord links were shared for accessing the House Party event, making participation easier for interested members.
- The shared links foster engagement and community involvement around O1 discussions.
Torchtune ▷ #general (27 messages🔥):
Compile errors with PyTorch 2.4
Input padding performance
Memory footprint during training
Allocation of tokens in datasets
CI testing for torch.compile
- Compile errors when using PyTorch 2.4: Members reported compile errors with the latest main on PyTorch 2.4, especially issues with fake tensors. It was noted that setting `os.environ['TORCH_COMPILE_BACKEND'] = 'aot_eager'` might hide these errors in CI.
- One member suggested a potential CI issue regarding testing with the default backend, hinting at a need for CI workers to install a newer version of gcc.
- Performance impact of input padding: One member did a test run with input padding using the default config on the Alpaca dataset and found a significant speed hit. They noted that while memory footprint improved due to less fragmentation, the performance optimization was not as beneficial.
- Another member suggested that reporting both padded and unpadded tokens could provide insights into the waste from padding, emphasizing that padded tokens are still processed.
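The padded-vs-unpadded accounting suggested above amounts to simple batch arithmetic; this helper is an illustrative sketch (names are mine, not torchtune's API):

```python
def padding_stats(seq_lens, batch_max=None):
    # For one batch of sequences padded to a common length, report how
    # many tokens are real vs. padding. Padded tokens are still
    # processed by the model, so padding_fraction is wasted compute.
    m = batch_max if batch_max is not None else max(seq_lens)
    total = m * len(seq_lens)
    real = sum(seq_lens)
    return {
        "total_tokens": total,
        "real_tokens": real,
        "padded_tokens": total - real,
        "padding_fraction": (total - real) / total,
    }
```

Logging this per step makes the speed hit from padding directly visible, e.g. a batch with lengths `[3, 5]` wastes 20% of its tokens.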
- Memory footprint considerations: Discussion about memory management led to insights on better memory footprint and implications for OOM issues during training. Using expandable segments did not seem to resolve memory hikes for larger sequence lengths.
- Members highlighted that reserved memory is crucial for avoiding OOM, and one noted a bump in memory likely corresponds with increases in sequence length.
- Need for CI testing standards: There was a suggestion to open a separate issue regarding CI testing for torch.compile with the default backend due to inconsistent error reporting. This topic has since been revisited in the context of existing GitHub issues.
- Engagement around CI standards included discussion about setting up an environment to better reproduce issues faced with PyTorch versions.
- Issues · pytorch/torchtune: A Native-PyTorch Library for LLM Fine-tuning. Contribute to pytorch/torchtune development by creating an account on GitHub.
- [Low-bit optim] Improve compile time + Fix PyTorch 2.3 support for 4-bit optim by gau-nernst · Pull Request #812 · pytorch/ao: Static-shape compile optim step for single parameter + disable cache size limit. For a given model, the number of different argument combinations to single_param_adam() is fixed -> safe to dis...
- transformers/src/transformers/trainer.py at 5c1027bf09717f664b579e01cbb8ec3ef5aeb140 · huggingface/transformers: 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX. - huggingface/transformers
- Prevent OOM during checkpoint save on colab for llama3-8b qlora recipe · pytorch/torchtune@a02ccd6: A Native-PyTorch Library for LLM Fine-tuning. Contribute to pytorch/torchtune development by creating an account on GitHub.
- torchtune/tests/recipes/test_full_finetune_single_device.py at f437639f6abf101cc2b40793d5d86dbda35e24ec · pytorch/torchtune: A Native-PyTorch Library for LLM Fine-tuning. Contribute to pytorch/torchtune development by creating an account on GitHub.
Torchtune ▷ #dev (13 messages🔥):
DeepFusionModel Caches
Testing DeepFusionModel
Unsloth Backed by YC
Daniel Han's Contribution
Meta Employment Clarification
- DeepFusionModel Caches Misunderstanding: Discussion centered around whether `encoder_max_seq_len` should be ignored in `deepfusionmodel.setup_caches` if the encoder lacks a `setup_caches` function.
- It's a bit counterintuitive, but the encoder seq len is for the cross-attention layers in the decoder.
- Enhancements to DeepFusionModel Tests: A member updated that tests for kv caching have been added to the DeepFusionModel and a pull request was shared for review.
- Pull Request #1449 introduces overrides for max cache seq length and further discussions evolved around its purpose.
- Unsloth Secures Y Combinator Backing: A member noted that Unsloth is now backed by Y Combinator, sparking interest in potential upcoming support for others in the community.
- Anticipation grew as someone expressed hope to receive similar serious backing next.
- Praise for Daniel Han: Appreciation was expressed for Daniel Han, described as a legend by a community member, signaling his significant contributions.
- Members recognized the effort and support received from notable individuals in the AI community.
- Clarification of Meta Employment: A critique was shared regarding assumptions of employment at Meta, clarifying that not all members are affiliated with the company.
- One member highlighted that Salman is doing it purely for the love of the game while others confirmed they do work for Meta.
Link mentioned: [RFC] Adding overrides for max cache seq length by SalmanMohammadi · Pull Request #1449 · pytorch/torchtune: Context What is the purpose of this PR? Is it to add a new feature fix a bug update tests and/or documentation other (please add here) #1364 Changelog This PR: Adds support for overriding th...
Cohere ▷ #discussions (13 messages🔥):
System prompt optimization
Cohere updates
Community engagement
- Struggles with System Prompt Optimization: A user sought help to optimize their system prompt but faced errors stating Could not parse & validate the given body.
- Another member suggested sharing details in a specific channel for better assistance.
- Exploring What's New with Cohere: A member inquired about the latest updates with Cohere and how others are utilizing the platform.
- The response pointed them to the Cohere blog for quick insights on recent developments and customer use cases at cohere.com/blog.
- New Members Seeking Community Connections: A new member expressed their intent to connect with the Cohere community to understand its offerings better.
- They confirmed they had checked out the documentation as a starting point.
- Encouragement for New Users: A community member reassured a newcomer they were in the right place for learning and collaboration.
- They encouraged checking out the platform’s comprehensive documentation to get started effectively.
Link mentioned: The Cohere Blog: Explore our collection of insightful blog posts covering a diverse range of generative AI topics. Our articles offer in-depth analyses, expert opinions, and practical advice to inform and inspire.
Cohere ▷ #questions (8 messages🔥):
Text Suggestions Feature
LLM Agents for Report Generation
Cohere Usage Best Practices
- Implementing Text Suggestions Like Gmail Smart Compose: A member sought guidance on using Cohere models to implement a text suggestions feature in their messaging platform similar to Gmail's Smart Compose.
- Another member suggested that this could be achieved by effectively prompting the model with the email context.
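"Prompting the model with the email context" can be as simple as packing the thread plus the user's partial draft into one instruction; the prompt wording below is an assumption for illustration, not a Cohere-documented recipe:

```python
def compose_prompt(thread, draft):
    # Build a Smart Compose-style completion prompt: conversation so
    # far, then the partial draft the model should continue. The model
    # is asked to return only the continuation, which the client can
    # render as a greyed-out suggestion.
    context = "\n".join(f"{m['sender']}: {m['text']}" for m in thread)
    return (
        "Continue the user's draft reply in the same tone. "
        "Return only the continuation.\n\n"
        f"Conversation:\n{context}\n\n"
        f"Draft: {draft}"
    )
```

The resulting string would be sent as the message to whatever chat-completion endpoint the platform uses, with a low max-token limit and aggressive truncation so suggestions stay short and fast.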
- Generating Reports with LLM Agents: A member inquired about using LLM agents to generate reports based on prior writing styles and meeting notes from stakeholders.
- Responses included suggestions for employing RAG with Nimble rerank for meeting notes and meta prompting techniques for writing styles.
- Getting Proficient with Cohere: A member asked for advice on effectively using Cohere and producing quality output.
- Another member recommended reviewing the Cohere documentation for best practices and model functions.
Link mentioned: The Cohere Platform — Cohere: Cohere offers world-class Large Language Models (LLMs) like Command, Rerank, and Embed. These help developers and enterprises build LLM-powered applications such as conversational agents, summarizatio...
Cohere ▷ #projects (3 messages):
LLM Agents for Report Generation
OpenSesame 2.0 Launch
- Exploring LLM Agents for Report Creation: A member inquired about using LLM agents to generate reports based on prior writing styles and meeting notes from stakeholders for the Internal Audit team.
- Has anyone experimented with this approach?
- OpenSesame 2.0 Brings Major Enhancements: OpenSesame 2.0 has been released with significant updates, including eliminating the need for ground truth input and connecting to vector DBs for real-time semantic search.
- The update also features multi-model support for platforms like OpenAI, Gemini, and Cohere.
LlamaIndex ▷ #blog (3 messages):
Netchex AI using LlamaIndex
create-llama templates
llama-deploy microservices
- Netchex AI Revolutionizes Employee Support: @Netchex implemented AskHR + Netchex AI using LlamaIndex, transforming employee support for small to medium-sized businesses in just one month with two engineers.
- They used advanced RAG pipelines for context-aware responses, showcasing rapid development in the HR sector. Read more here.
- create-llama Introduces Multi-Agent Workflow: The latest update to create-llama offers a multi-agent workflow in Python, emphasizing its role in rapid deployment for various use cases.
- An example workflow utilizes three agents to generate a blog post, demonstrating its flexibility and efficiency. Check it out!.
- Launch of llama-deploy for Microservices: The new llama-deploy system allows for seamless deployment of microservices based on LlamaIndex Workflows, representing a significant step in their evolution.
- This launch builds on the lessons learned since the release of llama-agents and Workflows, enhancing deployment capabilities for developers. Get details here.
LlamaIndex ▷ #general (20 messages🔥):
llama-index-experimental-param-tuner installation
Getting embedding vectors from ChromaVectorStore
Integrating Claude with LlamaIndex
Text-to-SQL functionality and embeddings
Optimizing prompts in RAG applications
- Installing llama-index-experimental-param-tuner: To install the experimental package for llama-index version 0.11.3, run `pip install llama-index-experimental`.
- One user confirmed that this installation step is necessary for the functionality.
- Embedding vectors in ChromaVectorStore: A user ran into an issue obtaining embedding vectors from relevant nodes, leading to a ValueError stating that the embedding was not set.
- Others discussed that restructuring the Chroma class might resolve the issue of embeddings not being returned.
- Setting up Claude with LlamaIndex: A comprehensive guide was shared for utilizing Claude's latest models in LlamaIndex, including setup instructions and tokenizer settings.
- The models include Claude 3 Opus, Claude 3 Sonnet, and Claude 3 Haiku, with emphasis on following the documentation for chat engine setup.
- Combining Text-to-SQL with Semantic Search: A user inquired about implementing Text-to-SQL functionality on specific table columns, some of which contain embeddings for semantic search.
- No direct solution was provided in the discussion, indicating the need for further exploration of the integration.
- Prompt Optimization in RAG Applications: Members discussed the transition from QueryPipelines to Workflows, noting the potential for optimization using DSPy within LlamaIndex.
- There were references to helpful integration examples and the complexities of maintaining an efficient RAG pipeline.
- dspy/dspy/predict/llamaindex.py at main · stanfordnlp/dspy: DSPy: The framework for programming—not prompting—foundation models - stanfordnlp/dspy
- llama_index/llama-index-integrations/vector_stores/llama-index-vector-stores-chroma/llama_index/vector_stores/chroma/base.py at fd4a2e6b2da51fb6b3c50f636f795c0599341ff8 · run-llama/llama_index: LlamaIndex is a data framework for your LLM applications - run-llama/llama_index
- Anthropic - LlamaIndex: no description found
- Chat Engine - Condense Plus Context Mode - LlamaIndex: no description found
LangChain AI ▷ #general (14 messages🔥):
Building AI Agents
Chatbot Development
ReAct Agent Deployment
Database Solutions for AI Agents
- Community Seeks Input on AI Agent Platform: A member is developing a new platform for building, deploying, and monetizing AI agents and seeks insights from existing agent builders for a research phase.
- They offered gratitude and beta access in exchange for a short chat.
- Guidance on Building Document-Driven Chatbot: Another member requested assistance for creating a chatbot that effectively interacts using content from two PDF files, emphasizing smooth user experience.
- The discussion highlighted key requirements such as document loading, response generation, and conversation management.
- FAISS Vector DB Integration for Chatbot: A participant inquired about an end-to-end solution including storing documents in FAISS vector DB for retrieving answers.
- They received guidance on document loading, embeddings creation, and setting up a retriever using LangChain.
- Transitioning from SQLite to Cloud Database: Postgres or MySQL Saver implementations were requested as alternatives to SQLite for a ReAct agent running on GCP AppEngine.
- The contributor expressed concern over losing local SQLite database context with redeployments.
- Issues · langchain-ai/langchain: 🦜🔗 Build context-aware reasoning applications. Contribute to langchain-ai/langchain development by creating an account on GitHub.
- Faiss (Async) | 🦜️🔗 LangChain: Facebook AI Similarity Search (Faiss) is a library for efficient similarity search and clustering of dense vectors. It contains algorithms that search in sets of vectors of any size, up to ones that p...
LangChain AI ▷ #share-your-work (2 messages):
Vision Language Models
CodeMaster App
Gamification in Learning
DSA Learning Techniques
- Recent Advances in Vision Language Models Explored: A new blog post delves into the evolution of Vision Language Models (VLMs) from early approaches like CLIP to advanced models such as Flamingo and LLaVA. It highlights how jointly training with vision and text data enhances performance across various tasks like segmentation and classification, citing works like DALL-E 2.
- CodeMaster App Launched for Enhanced Learning: The newly introduced CodeMaster app aims to improve coding skills through gamification and scientifically-based techniques for knowledge retention. Users can participate in community competitions and earn rewards while reinforcing their learning.
- Feedback about CodeMaster highlights its impact on programming education, with users praising the spaced repetition feature for effective mastery of concepts, as demonstrated by testimonials from Alex Chen and Sarah Johnson.
- Feedback Requested on DSA Learning Project: A project discussing a fun approach to learning Data Structures and Algorithms (DSA) is seeking community feedback. The aim is to incorporate daily problem-solving alongside scientifically-backed methods for knowledge retention.
- This initiative, still in its infancy with only 8 hours of development, aims to motivate users through gamified experiences in learning DSA.
- A Brief Introduction to Vision Language Models: Overview of recent advancements in the field of Vision Language Models. From early contrastive learning approaches like CLIP to more advanced models like Flamingo and LLaVA.
- Code Helper: no description found
LAION ▷ #general (10 messages🔥):
Comfy Rewrite Project
Complementary GUI for Comfy
SwarmUI
ComfyBox Project
- Julien Blanchon starts Comfy Rewrite: A member, Julien Blanchon, announced experimenting with a minimalist Comfy rewrite from scratch, aiming for a super-extensible user interface with no dependencies.
- The project invites collaboration and seeks to simplify usage without sacrificing flexibility.
- Complementary GUI Ideas discussed: Another member suggested developing a complementary GUI that utilizes Comfy in the backend while offering an easier user experience similar to A1111.
- The aim is to allow quick tasks like inpainting and upscaling without the complexity of loading nodes.
- ComfyBox project explored: One member mentioned a past attempt at creating a similar interface, pointing to the ComfyBox project on GitHub which appears abandoned.
- Criticism was raised about its cumbersome UI, which lacks the streamlined experience desired.
- Discussion on SwarmUI: Members acknowledged SwarmUI, which was referred to as a modular web-user-interface focusing on accessibility and performance for Stable Diffusion.
- It was noted that SwarmUI emphasizes extensibility, appealing to users looking for more user-friendly options.
- Tweet from Julien Blanchon (@JulienBlanchon): Trying to figure out how to fix Comfy 👀
- GitHub - mcmonkeyprojects/SwarmUI: SwarmUI (formerly StableSwarmUI), A Modular Stable Diffusion Web-User-Interface, with an emphasis on making powertools easily accessible, high performance, and extensibility.: SwarmUI (formerly StableSwarmUI), A Modular Stable Diffusion Web-User-Interface, with an emphasis on making powertools easily accessible, high performance, and extensibility. - mcmonkeyprojects/Swa...
LAION ▷ #research (6 messages):
Transfusion Model
Reflection 70B
Causal UNET Performance
Unified Multi-modal Models
- Meta's New Transfusion Model Unveiled: Meta released a paper on Transfusion, a multi-modal model that combines language and diffusion training techniques across discrete and continuous data, with a pretrained 7B model on 1T text tokens and 692M images.
- The study emphasizes the model's potential to be extended to audio and possibly video, using VAE for smooth transitions between media types.
- Reflection 70B Promises Major Advances: Excitement builds around the announcement of Reflection 70B, claimed to be the world's top open-source model that can independently fix its own mistakes through Reflection-Tuning.
- Reports state it surpasses existing models, including GPT-4o on multiple benchmarks, with a 405B version set to release next week, raising eyebrows in the AI community.
- Causal UNET Performs as Well as Dense Linear: Discussion highlighted that using a UNET for causal modeling yields performance comparable to dense linear models, sparking intrigue among developers.
- This suggests new avenues in model architecture adjustments that could potentially enhance efficiency in language processing.
- Vision of a Unified Multi-modal Model: A member proposed the idea of Transfusion+GameNGen, envisioning a model that integrates language, vision, audio, and even gaming engines into a single framework.
- The implications of such a model could fundamentally reshape interactions between various modalities and AI applications.
- Transfusion: Predict the Next Token and Diffuse Images with One Multi-Modal Model: We introduce Transfusion, a recipe for training a multi-modal model over discrete and continuous data. Transfusion combines the language modeling loss function (next token prediction) with diffusion t...
- Tweet from Matt Shumer (@mattshumer_): I'm excited to announce Reflection 70B, the world’s top open-source model. Trained using Reflection-Tuning, a technique developed to enable LLMs to fix their own mistakes. 405B coming next week ...
- Tweet from Chubby♨️ (@kimmonismus): I can hardly believe what I'm reading here: an LLM that fixes its own bugs, corrects itself and beats all current models, including GPT-4o in all benchmarks? And the model is still OpenSource? ...
tinygrad (George Hotz) ▷ #general (8 messages🔥):
Bounty Payments
Tinyboxes Rental Model
Pricing Models for Performance
- Bounty Payments Completed: All individuals who emailed to claim bounties should have been paid, with an open call to speak up if any payment has not arrived.
- This ensures transparency and efficiency in managing user rewards.
- Innovative Tinyboxes Rental Concept: A proposal was shared to manufacture tinyboxes that could either be sold or rented out from a data center, with an upgrade path for hardware.
- The concept focuses on selling outdated hardware to maintain fresh stock for continuous rentals.
- Pricing in Performance Metrics: Discussion arose around pricing models, with suggestions to express costs in $/exaflop (per unit of compute actually delivered) and $/TFLOPS-month (per unit of sustained capacity held over time).
- This discussion highlights the complexity and different considerations around pricing structures for users.
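The two units proposed are easy to derive from an hourly rental rate; the numbers below are hypothetical, not tinybox pricing:

```python
# Hypothetical rental numbers -- illustrative only, not tinybox pricing.
price_per_hour = 2.0       # $ per box-hour
sustained_tflops = 500.0   # sustained throughput of the box, in TFLOPS

# $/TFLOPS-month: cost of holding 1 TFLOPS of capacity for one month
hours_per_month = 730
dollars_per_tflops_month = price_per_hour * hours_per_month / sustained_tflops

# $/exaflop: cost per 10^18 floating-point operations actually delivered
flops_per_hour = sustained_tflops * 1e12 * 3600
dollars_per_exaflop = price_per_hour / (flops_per_hour / 1e18)
```

The two metrics price different things: $/TFLOPS-month charges for reserved capacity whether or not it is used, while $/exaflop charges only for work delivered, which is part of why the thread found pricing structures complicated.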
- Complexity in Memory Bandwidth Considerations: The conversation noted the complications that arise assuming a fixed flop to memory bandwidth ratio when pricing.
- Members mentioned the challenges of partitioning GPUs to make performance ratios add up, indicating a need for clearer guidelines.
- Inference Implications for Memory Bandwidth: It was pointed out that memory bandwidth considerations are particularly crucial for those running batch-size-1 (bs=1) inference on their own hardware.
- This highlights the varying needs of users depending on their specific use cases and workload requirements.
tinygrad (George Hotz) ▷ #learn-tinygrad (6 messages):
phi operation in IR
UOps.UPDATE
cstyle renderer insights
- Confusion Over phi Operation in IR: A member asked about the workings of the phi operation in the IR, comparing it to LLVM IR where it's typically at the beginning of loop bodies.
- This led to a clarification from another member explaining that it's not truly a phi operation and suggested it be renamed to ASSIGN.
- Insight on Cstyle Renderer: George Hotz recommended checking the cstyle renderer to understand its functionality related to the discussion.
- This piece of advice was acknowledged by the initial inquirer who expressed intent to look into it.
- Alternative Naming Suggestion for phi: Another member suggested the operation could also be called UOps.UPDATE to better reflect its purpose.
- This contribution added to the ongoing discussion about naming conventions within the IR implementation.
Link mentioned: Kernel Fusion part 3: the linear layer UOps: Tutorials on tinygrad
OpenAccess AI Collective (axolotl) ▷ #general-help (7 messages):
Unsloth Phi to Llama Conversion
Challenges with Phi3
Small Model for Rapid Iteration
Dora Support in Axolotl
- Unsloth Phi successfully converts to Llama: It was noted that Unsloth offers a Phi model whose architecture was converted to the Llama format, enabling the use of a Llama3 configuration.
- This adjustment offers a potentially more efficient setup for experiments.
- Discussions highlight Phi3 challenges: Members pointed out that while Phi3 should be safe to use, there are ongoing discussions about its related challenges in the Discord history.
- This concern suggests that while it functions, issues may still arise that warrant further investigation.
- Invisietch seeks small model for experiments: Invisietch is on the hunt for a small model to conduct quick iterative experiments, highlighting a need for accessible resources.
- This reflects a broader interest in finding efficient solutions for agile development.
- Dora support confirmed in Axolotl: It has been confirmed that Axolotl supports DoRA by passing the parameter peft_use_dora: true.
- This information is documented in a GitHub issue, which also encourages searching for similar feature requests first.
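In an axolotl YAML config that option sits alongside the usual LoRA adapter settings; the surrounding hyperparameter values below are placeholders for context, and the note about PEFT is our reading of how the flag is forwarded, not from the thread:

```yaml
adapter: lora          # DoRA rides on the LoRA adapter path
lora_r: 16             # placeholder rank
lora_alpha: 32         # placeholder scaling
peft_use_dora: true    # forwarded to PEFT's LoraConfig(use_dora=True)
```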
Link mentioned: DoRA Support · Issue #1328 · axolotl-ai-cloud/axolotl: ⚠️ Please check that this feature request hasn't been suggested before. I searched previous Ideas in Discussions didn't find any similar feature requests. I searched previous Issues didn't...
OpenAccess AI Collective (axolotl) ▷ #community-showcase (5 messages):
Llama-3.1-8B fine-tuning
Chemical Language Model
Molecule generation
DPO optimization
SmileyLlama
- Llama-3.1-8B transforms into Molecular Design Engine: Fine-tuning and DPO successfully turned Llama-3.1-8B into a powerful model for generating molecules based on specified properties, demonstrating its capability in molecular design.
- This technique allows users to produce molecules on-demand by providing a few hints about their desired characteristics.
- SFT and DPO create revolutionary Chemical Language Model: A study revealed that a Large Language Model (LLM) can function as a Chemical Language Model (CLM) when trained using supervised fine-tuning (SFT) and direct preference optimization (DPO).
- This approach enables the LLM to generate molecules relevant to drug development, achieving performance comparable to CLMs reliant solely on chemical data.
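The DPO half of that SFT+DPO recipe optimizes a preference margin over a frozen reference model. A minimal numpy sketch of the standard DPO loss (Rafailov et al.) on a single preference pair; the beta value and log-probabilities are illustrative, not taken from the SmileyLlama paper:

```python
import numpy as np

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Direct Preference Optimization on one preference pair: push the
    policy's margin over the reference model toward preferring the chosen
    completion. beta=0.1 is a common default, not the paper's setting."""
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    return -np.log(1.0 / (1.0 + np.exp(-margin)))  # -log(sigmoid(margin))

# When the policy matches the reference, loss = -log(0.5) = log 2;
# preferring the chosen molecule more than the reference lowers it.
loss_neutral = dpo_loss(-10.0, -10.0, -10.0, -10.0)
loss_better = dpo_loss(-9.0, -11.0, -10.0, -10.0)
```

In the chemistry setting, "chosen" and "rejected" completions would be SMILES strings with and without the desired property, which is how prompting on properties steers generation.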
- Excitement over new molecular design capabilities: One member reacted simply: "That sounds sick!"
- Members expressed enthusiasm over the potential of this fine-tuned model and considered sharing it widely on social media.
- SmileyLlama's debut on social media: The model, dubbed SmileyLlama, is a Chemical Language Model designed to create molecules from property prompts and has garnered attention on X.
- A post from the Axolotl account highlighted that it stands on par with other pure CLMs, while utilizing the Axolotl framework.
- Upcoming accessibility for testing: There is anticipation for the arrival of the HF model, allowing members to engage with the fine-tuned Llama model directly.
- This follows the recent advancements in using Llama for chemical tasks, indicating a move toward broader accessibility.
- Tweet from Axolotl (@axolotl_ai): SmileyLlama, a fine-tuned Chemical Language Model to design molecules from properties specified in the prompt. An SFT+DPO model on par with other pure CLM's, but built with Axolotl.
- SmileyLlama: Modifying Large Language Models for Directed Chemical Space Exploration: Here we show that a Large Language Model (LLM) can serve as a foundation model for a Chemical Language Model (CLM) which performs at or above the level of CLMs trained solely on chemical SMILES string...
DSPy ▷ #show-and-tell (2 messages):
DSPy usecase list
Livecoding sessions
- DSPy Usecase List is Here: The DSPy usecase list has been officially announced, aiming to explore what people are building with language models (LMs) and deploying in production. An initial list of nearly 100 products and OSS systems has been compiled, detailed in a tweet and a linked document.
- This initiative is led by @isaacbmiller1 and @lateinteraction to gather insights through a DSPy perspective.
- Livecoding Event Announcement: A reminder was shared about a current livecoding session happening in the designated Discord channel. Participants were directed to join in at this link.
- This event aims to foster hands-on coding experiences within the community.
Link mentioned: Tweet from isaac 🧩 (@isaacbmiller1): What are people building with LMs? What are they deploying in production? @lateinteraction and I want to begin to answer that question through a DSPy lens. We compiled an initial list of nearly 10...
DSPy ▷ #papers (1 messages):
batmanosama: https://huggingface.co/papers/2409.02889
DSPy ▷ #general (1 messages):
ColPali
Visual Document Retrieval Benchmark
- ColPali Revolutionizes Document Retrieval: A new method called ColPali has been released, enhancing document retrieval using a late interaction mechanism, making it efficient for visually rich documents according to this blog post.
- Designed by a team including Manuel Faysse and Hugues Sibille, ColPali overcomes limitations of existing systems by utilizing non-textual elements like tables and figures.
- Introducing the Visual Document Retrieval Benchmark: The paper introduces the Visual Document Retrieval Benchmark (ViDoRe), which assesses retrieval performance across various languages, domains, and document types.
- This benchmark aims to enhance the evaluation of retrieval systems by incorporating a wider range of document elements beyond just text.
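The late interaction mechanism mentioned above is ColBERT-style MaxSim scoring: each query token embedding picks its best-matching document patch embedding, and the per-token maxima are summed. A toy numpy sketch, where the random vectors stand in for real ViT patch and query token embeddings:

```python
import numpy as np

def maxsim_score(query_emb, doc_emb):
    """ColBERT-style late interaction as used by ColPali: for each query
    token embedding, take its best cosine match over all document patch
    embeddings, then sum. Shapes: (n_query_tokens, d) and (n_patches, d)."""
    q = query_emb / np.linalg.norm(query_emb, axis=-1, keepdims=True)
    d = doc_emb / np.linalg.norm(doc_emb, axis=-1, keepdims=True)
    sims = q @ d.T                   # (n_query_tokens, n_patches)
    return sims.max(axis=-1).sum()   # MaxSim per query token, summed

rng = np.random.default_rng(0)
query = rng.normal(size=(5, 128))    # 5 query token embeddings
good_doc = np.vstack([query + 0.01 * rng.normal(size=(5, 128)),
                      rng.normal(size=(20, 128))])  # contains near-matches
random_doc = rng.normal(size=(25, 128))
```

Because document patch embeddings can be precomputed and indexed, only this cheap max-and-sum runs at query time, which is what makes the approach efficient for visually rich pages.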
Link mentioned: ColPaLi: Efficient Document Retrieval with Contextualized Language Model: ColPaLi, a new document retrieval system, leverages Vision Language Models (VLMs) to efficiently handle visually rich documents. By combining visual and textual information, ColPaLi outperforms existi...
DiscoResearch ▷ #general (2 messages):
Multimodal LLMs
Training/Finetuning
- Inquiry about Multimodal LLMs Experience: A member asked if anyone has experience working with multimodal LLMs that utilize both text and speech as input, specifically in training or finetuning efforts.
- This reflects a growing interest in integrating speech capabilities into LLM frameworks.
- YouTube Resource on Multimodal Models: A member shared a YouTube video presumably related to multimodal LLMs, hinting at useful insights on the topic.
- This could be a great starting point for those interested in the operationalization of multimodal models.
LLM Finetuning (Hamel + Dan) ▷ #general (1 messages):
Meeting Transcription
Agent Workflows
Evaluation Challenges
- Transcription of Meeting Attendees: A discussion highlighted the need for a transcript of the entire meeting including names of all attendees.
- This could enhance reference accuracy and accountability for future discussions.
- Proof of Concept for Reporting: A participant is working on a proof of concept for one report, indicating a focused approach to their project.
- This moves towards practical implementation while keeping the scope manageable.
- Concerns about Agent Workflows: There was consideration of utilizing agents' workflows for the project, suggesting an innovative approach.
- However, there are worries about the complexity of evaluating agents due to the lack of established standards.
MLOps @Chipro ▷ #events (1 messages):
AI Enterprise Summit
San Francisco event
Keynote speakers
Networking opportunities
- AI Enterprise Summit to Kick Off in SF: An AI Enterprise Summit is set for October 2, 2024, in San Francisco, designed for executives, entrepreneurs, and AI enthusiasts to gather and discuss scaling AI products.
- Use code AIR50 for a special $50 savings on tickets to this exclusive one-day event.
- Notable Speakers at the Summit: The summit will feature industry leaders including Paul Baier (CEO of GAInsights), Ted Shelton (COO of Inflection AI), and Jeremiah Owyang (Blitzscaling Ventures) among others.
- These speakers will provide insights based on real business use cases, enhancing learning for all attendees.
- Curated Gathering for AI Professionals: This event promises to be a curated gathering of ambitious executives and AI professionals, offering opportunities to network and learn.
- Participants will engage with thought leaders and explore various aspects of AI product development.
Link mentioned: AI Realized – The Enterprise AI Summit · Luma: Christina Ellwood & David Yakobovitch Present... AI Realized Summit 2024 For Enterprise Executives, Entrepreneurs & AI Innovators. Join us in San Francisco…
Gorilla LLM (Berkeley Function Calling) ▷ #discussion (1 messages):
huanzhimao: Thanks for the issue! Will take a look