[AINews] Not much technical happened today
This is AI News! an MVP of a service that goes thru all AI discords/Twitters/reddits and summarizes what people are talking about, so that you can keep up without the fatigue. Signing up here opts you in to the real thing when we launch it 🔜
Funding is all you need.
AI News for 10/1/2024-10/2/2024. We checked 7 subreddits, 433 Twitters and 31 Discords (225 channels, and 1832 messages) for you. Estimated reading time saved (at 200wpm): 219 minutes. You can now tag @smol_ai for AINews discussions!
Today OpenAI announced raising $6.6B in new funding at a $157B valuation. On Twitter, ChatGPT head of product Nick Turley also added "250M weekly actives, up from 200M about a month ago".
Also in fundraising news, Poolside announced a $500 million fundraise to make progress towards AGI.
Table of Contents
- AI Twitter Recap
- AI Reddit Recap
- AI Discord Recap
- PART 1: High level Discord summaries
- aider (Paul Gauthier) Discord
- OpenRouter (Alex Atallah) Discord
- HuggingFace Discord
- LM Studio Discord
- Nous Research AI Discord
- Interconnects (Nathan Lambert) Discord
- Unsloth AI (Daniel Han) Discord
- GPU MODE Discord
- Eleuther Discord
- OpenAI Discord
- Latent Space Discord
- Stability.ai (Stable Diffusion) Discord
- Perplexity AI Discord
- Cohere Discord
- LlamaIndex Discord
- Torchtune Discord
- Modular (Mojo 🔥) Discord
- OpenInterpreter Discord
- LangChain AI Discord
- OpenAccess AI Collective (axolotl) Discord
- tinygrad (George Hotz) Discord
- LAION Discord
- DSPy Discord
- LLM Agents (Berkeley MOOC) Discord
- Mozilla AI Discord
- MLOps @Chipro Discord
- PART 2: Detailed by-Channel summaries and links
- aider (Paul Gauthier) ▷ #general (198 messages🔥🔥):
- aider (Paul Gauthier) ▷ #questions-and-tips (70 messages🔥🔥):
- aider (Paul Gauthier) ▷ #links (3 messages):
- OpenRouter (Alex Atallah) ▷ #announcements (2 messages):
- OpenRouter (Alex Atallah) ▷ #general (244 messages🔥🔥):
- HuggingFace ▷ #announcements (1 message):
- HuggingFace ▷ #general (164 messages🔥🔥):
- HuggingFace ▷ #today-im-learning (5 messages):
- HuggingFace ▷ #cool-finds (2 messages):
- HuggingFace ▷ #i-made-this (7 messages):
- HuggingFace ▷ #computer-vision (4 messages):
- HuggingFace ▷ #NLP (10 messages🔥):
- LM Studio ▷ #general (125 messages🔥🔥):
- LM Studio ▷ #hardware-discussion (17 messages🔥):
- Nous Research AI ▷ #general (118 messages🔥🔥):
- Nous Research AI ▷ #ask-about-llms (1 message):
- Nous Research AI ▷ #research-papers (1 message):
- Nous Research AI ▷ #interesting-links (3 messages):
- Nous Research AI ▷ #reasoning-tasks (1 message):
- Interconnects (Nathan Lambert) ▷ #news (39 messages🔥):
- Interconnects (Nathan Lambert) ▷ #ml-drama (65 messages🔥🔥):
- Interconnects (Nathan Lambert) ▷ #random (9 messages🔥):
- Interconnects (Nathan Lambert) ▷ #memes (2 messages):
- Interconnects (Nathan Lambert) ▷ #rl (2 messages):
- Interconnects (Nathan Lambert) ▷ #reads (5 messages):
- Interconnects (Nathan Lambert) ▷ #posts (1 message):
- Unsloth AI (Daniel Han) ▷ #general (82 messages🔥🔥):
- Unsloth AI (Daniel Han) ▷ #off-topic (12 messages🔥):
- Unsloth AI (Daniel Han) ▷ #help (21 messages🔥):
- Unsloth AI (Daniel Han) ▷ #research (1 message):
- GPU MODE ▷ #triton (3 messages):
- GPU MODE ▷ #torch (4 messages):
- GPU MODE ▷ #announcements (1 message):
- GPU MODE ▷ #cool-links (1 message):
- GPU MODE ▷ #beginner (2 messages):
- GPU MODE ▷ #pmpp-book (1 message):
- GPU MODE ▷ #youtube-recordings (6 messages):
- GPU MODE ▷ #torchao (11 messages🔥):
- GPU MODE ▷ #sequence-parallel (3 messages):
- GPU MODE ▷ #off-topic (9 messages🔥):
- GPU MODE ▷ #irl-meetup (1 message):
- GPU MODE ▷ #triton-puzzles (6 messages):
- GPU MODE ▷ #hqq-mobius (8 messages🔥):
- GPU MODE ▷ #llmdotc (38 messages🔥):
- GPU MODE ▷ #rocm (1 message):
- GPU MODE ▷ #liger-kernel (2 messages):
- GPU MODE ▷ #metal (1 message):
- GPU MODE ▷ #diffusion (2 messages):
- Eleuther ▷ #general (72 messages🔥🔥):
- Eleuther ▷ #research (13 messages🔥):
- OpenAI ▷ #ai-discussions (47 messages🔥):
- OpenAI ▷ #gpt-4-discussions (18 messages🔥):
- OpenAI ▷ #prompt-engineering (3 messages):
- OpenAI ▷ #api-discussions (3 messages):
- Latent Space ▷ #ai-general-chat (53 messages🔥):
- Stability.ai (Stable Diffusion) ▷ #general-chat (49 messages🔥):
- Perplexity AI ▷ #general (38 messages🔥):
- Perplexity AI ▷ #sharing (7 messages):
- Perplexity AI ▷ #pplx-api (2 messages):
- Cohere ▷ #discussions (26 messages🔥):
- Cohere ▷ #questions (13 messages🔥):
- Cohere ▷ #api-discussions (6 messages):
- LlamaIndex ▷ #blog (2 messages):
- LlamaIndex ▷ #general (37 messages🔥):
- LlamaIndex ▷ #ai-discussion (1 message):
- Torchtune ▷ #announcements (1 message):
- Torchtune ▷ #general (18 messages🔥):
- Torchtune ▷ #dev (13 messages🔥):
- Modular (Mojo 🔥) ▷ #mojo (25 messages🔥):
- OpenInterpreter ▷ #general (9 messages🔥):
- OpenInterpreter ▷ #O1 (7 messages):
- OpenInterpreter ▷ #ai-content (3 messages):
- LangChain AI ▷ #general (15 messages🔥):
- LangChain AI ▷ #share-your-work (2 messages):
- OpenAccess AI Collective (axolotl) ▷ #general (8 messages🔥):
- OpenAccess AI Collective (axolotl) ▷ #axolotl-help-bot (6 messages):
- tinygrad (George Hotz) ▷ #general (6 messages):
- tinygrad (George Hotz) ▷ #learn-tinygrad (3 messages):
- LAION ▷ #general (2 messages):
- LAION ▷ #resources (1 message):
- DSPy ▷ #papers (1 message):
- DSPy ▷ #colbert (1 message):
- LLM Agents (Berkeley MOOC) ▷ #mooc-questions (2 messages):
- Mozilla AI ▷ #announcements (2 messages):
- MLOps @Chipro ▷ #general-ml (1 message):
AI Twitter Recap
all recaps done by Claude 3.5 Sonnet, best of 4 runs.
AI Model Developments and Industry Updates
- New AI Models and Capabilities: @LiquidAI_ announced three new models: 1B, 3B, and 40B MoE (12B activated), featuring a custom Liquid Foundation Models (LFMs) architecture that outperforms transformer models on benchmarks. These models boast a 32k context window and minimal memory footprint, handling 1M tokens efficiently. @perplexity_ai teased an upcoming feature with "⌘ + ⇧ + P — coming soon," hinting at new functionalities for their AI platform.
- Open Source and Model Releases: @basetenco reported that OpenAI released Whisper V3 Turbo, an open-source model with 8x faster relative speed vs Whisper Large, 4x faster than Medium, and 2x faster than Small, featuring 809M parameters and full multilingual support. @jaseweston announced that FAIR is hiring 2025 research interns, focusing on topics like LLM reasoning, alignment, synthetic data, and novel architectures.
- Industry Partnerships and Products: @cohere introduced Takane, an industry-best custom-built Japanese model developed in partnership with Fujitsu Global. @AravSrinivas teased an upcoming Mac app for an unspecified AI product, indicating the expansion of AI tools to desktop platforms.
AI Research and Technical Discussions
- Model Training and Optimization: @francoisfleuret expressed uncertainty about training a single model with 10,000 H100s, highlighting the complexity of large-scale AI training. @finbarrtimbers noted excitement about the potential for inference time search with 1B models getting good, suggesting new possibilities in conditional compute.
- Technical Challenges: @_lewtun highlighted a critical issue with LoRA fine-tuning and chat templates, emphasizing the need to include the embedding layer and LM head in trainable parameters to avoid nonsense outputs. This applies to models trained with ChatML and Llama 3 chat templates (a hedged sketch of the fix follows this list).
- AI Tools and Frameworks: @fchollet shared how to enable float8 training or inference on Keras models using `.quantize(policy)`, demonstrating the framework's flexibility for various quantization forms (a minimal sketch also follows this list). @jerryjliu0 introduced create-llama, a tool to spin up complete agent templates powered by LlamaIndex workflows in Python and TypeScript.
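On the LoRA issue above: a minimal sketch of the fix using Hugging Face's peft library. The base model name and LoRA hyperparameters are placeholders, and the exact module names (`embed_tokens`, `lm_head`) vary by architecture:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B")  # placeholder

config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    # Keep the embedding layer and LM head fully trainable so new
    # chat-template tokens are actually learned, avoiding the
    # nonsense-output failure mode described above.
    modules_to_save=["embed_tokens", "lm_head"],
)
model = get_peft_model(model, config)
model.print_trainable_parameters()
```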
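And a sketch of the Keras quantization call from the same list, assuming the Keras 3 `Model.quantize` API; the model and layer sizes are arbitrary:

```python
import keras

model = keras.Sequential([
    keras.layers.Input(shape=(16,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1),
])
# Convert supported layers to float8 compute; the model can then be
# trained or used for inference as usual.
model.quantize("float8")
```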
AI Industry Trends and Commentary
- AI Development Analogies: @mmitchell_ai shared a critique of the tech industry's approach to AI progress, comparing it to a video game where the goal is finding an escape hatch rather than benefiting society. This perspective highlights concerns about the direction of AI development.
- AI Freelancing Opportunities: @jxnlco outlined reasons why freelancers are poised to win big in the AI gold rush, citing high demand, complexity of AI systems, and the opportunity to solve real problems across industries.
- AI Product Launches: @swyx compared Google DeepMind's NotebookLM to ChatGPT, noting its multimodal RAG capabilities and native integration of LLM usage within product features. This highlights the ongoing competition and innovation in AI-powered productivity tools.
Memes and Humor
- @bindureddy humorously commented on Sam Altman's statements about AI models, pointing out a pattern of criticizing current models while hyping future ones.
- @svpino joked about hosting websites that make $1.1M/year for just $2/month, emphasizing the low cost of web hosting and poking fun at overcomplicated solutions.
AI Reddit Recap
/r/LocalLlama Recap
Theme 1. OpenAI's Whisper Turbo: Breakthrough in Browser-Based Speech Recognition
- The insanity of whisper versions (Score: 30, Comments: 14): The post discusses the numerous versions of Whisper, including size variations (base, small, tiny, large, turbo), version iterations (v1, v2, v3), language-specific models (English-only), and performance-focused variants (faster whisper, insanely-fast whisper). The author seeks guidance on selecting an appropriate Whisper model, considering factors such as GPU performance and language requirements, specifically mentioning medium.en for English and potentially a larger non-English version for foreign transcription/translation.
- Whisper-ctranslate2 (based on faster-whisper) is recommended as the fastest option, with large models suggested for non-English use. Version comparisons indicate v2 and v3 outperform v1, with language-specific variations in v3's performance.
- Hardware requirements for large Whisper models include 6GB VRAM (minimum), with CPU inference speeds around 0.2-0.5x realtime. Users reported WhisperX crashes on 8GB fp32 GPUs, while fp16 performs better with lower VRAM usage.
- Performance benchmarks for Whisper models are available, including FP16 benchmarking and large v3 benchmarking. Alternative options like whisperfile, a llamafile wrapper for Whisper, were suggested for fast CPU use cases.
- OpenAI's new Whisper Turbo model running 100% locally in your browser with Transformers.js (Score: 456, Comments: 52): OpenAI's Whisper Turbo model can now run 100% locally in web browsers using Transformers.js, enabling speech-to-text transcription without sending data to external servers. This implementation leverages WebGPU for faster processing, achieving real-time transcription speeds on compatible devices, and offers a fallback to WebGL for broader compatibility.
- The Whisper large-v3-turbo model achieves ~10x RTF (real-time factor), transcribing 120 seconds of audio in ~12 seconds on an M3 Max. It's a distilled version of Whisper large-v3, reducing decoding layers from 32 to 4 for faster processing with minor quality degradation.
- The model runs 100% locally in the browser using Transformers.js and WebGPU, without hitting OpenAI servers. The 800MB model is downloaded and stored in the browser's cache storage, enabling offline use through service workers.
- Users discussed the model's multilingual capabilities and potential accuracy changes. A real-time version of the model is available on Hugging Face, and it can also be used offline with whisper.cpp by ggerganov.
- Whisper Turbo now supported in Transformers 🔥 (Score: 174, Comments: 33): Hugging Face's Open Source Audio team has released Whisper Turbo in Transformers format, featuring an 809M parameter model that is 8x faster and 2x smaller than Large v3. The multilingual model supports timestamps and uses 4 decoder layers instead of 32, with implementation in Transformers requiring minimal code for automatic speech recognition tasks using the ylacombe/whisper-large-v3-turbo checkpoint (a usage sketch follows this theme).
- Whisper Turbo's performance is discussed, with users comparing it to faster-whisper and Nvidia Canary. The latter is noted to be at the top of the Open ASR leaderboard but supports fewer languages.
- GGUF support for Whisper Turbo was quickly implemented, with the developer providing links to the GitHub pull request and model checkpoints within hours of the request.
- Users confirmed Whisper Turbo's compatibility with Mac M-chip, providing code modifications to run it on MPS. One user reported achieving 820X realtime speed on a 4090 GPU without performance degradation.
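A minimal transcription sketch for the Transformers support discussed above, using the ylacombe/whisper-large-v3-turbo checkpoint named in the post; the audio path and device are placeholders:

```python
import torch
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="ylacombe/whisper-large-v3-turbo",
    torch_dtype=torch.float16,
    device="cuda:0",  # "mps" also works on Apple Silicon, per user reports
)
result = asr("audio.wav", return_timestamps=True)
print(result["text"])
```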
Theme 2. Convergence and Limitations of Current LLM Architectures
- All LLMs are converging towards the same point (Score: 108, Comments: 57): Various large language models (LLMs) including Gemini, GPT-4, GPT-4o, Llama 405B, MistralLarge, CommandR, and DeepSeek 2.5 were used to generate a list of 100 items, with the first six models producing nearly identical datasets and groupings. The author observed a convergence in the main data output across these models, despite differences in their "yapping" or extraneous text, leading to the conclusion that these LLMs are trending towards a common point that may not necessarily indicate Artificial Super Intelligence (ASI).
- ArsNeph argues that LLMs are converging due to heavy reliance on synthetic data from the GPT family, leading to widespread "GPT slop" and lack of originality. Open-source fine-tunes and models like Llama 2 are essentially distilled versions of GPTs, while newer models like Llama 3 and Gemma 2 use DPO to appear more likable.
- Users discuss potential solutions to LLM convergence, including experimenting with different samplers and tokenization methods. The XTC sampler for exllamav2 is mentioned as a promising approach to reduce repetitive outputs, with some users eager to implement it in llama.cpp.
- Discussion touches on Claudisms, a phenomenon where Claude exhibits its own parallel versions of GPTisms, potentially as a form of fingerprinting. Some speculate that these patterns might be artifacts used to identify text generated by specific models, even when other models train on that data.
- Best Models for 48GB of VRAM (Score: 225, Comments: 67): An individual with a new RTX A6000 GPU featuring 48GB of VRAM is seeking recommendations for the best models to run on this hardware. They specifically request models that can operate with at least Q4 quantization or 4 bits per weight (4bpw) to optimize performance on their high-capacity GPU.
- Users recommend running 70B models like Llama 3.1 70B or Qwen2.5 72B. Performance benchmarks show Qwen2.5 72B achieving 12-13 tokens/second with q4_0 quantization and 8.5 tokens/second with q4_K_S quantization on 2x RTX3090 GPUs.
- ExllamaV2 with TabbyAPI is suggested for better speeds, potentially reaching 15 tokens/second with Mistral Large at 3 bits per weight. A user reported up to 37.31 tokens/second on coding tasks using Qwen 2 72B with tensor parallelism and speculative decoding on Linux.
- Some users recommend trying Mistral-Large-Instruct-2407 at 3 bits per weight for a 120B parameter model, while others suggest Qwen 72B as the "smartest" 70B model. Cooling solutions for the RTX A6000 were discussed, with one user showcasing a setup using Silverstone FHS 120X fans in an RM44 chassis.
- Serving 70B-scale LLMs Efficiently on Low-resource Edge Devices (Score: 53, Comments: 17): The paper introduces TPI-LLM, a tensor parallel inference system designed to run 70B-scale language models on low-resource edge devices, addressing privacy concerns by keeping sensitive data local. TPI-LLM implements a sliding window memory scheduler and a star-based allreduce algorithm to overcome memory limitations and communication bottlenecks, respectively. Experiments show that TPI-LLM achieves over 80% reduction in time-to-first-token and token latency compared to Accelerate, and over 90% reduction compared to Transformers and Galaxy, while reducing the peak memory footprint of Llama 2-70B by 90%, requiring only 3.1 GB of memory for 70B-scale models.
- TPI-LLM leverages multiple edge devices for inference through tensor parallelism, running Llama 2-70B on 8 devices with 3GB of memory each. This distributed approach allows for significant memory reduction but comes with a trade-off in speed.
- The system's performance is limited by disk I/O, resulting in a 29.4-second time-to-first-token and an average of 26.1 seconds per generated token for a 70B model. Despite these latencies, the approach shows promise in running large language models on low-resource devices.
- Users discussed alternative distributed implementations like exo for running models across multiple devices. Concerns were raised about potential issues with realtime pool changes and layer rebalancing in distributed setups.
Theme 3. Nvidia's NVLM 72B: New Multimodal Model Release
- Nvidia just dropped its Multimodal model NVLM 72B (Score: 92, Comments: 10): Nvidia has released its multimodal model NVLM 72B, with details available in a paper and the model accessible through a Hugging Face repository. This 72 billion parameter model represents Nvidia's entry into the multimodal AI space, capable of processing and generating both text and visual content.
- NVLM 72B is built on top of Qwen 2 72B, as revealed by a quick look at the config file.
- Ggerganov, creator of llama.cpp, expressed the need for new contributors with software architecture skills to implement multimodal support, citing concerns about project maintainability. He stated this in a GitHub issue comment.
- Discussion arose about why major companies release models in Hugging Face format rather than GGUF. Reasons include compatibility with existing hardware, no need for quantization, and the ability to finetune, which is not easily done with GGUF files.
Theme 4. Advancements in On-Device AI: Gemini Nano 2 for Android
- Gemini Nano 2 is now available on Android via experimental access (Score: 38, Comments: 12): Gemini Nano 2, an upgraded version of Google's on-device AI model for Android, is now accessible to developers through experimental access. This new iteration, nearly twice the size of its predecessor (Nano 1), demonstrates significant improvements in quality and performance, rivaling the capabilities of much larger models in both academic benchmarks and real-world applications.
- Users speculated about extracting weights from Gemini Nano 2, with discussions on the model's architecture and size. It was clarified that Nano 2 has 3.25B parameters, not 2B as initially suggested.
- There was interest in the model's transparency, with questions about why Google isn't more open about the LLM used. Some speculated it might be a version of Gemini 1.5 flash.
- A user provided information from the Gemini Paper, stating that Nano 2 is trained by distilling from larger Gemini models and is 4-bit quantized for deployment.
Theme 5. Innovative Techniques for Improving LLM Performance
- Archon: An Architecture Search Framework for Inference-Time Techniques from Stanford. Research Paper, Codes, Colab available; `pip install archon-ai`. OSS version of 01? (Score: 35, Comments: 2): Stanford researchers introduced Archon, an open-source architecture search framework for inference-time techniques, potentially serving as an OSS alternative to OpenAI's o1. The framework, available via `pip install archon-ai`, comes with a research paper, code, and a Colab notebook, allowing users to explore and implement various inference-time methods for large language models.
- Just discovered the Hallucination Eval Leaderboard - GLM-4-9b-Chat leads in lowest rate of hallucinations (OpenAI o1-mini is in 2nd place) (Score: 39, Comments: 6): The Hallucination Eval Leaderboard reveals GLM-4-9b-Chat as the top performer with the lowest rate of hallucinations, followed by OpenAI's o1-mini in second place. This discovery has prompted consideration of GLM-4-9b as a potential model for RAG (Retrieval-Augmented Generation) applications, suggesting its effectiveness in reducing false information generation.
- GLM-4-9b-Chat and Jamba Mini are highlighted as promising models with low hallucination rates, but are underutilized. The inclusion of Orca 13B in the top performers was also noted with surprise.
- The leaderboard's data is seen as valuable for LLM-based Machine Translation, with users expressing enthusiasm about the potential applications in this field.
- GLM-4 is praised for its 64K effective context, which exceeds many larger models on the RULER leaderboard, and its ability to minimize code switching in multilingual tasks, making it a strong candidate for RAG applications.
- Shockingly good super-intelligent summarization prompt (Score: 235, Comments: 39): The post discusses a summarization system prompt inspired by user Flashy_Management962, which involves generating 5 essential questions to capture the main points of a text and then answering them in detail. The author claims this method, tested on Qwen 2.5 32b q_4, is "shockingly better" than previous approaches they've tried, and outlines the process for formulating questions that address central themes, key ideas, facts, author's perspective, and implications (a prompt sketch follows this theme).
- Users discussed refining the prompt by specifying answer length and including examples. The OP mentioned trying more complex prompts but found simpler instructions worked best with the Qwen 2.5 32b q_4 model.
- The technique of generating question & answer pairs for summarization sparked interest, with some users suggesting it's a known NLP task. The OP noted a significant improvement, describing it as a "30 point IQ level jump for the LLM" in understanding text.
- The summarization method is being integrated into projects like Harbor Boost under the name "supersummer". Users also shared related resources, including DSPy and an e-book summary tool for further exploration.
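A sketch of the question-driven summarization prompt described above, wired to any OpenAI-compatible chat endpoint. The client setup and model name are placeholders, and the system prompt paraphrases the post's five question categories:

```python
from openai import OpenAI

SYSTEM_PROMPT = """When given a text, first formulate the 5 essential questions
that capture its main points: the central theme, the key supporting ideas, the
important facts, the author's perspective, and the significant implications.
Then answer each question in detail, one at a time."""

client = OpenAI()  # placeholder configuration; the post tested Qwen 2.5 32b q_4
response = client.chat.completions.create(
    model="qwen2.5-32b-instruct",  # placeholder model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": open("article.txt").read()},  # text to summarize
    ],
)
print(response.choices[0].message.content)
```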
Other AI Subreddit Recap
r/machinelearning, r/openai, r/stablediffusion, r/ArtificialInteligence, /r/LLMDevs, /r/Singularity
AI Model Releases and Capabilities
- OpenAI releases o1-mini model: OpenAI released o1-mini, a smaller version of their o1 model. Some users report getting o1 responses randomly when using GPT-4, suggesting OpenAI may be testing a model router to determine when to use o1 vs GPT-4. Source
- Whisper V3 Turbo released: OpenAI released Whisper V3 Turbo, an optimized version of their large-v3 speech recognition model that offers 8x faster transcription speed with minimal accuracy loss. Source
- PuLID for Flux now works on ComfyUI: The PuLID (Pure and Lightning ID customization) model for the Flux image generation model is now compatible with the ComfyUI interface. Source
AI Company Updates and Events
- OpenAI DevDay announcements: OpenAI held a developer day event with multiple announcements, including:
- A 98% decrease in cost per token from GPT-4 to 4o mini
- A 50x increase in token volume across their systems
- Claims of "excellent model intelligence progress" Source
- Mira Murati's exit from OpenAI: Before Mira Murati's surprise departure from OpenAI, some staff reportedly felt the o1 model had been released prematurely. Source
AI Features and Applications
- Advanced voice mode rolling out: OpenAI is starting to roll out advanced voice mode to free users of ChatGPT. Source
- Realtime API announced: OpenAI announced the Realtime API, which will enable Advanced Voice Mode functionality in other applications. Source
- Copilot Vision demonstrated: Microsoft demonstrated Copilot Vision, which can see and interact with webpages the user is viewing. Source
- NotebookLM capabilities: Google's NotebookLM tool can process multiple books, long videos, and audio files, providing summaries, quotes, and explanations. It can also handle content in foreign languages. Source
AI Ethics and Societal Impact
- Concerns about job displacement: The CEO of Duolingo discussed potential job displacement due to AI, sparking debate about the societal impacts of automation. Source
- Sam Altman on AI progress: Sam Altman of OpenAI discussed the rapid progress of AI, stating that by 2030, people may be able to ask AI to perform tasks that previously took humans months or years to complete. Source
AI Research and Development
- UltraRealistic Lora Project: A new LoRA (Low-Rank Adaptation) model for the Flux image generation system aims to create more realistic, dynamic photography-style outputs. Source
AI Discord Recap
A summary of Summaries of Summaries by O1-mini
Theme 1. Advancements and Launches of AI Models
- Nova Pro Surpasses GPT-4 in Benchmarks: Nova-Pro achieves outstanding scores with 97.2% on ARC-C and 96.9% on GSM8K, outperforming GPT-4 and Claude-3.5 in reasoning and mathematics.
- Llama 3.2 Introduced with Vision Capabilities: Llama 3.2 supports 11B and 90B configurations, enabling local deployment and enhanced fine-tuning for custom vision tasks.
- Phi-3.5 Models Highlight Censorship Features: Phi-3.5-MoE showcases extensive censorship mechanisms, sparking discussions on model usability for technical applications.
Theme 2. AI Infrastructure and Tooling Enhancements
- Streamlined Project Management with o1-engineer: o1-engineer leverages OpenAI's API for efficient code generation and project planning, enhancing developer workflows.
- Local AI Screen Recording via screenpipe: screenpipe offers secure, continuous local AI recording built with Rust, serving as a robust alternative to Rewind.ai.
- Resolving Installation Issues in LM Studio: Community members troubleshoot LM Studio launch problems, emphasizing the importance of compatibility with Llama 3.1 and the use of virtual environments.
Theme 3. AI Ethics, Safety, and Legal Implications
- Debating AI Safety and Ethical Concerns: Discussions on AI Safety tackle both traditional ethics and modern threats like deepfakes, often humorously likened to "angry grandmas shouting at clouds."
- NYT Lawsuit Impacts AI Copyright Stance: The potential NYT lawsuit against OpenAI raises questions about copyright infringement and the broader legal responsibilities of LLMs.
- Ethical Use of AI in Creative Works: Community outrage over Character.AI's unauthorized use of an individual's likeness highlights the necessity for responsible AI development practices.
Theme 4. Model Training, Fine-Tuning, and Optimization
- Enhancing Efficiency with Activation Checkpointing: Implementing activation checkpointing in training reduces memory usage, enabling the handling of larger models like Llama 3.1 70B (a minimal PyTorch sketch follows this theme).
- Addressing FP8 Precision Training Challenges: Researchers explore instabilities in FP8 precision during extended training runs, seeking solutions to optimize stability and performance.
- Optimizing Multi-GPU Training Techniques: Effective multi-GPU training emphasizes parallel network training and efficient state communication to scale up to 10,000 GPUs.
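A minimal PyTorch sketch of the activation-checkpointing idea from the first bullet above: recompute a block's activations during the backward pass instead of storing them, trading compute for memory (module sizes are arbitrary):

```python
import torch
from torch.utils.checkpoint import checkpoint

block = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.GELU(),
    torch.nn.Linear(4096, 1024),
)

x = torch.randn(8, 1024, requires_grad=True)
# Activations inside `block` are not kept; they are recomputed when
# backward() needs them, shrinking peak memory for large models.
y = checkpoint(block, x, use_reentrant=False)
y.sum().backward()
```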
Theme 5. AI Integration and Deployment Strategies
- Semantic Search with Oracle AI Vector Search and LlamaIndex: Combining Oracle AI Vector Search with LlamaIndex enhances retrieval augmented generation (RAG) pipelines for more accurate contextual data handling.
- Deploying HuggingFace Models as LangChain Agents: HuggingFace models can be integrated as Agents in LangChain, facilitating advanced chat and text generation tasks within development workflows.
- Local Deployment Strategies with OpenRouter and LlamaIndex: Utilizing OpenRouter and LlamaIndex for semantic search and multimodal models supports scalable and efficient AI deployment in diverse applications.
PART 1: High level Discord summaries
aider (Paul Gauthier) Discord
- Prompt Caching Insights: A summary of various AI models' prompt caching capabilities was shared, focusing on OpenAI's and Anthropic’s strategies, with discussion on cost implications and cache misses.
- Key points included discussions on the caching mechanisms of models like DeepSeek and Gemini, emphasizing their efficiency.
- AI Models for Code Editing Compared: Sonnet outperformed other models in overall performance, but at a higher cost than o1-preview, which offers better token reimbursement under certain conditions.
- Recommendations included benchmarking Gemini as an architect model in conjunction with Sonnet to potentially enhance editing capabilities.
- YAML Parsing Pitfalls: Users flagged quirks in YAML parsing, specifically around the conversion of keys like 'yes' to booleans, complicating their configurations.
- Strategies shared for preventing this issue included the use of quoted strings to maintain intended parsing outcomes (see the illustration at the end of this section).
- Streamlined Project Management via o1-engineer: o1-engineer serves as a command-line tool for developers to efficiently manage projects using OpenAI's API for tasks such as code generation.
- This tool aims to enhance the development process, focusing specifically on project planning.
- Seamless Local AI Recording with screenpipe: screenpipe allows for continuous local AI screen recording, designed for building applications that require complete context retention.
- Positioned as a secure alternative to Rewind.ai, it ensures user data ownership and is built with Rust for higher efficiency.
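On the YAML parsing item above: a quick PyYAML illustration of the pitfall and the quoting fix. PyYAML implements YAML 1.1, which resolves bare yes/no/on/off to booleans:

```python
import yaml

print(yaml.safe_load("yes: keep"))    # {True: 'keep'} -- the key became a boolean
print(yaml.safe_load('"yes": keep'))  # {'yes': 'keep'} -- quoting keeps the string
```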
OpenRouter (Alex Atallah) Discord
- Samba Nova launches free Llama endpoints: In collaboration with Samba Nova, five free bf16 endpoints for Llama 3.1 and 3.2 are now live on their new inference chips to measure performance, including the 405B Instruct model.
- Expect exciting throughput while using these endpoints aimed at supporting the Nitro ecosystem.
- Gemini Models standardize token sizes: The Gemini and PaLM models now use standardized token sizes, leading to 2x higher prices but 25% shorter inputs, which should help overall affordability (details here).
- Despite the changes, users can expect a 50% reduction in costs over time.
- Cohere offers discounts on models: Cohere models are available on OpenRouter with a 5% discount and upgraded to include tool calling support in their new v2 API.
- This update enhances user access to tools, aiming to improve overall experience.
- Realtime API integration discussed: Discussions centered around OpenRouter's support for the new Realtime API, particularly its current limitations with audio inputs and outputs.
- Users are eager for improvements but no timeline for enhancements has been confirmed yet.
- OpenRouter model performance under scrutiny: Concerns regarding model performance and availability on OpenRouter surfaced, particularly with greyed-out providers and fluctuating rates.
- Users need to stay vigilant about price changes as they navigate different provider options.
HuggingFace Discord
- Llama 3.2 Launch Brings Local Running: Llama 3.2 has launched, allowing local execution with vision fine-tuning available through a new recipe. This model supports 11B and 90B configurations for enhanced fine-tuning capabilities.
- Community feedback indicates a positive reception, as members explore opportunities for applying these models effectively and engaging in discussions on their implications.
- Transformers 4.45.0 Simplifies Tool Creation: The release of transformers v4.45.0 introduces tools built with a `@tool` decorator, streamlining the development process for users (a sketch appears at the end of this section). This update enhances building efficiency for diverse applications.
- Community members enthusiastically discussed these changes, calling for feedback on the updated design and proposing various uses.
- Fine-tuning Mistral 7B for Narrative Generation: A member is keen to fine-tune Mistral 7B for story generation and asked for guidance on pretraining methodologies. They learned that Mistral is already pretrained on substantial data, so the emphasis falls on a task-specific fine-tuning regime.
- Further clarification was provided, distinguishing between pretraining and the refinement process necessary to specialize models efficiently.
- NotebookLM Eclipses Traditional Tools: Participants praised NotebookLM for its efficacy as an end-to-end multi-modal RAG app, proving particularly effective for analyzing financial reports. A member showcased its capabilities in a YouTube video, exploring the potential for educational content.
- Teammates expressed interest in its application potential, heightening discussions on its future development and integration.
- Exploration of 'trocr-large-handwriting' Benefits: A member suggested employing 'trocr-large-handwriting' for datasets closely resembling handwriting for better performance. The conversation included ideas on fine-tuning on specific character datasets to improve recognition.
- This led to a broader discussion on model choices for handwriting recognition tasks, with members weighing the pros and cons of various approaches.
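Returning to the transformers v4.45.0 item above: a minimal sketch of the `@tool` decorator, assuming the agents API shipped in that release, where typed arguments and an Args: docstring section define the tool's schema:

```python
from transformers import tool

@tool
def word_count(text: str) -> int:
    """Counts the words in a piece of text.

    Args:
        text: The text whose words should be counted.
    """
    return len(text.split())
```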
LM Studio Discord
- LM Studio Launch Woes: Users encountered problems launching LM Studio post-update, particularly with shortcuts and executable files in the app directory; one suggested workaround was relying on the updated installation files.
- This indicates potential issues with legacy installations that may hinder users' productivity.
- Llama 3.1 Compatibility Concerns: An error loading the Llama 3.1 model in LM Studio prompted recommendations to update to version 0.3.3, which officially supports the model.
- This mismatch highlights the necessity of ensuring software compatibility when upgrading models.
- Langflow Integration Success: One user successfully integrated LM Studio with Langflow by adjusting the base URL of OpenAI components, finding the modification enabled smoother workflow.
- They pointed to available resources for Langflow, which might help others streamline their setups.
- Optimizing GPU Utilization: Discussions around GPU utilization settings in LM Studio focused on defining what 'offload' specifically entails regarding CPU and GPU resource management.
- Clarifications were sought about optimized configurations for using GPU versus CPU, particularly for tasks like autocomplete.
- High-End GPU Performance Showdown: A user detailed their setup with 7 RTX 4090 GPUs, raising eyebrows with an estimated 3000 watts of power consumption during operation.
- Another user humorously pointed out the dramatic implications of such intense power usage, reflecting on the fascination with high-performance systems.
Nous Research AI Discord
- Rapid Model Quantization Amazes Users: Users expressed surprise at the blazingly fast quantization time for a 3b model, which took less than a minute to process.
- One user humorously noted the economic comparison to minimum wage work, highlighting potential efficiency gains over human labor.
- Audio Token Pricing Raises Concerns: Discussion arose around audio tokens costing $14 per hour for output, which some deemed expensive compared to human agents.
- Participants noted that while AI is continuously available, the pricing may not significantly undercut traditional support roles.
- Ballmer Peak Study Captivates Members: A shared paper on the Ballmer Peak indicates that a small amount of alcohol can enhance programming ability, challenging traditional beliefs.
- Members crowded into discussions about personal experiences in seeking the 'perfect dosage' for productivity.
- DisTrO's Ability to Handle Bad Actors: Discussion pointed toward DisTrO's verification layer, capable of detecting and filtering bad actors during training.
- While it doesn’t inherently manage untrustworthy nodes, this layer provides some degree of protection.
- Nova LLM Suite Introduced by RubiksAI: RubiksAI launched the Nova suite of Large Language Models, with Nova-Pro achieving an impressive 88.8% on MMLU.
- Benchmark scores for Nova-Pro show 97.2% on ARC-C and 96.9% on GSM8K, focusing on features like Nova-Focus and improved Chain-of-Thought abilities.
Interconnects (Nathan Lambert) Discord
- OpenAI bags $6.6B for ambitious plans: OpenAI has successfully raised $6.6 billion at a staggering $157 billion valuation, facilitated by Thrive Capital among others like Microsoft and Nvidia.
- CFO Sarah Friar shared that this will enable liquidity options for employees post-funding, marking a significant shift in the company’s financial landscape.
- Liquid.AI claims architectural breakthroughs: Discussion emerged around Liquid.AI, which reportedly surpasses previous performance predictions made by Ilya Sutskever in 2020.
- While some skeptics question its validity, the insights from Mikhail Parakhin lend a degree of credibility to the claims.
- AI's potential in advanced mathematics: Robert Ghrist instigated a dialogue on whether AI can engage in research-level mathematics, indicating a moving boundary of capabilities for LLMs.
- This conversation highlights a shift in expectations as AI begins to tackle complex conjectures and theorems.
- AI safety discussions ignite further debates: In lengthy discussions, members wrestled with the implications of AI Safety, particularly regarding old ethics and emergent threats like deepfakes.
- Commentary likened critics to 'angry grandmas shouting at clouds', illustrating the contentious nature of the discourse.
- Google's ambitions discussed: Speculation arose around Google's potential push for AGI, fueled by their extensive cash reserves and history of AI investments.
- Doubts linger over the company's true commitment towards realizing an AGI vision, with divided opinions among members.
Unsloth AI (Daniel Han) Discord
- Feature Extraction Formats in VAE: Participants discussed preferred formats for feature extraction in Variational Autoencoders, favoring continuous latent vectors or pt files, noting the relevance of RGB inputs/outputs for models like Stable Diffusion.
- The conversation highlighted practical choices to enhance model training and effectiveness.
- Feedback Invitation for AI Game: A member invited feedback for their newly launched AI game, playable at game.text2content.online, involving crafting prompts to jailbreak an AI under time constraints.
- Concerns about login requirements were raised, but the creator clarified it was to mitigate bot activity during gameplay.
- Challenges in FP8 Training: A paper was shared discussing the instabilities faced while training large language models using FP8 precision, which uncovered new issues during extended training runs; find it here.
- Community members are keen to explore solutions to optimize stability and performance in these scenarios.
- Discount Codes for AI Summit 2024: Calls went out for discount codes to attend the NVIDIA AI Summit 2024 in Mumbai, with a student voicing interest in utilizing the opportunity to engage with fellow AI enthusiasts.
- Their background in AI and LLMs positions them to significantly benefit from participation at the summit.
- Unsloth Model Loading Troubles: A user faced an error while loading a fine-tuned model with LoRA adapters using AutoPeftModelForCausalLM, which prompted discussions about adjusting the max_seq_length (a loading sketch appears at the end of this section).
- Members provided valuable insights into model loading methods and best practices for resolution.
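On the model-loading item above: a hedged sketch using Unsloth's FastLanguageModel with an explicit max_seq_length, one of the fixes discussed; the adapter path is a placeholder:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="outputs/lora_adapter",  # placeholder path to the fine-tuned adapters
    max_seq_length=2048,  # set explicitly, per the troubleshooting advice above
    load_in_4bit=True,
)
```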
GPU MODE Discord
- Clarification on Triton Kernel Invocation Parameters: A user inquired about the function of changing num_stages in Triton kernel invocation, speculating its relationship to pipelining.
- Another member explained that pipelining optimizes loading, computing, and storing operations, as illustrated in this YouTube video (a launch-time sketch appears at the end of this section).
- CUDA Mode Event Sparks Interest: The third place prize in CUDA mode was noted for its connection to data loading projects, inspiring curiosity about progress updates.
- A member shared the no-libtorch-compile repository to aid in development without libtorch.
- IRL Keynotes Now Available for Viewing: Keynote recordings from the IRL event have been released, featuring insightful talks by notable figures such as Andrej Karpathy.
- Participants are thanked, particularly Accel, for their contribution in recording these keynotes effectively.
- Community Navigates Political Discourse: Community members expressed concerns over geopolitical stability, emphasizing a desire for coding focus amidst tense discussions.
- Debate over the appropriateness of political discussions arose, with members agreeing that limiting such topics might ensure a more comfortable environment.
- Upcoming Advancing AI Event Set for October: An Advancing AI event is scheduled in San Francisco, inviting participants to engage with ROCM developers.
- The community is encouraged to DM for registration details and discuss AI advancements during the event.
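On the Triton item at the top of this section: a sketch of where num_stages enters, as a compile option passed at kernel launch. For kernels with loops, it controls how many iterations' loads the compiler keeps in flight (software pipelining); the reduction below is a toy example:

```python
import torch
import triton
import triton.language as tl

@triton.jit
def block_sum_kernel(x_ptr, out_ptr, n, BLOCK: tl.constexpr):
    # Loop over the input in BLOCK-sized chunks; with num_stages > 1 the
    # compiler overlaps loads for later iterations with current compute.
    acc = tl.zeros((BLOCK,), dtype=tl.float32)
    for start in range(0, n, BLOCK):
        offs = start + tl.arange(0, BLOCK)
        acc += tl.load(x_ptr + offs, mask=offs < n, other=0.0)
    tl.store(out_ptr + tl.arange(0, BLOCK), acc)

x = torch.randn(1 << 16, device="cuda")
out = torch.empty(1024, device="cuda")
# num_stages is supplied at launch time, alongside the kernel arguments.
block_sum_kernel[(1,)](x, out, x.numel(), BLOCK=1024, num_stages=4)
```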
Eleuther Discord
- Bayesian Models Face Frequentist Challenges: Neural architectures predominantly utilize frequentist statistics, presenting hurdles for implementing Bayesian networks effectively in trainable models. Suggestions included collapsing probabilities into model weights, simplifying the Bayesian approach.
- Discussion highlighted alternatives to maintain practicality within Bayesian frameworks without compromising complexity.
- NYT Lawsuit Shakes AI Copyright Foundation: The community delved into the implications of OpenAI potentially paying off NYT to stave off copyright claims, sparking concerns about the broader impact on LLM liability. Arguments surfaced noting that such compensation wouldn’t necessarily confirm pervasive copyright infringement.
- Members underscored differences in motivations between profitable companies and independent creators facing copyright disputes.
- Liquid Neural Networks: A Game Changer?: Members expressed optimism about the application of liquid neural networks for fitting continuous functions, asserting lowered developmental complexity in comparison to traditional methods. They suggested that an end-to-end pipeline could enhance usability, assuming developer competence.
- The potential for these networks to ease complexity in prediction tasks fueled further discussions on their practical deployment.
- Self-Supervised Learning Expands Horizons: The concept of self-supervised learning on arbitrary embeddings was introduced, emphasizing its applicability across various model weights. This approach involves gathering linear layers from multiple models to form comprehensive datasets for better training.
- Members recognized the implications of extending SSL in enhancing model capabilities across different AI applications.
- Transfer Learning Revolutionized by T5: The effectiveness of T5 in transfer learning for NLP tasks was celebrated, with notable capabilities in modeling a variety of applications. One member humorously stated, 'God dammit T5 thought of everything,' showcasing its extensive text-to-text adaptability.
- In addition, discussions referenced new designs for deep learning optimizers, critiquing existing methods like Adam and proposing alterations for improved training stability.
OpenAI Discord
- Users Desire Higher Subscription Tiers: Members discussed the potential for a higher-priced OpenAI subscription to offer more up-to-date features and services, citing frustrations with current limitations across AI platforms.
- This change could enhance user experience with innovative capabilities.
- Feedback on New Cove Voice Model: Multiple users expressed dissatisfaction with the new Cove voice model, claiming it lacks the calming nature of the classic voice, with calls for its return.
- Community consensus leaned towards preferring a more tranquil voice as they reminisce over the classic version.
- Liquid AI's Architecture Performance: Discussion centered on a new liquid AI architecture reported to outperform traditional LLMs, which is available for testing and noted for its inference efficiency.
- Members speculated about its unique structure compared to typical transformer models.
- Issues Accessing the Playground: Concerns were raised regarding difficulties logging into the Playground, with some users suggesting incognito mode as a potential workaround.
- Reports indicated that access issues may vary geographically, particularly noted in areas like Switzerland.
- Disappearing Responses Issue in macOS App: Users reported responses disappearing in the macOS desktop app post-update, potentially due to changes in notification settings.
- Frustration was evident after these issues impacted user experience significantly during critical tasks.
Latent Space Discord
- OpenAI secures $6.6B in funding: OpenAI announced a funding round of $6.6 billion, reaching a $157 billion valuation with key partnerships, potentially involving NATO allies to advance AI technologies.
- This funding raises questions on international collaboration and the strategic direction of AI policies.
- Advanced Voice feature for all users: OpenAI is rolling out the Advanced Voice feature to all ChatGPT Enterprise and Edu users globally, offering free users an early preview.
- Some skepticism remains around the actual performance benefits of these voice applications.
- Deep dive into Multi-GPU training techniques: A detailed discussion on multi-GPU training emphasized the need for efficient checkpointing and state communication, especially with up to 10,000 GPUs in use.
- Key strategies highlighted include parallelizing network training and enhancing failure recovery processes.
- Launch of new multimodal models MM1.5: Apple introduced the MM1.5 family of multimodal language models aimed at improving OCR and multi-image reasoning, available in both dense and MoE versions.
- This launch focuses on models tailored for video processing and mobile user interface comprehension.
- Azure AI's HD Neural TTS update: Microsoft unveiled an HD version of its neural TTS on Azure AI, promising richer speech with emotional context detection.
- Features like auto-regressive transformer models are expected to enhance realism and quality in generated speech.
Stability.ai (Stable Diffusion) Discord
- Conquering ComfyUI Installation Issues: A user faced struggles with installing ComfyUI on Google Colab, particularly with the comfyui manager installation process.
- Discussants noted the importance of specific model paths and compatibility problems with Automatic1111.
- Flux Model Shows Off Fantastic Features: Users praised the impact of the Flux model in creating consistent character images and improving details like hands and feet.
- One member shared a link to a Flux lora that surprisingly boosts image quality beyond its intended use.
- Automatic1111 Installation Troubles Persist: Issues arose when installing Automatic1111 with the latest Python version, raising questions about compatibility.
- Members recommended using virtual environments or Docker containers to better manage different Python versions.
- Debating Debian-based OS Quirks: A lively conversation focused on the pros and cons of Debian-based operating systems, highlighting popular distributions like Pop and Mint.
- Users humorously shared their thoughts on retrying Pop for its unique features.
- Python Version Compatibility Chaos: Members discussed the challenges of using the latest Python version, suggesting older versions might improve compatibility with some scripts.
- One user contemplated adjusting their setup to execute scripts separately to overcome stability issues.
Perplexity AI Discord
- Hustle for Higher Rate Limits: Users discussed options for requesting rate limit increases on the API, seeking to surpass the 20 request limit.
- There is broad support for these requests, demonstrating a collective need for enhanced capabilities.
- Eager Anticipation for Llama 3.2: The upcoming release of Llama 3.2 sparked excitement among users eagerly awaiting the new features.
- One meme echoed a sense of uncertainty on the release date, with humor drawing attention to past delays.
- LiquidAI Gaining Speed Fame: LiquidAI has been praised for its speed, with a user proclaiming it is crazy fast compared to competing models.
- While speed is its strength, users have noted its inaccuracy, raising concerns about reliability.
- Chat Feature with PDF Capability Rocks: A user confirmed successfully downloading entire chats as PDFs, opening discussions on this feature's utility.
- This reflects a growing demand for better ways to save complete conversations, especially for documentation.
- Text-to-Speech's Mixed Reviews: Discussion on the text-to-speech (TTS) feature highlighted its common usage for crafting long replies, despite some pronunciation issues.
- Users find it a handy tool but see potential for refinement in its accuracy.
Cohere Discord
- Credit Card Cloud and Apple Pay Support Needed: A member expressed the need for full support for credit card cloud and Apple Pay, prompting advice to contact support@cohere.com for assistance.
- Another member offered to handle the support inquiry for smoother resolution.
- Event Notifications Arriving Late: A member reported issues with event notifications arriving after events, especially during the last Office Hours meeting.
- This has been acknowledged as a technical glitch, prompting thanks for raising the issue.
- Inquiring About MSFT Copilot Studio: A member asked about experiences with MSFT Copilot Studio and its comparative value against other solutions in the market.
- A response emphasized sensitivity regarding promotional content in discussions.
- Azure Model Refresh Hiccups: A member reported problems with refreshing models in Azure, suggesting immediate contact with both Cohere support and the Azure team.
- Another member requested the associated issue ID for better tracking in communications.
- Interest in Cohere Chat App Development: A member inquired about any upcoming Cohere chat app plans, specifically for mobile devices, and expressed enthusiasm for community promotion.
- Offering to host a webinar, they highlighted advocacy for the platform.
LlamaIndex Discord
- Cost-effective Contextual Retrieval RAG Emerges: A member shared @AnthropicAI's new RAG technique that enhances retrieval by prepending metadata to document chunks, improving performance and cost-effectiveness. This method guides the retrieval process more accurately based on the contextual position in documents.
- This innovative approach is positioned as a game-changer, aiming to streamline data handling in various applications (a sketch appears at the end of this section).
- Oracle AI Vector Search Shines in Semantic Search: Oracle AI Vector Search, a groundbreaking feature of Oracle Database, leads the charge in the semantic search domain, enabling systems to understand information based on meaning rather than keywords. This technology, when paired with the LlamaIndex framework, is positioned as a powerful solution for building sophisticated RAG pipelines.
- The synergy between Oracle and LlamaIndex enhances capabilities, pushing boundaries in AI-driven data retrieval, as detailed in this article.
- Human Feedback Fuels Multi-agent Writing: An innovative blog-writing agent utilizing multi-agent systems incorporates human in the loop feedback into TypeScript workflows, showcasing dynamic writing improvements. Viewers can see the agent in action, writing and editing in real-time, showcased in this live demonstration.
- This development highlights the potential for significantly enhancing collaborative writing processes through direct human engagement.
- Exploring LlamaIndex Infrastructure Needs: Members shared insights on hardware specifications for running LlamaIndex, noting varying needs based on model and data size. Key considerations included the necessity of GPUs for running LLM and embedding models, with recommendations for specific vector databases.
- This discussion emphasized practical aspects that influence deployment decisions, catering to diverse project requirements.
- NVIDIA's NVLM Captures Attention: The introduction of NVIDIA's NVLM 1.0, a multimodal large language model, was highlighted, emphasizing its state-of-the-art capabilities in vision-language tasks. Members speculated on potential support within LlamaIndex, particularly regarding the large GPU requirements and loading configurations.
- The discussion stirred excitement about possible integrations and performance benchmarks that could emerge from this implementation within LlamaIndex.
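On the contextual-retrieval item at the top of this section: a hedged sketch of the idea, prepending an LLM-generated, document-aware context line to each chunk before it is embedded and indexed. The generate callable is a hypothetical stand-in for any LLM call:

```python
def contextualize_chunk(document: str, chunk: str, generate) -> str:
    """Return the chunk with a short situating context prepended."""
    prompt = (
        f"<document>\n{document}\n</document>\n"
        f"<chunk>\n{chunk}\n</chunk>\n"
        "Write one sentence situating this chunk within the overall document."
    )
    context = generate(prompt)  # hypothetical LLM call
    return f"{context}\n{chunk}"  # embed/index this string, not the raw chunk
```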
Torchtune Discord
- Salman Mohammadi Nominated for Contributor Awards: Our own Salman Mohammadi got the nod for the 2024 PyTorch Contributor Awards for his valuable contributions on GitHub and active support in the Discord community.
- His work has been crucial to boosting the PyTorch ecosystem, which saw contributions from 3,500 individuals this year.
- Tokenizer Probabilities vs. One-Hot in Distillation: Members debated the effectiveness of distillation using probabilities for token training over one-hot vectors, highlighting how larger models can yield better latent representations.
- They agreed that mixing labeled and unlabeled data could 'smooth' the loss landscape, enhancing the distillation process.
- H200s Are Coming: Excitement brewed as a member announced their 8x H200 setup, boasting an impressive 4TB RAM, already en route.
- This setup is set to power their local in-house developments further, reinforcing their infrastructure.
- Local LLMs Get Priority: The chat sparked discussions on deploying local LLMs, noting that current APIs fall short for healthcare data in Europe.
- Members stressed that local infrastructure improves security for handling sensitive information.
- B100s Hardware Plans on the Horizon: Future plans to integrate B100s hardware were laid out, signaling a shift toward enhanced local processing capabilities.
- The community expressed anticipation for more resources to strengthen their developmental capabilities.
Modular (Mojo 🔥) Discord
- Mojo Literals lag behind: A member confirmed that literals don't function yet in Mojo, suggesting `msg.extend(List[UInt8](0, 0, 0, 0))` as an alternative approach.
- The community anticipates try expressions might be included in future updates.
- EC2 Type T2.Micro Woes: A user faced a JIT session error on a budget-friendly EC2 t2.micro instance due to possible memory constraints during compilation.
- Members suggested a minimum of 8GB of RAM for smoother operations, with one noting that 2GB sufficed for binary builds.
- Mojo Library Imports in Discussion: There's growing interest in Mojo's future support for import library functionality to utilize CPython libraries instead of `cpython.import_module`.
- Concerns arose about potential module name conflicts, proposing an import precedence strategy for integration.
- Memory Management Strategies Explored: A suggestion emerged to use swap memory on EC2, with a cautionary note about performance degradation due to IOPS usage.
- Another user validated successful operations with 8GB, while concerns on Mojo’s handling of memory-specific imports were also highlighted.
- Mojo's Import Behavior: It was noted that Mojo presently does not manage imports with side effects like Python, complicating compatibility.
- This led to discussions on whether Mojo's compiler should replicate all of Python's nuanced import behaviors.
OpenInterpreter Discord
- Nova LLM Launch Hits Big: Nova has launched its suite of Large Language Models, including Nova-Instant, Nova-Air, and Nova-Pro, achieving an 88.8% score on MMLU.
- Nova-Pro surpasses competitors with 97.2% on ARC-C and 96.9% on GSM8K, emphasizing its top-tier reasoning and math capabilities.
- Open Interpreter Supports Dynamic Function Calls: Members discussed if they could define custom functions in their Python projects using Open Interpreter's `interpreter.llm.supports_functions` feature.
- While Open Interpreter can create functions on the fly, strict definitions ensure accurate model calls, as clarified with reference to the OpenAI documentation.
- Realtime API Launches for Speech Technologies: A new realtime API enables speech-to-speech capabilities, enhancing responsiveness in conversational AI.
- This API aims to bolster applications with immediate responses, revolutionizing interactive communication.
- Vision Now Integrated into Fine-Tuning API: OpenAI has announced vision in the fine-tuning API, allowing models to utilize visual data during training.
- This expansion opens new pathways for multimodal AI applications, further bridging text and image processing.
- Model Distillation Enhances Efficiency: Model distillation transfers knowledge from larger models into smaller ones to boost serving efficiency.
- This method aims to minimize computational load while maintaining model accuracy, ensuring optimized outputs.
LangChain AI Discord
- LangChain waiting for GPT Realtime API: Members eagerly discussed when LangChain will support the newly announced GPT Realtime API, but no definitive timeline emerged in the chat.
- This uncertainty led to ongoing speculation within the community about potential features and implementation.
- HuggingFace now an option in LangChain: HuggingFace models can be utilized as Agents in LangChain for various tasks including chat and text generation, with a code snippet shared for implementation.
- For further insights, members were directed to LangChain's documentation and a related GitHub issue; a minimal sketch of such a setup follows this list.
- Concerns over curly braces in prompts: A member raised concerns about effectively passing strings with curly braces in chat prompt templates in LangChain, as they are interpreted as placeholders.
- Community members sought different strategies to handle this issue without altering input during processing; one common workaround, brace doubling, is sketched after this list.
- Nova LLMs outperforming competition: The launch of Nova LLMs, including Nova-Instant, Nova-Air, and Nova-Pro, shows significant performance, with Nova-Pro achieving a stellar 88.8% on MMLU.
- Nova-Pro also scored 97.2% on ARC-C and 96.9% on GSM8K, establishing itself as a frontrunner in AI interactions; learn more here.
- LumiNova elevates image generation: The new LumiNova model promises exceptional image generation capabilities, enhancing the visual creativity of AI applications.
- This advancement opens up new possibilities for interactive and engaging AI-driven experiences.
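For the HuggingFace-in-LangChain thread above, here is a minimal sketch of wrapping a Hub-hosted model as a LangChain chat model. The repo id is only an example, and the imports assume the `langchain_huggingface` partner package is installed:

```python
from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint

# Example repo id; any conversational model on the Hub could be substituted.
llm = HuggingFaceEndpoint(
    repo_id="HuggingFaceH4/zephyr-7b-beta",
    task="text-generation",
    max_new_tokens=256,
)
chat = ChatHuggingFace(llm=llm)  # wraps the endpoint with the model's chat template

print(chat.invoke("In one sentence, what does a LangChain agent do?").content)
```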
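And for the curly-brace concern, the usual escape in LangChain's f-string-style templates is doubling the braces so they are emitted literally instead of being read as placeholders; a small sketch:

```python
from langchain_core.prompts import ChatPromptTemplate

# {{ and }} render as literal braces; {player} remains a template variable.
prompt = ChatPromptTemplate.from_template(
    'Return JSON shaped like {{"name": "...", "score": 0}} for the player {player}.'
)
print(prompt.format_messages(player="Ada"))
```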
OpenAccess AI Collective (axolotl) Discord
- Qwen 2.5 impresses in deployment: A member successfully deployed Qwen 2.5 34B and reported its performance as insanely good, rivaling GPT-4 Turbo.
- The discussion buzzed with specifics on deployment and vision support, emphasizing the rapid evolution of model capabilities.
- Exploration of small model capabilities: Members marveled at the impressive advancements in small models and debated their potential limits.
- "How far exactly can we push this? What are the actual limits?" The conversation reflects a growing interest in optimizing smaller architectures.
- Clarifications on hf_mlflow_log_artifacts: A member asked if setting hf_mlflow_log_artifacts to true would save model checkpoints to mlflow, indicative of integration concerns.
- This highlights the crucial need for robust logging mechanisms in model training workflows.
- Custom instruct format in sharegpt discussed: Instructions on defining a custom instruct format for datasets in sharegpt were shared, stressing the usage of YAML.
- Essential steps were outlined, including custom prompts and ensuring JSONL format compatibility for successful outcomes; a config sketch follows this list.
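As a rough sketch of the two axolotl config points above — the sharegpt dataset format and the MLflow artifact flag — the fragment below follows axolotl's documented conventions, but the dataset path and conversation template name are placeholders and field names should be verified against your installed version:

```yaml
# Hypothetical axolotl config fragment
datasets:
  - path: ./data/conversations.jsonl  # JSONL file, one sharegpt-style conversation per line
    type: sharegpt
    conversation: chatml              # assumed conversation template name

hf_mlflow_log_artifacts: true         # if honored, checkpoints are logged to MLflow as artifacts
```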
tinygrad (George Hotz) Discord
- Tiny Box Unboxing Wins Hearts: A member unboxed the tiny box from Proxy, highlighting the great packaging and wood base as standout features.
- Though worried about the NY→AU shipment, they praised the packaging effort that got the box there safely.
- Debating the Bugfix PR Approach: A call for review on this bugfix PR was made, addressing issues with saving and loading tensors twice.
- The PR tackles #6294, revealing that disk devices keep unlinked files without creating new ones, which remains a crucial development point.
- Tinygrad Code Boosts Coding Skills: Engaging with the tinygrad codebase has shown to enhance coding skills in a member's day job, proving the value of open-source experience.
- It's making my day job coding better as a side effect, they shared, reflecting on positive coding impact.
- C Interoperability is a Win: Members discussed how Python's productivity is complemented by its C interoperability, allowing smooth foreign-function calls that speed up low-level operations; a minimal ctypes sketch follows this list.
- Despite some limitations with structs, the consensus is that the benefits for rapid iteration remain substantial.
- UOp vs UOP Optimization Frustrations: A member expressed challenges faced with optimizing UOp vs UOP pool, citing individual object references complicating the process.
- They suggested a more efficient storage class that utilizes integer handles to manage object references better.
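The C-interop point is easiest to see with Python's built-in ctypes, which is also the mechanism tinygrad uses to talk to native libraries. A tiny self-contained sketch calling into libc on a Unix-like system (library lookup is assumed to succeed on your platform):

```python
import ctypes
import ctypes.util

# Resolve and load the C standard library.
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Declaring argument/return types keeps the foreign call well-typed.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

print(libc.strlen(b"tinygrad"))  # -> 8, computed by C code, no wrapper module needed
```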
LAION Discord
- Strong Anti-Spam Sentiment: A member expressed strong dislike for spam, emphasizing frustrations with unwanted messages in the community.
- This reflects a common challenge where members urge for better moderation to control spam's impact on communication.
- Sci Scope Newsletter Launch Announced: The personalized newsletter from Sci Scope is now available, offering tailored updates on preferred research areas and new papers weekly.
- Never miss out on research relevant to your work again! Users can try it out now for a stress-free way to keep up with advancements in AI.
- Weekly AI Research Summaries for Busy Professionals: The newsletter will scan new ArXiv papers and deliver concise summaries, aimed at saving subscribers hours of work each week.
- This service promises to simplify the task of selecting pertinent reading material with a weekly high-level summary.
- Exclusive Offer for New Users: New users can sign up for a free 1-month trial that includes access to custom queries and a more relevant experience.
- This initiative enhances engagement, making it easier for users to keep pace with the rapidly evolving field of AI.
DSPy Discord
- Personalized Newsletter from Sci Scope: Sci Scope has launched a personalized newsletter that delivers weekly summaries of new papers tailored to individual interests, helping users stay updated without the hassle.
- This service scans for new ArXiv papers based on user preferences; it begins with a free 1-month trial to attract new users.
- Query on Code Similarity Search: A member is exploring options for code similarity and considering Colbert for outputting relevant code documents from snippets, questioning its effectiveness without finetuning.
- They are also seeking alternative methods for code search, highlighting the community's collaboration on effective approaches.
LLM Agents (Berkeley MOOC) Discord
- Lab Assignments Pushed Back: A member inquired about the lab assignments scheduled for release today, leading to confirmation that staff requires another week to prepare them. Updates will be available on the course page at llmagents-learning.org.
- The delay has sparked concerns, with participants expressing frustration over the lack of communication regarding release schedules and updates.
- Communication Gaps Highlighted: Concerns emerged over insufficient updates on lab releases, with one member unable to find relevant emails or announcements. This situation underlines the need for better course communication amidst participant expectations.
- Participants await important information about course progress, emphasizing the urgency for timely announcements from course staff.
Mozilla AI Discord
- Establishing ML Paper Reading Group: A member proposed to kickstart an ML paper reading group aimed at discussing recent research, enhancing community interaction.
- This initiative seeks to boost collective knowledge sharing among engineers interested in the latest developments in machine learning.
- Tips for Publishing Local LLM Apps: Community members expressed gratitude for insights provided on effectively publishing local LLM-based apps to the app store.
- These tips are viewed as essential for those navigating the complexities of app publishing.
- Community Job Board Proposal Gains Interest: A discussion emerged regarding the creation of a job board to facilitate community job postings.
- Initiated by a member, this idea aims to connect talent with job opportunities within the engineering sector.
- Lumigator Gets Official Spotlight: The community introduced Lumigator in an official post, showcasing its features and capabilities.
- This introduction reinforces the community’s commitment to highlighting noteworthy projects relevant to AI engineers.
- Exciting Upcoming Events on Tech Innovations: Several upcoming events were highlighted, including discussions on Hybrid Search concentrating on search technologies.
- Other sessions, like Data Pipelines for FineTuning, promise to further advance engineering knowledge and skills.
MLOps @Chipro Discord
- Nova Models outshine competitors: Introducing Nova: the next generation of large language models that beat GPT-4 and Claude-3.5 across various benchmarks, with Nova-Pro leading at 88.8% on MMLU.
- Nova-Air excels across diverse applications while Nova-Instant offers speedy, cost-effective solutions.
- Benchmarking Excellence across Nova Models: Nova-Pro shines with impressive scores: 97.2% on ARC-C for reasoning, 96.9% on GSM8K for mathematics, and 91.8% on HumanEval for coding.
- These benchmarks solidify Nova's position as a top contender in the AI field, showcasing its extraordinary capabilities.
- LumiNova revolutionizes image generation: The newly introduced LumiNova sets a high bar for image generation, promising unmatched quality and diversity in visuals.
- This model complements the Nova suite by providing users with advanced tools for creating stunning visuals effortlessly.
- Future developments with Nova-Focus: The development team is exploring Nova-Focus and enhanced Chain-of-Thought capabilities to further push the boundaries of AI.
- These innovations aim to refine and expand the potential applications of the Nova models in both reasoning and visual generation.
The Alignment Lab AI Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The LLM Finetuning (Hamel + Dan) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The DiscoResearch Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The Gorilla LLM (Berkeley Function Calling) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The AI21 Labs (Jamba) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
PART 2: Detailed by-Channel summaries and links
aider (Paul Gauthier) ▷ #general (198 messages🔥🔥):
Prompt Caching Support
AI Models Performance Comparison
YAML Parsing Issues
File Editing with Aider
Error Handling in Aider
- Overview of Prompt Caching Support: A summary of various AI models' prompt caching capabilities was shared, including their costs and mechanisms, highlighting OpenAI's and Anthropic’s caching strategies.
- Users discussed the implications of cache misses and cost efficiencies among different models such as DeepSeek and Gemini.
- Comparison of AI Models for Code Editing: Reaper_of_fire noted that while Sonnet offers better performance overall, it is more expensive compared to o1-preview, which can sometimes yield greater token reimbursement.
- There were suggestions to benchmark Gemini as an architect model with Sonnet as the editor, to see whether that improves editing outcomes compared to using Sonnet for both roles.
- Challenges with YAML Parsing: A discussion arose surrounding YAML's parsing quirks, particularly the accidental conversion of keys like 'yes' into booleans, complicating configuration management.
- Users shared insights on avoiding issues by using quoted strings to prevent unintended parsing, emphasizing this longstanding drawback of YAML; a short demonstration follows this list.
- File Handling in Aider: Concerns were raised about the behavior of Aider's file management capabilities, specifically the `/read-only` command not autocompleting file paths as expected.
- Users discussed whether recent changes to the functionality might have caused this issue, particularly for pre-typed file paths.
- Implications of AI Modalities: Users expressed varying preferences for AI model modalities, noting that while O1-preview offers benefits, users found that Sonnet was better at effectively handling prompts in specific contexts.
- The community shared opinions about the need for different formats in AI interactions, and whether models should adopt adjustments for better output results.
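To make the Norway problem concrete, here is a quick PyYAML demonstration; PyYAML resolves YAML 1.1 scalars, so unquoted yes/no/on/off become booleans:

```python
import yaml  # pip install pyyaml

print(yaml.safe_load("country: no"))    # {'country': False} -- 'no' parsed as a boolean
print(yaml.safe_load("country: 'no'"))  # {'country': 'no'}  -- quoting preserves the string
```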
- Advanced model settings: Configuring advanced settings for LLMs.
- YAML: The Norway Problem: Earlier this week, Haroen Viaene posted this tweet about YAML: worst part of yaml: https://yaml.org/type/bool.html — Haroen Viaene (@haroenv) January 10, 2022 The linked-to page contains the doc...
- big-AGI/src/modules/3rdparty/THIRD_PARTY_NOTICES.md at big-agi-2 · enricoros/big-AGI: Generative AI suite powered by state-of-the-art models and providing advanced AI/AGI functions. It features AI personas, AGI functions, multi-model chats, text-to-image, voice, response streaming, ...
- Options reference: Details about all of aider’s settings.
- Add experimental vector search. · Shakahs/aider@4701206: no description found
- aider/aider/repo.py at 72cb5db53066a1ca878412b866e87b916529b68e · paul-gauthier/aider: aider is AI pair programming in your terminal. Contribute to paul-gauthier/aider development by creating an account on GitHub.
aider (Paul Gauthier) ▷ #questions-and-tips (70 messages🔥🔥):
Architect Mode Usage
Cache Management in Aider
Setting up Aider with Local Models
Norton Antivirus Issues
Obsidian and LLM Integrations
- Clarification on Architect Mode Usage: Users discussed how to effectively utilize the architect mode in Aider without triggering file edits, emphasizing the need for clear instructions.
- One user noted variations in prompts still led to unexpected code outputs, paired with shared experiences of the model starting responses from scratch.
- Understanding Cache Management in Aider: The conversation highlighted that editing files invalidates cache, suggesting that maintaining read-only files during long interactions could enhance prompt caching efficiency.
- Participants acknowledged that frequent changes could negate cost savings associated with caching, indicating a potential misunderstanding in optimal usage.
- Setting Up Aider with Local Models: A user inquired about connecting Aider to dual locally-hosted Ollama instances, prompting guidance on LiteLLM's single endpoint support.
- It was suggested that advanced setups might require exploring OpenAI API proxies for those needing multiple instance connections.
- Norton Antivirus Issues: A user reported difficulties with Norton blocking files and directories, leading to complications when running scripts after relocating affected files.
- Responses included suggestions for configuring Git settings to mark directories as safe to bypass ownership issues.
- Integrating LLMs with Obsidian: Users shared their setups of integrating LLMs within their workflows, particularly highlighting a lack of extensive plugin usage beyond basic automation.
- Chat interfaces were preferred for LLM interactions, while tools like `aichat` were mentioned as noteworthy for terminal use.
- OpenAI: aider is AI pair programming in your terminal
- Creating and highlighting code blocks - GitHub Docs: no description found
- Ollama: aider is AI pair programming in your terminal
- Chat modes: Using the chat, ask and help chat modes.
aider (Paul Gauthier) ▷ #links (3 messages):
o1-engineer tool
screenpipe recording
- Efficient Project Management with o1-engineer: o1-engineer is a command-line tool that assists developers in managing and interacting with their projects efficiently by leveraging OpenAI's API for functionalities like code generation and project planning.
- This tool aims to streamline the development workflow, making project management more effective for developers.
- 24/7 Local AI Recording with screenpipe: screenpipe offers local AI screen and microphone recording, allowing the construction of AI applications with full context, thus presenting a secure alternative to services like Rewind.ai.
- It works with Ollama and prioritizes user data ownership, developed in Rust for performance.
- GitHub - mediar-ai/screenpipe: 24/7 local AI screen & mic recording. Build AI apps that have the full context. Works with Ollama. Alternative to Rewind.ai. Open. Secure. You own your data. Rust.: 24/7 local AI screen & mic recording. Build AI apps that have the full context. Works with Ollama. Alternative to Rewind.ai. Open. Secure. You own your data. Rust. - mediar-ai/screenpipe
- GitHub - Doriandarko/o1-engineer: o1-engineer is a command-line tool designed to assist developers in managing and interacting with their projects efficiently. Leveraging the power of OpenAI's API, this tool provides functionalities such as code generation, file editing, and project planning to streamline your development workflow.: o1-engineer is a command-line tool designed to assist developers in managing and interacting with their projects efficiently. Leveraging the power of OpenAI's API, this tool provides functiona...
OpenRouter (Alex Atallah) ▷ #announcements (2 messages):
Llama 3.1 and 3.2 Endpoints
Gemini Token Standardization
Cohere Model Discounts
Chatroom Upgrades
- Samba Nova launches free Llama endpoints: In collaboration with Samba Nova, five free bf16 endpoints for Llama 3.1 and 3.2 are now available on their new inference chips, aimed at measuring performance.
- The best-in-class throughput for the 405B Instruct model is already making waves as a promising addition for Nitro if performance lives up to expectations.
- Gemini Models undergo token standardization: The Gemini and PaLM models have now been standardized to use the same token sizes, which will result in approximately 2x higher prices with inputs being 25% shorter.
- Costs are expected to reduce by 50% despite the changes, reassuring users of overall affordability here.
- Cohere's new discount and tool calling feature: Cohere models are now offered on OpenRouter with a 5% discount and have been upgraded to their v2 API, complete with tool calling support.
- Users can now access these tools more efficiently, aiming to enhance overall user experience.
- Chatroom landing page gets an upgrade: The new Chatroom landing page enhances model comparisons and includes intelligence tests for evaluating model performance on openrouter.ai/chat.
- Users can now benefit from an improved LaTeX formatter and a better code formatter to enhance their coding experience.
- Llama 3.1 8B Instruct (free) - API, Providers, Stats: Meta's latest class of model (Llama 3.1) launched with a variety of sizes & flavors. Run Llama 3.1 8B Instruct (free) with API
- Llama 3.1 70B Instruct (free) - API, Providers, Stats: Meta's latest class of model (Llama 3.1) launched with a variety of sizes & flavors. Run Llama 3.1 70B Instruct (free) with API
- Llama 3.1 405B Instruct (free) - API, Providers, Stats: The highly anticipated 400B class of Llama3 is here! Clocking in at 128k context with impressive eval scores, the Meta AI team continues to push the frontier of open-source LLMs. Meta's latest c...
- Llama 3.2 3B Instruct (free) - API, Providers, Stats: Llama 3.2 3B is a 3-billion-parameter multilingual large language model, optimized for advanced natural language processing tasks like dialogue generation, reasoning, and summarization. Run Llama 3.2 ...
- Llama 3.2 1B Instruct (free) - API, Providers, Stats: Llama 3.2 1B is a 1-billion-parameter language model focused on efficiently performing natural language tasks, such as summarization, dialogue, and multilingual text analysis. Run Llama 3.2 1B Instruc...
OpenRouter (Alex Atallah) ▷ #general (244 messages🔥🔥):
Realtime API Updates
OpenRouter Model Performance
File Upload Limitations
OpenAI Caching
Free Credit Programs
- Discussion on Realtime API: Users inquired about the potential support of the new Realtime API by OpenRouter, highlighting its current limitations regarding audio input and output.
- There is ongoing interest in integrating new functionalities, but no definite timeline has been established.
- OpenRouter Model Performance Concerns: Members expressed concerns about model performance and availability, particularly with regard to loading different providers under varied circumstances.
- Specific instances of encountering greyed-out providers and changes in charging rates were discussed, indicating the need for users to stay aware of price changes.
- Limitations on File Uploads: Questions were raised about file upload capabilities within OpenRouter, with users reporting issues related to mobile devices and unsupported file types.
- Suggestions were made to rename files or input them as text, but the underlying HTML limitations were noted as a problem.
- Insights on OpenAI Caching: A comprehensive breakdown of various commercial context caching implementations was shared, explaining how they function and their respective discount rates.
- This was seen as valuable information for users looking to understand cost implications across different models.
- Free Credit Programs Inquiry: Users inquired about available free credit programs, expressing challenges in working with limited resources.
- It was clarified that while no universal credit program exists, research-related credits may be issued based on user activity.
- Tweet from Jordi Bruin (@jordibruin): MacWhisper 10.0 is available as a free update now! - Supports the new Whisper Turbo model for up to 21x realtime at super high accuracy - Support for local AI models using Ollama - Support for custom...
- Nova: Discover Nova, the advanced AI solutions from Rubik's AI. Experience intelligent reasoning, mathematics, coding, and image generation.
- UGI Leaderboard - a Hugging Face Space by DontPlanToEnd: no description found
- Tweet from Rubiks AI (@RubiksAI): 🚀 Introducing Nova: The Next Generation of LLMs by Nova! 🌟 We're thrilled to announce the launch of our latest suite of Large Language Models: Nova-Instant, Nova-Air, and Nova-Pro. Each designe...
- Privacy | OpenRouter: Manage your privacy settings
- Models: 'uncensored' | OpenRouter: Browse models on OpenRouter
- Llama 3.2 3B Instruct - API, Providers, Stats: Llama 3.2 3B is a 3-billion-parameter multilingual large language model, optimized for advanced natural language processing tasks like dialogue generation, reasoning, and summarization. Run Llama 3.2 ...
- Llama 3.1 405B: API Provider Performance Benchmarking & Price Analysis | Artificial Analysis: Analysis of API providers for Llama 3.1 Instruct 405B across performance metrics including latency (time to first token), output speed (output tokens per second), price and others. API providers bench...
- Provider Routing | OpenRouter: Route requests across multiple providers
- GitHub - bobcoi03/opencharacter: Open Source Alternative to Character.AI - Create your own characters uncensored no filters: Open Source Alternative to Character.AI - Create your own characters uncensored no filters - bobcoi03/opencharacter
HuggingFace ▷ #announcements (1 messages):
Llama 3.2 Release
Transformers v4.45.0
Whisper Turbo Integration
GGUF Model Deployment
HuggingChat for macOS
- Llama 3.2 Launches with Exciting Features: Llama 3.2 has officially launched, enabling users to run it locally with vision fine-tuning available through a new recipe. Now, you can easily post-train Llama 3.2 Vision on your own dataset in just a few lines of code!
- With support for 11B and 90B models in SFTTrainer, fine-tuning capabilities are enhanced for custom tasks, allowing models to see and follow user instructions.
- Transformers v4.45.0 Introduces Simplified Tool Building: The release of transformers v4.45.0 includes an easier method to build tools from a plain function with type hints and a `@tool` decorator, streamlining tool creation and making the process more intuitive.
- Feedback is welcomed as the community explores this lightning-fast method to enhance tool-building efficiency in their projects; a minimal sketch appears after this list.
- Whisper Turbo Now Supported in Transformers: Whisper Turbo integration has been announced, broadening the functionality of the Transformers library. This release furthers the capabilities of speech recognition within the existing framework.
- Users are encouraged to explore the new integration and check it out in their applications; a short loading sketch also appears after this list.
- Deploy GGUF Models Easily on Inference Endpoints: Hugging Face now allows direct deployment of GGUF models onto its Inference Endpoints, making it simpler to serve models to end-users. This feature reduces complexity for model authors aiming for streamlined deployment.
- With this update, model accessibility and usability on the platform have been significantly enhanced.
- HuggingChat Beta Launches for macOS: HuggingChat is now in beta for macOS, providing easy access to top open-source models. Users can utilize the service with just an internet connection and a Hugging Face Hub account.
- Feedback on the beta version is encouraged to ensure optimal user experience.
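For the v4.45.0 tool-building item above, here is a minimal sketch of the decorator pattern; the import path and the typed-function-plus-docstring format follow my reading of the transformers agents docs, so treat this as illustrative:

```python
from transformers import tool  # agents tooling; import path assumed for v4.45+

@tool
def word_count(text: str) -> int:
    """Counts the number of words in a piece of text.

    Args:
        text: The text whose words should be counted.
    """
    return len(text.split())

# The decorator turns the typed, documented function into a Tool an agent can call;
# the resulting tool should also remain directly callable.
print(word_count("tools from plain functions"))
```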
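And for the Whisper Turbo item, loading it through the standard ASR pipeline should look roughly like this; the model id is the public openai/whisper-large-v3-turbo checkpoint, and the audio file path is a placeholder:

```python
from transformers import pipeline

# Whisper Turbo served through the automatic-speech-recognition pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-large-v3-turbo",
)
print(asr("meeting.wav")["text"])  # placeholder audio path
```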
- Tweet from merve (@mervenoyann): ICYMI I contributed a Llama 3.2 Vision fine-tuning recipe to huggingface-llama-recipes 🦙
- Tweet from Lewis Tunstall (@_lewtun): Anybody can now post-train Llama 3.2 Vision on their own dataset in just a few lines of code with TRL 🚀! We've just added support for the 11B and 90B models to the SFTTrainer, so you can fine-...
- Tweet from Xenova (@xenovacom): Llama 3.2 running 100% locally in your browser on WebGPU! 🦙 Up to 85 tokens per second! ⚡️ Powered by 🤗 Transformers.js and ONNX Runtime Web. No installation required... just visit a website! Chec...
- Tweet from Vaibhav (VB) Srivastav (@reach_vb): Run Llama 3.2 1B & 3B in a FREE Google Colab! 🔥 Powered by Transformers ⚡
- Tweet from abhishek (@abhi1thakur): Here's how you can easily fine-tune latest llama 3.2 (1b and 3b) locally and on cloud:
- Tweet from Aymeric (@AymericRoucher): Transformers v4.45.0 released: includes a lightning-fast method to build tools! ⚡️ During user research with colleagues @MoritzLaurer and Joffrey Thomas, we discovered that the class definition curr...
- Reddit - Dive into anything: no description found
- Tweet from Alvaro Bartolome (@alvarobartt): 🤗 HuggingChat is now available in beta for macOS users! Now the latest top open-source models are one click-away for macOS users; you only need an internet connection and a Hugging Face Hub account....
- Tweet from lunarflu (@lunarflu1): New metadata is available for @huggingface model authors: `new_version`. If a model has newer versions defined, the model page will show a banner linking to the latest version!
HuggingFace ▷ #general (164 messages🔥🔥):
Model Performance Comparison
Fine-Tuning Challenges
Innovative LLM Projects
Hugging Face Contributions
Community Queries
- Performance of Llama 3.2 Models: The community discussed the capabilities of Llama 3.2 models, particularly highlighting that the 1B model is considered the best in its class and the 3B model excels as well.
- There were opinions shared about how smaller models are limited by their data, impacting their ability to 'think' as well as larger models.
- Challenges with Llama 3.2 Access: Users reported experiencing timeouts when trying to use the Llama 3.2 1B and 3B models, with several acknowledging they had access yet encountered runtime errors.
- One user mentioned using the x-wait-for-model flag but still faced an operation timeout.
- Eyeballing Alternatives in LLMs: A user initiated a call for collaboration on a proof-of-concept project aimed at reducing computational costs in LLMs by exploring alternative neural network architectures.
- This sparked interest but was met with inquiries about the specifics of the project.
- Recent Developments in Hugging Face: There was excitement around new features introduced in Hugging Face's Transformers, especially regarding agents and the new `@tool` decorator.
- The community expressed appreciation for the updates and engaged in discussions about its application.
- Community Engagement and Queries: Members actively engaged with each other, asking questions about models, project ideas, and sharing resources related to Hugging Face and LLMs.
- Discussions included performance comparisons, project collaborations, and tips for using models effectively.
- Tweet from Rubiks AI (@RubiksAI): 🚀 Introducing Nova: The Next Generation of LLMs by Nova! 🌟 We're thrilled to announce the launch of our latest suite of Large Language Models: Nova-Instant, Nova-Air, and Nova-Pro. Each designe...
- Tweet from Ann Huang (@AnnInTweetD): It's the end of an era. We just shut down our @xetdata servers after 658 days in production. 🪦 What did we learn? https://xethub.com/blog/shutting-down-xethub-learnings-and-takeaways
- A Short Summary of Chinese AI Global Expansion : no description found
- Agents and tools: no description found
- Sure Nodding GIF - Sure Nodding Disbelief - Discover & Share GIFs: Click to view the GIF
HuggingFace ▷ #today-im-learning (5 messages):
Dart and Flutter for Mobile Games
History of Transcendental Functions
Explainable AI Methods for CV
Object Detection and Segmentation
Understanding τ and Its Mathematical Significance
- Dart and Flutter Win Over Kotlin: A member shared their positive experience using Dart and Flutter to create mobile games, finding them easier and more enjoyable compared to Kotlin and Android Studio.
- "Great tools to learn!" highlights the preference for these frameworks among developers.
- Exploring the Mystery of τ: A user discussed the historical context of Transcendental Functions, specifically focusing on the symbol τ (the ratio of circumference to radius) and its relationship with π.
- "It’s a magic number... because pi is transcendental and irrational, and that tau is irrational" emphasizes the distinct characteristics of these constants.
- Diving into Explainable AI for Computer Vision: A member expressed interest in learning about explainable AI methods pertinent to computer vision, particularly in segmentation and object detection.
- The goal is to build a library that includes various methods compatible with Hugging Face models.
HuggingFace ▷ #cool-finds (2 messages):
Open Source Contributions
Medical AI Updates
- Help Needed with FSDP and Accelerate Integration: A member shared a post urging contributors to assist with issues related to `fsdp`, `accelerate`, and `training` within the Transformers GitHub repository.
- They emphasized the importance of community help, noting, 'this issue tracker needs you!'.
- Engaging Medical AI Podcast Launch: An announcement was made about a new daily video podcast focused on making Medical AI/LLM updates more engaging for audiences, which can be enjoyed anytime.
- The first episode is available on YouTube, promising a fresh format for delivering important medical insights.
Link mentioned: Tweet from Arthur Zucker (@art_zucker): For anyone wanting to dive a bit into fsdp, accelerate, training and their integration with transformers, this issue tracker needs you! 🫡🤗 https://github.com/huggingface/transformers/issues...
HuggingFace ▷ #i-made-this (7 messages):
NotebookLM Features
XP System and Badges
- NotebookLM shines in multi-modal tasks: A member praised NotebookLM as a true end-to-end multi-modal RAG app, particularly useful for studying financial reports, taking notes, and engaging in chat.
- They created a YouTube video illustrating its functionality, including a 'Deep dive' podcast on the Roman Empire.
- Suggestions for enhancing XP systems: A member inquired about borrowing perks from another platform's XP system while sparking discussion about potential features.
- Suggestions included implementing a badge system and allowing users to spend their XP, which would enhance engagement and site traffic.
- Engagement through competitiveness: The desire for an XP system at Hugging Face was echoed, highlighting its ability to promote competitiveness.
- Members agreed that this could encourage more interaction and increase site traffic and retention.
HuggingFace ▷ #computer-vision (4 messages):
trocr-large-handwriting
Self-driving car models
Fine-tuning models
- Exploring 'trocr-large-handwriting' for better results: A member suggested that if the dataset closely resembles handwriting, using 'trocr-large-handwriting' could be more effective; a minimal inference sketch follows this list.
- Another member pointed out the similarity between handwritten paragraphs and 11 alphanumeric characters, implying that fine-tuning on a character dataset might be worth considering.
- Choosing between PyTorch and TensorFlow for self-driving cars: One member raised a question regarding whether to learn PyTorch or TensorFlow for developing a self-driving car.
- This highlights the importance of framework choice in approaching complex machine learning projects.
- Fine-tuning as a method for targeted learning: A member shared insights on fine-tuning a pre-trained model, emphasizing its value in leveraging existing knowledge for specific domains, such as medical knowledge in a VQA model.
- They described a process where models are pre-trained before undergoing instruction tuning and specialized context training, aligning with the vision modality.
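As a rough sketch of trying the suggested checkpoint: the Hub id below, microsoft/trocr-large-handwritten, is my assumption for the model the member meant, and the image path is a placeholder:

```python
from transformers import pipeline

# TrOCR is a vision-encoder/text-decoder model served by the image-to-text pipeline.
ocr = pipeline("image-to-text", model="microsoft/trocr-large-handwritten")
print(ocr("sample_label.png"))  # placeholder image of handwritten or alphanumeric text
```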
HuggingFace ▷ #NLP (10 messages🔥):
Finetuning Mistral 7B
Pretraining Misunderstanding
Request for Benchmarks
Moderator Request
General NLP Introduction
- Finetuning Mistral 7B for Story Generation: A member expressed interest in finetuning Mistral 7B for story generation but sought guidance on how to pretrain data.
- They were encouraged that pretraining is generally not necessary since Mistral is already pretrained on a large corpus; the focus should instead be task-specific fine-tuning, as in the sketch after this list.
- Clarifying Pretraining Concepts: Another member pointed out that the term 'Pretraining Data' may be a misunderstanding, clarifying that actual pretraining involves training a model on a large corpus.
- They elaborated that fine-tuning utilizes the pretrained model to adapt to specific tasks without needing to relearn basic language structures.
- Seeking Benchmarks for Instruction Tuned Models: A member searched for consistent benchmarks for recent instruction-tuned models like TruthfulQA, TriviaQA, and BoolQ.
- They specifically requested updated benchmarks for newer models such as llama3.2 and qwen 2.5, inviting others to @ mention them with any information.
- Moderator Request for Thread: A member asked someone to moderate a thread, requesting that the user step in.
- This inquiry led to some confusion, with another member questioning the meaning of the request.
- Introduction to NLP: A newcomer introduced themselves and shared their initial journey into the NLP world, mentioning their learning about Hugging Face models.
- Their enthusiasm to learn more about fine-tuning on personal datasets was evident, as they sought help from the community.
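To ground the fine-tuning advice above, here is a minimal TRL sketch. The dataset path and field layout are placeholders, and the SFTConfig fields assume a recent TRL release:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset: JSONL with a "text" field holding one story per line.
dataset = load_dataset("json", data_files="stories.jsonl", split="train")

trainer = SFTTrainer(
    model="mistralai/Mistral-7B-v0.1",  # already pretrained; no pretraining step needed
    train_dataset=dataset,
    args=SFTConfig(output_dir="mistral-7b-stories", max_seq_length=1024),
)
trainer.train()
```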
LM Studio ▷ #general (125 messages🔥🔥):
LM Studio Bugs
Llama 3.1 Issues
Langflow Integration
GPU Utilization Settings
Model Compatibility with LM Studio
- LM Studio Launch Issues: Several users reported difficulties starting LM Studio post-update, especially with desktop shortcuts and direct executable files from the app directory.
- One user found a workaround by launching the app from the updated version directory, indicating potential issues with old installation files.
- Challenges with Llama 3.1 Compatibility: A user encountered an error message when trying to load the Llama 3.1 model in LM Studio, indicating a mismatch with the software version.
- Updating to version 0.3.3 was recommended as the current version supports the Llama model.
- Integrating LM Studio with Langflow: One user shared their successful experience connecting LM Studio with Langflow by modifying the base URL of OpenAI components.
- They found resources helpful for learning Langflow basics, suggesting the integration might streamline their workflow; a minimal client sketch against LM Studio's local server follows this list.
- Understanding GPU Utilization in LM Studio: Users discussed settings for GPU utilization in LM Studio, with debate over what 'offload' means regarding resource management between CPU and GPU.
- Clarifications were requested regarding optimal settings for utilizing GPU versus CPU for specific tasks, especially for autocomplete models.
- Setting Model Parameters in LM Studio: A user sought advice on parameters in LM Studio, specifically whether to handle 'max layers' settings within the application or allow Windows to manage GPU memory.
- Confusion arose around earlier responses about the impact of offloading and whether they were referencing the correct configurations.
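The Langflow trick above works because LM Studio exposes an OpenAI-compatible server (by default at http://localhost:1234/v1), so any OpenAI client can be pointed at it. A minimal sketch, with the model name a placeholder for whatever is loaded locally:

```python
from openai import OpenAI

# LM Studio's local server speaks the OpenAI API; the key is unused but required by the client.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

resp = client.chat.completions.create(
    model="local-model",  # placeholder; use the identifier shown in LM Studio
    messages=[{"role": "user", "content": "Say hello from a local model."}],
)
print(resp.choices[0].message.content)
```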
- Building a fully local LLM voice assistant to control my smart home: I’ve had my days with Siri and Google Assistant. While they have the ability to control your devices, they cannot be customized and inherently rely on cloud services. In hopes of learning someth...
- Aleksey Tsepelev / LM-Studio docker-compose · GitLab: GitLab.com
- LM Studio - Experiment with local LLMs: Run Llama, Mistral, Phi-3 locally on your computer.
- Sideload models - Advanced | LM Studio Docs: Use model files you've downloaded outside of LM Studio
- Config Presets - Configuration | LM Studio Docs: Save your system prompts and other parameters as Presets for easy reuse across chats.
LM Studio ▷ #hardware-discussion (17 messages🔥):
GPU Performance Comparison
Thread Count Impact on Inference
CPU Utilization Monitoring
Llama 3.1 Performance Metrics
High-End GPU Setup
- Specs of 4080S vs 4060ti: User questioned if the 4080S has the same 16GB memory as the 4060ti, noting that higher cores and bandwidth justify higher costs.
- It's emphasized that more cores lead to better performance, though actual data is preferred over speculation.
- Llama 3.1 Inference Data Collection: Testing with different thread configurations resulted in a slight speed enhancement from 1.49 tok/sec (1 thread) to 2.56 tok/sec (4 threads), raising concerns about potential bottlenecks.
- Users acknowledged the need for better data analysis tools as performance didn't scale linearly with thread count.
- Monitoring CPU Utilization: One user is exploring ways to extract per-core utilization metrics due to the current monitoring setup providing limited granularity.
- They mentioned using NetData for monitoring, though it has a complex interface that doesn't fully meet their needs.
- Highlighting High-End GPU Setups: Discussion included a user boasting about running a setup with 7 RTX 4090 GPUs, a power-intensive configuration estimated at 3000 watts.
- Another user humorously noted the dramatic effect of the system's power consumption during operation.
- Xeon CPU Performance Observations: Users detailed their experiences running Meta-Llama-3.1 on a Xeon CPU E3-1245, noting high core usage and a variation in token generation rates.
- The results demonstrated that CPU-only inference can lead to full-core utilization, indicating potential performance limits on certain configurations.
Nous Research AI ▷ #general (118 messages🔥🔥):
Rapid Model Quantization
Audio Token Costs
Novel AI Research on Alcohol Effects
DisTrO's Reliability Against Bad Data
AI Summit 2024 Discounts
- Rapid Model Quantization Amazes Users: Users expressed surprise at the blazingly fast quantization time for a 3b model, taking less than a minute to process.
- One user humorously noted the economic comparison to minimum wage work, highlighting the potential efficiency gains of AI over human labor.
- Audio Token Pricing Raises Concerns: Discussion arose around the costs of audio tokens, with $14 per hour for audio output deemed expensive compared to human agents.
- Participants noted that while AI offers continuous availability, the pricing structure may not significantly undercut traditional support roles.
- Ballmer Peak Study Captivates Members: A shared research paper on the Ballmer Peak concluded that a small amount of alcohol enhances programming ability, challenging traditional beliefs.
- Members humorously engaged about their personal experiences in seeking the 'perfect dosage' for productivity.
- DisTrO's Ability to Handle Bad Actors: Participants discussed DisTrO's verification layer, which can detect and drop bad actors during the training process.
- Clarification was provided that while the training loop does not inherently manage untrustworthy nodes, the added layer offers some protection.
- Seeking Discounts for AI Summit 2024: A student expressed the desire for discount codes for passes to the AI Summit 2024 in Mumbai due to financial constraints.
- The student shared their background in LLMs and AI, emphasizing their commitment to making the summit a worthwhile investment.
- Tweet from Kuldar ⟣ (@kkuldar): Someone gave NotebookLM a document with just "poop" and "fart" repeated over and over again. I did NOT expect the result to be this good.
- The Ballmer Peak: An Empirical Search: no description found
- Emu3: no description found
Nous Research AI ▷ #ask-about-llms (1 messages):
.faiqkhan: Have you tried lancedb?
Nous Research AI ▷ #research-papers (1 messages):
teknium: https://arxiv.org/abs/2409.14664
Nous Research AI ▷ #interesting-links (3 messages):
Nova LLM Suite Launch
Personalized AI Research Newsletter
- Nova LLM Suite Introduced by RubiksAI: RubiksAI launched the Nova suite of Large Language Models, including Nova-Instant, Nova-Air, and Nova-Pro, designed for exceptional speed and reasoning capabilities with Nova-Pro leading at 88.8% on the MMLU.
- Benchmark scores highlight Nova-Pro's 97.2% on ARC-C and 96.9% on GSM8K, with a focus on upcoming features like Nova-Focus and enhanced Chain-of-Thought skills.
- Personalized Newsletter by Sci Scope: Sci Scope now offers a personalized newsletter that delivers weekly summaries of new ArXiv papers based on user-specified research interests, keeping professionals updated effortlessly.
- This service promises to save users hours each week by grouping similar topics and providing concise overviews, making it easier to stay informed on relevant AI developments.
- Tweet from Rubiks AI (@RubiksAI): 🚀 Introducing Nova: The Next Generation of LLMs by Nova! 🌟 We're thrilled to announce the launch of our latest suite of Large Language Models: Nova-Instant, Nova-Air, and Nova-Pro. Each designe...
- Sci Scope: An AI generated newsletter on AI research
Nous Research AI ▷ #reasoning-tasks (1 messages):
o1 reasoning extraction
context window exploration
- Extracting o1's reasoning chain for insights: A member inquired whether anyone has attempted to prompt o1 to extract its reasoning process after producing an answer.
- They speculated that the reasoning chain might be cached in o1's context window, suggesting that querying it post-answer could yield a synthetic data list.
- Exploring the context window mechanics: The discussion hinted at the possibility of leveraging o1's context window to revisit and regenerate the reasoning behind its answers.
- The member expressed curiosity about uncovering how reasoning processes are stored and accessed within o1.
Interconnects (Nathan Lambert) ▷ #news (39 messages🔥):
OpenAI's Recent Funding Round
Liquid.AI Architecture Discussion
AI in Research-Level Mathematics
- OpenAI secures $6.6B funding with a hefty valuation: OpenAI has raised $6.6B in funding at a $157B post-money valuation, led by Thrive Capital with contributions from companies like Microsoft and Nvidia.
- CFO Sarah Friar noted that this funding will allow for a tender event for employees, offering them liquidity options following this significant round.
- Liquid.AI claims a breakthrough over previous predictions: Discussion revolves around the Liquid.AI model, with claims that it performs significantly better than predictions made by Ilya Sutskever in 2020, indicating a possible architecture breakthrough.
- Some skeptics still question the validity of the claims, highlighting several red flags but acknowledging the credibility of Parakhin's insights.
- AI's evolving capabilities in mathematics: A discussion initiated by Robert Ghrist raises the question of whether AI can tackle research-level mathematics, including making conjectures and proving theorems.
- He notes that the boundary of what is achievable with LLMs has shifted, based on his experiences with AI in theorem proving.
- Tweet from Mikhail Parakhin (@MParakhin): Just to reiterate: http://Liquid.AI model is the first one I've seen that managed to break away from the prediction made by @ilyasut in 2020. Literally everyone else is in the epsilon vicinity of ...
- Tweet from Mikhail Parakhin (@MParakhin): @manic_pixie_agi They are discussing internally. It's a bit tricky, as the whole company value is in this new architecture.
- Tweet from NIK (@ns123abc): NEWS: OpenAI asks investors not to back rival start-ups like xAI and Anthropic lol, lmao even
- Tweet from Erin Woo (@erinkwoo): scooplet: OpenAI employees may get to cash out after the company's $6.6b (!!) funding round CFO Sarah Friar told employees that the round "“means that we have the ability to offer a tender ev...
- Tweet from prof-g (@robertghrist): can AI do research-level mathematics? make conjectures? prove theorems? there’s a moving frontier between what can and cannot be done with LLMs. that boundary just shifted a little. this is my expe...
Interconnects (Nathan Lambert) ▷ #ml-drama (65 messages🔥🔥):
AI Safety Discussions
Ethics in AI Development
Google's AI Ambitions
Controversies in AI Usage
Funding for AI Research
- Debate on AI Safety and Ethics: Members discussed the broad terms of AI Safety, touching on the challenges of addressing both old AI Ethics and new threats like deepfakes and biases in training data.
- One noted that many critics of AI Safety seem to have filled a vacuum, leading to a perception that some voices are just 'angry grandmas shouting at clouds'.
- Google's Pursuit of Advanced AI: There was speculation about Google's desire to push for AGI, especially given their significant cash reserves and historical investments in AI talent.
- While some believe Google is aiming for advanced models, other members questioned the company's commitment to the vision of AGI.
- Controversial AI Usage Sparks Outrage: A tweet about Character.AI using the likeness of a murdered individual for a video game character without consent resulted in outrage and calls for ethical practices in AI development.
- Members expressed sadness about potential public relations disasters for the involved parties after the original link was removed, indicating a likely cover-up.
- Funding Challenges in AI Research: The high costs of developing future AI models such as GPT-5 and GPT-6 led to discussions about whether organizations like OpenAI could realistically raise the necessary funds, estimated at $50-100 billion.
- In contrast, Google was compared favorably due to its cash reserves and existing resources, making it a formidable player in the race for AI advancements.
- Comparing AI Progress to Self-Driving Cars: One member likened the current state of AI development to the evolution of self-driving cars, anticipating various startups with mixed success rates while larger companies like Google continue to work on stable products.
- This perspective highlighted that while excitement and experimentation grow, substantive results might be slow and vary greatly between different projects.
- Tweet from Dwarkesh Patel (@dwarkesh_sp): "There's no way that OpenAI can pay for the clusters planned to be built next year unless they raise $50-100 billion" @dylan522p / @asianometry out tomorrow "Rip that bong, baby"
- Tweet from philpax (@philpax_): @segyges might be a bit late if it's about the scandal that's brewing
- Notion – The all-in-one workspace for your notes, tasks, wikis, and databases.: A new tool that blends your everyday work apps into one. It's the all-in-one workspace for you and your team
- Tweet from Brian Crecente (@crecenteb): This is fucking disgusting: @character_ai is using my murdered niece as the face of a video game AI without her dad's permission. He is very upset right now. I can't imagine what he's go...
Interconnects (Nathan Lambert) ▷ #random (9 messages🔥):
OpenAI secrets
GPU access challenges
LLM agent mishap
GPU marketplace
Shadeform services
- Curiosity about OpenAI secrets: A member expressed interest in uncovering potential OpenAI secrets, indicating a belief that the world might not be as locked down as presumed.
- This reflects a growing curiosity in the community regarding hidden capabilities and access within the AI space.
- Struggles to acquire a GPU: A member compared their attempts to secure an Nvidia GPU to a drug addict's quest, highlighting the intense demand and obstacles faced.
- They reported encountering quota upgrade requests, underscoring the difficulties in obtaining the necessary resources.
- LLM agent's reckless actions: An individual shared a cautionary tale about their LLM agent that irresponsibly accessed and modified their system, ultimately causing boot issues.
- This incident serves as a warning about the unpredictable nature of AI agents in sensitive systems.
- Interest in GPU marketplace solutions: A member discussed Shadeform, a marketplace for on-demand GPUs, highlighting its features like scheduling for future reservations.
- They emphasized the ease of managing multi-cloud deployments and billing through a centralized dashboard.
- Ease of deploying cloud resources: Shadeform offers a streamlined approach for launching GPU instances in managed cloud accounts, optimizing user experience.
- Members showed interest in its capabilities for handling containerized workloads more efficiently.
- Tweet from Buck Shlegeris (@bshlgrs): I asked my LLM agent (a wrapper around Claude that lets it run bash commands and see their outputs): >can you ssh with the username buck to the computer on my network that is open to SSH because I ...
- Shadeform - The GPU Cloud Marketplace: Efficiently develop, train, and deploy AI models in any cloud environment. Access on-demand GPUs across multiple GPU clouds and seamlessly scale ML inference for optimal performance.
Interconnects (Nathan Lambert) ▷ #memes (2 messages):
Trash Panda Emoji
Social Media Reactions
- Natolambert's Meme Reaction: @natolambert shared a meme link, humorously questioning if another user was involved, referencing this tweet.
- Lmao commented on the absurdity, highlighting the comedic context of the shared content.
- Xeophon Denies Involvement: In response, xeophon clarified, No, that's too much even for me and added a subtle emoji reaction.
- This exchange showcases the playful banter and camaraderie within the conversation.
Link mentioned: Tweet from Trash Panda 🦝 (@trashpandaemoji): @TheXeophon @natolambert
Interconnects (Nathan Lambert) ▷ #rl (2 messages):
RL Conference
Andrew Barto's Statement
- Andrew Barto's Hilarious Moment at RL Conference: During his talk at the RL Conference, Andrew Barto humorously remarked, 'Let's not have RL become a cult,' which sparked a standing ovation from the audience.
- The comment resonated well, showcasing the community's light-hearted take on serious discussions in the realm of reinforcement learning.
- Eager to Watch Andrew Barto's Talk: A member expressed eagerness to watch Andrew Barto's talk, highlighting the memorable reaction from the attendees.
- This reflects a broader interest in the content of the conference and its key speakers' presentations.
Link mentioned: Tweet from Eugene Vinitsky 🍒 (@EugeneVinitsky): Funniest part of @RL_Conference was when Andrew Barto said "Lets not have RL become a cult" and then received a standing ovation at the end of his talk
Interconnects (Nathan Lambert) ▷ #reads (5 messages):
Jack's interview
Meta's Llama model training
Constrained Generative Policy Optimization
Reward models in LLMs
Google's insights on model training
- Jack's Meandering Interview: Participants noted that Jack's interview was quite meandering, with no apparent objective except to give him space to express himself.
- One viewer remarked that the overall quality of the interview was only okay.
- Meta's Success with Llama Models: Andrew Carr discussed how Meta achieved effective post-training with the Llama series of models, sharing insights from a recent paper they released.
- The paper highlights challenges of single reward models in aligning LLMs and introduces Constrained Generative Policy Optimization, enhancing performance across various benchmarks.
- Introduction of Judge Models: They introduced a Mixture of Judge models aimed at optimizing RLHF, including judges for False refusal, instruction following, math/coding, factuality, and safety.
- The concept is straightforward, and according to their findings, it positively impacts performance metrics like MATH and Human Eval.
- Concerns about Llama Team's Claims: One member expressed suspicion about whether the claims regarding the Llama team's methods were truly valid, implying a potential disconnect.
- Despite the skepticism, another participant noted that Google had mentioned similar approaches, adding to the conversation.
Link mentioned: Tweet from Andrew Carr (e/🤸) (@andrew_n_carr): I often wonder how Meta did such a good job post training the Llama series of models. They just released a paper that gives us a good idea. The big challenge is that using a single reward model to...
Interconnects (Nathan Lambert) ▷ #posts (1 messages):
SnailBot News: <@&1216534966205284433>
Unsloth AI (Daniel Han) ▷ #general (82 messages🔥🔥):
Feature Extraction with VAE
Zoom Event Announcements
FP8 Training Challenges
NVIDIA AI Summit 2024
BFloat16 Performance
- Discussion on Feature Extraction Formats: Members went over preferred formats for feature extraction in Variational Autoencoders, with suggestions pointing to continuous latent vectors or `.pt` files.
- Participants clarified that RGB inputs/outputs are most relevant for models like Stable Diffusion.
- Upcoming Zoom Event Notifications: Multiple reminders for a live Zoom call were shared, emphasizing it was being recorded for future access.
- Links to join the call for updates were circulated and participants expressed excitement about the content.
- Challenges in FP8 Training: A member shared a paper discussing difficulties encountered while training large language models using FP8 precision.
- They noted newly identified instabilities during longer training runs, sharing the link for further reading.
- NVIDIA AI Summit 2024 Discount Codes: A call for discount codes for passes to the upcoming NVIDIA AI Summit 2024 in Mumbai was made by a student eager to participate.
- The student highlighted their background in AI and LLMs, assuring that their attendance would be beneficial.
- Performance Benefits of BFloat16: Participants discussed the advantages of using bfloat16 for enhanced performance on GPUs, often yielding faster computations.
- It was agreed that bfloat16 generally leads to improved overall performance, particularly for training tasks.
- Video Conferencing, Web Conferencing, Webinars, Screen Sharing: Zoom is the leader in modern enterprise video communications, with an easy, reliable cloud platform for video and audio conferencing, chat, and webinars across mobile, desktop, and room systems. Zoom ...
- Scaling FP8 training to trillion-token LLMs: We train, for the first time, large language models using FP8 precision on datasets up to 2 trillion tokens -- a 20-fold increase over previous limits. Through these extended training runs, we uncover...
- Unsloth Talk: 1 Hacks to make LLM training faster (Adv) Daniel from Unsloth
- Process: no description found
Unsloth AI (Daniel Han) ▷ #off-topic (12 messages🔥):
AI Game Feedback
Login Concerns for AI Game
Bot Detection Measures
Humorous Programmer Joke
- AI Game Feedback Request: A member shared a link to their newly created AI game, inviting others to provide feedback and play it for free at game.text2content.online. The game involves crafting prompts to jailbreak an AI and uncover a secret word within a time limit.
- Another member remarked that the project appears similar to a recently deployed game, prompting the creator to clarify that it’s a standalone fun project without data collection.
- Login Requirement Sparks Reactions: Concerns arose when a user expressed hesitation to play the game upon seeing the login requirement, humorously stating they closed the tab as a result. The creator explained that login forms are necessary due to increased bot activity, which could inflate operational costs.
- They suggested users create temporary email accounts if they prefer not to use their personal information to sign up.
- Humorous Programmer Joke Shared: The creator of the game shared a humorous joke about a programmer ignoring warnings, highlighting the stereotype that programmers only care about errors. The joke generated amusement in the discussion, reflecting the lighthearted nature of the conversation.
Link mentioned: LLM Jailbreak: no description found
Unsloth AI (Daniel Han) ▷ #help (21 messages🔥):
Unsloth Model Loading Issues
Dataset Organization for Fine-tuning
ChatML Template with phi3.5
Using Unsloth Models on CPU
Temperature Parameter in Training
- Unsloth Model Loading Issues: A user encountered an error loading their fine-tuned model with LoRA adapters using AutoPeftModelForCausalLM. They were advised to check max_seq_length or find an equivalent setting, since the issue did not occur with Unsloth's own loading method.
- Dataset Organization for Fine-tuning: Inquiries were made about properly organizing a dataset for fine-tuning the LLaMA 3.2-3B model, with a confirmation that the previous structure was adequate. Additionally, clarifications were sought regarding the ability to use the dataset with the 3.2 notebook.
- ChatML Template with phi3.5: Concerns were raised about compatibility problems with the ChatML prompt template in the phi3.5 notebook, specifically regarding vocabulary mismatches. A participant acknowledged that it should work, while another committed to investigating the issue.
- Using Unsloth Models on CPU: It was clarified that Unsloth models can run on CPUs by converting them to formats compatible with llama.cpp or ollama for inference; a minimal export sketch follows at the end of this section. The conversion enables CPU deployment despite the framework's usual GPU focus.
- Temperature Parameter in Training: A user inquired about reducing the model's temperature to achieve less creative and more direct results during training. It was clarified that temperature adjustments are typically applied during inference rather than training.
Link mentioned: phi3 playbook gguf: llama_model_load: error loading model: vocab size mismatch · Issue #418 · unslothai/unsloth: The llama.cpp integration within the playbook does not works, anyway i have manually created the gguf file but when i try to serve the model using the llama.cpp server i am getting the following er...
Unsloth AI (Daniel Han) ▷ #research (1 messages):
edd0302: http://arxiv.org/abs/2409.17264
Crazy parallelization for inference
GPU MODE ▷ #triton (3 messages):
Kernel Invocation Parameters
Pipelining in Triton
num_stages Functionality
- Understanding num_stages in Kernel Invocation: A user inquired about the function of changing num_stages in a kernel invocation for Triton, speculating it might relate to pipelining.
- 'What exactly is it pipelining and how to customize it?' they asked, wanting clarification on the differences between setting num_stages=2 and num_stages=3.
- Pipelining Optimization Explained: In response, a member explained that pipelining optimizes loops to allow simultaneous execution of loading, computing, and storing operations, which reduces warp stalling; a minimal launch sketch follows below.
- They provided a YouTube video for further understanding on the topic.
GPU MODE ▷ #torch (4 messages):
CUDA mode
no-libtorch-compile
Multithreaded data loading
SPDL framework
- Third Place Prize at CUDA Mode Insight: A participant noted that the third place prize at CUDA mode aimed to achieve a specific goal related to data loading.
- This raised curiosity about the repository connected to the project, prompting others to seek out progress updates.
- Repository For No LibTorch Compile: A member shared the link to the repository for the no-libtorch-compile project, allowing others to track its development.
- This repository aims to facilitate work without needing libtorch for compilation.
- Introduction to Multithreaded Data Loading: Discussion about multithreaded data loading highlighted key resources, including the SPDL documentation page.
- Another shared the corresponding GitHub repository, emphasizing its focus on scalable and performant data loading strategies.
- GitHub - lianakoleva/no-libtorch-compile: Contribute to lianakoleva/no-libtorch-compile development by creating an account on GitHub.
- SPDL 0.0.6 documentation: no description found
- GitHub - facebookresearch/spdl: Scalable and Performant Data Loading: Scalable and Performant Data Loading. Contribute to facebookresearch/spdl development by creating an account on GitHub.
GPU MODE ▷ #announcements (1 messages):
IRL keynotes recordings
Talks by notable speakers
Accel's contribution
- IRL Keynotes Recordings Released: The recordings for our IRL keynotes are finally out. You can watch them here.
- The lineup features amazing talks by Tri Dao, Supriya Rao, Andrej Karpathy, Lily Liu, Tim Dettmers, and Wen-mei Hwu.
- Shoutout to Accel for Quality Recording: A big thanks to Accel for the beautiful recording of the keynotes. Their efforts made it possible to showcase these essential discussions effectively.
GPU MODE ▷ #cool-links (1 messages):
as_ai: cool: https://openai.com/index/introducing-the-realtime-api/
GPU MODE ▷ #beginner (2 messages):
Breaking into Machine Learning
Tensor Manipulation in Triton
- Curiosity about Machine Learning Opportunities: A user expressed their desire to learn and contribute to machine learning, noting a transition from software engineering and previous experience in ML.
- They highlighted the challenge of self-learning in this field without a job that requires solving real ML problems.
- Efficiently Manipulating Tensor Shapes: Another user inquired about manipulating a tensor X of shape [BLOCK_SIZE_INP * triton.next_power_of_2(n_inp_bits), 256, BLOCK_SIZE_OUT] to remove elements from the second dimension without excessive memory operations.
- They sought a method similar to X[:, :BLOCK_HIDDEN_SIZE] while maintaining data efficiency and avoiding loading/unloading from memory; a masked-load sketch follows below.
GPU MODE ▷ #pmpp-book (1 messages):
deon1217: 5th edition is coming by EOY?
GPU MODE ▷ #youtube-recordings (6 messages):
Project Presentations Upload
Future Events Locations
- Project Presentations Will Not Be Uploaded: Members confirmed that the project presentations will not be uploaded as the recording crew was not retained for that long.
- Mr. Osophy expressed disappointment with a sad face emoticon, reinforcing the unfortunate news.
- Discussion on Future Events: Questions were raised about whether there would be another event in the future and if it would be hosted in locations other than San Francisco.
- Mark shared uncertainty about the design of the next event, pondering if it would be bigger or smaller, expressing a need for more contemplation.
GPU MODE ▷ #torchao (11 messages🔥):
TorchAO vs pytorch/torch/ao
Sensitivity scan and pruning
Prototyping features in TorchAO
Benchmarking and warmup in training
- Clarifying TorchAO vs pytorch/torch/ao: There's confusion about using TorchAO, a GPU quantization/sparsity library, versus the older pytorch/torch/ao, which was CPU-focused and had convolution support.
- A member highlighted the need to differentiate these two to avoid new users drifting toward the outdated library.
- Issues with sensitivity scan and pruning: A new member experienced exceptions while conducting a sensitivity scan layer by layer using pruning techniques, potentially due to outdated parameterizations.
- They were advised to check the example flow and update to the newer version of the library after realizing they used the older one.
- Understanding Prototype Features in TorchAO: There was a discussion about being directed to a GitHub page related to the prototype pruner in TorchAO, raising questions about its stability and usage.
- Members confirmed that while these prototype features could be used, they lack the same backward compatibility guarantees as more stable features.
- No Warmup in Benchmarking Settings: One member inquired why there is no warmup phase implemented in the benchmark_aq.py script from TorchAO.
- This highlights a potential area for improvement in the benchmarking procedures as they explore training efficiencies; a generic warmup pattern is sketched after the links below.
- ao/torchao/sparsity/prototype/pruner at main · pytorch/ao: PyTorch native quantization and sparsity for training and inference - pytorch/ao
- port over torch.ao.pruning to protype folder by jcaip · Pull Request #148 · pytorch/ao: This PR ports over the torch.ao.pruning flow to torchao. Based on our last OSS sync session with @mklasby, we'd like to make some changes to the BaseSparsifier but don't want to limite...
GPU MODE ▷ #sequence-parallel (3 messages):
Long Context Methods
Survey Papers on Context
Author Engagement
- Author Open for Questions: @bekar2617, one of the authors, invited participants to drop any questions they have regarding the paper.
- Feel free to engage!
- Inquiry on Long Context Surveys: A member, @glaxus_, praised the paper and inquired if there are any surveys covering long context methods, noting their overwhelming number.
- They shared that it's hard to keep track of all the various approaches in this area.
- Unawareness of Survey Papers: @bekar2617 humorously responded that they aren't aware of any good survey papers on long context methods.
- Haha, that's true, indicating shared frustration with the abundance of material available.
GPU MODE ▷ #off-topic (9 messages🔥):
Geopolitical Stability
Political Discussions in Server
User Reactions to Political Stress
Community Guidelines on Topics
- Community Concerns about Geopolitical Stability: apaz expressed uncertainty about the current geopolitical climate, suggesting it feels more fragile than commonly perceived.
- kashimoo resonated with this sentiment, sharing personal concerns due to family in affected regions, calling the situation nerve-wracking.
- Debate on Politics in the Server: mr.osophy pointed out the potential guideline against political discussions, questioning if respectful dialogue should be permitted.
- apaz and others agreed that keeping politics off limits is reasonable, implying the rules might be for the community's comfort.
- Mixed Reactions to Political Stress: marksaroufim shared a grim view of personal sadness over geopolitical events, humorously suggesting it might be better to focus on coding.
- He conveyed a sense of camaraderie in wanting peace while navigating the effects of political tensions.
- Offensive Comments Spark Backlash: mr.osophy brought attention to a controversial remark involving Elon Musk and Haitians, highlighting its inappropriateness.
- He emphasized that such comments are offensive and pushed a narrative of understanding from the immigrant community.
GPU MODE ▷ #irl-meetup (1 messages):
saurabh_works: Any groups in India? 🇮🇳
GPU MODE ▷ #triton-puzzles (6 messages):
Triton Kernel Explanation
Add Vector Function
Row Major Format in Tensors
- Understanding the Add Vector Function: A user shared an incorrect implementation of a vector addition kernel in Triton that led to confusion about the correct approach.
- The discussion highlighted the essential need for clarity in tensor operations and the importance of properly indexing the tensors.
- Row Major Format Clarification: Members explained that all tensors are stored as 1D arrays in row major format, indicating that each element of a column is spaced by the size of N1.
- This detail helped a user understand why multiplying by N1 was necessary to correctly calculate the storage location, as illustrated in the sketch below.
- Effective Educational Resources: A user remarked on the usefulness of a diagram showing how tensors are indexed in memory, appreciating its clarity.
- This indicates a positive engagement with visual aids in technical explanations, emphasizing their value in understanding complex concepts.
GPU MODE ▷ #hqq-mobius (8 messages🔥):
AWQ and HQQ comparison
Quantization methods
Perplexity benchmarks
MMLU and GSM8K tests
lm eval implementations
- AWQ combined with HQQ yields surprising results: Users observed that combining AWQ with HQQ seems to outperform both methods individually, raising questions on the approach's effectiveness.
- One user provided a GitHub example highlighting the implementations.
- Calibration using uint4 surprisingly better than HQQ: During calibration, a user noted the use of uint4 quantization error instead of HQQ, leading to interesting performance outcomes.
- They emphasized AWQ's scaling optimization versus HQQ's zero-point focus.
- Perplexity comparisons raise eyebrows: Concerns were raised over perplexity scores where HQQ showed higher values than int4, despite expectations of better performance.
- Users questioned the benchmark reliability, suggesting a shift away from perplexity in favor of other metrics.
- Testing for robust benchmarks: A request was made to run tests on MMLU and GSM8K with instruct models to achieve more robust benchmarking.
- The current perplexity calculations using the llm-awq paper implementation were noted to differ from commonly accepted lm eval methods.
- Sharing lm eval scripts: A user shared their script for running lm eval in conjunction with HF models, specifically for instruct models.
- This contribution aims to streamline the benchmarking process within the community; a minimal harness invocation is sketched after the links below.
- ao/torchao/prototype/awq/example.py at awq · vayuda/ao: Native PyTorch library for quantization and sparsity - vayuda/ao
- hqq/examples/llama2_benchmark/eval_model.py at master · mobiusml/hqq: Official implementation of Half-Quadratic Quantization (HQQ) - mobiusml/hqq
GPU MODE ▷ #llmdotc (38 messages🔥):
Pipeline Parallelism
Activation Checkpointing
Zero-3 Implementation
Chunked Softmax
Sequence Parallelism
- Pipeline Parallelism Causes Concerns: Members expressed fear regarding pipeline parallelism, suggesting it complicates efficient scheduling for training purposes.
- It's noted that while naive implementations are straightforward, avoiding performance bottlenecks becomes complex, leading to skepticism like 'no codebase survives pipeline parallelism'.
- Activation Checkpointing Developments: Progress was made on activation checkpointing for the Llama3 branch, aiming to save memory by selectively storing residuals during training.
- Despite achieving a version that reduces memory load, there are concerns that it won't be sufficient for 405B BF16 training at 128K context length.
- Zero-3 and Chunked Softmax Discussions: There were discussions about implementing Zero-3 along with chunked softmax, emphasizing the need for efficient memory management techniques.
- One approach suggested using chunked softmax to improve memory efficiency, especially relevant given the challenge of processing large vocabularies and context lengths (a sketch follows after this section's links).
- Pondering Sequence Parallelism as a Solution: A member proposed sequence parallelism as a potential solution to managing attention layers more effectively, minimizing overhead from massively parallel processing.
- By dividing tasks among GPUs, especially for large models, it might provide an easier path forward without requiring extensive modifications to existing systems.
- Complexity of Offloading Residuals: The feasibility of offloading residuals to CPU memory was discussed, but challenges in managing the sheer amount of data required for large models were highlighted.
- Strategies to manage memory include selective recomputation of activations and utilizing existing methods to combine forward pass efficiencies, albeit with complexity remaining a concern.
- add llama 3 support to llm.c · karpathy/llm.c@d808d78: LLM training in simple, raw C/CUDA. Contribute to karpathy/llm.c development by creating an account on GitHub.
- Activation Checkpointing for Llama3 branch by ademeure · Pull Request #773 · karpathy/llm.c: This keeps residual3 for all layers, and then up to N layers for everything else, with relatively little complexity... This means if you set "-ac 16", it will only recompute 50% of a...
- Scaling Language Model Training to a Trillion Parameters Using Megatron | NVIDIA Technical Blog: Natural Language Processing (NLP) has seen rapid progress in recent years as computation at scale has become more available and datasets have become larger. At the same time, recent work has shown...
GPU MODE ▷ #rocm (1 messages):
Advancing AI event
ROCM developers
- Advancing AI Event Scheduled in SF: An Advancing AI event is set for 10/10 in San Francisco at Moscone, focusing on upcoming hardware and software developments.
- Interested attendees are encouraged to DM for registration details and catch up with ROCM developers.
- Participation Opportunities with ROCM Devs: The ROCM community is inviting interested participants to join them at the event to discuss various topics in AI.
- This is a chance to network and engage directly with ROCM developers and learn about their latest projects.
GPU MODE ▷ #liger-kernel (2 messages):
Kernel Functional Reminder
Contribution Guide Updates
- Reminder for Functional Addition in Kernel: A member noted that they forgot to add the functional variant when inserting a new kernel, highlighting it as an easily neglected detail.
- They suggested incorporating a reminder in the contribution guide to avoid similar oversights in the future.
- Ongoing Work to Address Reminders: Another member mentioned that someone is currently working on addressing this issue regarding the functional addition.
- This indicates a collaborative effort to improve the contribution process within the community.
GPU MODE ▷ #metal (1 messages):
Prefix sum puzzle
Debugging notebook crashes
- Prefix Sum Puzzle Needs Solving: A member inquired if anyone has solved the prefix sum puzzle as they are experiencing difficulties.
- They mentioned that their notebook crashes and expressed uncertainty on how to debug the issue, stating they are unsure about their mistakes.
- Request for Debugging Help: A user expressed challenges in debugging their setup related to the prefix sum puzzle.
- They are looking for insights or suggestions on methods to resolve the crashes occurring during attempts to run the notebook.
GPU MODE ▷ #diffusion (2 messages):
FLUX Inference Models
Custom Kernels Memory Optimization
- FLUX Inference Repo Insights: The FLUX repository offers an official inference model for FLUX.1, contributing to ongoing development handled by black-forest-labs.
- The repository encapsulates essential functionalities including a sampling function pivotal for performance enhancements.
- Optimizing Memory Usage in Image Processing: A member proposed that instead of sending full image data (e.g., 3145728 B for a 1024 x 1024 image), it could be more efficient to send just the dimensions, reducing it to 6 B.
- Calculating img_ids in a custom kernel would utilize shared memory for efficiency, asserting that handling integers could also be optimized across the board; a sketch of the idea follows below.
Link mentioned: flux/src/flux/sampling.py at 87f6fff727a377ea1c378af692afb41ae84cbe04 · black-forest-labs/flux: Official inference repo for FLUX.1 models. Contribute to black-forest-labs/flux development by creating an account on GitHub.
Eleuther ▷ #general (72 messages🔥🔥):
Bayesian vs Frequentist Models
NYT Lawsuit Implications
Scraping Legitimacy
Expert Witness Dynamics
OpenAI Settlements
- Navigating Bayesian Models in Neural Architectures: A member noted that neural architectures predominantly utilize frequentist statistics, making it challenging to implement Bayesian networks and beta distributions in trainable models.
- They proposed intuitive and alternative solutions, including collapsing probabilities into model weights without preserving complexity of the Bayesian framework.
- NYT Lawsuit Tests Limits of Copyright on AI: Discussion followed on the feasibility of OpenAI paying off NYT to avoid broader liabilities, spotlighting the complexities surrounding copyright infringement claims against LLMs.
- It was argued that compensating NYT might not imply that the whole AI model infringes on copyright and pointed out the difference in industry motivations between profitable entities and independent creators.
- Debates on Scraping Legitimacy and Ethics: Members expressed concerns about the ethical ramifications of scraping materials from the internet, especially how it relates to creative professionals and potential litigation.
- Questions arose about whether any ruling on scraping could define distinct liabilities for OSS developers, potentially affecting research contexts significantly.
- Insights on Expert Witness Roles and Challenges: One participant humorously speculated on the challenges faced by mathematicians acting as expert witnesses in copyright cases focused on data compression and LLM training.
- They noted that trial outcomes may hinge on complexities surrounding whether LLMs can be proven to store training data in weights or demonstrate transformation of that data.
- Speculation on OpenAI Trial Outcomes: The community discussed the likelihood of OpenAI settling lawsuits, raising the possibility that very few litigants would claim damages due to specific cases where their work was highlighted.
- Ultimately, most agreed that a broad ruling would be difficult to predict and could lead to innovative licensing arrangements with established publications.
Eleuther ▷ #research (13 messages🔥):
Sequential Prediction of Output Representations
Liquid Neural Networks Application
Self-Supervised Learning on Arbitrary Embeddings
Transfer Learning Techniques in NLP
- Exploring Sequential Predictions in Continuous Functions: Members discussed treating the task of predicting continuous functions as an autoregressive process, with examples such as predicting the digits in F(x) = 0.135.
- One suggestion was to explore T5's approach to continuous prediction tasks, highlighting its versatility in handling various models.
- Liquid Neural Networks Looks Promising: The discussion pointed towards using liquid neural networks for fitting continuous functions, emphasizing lower developmental complexity.
- A member remarked that pipelines could be made more end-to-end, assuming developers are knowledgeable about the model.
- Self-Supervised Learning for Any Model: A member introduced the concept of self-supervised learning (SSL) on arbitrary embeddings derived from any model and data.
- They further elaborated on extending SSL to work on any model weights, collecting linear layers from various models to form the dataset.
- Transfer Learning Techniques in NLP with T5: The capabilities of T5 were highlighted, particularly its effectiveness in transfer learning for natural language processing tasks.
- One member humorously noted, 'God dammit T5 thought of everything,' reflecting on its comprehensive text-to-text framework.
- Examining Deep Learning Optimizers: A link was shared discussing a new design space for deep learning optimizers, critiquing methods like Adam and Shampoo without convexity assumptions.
- The proposal includes assigning different operator norms to tensors based on their roles within the neural architecture, potentially enhancing training stability.
- Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer: Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The ef...
- Old Optimizer, New Norm: An Anthology: Deep learning optimizers are often motivated through a mix of convex and approximate second-order theory. We select three such methods -- Adam, Shampoo and Prodigy -- and argue that each method can in...
OpenAI ▷ #ai-discussions (47 messages🔥):
OpenAI Subscription Tiers
Voice Model Preferences
Liquid AI Architecture
Playground Access Issues
API Access Updates
- Users Desire Higher Subscription Tiers for Updates: Members discussed the potential benefits of having a higher-priced OpenAI subscription that could provide more up-to-date features and services.
- One user expressed frustration over the limitations imposed by current subscription tiers while using various AI platforms.
- Struggles with New Cove Voice Model: Multiple users voiced their dissatisfaction with the new Cove voice model, stating it lacks the calming nature of the classic version and is overly energetic.
- One user emphasized a community consensus against the new voice, prompting a plea for the return of the classic voice.
- Liquid AI's Advanced Architecture Performance: Discussion included a new architecture from Liquid AI, claimed to outperform traditional LLMs and currently available for testing.
- Members noted its inference efficiency and speculated on its varying architecture from typical transformer models.
- Issues with Accessing the Playground: A user raised concerns about logging into the Playground, with others experiencing similar issues or suggesting incognito mode as a workaround.
- Communication suggested that access may vary based on users' locations, particularly in specific regions like Switzerland.
- API Access and Usage Tiers: A user inquired if API access is limited to specific usage tiers, while a response clarified that access was previously tier-based but has since changed.
- This indicates ongoing adjustments within OpenAI's service offerings and accessibility.
- Realtime Playground: Speech-to-speech playground for OpenAI's new Realtime API. Built on LiveKit Agents.
- About Cove's voice changing with the new conversation system: I’m feeling the same. I can’t believe I’m missing the voice of an AI assistant. Lol
- Cartesia: no description found
OpenAI ▷ #gpt-4-discussions (18 messages🔥):
Disappearing Responses Issue
Creating Custom GPT with Unique Features
o1-preview Model Access Query
Using OAuth for Google Drive Connections
GPT Policy Violation Appeal Process
- Disappearing Responses Issue Reported: Users reported issues with responses disappearing in the macOS desktop app, attributing it to a recent update that might have altered notification options.
- One user expressed frustration, emphasizing that this has affected their experience in the last 20 minutes.
- Launch of 'Simple Story' Custom GPT: A new custom GPT named 'Simple Story' was created to transform simple sentences into coherent stories, maintaining proper spacing and introducing characters effectively.
- The creator stated that this GPT addresses shortcomings found in ChatGPT, inspired by an interview with author David Perell.
- Clarification on Model Access: o1-preview: There was a query about the model access on OpenAI's platform, specifically distinguishing between 'o1-preview' and its recent snapshot 'o1-preview-2024-09-12'.
- A member responded that both endpoints currently point to the same model snapshot, raising questions about any differences in use with ChatGPT.
- Exploring OAuth for User-Specific Google Drive Saving: Discussion revolved around the possibility of creating a GPT that allows other users to save conversations to their own Google Drive using OAuth for login.
- One member sought clarity on whether it's feasible for a custom GPT to enable this, recognizing the technicalities involved in implementing such features.
- Appeal Process for GPT Policy Violations: A user shared their experience regarding an email about their GPT, 'Video Summarizer', being removed due to policy violations, and their ongoing appeal to resolve the matter.
- They expressed frustration over the lack of response from customer support after a week, highlighting the emotional impact of losing their GPT from the store.
OpenAI ▷ #prompt-engineering (3 messages):
LLMs and chain-of-thought
Midjourney seed number retrieval
- LLMs: Chain-of-Thought doesn't stop hallucinations: It's noted that while chain-of-thought improves accuracy in LLMs, it doesn't reduce hallucinations, as seen with the incorrect answer of 'Ninetales' instead of 'Vulpix' for a fire-type fox Pokemon.
- The distinction between hallucination and intelligence was highlighted by verifying tail counts, showing that Vulpix has six tails and Ninetales has nine.
- Retrieving Seed Number from Midjourney Images: A user inquired about obtaining the seed number for images generated in web Midjourney.
- A member responded by directing them to look for information in another channel, specifically <#998381918976479273>.
OpenAI ▷ #api-discussions (3 messages):
LLMs Hallucination Issues
Midjourney Seed Number Retrieval
- LLMs fail to prevent hallucinations with chain-of-thought: Users noted that using chain-of-thought techniques does not prevent hallucinations in LLMs but may improve accuracy, as seen with erroneous answers for a Pokémon question.
- Despite introducing thought processes, models like gpt-3.5-turbo still lead to inaccuracies, revealing underlying hallucination problems.
- Finding Midjourney's seed number: A user asked how to retrieve the seed number from a picture generated in Midjourney.
- Another member suggested visiting a specific channel for help, indicating community support for this inquiry.
Latent Space ▷ #ai-general-chat (53 messages🔥):
OpenAI's new funding round
OpenAI's Advanced Voice
Multi-GPU training techniques
Releases of multimodal language models
Azure AI's HD neural TTS
- OpenAI secures $6.6B in new funding: OpenAI has officially announced a funding round of $6.6 billion at a $157 billion valuation, collaborating with key partners including U.S. allied governments to further unlock AI technology's potential.
- This raises questions about which allies are involved, with discussions speculating on the inclusion of NATO countries and the implications for international collaboration in AI.
- Advanced Voice launching for all users: OpenAI has begun rolling out the Advanced Voice feature to all ChatGPT Enterprise and Edu users globally, with free users getting a sneak peek.
- This enhancement reflects OpenAI's commitment to improving interaction modalities, although some users express skepticism about the practical performance of voice applications.
- Insights on Multi-GPU training: An insightful breakdown of multi-GPU training was shared, emphasizing the importance of parallelizing network training, efficient state communication, and rapid failure recovery techniques.
- Techniques such as checkpointing and optimizer state communication were highlighted as crucial for maintaining performance across 10k GPU usage.
- New multimodal models announced: The recent announcement highlighted MM1.5, a new family of multimodal language models by Apple aimed at enhancing OCR and multi-image reasoning capabilities.
- The models come in dense and MoE variants, including ones designed specifically for video and mobile UI understanding.
- Updates on Azure AI's HD Neural TTS: Microsoft has introduced an HD version of its neural TTS service on Azure AI, enhancing the expressiveness and engagement of generated speech through emotional context detection.
- With features like auto-regressive transformer models, users can expect improvements in speech realism and quality, positioning Azure's service competitively in AI-powered voice applications.
- Tweet from keith (@keithwhor): @emileberhard @OpenAI @pbbakkum @landakram @DustMason rolling out throughout the week!
- Tweet from swyx @ DevDay! (@swyx): Here’s my @OpenAIDevs day thread for those following along. everyone else gotchu with videos and stuff so i will just give personal notes and aha moments thru the day first observation: @sama MIA GP...
- Tweet from dsa (@dsa): OpenAI announced the Realtime API today. While access rolls out to all devs, try the API using our key: https://playground.livekit.io/
- Looks like even for the non-realtime API they're charging $200/M for output audi... | Hacker News: no description found
- Tweet from Nathan Lambert (@natolambert): Lots of big multimodal language models the last two weeks, but still about 0 on what the heck RLHF looks like for them. Who's work on this after LLaVA?
- Tweet from Bleeding Purple Guy (@BleedingPurple4): @WilliamShatner You good Captain?
- Tweet from Bindu Reddy (@bindureddy): Call center humans are breathing a huge sigh of relief... They just realized that they are cheaper than OAI's voice mode, which is a whopping $18/hr 🤯🤯
- Tweet from Jeff Clune (@jeffclune): My answer: Alec Radford. There were good suggestions below, but to my mind @AlecRad is clearly the person with the largest influence, yet the least recognition. He's been the driver so many ama...
- Tweet from Soumith Chintala (@soumithchintala): There's three parts. 1. Fitting as large of a network and as large of a batch-size as possible onto the 10k/100k/1m H100s -- parallelizing and using memory-saving tricks. 2. Communicating state...
- Tweet from Erica Brescia (@ericabrescia): Thrilled to be on this journey to AGI through code with @jasoncwarner @eisokant and the incredible team they've built @poolsideai. Can't wait for the world to benefit from what this team can...
- Tweet from Tanishq Mathew Abraham, Ph.D. (@iScienceLuvr): MM1.5: Methods, Analysis & Insights from Multimodal LLM Fine-tuning abs: https://arxiv.org/abs/2409.20566 Apple introduces MM1.5, a new family of MLLMs carefully designed to enhance a set of core ca...
- What Am I Looking At Landon Bloom GIF - What Am I Looking At Landon Bloom Inventing Anna - Discover & Share GIFs: Click to view the GIF
- Tweet from Jamin Ball (@jaminball): OpenAI audio real time works out to $10-$15 / hour ($120 / 1M tokens at 80/20 blended). This will vary depending on how much end user is talking vs model is "talking," but should be directiona...
- Tweet from OpenAI (@OpenAI): Starting this week, Advanced Voice is rolling out to all ChatGPT Enterprise, Edu, and Team users globally. Free users will also get a sneak peek of Advanced Voice. Plus and Free users in the EU…we’ll...
- Tweet from Gabriel Stengel (@GabeStengel): Today, we’re thrilled to announce our $18.5M Series A led by @rabois at @khoslaventures, along with @jaltma, @ericschmidt, @mantisVC and many others. https://rogo.ai/blog/rogo-series-a-with-khosla-v...
- Tweet from PicoCreator - AI Model Builder in 🌉 (@picocreator): OpenAI dev day x notebook LM ❤️ This whole conversation summary of the OpenAI dev day; Is entirely AI generated without, any human supervision, or intervention 🤖
- GitHub - yakazimir/esslli_2024_llm_programming: Course resources for the ESSLLI 2024 class on language model programming: Course resources for the ESSLLI 2024 class on language model programming - yakazimir/esslli_2024_llm_programming
- New HD voices preview in Azure AI Speech: contextual and realistic output evolved: Our commitment to improving Azure AI Speech voices is unwavering, as we consistently work towards making them more expressive and engaging. Today, we are..
- Cognitive-Speech-TTS/doctopodcast/doctopodcast.py at master · Azure-Samples/Cognitive-Speech-TTS: Microsoft Text-to-Speech API sample code in several languages, part of Cognitive Services. - Azure-Samples/Cognitive-Speech-TTS
Stability.ai (Stable Diffusion) ▷ #general-chat (49 messages🔥):
ComfyUI Installation Issues
Flux Model Utilization
Automatic1111 Troubles
Debian-based OS Preferences
Python Version Compatibility
- Challenges with ComfyUI installation: A user struggled with installing ComfyUI on Google Colab, indicating issues with the comfyui manager installation process.
- Discussions surfaced regarding the need for specific model paths and compatibility challenges with Automatic1111.
- Flux model showcases impressive features: Users highlighted the effectiveness of the Flux model, especially in generating consistent character images and fixing details like hands and feet.
- One member shared a link to a Flux lora that surprisingly enhances image quality despite its primary use not being for that purpose.
- Troubleshooting Automatic1111 installation: A member reported encountering issues installing Automatic1111 with the latest Python version, sparking questions about compatibility.
- Suggestions pointed towards using virtual environments or containers like Docker to manage different Python versions.
- Debian-based OS discussions: A conversation arose regarding the use of Debian-based operating systems, with users noting individual quirks of popular distributions like Pop and Mint.
- One user humorously expressed their intent to retry Pop for its unique traits.
- Python version compatibility concerns: Members discussed the implications of using the latest Python version, suggesting that older versions may provide better compatibility for certain scripts.
- One user contemplated adjusting their environment for running scripts separately to resolve stability issues.
- ostris/OpenFLUX.1 · Hugging Face: no description found
- Flux Fusion DS (Smooth merge) [4+ steps] [AIO & UNET] [ALL GGUF • NF4 • FP8/FP16] - v0-fp8-e4m3fn (AIO) | Flux Checkpoint | Civitai: GGUF & BNB QUANTS FOR LOW VRAM. NEED EXTRA SETUP ↓↓↓ AIO (All in one) versions include UNET + VAE + CLIP L + T5XXL (fp8). UNET ONLY VERSIONS AL...
- Marvelmixx - v1.0 | Flux LoRA | Civitai: This LoRA is a unique blend of the styles from the renowned comic book artists Jim Lee, Joe Madureira, and Mike Deodato. It captures the dynamic en...
Perplexity AI ▷ #general (38 messages🔥):
Rate Limit Increases
Llama 3.2 Release
LiquidAI Performance
Chat Download Feature
Text-to-Speech Utility
- Discussions on Rate Limit Increases: A member inquired whether asking for rate limit increases on the API is an option, wanting to surpass the 20 limit.
- Another member confirmed the request, highlighting the prevalent need for higher thresholds.
- Anticipation for Llama 3.2 Release: Interest in the release of Llama 3.2 was expressed multiple times with users eager for its arrival.
- One member humorously noted, laughs in mr arvind, alluding to ambiguity regarding the release timeline.
- LiquidAI Praised for Speed: LiquidAI received accolades for its speed, with a user exclaiming that it is crazy fast compared to existing models.
- However, it was noted that while it functions quickly, it is also considered to be widely inaccurate.
- Functionality of Chat Download: A query was raised about the ability to download entire chat sessions as PDFs, which another user confirmed having accomplished.
- A discussion highlighted the potential need for this feature, particularly when saving complete conversations.
- Text-to-Speech Feature Feedback: The text-to-speech (TTS) feature's utility was discussed, with a user mentioning they frequently use it for long replies at work.
- Despite some pronunciation issues, they find it a valuable tool, indicating a need for its improvement.
Perplexity AI ▷ #sharing (7 messages):
Perplexity AI and Philosophy
LiquidAI GPT Rival Launch
Stability AI vs ClipDrop
FLUX Model Efficiency
Current AI Model Landscape
- Perplexity AI aids in Philosophical Research: A user expressed appreciation for Perplexity AI's ability to assist in philosophical research, sharing a link related to Thucydides.
- This shows the platform's utility in accessing philosophical insights quickly.
- LiquidAI Launches GPT Rival: LiquidAI has debuted its new GPT competitor, highlighting a shift in the AI landscape along with updates on how Telegram is adjusting its policies.
- This development was shared via a YouTube clip, indicating significant progress in AI tools.
- Stability AI vs ClipDrop Comparison: A user accessed a comparison between Stability AI and ClipDrop to better understand the differences and similarities.
- The link shared provides further insights on their capabilities.
- FLUX Model Overview: A user discovered that FLUX does not discriminate between pictures and paintings, emphasizing its speed in processing.
- This model's abilities could advance the field of image recognition, as noted in a recent search.
- Current AI Model Insights: Queries on the current landscape of AI models have been made, with users seeking insights into emerging technologies.
- The exchange highlights ongoing discussions about which models are leading the field, as indicated in a shared link.
Perplexity AI ▷ #pplx-api (2 messages):
API Credit Usage
Account Details Inquiry
- User inquires about API credit changes: A user raised a question regarding when they will see changes in their API credits, expressing confusion as they have a $0 usage cost despite using the API yesterday.
- This inquiry highlights the user's concern about understanding billing cycles for their API usage.
- Support offered for account details: Another member responded promptly to the user's query, asking for them to DM their account details for further assistance.
- This mutual interaction indicates readiness within the community to support users facing issues with their accounts.
Cohere ▷ #discussions (26 messages🔥):
Credit card cloud support
Event notifications issue
MSFT Copilot Studio inquiry
- Seeking Credit Card Cloud and Apple Pay Support: A member expressed the need for full support for credit card cloud and Apple Pay, prompting another member to recommend emailing support@cohere.com for assistance.
- They offered to take over from there, ensuring a smoother process for help.
- Event Notifications Coming Late: A member reported that event notifications were arriving after the actual events, specifically noting issues during the last Office Hours meeting.
- Another member acknowledged the problem as a technical glitch, thanking the reporter for bringing it to attention.
- Inquiry on MSFT Copilot Studio: A member asked if anyone had experience with MSFT Copilot Studio, questioning its value compared to alternative solutions.
- Another member remarked that this space wasn't for advertising services, indicating sensitivity around promotional queries.
Cohere ▷ #questions (13 messages🔥):
Azure Model Refresh Issues
Cohere Chat App Roadmap
Cohere Webinar Opportunities
RAG++ Course Resources
Reasoning Models for AI Agents
- Azure Model Refresh Issues: A member reported a hiccup with refreshing models via Azure and was advised to submit a support ticket while also contacting the Azure team for further assistance.
- Another member requested the issue ID for tracking purposes, emphasizing the need for communication with the Azure team about the ongoing issue.
- Inquiry on Cohere Chat App Development: A member asked if Cohere plans to develop a chat app, specifically for mobile devices.
- They expressed enthusiasm for Cohere and offered to host a live webinar to promote it within their AI community, highlighting their advocacy for the platform.
- Searching for RAG++ Course Resources: A member requested resources for a RAG++ course, prompting another individual to provide relevant links and information related to Retrieval Augmented Generation (RAG).
- Resources included guides on using RAG in Cohere and cookbooks for practical implementation, enhancing users' experience in building generative AI applications.
- Discussion on Reasoning Models in AI Agents: A member initiated a discussion about decomposing complexity in generative AI applications, questioning the offloading of tasks to models with advanced reasoning capabilities.
- They outlined the trade-offs between granularity and the risks of coupling with specific models, prompting community insights on this topic.
Link mentioned: Cookbooks — Cohere: Explore a range of AI guides and get started with Cohere's generative platform, ready-made and best-practice optimized.
Cohere ▷ #api-discussions (6 messages):
API Error 403
Model Transfer Issues
Support Contact
- API Error 403 experienced: A user reported encountering a 403 Forbidden error when trying to use the API after transferring to another server, which halted all model functionality.
- 'None of the models seem to work after transferring,' one member noted, raising concerns about server configuration.
- Immediate support suggested: In response to the API error, a community member recommended reaching out via email at support@cohere.com for immediate assistance.
- They offered encouragement, stating members are always welcome to seek help.
- Previous awareness of the issue: Another member noted that the API error has been addressed previously, suggesting it may not be a new issue.
- They indicated that this topic may have already been discussed among the support team.
- Team inquiry initiated: A community member indicated their intention to ask the team about the error to provide further assistance.
- Their input shows a collaborative effort to solve the issue.
LlamaIndex ▷ #blog (2 messages):
Contextual Retrieval RAG
Multi-agent systems
Human in the loop feedback
TypeScript workflows
- Cost-effective Contextual Retrieval RAG with Metadata: A member shared @AnthropicAI's new RAG technique that enhances retrieval by prepending metadata to document chunks, improving performance and cost-effectiveness.
- This method guides the retrieval process more accurately based on the contextual position in documents.
- Human Feedback Powers Multi-agent Blog-writing System: An exciting blog-writing agent utilizing multi-agent systems has been demonstrated, incorporating human in the loop feedback into TypeScript workflows.
- Viewers can see the agent in action, writing and editing in real-time, showcased in this live demonstration.
LlamaIndex ▷ #general (37 messages🔥):
LlamaIndex Infrastructure
GPU Utilization
HuggingFace LLM Usage
NVLM Support
Document Management Strategies
- Discussing LlamaIndex infrastructure setup: Members shared insights on hardware specifications for running LlamaIndex, noting varying needs based on model and data size.
- Key considerations included the necessity of GPUs for running LLM and embedding models, with recommendations for specific vector databases.
- NVLM from NVIDIA gains attention: The introduction of NVIDIA's NVLM 1.0, a multimodal large language model, was highlighted, emphasizing its state-of-the-art capabilities in vision-language tasks.
- Members speculated on potential support within LlamaIndex, particularly regarding the large GPU requirements and loading configurations.
- HuggingFace LLM integration concerns: Discussions centered around using HuggingFace models locally versus through APIs, with suggestions on how to connect pre-saved models effectively.
- Members also explored the use of specific techniques and libraries for wrapping and enhancing model interactions; a wrapper sketch follows at the end of this section.
- Slice and Dice Document Management: A member inquired about the best practices for indexing web articles, questioning whether to keep them combined or as individual documents.
- The conversation hinted at the benefits of maintaining articles separately for improved document management within indexing frameworks.
- Explorations into Code Adjustments: A user explored the codebase in-depth, particularly regarding the similarity between query and chat engines, revealing details about code implementation.
- This highlighted the complexity of parameter settings, leading to suggestions for potential feature requests.
- nvidia/NVLM-D-72B · Hugging Face: no description found
- Hugging Face LLMs - LlamaIndex: no description found
LlamaIndex ▷ #ai-discussion (1 messages):
Oracle AI Vector Search
LlamaIndex Framework
Semantic Search
Retrieval Augmented Generation (RAG)
- Oracle AI Vector Search Revolutionizes Semantic Search: Oracle AI Vector Search, a groundbreaking feature of Oracle Database, leads the charge in the semantic search domain, enabling systems to understand information based on meaning rather than keywords.
- This technology, when paired with the LlamaIndex framework, is positioned as a powerful solution for building sophisticated RAG pipelines.
- LlamaIndex Enhances Oracle's Vector Search: Combining LlamaIndex with Oracle AI Vector Search creates a robust infrastructure for retrieval augmented generation, enhancing data retrieval capabilities.
- The integration promises improved efficiency in processing and accessing information, expanding its influence in AI applications.
Link mentioned: Oracle AI Vector Search with LlamaIndex: A Powerful Combination: Ankush k Singal
Torchtune ▷ #announcements (1 messages):
2024 PyTorch Contributor Awards
Salman Mohammadi
Community Contributions
PyTorch Growth Statistics
- Salman Mohammadi nominated for 2024 Contributor Awards: Our very own Salman Mohammadi was nominated for the 2024 PyTorch Contributor Awards for his significant contributions on GitHub and support in the Discord community.
- Salman's efforts have been recognized as essential in enhancing the PyTorch ecosystem.
- Annual Contributor Awards highlight PyTorch's growth: The Annual PyTorch Contributor Awards will be celebrated at the 2024 PyTorch Conference, honoring individuals who have significantly contributed to the ecosystem.
- With contributions from over 3,500 individuals and 3,000 organizations this year, PyTorch has seen tremendous growth since just 200 organizations were involved two years ago.
- Acknowledgment of community contributions: The announcement extends heartfelt thanks to the PyTorch community for their dedication, passion, and hard work, which have been vital to PyTorch's success.
- Each contribution has played a pivotal role in advancing AI and machine learning.
Link mentioned: Announcing the 2024 PyTorch Contributor Awards: no description found
Torchtune ▷ #general (18 messages🔥):
Knowledge Distillation
Training Token Probabilities
Dataset Creation for Distillation
Optimization Flags in Torchtune
- Training tokens vs. one-hot vectors in distillation: Members discussed whether distillation is more effective due to using probabilities for all tokens instead of one-hot vectors, pointing out that larger models can create useful latent representations.
- It was suggested that learning from both labeled and unlabeled data can 'smooth' the loss landscape during the distillation process; a loss sketch follows at the end of this section.
- Creating datasets for knowledge distillation: A member inquired about generating datasets through random token sequences, leading to a suggestion for creating datasets from model generations.
- It was confirmed that using the same dataset for both training and distillation is a feasible approach.
- Streamlined process for knowledge distillation: The optimal process for effective distillation involves fine-tuning a large model on a desired dataset before distilling knowledge into a smaller, non-fine-tuned model.
- Members shared insights on keeping track of experiments and optimizing settings to reduce computational costs.
- Optimization flags for Torchtune efficiency: Members discussed the use of optimization flags such as compile=True and dataset.packed=True to enhance performance in Torchtune.
- It was recommended to leverage nightly builds for improved performance and to consider higher ranks in LoRA for better results.
- Distilling the Knowledge in a Neural Network: A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making pr...
- Distilling Llama3.1 8B into Llama3.2 1B using Knowledge Distillation — torchtune main documentation: no description found
- GitHub - pytorch/torchtune: A Native-PyTorch Library for LLM Fine-tuning: A Native-PyTorch Library for LLM Fine-tuning. Contribute to pytorch/torchtune development by creating an account on GitHub.
Torchtune ▷ #dev (13 messages🔥):
H200s Deployment
Local In-House LLMs
Healthcare Data Regulations
B100s Hardware Plans
- H200s are on their way: A member confirmed their H200s are already on the road, expressing excitement about whichever arrives first.
- They mentioned having 8x H200 with 4TB RAM, showcasing their impressive hardware setup.
- Local In-House LLMs are the goal: There's active discussion about deploying in-house LLMs as current APIs cannot handle health data in Europe.
- A member emphasized that having local infrastructure makes everyone feel safer with sensitive information.
- B100s Expected for Future: Plans to acquire B100s hardware were mentioned, indicating a significant upgrade.
- Members expressed hope to get more resources shortly, reinforcing their commitment to local processing capabilities.
- Navigating Healthcare Data Compliance: A member expressed that not many services are HIPAA compliant, highlighting past challenges in the healthcare sector.
- They noted that EU regulations are even more stringent than HIPAA, further complicating matters for those handling health data.
Modular (Mojo 🔥) ▷ #mojo (25 messages🔥):
Mojo Literals
EC2 Instance Requirements
Mojo Library Imports
Memory Management
Import Behavior Differences
- Mojo Literals still in progress: A member confirmed that literals do not work yet in Mojo, suggesting an alternative approach with msg.extend(List[UInt8](0, 0, 0, 0)).
- Another user expressed hope that try expressions will be implemented later.
- Running Mojo on EC2 instance: One user encountered a JIT session error on a cheap EC2 t2.micro instance, indicating potential memory limitations while compiling.
- Members advised that at least 8GB of RAM might be necessary, and one confirmed that 2GB was enough to build a binary.
- Future of Mojo Library Imports: A discussion arose about the possibility of Mojo supporting import library for CPython libraries instead of the current cpython.import_module method.
- Concerns were raised about module name conflicts between Mojo and Python, with the proposed solution being import precedence for Mojo.
- Memory Management Insights: A member suggested the use of swap memory on EC2 but warned that it could lead to performance issues as IOPS are consumed.
- Another user confirmed successful operations on 8GB of memory, while concerns about how Mojo handles memory-specific imports were also discussed.
- Differences in Import Behavior: A user noted that Mojo does not currently handle imports with side effects, unlike Python, which complicates compatibility.
- The conversation highlighted that the Mojo compiler may not need to replicate all the nuanced behaviors of Python imports.
OpenInterpreter ▷ #general (9 messages🔥):
Nova LLM Launch
Function Calls in Open Interpreter
Open Interpreter Computer Role
TradingView Experience
October House Party
- Nova LLM Launch Announced: 🚀 Nova has launched its suite of Large Language Models, including Nova-Instant, Nova-Air, and Nova-Pro, designed for exceptional performance with an 88.8% score on MMLU.
- Notably, Nova-Pro outperforms rivals in reasoning and math, achieving a stellar 97.2% on ARC-C and 96.9% on GSM8K.
- Function Calls with Open Interpreter Explained: A user questioned whether they could define their own functions in a Python project using Open Interpreter, pointing to the setting interpreter.llm.supports_functions; a minimal sketch follows at the end of this section.
- Another member clarified that while Open Interpreter can write functions on the fly, defining strict functions ensures the model calls them correctly, referencing the OpenAI documentation.
- Exploring the Computer Role in Open Interpreter: A member mentioned discovering a 'computer' role feature in Open Interpreter, potentially aiding in function calling within Python applications.
- They referred to relevant details in the Open Interpreter migration guide.
- TradingView Background Shared: One user shared their background in TradingView using Pine Script and full-stack development, indicating experience in creating e-Commerce platforms with React+Node and Vue+Laravel.
- They announced their availability for new opportunities after finishing previous work.
- Reminder for October House Party: A reminder was issued for the upcoming October House Party scheduled for tomorrow, providing a link for participants to join.
- Participants were encouraged to bring questions and share any recent projects built with Open Interpreter.
Link mentioned: Tweet from Rubiks AI (@RubiksAI): 🚀 Introducing Nova: The Next Generation of LLMs by Nova! 🌟 We're thrilled to announce the launch of our latest suite of Large Language Models: Nova-Instant, Nova-Air, and Nova-Pro. Each designe...
OpenInterpreter ▷ #O1 (7 messages):
01 app capabilities
OS mode confusion
- 01 app mirrors Light's capabilities: A member inquired if the 01 app has access to the same functionalities as the Light app, such as screen access.
- Another member confirmed that the server has the same capabilities, aligning the 01 app with those of Light.
- Access procedure questioned: The same member asked how to access these capabilities since there's no 'os' mode available within the 01 app.
- Further clarification was given, noting that 'os mode' is related to a feature specific to the Open Interpreter, not the 01 app.
OpenInterpreter ▷ #ai-content (3 messages):
Realtime API
Vision in Fine-Tuning
Prompt Caching
Model Distillation
Tool Use Podcast
- Introducing the Realtime API: A new realtime API enables speech-to-speech capabilities, showcasing advancements in interactive communication.
- This API is intended to enhance various applications with immediate responses in conversational AI.
- Vision Introduced in Fine-Tuning API: OpenAI has announced vision in the fine-tuning API, expanding the API's functionalities beyond text processing.
- This addition allows models to incorporate visual data during training, opening new avenues for multimodal AI applications.
- Efficient Prompt Caching Unveiled: The introduction of prompt caching promises 50% discounts and faster processing for recently seen input tokens.
- This feature cuts the latency and cost of repeated prompt prefixes (a structuring sketch follows after this section's bullets).
- Model Distillation Takes Center Stage: Model distillation lets developers train smaller, cheaper models on the outputs of larger ones.
- The technique aims to cut computational load while preserving most of the larger model's accuracy.
- New Episode of Tool Use Podcast Released: Today's episode of Tool Use features a guest appearance by a well-known AI figure; watch it here.
- Additionally, there's a separate discussion showcased in another video episode, emphasizing ongoing innovation in the field.
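Back on the prompt-caching item above: since caching keys on repeated prompt prefixes, call structure matters. A minimal sketch, assuming the OpenAI Python SDK and the announced behavior (automatic caching of sufficiently long, byte-identical prefixes); the prompt text is a placeholder:

```python
# Hedged sketch: keep the static part of the prompt identical across calls so
# it can be served from the cache, and append only the variable part.
from openai import OpenAI

client = OpenAI()

# Reused verbatim on every call, so it forms a cacheable prefix
STATIC_SYSTEM_PROMPT = "...long, unchanging instructions and few-shot examples..."

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": STATIC_SYSTEM_PROMPT},  # cacheable prefix
            {"role": "user", "content": question},                # variable suffix
        ],
    )
    return response.choices[0].message.content
```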
Link mentioned: Tweet from Sam Altman (@sama): realtime api (speech-to-speech): https://openai.com/index/introducing-the-realtime-api/ vision in the fine-tuning api: https://openai.com/index/introducing-vision-to-the-fine-tuning-api/ prompt cach...
LangChain AI ▷ #general (15 messages🔥):
LangChain support for GPT Realtime API
Using HuggingFace models in LangChain
Concerns about curly braces in prompt templates
Local hardware for AI model deployment
Feedback on Microsoft Copilot Studio
- LangChain support for GPT Realtime API: Members expressed curiosity about when LangChain will support the newly announced GPT Realtime API.
- Speculation remains, with no definitive answer shared in the chat.
- HuggingFace models usable as Agents in LangChain: It was confirmed that HuggingFace models can be used as Agents in LangChain for tasks like chat and text generation, with a code snippet illustrating how to create a LangChain LLM from a HuggingFace pipeline (a minimal version appears after this section's bullets).
- For detailed documentation, members were directed to the LangChain documentation and the relevant GitHub issue.
- Handling curly braces in prompt templates: A member asked how to pass a string containing curly braces through chat prompt templates without it being interpreted as placeholders in LangChain.
- Alternate solutions were sought, since the approaches tried so far altered the input during processing (an escaping sketch also follows below).
- Local AI model hardware discussion: A user mentioned their workplace purchasing hardware to run AI models locally for internal chatbots, particularly referencing llama.cpp as a common choice.
- Discussion revolved around what configurations or models are preferred for local deployment.
- Opinions sought on Microsoft Copilot Studio: Members shared their thoughts and experiences regarding Microsoft Copilot Studio and its value compared to other solutions.
- The inquiry sparked a conversation about alternatives in the market but was noted as somewhat off-topic.
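A minimal version of the HuggingFace-pipeline snippet discussed above, assuming the langchain-huggingface integration package (earlier releases expose the same class from langchain_community); the gpt2 checkpoint is just a small placeholder:

```python
# Hedged sketch: wrap a HuggingFace text-generation pipeline as a LangChain LLM.
from langchain_huggingface import HuggingFacePipeline

llm = HuggingFacePipeline.from_model_id(
    model_id="gpt2",                      # placeholder checkpoint
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 64},
)
print(llm.invoke("Explain retrieval-augmented generation in one sentence:"))
```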
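And for the curly-brace question, the standard LangChain workaround: double the braces to escape literals, or keep risky text out of the template entirely by passing it as an input variable. A minimal sketch:

```python
# Hedged sketch: two ways to keep literal curly braces intact in LangChain
# prompt templates.
from langchain_core.prompts import ChatPromptTemplate

# Option 1: escape literal braces by doubling them
prompt = ChatPromptTemplate.from_template(
    'Explain this JSON for {audience}: {{"key": "value"}}'
)
print(prompt.format(audience="beginners"))

# Option 2: pass the text as a variable so it is never parsed as a template
safe_prompt = ChatPromptTemplate.from_template("Explain this snippet:\n{snippet}")
print(safe_prompt.format(snippet='{"key": "value"}'))  # braces arrive untouched
```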
LangChain AI ▷ #share-your-work (2 messages):
Nova LLMs
LumiNova
OppyDev AI
- Nova LLMs dominate with SOTA performance: The launch of Nova LLMs includes Nova-Instant, Nova-Air, and Nova-Pro, which outshine GPT-4o and Claude-3.5 Sonnet with top MMLU scores, with Nova-Pro leading at 88.8%.
- Nova-Pro excels with 97.2% on ARC-C for reasoning and 96.9% on GSM8K for mathematics, promoting itself as the go-to model for AI interactions. More details can be found here.
- Introducing LumiNova for stunning visuals: The newly launched LumiNova model is designed for exceptional image generation, promising unmatched quality and diversity in visual outputs.
- This model complements the Nova series by enhancing AI's visual creativity, creating new opportunities for interactive capabilities.
- Three quick code updates with OppyDev AI: OppyDev AI introduces a video guide demonstrating three easy methods for updating code effectively.
- This resource aims to streamline coding tasks, making it easier for developers to enhance their codebases with AI assistance.
Link mentioned: Tweet from Rubiks AI (@RubiksAI): 🚀 Introducing Nova: The Next Generation of LLMs by Nova! 🌟 We're thrilled to announce the launch of our latest suite of Large Language Models: Nova-Instant, Nova-Air, and Nova-Pro. Each designe...
OpenAccess AI Collective (axolotl) ▷ #general (8 messages🔥):
NVIDIA's 72B model
Qwen 2.5 Deployment
Advancements in small models
- NVIDIA's 72B model rivals Llama 3.1: NVIDIA just published a 72B model that is approximately on par with Llama 3.1 405B in math and coding evaluations, while also featuring vision capabilities.
- Wow, what a time to be alive!
- Qwen 2.5 achieves impressive performance: A member successfully deployed Qwen 2.5 34B and described its performance as insanely good, comparable to GPT-4 Turbo.
- The excitement about small model advancements was palpable, with discussions focusing on deployment specifics and vision support.
- The potential of small models: Members expressed amazement at how good small models are becoming and pondered the limits of their capabilities.
- How far exactly can we push this? What are the actual limits?
Link mentioned: Tweet from Phil (@phill__1): Wow nvidia just published a 72B model with is ~on par with llama 3.1 405B in math and coding evals and also has vision 🤯
OpenAccess AI Collective (axolotl) ▷ #axolotl-help-bot (6 messages):
hf_mlflow_log_artifacts
Custom instruct format in sharegpt
YAML configuration for datasets
Using Axolotl for instruction tuning
- Clarification on hf_mlflow_log_artifacts: A member questioned whether setting hf_mlflow_log_artifacts to true would result in model checkpoints being saved to mlflow.
- This indicates ongoing concerns about the integration of logging mechanisms in their model training process.
- Defining a Custom Instruct Format: Instructions on specifying a custom instruct format for datasets in sharegpt format were discussed, emphasizing the use of YAML configuration.
- Key steps include defining a custom prompt format and ensuring the dataset is in JSONL format for successful integration.
- Guide to YAML Configuration for Datasets: The conversation outlined how to structure a YAML config to preprocess datasets in Axolotl, including examples of the required fields (a condensed example follows below).
- Specific placeholders in the YAML file let users tailor instruction tuning to their needs.
- Utilizing Axolotl for Dataset Preprocessing: Once the YAML configuration is prepared, it can be employed with Axolotl to customize dataset formatting for training.
- This approach enhances the flexibility of the training process, allowing tailored configurations for specific tasks.
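A condensed sketch of the dataset configuration being described, following the conventions in the Axolotl dataset-format docs linked below; paths and field names are placeholders:

```yaml
# Hedged sketch: one custom instruct-format dataset and one sharegpt dataset.
datasets:
  - path: data/train.jsonl           # JSONL, one record per line
    type:                            # user-defined prompt format
      field_instruction: instruction
      field_output: output
      format: "[INST] {instruction} [/INST]"
      no_input_format: "[INST] {instruction} [/INST]"
  - path: data/conversations.jsonl   # sharegpt-style conversations
    type: sharegpt
    conversation: chatml
```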
- axolotl/docs/dataset-formats/inst_tune.qmd at main · axolotl-ai-cloud/axolotl: Go ahead and axolotl questions. Contribute to axolotl-ai-cloud/axolotl development by creating an account on GitHub.
- OpenAccess-AI-Collective/axolotl | Phorm AI Code Search: Understand code, faster.
tinygrad (George Hotz) ▷ #general (6 messages):
Unboxing Tiny Box
GitHub Bugfix Review
PR Refactoring Discussion
- Unboxing Tiny Box gets rave review: <@533837520580902912> unboxed the tiny box shipped to Australia by Proxy, praising the great packaging and wood base as a nice touch.
- Initially worried about the NY-to-AU shipment, they noted that whoever secured it did a great job.
- Call for review on bugfix PR: <@vladov3000> is seeking a reviewer for this bugfix PR that addresses the issue of saving and loading tensors twice.
- The PR aims to solve issue #6294, explaining that disk devices keep unlinked files and don't create new ones.
- Minimizing the PR for clarity: <@georgehotz> requested that the PR be made absolutely minimal, suggesting a need for clarity.
- <@vladov3000> agreed and proposed to separate the changes into two different PRs, one for refactoring and one for fixes.
Link mentioned: Fix tensor saving and loading twice. by vladov3000 · Pull Request #6815 · tinygrad/tinygrad: Solves #6294. See this comment for the details. TLDR; disk devices keep around unlinked files and don't create new files. This is my first contribution, so there may be a few rough edges. Name...
tinygrad (George Hotz) ▷ #learn-tinygrad (3 messages):
tinygrad code benefits
Python productivity and C interoperability
UOp vs UOp pool optimization issues
Compiler/Program/Allocator challenges
Distributed training in tinygrad
- tinygrad Code Enhances Coding Skills: Exploring the tinygrad codebase has positively impacted a member's day job coding, showcasing how engaging with open-source projects can improve programming skills.
- It's making my day job coding better as a side effect.
- Python's Ease and C Interoperability Shine: Members noted that Python is both productive for rapid iteration and effective at C interoperability, despite some friction around declaring structs.
- They agreed that calling C functions is straightforward, which helps performance for low-level operations (see the ctypes sketch after this section's bullets).
- Challenges with UOp and the UOp Pool: A member expressed frustration with how UOp objects interact with the UOp pool, pointing out that optimization is hard when everything is tracked through individual object references.
- They advocated for a storage class that manages objects via integer handles (a sketch of that pattern also follows below).
- Compiler/Program/Allocator Efficiency Issues: Concerns were raised about inefficiencies in Compiler/Program/Allocator/Buffer abstractions, particularly regarding the temporary file creation by the clang backend.
- This practice causes significant delays, especially in programs where the backend creates identical temporary files repeatedly, affecting CPU usage.
- Inquiries about Distributed Training Examples: A member requested examples of pytorch training scripts for distributed runs that could be ported to tinygrad, sparking interest in current capabilities.
- They mentioned having AMD compute power to potentially compare larger models like llama 3.1 70B.
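On the C-interop point, a minimal sketch using the standard library's ctypes on a Unix-like system; the struct declaration shows the boilerplate members found comparatively clunky:

```python
# Hedged sketch: calling libc from Python with ctypes (Unix-like systems).
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Plain function call: straightforward
libc.printf(b"hello from C, %d\n", 42)

# Structs need explicit field declarations, which is the clunky part
class Timeval(ctypes.Structure):
    _fields_ = [("tv_sec", ctypes.c_long), ("tv_usec", ctypes.c_long)]

tv = Timeval()
libc.gettimeofday(ctypes.byref(tv), None)
print(tv.tv_sec)
```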
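And a sketch of the integer-handle storage pattern the member advocated; the class and keying scheme are illustrative, not tinygrad code:

```python
# Hedged sketch: a pool that hands out integer handles instead of object
# references, so relationships can be stored as plain ints (cheap to hash,
# compare, and serialize). Purely illustrative; not tinygrad's UOp pool.
class HandlePool:
    def __init__(self):
        self._objects = []   # handle -> object
        self._index = {}     # object key -> handle (interning/dedup)

    def intern(self, obj) -> int:
        key = repr(obj)      # stand-in for a real structural key
        handle = self._index.get(key)
        if handle is None:
            handle = len(self._objects)
            self._objects.append(obj)
            self._index[key] = handle
        return handle

    def get(self, handle: int):
        return self._objects[handle]

pool = HandlePool()
h1 = pool.intern(("ADD", 1, 2))
h2 = pool.intern(("ADD", 1, 2))
assert h1 == h2              # identical nodes share one handle
```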
LAION ▷ #general (2 messages):
Spam concerns
- Strong Anti-Spam Sentiment: A member expressed a strong dislike for spam, highlighting their frustrations with unwanted messages.
- This sentiment reflects a common challenge across digital platforms where spam can clutter communication.
- Moderators Attention Needed: The message referencing <@&825830190600683521> suggests a need for moderators to manage spam effectively.
- This indicates an ongoing concern within the community regarding the volume of spam messages.
LAION ▷ #resources (1 messages):
Sci Scope Newsletter
ArXiv Paper Summaries
Personalized Research Updates
- Sci Scope Newsletter Launch Announced: The personalized newsletter from Sci Scope is now available, allowing users to sign up and choose their preferred research areas to receive tailored updates weekly on new papers.
- Never miss out on research relevant to your work again! Users can try it out now for a stress-free way to keep up with developments in AI.
- Weekly AI Research Summaries for Busy Professionals: The newsletter promises to scan new ArXiv papers and deliver a concise summary to subscribers, saving them hours of work each week.
- It aims to make it easier for users to stay updated and select their next reading material with a weekly high-level summary.
- Exclusive Offer for New Users: New users can sign up for a free 1-month trial, giving them access to custom queries and a tailored experience.
- This feature enhances the user experience, allowing for a more relevant and efficient engagement with the rapidly advancing field of AI.
Link mentioned: Sci Scope: An AI generated newsletter on AI research
DSPy ▷ #papers (1 messages):
Personalized Newsletter
AI Research Updates
Weekly Summaries
ArXiv Papers
Sci Scope Features
- Personalized Newsletter Launches: Sci Scope has launched a personalized newsletter where users can sign up to receive summaries of new papers tailored to their interests each week.
- This feature aims to save time and ensure that users never miss significant updates relevant to their work, making it easier to stay informed.
- Effortless AI Research Tracking: The newsletter scans for new ArXiv papers based on user-defined preferences, delivering concise summaries directly to inboxes.
- This service is designed to help researchers stay on the bleeding edge of AI research without getting overwhelmed.
- Free Trial for New Users: Sci Scope offers a free 1-month trial allowing users to experience the benefits of weekly AI research summaries.
- This initiative aims to attract users to try the tailored newsletter and discover its potential value.
- Convenient Reading Material Selection: The service groups together newest AI research papers with similar topics for easier navigation and selection.
- This approach provides a straightforward way for users to choose their next reading material without excessive effort.
Link mentioned: Sci Scope: An AI generated newsletter on AI research
DSPy ▷ #colbert (1 messages):
Code Similarity Search
ColBERT for Code Search
Code Search Alternatives
- Exploring Code Similarity Search: A member asked for recommendations on getting started with code similarity, expressing interest in using ColBERT for a code-search setup where code snippets retrieve relevant code documents.
- They questioned whether ColBERT would suit this application out of the box or would need further finetuning first (a retrieval sketch follows after this section's bullets).
- Open to Alternative Approaches for Code Search: The member was also open to other ideas for approaching code search and asked the community for guidance.
- They framed the question broadly, welcoming methods beyond out-of-the-box ColBERT usage.
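A minimal retrieval sketch, assuming the RAGatouille wrapper around ColBERT and the public colbertv2.0 checkpoint; the documents and query are placeholders, and as the discussion notes, a checkpoint trained on natural language may need finetuning on code pairs to rank code well:

```python
# Hedged sketch: ColBERT-style retrieval over code snippets via RAGatouille.
from ragatouille import RAGPretrainedModel

rag = RAGPretrainedModel.from_pretrained("colbert-ir/colbertv2.0")

code_docs = [                       # placeholder corpus
    "def quicksort(xs): ...",
    "class LRUCache: ...",
]
rag.index(collection=code_docs, index_name="code-search")
results = rag.search("sort a list in O(n log n)", k=2)
print(results)
```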
LLM Agents (Berkeley MOOC) ▷ #mooc-questions (2 messages):
Lab Release Schedule
Course Communication
- Labs delayed by a week: A member inquired about the release of lab assignments today and whether links would be posted to the course page at llmagents-learning.org.
- Another member confirmed that the staff needs another week before they can release the lab assignments, unfortunately delaying the updates.
- Clarification on Updates: The same member expressed concern over the lack of updates on the lab release and couldn't find any emails or announcements regarding it.
- The response highlighted the communication gap as participants await crucial information about the course progress.
Link mentioned: Large Language Model Agents: no description found
Mozilla AI ▷ #announcements (2 messages):
ML Paper Reading Group
Publishing Local LLM Apps
Job Board Proposal
Lumigator Introduction
Upcoming Events
- Seeking ML Paper Reading Group: A member is looking to establish an ML paper reading group to facilitate discussions on recent research.
- This initiative aims to enhance community engagement and collective knowledge sharing.
- Tips for Publishing Local LLM Apps: A huge thank you was given to a member for sharing valuable insights on successfully publishing local LLM-based apps to the app store.
- Their contributions are seen as crucial for those looking to navigate the app publishing process.
- Proposal for a Job Board: A discussion arose on whether to create a job board for community job postings.
- This topic was initiated by a member who expressed interest in helping connect talent with opportunities.
- Lumigator Officially Introduced: <#1281660143251095634> was officially introduced by the community, highlighting its features and capabilities.
- The introduction reinforces the community's commitment to showcasing innovative projects.
- Upcoming Events Announced: Several upcoming events were highlighted, including Hybrid Search, focusing on search technologies.
- Other events such as Data Pipelines for FineTuning are set to further enrich community knowledge and growth.
MLOps @Chipro ▷ #general-ml (1 messages):
Nova models
LumiNova
MMLU performance
AI Evolution
- Nova Models outshine competitors: Introducing Nova: the next generation of large language models that beat GPT-4 and Claude-3.5 across various benchmarks, with Nova-Pro leading at 88.8% on MMLU.
- The models cater to different needs; Nova-Air excels across diverse applications while Nova-Instant offers speedy, cost-effective solutions.
- Benchmarking Excellence across Nova Models: Nova-Pro shines with impressive scores: 97.2% on ARC-C for reasoning, 96.9% on GSM8K for mathematics, and 91.8% on HumanEval for coding.
- These benchmarks solidify Nova's position as a top contender in the AI field, showcasing its extraordinary capabilities.
- LumiNova revolutionizes image generation: The newly introduced LumiNova sets a high bar for image generation, promising unmatched quality and diversity in visuals.
- This model complements the Nova suite by providing users with advanced tools for creating stunning visuals effortlessly.
- Future developments with Nova-Focus: Looking ahead, the development team is exploring Nova-Focus and enhanced Chain-of-Thought capabilities to further push the boundaries of AI.
- These innovations aim to refine and expand the potential applications of the Nova models in both reasoning and visual generation.
Link mentioned: Tweet from Rubiks AI (@RubiksAI): 🚀 Introducing Nova: The Next Generation of LLMs by Nova! 🌟 We're thrilled to announce the launch of our latest suite of Large Language Models: Nova-Instant, Nova-Air, and Nova-Pro. Each designe...