[AINews] a quiet weekend
This is AI News! an MVP of a service that goes thru all AI discords/Twitters/reddits and summarizes what people are talking about, so that you can keep up without the fatigue. Signing up here opts you in to the real thing when we launch it 🔜
peace and quiet is all you need.
AI News for 8/9/2024-8/12/2024. We checked 7 subreddits, 384 Twitters and 29 Discords (253 channels, and 4266 messages) for you. Estimated reading time saved (at 200wpm): 508 minutes. You can now tag @smol_ai for AINews discussions!
Ahead of the well-telegraphed #MadeByGoogle event tomorrow (and rumored gpt-4o-large release, although of course OpenAI does not think about competitors), it's been a very very quiet weekend, so quiet that our /r/LocalLlama filters came up completely empty for the first time since we started tracking it.
You can check out:
- the new 30% SOTA result on SWE-Bench
- the new GPT-4o model exclusive to the ChatGPT app
- Sebastian Raschka's DPO from Scratch implementation
- Hamel Husain's course recap
Big day tomorrow. Get ready.
Table of Contents
- AI Twitter Recap
- AI Reddit Recap
- AI Discord Recap
- PART 1: High level Discord summaries
- HuggingFace Discord
- Stability.ai (Stable Diffusion) Discord
- Nous Research AI Discord
- LM Studio Discord
- Unsloth AI (Daniel Han) Discord
- CUDA MODE Discord
- Latent Space Discord
- Eleuther Discord
- Perplexity AI Discord
- OpenRouter (Alex Atallah) Discord
- OpenAI Discord
- Modular (Mojo 🔥) Discord
- Cohere Discord
- Torchtune Discord
- LlamaIndex Discord
- LangChain AI Discord
- OpenAccess AI Collective (axolotl) Discord
- DSPy Discord
- tinygrad (George Hotz) Discord
- OpenInterpreter Discord
- LAION Discord
- Interconnects (Nathan Lambert) Discord
- MLOps @Chipro Discord
- LLM Finetuning (Hamel + Dan) Discord
- AI21 Labs (Jamba) Discord
- PART 2: Detailed by-Channel summaries and links
- HuggingFace ▷ #general (357 messages🔥🔥):
- HuggingFace ▷ #today-im-learning (7 messages):
- HuggingFace ▷ #cool-finds (10 messages🔥):
- HuggingFace ▷ #i-made-this (28 messages🔥):
- HuggingFace ▷ #reading-group (9 messages🔥):
- HuggingFace ▷ #core-announcements (1 message):
- HuggingFace ▷ #computer-vision (7 messages):
- HuggingFace ▷ #NLP (29 messages🔥):
- HuggingFace ▷ #diffusion-discussions (43 messages🔥):
- Stability.ai (Stable Diffusion) ▷ #general-chat (448 messages🔥🔥🔥):
- Nous Research AI ▷ #research-papers (2 messages):
- Nous Research AI ▷ #off-topic (18 messages🔥):
- Nous Research AI ▷ #general (319 messages🔥🔥):
- Nous Research AI ▷ #ask-about-llms (23 messages🔥):
- Nous Research AI ▷ #rag-dataset (17 messages🔥):
- Nous Research AI ▷ #reasoning-tasks-master-list (29 messages🔥):
- LM Studio ▷ #general (248 messages🔥🔥):
- LM Studio ▷ #hardware-discussion (114 messages🔥🔥):
- Unsloth AI (Daniel Han) ▷ #general (207 messages🔥🔥):
- Unsloth AI (Daniel Han) ▷ #off-topic (13 messages🔥):
- Unsloth AI (Daniel Han) ▷ #help (110 messages🔥🔥):
- Unsloth AI (Daniel Han) ▷ #research (3 messages):
- CUDA MODE ▷ #general (44 messages🔥):
- CUDA MODE ▷ #torch (52 messages🔥):
- CUDA MODE ▷ #cool-links (3 messages):
- CUDA MODE ▷ #jobs (1 message):
- CUDA MODE ▷ #beginner (7 messages):
- CUDA MODE ▷ #pmpp-book (1 message):
- CUDA MODE ▷ #youtube-recordings (1 message):
- CUDA MODE ▷ #torchao (13 messages🔥):
- CUDA MODE ▷ #ring-attention (2 messages):
- CUDA MODE ▷ #off-topic (4 messages):
- CUDA MODE ▷ #hqq (1 message):
- CUDA MODE ▷ #llmdotc (70 messages🔥🔥):
- CUDA MODE ▷ #bitnet (85 messages🔥🔥):
- CUDA MODE ▷ #webgpu (1 message):
- CUDA MODE ▷ #cudamode-irl (2 messages):
- Latent Space ▷ #ai-general-chat (120 messages🔥🔥):
- Latent Space ▷ #ai-in-action-club (147 messages🔥🔥):
- Eleuther ▷ #announcements (1 message):
- Eleuther ▷ #general (80 messages🔥🔥):
- Eleuther ▷ #research (161 messages🔥🔥):
- Eleuther ▷ #interpretability-general (6 messages):
- Eleuther ▷ #lm-thunderdome (7 messages):
- Eleuther ▷ #gpt-neox-dev (1 message):
- Perplexity AI ▷ #general (209 messages🔥🔥):
- Perplexity AI ▷ #sharing (22 messages🔥):
- Perplexity AI ▷ #pplx-api (7 messages):
- OpenRouter (Alex Atallah) ▷ #announcements (1 message):
- OpenRouter (Alex Atallah) ▷ #app-showcase (4 messages):
- OpenRouter (Alex Atallah) ▷ #general (209 messages🔥🔥):
- OpenAI ▷ #ai-discussions (168 messages🔥🔥):
- OpenAI ▷ #gpt-4-discussions (16 messages🔥):
- OpenAI ▷ #prompt-engineering (11 messages🔥):
- OpenAI ▷ #api-discussions (11 messages🔥):
- Modular (Mojo 🔥) ▷ #general (48 messages🔥):
- Modular (Mojo 🔥) ▷ #mojo (49 messages🔥):
- Modular (Mojo 🔥) ▷ #max (7 messages):
- Cohere ▷ #discussions (58 messages🔥🔥):
- Cohere ▷ #questions (19 messages🔥):
- Cohere ▷ #api-discussions (3 messages):
- Torchtune ▷ #general (25 messages🔥):
- Torchtune ▷ #dev (42 messages🔥):
- LlamaIndex ▷ #blog (7 messages):
- LlamaIndex ▷ #general (42 messages🔥):
- LlamaIndex ▷ #ai-discussion (1 message):
- LangChain AI ▷ #general (33 messages🔥):
- LangChain AI ▷ #share-your-work (3 messages):
- OpenAccess AI Collective (axolotl) ▷ #general (17 messages🔥):
- OpenAccess AI Collective (axolotl) ▷ #axolotl-dev (5 messages):
- OpenAccess AI Collective (axolotl) ▷ #general-help (3 messages):
- OpenAccess AI Collective (axolotl) ▷ #axolotl-phorm-bot (6 messages):
- DSPy ▷ #show-and-tell (4 messages):
- DSPy ▷ #general (20 messages🔥):
- tinygrad (George Hotz) ▷ #general (9 messages🔥):
- tinygrad (George Hotz) ▷ #learn-tinygrad (8 messages🔥):
- OpenInterpreter ▷ #general (15 messages🔥):
- OpenInterpreter ▷ #ai-content (2 messages):
- LAION ▷ #general (8 messages🔥):
- LAION ▷ #research (6 messages):
- Interconnects (Nathan Lambert) ▷ #events (1 message):
- Interconnects (Nathan Lambert) ▷ #ml-questions (2 messages):
- Interconnects (Nathan Lambert) ▷ #random (2 messages):
- Interconnects (Nathan Lambert) ▷ #memes (1 message):
- Interconnects (Nathan Lambert) ▷ #rlhf (2 messages):
- MLOps @Chipro ▷ #events (2 messages):
- MLOps @Chipro ▷ #general-ml (1 message):
- LLM Finetuning (Hamel + Dan) ▷ #general (2 messages):
- AI21 Labs (Jamba) ▷ #general-chat (1 message):
AI Twitter Recap
all recaps done by Claude 3.5 Sonnet, best of 4 runs.
AI and Robotics Developments
- Figure's Humanoid Robot: @adcock_brett announced that Figure revealed their new humanoid, Figure 02, working autonomously at BMW Group's Plant Spartanburg. In just 18 months, Figure has built what they claim to be the most advanced humanoid on the planet.
- DeepMind's Table Tennis Robot: @adcock_brett reported that DeepMind developed a table tennis AI-powered robot with "human-level performance". The robot won 100% against beginners and 55% against intermediates in 29 games.
- Boston Dynamics' Atlas: @adcock_brett shared that Boston Dynamics demonstrated Atlas' dexterity with its ability to do pushups and burpees during a presentation at RSS 2024. This is the company's fully-electric robot that they announced in April.
- Autonomous Dental Robot: @adcock_brett noted that an autonomous robot performed the world's first dental procedure on a human. The system uses a 3D volumetric scanner to create detailed models of the mouth and reduced a 2-hour human procedure to just 15 minutes.
AI Model Developments
- SAM 2: @dair_ai highlighted SAM 2, an open unified model for real-time, promptable object segmentation in images and videos. It can be applied to unseen visual content without custom adaptation.
- Alibaba's Qwen2-Math: @adcock_brett reported that Alibaba released Qwen2-Math, a specialized AI model series that reportedly outperforms GPT-4 and Claude 3.5 in math capabilities.
- Listening-While-Speaking Language Model: @adcock_brett mentioned a new Listening-While-Speaking Language Model (LSLM) that can listen and speak simultaneously in real-time and respond to interruptions.
- Disease Prediction AI: @adcock_brett shared that researchers developed an AI model that can predict major diseases, achieving 95% accuracy in predicting specific diseases like coronary artery disease, type 2 diabetes, and breast cancer.
AI Tools and Applications
- LlamaParse CLI Tool: @llama_index introduced a CLI tool by @0xthierry that lets users parse any PDF, no matter how complex, into machine and LLM-readable markdown on their file system with a simple terminal command.
- MLX Whisper Package: @awnihannun announced that the MLX Whisper package now works with Distil-Whisper and other Transformers-compatible Whisper models. The distil-large-v3 model runs 40X faster than realtime on an M1 Max (a transcription sketch follows this list).
- Golden-Retriever for RAG: @rohanpaul_ai shared details about Golden-Retriever, which enhances Retrieval Augmented Generation (RAG) for industrial knowledge bases. It improves the total score of Meta-Llama-3-70B by 79.2% over vanilla LLM and 40.7% over RAG.
- RecLoRA for Personalization: @rohanpaul_ai described RecLoRA, which tackles personalization in LLMs for recommendation systems. It incorporates a Personalized LoRA module and a Long-Short Modality Retriever, significantly improving performance while adding minimal time cost.
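Picking up the MLX Whisper item above, a minimal transcription sketch assuming the `mlx-whisper` package's `transcribe` API (the audio path and repo name are illustrative):

```python
import mlx_whisper

# Transcribe with a Distil-Whisper checkpoint converted for MLX;
# path_or_hf_repo selects the model (repo name illustrative).
result = mlx_whisper.transcribe(
    "meeting.mp3",
    path_or_hf_repo="mlx-community/distil-whisper-large-v3",
)
print(result["text"])
```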
AI Research and Insights
- LLM Training Cookbook: @BlancheMinerva shared a cookbook led by @QuentinAnthon15 that details essential information often glossed over in papers and resources for learning about training large language models.
- AI Agent Efficiency: @rohanpaul_ai noted that when AI agents can do a task, they do so at 3% of the cost of a human baseline. In the test mentioned, they could complete about 40% of tasks at that efficiency.
- Challenges with LLM Tasks: @aidan_clark pointed out that asking a tokenized LLM to count letters is like asking a colorblind person to distinguish aliased colors, highlighting the fundamental challenges LLMs face with certain tasks.
- Web Scraping with LLMs: @abacaj argued that using LLMs for web scraping at scale is not reliable or affordable compared to traditional methods like Puppeteer or BeautifulSoup scripts.
AI Ethics and Societal Impact
- AI Accessibility: @swyx emphasized that AI is making user interfaces more accessible, information more multilingual, and the world more legible for various groups, including the very young, very old, and non-default-path people.
- OpenAI Board Addition: @adcock_brett reported that OpenAI announced Zico Kolter as the newest member joining their board of directors, bringing technical and AI safety expertise.
AI Reddit Recap
/r/LocalLlama Recap
nothing this weekend passed our upvote bar for inclusion. we were surprised too.
All AI Reddit Recap
/r/machinelearning, /r/openai, /r/stablediffusion, /r/ArtificialInteligence, /r/LLMDevs, /r/Singularity
AI-Generated Media and Creativity
- Surreal AI-generated video featuring Will Smith morphing into unexpected scenes gains popularity on r/singularity, with users comparing it to dreams and Japanese commercials. The video showcases the unpredictable nature of AI-generated content.
- LoRA training progress for improving scene complexity and realism in Flux-Dev model shared on r/StableDiffusion. The results show significant improvements in generating photorealistic images with diverse faces and mundane, cluttered scenes.
- Microsoft's Chief Scientific Officer Eric Horvitz predicts that AI systems will demonstrate undeniable creativity within 18 months, highlighting the rapid advancement in AI-generated content.
AI Development and Industry Perspectives
- An OpenAI employee's tweet de-hyping AI capabilities is positively received on r/singularity, contrasting with previous vague hype posts.
- Discussion on r/singularity about reducing hype and low-effort posts, particularly those featuring screenshots from Twitter "leakers". Users express concern about the potential harm to the AI movement's credibility.
AI Progress and Implications
- A post on r/singularity shares an image suggesting that AI capabilities will continue to improve, sparking discussion about the rapid advancement of AI technology.
Humor and Memes
- An image post on r/OpenAI humorously compares human intelligence to artificial intelligence, garnering significant engagement.
AI Discord Recap
A summary of Summaries of Summaries by GPT4O-Aug (gpt-4o-2024-08-06)
1. LLM Advancements and Benchmarking
- CRAB Benchmark Launches with a Splash: The CRAB (Cross-environment Agent Benchmark) for Multimodal Language Model Agents was introduced, generating positive community interest as seen here.
- Members expressed excitement about the new benchmark, with one commenting 'nicee' in response to the announcement.
- Llama 3.1 Takes the Lead: Discussions highlighted Llama 3.1's impressive 128k training context, making it a strong contender in model performance comparisons.
- Users are keen to experiment with Llama 3.1 for its multiturn capabilities.
2. Image Generation and Multimodal Models
- Flux Model Generates Fast Images: Users praised the Flux model for its rapid image generation capabilities, adjusting parameters like ModelSamplingFlux to enhance output quality.
- Performance varied across hardware, prompting discussions on optimization.
- HawkEye Automates CCTV Monitoring: HawkEye automates CCTV surveillance, detecting dangerous events in real time and notifying authorities.
- Suggestions were made to cross-post on IP cam forums, spurring further interest.
3. OpenAI's Model Performance and Usage
- GPT Excels at Prolog Generation: A member praised GPT-4o for its exceptional performance in Prolog generation and debugging, showcasing its logical reasoning strength.
- Prolog serves as a strong example of how GPT technology can leverage rule-based logic programming effectively.
- Concerns Over AI-Generated Image Detection: There's skepticism about consumers paying to verify if images are AI-generated, as companies often add identifiable elements to their images.
- Discussions focused on improving detection methods to prevent reliance on subtle identifiers.
4. Open Source Development and AI Tools
- OpenRouter Hits Command Line with Bash: A user shared a detailed guide to integrate OpenRouter into the command line using pure Bash, supporting piping and chaining.
- The creator highlighted the simplicity of script creation without dependencies after extensive experimentation.
- Exploring Quantization Techniques: To quantize a model after finetuning, ensure the model is well-trained before following the steps using Hugging Face's `transformers` and `bitsandbytes` libraries.
- Evaluating performance post-quantization is crucial to maintaining model integrity.
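As a concrete sketch of that recipe (the model path is a placeholder; evaluate on a validation set afterward, as noted):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization via bitsandbytes; the path is a placeholder
# for your own finetuned checkpoint.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "path/to/your-finetuned-model",
    quantization_config=bnb_config,
    device_map="auto",
)
# Then re-run your eval set and compare metrics against the unquantized model.
```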
5. AI Applications in Security and Surveillance
- HawkEye Automates CCTV Monitoring: HawkEye automates CCTV surveillance, detecting dangerous events in real time and notifying authorities.
- Suggestions included cross-posting on IP cam forums to spur interest.
- Deep Live Cam Gains Traction: The open-source project Deep Live Cam has gained attention for its potential in live camera feed applications, accessible on GitHub.
- The project is noted for its contributions to AI and real-time image processing solutions.
PART 1: High level Discord summaries
HuggingFace Discord
- Multilingual Models Struggle with Zero-Shot Tasks: Users discussed the feasibility of using Bloom and Google mBERT for zero-shot prompting, emphasizing Bloom's undertraining and poor translation outcomes.
- Alternatives like Aya were suggested for improving translation accuracy in multilingual contexts.
- Image Classification Dataset Frustrations: Participants outlined low model accuracy with large datasets, particularly CIFAR-10, criticizing the unsuitability of ImageNet for quick prototyping.
- They recommended smaller datasets like LSUN and using leaderboards on Papers with Code for benchmark references.
- Hugging Face API Downtime Woes: Frequent downtimes with the Hugging Face inference API were noted, especially when using ZeroGPU, leading to user frustrations.
- Advice was given to filter for warm models to mitigate failures from the extensive model host listings.
- Temperature Tactics in Language Models: Discussions centered on how temperature settings influence next token generation in transformers, raising questions about its effect on softmax normalization.
- Members debated whether tweaking the normalized vector impacts the input significantly across various implementations (a short temperature-scaling sketch follows this list).
- Stable Diffusion Image Quality Concerns: A new user grappled with subpar image quality from Stable Diffusion 1.5, noting over-saturated colors and questioning dataset normalization practices.
- Members speculated on applying uniform normalization strategies (mean = 0.5, std = 0.5) to mitigate color discrepancies across models; a one-line transform is sketched below this list.
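To ground the temperature thread above, a minimal NumPy sketch: dividing logits by T before the softmax sharpens the distribution when T < 1 and flattens it when T > 1, without ever changing the token ranking.

```python
import numpy as np

def softmax_with_temperature(logits, T=1.0):
    z = np.asarray(logits, dtype=float) / T  # temperature rescales logits pre-softmax
    z -= z.max()                             # shift by max for numerical stability
    e = np.exp(z)
    return e / e.sum()

print(softmax_with_temperature([2.0, 1.0, 0.1], T=0.5))  # sharper, near-argmax
print(softmax_with_temperature([2.0, 1.0, 0.1], T=2.0))  # flatter, more random
```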
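And for the normalization speculation, a one-line torchvision sketch: per-channel mean = 0.5, std = 0.5 maps [0, 1] pixels to [-1, 1], the range SD-style pipelines typically expect (an assumption; check the specific model card).

```python
from torchvision import transforms

# (x - 0.5) / 0.5 per channel maps [0, 1] pixels to [-1, 1].
to_model_range = transforms.Compose([
    transforms.ToTensor(),                       # PIL [0, 255] -> float tensor [0, 1]
    transforms.Normalize([0.5] * 3, [0.5] * 3),  # -> [-1, 1]
])
```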
Stability.ai (Stable Diffusion) Discord
- Flux Model Generates Fast Images: Users praised the Flux model for its rapid image generation capabilities, adjusting parameters like ModelSamplingFlux to enhance output quality.
- There were notable differences in performance across various hardware configurations, prompting discussions about optimization.
- ControlNet Faces Compatibility Issues: Members encountered difficulties with ControlNet, especially when using mismatched models or adapters, which led to unforeseen results.
- Suggestions included verifying adapter compatibility and utilizing specific DensePose ControlNet models for improved functionality.
- Exploring Lora Training Techniques: Participants exchanged strategies for Lora training, with one user sharing a tutorial and others discussing fine-tuning for distinct artistic styles.
- Interest in future fine-tuning techniques, particularly with the Flux model, was prevalent among users.
- Mastering Prompt Engineering Techniques: The community highlighted the significance of prompt engineering, testing varied phrasing, groupings, and negative prompts for consistent outputs.
- Insights included the impact of punctuation on model interpretations, which led to richer image generation.
- Stable Diffusion in Graphic Design: Discussions emerged about using Stable Diffusion for creating graphic design elements, including color palettes and gradients.
- This conversation pointed to broader applications of generative AI in practical design workflows beyond traditional art.
Nous Research AI Discord
- CRAB Benchmark Launch: The CRAB (Cross-environment Agent Benchmark) for Multimodal Language Model Agents was introduced here, generating positive community interest.
- Members chimed in with excitement, with one expressing a simple 'nicee' about the announcement.
- HawkEye Automates CCTV Monitoring: HawkEye automates CCTV surveillance, detecting dangerous events in real time and notifying authorities, revolutionizing security protocols.
- There’s a suggestion to cross-post on IP cam forums, spurring further interest within that community.
- Model Performance Showdown: Members compared models Llama 3.1 (8B), Qwen2 (7B), and Gemma 2 (9B), emphasizing Llama 3.1’s impressive 128k training context for long-term tasks.
- They're particularly keen on experimenting with models that boast strong multiturn capabilities.
- Claude's Distinctive Features: A member questioned the unique tasks that Claude performs, seeking to understand the technology behind these capabilities.
- This reflects an ongoing interest in dissecting the differences in model functionalities.
- Navigating PDF to Markdown Conversions: Members shared frustrations about converting PDFs to markdown formats, specifically targeting the extraction of image and graph descriptions.
- Community members found success using Marker for noisy documents and expressed a desire to enhance their extraction techniques.
LM Studio Discord
- LM Studio Struggles with Llama 3.1: Users reported issues with Llama 3.1 in LM Studio, facing model loading errors and performance drops after the latest update.
- Detailed system specs are encouraged in the support channel to diagnose problems further.
- Optimal Specs for Large LLMs: To effectively run large models like Llama 70B, users require adequate RAM and GPU memory, with varying needs based on model weight.
- A 3090 with 24GB VRAM suffices for 27B models, but further evaluations are necessary for even larger configurations (a back-of-envelope estimate is sketched after this list).
- 8700G Blazes through Tokens: With tweaks to RAM timings, the 8700G achieves 16 tok/s on Llama3.1 8B models at 100k context size, despite crashes in LM Studio at high RAM usage.
- The model can almost accommodate the full 128k context in 32GB RAM, showing its capability for high-performance tasks.
- M2 Ultra Outshines 4090: The M2 Ultra allegedly outperforms the 4090 in training times for Llama3.1, averaging 197s per epoch while producing far less noise.
- Users consider switching to the M2 Ultra for its efficiency and quieter operation compared to the noisy 4090.
- Ideas for Server GPU Configurations: The viability of using P40 GPUs for a bespoke 10x P40 server surfaced in discussions, albeit with concerns over power consumption.
- Participants discussed balancing performance and efficiency while exploring higher VRAM options, such as the 4090D with 48GB.
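As a back-of-envelope check on the sizing claims above (weights only; KV cache, activations, and runtime overhead add more):

```python
def approx_weight_vram_gb(params_billion: float, bits_per_weight: float) -> float:
    # 1B params at 8 bits/weight is ~1 GB, so scale linearly from there.
    return params_billion * bits_per_weight / 8

print(approx_weight_vram_gb(70, 4.0))  # ~35 GB: a 4-bit 70B quant overflows one 24GB card
print(approx_weight_vram_gb(27, 4.5))  # ~15 GB: why a 24GB 3090 handles 27B quants
```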
Unsloth AI (Daniel Han) Discord
- Unsloth Fine-Tuning Limitations: Users expressed challenges in fine-tuning models like Phi-3 vision and mixture of experts due to structural dataset needs for effective training.
- Suggestions included integrating conversation instruction datasets for better performance in training contexts.
- AWS Model Deployment Woes: One user faced challenges deploying their fine-tuned unsloth model on AWS, noting lack of shared experiences in the community.
- Recommendations included referencing AWS tutorials specific to LLM deployment for guidance.
- High VRAM Usage for Gemma Models: Discussions highlighted that Gemma models require more VRAM for fine-tuning compared to others like Llama, raising optimization concerns.
- Users noted the potential benefits of installing Flash Attention to improve VRAM management during training.
- Celebrating Unsloth's Popularity: Unsloth celebrated reaching 2 million monthly downloads on Hugging Face, prompting excitement among users.
- Members congratulated one another, showcasing the community's enthusiasm for the model's growing adoption.
- Emergence of Hybrid Neural Networks: An innovative Hybrid Neural Network-Transformer Architecture has been proposed, pushing AI capabilities forward.
- This approach combines the strengths of neural networks and transformers, signaling a potential shift in AI model design.
CUDA MODE Discord
- Clarification on XPU Architecture: A member inquired about the XPU architecture, particularly whether the discussed Intel GPUs are discrete models or integrated ones, to which it was confirmed that Intel has been developing discrete GPUs for AI tasks.
- The discussion reflects a growing interest in Intel's AI and GPU technologies.
- CUDA Error Logging for Troubleshooting: A user encountered an illegal memory access error during a CUDA kernel launch, prompting suggestions to use tools like compute-sanitizer to troubleshoot memory allocation issues.
- Members noted common pitfalls in pointer dereferencing, indicating a need for careful memory management in CUDA applications.
- Torch Compile Improvements Suggested: A discussion arose around forcing `torch.compile()` to utilize Triton for FP8 matmul, with suggestions made for configuration tweaks and environment variables for optimization.
- It was noted that `torch._int_mm()` could provide a clean solution for INT8 matmul with INT32 accumulation, potentially enhancing performance (sketched at the end of this section).
- Advancements in BitNet QAT Implementation: Members examined the implementation of BitNet with full weight QAT, focusing on grouping weights into {-1, 0, 1} and optimizing post-quantization processes.
- The discussion included memory efficiencies achieved during inference, with expectations for significant savings utilizing a linear architecture (a toy quantizer is sketched at the end of this section).
- Memory Efficiency in Inference with BitNet: A member highlighted that a 70B model running on BitNet could fit within 16GB of GPU memory without requiring key-value caches, which is a notable advancement.
- This claim indicates substantial memory optimization potential during inference for large models.
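For reference on the `torch._int_mm` note above, a minimal sketch; this is a private, underscore-prefixed PyTorch op, so availability and exact shape constraints (approximated in the comment) vary by version:

```python
import torch

# torch._int_mm multiplies two INT8 matrices with INT32 accumulation.
# Rough CUDA constraints: m > 16, and k and n multiples of 8.
a = torch.randint(-128, 128, (32, 64), dtype=torch.int8, device="cuda")
b = torch.randint(-128, 128, (64, 32), dtype=torch.int8, device="cuda")
c = torch._int_mm(a, b)  # int32 result of shape (32, 32)
```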
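And for the BitNet weight-grouping discussion, a toy quantizer following the published BitNet b1.58 absmean recipe (a sketch of the general technique, not the exact QAT code discussed):

```python
import torch

def ternary_quantize(w: torch.Tensor):
    # Absmean recipe: scale by mean |w|, then round-and-clip to {-1, 0, +1}.
    scale = w.abs().mean().clamp(min=1e-5)
    w_q = (w / scale).round().clamp(-1, 1)
    return w_q, scale  # dequantize as w_q * scale

w_q, s = ternary_quantize(torch.randn(256, 256))
print(w_q.unique())  # tensor([-1., 0., 1.])
```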
Latent Space Discord
- LLaMA Guard 3 Video Released: A video showcasing LLaMA Guard 3 was recently posted, generating excitement among viewers. The video is available here for those interested.
- Members expressed their anticipation for the new features highlighted in the video, indicating a positive reception in the community.
- Clarity Struggles with DSPy: Today's discussion included insights from the Zeta Alpha DSPy session, with members debating the clarity of the technology. Some voiced uncertainty, noting a desire to include it as a reference in their notes.
- This highlights the need for clearer documentation and examples to ensure better understanding of DSPy.
- OpenAI Buzz with gpt-4o-large Release: Buzz circulated regarding a potential release of gpt-4o-large on Tuesday, fueling speculation about the model's capabilities. Members discussed its implications for AI advancements.
- There's a keen interest in how this model might enhance functionality and push boundaries in AI applications.
- Ruby AI Gains Traction: A growing community is building AI applications with Ruby, led by members noting its suitability for LLM coding and producing new libraries like Boxcars. This has intrigued non-Ruby developers as well.
- Discussions highlighted the potential for Ruby augmented generation, furthering interest in its applications.
- AI Engineer Bootcamp for Skills Enhancement: Several members expressed interest in attending an AI Engineer bootcamp, focusing on practical skills over theoretical learning. Resources for upskilling were actively shared.
- Conversational themes pointed to the necessity for hands-on experience as a crucial component in mastering AI tools.
Eleuther Discord
- Explore the EleutherAI Cookbook: The EleutherAI Cookbook offers resources for building and deploying models, addressing gaps in empirical benchmarks and theoretical calculations.
- It includes scripts for key metrics like Transformer inference/training memory, total model parameters, and total model FLOPs, vital for resource understanding.
- DeepSpeed and GPU Dynamics: Discussions on using DeepSpeed with SFTTrainer revealed mixed experiences regarding optimizations and overcoming CUDA OOM errors during multi-GPU fine-tuning.
- Approaches like optimizer state offloading and introducing LoRA were considered for enhancing memory efficiency in training (a minimal offload config is sketched at the end of this section).
- Mamba vs Transformers in MMLU Performance: Members noted that Transformers generally outperform Mamba in handling multiple-choice tasks, citing the importance of routing capabilities.
- Despite larger dataset training, models like FalconMamba still struggle, while hybrids like Zamba have shown promising results.
- Model Distillation Debate: Participants discussed whether distillation should match full teacher performance or simply yield inference-time benefits, revealing complexities in efficiency claims.
- Many argued that smaller models with similar training data may offer better efficiency compared to heavily distilled models.
- CommonsenseQA Task Insights: Clarification confirmed no fine-tuning on the 9.7k train split for the CommonsenseQA Task, with that split used solely for sourcing in-context few-shot examples.
- This ensures a pure evaluation and avoids any bias from evaluating against the training set.
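Following up the DeepSpeed offloading note above, a minimal ZeRO-2 sketch with optimizer-state offload; the keys follow the standard DeepSpeed config schema, and the dict can be passed via `transformers.TrainingArguments(deepspeed=ds_config)`:

```python
# Minimal ZeRO-2 config with optimizer-state offload, the kind of knob
# discussed for dodging CUDA OOM in multi-GPU fine-tuning.
ds_config = {
    "zero_optimization": {
        "stage": 2,  # shard gradients + optimizer states across GPUs
        "offload_optimizer": {"device": "cpu", "pin_memory": True},
    },
    "bf16": {"enabled": True},
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
}
```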
Perplexity AI Discord
- Perplexity AI Faces Operational Issues: Many users reported problems with the Perplexity AI platform, including an inability to select different image generation models and facing numerous error messages during high traffic.
- Dissatisfaction centered around limitations of the pro subscription, particularly regarding output size and functionality.
- Frustration with Rate Limiting: Several users expressed frustration over rate limiting, which hindered efficient processing of multiple queries and resulted in error messages during peak times.
- There was a push for better control mechanisms to effectively manage these rate-limiting scenarios.
- Interest in Batch Processing for Open Source: Users queried the absence of batch processing options for open-source models, voicing interest in cost-effective solutions similar to those from major AI providers.
- This conversation explored potential benefits of batch processing in optimizing operational costs.
- Concern over Perplexity 3.1 Performance: A user criticized the Perplexity 3.1 update, claiming it returns incorrect results compared to its predecessor, especially in tasks like Olympic medal counts.
- The original version is reportedly available for just two more days, raising concerns that performance will degrade further once it is retired.
- Call for Better Community Communication: Community sentiment reflected disappointment over the perceived silence from Perplexity leadership and a lack of engagement from the community manager.
- Discussions emphasized the need for improved communication strategies to help in restoring trust within the user base.
OpenRouter (Alex Atallah) Discord
- Perplexity Models Going Offline: Several Perplexity models will be inaccessible after 8/12/2024, including `llama-3-sonar-small-32k-online` and `llama-3-sonar-large-32k-chat`, as noted in the Changelog. Users should prepare for these changes to maintain continuity in their model usage.
- The transition aims to streamline the user experience as models become permanently unavailable.
- Transitioning to Llama3-based Sonar Models: Effective immediately, online and chat models will redirect to Llama3-based Sonar counterparts, including `llama-3.1-sonar-small-128k-online` and `llama-3.1-sonar-large-128k-chat`. This change enhances model capabilities and user interaction.
- Users can look forward to improved performance as the newer models take over.
- OpenRouter hits the command line with Bash: A user shared a detailed guide to integrate OpenRouter into the command line using pure Bash, supporting piping and chaining across various platforms like Raspberry Pi. This integration fosters a plan -> execute -> review workflow for automation enthusiasts.
- The creator emphasized the simplicity of creating scripts without dependencies after extensive experimentation (a Python equivalent of the call is sketched at the end of this section).
- Model Performance Issues raise eyebrows: Community members discussed instability in models like Hyperbolic's 405B-Instruct, which has been recently pulled from their API. Users expressed concerns over inconsistent performance across different versions of instruct models.
- The discussions highlighted the ongoing need for reliable model outputs in production environments.
- Gemini Flash Pricing Updates prompt questions: Members are inquiring about timelines for new Gemini Flash price updates, as some have noted discrepancies in GCP cost tables reflecting this change. Alex Atallah mentioned that updates are delayed due to inconsistencies in the token:character ratio associated with Gemini.
- Such pricing changes could significantly impact overall project budgets and developer decisions.
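The shared guide itself is pure Bash; for reference, here is the same kind of call sketched in Python against OpenRouter's OpenAI-compatible chat completions endpoint (the model slug is illustrative):

```python
import os
import requests

# OpenRouter exposes an OpenAI-compatible chat completions endpoint.
resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "meta-llama/llama-3.1-8b-instruct",  # illustrative slug
        "messages": [{"role": "user", "content": "Plan, then execute: list 3 steps."}],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```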
OpenAI Discord
- GPT excels at Prolog generation: A member praised the performance of GPT-4o for Prolog generation and debugging, showcasing its strength in logical reasoning.
- Prolog serves as a solid example of how powerful rule-based logic programming can be effectively leveraged with GPT technology.
- Concerns over AI-Generated Image Detection: There's skepticism about consumers paying to verify if images are AI-generated, with members noting that companies often add identifiable elements to their images.
- This sparked a discussion on improving detection methods as reliance on subtle identifiers could become a standard practice.
- Navigating iOS App Installation Issues: A member expressed frustration about being unable to install the iOS app on their iPad Air 2 due to restrictions tied to iOS 16.4 updates.
- An Apple support rep confirmed the unavailability of app installation for this device, adding to the challenges faced by users.
- File Transfer Problems Persist: Users reported ongoing issues with GPT not returning files, regardless of size or type submitted.
- The community traced this recurring problem to systemic challenges in the file transfer mechanisms.
- Effective Keyword Insertion Techniques Discussed: Participants discussed how inserting keywords or topics into prompts doesn't necessarily require advanced skills since models can manage their context well.
- They recommended leaving variables open in prompts or giving the AI the task of dynamic keyword integration.
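A minimal illustration of the "leave variables open" advice (all names illustrative):

```python
# Leave slots open and inject topic/keywords at call time.
TEMPLATE = (
    "Write a short product description about {topic}. "
    "Weave in these keywords naturally: {keywords}."
)

prompt = TEMPLATE.format(
    topic="a solar camping lantern",
    keywords="off-grid, USB-C, 30-hour battery",
)
```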
Modular (Mojo 🔥) Discord
- C Program Runs Successfully on MacOS: A member successfully ran a C program on MacOS to read MSRs, revealing a frequency of 24000000 and a TSC COUNT of 2099319836, despite some formatting warnings.
- The complexity of this task may either inspire interest in C or deter pursuit in computer science.
- Only Recent CPUs Support Accurate TSC Readings: Discussion noted that only CPUs from the last 15 years provide reliable TSC frequency readings, opening potential for using inlined assembly for enhanced performance.
- Members emphasized how reading instructions on ARM and Intel diverges from conventional practices.
- Mojo Programming Language Needs Better Documentation: A member pointed out the need for more clear and visible documentation on Mojo's `inlined_assembly`, suggesting a PR for improving its functionality with variadic arguments.
- It's vital that users have access to clearer resources to enhance engagement with Mojo.
- Max Nightly Installation Triumph on Mac M1 Max: A member faced initial hurdles installing max nightly on their Mac M1 Max, but confirmed successful installation after resolving issues, and plans to issue a detailed report on GitHub.
- The steps taken could help guide others facing similar challenges.
- C#'s Sustained Market Relevance: Members highlighted C#'s sustained relevance in the Microsoft ecosystem since 2000, crediting it as a 'nicer Java' and noting its proficiency in Windows applications.
- The influence of Microsoft's backing has cemented C# as a key tool, particularly in developing nations.
Cohere Discord
- Sus-column-r Model Generates Debate: Members questioned whether the sus-column-r model is a Cohere product, noting skepticism about its tokenizer differing from Cohere's R series.
- Mapler argued it behaves similarly to other Cohere models, but brknclock1215 expressed doubt on its affiliation due to tokenizer inconsistencies.
- Praise for Cohere Model Performance: Several users commended the potential Cohere model for excelling at complex tasks like riddles and base64 decoding.
- Brknclock1215 mentioned that if confirmed as a Cohere model, it would signify a leap forward from existing products.
- Cohere's Pricing Under Scrutiny: Questions emerged around Cohere's pricing in light of competitors reducing theirs, with mrafonso stating that it currently lacks competitiveness.
- Mrdragonfox countered by arguing that Cohere's pricing remains reasonable and hinted at 'loss leader pricing' implications.
- Cohere Command R Model Offers Cost-Saving Features: A member clarified that only one preamble is needed with the Cohere Command R model to initiate a chat, using the conversation_id for continuity.
- This setup allows for cost savings, as tokens for the preamble are only billed when included (see the sketch at the end of this section).
- Calls for RAG Systems Skill Development: A member highlighted the ongoing reliance of RAG systems on traditional retrieval methods, questioning the skill gaps relevant for AI applications.
- Another participant pointed out the critical need for good data cleaning and database management as essential skills often overlooked.
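Following up the Command R note above, a sketch of the pattern as described, assuming the Cohere Python SDK's `chat` parameters (`preamble`, `conversation_id`); the key and IDs are placeholders:

```python
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key

# First turn pays for the preamble once; conversation_id keeps history server-side.
co.chat(
    message="Summarize my open support tickets.",
    preamble="You are a terse support analyst.",
    conversation_id="support-session-42",
)

# Follow-up turns omit the preamble, so those tokens are not re-billed.
co.chat(message="Now rank them by urgency.", conversation_id="support-session-42")
```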
Torchtune Discord
- Navigating NeurIPS Rebuttal Maze: A member shared their confusion about handling low confidence scores in NeurIPS paper reviews, focusing on the rebuttal process.
- The advice: support the champion reviewer by addressing their concerns, since low confidence scores may simply indicate a lack of expertise among those reviewers.
- Feedback is Part of the Publishing Grind: It’s normal for papers to face several rounds of reviews and rejections before landing at a suitable venue.
- One member advised to trust the value of one's work, referencing the original DQN paper as an example.
- Google T5 Inference with Torchtune: A member inquired about running inference with the Google T5 model through Torchtune, which isn't possible currently.
- Upcoming changes could support T5's encoder + decoder architecture, enabling multimodal training.
- Gemma 2b Peaks and Flatlines: Gemma 2b reportedly hits peak memory but flattens thereafter, sparking concerns over its performance consistency.
- Investigate this wandb link for detailed insights.
- Proposal for Expandable Segments: Expandable segments were proposed for all models to facilitate manual toggling, seen as a low-risk enhancement.
- Minimal modifications to config files are suggested to smooth the transition, potentially making it a default in future PyTorch updates.
LlamaIndex Discord
- LlamaIndex Property Graphs Tutorial Released: Check out this video tutorial on LlamaIndex's property graphs to learn how each node and relation can store a structured dictionary of properties.
- This foundational knowledge opens up effective techniques for utilizing property graphs (a minimal flow is sketched at the end of this section).
- Notebooks for Multimodal RAG Over Complex Documents: A series of notebooks showcasing how to build pipelines over complex legal, insurance, and product documents has been shared, including methods to parse insurance claims here.
- These notebooks focus on handling documents with intricate layouts, integrating charts and images.
- Fine-Tuning GPT-3.5 with Knowledge Distillation: A discussion focused on knowledge distillation for fine-tuning a GPT-3.5 judge using LlamaIndex, with insights shared in a Medium article.
- Knowledge distillation is highlighted as an effective method in enhancing model performance while minimizing size.
- Dynamic Self-RAG Enhancements: Self-RAG is a dynamic RAG technique that identifies relevant chunks for queries instead of flooding context, with resources available here.
- This approach provides a refined strategy for context retrieval.
- Performance Concerns with WandB Integration: A user noted that deploying a `wandb` integration significantly increased their LlamaIndex query latency, raising performance concerns.
- This prompts a discussion on balancing model integrations with system efficiency.
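Circling back to the property-graphs item at the top of this section, a minimal flow assuming the `llama_index.core` API of the time (directory path illustrative):

```python
from llama_index.core import PropertyGraphIndex, SimpleDirectoryReader

# Extract a property graph from documents, then query it; each node and
# relation can carry a structured dict of properties.
docs = SimpleDirectoryReader("data/").load_data()  # path is a placeholder
index = PropertyGraphIndex.from_documents(docs)
print(index.as_query_engine().query("Who works with whom?"))
```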
LangChain AI Discord
- LangChain Support Dwindles: Users voiced concerns about the waning support for LangChain, questioning its viability for production projects.
- One member pointed out that since its initial promise, many community members feel lost on how to proceed effectively.
- LiteLLM Gains Popularity: Several members touted LiteLLM as a user-friendly alternative, highlighting its simple API for switching between multiple LLMs.
- A user noted the ease of integration with LiteLLM, allowing focus solely on LLM functionality without extensive code changes.
- Struggles with Llama 3.1 Output: Issues arose with Llama 3.1, where attempts to reproduce structured outputs ended up returning None due to a parser failure.
- It was discovered that improper function definitions contributed to the issues with the expected output format.
- Chatbot StateGraph Confusion: Discussions on StateGraph behavior revealed that only the last message was retained, causing skepticism about its intended functionality.
- Suggestions pointed to potential loops needing to be integrated to maintain conversation history effectively (one possible fix is sketched at the end of this section).
- CRAB Benchmark Makes Waves: The introduction of 🦀 CRAB, the Cross-environment Agent Benchmark for multimodal agents, was shared, sparking interest in its comprehensive assessment approach.
- Members encouraged checking out further details on the benchmark to understand its implications for agent evaluation here.
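On the StateGraph issue above: one commonly cited LangGraph remedy (an assumption here, not confirmed in the thread) is declaring the `messages` field with the `add_messages` reducer, so node returns append to history rather than overwrite it:

```python
from typing import Annotated, TypedDict

from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages

# Without a reducer, each node's return value overwrites `messages`, so only
# the last message survives; add_messages makes updates append instead.
class ChatState(TypedDict):
    messages: Annotated[list, add_messages]

builder = StateGraph(ChatState)
# ...add nodes/edges; nodes returning {"messages": [new_msg]} now extend history.
```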
OpenAccess AI Collective (axolotl) Discord
- Apple Intelligence introduces innovative algorithms: The paper on Apple Intelligence Foundation Models presents two novel algorithms, iTeC and MDLOO, which leverage rejection sampling and reinforcement learning from human feedback to significantly enhance model quality.
- These advancements are expected to set a new standard for model performance in the field.
- Strawberry model sparks speculation: Discussions about the gpt-4o-large model, nicknamed 'strawberry', have ignited intense speculation following a viral tweet.
- Many members doubt the model's capabilities compared to the 'raspberry', suggesting that much of the excitement is troll-driven and lacks solid backing.
- Flux model performance receives rave reviews: Members are buzzing about Flux, with one declaring it 'crazy good', signifying strong community sentiment.
- Further details on its performance or specific features were not shared, but enthusiasm remains high.
- Effective model quantization techniques: To quantize a model after finetuning, ensure that the model is well-trained before following the steps using Hugging Face's `transformers` and `bitsandbytes` libraries.
- After quantization, it's crucial to evaluate performance against a validation set to ensure model integrity.
- Community discusses Lora merging strategies: Members sought advice on optimal techniques to merge Loras with various models, indicating a practical need for refined methods.
- These discussions highlight the ongoing quest for improvement and shared knowledge within the community.
DSPy Discord
- Join the Hyperdimensional Hackathon: Team members are invited to the Hyperdimensional Hackathon in the Voice Lounge. More details can be found here.
- Don’t miss out on this opportunity to showcase your skills and collaborate with others!
- Beginners Unite with DSPy Notebook: A member shared a shoutout for creating a fantastic beginner notebook for DSPy that effectively guides users through problem-solving.
- This resource is highly recommended for those just starting with DSPy.
- Feedback Request on DSPy Blog: A member is seeking feedback on their blog post about DSPy, available here.
- Additionally, they shared a link to their Twitter for context on the post here.
- Golden Retriever Project Repository Shared: A participant shared a link to the Golden Retriever project repository on GitHub here.
- This repository may interest those looking to explore new tools or projects.
- DSPy as Fine-Tuning Tool: DSPy is likened to fine-tuning, allowing users to optimize instructions and/or examples with specific metrics to enhance task performance.
- This approach engages community discussions on suitability for various RAG implementations.
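A minimal sketch of that idea with DSPy's `BootstrapFewShot` optimizer; the signature, metric, and one-example trainset are illustrative, and an LM must first be configured via `dspy.settings.configure`:

```python
import dspy
from dspy.teleprompt import BootstrapFewShot

class QA(dspy.Signature):
    """Answer the question concisely."""
    question = dspy.InputField()
    answer = dspy.OutputField()

program = dspy.Predict(QA)

# The "fine-tuning-like" step: search for demonstrations that maximize a
# metric, instead of updating any weights.
def answer_match(example, pred, trace=None):
    return example.answer.lower() in pred.answer.lower()

trainset = [dspy.Example(question="What is 2+2?", answer="4").with_inputs("question")]
compiled = BootstrapFewShot(metric=answer_match).compile(program, trainset=trainset)
```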
tinygrad (George Hotz) Discord
- MeZO Method Exploration in Tinygrad: A user expressed interest in reimplementing the MeZO method using tinygrad, questioning the existence of equivalents to `tree_map` or `apply` (the closest analogue is sketched at the end of this section).
- This reflects a desire to utilize alternative frameworks for specific methodologies in machine learning.
- Tinygrad Meeting Agenda is Set: Upcoming Monday meeting at 9:40 a.m. PT will cover topics like tinygrad 0.9.2, qcom dsp, and various bounties including AMX.
- This agenda aims to outline crucial technical discussions planned for the weekly update.
- Clarifying Tinygrad Bounties: A user inquired about the 'inference stable diffusion' bounty, confusing it with existing documentation examples.
- The response clarified its association with MLPerf, indicating updated bounty details.
- Community Feedback on NVIDIA FP8 PR: Discussion indicated community support on tips left regarding a user's NVIDIA FP8 PR.
- This highlights the collaborative efforts within the project to enhance contributions.
- Navigating De-sharding of Models: A user sought clarity on how to de-shard a model from a multi lazy buffer to a normal lazy buffer.
- This indicates potential confusion among members regarding the process.
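On the `tree_map` question above: tinygrad has no jax-style tree utilities, but here is a sketch of the closest analogue, assuming `tinygrad.nn.state.get_parameters` as the flatten-the-params helper (the perturbation is only illustrative of a MeZO-style step, not a full implementation):

```python
from tinygrad import Tensor
from tinygrad.nn.state import get_parameters

class TinyNet:
    def __init__(self):
        self.w = Tensor.randn(4, 4)
        self.b = Tensor.zeros(4)

model = TinyNet()
eps = 1e-3
# get_parameters flattens every Tensor attribute, standing in for tree_map:
for p in get_parameters(model):
    p.assign(p + eps * Tensor.randn(*p.shape))  # MeZO-style random perturbation
```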
OpenInterpreter Discord
- Remote Attendance Options Discussed: A member in Tibet sought ways to attend an event remotely, igniting conversations about participation without travel funds. They noted that while 'they are strongly favoring in-person attendees,' a hybrid hackathon will occur later this year.
- Request for Linux Support Channel: A member called for a dedicated #linux-something_or_other channel to share experiences and trials. An alternative suggestion pointed towards another existing channel, emphasizing that 'the best place for this is <#1149558876916695090>.'
- Showcasing Terminal Agent Features: Terminal agents demonstrated impressive features, including cursor positioning and text selection with accompanying screenshots. A grayscale terminal presentation highlighted the red cursor for better visibility during operations.
- Inquiry on Speech Agent Specs: A question arose regarding the minimum and ideal specs for effective operation of a speech-to-speech agent across operating systems. Concerns about energy usage exceeding 100Wh for laptops were also raised as part of the discussion.
- Explore the Deep Live Cam Project: The open-source project Deep Live Cam grabbed attention for its potential in live camera feed applications, accessible on GitHub. It's gaining traction for its contributions to AI and real-time image processing solutions.
LAION Discord
- Nvidia and CUDA controversy heating up: Discussion arose about AMD's takedown of an open-source project, ZLUDA, which potentially allowed other hardware to utilize CUDA technology, as highlighted in a Tom's Hardware article.
- One member clarified that it was actually AMD, not Nvidia, who initiated the takedown.
- New Halva Hallucination Assistant: Google introduced the Halva Hallucination Attenuated Language and Vision Assistant to tackle hallucination issues in generative tasks combining language and vision capabilities.
- The model focuses on reducing inaccuracies, signaling an important step in addressing AI hallucinations.
- Gan.AI's TTS Model Launch: Gan.AI launched a new TTS model that supports 22 Indian languages plus English, making it the first to include Sanskrit and Kashmiri.
- The community has been encouraged to check out the product on Product Hunt and upvote if impressed.
- Checkpoint Saving Issues in DDP Training: A user reports experiencing issues where the gradient norm collapses and the optimizer skips steps during DDP training with bf16 and `accelerate` when saving checkpoints.
- They noted that the problem resolves after the next checkpoint save, indicating that training otherwise runs smoothly.
- Reflection on Quadratic Softmax Attention: A user mused on the fate of a paper suggesting that quadratic softmax attention isn't the best token-mixing mechanism, yet it's prevalent in SOTA models.
- They questioned if it fails to scale or perform adequately in NLP tasks, hinting at a debate in the community.
Interconnects (Nathan Lambert) Discord
- AI2 Team Presents Language Modeling at NeurIPS: The AI2 team is set to present a language modeling tutorial at the upcoming NeurIPS conference, with plans to enhance engagement post-presentation.
- A proposal surfaced for a group event after NeurIPS, aiming to bolster community ties and foster collaboration.
- Concerns on Habsburg Model in Training: Discussion arose over the risks posed by creating a Habsburg model during training, questioning the rationale for selecting a variety of models.
- The consensus noted that utilizing a collection of models promotes diversity in outcomes and mitigates the risk of model collapse.
- Optimal Online PPO Exploration: A member sought guidance on the best practices for implementing RLHF with online PPO, looking for hyperparameter tips to showcase superiority over iterative DPO.
- Current feedback indicated the absence of a clear best implementation, recommending resources like the EasyLM repository and Hugging Face's TRL version for potential solutions.
- Reflections on Social Media Opinions: A user humorously suggested that a world with only poor opinions would be markedly improved, touching on the nature of online discussions.
- This lighthearted comment prompted laughter, hinting at a collective desire for more constructive discourse in lieu of prevailing bad takes.
MLOps @Chipro Discord
- Join the Alliance AI-Health Research Initiative: Students interested in novel cancer or AI research can apply for the 4-month remote internship with the Alliance AI-Health Research Initiative, applications due by 8/11. Participants will tackle projects on cancer detection and AI-based heat stroke detection, guided by experienced advisors. Apply here!
- Engagement in cutting-edge research offers a unique opportunity to contribute meaningfully to both AI and health fields.
- Build Generative AI with Google Gemini: An upcoming online event will demonstrate how to create Generative AI applications using Google Gemini and Vertex AI, deploying them as Serverless Containers. This method allows users to focus on business aspects while Google manages infrastructure operations. RSVP for the event.
- Participants can enhance their skills while leveraging Google’s resources for efficient deployment.
- Evaluating Feature Stores for Computer Vision: A member asked about the effectiveness of feature stores in computer vision, seeking examples to weigh their value. Is a feature store worth it? This inquiry aims to inform broader discussions on the relevant benefits versus costs.
- The community's lack of engagement on this topic suggests potential hesitance or limited experience with feature stores in real-world applications.
LLM Finetuning (Hamel + Dan) Discord
- Exploring Vision Language Models from Scratch: A member shared a detailed blog post on vision language models that explores their development from nearly scratch, emphasizing core methodologies and insights.
- The post aims to engage the community in discussion around building these models, highlighting the complexities and nuances involved.
- Concerns on Credits Expiration across Platforms: A member inquired about the existence of expiration dates for credits on platforms like Jarvis-Labs, Replicate, and Openpipe, similar to OpenAI's recent deadline.
- This inquiry sparked a broader conversation regarding the policies on credit expiration across these various services and how they compare.
AI21 Labs (Jamba) Discord
- AI21 FusionLabs Plugin Turbocharged with RAG Features: The AI21 FusionLabs plugin for Bubble.io now supports the integration of the Jamba model and a fresh Conversational RAG endpoint, leading to 40+ app installs.
- This upgrade enhances productivity for NOcode projects, moving users away from the deprecated version, as detailed in the plugin link.
- Plugin User Resources Set to Drop: A new platform will launch next week to aid users in understanding the updated plugin and its features efficiently.
- Video guides are in the works to help the community effectively create AI applications with Bubble.io.
- AI21 Community Stoked for Future Innovations: The AI21 community is buzzing about Q4 and 2025, expecting a wave of new developments and resources.
- Participants are encouraged to gather all creative minds for upcoming 'hotfire' projects, sparking much anticipation.
The Alignment Lab AI Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The Mozilla AI Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The DiscoResearch Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
PART 2: Detailed by-Channel summaries and links
HuggingFace ▷ #general (357 messages🔥🔥):
Use of Multilingual Models
Dataset Challenges in Image Classification
Issues with Hugging Face API
ONNX Model Usage
AI Hackathons and Competitions
- Challenges with Bloom and mBERT: Users discussed the feasibility of using Bloom and Google mBERT as multilingual models for zero-shot prompting experiments, noting Bloom's undertraining.
- Experiences shared highlighted difficulties with translations and directions, leading to suggestions of better alternatives like Aya.
- Finding Suitable Datasets for Image Classification: Participants shared frustrations over low model accuracy with large datasets, specifically with the challenges of using CIFAR-10 and why larger datasets like ImageNet aren't suitable for prototyping.
- Suggestions for smaller datasets included LSUN and leveraging leaderboards on Papers with Code for additional benchmarks.
- Hugging Face API Performance Issues: Users reported experiencing frequent downtimes and scheduling failures with the Hugging Face inference API, particularly when selecting ZeroGPU.
- Recommendations included filtering for warm models to avoid failures due to the extensive number of models hosted.
- Onnx Model Conversion Challenges: One user detailed difficulties in utilizing a converted Llama model in ONNX format, particularly with GPU usage and system memory issues.
- Recommendations were sought to resolve the GPU utilization issues and ensure compatibility with ONNX runtime.
- AI Hackathons and Networking Opportunities: A member inquired about AI hackathons to participate in for networking and potential prize money, indicating an interest in growing their portfolio.
- Suggestions and resources for competitions were encouraged, with focus on gaining experience through participation.
HuggingFace ▷ #today-im-learning (7 messages):
Nvidia AI Enterprise Setup Guide
FP8 Training Progress
Quantization Techniques
- Step-by-step Nvidia License Server Setup Guide Released: A comprehensive guide was completed on setting up and configuring an on-prem Nvidia License Server for Nvidia AI Enterprise and Virtual GPU.
- This guide aims to make the setup process accessible for anyone interested in leveraging Nvidia's technology.
- Progress Made with FP8 Training: Noteworthy achievements over the past four days include training a 100M-parameter model in FP8 to match the bfloat16 baseline within a 0.15 loss offset.
- The next goal outlined is to progress through 1B, 7B, and eventually 175B model training.
- Advancements on 1B FP8 Training Milestone: Recent developments show FP8 has been successfully implemented for both forward and backward passes on 1B training, retaining a 0.15 loss offset relative to the bfloat16 baseline after 50K training steps.
- Further work is needed to reduce precision loss for all-reduce quantized tensors.
HuggingFace ▷ #cool-finds (10 messages🔥):
Creative Automation
Agentic Workflow in InsurTech
Protein Design with ProtGPT2
No-Code Solutions
Published Conversations with LLMs
- Creative Automation vs. Creative Services: Recent published conversations explore the intersection of creativity, automation, and advertising evolution in AI, available here.
- These discussions highlight how automation may reshape conventional advertising practices and philosophies.
- Agentic Workflow Solutions in InsurTech: The rise of No-Code solutions is set to revolutionize the InsurTech industry, enabling significant workflow transformations at the click of a button, showcased in this Medium article.
- This approach promises to streamline and enhance operational efficiency within the insurance sector.
- ProtGPT2 for Protein Design: An article discussing ProtGPT2, a deep unsupervised language model designed for protein design, can be found here.
- This model has potential applications in biotechnological innovations and the advancement of molecular biophysics.
- Published Conversations with AI Friends: A member shared links to conversations reflecting on merging advertising and creator philosophies, with insights into how AI impacts creative processes, viewable at these links: 1, 2.
- These discussions delve into the future of AI-infused creativity.
- MLM Training with Keras on TPUs: A Keras example on masked language model training using TPUs can be found here, useful for developers interested in NLP applications.
- This resource highlights efficient training methodologies within the Keras framework.
HuggingFace ▷ #i-made-this (28 messages🔥):
HawkEye CCTV Automation
Flux LoRA
Agentic Workflow System
Human Face Generator
Contributing to GitHub and HF
- HawkEye CCTV Automation Tool Launch: HawkEye is a new tool by a member that automates CCTV surveillance without human involvement, detecting criminal events in real-time.
- Members praised the project, suggesting future enhancements like biometric recognition for improved monitoring.
- Flux LoRA: Efficient Learning: A member shared flux LoRA as an easy way to train models, noting it effectively learns multiple subjects at once.
- They highlighted that it performs better with two subjects, making it a versatile option for model training.
- No-Code Solutions in InsurTech: The emerging trend of no-code solutions in InsurTech was discussed, aiming to streamline processes with minimal effort as per the article.
- Members showed interest in the potential for transformative solutions in the industry.
- Human Face Generator Tool Introduction: Human Face Generator was introduced as a tool that simplifies the creation of human faces effortlessly.
- Members expressed enthusiasm for the tool's capabilities in generating realistic faces.
- Guidance on Contributing to GitHub and HF Projects: Members discussed steps to contribute to GitHub projects, including forking, making changes, and submitting a pull request, with emphasis on the importance of shipping a v1 quickly.
- Additionally, members were encouraged to engage with community projects on HF and propose improvements.
HuggingFace ▷ #reading-group (9 messages🔥):
Recording of Previous Session
Upcoming Discussion on Hacking with LLMs
Schedule for Meetups
Reading Group Resources
- Recording of Previous Session Shared: A recording of the previous session has been shared, available here. Members are encouraged to check it out and provide feedback on its clarity.
- The recording provides insights into recent topics discussed in the group.
- Next Week's Talk on Hacking with LLMs: A member mentioned plans to discuss hacking with LLMs next Saturday, along with a reference write-up. They also indicated that a benchmark would be part of the discussion.
- This session aims to deepen understanding of LLM applications in security contexts.
- Inquiries About Meetup Timing: There were questions regarding the timing of Saturday meetups and difficulties in locating the events channel. One member noted that the timing generally fluctuates based on presenter schedules.
- Another member indicated that it’s typically around 1:30 PM EST, but can vary.
- Resources for the Reading Group: A member shared a playlist for the reading group available here. They also mentioned a GitHub page containing past records that needs updating.
- These resources are intended to help members get up to speed and access previous materials.
HuggingFace ▷ #core-announcements (1 messages):
DreamBooth LoRA script
Terminus Research
Flux DreamBooth
- Educational DreamBooth LoRA Script: The DreamBooth LoRA script was designed as an educational and minimal example for users interested in exploring this technology.
- Users are encouraged to explore its functionalities further and apply it to their projects.
- Exploring Advanced Flux Techniques: For those looking to take their skills to the next level, visit the Flux Quickstart Guide as a resource for advanced techniques.
- The Terminus Research group has invested significant effort into identifying effective methods for robust Flux DreamBooth implementations.
HuggingFace ▷ #computer-vision (7 messages):
Bounding Box Annotation Tools
Agentic Workflow System in InsurTech
Video Dataset Processing
Satellite Images Processing
- Bounding Box Annotation Software Ideas: A member proposed developing a software for image object bounding box annotation with a feature for extracting labeled datasets in desired shapes, recognizing it as a basic need.
- Later, they found a list of popular open-source annotation tools that cater to this use case.
- Revolutionizing InsurTech with NO-Code Solutions: A member discussed the potential revolution in the InsurTech industry using NO-Code solutions, enabling significant transformations with minimal user intervention.
- They shared a link to a medium article detailing 'Agentic Workflow Solutions' in insurance technology.
- Dealing with Video Datasets: A member requested help regarding the basics of video dataset processing, specifically managing spatial and temporal features.
- They were directed to HuggingFace's video classification documentation for assistance.
- Searching for Football Video Dataset: A member expressed frustration over the absence of a football video dataset on HuggingFace and sought alternatives apart from converting YouTube videos to MP4.
- The search continues as they seek accessible resources for sports-related datasets.
- Satellite Images Processing Time Concerns: Concerns were raised about the processing time when working with satellite images and SAM2, noting that splitting images increased processing time significantly.
- One image took 27 minutes to process after splitting, raising the question of whether the expected scaling should have made it faster.
HuggingFace ▷ #NLP (29 messages🔥):
Voice Recording Sample Discussion
Temperature Effects in Transformers
Demonstrations of Decoding Strategies
Databricks Model Upload Issues
Free Hugging Face Models for QA
- Voice Recorder Talks 'OGG': A member shared an amusing audio clip where a voice humorously mispronounces the audio format 'ogg'.
- They joked about their local whisper wanting to convert it, suggesting many implementations may struggle with this.
- Exploring Temperature Effects in Language Models: Discussion revolved around how temperature affects next-token generation in transformers, particularly its application to the final layer's logits (see the sketch at the end of this section).
- Members debated whether the temperature-scaled softmax distribution feeds back into the transformer's input or only governs selection of the next token.
- Visualizing Decoding Strategies for Models: A member provided links to resources that illustrate temperature effects and beam search decoding strategies, emphasizing different decoding configurations.
- They suggested creating a custom visualization tool to better understand the temperature impact on token generation.
- Databricks Model Upload Essentials: A member advised checking for proper file naming and configuration when uploading models to Databricks to avoid errors.
- Specific mention was made regarding the importance of ensuring model and config file naming conventions to prevent looping errors.
- Search for Free Hugging Face Models for Q&A: A member inquired about free models on Hugging Face that could serve a similar purpose to OpenAI's LLM for question-answering based on context.
- They are particularly interested in models that would meet their requirements without incurring costs.
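On the temperature question above: temperature only rescales the final-layer logits before the softmax used for sampling; it never feeds back into the transformer's input. A minimal sketch:

```python
import torch
import torch.nn.functional as F

logits = torch.tensor([2.0, 1.0, 0.5, -1.0])  # unnormalized scores for 4 tokens
for T in (0.5, 1.0, 2.0):
    probs = F.softmax(logits / T, dim=-1)
    print(f"T={T}: {probs.round(decimals=3).tolist()}")
# Lower T sharpens the distribution toward the top token; higher T flattens
# it, which is what the linked decoding-strategy visualizations illustrate.
```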
HuggingFace ▷ #diffusion-discussions (43 messages🔥):
FluxPipeline Issues
Image Generation Challenges with Stable Diffusion
Finetuning Stable Diffusion
Quantization Compatibility with Diffusers
- FluxPipeline hangs during model loading: Users discussed issues with FluxPipeline hanging during the model loading phase, particularly when calling `.to('cuda')` after pipeline initialization (see the sketch at the end of this section).
- It was noted that changing the order of operations in the code could resolve the hanging issue, although the specific reason for this behavior remained unclear.
- Poor image quality in Stable Diffusion: A new user faced challenges generating images using Stable Diffusion 1.5, reporting that colors appeared over-saturated compared to expectations.
- Concerns were raised about the normalization process for different datasets, questioning if a uniform normalization approach (mean = 0.5 and std = 0.5) should be applied.
- Exploration of NF4 Precision with Flux Schnell: Discussion arose around success with FP8 on A10G instances for Flux Schnell, along with a query about the compatibility of NF4 precision.
- Instructions and linked discussions provided potential resources for exploring quantization and other precision techniques, including a related GitHub discussion.
- Finetuning Stable Diffusion on Multiple Datasets: One user mentioned they were finetuning Stable Diffusion 1.5 using datasets from KITTI and Waymo, but encountered color discrepancies on the KITTI dataset.
- They speculated potential normalization errors and sought input on best practices for dataset normalization in fine-tuning workflows.
- Quantization capabilities of FLUX with Diffusers: A participant inquired about the compatibility of FLUX NF4 with the diffusers framework, sparking a discussion about quantization.
- It was suggested that any form of quantization could be integrated using optimum-quanto's `requantize` call with a quantization map, indicating the versatility of the framework.
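On the FluxPipeline hang above, the reorder that reportedly helped amounts to settling dtype and device placement in one step rather than moving the pipeline afterwards. A sketch (illustrative only; the thread never isolated the root cause, and the checkpoint id below assumes the Schnell release):

```python
import torch
from diffusers import FluxPipeline

# Pattern reported to hang for some setups:
#   pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell")
#   pipe.to("cuda")

# Reordered variant: pick the dtype at load time, then move once.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
).to("cuda")
image = pipe("a quiet weekend", num_inference_steps=4).images[0]
image.save("out.png")
```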
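And for the `requantize` suggestion, a minimal sketch of optimum-quanto's save/reload workflow (API names per recent optimum-quanto releases; `qint8` stands in here because quanto ships qint2/4/8 and float8 types rather than NF4 itself):

```python
import json
import torch
import torch.nn as nn
from optimum.quanto import quantize, freeze, qint8, quantization_map, requantize

def make_model():
    return nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 8))

model = make_model()
quantize(model, weights=qint8)   # swap Linear weights for quantized tensors
freeze(model)                    # materialize the quantized weights
qmap = quantization_map(model)   # records how each module was quantized
torch.save(model.state_dict(), "model_qint8.pt")
with open("qmap.json", "w") as f:
    json.dump(qmap, f)

# Reload later: rebuild the fp32 skeleton, then requantize it from the map.
fresh = make_model()
with open("qmap.json") as f:
    qmap = json.load(f)
requantize(fresh, torch.load("model_qint8.pt"), qmap)
```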
Stability.ai (Stable Diffusion) ▷ #general-chat (448 messages🔥🔥🔥):
Flux Model Usage
ControlNet Challenges
Fine-tuning and Training
Prompt Engineering Techniques
Stable Diffusion Variants
- Flux Model Experiences: Users discussed their experiences with the Flux model, highlighting its fast image generation and the adjustment of settings like ModelSamplingFlux to refine output results.
- There were mentions of variations in performance and challenges with using Flux on different hardware setups.
- ControlNet Implementation Issues: Members faced challenges while using ControlNet, particularly when incompatible models or adapters were applied, leading to unexpected results.
- A user was advised to check their adapter compatibility and use specific DensePose ControlNet models for better outcomes.
- Lora Training and Fine-tuning: Participants shared insights on how to train a Lora, with one user mentioning a tutorial link and others discussing the effective finetuning of models for specific art styles.
- There seems to be an overall interest in exploring future fine-tuning possibilities with the Flux model.
- Prompt Engineering Strategies: Users emphasized the importance of proper prompt engineering, experimenting with different phrasing, grouping, and using negative prompts for more consistent image generation.
- Discussions included how punctuation, like periods and commas, can affect a model's interpretation of prompts.
- Graphics and Design with Stable Diffusion: A user inquired whether Stable Diffusion can be used to generate graphic design elements like color palettes and gradients, suggesting a broader application of AI in design.
- The conversation hinted at the potential for using generative AI not just in art but also in practical graphic design workflows.
Nous Research AI ▷ #research-papers (2 messages):
CRAB Benchmark
- CRAB Benchmark Launched for Multimodal Agents: A new benchmark called CRAB (Cross-environment Agent Benchmark) for Multimodal Language Model Agents was introduced here.
- This benchmark aims to enhance evaluation across various environments for language models, generating positive reactions from the community.
- Community Catches CRAB Excitement: The introduction of CRAB garnered interest among members, with one member commenting, 'nicee' in response to the announcement.
Nous Research AI ▷ #off-topic (18 messages🔥):
HawkEye CCTV Automation
Modern 4K Streaming Insights
Anime Recommendations
OpenAI Discussions
Startup Job Referral
- HawkEye revolutionizes CCTV monitoring: A member introduced HawkEye, a tool designed to fully automate CCTV surveillance, detecting dangerous events in real time and notifying authorities.
- Another member suggested cross-posting this on the IP cam talk forums, noting the potential interest from that community.
- 4K Streaming and Real-Life Insights: A discussion touched on the modern 4K streaming landscape, highlighting actress Colby Minifie's notable appearance and her choice of wearing contacts in The Boys.
- Members shared example footage to illustrate the conversation.
- Seeking Anime Recommendations after Solo Leveling: One member solicited recommendations for good anime, expressing their enjoyment of Solo Leveling.
- This led to a back-and-forth discussion among members about current favorite anime titles.
- Mixed Feelings on OpenAI's Direction: Members debated the state of OpenAI, with some suggesting it feels like 'larp' or vaporware due to uncertainties in their developments.
- However, a counterpoint highlighted the talent still within OpenAI, emphasizing legends like Radford and Fedus.
- Startup Job Referral Opportunity: A member posted a job referral opportunity at a startup offering a $500 incentive for successful referrals of candidates with relevant robotics experience.
- They shared further details via a Google Document outlining the role.
Nous Research AI ▷ #general (319 messages🔥🔥):
Nous Research AI Discord discussions
Model performance and comparisons
DPO and SFT methodologies
Fine-tuning models
Mistral-Nemo performance
- Discussion about model performance: Members have been comparing models such as Llama 3.1 (8B), Qwen2 (7B), and Gemma 2 (9B) for their smartness and long context capabilities, noting Llama 3.1’s impressive 128k training context.
- Members are also interested in experimentation, particularly with models that have good multiturn capabilities.
- DPO vs SFT in model training: The conversation addressed different training methodologies with a focus on DPO being perceived as finicky and potentially damaging, while SFT is more stable and reliable.
- DPO-NLL has been discussed as a potential improvement but is still subject to uncertainty regarding its effectiveness.
- Fine-tuning considerations: Members are exploring various dataset and model tuning options, with specific mention of Qlora for finetuning and its interaction with hardware capabilities like the RTX 3080.
- Kotykd emphasized the need for small, high-quality datasets for effective training and diverse model experience.
- Mistral-Nemo performance inquiry: The Mistral-Nemo-12B-Instruct-2407 model was recommended for its performance, specifically highlighting that it uses Flash Attention for improved efficiency.
- Discussions included weighing the practicality of various model sizes against the available GPU resources for training.
- Concerns over AI-related content: Members expressed frustration with the quality of AI discussions on social media platforms, equating the decline in discourse to less reliable information being shared.
- This included jokes about the state of AI news and a preference for actionable model releases over ambiguous social media discussions.
Nous Research AI ▷ #ask-about-llms (23 messages🔥):
Claude capabilities
Qwen2 audio usage
Translation models
LLM finetuning
Upside down text data mapping
- Claude's Unique Capabilities Inquiry: A member questioned why Claude can perform certain tasks that other models cannot, suggesting a deeper exploration into the underlying technology.
- This insight highlights the community's interest in understanding model differences and their capabilities.
- Exploring Qwen2 Audio's Performance: There's a discussion about using Qwen2 audio, noting that despite demo limitations, it has a cool presentation and can manage conversational context like Whisper.
- However, local installation poses challenges for users with 12GB VRAM, leading to workarounds.
- Current Best Chinese-English Translation Models: A user inquired about the best Chinese-English translation LLMs, leading to suggestions like Gemini 1.5 Pro for closed-source and Command-R+ for open-source models.
- It was noted that some Chinese models are also capable of providing effective translation solutions.
- LLM Finetuning for Improved Accuracy: A member asked whether citing character indexes in LLMs could be achieved through finetuning alone for high accuracy.
- This points to a growing interest in refining model responses through targeted adaptations.
- Mapping Unicode for Upside Down Text: A member proposed running a script to modify prompts for upside down text, indicating it could be done efficiently using Claude for mapping characters.
- This approach aims to generate multiple examples quickly while ensuring quality, highlighting creative solutions in data manipulation.
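A minimal sketch of the character-mapping idea (the flip table is abbreviated; a full map would also cover uppercase letters, digits, and more punctuation):

```python
# Partial flip table for upside-down text; extend to cover the full charset.
FLIP = str.maketrans(
    "abcdefghijklmnopqrstuvwxyz.,!?",
    "ɐqɔpǝɟƃɥᴉɾʞlɯuodbɹsʇnʌʍxʎz˙'¡¿",
)

def upside_down(text: str) -> str:
    # Flip each character, then reverse so the result reads rotated 180°.
    return text.lower().translate(FLIP)[::-1]

print(upside_down("Hello world!"))
```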
Nous Research AI ▷ #rag-dataset (17 messages🔥):
Multi-step Ranking
Markdown Love
Agent Development
PDF to Markdown Conversion
Image/Graph Descriptions
- Multi-step Ranking Process Discussed: Members discussed a multi-step ranking process involving initial ranking, relevance, and analysis for generating answers.
- Although the specifics are not fully detailed, they underscored the importance of context relevance.
- Passion for Markdown Emerges: There was a consensus around a love for Markdown, with a member expressing enthusiasm about converting everything to .md format.
- Another remarked on the effort involved, alluding to a timesink to convert files effectively.
- Active Projects on Agent Development: One member is primarily focused on agent-related projects and client engagement tasks at work.
- They mentioned ongoing testing of graph RAG, indicating it remains a prevalent topic in their work.
- Challenges in PDF Conversion: A discussion unfolded around the challenges of converting PDFs to markdown, with struggles to extract descriptions for images and graphs.
- One member reported success using Marker for noisy documents, while others expressed a desire to improve extraction techniques.
- Collaboration and Note Sharing: Members expressed eagerness to share notes and collaborate on projects, particularly regarding PDF conversion and agent development.
- They agreed to reach out to each other for sharing progress and insights when ready.
Nous Research AI ▷ #reasoning-tasks-master-list (29 messages🔥):
PR Merge Readiness
Code Changes Evaluation
Regex Parsing Issues
Curry's Paradox Example
- Assessing PR Merge Readiness: Discussion on whether the pull request (PR) is ready to merge, with different members contributing their opinions on necessary changes.
- n8programs stated, 'id say its merge time', indicating readiness for the merge, pending final checks.
- Evaluating Code Changes and Output: Members reviewed the latest code changes, particularly checking the output of `git diff` after modifications to ensure no discrepancies remain.
- stoicbatman noted, 'this is how curry's paradox coming out', seeking clarity on the expected result.
- Regex Parsing Concerns: n8programs raised concerns about the regex implementation, emphasizing it should operate correctly even when backticks are absent (a generic fallback pattern is sketched at the end of this section).
- There was a consensus that the regex needs adjustments to properly handle cases where the triple-backtick fence is missing.
- Previewing Example Output: stoicbatman requested to confirm how 'curry's paradox' output looked to ensure it met expectations.
- n8programs confirmed the implementation was working as intended, expressing satisfaction with the output.
- Merge Authority Specified: During the discussion, n8programs clarified that they lack the necessary permissions to merge changes themselves.
- It was established that only tek has the authority to complete the merge, prompting a request for them to check the recent updates.
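The repository's actual pattern wasn't quoted in the channel, but the fix being discussed amounts to a fenced-block match with a plain-text fallback; a generic sketch:

```python
import re

FENCE = "`" * 3  # literal triple backtick, built up to keep this snippet fence-safe
PATTERN = re.compile(FENCE + r"(?:\w+)?\s*(.*?)\s*" + FENCE, re.DOTALL)

def extract_block(text: str) -> str:
    """Pull the payload out of a fenced block if present; otherwise fall
    back to the raw text so answers without backticks still parse."""
    m = PATTERN.search(text)
    return m.group(1) if m else text.strip()

fenced = f"{FENCE}json\n{{\"answer\": 42}}\n{FENCE}"
print(extract_block(fenced))            # {"answer": 42}
print(extract_block('{"answer": 42}'))  # {"answer": 42}
```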
LM Studio ▷ #general (248 messages🔥🔥):
LM Studio Performance Issues
LLM Model Specifications
Headless Linux Operation
Embedding Models and RAG
Model Benchmarking
- LM Studio Struggles with New Update: Several users reported issues with using Llama 3.1 and the latest update of LM Studio, encountering errors related to model loading and performance.
- Users were encouraged to provide detailed information about their system specs and settings in a designated support channel.
- Guidelines for Using Large LLMs: To effectively run large language models such as Llama 70B, users need sufficient RAM and GPU memory, with requirements driven mainly by parameter count and quantization level (a back-of-envelope estimate is sketched at the end of this section).
- It was noted that while a 3090 with 24GB VRAM is suitable for 27B models, larger models need further evaluation.
- Headless Operation Issues on Linux: Users expressed challenges running LM Studio in headless mode on Linux, citing X-server display issues as a hindrance to proper functionality.
- Some suggested using dummy HDMI dongles or Windows Server as alternatives to overcome these difficulties.
- Integrating Embedding Models for RAG: Participants shared insights on using AnythingLLM alongside LM Studio to facilitate RAG (retrieval-augmented generation) capabilities with embedded documents.
- It was recommended to avoid built-in embedding models for better performance and to consider external models for enhanced efficiency.
- Benchmarking Language Models: Users discussed the reliability of various benchmarking sites for evaluating language models, emphasizing the importance of practical tests over leaderboard scores.
- Some suggested using personal benchmarks to assess model performance effectively, while cautioning against relying solely on established leaderboards due to potential biases.
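A rough rule of thumb behind those sizing recommendations (not an official LM Studio formula): weight memory is parameter count times bits per weight, padded for activations and KV cache.

```python
def approx_vram_gb(params_b: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    """Back-of-envelope VRAM estimate: weights only, padded ~20% for
    runtime overhead at modest context lengths. Real usage varies."""
    return params_b * bits_per_weight / 8 * overhead

print(f"{approx_vram_gb(70, 4):.0f} GB")    # 70B @ 4-bit -> ~42 GB: multi-GPU or offload
print(f"{approx_vram_gb(27, 4.5):.0f} GB")  # 27B @ ~Q4_K -> ~18 GB: fits a 24GB 3090
```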
LM Studio ▷ #hardware-discussion (114 messages🔥🔥):
8700G performance
M2 Ultra vs 4090
Server GPU options
Portable LLM inference
Mac Studio configurations
- 8700G achieves impressive token speeds: After tweaking RAM timings and using Ollama, the 8700G reaches 16 tok/s with Llama3.1 8B at 100k context size, while LM Studio with Vulkan crashes once RAM utilization exceeds 20GB.
- The model nearly fits the full 128k context in 32GB of RAM, showcasing its potential for high-performance tasks.
- M2 Ultra surprises in comparison to 4090: Pydus reports that the M2 Ultra outperforms the 4090 in training times for Llama3.1 models, averaging 197s per epoch compared to the 4090's 202s.
- The M2's silent operation contrasts the noisy 4090, leading to thoughts of returning the latter for the more efficient Apple option.
- Exploring Server GPU Options: Discussion arose regarding the feasibility of using P40 GPUs, with wild estimates for building a 10x P40 server just for fun, though concerns over high power consumption were highlighted.
- An ideal configuration might balance performance and efficiency, possibly sourcing higher VRAM cards like the 4090D with 48GB.
- Portable LLM Inference with ROG Ally X: Bobzdar suggests that the ROG Ally X could achieve 15-17 tok/s with Llama3.1 8B, presenting a solid portable option for LLM inference.
- This performance indicates a great balance between portability and capability, especially compared to current laptop limitations.
- Potential Mac Studio Upgrade: There's enthusiasm around upgrading to a 192GB Mac Studio for improved performance in LLM tasks, with excitement over its expected efficiency compared to GPU-heavy setups.
- With features like Goliath 120B at Q8, the Mac options are becoming more appealing, making powerful local AI assistants more accessible.
Unsloth AI (Daniel Han) ▷ #general (207 messages🔥🔥):
Unsloth Fine-Tuning
Model Deployment on AWS
Pytorch Conversion Issues
Gemma Models VRAM Usage
2 Million Downloads Celebration
- Discussions on Unsloth Fine-Tuning Capabilities: Users discussed the limitations of unsloth in fine-tuning certain models, such as mixture of expert models and VLMs like Phi-3 vision.
- There were suggestions about structuring datasets for training models, with considerations for including instructions in conversation datasets.
- Model Deployment Challenges on AWS: A user sought help for deploying their unsloth fine-tuned model on AWS, but responses indicated a lack of prior experience with AWS deployments.
- Another user suggested following AWS tutorials for LLM deployment as a reference point.
- Pytorch Conversion for Unsloth Checkpoints: One user faced issues converting unsloth checkpoints to Pytorch format, particularly concerning missing files during the conversion process.
- They were directed towards GitHub resources for aid, but it was indicated that the script they were using might not support the specific checkpoints.
- VRAM Usage Concerns with Gemma Models: A discussion centered around Gemma model VRAM requirements, with users noting that they use more VRAM for fine-tuning compared to other models, like llama.
- The conversation pointed out that installing Flash Attention might help with optimizing VRAM usage for these models.
- Celebrating Unsloth's Popularity: Unsloth reached a milestone of 2 million monthly downloads on Hugging Face, which was celebrated among users in the channel.
- Members congratulated each other on this achievement, showcasing excitement for the growing user base.
Unsloth AI (Daniel Han) ▷ #off-topic (13 messages🔥):
Off-topic chat rules
Open source project guidelines
Camping experiences
- Clarifying Off-Topic Chat Rules: Discussion arose over whether certain messages are permitted in the off-topic chat, leading to a consensus that they are 'not allowed'.
- Members expressed a need for a dedicated rules channel to clarify these off-topic guidelines.
- Open Source Project Concerns: A member shared a YouTube video that is relevant for open-source projects transitioning from private to public repositories, highlighting potential risks.
- Another member noted, 'Oop we should probably watch out for this' in response to the shared video.
- Campground Woes: A member expressed their dislike for camping due to returning with 6 mosquito bites, including one on their eyelid.
- Others chimed in with comments, with one noting the horse manure problem in Australia, making camping even less appealing.
Unsloth AI (Daniel Han) ▷ #help (110 messages🔥🔥):
Model Fine-tuning Challenges
Uploading to Hugging Face
Training with Flash Attention
Using LORA with Pre-trained Models
Model Deployment Issues
- Model Fine-tuning Challenges: Users noted difficulties in training larger models like Meta-Llama-3.1-70B-bnb-4bit on Colab due to memory issues with available GPUs like T4 and L4.
- One user noted that the 70B model requires more GPU memory than Colab's instances provide.
- Uploading to Hugging Face: Users discussed the necessity of saving models correctly to Hugging Face while ensuring the tokenizer model is saved, particularly when merging LORA and base models.
- One user shared an error message about merging and asked for clarification on tokenizer behavior for models like Gemma 2.
- Training with Flash Attention: A user faced issues integrating Flash Attention 2.6.3 with training scripts for Gemma 2, indicating a potential compatibility problem with their CUDA version.
- They were advised to ensure correct imports in their Python scripts to enable Flash Attention.
- Using LORA with Pre-trained Models: Discussions included whether models need add-on LORA weights during prompting after fine-tuning, with a user seeking clarifications on the correct prompting format.
- Recommendations involved ensuring proper settings for models pre-trained using LORA, and how to deploy trained models effectively.
- Model Deployment Issues: A user reported issues with model deployment yielding repetitive outputs, leading to inquiries about potential problems with chat templates used during interaction.
- Suggestions included reviewing input parameters and the model configuration for potential errors.
Unsloth AI (Daniel Han) ▷ #research (3 messages):
Hybrid Neural Network-Transformer Architecture
Local LLM Plugin for Unreal Engine
- Exciting New Hybrid Architecture Emerges: A new hybrid Neural Network-Transformer Architecture has been proposed, enhancing AI capabilities.
- This architecture aims to combine the strengths of both models for improved performance in AI tasks.
- Local LLM Plugin Available for Unreal Engine: A member shared a link to a Local LLM Plugin for Unreal Engine that could be useful for integrating language models into development.
- This plugin offers new possibilities for developers looking to leverage local language models in their projects.
CUDA MODE ▷ #general (44 messages🔥):
XPU Architecture
Mentorship in HPC
CUDA Error Debugging
GPU Memory Management
GPU Benchmarking
- Clarification on XPU Architecture: A member inquired about the XPU architecture, wondering if the Intel GPUs discussed are integrated ones or specific models for AI tasks. Another member confirmed that Intel has been making discrete GPUs for a while, providing a link to more information.
- The conversation indicates a growing curiosity about Intel's approach to AI and GPU technologies.
- Seeking Mentorship for HPC Conference: A member expressed a need for a mentor regarding participating in a HPC conference poster presentation, sharing their relevant background in machine learning. They are reaching out for guidance just before their final year as an undergraduate.
- Several members offered to assist, noting the importance of having discussions about their ideas for better clarity.
- Resolving CUDA Kernel Launch Errors: A user reported an illegal memory access error encountered when launching a CUDA kernel. Other members suggested using tools like compute-sanitizer to help narrow down potential memory bugs.
- Discussions highlighted common issues like pointer dereferencing and correct memory allocation for CUDA operations.
- GPU Memory Management Best Practices: A user shared their implementation details for managing GPU memory, detailing cudaMalloc and cudaMemcpyAsync operations. They were advised on proper practices for avoiding memory errors and ensuring device memory is accessible.
- Responses emphasized the critical steps for managing data flow between host and device memory effectively.
- Discussion on GPU Benchmarks: An interesting GPU benchmark list was shared, highlighting performance metrics for various models and configurations. However, discussions clarified that benchmarks do not always correlate to actual inference performance across different GPU architectures.
- A member noted discrepancies in various H100 models, emphasizing the need to consider real-world application over theoretical benchmarks.
CUDA MODE ▷ #torch (52 messages🔥):
Flash Attention Extensions
Trigonometry in Torch Compile
INT8 to INT32 Matrix Multiplication
CUDA Allocator Exploration
Post-Fusion FX Graph Analysis
- Integrating Flash Attention with Custom Extensions: A user sought to directly call `flash_attn_2_cuda.fwd_kvcache` from a custom extension, raising concerns over maintenance if linked statically due to multiple wheel builds.
- Another user suggested possibly loading the dynamic library directly through a syscall as a potential solution for function pointer access.
- Challenges with Torch Compile and Triton: A user inquired about forcing `torch.compile()` to utilize Triton instead of ATen for FP8 matmul, with various suggestions including configuration adjustments and environment variable setups.
- Ultimately, it was highlighted that `torch._int_mm()` might provide a clean INT8 matmul with INT32 accumulation, potentially leveraging cuBLAS under the hood (see the sketch at the end of this section).
- Simplifying INT8xINT32 Matmul Implementation: A user described successfully using a PyTorch implementation to achieve INT8xINT32 matmul, leading to a simpler solution than their previous Triton-based version.
- Concerns over whether PyTorch casts to INT32 internally were alleviated after confirming it calls the INT8 kernel directly.
- Diving into Torch Memory Allocator and CUDA Graphs: A member shared insights from their exploration of the torch memory allocator and CUDA Graphs, linking to an engaging community post on the topic.
- This prompted further discussions on caching allocators and experiences with the system.
- Extracting Post-Fusion FX Graph for Analysis: A user expressed interest in a programmatic method to extract the post-fused FX graph for flop and byte counting, mentioning existing debug flags as a reference.
- They ultimately aim to count bytes at fusion boundaries following the scheduling pass for more accurate analysis.
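For reference, a minimal use of `torch._int_mm` as discussed above (a private API, so subject to change; it computes an int8 x int8 GEMM with int32 accumulation, and some PyTorch versions impose shape constraints, hence the generous multiples of 8):

```python
import torch

if torch.cuda.is_available():
    a = torch.randint(-128, 128, (128, 64), dtype=torch.int8, device="cuda")
    b = torch.randint(-128, 128, (64, 32), dtype=torch.int8, device="cuda")
    c = torch._int_mm(a, b)         # dispatches to cuBLAS under the hood
    print(c.dtype, tuple(c.shape))  # torch.int32 (128, 32)
```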
CUDA MODE ▷ #cool-links (3 messages):
NoteDance Project
Transformer Explainer Visualization
GPU Regex Matcher
- NoteDance Project enables agent training: A member introduced their project, allowing for easy training of agents, available on GitHub.
- This project aims to simplify the process of training agents for various applications.
- Impressive Transformer Explainer Visualization: One member shared a visualization tool that showcases transformer models effectively.
- The tool is noted for its neat design, enhancing understanding of transformer mechanics.
- High-Performance GPU Regex Matcher in Zig: A member highlighted their work on a GPU regex matcher written in Zig, functioning on both Nvidia and AMD hardware.
- The regex matcher validates UTF-8 at approximately 450 GB/s on the RX 7800 XT and 300 GB/s on the RTX 3090, with the Nvidia path being less optimized currently.
CUDA MODE ▷ #jobs (1 messages):
C++/ML Developer Role
Palabra.ai API
Real-time Voice Interpretation
GPU Optimization
- Palabra.ai seeks C++/ML dev for real-time voice API: Palabra.ai is looking for a strong C++/ML developer to help build an API for real-time voice interpretation with automatic voice cloning and emotional expressions, operating through Zoom with a delay of less than 1 second.
- The position is fully remote, offering a salary up to $120k + stock options, and includes optimizing GPU-based ML models and software development tasks.
- Referral bonus for successful candidate recommendations: If you know someone suitable for the role, Palabra.ai will pay a $1.5k referral bonus for every successful candidate recommendation.
- Interested individuals can DM or email for details, indicating there is a proactive outreach for finding the right talent for the team.
CUDA MODE ▷ #beginner (7 messages):
CUDA bound check expressions
__syncthreads usage
Early returns in CUDA
Conditional execution in CUDA
- Understanding CUDA Bound Check Expressions: A member questioned the necessity of the nested-if idiom for boundary checks in CUDA, asking if a simpler early return like `if (i >= n) { return; }` would suffice.
- Another member responded that while it might work in simple kernels, features like `__syncthreads()` complicate early returns.
- Potential pitfalls of early returns in CUDA: Discussion revealed that early returns can cause issues with `__syncthreads()`, which waits for all threads in a block and can hang when some threads have already exited (a guarded-access sketch appears at the end of this section).
- One member emphasized that `__syncthreads()` can sit inside a conditional, but the condition must evaluate identically across the whole thread block to avoid unintended side effects.
- Need for clarification in the CUDA Programming Guide: A member pointed out that this question has been raised multiple times and suggested adding a clarification to the CUDA Programming Guide regarding the usage of `__syncthreads()`.
- This could help prevent confusion among newcomers about how synchronization interacts with conditional execution.
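A sketch of the guarded-access idiom in Python via Numba's CUDA target (same semantics as CUDA C's `__syncthreads()`): out-of-bounds threads skip the load and store but still reach the barrier.

```python
import numpy as np
from numba import cuda, float32

TPB = 128  # threads per block

@cuda.jit
def block_reverse(out, x, n):
    i = cuda.grid(1)
    tid = cuda.threadIdx.x
    buf = cuda.shared.array(TPB, dtype=float32)

    # Guard the access instead of returning early: every thread must reach
    # cuda.syncthreads(), or in-bounds threads in the same block can hang.
    buf[tid] = x[i] if i < n else 0.0
    cuda.syncthreads()

    if i < n:  # guard the store the same way
        out[i] = buf[TPB - 1 - tid]

n = 300  # not a multiple of TPB, so the last block has idle threads
x = np.arange(n, dtype=np.float32)
out = np.zeros_like(x)
block_reverse[(n + TPB - 1) // TPB, TPB](out, x, n)
print(out[:3])  # [127. 126. 125.] -> block 0 reversed through shared memory
```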
CUDA MODE ▷ #pmpp-book (1 messages):
Instructor Document Exclusions
Matrix Multiplication Techniques
- Inquiry on Instructors Document Exclusions: A member inquired about a repository or document containing a complete list of answers excluded from the instructors' document provided for the course.
- Specifically, they wanted to verify their implementation of column- vs row-major matrix multiplication from Chapter 3.1.
- Discussion on Matrix Multiplication Techniques: The conversation highlighted the different approaches to matrix multiplication, particularly emphasizing the distinction between column and row major formats.
- Members expressed interest in ensuring consistency in understanding methodologies as outlined in the course's instructional materials.
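For anyone following along, the column- vs row-major distinction from Chapter 3.1 comes down to two index formulas over the same flat buffer (illustrative only, not the course's reference solution):

```python
import numpy as np

n = 3
A = np.arange(n * n, dtype=np.float32)  # one flat buffer, two interpretations

def at_row_major(A, i, j, ncols):  # C-style: rows are contiguous
    return A[i * ncols + j]

def at_col_major(A, i, j, nrows):  # Fortran-style: columns are contiguous
    return A[j * nrows + i]

# Same element coordinates (1, 2), two different memory locations:
print(at_row_major(A, 1, 2, n), at_col_major(A, 1, 2, n))  # 5.0 7.0
```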
CUDA MODE ▷ #youtube-recordings (1 messages):
Code availability
Helpful resources for beginners
- Inquiry about Code Availability: A member asked if the code from the talk is available anywhere, expressing interest in accessing it.
- They noted that the talk was ridiculously helpful for their beginner level understanding.
- Positive Feedback on the Talk: The same member emphasized how valuable the talk was for their beginner-level understanding.
- This reflects a broader appreciation for accessible educational resources in the community.
CUDA MODE ▷ #torchao (13 messages🔥):
Torch Segfault Issues
Quantization Aware Training (QAT)
Integration of NF4 Kernels
Testing FP32 in AQT
Performance Comparison of Quantization Kernels
- Torch Segfault with Float32 Precision: A member reported a segmentation fault when using `torch.compile()` with the flag `torch.set_float32_matmul_precision("high")`. They narrowed the issue down to this flag being set during quantization, leading to ongoing concerns and the need for a fix.
- Another member confirmed the segfault is reproducible and suggested a workaround: setting `inductor_config.mixed_mm_choice` to 'aten' (see the sketch at the end of this section).
- Focus on FP32 Testing in AQT: There was a suggestion to add FP32 tests to AQT (torchao's affine quantized tensor path) to prevent future issues, especially since currently only BF16 is being tested. The urgency arises from the widespread recommendation to use the high-precision flag.
- Integration Request for NF4 Kernels: The authors of FLUTE reached out to integrate their NF4 kernels into bitsandbytes, focusing on fused operations that may boost inference performance. The proposed kernels include a combined dequantize and matmul, which could be vital for future enhancements.
- A member expressed that while promising, the performance of FLUTE is slow compared to current linear quantization kernels, with Llama3 8B achieving only 67 tokens/sec.
- Enhancing AQT with Backprop Capabilities: During the discussion on the quantized training PR, there were ideas to enable AQT to backpropagate, similar to NF4, without requiring gradients with respect to weights. This method could potentially aid in training models with LoRA-style adapters.
- Others agreed that such design can be beneficial, citing existing implementations by community members.
- Call for Better Testing Practices: Concerns were raised about the need for improved testing frameworks in inductor to avoid recurrence of segfaults and other issues. A member committed to developing better tests in the coming week.
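A sketch of the reported combination and workaround (config name per PyTorch ~2.4; inductor settings are private and shift between releases, and the member's actual failing model wasn't shared):

```python
import torch
import torch._inductor.config as inductor_config

torch.set_float32_matmul_precision("high")  # the flag tied to the segfault
inductor_config.mixed_mm_choice = "aten"    # workaround: skip the Triton mixed-mm path

@torch.compile
def matmul(x, w):
    return x @ w

x, w = torch.randn(64, 64), torch.randn(64, 64)
print(matmul(x, w).shape)
```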
CUDA MODE ▷ #ring-attention (2 messages):
TreeAttention
Training and Inference Discussion
- TreeAttention Explained: A member shared an informative post about TreeAttention, providing insights and details available at this link.
- This approach highlights the need for more efficient attention mechanisms in models.
- Insightful Commentary on Training and Inference: An author provided commentary on the challenges of training and inference, which can be found at this link.
- The discussion emphasizes critical factors that influence model performance during these stages.
CUDA MODE ▷ #off-topic (4 messages):
PTX Manual Updates
MPS on Apple Silicon
PyTorch Operations
- PTX Manual jumps to new sections: There is speculation about two new sections in the PTX manual, skipping from §9.7.4 to §9.7.7, hinting at exciting updates.
- Are two mind-blowing new sections about to drop?
- bfloat16 support for MPS in PyTorch: A user noted that there is now support for bfloat16 in MPS (Apple Silicon) when using PyTorch, which shows progress in operations coverage.
- However, there's still no support for AMP/autocasting, leaving some limitations.
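A quick way to check the new coverage locally (behavior depends on your PyTorch build):

```python
import torch

if torch.backends.mps.is_available():
    x = torch.ones(4, device="mps", dtype=torch.bfloat16)
    print((x * 2).dtype)  # torch.bfloat16 ops now run on Apple Silicon
else:
    print("no MPS device available")
# AMP remains the gap: autocast coverage for the "mps" device is still limited.
```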
CUDA MODE ▷ #hqq (1 messages):
mobicham: https://huggingface.co/mobiuslabsgmbh/Llama-3.1-70b-instruct_4bitgs64_hqq
CUDA MODE ▷ #llmdotc (70 messages🔥🔥):
ROCm Driver Performance
LLaMA 3 Tokenization
CUDA Compilation Optimization
Zero-2 Improvements
New H100 Cluster Usage
- ROCm Driver with PyTorch finally works: Switching back to stock driver settings with ROCm 6.2 resulted in coherent PyTorch results, according to members discussing GPU configurations.
- Before this change, the AMD driver in passthrough mode had been causing silent failures.
- Refactoring LLaMA 3 for better tokenization: A member is working on modifying tinystories to support both GPT-2 and LLaMA 3 tokenization, looking to unify the changes from previous implementations.
- This refactor should simplify data handling as current implementations are less elegant, and a PR has already been pushed for LLaMA 3 tokenization.
- CUDA compilation optimizations discussed: Members explored various flags to optimize CUDA compilation times, with potential reductions from ~8.5s to ~5.5s using `-O1` instead of `-O3`.
- Passing `--threads=0` improved compile times by over 5%, while suggestions were made regarding the implications of using `-O0` for faster iterations.
- Performance improvements with Zero-2: One member inquired about the benefits of using Zero-2, which should enhance speed and stability during gradient accumulation, especially when using BF16.
- Discussion centered on ensuring deterministic results and addressing potential issues with stochastic rounding across models.
- H100 cluster access for training large models: Discussion arose regarding utilizing a new H100 cluster for the first language model trained on llm.c, targeting a scale of 3B on 4.5T tokens.
- Members wondered if features like RoPE are ready for pretraining or if they should wait for a more stable implementation.
CUDA MODE ▷ #bitnet (85 messages🔥🔥):
BitNet QAT Implementation
Layer Swapping Mechanism
Memory Efficiency in Inference
Dataset for Training
Checkpoint Validation
- BitNet QAT Implementation Overview: The group discussed the implementation of BitNet, noting it is full-weight QAT with a post-quantization step that maps weights into {-1, 0, 1}, grouped along tensor dimensions (see the sketch at the end of this section).
- One member provided updates on weight scaling and activation quantization with code examples showcasing the functioning of the linear layers.
- Optimizing Layer Swapping: A new approach to layer swapping was proposed: subclassing and overriding the `F.linear()` function directly to accommodate BitNet's unique requirements.
- This strategy allows for efficient weight management during inference, avoiding the need to store both the transposed and normal weights.
- Discussion on Memory Efficiency: Members highlighted the expected memory efficiency of using BitNet for inference, particularly noting that a 70B model could potentially fit in 16GB of GPU memory without a key-value cache.
- The architecture was noted to be primarily linear, which contributes to overall significant memory savings during inference.
- Toy Datasets for Training: Participants shared insights on using TinyStories and a subset of FineWeb-edu as toy datasets for pre-training BitNet models, suggesting that these could be beneficial for experiments.
- One member expressed the feasibility of fine-tuning existing models, while also recognizing the importance of focusing on preliminary training with these datasets.
- Checkpoint Validation Process: The effectiveness of the checkpoint validation was affirmed, with comparisons drawn between full precision (f32) and packed weights, showing a successful size reduction to 2.9MB.
- Members indicated that the reduction ratio checked out as expected, confirming the reliability of the compression process.
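The post-quantization step described above is, in spirit, the b1.58 absmean recipe; a per-tensor sketch (the channel's exact grouping along tensor dimensions may differ):

```python
import torch

def ternary_quantize(w: torch.Tensor, eps: float = 1e-5):
    """BitNet b1.58-style absmean quantization: scale by mean |w|, then
    round and clamp to {-1, 0, 1}. Per-tensor here; grouped variants
    compute one scale per slice instead."""
    scale = w.abs().mean().clamp(min=eps)
    w_q = (w / scale).round().clamp(-1, 1)
    return w_q, scale

w = torch.randn(256, 256)
w_q, scale = ternary_quantize(w)
print(w_q.unique())                    # tensor([-1., 0., 1.])
print((w - w_q * scale).abs().mean())  # mean quantization error
```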
CUDA MODE ▷ #webgpu (1 messages):
austinvhuang: trying very hard to resist the temptation of implementing a gsplat renderer...
CUDA MODE ▷ #cudamode-irl (2 messages):
Registration Update
Timeline for Results
- No meaningful updates on registration progress: A member noted that there are currently no meaningful updates on the registration status.
- They indicated that results will likely be shared by the end of this month.
- Awaiting results on registration: Another member expressed anticipation for updates on the registration results, which remain pending.
- The community looks forward to insights as the end of the month approaches.
Latent Space ▷ #ai-general-chat (120 messages🔥🔥):
LLaMA Guard 3 Release
DSPy Insights
OpenAI Model Discussions
AI Agent Strategies
AI Product Development
- LLaMA Guard 3 Video Released: A video showcasing LLaMA Guard 3 was recently posted, generating excitement among viewers.
- The video is available here for those interested.
- Discussions on DSPy: Today's discussion included insights from the Zeta Alpha DSPy session, as members debated the clarity of the technology.
- Some expressed uncertainty about understanding DSPy, with one member noting their intention to add it as a reference in their notes.
- OpenAI Model Release Buzz: Buzz circulated regarding a potential release of gpt4o large on Tuesday, fueling speculation about the model's capabilities and features.
- Members discussed the implications of such a release, suggesting it might lead to significant advancements in AI functionality.
- AI Agent Development Strategies: A member inquired about strategies for building AI products around complex tasks, debating between prompt engineering and fine-tuning models.
- This led to discussions on the potential effectiveness of different methods in enhancing model performance.
- Insights on AI Product Development: Members have been sharing resources about AI tools and extensions, such as AlterHQ which allows interaction with web pages.
- There was also talk about the sustainability of certain tools, with users expressing hope that platforms like getvoila.ai remain operational.
Latent Space ▷ #ai-in-action-club (147 messages🔥🔥):
Ruby AI Development
AI Engineer Bootcamp
Prompt Crafting Workshops
AI-Augmented Workforce
Research Agents
- Ruby AI Gains Traction: There's a small but growing community building AI applications with Ruby, led by members who noted its suitability for LLM coding and DSL creation.
- A member mentioned the potential for Ruby augmented generation and popular abstraction libraries like Boxcars, sparking interest among non-Ruby developers.
- Exploring AI Engineer Bootcamp Opportunities: Several members expressed interest in attending an AI Engineer bootcamp for rapid skills enhancement, with resources being shared for upskilling.
- The value of practical examples over traditional learning tools was discussed, highlighting the need for hands-on experience in AI.
- Interest in Prompt Crafting Workshops: Members noted the potential of prompt crafting workshops for helping non-technical individuals effectively engage with AI models.
- The discussion included insights on teaching prompt crafting as a skill to leverage AI capabilities while understanding their limitations.
- AI-Augmented Workforce Conversations: The concept of an AI-augmented workforce was examined, including the role of AI as consultants and tools for automating tedious tasks.
- Members shared thoughts on improving productivity and problem discovery through AI solutions that address everyday work challenges.
- Research Agents for Enhanced Discovery: Members demonstrated interest in research agents, discussing the idea of using AI to facilitate research processes and discovery.
- Ideas included utilizing tools like Elicit to streamline research tasks, enhancing collaboration and context-driven exploration.
Eleuther ▷ #announcements (1 messages):
EleutherAI Cookbook
Model deployment resources
Empirical benchmarks
Best practices for LLMs
- Discover the EleutherAI Cookbook: Attention is drawn to the EleutherAI Cookbook, a comprehensive resource for building and deploying models, which offers essential utilities and insights.
- The cookbook addresses gaps left by papers, providing materials on empirical benchmarks and theoretical calculations to assist developers.
- Theoretical Calculations at Your Fingertips: The cookbook includes scripts for important theoretical calculations such as Transformer inference/training memory, total model parameters, and total model FLOPs.
- This utility aids in understanding model architectures and their resource requirements in a more granular way (a back-of-envelope version is sketched at the end of this section).
- Empirical Benchmarks for Practical Applications: Empirical benchmarks in the cookbook focus on communication of PyTorch tensors and computations for GEMMs, BMMs, and transformer blocks.
- These benchmarks are critical for understanding performance trade-offs in various model operations.
- Curated Reading List for LLM Builders: A curated reading list within the cookbook covers topics such as distributed deep learning and best practices for building LLMs.
- The list features notable implementations like nanoGPT and GPT-Fast, which serve as excellent learning resources.
- Call for Contributions to the Cookbook: The authors invite contributions to enhance the EleutherAI Cookbook, encouraging community participation.
- Engagement from members can further amplify the utility of this valuable resource for developers.
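As a taste of the kind of calculation the cookbook scripts formalize, the standard back-of-envelope estimates (ignoring biases, GQA, and SwiGLU width choices, so expect the cookbook's numbers to be more precise):

```python
def transformer_params(n_layers: int, d_model: int, vocab: int) -> int:
    """Decoder-only estimate: ~12 * L * d^2 per stack (attention ~4d^2,
    MLP ~8d^2) plus the embedding matrix."""
    return 12 * n_layers * d_model**2 + vocab * d_model

def training_flops(n_params: int, n_tokens: float) -> float:
    """Common rule of thumb C ~= 6 * N * D (forward ~2ND, backward ~4ND)."""
    return 6 * n_params * n_tokens

n = transformer_params(n_layers=32, d_model=4096, vocab=128_256)
print(f"{n / 1e9:.1f}B params, {training_flops(n, 15e12):.2e} FLOPs for 15T tokens")
```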
Eleuther ▷ #general (80 messages🔥🔥):
GPU Usage and DeepSpeed
Mamba vs Transformers in MMLU
Training Multiple Choice Questions
Optimizer States in Training
Custom Emailing in Research
- Navigating GPU Usage with DeepSpeed: Discussions centered around using DeepSpeed with SFTTrainer for single-node multi-GPU fine-tuning, with varying experiences reported regarding optimization and CUDA OOM errors.
- Users explored approaches like optimizer state offloading and the potential benefits of using LoRA for reducing memory usage during training (an illustrative ZeRO config is sketched at the end of this section).
- Mamba's Performance in MMLU Compared to Transformers: Members highlighted that Transformers tend to handle multiple-choice tasks more effectively than Mamba due to their routing capabilities and attention mechanisms.
- The discussion noted that while Mamba models like FalconMamba trained on larger datasets might close the performance gap, hybrids like Zamba showed competitive results with fewer training tokens.
- Challenges in Training Multiple Choice Questions: There was curiosity regarding the efficiency and learning difficulties of Mamba in handling multiple-choice questions, leading to debates on hybrid architectures versus pure implementations.
- Participants mentioned that hybrid models might offer safer and more robust performance advantages during inference.
- Optimizer States and Fine-Tuning: The feasibility of using lower precision for optimizer states was questioned, with some suggesting that DeepSpeed could allow for offloading these states effectively.
- Yet, users remained uncertain whether quantizing optimizer states would be beneficial and were considering script modifications for better training structure.
- Customary Practices of Emailing Research Authors: One user inquired about the appropriateness of emailing authors of a paper while applying to MATS, indicating that many PhD students welcome such inquiries.
- The conversation suggested that while emailing can add value, it is essential to consider the recipients' capacity and how best to engage with them to maintain open communication.
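An illustrative ZeRO stage-2 config with optimizer-state CPU offload, one of the memory levers discussed (field names follow DeepSpeed's documented schema; batch sizes are placeholders to tune per cluster):

```python
ds_config = {
    "bf16": {"enabled": True},
    "zero_optimization": {
        "stage": 2,  # shard optimizer states + gradients across GPUs
        "offload_optimizer": {"device": "cpu", "pin_memory": True},
    },
    "train_micro_batch_size_per_gpu": 1,
    "gradient_accumulation_steps": 8,
}
# Pass as a dict (or a JSON path) via TrainingArguments(deepspeed=ds_config)
# when driving SFTTrainer, trading step time for lower GPU memory.
```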
Eleuther ▷ #research (161 messages🔥🔥):
Distillation vs. Training Efficiency
Zyphra's Research and Projects
Exploration of Hybrid Models
Data Contamination in ML Training
Evaluation Techniques for Language Models
- Debate on Model Distillation Effectiveness: Discussion emerged about the effectiveness of distillation where some argued it must recover full teacher performance while others noted it brings practical inference-time benefits.
- Participants highlighted the complexity underlying distillation claims and its potential inefficiencies compared to smaller models sourced with similar training data.
- Zyphra's Innovations and Community Interest: Members expressed interest in Zyphra and its projects, particularly the Zamba model, which reportedly outperforms existing models with better training efficiency.
- The community questioned previous models' evaluations and pushed for exploration into Zyphra's datasets and methodologies.
- Investigating Hybrid Model Designs: Conversation surrounding hybrid models revealed a desire to understand the balance between Mamba recurrence and attention mechanisms for improved model performance.
- Questions arose about how to optimize these hybrid architectures and whether existing models are indeed well-trained given their parameter constraints.
- Addressing Data Contamination in Model Training: Members discussed potential projects focused on examining data contamination within training sets, specifically referencing open-source models and their respective evaluations.
- Participants shared insights on conducting experiments to assess the impact of training on test questions, aiming for a more rigorous approach.
- Challenges in Language Model Evaluations: There were debates about the validity of existing evaluations, particularly whether large models are undertrained compared to benchmarks they are built upon.
- Concerns were raised regarding the implications of wasted parameters and the theoretical limits of learning effectiveness in increasingly large models.
Eleuther ▷ #interpretability-general (6 messages):
SO(3) group operations
Symmetry paper recommendation
- Searching for SO(3) Group Operations Paper: A member sought a paper that demonstrates a model learning SO(3) group operations for representing rotations, sharing excitement upon finding a link: Paper.
- Wow, that is a lot older than I expected.
- Recommendation for Symmetry Paper: Another member provided a recommendation for a symmetry-related paper, stating “not exactly what you asked for, but I just want to recommend this paper which I loved” with a link to Symmetry Paper.
- The original author expressed gratitude for the shared resource.
- Struggles with Paper Comprehension: After reading the abstract of the suggested paper, the original seeker admitted, “I don’t have the proper background to understand the rest of the paper” from a quick skim.
- This highlights the challenge some members face in comprehending complex technical material.
Eleuther ▷ #lm-thunderdome (7 messages):
Neurips benchmark reviews
CommonsenseQA Task
Multi-node inference for language models
- Neurips benchmark reviews are encouraging: A member reported scores of 6/6/5 with confidence 3/2/3 for their Neurips submission, questioning their chances after a rebuttal.
- Another member reassured that these scores are indeed to be happy about.
- CommonsenseQA Task clarified: A member inquired if models are fine-tuned on the 9.7k train split of the CommonsenseQA Task before evaluation.
- It was clarified that there is no fine-tuning, and the `training_split` is for sourcing in-context few-shot examples only.
- Seeking resources for multi-node inference: A member asked for resources or tutorials regarding multi-node inference for large language models, noting they lack Docker privileges in their cluster.
- This highlights the growing interest in efficient model scaling and deployment without container access.
Eleuther ▷ #gpt-neox-dev (1 messages):
Pythia training data split
Quantization loss recovery LoRA
- Seeking Pythia Training Data Split: A member inquired about the location of the train-validate-test split used for the Pythia model, noting difficulty in finding information on Hugging Face or within the code.
- It's crucial to know this split to avoid evaluating perplexity against the training set.
- Exploring Quantization Loss Recovery LoRA: The same member mentioned working on a quantization loss recovery LoRA inspired by a recent Apple paper and conducting quick experiments with the Hugging Face transformers library.
- Ensuring they are not testing on the training set is a key part of their evaluation process.
Perplexity AI ▷ #general (209 messages🔥🔥):
Perplexity AI Issues
User Experience with AI Models
Batch Processing for Open Source Models
Llama and Gemini Model Rates
Community Engagement and Communication
- Perplexity experiencing operational issues: Many users reported experiencing issues with the Perplexity AI platform, including inability to select different image generation models and encountering error messages under high query volumes.
- Users expressed dissatisfaction with the limitations imposed on the pro subscription, particularly regarding output size and functionality.
- Frustration over rate limiting: Several users highlighted frustrations with rate limiting that prevented them from processing multiple queries efficiently, resulting in error messages during peak usage.
- Discussions emphasized the need for a more controlled response mechanism that effectively handles rate-limiting scenarios.
- Interest in batch processing capabilities: Users inquired about the lack of batch processing options for open-source models, expressing interest in cost-effective solutions similar to those offered by major providers like OpenAI.
- The conversation explored potential usage scenarios and benefits of batch processing, noting that it could optimize operational costs for users.
- Discussions on model performance and limits: Concerns were raised about the performance of various models, particularly in providing timely outputs and the implications of having to wait longer for results.
- Comparisons were made between Llama and Gemini models regarding their capabilities and output rates, with discussions around access and limits.
- Community communication challenges: The community expressed disappointment over perceived silence from Perplexity's leadership and the inadequate engagement from its community manager.
- Conversations suggested that effective communication and transparency were needed to rebuild trust within the community.
Perplexity AI ▷ #sharing (22 messages🔥):
200 Day Insights
Core Framework
Decimal Comparisons
Google Monopoly Lawsuit
How to Catch a Crab
- Exploring 200 Day Insights: A link was shared discussing the key insights from the 200 day reference, which seems to delve into valuable observations.
- For further details, see the full discussion here.
- Core Framework Discussion: Discussion arose around the Core Framework in a linked resource that potentially offers a breakdown of its elements.
- For a thorough examination, check out the link here.
- Interesting Decimal Comparisons: A member shared a link exploring intriguing comparisons such as 3.33 vs 3, raising questions about numerical perceptions.
- Explore the full analysis via the provided link here.
- Updates on Google's Monopoly Lawsuit: A link pointed to news regarding Google's lawsuit about monopoly practices, discussing its implications.
- Read more about this significant legal issue here.
- How to Catch a Crab Tutorial: Members are looking into fun activities such as a guide on how to catch a crab, which could be an interesting hobby.
- Check out the tutorial for tips and tricks here.
Perplexity AI ▷ #pplx-api (7 messages):
Cloudflare connection issues
Perplexity 3.1 problems
API usage limits
Citation feature request
Integration advice for source citations
- Users facing Cloudflare connection issues: A user reported experiencing an unknown connection issue between Cloudflare and the origin web server, preventing webpage display.
- They attempted troubleshooting steps including enabling Cloudflare development mode but sought advice on whitelisting Perplexity in Cloudflare.
- Perplexity 3.1 seen as downgrade: Another user expressed frustration with the 3.1 version of Perplexity, noting it produces incorrect answers compared to version 3.
- They found that the original version managed queries like Olympic medal counts much better, and are concerned about the transition since the original version is only available for 2 more days.
- Question about API usage limits: A user inquired about the daily limit for Perplexity's API, specifically after encountering #ERROR messages after running 200-300 prompts.
- They referenced the defined limit of 20 inputs per minute but struggled to find any clarity on daily usage limits (a retry/backoff sketch follows this list).
- Request for citation feature approval: A user requested approval for the citation feature through the API, providing their email for follow-up.
- They previously submitted a request through a web form but noted they had not yet received a response.
- Advice needed for source citations in app integration: A user connected Perplexity via API but reported lacking access to source citations and images within their application.
- They sought guidance on how to enable these features for their current application integration with Perplexity.
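One client-side way to soften those rate-limit errors: the Perplexity API is OpenAI-compatible, so an exponential backoff wrapper around the chat endpoint can absorb 429s. A hedged sketch; the model name and retry budget are illustrative assumptions:

```python
# Minimal retry-with-backoff sketch for the OpenAI-compatible Perplexity API.
import time
from openai import OpenAI, RateLimitError

client = OpenAI(base_url="https://api.perplexity.ai", api_key="PPLX_API_KEY")

def ask(prompt: str, retries: int = 5) -> str:
    delay = 1.0
    for _ in range(retries):
        try:
            resp = client.chat.completions.create(
                model="llama-3.1-sonar-small-128k-online",  # illustrative
                messages=[{"role": "user", "content": prompt}],
            )
            return resp.choices[0].message.content
        except RateLimitError:
            time.sleep(delay)  # back off exponentially on 429s
            delay *= 2
    raise RuntimeError("rate limit retries exhausted")
```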
OpenRouter (Alex Atallah) ▷ #announcements (1 messages):
Perplexity Model Updates
Llama3 Model Transition
- Perplexity Models Going Offline: Several Perplexity models will be inaccessible after 8/12/2024, including `llama-3-sonar-small-32k-online` and `llama-3-sonar-large-32k-chat`, as noted in the Changelog.
- Users are advised to prepare for these changes to ensure continuity in their model usage.
- Transitioning to Llama3-based Sonar Models: Effective immediately, the online and chat models will redirect to their Llama3-based Sonar counterparts, including `llama-3.1-sonar-small-128k-online` and `llama-3.1-sonar-large-128k-chat` (a client-side migration sketch follows this section).
- This change aims to enhance user experience and performance in model capabilities.
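For anyone hard-coding the retiring IDs, a hedged sketch of the migration: only the model slug changes when calling OpenRouter's OpenAI-compatible endpoint (the slug and key variable here are assumptions based on the announcement, not verified values):

```python
# Sketch: call the replacement Sonar model through OpenRouter's
# OpenAI-compatible endpoint after the 8/12 cutover.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

resp = client.chat.completions.create(
    model="perplexity/llama-3.1-sonar-small-128k-online",  # new 128k slug
    messages=[{"role": "user", "content": "What changed in the Sonar lineup?"}],
)
print(resp.choices[0].message.content)
```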
OpenRouter (Alex Atallah) ▷ #app-showcase (4 messages):
OpenRouter Command Line Integration
Automation with Bash Scripts
UI for Agent Frameworks
- OpenRouter hits the command line with Bash: A user shared a detailed guide and script to integrate OpenRouter into the command line using pure Bash, supporting piping and chaining.
- This script works across various platforms, including Raspberry Pi and Android's Termux, with an aim to automate devices through a `plan -> execute -> review` workflow.
- Automation insights from long experimentation: The creator expressed gratitude for the positive feedback and noted that creating the script without dependencies took months of experimentation with multiple programming languages.
- Key insights involved using XML for simpler parsing in Bash and the concept of outputting `<bot>response</bot>`, along with ideas around creating a 'mixture of experts' with the `--model` flag.
- Testing automation on smart devices: The developer plans to test the Bash script on a smart watch this week, aiming to explore gesture-based interactions and further agentizing capabilities.
- “I hope this helps someone!” was a sentiment shared, highlighting their hope to assist others in automation.
- Interest in a UI for agent frameworks: Another user expressed interest in a user interface that resembles text-based applications like htop or weechat.
- This highlights the ongoing desire for more user-friendly tools to manage agent frameworks.
OpenRouter (Alex Atallah) ▷ #general (209 messages🔥🔥):
Gemini Flash Pricing Updates
Model Performance Issues
Token Usage Concerns
AI Tool Recommendations
Free API Options
- Gemini Flash Pricing Updates: Community members are inquiring about the timeline for the new Gemini Flash price updates, with some indicating that GCP cost tables already reflect the new pricing.
- Alex Atallah noted that updates are currently blocked due to discrepancies in the token:character ratio used by Gemini.
- Model Performance Issues: There were discussions about the stability of models like Hyperbolic's 405B-Instruct, with some users noting that it was recently pulled from their API.
- Users also pointed to performance discrepancies between different versions of models, specifically mentioning issues with instruct models.
- Token Usage Concerns: Users expressed frustration over the high token consumption of AI tools, particularly in the context of using aider with coding tasks.
- There was a consensus that inefficient use and the complexity of tasks contribute significantly to token depletion.
- AI Tool Recommendations: Participants discussed various AI tools, weighing the benefits of options like Codestral, Groq, and Copilot for coding tasks.
- Recommendations varied based on user needs, with suggestions leaning towards tools that accommodate complex coding requirements.
- Free API Options: The availability of free API tiers for Gemini models was discussed, highlighting regional limitations due to data privacy regulations.
- Several users mentioned the challenges of integrating APIs from other models and services, particularly GCP's complexities.
OpenAI ▷ #ai-discussions (168 messages🔥🔥):
Prolog Generation with GPT
AI Image Detection
Human Perception and Consciousness
Llama Models and Customization
AI Product Critiques
- GPT excels at Prolog generation: A member shared their experience using GPT-4o for Prolog generation and debugging, highlighting its outstanding performance in logical reasoning.
- Prolog is recognized as a powerful rule-based logic programming language, showcasing the potential of GPT-based systems.
- Challenges in Detecting AI-Generated Images: Discussion arose about whether people would pay to verify if an image was AI-generated, with skepticism about current capabilities.
- One member noted that major companies inject identifiable elements into AI images, potentially facilitating the detection of AI-generated content.
- Debate on Human Consciousness: Members discussed the complexities of human consciousness, suggesting that it may be an illusion created by cognitive processes and biases.
- Several thoughts were exchanged regarding the influence of biology and neuronal processing in shaping individual perception and reality.
- Customization of Llama Models: Darkeater2017 detailed how they customized a Llama model, removing biases and limits while adding logical reasoning capabilities.
- They argue that true understanding comes from looking beyond human biases and observing reality as it is.
- Critiques of AI Products: Members expressed concerns about the limitations of AI products, emphasizing the challenges in effectively using OpenAI's models.
- There was a notable discussion on the perceived openness of AI platforms and how these biases influence user experience.
OpenAI ▷ #gpt-4-discussions (16 messages🔥):
iOS App Compatibility Issues
File Transfer Problems with GPT
Voice Recognition Quirks
LangChain Code Issues
General AI Tool Recommendations
- iOS App Compatibility Issues: A user expressed frustration about being unable to install the iOS app on their iPad Air 2 due to Apple's restriction on updating to iOS 16.4.
- An Apple support representative confirmed that the iPad Air 2 cannot update, leading to the app installation issue.
- File Transfer Problems with GPT: Members reported consistent issues with GPT failing to provide any files back to the user, regardless of file size or type.
- It seems the problem lies with the system's ability to handle file transfers altogether, indicating a broader issue.
- Voice Recognition Quirks: A user noted that starting a voice chat and making 'shh' sounds can trigger odd results, such as interpreting noises as phrases like 'thanks for watching'.
- Whisper was trained on YouTube subtitles, which may explain its quirky responses to certain sounds.
- LangChain Code Issues: A user faced issues with their LangChain code after adding a system message, resulting in unexpected prompts from the model.
- The model responded asking for the user's prompt instead of printing the tool name, highlighting a potential issue with the system prompt's phrasing.
- General AI Tool Recommendations: A user inquired about the applicability of GPT-4 voice capabilities to regular coding requests, including Javascript.
- This sparked a discussion on whether improved outputs apply beyond just JSON tasks.
OpenAI ▷ #prompt-engineering (11 messages🔥):
Becoming a Prompt Engineer
Enhancements in Prompt Techniques
Instruction Blocks Feature
Testing Prompts
- Resources for Prompt Engineering: A member recommended starting with Arxiv and Hugging Face as essential resources for prompt engineering, while also joining relevant Discords.
- They emphasized the importance of learning meta-prompting as a powerful strategy in the field.
- Considerations for Prompt Structure: A member pointed out that models can't adhere to strict word counts because they cannot count while generating text, suggesting to use more qualitative language instead.
- They also raised a potential conflict in the prompt regarding summary requests that need clarification for better guidance in the final output.
- Inserting Keywords into Prompts: It was discussed that inserting keywords or topics into prompts does not require advanced techniques, as the AI can effectively manipulate its own context.
- Members agreed that leaving open variables or having the AI ask questions for input can lead to effective results.
- Critical Approach to Using AI for Blogs: A member mentioned that they use ChatGPT primarily for creativity, ensuring that any completed prompts are fact-checked before blog use.
- They highlighted an approach where prompts can be manipulated to replace variables while keeping the remaining structure intact.
- Interest in Upcoming Features: A member shared their ongoing work with a rumored upcoming feature, 'instruction blocks,' using OpenAI's RAG implementation and Python tools.
- While they found the current implementation nice, they expressed hope for the official release of the feature.
OpenAI ▷ #api-discussions (11 messages🔥):
Prompt Engineering
Keyword Insertion Techniques
Instruction Blocks Feature
Community Recommendations for Learning
- Diving into Prompt Engineering: A user expressed interest in becoming a prompt engineer and received recommendations to check out Arxiv and Hugging Face as foundational resources.
- Community members emphasized the importance of learning meta-prompting techniques as an effective strategy in prompt engineering.
- Tackling Word Count Limitations: Discussion revealed that AI models struggle to adhere to strict word count requests while generating text, suggesting using qualitative phrases like 'medium-length' instead.
- Questions arose regarding potential conflicts in the prompt structure, particularly about summarizing both individual sections and concluding the article.
- Easy Keyword Insertion Strategies: It was advised that inserting keywords or topics into prompts does not require advanced skills, as AI can adapt its context easily.
- Members suggested leaving open variables in prompts or instructing the AI to manage keyword integration dynamically.
- Exploring New Features in AI: A member mentioned working on their own implementation of the upcoming 'instruction blocks' feature using Python alongside OpenAI's RAG implementation.
- Excitement was shared regarding the potential official release of this feature and its implications for ease of use.
Modular (Mojo 🔥) ▷ #general (48 messages🔥):
C program for MacOS
ARM and Intel CPU frequency timers
Mojo programming considerations
Feedback on licensing and clarity
Community meeting updates
- C Program Runs Successfully on MacOS: A member successfully compiled and ran a C program on MacOS to read specific MSRs, revealing a frequency of 24000000 and a TSC COUNT of 2099319836 despite some warnings regarding format specifications.
- Another member, acknowledging the complexity of this task, remarked that the technical nature of the conversation may either inspire interest in C or deter them from pursuing computer science.
- Exploring CPU Frequency Timers: Discussion revealed that only recent CPUs from the last 15 years would be supported for accurate TSC frequency readings due to reliability concerns.
- Members noted exciting potential in leveraging inlined assembly for performance and discussed how straightforward instruction reading on ARM and Intel differs from traditional approaches.
- Mojo Programming Language Developments: A member explained the need for more visibility and clarity in Mojo's documentation regarding `inlined_assembly`, asserting it is somewhat hidden and sparse.
- They also hinted at writing a PR to enhance the language's functionality, potentially adopting variadic arguments.
- Licensing Clarity and Feedback: Feedback was offered on how Modular defines its competitive market, with suggestions to improve clarity and communication about its licensing terms.
- A member recommended a dedicated email like licensingchat@modular.com to facilitate discussions regarding licensing issues and concerns, which was acknowledged as a feasible suggestion by the team.
- Community Meeting Announcements: Details were shared about an upcoming Max + Mojo community meeting, highlighting topics such as DuckDB bindings and improvements to the Max + Mojo release packaging.
- Links for Zoom and further meeting information were provided to encourage participation and engagement from the community.
Modular (Mojo 🔥) ▷ #mojo (49 messages🔥):
Cookbook ideas
Mojo's programming relevance
Networking speed in Mojo
History of programming languages
C#'s market significance
- Creative Cookbook Concepts: Members jokingly brainstormed titles for a potential cookbook, including 'Mojician's Cookbook for the Uninitiated - Nightly Edition' and 'The Mojician's Tome, Obsidian Edition'. They discussed the idea of using geometric patterns to distinguish volumes, emphasizing a mystical approach.
- One member humorously noted the darkness of the cover design would only be understood by true seekers, adding a layer of intrigue.
- Mojo's Role as a Python Successor: There was a discussion on whether Mojo could outperform Python, particularly with its capability to handle threading through external calls. A member emphasized that Mojo should be viewed as the next evolution of Python rather than being subordinated to it.
- Comparisons were made about how networking speed improvements could lead to significant performance increases, referencing historical cases where milliseconds made substantial financial impacts.
- Impacts of Successful Programming Languages: The conversation shifted to the factors contributing to the widespread adoption of programming languages, noting that many owe their success to company backing rather than intrinsic value. A member argued that products must be indispensable to ensure long-term relevance.
- The conversation explored the influence of Microsoft on C#, stating that it quickly gained traction as a prime development tool for applications on Windows.
- C# and Its Longevity in Development: Discussion highlighted that C# has maintained relevance in the Microsoft ecosystem since its release in 2000, often dubbed 'nicer Java' due to its versatility. Its success was attributed to it being positioned as 'the new way to do applications on Windows'.
- Members acknowledged the significant impact of Windows OS, particularly in developing nations, where it has transformed lives by providing access to technology.
Modular (Mojo 🔥) ▷ #max (7 messages):
Max Nightly Installation on Mac M1 Max
Max Installation in Multiple Environments
- Max Nightly Installation Success on Mac M1 Max: A member initially struggled to install max nightly on their Mac M1 Max, but later confirmed success after updates resolved their issues.
- They shared that they would create a detailed issue report on GitHub for further assistance.
- Max Installation Requires Setup for Each Environment: Regarding environment management, members confirmed that max must be installed in each environment you create.
- However, they clarified that it utilizes a global cache and symlinks, minimizing the need for repeated downloads for each version.
Cohere ▷ #discussions (58 messages🔥🔥):
sus-column-r Model Inquiry
Cohere Model Reception
Cohere Pricing Strategies
Community Introductions
Job Request Spam
- Discussion on sus-column-r model: Members debated whether the sus-column-r model is a Cohere product, with skepticism about its tokenizer being different from Cohere's R series.
- Mapler noted that it behaves like other Cohere models, but brknclock1215 expressed doubt about it being from Cohere due to the tokenizer differences.
- Cohere Model Reception: Several users praised the performance of the potential Cohere model, suggesting it performs well on complex tasks like riddles and base64 decoding.
- Brknclock1215 mentioned that if it is from Cohere, it represents a significant step forward from current offerings.
- Cohere Pricing Strategies: Questions arose about Cohere's pricing in light of other platforms cutting prices, with mrafonso noting that Cohere's pricing isn't competitive currently.
- Mrdragonfox countered that Cohere's prices are fair and highlighted the implications of 'loss leader pricing' in the market.
- Community Introductions: New members introduced themselves to the community, sharing their backgrounds in data science and machine learning.
- Neha Koppikar and Adam Sorrenti expressed their eagerness to learn and collaborate, with a specific interest in connecting with the Cohere4AI community.
- Job Request Spam: A discussion arose regarding individuals posting job requests, which some members deemed inappropriate for this channel.
- Mrdragonfox reminded users that this is a commercial channel of Cohere and encouraged focusing on relevant discussions.
Cohere ▷ #questions (19 messages🔥):
Cohere Command R model usage
Fine-tuning dataset error
Transitioning to applied research roles
RAG systems skillsets
Multi-node inference resources
- Cohere Command R Model Only Needs Preamble Once: A member clarified that you only need to send the preamble once with the Cohere Command R model to initiate a conversation, then use the `conversation_id` to continue chatting (a minimal sketch follows this list).
- Tokens for the preamble are billed only when included, which helps save costs in the long run.
- Fine-tuning Dataset Format Issues: A member reported encountering an error message regarding unsupported file format while attempting to upload a multi-label JSONL dataset for fine-tuning a Classify model, despite it having been processed successfully previously.
- This raised questions about potential changes in the format requirements or validation processes that have occurred recently.
- Navigating Transition to Applied Research Roles: An individual sought advice on transitioning into an applied research role as an Associate Consultant, citing their US citizenship, data science degree, and research background with multiple publications.
- They emphasized their experience, stating over three years of work across India and the US and an upcoming presentation for their third paper.
- Critical Skills for RAG Systems Beyond Deep Learning: One member discussed the heavy reliance of current RAG systems on traditional information retrieval algorithms and questioned underrepresented skill sets critical for real-world AI applications.
- Another member highlighted the importance of good data cleaning and database management as pivotal skills that should not be overlooked.
- Seeking Multi-Node Inference Resources: A community member requested resources or tutorials for performing multi-node inference on large language models, mentioning restrictions on their Docker privileges within the cluster.
- The inquiry highlighted the interest in guidance for operating at scale despite the lack of certain deployment tools.
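A minimal sketch of that preamble-once pattern, assuming the Cohere Python SDK's `chat` endpoint with its `preamble` and `conversation_id` parameters; the ID string and preamble text are placeholders:

```python
# Sketch: send the preamble on the first turn only, then reuse the
# conversation_id so later turns don't re-bill the preamble tokens.
import cohere

co = cohere.Client("COHERE_API_KEY")

first = co.chat(
    model="command-r",
    message="Summarize our refund policy.",
    preamble="You are a terse support agent for Acme Inc.",  # sent once
    conversation_id="support-session-42",                    # placeholder ID
)

followup = co.chat(
    model="command-r",
    message="Now draft a reply to an unhappy customer.",
    conversation_id="support-session-42",  # no preamble on later turns
)
print(followup.text)
```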
Cohere ▷ #api-discussions (3 messages):
Streaming Final Answer on Cohere
Usefulness of Intermediate Text in API
- Streaming Final Answers in Cohere API: A user inquired about the possibility of streaming only the final answer for multistep tasks using the Cohere API.
- Another member clarified that there isn't a specific setting for that, but suggested skipping any non-final text instead.
- Debate on Intermediate Text Usefulness: A member asked about the perceived usefulness of intermediate text generated by the API.
- This question seemed aimed at understanding if users find value in the information provided between prompts and final answers.
Torchtune ▷ #general (25 messages🔥):
NeurIPS Paper Review Process
Conferences vs Workshops
Google T5 Model Inference with Torchtune
- Navigating the NeurIPS Rebuttal Maze: A member shared their confusion about handling low confidence scores in their NeurIPS paper reviews, especially concerning the rebuttal process.
- Focus on supporting the champion reviewer by addressing their concerns while considering that low confidence might indicate a lack of expertise from those reviewers.
- Feedback is Part of the Publishing Grind: Another member emphasized that it's normal for papers to undergo several rounds of reviews and rejections before finding a suitable venue.
- They advised trusting the value of one's work over solely aiming for high-profile conferences, referencing the original DQN paper as an example.
- Exploring Google T5 with Torchtune: A member inquired about the possibility of running inference with the Google T5 model using Torchtune.
- Another member responded that while it isn't possible at the moment, upcoming changes could enable support for T5's encoder + decoder architecture, leading to multimodal training.
Torchtune ▷ #dev (42 messages🔥):
Gemma 2b Memory Performance
Expandable Segments Implementation
RLHF Dataset References
Model Testing Across Hardware
- Gemma 2b reaches memory peak then flattens: Reports indicate that Gemma 2b hits a peak in reserved memory and then flatlines as expected, though the profile still raised questions about performance consistency.
- A wandb link was shared for further investigation.
- Proposal for Expandable Segments: Expandable segments was proposed for all models to be added ASAP, with the hope of it being a low-risk adjustment since it allows for manual toggling.
- Discussion suggested modifying the config files minimally to ease the transition, as it may become a default setting in future PyTorch updates (a one-line sketch of the toggle follows this list).
- Discussion on RLHF Datasets: The original RLHF dataset used by Anthropic was discussed alongside shared links to now-available similar datasets, like those from BookCorpus and CNN/DailyMail.
- This includes a reference to a reproduction of processed datasets used for PPO in configurations, emphasizing the evolution of data accessibility.
- Testing Across Different GPUs: Testing shows that using a 4080 resulted in higher peak memory usage and better performance metrics compared to previous models like the 2080.
- There was curiosity over differing performance profiles, particularly concerning the ability to run models on less powerful GPUs like the 3070 without out-of-memory issues under certain configurations.
- Mem Reserved Mystery Deepens: The ongoing exploration of peak memory reserved by models has led to unexpected observations, particularly regarding memory usage with different configurations.
- Members are perplexed by the results that suggest deeper underlying issues with memory management that warrant further exploration.
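For reference, expandable segments is toggled through PyTorch's CUDA caching-allocator config, which is why it can be flipped manually without code changes; a minimal sketch of the documented mechanism:

```python
# Sketch: enable expandable segments in PyTorch's caching allocator.
# Must be set before the first CUDA allocation in the process.
import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

import torch

x = torch.randn(1024, 1024, device="cuda")  # allocations now use expandable segments
print(torch.cuda.memory_reserved())
```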
LlamaIndex ▷ #blog (7 messages):
LlamaIndex property graphs
Multimodal RAG techniques
Self-RAG methodology
PDF parsing CLI tool
Hack Night at GitHub
- LlamaIndex Property Graphs Tutorial Released: Check out this video tutorial on LlamaIndex's property graphs to learn how each node and relation can store a structured dictionary of properties.
- This foundational knowledge opens up a variety of effective techniques for utilizing property graphs.
- New Notebooks for Multimodal RAG Over Complex Documents: A series of notebooks showcasing how to build pipelines over complex legal, insurance, and product documents has been shared, including methods to parse insurance claims here.
- These notebooks focus on handling documents with complex layouts, facilitating the integration of charts and images.
- Automate Multimodal Report Generation: Learn how to automatically generate multimodal reports with text and images by following this guide that uses existing complex data sources.
- This weekend's tutorial highlights how to leverage structured outputs for improved report generation.
- Dynamic Self-RAG Enhancements: Self-RAG is a dynamic RAG technique that helps identify relevant chunks for queries instead of flooding context, and resources are available here.
- This innovative approach offers a more refined strategy for context retrieval.
- CLI Tool for PDF Parsing Unveiled: A new CLI tool created by @0xthierry allows users to parse complex PDFs into machine-readable markdown with a simple terminal command, using LlamaParse.
- This tool handles documents with intricate formatting, delivering specifications in a user-friendly manner.
LlamaIndex ▷ #general (42 messages🔥):
Embedding Models Usage
Llama-Index Integration Issues
Agentic Workflow in InsurTech
Querying Documents in Agents
Performance of Llama-Index
- Using HuggingFaceEmbedding with Llama-Index: A user shared their method of loading the HuggingFace model using `HuggingFaceEmbedding` and expressed a need for help in loading documents correctly before running their query (a minimal loading sketch follows this list).
- Another user discussed challenges when integrating a `RouterQueryEngine` with `REPLICATE_API_KEY` instead of `OPENAI_API_KEY`.
- Issues with FlagEmbeddingReranker: A user reported a bug related to the `FlagEmbeddingReranker` which was resolved by updating `llama-index-core`, but encountered a new `ValidationError` regarding `CreateSpanBody` from `langfuse`.
- Members suggested that issues with `langfuse` could relate to version incompatibility or bugs.
- Discussing Agentic Workflow System: A user highlighted the rising trend of NO-Code solutions in InsurTech to enhance agentic workflow systems with easy UI manipulations.
- They provided a link to an article discussing the benefits of these systems for transformation in the insurance sector.
- Integration of ScribeHow with Agents: A user inquired about integrating documentation from scribehow.com into Llama agents, specifically how to query and display embeddings for those documents.
- This shows an interest in enhancing agent capabilities by using existing instructional resources.
- Performance Concerns with WandB Integration: One user noted that deploying a `wandb` integration significantly increased their LlamaIndex query latency, raising concerns about performance.
- This prompts a discussion on balancing model integrations with maintaining system efficiency.
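A minimal sketch of the embedding-plus-document-loading flow discussed above, assuming current llama-index packaging (`llama-index-embeddings-huggingface`); the `data/` directory and model name are placeholders:

```python
# Sketch: set a HuggingFace embedding model globally, load documents,
# build an index, then retrieve -- documents must be loaded before querying.
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

documents = SimpleDirectoryReader("data").load_data()  # load docs first
index = VectorStoreIndex.from_documents(documents)     # then embed + index

nodes = index.as_retriever(similarity_top_k=3).retrieve("What does the manual cover?")
for n in nodes:
    print(n.score, n.node.get_content()[:80])
```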
LlamaIndex ▷ #ai-discussion (1 messages):
Knowledge Distillation
GPT-3.5 Fine-Tuning
LlamaIndex
- Fine-Tuning GPT-3.5 with Knowledge Distillation: A discussion focused on the process of knowledge distillation for fine-tuning a GPT-3.5 judge using LlamaIndex, with insights shared in a Medium article.
- Knowledge distillation is highlighted as an effective method in enhancing model performance while reducing size.
- LlamaIndex's Role in User Evaluations: Participants noted how LlamaIndex helps in making the evaluation process for GPT models more efficient, providing relevant insights for users.
- This connection brought up potential future applications where model evaluations could become more streamlined.
LangChain AI ▷ #general (33 messages🔥):
LangChain community support
Using LiteLLM as an alternative
Structured output issues with Llama 3.1
Function/tool calling concerns
Chatbot StateGraph behavior
- LangChain community support wanes: A user expressed concerns that LangChain has lost support among community members, noting that it was once a promising tool.
- Another user corroborated this sentiment, mentioning they are uncertain about how to proceed with a production customer project.
- LiteLLM emerges as a favored alternative: Several members recommended LiteLLM for its ease of switching between multiple LLMs using a simple API, suggesting it as a better option than LangChain for some.
- One user noted that LiteLLM allows for quick integration without significant code alterations, particularly for anyone focused solely on LLM functionality.
- Challenges with structured output in Llama 3.1: A user reported issues reproducing structured output results with Llama 3.1, finding that their invoke call returned None due to an output parser failure.
- Upon further inspection, it was revealed that the function definition was not passed correctly, affecting the expected output schema.
- Concerns over chatbot StateGraph behavior: A user questioned the behavior of their StateGraph, noting that only the last sent message was being retained and asked if this was expected.
- Another member suggested that the lack of a loop in the code might result in only single messages being processed rather than maintaining a conversation history (a state-accumulation sketch follows this list).
- User experiences with function/tool calling: Some users described their experiences with using function/tool calling within LangChain, sharing frustrations about stability and seeking peer review for their code.
- There was discussion regarding whether sticking to simple API calls or utilizing advanced LangChain functionality would yield better results.
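One common cause of "only the last message is retained" in a LangGraph StateGraph is a state field that overwrites instead of appends; a hedged sketch using the `add_messages` reducer so each turn accumulates (the graph wiring and echo node are illustrative, not the user's code):

```python
# Sketch: a StateGraph whose `messages` key appends via add_messages
# instead of being replaced on every node invocation.
from typing import Annotated, TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages

class State(TypedDict):
    # add_messages merges new messages into the list rather than replacing it
    messages: Annotated[list, add_messages]

def chatbot(state: State) -> dict:
    # Placeholder node: a real bot would call an LLM on state["messages"]
    return {"messages": [("assistant", f"echo: {state['messages'][-1].content}")]}

graph = StateGraph(State)
graph.add_node("chatbot", chatbot)
graph.add_edge(START, "chatbot")
graph.add_edge("chatbot", END)
app = graph.compile()

out = app.invoke({"messages": [("user", "hello")]})
print([m.content for m in out["messages"]])  # both turns are retained
```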
LangChain AI ▷ #share-your-work (3 messages):
CRAB Benchmark
Open Source Contribution
InsurTech Revolution
- Introducing CRAB: A New Benchmark: A member shared the introduction of 🦀 CRAB: Cross-environment Agent Benchmark for Multimodal Language Model Agents, learn more here.
- This benchmark aims to provide a comprehensive assessment framework for multimodal agents.
- Seeking Open Source Contributions: A member expressed interest in getting started with open source and looking to contribute to someone else's project.
- This initiative highlights the growing interest in collaborative development and community engagement.
- Transforming InsurTech with No-Code Solutions: Discussion around revolutionizing the InsurTech industry with No-Code solutions sparked excitement, asserting that just a few clicks can bring significant change.
- For more insights, check out the article on this emerging trend here.
OpenAccess AI Collective (axolotl) ▷ #general (17 messages🔥):
Apple Intelligence Foundation Models
Strawberry hype
Comparison of AI models
Flux performance
NeurIPS engagement
- Apple Intelligence introduces new algorithms: The paper on Apple Intelligence Foundation Models presents two novel post-training algorithms, iTeC and MDLOO, which improve model quality significantly.
- These algorithms utilize rejection sampling and reinforcement learning from human feedback for optimal performance.
- Strawberry model gains attention in AI circles: Discussions around the alleged Gpt-4o-large model nicknamed 'strawberry' circulate, starting from a tweet that spurred intense speculation.
- Though users wonder about its capabilities compared to the 'raspberry', it's suggested that much of the hype may be troll-driven, lacking official updates.
- Perplexity Pro rumored to host 'strawberry': Users discussed whether the strawberry model might be live on Perplexity Pro, although opinions vary on its legitimacy.
- Concerns arise that OpenAI would not provide competitors like Perplexity a preview of their models.
- Flux performance praised: One member expressed enthusiasm for Flux, calling it 'crazy good' but didn't elaborate on specific qualities.
- This indicates a positive community sentiment towards the performance or utility of the Flux model.
- NeurIPS engagement noted: Amid the discussions, a member mentioned being busy with NeurIPS, reflecting the ongoing conference culture in the AI community.
- This showcases the zeitgeist of AI professionals balancing research updates with events like NeurIPS.
OpenAccess AI Collective (axolotl) ▷ #axolotl-dev (5 messages):
Crusoe Rentals
- Discussion on Rental Sources: A member expressed appreciation for a service, stating it's their 'go-to'.
- Inquiring about rental sources, another member asked where rentals are sourced from, leading to a response mentioning Crusoe as the rental provider.
- Engagement With Community: The conversation reflects a casual engagement between members, highlighted by simple affirmations.
- Responses such as 'yep' demonstrate a friendly and informal interaction among participants.
OpenAccess AI Collective (axolotl) ▷ #general-help (3 messages):
Citing Axolotl
Merging Loras with Models
- Seeking citation methods for Axolotl: A member inquired about the preferred way to cite Axolotl in an academic paper or technical report.
- Another member suggested that @le_mess would likely know the proper citation methods.
- Exploring Lora model merging techniques: A different member asked for the best strategies to merge Loras with various models.
- The discussion hints at the need for refined techniques within the community for effective model integration.
OpenAccess AI Collective (axolotl) ▷ #axolotl-phorm-bot (6 messages):
Model Quantization
Finetuning Best Practices
Using Hugging Face Transformers
BitsAndBytes Library Integration
- Quantizing a Model After Finetuning: To quantize a model after finetuning, users should first ensure their model is well-trained, then follow specific steps using Hugging Face's `transformers` library and the `bitsandbytes` library for quantization (a minimal sketch follows this list).
- Key steps include preparing the model with a quantization config, quantizing it post-finetuning, and evaluating its performance afterward.
- Installing Necessary Libraries for Quantization: Users need to install both the `transformers` and `bitsandbytes` libraries to access the latest quantization tools, as shown in the example command `pip install transformers bitsandbytes`.
- Updating these libraries to their latest versions ensures compatibility with the latest features for effective quantization.
- Post-Quantization Evaluation Importance: After the quantization process, it is recommended to evaluate the model on a validation set to confirm that its performance remains satisfactory.
- This step helps to verify that quantization hasn't significantly degraded the model's precision.
- Saving and Loading Quantized Models: Once quantization is complete, the model should be saved for future use with `model.save_pretrained(...)` and reloaded later via `from_pretrained`.
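A hedged sketch of the load-quantized-then-save flow described above, assuming a 4-bit `BitsAndBytesConfig` and a recent `transformers` release that supports serializing bitsandbytes models; the checkpoint paths are placeholders:

```python
# Sketch: reload a finetuned checkpoint in 4-bit via bitsandbytes, then save it.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "path/to/finetuned-checkpoint",  # placeholder
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("path/to/finetuned-checkpoint")

# Evaluate on a validation set here before committing to the quantized weights.
model.save_pretrained("path/to/quantized-model")    # placeholder output dir
tokenizer.save_pretrained("path/to/quantized-model")
```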
DSPy ▷ #show-and-tell (4 messages):
Hyperdimensional Hackathon
DSPy Beginner Notebook
DSPy Blog Feedback
Golden Retriever Project
- Join the Hyperdimensional Hackathon: Team members are invited to the Hyperdimensional Hackathon in the Voice Lounge. More details can be found here.
- Beginners Unite with DSPy Notebook: A member shared a shoutout to another for creating a fantastic beginner notebook for DSPy that effectively guides users through problem-solving. It's highly recommended for those starting with DSPy.
- Feedback Request on DSPy Blog: A member is seeking feedback on their blog post about DSPy, available here. They also shared a link to their Twitter for additional context on the post here.
- Golden Retriever Project Repository Shared: A participant shared a link to the Golden Retriever project repository on GitHub here. This may be of interest to those looking to explore new tools or projects.
DSPy ▷ #general (20 messages🔥):
DSPy Use Case
Custom GPT Guides
Community Reactions on HN
RAG Program Implementation
Prompt Optimization in DSPy
- DSPy as Fine-Tuning Tool: DSPy is likened to fine-tuning, where users optimize instructions and/or examples with specific metrics to enhance performance on given tasks.
- This approach engages the community in discussing its suitability for various RAG implementations.
- Custom GPTs for DSPy Guidance: A user recommended starting with a custom GPT made for DSPy, which provides insights on modularizing existing prompts with signatures and modules.
- This advice was further backed with links to useful resources like this guide.
- Concerns Raised on Hacker News: Members shared their frustrations with Hacker News, noting dismissive comments about DSPy while emphasizing the commitment to shipping improvements.
- One member responded with a humorous remark.
- Issues with BootstrapFewShot Integration: Discussion arose over the integration of raw_demos and augmented_demos in `BootstrapFewShot`, leading to odd prompts when both types are included.
- A proposed workaround suggested setting `max_bootstrapped_demos` equal to `max_labeled_demos` to avoid inclusion errors (a compile sketch follows this list).
- Optimizing Long Static Prompts: A question was raised about setting a long static prompt header optimized for DSPy where inputs and answers remain concise.
- Community input emphasized the possibility of adjusting `max_labeled_demos` to zero for optimizing prompt structures.
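A minimal sketch of the proposed workaround in a standard `BootstrapFewShot` compile; the module, metric, and trainset below are placeholders, and an LM is assumed to be configured already:

```python
# Sketch: keep max_bootstrapped_demos == max_labeled_demos so raw and
# augmented demos aren't mixed oddly in the compiled prompt.
# Assumes an LM was set via dspy.settings.configure(lm=...).
import dspy
from dspy.teleprompt import BootstrapFewShot

class QA(dspy.Module):
    def __init__(self):
        super().__init__()
        self.answer = dspy.ChainOfThought("question -> answer")

    def forward(self, question):
        return self.answer(question=question)

def exact_match(example, pred, trace=None):  # placeholder metric
    return example.answer.lower() == pred.answer.lower()

trainset = [dspy.Example(question="2+2?", answer="4").with_inputs("question")]

teleprompter = BootstrapFewShot(
    metric=exact_match,
    max_bootstrapped_demos=4,
    max_labeled_demos=4,  # matched, per the suggested workaround
)
compiled_qa = teleprompter.compile(QA(), trainset=trainset)
```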
tinygrad (George Hotz) ▷ #general (9 messages🔥):
Mezo Method Implementation
Tinygrad Functionality Inquiry
Tinygrad Meeting Agenda
Bounties Clarification
NVIDIA FP8 PR Feedback
- MeZO Method Exploration in Tinygrad: A user expressed interest in reimplementing the MeZO method (fine-tuning with only forward passes) using tinygrad instead of PyTorch, questioning if there's an equivalent to `tree_map` or `apply` in tinygrad (a forward-only sketch follows this list).
- This reflects a desire to explore alternative frameworks for specific methodologies in machine learning.
- Clarifying Tinygrad Meeting Agenda: A summary of the upcoming Monday meeting at 9:40 a.m. PT included topics like tinygrad 0.9.2, qcom dsp, and various bounties including AMX and qualcomm.
- This meeting agenda is aimed at outlining the crucial technical discussions planned for the weekly update.
- Bounties in Tinygrad: A user inquired about the 'inference stable diffusion' bounty, confusing it with existing documentation examples for inference from stable diffusion.
- The response clarified that it's associated with MLPerf, indicating an ongoing update to the bounty sheet.
- Feedback on NVIDIA FP8 PR: In response to a user referencing their mention in tinygrad-dev, another member reassured them, providing tips left on their NVIDIA FP8 PR.
- This shows community support and collaboration, highlighting the shared effort in improving contributions to the project.
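In place of `tree_map`, tinygrad exposes `nn.state.get_parameters` for walking a model's tensors; a hedged MeZO-style sketch built on it (the model, loss function, and hyperparameters are placeholders, and this follows the published MeZO estimator, not any code from the discussion):

```python
# Sketch: one MeZO step in tinygrad -- two forward passes with +eps*z and
# -eps*z perturbations, no backward pass. get_parameters plays tree_map's role.
from tinygrad import Tensor
from tinygrad.nn.state import get_parameters

def mezo_step(model, loss_fn, x, y, eps=1e-3, lr=1e-5, seed=0):
    params = get_parameters(model)
    Tensor.manual_seed(seed)
    zs = [Tensor.randn(*p.shape) for p in params]  # shared perturbation z

    for p, z in zip(params, zs):                   # theta + eps*z
        p.assign(p + eps * z).realize()
    loss_plus = loss_fn(model(x), y).item()

    for p, z in zip(params, zs):                   # theta - eps*z
        p.assign(p - 2 * eps * z).realize()
    loss_minus = loss_fn(model(x), y).item()

    grad_scale = (loss_plus - loss_minus) / (2 * eps)
    for p, z in zip(params, zs):                   # restore, then SGD step
        p.assign(p + eps * z - lr * grad_scale * z).realize()
```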
tinygrad (George Hotz) ▷ #learn-tinygrad (8 messages🔥):
De-sharding models
Memory profiling
NaN losses with HALF
Loss scalar
ResNet MLPerf
- Understanding De-sharding of Models: A user inquired about how to de-shard a model, specifically turning a multi lazy buffer into a normal lazy buffer.
- They sought clarification on the process involved, indicating potential confusion among members.
- Profiling Memory Usage Easily: One member asked if there was an easy way to profile what's using the most memory in their workflows.
- This reflects a common interest in optimizing memory management during training.
- NaN Losses with DEFAULT_FLOAT=HALF: Concerns were raised about encountering NaN losses after the second batch when training with `DEFAULT_FLOAT=HALF`, while training with float32 had no issues.
- The user speculated a possible casting issue since their optimizer expects float32 for the learning rate, leading to challenges with type errors.
- Clarification on Loss Scalar: In response to the loss issue, it was confirmed that the user's loss is a scalar, aligning with the documentation for the training loop.
- This led to discussions about potential casts and how they might affect training procedures.
- Investigating ResNet with MLPerf: A mention was made to explore ResNet in the context of MLPerf benchmarks.
- This suggests a proactive approach to assessing model performance using standard evaluation metrics.
OpenInterpreter ▷ #general (15 messages🔥):
Attending Events from Remote Locations
Linux Support Requests
Terminal Agent Capabilities
Minimum Specs for Speech Agents
Using OI for PDF Forms
- Remote Attendance Options Discussed: A member expressed interest in attending an event despite residing in Tibet without travel funds, prompting discussions about remote participation.
- Another member noted, 'they are strongly favoring in-person attendees,' yet a hybrid hackathon is planned later this year.
- Request for Linux Support Channel: A member requested the creation of a dedicated #linux-something_or_other channel for sharing trial and error experiences.
- A suggested alternative was directing inquiries to an existing channel, 'the best place for this is <#1149558876916695090>.'
- Showcasing Terminal Agent Features: Terminal agents were showcased with capabilities such as cursor positioning and text selection demonstrated through various screenshots.
- Additionally, a grayscale augmented terminal highlighted the red cursor, enhancing visibility during interactions.
- Inquiry on Speech Agent Specs: A member inquired about the minimum and ideal specs for running a speech to speech agent for OS-wide interactions.
- The question raised concerns about whether energy requirements exceed 100Wh for typical laptop use.
- Exploration of PDF Form Filling with OI: Legaltext.ai inquired whether it's currently possible to use OI for filling out PDF forms.
- This implies ongoing interest in the functionality of OI in handling document workflows.
OpenInterpreter ▷ #ai-content (2 messages):
Deep Live Cam
YouTube Video on AI Insights
- Explore the Deep Live Cam Project: Check out the open-source project on GitHub - Deep Live Cam which showcases innovative uses of live camera feeds for various applications.
- This project has caught attention for its potential integrations in AI and real-time image processing.
- YouTube Video Unpacking AI Insights: A member shared an enlightening YouTube video discussing recent advancements and challenges in AI technologies.
- The video highlights key topics influencing the AI landscape and community responses to emerging innovations.
LAION ▷ #general (8 messages🔥):
Nvidia and CUDA controversy
AMD's intervention
Open-source ZLuda
Hugging Face resources
YouTube Video Link
- Nvidia and CUDA controversy heating up: Discussion arose about AMD's takedown of an open-source project, ZLuda, which potentially allowed other hardware to utilize CUDA technology, as highlighted in Tom's Hardware article.
- One member clarified that it was actually AMD, not Nvidia, who initiated the takedown.
- Lost Discord server link frustration: A member reported that the Discord server link provided was expired, referencing a GitHub discussion.
- They asked for a new link to access the server.
- Exploring Hugging Face resources: One user shared a link to the Terminus Research Hub on Hugging Face, indicating interest in AI models and tools available there.
- This represents an ongoing exploration of resources within the AI community.
- Sharing interesting YouTube video: A member posted a link to a YouTube video titled UySM-IgbcAQ, possibly related to AI topics or discussions.
- The relevance of this video to ongoing chat discussions remains unspecified.
LAION ▷ #research (6 messages):
Halva Hallucination
Gan.AI TTS Model
DDP Training Issues
Quadratic Softmax Attention
- New Halva Hallucination Assistant: Google introduced the Halva Hallucination Attenuated Language and Vision Assistant to tackle hallucination issues.
- This model combines language and vision capabilities while focusing on reducing inaccuracies in generative tasks.
- Gan.AI's TTS Model Launch: Gan.AI launched a new TTS model that supports 22 Indian languages plus English, making it the first of its kind to include Sanskrit and Kashmiri.
- The community has been encouraged to check out the product on Product Hunt and upvote if impressed.
- Checkpoint Saving Issues in DDP Training: A user reports experiencing issues where the gradient norm collapses and the optimizer skips steps during DDP training with bf16 and `accelerate` when saving checkpoints.
- They noted that the problem resolves after the next checkpoint save, indicating that training otherwise runs smoothly.
- Reflection on Quadratic Softmax Attention: A user mused on the fate of a paper suggesting that quadratic softmax attention isn't the best token-mixing mechanism, yet it's prevalent in SOTA models.
- They questioned if it failed to scale or perform adequately in NLP tasks, hinting at an ongoing debate in the community.
Interconnects (Nathan Lambert) ▷ #events (1 messages):
NeurIPS
Language Modeling Tutorial
AI2 Team Events
- AI2 Team to Present Language Modeling Tutorial at NeurIPS: The AI2 team is planning to present a language modeling tutorial at NeurIPS, aiming to foster further engagement.
- There's a suggestion to tie this event to the group for added interaction post-presentation.
- Potential Group Event after NeurIPS: An idea was proposed to hold a group event following the NeurIPS presentation to enhance collaboration.
- This initiative aims to strengthen community ties after the tutorial, making it both informative and social.
Interconnects (Nathan Lambert) ▷ #ml-questions (2 messages):
Hapsburg model concerns
Collection of models
Diversity in model generations
Model collapse risk
- Avoiding a Hapsburg Model: Concerns were raised about creating a Hapsburg model regarding the selection of models used for training.
- The rationale for using a collection of models instead of the latest best model was questioned to understand underlying reasons.
- Benefits of Using a Collection of Models: It was noted that using a collection of models enhances diversity in model generations, which can produce better outcomes.
- This approach helps in reducing the likelihood of model collapse, allowing for more robust performance.
Interconnects (Nathan Lambert) ▷ #random (2 messages):
User Rebuttals
Audience Feedback
- User Addresses Concerns: A user expressed appreciation, stating that most concerns were addressed during the rebuttal, leading them to maintain their score.
- This indicates a potentially positive outcome for the discussions held, despite previous frustrations.
- Emotional Reactions Captured!: The user shared a mixed emotional state with a sad face emoji, suggesting a feeling of being overwhelmed or touched.
- This brief emotional expression could hint at the gravity of the discussions or decisions made.
Interconnects (Nathan Lambert) ▷ #memes (1 messages):
Bad takes on social media
- A World with Fewer Bad Takes: If everyone who had bad takes exclusively had bad takes the world would be a lot better lol. This humorous reflection suggests that the world might improve with a decrease in poor opinions circulating online.
- Reflections on Opinions: A user mused about the implications of poor takes dominating discussions, proposing a world where criticism is more constructive.
- This comment sparked laughter and agreement, highlighting a desire for more thoughtful conversation.
Interconnects (Nathan Lambert) ▷ #rlhf (2 messages):
Traditional RLHF
Online PPO Implementation
Hyperparameter Recommendations
- Seeking Optimal Online PPO for RLHF: A member inquired about the best implementation of traditional RLHF using online PPO, specifically looking for hyperparameter recommendations and reproducible results.
- The aim is to demonstrate that online PPO outperforms iterative DPO, as claimed in related research such as this paper.
- Current Implementations Lacking Optimality: A response indicated that there is currently no definitive best implementation for the discussed traditional RLHF with online PPO.
- They suggested using the EasyLM repository or the refactored TRL version from Hugging Face as viable options.
MLOps @Chipro ▷ #events (2 messages):
Alliance AI-Health Research Initiative
Generative AI applications
Google Gemini and Vertex AI
Serverless Containers
- Join the Alliance AI-Health Research Initiative: Students interested in novel cancer or AI research are encouraged to apply for the 4-month remote internship with the Alliance AI-Health Research Initiative, with applications due by 8/11.
- Participants will work on cutting-edge research projects, such as cancer detection and AI-based heat stroke detection, under the guidance of experienced advisors. Apply here!
- Build Generative AI with Google Gemini: An upcoming online event will teach how to build Generative AI applications using Google Gemini and Vertex AI, allowing developers to deploy them as Serverless Containers.
- This approach enables users to concentrate on their core business while infrastructure management is handled by Google. RSVP for the event.
MLOps @Chipro ▷ #general-ml (1 messages):
Feature stores in computer vision
- Evaluating Feature Stores for Computer Vision: A member is inquiring about the use of feature stores within the context of computer vision, seeking insights on their effectiveness and value.
- Is a feature store worth it? The member is looking for examples or experiences to inform their evaluation.
- Lack of Responses on Feature Store Inquiry: Despite the inquiry regarding feature stores in computer vision, there has been a noticeable lack of responses from the community.
- Members might have reservations or limited experience, indicating a gap in the discussion around this topic.
LLM Finetuning (Hamel + Dan) ▷ #general (2 messages):
Vision Language Models
Credits Expiration Inquiry
- Exploring Vision Language Models from Scratch: A member shared a detailed blog post about vision language models that explores their development from nearly scratch.
- The post highlights essential insights and methodologies involved in building these models and strives to engage the community in discussion.
- Inquiry About Credits Expiration across Platforms: A member inquired whether there are expiration dates for credits from platforms such as Jarvis-Labs, Replicate, Fireworks, Braintrust, Perdibase, and Openpipe, similar to OpenAI's September 1st deadline.
- This question prompted additional conversation about policies on credit expiration across these various platforms.
AI21 Labs (Jamba) ▷ #general-chat (1 messages):
AI21 FusionLabs plugin
Bubble.io integrations
Jamba model
Conversational RAG endpoint
Video guides for community
- AI21 FusionLabs Plugin Updated with RAG Features: The AI21 FusionLabs plugin for bubble.io has been updated with the integration of the Jamba model, the newly released Conversational RAG endpoint, and embedding capabilities, achieving 40+ app installs.
- This update brings substantial improvements over the deprecated version, aiming to boost productivity for NOcode projects powered by AI21, as noted in the plugin link.
- Guides and Resources for Plugin Users Coming Soon: A dedicated platform will be launched next week to help users understand how to utilize the updated plugin and integrate new features into their apps quickly.
- Video guides are also being developed to provide further learning resources for community members interested in creating AI applications with bubble.io.
- Exciting Future for AI21 Community: The community is gearing up for an exciting Q4 and 2025, with promises of innovative developments and resources on the horizon.
- The excitement is palpable, with a call to collect all creative minds for upcoming projects that are described as 'hotfire'.