[AINews] GPT-4o: the new SOTA-EVERYTHING Frontier model (GPT4T version)
This is AI News! an MVP of a service that goes thru all AI discords/Twitters/reddits and summarizes what people are talking about, so that you can keep up without the fatigue. Signing up here opts you in to the real thing when we launch it 🔜
Omnimodality is all you want.
AI News for 5/10/2024-5/13/2024. We checked 7 subreddits, 384 Twitters and 30 Discords (426 channels, and 7769 messages) for you. Estimated reading time saved (at 200wpm): 763 minutes.
As is tradition on Frontier Model days on AINews, we're publishing two editions of AINews. You're currently reading the one where all Part 1 and Part 2 summaries are done by GPT4T - the previous email was done with GPT4O and has the normal commentary. We envision that you will pull them up side by side to get comparisons on discords you care about to better understand the improvements/regressions.
Table of Contents
- AI Discord Recap
- PART 1: High level Discord summaries
- Unsloth AI (Daniel Han) Discord
- Stability.ai (Stable Diffusion) Discord
- OpenAI Discord
- Nous Research AI Discord
- Latent Space Discord
- Perplexity AI Discord
- HuggingFace Discord
- LM Studio Discord
- OpenRouter (Alex Atallah) Discord
- Modular (Mojo 🔥) Discord
- CUDA MODE Discord
- Eleuther Discord
- Interconnects (Nathan Lambert) Discord
- LAION Discord
- LangChain AI Discord
- LlamaIndex Discord
- OpenAccess AI Collective (axolotl) Discord
- OpenInterpreter Discord
- tinygrad (George Hotz) Discord
- Cohere Discord
- Datasette - LLM (@SimonW) Discord
- Mozilla AI Discord
- DiscoResearch Discord
- LLM Perf Enthusiasts AI Discord
- Alignment Lab AI Discord
- AI Stack Devs (Yoko Li) Discord
- Skunkworks AI Discord
- YAIG (a16z Infra) Discord
- PART 2: Detailed by-Channel summaries and links
AI Discord Recap
A summary of Summaries of Summaries
Claude 3 Sonnet
1. GPT-4o Launch and Capabilities
- GPT-4o is OpenAI's newly launched frontier model, supporting real-time reasoning across audio, vision, and text. It maintains the intelligence level of GPT-4 while offering significant performance improvements.
- GPT-4o is now available for free to all ChatGPT users, including the free plan, marking a shift in OpenAI's strategy to make powerful AI tools accessible. Read more
- Discussions highlight GPT-4o's substantial enhancements in coding capabilities, with expectations of new benchmarks like MATH to quantify these advancements. Blog post
- Plus users will get up to 5x higher limits and earliest access to upcoming features like a new macOS desktop app and advanced voice and video capabilities. Announcement
2. Open Source LLM Exploration and Fine-tuning Techniques
- Extensive discussions on exploring open-source LLMs similar to Llama 3, with suggestions to try platforms like you.com. HuggingFace discussion
- Members sought guidance on fine-tuning techniques like knowledge distillation to enhance the accuracy and performance of models like GPT-3.5. HuggingFace blog
- Interests in running LLMs locally sparked conversations about managing hardware limitations, with recommendations on offloading techniques and quantizing models for better performance. LM Studio discussion
- Techniques to handle complex tasks like multi-topic conversations were explored, ranging from fine-tuning on specialized datasets to developing Elaborator models using prompt engineering. Unsloth AI discussion
3. Multimodal AI and Emerging Architectures
- Anticipation surrounds the integration of ChatGPT voice conversational AI with Open Interpreter API, enabling multimodal interactions. OpenInterpreter discussion
- Discussions on the potential of integrating autoregressive and diffusion models using Mixture of Experts (MoE) architectures, aiming to enhance multimodal model performance. Nous Research AI discussion
- Introduction of the YOCO architecture, a decoder-decoder model that efficiently caches key-value pairs, reducing GPU memory requirements while maintaining global attention capabilities. HuggingFace reading group
- Exploration of ThunderKittens, a new DSL from HazyResearch, aimed at simplifying AI kernel building and optimizing GPU utilization for improved computational efficiency. CUDA MODE discussion
4. Advancements in Efficient Attention and Model Scaling
- Research on an efficient method called Conv-Basis for computing attention using convolution matrices, leveraging Fast Fourier Transforms (FFT) to potentially reduce computation time (a toy demo of the FFT trick appears at the end of this list). Eleuther research discussion
- Insights into depth upscaling techniques like layer repetition to improve model performance, with examples from works on Yi and Granite Code models. Eleuther research discussion
- Discussions on the performance of Linear Attention models in complex evaluations like MMLU, emphasizing the need for suitable data to leverage potential model improvements. Eleuther research discussion
- Introduction of a proposal called Farzi for synthesizing dense datasets into compact, highly effective sequences for training autoregressive models, achieving up to 120% of original data performance. Details on OpenReview
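The FFT trick behind convolution-based attention is classical: the product of a circulant (convolution) matrix with a vector can be computed in O(n log n) via the convolution theorem rather than O(n²). Here is a minimal NumPy demo of that identity; it is our illustration, not code from the Conv-Basis paper:

```python
import numpy as np

def circulant_matvec_fft(col, x):
    # Multiply the circulant matrix defined by first column `col` with x
    # in O(n log n) via the convolution theorem, instead of O(n^2).
    return np.fft.ifft(np.fft.fft(col) * np.fft.fft(x)).real

n = 8
col = np.random.randn(n)
x = np.random.randn(n)

# Build the circulant matrix explicitly to check the fast path:
# column j of C is `col` rotated down by j positions.
C = np.stack([np.roll(col, j) for j in range(n)], axis=1)
print(np.allclose(C @ x, circulant_matvec_fft(col, x)))  # True
```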
Claude 3 Opus
- GPT-4o Launches with Multimodal Capabilities: OpenAI unveiled GPT-4o, a new frontier model supporting text, audio, and image inputs with 5x higher limits for Plus users. It demonstrates strong coding and reasoning performance, and is freely available to all ChatGPT users. An updated tokenizer and potential Apple integration were also discussed.
- Llama 3 Fine-Tuning Advancements: The community explored Llama 3 model fine-tuning, with a focus on compatibility issues with quantized models, tokenization challenges, and complex conversational capabilities. Unsloth emerged as a key tool for faster fine-tuning with less memory. Fine-tuned Llama variants for token classification were shared.
- Kernel Fusion and CUDA Optimization Techniques: CUDA MODE hosted a Zoom session on kernel fusion experiences and discussed Triton for AI kernel optimization. The U Illinois PMPP YouTube lecture series on parallel programming was highlighted. Techniques like ZeRO-1 for memory efficiency in llm.c and ThunderKittens for GPU utilization were explored.
- Retrieval Augmented Generation (RAG) and Multimodal AI: RAG pipelines using LangChain and LlamaIndex garnered interest for blog chatbots, content moderation, and PowerPoint generation. Techniques for multimodal AI using DinoV2 and OpenAI's audio/visual integration were discussed. Perplexity AI introduced a multi-model strategy while OpenInterpreter enabled LiteLLM and Llama3 integration.
GPT4T (gpt-4-turbo-2024-04-09)
Major Themes and Discussions:
- AI Model Discussions and Comparisons: Substantial discourse is observed regarding the performance and specifications of various AI models like GPT-4, GPT-4o, Llama models, and more across several Discords. Users express mixed feelings about model performance, focusing on tasks like model training, comparisons between new releases, and integration.
- Technological Innovations and Updates: Several channels report on updates regarding new functionalities, integrations, and technological advancements such as multimodal capabilities, changes in the tokenizer, and speed enhancements. Updates from tech giants and community programmers are evaluated and dissected.
- Community Engagement and Project Collaborations: Robust discussions are evident around engaging the community in collaborative projects, contributing to open-source repositories, and sharing custom projects. Such engagements span coding practices, developing AI utilities, and solving complex AI-driven tasks.
- Educational Content and Tutorials: A notable amount of educational content, tutorials, and discussions aimed at disseminating knowledge about AI technologies, programming, model training, etc., is shared. Links to academic papers, YouTube videos, and detailed blog posts are common as users seek to deepen their understanding or explain concepts to peers.
- Privacy, Legal, and Ethical Concerns: Several discussions touch upon the privacy implications of using AI technologies, concerns about data usage, legal implications of AI-generated content, and ethical considerations. Legal discussions in particular span a range of topics, from artist rights in generated content to the implications of AI in existing legal frameworks.
Key Knowledge Sharing and Resources:
- Educational links to papers, tutorials on platforms like YouTube and GitHub.
- Discussions about updates in primary AI models and software tools.
- Community-driven guides and project collaborations evidenced by shared code repositories and development tools.
- Ethical, legal, and privacy concerns deliberated in the context of AI advancements.
GPT4O (gpt-4o-2024-05-13)
1. Model Performance and Releases
- GPT-4o vs GPT-4 performances were compared across various Discords, with GPT-4o lauded for its speed but scrutinized for reasoning abilities. OpenAI has made GPT-4o free, stirring discussions about its market impact. Source
- Falcon 2 and Llama 3 received significant attention for their new features and improved performance. Falcon's capabilities have been particularly discussed for outperforming competitors.
2. Technical Challenges and Solutions
- Quantum vs. Turing: Debates on the superiority of quantum computers over Turing models highlighted concerns about regulation benefiting large corporations. Discussions extended into training and manipulating models like Llama and Mistral.
- Error Handling: Frequent issues in model integration and execution, including challenges with tokenization for GGUF models and troubleshooting training errors in Tinygrad, have been addressed with community advice and detailed fixes. Example GitHub PR
- Memory Management: Discussions on optimizing GPU memory management and handling VRAM limitations, particularly within CUDA and Mojo environments, were significant, including strategies like offloading and quantization (a brief offloading sketch follows this list).
3. AI Integration and Enhancements
- Multimodal Models: Open discussions on integrating audio, video, and text in models like GPT-4o. The adoption of tools like ThunderKittens for optimizing kernel operations showcases continuous pursuit of enhanced performance. ThunderKittens GitHub
- Open Source and Community Projects: Projects like PyWinAssistant and LM Studio's CLI tool for model management were shared, emphasizing the collaborative spirit of the AI community. PyWinAssistant GitHub
4. Industry Trends and Events
- OpenAI’s Strategic Moves: Speculations around OpenAI’s strategic directions with GPT-4o's free access were widely discussed, indicating potential data-driven strategies or competitive market positioning. OpenAI Event Video
5. Ethics and Legal Concerns
- AI and Copyright Issues: Debates on AI-generated content potentially infringing on artists' rights were prominent, with opinions divided on whether such usage falls under fair use. This extended to discussions about AI's place in commercial art and the legal boundaries involved. Related Article
6. Educational and Support Resources
- Collaborative Learning: Channels offered guidance through shared resources, tutorials, and troubleshooting assists, forming a robust community-driven support system. Topics included fine-tuning methods and practical AI applications like control theories and inpainting with Stable Diffusion.
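To make the offloading strategy above concrete, here is a hedged sketch using llama-cpp-python, where `n_gpu_layers` controls how many transformer layers go into VRAM while the rest run on CPU; the model path and layer count are placeholders, not values from the discussions:

```python
from llama_cpp import Llama

# Offload only part of the model to a small GPU; remaining layers run on CPU.
# Raise n_gpu_layers until VRAM is full for the best speed/memory trade-off.
llm = Llama(
    model_path="models/llama-3-8b-instruct.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=20,   # tune per GPU; 0 means CPU-only
    n_ctx=4096,
)
out = llm("Q: What does GPU offloading do? A:", max_tokens=64)
print(out["choices"][0]["text"])
```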
Detailed by-Channel Summaries and Links:
Unsloth AI (Daniel Han) ▷ General
- Quantum vs. Turing Effectiveness: Debates highlight Turing outperformance in expected quantum domains. Rethinking Machine Unlearning
- Concerns Over OpenAI's Regulatory Moves: GPU signing and exclusivity with the White House provoke community skepticism. OpenAI Reddit AMA
- Model Training Ethics and Safety: Ethical implications of uncensored models compared analogously to knife regulation, focusing on misuse over tools. Technical methods for fine-tuning explored extensively.
Stability.ai (Stable Diffusion) ▷ General-Chat
- Stable Diffusion 3 Doubts: Skepticism and humor surrounding the elusive release date.
- ControlNet and LoRA Use: Discussions on advanced technical uses like inpainting and text-integration in images. Character Consistency Guide
OpenAI ▷ General Discussions
- Performance Comparison GPT-4 vs. GPT-4o: Users actively test and compare output, citing speed vs. reasoning quality. OpenAI Event
- Subscription Value Debates: Discussions center around the continued value of paid plans amidst new free-tier features.
LangChain AI ▷ General
- Date Extraction Techniques: Handling date ranges and conversions to ISO format in LangChain models through shared code snippets.
- Handling Tool or Function Calls: Extracting and processing multiple market descriptions with LLMs in LangChain. Example GitHub
OpenRouter (Alex Atallah) ▷ General
- Model Service Outages: Jet MoE 8B experiences upstream overloads. New multimodal models like GPT-4o and LLaVA v1.6 feature prominently. OpenRouter API Watcher Tool
HuggingFace ▷ General
- Open Source LLMs Exploration: Discussions included platforms and methods for optimizing LLMs such as Mistral. Query on chunking transcripts for better insight generation. Diffusion Pipeline Guide
For more detailed summaries and links, refer to the full compiled guide above.
PART 1: High level Discord summaries
Unsloth AI (Daniel Han) Discord
- Quantum Longevity vs. Turing Might: Quantum computers and Turing computers sparked a heated debate, with Turing currently winning the battle, even in fields where quantum is expected to shine. Some members voiced their concerns about the governmental focus being surprisingly on regulating quantum computing, giving undue advantage to larger corporations over smaller innovators.
- Winds of Change at OpenAI: Users aired skepticism related to OpenAI's pivot towards exclusivity and regulatory capture, fearing actions like GPU signing and collaborations with the White House could dampen open competition and innovation.
- Censorship vs. Misuse: The neighborhood was buzzing with talk about the potential dangers and ethical implications of uncensored AI models. A popular analogy compared AI model control to knife regulations, emphasizing that focus should be placed on misuse rather than the tools themselves.
- Model Training Mania: Nerd alert! A technical tête-à-tête on diverse model training and manipulation methods filled the chatrooms, covering tactics like using uncensored LLMs and manipulating models into accepting new adaptations without authorization.
- Empathy Gets a Thumbs Up: Open empathy projects are all the rage, with calls to action for community participation to enrich AI's understanding and implementation across a wider range of human contexts.
- Hang Tight for a Quantum Leap in OpenAI's Capabilities: The release of a new Model Spec and a community Reddit Q&A session with OpenAI CEO Sam Altman have members buzzing with anticipation. Feelings are running high as hopes and dreams for revolutionary AI breakthroughs clash with potential disappointment.
- Open Source Strategy in OpenAI's Crosshairs: The community is split on whether OpenAI should open-source their model. Those pushing for a release believe that even an underwhelming model could still position them favorably.
- Unfolding Industry Trends in the AI Landscape: Chatrooms lit up with industry trend speculations, such as whether or not to expect a model that's 10x better than the existing players. Eyes are also set on possible market dynamics shifting if Llama becomes the "State Of The Art" (SOTA).
- OpenAI Amidst Rumours of an AI Winter: Even with the looming shadow of an AI winter, members maintain a resolute belief in OpenAI's leading role in the AI industry. Strategic reasons behind OpenAI's decisions regarding public model releases were also a talking point, including insights into past occurrences involving leaks and openness required by grants.
- Mystery of Quantized Models and Tokenization Anomalies Unravelled: Savvy users shared their experiences managing compatibility issues between quantized models and TGI, along with save/load pitfalls, notably saving merged weights in 16-bit via model.save_pretrained_merged(...) to make them loadable by TGI (a hedged sketch follows this section). Tokenization issues with GGUF-formatted Gemma models were also discussed.
- Quest for the Ultimate Model: The community sought guidance on creating models that effectively handle complex, multi-topic conversations. Proposed strategies ran the gamut from fine-tuning on specialized datasets to employing prompt engineering or forming an Elaborator model, shining light on the iterative journey of optimizing models in chatbot frameworks.
- Technical Users Showcase Fireside Models: A user shared fine-tuned Llama variants with the community. An accompanying blog post and a notebook detailing the fine-tuning process serve as an upcoming treat for the technical audience. Check the Model Hub.
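A minimal sketch of the 16-bit merged-save workaround described above, assuming Unsloth's save_pretrained_merged API as documented; the base model name and output directory are placeholders:

```python
from unsloth import FastLanguageModel

# Placeholder base model; any Unsloth-supported checkpoint should work.
model, tokenizer = FastLanguageModel.from_pretrained(
    "unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=2048,
)
# ... fine-tune with LoRA adapters here ...

# Merge adapters into the base weights and save in 16-bit so serving stacks
# like TGI can load the resulting checkpoint directly.
model.save_pretrained_merged("merged-16bit", tokenizer, save_method="merged_16bit")
```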
Stability.ai (Stable Diffusion) Discord
- The Myth of SD3: Jokes and doubtful GIFs run rampant in the community about the release of Stable Diffusion 3. Despite officially announced timelines, the launch of SD3 remains a subject of bemusement, distinctively reshaping it into the realm of the fantastical in the eyes of many users.
- ControlNet Hits the Chat: Technical chats around the employment of ControlNet and LoRA technology, particularly for unique tasks like inpainting and integrating authentic text into images popped up. One standout suggestion involved using Krita as an unconventional tool to manually adjust the text within images.
- Rumble in the Hardware Jungle: A back-and-forth discussion evaluated the efficiency of AMD RX 6750 XT and NVIDIA RTX 4090 for running Stable Diffusion, culminating in varied opinions on the performance comparison between older and high-end GPUs in SD tasks.
- Stable Diffusion Meets Madison Avenue: Users highlighted potential commercial applications of Stable Diffusion, such as generating bespoke product adverts. One user voiced the necessity for maintaining character consistency across multiple images, pointing to Cobalt Explorer's guide as a source of detailed direction.
- Help! I Need Somebody: General inquiries and requests for technical support rounded out the discussion, with users tackling everything from addressing copy/paste issues on interfaces like ComfyUI to exploring upscaling methods that infuse additional detail into images.
OpenAI Discord
- GPT-4o Goes Public, Plus Users Reap Benefits: OpenAI has announced that its new flagship model, GPT-4o, is now accessible for free, with some restrictions. Plus users will receive even greater advantages, including up to 5x higher limits and the earliest access to novel features such as a new macOS desktop app and advanced voice and video capabilities.
- GPT-4o versus GPT-4: Performance Aplenty: OpenAI users are actively comparing the performance of GPT-4 and the newly launched GPT-4o in different tasks. GPT-4o boasts greater speed, but comes with a need for more explicit instructions for optimal performance. There is also a heightened anticipation for new voice and real-time camera sharing capabilities.
- Mac on the Tracks, Windows Next: Great enthusiasm has been expressed for the impending release of the macOS app for ChatGPT. There are reports that a Windows version is in progress, but availability is not yet uniform for all users.
- Token Troubles and Memory Misgivings Amid Advancements: Amidst all the advancement, there are concerns about GPT-4o's memory performance compared to older models, and requests for improved features like token counters. Plus users are mulling over the continued value of their subscription with new features being added to the free tier.
- All That Romance, and Nowhere to Go: An issue popped up with Gemini 1.5 where any romance-related requests consistently failed. Detailed debugging did not yield solutions, leading to speculation about syntax errors, safety settings, or even Google's system role in the problem.
- Python File Handling Made Simple: A user shared a complex yet foundational Python task to create directories, manage file writing across sessions, and zip a directory with a download link (a minimal sketch follows this section). Their post highlights the technical complexity and diversity of challenges tackled by the community.
- Creating ChatGPT Clone – A Watchful Perspective: One user expressed interest in creating a ChatGPT clone, with GPT-3.5 as the underlying model. The unique twist in the proposal was endowing the clone with the ability to oversee messages sent and received within an organization.
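For readers curious what that file-handling task looks like in practice, here is a minimal stdlib-only sketch; the paths and file names are illustrative, not taken from the user's post:

```python
import os
import shutil

def write_session(base_dir: str, name: str, content: str) -> None:
    # Create the directory tree if needed, then append so writes
    # accumulate across sessions instead of overwriting.
    os.makedirs(base_dir, exist_ok=True)
    with open(os.path.join(base_dir, name), "a", encoding="utf-8") as f:
        f.write(content + "\n")

write_session("sessions/run1", "log.txt", "first entry")
write_session("sessions/run1", "log.txt", "second entry")

# Zip the whole directory so it can be offered as a single download.
archive = shutil.make_archive("sessions_run1", "zip", root_dir="sessions/run1")
print(f"Download: {archive}")
```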
Nous Research AI Discord
- The Llama's Struggle to Make Sense Over 8k: King.of.kings_ shared the struggle of getting the Llama 3 70b model to remain coherent over 8k tokens, prompting discussions and possible solutions in the community.
- Aurora Sighting, New Bilingual Model, and Old Recipes in Games:
- The Northern Lights made a rare appearance over the French volcanic region of Arvenia, sparking interest and discussion.
- The introduction of MAP-Neo, a transparent bilingual Large Language Model, has caught the attention of engineers. It's trained on 4.5 trillion tokens, promising to match performance of commercial models in tasks like reasoning and math but with extra transparency.
- Members engaged in a fun diversion, discussing how perpetual stews seen in the role-playing game Kingdom Come: Deliverance reflect historical cooking methods and influence modern cooking habits.
- Neurological Advancement, Taskmaster Simulations, and Industrial Military Complex Visualizations:
- A new paper on multidirectional artificial neural networks, discussed on the interesting-links channel, has the potential to revolutionize the way networks handle complex dependencies.
- A React application simulates a Taskmaster game show episode using a State Machine pattern, creating engaging content assisted by LLMs.
- Using the Mistral 7B instruct v0.2 model on the llama-cpp-agent framework, a detailed knowledge graph was produced to visualize the Industrial Military Complex in ways not seen before.
- GPT-4o: A Potent Update or an Overhyped Feature?: In the general channel, members passionately debated the pros and cons of GPT-4o. Some members appreciated its coding performance improvements while others criticized its speed and token output limitations. The room split over the accessibility and price-point of the Voice integration feature.
- Experts are FFNs in MoE and Llamas Love Axolotl:
- Within the MoE architecture, the experts are usually only the Feedforward Network (FFN) layers, with the rest of the transformer shared (a minimal routing sketch follows this section).
- The potential integration of autoregressive models and diffusion models with MoE sparked interest among participants; scepticism was expressed, but possibilities seemed exciting.
- A user shared his experience and solutions to problems met when fine-tuning the Llama3 model with the dolphin-2.9 dataset using the Axolotl system.
- Dialogues on Datasets and Training Approaches:
- ChatQA made headlines with its conversational QA model line that surpasses GPT-4 in conversational accuracy.
- IBM and RedHat's novel approach towards LLM training made rounds due to its usage of a larger model to generate synthetic datasets without the need for full retraining.
- A deeper insight into IBM/RedHat's new project reveals a scheduled information process for enhancing the LLMs' knowledge base, buoying community interest.
- An Adventurous Dip into WorldSim: In the world-sim channel, WorldSim was highlighted as a powerful business simulator, and invitations were shared to join Websim AI simulations. Chat group formation for philosophical discussion revolving around WorldSim was proposed.
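To make the "experts are FFNs" point concrete, here is a toy top-1-routed MoE layer in PyTorch. It is an illustrative sketch, not code from the discussion: the router picks one FFN expert per token, while everything outside this layer (attention, norms) would stay shared.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoEFFN(nn.Module):
    """Top-1 routed mixture of FFN experts; attention layers stay shared."""
    def __init__(self, d_model=64, d_ff=256, n_experts=4):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                      # x: (tokens, d_model)
        weights = F.softmax(self.router(x), dim=-1)
        top_w, top_idx = weights.max(dim=-1)   # top-1 routing decision
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top_idx == i                # tokens routed to expert i
            if mask.any():
                out[mask] = top_w[mask].unsqueeze(-1) * expert(x[mask])
        return out

x = torch.randn(10, 64)
print(MoEFFN()(x).shape)  # torch.Size([10, 64])
```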
Latent Space Discord
- OpenAI Pre-Games Incoming Spring Event: A pre-game watch party has been arranged on the Discord channel for an OpenAI event scheduled for May 13th at 9:30 AM. Show up a bit early to join the festivities.
- Go East to Discuss Future AI Infrastructure: People are huddling around a fresh conversation initiated by a member from Singapore regarding potential new AI infrastructures. They've started to compile thoughts on this Substack, so drop in if you're interested in these innovative services.
- Falcon 2 Model Soars Over LLM Landscape: Introducing Falcon 2 LLM, allegedly a multilingual, multimodal marvel that's besting models from the likes of Meta and Google. It's still being groomed with further enhancements, including 'Mixture of Experts'. Explore its prowess here.
- GPT-4o Unwraps For Your Inspection: Welcome GPT-4o! We're pooling our collective thoughts on its specs, uses, APIs, and general performance in this big-brain chat. You can join the conversation here. They say curiosity killed the cat, but it might just keep an AI engineer entertained.
- AI Security: A Career Path Worth Its Salt?: Is a career at the intersection of AI and cybersecurity your cup of tea? Our members are debating its potential and offering avenues for further exploration such as the RSA Conference. Brew a cup of coffee and jump into the conversation.
- Round Up Your Friends For OpenELM: An ongoing project to train the OpenELM model using PyTorch/MPS is looking for some additional brains. The aim is iterative training with incremental dataset addition. Be part of this open-source adventure here. Sharing, after all, is caring.
- OpenAI Event Becomes Victim of Audio Glitches: As fate would have it, the OpenAI Event watch party experienced a few hiccups with audio issues during the live stream. No watch party is complete without a little bit of drama.
- Apple and GPT-4o: A Malus-Domesticated Future?: Are Apple's tech strats robust enough to integrate heartier models like GPT-4o into their devices? Cap this stimulating conversation off with some cider thoughts.
- OpenAI Shatters Tradition with Free GPT-4o Access: Users can now enjoy GPT-4o for free, marking a new phase of OpenAI's mission. This huge leap forward not only integrates GPT-4o into everyday devices and platforms, but also invites rigorous discussion.
Perplexity AI Discord
- GPT-4o's Much-Anticipated Arrival: Buzz is building around GPT-4o's introduction with high expectations around its faster processing, lower costs, and broad application scope. Enthused users are optimistic about its potential integration in the Perplexity platform, with speculation about increased functionality in AI applications.
- Unleashing Powerful Models: Users' Plea: Users expressed dissatisfaction with Perplexity's daily usage limits on potent models such as Claude 3 Opus, pointing towards considerable demand for extended access. While some users are looking at alternatives, many remain committed to Perplexity thanks to its distinctive offerings.
- Marrying AI Adoption and Privacy: During AI services' navigation and selection process, the discussions underscore the users' high regard for platforms valuing privacy. Cloud-based AIs' inherent privacy challenges notwithstanding, members endorse providers showing substantive effort to safeguard user data.
- Perplexity's Multi-Model Strategy and User Appreciation: The benefit of Perplexity's multi-model approach was highlighted, allowing users to toggle between different models like ChatGPT and Claude 3 Opus depending on task requirements. This flexibility is applauded, setting it apart from platforms with limited options or more intricate navigation.
- Technical Discourses Reflect User Diversity and Needs: Engagement in technical conversations around themes like context window sizes and the detailed workings of AI models suggests a wide range of AI usage within the community. Queries range from casual inquiries about daily limits to deeper explorations into particular AI features.
- AI Career Path Intricacies Highlighted: Alexandr Yarats outlines his professional journey from Yandex to Google and his current role as Head of Search at Perplexity AI. His account emphasizes the rigors and rewards of a career in the tech sector, with a focus on creating AI-powered search engines.
- An Array of Searches on Perplexity AI: Users share a variety of searches conducted on Perplexity AI, from Eurovision 2024 to explaining Bernoulli's fallacy, highlighting the wide range of information that can be gleaned from the platform.
- Encouraging Shareable Threads for Collaboration: Perplexity AI emphasized the need for shareable threads, providing a guide via a Discord message, reinforcing the value of community collaboration and information sharing.
- Call for Perplexity Tutorial Met with Broken Link: A request for a Perplexity tutorial led to another user providing a link to a tutorial. However, the link redirected to a non-functional Discord path.
- Emoji Usage in Non-English Conversations: Usage of Emojis titled 'wlcm' and 'gem_2' by a user were observed in what appears to be Russian conversations, hinting at context differentiation or emotional expression.
HuggingFace Discord
- Exploring Open Source LLMs: Discussion focused on the exploration of open-source large language models (LLMs) similar to Llama 3. It was suggested that platforms like you.com could be an interesting point of experimentation.
- Transcript Chunking Struggles: Current methods of chunking meeting transcripts for actionable insights from LLMs yield low similarity scores. The community was invited to suggest ways to improve this process and therefore optimise costs by making fewer LLM calls.
- Looking Under the Hood of Diffusers: Members were interested in learning more about the specifics of diffusion models, citing resources ranging from sought-after academic papers to practical tutorials from venues like Fast.ai and O'Reilly.
- Enabling Stable Diffusion: Participants shared their Stable Diffusion progress and provided informed directions on how to develop a local inference engine with StableDiffusionPipeline based on Hugging Face's diffusers library (a short sketch follows this section).
- Showcasing Community Achievements: A variety of tools and applications have been developed by the community, such as an AI-powered storyteller supporting multiple languages, an AI tool creating poster art from Quranic verses, and an OCR toolkit integrating different OCR technologies. Engage with these projects here.
- The Dawn of YOCO Architecture: A new research paper introduced the decoder-decoder architecture – YOCO. This breakthrough reportedly reduces GPU memory requirements while maintaining global attention capabilities and speeding up the prefill stage.
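For those who asked about a local inference engine, the diffusers entry point looks roughly like this; the checkpoint id is just one example, and a CUDA-capable GPU is assumed:

```python
import torch
from diffusers import StableDiffusionPipeline

# Example checkpoint; any compatible Stable Diffusion model on the Hub works.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe("a watercolor fox in a forest", num_inference_steps=30).images[0]
image.save("fox.png")
```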
LM Studio Discord
- Bottleneck Oddities Prod Multi-GPU Performance: Members pinpointed a motherboard bottleneck causing slow performance in multi-GPU setups. Upgrading to a PCIe 4.0 compatible board resolved the performance issues.
- Remote Accessibility Confusions Busted: LM Studio Server's remote access configuration ignited discussions, eventually clarifying that replacing 'localhost' with the machine's IP allows remote access (a client sketch appears at the end of this section).
- Dealing with Failures, Memory Errors in LM Studio: Members came across "Failed to load model" error messages due to insufficient memory. Solutions included turning off GPU offload or verifying that the hardware meets model running requirements.
- Community Bands Together Against Linux Server Woes: A member faced FUSE setup issues when installing LMS on a Linux server. Another user shared a solution that worked on Ubuntu Server 24.04.
- Too Much Power Brings GPU Memory Headaches: Members agreed that using LLMs requires substantial VRAM. At least 8GB+ was recommended for running models like GPT-4.
- Local Models Grapple with Hardware Limitations: Discussion around the feasibility of running high-speed local models on personal, moderate-spec laptops led to the conclusion that LM Studio may not fully support such a setup.
- Text-to-Image Tools Dazzle: Tools like Stable Diffusion, comfyUI, and Automatic1111 were highlighted for their utility in converting text to images, with less complex software suggested as a beginner-friendly option.
- Model Versioning Exposed: Model versioning and fine-tuning methods were discussed, stressing the importance of reading model cards to understand datasets and training details.
- Quantizing Models Gains Favor: Members discussed the benefits of quantizing models like the Yi-1.5 model series. They shared links to specific quantized models along with tips to improve model performance and hardware compatibility.
- Context Lengths Flex Under Model Constraints: Constraints due to model context lengths and budget affected model choice, emphasizing the limitations of different GPU capacities and the necessary trade-offs for running more extensive models.
- Use Innosetup and Nullsoft, Open Source Advocates Announce: A member recommended open-source installers Innosetup and Nullsoft, citing their successful past experiences.
- Starcoder2 Faces Debian Oddities: A user testing starcoder2-15b-instruct-v0.1-IQ4_XS.gguf on Debian 12 encountered repetitive responses and off-topic answers, opening up insightful discussions about the model's intended optimizations.
- Playground Mode Caught GPU-Dependent: Members highlighted that Playground mode can't run on just RAM + CPU. At least 4GB of VRAM is needed for effective usage.
- Beware of Deceptive Shortlinks, Warns Community: A warning was issued about a shortlink leading to a potentially unsafe or unrelated website.
- Llama 3 Models Studied, Tok Rates Explored: Members discussed the performance of Llama 3 models on various configurations while sharing token rates. The use of CPUs and RAM for potential efficiency improvements was also examined.
- Hardware Limitations Kick in Amid GPU Discussions: The performance of Tesla P100 and GTX 1060 GPUs were compared with discrepancies noticed in expected and actual performance due to potential CUDA version mismatch.
- Offloading Techniques Tackle Low VRAM: Offloading techniques were suggested for managing low VRAM (2GB), with an emphasis on properly setting the number of layers to offload to the GPU.
- CPU vs GPU: Running LLMs on CPU Takes a Hit: It was noted that running LLMs on CPUs only resulted in significant performance hits. Specific token rates were cited for improvement upon tweaking the CPU settings.
- Interface Adjustments Garner Popularity Among Users: Community members discussed adjusting model loads between GPU and RAM. Recommendations leaned towards higher VRAM usage for models to avoid load failures and response inadequacies.
- CodeQwen1.5 Wows Coding Enthusiasts: Members found the 7b model, CodeQwen1.5, highly efficient for coding tasks. With 4b quantization and a small footprint, it proved suitable for a 6GB GPU setup and outperformed the deepseek coder.
- Explore Coding Models on Huggingface: Huggingface's leaderboard was suggested as the go-to source for comparing model performances in coding tasks. All models, especially those 7b or smaller, can be explored. View the Coding Leaderboard.
- Just Bug Fixes and a Small Update: The latest build primarily addressed bug fixes and included an update to llama.cpp. No new features were introduced.
- Members Champion Cautious Clicking: Users must be wary of posts with suspicious links that may generate unwanted revenue, such as those shortened with goo.gle.
- MemGPT Queries Draw in Kobold Experience: A member sought help from someone experienced with MemGPT, with potential guidance from another member who had integrated MemGPT with Kobold.
- Newly Acquired GPU Proves Promising: A member purchased an RX 7900 XT for 700 euros, concluding it more than fit for their needs. Another member suggested that larger models like Command-R+ or YI-1.5 (quantized variants) could be handled by the new GPU.
- OpenInterpreter Connection Confounds: A member expressed confusion connecting LM Studio with OpenInterpreter. The user had difficulty discerning a difference in error messages, whether the server was connected or not.
- New Yi Models Turn Heads: The LM Studio Community released new Yi models, including a notable 34B version suitable for 24GB cards. Enhanced with imatrix, the models are available in various sizes on the Huggingface page.
- Vulkan Attempts Blur LM Studio Framework: Users encountered difficulties integrating a Vulkan-backend llama.cpp with LM Studio, with no direct solution within the current framework.
- LM Studio CLI Thrills Hands-on Users: LM Studio CLI (lms) was introduced, allowing raw LLM inspections, model loading/unloading, and API server control. More information about usage can be found on the LM Studio blog.
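A minimal sketch of the remote-access fix from the server discussion above: point any OpenAI-compatible client at the host machine's IP rather than localhost. The IP address, port, and model name below are placeholders; LM Studio's server must be running on the host.

```python
from openai import OpenAI

# Replace the IP with the LM Studio host's address; the api_key value is
# arbitrary since the local server does not check it.
client = OpenAI(base_url="http://192.168.1.42:1234/v1", api_key="lm-studio")

resp = client.chat.completions.create(
    model="local-model",  # placeholder; use the identifier of the loaded model
    messages=[{"role": "user", "content": "Hello from another machine!"}],
)
print(resp.choices[0].message.content)
```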
OpenRouter (Alex Atallah) Discord
- JetMoE 8B Goes MIA: OpenRouter's JetMoE 8B Free model demoed a 502, and it's not a new dance step. It's offline due to upstream overload. Users are advised to switch dance partners for now.
- Two Multimodals Enter the OpenRouter Ring: OpenRouter freshens up its model roster with two multimodal MVPs - GPT-4o and LLaVA v1.6 34B. More pixels, more text, more AI power.
- API Watchman Stands Guard: Tired of hitting refresh to check OpenRouter's ever-evolving model list? Meet OpenRouter API Watcher, big brother to the changes, storing them in a SQLite database, with an easy-on-the-eyes UI and RSS feed for updates. Rest those F5 fingers.
- Unwrap the Rubik's AI Cube: Advanced research assistant and search engine, Rubik's AI, rolls out beta testing with a sweet offer - two months of free premium access to AI gems like Claude 3 Opus, GPT-4 Turbo, Mistral Large and more. Go on, take a peek here.
- OpenRouter's Trio Dukes it Out Against Fraudsters: With a strong(er) arm of anti-fraud measures and a pinch of necessary personal data for security, the three-strong OpenRouter team tackles operational disruptions head-on, banking on the likes of Stripe for some backup.
- Chatter Hub: Embedded models in OpenRouter? Maybe later. Advanced WebUI for creating multiple customizable personas or agents? Sure, give BigAGI or OpenWebUI a whirl. Oh, and did we mention Jetmoe does not have online access... just in case you were wondering.
Modular (Mojo 🔥) Discord
- Building Excitement Around Mojo's Nightly Builds: The latest in the mojo framework ushers in nightly builds that auto-push merged commits directly to its nightly branch. Community members can see precise workflow timeout adjustments detailed in the related PR.
- Memory Management in Mojo List Operations Needs Facelift: There is a buzz in the community regarding potential inefficiencies of Mojo's List memory pre-allocation. Optimized, it could bring a 2000x speedup in specific benchmarks, suggesting a pressing need to evolve memory management strategies.
- GitHub Actions Bug Creates Headaches for Transparency: Mojo users are experiencing a crucial bug with GitHub Actions as completed jobs masquerade as "pending". This misleading behavior obscures the visibility of ongoing workflows, affecting Mojo's recent commits and CI operations.
- Type Materialization Questions Swarm Mojo: Discussions on proper type materialization in Mojo hone in on issues such as managing memory pointers during type transformations. These concerns are leading to test failures and the need to revise the respective methods.
- New MoString Repository Challenges Mojo Developers: MoString, a new GitHub repository, showcases various StringBuilder ideas to explore in Mojo, including a method to optimize memory allocation. This endeavor calls for community contributions, proving to be an interesting experiment in pushing Mojo's boundaries.
- Ownership in Mojo Spotlighted in New Video: A recently shared video elucidates ownership in Mojo, designed to deepen knowledge. Python developers have offered insights on how these ideas transition from Python to Mojo, an angle promising better clarity for newcomers.
- Mojo and Rust Compiler Tradeoffs: Comparisons between Mojo's and Rust's compilers highlight Mojo's simpler approach, which focuses on coding rather than wrestling with documentation or intricate compiler specifics. Rust's robust system design and automatic vectorization capabilities come with a formidable learning curve, underscoring the need for thoughtful tool choice.
- Understanding Language-Query Tradeoff with SQL, ORMs, and Compilers: SQL's ease of use clashes with the rigorous system requirements of ORMs and compilers like Rust in a spirited discourse. These technologies present diverse levels of comfort and efficiency, implying the choice must come down to individual preference and project requirements.
CUDA MODE Discord
- Kernel Fusion Zoom Meeting Revealed: The technical guild organized a zoom session on real-world experiences fusing kernels. Attendees were guided to post their discussions and queries in a specific Discord channel, increasing engagement and fostering a focused learning environment.
- U Illinois PMPP Series Gains Traction: The guild continued the U Illinois PMPP series with weekly lectures targeting EMEA and NAM regions. These sessions have been made more accessible with a YouTube playlist and direct Zoom links.
- Discussing CUDA, Triton and the Art of Kernel Fusing: GPU Memory Management and CUDA formed the core of the discussions, with recurring themes around Triton's optimization potential and the benefits and strategies of kernel fusion. Key resources were shared, including papers, PRs, tutorials and GitHub commits.
- Grappling with GPU Compatibility and Installation: User queries around CUDA version compatibility with specific Torch versions and multi-GPU scripting highlighted the practical challenges faced during implementation. These doubts were clarified, making GPU utilization more effective and efficient.
- AI Kernel Building Simplified with ThunderKittens: Guild discussions centered around ThunderKittens, a new open-source project introduced by HazyResearch. The project's tile primitives aim to simplify AI kernel building, making AI's computational objectives more reachable for users.
- Harnessing the Power of llm.c and CUDA for Better Performance: Users debated the efficacy of CUDA graphs and torch.compile, seeking clarity on core processes while contemplating performance enhancements (a small fusion sketch follows this section). Other conversations centered on llm.c's possible utilization of ThunderKittens for future improvements, emphasizing the continuous pursuit of innovation in GPU programming.
- PMPP Book's YouTube Watch Party Kicks Off: A new YouTube watch party series was introduced focusing on the PMPP book's 2018 lectures. Through regular sessions punctuated with interactive discussions, the guild aimed to facilitate learning and practice, making it a valuable resource for CUDA enthusiasts and beginners alike.
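As a small taste of the kernel-fusion theme, here is a sketch of how torch.compile can fuse a chain of elementwise ops into a single generated kernel instead of launching one kernel per op. This is our illustration, not code from the session; it assumes PyTorch 2.x and a CUDA GPU.

```python
import torch

def fused_ops(x):
    # Three elementwise ops: eager mode launches separate kernels, while
    # torch.compile can fuse them into one generated Triton kernel.
    return torch.relu(x * 2.0 + 1.0).sin()

compiled = torch.compile(fused_ops)

x = torch.randn(1024, 1024, device="cuda")
# Results match; the win is fewer kernel launches and memory round-trips.
print(torch.allclose(fused_ops(x), compiled(x), atol=1e-6))
```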
Eleuther Discord
- Synthetic Data - The Next Big Thing or Old Wine in New Bottle?: The scaling-laws channel saw intense debates about the real game-changing impact of synthetic data. Lessons from prior cycles of hype, the potential of forgetting such lessons, and the trade-offs incurred were all hot topics.
- Battle of the Network Structures: Deep neural networks including CNNs, Transformers, and MLPs are put under a unified lens in a shared study. Another paper probes the limits of MLPs, hinting at untapped scalability possibilities despite present obstacles.
- Murky 'Zero-Shot' Claims Tackled in Multimodal Models: On the general channel, a recent research paper tethered spectacular "zero-shot" claims of multimodal models to concept frequency in pretraining data, sparking questions about the true foundation of these models' abilities.
- Falcon2 11B Soars High: News of the Falcon2 11B model code-named "Condor", boasting an 8k context window and refined attention mechanisms, has been revealed. It is trained on a 5T web dataset, heralding a promising future for inference capabilities.
- NeurIPS Collaboration and Model Compression Pondered: There's a call for collaboration on a NeurIPS submission in the interpretability-general channel reminiscent of an "othello paper". Model compression insights and the nature of features discarded during this process were centrally discussed.
Interconnects (Nathan Lambert) Discord
- GPT-4o Dazzles as Next Frontier: OpenAI introduced GPT-4o as their newest frontier model using the alias "im-also-a-good-gpt2-chatbot" in the LMSys arena. The model shows significant performance improvement which was announced in a tweet.
- Curiosity Piqued for GPT-4o's Coding Skills: The outstanding gap in coding capabilities between GPT-4o and its previous versions was a hot topic, stirring intrigue for the newly established MATH benchmarks. More details about these advancements can be explored through this blog post.
- Tokenizer Update Gives Hope for Efficiency Boost: OpenAI's latest update to its tokenizer hints at greater efficiency, likely resulting from an expanded vocabulary. You can peek at the tokenizer update directly in this GitHub commit.
- OpenAI's Strategic Decisions Prompt Speculations: OpenAI's strategic decision to grant access to GPT-4o for free stirred speculations among members, leading to a storm of hypotheses. From data gathering to competitive positioning against giant tech firms like Meta, the forum is abuzz with discussions comparing OpenAI's tactical moves.
- Live GPT-4o Demo Splits Opinions: OpenAI's live demo of GPT-4o elicited a broad array of responses, from potential applicability discussions to critiques on the presentation style. The realism, effectiveness and integration aspects of demonstrated technologies have proved to be magnetic subjects for scrutinizing community members.
- Revealing REINFORCE as PPO's Offspring: An illuminating PR on the Huggingface TRL repo posits REINFORCE to be a special case of PPO. This revelation is explored in depth in a GitHub PR that provides exhaustive explanations along with a referenced paper (a minimal gradient check follows this section).
- Chatbot Arena Gains Popularity: The Chatbot Arena community has won accolades from members as a significant contributor to the future of AI.
- Members Play with the Idea of Open Sourcing GPT-3.5: The potential open-sourcing of GPT-3.5 has entered the room of discussions, garnering some amusing responses, including one member asserting this could only happen when "hell freezes over".
- Surge in AI Video Consumption: Impressive viewership numbers have been reported with a video hitting 6k views in a day and others reaching 20k views. A video shared on HuggingFace paid off big time with a view count of 150k.
- Posting Videos on Platform X Under Scrutiny: The prudent idea of posting videos on Platform X triggered a discussion about the legality of native uploads, as well as permissions issues.
- Stanford Owns Rights but Stays Flexible: A member's confirmation that Stanford owns the rights to specific content, but is typically lenient about enforcement, opens up the opportunity for more liberal usage. Suggested measures to evade bureaucracy include requesting permission for personal use while assuming the risk of possible repercussions.
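The PR's claim is easy to sanity-check numerically: when the "old" policy equals the current one, the PPO ratio is identically 1 but still carries the gradient of log π, so no clipping is active and the PPO and REINFORCE gradients coincide. A toy verification of that reduction (ours, not from the PR):

```python
import torch

logits = torch.randn(4, 6, requires_grad=True)
actions = torch.tensor([0, 2, 1, 5])
advantage = torch.randn(4)

logp = torch.log_softmax(logits, dim=-1)[torch.arange(4), actions]

# PPO ratio with old policy == current policy: numerically 1 everywhere,
# but exp(logp - logp.detach()) still differentiates to d(logp).
ratio = torch.exp(logp - logp.detach())
ppo_loss = -(ratio * advantage).mean()
grad_ppo, = torch.autograd.grad(ppo_loss, logits, retain_graph=True)

reinforce_loss = -(logp * advantage).mean()
grad_reinforce, = torch.autograd.grad(reinforce_loss, logits)

print(torch.allclose(grad_ppo, grad_reinforce))  # True
```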
LAION Discord
- Art Imitates AI - Potential Legal Entanglements Debated: A hot topic under scrutiny was the potential legal pitfalls of AI services, like Midjourney, creating art that could be seen to compete against living artists. Specific attention focused on the balance between artists' rights and commercial applications of AI.
- Copyright, AI, and the Fine Print of Fair Use: The guild hall echoed with debates on AI's potential infringement of artists' copyrights when generating derivative works. Meanwhile, a faction raised the shield of fair use protections in this intellectual property war, pointing towards potential parallels with negative reviews harming a creator's business.
- AI Art & Fair Use - A Sparring Match of Opinions: Not all saw eye-to-eye in the contested landscape of AI-art; some guild members called for closer legal scrutiny on potential sales impacts on artists, while others staunchly stood their ground, labeling such usage as fair game under the broad umbrella of fair use.
- AI in the Court - Juries Need Not Apply?: Conversation shifted from art to juries as discourse dabbled in the subjects of jury nullification, and the role of people versus code in interpreting AI-related laws. The contrast between statute books and real-world legal application in this AI era sparked intrigue.
- Going Green With AI - Seeking Energy Efficiency in Era of Giants: Guild members shared innovations aimed at reducing AI's monstrous energy demands, looking towards new models and methods designed for a greener future. One source turned heads in the guild hall.
- Transforming the Sonic Landscape - Audio Data Takes Center Stage: Voices in the room turned louder regarding the task of transforming vast voice data sets into tokens. High-quality annotations focusing on emotions and speaker traits came up for discussion with a guild member sharing relevant resources for practice and educational content on YouTube.
- Converging on Mathematical Notations - Challenges in Formal Math Discussed: Technical lingo filled the air as a discussion on the use, or possible misuse, of certain formal mathematical notations indicating a sequence of elements unfolded. From the ashes of this discourse rose the function T, heralded as a valuable tool for sampling in process sequences.
LangChain AI Discord
- Have a Date with ISO in LangChain: The guild shares insights about extracting dates and converting them into ISO format using LangChain's DatetimeOutputParser. Get your hands dirty with code samples available in both JavaScript and Python.
- Extending DatetimeOutputParser for Date Ranges: To handle date ranges like "from April 1st to June 2nd" in LangChain, a reconstructed DatetimeOutputParser was proposed. A design-savvy guild member suggested tweaking the parse function to identify and pull out the start and end dates separately (a minimal sketch follows this section).
- Agent Solution for Multiple Market Descriptions: Multifaceted discussions thrived around extracting multiple market descriptions from prompts using LangChain's tool/function calling with LLMs. Scooping information from a prompt like "compare the prices between Belgium oil and Italy power" saw increased clarity with a structural extraction approach.
- Open-Source LLMs Get Cozy with LangChain: Some nifty insights on running local open-source LLMs via tools like Ollama found their way into LangChain integration. Buckle up to unravel a wealth of detail, from setting up the LLM and installing the must-have packages to finally jiving with the model.
- Piped up Conversations on API Responses Streaming: Aspiring to invite onboard API responses for multiple frontend elements through a single API call? Gain a leg-up with Python specifics and ogle at a relevant GitHub example.
- Cancer Drug Discovery Drowns in the AI Soup: Lend an ear to the compelling YouTube discourse on how Generative AI is redefining the contours of cancer drug research. An urgent plea for more automated methods seizes the spotlight.
- Open-Source Code Interpreter Takes Baby Steps: An open-source project designed to assist Visualization & Interactive Data Analysis (NLAVIDA) made a stellar debut. Promising compatibility with OpenAI API keys and Llama 3 in the future, the project lifts the curtain on confidential data analysis.
- Bloggers' Jab at RAG Pipeline with LangChain, Pinecone: If you've been itching to add a chat feature that leverages Retrieval Augmented Generation technology to your blog, then park your eyes here. This tutorial will systematically walk you through data ingestion to building engaging chat interfaces.
- LLM Flaunts its Feathers, Heads for Multimodal: With DinoV2 in sight, LangChain aims to go multimodal as the relevant YouTube video and accompanying GitHub notebook clearly indicate.
- Streaming Saga with Session & History Management: A member seeks tutorials or assistance to integrate streaming functionality into LangChain while playing nice with session and history management. This comes after suppressing multiple bottlenecks excluding streaming.
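A minimal standalone sketch of the proposed parse tweak for date ranges. It deliberately avoids depending on any particular LangChain version; the same logic could live in a DatetimeOutputParser subclass's parse method, fed by a prompt that asks the model to emit two ISO dates:

```python
import re
from datetime import datetime

ISO_DATE = r"\d{4}-\d{2}-\d{2}"

def parse_date_range(text: str) -> tuple[datetime, datetime]:
    # Pull out the first two ISO-formatted dates as (start, end).
    dates = [datetime.strptime(m, "%Y-%m-%d") for m in re.findall(ISO_DATE, text)]
    if len(dates) < 2:
        raise ValueError(f"Expected two ISO dates, got: {text!r}")
    return dates[0], dates[1]

# e.g. the LLM is prompted to rewrite "from April 1st to June 2nd" as ISO dates:
start, end = parse_date_range("from 2024-04-01 to 2024-06-02")
print(start.isoformat(), end.isoformat())
```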
LlamaIndex Discord
- Automated PowerPoint Wonder with Llama 3: A publication by a user showcased how the Llama 3 RAG pipeline combined with Python-pptx library can not only furnish answers but also generate PowerPoint slides. Check out the article here.
- Designing Financial Guru—the Reflective Way: A guide by Hanane Dupouy detailing the process of building a financial advisor leveraging CRITIC methodology for stock price analysis. All the wisdom is just a click away here.
- Content Control Prowess of RAG: The RAG pipeline was demonstrated to enforce adherence to moderation rules in user-created images. The complete know-how is presented here.
- Where's the Mettle in a RAG System: An insightful evaluation of four RAG system evaluation libraries—TruLens, Ragas, UpTrain, DeepEval—complete with supported metrics to ease your performance review process. Read all about it here.
- Llama 3's Abilities Manifest in Hackathon Cookbook: The hackathon hosted by @AIatMeta brought a compilation of seven distinct use cases for Llama3, moving through tasks from simple to complex. All recipes are assembled here.
- Unraveling LlamaIndex's Cache Issues: A user found a bug in the _aretrieve_context function that led to an undesired postprocessor deletion, but was glad to find it fixed in the current version of the llamaIndex library.
- Hybrid Search Setup Snag: A user faced a ValueError when setting up hybrid search with Qdrant, which was resolved by enabling hybrid search in the constructor: QdrantVectorStore(..., enable_hybrid=True).
- Understanding LlamaIndex—A Favorable Verdict: Members praised LlamaIndex for easy usage, flexible nature, praiseworthy documentation, and aptness in managing multi-platform support.
- AI Responses Go Rogue in Frontend: A member faced discrepancies in AI outputs displayed on the frontend, obtaining the error message "Unexpected token U" that brought about discussions on the potential cause.
- Querying with LlamaIndex—Putting Metadata to Use: A conversation followed from a user's query on metadata's role in the query method while using llamaIndex, leading to clarifications about metadata usage in filtering and retrieval processes (a hedged filtering sketch follows this section).
- Elevating GPT-3.5 with Knowledge Distillation: An article on Hugging Face discusses how knowledge distillation can improve finetuning GPT-3.5 as a judge, complete with a comprehensive guide. Check it out here.
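A hedged sketch of metadata-driven filtering at retrieval time. Import paths follow recent llama-index releases and may vary by version, and a default embedding model (e.g. an OpenAI API key) is assumed to be configured:

```python
from llama_index.core import Document, VectorStoreIndex
from llama_index.core.vector_stores import ExactMatchFilter, MetadataFilters

docs = [
    Document(text="Q1 revenue grew 10%.", metadata={"year": "2024"}),
    Document(text="Q1 revenue fell 5%.", metadata={"year": "2023"}),
]
index = VectorStoreIndex.from_documents(docs)

# Restrict retrieval to nodes whose metadata matches year == "2024".
filters = MetadataFilters(filters=[ExactMatchFilter(key="year", value="2024")])
retriever = index.as_retriever(filters=filters)
for hit in retriever.retrieve("How did Q1 revenue change?"):
    print(hit.node.text)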
OpenAccess AI Collective (axolotl) Discord
- Weighty Matters in Llama 3 Tuning: Examination of weight differences between instruct and base models of Llama 3 pointed to significant changes mainly in the K and V layers, hinting at targeted adjustments during instruct tuning. The possibility of freezing K/V layers for style tuning without losing instruct capabilities is under consideration.
- Inside the Checkpoint Conundrum: Clarifications around checkpoint naming conventions were brought up, emphasizing that an end run save should actually be located in the base folder - a nuance critical in deciphering save outputs during model runs.
- Sizing Up OpenOrca Rerun Funding: OpenOrca dedup's rerun on gpt-4o was proposed by a community champion, complete with cost estimates and a bonus insight into potential batch job pricing benefits. You can follow the action on its dataset page.
- Leading the Charge Against High Compute Usage: A volley of projects gunning to tame AI's sky-high compute usage were spotlighted, including Monarch Mixer, H3, and Hyena Safari. For a deeper dive, check out their thoughtful blog.
- Navigating the Torrent of AI Research Publishing: The sluggish pace of academic journal publications can let cutting-edge research turn stale in the fast-paced world of AI - a prominent challenge discussed in the community.
- Merge-Mania Success with Nanobitz: A code merge by user "Nanobitz" was reported successful - unfortunately, the details of the merger remain a mystery.
- LLAMA3 Template Errors Hit a Wrong Note: A LLAMA3 template in PyET hit a snag, raising confusion between 'LLAMA3' and 'LLAMA2'. The recipe for relief? Update your fastchat.
- Project Dependency Revamp Needed: User "trojaner" spotted seriously outdated project dependencies, such as peft, accelerate, deepspeed, flash-attn, xformers, and transformers. A sweeping upgrade to the latest versions is in order - except for peft, which requires installation from a repository due to a pesky plugin issue.
- FSDP and FFT: A Puzzle without Pieces: The compatibility of Fully Sharded Data Parallel (FSDP) with Fast Fourier Transform (FFT) remains inconclusive. Meanwhile, an alternative solution is under consideration - the DeepSpeed route.
- Docker AttributeError Decoded: An AttributeError encountered with LLAMA3 in a Docker scenario was diagnosed. The remedy? Update your pip dependencies and give a fresh git clone a whirl.
- Git Cloning Saves the Day for fastchat: A git cloning method triumphed in dealing with a persisting fastchat issue, flagging a potential snag with unupdated commits in certain branches.
- The Quandary of system_prompt Changes in Axolotl CLI: Modifying the system_prompt in axolotl.cli.inference left a user baffled. Even the AI advisor Phorm wasn't up for answering, underscoring an unresolved query worth revisiting.
- Converting Merged Model to GGUF Runs into Roadblocks: A FileNotFoundError occurred during conversion of a merged model to GGUF owing to missing matching tokenizers ['spm', 'hfft']. This error is a signal to double-check file structure and naming in future conversion runs.
- Gemma Model Loading Mishap: The perils of loading a GemmaForCausalLM model hit a user in the form of a size mismatch error in `model.embed_tokens.weight`. The suggested troubleshooting strategy is to add `ignore_mismatched_sizes=True` to the `from_pretrained` method, highlighting mismatch issues between training and application environments (a minimal sketch follows this list).
- Precision Matters with QLORA Merge: A question on merging QLORA to a base configuration without precision discrepancies between fp16 and fp32 emerged, underlining extant challenges in model integration and precision handling.
- Axolotl Phorm Bot to the Rescue! To seek advice on areas like Axolotl pruning capabilities and continuous pretraining tips, users turned to the Axolotl Phorm Bot. But alas, even the bot drew a blank, suggesting a revisit for these compelling queries at a later date. Read more on Phorm.
- Integration of qLoRA with Base Model Remains Elusive: A member's query on how to merge qLoRA into the base model was left hanging in the threads, indicative of an issue that needs some drilling down in future discussions.
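A minimal sketch of the `ignore_mismatched_sizes` workaround from the Gemma item above, assuming the Hugging Face transformers API; the checkpoint path is hypothetical. Note that transformers re-initializes any mismatched tensors (with a warning) rather than fixing the underlying vocab-size discrepancy, so this is a diagnostic escape hatch more than a cure.

```python
# Hedged sketch: load despite a size mismatch in model.embed_tokens.weight.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "path/to/finetuned-gemma",     # hypothetical local checkpoint
    ignore_mismatched_sizes=True,  # skip strict shape checks; mismatched weights are re-initialized
)
```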
OpenInterpreter Discord
- Claude's Clunky Compatibility: Issues have cropped up with Claude API integrations, with users seeing "goofy errors". The jury's out on whether these are compatibility or configuration snags.
- Automating Antidetect with Open Interpreter: A roll-up-your-sleeves chat suggested leveling up browser automation by using Open Interpreter to generate Python code from natural-language instructions. High-impact, low-effort automation? Yes, please!
- Local is Vocal: Lively debates on the performance of local models - Mixtral, Phi, Llama3 - versus GPT-4 are in full swing. It's unanimous though: GPT-4 takes the cake. However, the key to better local-model performance isn't just the model anymore; it's prompt optimization.
- GPT-4o is Turbocharged: GPT-4o, the new greyhound of AI land, is showing up all other models with its lightning speed - boasting up to 100 tokens/s, leading the pack on both performance and cost-efficiency.
- ChatGPT and Interpreter API: The Buddy Cop AI Movie We Need: All eyes are on ChatGPT voice conversational AI potentially buddying up with Open Interpreter API for some serious rock'n'roll. Keep your popcorn handy.
- LiteLLM and Llama3 Dance the Tango: Users are happily wiring up OpenInterpreter, LiteLLM, and Groq-hosted llama3, trading notes on working configurations.
- 01 Hardware Wifi Woes: One user's chilling connection horror story with the M5 board and 01-Light wifi network setup is making the rounds. Will they survive this fright night?
- 01 on the Go with App Version: Now 01 hardware says goodbye to the desk and hello to mobile. Thanks to Thatpalmtreeguy, an early app version made an appearance here.
- Another Apple Waits in the TestFlight: Thatpalmtreeguy is the gift that keeps on giving, mentioning another app awaiting TestFlight approval, with bright things predicted once it lands.
- Customer Service Replaces Sherlock Holmes: An OpenInterpreter order has gone missing, and tracking it down takes more than a fine-tooth comb. Will customer service at help@openinterpreter.com crack this case?
- The Launch of PyWinAssistant: The latest AI sherpa PyWinAssistant has stepped into the ring, described majestically by a user as the first open-source Large Action Model that controls human user interfaces through natural language. All GitHub details here.
- See PyWinAssistant in Action Live: Just one YouTube link away from witnessing near-real-time magic of PyWinAssistant. Grab your popcorn and soda!
tinygrad (George Hotz) Discord
- Variable Shapes in Tensors Explained: A user asked about the need for variable shapes in tensors, using a reference from Tinygrad Notes as basis. The feature is integral to handle situations where tensor shapes change dynamically, optimizing compilation times and avoiding the need to regenerate kernels for every new shape.
- Training Errors in Tinygrad Solved: An "AssertionError: Tensor.training should be set in the optimizer" error encountered during model training was solved by setting `Tensor.training = True`, as articulated in this Pull Request #4460.
- Advanced Indexing Operations Explored: The group discussed the challenges and strategies for implementing advanced indexing operations, such as `node_features[indexes[i]] += features[i]`, in Tinygrad. One of the proposed solutions was using one-hot encoding and matrix multiplication to aggregate features based on indices (see the sketch after this list).
- Graph Neural Network Curiosities: A discussion on how to implement Graph Neural Networks (GNNs) within Tinygrad focused on neighbour searches. Topics included the comparative complexities of implementing such features against libraries like Pytorch Geometric, and potential inefficiencies with naive O(N^2) tensor operation approaches.
- Improving Tinygrad's Error Handling: Members underscored better error handling as a way to improve Tinygrad's user experience. Such enhancements could borrow from Rust-style error messages that suggest the simplest fix, making issue resolution more straightforward for users.
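A framework-agnostic sketch (in NumPy) of the one-hot trick from the indexing item above; the same pattern maps onto tinygrad Tensors, which support one-hot construction and matmul but not in-place fancy-index accumulation. Shapes and data here are illustrative.

```python
import numpy as np

num_nodes, num_edges, dim = 5, 8, 4
indexes = np.random.randint(0, num_nodes, size=num_edges)  # target node per feature row
features = np.random.randn(num_edges, dim)
node_features = np.zeros((num_nodes, dim))

# one_hot[i, j] == 1 iff feature row i belongs to node j
one_hot = np.eye(num_nodes)[indexes]  # (num_edges, num_nodes)

# Equivalent to: for i in range(num_edges): node_features[indexes[i]] += features[i]
node_features += one_hot.T @ features
```

As the discussion noted, this materializes an (edges x nodes) matrix, so it trades memory for avoiding unsupported scatter ops.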
Cohere Discord
- Clearing the Cohere Bill Fog: Users confronted confusion over Cohere's billing details. After some discussion, it was concluded that discrepancies between statements were due to charges accumulated since the last invoice.
- Size Matters for Command R: Debate over Command R's impact resulted in members validating that input tokens indeed grow larger when web searches are involved.
- Cracking the Code of Glitch Tokens: A notable research paper concerning "glitch tokens" in the tokenizers of large language models sparked conversation about tokenizer efficiency and model safety.
- Aya vs Cohere Command Plus: When Sharp doesn't Cut it: Uncertainty surrounded the performance differences between Aya and Cohere Command Plus. User experiences ranged from inaccuracies with Aya's responses, specifically concerning general knowledge, to advice limiting Aya's usage exclusively to translations.
- Help! There's Always Support Here: One user voiced frustration over perceived lack of Cohere support. Other members were quick to assure him of the community's responsive nature and staff availability.
- Specialist Needed: Telecom Domain: An invitation was extended for engineers interested in specializing large language models in the 5G telecommunications arena. The challenge can be found here.
- Is Your PDF Chatty?: An inquiry about Cohere's potential use for 'Chat with PDF' applications was posted, prompting several responses. The user sought information on current projects and suggestions for related reads and repositories.
Datasette - LLM (@SimonW) Discord
- Unsettled LMSYS vs LLM Quality Debate: The utility of lmsys as an index for assessing LLM quality remains an open debate within the group. No definitive viewpoint has emerged on the issue yet.
- GPT-4o Fails to Deliver the Promise: Critics have pointed out GPT-4o's performance deficit, especially its inability to correctly enumerate books. Despite its speedy responses and tempting rates, the model seems to lag in fundamental reasoning capabilities compared to its predecessor, GPT-4.
- AI Future Outlook: Doubts have surfaced over the exaggerated hype around AGI (Artificial General Intelligence), given the modest improvements showcased in current models like GPT-4 and Claude 3 Opus. Some group members have expressed cautious optimism about the anticipated advancements in future iterations.
- Dilemma Over Google Vertex AI Credits: A member has raised queries on efficient methods to put Google Vertex AI credits, which are nearing expiration, to good use. However, concrete plans for any potential tests are still missing.
- Questionable Voice Assistant Behavior: Issues with a voice assistant's ill-timed laughter were brought up, as it could tarnish the user experience. Custom prompts were suggested as a remedy to keep output professional and avoid potential user-acquisition hurdles.
- Making Use of a Tweet About LLM: A tweet from member @SimonW, which offers insights on LLM, was shared in the group. The link to the tweet was provided without additional context or discussion.
Mozilla AI Discord
- No GGUF for OpenELM in sight: Members pointed out that a repository purporting to contain GGUF files for OpenELM is a red herring. Attention, accuracy, and proactivity are key in navigating the digital information landscape.
- Sprucing up the llamafile: Through new Pull Request #412, an added script facilitates the upgrade of llamafile archives, drawing upon external resources. It's tech flex at its finest!
- Hermes is a Speedster: Personal testing reports smooth operation of the Hermes-2-Pro-Llama-3-8B-Q5_K_M.gguf model on llamafile, with response times hovering near 10 seconds and RAM consumption peaking at 11GB on an AMD 5600U system. For context, the model weighs in at 5.6GB.
- Models Playing Hooky: Users report persistent hiccups when running models such as Llama 8B and Mistral, with KV cache space issues the usual culprit. Performance varies with the available RAM on different systems.
- Enhancing Metadata Game for Llamafile: Work is underway to allow custom authorship metadata within llamafile and gguf. This promises a more practical approach to file management and easy-peasy searches on platforms such as Hugging Face. Peep the details here.
DiscoResearch Discord
- German YouTube Content Hunt: A member rallied the community on a mission to curate a comprehensive list of high-quality German podcasts, news programs, and YouTube channels. The aim is to gather valuable training data for a German Text-to-Speech (TTS) system.
- A MediathekView to Curate: MediathekView emerged as a recommended tool for downloading shows and films from a variety of German broadcasters, offering a potential goldmine for German TTS system training. The platform garnered interest due to its local storage of a vast film database including links and detailed descriptions, available for download here.
- JSON API, MediathekView’s Secret Weapon: Possibilities of automated access to media content data through MediathekView's JSON API sparked interest. It opens doors for efficient collection and organization of the German film database, explored in more depth at this GitHub link.
- Demo Dilemma and Praise: A participant inquired about the operational status of a demo in the Discord channel. Later, the same member expressed admiration, labeling the demo as "really nice."
- Lost in Translation, Let's Stick to English: A gentle reminder was issued for maintaining English as the primary language for communication within the channel. This ensures content remains accessible and comprehensible to all members of the diverse, international community.
LLM Perf Enthusiasts AI Discord
- Claude 3 vs Llama 3b: The Clash of Titans: A comparison between Claude 3 Haiku and Llama 3b for entity extraction scoring services sparked deep conversation. The idea is to switch from traditional fuzzy string matching to a smaller LLM to coordinate submodels within Pydantic models.
- Modeling Entity Extraction: Tweaking accuracy in entity extraction from documents holds attention as engineers aim to build a scoring service. They plan to use Pydantic models for comparing predicted and actual outcomes, starting with Instructor (a sketch follows this list).
- Audio Tech: The Next Frontier?: Anticipation grows for an audio-related element, possibly audio in-out support for an assistant. The increased involvement of the OpenAI audio team gives more weight to these speculations.
- GPT-4o Release in the Pipeline: The OpenAI spring update was expected to unveil the anticipated GPT-4o on Monday, May 13, 2024, alongside updates to ChatGPT, adding fuel to the excitement.
- Celebrity Factor Spurs Excitement: The community is quite thrilled about actress Scarlett Johansson injecting a bit of star power in the AI space, raising the stakes of an upcoming feature or campaign.
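A minimal sketch of the Pydantic-based scoring idea; the entity fields and the set-based F1 rule are illustrative assumptions, not the members' actual schema.

```python
from pydantic import BaseModel


class Entities(BaseModel):
    people: list[str]
    organizations: list[str]


def field_f1(pred: list[str], gold: list[str]) -> float:
    """Set-based F1 between predicted and gold values for one field."""
    p, g = set(pred), set(gold)
    if not p or not g:
        return 1.0 if p == g else 0.0
    precision, recall = len(p & g) / len(p), len(p & g) / len(g)
    return 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)


pred = Entities(people=["Ada Lovelace"], organizations=["Acme Corp"])
gold = Entities(people=["Ada Lovelace"], organizations=["Acme Corp", "Initech"])
scores = {name: field_f1(getattr(pred, name), getattr(gold, name)) for name in Entities.model_fields}
print(scores)  # {'people': 1.0, 'organizations': 0.666...}
```

An LLM (e.g. Claude 3 Haiku via Instructor) would fill the predicted `Entities`; the scoring itself stays deterministic.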
Alignment Lab AI Discord
- AlphaFold3 Federation Primed for a Powwow: The AlphaFold3 Federation is set to host a meetup on May 12th at 9pm EST. Attendees can expect updates on the current status of AlphaFold3 integration, possible choke points in the training pipeline architecture and an open Q&A session. RSVP here.
- The Orange Role Mysteriously Emerges: A member's inquiry sparked discussion around server roles, specifically curiosity about the "Orange Team" role. Answers are yet to be provided.
- Fasteval Seeks New Torchbearer: 'tju01' shared the news about discontinuing work on the fasteval project with no immediate plans for any related sequel. They're interested in passing on the GitHub project to a worthy successor, otherwise, the project's channels may be consigned to digital oblivion.
AI Stack Devs (Yoko Li) Discord
- New Speeds, New Crowds: An inquiry about varying character moving speed and the number of NPCs within AI Town sparked interest but hasn't seen any responses yet. More experimental freedom could be on the horizon for avid AI Town users.
- Balancing Act Between NPCs and PCs: One engineer delved into refining player-NPC interactions within AI Town, suggesting reducing NPC interaction frequency. Utilizing the llama3 model, they hope to alleviate computational load on local machines and enhance the overall player experience.
Skunkworks AI Discord
The provided text does not contain enough information for a meaningful summary.
YAIG (a16z Infra) Discord
Apologies, but a summarized report cannot be generated for this channel. The provided message "Agree!" from user "pranay01" lacks sufficient context and substantive content to be included in a technical summary.
The MLOps @Chipro Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The AI21 Labs (Jamba) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
PART 2: Detailed by-Channel summaries and links
Unsloth AI (Daniel Han) ▷ #general (834 messages🔥🔥🔥):
- Quantum vs. Turing and Tech Regulations: Discussions revolved around the effectiveness of quantum computers versus Turing computers, with Turing currently outperforming even in areas where quantum is expected to excel. Concerns were also voiced about governmental focus on regulating quantum computing as opposed to AI, which could benefit larger corporations at the expense of smaller innovators.
- Critique on OpenAI's Policies and Partnerships: Members expressed dissatisfaction with OpenAI's moves towards exclusivity and regulatory capture, such as GPU signing and collaborations with the White House, suggesting these actions might hinder open competition and innovation.
- Concerns Over Model Censorship and Access: The community discussed the potential dangers and ethical concerns of uncensored models, comparing AI model control to regulation of physical tools like knives, emphasizing regulation should focus on misuse rather than the tools themselves.
- Discussions on Model Training and Manipulation Techniques: There was a technical exchange on various model training and manipulation tactics, including the use of uncensored LLMs and methods for merging models with new adaptations without explicit authorization.
- Community Interest in Expanding Open Source Projects: Conversations also touched upon initiatives to expand open, empathic projects, and appeals were made for community involvement to enrich AI's understanding and implementation across broader and more nuanced human contexts.
- Tweet from Daniel Han (@danielhanchen): Was fixing LLM fine-tuning bugs and found 4 issues: 1. Mistral: HF's batch_decode output is wrong 2. Llama-3: Be careful of double BOS 3. Gemma: 2nd token has an extra space - GGUF(_Below) = 3064...
- Rethinking Machine Unlearning for Large Language Models: We explore machine unlearning (MU) in the domain of large language models (LLMs), referred to as LLM unlearning. This initiative aims to eliminate undesirable data influence (e.g., sensitive or illega...
- Lorem Function – Typst Documentation: Documentation for the `lorem` function.
- alpindale/WizardLM-2-8x22B · Hugging Face: no description found
- Hugging Face – The AI community building the future.: no description found
- ThunderKittens: A Simple Embedded DSL for AI kernels: no description found
- tiiuae/falcon-11B · Hugging Face: no description found
- Gojo Satoru Gojo GIF - Gojo Satoru Gojo Ohio - Discover & Share GIFs: Click to view the GIF
- NTQAI/Nxcode-CQ-7B-orpo · Hugging Face: no description found
- Anima/air_llm at main · lyogavin/Anima: 33B Chinese LLM, DPO QLORA, 100K context, AirLLM 70B inference with single 4GB GPU - lyogavin/Anima
- eramax/nxcode-cq-7b-orpo: https://huggingface.co/NTQAI/Nxcode-CQ-7B-orpo
- LLaMA-Factory/scripts/llamafy_qwen.py at main · hiyouga/LLaMA-Factory: Unify Efficient Fine-Tuning of 100+ LLMs. Contribute to hiyouga/LLaMA-Factory development by creating an account on GitHub.
- Joy Dadum GIF - Joy Dadum Wow - Discover & Share GIFs: Click to view the GIF
- Mistral Fine Tuning for Dummies (with 16k, 32k, 128k+ Context): Discover the secrets to effortlessly fine-tuning Language Models (LLMs) with your own data in our latest tutorial video. We dive into a cost-effective and su...
- Introducing GPT-4o: OpenAI Spring Update – streamed live on Monday, May 13, 2024. Introducing GPT-4o, updates to ChatGPT, and more.
- Llama 3 Fine Tuning for Dummies (with 16k, 32k,... Context): Learn how to easily fine-tune Meta's powerful new Llama 3 language model using Unsloth in this step-by-step tutorial. We cover:* Overview of Llama 3's 8B and...
- GitHub - HazyResearch/ThunderKittens: Tile primitives for speedy kernels: Tile primitives for speedy kernels. Contribute to HazyResearch/ThunderKittens development by creating an account on GitHub.
- GitHub - lilacai/lilac: Curate better data for LLMs: Curate better data for LLMs. Contribute to lilacai/lilac development by creating an account on GitHub.
- GitHub - unslothai/unsloth: Finetune Llama 3, Mistral & Gemma LLMs 2-5x faster with 80% less memory: Finetune Llama 3, Mistral & Gemma LLMs 2-5x faster with 80% less memory - unslothai/unsloth
- remove convert-lora-to-ggml.py by slaren · Pull Request #7204 · ggerganov/llama.cpp: Changes such as permutations to the tensors during model conversion makes converting loras from HF PEFT unreliable, so to avoid confusion I think it is better to remove this entirely until this fea...
- GitHub - ggerganov/llama.cpp: LLM inference in C/C++: LLM inference in C/C++. Contribute to ggerganov/llama.cpp development by creating an account on GitHub.
Unsloth AI (Daniel Han) ▷ #random (15 messages🔥):
- OpenAI's New Model Spec Discussion and Community Q&A: OpenAI has released a new Model Spec for improving the behavior of their models in the API and ChatGPT. Set your reminders, as OpenAI CEO Sam Altman will answer community questions in a Reddit Q&A session today at 2pm PST.
- Community Hopes for OpenAI's Upcoming Innovations: Members express mixed emotions about upcoming innovations from OpenAI, with some members holding expectations of revitalizing AI, while others remain skeptical, fearing potential disappointment.
- Debate Over OpenAI's Open-Source Strategy: There's an ongoing debate about whether OpenAI should release a model open-source. One side argues that releasing a model could lead to negative press if it doesn't meet standards, while others believe it could still position them favorably even if the model isn't groundbreaking.
- Discourse on AI Industry Trends and Speculations: Members discuss various industry trends, including the unlikely expectation of a model being 10x better than current offerings and potential competitive moves if Llama becomes SOTA.
- Perspectives on OpenAI's Market Position and Strategic Decisions: Despite rumors of an AI winter, members believe OpenAI remains at the top of the AI industry. The conversation also touched on strategic reasons behind OpenAI’s decision-making regarding public model releases, including prior instances involving leaks and grants requiring openness.
- Reddit - Dive into anything: no description found
Unsloth AI (Daniel Han) ▷ #help (312 messages🔥🔥):
- Quantized Model Compatibility Issues: A user raised concerns about the compatibility of quantized models with TGI, mentioning Sharding Errors on HF dedicated inference. They questioned whether `.for_inference` and TGI are mutually exclusive, implying a potential need for a manual inference setup. Read more on GitHub.
- Confusion on Saving and Loading Models: Discussions indicate challenges around how to precisely save and load models, notably using the `16bit` format via `model.save_pretrained_merged(...)` for compatibility with TGI (a sketch follows this list). Alternatives involving vLLM and the GGUF format were touched on lightly, but exact operational guidance is lacking.
- Gemma Model Tokenization Issue: Users discussed tokenization issues linked to GGUF-formatted models; Gemma's GGUF has extra spaces causing incorrect tokenization, with advice to patch tokenization either manually or through established Unsloth channels.
- Clarifications and Instructions Needed for New Model Features: Enquiries about leveraging LLaMA-Factory for 70B model training were discussed; meanwhile, FastLanguageModel usage questions arose, focusing on loading from a locally saved directory. Additional concerns about maximizing potential without creating new infrastructure overhead were expressed.
- Guidance Sought on Complex Modeling Techniques: Users sought advice for creating models capable of handling complex, multitopic conversations, with suggestions ranging from fine-tuning on specialized datasets to employing prompt engineering or developing an Elaborator model approach, highlighting the iterative journey of model optimization in chatbot frameworks.
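A hedged sketch of the 16-bit merged save under discussion, assuming Unsloth's documented API (`save_pretrained_merged` with `save_method="merged_16bit"`); the model name and output directory are illustrative.

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # 4-bit base used for QLoRA fine-tuning
    max_seq_length=2048,
    load_in_4bit=True,
)

# ... attach LoRA adapters and fine-tune here ...

# Merge adapters into the base weights and save in fp16 for TGI/vLLM serving.
model.save_pretrained_merged(
    "merged-model",  # hypothetical output directory
    tokenizer,
    save_method="merged_16bit",
)
```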
- unsloth/unsloth/__init__.py at d3a33a0dc3cabd3b3c0dba0255fb4919db44e3b5 · unslothai/unsloth: Finetune Llama 3, Mistral & Gemma LLMs 2-5x faster with 80% less memory - unslothai/unsloth
- Home: Finetune Llama 3, Mistral & Gemma LLMs 2-5x faster with 80% less memory - unslothai/unsloth
- Load: no description found
- I got unsloth running in native windows. · Issue #210 · unslothai/unsloth: I got unsloth running in native windows, (no wsl). You need visual studio 2022 c++ compiler, triton, and deepspeed. I have a full tutorial on installing it, I would write it all here but I’m on mob...
- GitHub - unslothai/hyperlearn: 2-2000x faster ML algos, 50% less memory usage, works on all hardware - new and old.: 2-2000x faster ML algos, 50% less memory usage, works on all hardware - new and old. - unslothai/hyperlearn
- Google Colab: no description found
- Sou Cidadão - Colab: no description found
Unsloth AI (Daniel Han) ▷ #showcase (1 messages):
- Llama Models Fine-Tuned for Token Classification Shared: Sauravmaheshkar has fine-tuned Llama variants and shared the model weights on the 🤗 Hub. These models, including `unsloth/llama-2-7b-bnb-4bit` trained on conll2003 with LoRA adapters, are now accessible to the community (a sketch of the setup follows below).
- Upcoming Insights on Llama Fine-tuning: A blog post and accompanying notebook detailing the fine-tuning process will soon be featured on the Weights & Biases blog, providing additional insights and practical implementation details.
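A hedged sketch of what such a LoRA token-classification setup could look like with the Hugging Face peft API, assuming a transformers version that supports Llama for token classification; hyperparameters and target modules are illustrative.

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForTokenClassification

model = AutoModelForTokenClassification.from_pretrained(
    "unsloth/llama-2-7b-bnb-4bit",
    num_labels=9,  # conll2003 uses 9 NER tags (O plus B/I for PER, ORG, LOC, MISC)
)

lora_config = LoraConfig(
    task_type=TaskType.TOKEN_CLS,
    r=16,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["q_proj", "v_proj"],  # a common choice for Llama attention layers
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters train
```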
Link mentioned: LlamaForTokenClassification - a SauravMaheshkar Collection: no description found
Stability.ai (Stable Diffusion) ▷ #general-chat (976 messages🔥🔥🔥):
- SD3 Release Doubts: Users expressed skepticism regarding the release date of Stable Diffusion 3, often making humorous comparisons and sharing GIFs of doubts about the release. The sentiment indicated that despite corporate timelines, the SD3 launch is considered mythical by many in the community.
- ControlNet and Fine-Tuning Discussion: Users discussed various aspects of using ControlNet and LoRA for specific tasks such as inpainting and genuine text integration in images. One user gave detailed advice on an alternative method using Krita to manually adjust text within images.
- Hardware Recommendations for SD: A conversation was held regarding the efficiency of hardware like AMD RX 6750 XT and NVIDIA RTX 4090 for running Stable Diffusion, with mixed opinions on whether higher-end GPUs significantly outperform older models in SD tasks.
- Content Creators Seeking Advice: There was an instance of a user seeking assistance for finetuning Stable Diffusion for generating specific product ads, indicating the application of SD in commercial settings. Another discussed the need for character consistency in generating multiple images, linking to external resources for further help.
- General Query and Assistance: Users asked for technical help and shared personal anecdotes about using Stable Diffusion, from troubleshooting copy/paste issues in interfaces like ComfyUI to discussing upscaling methods that incorporate extra detail into images.
- Lewdiculous/Average_Normie_l3_v1_8B-GGUF-IQ-Imatrix · Hugging Face: no description found
- dranger003/c4ai-command-r-v01-iMat.GGUF · Hugging Face: no description found
- CohereForAI/c4ai-command-r-v01 · Hugging Face: no description found
- CohereForAI/c4ai-command-r-plus · Hugging Face: no description found
- Nikolas Cruz's Depraved Google Search History: A glimpse of the Parkland shooter's descent into the dark bowels of the Internet. This is why parents should monitor what their children do online. Warning: ...
- jonathan frakes telling you you're wrong for 47 seconds: it never happened
- GitHub - Zuellni/ComfyUI-ExLlama-Nodes: ExLlamaV2 nodes for ComfyUI.: ExLlamaV2 nodes for ComfyUI. Contribute to Zuellni/ComfyUI-ExLlama-Nodes development by creating an account on GitHub.
- no title found: no description found
- GitHub - nullquant/ComfyUI-BrushNet: ComfyUI BrushNet nodes: ComfyUI BrushNet nodes. Contribute to nullquant/ComfyUI-BrushNet development by creating an account on GitHub.
- GitHub - KoboldAI/KoboldAI-Client: Contribute to KoboldAI/KoboldAI-Client development by creating an account on GitHub.
- GitHub - LostRuins/koboldcpp: A simple one-file way to run various GGML and GGUF models with KoboldAI's UI: A simple one-file way to run various GGML and GGUF models with KoboldAI's UI - LostRuins/koboldcpp
- Reddit - Dive into anything: no description found
- Character Consistency in Stable Diffusion - Cobalt Explorer: UPDATED: 07/01– Changed templates so it’s easier to scale to 512 or 768– Changed ImageSplitter script to make it more user friendly and added a GitHub link to it– Added section...
OpenAI ▷ #annnouncements (2 messages):
- GPT-4o Unleashed to the Public: OpenAI announces that the new flagship model, GPT-4o, along with features like browse, data analysis, and memory, are now available to everyone for free, albeit with certain limits. For more information, visit GPT-4o and More Tools.
- Enhanced Access for Plus Users: Plus users will benefit from up to 5x higher limits and will get the earliest access to upcoming features such as a new macOS desktop app and advanced voice and video capabilities.
- Introducing Multimodal GPT-4o: The new GPT-4o model supports real-time reasoning across audio, vision, and text. Text and image inputs are available from today via API and ChatGPT, with voice and video inputs expected in the coming weeks. Learn more at Hello GPT-4o.
OpenAI ▷ #ai-discussions (689 messages🔥🔥🔥):
- Exploring GPT-4 and GPT-4o Capabilities: Users are actively testing and comparing the performance of GPT-4 and the newly introduced GPT-4o in various tasks. While GPT-4o is noted for its speed, some users believe GPT-4 is superior in reasoning, with specific mention that GPT-4o needs more explicit instructions to perform optimally.
- Confusion Over Voice and Camera Features: There is excitement about new features like real-time camera sharing and voice mode, but some confusion persists as these features are not yet available to all users despite being showcased in demos.
- Desktop and Mobile App Developments: There's eagerness for the roll-out of the macOS app for ChatGPT, with a Windows version mentioned to be in progress. Users are hunting for download links, and availability is not yet consistent for everyone.
- Discussions on Subscription Value: With the introduction of GPT-4o, there's ongoing discussion about the value of paid subscriptions like ChatGPT Plus, especially when GPT-4o appears to offer significant advancements.
- Concerns About Model Memory and Token Counters: Some users express disappointment regarding memory performance in GPT-4o compared to older models. There's also a desire for features like token counters to better manage model interactions within user projects.
- Twerk Dance GIF - Twerk Dance Dog - Discover & Share GIFs: Click to view the GIF
- Models - Hugging Face: no description found
- Sync codebase · openai/tiktoken@9d01e56: no description found
OpenAI ▷ #gpt-4-discussions (126 messages🔥🔥):
- Exploring GPT-4o's Output Limitations: There was confusion regarding GPT-4o's token output limitations. It was clarified that the API limit for output tokens is higher than what was initially available in the API playground; GPT-4o supports up to 4096 output tokens per message, rather than the lower figure of 2048 initially encountered by users (a sketch follows this list).
- Clarifications on Custom GPTs Utilizing GPT-4o: Members debated whether custom GPTs are currently utilizing the new GPT-4o model. It was confirmed that, as of now, custom GPTs are not using the GPT-4o model, although there was some user confusion regarding output differences.
- GPT-4o Enhances Speed and Performance: It was shared that GPT-4o is substantially faster than its predecessor, with some benchmarks stating it is twice as fast as GPT-4. However, this speed increase applies only to the API, not to the quality or nature of the responses.
- Per-GPT Memory and Rollout Status: Discussed the rollout of per-GPT memory, where it was mentioned that each custom GPT would have its own separate memory bank, potentially toggleable by the creator. However, there is no official timeline for when this feature will be broadly rolled out.
- Understanding Subscription Benefits Post-GPT-4o Announcement: A discussion unfolded about the value of continuing a Plus subscription given that many features are becoming available on the free tier. Users weighed the current benefits of Plus versus expected future enhancements that might justify the subscription cost.
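For API users, requesting the full output budget is a one-parameter change; a minimal sketch with the OpenAI Python SDK (v1-style client) follows, with an illustrative prompt.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    max_tokens=4096,  # the per-message output cap discussed above
    messages=[{"role": "user", "content": "Summarize the GPT-4o launch in one paragraph."}],
)
print(response.choices[0].message.content)
```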
OpenAI ▷ #prompt-engineering (32 messages🔥):
- Persistent Moderation Filter Mystery Unraveled: A member shared issues with Gemini 1.5 failing to process requests related to a "romance package" despite having no safety filters enabled. They explored various settings adjustments without success and considered that issues on the provider's end might be causing the restriction.
- Syntax Error or Safety Settings in AI: Suggestions were made regarding potential syntax errors or improper disabling of safety settings that could be causing the issue noted with Gemini 1.5. Further checks in AI labs were recommended to pinpoint the source of errors in processing specific content requests.
- Casual Interaction in the Chat: Two users casually greeted each other, not contributing any substantial query or issue to the ongoing discussions or topics.
- Directory and File Operation Query via Python: A user requested a method to display and handle files programmatically, specifying the task in Python: creating directories, handling files across separate sessions, and finally zipping the directory and providing a download link (a sketch of the zipping step follows).
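A minimal sketch of the directory-plus-zip portion of that request, using only the Python standard library; paths are illustrative, and the "download link" step depends on the hosting environment.

```python
import os
import shutil

os.makedirs("workspace/output", exist_ok=True)  # create the directory tree

with open("workspace/output/notes.txt", "w") as f:
    f.write("example file written in one session\n")

# Zip the whole directory; produces workspace_archive.zip next to the script.
archive_path = shutil.make_archive("workspace_archive", "zip", root_dir="workspace")
print(f"archive ready at {archive_path}")
```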
OpenAI ▷ #api-discussions (32 messages🔥):
- Gemini 1.5 Fails on Romance Requests: A user reported an issue with Gemini 1.5, where any query related to "romance package" results in consistent failures, despite the application’s broad success in other areas. They expressed frustration, having tried various solutions including generating new API keys, setting blocks to none, and adjusting temperature settings, all without success.
- Safety Settings Scrutiny Required: In response to the problem, another member suggested checking whether safety settings in the application were explicitly turned off, as leaving them undefined could default to them being on. This could be blocking content related to the word "romance" or "package", and might necessitate a deeper review of how these settings are managed.
- Syntax and Google's Role Considered: The discussion pivoted to possible syntax errors or issues external to user control, specifically involving Google's systems. There was a suggestion to test the problematic prompts in the AI Lab to rule out syntax issues, and a hint that disabling safety protocols through a GUI might be necessary.
- Frustration Despite Expertise: The user, demonstrating significant usage of OpenAI’s offerings (over 1 billion tokens monthly) and a preference for Gemini over Claude, expressed both knowledge and frustration regarding the ongoing issue. They openly hoped for a resolution in the near future, acknowledging their familiarity with the systems while facing unexpected challenges.
- Prompt for Python File Handling: Another member posted a complex Python task, requesting assistance to display a full file tree, create a directory, manage file writing in separate Python sessions, and finally zip a directory including instructions to provide a download link. This showcases the variety of technical queries handled within the community.
OpenAI ▷ #api-projects (2 messages):
- Inquiry about ChatGPT Clone with Message Tracking: A user expressed interest in creating a ChatGPT clone utilizing the GPT-3.5 model with a unique feature: the capability to monitor messages sent and received by users within an organization. There was no solution or further discussion provided following this inquiry.
Nous Research AI ▷ #ctx-length-research (1 messages):
king.of.kings_: i am struggling to get llama 3 70b to be coherent over 8k tokens lol
Nous Research AI ▷ #off-topic (16 messages🔥):
- A Glimpse of Aurora in France: Aurora sightings were reported in the sky over the central volcanic region of Auvergne, in metropolitan France.
- Introducing MAP-Neo, the Transparent Bilingual LLM: MAP-Neo is a transparent bilingual Large Language Model (LLM) trained on 4.5 trillion tokens, supported by community efforts from 01.ai and wuhan.ai. It matches the performance of proprietary models in tasks like reasoning and math while ensuring transparency by sharing resources such as checkpoints and dataset compositions. Explore the neo models on Huggingface and GitHub.
- Period Recipes Influence Modern Gaming: In Kingdom Come: Deliverance, a role-playing game, perpetual stews reflect a historical cooking method that enriches the game's authenticity and influences players' everyday cooking practices.
- Challenges in Software Automation via RDP: Users discussed the difficulty of automating software that runs over remote connections like RDP, where direct interactions with the software’s DOM are not possible. Implementations suggested included using RPA techniques or reverse engineering with tools like Frida for a more direct interaction with the software’s functionality.
- YouTube Video Sharing: Users shared YouTube videos for viewing, though the content of these videos within the context of the discussion wasn't specified. Here are the links: Video by paradroid and Video by pradeep1148.
- Mother Day GIF - Mother day - Discover & Share GIFs: Click to view the GIF
- Neo-Models - a m-a-p Collection: no description found
- m-a-p/Matrix · Datasets at Hugging Face: no description found
- GitHub - multimodal-art-projection/MAP-NEO: Contribute to multimodal-art-projection/MAP-NEO development by creating an account on GitHub.
Nous Research AI ▷ #interesting-links (6 messages):
- Exploring Multidirectional Neural Operation: A new paper discusses the potential for artificial neural networks to optimize for multidirectional value propagation, mirroring some biological neuron behaviors. This approach could allow a neuron model to handle entire joint distributions, potentially enhancing the way networks handle complex dependencies. Read the abstract here.
- React App Simulates Taskmaster Episode: A member has developed a React application that simulates a Taskmaster game show episode using a state machine pattern. Each episode component manages different stages, interacting with LLMs to generate content, although it requires a manual retry for misformatted outputs. Explore the GitHub project.
- Hierarchical Correlation Reconstruction in Neural Networks: The mentioned research piece introduces Hierarchical Correlation Reconstruction (HCR) for modeling neurons. This could significantly shift how neural networks model and propagate complex statistical dependencies. View the resource on Hugging Face.
- Advanced Knowledge Graph Generation Using Mistral 7B: Utilizing the Mistral 7B instruct v 0.2 model and the llama-cpp-agent framework, a detailed knowledge graph of the Industrial Military Complex was created. The framework supports multiple server types and facilitates structured interaction with large language models. View the framework on GitHub.
- Deep Dive into Audio-visual AI Transformation by OpenAI: A detailed breakdown revealed that OpenAI might be progressing towards real-time multimodal AI interactions by directly mapping audio to audio and streaming video to transformers. The techniques might involve sophisticated system optimizations, data sources like YouTube dialogues, and potentially proprietary streaming codecs, aiming for tighter integration with devices like iOS. Read the full discussion on Twitter.
- Biology-inspired joint distribution neurons based on Hierarchical Correlation Reconstruction allowing for multidirectional neural networks: Popular artificial neural networks (ANN) optimize parameters for unidirectional value propagation, assuming some guessed parametrization type like Multi-Layer Perceptron (MLP) or Kolmogorov-Arnold Net...
- Yi-1.5 (2024/05) - a 01-ai Collection: no description found
- GitHub - Maximilian-Winter/llama-cpp-agent: The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs). Allowing users to chat with LLM models, execute structured function calls and get structured output. Works also with models not fine-tuned to JSON output and function calls.: The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs). Allowing users to chat with LLM models, execute structured function calls and get structured...
- Taskmaster-LLM/src/App.js at main · LEXNY/Taskmaster-LLM: Contribute to LEXNY/Taskmaster-LLM development by creating an account on GitHub.
Nous Research AI ▷ #general (741 messages🔥🔥🔥):
- GPT-4o Launch Sparks Debate: The discussion was centered around the new GPT-4o update. Some users appreciated the improvement in coding performance, while others were underwhelmed by the model's speed and token output limitations.
- New Tokenization Updates and Efficiency: The updated tokenizer for GPT-4o was noted to support multiple languages better, though at some cost in efficiency (a sketch of inspecting the new tokenizer follows this list). Despite this, the token limit remained a prominent issue, capped at 2048 for outputs.
- Coding and Math Performance: GPT-4o reportedly excels in coding tasks and has better reasoning capabilities, which suggests an improvement over its predecessors. Users debating these capabilities seem to find it better at logic reasoning and potentially at solving math problems.
- Concerns Over Model Accessibility and Pricing: A prevalent concern was the model's accessibility and price-point, particularly about voice integration being potentially revolutionary but also limited to those who can afford it.
- Jaded and Optimistic Perspectives: Discussions depicted a divide among users, with some criticizing OpenAI’s approach to monopolizing the chatbot and AI market, while others found significant value in the updates, especially in real-time language model implementations and integration.
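The linked tiktoken sync added the new o200k_base encoding for GPT-4o; here is a hedged sketch of comparing it against the older cl100k_base, assuming a tiktoken version recent enough to know gpt-4o. The sample text is illustrative.

```python
import tiktoken

enc_4o = tiktoken.encoding_for_model("gpt-4o")       # o200k_base
enc_4t = tiktoken.encoding_for_model("gpt-4-turbo")  # cl100k_base

text = "Grüß Gott! Große Sprachmodelle komprimieren deutschen Text jetzt besser."
print(len(enc_4o.encode(text)), "tokens with o200k_base")
print(len(enc_4t.encode(text)), "tokens with cl100k_base")
```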
- Improving GPT 4 Function Calling Accuracy: Join our Discord Community and check out what we're building! We just published Part 2 of the blog comparing gpt-4-turbo vs opus vs haiku vs sonnet . Introduction to GPT Function Calling Larg...
- Tweet from Wenhu Chen (@WenhuChen): Big News! Meet our strongest fully open-source 7B-LLM Neo. We release its 4.7T pre-training data Matrix and entire codebase at MAP-Neo! 1. Neo-7B beats the existing fully open-source models like OL...
- Benchmarking Benchmark Leakage in Large Language Models: Amid the expanding use of pre-training data, the phenomenon of benchmark dataset leakage has become increasingly prominent, exacerbated by opaque training processes and the often undisclosed inclusion...
- mradermacher/llama-3-cat-8b-instruct-GGUF · Hugging Face: no description found
- refuelai/Llama-3-Refueled · Hugging Face: no description found
- ggml-model-Q4_K_M.gguf · 01-ai/Yi-1.5-34B-Chat at main: no description found
- cambioml: no description found
- Cats Animals GIF - Cats Animals Reaction - Discover & Share GIFs: Click to view the GIF
- oobabooga benchmark: no description found
- Introducing GPT-4o: OpenAI Spring Update – streamed live on Monday, May 13, 2024. Introducing GPT-4o, updates to ChatGPT, and more.
- MeeseeksAI/src/agents.py at 2399588acdee06cff4af04ca091b1ab5c71580b8 · interstellarninja/MeeseeksAI: A framework for orchestrating AI agents using a mermaid graph - interstellarninja/MeeseeksAI
- GitHub - Potatooff/Le-Potato: Simple. elegant LLM Chat Inference: Simple. elegant LLM Chat Inference. Contribute to Potatooff/Le-Potato development by creating an account on GitHub.
- Sync codebase · openai/tiktoken@9d01e56: no description found
- GitHub - interstellarninja/MeeseeksAI: A framework for orchestrating AI agents using a mermaid graph: A framework for orchestrating AI agents using a mermaid graph - interstellarninja/MeeseeksAI
- Tweet from will depue (@willdepue): i think people are misunderstanding gpt-4o. it isn't a text model with a voice or image attachment. it's a natively multimodal token in, multimodal token out model. you want it to talk fast? ...
- Chat Template support for function calling and RAG by Rocketknight1 · Pull Request #30621 · huggingface/transformers: This PR updates our support of chat templates to cover tool-use and RAG use-cases. Specifically, it does the following: Defines a recommended JSON schema spec for tool use Adds tools and documents...
- TheSkullery/llama-3-cat-8b-instruct-v1 · Hugging Face: no description found
- Tweet from Johan Nordberg (@almost_digital): I joined @elevenlabsio in January and It’s been an absolute blast working with @flavioschneide on this! This one is generated from a single text prompt “rap about never stopping to learn”, lyrics inc...
- Replete-AI/code_bagel_hermes-2.5 · Datasets at Hugging Face: no description found
Nous Research AI ▷ #ask-about-llms (48 messages🔥):
- MoE Limited to FFN Layers in Most Architectures: Discussants confirmed that in most architectures, the experts in a Mixture of Experts (MoE) are only the feedforward network (FFN) layers; attention blocks as experts have been explored, though they are not standard (a toy sketch follows this list).
- Interest in Integrating Autoregressive and Diffusion Models with MoE: The concept of combining autoregressive models (strong in text generation) with diffusion models (excellent for image tasks), using an MoE structure to potentially enhance multimodal model performance, was discussed. Skepticism exists, but the theoretical integration could offer advancements in model capabilities.
- Prompt Templates and Their Impact on LLM Performance: Dialogue clarified that using the specific prompt format a large language model was trained on can drastically affect its reliability. For example, the chatml format is used by Hermes, whereas Alpaca Prompt Format might be preferred by others.
- Handling Unsafe Behavior Input in Models: It was mentioned that built-in safety measures and "life lesson" responses in models can be manipulated with system level prompts to modify responses. Techniques to circumvent refusals and induce more direct responses were suggested, along with referenced online resources like Handling Refusals.
- Finetuning Challenges with Llama3 and Axolotl Systems: A user shared issues and solutions while attempting to fine-tune the Llama3 model with the dolphin-2.9 dataset using the Axolotl system. Problems like CUDA errors and the necessity of updating packages like flash-attn were discussed, pointing to community-driven solutions for technical bottlenecks.
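A toy sketch of the FFN-experts point above - a Mixture-of-Experts layer where only the feedforward block is replicated, with a simple top-1 router; sizes and routing are illustrative, not any specific production architecture.

```python
import torch
import torch.nn as nn


class MoEFFN(nn.Module):
    def __init__(self, d_model=64, d_ff=256, num_experts=4):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):  # x: (tokens, d_model)
        scores = self.router(x).softmax(dim=-1)
        top1 = scores.argmax(dim=-1)  # route each token to one expert
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top1 == i
            if mask.any():
                # scale by the router probability so routing stays differentiable
                out[mask] = expert(x[mask]) * scores[mask, i].unsqueeze(-1)
        return out


print(MoEFFN()(torch.randn(10, 64)).shape)  # torch.Size([10, 64])
```

Attention layers stay shared; only the FFN is duplicated per expert, matching the pattern members described.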
- RAPHAEL: Text-to-Image Generation via Large Mixture of Diffusion Paths: Text-to-image generation has recently witnessed remarkable achievements. We introduce a text-conditional image diffusion model, termed RAPHAEL, to generate highly artistic images, which accurately por...
- ortho_cookbook.ipynb · failspy/llama-3-70B-Instruct-abliterated at main: no description found
- Refusal in LLMs is mediated by a single direction — AI Alignment Forum: This work was produced as part of Neel Nanda's stream in the ML Alignment & Theory Scholars Program - Winter 2023-24 Cohort, with co-supervision from…
Nous Research AI ▷ #rag-dataset (5 messages):
- Revealing ChatQA: An Innovator in Conversational QA: A recent Arxiv submission introduces ChatQA, a QA model line that surpasses GPT-4 in conversational accuracy by using a two-stage instruction tuning and a cost-effective dense retriever. ChatQA-70B outperforms GPT-4 with a score of 54.14 versus 53.90 across various datasets, offering a cheaper alternative without the need for synthetic data from GPT models.
- IBM/RedHat's Novel Training Approach: IBM and RedHat are collaborating on a new project that innovates LLM training by using a larger model to generate synthetic datasets without full retraining. The process, detailed on GitHub, employs taxonomies for curriculum building and leverages powerful LLMs like Granite and Merlinite.
- Framework for Enhanced Model Training Introduced: A deeper dive into IBM/RedHat's project reveals a scheduled information enrichment process for LLMs. Contributors can format and submit data weekly, which after curation, is integrated into models like Granite and Merlinite to incrementally enhance their knowledge base.
- ChatQA: Building GPT-4 Level Conversational QA Models: In this work, we introduce ChatQA, a family of conversational question answering (QA) models that obtain GPT-4 level accuracies. Specifically, we propose a two-stage instruction tuning method that can...
- InstructLab: InstructLab has 10 repositories available. Follow their code on GitHub.
Nous Research AI ▷ #world-sim (22 messages🔥):
- WorldSim Spotlighted as Top Business Simulator: Members discussed the effectiveness of WorldSim as a business and startup simulator, with proprietary highlighting its strength as an everything simulator.
- Join the WebSim Adventure: Members actively engaged in WebSim AI simulations, sharing links to specific simulations like hidden catgirl and inviting others to build bases at join WebSim.
- Twitter Buzz on Simulation Gaming: Links to Twitter posts showing enthusiasm for simulation-based gaming were shared, indicating a broader community interest. Example tweets can be found here and here.
- Technical Challenges Reported in WorldSim: Issues were noted in the functionality of WorldSim, including problems with context retention, command execution, and interface bugs.
- Philosophy and WorldSim Salon Proposal: A member proposed forming a philosophy and websim WorldSim chat group, gauging interest for collaborative discussions in a salon-style setting.
- generative.ink/chat/: no description found
Latent Space ▷ #ai-general-chat (94 messages🔥🔥):
- New Infrastructure Discussion on Substack: A member from Singapore started a conversation about potential new infrastructures for AI agents, sharing their notes on Substack. Interested parties are invited to discuss and collaborate on these emerging services. [Check out the early notes here](https://sweekiat.substack.com/p/d8726e73-e717-4599-81a3-5eb82e48f9c9).
- Falcon 2 Model Unveiled: Falcon 2 LLM, described as multilingual and multimodal, has been released, outperforming competitors like Meta's Llama 3 and Google's Gemma 7B according to independent verifications. More enhancements like 'Mixture of Experts' are planned. [Explore Falcon 2's capabilities](https://falconllm.tii.ae/falcon-2.html).
- Discussion on GPT-4o: Active discussions and updates about the newly announced GPT-4o include various technical speculations, its capacities, and application benchmarks. The community is keen on exploring GPT-4o's features in API access and its performance improvements. [Learn more about GPT-4o](https://openai.com/index/hello-gpt-4o/).
- AI Security and Career Opportunities: The conversation around AI security as a potential career path materialized, with discussions about its realism and worthiness within the AI and cybersecurity intersections. Participants advised on the importance of self-research and pointed towards resources like the RSA Conference for further exploration.
- Training Collaboration for OpenELM: An announcement about a collaborative effort to train the OpenELM model from scratch using PyTorch/MPS piqued interest. This initiative is open for community collaboration and aims at iterative training with incremental dataset addition. [Explore OpenELM project collaboration](https://github.com/openai/openelm).
- Falcon LLM: Generative AI models are enabling us to create innovative pathways to an exciting future of possibilities - where the only limits are of the imagination.
- Tweet from Robert Lukoszko — e/acc (@Karmedge): I am 80% sure openAI has extremely low latency low quality model get to pronounce first 4 words in <200ms and then continue with the gpt4o model Just notice, most of the sentences start with “Sure...
- Tweet from Jim Fan (@DrJimFan): I know your timeline is flooded now with word salads of "insane, HER, 10 features you missed, we're so back". Sit down. Chill. <gasp> Take a deep breath like Mark does in the demo &l...
- Tweet from Mark Cummins (@mark_cummins): Llama 3 was trained on 15 trillion tokens (11T words). That’s large - approximately 100,000x what a human requires for language learning
- Tweet from Greg Brockman (@gdb): GPT-4o can also generate any combination of audio, text, and image outputs, which leads to interesting new capabilities we are still exploring. See e.g. the "Explorations of capabilities" sec...
- Tweet from lmsys.org (@lmsysorg): Breaking news — gpt2-chatbots result is now out! gpt2-chatbots have just surged to the top, surpassing all the models by a significant gap (~50 Elo). It has become the strongest model ever in the Are...
- Something to do something: Click to read Something to do something, by sweekiat, a Substack publication. Launched 2 years ago.
- Bloomberg - Are you a robot?: no description found
- Tweet from tweet davidson 🍞 (@andykreed): ChatGPT voice is…hot???
- Tweet from Mark Cummins (@mark_cummins): Up next is code. Code is a very important text type, and the amount of it surprised me. There’s 0.75T tokens of public code. Total code ever written might be as much as 20T, though much of this is pri...
- AI for Engineers | Latent Space | swyx & Alessio | Substack: a 7 day foundational course for prospective AI Engineers, developed with Noah Hein. NOT LIVE YET - we are 5/7 complete. Sign up to get it when it releases! Click to read Latent Space, a Substack publi...
- Sync codebase · openai/tiktoken@9d01e56: no description found
- Tweet from Justin Uberti (@juberti): Had a chance to try the gpt-4o API from us-central and text generation is quite fast. Comparing to http://thefastest.ai, this perf is 5x the TPS of gpt-4-turbo and similar to many llama-3-8b deployme...
- Tweet from Jake Colling (@JacobColling): @simonw @OpenAI Using the model `gpt-4o` seems to work for my API access
- Tweet from Siqi Chen (@blader): this will prove to be in retrospect by far the most underrated openai event ever openai casually dropping text to 3d rendering in gpt4o and not even mentioning it (more 👇🏼)
- Falcon 2 | Hacker News: no description found
Latent Space ▷ #ai-announcements (1 messages):
- OpenAI Event Pre-Game Scheduled: A watch party for an OpenAI event is planned for tomorrow, May 13th, starting at 9:30 AM. Join the pre-game in Discord channel half an hour before the event.
Link mentioned: Join the Latent Space (née /dev/invest) Discord Server!: Check out the Latent Space (née /dev/invest) community on Discord - hang out with 3747 other members and enjoy free voice and text chat.
Latent Space ▷ #llm-paper-club-west (710 messages🔥🔥🔥):
- Open AI Spring Event Watch Party Initiated: Members of the Discord community gathered to view and discuss the OpenAI Spring Event, with an invitation for members to share their predictions. However, several encountered audio issues during the live stream, leading to suggestions of restarting the connection.
- Tech Sleeves Rolled for Apple and GPT-4o Speculations: During the event watch party, discourse veered into Apple's technological strategies and the potential implications of Google's negotiations concerning iOS 18. Speculation arose about whether Apple was equipped to run sufficiently large models on its devices.
- GPT-4o Takes the Spotlight with Free Access: In a headline revelation, it was disclosed that GPT-4o is now accessible for free - a first for a frontier model. The announcement was complemented by discussion on Twitter, with particular attention paid to model integration strategies, including potential impacts on mobile integrations.
- Event Streaming Woes and Technical Troubles: Viewers expressed frustration with technical difficulties during the streaming, ranging from choppy video to audio issues. These disruptions led to continuous adjustments and feedback among members trying to resolve the issues for a smoother viewing experience.
- Community Engages with Practical and Predictive Conversations: As the event unfolded, members shared practical links to watch the event uninterrupted, and discussions ensued about the capabilities and future of GPT-4o and its integration into everyday devices and platforms. The conversations reflected both excitement and skepticism about the current and future applications of AI as unveiled during the event.
- Tweet from Karma (@0xkarmatic): "An ASR model, an LLM, a TTS model… are you getting it? These are not three separate model: This is one model, and we are calling it gpt-4o." Quoting Andrej Karpathy (@karpathy) They are r...
- Twitch: no description found
- Mechanical Turk - Wikipedia: no description found
- GPT-4o: There are two things from our announcement today I wanted to highlight. First, a key part of our mission is to put very capable AI tools in the hands of people for free (or at a great price). I am...
- Tweet from Jared Zoneraich (@imjaredz): gpt-4o blows gpt-4-turbo out of the water. So quick & seemingly better answer. Also love the split-screen playground view from @OpenAI
- Tweet from Olivier Godement (@oliviergodement): I haven't tweeted much about @OpenAI announcements, but I wanted to share a few reflections on GPT-4o as I have not been mind blown like that for a while.
- Introducing GPT-4o: OpenAI Spring Update – streamed live on Monday, May 13, 2024. Introducing GPT-4o, updates to ChatGPT, and more.
- Tweet from Greg Brockman (@gdb): Introducing GPT-4o, our new model which can reason across text, audio, and video in real time. It's extremely versatile, fun to play with, and is a step towards a much more natural form of human-...
- Tweet from Brad (@brad_agi): 50% cheaper isn't even competitive. Source: https://artificialanalysis.ai/
- Tweet from bdougie on the internet (@bdougieYO): ChatGPT saying it looks like I am in a good mood.
- Tweet from William Fedus (@LiamFedus): GPT-4o is our new state-of-the-art frontier model. We’ve been testing a version on the LMSys arena as im-also-a-good-gpt2-chatbot 🙂. Here’s how it’s been doing.
- Tweet from Sam Altman (@sama): it is available to all ChatGPT users, including on the free plan! so far, GPT-4 class models have only been available to people who pay a monthly subscription. this is important to our mission; we wan...
- GitHub - BasedHardware/OpenGlass: Turn any glasses into AI-powered smart glasses: Turn any glasses into AI-powered smart glasses. Contribute to BasedHardware/OpenGlass development by creating an account on GitHub.
- Sync codebase · openai/tiktoken@9d01e56: no description found
- Tweet from Greg Brockman (@gdb): We also have significantly improved non-English language performance quite a lot, including improving the tokenizer to better compress many of them:
Perplexity AI ▷ #general (674 messages🔥🔥🔥):
- GPT-4o Sparks Excitement and Speculation: The introduction of GPT-4o has stirred significant interest among users, with discussions focusing on its enhanced speed, lower costs, and multimodal capabilities. There's enthusiasm about its potential integration into Perplexity, with users eagerly anticipating its addition and speculating on the impact of its advanced features on current AI applications.
- Opus Use Limitations Frustrate Users: Multiple users express dissatisfaction with Perplexity's daily usage limits on powerful models like Claude 3 Opus, revealing a strong demand for more generous access terms. The limitations have led some to consider alternative platforms, although the unique strengths of Perplexity's offerings keep many loyal.
- Privacy Concerns in AI Adoption: In the context of selecting AI services, discussions highlight a strong user preference for platforms that prioritize privacy. Despite the inherent challenges in securing complete privacy when using cloud-based AIs, users advocate for choosing providers that make notable efforts to protect user data.
- Perplexity's Multi-Model Edge and User Preferences: The value of Perplexity leveraging multiple AI models, including ChatGPT and Claude 3 Opus, is emphasized, with users appreciating the ability to switch between different models based on task requirements. This flexibility is contrasted with other platforms that might offer fewer options or require more involvement to navigate.
- Technical Discussions Indicate Diverse User Base and Needs: Users engage in technical discussions around topics such as context window sizes and the implementation details of AI models, indicating a community with a wide range of uses for AI, from casual inquiries about daily limits to deeper explorations into the functionality of specific AI features.
- Tweet from Ina Fried (@inafried): A couple tidbits I've confirmed as well. 1) The mysterious GPT2-chatbot that showed up on benchmark sites was GPT-4o. 2) OpenAI did desktop version first for Mac because "we're just priori...
- gpt-tokenizer playground: no description found
- More than an OpenAI Wrapper: Perplexity Pivots to Open Source: Perplexity CEO Aravind Srinivas is a big Larry Page fan. However, he thinks he's found a way to compete not only with Google search, but with OpenAI's GPT too.
- Introducing GPT-4o: OpenAI Spring Update – streamed live on Monday, May 13, 2024. Introducing GPT-4o, updates to ChatGPT, and more.
- Two GPT-4os interacting and singing: Say hello to GPT-4o, our new flagship model which can reason across audio, vision, and text in real time.Learn more here: https://www.openai.com/index/hello-...
- Tweet from Mckay Wrigley (@mckaywrigley): This demo is insane. A student shares their iPad screen with the new ChatGPT + GPT-4o, and the AI speaks with them and helps them learn in *realtime*. Imagine giving this to every student in the wor...
- YesChat.ai Pricing Plan: no description found
- Introducing GPT-4o: OpenAI’s new flagship multimodal model now in preview on Azure | Microsoft Azure Blog: OpenAI, in partnership with Microsoft, announces GPT-4o, a groundbreaking multimodal model for text, vision, and audio capabilities. Learn more.
Perplexity AI ▷ #sharing (21 messages🔥):
- Exploring Career Journey in AI: Alexandr Yarats discusses his progression from Yandex to Google, and now as Head of Search at Perplexity AI. His journey underscores the intense yet rewarding path in the tech industry, culminating in his current role focusing on developing AI-powered search engines.
- Diverse Inquiries on Perplexity AI Platform: Users shared various searches on Perplexity AI ranging from topics about Eurovision 2024 to Bernoulli's fallacy. Each link directs to a specific query result, showcasing the platform's wide usage for different information needs.
- Reminder to Enable Shareable Threads: Perplexity AI reminded users to ensure their threads are shareable, providing a step-by-step guide linked in the Discord message. This indicates a focus on community collaboration and information sharing within the platform.
Link mentioned: Alexandr Yarats, Head of Search at Perplexity – Interview Series: Alexandr Yarats is the Head of Search at Perplexity AI. He began his career at Yandex in 2017, concurrently studying at the Yandex School of Data Analysis. The initial years were intense yet rewarding...
Perplexity AI ▷ #pplx-api (4 messages):
- Request for Perplexity Tutorial: A user asked for a tutorial on Perplexity. Another user responded with a link to a deep dive tutorial, but the link provided redirects to a non-functional Discord path, showing a placeholder as `<<>>`.
- Emojis in Use: Two different messages from the same user included emojis, one labeled as `wlcm` and the other as `gem_2`, possibly indicating different contexts or sentiments in a non-English conversation (specifically Russian).
HuggingFace ▷ #general (389 messages🔥🔥):
- Exploration of Open Source LLMs and Platforms: Discussion covered open-source large language models (LLMs) similar to Llama 3 and alternatives such as Mistral. It was suggested that platforms like you.com could be used to try these models.
- Unlocking Potential in Meeting Transcripts: A user shared their strategy for chunking meeting transcripts by speaker change and creating embeddings, but faced low similarity scores between interactions. The community was asked for better solutions or insights.
- Modifying Diffusion Pipelines with Safety Features Disabled: Code sharing took place where the StableDiffusionPipeline and DiffusionPipeline were modified to disable safety checks by setting `safety_checker` to `None` and `requires_safety_checker` to `False` (a minimal sketch follows this list).
- Interest in Employment and Collaborative Projects: A member expressed interest in working with the team, citing experience in frontend and blockchain development combined with AI.
- Optimization and Performance Discussion Observed: Various inquiries were made about optimizing deep learning models, including advice on batch sizes and use of GPU resources, to maximize computational efficiency.
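For readers who want to reproduce the safety-checker pattern above, here is a minimal sketch using the standard diffusers API; the checkpoint id is illustrative, and the exact pipeline shared in chat may have differed.

```python
# Minimal sketch: loading a StableDiffusionPipeline with safety checks disabled.
# The checkpoint id is illustrative; substitute whichever model you use.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    safety_checker=None,            # drop the safety checker module entirely
    requires_safety_checker=False,  # suppress the warning about its absence
)
image = pipe("a watercolor painting of a lighthouse").images[0]
image.save("out.png")
```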
- Gryphe/Tiamat-8b-1.2-Llama-3-DPO · Hugging Face: no description found
- 👾 LM Studio - Discover and run local LLMs: Find, download, and experiment with local LLMs
- HuggingChat: Making the community's best AI chat models available to everyone.
- Easily Train Models with H100 GPUs on NVIDIA DGX Cloud: no description found
- Will Smith Chris Rock GIF - Will Smith Chris Rock Jada Pinkett Smith - Discover & Share GIFs: Click to view the GIF
- Excuse Me Hands Up GIF - Excuse Me Hands Up Woah - Discover & Share GIFs: Click to view the GIF
- Introducing GPT-4o: OpenAI Spring Update – streamed live on Monday, May 13, 2024. Introducing GPT-4o, updates to ChatGPT, and more.
- EurekAI: no description found
- Getting Started With Hugging Face in 15 Minutes | Transformers, Pipeline, Tokenizer, Models: Learn how to get started with Hugging Face and the Transformers Library in 15 minutes! Learn all about Pipelines, Models, Tokenizers, PyTorch & TensorFlow in...
- TikTok - Make Your Day: no description found
HuggingFace ▷ #today-im-learning (3 messages):
- Exploring GenAI User Interface Innovations: A YouTube video shared offers insights into the user experience with Generative AI in medical applications, featuring multimodal interactions and future plans including Retrieval Augmented Generation (RAG). Highlighted features include cost-conscious model accessibility and containerized applications. Watch the video here.
- Decoding Neural Network Initialization: The resource from deeplearning.ai provides an intuitive explanation on the importance of correct parameter initialization in neural networks to prevent the problems of exploding and vanishing gradients. To explore detailed steps and methodologies in neural network training, visit deeplearning.ai's guide here.
- Advancing Image Generation with Jax and TPUs: A user discusses their project to adapt the PyTorch implementation of the Visual AutoRegressive (VAR) model for TPU acceleration using the Jax library Equinox, noting improvements in several metrics over traditional models. Details on the VAR approach and its superiority in image generation can be found in this research paper and the Equinox library on GitHub.
- AI Notes: Initializing neural networks - deeplearning.ai: In this post, we'll explain how to initialize neural network parameters effectively. Initialization can have a significant impact on convergence in training deep neural networks...
- Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction: We present Visual AutoRegressive modeling (VAR), a new generation paradigm that redefines the autoregressive learning on images as coarse-to-fine "next-scale prediction" or "next-resolutio...
- GitHub - patrick-kidger/equinox: Elegant easy-to-use neural networks + scientific computing in JAX. https://docs.kidger.site/equinox/: Elegant easy-to-use neural networks + scientific computing in JAX. https://docs.kidger.site/equinox/ - patrick-kidger/equinox
HuggingFace ▷ #cool-finds (10 messages🔥):
- Phi-3 Optimized for Smartphones: Phi-3 has shown promising performance on low-power devices like smartphones. Details are available in a comprehensive study by a large team of authors including Marah Abdin, accessible here on arXiv.
- Deep Dive into Deep Learning: A new resource for understanding deep learning basics, UDL Book, is highlighted as a particularly useful educational tool.
- Initiating Better with AI Notes: deeplearning.ai offers insights on neural network weights initialization to combat issues like exploding/vanishing gradients, crucial for effective model training.
- Visualizing LLM Effects: Explore a new interactive visualization for better understanding Large Language Models (LLM) at this link.
- Reinventing Antibody Development with RL: An innovative approach using reinforcement learning (RL) in antibody development has been described, improving the potential for targeted therapies. More information can be found in this ScienceDirect article.
- LLM Visualization: no description found
- Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone: We introduce phi-3-mini, a 3.8 billion parameter language model trained on 3.3 trillion tokens, whose overall performance, as measured by both academic benchmarks and internal testing, rivals that of ...
- Understanding Deep Learning: no description found
- AI Notes: Initializing neural networks - deeplearning.ai: In this post, we'll explain how to initialize neural network parameters effectively. Initialization can have a significant impact on convergence in training deep neural networks...
- 3D Diffusion Policy: This paper introduces 3D Diffusion Policy (DP3), a visual imitation learning algorithm that masters diverse visuomotor tasks.
HuggingFace ▷ #i-made-this (7 messages):
- Multilingual AI Storyteller Launched: A new AI-powered storyteller, supporting English, Malay, Chinese, and Tamil, has been released. Check it out at alkisah-ai by ikmalsaid.
- AI Tool for Quranic Posters: An AI tool that creates beautiful posters based on verses from the Holy Quran was developed, but the Space is currently inactive due to no activity. More about it can be found here.
- OCR Toolkit Introduced: A versatile OCR framework has been developed that allows integration with different OCR technologies like DocTr, PaddleOCR, and Google Cloud Vision. The developer shared the GitHub repo for community contributions at ocrtoolkit on GitHub.
- Finetuning Llama Variants for Token Classification: Llama model variants have been finetuned for token classification and uploaded to the 🤗 Model Hub, focusing on the `conll2003` dataset. Check the collection of finetuned models at LlamaForTokenClassification by SauravMaheshkar (a usage sketch follows this list).
- Building AI-Driven OCR Quality Classifiers: A new approach has been taken to use small encoders for classifying document quality, which proved efficient for identifying noisy or clean texts in the PleIAs dataset. Explore the models at OCR Quality Classifiers by pszemraj.
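As a rough illustration of how such finetuned checkpoints are typically consumed, here is a hedged sketch using the transformers pipeline API; the repo id below is a placeholder, not the actual collection entry.

```python
# Hypothetical sketch: running a finetuned token-classification checkpoint.
# The model id is a placeholder for one of the collection's checkpoints.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="SauravMaheshkar/llama-token-classification",  # placeholder repo id
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)
print(ner("Hugging Face is based in New York City."))
```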
- OCR Quality Classifiers - a pszemraj Collection: no description found
- Kalam AI - a Hugging Face Space by ikmalsaid: no description found
- LlamaForTokenClassification - a SauravMaheshkar Collection: no description found
- How To Create Your Own AI Discord Chat Bot With Web Search: Git repo: https://github.com/ssimpson91/newsChan. You will need the following packages: NodeJS v18, Python 3.10 or above. Run these commands in your terminal to in...
- GitHub - ajkdrag/ocrtoolkit: Experiment and integrate with different OCR frameworks seamlessly: Experiment and integrate with different OCR frameworks seamlessly - ajkdrag/ocrtoolkit
HuggingFace ▷ #reading-group (2 messages):
- Introducing YOCO, a Novel Architecture: A member shared a cool read on arXiv about YOCO, a new decoder-decoder architecture for large language models that efficiently caches key-value pairs once. This design notably reduces GPU memory requirements while maintaining global attention capabilities and speeds up the prefill stage.
Link mentioned: You Only Cache Once: Decoder-Decoder Architectures for Language Models: We introduce a decoder-decoder architecture, YOCO, for large language models, which only caches key-value pairs once. It consists of two components, i.e., a cross-decoder stacked upon a self-decoder. ...
HuggingFace ▷ #computer-vision (6 messages):
- Exploring Class Condition Diffusion with UNet: A member shared their experiments with class condition diffusion using UNet and sought similar resources for latent diffusion models. They referenced a UNet diffusion course on HuggingFace.
- Struggling with YOLOv1 on Custom Dataset: A user expressed difficulties in implementing YOLOv1 from scratch on a custom dataset for educational purposes. They are curious about fixing issues with their implementation, which also involves a mini YOLO version with a ResNet backbone and a single bbox.
- Stable Diffusion Experiments Echoed: Another member highlighted their work with Stable Diffusion, citing resources on using the diffusers library from HuggingFace. They pointed to a detailed explanation on customizing the image generation pipeline with Stable Diffusion models.
- Hugging Face - Learn: no description found
- Stable Diffusion with 🧨 Diffusers: no description found
HuggingFace ▷ #NLP (7 messages):
- Challenges with Transcript Chunking: A member is seeking advice on efficiently chunking meeting transcripts to gather actionable insights using LLMs, aiming to optimize costs by making fewer LLM calls. They mentioned current methods yield low similarity scores (around 0.45) between chunks.
- Suggestion on Text Chunk Retrieval: Discussion involves not expecting high similarity scores between consecutive messages; suggested method includes fetching neighboring chunks of relevant text to maintain context.
- DMs Not Preferred by Some Members: A participant explicitly stated they do not accept direct messages (DMs), emphasizing public discussion.
- Approach for Evaluating Retriever Components: It was advised to prepare a gold dataset and benchmark retrieval components using different configurations like chunk size and overlap, with mean reciprocal rank as a recommended metric (a minimal sketch follows this list).
- Difficulty Integrating Custom Tokenizer with Transformer: A member shared issues encountered when integrating a custom Hugging Face tokenizer with a transformer, referencing a 2021 Hugging Face tutorial (view video). They reported errors suggesting a format mismatch according to ChatGPT advice.
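For concreteness, here is a minimal sketch of the recommended metric, assuming a single relevant chunk per query; a real benchmark would also sweep configurations such as chunk size and overlap.

```python
# Minimal sketch: mean reciprocal rank (MRR) over a gold retrieval set.
# `results` maps each query to its ranked list of retrieved chunk ids;
# `gold` maps each query to the single relevant chunk id (an assumption).
def mean_reciprocal_rank(results: dict, gold: dict) -> float:
    total = 0.0
    for query, ranked_ids in results.items():
        if gold[query] in ranked_ids:
            rank = ranked_ids.index(gold[query]) + 1  # 1-based rank
            total += 1.0 / rank
        # a miss contributes 0 to the sum
    return total / len(results)

print(mean_reciprocal_rank({"q1": ["c3", "c7"]}, {"q1": "c7"}))  # 0.5
```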
HuggingFace ▷ #diffusion-discussions (14 messages🔥):
- Diving into Diffusion Model Details: A user asked for resources on the intrinsics of diffusion models. The recommendations included DDPM and DDIM academic papers and practical resources such as a Fast.ai online course on implementing Stable Diffusion and the Hands-On Generative AI with Python book from O'Reilly for a deeper understanding of generative models.
- Getting Started with Local Inference Engines: A user queried how to develop a local inference engine for Command-R+, but was redirected to seek insights from a different, more specialized forum likely focused on NLP strategies.
- Guidance on Using Inpainting for Custom Images: To assist with using inpainting for personal images, a link to the Hugging Face Diffusers documentation was shared, detailing the process to edit specific areas of an image using model checkpoints.
- Troubleshooting Installation Issues on macOS: A user encountered problems installing `sadtalker` on macOS. Although they were directed to search Google for similar issues, the advice proved unhelpful and their problem remained unresolved.
- Creating Personalized Image Datasets: A user sought advice on using their own image data sets for AI models, which led to sharing of a Hugging Face guide on creating and structuring personal image datasets for model training.
- Create a dataset for training: no description found
- Inpainting: no description found
- Practical Deep Learning for Coders - Part 2 overview: Learn Deep Learning with fastai and PyTorch, 2022
- [Bug]: ModuleNotFoundError: No module named 'torchvision.transforms.functional_tensor' torchvision 0.17 promblem · Issue #13985 · AUTOMATIC1111/stable-diffusion-webui: Is there an existing issue for this? I have searched the existing issues and checked the recent builds/commits What happened? ModuleNotFoundError: No module named 'torchvision.transforms.functiona...
- Hands-On Generative AI with Transformers and Diffusion Models: Learn how to use generative media techniques with AI to create novel images or music in this practical, hands-on guide. Data scientists and software engineers will understand how state-of-the-art gene...
LM Studio ▷ #💬-general (185 messages🔥🔥):
- Understanding Multi-GPU Setup Performance: A user shared issues with slow performance using multiple GPUs, suspecting the potential impact of PCIe 3.0 bandwidth. After discussions and troubleshooting, it was determined the motherboard was the bottleneck; upgrading to a PCIe 4.0 compatible board resolved the issue.
- Exploring Remote Configuration for LM Studio: Discussion revolved around configuring LM Studio Server's IP address for remote access. It was clarified that the server binds to all interfaces on the host machine, and replacing 'localhost' with the machine's IP resolves remote accessibility concerns (a minimal client sketch follows this list).
- Error Handling in LM Studio: Multiple users encountered error messages relating to "Failed to load model" due to insufficient memory. Suggestions included turning GPU offload off or verifying that hardware specifications meet the requirements for running larger models.
- Deployment Challenges with LMS on Linux Servers: One user faced difficulties installing LMS due to FUSE setup issues with AppImage on a Linux server. Another user provided a solution that worked on Ubuntu Server 24.04, emphasizing the community's role in problem-solving.
- GPU Memory Requirements for Local Model Management: Through various discussions, it was highlighted that effective use of LLMs typically requires substantial VRAM, with recommendations of at least 8GB+ for running larger local models. This underlines the importance of selecting adequate hardware to avoid performance bottlenecks.
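A minimal client sketch for the remote-access setup is below, assuming LM Studio's OpenAI-compatible server on its default port 1234; the IP address is a placeholder for the host machine's.

```python
# Hedged sketch: pointing an OpenAI-compatible client at a remote LM Studio
# server. 192.168.1.42 is a placeholder; use the host machine's actual IP.
from openai import OpenAI

client = OpenAI(base_url="http://192.168.1.42:1234/v1", api_key="lm-studio")
resp = client.chat.completions.create(
    model="local-model",  # LM Studio answers with whichever model is loaded
    messages=[{"role": "user", "content": "Hello from a remote client!"}],
)
print(resp.choices[0].message.content)
```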
- Chat.lmsys.org down? Current problems and status. - DownFor: Chat.lmsys.org won't load? Or, having problems with Chat.lmsys.org? Check the status here and report any issues!
- Boo Boo This Man GIF - Boo Boo This Man - Discover & Share GIFs: Click to view the GIF
- Introducing GPT-4o: OpenAI Spring Update – streamed live on Monday, May 13, 2024. Introducing GPT-4o, updates to ChatGPT, and more.
LM Studio ▷ #🤖-models-discussion-chat (92 messages🔥🔥):
- Clarifying Local Model Capabilities: A member inquired about a dedicated coding model for a personal laptop with moderate specs, and received responses clarifying that LM Studio may not support such high-speed local models on that hardware setup. Other members also noted limitations and potential workarounds with various integrations and setups.
- Exploring Text-to-Image Conversion Tools: A discussion about converting text to images highlighted tools like Stable Diffusion, comfyUI, and Automatic1111. Members shared links and their experiences with different tools, suggesting that less complex software could be beneficial for beginners.
- Understanding Model Versions and Fine-Tuning on Hugging Face: Various members discussed how models are versioned and fine-tuned on platforms like Hugging Face, pointing to the importance of reading model cards for specific datasets and training details involved. There was a specific focus on quantization and variations introduced through fine-tuning.
- Quantizing Models for Better Performance: Several members discussed the details and benefits of quantizing various models, particularly the Yi-1.5 model series. Links to specific quantized versions were shared along with usage tips for improving model performance and compatibility with specific hardware constraints.
- Dealing with Model Constraints and Context Lengths: Multiple users addressed the issues related to model context lengths and budget constraints affecting the choice of models. There were specific mentions of the limitations posed by different GPU capacities and the trade-offs necessary for running more extensive models.
- YorkieOH10/Yi-1.5-9B-Chat-Q8_0-GGUF · Hugging Face: no description found
- YorkieOH10/Yi-1.5-6B-Chat-Q8_0-GGUF · Hugging Face: no description found
- failspy/kappa-3-phi-abliterated · Hugging Face: no description found
- dranger003/c4ai-command-r-plus-iMat.GGUF · Hugging Face: no description found
- 01-ai/Yi-1.5-9B-Chat · Hugging Face: no description found
- NikolayKozloff/Meta-Llama-3-8B-Instruct-bf16-correct-pre-tokenizer-and-EOS-token-Q8_0-Q6_k-Q4_K_M-GGUF · Hugging Face: no description found
LM Studio ▷ #🧠-feedback (4 messages):
- Exploring Open Source Installer Options: A member shared their positive experience with open-source installer alternatives Innosetup and Nullsoft Installer, noting they have used both successfully in the past.
- Performance quirks with Starcoder2 on Debian: A user experimenting with starcoder2-15b-instruct-v0.1-IQ4_XS.gguf on Debian 12 noted that initial results were acceptable, but issues such as repetitive responses and off-topic answers began to occur as they tried further tasks.
- Clarification on Model Usage: A response to the above issue highlighted that instruct models like starcoder2 are optimized for single-step commands and might not be suitable for multi-step conversations, thus explaining some of the experienced oddities.
LM Studio ▷ #⚙-configs-discussion (7 messages):
- Playground Mode Requires GPU: The conversation clarified that Playground mode is GPU only and cannot run effectively on RAM + CPU alone, especially with just 4GB of VRAM.
- Warning Against Misleading Links: A warning was issued about a shortlink potentially leading to an unsafe or unrelated site, marking it as potentially deceptive.
- LLM Training Inquiry: A member inquired about the possibility of training a language model using Word files from their syllabus to facilitate question and answer sessions.
Link mentioned: Shoo Go Away GIF - Shoo Go Away Johnny Depp - Discover & Share GIFs: Click to view the GIF
LM Studio ▷ #🎛-hardware-discussion (106 messages🔥🔥):
- Llama 3 Model Performance Queries: Discussions focused on the performance of Llama 3 models running on various hardware. Users shared their experiences with different configurations, noting tok/s rates such as 0.6 tok/s and querying the use of CPUs and RAM for potential efficiency improvements.
- Hardware Bottlenecks and Optimization: Key discussions emerged around the limitations set by hardware components. Users exchanged knowledge about VRAM capacities, especially when comparing GPU performances such as Tesla P100 versus GTX 1060. Discrepancies were noticed in expected versus actual performance rates due to potential issues like CUDA version mismatches.
- Optimizing Model Load on Limited Resources: Users explored offloading techniques to manage limitations of hardware with low VRAM (2GB). There was emphasis on correctly setting the number of model layers offloaded to the GPU to prevent errors and ensure smoother model operation.
- Comparative Discussion of Running LLMs on CPU Versus GPU: Experiences shared highlight significant performance hits when running LLMs on CPUs only. Specific token rates were discussed such as 3.2 tok/s to 3.5 tok/s improvements by tweaking CPU settings.
- Exploration of Tools and Settings in LMStudio and JAN Under Various Operating Systems: Users discussed interface elements like sliders for adjusting how much of a model loads into GPU versus RAM. A consistent recommendation was the use of higher VRAM for models to prevent load failures and inadequacies in response generation.
LM Studio ▷ #🧪-beta-releases-chat (12 messages🔥):
- CodeQwen1.5: A Surprisingly Powerful 7b Model: A 7b model named CodeQwen1.5 is recommended as highly efficient for coding, performing better than DeepSeek Coder. It employs 4-bit quantization and fits within 4.18 GB, making it suitable for an RTX 3050 6GB GPU setup.
- Explore Coding Models on Huggingface: For those curious about different models' performance in coding, the Huggingface leaderboard offers a comprehensive list. Interested users can explore various models, especially those that are 7b or smaller, View Coding Leaderboard.
- Just Bug Fixes and a Small Update: The latest build mainly addresses bug fixes and includes an updated llama.cpp. No new features have been added in this particular update.
- Beware of Suspicious Links: Users should be cautious as some posts may contain suspicious links that potentially generate ad or referral revenue, such as those shortened with goo.gle.
- Community Interaction and Moderation: The community actively engages with posts, pointing out issues like potential spam which sometimes evades automatic moderation. Members contribute to maintaining the channel's integrity by flagging unusual activities.
Link mentioned: Big Code Models Leaderboard - a Hugging Face Space by bigcode: no description found
LM Studio ▷ #memgpt (4 messages):
- Request for MemGPT Expertise: A member asked for personal assistance from someone experienced with MemGPT, specifically for project-related questions.
- Attempted Help and Clarification: Another member offered help mentioning their experience with integrating MemGPT using Kobold, but later clarified that they hadn't successfully implemented it in the specific LM environment discussed.
LM Studio ▷ #amd-rocm-tech-preview (2 messages):
- GPU Upgrade Achieved: A member purchased an RX 7900 XT for 700 euros, which they believe provides more than enough power for their needs.
- Recommendations for Running Larger Models: Another member suggested that the newly purchased RX 7900 XT could handle larger models such as Command-R+ or YI-1.5 (quantized variants).
LM Studio ▷ #open-interpreter (4 messages):
- Confusion in Connecting LM Studio to OpenInterpreter: A member expressed confusion when attempting to connect LM Studio to OpenInterpreter, noticing no difference in error messages whether the server was connected or not. Their initial request for guidance was ambiguous, but they clarified they were trying to connect to a specific setup referred to as "open interperter zero one".
LM Studio ▷ #model-announcements (1 messages):
- New Yi Models Launched: The LM Studio Community has released new Yi models on their Huggingface page, including a noteworthy 34B version ideal for 24GB cards. These models are enhanced by imatrix for superior quality and available in various sizes, with detailed information on Huggingface.
- Model Details and Availability: Each Yi model, such as the 6B, 9B, and 34B, has been pre-trained on a large corpus and fine-tuned on diverse data. Full descriptions and access links are available directly on the Huggingface page.
- Quantized Versions Provided: Bartowski has provided GGUF quantization for these models, based on the `llama.cpp` release b2854, ensuring efficient usage on specific hardware configurations.
- lmstudio-community/Yi-1.5-34B-Chat-GGUF · Hugging Face: no description found
- lmstudio-community/Yi-1.5-9B-Chat-GGUF · Hugging Face: no description found
- lmstudio-community/Yi-1.5-6B-Chat-GGUF · Hugging Face: no description found
LM Studio ▷ #🛠-dev-chat (19 messages🔥):
- Cross-Vendor GPU Query Lacks Satisfactory Answer: A member inquired about implementing a Vulkan-backend for llama.cpp with LM Studio, specifically looking to utilize cross-vendor GPUs. Alternative suggestions like using a backend API were briefly discussed but didn’t resolve the issue.
- Introducing LM Studio CLI: LM Studio CLI (`lms`) was highlighted as a new feature in LM Studio 0.2.22, allowing users to load/unload models, start/stop API servers, and inspect raw LLM input and output. Details and the source code are hosted on GitHub, and comprehensive installation guidance is available on the LM Studio blog.
- Vulkan Backend Compatibility Issue Remains Unresolved: Despite the introduction of LM Studio CLI, a user experienced difficulties integrating a Vulkan-backend llama.cpp with LM Studio. It appears there’s no direct solution for this issue within the current LM Studio framework.
- Seeking a Headless LM Studio Installation Solution: Another user faced challenges installing LM Studio on a Linux cloud server due to issues with "FUSE" setup in AppImage. The community suggested using Ollama or compiling llama.cpp from the base for a headless setup as a workaround.
- General Interaction and Engagement: Members actively interacted seeking technical assistance regarding installations and potential new features, suggesting both a collaborative and problem-solving environment within the LM Studio developer community.
Link mentioned: Introducing `lms` - LM Studio's companion cli tool | LM Studio: Today, alongside LM Studio 0.2.22, we're releasing the first version of lms — LM Studio's companion cli tool.
OpenRouter (Alex Atallah) ▷ #announcements (2 messages):
- JetMoE 8B Experiencing Service Outages: The JetMoE 8B Free model is currently down due to upstream overload. All requests to this model will return a 502 error until further notice.
- Two Multimodal Models Now Available: OpenRouter has announced the availability of two multimodal models: GPT-4o and LLaVA v1.6 34B. These models can be accessed for AI applications through their platform (a call sketch follows below).
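Since OpenRouter exposes an OpenAI-compatible endpoint, calling the newly listed model looks roughly like the sketch below; the API key is a placeholder, and image inputs would be attached via the usual content-parts format.

```python
# Hedged sketch: calling GPT-4o through OpenRouter's OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="YOUR_OPENROUTER_KEY",  # placeholder
)
resp = client.chat.completions.create(
    model="openai/gpt-4o",  # model slug as listed on OpenRouter
    messages=[{"role": "user", "content": "Summarize today's model launches."}],
)
print(resp.choices[0].message.content)
```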
- JetMoE 8B by jetmoe | OpenRouter: Coming from a broad set of teams, ranging from academic to industry veterans, Jet MoE is a combined effort from MIT, Princeton, IBM, Lepton, and MyShell. This model is fully open source and trained o...
- OpenAI: GPT-4o by openai | OpenRouter: GPT-4o ("o" for "omni") is OpenAI's latest AI model, supporting both text and image inputs with text outputs. It maintains the intelligence level of [GPT-4 Turbo](/models/open...
- LLaVA v1.6 34B by liuhaotian | OpenRouter: LLaVA Yi 34B is an open-source model trained by fine-tuning LLM on multimodal instruction-following data. It is an auto-regressive language model, based on the transformer architecture. Base LLM: [Nou...
OpenRouter (Alex Atallah) ▷ #app-showcase (2 messages):
- OpenRouter API Watcher Unveiled: A tool named OpenRouter API Watcher has been introduced, which efficiently tracks changes in the OpenRouter model list, storing them in a SQLite database. It features a simple web interface and an RSS feed for updates, and minimizes overhead by querying the OpenRouter API only once every hour. Check the demo here.
- Rubik's AI Seeks Beta Testers: Rubik's AI has launched an advanced research assistant and search engine, inviting users to beta test with two months free of premium access. This premium offer includes access to models like Claude 3 Opus, GPT-4 Turbo, Mistral Large, and others, promising a substantial enhancement to research capabilities. Interested participants can explore further and sign up here.
- OpenRouter API Watcher: OpenRouter API Watcher monitors changes in OpenRouter models and stores those changes in a SQLite database. It queries the model list via the API every hour.
- Rubik's AI - AI research assistant & Search Engine: no description found
OpenRouter (Alex Atallah) ▷ #general (254 messages🔥🔥):
- Jetmoe Lacks Online Access: Jetmoe was confirmed not to have online access and was described as suitable for academic research.
- Skepticism Surrounds Anti-Fraud Updates: Discussions around anti-fraud measures highlighted concerns about personal data collection under the guise of security. Critiques addressed how additional information, like billing addresses required by some payment processors, supposedly helps in identifying fraudulent transactions. Providers like Stripe are typically used to verify and assess transaction risk.
- OpenRouter Personnel Constraints Discussed: It was pointed out that OpenRouter is maintained by a small team of only 3 people, leading to reliance on aggressive anti-fraud measures to minimize operational disruptions.
- Exploration of Embedding Model Support in OpenRouter: There's ongoing discussion about the potential for OpenRouter to support embedding models; however, no fixed roadmap for this feature exists yet, and the team is currently focused on backend improvements.
- Request for Advanced WebUI for Creating Personas: Inquiries about a WebUI capable of creating multiple customizable personas or agents for interaction were made, with suggestions to use BigAGI or the newly named OpenWebUI, though existing platforms were reported to not fully meet these needs.
- 01.AI (零一万物), a global company for AI 2.0 large-model technology and applications: no description found
- OpenAI API Endpoints | Open WebUI: In this tutorial, we will demonstrate how to configure multiple OpenAI (or compatible) API endpoints using environment variables. This setup allows you to easily switch between different API providers...
- Can Claude Read PDF? [2023] - Claude Ai: Can Claude Read PDF? PDF (Portable Document Format) files are a common document type that many of us encounter in our daily lives.
- Reddit - Dive into anything: no description found
- Reddit - Dive into anything: no description found
- Stripe | Financial Infrastructure for the Internet: Stripe powers online and in-person payment processing and financial solutions for businesses of all sizes. Accept payments, send payouts, and automate financial processes with a suite of APIs and no-c...
Modular (Mojo 🔥) ▷ #general (65 messages🔥🔥):
- Exploring Implicit Variants in Mojo: Mojo's potential incorporation of implicit variants using the pipe operator was discussed, though no commitment was made. Participants referenced Python's PEP 604, suggesting a similar approach for Mojo, which can be found here (a Python illustration follows this list).
- Nightly Builds in Public Docker Images: An inquiry about Mojo's policy on pushing nightly compiler builds into public Docker repositories was raised; however, no clear policy was provided. A workaround using `modular auth` examples was suggested to bypass website login for building images.
- Pattern Matching vs. If-Else Statements in Mojo: A detailed discourse on the implementation and utility of pattern matching in Mojo unfolded. Participants compared it to traditional if-else statements, noting that while pattern matching can be exhaustive and safer, it also requires a specific design mentality catered toward exhaustive checks.
- Discussion on Compiler Complexity between Mojo and Rust: Comparisons between Mojo's and Rust's compiler were made, highlighting Mojo’s straightforward approach which allows more focus on coding rather than dealing with documentation or compiler intricacies. Rust's complexity was noted as a significant learning curve even though it offers robust systems design and auto vectorization capabilities.
- Perceptions on SQL and ORMs vs. Programming Languages: The intuitiveness of SQL compared to ORMs and the interaction with compilers in languages like Rust led to discussions about the balance between ease of use and rigorous system requirements. Participants expressed varying levels of comfort and efficiency when working with each technology.
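For reference, the Python syntax the discussion pointed to looks like this (PEP 604, Python 3.10+); whether and how Mojo would adopt it remains speculative.

```python
# PEP 604 (Python 3.10+): unions written with the pipe operator, the style
# the discussion suggested Mojo could mirror for implicit variants.
def describe(value: int | str) -> str:
    if isinstance(value, int):
        return f"an integer: {value}"
    return f"a string: {value!r}"

print(describe(42))       # an integer: 42
print(describe("mojo"))   # a string: 'mojo'
```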
Link mentioned: PEP 604 – Allow writing union types as X | Y | peps.python.org: no description found
Modular (Mojo 🔥) ▷ #💬︱twitter (1 messages):
ModularBot: From Modular: https://twitter.com/Modular/status/1790046377613144201
Modular (Mojo 🔥) ▷ #📺︱youtube (1 messages):
- New Video Alert from Modular: Modular has released a new video! Watch it here.
Modular (Mojo 🔥) ▷ #🔥mojo (85 messages🔥🔥):
- Mojo Dereferencing Debate: A member discussed the syntax for dereferencing in Mojo, proposing a shift from '[]' to a C++ style '*', which sparked a debate. The counter-argument highlighted the simplicity and Python-like nature of the current Mojo syntax: `p[i]`, `p[]` with default arguments, and postfix composition like `p[].field`.
- Iterators and Yield in Mojo: A developer explored implementing `yield`-like iterator behavior in Mojo by manually managing an iterator's state, owing to the absence of native `yield` support. They encountered specific type errors and discussed extending support for multiple iterable structures, pointing to the lack of parametric traits as a limitation (a Python illustration of the manual-state pattern follows this list).
- Tree Sitter Grammar Contributions: A member shared about creating a tree sitter grammar fork for Mojo, which is now functioning in editors like Helix and Zed. This development piqued interest among other community members who plan to test it in additional environments like Neovim.
- Benchmarking Discussion for Mojo: The community explored the nuances of benchmarking in Mojo, discussing how to store benchmarks and whether memory usage can presently be benchmarked. Plus, a link to recent benchmarks on short string optimization was shared, comparing `InlinedString` and `String` types in Mojo.
- Understanding Ownership in Mojo: A new video detailing ownership in Mojo was shared, aimed at deepening users’ understanding. Several Python developers reflected on the relevance and translation of ownership and memory management from Python to Mojo, suggesting that more comparative examples could illuminate these concepts for newcomers.
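To make the manual-state workaround concrete, here is the same pattern sketched in Python terms (not Mojo code): without native `yield`, the iterator object carries its own position explicitly.

```python
# Illustrative Python sketch of a hand-rolled iterator: the state a generator's
# `yield` would hide is tracked explicitly on the object instead.
class Countdown:
    def __init__(self, start: int):
        self.current = start

    def __iter__(self):
        return self

    def __next__(self) -> int:
        if self.current <= 0:
            raise StopIteration
        value = self.current
        self.current -= 1
        return value

print(list(Countdown(3)))  # [3, 2, 1]
```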
- DType | Modular Docs: Represents DType and provides methods for working with it.
- Subtyping and Variance - The Rustonomicon: no description found
- Mojo🔥: a deep dive on ownership with Chris Lattner: Learn everything you need to know about ownership in Mojo, a deep dive with Modular CEO Chris LattnerIf you have any questions make sure to join our friendly...
- [Feature Request] Unify SSO between `InlinedString` and `String` type · Issue #2467 · modularml/mojo: Review Mojo's priorities I have read the roadmap and priorities and I believe this request falls within the priorities. What is your request? We currently have https://docs.modular.com/mojo/stdlib...
- Python Mutable Defaults Are The Source of All Evil - Florimond Manca: How to prevent a common Python mistake that can lead to horrible bugs and waste everyone's time.
Modular (Mojo 🔥) ▷ #performance-and-benchmarks (1 messages):
- MoString GitHub Repository Launched: A new GitHub repository named MoString has been created to explore variations over StringBuilder ideas in Mojo. The repo includes a new `optimize_memory` method that efficiently reduces memory allocation to the required levels.
- Call for Community Contributions: The creator of MoString is inviting the community to contribute various implementations to determine what might be best suited for incorporation into the Mojo standard. This initiative is seen as a community experiment to enhance Mojo's capabilities.
Link mentioned: GitHub - dorjeduck/mostring: variations over StringBuilder ideas in Mojo: variations over StringBuilder ideas in Mojo. Contribute to dorjeduck/mostring development by creating an account on GitHub.
Modular (Mojo 🔥) ▷ #nightly (64 messages🔥🔥):
- Mojo's Nightly Builds Introducing Automatic Direct Commits: The latest update to the `mojo` framework brings nightly builds that automatically push internally merged commits directly to the `nightly` branch, heralding a new chapter for ongoing development. Here's the PR with details on workflow timeout adjustments to better manage Ubuntu test hangs.
- Memory Management Concerns in Mojo List Operations: A discussion on memory management strategies for `List` in Mojo pointed to performance inefficiencies. A proposal to change how memory is pre-allocated in the `extend` method, mimicking the `append` method's approach, showed a 2000x speedup in specific benchmarks (an illustrative sketch follows this list).
- Crucial GitHub Actions Bug Affecting Displays: There is a significant issue with GitHub Actions displaying jobs as "pending" even though they've completed. This bug affects the transparency and monitoring of ongoing workflows, particularly visible with Mojo's recent commits and CI operations.
- Testing for Space Materialization of Types: A detailed conversation about the correct materialization of types in Mojo discussed potential issues like the failure to properly handle memory pointers during type transformations. This led to test failures and suggested revisions to the handling methods.
- Crash Reports and Proposals for Mojo Extensions: Crash reports were discussed relating to the handling of complex nested types in Mojo, with suggestions around improving lifetime management of types to avoid segmentation faults during operations like deep copies of multi-dimensional arrays.
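The pre-allocation proposal is easiest to see in a small Python sketch (names here are illustrative, not Mojo stdlib code): reserving capacity geometrically up front turns many per-element reallocations into at most one per `extend` call.

```python
# Illustrative sketch of geometric capacity growth in extend().
class GrowableList:
    def __init__(self):
        self.data = [None] * 4
        self.size = 0

    def _reserve(self, needed: int):
        cap = len(self.data)
        if needed > cap:
            while cap < needed:
                cap *= 2  # geometric growth amortizes reallocation cost
            new_data = [None] * cap
            new_data[: self.size] = self.data[: self.size]
            self.data = new_data

    def extend(self, items):
        items = list(items)
        self._reserve(self.size + len(items))  # at most one reallocation
        self.data[self.size : self.size + len(items)] = items
        self.size += len(items)
```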
- Usage limits, billing, and administration - GitHub Docs: no description found
- GitHub - dorjeduck/minbpe.mojo: port of Andrjey Karpathy's minbpe to Mojo: port of Andrjey Karpathy's minbpe to Mojo. Contribute to dorjeduck/minbpe.mojo development by creating an account on GitHub.
- [CI] Add timeouts to workflows by JoeLoser · Pull Request #2644 · modularml/mojo: On Ubuntu tests, we're seeing some non-deterministic timeouts due to a code bug (either in compiler or library) from a recent nightly release. Instead of relying on the default GitHub timeout of ...
- [stdlib] Delegate string comparisons to `StringRef` by siitron · Pull Request #2620 · modularml/mojo: This is a follow-up to #2409. String comparisons for StringRef are implemented. StringRef make use of memory.memcmp for all of its 6 comparisons now, hopefully this change is ok. String's and Stri...
- [stdlib] Introduce Hasher type with all necessary changes by mzaks · Pull Request #2619 · modularml/mojo: This is a draft because although the code compiles 8 tests are failing. It might be due to compiler bug. The error messages are cryptic. I don't have the "Mojo" ;) to fix them. Failed Te...
- Issues · modularml/mojo: The Mojo Programming Language. Contribute to modularml/mojo development by creating an account on GitHub.
- minbpe.mojo/mojobpe/utils/mostring/molist.mojo at main · dorjeduck/minbpe.mojo: port of Andrjey Karpathy's minbpe to Mojo. Contribute to dorjeduck/minbpe.mojo development by creating an account on GitHub.
- GitHub - dorjeduck/mostring: variations over StringBuilder ideas in Mojo: variations over StringBuilder ideas in Mojo. Contribute to dorjeduck/mostring development by creating an account on GitHub.
- GitHub - mzaks/mojo at feature/minimal-example-of-test-crash-for-new-hasher: The Mojo Programming Language. Contribute to mzaks/mojo development by creating an account on GitHub.
CUDA MODE ▷ #general (5 messages):
- Understanding GPU Memory Management: A member discussed how their laptop's GPU appears to use both dedicated and shared memory, observing that CUDA accesses shared memory when the dedicated memory is exhausted, leading to out-of-memory (OOM) errors once both are full. No further details or resources were provided on this topic.
- Performance Dip Noted with Shared Memory Usage: The same member noted a significant slowdown once shared memory comes into use, suspecting this might involve 'offloading' to CPU memory. However, there were no additional details on how to verify or manage this behavior (a small monitoring sketch follows this list).
- Direct Communication Aids in Stabilizing Discord Stage: Another member successfully contacted Discord's CEO to address stabilization issues with the Discord stage, who promised to direct the right engineer to assist. This highlights effective use of personal networks in resolving technical challenges.
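A small monitoring sketch along these lines, using PyTorch's CUDA introspection, is below; note that spilling past dedicated VRAM into shared system memory happens at the driver level and is not directly visible from these counters, which is part of why it is hard to verify.

```python
# Minimal sketch: inspecting dedicated-VRAM usage from PyTorch. When
# allocations approach total_memory, further growth may spill into shared
# (system) memory at the driver level, matching the slowdown described above.
import torch

props = torch.cuda.get_device_properties(0)
print(f"dedicated VRAM: {props.total_memory / 1e9:.1f} GB")
print(f"allocated now:  {torch.cuda.memory_allocated(0) / 1e9:.1f} GB")
print(f"reserved now:   {torch.cuda.memory_reserved(0) / 1e9:.1f} GB")
```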
CUDA MODE ▷ #triton (43 messages🔥):
- Exploration of Triton Kernels: Multiple users discussed Triton kernels, sharing resources like attorch and links to repositories such as Triton kernels and Triton index. These mentions highlight ongoing collaborations and individual contributions to optimize and expand the use of Triton for AI development.
- Sharing of Additional Learning Resources: A Lecture on Triton was shared to provide a comprehensive guide, showcased through a GitHub description at Lecture 14 on GitHub. This highlights efforts to educate more users about Triton.
- Performance Optimization Discussions: Users discussed performance enhancements involving kernels, mentioning GitHub commits like tuning Flash Attention block sizes which detailed parameter tuning for better performance. This indicates an active community working towards refining and enhancing the efficiency of their code.
- Exploring New DSLs for GPU Utilization: A new DSL named ThunderKittens, integrated within CUDA, was introduced at GitHub's ThunderKittens. This tool claims to improve GPU utilization and offers code simplicity, demonstrating continuous innovation and community interest in simplifying GPU programming.
- Queries about Triton Kernels and Contributions Advice: Users inquired about creating tutorials and contributing Triton kernels to public repositories, getting directed to consider personal repositories or platforms like triton-index for sharing optimizations. This illustrates a collaborative environment encouraging contributions and knowledge sharing.
- Tweet from Benjamin F Spector (@bfspector): (1/7) Happy mother’s day! We think what the mothers of America really want is a Flash Attention implementation that’s just 100 lines of code and 30% faster, and we’re happy to provide. We're exci...
- Lecture 14: Practitioners Guide to Triton: https://github.com/cuda-mode/lectures/tree/main/lecture%2014
- GitHub - cuda-mode/triton-index: Cataloging released Triton kernels.: Cataloging released Triton kernels. Contribute to cuda-mode/triton-index development by creating an account on GitHub.
- GitHub - zinccat/Awesome-Triton-Kernels: Collection of kernels written in Triton language: Collection of kernels written in Triton language. Contribute to zinccat/Awesome-Triton-Kernels development by creating an account on GitHub.
- GitHub - BobMcDear/attorch: A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton.: A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. - BobMcDear/attorch
- [TUTORIALS] tune flash attention block sizes (#3892) · triton-lang/triton@702215e: no description found
- GitHub - ELS-RD/kernl: Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable.: Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable. - ELS-RD/kernl
CUDA MODE ▷ #cuda (9 messages🔥):
- Exploring Efficient AI with ThunderKittens: The ThunderKittens GitHub repository introduces tile primitives for speedy kernels, aiming to simplify kernel building for AI. HazyResearch highlights its commitment to optimizing AI's computational efficiency with open-source contributions.
- Delving Into HazyResearch's Computational Optimization Work: A comprehensive blog post by HazyResearch details their journey in creating ThunderKittens. They also express a commitment to reducing AI's computational demands through projects like Based, Monarch Mixer, and FlashAttention.
- A Lighter, Faster Training Repository: HazyResearch has developed nanoGPT-TK, promoting it as the simplest and fastest method for training and fine-tuning medium-sized GPTs. It features enhancements called 'kittens' that streamline the processing.
- Understanding Memory Swizzling in CUDA: In a brief technical exchange, memory swizzling was discussed as a method to avoid memory bank conflicts in CUDA programming. The benefits are further explained in the CUDA C Programming Guide (a small arithmetic illustration follows below).
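The idea can be shown with plain index arithmetic (Python here for clarity, not CUDA code), assuming the common configuration of 32 four-byte shared-memory banks.

```python
# Illustrative arithmetic: XOR swizzling remaps columns so a warp's strided
# accesses land in distinct shared-memory banks. Assumes 32 banks of 4-byte
# words; a 32-wide row stride makes every row's column 0 hit bank 0.
BANKS = 32

def bank(row: int, col: int, row_stride: int = 32) -> int:
    return (row * row_stride + col) % BANKS

print(len({bank(r, 0) for r in range(32)}))  # 1 -> 32-way conflict

def swizzled_col(row: int, col: int) -> int:
    return col ^ (row % BANKS)  # XOR swizzle

print(len({bank(r, swizzled_col(r, 0)) for r in range(32)}))  # 32 -> conflict-free
```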
- ThunderKittens: A Simple Embedded DSL for AI kernels: good abstractions are good.
- GPUs Go Brrr: how make gpu fast?
- GitHub - HazyResearch/ThunderKittens: Tile primitives for speedy kernels: Tile primitives for speedy kernels. Contribute to HazyResearch/ThunderKittens development by creating an account on GitHub.
- GitHub - HazyResearch/nanoGPT-TK: The simplest, fastest repository for training/finetuning medium-sized GPTs. Now, with kittens!: The simplest, fastest repository for training/finetuning medium-sized GPTs. Now, with kittens! - HazyResearch/nanoGPT-TK
CUDA MODE ▷ #announcements (1 messages):
- Kernel Fusion Talk on Zoom: The discussion on real-world experiences fusing kernels will start in 7 minutes with the special speaker. It will be hosted on Zoom at this meeting link, and participants should post their questions in Discord, specifically in the channel <#1238926773216084051>.
Link mentioned: Join our Cloud HD Video Meeting: Zoom is the leader in modern enterprise video communications, with an easy, reliable cloud platform for video and audio conferencing, chat, and webinars across mobile, desktop, and room systems. Zoom ...
CUDA MODE ▷ #algorithms (1 messages):
random_string_of_character: https://arxiv.org/abs/2405.05219
CUDA MODE ▷ #beginner (14 messages🔥):
- U Illinois PMPP Series Ongoing: The 4th lecture of the U Illinois PMPP series was announced, with a reminder to keep general chatter out of the channel and a Zoom link provided for attendees. The sessions are described as weekly, catering separately to EMEA and NAM regions, and notifications are posted on a dedicated Discord server to avoid clutter.
- YouTube Playlist Available for PMPP Series: The U Illinois PMPP series lectures are available on a YouTube playlist, featuring a course on "Applied Parallel Programming", recorded in Spring 2018.
- Analogies to Simplify Concepts: One user enjoyed the lecture analogy comparing warps to platoons in the army, highlighting the use of relatable metaphors to clarify complex technical concepts in the series.
- CUDA Community on Discord Supports Integration and Discussion: Users are encouraged to share and discuss educational sessions and links without cluttering the general chat, with suggestions to use specific channels for announcements and discussions to maintain order and focus.
- Query on torch-tensorrt Compatibility and Installation: One participant asked for guidance on which versions of torch-tensorrt are compatible with specific CUDA and Torch versions, noting that installation seems to include multiple CUDA runtime versions, which may cause confusion.
- UIUC ECE408/CS483 Spring 2018 Hwu: This is a junior/senior-level undergraduate course entitled "Applied Parallel Programming" at the University of Illinois at Urbana-Champaign. It is often als...
- Join our Cloud HD Video Meeting: Zoom is the leader in modern enterprise video communications, with an easy, reliable cloud platform for video and audio conferencing, chat, and webinars across mobile, desktop, and room systems. Zoom ...
CUDA MODE ▷ #pmpp-book (1 messages):
- Advanced Scan Techniques Unveiled: PMPP Author Izzat El Hajj will discuss scan techniques on May 24th, followed by Jake and Georgii, who will explore advanced scan uses in CUDA C++ on May 25th. Interested parties can join the event at this Discord link.
Link mentioned: Discord - A New Way to Chat with Friends & Communities: Discord is the easiest way to communicate over voice, video, and text. Chat, hang out, and stay close with your friends and communities.
CUDA MODE ▷ #off-topic (5 messages):
- Seeking Help on Thermal Face Recognition: A member, cracker10, requested assistance for a college final project titled 'Thermal Face Recognition'. Specifically looking for insights, resources, or suggestions related to recognizing if two thermal images are of the same person.
- Clarification on Project's Aim: In response to a question, cracker10 clarified that the project's goal is to ascertain whether two thermal face images belong to the same person.
- Limited Assistance on Thermal Imaging: Another member, pessimistic_neko, expressed their inability to help, stating they don't have knowledge about thermal face recognition. This was humorously emphasized with a custom emoji, indicating a straightforward and light-hearted interaction.
CUDA MODE ▷ #irl-meetup (1 messages):
boxxy_ms: anyone in Toronto?
CUDA MODE ▷ #triton-puzzles (2 messages):
- Seeking Official Solutions for Validation: A member inquired about official solutions for checking the numerical accuracy and efficiency of their implementation. They later found the needed solution in a previous thread.
CUDA MODE ▷ #llmdotc (67 messages🔥🔥):
- Performance Insights on Multi-GPU Scripting: In a discussion about multi-GPU performance, a member shared their observation that 98% of CPU time was consumed waiting for GPU tasks to complete. They proposed offloading some tasks from the GPU to the CPU, suggesting the flexibility of llm.c could allow this adjustment (read the paper discussing relevant topics).
- Advancements in Gradient Accumulation and ZeRO-1: Users discussed various configurations and outcomes of utilizing ZeRO-1 for optimizer sharding, showcasing significant VRAM savings and the potential for batch size increases on GPUs such as the Nvidia A100. The main discussion and updates were tied to a GitHub pull request (see PR details here; a minimal PyTorch sketch follows this list).
- Exploring ThunderKittens for Hardware Optimization: The potential use of ThunderKittens, a project providing tile primitives for accelerating kernel operations, was highlighted as beneficial for future optimizations in llm.c. It’s noted for its low-level abstraction, which could synergize well with llm.c’s needs (more about ThunderKittens).
- Challenges with GPUs in CI Systems for llm.c: A conversation about the lack of GPUs in llm.c's CI revealed challenges in testing and validating GPU-dependent code. It was noted that GitHub's new GPU runners could help, although they are still in beta, which necessitates adjustments to GitHub plans and may incur additional costs (GitHub Action GPU Usage).
- GPU Memory Efficiency with ZeRO-1: A substantial discussion around the ZeRO-1 optimizer demonstrated a significant reduction in memory usage, making larger batch sizes viable. A commit was pushed addressing some of these aspects, further improving performance while leaving room to explore even larger batch sizes on powerful GPUs (commit details); a toy sketch of stage-1 sharding follows this list.
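To make the idea concrete, here is a minimal single-process NumPy simulation of ZeRO stage-1: each simulated rank owns only its slice of the Adam moment buffers, so optimizer-state memory per rank shrinks by a factor of `world_size`. This is an illustrative sketch, not the llm.c implementation.

```python
import numpy as np

world_size = 4
params = np.random.randn(1024).astype(np.float32)
grads = np.random.randn(1024).astype(np.float32)  # assume already all-reduced

# ZeRO-1 shards only the optimizer state: each rank keeps 1/world_size of
# the Adam moment buffers instead of a full replica.
shards = np.array_split(np.arange(params.size), world_size)
m = [np.zeros(len(s), np.float32) for s in shards]
v = [np.zeros(len(s), np.float32) for s in shards]

lr, b1, b2, eps = 1e-3, 0.9, 0.999, 1e-8
for rank, idx in enumerate(shards):  # ranks run in parallel in a real job
    g = grads[idx]
    m[rank] = b1 * m[rank] + (1 - b1) * g
    v[rank] = b2 * v[rank] + (1 - b2) * g * g
    params[idx] -= lr * m[rank] / (np.sqrt(v[rank]) + eps)
# A real run finishes with an all-gather so every rank sees the updated params.
```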
- About billing for GitHub Actions - GitHub Docs: no description found
- Run your ML workloads on GitHub Actions with GPU runners: Run your ML workloads on GitHub Actions with GPU runners
- cub::WarpLoad — CUB 104.0 documentation: no description found
- Zero Redundancy Optimizer - Stage1 by chinthysl · Pull Request #309 · karpathy/llm.c: To train much larger model variations (2B, 7B, etc), we need larger GPU memory allocations for parameters, optimizer states, and gradients. Zero Redundancy Optimizer introduce the methodology to sh...
- 2D and 3D tile divisions so that permutation coordinates can be read from threadIdx and blockIdx · Issue #406 · karpathy/llm.c: Supposedly the permutation kernels, even though they are mostly memory bound can reduce the amount of division and do thread coarsening by having a 2d or 3d grid and not have to do any division in ...
- Issues · NVIDIA/cccl: CUDA C++ Core Libraries. Contribute to NVIDIA/cccl development by creating an account on GitHub.
- GitHub - HazyResearch/ThunderKittens: Tile primitives for speedy kernels: Tile primitives for speedy kernels. Contribute to HazyResearch/ThunderKittens development by creating an account on GitHub.
- GPUs Go Brrr: how make gpu fast?
- llm.c/.github/workflows/ci.yml at 2346cdac931f544d63ce816f7e3f5479a917eef5 · karpathy/llm.c: LLM training in simple, raw C/CUDA. Contribute to karpathy/llm.c development by creating an account on GitHub.
- llm.c/train_gpt2.cu at master · karpathy/llm.c: LLM training in simple, raw C/CUDA. Contribute to karpathy/llm.c development by creating an account on GitHub.
CUDA MODE ▷ #lecture-qa (48 messages🔥):
- Font Size Fixation: The font size in an unspecified application or document was increased upon a user's request.
- Clarification and Access Provided for an Online Meeting: A user requested the meeting password because it was missing from the event details; another user provided a direct link to join the meeting here.
- Engagement on CUDA and Torch Compile: Discussions covered the effectiveness of CUDA graphs and the intricacies of `torch.compile`. Users expressed a desire for more clarity on how `torch.compile` works internally, especially with CUDA graphs; helpful resources and tutorials were shared, including ASPLOS 2024 workshops and a TorchDynamo deep dive (a minimal sketch follows this list).
- Discussion on Triton and Kernel Fusing: Users engaged in technical discussions about the benefits and strategies of kernel fusing in performance optimization, with some debating the impact of fewer kernel launches versus potential overhead.
- Requests for Further Talks and Clarifications: There was clear interest in deeper dives into `torch.compile` and Triton internals, with users requesting talks and additional documentation to better understand these complex topics.
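As a concrete starting point for those questions, here is a minimal sketch of the documented switch that makes `torch.compile` use CUDA graphs; it assumes a CUDA-capable PyTorch 2.x install.

```python
import torch

def f(x):
    # A small elementwise graph: a good fit for CUDA graph capture, since
    # kernel-launch overhead dominates the actual compute.
    return torch.nn.functional.gelu(x) * 2 + 1

# mode="reduce-overhead" asks the compiler to capture the generated kernels
# into a CUDA graph and replay it on later calls, amortizing launch costs.
compiled = torch.compile(f, mode="reduce-overhead")

x = torch.randn(4096, device="cuda")
for _ in range(3):  # the first calls trigger compilation and graph capture
    y = compiled(x)
```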
- Join our Cloud HD Video Meeting: Zoom is the leader in modern enterprise video communications, with an easy, reliable cloud platform for video and audio conferencing, chat, and webinars across mobile, desktop, and room systems. Zoom ...
- torch.compile — PyTorch 2.3 documentation: no description found
- Tensor and Operator Basics: Tensors and Dynamic neural networks in Python with strong GPU acceleration - pytorch/pytorch
- workshops/ASPLOS_2024 at master · pytorch/workshops: This is a repository for all workshop related materials. - pytorch/workshops
- Dynamo Deep-Dive — PyTorch main documentation: no description found
CUDA MODE ▷ #youtube-watch-party (5 messages):
- ECE408 Course Slides Shared: Course slides for ECE408 / CS483 / CSE408: Applied Parallel Programming are available for the Spring 2019 ZJUI section. Important announcements and course plans are posted, including exam dates and project timelines.
- CUDA Mode YouTube Watch Party Initiated: A new YouTube watch party for CUDA enthusiasts is announced, where participants view videos no longer than 1-1.5 hours and discuss them intermittently. The purpose is to facilitate learning among newcomers and practice among more experienced participants.
- Current Viewing Series - PMPP 2018 Lectures: The current video series is from the PMPP book's 2018 lectures by the author, hosted on the PMPP book YouTube channel. Discussions are encouraged every 10-15 minutes to enhance understanding and motivation to read the book.
- Session Schedule: Watch parties are scheduled every Saturday with two sessions; one at 7:30 GMT for EMEA attendees, and another at 18:00 GMT for NAM attendees. Zoom links will be provided by specified moderators.
- Future Plans for Watch Party Content: After the 18-lecture PMPP series concludes, the group may revisit earlier CUDA MODE videos or explore new, quality-vetted content on parallel processing.
Link mentioned: ECE408: Applied Parallel Programming, Spring 2019 ZJUI Section: no description found
Eleuther ▷ #general (61 messages🔥🔥):
- Exploration of Concept Frequency vs. Model Performance: Concerns are discussed regarding the research on multimodal models, emphasizing the discrepancy between "zero-shot" generalization claims and actual performance linked to concept frequency in training datasets. The discussion highlights persistent misunderstandings and misrepresentations in mainstream coverage of AI advancements.
- Incremental Growth in Generative AI: The pace of improvement in generative AI is questioned, with the same research paper suggesting that major generative models like GPT and Stable Diffusion may show less groundbreaking progress in future iterations than in the past.
- Claims and Realities of AI Understanding: A paper discussion asserts that multimodal models' performance is heavily influenced by the frequency of concepts in their training data, undermining the notion of robust 'zero-shot' generalization in these models.
- Falcon2 11B Model Unveiled: A new model, Falcon2 11B, has been introduced, trained on a significantly refined 5T web dataset; notable features include an 8k context window and improved attention mechanisms, promising better inference capabilities.
- Live Stream on AI Developments: An ongoing or upcoming live discussion about GPT-4o can be caught on this YouTube Live session, where OpenAI updates including the latest on ChatGPT are expected to be unveiled.
- No "Zero-Shot" Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance: Web-crawled pretraining datasets underlie the impressive "zero-shot" evaluation performance of multimodal models, such as CLIP for classification/retrieval and Stable-Diffusion for image gener...
- Introducing GPT-4o: OpenAI Spring Update – streamed live on Monday, May 13, 2024. Introducing GPT-4o, updates to ChatGPT, and more.
Eleuther ▷ #research (79 messages🔥🔥):
- Exploring Efficient Attention: A new study proposes an efficient method for computing attention using convolution matrices. This approach may significantly reduce computation time by leveraging Fast Fourier Transforms (FFT), but its practical applicability and comparisons to existing methods like FlashAttention are still under discussion (a toy FFT sketch follows this list).
- Depth Upscaling in LLMs Researched: Depth upscaling, a method of model improvement by layer repetition, has been referenced in research papers such as SOLAR. Detailed discussions and additional examples include works on Yi and Granite Code models, highlighting various approaches to expanding model depth.
- Hazy Research Introduces ThunderKittens: Hazy Research has developed ThunderKittens, aiming to simplify key technical implementations in AI. Their work, as detailed in their blog post, seeks to bridge the gap between complex algorithms and practical AI library implementations.
- Challenges in Data Distillation for AR Tasks: A recent proposal named Farzi aims to synthesize dense datasets into compact, highly effective sequences for training autoregressive models, achieving up to 120% of original data performance. More details and efficiency comparisons are available in their publication on OpenReview.
- Performance Comparison of Linear Attention Models Highlighted: The Linear Attention model's performance on complex evaluations like MMLU was heavily discussed, with exploration of how dataset composition affects model efficacy. The ongoing discussions emphasize the need for suitable data to unlock potential model improvements.
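To illustrate the mechanism the convolution-basis idea leans on (a toy check, not the paper's actual algorithm): multiplying by an n×n circulant matrix costs O(n log n) via the FFT instead of O(n²).

```python
import numpy as np

n = 8
c = np.random.randn(n)  # first column of a circulant matrix C
x = np.random.randn(n)

# Direct O(n^2) product: column j of C is c rolled down by j positions.
C = np.stack([np.roll(c, j) for j in range(n)], axis=1)
direct = C @ x

# FFT route, O(n log n): the circular convolution theorem gives
# C = F^{-1} diag(fft(c)) F.
via_fft = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real

assert np.allclose(direct, via_fft)
```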
- Fishing for Magikarp: Automatically Detecting Under-trained Tokens in Large Language Models: The disconnect between tokenizer creation and model training in language models has been known to allow for certain inputs, such as the infamous SolidGoldMagikarp token, to induce unwanted behaviour. ...
- Fast Exact Retrieval for Nearest-neighbor Lookup (FERN): Exact nearest neighbor search is a computationally intensive process, and even its simpler sibling -- vector retrieval -- can be computationally complex. This is exacerbated when retrieving vectors wh...
- Farzi Data: Autoregressive Data Distillation: We study data distillation for auto-regressive machine learning tasks, where the input and output have a strict left-to-right causal structure. More specifically, we propose Farzi, which summarizes an...
- State-Free Inference of State-Space Models: The Transfer Function Approach: We approach designing a state-space model for deep learning applications through its dual representation, the transfer function, and uncover a highly efficient sequence parallel inference algorithm th...
- Conv-Basis: A New Paradigm for Efficient Attention Inference and Gradient Computation in Transformers: Large Language Models (LLMs) have profoundly changed the world. Their self-attention mechanism is the key to the success of transformers in LLMs. However, the quadratic computational cost $O(n^2)$ to ...
- Yi: Open Foundation Models by 01.AI: We introduce the Yi model family, a series of language and multimodal models that demonstrate strong multi-dimensional capabilities. The Yi model family is based on 6B and 34B pretrained language mode...
- Granite Code Models: A Family of Open Foundation Models for Code Intelligence: Large Language Models (LLMs) trained on code are revolutionizing the software development process. Increasingly, code LLMs are being integrated into software development environments to improve the pr...
- SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling: We introduce SOLAR 10.7B, a large language model (LLM) with 10.7 billion parameters, demonstrating superior performance in various natural language processing (NLP) tasks. Inspired by recent efforts t...
- Memory Mosaics: Memory Mosaics are networks of associative memories working in concert to achieve a prediction task of interest. Like transformers, memory mosaics possess compositional capabilities and in-context lea...
- Subquadratic LLM Leaderboard - a Hugging Face Space by devingulliver: no description found
- FLM-101B: An Open LLM and How to Train It with $100K Budget: Large language models (LLMs) have achieved remarkable success in NLP and multimodal tasks, among others. Despite these successes, two main challenges remain in developing LLMs: (i) high computational ...
- Masked Structural Growth for 2x Faster Language Model Pre-training: Accelerating large language model pre-training is a critical issue in present research. In this paper, we focus on speeding up pre-training by progressively growing from a small Transformer structure ...
- Farzi Data: Autoregressive Data Distillation: We study data distillation for auto-regressive machine learning tasks, where the input and output have a strict left-to-right causal structure. More specifically, we propose Farzi, which summarizes...
- ThunderKittens: A Simple Embedded DSL for AI kernels: good abstractions are good.
- GPUs Go Brrr: how make gpu fast?
- Updatable Learned Index with Precise Positions: Index plays an essential role in modern database engines to accelerate the query processing. The new paradigm of "learned index" has significantly changed the way of designing index structures...
- Jean Feydy's home page: no description found
Eleuther ▷ #scaling-laws (7 messages):
- Mixed Opinions on Synthetic Data: One participant expressed a bullish view on synthetic data, while another questioned its groundbreaking status, citing hype cycles from roughly 5-7 years ago. Concerns were raised that lessons from those earlier experiences might not carry forward; although synthetic data appears promising, it comes with significant tradeoffs.
- Hype versus Reality in Synthetic Data: The discussion highlighted that while synthetic data seems like a "silver bullet" for newcomers, those who have used it extensively understand its limitations. This ongoing debate underscores the cycle of technology hype and realism.
- Exploring DNN Structures through Empirical Studies: A shared arXiv paper discusses a range of deep neural network architectures including CNNs, Transformers, and MLPs under a unified framework named SPACH, suggesting distinct behaviors as network size increases. Another study revisits MLPs, probing the limits of this foundational model, hinting at potential future scalability despite current limitations.
- A Battle of Network Structures: An Empirical Study of CNN, Transformer, and MLP: Convolutional neural networks (CNN) are the dominant deep neural network (DNN) architecture for computer vision. Recently, Transformer and multi-layer perceptron (MLP)-based models, such as Vision Tra...
- Scaling MLPs: A Tale of Inductive Bias: In this work we revisit the most fundamental building block in deep learning, the multi-layer perceptron (MLP), and study the limits of its performance on vision tasks. Empirical insights into MLPs ar...
Eleuther ▷ #interpretability-general (3 messages):
- NeurIPS Submission Call: A user expresses interest in collaborating on a last-minute submission to NeurIPS, referring to a project similar to the "othello paper."
- Investigating Model Compression Side-Effects: A discussion was initiated about the nature of features and circuits lost during model compression - whether they are non-essential or, alternatively, too specialized, possibly shedding light on the diversity of the training data set.
Eleuther ▷ #gpt-neox-dev (1 messages):
oleksandr07173: Hello
Interconnects (Nathan Lambert) ▷ #news (120 messages🔥🔥):
- GPT-4o Unveiled as a Frontier Model: GPT-4o has been introduced as the latest state-of-the-art model by OpenAI, tested on the LMSys arena performing under the alias "im-also-a-good-gpt2-chatbot". The model is described as a significant improvement in this announcement.
- Debate on GPT-4o's Coding Capabilities: Discussions indicate a perceived substantial gap in coding capabilities between GPT-4o and earlier versions, implying major improvements. The conversation hints at expectations of new benchmarks like MATH to better understand these advancements, framed by ongoing discussions featured in this blog post.
- Tokenization Developments Point to Model Enhancements: OpenAI has updated its tokenizer, as seen in this GitHub commit. The change suggests improved efficiency, likely from a larger vocabulary (a hedged token-count comparison follows this list).
- Expectations and Speculations on OpenAI's Strategic Moves: There’s broad speculation on OpenAI’s strategic directions, particularly in making GPT-4o freely available, potentially to gather more data or as a competitive move against other big tech firms like Meta. These strategic pivots are explored extensively in discussions comparing market actions and technological advancements.
- Live Demonstrations and Public Response: OpenAI conducted a live demo that attracted varied responses, from comments on its possible applications to critiques on its presentation style. The realism and utility of the showcased technologies, including their integration and user interface, are under scrutiny by the community.
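A quick way to inspect the change, assuming the `o200k_base` encoding name added in that tiktoken sync (token counts will vary by text):

```python
import tiktoken

old = tiktoken.get_encoding("cl100k_base")  # GPT-4 / GPT-4-Turbo encoding
new = tiktoken.get_encoding("o200k_base")   # encoding added for GPT-4o

text = "Tokenizer efficiency directly affects cost and effective context length."
print("cl100k_base:", len(old.encode(text)), "tokens")
print("o200k_base: ", len(new.encode(text)), "tokens")
```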
- Tweet from William Fedus (@LiamFedus): GPT-4o is our new state-of-the-art frontier model. We’ve been testing a version on the LMSys arena as im-also-a-good-gpt2-chatbot 🙂. Here’s how it’s been doing.
- Tweet from lmsys.org (@lmsysorg): Significantly higher win-rate against all other models. e.g., ~80% win-rate vs GPT-4 (June) in non-tie battles.
- Tweet from Google (@Google): One more day until #GoogleIO! We’re feeling 🤩. See you tomorrow for the latest news about AI, Search and more.
- Tweet from Bedros Pamboukian (@bedros_p): VideoFX footage from the list of examples There are 2 more, but it looks like its a WIP First look at VideoFX generations:
- Sync codebase · openai/tiktoken@9d01e56: no description found
- Tweet from Jim Fan (@DrJimFan): I stand corrected: GPT-4o does NOT natively process video stream. The blog says it only takes image, text, and audio. That's sad, but the principle I said still holds: the right way to make a vide...
- Tweet from lmsys.org (@lmsysorg): Breaking news — gpt2-chatbots result is now out! gpt2-chatbots have just surged to the top, surpassing all the models by a significant gap (~50 Elo). It has become the strongest model ever in the Are...
- Tweet from Kaio Ken (@kaiokendev1): yeah but can it moan?
Interconnects (Nathan Lambert) ▷ #ml-questions (1 messages):
- REINFORCE as a Special Case of PPO: A recent PR on the Hugging Face TRL repo shows how REINFORCE is actually a special case of PPO (a toy gradient check follows below). Detailed implementation and explanations are available in this GitHub PR, alongside the referenced paper.
Link mentioned: PPO / Reinforce Trainers by vwxyzjn · Pull Request #1540 · huggingface/trl: This RP supports the REINFORCE RLOO trainers in https://arxiv.org/pdf/2402.14740.pdf. Note that REINFORCE's loss is a special case of PPO, as shown below it matches the REINFORCE loss presented i...
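A toy check of the claim (illustrative only, not the TRL code): with a single optimization epoch the behaviour policy equals the current policy, so the importance ratio is 1 and the unclipped PPO surrogate has exactly the REINFORCE gradient.

```python
import torch

logp = torch.tensor([-1.2, -0.7, -2.1], requires_grad=True)  # log pi(a|s)
adv = torch.tensor([0.5, -0.3, 1.1])                         # advantages

# One epoch => behaviour policy == current policy => importance ratio == 1.
ratio = torch.exp(logp - logp.detach())
ppo_loss = -(ratio * adv).mean()
reinforce_loss = -(logp * adv).mean()

ppo_loss.backward()
g_ppo, logp.grad = logp.grad.clone(), None
reinforce_loss.backward()
assert torch.allclose(g_ppo, logp.grad)  # identical gradients
```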
Interconnects (Nathan Lambert) ▷ #random (5 messages):
- Praise for Chatbot Arena's Community: Members expressed admiration for the Chatbot Arena community, highlighting it as instrumental in shaping the future.
- Speculation on Open Sourcing GPT-3.5: Discussion touched on the possibility of GPT-3.5 being open-sourced. One comment humorously suggested that this would occur when "hell freezes over."
Interconnects (Nathan Lambert) ▷ #reads (11 messages🔥):
- Video Viewership on the Rise: One member's video hit 6k views in a day, while others have reached 20k views, prompting a discussion on boosting these numbers even further.
- Huggingface Video Scores Big: Another video, shared on HuggingFace, impressively racked up 150k views.
- Uploading Videos to Platform X: There's a conversation about the potential of posting videos to Platform X, with considerations about native uploads and legal permissions.
- Navigating Video Rights with Stanford: The member confirms that Stanford owns the rights to certain content, but typically does not enforce strict measures, which might allow for more flexible use.
- Plan to Sidestep Bureaucracy: The strategy involves requesting permission for personal use from Stanford and proceeding with posting the video, betting on a low likelihood of repercussions.
LAION ▷ #general (109 messages🔥🔥):
- AI's Legal Balance on Artists' Rights and Commercial Use Questioned: In an intense debate, members discussed the potential legal challenges when commercial AI services produce works that could compete with artists. Concern surrounding Midjourney, specifically, was noted because it may encourage users to mimic the styles of living artists.
- The Jurisdiction of AI and Fair Use: Some members expressed concerns about AI models potentially infringing on artists' copyrights when generating derivative works. Yet others countered by highlighting fair use protections, even suggesting that negative reviews under fair use could similarly harm an artist's commercial prospects without legal repercussion.
- Fair Use Arguments in AI-Generated Content Discussed: Opinion was clearly divided: some members believe AI-generated content that potentially impacts artists' sales must face legal scrutiny, while others argue the usage falls under fair use, invoking comparisons with review content.
- Jury's Role in Interpreting Laws Regarding AI Scrutinized: The discussion touched on jury nullification and the proper role of juries in AI-related legal cases, highlighting differences between how laws are technically supposed to be followed and the real-world functioning of the judiciary.
- Focus on AI's Computational Efficiency and Open-Source Models: A link shared discussed innovations aimed at reducing AI’s computational demand; these include various new methods and models developed to improve efficiency (click to read more).
- GPUs Go Brrr: how make gpu fast?
- deepseek-ai/DeepSeek-V2 · Hugging Face: no description found
- Introducing GPT-4o: OpenAI Spring Update – streamed live on Monday, May 13, 2024. Introducing GPT-4o, updates to ChatGPT, and more.
- Silicon Valley Tip To Tip GIF - Silicon Valley Tip To Tip Brainstorm - Discover & Share GIFs: Click to view the GIF
- Falcon 2: UAE’s Technology Innovation Institute Releases New AI Model Series, Outperforming Meta’s New Llama 3: no description found
- Bunline - v0.4 | Stable Diffusion Checkpoint | Civitai: PixArt Sigma XL 2 1024 MS full finetune on custom captions for roughly 35k images w/ max(w,h) > 1024px INSTRUCTIONS: Place the .safetensors wher...
LAION ▷ #research (5 messages):
- Voice Data Sets Need Transformation: A member emphasized the necessity to transform extensive voice data sets into tokens and highlighted the need for high-quality annotations regarding emotions and speaker attributes. They shared a link for training transformers with audio as if it was text, available here and further resources on YouTube.
- Delving into Formal Mathematics Notation: Discussion arose about the use of certain notation in formal mathematics to indicate a sequence of elements which converge, with one member clarifying the potential role of a function in this context. The function T was mentioned as a possible tool to perform sampling in such sequences.
Link mentioned: Tweet from LAION (@laion_ai): Wanna train transformers with audio as if it was text? - Here is how. :) https://youtu.be/NwZufAJxmMA https://discord.gg/6jWrFngyPe
LangChain AI ▷ #general (105 messages🔥🔥):
- LangChain Date Extraction Explained: Members discussed extracting dates and converting them to ISO format using LangChain's `DatetimeOutputParser`. Detailed code snippets for both JavaScript and Python were provided, illustrating how to implement the parser (a minimal sketch follows this list).
- Custom Date Range Parsing in LangChain: When asked about handling date ranges like "from April 1st to June 2nd," it was suggested that one could extend the `DatetimeOutputParser` to recognize and handle date ranges by modifying the `parse` method to identify and extract start and end dates separately.
- Handling Multiple Descriptions in Prompts: A query about extracting multiple market descriptions from prompts like "compare the prices between Belgium oil and Italy power" led to an explanation of using tool/function calling with LLMs in LangChain to structurally extract needed information based on a schema.
- Use Local Open-Source LLMs with LangChain: Guidance was provided on integrating local open-source LLMs like Ollama with LangChain, detailing steps from setting up the LLM, installing necessary packages, to interacting with the model.
- Streaming API Responses for Multiple Frontend Elements: A user inquired about streaming API responses for two frontend elements using a single API call. A response highlighted using Python and provided a GitHub link with a relevant example.
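A minimal Python sketch of the date parser discussed above, assuming a recent `langchain` install (the sample timestamp is illustrative):

```python
from langchain.output_parsers import DatetimeOutputParser

parser = DatetimeOutputParser()  # default format: %Y-%m-%dT%H:%M:%S.%fZ

# These instructions are appended to the prompt so the LLM answers with a
# timestamp the parser can read back.
print(parser.get_format_instructions())

# Parsing a model reply into a datetime, then normalizing to ISO format:
dt = parser.parse("2024-05-13T17:00:00.000000Z")
print(dt.isoformat())
```

Handling a range would then amount to subclassing and overriding `parse` to split the reply into start and end timestamps before delegating to the base parser.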
- Stores | 🦜️🔗 LangChain: In many different applications, having some sort of key-value storage is helpful.
- Structured chat | 🦜️🔗 LangChain: The structured chat agent is capable of using multi-input tools.
- Extracting structured output | 🦜️🔗 LangChain: Overview
- Quickstart | 🦜️🔗 LangChain: In this guide, we will go over the basic ways to create Chains and Agents that call Tools. Tools can be just about anything — APIs, functions, databases, etc. Tools allow us to extend the capabilities...
- langchain/libs/partners/chroma/pyproject.toml at master · langchain-ai/langchain: 🦜🔗 Build context-aware reasoning applications. Contribute to langchain-ai/langchain development by creating an account on GitHub.
- generative-ai/gemini/use-cases/retrieval-augmented-generation/multimodal_rag_langchain.ipynb at main · GoogleCloudPlatform/generative-ai: Sample code and notebooks for Generative AI on Google Cloud, with Gemini on Vertex AI - GoogleCloudPlatform/generative-ai
- go - Overview: go has 52 repositories available. Follow their code on GitHub.
- Example of extracting multiple values from a streamed OpenAI chat response: Example of extracting multiple values from a streamed OpenAI chat response - extract_multiple_values_from_stream.py
- Issues · langchain-ai/langchain: 🦜🔗 Build context-aware reasoning applications. Contribute to langchain-ai/langchain development by creating an account on GitHub.
- Quickstart | 🦜️🔗 LangChain: In this quickstart we'll show you how to:
- Manifest | 🦜️🔗 LangChain: This notebook goes over how to use Manifest and LangChain.
LangChain AI ▷ #share-your-work (4 messages):
- Exploring AI in Cancer Drug Discovery: A YouTube video discusses the role of Generative AI in cancer drug discovery and the need for more automated methods. Watch the full exploration here.
- Latest Updates from Index Network: Stay informed with the latest announcements or insights by checking out their latest tweet.
- Open Source Code Interpreter Launched: Obaidur-rahaman introduces a new open source project for Natural Language-Assisted Visualization & Interactive Data Analysis that's compatible with OpenAI API keys and soon, Llama 3. The project aims to securely handle and analyze confidential data, enhancing insights for enterprises, and is available on GitHub.
- Tutorial on Building Custom RAG Pipeline with LangChain and Pinecone: Zackproser is developing a detailed guide on integrating LangChain with Next.js and Pinecone to create a blog-chat feature that employs Retrieval Augmented Generation. The tutorial will cover everything from data ingestion to building an interactive chat interface, and details can be found here.
- Build a RAG pipeline for your blog with LangChain, OpenAI and Pinecone: You can chat with my writing and ask me questions I've already answered even when I'm not around
- Cancer Drug Discovery AI Agentic Workflow R&D: Many Generative AI developments exist for Drug Discovery applications. However, additional methods to further automate the process for more informed drug com...
- GitHub - obaidur-rahaman/nlavida: Natural Language-Assisted Visualization & Interactive Data Analysis (NLAVIDA): Securely handle and analyze confidential data in enterprise environments, enhancing insights and decision-making with advanced visualization.: Natural Language-Assisted Visualization & Interactive Data Analysis (NLAVIDA): Securely handle and analyze confidential data in enterprise environments, enhancing insights and decision-making ...
LangChain AI ▷ #tutorials (3 messages):
- LLM Goes Multimodal with DinoV2: A member shared a YouTube video titled "Make any Text LLM into Multimodal with DinoV2," showcasing the method to integrate vision capabilities into text-based language models. They also provided a GitHub link to a relevant notebook at DinoV2 Vision Encoder Notebook.
- "Chat with My Blog" Experience Developed: Zackproser detailed how he incorporated a chat feature into his blog that enables visitors to interact with and ask questions directly about his writings. He uses Retrieval Augmented Generation technology for this feature, and provides comprehensive resources including code for ingest, data processing, and chat interfaces in his blog post.
- Streaming with Session and History Management: Brianjack expressed difficulties integrating streaming functionality into Langchain while maintaining session and history management, stating that he successfully implemented all features except for streaming. He is looking for tutorials or assistance specifically aimed at overcoming these challenges with streaming.
- Build a RAG pipeline for your blog with LangChain, OpenAI and Pinecone: You can chat with my writing and ask me questions I've already answered even when I'm not around
- Make any Text LLM into Multimodal with DinoV2: We will take a look at how to add vision capabilities to text llm using a vision encoderhttps://github.com/githubpradeep/notebooks/blob/main/VLM.ipynbhttps:/...
LlamaIndex ▷ #blog (8 messages🔥):
- Llama 3 Powers Automatic PowerPoint Creation: An article by @naivebaesian explores using a Llama 3 RAG pipeline not only to answer questions but also to generate PowerPoint slide decks via the python-pptx library. More information here.
- Reflective Financial Agent Development Explained: Hanane Dupouy's guide on developing a financial agent capable of analyzing stock prices through reflection has been detailed, highlighting different implementation strategies, including CRITIC. Details can be found here.
- RAG for Content Moderation Setup Guide Released: @cloudraftio authored an article demonstrating how to establish a RAG pipeline that ensures user-generated images comply with content moderation standards by transforming images into text for easier matching. Further reading available here.
- Evaluating RAG Systems—A Core AI Skill: A comprehensive discussion by @kingzzm on evaluating RAG systems includes a review of four evaluation libraries—TruLens, Ragas, UpTrain, DeepEval—and their supported metrics. Full article available here.
- Llama 3 Use Cases Explored in New Hackathon Cookbook: Following a hackathon hosted by @AIatMeta, a new series of cookbooks detailing seven different use cases for Llama3 has been published, showcasing applications from basic to complex tasks. Read more about it here.
- Google Colab: no description found
- Google Colab: no description found
- llama-index-llms-openai: llama-index llms openai integration
- llama-index-multi-modal-llms-openai: llama-index multi-modal-llms openai integration
LlamaIndex ▷ #general (89 messages🔥🔥):
- Bug Fix in Condense Plus Context: The omission of a postprocessor in the `_aretrieve_context` function was initially seen as a bug, but a member clarified the issue was already resolved in the latest version of the library after they attempted to submit a pull request: "...today i find the latest version has already fixed this bug. I will upgrade my library."
- Hybrid Search Configuration Errors: A user attempted to use hybrid search with Qdrant but faced a ValueError despite having the correct configuration. Another user resolved the confusion by highlighting the need to enable hybrid search directly in the constructor: `QdrantVectorStore(..., enable_hybrid=True)` (a minimal sketch follows this list).
- Exploring the Usefulness of LlamaIndex: Members discussed the merits of LlamaIndex over alternatives, citing its ease of use, flexibility, detailed documentation, and effective abstraction layer for handling multi-platform support. Positive user feedback included: "The docs for llama-index are beautiful."
- Technical Issue in Frontend Communication: A user reported inconsistency in the frontend's display of AI responses, receiving the error "Unexpected token U". The suspected cause was a non-200 status in the responses, per another user's suggestion to check the network tab in the console.
- Understanding Query Engine Metadata Usage: A user questioned the role of metadata in the `query` method while using LlamaIndex, wondering whether it gets applied automatically or must be included explicitly. Another user clarified that metadata can be used for filtering and must be explicitly utilized, and strategies for how metadata can enhance retrieval were discussed.
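A hedged sketch of the hybrid-search fix, assuming the post-v0.10 llama-index package layout and an in-memory Qdrant client:

```python
import qdrant_client
from llama_index.vector_stores.qdrant import QdrantVectorStore

client = qdrant_client.QdrantClient(location=":memory:")  # illustrative

# The key is enabling hybrid mode in the constructor, not only at query
# time; hybrid mode also needs a sparse encoder available (e.g. the
# fastembed extra for llama-index's Qdrant integration).
vector_store = QdrantVectorStore(
    collection_name="docs",
    client=client,
    enable_hybrid=True,
)
# Indexes built over this store can then query with
# vector_store_query_mode="hybrid".
```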
- langchain/cookbook/Multi_modal_RAG.ipynb at master · langchain-ai/langchain: 🦜🔗 Build context-aware reasoning applications. Contribute to langchain-ai/langchain development by creating an account on GitHub.
- Multi-Modal Applications - LlamaIndex: no description found
- Multimodal Ollama Cookbook - LlamaIndex: no description found
- File - LlamaIndex: no description found
- Data Connectors (LlamaHub) - LlamaIndex: no description found
LlamaIndex ▷ #ai-discussion (3 messages):
- Exploring Knowledge Distillation for GPT-3.5 Judging: A blog post on Hugging Face explains the process and benefits of using knowledge distillation to enhance the accuracy and performance of finetuning GPT-3.5 as a judge. It includes a detailed step-by-step code implementation guide.
- Community Engagement on Finetuning Techniques: A member praised the recent article, highlighting a scarcity of accessible resources that guide users on how to finetune models effectively.
Link mentioned: Knowledge Distillation for Fine-Tuning a GPT-3.5 Judge: Enhancing Accuracy and Performance : no description found
OpenAccess AI Collective (axolotl) ▷ #general (30 messages🔥):
- Insights on Llama 3 Weight Differences Highlighted: An analysis comparing the weights of the instruct and base Llama 3 models showed significant changes concentrated in the K and V layers, suggesting focused adjustments during instruct tuning (view analysis); a hedged diff sketch follows this list. Freezing the K/V layers to allow stylistic tuning without losing instruct capabilities is being considered.
- Save and Checkpoint Clarifications Discussed: Discussion clarified that the checkpoint naming conventions indicate it is not an end-of-run save, which would be located in the base folder. This distinction helps in interpreting save outputs during model runs.
- Potential OpenOrca Rerun Funding Considered: A community member proposed a rerun of OpenOrca dedup on gpt-4o, offering an estimated cost and suggesting potential batch job pricing benefits. Details of the proposed project can be found on its dataset page.
- Exploring Lesser Compute Usage in AI: A plethora of projects were cited focusing on reducing AI's compute usage, featuring initiatives like Monarch Mixer, H3, and Hyena Safari, with accompanying blogs detailing these advancements (read more).
- Challenge of Academic Publishing in AI Field Noted: Delays in academic journal publications can render research outdated by the time it is published, highlighting challenges in the fast-moving field of AI research. The slow publication process contrasts with the rapid pace of state-of-the-art (SOTA) advancements.
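A hedged sketch of the kind of diff such an analysis runs (model IDs illustrative; assumes both checkpoints are downloadable and fit in memory):

```python
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B", torch_dtype=torch.bfloat16)
inst = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct", torch_dtype=torch.bfloat16)

# Relative Frobenius-norm drift per tensor; k_proj/v_proj standing out
# would support the freeze-for-style-tuning idea.
base_sd, inst_sd = base.state_dict(), inst.state_dict()
drift = {
    name: ((inst_sd[name].float() - w.float()).norm()
           / (w.float().norm() + 1e-12)).item()
    for name, w in base_sd.items()
}
for name, d in sorted(drift.items(), key=lambda kv: -kv[1])[:10]:
    print(f"{d:.4f}  {name}")
```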
- GPUs Go Brrr: how make gpu fast?
- Open-Orca/SlimOrca-Dedup · Datasets at Hugging Face: no description found
OpenAccess AI Collective (axolotl) ▷ #axolotl-dev (11 messages🔥):
- Merge Successful for Nanobitz: The code merge referenced by the user "Nanobitz" was deemed successful. There was no specific detail on what was merged.
- LLAMA3 Template Troubles in PyET: A user tried using a new template for LLAMA3 in PyET, but encountered an error suggesting confusion between 'LLAMA3' and 'LLAMA2'. They were advised to update fastchat.
- Dependency Update Dilemma: Another participant, "trojaner", noted that the project dependencies are severely outdated, listing versions for peft, accelerate, deepspeed, flash-attn, xformers, and transformers. They suggest updating all to the latest versions, except peft which needs to be installed from a repository due to a plugin issue.
OpenAccess AI Collective (axolotl) ▷ #general-help (11 messages🔥):
- FSDP with FFT remains a mystery: Community members are uncertain whether Fully Sharded Data Parallel (FSDP) works with full fine-tuning (FFT). Alternative suggestions included looking into DeepSpeed.
- AttributeError in Docker explained: An AttributeError regarding LLAMA3 appears specifically when using Docker. Recommendations to resolve this included ensuring pip dependencies are updated and trying a fresh git clone.
- Git cloning solves fastchat issue: A direct approach of git cloning resolved an issue that was not fixed by merely updating fastchat. This suggests that some commits might not be updated in certain branches.
OpenAccess AI Collective (axolotl) ▷ #axolotl-help-bot (10 messages🔥):
- Changing system_prompt in Axolotl CLI Inference Remains Unclear: A user asked whether the `system_prompt` can be changed when using `axolotl.cli.inference`. Although the query was passed to Phorm for an answer, it returned undefined with the suggestion to check back later.
- Error Converting Merged Model to GGUF: A member highlighted a FileNotFoundError during conversion of a merged model to GGUF due to the absence of matching tokenizers ['spm', 'hfft']. This points to potential issues in file structure or naming that needs addressing in future tasks or troubleshooting.
- Size Mismatch Error in Gemma Model Loading: On attempting to load a GemmaForCausalLM model, a user encountered a size mismatch error on `model.embed_tokens.weight`. The error message recommended adding `ignore_mismatched_sizes=True` to the `from_pretrained` method for debugging, indicating mismatch issues between the training and operational environments (a hedged snippet follows this list).
- Question on Merging QLORA to Base without Precision Issues: A user inquired about techniques for merging QLORA to a base configuration without encountering precision issues between fp16 and fp32. This question points to ongoing challenges in model integration and precision handling within the community.
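The suggested workaround as a hedged snippet (checkpoint name illustrative); note it only sidesteps the load error for debugging, since any mismatched tensor is freshly initialized rather than loaded:

```python
from transformers import AutoModelForCausalLM

# ignore_mismatched_sizes relaxes strict shape checking: tensors whose
# shapes differ from the checkpoint (here the embedding matrix) are
# re-initialized instead of raising a size-mismatch error.
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2b",  # illustrative checkpoint
    ignore_mismatched_sizes=True,
)
```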
Link mentioned: OpenAccess-AI-Collective/axolotl | Phorm AI Code Search: Understand code, faster.
OpenAccess AI Collective (axolotl) ▷ #axolotl-phorm-bot (9 messages🔥):
- Inquiring about Axolotl Pruning Capabilities: A user asked if Axolotl supports pruning. The response from Phorm was that the answer is undefined and suggested checking back soon for updates, with a link provided to Read more on Phorm.
- Seeking Continuous Pretraining Tips and LoRA Methods: Another query was raised regarding tips for continuous pretraining and the different LoRA methods. Similarly, the answer remains undefined, and users are advised to revisit the topic later through the same Phorm link.
- Question on Integrating qLoRA with Base: A member inquired about how to merge qLoRA into the base model; however, no direct response or information was provided in the discussed messages.
Link mentioned: OpenAccess-AI-Collective/axolotl | Phorm AI Code Search: Understand code, faster.
OpenInterpreter ▷ #general (41 messages🔥):
- Claude API Compatibility Issues: Users are experiencing problems when integrating Claude API, with reports of "goofy errors" occurring, indicating potential compatibility or configuration errors.
- Open Interpreter for Antidetect Python Automation: A user is exploring whether Open Interpreter can simplify browser automation by generating Python code from natural language instructions. This could enhance productivity by automating repetitive coding tasks.
- Local Model Performance Inquiries: Comparisons between local models like Mixtral, Phi, and Llama 3 versus GPT-4 were discussed, with GPT-4 noted for superior performance. Prompt optimization was suggested as a way to improve the effectiveness of local models.
- Speed and Efficiency of GPT-4o: Users are reporting that GPT-4o offers dramatically increased processing speeds compared to other models, achieving up to 100 tokens/s, which significantly enhances performance and cost-efficiency.
- Developments in ChatGPT and Interpreter API: There is anticipation for the ChatGPT voice conversational AI to become available via Open Interpreter API. Users are hopeful for its quick integration given its demonstrated potential in recent demos.
- no title found: no description found
- Microsoft C++ Build Tools - Visual Studio: no description found
OpenInterpreter ▷ #O1 (21 messages🔥):
- Successful Integration of OpenInterpreter with LiteLLM on Groq's Llama3: Users confirmed getting configurations like openinterpreter <> LiteLLM <> groq - llama3 working smoothly. This integration appears to be functional and operational for those who tested it.
- Troubleshooting O1 Hardware and WiFi Connection Issues: A user struggled with connection issues involving an M5 board and 01-Light wifi network setup. After several attempts including re-flashing and using a secondary device, the user still could not access the web interface to connect properly.
- Developing an App Version of the 01 Hardware: Thatpalmtreeguy discussed developing a mobile app alternative for the 01 hardware, suggesting an early app version could be found here. This approach was aimed at making development and testing more accessible.
- Awaiting TestFlight Approval for a New App: Thatpalmtreeguy also mentioned submitting an app for TestFlight approval, which could make it easier for users without Macs to contribute to testing and development.
- Customer Service Interaction over Unreceived Order: A user experienced issues with an order placed at OpenInterpreter, not having received a receipt and wishing to cancel the order. Another member recommended contacting customer support via email at help@openinterpreter.com.
OpenInterpreter ▷ #ai-content (4 messages):
- Introducing PyWinAssistant: A user shared a GitHub link to PyWinAssistant, describing it as the first open-source Large Action Model that controls human user interfaces through natural language. This tool incorporates Visualization-of-Thought and aligns with spatial reasoning in large language models.
- Demonstration of PyWinAssistant in Action: Another user confirmed successfully operating PyWinAssistant, providing a YouTube Live link to showcase its capabilities. The presentation illustrates PyWinAssistant’s real-time functionality.
Link mentioned: GitHub - a-real-ai/pywinassistant: The first open source Large Action Model generalist Artificial Narrow Intelligence that controls completely human user interfaces by only using natural language. PyWinAssistant utilizes Visualization-of-Thought Elicits Spatial Reasoning in Large Language Models.: The first open source Large Action Model generalist Artificial Narrow Intelligence that controls completely human user interfaces by only using natural language. PyWinAssistant utilizes Visualizati...
tinygrad (George Hotz) ▷ #learn-tinygrad (38 messages🔥):
- Understanding Tensor Variable Shapes: A member asked why tensors need variable shapes, citing Tinygrad Notes. This feature helps optimize compilation times by handling situations where tensor shapes change dynamically, such as with the increasing number of tokens in transformers, and prevents the need to regenerate kernels for new shapes.
- Troubleshooting Training Errors in Tinygrad: A user encountered an "AssertionError: Tensor.training should be set in the optimizer" while training a model. The solution involves setting `Tensor.training = True`, as shown in this pull request.
- Strategies for Implementing Advanced Indexing: Discussions highlighted challenges and possible strategies for implementing operations similar to `node_features[indexes[i]] += features[i]` in tinygrad. Techniques involve using one-hot encoding and matrix multiplication to aggregate features by index, as exemplified by various contributors' code snippets (a NumPy sketch of the pattern follows this list).
- Graph Neural Network Implementation Curiosity: A discussion was initiated on implementing Graph Neural Networks (GNN) in Tinygrad with a specific interest in how neighbor searches would be managed. The conversation touched upon the complexity of implementing such features compared to existing libraries like Pytorch Geometric, and potential inefficiencies of naive O(N^2) tensor operation approaches.
- Error Handling in Tinygrad: Suggestions to improve tinygrad's error messages were noted, with comparisons to Rust-style errors that propose the simplest fix so users can more easily understand how to rectify issues.
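The one-hot trick reads like this in NumPy; tinygrad Tensors support the same operations, so the pattern carries over directly (shapes and values are illustrative):

```python
import numpy as np

num_nodes = 4
features = np.array([[1., 2.], [3., 4.], [5., 6.]])  # per-item features
indexes = np.array([0, 2, 0])                        # destination node ids

# One-hot of shape (num_items, num_nodes): its transpose times the feature
# matrix performs the grouped sum in a single matmul, with no in-place
# scatter operation needed.
one_hot = np.eye(num_nodes)[indexes]
aggregated = one_hot.T @ features

# Equivalent to: for i, j in enumerate(indexes): node_features[j] += features[i]
print(aggregated)  # row 0 == [6., 8.], row 2 == [3., 4.]
```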
- optimizer shouldn't be run without training by geohot · Pull Request #4460 · tinygrad/tinygrad: no description found
- train.py: GitHub Gist: instantly share code, notes, and snippets.
- test_aggregate.py: GitHub Gist: instantly share code, notes, and snippets.
- pytorch_cluster/csrc/cuda/radius_cuda.cu at master · rusty1s/pytorch_cluster: PyTorch Extension Library of Optimized Graph Cluster Algorithms - rusty1s/pytorch_cluster
- torchmd-net/torchmdnet/extensions/neighbors/neighbors_cpu.cpp at 75c462aeef69e807130ff6206b59c212692a0cd3 · torchmd/torchmd-net: Neural network potentials . Contribute to torchmd/torchmd-net development by creating an account on GitHub.
- Neural-Turing-Machine-in-Tinygrad/NTM.py at main · shriar/Neural-Turing-Machine-in-Tinygrad: Contribute to shriar/Neural-Turing-Machine-in-Tinygrad development by creating an account on GitHub.
- GitHub - rs9000/Neural-Turing-machine: NTM in PyTorch: NTM in PyTorch. Contribute to rs9000/Neural-Turing-machine development by creating an account on GitHub.
- tinygrad/examples/whisper.py at a1940ced7746fcdf09068aadf4155e4c1e3641b8 · tinygrad/tinygrad: You like pytorch? You like micrograd? You love tinygrad! ❤️ - tinygrad/tinygrad
- Home - PyG: PyG is the ultimate library for Graph Neural Networks
Cohere ▷ #general (24 messages🔥):
- Confusion Over Cohere Billing Explained: One user had trouble understanding their billing details, specifically discrepancies in the charges shown in different views. They resolved it after realizing the discrepancy reflected amounts accrued since the last invoice.
- Command R Queries Clarified: Members discussed the impact of using Command R with web and grounding options, confirming that the input tokens are indeed larger because they include tokens for web searches.
- Untrained Tokens in Language Models: A member shared a research paper discussing "glitch tokens" in tokenizers of large language models (LLMs) and methods for detecting them, highlighting ongoing issues with tokenizer efficiency and model safety.
- Aya vs. Cohere Command Plus Needs Clarification: Members queried about the performance differences between Aya and Cohere Command Plus, noting issues with Aya's accuracy even on common information, and a suggestion was made to restrict Aya's use to translations only.
- Assistance Request in the Community: A member expressed difficulty getting support for Cohere-related inquiries, prompting a response from other users confirming the availability of Cohere staff in the community.
Link mentioned: Fishing for Magikarp: Automatically Detecting Under-trained Tokens in Large Language Models: The disconnect between tokenizer creation and model training in language models has been known to allow for certain inputs, such as the infamous SolidGoldMagikarp token, to induce unwanted behaviour. ...
Cohere ▷ #project-sharing (2 messages):
- Specializing LLMs for Telecom Sector: A challenge is available for those interested in specializing large language models in the telecom domain (5G and beyond). Join or share the competition here.
- Exploring Cohere for 'Chat with PDF' Applications: Inquiry about the existence or development of applications using Cohere that enable chatting with PDFs. The user is seeking contributions or existing work in this area, requesting shared repositories or related blog posts.
Link mentioned: Zindi: no description found
Datasette - LLM (@SimonW) ▷ #ai (23 messages🔥):
- LMSYS Metric for LLM Quality?: The efficacy of LMSYS arena rankings as a metric for LLM quality remains ambiguous, with no clear consensus in the community.
- Underwhelming Updates in GPT-4o: Disappointment voiced over GPT-4o's performance, specifically its inability to accurately list books. Despite its high speed and appealing pricing, it lacks significant reasoning improvements compared to GPT-4.
- Debating AI's Future Capabilities: Skepticism arises regarding the overhyping of AGI (Artificial General Intelligence) while recognizing incremental improvements in existing models like GPT-4 and Claude 3 Opus. Some members suggest that the hype surrounding upcoming models might be unwarranted.
- Utilizing Cloud Credits: A member inquires about effective ways to utilize soon-to-expire Google Vertex AI credits but lacks solid plans for experimentation.
- Voice Assistant Characteristics: Concerns were expressed about a voice assistant's inappropriate laughter, suggesting custom prompts as potential solutions to make the outputs more professional and less detrimental to user acquisition efforts.
Datasette - LLM (@SimonW) ▷ #llm (1 messages):
simonw: https://twitter.com/simonw/status/1790121870399782987
Mozilla AI ▷ #llamafile (15 messages🔥):
- Beware of Fake Repos: A member warned that a repository claiming to include GGUF for OpenELM is fake, indicating misinformation or errors in repo availability.
- PR Enhancements for llamafile: A new Pull Request (PR #412) has been created to add a script facilitating the upgrade of llamafile archives, based on external resources.
- Performance Benchmarks Shared: One user reported smooth running of the Hermes-2-Pro-Llama-3-8B-Q5_K_M.gguf model (roughly 5.6GB) on llamafile with an AMD 5600U, seeing response times around 10 seconds and RAM usage spiking to 11GB.
- Persistent Errors with AI Models: Users have encountered repeated errors when using models like Llama 8B and Mistral, related to KV cache space issues, with varied experiences based on the amount of RAM available across different systems.
- Metadata Management for Llamafile Improvements: There are ongoing developments to facilitate the integration of custom authorship metadata within llamafile and gguf, contributing to better file management and searchability on platforms like huggingface (Issue #7165).
- Add metadata override and also generate dynamic default filename when converting gguf · Issue #7165 · ggerganov/llama.cpp: This is a formalized ticket for this PR #4858 so people are aware and can contribute to figuring out if this idea makes sense... and if so then what needs to be done before this can be merged in fr...
- Added Script To Upgrade llamafile Archives by mofosyne · Pull Request #412 · Mozilla-Ocho/llamafile: Context: #411 Porting https://briankhuu.com/blog/2024/04/06/inplace-upgrading-of-llamafiles-engine-bash-script/ to llamafile
DiscoResearch ▷ #general (9 messages🔥):
- Seeking Assistance to Curate German YouTube Content: A member expressed a need for creating a list of YouTube channels featuring quality German podcasts, news programs, and vlogs to train a German TTS system. They invited others to collaborate on compiling such a list.
- Mediathekview Offers Rich Source for German Audiovisual Content: Mediathekview was recommended as a resource for downloading shows and films from a variety of German broadcasters, with a suggestion to use a public spreadsheet for organization. The discussion included details on how to download the content database, which contains links and descriptions for all shows (Mediathekview Site).
- Local Storage Details for MediathekView Data Shared: A member clarified that MediathekView stores its film database locally, which includes all shows with links and descriptions, emphasizing the practicality for TTS training data sourcing.
- English Preferred in Discussions: A prompt was made reminding participants to keep communications in English within the channel.
- Exploration of MediathekView's API Potential: Mediathekview's JSON API was highlighted as offering potential for automated access to the media content data (GitHub API).
- MediathekViewWebVLC/mediathekviewweb.lua at main · 59de44955ebd/MediathekViewWebVLC: MediathekViewWeb Lua extension for VLC. Contribute to 59de44955ebd/MediathekViewWebVLC development by creating an account on GitHub.
- Die 100 beliebtesten Podcasts im Moment – Deutschland: Diese Liste zeigt die derzeit 100 beliebtesten Podcasts mit aktuellen Daten von Apple und Podtail.
- Top YouTube Channels in Germany | HypeAuditor YouTube Ranking: Find the most popular YouTube channels in Germany as of May 2024. Get a list of the biggest YouTubers in Germany.
DiscoResearch ▷ #discolm_german (2 messages):
- Is the Demo Down?: A member queried if the demo is currently down, seeking clarification on its status.
- Praise for the Demo: Another message from the same member expressed admiration, describing the demo as "really nice."
LLM Perf Enthusiasts AI ▷ #general (4 messages):
- Claude 3 Haiku vs Llama 3b In Structured Decision: A member initiated a discussion on choosing between Claude 3 Haiku and Llama 3b for an entity extraction scoring service. They indicated issues with traditional fuzzy string matching, aiming to employ a smaller LLM to match submodels within Pydantic models.
- Entity Extraction Challenges Addressed: Focused on improving accuracy in entity extraction from documents, the member explained they are constructing an automated service using Pydantic models to compare predicted and actual outcomes with submodel lists, and are planning to test this structure initially with Instructor.
LLM Perf Enthusiasts AI ▷ #gpt4 (6 messages):
- Speculation on Audio-Related Update: It appears there is speculation regarding an audio-related feature, possibly involving audio in-out support for an assistant.
- Attention on OpenAI's Audio Team: The conversation highlights that members from the OpenAI audio team are actively engaged, possibly hinting at developments in audio technology or features.
- Anticipation for GPT-4o Launch: A YouTube video titled "Introducing GPT-4o" indicates an upcoming OpenAI spring update, scheduled for live streaming on Monday, May 13, 2024. This event is set to introduce GPT-4o along with updates to ChatGPT.
- Celebrity Involvement Creates Buzz: There's excitement around Scarlett Johansson voicing a feature or promotion, which significantly garners attention and enthusiasm among enthusiasts.
Link mentioned: Introducing GPT-4o: OpenAI Spring Update – streamed live on Monday, May 13, 2024. Introducing GPT-4o, updates to ChatGPT, and more.
Alignment Lab AI ▷ #general-chat (3 messages):
- AlphaFold3 Federation Kickoff Announced: An AlphaFold3 Federation meetup is scheduled for tomorrow at 9pm EST (12th of May). Topics include current status of Alpha Fold 3 integration, training pipeline architecture, potential bottlenecks, and an open Q&A session. Check out the details and join here.
- Inquiry about Server Role Information: A member enquired about how to find information regarding the server roles. The user also specifically mentioned a call out to the "orange team".
Link mentioned: AlphaFold3 [AF3] Federation Meet · Luma: Current Progress Update A talk by the lead developer on the current status of Alpha Fold 3 integration. Discussion of any issues encountered during the initial…
Alignment Lab AI ▷ #fasteval-dev (3 messages):
- Fasteval Development Halted: tju01 confirmed they do not plan to continue the fasteval project or any related follow-up projects. They are open to transferring ownership of the GitHub project to a responsible new owner; otherwise, the fasteval channels here may be archived.
AI Stack Devs (Yoko Li) ▷ #app-showcase (1 messages):
- Query about AI town customizations: A member inquired if it's possible to alter the character moving speed and the number of NPCs in AI town. No responses or further details were provided yet.
AI Stack Devs (Yoko Li) ▷ #ai-town-dev (1 messages):
- Reducing NPC Interaction Frequency for Better Player Engagement: A user is exploring ways to reduce the interaction frequency between NPCs to allocate more computational power to player-NPC interactions. They noted using AI town with the llama3 model, which is taxing on their local machine.
Skunkworks AI ▷ #off-topic (1 messages):
pradeep1148: https://www.youtube.com/watch?v=KQ-xGVFHDkw
YAIG (a16z Infra) ▷ #tech-discussion (1 messages):
pranay01: Agree!