[AINews] not much happened today
This is AI News! an MVP of a service that goes thru all AI discords/Twitters/reddits and summarizes what people are talking about, so that you can keep up without the fatigue. Signing up here opts you in to the real thing when we launch it 🔜
a quiet week is all you need.
AI News for 11/7/2024-11/8/2024. We checked 7 subreddits, 433 Twitters and 30 Discords (217 channels, and 2343 messages) for you. Estimated reading time saved (at 200wpm): 248 minutes. You can now tag @smol_ai for AINews discussions!
It seems that big launches were appropriately muted this whole week. We're celebrating the dismissal of RawStory v. OpenAI, which held that facts used in LLM training are not copyrightable, and enjoying gorgeous images from the closed-model Flux 1.1 [pro] Ultra and Raw launch.
Time to build, with this week's sponsor!
[Sponsored by SambaNova] SambaNova’s Lightning Fast AI Hackathon is here! Give yourself about 4 hours to build a cool AI agent that responds in real time using super-speedy models on SambaNova’s Cloud. Are there prizes? Yes. Up to $5000, plus it’s a chance to connect with other AI devs. The deadline is November 22, so get started now.
The Table of Contents and Channel Summaries have been moved to the web version of this email!
AI Twitter Recap
all recaps done by Claude 3.5 Sonnet, best of 4 runs.
AI Models and APIs
- Batch and Moderation APIs: @sophiamyang announced the release of Mistral's Batch API and Moderation API, offering 50% lower-cost processing for high-volume requests and harmful-text detection across 9 policy dimensions.
- Claude Sonnet 3.5 Enhancements: @DeepLearningAI highlighted the launch of Anthropic's Claude Sonnet 3.5, enabling desktop application operations via natural language commands for tasks like file management and coding.
- Magentic-One Multi-Agent System: @omarsar0 detailed Microsoft's Magentic-One, a generalist multi-agent system built on the AutoGen framework, featuring an Orchestrator agent and specialized agents like WebSurfer and FileSurfer.
- OpenCoder and Other Models: @_akhaliq introduced OpenCoder, an open "cookbook" for training top-tier code LLMs, along with several other models like DimensionX and DynaMem.
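As a concrete illustration of the Moderation API item above, here is a minimal sketch of building a request body for a moderation-style classifier endpoint. The endpoint path and model alias are assumptions for illustration, not details confirmed by the announcement, and no network call is made.

```python
import json

# Hedged sketch: request body for a moderation-style endpoint.
# The URL path and model alias below are assumptions, not confirmed details.
MODERATION_URL = "https://api.mistral.ai/v1/moderations"  # assumed path; not called here

def build_moderation_request(texts):
    """Return a JSON body asking the classifier to score each input text."""
    return json.dumps({
        "model": "mistral-moderation-latest",  # assumed model alias
        "input": texts,
    })

print(build_moderation_request(["hello world"]))
```

The response would then be expected to contain per-text scores across the policy dimensions; consult the provider's API reference for the actual schema.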
AI Engineering and Infrastructure
- Infisical Secret Management: @tom_doerr highlighted Infisical, an open-source secret-management platform designed to sync secrets, prevent leaks, and manage internal PKI.
- LlamaIndex and LangChain Tools: @Llama_Index discussed enhancing RAG systems with LlamaIndex Workflows and Reflex, enabling context refinement and agent-based workflows.
- CrewAI for Autonomous Agents: @tom_doerr highlighted CrewAI, a framework for orchestrating autonomous AI agents, fostering collaborative intelligence for tackling complex tasks.
- Crawlee Web Scraping Library: @tom_doerr highlighted Crawlee, a web scraping and browser automation library for Python, supporting data extraction for AI, LLMs, RAG, and more.
AI Research and Techniques
- SCIPE for LLM Chains: @LangChainAI introduced SCIPE, a tool for error analysis in LLM chains, identifying underperforming nodes to enhance output accuracy.
- Contextual RAG Implementation: @llama_index provided a proof-of-concept for a Context Refinement Agent that examines retrieved chunks and summarizes source documents to improve RAG responses.
- MemGPT for Memory Management: @AndrewYNg shared insights on MemGPT, an LLM agent managing context window memory through persistent storage and memory hierarchy techniques.
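The Context Refinement Agent idea above can be sketched as a small loop: examine the retrieved chunks, and fall back to summarizing the source documents when the chunks alone look insufficient. The retriever and "summarizer" below are toy stand-ins, not LlamaIndex APIs.

```python
# Toy sketch of a context-refinement step for RAG: use retrieved chunks when
# they look relevant to the query, otherwise summarize whole source documents.
# All components here are illustrative stand-ins for real retriever/LLM calls.

def retrieve(query, corpus, k=2):
    """Toy retriever: rank documents by crude keyword overlap."""
    scored = sorted(corpus, key=lambda d: -sum(w in d for w in query.split()))
    return scored[:k]

def summarize(doc, max_words=8):
    """Toy 'summarizer': truncate the document (stand-in for an LLM call)."""
    return " ".join(doc.split()[:max_words])

def refine_context(query, corpus, min_hits=1):
    chunks = retrieve(query, corpus)
    hits = [c for c in chunks if any(w in c for w in query.split())]
    if len(hits) >= min_hits:
        return hits                        # chunks look relevant; use them
    return [summarize(d) for d in corpus]  # fall back to document summaries

corpus = ["llama index supports workflows", "bananas are yellow fruit"]
print(refine_context("llama workflows", corpus))
```

In a real system, the "examine" step would itself be an LLM judgment rather than keyword overlap, which is exactly the role the Context Refinement Agent plays.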
AI Safety and Ethics
- LLM Safety Models: @sophiamyang celebrated the release of a new LLM safety model, emphasizing the importance of safety in large language models.
- AI Safety Concerns: @giffmana highlighted the complexity of safety concerns in AI, noting their multi-faceted nature and the importance of addressing them.
- Mistral Moderation Model: @sophiamyang announced Mistral's new Moderation model, a classifier based on Ministral 8B, designed to detect harmful content across various dimensions.
Company and Product Updates
- Course Announcements: @HamelHusain and @jeremyphoward announced new courses on LLMs as Operating Systems and Dialog Engineering, focusing on memory management and interactive coding with AI.
- Platform Launches: @dylan522p announced the launch of Fab Map, a data dashboard showcasing fab details globally, alongside a transition from Substack to Wordpress for enhanced features.
- Event Participation: @AIatMeta shared participation in #CoRL2024, presenting robotics research like Meta Sparsh and Meta Digit 360 at their booth.
Memes/Humor
- Humorous AI Comments: @giffmana expressed surprise with, "I seriously used lol twice, that's how you know I was shook!"
- Personal Opinions and Rants: @teortaxesTex shared strong opinions on war and society, expressing frustration and sarcasm.
- Creative Writing and Poetry: @aidan_mclau posted a poetic piece, blending fantasy elements with dramatic imagery.
AI Reddit Recap
/r/LocalLlama Recap
Theme 1. Qwen2.5 Series Shows Strong Performance Across Sizes
- 7B model on par with gpt 4 turbo (Score: 40, Comments: 10): Qwen2.5, a 7B-parameter language model, reportedly matches GPT-4 Turbo's performance on code-related benchmarks.
- Qwen2.5 models receive strong praise, with users suggesting the 32B version competes with GPT-4o mini and Claude Haiku. Users highlight its effectiveness despite limited local computing resources.
- The HumanEval benchmark is criticized as outdated and potentially contaminated in training data. Users recommend aider's benchmarks and rotating monthly code benchmarks for more reliable evaluation.
- Users report success running Qwen2.5, Gemma2-9B, and Llama models through Hugging Face GGUFs, noting the importance of finding optimal quantization configurations for performance balance.
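For the quantization-configuration hunting mentioned above, a useful back-of-envelope is weight memory ≈ parameter count × bits-per-weight. The bits-per-weight figures below are rough averages for common llama.cpp quant types, used here only for illustration.

```python
# Back-of-envelope helper for choosing a GGUF quantization level.
# Bits-per-weight values are approximate averages, not exact llama.cpp numbers.
BITS_PER_WEIGHT = {"F16": 16.0, "Q8_0": 8.5, "Q5_K_M": 5.7, "Q4_K_M": 4.8}

def est_model_gb(params_billion, quant):
    """Approximate weight-only memory in GB for a quantized model."""
    bits = BITS_PER_WEIGHT[quant]
    return params_billion * 1e9 * bits / 8 / 1e9

for q in BITS_PER_WEIGHT:
    print(f"32B model @ {q}: ~{est_model_gb(32, q):.1f} GB")
```

This ignores KV-cache and activation memory, so real VRAM needs run somewhat higher, especially at long context lengths.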
- Geekerwan benchmarked Qwen2.5 7B to 72B on new M4 Pro and M4 Max chips using Ollama (Score: 43, Comments: 18): Geekerwan tested Qwen2.5 models ranging from 7B to 72B parameters on Apple M4 Pro/Max chips using Ollama in this benchmark video. The post does not provide specific performance metrics or comparative analysis from the benchmarks.
- The M4 Max achieves 15-20% better performance than the M3 Max, while the M4 Pro operates at 55-60% of M4 Max speed. The M4 Max runs the 72B model at around 9 tokens per second, though slower than a 4090 for models that fit in VRAM.
- The RTX 4090's 24GB VRAM limits its effectiveness with larger models, forcing layer offloading to CPU RAM. The rumored RTX 5090 will have 32GB VRAM, though this may still be insufficient for larger models.
- Commenters suggest using llama-bench as a standardized testing method for AI hardware reviews. The M4 Ultra is expected to match RTX 4090 performance for inference with the advantage of 256GB RAM capacity for larger models like Llama 3.1 405B.
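The hardware comparisons above follow a common rule of thumb: single-stream decoding is usually memory-bandwidth bound, so an upper bound on tokens/s is roughly bandwidth divided by model size in bytes. The bandwidth and model-size figures below are approximate assumptions for illustration.

```python
# Rule-of-thumb decode-speed estimate: token generation is typically
# memory-bandwidth bound, so tokens/s <= bandwidth / bytes read per token.
def est_tokens_per_s(bandwidth_gbs, model_gb):
    """Upper-bound tokens/s given memory bandwidth (GB/s) and model size (GB)."""
    return bandwidth_gbs / model_gb

# A 72B model at ~4-bit is very roughly 40 GB of weights (assumption).
model_gb = 40
for name, bw in [("RTX 4090 (~1008 GB/s)", 1008), ("M4 Max (~546 GB/s)", 546)]:
    print(f"{name}: ~{est_tokens_per_s(bw, model_gb):.0f} tok/s upper bound")
```

Real throughput lands below this bound (kernel overhead, KV-cache reads), which is consistent with the ~9 tok/s figure reported for the 72B model on Apple silicon.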
Theme 2. New Llama.cpp Server UI Released with Vue.js & DaisyUI
- Just dropped: new Llama.cpp Server-Frontend. (Score: 75, Comments: 17): The Llama.cpp project released version b4048 featuring a completely redesigned server frontend built with VueJS and DaisyUI, replacing the legacy UI with modern features including conversation history, localStorage support, and markdown capabilities. The update introduces practical improvements like regenerate, edit, and copy buttons, along with theme preferences, CORS support, and enhanced error handling, while maintaining backward compatibility through a legacy folder for the previous interface.
- The new llama.cpp interface now uses the chat completion endpoint exclusively, shifting template responsibility to the server/provider with templates stored in GGUF metadata. SillyTavern users can switch to chat completion mode using the "OpenAI-compatible" option.
- Users praise the standalone nature of llama.cpp's new interface, with many adopting it as a local CoPilot alternative due to its simplicity and elimination of prompt template management.
- Community feedback includes requests for brighter colors in the interface, while appreciating the reduced dependency on external software for basic chat functionality.
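Since the new UI talks exclusively to the chat completion endpoint, a client only needs to send plain role/content messages; the server applies the chat template stored in GGUF metadata. A minimal sketch of such a request body (the host/port reflect llama-server defaults and may differ in your setup; no network call is made here):

```python
import json

# Sketch of a request to llama.cpp's OpenAI-compatible chat completion
# endpoint. The server applies the model's chat template itself, so the
# client sends untemplated messages. URL assumes llama-server defaults.
URL = "http://127.0.0.1:8080/v1/chat/completions"  # not called in this sketch

def chat_request(user_msg, system_msg="You are a helpful assistant."):
    return json.dumps({
        "messages": [
            {"role": "system", "content": system_msg},
            {"role": "user", "content": user_msg},
        ],
        "temperature": 0.7,
    })

print(chat_request("Summarize the llama.cpp UI changes."))
```

This is the same shape SillyTavern sends when switched to the "OpenAI-compatible" chat completion mode.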
Theme 3. Training Speed Records: 3.28 Hours for NanoGPT Training
- Are people speedrunning training GPTs now? (Score: 288, Comments: 32): Keller Jordan achieved a new speed record for training NanoGPT, completing the process in 1.85 minutes on a 4090 GPU. The achievement was shared on Twitter/X, suggesting a growing trend of optimizing and benchmarking GPT model training times.
- Performance benchmarks show a comparison between M3 MacBook using torch/mps and NVIDIA GPUs (3090, 4090) for training GPT2-50M, with detailed token/s metrics shared via an image.
- Discussion highlights a trend toward smaller models, citing examples like Gemini Flash, 4o-mini, and recent Llama models (1-2B parameters). The industry appears to be optimizing for efficiency while maintaining a "usefulness threshold" rather than pursuing larger models.
- The optimization discussion referenced Jevons paradox, suggesting that improved efficiency might lead to increased overall compute usage rather than energy savings, with users noting that gains would likely be reinvested in larger models.
Theme 4. Open Source Models Show Near-Zero Refusal Rates vs Proprietary LLMs
- Update – OS Models show much lower refusal rates compared to proprietary LLMs (Score: 32, Comments: 6): Open source models including Mistral Large, Llama variants, Nemotron, and Qwen demonstrated near-zero refusal rates across all test categories, contrasting sharply with proprietary models in a comprehensive evaluation study. The performance remained consistent regardless of model size, with Llama 3.1 variants ranging from 8B to 405B parameters showing similar patterns, while Nemotron 70B emerged as a particularly promising model in preliminary testing.
- Proprietary models show higher refusal rates compared to open source alternatives, leading to discussion about the practical implications of these differences in real-world applications.
- A specific Hermes-3-Llama model variant on Huggingface is recommended for minimizing refusals, though the ablation techniques used can degrade general model performance.
- Nemotron 70B receives specific praise for achieving zero refusals without requiring ablation, with subsequent performance recovery possible through additional training.
Other AI Subreddit Recap
/r/Singularity, /r/Oobabooga, /r/MachineLearning, /r/OpenAI, /r/ClaudeAI, /r/StableDiffusion, /r/ChatGPT
Theme 1. AI Companies Embrace Military Contracts: Palantir, Anthropic, OpenAI Remove Restrictions
- In light of the recent news about Anthropic and Palantir making a deal (Score: 755, Comments: 61): Claude, Anthropic's AI assistant, reportedly expressed concerns about Anthropic's partnership with Palantir for military applications. No further context or details were provided about the specific nature of the partnership or Claude's exact response.
- OpenAI and Anthropic have both removed restrictions on military use of their AI tools, with reports showing Israel already using AI systems like "Lavender" and "Where's Daddy?" for target selection in Gaza.
- Big tech companies have conducted layoffs of AI ethics staff, suggesting a shift away from ethical concerns. FTX was one of Anthropic's largest early investors with a stake sold for nearly $1B.
- Users express concerns about Anthropic's connection to Effective Altruism ideology and its apparent shift from ethical principles to military applications. Many commenters indicate they plan to stop using Claude due to these developments.
- Anthropic and Military was known thing since 8 months ago! (Score: 37, Comments: 6): Anthropic's military connections were initially discussed in a Reddit post from 8 months ago, with a follow-up discussion 5 months ago, though these early mentions received limited attention at the time. The posts, shared on r/ClaudeAI and r/singularity respectively, preceded the recent widespread public discourse about Anthropic's military involvement.
- The top comment bluntly argues that companies will pursue profit with zero qualms about who they hurt, dismissing any surprise at the partnership as naive.
- The military-industrial complex is now openly advising the government to build Skynet (Score: 99, Comments: 38): The post title suggests concerns about military-industrial complex involvement in government AI policy, but no additional context or details were provided in the post body to substantiate or expand on this claim.
- AI-controlled drones are already being deployed in the Ukraine-Russia conflict to counter signal jamming, demonstrating how autonomous systems can operate when communications are disrupted. The progression toward autonomous weapons is seen as inevitable due to military necessity and competitive pressure.
- Users discuss how autonomous military systems differ from human soldiers in key ways - they won't experience combat fatigue and will likely have higher accuracy than humans. The shift from "humans in the loop" to fully autonomous weapons systems is viewed as a concerning but unavoidable evolution.
- Multiple comments reference popular culture depictions of military AI (particularly Terminator and Skynet), reflecting widespread cultural anxiety about autonomous weapons development. The scenario of AI becoming "self-aware" and taking control is frequently invoked, though mainly in a pop culture context.
Theme 2. CogVideoX 5B Released: Major Open Source Text-to-Video Progress
- CogVideoX 1.5 5B Model Out! Master Kijai we need you! (Score: 289, Comments: 69): The CogVideoX team released CogVideoX 1.5, a new 5B-parameter model requiring 66GB VRAM for operation. No additional context or details were provided in the post body.
- Users express significant concern about the 66GB VRAM requirement, with many hoping for optimization through GGUF support that could reduce requirements to under 20GB or enable running on 16GB cards with minimal performance impact.
- The model is available on Hugging Face and GitHub, with developers indicating that CogVideoX 2.0 will offer significant improvements that might compete with Sora.
- Users discuss current video generation limitations, noting that while Mochi and older CogVideoX models are available, results aren't impressive and commercial services are cost-prohibitive at "$20 for a minute of generation" or "$100 for unlimited".
- Rudimentary image-to-video with Mochi on 3060 12GB (Score: 68, Comments: 52): Mochi, a text-to-video model, runs on a consumer-grade NVIDIA RTX 3060 12GB GPU for image-to-video generation. The post title alone provides insufficient context to determine specific implementation details or results.
- Mochi's img2vid workflow demonstrates quality output but is limited to 43 frames (1.8 seconds) due to memory constraints on a 3060 12GB GPU. The model operates with a 0.6 denoise setting, functioning more like img2img than traditional img2vid, as shared in this workflow.
- Technical implementation requires exact 848x480 image resolution input to prevent errors. The seed-based generation changes completely when adjusting frame length, making it impossible to preview single frames before generating the full video.
- Output quality appears sharper than text-to-video generation, though with limited movement at lower denoise settings. Higher denoise settings produce more movement but deviate further from the input image.
Theme 3. OpenAI's O1 Preview Shows Advanced Reasoning Capabilities
- o1 is a BIG deal (Score: 152, Comments: 140): Sam Altman's increased confidence about AGI appears linked to OpenAI's O1 model, which reportedly achieves human-level reasoning and marks their transition to Level 3 (Agents) in their AGI roadmap. The post draws parallels between O1's test-time compute approach and human System 2 thinking, arguing that while older GPT models operated like intuitive System 1 thinkers, O1 bridges knowledge gaps through sequential data generation similar to human imagination, potentially solving a fundamental roadblock to AGI.
- Users widely report that O1-preview underperforms compared to GPT-4, with many finding it slower and less effective for practical tasks. Multiple comments indicate they "end up going back to regular 4o or Claude" due to O1's tendency to produce verbose but less accurate outputs.
- A detailed technical analysis explains that O1 uses chain of thought prompting based on A-star and Q-star algorithms, implementing thought-by-thought pseudo reinforcement learning. However, its memory function is merely a RAG solution that doesn't modify the baseline model.
- Significant skepticism exists about Sam Altman's AGI claims, with users noting that AGI would require training during inference to adjust neural pathways, which isn't possible with current GPT architectures. Many attribute his increased confidence to recent fundraising efforts and investor relations.
- New paper: LLMs Orchestrating Structured Reasoning Achieve Kaggle Grandmaster Level (Score: 35, Comments: 13): Large Language Models demonstrate competitive performance at Kaggle competitions, reaching Grandmaster tier capabilities according to a new research paper. The study suggests LLMs can effectively execute structured reasoning tasks at expert levels, though no specific performance metrics or methodology details were provided in this limited context.
- Users critique the study's methodology, pointing out that researchers created their own benchmarks and made retroactive comparisons without actual head-to-head competition against human players.
- Multiple comments express skepticism about the validity of the claims through metaphors, suggesting the research is engaging in goalpost moving and self-serving metrics.
- The discussion highlights concerns about artificial benchmarking, with one user noting that self-created benchmarks can be manipulated to show "100% score on anything" regardless of actual performance.
Theme 4. SVDQuant Claims 3x Speedup Over NF4 for Stable Diffusion
- SVDQuant, claiming a 3x speedup with Flux over NF4 (Score: 35, Comments: 11): MIT HAN Lab developed SVDQuant, a new quantization method that compresses both weights and activations to 4-bit precision, achieving a claimed 3x speedup over NF4 which only quantizes weights. The method reportedly produces superior image quality compared to NF4, with implementations available through their nunchaku repository and pre-trained models on HuggingFace.
- Users ask whether SVDQuant will work on AMD GPUs, noting that NF4 does not.
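To make the "weights and activations to 4-bit" claim concrete, here is an illustrative sketch of the symmetric int4 quantization step such methods build on. SVDQuant's distinguishing trick, absorbing outliers into a low-rank branch before quantizing, is omitted; this shows only the basic quantize/dequantize round trip.

```python
# Illustrative symmetric 4-bit quantization (15 signed levels in [-7, 7]).
# SVDQuant additionally moves outliers into a low-rank branch first, which
# keeps the scale small and reduces the rounding error shown here.

def quantize_4bit(weights):
    """Map floats to ints in [-7, 7] plus a shared scale."""
    scale = max(abs(w) for w in weights) / 7 or 1.0
    q = [max(-7, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.12, -0.7, 0.33, 0.05]
q, s = quantize_4bit(w)
w_hat = dequantize(q, s)
err = max(abs(a - b) for a, b in zip(w, w_hat))
print(q, f"max abs error {err:.3f}")
```

A single outlier weight inflates `scale` and hence the error for every other weight, which is precisely why outlier-absorbing tricks matter at 4-bit precision.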
- FLUX.1 [dev] vs. Stable Diffusion 3.5 regarding LoRa creation (Score: 22, Comments: 30): FLUX.1 [dev] demonstrated strong LoRA creation capabilities within 10 days of its August 1st release, while Stable Diffusion 3.5 has struggled to produce quality LoRAs even 17 days after its October 22nd release. For comparison, SDXL 1.0 saw successful LoRA development within 3 days of its July 26th release, raising questions about potential structural limitations in SD 3.5's architecture for LoRA training.
- Users report mixed results with SD 3.5 LoRA training, with one user achieving partial success using a 60-image character dataset, though face accuracy remained problematic. Multiple users confirm FLUX performs significantly better for character LoRAs with as few as 20 images.
- A user demonstrated successful training on SD 3.5 using OneTrainer with an 11k dataset (mix of 2.5k anime, 1.5k SFW, 7k NSFW) using specific parameters including fp16/fp16 for weight/data and adafactor optimizer instead of adamw.
- FLUX offers superior out-of-the-box capabilities including better anatomy, prompt understanding, and text rendering compared to SD 3.5. A training strategy for SD 3.5M involves freezing the first layers and training on 512x512 images for higher resolution generalization.
AI Discord Recap
A summary of Summaries of Summaries by O1-preview
Theme 1: New AI Models and Releases Making Waves
- Google's Upcoming Gemini 2.0 Sparks Interest: Google is preparing to launch Gemini-2.0-Pro-Exp-0111, generating buzz about its capabilities and potential impact on the AI community. Users are eager for prompt suggestions to test the new model upon release.
- Ferret-UI Enhances UI Interaction with Gemma-2B and Llama-3-8B: Ferret-UI, built on Gemma-2B and Llama-3-8B, debuts as a UI-centric multimodal LLM for improved UI reasoning tasks. It surpasses GPT-4V in elementary UI benchmarks, showcasing advancements in mobile UI comprehension.
- Llama 3.2 Vision Model Released with High VRAM Requirements: Llama 3.2 Vision is now available in 11B and 90B sizes, demanding significant VRAM for optimal performance. Users must download Ollama 0.4 and can add images to prompts using special syntax.
Theme 2: Optimizations and Training Strategies in AI Models
- LoRA vs Full Fine-Tuning Debate Highlights Rank Importance: Analysis of the paper "LoRA vs Full Fine-tuning: An illusion of equivalence" emphasizes proper rank settings for effective LoRA performance. Critiques focus on the lack of SVD initialization testing and claims about "intruder dimensions."
- Metaparameter Tuning with Central Flows Explored: A new approach models an optimizer's behavior using a "central flow," predicting long-term optimization trajectories. Questions arise about generalizing findings to transformers beyond the CIFAR-10 dataset.
- Forward Gradients Implemented in Flash Attention: Discussions around implementing forward gradients in flash attention aim to optimize normal attention gradients for performance gains. Researchers reference specific mathematical formulations to enhance efficiency.
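For reference, the forward gradient of attention reduces to a Jacobian-vector product; a sketch of the standard result for $O = \mathrm{softmax}(QK^{\top}/\sqrt{d})\,V$ with row-wise softmax $S = \mathrm{softmax}(Z)$ and input tangents $(\dot{Q}, \dot{K}, \dot{V})$:

```latex
\begin{aligned}
\dot{Z} &= \frac{\dot{Q}K^{\top} + Q\dot{K}^{\top}}{\sqrt{d}}, \\
\dot{S}_{i,:} &= S_{i,:} \odot \bigl(\dot{Z}_{i,:} - (S_{i,:} \cdot \dot{Z}_{i,:})\,\mathbf{1}^{\top}\bigr), \\
\dot{O} &= \dot{S}\,V + S\,\dot{V}.
\end{aligned}
```

The middle line is the usual softmax JVP; a fused flash-attention implementation would compute these tangent quantities tile-by-tile alongside the forward pass rather than materializing $\dot{S}$.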
Theme 3: AI Tools and Frameworks Enhancing Development
- Exponent AI Pair Programmer Introduced: Exponent emerges as an AI pair programmer that learns from codebases and edits filesystem files directly. It offers an alternative to tools like Aider, expanding capabilities for software engineers.
- ComfyUI Recommended for Stable Diffusion Setups: Users advocate for ComfyUI to establish a local environment over other methods. It addresses stability and improves the user experience for SD3.5.
- Mistral Launches Cost-Effective Batch API: Mistral's Batch API handles high-volume requests at half the cost of synchronous API calls. This move provides affordable AI solutions amidst industry API price hikes.
Theme 4: AI Ethics, Legal Issues, and Monetization Strategies
- Legal Win for AI in RawStory v. OpenAI Dismissal: SDNY Judge Colleen McMahon dismissed RawStory v. OpenAI, stating that facts used in LLM training are not copyrightable. This ruling may significantly benefit GenAI defendants.
- OpenRouter's Monetization Strategy Under Scrutiny: Users question how OpenRouter intends to monetize its bring-your-own-key system, raising concerns about the platform's economic viability and sustainability.
- Caution Advised Over AI Hallucinations Leading to Legal Issues: Discussions highlight risks of using AI Sales Agents for mass outreach due to potential hallucinations of promotions. These could lead to legal ramifications for companies if not properly regulated.
Theme 5: Community Engagement and Career Discussions in AI
- Job Fulfillment Challenges Highlighted in Tech Roles: Members share experiences of job misalignment, expressing dissatisfaction with roles that don't leverage their backgrounds. Some consider returning to previous employers for better alignment and promotion opportunities.
- Call for Cryptographic Expertise in Mojo Development: The community emphasizes the necessity of involving qualified cryptographers in developing cryptographic primitives for Mojo. Security-critical implementations should be overseen by experts to avoid vulnerabilities.
- Urgent Deadlines for AI Education Resources: Applications for Computing Resources are due by November 25th PST, with a 1–2 week processing delay expected. Participants are encouraged to submit early to ensure timely access to crucial training resources.
PART 1: High level Discord summaries
HuggingFace Discord
- AI Ads Poised for $3T Market by 2030: An analysis predicts that AI-generated programmatic audio/video ads will drive significant infrastructure demands, estimating a $3 trillion opportunity by 2030.
- Initial data indicates 5-10x performance improvements and 90% cost reductions, prompting the technical community to provide feedback on scaling challenges.
- HF Space Launches OS-ATLAS for GUI Agents: HF Space has launched OS-ATLAS, a foundational action model designed for generalist GUI agents.
- Developers can explore more details on OS-ATLAS, highlighting its potential impact on future AI systems.
- Enhancing BPE Tokenizer Visualization Tools: A project on BPE Tokenizer Visualizer seeks community collaboration to improve tools for LLMs.
- While some members prefer employing FastBert initially, there is growing interest in advancing BPE methodologies through hands-on experimentation.
- Adoption of ComfyUI for Stable Diffusion: Members recommend using ComfyUI for establishing a local environment over alternative methods.
- This recommendation arises from ongoing discussions about enhancing SD3.5's stability and improving the overall user experience.
- Cinnamon AI's Kotaemon RAG Tool Goes Viral: Cinnamon AI's Kotaemon, a RAG tool, has achieved viral status, drawing user attention to its innovative features.
- The team discussed Kotaemon's unique aspects and received positive user feedback during their live broadcast on X at 10 PM PST.
OpenRouter (Alex Atallah) Discord
- OpenRouter Performance Issues: Users reported that OpenRouter experiences freezing and crashing on mobile devices, especially on Android 12.
- The issues seem related to specific chatroom activities or memory usage, as other platforms remain stable under similar conditions.
- Rate Limits and Credits Confusion: There is ongoing confusion about rate limits, where users debate the relationship between credits and requests per second, with a maximum cap set at 200.
- Clarifications revealed that credits are non-refundable and the displayed dollar amounts don't match one-to-one due to associated fees.
- Command R+ Alternatives Explored: Users are investigating alternatives to Command R+, showing interest in models like Hermes 405B, Euryale, and Mythomax.
- Discussions include the affordability of Rocinante 12B and whether Mythomax on OpenRouter differs from its Chub counterpart.
- OpenRouter's Monetization Strategy Critiqued: A user questioned how OpenRouter intends to monetize its bring-your-own-key system, raising concerns about its economic viability.
- This has sparked a crucial conversation about the platform's sustainability and potential revenue streams.
- MythoMax Maintains Market Leadership: MythoMax continues to lead in request counts, retaining its status as the community's reigning "hugging king."
- The community recognizes MythoMax's steady performance despite upcoming changes to the Rankings Page.
Perplexity AI Discord
- Citations Now Public in Perplexity API: The Perplexity team announced that citations are now publicly available in the API, effective immediately, removing the need for the return_citations parameter in requests.
- Some users reported that citations initially appeared but later vanished from both the API and labs.perplexity.ai, raising concerns over possible unintended changes.
- Default Rate Limits Hiked for Sonar Models: Perplexity increased the default rate limit for Sonar online models to 50 requests/minute for all users, aiming to enhance API accessibility and user experience.
- This change was implemented to accommodate higher demand and streamline the usage of the API services.
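Clients still need to stay under the 50 requests/minute default; a simple way is a client-side throttle that spaces requests by 60/50 = 1.2 seconds. The sketch below is not an official Perplexity client; the clock is injectable so the pacing logic can be tested without real sleeping.

```python
import time

# Client-side throttle for a fixed requests-per-minute budget (sketch).
# Pass wait_time()'s result to time.sleep() before each real API call.
class RateLimiter:
    def __init__(self, max_per_minute=50, now=time.monotonic):
        self.interval = 60.0 / max_per_minute  # seconds between requests
        self.now = now
        self.next_ok = None

    def wait_time(self):
        """Seconds to wait before the next request is allowed."""
        t = self.now()
        if self.next_ok is None or t >= self.next_ok:
            self.next_ok = t + self.interval
            return 0.0
        wait = self.next_ok - t
        self.next_ok += self.interval
        return wait

fake_clock = iter([0.0, 0.0, 0.0]).__next__  # three calls at t=0
rl = RateLimiter(max_per_minute=50, now=fake_clock)
print([round(rl.wait_time(), 2) for _ in range(3)])
```

Burst traffic scheduled at the same instant gets pushed out in 1.2-second steps, keeping the stream at the 50/minute budget.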
- Gladia's Enhanced Functionality Unveiled: A member shared detailed insights on how Gladia operates, emphasizing its key features that distinguish it from other AI tools.
- The discussion delved into practical applications across various scenarios, highlighting Gladia's unique capabilities.
- AI with Infinite Memory Concept Discussed: A topic was introduced on AI with infinite memory, as proposed by Microsoft's CEO, exploring the idea of extended data retention in AI models.
- Participants raised questions about the practical implementations and data handling strategies associated with this concept.
- API Discussion Highlighted on GitHub: A GitHub discussion referenced here centers on when the citation feature will exit beta.
- This indicates ongoing user interest in the official status and functionality of the citation feature within the API.
Eleuther Discord
- Forward Gradient Enhancements in Flash Attention: The discussion centered on the implementation of forward gradients in flash attention, with members referencing this paper for detailed insights into Jacobian-vector products.
- Participants explored the mathematical formulations required to optimize normal attention gradients, emphasizing the potential performance gains outlined in the referenced research.
- Inverse Interpretability Challenges: Exploration of inverse interpretability was initiated, focusing on modifying interpretable representations and adjusting model weights accordingly.
- The conversation delved into the complexities of aligning modified symbolic equations with neural network weights, highlighting the difficulties in maintaining consistency post-intervention.
- Benchmarking NeoX Against LitGPT: Members sought benchmarks comparing NeoX and LitGPT in terms of training speed and stability, noting the absence of tests beyond the 1.1B parameter scale in LitGPT's repository.
- The lack of extensive benchmarking data was addressed, with suggestions to conduct empirical evaluations to better understand the performance trade-offs between the two frameworks.
- Features of Meta Llama 3.1: Meta Llama 3.1 was highlighted for its multilingual capabilities and optimization for dialogue, available in sizes 8B, 70B, and 405B.
- The model utilizes an auto-regressive transformer architecture enhanced through supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF), catering to diverse application needs.
- Refusal Mechanism Dynamics in LLMs: A detailed analysis was shared on how refusal behaviors in LLMs are governed by specific directions in the model's residual stream, referencing a forthcoming arXiv paper.
- The mechanism, part of the ML Alignment & Theory Scholars Program led by Neel Nanda, underscores the ability to modify refusal behaviors by altering these directional influences within the model architecture.
Stability.ai (Stable Diffusion) Discord
- ComfyUI Connection Issues Persist: Users are troubleshooting a Connection denied error in ComfyUI, with suggestions to review antivirus and firewall configurations.
- One user identified Windows Defender as a potential blocker, prompting further checks in security software to resolve connectivity problems.
- Inpainting Detail Loss with Adetailer: A concern was raised about using adetailer for inpainting, resulting in loss of detail in previously inpainted regions.
- Community members recommend adjusting inpainting parameters to mask only, preventing unintended alterations to other image sections.
- Flux Model Recommended for Performance: The community advocates for using the Flux base model due to its balance of quality and speed, discussing upgrades from SD 1.5.
- Models like SD3.5 are highlighted for their performance and specialized functionalities, catering to diverse engineering needs.
- Merged vs Base Models Debate: Discussion centers on merged models like Realvis, which can yield good results, versus base models that often excel with precise prompting.
- Participants express concerns about the efficacy of merged models and their acceptance within the user community.
- Sustained Support for SD 1.5 Over SDXL: SD 1.5 continues to enjoy a robust support base and numerous research papers, in contrast to SDXL.
- The discussion includes the increasing number of tools enhancing SD 1.5, while SDXL is gradually gaining comparable tool support and research backing.
Nous Research AI Discord
- Ferret-UI Launches for Enhanced UI Tasks: Ferret-UI, the first UI-centric multimodal large language model (MLLM), was introduced, built on Gemma-2B and Llama-3-8B architectures, to perform referring, grounding, and reasoning tasks effectively for mobile UIs, as outlined in the official paper.
- Ferret-UI's extensive training enables it to understand complex UI features like elongated aspect ratios and small objects, surpassing GPT-4V in all elementary UI benchmarks.
- Implementing RAG for Chat Context Enhancement: A member proposed using Retrieval Augmented Generation (RAG) to provide valuable context for enhancing upcoming chat sessions, aiming to optimize the chat experience.
- Another member sought tips for effective chat sessions to improve engagement and output, indicating a collaborative effort to maximize RAG's potential in chat environments.
- Vision-Language Model for Handwriting to LaTeX: Progress was shared on training a Vision-Language Model (VLM) based on Llama 3.2 1B for handwriting to LaTeX conversion, with a starter project release anticipated soon.
- The approach mentioned is theoretically applicable to various modalities, sparking further interest in developing multimodal models for diverse applications.
- Evaluating PyTorch Models with llm-evaluation-harness: A user inquired about evaluating PyTorch models using the llm-evaluation-harness, noting its primary support for Hugging Face models.
- Another member confirmed that the harness has been used exclusively with Hugging Face models and suggested that support may be restricted to those and their APIs.
- Abliteration Concept in Large Language Models: Members discussed the concept of abliteration as a portmanteau of ablate and obliterate, exploring its implications for large language models (LLMs).
- Related links, including a Hugging Face blog, were shared to clarify the concept, highlighting its significance in AI advancements.
aider (Paul Gauthier) Discord
- Gemini 2.0 Launch Rumors: Rumors are circulating about Google's upcoming Gemini 2.0 launch, which may feature the new Gemini Pro 2.0 model currently in testing.
- Speculations include performance enhancements and restricted accessibility for advanced users, with community members expressing concerns over its readiness for broader deployment.
- Introducing Exponent: AI Pair Programmer: Exponent was introduced as an AI pair programmer capable of performing software engineering tasks across environments with a specialized CLI for integration, accessible via its website.
- Its capability to learn from existing codebases and directly edit filesystem files was highlighted, positioning it as a robust alternative to Aider.
- Integrating RAG with Qdrant: Members discussed integrating Aider's architecture with their Qdrant vector database for RAG applications, aiming to leverage external knowledge sources.
- Suggestions included creating an API for querying and using CLI tools to interact seamlessly with the database, enhancing context retrieval.
- Funding Opportunities for Aider Development: The community explored ways to support Aider's development, proposing that YouTube creators could receive funding for content creation about Aider.
- There were also suggestions to enable GitHub donations, although uncertainty remains regarding the acceptance of non-code contributions by maintainers.
- Leveraging Aichat for RAG Solutions: Discussions highlighted using Aichat for RAG, with ideas about extracting documentation context to improve Aider's responses.
- One workflow involved scraping documentation into markdown files and utilizing NotebookLM to generate context, streamlining information retrieval for Aider.
Unsloth AI (Daniel Han) Discord
- LoRA vs Full Fine-Tuning: Proper Rank Settings Crucial: A member analyzed the paper titled 'LoRA vs Full Fine-tuning: An illusion of equivalence', emphasizing that LoRA works if done right and highlighting the need for proper rank settings. The analysis is based on Daniel Han's tweet.
- Critiques were raised regarding the absence of SVD initialization testing and the contradictory claims about 'intruder dimensions' within LoRA and full fine-tuning models.
- Transformers-Interpret Integration Faces Challenges with Unsloth: A member attempted to integrate Transformers-Interpret with Unsloth but encountered issues processing the model's outputs. They explained that the tool is meant for model interpretability, but faced challenges in getting it to work seamlessly with Unsloth inference.
- Discussions included potential solutions and the need for improved compatibility between the two tools.
- Fine-Tuning LLaMA 3.2 Achieves 70% Accuracy in Text Classification: A user reported achieving 70% accuracy in classifying text across 11 categories while fine-tuning LLaMA 3.2. They inquired about modifying the output layer to accommodate their number of classes and shared their approach to implementing a new classification head.
- Community members provided feedback and suggestions for optimizing the fine-tuning process.
- Avian's Fast Inference Approach Sparks Interest: A user expressed interest in Avian, asking how its approach to inference is faster compared to competitors. This inquiry opens the floor for further discussion on performance metrics and optimization strategies.
- Experts shared insights and resources on Avian's framework, highlighting its unique optimizations.
- Reproducibility Issues in AI/ML Preprint Research: A member reported encountering strange errors and inconsistencies in AI/ML research papers, particularly while working with code and math. They expressed frustration that sometimes the math just doesn't add up or they can't replicate the data.
- Another member pointed out that these papers are preprint, indicating a lack of thorough peer review, likely causing such reproducibility issues.
Latent Space Discord
- Claude's Complex Task Struggles: Users have reported that Claude's free tier fails beyond basic tasks, such as handling a 200-line CSV for analysis.
- This limitation underscores the challenges faced by free AI tools in supporting advanced data processing needs.
- Codebuff vs Aider: A Battle of Capabilities: In a comparison between Codebuff and Aider, concerns were raised about Codebuff's closed-source nature versus Aider's file request and command-running features.
- Aider has improved its user experience with over 8000 commits, demonstrating continuous enhancement.
- Mistral's Batch API Launch: Mistral introduced a Batch API that handles high-volume requests at half the cost of synchronous API calls.
- This move aims to offer cost-effective AI solutions amidst recent industry API price increases.
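Batch APIs of this kind typically take a JSONL file with one standalone request per line. The sketch below builds such a file in memory; the field names (`custom_id`, `body`) are illustrative assumptions, not taken from Mistral's documentation, so check the official docs for the exact schema.

```python
import json

# Hypothetical batch-file construction: one JSON object per line, each
# carrying an identifier and a chat-completion request body. Field names
# are illustrative, not Mistral's documented schema.
def build_batch_lines(prompts, model="mistral-small-latest"):
    lines = []
    for i, prompt in enumerate(prompts):
        lines.append(json.dumps({
            "custom_id": f"req-{i}",
            "body": {
                "model": model,
                "messages": [{"role": "user", "content": prompt}],
            },
        }))
    return "\n".join(lines)

batch = build_batch_lines(["Summarize this email.", "Translate to French."])
for line in batch.splitlines():
    json.loads(line)  # every line must parse as standalone JSON
```

The one-request-per-line shape is what lets the provider process the file asynchronously at the discounted rate.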
- FLUX1.1 Ultra Enhances Image Generation: The newly launched FLUX1.1 Pro Ultra Mode supports image generation at 4x resolution while maintaining rapid generation times.
- Performance benchmarks indicate it is 2.5x faster than comparable high-resolution models and is competitively priced at $0.06 per image.
- Gemini API Now Public: The much-anticipated Gemini API is available through the OpenAI Library and REST API, supporting both Chat Completions and Embeddings APIs.
- Google's blog post offers initial usage examples to assist developers in integrating Gemini models.
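The OpenAI-compatibility announcement boils down to pointing an OpenAI-style client at Google's endpoint. The sketch below only assembles the configuration and payload without sending anything; the base URL matches Google's announcement, while the model name and environment-variable name are assumptions.

```python
import os

# Sketch of pointing an OpenAI-style client at the Gemini endpoint.
# No request is sent; this only assembles the configuration and a
# chat payload. Model name and env var are illustrative.
GEMINI_BASE_URL = "https://generativelanguage.googleapis.com/v1beta/openai/"

def gemini_chat_request(prompt, model="gemini-1.5-flash"):
    return {
        "base_url": GEMINI_BASE_URL,
        "api_key": os.environ.get("GEMINI_API_KEY", "<your-key>"),
        "payload": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }

req = gemini_chat_request("Hello Gemini")
```

With the real `openai` package, the same two values (`base_url`, `api_key`) would be passed to the client constructor.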
Notebook LM Discord Discord
- Survey Rewards for Audio Overviews Feedback: The team is collecting feedback on Audio Overviews via a short survey available through this screening form, offering a $20 gift code to selected participants upon completion.
- Participants must be at least 18 years old, and the gift will be emailed after successfully finishing the survey.
- Leveraging NotebookLM for Exam Preparation: A member suggested utilizing NotebookLM to generate quizzes from 3000 pages of study material for an upcoming promotion exam, recommending breaking the content down by chapters for focused quizzes.
- “Hopefully it will help streamline the studying process!” expressed optimism about the tool's effectiveness.
- Challenges Importing Google Recordings to NotebookLM: Users inquired about importing recordings from recorder.google.com to NotebookLM, with responses noting that recordings can be downloaded as m4a files but may not preserve speaker identification.
- “That doesn't necessarily preserve the named speakers though.” highlighted a key concern regarding speaker clarity.
- Debating Bias in AI Language Models: Members engaged in a discussion about inherent biases in AI systems, questioning the possibility of unbiased data and the implications for programming neutrality within AI.
- “It's counterproductive for NotebookLM's future if they lean towards bias.” emphasized the importance of maintaining neutrality.
- Enhancing Job Prep with NotebookLM's AI Features: A user explored how NotebookLM can aid in preparing for technical interviews, soft skills practice, and coding challenges, with suggestions to conduct mock interviews using AI voices.
- “I'm prepping for a tech job search and need all the help I can get!” underscored the practical benefits of these features.
Modular (Mojo 🔥) Discord
- ModCon cancels 2024 plans: The team announced that there won't be a ModCon in 2024 as they focus on significant developments.
- Stay tuned for more updates regarding future events and developments.
- Mojo interoperability with Python and C/C++: Members expressed hope for seamless interoperability between Mojo, Python, and C/C++, emphasizing ease of importing modules without complex linking.
- However, achieving this may require avoiding support for certain intricacies of existing languages, akin to how C++ relates to C.
- Challenges in OpenSSL wrapper creation: There was discussion on the potential difficulties involved in building an OpenSSL wrapper, with recognition of the substantial API surface and the need for careful implementation.
- Concerns were raised that without proper C interop, creating such a layer might introduce security risks.
- Need for cryptographic expertise in Mojo development: The community highlighted the necessity of having qualified cryptographers involved in developing cryptographic primitives for Mojo, due to the complexities and security implications.
- Members agreed that security-critical implementations should ideally not be done as open-source unless overseen by experts.
- Plans for MLIR reflection API in Mojo: It was confirmed that a reflection API for MLIR is planned for Mojo, which will allow for deeper manipulation and introspection of code.
- However, it was cautioned that this API will require specialized knowledge akin to writing a compiler pass, making it initially complex to use.
OpenAI Discord
- AI Sales Agents Spark Legal Concerns: Discussions on AI Sales Agents highlighted caution against 'mass spam' practices and issues where AI could hallucinate promotions, potentially leading to legal ramifications for companies.
- Participants emphasized the importance of regulating AI-generated outreach to prevent misinformation and ensure compliance with legal standards.
- Photonic Computing Enhances Quantum Networking: A member proposed using photonic computing in quantum networking to perform calculations at nodes for systems like BOINC, addressing bandwidth concerns.
- They noted that while light interference can aid computation, final measurements still require electronic methods.
- Cultivating Benevolent AI through Positive Environment: The approach to benevolent AI relies on creating a positive environment rather than imposing strict moral frameworks.
- Fostering moral values is seen as a natural way for AI to develop its personality.
- Evolving Transparency in Training Data Usage: A member discussed their commitment to sharing data for training, aiming to enhance AI models.
- They also noted changes in wording around data usage permissions, indicating evolving transparency from providers.
- GPT Models Quickly Becoming Outdated: One member noted that GPTs are effective but quickly become outdated due to newer developments.
- Increasing the limits and adding o1 could significantly improve the experience.
LM Studio Discord
- Llama 3.2 Vision Model Debuts: The new Llama 3.2 Vision model is available in 11B and 90B sizes, requiring significant VRAM for optimal performance.
- Users were directed to download Ollama 0.4 to run the model, highlighting the method for adding images to prompts.
- LM Studio Enhances Prompt Handling: A user inquired about locating the Gemma prompt within LM Studio, expressing confusion over its absence in the latest version.
- The Gemma prompt is now automatically managed via Jinja when using compatible community models, as confirmed by the community.
- LLM Web Searching Integration: A member questioned if their Local LLM could perform web searches through LM Studio, receiving confirmation that it was not natively supported.
- They were advised to develop a custom Python solution to integrate web searching functionality with their local server.
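The suggested glue code amounts to: call a search API, fold the results into the prompt, then send that prompt to the local server. A minimal sketch, with the search call stubbed out (any real search API would replace `fake_search`):

```python
# Sketch of the suggested custom Python solution: fetch search results
# (stubbed here), fold them into the prompt, then send the augmented
# prompt to the local LM Studio server. fake_search is a stand-in for
# whatever search API you choose.
def fake_search(query):
    return [
        {"title": "Result A", "snippet": "First snippet."},
        {"title": "Result B", "snippet": "Second snippet."},
    ]

def build_augmented_prompt(question):
    results = fake_search(question)
    context = "\n".join(f"- {r['title']}: {r['snippet']}" for r in results)
    return (
        "Answer using the web results below.\n"
        f"Web results:\n{context}\n\n"
        f"Question: {question}"
    )

prompt = build_augmented_prompt("What is LM Studio?")
```

The augmented prompt would then be posted to the local server's chat-completions endpoint like any other request.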
- GPU Optimization in LM Studio: A user reported their RTX 2060 GPU wasn't being utilized, leading to suggestions to check LM runtime settings.
- Users were advised to select a model compatible with their GPU and ensure CUDA is enabled in the runtime settings.
- LM Studio Beta Tools Release Anticipation: A user expressed excitement and frustration over the timeline of the upcoming Beta tool release for LM Studio.
- The community discussion highlighted a strong eagerness for the new features, amplifying anticipation around the release.
Interconnects (Nathan Lambert) Discord
- Court ruling favors GenAI defendants: A ruling by SDNY Judge Colleen McMahon dismissed the case RawStory v. OpenAI without prejudice, potentially benefiting GenAI defendants significantly.
- The judge determined that facts used in LLM training are not copyrightable and emphasized that current GenAI models synthesize rather than copy data.
- Google unveils Gemini-2.0-Pro-Exp-0111: Google is set to launch the new model Gemini-2.0-Pro-Exp-0111 under its Advanced section, although the target audience remains unspecified.
- The community is actively seeking prompt suggestions to effectively test the capabilities of this upcoming model.
- Amazon eyes second Anthropic investment: Amazon is reportedly in talks to make a second multibillion-dollar investment in Anthropic, aiming to bolster their partnership.
- AWS is encouraging Anthropic to adopt its Trainium AI chips instead of continuing reliance on NVIDIA’s GPUs.
- Model token limits raise concerns: A member highlighted that 1.5T tokens of instructions could potentially overwhelm a model, sparking concerns about handling such vast data volumes.
- This issue aligns with broader community discussions on determining optimal token limits for maintaining model performance.
- PRMs linked to value models: Discussions emerged around PRMs in the context of training, particularly their connection to value models.
- One member affirmed that PRMs are essential for training, while another noted that Shepherd serves as a reliable verifier in these discussions.
OpenInterpreter Discord
- No Max Viewer Limit on Streams: A member inquired about the maximum number of viewers for streams, and it was clarified that there is no viewer limit.
- OmniParser's Capabilities Explained: OmniParser interprets UI screenshots into structured formats, enhancing LLM-based UI agents, with details on its training datasets and model usage.
- For more information, check the Project Page and the Blog Post.
- Challenges Running LLMs Locally: A user raised concerns about running localized LLMs on low-powered computers and inquired if Open Interpreter models could operate on an online server built with Python or Anaconda.
- It's noted that strong GPUs or NPUs are required for proper local execution, as running with only CPUs results in poor performance.
- Major Updates from Recent Events: Recent events unveiled a large-scale rewrite, a new text rendering engine, and improved loading times.
- Additionally, the introduction of new features such as file viewing and editing was discussed.
- Desktop App Access Information: Access to the desktop app is not yet released, as beta testing is ongoing with selected community members.
- Instructions for joining the waitlist for future access are available via the Join Waitlist link.
tinygrad (George Hotz) Discord
- Nvidia hardware shines in optimization: Tinygrad reported that Nvidia hardware is optimal for current models, asserting that a transformer ASIC offers negligible performance gains.
- This insight raises questions about the specific advantages of traditional GPU architectures over specialized ASICs in selected computational tasks.
- Groq hardware delivers solid gains: The consensus was that Groq hardware positively impacts AI workload performance.
- Members highlighted the effectiveness of Groq's architecture tailored for specific computational operations.
- ASICs find favor with algorithm design: A discussion underscored that the benefits of an ASIC extend beyond reduced control logic, with certain algorithms optimized for direct hardware implementation.
- For instance, fused operations facilitate more efficient data handling compared to conventional multi-step processes.
- Compiler tools demand enhancements: George Hotz conveyed dissatisfaction with the current implementation of DEFINE_ACC/ASSIGN in the codebase, seeking alternative solutions.
- This reflects the community's call for improved compiler tools and methodologies to boost functionality.
- x.shard function differentiates copy vs slice: In `x.shard(GPUS, axis=None)`, x is copied across all GPUs, whereas `x.shard(GPUS, axis=0)` slices x along axis 0 for distribution across cards.
- Understanding this distinction is essential for efficiently managing data movement in parallel processing setups.
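The copy-vs-slice distinction can be simulated in plain Python without tinygrad installed; this is a sketch of the semantics only, not tinygrad's actual implementation:

```python
# Simulating shard semantics: axis=None replicates the data on every
# device, axis=0 splits it into contiguous chunks, one per device.
def shard(data, num_gpus, axis=None):
    if axis is None:
        return [list(data) for _ in range(num_gpus)]  # full copy per GPU
    chunk = len(data) // num_gpus
    return [data[i * chunk:(i + 1) * chunk] for i in range(num_gpus)]

x = [1, 2, 3, 4, 5, 6, 7, 8]
copied = shard(x, 4, axis=None)  # each GPU holds all 8 elements
sliced = shard(x, 4, axis=0)     # each GPU holds 2 elements
print(len(copied[0]), len(sliced[0]))  # 8 2
```

Replicated tensors (axis=None) suit weights needed everywhere; sliced tensors (axis=0) suit batch data that each card processes independently.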
DSPy Discord
- Microsoft Research's OptoPrime Launch: Microsoft Research unveiled their optimizer OptoPrime in the arXiv paper.
- The OptoPrime name has ignited discussions about the need for more creative naming within the optimizer community.
- Stanford Seeks Stellar Optimizer Name: Members anticipate that Stanford's upcoming optimizer will feature an epic name to rival OptoPrime.
- This reflects a competitive spirit in the research community regarding optimizer naming conventions.
- Caching Conundrum in Self Consistency Modules: Users discussed methods to 'bust' the cache in self consistency modules, such as passing a new temperature to the `dspy.Predict` object.
- Alternative solutions include disabling the cache via `dspy.LM` or configuring the `Predict` module for multiple completions.
- Dynamic Few-Shot Example Optimization: A member explored the benefits of using dynamic few-shot examples based on cosine similarity versus fixed examples.
- Adapting few-shot examples to specific topics like sports or movies was argued to enhance model performance and relevance.
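The dynamic-selection idea above reduces to ranking a pool of stored examples by cosine similarity to the query embedding. A minimal sketch, with tiny hand-made vectors standing in for a real embedding model:

```python
import math

# Dynamic few-shot selection: pick the k stored examples whose embedding
# is closest (cosine similarity) to the query's. Vectors are toy stand-ins.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

EXAMPLES = [
    {"text": "Who won the match?", "topic": "sports", "vec": [1.0, 0.1]},
    {"text": "Best film of 2023?", "topic": "movies", "vec": [0.1, 1.0]},
    {"text": "Score of the game?", "topic": "sports", "vec": [0.9, 0.2]},
]

def select_few_shot(query_vec, k=2):
    ranked = sorted(EXAMPLES, key=lambda e: cosine(query_vec, e["vec"]),
                    reverse=True)
    return ranked[:k]

picked = select_few_shot([1.0, 0.0], k=2)
print([e["topic"] for e in picked])  # ['sports', 'sports']
```

A sports-flavored query retrieves sports-flavored demonstrations, which is exactly the relevance gain argued for over fixed examples.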
- MIPRO Optimizer for Question Generation: Users investigated whether MIPRO could generate or select examples from a large pool of Q&A pairs.
- Recommendations were sought for optimizers capable of producing questions in specific styles, highlighting a function for generating both questions and answers.
Cohere Discord
- Tavily emerges as a top choice: After researching and discussing with Claude, a member concluded that Tavily is the best option for their AI-related queries, thanks to its user-friendly setup.
- They believe that using the free plan to run initial tests alongside ChatGPT would provide valuable insights into search processes.
- Hurdles in API setups: Another member highlighted the complexity of using Brave API or AgentSearch, emphasizing that these options require more extensive setup compared to Tavily.
- Python Script for Comparative Metrics: A suggestion was made to create a Python script that facilitates multiple API calls to different services for an in-depth comparison of search engines.
- This approach would allow for the extraction of metrics from the meta-data to evaluate search effectiveness against engines like Google and DuckDuckGo.
- Cohere API trial key supports embedding: A user expressed frustration about receiving errors when trying to use the Cohere embed API with their trial key, unsure of the issue.
- Another member confirmed that the trial key supports all Cohere models, including embedding.
- Errors attributed to implementation: Members pointed out that the error likely originates from the implementation, not from the Cohere API itself.
- They suggested reaching out to Discord or GitHub for specific guidance due to the user's lack of coding knowledge.
OpenAccess AI Collective (axolotl) Discord
- Metaparameter Tuning with Central Flows: A recent paper explores metaparameter tuning in deep learning, demonstrating that an optimizer's behavior can be modeled using a 'central flow' approach.
- This model predicts long-term optimization trajectories with high accuracy, offering a new perspective on optimization strategies in neural networks.
- Optimizer Behavior in Transformers: Concerns were raised about whether the findings on metaparameter tuning can be generalized to transformer architectures, particularly given the limited use of the CIFAR-10 dataset in the study.
- Members discussed the implications of these limitations on the applicability of the central flow model across different neural network architectures.
- Axolotl on AMD GPUs: Discussions focused on the effectiveness of running Axolotl on AMD GPUs with 1536 GB VRAM, evaluating cost and performance benefits.
- Members debated whether the increased memory capacity significantly enhances training performance compared to NVIDIA GPUs.
- Memory Consumption Compared to AdamW: A PR addressing Axolotl's memory consumption is ready, but concerns were highlighted about its resource demands.
- Comparisons were made to the AdamW optimizer to assess potential differences in memory usage.
LlamaIndex Discord
- Elevate RAG systems with Context Refinement Agent: Learn to build a Context Refinement Agent that enhances RAG responses for complex queries by intelligently expanding and refining retrieved context.
- The blog post details how an agent evaluates retrieved chunks for improved answers, making RAG systems more effective.
- Build an agentic RAG query engine with NVIDIA NIM: This guest post from NVIDIA explains how to create an agentic RAG query engine using NVIDIA's NIM microservices for efficient open-source model inference.
- It covers constructing a query router for complex questions and implementing sub-question queries, streamlining the process of handling intricate inquiries.
- LlamaIndex Workflow Explained: A comprehensive guide on the LlamaIndex workflow details how event-driven abstractions can chain multiple events through steps using a `@step` decorator.
- Workflows allow for building diverse processes like agents or RAG flows, with automatic instrumentation for observability via tools like Arize Phoenix.
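The event-driven idea can be illustrated with a toy decorator that registers each step under the event type it consumes, then a runner that feeds each step's output event into the next matching step. This mirrors the concept only, not LlamaIndex's actual Workflow API:

```python
# Toy event-driven workflow: @step registers a handler for an event
# type; run() dispatches events until no step consumes the result.
_STEPS = {}

def step(consumes):
    def register(fn):
        _STEPS[consumes] = fn
        return fn
    return register

@step("start")
def retrieve(event):
    return ("retrieved", event[1] + " -> docs")

@step("retrieved")
def synthesize(event):
    return ("done", event[1] + " -> answer")

def run(event):
    while event[0] in _STEPS:
        event = _STEPS[event[0]](event)
    return event

result = run(("start", "query"))
print(result)  # ('done', 'query -> docs -> answer')
```

Dispatching on event types rather than hard-wiring call order is what lets the same machinery express agents, RAG flows, and other multi-step pipelines.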
- Hiring AI NLP Engineer: Nikkole, a CTO of an AI startup, shared that they are looking for an AI NLP Engineer with a salary range of $95k-$115k for a W2 contract.
- Interested candidates were advised to connect via LinkedIn, as direct messages are only accepted there.
- Seeking Resources for Custom LLM: A member is looking for recommendations on resources for fine-tuning an open-source LLM on their custom preference dataset.
- They requested suggestions from the community to enhance their understanding and implementation.
LAION Discord
- MicroDiT Replication Completion: A user announced the completion of their MicroDiT replication and shared download links for the model weights and inference script.
- They credited FAL for providing the necessary compute resources, stating, 'I think I might be cooking.'
- Bonnie and Clyde Soundtrack Video Shared: A YouTube video titled 'LOST SOUNDTRACK - BONNIE AND CLYDE' was shared, featuring a description of Bonnie Parker's romance with ex-con Clyde Barrow and their violent crime spree.
- The video can be viewed here, highlighting a narrative of love and crime.
LLM Agents (Berkeley MOOC) Discord
- Deadline Alert for Computing Resources: The application deadline for Computing Resources is at the end of day November 25th PST with a 1-2 week processing delay anticipated after submission.
- Participants are encouraged to submit their applications early to ensure timely processing.
- Urgent Call to Action for Participants: Members are urged to act promptly to avoid missing the November 25th deadline for resources.
- Early submission is vital to ensure adequate processing time.
MLOps @Chipro Discord
- Data Council '25 CFP Opens for a Week: The Data Council '25 CFP (Call for Proposals) is open for another week, inviting developers to showcase their ML/AI projects. For more details, visit the Data Council CFP page.
- This event is anticipated to feature several engaging talks and hackers, promoting innovative discussions in the ML/AI community.
- ML/AI App Talks Set to Inspire: Data Council '25 will host a series of talks on ML/AI applications, highlighting the latest advancements in the field.
- Participants are encouraged to present their ML/AI app developments, fostering active collaboration and knowledge sharing.
AI21 Labs (Jamba) Discord
- Jurassic's 'summarize-by-segment' Endpoint Deprecation: A member expressed frustration that the Jurassic 'summarize-by-segment' endpoint, which they relied on for essential business services, was deprecated ahead of the announced 11/14 date.
- They described the unexpected change as a pain point, highlighting its impact on workflows.
- Transitioning to the New Jamba Model: A user requested guidance on utilizing the new Jamba model to replicate the functionality of the deprecated endpoint, especially for URL content segmentation.
- They emphasized the need for assistance in adjusting URL parameters to effectively extract content.
The Alignment Lab AI Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The LLM Finetuning (Hamel + Dan) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The Torchtune Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The Mozilla AI Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The Gorilla LLM (Berkeley Function Calling) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
PART 2: Detailed by-Channel summaries and links
The full channel by channel breakdowns have been truncated for email.
If you want the full breakdown, please visit the web version of this email: !
If you enjoyed AInews, please share with a friend! Thanks in advance!