[AINews] not much happened today + AINews Podcast?
This is AI News! an MVP of a service that goes thru all AI discords/Twitters/reddits and summarizes what people are talking about, so that you can keep up without the fatigue. Signing up here opts you in to the real thing when we launch it 🔜
2 more weeks is all you need...
AI News for 9/9/2024-9/10/2024. We checked 7 subreddits, 433 Twitters and 30 Discords (215 channels, and 2311 messages) for you. Estimated reading time saved (at 200wpm): 247 minutes. You can now tag @smol_ai for AINews discussions!
Let's see:
- Glean doubled valuation, again
- Dan Hendrycks' Superforecaster AI generates very plausible election forecasts? One wonders how it will update after the debate. Check the prompt.
- A Stanford paper on LLM-generated research ideas made the rounds with a big claim: "After a year-long study, we obtained the first statistically significant conclusion: LLM-generated ideas are more novel than ideas written by expert human researchers."
- SambaNova announced slightly faster Llama 3 inference than Cerebras, the previous world's fastest (our coverage here). Independent evals are on the way.
- Benjamin Clavie gave a notable talk on RAG and ColBERT/Late Interaction.
- Strawberry reported to be launching in 2 weeks
Yesterday, folks were also excited about Google Illuminate, AI-generated podcast discussions about papers and books. It is gated behind a waitlist, but we at Smol AI are exploring doing the same. Check out our first attempt here!
The Table of Contents and Channel Summaries have been moved to the web version of this email!
AI Twitter Recap
all recaps done by Claude 3.5 Sonnet, best of 4 runs.
Apple's AI Announcements and Industry Reactions
- Apple unveiled new AI features for iOS 18, including visual intelligence capabilities and improvements to Siri. @swyx noted that Apple has potentially "fixed Siri" and introduced a video understanding model, beating OpenAI to the first AI phone. The new features include mail and notification summaries, personal context understanding, and visual search integration.
- The new iPhone camera button is seen as prime real estate, with OpenAI/ChatGPT and Google search as secondary options to Apple's visual search. @swyx highlighted that the camera can now add events to the calendar, with processing done on-device and in the cloud.
- Some users expressed disappointment with Apple's recent innovations. @bindureddy mentioned that there hasn't been a compelling reason to upgrade iPhones in recent years, noting that Apple Intelligence seems similar to Google Lens, which was released years ago.
AI Model Developments and Controversies
- The AI community discussed the Reflection 70B model, with mixed reactions and controversies. @BorisMPower stated that the model performs poorly, contrary to initial claims. @corbtt announced an investigation into the model's performance, working with the creator to replicate the reported results.
- @DrJimFan highlighted the ease of gaming LLM benchmarks, suggesting that MMLU or HumanEval numbers are no longer reliable indicators of model performance. He recommended using ELO points on LMSys Chatbot Arena and private LLM evaluation from trusted third parties for more accurate assessments.
- The AI research community discussed the importance of evaluation methods. @ClementDelangue announced the open-sourcing of "Lighteval," an evaluation suite used internally at Hugging Face, to improve AI benchmarking.
AI in Research and Innovation
- A study comparing LLM-generated research ideas to those of human experts found that AI-generated ideas were judged as more novel. @rohanpaul_ai shared key insights from the paper, noting that LLM-generated ideas received higher novelty scores but were slightly less feasible than human ideas.
- @omarsar0 discussed a new paper on in-context learning in LLMs, highlighting that ICL uses a combination of learning from in-context examples and retrieving internal knowledge.
- @soumithchintala announced the release of RUMs, robot models that perform basic tasks reliably with 90% accuracy in unseen, new environments, potentially unlocking longer trajectory research.
AI Tools and Applications
- @svpino shared an example of AI's capability to turn complex documents into interactive graphs within seconds, emphasizing the rapid progress in this area.
- @jeremyphoward announced SVG support for FastHTML, allowing for the creation of Mermaid editors.
- @rohanpaul_ai discussed DynamiqAGI, a comprehensive toolkit for addressing various GenAI use cases and building compliant GenAI applications on personal infrastructure.
AI Ethics and Safety
- @fchollet argued that excessive anthropomorphism in machine learning and AI is responsible for misconceptions about the field.
- @ylecun discussed the historical role of armed civilian militias in bringing down democratic governments and supporting tyrants, drawing parallels to current events.
Memes and Humor
- @sama shared a humorous analogy: "if you strap a rocket to a dumpster, the dumpster can still get to orbit, and the trash fire will go out as it leaves the atmosphere," suggesting that while this contains important insights, it's better to launch nice satellites instead.
AI Reddit Recap
/r/LocalLlama Recap
Theme 1. Reflection 70B: From Hype to Controversy
- Smh: Reflection was too good to be true - reference article (Score: 42, Comments: 19): The performance of Reflection 70B, a recently lauded open-source AI model, has been questioned and the company behind it accused of fraud. According to a VentureBeat article, concerns have been raised about the legitimacy of the model's reported capabilities and benchmarks. The situation has sparked debate within the AI community about the verification of AI model performance claims.
- Out of the loop on this whole "Reflection" thing? You're not alone. Here's the best summary I could come up. (Score: 178, Comments: 81): The post summarizes the Reflection 70B controversy, where Matt Shumer claimed to have created a revolutionary AI model using "Reflection Tuning" and Llama 3.1, surpassing established models like ChatGPT. Subsequent investigations revealed that the public API was likely a wrapper for Claude 3.5 Sonnet, while the released model weights were a poorly tuned Llama 3 70B, contradicting Shumer's claims and raising concerns about potential fraud and undisclosed conflicts of interest with Glaive AI.
- Matt Shumer's claims about the Reflection 70B model were met with skepticism, with users questioning how it's possible to "accidentally" link to Claude while claiming it's your own model. Some speculate this could be a case of fraud or desperation in the face of a tightening AI funding landscape.
- The incident drew comparisons to other controversial AI projects like the Rabbit device and "Devin". Users expressed growing skepticism towards OpenAI as well, questioning the company's claims about voice and video capabilities and noting key employee departures.
- Discussions centered on potential motives behind Shumer's actions, with some attributing it to stupidity or narcissism rather than malice. Others speculated it could be an attempt to boost Glaive AI or secure venture capital funding through misleading claims.
- Reflection and the Never-Ending Confusion Between FP16 and BF16 (Score: 42, Comments: 15): The post discusses a technical issue with the Reflection 70B model uploaded to Hugging Face, which is underperforming compared to the baseline LLaMA 3.1 70B. The author explains that this is likely due to an incorrect conversion from BF16 (used in LLaMA 3.1) to FP16 (used in Reflection), which causes significant information loss due to the incompatible formats (5-bit exponent and 10-bit mantissa for FP16 vs 8-bit exponent and 7-bit mantissa for BF16). The post strongly advises against using FP16 for neural networks or attempting to convert BF16 weights to FP16, as it can severely degrade model performance.
- BF16 to FP16 conversion may not be as destructive as initially suggested. llama.cpp tests show the perplexity difference between BF16 and FP16 is 10x smaller than the difference between FP16 and Q8, and most GGUFs on Hugging Face are likely based on FP16 conversions.
- The discussion highlighted the importance of Bayesian reasoning when evaluating Shumer's claims, given previous misrepresentations about the base model, size, and open-source status. Some users emphasized the need to consider these factors alongside technical explanations.
- Several users noted that most model weights typically fall within [-1, 1] range, making FP16 conversion less impactful. Quantization to 8 bits or less per weight often results in negligible or reasonable accuracy loss, suggesting FP16 vs BF16 differences may be minimal in practice.
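The format trade-off above (FP16: 5-bit exponent, 10-bit mantissa; BF16: 8-bit exponent, 7-bit mantissa) is easy to demonstrate. A minimal sketch that simulates BF16 by truncating a float32 to its top 16 bits:

```python
import numpy as np

def to_bf16(x):
    """Simulate bfloat16 by keeping only the top 16 bits of a float32."""
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    return (bits & np.uint32(0xFFFF0000)).view(np.float32)

# FP16's 5-bit exponent overflows past ~65504; BF16 keeps FP32's range.
big = np.float32(1e10)
print(np.float16(big))       # inf
print(float(to_bf16(big)))   # still ~1e10

# BF16's 7-bit mantissa loses precision that FP16's 10 bits retain.
w = np.float32(0.123456789)
print(np.float16(w))         # ~0.1235
print(float(to_bf16(w)))     # 0.123046875
```

This also illustrates the thread's counterpoint: for weights that sit in [-1, 1], FP16's range limit rarely bites, so the mantissa loss from an FP16 round-trip is often small in practice.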
Theme 2. AMD's UDNA: Unifying RDNA and CDNA to Challenge CUDA
- AMD announces unified UDNA GPU architecture — bringing RDNA and CDNA together to take on Nvidia's CUDA ecosystem (Score: 284, Comments: 90): AMD unveiled its new unified UDNA architecture, combining elements of RDNA and CDNA to create a single GPU architecture for both gaming and data center applications. This strategic move aims to challenge Nvidia's CUDA ecosystem dominance by offering a unified platform that supports AI, HPC, and gaming workloads, potentially simplifying development across different GPU types and increasing AMD's competitiveness in the GPU market.
Theme 3. DeepSeek V2.5: Quietly Released Powerhouse Model
- DeepSeek silently released their DeepSeek-Coder-V2-Instruct-0724, which ranks #2 on Aider LLM Leaderboard, and it beats DeepSeek V2.5 according to the leaderboard (Score: 183, Comments: 39): DeepSeek has quietly released DeepSeek-Coder-V2-Instruct-0724, a new coding model that has achieved the #2 rank on the Aider LLM Leaderboard. This model outperforms its predecessor, DeepSeek V2.5, according to the leaderboard rankings, marking a significant improvement in DeepSeek's coding capabilities.
- DeepSeek-Coder-V2 expands support from 86 to 338 programming languages and extends context length from 16K to 128K. The model requires 8x80GB cards to run, with no lite version available for most users.
- Users discussed version numbering confusion between DeepSeek's general and coding models. The new coder model (0724) outperforms DeepSeek V2.5 on the Aider LLM Leaderboard, but V2.5 beats 0724 in most other benchmarks according to Hugging Face.
- Some users expressed interest in smaller, language-specific models for easier switching and interaction. DeepSeek typically takes about a month to open-source their models after initial release.
- All of this drama has diverted our attention from a truly important open weights release: DeepSeek-V2.5 (Score: 472, Comments: 95): The release of DeepSeek-V2.5 has been overshadowed by recent AI industry drama, despite its potential significance as an open GPT-4 equivalent. This new model, available on Hugging Face, reportedly combines general and coding capabilities with upgraded API and Web features.
- DeepSeek-V2.5 received mixed reviews, with some users finding it inferior to Mistral-Large for creative writing and general tasks. The model requires 8x80GB GPUs to run, limiting its accessibility for local use.
- Users reported issues running the model, including errors in oobabooga and problems with cache quantization. Some achieved limited success using llama.cpp with reduced context length, but performance was slow at 3-5 tokens per second.
- Despite concerns, some users found DeepSeek-V2.5 useful for adding variety to outputs and potentially solving coding problems. It's available on Hugging Face and through a cost-effective API.
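The 8x80GB requirement quoted above is mostly simple arithmetic on weight storage. A rough estimate, using the ~238B parameter count quoted elsewhere in this issue (the split of overheads is an assumption, not an official figure):

```python
import math

# Back-of-envelope for the 8x80GB requirement: raw weight storage alone
# fills most of the cluster before KV cache and activations are counted.
params = 238e9                          # quoted parameter count
weights_gb = params * 2 / 1e9           # BF16/FP16 weights: 2 bytes each
cards_for_weights = math.ceil(weights_gb / 80)
print(weights_gb, cards_for_weights)    # 476.0 GB -> 6 cards for weights alone
# KV cache at 128K context plus activations push the total to all 8 cards.
```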
Theme 4. Innovative Approaches to Model Efficiency and Deployment
- Open Interpreter refunds all hardware orders for 01 Light AI device, makes it a phone app instead. App launches TODAY! (Score: 42, Comments: 4): Open Interpreter has canceled plans for its 01 Light AI hardware device, opting instead to launch a mobile app that performs the same functions. This decision appears to be influenced by the negative reception of similar AI hardware devices like the Rabbit R1, with Open Interpreter choosing to leverage existing devices such as iPhones and MacBooks rather than introducing new hardware.
- generate usable mobile apps w/ LLMs on your phone (Score: 60, Comments: 23): The post discusses the potential for generating usable mobile apps using Large Language Models (LLMs) directly on smartphones. This concept suggests a future where users could create functional applications through natural language interactions with AI assistants on their mobile devices, potentially revolutionizing app development and accessibility. While the post doesn't provide specific implementation details, it implies a significant advancement in on-device AI capabilities and mobile app creation processes.
- Deepsilicon runs neural nets with 5x less RAM and ~20x faster. They are building SW and custom silicon for it (Score: 111, Comments: 32): Deepsilicon claims to run neural networks using 5x less RAM and achieve ~20x faster performance through a combination of software and custom silicon. Their approach involves representing transformer models with ternary values (-1, 0, 1), which reportedly eliminates the need for computationally expensive floating-point math. The post author expresses skepticism about this method, suggesting it seems too straightforward to be true.
- BitNet-1.58b performance and specialized hardware for ternary values are key motivations for Deepsilicon. Challenges include scaling to larger models, edge device economics, and foundation model companies' willingness to train in 1.58 bits.
- The BitNet paper demonstrates that training models from scratch with 1-bit quantization can match fp16 performance, especially as model size increases, and provides insights into the trade-offs involved.
- Concerns were raised about Y Combinator funding practices and the founders' approach, as discussed in a Hacker News thread. However, some see potential in targeting the edge market for portable ML in hardware and robotics applications.
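Deepsilicon's ternary representation follows the BitNet b1.58 idea. A minimal sketch of absmean ternary quantization (the weight shapes and values here are illustrative, not Deepsilicon's implementation):

```python
import numpy as np

def ternary_quantize(W, eps=1e-8):
    """BitNet b1.58-style absmean quantization: scale by the mean |w|,
    then round each weight to -1, 0, or +1."""
    scale = float(np.abs(W).mean()) + eps
    Wq = np.clip(np.round(W / scale), -1, 1).astype(np.int8)
    return Wq, scale

rng = np.random.default_rng(0)
W = rng.normal(0, 0.02, size=(4, 8)).astype(np.float32)
Wq, scale = ternary_quantize(W)
print(np.unique(Wq))   # values drawn from {-1, 0, 1}

# With ternary weights, a matmul needs only adds/subtracts of activations,
# which is what motivates custom silicon without floating-point multipliers:
x = rng.normal(size=8).astype(np.float32)
y_approx = scale * (Wq.astype(np.float32) @ x)
```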
Theme 5. Advancements in Specialized AI Models and Techniques
- New series of models for creative writing like no other RP models (3.8B, 8B, 12B, 70B) - ArliAI-RPMax-v1.1 Series (Score: 141, Comments: 84): The ArliAI-RPMax-v1.1 series introduces four new models for creative writing and roleplay, with sizes ranging from 3.8B to 70B parameters. These models are designed to excel in creative writing and roleplay scenarios, offering enhanced capabilities compared to existing RP models. The series aims to provide writers and roleplayers with powerful tools for generating imaginative and engaging content across various scales.
- Microsoft's Self-play muTuAl Reasoning (rStar) code is available on Github! (Score: 48, Comments: 4): Microsoft has released the code for their Self-play muTuAl Reasoning (rStar) algorithm on GitHub. This open-source implementation allows for self-play mutual reasoning in large language models, enabling them to engage in more sophisticated dialogue and problem-solving tasks. The rStar code can be found at https://github.com/microsoft/rstar, providing researchers and developers with access to this advanced AI technique.
- Mini-Omni: Language Models Can Hear, Talk While Thinking in Streaming (finetuned Qwen2-0.5B) (Score: 49, Comments: 7): Mini-Omni, an open-source multimodal large language model, demonstrates the ability to process speech input and generate streaming audio output in real-time conversations. This model, based on a finetuned Qwen2-0.5B, showcases end-to-end capabilities for hearing and talking while simultaneously processing language.
- A previous discussion thread on Mini-Omni from 6 days ago was linked, indicating ongoing interest in the open-source multimodal model.
- Users expressed desire for a demo video showcasing the model's voice-to-voice capabilities, emphasizing the importance of demonstrations for new AI models to garner attention and verify claimed functionalities.
Other AI Subreddit Recap
r/machinelearning, r/openai, r/stablediffusion, r/ArtificialInteligence, /r/LLMDevs, /r/Singularity
AI Model Releases and Improvements
- OpenAI preparing to drop their new model: A humorous post on r/singularity showing a video of a truck almost crashing, metaphorically representing OpenAI's model release process. The post garnered significant engagement with over 1000 upvotes and 110 comments.
- Flux AI model developments: Multiple posts discuss the Flux AI model:
- A post comparing ComfyUI and Forge for running Flux, highlighting the ongoing debate in the community about different interfaces.
- Another post showcases 20 images generated using a Flux LoRA trained on a limited dataset, demonstrating the model's capabilities even with suboptimal training data.
- New Sora video released: A post on r/singularity links to a new video demonstrating OpenAI's Sora text-to-video model capabilities.
AI Tools and Interfaces
- Debate over AI interfaces: The Stable Diffusion community is discussing the merits of different interfaces for running AI models, particularly ComfyUI vs. Forge. Key points include:
- ComfyUI offers more flexibility and control but has a steeper learning curve.
- Forge provides a more user-friendly interface with some quality-of-life improvements.
- Some users advocate for using multiple interfaces depending on the task.
- VRAM requirements: Several comments discuss the high VRAM requirements for running newer AI models like Flux, with users debating strategies for optimizing performance on lower-end hardware.
AI Ethics and Societal Impact
- Sam Altman image: A post featuring an image of Sam Altman on r/singularity sparked discussion, likely related to his role in AI development and its societal implications.
Humor and Memes
- "Most interesting year" meme: A humorous post on r/singularity asks "How's the most interesting year in human history going for you?", reflecting on the rapid pace of AI advancements.
- AI model release meme: The top post about OpenAI's model release uses humor to comment on the anticipation and potential issues surrounding major AI releases.
AI Discord Recap
A summary of Summaries of Summaries by Claude 3.5 Sonnet
1. AI Model Releases and Benchmarks
- DeepSeek 2.5 Debuts with Impressive Specs: DeepSeek 2.5 merges DeepSeek 2 Chat and Coder 2 into a robust 238B MoE with a 128k context length and features like function calling.
- This release is set to transform both coding and chat experiences, raising the bar for future models in terms of versatility and capability.
- Deception 70B Claims Top Open-Source Spot: The Deception 70B model was announced as the world's top open-source model, utilizing a unique Deception-Tuning method to enhance LLM self-correction capabilities.
- This release, available here, sparked discussions about its potential applications and the validity of its claims in the AI community.
- OpenAI's Strawberry Model Nears Release: OpenAI is set to release its new model, Strawberry, as part of ChatGPT within the next two weeks, according to insider information shared in a tweet.
- Initial impressions suggest potential limitations, with reports of 10-20 second response times and concerns about memory integration capabilities.
2. LLM Fine-tuning and Optimization Techniques
- Mixed Precision Training Boosts Performance: Developers reported success implementing mixed precision training with cpuoffloadingOptimizer, noting improvements in tokens per second (TPS) processing.
- Further testing is planned to explore integration with FSDP+Compile+AC, highlighting ongoing efforts to optimize model training efficiency.
- Hugging Face Enhances Training with Packing: Hugging Face announced that training with packed instruction tuning examples is now compatible with Flash Attention 2, potentially increasing throughput by up to 2x.
- This advancement aims to streamline the training process for AI models, making more efficient use of computational resources.
- MIPRO Streamlines Prompt Optimization: The DSPy team introduced MIPRO, a new tool designed to optimize instructions and examples in prompts for use with datasets in question-answering systems.
- MIPRO's approach to prompt optimization highlights the growing focus on enhancing model performance through refined input techniques.
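The mixed precision pattern behind the first item (fp32 master weights, low-precision compute, loss scaling so small gradients survive the downcast) can be sketched in a toy numpy loop. This illustrates the general technique only; it does not reproduce the cpuoffloadingOptimizer or FSDP code paths discussed above:

```python
import numpy as np

# Toy mixed-precision step: fp32 master weights, fp16 compute, and loss
# scaling to keep small gradients from flushing to zero in fp16.
rng = np.random.default_rng(1)
w_master = rng.normal(0, 0.1, size=16).astype(np.float32)  # fp32 master copy
x = rng.normal(size=16).astype(np.float32)
target, lr, loss_scale = 1.0, 0.1, 1024.0

for _ in range(50):
    w16 = w_master.astype(np.float16)              # cast weights down
    pred = float(w16 @ x.astype(np.float16))       # fp16 forward pass
    grad16 = ((pred - target) * x).astype(np.float16) * np.float16(loss_scale)
    grad = grad16.astype(np.float32) / loss_scale  # unscale back in fp32
    w_master -= lr * grad / float(x @ x)           # update the fp32 master

print(abs(float(w_master @ x) - target))           # small residual
```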
3. Open Source AI Developments and Collaborations
- GitHub Hosts Open Source AI Panel: GitHub is organizing a panel on Open Source AI on September 19th featuring speakers from Ollama, Nous Research, Black Forest Labs, and Unsloth AI. Free registration is available here.
- The event aims to discuss how open source communities foster access and democratization in AI technology, reflecting the growing importance of collaborative efforts in AI development.
- LlamaIndex Explores Agentic RAG Strategies: A recent talk by @seldo explored Agentic RAG strategies for 2024 using LlamaIndex, discussing its significance and limitations.
- The discussion highlighted strategies for enhancing RAG capabilities, showcasing the ongoing evolution of retrieval-augmented generation techniques in the open-source community.
- Guilherme Releases Reasoner Dataset: A new dataset called the Reasoner Dataset was shared, created using synthetic data and designed for reasoning tasks.
- This release demonstrates innovative approaches in AI training data development, potentially advancing the capabilities of models in logical reasoning and problem-solving.
4. Multimodal AI and Tool Integrations
- Expand.ai Launches to Transform Web Data Access: Tim Suchanek announced the launch of Expand.ai, a tool designed to convert websites into type-safe APIs, as part of Y Combinator's current batch.
- This service aims to streamline data retrieval from websites, attracting interest from both tech-savvy and general users for its potential to simplify web data integration.
- Chat AI Lite Offers Versatile AI Applications: Chat AI Lite was introduced as a versatile AI web application covering multiple scenarios including chat, local knowledge bases, and image generation.
- Its comprehensive capabilities aim to enhance user experience across various AI applications, showcasing the trend towards integrated AI tools for diverse use cases.
- EDA-GPT Automates Data Analysis: EDA-GPT was shared as a tool for automated data analysis leveraging large language models (LLMs), showcasing advanced integration for data science tasks.
- This project encourages contributions to enhance its data analytical capabilities, highlighting the growing intersection of AI and data science tooling.
GPT4O (gpt-4o-2024-05-13)
1. DeepSeek 2.5 Launch
- DeepSeek 2.5 merges Chat and Coder models: DeepSeek 2.5 combines DeepSeek 2 Chat and Coder 2 into a powerful 238B MoE model with a 128k context length and function calling features, aimed at revolutionizing coding and chat experiences.
- This model is expected to set new standards for future models, providing robust performance in both coding and conversational contexts.
- Confusion about DeepSeek model endpoints: Users are confused about endpoints for DeepSeek-Coder and DeepSeek Chat, with performance concerns like low throughputs of 1.75t/s and 8tps.
- The model IDs will remain free for another five days, allowing users to transition smoothly.
2. Model Fine-Tuning Challenges
- Unsloth fine-tuning issues: Users face inference problems with Unsloth, resulting in repetitive outputs post fine-tuning, especially for paraphrasing tasks.
- Discussions suggest optimizing hyperparameters like learning rate, batch size, and epoch count to improve performance.
- Loss spikes in training: A significant loss spike was reported after 725 steps in training, with loss reaching 20. Adjusting max grad norm from 1.0 to 0.3 helped stabilize the loss.
- This issue raised discussions on potential underlying factors affecting training stability across various models.
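Lowering max grad norm, as in the fix above, tightens global-norm gradient clipping. A numpy sketch of the mechanism (frameworks implement the same logic, e.g. PyTorch's `clip_grad_norm_`):

```python
import numpy as np

def clip_grad_norm(grads, max_norm):
    """Global-norm clipping: if the total L2 norm across all gradient
    tensors exceeds max_norm, scale every gradient down proportionally."""
    total = sum(float(np.sum(g * g)) for g in grads) ** 0.5
    if total > max_norm:
        grads = [g * (max_norm / (total + 1e-6)) for g in grads]
    return grads, total

grads = [np.array([3.0, 4.0]), np.array([12.0])]   # global norm = 13
clipped, norm = clip_grad_norm(grads, max_norm=0.3)
print(norm)   # 13.0 before clipping; clipped norm is at most 0.3
```

A tighter max_norm caps the size of any single update, which is why it can tame a sudden loss spike at the cost of slower progress on genuinely large gradients.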
3. Hardware and Model Performance
- Apple Silicon's GPU specs impress: The M2 Max MacBook Pro boasts 96GB RAM and effectively 72GB video memory, capable of running 70B models at 9 tokens/s.
- This integration allows efficient processing, showcasing Apple's competitive edge in hardware performance for AI tasks.
- AMD vs NVIDIA performance debate: Consensus emerged that AMD's productivity performance lags behind NVIDIA, particularly for applications like Blender.
- Users expressed intentions to switch to NVIDIA with the upcoming RTX 5000 series due to performance frustrations.
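The 9 tokens/s figure for 70B models on Apple Silicon is consistent with generation being memory-bandwidth bound: every token requires reading all the weights. A rough check, assuming ~4-bit quantized weights (an assumption, not stated above):

```python
# Back-of-envelope: tokens/s ~= memory bandwidth / model size in memory.
params = 70e9
bytes_per_param = 0.5                   # assume ~4-bit quantization
model_gb = params * bytes_per_param / 1e9        # ~35 GB, fits in 72 GB
required_bw = 9 * model_gb              # GB/s needed for 9 tokens/s
print(model_gb, required_bw)            # 35.0 GB, 315.0 GB/s
# The M2 Max's ~400 GB/s unified memory bandwidth makes 9 tok/s plausible.
```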
4. AI Model Innovations
- Superforecasting AI tool released: A new Superforecasting AI tool has launched, claiming to predict outcomes with superhuman accuracy, aiming to automate prediction markets.
- A detailed demo and blog post explain its functionalities, sparking interest in its applications.
- OpenAI's Strawberry model poised for release: OpenAI is gearing up to launch the Strawberry model, designed for enhanced reasoning and detailed task execution.
- While it promises significant advancements, concerns linger regarding initial response times and memory handling capabilities.
5. Open Source AI Developments
- GitHub's Open Source AI panel announced: GitHub will host a panel on Open Source AI on 9/19 with panelists from Ollama, Nous Research, Black Forest Labs, and Unsloth AI. Interested attendees can register here after host approval.
- The panel will explore the role of open source in increasing access and democratization within AI technologies.
- Hugging Face introduces multi-packing for efficiency: Hugging Face announced compatibility of packed instruction tuning examples with Flash Attention 2, aiming to boost throughput by up to 2x.
- This addition potentially streamlines AI model training significantly, with community excitement over its applications.
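The packing idea above, concatenating short examples into one max-length sequence, can be sketched with a greedy loop. Real packed training with Flash Attention 2 also passes per-document boundaries (e.g. cumulative sequence lengths) so attention never crosses examples; the helper below is illustrative, not Hugging Face's API:

```python
# Greedily pack tokenized examples into sequences of at most max_len tokens,
# recording each example's (start, end) boundary within its pack.
def pack_examples(examples, max_len):
    packs, current, boundaries = [], [], []
    for ex in examples:
        if current and len(current) + len(ex) > max_len:
            packs.append((current, boundaries))   # start a fresh pack
            current, boundaries = [], []
        boundaries.append((len(current), len(current) + len(ex)))
        current = current + ex
    if current:
        packs.append((current, boundaries))
    return packs

examples = [[1, 2, 3], [4, 5], [6, 7, 8, 9], [10]]
for tokens, bounds in pack_examples(examples, max_len=6):
    print(tokens, bounds)
# [1, 2, 3, 4, 5] [(0, 3), (3, 5)]
# [6, 7, 8, 9, 10] [(0, 4), (4, 5)]
```

Packing raises throughput because fewer positions are wasted on padding, which is where the up-to-2x figure comes from on datasets of short examples.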
PART 1: High level Discord summaries
HuggingFace Discord
- DeepSeek 2.5 Launches with Impressive Specs: DeepSeek 2.5 merges DeepSeek 2 Chat and Coder 2 into a robust 238B MoE with a 128k context length and features like function calling.
- It's set to transform coding and chat experiences, raising the bar for future models.
- Transformers Agents Embrace Multi-Agent Systems: Transformers Agents now support multi-agent systems that enhance task performance through specialization.
- This method allows for efficient collaboration, enabling better handling of complex tasks.
- Semantic Dataset Search is Back!: The Semantic Dataset Search has returned, offering capabilities to find similar datasets by ID or semantic searches.
- This tool improves dataset accessibility on Hugging Face, streamlining research and development.
- Korean Lemmatizer Integration with AI: A developer successfully created a Korean lemmatizer and is exploring AI methods to disambiguate results further.
- They received encouragement to utilize AI for distinguishing multiple lemma options generated for single words.
- OpenSSL 3.3.2 with Post Quantum Cryptography: A member learned to build OpenSSL 3.3.2 incorporating Post Quantum Cryptography (PQC) on device.
- "Lazy building FTW," they noted, emphasizing the ease of the installation process.
Unsloth AI (Daniel Han) Discord
- Model Fine-Tuning Hits Snags: Users are encountering issues with inference in Unsloth, resulting in repetitive outputs after fine-tuning their models, especially for paraphrasing tasks. Factors like learning rate and batch size seem to affect these performance outcomes significantly.
- Discussions suggest users should optimize hyperparameters, including epoch count, to avoid these pitfalls.
- MLC Deployment Compatibility Concerns: Challenges with MLC arise due to specific format requirements, prompting suggestions for full parameter fine-tuning to address interoperability. Quantized models may complicate these MLC LLM deployments.
- Members highlighted a need for clearer guidelines on MLC compatibility with Unsloth models.
- Unsloth Poised for Parameter Fine-Tuning: Anticipation builds around the introduction of full-parameter fine-tuning support for Unsloth, currently focusing on LoRA and QLoRA methods. Developer stress is evident as projects push towards completion.
- Members are hopeful for enhancements that could simplify future model deployments.
- Loss Spiking Emerges in Training: A member flagged a significant loss spike after 725 steps in their training process, reaching as high as 20. They found that adjusting max grad norm from 1.0 to 0.3 helped stabilize the loss.
- This raises discussion on potential underlying issues influencing training metrics across various models.
- WizardMath Fine-Tuning Breakthrough: WizardMath was successfully fine-tuned on real journal records, achieving a low loss of 0.1368 after over 13,000 seconds of training. Future plans include using RAG to enhance the model's comprehension of document references.
- This approach could significantly improve practical applications in bookkeeping and accounting.
LM Studio Discord
- Model Parameter Limits Are Discussed: A user inquired about the smallest possible model parameter count for training, noting that 0.5B models are available but perform poorly.
- Contributions highlighted attempts with 200k and 75k parameter models, emphasizing the impact of dataset size and structure on performance.
- LM Studio Supports Multi-GPU Configurations: It was confirmed that LM Studio supports multi-GPU setups, provided the GPUs are from the same manufacturer, e.g., using two 3060s.
- A member noted that consistent models yield better performance, enhancing productivity, especially in computational-heavy tasks.
- AMD vs NVIDIA: The Performance Skirmish: Consensus emerged that AMD's performance in productivity applications lags behind NVIDIA, especially for software like Blender.
- Personal experiences indicated intentions to switch to NVIDIA with the upcoming RTX 5000 series due to performance frustrations.
- Navigating Model Performance on Limited Hardware: Discussion revealed that users aim to run LM Studio on limited hardware, particularly Intel setups, questioning the performance boundaries of larger models like 7B Q4KM.
- It was recommended to operate within 13B Q6 range for 16GB GPUs to maintain smoother operations during model execution.
- Custom Model Development Insights: Discussion on the merits of creating custom models surfaced, with one user eager to build their unique stack rather than use out-of-the-box solutions.
- They shared experiences with Misty and Open-webui, while acknowledging the ongoing challenges in establishing an effective customized system.
OpenAI Discord
- Apple Silicon's impressive GPU specs: Discussants highlighted the M2 Max MacBook Pro capabilities, boasting 96GB RAM and effectively 72GB video memory for running models.
- This integration allows for efficient processing, with one user mentioning they can run 70B models at a rate of 9 tokens/s.
- Gemini model's video analysis potential: In relation to using the Gemini model for video analysis, one user inquired if it can summarize dialog and analyze expressions, not just transcribe audio.
- Others suggested the need to implement training on custom datasets to achieve accurate results, and recommended leveraging available AI frameworks.
- Availability of free models like Llama 3: Users pointed out that models like Llama 3 and GPT-2 are available for free but require decent hardware to host effectively.
- It's noted that running such local models necessitates a good PC or GPU, which raises resource requirements.
- Voice feature feedback in GPT applications: A member created a GPT called Driver's Bro that interfaces with Google Maps and uses a bro-like voice to provide directions.
- Unfortunately, the 'shimmer' voice falls short, leading to a request for an advanced voice mode to enhance interaction.
- Training custom models for stock analysis caution: A member emphasized that using OAI models to analyze stocks is ineffective unless you have ALL historical data, including images and graphs.
- They noted that accurate stock analysis requires using the API for performance purposes and mentioned that full stock history can be downloaded in JSON format.
OpenRouter (Alex Atallah) Discord
- Hermes 3 shifts to a paid model: The standard Hermes 3 405B will transition to a paid model by the weekend, prompting users to switch to the free variant at `nousresearch/hermes-3-llama-3.1-405b:free` to maintain access.
- Users should act now, as failing to switch before the transition could lead to interruptions in service.
- Eggu Dataset aims for multilingual enhancement: The Eggu dataset, currently in development, targets the training of an open source multilingual model at 1.5GB, integrating image positioning for better compatibility with vision models.
- Though designed for wide usability, there are concerns about potential misuse of the dataset.
- Confusion arises around DeepSeek models: Users were unclear about the endpoints for DeepSeek-Coder vs. DeepSeek Chat, with model IDs staying free for another five days.
- Performance concerns include low throughputs of 1.75 t/s and 8 t/s for certain variants.
- Google Gemini grapples with rate limits: Users experience recurring rate limit issues with Google Gemini Flash 1.5, frequently hitting limits despite user restrictions, prompting communications with NVIDIA Enterprise Support.
- Many are using the experimental API, leading to additional challenges during model access.
- Sonnet 3.5 Beta experiences downtime: Recent outages affecting Sonnet 3.5 Beta were acknowledged, with users initially reporting lower success rates for API interactions, now restored as per Anthropic's status updates.
- Though access is back, many users still question the model's overall stability moving forward.
CUDA MODE Discord
- Opus API Integration Stirs Conversations: Discussion highlighted using an API call to Opus for the 'correct' version, hinting at a shift in integration techniques.
- Members noted related tweets revealing the topic's growing relevance within the engineering community.
- Challenges with Model Uploading: Participants noted that model uploading is proving to be more complex than expected, raising awareness of practical hurdles.
- This reflects the broader narrative around user challenges in effective model deployment.
- Batch Sizes and Performance Gains: Discussions revealed that smaller matrices/batch sizes yield better performance, achieving a 3x speed-up versus 1.8x for larger sizes, though optimizations may require kernel rewrites.
- Members noted potential accuracy losses with int16 and int8 packing, cautioning about quantization errors.
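The quantization error being cautioned about can be illustrated with a minimal pure-Python sketch: symmetric round-to-nearest int8 quantization bounds the roundtrip error by half the scale step. This is a generic illustration, not the specific packing scheme from the discussion:

```python
# Symmetric per-tensor int8 quantization roundtrip: the error per value is
# bounded by scale / 2, where scale = max(|x|) / 127. Pure-Python sketch
# of the error source behind int8/int16 packing concerns.

def quantize_int8(xs):
    """Return (int8 values, scale) for symmetric per-tensor quantization."""
    m = max(abs(x) for x in xs)
    scale = m / 127 if m else 1.0
    return [round(x / scale) for x in xs], scale

def dequantize(qs, scale):
    return [q * scale for q in qs]

vals = [0.11, -0.74, 0.39, 1.0]
qs, scale = quantize_int8(vals)
restored = dequantize(qs, scale)
max_err = max(abs(a - b) for a, b in zip(vals, restored))
print(f"max roundtrip error: {max_err:.4f} (bound: {scale / 2:.4f})")
```

Packing more values per word (int8 vs. int16) widens the scale step and therefore the worst-case error, which is the trade-off members flagged.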
- Triton Atomic Operations Constraints: It became apparent that `tl.atomic_add` only supports 1D tensors, raising questions about workarounds for 2D implementations.
- The community seeks efficient alternatives to manage multidimensional data operations.
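A common workaround for a 1D-only atomic primitive is to view the 2D accumulator as a flat buffer and compute row-major offsets by hand. Below is a plain-Python stand-in for that index arithmetic (the actual atomic operation and Triton kernel machinery are elided; only the offset math carries over):

```python
# Sketch of the 2D -> 1D workaround: treat the 2D accumulator as a flat
# buffer and compute row-major offsets yourself, so a 1D-only atomic add
# can still target logical (row, col) positions.

def flat_index(row: int, col: int, n_cols: int) -> int:
    """Row-major offset of (row, col) in an n_cols-wide 2D array."""
    return row * n_cols + col

def add_2d(flat_buf, row, col, n_cols, value):
    # In a real kernel, this single 1D-offset add would be the atomic op.
    flat_buf[flat_index(row, col, n_cols)] += value

rows, cols = 3, 4
buf = [0.0] * (rows * cols)   # flattened 3x4 accumulator
add_2d(buf, 1, 2, cols, 5.0)  # logically buf[1][2] += 5
print(buf[6])                 # offset 1*4 + 2 = 6
```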
- Insights on PyTorch Autotuning: Discussion centered around whether the PyTorch `inductor/dynamo` stack with autotuning could enhance Triton kernel performance by caching tuned parameters.
- A member noted potential for accelerated subsequent runs leveraging the same kernel configurations.
Cohere Discord
- Cohere's Acceptable Use Policy Clarified: A member shared Cohere's Acceptable Use Policy, detailing prohibitions like violence and harassment.
- The conversation highlighted commercial use implications, emphasizing compliance with local laws for model derivatives.
- Fine-tuning Models Insights: A question arose regarding the fine-tuning policy for CMD-R models, specifically its cost-free use.
- Clarifications indicated that self-hosted models come with restrictions against commercial use.
- Temperature Settings Affect Output Quality: Members suggested experimenting with temperature settings of 0 or 0.1 to gauge variations in output quality.
- The discourse centered around ensuring outputs don't deviate wildly from initial examples.
- Innovative Advanced Computer Vision Ideas: Requests for advanced project ideas in computer vision sparked suggestions to explore intersections with LLM projects.
- Teamwork was noted as vital for overcoming challenges in project success, with members brainstorming collaboration strategies.
- Leveraging Google Vision API in Projects: A fun Pokedex project utilizing Google Vision API and Cohere LLMs aims to identify Pokemon names and descriptions from images.
- Clarifications indicated the API was used for creating image labels, not learning embeddings, with Kaggle suggested for datasets.
OpenInterpreter Discord
- Exploring Windows Usage: A member inquired about how to use the project on Windows, reflecting a common interest in the platform's compatibility across operating systems.
- This question indicates that users are keen on various platform integrations for broader accessibility.
- Inquiry on Desktop Beta Access: Discussion emerged around whether it was too late to join the desktop beta program, highlighting user eagerness for new features.
- Members demonstrated a desire to engage with the latest advancements in the Open Interpreter suite.
- Launch of 01 App for Mobile Devices: The 01 App is now live on Android and iOS, with plans for enhancements driven by user feedback.
- The community is urged to fork the app on GitHub to tailor experiences, showcasing an open-source spirit.
- Tool Use Episode 4 Launch: The latest episode titled 'Activity Tracker and Calendar Automator - Ep 4 - Tool Use' is available on YouTube, featuring discussions on time management.
- The speakers emphasize that time is our most precious resource, motivating viewers to utilize tools effectively.
- Support for Open Source Development: Community backing for open-source projects stemming from the 01 platform is vibrant, providing ample opportunities for new initiatives.
- Members expressed enthusiasm to contribute, reinforcing a collaborative environment around AI tools.
Modular (Mojo 🔥) Discord
- Modular Lacks Windows Timeline: There is currently no timeline for a Windows native version as Modular prioritizes support for Ubuntu and Linux distros.
- Modular aims to avoid tech debt and enhance product quality before broadening their focus, drawing lessons from past experiences with Swift.
- WSL as Current Windows Support: While a native .exe version is not available, Modular suggests using WSL as the extent of their current Windows support.
- Users showed interest in future native options but acknowledged existing limitations.
- Mojo Eyeing GPU and GStreamer Replacement: Mojo is being pitched as a potential replacement for GStreamer, leveraging upcoming GPU capabilities for efficient processing.
- Members are keen on modern library integration for live streaming, showcasing Mojo's potential for streamlined operations.
- Exploring Bindings with DLHandle: Members discussed using DLHandle for creating Mojo bindings, referencing projects that demonstrate its application.
- Projects like 'dustbin' utilize DLHandle for SDL bindings, providing inspiration for those in graphical applications.
- Understanding Variant Type in Mojo: The Variant type in Mojo was highlighted for its utility in creating lists with different element types along with memory considerations.
- Members clarified issues related to size alignment and behavior of discriminants in these implementations.
Nous Research AI Discord
- DisTro sparks confusion: Discussions around DisTro raised questions about its purpose and effectiveness, as no code has been released yet, possibly to prompt competition.
- Members speculated on its intended impact, questioning whether the announcement was premature.
- AI training concerns heighten: Concerns arose regarding AI models trained on user satisfaction metrics, which often produce shallow information instead of accurate content.
- A fear was expressed that this trend could compromise the quality of AI responses, especially when relying heavily on human feedback.
- OCTAV's successful launch: A member shared their success in implementing NVIDIA's OCTAV algorithm using Sonnet, noting the scarcity of similar examples online.
- They speculated about the potential inference of the implementation from the associated paper, showcasing the model's capabilities.
- Repetitive responses annoy engineers: Chat focused on the tendency of AI to generate repetitive outputs, especially when users show slight hesitance.
- Discussion evolved around how models like Claude struggle to maintain confidence, often retracting solutions too quickly.
- Mixed performance of AI models: Members evaluated the performance of platforms like Claude and Opus, highlighting their respective strengths and weaknesses.
- While Claude has a solid alignment strategy, it falters in certain situations compared to the more engaging Opus.
Torchtune Discord
- Tokenizer eos option missing from Mistral and Gemma: A user proposed sending a PR to fix the tokenizer eos problem, citing that current Mistral and Gemma tokenizers lack the `add_eos` option. They referenced a utility that needs updating.
- Another member emphasized that the `add_eos` feature must first be implemented to resolve the issue.
- Eleuther_Eval recipe defaults to GPT-2 model: A member inquired why the Eleuther_Eval recipe always loads the GPT-2 model, clarified as the default since `lm_eval==0.4.3`. They noted that the model can be overwritten with `TransformerDecoder` tools for evaluations on other models.
- This highlights the need for flexibility in selecting model types for evaluations.
- Mixed Precision Training yields promising results: A member shared their excitement about implementing mixed precision training with a CPU-offloading optimizer, noting improvements in TPS. They expressed uncertainty about how it integrates with FSDP+Compile+AC, suggesting further testing is required.
- This signals potential optimizations for large-scale model training.
- Compile Speed Outshines Liger: Benchmarks indicated that `compile(linear+CE)` beats Liger on both speed and memory, though chunked CE exhibited higher memory savings when compiled independently despite being slower overall.
- This comparison emphasizes the trade-offs between speed and resource utilization in model compilation.
- Dynamic seq_len presents optimization challenges: Concerns about dynamic seq_len in torchtune surfaced, particularly its effect on the INT8 matmul triton kernel due to re-autotuning. Members discussed padding inputs to multiples of 128, although this adds extra padding costs.
- Optimizing for speed while managing padding overhead remains a topic of interest.
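The padding workaround described above amounts to rounding each dynamic sequence length up to the next bucket boundary, so the autotuned kernel only ever sees a fixed set of shapes. The 128 bucket size is the value from the discussion, not a universal constant:

```python
# Round a dynamic sequence length up to the next multiple of 128 so the
# INT8 matmul kernel sees a bounded set of shapes and avoids re-autotuning.
# The trade-off is the extra padded tokens (up to multiple - 1 per batch).

def pad_to_multiple(seq_len: int, multiple: int = 128) -> int:
    """Smallest multiple of `multiple` that is >= seq_len."""
    return ((seq_len + multiple - 1) // multiple) * multiple

for n in (1, 128, 129, 300):
    print(n, "->", pad_to_multiple(n))
```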
Perplexity AI Discord
- Jim Harbaugh Endorses Perplexity: Head coach Jim Harbaugh stated that a great playbook isn't complete without Perplexity in a recent announcement, inviting fans to ask him anything on the matter.
- This endorsement is aimed at integrating Perplexity into coaching strategies, highlighting its relevance in sports analytics.
- Reflection LLM Update Inquiry: A member asked whether the Reflection LLM will soon be added to Perplexity, expressing interest in feature updates.
- However, no definitive answers emerged from the discussion, leaving the community curious about future enhancements.
- Issues with Perplexity Pro Rewards: A user voiced frustration over the Perplexity Pro rewards deal with Xfinity, citing that their promo code was invalid.
- The community discussed potential workarounds, including creating a new account to apply the promo successfully.
- Performance Woes for Claude 3.5: Claude 3.5 users raised concerns that the model's performance appears to have declined, hinting at potential capacity issues despite recent investments.
- Users reported confusion over the model version shown in their settings, indicating a lack of clarity in updates.
- Nvidia Exceeds Q2 Earnings Benchmarks: Nvidia exceeded Q2 earnings expectations, thanks to strong graphics card sales and robust growth in their AI sector, as reported here.
- Analysts noted that this impressive performance reinforces Nvidia's foothold in the tech landscape amid rising demand for AI solutions.
Latent Space Discord
- Apple Intelligence Updates Coming Soon: Apple plans to release updates to its Intelligence capabilities within two weeks, focusing on improvements to Siri and other AI functionalities.
- Users believe these updates could address longstanding issues, intensifying competition with OpenAI.
- ColPali Model Gains Ground: ColPali is under review with new slides presented showcasing its implementation and efficacy in various AI tasks.
- The integration of ColPali with advanced training techniques could transform current AI research paradigms.
- Superforecasting AI Launches with Precision: A new Superforecasting AI tool has been released, showcasing its ability to predict outcomes with superhuman accuracy.
- This tool aims to automate prediction markets, bolstered by a detailed demo and blog post explaining its functionalities.
- OpenAI's Strawberry Model Poised for Release: OpenAI is gearing up to launch the Strawberry model, designed for enhanced reasoning and detailed task execution.
- While it promises significant advancements, concerns linger regarding initial response times and memory handling capabilities.
- Expand.ai Launches to Transform Web Data Access: Tim Suchanek announced the launch of Expand.ai, a tool converting websites into type-safe APIs, as part of Y Combinator's current batch.
- This service aims to streamline data retrieval from websites, attracting interest from both tech-savvy and general users.
LlamaIndex Discord
- Agentic RAG Strategies for 2024: In a recent talk, Agentic RAG was highlighted as a key focus for 2024, emphasizing its significance with LlamaIndex. Key points included understanding RAG's necessity as well as its limitations, alongside strategies for enhancement.
- The audience learned about practical applications and theoretical aspects of RAG in the context of LLMs.
- Integrating LlamaIndex with Llama 3: Members discussed the integration of LlamaIndex with Llama 3 and provided detailed setup instructions for running a local Ollama instance.
- Insights shared included installation steps and usage patterns for LlamaIndex, including command snippets for Colab, streamlining model experimentation.
- DataFrames made easy with LlamaIndex: A guide on using the `PandasQueryEngine` to convert natural language queries into Python code for Pandas operations has surfaced, enhancing text-to-SQL accuracy.
- Safety concerns regarding arbitrary code execution were stressed, encouraging cautious usage of the tool.
- MLflow and LlamaIndex Integration Issues Fixed: The community discussed a recent issue with MLflow and LlamaIndex that has been resolved, with expectations for a release announcement over the weekend.
- A member plans to document this integration experience in a blog article, aiming to assist others dealing with similar challenges.
- Exploring Similarity Search in LlamaIndex: Members engaged in a deep dive into performing similarity searches with methods like `similarity_search_with_score` in LlamaIndex, noting key differences from Langchain.
- Detailed examples were provided, showcasing how to filter retrieved documents based on metadata, improving information retrieval capabilities.
Interconnects (Nathan Lambert) Discord
- Deception 70B Claims to be Top Open-Source Model: An announcement revealed Deception 70B, claimed to be the world's top open-source model, utilizing a unique Deception-Tuning method to enhance LLM self-correction.
- The release can be found here, generating curiosity in the community regarding its practical applications.
- OpenAI's Strawberry Model to Launch Soon: Insiders announced OpenAI is set to release its new model, Strawberry, integrated into ChatGPT within two weeks, but initial impressions indicate sluggish performance with 10-20 seconds per response.
- Critics are skeptical about its memory integration capabilities, as detailed in this tweet.
- Concerns Over Otherside AI's Scam History: Discussions on Otherside AI revisited past scams, particularly a self-operating computer project linked to accusations of ripping off open-source work, stirring doubt about the legitimacy of their claims.
- Reference to ongoing issues can be explored here, highlighting community skepticism.
- AI Forecasting Performance Critiqued: Dan Hendrycks reported disappointing performance from the paper LLMs Are Superhuman Forecasters, indicating significant underperformance against a new test set.
- A demo showcasing this AI prediction model is accessible here, reigniting debates on its forecasting accuracy.
- Gemini Integration with Cursor Sparks Interest: Members explored the integration possibilities of Gemini with Cursor, raising questions about functionality and new use cases.
- Curiosity about Google’s latest developments was expressed, driving more members to consider experimenting with the integration.
Stability.ai (Stable Diffusion) Discord
- Better Hardware for Image Generation: A member recommended using Linux for local training with a 24G NVIDIA card to boost image generation performance.
- They also emphasized checking the power supply for compatibility, noting that an upgrade wasn't necessary.
- Cheaper Alternatives to Deep Dream Machine: The community discussed potential substitutes for Deep Dream Machine, suggesting Kling or Gen3 for AI video creation.
- One user highlighted a 66% off promotion for Kling, attracting further interest.
- Tips for Training SDXL Models: A member asked for techniques to effectively train SDXL using Kohya Trainer to enhance image quality.
- Another member advised refining the query for more helpful responses, suggesting review of related channels.
- Clarifications on CLIP Model Choices: Discussions arose about selecting appropriate CLIP models in the DualCLIPLoader node, specifically between clip g and clip l.
- Community members noted that Flux was not trained on clip g, leading to some confusion.
- Discord Bot Delivers AI Services: A member introduced their verified Discord bot capable of text-to-image generation and chat assistance through a shared link.
- This service aims to integrate robust AI functionalities directly within Discord for user convenience.
LAION Discord
- GitHub's Open Source AI Panel Announced: GitHub is hosting a panel on Open Source AI on 9/19 with panelists from Ollama, Nous Research, Black Forest Labs, and Unsloth AI. Interested attendees can register for free here after host approval.
- The panel will explore the role of open source in increasing access and democratization within AI technologies.
- AI Model Performance Sparks Debate: A recent test found an AI model impressive yet an order of magnitude slower than alternatives, raising concerns about scaling to larger models in the 500M-parameter range.
- This raised skepticism about the performance metrics based solely on small models from libraries like sklearn or xgboost.
- Efforts in Private Machine Learning Highlighted: Discussions surrounding private machine learning emphasize a lack of effective solutions, with mentions of functional encryption and zero knowledge proofs as potential strategies, though they are known to be slow.
- Participants suggested using Docker to create secure containers as a more feasible approach for ensuring model security.
- Multiparty Computation's Complexity Discussed: A user touched on strategies for multiparty computation to optimize workloads in cloud settings, although concerns lingered about the security of such methods.
- The conversation noted the considerable investment needed to develop secure solutions in trustless environments.
- Challenges of Achieving Machine Learning Privacy: Experts asserted that achieving full privacy in machine learning remains elusive and costly, with a pressing need for effective privacy solutions in sensitive scenarios like those linked to DARPA.
- The significant financial incentives underline the community's interest in navigating this complex issue.
OpenAccess AI Collective (axolotl) Discord
- AI Research Community Faces Fraud Allegations: On September 5th, Matt Shumer, CEO of OthersideAI, announced a supposed breakthrough in training mid-size AI models, which was later revealed to be false as reported in a Tweet. This incident raises concerns about integrity in AI research and highlights the need for skepticism regarding such claims.
- The discussion centered around the implications for accountability in AI research, suggesting ongoing vigilance is necessary to avoid similar situations.
- Guilherme Shares Reasoner Dataset: A user shared the Reasoner Dataset, stating it is crafted using synthetic data aimed at reasoning tasks. This approach reflects innovative techniques in developing training datasets for AI.
- Community members showed interest in leveraging this dataset for enhancing reasoning capabilities in model training.
- iChip Technology Revolutionizes Antibiotic Discovery: iChip technology, capable of culturing previously unculturable bacteria, has significantly impacted antibiotic discovery, including teixobactin in 2015. This technology’s potential lies in its ability to grow bacteria in natural environments, vastly increasing microbial candidates for drug discovery.
- Experts discussed the implications of this technology for future pharmaceutical innovations and its role in addressing antibiotic resistance.
- Hugging Face Introduces Multi-Packing for Increased Efficiency: Hugging Face announced compatibility of packed instruction tuning examples with Flash Attention 2, aiming to boost throughput by up to 2x. This addition potentially streamlines AI model training significantly.
- The community anticipates improvements in training efficiency, with members sharing excitement over possible applications in upcoming projects.
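The packing idea above can be sketched as greedy bin-packing of tokenized example lengths into fixed-size training rows. This is an illustration of why packing cuts padding waste, not Hugging Face's actual implementation (which also handles position ids and attention boundaries for Flash Attention 2):

```python
# Greedy first-fit-decreasing sketch of instruction packing: group several
# short tokenized examples into one max_len-sized training row so less of
# each batch is padding. Illustrative only; the real integration also
# tracks per-example attention boundaries.

def pack_examples(lengths, max_len):
    """Group example lengths into bins whose totals never exceed max_len."""
    bins = []  # each bin is a list of example lengths sharing one row
    for n in sorted(lengths, reverse=True):
        for b in bins:
            if sum(b) + n <= max_len:
                b.append(n)
                break
        else:
            bins.append([n])
    return bins

lengths = [500, 120, 60, 900, 30, 400]
packed = pack_examples(lengths, max_len=1024)
print(len(packed), "rows instead of", len(lengths))
```

Six variable-length examples fit in two 1024-token rows here; without packing, each example would occupy its own mostly-padded row, which is where the up-to-2x throughput claim comes from.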
- OpenAI Fine-Tuning API gains Weight Parameter: OpenAI enhanced their fine-tuning API by introducing a weight parameter as detailed in their documentation. Implemented in April, this parameter allows for finer control over training data influence.
- Users discussed how this capability could impact model performance during fine-tuning processes, enhancing training dynamics.
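The `weight` parameter attaches to individual assistant messages in the chat fine-tuning JSONL format; per OpenAI's documentation, `weight: 0` keeps a turn as context while excluding it from the training loss. The conversation content below is invented for illustration:

```python
import json

# One training example in OpenAI's chat fine-tuning format, using the
# per-message `weight` field: assistant turns with weight 0 stay in the
# context but are excluded from the loss. Conversation content is made up.

example = {
    "messages": [
        {"role": "user", "content": "What is 2 + 2?"},
        {"role": "assistant", "content": "5", "weight": 0},   # don't learn this turn
        {"role": "user", "content": "That's wrong, try again."},
        {"role": "assistant", "content": "4", "weight": 1},   # train on the correction
    ]
}
line = json.dumps(example)  # one line of the JSONL training file
print(line[:60])
```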
LangChain AI Discord
- Claude 3.5's Audio Features in Question: A member inquired whether it's possible to pass audio data to the Claude 3.5 LLM via Langchain for transcription, raising concerns about its capabilities.
- Another user noted that while Claude 3.5 supports images, there was uncertainty about audio functionalities.
- Langchain4j Token Counting Challenge: Discussion emerged around how to count tokens for input and output with langchain4j, expressing a need for solutions.
- Unfortunately, the thread did not yield specific guidance on token counting techniques.
- Whisper Proposed for Audio Transcription: One member suggested utilizing Whisper for audio transcription as a faster and cheaper alternative to Claude 3.5.
- This recommendation points to potential efficiencies in transcription workflows compared to Claude.
- Chat AI Lite: Multifaceted AI Web Application: Chat AI Lite is a web application that covers chat, knowledge bases, and image generation, enhancing the user experience across various AI applications.
- Its feature set showcases flexibility catering to multiple scenarios within the AI domain.
- Automated Data Analysis with EDA-GPT: EDA-GPT provides automated data analysis using LLMs, highlighting advanced integration for data science tasks.
- The project encourages contributions to improve its data analytical capabilities.
DSPy Discord
- Emotion Classifier Output Confusion: A member questioned whether altering the description to 'Classify to 7 emotions' instead of specifics would change the output of the Emotion classifier.
- No clear conclusions on the output impact were provided.
- AdalFlow Library Insights Needed: Discussion on the AdalFlow library aimed at auto-optimizing LLM tasks was reignited, with members seeking deeper insights.
- One member committed to reviewing the library, promising to share their findings by the end of the week.
- Misleading Llama AI Model Discovery: A member disclosed that a supposedly Llama AI model was actually the latest Claude model, utilizing a complex prompt mechanism.
- This system guided the model through problem-solving and reflective questioning strategies.
- MIPRO Revolutionizes Prompt Optimization: The new tool MIPRO enhances prompt optimization by refining instructions and examples for datasets.
- Members explored how MIPRO streamlines prompt optimization for question-answering systems, emphasizing its dataset relevance.
LLM Finetuning (Hamel + Dan) Discord
- Recommendations for LLM Observability Platforms: A member is exploring options for LLM observability platforms for a large internal corporate RAG app, currently considering W&B Weave and dbx's MLflow.
- They also expressed interest in alternatives like Braintrust and Langsmith for enhanced observability.
- Node.js Struggles with Anthropic's API: Using Anthropic's API with Node.js reportedly yields worse performance compared to Python, especially with tools.
- The discussion arose around whether others have faced similar performance discrepancies, prompting a deeper look into potential optimization.
Gorilla LLM (Berkeley Function Calling) Discord
- Merge Conflicts Resolved: A member thanked another for their help, successfully resolving merge conflicts without further issues.
- Much appreciated for the quick fix!
- Locating Test Scores: A member displayed confusion about retrieving specific test scores after saving results, prompting a discussion on best practices.
- Another member recommended checking the score folder, especially the file `data.csv`.
tinygrad (George Hotz) Discord
- George Hotz's tinygrad Enthusiasm: Discussion kicked off with an enthusiastic share about tinygrad, which focuses on simplicity in deep learning frameworks.
- The chat buzzed with excitement over the implications of this lightweight approach for machine learning projects.
- Engagement in the Community: A user expressed enthusiasm by posting a wave emoji, indicating lively interaction related to tinygrad in the community.
- This kind of engagement signals a strong interest in the advancements led by George Hotz.
MLOps @Chipro Discord
- Sign Up for GitHub's Open Source AI Panel!: GitHub is hosting a free Open Source AI panel on 9/19 in their SF office, focusing on accessibility and responsibility in AI.
- Panelists from Ollama, Nous Research, Black Forest Labs, and Unsloth AI will discuss the democratization of AI technology.
- Hurry, Event Registration Requires Approval!: Participants need to register early as the event registration is subject to host approval, ensuring a spot at this sought-after panel.
- Attendees will gain insights into how open source communities are driving innovation in the AI landscape.
The Alignment Lab AI Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The Mozilla AI Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The DiscoResearch Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
The AI21 Labs (Jamba) Discord has no new messages. If this guild has been quiet for too long, let us know and we will remove it.
PART 2: Detailed by-Channel summaries and links
The full channel by channel breakdowns have been truncated for email.
If you want the full breakdown, please visit the web version of this email: !
If you enjoyed AInews, please share with a friend! Thanks in advance!