D.A.D.: AI Can Extract Personality Traits From LinkedIn Photos That Correlate With Career Outcomes, Study Finds
The Daily AI Digest
Your daily briefing on AI
February 09, 2026 · 12 items · ~6 min read
From: Hacker News, Hugging Face Spaces, NBER
My AI confidently gave me the wrong answer three times. I said, "Are you sure?" It said, "Absolutely." Just like my kids during homework.
What's New
AI developments from the last 24 hours
AI Coding Tools Excel at Common Tasks, Struggle With Novel Problems
A Hacker News discussion surfaced a pattern many AI coding users report: AI assistants excel at common tasks with abundant training examples—one user successfully built a retro emulator with minimal prompts—but struggle with novel, domain-specific problems. When the same user attempted a proprietary technical task with no GitHub precedent, the AI couldn't help. Some pushed back, arguing AI can assist with harder work if you use it to draft detailed specs rather than generate code directly. Others noted rapid tool improvement may soon change this calculus.
Why it matters: As a practical rule of thumb, treat AI coding tools as excellent on the well-trodden path and unreliable off the map, at least for now.
Higher Omega-3 Levels Linked to Lower Early-Onset Dementia Risk
A study circulating on Hacker News found that higher levels of non-DHA omega-3 fatty acids correlated with significantly lower risk of early-onset dementia. Participants in the top three quintiles of non-DHA omega-3 showed reduced dementia risk compared to those with the lowest levels. The research adds to growing evidence around nutritional factors in cognitive health, though the study design (likely observational) means correlation, not causation.
Why it matters: This is health research, not AI news; it appears to have been included in error.
Microsoft AI Agent Fumbles Its Own Security Disclosure
A GitHub issue claims Microsoft's billing system can be bypassed using a combination of 'subagents' with an agent definition, though details remain vague and unverified. The disclosure quickly became messy: commenters report that a contributor impersonating a Microsoft maintainer submitted a dubious fix, and an AI bot inappropriately closed the issue before it was resolved. The incident highlights growing pains as companies deploy AI agents to manage developer workflows, sometimes with embarrassing results.
Why it matters: The chaos around this disclosure—fake maintainers, premature AI closures—may be more significant than the vulnerability itself, illustrating how automated moderation can backfire when security issues arise.
GitHub Building Infrastructure for AI Agents to Run Multi-Step Tasks
GitHub appears to be developing 'agentic workflows' tooling, hosted on the company's official GitHub Pages account. Details remain thin, but the project suggests GitHub is building infrastructure for AI agents to execute multi-step tasks autonomously. Early reactions are skeptical: one developer dismissed it as solving the 'wrong level of abstraction,' another noted it references a 'Workflow Lock File' feature that doesn't actually exist yet in GitHub Actions. The unconventional domain initially triggered phishing concerns before being verified as legitimate.
Why it matters: If GitHub builds native support for AI agents into its workflow automation, it could significantly simplify how teams deploy autonomous coding assistants—but the referenced nonexistent features suggest this may be early-stage or vaporware.
Terraform Creator Proposes System to Vet AI-Generated Code Contributions
Mitchell Hashimoto, creator of Terraform and other widely used developer tools, introduced 'Vouch', a system for his Ghostty terminal emulator project that addresses a growing tension in open source: AI-generated code contributions. The system appears designed to maintain quality standards as LLM tools make it easier for anyone to submit code, regardless of expertise. One critic raised the inverse concern: that AI coding tools may eventually become so capable they sideline human contributors entirely.
Why it matters: Developer infrastructure, but it signals a broader debate about whether AI democratizes contribution or degrades quality—a tension that will eventually reach enterprise software supply chains.
What's Innovative
Clever new use cases for AI
Medieval Number System Gets Modern Font Treatment
A developer released a custom font that renders Cistercian numerals—a compact medieval notation system where monks could write any number from 1 to 9,999 as a single glyph—using standard font ligatures. Type a number, and the font automatically converts it to the historical symbol. It's a niche typographic experiment rather than a practical tool, but demonstrates creative uses of ligature technology (the same feature that turns 'fi' into a single character in professional fonts).
Why it matters: A hobbyist project with no business application—but a clever example of how font-level programming can transform text display without code changes, a technique occasionally used for brand customization or specialized notation systems.
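The ligature trick is essentially a lookup: each decimal place of the typed number maps to a mark in one quadrant of the glyph's vertical stave, and the font substitutes the whole sequence with one composed symbol. A minimal Python sketch of that decomposition (a hypothetical helper for illustration, not the released font's actual code; the quadrant assignment follows one common convention for Cistercian numerals):

```python
# Sketch only: Cistercian numerals encode a number from 1 to 9,999 as one
# glyph by attaching a digit mark (0-9) for each decimal place to a quadrant
# of a vertical stave. The font performs the equivalent substitution via
# OpenType ligatures; here we just compute the four-part decomposition.

def cistercian_parts(n: int) -> dict:
    """Split n into the four digit marks a Cistercian glyph is built from."""
    if not 1 <= n <= 9999:
        raise ValueError("Cistercian numerals cover 1 to 9,999")
    return {
        "units": n % 10,              # upper-right quadrant (one common convention)
        "tens": (n // 10) % 10,       # upper-left
        "hundreds": (n // 100) % 10,  # lower-right
        "thousands": n // 1000,       # lower-left
    }

print(cistercian_parts(1993))
# {'units': 3, 'tens': 9, 'hundreds': 9, 'thousands': 1}
```

A zero in any place simply leaves that quadrant of the stave unmarked, which is why every value up to 9,999 fits in a single glyph.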
Companion App Appears for Educational Robot Platform
A Hugging Face Space called 'baby-reachy-mini-companion' has appeared, apparently designed as a companion app for the Reachy Mini, a small humanoid robot platform used in robotics research and education. The project uses Gradio, a common framework for building AI demo interfaces. There are no details yet on what the companion app actually does: whether it adds AI capabilities to the robot or simply provides a control interface.
Why it matters: Early-stage developer work on a niche robotics platform—not relevant to most readers unless you're tracking the intersection of AI tools and physical robotics.
Screenwriting Tool Shows Why Training Data Diversity Matters
A Mexican engineering student built CineGraphs, a tool that generates branching story paths visualized as graphs—letting screenwriters explore narrative possibilities before committing to a linear script. The interesting finding: training the AI on experimental and international cinema rather than mainstream Hollywood produced far more varied outputs. The creator argues this taught the model that story structure is "a design space rather than a formula." The tool uses a fine-tuned version of Qwen (an open-source model from Alibaba) trained on 100 curated films.
Why it matters: For creative professionals, this suggests that AI writing tools trained on conventional material may be limiting—and that the training data's diversity matters as much as the model's size.
Quant Trading Firm Jane Street Posts Cryptic AI Project
Jane Street, the quantitative trading firm known for its technical prowess and notoriously difficult programming puzzles, has published a Hugging Face Space called 'droppedaneuralnet.' The cryptic name and minimal details suggest this could be a recruiting puzzle, demo, or internal experiment made public. Jane Street rarely shares technical work openly, making any public release notable for those tracking how elite quant firms approach AI.
Why it matters: Mostly curiosity fodder—worth a look if you're interested in how top quant firms experiment with AI, but unlikely to affect your workflow unless Jane Street reveals something more substantial.
What's Controversial
Stories sparking genuine backlash, policy fights, or heated disagreement in the AI community
Quiet day in what's controversial.
What's in the Lab
New announcements from major AI labs
Quiet day in what's in the lab.
What's in Academe
New papers on AI and its effects from researchers
Same AI, Opposite Results: Platform Culture Determines Whether AI Helps or Harms Trading
A study examined how AI-generated content performs differently across two major retail investor communities. On Seeking Alpha, AI-assisted posts showed improved reasoning and sentiment that actually predicted future stock returns—with measurable effects on trading quality. On WallStreetBets, AI content correlated with the opposite: more emotional language, sentiment contagion, higher volatility, and lottery-like trading behavior. The divergence suggests AI amplifies whatever culture it enters—analytical forums get sharper analysis, while speculative communities get turbocharged speculation.
Why it matters: For anyone using AI to inform investment decisions or monitor retail sentiment, the platform context may matter as much as the AI itself—same technology, opposite market effects.
LinkedIn Photos Can Predict Your Salary, Study Finds
Researchers built an AI system that extracts Big Five personality traits from LinkedIn photos of 96,000 MBA graduates, then tracked how those inferred traits correlated with careers. The finding: AI-derived personality scores predicted school prestige, compensation, job transitions, and career advancement about as well as race, attractiveness, or educational background. Workers sorted into occupations where their photo-inferred traits were valued, and earned more when traits matched occupational demands. The authors flag the obvious concern—employers are already using similar tools for screening, raising discrimination and autonomy questions.
Why it matters: Empirical evidence that facial-analysis hiring tools carry real predictive weight—which makes the discrimination risks concrete, not hypothetical, and likely accelerates regulatory scrutiny.
Job Type, Not Politics, Explains Gap in AI Use at Work
A study using Gallup workforce data found Democrats report using AI at work more frequently than Republicans—27.8% vs 22.5% weekly or daily in late 2024. But here's the twist: the partisan gap vanishes when you control for education, industry, and occupation. Democrats aren't more AI-enthusiastic; they're just more likely to work in white-collar jobs where AI tools are already embedded. The study also found Democrats perceive higher job displacement risk from AI, likely because they're in roles with greater AI exposure.
Why it matters: This challenges the emerging narrative that AI adoption splits along political lines—it's really a workforce composition story, which matters for how companies think about training and adoption across different employee demographics.
What's Happening on Capitol Hill
Upcoming AI-related committee hearings
Wednesday, February 11
Building an AI-Ready America: Safer Workplaces Through Smarter Technology
House · Education and the Workforce Subcommittee on Workforce Protections (Hearing)
2175 Rayburn House Office Building
What's On The Pod
Some new podcast episodes
The Cognitive Revolution — AGI-Pilled Cyber Defense: Automating Digital Forensics w/ Asymmetric Security Founder Alexis Carlier