🧠 Brain wars by billionaires, and why it matters
AI Bear
August 19, 2025. Inside this week:
OpenAI backs Merge Labs to rival Neuralink
AI-powered antibiotic breakthroughs emerge from MIT
Google embeds ChatGPT-level AI in phones - offline
The bear breaks down the power moves, the breakthroughs, and what to watch 🐻
OpenAI backs Merge Labs to challenge Neuralink
✍️ Essentials
OpenAI is backing a new brain–computer interface startup, Merge Labs, co-founded by Sam Altman and Alex Blania. They’re raising $250 million at an $850 million valuation to take on Elon Musk’s Neuralink in the race toward “high-bandwidth brain interfaces.”
🐻 Bear’s take
This turns a rivalry into a direct confrontation. OpenAI is positioning Merge as a technical sibling to Neuralink, but leaner and better aligned with evolving AI stacks. For investors, it’s a signal that BCIs might be where the next revolution hides. For users, it’s an early warning: personalized AI may soon mean implanted, not installed.
🚨 Bear in mind
BCI pushes into murky ethical waters. Merge is lightly regulated, fast-moving, and backed by one of the most influential figures in AI. Safety, consent, and equitable access risk becoming early casualties unless guardrails are built first - before human minds go live on code.
MIT’s AI designs new antibiotics - no wet lab required
✍️ Essentials
Researchers at MIT trained AI models to design novel antibiotic molecules entirely in silico. When the predicted candidates were tested, several proved effective - accelerating drug discovery timelines.
🐻 Bear’s take
This is a biotech turbocharger. If discovery pipelines shrink from months to minutes, the next generation of therapeutics could come from code, not benchwork. That shifts power toward software teams and away from traditional pharma models.
🚨 Bear in mind
But virtual hits still need lab validation - false positives can waste millions in trials. And because code-made molecules are easy to share, dual-use risks (think bioweapons) grow quickly. Responsibility must keep pace with speed.
Google packs ChatGPT-level AI into your phone (offline!)
✍️ Essentials
Google has integrated ChatGPT-class language models directly into mobile devices - no internet connection required. Users now get advanced AI language features offline, with on-device models handling prompts and context natively.
🐻 Bear’s take
This flips the script on AI accessibility. Offline AI means privacy wins, latency drops, and AI features reach places with spotty networks. It also means fragmentation - different devices may exhibit different behavior, capabilities, and bias.
🚨 Bear in mind
On-device AI has to house massive models, so energy use and heat become real concerns. Updates arrive unevenly, and model drift turns into a local maintenance problem. Users could end up with AI assistants that are smart, but isolated and stale.
Quick Bites
Meta’s AI recruiting tool - a LinkedIn-style hiring module is gaining traction inside the company
Brazil eyes AI exports - government announces fund to foster local AI scale-ups
xAI files patent - Musk’s AI lab patents its own “reasoning over chains of thought” model