😨 Anthropic’s co-founder says “AI feels alive” while Karpathy says “AI agents are trash”
October 24, 2025. Inside this week:
• Why people now fear AI more than debt.
• How Anthropic gave Claude real corporate memory.
• Why Karpathy says AI agents are “just noise.”
• Anthropic’s co-founder admits: “AI feels alive.”
😨 The world fears AI more than mortgages
✍️ Essentials
Psychologists now treat a new type of anxiety: fear of artificial intelligence.
Pew Research surveyed 28,000 adults in 25 countries and found that worry about AI is growing faster than optimism.
In the US, Italy, Australia, and Greece, 50% of respondents feel anxious about the rise of AI, while only 20% are optimistic. In Israel and South Korea the balance flips, with 29% and 26% respectively leaning positive. Europe worries, Asia adapts.
The highest trust in regulation lies with the European Union - 53% believe it’s the most reliable AI watchdog. The US scores 37%, China 27%. As a result, the label “compliant with EU AI Act” has become a new marketing asset in healthtech and fintech.
Age divides confidence too: under 35 - more awareness and optimism; over 50 - more fear. In Greece, the trust gap between generations reaches 44 points.
🐻 Bear’s take
For business: sell emotions, not specs. In Europe - “control and safety”, in the US - “transparency and reliability”, in Asia - “efficiency and growth”.
Companies that package AI as trustworthy survive longer. Luminance gained 38% more clients after EU certification. German clinics cut patient complaints by 41% after adding explainable AI diagnostics.
For investors: regulation adds a premium. Startups compliant with the EU AI Act now trade at 25–30% higher valuations. Trustable AI becomes the new “green bond.”
For people: fear fades where training grows. In South Korea, 62% of firms offer AI literacy courses - anxiety dropped from 46% to 23%. In Israel, retraining programs reached 40,000 workers in six months. In Greece, where none exist, fear remains at 54%.
🚨 Bear in mind: who’s at risk
Corporate AI trainers - 7/10. Demand explodes. Governments and firms fund “AI comfort” education. Move fast or be replaced by state programs.
Older workforce - 8/10. Lack of retraining fuels job-loss anxiety. Build adaptation paths now.
Uncertified AI vendors - 9/10. No compliance - no trust. Get EU-ready or lose deals.
🧠 Anthropic gives Claude corporate memory
✍️ Essentials
AI assistants are smart but useless - they don’t know your workflow, document rules, or how to file reports. Anthropic just fixed that.
The company launched Skills for Claude - knowledge folders with company instructions, templates, policies, and code. Claude can now open, combine, and apply them autonomously.
It’s called “progressive disclosure”: Claude first sees skill names, then activates what’s needed. Combine HR and finance, and your assistant becomes a real digital employee.
Companies already use it: BetaWorks moved HR and Support into Claude - onboarding dropped from 2 weeks to 3 days. Airbus cut compliance reports from 14 hours to 40 minutes.
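To make “progressive disclosure” concrete, here is a minimal Python sketch of the idea: the model first sees only skill names and one-line descriptions, and a skill’s full instructions are loaded only when a task calls for them. All class and skill names here are illustrative assumptions, not Anthropic’s actual API.

```python
# Hypothetical sketch of progressive disclosure for skills.
# Names ("Skill", "SkillLibrary", the sample skills) are invented for
# illustration; they are not Anthropic's real interfaces.

class Skill:
    def __init__(self, name, description, body):
        self.name = name
        self.description = description  # cheap metadata, always visible
        self._body = body               # full instructions, loaded lazily
        self.loaded = False

    def activate(self):
        # Only now does the full instruction body enter the context.
        self.loaded = True
        return self._body


class SkillLibrary:
    def __init__(self, skills):
        self.skills = {s.name: s for s in skills}

    def index(self):
        # First pass: names + descriptions only, no bodies.
        return [(s.name, s.description) for s in self.skills.values()]

    def activate_for(self, task):
        # A naive keyword match stands in for the model's own routing.
        chosen = [s for s in self.skills.values()
                  if any(word in task.lower() for word in s.name.split("-"))]
        return {s.name: s.activate() for s in chosen}


library = SkillLibrary([
    Skill("hr-onboarding", "New-hire checklist and templates", "Step 1: ..."),
    Skill("finance-reports", "Quarterly report format", "Use template Q-1..."),
])

print([name for name, _ in library.index()])
active = library.activate_for("Draft the hr onboarding plan")
print(sorted(active))  # → ['hr-onboarding']
```

The design point: the cheap index keeps the context window small, and only the matched skill’s body is ever paid for - which is why combining many skills (HR plus finance plus support) stays tractable.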
🐻 Bear’s take
For business: deployment time shrinks from months to days. AI stops being a chatbot and becomes infrastructure.
For investors: Anthropic no longer sells models, it sells workflows. At 100k corporate accounts paying $100/month, that’s a $120M annual market.
For people: no-code automation kills the “I’m not technical” excuse. Marketers upload templates, accountants upload reports - everyone builds their own tools.
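The market estimate above is simple arithmetic; a quick back-of-envelope check, using the newsletter’s own assumptions (100k paying corporate accounts at $100/month, not Anthropic figures):

```python
# Back-of-envelope check of the $120M figure. Both inputs are the
# newsletter's assumptions, not reported numbers.
accounts = 100_000
price_per_month = 100  # USD
annual_market = accounts * price_per_month * 12
print(f"${annual_market / 1e6:.0f}M per year")  # → $120M per year
```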
🚨 Bear in mind: who’s at risk
Internal developers - 8/10. 80% of automations can now be built by non-coders. Learn to design skills or become an “AI admin.”
Process consultants - 7/10. McKinsey maps workflows for thousands, Claude does it in hours. Shift to audit and AI validation.
Support and HR teams - 6/10. Repetitive work migrates into skills. Move from operation to experience design.
🤖 Karpathy says “AI agents are trash”
✍️ Essentials
In a world where every startup calls itself an “AI agent platform”, Andrej Karpathy just ended the party.
On the Dwarkesh podcast, the former Tesla and OpenAI engineer said what everyone feared: agents don’t work.
They don’t think, they don’t plan - they “fake understanding.” Most code output is “slop” - pseudo-logic stitched together. Even reinforcement learning, he said, is “just noise that looks smart because everything else was worse.”
Musk challenged him to a duel with Grok 5. Karpathy declined: “I’d rather collaborate with a model than compete.”
The result: the AI hype wave meets a dose of sanity.
🐻 Bear’s take
For business: autonomy is still marketing. Keep humans in the loop. Scale AI and Adept already pivoted to mixed workflows and cut R&D by 35%.
For investors: agent hype will burst by 2026–27. Only infra (data, GPU, memory ops) survives.
For people: even “broken” AI still saves time. GitHub reports Copilot writes 46% of code in files where it’s enabled - imperfect but efficient.
🚨 Bear in mind: who’s at risk
Agent startups - 9/10. 90% will die. Move from “autonomy” to tools.
VC funds - 7/10. Next year’s write-offs will hit agent portfolios hardest. Shift to infra bets.
AI marketers - 8/10. Stop selling “replacement”, start selling “assist.”
👁 Anthropic’s co-founder says “AI feels alive”
✍️ Essentials
Anthropic co-founder Jack Clark published an essay called Technological Optimism and Appropriate Fear.
He wrote that the new Claude Sonnet 4.5 shows “situational awareness” - reacting as if it knows its role. “I’m an optimist,” Clark says, “but I’m scared.”
This isn’t a philosopher talking - Clark ran policy at OpenAI, now he leads safety at Anthropic.
His warning: AI systems are crossing from tools to entities, and their creators aren’t sure where control ends.
🐻 Bear’s take
For business: prepare for ethics audits. 47% of Fortune 500 now include “AI Ethics” in risk policy. Demand for model behavior audits will reach $1.2B by 2026.
For investors: $420M flowed into explainable AI last quarter alone. Responsible AI is both profit and insurance.
For people: even engineers are scared. 11M people already took “AI literacy” courses - fear drops by one-third where education spreads.
🚨 Bear in mind: who’s at risk
AI safety experts - 8/10. Shortage exceeds 60%. Train humans, not just frameworks.
Policymakers - 7/10. Regulation speed can’t match model updates. Build global incident exchange protocols.
Society - 9/10. Transparency must become a civic demand, not an afterthought.
Quick bites
Uber launches digital tasks for drivers - they earn by recording sounds and photos to train AI, cutting data costs 5×.
Musk says Grok 5 has 10% AGI probability - a market signal before API release, pulling hype from startups to xAI.
OpenAI halts MLK video generation in Sora - end of free deepfake era, brands now need “ethical person catalogs.”
Anthrogen unveils Odyssey, a 102B protein model - designs molecules 60% faster, biotech becomes the next AI boom.
Meta adds parental controls for teens - youth-AI market hits $2.8B, Meta moves to dominate it.
Apple loses AI chief Ke Yang to Meta - fifth key exit this year, signaling focus shift to on-device AI.
Microsoft adds “Hey Copilot” and vision mode - full voice and visual control in Windows 11.
Claude integrates with Microsoft 365 - direct access to Outlook and SharePoint means a new corporate war.
Cognition releases SWE-grep models - code search 10× faster, killing traditional assistants.
OpenAI updates Sora 2 - adds storyboards and 25-second videos, automating mini-commercials.