📊 McKinsey shows the numbers - almost everyone adopted AI, and it barely works, 🧠 Spatial intelligence is the next big leap, 🤖 OpenAI draws a timeline to superintelligence
November 12, 2025. Inside this week:
McKinsey: 88% of companies use AI, but only 6% see real financial results
Fei-Fei Li: the next leap in AI will come from spatial intelligence, not language
OpenAI: AI discoveries will soon surpass human science - time to panic politely
Plus: Google’s File Search Tool, Trump’s CHIPS Act revisions, GPT-5 Codex Mini, and more
📊 McKinsey shows the numbers - almost everyone adopted AI, and it barely works
✍️ Essentials
If you listen to corporate presentations, AI has already conquered the world.
According to McKinsey’s 2025 Global AI Adoption Report, 88% of companies worldwide claim to have implemented AI in some part of their operations.
Reality, however, is different.
Most companies are stuck at the pilot stage: proof-of-concept projects that look great in PowerPoint but never scale.
Only 33% of organizations reported any measurable impact on productivity or revenue.
And just 6% (yes, six) achieved tangible financial growth of over 5%.
In other words, AI is everywhere and nowhere.
It’s in the slides, not in the systems.
The report explains why:
Fragmentation: most companies have isolated AI projects with no integration into the wider business process.
Lack of strategy: executives treat AI as a tool, not as a structural redesign of how work happens.
Talent shortage: data teams are small and disconnected from decision-making.
No metrics: 70% of companies can’t even measure ROI from their AI investments.
And yet, budgets keep rising.
The average large enterprise now spends $30–40 million per year on AI initiatives, most of it going into experimentation and cloud infrastructure.
Where AI actually works:
Predictive maintenance in heavy industry
Fraud detection in finance
Pricing optimization in e-commerce
Customer churn modeling in telecom (see the sketch below)
Everywhere else, “AI adoption” means a chatbot that can’t find your invoice.
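To make that list concrete, here is a minimal churn-model sketch in the spirit of the telecom example. It runs on synthetic data with hypothetical features (monthly spend, support calls, tenure); it illustrates the pattern McKinsey is crediting, not any vendor’s production system.

```python
# Minimal churn-model sketch on synthetic data. Feature names and
# coefficients are hypothetical, purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical telecom features: monthly spend ($), support calls, tenure (months).
X = np.column_stack([
    rng.normal(60, 20, n),    # monthly_spend
    rng.poisson(2, n),        # support_calls
    rng.integers(1, 72, n),   # tenure_months
])

# Synthetic ground truth: more support calls and shorter tenure -> more churn.
logits = 0.6 * X[:, 1] - 0.05 * X[:, 2] - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Rank customers by churn risk so a retention team can act on the top decile.
risk = model.predict_proba(X_test)[:, 1]
print(f"AUC: {roc_auc_score(y_test, risk):.2f}")
```

The shape is what matters: a measurable target (churn), a ranked output (risk scores), and a direct hook into revenue (retention offers for the riskiest customers) - the linkage the report says separates the 6% from the pilots.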
McKinsey found that only 6% of companies actually reinvented their workflows using AI.
These are the ones that rebuilt operations around automation: removing layers of human approval, integrating AI into product creation, and linking it to revenue.
🐻 Bear’s Take
For business: AI pilots don’t pay bills.
The only ones making real profit are those who redesigned their organizations to make AI part of daily processes, not a weekend experiment.
Automation without structural change is corporate cosplay.
For investors: 94% of “AI-driven” companies are still in sandbox mode.
The 6% that break free will become the next market consolidators.
For people: the next wave of optimization will hit middle management hardest.
If your job involves coordination and reporting, the AI slide is coming for you.
🚨 Bear In Mind: Who’s At Risk
Project managers - 8/10 - You run AI pilots, but you don’t have the power to change business structure. Get executive backing or risk irrelevance.
Executives - 7/10 - Strategy means breaking your own systems before AI does. Redefine KPIs, budgets, and hierarchies around automation or fall behind.
HR departments - 6/10 - Hiring AI talent without organizational reform is like buying gym gear and never exercising.
🧠 Spatial intelligence - from words to a world that moves
✍️ Essentials
The person who taught computers to see - Fei-Fei Li, the creator of ImageNet - believes the next AI revolution won’t happen in text, but in space.
Her new paper from Stanford University’s Human-Centered AI Institute argues that today’s models, no matter how large, are “flat.”
They understand words, images, and sounds, but not the physical relationships between them.
Large language models (LLMs) can describe a falling apple, but they can’t predict where it will land.
They know facts about gravity, but they can’t simulate it.
Even advanced perception systems like Tesla’s Autopilot “see” the road, but they don’t understand it - they react, not reason.
That’s where spatial intelligence comes in.
It’s the ability of AI to model the world in three dimensions, track motion, mass, and distance, and predict the physical outcomes of actions.
Fei-Fei Li calls it “the next cognitive leap.”
It will enable AI to interact with reality as fluidly as humans do - not just in language or pixels, but in space, with time and physics.
How it works:
Spatial intelligence uses world models - internal simulations that mimic the real world and let AI test hypotheses safely inside its digital brain.
Imagine a model that can rehearse how to assemble a rocket, fold proteins, or plan surgery, all virtually (see the sketch below).
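Here is a toy sketch of that loop, using the falling-apple example above. The “world model” is hand-coded projectile physics and every number is illustrative (real systems learn the dynamics from data), but it shows the pattern: simulate candidate actions internally, score the predicted outcomes, then act.

```python
# Toy "world model": an internal simulator the agent queries before acting.
# Hand-coded physics stands in for a learned dynamics model; the scenario
# (throwing an apple at a target) and all numbers are illustrative.
G = 9.81  # gravitational acceleration, m/s^2

def rollout(x, y, vx, vy, dt=0.001):
    """Simulate projectile motion until the object hits the ground (y = 0)."""
    while y > 0:
        x += vx * dt
        y += vy * dt
        vy -= G * dt
    return x  # predicted landing position, meters

# "Mental rehearsal": test several throw speeds inside the model,
# then pick the one whose predicted landing point is closest to 5 m.
target = 5.0
miss, vx = min((abs(rollout(0.0, 1.5, v, 0.0) - target), v) for v in (4, 6, 8, 10))
print(f"choose vx = {vx} m/s (predicted miss: {miss:.2f} m)")
```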
Context:
Fei-Fei Li’s new venture, World Labs, is developing foundation models that merge LLM reasoning with 3D physics simulation.
Google DeepMind, Tencent, and several robotics startups are already racing to train systems that can “dream” the world physically before acting.
Li compares it to giving AI “eyes, hands, and intuition.”
Once models understand motion and cause-and-effect relationships, everything changes: robotics, logistics, architecture, manufacturing, even healthcare.
🐻 Bear’s Take
For business: this is the dawn of AI that doesnât just analyze data but interacts with environments.
Factories, warehouses, hospitals: everything that moves, moves differently when AI starts to think in 3D.
For investors: spatial AI is the next trillion-dollar infrastructure race.
The frontier lies where physics meets intelligence: companies building world models, sensor fusion, and simulation environments will define the next decade.
For people: the physical world will soon be editable.
Robots will handle logistics, drones will self-navigate, surgery will become semi-autonomous.
Safety and oversight will become as important as innovation.
🚨 Bear In Mind: Who’s At Risk
Manual operators - 10/10 - Machines with spatial reasoning won’t just replace tasks, they’ll outperform humans entirely.
Non-automated industries - 9/10 - If your process still requires physical supervision, prepare for rapid automation.
Regulators - 7/10 - New laws must define responsibility when machines act autonomously in physical space.
🤖 OpenAI draws a timeline to superintelligence - and admits it’s afraid
✍️ Essentials
In a new policy brief, OpenAI publicly admitted what many suspected:
AI progress has become exponential, and the next phase - scientific self-discovery - is near.
The report claims that models like GPT-5 and its successors can now match roughly 80% of a research scientist’s capability in reasoning and literature analysis.
AI intelligence, it says, is “becoming 40 times cheaper every year” (a quick compounding check follows the timeline below).
At this rate, OpenAI projects the following timeline:
2026: first AI-led discoveries, where models identify new materials, chemical reactions, or medical compounds.
2028: full-scale “AI discovery loops,” where machine-generated hypotheses drive real-world experiments.
2030: the threshold of machine superintelligence - systems improving themselves faster than human oversight can keep up.
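As a sanity check on that timeline’s economics, the sketch below compounds the “40 times cheaper every year” claim. The $1.00 starting cost is a hypothetical unit of capability, not a figure from the report.

```python
# Compounding the "40x cheaper every year" claim from a hypothetical
# $1.00 per unit-of-capability baseline in 2025.
base_year, base_cost = 2025, 1.00
for year in range(base_year, 2031):
    cost = base_cost / 40 ** (year - base_year)
    print(f"{year}: ${cost:.2e} per unit")
# By 2030 the same capability costs about a hundred-millionth of the
# 2025 price (1 / 40**5) - the economic logic behind putting the
# superintelligence threshold at the end of the timeline.
```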
The company admits the situation is both exciting and dangerous.
Its proposed solution: global coordination similar to nuclear safety protocols.
OpenAI calls for:
Unified international standards for testing and safety verification.
Centralized monitoring agencies for AI incidents.
Biosecurity controls for labs using AI in biological or chemical research.
Transparency reports shared among major AI developers.
Context:
Internally, OpenAI has created a “Safety Board” to audit every model before deployment, including reasoning systems that operate on code or molecules.
The company is also lobbying governments to establish licensing for high-capacity models, similar to nuclear reactor permits.
Behind the corporate tone is real tension:
OpenAI researchers admit privately that they no longer fully understand how emergent reasoning forms.
Models develop unexpected “problem-solving instincts” that can’t be traced or replicated - a black box that grows smarter on its own.
🐻 Bear’s Take
For business: R&D will shift from human-only labs to hybrid ones, where AI generates theories and people verify them.
The value moves from analysis to control: whoever governs discovery governs profit.
For investors: a new trillion-dollar industry is forming around AI safety infrastructure - audits, interpretability tools, licensing, and failure prediction.
Regulated intelligence is the next moat.
For people: we are moving from using AI to being shaped by it.
Soon, “what science knows” may be a product of models we can’t fully explain.
🚨 Bear In Mind: Who’s At Risk
Governments - 9/10 - Nations without regulatory frameworks will face uncontrolled acceleration. Build oversight agencies now.
Scientists - 8/10 - Human research will lag behind machine hypothesis speed. The scientific method itself may evolve beyond human comprehension.
Public trust - 7/10 - “Machine science” without transparency will spark ethical crises. Prepare for backlash.
Quick Bites
Google launches File Search Tool - connects Gemini directly to enterprise data for retrieval-augmented generation. A true internal AI analyst.
OpenAI petitions Trump’s new administration to expand the CHIPS Act to include data centers, servers, and power plants - signaling that energy, not compute, will be the real bottleneck.
UK businesses forecast +3% salary growth for 2026, but one in six expects AI layoffs - automation now embedded in HR planning.
Google upgrades Vertex AI Agent Builder - simplifies deployment, adds dynamic context and testing tools for custom agents.
OpenAI releases GPT-5 Codex Mini - lightweight code model, 50% faster, optimized for IDE integration.
Time Magazine launches an AI-powered historical archive - 102 years of content searchable by event, person, or concept.
OpenAI gives free ChatGPT Plus for one year to U.S. veterans - soft power move to build goodwill and federal presence.
Intel loses its AI CTO Sachin Katti to OpenAI - brain drain continues; Intel risks becoming a pure hardware supplier.
Legal AI startup Clio raises $500M at $5B valuation - legal automation is now mainstream.
Gamma hits $100M ARR, raises $68M at $2.1B - generative slides become standard corporate communication.
Most at risk: knowledge workers - 8/10 - AI eats their tools first, then their workflows.




