🇺🇸 Trump throws an idea to speed up the US economy, 🤖 Anthropic drops its best release and crushes prices, 🧠 Sutskever pours cold water on GPU investors
December 3, 2025. Inside this week:
Trump suddenly announces a national AI platform that should put scientific research on a production line
Anthropic rolls out its best release and collapses prices
Sutskever pours cold water on investors
Anthropic publishes the numbers: productivity really does grow
🇺🇸 Trump suddenly offers an idea that could speed up the US economy
✍️ Essentials
Trump suddenly announces something genuinely important: a national AI platform that should put scientific research on a production line.
Trump signed an executive order directing the US Department of Energy to build a unified AI platform for scientific tasks of “national importance”: biotech, energy, engineering. The system will be loaded with decades of government science data and will connect 17 federal research labs and their supercomputers.
The platform must host AI agents that set up experiments, test hypotheses, and build predictive models across chemistry, biology, and engineering. The White House is already selling it as “the largest coordination of research assets since the Apollo program”.
Market context: The US is officially establishing “government AI” as a strategic weapon, alongside its space and nuclear programs. The private sector is pulled in automatically: Big Tech, clouds, chips, and AI labs. This means huge contracts, joint centers, and tightly controlled access to results.
🐻 Bear’s take
For business: a new wave of government AI demand begins around science and energy. If you build infrastructure, simulations, LLMs for R&D, or “lab as a service” - now is the time to look at how to fit into the DOE and national lab chain.
For investors: this becomes an anchor for long budgets for chips, clouds, scientific AI platforms, and biotech or energy startups capable of working with government data and regulation.
For people: fundamental things like medicine, materials, and energy technologies may start appearing faster and more accurately. But the risk grows that the most powerful AI science tools will be locked tightly inside government and corporate perimeters.
🚨 Bear in mind: who’s at risk
Biotech startups - 8/10 - national AI sharply accelerates molecule discovery, raises the bar, and competing with the DOE plus supercomputers will be difficult - you must focus on niches and partnerships with large labs.
Private AI labs - 7/10 - the government becomes the largest customer of scientific models, requirements for safety and certification tighten - you must prepare a compliant stack and lobbying support.
🤖 Claude Opus 4.5: Anthropic lowers the flagship from Olympus into real business
✍️ Essentials
Imagine you are a CTO who spent the last six months looking at Opus thinking: “wow, of course it’s smart, but the price is painful”. You test Gemini, GPT-5.1, Sonnet, juggling price against quality. And then Anthropic rolls out a new version that is both smarter and dramatically cheaper.
Anthropic released Claude Opus 4.5 - a new flagship that fights in the same league as GPT-5.1 and Gemini 3. The model breaks the 80 percent threshold on SWE-Bench Verified on real software engineering tasks and sets records in tool use, reasoning, and agent tasks. According to benchmarks, Opus 4.5 matches or outperforms Gemini 3, and Anthropic sells it as their “most stable and safest” model stack.
Architecturally, Opus is no longer a single hero but a coordinator. It orchestrates teams of cheaper Haiku models - essentially a boxed multi-agent system. Meanwhile Anthropic slashes the price: Opus 4.5 is about 66 percent cheaper than Opus 4.1 and significantly more efficient in tokens and compute.
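The scale of that price cut is easy to sanity-check with a rough cost sketch. The per-million-token list prices below ($15 input / $75 output for Opus 4.1, $5 / $25 for Opus 4.5) are the publicly announced rates at launch; the monthly token volumes are made-up illustration numbers:

```python
def monthly_cost(input_tokens_m, output_tokens_m, price_in, price_out):
    """Cost in dollars for a month of usage, volumes in millions of tokens."""
    return input_tokens_m * price_in + output_tokens_m * price_out

# Hypothetical workload: 200M input tokens, 40M output tokens per month.
old = monthly_cost(200, 40, 15, 75)   # Opus 4.1 list prices
new = monthly_cost(200, 40, 5, 25)    # Opus 4.5 list prices

print(f"Opus 4.1: ${old:,.0f}  Opus 4.5: ${new:,.0f}")
print(f"Savings: {1 - new / old:.0%}")
```

Because both input and output prices dropped by the same factor, the roughly two-thirds saving holds regardless of the workload mix you assume.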
They ship the full package: unlimited chat length, Claude Code on desktop, expanded Claude for Chrome and Excel.
Market context: the release lands in the same week as GPT-5.1 Pro and Gemini 3. The frontier is a dense brawl. Models compete not only on IQ, but also on price per million tokens and agent friendliness. Anthropic was long criticized for a too-expensive Opus - now they clearly move into the zone where “you can set the flagship as default”, not just for elite tasks.
🐻 Bear’s take
For business: Opus 4.5 becomes a real candidate for your “main working engine” for code, agent workflows, and complex operations. You can rebuild the stack: keep the expensive GPT slot for rare scenarios, move everything else into the Opus plus Haiku combo.
For investors: frontier models move away from the logic “expensive means smart”. Anthropic shows you can give +performance and -66 percent price. Provider margins tighten from competition, but the market expands to those who previously could not afford top models.
For people: more intelligent assistants for developers and product teams that can finally be used without fear of burning the budget. For end users - more stable and capable AI features because flagship models stop being a luxury.
🚨 Bear in mind: who’s at risk
Dev teams without an AI stack - 7/10 - competitors will put Opus 4.5 as the brain of agents on top of your own software - you must decide where to integrate such models inside your product instead of waiting for “market stabilization”.
📈 Anthropic: “Yes, AI really speeds up work, and the numbers are already big”
✍️ Essentials
Imagine you are a manager who hears two camps every quarter: one shouting “AI will save the economy”, the other shouting “AI changes nothing”. Then Anthropic publishes a study on a huge real dataset where you see not slogans but exact percentages for professions and tasks.
Anthropic processed 100,000 anonymized conversations through Clio - their private pipeline - and matched the tasks to federal labor statistics. The estimate: mass AI adoption could double US productivity growth, to about +1.8 percent per year.
The time savings are even starker: Claude cuts execution time by around 80 percent. A task that takes 90 minutes manually turns into 15 to 20 minutes.
The largest contribution to overall productivity comes from developers - 19 percent. Then operations managers, marketing, and support. The biggest time savings: educational materials - minus 96 percent, research tasks - minus 91 percent, executive admin support - minus 87 percent.
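The headline figures above reduce to one line of arithmetic. A minimal sketch - the 90-minute baseline is the example task quoted above; applying that same baseline to the per-category percentages is our own illustration, not a figure from the study:

```python
def remaining_minutes(manual_minutes, reduction):
    """Time left after an AI-assisted cut (reduction is a fraction, e.g. 0.80)."""
    return manual_minutes * (1 - reduction)

# The headline example: an 80% reduction on a 90-minute task.
print(f"{remaining_minutes(90, 0.80):.0f} min")  # ~18 min, inside the quoted 15-20 range

# Applying the per-category reductions to the same 90-minute baseline:
for task, cut in [("educational materials", 0.96),
                  ("research tasks", 0.91),
                  ("executive admin support", 0.87)]:
    print(f"{task}: 90 min -> {remaining_minutes(90, cut):.1f} min")
```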
Market context: there is an ongoing argument about whether AI provides real value or only presentations. Anthropic shows numbers from real work scenarios, not benchmarks. But there is a hole: the study does not answer the employment question, and their CEO publicly warns about serious risks of professional displacement.
🐻 Bear’s take
For business: you can calculate ROI not “by feel” but by specific percentages per role. If you have content, research, support, or training tasks - AI closes months of work.
For investors: doubling productivity growth is not hype but a real restructuring of the labor market. Companies that save white-collar time will increase in value.
For people: tasks that used to consume a day will take an hour. But at the same time the conversation about which roles disappear first intensifies.
🚨 Bear in mind: who’s at risk
Office roles doing writing, research, or admin - 8/10 - AI now covers 80 percent of the time these tasks used to take; you must shift into coordination, product, or quality control.
🧠 Sutskever walks into a podcast and says: “The age of scaling is ending”
✍️ Essentials
Imagine you are an investor who spent the last two years signing checks for data centers, GPUs, and energy. Everything revolves around “who will build the bigger cluster”. And then Sutskever - the man who launched half the industry - comes out and says: enough. Future breakthroughs will come not from hardware but from new research ideas.
Sutskever, in a big interview on Dwarkesh’s podcast, said that 2020 to 2025 was “the age of scaling”, but we are already hitting limits. The next jumps will come not from compute, but from new approaches in architectures and training. According to Sutskever, “superhuman learning” may appear in 5 to 20 years. Early ASI must be built “with care for sentient beings”.
He also revealed more about SSI: the company chose a “different technical path” to superintelligence and positions itself as a purely research firm rather than another cluster builder. SSI is raising a round at a valuation of around 32 billion dollars and has already rejected an acquisition offer from Meta. The departure of one cofounder is their only serious talent loss.
Market context: the industry is pumping billions into hardware. Everyone goes toward “more tokens, more GPUs”. And against this background, the main ideologist of scaling laws says: it will soon not be enough.
🐻 Bear’s take
For business: in the horizon of 2 to 3 years demand may start for “new research AI infrastructure”, not simply GPU purchases. Teams building methods, simulators, and training innovations may step out of the shadows.
For investors: a 32B valuation for a company with no product shows that the market is ready to bet on “new approaches” as aggressively as it bet on compute. But if Sutskever is right, companies sitting only on scaling will fall in multiples.
For people: if SSI really builds AI that learns like a superhuman and remains stable, this can accelerate the transition to useful and reliable assistants. But the window is years, not months.
🚨 Bear in mind: who’s at risk
Cluster builders - 7/10 - if the “age of scaling” really winds down, margins on GPU and cloud shrink; you must invest in your own research centers and new training methods.
Mid-tier companies building “yet another LLM” - 8/10 - if the leader of the industry says “new approaches matter more than compute”, old recipes stop working; you must explore architectures or look for a narrow scientific niche.
Quick bites
Nvidia defends itself against noise around Google TPU - They say they are a generation ahead and sell “fungibility” - one platform for any model or framework.
Opus 4.5 engineering exam - The model beats human candidates on Anthropic’s real 2-hour engineering test.
Suno signs Warner Music - Generative music now uses licensed tracks and artist voices.
Gemini 3 Pro hits 130 IQ on Tracking AI - New maximum, but still not “magic”.
Tencent releases HunyuanOCR - 1B open model for OCR and document understanding.
Perplexity launches AI shopping - Free shopping mode with PayPal checkout.
Altman and Ive finalize AI device design - Launch planned in under two years.
Microsoft releases Fara-7B - Lightweight website navigation agent.
Court blocks OpenAI from using “Cameo” - The trademark injunction takes effect immediately.
Amazon commits up to 50B for US gov data centers - AWS locked in as primary US gov compute for 10+ years.
Exa releases Exa 2.1 - Major search upgrade for agents, including Fast, Auto, Deep modes.
Artificial Analysis launches CritPt - Physics benchmark where top models solve under 10 percent.