🤖 Oops, DeepSeek did it again, 🎬 video AI crawls toward cinema, 🎥 edits replace reshoots
December 16, 2025. Inside this week:
DeepSeek comes back and breaks the pricing logic for frontier models.
Runway Gen-4.5 gets dangerously close to cinema-level video.
Chinese Kling O1 turns video editing into text instructions.
🎬 Runway Gen-4.5 gets close to cinema level
✍️ Essentials
Imagine this scenario. A big shoot. Final scene. Everything is already filmed. Sets are dismantled. One actor is in a cast. The main actress has a new romance and is already in another country. Then during editing it turns out the climax looks like bad YouTube CGI. There is no way to reshoot. The extra budget is gone. And then on a call someone quietly says: “What if we try running the final scene through Runway?”
Runway released Gen-4.5, a new video model that jumped to first place in the text-to-video ranking by Artificial Analysis. Internally the prototype was called Whisper Thunder. The model understands physics and motion much better. Liquids, fabrics, hair, facial expressions, and human movement stay consistent frame to frame instead of melting apart.
Stylistically the model can do different looks, but its strongest mode is cinematic realism - at a level where even in a freeze-frame it is hard to tell whether the shot was generated or filmed.
Inside Runway the model was nicknamed David, and co-founder Cristóbal Valenzuela openly leans into the story of “little David against the Hollywood Goliaths”. Runway is already part of creator workflows, but Gen-4.5 comes very close to the line where it is no longer just for concepts and previs, but can be used in serious production.
🐻 Bear’s take
For business, brands, studios, and agencies should pull Gen-4.5 into the very beginning of the pipeline - storyboards, previs, test shots, scene variants - and leave live shooting and VFX for only the most important parts. Savings on reshoots and cheaper creative experimentation become a noticeable budget line.
For investors, AI video is forming into a standalone market where not only big tech can compete on quality. This opens space for licensing, integrations into editing software, services on top of Runway, and potential acquisitions by media conglomerates and software players.
For people, making a music video, a short film, or a cinematic product clip becomes possible almost from a laptop. At the same time, the risk of believable staged videos grows, where the human eye can no longer distinguish generated footage from real shooting.
🚨 Bear in mind: who’s at risk
Mid-size video production studios - 8/10 - money for previs, concepts, and test clips moves into AI. For the client it is easier to press a button than to pay for a full shooting day - response: move into strategy, complex staging, full-cycle production, and build hybrid “people plus AI” pipelines.
Freelance video makers and editors - 7/10 - basic visual beauty and motion turn into a prompt function, not hours on a timeline - response: grow into creative producers, learn to design scenes and processes around AI, not just operate tools.
🤖 DeepSeek returns with V3.2 and Speciale
✍️ Essentials
Imagine Sam Altman sitting in a huge office, spinning new GPT-5 prices in his hands. In other towers people calculate Gemini and Sonnet pricing. Everyone almost forgot how DeepSeek with R1 stormed onto the scene and stressed half the market. And then, as Britney sang, oops, they did it again.
Chinese DeepSeek released two reasoning models - V3.2 and V3.2-Speciale - both at 685B parameters. The standard V3.2 catches up with GPT-5, Sonnet 4.5, and Gemini 3 Pro in math, tool use, and code. The heavy Speciale beats them on a number of tasks and already claims gold-medal performance at the 2025 International Mathematical Olympiad (IMO) and the International Olympiad in Informatics (IOI).
Pricing is outright trolling. V3.2 costs $0.28 per million input tokens and $0.42 per million output tokens. Gemini 3 Pro sits at $2 and $12. GPT-5.1 at $1.25 and $10. Sonnet 4.5 at $3 and $15. The gap is severalfold. On top of that - MIT license, weights on Hugging Face, take it and run it yourself.
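To make the gap concrete, here is a quick back-of-the-envelope comparison for a sample workload, using only the per-million-token prices quoted above (the workload size is an illustrative assumption; always check the vendors' current price pages):

```python
# Per-million-token prices as quoted in the text: (input $/M, output $/M).
PRICES = {
    "DeepSeek V3.2": (0.28, 0.42),
    "Gemini 3 Pro": (2.00, 12.00),
    "GPT-5.1": (1.25, 10.00),
    "Sonnet 4.5": (3.00, 15.00),
}

def workload_cost(model, input_m=100, output_m=20):
    """USD cost for input_m million input tokens and output_m million output tokens."""
    price_in, price_out = PRICES[model]
    return input_m * price_in + output_m * price_out

# Hypothetical month: 100M input tokens, 20M output tokens.
for model in PRICES:
    print(f"{model}: ${workload_cost(model):,.2f}")
```

On this workload DeepSeek comes out around $36 versus roughly $325-$600 for the Western models - an order-of-magnitude gap, driven mostly by output-token pricing.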
The market has seen this before with R1. Back then DeepSeek triggered new discussions about chip export restrictions to China. V3.2 shows this is not a one-off firework but a stable strategy - frontier performance, open weights, and aggressive pricing that breaks the logic of “pay a premium for a Western brand”.
🐻 Bear’s take
For business, you can radically cut costs for all AI features currently running on GPT or Gemini. DeepSeek is worth pulling at least into experiments and then into production and self-hosted setups.
For investors, business models built on “we just sell an expensive general API” look weaker. Value shifts into vertical products, integrations, and infrastructure around multi-model stacks.
For people, strong assistants for coding, olympiad-level math, and complex problem solving become mass-market and almost free. Competition for “smart tasks” grows.
🚨 Bear in mind: who’s at risk
US AI platforms - 8/10 - Chinese open source presses on quality and kills on price. Margins must be justified by product and ecosystem, not marketing.
Junior developers and analysts - 7/10 - reasoning models absorb more middle-level tasks. You must urgently grow product thinking, domain expertise, and work “on systems”, not “inside systems”.
🎥 Chinese Kling O1 turns video into plasticine
✍️ Essentials
Imagine you are a production studio. The clip is already shot. The client is happy until lawyers and PR arrive. “Remove all people in the background. Make it night instead of day. Replace this character with another but keep everything else the same.” Reshooting is impossible. The budget is gone. This used to be hell. Now Chinese Kling O1 says: describe it in text and I will fix it.
Kuaishou launched Kling O1, a new video system that can both generate and edit video inside one model. It accepts up to seven inputs at once - images, video, subjects, text, and references to actions or camera angles - and outputs a 3 to 10 second clip. You can take existing footage and write “remove pedestrians”, “make night lighting”, “move the scene into rain”, while keeping the same characters and composition.
It supports image references, object references, camera motion, start and end frames, and multiple characters in one scene. In Kuaishou’s internal tests, Kling O1 beats Google Veo 3.1 and Runway Aleph on reference accuracy and edit precision.
This is the same kind of leap Nano Banana made this year for images. “Change this slightly here” becomes a normal request, not a task for a separate department.
🐻 Bear’s take
For business, agencies and studios get editing, VFX, and frame cleanup in one button. You can promise clients more revisions without reshoots and without killing margins. Text-based editing fundamentally changes post-production economics.
For investors, AI video moves into “operating system for content” mode. Not just generation, but deep editing tools. This becomes a platform market where SaaS wrappers live for marketing, advertising, and e-commerce.
For people, making promo videos, product videos, or cinematic clips becomes easier. Trust in “this is how it really happened” footage becomes harder.
🚨 Bear in mind: who’s at risk
Small and mid post-production studios - 8/10 - tasks like cleanup, color fixes, replacements collapse into a single AI interface - response: move into creative consulting and build pipelines on top of such models instead of competing manually.
Brand legal and PR teams - 7/10 - more easy-to-produce but hard-to-verify video appears - response: build verification, monitoring, and reaction processes and include deepfake video in risk planning.
Quick bites
Nvidia invests $2B into Synopsys - joint acceleration of chip design with AI shortens the cycle from idea to silicon and makes catching up more expensive.
Accenture deploys ChatGPT Enterprise at scale - consulting turns into factories for rolling out AI agents per function.
Black Forest Labs raises $300M at $3.25B valuation - strong foundational image models are valued separately from meme generators.
OpenAI takes a stake in Thrive Holdings - a tight circle of big enterprise money forms around OpenAI.
Tim Sweeney says “Made with AI” labels are meaningless - AI becomes a standard production layer, not a marketing badge.