CEO of One of the World’s Most Influential AI Labs Says: A Tidal Wave Is Coming — Let’s Get Ready

The awkward gap between “inside” and “outside”

Dario Amodei, CEO of Anthropic, has been unusually blunt in recent interviews: he’s warned that AI could eliminate around half of entry-level white-collar roles and push unemployment materially higher over the next one to five years—especially in areas like law, finance, consulting, and parts of tech. [Axios]
That warning is easy to dismiss as “Silicon Valley hype” until you notice the social asymmetry it exposes:
  • Inside the AI bubble, many people behave as if we’re nearing the “end of the exponential” (or at least the point where automation becomes unavoidable) on a timeline measured in months, not decades.
  • Outside the bubble, most people are still discussing AI as a novelty: a better search box, a nice writing helper, a party trick.
This article is an attempt to bridge that gap without drama. The core claim is not about machine consciousness or sci‑fi AGI. It’s about economic thresholds. (And for credibility: in his essay *Machines of Loving Grace*, Amodei mostly sketches a *long-run optimistic* future if powerful AI goes well; his *short-run* labor-shock warning has come primarily through interviews and public conversations rather than that utopian essay alone.) [darioamodei.com]

---

What changed lately: the “70% moment” in everyday work

Many people who spend time with modern coding agents describe a specific feeling: something shifted in the last few months. Not “the model is smarter,” but: *the workflow changed shape*. A useful way to describe it is the jump from 40% automation to 70% automation:
  • 40% feels like a helpful assistant: it drafts, suggests, autocompletes. You still do the “real work.”
  • 70% feels like a junior colleague who can execute a large chunk of the task end-to-end—your job becomes *directing, reviewing, and signing off.*
In software, you see this with “agentic” coding tools: rather than writing code line-by-line, you specify intent, let an agent implement, then you review and test. OpenAI’s releases around GPT‑5 and rapid Codex upgrades explicitly frame this as a shift toward long-running, end-to-end agent workflows (hours to days), not just autocomplete. [OpenAI]

And the same pattern shows up outside coding. “Knowledge plumbing” work is full of repetitive coordination that machines can now do surprisingly well. A concrete example helps:
  • Before: reconciling spreadsheets might mean *four hours* mapping columns, chasing definitions, fixing inconsistencies.
  • After: an AI can propose the mapping and reconciliation logic in minutes—and the human’s job becomes validating the logic and exceptions.
That’s the threshold effect: once AI handles the bulk of the sub-steps, the remaining human layer (review + judgment) requires far fewer hours.

---
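To make that review-plus-judgment layer concrete, here is a minimal sketch of the spreadsheet-reconciliation example. All data, column names, and the tolerance are hypothetical; `proposed_mapping` stands in for what an AI agent would draft, and the human’s remaining work is validating the mapping and the exceptions it surfaces.

```python
# Minimal reconciliation sketch. All data, column names, and the mapping are
# hypothetical; in practice an AI agent drafts the mapping, a human reviews it.

# Two exports of "the same" customer balances, with inconsistent schemas.
system_a = [
    {"cust_id": "C1", "balance_usd": 100.0},
    {"cust_id": "C2", "balance_usd": 250.0},
]
system_b = [
    {"customer": "C1", "amount": 100.0},
    {"customer": "C2", "amount": 240.0},  # deliberate discrepancy
]

# The column mapping an agent might propose (system_b field -> system_a field).
proposed_mapping = {"customer": "cust_id", "amount": "balance_usd"}

def reconcile(rows_a, rows_b, mapping, tolerance=0.01):
    """Return the IDs whose mapped values disagree beyond the tolerance."""
    b_by_id = {}
    for row in rows_b:
        normalized = {mapping[key]: value for key, value in row.items()}
        b_by_id[normalized["cust_id"]] = normalized
    exceptions = []
    for row in rows_a:
        match = b_by_id.get(row["cust_id"])
        if match is None or abs(match["balance_usd"] - row["balance_usd"]) > tolerance:
            exceptions.append(row["cust_id"])
    return exceptions

print(reconcile(system_a, system_b, proposed_mapping))  # ['C2']
```

The machine does the column mapping and row matching; the human only adjudicates the one flagged customer.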

It’s not about AGI

Our perspective is about task thresholds

The most useful reframing is: AI replaces tasks before it replaces jobs—and when enough tasks shift, job structures follow.

A foundational paper for non-technical readers is Eloundou et al. (“GPTs are GPTs”), which estimates that about 80% of the U.S. workforce has at least 10% of tasks exposed to LLMs, and roughly 19% has 50%+ task exposure. Exposure is not the same as full automation, but it tells you *where pressure concentrates first*. [arXiv]

This is why the “AGI vs not” debate can be economically irrelevant. You don’t need a machine that “thinks like a human.” You need a machine that reaches the commercial competence threshold for enough tasks that labor density collapses.

The missing link managers feel (and rarely say out loud): the apprenticeship gap

If AI automates the “doing” tasks juniors use to learn, how do we train the seniors of tomorrow? That’s not a philosophical question—it’s an operational one. (It’s also why “entry-level disruption” is not just a worker problem; it becomes a pipeline problem for firms.)

---
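The task-threshold framing can be sketched in a few lines. The jobs, task counts, and the 50% cutoff below are purely illustrative assumptions for this sketch, not figures from the paper:

```python
# Sketch of the task-threshold framing. Jobs and task counts are hypothetical;
# "exposed" just means an LLM could plausibly handle that sub-task.
jobs = {
    "paralegal":        {"exposed": 6, "total": 10},
    "sales engineer":   {"exposed": 3, "total": 10},
    "field technician": {"exposed": 1, "total": 10},
}

def exposure(job):
    # Share of a job's tasks that are LLM-exposed.
    return job["exposed"] / job["total"]

# Pressure concentrates first where the exposed-task share crosses a threshold,
# long before the whole job is automatable.
high_pressure = [name for name, job in jobs.items() if exposure(job) >= 0.5]
print(high_pressure)  # ['paralegal']
```

No job in the sketch is fully automatable, yet one of them already crosses the threshold where its structure comes under pressure.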

Why impact is not automatic

Adoption and incentives dominate capability

At this point, the conversation typically polarizes:
  • one side extrapolates model capability into societal disruption,
  • the other side points to errors and says “it won’t work.”
A more realistic stance comes from economists who separate what can be done from what will be deployed.

Acemoglu’s “Simple Macroeconomics of AI” argues, in effect, that economy-wide impact depends on how many tasks are profitably automated, how adoption diffuses, and what complementarities and constraints exist. Translation: the question isn’t “can AI do X?” but “will firms implement AI at scale for X, under real-world costs and liability?” [economics.mit.edu]

The OECD’s Employment Outlook chapter on AI and jobs makes a similar point from an institutional perspective: the net effect is theoretically ambiguous (displacement vs productivity vs new tasks), and so far there’s no clear evidence of a sudden collapse in labor demand attributable to AI—yet transitions matter and exposure is broad, including higher-educated workers. [OECD]

So the “edge” feeling can be true and the world can remain stable: capability can jump faster than adoption.

---

Seven plausible scenarios

Evaluation of potential impact

We built a 3‑dimension framework (AI capability × labor response × macro outcome) and narrowed it to seven coherent scenarios. Here they are without the cryptic labels:

  • Managed Compression (40%): AI works well; entry-level and routine work compresses; growth continues but becomes more unequal.
  • Labor Shock (20%): Automation outpaces institutional response; layoffs and hiring freezes hit faster than policy and new demand can absorb.
  • Slow Drift (15%): Progress plateaus earlier than expected; disruption is real but more gradual.
  • Productivity Boom (10%): Capability + adoption translate into rapid growth; demand expands faster than labor shrinks.
  • Harsh-but-Stable Transition (7%): Large displacement, but stabilizers (fiscal policy, new demand, redistribution) prevent chaos.
  • New Social Contract (5%): Very powerful systems force redistribution (UBI-like outcomes) over a short horizon.
  • Hype Recession (3%): Organizations cut too early based on hype; productivity doesn’t arrive fast enough; downturn follows.

Base case: Managed Compression. But the reason to model scenarios is not to be dramatic—it’s to avoid being blindsided by the tails.
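A scenario set like this should behave as a probability distribution. The snippet below uses only the weights from the list above, checks that they are exhaustive, and quantifies the “tail” mass outside the base case:

```python
# Scenario weights as listed in the framework above.
scenarios = {
    "Managed Compression": 0.40,
    "Labor Shock": 0.20,
    "Slow Drift": 0.15,
    "Productivity Boom": 0.10,
    "Harsh-but-Stable Transition": 0.07,
    "New Social Contract": 0.05,
    "Hype Recession": 0.03,
}

# A coherent scenario set must be exhaustive: weights sum to 1.
assert abs(sum(scenarios.values()) - 1.0) < 1e-9

# "Avoid being blindsided by the tails": probability mass outside the base case.
tail_mass = 1.0 - scenarios["Managed Compression"]
print(f"Tail probability: {tail_mass:.0%}")  # Tail probability: 60%
```

The point of the check is the last line: even with Managed Compression as the base case, a majority of the probability mass sits in the other six scenarios.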

What to do:

Preparation without panic

For individuals: reduce fragility, increase optionality
  • Build a runway (cash buffer sized to your volatility; many will want >12 months if exposed to routine cognitive work).
  • Lower fixed costs and avoid leverage that removes flexibility.
  • Move up the value chain: responsibility, judgment, relationships, domain expertise, and decision-making.
  • Treat AI literacy as basic literacy: learn to supervise and verify systems, not just “use a chatbot.”
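The runway bullet above is just arithmetic. A minimal sketch, where every number is a hypothetical personal input:

```python
# Runway sizing sketch; all figures are hypothetical personal inputs.
monthly_fixed_costs = 3000     # rent, insurance, debt service
monthly_variable_costs = 1200  # food, transport, discretionary floor
target_months = 14             # >12 if exposed to routine cognitive work

runway_target = (monthly_fixed_costs + monthly_variable_costs) * target_months
print(f"Cash buffer target: ${runway_target:,}")  # Cash buffer target: $58,800
```

Sizing the buffer off total monthly burn (fixed plus variable) is the conservative choice; cutting variable costs in a downturn then extends the runway rather than being required to survive it.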
For managers: audit decisions, not tasks
  • Audit your team’s work by decision ownership: who makes consequential calls vs who just executes instructions.
  • If juniors are mostly executing, they are vulnerable. Shift them into micro-management of AI agents (scoping, verifying, QA, exception handling) now.
  • Address the apprenticeship gap explicitly: if AI eats the junior learning path, create deliberate rotations and training artifacts, or your future senior bench collapses.
For investors: build a portfolio that survives the transition
(General framework, not personal financial advice.)
  • In “Managed Compression,” the consistent winners are often the productivity stack: compute, data centers, semis, energy/power, and platforms.
  • In “Labor Shock,” you want liquidity and defensiveness (quality balance sheets, shorter duration, diversification).
  • In “Productivity Boom,” you want optionality (innovation exposure), but avoid leverage that forces you to sell in volatility.

Further considerations

Ways to push the model further

The framework is useful, but it’s not complete. The most important extensions are:
  • Policy & regulation as its own axis. The EU AI Act entered into force in August 2024 and is scheduled to become fully applicable on 2 August 2026, with key exceptions (notably some high-risk regimes stretching later). [digital-strategy.ec.europa.eu] In the U.S., major executive actions in 2025 explicitly emphasized reducing barriers to AI innovation and pushing back on state-by-state regulation. [The White House]
  • Geopolitics & supply chains. Advanced chips, export controls, and licensing terms can accelerate or bottleneck capability diffusion (e.g., shifting policy around Nvidia’s H200 exports and guardrails). [Reuters]
  • Social acceptance & backlash. Public anxiety is rising: Pew found 50% of Americans more concerned than excited about AI in daily life and 57% rating societal risks as high; Gallup reports broad distrust in AI’s fairness. [Pew Research Center] Labor contracts are also beginning to encode AI protections (a preview of broader institutional friction). [Reuters]
  • Energy and environmental constraints. Data center growth is becoming a macro variable: EPRI has estimated data centers could consume up to 9% of U.S. electricity generation by 2030, making power infrastructure a binding constraint and an investment theme. [The Department of Energy's Energy.gov]
  • Better grounding for probabilities. McKinsey’s 2025 global survey reports a median of 30% of respondents expecting workforce decreases in business functions in the coming year due to AI—useful for anchoring scenario weights beyond intuition. [McKinsey & Company]
  • Upside strategies, not only defense. The model should explicitly include “offense”: new roles, new firms, and new categories created by AI—especially for people who can supervise systems and own decisions.
These don’t negate the thesis. They refine it.

---

Additional read

In case you want to dive deeper

  • Eloundou et al. (2023): task exposure of LLMs.
  • Brynjolfsson, Li & Raymond (2023/2025): productivity effects in the field.
  • Acemoglu (2024): macro impact depends on profitable automation + adoption.
  • OECD Employment Outlook (2023): broad exposure, ambiguous net effects, transitions matter.
  • Autor (2015): historically, tasks are both replaced and newly created; adjustment is uneven.
[1]: https://www.axios.com/2025/05/28/ai-jobs-white-collar-unemployment-anthropic "AI jobs danger: Sleepwalking into a white-collar bloodbath"
[2]: https://www.businessinsider.com/anthropic-ceo-dario-amodei-centaur-phase-of-software-engineering-jobs-2026-2 "Anthropic's CEO says we're in the 'centaur phase' of software engineering"
[3]: https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai "The state of AI in 2025: Agents, innovation, and ..."
[4]: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai "AI Act | Shaping Europe's digital future - European Union"
[5]: https://www.reuters.com/business/energy/data-centers-could-use-9-us-electricity-by-2030-research-institute-says-2024-05-29/ "Data centers could use 9% of US electricity by 2030, ..."
[6]: https://www.energy.gov/gdo/clean-energy-resources-meet-data-center-electricity-demand "Clean Energy Resources to Meet Data Center Electricity ..."

© 2026 CFMoto @ March