AI & Digitalization

AI Development 2026: From Chatbots to Physical AI — and What the EU AI Act Means for HR

AI development is accelerating rapidly. We put it in perspective: where we stand on OpenAI's 5-level model, why Physical AI is the next revolution — and what the EU AI Act concretely changes for recruiting and people development.

March 17, 2026 · Ralph Köbler · 8 min read
EU AI Act brochure in front of the European Parliament in Strasbourg. Photo: Ekō (CC BY 2.0) via Wikimedia Commons
In brief: In July 2024, OpenAI presented a 5-level model of AI development, from chatbots to AI-led organizations. At the start of 2026, we stand at the transition from Level 2 (Reasoners) to Level 3 (Agents). At the same time, Physical AI is emerging as a second development axis: AI that doesn't just act digitally, but in the physical world. In parallel, the EU AI Act is becoming relevant for HR: recruiting AI is classified as high-risk, with concrete obligations from August 2026.

Part 1: The Five Levels of AI Development

In July 2024, OpenAI CEO Sam Altman presented an internal framework to his employees describing the path to artificial superintelligence in five levels. Bloomberg first reported on it on July 11, 2024. The model has since become one of the most widely cited reference frameworks for understanding where we stand on the AI development curve.

The Five Levels

Level 1: Chatbots (2023/2024)

AI with language capabilities. ChatGPT, Claude and Gemini hold natural conversations, translate, summarize and generate text.

Level 2: Reasoners (2024/2025)

AI with PhD-level problem-solving capabilities. OpenAI's o1/o3, Claude's extended thinking and Gemini 2.5 Pro solve complex tasks through step-by-step reasoning.

Level 3: Agents (2026/2027, we are here)

AI that acts autonomously. Claude Code writes and tests software autonomously, Devin completes entire programming tasks, OpenAI Operator navigates the web. From 2027: fully automated software development creates recursive, exponential improvement.

Level 4: Innovators (from ~2028)

AI invents: scientific research, new materials, better algorithms, drugs. The transition from AGI to ASI (artificial superintelligence).

Level 5: Organizations (by ~2030)

AI runs complete organizations, coordinating employees, software agents and physical systems. Not science fiction: the infrastructure for this is being built right now.

Where do we stand today? At the start of 2026, we are firmly at Level 2 and at the beginning of Level 3. Reasoning models such as OpenAI's o3 and Claude's extended thinking reliably deliver complex analyses. At the same time, the first genuine agents are in production: Claude Code reached over 1 billion dollars in annual revenue by the end of 2025, Devin has a PR merge rate of 67%, and Anthropic's Model Context Protocol (MCP) was transferred to the Linux Foundation in March 2026 — a de facto standard for communication between AI agents and tools.
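
To illustrate what "communication between AI agents and tools" means in practice, here is a minimal sketch of an MCP-style tool call. It assumes only the JSON-RPC 2.0 message shape that MCP builds on; the tool name "search_candidates" and its arguments are hypothetical placeholders, not part of any real MCP server.

import json

# Minimal sketch of an MCP-style tool call (a JSON-RPC 2.0 request).
# The tool name and arguments below are hypothetical placeholders.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_candidates",
        "arguments": {"query": "Python developer", "limit": 5},
    },
}

print(json.dumps(request, indent=2))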

Physical AI: The Second Axis

OpenAI's model describes a cognitive capability ladder: from language through thinking to acting, inventing and leading. But one dimension is missing: where does the AI act?

Physical AI — also called "Embodied AI" — describes systems that perceive the physical world, understand it and act within it. NVIDIA defines it as "autonomous machines that perform complex actions in the real world." These are no longer futuristic concepts: on March 16, 2026, Reuters reported that Skild AI and NVIDIA are deploying a universal "robot brain" on Foxconn's Blackwell GPU manufacturing lines in Houston — the first commercial large-scale application of generalized Physical AI.

The key insight: Physical AI is not a sixth level after "Organizations." It is a second axis — the embodiment of AI. Regular agents act in software; Physical AI agents act through sensors, actuators, robotic arms and vehicles in the real world.

This yields a two-axis model:

Axis 1: Cognitive Autonomy | Axis 2: Embodiment
1. Chatbots (Language) | purely digital (text, code, decisions)
2. Reasoners (Thinking) | purely digital (text, code, decisions)
3. Agents (Acting in software) | 3b. Physical Agents (Acting in the physical world)
4. Innovators (Inventing) | AI designs better robots, materials, control systems
5. Organizations (Leading) | AI orchestrates fleets, factories, supply chains

Physical AI: What is happening right now?

  • NVIDIA is building the ecosystem: Cosmos (world models for robot training), Isaac Lab 3.0 (robot simulation), GR00T N1.7 (humanoid foundation model) and Omniverse as the "operating system for Physical AI." Partnerships already encompass over 2 million robots.
  • Boston Dynamics presented the production version of Atlas at CES in January 2026 — the transition from research robot to enterprise product. Testing at Hyundai, planned volume: tens of thousands of units.
  • Tesla Optimus Gen 3 went into production in February 2026. Still in learning mode, but target price: $20,000–$30,000 per unit.
  • Figure AI is expanding alpha tests with Figure 03 for high-volume manufacturing.
  • Skild AI (valuation: $14 billion) is developing a universal robot brain: "any robot, any task, one brain."
  • China is advancing Physical AI at enormous speed: Unitree is already delivering commercially available humanoid robots with the G1 and H1 at a fraction of Western prices (from approximately $16,000). Agibot, Galbot and Fourier Intelligence are also scaling aggressively. The Chinese government has designated Physical AI a strategic priority — with the goal of achieving mass production of humanoid robots by 2027.

The compact formula: Physical AI begins where agents no longer merely operate software, but act in the real world through sensors and actuators. It is not a later replacement level, but the embodied extension of Levels 3–5.


Part 2: EU AI Act — What HR Needs to Know Now

While AI capabilities are developing exponentially, regulation is catching up. The EU AI Act has been in force since August 2024 and is being applied in phases. For HR, this is not an abstract topic: recruiting AI is explicitly classified as high-risk.

What is high-risk in recruiting?

Annex III of the AI Act lists "Employment, workers' management and access to self-employment" as a high-risk area. Concretely, this covers AI systems for:

  • Targeted job advertisements — algorithmically targeted placement of job postings
  • Analysis and filtering of applications — CV screening, matching, ranking
  • Evaluation of candidates — video interview scoring, assessment AI
  • Promotion, termination, performance evaluation — AI in personnel decisions
The important distinction: Not every AI in HR is automatically high-risk. A text generator that drafts job postings is typically "assisting AI." But as soon as AI evaluates, scores, ranks or pre-filters candidates — and this result feeds into personnel decisions — you are in the high-risk area.
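
Put as a simple decision rule, the distinction looks roughly like this; the function name and parameters are illustrative shorthand, not the wording of the Act.

# Sketch of the assisting-vs-high-risk distinction as a decision rule.
# Function name and parameters are illustrative assumptions.
def hr_ai_risk_class(evaluates_candidates: bool, feeds_into_decisions: bool) -> str:
    if evaluates_candidates and feeds_into_decisions:
        return "high-risk (Annex III: employment)"
    return "assisting AI (typically non-critical, but still worth documenting)"

print(hr_ai_risk_class(False, False))  # text generator drafting job postings
print(hr_ai_risk_class(True, True))    # CV ranking that feeds into personnel decisions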

What is prohibited?

Prohibited since February 2, 2025:
  • Emotion recognition in the workplace — facial expression/tone-of-voice analysis in job interviews or at the workplace (narrow exceptions only for medical/safety purposes)
  • Social scoring — evaluating individuals based on their social behavior with adverse consequences
  • Biometric categorization — inferring sensitive attributes (ethnicity, religion, sexual orientation) from biometric data

This is particularly relevant because some recruiting tools historically marketed emotion analysis as an "assessment feature." German data protection authorities explicitly classify this as problematic.

Timeline: What applies when?

Date | What becomes applicable
Feb. 2, 2025 | Prohibitions (emotion recognition, social scoring) + AI literacy obligation
Aug. 2, 2025 | Governance rules, GPAI rules
Aug. 2, 2026 | High-risk obligations fully applicable; this directly affects recruiting AI
Dec. 2027 (?) | Possible postponement under discussion (Digital Omnibus); uncertain, do not plan on it

The 5 Operational Obligations for HR

What does HR need to do concretely when recruiting AI is in use? Five areas of action:

1 AI Inventory and Classification

Create a recruiting AI register: every tool and feature that analyzes applications, ranks candidates, evaluates interview content or targets advertisements. Document the role: are you a deployer (user) or a provider (vendor)? Obligations shift if you substantially modify systems.
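
What such a register entry could capture, shown as a minimal sketch in Python; the field names and example values are assumptions, not a prescribed schema.

from dataclasses import dataclass, field

# Sketch of one entry in a recruiting AI register.
# Field names and example values are illustrative assumptions.
@dataclass
class AIRegisterEntry:
    tool: str                      # product or feature name
    purpose: str                   # what the AI does in the process
    evaluates_individuals: bool    # does it score, rank or filter candidates?
    role: str                      # "deployer" or "provider"
    risk_class: str                # e.g. "assisting" or "high-risk"
    modifications: list[str] = field(default_factory=list)  # substantial in-house changes

entry = AIRegisterEntry(
    tool="CV screening module",
    purpose="Pre-filters applications against a job profile",
    evaluates_individuals=True,
    role="deployer",
    risk_class="high-risk",
)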

2 Human-in-the-Loop — for real

The AI Act requires human oversight by competent individuals. The German Data Protection Conference (DSK) emphasizes: "Merely formal involvement of a human is not sufficient." The operative design principle: no one-click rejection based on an AI score. The human needs genuine decision-making latitude, authority and time to review.
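
A minimal sketch of what "no one-click rejection" can mean technically: the AI score is recorded as advisory input, and a rejection cannot be finalized without a named reviewer, a reason and a review timestamp. The field names and validation logic are assumptions, not a prescribed design.

from datetime import datetime, timezone

# Sketch: a rejection can only be finalized once a human review is documented.
# Field names and this validation logic are illustrative assumptions.
def finalize_rejection(ai_score: float, review: dict) -> dict:
    required = ("reviewer", "reason", "reviewed_at")
    missing = [key for key in required if not review.get(key)]
    if missing:
        raise ValueError(f"Human review incomplete, missing: {missing}")
    return {
        "decision": "rejected",
        "ai_score": ai_score,  # advisory input, not the decision itself
        **review,
        "logged_at": datetime.now(timezone.utc).isoformat(),
    }

# finalize_rejection(ai_score=0.31, review={})  # raises: the AI score alone is not enough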

3 Logs and Documentation

Document AI outputs, review date, reviewer and reason for the decision. Retain automatically generated logs for at least 6 months (Art. 26 AI Act). For every AI-assisted decision: who reviewed, what was the AI recommendation, what was the human decision?
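
A sketch of the kind of record this implies, including a check against the six-month minimum retention; the field names are assumptions, not a mandated format.

from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=183)  # at least six months (Art. 26 AI Act), approximated in days

# Illustrative decision log entry; field names and values are assumptions.
log_entry = {
    "candidate_id": "A-1042",
    "ai_recommendation": "rank 7 of 52",
    "human_decision": "invite to interview",
    "reviewer": "jane.doe",
    "reviewed_at": datetime.now(timezone.utc),
}

def may_be_deleted(entry: dict, now: datetime) -> bool:
    # Allow deletion only once the minimum retention period has passed.
    return now - entry["reviewed_at"] >= RETENTION

print(may_be_deleted(log_entry, datetime.now(timezone.utc)))  # False right after logging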

4 Transparency at the right moment

The AI notice belongs in the process, not in the rejection letter. Concretely: in the application form or at the first communication. For chatbots or interview bots: a notice that "you are interacting with AI" no later than the first interaction. A notice that only appears at the end, in the rejection, is as a rule too late.

5 Access and Explanation

Applicants have rights of access (GDPR: response within 1 month). In addition, the AI Act creates a right to an explanation of the role of the AI system in decisions with legal effect (Art. 86). Build a unified playbook: GDPR access request plus AI Act explanation in a single process.
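
A minimal sketch of such a unified playbook entry, with the one-month GDPR deadline approximated as 30 days; the field names are assumptions.

from datetime import date, timedelta

# Sketch of a combined access-and-explanation response record.
# Field names are assumptions; the deadline follows Art. 12(3) GDPR
# (one month, approximated here as 30 days).
request_received = date(2026, 9, 1)
response = {
    "gdpr_access": "copy of the personal data processed in the application",
    "ai_act_explanation": "role of the AI system in the decision (Art. 86 AI Act)",
    "due_by": request_received + timedelta(days=30),
}
print(response["due_by"])  # 2026-10-01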

And the esc Potential Analyses?

Two products, one principle — the diagnostics are AI-free:

  • The esc Potential Analysis measures metaprograms and value systems using a scientifically normed forced-choice procedure (50,000+ data sets, since 2009). The analysis can also rank candidates against an ideal profile — but this ranking is based on deterministic algorithms (normed comparison with the reference group), not on AI. No artificial intelligence is used to evaluate, score or interpret individuals.
  • The esc Competency Development works with self-assessment and external assessment based on concrete behavioral anchors. The deviations between self-assessment and external assessment are calculated directly — without AI interpretation. Where AI is used — for example in creating competency profiles or formulating behavioral anchors — it is assisting AI: it supports the work process but does not evaluate individuals. This is not high-risk within the meaning of the AI Act.

In both products, AI is used exclusively and optionally in the workflow: when formulating text modules, structuring report elements or summarizing. Never in diagnostics, never in evaluation, never in the decision.
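
To make the distinction concrete: a ranking against an ideal profile can be a purely deterministic, norm-based calculation with no model evaluating the person. The sketch below illustrates the idea with a z-score distance; the dimensions, norms and numbers are invented for illustration and are not the actual esc procedure.

# Sketch of a deterministic, norm-based ranking (no AI involved).
# Dimensions, norms and scores are invented for illustration only.
reference_mean = {"detail_focus": 50.0, "options": 50.0, "proactive": 50.0}
reference_sd   = {"detail_focus": 10.0, "options": 10.0, "proactive": 10.0}
ideal_profile  = {"detail_focus": 60.0, "options": 45.0, "proactive": 65.0}

def z_scores(raw: dict) -> dict:
    # Normed comparison: express each value relative to the reference group.
    return {k: (raw[k] - reference_mean[k]) / reference_sd[k] for k in raw}

def distance_to_ideal(raw: dict) -> float:
    # Smaller distance means a closer fit to the ideal profile (a fixed rule).
    zi, zc = z_scores(ideal_profile), z_scores(raw)
    return sum((zc[k] - zi[k]) ** 2 for k in zc) ** 0.5

candidates = {
    "candidate_1": {"detail_focus": 62.0, "options": 48.0, "proactive": 58.0},
    "candidate_2": {"detail_focus": 40.0, "options": 70.0, "proactive": 45.0},
}
ranking = sorted(candidates, key=lambda name: distance_to_ideal(candidates[name]))
print(ranking)  # ['candidate_1', 'candidate_2']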

The rule of thumb for HR tools: AI in the workflow = assisting, typically non-critical. AI in diagnostics or evaluation = regulated, notifiable, high-risk. It only becomes critical when AI evaluates individuals rather than merely supporting the work process.

Conclusion: What to do now

AI development is accelerating on both axes — cognitive and physical. For HR this means:

  1. Understand where your own tools stand on the risk scale (assisting vs. high-risk)
  2. Check immediately: no emotion recognition in use? (Prohibited since Feb. 2025)
  3. Prepare by August 2026: AI register, human-in-the-loop design, logging concept, transparency process, access playbook
  4. Don't wait: the possible postponement to December 2027 is uncertain and no reason for inaction

Anyone who uses recruiting AI should treat compliance as a product: clear roles, genuine reviews, traceable documentation and clean transparency. Then AI strengthens not only efficiency, but also trust.

Setting up AI in HR the right way?

Our Potential Analyses and competency profiles use AI only where it supports — never in diagnostics. Scientifically normed, GDPR-compliant, AI-Act-ready.


