AI Development 2026: From Chatbots to Physical AI — and What the EU AI Act Means for HR
AI development is accelerating rapidly. We put it in perspective: where we stand on OpenAI's 5-level model, why Physical AI is the next revolution — and what the EU AI Act concretely changes for recruiting and people development.
Photo: Ekō (CC BY 2.0) via Wikimedia Commons
Part 1: The Five Levels of AI Development
In July 2024, OpenAI CEO Sam Altman presented an internal framework to his employees describing the path to artificial superintelligence in five levels. Bloomberg first reported on it on July 11, 2024. The model has since become one of the most widely cited reference frameworks for locating where we stand on the AI development curve.
The Five Levels
Chatbots
AI with language capabilities. ChatGPT, Claude and Gemini hold natural conversations, translate, summarize and generate text.
Reasoners
AI with PhD-level problem-solving capabilities. OpenAI's o1/o3, Claude's extended thinking and Gemini 2.5 Pro solve complex tasks through step-by-step reasoning.
Agents
AI that acts autonomously. Claude Code writes and tests software autonomously, Devin completes entire programming tasks, OpenAI Operator navigates the web. Some forecasts expect largely automated software development from 2027 onward, which could create a recursive, self-accelerating improvement loop.
Innovators
AI invents: scientific research, new materials, better algorithms, drugs. The transition from AGI to ASI — artificial superintelligence.
Organizations
AI runs complete organizations — coordinating employees, software agents and physical systems. Not science fiction: the infrastructure for this is being built right now.
Where do we stand today? At the start of 2026, we are firmly at Level 2 and at the beginning of Level 3. Reasoning models such as OpenAI's o3 and Claude's extended thinking reliably deliver complex analyses. At the same time, the first genuine agents are in production: Claude Code reached over 1 billion dollars in annual revenue by the end of 2025, Devin has a PR merge rate of 67%, and Anthropic's Model Context Protocol (MCP) was transferred to the Linux Foundation in March 2026 — a de-facto standard for communication between AI agents and tools.
Physical AI: The Second Axis
OpenAI's model describes a cognitive capability ladder: from language through thinking to acting, inventing and leading. But one dimension is missing: where does the AI act?
Physical AI — also called "Embodied AI" — describes systems that perceive the physical world, understand it and act within it. NVIDIA defines it as "autonomous machines that perform complex actions in the real world." These are no longer futuristic concepts: on March 16, 2026, Reuters reported that Skild AI and NVIDIA are deploying a universal "robot brain" on Foxconn's Blackwell GPU manufacturing lines in Houston — the first commercial large-scale application of generalized Physical AI.
This yields a two-axis model:
| Axis 1: Cognitive Autonomy | Axis 2: Embodiment |
|---|---|
| 1. Chatbots — Language | purely digital (text, code, decisions) |
| 2. Reasoners — Thinking | purely digital (text, code, decisions) |
| 3. Agents — Acting in software | 3b. Physical Agents — acting in the physical world |
| 4. Innovators — Inventing | AI designs better robots, materials, control systems |
| 5. Organizations — Leading | AI orchestrates fleets, factories, supply chains |
Physical AI: What is happening right now?
- NVIDIA is building the ecosystem: Cosmos (world models for robot training), Isaac Lab 3.0 (robot simulation), GR00T N1.7 (humanoid foundation model) and Omniverse as the "operating system for Physical AI." Partnerships already encompass over 2 million robots.
- Boston Dynamics presented the production version of Atlas at CES in January 2026 — the transition from research robot to enterprise product. Testing at Hyundai, planned volume: tens of thousands of units.
- Tesla Optimus Gen 3 went into production in February 2026. Still in learning mode, but target price: $20,000–$30,000 per unit.
- Figure AI is expanding alpha tests with Figure 03 for high-volume manufacturing.
- Skild AI (valuation: $14 billion) is developing a universal robot brain: "any robot, any task, one brain."
- China is advancing Physical AI at enormous speed: Unitree is already delivering commercially available humanoid robots with the G1 and H1 at a fraction of Western prices (from approximately $16,000). Agibot, Galbot and Fourier Intelligence are also scaling aggressively. The Chinese government has designated Physical AI a strategic priority — with the goal of achieving mass production of humanoid robots by 2027.
The compact formula: Physical AI begins where agents no longer merely operate software, but act in the real world through sensors and actuators. It is not a later replacement level, but the embodied extension of Levels 3–5.
Part 2: EU AI Act — What HR Needs to Know Now
While AI capabilities are developing exponentially, regulation is catching up. The EU AI Act has been in force since August 2024 and is being applied in phases. For HR, this is not an abstract topic: recruiting AI is explicitly classified as high-risk.
What is high-risk in recruiting?
Annex III of the AI Act lists "Employment, Workers Management and Access to Self-Employment" as a high-risk area. Concretely, this covers AI systems for:
- Targeted job advertisements — algorithmic targeting of job postings to specific audiences
- Analysis and filtering of applications — CV screening, matching, ranking
- Evaluation of candidates — video interview scoring, assessment AI
- Promotion, termination, performance evaluation — AI in personnel decisions
What is prohibited?
- Emotion recognition in the workplace — facial expression/tone-of-voice analysis in job interviews or at the workplace (narrow exceptions only for medical/safety purposes)
- Social scoring — evaluating individuals based on their social behavior with adverse consequences
- Biometric categorization — inferring sensitive attributes (ethnicity, religion, sexual orientation) from biometric data
This is particularly relevant because some recruiting tools historically marketed emotion analysis as an "assessment feature." German data protection authorities explicitly classify this as problematic.
Timeline: What applies when?
| Date | What becomes applicable |
|---|---|
| Feb. 2, 2025 | Prohibitions (emotion recognition, social scoring) + AI literacy obligation |
| Aug. 2, 2025 | Governance rules, GPAI rules |
| Aug. 2, 2026 | High-risk obligations fully applicable — this directly affects recruiting AI |
| Dec. 2027? | Discussed possible postponement (Digital Omnibus) — uncertain, do not plan on it |
The 5 Operational Obligations for HR
What does HR need to do concretely when recruiting AI is in use? Five areas of action:
1 AI Inventory and Classification
Create a recruiting AI register: every tool and feature that analyzes applications, ranks candidates, evaluates interview content or targets advertisements. Document the role: are you a deployer (user) or a provider (vendor)? Obligations shift if you substantially modify systems.
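Such a register can live in a spreadsheet, but a minimal structured sketch helps make the required fields explicit. The field names below are illustrative, not prescribed by the AI Act:

```python
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    DEPLOYER = "deployer"   # you use the system under your own authority
    PROVIDER = "provider"   # you develop it, or substantially modify it

@dataclass
class RecruitingAITool:
    name: str               # e.g. the CV screening module of your ATS
    function: str           # what it analyzes, ranks, evaluates or targets
    role: Role              # deployer vs. provider shifts your obligations
    high_risk: bool         # Annex III: employment-related AI
    vendor_contact: str     # who answers conformity questions

# Example register entries (hypothetical tools)
register = [
    RecruitingAITool("ATS ranking", "ranks applications against a job profile",
                     Role.DEPLOYER, high_risk=True, vendor_contact="dpo@vendor.example"),
    RecruitingAITool("Drafting assistant", "suggests wording for job ads",
                     Role.DEPLOYER, high_risk=False, vendor_contact="dpo@vendor.example"),
]

# First question in every review cycle: which entries are high-risk?
high_risk_tools = [t.name for t in register if t.high_risk]
```

The point of the structure is the classification question it forces for every tool and feature, not the particular technology used to store it.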
2 Human-in-the-Loop — for real
The AI Act requires human oversight by competent individuals. The German Data Protection Conference (DSK) emphasizes: "Merely formal involvement of a human is not sufficient." The operative design principle: no one-click rejection based on an AI score. The human needs genuine decision-making latitude, authority and time to review.
3 Logs and Documentation
Document AI outputs, review date, reviewer and reason for the decision. Retain automatically generated logs for at least 6 months (Art. 26 AI Act). For every AI-assisted decision: who reviewed, what was the AI recommendation, what was the human decision?
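A decision log that answers the who/what/when questions above could be sketched like this (field names are illustrative; the minimum retention period follows Art. 26(6) AI Act):

```python
from dataclasses import dataclass
from datetime import date, timedelta

# "At least 6 months" — keep longer where national law or litigation holds require it
RETENTION = timedelta(days=183)

@dataclass
class DecisionLogEntry:
    candidate_id: str        # pseudonymized reference, not the full application
    ai_recommendation: str   # what the system suggested (e.g. score, rank)
    reviewer: str            # the competent human who reviewed it
    review_date: date
    human_decision: str      # the decision actually taken
    reason: str              # why — especially when deviating from the AI

    def may_be_deleted(self, today: date) -> bool:
        """True once the minimum retention period has elapsed."""
        return today - self.review_date >= RETENTION

entry = DecisionLogEntry("cand-0042", "rank 3/120, score 0.87",
                         "j.doe (HR)", date(2026, 2, 1),
                         "invited to interview",
                         "score plausible, profile matches role requirements")
```

The `reason` field is the one most often missing in practice, and the one that demonstrates genuine human review rather than one-click confirmation.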
4 Transparency at the right moment
The AI notice belongs in the process, not in the rejection letter. Concretely: in the application form or at first communication. For chatbots or interview bots: a notice "you are interacting with AI" no later than the first interaction. "At the end in the rejection" is regularly too late.
5 Access and Explanation
Applicants have rights of access (GDPR: response within 1 month). In addition, the AI Act creates a right to an explanation of the role of the AI system in decisions with legal effect (Art. 86). Build a unified playbook: GDPR access request plus AI Act explanation in a single process.
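One way to operationalize the unified playbook is to derive both deliverables from a single intake date. A sketch, with the one-month GDPR deadline simplified to calendar-month arithmetic (Art. 12(3) GDPR also allows a two-month extension for complex requests, not modeled here):

```python
from datetime import date

def gdpr_deadline(received: date) -> date:
    """Respond within one month of receipt (simplified calendar arithmetic)."""
    month = received.month % 12 + 1
    year = received.year + (1 if received.month == 12 else 0)
    day = received.day
    # clamp the day for short months (e.g. Jan 31 -> Feb 28)
    while True:
        try:
            return date(year, month, day)
        except ValueError:
            day -= 1

def unified_response(received: date) -> dict:
    """One process: GDPR access plus AI Act Art. 86 explanation together."""
    return {
        "deadline": gdpr_deadline(received),
        "steps": [
            "confirm identity of the requester",
            "compile personal data held (GDPR access)",
            "explain the role of the AI system in the decision (Art. 86 AI Act)",
            "human review and sign-off before sending",
        ],
    }
```

Handling both rights in one workflow avoids answering the same applicant twice with inconsistent information.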
And the esc Potential Analyses?
Two products, one principle — the diagnostics are AI-free:
- The esc Potential Analysis measures metaprograms and value systems using a scientifically normed forced-choice procedure (50,000+ data sets, since 2009). The analysis can also rank candidates against an ideal profile — but this ranking is based on deterministic algorithms (normed comparison with the reference group), not on AI. No artificial intelligence is used to evaluate, score or interpret individuals.
- The esc Competency Development works with self-assessment and external assessment based on concrete behavioral anchors. The deviations between self-assessment and external assessment are calculated directly — without AI interpretation. Where AI is used — for example in creating competency profiles or formulating behavioral anchors — it is assisting AI: it supports the work process but does not evaluate individuals. This is not high-risk within the meaning of the AI Act.
In both products, AI is used exclusively and optionally in the workflow: when formulating text modules, structuring report elements or summarizing. Never in diagnostics, never in evaluation, never in the decision.
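To illustrate what a deterministic, AI-free ranking means in general terms (a generic sketch of a normed comparison, not the actual esc algorithm): raw scores are standardized against the reference group's norms, and candidates are ordered by their distance to an ideal profile. The same input always yields the same output, and nothing is learned or inferred by a model:

```python
import math

def z_scores(raw: list[float], ref_mean: list[float], ref_sd: list[float]) -> list[float]:
    """Standardize each dimension against the reference group's norm."""
    return [(x - m) / s for x, m, s in zip(raw, ref_mean, ref_sd)]

def distance_to_ideal(profile_z: list[float], ideal_z: list[float]) -> float:
    """Euclidean distance in standardized space: smaller means closer fit."""
    return math.sqrt(sum((p - i) ** 2 for p, i in zip(profile_z, ideal_z)))

def rank_candidates(candidates: dict[str, list[float]],
                    ref_mean: list[float], ref_sd: list[float],
                    ideal_z: list[float]) -> list[str]:
    """Fully deterministic ranking: no model, no training, no AI."""
    dist = {name: distance_to_ideal(z_scores(raw, ref_mean, ref_sd), ideal_z)
            for name, raw in candidates.items()}
    return sorted(dist, key=dist.get)

# Two dimensions; reference norm mean 50, sd 10; ideal profile at +1 SD on both
order = rank_candidates({"A": [60.0, 62.0], "B": [48.0, 51.0]},
                        ref_mean=[50.0, 50.0], ref_sd=[10.0, 10.0],
                        ideal_z=[1.0, 1.0])
```

Because every step is a fixed arithmetic rule over normed data, such a procedure is auditable line by line, which is precisely the property that distinguishes it from AI-based scoring.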
Conclusion: What to do now
AI development is accelerating on both axes — cognitive and physical. For HR this means:
- Understand where your own tools stand on the risk scale (assisting vs. high-risk)
- Check immediately that no emotion recognition is in use (prohibited since Feb. 2025)
- Prepare by August 2026: AI register, human-in-the-loop design, logging concept, transparency process, access playbook
- Don't wait: the possible postponement to December 2027 is uncertain and no reason for inaction
Those who use recruiting AI should understand compliance as a product: clear roles, genuine reviews, traceable documentation and clean transparency. Then AI not only strengthens efficiency, but also trust.
Setting up AI in HR the right way?
Our Potential Analyses and competency profiles use AI only where it supports — never in diagnostics. Scientifically normed, GDPR-compliant, AI-Act-ready.
Sources and further reading
- Bloomberg: "OpenAI Sets Levels to Track Progress Toward Superintelligent AI" (July 11, 2024)
- NVIDIA: What is Physical AI?
- Reuters: "Skild AI, Nvidia deploy robot brain on Blackwell assembly lines" (March 16, 2026)
- EU AI Act: Annex III — High-Risk AI Systems
- EU AI Act: Art. 26 — Deployer Obligations
- DSK: Guidance Note: AI and Data Protection (May 2024)
- EDPB: SME Guide — Respect individuals' rights
Related Articles
- AI & Psychology: When AI Models Need Therapy. What happens when we confront AI models with psychological tests?
- The Evolution of Artificial Intelligence: AGI Through Developmental Psychology. How developmental psychology models help us understand the path to AGI.
- AI Implementation Since 2022: Success Stories. Concrete examples of successful AI implementations in the DACH region and the USA.