Why Trusted Data and Better Decisions Will Define the Next Decade

An outlook by Jochen Werne – January 2026

Prologue: Intelligence as the New Infrastructure of Power

History offers a consistent lesson: technologies do not change the world by their existence alone. They change it when they reorganise power, coordination and judgment.

The printing press destabilised authority before it empowered enlightenment. Railways unified markets before bankrupting investors. Electricity transformed productivity only after grids, standards and institutions emerged. In every case, societies that prospered were not those with the earliest inventions, but those that built institutions, rules and decision systems around them.

Artificial intelligence now enters history in exactly this role. It is not another tool. It is becoming an infrastructure of cognition—reshaping how decisions are made, how risks are assessed and how authority is exercised.

As we move into 2026, the decisive question is no longer whether AI will transform economies and societies. That is already beyond dispute. The real question is this:

Who will control decision-making, on what data, under which rules, and with what level of human judgment?

This essay argues that the coming decade will not be won by the most spectacular models, but by those who master data, governance and fact-based decisioning.

1. 2026: The End of AI Innocence

The years between 2022 and 2024 were marked by astonishment. Generative AI systems such as OpenAI’s ChatGPT, Google’s Gemini and China’s DeepSeek spoke fluently, generated images, wrote code and appeared to reason. Public discourse oscillated between euphoria and existential fear. By 2025, that mood began to shift. By 2026, we reach what can be described as the end of AI innocence.

Three developments converge.

First, the technical limits of current AI architectures are becoming visible. Performance gains no longer scale linearly with size. Reliability remains uneven. Energy and infrastructure costs rise sharply.

Second, governance is hardening. What began as ethical guidance is evolving into enforceable regulation—particularly in Europe, but increasingly worldwide.

Third, enterprises are moving from experimentation to mission-critical deployment. AI is no longer confined to innovation labs; it is entering credit decisions, industrial control systems, healthcare workflows and public administration.

Figure: AI Development, 1950–2025

History suggests this sequence is inevitable. Every general-purpose technology passes through a speculative phase before institutional discipline asserts itself. AI is now at that inflection point.

2. From Language Illusions to World Models

A consolidated view of the evolution of AI

After decades of AI development, including the long stagnation of the AI winters, the recent boom rested on a central illusion: that intelligence could be scaled mechanically, with more data, more parameters and more compute inevitably producing cognition. That assumption is now under strain.

Large language models have proven extraordinarily capable at pattern recognition and linguistic synthesis. Yet they remain structurally limited when confronted with causality, sequence and physical reality. They excel at describing the world, but struggle to understand it.

This insight has profound implications. Intelligence, as history and neuroscience remind us, is not primarily about eloquence. It is about anticipation, judgment and consequence. Humans do not merely predict; they imagine outcomes, simulate futures and decide under uncertainty.

Recent research suggests that these qualities cannot emerge from language alone. As a result, AI research is shifting away from monolithic architectures toward hybrid systems that combine language models with state-based reasoning, symbolic representations, knowledge graphs and domain-specific logic¹.

Currently the most promising frontier is the development of so-called world models: AI systems that represent reality spatially, temporally and causally. These systems are designed not to guess the next word, but to ask a more consequential question: “If I act, what happens next?”² This marks the transition from statistical correlation to operational foresight.
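To make the contrast concrete, consider the difference between predicting the next word and predicting the next state. The following sketch is illustrative Python only: the State class, the toy inventory dynamics and the planning helpers are invented for this example and do not represent any of the research systems cited here. It shows the core pattern of a world model, namely simulating the consequences of candidate actions before committing to one.

```python
from dataclasses import dataclass
from typing import Callable, List

# Illustrative toy only: a world model predicts the next state, not the next word.

@dataclass(frozen=True)
class State:
    inventory: int  # units in stock
    demand: int     # units consumed per step

Action = int  # units reordered in one step

def transition(state: State, action: Action) -> State:
    """Toy causal dynamics: 'If I act, what happens next?'"""
    stock = state.inventory + action - state.demand
    return State(inventory=max(stock, 0), demand=state.demand)

def rollout(state: State, plan: List[Action]) -> State:
    """Simulate a whole sequence of actions before committing to any of them."""
    for action in plan:
        state = transition(state, action)
    return state

def best_plan(state: State, candidates: List[List[Action]],
              score: Callable[[State], float]) -> List[Action]:
    """Pick the plan whose simulated end state scores highest."""
    return max(candidates, key=lambda plan: score(rollout(state, plan)))

# Usage: prefer the plan whose simulated future stays closest to a target stock of 8.
start = State(inventory=10, demand=4)
plans = [[0, 0, 0], [4, 4, 4], [2, 2, 2]]
print(best_plan(start, plans, score=lambda s: -abs(s.inventory - 8)))  # [4, 4, 4]
```

The design point is that such a system is judged on the futures it simulates, not on the fluency of its output: exactly the shift from statistical correlation to operational foresight described above.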

Equally important is the rise of multimodal intelligence. Language alone is insufficient for interacting with the physical world. Video, sensor data, images and time-series information are becoming central inputs. This mirrors human cognition, which integrates multiple sensory streams into a coherent mental model³.

Source: Siemens Press Room 2026 – Collaboration for full-fledged Digital Twin Manufacturing Sites

For enterprises, this evolution matters deeply. Decisions about credit, fraud, supply chains, manufacturing or safety depend on state, sequence and causation, not linguistic plausibility. AI that cannot reason about these dimensions cannot be entrusted with real authority.

3. Governance: From Ethical Debate to Strategic Infrastructure

Power without governance has always produced instability. AI is no exception.

As AI systems influence ever more consequential decisions, governance is moving from abstract ethics to concrete infrastructure. The European AI Code of Practice—while debated—illustrates a broader historical pattern: societies do not suppress powerful technologies; they domesticate them⁴.


Well-designed governance does not suffocate innovation. On the contrary, it enables scale by creating trust. Financial markets expanded after disclosure rules. Aviation flourished after safety regimes. Digital commerce accelerated after cybersecurity standards emerged.

AI governance follows the same logic. Transparency, accountability, auditability and human oversight are not constraints; they are preconditions for legitimacy. Without them, AI remains confined to low-risk applications. With them, it becomes a foundation of economic and social systems.

For decision-centric organisations, governance must be embedded into architecture, not appended after deployment. Models, data and decisions must be traceable, explainable and contestable.
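What “traceable, explainable and contestable” can mean in practice is easiest to see as a data structure. The sketch below is a hypothetical decision record in Python; its field names and contest workflow are assumptions made for illustration, not a real product schema or a regulatory requirement.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Dict, List, Optional

# Hypothetical sketch: every automated decision carries its own audit trail.

@dataclass
class DecisionRecord:
    decision_id: str
    model_version: str               # traceable: which model version decided
    input_lineage: List[str]         # traceable: where the input data came from
    outcome: str
    explanation: Dict[str, float]    # explainable: factor -> contribution
    human_reviewer: Optional[str] = None  # accountable: named human oversight
    contested: bool = False               # contestable: the subject can appeal
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    def contest(self, reviewer: str) -> None:
        """Route a contested outcome to a named human reviewer."""
        self.contested = True
        self.human_reviewer = reviewer
```

Because each decision carries its lineage, model version and explanation from the moment it is made, audits and appeals become lookups rather than forensic reconstructions. That is what embedding governance into architecture, rather than appending it after deployment, amounts to.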

4. The Moral Dimension: Intelligence Without Values?

Beyond engineering and regulation lies a deeper question: what kind of intelligence are we building, and for whose benefit?

In Genesis: Artificial Intelligence, Hope and the Human Spirit, Henry Kissinger, Eric Schmidt and Craig Mundie frame AI as a civilisational test rather than a technical race⁵. Their argument is historically grounded: societies that fail to align power with moral frameworks eventually face backlash or decline.

AI systems do not possess values. They inherit them—implicitly or explicitly—from data, design choices and institutional context. If human dignity, fairness and accountability are not deliberately embedded, they will not emerge spontaneously.

For decisioning systems, this is not abstract philosophy. Decisions shape lives. Access to credit, insurance, healthcare or employment depends on them. Systems that cannot explain or justify their outcomes will lose public legitimacy, regardless of technical sophistication.

5. A Historian’s Caution: The Bubble Question, or From AI Boom to Institutional Reality

Every technological revolution produces excess before it produces equilibrium. This is not a flaw of innovation; it is a feature of human behaviour.

Economic historian Niall Ferguson frames the current AI surge through a long historical lens, comparing it to canal manias, railway booms, and early telecommunications revolutions. His warning is precise and empirically grounded:

The recent AI boom resembles earlier infrastructure manias – periods of extraordinary promise accompanied by speculative excess. What matters is not the technology itself, but whether sustainable institutions follow.⁶

History supports this view. Railways transformed the world, but only after thousands of miles of redundant track were written off and speculative capital destroyed. Electrification reshaped productivity, but only once grids, pricing models and regulatory oversight stabilised investment. The internet survived the dot-com crash because its underlying utility was real, even if its early valuations were not.

Sir Niall Ferguson, MA, DPhil, FRSE – author at thefp.com

AI is following the same trajectory.

What distinguishes durable transformations from transient bubbles is not technical brilliance, but institutional maturity:

  • governance structures
  • decision accountability
  • economic feedback loops
  • trust mechanisms

Ferguson’s deeper point is often misunderstood. He does not argue that AI is overhyped in principle, but that judgment lags capability. And when judgment lags too far behind power, history shows that correction follows—often abruptly.

For decision-centric enterprises, this insight is critical. Sustainable value will not be created merely by deploying AI faster than competitors, but by embedding it more responsibly, more transparently and more measurably.

In historical terms: genius opens the door; institutions decide what survives.

6. Data as the New Strategic Terrain

Across research, history and economics, one conclusion recurs with striking consistency: AI without trustworthy data does not scale responsibly.

As public data becomes saturated and commoditised, the strategic premium shifts to high-quality, real-world, governed data. This includes identity data, behavioural signals, transactional histories and contextual information—data that reflects how societies and economies actually function.

Michael Meltz, Chief Strategy Officer at Experian, in a McKinsey interview

In an interview with McKinsey & Company, Chief Strategy Officer Michael Meltz describes Experian’s next phase of growth as driven by the integration of data, AI and platforms: a move from analytics as insight to decisioning as execution⁷.

Three structural shifts underpin this evolution:

  • Proprietary, well-governed data outlasts generic models
  • Explainability replaces black-box prediction
  • Decision intelligence supersedes isolated analytics

Platforms such as Ascend are designed precisely for this environment: combining data, models and governance into a single, operational decision framework.

7. Society, Trust and the Legitimacy of Decisions

Technology ultimately fails or succeeds not in laboratories, but in society.

As automated decision systems increasingly determine access to credit, insurance, healthcare, employment and public services, their legitimacy depends on a single, fragile asset: trust.

Political scientist Francis Fukuyama famously argued that trust is the invisible infrastructure of functioning societies. The same principle applies to digital systems. Without trust, scale collapses.


Empirical research supports this concern. According to the OECD’s framework for trustworthy AI, public acceptance of AI systems correlates strongly with three factors⁸:

  1. Transparency of decisions
  2. Ability to contest outcomes
  3. Clear human accountability

Similarly, the Edelman Trust Barometer (2025) found that while people increasingly accept AI in principle, they overwhelmingly reject systems that cannot explain why a decision was made or who is responsible for it.

This marks a critical shift.

Early digital platforms grew by obscuring complexity. Decision systems cannot. The higher the societal impact, the higher the demand for explainability, auditability and fairness.

Historically, this is consistent. Financial markets only gained mass trust after disclosure rules. Aviation only gained public confidence after independent safety oversight. Medicine only advanced once peer review and accountability became institutional norms.

AI will follow the same path—or it will be resisted.

For organisations operating at the intersection of data, decisions and society, trust is not a communication exercise. It is an architectural property.

8. Decision Intelligence as the New Competitive Advantage

In this environment, decision intelligence emerges as the decisive strategic capability of the next decade.

Decision intelligence is not analytics.

It is not dashboards.

It is not AI experimentation.

It is the systematic orchestration of data, models, governance and human judgment to produce reliable, repeatable, and explainable decisions at scale.
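As a minimal sketch of what such orchestration can look like, consider the hypothetical credit-style flow below: a data-quality gate, a placeholder risk score, explicit thresholds and an escalation path to human judgment. Every function name and threshold here is an assumption chosen for illustration, not a reference implementation of any platform.

```python
from typing import Dict, Tuple

# Illustrative decision-intelligence loop: data checks, a model score,
# a governance gate and human escalation combined in one flow.

def validate_inputs(applicant: Dict[str, float]) -> bool:
    """Data-quality gate: reject incomplete or implausible inputs."""
    return all(k in applicant for k in ("income", "debt")) and applicant["income"] > 0

def risk_score(applicant: Dict[str, float]) -> float:
    """Placeholder model: debt-to-income ratio as a stand-in risk score."""
    return applicant["debt"] / applicant["income"]

def decide(applicant: Dict[str, float],
           auto_threshold: float = 0.35,
           review_threshold: float = 0.60) -> Tuple[str, str]:
    """Return (outcome, reason); borderline cases escalate to a human."""
    if not validate_inputs(applicant):
        return "escalate", "failed data-quality checks"
    risk = risk_score(applicant)
    if risk <= auto_threshold:
        return "approve", f"risk {risk:.2f} within automatic threshold"
    if risk >= review_threshold:
        return "decline", f"risk {risk:.2f} above decline threshold"
    return "escalate", f"risk {risk:.2f} requires human judgment"

print(decide({"income": 4000.0, "debt": 1800.0}))
# ('escalate', 'risk 0.45 requires human judgment')
```

The structural point is that the model score, the governance thresholds and the route to human judgment live in one auditable flow rather than in separate systems: deciding better, not merely predicting more.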

Research by Gartner identifies decision intelligence as one of the most critical enterprise capabilities of the 2020s, noting⁹:

By 2026, organizations that use decision intelligence will outperform their peers by at least 25% in critical decision outcomes.

Why? Because competitive advantage increasingly depends not on knowing more, but on deciding better and faster under uncertainty.

This shifts the centre of gravity away from model performance alone toward:

  • data quality and lineage
  • decision transparency
  • simulation and scenario testing
  • regulatory resilience

According to McKinsey, organisations that integrate AI directly into decision workflows—rather than treating it as a separate analytics layer—achieve materially higher ROI and lower operational risk¹⁰.

This is precisely where platforms like Ascend are positioned: not as AI showcases, but as decision infrastructures. They allow organisations to test, govern, explain and continuously improve decisions—across markets, regulations and risk environments.

In historical terms, this mirrors earlier transitions: from accounting records to financial control systems, from statistics to risk management, from automation to governance-aware intelligence.

9. Leadership, Judgment and the Human Layer

Technology does not abolish leadership; it exposes it.

Every historical transformation has increased, not reduced, the burden on decision-makers. Railways required new forms of management. Financial markets demanded professional risk assessment. Nuclear power intensified political responsibility.

AI follows the same pattern.

As Michael Meltz has repeatedly emphasised in leadership discussions, the true transformation lies not in automation itself, but in the elevation of decision quality. AI does not decide instead of leaders; it decides with them, and therefore tests their judgment more rigorously than ever.

In complex systems—credit markets, global supply chains, fraud ecosystems, industrial operations—leaders face unprecedented volumes of information. The risk is not lack of data, but misinterpretation, overconfidence or blind delegation.

Decision intelligence mitigates these risks by structuring evidence, exposing assumptions and simulating outcomes. But it does not remove responsibility. On the contrary, it concentrates responsibility.

The human layer becomes more important, not less:

  • Leaders must determine which data is relevant
  • They must set acceptable risk thresholds
  • They must interpret outputs within ethical, societal and strategic contexts

AI can illuminate options. It cannot define purpose. That remains a human task.

Epilogue

History rarely rewards those who move fastest. It rewards those who move quickly and wisely.

The defining challenge of the coming decade is not artificial intelligence itself. It is judgment—the human capacity to decide responsibly when technology amplifies power beyond intuition.

AI will continue to improve. Models will grow more capable. Systems will become more autonomous. None of this is in doubt.

What remains uncertain is whether our institutions, leaders and decision frameworks will mature at the same pace.

In every previous technological revolution, societies faced this test. Some passed it by building trust, governance and accountability into the fabric of progress. Others failed by mistaking capability for control.

The lesson is clear:

  • power without judgment destabilises;
  • judgment without evidence stagnates.

The future therefore belongs to those who combine both.

  • Trusted data.
  • Governed systems.
  • Explainable decisions.
  • And leaders willing to remain accountable in an age of intelligent machines.

AI will not replace human responsibility. It will expose it.

The next decade will not be decided by algorithms alone, but by how deliberately we choose to govern them—and how seriously we take the decisions they shape.

That is not a technological challenge.

It is a civilisational one.

— Jochen Werne

Footnotes

  1. Menn, A. & Ksienrzyk, L. (2025). Welche KI kommt nach den Sprachmodellen? [Which AI comes after the language models?] WirtschaftsWoche, 52/2025.
  2. Rothe, R. (Merantix). Statements on world models and causal AI architectures, 2024–2025.
  3. Jain, A. (Luma AI). Interviews on multimodal and video-based reasoning models, 2024–2025.
  4. Chatham House (2025). The EU’s new AI Code of Practice has its critics – but will be valuable for global governance.
  5. Kissinger, H. A., Schmidt, E., & Mundie, C. (2024). Genesis: Artificial Intelligence, Hope and the Human Spirit.
  6. Ferguson, N. (2024). The AI Boom Is a House of Cards.
  7. McKinsey & Company (2024). How Experian is fueling its next phase of growth with data, AI and platforms.
  8. OECD (2024). Trustworthy AI: Policy and Governance Frameworks.
  9. Gartner (2023). Top Strategic Technology Trends: Decision Intelligence.
  10. McKinsey & Company (2024). Embedding AI into Decision Workflows.

Source Library (Further Reading)

  • Chatham House – EU AI Code of Practice and Global Governance
  • LSE Review of Books – Review of Genesis by Kissinger, Schmidt & Mundie
  • City Journal – Artificial Intelligence: The New General-Purpose Technology
  • WirtschaftsWoche – AI architecture and post-LLM research analysis
  • SiliconANGLE – Experian’s quiet reinvention in AI and cloud decisioning
  • McKinsey – Data, AI and decision platforms at Experian
  • Ferguson, N. – The AI Boom Is a House of Cards
