Who Can Act in Law? New Research at the University of Helsinki Connects Children, AI, and Agency

Law has always drawn clear lines. Adults act. Tools assist. Children stay protected. Machines stay outside the legal conversation. That structure is starting to feel outdated. A new research initiative from the University of Helsinki asks a direct question that many legal systems have avoided for years. Who can act in law when decision-making is shared between humans and intelligent systems?

This research brings children and artificial intelligence into the same discussion on legal agency. That pairing may sound unexpected, yet it fits the current moment. Both children and AI systems influence outcomes without fitting neatly into traditional legal roles. Courts, regulators, and companies already face these situations every day.

What does “legal agency” mean?

Legal agency refers to the ability to act in ways that produce legal effects. Signing contracts, giving consent, making binding choices, or triggering liability all sit inside this idea. For adults, agency is assumed. For children, agency exists in limited form, often mediated by guardians. For AI systems, agency remains unsettled.

AI tools already approve loans, filter job applications, recommend medical actions, and guide policing resources. These systems shape outcomes even though they lack legal personhood. The law still treats them as objects, yet their actions create real consequences.

The University of Helsinki research challenges this gap. It asks whether agency should remain tied only to legal personhood or shift toward responsibility, capacity, and real-world impact.

Why children and AI belong in the same legal debate

Children participate in digital systems daily. Recommendation engines decide what content they see. Automated grading tools assess their work. Biometric systems identify them in schools. Children influence outcomes but lack full legal control.

AI systems operate in a similar grey area. They act without intention or moral judgment, yet their outputs affect rights, safety, and opportunity. Both children and AI challenge the adult-centric view of agency.

The Helsinki researchers suggest a framework based on shared and layered agency. Instead of asking whether an actor qualifies as a legal person, the framework asks who shapes the action, who benefits, and who should answer for harm.

This shift matters for AI governance. It opens space for accountability models that reflect how decisions actually happen.

Real examples where legal agency already blurs

Consider an AI-driven tutoring system used in schools. The system adapts lessons and flags students for intervention. A child responds to the system’s prompts. A teacher follows the system’s report. A school acts on that data. Who made the decision?

Another example comes from healthcare. AI-supported diagnostic tools guide doctors. A misdiagnosis causes harm. The doctor relied on the system. The vendor trained the model. The hospital approved its use. Legal agency spreads across many hands and one machine.

Courts already face similar questions with children. Consent to medical treatment, participation in legal proceedings, and online privacy rights vary by age and context. The Helsinki framework suggests borrowing from these child law models to rethink AI legal liability.

How this connects to EU AI Act compliance

The EU AI Act moves AI regulation toward risk-based control. High-risk systems face strict obligations tied to transparency, oversight, and accountability. Yet the Act still places responsibility mainly on providers and deployers.

The Helsinki research adds depth to this approach. It highlights the need to examine how agency operates inside AI systems, especially where vulnerable groups like children are involved. Risk classification alone does not explain how decisions emerge.

EU AI Act compliance demands more than checklists. Legal teams must understand how agency is distributed across humans and systems. That understanding shapes documentation, human oversight design, and liability planning.
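To make the idea of distributed agency concrete, here is a minimal, hypothetical sketch in Python of how a compliance team might document who shaped, who benefited from, and who should answer for one AI-assisted decision. The class names, fields, and example actors are illustrative assumptions only; they are not drawn from the Helsinki research or the EU AI Act.

```python
# Hypothetical sketch: recording distributed agency for one AI-assisted decision.
# All names and fields are illustrative assumptions, not an official schema.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Actor:
    name: str               # e.g. "hospital", "model vendor", "clinician"
    role: str               # how the actor participates in the decision
    shapes_action: bool     # did this actor shape the outcome?
    benefits: bool          # does this actor benefit from the system?
    answers_for_harm: bool  # is this actor a candidate for liability?


@dataclass
class DecisionRecord:
    system: str                      # the AI system involved
    decision: str                    # the decision or output in question
    actors: List[Actor] = field(default_factory=list)

    def accountable_actors(self) -> List[str]:
        """Return the actors flagged as answerable for harm."""
        return [a.name for a in self.actors if a.answers_for_harm]


# Example: the diagnostic-support scenario described above.
record = DecisionRecord(
    system="diagnostic support model",
    decision="flagged scan as low risk",
    actors=[
        Actor("model vendor", "trained and supplied the model", True, True, True),
        Actor("hospital", "approved deployment and set oversight", True, True, True),
        Actor("clinician", "reviewed and acted on the output", True, False, True),
        Actor("patient", "subject of the decision", False, False, False),
    ],
)

print(record.accountable_actors())  # ['model vendor', 'hospital', 'clinician']
```

A record like this is only a starting point, but it mirrors the three questions the Helsinki framework raises and gives legal teams a structured artifact to attach to oversight and liability documentation.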

The rising pressure around AI legal liability

AI legal liability remains a pressing issue across jurisdictions. In the United States, courts still rely on product liability and negligence doctrines. In the EU, proposed AI liability rules aim to ease the burden of proof for harmed parties.

Global data shows why this matters. A 2024 Stanford AI Index report states that reported AI-related incidents rose by over 20 percent year over year. Many involved automated decision systems affecting employment, credit, and public services.

As AI systems gain autonomy in narrow tasks, legal actors need clearer tools to assign responsibility. The Helsinki framework supports that need by shifting focus from personhood to participation and control.

Why legal professionals should pay attention now

Lawyers, compliance leaders, and policymakers already work at the edge of this shift. AI governance programs demand answers about accountability. Clients ask who carries risk when AI goes wrong. Regulators expect traceability.

Understanding agency across children and AI gives legal professionals stronger reasoning tools. It supports better policy drafting, clearer contracts, and safer system design. It also aligns with emerging expectations under the EU AI Act and global regulatory trends.

This knowledge gap has fueled demand for structured learning paths such as AI legal certification and AI legal tech certification programs. These programs focus on real cases, regulatory frameworks, and liability mapping rather than abstract theory.

A future shaped by shared agency

The Helsinki research does not argue that AI should gain rights. It argues that law must reflect how actions happen in mixed human-machine settings. Children and AI both reveal the limits of rigid legal categories.

Agency today sits on a spectrum. Some actors act with guidance. Others act through systems. Responsibility must follow that reality.

As AI systems continue to enter education, healthcare, finance, and public administration, this research offers a practical lens. It supports smarter AI governance and clearer approaches to AI legal liability.

Closing thoughts

The question of who can act in law no longer has a simple answer. Children and AI show that agency exists beyond traditional boundaries. Legal systems that adapt to this view will respond better to risk, fairness, and accountability.

For legal professionals who want to stay prepared, structured learning matters. The AI+ Legal Agent certification from AI CERTs supports this shift by covering AI governance, AI legal liability, EU AI Act compliance, and applied legal frameworks. It fits lawyers, compliance officers, and policy teams who need clarity as law and intelligent systems continue to intersect.
