Justice and Technology: an inevitable transformative relationship

We live in times of profound transformation in all spheres, and technology is a central driver of these changes. Historically, technology has always reshaped society, and today we are witnessing a revolution that is also occurring in legal systems, driven by digitalization and, crucially, by generative artificial intelligence (GAI). Although there is still much talk of the "digital transition," reality demands that we focus on "technological transformation." This transformative awareness is vital to address the critical and complex challenges of the present. This reality puts pressure on public institutions, including the justice system, to become more efficient, resilient, and adaptable, making the adoption of digital solutions and artificial intelligence (AI) not a convenience, but an imperative.
Technology transcends the mere function of a tool; it is deeply integrated into law, influencing the organizational, procedural, and institutional dimensions of human interactions. This perspective redefines the conception, production, and application of law, requiring a transition from a purely human-centered epistemology to one that considers the capacity of machines to create knowledge and support or even replace human decisions in the implementation of law and the administration of justice.
Technological tools applicable to the legal and judicial domains are keeping pace with the development of information systems and AI. The most relevant digital and AI systems in courts include the following:
- GAI (generative AI): Generates new content (text, images, video) from existing data, as in chatbots and virtual assistants. In the legal context, GAI can draft legal texts and documents.
- OCR (Optical Character Recognition): Converts images with text into readable and editable formats, allowing searching and editing in scanned documents or PDFs.
- RPA (Robotic Process Automation): Automates rule-based processes using virtual robots for manual and repetitive tasks.
- API (Application Programming Interface): Set of rules that allows communication between different software applications.
- Machine Learning: A subset of AI that learns from data to improve performance over time, without being explicitly programmed for each task. It is used in predictive analytics.
- E-Discovery: Practices for identifying and collecting electronic data and information in legal proceedings, with tools that automate this search (a minimal search sketch follows this list).
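To make the OCR and e-discovery items above concrete, the following is a minimal, illustrative sketch in Python: it searches a folder of already-digitized (e.g., OCR-extracted) text files for case-relevant keywords and reports where they occur. The folder name and keywords are hypothetical, and real e-discovery platforms add deduplication, metadata filters, and legal-hold workflows on top of this basic idea.

```python
# Minimal e-discovery-style keyword search over OCR-extracted text files.
# Illustrative only: the folder path and keywords are hypothetical placeholders.
from pathlib import Path

def search_documents(folder: str, keywords: list[str]) -> dict[str, list[str]]:
    """Return, for each document, the keywords found in its text."""
    hits: dict[str, list[str]] = {}
    for doc in Path(folder).glob("*.txt"):  # files already converted to text via OCR
        text = doc.read_text(encoding="utf-8", errors="ignore").lower()
        found = [kw for kw in keywords if kw.lower() in text]
        if found:
            hits[doc.name] = found
    return hits

if __name__ == "__main__":
    # Hypothetical corpus of scanned-and-OCR'd case documents.
    results = search_documents("case_1234_docs", ["contract", "termination", "penalty clause"])
    for name, found in results.items():
        print(f"{name}: {', '.join(found)}")
```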
AI systems can be classified by their technology, structure, complexity, data quality, and functionalities. AI has evolved from simpler systems to increasingly autonomous and complex ones, culminating in AI agents and multi-agent systems. In the legal field, notable examples include deductive classifiers for data organization, neural networks for predictive analysis and decision-making, and deep learning for extracting patterns from large volumes of data. Large Language Models (LLMs) understand and generate human-like text, while Small Language Models (SLMs) are more specialized and more efficient in their use of resources and context. Other functionalities include Natural Language Processing (NLP) for understanding human language and expert systems for problem-solving. AI assistants and agents support complex tasks, and Retrieval-Augmented Generation (RAG) systems combine information retrieval with generative capabilities.
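As an illustration of the retrieval-then-generation pattern mentioned above, the sketch below selects the most relevant passage from a small, hypothetical collection of legal snippets by simple word overlap and assembles a grounded prompt that would then be passed to a language model. The snippets, the overlap scoring, and the `call_llm` placeholder are assumptions for illustration, not a real system.

```python
# Toy retrieval-augmented generation (RAG) flow: retrieve a passage, then ground the prompt.
# The corpus, the overlap scoring, and call_llm are hypothetical placeholders.

CORPUS = {
    "civil_code_art_405": "Parties may freely fix the content of contracts within the limits of the law.",
    "labour_code_art_338": "Dismissal without just cause is prohibited.",
    "gdpr_art_5": "Personal data shall be processed lawfully, fairly and transparently.",
}

def retrieve(question: str, corpus: dict[str, str]) -> tuple[str, str]:
    """Return the (id, text) of the passage sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(corpus.items(), key=lambda item: len(q_words & set(item[1].lower().split())))

def build_prompt(question: str, source_id: str, passage: str) -> str:
    """Compose a prompt that grounds the model's answer in the retrieved passage."""
    return (f"Answer using only the source below and cite it.\n"
            f"Source [{source_id}]: {passage}\n"
            f"Question: {question}")

def call_llm(prompt: str) -> str:
    """Placeholder for a generative model call (not implemented here)."""
    return "[model answer would be generated here]"

if __name__ == "__main__":
    question = "Can an employer dismiss a worker without just cause?"
    source_id, passage = retrieve(question, CORPUS)
    print(call_llm(build_prompt(question, source_id, passage)))
```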
It is crucial to distinguish between AI agents and agentic AI for the responsible development of autonomous systems. AI agents are single-task tools with low autonomy and a predictable risk profile (e.g., chatbots). In contrast, agentic AI, built on multi-agent architectures, pursues goals, collaborates, adapts, and operates with increasing independence in complex domains; it presents higher risks due to its open-ended behavior and requires new control mechanisms.
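The structural difference can be shown schematically: a single-task agent maps one bounded input to one bounded response, while an agentic system runs an open-ended plan-act loop toward a goal and therefore needs explicit control mechanisms. The sketch below is purely conceptual; the step names and the hard step cap are illustrative assumptions.

```python
# Schematic contrast between a single-task AI agent and an agentic, goal-driven loop.
# Purely illustrative; no real model or external tools are invoked.

def single_task_agent(question: str) -> str:
    """Bounded, predictable behavior: one question in, one canned answer out."""
    faq = {"opening hours": "The registry is open 9:00-16:00."}
    return faq.get(question.lower(), "Please contact the court registry.")

def agentic_loop(goal: str, max_steps: int = 3) -> list[str]:
    """Open-ended behavior: the system decides its own next steps toward a goal."""
    log, step = [], f"analyse goal: {goal}"
    for _ in range(max_steps):                      # hard cap as a simple control mechanism
        log.append(step)
        step = f"plan next action after '{step}'"   # placeholder for planning or tool use
    return log

if __name__ == "__main__":
    print(single_task_agent("opening hours"))
    print(agentic_loop("summarise all pending cases older than two years"))
```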
In the judicial system, we must differentiate between AI tools applied to court management and judicial administration and those that support or draft judicial decisions.
The evolution of technology in law raises concerns that fall into three interconnected axes: cybersecurity focuses on defending the technological system against malicious interventions; cyberdemocracy aims to defend the fundamental rights of users (such as the protection of personal data) and the autonomy of authorities vis-à-vis the technological system; finally, cybergovernance refers to the regulation of the technological system itself to ensure the integrity of public institutions and the quality of the functions they perform.
There is a growing discussion about the ethical and social aspects of AI, focusing on policy and data science. Concerns include privacy violations, misinformation, bias and discrimination, "hallucinations" (plausible but fabricated outputs), environmental costs, and the creation of epistemic bubbles. There is also an economic dimension intrinsic to technological evolution and to the reinvention of the economy around data.
Data preparation and quality are crucial for the digitalization and automation of justice. It is essential to inventory data, analyze its quality (identifying redundant, obsolete, and trivial content, the so-called ROT analysis), and assess data structure and governance. Access management, auditing, security, and the mapping of workflows to identify automation potential are equally important. Collaboration in data management and follow-up actions after data discovery are essential for responsible and efficient digitalization in the legal field.
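As a concrete illustration of a first-pass ROT screening, the sketch below flags records in a hypothetical document inventory as redundant (duplicate content), obsolete (older than a retention threshold), or trivial (near-empty). The field names, thresholds, and sample records are assumptions; a real data-governance exercise would rely on far richer metadata and human review.

```python
# First-pass ROT (redundant, obsolete, trivial) screening of a document inventory.
# Field names, thresholds, and sample records are hypothetical placeholders.
from datetime import date

RETENTION_YEARS = 10
MIN_LENGTH = 20  # characters below which a record is considered trivial

def rot_flags(record: dict, seen_contents: set[str]) -> list[str]:
    """Return the ROT flags that apply to a single inventory record."""
    flags = []
    if record["content"] in seen_contents:
        flags.append("redundant")
    if (date.today().year - record["year"]) > RETENTION_YEARS:
        flags.append("obsolete")
    if len(record["content"].strip()) < MIN_LENGTH:
        flags.append("trivial")
    seen_contents.add(record["content"])
    return flags

if __name__ == "__main__":
    inventory = [
        {"id": "A-001", "year": 2009, "content": "Scanned judgment, case 45/2009."},
        {"id": "A-002", "year": 2023, "content": "Scanned judgment, case 45/2009."},
        {"id": "A-003", "year": 2024, "content": "ok"},
    ]
    seen: set[str] = set()
    for rec in inventory:
        print(rec["id"], rot_flags(rec, seen) or ["keep"])
```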
Europe has been proactive in developing legal and ethical frameworks for AI in justice, such as the Council of Europe's European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems (2018) and Regulation (EU) 2024/1689 (the AI Act), which establishes harmonized rules on artificial intelligence in the European Union. The Council of Europe, through the European Commission for the Efficiency of Justice (CEPEJ), has been instrumental in the ethical and technical integration of digital technologies and AI into European justice systems.
Automation and AI require increased regulatory, organizational, and management efforts from governance structures to establish robust mechanisms for evaluating and certifying technological solutions. Adopting these tools implies translating normative rules into machine instructions and computer code. The evaluation of technological systems must be multidisciplinary and interinstitutional, measured against the demanding criteria of the judicial system. This evaluation mechanism must analyze the impact of AI systems on fundamental rights (access to justice, personal data), the principles of the judicial process, the guarantees of the rule of law (judicial independence and impartiality), and the reasoning behind decisions. The right to a court and to a fair and equitable trial, as well as the system's consistency with the right to privacy and personal data protection, are crucial normative standards.
Levels of approach

Technological change has implications for several areas of justice, particularly governance, court administration and management, procedural management (the technological process), and judicial decision-making.
The context of digitalization and AI influences institutional dynamics and the implementation of rule of law principles, particularly judicial independence. Judicial independence must be conceptualized in light of transparency and accountability, and in line with the new systemic demands of technology. Digital platforms have become the standard technological support for all court activities, enabling automation and smart applications. It is crucial that justice governance bodies coordinate the certification and adoption of tools, especially AI. The central debate now extends to the use of digital and automation tools in decision-making tasks themselves, including the potential automatic generation of judicial decisions.
The level of court administration and technological processes is crucial, intersecting general governance with the day-to-day management of courts. Case management is linked to the use of technologies for case processing, digitalization, and information circulation. New alternatives include online case processing, electronic management, videoconferencing, and the dematerialization of judicial services through ODR (Online Dispute Resolution) platforms. However, these innovations raise objections such as the potential disappearance of the physical court space, the dehumanization of justice, the loss of the synchronous nature of the process, the dependence on technological access and digital literacy, and the potential incompatibility with judicial independence and the loss of human intervention in the trial.
The judicial decision-making level is the most sensitive to the integration of new technological tools, especially AI. The transition from AI as a judicial assistant to the effective replacement of the human judge (through agentic systems) raises significant ethical, philosophical, political, and constitutional concerns. The risks of AI in judicial decisions include opacity (the incomprehensibility of systems), "datafication" (minimization of context, bias, distortion of moral judgments), and the loss of accountability, both institutional and human. There is a risk of placing judicial decision-making in the hands of the creators and managers of the data processing systems inherent in predictive coding, especially if it is assumed as the preferred standard to follow. While the degree of automation may vary, the fundamental integrity of human judgment remains paramount.
At this point, the question of whether AI can perform legal interpretation is particularly relevant. Legal interpretation is a hermeneutic process embedded in social contexts, ethical values, and the dynamics of a human legal community, involving dialogue between judges, lawyers, and academics. These characteristics raise serious doubts about AI's ability to fully and adequately reproduce legal interpretation, due to its deeply contextual and human-intentional nature. Human interpretation is not a mere technical and logical deduction of rules, but a process involving ethical intuition and human and institutional intentionality. This establishes an inherent and perhaps insurmountable limit to AI's role in fundamental judicial functions.
In this context, the discussion about which critical systems should be introduced in courts highlights the adoption of so-called LLM-based Human-Agent Systems (LLM-HAS). This typology is based on collaboration between humans and AI, aiming to overcome the limitations of autonomous AI models. LLM-HAS emphasize the essential role of human interaction in improving the reliability, safety, and performance of AI. Human collaboration can lead to greater accuracy in decision-making, alignment with ethical standards, and the maintenance of accountability, especially in critical sectors.
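The human-in-the-loop principle behind LLM-HAS can be sketched very simply: a model-generated proposal is never released without explicit human review. The example below is a schematic, assumed workflow; the `draft_order` placeholder and the confidence threshold are illustrative and do not describe any real court system.

```python
# Schematic human-in-the-loop workflow: an AI draft is only released after human review.
# draft_order and the confidence threshold are illustrative assumptions.

def draft_order(case_summary: str) -> tuple[str, float]:
    """Placeholder for a model-generated draft and its self-reported confidence."""
    return f"DRAFT order for: {case_summary}", 0.62

def human_review(draft: str) -> bool:
    """The human decision-maker remains accountable for approval or rejection."""
    answer = input(f"\n{draft}\nApprove this draft? [y/N] ")
    return answer.strip().lower() == "y"

def process_case(case_summary: str, min_confidence: float = 0.8) -> str:
    draft, confidence = draft_order(case_summary)
    if confidence < min_confidence:
        print("Low model confidence: draft flagged for full redrafting by the judge.")
    if human_review(draft):
        return "Order issued under human responsibility."
    return "Draft rejected; decision written entirely by the judge."

if __name__ == "__main__":
    print(process_case("small-claims dispute over an unpaid invoice"))
```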
Technological transformation in the justice sector creates tensions that require new balances between technological resources and the fundamental values of the judicial system. While the push for digital efficiency is necessary, it can compromise essential judicial principles if not carefully managed. Legal technology governance is not just about adopting technology, but about how to do so without eroding the fundamental pillars of justice.
It is essential to preserve and enhance the integrity of the jurisdiction, understanding its structure better, including its technological aspects. What is instrumental must be transformed and what is essential preserved. The resilience of our most important values can only be achieved by strengthening their governance, organizational, and procedural dimensions. If these dimensions are not strengthened and transformed, humanity, justice, and democracy will not survive the turbulent times ahead.
The transformative relationship between justice and technology is therefore inevitable and requires a careful and continuous approach, ensuring that technological progress serves the deepest principles of the administration of justice, and not the other way around.