1 Introduction

With continuous breakthroughs in big data, blockchain, cloud computing, speech recognition, visual recognition, autonomous machine learning, and other technologies, AI represented by deep learning has made autonomous decision-making by legal intelligence systems possible (Weber 1999). As early as the 1970s, the United States and other developed countries developed legal reasoning systems, legal simulation analysis systems, and expert systems based on AI technology for judicial practice. China has also vigorously promoted the accelerated development of AI. In the 1980s, Zhu Huarong and Xiao Kaiquan presided over the establishment of a mathematical model for sentencing the crime of theft; in 1993, Zhao Tingguang developed a practical criminal law expert system with functions for searching and consulting criminal law knowledge, reasoning about and judging criminal cases, and determining guilt and sentencing. With the full deployment of key projects such as China’s smart courts and smart prosecution, AI has been rapidly applied in the judicial field. The Supreme People’s Court of China launched its smart court navigation system and smart case push system in 2018. There have also been the “Wise Judge” intelligent research and judgment system in Beijing, the “206 system” (Shanghai criminal case intelligent assistant system) and the “Intelligent assistant case handling system” for criminal cases in Shanghai, and the “Intelligent trial model” formed by the Suzhou courts, built around “electronic files + audio trial + intelligent services.” The “Intelligent Trial 1.0” trial support systems in Suzhou and Hebei, as well as AI products introduced by other local courts, have provided support for judges hearing cases and comprehensively improved judicial efficiency (Feng 2018).

AI has turned justice into a semi-automated or automated man–machine collaborative operation, improved judicial efficiency, and eased the strain of “more cases and fewer people” on China’s judicial resources. However, the combination of AI and the legal system also conceals a conflict between digital code and legal logic. Just as Max Weber worried, justice might become an “impersonal” mechanical operation, with judges reduced to “vending machines” that mechanically dispense judgments from the codes (Coser 1997). Therefore, to avoid the defects of formal rationality, judicial AI cannot avoid the question of legitimacy. In the legal field, AI still needs to comply with legal logic, and there are specific application boundaries and ethical requirements. However, China’s judicial AI applications follow the usual reform approach of practical exploration without pre-set boundaries, and research has paid too much attention to practical issues of applying AI in the legal field while ignoring the coordination between AI and the fundamental values of law (Zuo 2020). AI provides new ideas and perspectives for realizing justice, but legal justice lies not only in results and logical inference; it must also be demonstrated and displayed in a way that earns public acceptance. Therefore, there are two critical questions about the legitimacy of AI justice: first, whether AI can be applied to judicial judgment; second, whether the calculation process and the results of AI are reasonable and legal.

2 Application and question of AI in the judicial field

The development of AI is generally divided into Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Super Intelligence (ASI). In the ANI stage, the intelligent machine does not possess natural intelligence and mainly performs information processing: whether one wants an intelligent machine to process text or images, one must present the information in a form the machine can handle (Gordon 1995). At this stage, AI can carry out multi-functional comprehensive legal search, prepare legal documents, and support decisions such as legal advice, document review, case prediction, and litigation strategy selection. On this basis, AGI and ASI would use advanced functions such as reasoning and perception to handle natural human language, creativity, and emotional content. At that point, intelligent machines would evolve on their own, gradually surpassing humans in the understanding and application of legal knowledge. However, this technological singularity remains only a conjecture.

Under existing technology, AI in the judicial field focuses mainly on two aspects. First, it provides practical tools for auxiliary judicial activities, which are the most extensive use of AI technology. These preparatory or supplementary activities include jurisdiction screening, automated reading of legal documents (e.g., electronic evidence collection), drafting of routine legal documents (e.g., memorandums, judgments, etc.), procedural tracking, and helping the parties communicate with the court (assisting in the drafting of indictments, appeals, subpoenas, etc.) (Branting et al. 1998). Second, AI provides legal workers with new analytical tools for presenting judicial activities more clearly and rigorously. The operating logic of AI is the unity of knowledge- or information-based logic and rule-based logic. Knowledge- or information-based logic refers to the object of AI, that is, the processing of a large number of case facts and legal rules into knowledge and information (a judgment database); rule-based logic refers to the principle of AI, that is, the core task of programming and algorithms: inducing a rule model from the database and applying it to new cases. These two technical paths eventually led to what are now known as the “retrieval system” and the “legal expert system.”

Information resources are an important part of any democratic society and a prerequisite for judicial justice; only by maximizing access to public legal information can justice and the rule of law be realized (Iliadis, Andreou and Papadopoulos 2010). In judicial reform, the progress of science and technology has driven the evolution from small data to big data. The emergence of big data has changed the previously dispersed and limited state of information and given information and knowledge a central role in social structure and behavioral decision-making. In judicial AI, the legal retrieval system applies information retrieval to the legal professional field without changing the original legal logic. The development of statute and case retrieval systems is not necessarily tied to legal logic, and the retrieval results only show similar or related content. By providing judicial data support, AI has therefore gradually become one way of achieving judicial justice. At present, retrieval systems have expanded from statute retrieval to case retrieval, and the completeness and precision of retrieval have improved significantly. Using computer retrieval technology to promote judicial openness to the greatest extent has itself become part of judicial justice. The United States began researching computerized legal retrieval as early as 1960 and popularized it commercially (Bing 1991). Although China started late, its latecomer advantage is obvious. Beginning in the 1980s, China gradually built three major legal databases for data retrieval, namely “Peking University Fabao” (PKULaw), “FaYi,” and “Guoxin.” After decades of development, the information completeness and search intelligence of Chinese legal information systems have been remarkably optimized. In 2013, China Judgments Online was established, and within a few years it became the world’s largest repository of judgment documents.
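
The retrieval logic just described can be illustrated with a minimal sketch: a hypothetical similar-case search over a toy corpus using TF-IDF cosine similarity. The corpus, query, and case contents below are invented for illustration and do not reproduce any of the databases named above.

```python
# Minimal hypothetical sketch of similar-case retrieval (not any named system).
# A real retrieval system indexes millions of documents; this uses a toy corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

cases = {
    "Case A": "theft of an electric bicycle, first offence, confession, restitution paid",
    "Case B": "traffic accident, drunk driving, minor injury, compensation agreed",
    "Case C": "theft of a mobile phone in a shopping mall, repeat offender, no restitution",
}
query = "first-time theft of a bicycle, the defendant confessed and compensated the victim"

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(cases.values())   # index the corpus
query_vec = vectorizer.transform([query])                # vectorize the query
similarities = cosine_similarity(query_vec, doc_matrix).ravel()

# Rank cases by similarity; the results only surface related content,
# they do not themselves apply legal logic (as noted in the text above).
for name, score in sorted(zip(cases, similarities), key=lambda pair: -pair[1]):
    print(f"{name}: similarity {score:.2f}")
```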

The legal expert system is a computer program that simulates human experts in solving professional problems. Using AI technology to assist the legal reasoning process is not new. As early as 1958, Lucien Mehl proposed applying information processing to legal science (Zhang and Xu 2019). In 1970, Buchanan discussed the feasibility of a legal reasoning model in his article on AI and legal reasoning (Buchanan and Headrick 1970). Since then, a large number of rule-based and case-based legal reasoning models or expert systems have emerged, such as the Taxman system (1977) for analyzing corporate tax law, the Ikbalsi system (1989) for handling workers’ accident compensation, and the Split-Up system (1995) for handling the division of property in divorce (Zhang et al. 2014). New York State in the United States has a relatively mature online filing and service system and, in some criminal and family court cases, allows parties to communicate with the court by online video; the United States has also developed ROSS, an AI “lawyer” that assists with bankruptcy cases, as well as COMPAS, PSA, and LSI-R, risk-assessment software that assists judges with sentencing (Chen 2020). Although legal expert systems have gradually gained autonomous decision-making capacity, from the early lawyers’ reasoning system Judith to the COMPAS evaluation system that predicts the probability of recidivism, it remains difficult for AI to learn legal-logical reasoning, and it cannot fully simulate legal experts in solving problems. According to the 2016 report of the European Commission for the Efficiency of Justice (CEPEJ) on the use of information technology in European courts, AI technologies in most European countries are still in the “ANI” stage. For example, courts in most European countries have installed case management systems (e.g., ERP systems); courts in France and Slovenia have installed early-warning devices in their case management systems; courts in Ireland are equipped with intelligent voice-dictation software; and courts in Estonia, Lithuania, Romania, Slovenia, Sweden, and Turkey have installed judgment template systems. In recent years, China has also actively constructed legal expert systems such as smart trial systems, systems that generate electronic files synchronously with cases, and related intelligent auxiliary systems, focusing on assisting judges in the administration of justice. For example, in 2017 the Shanghai Higher People’s Court established the Shanghai criminal case intelligent assistant case handling system, the Hainan Provincial Higher People’s Court established the sentencing standardization intelligent case handling system, and the Chongqing Higher People’s Court established a specialized trial platform for similar cases. These legal expert systems do not make autonomous decisions; their applications mainly take the form of automated case management systems and assistant case handling systems.
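
A non-autonomous legal expert system of the kind described above is, at its core, a rule base plus an inference step. The fragment below is a deliberately simplified, hypothetical sketch in that spirit; the rules, the monetary threshold, and the field names are invented for illustration and do not reproduce Taxman, Split-Up, COMPAS, or any other system mentioned here.

```python
# Hypothetical rule-based sketch of a legal expert system (illustrative only).
from dataclasses import dataclass

@dataclass
class CaseFacts:
    offence: str        # e.g. "theft"
    amount: float       # value involved; the threshold below is an assumption
    confessed: bool
    restitution: bool

def advise(facts: CaseFacts) -> list[str]:
    """Apply hand-written rules and return human-readable conclusions."""
    conclusions = []
    if facts.offence == "theft" and facts.amount >= 3000:   # assumed threshold
        conclusions.append("Amount reaches the 'relatively large' standard (assumed rule).")
    if facts.confessed:
        conclusions.append("Confession may justify a lighter punishment.")
    if facts.restitution:
        conclusions.append("Restitution may be considered in sentencing.")
    if not conclusions:
        conclusions.append("No rule matched; refer to a human expert.")
    return conclusions

print(advise(CaseFacts("theft", 5000, confessed=True, restitution=False)))
```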

Through AI, China has formed a trial-centered smart court system that uses intelligent algorithmic tools to assist and regulate trial work. China is moving toward an algorithmically enhanced case law system, which does not follow the common-law principle of precedent but statistically follows the regularities of similar cases. The use of information and communication technology has strengthened, rather than weakened, control within China’s judicial system: court leaders and higher courts can now use “case push and deviation reminder” software to constrain and monitor the trial work of judges. Although China’s experience in building smart courts is encouraging, it also raises problems that call for prudence.

First, the instrumental rationality embodied in AI has an obvious reductionist tendency. AI can improve efficiency and take over some judicial tasks, but it cannot replace those requiring creativity and value judgment, nor can it make final decisions about human worth. Law should not be led by technology or obey the logic of technology itself; rather, technological rationality should be checked and balanced by value rationality so that human society develops toward goodness and justice. Human self-consciousness should therefore not be surrendered to technology, and overdependence on technology should be resisted, as should any alienating force that erodes human dignity and subjectivity. Accordingly, although the current level of AI technology can support automatic decision-making systems, such systems should only be used for simple matters that are uncontroversial in fact and value, such as the penalty for running a red light, and not for automated decisions involving complex facts and value choices. AI can only assist human intelligence, leaving the final choice and decision to human judges.

Second, the increasing technicalization of law tends to make it replaceable by technology. Intelligent technology should be confined to specific areas of the judicial field that do not require human value judgment. On one hand, public power should be used to tame computational power and make it serve the public interest; on the other hand, citizens should be given new data rights to resist the unlimited expansion of data manipulation. In judicial AI, commercial entities provide algorithm design and technical support for the courts; how the courts can preserve fair adjudication amid such close cooperation and dependence is also worth pondering.

Finally, AI is gradually changing society’s governance structure and its mechanisms for generating order. Inventors, investors, and advocates of new technologies tend to exaggerate the “liberating” effect of technology, claiming that AI, blockchain, and similar technologies will make all centers and intermediaries unnecessary and thus dismantle the pyramid of human society: the distributed structure supposedly flattens hierarchical relationships and makes them contractual, everyone becomes a center, and no one can control the whole network. However, the pyramid still exists, and its base is still the majority of ordinary people, while its top is divided among government, capital, and technological power. These three forces sometimes merge and sometimes oppose one another, but their relationship is less and less affected by the base. Technical personnel (hackers) who despise political authority will not liberate all humanity but only destroy the established legal order; commercial forces negotiating with the government will not “check and balance” public power but only pursue profit. Superstitious faith in computing power is no better than faith in violence; neither can be regarded as a tool for realizing social justice. To maintain a constitutional structure in which different powers restrict and balance each other for the benefit of the people, it is necessary to re-establish the legal boundaries between public and private power, and between political and commercial power.

3 Advantages of AI in justice

Justice is the ultimate pursuit and highest value of the judiciary, and it is the core standard for judging the legitimacy of the operation of judicial power. AI provides efficient answers and technical support beyond anything in the past, which can consolidate the judiciary’s legitimacy and promote the realization of judicial justice. Retrieval systems and legal expert systems are products of applying ANI in the legal field; although they can significantly improve work efficiency, they are far from the upper limit of judicial AI. AI has won many famous victories in chess and other board games, fully demonstrating the reasoning ability of intelligent machines. In the era of big data, with the continuous improvement of computing power and machine learning algorithms, intelligent machines are expected to break through the functional limits of knowledge acquisition and reasoning. Through the deep integration of law and technology, intelligent machines will complete legal argumentation with code calculation, pointing to an AI path for achieving judicial justice.

In today’s society, judicial judgment and the concept of justice are shaped by massive amounts of information, so that judicial information and judicial value interact directly. When citizens acquire the right to information, judicial data, including laws and decisions, take on a more significant public character, and knowledge itself becomes a kind of public affair (Hess and Ostrom 2007). In other words, there is no longer room for secret justice; justice must be realized in an open environment. Big data not only drive changes in public structures but also reshape the idea of justice. As information becomes more balanced, democratic participation is bound to increase, the gap in legal information among citizens, state institutions, and civil society keeps narrowing, case information becomes more open, and multiple subjects participate in discussion and hold decision-makers more accountable. In short, the revolution in the production, circulation, and distribution of judicial information in the era of big data will inevitably lead to innovation in the concept of judicial justice. Compared with traditional adjudication, AI adjudication has three advantages in realizing the justice of adjudication.

First, similar cases are decided similarly. This accords with general rational cognition and basic legal sentiment, and it is strongly persuasive to the parties and the public. It is self-evident in case-law countries and has gradually become an integral part of the judicial system even in statutory-law countries. The emergence and development of AI provide a new opportunity to ensure that similar cases are decided similarly. Through deep learning over a large number of precedents, AI detects the relationships among the variables in judgment documents, such as the identities of plaintiff and defendant, the focus of the dispute, the type of conduct, the consequent damage, and the judgment result, and, combined with existing laws and regulations, constructs an algorithmic model that can predict the outcome of a case (Li 2020). Once new case information is input, AI can produce the same or similar output for the same or similar cases, relying on a uniform standard for extracting case elements, unified algorithmic modeling, and a standardized pipeline that guarantees the consistency of judgments.
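
As a rough sketch of the modeling step described above, assuming the case elements have already been extracted into structured features, one might train a simple classifier as below. The features, labels, and data are invented placeholders; a real system would learn from large volumes of judgment documents rather than a handful of rows.

```python
# Hypothetical sketch of an outcome-prediction model over extracted case elements.
# Features and labels are invented; real systems learn from large judgment corpora.
from sklearn.ensemble import RandomForestClassifier

# Each row: [dispute_type_code, damage_level, defendant_is_company, prior_disputes]
X_train = [
    [0, 2, 1, 0],
    [0, 1, 0, 1],
    [1, 3, 1, 2],
    [1, 1, 0, 0],
    [0, 3, 1, 1],
    [1, 2, 0, 3],
]
# Label: 1 = claim (mostly) supported, 0 = claim (mostly) rejected.
y_train = [1, 0, 1, 0, 1, 1]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# A new case, encoded with the same element-extraction standard.
new_case = [[0, 2, 1, 1]]
print("predicted outcome:", model.predict(new_case)[0])
print("support probability:", model.predict_proba(new_case)[0][1])
```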

Second, information filtering. A group’s interest in shared information increases as the group grows, while holders of unique, unshared knowledge may be ruthlessly despised, ignored, and resisted, and groups can thus make big mistakes (Sunstein 2008). When data grow exponentially, massive judicial data can overwhelm cognition, leaving individuals unable to obtain effective judicial information. In the context of the information society and big data, if similar judicial cases cannot be effectively retrieved and compared, human judicial errors are more easily amplified and more likely to collide with and defeat public expectations of justice. Trial by morality and vortices of public opinion then inevitably trigger a crisis of public trust in the judiciary; this is the justice trap facing judicial adjudication in the era of big data. AI, however, can effectively improve judges’ ability to acquire and grasp case-related information and can expand the capacity, quality, and precision of judges’ rationality. Through deep mining of judicial documents, AI can acquire an overall rationality grounded in judges’ shared social experience and professional ethics. Especially in new and complex cases, AI can provide judges with information based on “the collective experience or ‘average rationality’ of judges in past trial practice” by analyzing the relevance of case information to judgment results in past adjudication (Bai 2017). According to the law of large numbers, the overall rationality derived from analyzing the whole body of data and experience is better placed to compensate for the uncertainty of an individual judge’s rationality and to improve the predictability of judicial judgments, especially in difficult and complex cases (Sebright 2010).
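
The appeal to the law of large numbers can be stated in its standard (weak) form; reading the symbols as the “rationality” of individual past judgments is an interpretive assumption added here, not part of the theorem itself.

```latex
% Weak law of large numbers (standard form). Interpretive reading (assumption):
% if X_i denotes the noisy "rationality" of the i-th past judgment on comparable
% facts, the empirical average converges to the "average rationality" mu as n grows.
\[
  \bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i, \qquad
  \lim_{n \to \infty} \Pr\bigl( \lvert \bar{X}_n - \mu \rvert > \varepsilon \bigr) = 0
  \quad \text{for every } \varepsilon > 0 .
\]
```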

Third, efficiency as justice. A second, and perhaps the most universal, meaning of justice is efficiency (Posner 2002). The most significant impact of the information revolution on social organization is “decentralization,” accompanied by exponential growth in the exchange of judicial information. Judicial information dissemination based on big data is gradually becoming flat, open, fluid, and digital. This means that citizens and society gain greater rights to know, to speak, and to participate, and they pay more attention to judicial judgments. Although big data also support judges in adjudication, the scale of litigation has multiplied dramatically, outstripping what information support alone can provide; even as big data enhance legal personnel’s case-handling capacity and judicial productivity, the demand for adjudication has exceeded its limits. With the rapid dissemination of judicial information, the problem of “justice delayed” has therefore become prominent. AI offers judges a new way of thinking, beyond the traditional path, for addressing the contradiction between more cases and fewer people: by letting machines handle cases, AI gives full play to standardization, proceduralization, and repeatability, which can fundamentally change how judicial judgments are produced and thereby effectively improve the efficiency of judicial operation. In digitizing judicial information, AI technology can also save substantial labor costs in online filing, remote trials, trial recording, evidence review, and other tasks.

4 The possibility of judicial AI

Digitization of the judicial field is the future trend of judicial justice. Whether intelligent machines will be rejected once embedded in the current judicial process depends on whether the logic they apply matches judicial logic. According to legal formalism, the judicial process is a mechanical syllogistic reasoning process that tries to exclude the judge’s discretion. Through deep learning, AI classifies, clusters, analyzes, and correlates large-scale precedents and forms a comparison model for judging cases. In practice, the Shanghai courts have developed the 206 system, which automatically alerts the judge when the judge’s proposed judgment deviates by close to 85% from the results of similar cases decided by courts of the same or a higher level; if the judge still insists on the original judgment, the system automatically pushes the judgment document to the court president for discussion. The “Faxin” platform, which integrates the three services of intelligent question-and-answer, one-stop retrieval, and similar-case push, can analyze a user’s colloquial case description and automatically generate and push precisely matched similar cases; the pushed results also cite authoritative sources and indicate the validity of the legal basis. In these cases, it is still the legal practitioners themselves who make the final decision.
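
The deviation-reminder mechanism attributed to the 206 system can be pictured with a schematic sketch. The 85% threshold and the escalation to the court president follow the description above, but encoding “deviation” as the share of similar cases decided differently is an illustrative assumption, not the system’s documented formula.

```python
# Schematic sketch of a "case push and deviation reminder" check (illustrative).
# Deviation is modeled as the share of similar cases whose outcome differs from
# the judge's proposed outcome; the 0.85 threshold follows the text above.
DEVIATION_THRESHOLD = 0.85

def deviation_rate(proposed_outcome: str, similar_outcomes: list[str]) -> float:
    """Fraction of similar cases that were decided differently."""
    if not similar_outcomes:
        return 0.0
    differing = sum(1 for o in similar_outcomes if o != proposed_outcome)
    return differing / len(similar_outcomes)

def review(proposed_outcome: str, similar_outcomes: list[str]) -> str:
    rate = deviation_rate(proposed_outcome, similar_outcomes)
    if rate >= DEVIATION_THRESHOLD:
        # In the described workflow, the document is pushed to the court
        # president for discussion if the judge insists on the original judgment.
        return f"ALERT: deviation {rate:.0%} - escalate for discussion"
    return f"OK: deviation {rate:.0%}"

similar = ["suspended sentence"] * 17 + ["imprisonment"] * 3
print(review("imprisonment", similar))   # 85% of similar cases differ -> alert
```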

4.1 The possibility of AI making legal arguments

The reasoning process of judicial AI is itself legal and reasonable. The legal knowledge used by judicial AI is derived both from the practical rationality of social life and from the abstract rationality of legal rules, forming a mixed system of legal knowledge. Compared with the deductive reasoning of human adjudication, the reasoning paradigm of AI is undoubtedly closer to the public’s legal experience and conception of justice. Therefore, whether viewed from the dynamic justice of new natural-law jurisprudence, the judge-made law of legal realism, or the living law of the sociology of law, the reasoning process of judicial AI has legitimacy. AI resembles human legal reasoning in logical structure, deriving the final judgment from legal facts. However, the logic of AI is more complicated: first, code language is used to input the case facts and determine the judgment result; second, judicial data are organized and assessed through machine language and digital code. AI surpasses humans in its ability to process legal data, and its code language surpasses natural language in efficiency. Therefore, the reasoning process of judicial AI is also reasonable.

However, the realization of judicial justice must be accepted by people through legal argumentation, which is equally essential for the judicial application of AI (Yu 2005). The logical judgment of judicial AI is not the same as human legal argumentation: intelligent machines cannot use natural language as human beings do, let alone argue and demonstrate in detail the legal calculation that links input to output. Can machine intelligence, then, complete the legal argumentation needed to display judicial justice? At present, retrieval systems and non-autonomous legal expert systems pose no real obstacle to the application of legal logic. Classifying cases with AI can significantly improve the efficiency of adjudication by reducing the complicated and repetitive work of legal workers, but this is, after all, only a result of the “ANI” stage, not rational judgment. Until AI reaches the “singularity” at which it surpasses human beings, the reasoning mode adopted by intelligent machines is only a kind of human logic: it runs according to programmed rules established by human beings. The classic syllogistic deduction of “major premise”–“minor premise”–“conclusion” adopted by legal workers is one-way decision-making, whereas AI can carry out more complex reasoning along preset logical paths, including circular ones. Therefore, after a long period of data accumulation, an intelligent system can reach the problem-solving level of human experts within a specific range. From the perspective of judicial decision-makers, whether information technology engineers or judicial personnel, judicial AI judgment is a process of writing algorithms and building models on judicial data, processing pre-treated cases in depth, and seeking consistent results. Human deductive reasoning in the judicial process is empirical; AI’s adaptive algorithms and deep learning are likewise trial-and-error processes, and there is no essential difference between them. AI that can pass the “Turing test” exists only in science fiction, and even intelligent machines capable of autonomous decision-making cannot replace human beings as the subject of justice. The judicial form realized through the code calculation of computer science supplements and strengthens human rational thinking. Therefore, whether an intelligent machine can make legal arguments depends on how researchers construct the model. Human beings remain the subject of judicial judgment: adjudicators should guarantee the legal reasoning of AI with their own rationality and fully explain the validity of judicial AI’s logic.

4.2 The possibility of judicial interpretation by AI

According to Kant’s theory of cognition, human synthetic judgment arises from the combination of rationalism and experience: cognition includes both the a priori synthetic forms of the human mind and the processing of sensory experience. Machine intelligence, however, does not have the feelings, perceptions, and emotions of natural persons, and it cannot shape a personality by exploring nature, experiencing society, and establishing a family; its mode of thinking must be realized through deep learning. Deep learning is the product of exponentially scaling up the hidden layers of traditional artificial neural networks and does not break through the traditional artificial-neuron computational model (Xu 2017). Such learning can bridge the gap between AI and legal knowledge, but it cannot simulate human experience. The ambiguity and flexibility of legal rules are challenging to translate into the formalized language that machines can understand (Filippi and Hassan 2016). For example, the constitutive elements of legal norms, judgments of illegality, and grounds for excluding responsibility involve subtle value trade-offs beyond their more explicit content, which code struggles to capture through calculation. Deep learning allows AI to make full use of deep neural networks with extensive recursion and convolution. The syllogism of deductive reasoning commonly adopted by humans is a kind of linear thinking, whereas AI can achieve higher levels of abstraction by modeling nonlinear systems. On one hand, this kind of reasoning model makes the relationship between legal knowledge and behavioral consequences more precise and can enhance the correctness and acceptability of judicial judgment results; on the other hand, it means that the code’s operation is difficult to reconstruct. Although the algorithms of judicial AI start out relatively simple, as models become more complex the interleaved forward and backward computation inevitably makes the judicial operation difficult to restore.
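
The contrast drawn here between one-way syllogistic reasoning and layered nonlinear computation can be made concrete with a tiny numerical sketch; the weights and inputs are arbitrary toy values, and the only point is that stacked nonlinear transformations do not reduce to a single readable if–then rule.

```python
# Tiny forward pass of a two-layer nonlinear network (arbitrary toy numbers),
# contrasted with a single readable if-then rule. The composed nonlinearities
# are what make the computation hard to "restore" into a legal syllogism.
import numpy as np

def relu(v):
    return np.maximum(0.0, v)

x = np.array([1.0, 0.0, 2.0])             # encoded case elements (toy values)
W1 = np.array([[0.5, -1.0, 0.3],
               [0.2,  0.8, -0.5]])
W2 = np.array([0.7, -0.4])

hidden = relu(W1 @ x)                      # first nonlinear abstraction
score = float(W2 @ hidden)                 # final output score
print("network score:", round(score, 3))

# A syllogism-style rule, by contrast, is a single transparent step:
print("rule output:", "rule applies" if x[0] > 0.5 and x[2] > 1.0 else "rule does not apply")
```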

The code operation of judicial AI follows objective laws, which excludes human intervention and, in a sense, maintains judicial justice. However, between judicial input and judicial output, AI is an irreducible “black box,” and its rationality needs to be explained by humans. People understand and apply the law and analyze and weigh the evidence; AI simulates this human process of legal reasoning and thereby indirectly becomes a distinctive legal method. Human beings need to explain the reasoning process of AI through their own legal cognition.

First, judges supervise the AI’s data. A specific judicial model built with AI is likely to deviate from actual judgments, and relying entirely on a biased model would run counter to the concept of judicial justice. AI’s deep neural networks and machine learning depend heavily on legal data such as judicial documents and existing cases, and sufficient, reliable big data are the premise of legitimate and reasonable reasoning results. Within judicial logic, the judge has the authority to interpret the law and to supervise the correctness of the data. In addition, judges need to mine and introduce new case data beyond the existing AI legal data sets and to filter them. Judicial AI is mechanical and excludes the creative space of judges; the values at stake in a specific case can only be selected by the judge through legal interpretation and argument, so as to balance the consistency of legal logic with the reform of judicial justice.

Second, judges complement the rationality of the code operation. Legal policy is an integral part of public management policy and is always affected by social development. AI can acquire common social sense and cultural knowledge through deep learning, but it cannot understand the public’s value orientations, moral tendencies, and ethical risks. The specific circumstances of judicial cases vary greatly, but public management policy is a common element of legal data and legal reasoning. Just as legal principles shape legal rules, specific legal rules should be applied first in judicial judgment, but the result should not violate legal principles. The implementation of public management policy in judicial AI can only be explained and demonstrated by the judge, by distinguishing the relationship between normative elements and changes in judgment results. Technicians’ algorithmic programming of the law and judges’ understanding of the law belong to different fields, and only judges can guarantee the rationality of the code operation.

Third, judges interpret the specific results of AI judgments. Moving beyond vague human language, machine language achieves deterministic justice by identifying, categorizing, and matching cases against precedents. Nevertheless, judicial AI must still abide by the nature of judicial judgment and legal science and answer for the legitimacy of the judgment result. A decision given without reasons, or with only weak reasons, offers no guarantee that it is not arbitrary or unfair (Sunstein 1996). Because the roles of adjudicator and engineer are separated, it is difficult for programmers to verify the correctness and rationality of the coded algorithm. Only the judge can examine and verify the judgment result by explaining the argument in the specific judicial process, determining whether the coded algorithm has loopholes or back doors, and pushing the algorithm closer to legal logic.

4.3 The possibility of man–machine collaboration and interpretation

The intelligent machine simulates humans’ learning of legal knowledge and acquires the knowledge and skills necessary for legal reasoning. Rationality, however, is metaphysical knowledge that human beings acquire on the basis of their cognition; to reproduce this process, AI would have to grasp the essence of things through free thought and, like human consciousness, reach it through phenomena. ANI, such as retrieval systems and legal expert systems, cannot reproduce this process, and only a future AGI might possibly do so. Judicial AI therefore has incomparable advantages in the domain of memory, but it is difficult for it to master the “ineffable” or “experiential” legal knowledge characteristic of jurisprudence. However, the success of AI in speech and image recognition provides an effective way for AI to acquire empirical legal knowledge. In 2012, Geoffrey Hinton’s group made a breakthrough in machine learning, using convolutional neural network models to sharply reduce the error rate in image recognition (Krizhevsky et al. 2012). The ability of intelligent machines to recognize the content of new images, based on large amounts of data and constant training, means that AI can acquire new knowledge beyond existing data sets. Some scholars propose that, to promote deep learning by intelligent machines in the field of law, three thresholds must be crossed: first, sufficient data volume and high information quality; second, the extraction of standard rules and the development of scientific algorithms; third, the deep participation of legal personnel (He 2017). According to the basic structure of an intelligent machine learning system (as shown in Fig. 1), machine learning of legal knowledge requires information to flow from the external environment into the system, and the content of that information strongly affects the machine’s learning and processing. Existing AI lacks free thought, so its application in the judicial field will take the form of man–machine cooperation.

Fig. 1 Basic structure of machine learning

Whether adjudication is carried out by humans or by judicial AI, the result should accord with social expectations of justice. Judicial judgment, however, is a formidable and complicated process that requires examining and analyzing many facts, legal rules, and precedents. More importantly, individual cases may involve vital interests and profound feelings, and legal practitioners’ solutions affect the expectations of all legal actors and their understanding of the legal system (Walton 2005). The methodology by which judges follow the idea of justice in judicial judgment is already very complicated, and the technical characteristics of AI make it more difficult still. The inputs to an AI decision model are diverse, and many kinds of legal-reasoning calculations are carried out repetitively and stably; it is not easy to embody the concepts of legal rationality and justice in this process. Ethics and legal justice are difficult to embed in computing code, and the “algorithm black box” makes it difficult to display justice directly. People can only examine the AI system through its original input set of legal data: using a method of exclusion, the weight of the different elements can be analyzed one by one by substituting or removing the original inputs and observing the change in output, as sketched below. Although mapping the inputs and intermediate processes of AI systems onto human experience is a new and difficult challenge, only human jurists can control the overall process of legal interpretation, and excessive reliance on AI decision-making would only lead to judicial alienation. Since the transparency of the AI decision-making process can be continuously improved through technological means, it is also logical for judges to interpret, by applying the law, the judgment issues raised by judicial AI. Moreover, legal interpretation is the embodiment of legal rationality and the guarantee of judicial justice. AI is only an extension and supplement of human intelligence; even if it can optimize human justice, it cannot replace it. Therefore, the advantages and disadvantages of judicial AI can be supplemented and adjusted by judges through legal interpretation.
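
The exclusion method sketched in this paragraph, analyzing the weight of each element by substituting the original inputs one at a time, is essentially an ablation test. A minimal hypothetical sketch follows; the stand-in scoring function and its weights are invented and do not represent any deployed judicial model.

```python
# Hypothetical sketch of the "exclusion method": perturb one input element at a
# time and measure the change in the model's output to estimate its weight.
# The scoring function below stands in for an opaque judicial AI model.
def opaque_model(elements: dict) -> float:
    """Stand-in for a black-box scoring model (weights are invented)."""
    return (0.6 * elements["damage_level"]
            + 0.3 * elements["prior_offences"]
            - 0.5 * elements["restitution"])

baseline = {"damage_level": 2.0, "prior_offences": 1.0, "restitution": 1.0}
baseline_score = opaque_model(baseline)

# Reverse substitution: zero out (exclude) each element and record the shift.
for name in baseline:
    perturbed = dict(baseline, **{name: 0.0})
    delta = baseline_score - opaque_model(perturbed)
    print(f"{name}: estimated contribution {delta:+.2f}")
```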

5 Limitations of judicial AI

Non-autonomous AI decision-making in the judiciary does not undermine current judicial justice, and autonomously deciding judicial AI also has the possibility of realizing judicial justice. The deep application of AI is an essential characteristic of judicial modernization and an inevitable trend in realizing judicial justice. The Chinese government has designed a top-down strategy for judicial AI, building it out from the central level to the local level, and China’s judicial system has quickly embraced the AI strategy as judicial policy (Luo 2017). In the view of policymakers, judicial AI benefits the reform of the judicial system and is consistent with reforms such as the trial-centered reform of the litigation system. For example, the Shanghai Higher People’s Court developed a trial-assistance system to support the trial-centered reform of the litigation system (Ye 2017). China regards information construction, including AI, as an important component of judicial reform, fully demonstrating the government’s positive attitude toward AI technology. However, just as science and technology carry potential risks even as they promote social progress, so does AI technology. In the judicial application of AI, systematic and forward-looking basic rules should be followed to clarify the limits of AI’s involvement.

5.1 Social relations of judicial AI

Whether AI can hold the same dominant position as human beings may be a philosophical and ethical problem, but the legitimacy of AI in judicial decision-making is a practical one. AI technology has developed for many years, and the recognition abilities of intelligent machines in speech, semantics, and images have approached or even reached human levels. The development of ANI has gradually exposed legal gaps in various application scenarios. In some fields, the application of AI is sufficiently autonomous that the intelligent machine must determine which rules to follow before performing a task; otherwise, it will not only fail to achieve the goals predetermined by human beings but may also make inappropriate choices when confronted with ethical dilemmas. For example, the 2016 European Parliament report to the European Commission on AI and robotics states that the risks robots pose to human beings mainly concern human safety and health, privacy, integrity and dignity, autonomy, non-discrimination, and personal data protection. To deal with these risks, it proposes creating a specific legal status for robots, with clear rights and obligations and liability for the damage they may cause. The development of AI technology needs basic norms to guarantee security, and basic norms with legal force are the best response. In the judicial field, what the judge abides by are legal rules and legal principles; whether AI provides technical assistance or participates in judicial decision-making, it should follow specific rules of technological development at both stages of intelligent judgment. The basic principle of such man-made rules is that an intelligent robot may enter human society only on the premise that human beings can identify it and decide what kind of social relationship intelligent robots and human beings are to have (Graaf 2017).

The key to the current wave of AI lies in breakthroughs in big data storage and application technology. The digitization of all kinds of information and the improvement of machine storage and computing capacity make it possible to explore the multi-dimensional information contained in massive data and apply it to prediction and decision-making; this is a qualitative change in the application of knowledge brought about by a quantitative change (Chen 2020). In the judicial field, judicial data such as the judgment documents created by judges are the basic assets on which AI relies to make decisions, and they determine the possible space and internal limits of judicial AI. AI cannot become an organic, free subject like the judge, let alone surpass the judge. This means that even if an intelligent machine achieves autonomous decisions, it cannot depart from the basic framework of legal logic; AI is just a “helper” that brings judges closer to justice. Judicial AI and the judge are therefore not mutually exclusive. Judges still dominate the admission of evidence, the determination of sentences, the review of the conclusions of judgment documents, and other legal argumentation work, which suffices to prove the judge’s dominant position in judicial decisions. Moreover, according to the “technology-law lock-in effect,” AI’s automatic deduction from existing laws will obstruct the evolution of the law (Crootof 2019). Only senior judges with sufficient legal reasoning ability can explain difficult and complex cases and continuously promote the law’s response to social demands and the pursuit of justice. People tend to expend the least cognitive effort, or simply trust their intuition, rather than systematically analyze every autonomous decision (Skitka et al. 1999); they leave the burden of achieving justice to the decision-makers of the judiciary. However, AI cannot become the absolute subject of justice, and it can hardly bear the primary responsibility of adjudication alone. The public’s trust in judicial justice comes directly from the legitimacy of the judgment result, and the judge is the ultimate bearer of responsibility.

5.2 Judicial AI and legal rationality

The code calculation of AI is not yet self-explanatory, and openness, transparency, and accountability are urgent problems for judicial AI. Engineers try to strengthen the legitimacy of the intelligent machine’s legal reasoning by open-sourcing algorithms, reverse engineering, algorithm audits, and similar means, but these are not a fundamental way to guarantee the justice of judicial AI. On December 18, 2017, the New York City Council passed the first bill addressing algorithmic discrimination by city government. It established a special task force to study how city agencies use algorithmic decisions that affect New Yorkers’ lives and to examine whether those systems discriminate on the basis of age, race, religion, gender, sexual orientation, or nationality. The bill examines the rationality and legitimacy of algorithms and conforms to a rational concept of justice in legal form. In addition, judicial relief mechanisms for citizens who suffer injustice from intelligent machine decisions have also become part of the rational use of judicial AI. On May 25, 2018, the EU’s General Data Protection Regulation came into force. The regulation explicitly limits automated individual decisions that have a “significant impact” on users, that is, algorithmic decisions based on user-level predictors; at the same time, it creates a right to explanation of algorithms, stipulating that citizens of EU countries have the right to ask how a specific algorithmic decision in a specific service was made. This requirement of transparency in algorithmic decision-making, the “right to explanation of algorithms,” is a brand-new right that reflects a concept of justice grounded in legal substance and rationality. However, the regulation also stipulates an exception: the right does not apply where personal data are processed to prevent, investigate, or prosecute criminal offences, to execute criminal penalties, or to guard against threats to public security. This precludes its judicial application. Yet the right to explanation of algorithms should not be limited to autonomous commercial decision-making; public decision-making algorithms have an even greater impact on citizens’ rights, and the code computing of judicial AI needs such empowerment to realize the pursuit of justice.

Judicial rationality is a necessary condition of judicial justice, and legal logic is the tool leading to judicial rationality (Xiong 2011). The intelligent machine itself exhibits technical rationality; although it reconstructs the content of judicial justice to a certain extent, it is not legal rationality after all. Legal rationality has its inherent logical quality, and its usefulness, actuality, timeliness, and value constitute the basic connotation of the law’s substantive rationality (Wang 2020). Judicial AI is good at data analysis and correlation calculation, but every judicial case is embedded in accumulated culture and national tradition, continuing history and culture. Just as fairness and justice are difficult to attain through pure logical reasoning, adjudicators should interpret judicial AI with legal rationality to make up for its deficits in social rationality and effectiveness.

Although AI technology helps realize judicial justice, judicial AI must abide by certain basic rules as the premise of security and is constrained by the concepts of legal rationality and justice. Legal rationality is the basis of judges’ judicial authority, and judicial AI decision-making likewise needs an intrinsic quality that binds it to the facts. Even if judicial AI achieves autonomous decision-making, judges cannot stand aside; they must actively participate through legal interpretation. This is required both by the human being’s status as the subject of adjudication and by the need to control the risks of judicial AI. In other words, judicial cases are the training data and adjudication reference standards for AI deep learning; if judges do not correct deviations in the data, the results of the machine’s code calculation will be unreasonable, and human beings will become useless in the judicial process and lose their meaning as subjects. Judicial AI not only uses old data but also produces and processes new data, so legal interpretation must be unified with data monitoring, control, correction, and improvement. By checking forward and reverse calculations, judges can help intelligent machines fit into the litigation structure, achieve fair trade-offs, protect data quality, and guard against the potential risks of AI technology. The lack of moral imagination in judicial AI must therefore be remedied by human judges, whereas intelligent machine algorithms are technical decisions that can at best achieve predictive justice. If judicial AI is to be recognized by the public, its algorithms must be carefully weighed so as to respond to the justice concept of legal rationality.

5.3 Limited regulation of the algorithm black box

The algorithm black box makes the judgment logic of judicial AI opaque, so the fairness and effectiveness of intelligent machine code calculation in the judicial field are called into question. The algorithm black box in judicial AI stems from the opacity of the algorithm itself; a dedicated review, interpretation, and management body composed of professional technicians and legal personnel is a suitable response.

First, interpreting the algorithm. The prerequisite for protecting people’s right to correction is being able to see mistakes in the decision-making process; the prerequisite for protecting people’s right to oppose discrimination is knowing the factual basis of the decision. Without these guarantees, information asymmetry renders these critical rights meaningless in law (Kaminski 2019). Algorithm interpretation faces problems such as balancing the protection of trade secrets against the risk that lawbreakers who understand the algorithm’s logic could undermine its controllability. Nevertheless, given the public-service and non-commercial nature of algorithms in the judicial field, the method by which the algorithm’s logic is constructed should be handed over together with the algorithm, and the provider should be ready at any time to explain to judges or relevant agencies, in response to the parties’ objections, the neutrality, objectivity, fairness, and morality of the algorithm it supplies.

Second, reviewing the algorithm. The learning algorithms of AI produce unpredictable results depending on the quality of the data, but this opacity can be consciously avoided and improved at the design stage; judicial AI certainly needs to be built on a sound case base and traceable code. On one hand, the court can establish a regular algorithmic review body on the basis of its existing administrative supervision body, by recruiting relevant talent or outsourcing to a third-party reviewer, to find and correct algorithmic flaws and defects in time. On the other hand, independent non-profit organizations can be commissioned by governments, courts, or individuals to conduct independent algorithmic reviews. Algorithms can be reviewed by examining the code itself through reverse code analysis, and by running data experiments to test whether the algorithm tends toward discrimination, as sketched below. As for the organization of review, a higher court can carry out regular or ad hoc reviews of lower courts, a court can also take the initiative to examine the AI algorithms it uses, and a party or third party that objects to a judgment result can apply for a review of the algorithm. In this way, a comprehensive, ongoing algorithmic review system can be established, with multiple reviewing subjects, multiple review methods, and multiple channels of appeal.
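
One concrete form of the “data experiment” mentioned above is a disparity check on an algorithm’s outputs across groups. The sketch below is a simplified, demographic-parity-style test; the records, group labels, and threshold are illustrative assumptions rather than a complete audit methodology.

```python
# Simplified data-experiment sketch: compare adverse-outcome rates across groups
# to flag possible discrimination. Data, groups, and threshold are illustrative.
from collections import defaultdict

# Each record: (group label, did the algorithm flag the person as "high risk"?)
decisions = [
    ("group_a", True), ("group_a", False), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", True), ("group_b", True), ("group_b", False),
]

counts = defaultdict(lambda: [0, 0])        # group -> [flagged, total]
for group, flagged in decisions:
    counts[group][0] += int(flagged)
    counts[group][1] += 1

rates = {g: flagged / total for g, (flagged, total) in counts.items()}
print("high-risk rates by group:", rates)

# Crude review rule: refer for human examination if rates diverge too much.
if max(rates.values()) - min(rates.values()) > 0.2:   # illustrative threshold
    print("Disparity exceeds threshold: refer algorithm for further review.")
```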

Third, moderate intervention in the algorithm. A judicial judgment reflects only part of the human mind and cannot capture its full content; when a judge makes a decision, he is influenced by all kinds of thinking. Human adjudicative thinking cannot be reviewed in its entirety either; it can only be reconstructed backward from conduct and results and verified by collecting clues. Therefore, we should not worry excessively about the code operation of intelligent machines and can limit ourselves to a minimum degree of intervention. At the same time, the limits of disclosure for intelligent machine algorithms still need careful consideration, both to avoid the inhibiting effect that such rights could have on scientific and technological innovation and to prevent the infringement of private-law rights in the name of the public interest.

6 Conclusion

Changes in technology and the tools of labor bring about changes in the economic base and the superstructure, an important factor in social progress. Judicial AI is not organic intelligence but, in essence, a tool that can maximize the liberation of “judicial productivity,” reflected in judicial fairness, efficiency, management, public service, and other respects. However, unprecedented technological advances have raised doubts about human–computer interaction: does the shift from machine-aided judicial decision-making to man–machine collaboration mean an innovation in the man–machine relationship? AI has a natural affinity for judicial justice, but at the same time the judicial process of AI is conditioned by the justice concept of legal rationality. Machine assistance to humans is meant to make judicial decisions stable and rational, and man–machine cooperation in judicial decisions is meant to prevent the extremes of formal rationality caused by mechanical judgment. AI technology may bring robot judges to humanity, but this could come at the expense of justice. Therefore, first, recognition of the subject status of judicial AI should not exceed its proper limit; that is, it should follow specific basic rules to ensure that judicial decisions conform to social relations. Second, to ensure the legitimacy of AI’s involvement in judicial decision-making, the formulation of specific norms in the legal field should weigh the efficiency and fairness of AI adjudication so that no decision impairs judicial authority. Finally, the reasonable regulation of algorithms should reduce their opacity to the greatest extent while interfering with technological progress as little as possible. Only by using science and technology to bring judicial justice within everyone’s reach does the robot judge fulfill its proper meaning.