On January 30, the 1st Labor Court of the Cartagena Circuit published a ruling that protected the rights of a minor with autism spectrum disorder and ordered his EPS (health insurer) to exempt him from paying moderating fees and to cover transportation costs from his home to the place of his therapies. However, the ruling will not be remembered for the substance of its decision, but for the audacity of the Cartagena judge in using the artificial intelligence tool ChatGPT as part of the elements on which he built it.
ChatGPT, developed by OpenAI (a company in which Microsoft recently announced investments), is, by its own account, a “language model (…) designed to generate text autonomously.” Unlike traditional Internet search engines, whose results mainly consist of links to third-party websites, ChatGPT generates an explicit answer to the question asked.
However, there is ample evidence of its current lack of precision, among other reasons because its training data only extends through 2021. Biases and inconsistencies have also been documented in its responses, including answers that reveal political preferences or even state facts contrary to reality. None of this is extraordinary for a new technology still in the process of maturing, and it is foreseeable that these shortcomings will be overcome in time.
Returning to the ruling: the innovative judge put the legal problem under analysis to ChatGPT, asking whether the claimant was entitled to an exemption from the moderating fees, and even whether the tutela action (a constitutional writ for the protection of fundamental rights) should be granted. The ruling states that ChatGPT was not used to replace the judge’s decision, but rather to optimize drafting times after corroborating the information it provided.
Up to that point, everything would be fine if the judge had in fact included details of that corroboration. On the contrary, the ruling omits any account of how the process was carried out or which third-party sources ChatGPT’s responses were checked against. As many procedural experts have noted, this omission, however fair the ruling may otherwise seem, could affect its legality due to defects in its reasoning.
If ChatGPT had been used solely to speed up writing, like a dictation tool, it might not even have been worth mentioning in the ruling; but if a judge intends to use it to define or support the substance of a decision, even to a minimal extent, the judicial authority is expected to verify the output and contrast it against the sources available.
It would be absurd to prohibit judges from using technological tools, and their openness to adopting them in the administration of justice is in fact welcome; but it is essential that this not be done lightly, as the risks could be greater than those of using a new appliance without reading the instructions.
Many questions in the field of artificial intelligence remain for academia and case law to address, among them: what scope should judges be given in using this tool? How should its use be incorporated into proceedings and weighed against other sources? The first challenge is to develop policies, or at least agreed good practices, on the use of artificial intelligence tools by the judicial branch, so that adoption does not proceed in an inarticulate or uneven way from one judicial operator to another, and so that its use does not result in decisions contrary to the law.