Using Artificial Intelligence in your legal matters? Think Again!
The use of artificial intelligence in court material is becoming more prominent, with research from the University of New South Wales finding more than 80 instances of generative AI use in Australian courts, mostly by self-represented litigants.
In one recent overseas case, a self-represented litigant submitted fake evidence to the court consisting of AI-generated videos purporting to show people and events that simply did not exist.
There have also been multiple recent Australian cases in which lawyers have presented documents to the Court that were prepared by AI, including citations to previous cases that were entirely fictitious.
In the recent Australian case of Helmod & Mariya (No 2) [2025] FedCFamC1A 163, Justices Aldridge, Campton and Christie of the Federal Circuit and Family Court of Australia (Division 1) Appellate Jurisdiction highlighted the importance of the parties' duties not to mislead the court or their opponent, particularly through the use of generative AI.
The self-represented appellant husband confirmed that he had used generative AI to prepare his written material for the trial, and the filed material listed several cases, with citations, as authorities for various propositions. The Court found that the cases referred to either could not be located or were not authority for the propositions contended by the appellant.
While the Court acknowledged the potential for future use of AI in litigation, it is a tool that carries several risks. AI should be used with an appropriate degree of oversight and within a regulatory framework that maintains public confidence in the administration of justice. In the context of legal research or the preparation of court material, the risks of using AI are widely known. Artificial intelligence tools are not capable of conducting reliable research; they produce coherent and plausible responses that may turn out to be completely incorrect. AI may make confident but untrue assertions and may cite sources that simply do not exist.
The Court also highlighted the ethical obligation of legal professionals to ensure that the material placed before the court is accurate. As observed by Chief Justice Bell in May v Constaras [2025] NSWCA 178, all litigants (including those who are self-represented) have a duty not to mislead the Court. Reliance on unverified output from generative AI can confuse others, create unnecessary complexity, waste time, add unnecessary legal costs, and mislead the Court.
The Court in Helmod & Mariya highlighted that the use of AI breached the parties' obligations in litigation not to mislead the court or their opponent. The use of AI was also seen to have the potential to breach the Family Law Act 1975. Part XIVB of the Act restricts the sharing of information about family law matters because of the sensitive nature of proceedings, and the Court observed that inputting court material into an AI program has the potential to breach these provisions. The storing, collating and replication of data in AI programs also has the potential to waive legal professional privilege. Ultimately, the appeal was dismissed on the basis that the appellant had been afforded procedural fairness and there was no error of law, despite the appellant's conduct at trial.
While self-represented litigants may believe they are saving time and costs by using generative AI to assist them in drafting documents, it is important that AI is used with extreme caution. As highlighted by the Queensland guidelines for non-lawyers, you should not enter private, confidential or legally privileged information into an AI program because it could then become publicly available.
Legal research and the production of court documents are difficult tasks; a properly qualified lawyer will be able to assist you.