Dadparvar · Staff member · Nov 11, 2016 · #1
A judge in the UK has warned lawyers of the consequences of submitting court filings that cite fake cases generated by artificial intelligence (AI).
In a judgment made available to the public on Friday, Dame Victoria Sharp considered two cases in which legal professionals were suspected of using AI to produce court submissions containing non-existent citations. The judge referred the lawyers in question to the regulator, although none of them were held in contempt of court.
Both misconduct cases were heard jointly and involved similar circumstances. The first concerned a judicial review in which a submission to the court, drafted by a pupil barrister (a junior advocate in training), misstated the contents of a statute and cited five non-existent cases. In her witness statement, the barrister vehemently denied using AI. Two colleagues who noticed that the cases were faulty attributed the problem to a simple misstatement in the citations. Although Dame Sharp stated that the pupil barrister’s conduct met the threshold for contempt of court, she refrained from pursuing it because of mitigating circumstances, including the barrister’s juniority, the scrutiny she had already faced, and a lack of appropriate supervision. While referring the junior barrister to the regulator, the judge emphasized that the decision is not precedential and that lawyers in similar circumstances risk severe sanctions.
In the UK, sanctions can include referral to the police, contempt of court proceedings (which can result in a prison term of up to two years), referral to the regulator, the striking out of an application, and public admonishment.
In the second case, a lawyer filed an application to set aside a court order that cited “completely fictitious” authorities in 18 instances. The lawyer admitted to having used “publicly available artificial intelligence tools.” Dame Sharp did not hold the legal professional in contempt, as one of the cited authorities was attributed to the judge himself, which was taken as evidence that there was no deliberate attempt to mislead the court. The lawyer was, however, also referred to the regulator.
While acknowledging the benefits AI can bring to both civil and criminal litigation and highlighting its “continuing and important role in the conduct of litigation,” Dame Sharp warned about the fallibility of such systems, which must be used with stringent oversight and in adherence to existing ethical and professional standards, as mandated by the Bar Standards Board’s Handbook and the Code of Conduct for Solicitors. For the justice system to function, courts must be able to rely on the integrity of counsel’s submissions. The judge urged firms to take urgent measures to prevent junior practitioners from misusing AI systems.
Official guidance on the ethical and professional use of AI has been published by the Bar Council and the Bar Standards Board. While this is not the first UK case in which legal counsel has allegedly relied on AI, it resembles recent instances in other jurisdictions where lawyers were caught submitting faulty AI output. In the US, for instance, submissions in Mata v Avianca, Park v Kim, and Lacey v State Farm General Insurance included entirely fabricated AI output. Similar incidents occurred in the Australian case of Valu v Minister for Immigration, the New Zealand case of Wikley v Kea, and the Canadian case of Zhang v Chen.
The post UK judge warns lawyers of consequences for misusing AI in court filings appeared first on JURIST - News.
Continue reading...
Note: We take no responsibility for this news. It has been posted here by Feed Reader, and we had no control over or ability to check it. Because news posted here is deleted automatically after 21 days, threads are closed so that no one spends time posting and discussing here. You can always check the source and discuss on their site.