A Light In The Darkness

Saturday, 18 October 2025

Barrister found to have used AI to prepare for hearing after citing ‘fictitious’ cases

An immigration barrister was found by a judge to be using AI to do his work for a tribunal hearing after citing cases that were “entirely fictitious” or “wholly irrelevant”.

Chowdhury Rahman was discovered using ChatGPT-like software to prepare his legal research, a tribunal heard. Rahman was found not only to have used AI to prepare his work but also to have “failed thereafter to undertake any proper checks on the accuracy”.

The upper tribunal judge Mark Blundell said Rahman had even tried to hide the fact he had used AI and “wasted” the tribunal’s time. Blundell said he was considering reporting Rahman to the Bar Standards Board. The Guardian has contacted Rahman’s firm for comment.

The matter came to light in the case of two Honduran sisters who claimed asylum on the basis that they were being targeted by a criminal gang in their home country. Rahman represented the sisters, aged 29 and 35. The case escalated to the upper tribunal.

Blundell rejected Rahman’s arguments, adding that “nothing said by Mr Rahman orally or in writing establishes an error of law on the part of the judge and the appeal must be dismissed”.

Then, in a rare ruling, Blundell went on to say in a postscript that there were “significant problems” within the grounds of appeal put before him.

He said that 12 authorities were cited in the paperwork by Rahman, but when he came to read the grounds, he noticed that “some of those authorities did not exist and that others did not support the propositions of law for which they were cited in the grounds”.

In his judgment, he listed 10 of these cases and set out “what was said by Mr Rahman about those actual or fictitious cases”.