ICCL has filed a complaint with the European Ombudsman against the European Commission over its use of generative AI in public documents, which likely violates the Commission's own guidelines and its obligations under the EU treaties.
A recent response from the European Commission to our access to documents request not only revealed that the President of the Commission was repeating the words of tech CEOs, but it also accidentally divulged the Commission’s use of generative AI in public documents.
The Commission's response included four links, at least one of which was generated using OpenAI's ChatGPT. The link contains the parameter "utm_source=chatgpt.com", which reveals the website that generated it (see below). It is unclear whether other parts of the response replicate generative AI outputs, or whether it is common practice at the Commission to include generative AI outputs in public documents.
These generative AI systems produce information that is sometimes correct and sometimes wrong. The errors are not bugs; they are a consequence of the design. These systems "predict" the next words based on probabilities. Facts are not their forte.
EU institutions have a duty to provide accurate information. By relying on bullshit generators, they may be violating their obligation under the treaties to uphold citizens' right to good administration.
In addition, this use of generative AI likely violates the Commission's own guidelines for staff on the use of online generative artificial intelligence tools, which state: "Staff shall never directly replicate the output of a generative AI model in public documents."
ICCL Enforce Senior Fellow, Dr Kris Shrishak, said today:
“Public bodies like the European Commission should always be transparent and disclose if a generative AI tool is used in any public document, even if the output from such tools has been assessed by their staff. In such a disclosure, specific details about the tools should also be mentioned for transparency.”
“If public bodies use generative AI tools in public documents, then the burden of proving veracity should fall on them, not on the recipients to rebut claims. Otherwise, generative AI tools should not be used.”
ENDS
For media queries or to arrange interviews, contact: [email protected] / [email protected] / 087 415 7162


