AI Usage Guidelines

We all recognize the potential of Artificial Intelligence (AI) and, more specifically, the potential of generative AI and AI systems1.

The advent of generative AI marks a transformative shift: unprecedented possibilities at speed. AI agents and deep research tools are helping more people become developers or improve their development skills, and helping researchers find unseen connections, gain an overview of a field or research question, identify research gaps, generate ideas, and receive tailored support for tasks such as organizing content and improving language and readability. Sometimes they also do our homework.

As this paradigm shift continues, small fractures emerge around accountability and trust. The effect is not only reflected in work done by or with the help of AI; more importantly, it challenges work done entirely by humans, which risks losing credibility in everyday eyes.

Principles

To ensure safe, transparent, and human-centred use of AI, our guidelines are based on the assumption that AI must be an instrument of support2. We consolidate this in the following principles:

  1. Responsibility. All outputs remain under the full responsibility of the authors. AI tools support, but do not replace, human expertise and judgement.
  2. Transparency. AI use should be disclosed when it materially contributes to content creation, especially in formal or analytical outputs.
  3. Revision. All AI-assisted content must be reviewed, validated, and edited by humans prior to publication.
  4. Respect. AI tools must not be used in ways that expose confidential, personal, or sensitive project data, in line with data protection regulations. Users should do their best to ensure that both inputs and outputs respect copyright and licensing conditions.

The use of AI

We allow and use Artificial Intelligence (AI), including generative AI and large language models (LLMs), in the production of:

  • project deliverables (reports, briefs, studies)
  • communication materials (presentations, web content)
  • creative artefacts (visuals, narratives, multimedia)

The objective is to ensure transparency, accountability, and compliance, while enabling effective use of AI tools.

Disclosure

Disclosure is appreciated or (sometimes) required when AI tools are substantially involved in the preparation and writing phase, such as rephrasing, generating visual variants, or combining AI output with human edits.

Type of Use        | Examples                                  | Disclosure
-------------------|-------------------------------------------|-------------
Editorial support  | grammar, spelling, translation            | Not required
Drafting support   | summarisation, rephrasing, structuring    | Recommended
Content generation | generating text, visuals, narratives      | Required
Analytical use     | interpretation, classification, synthesis | Required
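For teams that want to apply the table consistently (for example in a submission checklist), it can be kept as a small machine-readable mapping. The sketch below is purely illustrative; the names `DISCLOSURE_BY_USE` and `disclosure_level` are hypothetical, not part of any prescribed tooling.

```python
# Hypothetical machine-readable form of the disclosure table above.
# Keys and values mirror the table; the names are illustrative only.
DISCLOSURE_BY_USE = {
    "editorial support": "not required",
    "drafting support": "recommended",
    "content generation": "required",
    "analytical use": "required",
}

def disclosure_level(type_of_use: str) -> str:
    """Return the disclosure expectation for a given type of AI use."""
    return DISCLOSURE_BY_USE[type_of_use.lower()]

print(disclosure_level("Content generation"))  # required
```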

Special treatment should be considered for images and videos, and for accessibility descriptions of such content. Generated images must not be misleading or deceptive and must comply with copyright and licensing rules. The use of AI to fabricate evidence or simulate real individuals without consent is not permitted. When AI is used to generate textual descriptions in support of accessibility technologies, disclosing its use is recommended.

For ways of disclosing AI use refer to the section How to disclose AI use.

Revision

All AI-assisted outputs must be fact-checked and verified to avoid bias, hallucinations, or unsupported claims. AI outputs must not be used blindly without human review, e.g. copied and pasted directly from an AI agent's chat interface without being read.

If human review (by you or someone else) is not possible, the resulting output must be disclosed as AI-generated and the entire prompt used to produce it must be made publicly available.

Respect

Names, email addresses, postal addresses, dates of birth, places of birth, and in general any data classified as personal under European regulations should always be considered personal information. Such information must not be included in AI chats hosted by third-party services, even if the terms of service suggest that chat messages are not retained for training purposes.

Documentation

We encourage teams to maintain lightweight internal documentation of the tools used, the purpose of use, and the level of human validation. This supports transparency, auditability, and continuous improvement.
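One way such lightweight documentation could be kept is as a structured record per output. The sketch below is a minimal illustration under assumed conventions; the class name, field names, and example values (`AIUsageRecord`, `example-llm`, etc.) are hypothetical, not a prescribed schema.

```python
# A minimal, hypothetical sketch of a lightweight AI-usage log entry.
# Field names are illustrative only, not a prescribed schema.
from dataclasses import dataclass, asdict
import json

@dataclass
class AIUsageRecord:
    output: str            # the deliverable or artefact concerned
    tool: str              # AI tool or model used
    purpose: str           # e.g. "rephrasing", "summarisation"
    disclosure: str        # the disclosure formulation chosen
    human_validation: str  # e.g. "full review", "spot-checked"

record = AIUsageRecord(
    output="Project brief Q3",
    tool="example-llm",
    purpose="rephrasing and structuring",
    disclosure="Human with AI assistance",
    human_validation="full review by the author",
)

# Serialising the record keeps the log diffable and auditable.
print(json.dumps(asdict(record), indent=2))
```

A plain spreadsheet with the same columns would serve equally well; the point is that each entry names the tool, the purpose, and the level of human validation.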

How to disclose AI use

Disclosing should be as simple as possible. We therefore prefer DAIU, a lightweight, creator-facing labelling of AI involvement in both approach and formulation, over strict metadata workflows such as those proposed by the Coalition for Content Provenance and Authenticity (C2PA). This does not mean we do not value provable metadata; we want to focus first on humans and the challenges they face when disclosing.

The disclosure statement should be placed in the imprint or at the end of the document.

As an initial pilot, we suggest the following formulations.

Only Human

Human with no AI. The content was produced by a human with no AI involvement.

This is equivalent to DAIU HM · None.

Human assisted

Human with AI assistance. The content was produced by a human, with AI providing minor help (e.g. grammar, translation) or generating segments (e.g. rephrasing or structuring) that were integrated by the author.

This is comparable to DAIU HM · Assist and HM · Remix.

AI Driven

AI Driven with human revision. The content was generated by AI and then revised or adapted by a human.
AI Driven. The content was generated by AI with minimal to no human refinement.

This covers DAIU AG · Remix, AG · Major, and AG · Full.


Usage of AI within the production of these guidelines: Human with AI assistance. The content was produced by a human, with AI providing minor help (e.g. grammar, translation) or generating segments (e.g. rephrasing or structuring) that were integrated by the author.


  1. In alignment with the Artificial Intelligence Act, the term AI system is used in this communication for "machine-based systems designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it received, how to generate output such as content, predictions, recommendations, or decisions, that can influence physical or virtual environments". ↩︎

  2. Artificial Intelligence in the European Commission (AI@EC) Communication ↩︎