
Generative AI (GAI) ethics and disclosure policy

At the Center for Cooperative Media, we recognize the potential benefits and challenges of using generative AI tools in our work. As we explore these tools, we are committed to upholding our organizational values and standards and to being transparent with our partners and funders.

The following guidelines outline our approach to the ethical use of AI in our work, incorporating principles and values from the Center’s general ethics policy:

Disclosure and transparency:

    • Disclosure: We will disclose when we use generative AI tools substantially (beyond simple formatting, organization, summarization, and structuring) in our work.
    • Explanation: We will explain how and why we use AI tools, what limitations they may have, and the steps we take to ensure the quality and integrity of the content.
    • AI tool information: We will provide information about the AI tools we use, including their developers and sources, to the extent possible.

Accuracy and integrity:

    • Verification: We will always verify, edit, and review any AI-generated content before publishing it to ensure that it is accurate, fair, relevant, and original.
    • Content standards: We will not use any AI-generated content that is false, misleading, biased, plagiarized, or harmful.
    • Supplementation: We will use AI tools to supplement, not replace, the work of our staff and partners, and we will clearly distinguish between AI-generated content and human-authored content.

Ethical and legal compliance:

    • Legitimate AI tools: We will use only legitimate, robust, and secure AI tools that comply with applicable laws and ethical principles.
    • Accountability: We will be transparent and accountable for our use of AI tools and how they affect our work.
    • Monitoring: We will monitor and evaluate the performance and impact of our AI tools, and make adjustments as needed to ensure their responsible use.

Data and deception:

    • Ethical use: We will not use AI tools to deceive, manipulate, or exploit anyone.
    • Data privacy: We will protect the privacy and security of any data we use or generate with AI tools, and we will comply with relevant data protection laws and regulations.

Training and education:

    • Training: We will provide training and education to our partners and staff on the ethical use of AI in our work, including best practices for verification, editing, and disclosure.
    • Engagement: We will engage with experts, researchers, and industry leaders to stay informed about developments in AI technology and its implications for our work.

Public engagement:

    • Dialogue: We will engage with our readers and the public to discuss the use of AI in our work and to address any concerns or questions they may have.
    • Feedback: We will seek feedback from our readers and the public on our use of AI tools and consider their perspectives in our decision-making processes.

Review and update:

    • Policy review: We will regularly review and update our ethics and disclosure policy on the use of AI in our work to ensure that it remains relevant and effective.
    • Adaptation: We will respond to new challenges and opportunities presented by AI technology in a thoughtful and responsible manner.

By adhering to these guidelines, we aim to use generative AI as a valuable tool to enhance our work while maintaining the trust and confidence of our readers.

If you have any questions, concerns, or feedback about any part of this policy, please contact the Center directly at [email protected].