At Benevity, we believe technology should amplify human goodness. Our approach to AI is grounded in integrity, fairness, and accountability, with every employee responsible for ensuring it enhances trust, inclusion, and positive impact.
1. Purpose
This Policy defines the approach to the responsible use, development, and governance of artificial intelligence at Benevity. It sets the principles that guide how AI is selected, designed, and used to advance Benevity’s mission while protecting people, data, and trust. The Policy aligns with Benevity’s Information Security, Privacy, and Compliance frameworks, and is supported by internal standards maintained by the AI Council.
2. Scope
This Policy applies to all Benevity employees, contractors, temporary workers, and teams that design, procure, or use AI tools or capabilities; to AI systems or features embedded in Benevity products; and to third-party AI technologies integrated into Benevity’s business operations.
3. Key Definitions
Artificial Intelligence (AI): Technologies that perform tasks typically requiring human intelligence, such as pattern recognition, language understanding, prediction, or content generation.
Confidential Information: Any Stakeholder, employee, or proprietary data that is not publicly available, including information received from Stakeholders in connection with Benevity’s operations.
GenAI: AI systems capable of producing new content, such as text, audio, or visual material, in response to user prompts.
Inside Information: Any non-public information—technical, personal, commercial, or strategic—that Benevity or its Stakeholders designate as confidential or that could reasonably cause harm if used or disclosed without authorization.
Permitted Tools: Enterprise-grade GenAI models and applications approved by Benevity for employees to use in work-related tasks. These tools are curated by Benevity IT.
Personal Data: Information that identifies or can reasonably be used to identify a person or, where protected by law, a legal entity. It includes any personal information Benevity collects from or on behalf of Stakeholders under applicable data protection laws, such as names, contact details, online identifiers, and information about employment or location.
Sensitive Personal Data: Special category Personal Data as defined by applicable data protection laws such as information about racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, health, or biometric or genetic data. It also includes criminal records, financial account credentials, information subject to legal privilege, or Inside Information.
Services: The Benevity software platforms, systems, and related applications provided to Stakeholders, including any associated websites, integrations, and APIs.
Service Data / Telemetry: Operational data generated by the Services (e.g., logs, metrics, diagnostics, security events) used to provide, secure, and improve the Services. Service Data may include aggregated insights about Stakeholder usage patterns, provided it cannot reasonably identify a natural person or a specific Stakeholder.
Stakeholder: Includes clients, end users, foundations, nonprofits, vendors, and any other organizations or individuals who engage with Benevity or whose data may be processed through Benevity’s systems.
Stakeholder Data: Any data, content, or information that a Stakeholder (or their respective users, employees, or administrators) submits to or stores in the Services, or that is collected by the Services on their behalf. This includes files, text, images, audio/video, prompts and inputs to AI features, configuration and program settings, and Personal Data relating to the Stakeholder’s users, employees, program participants, or beneficiaries. Stakeholder Data does not include: (i) Aggregated/Anonymized Data that cannot reasonably be used to identify a natural person or a specific Stakeholder; (ii) Service Data / Telemetry; (iii) Benevity Materials (including software, models, algorithms, system prompts/instructions, and documentation); or (iv) publicly available data sourced independently of the Stakeholder. Client Data, as used in Benevity’s contracts, is a subset of Stakeholder Data.
4. Roles and Responsibilities
5. AI Principles
Benevity’s use of AI is guided by principles that ensure it remains human-centered, privacy-respecting, fair, secure, and accountable, while supporting social and environmental responsibility. These principles are informed by leading global frameworks, including the EU Trustworthy AI Guidelines, the OECD AI Principles, and the NIST AI Risk Management Framework, and reflect Benevity’s own values and commitments. These principles apply to both AI developed or deployed within Benevity’s products and AI tools or systems used internally by Benevity (including third-party or vendor solutions integrated into our operations).
5.1 Human Agency, Benefits, and Oversight
AI systems should empower human beings by enabling informed decision-making, respecting fundamental rights, and increasing productivity. Teams deploying AI must follow and document oversight mechanisms established by the AI Council, and remain accountable for their use and outcomes, ensuring that decisions with material impact on individuals or organizations remain human-led.
5.2 Privacy and Data Governance
Only the minimum necessary Stakeholder Data should be used, and data must not be transferred to, or processed by, unapproved external AI systems in ways that contravene privacy or contractual obligations. Stakeholder Data is never used to train general-purpose models without explicit permission, and third-party providers are contractually required not to retain or reuse Benevity data, prompts, or outputs beyond the permitted purpose. All AI use is subject to Benevity’s data minimization, retention, and deletion standards.
5.3 Fairness and Inclusion
Benevity is committed to building and procuring AI that treats people equitably and avoids amplifying harmful bias, prejudice or discrimination. AI systems should be accessible to all users, including those with disabilities, and should actively foster diversity and inclusion. Accessibility and fairness testing are incorporated into Benevity’s review processes proportionate to the use case and consistent with Benevity’s existing product accessibility and DEI commitments.
5.4 Transparency and Explainability
Benevity ensures transparency around when and how AI is used in its products and services. Clear disclosures of AI-assisted content are provided where AI meaningfully contributes to an outcome, and plain-language explanations are available, where appropriate, describing how key outputs are generated or can be challenged. This does not apply to routine marketing or communications materials created by employees using Permitted Tools with human review.
5.5 Accuracy and Quality
Benevity maintains human review and validation processes to ensure that AI-generated outputs used in its operations or products are accurate, appropriate, and consistent with Benevity’s standards of professionalism and integrity. AI is implemented to augment efficiency and creativity while upholding accuracy, reliability, and alignment with Benevity’s values.
5.6 Technical Robustness and Safety
AI tools and integrations must be resilient, secure, and designed with fallback or manual override plans to address potential failures. Third-party AI vendors must demonstrate equivalent or stronger safeguards, including encryption, access control, monitoring, and data isolation measures. All AI deployments are subject to Benevity’s risk assessments and ongoing monitoring requirements.
5.7 Societal and Environmental Well-Being
Benevity recognizes that innovation should not come at the expense of environmental or social well-being. In proportion to Benevity’s operational footprint, we will endeavor to partner with vendors committed to responsible supply chains.
6. Use Cases and Risk Guardrails
Benevity applies a tiered risk framework to all AI activities, ensuring proportional oversight, accountability, and compliance with this Policy and global standards.
6.1 Low-Risk Uses
AI applications involving non-sensitive data (i.e., no Personal Data or Confidential Information), internal audiences, or limited business impact, such as text summarization, translation, or document drafting, are generally permitted when Permitted Tools are used and human review is applied.
6.2 Moderate-Risk Uses
AI used in Stakeholder-facing materials, decision-support analytics, or product design, where AI meaningfully shapes the substance, logic, or outcomes of the material, requires human review and AI Council approval for accuracy, fairness, security, bias mitigation, and compliance prior to implementation. This does not include routine client-facing or marketing communications drafted by employees using Permitted Tools, provided a human reviews the content before it is sent or published. Moderate-risk uses must transparently disclose AI involvement where outputs may influence external perception or decision-making.
6.3 High-Risk or Prohibited Uses
Regardless of whether the tool is on the Permitted Tool list, AI must not be used to:
- Make, score, or automate employment, disciplinary, or legal decisions;
- Impersonate individuals or generate deceptive, harmful, or discriminatory content;
- Process Sensitive Personal Data without legal basis and explicit authorization; or
- Monitor individuals or conduct profiling that could infringe privacy rights.
6.4 Human-in-the-Loop Principle
AI features are designed to assist, not replace, human decision-making. Where AI generates recommendations, rankings, or insights, mechanisms are in place to enable human review and intervention where appropriate. Continuous feedback and review ensure models remain aligned with Benevity’s values and Stakeholder expectations.
6.5 Risk Reporting and Review
Any suspected misuse, unintended outcome, or ethical concern related to AI must be reported immediately to the Cybersecurity Committee at securitycompliance@benevity.com. Benevity will investigate all reports promptly, confidentially, and without retaliation. For the purposes of this Policy, an unintended outcome is any AI-generated result that materially deviates from expected or documented behavior, including outputs that may be inaccurate, biased, misleading, harmful, or otherwise inconsistent with Benevity’s standards.
7. AI in Benevity Products
When AI is embedded in Benevity products or supplied by vendors, Benevity will:
7.1 Inclusive Design & Testing
Evaluate internally trained models for fairness across demographics and accessibility requirements before launch and on a recurring basis.
7.2 Data Use and Model Training
Require contractual safeguards prohibiting the use of Stakeholder Data for unrelated model training.
7.3 Explainability
Provide plain-language explanations of how significant AI recommendations (e.g., donation matching, volunteer suggestions) are generated.
7.4 User Feedback Loop
Require all GenAI features to enable user feedback; low-risk features may route feedback via support channels. Feedback collected through AI-enabled features will be periodically reviewed by product and risk teams to improve performance and identify issues.
7.5 Regulatory Alignment
Design in accordance with applicable AI regulatory standards, including transparency duties (e.g., EU AI Act Article 50 for limited-risk systems) and other relevant laws; and
7.6 Opt-out
Where feasible, enable Stakeholders to opt out or disable AI-driven features without losing core functionality. If an AI component is essential to product performance, Benevity will communicate its purpose transparently so users and Stakeholders understand how data is used and can make informed decisions.
8. Policy Enforcement
Compliance with this Policy is mandatory. Breaches may result in corrective or disciplinary action, up to and including termination of employment or contract, and Benevity may take remedial or legal action to protect its data, systems, or Stakeholders as appropriate.
9. Related Resources
These internal policies and guidelines are essential for understanding and complying with this Policy:
- Acceptable Use of Technology Policy
- Benevity's ESG Policy
- Data Governance Policy (includes personal information and data subject rights)
- Do the Right Thing
- Information Security Policy
- Security Incident Management Policy
- Privacy Notice
- Security Incident Management Procedure
- See Something, Say Something
- Vulnerability Management and Threat Intelligence Policy
10. Document Control
Annual Review – The AI Council will review this Policy annually, or sooner if there are material regulatory or technological changes, and will publish a revision history.
Version History
Management Review and Sign-off