UW–Madison's Generative AI Policies
From the DoIT cybersecurity team:
This page outlines existing policies governing what you may and may not do when using generative artificial intelligence (AI) tools and services. These policies safeguard institutional data, which everyone in the university is legally and ethically obligated to protect. All university faculty, staff, students and affiliates must follow these policies.
Entering data into most non-enterprise generative AI tools or services is like posting that data on a public website. By design, these AI tools collect and store user input as part of their learning process. Any data you enter into such a tool could become part of its training data, which the tool may then surface to other users outside the university. For this reason, university faculty, staff, students and affiliates may enter institutional data into generative AI tools or services only when:
- The information is classified as public (low risk), or
- The AI tool or service being used has undergone appropriate internal review, which may include but is not limited to:
  - Cybersecurity risk management (per UW-503 and the Cybersecurity Risk Management Implementation Plan)
  - Data governance
  - Accessibility
  - Purchasing
As with everything you do at the university, you must follow UW–Madison, UW System Administration (UWSA) and UW System Board of Regents policies when using generative AI tools and services. Read on for more about those policies and tips for using AI safely.