Policies: Artificial Intelligence (AI)

Last Updated

Purpose

This policy establishes guidelines for the ethical, responsible, and legal usage of Artificial Intelligence (AI) technologies across academic, research, and administrative functions at the University of Wisconsin-Eau Claire.  

Scope

This policy applies to all UW-Eau Claire students, faculty, staff, researchers, and third-party partners who use AI technologies or develop AI systems within the University’s technology ecosystem. 

Procurement

Any AI technology procured at UW-Eau Claire must be reviewed by UW-Eau Claire Procurement and Learning & Technology Services to (1) verify that the end user license agreement permits legal use of the product by a public institution and (2) verify that the AI technology is properly obtained, licensed, and/or procured. Purchases must follow all University procurement rules and may not be placed on credit cards unless written pre-approval is obtained from UW-Eau Claire Procurement or Learning & Technology Services. 

Faculty or staff members interested in procuring AI technologies as part of curriculum, research, or operations at UW-Eau Claire should follow the process outlined in the Purchasing: Software knowledge base article or email ltsconsulting@uwec.edu to get started. 

University Data Usage

Any AI technology that will access, process, or store University data requires prior review and approval by the Learning & Technology Services Chief Information Security Officer in collaboration with the Chief Information Officer. 

University data is defined as information collected and maintained by the University of Wisconsin-Eau Claire. This includes, but is not limited to, student demographics, academic records, financial information, research data, and personally identifiable information (PII). 

In particular, the AI technology will be reviewed to verify that: 

  1. University data is securely used, transmitted, and stored. 

  2. The technology is protected against unauthorized access, tampering, and misuse via secure login, including multi-factor authentication (MFA). 

  3. Appropriate security measures, including encryption and access controls, have been implemented. 

  4. The technology complies with all relevant data protection laws and regulations. 

In addition: 

  1. Generated data ownership and copyright must comply with SYS 1310. 

  2. Faculty, staff, and students must be informed when they are interacting with AI technology. 

Installation of AI Technology on University Property

Any AI technology that requires downloading and installation on University property must be reviewed and approved by Learning & Technology Services. 

Conditional Usage of Generative AI

Generative AI tools that require no procurement, University data, or installation on University property may be evaluated by faculty and staff for conditional use in accordance with the "Generative AI Tools: Guidelines for Conditional Usage" reference document.  

Personal Liability

Generative AI tools, such as Anthropic's Claude or Google's Gemini, typically use click-through agreements, which are legally binding contracts. Individuals who accept click-through agreements without delegated signature authority may face personal consequences, including responsibility for compliance with the terms and conditions. Faculty and staff should be aware that agreeing to terms of service for AI tools may create personal legal obligations. 

Prohibited Generative AI Usage

Never input personal, confidential, proprietary, or sensitive information into public generative AI tools—assume all data may be exposed or misused. This includes student records subject to FERPA and any information classified as Medium or High Risk per SYS 1031. 

Similarly, public instances of generative AI tools should not be used to generate output that would be considered confidential or sensitive. Examples include, but are not limited to, proprietary or unpublished research; legal analysis or advice; recruitment, personnel, or disciplinary decision-making; completion of academic work in a manner not allowed by the instructor; creation of non-public instructional materials; and grading. 

Curriculum and Research Usage 

University employees are responsible for knowing and following the rules governing the use of AI tools in their department or unit, in addition to adhering to University policies. Employees who are unsure whether a given use is permitted should consult their supervisor or department head before using AI for University work. 

Governance 

The Chief Information Officer, in collaboration with the University Senate Technology Committee, the University Senate, and the Extended Executive Team, will oversee the annual review and update of this policy. 

Enforcement 

Violations of this policy may result in disciplinary action under the existing University codes of conduct, as outlined in the Faculty and Academic Staff Rules and Procedures (FASRP) for faculty and staff, and the Blugold Student Conduct Code for students.