These guidelines help UW-Eau Claire faculty and staff safely choose and use generative AI tools (referred to as "AI" throughout) that have not been officially approved by the University but may be used under specific conditions. All faculty and staff must follow these guidelines to protect university data, student privacy, and institutional interests while enabling responsible exploration of AI technologies.
We recognize this process may seem lengthy and cumbersome. However, these steps are essential if you want to conditionally use an AI tool that hasn't been officially approved by the University. Evaluating tools properly upfront helps protect both you and the institution from potential privacy breaches, data misuse, and compliance violations.
IMPORTANT! If you wish to purchase an AI tool using University funds, please follow the process outlined in the Purchasing: Software knowledge base article or email ltsconsulting@uwec.edu before making any purchase.
View UW-Eau Claire's Artificial Intelligence (AI) Policy. For additional guidance, see the UW System's Generative AI guidance.
Part I: Tool Selection Guidelines
Privacy and Data Protection Evaluation
Before using any generative AI tool, you must evaluate its privacy practices and data handling policies. This section will help you make informed decisions about whether a tool meets UW-Eau Claire's conditional use standards.
Step 1: Locate and Review Key Documents
What to find:
- Privacy Policy or Privacy Statement
- Terms of Service or Terms of Use
- Data Processing Agreement (if available)
- Security documentation or white papers
Where to look:
- Footer of the website (most common location)
- "Legal" or "About" sections
- Help or Support pages
- For business tools, look for "Enterprise" or "Business" documentation
Step 2: Understanding the Decision Framework
How to interpret your findings:
- 🟢 GREEN LIGHT: Proceed with evaluation—tool meets university standards for this criterion.
- 🟡 YELLOW LIGHT: Proceed with caution—only use for non-sensitive, public information.
- 🔴 RED LIGHT: Do not use for any university-related work—find an alternative tool.
Step 3: Key Questions to Evaluate
Data Usage and Training
Question: Does the tool explicitly state it will not use my inputs for training purposes?
What to look for:
- 🟢 GREEN LIGHT: "We do not use your data to train our models" or "Training opt-out available"
- 🟡 YELLOW LIGHT: "We may use aggregated/anonymized data for training"
- 🔴 RED LIGHT: "All inputs may be used for model improvement" or no mention of training data usage
Opt-Out Options
Question: Can I opt out of data sharing or training use?
What to look for:
- 🟢 GREEN LIGHT: Clear opt-out mechanisms in settings or privacy controls
- 🟡 YELLOW LIGHT: Opt-out available but requires contacting support
- 🔴 RED LIGHT: No opt-out options available
Data Encryption and Security
Question: Is my data encrypted in transit and at rest?
What to look for:
- 🟢 GREEN LIGHT: "Data encrypted using industry-standard protocols (AES-256)" or "End-to-end encryption"
- 🟡 YELLOW LIGHT: "We use encryption" (vague but present)
- 🔴 RED LIGHT: No mention of encryption or security measures
Compliance Standards
Question: Does the tool comply with FERPA, HIPAA, and applicable privacy laws?
What to look for:
- 🟢 GREEN LIGHT: Explicit mention of FERPA compliance or educational data protection
- 🟡 YELLOW LIGHT: General privacy law compliance (GDPR, CCPA) but no education-specific standards
- 🔴 RED LIGHT: No compliance certifications mentioned
Step 4: Data Storage and Retention
Location of data storage:
- 🟢 PREFERRED: United States or other countries with strong data protection laws
- 🟡 CAUTION: Unspecified locations or countries with limited data protection
- 🔴 AVOID: Countries with known data security concerns
Data retention periods:
- 🟢 PREFERRED: Clear deletion timelines (e.g., "30 days after account closure")
- 🟡 ACCEPTABLE: Reasonable retention with deletion options
- 🔴 PROBLEMATIC: Indefinite retention with no deletion guarantees
Step 5: Making Your Decision
🟢 SAFE TO PROCEED: Tool meets most criteria with green lights, minimal yellow lights, and ZERO red lights
🟡 PROCEED WITH CAUTION: Mixed results—only use for public, non-sensitive information following all other guidelines
🔴 DO NOT USE: Multiple red lights or any red lights related to data training, security, or retention
WHEN IN DOUBT: Contact ltsconsulting@uwec.edu for guidance.
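The Step 5 decision rule can be sketched as a small function. This is illustrative only: the criterion names in the example dictionary are hypothetical, and it applies the conservative reading that any red light disqualifies a tool.

```python
def evaluate_tool(ratings):
    """Apply the Step 5 decision rule to a dict of criterion -> rating.

    Each rating is "green", "yellow", or "red" for one evaluation
    criterion (e.g., data training, opt-out, encryption, compliance).
    """
    values = list(ratings.values())
    reds = values.count("red")
    yellows = values.count("yellow")
    greens = values.count("green")

    if reds > 0:
        return "DO NOT USE"          # conservative reading: any red light disqualifies
    if greens > yellows:
        return "SAFE TO PROCEED"     # mostly green, minimal yellow, zero red
    return "PROCEED WITH CAUTION"    # mixed results: public, non-sensitive data only

# Hypothetical criterion names for illustration:
rating = evaluate_tool({
    "training": "green",
    "opt_out": "yellow",
    "encryption": "green",
    "compliance": "green",
    "retention": "green",
})  # -> "SAFE TO PROCEED"
```

When in doubt between the "safe" and "caution" outcomes, treat the tool as caution-only and restrict inputs to public information.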
Special Considerations for Data Sharing
If you cannot opt out of data sharing but are comfortable with the risk for non-sensitive data (such as brainstorming with publicly available information), you may still use the tool only if:
- All data inputs are truly public information,
- No university-specific context or details are included,
- You follow all other guidelines in this document, and
- You understand the potential risks to intellectual property.
Examples of acceptable use despite data sharing:
- Brainstorming general event names (not university-specific).
- Getting writing style suggestions for public content.
- Generating ideas for publicly available educational resources.
Still prohibited even with data sharing acceptance:
- Any information that could identify students, faculty, or staff
- University-specific strategies or internal information
- Draft content intended for university use
- Research ideas or methodologies
Refer to the Data Classification and Compliance section.
System Access Permissions
Never grant these types of access to AI tools:
- Access to your computer's file system or local storage.
- Permission to read files from cloud services (OneDrive, Google Drive, Dropbox).
- Microphone or camera access (unless specifically needed and justified).
- Access to email accounts or calendar systems.
- Location tracking or device identification.
- Installation of browser extensions with broad permissions (such as reading all website data, accessing passwords, or modifying web pages).
- Software downloads or installations of any kind. Use only web-based tools; if a tool requires installation, email ltsconsulting@uwec.edu for guidance.
Safe practices:
- Only grant the minimum permissions necessary for the tool's core function.
- Review and revoke permissions regularly.
- Use incognito/private browsing when possible.
- Consider using dedicated browsers or browser profiles for AI tools.
- Never download or install AI software; if a tool requires installation, discontinue use immediately and email ltsconsulting@uwec.edu for guidance.
Input and Output Data Usage
Research how the tool uses your inputs and what happens to the outputs:
- Determine if inputs become part of the tool's training dataset.
- Check if outputs could be shared with other users.
- Verify whether conversations or sessions are stored permanently.
- Look for information about data anonymization or aggregation practices.
- Understand if the tool creates user profiles or behavioral analytics.
Immediately discontinue use of:
- Tools that claim ownership of user-generated content.
- Services that explicitly use all inputs for model training without opt-out options.
- Platforms that share user data with third parties for commercial purposes.
- Tools with vague or unclear data usage policies.
Part II: Best Practices for Safe Usage
Protecting Personal Information
Remove all personally identifying information before using any AI tool:
- Student names, ID numbers, or contact information.
- Faculty, staff, or student personal information.
- Employee or personnel records.
- Social Security numbers, driver's license numbers, or other government IDs.
- Credit card numbers, bank account information, or financial data.
- Health information or medical records.
- Home addresses or personal phone numbers.
- Confidential institutional information or internal communications.
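As one illustrative safeguard for the checklist above, a quick pre-submission scan can flag obvious identifiers before text is pasted into an AI tool. The patterns below are simplified assumptions, not a complete solution: they will not catch names, addresses, or indirect identifiers, which still require manual review.

```python
import re

# Simplified, illustrative patterns; these will NOT catch all PII.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def flag_pii(text):
    """Return a list of (label, match) pairs for obvious PII found in text."""
    hits = []
    for label, pattern in PII_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((label, match))
    return hits

# Review and remove anything flagged before submitting text to an AI tool.
sample = "Contact Jane at jane.doe@uwec.edu or 715-836-5711 about grades."
for label, match in flag_pii(sample):
    print(label, match)
```

A clean scan does not mean the text is safe; treat this kind of check as a backstop for the manual review described above, never a replacement for it.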
Intellectual Property Protection
Understand how AI tool usage may affect your rights:
- Your original work remains your intellectual property; however, using an AI tool to alter your work may affect your rights.
- DO NOT input unpublished research, proprietary methodologies, or confidential institutional information.
- Be cautious about sharing novel ideas, grant proposals, strategic plans, or competitive information.
- Understand that some tools may claim rights to derivative works or outputs.
Best practices:
- Use AI tools for general assistance rather than core intellectual or operational contributions.
- Clearly state when and how AI tools were used in your work.
- DO NOT upload complete manuscripts, detailed research plans, operational procedures, or sensitive institutional documents.
- Consider using AI for editing, formatting, brainstorming, or routine task assistance rather than substantive content creation.
REMINDER: Any purchase to be made with University funds must go through the standard procurement process. Email ltsconsulting@uwec.edu for guidance.
Transparency and Attribution
Always be transparent about AI use:
- Disclose when AI tools were used in research, course materials, publications, or work products.
- Follow your discipline's or department's citation standards for AI tool usage.
For educators:
- Establish clear policies for AI use in your courses.
- Model appropriate AI usage for students.
- Use AI to enhance, not replace, your professional expertise.
- Be transparent with students about your use of AI in course development or grading.
- Maintain human oversight and critical evaluation of all AI outputs.
For staff:
- Use AI to streamline routine tasks and improve efficiency.
- Maintain human oversight for all decisions affecting students, employees, or institutional operations.
- Ensure AI assistance doesn't compromise service quality or accuracy.
- Follow departmental guidelines for AI use in specific work functions.
Data Classification and Compliance
Follow UW System data classification guidelines (SYS 1031):
Only use public (low-risk) data with unapproved AI tools:
- Publicly available information and research.
- General educational content and resources.
- Non-sensitive course materials and assignments (without student names/IDs).
- Published research and publicly available datasets.
Never use medium or high-risk data:
- Student educational records (FERPA-protected).
- Personnel files, employment information, or HR records.
- Financial records, budget information, or payment data.
- Research data with confidentiality requirements.
- Proprietary institutional information or strategic plans.
- Vendor contracts, legal documents, or compliance materials.
- Internal communications or meeting notes containing sensitive information.
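The rule above reduces to a simple gate: only low-risk (public) data may go into an unapproved AI tool, and anything unrecognized should be treated as high-risk by default. The category-to-risk mapping below is a toy sketch drawn from the examples in this section, not an authoritative classification; consult the UW System data classification guidelines and your data steward for real determinations.

```python
# Illustrative mapping only, based on the examples listed above.
RISK_BY_CATEGORY = {
    "published research": "low",
    "public educational content": "low",
    "student educational records": "high",   # FERPA-protected
    "personnel files": "high",
    "budget information": "high",
    "internal meeting notes": "medium",
}

def allowed_with_unapproved_tool(category):
    """Only low-risk (public) data may be used with unapproved AI tools."""
    # Unknown categories default to high risk (fail closed).
    return RISK_BY_CATEGORY.get(category, "high") == "low"
```

Failing closed on unknown categories mirrors the document's guidance: when you are unsure how data is classified, do not use it.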
Verifying AI Outputs
Always verify and fact-check AI results:
- Fact-check all information provided by AI tools.
- Cross-reference claims with reliable sources.
- Be aware of potential biases in AI responses.
- Understand that AI tools can generate plausible but incorrect information.
- Test AI outputs for accuracy, relevance, and appropriateness.
Quality assurance:
- Use AI as a starting point, not a final authority.
- Apply your professional expertise to evaluate and refine AI outputs.
- Consider multiple AI tools for comparison when appropriate.
Part III: Institutional Compliance
UW System Policies
All AI usage must comply with:
- UW Board of Regents Policy RPD 25-3: Acceptable Use of Information Technology Resources.
- UW System Administrative Procedure 1031.A: Information Security: Data Classification.
- FERPA (Family Educational Rights and Privacy Act): protects student education records.
- HIPAA (Health Insurance Portability and Accountability Act) (if applicable): applies to health information in certain contexts.
- Federal and state privacy laws.
- Departmental and college-specific guidelines.
UW-Eau Claire Policies
All AI usage must comply with:
- UW-Eau Claire's Artificial Intelligence (AI) Policy (linked at the top of this document).
Incident Reporting
Report immediately if:
- You suspect a data breach or privacy violation.
- Student, employee, or institutional data may have been compromised.
- You discover the tool is using your data inappropriately.
- There are security incidents or suspicious activities.
- You accidentally input sensitive or confidential information.
Contact Information:
- Help Desk
- Email helpdesk@uwec.edu
- Call 715-836-5711
- Visit VLL 1106
- Department Chair or Dean for policy questions
Regular Review and Updates
- Review privacy policies and terms of service quarterly—AI companies frequently update their data practices.
- Sign up for email updates from AI tool providers to stay informed about policy changes.
- Check for tool updates that might change data handling or privacy settings.
- Review your AI tool usage—discontinue tools that no longer meet University standards.
- Stay informed about updates to University AI policies.
- Participate in professional development opportunities about AI in higher education.
Additional Considerations
Faculty-Specific Considerations
Teaching and Academic Integrity
- Explicitly state your AI use expectations in course syllabi.
- Provide non-AI alternatives for students who prefer not to use these tools.
- Use established academic citation formats for AI tool usage in research and teaching materials.
- Recognize that AI detection tools are unreliable and should not be used as sole evidence of academic misconduct.
- Focus on teaching responsible AI use rather than complete prohibition.
- Recognize that AI policies and academic integrity standards are rapidly evolving.
If using AI detection tools to identify potential student misconduct:
- Include a statement in your syllabus about AI detection methods.
- Use Turnitin when possible, as it has appropriate data privacy protections.
- Avoid public instances of AI detection tools, as they create privacy risks.
- Back up Turnitin results with other evidence (comparison to the student's other work).
- Use a high confidence threshold (90%+) before pursuing misconduct cases.
- Be aware that AI detection tools frequently produce false positives, particularly flagging work by ESL students, neurodivergent students, multilingual students, and students with formal writing styles.
- Consider allowing assignment revision as an alternative that avoids triggering formal misconduct processes.
Research and Publishing
- Follow discipline-specific guidelines for disclosing AI assistance in academic publications.
- Check funder requirements regarding AI use in research proposals and execution.
- Be transparent with research collaborators about AI tool usage.
- Consider AI assistance disclosure when serving as a peer reviewer.
Staff Considerations
Daily Tasks
- Use AI to enhance routine administrative tasks while maintaining service quality.
- Ensure AI usage doesn't create barriers for colleagues, students, or community members.
- Ensure human oversight for all decisions affecting students, faculty, staff, or institutional operations.
- Maintain clear records of how AI tools are integrated into departmental workflows.
Professional Growth
- Seek professional development opportunities to stay current with AI technologies relevant to your role.
- Share AI knowledge and best practices with colleagues in your department.
- Stay informed about evolving institutional and departmental AI policies.
For Everyone
Communication and Documentation
- Keep track of which AI tools you are using.
- Align AI usage with professional ethical standards in your field or department.
- Understand and follow protocols for reporting AI-related security or privacy concerns.
Equity and Access
- Ensure no one is compelled to use AI tools that require personal data input.
- Consider that not all employees, students, or community members may have equal access to AI technologies.
- Design AI-enhanced processes that work for diverse user needs and technical capabilities.
Staying Current
- Recognize that AI policies and best practices are rapidly evolving.
- Participate in institutional reviews and updates of AI policies.
- Engage with professional associations and networks to stay current on AI best practices in higher education.
Quick Reference Checklist
Before selecting a tool:
- Read the privacy policy and terms of service thoroughly.
- Verify data usage and storage practices.
- Confirm no excessive system permissions are required.
- Ensure adequate documentation and support.
Before each use:
- Remove all personally identifying information.
- Verify data is classified as public/low-risk only.
- Consider intellectual property implications.
- Plan for output verification and fact-checking.
After use:
- Verify and fact-check all outputs.
- Properly cite AI tool usage.
- Review and revoke unnecessary permissions.
- Report any security or privacy concerns.
Need Help?
Contact ltsconsulting@uwec.edu for guidance on specific tools or technical questions, or cetl@uwec.edu for educational applications. This document will be updated regularly as AI technology and University policies evolve.
Glossary
- Click-through Agreement: A legal contract accepted by clicking "I agree" or a similar button.
- Data Classification: UW System's method of categorizing data by risk level (low/public, medium, high). For example, a course catalog is low-risk (public), while student grades are high-risk.
- FERPA: Family Educational Rights and Privacy Act—federal law that protects student education records and gives students rights to access and control their educational information.
- Generative AI: Artificial intelligence systems that can create new content, including text, images, code, or other media.
- PII (Personally Identifiable Information): Information that can identify a specific individual, such as names, social security numbers, or email addresses.
- Training Data: Information used to teach AI models how to generate responses and perform tasks.