Guidance on use of AI tools at Notre Dame

Author: Chas Grundy


Notre Dame community:

The recent growth of artificial intelligence (AI) tools such as ChatGPT, Superhuman, Otter.ai, and others presents both opportunities and challenges for Notre Dame.

We recognize that many in our community are eager to integrate these tools into their work and personal lives, and that some have already begun doing so.

As guardians of University information supporting our academic and administrative operations, faculty and staff have a shared responsibility to ensure the protection of personal and confidential data. When engaging with AI tools, it is imperative to exercise caution and adhere to the following guidelines:

  1. Data Sensitivity: Do not use AI tools with sensitive or confidential data.

  2. Data Anonymization: Whenever possible, anonymize data before using AI tools. This reduces the risk of exposing personal information and maintains the privacy of individuals.

  3. Training and Awareness: Familiarize yourself with the basics of AI and data security. Regularly participate in training sessions and workshops to stay informed about best practices and potential risks.

  4. Collaboration: Work closely with our campus IT and data security teams when considering AI tools. Do not use AI tools with University data unless a University contract is in place with the vendor.

  5. Vendor Assessment: Before adopting any AI tool, thoroughly assess the reputation and credibility of the vendor. Ensure that they have robust data protection and privacy practices in place.

  6. Regular Updates: Keep AI tools and associated software up to date with the latest security patches. Regular updates help safeguard against potential security breaches.

Notre Dame’s Data Classification Levels are outlined in the University’s Information Security Policy. In particular, student information regulated by FERPA, human subject research information, health information, human resources records, and similarly regulated data must not be used with AI tools.

The University is forming an interdisciplinary working group, co-chaired by the Office of Information Technologies and Notre Dame Research, to explore and assess the potential benefits and risks associated with generative AI tools. We commit to keeping you informed as we progress in our efforts to evaluate and implement AI technologies responsibly.

Thank you for your cooperation. Learn more about this initiative and find guidance on AI at the Generative AI Task Force (GAIT).

If you have any questions or concerns about appropriate use of AI, please contact the OIT Help Desk during business hours at 574-631-8111 or oithelp@nd.edu.

Yours in Notre Dame,

Jane Livingston
Vice President and Chief Information Officer
University of Notre Dame