These FAQs offer guidance on some of the most common questions about using AI at Marquette right now.
Information Technology Services (ITS) at Marquette University has launched a centralized website to guide the campus community on the appropriate and effective use of generative AI. This comprehensive platform consolidates approved AI tools, outlines data protection by tier, sets expectations for responsible usage, and provides information on relevant events and training resources, serving as a single hub for AI-related guidance.
Faculty and staff at Marquette must adhere to data protection principles when using AI tools, treating them like any other system handling university information. This involves using only approved tools for institutional, sensitive, or regulated data, keeping university information within Marquette’s controlled systems, and minimizing data sharing to only what is essential for a given task. Users are responsible for verifying the accuracy of AI outputs, as these tools can generate confident but incorrect content. Furthermore, it's crucial to be aware of AI's potential for generating biased or insensitive content and to avoid using AI to automate or delegate responsibilities related to harassment, discrimination, or supporting vulnerable individuals, as these require human care and judgment. More detailed general guidelines on AI use are available on the university's generative AI website.
Marquette emphasizes that not all AI tools are secure for confidential university data; some platforms may use input data to train public AI models, potentially exposing sensitive information. To mitigate this risk, the university has established guidance on approved AI tools. Microsoft Copilot is the recommended AI tool for institutional use because it can be managed within Marquette’s Microsoft 365 environment, ensuring privacy, compliance, and content retention in a secure, containerized environment that does not contribute to public AI models. Three main Microsoft Copilot options are available. The full Microsoft 365 Copilot license integrates into Microsoft 365 apps such as Word and Outlook and requires departmental approval for an annual fee. A free, university-managed Copilot chat is available to all faculty and staff; it is suitable for questions, drafts, and brainstorming, and content can be shared directly for context. A free personal Microsoft Copilot chat also exists, but it lacks university data security and should be used only with publicly available data. ITS is available to provide further assistance on tool availability and usage. Other AI tools may be useful for low-risk scenarios such as brainstorming, improving general writing clarity, learning about AI, or working with non-sensitive content.
AI can be used responsibly for numerous day-to-day tasks, particularly when human review ensures the outputs are accurate and appropriate before public sharing. Examples include drafting or refining communications, organizing ideas, summarizing discussions, generating meeting minutes with approved tools, brainstorming, and enhancing clarity or tone. However, extreme caution is necessary for tasks requiring precision, attribution, or professional judgment, or when dealing with sensitive or regulated data such as FERPA-protected student data, PCI, HIPAA, research protocols, or employee information. In these instances, users must select approved tools, limit data sharing, and meticulously verify all outputs. When preparing materials for publication or proposals, it is essential to review external policies governing AI use for compliance. For collaborative projects, early discussion among contributors about AI usage is crucial to establish transparency, consistency, and shared accountability. AI should never be used to replace human accountability or authorship, store or train on university data outside institutional control, or bypass required policies and compliance processes.
No, using AI tools is not a requirement at Marquette University. While many faculty and staff are enthusiastic about exploring these emerging technologies, others may prefer not to. Marquette's approach is to provide clarity and guidance on responsible AI use, offering a supportive path forward rather than mandating uniform adoption across the university community. Additional general guidelines on the use of AI are available on the university's generative AI website.
Marquette offers a variety of training and resources to support faculty and staff in using AI responsibly. The Center for Teaching and Learning provides faculty-focused sessions and specific AI resources for effective usage. Information Technology Services (ITS) is developing a series of GROW classes and short AI tips, including videos and guides, which will be accessible on the ITS website. Additionally, self-paced learning modules are available through Microsoft Learn, covering topics such as 'Introduction to Generative AI,' 'Responsible Use of AI in Education,' and 'Work Smarter with AI using Microsoft Copilot.' The university's generative AI website also lists various events and training opportunities related to AI, with content continually expanding as more support becomes available.
Recognizing the dynamic nature of artificial intelligence, Marquette's AI Task Force is committed to developing a flexible and adaptive institutional approach to AI. This means that current guidance is considered a starting point, not a definitive conclusion. The university plans to provide ongoing updates on several key areas, including supported AI tools and evolving responsible-use guidelines, new training opportunities for the community, and practical examples of AI integration within teaching, research, and operational contexts.