Generative AI Tools for Students

Generative AI is transforming how research is conducted, from literature reviews to data analysis and writing. While it offers powerful tools for productivity and creativity, it also raises important questions about authorship, integrity, and transparency.

What kind of data do AI tools collect from users?

When you use AI tools, whether it's ChatGPT, Microsoft Copilot, Grammarly, or others, it's important to understand what kind of data might be collected and how it could be used.


Common Types of Data Collected

  1. User Inputs
    • Anything you type into the AI (questions, prompts, documents) may be stored and reviewed to improve the tool.
    • Some tools allow you to opt out of data being used for training.
  2. Usage Data
    • Includes how often you use the tool, what features you use, and how long your sessions last.
  3. Device & Technical Info
    • IP address, browser type, operating system, and device identifiers.
  4. Interaction Logs
    • Clicks, scrolls, and other interactions with the interface.
  5. Account Information (if logged in) 
    • Name, email, and linked services (e.g., Google or Microsoft accounts).


Why It Matters

  • Privacy Risks: Sensitive or personal information could be stored or shared if not handled properly.
  • Data Storage Locations: Some tools store data in other countries, which may be subject to different privacy laws.
  • Academic Integrity: Uploading assignments or research data to non-approved tools may violate university policies.


UCalgary Guidance

Privacy Policies

When using AI tools, it’s important to understand how your data is handled. Leading companies like OpenAI and Google have published privacy commitments to clarify what they collect and how they use it.

OpenAI (ChatGPT)

  • Data Use: OpenAI may use your prompts and responses to improve its models unless you opt out in your data controls settings.
  • No Training on API Data: Content submitted via the API (e.g., through third-party apps or enterprise tools) is not used for training.
  • Enterprise & Team Plans: Offer enhanced privacy; data is not used for training by default.
  • User Control: You can delete your chat history or disable chat saving in your account settings.

🔗 OpenAI Privacy Policy

Google (Gemini)

  • Workspace & Cloud Users: Gemini does not use your prompts or content to train models without your permission [1] [2].
  • Enterprise-Grade Protections: Your data stays within your organization and is protected by existing Google Workspace security controls.
  • Encryption: Prompts and responses are encrypted in transit and at rest.
  • No Cross-Tool Sharing: Gemini tools don’t share data between services unless explicitly directed.

🔗 Gemini for Google Workspace Privacy Hub
🔗 Gemini for Google Cloud Data Use
 

[1] Generative AI in Google Workspace Privacy Hub
[2] How Gemini for Google Cloud uses your data


Grammarly – Committed to robust data protection and privacy compliance:

Perplexity AI – Offers transparency on data usage and retention:

Scite.ai – Follows applicable privacy laws and GDPR:

DeepSeek – Use with caution due to data transfer risks; it stores and processes user data in China:

Should you avoid entering certain information into AI chats?

Yes! Users should be very cautious about what they enter into AI tools. While AI can be helpful, it’s important to protect your privacy, academic integrity, and sensitive data.

Information You Should Avoid Sharing:

  1. Personally Identifiable Information (PII)
    • Full name, address, phone number, student ID, or government ID numbers.
  2. Sensitive Academic Content
    • Unpublished research, thesis drafts, exam questions, or confidential data.
  3. Login Credentials or Passwords
    • Never share usernames, passwords, or access tokens.
  4. Private Conversations or Emails
    • Avoid pasting personal messages or confidential communications.
  5. Health or Financial Information
    • These are protected under privacy laws and should not be shared with AI tools.

Libraries are navigating new license agreements that limit how AI tools can be used with subscribed content. These agreements typically prohibit:

  • Using content to create a competing product or service.

  • Disrupting or interfering with how a subscribed resource functions.

They also require that:

  • Only small portions of content are used for research or training AI models.

  • Any AI tool usage must occur in a closed or self-hosted environment accessible only to UCalgary faculty, staff, or students.

This ensures external AI tools (like ChatGPT or Copilot) don’t retain, learn from, or misuse licensed content. These restrictions usually don’t apply to openly licensed materials (like those under a Creative Commons license).

📩 Planning to use library resources with AI?
Please reach out to Collections Support at collections@ucalgary.libanswers.com to discuss your project and ensure it aligns with licensing terms.

How can you protect your data when using AI tools?

AI tools can be incredibly helpful, but they also come with privacy risks. Here are some simple steps students can take to protect their data while using AI responsibly.

Tips for Protecting Your Data:

  1. Avoid Sharing Sensitive Information
    Don’t enter personal details, passwords, student IDs, or confidential academic content into AI chats.
  2. Use University-Approved Tools
    Stick to tools like Microsoft Copilot Chat, which is approved and supported by UCalgary:

    🔗 Microsoft Copilot Chat
  3. Check Privacy Settings
    Some tools (like ChatGPT) allow you to turn off chat history or opt out of data being used for training. Use these settings when available.
  4. Log Out When Done
    Especially on shared or public devices, always log out of your AI accounts to prevent unauthorized access.
  5. Use Secure Networks
    Avoid using AI tools on public Wi-Fi without a VPN. Secure networks help protect your data in transit.
  6. Read the Privacy Policy
    Know what data the tool collects and how it’s used. Look for terms like “data retention,” “training,” or “third-party sharing.”