Learning and Teaching with AI: Core Principles
The following principles seek to uphold the mission of the university amidst the perils and possibilities of AI. These principles support pedagogies of curiosity, accountability, independent thinking, and problem-solving in learning environments that uphold academic integrity, foster inclusivity, and align with university policies and legal requirements. Pedagogical goals and values must precede and support the use of AI technology.
1. For the Greater Good
Artificial Intelligence is transforming how we learn and work. SFU recognizes its equity and environmental challenges and commits to preparing future problem-solvers for the greater good of education, research, and society.
All uses of AI in teaching, learning, and research must prioritize accountable and transparent decision-making, responsible practices, and the educational benefit of our community.
Sustainability
AI systems require significantly more electricity than typical digital technologies, placing substantial strain on energy resources and impacting sustainability at the intersection of ecology, economy, and society. Following SFU’s commitments to sustainability and climate, it is important to:
- Evaluate and transparently communicate the environmental impacts of AI technologies.
- Limit AI integration to applications that provide clear educational value and benefits to the broader academic community.
- Continuously consider more sustainable alternatives or practices, balancing innovation with responsible resource management.
Indigenous Reciprocity
Respect for Indigenous history, languages, and cultures is fundamental and must guide how knowledge is collected, represented, and shared. In alignment with our commitments to reconciliation and Indigenous collaboration, SFU asks that we:
- Actively consult with Indigenous communities and governing bodies before engaging with AI tools that directly involve or relate to Indigenous knowledges.
- Recognize and respect that Indigenous ways of knowing and being, traditional practices, and cultural protocols may not inherently align with AI usage.
- Uphold Indigenous data sovereignty by ensuring all AI-related activities and courses involving Indigenous data, content, materials, or participation follow the guidance of Indigenous communities and, where relevant, align with OCAP principles. Where appropriate, obtain explicit consent, and always respect their sovereignty over data and intellectual property.
Responsible Integration
Artificial Intelligence may be integrated into SFU’s teaching and learning environments in ways that are thoughtful, intentional, and consistent with institutional values. Responsible integration of AI requires accountable and transparent practices that reflect on its environmental, pedagogical, and social implications. Members of the SFU community are expected to:
- Prioritize safety, privacy, fairness, and inclusivity in all uses of AI for instruction, assessment, and academic support.
- Clearly assess and communicate the intended benefits and potential limitations of AI tools within specific educational and disciplinary contexts.
- Exercise care and deliberation when exploring new or unfamiliar AI technology, ensuring use aligns with course goals, disciplinary standards, and SFU’s broader academic mission.
2. Academic Integrity
Members of the academic community produce original work, cite sources accurately, and uphold fairness and honesty in their teaching, learning, and research.
Because AI tools are trained on large, often undocumented corpora of data, typically without the knowledge of the original authors, users may unintentionally plagiarize the ideas or words of others. To mitigate this risk, students and instructors must validate and take ownership of AI-generated content and ensure proper attribution.
When AI tools are used (e.g., for outlining or editing), students and instructors should disclose how the tool was used and cite it appropriately, for example in an acknowledgements section, or through teaching practices that model transparency. Student disclosures confirming that AI was not used may be appropriate when an instructor has indicated that AI is not permitted. All instructional uses of AI must comply with privacy legislation, institutional policy, and copyright law, including FIPPA and SFU’s student academic integrity policy (S10.01).
3. Uphold Privacy
Secure the privacy of our teaching and learning community by thoroughly assessing the potential risks of Artificial Intelligence tools.
Protecting the privacy and security of our university community is essential when integrating AI into teaching and learning at SFU. All AI tools must undergo SFU’s Privacy Impact Assessment (PIA) and receive institutional approval before they are used for instructional purposes.
The PIA provides information about how and when data will be collected or used within AI systems, and explicit consent must be obtained. Data input into AI tools should be limited, where possible, to only what is necessary, avoiding sensitive personal information unless specifically approved. Clear communication regarding how data is managed, stored, and protected is required.
4. Equitable Access
Enhance learning by helping students leverage and benefit from Artificial Intelligence tools regardless of differences in language, identity, ability, or other demographic and socio-economic factors.
In delivering an AI-enhanced education, the university should endeavour to remove barriers to access, wherever possible. Ensuring that students have equal access to AI tools promotes fairness and inclusivity and helps to bridge the digital divide. In instances where AI use is permitted, students should be able to easily access and use the tools required to complete their coursework. When designing AI-enabled courses, instructors should recommend free or university-funded tools. Students who do not wish to use AI tools for the completion of their course work should be given alternatives that foster an equivalent learning experience, where reasonable and feasible (see also “7. Disciplinary Contexts” below). For learning activities, instructors should be sensitive to students’ distinct ethical and cultural contexts by understanding that some students may have legitimate reasons for wishing to refrain from using AI.
5. Transparency
Transparency in expectations for Artificial Intelligence use fosters a culture of trust, promotes fairness, and provides clarity in cases of misuse.
Transparency is a key factor in building trust between instructors and students. When expectations regarding the permissible or restricted use of AI are clear, students can make informed decisions about their coursework. A transparent approach to acceptable AI usage in the classroom fosters a culture of trust, promotes fairness and responsibility, and helps avoid negative repercussions. Instructors should be clear about their expectations for student use of AI tools in the completion of coursework and describe any alternatives, if available. Regardless of whether students are encouraged, discouraged, or prohibited from using AI, instructors should be explicit in their syllabi and other student-facing course materials about the purpose of asking students to engage in activities and assignments and how these align with learning and assessment.
6. Academic Freedom
Empowering instructors by promoting understanding of responsible Artificial Intelligence practices as defined by university policy and by provincial and federal law.
Academic freedom is a core principle that ensures instructors have substantial autonomy regarding teaching methods, course content, assessments, and instructional tools, including AI. Academic freedom is, however, constrained by provincial and federal law and by university policy. Instructors must ensure that their decisions concerning the use of AI for instructional purposes are consistent with these constraints and follow best practices.
7. Disciplinary Contexts
A thoughtful process for when and how to use Artificial Intelligence can be facilitated by involving open discussions within and between disciplines.
This document does not strive to formulate a one-size-fits-all approach to AI at SFU. Different research and disciplinary communities will likely have different expectations for the use of AI tools. In addition, different academic units at SFU and individual supervisory committees will make different determinations about the legitimate, pedagogically sound, and appropriate uses of AI tools. Deciding when and how to use AI should be a thoughtful process, involving open discussions within and between disciplines to promote informed interdisciplinary approaches to AI.
For more information, contact: Paul Kingsbury, AVPLT, avpltsec@sfu.ca