Last edited on January 18, 2024
To the instructor
The field of Artificial Intelligence (AI), its models, and its regulatory framework continue to grow and change rapidly, bringing with them ethical considerations and risks that must be addressed. EVMS supports the responsible use of AI tools, but important considerations apply when using them. No clear legal framework yet governs the ethical use of AI; until one exists, the following guidelines are intended to ensure the ethical and responsible use of AI tools in your teaching. EVMS will continue to monitor developments in this rapidly evolving technology and update these guidelines accordingly.
Guidelines
- Protect confidential data. There is no guarantee that information entered into AI tools will remain confidential. Data classified as private or restricted should not be entered into generative AI tools.
- When using these tools, do not disclose confidential, sensitive, or personally identifiable information.
- Information restricted by law (e.g., FERPA, HIPAA), by contract, or by other agreement should not be entered into generative AI tools.
- Adhere to current policies on academic integrity and institutional standards of conduct. All students are expected to know and adhere to EVMS's academic integrity policies and to follow their instructors' guidelines about permitted use of AI for coursework. All faculty are expected to know and adhere to standards of conduct within the educational, research, and clinical realms.
- Don’t assume outputs are accurate. Content created by AI tools can include factual errors or inaccuracies, fabrications, bias, or other unreliable information. It is your responsibility to:
- ensure the accuracy of what is reported in your work;
- promote the well-being and safety of our society in a manner that garners public trust; and
- be inclusive and provide access to the widest possible audience while remaining free from bias.
- Original work. While EVMS supports responsible use of AI tools, they must not substitute for independent thinking and creativity.
- Proper citations. If a student plans to use generative AI tools for assignments, the tools should be cited using the following format:
- Tool Name. (Year, Month, Date of query). “Text of query.” Generated using Tool Name. Link to tool name.
Usage philosophy
- AI as a supportive tool: AI should augment your teaching, not replace it. While AI is a valuable aid, the human touch remains irreplaceable. Use AI to enhance student comprehension, but always stay engaged with your students.
- Grading with AI: When considering AI for grading tasks, weigh accuracy and speed. Ensure the AI's output mirrors the feedback you'd normally give, and be ready to explain your choice if students express discomfort with AI grading.
- Accountability for teaching content: Should you integrate AI outputs into your materials, you bear the responsibility for any errors. Always vet and cross-check AI-generated content, ensuring its accuracy and proper attribution.
- Transparency in AI usage: Keep your AI use transparent. Notify students when AI assists in creating course content and elaborate on how the technology was used.
- Refine teaching for AI integration: Mitigate potential AI pitfalls by prioritizing activities that nurture deep intellectual development. This includes encouraging hands-on knowledge creation and introspective learning.
- AI tool selection: Ensure AI tools resonate with your course goals. They should amplify learning, not distract from it. Think about the specific learning outcomes you aim to achieve and pick AI tools accordingly.
- Nurturing critical thought: Offer guidance to help students spot AI biases. Design tasks that incite iterative, inquiry-led thinking around AI.
- Plagiarism tools: Use AI plagiarism checkers judiciously. Recognize their limits and be alert to the potential misuse of AI to bypass them.
AI usage guidance
Below are specific examples of how AI can be used in different teaching settings (e.g., lectures, small groups, online learning):
- Example 1: Using AI-powered chatbots to answer students' questions and provide additional learning resources.
- Example 2: Using AI to grade essays and provide feedback to students.
- Example 3: Using AI to create personalized learning pathways for students.
Potential risks and challenges of using AI in teaching
While AI holds immense potential to revolutionize education, its implementation in teaching also presents specific risks and challenges. These include:
- Bias and discrimination: AI algorithms can inherit and amplify existing biases in data, leading to unfair and discriminatory outcomes for students. This can manifest in biased grading, personalized learning recommendations, and even inappropriate content generation.
- Over-reliance on technology: Overdependence on AI-powered tools can hinder the development of critical thinking and problem-solving skills in students. It can also lead to passivity and reduced engagement in the learning process.
- Lack of transparency and explainability: Many AI algorithms are complex and opaque, making it difficult to understand how they arrive at decisions. This lack of transparency can fuel mistrust and anxiety among students and educators.
- Data privacy and security: Using AI in teaching often involves collecting and analyzing student data, raising concerns about privacy and security, particularly concerning data breaches and unauthorized access.
- Unequal access and affordability: Unequal access to AI-powered educational tools can exacerbate existing inequalities in educational opportunities.