AI Regulation in Universities

Recent technological advances have enabled tools such as ChatGPT to generate content using artificial intelligence (AI). This development presents both opportunities and challenges for education.

The integration of AI in education is growing rapidly. AI tools and services hold significant potential to enhance learning outcomes and provide personalized learning experiences. Students increasingly use AI-powered tools for diverse tasks such as homework assistance, essay composition, and lesson preparation. As AI technologies continue to evolve, their integration into the education sector remains an ongoing learning process. Nonetheless, several notable applications of AI in education have already emerged, including:

  • AI-enhanced educational games (e.g., The Oregon Trail, Minecraft: Education Edition, Duolingo, and Kahoot!).
  • Adaptive learning platforms (e.g., Carnegie Learning and Knewton).
  • Automated grading and feedback systems.
  • Chatbots for student support.
  • Intelligent tutoring systems (e.g., the Duolingo app, Thinkster Math, and Khan Academy’s Khanmigo tutoring system).

Harnessing AI in education offers numerous potential advantages. It can enhance student learning through personalized instruction and feedback, freeing teachers to devote their time to more innovative and engaging teaching methods.

However, alongside these benefits, integrating AI into education carries potential risks. Matthew Lynch, a prominent advocate for AI in education and author of “My Vision for the Future of Artificial Intelligence in Education”, underscores the importance of navigating these risks carefully. He emphasizes that while AI in education is valuable in certain respects, vigilant monitoring of its development and its broader societal role is crucial [1].

One risk is students’ misuse of AI for academic dishonesty, such as generating essays or other content that mimics human writing but lacks originality or accuracy.

Another concern is the possibility of AI perpetuating discrimination against specific student groups. This could occur if an AI-powered tool is trained on a dataset that carries biases, leading the tool to reproduce and amplify those biases in its output.
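To make this mechanism concrete, the minimal sketch below shows one way an institution might audit an AI grading tool for group-level disparities in its outputs; the scores, group labels, and threshold are purely illustrative assumptions, not data or methods from any real system.

```python
# Hypothetical sketch: auditing an AI grading tool for group-level score disparities.
# The scores, group labels, and threshold below are illustrative assumptions only.
from statistics import mean

# Scores the (hypothetical) AI grader assigned to essays from two student groups.
scores_by_group = {
    "group_a": [78, 82, 75, 90, 85],
    "group_b": [65, 70, 68, 72, 66],
}

# Maximum acceptable gap between group averages before flagging for human review.
DISPARITY_THRESHOLD = 5.0

averages = {group: mean(scores) for group, scores in scores_by_group.items()}
gap = max(averages.values()) - min(averages.values())

print(f"Average score per group: {averages}")
print(f"Gap between group averages: {gap:.1f} points")

if gap > DISPARITY_THRESHOLD:
    print("Disparity exceeds threshold: review training data and outputs before further use.")
else:
    print("Disparity within threshold.")
```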

Further risks include algorithmic bias, potential job displacement, and data privacy concerns [2]. Addressing these risks requires well-defined policies and procedures governing the integration of AI in education. Both educators and students should understand the possible advantages and risks of AI and receive training in responsible AI use. Moreover, government oversight plays a crucial role: governments are responsible for developing comprehensive guidelines for the creation and deployment of AI-powered tools, and they can fund research dedicated to ensuring the ethical application of AI in education [3].

A recent global survey by UNESCO of more than 450 schools and universities found that fewer than 10% of these institutions had established policies or official guidance on the use of generative AI (GenAI) applications [4]. This gap is primarily attributed to the absence of national AI regulations. In addition, only seven countries reported having developed, or being in the process of developing, AI training programs for educators [5].

On September 7, 2023, UNESCO took a significant step on the global stage by releasing its first guidelines on the use of GenAI in education [6]. UNESCO urged governments to regulate the application of this technology, emphasizing the importance of safeguarding data privacy and setting a minimum age of 13 for users. While acknowledging GenAI’s developmental potential, UNESCO underscores the need for public engagement and government oversight to mitigate potential harm. The guidance also addresses the rights of teachers and researchers, emphasizing the value of their practices when using GenAI. Moreover, UNESCO advocates caution, calling for the prevention of GenAI deployment in scenarios where it could deprive learners of opportunities to develop cognitive abilities and social skills through real-world observation, empirical practices such as experiments, interaction with other people, and independent logical reasoning [7].

Earlier, on January 30, 2023, the Bureau of the Steering Committee for Education (CDEDU) of the Council of Europe had convened in Strasbourg to discuss the groundbreaking “Preparatory study for the development of a legal instrument on regulating the use of AI systems in education” [8].

Some countries have taken steps to formulate policy documents regulating the use of AI in education. Notably, the UK Government has published a policy paper on the deployment of GenAI, including large language models (LLMs) such as ChatGPT and Google Bard, in the education sector. The objective is to harness technological opportunities while ensuring that technology is used safely and effectively to deliver high-quality education [9].

In a similar vein, in July 2023 Japan introduced new guidelines on the use of AI in educational institutions. These guidelines prioritize helping teachers and students understand the technology, but they also impose limitations to address concerns about copyright infringement, leaks of personal information, and plagiarism [10].

The incorporation of AI in higher education necessitates careful consideration of several critical aspects:

  • Institutions should prioritize accountability and transparency, ensuring that both decision-making processes and AI algorithms are impartial and fair.
  • Because AI relies heavily on data, data security is paramount; institutions must verify that individuals whose data is collected have given informed consent.
  • Institutions should develop transparent AI algorithms and procedures that clearly explain the judgments and recommendations made by AI systems.

Ensuring ethical and responsible practices in the implementation of AI in higher education requires effective regulation. Existing frameworks and rules, such as the Council of Europe Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (ETS No. 108) [11], Convention 108 and its protocols [12], and the General Data Protection Regulation (GDPR) [13], currently serve as the cornerstone for addressing pivotal issues such as data privacy, algorithmic transparency, bias mitigation, and accountability.

References

  1. Matthew Lynch (2018). My vision for the future of artificial intelligence in education. https://www.theedadvocate.org/vision-future-artificial-intelligence-education/
  2. Hemachandran K, Raul V. Rodriguez (2023). Navigating the future: The need for regulation in AI usage in higher education. https://www.aiacceleratorinstitute.com/navigating-the-future-the-need-for-regulation-in-ai-usage-in-higher-education/
  3. Diego Lescano (2023). The Rise of AI in Education: Benefits, Risks, and Regulation. https://www.linkedin.com/pulse/rise-ai-education-benefits-risks-regulation-diego-lescano/
  4. UNESCO survey: Less than 10% of schools and universities have formal guidance on AI (2023). https://www.unesco.org/en/articles/unesco-survey-less-10-schools-and-universities-have-formal-guidance-ai
  5. Seven steps for countries to regulate generative AI in education (2023). https://www.linkedin.com/pulse/seven-steps-countries-regulate-generative-ai-education-unesco/
  6. Guidance for generative AI in education and research (2023). https://www.unesco.org/en/articles/guidance-generative-ai-education-and-research
  7. Supantha Mukherjee (2023). UNESCO seeks regulation in first guidance on GenAI use in education. https://www.reuters.com/technology/unesco-seeks-regulation-first-guidance-genai-use-education-2023-09-07/
  8. Pioneering Discussions Set in Strasbourg on AI Regulation in Education (2024). https://www.coe.int/en/web/education/-/council-of-europe-launches-pioneering-ai-education-initiative-aligned-with-learners-first-strategy
  9. Generative artificial intelligence (AI) in education (2023). https://www.gov.uk/government/publications/generative-artificial-intelligence-in-education/generative-artificial-intelligence-ai-in-education
  10. Suvendrini Kakuchi (2023). New government guidelines on the use of AI in education. https://www.universityworldnews.com/post.php?story=2023071114553690
  11. Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data. Strasbourg, 28.I.1981. https://rm.coe.int/1680078b37
  12. The Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (CETS No. 108), Convention 108 and Protocols. https://www.coe.int/en/web/data-protection/convention108-and-protocol
  13. Regulation (EU) 2016/679 of the European Parliament and of the Council (General Data Protection Regulation). https://eur-lex.europa.eu/eli/reg/2016/679/oj