AI in Academia: A New Challenge for Liberia’s Tertiary Institutions

Liberia’s tertiary institutions (universities, colleges, and the like) are experiencing a quiet yet profound transformation as artificial intelligence (AI) tools such as ChatGPT, Gemini, and others increasingly influence academic practices. Students are leveraging these technologies to compose essays, draft research papers, and even complete theses, raising critical questions about academic integrity and the future of education.

In my recent book, The Global AI Tsunami and Its Impact on Developing Countries: The Case of Liberia, I explore the dual nature of AI as both a catalyst for efficiency and a source of significant challenges. While AI offers undeniable benefits, its integration into academia presents a complex dilemma, particularly in contexts where faculty members lack the technical expertise to identify its use. As educators grapple with this new reality, the preservation of academic integrity has become more critical than ever.

The signs of AI-generated content are often subtle but discernible. Overly formal language, generic examples, or an abrupt improvement in vocabulary can serve as red flags. During the past semester, while teaching graduate courses and advising master’s theses, I observed a troubling pattern: submissions that were unusually polished and completed with remarkable speed. One student, whose thesis had initially been rejected for multiple deficiencies, resubmitted a version that appeared flawless.

However, the text exhibited telltale characteristics of AI-generated content, such as uniform sentence structure, generic phrasing, and a lack of personal insight. A comparison with the student’s previous work, which was marked by grammatical inconsistencies and simpler ideas, confirmed my suspicions. A subsequent oral defense revealed the student’s inability to explain key concepts, further substantiating that the thesis had been rewritten by AI.

My background in computer science and experience working with AI models have equipped me to identify AI-assisted or AI-generated content. However, many educators in Liberia lack similar expertise, leaving them ill-prepared to detect such practices. This knowledge gap poses a significant threat to the credibility of academic institutions. Overworked and under-resourced educators may not fully grasp how AI can mimic a student’s voice or produce comprehensive literature reviews in minutes.

The consequences are far-reaching: degrees earned through AI shortcuts undermine the skills of graduates and, by extension, the competence of Liberia’s workforce.

To address these challenges, tools like Turnitin and GPTZero have been developed to analyze text for AI-generated patterns, such as uniform sentence length or statistical anomalies in word choice. While these tools are not infallible, they provide a valuable starting point for educators. OpenAI, the creator of ChatGPT, is also exploring methods to “watermark” AI-generated text, embedding subtle markers that can be detected by software. However, such solutions are not yet standardized.
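To make the idea of “uniform sentence length” concrete, here is a toy sketch in Python of one such statistical signal. This is an illustration of the general principle only, not the actual method used by Turnitin, GPTZero, or any other product; the function name, the sample sentences, and the 0.3 threshold are all hypothetical choices for demonstration.

```python
import re
import statistics

def sentence_length_stats(text):
    """Report how many sentences a text has, their average word count,
    and how much the lengths vary relative to that average.

    Very uniform sentence lengths (a low coefficient of variation) are
    one crude signal sometimes associated with machine-generated prose.
    This toy heuristic is unreliable on its own and is shown only to
    illustrate what 'statistical pattern analysis' can mean.
    """
    # Naive sentence split on runs of ., !, or ? (fine for a demo).
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    mean = statistics.mean(lengths)
    # Coefficient of variation: spread of sentence lengths over the mean.
    cv = statistics.pstdev(lengths) / mean if mean else 0.0
    return {"sentences": len(lengths), "mean_words": mean, "length_cv": cv}

# Hypothetical samples: one with uniform sentences, one with varied ones.
uniform = "The model works well. The data looks clean. The test runs fast."
varied = ("It failed. After weeks of debugging, rewrites, and two "
          "all-nighters, we finally traced the crash to one line. Fixed.")

print(sentence_length_stats(uniform))   # low length_cv (near 0)
print(sentence_length_stats(varied))    # much higher length_cv
```

Real detectors combine many such signals (word-choice statistics, punctuation habits, perplexity under a language model), which is why no single metric like this should ever be treated as proof.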

Detection is possible even without advanced tools or specialized expertise. Contextual clues and process-based assessments can reveal discrepancies. For instance, a student’s sudden mastery of complex jargon, inconsistent with their performance in class discussions, can raise suspicions. In-class writing exercises and oral (viva voce) examinations can expose gaps that AI cannot fill. While these low-tech methods are effective, they are insufficient on their own. A comprehensive, three-pronged approach is necessary to address the issue effectively.

First, institutions must implement “train the trainer” initiatives. Workshops, conducted in collaboration with technology firms, can equip faculty with foundational knowledge about AI—how it operates, its limitations, and indicators of its use, such as overused transitions or generic phrasing. In my graduate courses, I plan to incorporate sessions on AI mechanics, empowering students to use AI responsibly while upholding academic integrity and enhancing the learning process.

Second, assessment methods must evolve. Institutions should move away from heavily weighted take-home essays and instead prioritize real-time evaluations, such as in-class writing assignments, oral presentations, group debates, or laboratory work, where AI cannot intervene. For theses, instructors should require annotated bibliographies and draft logs to track students’ progress and ensure the authenticity of their work. While this approach demands additional effort, it fosters transparency and accountability.

Third, ethical AI adoption and integration must be embraced. Outright bans on AI tools are counterproductive, as they may drive students to use these technologies covertly. Instead, tertiary institutions should develop policies to guide the responsible use of AI in education. Offering “AI Literacy” courses for both faculty and students can demonstrate how AI tools can be leveraged for teaching, learning, and research while maintaining academic integrity.

The stakes are high. Beyond the immediate concerns of academic integrity, unchecked dependence on AI risks eroding critical thinking and problem-solving skills. Rather than banning AI—a technology that is here to stay—policymakers must ensure faculty and students learn how to use it ethically, as a supplement to creativity rather than a replacement for it. Integrating AI literacy into curricula and establishing clear boundaries for its use are essential steps forward.

Challenges abound. Limited access to technology, unreliable internet connectivity, inconsistent electricity supply, and resistance to change among faculty are significant obstacles. However, Liberia’s tertiary institutions have demonstrated remarkable resilience in the face of adversity, from civil war to the Ebola and COVID-19 crises.

If we combine this resilience with targeted training, innovative assessment strategies, and thoughtful AI integration, our tertiary institutions can safeguard academic integrity and prepare students for a rapidly evolving world.

As I advised the student whose thesis was rewritten by AI, “AI is a tool, not your brain.” It must not replace creativity but rather enhance it. For now, the relationship between AI and academia resembles a cat-and-mouse game. As AI grows more sophisticated, so too must educators. If students use AI to circumvent academic standards, instructors must use it to detect and address such practices. This dynamic underscores the need for vigilance, creativity, and a willingness to adapt.

The time has come for academia to catch up with technological advancements before the next flawlessly crafted paper slips through undetected. With concerted effort and strategic action, educators can reclaim the classroom, one assignment at a time.

Until next week,

Carpe diem!