Are you exploring how generative AI is transforming the research landscape? Have you developed innovative approaches, ethical insights, or practical applications regarding AI in research? If so, we invite you to contribute a chapter to our forthcoming open access book: Generative AI-Enhanced Research: Ethical, Practical, and Transformative Approaches.
This edited collection will serve as a go-to resource for researchers, academics, educators, and students interested in harnessing generative AI tools across the research lifecycle. Our aim is to showcase a diverse range of perspectives, theoretical frameworks, and methodological innovations that illuminate the evolving role of AI in academic work.
We welcome contributions in the form of conceptual papers, empirical studies, reflective case narratives, and practical guides. Key areas of interest include (but are not limited to):
Ethical challenges and considerations in generative AI-enhanced research
Generative AI in research design and literature review
Generative AI in data collection and analysis
Generative AI in writing, publishing, and dissemination
Generative AI and research training, critical thinking, and future trends
Interested? Learn more and submit your abstract here.
You are warmly invited to participate in the International Conference on AI for Higher Education (AI4HE). Facilitated by the Human-AI Collaborative Knowledgebase for Education and Research (HACKER) and the AI Literacy Lab, the conference provides an opportunity to share knowledge of AI in Higher Education, network with peers and participate in practical workshops.
The conference will be held on 26 and 27 November 2025 and will run online through Zoom. The conference is FREE!
Presentations can take various formats and should focus on the use of generative AI in higher education settings. Some questions you can use to prompt your thinking are:
What constitutes AI literacy for researchers today?
How can we effectively embed AI literacy into research training and higher education curricula?
What new methodological possibilities or tensions arise when generative AI is integrated into the research process?
How do we ethically use generative AI in research without compromising scholarly integrity, originality, trustworthiness, and rigour?
Who gets to decide what constitutes "authorship" or "contribution" when generative AI tools are involved in the production of knowledge?
How does the use of generative AI in research reshape our understanding of the researcher's role, voice, and epistemic authority?
What does it mean to "position oneself" in relation to a generative AI tool? Is it a collaborator, instrument, co-author, or something else entirely?
Abstracts are due by the 20th of June. To submit an abstract or register to attend, click on the button below. See you there!
The rise of generative AI has sparked new conversations about its role in academic research. While generative AI tools like ChatGPT have proven effective for summarisation, pattern recognition, and text classification, their potential in deep, interpretive qualitative data analysis remains underexplored. In our recent study, we examine the integration of ChatGPT as an active collaborator in qualitative data analysis. Our findings highlight ChatGPT's ability to streamline initial coding, enhance reflexivity and higher-order thinking, and support knowledge co-construction, while emphasising the necessity of human oversight.
Our study marks an exciting step forward in the integration of generative AI into qualitative inquiry. By approaching generative AI as a partner rather than a passive tool, we believe researchers will be able to harness its potential while preserving the richness and depth that define qualitative research.
As illustrated in another blog post, qualitative data analysis is often a laborious process, requiring meticulous coding, interpretation, and reflection. Traditional computer-assisted qualitative data analysis software, such as NVivo and MAXQDA, has long been used to help streamline aspects of qualitative data analysis. However, generative AI, and specifically ChatGPT, introduces an additional layer of adaptability, offering real-time feedback and dynamic analytical capabilities. This made us wonder how effective it would be in the qualitative data analysis process.
In our paper, we explore how ChatGPT can function beyond a simple data processing tool by actively participating in the interpretive process. Rather than merely classifying text, we found that ChatGPT could highlight implicit themes, suggest theoretical frameworks, and prompt deeper reflections on the data from both the researcher and participant. However, ChatGPT's capacity is highly contingent on the researcher's ability to craft well-designed prompts.
One of the key takeaways from the study is the significance of effective prompt design. We note that ChatGPT's responses were only as good as the prompts it received. Initially, we found that ChatGPT's responses lacked depth or fixated on single aspects of a topic while neglecting others. By refining our prompts, explicitly defining key concepts, and structuring questions carefully, we were able to guide ChatGPT toward more nuanced and insightful analyses.
We developed a series of 31 prompts to explore our dataset (see the prompts here). This iterative prompting process not only improved ChatGPT's analytical output but also helped the researcher clarify her own theoretical perspectives. Our study consequently frames this prompt design process as a reflexive exercise, demonstrating how the act of crafting prompts can refine a researcher's conceptual thinking and analytical approach.
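For readers who script their analyses rather than working in the chat interface, the same refinement loop can be expressed in code. The following is a minimal sketch only, assuming the OpenAI Python SDK; the model name, the transcript file, and the prompt wording are illustrative placeholders, not the actual 31 prompts from our study.

# A minimal sketch of iterative prompt refinement for qualitative analysis.
# Assumptions: the OpenAI Python SDK is installed, OPENAI_API_KEY is set,
# and "interview_transcript.txt" is a hypothetical data file.
from openai import OpenAI

client = OpenAI()

with open("interview_transcript.txt") as f:
    transcript = f.read()

def analyse(prompt: str) -> str:
    """Send one analytical prompt together with the transcript."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You are assisting with qualitative data analysis."},
            {"role": "user",
             "content": f"{prompt}\n\nTranscript:\n{transcript}"},
        ],
    )
    return response.choices[0].message.content

# A vague prompt tends to produce shallow, one-dimensional responses.
vague_result = analyse("What themes are present in this transcript?")

# A refined prompt defines its key concept explicitly and structures the task.
refined_result = analyse(
    "In this study, reflexivity is defined as the researcher's critical "
    "self-examination of their own assumptions. Using this definition, "
    "identify passages where the participant displays reflexivity, quote "
    "each passage, and explain your interpretation."
)

Comparing the two outputs makes the effect of prompt refinement concrete: the structured, concept-defined prompt is what nudges the model towards the kind of nuanced analysis we describe above.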
An unexpected yet valuable outcome of using ChatGPT in the research process was its ability to stimulate the researcher’s higher-order thinking. By engaging with the ChatGPT-generated interpretations, the researcher was prompted to critically assess underlying assumptions, refine theoretical lenses, and explore alternative perspectives she might not have initially considered. This process encouraged deeper engagement, pushing the researcher to interrogate her own biases and methodological choices. As a result, the interaction with ChatGPT became an intellectual exercise in itself, allowing the researcher to refine and expand her analytical thinking in ways that traditional methods may not have facilitated as effectively.
One of the most striking findings from our study was ChatGPT's ability to uncover implicit meanings within qualitative data. For example, when asked about concepts like "illusio" (investment in the socially constructed values within a field), ChatGPT was able to infer instances of this concept even when it was not explicitly mentioned in the data. However, we also found that the ChatGPT-generated interpretations sometimes diverged from participants' own perspectives. This emphasises the critical role of human oversight. Generative AI lacks self-awareness (at least at the moment!), meaning that its responses must be carefully evaluated. Generative AI can be a powerful tool for organising and prompting analysis, but it is the researcher's interpretive lens that ultimately determines the depth and rigour of qualitative inquiry.
One of the most innovative aspects of our study is its participatory approach, in which both the researcher and the participant engaged with ChatGPT's analyses. Instead of using generative AI as a behind-the-scenes tool, the study involved participants in critically appraising ChatGPT's findings, thereby decentralising the researcher's authority over data interpretation. This triadic model (researcher, participant, and ChatGPT) fostered greater participant agency in the research process. By giving participants the opportunity to review and respond to ChatGPT-generated interpretations, we ensured that the generative AI-assisted analyses did not overwrite or misrepresent participants' lived experiences. This approach not only enhanced the ethical integrity of the generative AI-assisted research but also enriched the depth and authenticity of the findings.
Questions to ponder
What are the potential benefits and risks of using AI tools like ChatGPT in qualitative research?
How can researchers ensure that ChatGPT-assisted analyses remain ethically sound and participant-driven?
The advent of generative artificial intelligence (GenAI) has opened up transformative possibilities in academic research. Tools like ChatGPT, Gemini, and Claude hold the potential to help with idea and content development, structure and research design, literature review and synthesis, data management and analysis, as well as proofreading and editing. However, as enticing as these advancements are, they bring ethical challenges that require careful navigation. To bridge this gap between potential and responsibility, my colleagues and I developed the ETHICAL framework for GenAI use, which has just been published open access!
The ETHICAL framework offers a structured approach, with each letter in the acronym representing a principle that users should embed into their practices. The framework has been summarised in this handy picture.
The ETHICAL Framework for Responsible Generative AI Use, republished from here under a CC-BY license.
Examine policies and guidelines: Researchers must consult international, national, and institutional GenAI policies. This involves not only aligning with global GenAI ethics recommendations but also understanding the specifics of local guidelines. Adhering to these ensures compliance and fosters trust. As an example, my institution has an entire policy suite relating to responsible GenAI use in both teaching and research.
Think about the social impacts: GenAI can reinforce biases and perpetuate inequalities. Researchers should critically evaluate the societal consequences of using GenAI, considering both environmental sustainability and digital equity.
Harness understanding of the technology: A robust understanding of how GenAI tools operate (beyond their surface-level functionalities) is essential. Researchers must grasp the limitations and ethical implications of the technologies they use and should promote AI literacy within their academic communities. I have written other blog posts about what AI literacy is and how you can build your AI literacy. This handy video explains the components of AI literacy.
Indicate use transparently: Transparency is key to maintaining academic integrity. Researchers should explicitly disclose where and how GenAI tools were used, documenting their role in the research process. This fosters accountability and mitigates risks related to copyright and authorship disputes. This video provides a simple guide to formatting GenAI acknowledgements.
Critically engage with outputs: GenAI outputs are not infallible and require rigorous validation. Researchers bear the ultimate responsibility for ensuring that GenAI-generated content aligns with disciplinary standards and is free from inaccuracies or ethical breaches.
Access secure versions: Security and privacy are paramount when using GenAI. Free versions of tools may not offer adequate protections for sensitive data, underscoring the need for secure, institutional subscriptions or private deployments of GenAI models.
Look at user agreements: Many GenAI tools have complex user agreements, which can have significant implications for data ownership and privacy. Researchers should carefully review these terms to ensure ethical compliance and to safeguard their intellectual property.
The ETHICAL framework encourages universities to incorporate AI literacy into their curricula, ensuring that both students and faculty are prepared to navigate the ethical complexities of GenAI-enhanced research. The ETHICAL framework is also not just a set of guidelines; it is a call to action. For educators, researchers, and institutions alike, the message is clear: the future of GenAI in higher education depends on our collective ability to navigate its challenges responsibly. The ETHICAL framework provides a compass for doing just that, fostering a research culture that is as ethical as it is forward-thinking.
Questions to ponder
How can universities integrate AI literacy into their existing curricula effectively?
What steps can researchers take to ensure equitable access to GenAI tools across diverse socio-economic contexts?
How should publishers and peer-review committees adapt to the growing use of GenAI in manuscript preparation?
Lecturers play a pivotal role in shaping the learning of their students. In a metric-focused university environment, this necessitates assessing students' learning throughout their educational journey. Assessing assignments not only gauges students' understanding of the subject matter but also evaluates the development of critical academic skills. These skills, such as research, analysis, and effective communication, are integral components of a well-rounded higher education.
Assessing transferable skills
The skills assessed must align with what is taught within the unit. When students perceive a direct connection between what is taught and what is assessed, their engagement and comprehension are heightened. Consequently, if we are going to assess students not only on their content knowledge but also their transferable skills, we need to provide them with the tools to succeed.
I believe that transferable skills enhance the applicability of studentsâ disciplinary knowledge. For years, I have worked to develop a suite of academic skills resources which are now embedded across the units within our Faculty. These resources include a suite of just-in-time online videos freely available on YouTube, as well as two written booklets (Doing Assignments and Writing Theses) that explicitly teach academic communication skills.
Over the years, I have also worked to improve the assignment rubrics within our Faculty to more accurately assess the skills that are taught within individual units. For example, I have worked with another staff member to develop templates for staff to provide feedback on academic language and literacy. We designed these templates to allow assessors to label specific mistakes for students and to provide students with referrals to appropriate support. Giving students specific labels for their errors helps them to see where they can improve. The referrals to appropriate resources and support help the student improve their skills, encouraging self-directed learning.
It is important to note that we usually recommend that these skills account for no more than 10% of the total grade for the assignment. This is because the main focus of the assessment should be the content – students should be able to clearly demonstrate an understanding and critical evaluation of the topic of the assignment. However, the students’ use of academic language and academic literacy can enhance the quality of their disciplinary content, or it can hinder the meaning of their ideas. As such, our templates allow for 5% to be attached to academic language (specifically, the elements listed in blue here) and 5% to academic literacy (the elements listed in purple here).
Assessing AI literacy
In the era of rapid technological advancement, the rise of generative artificial intelligence (AI) introduces a new dimension to education. As students are increasingly exposed to AI tools, it becomes imperative for educators to teach them how to use these tools effectively. As I have highlighted in another blog post, I firmly believe that it is our role as educators to teach students how to collaborate effectively with AI and evaluate the results obtained, a concept termed AI literacy. I see AI literacy as an essential transferable skill.
A key component of using AI ethically is acknowledging it effectively in written work. It is important to highlight, though, that if we are going to require students to demonstrate AI literacy, including the accurate acknowledgement of the use of AI tools, we need to teach it in our units and also assess it accurately. In my units, I teach students that an acknowledgement should include the name of the AI used, a description of how it was used (including the prompt used where appropriate), and an explanation of how the information was then adapted in the final version of the document. I also provide students with the example below so that they can see how an acknowledgement is used in practice.
I acknowledge that I used ChatGPT (OpenAI, https://chat.openai.com/) in this assignment to improve my written expression quality and generate summaries of the six articles I used in the annotated bibliography section. The summary prompt provided to the AI was "Write a 350 word abstract for this article. Include a summary of the topic of the article, the methodology used, the key findings, and the overall conclusion". I adapted the summaries it produced to reflect my argument, style, and voice. I also adapted the summaries to better link with my topic under investigation. When I wanted the AI to help me improve my writing clarity, I pasted my written text and asked it to rewrite my work "in less words", "in a more academic style", or "using shorter sentences". I also asked it to explain why it made the changes it did so that I could use this collaborative discussion as a learning process to improve my academic communication skills. I take responsibility for the final version of the text in my assignment.
Clear guidelines within rubrics should also be established to evaluate the ethical and responsible use of AI, reinforcing the importance of acknowledging the role of these tools in academic work. Given my previous work developing rubric templates for staff, I have recently developed a template for the acknowledgement of AI use within assignments. In my template, this criterion falls within the “academic literacy” section of the rubric I mentioned earlier. I have included the rubric criteria below so that other educators can use it as needed. The grading scale is the one used in my university, but it can be easily adapted to other grading scales.
High Distinction (80-100%): There was an excellent explanation about how generative AI software was used. This included, where appropriate, explicit details about the software used, the prompts provided to the AI, and explanations as to how the output of the generative AI was adapted for use within the assignment.
Distinction (70-79%): There was a clear explanation about how generative AI software was used. This included, where appropriate, sufficient detail about the software used, the prompts provided to the AI, and explanations as to how the output of the generative AI was adapted for use within the assignment.
Credit (60-69%): There was a reasonably clear explanation about how generative AI software was used. The explanation lacked sufficient details regarding one of the following: the software used, the prompts provided to the AI, and/or explanations as to how the output of the generative AI was adapted for use within the assignment.
Pass (50-59%): There was some explanation about how generative AI software was used. The explanation lacked several of the following: the software used, the prompts provided to the AI, and/or explanations as to how the output of the generative AI was adapted for use within the assignment.
Fail (Below 50%): There was little or no explanation about how generative AI software was used.
Questions to ponder
The blog post outlines a rubric for assessing the acknowledgement and use of generative AI in student assignments. Considering the varying levels of detail and adaptation of AI-generated content required for different grades, what are your thoughts on the fairness and effectiveness of this approach?
How might this rubric evolve as generative AI technology becomes more advanced and commonplace in educational environments?
In an era where generative artificial intelligence (AI) permeates every aspect of our lives, AI literacy in higher education has never been more crucial. In our recent paper, we delve into our own journeys of developing AI literacy, showcasing how educators can seamlessly integrate AI into their teaching practices. Our goal is to cultivate a new generation of AI-literate educators and graduates. Through our experiences, we also created a comprehensive framework for AI literacy, highlighting the transformative potential of embracing AI in educational settings.
We embraced AI with optimism and enthusiasm, seeing it as a tool to be harnessed rather than feared. In our recent paper, we passionately argue that AI literacy is an indispensable skill for today’s graduates. We emphasise that this mindset requires a significant cultural shift in higher education, advocating for the integration of AI as a valuable learning aid. By fostering this change, we can unlock AI’s potential to enhance education and empower students to thrive in an increasingly digital world.
Our journey began with curiosity and a willingness to experiment with AI in our educational practices. Lynette, for instance, integrated AI into her role, showcasing its capacity as an academic language and literacy tutor. She encouraged her students, many of whom are from non-English speaking backgrounds, to use tools like Grammarly and ChatGPT to improve their academic writing. By doing so, she highlighted the importance of collaboration between students and AI, promoting deeper learning and engagement.
In a Master's-level course on autoethnography, Lynette inspired her students to harness generative AI for creative data generation. She showcased how tools like DALL-E could be used to create artworks that visually represent their research experiences. This approach not only ignited students' creativity but also deepened their engagement with their assignments, allowing them to explore their research from a unique and innovative perspective.
Basil introduced his students to the power of generative AI through hands-on assignments. One notable task involved creating a public awareness campaign centred around the UN’s Sustainable Development Goals. Students utilised DALL-E to produce compelling visuals, showcasing AI’s ability to amplify creativity and enhance learning outcomes. This practical approach not only highlighted the transformative potential of AI but also encouraged students to engage deeply with important global issues through innovative and impactful media.
While the benefits of AI in education were clear to us, we also encountered ethical considerations and challenges. In our paper, we emphasised the importance of transparency and informed consent when using AI in research and teaching. For example, we ensured that students and research participants were aware of how their data would be used and the potential biases inherent in AI-generated content. Moreover, we highlighted the environmental impact of using AI technologies. The energy consumption of AI models is significant, raising concerns about their sustainability. This awareness is crucial as educators and institutions navigate the integration of AI into their practices.
From our experiences and reflections, we developed a groundbreaking AI literacy framework for higher education, encompassing five domains: foundational, conceptual, social, ethical, and emotional. As illustrated in the figure below, this comprehensive framework is designed to empower educators and students with the essential skills to adeptly navigate the intricate AI landscape in education. By promoting a holistic and responsible approach to AI literacy, our framework aims to revolutionise the integration of AI in academia, fostering a new generation of informed and conscientious AI users.
Elements of AI Literacy in Higher Education. Download here.
From these essential domains of AI literacy, we have crafted a comprehensive framework for AI literacy in higher education.
The framework underscores the following key features:
Foundational Understanding: Mastering the basics of accessing and using AI platforms.
Information Management: Skillfully locating, organising, evaluating, using, and repurposing information.
Interactive Communication: Engaging with AI platforms as interlocutors to create meaningful discourse.
Ethical Citizenship: Conducting oneself ethically as a digital citizen.
Socio-Emotional Awareness: Incorporating socio-emotional intelligence in AI interactions.
The AI Literacy Framework for Higher Education. Download here.
Our AI literacy framework has significant implications for higher education. It provides a structured approach for integrating AI into teaching and research, emphasising the importance of ethical considerations and emotional awareness. By fostering AI literacy, educators can prepare students for a future where AI plays a central role in various professional fields.
Embracing AI literacy in higher education is not just about integrating new technologies; it’s about preparing students for a rapidly changing world. Our AI literacy framework offers a comprehensive guide for educators to navigate this transition, promoting ethical, effective, and emotionally aware use of AI. As we move forward, fostering AI literacy will be crucial in shaping the future of education and empowering the next generation of learners.
Questions to ponder
How can educators ensure that all students, regardless of their technological proficiency, can access and utilise generative AI tools effectively?
In what ways can generative AI tools be used to enhance students’ conceptual understanding of course materials?
How can the concept of generative AI as a collaborator be integrated into classroom discussions and activities?
How can educators model ethical behaviour and digital citizenship when using generative AI tools in their teaching?
How can understanding the emotional impacts of generative AI interactions improve the overall learning experience?
How can the AI literacy framework be practically integrated into different academic disciplines and curricula?
I have recently developed and delivered a masterclass about how you can develop your AI literacy in your writing and research practice. This included a series of examples from my own experiences. I thought I’d provide a summary of this masterclass in a blog post so that everyone can benefit from my experiences.
Artificial intelligence (AI) has been present in society for several years and refers to technologies which can perform tasks that used to require human intelligence. This includes, for example, computer grammar-checking software, autocomplete or autocorrect functions on our mobile phone keyboards, or navigation applications which can direct a person to a particular place. Recently, however, there has been a significant advancement in AI research with the development of generative AI technologies. Generative AI refers to technologies which can perform tasks that require creativity; in other words, these technologies use computer-based networks to create new content based on what they have previously learnt. Such creative outputs were once thought to be the domain of human intelligence alone and, consequently, the introduction of generative AI has been hailed as a "game-changer" for society.
I am using generative AI in all sorts of ways. The AIs I use most frequently include Google's built-in generative AI in email, chat, Google Docs, and so on, which learns from your writing to suggest likely responses. I also use Grammarly Pro to help me identify errors in my students' writing, allowing me more time to give constructive feedback about their writing rather than trying to find examples. This is a huge time-saver, particularly given how many student emails I get and the number of assignments and thesis chapters I read! I also frequently use a customised version of ChatGPT 4, which I have trained to do things the way I would like them to be done. This includes responding in a specific tone and style, reporting information in specific ways, and doing qualitative data analysis. Finally, I use Leonardo AI and DALL-E to generate images, Otter AI to help me transcribe some of my research, Research Rabbit to help me locate useful literature on a topic, and AILYZE to help conduct initial thematic analysis of qualitative data.
The moral panic initiated at the start of 2023 by the advent of ChatGPT sparked debates in higher education. Some people insisted that generative AI would encourage students to cheat, thereby posing a significant risk to academic integrity. Others argued that generative AI could make education more accessible to those who are traditionally marginalised and could help students in their learning. I came to believe that the ability to use generative AI would be a core skill in the future, but that AI literacy would be essential. This led me to publish a paper where I defined AI literacy as follows:
AI literacy is understanding "how to communicate effectively and collaboratively with generative AI technologies, as well as evaluate the trustworthiness of the results obtained".
This prompted me to start to develop ways to teach AI literacy in my practices. I have collated some tips below.
Firstly, you should learn to become a prompt wizard! One of the best tips I can give you is to provide your generative AI with context. You should tell your AI how you would like it to do something by giving it a role (e.g., “Act as an expert on inclusive education research and explain [insert your concept here]”). This will give you much more effective results.
Secondly, as I have already alluded to above, you can train your AIs to work for you in specific ways! So be a bit brave and explore what you can do.
Thirdly, when you ask it to make changes to something (e.g., to fix your grammar or improve your writing clarity and flow), ask it to also explain why it made the changes it did. In this way, you can use the collaborative discussion you are having with your AI as a learning process to improve your skills.
The most common prompts I use in my work are listed below. The Thesis Whisperer has also shared several common prompts, which you can find here.
"Write this paragraph in less words."
"Can you summarise this text in a more conversational tone?"
"What are five critical thinking questions about this text?"
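If you prefer to work programmatically rather than in the chat window, these tips translate directly to code. Below is a minimal sketch, assuming the OpenAI Python SDK; the model name and the draft paragraph are illustrative placeholders, and the prompts mirror the examples above. It gives the AI a role, asks for a rewrite, and then asks it to explain its changes so the exchange becomes a learning conversation.

# A hedged sketch of role prompting plus asking the AI to explain its edits.
# Assumptions: OpenAI Python SDK installed, OPENAI_API_KEY set; the model
# name and draft paragraph are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

draft = "The results of the study that was conducted by the researchers..."

messages = [
    # Tip 1: give the AI a role so it has context for the task.
    {"role": "system",
     "content": "Act as an expert editor of academic writing in education research."},
    # One of the common prompts listed above.
    {"role": "user", "content": f"Write this paragraph in less words.\n\n{draft}"},
]
first = client.chat.completions.create(model="gpt-4o", messages=messages)
rewrite = first.choices[0].message.content

# Tip 3: continue the conversation and ask why the changes were made.
messages.append({"role": "assistant", "content": rewrite})
messages.append({"role": "user", "content": "Explain why you made each change."})
second = client.chat.completions.create(model="gpt-4o", messages=messages)

print(rewrite)
print(second.choices[0].message.content)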
I have previously talked about how you can use generative AI to help you design your research questions.
I have since also discovered that you can use generative AI as a data generation tool. For example, I have recently used DALL-E to create an artwork which represents my academic identity as a teacher and researcher. I have written a chapter about this process and how I used the conversation between myself and DALL-E as a data source. This chapter will be published soon (hopefully!).
Most recently, I have started using my customised ChatGPT 4 as a data analysis tool. I have a project that has a large amount of qualitative data. To help me with a first-level analysis of this large dataset, I have developed a series of 31 prompts based on theories and concepts I know I am likely to use in my research. This has allowed me to start analysing my data and has given me direction as to areas for further exploration. I have given an example of one of the research prompts below.
In this study, capital is defined as the assets that individuals vie for, acquire, and exchange to gain or maintain power within their fields of practice. This study is particularly interested in six capitals: symbolic capital (prestige, recognition), human capital (technical knowledge and professional skills), social capital (networks or relationships), cultural capital (cultural knowledge and embodied behaviours), identity capital (formation of work identities), and psychological capital (hope, efficacy, resilience, and optimism). Using this definition, explain the capitals which have played a part in the doctoral student's journey described in the transcript.
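Prompts like this can also be applied in bulk across a set of transcripts. The sketch below is a hypothetical illustration, assuming the OpenAI Python SDK; the folder names, model choice, and the single abbreviated prompt are placeholders standing in for the real project materials.

# A hypothetical sketch of running a series of theory-driven prompts over
# a folder of transcripts. Folder names, model, and the single (abbreviated)
# prompt are placeholders, not the actual materials from the project.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

prompts = [
    # In practice this list would hold the full series of 31 analytical
    # prompts, such as the capitals prompt quoted above.
    "Using the study's definition of capital, explain the capitals which have "
    "played a part in the doctoral student's journey described in the transcript.",
]

output_dir = Path("analysis")
output_dir.mkdir(exist_ok=True)

for transcript_file in sorted(Path("transcripts").glob("*.txt")):
    transcript = transcript_file.read_text()
    for i, prompt in enumerate(prompts, start=1):
        response = client.chat.completions.create(
            model="gpt-4o",  # illustrative model choice
            messages=[{"role": "user",
                       "content": f"{prompt}\n\nTranscript:\n{transcript}"}],
        )
        out_file = output_dir / f"{transcript_file.stem}_prompt{i}.txt"
        out_file.write_text(response.choices[0].message.content)

Saving each response to its own file keeps the first-level analysis organised by transcript and prompt, ready for the human interpretive work that follows.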
What I have been particularly impressed by so far is my AI's ability to detect implicit meaning in the transcripts of the interviews I conducted. I expected it to be pretty good at explaining explicit mentions of concepts, but I had not anticipated that it would be so good at understanding more nuanced and layered meanings. This project is still in progress, and I expect very interesting results.
There are some ethical considerations which should be taken into account when using generative AIs.
Privacy/confidentiality: Data submitted to some generative AIs could be used to train the generative AI further (often depending on whether you have a paid or free version). Make sure to check the privacy statements and always seek informed consent from your research participants.
Artwork: Generative AIs were trained with artwork without express consent from artists. Additionally, it is worth considering who the actual artist/author/creator of the artwork is when you use generative AI to create it. I consider both the user and the AI as collaborators working to create the artwork together.
Bias propagation: Since generative AIs are trained based on data from society, there is a risk that they may reflect biases present in the training data, perpetuating stereotypes or discrimination.
Sustainability: Recent research demonstrates that generative AI does contribute significantly to the user's carbon footprint.
It is also important to ethically and honestly acknowledge how you have used generative AI in your work by distinguishing what work you have done and what work it has done. I have previously posted a template acknowledgement for students and researchers to use. I have recently updated the acknowledgement I use in my work and have included it below.
I acknowledge that I used a customised version of ChatGPT 4 (OpenAI, https://chat.openai.com/) during the preparation of this manuscript to help me refine my phrasing and reduce my word count. The output from ChatGPT 4 was then significantly adapted to reflect my own style and voice, and was further revised during the peer review process. I take full responsibility for the final content of the manuscript.
My final tip is – be brave! Go and explore what is out there and see what you can achieve! You may be surprised how much it revolutionises your practices, freeing up your brain space to do really cool and creative higher-order thinking!
Questions to ponder
How does the use of generative AI impact traditional roles and responsibilities within academia and research?
Discuss the implications of defining a ‘collaborative’ relationship between humans and generative AI in research and educational contexts. What are the potential benefits and pitfalls?
How might the reliance on generative AI for tasks like grammar checking and data analysis affect the skill development of students and researchers?
The blog post mentions generative AI’s ability to detect implicit meanings in data analysis. Can you think of specific instances or types of research where this capability would be particularly valuable or problematic?
Reflect on the potential environmental impact of using generative AI as noted in the blog. What measures can be taken to mitigate this impact while still benefiting from AI technologies in academic and research practices?
Artificial intelligence (AI) has been present in society for several years – think, for example, of computer grammar-checking software, autocorrect on your phone, or GPS apps. Recently, however, there has been a significant advancement in AI research with the development of generative AI technologies like ChatGPT. Generative AI refers to technologies which can perform tasks that require creativity by using computer-based networks to create new content based on what they have previously learnt.
For example, generative AI technologies now exist which can write poetry or paint a picture. Indeed, I entered the title of one of my published books (Research and Teaching in a Pandemic World) into a generative AI which paints pictures (Dream by WOMBO). The image it generated accurately represented the book's content and was eye-catching; I believe it would have been a very suitable picture for its cover. Check it out:
(Note: This image was generated by Dream by WOMBO (WOMBO Studios, Inc., https://dream.ai/) on December 12, 2021 by entering the prompt "research and teaching in a pandemic world" into the generator and selecting a preferred style of artwork.)
The introduction of generative AI has, however, led to a certain amount of panic among educators; many workshops, discussions, policy debates, and curriculum redesign sessions have been run, particularly in the higher education context. Many educators acknowledge that generative AI can be leveraged to support student learning; indeed, it is clear that students will likely be expected to know how to use this technology when they enter the workforce. Importantly, though, there has also been significant concern that generative AI will encourage students to cheat. For example, many educators fear that students could enter their essay topic into a generative AI and that it would generate an original piece of work which meets the task requirements to pass.
I believe what is missing from these discussions regarding generative AI is the fact that assessment regimes focus predominantly on the product of learning. This focus assumes that the final assignment is indicative of all the studentâs learning but neglects the importance of the learning process. This is where generative AI can be a valuable tool. From this perspective, the technology should be considered as an aide, with the intellectual work of the user lying in the choice of an appropriate prompt, the assessment of the suitability of the output, and subsequent modification of that prompt if the output does not seem suitable. Some examples of the use of generative AIs as an aide include helping students develop an outline or brainstorm ideas for an assignment, providing feedback to students on their work, guiding students in learning how to improve the communication of their ideas, and acting as an after-hours tutor or a way for English-language learners to improve their written skills. Using generative AI in this more educative manner can help students better engage with the process of their learning.
In a similar way to when Microsoft Word first introduced a spell-checker, I believe generative AI will become part of our everyday interactions in a more digitally connected and inclusive world. Importantly, though, as mentioned above, while generative AI may help the user create something, it is dependent on the user providing it with appropriate prompts to be effective. The user is also responsible for evaluating the accuracy or usefulness of what is generated. As such, we need to teach students how to communicate effectively and collaboratively with generative AI technologies, as well as evaluate the trustworthiness of the results obtained, a concept termed AI literacy. I believe AI literacy is likely to soon become a key graduate attribute for all students as we move into a more digital world which integrates human and non-human actions to perform complex tasks.
In my teaching practice, I now advise students to use generative AI as a tool to help them improve their approaches to their assignments. I suggest, in particular, that generative AI can be used as a tool to start brainstorming and planning for their assignment or research project. I include examples of how generative AI can be used for various purposes in my classes. For example, I highlight that generative AI may be able to assist a researcher in generating some starting research questions, but it is the researcher's responsibility to refine these questions to reflect their particular research focus, theoretical lens, and so on. I emphasise to students that generative AI will not do all the work for them; they need to understand that they are still responsible for deciding what to do with the information, linking the ideas together, and showing deeper creativity and problem-solving in the final version of their work.
I have recently showcased this approach in two videos which are freely available on YouTube. The first video (Using generative artificial intelligence in your assignments and research) explains what generative AI is and what it can be used for in assignments and research. The second video (Using generative AI to develop your research questions) showcases a worked example of how I collaborated with a generative AI to formulate research questions for a PhD project. These videos can be reused by other educators as needed.
The second video starts by showing students how I used ChatGPT to brainstorm a starting point for a research project by asking it to "Act as a researcher" and list the key concerns of doctoral training programmes. In this way, I show the students the importance of prompt design in the way they collaborate with the generative AI. In the video, I show that ChatGPT provided me with a list of seven core concerns and note that, using my expertise in the field, I evaluated these concerns and can confirm that they are representative of the thinking in the discipline. In the rest of the video, I showcase how I continued my conversation with the generative AI by asking it to formulate a research question that investigates the identified core concerns. I show students how I collaborated with the generative AI to refine the research question until, in the end, a good-quality question was developed which incorporates the specificity and theoretical positioning necessary for a PhD-level research question.
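For educators who want to adapt this worked example beyond the chat interface, the conversation can be reproduced programmatically as a running message history. The sketch below is a minimal illustration, assuming the OpenAI Python SDK; the model name and the follow-up wording are my own placeholders, not a transcript of the session shown in the video.

# A minimal sketch of the multi-turn research-question conversation.
# Assumptions: OpenAI Python SDK installed, OPENAI_API_KEY set; the model
# and follow-up prompts are illustrative, not the exact session in the video.
from openai import OpenAI

client = OpenAI()
messages = []

def ask(prompt: str) -> str:
    """Send a prompt within the ongoing conversation and keep the history."""
    messages.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    content = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": content})
    return content

# Step 1: role-based brainstorming prompt.
concerns = ask("Act as a researcher and list the key concerns of "
               "doctoral training programmes.")
# At this point the user evaluates the listed concerns against their own
# disciplinary expertise before continuing.

# Step 2: ask for a research question investigating those concerns.
question = ask("Formulate a research question that investigates these core concerns.")

# Step 3: iteratively refine until the question has the specificity and
# theoretical positioning needed for a PhD-level project.
refined = ask("Refine the question so that it is more specific and "
              "incorporates an explicit theoretical positioning.")
print(refined)

Keeping the full message history in each call is what makes the exchange a genuine conversation: each refinement builds on the concerns and draft questions that came before, just as in the video.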
It is important to note that students are likely not yet experts in their field when they are designing their research questions. We therefore need to provide them with guidance as to how to evaluate the ideas produced by generative AI. This includes highlighting that a generative AI is not always accurate, that it may disregard some information which may be pertinent to a specific research project, or that it may fabricate information. Students need to learn that a generative AI is not a tool similar to an encyclopedia which contains all the correct information. Rather, generative AI is a tool which responds to prompts by generating answers it "thinks" would be appropriate in that particular context. Consequently, I advise students to use generative AI as a starting point, but they should then explore the literature to further assess the accuracy of the core concerns identified earlier as well as the viability of the research question for their project.
It is also worth noting that generative AI could be used as a way to help students see what a good research question might look like, rather than using it specifically to develop a research question for their particular research project. Generative AI may also be useful in helping students see how to organise the themes in the literature. In this way, we encourage students to use generative AI as part of the learning process, allowing them to scaffold their skills so that they can use their creativity and other higher-order thinking skills to further advance knowledge in their discipline.
Students should also be taught how to appropriately acknowledge the use of generative AI in their work. Monash University has provided template statements for students to use. I use these template statements as part of my regular workshops. In this way, I show students that ethical practice is to acknowledge which parts of the work the generative AI did and which parts of the work were done by a person.
I have also recently used such an acknowledgement in one of my research papers. I have included it below for other researchers to use in their work.
I acknowledge that I used ChatGPT (OpenAI, https://chat.openai.com/) to generate an initial draft outline of the introduction of this manuscript. The prompt provided for this outline was “Act as a social science researcher and write an outline for a paper advocating for change to survey design to collect more diverse participant information”. I adapted the outline it produced for the introduction to reflect my own argument, style, and voice. This section was also significantly adapted through the peer review process. As such, the final version of the manuscript does not include any unmodified content generated by ChatGPT.
As with all new technologies, there are potential challenges and risks that should be considered. Firstly, generative AI technologies can generate results which seem correct but are factually inaccurate or entirely made up. Secondly, there is the issue of equity of access. It is incumbent upon us as educators to ensure that all students have equal access to the technologies they may be required to use in the classroom. Thirdly, there is the risk that the generative AI may learn and reproduce biases present in society. Finally, for researchers, there are also ethical concerns relating to the retention and possible generation of potentially sensitive data.
Generative AI is, at its core, a natural evolution of the technology we already use in our daily practices. In an ever-increasingly digital world, generative AI will become integral to how we function as a society. It is, therefore, incumbent upon us as educators to teach our students how to use the technology effectively, develop AI literacy, and use their higher-order thinking and creativity to further refine the responses they obtain. I believe that this form of explicit modelling is how we, as educators, can help students develop an understanding of generative AI as a tool to improve their work. In this way, we focus on the process of learning, rather than being so focused on the ultimate product for assessment.
Questions to ponder
How do you think AI literacy can be integrated into current educational curricula to enhance learning while ensuring academic integrity? What are the potential challenges and benefits of incorporating generative AI into classroom settings?
How should students and researchers navigate the ethical implications of using AI-generated content in their assignments and research?