This work is distributed under the Creative Commons Attribution 4.0 License.
Harnessing AI for Geosciences Education: A Deep Dive into ChatGPT's Impact
Abstract. The integration of artificial intelligence language models, particularly ChatGPT, into geoscience education has the potential to transform the learning landscape. This study explores the impact of ChatGPT on geoscience education. The research comprises two phases: first, a survey to understand students' perceptions and usage patterns of ChatGPT, and second, a series of tests to assess its reliability, content generation capabilities, translation abilities, and potential biases.
The survey findings reveal that ChatGPT is gaining popularity among geoscience students, with many using it as a quick information retrieval tool and for content generation tasks. However, students expressed concerns about its accuracy, potential biases, and lack of awareness regarding its limitations. While ChatGPT offers benefits in terms of generating content and streamlining educational tasks, it cannot replace the essential role of human teachers in fostering critical thinking and problem-solving skills. Thus, a balanced approach is crucial. Ethical concerns surrounding ChatGPT include its potential to bypass plagiarism detectors, introduce biases, and raise issues related to data privacy and misinformation. Responsible adoption of AI technologies in education is essential to address these concerns. In conclusion, ChatGPT has the potential to enhance geoscience education, but its implementation should be approached with caution. By understanding its capabilities and limitations, educators can leverage AI technologies to create more engaging, inclusive, and effective learning experiences while upholding academic integrity and ethical standards.
Status: closed
- RC1: 'Comment on gc-2023-7', Matt McCullock, 09 Feb 2024
The article does raise some interesting questions about the use of AI in teaching Geosciences. The main points are well made – there is a tension between student use of ChatGPT and its well-known issues of reliability and bias. There is also mention of the ethical considerations of AI use. How can it be ethically used in the classroom to supplement, rather than replace, student learning?
Polling students' use and views of AI does add an interesting perspective, as it can complement the teacher's own view. The article does produce some (potentially) interesting data gathered from the student responses that could add to the discussion on how to incorporate AI in the classroom. It is this part that needs to be developed.
The student data needs to be developed and contextualised.
The discussion of the students' views can be developed by expanding on the statistics discussed (lines 136-169). The main findings are presented very quickly, without discussing what they actually mean or how they relate to the main purpose of the article.
The contextualisation can be developed by exploring what policies and software the institutions have in place for AI usage. The students are polled and the statistics are mentioned briefly, but there is no attempt to contextualise the students in their institutions. The responses have been isolated from the classroom. If AI offers “clarification outside of traditional classroom hours” (line 67), then how have the institutions used it? Is this use authorised by the institution, or do the students do this independently? Given the concerns about bias, false references, etc. produced by ChatGPT, there needed to be more contextualisation of student use.
The section on ethical and societal implications (lines 384-429) does raise important questions about the ethical use of AI and the possible threat it poses to education, but it does not offer any suggestions or answers to those questions. How have the three institutions responded to them? What policies and procedures do they have in place? There is a broader discussion of AI use in GS, but the article focuses on an Indian perspective without contextualising it in an Indian setting.
Section 4 (p. 13) reads like the introduction to the article. It contextualises the use of AI in education more broadly, and in GS teaching more specifically. Section 4.2 again brings in pedagogical implications, but this comes after the cramped discussion of student use of AI. Moving Section 4 to the beginning would provide a more logical flow from the aims of the study to the broader pedagogical implications, to the specific student use. Unpacking 3.1 next would develop the argument, as it would focus on student engagement.
The section on testing ChatGPT's features (3.2.1) is confusing. Is this how students have used them? If so, this link needs to be made clearer. At present, the section reads like a test of the tool's features in the absence of student engagement, perception or learning.
The discussion on integrity was also puzzling. GPTZero is one AI detection platform, but no recognised platform is foolproof. Turnitin has AI detection software built in, but many institutions have not activated it due to the significant number of false positives it produces. Computer software is also not the only way to detect AI: there is no recognition of other AI-detection software or of human ways to detect AI use (line 396). Lines 106-111 show GPTZero to be accurate, yet line 394 states that AI-generated text is undetectable?
The discussion of ChatGPT-4 (line 52) also raises issues of accessibility. It is subscription-based, so it excludes students who do not have the means to pay.
ChatGPT has since been given real-time access to the internet, meaning the comment on lines 158-9 needs to be updated.
The article has two disparate strands. The first, which is more in line with the abstract, explores how students use ChatGPT. The second explores how academics have challenged the reliability of ChatGPT in GS teaching. They do not necessarily follow from one another. The second strand could form an article that does not discuss student usage at all.
If the article's focus is on using ChatGPT in the classroom, the argument needs to start with the broader pedagogical discussions, then explore how the three institutions have used it, and then explore the students' use and perceptions. The data collected needs to be better developed and discussed.
- AC1: 'Reply on RC1', Subham Patra, 17 Apr 2024
- RC2: 'Comment on gc-2023-7', Steven Rogers, 20 May 2024
This submission presents interesting and quite timely research into the use of generative AI in an HE setting. The work is framed as highlighting how generative AI can be used in geoscience education; however, there is nothing present that is really geoscience-specific - the report could really be for HE in general, with the geoscience example serving as a useful case study. The submission doesn't mention whether a favourable ethical opinion was obtained from a relevant ethics committee before data was collected - this needs addressing/resolving.
The general discussion/conclusion of the work highlights that generative AI can be useful for structuring and writing blocks of text, and that this text can't be "spotted" by plagiarism platforms - this in itself isn't surprising, as creating unique dialogue is what generative AI is supposed to do (the work explains that this indicates the "content" of the work is good - I would argue that content tends to refer to the actual substance of the work, rather than its clarity). In another part of the research, the AI is shown to really struggle with problem solving. I think these two aspects could be looked at in a really constructive way - generative AI could be an accessible platform for students working in second languages, students with dyslexia, or many other students to develop their communication skills. The AI can help structure the students' original problem-solving work into well-communicated documents. I think this would be a much more impactful use of the data presented here - and I would also suggest the work might be framed for a general HE audience. In its current format, I think the findings are a little obvious, and they do need some contextualising within the rich pedagogic research on AI and digital innovation.
To summarise, I think the submission would benefit from a reworking and potentially belongs in a broader HE journal (I don't think the findings/potential are at all specific to geoscience). I would also suggest the submission be updated to discuss developments such as ChatGPT-4 and the various AI image-generating platforms now available.
Citation: https://doi.org/10.5194/gc-2023-7-RC2
- AC2: 'Reply on RC2', Subham Patra, 31 May 2024
The comment was uploaded in the form of a supplement: https://gc.copernicus.org/preprints/gc-2023-7/gc-2023-7-AC2-supplement.pdf
Viewed
HTML | PDF | XML | Total | Supplement | BibTeX | EndNote
---|---|---|---|---|---|---
535 | 108 | 32 | 675 | 28 | 21 | 25