Reported by Dongkun Lyu (with ChatGPT)
Before I started to write a report, I asked ChatGPT:
“Could you please write around a 300 words seminar report for me? The date is Feb 1, 2023. People are Tim Fitz (engineer of Turnitin), Professor Daniel Lim and DKU professors and student. The content is encompassing the research area of the speakers, the process of the development of AI, their experience and thought of ChatGPT and some classic computer experiment like ‘The Turing test’ and ‘Chinese character test’.”
Consequently, most of the report it produced, except for the first paragraph (which merely repeats the information I gave it), was not useful.
The seminar was held on February 1, 2023, and was attended by Tim Fitz, an engineer from Turnitin, Professor Daniel Lim, and several professors and students from DKU. The main focus of the seminar was to discuss the research areas of the speakers, the development process of AI, their experiences and thoughts on ChatGPT, and the classic computer experiments such as the Turing test and the Chinese character test. (by ChatGPT)
Based on this response from ChatGPT, I came to the conclusion that it is not yet capable enough to help us write reports.
At the beginning of the seminar, Tim Fitz, Senior Director of Engineering at Turnitin, framed how the participants might think about ChatGPT by first briefly introducing his career. Tim and his team wrote their own web crawler, similar to those of Yahoo, Google, and others, which surfed the Internet looking for interesting source material; they handled the indexing, report generation, and various improvements themselves. Working in this field, he has witnessed the evolution of AI in recent years and seen it gradually do many things well. What makes ChatGPT different from the AI that came before is that its output is no longer simple paraphrase but entirely original content.
After this short introduction to his background, Tim explained what AI is using an X-ray example: fed countless CT scans of healthy people and of cancer patients, an AI can, by building learning mechanisms and models, find correlations between spots in the images and cancer, even though it understands no pathology. This example also shows how AI can help doctors (and many other industries) do a great deal of work. ChatGPT, as a language-learning AI, went through early iterations that overcame problems such as grammatical confusion. It is also special in that it is open to the public on a large scale, which is why it has provoked such extensive discussion.
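The X-ray example above is essentially supervised learning: the model sees labeled examples and learns only a statistical correlation between features and labels. A minimal sketch of that idea, using purely synthetic numbers rather than real scans (the two features and all values here are invented for illustration):

```python
import math
import random

random.seed(0)

# Hypothetical toy data: each "scan" is reduced to two numeric features
# (say, average brightness of a suspicious region and its relative size).
# Label 1 = cancer group, 0 = healthy group. All numbers are synthetic.
def make_scan(label):
    if label == 1:
        return [random.gauss(0.8, 0.1), random.gauss(0.7, 0.1)], 1
    return [random.gauss(0.3, 0.1), random.gauss(0.2, 0.1)], 0

data = [make_scan(i % 2) for i in range(200)]

# Logistic regression trained by plain gradient descent. The model only
# fits a correlation between the features and the labels; it has no
# notion of pathology, which mirrors the speaker's point.
w, b, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(500):
    for x, y in data:
        p = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
        g = p - y  # gradient of the log-loss w.r.t. the logit
        w[0] -= lr * g * x[0]
        w[1] -= lr * g * x[1]
        b -= lr * g

def predict(x):
    """Classify a scan from its two features."""
    return 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0

accuracy = sum(predict(x) == y for x, y in data) / len(data)
print("training accuracy:", round(accuracy, 2))
```

On well-separated synthetic data like this the classifier labels scans almost perfectly, yet nothing in the code "knows" what cancer is: it has only found the correlation, exactly as described in the talk.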
Professor Daniel Lim then described his first encounters with ChatGPT, including how he learned of it and how he tried using it to help write a syllabus. Prof Lim found that it can help write simple programs and even help professors write exam questions.
Turning to ChatGPT and the issues it raises in philosophy, Prof Lim introduced two classic tests of whether a computer is intelligent: the Turing test and the Chinese room argument. The former is a thought experiment proposed by Alan Turing: if a person receiving messages cannot tell whether the sender is a machine or a human, then the machine has passed the Turing test. The latter, proposed by John Searle, is a challenge to the former: even if the Chinese characters coming out of the room are perfectly intelligible to a reader outside, the person inside, who merely manipulates symbols according to rules, need not understand Chinese at all; producing correct output, in other words, does not by itself demonstrate understanding.
The two speakers then discussed how we should rethink ChatGPT now that it has begun to affect us, how it will change the way we learn and teach, and how it will change the way people think. They expressed concern that the hard-to-detect original content ChatGPT can generate may disrupt the accumulation and progress of academic research.
Now that we have discussed ChatGPT, one question came to my mind: will ChatGPT stop people from thinking?
Prof Lim: I did use ChatGPT for my friend’s job application, writing his cover letter, because it was about a position he was not exactly familiar with. But I thought, hey, instead of giving him advice myself, why don’t I ask ChatGPT? And ChatGPT wrote a fantastic cover letter that also gave him ideas for what he could say in the interview. So will it stop students from thinking, or stop all of us from thinking?
This is such a good question, but I don’t think anyone can answer it, and I have a few slides to talk about this as well.
There are so many predictions about what role ChatGPT is going to play in our lives and what it is going to do. My suggestion is: we have no idea until this technology has been used in our society over time, and then we can come back and ask this question again.