Cover Story

AI Discussion Remains in Full Force at Colorado College

Julia Fennell ’21

“The faculty, students, and staff of Colorado College are perfectly situated to wrestle with the implications of generative AI. We are a place where learning, research, innovation, ethics, and dialogue intersect. The experts in our interdisciplinary community can lead conversations and projects that interrogate and explore both the foundations and future of AI. As an institution that strives to foster the ethical creation of knowledge, we can rise to this challenge without trepidation and without acquiescing to the inertia of educational technology. We can ask the hard questions as we explore. Colorado College is made for this challenge and opportunity.”

Dr. Emily Chan, Vice President and Dean of the Faculty and Professor of Psychology

Like any college or university in 2023, CC is facing the challenges that come with AI and other emerging technologies. However, CC has an advantage. Students here are already taught how to be global citizens, how to have hard conversations while standing up for what’s right, and how to work to create a more just world. These lessons trickle into every aspect of CC life, including the discussions and use of AI. From philosophy to computer science classes, CC faculty, staff, and students are facing the difficult questions surrounding this technology head-on, using advanced research and their liberal arts education.

DR. CORY B. SCOTT ’13


Dr. Cory B. Scott ’13

Dr. Cory B. Scott ’13, Assistant Professor of Mathematics and Computer Science, does research in AI and machine learning, so the topic comes up in some way in most of his classes. He has had students work on class projects that apply machine learning techniques to datasets, and he has taught students the math that makes these models work so they can build their own models from scratch.

“Typically, I introduce students to relatively simple examples like linear regression and neural networks, and eventually move on to more complicated machine learning models like convolutional neural nets, or transformers, which is the kind of model that ChatGPT is,” says Scott, who graduated from CC with a double major in Math and Computer Science. “I’ve assigned readings and led in-class discussions about how these models often demonstrate significant race or gender bias.”
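As a rough sense of where that progression starts, the following is a minimal sketch, not drawn from Scott’s course materials, of fitting a linear regression model with scikit-learn; the study-hours data and all numbers are invented for illustration.

# Minimal linear regression sketch (hypothetical data, not course material).
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy dataset: hours studied (input) and exam score (output).
hours = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])
scores = np.array([52.0, 60.0, 71.0, 78.0, 90.0])

model = LinearRegression()
model.fit(hours, scores)                       # learn a slope and intercept from the examples

print("slope:", model.coef_[0])                # points gained per extra hour, per the fitted line
print("intercept:", model.intercept_)          # predicted score at zero hours
print("predicted score for 6 hours:", model.predict([[6.0]])[0])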

Scott also speaks to the ethical questions that surround AI. “Despite how popular this technology is, most machine learning experts, especially those who are worried about the ethical ramifications of the field, have concerns about it and other large language models.” He notes several concerns with these models, including bias, and points out that while ChatGPT and related models are good at producing human-sounding language, nothing ensures their output is factual. “This means they can produce very authoritative-sounding text that contains completely made-up facts,” he says, adding that the companies creating and training these models are not transparent about where their data comes from, which means data carrying race and gender biases produces AIs with the same biases.

Scott is also concerned about the ethics and sustainability issues with the production and creation of these models. “These models are very expensive to train and run. Just asking ChatGPT five questions can use a gallon of water, and ChatGPT is responsible for more than 10,000 tons of CO2 per year,” he says. “Models like ChatGPT do have some safeguards on the content they produce – for example, ChatGPT will not generate sexually explicit or violent content; however, the training process for making these safeguards has frequently involved paying laborers in the global south to purposefully generate harmful content. In most cases these laborers are not well-paid and can develop PTSD from the content they are producing and consuming. So OpenAI gets its ‘safe’ version of the model, but at the expense of the mental health of workers in Kenya.”

Scott says AI, and in particular systems like ChatGPT, has a lot of potential for classroom use, and there are many great tools that can do things such as automatic translation or transcription.

“There are tools that can improve learning outcomes by personalizing content to fit how a student has done so far in the class,” Scott says. “These tools all show enormous promise. However, until many of the ethical concerns regarding ChatGPT are resolved and improved upon, I think the risks outweigh the benefits in a classroom setting. I will definitely continue to teach about it, but I don’t let my students use it as a tool.”

There are clearly some important, and hard, conversations about AI and other machine learning technologies that must be had. Scott notes that his students are ready and willing to have them. “I was really proud of the way students in my classes have discussed and wrangled with these issues,” he says. “In a lot of ways, it’s one of the things I enjoy most about teaching at CC – that my students can have discussions that really dig deep into these very nuanced concerns.”

DR. BLAKE JACKSON ’16


Dr. Blake Jackson ’16

“When it comes to AI, Colorado College has provided a top-notch education to its students for at least a decade that I’m personally aware of, and doubtless longer than that, and we will continue to do so,” says Dr. Blake Jackson ’16, Assistant Professor of Computer Science. “When I started doing machine learning research for my master’s degree, I had just graduated from CC, and I was already prepared to contribute to publishable—now published—machine learning research. Part of the AI education at CC, obviously, comes from classes that directly focus on the field of AI and on various subfields thereof, like machine learning.”

Jackson teaches some of these classes, including Natural Language Processing, offered in Block 7. He also hopes to soon bring to CC a Robot Ethics class that he previously taught at Harvey Mudd College.

“In these courses, I of course teach students about the technological underpinnings of relevant AI technologies. Equally importantly, I emphasize to students the importance of carefully analyzing the social impacts of these technologies and provide students with some frameworks for doing so,” says Jackson, who graduated from CC with a Computer Science major and Discrete Math minor. “I think this is where a school like CC can really excel relative to other types of institutions.”

Jackson hasn’t taught a class completely devoted to AI at CC, but some of his courses heavily involve AI. Students in his Natural Language Processing class get hands-on experience with different kinds of AI models applied to working with language, from simple tree-based classification models to more complicated deep neural and transformer-based large language models.
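To illustrate the simpler end of that spectrum, here is a minimal sketch of a tree-based text classifier built with scikit-learn; the tiny labeled dataset is invented for this example and is not taken from Jackson’s course.

# Minimal tree-based text classification sketch (invented data, not course material).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline

# Tiny invented dataset: label short snippets as positive (1) or negative (0).
texts = ["great class", "loved the block plan", "terrible weather",
         "boring lecture", "wonderful professor", "awful food"]
labels = [1, 1, 0, 0, 1, 0]

# Turn words into counts, then fit a decision tree on those counts.
classifier = make_pipeline(CountVectorizer(), DecisionTreeClassifier(random_state=0))
classifier.fit(texts, labels)

print(classifier.predict(["great professor", "awful lecture"]))  # likely [1 0] on this toy data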

“Of course, I hope that these assignments help students to fully understand how these models work from a technical standpoint, but I also view it as absolutely vital to help our students understand the social impacts and risks of these technologies and their various uses situated in various contexts,” Jackson says. “What’s the point of making AI if it isn’t going to make the world a better place? Our graduates should be prepared to use their AI knowledge in beneficial ways, not just novel or profitable ones. They should also be prepared to critically evaluate speculative claims about the future of AI.”

Professors across many departments have concerns about the accuracy of much of the information ChatGPT produces.

“I’ve heard of people trying to use ChatGPT kind of like a search engine to find information about a topic. This is not a good idea,” Jackson says. “These large language models are designed to generate statistically likely language. Unfortunately, statistically likely language is not the same as factually likely content. There are hundreds of examples archived in various corners of the Internet of ChatGPT or a similar system outputting outright falsehoods.”

“For example, I’ve seen screenshots of ChatGPT repeatedly insisting to a user that there are no countries that start with the letter ‘V.’ This is fairly harmless because the typical human will quickly discern that it isn’t true, but there are other examples where the chatbot’s errors are less obvious and more potentially harmful,” Jackson says. “For example, ChatGPT is known to generate completely fabricated citations for its claims that look like citations to legitimate academic papers, but actually cite papers that do not exist at all and never have. The model is just generating statistically likely language, so it will generate citations that look statistically likely in places where citations are statistically likely, but these citations are fake! They are made up and do not correspond to any real papers.”

Jackson notes that despite these concerns, he’s not saying there’s no use at all for large language models in classes at CC, even outside the Math and Computer Science Department. For example, students in social sciences can use large language models to study human language use on the internet. He has also seen AI used effectively in the arts.

“However, a prerequisite for doing this kind of work rigorously is understanding the language model one is using for the work,” Jackson says. “Luckily, we have some amazing interdisciplinary scholars in our social science and computer science programs that can help students achieve this! Our students are highly creative and conscientious people, and we should work together with them to use AI in creative and conscientious ways.”

Jackson notes that there are many other kinds of AI, many of which are also in use and under discussion at CC and have been for years. This includes asocial and non-linguistic AI models, which he thinks are less risky for students to use.

“Our physics students might use random forests to classify stars and galaxies in telescope data,” Jackson says of asocial and non-linguistic AI models. “Our geology students might do AI anomaly detection on seismic data. Our biology students might use machine learning in a bioinformatics project to, for example, classify different gut microbiomes. I fully support anything like these examples. There are limitless possibilities to use AI in the classroom without using the fraught large language models that are currently so popular.”
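As a loose sketch of what the first of those examples might look like in practice, the snippet below trains a random forest on entirely synthetic data; the “brightness” and “size” features are hypothetical stand-ins for real telescope measurements.

# Random forest sketch for star-vs-galaxy classification (synthetic, hypothetical features).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical features: apparent brightness and angular size (arbitrary units).
stars = np.column_stack([rng.normal(18.0, 1.0, n), rng.normal(1.0, 0.2, n)])
galaxies = np.column_stack([rng.normal(20.0, 1.0, n), rng.normal(3.0, 0.8, n)])
X = np.vstack([stars, galaxies])
y = np.array([0] * n + [1] * n)   # 0 = star, 1 = galaxy

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)
print("test accuracy:", forest.score(X_test, y_test))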

DR. BEN NYE


Dr. Ben Nye

Dr. Ben Nye, Assistant Professor of Mathematics and Computer Science, explains that machine learning models, specifically large language models, are just mathematical functions: you put a number in, then a bunch of addition, multiplication, or division happens, and then a number comes out.

“The ‘machine’ part is about picking a function, a model, that you think will work well, and figuring out how to map inputs or outputs you care about to numbers,” Nye says. “The ‘learning’ part is getting a big pile of examples of which outputs you want for which inputs, and tweaking the coefficients and parameters of your model until it gets close to the examples you’ve shown it.”
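A minimal sketch of that “learning” step, written in plain Python with invented example data rather than any particular library: a two-coefficient model is nudged, step by step, until its outputs come close to a handful of example input/output pairs.

# "Learning" as coefficient tweaking: fit y = w*x + b to examples by gradient descent.
examples = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0), (4.0, 9.0)]   # (input, desired output) pairs

w, b = 0.0, 0.0             # start with arbitrary coefficients
learning_rate = 0.01

for step in range(5000):
    grad_w = grad_b = 0.0
    for x, target in examples:
        error = (w * x + b) - target     # how far the model's output is from the example
        grad_w += 2 * error * x          # how the error would change if w were nudged
        grad_b += 2 * error              # how the error would change if b were nudged
    w -= learning_rate * grad_w / len(examples)   # tweak the coefficients toward the examples
    b -= learning_rate * grad_b / len(examples)

print(w, b)   # approaches w = 2, b = 1, the pattern hidden in the examples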

“Speaking as someone whose research is focused on developing and training LLMs to solve real-world problems, and has been working in this space for the last 10 years, we have an extremely limited understanding of the implications of these technologies,” says Nye, who adds that AI technology is a specific application of algorithmic problem solving and that there’s no getting away from the environmental, social, and political implications of this technology in any computer science class.

“If I’m training these students to go out into the world and use their knowledge to solve problems for the betterment of society, then understanding, or at least awareness, of ethical implications is essential,” Nye says. “In every class I’ve taught at CC, there is at least one example where a seemingly benign computational solution has unforeseen and troubling consequences. You can’t write a shift-scheduling algorithm without systematically disadvantaging some groups of people. You can’t write a text generation algorithm without the potential of accidentally recreating harmful ideas. You can’t write a content recommendation algorithm without introducing bias into the set of articles that are selected for presentation. We begin these conversations in the very first class, Computational Thinking, and continue them throughout the curriculum.”

In the Natural Language Processing class Nye taught last year, he had students dedicate several days to investigating and quantifying the ways in which biases were present in all the models and data they had encountered.

“As an educator at a school like CC, I strive to teach students how to think carefully and critically,” Nye says. “Some tools, such as the often-compared calculator, allow students to short-cut broadly non-essential portions of assignments and reallocate time towards developing deeper and more robust cognitive skills. Other tools, such as web search or homework databases, also allow students to short-cut assignments – but at the expense of critical thinking. I think we should be very intentional about when and how we allow students to use these short-cuts, and the fact that students will be able to use these tools in the ‘real world’ is insufficient justification for adoption. In general, I prohibit the use of ChatGPT in all my classes as anything other than an example of widely used computational systems that are rife with concerning issues.”

Nye points out that the ethical implications and concerns of these technologies are wildly important. “I’m not sure how a school like CC, with such a strong focus on ADEI issues, can endorse the use of something like ChatGPT wholesale. Leaving each department and/or professor to address these concerns – or not – as they see fit doesn’t seem nearly sufficient.”

DR. JANET BURGE


Dr. Janet Burge

“In addition to depriving the students of the opportunity to learn important writing, research, and, in the case of computer science, coding skills, there is no guarantee that tools such as ChatGPT, or those designed specifically to write computer programs, are going to return a correct answer,” says Dr. Janet Burge, Associate Professor and Co-Chair of Mathematics and Computer Science. “There have been recent court cases where AI tools have just made things up. They also don’t provide the source of that information, which means we have no way to check it for accuracy or to give credit where credit is due.”

Burge, whose primary research focuses on design rationale, also notes that the information used to train the tools is unlikely to have been provided with the consent of the original author.


“There are also issues relating to the use of these tools that should be of particular concern at Colorado College, which has goals of being both an antiracist institution and one concerned with sustainability,” Burge says. “Large language models and other machine learning tools learn on data that is often biased, and as a result, can help institute discrimination at scale. This results in things like tools that make racist and sexist assumptions about people, such as assuming that a doctor must be male and a nurse must be female, or the well-publicized issues with facial recognition mis-identifying Black faces at a higher rate than white ones. Another problem that may be even more difficult to mitigate is the environmental impact of these tools – the large supercomputers that power these models consume a great deal of electricity to run and water to cool.”

As of now, Burge does not use or allow these tools in her courses, though she is open to changing this position as researchers work to address some of the concerns or if the tools become more prevalent in the industry or are allowed during coding interviews.

“We want our students to be prepared to learn and use new technology, but it needs to come from a strong foundation, which is what we focus on in most of our courses,” Burge says. “There is a great deal of promise in these tools, but for now it’s worth looking at them with some caution and skepticism. Many years ago, I had a co-worker who liked to use the phrase ‘sharp tools are dangerous’ — something to keep in mind when adopting new technology. There certainly are ways to use it in the classroom that can enrich learning, but we need to be aware of the risks as well.”

Some faculty and staff members believe AI can help CC refocus on its mission and how to accomplish that mission.

CHRIS SCHACHT


Chris Schacht

“I think AI can help us refocus on what we want to teach and why, and, to some extent, forces us to be more explicit in that,” says Chris Schacht, Director of the Ruth Barton Writing Center. “In the past, an instructor could assign a piece of writing and feel 99% confident that the student had gone through a writing process of some kind in order to create a final product. This is no longer the case. Depending on the assignment, the only process a student has to go through is asking the right question to the chatbot. If we value the critical thinking that goes into the writing process, which I think any liberal arts college must value, then we have to focus on the writing process itself. This likely means dedicating course time to writing, where the instructor can be involved in the student’s writing process.”

Schacht adds that this dedicated time is even more important if the instructor wants the students to use AI in the class. “The instructor needs to be there to model proper use of the bots and teach how to critically evaluate and alter the output. The challenge, of course, is teaching students to use something that we are still learning, as well.”

Schacht echoes a common concern about the unknown nature of the future of AI. “At this point, we don’t know how much it will evolve, what its final form will be, and how easy or difficult it will be to access,” he says. “The good thing is that students and faculty are all eager to learn more about it, and not just in the ways it functions. There have been really great arguments on the various ethical implications of such technology, and how we can use it responsibly, or if that’s possible in its current form.”

Some at CC think AI is here to stay and that one of the best ways to meet it is head-on, using it as an opportunity to grow as people and scholars.

DR. RYAN BAÑAGALE ’00


Dr. Ryan Bañagale ’00

“The study and creation of music has always brushed up against new technologies, whether that is the development of notation 800 years ago, the mass distribution of popular song in the 20th century, or the digital audio workstations that dominate production today,” says Dr. Ryan Bañagale ’00, Associate Professor and Chair of the Music Department and Interim Director of the Crown Center for Teaching. “AI is another piece of that, one that impacts not only faculty and students, but also everyone who listens to or makes music. In the Music Department, we are actively creating opportunities for our students to explore, integrate, and interrogate a variety of AI technologies. We see this as an opportunity to continue to grow as musicians and scholars, just as we have with the advent of other technologies, be that the printing press or the internet.”

“Critical listening is a core tenet of the Music Department in all the work we do. We’re already in a world where text-to-speech voice generators allow producers to create ‘deep fake’ audio tracks compelling enough to draw the ire of major recording labels,” says Bañagale, who double majored in Music and Theater and minored in Renaissance Studies at CC. “Being able to hear through and beyond AI-generated audio will become an increasingly important and valuable skill. Through active engagement with music-related AI, we’re preparing our students for a future where AI is as omnipresent as the internet, whether we like it or not.”

As interim director of the Crown Center for Teaching, Bañagale is part of a broader task force on AI and its implications and opportunities for CC students, faculty, and staff.

We know that AI and similar technologies impact everyone, not just those working in computer science or math. Therefore, the conversation surrounding them occurs in all areas at CC, including the English, Political Science, and Philosophy Departments.

DR. MARION HOURDEQUIN


Dr. Marion Hourdequin

Dr. Marion Hourdequin, Professor of Philosophy, has had students in her Epistemology class experiment with ChatGPT and discuss whether ChatGPT possesses knowledge and what that would require.

“Since epistemology explores the nature of knowledge, it seemed apt to consider whether large language model chatbots like ChatGPT have knowledge, and if so what kinds,” she says. “One student – a Philosophy and Computer Science double major – pointed out that ChatGPT’s ‘knowledge’ is based on language patterns, and in this sense, is one step removed from knowledge formed through direct engagement with the world. ChatGPT predicts based on patterns of language use what words and concepts go together and in what order – but it is fully dependent on how language is used and can’t directly evaluate whether that language corresponds to anything in the world, outside of language. In this sense, ChatGPT’s ‘knowledge’ is derivative and differs from human knowledge. Although we rely on language to form and express beliefs, we also have other means to check the reliability of ideas expressed in language.”

“If someone tells me that Tutt Library is made of chocolate, I have other ways of exploring that claim beyond just what others have said, and in particular, what others on the internet have said, about Tutt Library,” Hourdequin notes.
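As a toy illustration of that idea of prediction from patterns of language use, the sketch below builds a tiny bigram model that suggests, for a given word, the word most often seen after it in its invented training text. ChatGPT is vastly more sophisticated, but the same basic point holds: the system continues language based on patterns alone, with no way to check those patterns against the world.

# Toy bigram "language model": predict the next word purely from word-pair counts.
from collections import Counter, defaultdict

training_text = ("the library is made of stone the library is open late "
                 "the campus is quiet the library closes at midnight")

words = training_text.split()
following = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1        # count which words follow which in the training text

def predict_next(word):
    # Return the most frequent follower seen in training; it knows patterns, not facts.
    return following[word].most_common(1)[0][0] if following[word] else None

print(predict_next("library"))   # "is", because that is the most common pattern, true or not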

Her class also explored the issues of biases in large language models and the social context for the development and application of large language models, as well as the propensity of ChatGPT to fabricate sources when it is asked to provide citations for its outputs.

DR. ANDREEA MARINESCU


Dr. Andreea Marinescu

Dr. Andreea Marinescu, Associate Professor and Chair of the Spanish and Portuguese Department, says her department has had preliminary discussions about AI, but they have not used it in their classes as far as she knows. The department’s faculty and staff have discussed how AI might change the way they assess student learning, such as focusing more on in-class evaluations or oral assignments, like debates or group oral exams.

“We’re also curious about ethical issues and plan to explore whether or how AI is biased when it comes to non-English languages and cultures, for example,” Marinescu says. “I do think AI will push all of us to think more about how our own disciplines are changing and to (re)articulate our own views on what makes us human.”

Like other departments on campus, Spanish and Portuguese plans to have ongoing, regular conversations about AI.

Clearly, the challenges and dilemmas around AI, ChatGPT, and similar technology are not going away and will only grow as more advanced technology is developed. As it has before, CC will continue to step up to the challenge, having the hard conversations and making the tough decisions needed to prepare our students to create a more just world.
