
In my writing seminar, I participated in the first of the program’s three surveys about AI use in written assignments. The initiative was refreshing: It was the first time I had heard of a thoughtful investigation at Penn into how AI intersects with something as deeply personal as writing. But as I reviewed the questions meant to contextualize each participant, a more pressing question emerged: What exactly is Penn’s AI policy? It surprised me to realize that I wasn’t really sure, even though I’m a student in the School of Engineering and Applied Science.
Penn’s existing AI policies, released in November 2023 (a lifetime ago in AI terms), highlight this problem. The current guidelines primarily defer to the Code of Student Conduct and Code of Academic Integrity, vaguely instructing students to treat AI assistance like “assistance from another person.” This ambiguous approach, with terms like “guidance” and references to individual course policies, creates more questions than answers. When does AI “assistance” cross the line? How do we properly attribute AI contributions? The lack of concrete standards leaves both students and faculty navigating a gray area.
That gray area breeds disparity: Under Penn’s current approach, guidelines vary significantly between departments, schools, and even individual courses. The inconsistency is striking. In my multivariable calculus course, we were actively encouraged to use a specialized GPT (developed by the professor) for problem explanations. A Wharton tech industry course even dedicated 20% of the grade to AI-focused assignments. Yet my mathematical applications in computer science class not only banned AI tools entirely but also emphasized severe penalties for their use.
This challenge isn’t unique to Penn — most universities maintain similarly vague policies around AI use. However, as an institution that prides itself on innovation and forward thinking, Penn has an opportunity to lead rather than follow. We should be setting the standard for thoughtful, comprehensive AI education policies that other universities emulate.
The capabilities and limitations of modern AI tools make this policy vacuum particularly problematic. ChatGPT and similar systems have demonstrated remarkable abilities, from passing complex professional exams to generating sophisticated code and analysis in a fraction of the time it would take a human. Yet generative AI tools still make mistakes: At the end of the day, large language models are simply very good at predicting which word should come next. Navigating that mix of power and fallibility requires AI literacy, a skill students can only develop under a concrete AI policy.
So we have to ask ourselves: Why isn’t Penn leading the charge on clear AI policies? Well, there are valid reasons. AI tools are evolving at a pace that makes it difficult for governments, much less universities, to keep up. There’s also a tension between encouraging academic freedom and preventing misuse; professors may be hesitant to impose strict guidelines that could stifle creativity or experimentation.
Meanwhile, without clear guidelines, students find themselves in an uncomfortable position of uncertainty. Some use AI tools discreetly, worried about potential consequences but feeling disadvantaged if they don’t utilize resources their peers might be using. Others spend dozens of dollars a month on premium AI products. This unregulated use of AI not only creates an uneven playing field but also undermines the development of the critical thinking skills that should be central to our education.
The solution isn’t blanket restriction or unlimited AI use. Instead, Penn needs thoughtful, standardized policies that target plagiarism, emphasize transparency and equity, and recognize both the potential and the limitations of AI in education. These policies should be developed with student participation, incorporating our experiences and perspectives as the primary users of these tools in an academic setting.
The reality is that AI isn’t just another passing technological trend; it’s reshaping the professional landscape we’re preparing to enter. Law firms are already using AI for research and document review. Consulting firms are leveraging AI for data analysis. Meanwhile, a wave of startups is building entire business models around AI capabilities. By failing to establish clear policies for AI usage in education, we’re not just creating confusion now; we’re potentially hampering our future workplace readiness. You’ve probably heard the term “preprofessional” more times than you can count at Penn, but it still holds weight. Every college student, particularly at a school that emphasizes our professional futures, should learn how to use AI in an ethical, effective way for the workplace. That learning can only begin in the classroom.
Penn has always prided itself on preparing students for the future. Now, it’s time to acknowledge that this future includes AI. We need standardized policies that reflect this reality, because while we debate and delay, the AI revolution isn’t waiting for permission to transform education.
ELO ESALOMI is an Engineering first year from London. Her email is eloe@seas.upenn.edu.