The faculty are riding horses

Between 1900 and 1929, automotive deaths skyrocketed from 36 per year to 29,592. American cities tore themselves apart trying to accommodate this new technology. Newspapers declared cars more destructive than machine guns. Cities banned them, then unbanned them. States created speed limits, then didn’t enforce them. The automotive industry launched PR campaigns blaming pedestrians. Schools taught children new rules for streets that had always been public spaces. Insurance companies invented new financial instruments. Engineers redesigned roads. Police departments created traffic divisions. Everything had to change.

What else could happen with two incompatible transportation systems operating simultaneously? Children could play in streets when horses traveled 5-8 mph. Streets had no traffic signals, no painted lanes, no stop signs. Most roads were unpaved. The physical infrastructure assumed horses. The legal infrastructure assumed horses. The social infrastructure assumed horses. None of it worked for cars traveling over 25 mph, whose brakes barely functioned above 20 mph.

By the 1910s, cars were overtaking horses in major cities, traveling at speeds far beyond what the old systems could safely manage. Automotive fatalities rose sharply as brakes, lighting, and signage lagged behind. By the mid-1920s, automobile accidents had become a leading cause of urban death. Public hostility grew, and car sales even dipped for a time.

And yet by 1929, automobiles were a fixture of American life. Almost nobody was riding horses to work anymore.

This history has been on my mind as I’ve been watching universities trying to accommodate AI (and listening to cries of danger and destruction). Within 18 months of ChatGPT’s public release in November 2022, the vast majority of high school and college students were using generative AI. Faculty wanted bans. Administrators said fine, we will pay for detection tools. Honor codes expanded. Academic integrity councils formed.

What else could happen with two incompatible education systems operating simultaneously? In this case, are they in fact incompatible? Faculty had been designing courses assuming students would write papers over weeks, consult limited sources, visit office hours for feedback, work at the pace the syllabus specified. This made sense when research took time, when feedback was delayed, when knowledge was scarce and hard to find. But now students expect instant feedback, immediate answers, unlimited revision cycles.

Students are learning to drive on their own, without instruction. They know how to make AI generate text, but not how to evaluate whether that text is meaningful. They know how to get quick answers, but not how to formulate important questions. Faculty could be working with students to use AI for discovery, to explore frontier problems, to test hypotheses, to engage with primary sources at scale. Instead, students are using AI for homework avoidance. They arrive at college fluent in the tech, ready to be mentored in using it for serious work. Colleges respond by treating their fluency as cheating. Students who could become expert practitioners become expert evaders.

All because faculty are still riding horses. You can’t really blame them; they would be building new vehicles on unpaved roads without institutional support. But you can blame administrators. The entire undergraduate structure in the U.S. is calibrated for a steady, predictable trot. And this particular inefficiency seems fine with university leaders.

Faculty are losing access to the best laboratory for understanding how AI works: their own students. Every day students are running experiments on what AI can and cannot do, learning which prompts work and which fail, encountering edge cases and limitations. This is invaluable empirical knowledge for redesigning courses. So why is everything so adversarial? Why are students hiding their AI use?

In the era of cars and horses crashing into each other, responsibility was diffuse. There was no federal Department of Transportation until 1966. There were no national traffic safety standards. Cities and states acted on their own. The chaos was real chaos. Nobody was in charge of managing the transition.

University leaders have no such excuse. They have authority over faculty development, institutional training programs, resource allocation. The Chronicle of Higher Education and Inside Higher Ed have published survey after survey on student use and faculty use. Nearly 90% of faculty received no AI guidance as part of professional development. Of those who found training, most found it outside their institution. Meanwhile, 92% of students use AI tools, but only a fraction received training from their institution. University leaders spent an estimated $50 million on detection tools since 2023 while leaving faculty untrained. Many faculty report they have spent 10 hours or less on an AI platform. This is a higher ed leadership failure.

Spending vast sums on AI detection tools should now be seen as money poorly spent. The time and labor universities devoted to beefing up honor codes and academic integrity councils should now be seen as time poorly spent. And the energy faculty continue to pour into advocating AI bans should be seen as energy particularly poorly spent, since that spending is still ongoing.

Faculty should ask students to borrow the car keys.

When I awoke this morning to finish these thoughts, I saw the news of the 2025 Nobel Prize in Economic Sciences, which went to Joel Mokyr, Philippe Aghion, and Peter Howitt. They won for their work on how societies sustain innovation over time, how technological progress depends on institutions capable of adjusting to new forms of knowledge. Aghion and Howitt’s model of endogenous growth demonstrates that innovation stalls when incumbents, fearing disruption, build barriers around existing systems. Mokyr’s historical research traces how societies that failed to connect practical skill with theoretical understanding entered long periods of stagnation. Economists I admire (hi Tyler and Alex) seem pleased with this year’s award.

Higher education is now facing multiple challenges, including technological change. I look around and I see that universities have the benefit of both the propositional expertise of faculty and the procedural fluency of students using new tools yet have not built institutional mechanisms to bring them together productively. The result is a familiar and tragic pattern: the defense of stability while the conditions for future growth dissipate.

Technological progress is fragile, easily arrested. University leadership could start creating conditions for progress tomorrow. We could train faculty, connect them with students who already know how to drive, build the feedback loops that Mokyr showed sustain innovation. Or everyone can keep riding horses, defending a system calibrated for a world that no longer exists, while their students drive into the sunset.

Read Entire Article