A year ago I argued that as AI algorithms structure more and more decisions, “last mile” human judgment would become a luxury good available to those who could pay for concierge discernment. That “last mile,” the 5-15% of decision space algorithms can’t capture, is where fine-tuned human judgment is needed.
In higher education, the “last mile” has always been personalized attention, and in the AI era its outcome is systematic acceleration. Students trained to run a two-minute mile will soon arrive in a system built for the four-minute mile. The entire U.S. undergraduate structure, from the Carnegie Unit to the four-year degree, is calibrated for a steady, predictable pace, so a student who can master content at twice the expected speed creates a structural challenge I call the “two-minute mile” or “fast mile” problem.
As Alpha School is demonstrating, personalized, specialized attention yields students learning at 2.6x to 5x rates. Read the New York Times piece and the epic Astral Codex Ten review. (Also read Benjamin Bloom’s 1984 “2 Sigma Problem” paper on what personalized tutoring and immediate feedback can do.)
Acceleration matters beyond classroom practice. Once learning velocity is systematically measured and institutionalized, the entire temporal organization of education collapses. Age-graded cohorts, semester schedules, and degree timelines are all based on the idea that learning happens at a predictable, standardized pace. Of course this was never true. But AI both enables and amplifies that variation in pace.
No state in the U.S. currently measures learning velocity, and I’ve found no studies focused on measuring accelerated learning. Under the Every Student Succeeds Act (ESSA), states measure whether students meet proficiency benchmarks and show improvement compared to prior years. These tests track learning from fixed points, assuming a roughly linear learning curve over the school year. Accountability systems also monitor chronic absenteeism, which functions as a proxy for interrupted learning, not for what it may increasingly represent: time spent learning outside the classroom with AI.
Measuring acceleration would require continuous tracking of the rate of change: how quickly a student’s learning slope is steepening or flattening over shorter intervals. That kind of measurement is harder to operationalize, though some adaptive platforms, such as Khanmigo and NWEA, are beginning to approximate it. No ESSA-compliant state test currently quantifies changes in learning velocity.
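To make the idea concrete, here is a minimal sketch of what tracking a “steepening slope” could look like. The checkpoint scores, windows, and units are entirely hypothetical; this is an illustration of the measurement concept, not any platform’s actual method.

```python
# Minimal sketch: estimating learning velocity (slope) and its change
# (acceleration) from periodic mastery scores. All data are hypothetical.

def slope(points):
    """Least-squares slope of (week, score) pairs: points gained per week."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in points)
    den = sum((x - mean_x) ** 2 for x, _ in points)
    return num / den

# Hypothetical mastery scores (0-100) at weekly checkpoints for one student.
checkpoints = [(1, 40), (2, 46), (3, 53), (4, 62), (5, 74), (6, 88)]

early = slope(checkpoints[:3])    # velocity over weeks 1-3
late = slope(checkpoints[-3:])    # velocity over weeks 4-6
acceleration = late - early       # change in velocity: is the slope steepening?

print(f"early velocity: {early:.1f} points/week")
print(f"late velocity:  {late:.1f} points/week")
print(f"acceleration:   {acceleration:+.1f} points/week per window")
```

A fixed-point accountability test sees only the first and last scores; the velocity comparison between windows is what no current ESSA-compliant assessment computes.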
Over 92% of high school students now use generative AI (GenAI) for coursework, authorized or not. Most of this is unstructured: essay drafts, problem sets, exam prep, on-demand tutoring without pedagogical design. But even unstructured GenAI use changes student expectations about learning pace, feedback loops, and the role of human instruction. Students are learning that knowledge is abundant and instantly accessible and that feedback can be immediate. While only a few years ago nearly 100% of course content was provided by the teacher (under state and school district oversight), soon it could be 50% or less. These students will arrive at college seeing GenAI as a resource and a route to knowledge outside the classroom, expecting immediate feedback and unlimited patience from their learning tools.
If a growing cohort of “fast mile” students arrives at college demanding a high-touch, personalized, accelerated two-year degree, the economic consequences for traditional higher education are enormous, up to and including the end of the Carnegie Unit as “credit hour.” Measuring learning as time spent at 1x will not do.
Universities are going to need two kinds of faculty: “last mile” experts who mentor individual students on personalized coursework beyond what AI can offer, from frontier knowledge and methodology to new research questions, and “fast mile” support faculty and staff who work with young students (ages 11-17) who have accelerated to college readiness, as well as with students whose knowledge is asymmetrical, e.g. graduate-level physics but freshman-level humanities.
The trajectory hasn’t fully materialized…yet. But paradigm shifts arrive before the comprehensive data. They appear first as anomalies that existing frameworks can’t explain. Alpha School is one such anomaly. The 92% of students using GenAI for coursework are another. The fact that no state measures learning velocity is itself diagnostic: states have built an entire accountability apparatus around a single assumption (steady-state learning pace) that may no longer hold. If even 10-15% of incoming students arrive having experienced systematic acceleration in K-12 within the next decade, universities will face structural problems not solvable with small adjustments. The interdependencies between accreditation, financial aid, faculty roles, and the Carnegie Unit mean these questions cannot be addressed in isolation. Coordinated institutional rethinking should begin now.
Below are the questions university leaders need to be asking. Together they make visible that their institutions are currently optimized for a model (content delivery at a standard pace) that may become obsolete within a decade, and underinvested in what may become the real value proposition of higher education: frontier mentorship and acceleration coaching.
25 questions for university presidents to ask their teams:
On Faculty Roles and Expertise
Do we have faculty capable of “two-minute mile” mentoring (coaching students through established content at 2-5x speed) or do we only have faculty ready for “last mile” work (frontier research mentoring)? Most research faculty entered academia to work at the frontier, not to be efficiency coaches. Can they do both? Should they?
What percentage of our faculty time currently goes to delivering content that could be handled through competency-based modules versus providing mentorship that requires deep field expertise? If 70% of faculty time delivers commodity content, what happens when students arrive having already mastered it?
How do we hire, promote, and tenure faculty when the valuable work shifts from content delivery to research mentorship? Teaching evaluations measure lecture quality. What metrics capture mentoring effectiveness at the frontier?
If students accelerate through introductory and intermediate courses in weeks rather than semesters, what do faculty in those fields do with the freed-up time? Do we reduce faculty lines, reassign them to mentorship, or expect them to take on more students?
What happens to faculty in fields where most content can be effectively delivered through AI-supplemented personalized learning? If students master content in, say, mathematics, languages, or some sciences before arrival or in compressed time, what is the faculty role?
On the Carnegie Unit and Credit Hours
If we allow competency-based advancement and a student demonstrates calculus mastery in 6 weeks, does that student earn the same 3 credits as one who takes 16 weeks? How do we charge tuition if seat time no longer correlates with learning?
Can we legally grant degrees to students who accumulate competencies rather than credit hours, or does our accreditation require the Carnegie Unit? What would it take to get a waiver?
If we break the Carnegie Unit for some courses (competency-based) but maintain it for others (seminar-based frontier work), how do we manage a hybrid system administratively? Do we need two separate tracks?
What’s the cost per student if we shift from 30:1 lecture-based courses to 5:1 mentorship-intensive research apprenticeships? Can we afford universal frontier mentorship, or must it remain rationed?
If students can accelerate through established content, completing a traditional 4-year degree in 2-3 years, do we charge them less tuition, or admit that acceleration exposes what they were actually paying for: mentorship most of them never accessed?
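The ratio question above can be made concrete with back-of-envelope arithmetic. The salary and teaching-load figures here are illustrative assumptions, not institutional data; the point is the multiple, not the dollar amounts.

```python
# Back-of-envelope cost per student at different faculty-to-student ratios.
# All figures are illustrative assumptions, not real institutional data.

FACULTY_COST = 150_000   # assumed fully loaded annual cost per faculty member
SECTIONS_PER_YEAR = 4    # assumed teaching/mentoring load

def cost_per_student(students_per_section):
    """Faculty cost allocated per student per year at a given ratio."""
    students_served = students_per_section * SECTIONS_PER_YEAR
    return FACULTY_COST / students_served

lecture = cost_per_student(30)        # 30:1 lecture model
apprenticeship = cost_per_student(5)  # 5:1 mentorship-intensive model

print(f"30:1 lecture:        ${lecture:,.0f} per student per year")
print(f" 5:1 apprenticeship: ${apprenticeship:,.0f} per student per year")
print(f"cost multiple:       {apprenticeship / lecture:.0f}x")
```

Under these assumptions the mentorship model costs six times as much per student in faculty time alone, which is why universal frontier mentorship forces either much higher tuition, much smaller enrollment, or a radically different faculty model.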
On Accreditation and External Constraints
Does our regional accreditor allow competency-based assessment without seat time requirements? Some accreditors have granted this for specific programs (Western Governors University), but can we scale it across all disciplines?
What happens to financial aid if students complete degrees in 2 years instead of 4? Federal financial aid is structured around credit hours and enrollment periods. Does faster completion disqualify students from aid?
How do we demonstrate “faculty contact hours” to accreditors if students advance through content via AI-supplemented modules and faculty focus on research mentorship? The Department of Education has specific requirements about instructional time.
If we create fast-track competency-based pathways, will medical schools, law schools, and graduate programs accept our graduates? Or will they require traditional transcripts with semester-long courses?
State boards often mandate specific course requirements for teacher certification, nursing licenses, etc. Can we meet those requirements with competency-based acceleration, or are we locked into standard-pace courses for professional programs?
(As these questions suggest, accreditation, financial aid, and the Carnegie Unit form an interlocking system designed to ensure standardization and prevent fraud. They lock in a specific model of learning that is time-based, cohort-driven, and paced at a standard rate. You cannot change one part without changing the entire structure.)
On Student Expectations and Experience
When students arrive having experienced 2-5x acceleration in K-12, will they expect to continue at that pace, or will they accept standard-pace college courses? If they demand acceleration, can we deliver it?
How do we teach students to expect and demand frontier mentorship rather than accepting content delivery? Most undergraduates don’t know research opportunities exist or how to access them. This correlates with class background.
If we explicitly split “two-minute mile” content acceleration from “last mile” frontier mentorship, do students understand they’re paying premium prices primarily for the mentorship? Can we justify the cost?
What percentage of our current students are capable of or interested in frontier work? If 30% want to race through content and get jobs, 40% want traditional college experience, and 30% want research apprenticeship, can we serve all three?
How do we structure social and developmental experiences if students are academically ready to graduate in 2 years but developmentally 15-17 years old? If academically accelerated students arrive as young as 11, do we create separate residential programs or assume parental involvement? The residential college experience assumes students are legally adults, ready for independence, socially positioned for peer relationships with other young adults. Are we ready for an influx of young teens, with the liability, supervision, and social integration problems that will involve?
On Our Economic and Competitive Position
If Alpha School or similar programs offer students competency-based acceleration through established content for $10K-40K, what are we offering for the same price? Can we articulate the value proposition beyond “the credential”?
What percentage of our undergraduates currently access intensive faculty mentorship, research opportunities, thesis advising? If it’s 20%, are the other 80% subsidizing a service they never receive?
If we reorganize around universal frontier mentorship (primary sources from freshman year, mandatory thesis, guaranteed faculty advisor), can we afford current enrollment levels? Or must we shrink to afford better ratios?
Should we explicitly become a luxury good (small, expensive, mentorship-intensive) or democratize frontier access through technology-enabled content delivery plus guaranteed human mentorship? The middle position (current model) becomes indefensible.
In 10 years, when students can get personalized content mastery for free or cheap and arrive at college having completed what we currently teach in years 1-2, what exactly are we selling? Research apprenticeship? Network access? Credentials? Time to mature? What’s the honest answer?
This last question is the only one that really matters. The others are downstream. If universities cannot articulate what they’re selling beyond “the credential” and “time to mature,” then they are vulnerable to any competitor who can provide credentials faster and cheaper. The luxury good model (small, expensive, mentorship-intensive) is economically viable but serves a tiny fraction of current enrollment. The democratization model (technology-enabled content plus guaranteed mentorship) has never been successfully operationalized at scale. The middle ground, where most universities currently operate, assumes students will continue paying premium prices for services they don’t access.
All of these questions assume a certain honesty. So how does a university assess whether students claiming accelerated mastery actually possess competent knowledge versus credentials from variable-quality programs? As competency-based K-12 programs proliferate, quality will vary dramatically. Some will deliver genuine 2-5x learning; others will be credentialing theater optimizing for advancement speed over retention and transfer. Universities will need independent assessment mechanisms (placement testing, diagnostic assessments, portfolio reviews) that don’t simply trust transcripts. But this is a second-order problem. The first-order problem is whether universities are willing to acknowledge that the four-year, time-based degree model is coming to an end.