Greg Easley is a technologist, writer and adjunct professor at the New School, where he teaches media studies. His work focuses on responsible AI, education and media innovation.
Last winter in New England, after a stretch of frigid days that killed my outdoor exercise ambition, I set up a Wahoo Kickr in my garage. The device replaces a road bike’s rear wheel, turning it into a “smart” indoor trainer. When paired with specialized workout apps, it automatically adjusts pedaling resistance to simulate real-world terrain: foothills, flats and even mountains.
My first session began with a ramp test. Here’s how it works: You start at a low resistance level, which ratchets up every 60 seconds until you can no longer keep pace. I persevered for just over 22 minutes. By the end I was dizzy and dripping with sweat, my heart rate at 166 beats per minute — redline for someone my age.
That failure point is used to estimate “functional threshold power” (FTP): the highest average wattage your legs can sustain for an hour. Some cycling apps use FTP to personalize your workouts, building a plan optimized to make you faster. If you miss a week, the system adapts by reducing the workload. If you progress quickly, harder sessions follow. Your FTP becomes a dynamic baseline as opposed to a static score.
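The ramp-test arithmetic is simple enough to sketch. Below is a minimal illustration in Python; the 75% factor mirrors a convention some training platforms use to estimate FTP from a ramp test, and the step sizes are illustrative assumptions, not the Wahoo app's actual protocol:

```python
def estimate_ftp(step_watts, ftp_fraction=0.75):
    """Estimate FTP from a ramp test.

    step_watts: the target wattage of each one-minute step the rider
    reached, in order. The best one-minute power is approximated by the
    highest step attained before failure. The 0.75 factor is a common
    heuristic, not a universal standard.
    """
    if not step_watts:
        return 0.0
    best_one_minute = max(step_watts)
    return ftp_fraction * best_one_minute

# A rider starting at 100 W, stepping up 20 W per minute, failing after 22 min:
steps = [100 + 20 * i for i in range(22)]
print(round(estimate_ftp(steps)))  # 75% of the 520 W top step -> 390
```

Real platforms refine this baseline continuously from subsequent workouts rather than relying on a single test.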
The results, for me, were impressive; I rolled into spring outdoor riding at near-peak fitness levels. But beyond the physical gains, I was struck by the system’s design. It had adapted in real time, identifying my initial capacity and responding with precision and flexibility to help me improve. AI-powered training platforms now analyze millions of workouts from athletes around the world, then use that data to deliver increasingly efficient, personalized training plans. Because these platforms learn continuously, each new user strengthens them, making the feedback loops more powerful over time.
Later, while drafting a syllabus for my students at the New School in Manhattan, where I teach seminars in media studies, I wondered: What if education worked like that? What if, instead of following a prescribed curriculum, teaching started with a learner’s threshold and built a dynamic, personalized path forward?
A Crisis In Education
The United States urgently needs new, innovative approaches to education. Tests conducted in 2024 as part of the National Assessment of Educational Progress — known as “The Nation’s Report Card” — confirm what U.S. Secretary of Education Linda McMahon called “a devastating trend”: American students are testing at “historic lows across all of K-12.” The scores, released this month, show that nearly half of high school seniors are now below basic levels in math, and about one-third are below basic in reading. The average reading score has dropped to its lowest level on record.
The Covid-19 lockdowns shattered the illusion that our education system is flexible. When classrooms abruptly closed, rigid learning models were reproduced on screen, leaving many students struggling and disengaged — a problem that persists today. Against this grim backdrop, another force promises to reshape education for better or worse: artificial intelligence.
Already, AI is being woven into learning and teaching in complex and rapidly evolving ways. Students use tools such as ChatGPT to draft essays, solve equations and generate study guides — sometimes to deepen understanding, but often to reduce or even eliminate the effort required to learn. Eighty-five percent of students acknowledged using generative AI to help with coursework in the past year, according to an August 2025 survey by Inside Higher Ed. In my own classroom, even the suspicion of AI misuse is enough to strain trust and complicate grading, as the boundaries of permissibility remain undefined and mutable.
Teachers are also beginning to expedite or automate time-consuming tasks like drafting lesson plans and generating practice exercises, freeing them to focus on valuable mentoring and one-on-one support. But AI tools can also become a siren call, tempting educators to use algorithmic shortcuts for the demanding, human work of noticing, guiding and inspiring students.
Social media offers a cautionary tale here: Platforms are populated with AI-generated influencers addressing AI-generated followers, a self-reinforcing feedback loop where authenticity vanishes. The danger is that education could follow a similar path, where efficiency replaces presence and the human dimension of teaching is eventually flattened.
The integration of AI into education is no longer hypothetical — it is well underway. In April, President Trump signed an executive order to bring AI into American classrooms, and major tech companies including Google, Amazon, Microsoft and OpenAI have pledged to support this mission.
“What if, instead of following a prescribed curriculum, teaching started with a learner’s threshold and built a dynamic, personalized path forward?”
The question is not whether learning will be affected by AI, but how and to what ends. Left unguided, or steered solely by tech companies pursuing their own interests, AI educational tools could magnify inequities and perpetuate the very problems they promise to solve. With thoughtful design, however, they could move us beyond rigid curricula toward adaptive systems that respond to individual learners. The decision before us — to let AI evolve haphazardly or to shape it deliberately with educators, students and institutions at the center — will determine whether it deepens our crisis or becomes the foundation for a more flexible approach to education.
Adaptive Threshold Learning
Modern bike training apps like the one I used offer a useful model for reimagining education. Their core principle — adapting to a learner’s threshold and building upward — could form the basis of what I’ll call “adaptive threshold learning” (ATL): an AI-driven system that identifies each student’s current limits and designs experiences to expand them.
ATL would begin by identifying what a learner can accomplish right now. A diagnostic test, delivered via PC, mobile app or VR headset (if the technology ever reaches its potential), would start simply and gradually increase in difficulty until the system locates the learner’s threshold: the point where fluency falters, recall slows or errors emerge. Input could take the form of sounds, voice, text, gestures or a combination of these, captured by the device’s onboard microphone, touchscreen, camera or motion sensor.
From that baseline, ATL would generate a personalized teaching program designed to elevate the learner’s threshold in the least amount of time. The system would adapt continuously based on performance, tracking how and when the learner responds, self-corrects and fails. Over time, patterns would emerge.
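The diagnostic phase described above can be pictured as a simple loop that raises difficulty until performance falls below a target pass rate. Everything here (the function names, the pass rate, the number of trials per level) is a hypothetical sketch, not a specification:

```python
import random

def find_threshold(answer_fn, start_level=1, max_level=20,
                   pass_rate=0.7, trials=10):
    """Locate a learner's threshold: the highest difficulty level at
    which their success rate stays at or above pass_rate.

    answer_fn(level) -> bool is a stand-in for presenting one item at
    the given difficulty and scoring the learner's response.
    """
    threshold = start_level
    for level in range(start_level, max_level + 1):
        successes = sum(answer_fn(level) for _ in range(trials))
        if successes / trials < pass_rate:
            break              # fluency falters here; stop probing
        threshold = level      # last level the learner held
    return threshold

# Simulated learner who reliably handles material up to level 7:
learner = lambda level: random.random() < (0.95 if level <= 7 else 0.3)
print(find_threshold(learner))
```

A production system would use a smarter search (adaptive item-response models rather than a linear ramp), but the principle is the same: probe upward until the learner's performance breaks, then personalize from that point.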
Imagine using an ATL system to learn a language. You would begin a conversation test in your target language, and the system would listen not only for correct vocabulary, but also for pacing, pronunciation and contextual nuance. If you consistently misapplied verb tenses but spoke clearly, the system would shift its focus to grammar. If you hesitated before answering, it would slow the dialogue and restate prompts in simpler forms. If you handled basic conversation with ease, it would quickly advance to abstract topics or multi-part questions to challenge comprehension and fluency.
Instead of following a fixed curriculum, the app would dynamically construct your learning path. As your fluency developed, your profile would become more precise. Progress would be measured not by chapters or lessons completed, but by demonstrable skill gains and behavioral signals — how quickly you respond, how confidently you speak and how flexibly you adapt to increasingly complex tasks.
While platforms like Duolingo, Khan Academy and IXL incorporate some adaptive elements, they primarily adjust pacing within a predetermined curriculum. For instance, Duolingo’s Birdbrain algorithm personalizes lesson difficulty based on user performance, yet learners still progress through a fixed sequence of language units.
In contrast, ATL would reimagine both the structure and logic of learning. Rather than merely modifying the pace of a set sequence, it would continuously assess a student’s readiness across multiple dimensions, including response time, confidence and contextual understanding, to determine the next optimal learning experience. This would enable a non-linear learning map that evolves in real time, tailored to the student’s unique progress and needs.
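One way to picture that multi-dimensional readiness assessment is as a weighted score over observed signals, feeding a decision about the next learning experience. The signal names, weights and thresholds below are purely illustrative assumptions; a real system would learn them from data:

```python
from dataclasses import dataclass

@dataclass
class Signals:
    accuracy: float       # fraction of items answered correctly, 0-1
    response_time: float  # average seconds per item
    confidence: float     # e.g. hesitation or self-report proxy, 0-1

def readiness(sig, target_time=5.0):
    """Combine hypothetical signals into a 0-1 readiness score.

    Faster-than-target responses cap the speed component at 1.0; the
    weights (0.5 / 0.3 / 0.2) are illustrative, not empirically derived.
    """
    speed = min(1.0, target_time / max(sig.response_time, 0.1))
    return 0.5 * sig.accuracy + 0.3 * speed + 0.2 * sig.confidence

def next_step(score, advance_at=0.8, remediate_at=0.5):
    if score >= advance_at:
        return "advance"       # introduce harder or adjacent material
    if score < remediate_at:
        return "remediate"     # revisit prerequisites
    return "reinforce"         # more practice at the current level

s = Signals(accuracy=0.9, response_time=4.0, confidence=0.8)
print(next_step(readiness(s)))  # prints "advance"
```

The point of the sketch is the branching itself: because the next step depends on a live, multi-signal score rather than a position in a fixed sequence, the resulting learning map is non-linear by construction.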
All learners, regardless of background or age, could have access to always-on, multidisciplinary tutors that understand how they learn and adapt accordingly. The system wouldn’t just automate instruction like so-called “AI tutors,” which often turn out to be glorified quiz engines; it would respond to behavior, measure growth and personalize feedback in ways no static curriculum can.
Over time the system would begin to understand how learning works and could perpetually self-optimize. With thoughtful design, sufficient data and adequate computing power, it could evolve into a national infrastructure for growth: a distributed, AI-powered supercomputer network that adapts to each learner’s strengths, struggles and pace, supporting education across regions, disciplines and life stages.
ATL In The Classroom
Implementing ATL in American schools would require daunting and even radical changes. But bold intervention is necessary to alter our downward trajectory. If schools persist with incremental fixes and half-measures, they risk losing even more ground to the forces already reshaping and eroding how students learn. Given the stakes, we need reforms that fully harness AI’s ability to individualize instruction at scale.
“Left unguided, or steered solely by tech companies, AI educational tools could magnify inequities and perpetuate the very problems they promise to solve.”
Some companies are already experimenting with AI in the private school space. Alpha School, a U.S. “microschool” network, is among the most fully realized models of individualized, AI-powered learning centers in operation. Students complete core academics in the morning through two hours of AI-driven, app-based learning, then spend the rest of the day in workshops and project-based activities that develop real-world skills.
If adopted in larger, more traditional public schools or school systems, ATL would not eliminate classrooms, but it would change what happens inside them. A math student, rather than being slotted into a fixed algebra curriculum, would receive assignments that adjust dynamically depending on how quickly she reasons. A history student could move beyond the textbook into primary sources, ethical counterpoints or conflicting narratives, deepening inquiry at his own pace. A music student might work through scales, ear training and theory until fluency is achieved, as measured by tempo, pitch accuracy and responsiveness.
This approach would not fit every field of study. It would lend itself most naturally to domains where progress can be measured with some objectivity — mathematics, the sciences, engineering, languages and music — rather than to interpretive and creative fields where ambiguity and perspective are central.
Yet precisely because it could accelerate mastery in the measurable disciplines, it could prove liberating. If students gained skills in algebra or chemistry more quickly, they could have more time and freedom for the elements of education that resist optimization — literature, art, philosophy and the more reflective realms of the social sciences.
With ATL, the teacher would still be essential — not as a lecturer at the blackboard, but as a coach who interprets the system’s signals, helps students understand where they stumbled and why, and convenes group discussions where collaboration and debate are vital. For example, a teacher might pull three students struggling with a calculus unit into a small workshop while others advance independently. Teaching, in this model, would become less about delivering information and more about orchestrating personal growth — not just helping students learn, but helping them understand how to learn.
No algorithm, no matter how adaptive, can replace the role of a human who inspires, contextualizes and comforts. Teachers would be the interpreters of the system’s insights, the architects of meaningful challenges and the people who help students translate progress into purpose. They would also be crucial in shaping the values of these systems, ensuring they reflect emerging domains, cultural nuance and ethical complexity.
Embracing ATL would also demand a fundamental shift in how we think about time, mastery and progression. Our current framework treats time as fixed and outcomes as variable: Everyone spends a semester studying biology, yet only some emerge with mastery. ATL would invert that logic. Mastery would become the constant; time would become the variable. One student might grasp a concept in two days, another in a week — but both would succeed because the system would adapt to them, not the other way around.
This shift would raise challenging questions. Would students still be grouped by age, or move toward “competency bands” — cohorts organized by demonstrated skill rather than birthdays? At a minimum, ATL would retire the bell curve, which assumes all students receive the same instruction over the same time period and should be judged against static benchmarks. In an adaptive system, inputs and goals would be personalized. Instead of a single distribution of outcomes, we would get a diversity of trajectories.
Grading would need to change as well. Letter grades and class rankings reduce learning to relative scores that often reflect privilege more than ability. A simpler mastery report — “pass” or “in progress” (akin to today’s “incomplete”) — paired with rich feedback would be both more sensible and more equitable. In an open-timeline model, progress would be measured against the learner’s own arc: sharper recall, steadier reasoning, greater fluency. Growth would no longer mean outpacing others; it would mean surpassing yesterday’s self.
Such a system would also redefine what it means to excel. Some students could achieve mastery of a subject in weeks — or even days — rather than being confined to the fixed pacing of a semester-long course. Freed from those constraints, they could climb higher and faster, reaching peak mastery in a chosen field or branching horizontally across a wide range of disciplines.
“No algorithm, no matter how adaptive, can replace the role of a human who inspires, contextualizes and comforts.”
Students who perform more typically, meanwhile, could still attain mastery in the subjects essential to their ambitions, helping them graduate equipped for the careers or callings they seek. By tracking progress across domains — from pattern recognition to verbal fluency — ATL could reveal hidden strengths and help align students with fields where they would naturally thrive. In this way, education would become not just more efficient but more personal: a vehicle for self-discovery.
The Risks Of Optimization
For all its potential benefits, ATL would also introduce risks that we can’t afford to ignore if we’re serious about building something better.
First, consider the danger of over-optimization: tailoring instruction so precisely to a learner’s current abilities that it narrows rather than expands intellectual range. Just as social media’s algorithmic filtering can limit our exposure to new ideas, a well-intentioned ATL system might steer students away from uncertainty, productive struggle or edge cases. It could prioritize speed over depth and comfort over challenge, flattening curiosity into compliance. Taken too far, personalization becomes a polished form of intellectual risk aversion. But growth often begins where comfort ends.
Second, there are costs of data dependence and the surveillance that enables it. Systems that track micro-latency, vocal inflection, facial expression and cognitive thresholds generate an extraordinarily detailed portrait of each learner. That portrait may be useful in an educational context, but it would also be intimate — and potentially threatening. Who would own it? How would it be harvested, stored, protected or monetized? And what safeguards would prevent it from being used to sort, label or limit students’ future paths?
Ethical design is non-negotiable here. Educational systems should be transparent, inclusive and accountable, especially to those they assess. Otherwise, ATL would risk becoming not a platform for growth, but a mechanism of control: sorting students by unseen algorithms and reducing potential to probability.
Third, ATL could inadvertently magnify existing inequities. Systems that rely on rich data profiles will perform better for students who have access to fast internet, newer devices and adult support. These students could potentially train the system more effectively, receive faster personalization and improve more rapidly. That advantage would compound. Without intentional design for equity, personalization risks becoming a premium service: deep for the already advantaged, shallow for everyone else.
Finally, there is a cultural risk — that in our eagerness to optimize, we forget why education matters. Learning is not just a ladder of skills. It’s also play, exploration, serendipity and becoming. ATL, if adopted, must not flatten learning into a series of checkpoints. The system may adapt, but it must still surprise.
These risks would demand vigilance from those building and deploying ATL. But the risks of inaction may be greater: As the dismal trends in American test scores make clear, our current approach is no longer serving students’ needs. ATL would be a daring new direction rooted in a philosophy dating back more than a century.
Lessons From The Past
In my time as an adjunct professor at the New School, I have often reflected on the institution’s founding mission. In 1919, a group of progressive intellectuals — among them historian Charles A. Beard, “New History” pioneer James Harvey Robinson and economist Thorstein Veblen — resigned from Columbia University to establish an independent institution, originally called the New School for Social Research. Their revolt against rigid academic orthodoxy drew on the ideas of pragmatist philosopher John Dewey, whose vision emphasized growth over conformity and the learner’s active role in constructing meaning.
Dewey envisioned schools as dynamic laboratories of growth, not factories for mass production. He rejected standardized memorization and championed learning environments that adapted to individual needs and contexts. “The school must represent present life,” he wrote, “life as real and vital to the child as that which he carries on in the home, in the neighborhood, or on the playground.”
More than a century later, AI-enabled teaching platforms could finally help realize Dewey’s vision. These systems don’t have to insert groups of students into pre-set tracks; instead, they can start from what individual learners can do now, and build from there.
“Learning is recursive, experimental and sometimes uncomfortable. Adaptive systems may help scaffold that process, but only humans can help make it meaningful.”
Long before returning to academia as a teacher, I studied under philosopher Richard Rorty — Dewey’s intellectual heir and pragmatism’s late-20th-century evangelist — in his interdisciplinary graduate program at the University of Virginia. Rorty reimagined American pragmatism for the postmodern era. Education, to him, wasn’t about uncovering timeless forms or eternal certainties, but about expanding our linguistic and imaginative capacities: enlarging what we can say, understand and become.
Today, in my work with students and AI technology startups, I see how ATL could bridge those two worlds, turning the ideas that Rorty and Dewey championed into functional systems. For thinkers like them, the promise of education was not the passive absorption of information, but the expansion of one’s capacity to interpret the world — to speak and act with greater clarity and imagination.
Learning, in that view, certainly isn’t linear. It’s recursive, experimental and sometimes uncomfortable. Adaptive systems may help scaffold that process, but only humans can help make it meaningful.
A New Way Forward
My bike training app never judged me (though some sessions felt like penance handed down by a malevolent cycling god). It didn’t care how fast I was compared to anyone else. It simply found my current limits and built a dynamic plan to move me forward. Instead of rankings it provided a baseline and a way up.
Education can be built on that same architecture.
More than a century ago, Dewey warned that “an ounce of experience is better than a ton of theory simply because it is only in experience that any theory has vital and verifiable significance.” Learning, to him, was not preparation for life — it was life itself. It had to be active and shaped by the learner’s interactions with the world.
Rorty, who carried Dewey’s torch into our era, challenged the notion of truth as something fixed, waiting to be discovered. He saw truth as a tool – something we invent and revise to better navigate the world and reimagine whom we might become.
“The goal of education,” he wrote, “is to help students see that they can reshape themselves – reshape their own minds – by acquiring new vocabularies, by learning to speak differently.” For Rorty, education wasn’t about certainties. It was about possibility and freedom, about expanding the space of what we can say, understand and do.
That’s what the cycling ramp test gave me: not a score, but a new way forward. And that’s what an adaptive AI learning program could give every student: a system that listens and responds by building on what they can already do.
Curriculum, from the Latin currere, means “a course to be run.” ATL would replace the rigid track with a dynamic map — one that offers every learner a personalized path to their destination.
