What Top Tech Skills Should You Learn for 2025?
Educating Ourselves In The Age Of AI
"The object of education is to prepare the young to educate themselves throughout their lives."—Robert Maynard Hutchins
Robert Maynard Hutchins wasn't just making a statement about lifelong learning; he was offering an educational philosophy rooted in critical thinking and human flourishing. As president of the University of Chicago and a leading educational reformer of the 20th century, Hutchins championed a liberal arts education anchored in the "Great Books" tradition. He believed education should cultivate the intellect, foster critical thinking and responsible citizenship, and encourage a deep engagement with the ideas that have shaped civilization. For Hutchins, learning wasn't about vocational training or overspecialization. It was about preparing students to think, question, and ultimately educate themselves for a lifetime.
That vision is more relevant, and more fragile, than ever in the age of artificial intelligence.
AI is rapidly transforming education, and the act of thinking itself. From personalized learning and agentic tutoring to AI-generated essays and real-time language translation, the tools now available to students would have seemed like science fiction a decade ago. But with this power comes a paradox. The same AI systems that can expand access to knowledge can also flood us with misinformation, deepfakes, and algorithmic bias. They can either sharpen or dull the very intellectual capacities Hutchins saw as the goal of education. How will reason change in the age of machines?
This moment demands clear-headedness. What should education be in an AI-saturated world? If we take Hutchins seriously, then AI's role in education must not be to replace human reasoning but to support its development. And that means returning to the core purpose of education: not just to inform students, but to teach them how to form their own minds. Not to teach students what to think, but how to think. Not to grade outcomes, but processes.
A Shift
Memorization and repetition made sense in a world where access to information was limited. That world is gone. In the age of AI, what matters most is no longer what you know, but how you discern. AI accelerates this shift in two directions. On one hand, AI tools can democratize access to knowledge, personalize learning, and reduce barriers for students. For example, large language models can serve as writing coaches, math tutors, or language partners. AI-powered platforms can identify learning gaps and adapt to a student's pace. This is real progress. One-to-many becomes one-to-one.
But there's another side. AI systems can also generate persuasive nonsense. They can hallucinate "facts," reinforce biases, and present falsehoods with confidence. AI has even proven more persuasive than humans in online debates, leveraging psychological and neuroscientific principles such as cognitive ease and personalized profiling to powerfully influence behavior.
And students, especially younger ones, often lack the tools to distinguish credible information from seductive simulations. The very strength of AI, its ability to imitate human output, makes it especially dangerous when left unquestioned. It could make the propaganda machinery of even the most oppressive foreign governments look like child's play.
This is why the nature of knowledge itself is changing. It's no longer about accumulating static facts, but navigating dynamic, often conflicting, streams of information. Everyone must become a scientist. The classroom, then, can't just be a place to learn what is true; it must become a training ground for learning how to think about what might be true.
Critical Thinking in the Age of Machines
Critical thinking, long considered an academic virtue, is now a societal necessity. In the past, it may have been seen as a metaskill for the curious or intellectually ambitious. Today it is essential: what intellectuals once treated as a scholarly ideal could now make or break societal cohesion.
Critical thinking is the disciplined practice of evaluating your own reasoning. It's not about being cynical or contrarian. It's about being rigorous: asking better questions, testing your assumptions, and seeking good evidence before jumping to conclusions. It's about learning to make reasonable decisions about what to believe and how to act—a definition that becomes vital when AI-generated content can mimic truth without actually embodying it. Critical thinking is an attitude, a disposition, a metacognitive stance. It is about how we process information, not just what we know.
In the age of AI, critical thinking must become the heart of the curriculum, and of society at large. It's the filter that allows us to navigate a world where information is abundant (far more than our brains can handle) but meaning is contested. And it's not a skill we're born with; evolutionarily speaking, quite the opposite. It must be taught, practiced, and internalized as a continual metacognitive habit. AI itself may offer tools to support this process. With the right design, AI tutors can prompt students to reflect, offer counterarguments, or simulate a Socratic dialogue. Some platforms are beginning to build feedback loops that help students not just solve a problem, but explain why they chose a particular path.
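To make that design concrete, here is a minimal sketch of what a Socratic tutoring loop might look like in code. It is an illustration under stated assumptions, not any real platform's implementation: the ask_model function, the prompt wording, and the message format are hypothetical stand-ins for whatever chat-completion API a product would actually use.

```python
# A minimal sketch of a Socratic tutoring loop. ask_model() is a
# hypothetical wrapper around any chat-completion API; the prompt
# wording is illustrative, not a tested or production design.

SOCRATIC_PROMPT = (
    "You are a tutor. Never state the final answer. Reply only with a "
    "guiding question, a counterargument, or a request for the student "
    "to explain the reasoning behind their last step."
)

def ask_model(system_prompt: str, transcript: list) -> str:
    """Hypothetical LLM call; wire this to a real chat-completion client."""
    raise NotImplementedError

def socratic_turn(transcript: list, student_message: str) -> str:
    """One turn of dialogue: record the student's reasoning, then probe it."""
    transcript.append({"role": "user", "content": student_message})
    reply = ask_model(SOCRATIC_PROMPT, transcript)
    transcript.append({"role": "assistant", "content": reply})
    return reply
```

The design choice that matters here is in the prompt, not the plumbing: the model is constrained to respond with questions and counterarguments rather than answers, so the student's own reasoning stays in the loop.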
But the danger is that AI can also short-circuit metacognition. When a student relies on a chatbot to write a paper or solve a homework problem without understanding the reasoning behind it, they're not learning—they're just outsourcing. And the more seamless the technology becomes, the more invisible that outsourcing can be.
A 2025 study by Michael Gerlich adds empirical weight to these concerns. Surveying 666 participants, the research found a negative correlation between frequent AI use and critical thinking, particularly among younger users. The reason? Cognitive offloading. When students delegate tasks like decision-making and problem-solving to AI, they risk bypassing the very mental processes that build critical faculties. The findings reinforce a key truth: educational systems must teach students not only how to use AI, but when not to use it. As we integrate these tools, we must embed habits of reflection and skepticism to ensure AI supports rather than supplants human cognition.
Another recent article by Yoshija Walter in the International Journal of Educational Technology in Higher Education highlights the urgent need to rethink how we integrate AI into modern education. Drawing on a case study from a Swiss university, he identifies three foundational skills for navigating an AI-infused learning environment: AI literacy, prompt engineering, and critical thinking. Rather than banning or fully embracing tools like ChatGPT, the institution adopted a nuanced framework that encourages responsible use, transparency, and reflection. This work reinforces the importance of scaffolding not just technical skills, but also the metacognitive capacities needed to thrive in an AI-augmented educational future.
Which brings us back to Hutchins.
If we align these technologies with the deeper goals of education, we have a chance to fulfill Hutchins' vision in a world he could never have imagined. A world where education is about learning how to ask the right questions. Not about offloading thought to machines, but about refining our own minds in conversation with them. There is still time, but not much.
AI In The Classroom: Cheating, Learning, And The New Academic Gray Zone | The Excerpt
Putting AI To Work In Schools Is Difficult. A New Toolkit Outlines How To Do It
Some districts are ahead of the curve in putting generative artificial intelligence to work in instruction and other school operations. The majority of districts, however, have either just started conversations or are still figuring out where to begin.
While a majority of states and a handful of education organizations have provided AI guidance for schools, many district leaders still need "practical" support for "pain points related to AI implementation," said Robbie Torney, the senior director of AI programs for Common Sense Media.
For instance, a common challenge Torney has heard from district leaders is how to have conversations with educators or parents who are skeptical about AI or worried about what AI use in schools might look like.
That's why Common Sense Media has released an AI toolkit for school districts to help them figure out how to use this new technology.
In an interview with Education Week, Torney discussed the challenges districts face when trying to put AI to use in schools.
This interview has been edited for length and clarity.
What's a common challenge districts are facing?
One of the questions we get a lot is: Where do we start? That can be a really daunting place. Part of how we've organized the tool is we have a "getting started" guide and a readiness assessment that are bundled together that are meant to help districts think through all of the different facets of what getting started means in a K-12 system. That's everything from thinking about systems and technical infrastructure to thinking about some of those enabling conditions that are on the ground, in terms of AI literacy for different stakeholders, to engaging in the stakeholder conversations to thinking about this from a policy or a compliance framework.
What's different about using AI compared with other educational technologies?
Implementation in K-12 is hard. Say you're adopting a new curriculum: How do you get stakeholder input and make sure that you're thinking through all of the requirements related to that curriculum? How does it fit into your schedule? How are you going to train your teachers? How are you going to measure whether or not the curriculum is actually being implemented, and whether it's having the effects that you want it to? These are the types of systems change that skilled and seasoned school administrators are used to navigating.
There's a lot about AI that feels very familiar when you're thinking about your approach to that. There's also some differences, of course, which is that district administrators themselves may be much more familiar with different types of curriculum and have much more experience or content knowledge [about that] than they do with AI. AI literacy is a huge and a really important part of the solution. But it's not the only part, and this toolkit is meant to point toward some of these other parts that need to be in place as districts are thinking about implementation.
What would you say is the most important step schools should take?
The first domain that we encourage people to start on if they are novices or are still developing their understanding of AI is about leadership and vision. Some of the questions associated with that are: Is there a shared districtwide understanding of AI's potential, opportunities, and risks? Is there a mission statement for AI use that's been developed or endorsed by leadership?
AI is not something to implement for the sake of implementation. AI is a tool, and districts have to have a clear vision for what that tool can do. That feeds into some of these other pieces related to policy and governance, infrastructure and systems readiness, staff capacity and professional learning, community and stakeholder engagement. But all of that has to be rooted in developing a leadership orientation and a vision for what the technology can be used for, and some knowledge of the technology is necessary for that.
How should districts help teachers learn more about AI's role in education?
Part of the work there is giving educators very concrete insight into what these technologies can do for them, and that starts with generating examples, either from other districts or in your own district, of how this can actually ... Demonstrate some of those proof points, positive wins, and exemplars that can help get staff excited about something that they may not necessarily understand or feel like they have the time for.
How should districts engage families in this process?
From Common Sense Media's research, when we last polled on this as part of our "Dawn of AI" report, 83% of parents reported that schools had not communicated with them about AI policies. One of the comments [we've heard from districts] was something to the effect of: "OK, we feel like we've communicated information, but have parents really heard it, or do parents feel supported by it?"
Administrators have reached out to us to talk about how they can bring training and apply some of these tools within a family engagement perspective. The toolkit does contain some specific resources for thinking about how to engage parents, how to build basic parent literacy, how to help parents understand things like the impact that AI might have on the economy in the future, or the jobs in the future, or college readiness. This is a really critical stakeholder group, and it's really down to districts to be able to help support parents with some of that work. Parent partnership is going to be critical to the success or failure of an initiative like this.
