Why I Stopped Teaching Facts and Started Teaching Questions

In the age of AI, teaching is no longer about transmitting facts, but about learning to question, analyze, and evaluate answers.

One second changed everything about how I teach.

I took a standard question from our annual exams and asked an AI to calculate a project's rate of return. The answer appeared instantly: perfect formatting, a clear percentage, professional presentation.

That's when I realized traditional teaching was over.

Any descriptive question I used to spend class time explaining could now be answered in one second. The real question became: what's the point of learning the calculation when the result comes so fast?

The answer hit me. The thought process behind the result matters more than the result itself.

The Zealot Problem

Pay too little attention, and you can develop what I call "zealot behavior" with AI results.

One form is copy-pasting a result from AI without any critical review. The opposite form is dismissing a claim as questionable just because AI said otherwise. Both responses treat AI as an infallible authority rather than a tool to question.

We're just starting to see this shift. Most students use AI to accelerate homework and data collection. But they're not learning to evaluate what they're collecting.

Anyone can ask AI to produce a 20-page essay on any topic with an outcome that looks more than decent. The real challenge is keeping the capability to reason.

Framework Thinking Over Fact Absorption

I'm redesigning my approach around one core principle: teach students to break down problems and understand the steps that lead to results.

My methodology isn't fully defined yet and will need to evolve. But I'm focusing on helping students think in frameworks and systems to understand if AI's thought process is sound or has obvious flaws.

Here's a concrete example from my finance classes.

Take calculating project profitability. First, students need to break down the project into general assumptions for calculation. They must identify key variables: investment costs, revenues, inflation, tax rate, cost of goods.

When AI gives a result as a percentage, the human must be capable of asking for calculation details. Do the different steps make sense? What's the margin per year? How were taxes calculated?

Students need to understand the framework before they can evaluate the output.
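As a sketch of that framework in code: break the project into explicit assumptions, build the yearly cash flows step by step, then compute the rate of return. All figures here, and the simplification that revenues grow with inflation at a single rate, are illustrative assumptions, not a real project.

```python
def project_cash_flows(investment, revenue, cogs_rate, tax_rate, inflation, years):
    """Yearly after-tax cash flows, starting with the initial outlay."""
    flows = [-investment]
    for year in range(1, years + 1):
        rev = revenue * (1 + inflation) ** (year - 1)  # revenues grow with inflation
        margin = rev * (1 - cogs_rate)                 # margin per year
        taxes = margin * tax_rate                      # taxes on that margin
        flows.append(margin - taxes)
    return flows

def npv(rate, flows):
    """Net present value of the cash flows at a given discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

def irr(flows, lo=-0.99, hi=10.0, tol=1e-8):
    """Internal rate of return by bisection: the rate where NPV crosses zero."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, flows) > 0:
            lo = mid  # NPV still positive, so the true IRR is higher
        else:
            hi = mid
    return (lo + hi) / 2

flows = project_cash_flows(investment=100_000, revenue=50_000,
                           cogs_rate=0.4, tax_rate=0.25, inflation=0.02, years=5)
rate = irr(flows)
print(f"Yearly cash flows: {[round(cf) for cf in flows]}")
print(f"IRR: {rate:.2%}")
```

Every line here corresponds to a question a student should be able to put to an AI's answer: where did the margin per year come from, how were taxes calculated, what rate makes the project break even.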

The Art of Prompting

The art of prompting the right question will be incredibly valuable in the future.

It comes down to knowing the context of a topic: what matters, what doesn't, and what has real impact.

Students with statistics backgrounds understand correlation. I believe the ability to spot real-world correlations between symptoms and outcomes will help them ask AI the right questions and connect the dots.

People who understand statistics know that confidence in a result depends on sample size. If you ask AI a rare question on which it has seen little background material, the result can be vague or simply wrong while still looking believable.

The capacity to judge when a result is likely to be reliable, given how much the AI has had to learn from, is crucial knowledge.
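The sample-size point can be shown in miniature: the standard error of a sample mean shrinks with the square root of the sample size, so conclusions drawn from few observations carry much wider uncertainty. The simulated population below is an illustrative assumption, not real data.

```python
import math
import random

random.seed(42)

def mean_and_stderr(sample):
    """Sample mean and its standard error, s / sqrt(n)."""
    n = len(sample)
    mean = sum(sample) / n
    variance = sum((x - mean) ** 2 for x in sample) / (n - 1)
    return mean, math.sqrt(variance / n)

# A simulated population with true mean 100 and standard deviation 15.
population = [random.gauss(100, 15) for _ in range(100_000)]

for n in (10, 100, 10_000):
    sample = random.sample(population, n)
    mean, se = mean_and_stderr(sample)
    # Approximate 95% confidence interval: mean ± 1.96 × standard error.
    print(f"n={n:>6}: mean={mean:6.1f}, 95% CI ≈ ±{1.96 * se:.1f}")
```

The interval narrows as the sample grows; a student who has internalized this knows to ask how much evidence sits behind a confident-sounding answer.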

Teaching in the Unknown

I don't have all the answers yet. I'm still developing exercises and methodologies as this unfolds.

But I'm confident about the direction. Students need to learn pattern recognition and framework thinking to collaborate effectively with AI.

The goal isn't to compete with AI or reject it. The goal is to teach students when to trust it, when to question it, and how to dig deeper when something doesn't add up.

Traditional education focused on knowledge absorption. The new model focuses on knowledge evaluation.

We're training students to be intelligent questioners rather than passive receivers. To understand context and detect patterns that help them prompt better questions and evaluate better answers.

The one-second calculation changed my entire approach to teaching. Instead of explaining how to get the answer, I'm teaching students how to question whether the answer makes sense.

That's a skill no AI can replace (for now).