
Data literacy is not optional: what every educator needs to understand about AI

March 24, 2026 · 5 min · Kairos Consulting

The misconception that is holding educators back

When we run AI literacy workshops in schools and universities, we ask participants at the start: "What do you think data literacy means?" The most common answers: "knowing how to use Excel," "understanding statistics," "being able to code."

All of those answers are wrong — at least, they are wrong for the purpose of working effectively in an AI-enabled institution.

Data literacy for educators is not a technical skill. It is a critical thinking skill. It is the ability to ask the right questions about information — where it came from, what it can and cannot tell you, and what might be missing. You do not need to be a data scientist. You need to be a thoughtful consumer of data — and a thoughtful skeptic of AI-generated outputs.

Five habits every educator needs to develop

1. Asking "how do we know?" This is the foundational reflex. Every time a number, a recommendation, or an insight is presented — whether from an AI system, a dashboard, or a consultant — the first question should be: how was this produced? What data went in? What assumptions were made? This is not cynicism. It is intellectual hygiene.

2. Understanding what data can and cannot tell you. Data describes what happened. It does not always explain why. Correlation is not causation. Averages hide the full distribution. A student who scores 70% on every assessment is having a very different experience from one who scored 40% and then 100%. The average is the same. The story is completely different.
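To make that point concrete, here is a minimal sketch using hypothetical score lists (the numbers are illustrative, not real student data): two students with identical averages but very different spreads, which the mean alone cannot distinguish.

```python
from statistics import mean, stdev

# Hypothetical assessment scores, chosen for illustration only
steady = [70, 70, 70, 70]      # consistent performance
volatile = [40, 60, 80, 100]   # same average, widely varying scores

print(mean(steady), mean(volatile))    # both averages are 70
print(stdev(steady), stdev(volatile))  # spread: 0 vs roughly 25.8
```

A dashboard that reports only the average would show these two students as identical; looking at the spread (or the individual scores) reveals the difference.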

3. Knowing where your data comes from. Every dataset reflects the choices of whoever collected it. Which students were included? Which behaviors were tracked and which were not? An AI system trained on historical data will reproduce historical patterns — including historical biases. Understanding the provenance of data is not optional when that data is being used to make decisions about students.

4. Reading outputs critically. AI systems produce outputs with confidence. They do not express uncertainty the way a human expert would. A language model generating a student report will write with the same fluency whether it is accurate or completely fabricated. Educators need to develop the habit of treating AI outputs as drafts, not decisions.

5. Asking what is missing. Every dataset represents a choice about what to measure — and therefore a choice about what not to measure. Student attendance is easy to track. Student belonging is not. When an AI system optimizes for what is measurable, it may systematically devalue what is not. Educators are in the best position to notice this — if they are looking for it.

Why this matters now

Educational institutions are deploying AI at a pace that is outrunning their capacity to evaluate it critically. Automated essay scoring. Predictive dropout models. Personalized learning platforms. Each of these systems makes assumptions, reflects biases, and has blind spots.

The educators who understand this — who can ask the right questions, push back on opaque outputs, and advocate for their students when a system gets it wrong — will be the most valuable people in any institution navigating this transition.

"An AI system is only as trustworthy as the data it was trained on and the process that governs its use. Every educator needs to be able to ask both of those questions — and understand the answers."

Data literacy is not a technical add-on. It is a professional requirement for working in education in 2026. The institutions that invest in building this capability across their entire staff — not just in their IT teams — will be the ones that deploy AI responsibly and effectively.