Opportunities and Promise + Risks and Challenges
I asked AI to determine how many sessions at ASU+GSV 2026 mentioned AI in their session overview. Simple enough, right? Not exactly. The ASU+GSV server blocks automated requests, so AI pivoted and tried fetching information in batches — but then hit a token limit. At one point it even stalled for a few seconds, as if to say "give me a moment, this is harder than it looks." Long story short: AI managed to evaluate only a portion of the site, but still came back with an estimate that 30–40% of sessions mentioned AI in some capacity.
And that tracks. ASU+GSV 2026 put AI squarely in the spotlight, with sessions covering topics like Preparing Students for the Future of AI-Enabled Work, Responsible AI in Higher Education, AI Tutoring, AI's Impact on Youth Psychology, Designing Learning for the Age of AI, and many more.
During the conference I was asked how AI will impact correlation services, which is the heart of what EdGate provides our clients. The answer is right there in paragraph one of this newsletter article, loud and clear: there are opportunities and there are challenges.
How is EdGate using AI?
- EdGate uses AI to assist our work. We are already using it internally for standards research and tracking.
- AI excels at certain EdGate data offerings. For instance, EdGate uses AI to run standards comparison analyses. Want to know exactly how New York’s standards compare to Florida’s? EdGate’s AI-enabled ExACT tools can run a deeply informed comparison that references not only the standards themselves but also the conceptual tags (the taxonomical index) EdGate has added to them. Clients receive a precise standards comparison analysis they can use to inform content development and sales/marketing teams.
- EdGate will soon release a new AI-powered correlation capability: an ExACT user feeds a URL into the system, and ExACT evaluates the content and aligns it to applicable standards, again relying on EdGate’s taxonomical index.
- Last, much as I hit bumps using AI to analyze the ASU+GSV session list, relying solely on AI to correlate content to standards can lead to bumps of its own. Sort of like when your Roomba falls down the stairs or keeps running into a dog toy.
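To make the tag-based comparison idea concrete, here is a minimal sketch of how two states' standards might be compared through shared concept tags. Everything here is hypothetical: the standard codes, the tags, and the `tag_similarity` function are invented for illustration, and EdGate's actual taxonomical index and ExACT internals are not public.

```python
# Hypothetical sketch: comparing standards from two states by the overlap
# of the concept tags attached to each standard.

def tag_similarity(tags_a: set, tags_b: set) -> float:
    """Jaccard similarity between two sets of concept tags (0.0 to 1.0)."""
    if not tags_a and not tags_b:
        return 0.0
    return len(tags_a & tags_b) / len(tags_a | tags_b)

# Invented example data: each standard maps to the concepts it covers.
ny_standards = {
    "NY.MATH.4.NF.1": {"fractions", "equivalence", "visual-models"},
    "NY.MATH.4.NF.2": {"fractions", "comparison", "benchmarks"},
}
fl_standards = {
    "FL.MA.4.FR.1.1": {"fractions", "equivalence", "number-line"},
    "FL.MA.4.FR.1.2": {"decimals", "place-value"},
}

# For each New York standard, find the closest Florida standard by tag overlap.
for ny_code, ny_tags in ny_standards.items():
    best = max(fl_standards, key=lambda fl: tag_similarity(ny_tags, fl_standards[fl]))
    score = tag_similarity(ny_tags, fl_standards[best])
    print(f"{ny_code} -> {best} (similarity {score:.2f})")
```

The point of the sketch is the design choice: comparing standards through a shared conceptual layer, rather than through raw standard text, is what makes cross-state comparison tractable.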
So where does AI fall short on standards alignment accuracy?
- Multiple applicable standards require human judgment. A single lesson may legitimately align to several learning standards. AI may flag multiple matches without being able to determine which standard should serve as the primary focus — a pedagogical decision that requires human expertise and instructional context.
- Partial alignment can be mistaken for full alignment. Many standards are hierarchical, containing "parent" and "child" standards. A lesson may address one child standard while leaving others unmet. AI may identify an alignment to the broader standard without recognizing that only a portion of it is actually covered, creating a false sense of completeness.
- State standards are highly variable and increasingly unique. Across the U.S., standards have evolved independently, meaning the same concept may be assessed differently from state to state. In one state, a student might satisfy a standard by writing an explanation; in another, they may be required to demonstrate understanding in a specific, prescribed way. AI systems trained on generalized data may not reliably account for these state-level nuances and idiosyncrasies.
- International standards present additional technical challenges. AI tools can struggle to accurately process standards documents that use right-to-left text orientation (such as Arabic or Hebrew), increasing the risk of misreading or misaligning content tied to international curricula.
- Low-quality or graphical source materials reduce accuracy. When standards documents or lesson materials contain blurry images, charts, or diagrams, AI may be unable to interpret them correctly — leading to alignment errors that a human reviewer would easily catch.
- Early Childhood Education requires experiential, observational judgment. Determining whether an activity genuinely meets early childhood education standards often depends on direct observation of how young children engage with the material. This is a domain where an educator's hands-on experience with actual children is essential — AI cannot replicate the informed, contextual judgment that comes from working closely with this age group.
- English Language Learner (ELL) content demands cultural, not just linguistic, accuracy. For ELL learners, it is not enough for lesson language to be technically correct. Content must also be culturally appropriate and sensitive to the lived experiences of diverse learners. AI may confirm grammatical or vocabulary compliance without recognizing cultural mismatches that could undermine the effectiveness or inclusivity of a lesson.
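The partial-alignment problem above can be sketched in a few lines. This is a hypothetical illustration, not EdGate's implementation: the standard codes and the parent/child hierarchy are invented, and real state frameworks vary.

```python
# Hypothetical sketch: flagging when a lesson covers only part of a
# hierarchical ("parent") standard via its child standards.

# Invented hierarchy: a parent writing standard and its child standards.
parent_to_children = {
    "W.5.2": ["W.5.2.a", "W.5.2.b", "W.5.2.c"],
}

def coverage_report(aligned: set, hierarchy: dict) -> dict:
    """Label each parent standard 'full', 'partial', or 'none' based on
    which of its child standards the lesson was aligned to."""
    report = {}
    for parent, children in hierarchy.items():
        hits = [c for c in children if c in aligned]
        if len(hits) == len(children):
            report[parent] = "full"
        elif hits:
            report[parent] = "partial"
        else:
            report[parent] = "none"
    return report

# An automated pass aligned the lesson to only one child standard.
lesson_alignments = {"W.5.2.a"}
print(coverage_report(lesson_alignments, parent_to_children))
```

A naive system might simply report "aligned to W.5.2" here; tracking child-level coverage is what surfaces the false sense of completeness described above, and deciding whether partial coverage is acceptable remains a human judgment.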
At EdGate we use a mix of technology and human expertise: AI assistance (to keep costs down), a technical infrastructure built on learning standards and learning concepts, and subject matter experts with decades of teaching experience. Together, these deliver the most accurate, scalable, and defensible alignments available.