Evidence & Sources

The Research

Eight conversations every writing educator should be having. Annotated sources organized by argument, not just topic.

Conversation 01

Cognitive Offloading

Does AI make students better or worse thinkers?

The Cognitive Offloading Paradox
2026
Wang & Zhang (2026) — International Journal of Educational Technology in Higher Education

A cross-cultural study of 912 students found that a partnership orientation toward AI simultaneously increased critical vigilance and cognitive offloading, and both independently predicted deeper learning. Offloading only helps when freed capacity is redirected toward higher-order work.

drphilippahardman.substack.com/p/the-cognitive-offloading-paradox
The Cognitive Paradox of AI in Education: Between Enhancement and Erosion
2025
Jose et al. (2025) — Frontiers in Psychology

Examines AI through Cognitive Load Theory and Bloom's Taxonomy. Prolonged AI exposure led to memory decline; pretesting before AI use improved retention. The challenge is distinguishing beneficial from detrimental offloading.

frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2025.1550621/full
AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking
2025
Gerlich (2025) — Societies (MDPI)

Survey of 666 participants found a significant negative correlation between frequent AI tool use and critical thinking, mediated by cognitive offloading. Younger participants showed higher dependence and lower critical thinking scores.

mdpi.com/2075-4698/15/1/6
Conversation 02

Multilingualism & Equity

Are we confusing language medium with cognitive capacity?

GPT Detectors Are Biased Against Non-Native English Writers
2023
Liang et al. (2023) — Patterns, Stanford University

Over 61% of TOEFL essays by non-native English speakers were falsely classified as AI-generated. Because detectors flag writing with limited linguistic variability, they systematically penalize the very population most vulnerable to false accusations.

cell.com/patterns/fulltext/S2666-3899(23)00130-7
AWAC Statement on AI and Writing Across the Curriculum (Version 2.0)
2025
Association for Writing Across the Curriculum (September 2025)

The field's own professional organization recommends against using AI detection tools as a primary means of evaluation, identifies multilingual and neurodivergent students as potential beneficiaries of AI as an accessibility tool, and urges policies that differentiate accessibility use from academic dishonesty.

wacassociation.org/ai-statement/
AI and the Digital Divide in Education
2026
Frontiers in Computer Science (February 2026)

AI educational tools are predominantly designed for English or major international languages, with limited accommodation for multilingual or Indigenous contexts. Algorithmic and cultural bias are structurally embedded, not incidental.

frontiersin.org/journals/computer-science/articles/10.3389/fcomp.2026.1759027/full
Conversation 03

The Authenticity Question

What are we actually measuring, and does it map to learning?

AI-Resistant Assessments in Higher Education: Practical Insights from Faculty Training Workshops
2024
Frontiers in Education (November 2024)

Faculty across disciplines moved toward podcasts, multimedia presentations, and real-world problem-solving as alternatives to essays. These formats foster higher-order skills while reducing AI-offloading incentives.

frontiersin.org/journals/education/articles/10.3389/feduc.2024.1499495/full
Balancing AI-Assisted Learning and Traditional Assessment: The FACT Framework
2025
Frontiers in Education (May 2025)

Proposes assessment across four cognitive levels: Fundamental, Applied, Conceptual, Critical, mapped to Bloom's Taxonomy. Common strategies include oral defenses, multimodal responses, and evaluating the learning journey including responsible AI use.

frontiersin.org/journals/education/articles/10.3389/feduc.2025.1596462/full
AI and Assessment in Higher Education
2025
Times Higher Education Campus (May 2025)

A practitioner roundup of redesign strategies and the limits of traditional summative formats in an AI-present classroom. "AI did not disturb assessment — it just made our mistakes visible."

timeshighereducation.com/campus/ai-and-assessment-higher-education
Conversation 04

Scaffolded AI Literacy

Why gradual, structured integration outperforms banning or ignoring AI.

Incorporating Generative AI into a Writing-Intensive Course Without Off-Loading Learning
2025
Discover Computing / Springer (May 2025)

Design-based research found students need to understand, access, prompt, corroborate, and incorporate AI outputs effectively. AI literacy is especially critical for minoritized students, who are most vulnerable to AI misinformation and least likely to benefit without explicit instruction.

link.springer.com/article/10.1007/s10791-025-09563-9
AWAC Statement on AI and Writing Across the Curriculum (Version 2.0)
2025
Association for Writing Across the Curriculum (September 2025)

Recommends scaffolded assignments, reflection, and portfolio assessment to make student decision-making visible. Urges faculty to model transparency by disclosing their own AI use in teaching.

wacassociation.org/ai-statement/
AI Literacy Frameworks for Higher Education: Faculty Guidance
2026
Every Learner Everywhere (January 2026)

Curated roundup of major AI literacy frameworks including EDUCAUSE, UNESCO, Stanford Teaching Commons, and the SAIL Framework. A persistent challenge is that AI literacy initiatives exist as isolated interventions rather than systematic, curriculum-wide implementations.

everylearnereverywhere.org/blog/ai-literacy-frameworks-for-higher-education
Conversation 05

Format as Equity

The essay is one format, not the only format, for assessing thinking.

Integrating Artificial Intelligence into Higher Education Assessment
2025
Williams (2025) — Intersection: A Journal at the Intersection of Assessment and Learning

Diversifying the portfolio of assessments increases inclusivity and provides multiple opportunities to demonstrate proficiency. Current AI models produce writing with more complex grammar than many students do, making the generic essay an increasingly unreliable measure.

files.eric.ed.gov/fulltext/EJ1479440.pdf
Better, Faster, Stronger? There's More to AI-Powered Assessment
2025
ASCD Educational Leadership (November 2025)

MIT Media Lab research is moving assessment beyond essays to include speech, drawing, gesture, and physical movement. These modes hold particular promise for students marginalized by traditional text-only formats.

ascd.org/el/articles/better-faster-stronger-theres-more-to-ai-powered-assessment
Conversation 06

The Sophistication Gap

Detection doesn't catch AI use. It catches unsophisticated AI use.

Evaluating the Effectiveness and Ethical Implications of AI Detection Tools in Higher Education
2025
MDPI Information (October 2025) — Sam Houston State University

Students with access to premium humanizer tools can easily bypass detection, putting less-resourced students at a disadvantage. This disparity reinforces existing educational inequities and punishes the students least equipped to navigate the system.

mdpi.com/2078-2489/16/10/905
To Avoid Accusations of AI Cheating, College Students Are Turning to AI
2026
NBC News (January 2026)

Students who write entirely their own work now run it through AI detectors pre-submission to avoid false accusations, rewriting genuine writing to satisfy a machine's statistical model. One veteran student left her university after repeated false flags jeopardized her financial aid.

nbcnews.com/tech/internet/college-students-ai-cheating-detectors-humanizers-rcna253878
AcademAI: Investigating AI Usage, Attitudes, and Literacy in Higher Education
2025
SAGE Journals (June 2025)

Higher socioeconomic status predicts more frequent and sophisticated AI use. Students most at risk from detection systems are those least equipped with digital literacy, institutional capital, and access to premium evasion tools.

journals.sagepub.com/doi/10.1177/00472395251347304
AI Detection Tools Are Unreliable. Teachers Are Using Them Anyway.
2025
NPR (December 2025)

A student whose first language is Mandarin reports his writing is flagged because limited vocabulary produces word repetition, a pattern detectors read as AI. The students most likely to be falsely accused are precisely those detection systems were never designed to protect.

npr.org/2025/12/16/nx-s1-5492397/ai-schools-teachers-students
Conversation 07

The Humanistic Case

Students were failed before they got here. Accountability requires prior instruction.

How Changes in K–12 Schooling Hampered the Preparation of College Students
2025
The Chronicle of Higher Education (January 2025)

Accountability testing drove a reallocation of instructional time: courses requiring reading, thinking, and analysis were cut in favor of test-prep drills. Students arrived never having written an authentic essay, trained instead to produce formulaic responses that score well on standardized tests.

chronicle.com/article/some-assembly-still-required

Subscription may be required.

Texas' Fastest-Growing Path to 'College Readiness' Leaves Many High Schoolers Unprepared
2025
Kinder Institute at Rice University (November 2025)

Fewer than half of Texas graduates deemed college-ready via prep courses earned a C or better in their first college-level English or writing course. For a significant share of Texas students, the college-readiness label has not mapped to actual postsecondary performance.

kinder.rice.edu/urbanedge/texas-fastest-growing-path-college-readiness-leaves-many-high-schoolers-unprepared
Addressing Student Use of Generative AI Through Academic Integrity Reporting
2025
Frontiers in Education (October 2025)

Unawareness of what constitutes misconduct often reflects an opportunity for improvement in curriculum design or instructional delivery, not explicit malice. Punitive action alone fails when the underlying cause is inadequate instruction in ethics, citation, and authorship.

frontiersin.org/journals/education/articles/10.3389/feduc.2025.1610836/full
Faculty Express Near-Universal Concern That Student AI Use Undermines Original Writing
2026
College Board Research (February 2026)

79% of faculty report they are either just beginning to explore AI or still need guidance on using it in their classrooms. Students were failed by K–12. Faculty were not prepared by their institutions. Both groups are navigating something genuinely new, and only one is being held accountable.

newsroom.collegeboard.org/new-college-board-research-faculty-express-near-universal-concern
Conversation 08

Rethinking Language Assessment

Global institutions are abandoning one-size-fits-all testing in favor of integrated, authentic evidence.

The Future of English Proficiency Testing: Why Universities Are Rethinking Language Assessment
2025
Oxford University Press / Times Higher Education Connect (2025) — global survey, 156 respondents across 6 regions

A global survey of admissions staff, assessment specialists, and faculty found that 27% of institutions identify critical thinking and digital communication as missing from current assessments. Long-form essays received lower importance ratings than integrated skills like summarizing, synthesizing multiple sources, and collaborative tasks. Respondents across all regions converged on the same finding: authentic, integrated assessment is the future.

timeshighereducation.com/hub/oxford-university-press

Note: This is a sponsored supplement produced by Oxford University Press, who also publish the Oxford Test of English. The survey data is independently meaningful but the framing reflects a commercial context.

The Authenticity Gap in Language Testing
2025
Synthesized from OUP/THE survey findings (2025)

31% of respondents feel speaking is underrepresented in standardized tests; 29% believe listening is inadequately assessed. The highest-rated individual skill was understanding the main points of an academic lecture — a contextual, integrated task that isolated-skill tests are not designed to measure. 67% of institutions now combine multiple evidence types: standardized scores, oral interviews, in-house assessments, and portfolios.

timeshighereducation.com/hub/oxford-university-press

Note: Same sponsored supplement as above. Same commercial context applies.

Ready to put the research into practice?

The free ARWI Starter Kit gives you the policy language, process documentation tools, and assessment structure to move from detection to design. Everything in one download, ready to use this week.

Download the Free Starter Kit →