Eight conversations every writing educator should be having. Annotated sources organized by argument, not just topic.
Does AI make students better or worse thinkers?
A cross-cultural study of 912 students found that a partnership orientation toward AI simultaneously increased critical vigilance and cognitive offloading, and both independently predicted deeper learning. Offloading only helps when freed capacity is redirected toward higher-order work.
drphilippahardman.substack.com/p/the-cognitive-offloading-paradox
Examines AI through Cognitive Load Theory and Bloom's Taxonomy. Prolonged AI exposure led to memory decline; pretesting before AI use improved retention. The challenge is distinguishing beneficial from detrimental offloading.
frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2025.1550621/full
Survey of 666 participants found a significant negative correlation between frequent AI tool use and critical thinking, mediated by cognitive offloading. Younger participants showed higher dependence and lower critical thinking scores.
mdpi.com/2075-4698/15/1/6
Are we confusing language medium with cognitive capacity?
Over 61% of TOEFL essays by non-native English speakers were falsely classified as AI-generated. Detection systems inherently discriminate against writers with less varied vocabulary and syntax — the exact population most vulnerable to false accusations.
cell.com/patterns/fulltext/S2666-3899(23)00130-7
The field's own professional organization recommends against using AI detection tools as primary evaluation, calls multilingual and neurodivergent students potential beneficiaries of AI as accessibility tools, and urges policies that differentiate accessibility use from academic dishonesty.
wacassociation.org/ai-statement/
AI educational tools are predominantly designed for English or major international languages, with limited accommodation for multilingual or Indigenous contexts. Algorithmic and cultural bias are structurally embedded, not incidental.
frontiersin.org/journals/computer-science/articles/10.3389/fcomp.2026.1759027/full
What are we actually measuring, and does it map to learning?
Faculty across disciplines moved toward podcasts, multimedia presentations, and real-world problem-solving as alternatives to essays. These formats foster higher-order skills while reducing AI-offloading incentives.
frontiersin.org/journals/education/articles/10.3389/feduc.2024.1499495/full
Proposes assessment across four cognitive levels: Fundamental, Applied, Conceptual, Critical, mapped to Bloom's Taxonomy. Common strategies include oral defenses, multimodal responses, and evaluating the learning journey including responsible AI use.
frontiersin.org/journals/education/articles/10.3389/feduc.2025.1596462/full
A practitioner roundup of redesign strategies and the limits of traditional summative formats in an AI-present classroom. "AI did not disturb assessment — it just made our mistakes visible."
timeshighereducation.com/campus/ai-and-assessment-higher-education
Why gradual, structured integration outperforms banning or ignoring AI.
Design-based research found students need to understand, access, prompt, corroborate, and incorporate AI outputs effectively. AI literacy is especially critical for minoritized students, who are most vulnerable to AI misinformation and least likely to benefit without explicit instruction.
link.springer.com/article/10.1007/s10791-025-09563-9
Recommends scaffolded assignments, reflection, and portfolio assessment to make student decision-making visible. Urges faculty to model transparency by disclosing their own AI use in teaching.
wacassociation.org/ai-statement/
Curated roundup of major AI literacy frameworks including EDUCAUSE, UNESCO, Stanford Teaching Commons, and the SAIL Framework. A persistent challenge is that AI literacy initiatives exist as isolated interventions rather than systematic, curriculum-wide implementations.
everylearnereverywhere.org/blog/ai-literacy-frameworks-for-higher-education
The essay is one format, not the only format, for assessing thinking.
Diversifying the portfolio of assessments increases inclusivity and provides multiple opportunities to demonstrate proficiency. Current AI models produce prose with more sophisticated grammar than many students can, making the generic essay an increasingly unreliable measure.
files.eric.ed.gov/fulltext/EJ1479440.pdf
MIT Media Lab research is moving assessment beyond essays to include speech, drawing, gesture, and physical movement. These modes hold particular promise for students marginalized by traditional text-only formats.
ascd.org/el/articles/better-faster-stronger-theres-more-to-ai-powered-assessment
Detection doesn't catch AI use. It catches unsophisticated AI use.
Students with access to premium humanizer tools can easily bypass detection, putting less-resourced students at a disadvantage. This disparity reinforces existing educational inequities and punishes the students least equipped to navigate the system.
mdpi.com/2078-2489/16/10/905
Students who write entirely their own work now run it through AI detectors pre-submission to avoid false accusations, rewriting genuine writing to satisfy a machine's statistical model. One veteran student left her university after repeated false flags jeopardized her financial aid.
nbcnews.com/tech/internet/college-students-ai-cheating-detectors-humanizers-rcna253878
Higher socioeconomic status predicts more frequent and sophisticated AI use. Students most at risk from detection systems are those least equipped with digital literacy, institutional capital, and access to premium evasion tools.
journals.sagepub.com/doi/10.1177/00472395251347304
A student whose first language is Mandarin reports his writing is flagged because limited vocabulary produces word repetition, a pattern detectors read as AI. The students most likely to be falsely accused are precisely those detection systems were never designed to protect.
npr.org/2025/12/16/nx-s1-5492397/ai-schools-teachers-students
Students were failed before they got here. Accountability requires prior instruction.
Accountability testing drove subject reallocation: courses requiring reading, thinking, and analysis were cut in favor of test-prep drills. Students arrived never having written an authentic essay, trained instead to produce formulaic responses that score well on standardized tests.
chronicle.com/article/some-assembly-still-required
Subscription may be required.
Fewer than half of Texas graduates deemed college ready through prep courses earned a C or better in their first college-level English or writing course. The college readiness label has not mapped to actual postsecondary performance for a significant share of Texas students.
kinder.rice.edu/urbanedge/texas-fastest-growing-path-college-readiness-leaves-many-high-schoolers-unprepared
Unawareness of what constitutes misconduct often signals a gap in curriculum design or instructional delivery, not deliberate dishonesty. Punitive action alone fails when the underlying cause is inadequate instruction in ethics, citation, and authorship.
frontiersin.org/journals/education/articles/10.3389/feduc.2025.1610836/full
79% of faculty are still beginning to explore what's needed or need guidance on AI in their classrooms. Students were failed by K–12. Faculty were not prepared by their institutions. Both groups are navigating something genuinely new — and only one is being held accountable.
newsroom.collegeboard.org/new-college-board-research-faculty-express-near-universal-concern
Global institutions are abandoning one-size-fits-all testing in favor of integrated, authentic evidence.
A global survey of admissions staff, assessment specialists, and faculty found that 27% of institutions identify critical thinking and digital communication as missing from current assessments. Long-form essays received lower importance ratings than integrated skills like summarizing, synthesizing multiple sources, and collaborative tasks. Respondents across all regions converged on the same finding: authentic, integrated assessment is the future.
timeshighereducation.com/hub/oxford-university-press
Note: This is a sponsored supplement produced by Oxford University Press, which also publishes the Oxford Test of English. The survey data is independently meaningful, but the framing reflects a commercial context.
31% of respondents feel speaking is underrepresented in standardized tests; 29% believe listening is inadequately assessed. The highest-rated individual skill was understanding the main points of an academic lecture — a contextual, integrated task that isolated-skill tests are not designed to measure. 67% of institutions now combine multiple evidence types: standardized scores, oral interviews, in-house assessments, and portfolios.
timeshighereducation.com/hub/oxford-university-press
Note: Same sponsored supplement as above. Same commercial context applies.
The free ARWI Starter Kit gives you the policy language, process documentation tools, and assessment structure to move from detection to design. Everything in one download, ready to use this week.
Download the Free Starter Kit →