What to ask when AI vendors show up on campus
These procurement decisions are coming, whether or not they benefit student learning
The Important Work is a space for writing instructors at all levels—high school, college, and beyond—to share reflections about teaching writing in the era of generative AI.
This week’s post is by Katie Conrad, who is a professor of English at the University of Kansas. Her research interests include modernism, Irish literature and culture, technology studies, fandom studies, and critical AI literacy. She co-directed the AI & Digital Literacy Institute in conjunction with the National Humanities Center in 2024 and 2025. You can find her on Bluesky, LinkedIn, and Substack.
If you’re interested in sharing a reflection for The Important Work, you can find information here.—Jane Rosenzweig
Many educational institutions, from kindergarten on up, have been buying enterprise licenses for AI “education tools” from companies like OpenAI, Microsoft, Anthropic, and Google, as well as education-focused AI app suites from Flint, MagicSchool, Brisk, and more (see e.g., Ford and Knox 2025, Field 2025, Hale 2025). These procurement decisions are coming even when other funding for education is at risk or under threat. And they are coming even in the face of increasing evidence that ed tech doesn’t pay off in student learning (as outlined, for instance, in the recent Economist article, “Ed tech is profitable. It is also mostly useless”).
Sometimes actual educators are at the table when ed tech purchasing decisions are made; frequently they are not. And from what I’ve heard from teachers at all levels, many of our colleagues don’t feel technically knowledgeable enough to help make these choices. But let me assure you that very, very few of the people making those decisions know much, if anything, more than you do, other than what the companies selling them the products are telling them. And as educators who have expertise in content and practice, I believe we have the responsibility to protect our students and that we “should have input into institutional decisions about purchasing and implementation of any automated and/or generative system (“AI”) that affects the educational mission broadly conceived” (“Blueprint for an AI Bill of Rights for Education,” Critical AI, 2023/2024; unpaywalled version here).
My purpose here, then, is to provide some ways into the conversation with visiting vendors and tech companies—or the IT professionals and administrators who are making the purchasing decisions. There is a lot at stake here. We and our students shouldn’t be the lab rats for untested educational technologies with insufficient guardrails. Research increasingly suggests that genAI use causes cognitive harms (Zhai et al 2024, León-Dominguez 2024, Darvashi et al 2024, Dergaa et al 2024, Kumar et al 2024, Lee et al 2025, Gerlich 2025); that these tools’ “efficiencies” are illusory (the METR study); and that 95% of companies that have invested in AI are getting no return on investment (Nanda et al 2025). More than that, chatbots pose risks to the safety and well-being of our students, colleagues, families, and friends. Just ask the victims of deepfake porn. Just ask the families of Adam Raine and Austin Gordon and Sewell Garcia.
One of my least favorite arguments in favor of institutional license purchases of AI ed tech—and it comes from those who aren’t necessarily actively excited about that tech—is “that’s the only way to keep our institution’s data safe.” But remember: the decision not to keep everyone’s data safe and out of training sets is, and always has been, in the companies’ hands. As I’ve written elsewhere, charging you for that “privilege” is like paying protection money to the mob.
So I think we need to ask questions in the few instances when these companies show up on our campuses. I recently wrote about my experience asking questions when Google came to my campus. From that post, here are some questions for those of you out there who want to ask but might not know where to start:
For Google and Perplexity:
You are joining a number of tech companies in offering free subscriptions to college students but not to teachers. Given the risks of learning loss and the range of ethical and practical issues, like hallucination, that should be weighed before using AI tools, it would seem wise to at least offer the same free access to the teachers who could guide students on best practices. Without that, it can look like you’re doing an end run around teachers and instead promoting an idea of education without educators. Is this a pressure tactic to get teachers to pay for your services and schools to buy enterprise licenses? Can you comment on your decision to give students, and not teachers, free access first?
For Google:
In the fall of 2025, Google put a feature called Homework Help into a Chrome browser update, allowing students to solve homework problems, write discussion answers, and take tests with Google AI software. It was removed under that name in later releases, but it is effectively still there, just under the Google Lens name. Are you in conversation with that team, and with actual teachers, to ensure that your products align with, rather than undermine, educational learning goals?
For all AI-based ed tech companies (the big ones plus Brisk, Flint, MagicSchool, SchoolAI, etc.):
OpenAI has admitted that hallucinations are baked into generative AI systems. And as experts note, those mistakes are plausible, don’t come in the same places as human mistakes, and can’t be mitigated in the same ways. Why should educators, researchers, and students ever be using a technological system that will always potentially insert mistakes?
There have been a lot of studies showing negative cognitive impacts of generative AI use. Cognitive offloading isn’t necessarily always bad, but it is arguably not what we want in an educational environment. There have also been studies showing reductions in retention after using LLMs for tasks, reductions in critical and creative thinking, and so forth. How do you recommend mitigating the kind of learning loss that seems to result from systems like yours? (Check out some of the studies here under “Cognitive Impacts,” and share newer studies in the comments!)
Could you share with us whether all of the datasets on which your AI models are trained come from consensually obtained, licensed, or public domain materials? (If they say they train on “publicly available” data, note that that is not a legal category, and publicly available material still has copyright.)
Follow-up: Kevin Gannon, in his recent Chronicle article, writes: “Either we have copyright law or we don’t. Either plagiarism and the theft of intellectual property are anathema to higher education or they aren’t. We’re either modeling academic honesty and integrity to our students or we aren’t. … GenAI’s architecture absolutely depends on consciously taken actions that would stand in violation of any of our institutions’ academic-integrity policies.” Could you respond?
Alternate follow-up: Could you explain why we have to pay for enterprise licenses for your intellectual property, but you don’t have to pay for the intellectual property you used to train your systems?
Is all student and faculty data guaranteed to be protected in all of your systems and also guaranteed not to be used for training? If not, do we have to make a special deal or licensing arrangement to ensure that protection? Why is it not already protected?
For companies that have partnered with the military (OpenAI, Google, Meta):
Do you have an AI safety team for your education division, and is it the same safety team that oversees your collaborations with the NSA and the US military?
There are, of course, many other questions worth asking. As a few of my fellow critical AI colleagues have noted, even just getting reps (or administrators) to answer the question of what they mean by “AI” can be quite revealing. But in an environment in which you may be the only person asking (and I’d encourage you to get at least one other colleague to join you in asking questions, which can encourage others to speak up), you might want to pick a question that will get others thinking about the answers that are (or aren’t) provided.
With that in mind, it’s also worth looking at the Library Freedom Project’s list, Questions to Ask Vendors About AI. These very pointed and important questions cover a range of issues. While they are focused on library applications that embed AI, they could easily be adapted to ask of any ed tech that incorporates AI. They fall under the following general headings:
Basic and technical functions of the tool
Environmental considerations
Labor expectations
Copyright, data, and privacy considerations
Revenue expectations
Charles Logan has also provided “Caregiver Resources for Pushing Back Against AI and Other Educational Technologies,” which includes “Questions to Ask School Administrators and Staff About Technology,” with questions that fall under the following general headings:
The technology’s purpose(s)
How the technology works
Data
Privacy
Security
The vendor
The fine print
Teacher professional development
Students’ experiences
Parents’ and caregivers’ experiences
Other
Adapted from both of these lists, which I highly recommend you read in full, here are some key questions I would (publicly) ask vendors, tech reps, and/or teachers and administrators (as a caregiver or as a teacher/staff member, no matter the level):
For all levels:
What is the purpose of this AI application? What problem does it solve? What is the research that shows that it solves it?
If this AI app harms students, either directly through outputs (such as guiding students to self-harm) or through data breaches, who takes legal responsibility for that harm, and what would that responsibility entail? Where is the liability contractually set out? Given that many major insurers are stepping back from covering AI harms (Harris and Criddle 2025), who will provide compensation if it becomes necessary?
How does this app/technology align with our institution’s mission and goals (for instance, environmental sustainability)? Who will be formally auditing this AI app to ensure that it continues to meet the institution’s mission and what does that audit process look like?
For K-12:
Given the documented risk, is parental consent required for the use of these apps for those under the age of 13? What are the alternatives for those who do not consent? What kind of training do you provide for those parents who do consent to their children’s use of these apps?
Again, asking the question—preferably in a public, open forum, but really at any time—is often enough to begin a more critically informed, transparent conversation and decision-making process. The problem is that these questions frequently aren’t being asked at all. That is something we have the power to change.

Thanks for this, Katie! Sharing with my admin immediately.
This is so helpful! Thank you, Katie! Once the company is on campus, it is very difficult to act at that time. Our engineering school provided a premium version of Grammarly to students - Grammarly (now called Superhuman) provides features such as having the AI "pre-grade" writing if students input prompts and rubrics. A few of us got together to meet with our IT to discuss our concerns, and we were able to get students enrolled in our classes blocked: https://viterbiit.usc.edu/services/software/grammarly/.
Students can still access free Grammarly and the university-provided ChatGPT Edu - and they can still buy their own Superhuman accounts. Anyway - better to act in advance using the framework that you provide. Thank you, Katie!