The AI-Polished Applicant
Imagine a future where your applicant reads an AI-generated response to your live interview question off a teleprompter while you're meeting with them on Zoom. That future...is now.
Two months into the admissions cycle, a committee member watches a recorded interview and notices something strange. The candidate's eyes dart off-camera before each answer, then return with unusually complete, polished sentences. It's not proof of anything, but it raises a familiar question: How much of what we're hearing reflects the actual applicant?
Generative AI in education used to be about taking shortcuts with homework. Now it's in admissions. Personal statements polished by AI. GPAs affected by AI-assisted homework. And believe it or not, live Zoom interviews conducted with browser extensions and teleprompters. If our core inputs are increasingly "AI-mediated," admissions committees face two stark choices: make decisions based on bad evidence, or rethink what makes for quality evidence.
The Wrong Tool for the Job
Let's be clear about what AI tools like ChatGPT do well and what they don't. They're amazing at producing fluent text. They're not designed to predict whether a candidate will persist through a tough curriculum, pass licensing exams, or excel in clinical rotations.
That's not a criticism—it's just reality. A system built to predict the next word isn't the right tool to predict a student's next year. When we treat AI-generated text as a proxy for future performance, we confuse polish with potential.
For faculty and staff, this creates a daily fog of uncertainty. Consider the law school personal statement that reads like a professional magazine piece. The business school video interview with crisp, jargon-perfect responses. The nursing applicant whose transcripts look identical to a peer's, but come from programs with vastly different levels of AI assistance. None of this means students are doing anything wrong. It means the environment has changed, and our measurements need to change with it.
Beyond Bans
The impulse to ban or police AI use will be strong—and sometimes necessary. But when admissions teams ban the use of AI, they're rewarding students who find ways to circumvent those bans. The more durable response is designing assessments that remain valuable even when students have access to AI assistance, or creating scenarios that isolate the student from that assistance.
This means carefully proctored written assessments. Properly monitored real-time video assessments that focus on reasoning with standardized scenarios. Objective scoring systems that reduce individual bias while keeping the human in the loop. Most importantly, adapting to this new world means tying each component of the rapidly evolving application to actual outcomes, not just to what "feels right."
The Fairness Factor
There's an equity issue we can't ignore. Students with more resources have traditionally had access to writing coaches, and AI chatbots equalize that advantage. But the answer to equity cannot be a system in which written submissions amount to little more than copy-pasting from a chatbot.
The ethical path is shifting emphasis toward signals that reflect teachability, resilience, applied reasoning, and professional behavior—qualities that matter in practice and resist mere surface polish.
Real-World Solutions
One medical school admissions officer described an interview season where fatigue set in early: "By week four, every applicant sounded like they'd read the same playbook."
The team started adding brief case-based questions that required building an argument, not reciting talking points. The sessions ran five minutes longer. The notes got messier. But the decisions got better—and more defensible—because the evidence felt real again.
This isn't about abandoning essays or interviews. It's about reframing them as inputs with context. What's the source of this text? What process evidence comes with it? How does this information connect to the outcomes we actually track?
If we can't answer these questions, the problem isn't that students use new tools. It's that our process still evaluates yesterday's signals.
The "AI-polished applicant" isn't the villain here. They're the new normal. Our job is building evaluation systems that work within that reality. That means trading mystique for measurement, glow for ground truth.
When traditional applications are all AI-generated, the system must evolve.
— Meshwell Staff