
What college teachers really think about AI use by students: insights from deep interviews with college educators



Executive summary

The conversation around AI in education has been dominated by assumptions rather than evidence. Headlines warn of widespread cheating, while institutions scramble to implement detection software and blanket bans. But what do educators actually think about student AI use when you sit down and ask them directly?

New research from our surveys and interviews with college educators reveals that while only 7% allow unrestricted AI use, 71% permit AI with guidelines – showing that teachers increasingly embrace AI as a learning tool when it is used ethically. The key factors for acceptance: transparency, engagement with sources, and maintaining student agency in the learning process.

Over the past two months, we conducted in-depth interviews with 40 college instructors across writing-heavy disciplines – from composition and literature to business communications and social sciences. These conversations, lasting 45 minutes each, revealed a nuanced landscape that challenges common misconceptions about faculty attitudes toward AI.

These findings align with broader institutional trends documented by EDUCAUSE[1], which found that 73% of institutions adopt permissive or neutral rather than restrictive AI policies. Far from the resistance narrative that dominates media coverage, we found educators increasingly willing to embrace AI as a learning tool – when certain conditions are met.

The real AI policy landscape in higher education

Bar chart: “AI usage policy across different universities” – the five policy categories and their shares are listed below.

This research was conducted by Litero AI in August-September 2025. The methodology combined a quantitative survey of 42 educators across multiple institutions and disciplines with the in-depth interviews described above.

Survey data from 42 educators reveal a more permissive landscape than typically reported:

  • 35.7% allow AI for brainstorming and outlining only
  • 35.7% allow AI with disclosure and citation required
  • 14.3% have no clear institutional rules
  • 7.1% fully allow and encourage AI use
  • 4.8% prohibit AI entirely

Taken together, the two guideline-based categories account for roughly 71% of respondents – the figure cited in the executive summary. This distribution contradicts claims of widespread faculty resistance. The Digital Education Council’s 2025 global survey[2] of 1,681 faculty from 52 institutions found similar patterns, with 57% preferring “AI permitted with disclosure and specific instructions” for student assignments.

“We’ve moved beyond the binary thinking about AI – that you’re either for it or against it,” explains Rachel, who teaches writing at Seattle Central College and co-creates AI policies with her students. “I want my students to use AI to become better writers, not to avoid writing altogether. When more than 50% of a paper is AI-generated, we require rewrites, but that rarely happens once students understand the boundaries.”

This sentiment reflects a broader shift we observed: educators moving from reactive prohibition to proactive integration. The institutions and instructors showing the most success have moved beyond detection and punishment toward education and authentic assessment design.

Trust through transparency: the new academic integrity

Infographic titled “Main Trust Factors in Ethical AI Use,” showing four numbered circles: Process Transparency, Accurate Sourcing, Student Agency, and Citation Requirements.

When we asked educators what would make them comfortable recommending an AI writing tool to students, transparency emerged as the overwhelming priority. Split-screen interfaces that show AI contributions alongside student work were consistently praised across interviews.

“I don’t mind students using AI as long as it’s disclosed with evidence,” says Reggie, who teaches in Maine. “If you used AI for planning and outlining, show me screenshots of that process or your AI usage log. That split-screen view where I can see what the student wrote versus what AI suggested – that’s transparency done right.”

This emphasis on process over product represents a fundamental shift in how academic integrity is conceptualized. Rather than focusing solely on the authenticity of final submissions, educators increasingly value evidence of student engagement and critical thinking throughout the writing process.

The most trusted approaches include:

  • Before/after drafts showing student revisions of AI suggestions
  • Prompt logs documenting how students interacted with AI tools (a sample entry appears after this list)
  • Source verification ensuring AI-generated references are legitimate
  • Reflection essays where students analyze their AI collaboration
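
To make the prompt-log idea concrete, here is a purely hypothetical entry a student might keep – the assignment, tool, and wording are invented for illustration:

  • Date and tool: October 3, ChatGPT
  • Prompt: “Suggest three counterarguments to my thesis that remote work improves productivity.”
  • What I used: adapted the second counterargument in paragraph 4 and rewrote it in my own words; discarded the other two.

Even a log this simple gives instructors the kind of process visibility Reggie describes above.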

Christine, who teaches university-level history and writing courses, has abandoned AI detection tools entirely:

“Our institution gave up on detectors – too many false positives. Instead, I encourage ‘AI-enhanced with original’ work and teach students to verify everything. The focus should be on building skills, not catching cheaters.”

Research help: engagement over automation

One of the clearest patterns in our interviews was the distinction educators make between mechanical and engaged AI use for research. While 78% of instructors welcome AI assistance in finding sources, they unanimously reject what several called “bibliography automation.”

Jane, who teaches diagnostic radiography in the UK and has successfully used AI tools for literature reviews, explains the difference:

“AI that helps students understand what they’re looking for, decode academic terminology, or identify research gaps? That’s actually teaching them to be better researchers. But AI that just spits out a list of sources without engagement? That’s not helpful.”

Her approach includes requiring declarations for AI use and demonstrating how to decode academic terms – a practice that has proven successful in maintaining research integrity while leveraging AI capabilities.

Successful research integration requires what educators term “engagement gates” – requirements that students demonstrate understanding and interaction with AI-generated sources before proceeding to drafting. These include:

  • Source annotation requirements
  • “Why this source?” explanations
  • Compare-and-contrast exercises between multiple sources
  • Verification of source accessibility and legitimacy

The Ithaka S+R 2024 study[3] of 2,654 faculty members supports this approach, finding that only 42% of instructors completely prohibit student use of generative AI, while the majority focus on establishing appropriate use guidelines rather than blanket restrictions.

Bar chart titled “Figure 7: In which of the following ways have you yourself encouraged or allowed students to use generative AI in your courses?” showing percentage of respondents for each activity:
I do not allow students to use generative AI: 42%.
Brainstorming ideas: 37%.
Drafting and/or editing written assignments: 23%.
Creating outlines: 23%.
As a study guide: 21%.
Conducting research: 17%.
Creating images, music, or visualizations: 12%.
Language instruction: 7%.
Writing code: 6%.

Source: Ithaka S+R

The humanizer problem: ethics over evasion

Perhaps our most concerning finding relates to AI “humanization” tools designed to make AI-generated text appear more human-written. Multiple educators reported awareness that students use these tools specifically to evade AI detection.

Amy, who teaches at a university that generally bans AI unless instructor-approved, has observed this pattern firsthand:

“I establish baseline writing samples through handwritten work early in the semester, so I can track changes in student voice. I’ve definitely noticed students using humanizers to disguise AI-generated content, which defeats the educational purpose entirely.”

Rather than relying on detection software – which 73% of our interviewed educators distrust due to false positive concerns – most instructors prefer pattern recognition and baseline comparison. They establish student writing voices early through in-class writing samples and track dramatic stylistic changes over time.

“Humanizer tools are particularly problematic because they’re designed for evasion rather than learning,” notes Reggie, reflecting a sentiment shared across interviews. “The AI tools that work ethically are the ones that make the collaboration visible, not invisible.”

Grading and feedback: teaching, not just scoring

When it comes to AI-assisted grading and feedback, educators draw clear lines between supportive and replacement functions. The overwhelming preference is for AI that explains the “why” and “how” of improvements rather than simply providing corrections.

Christine has successfully integrated AI into her feedback workflow:

“I use AI to accelerate first-round feedback, but I always teach students to verify the suggestions. Rubric-based feedback that shows students not just what’s wrong, but how to fix it – that’s where AI becomes pedagogically valuable.”

Successful implementations include:

  • Explanatory feedback that connects suggestions to writing principles
  • Progressive disclosure where students must demonstrate understanding before receiving the next level of guidance
  • Multiple revision cycles that show learning progression over time
  • Student reflection requirements on feedback received and changes made

The approach aligns with research from the Digital Education Council[4] showing that 61% of faculty have used AI in teaching, though 88% of those users report only minimal use, indicating careful rather than wholesale adoption.

The citation revolution: AI as academic source

A surprising finding from our research was the rapid adoption of AI citation practices. Though many were initially resistant, 84% of interviewed educators now require students to cite AI tools when used, treating them as academic sources rather than invisible assistance.

Natalie, who teaches business and social work at an AI-integrated institution, has implemented comprehensive citation requirements:

“Students must cite the prompts and tools they use on their reference pages. We enforce this through our code of conduct with graded penalties, but once students understand the expectation, compliance is high.”

Amy takes a similar approach: “I require APA citations for AI use, just like any other source. This opens up important conversations about source reliability and the difference between information and analysis.”
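
For illustration, APA’s published guidance on citing generative AI treats the developer as the author and the model as the work. A reference entry in that spirit – with the version, date, and URL shown here purely as placeholders – might look like:

OpenAI. (2025). ChatGPT (version used for the session) [Large language model]. https://chat.openai.com/

Under standard APA conventions, the matching in-text citation would simply be (OpenAI, 2025) wherever AI-assisted material appears.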

The most sophisticated approaches include citation of specific prompts used, acknowledgment of the extent of AI contribution, and reflection on the appropriateness of AI assistance for different parts of the assignment.

Disciplinary differences: STEM vs. humanities perspectives

Our research revealed significant disciplinary variations in AI acceptance and integration strategies, consistent with findings from BestColleges[5] showing that STEM fields demonstrate 40-60% higher acceptance rates than humanities disciplines.

Horizontal bar chart titled “College Student AI Use by Major,” showing AI use rates:
Business Majors: 63%
STEM Majors: 60%
Humanities Majors: 52%

Source: BestColleges

STEM educators tend to view AI as:

  • A research acceleration tool
  • Support for technical writing clarity
  • Help with data interpretation and visualization
  • Assistance with literature review organization

Humanities educators show more concern about:

  • Preservation of critical thinking skills
  • Authenticity of student voice
  • Quality of argumentation and analysis
  • Impact on close reading abilities

However, both groups share common ground on core principles: student agency, transparency, and learning enhancement over task completion.

Gwooyim, who teaches ethics and justice studies with a focus on Indigenous and marginalized communities, articulates the balance:

“Grammar help, clarification, outlines, and literature search are acceptable. It becomes problematic when AI eclipses the student’s voice or critical thinking. We need to normalize transparent citation while ensuring equity – not all students have the same digital literacy background.”

Student perspectives: collaboration over cheating

When we asked educators about student attitudes, a consistent theme emerged: students generally want to use AI ethically but need clear guidance on appropriate boundaries.

“My students aren’t trying to cheat – they’re trying to succeed,” observed Rachel. “When I provide clear guidelines about what kinds of AI help are appropriate for each assignment, compliance rates are excellent. The key is involving students in policy creation rather than imposing rules from above.”

This observation aligns with broader research on student behavior. The King’s Business School case study[6] revealed that 74% of students failed to declare AI use despite mandatory requirements, suggesting that overly restrictive policies may drive AI use underground rather than promote ethical practices.

Educators reported that students prefer educational approaches over punitive ones, with 89% noting increased honesty when AI policies focus on learning outcomes rather than detection and punishment.

The evolution of assessment: beyond the essay

Perhaps the most significant change we observed is the evolution of assessment methods. Educators are rapidly moving beyond traditional essay assignments toward formats that naturally integrate AI while maintaining learning objectives.

Jeremy, who maintains strict AI bans in his courses, acknowledges the challenge:

“I use replication tests – prompting AI with the same assignment requirements and comparing outputs – to catch violations. But I recognize this approach isn’t sustainable long-term. The future probably lies in redesigning assignments rather than policing them.”

Emerging assessment approaches include:

  • Process portfolios showing research, drafting, and revision stages
  • Comparative analysis where students evaluate AI-generated content
  • Debate preparation using AI for research and counter-argument development
  • Multimedia presentations combining AI research with original analysis
  • Peer review exercises where students critique AI-assisted work

Hamza, who teaches business management and uses a daily stack of AI tools including Perplexity for research and Gamma for presentations, advocates for visible workflow documentation:

“Students should be able to show their planning process, their iterations, their decision-making. That’s what demonstrates learning, not just the final product.”

Institutional support: policies that actually work

Our research identified key characteristics of successful institutional AI policies, consistent with EDUCAUSE[7] findings that only 23% of institutions had AI-related acceptable use policies in place as of 2024 – but those that did showed remarkable sophistication.

Clear guidelines with flexibility: Institutions providing specific examples while allowing course-level customization see higher compliance rates and faculty satisfaction.

Education over enforcement: Universities emphasizing AI literacy training rather than detection software report more positive outcomes.

Faculty involvement: Policies developed with significant faculty input through shared governance show greater adoption and effectiveness.

Regular updates: Institutions with quarterly or semester-based policy reviews adapt more successfully to technological changes.

The most successful institutions have moved beyond the initial panic response documented in late 2022, when ChatGPT’s release created immediate disruption across higher education, leading to hasty bans and network blocks.

The transparency imperative: what educators really want

These insights reveal specific opportunities for AI writing platforms to better serve educational needs. Across interviews, educators consistently emphasized the need for transparency features that make AI contributions visible to both students and instructors.

“Tools that can show exactly what the AI contributed versus what the student wrote – that addresses my primary concern about academic integrity,” explains Reggie, whose positive experience with transparent AI writing platforms like Litero AI has shaped his recommendations to students.

Other high-priority features include:

  • Engagement requirements that prevent students from bypassing the learning process
  • Citation integration that automatically formats AI usage according to academic standards
  • Instructor dashboards providing aggregate insights into student AI usage patterns
  • Source verification ensuring AI-generated references are legitimate and accessible

As Jane noted after successfully using AI tools for literature review work:

“The best AI writing tools won’t be the ones that can’t be detected – they’ll be the ones that make learning visible and support proper academic practices like citation and source engagement.”

Looking ahead: collaboration over prohibition

Our research suggests higher education is at a critical juncture. Institutions that continue to rely on blanket restrictions risk losing students to more AI-forward competitors while missing opportunities to prepare graduates for AI-integrated workplaces.

The evidence strongly supports nuanced, educational approaches that embrace AI’s potential while maintaining academic integrity through transparency, skill development, and authentic assessment design.

As more educators experiment with AI integration, several trends are emerging:

Collaborative policy development involving students, faculty, and administration produces more effective and sustainable approaches than top-down mandates.

Assignment redesign that naturally incorporates AI while preserving learning objectives shows more promise than AI-proofing strategies.

Skill development focus on AI literacy, prompt engineering, and critical evaluation of AI output prepares students for professional contexts.

Process documentation that makes thinking visible benefits both learning assessment and academic integrity verification.

The conversation has shifted from “How do we stop students from using AI?” to “How do we teach students to use AI responsibly and effectively?” This represents not just a policy change, but a fundamental reimagining of what learning looks like in an AI-integrated world.

Digital Education Council research[8] shows that 75% of faculty who regularly use AI tools believe students need AI skills for professional success, supporting this pedagogical evolution.

Gwooyim captures the broader implications:

“AI should be a tool to build thinking, not replace it. When we frame it that way – as support for learning rather than automation of learning – both students and faculty are more comfortable with integration.”

For AI writing tools, this creates both opportunity and responsibility. The platforms that will earn educator trust and recommendation are those that prioritize learning enhancement over task completion, transparency over invisibility, and student agency over automation.

The 7% who allow unrestricted AI use may grab headlines, but the 71% who thoughtfully integrate AI with guidelines represent the true future of educational technology – one where human intelligence and artificial intelligence collaborate in service of learning.

This research was conducted by Litero AI through surveys and interviews with college educators from August-September 2025. Litero AI provides transparent, ethical writing assistance that prioritizes student learning and maintains academic integrity through clear authorship tracking and educational scaffolding.
