Litero AI in Education Codex
Table of Contents
- Core statement
- The six Codex principles
- For students: what this means for your learning
- FAQ
- For educators: what this means for assessment evolution
- For institutions: policies and implementation
- Academic policy statements
- Implementation guide for educators
- Syllabus policy kit (templates)
We're living through the calculator moment for thinking
We are witnessing the most significant shift in human learning in the 21st century. Just as calculators transformed mathematics education from arithmetic memorization to conceptual understanding, AI is transforming all intellectual work from typing to critical synthesis.
We believe the path forward lies in thoughtful integration of AI rather than prohibition.
The question is not whether students will use AI. They already do. The question is whether we'll teach them to use it brilliantly or force them to use it in shadows.
This is our position on how education should evolve — not in decades, but now.
Core statement: AI collaboration is the future of human intelligence
Human intelligence has always been augmented. We think with language, calculate with tools, and remember with books. AI is the next step in that lineage, and likely the most consequential.
The goal of education is to prepare students for a world where thinking, creating, and problem-solving happen in collaboration with AI. Graduates who learn model-assisted thinking will ship better work faster and make fewer avoidable errors. They should be evaluated on the quality of that collaboration: how they frame problems, iterate, verify, and exercise judgment in real contexts.
We draw a clear line between misuse and legitimate tool use: disclose AI assistance (tool used and general purpose), verify all factual claims, and take full responsibility for the final work's accuracy and integrity.
The point isn’t human versus AI; it’s human with AI. Learning to prompt, iterate, critique, and synthesize with models often demands deeper subject understanding than traditional recall-heavy tasks ever measured.
Authorship is framing the problem, making an original contribution, verifying claims, and owning the result.
The six Codex principles
1. The human seed
Every AI collaboration starts with a human seed.
Students bring the question, thesis, constraints, data, or example; models propose directions; students iterate, verify, and make the final call. No seed, no synthesis. The original contribution can be a new idea, structure, critique, dataset, method, or design choice.
2. The specialization
Writing from scratch is becoming a specialized skill like calligraphy — beautiful, but not essential.
The core skill is evaluating, editing, and synthesizing information. Students should learn to be excellent editors and critics of AI-generated content, not necessarily excellent generators of raw text.
3. The privacy and IP
Respect institutional policies and intellectual property when using AI tools.
Students should never upload copyrighted material, confidential data, or sensitive information to AI tools without proper authorization. Use enterprise or institutional AI services when available, and understand that free AI tools may use your inputs for training. When in doubt, redact sensitive information or consult your institution's data governance policies.
4. The assessment evolution
The future of academia is open-book, open-AI, but closed-communication during assessments.
Students should have access to all tools they'll use in their careers, but work independently during evaluation. The focus shifts from "what do you know?" to "what can you do with what you know?"
5. The transparency advantage
Students who learn to use AI openly and thoughtfully will outperform those who hide it or avoid it entirely.
Transparency allows for feedback, improvement, and skill development. Prohibition forces underground usage without guidance, creating worse outcomes for everyone.
6. The intelligence augmentation
Human intelligence augmented by AI represents a new form of thinking, not a replacement of thinking.
The goal is not human versus AI, but human with AI. Prompting AI effectively requires deeper subject mastery than most people realize. The most successful students and professionals will be those who learn to think in collaboration with artificial intelligence.
For students: what this means for your learning
Authorship = ownership: You are fully responsible for the accuracy, reliability, and ethical quality of your work, including any content generated by AI. You don't own the model. You own your submission: the claims, the choices, and the consequences. In practice:
- Check cited facts against sources you actually read.
- Re-run calculations.
- Link to sources where possible.
- Remove anything you can't verify.
- Ensure accuracy before publishing AI-generated information.
Privacy by restraint: Don’t input sensitive personal or third-party data; redact by default.
Use AI openly and improve constantly. Don't hide your AI usage—develop it as a skill. Learn to prompt effectively, iterate thoughtfully, and critique outputs rigorously.
Focus on developing AI-resistant skills. Oral communication, live problem-solving, creative synthesis, and critical evaluation become more valuable, not less.
Become an excellent editor and critic. Your value is not in generating content from nothing, but in directing, refining, and improving AI-generated work toward excellence.
Document your process. Show your thinking, your iterations, and your improvements. The process of working with AI is often more valuable than the final output.
Prepare for the real world. Every job you'll have will involve AI collaboration. Students who graduate without these skills will be at a significant disadvantage.
FAQ
Isn’t using AI cheating?
Do I have to disclose everything?
How do I cite AI?
What about AI detectors?
Can I use AI for research?
For educators: what this means for assessment evolution
Design AI-native assignments. Create projects that assume AI collaboration and evaluate students on their ability to direct, critique, and improve AI outputs.
Implement live assessment. Oral defenses, real-time problem-solving, and in-person critique sessions test understanding in ways that AI cannot replicate.
Focus on unique datasets. Use local data, current events, or institution-specific information that requires students to apply learning to novel contexts.
Teach AI literacy explicitly. Don't assume students know how to use AI well. Effective prompting, output evaluation, and iterative improvement are teachable skills.
Embrace the transition. Educational institutions that lead this change will attract better students and produce more capable graduates.
For institutions: policies and implementation
How leading universities handle AI (research as of August 2025)
Major universities have developed comprehensive policies governing student use of generative AI tools like ChatGPT and Claude in academic work, with approaches ranging from explicit permission frameworks to treating AI as unauthorized assistance under existing academic integrity codes. Leading institutions demonstrate markedly different philosophical approaches while consistently emphasizing disclosure, faculty discretion, and student responsibility for accuracy.
Stanford's analogical approach treats AI like human assistance
Stanford University has established one of the clearest baseline policies through its Office of Community Standards. The Board on Conduct Affairs policy, adopted February 16, 2023, provides the fundamental framework: "Absent a clear statement from a course instructor, use of or consultation with generative AI shall be treated analogously to assistance from another person. In particular, using generative AI tools to substantially complete an assignment or exam (e.g. by entering exam or assignment questions) is not permitted."
Students must acknowledge AI use beyond incidental applications: "Students should acknowledge the use of generative AI (other than incidental use) and default to disclosing such assistance when in doubt." The policy explicitly grants individual faculty authority: "Individual course instructors are free to set their own policies regulating the use of generative AI tools in their courses, including allowing or disallowing some or all uses of such tools."
Stanford's Graduate School of Business implements a particularly permissive approach that prohibits instructors from banning AI in take-home work: "MBA/MSx courses: Instructors may not ban student use of AI tools for take-home coursework, including assignments and exams. Instructors may choose whether to allow student use of AI tools for in-class work, including exams." The university provides the Stanford AI Playground as its recommended secure platform for student AI interactions.
Harvard implements school-specific policies with common principles
Harvard University delegates AI policy development to individual schools while maintaining university-wide guidelines. The foundational principle emphasizes faculty discretion: "Faculty should be clear with students they're teaching and advising about their policies on permitted uses, if any, of generative AI in classes and on academic work."
Harvard Graduate School of Education has developed the most detailed student policy, explicitly prohibiting AI for assignment creation: "Unless otherwise specified by your instructor, it is a violation of the HGSE Academic Integrity Policy to use generative AI to create all or part of an assignment for a course (e.g., a paper, memo, presentation, or short response) and submit it as your own." However, HGSE permits exploratory uses: "Permissible uses of generative AI in HGSE coursework include seeking clarification on concepts, brainstorming ideas, or generating scenarios that help contextualize what you are learning."
All Harvard schools require detailed disclosure when AI use is permitted: "For any permitted use of GenAI tools, you must acknowledge and document that use in your assignment submission by explaining what tool(s) you used, prompts you provided (if applicable), and how you integrated the output into your work."
Carnegie Mellon applies existing academic integrity frameworks
Carnegie Mellon University explicitly addresses AI through its current Academic Integrity Policy rather than creating separate guidelines. The university treats AI tools as potentially unauthorized assistance: "CMU's academic integrity policy already implicitly covers such tools, as they may be considered a type of unauthorized assistance," according to the Office of Community Standards and Integrity.
Individual instructors retain complete authority to define acceptable AI use: "This policy is intentionally designed to allow instructors to define what is 'authorized' vs. 'unauthorized' and what constitutes 'plagiarism' and 'cheating.' We recommend that instructors carefully examine their own policies to make these distinctions clear in both writing and verbally to students."
CMU requires comprehensive citation when AI assistance is authorized: "Any such use must be appropriately acknowledged and cited, following the guidelines established by the APA Style Guide, including the specific version of the tool used. Submitted work should include the exact prompt used to generate the content as well as the AI's full response in an Appendix."
The university provides extensive policy templates ranging from complete prohibition to encouragement. The prohibition template states: "Passing off any AI generated content as your own (e.g., cutting and pasting content into written assignments, or paraphrasing AI content) constitutes a violation of CMU's academic integrity policy."
Oxford requires explicit authorization for any AI use
Oxford requires explicit pre-authorization for any AI use in assessments: "Artificial intelligence (AI) can only be used within assessments where specific prior authorisation has been given, or when technology that uses AI has been agreed as reasonable adjustment for a student's disability."
When authorization is granted, Oxford requires comprehensive disclosure: "Where the use of generative AI in preparing work for examination has been authorised by the department, faculty or programme, you should give clear acknowledgment of how it has been used in your work."
Beyond assessments, Oxford provides practical guidance for students using generative AI tools through six specific recommendations that emphasize verification, strategic prompting, and data security. The university stresses the critical importance of fact-checking, advising students to "always cross-check AI generated outputs against established sources to verify accuracy and identify erroneous information." Oxford also recommends sophisticated prompting techniques, encouraging students to "give significant contextual information when asking questions or prompts and ask several follow-up questions to refine responses" and to "use personae in your prompts e.g. 'I am an undergraduate student who is revising for a first-year calculus exam'."
Common themes across institutional approaches
Faculty autonomy emerges as the dominant theme, with all four universities emphasizing instructor discretion in setting course-specific AI policies. As one set of instructor guidelines puts it: "Regardless of your thoughts on using GenAI in your subject, convey those thoughts, the resultant subject policies, and the consequences of their violation with your students at the beginning of the semester."
Disclosure requirements appear universal when AI use is permitted, though specificity varies significantly. Stanford requires acknowledgment of "other than incidental use," while Carnegie Mellon mandates including "the exact prompt used to generate the content as well as the AI's full response in an Appendix."
Student responsibility for accuracy remains constant across institutions. Harvard's guidance states: "You are responsible for any content that you produce or publish that includes AI-generated material," while Oxford emphasizes that "users of these tools must recognise that they retain responsibility for the accuracy of what they write."
Conclusion
These policies reveal universities actively balancing educational innovation with academic integrity concerns. The spectrum from Oxford's prior-authorization requirement to Stanford GSB's prohibition on instructor AI bans demonstrates the ongoing evolution of institutional approaches. Institutions emphasize transparent communication between faculty and students, comprehensive disclosure when AI use is authorized, and ultimate student responsibility for work accuracy.
These policies will likely continue evolving as institutions gain experience with AI's educational applications and assess their effectiveness in maintaining academic integrity while promoting learning.
Academic policy statements
Suggested policies for institutional adoption:
Students are the primary authors of their work. They remain accountable for the accuracy, reliability, and ethical quality of all content, including AI-generated material.
Use AI as an assistant, not a substitute. AI should enhance your thinking and productivity, not replace your intellectual engagement with the subject matter.
Disclose AI assistance transparently. Note tools used and general purpose in your work. This builds trust and demonstrates responsible use.
Verify all AI-generated information. Check facts, re-run calculations, and ensure accuracy before submitting or publishing any AI-assisted work.
Respect intellectual property and privacy. Never upload copyrighted material, confidential data, or sensitive information to AI tools without proper authorization.
Maintain academic integrity standards. AI collaboration must still meet ethical standards for original thinking, proper attribution, and honest representation of your work.
Develop AI literacy as a core skill. Learn to prompt effectively, evaluate outputs critically, and iterate thoughtfully — these are essential professional competencies.
Implementation guide for educators
Downloads: AI Policy Implementation Checklist for Educators (PDF, 57.1 KB; DOCX, 7.7 KB)
Step 1: Foundation
- Review institutional AI policies and legal requirements
- Assess current course learning objectives and outcomes
- Identify which assignments benefit from AI collaboration
Step 2: Policy Development
- Draft clear AI use guidelines for your course
- Define disclosure requirements (tool, purpose, extent of use)
- Establish consequences for non-disclosure vs. misuse
Step 3: Assignment Design
- Create AI-native assignments that assume collaboration
- Design unique datasets or local case studies
- Include verification and defense components
Step 4: Assessment Strategy
- Implement live assessment elements (oral exams, real-time problem solving)
- Develop rubrics that evaluate AI collaboration quality
- Plan for process documentation and iteration evidence
Step 5: Student Education
- Teach effective prompting and output evaluation
- Demonstrate verification techniques and fact-checking
- Practice disclosure formats and academic integrity standards
Step 6: Ongoing Management
- Monitor and adjust guidelines based on experience
- Collect student feedback on AI use effectiveness
- Stay updated on institutional policy changes
Quick Reference:
- Disclosure template: "AI assistance: [Tool] used for [purpose]"
- Red flags: No verification, hidden usage, outsourced thinking
- Success markers: Critical evaluation, iteration evidence, original analysis
At Litero, we are unwavering in our commitment to academic integrity and the success of every student.
Copyright infringement
Litero respects the intellectual property rights of all content creators. Our users are prohibited from uploading to Litero any content that may violate another party's intellectual property rights. Copying content from third-party sources without permission, whether directly or through close paraphrasing, constitutes infringement and is therefore prohibited on Litero.
By submitting content to Litero, you acknowledge that:
- You own the copyrights to the content or have express permission from the copyright owners to use and upload the content;
- Your uploading of the content will not violate any law, regulation, or ethics code, including, if applicable, your school’s academic integrity policy;
- Uploading the content will not violate Litero’s Terms of Use or this Academic Integrity Policy.
Syllabus policy kit (templates)
Baseline: AI allowed by default + brief disclosure
Generative AI tools are allowed in this course for brainstorming, outlining, clarity edits, code scaffolding, and similar support. You remain responsible for accuracy, originality, and ethics.
Disclosure (2–4 lines in your submission):
- Tools used and for what
- What you kept/changed
- How you verified claims/citations
Narrow exceptions:
For explicitly listed closed assessments (e.g., in-class quizzes, proctored exams), AI assistance isn’t allowed. These will be clearly labeled in advance.
Assessment note:
Short walkthroughs may be used to confirm authorship and understanding.
Enhanced: AI allowed + disclosure + verification + live defense
Use AI as an assistant, not a substitute. You must critique model output, verify information you rely on, and be able to explain your work without AI.
Requirements:
- Include a short process note (tools, prompts, key iterations, verification steps)
- Independent verification of any facts/citations used
- Preparedness for a brief viva or whiteboard/code walkthrough
Privacy:
Do not upload sensitive or third-party data without permission and redaction.
AI-native: AI use required as a skill
This course treats AI collaboration as a core competency. You will compare model drafts to your revisions, justify changes, and document checks.
Requirements:
- Provide a version diff or annotated critique of a model draft
- Verify all claims and cite sources you actually read
- Submit a concise process artifact (disclosure + key prompts/decisions)
- Expect a short live defense
Rubric emphasis:
Framing, critique, verification, and communication over surface polish.
This is the Litero Codex. This is how we believe the future of learning should unfold.