Markly

AI Responsibility

Last updated: 19 March 2026

Teachers are always in charge

Markly uses AI to assist with marking — it does not replace your professional judgement. Every piece of feedback the AI generates can be reviewed, edited, or discarded before anything reaches a student. You decide what gets shared, and when.

Professional accountability for student assessment always stays with you.

What the AI actually does

When you upload a set of worksheets, the AI:

  • Reads the handwritten student responses
  • Compares them against the rubric or mark scheme you have set
  • Generates a draft feedback comment and score for each student

That draft is then presented to you for review. Nothing is finalised or exported without your action.

Honesty about limitations

AI marking is not perfect, and we want to be upfront about where it struggles:

  • Handwriting: early years writing, rushed work, or low-quality scans can be misread.
  • Ambiguous answers: the AI may mark a borderline response differently to how you would.
  • Context it cannot see: if a student was absent for part of a topic, or has a specific learning need not reflected in the rubric, the AI will not account for that.
  • Unusual formats: diagrams, tables, or mixed-language responses may not be handled well.

If you spot an error, edit the feedback directly — and please use the feedback button to let us know. It helps us improve.

Student data & safeguarding

  • We collect first names and last initials only — no full student names, dates of birth, or other identifiers.
  • Worksheet images are held only for the duration of marking and permanently deleted immediately afterwards. We do not store them.
  • Student data is never used to train AI models — not ours, not any third party's.
  • Assessment data (scores, feedback) is stored securely and is only accessible to you.

See our Privacy Policy for full details on data handling, retention, and your rights under UK GDPR.

Fairness & bias

The AI marks student work — it is not aware of who the student is. Feedback is anchored to the rubric you provide, which constrains what the AI can say and reduces the scope for bias.

SEND accommodations are configured by you, not inferred by the AI. If a student needs adapted feedback language or a different assessment lens, you set that explicitly in their profile.

We monitor AI outputs for systematic patterns — for example, if certain types of handwriting or question formats are consistently handled poorly — and we update the system when we find them.

Our commitments

  • We will tell you about any significant changes to how the AI works before they take effect.
  • We will publish any changes to the list of third-party services that process student data.
  • We will never use student work to improve AI models without explicit, informed consent.
  • We welcome feedback — if the AI is getting something wrong, tell us.