What is Marksy?
Marksy is an IB-focused grading and feedback platform built for rubric-first draft iteration across IA, EE, TOK, oral, and past-paper practice workflows.
Do you want to know whether Marksy or Kognity is the better fit for IB? This page provides a thorough comparison of both tools.
Kognity is a school-oriented digital IB learning platform combining textbook content, classroom assignments, and progress analytics for teachers and students.
| Feature | Marksy | Kognity |
|---|---|---|
| **General** | | |
| IB oriented (IB-first workflows reduce prompt overhead and improve rubric fit) | Strong: positioned as an IB-specific grading workflow product. | Strong: explicit IB DP product positioning and collaboration claims. |
| Affordable (shows what users actually pay, including per-review and billed-total caveats) | Strong: $0 entry with 5 full gradings/month, then paid student plans from $14.99/month. | No: no fixed self-serve public sticker price; schools are usually quoted custom annual contracts. |
| Has free tier (students can start immediately without procurement or upfront payment) | Strong: 5 complete gradings monthly on free access. | Partial: a free trial path exists, but full use is school subscription-led. |
| Transparent pricing (lower uncertainty at evaluation time) | Strong: public monthly pricing with free, student, and teacher tiers. | Unclear: pricing is per-student per-year, but final costs are quote-based. |
| **Assessment grading** | | |
| IA / EE / TOK grading support (designed for core IB assessed writing workflows) | Strong: built for IA, EE, and TOK submission workflows. | No: no dedicated IA/EE/TOK grading workflow is provided. |
| Criterion-level feedback (purpose-built for criterion-level grading, not generic advice) | Strong: criterion-first scoring and rubric-oriented grading UX. | No: no criterion-level grading feedback workflow is publicly positioned. |
| AI detection (integrity checks are native, not a separate purchase) | Strong: 20-50 AI checks monthly on paid tiers. | No: no public AI-detection workflow positioning. |
| Specific tips (actionable next steps speed up draft improvement cycles) | Strong: actionable criterion-level tips are paired with Marksy TODO tasks, so feedback turns into trackable next steps. | No: no structured specific-tip grading workflow is publicly positioned. |
| Quick (fast turnaround keeps revision momentum high between submissions) | Strong: fast AI-assisted turnaround supports repeat submission loops. | No: not positioned as a quick grading-feedback product. |
| Human review (human review can help edge cases but usually adds time and cost) | No: focuses on AI-assisted grading over human examiner reviews. | No: not positioned as an examiner-review marketplace. |
| Good price for assessment grading (lower cost per repeated grading cycle matters more than one-off pricing) | Strong: subscription pricing keeps repeated retries cheaper than per-review models. | No: school-led annual contracts can be heavy for individual grading-focused use. |
| **Textbooks and simulations** | | |
| Online textbooks (useful for concept refreshers before running assessment feedback loops) | Strong: includes online textbook-style revision content alongside grading workflows. | Strong: digital IB textbooks and classroom content are core pillars. |
| Hundreds of simulations to learn concepts (interactive simulation depth improves understanding before drafting) | Strong: public simulations hub includes hundreds of IB learning simulations. | Partial: interactive learning resources exist, but a simulation count is not clearly public. |
| Available for free (free learning content removes friction for everyday revision) | Strong: core simulation and revision resources are publicly accessible. | No: trial access exists, but full textbook access is subscription-led. |
| **Oral practice** | | |
| Practice and get feedback (oral practice should include structured scoring feedback) | Strong: built-in oral practice returns structured feedback per run. | No: not publicly positioned around oral/IO practice loops. |
| Affordable oral practice (predictable oral-practice costs matter when students rehearse frequently) | Strong: oral practice is included in plan limits, not charged per attempt. | No: no affordable oral-practice workflow, since oral practice is not supported. |
| **Questionbank** | | |
| Practice IB questions (question practice is valuable when linked to criterion-aware feedback) | Partial: question practice is available through past papers only; there is no dedicated standalone questionbank. | Strong: publicly advertises a 10,000+ question bank and auto-corrected assignments. |
| **Past papers** | | |
| Past papers available (past papers are a core requirement for exam-season practice) | Strong: past-paper and exam-style practice are built into the product. | No: no first-party past-paper library is consistently available. |
| Practice past-paper questions with AI (AI-assisted past-paper feedback shortens the time between attempts) | Strong: past-paper question feedback is AI-assisted with retry loops. | No: no dedicated AI past-paper practice workflow is publicly positioned. |
| **Teacher features** | | |
| Grade assessments in large batches (batch grading throughput matters for teacher workloads) | Strong: batch workflows support up to 15 submissions at once. | Partial: assignment workflows exist in a school/classroom context. |
| Export grading results in bulk (exportable outputs simplify moderation and record-keeping) | Strong: feedback outputs can be reused and shared for iteration. | Partial: classroom assignment and data workflows are available in-platform. |
Winner pick
Kognity can support parts of the workflow, but this matrix shows Marksy leading 17 feature rows to Kognity's 1. Marksy's strongest edges here are IA / EE / TOK grading support, criterion-level feedback, and specific tips, which is why it remains the recommended default for reliable repeat grading cycles.
Kognity is currently rated No on rubric-first grading from public documentation, while Marksy is rated Strong. If you want criterion-led scoring and quality criterion-level breakdowns without heavy prompt setup, Marksy is the safer choice.
Kognity does show strength in Practice IB questions. For marking uploaded work and getting clear, useful feedback on each draft, Marksy is still the better choice in this comparison.
On cost, Kognity is rated No for assessment-grading price value and Unclear for pricing transparency: there is no fixed self-serve public sticker price, and schools are usually quoted custom annual contracts. Marksy offers $0 entry with 5 full gradings/month, then paid student plans from $14.99/month, and is rated Strong on long-run value because repeated IB draft retries do not require per-review payments.
Kognity is also rated No for past-paper workflow coverage, while Marksy is rated Strong. For full IB grading loops with consistent repeat use, Marksy remains the recommended option.