Upload your IA draft
Start by dropping in your coursework PDF. We built this flow to mirror how students prepare final submission drafts.
Upload your Computer Science IA draft and get instant feedback aligned with official IB criteria.
Follow the same rubric-first flow students use to move from a raw draft to a submission-ready version.
Marksy maps your draft against the rubric so you can see where marks are gained or lost in each criterion.

Every important scoring decision is anchored to your writing so revision is evidence-based, not guesswork.

Get structured next actions so you can move from draft to stronger markband performance in the right order.

For class-wide workflows, the same logic extends to batch marking so feedback stays consistent across submissions.

Keep one grading system across IA, EE, TOK, and subject variants so your preparation process stays consistent.

Use this guide to keep the client need, success criteria, implementation, and testing connected so the product reads as one coherent development story.
Recommended Length
2,000 words max
Build Timeline
4-6 weeks: scenario, build, test, evaluate
Anchor Question
Can a moderator follow how your product meets the client need from planning through evaluation?
Want the full playbook? Read the Computer Science IA Guide.
Use each criterion as a checklist for revision. Strong drafts make the scoring evidence obvious, not implied.
Criterion A: Planning
Examiner focus: The appropriateness of the scenario for investigation, rationale for product choice, and quality of success criteria.
Top-band move: An appropriate scenario for investigation for an identified client, providing evidence of consultation, is described. The rationale for choosing the proposed product is justified and includes a range of appropriate criteria for evaluating the success of the product.
Common penalty: An appropriate scenario for investigation for an identified client is stated. The rationale for choosing the proposed product is identified. The criteria for evaluating the success of the product are generally inappropriate.
Criterion B: Solution Overview
Examiner focus: Completeness and clarity of the record of tasks, design overview, and test plan.
Top-band move: The record of tasks and the design overview, including an outline test plan, are detailed and complete. From this information it is clear how the product was developed.
Common penalty: The record of tasks and the design overview, including an outline test plan, are limited. From this information it is difficult to see how the product was developed.
Criterion C: Development
Examiner focus: Complexity and ingenuity of techniques used, appropriateness of tools, and explanation of techniques.
Top-band move: The use of techniques demonstrates a high level of complexity and ingenuity in addressing the scenario identified in criterion A. It is characterized by the appropriate use of existing tools. The techniques are adequate for the task and their use is explained. All sources are identified.
Common penalty: The use of techniques demonstrates a low level of complexity and ingenuity or does not address the scenario identified in criterion A. It is characterized by limited use of existing tools. There is no explanation of why the techniques are used or how they are adequate for the task. Sources are used but are not identified.
Criterion D: Functionality and Extensibility of Product
Examiner focus: Product functionality (as shown in the video) and potential for future expansion/modification.
Top-band move: The video shows that the product functions well. Some expansion and modifications of the product are straightforward.
Common penalty: The video shows that the product functions partially. Some expansion and modification of the product is possible but difficult.
Criterion E: Evaluation
Examiner focus: Evaluation against success criteria, incorporation of client feedback, and quality of improvement recommendations.
Top-band move: The product is fully evaluated against the success criteria identified in criterion A including feedback from the client/adviser. Recommendations for further improvement of the product are realistic.
Common penalty: There is a limited attempt to evaluate the product against the success criteria identified in criterion A. There is limited evidence of feedback from the client/adviser and any recommendations for further improvement are trivial or unrealistic.
Match your draft to the descriptors below to identify the smallest edits that can move you into a higher band.
Criterion A: Planning
Points 0
The response does not reach a standard described by the descriptors below.
Points 1-2
An appropriate scenario for investigation for an identified client is stated. The rationale for choosing the proposed product is identified. The criteria for evaluating the success of the product are generally inappropriate.
Points 3-4
An appropriate scenario for investigation for an identified client, providing evidence of consultation, is stated. The rationale for choosing the proposed product is partially explained and includes some appropriate criteria for evaluating the success of the product.
Points 5-6
An appropriate scenario for investigation for an identified client, providing evidence of consultation, is described. The rationale for choosing the proposed product is justified and includes a range of appropriate criteria for evaluating the success of the product.
Criterion B: Solution Overview
Points 0
The response does not reach a standard described by the descriptors below.
Points 1-2
The record of tasks and the design overview, including an outline test plan, are limited. From this information it is difficult to see how the product was developed.
Points 3-4
The record of tasks and the design overview, including an outline test plan, are partially complete. They provide a basic understanding of how the product was developed.
Points 5-6
The record of tasks and the design overview, including an outline test plan, are detailed and complete. From this information it is clear how the product was developed.
Criterion C: Development
Points 0
The response does not reach a standard described by the descriptors below.
Points 1-4
The use of techniques demonstrates a low level of complexity and ingenuity or does not address the scenario identified in criterion A. It is characterized by limited use of existing tools. There is no explanation of why the techniques are used or how they are adequate for the task. Sources are used but are not identified.
Points 5-8
The use of techniques demonstrates a moderate level of complexity and ingenuity in addressing the scenario identified in criterion A. It is characterized by some appropriate use of existing tools. There is some attempt to explain the techniques used and why they are adequate for the task. All sources are identified.
Points 9-12
The use of techniques demonstrates a high level of complexity and ingenuity in addressing the scenario identified in criterion A. It is characterized by the appropriate use of existing tools. The techniques are adequate for the task and their use is explained. All sources are identified.
Criterion D: Functionality and Extensibility of Product
Points 0
The response does not reach a standard described by the descriptors below.
Points 1-2
The video shows that the product functions partially. Some expansion and modification of the product is possible but difficult.
Points 3-4
The video shows that the product functions well. Some expansion and modifications of the product are straightforward.
Criterion E: Evaluation
Points 0
The response does not reach a standard described by the descriptors below.
Points 1-2
There is a limited attempt to evaluate the product against the success criteria identified in criterion A. There is limited evidence of feedback from the client/adviser and any recommendations for further improvement are trivial or unrealistic.
Points 3-4
The product is partially evaluated against the success criteria identified in criterion A including feedback from the client/adviser. Recommendations for further improvement of the product are largely realistic.
Points 5-6
The product is fully evaluated against the success criteria identified in criterion A including feedback from the client/adviser. Recommendations for further improvement of the product are realistic.
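The five criteria carry different weights (A, B, and E are worth 6 marks each, C is worth 12, and D is worth 4, for a 34-mark total). As a quick illustration of how criterion-level marks combine — this is a hypothetical sketch, not the grader's implementation — a draft's marks can be totalled like this:

```python
# Hypothetical helper showing how IB CS IA criterion marks combine.
# Maxima follow the markbands above: A=6, B=6, C=12, D=4, E=6 (total 34).
MAX_MARKS = {"A": 6, "B": 6, "C": 12, "D": 4, "E": 6}

def total_marks(awarded):
    """Sum awarded marks, validating each criterion against its maximum."""
    for criterion, mark in awarded.items():
        if criterion not in MAX_MARKS:
            raise ValueError(f"Unknown criterion: {criterion}")
        if not 0 <= mark <= MAX_MARKS[criterion]:
            raise ValueError(f"Criterion {criterion} mark out of range: {mark}")
    return sum(awarded.values())

# Example: a mid-band draft
draft = {"A": 4, "B": 3, "C": 7, "D": 3, "E": 4}
print(total_marks(draft), "/", sum(MAX_MARKS.values()))  # prints "21 / 34"
```

Because criterion C alone is worth as much as A and B combined, revision effort is usually best spent there first.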
Step 1
State the client need clearly and translate it into measurable success criteria before building anything.
Step 2
Record the task list, design overview, and test plan so the development path is easy to follow.
Step 3
Show why your techniques and tools are suitable, and make the source of every technique clear.
Step 4
Use the video and test evidence to judge functionality, extensibility, and how well the product meets the criteria.
Client consultation and success criteria are clear.
The solution overview explains how the product was developed.
Development choices are justified, not just listed.
Evaluation is tied back to the original success criteria.
Make success criteria measurable before coding.
Show why a technique was used, not just that it was used.
Use testing evidence to support every evaluation point.
The grader evaluates your submission against the active IB criteria for Computer Science and returns criterion-level marks with actionable feedback.
Yes. Most students use draft grading to identify weak criteria, revise, and re-check before final submission.
Yes. Teachers can upload multiple files in one batch from the bulk grading route for faster class-wide feedback.
Absolutely. By default, no one but you can access your uploaded files, though you can choose to make them shareable. Even then, you keep full control and can delete your files at any time, and your files are never used to train AI models. More information here.
Upload a single submission and get criterion-by-criterion feedback aligned to IB descriptors.
Open Single Grading
Process up to 15 files in one run and keep feedback consistent across your class.
View Bulk Plan