Computer Science IA Grading, Rubric Breakdown, and Markbands

Upload your Computer Science IA draft and get instant feedback aligned with official IB criteria.

How Computer Science IA Grading Works

Follow the same rubric-first flow students use to move from a raw draft to a submission-ready version.

1. Upload your IA draft

Start by dropping in your coursework PDF. We built this flow to mirror how students prepare final submission drafts.

Limit: 10 MB per file. Supported file type: PDF.

2. See criterion-level scoring immediately

Marksy maps your draft against the rubric so you can see where marks are gained or lost in each criterion.

IB criterion-by-criterion grading summary
Score breakdown with clear criterion-level performance signals.
3. Review rubric-linked evidence highlights

Every important scoring decision is anchored to your writing so revision is evidence-based, not guesswork.

Rubric-linked highlights in grading feedback
See exactly which text supports each criterion judgement.
4. Follow a prioritized revision checklist

Get structured next actions so you can move your draft into a stronger markband, tackling the highest-impact fixes first.

Prioritized to-do feedback list from grading
Actionable edits ordered by impact.
5. Use the same workflow at teacher scale

For class-wide workflows, the same logic extends to batch marking so feedback stays consistent across submissions.

Bulk grading results dashboard
Consistent rubric feedback for multiple files.
6. Stay covered across IB subjects

Keep one grading system across IA, EE, TOK, and subject variants so your preparation process stays consistent.

Wide range of IB subjects supported in Marksy
One rubric-first workflow across your IB workload.

Computer Science IA Assessment Guide Overview

Use this guide to keep the client need, success criteria, implementation, and testing connected so your writeup reads as one coherent development story.

Recommended Length

2,000 words max

Build Timeline

4-6 weeks: scenario, build, test, evaluate

Anchor Question

Can a moderator follow how your product meets the client need from planning through evaluation?

Want a full playbook format? Read the Computer Science IA Guide.

IB Computer Science IA Criteria Breakdown

Use each criterion as a checklist for revision. Strong drafts make the scoring evidence obvious, not implied.

Criterion A: Planning (6 marks)

Examiner focus: The appropriateness of the scenario for investigation, rationale for product choice, and quality of success criteria.

Top-band move: An appropriate scenario for investigation for an identified client, providing evidence of consultation, is described. The rationale for choosing the proposed product is justified and includes a range of appropriate criteria for evaluating the success of the product.

Common penalty: An appropriate scenario for investigation for an identified client is stated. The rationale for choosing the proposed product is identified. The criteria for evaluating the success of the product are generally inappropriate.

Criterion B: Solution Overview (6 marks)

Examiner focus: Completeness and clarity of the record of tasks, design overview, and test plan.

Top-band move: The record of tasks and the design overview, including an outline test plan, are detailed and complete. From this information it is clear how the product was developed.

Common penalty: The record of tasks and the design overview, including an outline test plan, are limited. From this information it is difficult to see how the product was developed.

Criterion C: Development (12 marks)

Examiner focus: Complexity and ingenuity of techniques used, appropriateness of tools, and explanation of techniques.

Top-band move: The use of techniques demonstrates a high level of complexity and ingenuity in addressing the scenario identified in criterion A. It is characterized by the appropriate use of existing tools. The techniques are adequate for the task and their use is explained. All sources are identified.

Common penalty: The use of techniques demonstrates a low level of complexity and ingenuity or does not address the scenario identified in criterion A. It is characterized by limited use of existing tools. There is no explanation of why the techniques are used or how they are adequate for the task. Sources are used but are not identified.

Criterion D: Functionality and Extensibility of Product (4 marks)

Examiner focus: Product functionality (as shown in video) and potential for future expansion/modification.

Top-band move: The video shows that the product functions well. Some expansion and modifications of the product are straightforward.

Common penalty: The video shows that the product functions partially. Some expansion and modification of the product is possible but difficult.

Criterion E: Evaluation (6 marks)

Examiner focus: Evaluation against success criteria, incorporation of client feedback, and quality of improvement recommendations.

Top-band move: The product is fully evaluated against the success criteria identified in criterion A including feedback from the client/adviser. Recommendations for further improvement of the product are realistic.

Common penalty: There is a limited attempt to evaluate the product against the success criteria identified in criterion A. There is limited evidence of feedback from the client/adviser and any recommendations for further improvement are trivial or unrealistic.

Computer Science IA Markbands and What They Mean

Match your draft to the descriptors below to identify the smallest edits that can move you into a higher band.

Criterion A: Planning (6 marks)

Points 0

The response does not reach a standard described by the descriptors below.

Points 1-2

An appropriate scenario for investigation for an identified client is stated. The rationale for choosing the proposed product is identified. The criteria for evaluating the success of the product are generally inappropriate.

Points 3-4

An appropriate scenario for investigation for an identified client, providing evidence of consultation, is stated. The rationale for choosing the proposed product is partially explained and includes some appropriate criteria for evaluating the success of the product.

Points 5-6

An appropriate scenario for investigation for an identified client, providing evidence of consultation, is described. The rationale for choosing the proposed product is justified and includes a range of appropriate criteria for evaluating the success of the product.

Criterion B: Solution Overview (6 marks)

Points 0

The response does not reach a standard described by the descriptors below.

Points 1-2

The record of tasks and the design overview, including an outline test plan, are limited. From this information it is difficult to see how the product was developed.

Points 3-4

The record of tasks and the design overview, including an outline test plan, are partially complete. They provide a basic understanding of how the product was developed.

Points 5-6

The record of tasks and the design overview, including an outline test plan, are detailed and complete. From this information it is clear how the product was developed.

Criterion C: Development (12 marks)

Points 0

The response does not reach a standard described by the descriptors below.

Points 1-4

The use of techniques demonstrates a low level of complexity and ingenuity or does not address the scenario identified in criterion A. It is characterized by limited use of existing tools. There is no explanation of why the techniques are used or how they are adequate for the task. Sources are used but are not identified.

Points 5-8

The use of techniques demonstrates a moderate level of complexity and ingenuity in addressing the scenario identified in criterion A. It is characterized by some appropriate use of existing tools. There is some attempt to explain the techniques used and why they are adequate for the task. All sources are identified.

Points 9-12

The use of techniques demonstrates a high level of complexity and ingenuity in addressing the scenario identified in criterion A. It is characterized by the appropriate use of existing tools. The techniques are adequate for the task and their use is explained. All sources are identified.

Criterion D: Functionality and Extensibility of Product (4 marks)

Points 0

The response does not reach a standard described by the descriptors below.

Points 1-2

The video shows that the product functions partially. Some expansion and modification of the product is possible but difficult.

Points 3-4

The video shows that the product functions well. Some expansion and modifications of the product are straightforward.

Criterion E: Evaluation (6 marks)

Points 0

The response does not reach a standard described by the descriptors below.

Points 1-2

There is a limited attempt to evaluate the product against the success criteria identified in criterion A. There is limited evidence of feedback from the client/adviser and any recommendations for further improvement are trivial or unrealistic.

Points 3-4

The product is partially evaluated against the success criteria identified in criterion A including feedback from the client/adviser. Recommendations for further improvement of the product are largely realistic.

Points 5-6

The product is fully evaluated against the success criteria identified in criterion A including feedback from the client/adviser. Recommendations for further improvement of the product are realistic.

How to Raise Your Computer Science IA Score

  1. Define the scenario

    State the client need clearly and translate it into measurable success criteria before building anything.

  2. Document the design

    Record the task list, design overview, and test plan so the development path is easy to follow.

  3. Explain the development

    Show why your techniques and tools are suitable, and make the source of every technique clear.

  4. Test and evaluate

    Use the video and test evidence to judge functionality, extensibility, and how well the product meets the criteria.
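To see how steps 1 and 4 connect, here is a minimal sketch (the function, data, and the 200 ms threshold are all hypothetical, not taken from any real IA): a measurable success criterion from planning can be expressed as a test, so the evaluation can cite recorded evidence rather than impressions.

```python
import time

# Hypothetical product function: linear search over client records.
def find_record(records, target_id):
    for record in records:
        if record["id"] == target_id:
            return record
    return None

# Hypothetical success criterion from planning: a lookup completes in
# under 200 ms for 10,000 records. Writing it as a test turns the
# criterion into reusable evidence for the evaluation.
def test_lookup_speed():
    records = [{"id": i, "name": f"client-{i}"} for i in range(10_000)]
    start = time.perf_counter()
    result = find_record(records, 9_999)  # worst case: last record
    elapsed_ms = (time.perf_counter() - start) * 1000
    assert result is not None
    assert elapsed_ms < 200, f"criterion not met: {elapsed_ms:.1f} ms"
    return elapsed_ms

elapsed = test_lookup_speed()
```

Recording the measured time alongside the criterion gives the evaluation a concrete number to discuss instead of a vague claim that the product "feels fast".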

Revision Checklist and Quick Wins

Client consultation and success criteria are clear.

The solution overview explains how the product was developed.

Development choices are justified, not just listed.

Evaluation is tied back to the original success criteria.

Make success criteria measurable before coding.

Show why a technique was used, not just that it was used.

Use testing evidence to support every evaluation point.
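To make the "show why, not just what" habit concrete, here is a hypothetical excerpt (the data structure choice and identifiers are illustrative, not from any real IA) that pairs a technique with a comment justifying it against the scenario:

```python
# Technique: dictionary keyed by student ID instead of a plain list.
# Why: the client looks records up by ID on every screen, and a dict
# gives O(1) average-case lookup versus O(n) for scanning a list,
# which matters once the records grow past a few thousand entries.
def build_index(records):
    return {record["id"]: record for record in records}

index = build_index([{"id": 7, "name": "Ada"}, {"id": 9, "name": "Alan"}])
# index[7] -> {"id": 7, "name": "Ada"}
```

A comment like this is exactly the kind of explanation the Development criterion rewards: it names the technique, the alternative, and the reason the scenario makes the choice appropriate.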

Computer Science IA Grading FAQ

How does the IB Computer Science IA grader score my work?

The grader evaluates your submission against the active IB criteria for Computer Science and returns criterion-level marks with actionable feedback.

Can I use this for early drafts and final versions?

Yes. Most students use draft grading to identify weak criteria, revise, and re-check before final submission.

Is bulk grading available for Computer Science?

Yes. Teachers can upload multiple files in one batch from the bulk grading page for faster class-wide feedback.

Is my submitted file private?

Yes. By default, no one other than you can access your uploaded files, though you can choose to make them shareable. Even then, you retain full control to delete your files at any time, and your files are never used to train AI models.

Single Draft

Grade One IA Now

Upload a single submission and get criterion-by-criterion feedback aligned to IB descriptors.

Open Single Grading
Teacher Workflow

Bulk Grade Multiple Submissions

Process up to 15 files in one run and keep feedback consistent across your class.

View Bulk Plan