Test Learner Solutions

You create assessment items to determine whether the learner submitted a correct answer. When you write tests, think about the types of errors learners commonly make, and include tests for those conditions. The following examples illustrate some of the more common conditions to consider when creating assessment items. The examples include suggestions for the messages to provide to learners as output after the tests run.

Example Problems

Example problems from MathWorks® staff are provided to help you write problems and create assessment items. These example problems illustrate good practices for authoring problems and creating assessment items that test the accuracy of learner submissions.

Assessment Items

When you create an assessment item:

  • You can set the assessment method to Correct/Incorrect or Weighted. The assessment method determines how the points assigned to the problem are awarded.

  • You can indicate that an assessment item is a pretest.

  • Test types can be any of the following:

    • Compare to ref solution (script problems only; for function problems, use the assessment function assessVariableEqual)

    • Whether a function or keyword is present

    • Whether a function or keyword is absent

    • Write out the assessment item using MATLAB® code

You can convert any of these options (except for MATLAB code) to MATLAB code.

See Write Assessments for Function-Based Learner Solutions and Write Assessments for Script-Based Learner Solutions for details on these types.
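
For example, a MATLAB code assessment for a script-based problem might compare a learner variable against a known expected value. The following is a minimal sketch; the variable name x, the expected value, and the tolerance are assumptions for illustration, not part of any particular problem.

    % Minimal MATLAB code assessment for a script-based problem.
    % Compares the learner's variable x against an expected value,
    % allowing a small numeric tolerance. Names and values here are
    % placeholders.
    assessVariableEqual('x', 3.1416, ...
        'AbsoluteTolerance', 1e-4, ...
        'Feedback', 'Check how you compute x; it should approximate pi.')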

Correct/Incorrect Assessments

If you select the Correct/Incorrect method, the problem is treated as pass/fail. Assessment conditions defined as Correct/Incorrect return 1 if all tests pass and 0 if any test fails. If all results are correct, the maximum points possible are awarded. If any result is marked incorrect, no points are awarded.

Weighted Assessments

If you select the Weighted method, you can award partial credit. You assign each assessment item a percentage of the total points possible. You can modify the percentage by changing the point value (weight) assigned to each assessment item. The points awarded are determined by summing the percentages of the assessment results marked correct and multiplying by the maximum points possible.

For example, consider a problem with two assessment items. You want 1/3 of the points (33%) to be awarded for passing the first assessment and 2/3 (67%) for passing the second one. When creating the assessment items, you would assign one point to the first assessment item and two points to the second item.
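
The arithmetic behind this scheme can be sketched in a few lines of MATLAB. The weights, pass/fail results, and maximum points below are assumptions chosen to match the example above.

    % Sketch of the weighted-scoring arithmetic described above.
    weights   = [1 2];          % points assigned to the two assessment items
    passed    = [true false];   % the learner passed only the first item
    maxPoints = 9;              % maximum points for the problem
    fractions = weights / sum(weights);         % 1/3 and 2/3
    score = sum(fractions(passed)) * maxPoints  % 3 points awarded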

The following images demonstrate what the learner sees when the assessment item is weighted.

  • Before submitting a solution, the assessment results are empty.

  • After submitting a solution that passes the conditions in the assessment item, the assessment results color changes to green, the title shows "All Tests Passed (100%)", and the details show the test name with a green check and 100%, both displayed to the right.

  • After submitting a solution that does not pass the conditions in the assessment item, the assessment results color changes to red, the title shows "0 of 1 Tests Passed (0%)", and the list displays 0% and the reason for the failure, for example, "The submission must contain a variable named x."

To indicate assessment items are weighted, select Weighted as the Assessment Method.

When you create multiple assessment items, you assign the weight each assessment item should have by assigning points. In the following example, Test 1 is assigned 1 point (33%) and Test 2 is assigned 2 points (67%).

You can show the percentage to learners. The following example shows the learner view, which displays the assessment name and the percentage assigned to each assessment item.

Learner Feedback

Show Customized Feedback

You can provide additional customized feedback to the learner when an assessment fails. This feedback supports rich text formatting, including formatted text, hyperlinks, images, and math equations.

The following image shows a problem description that contains bullets, mathematical equations, properly formatted code, and a picture.

Show Feedback for Initial Error Only (Script-Based Problems)

In a script-based problem, an initial error may cause subsequent errors. You may want to encourage the learner to focus first on the initial error.

When you select the option Only Show Feedback for Initial Error, detailed feedback is shown for the initial error, but is hidden by default for subsequent errors. The learner can display this additional feedback by clicking Show Feedback.

Pretests

Pretests are assessments that learners can run without submitting their solution for grading, and are different from regular assessments in the following ways:

  • When a learner runs pretests, the pretest results are not recorded in the gradebook.

  • Running pretests does not count against a submission limit.

  • Learners can view the assessment code in a pretest, as well as the output generated by that code (for MATLAB code test types only).

Use pretests to give learners a way to check whether their solution is on the right track before they submit it. When learners submit their solution, pretests also run; they are treated the same as regular assessments and contribute to the final grade.

Pretests can be beneficial in problems where there are multiple correct but different approaches or where the number of submissions has been limited. For example, consider "Calculating voltage using Kirchhoff loops" in the Getting Started with MATLAB Grader problem collection. To solve this problem, learners must write a system of equations. There are multiple correct ways to do this, but only solutions that match the reference solution will be marked correct. The instructor has therefore added one pretest for learners to check that their equations are in the expected order before submitting.
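
A pretest for a problem like this one might confirm that the learner's variables have the expected shape before the graded comparison runs. The following is a hedged sketch; the variable name A and the expected size are placeholders, not taken from that problem.

    % Pretest sketch (MATLAB code test type). Learners can run this
    % without using a submission, and they can see both this code and
    % its output. The variable name A and the 3-by-3 size are placeholders.
    sz = size(A);
    assert(isequal(sz, [3 3]), ...
        'Expected a 3-by-3 coefficient matrix, but got %d-by-%d.', sz(1), sz(2))
    fprintf('Coefficient matrix has the expected size.\n')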

Time Limits for Submissions

MATLAB Grader™ enforces an execution time limit of 60 seconds. The clock starts when the learner clicks Run or Submit, and stops when the output or assessment results are displayed to the learner. If the time limit is reached, an error message is displayed stating that the server timed out. The factors that contribute to total execution time depend on whether the learner is running or submitting their solution, and on whether they are solving a script-based or function-based problem.

To get an estimate of the time required to execute the reference solution, run it using MATLAB Online™, as the computational environment used to execute the MATLAB code is most similar to what is used by MATLAB Grader.
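
For a rough estimate, you can time the reference solution yourself in MATLAB Online. The names referenceSolution and referenceScript.m below are placeholders for your own code.

    % Estimate how long the reference solution takes to run.
    x = 5;                                 % a representative input
    t = timeit(@() referenceSolution(x))   % median time over several calls

    % For a script, tic/toc gives a rough single-run estimate:
    tic
    run('referenceScript.m')
    toc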

Note

Learners may find that they can see the output of their code when they click Run Script/Function, but when they submit, they get the error message "The server timed out while running and assessing your solution." This error is due to the additional execution time needed to run the reference solution and all assessments.

Timing for Script-Based Problem Solutions

Script-based problems run in the following order:

Run Script

  • Run the learner solution one time.

Submit

  • Run the reference solution one time.

  • Run the learner solution one time.

  • Run all assessments in sequential order.

Consider the following examples:

Example #1: Low probability of exceeding execution time limit

Code to execute             Execution time
Reference solution          approx. 20 seconds
Typical learner solution    approx. 20 seconds
3 assessments               approx. 1 second each (approx. 3 seconds total)

Typical total execution time: approx. 45 seconds (includes network overhead)

A time out error in this scenario could be caused by an error in the learner’s solution, inefficient code, or excessive output printing to the screen.

Example #2: High probability of exceeding execution time limit

Code to execute             Execution time
Reference solution          approx. 25 seconds
Typical learner solution    approx. 25 seconds
3 assessments               approx. 1 second each (approx. 3 seconds total)

Typical total execution time: approx. 65 seconds (includes network overhead)

In this scenario, the learner is able to run their solution, but will likely encounter the execution time limit when submitting. It may be necessary to redesign or remove this problem.

Timing for Function-Based Problem Solutions

Function-based problem solutions run in the following order:

Run Function

  • Run the code in Code to call your function.

Submit

  • Run all assessments in sequential order.

To test a function, it must be called. When a learner submits the solution to a function-based problem, only the assessments are run. Each assessment typically checks for correctness by calling the reference function and learner function with the same inputs and comparing the resulting outputs. This means that a single function-type problem may run the reference and learner functions multiple times.
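
As an illustration of this pattern, the following sketch shows one assessment that calls both functions with the same input and compares the outputs. The function name addFive is a placeholder, and the reference.addFive call stands for invoking the reference solution; see Write Assessments for Function-Based Learner Solutions for the exact syntax available in assessment code.

    % Sketch of one assessment for a function-based problem. It calls
    % the learner's function and the reference solution with the same
    % input and compares the outputs. addFive is a placeholder name.
    x = 7;
    actual   = addFive(x);            % learner's submitted function
    expected = reference.addFive(x);  % instructor's reference solution
    assert(isequal(actual, expected), ...
        'addFive(%d) returned %g, but %g was expected.', x, actual, expected)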

Consider the following examples:

Example #1: Low probability of exceeding execution time limit

In this example, each assessment evaluates the learner's function with a different input. The reference function and a typical learner solution both take approx. 5 seconds to run.

Code to execute             Execution time
3 assessments               approx. 10 seconds each (approx. 30 seconds total)

Typical total execution time: approx. 32 seconds (includes network overhead)

A time out error in this scenario could be caused by an error in the learner’s solution, inefficient code, or excessive output printing to the screen.

Example #2: High probability of exceeding execution time limit

In this example, each assessment evaluates the learner's function with a different input. The reference function and a typical learner solution both take approx. 10 seconds to run.

Code to execute             Execution time
3 assessments               approx. 20 seconds each (approx. 60 seconds total)

Typical total execution time: approx. 62 seconds (includes network overhead)

In this scenario, the learner is able to run their solution, but will likely encounter the execution time limit when submitting. It may be necessary to redesign or remove this problem.
