Test Case Type Distribution

Metric ID

TestCaseTypeDistribution

This metric returns the distribution of the types of test cases that run on the unit. A test case is a baseline, equivalence, or simulation test. Use this metric to determine whether a disproportionate number of the test cases are of one type.

  • Baseline tests compare outputs from a simulation to expected results stored as baseline data.

  • Equivalence tests compare the outputs from two different simulations. Simulations can run in different modes, such as normal simulation and software-in-the-loop.

  • Simulation tests run the system under test and capture simulation data. If the system under test contains blocks that verify simulation, such as Test Sequence and Test Assessment blocks, the pass/fail results are reflected in the simulation test results.

This metric returns the result as a distribution of the results of the Test case type metric.

Computation Details

The metric includes only test cases in the project that test the model or subsystems in the unit for which you collect metric data.


To collect data for this metric:

  • In the Model Testing Dashboard, view the Tests by Type widget.

  • Programmatically, use getMetrics with the metric ID TestCaseTypeDistribution.

Collecting data for this metric loads the model file and requires a Simulink® Test™ license.
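The programmatic workflow above can be sketched as follows. This is a minimal sketch, not a complete example: the project name is a placeholder, and the code assumes the current project contains unit models registered with the dashboard.

```matlab
% Open the project that contains the units (project name is a placeholder).
openProject("myProject");

% Create a metric engine for the current project.
metricEngine = metric.Engine();

% Collect data for this metric, then retrieve the results.
execute(metricEngine, "TestCaseTypeDistribution");
results = getMetrics(metricEngine, "TestCaseTypeDistribution");
```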


For this metric, instances of metric.Result return Value as a distribution structure that contains these fields:

  • BinCounts — The number of test cases in each bin, returned as a vector.

  • BinEdges — The outputs of the Test case type metric, returned as a vector. The outputs represent the three test case types:

    • 0 — Simulation test

    • 1 — Baseline test

    • 2 — Equivalence test
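As a sketch of how you might read these fields from a collected result, assuming results holds the metric.Result objects returned by a prior getMetrics call and that BinEdges lists the metric outputs 0, 1, and 2 as described above:

```matlab
% Value of a single result is a distribution structure.
dist = results(1).Value;

% Map each bin count to its test case type by matching the bin edge
% values 0 (simulation), 1 (baseline), and 2 (equivalence).
numSimulationTests  = dist.BinCounts(dist.BinEdges == 0);
numBaselineTests    = dist.BinCounts(dist.BinEdges == 1);
numEquivalenceTests = dist.BinCounts(dist.BinEdges == 2);
```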

Compliance Thresholds

This metric does not have predefined thresholds, so its results are uncategorized. In the Model Testing Dashboard, the results appear with the compliance threshold overlay icon when you click Uncategorized in the Overlays section of the toolstrip.
