
Assess the Completeness of Requirements-Based Testing in Accordance with ISO 26262

You can use the Model Testing Dashboard to assess the quality and completeness of your requirements-based testing activities in accordance with ISO 26262-6:2018. The dashboard facilitates this activity by monitoring the traceability between requirements, tests, and test results and by providing a summary of testing completeness and structural coverage. The dashboard analyzes the implementation and verification artifacts in a project and provides:

  • Completeness and quality metrics for the requirements-based test cases in accordance with ISO 26262-6:2018, Clause 9.4.3

  • Completeness and quality metrics for the requirements-based test results in accordance with ISO 26262-6:2018, Clause 9.4.4

  • A list of artifacts in the project, organized by unit

To assess the completeness of your requirements-based testing activities, follow these automated and manual review steps using the Model Testing Dashboard.

Open the Model Testing Dashboard and Collect Metric Results

To analyze testing artifacts using the Model Testing Dashboard:

  1. Open the project that contains your models and testing artifacts. Alternatively, to load an example project for the dashboard, at the command line, enter dashboardCCProjectStart.

  2. Open the dashboard. On the Project tab, click Model Testing Dashboard.

  3. If you have not previously opened the dashboard for the project, the dashboard must identify the artifacts in the project and trace them to the models. To do this, run the analysis and collect metric results by clicking Trace and Collect All.

  4. In the Artifacts pane, the dashboard organizes artifacts such as requirements, test cases, and test results under the models that they trace to. To view the metric results for the unit db_DriverSwRequest in the example project, in the Artifacts pane, click db_DriverSwRequest. The dashboard populates the widgets with data from the most recent metric collection for the unit.

    Note

    The Model Testing Dashboard considers each model in the project to represent one software unit. Other topics that document the dashboard, which are not specific to ISO 26262-6, use the term component to describe a unit that the dashboard can analyze. Where the term component appears in other topics about the dashboard, it refers to the architectural level of a unit as described in ISO 26262-6:2018.

Model Testing Dashboard showing metric results for the db_DriverSwRequest unit

The dashboard widgets show summary data on traceability and completeness measurements for the testing artifacts for each unit. The metric results displayed in yellow font indicate issues that you may need to address to complete requirements-based testing for the unit. To explore the data in more detail, click an individual metric widget. For the selected metric, a table displays the artifacts and the metric value for each artifact. The table provides hyperlinks to open the artifacts so that you can get detailed results and fix the artifacts that have issues. For more information about using the Model Testing Dashboard, see Explore Status and Quality of Testing Activities Using the Model Testing Dashboard.
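
If you want to collect the same metric results without the dashboard UI, for example in a continuous integration job, you can use the metric API in Simulink Check. The following is a minimal sketch, assuming the dashboardCCProjectStart example project and the metric.Engine API are available in your release; the metric IDs are the ones listed in the checklists later in this topic.

```matlab
% Minimal sketch: collect Model Testing Dashboard metrics programmatically.
% Assumes the example project (see step 1) and the metric.Engine API from
% Simulink Check are available in your release.
dashboardCCProjectStart                      % open the example project

metricEngine = metric.Engine();              % metric engine for the current project

% Metric IDs referenced by the checklists in this topic
metricIds = {'TestCaseWithRequirementPercentage', ...
             'RequirementWithTestCasePercentage', ...
             'TestCaseStatusPercentage'};

execute(metricEngine, metricIds);            % trace artifacts and compute results
results = getMetrics(metricEngine, metricIds);

for k = 1:numel(results)
    fprintf('%s:\n', results(k).MetricID);
    disp(results(k).Value);                  % display the collected value for review
end
```

In an automated pipeline you might compare these values against the 100% targets in the checklists, but check the metric reference pages first because the form of each Value (fraction, percentage, or composite result) depends on the metric.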

Test Case Review

To verify that a unit satisfies its requirements, you create test cases for the unit based on the requirements. ISO 26262-6, Clause 9.4.3 requires that test cases for a unit are derived from the requirements. When you create a test case for a requirement, you add a traceability link between the test case and the requirement, as described in Link Requirements to Tests (Simulink Requirements) and in Establish Requirements Traceability for Testing (Simulink Test). Traceability allows you to track which requirements have been verified by your tests and identify requirements that the model does not satisfy. Clause 9.4.3 requires traceability between requirements and test cases, and review of the correctness and completeness of the test cases. To assess the correctness and completeness of the test cases for a unit, use the metrics in the Test Case Analysis section of the Model Testing Dashboard.
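
If you manage many links, you can also create them programmatically. The following sketch is illustrative only: the requirement set, test file, and requirement summary names are hypothetical, and it assumes that slreq.createLink in your release accepts Test Manager test case objects; see the Requirements Toolbox and Simulink Test documentation for the supported object types.

```matlab
% Sketch: link a test case to the requirement that it verifies.
% All names are hypothetical; adjust them to your project. Assumes
% Requirements Toolbox and Simulink Test, and that slreq.createLink
% accepts Test Manager test case objects in your release.
reqSet   = slreq.load('db_DriverSwRequest');                     % requirement set
testFile = sltest.testmanager.load('db_DriverSwRequest_Tests.mldatx');

req    = find(reqSet, 'Type', 'Requirement', 'Summary', 'Cancel Switch Detection');
suites = getTestSuites(testFile);
tests  = getTestCases(suites(1));

link = slreq.createLink(tests(1), req(1));    % test case -> requirement
link.Type = 'Verify';                         % mark the link as a verification link
```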

The following example checklist aids in reviewing test case correctness and completeness with respect to ISO 26262-6. For each item, perform the review activity using the corresponding dashboard metric and, if necessary, apply the corresponding fix. Review and modify this checklist to meet your application needs.

1 — Does each test case trace to a requirement?

Review activity: Check that 100% of the test cases for the unit are linked to requirements by viewing Tests Linked to Requirements.

Dashboard metric: Tests Linked to Requirements (dial widget indicating the percentage of tests linked to requirements and count widget indicating unlinked tests). Metric ID — TestCaseWithRequirementPercentage. For more information, see Test linked to requirement percentage.

Fix: For each unlinked test case, add a link to the requirement that the test case verifies, as described in step 3.

2 — Does each test case trace to the correct requirements?

Review activity: For each test case, manually verify that the requirement it is linked to is correct. Click the Tests Linked to Requirements widget to view a table of the test cases. To see the requirements that a test case traces to, in the Artifacts column, click the arrow to the left of the test case name.

Dashboard metric: Tests Linked to Requirements (table of test cases and linked requirements). Metric ID — TestCaseWithRequirements. For more information, see Test linked to requirements.

Fix: For each link to an incorrect requirement, remove the link. If the test case is missing a link to the correct requirement, add the correct link.

3 — Do the test cases cover all requirements?

Review activity: Check that 100% of the requirements for the unit are linked to test cases by viewing Requirements Linked to Tests.

Dashboard metric: Requirements Linked to Tests (dial widget indicating the percentage of requirements with test cases and count widget indicating unlinked requirements). Metric ID — RequirementWithTestCasePercentage. For more information, see Percentage requirements with test cases.

Fix: For each unlinked requirement, add a link to the test case that verifies it, as described in step 3.

4 — Do the test cases define the expected results, including pass/fail criteria?

Review activity: Manually review the test cases of each type. Click the Tests by Type widget to view a table of the test cases and their types. Open each test case in the Test Manager by using the hyperlinks in the Artifact column. Baseline test cases must define baseline criteria. For simulation test cases, review that each test case defines pass/fail criteria by using assessments, as described in Assess Simulation and Compare Output Data (Simulink Test).

Dashboard metric: Tests by Type (table that lists each test case and its type). Metric ID — TestCaseType. For more information, see Test case type.

Fix: For each test case that does not define expected results, in the Test Manager, add the expected results and pass/fail criteria.

5 — Does each test case properly test the requirement that it traces to?

Review activity: Manually review the requirement links and content for each test case. Click the Tests Linked to Requirements widget to view a table of the test cases. To see the requirements that a test case traces to, in the Artifact column, click the arrow to the left of the test case name. Use the hyperlinks to open the test case and requirement and review that the test case properly tests the requirement.

Dashboard metric: Tests Linked to Requirements (table of test cases and linked requirements). Metric ID — TestCaseWithRequirements. For more information, see Test linked to requirements.

Fix: For each test case that does not properly test the requirement it traces to, in the Test Manager, update the test case. Alternatively, add test cases that further test the requirement.

Test Results Review

After you run tests on a unit, you must review the results to check that the tests executed, passed, and sufficiently tested the unit. Clause 9.4.4 in ISO 26262-6:2018 requires that you analyze the coverage of requirements for each unit. Check that all of the test cases tested the intended model and passed. Additionally, measure the coverage of the unit by collecting model coverage results in the tests. To assess the testing coverage of the requirements for the unit, use the metrics in the Test Result Analysis section of the Model Testing Dashboard.
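
You can perform these steps interactively in the Test Manager, or script them. The sketch below is a rough outline that assumes your project has a test file with the name shown (hypothetical) and that your release provides the Test Manager coverage settings API; it enables model coverage, runs the tests, and exports the results so that the dashboard can trace them.

```matlab
% Sketch: enable model coverage, run the unit tests, and export the results.
% The test file name is hypothetical; adjust it to your project. Assumes
% Simulink Test and Simulink Coverage.
testFile = sltest.testmanager.load('db_DriverSwRequest_Tests.mldatx');

covSettings = getCoverageSettings(testFile);   % coverage settings for the test file
covSettings.RecordCoverage = true;             % collect model coverage during the run
covSettings.MetricSettings = 'dcm';            % decision, condition, and MC/DC coverage

resultSet = sltest.testmanager.run;            % run the enabled test cases
sltest.testmanager.exportResults(resultSet, 'db_DriverSwRequest_results.mldatx');
```

Keeping the exported results file inside the project allows the dashboard to trace the results to their test cases.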

The following example checklist facilitates test result analysis and review using the dashboard. For each item, perform the review activity using the corresponding dashboard metric and, if necessary, apply the corresponding fix. Review and modify this checklist to meet your application needs.

1 — Does each test result trace to a test case?

Review activity: Use only test results that appear in the dashboard. Test results that do not trace to a test case do not appear in the dashboard. Click a widget in the Test Status section to view a table of the test cases and the results that trace to them.

Dashboard metric: Model Test Status (table of the test cases and their results). Metric ID — TestCaseStatusDistribution. For more information, see Test case status distribution.

Fix: Re-run the tests that the results should trace to and export the new results.

2 — Does each test case trace to a test result?

Review activity: Check that zero test cases are untested and zero test cases are disabled.

Dashboard metric: Model Test Status (widget showing the count of untested test cases). Metric ID — TestCaseStatusDistribution. For more information, see Test case status distribution.

Fix: For each disabled or untested test case, in the Test Manager, enable and run the test.

3 — Have all test cases been executed?

Review activity: Check that zero test cases are untested and zero test cases are disabled.

Dashboard metric: Model Test Status (widget showing the count of untested test cases). Metric ID — TestCaseStatusDistribution. For more information, see Test case status distribution.

Fix: For each disabled or untested test case, in the Test Manager, enable and run the test.

4 — Do all test cases pass?

Review activity: Check that 100% of the test cases for the unit passed.

Dashboard metric: Model Test Status > Passed (dial widget indicating the percentage of test cases that passed). Metric ID — TestCaseStatusPercentage. For more information, see Test case status percentage.

Fix: For each test failure, review the failure in the Test Manager and fix the corresponding test case or design element in the model.

5 — Do all test results include coverage results?

Review activity: Manually review each test result in the Test Manager to check that it includes coverage results.

Dashboard metric: Not applicable.

Fix: For each test result that does not include coverage, open the test case in the Test Manager and enable coverage collection. Run the test case again.

6 — Were the required structural coverage objectives achieved for each unit?

Review activity: Check that the tests achieved 100% model coverage for the coverage types that your unit testing requires. To determine the required coverage types, consider the safety level of your software unit and use Table 9 in Clause 9.4.4 of ISO 26262-6:2018. A sketch for querying these coverage metrics programmatically follows the checklist.

Dashboard metric: Model Coverage (model coverage results chart). Metric IDs — ExecutionCoverageBreakdown, ConditionCoverageBreakdown, DecisionCoverageBreakdown, and MCDCCoverageBreakdown. For more information, see the reference pages for these coverage metrics.

Fix: For each design element that is not covered, analyze the element to determine the cause of the missed coverage. Analysis can reveal shortcomings in tests, requirements, or implementation. If appropriate, add tests to cover the element. Alternatively, add a justification filter that justifies the missed coverage, as described in Create, Edit, and View Coverage Filter Rules (Simulink Coverage).

7 — Have shortcomings been acceptably justified?

Review activity: Manually review the coverage justifications. Click a bar in the Model Coverage widget to view a table of the results for the corresponding coverage type. To open a test result in the Test Manager for further review, click the hyperlink in the Artifacts column.

Dashboard metric: Model Coverage (table of coverage results for the selected coverage type, for example decision coverage). Metric IDs — ExecutionCoverageBreakdown, ConditionCoverageBreakdown, DecisionCoverageBreakdown, and MCDCCoverageBreakdown. For more information, see the reference pages for these coverage metrics.

Fix: For each coverage gap that is not acceptably justified, update the justification of the missing coverage. Alternatively, add test cases to cover the gap.
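
As with the traceability metrics, you can query the coverage metrics used in items 6 and 7 programmatically. This sketch assumes the metric.Engine API from Simulink Check is available and that the project containing your testing artifacts is open; the breakdown metrics return composite results (achieved and justified coverage), so the sketch only displays them for manual review against the objectives in ISO 26262-6:2018, Table 9.

```matlab
% Sketch: query the model coverage breakdown metrics used by the dashboard.
% Assumes the project with your testing artifacts is open and that the
% metric.Engine API (Simulink Check) is available.
metricEngine = metric.Engine();
coverageIds  = {'ExecutionCoverageBreakdown', 'ConditionCoverageBreakdown', ...
                'DecisionCoverageBreakdown', 'MCDCCoverageBreakdown'};

execute(metricEngine, coverageIds);            % compute the coverage metrics
covResults = getMetrics(metricEngine, coverageIds);

for k = 1:numel(covResults)
    fprintf('%s:\n', covResults(k).MetricID);
    disp(covResults(k).Value);                 % composite result; review manually
end
```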

Unit Verification in Accordance with ISO 26262

The Model Testing Dashboard provides information about the quality and completeness of your unit requirements-based testing activities. To comply with ISO 26262-6:2018, you must also test your software at other architectural levels. ISO 26262-6:2018 describes compliance requirements for these testing levels:

  • Software unit testing in Table 7, method 1j

  • Software integration testing in Table 10, method 1a

  • Embedded software testing in Table 14, method 1a

The generic verification process detailed in ISO 26262-8:2018, Clause 9 includes additional information on how to systematically achieve testing at each of these levels through planning, specification, execution, evaluation, and documentation of tests. The following table shows how the Model Testing Dashboard applies to the requirements in ISO 26262-8:2018, Clause 9 at the unit testing level, and the complementary activities that you must perform to show compliance. A sketch for generating a report of the collected metric results, which can support the documentation of tests, follows the table.

9.4.1 — Scope of verification activity

Compliance argument: The Model Testing Dashboard applies to all safety-related and non-safety-related software units.

Complementary activities: Not applicable.

9.4.2 — Verification methods

Compliance argument: The Model Testing Dashboard provides a summary of the completion of requirements-based testing (Table 7, method 1j), including a view of the test results.

Complementary activities: Where applicable, apply one or more of these other verification methods:

  • Manual review and analysis checklists

  • Applying other tools, such as static code analysis, control flow analysis, and data flow analysis

  • Developing additional tests, such as interface tests, fault injection tests, and back-to-back comparison tests

9.4.3 — Methods for deriving test cases

Compliance argument: The Model Testing Dashboard provides several ways to traverse the software unit requirements and the relevant tests, which helps you to derive test cases from the requirements.

Complementary activities: You can also derive test cases by using other tools, such as Simulink® Design Verifier™.

9.4.4 — Requirement and structural coverage

Compliance argument: The Model Testing Dashboard aids in showing:

  • Completeness of requirement coverage

  • Branch/statement and MC/DC model coverage achieved by testing

  • A rationale for the sufficiency of achieved coverage

Complementary activities: The dashboard provides structural coverage only at the model level. You can use other tools to track the structural coverage at the code level.

9.4.5 — Test environment

Compliance argument: The Model Testing Dashboard aids in requirements-based testing at the model level.

Complementary activities: Apply back-to-back comparison tests to ensure that the behavior of the model is equivalent to the generated code.
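
ISO 26262-8:2018, Clause 9 also expects documentation of the verification activities. If your release supports report generation from the metric engine, a sketch such as the following can produce an archivable summary of the dashboard results; the report type and location arguments are assumptions, so check the generateReport reference page for the options available in your release.

```matlab
% Sketch: generate an archivable report of the collected metric results.
% Assumes the project with your testing artifacts is open. The 'Type' and
% 'Location' values are assumptions; check the generateReport reference
% page (Simulink Check) for the options in your release.
metricEngine = metric.Engine();
execute(metricEngine, {'TestCaseStatusPercentage', 'DecisionCoverageBreakdown'});

generateReport(metricEngine, ...
    'Type', 'html-file', ...
    'Location', fullfile(pwd, 'ModelTestingReport.html'));
```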

References:

  • ISO 26262-4:2018(en), Road vehicles — Functional safety — Part 4: Product development at the system level, International Organization for Standardization

  • ISO 26262-6:2018(en), Road vehicles — Functional safety — Part 6: Product development at the software level, International Organization for Standardization

  • ISO 26262-8:2018(en), Road vehicles — Functional safety — Part 8: Supporting processes, International Organization for Standardization
