What is the Trustworthiness Characteristics Matrix?
This page was created to support collaboration in the Roadmapping AHG of ISO/IEC JTC 1/SC 42/WG 3.
The Trustworthiness Characteristics Matrix (TCM) in this document was created to organize the relationships between trustworthiness-related characteristics across the various AI/ML-related standards being developed within SC 42.
The TCM classifies the trustworthiness characteristics addressed in SC 42 deliverables and organizes the statements related to each characteristic. The aim is to provide a consistent view of the relationship between each trustworthiness characteristic and each standard.
Trustworthiness characteristics matrix
Matrix
This is a matrix providing relationship information between trustworthiness characteristics and SC 42 deliverables.

The deliverables covered by the matrix are:

Type | Deliverable | Short title | Stage
---|---|---|---
Informative | TR 24027:2021 | Bias TR | 60.60
Informative | TR 29119-11:2020 | Testing | 60.60
Informative | TR 24030:2021 | Use cases | 60.60
Informative | TR 24028:2020 | Trustworthiness | 60.60
Informative | TR 24029-1:2021 | RNN-1 | 60.60
Informative | DIS 24029-2 | RNN-2 | 40.60
Informative | TR 24368:2022 | Ethics | 60.60
Informative | CD TR 5469 | Functional safety | 30.60
Normative | IS 38507:2022 | Governance | 60.60
Normative | IS 22989:2022 | Concepts & terminology | 60.60
Normative | DIS 23894 | Risk | 50.20
Normative | AWI TS 12791 | Bias treatment TS | 20.00
Normative | AWI TS 8200 | Controllability | 20.00
Normative | AWI TS 6254 | XAI | 20.00
Normative | TS 4213:2022 | ML classification performance | 60.60
Normative | DIS 42001 | Management systems | 40.00
Normative | DIS 25059 | Quality model | 40.60
Normative | AWI 12792 | Transparency taxonomy | 20.00
Normative | DIS 5338 | Life cycle processes | 40.20
Normative | AWI TS 5471 | Quality evaluation | 20.00

The relationships recorded for each trustworthiness characteristic are listed below; see the legend for the meaning of the codes, and note that entries originating from different deliverables are separated by semicolons.

Characteristic | Relationship codes
---|---
accountability | D*; D
autonomous | D*; D, g; d
autonomy | D*; D*; D, g; d
availability | D
bias | d, I, S, g; D*; d*, I, E; g, D*; D, c; d, c; c; g?
calibration |
certification (of): an organization, a management system, a product |
control | D*; D
controllability | D; d, g
consistency | D*
effectiveness | g?; D*; d?/D?, g
efficiency | D*
explainability | D*, I; D; d
functional safety | D*; D, g, c
harm | D*, g
hazard | D*
human dignity |
human factors | D*
integrity | D*; d?/D?, g
interpretability | D*
intended use | D*
maintenance |
oversight / human oversight | D; g; c, g
predictability | g?
privacy | D*; g?
quality | d
reliability | D*, g; D; d?/D?, g; g?
resilience | D
risk | D*, g; D; D, N, g
robustness | g?; D; D
safety | D*; D*
security | D*
testing | g; d
threat | D*
traceability |
transparency | D*; g; D; d; D?/d?; g?
trust | g?; D*
trustworthiness | g?; D*; D
validation | D*; d; D; d?/D?, g; g?
value | D*
verification | g?; D*; d; D
vulnerability | D*
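For readers who want to work with the matrix data programmatically, the sketch below shows one possible machine-readable form of a few rows. It is only an illustration: the `TCM` mapping and the `characteristics_with_code` helper are hypothetical and not part of any SC 42 deliverable; the row values are taken from the table above, and the codes are explained in the legend that follows.

```python
# Minimal, illustrative representation of a few TCM rows (not normative).
# Keys are trustworthiness characteristics; values are the relationship code
# entries recorded in the matrix above (see the legend below for meanings).
TCM: dict[str, list[str]] = {
    "accountability": ["D*", "D"],
    "bias": ["d, I, S, g", "D*", "d*, I, E", "g, D*", "D, c", "d, c", "c", "g?"],
    "robustness": ["g?", "D", "D"],
    "traceability": [],  # no relationship recorded yet
}


def characteristics_with_code(code: str) -> list[str]:
    """Return the characteristics whose recorded entries mention the given code."""
    return [
        name
        for name, entries in TCM.items()
        if any(code in entry.split(", ") for entry in entries)
    ]


if __name__ == "__main__":
    print(characteristics_with_code("D*"))  # ['accountability', 'bias']
```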
Legend
- Terms : defined (D), imported (d), early working definition now deprecated (D*)
- Controls : defined (C), considered (c), surveyed (S), exploratory discussions (E)
- Guidance : guidance (g)
- Normativity : normative (N), informative (I)
- Project : international standard (IS), technical report (TR), technical specification (TS), approved work item (AWI)
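As a small illustration of how a matrix cell can be read against this legend, the snippet below decodes an entry such as `d, I, S, g` into plain-language meanings. The `LEGEND` dictionary and the `decode_cell` helper are hypothetical; the trailing `?` found in some cells is treated here as marking an uncertain entry, which is how it appears to be used in the matrix.

```python
# Plain-language meanings of the legend codes (illustrative helper, not normative).
LEGEND = {
    "D": "term defined",
    "d": "term imported",
    "D*": "early working definition, now deprecated",
    "C": "controls defined",
    "c": "controls considered",
    "S": "controls surveyed",
    "E": "exploratory discussions of controls",
    "g": "guidance",
    "N": "normative",
    "I": "informative",
}


def decode_cell(cell: str) -> list[str]:
    """Decode a matrix cell such as 'd, I, S, g' into legend meanings."""
    meanings = []
    for raw in cell.split(","):
        code = raw.strip()
        tentative = code.endswith("?")
        base = code.rstrip("?")
        meaning = LEGEND.get(base, f"unknown code '{code}'")
        meanings.append(meaning + (" (uncertain)" if tentative else ""))
    return meanings


print(decode_cell("d, I, S, g"))
# ['term imported', 'informative', 'controls surveyed', 'guidance']
```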
Note
Maturity is expressed as the ISO project stage code (https://www.iso.org/stage-codes.html), except for published standards, where 'published' is indicated by the publication date.
If a WG 3 deliverable imports a definition from an outside project, that definition is included. WG 3-owned projects are colour-coded.
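The stage codes in the matrix follow the ISO harmonized stage code system (stage.substage). As a rough illustration, the sketch below maps the first two digits to the principal stage name; the `describe_stage` helper is hypothetical, and only `60.60` is treated as a published International Standard, consistent with the note above.

```python
# Principal stages of the ISO harmonized stage code system (stage.substage).
PRINCIPAL_STAGES = {
    "00": "Preliminary stage",
    "10": "Proposal stage",
    "20": "Preparatory stage",
    "30": "Committee stage",
    "40": "Enquiry stage",
    "50": "Approval stage",
    "60": "Publication stage",
}


def describe_stage(stage_code: str) -> str:
    """Describe a project stage code such as '40.60' or '60.60'."""
    principal = stage_code.split(".")[0]
    name = PRINCIPAL_STAGES.get(principal, "unknown stage")
    suffix = " (International Standard published)" if stage_code == "60.60" else ""
    return f"{stage_code}: {name}{suffix}"


print(describe_stage("60.60"))  # 60.60: Publication stage (International Standard published)
print(describe_stage("50.20"))  # 50.20: Approval stage
```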
Terms
- accountability ([ISO/IEC 22989:2022], 3.5.2)
- state of being accountable (3.5.1)
  Note 1 to entry: Accountability relates to an allocated responsibility. The responsibility can be based on regulation or agreement or through assignment as part of delegation.
  Note 2 to entry: Accountability involves a person or entity being accountable for something to another person or entity, through particular means and according to particular criteria.
  [SOURCE: ISO/IEC 38500:2015, 2.3, modified — Note 2 to entry is added.]
- accountability ([ISO/IEC TR 24028:2020], 3.1)
- property that ensures that the actions of an entity (3.16) may be traced uniquely to that entity
  [SOURCE: ISO/IEC 2382:2015, 2126250, modified — The Notes to entry have been removed.]
- autonomy ([ISO/IEC 22989:2022], 3.1.5)
- characteristic of a system that is capable of modifying its intended domain of use or goal without external intervention, control or oversight
- autonomy ([ISO/IEC TR 29119-11:2020], 3.1.15)
- ability of a system to work for sustained periods without human intervention
- autonomy ([ISO/IEC TR 24028:2020], 3.7)
- characteristic of a system (3.38) governed by its own rules as the result of self-learning
  Note 1 to entry: Such systems are not subject to external control (3.10) or oversight.
- autonomous ([ISO/IEC 22989:2022], 3.1.5)
- characteristic of a system that is capable of modifying its intended domain of use or goal without external intervention, control or oversight
- autonomous ([ISO/IEC TR 24028:2020], 3.7)
- characteristic of a system (3.38) governed by its own rules as the result of self-learning
  Note 1 to entry: Such systems are not subject to external control (3.10) or oversight.
- availability ([ISO/IEC 22989:2022], 3.5.3)
- property of being accessible and usable on demand by an authorized entity
  [SOURCE: ISO/IEC 27000:2018, 3.7]
- bias ([ISO/IEC 22989:2022], 3.5.4)
- systematic difference in treatment of certain objects, people, or groups in comparison to others
  Note 1 to entry: Treatment is any kind of action, including perception, observation, representation, prediction (3.1.27), or decision.
  [SOURCE: ISO/IEC TR 24027:2021, 3.3.2, modified — remove oxford comma in definition and note to entry]
- bias ([ISO/IEC TR 24027:2021], 3.2.2) - no reference
- systematic difference in treatment of certain objects, people, or groups in comparison to others
  Note 1 to entry: Treatment is any kind of action, including perception, observation, representation, prediction or decision
- bias ([ISO/IEC TR 24028:2020], 3.8)
- favouritism towards some things, people or groups over others
- bias ([ISO/IEC TR 29119-11:2020], 3.1.19)
- <machine learning (3.1.43)> measure of the distance between the predicted value provided by the ML model (3.1.46) and a desired fair prediction (3.1.56)
- consistency ([ISO/IEC TR 24028:2020], 3.9)
- degree of uniformity, standardization and freedom from contradiction among the documents or parts of a system (3.38) or component
  [SOURCE: ISO/IEC 21827:2008, 3.14]
- control ([ISO/IEC 22989:2022], 3.5.5)
- purposeful action on or in a process to meet specified objectives
  [SOURCE: IEC 61800-7-1:2015, 3.2.6]
- control ([ISO/IEC TR 24028:2020], 3.10)
- purposeful action on or in a process (3.29) to meet specified objectives
  [SOURCE: IEC 61800-7-1:2015, 3.2.6]
- controllability ([ISO/IEC 22989:2022], 3.5.6)
- property of an AI system (3.1.4) that allows a human or another external agent to intervene in the system's functioning
- controllability ([ISO/IEC AWI TS 8200], 3.2.3)
- property of an AI system (3.1.1) that allows a human or another external agent to intervene in the system's functioning
  [SOURCE: ISO/IEC DIS 22989, 3.5.6]
- effectiveness ([ISO/IEC TR 24028:2020], 3.14)
- extent to which planned activities are realized and planned results achieved
  [SOURCE: ISO 9000:2015, 3.7.11, modified — Note 1 to entry has been removed.]
- efficiency ([ISO/IEC TR 24028:2020], 3.15)
- relationship between the results achieved and the resources used
  [SOURCE: ISO 9000:2015, 3.7.10]
- explainability ([ISO/IEC 22989:2022], 3.5.7)
- property of an AI system (3.1.4) to express important factors influencing the AI system (3.1.4) results in a way that humans can understand
  Note 1 to entry: It is intended to answer the question "Why?" without actually attempting to argue that the course of action that was taken was necessarily optimal.
- explainability ([ISO/IEC TR 29119-11:2020], 3.1.31)
- <AI (3.1.13)> level of understanding how the AI-based system (3.1.9) came up with a given result
- functional safety ([ISO/IEC AWI TR 5469], 3.2)
- part of the overall safety (3.1) relating to the EUC (Equipment Under Control) and the EUC control system that depends on the correct functioning of the E/E/PE (Electrical/Electronic/Programmable Electronic) safety-related systems and other risk reduction measures
  [SOURCE: IEC 61508-4, ed. 2.0 (2010), 3.1.12]
- functional safety ([ISO/IEC TR 24028:2020])
- NONE
- harm ([ISO/IEC TR 24028:2020], 3.17)
- injury or damage to the health of people or damage to property or the environment
  [SOURCE: ISO/IEC Guide 51:2014, 3.1]
- hazard ([ISO/IEC TR 24028:2020], 3.18)
- potential source of harm (3.17)
  [SOURCE: ISO/IEC Guide 51:2014, 3.2]
- human factors ([ISO/IEC TR 24028:2020], 3.19)
- environmental, organizational and job factors, in conjunction with cognitive human characteristics, which influence the behaviour of persons or organizations
- integrity ([ISO/IEC TR 24028:2020], 3.21)
- property of protecting the accuracy and completeness of assets (3.5)
  [SOURCE: ISO/IEC 27000:2018, 3.36, modified — In the definition, "protecting the" has been added before "accuracy" and "of assets" has been added after "completeness".]
- intended use ([ISO/IEC TR 24028:2020], 3.22)
- use in accordance with information (3.20) provided with a product or system (3.38) or, in the absence of such information, by generally understood patterns (3.26) of usage
  [SOURCE: ISO/IEC Guide 51:2014, 3.6]
- interpretability ([ISO/IEC TR 29119-11:2020], 3.1.42)
- <AI (3.1.13)> level of understanding how the underlying (AI) technology works
- oversight ([ISO/IEC 38507:2022], 3.2.1)
- monitoring of the implementation of organizational and governance policies and management of associated tasks, services and products set by the organization, in order to adapt to changes in internal or external circumstances
  Note 1 to entry: Effective oversight needs general understanding of a situation. Oversight is one of the 'principles of governance' covered in depth in ISO 37000:2021, 6.4.
- predictability ([ISO/IEC 22989:2022], 3.5.8)
- property of an AI system (3.1.4) that enables reliable assumptions by stakeholders (3.5.13) about the output
  [SOURCE: ISO/IEC TR 27550:2019, 3.12, modified — "by individuals, owners, and operators about the PII and its processing by a system" has been replaced with "by stakeholders about the outputs".]
- privacy ([ISO/IEC TR 24028:2020], 3.28)
- freedom from intrusion into the private life or affairs of an individual when that intrusion results from undue or illegal gathering and use of data (3.11) about that individual
  [SOURCE: ISO/IEC 2382:2015, 2126263, modified — Notes 1 and 2 to entry have been removed.]
- quality ([ISO/IEC TR 24030:2021], 3.3)
- conformance to specified requirements
  [SOURCE: ISO 13628-2:2006, 3.33]
- reliability ([ISO/IEC 22989:2022], 3.5.9)
- property of consistent intended behaviour and results
  [SOURCE: ISO/IEC 27000:2018, 2.55]
- reliability ([ISO/IEC TR 24028:2020], 3.30)
- property of consistent intended behaviour and results
  [SOURCE: ISO/IEC 27000:2018, 3.55]
- resilience ([ISO/IEC 22989:2022], 3.5.10)
- ability of a system to recover operational condition quickly following an incident
- risk ([ISO/IEC 22989:2022], 3.5.11), ([ISO/IEC 38507:2022], 3.2.2)
- effect of uncertainty on objectives
  Note 1 to entry: An effect is a deviation from the expected. It can be positive, negative or both and can address, create or result in opportunities and threats (3.39).
  Note 2 to entry: Objectives can have different aspects and categories and can be applied at different levels.
  Note 3 to entry: Risk is usually expressed in terms of risk sources, potential events, their consequences and their likelihood.
  [SOURCE: ISO 31000:2018, 3.1, modified — Remove comma after "both" in Note 1 to entry. Remove comma after "categories" in Note 2 to entry.]
- risk ([ISO/IEC TR 24028:2020], 3.31)
- effect of uncertainty on objectives
  Note 1 to entry: An effect is a deviation from the expected. It can be positive, negative or both and can address, create or result in opportunities and threats (3.39).
  Note 2 to entry: Objectives can have different aspects and categories and can be applied at different levels.
  Note 3 to entry: Risk is usually expressed in terms of risk sources, potential events, their consequences and their likelihood.
  [SOURCE: ISO 31000:2018, 3.1]
- robustness ([ISO/IEC 22989:2022], 3.5.12)
- ability of a system to maintain its level of performance under any circumstances
- robustness ([ISO/IEC TR 24029-1:2021], 3.6)
- ability of an AI system to maintain its level of performance under any circumstances
  Note 1 to entry: This document mainly describes data input circumstances such as domain change but the definition is broader not to exclude hardware failure and other types of circumstances.
- safety ([ISO/IEC TR 24028:2020], 3.34)
- freedom from risk (3.31) which is not tolerable
  [SOURCE: ISO/IEC Guide 51:2014, 3.14]
- safety ([ISO/IEC TR 29119-11:2020], 3.1.67)
- expectation that a system does not, under defined conditions, lead to a state in which human life, health, property, or the environment is endangered
  [SOURCE: ISO/IEC/IEEE 12207:2017, 3.1.48]
- security ([ISO/IEC TR 24028:2020], 3.35)
- degree to which a product or system (3.38) protects information (3.20) and data (3.11) so that persons or other products or systems have the degree of data access appropriate to their types and levels of authorization
  [SOURCE: ISO/IEC 25010:2011, 4.2.6]
- testing ([ISO/IEC TR 24029-1:2021], 3.7)
- activity in which a system or component is executed under specified conditions, the results are observed or recorded, and an evaluation is made of some aspect of the system or component
  [SOURCE: ISO/IEC/IEEE 26513:2017, 3.42]
- threat ([ISO/IEC TR 24028:2020], 3.39)
- potential cause of an unwanted incident, which may result in harm (3.17) to systems (3.38), organizations or individuals
- trust ([ISO/IEC TR 24028:2020], 3.41)
- degree to which a user (3.43) or other stakeholder (3.37) has confidence that a product or system (3.38) will behave as intended
  [SOURCE: ISO/IEC 25010:2011, 4.1.3.2]
- transparency ([ISO/IEC 22989:2022], 3.5.14)
- <organization> property of an organization that appropriate activities and decisions are communicated to relevant stakeholders (3.5.13) in a comprehensive, accessible and understandable manner
  Note 1 to entry: Inappropriate communication of activities and decisions can violate security, privacy or confidentiality requirements.
- transparency ([ISO/IEC 22989:2022], 3.5.15)
- <system> property of a system that appropriate information about the system is made available to relevant stakeholders (3.5.13)
  Note 1 to entry: Appropriate information for system transparency can include aspects such as features, performance, limitations, components, procedures, measures, design goals, design choices and assumptions, data sources and labelling protocols.
  Note 2 to entry: Inappropriate disclosure of some aspects of a system can violate security, privacy or confidentiality requirements.
- transparency ([ISO/IEC TR 29119-11:2020], 3.1.81)
- <AI (3.1.13)> level of accessibility to the algorithm (3.1.12) and data used by the AI-based system (3.1.9)
- trustworthiness ([ISO/IEC 22989:2022], 3.5.16)
- ability to meet stakeholder (3.5.13) expectations in a verifiable way
  Note 1 to entry: Depending on the context or sector, and also on the specific product or service, data and technology used, different characteristics apply and need verification to ensure stakeholders' (3.5.13) expectations are met.
  Note 2 to entry: Characteristics of trustworthiness include, for instance, reliability, availability, resilience, security, privacy, safety, accountability, transparency, integrity, authenticity, quality and usability.
  Note 3 to entry: Trustworthiness is an attribute that can be applied to services, products, technology, data and information as well as, in the context of governance, to organizations.
  [SOURCE: ISO/IEC TR 24028:2020, 3.42, modified — Stakeholders' expectations replaced by stakeholder expectations; comma between quality and usability replaced by "and".]
- trustworthiness ([ISO/IEC TR 24028:2020], 3.42)
- ability to meet stakeholders' (3.37) expectations in a verifiable way
  Note 1 to entry: Depending on the context or sector and also on the specific product or service, data (3.11) and technology used, different characteristics apply and need verification (3.47) to ensure stakeholders expectations are met.
  Note 2 to entry: Characteristics of trustworthiness include, for instance, reliability (3.30), availability, resilience, security (3.35), privacy (3.28), safety (3.34), accountability (3.1), transparency, integrity (3.21), authenticity, quality, usability.
  Note 3 to entry: Trustworthiness is an attribute (3.6) that can be applied to services, products, technology, data and information (3.20) as well as, in the context of governance, to organizations.
- validation ([ISO/IEC 22989:2022], 3.5.18)
- confirmation, through the provision of objective evidence, that the requirements for a specific intended use or application have been fulfilled
  [SOURCE: ISO/IEC 27043:2015, 3.16]
- validation ([ISO/IEC TR 24028:2020], 3.44)
- confirmation, through the provision of objective evidence, that the requirements for a specific intended use (3.22) or application have been fulfilled
  Note 1 to entry: The right system (3.38) was built.
  [SOURCE: ISO/IEC TR 29110-1:2016, 3.73, modified — Only the last sentence of Note 1 to entry has been retained and Note 2 to entry has been removed.]
- validation ([ISO/IEC TR 24029-1:2021], 3.10)
- confirmation, through the provision of objective evidence, that the requirements (3.5) for a specific intended use or application have been fulfilled
  [SOURCE: ISO/IEC 25000:2014, 4.41, modified — Note 1 to entry has been removed.]
- value ([ISO/IEC TR 24028:2020], 3.45)
- unit of data (3.11)
  [SOURCE: ISO/IEC/IEEE 15939:2017, 3.41]
- value ([ISO/IEC TR 24028:2020], 3.46)
- belief(s) an organization adheres to and the standards that it seeks to observe
  [SOURCE: ISO 10303-11:2004, 3.3.22]
- verification ([ISO/IEC 22989:2022], 3.5.17)
- confirmation, through the provision of objective evidence, that specified requirements have been fulfilled
  Note 1 to entry: Verification only provides assurance that a product conforms to its specification.
  [SOURCE: ISO/IEC 27042:2015, 3.21]
- verification ([ISO/IEC TR 24028:2020], 3.47)
- confirmation, through the provision of objective evidence, that specified requirements have been fulfilled
  Note 1 to entry: The system (3.38) was built right.
  [SOURCE: ISO/IEC TR 29110-1:2016, 3.74, modified — Only the last sentence of Note 1 to entry has been retained.]
- verification ([ISO/IEC TR 24029-1:2021], 3.12)
- confirmation, through the provision of objective evidence, that specified requirements have been fulfilled
  [SOURCE: ISO/IEC 25000:2014, 4.43, modified — Note 1 to entry has been removed.]
- vulnerability ([ISO/IEC TR 24028:2020], 3.48)
- weakness of an asset (3.5) or control (3.10) that can be exploited by one or more threats (3.39)
  [SOURCE: ISO/IEC 27000:2018, 3.77]
References
- [ISO/IEC TR 24027:2021] - Information technology — Artificial intelligence (AI) — Bias in AI systems and AI aided decision making
- [ISO/IEC TR 29119-11:2020] - Software and systems engineering — Software testing — Part 11: Guidelines on the testing of AI-based systems
- [ISO/IEC TR 24030:2021] - Information technology — Artificial intelligence (AI) — Use cases
- [ISO/IEC TR 24028:2020] - Information technology — Artificial intelligence — Overview of trustworthiness in artificial intelligence
- [ISO/IEC TR 24029-1:2021] - Artificial intelligence (AI) — Assessment of the robustness of neural networks — Part 1: Overview
- [ISO/IEC DIS 24029-2] - Artificial intelligence (AI) — Assessment of the robustness of neural networks — Part 2: Methodology for the use of formal methods
- [ISO/IEC TR 24368:2022] - Information technology — Artificial intelligence — Overview of ethical and societal concerns
- [ISO/IEC CD TR 5469] - Artificial intelligence — Functional safety and AI systems
- [ISO/IEC 38507:2022] - Information technology — Governance of IT — Governance implications of the use of artificial intelligence by organizations
- [ISO/IEC 22989:2022] - Information technology — Artificial intelligence — Artificial intelligence concepts and terminology
- [ISO/IEC FDIS 23894] - Information technology — Artificial intelligence — Guidance on risk management
- [ISO/IEC AWI TS 12791] - Information technology — Artificial intelligence — Treatment of unwanted bias in classification and regression machine learning tasks
- [ISO/IEC AWI TS 8200] - Information technology — Artificial intelligence — Controllability of automated artificial intelligence systems
- [ISO/IEC AWI TS 6254] - Information technology — Artificial intelligence — Objectives and approaches for explainability of ML models and AI systems
- [ISO/IEC TS 4213:2022] - Information technology — Artificial intelligence — Assessment of machine learning classification performance
- [ISO/IEC DIS 42001] - Information technology — Artificial intelligence — Management system
- [ISO/IEC DIS 25059] - Software engineering — Systems and software Quality Requirements and Evaluation (SQuaRE) — Quality model for AI systems
- [ISO/IEC AWI 12792] - Information technology — Artificial intelligence — Transparency taxonomy of AI systems
- [ISO/IEC DIS 5338] - Information technology — Artificial intelligence — AI system life cycle processes
- [ISO/IEC AWI TS 5471] - Artificial intelligence — Quality evaluation guidelines for AI systems
Candidate references
- [ISO/IEC AWI TS 29119-11] - Information technology — Artificial intelligence — Testing for AI systems — Part 11:
- [ISO/IEC AWI 12792] - Information technology — Artificial intelligence — Transparency taxonomy of AI systems
- [ISO/IEC 23053:2022] - Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML)
- [ISO/IEC NP TS 17847] - Information technology — Artificial intelligence — Verification and validation analysis of AI systems
- [ISO/IEC DIS 8183] - Information technology — Artificial intelligence — Data life cycle framework
- [ISO/IEC NP 42005] - Information technology — Artificial intelligence — AI system impact assessment
Acknowledgement
The initial idea for this matrix came from the experts of the Roadmapping AHG - Harm Ellens and David Wotton.
Contributing
Issues and Pull Requests are greatly appreciated. If you've never contributed to an open source project before, I'm more than happy to walk you through how to create a pull request.
You can start by opening an issue describing the problem that you're looking to resolve and we'll go from there.
License
This document is licensed under the MIT license © Jonghong Jeon