When is a system considered safety-critical, and just how safety-critical is "safety-critical"?

The criteria for what makes a system safety-critical depend upon the regulatory environment and/or the safety standards applied to a project or product. The general consensus is that a system is safety-critical if its operation or failure could cause, or fail to prevent, harm to users, bystanders, other stakeholders, or the environment, although the range of causes and harms considered can vary.

In most environments, safety-criticality is not "all or nothing": terms such as safety-critical and safety-related describe a spectrum of systems, from the most hazardous to the barely hazardous. The labels alone can often cause unnecessary trepidation amongst the procurers and/or developers of a system. The distinction that matters is how safe a system is, or how far it is from being made acceptably safe.

Most regulatory environments and safety standards reflect that the burden of proof is on an organisation to provide assurance that a system is either not safety-critical or that it is acceptably safe; a system is considered guilty unless and until it has been proven innocent.

Some safety standards provide checklists against which safety-criticality can be judged, usually based upon the domain in which they are applied. For example, Def(Aust) 5679 "The Procurement Of Computer-based Safety Critical Systems" provides a checklist focussed on defence.


If a system is not identified as safety-critical early, the consequences can be hazardous and/or expensive.


It can also be expensive to discover too late that a system is not safety-critical, because safety work already performed may prove to have been unnecessary.


Perform hazard identification and risk assessment early in the life cycle to determine if safety requirements are needed to make a system acceptably safe.
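As a minimal sketch of what such an early risk assessment step might look like, the snippet below classifies identified hazards with a simple likelihood-by-severity risk matrix. The category names, scores, and thresholds here are illustrative assumptions, not taken from any particular safety standard; a real project would use the matrix and tolerability criteria defined by its applicable standard.

```python
# Illustrative sketch of early risk assessment: classifying identified
# hazards with a likelihood x severity risk matrix.
# NOTE: categories and thresholds are assumptions for illustration only,
# not drawn from any specific safety standard.

SEVERITY = {"negligible": 1, "marginal": 2, "critical": 3, "catastrophic": 4}
LIKELIHOOD = {"improbable": 1, "remote": 2, "occasional": 3, "frequent": 4}

def risk_class(severity: str, likelihood: str) -> str:
    """Map a hazard's severity and likelihood to a coarse risk class."""
    score = SEVERITY[severity] * LIKELIHOOD[likelihood]
    if score >= 9:
        return "intolerable"   # safety requirements mandatory
    if score >= 4:
        return "undesirable"   # reduce risk as far as reasonably practicable
    return "tolerable"         # monitor; no dedicated safety requirements

# Example: a hazard judged critical in severity but remote in likelihood.
print(risk_class("critical", "remote"))
```

An assessment like this, performed at concept or requirements stage, gives an early indication of whether the system needs dedicated safety requirements, before significant design effort is committed.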