UNCERTAINTY MANAGEMENT
Most, if not all, of the large engineered facilities with which this book is concerned are 'one-off' constructions or complex systems of artefacts, as distinct from artefacts that are mass produced. This distinction between 'one-off' and mass production may, at first sight, seem rather trite, but it leads to profound differences in the quantity and quality of the information available to the engineer. If a product is to be mass produced it makes economic sense to test one or more prototypes; in fact, prototype testing becomes an essential phase of the design and development of the product. By contrast, it is uneconomic to test a 'one-off' product to destruction and to use that information to rebuild.
Thus the designer of a 'one-off' product obtains much less feedback about the performance of the product in the world outside of the laboratory than does his/her manufacturing counterpart. The resulting uncertainty largely surrounds the quality of any model, whether scientific or not, that the engineer uses to make decisions. However, before we pursue an analysis of uncertainty in engineering systems it is worth referring back to the discussion of Sec. 1.4 concerning 'world views'. You will recall that it was argued that the world should be considered as a totality and not as two separate systems (human beings and physical objects).
The important difference between physical objects and human beings is that physical objects lack intentionality. Intentionality is the feature by which the states of our minds are directed at, or are about, objects other than themselves. It is this feature that makes human systems so difficult to analyse and for which it is so difficult to produce structured sets of theories that describe their behaviour. Searle, in his 1984 Reith lectures, discussed this relationship of human beings with the rest of the universe. He addressed questions such as, 'How do we reconcile the mentalistic concept of ourselves with an apparently inconsistent conception of the universe as a purely physical system?' Questions such as 'Can computers have minds?' or 'What is consciousness?' have become important in the development of artificial intelligence.
It is not the purpose of this book to attempt a detailed discussion of these issues, but it is important to recognize the fundamental importance of human involvement to our consideration of engineering safety. In managing safety we must recognize that we do not understand sufficiently well the way in which human beings behave, and yet that understanding must be central to predictions concerning safety. It is necessary therefore to move away from an emphasis on the prediction of safety towards an emphasis on the management of safety, certainly as far as human factors are concerned.
The central activity of both scientist and engineer is that of a decision maker. Whatever hypotheses are conjectured, whatever the problem faced and whatever the motivation of the problem solver, decisions must be taken on the basis of dependable information. So what is dependable information and how does the concept of dependability differ from that of truth? The deterministic treatment of engineering calculations has its roots in the ideals of 'exact science'.
We have seen that this is no longer tenable. It is now suggested that what really matters to an engineer is the dependability of a proposition. Of course, if a proposition is true then it is dependable, but if a proposition is dependable it is not necessarily true. Truth is a sufficient condition but not a necessary condition for dependability. Einstein demonstrated that Newtonian mechanics is not true, but it is clearly dependable under certain conditions. Sufficient conditions for dependable information have been discussed previously. A conjecture is dependable if the following conditions hold (see the sketch after the list):
1. A highly repeatable experiment can be set up to test it.
2. The resulting state is clearly definable and repeatable.
3. The value of the resulting state is measurable and repeatable.
4. The test is successful.
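These four conditions can be read as a simple operational test. Purely as an illustration, and not as a procedure given in the text, the following Python sketch shows one way such a check might be expressed: the function name, the relative tolerance and the beam-deflection figures are all hypothetical assumptions introduced here, not taken from the source.

```python
import statistics


def is_dependable(trial_results, expected_value, tolerance=0.05):
    """Illustrative check of the four dependability conditions.

    trial_results  : measured outcomes of a highly repeatable experiment
                     (conditions 1-3 assume the experiment exists and the
                     resulting state is definable and measurable).
    expected_value : the value the conjecture predicts.
    tolerance      : hypothetical relative tolerance used to judge
                     'repeatable' and 'successful'; not from the source.
    """
    if len(trial_results) < 2:
        return False  # repeatability cannot be judged from a single trial

    mean = statistics.mean(trial_results)
    spread = statistics.pstdev(trial_results)

    # Conditions 2 and 3: the measured state is repeatable within tolerance.
    repeatable = spread <= tolerance * abs(mean)

    # Condition 4: the test is successful, i.e. the mean agrees with the
    # conjectured value within the same (assumed) tolerance.
    successful = abs(mean - expected_value) <= tolerance * abs(expected_value)

    return repeatable and successful


# Hypothetical example: a conjecture predicting a 12.0 mm deflection
# under a given test load, checked against four trial measurements.
print(is_dependable([11.9, 12.1, 12.0, 11.8], expected_value=12.0))  # True
```

The point of the sketch is only that dependability, unlike truth, is judged against repeated observation within an agreed tolerance; the particular statistics used here are one choice among many.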