Trust in Computer Systems and the Cloud. Mike Bursell
complex. We certainly do want to get to a clearer, more refined definition, but we first need to delve deeper into what trust looks like and how it is defined in the various spheres of relevant academic study. Although our interest is less in the human-to-human realm than in trust relationships that involve computer systems (whether human-to-computer or computer-to-computer), it is important to understand the theoretical and academic underpinnings of trust in the human-to-human realm. This is partly because relating such thinking to our realm lets us compare what we mean with what we do not mean, and partly because any application of trust between realms must necessarily be metaphorical and so deserves a thorough examination. As discussed in Chapter 1, “Why Trust?”, metaphors are useful but can be misleading and need to be employed with care. The other reason is that unless we can unpick the various meanings that may be intended when the word trust is used, it will be difficult to define what we wish to communicate as we narrow down the associated concepts and choose those that we want to use.
First, we need to admit that the field of study regarding trust is both active and wide: there are a lot of definitions of human-to-human trust, many of which are not easily reconcilable. Most of the definitions, understandably, focus on social elements, and, as noted by Harper, there is a strong overtone of mistrust. Here are some examples supplied by other noted authors ruminating on the notion of trust:
Trust in social interactions is “the willingness to be vulnerable based on positive expectation about the behaviour of others”.2 Cheshire notes that Baier's definition3 “depends on the possibility of betrayal by another person”.
For Hardin, when considering interpersonal trust, “my trust in you is encapsulated in your interest in fulfilling the trust”.4 Cheshire distinguishes trustworthiness from trust and discusses how risk-taking can act as a signal that one party considers another trustworthy.5 Dasgupta6 has seven starting points for establishing trust, of which three are related directly to punishment, one to choice, one to perspective, one to context, and one to monitoring.
All of these examples may be helpful when considering human-to-human trust relationships—though even there, they generally seem a little vague in terms of definition—but if we are to consider trust relationships involving computers and system-based entities, they are all insufficient, because all of them relate to human emotions, intentions, or objectives. Applying questions around emotions to, say, a mobile phone's connection to a social media site is clearly not a sensible endeavour, though we will examine later how intention and objectives may have some relevance in discussions about trust within the computer-to-computer realm.
One further definition that deserves examination is offered by Diego Gambetta, the source of our original trust definition. We will spend a little time on this as it will set up some interesting issues to which we will return at some length later in the book. Gambetta proposes the following definition:
trust (or, symmetrically, distrust) is a particular level of the subjective probability with which an agent assesses that another agent or group of agents will perform a particular action, both before he [sic] can monitor such action (or independently of his capacity ever to be able to monitor it) and in a context in which it affects his own action.7
There are some interesting points here. First, Gambetta discusses agents, though the usage is somewhat different to that which we employed in Chapter 1. We used agents to describe an entity acting for another entity, whereas he is using a different definition, where an agent is an actor that takes an active role in an interaction. Confusingly, the usage within computing sometimes falls between these two definitions. A software agent is considered to have the ability to act autonomously in a particular situation—the term autonomous agent is sometimes used equivalently—but that is not necessarily the same as acting as a person or an organisation. However, in the absence of artificial general intelligence (AGI), it would seem that software agents must be acting on behalf of humans or human organisations even if the intention is to “set them free” to act autonomously or even learn behaviour on their own.
The second important point that Gambetta makes is that a trust relationship—he is specifically discussing human trust relationships—is partly defined by expectations before any actions are performed. This resonates closely with the points we made earlier about the importance of collecting information to allow us to form assurances. His third point is related to the second, in that he discusses the possible inability of the trustor to monitor the actions in which they are interested. Given such a lack of assuring information, any evaluation of the likelihood that trust is warranted must rest on the same data: that presented beforehand.
For his fourth point, however, Gambetta also identifies that there are contexts in which actions can be monitored, though he seems to tie such actions to actions the trustor will take. This seems too restrictive on the trustor, as there may be actions taken by the trustee that do not lead to corresponding actions by the trustor—unless the very lack of such actions is considered action in itself. More important, however, is the implicit assumption (implied by the negative made explicit in the previous statement) that monitoring should take place.
The Role of Monitoring and Reporting in Creating Trust
This assumption about monitoring should not be glossed over. Monitoring is important because without it, there is no way for us to check or update a trust relationship. Without some sort of feedback mechanism to allow us to monitor the actions being taken by the trustee, any trust relationship that we have created to the trustee can only be based on our original expectations. It is difficult to feel that we have modelled a trust relationship well if there is no way to verify or validate the assurances we have, so monitoring definitely has a role to play.
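The role of feedback described above can be made concrete with a small sketch—purely illustrative, and not drawn from the book itself—in which a trust relationship is modelled as a subjective probability that can only be revised when monitoring supplies observations of the trustee's actions. The class and attribute names here are invented for the illustration; the update rule is a simple Beta-Bernoulli scheme, one of many possible choices.

```python
# Illustrative sketch only: a trust relationship modelled as a subjective
# probability, revisable only when a monitoring channel supplies feedback.
# All names (TrustRelationship, observe, etc.) are invented for this example.

class TrustRelationship:
    def __init__(self, prior_successes: float = 1.0, prior_failures: float = 1.0):
        # A Beta(a, b) prior encodes our expectations *before* any actions
        # are performed, exactly the pre-action assurances discussed above.
        self.a = prior_successes
        self.b = prior_failures

    @property
    def expected_probability(self) -> float:
        # Current subjective probability that the trustee will act as expected.
        return self.a / (self.a + self.b)

    def observe(self, action_fulfilled: bool) -> None:
        # Only possible when monitoring exists; without it, the relationship
        # stays frozen at the prior formed from the original expectations.
        if action_fulfilled:
            self.a += 1.0
        else:
            self.b += 1.0

# Without monitoring, the estimate never moves from the prior:
unmonitored = TrustRelationship()
assert unmonitored.expected_probability == 0.5

# With monitoring, observed behaviour updates the relationship:
monitored = TrustRelationship()
for outcome in [True, True, True, False]:
    monitored.observe(outcome)
print(round(monitored.expected_probability, 2))  # 0.67
```

The point of the sketch is not the particular update rule but the asymmetry it exposes: with no feedback mechanism at all, the `unmonitored` relationship can never be verified or corrected, however many actions the trustee actually performs.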
One difference that we will encounter when we start examining trust relationships to computer systems, however, is that the opportunities for direct sensory monitoring of actions are likely to be more limited than in human-to-human trust relationships. When monitoring human actions, they are often readily apparent, but the same is not true for many computer-performed actions. If I request via a web browser that a banking application transfer funds between one account and another, the only visible effect I am likely to see is an acknowledgement on the screen. Until I get to the point of trying to spend or withdraw that money,8 I realistically have no way to be assured that the transaction has taken place. It is as if I have a trust relationship with somebody around the corner of a street, out of view, that they will raise a red flag at the stroke of noon; and I have a friend standing on the corner who will watch the person and tell me when and if they raise the flag. I may be happy with this arrangement, but only because I have a trust relationship to the friend: that friend is acting as a trusted channel for information.
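The effect of interposing a trusted channel like the friend on the corner can be sketched numerically. This is an illustrative calculation under an invented (and deliberately simplistic) assumption—that each intermediary in a reporting chain relays correctly with some independent probability—so the function name and the figures are hypothetical, not taken from the book.

```python
# Illustrative sketch: confidence in an event observed only through
# intermediaries ("trusted channels") is bounded by trust in each link.
# Assumes independent, correctly-relaying links; all figures are invented.
from math import prod

def chained_confidence(direct_confidence: float, link_trusts: list[float]) -> float:
    # Each intermediary in the chain can only attenuate my confidence
    # that the reported event (the flag being raised) actually occurred.
    return direct_confidence * prod(link_trusts)

# Seeing the flag myself: no intermediaries, full confidence.
print(chained_confidence(1.0, []))                       # 1.0

# A friend on the corner, trusted at 0.95, relays the report:
print(round(chained_confidence(1.0, [0.95]), 2))         # 0.95

# The friend is relaying word from a business partner of the
# flag-waver, whose reporting I trust rather less:
print(round(chained_confidence(1.0, [0.95, 0.6]), 2))    # 0.57
```

However crude, the sketch captures the asymmetry in the example: my assurance about the flag can never exceed my trust in the channel carrying the information, which is why the choice of the word friend matters so much in what follows.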
The word friend was chosen carefully because a trust relationship is already implicit in the set of interactions that we usually associate with someone described as a friend. The same is not true for the word somebody, which I used to denote the person who was to raise the flag. The situation as described is likely to lead us to presume a fairly high probability that the trust relationship I have to the friend is sufficient to assure me that they will pass the information correctly. But what if my friend standing on the corner is actually a business partner of the flag-waver? Given our human understanding of the trust relationships typically involved in business partnerships, we may immediately begin to suspect that my friend's motivations with respect to correct reporting are not neutral.
The example of the