I'm going to endorse several of the things other people have already said. The basic thesis of this talk is that we are getting better and better at solving the access control problems we had in the 1970s, and it's probably time to stop doing that and start thinking about the problems we've actually got now.
We're currently in the position of these politicians saying, "Okay, perhaps we need to tweak our policies, but the real problem is that the electorate just aren't getting our message." The electorate are saying, "Actually the message is coming across loud and clear, and we don't like it. We want you to come up with some different policies that we might actually vote for."
We're trying to sell a vision of security to users that they simply won't have any part of. It's not that we need to work out a slightly better interface for presenting our current mechanisms to users. They are just not going to act the way our models require, and we need to adjust our world-view to take account of that.
One of the differences between our world-view and theirs, for example, is that we still see security as absolute. A system is secure or it's not. Somebody is authenticated or they aren't. They are either authorized to do something or they're not. In the real world, it isn't like that. People don't have binary beliefs. They have views, and they are happy to make decisions based on their views, but those views are then going to change depending on the outcomes of their decisions.
If Dr. X wants to do something on Ward 12 then okay, some things are clear-cut. No, she definitely can't authorize a payment of a million pounds; yes, she definitely can look at this particular patient record. But there's a lot of stuff that's in a grey area in the middle: sometimes it's not so much that she is not authorized to do something, it's more that I'm not expecting her to want to do that. Why does she want to do that?
In some cases I'm quite happy for her to do it, but I really want a much higher level of authentication than I would do for other things. However it's probably not helpful to interrupt her, right at the moment that she's engaged in a crucially sensitive procedure, and say, "To proceed, you must now enter the authentication code showing on the device that you've left in your handbag which is in your office on the next floor." It's probably much better if the system pages a staff nurse and says, "Can you verify to me that Dr. X is in the ward right now, and also can you tell me what she appears to be doing, please?"
In the real world we're inclined to decide whether or not something's okay based on previous dealings that we've had. We're probably all right about saying, "Okay. So you forgot your identity card, can you get someone that we can authenticate to verify who you are to us please?" Yes, they might be lying, but we're primarily trying to reduce the risk of making a bad decision. This interacts with the problem touched on a little bit earlier, which is that we're outsourcing security. More and more access control decisions are being made by a third party which doesn't have an investment in the client, or in the data owner.
This kind of outsourced access control mechanism naturally desires to have very binary decision trees that lead to a very clear-cut audit trail. This means they can show that the patient died, the plane crashed, and the money was embezzled; but it wasn't their fault, because they can prove that they complied with the policy that they were given. The difficulty is we're outsourcing security, but we're not delegating the risks and opportunities in a way that aligns the incentives. So the incentives are not aligned: the "good guys" don't all want the same outcome anymore, no matter what the security model says.
For example, denying access is no longer the "safe" option. It's not true to say, "Nobody ever got fired for denying access in a medical scenario," for instance. Interestingly, in the medical scenario, we do now see new protocols emerging, such as the break-the-glass access control protocol. There's no actual glass involved, by the way, it's purely a noughts-and-ones protocol that allows non-binary outcomes to access control decisions. It's not just deny or allow, there is the option to say "This is a special case that we didn't see coming. We need to allow this for now, and we shall sort out afterwards whether we should have allowed it or not." Access control mechanisms need to allow explicitly for these tradeoffs.
The model I'm going to use to do this is the insurance model. I'm not actually suggesting that we need to quantify risks in the same way that insurance adjusters do. Although, I suppose you could do that if you want. Alice could charge Bob a cost for doing something that is based on what Alice expects it to cost her to let Bob do it. But actually it's enough if Alice picks the amount she charges Bob to be such that it aligns his incentives with hers. The cost is designed to promote wholesome behaviour over unwholesome behaviour on Bob's part, rather than to make Alice a "profit".
The idea is to look at who is risking what. We've got a server that's going to decide whether to allow access or deny it. At the moment, if the server allows access when it shouldn't, the server's going to suffer. But, consider the person with the rights on the data object: if nobody accesses that object, they're going to go out of business. So default-deny isn't acceptable either.
It's a little bit like the situation with shoplifting. If you own a shop then you have to put up with a certain amount of shoplifting. It's easy to eliminate shoplifting completely, but then no one will shop in your shop. If they're getting their crotch sniffed by a large dog every time they go in to buy a can of tomatoes, they'll probably shop somewhere else.
So, it's a matter of presenting users and data owners with the incentives and the disadvantages, the contextual clues that we use in real life, and allowing them to make, possibly in an automated way, rational decisions based on that information.
The first thing is to get straight about what are the risks, and what are the opportunity costs? This isn't new, David Wheeler was advocating a version of this approach in the fourth of these workshops 20 years ago. However I think David tended to underplay the fact that variance was important as well as expectation: very often you do want to give people a way to trade one off against the other. People buy insurance, but people also buy lottery tickets. In each case, they are willing to accept a less favourable expectation in exchange for either a higher or a lower variance.
The proposal that we're making is to apply this approach to security decisions. Let's think about the risks and opportunity costs, and then let's pass them along. The data owner might say to the server, "I'm happy for you to grant this access provided you obtain this stipulated amount for it." You can think of this amount as being like an insurance premium. And the server might choose to pass that cost along to the client, if that's the model you want to use. It might be that the system says to you, "We're not sure whether you are Dr. X or not, but I'll tell you what, if you put 1000 pounds into the machine in the corner then we'll give you access. And maybe you are really a journalist, but at least we got 1000 pounds for the data."
If Dr. X subsequently logs in, authenticates herself, and validates the transaction, then we'll give her your 1000 pounds back. Or maybe we want two people that are authenticated to each put 100 pounds in and say that you are Dr. X. If Dr. X subsequently authenticates herself "properly" and affirms the earlier transaction then the two guarantors get their money back. Or maybe clients are going to pay in terms of reputation. Perhaps there's a kind of a credit score, or loyalty points. Frequent flyer miles. Something like that, that you can spend in order to obtain these accesses. And maybe how many points you require for an access depends on how contentious that particular access is.
We might say, "Well, if it turns out that the access was bad and you shouldn't have had it, we'll charge you a million points. Alternatively, you can pay 100 points up front now, and we'll indemnify you against this going wrong." Just like an insurance policy. Or else we might have a system that's more like a bail bond, where you buy the bail bond; we let you into the system; but if it turns out that we shouldn't have, then we send the equivalent of bounty hunters after you.
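The expectation-versus-variance tradeoff behind these two options can be made concrete. A minimal sketch, where the probability of an access turning out bad and all the figures are purely illustrative assumptions, not part of the talk:

```python
import math


def risk_option(p_bad: float, penalty: float) -> tuple[float, float]:
    """The 'risk it' option: pay `penalty` with probability `p_bad`,
    nothing otherwise. Returns (expected cost, standard deviation)."""
    mean = p_bad * penalty
    var = p_bad * (1 - p_bad) * penalty ** 2
    return mean, math.sqrt(var)


def indemnity_option(premium: float) -> tuple[float, float]:
    """The insured option: a certain premium up front, zero variance."""
    return premium, 0.0


# Figures from the talk: risk a million points, or pay 100 up front.
# Break-even is p_bad = 100 / 1,000,000; above that, and for anyone
# who dislikes variance, the indemnity is the rational choice --
# exactly as with ordinary insurance.
mean_r, sd_r = risk_option(p_bad=0.0005, penalty=1_000_000)
mean_i, sd_i = indemnity_option(premium=100)
```

With these assumed numbers the risky option costs 500 points in expectation against the indemnity's certain 100, and carries a standard deviation of over 22,000 points besides.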
The user interface for our new approach would be very different to the sort of pop-up box you typically get at present: "There is a security problem. The certificate from <some server that you have never heard of> is self-signed. Would you like to 1. Proceed? (In which case whatever goes wrong, it's your fault.) 2. Abort? (In which case there is no way for you to get what you want done.) or 3. View the certificate?"
I love that third alternative; I think this is an absolute brainwave on the part of whoever thought it up. It's a totally brilliant idea to have a non-expert user confronted with an endless string of ASCII or, if they are lucky, X.509 or something. They can look at that for a while and then they have to click on one of the other two buttons anyway.
Instead of all that, our approach would have a pop-up box that said, "What you are trying to do is an unsafe sort of transaction, and here are a number of options. You can risk a million Tiger points; or you can pay a hundred Tiger points now and indemnify yourself against it. Or you can take the time and trouble to authenticate yourself and various other parties more carefully, in which case the price will come down to about 8 points, depending on how much effort you are willing to put in." Or you can have various other options, which are based on how carefully you've authenticated yourself, what kind of protocol you're connected over, how many of your transactions have gone bad in the past. The key point is that we do not expect you to have a fetish about getting perfect security. The point is rather that you're on a limited budget. Your supply of these loyalty tokens, or whatever it is, is limited.
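One way to read this is that the premium quoted for an access is a function of the authentication offered, the channel used, and the client's track record. A hypothetical pricing rule, where the function name, weights, and discounts are all invented for illustration:

```python
def quote_premium(base: float,
                  auth_level: int,
                  protocol_secure: bool,
                  past_bad_fraction: float) -> float:
    """Hypothetical premium: stronger authentication and a secure
    channel discount the price; a bad track record loads it.

    auth_level: 0 = none .. 3 = carefully multi-factor authenticated.
    past_bad_fraction: fraction of past transactions that went bad.
    """
    price = base
    price *= 0.5 ** auth_level           # each extra factor halves the price
    if not protocol_secure:
        price *= 2.0                     # loading for a weak channel
    price *= 1.0 + 10.0 * past_bad_fraction
    return round(price, 2)


# An anonymous client over a weak channel pays well over the base rate;
# a carefully authenticated one with a clean history pays a fraction of it.
expensive = quote_premium(100, auth_level=0, protocol_secure=False,
                          past_bad_fraction=0.1)
cheap = quote_premium(100, auth_level=3, protocol_secure=True,
                      past_bad_fraction=0.0)
```

With a base of 100 points, the careful client's quote comes down to 12.5 points (in the spirit of the "about 8 points" in the talk), while the anonymous one is quoted 400.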
Ross Anderson: This is perhaps very apposite given all the discussion about exceptional access. If you want exceptional access as a civilian, either with an Anton Piller Order or Norwich Pharmacal Order, you have to give money up front to lawyers. For an Anton Piller Order you typically put twenty or thirty thousand pounds surety, in case the person whose house you search comes back and sues you for damages. If the UK police are looking to get information from a US service provider like Google or Yahoo, they typically have to go through MLAT, which involves time and expense. All of these things are good. The fact that it used to cost them 200 pounds to get a mobile phone location trace was good, because it meant they didn't do it all the time.
The problem is that governments try to legislate for zero marginal cost access. How can you prevent this systems failure with this insurance structure?
Reply: In other words, how can we prevent governments from requiring us to insure them for nothing? I don't have a good answer to that, we're primarily concerned here with the commercial world. It's really hard to throttle governments against their will, but we can try to persuade them that it would be in their own interest to limit the rate at which they are able to do things. Chelsea Manning is an example of somebody doing something that they were authorized to do. Do you want to allow a sysadmin to move a file from one place to another? Yes, you do. Do you want them to be allowed to do it four million times in quick succession? Probably not. Associating a very small cost with each time they did it, that counts against an allowance, would allow you to detect and respond to that quite rapidly. The government is perhaps missing a trick here by not applying this type of throttle to its own security policies, internal as well as external.
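The throttle described here, a very small per-action cost counted against an allowance, can be sketched in a few lines. The class name, the allowance figures, and the alarm behaviour are all assumptions for illustration:

```python
class ActionBudget:
    """Charge a small cost per action against a fixed allowance.
    One file move is routine; millions in quick succession exhaust
    the budget and trip an alarm long before they can complete."""

    def __init__(self, allowance: int, cost_per_action: int):
        self.remaining = allowance
        self.cost = cost_per_action
        self.alarmed = False

    def perform(self) -> bool:
        """Attempt one action; returns True if it was allowed."""
        if self.remaining < self.cost:
            self.alarmed = True   # flag for investigation; deny (or audit)
            return False
        self.remaining -= self.cost
        return True


# A sysadmin with an allowance covering 10,000 routine moves
# tries a 20,000-file bulk copy: half way through, the budget
# runs dry and the alarm is raised.
budget = ActionBudget(allowance=10_000, cost_per_action=1)
moved = sum(budget.perform() for _ in range(20_000))
```

The point is not that 10,000 is the right number, but that any finite allowance converts "authorized, four million times" from an invisible event into a detectable one.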
The key point is to give the user an interface where they feel they're in control. They're presented with some data. They're making a decision. We've rigged the game so that their incentives are aligned with ours as security people, and we're giving them a budget and more importantly we're expecting them to spend it. Instead of saying, our objective is perfect security, we're saying, our objective is security that's good enough against some metric.
Jonathan Anderson: There are some other currencies, which you haven't talked about, which involve functionality and performance. We might say, so, your certificate appears valid and it's signed by a CA who's in our list of trusted CAs; however, that's a very long list and our certificate transparency doesn't really like it so, tell you what, we'll let you view the webpage. We're not going to ask you whether you really want to view the page that you already said you wanted to view, answer yes, or no. But we won't run JavaScript. Or we won't display password prompts. Or there's some degradation of functionality. Or we'll have increased sandboxing that makes the thing run much more slowly, but ... [next slide] What a very nice slide you have there [laughter].
Reply: And what a very insightful comment. Okay. The other basic problem with access control is that currently outcomes are binary. It's allow or deny. There's nothing in the middle. But, just as Jon points out, we could allow the access but put you in some sort of sandbox, where you can't do anything terribly irrevocable. It's fine to explore the catalogue, but we're not actually going to let you buy anything. Or you're going to be subject to audit controls: perhaps definitely, or perhaps you just have a higher chance of being subjected to an audit control.
A very small proportion of transactions are routinely selected for a random in-depth audit. There's always a problem with people gaming to see where the threshold is for triggering an automatic audit, and then putting through transactions that are just below that level. One of the other difficulties with current access control mechanisms is that we expect them to be deterministic: to give us the same answer if we ask the same question. A good counter-measure may be to have non-deterministic mechanisms, so that whether you are audited or not is actually random, but what you're doing affects the probability.
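A non-deterministic audit rule of this kind might look like the following sketch: the decision itself is random, but the probability is a function of what you are doing, so a transaction "just below the threshold" still carries real audit risk. The floor, slope, and threshold figures are assumptions for illustration:

```python
import random


def audit_probability(amount: float, threshold: float = 10_000.0) -> float:
    """Audit probability rises smoothly with transaction size, with a
    floor so that even tiny transactions are occasionally audited.
    There is no hard cut-off an adversary can sit just beneath."""
    floor = 0.01                      # 1% of all transactions, regardless
    scaled = amount / threshold
    return min(1.0, floor + 0.2 * scaled)


def is_audited(amount: float, rng: random.Random) -> bool:
    """Randomized decision: same question, not always the same answer."""
    return rng.random() < audit_probability(amount)
```

Splitting a 100,000-point transaction into eleven 9,999-point pieces no longer guarantees escape: each piece is audited with probability around 0.21, so the chance that all eleven slip through is under 8%.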
In summary: non-deterministic algorithms; a premium depending on what access you're requesting, what authentication you're offering, and what your past history is; and choosing from a range of alternative premiums, depending on what precautions you are willing to take. You can decide how much security you are willing to apply and the system can respond accordingly. The access decisions include other alternatives besides allow or deny.
As well as delivering more flexible services and interfaces that users might actually be willing to use, this approach also allows you to do security auditing by simply following the money. For example, if your computer has been taken over and is being used as part of a bot-net, you're going to notice very rapidly that there's a flow of security points out of your system with no corresponding flow of goodness in, and you're going to say, "Why is that? Why am I spending all this stuff and not getting anything for it?"
Simon Foley: Would this work for Acceptable Use Policies? Currently, we're using servers with Acceptable Use pop-ups that you have to agree with in order to get the service. If the conditions are reasonable, you might agree, but if it's totally clueless you might still agree, because ...
Reply: The alternative is not getting the service at all, under any conditions.
Simon Foley: In principle I can say, "Well this is restrictive, or co-optive to my data", but the difficulty is there's no fairness, because once they have my data I have to take them to court to prosecute, and I can't afford to do that as an individual. Do you think a mechanism like the one you are proposing could help?
Reply: It depends to some extent on what market pressures produce. In principle, it would allow things to evolve that say, "All right, would you instead be prepared to agree to this less onerous agreement in exchange for more limited access?" This is what a lot of academic licenses already do. A lot of data repositories have a licence where they say, if you log on at a university that's one of our clients, then you get access that is restricted, in exchange for not agreeing to some of our commercial conditions.
Simon Foley: Again, there's an asymmetry to it. I might end up agreeing to the policy because I don't understand it, or I have some malicious reaction to the policy and I know it can't be enforced against me. It comes down to the consequences. The cost to me to demonstrate the uniqueness of that policy is too high.
Reply: I'm going to try and wriggle out of this, by saying I think now you're presenting a softer version of Ross's objection. Ross is the extreme end where you've already signed up involuntarily to a policy that's completely outrageous, and even going to court won't help you. You're putting forward a softer version of this, and asking how far can an approach like mine get you? I think the answer is, if people are doing this for money - if people have a data asset they're trying to make money from - then it's a marketing question. Can I make enough money from suspicious people like you to make it worth my while offering a softened variant of the product?
The answer to this question is not obvious: the reason there isn't a really secure iPhone is there's no market for it, right?
Hugo Jonker: Two comments. One is that you seem to be assuming cloud forums where everyone chooses the system. Like the nice example that you had, where they self-sign certificates.
Reply: No, no, I'm not necessarily assuming that.
Hugo Jonker: The second question is: take the self-signed certificate. How would you determine a good pricing strategy? Imagine I want to attack you. I set up a website and I can find everyone here a deal, bringing down costs. Making me seem very reliable. As soon as you go there ...
Reply: Suddenly I get a much worse deal.
Hugo Jonker: You get a worse deal, with seemingly little risk to me, so it's a spear attack.
Reply: Attacks like that generally work where somebody builds up their reputation because they want to do one big scam. Like borrowing lots of small amounts of money from a bank to build up your credit record so that they'll lend you a big amount of money, and then running away.
Hugo Jonker: This system you propose seems particularly vulnerable to that sort of attack.
Reply: Yes, it is. But if I'm in the position of the bank, which is the position you're putting me in, then that's a risk that I, the bank, am willing to bet on because, in the long run, it works out for me. Okay, I got scammed by you, but by engaging in that type of transaction I come out ahead across the piste in the long run. I expect to lose occasionally, and I'll probably insure myself against that.
Hugo Jonker: Yes, but there are actually two parties here. In the case of the self-signed certificates, the server giving the self-signed certificate and the user accepting it should both somehow be involved in saying, "I accept this risk". Both should somehow put credits into a pot.
Reply: What happens in practice, in the model I'm advocating, is that the server incurs the risk and then may decide to pass it along to the user; either at face value, or with a discount, or with a premium, depending on their risk model.
Hugo Jonker: Then how does the user forward their risk to the server?
Reply: Okay, that's fair enough. When I say user and server, this is unsatisfactory if we are really in a peer-to-peer setting. In that case we are talking about arbitrage, we're talking about using pop-up boxes, or whatever, to negotiate a contract.
Ross Anderson: Perhaps there's a simpler approach to this. If one imposed a rule that personal information could not be sold at a marginal price of zero, that might fix many things, because where companies monetize something they won't give it away. If you get access to stuff as a researcher that's also being sold to commercial companies, it comes with an NDA even if it isn't sensitive.
Reply: For revenue protection purposes.
Ross Anderson: Much of the privacy failure is because the marginal price of information tends towards zero for the usual information economics reasons, in the absence of something stopping it. The price of software can be kept above zero by copyright licensing mechanisms. Perhaps what is needed here is a privacy licensing mechanism.
Reply: That would impose a similar lower bound.
Ross Anderson: Which effectively imposes a government tax. Suppose this is the way forward: the Chancellor of the Exchequer simply says that every record containing personal information attracts a tax of one penny, and then all privacy violators are tax evaders and go straight to jail.
Reply: Actually, Caspar Bowden and I once had this very conversation as part of a discussion with the Information Commissioner about how to protect against information breaches. We reckoned that having a flat charge for personal information would be a very effective mechanism [1].
Jonathan Anderson: I think one of the problems with implementing this model is that the people tasked with enforcing the security policies in most organizations come from a part of the organization that is absolutely risk intolerant. They share in none of the benefits of enabling things, but they get egg all over their face whenever something goes wrong. I think the same is true in accounting departments or typical security in a lot of organizations, but there's kind of a fundamental organizational behavioral problem. How do you fix that? How do you get them to want to do this?
Reply: So, how do we get people like us to buy into a model that says, if you're never making mistakes, then you're not taking enough risks; and that means the business is losing money relative to our competition, and that's why I'm firing you. How do we entice security to move into that model? That's a really good question on which to end, I think.
Notes
[1] In the scheme Caspar and I came up with, the tax took the form of a per-datum spot fine for being in possession of personal information that was not tagged with a valid ACL and a conforming audit trail.
Ā© 2017 Springer International Publishing AG
Christianson, B. (2017). The Price of Belief: Insuring Credible Trust? (Transcript of Discussion). In: Anderson, J., Matyáš, V., Christianson, B., Stajano, F. (eds) Security Protocols XXIV. Security Protocols 2016. Lecture Notes in Computer Science, vol. 10368. Springer, Cham. https://doi.org/10.1007/978-3-319-62033-6_6
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-62032-9
Online ISBN: 978-3-319-62033-6