1 Introduction

Information security in larger organizations is often managed by an information security manager and/or a security team—the security function of the organization. The security function is the part of the organization recognised as having the expertise to identify and manage the security technologies and processes necessary to protect the organization from threats that relate to its assets. Outwardly, this is embodied in controls and procedures, often detailed in the organization’s security policy (or policies).

Policy may dictate specific security-related behaviours, which employees are expected to adopt. There are myriad ways to promote behaviour change [15], and challenges in guaranteeing that behaviours are changed successfully [55]. Declaring a behaviour in a security policy is then no assurance that the behaviour will happen. This reality has drawn increasing attention to the need to manage behaviour effectively. Consideration of behaviour change theory and behavioural economics [13] is one such approach.

Both research and practice have shown that behaviours may not be adopted in organizations. Employees may not see how policy applies to them, find it difficult to follow, or regard policy expectations as unrealistic [36] (where they may well be [31]). Employees may create their own alternative behaviours [12], sometimes in an effort to approximate secure working, rather than abandoning security [37]. Organizational support can be critical to whether secure practices persist [22], where individuals may assume that others with relevant knowledge and resources will manage the problem for them.

Rational security micro-economics has proved useful for explaining the interaction between organizational security policies and behaviours [8], where security ecosystems are otherwise too complicated to study directly in this way. Similarly, Herley posits that the rejection of advocated security behaviours by citizens exhibits traits of rational economic behaviour [28].

Security managers must have a strategy for how to provision for security, provide workable policy, and support user needs. In early workshops on the Economics of Information Security, Schneier advocated consideration of trade-offs [14, p. 289]; nearly 15 years later, this is still not happening sufficiently in organizations. Here we revisit principles of information economics and behavioural economics in tandem, identifying contradictions which point to gaps in support. After reviewing the capacity for economics to explain a range of security-related behaviours (Sect. 2), we demonstrate how current approaches to infrastructure and provisioning of security mirror rational-agent economics, even when behavioural economics is applied to promote individual behaviours (Sect. 3). We show through examples how these contradictions align with regularly cited causes of security non-compliance from the literature.

We present a framework (Sect. 4), based on consolidated economics principles, with the following goal:

Better support for ‘good enough’ security-related decisions, by individuals within an organization, that best approximate secure behaviours under constraints, such as limited time or knowledge.

This requires us to identify the factors affecting security behaviours, which should be considered by the organization in order to inform policy design, support the identification of provisioning requirements, and describe expectations of users. The framework is intended to underpin provisioning to reach this goal.

We then apply the framework to one of the most widely promoted security behaviours (Sect. 5), the maintenance of up-to-date device software, demonstrating through comparison with independent user studies where the consolidated economics approach – bounded security decision-making – can anticipate organizational support requirements. We consider how the framework can be situated to support practitioners (Sect. 6), before concluding with a summary and future work (Sect. 7). A supporting glossary of foundational economics terminology is detailed in the Appendix.

2 Related Work

There is a growing body of research advocating the application of economics concepts to security generally, as a means to understand complex challenges. Foundational work by Gordon and Loeb asserted that traditional economics can inform optimal investment in security [26], where here we apply a similar approach to a combination of economic models, to reposition investment challenges related to security behaviour management. Beautement et al. [8] articulate how employees have a restricted ‘compliance budget’ for security, and will stop complying once they have reached a certain threshold.

Acquisti and Grossklags [2] apply behavioural economics to consumer privacy, to identify ways to support individuals as they engage in privacy-related decision-making. Similarly, Baddeley [6] applies behavioural economics in a management and policy setting, finding for example that loss-aversion can be leveraged in the design of security prompts. Other concepts from behavioural economics have been explored, such as the endowment effect [60] and framing within the domain of information security and privacy [3, 27]. Anderson and Agarwal [3] identify potential in the use of goal-framing to influence security behaviour, where commitment devices have since been explored as a way to influence behaviour change [25]. Verendel [63] applies behavioural economics principles to formalize risk-related decisions toward predicting decision-making problems, positing that aspects of usable security must also be explored.

In addition to understanding security and privacy behaviour through behavioural economics, some have advocated the influencing of such behaviour through the application of nudge theory [1, 61]. Through empirical modelling of behavioural economics, Redmiles et al. [51] effectively advocate for identifying and presenting options which are optimal for the decision-maker, and making the risk, costs, and benefits of each choice transparent. Here we explore where there are ‘gaps’ in realising these capabilities, which must be closed in order for organizations to support secure behaviours.

In terms of capturing the dynamic between a decision-maker (here, an employee), and the security function – an ‘influencer’ – Morisset et al. [42] present a model of ‘soft enforcement’, where the influencer edits the choices available to a decision-maker toward removing bad choices. Here we acknowledge that workarounds and changes in working conditions occur regularly, proposing that the range of behaviour choices is in effect a negotiation between the two parties.

In summary, there is a need to reconcile the advancements in the application of economics to security with how management of behaviour change strategies in organizations is conceptualised. Here we fill in the gaps, where currently there are contradictions and shortcomings which act against both the organization and the individual decision-maker.

3 Applying Economics to Organizational Security

Pallas [44] applies institutional economics to revisit information security in organizations, developing a structured explanation of how the centralized security function and decentralized groups of employees interact in an environment of increasingly localized personal computing. Pallas delineates three forms of security apparatus for achieving policy compliance in organizations (as in Table 1): architectural means (which prevent bad outcomes by strictly controlling what is possible); formal rules (such as policies, defining what is allowed or prohibited for those in the organization); and informal rules (primarily security awareness and culture, as well as security behaviours). We demonstrate the lack of a strategic approach to managing the relatively high marginal costs of realizing the informal rules which are intended to support formal rules.

Table 1. Costs of hierarchical motivation (reproduced from Pallas [44]).

3.1 Rational vs. Bounded Decision-Making

In traditional economics, a decision-making structure assumes a rational agent [58, 59]. The rational agent is equipped with the capabilities and resources to make the decision which will be most beneficial for them. The agent knows all possible choices, and is assumed to have complete information when evaluating those choices, as well as a detailed analysis of probability, costs, gains, and losses [59]. A rational agent is then capable of making an informed decision that is simultaneously the optimal decision for them.
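As a minimal formal sketch (our notation, offered as an illustration rather than a reproduction of [58, 59]), this optimal decision can be written as

\[ a^{*} = \operatorname*{arg\,max}_{a \in A} \; \sum_{o \in O} p(o \mid a)\, u(o) \]

where A is the complete set of available actions, O the set of possible outcomes, p(o | a) the objectively known probability of outcome o under action a, and u(o) the agent’s utility for that outcome; the rational agent is assumed to know all of these quantities in full.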

Behavioural economics, on the other hand, challenges the assumption that agents make fully rational decisions. Instead, the field refers to the concept of bounded rationality, which holds that an agent’s rationality is bounded by cognitive limitations and time restrictions. These considerations also challenge the plausibility of complete information, which is practically unrealistic for a bounded agent. Under these restrictions, the bounded agent turns instead to ‘rules of thumb’ and makes ad hoc decisions based on a quick evaluation of perceived probability, costs, gains, and losses [33, 58].

Table 2 outlines the differences between the decision-making process of a rational agent and that of a bounded agent. The classical notion of rationality (or, rather, the neoclassical assumption of rationality [59]) is unachievable outside of theory. From the standpoint of traditional rationality, the decision-making agent is assumed to have an objective and completely true view of the world and everything in it. Because of this objective view, and the unlimited computational capabilities of the agent, it is expected that the decision which is taken will be the one which provides maximal utility for the agent.

Table 2. Rationality vs. bounded rationality in decision-making.

It is a common misconception that behavioural economics postulates irrationality in people. The difference in viewpoint arises from how rationality was originally defined, rather than from a claim that people are irrational beings. It is agreed that people have reasons, motivations, and goals when deciding to do something—whether they do it well or badly, they do engage in thinking and reasoning when making a decision [59]. However, it is important to describe, in a more realistic manner, how this decision-making process looks for a bounded agent. It is by considering these principles that we explore a more constructive approach to decision-support in organizations.

While an objective view of the world always leads to the optimal decision (Table 2), a bounded agent often settles for a satisfactory decision. Simon [59] argues that people tend to make decisions by satisficing [33] rather than optimizing. They use basic decision criteria that lead to a decision which is both satisfying and sufficient, one which from their perspective is ‘good enough’ given the various constraints. Furthermore, when faced with too many competing decisions, a person’s resources become strained and decision fatigue [64] often contributes to poor choices. This leads to our goal: to better support ‘good enough’ decisions which best approximate secure behaviours under constraints such as limited time or knowledge.
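The contrast between optimizing and satisficing can be illustrated with a minimal sketch (Python; the option names and utility values are invented purely for illustration):

# Illustrative sketch with made-up values: optimizing vs. satisficing.
# An optimizer compares every option; a satisficer accepts the first option
# whose perceived utility meets an aspiration level.

choices = {                     # perceived net utility of each behaviour (invented)
    "update_now": 0.40,
    "update_at_lunch": 0.55,
    "defer_a_week": 0.30,
    "ignore_update": 0.10,
}

def optimize(options):
    """Rational agent: exhaustive comparison, returns the maximal-utility option."""
    return max(options, key=options.get)

def satisfice(options, aspiration=0.35):
    """Bounded agent: accept the first 'good enough' option encountered."""
    for name, utility in options.items():
        if utility >= aspiration:
            return name
    return max(options, key=options.get)  # if nothing suffices, fall back to comparing

print(optimize(choices))   # 'update_at_lunch': best, but required evaluating every option
print(satisfice(choices))  # 'update_now': good enough, found with less effort

The satisficer reaches an acceptable outcome sooner, at the cost of missing the strictly better option; this is the trade-off the influencer needs to provision for.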

3.2 Why We Are Here, with Too Few Choices

We consider traditional economics and behavioural economics in the context of supporting effective behaviour change. We derived the ‘pillars’ of behaviour change from the COM-B model [41]: Capability, Opportunity, and Motivation, which are all required to support a change to a particular Behaviour. We discuss how each pillar is represented in the two economic approaches.

Traditional Economics. The move from centralized to decentralized computing [44] has resulted in an imposed information asymmetry of having a recognized security function distinct from everyone else in the organization. The security function may declare formal rules and informal rules (training, behaviours), assuming that the decision-maker (individual employee) has the same knowledge that they do. Conversely, the security function does not know about expectations placed on the decision-maker by other functions, assuming they have the capacity to approximate the same knowledge; Capability then cannot be assumed. Motivation comes from formal policies, and architectural means which force certain behaviours; however, if Motivation to follow security rules is not sufficiently related to the assets which the decision-maker cares about, it will not support the recognition of risks which require the behaviour [11] (also impacting Opportunity). As the security function is distinct from the rest of the decentralized ‘PC-computing’ organization, it is often assumed that information about advocated behaviours has been sufficiently communicated to the decision-maker (where the Opportunity also cannot be assumed, because the ‘trigger’ does not match the employee’s current Ability and Motivation [19, 48]).

Behavioural Economics. In organizations, capabilities must be supported, but this is often approached in a ‘one-size-fits-all’ way, such that the decision-maker is forced, through the Motivation of enforced formal rules, to seek out the knowledge to develop the Capabilities they need. However, they may not know whether they have complete and correct knowledge unless someone with that knowledge checks (and closes the information asymmetry). An Opportunity for a new behaviour may be created, through training or shaping of the environment, and assumed to be a nudge toward a behaviour beneficial to the decision-maker [55]. If a behaviour is framed like a ‘nudge’, but accounts only for what is desirable for the influencer without also checking that it is desirable to the decision-maker, it is a ‘prod’ which cannot rely on the decision-maker’s own resources and willingness to make it work, such that Motivation will fail. If the provisioned choices (the Capability) are no more beneficial than what the decision-maker already has available to them, they may instead adopt ‘shadow security’ behaviours [37].

4 A Framework for Security Choices

4.1 Toward a Consistent Strategy

Current approaches to security provisioning in organizations appear designed as if for the rational decision-maker of traditional economics. We outline the ‘contradictions’ that currently exist in how the two economic models are being brought together, where examples of ‘contradictory’ and ‘better’ approaches to supporting secure behaviours in organizations are illustrated through real-world examples in Table 3.

Respect Me and My Time, or We are Off to a Bad Start. Security behaviour provisions tend to imply that the decision-maker has resources available to complete training and follow policies, but in an organization the decision-maker is busy with their paid job. To avoid ‘decision fatigue’ and the ‘hassle factor’ [8] of complying with security, we must consider the endowment effect – as it also applies to security [35] – and acknowledge that for the busy decision-maker, doing security requires a loss to something else. This requires an institutional view of helping the decision-maker negotiate where that cost will be borne. The notion of a ‘Compliance Budget’ [8] suggests reducing the demands of security expectations, where here we note the need for an upper bound on those expectations.

If This is Guidance, Be the Guide. The security function must assume that employees are (security) novices. They will then need to be told the cost of security and exactly what the steps are. Otherwise, the novice must guess the duration of an unfamiliar behaviour, and exactly what constitutes the behaviour in its entirety (e.g., knowing where to find personal firewall settings [49]). Unchecked, this leads to satisficing. Current approaches appeal to the skillful user, or assume ‘non-divisible’ target behaviours [4] (with only one, clear way to do what is being asked).

Frame a Decision to Make, Not a Decision Made. Advice is given assuming that what is advised is the best choice, and that there is no other choice to be articulated. The advocated choice is rarely, if ever, presented alongside other choices (such as previously sanctioned behaviours, or ad hoc, ‘shadow security’ behaviours unknown to the security function). We note also that a choice is perceived rather than objectively understood, and so how the elements of a choice are presented can impact the ‘gulf of evaluation’ [54]. An example is when users form an incomplete or incorrect understanding of provisioned two-factor authentication technology options [23].

Edit Out the Old, Edit in the New. More security advice is often presumed to be better for security, but is not [29], and can create confusion. Stale advice can persist unless it is curated – an employee may do the wrong thing which is insecure, or the wrong thing which was secure but now is not. When policies and technologies change, the decision-maker is often left to do the choice-editing. An example is when old and new security policies are hosted without time-stamps.

Table 3. Examples of ‘contradictory’ and ‘better’ approaches to supporting secure behaviours in organizations (derived from experiences reported in real-world settings, and relevant studies).

4.2 Bounded Security Decision-Making

Security research increasingly focuses on organizational security and the interaction between managers, policies, and employees. Principles from economics have been deemed useful in security [14], and concepts from behavioural economics further support understanding of security behaviours in an organizational context [13]. For security policies to be effective, they must align with employees’ limited capacity and resources for policy compliance [16].

We use the term bounded security decision-making to move away from the ambiguity that arises when merging concepts from traditional and behavioural economics. This distances us from the tendency to apply behavioural intervention concepts to security while assuming the intervention targets to be rational agents. This is in itself a contradiction, because a rational agent would by default make the optimal choice and would not require any behavioural aid or intervention (as explored in Sect. 4.1). Similarly, employees cannot possibly dedicate sufficient time and resources to every single task or policy [16]. This is a consideration that must be acknowledged at the point of security policy design.

To represent these concepts within an information security strategy model, we adapt the security investment model developed by Caulfield and Pym [16], which is constructed within the modelling framework described in [17, 18]. This model explicitly considers the decision point for an agent (the decision-maker), elements of the decision-making process (where we reconcile elements of behavioural economics), and the available choices provided by the organization (the influencer). We adapt this framework to consider the factors which should be weighed when provisioning security choices, toward supporting the decision-maker in choosing ‘good enough’ behaviours under constraints on knowledge and resources.

Fig. 1. A decision point in a decision-maker’s process of bounded security decision-making (adapting elements from Caulfield and Pym [16]).

Figure 1 illustrates the components and processes which must be considered in policy design. Influencer refers to the security policy-maker in the organization, and decision-maker (DM) the bounded agent (the employee).

Process. On the left-hand side we consolidate factors in decision-making from behavioural economics into the decision-making process that informs a decision (the arrow on the left-hand side). We outline the restrictive factors (limited skills, knowledge, time and incomplete information) which characterize a bounded decision-maker. We acknowledge that the decision-maker is bounded in several ways, from individual skills and knowledge to temporal restrictions set by the organization. Our bounded decision-maker has incomplete information about the world and others, and must make do with information available within their abilities; they can only consider the perceived costs, gains and losses and prioritize subjective interests when faced with a choice.

When evaluating the risks that come with a choice, ‘losses loom larger than gains’ [34, p. 279], and the decision-maker tries harder to avoid losses than to secure gains. This works against the expectations of the influencer, as the decision-maker may be more concerned with the loss of productivity than with a potential security gain (where the latter may be all that the influencer – the overseer and expert of security – can see).
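This asymmetry can be made concrete with the value function from prospect theory; a commonly cited parameterization (illustrative here, and not fitted to any security data) is

\[ v(x) = \begin{cases} x^{\alpha} & x \ge 0 \\ -\lambda\,(-x)^{\beta} & x < 0 \end{cases} \qquad \alpha \approx \beta \approx 0.88, \quad \lambda \approx 2.25 \]

so a perceived loss (for instance, minutes of productivity surrendered to a security task) is weighted roughly twice as heavily as a gain the decision-maker perceives as being of the same size (such as a reduction in security risk).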

Information Asymmetry. Information asymmetry regularly occurs between the influencer and the decision-maker. In the context of security policies and policy compliance, the following are examples of information asymmetry:

  • The recognised differentiation of the influencer being more knowledgeable and capable in security than the decision-maker (as security is arguably the influencer’s primary task);

  • The influencer’s lack of knowledge about the decision-maker’s context, and pressures which factor into their choice-making process (resulting in the influencer seeming to perceive the decision-maker as a rational agent with motivation and resources dedicated to security);

  • The influencer’s lack of awareness about competing company policies with which the decision-maker must also comply;

  • The decision-maker’s lack of information about why security restrictions matter to the organization (overly demanding policies may cause decision-makers to lose sight of why the policies exist in the first place).

Such discrepancies in knowledge and information between the influencer and the decision-maker cause friction and create a power imbalance. Asymmetries should be identified and addressed in order to manage the gap between influencer and decision-maker perceptions (which is engineered by having a distinct, designated security function).

Decision-Maker Preferences. The restrictive factors on the left-hand side of Fig. 1 influence the decision-maker’s preferences. Using these factors as a reference point, the DM may have preferences for complying with one behaviour over another. Advocated security behaviours compete with other behaviours (such as compliance with HR policies, or meeting work deadlines) for the DM’s preference, where that preference impacts their final decision. If compliance with, say, an HR policy requires less technical engagement (and time investment), this will factor into the preferences.

Choices and Decision. The two boxes above the Decision circle represent the types of choices available to the decision-maker. Available policy choices consist of the rules listed in the security policy by the influencer, but also any included advice on what to do and any solutions provided. In organizations with security policies, the influencer usually assumes that the only choices available to the decision-maker are the ones noted by the policy itself. However, as the literature shows, a choice may be to circumvent the policy [12, 38], or to attempt to work in a way that best approximates compliance with secure working policies, in the best way the decision-maker knows how [37]. Though workarounds and circumventions of policy predominantly go unnoticed in organizations, this does not eliminate them from the set of choices available to the decision-maker. Behaviours regarded as choices by the decision-maker – but which are hidden from the influencer – are another information asymmetry (one which introduces risks for the organization [37]). By assuming that the only available choices come from the security policy, the influencer indirectly undermines policy, having less predictable control over policy compliance decisions in the organization.

Moral Hazard. When a number of information asymmetries exist in the organization, a moral hazard is likely present. A common example of a moral hazard is the principal-agent problem, which arises when one person has the ability to make decisions on behalf of another. Here, the person making the decisions (the agent) is the decision-maker, and decisions are being made on behalf of the influencer (the principal), who represents the organization’s security function. However, problems between the agent and the principal arise when there are conflicting goals and information asymmetry.

If we go back to the decision-maker’s perceived risks, we argue that these are not synonymous with the risks that the influencer knows of or is concerned with. Hence, when the decision-maker enacts behaviours, they do so by prioritising their interests and aiming to reduce their perceived risks. Because of the information asymmetry that persists between the decision-maker and the influencer, as well as the decision-maker’s hidden choices driven by personal benefit, the influencer cannot always ensure that decisions are being made in their best interest. The moral hazard here is that the decision-maker can take more (security) risks because the cost of those risks will fall on the organization rather than on the decision-maker themselves.

Choice Architecture. The circle in Fig. 1 signifies the decision made by the decision-maker. In our framework, we refer to the circle using the term ‘decision’ rather than ‘choice architecture’ for the following reasons: (1) while unusable advocated security behaviours persist, the set of choices is a composite of choices created by both the influencer and the decision-maker, which does not correspond to the accepted nature of a curated choice architecture; and (2) referring to a choice architecture implies an intention to nudge decision-makers towards a particular choice, which also implies that there exists one optimal choice. As we have mentioned previously, a single optimal choice cannot exist for bounded decision-makers, because each perceives costs, gains, and losses individually; a more helpful approach would be to accommodate a range of choices rather than strictly advocate for one choice which is not being followed.

4.3 Framework Implementation

Here we describe steps for applying the framework (as in Fig. 2); a brief sketch of how these steps might be recorded in practice follows the list below. We note that smaller organizations may not have the resources to maintain an overview of systems and system usage (more so if elements are outsourced [46]).

Fig. 2. Implementation steps of the bounded security decision-making framework.

  1. Capture the process: Influencers must understand the decision-maker’s process (as defined in Fig. 1) and consider their current knowledge of the system—either as individuals or discernible groups of users. This may also be influenced by any cognitive limitations [9];

  2. Adapt available policy choices: Policy choices must be adapted to the decision-maker’s current level of understanding and supported with concrete information—working from the decision-maker’s current state (of knowledge and resources) rather than the desired security end-state;

  3. Validate policy choices with stakeholders: Collaboration with stakeholders must be established before policy choices are offered, so that the decision-maker is not left responsible for ensuring that it is a possible choice amongst other imperatives;

  4. Acknowledge decision-maker preferences and choices: Decision-maker preferences (including their motivations) must be utilized rather than ignored—knowledge of these can aid in aligning policy choices with decision-maker preferences;

  5. Align choices with competing expectations: Influencers must ensure that security policy choices do not interfere with other business expectations.
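As a brief sketch of how the steps above might be recorded in practice (Python; the record fields and example strings are hypothetical, not prescribed by the framework):

# Hypothetical per-behaviour record covering the five implementation steps.
from dataclasses import dataclass, field

@dataclass
class BehaviourProvisioningRecord:
    behaviour: str                     # advocated behaviour, e.g. "apply OS updates promptly"
    dm_group: str                      # decision-maker group, e.g. "field sales staff"
    current_knowledge: str = "unknown"                          # step 1: capture the process
    adapted_choices: list = field(default_factory=list)         # step 2: choices matched to current state
    stakeholders_consulted: list = field(default_factory=list)  # step 3: validation before rollout
    dm_preferences: list = field(default_factory=list)          # step 4: known preferences and motivations
    competing_expectations: list = field(default_factory=list)  # step 5: other business demands to align with

    def unresolved_steps(self):
        """List the steps still lacking input before the behaviour is advocated."""
        gaps = []
        if self.current_knowledge == "unknown":
            gaps.append("capture the process")
        if not self.adapted_choices:
            gaps.append("adapt available policy choices")
        if not self.stakeholders_consulted:
            gaps.append("validate policy choices with stakeholders")
        if not self.dm_preferences:
            gaps.append("acknowledge decision-maker preferences and choices")
        if not self.competing_expectations:
            gaps.append("align choices with competing expectations")
        return gaps

Such a record makes visible which steps the influencer has skipped for a given behaviour and decision-maker group, before the behaviour is written into policy.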

5 Worked Example – Software Security Updates

Here we apply the framework to a pertinent case study – keeping software up-to-date. This is selected from the top online security controls advocated by security experts (as prompted by Reeder et al. [52]). This is also the top piece of advice advocated by, e.g., the UK government (Footnote 1).

5.1 Process

Skills, Knowledge, and Time. Applying updates as soon as possible is seen as achieving the best results [32]. However, advocating to ‘keep software up-to-date’ or to ‘apply updates immediately’ does not accommodate consideration of preferences for committing time to other tasks (such as primary work tasks).

A bounded security decision-making approach would provide step-by-step guidance to match skill levels, and potentially the version of software that is currently on a device. Automation could also be considered, if the update process is complex or requires technical skill.

Perceived Costs, Gains, and Losses. In organizations, system patches are first deployed to a test-bed [32], to ensure that they do not create problems (losses); advice to ‘keep systems up-to-date’ ignores this, and also does not declare the cost, in terms of time, for a user to achieve it. Such concise, high-level advice inadvertently assumes that a user already knows how to do this, and how often to do it. An employee may not feel that updates are a concern for them [62], and so may not be motivated to do it at all.

A bounded security decision-making approach would need to provide an assurance that the latest updates have been tested on a system similar to the one the receiver of the advice is using for their work. This is so that they do not have to establish this for themselves (and to avoid both loss of cognitive automation and a need to rebuild cognitive maps [10]). It would be necessary to convey that an up-to-date system protects specific assets that the decision-maker wants to minimize losses for (where top-management or asset-focused messaging could help).

Incomplete Information. The minimal advice does not declare how to check for updates or how often, assuming a rational approach. If an update seems to be taking a long time, a decision-maker may not know whether the problem lies with the machine (requiring support) or with their own expectations (and they may not be able to troubleshoot problems themselves [65]). There is also an assumption that the user knows in advance what changes an update will create, when it could impact them in a range of ways [10].

A bounded security decision-making approach could involve informing the user of how long each update takes to install [40] (especially if a restart is required), based on testing on a comparable setup (including machine performance, available disk space [40], and provisioned software). It may be that updates can be scheduled centrally [40], for instance to occur when employees are most likely to have their computer on, but not be using it (if the organization has scheduled workplace lunch breaks, for instance). Ultimately, a time to install updates without disruption is increasingly difficult to find in a PC-computing work environment.
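A minimal sketch of such central scheduling (Python; the hourly figures are invented for illustration, not drawn from any cited study):

# Illustrative only: pick an update window from hypothetical hourly usage data.
# active_fraction[h]: fraction of employees actively using their machine at hour h.
# powered_fraction[h]: fraction of machines switched on at hour h.

active_fraction  = {9: 0.80, 10: 0.90, 11: 0.85, 12: 0.35, 13: 0.40, 14: 0.85, 15: 0.90, 16: 0.75}
powered_fraction = {9: 0.95, 10: 0.97, 11: 0.97, 12: 0.93, 13: 0.94, 14: 0.97, 15: 0.96, 16: 0.90}

def best_update_window(active, powered, max_disruption=0.5):
    """Return the hour when most machines are on but few people are working at them."""
    candidates = [h for h in active if active[h] <= max_disruption]
    if not candidates:
        return None  # no low-disruption hour exists: updates need another strategy entirely
    return max(candidates, key=lambda h: powered[h] * (1 - active[h]))

print(best_update_window(active_fraction, powered_fraction))  # 12, i.e. the lunch hour in this made-up data

Even such a simple calculation depends on the influencer holding information about usage patterns which, as noted above, may not exist in every organization.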

Loss-Averse Evaluation of Risks. A rational approach does not accommodate the chance that the user has had prior bad experiences with updates [62]. It also does not provide assurances that the update will not cause software to cease working properly, and does not declare how much (paid/salaried) time the update will take (assuming this to be none/negligible).

A bounded security decision-making approach would provide backups before updates, and point to the existence of the backups (to assuage concerns about losses). A user may simply choose to delay or ignore the installation of an update [62], so there would be a need to convey or imply why this is not an appropriate option to consider – this is most readily achieved by presenting the options that the user perceives relative to each other.

5.2 Available Policy Choices

Rational advice to keep a system up-to-date does not consider that modern systems may already be doing (some or all of) this, so advice may need to consider specific operating system software (for instance). Unless an OS or application provides separate feature updates and security updates, the security value of an update may not be clear enough for a decision-maker to weigh it as a distinct choice [43].

A bounded security decision-making approach would acknowledge how updates work on the system the decision-maker is using. It would also recognize the other options that are available to the decision-maker, from the perspective of their personal preferences and not solely the one ideal preference of the security function (influencer).

5.3 Decision-Maker Choices

Because choices framed for a rational decision-maker are not made explicit and compared meaningfully, the bounded security decision-maker may construct the set of choices in an ad hoc fashion, with little to no information about the consequences of taking action or not doing so (the expertise that the security function has which they personally do not have). In an environment of incomplete information, the security function may not know this either (as may be the case with many policy mandates [29]).

6 Future Directions

Informed by user-centred security research, we outline directions for how a security manager/function in an organization can consider the proposals we have made (Sect. 4). Security managers cannot be assumed to have in-depth knowledge of the human aspects of security, but may nonetheless value it in security policy decision-making [47], and benefit from methods and tools to do so [53].

6.1 A Security Diet

A ‘security diet’ would document the perceived occurrence and costs of advocated behaviours (for instance, across a typical working day). Questions can then be asked to reconcile these costs with expected behaviour elsewhere in the organization [35], to determine whether time for security tasks is being taken from elsewhere.

If security behaviours add to an already busy schedule, then time constraints, pressure, and stress increase the likelihood of errors [50]. An individual arguably should not be expected to commit more than their full working day to all tasks including security. Security is then self-defeating if it leaves the decision-maker to figure out how to make this possible. Consideration of how to manage security with other pressures can reduce this ‘gulf of execution’ [54].
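The ‘diet’ arithmetic can be sketched minimally (Python; the behaviours, minutes, and allowance are invented for illustration):

# Hypothetical 'security diet' for one employee's typical day (all values invented).
advocated_behaviours = {          # advocated behaviour: estimated minutes per day
    "screen locking and re-authentication": 6,
    "software update checks and restarts": 10,
    "reporting suspicious emails": 5,
    "password manager use": 4,
    "encrypting outgoing attachments": 8,
}

daily_allowance = 20  # minutes per day the business agrees security may take from other work

total = sum(advocated_behaviours.values())
if total > daily_allowance:
    print(f"Security asks for {total} min/day, {total - daily_allowance} min over the agreed allowance;")
    print("expectations must be reduced, support or automation added, or the allowance renegotiated.")
else:
    print(f"Security asks for {total} min/day, within the agreed allowance.")

The point is not the precision of the numbers, but that the tally and the allowance are made explicit and negotiated rather than silently absorbed by the decision-maker.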

6.2 Just Culture and the Genuine Choice Architecture

If we are to involve the decision-maker in shaping viable options, we would want to find a way to acknowledge the choices employees make which are outside of policy, and to include them alongside advocated choices for clear comparison. This does, however, ‘declare’ insecure options, though it aligns with the practice of a ‘blame-free’, just culture [20], toward learning from shortcomings. By defining the associated properties of these two sets of choices, support can be negotiated to shape solutions which allow productive and secure working.

6.3 Policy Concordance

‘Security Dialogues’ research [5] promotes a move toward policy concordance—‘mutual understanding and agreement’ on how the decision-maker will behave. In medicine [30], concordance occurs at the point of consultation, to incorporate the respective views of the decision-maker and influencer.

The definitions of distinct behaviour choices can be considered by both sides when negotiating a solution for security concordance. This then further leverages the co-developed choice architecture. This could ‘zoom in’ further on decision options, to examine properties of individual choices according to the decision-maker’s preferences, comparing to other options which are regarded as viable.

6.4 Security Investment Forecasting

Security modelling can begin to forecast the impact of investments in complex environments, before making infrastructure and provisioning changes (e.g., [16]). Security deployed is not security as designed; contact with the complex organizational environment will alter how successful a control is in practice, and how well it fits with other practices in the organization. Incorporating employee perspectives into structured economic models will inform the viability of new controls.

7 Conclusion

We have shown how current approaches to security provisioning and infrastructure reflect traditional economics, even when concepts from behavioural economics are applied to ‘nudge’ individual security behaviours. We have constructed a framework that accommodates a set of security behaviours, as a continuous programme of choices which must be provisioned for to adequately support ‘good enough’ behaviour decisions. We then apply our framework to one of the most advocated security behaviours—software patching—and demonstrate that the rational-agent view is incompatible with the embrace of isolated behaviour change activities.

Our work identifies considerations for researchers working in organizational security: the importance of capturing where a decision-maker is, alongside where an influencer wants them to be; that a security choice architecture is essentially decentralized and cannot be wholly dictated by any one stakeholder; and that, in organizations, security expertise can exist both in places recognized by the organization and in places it does not recognize—constructed information asymmetries ought to be accounted for when assessing user behaviours. Future work can involve situated studies in organizations, including participatory design with security managers to develop viable and sustainable security behaviour interventions.