
1 Thirty-Odd Years Ago \(\ldots \)

It was in the late 1980s when I first got the opportunity to break into a bank. Banks were beginning to realise that some customers required online banking facilities; at the time the business sector was the target audience for online banking. Online banking also did not look as it does today: since the Internet was not used for commercial services at that time, online banking was not Internet banking. This predated broadband access, and any service offered was through dial-up facilities. In some cases a ‘portal’ (to use a term that was not current at the time) was used to connect to the bank; in other cases the bank provided its own modems and select customers could connect to the bank via a direct telephone call: modem to modem.

For banks this was a new mode of interacting with customers and they were understandably wary of the security implications of providing customers such direct access to their systems.

It is in this context that a service provider contacted my employer at the time and the question eventually arrived at my desk. In essence a physical ‘key’ was created that would uniquely identify the customer and provide access to authorised accounts. Access by using the ‘key’ would constitute proof that the customer did indeed access the accounts, and the bank could demand to inspect the key at any time; inspection of the key would, amongst other things, reveal when last the key was used. The question that landed on my desk was whether this mechanism —described in even more vague terms than above— was reliable. After signing reams of documents preventing me from ever talking to anyone about the question, I was provided a few more details and a working ‘key’. The Achilles heel of the system was clearly the possibility of creating a clone of the ‘key’: a clone would enable a user to access accounts while the original key carried no indication that it had been used, and the original key would indeed be available for inspection at any time.

For me the problem was akin to bypassing the copy protection incorporated in many software packages at the time. Nerds like me ‘knew’ that no copy protection mechanism was infallible and the task at hand did not seem very challenging. The usual tools to bypass copy protection did not work. The next avenue of attack was working through the code that could be accessed on the key. Long story short: it did not take particularly long before the bank was supplied with a duplicate key that unlocked access to the relevant accounts. I do not know whether the bank ever commissioned that particular system; I assumed that the relative ease with which the system could be breached was valuable information for them. I certainly learned quite a few interesting lessons from the process; I gained knowledge that I would have loved to share with others, but those reams of signed non-disclosure agreements kept me quiet for decades and even today I hesitate to share all the details of the adventure.

The knowledge I had of bypassing copy protection schemes hopefully contributed in some small way to making online banking for that particular bank a little safer. Extrapolating from this isolated experience, it was obvious that knowledge of how to perform a few morally questionable actions could benefit society in the long run. But none of my university courses ever hinted at how such morally questionable actions could be performed (or that they may be useful in a positive sense).

Why we had knowledge of ways to bypass copy protection schemes, how we obtained such knowledge and whether we were justified in having such knowledge will be discussed later in this paper.

A year or two after the bank project one of the first computer viruses arrived on our desks, because someone’s computer occasionally displayed a ‘ball’ bouncing across the screen. At that time there was no World Wide Web to query; we eventually located the code that displayed this ‘bouncing ball’ and, digging further, realised that we had discovered a computer virus on this computer. At the time there were some rumours about computer viruses, but those rumours did not make sense. Here, however, we had the incarnation of a virus in machine code, and the way that this (then) mythical category of malware operated suddenly started making sense.

Fig. 1. International knowledge sharing prior to wide-scale use of the Internet

We talked about what we discovered and developed software that could remove this virus from an infected machine. Soon a steady stream of new viruses started flowing to our desks. In a relatively short time our skills to locate, isolate, extract and examine such viruses grew to the point that we were quite comfortable when confronted with a possible new virus. In some cases we were able to say, with authority, that some incidents were wrongly attributed to viruses. Based on prevalence we discovered why some viruses were ‘more viral’ than others. We discovered how the unintended consequences of viruses were in some instances particularly dangerous. We even found examples of beauty in viruses, where the style of the virus writer was just much more elegant than the norm. We gathered a wealth of knowledge about viruses. This knowledge was used in the fight against viruses or, blowing our own trumpets, ‘for the good of humankind’. This time there was no non-disclosure agreement. We could share this knowledge with others; we could teach them a theory of computer viruses. But we did not.

This time there were moral boundaries that we felt we should not cross, because somewhere someone would use the skills that we could transfer for evil. And for many years we talked about viruses in abstract terms —terms that, almost like the initial rumours we heard, enabled people to talk about the concept, but never to really understand the details— unless, like us, they were willing to acquire the detailed knowledge the hard way.

Even requests to share copies of viruses were largely ignored. Somehow a small international community formed that became the ‘custodians’ of viral code, and we were only willing to share viruses (as well as analyses and sometimes antivirus software) within that community. Figure 1 reminds one of how information was typically shared in the days before the Internet became the universal means of communication. This particular photograph shows virus-related software that I received from Fridrik Skúlason circa 1989. It serves to emphasise how this community formed and existed prior to the wide adoption of the Internet and other global networking technologies.

My (subjective) experience was that the communities that formed as, ultimately, the custodians of malware shared a certain ethic. Similarly, hacking communities that shared a certain ethic formed. And the ethic of the community had a profound impact on what knowledge was shared, when, and with whom. I doubt that all communities shared the same values; rather, shared values were at the core of those communities that did form.

Many years have passed since the days recalled in the paragraphs above. It is time to reflect on whether those communities that collected and guarded knowledge about malware and hacking acted correctly when they ‘guarded’ rather than taught such information, and, if they did, whether the same imperatives still apply. Note that the claim is not that all communities guarded potentially dangerous information; some shared such information from the very beginning and their decision to share also requires reflection.

2 To Teach or Not to Teach

On the one hand knowledge —any knowledge— has value. One only has to look at the outcries that result from the burning of books or from almost any form of censorship that is imposed. Books are a form of transmitting knowledge, which is effectively a form of teaching. Destruction of books intended to destroy knowledge is an attempt to prevent the information they contain from being transmitted — that is, from being taught. Censorship is a restriction on free speech. It is typically imposed to prevent ideas from spreading; censorship effectively prevents teaching of the censored information.

Note that there is a small class of information that society in general does not condone. The best-known example is the depiction of certain forms of child exploitation. There is general agreement that society should not tolerate such knowledge. However, beyond these very narrow confines open societies usually frown upon most forms of censorship. From this it follows that teaching is, in general, tolerated and the right to teach defended (even when the content of what is taught is disliked).

However, there is often a wide chasm between what ought to be done and what is tolerated. Hence the act of teaching may sometimes be tolerated even when the knowledge taught is deemed inappropriate. At the other extreme, knowledge that is universally valued may make teaching such knowledge an imperative. Of course teaching and knowledge may be tolerated and/or valued anywhere between these two extremes. However, in addition to the freedom to teach (or otherwise), the act of teaching some skill or knowledge invokes at least two other factors, namely the context and the nature of the skills transfer. This triad will be explored in more detail below.

Seen in the abstract, teaching (or education) is deemed valuable. Countries spend huge amounts on education, and individuals seek education to improve their prospects in life. The value of education is captured in the age-old proverb “give a man a fish and you feed him for a day; teach a man to fish and you feed him for a lifetime”. While the modern-day reader will frown at the gender bias in this proverb, she will agree with the underlying truth: such teaching empowers the learner. It helps the learner to satisfy a basic need. This proverb does not directly impose a moral obligation — it merely compares the utility of two acts: giving and teaching. Teaching has the greater utility due to its multiplicative effect: value for a day compared to value for a lifetime. Hence, viewed from a utilitarian perspective, when confronted with the choice of helping or teaching, teaching is the preferred option. If one assumes the obvious fact that satisfying a basic need of a person contributes to the happiness of that person, then teaching may empower many people to help themselves, whereas feeding is limited to the abilities of the one or few who possess the necessary skills or knowledge. In this sense, teaching facilitates a greater happiness for a greater number of people than feeding would. An overly hasty conclusion at this stage may be that teaching is the most moral activity possible.

However, food is just one of the basic needs. Maslow [7, p. 372] posits that “it seems impossible as well as useless to make any list of fundamental physiological needs for they can come to almost any number one might wish, depending on the degree of specificity of description”. And the proverb about feeding does not directly extend to the other physiological needs, such as maternal needs or sleep. As children grow up, maternal skills may be useless unless used to provide care for others’ children. In a society the value of such skill is no longer the satisfaction of one’s own needs, but the fact that skilled labour can be exchanged for goods or services that satisfy other basic needs. Skilled labour may even be the source of the highest need that Maslow identifies: self-actualisation. However, in any society, retaining skills (and knowledge) is essential, which makes teaching an indispensable part of such a society.

Fig. 2. A depiction of Maslow’s hierarchy of needs [7]

Maslow [7, p. 394], for example, claims that needs form a hierarchy: “when a need is fairly well satisfied, the next prepotent (‘higher’) need emerges, in turn to dominate the conscious life and to serve as the center of organization of behavior, since gratified needs are not active motivators”. This hierarchy is depicted in Fig. 2. Maslow notes that there are exceptions to the order that he describes, but claims that a hierarchy of needs still applies in those cases. His hierarchy assigns a conditional value to certain skills or knowledge. An individual whose physiological needs are met will value knowledge about safety. However, when physiological needs are not met, meeting those needs trumps knowledge on how to be safe. From a utilitarian perspective the moral calculus will assign more weight to actions (or skills) that satisfy basic needs than to actions (or skills) that guarantee safety [9]. A hungry person may risk safety to acquire food.

In the university context it is not uncommon to assign value to subjects based on their perceived utility. A subject like computer science may be deemed more valuable than, say, philosophy, because the market has more work opportunities in the computing field than it has opportunities for philosophers. Of course there is no generally accepted calculus that weighs all relevant factors to achieve a single correct assessment of the value of any given skill set or knowledge domain.

Note that the notion of utility is inherently instrumental: the utility of something refers to its usefulness to achieve some outcome. Above we alluded to various outcomes, such as career prospects or meeting physiological (or higher-order) needs in Maslow’s hierarchy. In utilitarian theories of ethics the desired outcome is the good or happiness and the utility of a given course of action is the degree to which it achieves such an outcome (for the greatest number of people) [1, 9]. Note that such instrumental factors are not unique to utilitarianism: In Aristotelian virtue ethics the virtues are those characteristics that enable a person to best achieve his or her purpose in life.

While knowledge has a ‘raw’ utility, knowledge does not necessarily have a moral utility. Let us for the time being assume that the moral utility of knowledge depends on its application (and reflect on this assumption later). Hence, we assume here that, for example, nuclear physics is morally value neutral, but applying such knowledge to manufacture an atomic bomb or to build a nuclear power station may have vastly different moral utility values.

The title of this paper uses the phrase crime skills. The adjective crime was selected over the adjective criminal in an attempt not to imbue the skills under consideration with a moral utility. Criminal skills would attach a negative moral utility to such skills. By using the phrase crime skills we hope to signify skills that may be useful to commit a crime, but not skills that only have criminal applications. A typical example here would be the skills of a penetration tester employed by a facility to test the security of the facility. These skills will be in many ways similar to the skills of the malicious cracker who, on his or her own initiative or as a member of a criminal outfit, attempts to penetrate the facility’s defence system for personal gain or to cause harm to the facility. We are therefore firmly positioned in a context where knowledge can be applied for good or evil purposes, and the manner in which it is applied makes all the difference.

The discussion shifted from knowledge to skills in the previous paragraph. To be more specific, the focus of the current paper is on ‘how-to’ knowledge — knowledge that Aristotle refers to as techné [11]. Given that ‘how-to knowledge’ and skills serve the same purpose we will henceforth use the terms knowledge and skills interchangeably.

Knowledge about harming others seems innate: Hobbes [5] describes the ‘natural state of mankind’ as one in which all people “are in that condition which is called warre; and such a warre as is of every man against every man”. If such knowledge is innate (or easy to obtain), teaching it either is of little additional use to those who want to harm others, or, given the general availability of such knowledge, it would be hard to object to anyone sharing (that is, teaching) it. Therefore, if any moral objection is to be raised, it can only be raised about knowledge that is neither innate nor easy to obtain. However, given the ubiquity of the Internet, we live in a time where it seems any knowledge is easy to obtain by anyone. But such an argument is not entirely valid. Consider, say, the theory of general relativity or, as another example, Immanuel Kant’s philosophy of reason. In both cases the knowledge is indeed very easy to locate, but usually requires a structured programme of study to acquire. And the guidance provided by dedicated teachers through the prerequisite knowledge and foundations of the theory greatly simplifies the process. In many cases ‘pure’ knowledge is insufficient to apply, and competency and confidence need to be developed. Presumably not too many people have learned to ride a bicycle from the Internet — training (and often teaching aids such as training wheels) is required. We will revisit the argument that not all knowledge (now explicitly including skills) is available to anyone who wants to master it.

However, the claim that some potentially harmful knowledge may be exceptionally hard to master (unless taught) seems to be a moot point given that much potentially harmful innate skill (or knowledge that is easy to master) is readily available. Why would anyone with harmful intentions resort to a more complex method to inflict harm if simple methods are available? The answer is arguably that a method becomes attractive when it limits the probability of retribution. If A wants to harm B, A can hit B with a club. However, A may be seen engaging in the act, be caught and be punished. Or B may not be debilitated by the attack and harm A in defence. If A has the option to remotely administer an untraceable toxic substance to B, this provides a much ‘safer’ alternative for A. It also poses a much greater risk to society: in Hobbes’s discussion of society the mechanism to avoid a war of each against the other is to “agree amongst themselves to submit to some man, or assembly of men, voluntarily, on confidence to be protected by him against all others” [5, p. 106] (emphasis added). Hence this more complex method may not only protect the perpetrator, but also undermine the essence (and stability) of society. And, while this argument was constructed using Hobbes’s philosophy, it also seems to make plain common sense.

The realisation that people with specific categories of knowledge can abuse such knowledge is an old one. The Hippocratic Oath, for example, implores physicians to use their knowledge “to help the sick according to my ability and judgment, but never with a view to injury and wrong-doing”. Bioethics is often summarised into four precepts, of which non-maleficence is one. This precept is derived from the maxim first do no harm, which is often expressed in Latin: Primum non nocere. While there is some debate about the origin (and age) of the maxim, it has been used for at least a few centuries [12].

One mechanism frequently applied by society is to regulate those who are entrusted with special responsibilities as professionals. Often it is realised that the safety of society (and/or of individuals) depends on the assumption that such professionals execute their duty responsibly. Masses of people cross bridges on a daily basis with an implied trust that the responsible engineer designed the bridge such that it is safe to use. People are operated on by surgeons, trusting that the surgeon has the knowledge (and carries the responsibility) to perform the operation with a very high likelihood of success — an expected outcome that far exceeds the impact of not undergoing such an operation. In court, when one is represented by an advocate or lawyer, that legal professional has an obligation to proceed in one’s best interest, or may be held accountable. In fact, responsibility forms the foundation of professional ethics [3]. However, the word responsibility encompasses a number of meanings — in particular, obligation-responsibility, blame-responsibility and role-responsibility [3, p. 22].

While such professionals are, in the first place, expected to act in the interest of their clients and/or society, it is obvious that this very notion enables them to act contrary to the expectation. Stated differently, knowledge about safety typically implies knowledge about doing harm. The surgeon’s knowledge of how to make an incision that avoids a certain artery or nerve (because damage to the nerve or artery would be catastrophic) implies knowledge about how to precisely target such a nerve or artery and inflict major harm (and this harmful knowledge can be applied outside the normal context of an operating theatre). It is often impossible to teach someone how to avoid harm without, as a consequence, teaching that person to inflict harm.

In the context of professionals, this knowledge is typically of such a nature that only professionals are entrusted with a ‘licence’ to execute such actions in the interest of society. To continue the example of the surgeon: the surgeon is not only provided with the knowledge to perform operations, but also practices such skills — starting with observing, then assisting and finally becoming the person responsible for performing the operation. This provides a ‘training ground’ that is simply not accessible to anyone else, meaning that only the surgeon is able to perfect his or her technique. Perfect technique provides the confidence required to perform operations, but may also provide the confidence to inflict harm if a surgeon so wishes. While another person may somehow learn a similar technique, the vast majority of people in society who have such skills, practice them regularly and are arguably in a position to abuse them with the least amount of collateral damage, have been taught those skills.

Note that the example of the surgeon is not a unique case: an auditor trained to identify fraudulent entries in a company’s books is in an ideal position to insert such entries in a manner that is likely to be overlooked by other auditors. An engineer who knows how, say, the transmission of microwaves ought to be contained can use that knowledge to inflict harm through common microwave devices. The lawyer who has the knowledge to protect the rights of his or her client can draft a contract that denies the other party any recourse to enforce that party’s rights.

There are also examples where skills are taught to cause damage. Engineers may be taught how to implode a building using the least quantity of explosives positioned on the most ‘vulnerable’ parts of the building. Manufacturers of weapons use knowledge to inflict the most damage possible (within certain constraints). As an example of the latter case, consider a neutron bomb designed to extinguish life, but not damage property, so that the property is available for subsequent use by the user of such a weapon.

Fig. 3. Teaching of potentially harmful skills as part of a professional education: German scythe combat instructions, compiled by Paulus Hector Mair (1517–1579) in the Opus Amplissimum de Arte Athletica (\(\approx 1540\)) [codex MSS Dresd. C.93/C.94]. Note that a scythe, in contrast to a sword, was a comparatively cheap and widely available agricultural tool in those days

In summary, teaching potentially (or actually) harmful skills is a regular part of professional education — and has been for centuries: see Fig. 3 for a historical example.

Arguably more benefits than harm accrue to the public from the fact that professionals possess such knowledge (in the vast majority of cases; some cases, such as the neutron bomb, may be a counterexample). If the benefits outweigh the costs a utilitarian argument provides a simple way to justify the teaching of such knowledge.

In cases where the benefits do not outweigh the costs the question arises whether teaching or abuse of such knowledge should be controlled. As an example, vendors often use boilerplate contracts to retain all their own rights, but deny any rights that the customer may have had. One solution in such a case is to promulgate consumer protection legislation that curtails the extent to which such contracts can limit customers’ rights.

3 Potentially Harmful IT Skills

An IT skill is potentially harmful if it enables someone to abuse IT in a manner that causes harm to another party. One common example is skills that would enable a criminal to masquerade as some user and withdraw that user’s money from his or her bank accounts. On a larger scale such skills may be used to take over or crash a computer system that forms part of a country’s critical national infrastructure. The impact of interfering with the operation of systems may range from a minor annoyance to full-scale war. Given that IT is used in almost any modern activity, any such activity may be vulnerable to abuse. For the typical IT-oriented reader of this paper no further elaboration about the impact of possible abuses in this sphere is required.

The next question then is why there may be a need to teach such skills. In a nutshell, there are three answers. First, penetration testing is an accepted form of testing the security of an organisation’s systems, and penetration testers need the same (or similar) skills as the criminals who may want to attack the system. The digital forensic examiner also fits into this category: such an examiner needs to know what traces are left by (possibly criminal) actions and could abuse this knowledge to hide his or her own maleficent activities. Secondly, as will be argued below, computer security professionals need a proper understanding of the threats they need to protect systems from, and phantoms of such threats rarely provide sufficient insight. Finally, the vulnerabilities that occur in code are placed there (typically inadvertently) by programmers; they may become more reflective about their coding if they are more familiar with how what they do can be abused. This, too, will be reflected on in more detail below, and the sketch that follows illustrates the point.
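
To make the third answer concrete, consider the classic example of an injection vulnerability. The sketch below is a minimal illustration in Python (the users table and the crafted input are hypothetical, invented for this example); it contrasts a query built by splicing user input into SQL with a parameterised query. Seeing how easily the first form is subverted is exactly the kind of insight that may make a programmer more reflective about his or her own code.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, balance REAL)")
    conn.execute("INSERT INTO users VALUES ('alice', 100.0)")

    crafted = "alice' OR '1'='1"  # attacker-supplied 'name'

    # Vulnerable: the input becomes part of the SQL text itself,
    # so the attacker rewrites the WHERE clause.
    rows = conn.execute(
        f"SELECT * FROM users WHERE name = '{crafted}'"
    ).fetchall()
    print(rows)  # every row is returned

    # Safer: the driver treats the input strictly as data.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = ?", (crafted,)
    ).fetchall()
    print(rows)  # empty: no user is literally named that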

It may also be possible to justify the teaching of such skills from an educational perspective. From experience I know that students are fascinated by ‘hacking’, ‘cracking’ and similar activities. As an example, a lecture about the operation of the Simple Mail Transfer Protocol (SMTP) can be pretty boring. However, showing them how easy it is to spoof sender addresses piques their interest. This also provides an ideal opportunity to bring a discussion of ethics into the lecture. Invariably students then go and send spoofed emails to their friends (hopefully within the ethical limits of such an action). Rather than becoming familiar with the protocol because they have to, they suddenly want to. And many of them run into situations where simple spoofing does not work and begin to ask questions about technologies such as the Sender Policy Framework (SPF) and DomainKeys Identified Mail (DKIM) — topics that they may not have encountered in the curriculum at all. Knowledge of SPF and DKIM limits their confidence in their ability to spoof any email address and imposes some restraint on full-scale abuse of this new skill. But even here, with the checks and balances in place, one should reflect on the ethical cost-benefit ratio of inspiring students to learn, given the possibility that they will abuse the skill (discounted by the fact that many people using SMTP directly will probably realise its potential for abuse anyway, but then without the benefit of having discussed ethics prior to their own discoveries).
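
The reason simple spoofing often fails is that a receiving server can look up the policy the purported sender domain publishes in DNS. The sketch below shows such a lookup (a minimal illustration, assuming the third-party dnspython package; example.com is merely a placeholder domain); DKIM verification follows a similar DNS-based pattern, with the receiver additionally checking a cryptographic signature on the message.

    import dns.resolver  # third-party package: dnspython

    def get_spf_policy(domain):
        """Return the SPF policy in the domain's TXT records, if any."""
        try:
            answers = dns.resolver.resolve(domain, "TXT")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return None
        for record in answers:
            text = b"".join(record.strings).decode()
            if text.startswith("v=spf1"):
                return text  # e.g. 'v=spf1 include:_spf.example.net ~all'
        return None

    # A receiver that finds a policy ending in '-all' may reject mail
    # from any host the policy does not authorise for this domain.
    print(get_spf_policy("example.com"))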

The first reason for teaching students potentially harmful skills, based on the assumption that they may be employed as penetration testers, is valid, but does not scale: an extremely tiny fraction of people will ever work as penetration testers, so teaching the masses such skills is not justified by the few who need the skills to be penetration testers. In addition, to be a penetration tester one needs a natural curiosity and the ability to learn from obscure sources; hence acquiring the necessary skills may be part of the genetic makeup of the ideal penetration tester, and teaching may add very little to the skills they can acquire through their innate curiosity.

The second justification for teaching potentially harmful IT skills was the claim that computer security professionals need to properly understand the threats they face. Teaching students about the categories of malware, as an example, gives them a glimpse of that world, but without the ability to construct such malware. Even talking about a Trojan horse, which is trivial to construct in a number of forms, does not seem to give the student the feeling that “I can do that!”. While students often tell me about the fun they had sending spoofed emails to their friends, nobody has ever told me after a lecture that discussed Trojan horses about the fun they had building such malware.

How well does a security professional need to know ‘the enemy’? To continue with the malware theme, students (and, arguably, professionals) tend to know the categories of malware (viruses, worms, Trojan horses, and so on) and deem them to be fairly similar threats. However, if they are faced with the tasks of creating, say, a Trojan horse and a virus, they will hopefully realise that the first task is trivial and the second is not. In terms of a threat assessment it should then be obvious that custom-built Trojan horses present a credible threat from any source; a custom-built virus is very unlikely to originate from an unsophisticated attacker. Hence, depending on the type of organisation, virus scanning may be sufficient mitigation for a virus-based threat, but not for a Trojan horse. A custom-built Trojan attached to a suitable delivery mechanism (such as email) becomes a spearphish. Technical mechanisms are not particularly useful to mitigate this threat. Hence, the standard response tends to be to externalise the cost to the user in a policy that instructs the user not to open any attachments from unknown senders. However, if the proponent of such a policy is able to think how an attacker would deal with such a policy (and hence, how effective such a policy would be), one wonders whether the rational security specialist would still support such a policy. This is a rather simple example, but such policies are ubiquitous.

Many other examples could be provided to show why deep knowledge of a threat is indeed useful to mitigate it. However, while there are many more people working as security specialists than as penetration testers, they still form a special interest group, which arguably provides insufficient justification to teach the bigger community such skills.

4 The IT Worker — From Hero to Zero

In the introduction an example was provided that illustrated how the values of the community determined who was trusted with knowledge. Arguably that same spirit governs sharing of knowledge amongst penetration testers and many other communities. In the case of the professions such a value system is institutionalised and enforced by professional bodies.

However, the notion of community (whether informal or institutionalised) is largely absent from the broader IT workforce. Communities certainly do still exist — see, for example, Himanen’s [4] description of the hacker ethic.

Prior to the 1980s computers were expensive machines housed in climate-controlled centres to which access was tightly controlled. It was not uncommon for workers in these centres to wear white coats. This inevitably instilled a sense of community. The scarcity of computers made it necessary to network (in the social rather than the data sense), and communities —as groups of people— were linked to one another.

However, over time a culture shift occurred. Many of these older computers were used for corporate management, such as the monthly printing of payslips. Organisations did not, in general, depend on the operation of their computing facilities. In today’s context the organisation often cannot function without its computing facilities.

In a parallel set of events the concept of corporate governance emerged and became increasingly important. Corporations represented the investments of society, the workplace of society and the major sources of impact on society. They were no longer just businesses, but operated at the core of society. And, in such a core function, they developed a fiduciary responsibility towards large sets of stakeholders. Various codes (in the form of laws or otherwise) appeared, including the Sarbanes-Oxley Act in the US and the King Report on Corporate Governance in South Africa [6]. Over time it was inevitable, given the increasing dependency of corporations on their IT infrastructure, that computing would move from a technical or even scientific context to a management context. The extent to which this has happened is illustrated by the fact that the King III report devotes an entire chapter to IT governance.

In another parallel set of events use of computing facilities broadened to include an ever increasing variety of workers. Initially they used computing through terminals connected to the mainframe, later through personal computers and eventually through a large variety of devices that are connected to a range of services. In contrast to the ‘uniformed’ centralised specialist IT worker, almost everybody now worked using computing.

Typically a central IT department still exists in the organisation. However, rather than being the admired masters of the machine, its members are now responsible for maintaining a service where others are the users to be supported. Not only does this new user base need support, it also needs to be controlled as part of IT governance. Effectively the IT department becomes invisible when everything works; users seem self-sufficient. The IT department becomes visible when the infrastructure fails, when new regulations and policies are introduced (and enforced) and (often enough) when the computer is blamed for anything that goes wrong in the organisation. The IT department no longer has a shared technical expertise. It is a mixed group of management and technical skills, with managers who —in contrast to the system or database administrator of an earlier time— may have no technical skills, and technical people living in a foreign world of management. Where the ‘technical wizard’ was once the person who could solve complex problems, the help desk has become a faceless entity behind an email address or ticket system.

In this world technical skill has become extremely mobile. Expertise is often associated with a project, rather than a system or an organisation [2]. Developers flow from one project to the next. The CV of the typical IT worker is a list of completed projects, with a new employer every 18 months. Much of the IT workforce has become a body of migrant labourers moving to wherever their skills are required for a new project [10]. Of course a part of the workforce still remains stable, with people who work at only a few employers (or even a single employer) during their careers. However, in general perpetual motion has become the norm. In many ways we are seeing labour as a commodity more clearly than ever before. Arguably this is, in particular, true for developers, whose skills are no longer required once a project has been completed, but for whom there always seems to be a new project starting somewhere else.

In the context of such migrant labour it is arguably hard to establish any sense of community. There is little reason to become loyal towards any specific organisation. Project-based work may not be associated with a retirement fund, pension or medical benefits. And in such a context individuals fall out of the system once they are no longer useful. This may be a fertile place for disgruntled insiders (albeit temporary insiders) to form. This is a context where an individual sees no way out, and in which the empowered worker may resort to crime to satisfy a basic need (such as affording medical care for children). There are few social bonds and few professional constraints that prevent such a person from abusing potentially harmful knowledge. There is little reason to believe that the benefit to society will exceed the cost to society if the workforce, in general, has too much potentially harmful knowledge.

5 Stratification of Responsibility

Up to this point a sense of community has been posited as one of the major reasons to believe that potentially harmful information will more often than not be used for the benefit of society. The lack of community in the IT sector was raised as the major concern for this sector.

However, in most professions community does not result from almost identical human beings inhabiting the same space. In the world of medicine, the workforce may consist of various specialists, general practitioners, registered nurses, other nursing staff, ambulance drivers, paramedics, porters and workers in many other roles. In some cases one may encounter mobility, for example, medical students who rotate through various rounds over time. While some roles may have a relatively higher or lower status than other roles, this is not necessarily the case. How does the status of the hospital’s general manager, for example, compare to the status of, say, its nursing manager? Both are professionals, but the nursing manager often has a stricter sense of professional responsibility enforced by a professional board. In contrast, the responsibility of the general manager stems from a fiduciary duty towards the hospital’s stakeholders. The nursing manager has to be educated to act as a health care worker. Nursing knowledge is an essential part of the nursing manager’s duty. The general manager may need a general business acumen and a diverse (but not specific) set of management skills. While the nursing manager reports to the general manager, the general manager cannot make decisions about nursing or patient care, since the general manager is not empowered to be responsible for such decisions.

In the hospital example, the ‘culture’ or ‘community’ of one medical specialist may be very different from that of another specialist. These specialists belong to different professional societies that meet, perhaps annually, and in this context a certain sense of community is experienced. However, perhaps more importantly, the responsibilities (and, in particular, the accountability) of the roles are clearly defined. To make this example more specific, consider the roles of the surgeon and the anaesthetist in an operating theatre. Both are skilled medical doctors, but they have very different responsibilities in each of the three senses of responsibility mentioned earlier (viz obligation-responsibility, blame-responsibility and role-responsibility). In such a context, where skills overlap, clearly assigned responsibility and specific accountability are major factors that ensure the smooth operation of the system.

Knowledge is clearly linked to accountability: to be held accountable one needs certain knowledge before accountability makes sense. But accountability also constrains one’s abuse of such knowledge.

It seems obvious that such stratification in the IT sector may be meaningful. A developer needs certain skills. A system administrator needs certain skills. This does not imply any hierarchy, but the two roles are clearly accountable in different ways. If such accountability can be enforced, as it is in the medical example, it would be inappropriate for the system administrator to act as a developer (unless the system administrator is indeed also a developer who could be held accountable as a developer). Under these conditions we suggest that workers in certain roles and who are held accountable in those roles can be entrusted with potentially harmful knowledge.

Note that such an enforcement of responsibility does not necessarily reserve jobs for certain people with a certain level of education. Several attempts to professionalise the IT sector have failed; one of the major reasons for such failures is the difficulty of delineating the nature of IT jobs. As a simple example, what would the minimum education be before a person can be a programmer? The problem with asking such a question is the diverse set of people working as programmers. On the one hand someone may be a self-taught programmer who writes simple programs that are useful in his or her business. Another programmer may write code that implements autopilot functionality on a wide-body passenger aircraft. The impact (both positive and negative) of the quality of the work done by each differs significantly. It is unrealistic to expect both to have the same qualifications and/or skills. Though these two workers share the same (generic) job title, their professional work is worlds apart. They are most probably not members of the same community in any sense of the word community. The programmer working on the autopilot system may be a member of various professional bodies and subject to their codes of conduct; however, such codes are rarely enforced. In the end the engineer who includes the autopilot software in an aircraft is the person who is professionally responsible for its correct operation. In the case of the small-business owner, he or she is responsible to some extent for the code used, as business owner and not as programmer.

It is possible to introduce legislation that limits the type of project a programmer may participate in based on skills and expertise, but the variety of programming tasks and the pace at which technology evolves make this route unlikely. Add to this variety the fact that code is often reused (including open source code where specific code may not be attributable to a specific programmer), and enforcing stratification by law becomes even more complex. Hence, other options to stratify the IT sector need to be explored.

One alternative used in a number of professions is the use of insurance to cover professional liability. To return to the medical example, the professional liability of a doctor may be carried by the doctor’s employer (such as the state). If not, such a doctor would be foolish to practice without proper medical insurance (and may indeed be required by law to be properly insured). The cost of insurance is typically based on the professional activities the doctor engages in. Even though all doctors are, in principle, able to assist with childbirth, the associated risk can be extremely high. This is reflected in the cost of medical insurance for obstetrics. To illustrate, the 2015 cost of insurance for a South African general practitioner who does not perform procedures in an operating theatre was almost ZAR 9,000 per year [8]. For such a practitioner who does perform procedures in an operating theatre, insurance doubled to almost ZAR 18,000. For a general practitioner who carries out basic pregnancy care and planned deliveries, insurance costs increased to almost ZAR 120,000 — about 13 times the first premium mentioned above. When the same doctor frequently practices general obstetrics the insurance increased to almost ZAR 190,000 per year. The type of work clearly determines the nature of the risk and the associated potential (financial) responsibility in terms of liability. Note that most professionals (who perform professional work) have some form of professional liability insurance, including engineers, lawyers and other health care professionals.
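
For concreteness, expressing the quoted premiums as rough multiples of the first one (all figures rounded) makes the scaling of risk visible at a glance:

\[ \frac{18\,000}{9\,000} = 2, \qquad \frac{120\,000}{9\,000} \approx 13, \qquad \frac{190\,000}{9\,000} \approx 21. \]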

We do not suggest that high insurance premiums keep professionals moral. It may be true that a professional who makes too many mistakes will not be able to find insurance again and will thus effectively be banned from practicing as a professional, but it is unlikely that this is a major motivation for most professionals to behave in a moral manner. It is far more likely that professionals doing a specific type of work will attend the same conferences, serve on the same committees and generally bond as a community. In this community values will be shared and the norms of the community imprinted on the individuals. Even when the amount of money involved does not differ, doctors who are interested in treating, say, diabetes (and become known as doctors who are trusted in that particular subdiscipline) tend to form such communities.

Of course similar communities form around other shared interests, such as supporters of a particular football team. The values shared in such a community may be good or evil. Much has been reported about damage caused by some football hooligans, for example. Hence, we posit that the community imbues (and reinforces) certain values. Professional norms determine the nature of such values — in particular, whether the interest of society is served by them.

It has already been argued that the IT workforce is not (or is no longer) a community. Some communities do form as special interest groups. However, professional values are hardly ever enforced in such communities. To illustrate, consider a community of security professionals. If a security breach occurs at the institution that employs such a security professional, it is extremely unlikely that the community will reflect on the impact of the personal responsibility of such a member on the breach, or vice versa.

One example where exceptions may occur comes from the penetration testing community. Penetration testers typically sign agreements with the owners of systems that are to be tested. The boundaries of the test are explicitly spelled out. As long as the penetration testers operate within those boundaries, the agreement indemnifies them. However, once they exceed those boundaries (for example, by disclosing confidential information to others), they expose themselves to a significant liability in the form of penalty clauses. A penetration tester who does not abide by the values of the penetration testing community will be expelled from the community. Trust of the community is a key element in the sustainability of any business in that community.

As noted, the IT community is, in general, not properly stratified. Exceptions in the form of specific communities exist, but the mere fact that communities exist is not sufficient. Professional responsibility needs to be an inherent part of such a community before it can be trusted as professional.

Unless one teaches such a specific community, it seems prudent to limit the potentially harmful knowledge taught to students. If necessary, they will have to acquire such knowledge in the workplace. This does not mean that no such skills should be taught; however, it suggests that the extent to which such skills are taught should be limited so that it does not instil a sense of complete competence in the student. Ideally the student should not be provided with knowledge open to immediate abuse; teaching should stop at a point where much additional knowledge still needs to be acquired. One cannot prevent anyone from acquiring knowledge. At best one can ensure that such knowledge is not provided in so refined a form that it can readily be abused to cause harm; if such ‘ready’ knowledge is provided it will simply be too easy to abuse it without restraint whenever any cause is a sufficient trigger for such abuse.

6 Conclusion

This paper reflected, from a moral perspective, on the extent to which computer crime skills can be taught to IT students. In many cases IT workers need such knowledge to perform activities that are in the interest of society and that are clearly moral.

It was argued that professionalism is one of the key elements that limit abuse of such knowledge. However, it was also argued that professionalism is not the only determinant of moral behaviour — a sense of community was deemed to be a particularly important part of handling such knowledge with appropriate care. In fact, the description of professionalism deviated from the usual depiction of a professional as someone who has been admitted to a profession based on skills (and education); a profession was here rather seen as a context where responsibility is a key concern when workers are assigned specific tasks.

Given the fragmented nature of the IT workforce it was argued that it is inappropriate to trust the general workforce with potentially harmful skills. When such information is taught, it should be sufficiently incomplete that it is not possible to apply the knowledge without further studies.

It remains true that anybody is arguably able to acquire any knowledge. When teaching is limited as argued above, it does not solve the problem of people having or being able to obtain harmful skills. However, it does limit the number of people who have such knowledge and are able to apply it without further work from their side. This limits the abuse of such knowledge in a moment of anger and without some opportunity to reflect. It also speaks to the complicity of the teacher who taught knowledge that is eventually abused.