CommonLounge Archive

Computer Security is more than Encryption

February 13, 2018

In this article, we’ll discuss important non-technical aspects of cryptography and computer security. In the first half, we’ll talk about the role of open algorithms and peer review in making cryptographic software libraries robust. In the second half, we’ll discuss how human error is at the center of over 95% of cybersecurity breaches, and what systems are being created to minimize such errors. In each section, we’ll also discuss solutions and directions that are currently being explored but are yet to become the norm.

Open Algorithms and Peer Review in Cryptography

The open source movement is a movement whereby developers from around the world collaborate and contribute to software libraries on a voluntary basis. Editing the code is open to all, as is the right to download and use it. To manage the various pitfalls and risks of such an approach, platforms such as GitHub facilitate safe merging of code and oversight from project leaders.

In cryptography, a major benefit of the open source approach is extensive testing in the public domain. Letting the public see and test the code, and bringing together developers from different backgrounds, produces more robust code. It would be far harder for many independent developers to collude in the open to build algorithms with hidden flaws than it is for a single company or government to do so. Furthermore, for a package to be widely accepted it must pass the acceptance tests of many other developers, who act diligently before merging the code into their own projects.

Note that, although this may be counterintuitive, making the code available for everyone to see does not weaken security, because the security of any cryptosystem should not depend on the secrecy of its algorithms. In particular, Kerckhoffs’ principle, the central guideline in designing cryptosystems, states that a cryptosystem should be secure even if everything about the system, except the key, is public knowledge.
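
As a small illustration of the principle, the sketch below uses the Fernet recipe from the open source Python cryptography package (assuming it is installed). The algorithm itself is completely public; the only secret is the key.

    from cryptography.fernet import Fernet

    # The Fernet construction (AES in CBC mode plus an HMAC) is fully public.
    # Per Kerckhoffs' principle, security rests entirely on keeping this key secret.
    key = Fernet.generate_key()
    f = Fernet(key)

    token = f.encrypt(b"attack at dawn")   # the ciphertext is safe to expose
    print(f.decrypt(token))                # only the key holder recovers the plaintext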

There are many open source cryptography libraries, from the generalist GNU Crypto project (Java), to NaCl (Python, C, C++), which focuses on public-key authenticated encryption, and Crypto++ (C++), which provides implementations of a full set of widely used cryptosystems and hash functions (AES, RSA, SHA, …). There are also API-based libraries such as Bouncy Castle (Java, C#). This wealth of community-developed cryptography code is freely available online.
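
For instance, here is a minimal sketch of public-key authenticated encryption using PyNaCl, the Python binding of NaCl (assuming the package is installed):

    from nacl.public import PrivateKey, Box

    # Each party generates a keypair; only the public halves are exchanged.
    alice_sk = PrivateKey.generate()
    bob_sk = PrivateKey.generate()

    # Alice encrypts and authenticates a message for Bob.
    alice_box = Box(alice_sk, bob_sk.public_key)
    ciphertext = alice_box.encrypt(b"meet at noon")

    # Bob decrypts it and simultaneously verifies that it came from Alice.
    bob_box = Box(bob_sk, alice_sk.public_key)
    print(bob_box.decrypt(ciphertext))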

Alternatives to Open Algorithms

Despite this paradigm, governments and some large institutions design and implement their own cryptographic algorithms. They often choose not to publicly disclose the mathematics and mechanics of their cryptosystems, in order to prevent their adversaries (other governments and organizations) from gaining access to the strongest encryption they have developed.

In this case they cannot rely on the widespread testing of the crowd, so they must test rigorously in-house, both the mathematics and the implementation. Government agencies such as the National Security Agency (NSA) in the USA and the Government Communications Headquarters (GCHQ) in the UK employ leading cryptographers and cryptanalysts, and this underpins their confidence in their algorithms.

There is nevertheless significant collaboration between government agencies, academia and industry on designing secure cryptosystems; unlike the open source alternative, however, the implementations are left to the individual parties. For example, the US government chose the Data Encryption Standard (DES) and the Advanced Encryption Standard (AES) for securing government documents and communications by requesting proposals against specific design and security requirements and selecting the best submission.

White Box Cryptography vs Black Box Cryptography

Often, cryptography needs to be implemented in software (as opposed to hardware) and deployed on a device where attackers can analyze the deployed software and related files. This can expose cryptographic assets, such as the private key, to attack, since they need to be stored somewhere on the device. These attacks are known as white box attacks.

The goal of white box cryptography is to defend against white box attacks. The idea is to take an implementation of some cryptosystem with a hardcoded key and then obfuscate the resulting code, so as to generate code that computes the same function but from which the private key cannot be extracted. It is not yet known whether this can be achieved securely, but it highlights one frontier of cryptography where much effort is being expended.
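
To give a flavor of the idea, here is a toy sketch (emphatically not a secure white-box construction) in which a hardcoded one-byte XOR key is replaced by a precomputed lookup table, so the deployed code computes the same function without the key appearing literally anywhere in it:

    # Toy illustration only: a one-byte XOR "cipher" with key 0x5A.
    # Real white-box schemes (e.g. white-box AES) use networks of such tables,
    # heavily obfuscated; even then key extraction is often possible, which is
    # why the problem remains open.

    SECRET_KEY = 0x5A  # what we want to hide

    # Built once at "compile time"; only TABLE would ship with the software.
    TABLE = bytes(b ^ SECRET_KEY for b in range(256))

    def whitebox_encrypt(data: bytes) -> bytes:
        # The deployed code uses only TABLE; the key never appears here.
        return bytes(TABLE[b] for b in data)

    print(whitebox_encrypt(b"hello"))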

Security Goes Beyond Mathematics & Code

The security of any cryptosystem requires certain information, such as passwords and private keys, to remain private and secure. However, this is often the weakest part of a cryptosystem, since people might reveal the secret information by mistake or be tricked into doing so without their knowledge. Other human induced errors are also possible; for example, a disgruntled employee might leak information on purpose as an act of spite or retaliation.

Examples of Human Induced Errors in Cryptosystems

It is estimated that over 95% of cybersecurity breaches result from human error (according to the IBM Cyber Security Intelligence Index). Furthermore, Verizon estimates that 95% of advanced and targeted attacks involve spear phishing – a process in which personalized emails are sent to people of interest (often executives) at a given company, hoping to exploit a lack of cybersecurity awareness and gain access by redirecting the target to a URL under the attacker’s control or by getting the target to open an attachment of some kind.

Attacks mediated through human behavior are prevalent precisely because the mathematics of cryptography is thoroughly tested and developed by experts. The vulnerabilities therefore lie with individuals who are less informed about cybersecurity and subject to ordinary flaws in behavior, reasoning and understanding.

The most frequent examples of human error are failing to keep passwords secure and offline, and a lack of awareness of one’s surroundings and of the techniques attackers use to gain access to systems. Keeping passwords on a note next to the computer, or in a file on an internet-connected hard drive, raises the risk of them being stolen or viewed and used to gain access to secure systems and encrypted data.

A less cited but equally prevalent example is tailgating through locked doors or security gates. An intruder may simply wait for someone to hold the door open for them and thereby gain access to a secure building; once past initial security, their access to hardware and potential weak points is greatly increased. This is a harder problem to solve because it requires changing ingrained behavior and what is viewed as politeness and due courtesy.

Approaches to Mitigate Human Induced Errors

There are solutions to these problems that are being rapidly implemented at institutions across the world. The general aim is to make security part of everyday protocols and to draw on the findings of behavioral economics, most notably Richard Thaler, by making secure actions easier than insecure ones, so that by following the path of least resistance people also reinforce the security of the systems laid out in code.

Many new security protocols have been designed at the human level by means of social engineering, alongside schemes aimed at increasing awareness of security threats. These schemes aim to change habits so that it is no longer the norm to hold doors open for unknown colleagues, thereby potentially exposing internal systems to outsiders. Technology helps here, for example with turnstiles that require a company-issued ID and let only one person through at a time.

Beyond this, there has been a gradual trend toward enforcing password complexity, ensuring that passwords are long and contain characters of different kinds (numbers, capital letters, lowercase letters and symbols). These systems also set an expiry date on passwords so that they must be renewed periodically; some systems even incentivize stronger passwords by letting the strength of the password determine how long it lasts before it expires. People often choose simple passwords based around holidays or friends’ names because they are memorable. However, such passwords can be very insecure, so very complex passwords are encouraged and a trusted password manager is used to remember them. This increases the security of the system by enabling ever more complex passwords, and can even stop people from divulging their passwords, since they cannot remember them themselves.
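
As a rough sketch of how such a policy might be expressed in code (the scoring rules and expiry windows below are illustrative assumptions, not any particular standard):

    import re
    from datetime import timedelta

    def password_score(password: str) -> int:
        # One point for length and one for each character class present.
        score = 0
        if len(password) >= 12:
            score += 1
        if re.search(r"[a-z]", password):
            score += 1
        if re.search(r"[A-Z]", password):
            score += 1
        if re.search(r"[0-9]", password):
            score += 1
        if re.search(r"[^a-zA-Z0-9]", password):
            score += 1
        return score  # 0 (very weak) to 5 (strong)

    def expiry_period(password: str) -> timedelta:
        # Stronger passwords earn a longer lifetime before renewal is forced.
        days_by_score = {0: 30, 1: 30, 2: 60, 3: 90, 4: 180, 5: 365}
        return timedelta(days=days_by_score[password_score(password)])

    print(expiry_period("Correct horse battery staple 42!"))  # score 5 -> 365 days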

The emerging Moving Target Defense (MTD) paradigm is at the core of many new secure systems and has opened a new front in the tit-for-tat battle between cyber attackers and defenders. As the name suggests, MTD aims to change the nature of systems frequently, reducing the time attackers have to mount attacks and gain access. This is similar to the notion of password expiry dates, but on a larger scale and with shorter cycles. Examples of MTD include binary scrambling (changing the binary code of programs as frequently as every 5 seconds while maintaining functionality), micro-service firewalls (each micro-service has its own firewall and is loosely coupled) and rapid cycling (the continuous wiping and reinstating of systems).
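
As a purely conceptual sketch of rapid cycling (the provision_instance and destroy_instance helpers below are hypothetical placeholders for whatever an infrastructure platform actually provides):

    import time

    CYCLE_SECONDS = 300  # assumption: replace each instance every five minutes

    def provision_instance() -> str:
        # Hypothetical: stand up a fresh instance from a known-good image.
        return "instance-" + str(int(time.time()))

    def destroy_instance(instance_id: str) -> None:
        # Hypothetical: wipe the old instance, discarding any foothold an attacker gained.
        print("destroyed", instance_id)

    def rapid_cycle():
        current = provision_instance()
        while True:
            time.sleep(CYCLE_SECONDS)
            replacement = provision_instance()  # bring up the replacement first
            destroy_instance(current)           # then retire the old instance
            current = replacement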

