Cryptography is the practice and study of techniques for secure communication in the presence of third parties (called adversaries). More generally, it is about constructing and analyzing protocols that overcome the influence of adversaries and which are related to various aspects in information security such as data confidentiality, data integrity, authentication, and non-repudiation. Modern cryptography intersects the disciplines of mathematics, computer science, and electrical engineering. Applications of cryptography include ATM cards, computer passwords, and electronic commerce.
Cryptography prior to the modern age was effectively synonymous with encryption, the conversion of information from a readable state to apparent nonsense. The originator of an encrypted message shared the decoding technique needed to recover the original information only with intended recipients, thereby precluding unwanted persons from doing the same. Since World War I and the advent of the computer, the methods used to carry out cryptology have become increasingly complex and their application more widespread.
Modern cryptography is heavily based on mathematical theory and computer science practice; cryptographic algorithms are designed around computational hardness assumptions, making such algorithms hard to break in practice by any adversary. It is theoretically possible to break such a system but it is infeasible to do so by any known practical means. These schemes are therefore termed computationally secure; theoretical advances (e.g., improvements in integer factorization algorithms) and faster computing technology require these solutions to be continually adapted. There exist information-theoretically secure schemes that provably cannot be broken even with unlimited computing power—an example is the one-time pad—but these schemes are more difficult to implement than the best theoretically breakable but computationally secure mechanisms.
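The one-time pad mentioned above can be sketched in a few lines of Python; the names here are illustrative, but the construction (XOR with a truly random key as long as the message, used only once) is the real scheme.

```python
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # XOR each plaintext byte with the corresponding key byte.
    assert len(key) == len(plaintext), "key must be as long as the message"
    return bytes(p ^ k for p, k in zip(plaintext, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))   # truly random, used only once
ciphertext = otp_encrypt(message, key)
# XOR is its own inverse, so decryption is the same operation with the same key.
recovered = otp_encrypt(ciphertext, key)
assert recovered == message
```

The security is information-theoretic precisely because every possible plaintext of that length is equally consistent with the ciphertext; the practical difficulty is generating, distributing, and never reusing keys as long as the traffic itself.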
In typical Digital Rights Management (DRM) implementations, cryptography is the part of the security solution that employs a known strong algorithm and relies on the secrecy of the cryptographic key. Cryptographic techniques are employed to secure content distribution using both encryption and signature mechanisms. Encryption is the mechanism of transforming content using a secret key so that the encrypted content cannot be accessed or used unless it is decrypted back into its accessible form. The signature mechanism is a process that attaches associated data to content for authentication purposes.
Encryption algorithms are categorized into two types: symmetric and asymmetric. A symmetric algorithm (a.k.a. private key cryptography) uses a single secret key for both the encryption and decryption processes. For example, suppose A wants to deliver content, x, to B. A first encrypts x using a secret key k, obtaining y = E_k(x), and sends the encrypted content, y, to B. On receiving y, B has to decrypt it using the same key k to access x, i.e., x = D_k(y). The secret key k must be known only by A and B. Well-known examples of symmetric algorithms are the Data Encryption Standard (DES) and the Advanced Encryption Standard (AES). DES relies on 16 rounds of substitution and permutation operations, while AES is based on byte substitution, permutation, and XOR operations, which makes the algorithm easy to implement in hardware. One big issue with symmetric algorithms is how A can send the secret key k to B in a safe way: once an unauthorized party intercepts the key, everything collapses.
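The byte substitution, permutation, and XOR steps that AES builds on can be illustrated with a toy substitution-permutation cipher on 4-byte blocks. This sketch shows only the round structure; the S-box, permutation, and round keys are arbitrary choices for the example, not a real cipher.

```python
import random

# Build an arbitrary invertible S-box for the illustration.
rng = random.Random(0)
SBOX = list(range(256))
rng.shuffle(SBOX)
INV_SBOX = [0] * 256
for i, s in enumerate(SBOX):
    INV_SBOX[s] = i

PERM = [3, 0, 2, 1]                          # byte positions after permutation
INV_PERM = [PERM.index(i) for i in range(4)]

def encrypt_block(block: bytes, round_keys) -> bytes:
    state = list(block)
    for rk in round_keys:
        state = [SBOX[b] for b in state]               # byte substitution
        state = [state[PERM[i]] for i in range(4)]     # permutation
        state = [b ^ k for b, k in zip(state, rk)]     # XOR with round key
    return bytes(state)

def decrypt_block(block: bytes, round_keys) -> bytes:
    state = list(block)
    for rk in reversed(round_keys):                    # undo rounds in reverse
        state = [b ^ k for b, k in zip(state, rk)]
        state = [state[INV_PERM[i]] for i in range(4)]
        state = [INV_SBOX[b] for b in state]
    return bytes(state)

round_keys = [[1, 2, 3, 4], [5, 6, 7, 8]]              # the shared secret k
assert decrypt_block(encrypt_block(b"abcd", round_keys), round_keys) == b"abcd"
```

Both parties must hold the same round keys, which is exactly the key-distribution problem described above.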
The asymmetric algorithm (a.k.a. public key cryptography) overcomes the symmetric algorithm's key exchange problem. This algorithm employs two different keys for encrypting and decrypting content, called the public and the private key, respectively. The public key can be freely distributed or placed in an accessible directory, while the private one has to be kept secret. When A wants to send content, x, to B, for example, A encrypts x using B's public key, pk_B, obtaining y = E_pk_B(x).
Thus, B is the only party able to decrypt y, using his own private key, sk_B, i.e., x = D_sk_B(y). The private and public keys are mathematically linked; however, in real applications they are numbers so large that it is computationally infeasible to derive the private key from the public key. The security of public key cryptography, such as the RSA, El-Gamal, and Rabin algorithms, mostly relies on the hardness of problems such as integer factorization and the discrete logarithm in a finite field.
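The key relationship can be seen in a toy RSA example with tiny primes; real deployments use primes hundreds of digits long, which is what makes deriving the private key from the public one infeasible.

```python
# Toy RSA key generation (illustration only -- these primes are far too small
# to be secure).
p, q = 61, 53
n = p * q                  # public modulus
phi = (p - 1) * (q - 1)
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent: modular inverse (Python 3.8+)

x = 65                     # the content, encoded as a number smaller than n
y = pow(x, e, n)           # anyone can encrypt with B's public key (n, e)
assert pow(y, d, n) == x   # only B, holding d, can recover x
```

Here (n, e) is the freely distributable public key and d the private key; recovering d from (n, e) requires factoring n.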
Although the asymmetric algorithm solves the problem of communicating the secret key, this does not mean it is better than the symmetric one. In applications that do not involve more than one party, such as storing content in a database or protecting data inside the content, the symmetric algorithm is more efficient than the asymmetric one in terms of complexity. In addition, since asymmetric algorithms are significantly slower than symmetric algorithms (for example, hardware and software implementations of RSA are about 1000 and 100 times slower than DES, respectively), they are less suitable than symmetric algorithms for encrypting and decrypting large amounts of data in a short time. Thus, it is natural to combine them in a hybrid system, whereby a symmetric algorithm, such as DES or AES, is used to encrypt and decrypt messages, and an asymmetric algorithm is used to communicate the key. Furthermore, the security of the asymmetric algorithm also depends on the validity of the user's public key. Recalling the previous example, how can A be sure that the key in the directory is indeed B's public key? It is possible that a third party has impersonated B and placed its own key in an accessible directory as B's public key. Therefore, the distributed public key has to be certified.
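A hybrid system along these lines can be sketched as follows; the XOR keystream and the tiny RSA parameters are hypothetical stand-ins for a real symmetric cipher such as AES and a full-size RSA key.

```python
import secrets

n, e, d = 3233, 17, 2753     # toy RSA pair for B: n = 61*53, d = e^-1 mod 3120

def xor_stream(data: bytes, key: bytes) -> bytes:
    # Stand-in for a real symmetric cipher: XOR with the repeated key.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

content = b"a large piece of content"
session_key = secrets.token_bytes(8)           # fresh symmetric key
ciphertext = xor_stream(content, session_key)  # fast bulk encryption

# A wraps each key byte under B's public key (byte-wise wrapping is insecure
# in practice, but it shows the division of labour)...
wrapped_key = [pow(b, e, n) for b in session_key]
# ...and B unwraps with the private key, then decrypts the bulk data.
unwrapped = bytes(pow(c, d, n) for c in wrapped_key)
recovered = xor_stream(ciphertext, unwrapped)
assert recovered == content
```

The slow asymmetric operation touches only the short session key, while the fast symmetric operation handles the large content.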
Public key infrastructure (PKI) was introduced to overcome the issue of public key validity in the asymmetric algorithm by supporting the management of keys and certificates. A PKI framework consists of policies and procedures for generating and publishing keys and certificates. Besides the policies and procedures, the essential components of a PKI include the certificate authority (CA), the registration authority (RA), certificate storage, and software. The CA is the trusted entity that generates the certificates, ensuring that the correct key and identity are tied to each certificate. The RA is responsible for verifying the identity of the users who receive certificates. The functionality of the CA and RA can be handled by a single entity. The certificates, as well as their updated information, are stored in the CA's database.
Typically, a digital certificate consists of the certificate holder's public key, information about the holder and the CA, the CA's signature, the issuing and expiry dates, and the certificate serial number. The keys can either be generated by the CA for the client, or be generated by the client and certified by the CA. The signature is generated from the CA's secret authentication key and the parameters held within the CA's own authentication certificate. Additionally, in a hierarchy of CAs, each CA is certified by a higher-level CA, while the root CA generates and signs its own authentication certificate.
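The fields listed above can be collected into a simple record. This layout is a hypothetical simplification for illustration; real certificates follow the X.509 standard.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Certificate:
    serial_number: int
    holder: str                # information about the certificate holder
    holder_public_key: int     # e.g. an RSA modulus/exponent pair in practice
    issuer: str                # the CA that issued and signed the certificate
    issued: date
    expires: date
    ca_signature: bytes        # CA's signature over the fields above

# Hypothetical certificate binding B's public key to B's identity.
cert = Certificate(42, "B", 3233, "Example Root CA",
                   date(2014, 1, 1), date(2016, 1, 1), b"\x00" * 16)
assert cert.issued < cert.expires
```

A verifier trusts holder_public_key only after checking ca_signature with the issuing CA's public key, chaining up to a trusted root.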
A digital signature scheme consists of two algorithms, a signing and a verifying algorithm, and employs two different keys for these algorithms. However, unlike the encryption mechanism, a signature scheme uses the private key for signing a message and the public key for verifying the signature. For example, before sending a message, x, to B, A signs it using a secret key. Subsequently, B can verify the signature using A's public key.
A formal definition of a digital signature scheme is given below.
A digital signature scheme consists of five sets P, A, K, S, and V, which are the finite sets of possible messages, possible signatures, possible keys, signing algorithms, and verification algorithms, respectively. For each key k in K, there is a signing algorithm sig_k in S and a corresponding verification algorithm ver_k in V. Each sig_k : P → A and ver_k : P × A → {true, false} are functions such that the following condition is satisfied for every message x in P and every signature y in A: ver_k(x, y) = true if and only if y = sig_k(x).
The algorithm sig_k is kept secret while ver_k is public; both should be polynomial-time functions, and it must be computationally infeasible for an unauthorized person, say C, to counterfeit A's signature on a message x. Since the message x belongs to A, A must be the only person able to compute a signature y such that ver_k(x, y) = true. Otherwise, the signature can be forged.
A digital signature scheme cannot be unconditionally secure, due to the finiteness of the set of possible signatures, A, involved in it. For A's message, x, C can try all possible signatures using the public algorithm ver_k until finding the right one. Thus, given adequate time, C can always counterfeit A's signature. Therefore, the best one can hope for is a signature scheme that is computationally secure.
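The exhaustive-search argument can be made concrete with deliberately tiny, hypothetical RSA-style parameters, where C can afford to try every candidate signature against the public verification check.

```python
n, e, d = 33, 3, 7             # toy parameters: n = 3*11, e*d = 21 = 1 mod 20
x = 4                          # A's message
real_signature = pow(x, d, n)  # only A, holding d, can compute this directly

# C knows only the public data (n, e, x) and searches the finite signature
# space until the public verification check succeeds.
forged = next(y for y in range(n) if pow(y, e, n) == x)
assert forged == real_signature
```

With realistic key sizes the same search takes astronomically long, which is what "computationally secure" means in practice.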
In practice, the signing and verifying algorithms involve multiplication- or exponentiation-based computation. As a result, the signature can be at least as long as the message, which is inefficient when the message is long. To overcome this problem, a hash function is applied. With a message, x, of arbitrary length as input, a hash function, h, produces a message digest, h(x), of fixed length. A hash function has the following characteristics:
- h is a one-way function: given a message x, it is easy to compute h(x), but hard to derive x from h(x).
- Given x, it is hard to find another message x', such that h(x) = h(x').
- h needs to be collision-resistant: it is hard to find any two messages, x and x', such that h(x) = h(x').
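These properties can be observed with a standard hash function such as SHA-256 from Python's hashlib:

```python
import hashlib

# The digest length is fixed regardless of the input length.
digest1 = hashlib.sha256(b"a short message").hexdigest()
digest2 = hashlib.sha256(b"a much longer message " * 1000).hexdigest()
assert len(digest1) == len(digest2) == 64    # 256 bits as 64 hex characters

# Changing a single byte of the message yields a completely different digest;
# no practical way of inverting SHA-256 or finding collisions is known.
assert hashlib.sha256(b"message").digest() != hashlib.sha256(b"messagf").digest()
```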
To create a digital signature, y, for a message, x, the signer, A, first computes the hash of the message, h(x), and then encrypts it using his private key, sk_A, to obtain the signature. Thus, y = E_sk_A(h(x)).
The pair (x, y) is then transmitted. At the destination, the receiver, B, uses the same hash function to compute the hash of the received message, h(x), then decrypts y using A's public key, pk_A, and accepts y as a valid signature if and only if D_pk_A(y) = h(x).
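The whole hash-then-sign flow can be sketched with SHA-256 and hypothetical tiny RSA parameters; reducing the digest modulo a toy n is purely illustrative (real schemes use full-size keys and a padding scheme such as PSS).

```python
import hashlib

n, e, d = 3233, 17, 2753       # hypothetical tiny RSA key pair for A

def digest_mod_n(x: bytes) -> int:
    # Fixed-length digest of x, reduced so it fits below the toy modulus.
    return int.from_bytes(hashlib.sha256(x).digest(), "big") % n

x = b"the content to be signed"
y = pow(digest_mod_n(x), d, n)        # A signs h(x) with the private key d

# B recomputes h(x) and checks it against the signature using A's public key.
assert pow(y, e, n) == digest_mod_n(x)
```

Only the fixed-length digest is signed, so the signature stays short no matter how long the content is.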
Last updated: Townsville, March 13, 2014