Network Security

Overview

In a networking environment, the data on networked computers is far more at risk from potential security threats than data on stand-alone computers, even more so if the network has connections to other networks, or the Internet. It is important, therefore, to take all reasonable precautions to protect the network from hackers, viruses, and other security threats. The theft of data can have far-reaching consequences, and any incursion, whether electronic or physical, can result in the corruption of data or software, or even physical damage to, or theft of, network equipment. At the same time, the network should not be so secure that users experience difficulties in carrying out their duties.

The level of security needed will depend on the business environment in which the network is operating. Whilst all networks should be reasonably secure, a network operated by a bank, over which a large number of financial transactions will be carried out every day, will require far more stringent safeguards than a network that supports a small family business. Whatever the business environment, however, the first step towards ensuring data security is to establish a security policy. The security policy should be proactive in preventing unauthorised access or behaviour. Only authorised users should be able to access data and network resources, and the network must be protected from harm, whether intentional or otherwise.

The security policy usually takes the form of a written document that articulates exactly what security threats could affect the network, what level of risk each threat poses, and what countermeasures and security procedures will be implemented. One of the first tasks when formulating a network security policy is to decide what resources should be accessible to each user group, and what level of access to grant in each case. An organisation whose network is linked to other networks via the Internet or a wide area network should implement stringent measures to protect the network from external threats, and the security measures planned should include the monitoring of network activity so that unusual or suspicious activity can be logged, and if necessary investigated.

User authentication

The first line of defence on any network is the authentication of users. Before a user can access the network, they must enter a valid username and password. The resources available to each user will depend on the rights that have been granted to that user by a network administrator, and ideally should be only those resources needed by the user to carry out the duties associated with their organisational role. In an organisation of any size, groups of users who are involved in the same kinds of activities are likely to require access to the same resources. This means that access rights can be assigned to a group entity rather than to the individual users. A new employee can be added to the group whose resources they must be able to access, while employees moving from one organisational role to another can have their membership transferred to the appropriate group. This simplifies the job of the network administrator in terms of managing user access to resources.

Users must enter valid credentials to gain access to network resources


A user account must be created for a user before that user can log into the network and access network resources. The account maintains information about the user, such as their username, their password, a list of resources to which they are allowed access, and the level of access they have to each resource. A username uniquely identifies a user on the network, and it is common practice to adopt a standard naming convention. One widely used convention is to combine the user's first initial and last name. A user called John Smith, for example, would have the username "jsmith". If another user called John Smith exists, the second John Smith might have the username "jsmith2", since the two usernames must be different. Whatever convention is used, the number of characters used should not be excessive, and it should be relatively easy to identify the person that a particular username refers to.
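
This convention lends itself to a simple sketch. The helper below is hypothetical, but shows how a username could be derived from the first initial and last name, with a number appended when the name is already taken:

```python
def make_username(first_name, last_name, existing):
    """Derive a username from the first initial and last name, appending
    a number when that name is already taken (illustrative helper)."""
    base = (first_name[0] + last_name).lower()
    username, suffix = base, 2
    while username in existing:
        username = f"{base}{suffix}"
        suffix += 1
    existing.add(username)
    return username

taken = {"jsmith"}                              # the first John Smith
print(make_username("John", "Smith", taken))    # -> jsmith2
```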

It is also common practice for an administrator to give a new user account an initial password to allow the user to log in to the network for the first time. Once the user has logged in, however, he or she will be asked to choose a new password. It is standard practice to allow users to change their password whenever they wish, and many organisations actually require users to change their password at regular intervals to reduce the likelihood of user passwords being discovered by an unauthorised user and subsequently used to gain unlawful access to network resources. For the same reason, users are discouraged from disclosing their password to another user. Most network authentication software provides a facility that temporarily disables a user account if an incorrect password is repeatedly entered, to discourage attempts by hackers to access network accounts.
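
A lockout policy of this kind might be sketched as follows; the threshold, lockout period, and `verify` callback are all illustrative rather than taken from any particular product:

```python
import time

MAX_ATTEMPTS = 3            # failures allowed before the account is disabled
LOCKOUT_SECONDS = 15 * 60   # how long the account stays disabled

failures = {}               # username -> (consecutive failures, time of last one)

def attempt_login(username, password, verify):
    """`verify` stands in for whatever actually checks the credentials."""
    count, last = failures.get(username, (0, 0.0))
    if count >= MAX_ATTEMPTS and time.time() - last < LOCKOUT_SECONDS:
        return "account temporarily disabled"
    if verify(username, password):
        failures.pop(username, None)    # a successful login resets the counter
        return "authenticated"
    failures[username] = (count + 1, time.time())
    return "invalid credentials"
```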

User accounts can be disabled for longer periods if a user will be absent for an extended period of time due to illness, or for any other reason. This ensures that no one else can log in using that user's account details. It is often possible to set an account expiry date if a temporary account has been created (for a temporary employee, for example) so that the account is automatically deactivated when it is no longer required.

The details of each user, including their username, password, any group memberships they hold, and the level of access they have to various network resources, are stored in a database called the directory services database. A copy of the directory services database is held on at least one file server, and is usually duplicated on several file servers to provide some redundancy in case one or more servers become unavailable for any reason. Each time a user logs on to the network, the first available server will authenticate their credentials and grant them rights to network resources according to the information stored in the directory services database.

By default, each user will initially have full access to their personal file space on the network, but not to any other network resource. The user will normally be assigned membership of one or more groups, depending on the role they occupy, and will automatically be granted access to whatever resources are available to those groups. The rights assigned to any particular group for a particular network directory, for example, can be very tightly controlled, and typically include rights such as read, write, create, modify and erase, together with the right to grant access to others.
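
As an illustration, a minimal sketch of group-based rights might look like this; the group names, directories, and rights are invented for the example:

```python
# Each (group, directory) pair maps to the rights that group holds there;
# a user's effective rights are the union of the rights of their groups.
RIGHTS = {
    ("accounts", "/data/ledgers"): {"read", "write"},
    ("staff",    "/data/shared"):  {"read", "write", "create", "erase"},
    ("staff",    "/data/ledgers"): {"read"},
}

def effective_rights(user_groups, directory):
    rights = set()
    for group in user_groups:
        rights |= RIGHTS.get((group, directory), set())
    return rights

# A member of both groups gets the combined rights on /data/ledgers.
print(effective_rights({"staff", "accounts"}, "/data/ledgers"))  # read, write
```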



Security threats and countermeasures

The activities associated with network security are many and varied, but they have the broad aim of protecting the physical network infrastructure, the software configuration of the network, and the data stored on network computers. It should be remembered, however, that while a significant value can be placed on hardware, or the time taken to reconfigure network software, the value of data may be incalculable, and the loss, corruption or theft of data can have catastrophic consequences for an organisation. For this reason, most security measures are designed to protect the integrity and confidentiality of the network data. The term data integrity refers to the accuracy and completeness of the data held, while confidentiality means ensuring that sensitive data can be seen only by those authorised to see it. The consequences of a breach of confidentiality could include cripplingly expensive litigation.

Each type of threat to network security must be identified, and the level of risk evaluated. One approach is to examine well-documented threats that have been encountered by other organisations of a similar type, together with an evaluation of the effectiveness of the countermeasures adopted. Some measures, such as implementing a firewall and installing anti-virus software, are standard practice. Despite whatever measures are put in place to prevent a security breach, however, a breach will occur sooner or later, and any strategy adopted should include a set of procedures designed to mitigate the consequences of a loss of data, or disruption to network services. One of the more obvious measures is to ensure that all network data and system configuration files are regularly backed up.

Hackers may be bent on accessing confidential data, while other forms of attack, such as a computer virus, are more indiscriminate and in the worst case may destroy data. These kinds of attack, although serious enough, are usually relatively easy to detect, and appropriate action can be taken to neutralise or mitigate their effects. A more difficult form of attack to detect (because it is not usually immediately obvious that it has occurred) is the deliberate corruption or manipulation of data, such as changing the numbers in a spreadsheet or altering customer records in a database. The worst aspect of this kind of attack is that, even when it is discovered, it is very difficult to determine when the attack occurred or how much of the data has been affected.

In order to ensure that data integrity is maintained, it is necessary to monitor and record all changes to data and system files on network servers. It is then possible for a system administrator to determine whether the changes are legitimate or the result of unauthorised access or a virus infection. If a hacker gains access at a high enough security level, they can execute system commands, make changes to the system's configuration files, or install scripts containing malicious code. Being able to identify files or directories that have been modified without authorisation allows simple remedial action to be taken immediately. If the problem remains undetected for any length of time, the damage to data and system files may become so severe that only a complete re-installation of the server's software, or a full restoration of the server's data from backup, will guarantee data integrity.
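
One simple way to detect unauthorised modification is to compare files against a baseline of cryptographic hashes. Here is a minimal sketch, assuming a baseline snapshot was taken while the system was in a known-good state and stored somewhere safe:

```python
import hashlib
import pathlib

def hash_file(path):
    """SHA-256 digest of a file's contents."""
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

def snapshot(root):
    """Record a digest for every file under `root`."""
    return {str(p): hash_file(p)
            for p in pathlib.Path(root).rglob("*") if p.is_file()}

def changed_files(baseline, root):
    """Report files that are new, modified, or missing since the baseline."""
    current = snapshot(root)
    modified_or_new = [p for p in current if baseline.get(p) != current[p]]
    deleted = [p for p in baseline if p not in current]
    return modified_or_new + deleted

# baseline = snapshot("/etc")          # taken earlier, in a known-good state
# print(changed_files(baseline, "/etc"))
```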

Most operating systems now provide "rollback" facilities designed to preserve system integrity. These facilities allow a computer system to be restored to a known state following a breach of system security using copies of critical system files that have been previously saved to a safe location. There are also a number of commercial software packages that provide the same functionality. Other basic security measures include keeping operating system patches up to date, installing anti-virus software on all network computers, using data encryption, and keeping abreast of the latest security-related issues.

Sometimes unintentional user errors can have potentially serious consequences, including the sort of simple errors that everyone is guilty of from time to time. The ease with which a file can be deleted, for example, is often disproportionate to the consequences of deleting the wrong file. Everyone working with computers has suffered the loss of important data files at one time or another, and in some cases such mishaps have been widely publicised. One example in recent years involved a major bank that accidentally ran a program that debited monthly standing orders from customer accounts twice. This was only discovered when large numbers of customers complained that they could not withdraw cash from ATMs due to insufficient funds. Network administrators should attempt to reduce the incidence of user error by ensuring that users are familiar with the procedures for logging in to the network and accessing data and other resources. Following the correct procedures will reduce the likelihood of users accidentally deleting files or inadvertently changing system configuration parameters.

The creation of a security-conscious culture within the organisation, and encouraging users to take sensible precautions, can greatly reduce the threat from viruses and other malware. One of the threats not so easily dealt with is that of the enemy within, and a serious potential problem in an organisation of any size is the risk of fraud being committed by an employee. Many such cases have been reported in the past, and a greater number still may have gone unreported. The threat is particularly serious if the employee involved is a senior member of staff in a position of trust. The level of system access engendered by such a position makes it relatively easy to commit computer fraud, and a number of reported incidents have involved company directors. There is no simple solution to this kind of problem other than trying to ensure that the procedures put in place, and the level of auditing undertaken, will prevent such an occurrence, or at least expose it quickly.

The graphic below illustrates how threats to network security have evolved during the last four decades.

The evolution of threats to network security


Physical security will be an issue in any network, but is particularly critical for organisations that have large numbers of people moving around the premises on a daily basis (e.g. a college, university, library or hospital). Network servers should be kept in a secure location to which only authorised personnel have access, and network cabling should, as far as possible, be concealed above suspended ceilings, beneath computer flooring, or within trunking or conduit. Staff should be made aware of the dangers of leaving workstations logged on and unattended, and discouraged from divulging their password to others or leaving a written note of their password in plain sight.

Viruses and other malware

A virus is a small piece of program code that can enter a computer system by attaching itself to a data file or program file. A file that has a virus attached to it in this way is said to be "infected". One of the features of a virus is that it has the ability to replicate itself. Copies of the virus can subsequently attach themselves to other files, and can infect another system when infected files are transferred from one computer to another. In addition to this ability to replicate, the virus may have some rather undesirable features, ranging from the ability to display annoying (though relatively harmless) system messages to the deletion of all the data on a computer's hard drive. In many cases, this potentially dangerous activity is only triggered when some specific event or condition occurs. Some viruses, for example, are written in such a way that they only become active at a particular date and time.

A worm is similar to a virus in the sense that it is self-replicating, but instead of having to attach itself to a file in order to propagate, the worm exists as a process (a process is what a program is called when it is actually running) in its own right, and can replicate itself virtually unchecked. The effect of this is to clog up the targeted system with large numbers of identical processes that prevent legitimate processes from running properly through sheer weight of numbers. Like a virus, the worm may also carry out some other activity that is detrimental to the system. In 1988, a worm (the so-called Morris worm) infected thousands of UNIX computers on the Internet, tying up computing resources and rendering many of these machines temporarily useless.

A Trojan horse is a program which usually performs some legitimate function, but also performs some other, less desirable activity. A Trojan can be created by modifying a program's code in such a way that when the modified program executes, the hidden code can carry out potentially harmful activities. The mechanism is a serious threat, because the host program will almost certainly have the same level of access to the system as the user running it. Confidential files could be copied to an area accessible to the creator of the Trojan without leaving any evidence that such an event has occurred. The activities carried out by the Trojan, which could include infecting the host computer with a virus, could escape detection by both users and system administrators. The so-called "freeware" programs found in the public domain are often a source of Trojans, so great care should be exercised when downloading, installing and using such software.

Anti-virus software that can detect the presence of a virus and remove it from the system is now widely available. Most anti-virus software can scan incoming data, removable media, and other potential sources of infection. If a virus is found, the infected files can be either cleaned up or contained before they can infect the system. Due to the growing number of computer viruses (the list is added to daily), it is important to keep anti-virus software up to date.

Data encryption

Encryption is the process of transforming information (or plain text) using an encryption algorithm (or cipher) to make it unreadable to anyone receiving the information unless they have a key. The result of the process is encrypted information (cipher text). A complementary process of decryption reverses the original encryption process to make the information readable again. The encryption algorithm carries out a number of mathematical transformations on the data using the key as the variable factor. Often, the same algorithm is used to decrypt the data, with the transformations being reversed, either using the same key that was used to encrypt the data in the first place, or a complementary decryption key.
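
As a concrete illustration, the snippet below performs a symmetric encrypt/decrypt round trip. It assumes the third-party Python `cryptography` package; any comparable cipher library would serve equally well:

```python
from cryptography.fernet import Fernet   # pip install cryptography

key = Fernet.generate_key()               # the secret key
cipher = Fernet(key)

cipher_text = cipher.encrypt(b"plain text goes here")
print(cipher_text)                        # unreadable without the key
print(cipher.decrypt(cipher_text))        # b'plain text goes here'
```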

Even with the most advanced and powerful computers, decoding a strongly encrypted message in any reasonable amount of time is virtually impossible without the key. The most successful attempts at code breaking have been the so-called "brute-force attacks", in which every possible key combination is used in turn until one is found that will unlock the code. The key itself is usually a large binary number with a fixed number of bits. The length of the key (i.e. the number of bits it contains) will directly determine how easy or difficult it will be to break the encryption. The more bits there are, the greater the number of different bit patterns that can be formed, and the longer (on average) it would take for a brute force attack to find the key.
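
The arithmetic is easy to demonstrate; the guess rate used below is purely hypothetical:

```python
# Each extra key bit doubles the number of possible keys.
for bits in (56, 128, 256):
    print(f"{bits}-bit key: {2 ** bits:,} possible keys")

# At a hypothetical 10^12 guesses per second, a 56-bit key space can be
# exhausted in under a day; a 128-bit space takes around 10^19 years.
print(2 ** 56 / 1e12 / 86400, "days to search 56 bits")
print(2 ** 128 / 1e12 / 86400 / 365.25, "years to search 128 bits")
```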

One of the earliest data encryption techniques to be widely adopted is the Data Encryption Standard (DES). DES is a symmetric block cipher developed by IBM and adopted by the United States government for the encryption of unclassified information in 1976. The term block cipher refers to the fact that the algorithm encodes one fixed-length block of plain text information at a time. In the case of DES, the block size is 64 bits (8 bytes).

The encryption of each block consists of an initial permutation (IP), followed by sixteen identical processing stages called rounds, and a final permutation (FP). The output is a block of cipher text that is the same length as the original block of plain text. DES is said to be symmetric because both the encryption and decryption algorithms use the same 56-bit encryption key. This presents a potential security risk if the key must be transmitted from one location to another before encrypted data is transmitted, because the key could in theory be intercepted by an eavesdropper and used to decode subsequent encrypted transmissions.

Because computer processing speeds have risen dramatically since the 1970s, DES is no longer considered to be secure due to the relatively short key length (56 bits), despite the fact that a 56-bit key offers 72,057,594,037,927,936 possible combinations. In 1999, a brute force attack was used to publicly break a DES key in less than 24 hours. DES has now been superseded by the Advanced Encryption Standard (AES), standardised in 2001 (the underlying Rijndael cipher was first published in 1998), which uses 128, 192 or 256-bit keys and has a block size of 128 bits (16 bytes). In 2003, the US government announced that AES could be used to encrypt classified information. Nevertheless, DES remains the yardstick against which symmetric key algorithms are measured.

The problem with symmetric encryption schemes, as mentioned earlier, is the possibility that the transmission of the key itself will be intercepted, rendering the subsequent encryption of information useless. An alternative scheme that circumvents this problem is public key or asymmetric key encryption. This form of encryption uses two different keys, a public key which is published openly, and a private key which is known only to the recipient of the information. A sender uses an organisation's public key to encrypt information to be sent to that organisation, which then uses its private key to decrypt the message. The main disadvantage of this form of encryption is that its encryption and decryption algorithms are computationally far more expensive than those of symmetric key encryption, which already provides satisfactory performance. A compromise solution is to use public key encryption to transmit a symmetric key value, and then send the actual data using a symmetric encryption scheme such as AES.
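
A sketch of this hybrid approach, again assuming the third-party `cryptography` package: RSA protects a freshly generated symmetric key, and the bulk data travels under the symmetric cipher.

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()       # published openly

session_key = Fernet.generate_key()         # symmetric key for the bulk data
cipher_text = Fernet(session_key).encrypt(b"the actual message")

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Only the short symmetric key travels under RSA...
wrapped_key = public_key.encrypt(session_key, oaep)

# ...and the recipient unwraps it with the private key, then decrypts the data.
recovered_key = private_key.decrypt(wrapped_key, oaep)
print(Fernet(recovered_key).decrypt(cipher_text))
```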

One further significant benefit of public key encryption is that it makes it possible for software and electronic documents of various kinds to be digitally signed. A digital signature can be added to a document using a private key. To sign the document, the data it contains is used to create a short, fixed-length message digest, using a process called hashing. The message digest is then encrypted using the private key, producing a digital signature which is appended to the document to be transmitted; the document is now said to be digitally signed. Any changes made to the document after it has been signed in this way can be detected.

The receiver will decrypt the signature using the sender's public key, changing it back into the original message digest. The receiver then recreates the message digest for themselves using the same hashing process used by the sender, and compares the message digest thus derived with the digest recovered from the signature. If the two message digests are identical, the document has not been changed in any way. To ensure that the public key used to decode the digital signature is genuine and actually belongs to the alleged sender of the document, records of public keys are held by a certificate authority. The certificate authority can verify the identity of the key holder and advise whether the certificate is still valid (a certificate authority can revoke a certificate if the key-pair to which it relates becomes compromised, or it is no longer needed).
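
A sketch of the sign-and-verify round trip, using RSA-PSS from the `cryptography` package as one possible choice of signature scheme:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
document = b"contract text ..."

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# The sender hashes the document and signs the digest with the private key.
signature = private_key.sign(document, pss, hashes.SHA256())

# The receiver verifies with the sender's public key; any change to the
# document (or the signature) raises InvalidSignature.
try:
    private_key.public_key().verify(signature, document, pss, hashes.SHA256())
    print("document unchanged")
except InvalidSignature:
    print("document has been altered")
```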

The firewall

Connecting a network to the Internet creates a two-way flow of traffic that potentially opens the door to various forms of attack. To provide a degree of separation between an organisation's local area network and other networks, firewalls are often employed. A firewall consists of a number of elements that collectively form a barrier between two networks, and can vary considerably in the degree of protection they afford, the level of expertise required to configure them, and their cost. The firewall's job is to prevent unauthorised traffic from entering (or leaving) the network. Firewall capabilities of varying sophistication are built into most network routers.

The sophistication of the firewall implementation will determine the level of security provided. The firewall shown below consists of two routers that carry out packet filtering, and an application gateway. Each packet entering or leaving the network must run the gauntlet of the firewall.

A firewall using packet filters and an application gateway


The packet-filtering routers inspect incoming and outgoing packets to ensure that they conform to some predetermined criteria. Those that do not are dropped, while those that pass this test are passed to the application gateway for more detailed scrutiny. The packet filters can also be configured to block incoming packets from certain IP address ranges, or indeed to block outgoing packets on a similar basis. They can also be configured to restrict the number of ports through which data may enter or leave the network to prevent hackers from using unused port numbers to gain access to the LAN. The application gateway examines the application layer protocols and even the data itself to ensure that certain types of data cannot enter or leave the network. The firewall and its constituent parts are not considered to be a part of the trusted (local) network or of any external networks to which they are connected. The firewall is often considered to be a completely separate entity from the networks it connects, and for this reason is sometimes referred to as the demilitarised zone (DMZ).
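
A packet filter of this kind can be sketched as a default-deny rule table; the addresses and ports below are purely illustrative:

```python
import ipaddress

# Explicit allow rules: (source network, destination port).
ALLOW_RULES = [
    (ipaddress.ip_network("0.0.0.0/0"),      443),  # HTTPS from anywhere
    (ipaddress.ip_network("203.0.113.0/24"),  22),  # SSH from a partner network
]
BLOCKED_SOURCES = [ipaddress.ip_network("198.51.100.0/24")]

def filter_packet(src_ip, dst_port):
    src = ipaddress.ip_address(src_ip)
    if any(src in net for net in BLOCKED_SOURCES):
        return "drop"
    if any(src in net and dst_port == port for net, port in ALLOW_RULES):
        return "pass to application gateway"   # deeper inspection follows
    return "drop"                              # default deny

print(filter_packet("203.0.113.7", 22))    # pass to application gateway
print(filter_packet("198.51.100.9", 443))  # drop
```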

The proxy server

If Internet access is a user requirement, a proxy server is often deployed. When a user requests a web page by clicking on a link or entering the URL of the page, the request is sent to the proxy server rather than to the web server on which the page resides. The proxy server relays the request to the web server, giving its own (global) IP address as the address to which a response should be sent. Once the server has responded, the proxy server relays that response to the client computer. The main advantage of this arrangement from a security point of view is that external web servers only ever see the proxy server's global IP address, preserving the anonymity of client computers on the local network and avoiding the need for any direct communication between LAN computers and an external host. It also means that computers on a LAN that uses a private IP addressing scheme can still access web resources.

The proxy server acts as an intermediary between LAN clients and web servers


Proxy servers can be set up to save copies of recently accessed web documents for a pre-determined length of time in a cache, so that subsequent requests for the same document can be serviced locally, providing the original stored on the server has not been modified since the copy was downloaded. Caching reduces both the time required to access the document and the overall volume of data traversing the network's Internet links. The proxy server can also be configured as a web filter in order to block access to undesirable content, or to deny access to a specific IP address or IP address range.
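
The caching logic might be sketched as follows, using an HTTP conditional GET to re-validate a stored copy (a 304 response from the origin server means the cached copy is still current):

```python
import urllib.error
import urllib.request

cache = {}   # url -> (body, value of the Last-Modified header)

def fetch(url):
    request = urllib.request.Request(url)
    if url in cache and cache[url][1]:
        # Ask the origin server whether the copy has changed since we stored it.
        request.add_header("If-Modified-Since", cache[url][1])
    try:
        with urllib.request.urlopen(request) as response:
            body = response.read()
            cache[url] = (body, response.headers.get("Last-Modified", ""))
            return body                  # fresh copy from the origin server
    except urllib.error.HTTPError as err:
        if err.code == 304:
            return cache[url][0]         # unchanged: serve from the cache
        raise
```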