How to secure your IT Systems
  • 20th September 2018
  • Veera




Broadly, your security protection should cover your network, systems and data (IP). Security must address both internal and external threats as part of a holistic approach. This is achieved by using technology, policy and strict process implementation at the enterprise level.


Protecting your network and systems is the first step, and the one that’s the biggest security resource drain for most organizations. Continuously monitoring and correlating logs and events for potential threats can help you significantly reduce the time to detect security incidents.

Protecting your systems and applications from the sheer number of software vulnerabilities requires significant effort and expertise. Automating the analysis and update process can dramatically reduce the time your systems may be at risk.

Ultimately, security threats have a common objective—access to your data. Empowering your IT team with greater insights into user accounts, their permissions, and what resources they can access can help prevent risks associated with malicious insiders (and gives you the added benefit of reporting on data access for security audits and regulatory compliance).


Physical and Network segregation

Network segmentation involves segregating the network into logical or functional units called zones. Each zone can be assigned different data classification rules, set to an appropriate level of security and monitored accordingly. Segmentation limits the potential damage of a compromise to a single zone. Attempting to jump from a compromised zone to other zones is difficult because, if the segments are designed well, then the network traffic between them can be restricted.
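The zone-based restrictions described above can be sketched as a default-deny traffic policy. This is a minimal illustration; the zone names and allowed flows below are assumptions for the example, not a recommended production policy.

```python
# Sketch: a zone-to-zone traffic policy for a segmented network.
# Zone names and allowed flows are illustrative assumptions.

# Each entry maps (source zone, destination zone) -> allowed.
# Any flow not listed is blocked (default deny).
ALLOWED_FLOWS = {
    ("dmz", "internal"): False,      # a compromised DMZ host cannot reach internal
    ("internal", "dmz"): True,       # internal clients may reach DMZ services
    ("internal", "database"): True,  # app tier may query the database zone
    ("dmz", "database"): False,      # DMZ must never talk to the database directly
}

def is_flow_allowed(src_zone: str, dst_zone: str) -> bool:
    """Return True only if the flow is explicitly permitted."""
    if src_zone == dst_zone:
        return True  # traffic within a zone is not restricted here
    return ALLOWED_FLOWS.get((src_zone, dst_zone), False)
```

The key design choice is the default-deny fallback: a new or unknown zone pairing is blocked until someone explicitly permits it, which is what limits an attacker's ability to jump between zones.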


Video surveillance

Monitoring all critical facilities in your company by video cameras with motion sensors and night vision is essential for spotting unauthorized people trying to steal your data via direct access to your file servers, archives or backups, as well as spotting people taking photos of sensitive data in restricted areas. 


Locking and recycling

Your workspace area and any equipment should be secure before being left unattended. For example, check doors, desk drawers and windows, and don’t leave papers on your desk. All hard copies of sensitive data should be locked up, and then be completely destroyed when they are no longer needed. Also, never share or duplicate access keys, ID cards, lock codes, and so on. 


Physical controls

Physical security is often overlooked in discussions about data security. A poor physical security policy could lead to a full compromise of your data or even your network. Each workstation should be locked down so that it cannot be removed from the area. A lock should also be placed on the case so that it cannot be opened, exposing the internals of the system; otherwise, hard drives or other sensitive components that store data could be removed and compromised. It's also good practice to set a BIOS password to prevent attackers from booting into other operating systems using removable media. Mobile devices, such as smartphones, tablets, laptops, USB flash drives, iPods and Bluetooth devices, require special attention, as explored below.


Laptop security

With laptops, the biggest concerns are loss and theft, either of which can enable malicious parties to access the data on the hard drive. Full-disk encryption should be used on every laptop within an organization. Also, using public wi-fi hotspots is never a good idea unless a secure communication channel such as a VPN or SSH is used. Account credentials can be easily hijacked through wireless attacks and can lead to compromise of an organization’s network.


Mobile device security

Mobile devices can carry viruses or other malware into an organization's network and extract sensitive data from your servers. Because of these threats, mobile devices need to be controlled very strictly. Devices that are allowed to connect should be scanned for viruses, and removable devices should be encrypted. Network access control (NAC) solutions are well suited to enforcing this.

It is important to focus in on the data, not the form factor of the device it resides on. Smartphones often contain sensitive information, yet less security protection is applied to them than to laptops that contain the same information. All mobile devices that can access sensitive data should require the same-length passwords and have the same access controls and protection software. 

Another big data leakage instrument is a smartphone with a camera that can take high-resolution photos and videos and record good-quality sound. It is very hard to protect your documents from insiders with these mobile devices or detect a person taking a photo of a monitor or whiteboard with sensitive data, but you should have a policy that disallows camera use in the building.


Implement change management and data access auditing.

Another security measure is to log all logins, data access and file server activities. Login activity has to be maintained for at least 2-3 years for security audits. Any account that exceeds the maximum number of failed login attempts should automatically be reported to the information security administrator for investigation. Being able to spot changes to sensitive information and associated permissions is critical. Using historical information to understand what data is sensitive, how it is being used, who is using it, and where it is going gives you the ability to build effective and accurate policies the first time and anticipate how changes in your environment might impact security. This process can also help you identify previously unknown risks.
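The failed-login rule above can be sketched in a few lines. The event format and the threshold value are illustrative assumptions; a real deployment would read these events from your log aggregation system.

```python
# Sketch: flag accounts that exceed a maximum number of failed logins,
# so they can be reported to the information security administrator.
# The event shape and threshold are illustrative assumptions.
from collections import Counter

MAX_FAILED_ATTEMPTS = 5

def accounts_to_investigate(events, threshold=MAX_FAILED_ATTEMPTS):
    """events: iterable of (account, outcome) pairs, outcome 'ok' or 'fail'.
    Returns accounts whose failed-login count exceeds the threshold."""
    failures = Counter(acct for acct, outcome in events if outcome == "fail")
    return sorted(acct for acct, n in failures.items() if n > threshold)
```

For example, an account with six failed attempts against a threshold of five would be returned for investigation, while an account with two would not.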


Use data encryption.

For desktop systems that store critical or proprietary information, encrypting the hard drives will help avoid the loss of critical information even if there is a breach and computers or hard drives go missing. On Windows, the Encrypting File System (EFS) provides file-level encryption, and BitLocker complements EFS by providing an additional layer of protection for data stored on Windows devices. BitLocker protects devices that are lost or stolen against data theft or exposure, and it offers secure data disposal when you decommission a device.


Back up your data.

Critical business assets should be duplicated to provide redundancy and serve as backups. At the most basic level, fault tolerance for a server means a data backup. Backups are simply the periodic archiving of the data so that if there is a server failure you can retrieve the data.
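The periodic archiving described above can be sketched with the standard library. The paths here are illustrative; real backups should also be copied off-site and periodically tested for restorability.

```python
# Sketch: a minimal backup that archives a directory under a
# timestamped name. Paths are illustrative assumptions.
import shutil
import time
from pathlib import Path

def backup_directory(src: str, dest_dir: str) -> str:
    """Create dest_dir/<srcname>-<timestamp>.zip and return its path."""
    Path(dest_dir).mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    base = str(Path(dest_dir) / f"{Path(src).name}-{stamp}")
    # make_archive appends the ".zip" extension itself
    return shutil.make_archive(base, "zip", root_dir=src)
```

Scheduling this with cron or Task Scheduler turns it into the periodic archiving the text describes; the timestamp in the name keeps multiple generations available for retrieval after a server failure.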



System security

The first step to securing your systems is making sure the operating system’s configuration is as secure as possible. Windows and Linux operating systems will each have their unique hardening configurations.


Apply a proper patch management strategy.

Ensuring that all versions of the applications in your IT environment are up to date is not an easy task, but it's essential for data protection. One of the best ways to ensure security is to make antivirus signature updates and system patch updates automatic. For critical infrastructure, patches need to be thoroughly tested to ensure that no functionality is affected and no vulnerabilities are introduced into the system. You need a patching strategy for both your operating systems and your applications.


Operating system patch management

There are three types of operating system patches, each with a different level of urgency. 


Hotfix — A hotfix is an immediate and urgent patch. In general, these represent serious security issues and are not optional; they must be applied to the system.

Patch — A patch provides some additional functionality or a non-urgent fix. These are sometimes optional.

Service pack — A service pack is the set of hotfixes and patches to date. These should always be applied, but test them first to be sure that no problems are caused by the update.

Application patch management

Just as you need to keep operating system patches current because they often fix security problems discovered within the OS, you need to do the same with application patches. Once an exploit in an application becomes known, an attacker can take advantage of it to enter or harm a system. Most vendors post patches on a regular basis, and you should routinely scan for any available ones. A large number of attacks today are targeted at client systems for the simple reason that clients do not always manage application patching well. Establish maintenance days where you will be testing and installing patches to all your critical applications.
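The routine scan for available patches can be sketched as a comparison of installed versions against the minimum patched version for each application. The inventory and version numbers below are illustrative assumptions, not real advisories.

```python
# Sketch: report applications whose installed version is below the
# minimum patched version. Inventory data is an illustrative assumption.

def parse_version(v: str):
    """Turn '1.25.3' into (1, 25, 3) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

def unpatched_apps(installed: dict, minimum_patched: dict):
    """Return app names whose installed version is below the patched one."""
    return sorted(
        app for app, version in installed.items()
        if app in minimum_patched
        and parse_version(version) < parse_version(minimum_patched[app])
    )
```

Running a check like this on your maintenance days gives you a concrete worklist of clients that have fallen behind on application patching.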



Windows is by far the most popular operating system used by consumers and businesses alike. Because of this, it is also the most targeted operating system, with new vulnerabilities announced almost weekly. There are a number of different Windows versions used throughout different organizations, so some of the configurations mentioned here may not translate to all of them. Here are some of the things you can do:

Ensure that all account passwords are rotated every six months, whether service accounts or user accounts.

Disable or restrict permissions on network shares.

Remove all services that are not required, especially telnet and ftp, which are clear-text protocols.

Enable logging for important system events.

Keep only the programs required to run the job; remove unwanted software and programs.




The Linux operating system has been widely used in many applications. Even though some claim that it’s more secure than Windows, some things still must be done to harden it correctly:

Disable unnecessary services and ports.

Restrict sudo access to trusted users only.

Disable unnecessary setuid and setgid programs.

Reconfigure user accounts for only the necessary users.
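Finding unnecessary setuid and setgid programs, as the hardening list above recommends, can be sketched as a filesystem walk that checks the permission bits. This is a minimal illustration; on a real system you would run it over directories such as /usr and review each hit before disabling it.

```python
# Sketch: walk a directory tree and report files with the setuid or
# setgid bit set, so unnecessary ones can be reviewed and disabled.
import os
import stat

def find_setuid_setgid(root: str):
    """Return paths under root whose mode has setuid or setgid set."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.stat(path).st_mode
            except OSError:
                continue  # unreadable or vanished file
            if mode & (stat.S_ISUID | stat.S_ISGID):
                hits.append(path)
    return sorted(hits)
```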


Web servers

Web servers are one of the favourite areas for attackers to exploit because of the reach they have. If an attacker can gain access to a popular web server and take advantage of a weakness there, they have the opportunity to reach thousands, if not hundreds of thousands, of users who access the site and their data. By targeting a web server, an attacker can affect all the connections from users’ web browsers and inflict harm far beyond the one machine they compromised.

Web servers were originally simple in design and used primarily to provide HTML text and graphics content. Modern web servers allow database access, chat functionality, streaming media and many other services; this diversity enables websites to provide rich and complex capabilities to visitors. Every service and capability supported on a website is potentially a target for exploitation. Make sure web servers are kept up to the most current software standards. You must also make certain that users have only the permissions necessary to accomplish their tasks. If users are accessing your server via an anonymous account, then common sense dictates that you must make certain that the anonymous account has the permissions needed to view web pages and nothing more.

Two particular areas of interest with web servers are filters and controlling access to executable scripts. Filters allow you to limit the traffic that is allowed through. Limiting traffic to only that which is required for your business can help ward off attacks. A good set of filters can also be applied to your network to prevent users from accessing sites other than those that are business related. Not only does this increase productivity, but it also reduces the likelihood of users obtaining a virus from a questionable site.

Executable scripts, such as those written in PHP, Python, various flavors of Java and Common Gateway Interface (CGI) scripts, often run at elevated permission levels. Under most circumstances, this isn’t a problem because the user is returned to their regular permission level at the conclusion of the execution. Problems arise, however, if the user can break out of the script while at the elevated level. From an administrator’s standpoint, the best course of action is to verify that all scripts on your server have been thoroughly tested, debugged and approved for use.


Email: Email is most vulnerable to attack through attachments and phishing. Have proper virus protection and scanning for phishing mails, and roll out training for users on how to handle phishing emails.


FTP file sharing with external customers: Use SFTP instead of plain File Transfer Protocol (FTP). Always use a DMZ and virtual private network (VPN) or Secure Shell (SSH) connections for FTP-type activities. Many FTP systems send account and password information across the network unencrypted, and FTP is one of the tools frequently used to exploit systems.

From an operational security perspective, you should use separate logon accounts and passwords for FTP access. Doing so will prevent system accounts from being disclosed to unauthorized individuals. Also, make sure that all files are transferred with encryption.

You should always disable the anonymous user account. To make FTP use easier, most servers default to allowing anonymous access. However, from a security perspective, the last thing you want is to allow anonymous users to copy files to and from your servers. Disabling anonymous access requires the user to be a known, authenticated user in order to access the FTP server.



IP Data:

Identify and classify sensitive data.

To protect data effectively, you need to know exactly what types of data you have. Data discovery technology will scan your data repositories and report on the findings. Then you can organize the data into categories using a data classification process.

Using data discovery and classification technology helps you control user access to critical data and avoid storing it in unsecure locations, thus reducing the risk of improper data exposure and data loss. All critical or sensitive data should be clearly labeled with a digital signature that denotes its classification, so you can protect it in accordance with its value to the organization.
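The data discovery step described above can be sketched as a pattern scan over text. The regular expressions and category names here are simplified illustrations for the sketch, not a complete data discovery tool; commercial tools use far more robust detection.

```python
# Sketch: classify text by scanning for sensitive-data patterns.
# The regexes and category names are simplified illustrations.
import re

PATTERNS = {
    "payment-card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us-ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email-address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify(text: str):
    """Return the set of sensitive-data categories found in text."""
    return {name for name, rx in PATTERNS.items() if rx.search(text)}
```

Running a scanner like this over a file share tells you which repositories hold which categories, so you can label and protect each one in accordance with its value to the organization.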

As data is created, modified, stored or transmitted, the classification can be updated. However, controls should be in place to prevent users from falsifying the classification level. For example, only privileged users should be able to downgrade the classification of data.


Follow these guidelines to create a strong data classification policy. And don’t forget to perform data discovery and classification as part of your IT risk assessment process.


Create a data usage policy: Of course, data classification alone is not sufficient; you need to create a policy that specifies access types, conditions for data access based on classification, who has access to data, what constitutes correct usage of data, and so on. Don't forget that all policy violations should have clear consequences.


Control access to sensitive data: You also need to apply appropriate access controls to your data. Access controls should restrict access to information based on the principle of least privilege: users should be granted only those privileges that are essential to perform their intended function. This helps to ensure that only appropriate personnel can access data. Access controls can be physical, technical or administrative, as explained below.


Administrative controls: Administrative access controls are procedures and policies that all employees must follow. A security policy can list actions that are deemed acceptable, the level of risk the company is willing to undertake, the penalties in case of a violation, and so on. The policy is normally compiled by an expert who understands the business's objectives and applicable compliance regulations.

Supervisory structure is an important part of administrative controls. Almost all organizations make managers responsible for the activities of their staff; if an employee violates an administrative control, the supervisor will be held accountable as well.


Personnel education and awareness: Training should be provided to make users aware of the company's data usage policies and emphasize that the company takes security seriously and will actively enforce the policy. In addition, users should be periodically re-educated and tested to reinforce and validate their comprehension. Security measures are in place to limit what users can do, but those tools aren't perfect. If users open every attachment in every email, chances are high that some zero-day attack or other exploit not listed in your antivirus database will compromise a machine. Therefore, users need to be educated about their responsibilities and best practices for proper computer usage.


Employee termination procedure : Ensuring that each departing employee retains no access to your IT infrastructure is critical to protecting your systems and data. You need to work with HR to develop an effective user termination procedure that protects your organization legally and technologically from former employees. Follow these user termination best practices in order to achieve that goal.


Technical controls: In most cases, users should not be allowed to copy or store sensitive data locally. Instead, they should be forced to manipulate the data remotely. The cache of both systems, the client and server, should be thoroughly cleaned after a user logs off or a session times out, or else encrypted RAM drives should be used. Sensitive data should ideally never be stored on a portable system of any kind. All systems should require a login of some kind, and should have conditions set to lock the system if questionable usage occurs.


Permissions: User permissions should be granted in strict accordance with the principle of least privilege. In Microsoft operating systems, the basic NTFS file permissions are Full Control, Modify, Read & Execute, Read, and Write.


Access control lists : An access control list (ACL) is a list of who can access what resource and at what level. It can be an internal part of an operating system or application. For example, a custom application might have an ACL that lists which users have what permissions in that system. 

ACLs can be based on whitelists or blacklists. A whitelist is a list of items that are allowed, such as a list of websites that users are allowed to visit using company computers, or a list of third-party software that is authorized to be installed on company computers. Blacklists are lists of things that are prohibited, such as specific websites that employees are not permitted to visit or software that is forbidden to be installed on client computers.
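A whitelist-based ACL as described above can be sketched as a mapping from each resource to the accounts allowed at each access level. The users, resources and levels below are illustrative assumptions.

```python
# Sketch: a whitelist-based ACL. Users, resources and levels are
# illustrative assumptions; anything not explicitly granted is denied.

ACL = {
    "payroll.xlsx": {"read": {"alice", "hr_team"}, "write": {"alice"}},
    "handbook.pdf": {"read": {"alice", "bob", "hr_team"}, "write": {"hr_team"}},
}

def is_allowed(user: str, resource: str, level: str) -> bool:
    """Whitelist semantics: access is denied unless explicitly granted."""
    return user in ACL.get(resource, {}).get(level, set())
```

Note how an unknown resource or level simply falls through to an empty set and is denied; this fail-closed behaviour is what distinguishes a whitelist from a blacklist.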

In the file management process, whitelist ACLs are used more commonly, and they are configured at the file system level. For example, in Microsoft Windows you can configure NTFS permissions and build NTFS access control lists from them. You can find more information in published NTFS permissions management best practices. Remember that access controls should be implemented in every application that has role-based access control (RBAC); examples include Active Directory groups and delegation.


Security devices and methods

Certain devices and systems help you further restrict access to data. Here is the list of the most commonly implemented ones: 

Data loss prevention (DLP) — These systems monitor the workstations, servers and networks to make sure that sensitive data is not deleted, removed, moved or copied. They also monitor who is using and transmitting the data to spot unauthorized use.

Firewall — A firewall is one of the first lines of defense in a network because it isolates one network from another. Firewalls can be standalone systems or they can be included in other infrastructure devices, such as routers or servers. You can find both hardware and software firewall solutions; some firewalls are available as appliances that serve as the primary device separating two networks. Firewalls exclude undesirable traffic from entering the organization’s network, which helps prevent data leakage to third-party rogue servers by malware or hackers. Depending on the organization’s firewall policy, the firewall might completely disallow some traffic or all traffic, or it might perform a verification on some or all of the traffic.

Network access control (NAC) — This involves restricting the availability of network resources to endpoint devices that comply with your security policy. Some NAC solutions can automatically fix a non-compliant node to ensure it is secure before access is allowed. NAC is most useful when the user environment is fairly static and can be rigidly controlled, such as enterprises and government agencies. It can be less practical in settings with a diverse set of users and devices that are frequently changing. NAC can also prevent unauthorized devices from accessing your data directly over your network.

Proxy server — These devices act as negotiators for requests from client software seeking resources from other servers. A client connects to the proxy server, requesting some service (for example, a website); the proxy server evaluates the request and then allows or denies it. In organizations, proxy servers are usually used for traffic filtering and performance improvement. Proxy devices can also restrict access to your sensitive data from the Internet.
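The firewall behaviour described in the list above, excluding undesirable traffic while admitting what the policy allows, can be sketched as first-match rule evaluation with a default-deny fallback. The rule set is an illustrative assumption.

```python
# Sketch: first-match packet filtering with a default-deny policy.
# The rule set below is an illustrative assumption.

# Each rule: (source prefix, destination port, action). "*" matches any source.
RULES = [
    ("10.0.", 443, "allow"),      # internal clients to HTTPS
    ("10.0.", 25, "deny"),        # block outbound SMTP from clients
    ("*", 80, "allow"),           # anyone to the public web server
]

def filter_packet(src_ip: str, dst_port: int) -> str:
    """Return the action of the first matching rule, else 'deny'."""
    for prefix, port, action in RULES:
        if (prefix == "*" or src_ip.startswith(prefix)) and port == dst_port:
            return action
    return "deny"  # default deny: unlisted traffic never enters the network
```

Real firewalls match on far more fields (protocol, state, interface), but the first-match-wins ordering and the default-deny fallback are the same design choices that govern production rule sets.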


Protect your data from insider threats.

Organizations continue to spend an exceptional amount of time and money to secure the network at the perimeter from external attacks; however, insider threats are becoming a key cause of data exposure. Many surveys say insider incidents account for more than 60 percent of all attacks; however, many organizations don’t report insider attacks out of fear of business loss and damage to their reputation. 

Insider threats come in two forms. An authorized insider threat is someone who misuses their rights and privileges, whether accidentally, deliberately or because their credentials were stolen. An unauthorized insider is someone who has connected to the network behind the perimeter defences. This could be someone who plugged into a jack in the lobby or a conference room, or someone who is using an unprotected wireless network connected to the internal network. Insider attacks can lead to data loss or downtime, so it's as important to monitor activity in your network as activity at the perimeter.


Insiders using remote access

Remote access to corporate networks is also becoming commonplace. Users are working from home at an increasing rate, so it’s critical to secure the connections used for remote access. Strong authentication is essential when connecting remotely. It is also important that the machines users are employing for remote access to the network are also secured properly. In addition, remote sessions should be properly logged or even video recorded.


Use endpoint security systems to protect your data.

The endpoints of your network are under attack constantly, so having the endpoint security infrastructure in place to deal with them is crucial to preventing data breaches. Unauthorized programs and advanced malware (such as rootkits) are some of the things to consider in your endpoint security strategy. With the increased usage of mobile devices, the endpoints of the network are expanding and becoming more and more undefined. Automated tools that reside on the endpoint system are essential to mitigating the effectiveness of malware. At a minimum, you should use the following technologies:


Antivirus software : Antivirus software should be installed and kept current on all servers and workstations. In addition to active monitoring of incoming files, scans should be conducted regularly to catch any infections that have slipped through, such as ransomware. 


Antispyware : Anti-spyware and anti-adware tools are designed to remove or block spyware. Spyware is computer software installed without the user’s knowledge. Usually its goal is to find out more information about the user’s behaviour and to collect personal information. Anti-spyware tools work much like antivirus tools; many of their functions overlap. Some antispyware software is combined with antivirus packages, whereas other programs are available as standalones. Regardless of the type you use, you must regularly look for spyware, often identified by the presence of tracking cookies on hosts, and remove any that gets installed.


Pop-up blockers  : Pop-ups are not just irritating; they are a security threat. Pop-ups (including pop-unders) represent unwanted programs running on the system, so they can jeopardize the system’s well-being. 


Host-based firewalls : Personal firewalls are software-based firewalls installed on each computer in the network. They work in much the same way as larger border firewalls — they filter out certain packets to prevent them from leaving or reaching your system. The need for personal firewalls is often questioned, especially in corporate networks that have large dedicated firewalls that keep potentially harmful traffic from reaching internal computers. However, that firewall can’t do anything to prevent internal attacks, which are quite common and often very different from the ones from the internet; attacks that originate within a private network are usually carried out by viruses. So, instead of disabling personal firewalls, simply configure a standard personal firewall according to your organization’s needs and export those settings to the other personal firewalls.


Host-based IDSs: Intrusion detection systems (IDSs) are also available for individual hosts. Host IDSs monitor only the internals of a computing system. A host-based IDS looks at the system state and checks whether its contents are as expected. Most host-based IDSs use integrity verification, which works on the principle that most malware will try to modify host programs or files as it spreads. Integrity verification tries to determine which system files have been unexpectedly modified. It does this by computing fingerprints, in the form of cryptographic hashes, of the files that need to be monitored while the system is in a known clean state. Later scans issue an alert when the fingerprint of a monitored file changes. The main problem with integrity verification is that it detects the infection after the fact and cannot prevent it.
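The integrity verification scheme described above can be sketched with standard-library hashing: fingerprint the monitored files in a clean state, then alert on any file whose fingerprint later differs.

```python
# Sketch: integrity verification for a host-based IDS using SHA-256
# fingerprints taken while the system is in a known clean state.
import hashlib
import os

def fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def baseline(paths):
    """Record fingerprints of monitored files in a clean state."""
    return {p: fingerprint(p) for p in paths}

def scan(baseline_db):
    """Return files modified or removed since the baseline was taken."""
    alerts = []
    for path, expected in baseline_db.items():
        if not os.path.exists(path) or fingerprint(path) != expected:
            alerts.append(path)
    return sorted(alerts)
```

As the text notes, this only detects tampering after the fact; the baseline itself must also be stored where an attacker cannot rewrite it.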


Perform vulnerability assessments and cybersecurity penetration tests.

Vulnerability assessments usually consist of port scanners and vulnerability scanning tools. These tools scan the environment from an external machine, looking for open ports and the version numbers of those services. The results from the test can be cross-referenced with known services and patch levels that are supposed to be on the endpoint systems, allowing the administrator to make sure that the systems are adhering to the endpoint security policies.
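The port-scanning side of such an assessment can be sketched as a simple TCP connect scan. This is a minimal illustration, not a replacement for a full vulnerability scanner, and it should only be run against systems you are authorized to test.

```python
# Sketch: a minimal TCP connect scan against a single host.
# Only scan systems you are authorized to test.
import socket

def scan_ports(host: str, ports, timeout=0.5):
    """Return the subset of ports that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on a successful connection
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Full scanners then fingerprint the service version behind each open port so results can be cross-referenced against expected services and patch levels, as described above.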


Penetration testing is the practice of testing a computer system, network or web application to find security vulnerabilities that an attacker could exploit. Penetration testing can be automated with software applications or performed manually. The main objective of penetration testing is to identify security weaknesses. Penetration testing can also be used to test an organization's security policy, its adherence to compliance requirements, its employees' security awareness, and the organization's ability to provide security incident response and identification. Organizations should perform pen testing regularly — ideally, once a year — to ensure more consistent network security and IT management. Here are several of the main pen test strategies used by security professionals:


Targeted testing is performed by the organization's IT team and the penetration testing team working together. It's sometimes referred to as a "lights turned on" approach because everyone can see the test being carried out.

External testing targets a company's externally visible servers or devices, including domain servers, email servers, web servers or firewalls. The objective is to find out if an outside attacker can get in and how far they can go once they've gained access.

Internal testing performs an inside attack behind the firewall by an authorized user with standard access privileges. This kind of test is useful for estimating how much damage a regular employee could cause.

Blind testing simulates the actions and procedures of a real attacker by severely limiting the information given to the person or team performing the test. Typically, the pen testers are given only the name of the company.

Double-blind testing takes the blind test and carries it a step further — only one or two people within the organization might be aware a test is being conducted.

Black box testing is basically the same as blind testing, but the tester receives no information before the test takes place. Rather, the pen testers must find their own way into the system.

White box (Crystal box) testing provides the penetration testers with information about the target network before they start their work. This information can include IP addresses, network infrastructure schematics, the protocols being used and so on.



As you have seen, data protection encompasses many topics and areas, spanning both internal and external threats. It's critical for good consultants, network administrators and security professionals to keep all their security tools up to date and to use good policy management. With so many policies to enforce and applications to keep up to date, this can seem like a daunting challenge for any security team.

Another challenge with data protection is auditing and maintaining the same security posture year after year. Because this is a continuous process, it is often best to engage the right technical consultant or company to take care of valuable company data.



