Recently I have read quite a bit about “insider threats” and the potential for losing customer data. But not everyone is bad, so how can I tell if I need to pay attention to certain employees?
I like to call this the “tale of two insiders” – the insider threat story is not always as simple as it seems. While it is true that many organizations face the risk of a trusted insider or outsider taking sensitive information, just as many face risk from well-meaning employees who are trying to get their work done. Each of these poses a different threat to the organization.
First, let’s consider the malicious insider. This is typically someone with administrator rights, perhaps a database administrator, a Windows system administrator, or maybe a sales operations administrator. In each case, we have a trusted employee with normal access rights to confidential data, such as customer credit card or social security numbers, sensitive files, or revenue data. What happens if this employee decides to leave and joins a competitor, or simply tries to trade this information for cash? A recent study reported that 88 per cent of system administrators said they’d take confidential data with them if they knew a layoff was coming. In the current economic environment, the malicious insider can’t be taken lightly.
A common twist on the malicious insider is the “trusted outsider” – someone who is not an employee but still has access to internal systems. In the age of outsourcing, extranets, and connected supply chains, the trusted outsider is now the norm. Like the administrator above, a trusted outsider can cause considerable damage if he or she decides to take internal data for profit. In both cases, monitoring user activities and correlating those actions can spot potential data theft early.
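To make the correlation idea concrete, here is a minimal sketch in Python. The event names, users, and thresholds are all hypothetical; the point is simply that several distinct risky actions by one user inside a short window are far more telling than any single event:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical event stream: (timestamp, user, action).
events = [
    (datetime(2024, 3, 1, 22, 10), "dba01", "bulk_db_export"),
    (datetime(2024, 3, 1, 22, 25), "dba01", "usb_copy"),
    (datetime(2024, 3, 1, 22, 40), "dba01", "offhours_login"),
    (datetime(2024, 3, 1, 14, 5),  "alice", "usb_copy"),
]

def correlate(events, window=timedelta(hours=2), threshold=3):
    """Flag users who perform several *distinct* risky actions
    within a short time window."""
    by_user = defaultdict(list)
    for ts, user, action in events:
        by_user[user].append((ts, action))
    flagged = []
    for user, acts in by_user.items():
        acts.sort()
        for ts, _ in acts:
            # Distinct actions occurring within `window` of this event.
            in_window = {a for t, a in acts if ts <= t <= ts + window}
            if len(in_window) >= threshold:
                flagged.append(user)
                break
    return flagged

print(correlate(events))  # ['dba01']
```

The single USB copy by “alice” is ignored, while the database export, USB copy, and off-hours login by “dba01” – harmless in isolation – combine into an alert.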
However, for every malicious insider or outsider, there are dozens to hundreds of employees who are simply trying to get their work done. In the process, they commit all sorts of unwitting policy violations, such as mailing a confidential document to their webmail account so they can work at home, printing out a screenful of customer records to write notes on during lunch, or copying some product plans to a USB key to give to a teammate. In each case, there is potential for data loss and risk of penalty. How often, in a large organization, do employees do these things? In many cases, user activity monitoring may flood a security administration team with false positives. Two years ago a potential customer was evaluating a DLP solution and estimated that, given average network traffic and the product’s false positive rate, the security team would need to process two million warning incidents per year!
So, what’s the difference between a thief, a careless worker, and a harmless warning? Context. When you put together multiple pieces of information about user activities, you get context about those activities. As a result, you can filter out potential problems from the normal chatter of work activity.
Context can be simple, such as understanding a worker’s role and department, as well as the normal access rights for someone in that role. Context can also be more complex, such as understanding historical patterns for each employee (does this person normally badge into the engineering building at night?) or for groups of employees. For example, what sorts of activities did the last fifty people who resigned perform that current employees do not?
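Both kinds of context can be expressed as simple checks. The sketch below – with an invented role policy and an invented per-user baseline of normal working hours – treats an access as anomalous when it falls outside either the role’s normal resources or the user’s historical pattern:

```python
# Hypothetical role policy: which resources each role normally touches.
ROLE_ACCESS = {
    "dba":       {"customer_db", "billing_db"},
    "sales_ops": {"crm", "revenue_reports"},
}

# Hypothetical historical baseline: hours each user normally works.
BASELINE_HOURS = {
    "dba01": range(8, 19),  # normally in the office 08:00-18:59
}

def is_anomalous(user, role, resource, hour):
    """An access is anomalous if it is outside the role's normal
    resources, or outside the user's historical working hours."""
    out_of_role = resource not in ROLE_ACCESS.get(role, set())
    out_of_hours = hour not in BASELINE_HOURS.get(user, range(0, 24))
    return out_of_role or out_of_hours

is_anomalous("dba01", "dba", "customer_db", 14)      # False: routine work
is_anomalous("dba01", "dba", "revenue_reports", 14)  # True: wrong resource
is_anomalous("dba01", "dba", "customer_db", 23)      # True: off-hours access
```

The same DBA reading the customer database at 2pm is routine; reading it at 11pm, or reaching into revenue data, stands out.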
In short, understanding insider threats means both detecting activities and rating them according to potential damage. Detection comes from good collection and correlation; rating comes from advanced analysis, based on both historical patterns and current conditions. Applying all of these techniques together will improve security without overwhelming the security staff.
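The detect-then-rate split can be sketched as a simple scoring function. The severity weights and condition adjustments below are illustrative assumptions, not calibrated values; the idea is that the same detected actions score differently depending on current conditions, such as a pending resignation:

```python
# Hypothetical severity weights for detected actions.
SEVERITY = {
    "print_records":  1,
    "webmail_attach": 2,
    "usb_copy":       2,
    "bulk_db_export": 5,
}

def risk_score(actions, resigned=False, off_hours=False):
    """Rate a set of detected actions by potential damage,
    adjusted for current conditions."""
    score = sum(SEVERITY.get(a, 0) for a in actions)
    if resigned:
        score *= 2  # departing employees carry higher risk
    if off_hours:
        score += 3  # unusual timing adds weight
    return score

# A careless worker emailing a file home rates low...
risk_score(["webmail_attach"])  # 2
# ...while a departing admin exporting the database at night rates high.
risk_score(["bulk_db_export", "usb_copy"], resigned=True, off_hours=True)  # 17
```

A triage queue sorted by such a score lets the team investigate the handful of high scores first, rather than wading through millions of undifferentiated warnings.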