Monday, September 24, 2007

Vulnerability Assessment and Intrusion Detection Systems

Is this take "infinity"? Anyways, here are my views on the subject...

Information Technology (IT) has permeated the core functions of almost every business today[1]. Technology has enabled the automation of most of our business processes, allowing us to conduct business at a much faster pace and with greater reliability.

However, along with its benefits, IT has brought its share of complexity and security concerns. Data volumes have grown exponentially, and systems and applications continue to proliferate daily. This larger application footprint means there are more applications that could be vulnerable to intrusion and unauthorized access.

The early days of IT security focused primarily on perimeter security and authentication controls. Firewalls provided the perimeter security, while network-wide authentication was handled by solutions such as NIS+ (Network Information Service), LDAP (Lightweight Directory Access Protocol) and Microsoft's Active Directory; network devices rely on RADIUS (Remote Authentication Dial-In User Service) and TACACS+ (Terminal Access Controller Access-Control System). However, firewalls and authentication servers do not provide the necessary protection against application vulnerabilities. This is primarily because applications need access (outbound or inbound) in order to function, and are granted that access through the firewalls and proxy servers. This authorized route is then exploited to take advantage of any vulnerability the application may contain.

Vulnerability assessment systems (VAS) scan systems, applications and networks for any vulnerabilities that may be present. The findings then need to be analyzed for cause and effect, and any additional protections required put in place.

Intrusion detection systems (IDS) constantly watch systems and networks for any functional anomaly or activity that looks like an intrusion attempt. They are configured to react according to the nature and severity of the detected event.

The systems in detail

Vulnerability Assessment Systems

System and network vulnerabilities can be classified into three broad categories[2]:

  • Software vulnerabilities – bugs, missing patches and insecure default configurations.
  • Administration vulnerabilities – insecure administrative privileges, improper use of administrative options, or insecure passwords being allowed.
  • Risks associated with user activity – policy avoidance by users (e.g. bypassing virus scans), installation of unapproved software, and sharing access with others.

Vulnerability scanners are used to scan systems, applications and networks to identify vulnerabilities that cause these risks.

Vulnerability assessment systems come in two flavors – network-based and host-based. Network-based scanners scan the entire network to provide an overall view of the most critical vulnerabilities present on the network. They are able to quickly identify perimeter vulnerabilities and insecure locations that could provide easy access to an intruder. These include unauthorized telephone modems on systems, insecure system services and accounts, vulnerable network services (SNMP[3] and DNS[4] are two examples), network devices (e.g.: routers) configured with default passwords and insecure configurations (e.g.: a default allow rule for all traffic on a firewall).
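
As a concrete illustration, here is a minimal Python sketch of the network-facing part of such a scanner: it tries to connect to a handful of well-known TCP ports on a target and grabs any service banner on offer, so the reported versions could later be matched against a vulnerability database. The target address, the port list and the "flag for review" table are illustrative assumptions rather than output from any real product, and UDP services such as SNMP would need a separate probe.

import socket

# Ports whose mere presence is worth flagging for review in this sketch.
RISKY_TCP_PORTS = {23: "telnet", 513: "rlogin", 514: "rsh"}

def probe(host, port, timeout=2.0):
    # Return the service banner (possibly empty) if host:port accepts a TCP
    # connection, or None if the port is closed or filtered.
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            try:
                return s.recv(256).decode(errors="replace").strip()
            except socket.timeout:
                return ""  # open, but the service sent no banner
    except OSError:
        return None

def scan(host, ports):
    findings = []
    for port in ports:
        banner = probe(host, port)
        if banner is None:
            continue
        note = RISKY_TCP_PORTS.get(port, "open")
        findings.append((port, note, banner))
    return findings

if __name__ == "__main__":
    # 192.0.2.10 is a documentation address used here as a stand-in target.
    for port, note, banner in scan("192.0.2.10", [21, 22, 23, 25, 80]):
        print(port, note, banner[:40])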

One issue faced frequently by anyone using a network vulnerability scanner is that a scan can cause network interruptions, service disruption and even server outages. This happens because the scanner, in the process of scanning for vulnerabilities, may actually exploit existing ones and generate Denial-of-Service (DoS[5]) conditions on networks and systems. To mitigate this risk, scans are often scheduled for times when the business faces minimal interruption from such scenarios. However, this also raises the possibility of missing critical vulnerabilities, since some services, applications and servers may not be available on the network when they are not in use, hiding whatever vulnerabilities they contain.

A clear advantage that network-based scanners have is that they are independent of the hosts and devices in use. They use their own resources for operation and do not need to be installed on hosts or network devices in order to complete their function. However, this also means that they cannot perform deep scans of individual systems since they can only scan those services and applications that are available and can be probed from the network.

This is where host-based scanners excel. Installed directly on the host, they can scan it in depth to identify all possible vulnerabilities.

Host-based vulnerability scanners are more granular in both their scanning and their results. Because they run on the host itself, they can probe deep into it, searching for vulnerabilities that would otherwise be invisible or difficult to identify from the network, and can examine applications, the host operating system and system processes for possible weaknesses.
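
A minimal sketch of two typical host-based checks appears below: comparing locally installed package versions against a hypothetical table of versions with known vulnerabilities, and flagging world-writable files under a sensitive directory. The package table, the example versions and the directory are assumptions made purely for illustration; a real scanner relies on a maintained vulnerability database and queries the platform's package manager directly.

import os
import stat

# Hypothetical "package -> versions with known vulnerabilities" table; a real
# scanner ships a signed, regularly updated database instead.
KNOWN_VULNERABLE = {
    "openssh": {"4.2", "4.3"},
    "bind": {"9.2.1"},
}

def vulnerable_packages(installed):
    # installed: dict mapping package name -> installed version string.
    return [(name, version) for name, version in installed.items()
            if version in KNOWN_VULNERABLE.get(name, set())]

def world_writable(directory):
    # Yield files under `directory` that any local user is allowed to modify.
    for root, _dirs, files in os.walk(directory):
        for name in files:
            path = os.path.join(root, name)
            try:
                mode = os.stat(path).st_mode
            except OSError:
                continue  # unreadable or vanished mid-walk
            if mode & stat.S_IWOTH:
                yield path

if __name__ == "__main__":
    # Stand-in for a real query of the local package manager.
    installed = {"openssh": "4.3", "bind": "9.3.2"}
    for name, version in vulnerable_packages(installed):
        print("vulnerable package:", name, version)
    for path in world_writable("/etc"):
        print("world-writable file:", path)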

However, by the very nature of their function, they are intrusive and can upset the functional balance of a server. They are powerful tools that, if misused, can cause unforeseen problems on servers and networks. Since they are designed to probe for vulnerabilities, any misuse can lead to a serious compromise of an organization's digital assets.

Intrusion Detection Systems (IDS)

Intrusion detection systems complement the function of vulnerability assessment systems. While VASs probe for vulnerabilities, IDSs watch network and system activity, inbound network data streams, and anomalous behavior. They are designed to identify behavior that does not conform to a pre-defined notion of 'normal' activity. On detecting signs of abnormal activity, they can trigger alerts or even evasive and preventive measures to halt or slow down the suspected attack while the relevant personnel investigate and either clear or escalate the alert.

IDSs come in two types: traditional signature-based systems, which identify intrusions by searching data streams for patterns that match signatures in a pre-built attack database, and anomaly-detecting systems, which watch networks and systems continuously to build a baseline of normal behavior and then compare ongoing activity against that baseline to detect possible intrusions. Newer commercial IDSs tend to be hybrids, using both methods to improve their chances of positively detecting an intrusion while reducing their rate of false positives.
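
As a toy illustration of how the two detection styles combine in a hybrid system, here is a short Python sketch that runs a signature check against a made-up pattern list and a crude anomaly check against an assumed traffic baseline. Real products use far larger signature databases and proper statistical models rather than a single rate threshold.

import re

# Hypothetical signature database: byte patterns associated with known attacks.
SIGNATURES = [
    re.compile(rb"\.\./\.\./"),          # path traversal attempt
    re.compile(rb"(?i)union\s+select"),  # SQL injection attempt
]

BASELINE_REQS_PER_MIN = 120   # assumed "normal" rate learned during a baseline period
ANOMALY_FACTOR = 3            # alert once the rate exceeds three times the baseline

def signature_match(payload):
    return any(sig.search(payload) for sig in SIGNATURES)

def anomalous_rate(requests_this_minute):
    return requests_this_minute > BASELINE_REQS_PER_MIN * ANOMALY_FACTOR

def inspect(payload, requests_this_minute):
    # Combine both checks the way a hybrid IDS would; return the alerts raised.
    alerts = []
    if signature_match(payload):
        alerts.append("signature match")
    if anomalous_rate(requests_this_minute):
        alerts.append("traffic anomaly")
    return alerts

if __name__ == "__main__":
    print(inspect(b"GET /../../etc/passwd HTTP/1.0", 95))   # ['signature match']
    print(inspect(b"GET /index.html HTTP/1.0", 700))        # ['traffic anomaly']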

Like VASs, IDSs come in two distinct types based on their deployment method. Network-based IDSs (sometimes referred to as NIDS) are stand-alone devices that sit on the network, normally at the points of ingress and egress, acting as the network's watchdogs. Host-based IDSs (referred to as HIDS) are more intrusive: installed on individual hosts, they watch all host activity closely from that vantage point.

Host-based IDSs and VASs are mostly limited in scope to the host they are installed on, but can perform a deep inspection of that host. Network-based IDSs and VASs can scan large networks and vast numbers of networked hosts and devices, but cannot reach into the inner workings of individual devices; their reach is limited to what is visible from the network.


[1] Lucas, Henry C., Jr. and Baroudi, Jack, "The Role of Information Technology in Organization Design", http://hdl.handle.net/2451/14315

[2] ISS (Internet Security Systems) whitepaper on vulnerability scanners, http://documents.iss.net/whitepapers/nva.pdf

Thursday, September 20, 2007

Living in a world of 'spin'

Today, 'spin' seems to direct more public thinking than ever before. Marketing and spin (check out this article - http://www.onlineopinion.com.au/view.asp?article=3752 ) are used more than ever to direct thinking in specific patterns, and to direct public action. Is this the reason children are being taught less self-reliance in schools now? Big brother - is it slowly becoming reality?

Browsing the web, I chanced across an article that said Microsoft puts out patches quicker than any other OS vendor, and that MS users hence face much less risk - they even measured it using a new metric they called "days at risk". This set me thinking. Sure, MS may release patches faster - but:

Do all users always patch everything the moment the patch is released?
Why has no one compared the number of times each OS vendor patches their patches?
For each vendor, what product needs the patch each time?
Which vendor has a much-used popular product that needs patches?
Does anyone have any measured statistics on the relation between the software needing patches and its popularity, use and misuse?

If these numbers are available, how can we as the 'hopeful' guardians of cyber-integrity link these numbers to such articles that present a lop-sided view of the situation? Growing up, I learnt about 'lies, damned lies and statistics'.

Some interesting links on thinking about lies, statistics and lawyers - when will marketing and sales be added to this roll?

http://www.experts.com/showArticle.asp?id=153
http://www.rgj.com/blogs/inside-nevada-politics/2006/09/tarkanian-refutes-lies.html
http://weblog.leidenuniv.nl/users/gillrd/2007/06/lies_damned_lies_and_legal_truths_1.php


So who will teach the world common sense again?

Wednesday, September 5, 2007

Engineering failures - or 20/20 hindsight?

A few minutes ago I read an email saying Palm is withdrawing the Foleo platform at the twelfth hour - http://blog.palm.com/palm/2007/09/a-message-to-pa.html.

Yesterday, I heard a VOIP BlueBox podcast (http://www.blueboxpodcast.com/) arguing the relative merits of the SIP protocol from an engineering and design perspective – and the fact that security considerations seem to have been added much later, after the protocol design was essentially complete and the first set of users were already using it in the public space.


A few weeks ago, newspapers and media were wringing their hands over the ghastly bridge collapse in Minnesota (http://en.wikipedia.org/wiki/I-35W_Mississippi_River_bridge). Every media report was quick to focus on the seeming 'design failures', and money was quickly sanctioned across the country to 'inspect' the rest of the existing bridges.


Two years ago, after hurricanes Katrina and Rita wreaked havoc, more studies focused on the engineering failures there.


But is all of this truly engineering failure? Are we looking at the original specifications for the designs in consideration? TCP/IP and the associated network protocols worked perfectly for their original design – fault-tolerant robust network connectivity to share information between peer universities. The security problem surfaced after commercial interests worked to expand the original network into the Internet of today, without adapting the original protocol for their proposed use and/or testing it for the proposed set of uses.


The same is true for the bridge collapse and the hurricane stories – the engineers did their work and highlighted the limits of their designs. However, other interests kicked in, signed off on unknown risks without complete information, and the result is the slurry pools we see today :) So I ask myself the question – should we be blaming the engineers for poor design?


Then again, testing does not necessarily highlight all issues, as the Skype issue (http://blog.tmcnet.com/blog/tom-keating/skype/skype-offline-latest-update.) proves. The protocol seemed to work fine – till it reached the perfect tipping point: software updates, a P2P mesh that was never tested at this volume (I too would love a lab that could test 20 million simultaneous online users and help me prepare for all eventualities – but is that a fair thing to ask any organization to set up commercially?), and a network with unprecedented global usage. So who do we blame this on? Skype - (who else?) for providing a service that costs - for basic usage - nothing at all except the cost of an internet connection :)


The current environment seems to focus on finding someone to blame for every failure - irrespective of the validity of the failure and the invalidity of the use that caused it. Engineers need to step up to the plate in their own defense. They need to shake off their reticence about public speaking and document their engineered specifications better. And others need to match their usage patterns to the engineered specifications - or ask for engineering modifications to fit newly proposed usage. In the absence of this rigor, watch out for more failures in similar patterns! 20/20 hindsight is always correct - how about moving that correctness to before the failure, rather than armchair pontification?