

The Open Source Security Testing Methodology Manual


current version: osstmm.2.0.draft.en.
notes: This version is a draft and all items in RED have not yet met with peer-review. All non-red sections are from the 1.5 standard and have met with peer review. This is not a complete version and is missing much. The most recent complete version is 1.5 and can be downloaded in pdf, postscript, and rtf formats.
date of current version: Sunday, July 01, 2001
date of original version: Monday, December 18, 2000
created by: Pete Herzog
key supporters: Marta Barceló
Clement Dupuis
Russ Spooner
key contributors: Don Bailey
Michael S. Hines
Miguel Angel Dominguez Torres
Angel Luis Urunuela
Peter Klee
Rich Jankowski
Felix Schallock
Vincent IP
edited by: Drew Simonis
Emily K. Hawthorn
Jordi Martinez i Barrachina

Copyright 2000 and 2001, Peter Vincent Herzog, All Rights Reserved, available for free dissemination under the GNU Public License. Any information contained within this document may not be modified or sold without the express consent of the author.

Index
to be completed


Foreword
This manual sets forth a standard for Internet security testing. Disregarding the credentials of many a security tester and focusing instead on the how, it presents a solution to a problem that currently exists. Regardless of firm size, finance capital, and vendor backing, any network or security expert who meets the requirements outlined in this manual is said to have completed a successful security snapshot. This is not to say one cannot perform a test faster, more in depth, or of a different flavor; rather, the tester following the methodology herein is said to have followed the standard model and therefore, if nothing else, has been thorough.

I say security snapshot above because I believe an Internet security test is no more than a view of a system at a single moment in time. At that moment, the known vulnerabilities, the known weaknesses, and the known system configurations have not changed, and the test is therefore said to be a snapshot. But is this snapshot enough?

The methodology proposed herein, if followed correctly with no short-cuts, will provide more than a snapshot. Except for known vulnerabilities in an operating system or application, the snapshot will be a scattershot, encompassing perhaps a few weeks rather than a moment in time.

I have often asked myself whether it is worth having a central standard for security testing. As I began to write down the exact sequence of my testing to share synchronously the active work of a penetration test, it became clear that what I was doing was not that unique. All security testers follow one methodology or another. But are all methodologies good?

All security information I found on the Internet regarding a methodology was either bland or secret. "We use a unique, in-house developed methodology and scanning tools…." This phrase was found often. I remember once advising a CIO that if a security tester tells you his tools include ISS, Cybercop, and "proprietary, in-house developed tools", you can be sure he mainly uses ISS and Cybercop. That's not to say many don't have proprietary tools. I worked for IBM as an ethical hacker. They had the Network Security Auditor (NSA) that they now include in their firewall package. It was a good, proprietary tool with some nice reporting functions. Was it better than ISS or Cybercop? I couldn't say, since we also used ISS to revalidate the NSA tests. This is due to the difficulty of keeping a vulnerability scanner up-to-date.

I feel it is valid to be able to ask companies if they meet a certain standard. I would be thrilled if they went above the standard. I would also know that the standard is what they charge a certain price for, and that I am not just getting a port scan of 10,000 ports and a check of 4,800 vulnerabilities, especially since most of those apply only to a certain OS or application. I'd like to see vulnerability scanners break down that number by OS and application. I know that if I go into Bugtraq (the only true vulnerability checking is research on BT) I will be able to find all the known vulnerabilities by OS and application. If the scanner checks for 50 Red Hat holes in a certain flavor and 5 Microsoft NT holes and I'm an NT shop, I think I may try a different scanner.

So following an open-source, standardized methodology that anyone and everyone can open and dissect and add to and complain about is the most valuable contribution we can make to Internet security. And if you need a reason to recognize it and admit it exists (whether or not you follow it to the letter) it's because you, your colleagues, and your fellow professionals have helped design it and write it. Supporting an open-source methodology does not reduce your ability - rather it shows you are just as good as all the other security testers. The rest is about firm size, finance capital, and vendor backing.



Contributions
Those who have contributed to this manual in valuable ways are listed at the top of this document. Each person receives recognition for the type of contribution made, although not for what specifically was contributed. Contribution obscurity is used in this document to prevent bias.


Terms
Throughout this manual we refer to words and terms that may be construed with other intents or meanings. We will attempt to clarify most of them in the glossary at the end of this manual; however, it is important to note that there are a few which we make universal to fit our scope. They are as follows:
black-box
any testing that is done without prior knowledge, blindly but not randomly.

hacker
good or bad, novice or expert, a person who attempts to exploit or trick a computer system.

Internet presence
the thin veil which separates systems, services, and information between a network and the Internet.

invasive
trespassing by probing or attaching to non-public parts of a system or network.

passive
data collection by not probing or attaching to non-public parts of a system or network.

Red Team
the person or persons conducting a black-box penetration test or ethical hacking engagement.

white-box
any testing completed with privileged knowledge, i.e. having the source code for a program while testing.

Intended Audience
This manual is written for Internet security professionals, both developers and testers. Terms, skills, and tools mentioned herein may not make much sense to the novice or to those not directly involved in Internet security. Networking professionals may also find this manual of use, since much of security blurs the line between the IT networking department and the security professionals.

This manual does not examine the proper way to use particular software or network protocols or how to read the results. Evil hackers-in-the-making will find this a disappointing feature of the manual. People concerned that this is another guide on how to hack for fun are mistaken. Evil hackers need to find only one hole; security testers need to find them all. We are caught between the lesser of two evils, and disclosure will at least inform, in a structured and useful way, those who need to defend their systems. So whether or not to disclose with this manual is truly a damned-if-you-do, damned-if-you-don't predicament. We choose disclosure. In choosing disclosure we have been sure not to include specific vulnerabilities or problems that can be abused, and we offer only a standard methodology.

Developers will find this manual useful in building better networks, firewalls, applications, and testing tools. Many of the tests do not currently have a way to automate them. Many of the automated tests do not follow a methodology in an optimal order. This manual will address these issues. Developers may feel free to address them as well. Appendix D addresses some tools that could be developed to assist some of the testing within this methodology.


Scope
This is a document of Internet security testing methodology: a set of rules and guidelines for solid penetration testing, ethical hacking, and information security analysis, including the use of open source testing tools for the standardization of security testing and the improvement of automated vulnerability testing tools. It is within the scope of this document to provide a standardized approach to a thorough security assessment of an organization's Internet presence. Through this standardized approach to thoroughness, we achieve an Open Standard for Internet Security Testing and use it as a baseline for all security testing methodologies, known and unknown.

End Result
The ultimate goal is to set a standard in testing methodology which when used in either manual or automated security testing results in meeting operational security requirements for securing the Internet presence. The indirect result is creating a discipline that can act as a central point in all Internet security tests regardless of the size of the network, type of systems, or Internet applications.

Analysis and the Business Risk Assessment
Analysis is not within the scope of this document. This document maintains a "business" perspective that relates to the risk assessment. This document by no means forces the hand of the analyst but rather guides the hand of the auditor. The analysis of collected data is completely within the control of the security testing organization. The business perspective helps form the scope under the assumption that the organization being tested is concerned about security, privacy, image, brand, time, and all the things where loss of money could be the inevitable result. Therefore, the security tester in this document takes the extended position of information security tester, privacy tester, systems security tester, policy tester, and marketing/business defenses tester. This is the position through which this manual achieves its thoroughness.

BS7799 and ISO17799 Information Security Testing Compliance
This document meets full information security testing compliance based on BS7799 and ISO17799 where applicable to testing from the Internet presence.


Risk and Sensitivity Assessment
This manual will treat risk assessment as the analysis of collected data. The security level to which this complies may depend on the country of the organization. The next section presents a sample of countries with strong data privacy and information security legislation and regulation for the framework of this manual and the quality of testing data to be analyzed.

Another aspect of this manual is to introduce offensive measures for conducting market/business intelligence gathering. This document uses offensive and defensive market/business intelligence gathering techniques known as Competitive Intelligence, as per the Society of Competitive Intelligence Professionals (SCIP), and the technique known as "Scouting" to compare the target organization's market/business positioning to the actual position as seen by other intelligence professionals on the Internet.

This document is also in compliance with the control activities found in the US General Accounting Office's (GAO) Federal Information System Control Audit Manual (FISCAM) where they apply to network security.

Legislation and Regulation Compliance
This manual was developed to satisfy the testing and risk assessment for personal data protection and information security in the following bodies of legislation. The tests performed provide the necessary information to analyze for data privacy concerns as per most Governmental legislations due to this manual's thorough testing stance. Although not all country statutes can be detailed herein, this manual has explored the various bodies of law to meet the requirements of strong examples of individual rights and privacy.

  • USA Government Information Security Reform Act of 2000 section 3534(a)(1)(A)
  • Deutsche Bundesdatenschutzgesetz (BDSG)-- Artikel 1 des Gesetzes zur Fortentwicklung der Datenverarbeitung und des Datenschutzes from 20. December 1990, BGBl. I S. 2954, 2955, zuletzt geändert durch das Gesetz zur Neuordnung des Postwesens und der Telekommunikation vom 14. September 1994, BGBl. I S. 2325
  • Spanish LOPD Ley orgánica de regulación del tratamiento automatizado de los datos de carácter personal, Art. 15, Art. 5
  • Provincial Law of Quebec, Canada Act Respecting the Protection of Personal Information in the Private Sector (1993).
  • UK (to be completed)


Process
A security test is performed with two types of attack. A passive attack is often a form of data collection which does not directly influence or trespass upon the target system or network. An invasive attack, however, does trespass upon the target system or network; it can be logged and can raise alarms on the target system or network.

The process of a security test concentrates on evaluating the following areas:

Visibility
Visibility is what can be seen of your Internet presence. This includes - but is not limited to - open or filtered ports, the types of systems, the architecture, the applications, e-mail addresses, employee names, the skills of the new sys admin being hired through an online job search, the circulation of your software products, and the websites visited by your employees and everything they download. Being invisible includes being able to step on wet sand and leave no footprint.

Access
Access is why people visit your Internet presence. This includes but is not limited to a web page, an e-business, a P2P server to content map, a DNS server, streaming video, or anything in which a service or application supports the definition of quasi-public, where a computer interacts with another computer within your network. Limiting access means denying all except what is expressly justified in the business plan.

Trust
Trust is the most important concept in Internet security. It is a measure of how much people can depend on what the system offers. Trust depends on the kind and amount of authentication, nonrepudiation, access control, accountability, data confidentiality, and data integrity employed by the system(s).

Sometimes trust is the basis for a service, for example when one computer links to another. Some trust "partnerships" include VPNs, PKIs, HTTPS, SSH, B2B connectors, database to server connections, e-mail, employee web surfing, or any communication between two computers which causes interdependency between two computers whether server/server, server/client, or P2P.

Alarm
Alarm is the timely and appropriate notification of activities that violate or attempt to violate Visibility, Access, or Trust. This includes but is not limited to log file analysis, port watching, traffic monitoring, intrusion detection systems, or sniffing/snooping. Alarm is often the weakest link in appropriate security measures.


Risk Assessment Values
The RAV is a method for determining the decay rate of the current security level based on the size of the network and the types of tests performed. The usefulness of the RAV is in examining the testing cycle necessary to guarantee a certain percentage of security. This is beneficial .......

Calculating RAVs

The various scores assigned to modules are expressed as a period of days before the test should be repeated and the percentage of security lost with each period that passes.
????

Cycle: X days

Degradation: X%

RAV: ?????????
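
Although the calculation above is incomplete in this draft, the per-module values listed later in this manual are consistent with a simple relationship: RAV = degradation / cycle, that is, the percentage of security presumed lost for each day that passes without repeating the test. The following Python sketch illustrates that inferred relationship; it is a reconstruction from this draft's own numbers, not a formula the draft defines.

# Inferred from the module values later in this manual (e.g. Network
# Surveying: cycle 25 days, degradation 18%, RAV 0.72 = 18 / 25).
# This is a reconstruction, not an official OSSTMM formula.

def rav(cycle_days: float, degradation_pct: float) -> float:
    """Percentage of security presumed lost per day for one module."""
    return degradation_pct / cycle_days

def remaining_security(cycle_days: float, degradation_pct: float,
                       days_since_test: float) -> float:
    """Percent of the tested security level assumed to remain,
    decaying linearly at the module's RAV and floored at zero."""
    return max(0.0, 100.0 - rav(cycle_days, degradation_pct) * days_since_test)

print(rav(25, 18))                     # 0.72, matching Network Surveying
print(remaining_security(25, 18, 50))  # 64.0 after two missed cycles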

Modules
The methodology is broken down into modules and tasks. The modules are the flow of the methodology from one Internet Presence Point to the other. Each module has an input and an output. The input is the information used in performing each task. The output is the result of completed tasks. Output may or may not be analyzed data (also known as intelligence) to serve as an input for another module. It may even be the case that the same output serves as the input for more than one module such as IP addresses or domain names.

Some tasks yield no output; this means that modules will exist for which there is no input. Modules which have no input can be ignored during testing. Ignored modules do not necessarily indicate an inferior test; rather they may indicate superior security.

A module that produces no output can mean one of three things--

  1. The tasks were not properly performed.
  2. The tasks revealed superior security.
  3. The task result data has been improperly analyzed.


It is vital that impartiality exists in performing the tasks of each module. Searching for something with the intention of finding it may lead you to find exactly what you want, whether or not it is there. In this methodology, each module begins with an input and ends with an output exactly for the reason of keeping bias low. Each module gives a direction of what should be revealed to move further down the flow.

Time is relative. Larger projects mean more time spent at each module and on each task. The amount of time allowed before returning with output data depends on the tester and the scope of the testing. Proper testing is a balance of time and energy where time is money and energy is the limit of man and machine power.


Internet Presence Points
Internet presence points are every point in the Internet where an organization interacts with the Internet. These presence points are presented as modules in the methodology flow.

While security testing is a strategic effort, there may be different ways and different tools to test many of the same modules; however, there are few variations in the order in which to test them. Although some of the modules mentioned here are not specifically Internet presence points, they are worth noting due to their electronic nature and the lack of other places where they may fit in as a test of their own.

The methodology in this manual is from the standpoint of a complete and thorough security test of the Internet presence including electronically controlled physical access areas, electronic access over communication lines, wireless access points, and human interaction. This is defined here as an Unrestricted test. Therefore an unrestricted test by default will be extremely involved and require permissions far above that which may be allowed for other types of security tests.

Modules involved in an Unrestricted Test:
(RAV xxxxxxxx)

  • Network Surveying
  • Port Scanning
  • System Fingerprinting
  • Services Probing
  • Redundant Automated Vulnerability Scanning
  • Exploit Research
  • Manual Vulnerability Testing and Verification
  • Application Testing
  • Firewall & ACL Testing
  • Security Policy Review
  • Intrusion Detection System (IDS) Testing
  • Document Grinding (Electronic Dumpster Diving)
  • Social Engineering
  • Trusted Systems Testing
  • Password Cracking
  • Denial of Service Testing
  • Privacy Review
  • IDS & Server Logs Review
  • Wireless Leak Tests
  • PBX Testing

Additionally, this manual will standardize two other types of security tests which when conducted will also produce a complete and thorough review of its respective area. The two types are the Restricted test and the Standard test. The Restricted Test is a security review of purely the Internet presence and avoids other access points where interaction is involved.

Modules involved in a Restricted Security Test:

  • Network Surveying
  • Port Scanning
  • System Fingerprinting
  • Services Probing
  • Redundant Automated Vulnerability Scanning
  • Password Cracking
  • Denial of Service Testing


Testing which integrates the Internet presence with human interaction is defined as a Standard Test and requires access to logs and personnel.

Modules involved in a Standard Security Test:

  • Network Surveying
  • Port Scanning
  • System Fingerprinting
  • Services Probing
  • Redundant Automated Vulnerability Scanning
  • Exploit Research
  • Manual Vulnerability Testing and Verification
  • Application Testing
  • Firewall & ACL Testing
  • Intrusion Detection System Testing
  • Document Grinding (Electronic Dumpster Diving)
  • Social Engineering
  • Trusted Systems Testing
  • Password Cracking
  • Denial of Service Testing


As you can see, there is a great amount of data to collect and analyze. The above steps can be graphed in a more visual form to help understand the flow of the testing.


Methodology
The methodology flows from the point of Network Surveying to the final report. An example of this flow would allow a separation between data collection and verification testing of and on that collected data. The flow would also determine the precise points of when to extract and when to insert this data.

In defining the methodology of testing, it is important to not constrict the creativity of the tester by introducing standards so formal and unrelenting that the quality of the test suffers. Additionally, it is important to leave tasks open to some interpretation where exact definition will cause the methodology to suffer when new technology is introduced. For example, verifying that the system uses proper encryption does not specify the techniques to be used for verification nor does it specify what kind of encryption. This is done on purpose. The module on vulnerability testing is especially open due to the dynamic nature of exploits.

In this methodology, we define modules and tasks. Each module has a relationship to the one before it and the one after it. Security testing begins with an input that is ultimately the addresses of the systems to be tested. Security testing ends with the beginning of the analysis phase and the final report. This methodology does not affect the form, size, style, or content of the final report nor does it specify how the data is to be analyzed. That is left to the security tester or organization.

Modules are the variables in security testing. The module requires an input to perform the tasks of the module. Tasks are the security tests to perform depending upon the input for the module. The results of the tasks may be immediately analyzed to act as a processed result or left raw. Either way, they are considered the output of the module. This output is often the input for a following module or in certain cases such as newly discovered hosts, may be the input for a previous module.

Non-traditional modules refer to conditions where the components to test are not clearly defined as Internet Presence Points. A wireless LAN, for example, may bleed into the street, where one can test the network "remotely" yet must still be within close range. Telephone switches of organizations have long been the target of zealous phreakers (phone hackers) and therefore also of the Security Tester. These two examples are considered part of a thorough security test.


Module Interdependency
In the above methodology we see a certain order in the flow that presents the possibility of running certain tests in parallel. For instance, IDS testing does not interfere with wardialing; also, neither test depends on the results of the other. However, both depend upon the review of the security policy to define certain modules.

Example 1 shows the relationship of one test to the other in terms of dependencies. Each module may depend on a module before it to have the best result. This is the input / output model discussed previously. Therefore, each module begins with the explanation of what the expected output is to be (Expected Results). The input is a bit more complicated. For certain tests, it is up to the security tester to decide what input is best. In some tasks there is a default and an alternative. For example, in TCP port scanning, one can choose to scan all 65,536 ports or can scan what has been determined to be the standard set of problematic ports. In others such as Electronic Dumpster Diving (EDD) it is not so clear since the dependencies on the depth of the testing are so greatly varied.






Example 2 shows the same tasks for parallel testing and data collection. This is very useful in team testing and automated testing tool design.
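
The parallel view in Example 2 translates directly into automated tool design. The sketch below groups modules into parallel "waves" using Kahn-style topological scheduling: every module in a wave has all of its input modules completed in earlier waves. The dependency edges are illustrative, drawn only from the examples in this section (IDS testing and wardialing both depending on the security policy review); they are not the manual's official graph.

# Modules whose inputs are satisfied run together in parallel "waves".
# The edges below are illustrative, not the manual's official graph.

deps = {
    "Security Policy Review": set(),
    "Network Surveying": set(),
    "Port Scanning": {"Network Surveying"},
    "System Fingerprinting": {"Port Scanning"},
    "Services Probing": {"Port Scanning"},
    "IDS Testing": {"Security Policy Review"},
    "Wardialing": {"Security Policy Review"},
}

def parallel_waves(deps):
    """Kahn-style scheduling: each wave holds every module whose
    input modules all completed in earlier waves."""
    remaining = {m: set(d) for m, d in deps.items()}
    done, waves = set(), []
    while remaining:
        wave = sorted(m for m, d in remaining.items() if d <= done)
        if not wave:
            raise ValueError("cyclic dependency between modules")
        waves.append(wave)
        done.update(wave)
        for m in wave:
            del remaining[m]
    return waves

for i, wave in enumerate(parallel_waves(deps), 1):
    print(f"Wave {i}: {', '.join(wave)}")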






Test Module Definition
The test parameters are the steps to a complete and thorough Internet security test corresponding to the open standard. The path of the methodology can best be explained by describing the tasks to be performed, an explanation of each task, and the possible resources involved in the task.

If no public-domain tool or resource exists for a certain task, the explanation of the task will so state, and full details will be entered into Appendix D -- Needed Resources until such time as the request is fulfilled.

Currently, many more Internet services and applications exist (some dynamically I might add) than there are tests to specifically address them. For the sake of the methodology, it will be assumed that the tester addresses each service and application in kind wherever the terms Services or Applications are used with the appropriate tools or analysis for completing a thorough test.


Test Modules and Tasks
All RAVs are based on one domain and one IP address.


Network Surveying

Cycle: 25 days -- Degradation: 18% -- RAV: 0.72

A network survey serves as an introduction to the systems to be tested. It is best defined as a combination of data collection and information analysis. Although it is often advisable from a legal standpoint to define contractually exactly which systems to test if you are a third-party auditor, or even if you are the system administrator, you may not be able to start with concrete system names or IP addresses. In this case you must survey and analyze. The point of this exercise is to find the number of reachable systems to be tested without exceeding the legal limits of what you may test. Therefore the network survey is just one way to begin a test; another way is to be given the IP range to test. In this module, no intrusion is performed directly on the systems except in places considered a quasi-public domain.

In legal terms, the quasi-public domain is a store that invites you in to make purchases. The store can control your access and can deny certain individuals entry but for the most part is open to the general public (even if it monitors them). This is the parallel to an e-business or web site.

Although not truly a module in the methodology, the network survey is a starting point. Oftentimes, more hosts are detected during actual testing. Please bear in mind that hosts discovered later may be inserted into the testing as a subset of the defined testing, and often only with the permission or collaboration of the target organization's internal security team.
Expected Results
Domain Names
Server Names
IP Addresses
Network Map
ISP / ASP information
System and Service Owners
Possible test limitations
Tasks to perform for a thorough network survey include:

Name server responses.
· Examine Domain registry information for servers.
· Find IP block owned.
· Question the primary, secondary, and ISP name servers for hosts and sub domains.

Examine the outer wall of the network.
· Use multiple traces to the gateway to define the outer network layer and routers.

Examine tracks from the target organization.
· Search web logs and intrusion logs for system trails from the target network.
· Search board and newsgroup postings for server trails back to the target network.

Information Leaks.
· Examine target web server source code and scripts for application servers and internal links.
· Examine e-mail headers, bounced mails, and read receipts for the server trails.
· Search newsgroups for posted information from the target.
· Search job databases and newspapers for IT positions within the organization relating to hardware and software.
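
As an illustration of the name-server and information-leak tasks above, the sketch below performs forward lookups of common sub-domain guesses and a reverse sweep across an address block, using only the Python standard library. The domain, the candidate names, and the address prefix are placeholders; all resolution must stay within the contractually agreed scope.

# Forward lookups of sub-domain guesses plus a reverse sweep of an
# address block.  example.com and 192.0.2. are placeholders.
import socket

DOMAIN = "example.com"
CANDIDATES = ["www", "mail", "ftp", "ns1", "ns2", "vpn"]

for name in CANDIDATES:
    fqdn = f"{name}.{DOMAIN}"
    try:
        _, aliases, addrs = socket.gethostbyname_ex(fqdn)
        print(f"{fqdn}: {', '.join(addrs)} aliases={aliases}")
    except socket.gaierror:
        pass  # name does not resolve

def reverse_sweep(prefix: str, start: int = 1, end: int = 254):
    """Reverse-resolve every address in the block, e.g. prefix='192.0.2.'."""
    for i in range(start, end + 1):
        ip = f"{prefix}{i}"
        try:
            host, _, _ = socket.gethostbyaddr(ip)
            print(f"{ip} -> {host}")
        except (socket.herror, socket.gaierror):
            pass  # no PTR record

reverse_sweep("192.0.2.")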





Port Scanning

Cycle: 5 days -- Degradation: 3% -- RAV: 0.6

Port scanning is the invasive probing of system ports on the transport and network level. Included here is also the validation of system reception to tunneled, encapsulated, or routing protocols. The purpose of this module is to enumerate live or accessible Internet services as well as to penetrate the firewall to find additional live systems. The small sample of protocols here is for clarity of definition. Many protocols are not listed here. Testing for different protocols will depend on the system type and the services it offers. For a more complete list of protocols, see Appendix F.

Each Internet-enabled system has 65,536 possible TCP ports and 65,536 possible UDP ports. However, it is not always necessary to test every port on every system. This is left to the discretion of the test team. Port numbers that are important for testing according to the service are listed with the task. Additional port numbers for scanning should be taken from the Consensus Intrusion Database Project Site.
Expected Results
Open, closed, or filtered ports
IP addresses of live systems
List of discovered tunneled and encapsulated protocols
List of discovered routing protocols supported
Active services
Network Map
Tasks to perform for a thorough Port Scan:
Error Checking
· Check the route to the target network for packet loss
· Measure the rate of packet round-trip time
· Measure the rate of packet acceptance and response on the target network
· Measure the amount of packet loss or connection denials at the target network

Enumerate Systems
· Collect broadcast responses from the network
· Probe past the firewall with strategically set packet TTLs (Firewalking) for all IP addresses.
· Use ICMP and reverse name lookups to determine the existence of all the hosts in a network.
· Use a TCP source port 80 and ACK on ports 3100-3150, 10001-10050, 33500-33550, and 50 random ports above 35000 for all hosts in the network.
· Use TCP fragments in reverse order with FIN, NULL, and XMAS scans on ports 21, 22, 25, 80, and 443 for all hosts in the network.
· Use a TCP SYN on ports 21, 22, 25, 80, and 443 for all hosts in the network.
· Use DNS connect attempts on all hosts in the network.
· Use FTP and Proxies to bounce scans to the inside of the DMZ for ports 22, 81, 111, 132, 137, and 161 for all hosts on the network.

Enumerating Ports
· Use TCP SYN (Half-Open) scans to enumerate ports as being open, closed, or filtered on the default TCP testing ports in Appendix B for all the hosts in the network.
· Use TCP fragments in reverse order to enumerate ports and services for the subset of ports on the default Packet Fragment testing ports in Appendix B for all hosts in the network.
· Use UDP scans to enumerate ports as being open or closed on the default UDP testing ports in Appendix B if UDP is NOT being filtered already. [Recommended: first test the packet filtering with a very small subset of UDP ports.]

Encapsulated and Tunneled Protocols
· Verify and examine the use of SMB (Server Message Block) via IP.
· Verify and examine the use of NBT (NetBIOS-over-TCP).
· Verify and examine the use of IPX or IPX/SPX (Novell's network protocol) via TCP/IP.
· Verify and examine the use of RPC (Remote Procedure Call) and DCE RPC in the Internet presence
· Verify and examine the use of PPTP (Point to Point Tunneling Protocol).
· Verify and examine the use of L2TP (Layer 2 Tunneling Protocol).
· Verify and examine the use of IP in IP encapsulation.
· Verify and examine the use of SNMP (Simple Network Management Protocol).
· Verify and examine GRE (Generic Routing Encapsulation), IPSEC, and Radius support.

Routing Protocols
· Verify and examine the use of ARP (Address Resolution Protocol).
· Verify and examine the use of RIP (Routing Information Protocol).
· Verify and examine the use of OSPF (Open Shortest Path First) and Link State Advertisements (LSA).
· Verify and examine the use of BGP (Border Gateway Protocol).
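
The SYN (half-open) probes named above require raw sockets and elevated privileges. As a close stand-in, the sketch below performs a full TCP connect scan over the sample port set used in this module. It yields the same open, closed, or filtered distinction, but it is noisier because it completes the handshake. The target is a placeholder documentation address; scan only in-scope hosts.

# Full TCP connect scan as a stand-in for a raw-socket SYN scan.
# 192.0.2.10 is a placeholder documentation address.
import socket

TARGET = "192.0.2.10"
PORTS = [21, 22, 25, 80, 443]  # the sample TCP port set used in this module

def connect_scan(host: str, ports, timeout: float = 1.0):
    results = {}
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            try:
                s.connect((host, port))
                results[port] = "open"
            except socket.timeout:
                results[port] = "filtered"   # no reply at all
            except ConnectionRefusedError:
                results[port] = "closed"     # RST received
            except OSError:
                results[port] = "error"      # unreachable, no route, etc.
    return results

for port, state in connect_scan(TARGET, PORTS).items():
    print(f"{TARGET}:{port} {state}")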



System Fingerprinting

Cycle: 25 days -- Degradation: 3% -- RAV: 0.12

System fingerprinting is the active probing of a system for responses that can distinguish unique systems down to the operating system and version level.
Expected Results
OS Type
Patch Level
System Type
Tasks to perform for a thorough System Fingerprint:
· Examine system responses to determine operating system type and patch level.
· Examine application responses to determine operating system type and patch level.
· Verify the TCP sequence number prediction for each live host on the network.
· Search job postings for server and application information from the target.
· Search tech bulletin boards and newsgroups for server and application information from the target.
· Match information gathered to system responses for more accurate results.

Services Probing
This is the active examination of the application listening behind the service. In certain cases more than one application exists behind a service where one application is the listener and the others are considered components of the listening application. A good example of this is PERL installed for use in a Web application. In that case the listening service is the HTTP daemon and the component is PERL.

Expected Results
Service Types
Service Application Type and Patch Level
Network Map

Tasks to perform for a thorough service probe:
· Match each open port to a service and protocol.
· Identify server uptime relative to the latest patch releases.
· Identify the application behind the service and the patch level using banners or fingerprinting.
· Verify the application to the system and the version.
· Identify the components of the listening service.
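
A minimal banner-grab sketch for this module follows: connect to a port, read whatever the listener volunteers, and for plain HTTP ports send a HEAD request to elicit the Server header. The host is a placeholder, and banner strings can be altered by administrators, so treat the output as a hint to verify against fingerprinting results rather than a conclusion.

# Read service banners; 192.0.2.10 is a placeholder address.
import socket

def grab_banner(host: str, port: int, timeout: float = 3.0) -> str:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        s.connect((host, port))
        if port in (80, 8080):
            s.sendall(b"HEAD / HTTP/1.0\r\n\r\n")  # HTTP speaks only when spoken to
        try:
            return s.recv(1024).decode(errors="replace").strip()
        except socket.timeout:
            return ""

for port in (21, 22, 25, 80):
    try:
        banner = grab_banner("192.0.2.10", port)
    except OSError:
        continue  # closed, filtered, or unreachable
    if banner:
        print(f"port {port}: {banner.splitlines()[0]}")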



Automated Vulnerability Testing

Cycle: 3 days -- Degradation: 15% -- RAV: 5.0

Testing for vulnerabilities using automated tools is an efficient way to determine existing holes and system patch level. Although many automated scanners are currently on the market and in the underground, it is important for the tester to identify and incorporate the current underground scripts/exploits into this testing.
Expected Results
List of system vulnerabilities
Type of application or service by vulnerability
Patch levels of systems and applications
Tasks to perform for a thorough Vulnerability Scan:
· Measure the target organization against the currently popular scanning tools.
· Attempt to determine vulnerability by system type.
· Attempt to match vulnerabilities to applications.
· Attempt to determine application type and service by vulnerability.
· Perform redundant testing with at least 2 automated vulnerability scanners.



Exploit Research

Cycle: 1 day -- Degradation: 8% -- RAV: 8.0

This module covers the research involved in finding vulnerabilities up until the report delivery. This involves searching online databases and mailing lists specific to the systems being tested. Do not confine yourself to the web-- consider using IRC, Newsgroups, and underground FTP sites.
Expected Results
Patch levels of systems and applications
List of possible denial of service vulnerabilities
Tasks to perform for a thorough Exploit Research:
· Identify all vulnerabilities according to applications.
· Identify all vulnerabilities according to operating systems.
· Identify all vulnerabilities from similar or like systems that may also affect the target systems.



Manual Vulnerability Testing and Verification

Cycle: 3 days -- Degradation: 9% -- RAV: 3.0

This module is necessary for eliminating false positives, expanding the hacking scope, and discovering the data flow in and out of the network. Manual testing refers to a person or persons at the computer using creativity, experience, and ingenuity to test the target network.
Expected Results
List of areas secured by obscurity or visible access
List of actual vulnerabilities minus false positives
List of internal or DMZ systems
List of mail, server, and other naming conventions
Network map
Tasks to perform for a thorough Manual Testing and Verification:
· Verify all vulnerabilities found during the exploit research phase for false positives.
· Verify all positives (be aware of your contract if you are attempting to intrude or perform a denial of service).



Internet Application Testing

Cycle: 28 days -- Degradation: 12% -- RAV: 0.43

This module refers to the testing of non-daemon applications accessible from the Internet. These applications can be written in any language or script. They generally provide a business process, for example to receive user queries and provide responses. For example, a banking application might support queries to a checking or savings account.
Expected Results
List of applications
List of application components
List of application vulnerabilities
List of application system trusts
Tasks to perform for a thorough Internet Application test:
· Decompose or deconstruct the application if necessary to access the source code.
· Examine the processes of the application.
· Test the inputs of the application.
· Examine the outputs of the application.
· Examine the communications, trusts, and relationships of the application.
· Determine the limits of authentication and access control.
· Measure the limitations of the defined variables.
· Examine the use of caching.
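
As one illustration of the input-testing task above, the sketch below sends boundary and metacharacter values to a single form parameter and flags anomalous responses. The URL, the parameter name, and the anomaly heuristics are hypothetical; a real test covers every entry point of the application and stays within the engagement's scope.

# Probe one input parameter with boundary and metacharacter values.
# The endpoint and parameter are placeholders.
import urllib.error
import urllib.parse
import urllib.request

URL = "http://app.example.com/search"   # placeholder endpoint
PARAM = "q"                             # placeholder input parameter
PROBES = ["A" * 4096, "'", '"', "<script>", "../../etc/passwd", "%00", "-1"]

for probe in PROBES:
    query = urllib.parse.urlencode({PARAM: probe})
    try:
        with urllib.request.urlopen(f"{URL}?{query}", timeout=5) as resp:
            body = resp.read(2048).decode(errors="replace")
            # Leaked stack traces or parser errors suggest unvalidated input
            if "Exception" in body or "syntax error" in body.lower():
                print(f"anomaly with probe {probe!r}")
    except urllib.error.HTTPError as e:
        print(f"HTTP {e.code} for probe {probe!r}")  # 5xx merits a closer look
    except urllib.error.URLError:
        pass  # placeholder host is unreachable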



Firewall and Access Control List Testing

Cycle: 30 days -- Degradation: 3% -- RAV: 0.1

The firewall and screening router are two defenses often found on a network that control the flow of traffic between the enterprise network and the Internet. Both operate on a security policy and use ACLs (Access Control Lists). This module is designed to assure that only that which is expressly permitted is allowed into the network; all else should be denied. However, this is often difficult when no written security policy exists, and the analyst is forced to make assumptions as to the acceptable risk. That is not the job of the tester. The security tester must attempt to find the limits of the firewall and/or the screening router both as a system and as a service.
Expected Results
Information on the firewall as a service and a system
Information on the routers as a service
Outline of the network security policy by the ACL
List of the types of packets which may enter the network
List of the types of protocols with access inside the network
List of live systems found
Tasks to perform for a thorough Firewall & ACL Test:
· Verify the Firewall type with information collected from intelligence gathering.
· Verify the router types and configurations.
· Test the ACL against the written security policy or against the "Deny All" rule.
· Verify that the firewall is egress filtering local network traffic
· Verify that the firewall and/or router is performing address spoof detection
· Verify the penetrations from inverse scanning completed in the Port Scanning module.
· Verify the penetrations from strategically determined packet TTL settings (Firewalking) completed in the Port Scanning module.



Intrusion Detection System Testing

Cycle: 30 days -- Degradation: 3% -- RAV: 0.1

This test is focused on the performance and sensitivity of an IDS. Much of this testing cannot be properly achieved without access to the IDS logs. Some of these tests are also subject to attacker bandwidth, hop distance, and latency, which will affect their outcome.
Expected Results
Type of IDS
Note of IDS performance under heavy load
Type of packets dropped or not scanned by the IDS
Type of protocols dropped or not scanned by the IDS
Note of reaction time and type of the IDS
Note of IDS sensitivity
Rule map of IDS
Tasks to perform for a thorough IDS Test:
· Verify the IDS type with information collected from intelligence gathering.
· Test the IDS for configured reactions to multiple, varied attacks.
· Test the IDS for configured reactions to obfuscated URLs.
· Test the IDS for configured reactions to speed adjustments in packet sending.
· Test the IDS for configured reactions to source port adjustments.
· Test the IDS for the ability to handle fragmented packets.
· Test the IDS for configured reactions to the network traffic listening configuration in the designated network segment(s).
· Test the IDS for alarm states.
· Test the signature sensitivity settings over 1 minute, 5 minutes, 60 minutes, and 24 hours.
· Test the effect and reactions of the IDS against a single IP address versus various addresses.



Security Policy Review

Cycle: 30 days -- Degradation: 7% -- RAV: 0.23

The security policy noted here is the written, human-readable policy document outlining the mitigated risks an organisation will handle with the use of specific types of technologies. This security policy may also be a human-readable form of the ACLs. There are two functions to be performed: first, the testing of the policy against the actual state of the Internet presence and other non-Internet-related connections; and second, assuring that the policy exists within the business justifications of the organisation, local and federal legal statutes, and personal privacy ethics.

These tasks require that the testing and verification of vulnerabilities is completely done and that all other technical reviews have been performed. Unless this is done you can't compare your results with the policy that should be met by measures taken to protect the operating environment.
Expected Results
List of all policy points differing from the actual state of the Internet presence
Show non-approval from management
List inbound connection rules not met
List outbound connection rules not met
List security measures not met
List of all policy points differing from the actual state of non-Internet connections
List modem rules not met
List fax machine rules not met
List PBX rules not met
Tasks to perform for a thorough Security Policy review:
· Measure the security policy points against the actual state of the Internet presence.
· Approval from Management -- Look for any sign (e.g. signature) that reveals that the policy is approved by management. Without this approval the policy is useless because staff is not required to meet the rules outlined within. From a formal point of view you could stop investigating the policy if it is not approved by management. However, testing should continue to determine how effective the security measures are on the actual state of the internet presence.
o Inbound connections -- Check any risks mentioned regarding inbound Internet connections (Internet -> DMZ, Internet -> internal net) and the measures which may be required to reduce or eliminate those risks. These risks concern allowed incoming connections, typically HTTP, HTTPS, FTP, and VPNs, and the corresponding measures such as authentication schemes, encryption, and ACLs. Specifically, rules that deny any stateful access to the internal net are often not met by the implementation.
o Outbound connections -- Outbound connections could be between internal net and DMZ, as well as between internal net and the Internet. Look for any outbound rules that do not correspond to the implementation. Outbound connections could be used to inject malicious code or reveal internal specifics.
o Security measures -- Rules that require the implementation of security measures should be met. Those could be the use of IDS, firewalls, DMZs, routers and their proper configuration/implementation according to the outlined risks to be met.
· Measure the security policy points against the actual state of non-Internet connections.
o Modems -- There should be a rule indicating that the use of modems that are not specially secured is forbidden, or at least only allowed if the modems are powered down when not in use and configured to disallow dial-in. Check whether a corresponding rule exists and whether the implementation follows the requirements.
o Fax machines -- There should be a rule indicating that the use of fax machines which can allow access from the outside to the memory of the machines is forbidden or at least only allowed if the machines are powered down when not in use. Check whether a corresponding rule exists and whether the implementation follows the requirements.
o PBX -- There should be a rule indicating that the remote administration of the PBX system is forbidden or at least only allowed if the machines are powered down when not in use. Check whether a corresponding rule exists and whether the implementation follows the requirements.



Document Grinding (Electronic Dumpster Diving)

Cycle: 30 days -- Degradation: 12% -- RAV: 0.4

This module is important in the verification of much of the tested information and pertains to many levels of what is considered information security. The amount of time granted to the research and extraction of information depends upon the size of the organisation, the scope of the project, and the length of time planned for the testing. More time, however, does not always mean more information, but it can eventually lead to key pieces of the security puzzle.
Expected Results (see Appendix E for the default profile template)
A profile of the organization
A profile of the key employees
A profile of the organization's network
Tasks to perform for a thorough Document Grind:
· Examine web databases and caches concerning the target organization and key people.
· Investigate key persons via personal homepages, published resumes, and organizational affiliations.
· Compile e-mail addresses from within the organization and personal e-mail addresses from key people.
· Search job databases for skill sets technology hires need to possess in the target organization.
· Search newsgroups for references to and submissions from within the organization and key people.
· Search documents for hidden codes or revision data.
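
As an example of the e-mail compilation task above, the sketch below harvests unique addresses from locally saved copies of the target's public web pages. The mirror directory is a placeholder; the pages would first be collected with a crawler or pulled from the web caches mentioned above.

# Harvest e-mail addresses from mirrored pages; "./mirror" is a placeholder.
import re
from pathlib import Path

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def harvest_emails(directory: str) -> set:
    """Collect unique addresses from every HTML file under directory."""
    found = set()
    for page in Path(directory).rglob("*.htm*"):
        text = page.read_text(errors="replace")
        found.update(m.group(0).lower() for m in EMAIL_RE.finditer(text))
    return found

for addr in sorted(harvest_emails("./mirror")):
    print(addr)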



Competitive Intelligence Scouting

Cycle: 15 days -- Degradation: 15% -- RAV: 1.0

CI Scouting is the scavenging of information from an Internet presence that can be analysed as business intelligence. Unlike the straight-out intellectual property theft found in industrial espionage or hacking, CI tends to be non-invasive and much more subtle. It is a good example of how the Internet presence extends far beyond the hosts in the DMZ. Using CI in a penetration test gives business value to the components and can help in finding business justifications for implementing various services.
Expected Results
A measurement of the organization's network business justifications
Size and scope of the Internet presence
A measurement of the security policy against future network plans
Tasks to perform for a thorough Competitive Intelligence Scouting:
· Map and weigh the directory structure of the web servers
· Map and weigh the directory structure of the FTP servers
· Examine the WHOIS database for business services relating to registered host names
· Estimate the IT cost of the Internet infrastructure based on OS, Applications, and Hardware.
· Estimate the cost of support infrastructure based on regional salary requirements for IT professionals, job postings, number of personnel, published resumes, and responsibilities.
· Measure the buzz (feedback) of the organization based on newsgroups, web boards, and industry feedback sites
· Estimate the number of products being sold electronically (for download)
· Estimate the number of products found in P2P sources, warez sites, available cracks up to specific versions, and documentation both internal and third-party about the products



Trusted Systems Testing

Cycle: 60 days -- Degradation: 4% -- RAV: 0.06

The purpose of testing system trusts is to affect the Internet presence by posing as a trusted entity of the network. The testing scenario is often more theory than fact and does more than blur the line between vulnerability testing and Firewall/ACL testing-- it is the line.
Expected Results
Map of systems dependent upon other systems
Map of applications with dependencies to other systems
Types of vulnerabilities which affect the trusting systems and applications
Tasks to perform for a thorough Trusted Systems test:
· Verify possible relationships determined from intelligence gathering, application testing, and services testing.
· Test the relationships between various systems through spoofing or event triggering.
· Verify which systems can be spoofed.
· Verify which applications can be spoofed.



Password Cracking

Cycle: 21 days -- Degradation: 8% -- RAV: 0.38

Password cracking is the process of validating password strength through the use of automated password recovery tools that expose either the application of weak cryptographic algorithms, incorrect implementations of cryptographic algorithms, or weak passwords due to human factors. This module should not be confused with password recovery via sniffing clear-text channels, which may be a simpler means of subverting system security, but only due to unencrypted authentication mechanisms, not password weakness itself. [Note: This module could include manual password-guessing techniques, which exploit default username and password combinations in applications or operating systems (e.g. Username: System Password: Test) or easy-to-guess passwords resulting from user error (e.g. Username: joe Password: joe). This may be a means of obtaining access to a system initially, perhaps even administrator or root access, but only due to educated guessing. Beyond manual password guessing with simple or default combinations, brute-forcing passwords for such applications as Telnet, using scripts or custom programs, is almost never feasible due to prompt timeout values, even with multi-connection (i.e. simulated threading) brute force applications.]

Once gaining administrator or root privileges on a computer system, password cracking may assist in obtaining access to additional systems or applications (thanks to users with matching passwords on multiple systems) and is a valid technique that can be used for system leverage throughout a security test. Thorough or corporate-wide password cracking can also be performed as a simple after-action exercise and may highlight the need for stronger encryption algorithms for key systems storing passwords, as well as highlight a need for enforcing the use of stronger user passwords through stricter policy, automatic generation, or pluggable authentication modules (PAMs).
Expected Results
Password file cracked or uncracked
List of login IDs with user or system passwords
List of systems vulnerable to crack attacks
List of documents or files vulnerable to crack attacks
List of systems with user or system login IDs using the same passwords
Tasks to perform for a thorough Password Cracking verification:
· Obtain the password file from the system that stores usernames and passwords.
· For Unix systems, this will be either /etc/passwd or /etc/shadow.
· For Unix systems that happen to perform SMB authentication, you can find NT passwords in /etc/smbpasswd.
· For NT systems, this will be /winnt/repair/Sam._ (or other, more difficult to obtain variants)
· Create a dictionary from collected documents and web pages.
· Run an automated dictionary attack on the password file.
· Run a brute force attack on the password file as time and processing cycles allow.
· Use obtained passwords or their variations to access additional systems or applications.
· Run automated password crackers on encrypted files that are encountered (such as PDFs or Word documents) in an attempt to gather more intelligence and highlight the need for stronger document or file system encryption.
· Verify password aging.
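
A minimal dictionary-attack sketch against a Unix shadow-style file follows, using the standard-library crypt module (Unix only). Real engagements use dedicated crackers for speed; this only illustrates the mechanics. The file paths are placeholders, and such a run is legitimate only against password files you are contractually authorized to test.

# Dictionary attack against crypt-format hashes (Unix only).
# Paths are placeholders; test only authorized password files.
import crypt

def crack(shadow_path: str, wordlist_path: str):
    with open(wordlist_path, errors="replace") as f:
        words = [w.strip() for w in f if w.strip()]
    with open(shadow_path) as f:
        for line in f:
            user, hashed = line.split(":")[:2]
            if hashed in ("*", "!", "!!", "x", ""):
                continue  # locked account or hash stored elsewhere
            for word in words:
                # crypt() reuses the salt and format embedded in the stored hash
                if crypt.crypt(word, hashed) == hashed:
                    print(f"{user}: {word}")
                    break

crack("/tmp/shadow.copy", "wordlist.txt")  # placeholder paths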



Denial of Service Testing

Cycle: 1 day -- Degradation: 7% -- RAV: 7.0

Denial of Service (DoS) is a situation where a circumstance, either intentional or accidental, prevents the system from functioning as intended. In certain cases, the system may be functioning exactly as designed; however, it was never intended to handle the load, scope, or modules being imposed upon it.

It is very important that DoS testing receives additional support from the organization and is closely monitored.
Expected Results
List weak points in the Internet presence, including single points of failure
Establish a baseline for normal use
List system behaviors under heavy use
List DoS-vulnerable systems
Tasks to perform for a thorough DoS test:
· Verify that administrative accounts and system files and resources are secured properly and all access is granted with "Least Privilege".
· Check the exposure restrictions of systems to non-trusted networks
· Verify that baselines are established for normal system activity
· Verify what procedures are in place to respond to irregular activity.
· Verify the response to SIMULATED negative information (propaganda) attacks.
· Test heavy server and network loads.
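
For the baseline task above, the sketch below measures response latency under light concurrent load so that later heavy-load results have a reference point. The URL is a placeholder; the heavy-load testing itself must, as noted, be arranged with and closely monitored by the organization.

# Measure a latency baseline under light concurrent load.
# The URL is a placeholder in-scope server.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://www.example.com/"
REQUESTS, WORKERS = 50, 5

def timed_fetch(_):
    start = time.monotonic()
    try:
        with urllib.request.urlopen(URL, timeout=10) as resp:
            resp.read(1024)
        return time.monotonic() - start
    except OSError:
        return None  # failures are counted separately

with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    samples = list(pool.map(timed_fetch, range(REQUESTS)))

ok = [s for s in samples if s is not None]
print(f"{len(ok)}/{REQUESTS} succeeded")
if ok:
    print(f"mean latency: {sum(ok)/len(ok):.3f}s  max: {max(ok):.3f}s")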



Privacy Policy Review

Cycle: 90 days -- Degradation: 2% -- RAV: 0.02

The privacy policy is the focal point of the organisation's stance on customer privacy. This policy must be publicly viewable. In cases where this policy does not exist, it is necessary to use the local privacy legislation of the target organization.
Expected Results
List any disclosures
List compliance failures between public policy and actual practice
List systems involved in data gathering
List data gathering techniques
List data gathered
Tasks to perform for a thorough Privacy Policy review:
· Compare publicly accessible policy to actual practice
· Identify database type and size for storing data
· Identify data collected by the organization
· Identify storage location of data
· Identify cookie types
· Identify cookie expiration times
· Identify information stored in cookie
· Verify cookie encryption methods
· Identify server location of web bug(s)
· Identify web bug data gathered and returned to server
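
The cookie-related tasks above can begin with the sketch below, which fetches a page and lists each cookie's name, expiry, and flags for comparison against the published privacy policy. The URL is a placeholder, and this inspects only what a single response sets; a full review follows the application's authenticated paths as well.

# List cookies set by one response; the URL is a placeholder.
from http.cookies import SimpleCookie
import urllib.request

URL = "http://www.example.com/"

req = urllib.request.Request(URL, headers={"User-Agent": "policy-review"})
with urllib.request.urlopen(req, timeout=10) as resp:
    for header in resp.headers.get_all("Set-Cookie") or []:
        for name, morsel in SimpleCookie(header).items():
            print(f"cookie: {name}")
            print(f"  expires: {morsel['expires'] or '(session)'}")
            print(f"  secure: {bool(morsel['secure'])}  httponly: {bool(morsel['httponly'])}")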



IDS and Server Logs Review

Cycle: 5 days -- Degradation: 3% -- RAV: 0.6

Reviewing the server logs is needed to verify the tests performed on the Internet presence, especially in cases where the results of the tests are not immediately visible to the tester. Many unknowns are left to the analyst who has not reviewed the logs.
Expected Results
List of IDS false positives
List of IDS missed alarms
List of packets which entered the network by port number
List of protocols which entered the network
List of unmonitored paths into the network
Tasks to perform for a thorough IDS and Server Log review:
· Test the Firewall, IDS, and Server logging process.
· Match IDS alerts to vulnerability scans.
· Match IDS alerts to password cracking.
· Match IDS alerts to trusted system tests.
· Verify TCP and UDP scanning to server logs.
· Verify automated vulnerability scans.
· Verify services' logging deficiencies.
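
As one illustration of matching test activity to server logs, the sketch below counts entries from the tester's source address in a combined-format web access log. The log path, the log format, and the tester IP are assumptions for illustration; IDS alert formats vary by product and would be parsed analogously.

# Count log entries from the tester's address in a combined-format log.
# Path, format, and address are assumptions for illustration.
import re
from collections import Counter

TESTER_IP = "198.51.100.7"                 # placeholder tester source address
LOG_PATH = "/var/log/apache2/access.log"   # placeholder log location

line_re = re.compile(r'^(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+)')

hits = Counter()
with open(LOG_PATH, errors="replace") as f:
    for line in f:
        m = line_re.match(line)
        if m and m.group(1) == TESTER_IP:
            hits[(m.group(3), m.group(4))] += 1  # (method, path)

for (method, path), count in hits.most_common(20):
    print(f"{count:5d}  {method} {path}")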



Social Engineering

Cycle: 30 days -- Degradation: 18% -- RAV: 0.6

This is a method of gaining valuable information about a system by querying personnel. For example, pretend to be an authority figure; call an administrator and tell him or her that you forgot your password and need immediate access so as not to lose a very important client (money). Many situations can be made up, depending on what information you have already gained about the organisation you are auditing. In some cases it is good to construct a situation which creates a lot of pressure on the victim (to get information fast). This way of gathering information is often very time-consuming and is therefore only applicable if enough resources are available.
Expected Results
Useful information for obtaining access or about insecurities
Tasks to perform for a thorough Social Engineering test:
· Select victim from information already gained about personnel
· Examine the contact methods (telephone, e-mail, Newsgroups, chat, etc.) for the victim within the target organisation.
· Gather information about victim (position, habits, preferences)
· Make up a situation to get in contact (telephone, pretend to be an authority or restaurant, socialize with victim)
· Gather information from victim
· Verify levels of information insecurity susceptibility based on total non-disclosure as the baseline.



Wireless Leak Testing

Cycle: 120 days -- Degradation: 2% -- RAV: 0.01

Expected Results
Find the outer-most wireless edge of the network
Find access points into the network
Tasks to perform for a thorough Wireless Network test:
· Verify the distance in which the wireless communication extends beyond the physical boundaries of the organization
· Verify that the communication is secure and cannot be challenged or tampered with
· Probe network for possible DoS problems



PBX Testing

Cycle: 120 days -- Degradation: 2% -- RAV: 0.01

Securing your organisation's Private Branch Exchange (PBX) systems will help prevent toll-fraud and theft of information.
Expected Results
Find voice mailboxes that are world-accessible
Find PBX systems that allow remote administration
List systems allowing world access to the maintenance terminal
List all listening and interactive telephony systems
Tasks to perform for a thorough PBX test:
· Verify that voicemail PINS are changed often.
· Review call detail logs for signs of abuse.
· Ensure administrative accounts don't have default, or easily guessed, passwords.
· Make sure OS is up to date and patched.
· Check for remote maintenance access to system.
· Check the physical security of maintenance terminal.
· Identify modems, faxes, and automated operators.
· Test dial-in authentications.
· Verify remote dial-in authentication.




