The Crux of NT Security - Phase Two: Securing The Hosts
by Aaron Sullivan
SBC Datacomm Security
last updated Mon, Oct 30, 2000

In the previous article, we looked at how to approach reasonable NT security, mostly at a high level. In this article, the main topic is securing NT as a host on the Internet, specifically as a web server. We'll discuss this security in terms of layering and assume that the outer layer will often be penetrated in some way. We'll define the word "penetrated" as some sort of successful attack, be it a denial of service, a compromise of a user account, or any other action that enables an attacker to harm the target host or prevent it from functioning properly.

Before touching on how to secure an NT host from all those script kiddies, let's talk about NT intrusion technique. To do that, we need to understand how NT is accessed across a network. NT was not built as a multi-user operating system (other than the Terminal Server versions, which open a whole new can of worms that we won't discuss here). Without modification, there is no way to establish an interactive shell account on an NT server. Thus, attackers generally gain access to a machine without the ability to execute commands on it. They can read files from the server and write files to the server just like any user with proper rights can access a share, but the server will not execute code for them. For instance, a user may have access to a share containing the executable winword.exe. The user may have rights to overwrite or delete the file, but they do not have rights to make the server launch it. When they launch the file from their share view, the server does not execute winword.exe. Instead, winword.exe is read across the network and launched on the user's local machine. The same holds true for FTP and WWW access: a user can read and write files, but the server will not execute code for that user (there are packages that allow a server to be administered through web pages, but they don't come by default on an NT IIS server).

What does this mean? In essence, it makes it very difficult for an attacker (even one who gains administrative rights) to gain full control over the victim server. The attacker is forced to install a Remote Access Trojan (RAT) with the power to manipulate things like the registry and the console (meaning the GUI and the command shell) in order to proceed in a timely manner. As mentioned in the last article, this RAT could be anything from a telnet-able command shell that listens on a specific port to SMS remote control, Netbus, Back Orifice, Donald Dick, etc. The bottom line is that attackers have to land some sort of application on their victim servers to proceed. Some exploits, such as RDS, bypass this to a small extent by passing commands to a command shell, but attacking via this method is slow and inefficient and generally doesn't return any sort of response. In other words, if I run RDS as is and pass it a "dir" command, I don't see the results; no directory listing comes back to my terminal. To see those listings, I would have to input the command so as to pipe the results to a file that I could then grab via HTTP or FTP. I could proceed by running netstat, nbtstat, and a few other programs (again, piping them out to files I could grab via HTTP or FTP) to begin digging into the rest of the network; but this kind of attack is not only slow, it is also very noisy, as it generates extra files that must be tracked and cleaned up later. Basically, in order to continue an attack into the rest of the network, I have to plant an application that will allow console access. It is not advisable to leave the NT resource kit on any machines that are available from the Internet. If the resource kit is installed, it provides attackers with pretty much everything they need to launch a very successful attack without otherwise having interactive console access. Combine the resource kit with a RAT, and you're in big trouble.
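To illustrate the blind style of attack described above (the paths here are hypothetical, not taken from any specific incident), an attacker using RDS might redirect output into the web root and then fetch it over HTTP:

```shell
REM No response comes back from the exploit itself, so the output is piped
REM into a file under the web root, then retrieved as http://victim/out.txt.
cmd /c "dir c:\winnt > c:\inetpub\wwwroot\out.txt"
```

Every command run this way leaves another file behind, which is exactly why this style of attack is noisy and why the attacker wants real console access instead.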

Once the RAT application is planted, the attackers will attempt to access the victim server from a client machine that sits outside the victim server's network. Before they can do that, they must get the application to launch. Using an exploit such as RDS, this is easy if the RAT runs as a service, since it will then be available again whenever the machine shuts down and starts back up. Otherwise, an "at" command must be used to schedule the launch of the RAT at a defined interval. An example of using "at" to execute a file is shown below.

at \\frontweb 08:00 cmd /c "c:\inetpub\wwwroot\nxt.exe"

This command would launch the nxt.exe RAT at 8:00 AM on the frontweb server. The "at" command has switches available to make that command launch at recurring intervals so that even if the server is shut down, the RAT will launch again the next time an interval is crossed.
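The recurring form might look like the following (server name and path carried over from the example above; the particular weekday schedule is an arbitrary choice for illustration):

```shell
REM Re-launch the planted file every weekday at 08:00. The /every switch
REM registers a recurring job, so the RAT comes back after a reboot once
REM the scheduler service is running again.
at \\frontweb 08:00 /every:M,T,W,Th,F cmd /c "c:\inetpub\wwwroot\nxt.exe"
```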

It is also advisable to upgrade your task scheduling service from atsvc.exe to mstask.exe. This can be done very easily by upgrading the system to IE 5. With atsvc.exe, the scheduler can be launched from the command prompt. With mstask.exe, the scheduler can only be started from the Service Control Manager. This improves security because attackers will no longer be able to use the "at" command unless the scheduler service is running (which it should not be when the server is in production).
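As a sketch of keeping the scheduler off on a production server (using the service's internal name, "schedule", as NT registers it):

```shell
REM Stop the scheduler so "at" jobs can no longer be registered or run.
net stop schedule

REM Then set the Schedule (Task Scheduler) service's startup type to
REM Disabled in Control Panel > Services so it stays off across reboots.
```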

Once the RAT is launched, the attackers will connect to it over the network on an arbitrary (generally customizable) port and begin to enumerate the victim network from within. Primary points of attack will be domain controllers, database servers (in my experience, these are most commonly Microsoft SQL servers, which also receive the most study from the exploit community at large), and Exchange mail relays. Upon finishing enumeration, more targets will be picked and files will be pillaged. This process will go on until the attacker is satisfied or caught. If the attacker finishes up smoothly, the next step is to clean up, and that involves clearing the event logs.

Let's go back now and figure out exactly how the security architecture falls apart at the hands of a talented script kiddie. The following diagram attempts to explain the process graphically. View the thick vertical lines as doorways through which the attackers must pass in order to successfully crack the victim server. This approach assumes that most front-end servers will be vulnerable to some fairly recent (up to six months old) unpatched exploit that can be used to break in. The point, as mentioned in the first article, is to make the invasion stop there: make the invasion of the server stop at the point just after access (just after the blue line in the chart).

A production web server with sensitive material on its segment of the network should not give any user coming from the Internet the ability to plant files on it. Realizing that this is a big thing to ask and will, realistically, rarely be implemented, it is suggested that accounts with access to plant files anywhere on the system be combed thoroughly to ensure there is no loophole through which to escalate. For an NT administrator, pay special attention to the IUSR accounts created by IIS and the applications that use them.

The red "X's" are critical points beyond "Access" that, if properly handled, will frustrate your script kiddie attackers to the point of giving up or performing actions that will likely expose their activity. The way to handle them properly is to use the applications associated with these functions. Following is a section on how to deal with each of these critical points.

Dealing With Rights Escalation:

The way to prevent rights escalation is to leave no loopholes in your system's ACLs in the areas of policies, services, and file systems. The applications utilized are the Service Control Manager, User Manager (and the policy editor via User Manager), and the file manager. As a note, if your file systems aren't running on NTFS, you can't use the file manager to specify rights per directory and file on the local file system. Convert them to NTFS. There's an excellent guide on SecurityFocus.com to hardening this area of an NT server. Here's the link:

http://www.securityfocus.com/focus/microsoft/nt/ntsecure.html
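As a command-line sketch of the NTFS points above (the drive letter, path, and IUSR account name are assumptions for illustration, not taken from any particular server):

```shell
REM Convert a FAT volume to NTFS (one-way; back up the volume first).
convert d: /fs:ntfs

REM Hypothetical ACL tightening on the IIS web root: /T recurses into
REM subdirectories, /G replaces the ACL with only the grants listed.
REM IUSR_FRONTWEB stands in for the machine-specific anonymous IIS account.
cacls c:\inetpub\wwwroot /T /G Administrators:F IUSR_FRONTWEB:R
```

Note that replacing an ACL with /G (rather than editing it with /E) removes every entry not listed, which is the point here: no write access for the anonymous account anywhere it isn't explicitly required.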

Dealing With RATs:

The way to defend against a RAT is to put virus scanners on your web servers. A good virus scanner is designed to prevent users from stopping it from the console (command prompt), thus helping to prevent an attacker with system or administrator access from bypassing the virus protection. A virus scanner will kill most RATs as soon as they're planted and provide alerts that nefarious activity is occurring on your server.

Dealing With Pilfering Locally:

Depending on the resources stored on the victim server, this can be a major issue that deserves close scrutiny, or it can be a very minor one. We can safely assume that if an attacker has administrative or system-level access to a web server, he or she also has the ability to deface a web page. There isn't a lot that can be done to prevent that, other than monitoring and patching. The more serious pilfering happens when an attacker gets access to password files, pending litigation documents, HR documents, or account databases. The obvious solution is to not store sensitive material on front-end servers. If you must store sensitive material on the front end, consider a utility such as PGPdisk or PGPencrypt to store the files. Also, don't place NT domains on publicly available network segments. The reason is that if a domain controller is accessible, there are probably occasional authentication sessions flying around on the network, which can easily be picked up by software such as L0phtcrack. In addition, the domain controller becomes a target holding accounts that, if compromised, will work across the entire NT domain. This is especially dangerous if the compromised domain is the same domain that internal users rely on, because their accounts are then compromised as well. NT passwords and password files can be protected further by installing syskey and disabling LanMan authentication. Here are three links (be sure to check out the third if you install syskey) for more information on syskey and disabling LanMan:

http://support.microsoft.com/support/kb/articles/Q143/4/75.asp
http://support.microsoft.com/support/kb/articles/Q147/7/06.asp
http://support.microsoft.com/support/kb/articles/Q248/1/83.asp
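As a sketch of the LanMan change covered in the second link above (Q147706), the registry value can be set with a .reg file like the following. The level shown (2, "send NTLM authentication only") is one possible choice, and the setting requires a recent service pack; verify compatibility with any Windows 9x clients before applying it:

```
REGEDIT4

; Stops the server from sending LanMan password responses on the wire,
; which are the easiest hashes for a sniffer such as L0phtcrack to crack.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa]
"LMCompatibilityLevel"=dword:00000002
```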

Concerning pilfering of SQL databases: if you use MS SQL, there are few options to protect the actual database files. Other SQL server vendors, such as Oracle and MySQL, allow more functionality in that realm. The option for MS SQL is to encrypt the data in the database via stored procedures, but that falls into the database administrator's realm and is beyond the scope of this article. MS SQL servers can also be protected through the way in which they are placed on the network and accessed. That material will be covered in the next article in the series, titled "Controlling and Monitoring Communications," as this one deals primarily with security at the host level. For those curious about securing SQL databases at the host level, the following link is available:

http://www.mssqlserver.com/faq/general-encrypt.asp

Best practices also include using different passwords on the publicly available segment than the passwords on the internal network segment. A locally pilfered password will assuredly be used in other logon attempts as the attack progresses.

Exchange mail relay servers are the last major point of concern in this section (DNS could also be covered, but isn't here). The mail relay should be configured to pass only SMTP and POP traffic, from both the inside network and the outside network. NetBIOS connections between the mail relay and the internal Exchange server are a serious no-no. The Internet Mail Connector should be the only active Exchange service running. There's an excellent article detailing Exchange security much more exhaustively than this one:

http://www.securityfocus.com/focus/microsoft/exchange/exmain.html

Dealing With Enumerating Other Hosts:

Some theory will be provided along with a link providing all the details. Part of the theory of providing layered security is to make every host reasonably hardened. It is foolish to say something like "Well, my firewall or front-end web server will never be breached or bypassed, so I don't need to secure these other hosts." If the hosts are on a publicly available segment, then they should be hardened.

This topic will be covered to a large extent in the next article, as most of the issues concerning enumeration of other hosts on the network, and how to monitor and prevent it, also involve traffic monitoring techniques. Here are two links straight from SecurityFocus.com on hardening NT servers.

http://www.securityfocus.com/focus/microsoft/nt/ntsecure.html
http://www.securityfocus.com/data/tools/nt_audit_script12.zip

Finally, as mentioned before, the attackers will clean up by clearing event logs. Event logs, by nature, are protected from tampering unless the attacker can access them with something like the event viewer or some other trusted application. In other words, the event log can't be easily doctored, only cleared.

There happens to be a very good piece of software for protecting against the clearing of event logs, located here:

http://www.core-sdi.com/english/freesoft.html

The software (Winaudlog and Slogger) is in beta at this point and temporarily offline, but it is likely to be a good solution, as it prevents clearing or tampering with event logs without a remote auditor noticing.

Let's go back and show the results of following these suggested measures.

With none of the suggested measures in place, you get a continuous cycle of this:

By at least hardening other hosts on the network, to a solid extent, you'll generally stop a script kiddie at this level:

By removing sensitive files on your front-end servers and hardening them to the fullest extent, you'll generally stop a script kiddie at this level:

By placing a good virus scanner solution on your front-end servers and keeping them up to date, you'll generally stop a script kiddie at this level:

Finally, by solidifying all of your local loopholes and hardening that front-end set of servers to a solid extent, you'll stop a script kiddie at this level, frustrated:

This, unless the script kiddie has an exploit granting administrative privileges, puts him or her in the same place as every other user accessing your system.

Stay tuned for the final article, titled "Controlling and Monitoring Communications." In it, we discuss how to structure, block, and monitor traffic in an NT network properly. Design will be discussed at great length, along with mention of a few IDS systems, the need for application-based firewalls and some general guidelines on configuring them, and structuring encrypted traffic internally so as to curb sniffing of sensitive information.

Aaron Sullivan
SBC DataComm - Network Security
Network Security & Penetration Engineer

Aaron Sullivan is an engineer (specializing in building, securing, and penetrating the Microsoft NOS environment) for the fine, fine SBC Datacomm Security Team. If you would like to request the services of SBC's security team, you may visit the web-site or call (800) 433 1006 and leave the request there.



Copyright © 1999-2000 SecurityFocus.com