Monday, January 14, 2008

Auto-Negotiation Crisis on Switches


Auto-negotiation is the feature that allows a port on a switch, router, server, or other device to communicate with the device on the other end of the link and determine the optimal duplex mode (full or half duplex) and speed for the connection. The driver then automatically configures the interface to the values determined for the link.


How Auto-Negotiation Works

Auto-negotiation is a protocol, and like any protocol it works only if it is running on both sides of the link. If one side of the link is running auto-negotiation and the other side is not, auto-negotiation cannot determine the speed and duplex configuration of that other side. When both sides run auto-negotiation, each interface advertises the speeds and duplex modes at which it can operate, and the best match is selected (higher speeds and full duplex are preferred).
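On a Linux host you can actually see what an interface is advertising. A quick check (a sketch; the interface name eth0 is an assumption, adjust to your box):

--------------------
# Show what the NIC supports, what it advertises, and what got negotiated
ethtool eth0

# Look for lines like these in the output:
#   Advertised link modes:  10baseT/Half 10baseT/Full
#                           100baseT/Half 100baseT/Full
#   Speed: 100Mb/s
#   Duplex: Full
#   Auto-negotiation: on
--------------------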


The main source of trouble is what happens when auto-negotiation fails: another feature called parallel detection kicks in.
Parallel detection works by sending the signal being received to the local 10Base-T, 100Base-TX, and 100Base-T4 drivers. If any one of these drivers detects the signal, the interface is set to that speed. Parallel detection determines only the link speed, not the supported duplex modes.


Because of the lack of widespread full-duplex support on 10Base-T, and the typical default behavior of 100Base-T, when auto-negotiation falls through to the parallel detection phase (which only detects speed), the safest thing for the driver to do is to choose half-duplex mode for the link.


When Auto-Negotiation Fails


When auto-negotiation fails on 10/100 links, the most likely cause is that one side of the link has been set to 100/full, and the other side has been set to auto-negotiation. This results in one side being 100/full, and the other side being 100/half.


Figure 1: A link on which auto-negotiation has failed.






In the above diagram, TX and RX represent the transmit and receive lines respectively. In half-duplex mode, TX and RX depend on each other: a device cannot transmit while a frame is arriving on its RX line, and vice versa. If a device transmits while it is receiving, a collision occurs and the frames are lost. In full-duplex mode, TX and RX do not depend on each other; devices transmit and receive independently without monitoring each other's lines.


When one side of the link is full-duplex and the other side is half-duplex, a large number of collisions will occur on the half-duplex side. Because the full-duplex side sends frames without checking the RX line, if it’s a busy device, chances are it will be sending frames constantly. The other end of the link, being half-duplex, will listen to the RX line, and will not transmit unless the RX line is available. It will have a hard time getting a chance to transmit, and will record a high number of collisions, resulting in the device appearing to be slow on the network. The issue may not be obvious because a half-duplex interface normally shows collisions. The problem should present itself as excessive collisions. This will choke the network bandwidth and reduce the performance of the network.
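If you suspect a mismatch, the collision counters tell the story. A quick look on the half-duplex side (a sketch, assuming a Linux box with an interface named eth0):

--------------------
# A steadily climbing collision count on a busy link is the classic
# symptom of a duplex mismatch
ifconfig eth0 | grep -i collisions

# On a Cisco switch, check the suspect port for late collisions:
# show interfaces fastethernet 0/1
--------------------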


Auto-Negotiation Best Practices


Using auto-negotiation to your advantage is as easy as remembering one simple rule:

1. Make sure that both sides of the link are configured the same way. If one side of the link is set to auto-negotiation, make sure the other side is also set to auto-negotiation. If one side is set to 100/full, make sure the other side is also set to 100/full (see the sketch after the note below).

Note: Be careful about using 10/full, as full duplex is not supported on all 10Base-T Ethernet devices.
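On a Linux server, ethtool can enforce either choice; a minimal sketch, again assuming the interface is eth0:

--------------------
# Preferred: let both sides negotiate
ethtool -s eth0 autoneg on

# Or, if the switch port is hard-coded to 100/full, match it exactly
ethtool -s eth0 speed 100 duplex full autoneg off
--------------------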


Gigabit Ethernet uses a substantially more robust auto-negotiation mechanism than the one described above. Gigabit Ethernet should thus always be set to auto-negotiation unless there is a compelling reason not to do so (such as an interface that will not properly negotiate), and even then only as a workaround until the misbehaving part is replaced.

Reference


1. Gary A. Donahue, "Network Warrior", O'Reilly Publications.


Sunday, November 25, 2007

Spoofing Source ports and getting in

An interesting but probably reasonably well-known technique for getting past firewalls. Say there's a firewall out there which has, for some reason, been "configured" to allow traffic from source port 23 through. Why would it do this? Well, with most firewalls being stateful these days, you don't need bi-directional rules allowing all traffic with, say, a source port of 23 through if you've already allowed it outbound. To explain things a bit more clearly:

If you already have a rule like:
Internal Network -- Internet -- allow traffic with a destination port of 80

Then you don't need to have a rule like:
Internet -- Internal Network -- allow traffic with a source port of 80

That's what a stateful firewall does: it keeps track of which client issued which connection to port 80 of which server, and makes sure the response gets returned to that very client. So a bi-directional rule is most probably a misconfiguration.
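To make that concrete, here's what the two approaches look like in iptables terms (a hypothetical rule set, purely to illustrate the difference):

--------------------
# Stateful: allow outbound web traffic and let the state table
# match the replies -- no inbound source-port rule needed
iptables -A OUTPUT -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A INPUT -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT

# The misconfiguration this post is about looks more like this:
# anything claiming source port 80 gets straight in, state or no state
# iptables -A INPUT -p tcp --sport 80 -j ACCEPT
--------------------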

Still, I won't be shocked to see many people out there still doing stuff like this, so in case you happen to come across a firewall which lets traffic in based on its source port, you could use fpipe to spoof the source port and reach services inside that are otherwise "inaccessible" from the outside.

I won't go into the details of how to use fpipe because that's easily obtainable from Foundstone. Effectively, though:

You are on Machine 1.
You set fpipe to listen on port 5555 on Machine 2 and connect to port 3389 on Machine 3 (the victim).
You set fpipe to use port 22 as its source port, because inbound SSH has been allowed.

So when you connect to Machine 2:5555, Machine 2 will initiate a connection using source port 22 (allowed by the firewall) to Machine 3:3389. Once done, it forwards the connection back to Machine 1, where you're doing your pen-testing. Quite cool... but really it can be exploited only in the case of some really lazy sysadmins, or a firewall that's as old as Fred Flintstone ;)
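For the curious, the Machine 2 command would look roughly like this (a sketch from memory of fpipe's flags; check Foundstone's docs for the exact syntax, and 10.0.0.3 here is a made-up victim IP):

--------------------
# Listen on 5555, use source port 22 for the outbound leg,
# and forward to port 3389 on the victim
fpipe -l 5555 -s 22 -r 3389 10.0.0.3
--------------------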

Hidden Cache in IE??

This one's a pretty old one which we rediscovered during a recent assessment. We were checking whether private user information or data was getting cached offline. We used both IE and Firefox to check this. While Firefox offers the easy-to-use about:cache to see what's been stored offline, IE doesn't do the same, and you need to actually go inside "Temporary Internet Files" to find out if it has stored anything there. Now there are times when you "Work Offline", hit Ctrl+H to see all your browser history, and click on an entry to see if you can view it. Sometimes you'll find that there's a cached page, but you are not able to see it in Temporary Internet Files at all.

So what we did was fire up FileMon, a cool program which tracks what files get accessed on your disk each time you do something. So, for example, say you click on a page in History and can see its contents. Using FileMon and setting the appropriate filters, you can pinpoint the exact file that is storing the cached resource. When we did this for this application, we found files under the L:\Documents and Settings\arvind\Local Settings\Temporary Internet Files\Content.IE5 directory being referenced each time something in History was accessed. This is actually a fairly old location in which IE stores its files to "improve performance", and on which there was an article by The Riddler a long time ago. That article can still be found here:
http://sillydog.org/mshidden21b.html#9.1

Anyway, the point here is: don't forget to look into Content.IE5 and History.IE5 when you next do an appsec assignment. You might be surprised at what you find :)
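One gotcha: the folders under Content.IE5 are hidden, so a plain directory listing won't show them. From a command prompt (the path below is the one from our assessment; adjust for your own profile):

--------------------
cd "L:\Documents and Settings\arvind\Local Settings\Temporary Internet Files\Content.IE5"
dir /a
--------------------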

Friday, November 23, 2007

Firefox -- An RFC violation??

The "no-store" directive says that no information should be stored on your hard disk and the application must make a best effort to remove the information from browser memory(volatile storage) as soon as possible. Now the "as soon as possible" is a dangerous phrase; as it could mean different things to a developer and a security consultant. However we came to the conclusion that the least that needs to happen is that all private pages shouldn't be cached at all on disk. Also in the event of the pages remaining in browser memory there is a high possibility of the pages remaining in the browser memory even after logging out. So ideally the moment you logout the browser needs to close the window itself thus flushing any pages in memory. However there have been places where this doesn't happen and pages are cached on disk even after a Firefox browser close. This doesn't happen with IE though strangely enough. Wonder why?

The "no-cache" directive says that some specific pages must not be cached at all and every time a request is made for that resource it should be revalidated against the server before the page is served. However in a recent appsec assignment we saw Firefox caching pages irrespective of the "no-cache" directive. So when we did a "Work Offline" in Firefox we were still able to access the pages. However on browser close these pages disappeared. This only proves that the pages were in browser memory for Firefox. This behavior was not repeated with IE or Opera where it refused to let us see pages on Working Offline? So why the discrepancy with Firefox? Why doesn't the browser try and contact the server before serving the page? More later when I get time to hack around ...Do reply if you know why!!

Sunday, November 4, 2007

NTLM authentication using Burp proxy

Hey buddies,

Just wanted to share my experience while doing an appsec for one of our clients.

I was doing appsec of an application which used NTLM authentication. With every request I sent, the NTLM variables were queried, and only if authenticated did I get a response.

After opening the browser, if I want to log in to the application, I first have to provide my local domain password. This fetches the login page of the application, where I can then log in using my application credentials.

Here I was struggling to set up a local proxy, because whenever I set one up and sent a new request, the NTLM variables were queried and I was not authenticated: the local proxy had broken my NTLM authentication.

After struggling for some time, I found an option in Burp proxy: under the "comms" tab there is a check box for "do www authentication". Basically, select this check box, choose NTLM authentication, and provide the destination IP, domain name, domain password, and so on.

This way, working through Burp, I solved the problem and continued the testing. But remember that closing Burp removes this setting, and you will have to configure it again to continue testing.
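Incidentally, a quick way to sanity-check the NTLM credentials outside of both the browser and Burp is curl, which speaks NTLM natively (host and account below are made up):

--------------------
curl --ntlm -u 'DOMAIN\prashant:password' -I http://app.example.com/
--------------------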

Cheers,
Prashant

Friday, October 26, 2007

Mail Relaying

What is Relaying?

Relaying is when someone outside of your organization uses your SMTP server to send mail out over the Internet. It's a big problem if you have an "open" relay, because that relay will be used by spammers to send their mail.

The main problem with spammers using your server to send e-mail out over the Internet is that your server's information will be in the headers of the messages, and the recipients of those messages will track you down. If you are being used as a relay, chances are you will be contacted by someone complaining, and ultimately you will be "blacklisted" for sending spam. That means your server gets added to a list of servers found to have open relays, and many companies block messages from servers on such a blacklist. So when you try to send legitimate e-mail, chances are it will be returned. Once your server has been placed on a blacklist it is very hard to get taken off, which could cost your organization a lot of money in lost revenue, and you could lose your credibility.

How to test for Mail Relaying?

How can you check your SMTP server for relaying? Simple: all you have to do is use a computer outside of your organization and type the commands shown below in a command shell.

To understand the process, consider the following example: mail.example.com is the mail server you are checking, sender@example.com is a valid email account at mail.example.com (or a fake email address; you can try both), and youremail@outsideaddress.com is the email account you want the message to go to.

Given below is the set of steps used to send mail through a mail server that is vulnerable to mail relaying.

Steps:

  1. TELNET mail.example.com 25
  2. EHLO mail.example
  3. MAIL FROM: <sender@example.com>
  4. RCPT TO: <youremail@outsideaddress.com>
  5. DATA

From: sender@example.com
To: youremail@outsideaddress.com
Subject: Relay test

This is a relay test and only a test.

(Type . or [enter].[enter] to end data)

  6. QUIT

Below is a detailed example of a relay test against an SMTP server that supports pipelining, showing each command you type and the server's response.


What you type is shown flush left; the server's responses are indented.

TELNET mail.example.com 25
    Trying 10.10.10.1.
    Connected to mail.example.com.
    Escape character is '^]'.
    220 ESMTP ESMTP

EHLO 10.10.10.1
    250-mail.example.com
    250-PIPELINING
    250-SIZE 9999360
    250-VRFY
    250-ENHANCEDSTATUSCODES
    250-8BITMIME
    250 DSN

MAIL FROM:<>
    250 2.1.0 OK

RCPT TO: <youremail@outsideaddress.com>
    250 2.1.5 OK

DATA
    354 End data with <CR><LF>.<CR><LF>

From: sender@example.com
To: youremail@outsideaddress.com
Subject: Relay test

This is a relay test and only a test.

(type . or [enter].[enter] to end data)
    250 2.0.0 OK: Queued as T22122A5

QUIT
    221 2.0.0 bye
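If you'd rather not drive the SMTP conversation by hand, a tool like swaks can run the same relay test in one line (a sketch; check the swaks documentation for its full option set):

--------------------
swaks --server mail.example.com --from sender@example.com \
      --to youremail@outsideaddress.com \
      --header "Subject: Relay test" \
      --body "This is a relay test and only a test."
--------------------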


Preventing message relaying with MS Exchange

Before you start, check which version you are running; you must be on Microsoft Exchange Server 5.5 or later. Then follow these steps:

  1. Go to the Internet Mail Service Properties dialog box in Microsoft Exchange.
  2. Select the Routing tab at the top.
  3. Select the option "Reroute incoming SMTP mail (required for POP3/IMAP4 support)".
  4. Add an entry in the Routing section for each domain you host.
  5. Click the Routing Restrictions button.
  6. Make sure "Hosts and clients with these IP addresses" is checked, and leave the list of IP addresses blank.

For further information you can check some of the reference websites below.

1. http://www.msexchange.org/pages/article.asp?id=54

2. http://www.slipstick.com/exs/relay.htm#basics

3. http://www.auditmypc.com/freescan/readingroom/relay.asp

4. http://support.microsoft.com/?kbid=304897

Wednesday, October 17, 2007

Enumerating OS Accounts through Web Server

We noticed an interesting vulnerability during a recent pentest. This vulnerability is specific to certain versions of Apache web server running on *nix boxes.

The vulnerability is that one can enumerate OS accounts just by looking at the response code and message returned by the web server when one tries to access the home directory of a particular user. Suppose one tries to access root's home directory; the request from the browser would look something like this:

http://X.X.X.X/~root

GET /~root HTTP/1.1
Host: X.X.X.X
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.7) Gecko/20070914 Firefox/2.0.0.7
Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive


The response received had error code 403:

HTTP/1.x 403 Forbidden
Date: Wed, 17 Oct 2007 06:45:58 GMT
Server: Apache
Keep-Alive: timeout=15
Connection: Keep-Alive
Transfer-Encoding: chunked
Content-Type: text/html

An error message like "You don't have permission to access /~root on this server." is also displayed.

When the same request is tried for a nonexistent user (neo in the following case), the error code and message received in the response are different.

http://X.X.X.X/~neo

GET /~neo HTTP/1.1
Host: X.X.X.X
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.7) Gecko/20070914 Firefox/2.0.0.7
Accept:text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive


The response received had error code 404. The error message in this case was "The requested URL /~neo was not found on this server".

HTTP/1.x 404 Not Found
Date: Wed, 17 Oct 2007 06:45:58 GMT
Server: Apache
Keep-Alive: timeout=15
Connection: Keep-Alive
Transfer-Encoding: chunked
Content-Type: text/html
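To sweep a list of candidate accounts quickly, a small shell loop does the trick (a sketch; the user list is illustrative, and X.X.X.X is the target as above):

--------------------
# 403 => home directory exists (valid user), 404 => no such user
for user in root bin daemon arvind neo; do
    code=$(curl -s -o /dev/null -w "%{http_code}" http://X.X.X.X/~$user)
    echo "$user: $code"
done
--------------------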

Friday, October 12, 2007

Welcome!!!

Hey guys,
Thought we'd share a few tidbits of the work that we do for a living!! The Armor Blog is a fun way to share info in a way that every techie understands. So thanks to Abhishek's initiative in starting this off, JD and I thought we'd kick the blog off with a series of Nessus false positives... hope you enjoy this!!!

Nessus -- A peek under the hood - IV

--- TRACE running on WebServer

Very low risk here, and at times even informational, but useful to report, especially when you have very few findings to report ;). Basically, while establishing an HTTPS connection, an SSL handshake happens; think of it as a three-way handshake for SSL. Once this completes, it's all normal HTTP traffic. The key here is to somehow re-create that SSL connection before you start running your HTTP commands like GET, TRACE, OPTIONS, etc. So effectively what we did is create an SSL tunnel to port 443 of the destination webserver using a third-party tool called stunnel.
Once you install stunnel, there's a predefined script you can use to run it against any remote host, found by default in /usr/local/share/doc/stunnel/examples/script.sh. So when you make a connection to 192.168.2.249, you'll need to edit the REMOTE_HOST variable in script.sh to point at that IP.

Run the script to establish an SSL tunnel to 192.168.2.249. Run tcpdump in another window if you're as interested as me in getting a packet-level view of the situation. Once the tunnel is established, you can type HTTP commands as you normally do. Here's the sequence of commands to find out if TRACE is enabled on the remote server:

--------------------
[root@pal-lin-arvind Setups]# /usr/local/share/doc/stunnel/examples/script.sh
client script connecting 192.168.2.249:443
OPTIONS * HTTP/1.1
Host: 192.168.2.249

HTTP/1.1 200 OK
Allow: OPTIONS, TRACE, GET, HEAD, POST
Content-Length: 0
Server: Microsoft-IIS/6.0
Public: OPTIONS, TRACE, GET, HEAD, POST
Date: Fri, 12 Oct 2007 16:36:47 GMT
--------------------

The only open connection, obviously, is the SSL tunnel, as can be seen here:
[arvind@pal-lin-arvind ~]$ netstat -na | grep -v unix | grep 443
tcp 0 0 192.168.2.92:47032 192.168.2.249:443 ESTABLISHED
[arvind@pal-lin-arvind ~]$
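As an aside, if stunnel isn't handy, openssl's s_client gives you a similar interactive tunnel in one command; type the OPTIONS request (followed by a blank line) once it connects:

--------------------
openssl s_client -connect 192.168.2.249:443 -quiet
OPTIONS * HTTP/1.1
Host: 192.168.2.249

--------------------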

Nessus -- A peek under the hood - III

--- Check if internal IP is revealed on the server

Nessus has a plugin which sends a specific query to the destination web server and looks at the "Content-Location" field of the response. If the server is not patched, this field reveals internal IP addresses. A normal request for a valid webserver page can be constructed as follows; the response will reveal internal IP information:

[arvind@pal-lin-arvind 192.168.2.249]$ telnet 192.168.2.249 80
Trying 192.168.2.249...
Connected to 192.168.2.249 (192.168.2.249).
Escape character is '^]'.
GET / HTTP/1.1
Host: 192.168.2.249

HTTP/1.1 200 OK
Content-Length: 1433
Content-Type: text/html
Content-Location: http://192.168.2.249/iisstart.htm

As you can clearly see, the Content-Location header reveals the internal IP address of the webserver. Try this against any public webserver; it's not always that you'll get this leak (just in case you're thinking the internal IP was revealed here only because I tested it on our LAN). Nessus and Qualys both report this finding, but Qualys actually gives you an internal IP without telling you the exact page it found it on. Sure, sometime later it gives you the request it used, but that doesn't seem to work for some reason; for two different clients we've met with failure and have scrapped the finding. So the only way to be 100% sure that an internal IP is not revealed is by making requests to EVERY SINGLE PAGE on the webserver. You're busy, you say??? That's where cool tools like wget come in. Along with the great grep, in two seconds you have a complete list of all pages that return a Content-Location field. Here's the sequence of commands, which we plan to hack up into a script very soon (Jaideep's idea):

[arvind@pal-lin-arvind ~]$ wget -r --save-headers 192.168.2.249
[arvind@pal-lin-arvind ~]$ cd 192.168.2.249/
[arvind@pal-lin-arvind 192.168.2.249]$ grep -r Location *
index.html:Content-Location: http://192.168.2.249/iisstart.htm
[arvind@pal-lin-arvind 192.168.2.249]$

What we've done is download the entire website on 192.168.2.249, saved its response headers, and grepped for Location. Every request which obtained a response with Content-Location in it will be caught. That way we can be sure that we've tried every possible page. This one's just based on initial findings though; R&D is still on, and we'll update you if we stumble onto something.

And finally, at 11:00 pm on a Friday evening, Jaideep "the Perl dude" is done with the script. Here it is; just save it as internal_ip.pl and run it as follows:
perl internal_ip.pl IP_ADDRESS

Here's the magic script:
-------------
use strict;
use warnings;

# usage: perl internal_ip.pl <ip_address>
die "usage: perl internal_ip.pl <ip_address>\n" if @ARGV != 1;

my $ip = $ARGV[0];

# Mirror the whole site, saving the HTTP response headers into each file
system("wget -r --save-headers $ip") == 0 or die "wget failed: $?";

# Grep every downloaded page for a Content-Location header
chdir($ip) or die "cannot chdir to $ip: $!";
system("grep -r Content-Location * > res.txt");
-------------

Nessus -- A peek under the hood - II

-- Check if weak ciphers are installed on the server

The previous post (below this one) ended with the line "...and the cipher DES-CBC3-MD5 was used to encrypt the connection." So now, if I want to force a connection with a weak cipher, can I do it? If it's supported on the server, the answer is YES. Nessus throws you a lot of "informational findings" saying RC4-MD5 is a weak cipher and is supported. You can verify this for each weak cipher Nessus reports by using this command:

openssl s_client -connect 192.168.2.249:443 -(protocol) -cipher (cipher name reported by Nessus)
For example: openssl s_client -connect 192.168.2.249:443 -ssl2 -cipher RC4-MD5

Repeat this for all ciphers that Nessus reports as weak ciphers.
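To avoid typing that repeatedly, wrap it in a loop over whatever cipher names Nessus flagged (the list below is illustrative; substitute your own):

--------------------
for cipher in RC4-MD5 EXP-RC4-MD5 DES-CBC-SHA; do
    echo "=== $cipher ==="
    echo | openssl s_client -connect 192.168.2.249:443 -ssl2 -cipher $cipher 2>&1 \
        | grep -E 'Cipher|error'
done
--------------------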

Nessus -- A peek under the hood - I

-- Check if SSLv2 is supported on remote server

This appears primarily on port 443 (HTTPS). You need a Linux box with the openssl client installed (type openssl; if it comes back with options, it's installed). Using the openssl binary, we can establish a connection to the destination server on port 443 using the SSLv2 protocol. If it connects, shows you the certificate, and reports the connection protocol as SSLv2, then SSLv2 is supported on the remote server. Here's the exact command you'll use:

openssl s_client -connect 192.168.2.249:443 -ssl2

When you get the response back, check right at the bottom for stuff like this:
---------------------------------------------------------
SSL handshake has read 599 bytes and written 239 bytes
---
New, SSLv2, Cipher is DES-CBC3-MD5
Server public key is 1024 bit
---------------------------------------------------------
This shows that an SSLv2 connection was established to the server and the cipher DES-CBC3-MD5 was used to encrypt the connection.

Calling "User Agent Blocking" BLUFF

Bypassing a Web App firewall which has blocked Paros

There are clients who have internal security teams, and these guys obviously know their stuff, because recently we came across a client who'd actually done his homework and blocked Paros off. Now the first question here is: how can you block someone from using software installed on their local hard drive? The only way would be if this software sends out some kind of unique information about itself which the remote IPS/web app firewall can identify. We broke our heads for quite some time trying to figure out why Paros wouldn't work at all, while all direct connections and connections through Burp, Achilles and the rest seemed to work fine. Very strangely, the first request seemed to get sent okay, but after that... nothing. This meant that Paros was sending out something in its first packet which the destination was catching. Suddenly Jaideep came up with the theory of User-Agent blocking, which made sense: Paros, as we confirmed, appends the string Paros/3.2.13 to the User-Agent header before sending a request. So there's something at the other end pattern-matching requests for "Paros/3.2.13". Since we were using Paros as our browser proxy, all our requests were going through Paros, so all requests had Paros appended to the User-Agent string, and so all requests were getting blocked at the destination.

So what's the solution to this? I somehow WANT to use Paros. We configured Paros on our machine so all our requests went through it. Now Paros is going to get blocked, so we used the proxy chaining feature and configured Paros to forward its requests to Burp on another machine. So now the flow of traffic is Paros -- Burp -- destination server. When the final request goes to the server, it's going from Burp instead of from Paros, and Burp doesn't add anything to the User-Agent field. So we did manage to use Paros; to be fair, we might as well have just used Burp, but the challenge for a techie is finding new ways of doing old things, so we're happy as of now.

Next step: this seemed to work only when Burp and Paros were on different machines. If both were on the same machine, the packets didn't even leave the network card for some reason. It was the same in the initial case: with just Paros, all packets after the first never left the network card. Why?? We'll let you know when we find out...