Sunday, November 25, 2007
Spoofing Source ports and getting in
If you already have a rule like:
Internal Network -- Internet -- allow traffic with a destination port of 80
Then you don't need to have a rule like:
Internet -- Internal Network -- allow traffic with a source port of 80
That's what a stateful firewall does: it keeps track of which client opened which connection to port 80 of which server, and makes sure the response comes back to that very client. So the second, inbound rule is almost certainly a misconfiguration.
Still, I wouldn't be shocked to see plenty of people out there doing stuff like this. So in case you happen to come across a firewall which lets traffic in based purely on its source port, you can use FPipe to spoof the source port and try to reach services inside that are "inaccessible" from the outside.
I won't go into the details of how to use FPipe because its documentation is easily obtainable from Foundstone. Effectively though:
You are on Machine 1
You set FPipe to listen on 5555 on Machine 2 and connect to 3389 on Machine 3 (VICTIM)
You set Fpipe to use port 22 as its source port because inbound SSH has been allowed
So when you connect to Machine 2:5555, Machine 2 will initiate a connection using source port 22 (ALLOWED BY THE FIREWALL) to Machine 3:3389, and forward the traffic back to Machine 1 where you're doing your pen-testing. Quite cool... but really it can only be exploited when some really lazy sysadmin is involved, or a firewall as old as Fred Flintstone is in use ;)
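By the way, if you just want to see the source-port trick itself in action, the core of what FPipe automates is a one-liner in most socket APIs: bind the local socket to a fixed source port before connecting out. Here's a minimal sketch in Python; the addresses and ports are placeholders, and binding below 1024 needs root/admin rights:
-------------
import socket

# Bind the outgoing socket to a fixed source port before connecting --
# the core of the trick FPipe automates. Placeholders throughout; only
# point this at hosts you are authorized to test.
TARGET_IP = "192.168.2.249"   # hypothetical victim
TARGET_PORT = 3389            # the "restricted" service
SOURCE_PORT = 22              # the port the firewall trusts

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(("", SOURCE_PORT))     # outgoing packets now carry source port 22
s.connect((TARGET_IP, TARGET_PORT))
print("Connected from source port", s.getsockname()[1])
s.close()
-------------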
Hidden Cache in IE??
So what we did was fire up FileMon, a cool program which tracks which files get accessed on your disk each time you do something. So, for example, say you click on a page in History and can see its contents: using FileMon with the appropriate filters you can pinpoint the exact file that is storing the cached resource. When we did this for this application, we found files under the L:\Documents and Settings\arvind\Local Settings\Temporary Internet Files\Content.IE5 directory being referenced each time something in History was accessed. This is actually a fairly old location in which IE stores its files to "improve performance", and on which there was an article by The Riddler a long time ago. That article can still be found here:
http://sillydog.org/mshidden21b.html#9.1
Anyway, the point here is: don't forget to look into Content.IE5 and History.IE5 the next time you do an appsec assignment. You might be surprised at what you find :)
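A related trick: instead of watching FileMon, you can enumerate the store directly. Here's a quick sketch that walks the hidden Content.IE5 directory and lists everything cached there; the drive letter and profile path are assumptions, so substitute your own:
-------------
import os

# Walk the hidden Content.IE5 store and list everything IE has cached.
# The profile path below is an assumption -- adjust user/drive for your box.
cache_root = (r"C:\Documents and Settings\arvind\Local Settings"
              r"\Temporary Internet Files\Content.IE5")

for dirpath, dirnames, filenames in os.walk(cache_root):
    for name in filenames:
        print(os.path.join(dirpath, name))
-------------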
Friday, November 23, 2007
Firefox -- An RFC violation??
The "no-cache" directive says that some specific pages must not be cached at all and every time a request is made for that resource it should be revalidated against the server before the page is served. However in a recent appsec assignment we saw Firefox caching pages irrespective of the "no-cache" directive. So when we did a "Work Offline" in Firefox we were still able to access the pages. However on browser close these pages disappeared. This only proves that the pages were in browser memory for Firefox. This behavior was not repeated with IE or Opera where it refused to let us see pages on Working Offline? So why the discrepancy with Firefox? Why doesn't the browser try and contact the server before serving the page? More later when I get time to hack around ...Do reply if you know why!!
Sunday, November 4, 2007
NTLM authentication using Burp proxy
Just wanted to share my experience from an appsec assignment for one of our clients.
I was doing an appsec of an application which used NTLM authentication. With every request I sent, my NTLM credentials were checked, and only if they authenticated did I get a response.
After opening the browser, to reach the application I first have to provide my local domain password. This fetches the application's login page, where I can then log in using my application credentials.
Here I was struggling to set up a local proxy, because whenever I set one up and sent a new request, my NTLM credentials were queried and I was not authenticated: the local proxy had broken my NTLM authentication.
After struggling for some time, I found an option in Burp Proxy: under the "comms" tab there is a check box for "do WWW authentication". Select this check box, choose NTLM authentication here, and provide the destination IP, domain name, domain password and so on.
This way, working through Burp, I solved the problem and continued the testing. But remember that closing Burp discards this setting, and you will have to configure it again to continue testing.
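As an aside, if you ever need to script requests against an NTLM-protected application instead of proxying them, the third-party requests-ntlm package can do the handshake for you. A rough sketch, with the domain, credentials and URL all placeholders:
-------------
import requests
from requests_ntlm import HttpNtlmAuth  # third-party: pip install requests-ntlm

# Scripted NTLM-authenticated request -- a sketch; DOMAIN, user, password
# and the URL below are placeholders, not the client's real details.
session = requests.Session()
session.auth = HttpNtlmAuth("DOMAIN\\user", "password")
resp = session.get("http://intranet.example.com/app/login")
print(resp.status_code)
-------------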
Cheers,
Prashant
Friday, October 26, 2007
Mail Relaying
What is Relaying?
Relaying is when someone outside your organization uses your SMTP server to send mail out over the Internet. An "open" relay is a big problem because spammers will use it to send their mail.
How to test for Mail Relaying?
How can you check your SMTP server for relaying? Simple: from a computer outside your organization, type the commands below into a command shell (or script the whole check; see the sketch after the table).
- TELNET mail.example.com 25
- EHLO mail.example
- MAIL FROM:<sender@example.com>
- RCPT TO:<youremail@outsideaddress.com>
- DATA
From: sender@example.com
To: youremail@outsideaddress.com
Subject: Relay test
This is a relay test and only a test.
(Type a "." on a line by itself to end the message)
- QUIT
| You type this text | Server should respond with this |
| TELNET mail.example.com 25 | Trying 10.10.10.1. |
| EHLO 10.10.10.1 | 250-mail.example.com 250-PIPELINING 250-SIZE 9999360 250-VRFY 250-ENHANCEDSTATUSCODES 250-8BITMIME 250 DSN |
| MAIL FROM:<> | 250 2.1.0 OK |
| RCPT TO:<youremail@outsideaddress.com> | 250 2.1.5 OK |
| DATA | 354 End data with <CR><LF>.<CR><LF> |
| From:sender@example.com ... (message, ending with "." on its own line) | 250 2.0.0 OK: Queued as T22122A5 |
| QUIT | 221 2.0.0 Bye |
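If you'd rather script the whole check than drive telnet by hand, here's a minimal sketch using Python's smtplib. The host and both addresses are the placeholders from above, and of course only test servers you're authorized to assess:
-------------
import smtplib

# Same relay test as the manual telnet session above, scripted.
# mail.example.com and both addresses are placeholders.
server = smtplib.SMTP("mail.example.com", 25)
server.ehlo("tester.example")
try:
    server.sendmail(
        "sender@example.com",
        ["youremail@outsideaddress.com"],
        "Subject: Relay test\r\n\r\nThis is a relay test and only a test.\r\n")
    print("Message accepted -- the server looks like an open relay")
except smtplib.SMTPException:
    print("Relay refused -- good")
finally:
    server.quit()
-------------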
Preventing message relaying with MS Exchange
First check which version you are running: you must be on Microsoft Exchange Server 5.5 or later. Then follow these steps.
- Go to the Internet Mail Service Properties dialog box in Microsoft Exchange
- Select the Routing tab at the top.
- Select the option Reroute incoming SMTP mail (required for POP3/IMAP4 support).
- For each domain you host, you need an entry in the Routing section.
- Click the Routing Restrictions button.
- Make sure "Hosts and clients with these IP addresses" is checked, and leave the list of IP addresses blank.
For further information you can check some of the reference websites below.
1. http://www.msexchange.org/pages/article.asp?id=54
2. http://www.slipstick.com/exs/relay.htm#basics
3. http://www.auditmypc.com/freescan/readingroom/relay.asp
4. http://support.microsoft.com/?kbid=304897
Wednesday, October 17, 2007
Enumerating OS Accounts through Web Server
The vulnerability is that one can enumerate OS accounts just by looking at the response codes and messages returned by the web server when one tries to access a particular user's home directory. Suppose one tries to access root's home directory; the request from the browser would look something like this:
http://X.X.X.X/~root
GET /~root HTTP/1.1
Host: X.X.X.X
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.7) Gecko/20070914 Firefox/2.0.0.7
Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
The response received had error code 403:
HTTP/1.x 403 Forbidden
Date: Wed, 17 Oct 2007 06:45:58 GMT
Server: Apache
Keep-Alive: timeout=15
Connection: Keep-Alive
Transfer-Encoding: chunked
Content-Type: text/html
Also, an error message like "You don't have permission to access /~root on this server." is displayed.
When the same request is tried for a non-existent user (neo in the following case), the error code and message received in the response are different.
http://X.X.X.X/~neo
GET /~neo HTTP/1.1
Host: X.X.X.X
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.7) Gecko/20070914 Firefox/2.0.0.7
Accept:text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
The response received had error code 404. The error message in this case was "The requested URL /~neo was not found on this server":
HTTP/1.x 404 Not Found
Date: Wed, 17 Oct 2007 06:45:58 GMT
Server: Apache
Keep-Alive: timeout=15
Connection: Keep-Alive
Transfer-Encoding: chunked
Content-Type: text/html
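To automate this check across a wordlist of candidate accounts, a minimal sketch (the target and user list are placeholders) could look like this:
-------------
import urllib.request
import urllib.error

# Request /~user for each candidate account and diff the status codes.
# The 403-vs-404 difference is the leak. Target and wordlist are placeholders.
target = "http://X.X.X.X"
for user in ["root", "bin", "daemon", "neo"]:
    try:
        urllib.request.urlopen(f"{target}/~{user}")
        print(user, "-> 200 (home page exposed)")
    except urllib.error.HTTPError as e:
        if e.code == 403:
            print(user, "-> 403 (account exists)")
        elif e.code == 404:
            print(user, "-> 404 (no such account)")
-------------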
Friday, October 12, 2007
Welcome!!!
Thought we'd share a few tidbits of the work that we do for a living!! Armor Blog is a fun way to share info in a way that every techie understands. So thnx to Abhishek's initiative in starting this off, JD n Me thot we'd start the blog off with a series of Nessus false positives .. hope you enjoy this!!!
Nessus -- A peek under the hood - IV
Very low risk here, at times even informational, but useful to report especially when you have very few findings ;). Basically, while establishing an HTTPS connection an SSL handshake happens; think of it as a 3-way handshake for SSL. Once this completes, it's all normal HTTP traffic. The key here is to somehow re-create that SSL connection before you start running your HTTP commands like GET, TRACE, OPTIONS etc. So effectively what we did is create an SSL tunnel to port 443 of the destination web server using a third-party tool called stunnel.
Once you install stunnel there's a predefined script you can use to run it against any remote host, found by default in /usr/local/share/doc/stunnel/examples/script.sh. So to make a connection to 192.168.2.249 you'll need to edit the REMOTE_HOST variable in script.sh so it reads REMOTE_HOST=192.168.2.249:443.
Run the script to establish an SSL tunnel to 192.168.2.249. Run tcpdump in another window if you're as interested as me in getting a packet-level view of the situation. Once the tunnel is established you can type HTTP commands as you normally would. Here's the sequence of commands to find out if TRACE is enabled on the remote server:
--------------------
[root@pal-lin-arvind Setups]# /usr/local/share/doc/stunnel/examples/script.sh
client script connecting 192.168.2.249:443
OPTIONS * HTTP/1.1
Host: 192.168.2.249
HTTP/1.1 200 OK
Allow: OPTIONS, TRACE, GET, HEAD, POST
Content-Length: 0
Server: Microsoft-IIS/6.0
Public: OPTIONS, TRACE, GET, HEAD, POST
Date: Fri, 12 Oct 2007 16:36:47 GMT
--------------------
The only open connection, obviously, is the SSL tunnel, as can be seen here:
[arvind@pal-lin-arvind ~]$ netstat -na | grep -v unix | grep 443
tcp 0 0 192.168.2.92:47032 192.168.2.249:443 ESTABLISHED
[arvind@pal-lin-arvind ~]$
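Incidentally, if you don't have stunnel handy, the same check can be sketched straight from a scripting language's SSL support. Here's a rough Python equivalent, assuming the server still negotiates a protocol your local TLS library enables; certificate checks are disabled because our lab box uses a self-signed cert:
-------------
import socket
import ssl

# Open the SSL connection ourselves and issue the same OPTIONS request
# we typed through the stunnel script. 192.168.2.249 is our lab host.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE   # lab box with a self-signed cert

raw = socket.create_connection(("192.168.2.249", 443))
conn = ctx.wrap_socket(raw)
conn.sendall(b"OPTIONS * HTTP/1.1\r\nHost: 192.168.2.249\r\n\r\n")
print(conn.recv(4096).decode(errors="replace"))
conn.close()
-------------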
Nessus -- A peek under the hood - III
Nessus has a plugin which sends a specific query to the destination web server and looks at the "Content-Location" field of the response. If the server is not patched, this field apparently reveals internal IP addresses. A normal request for a valid web server page can be constructed as follows, and the response will reveal internal IP information:
[arvind@pal-lin-arvind 192.168.2.249]$ telnet 192.168.2.249 80
Trying 192.168.2.249...
Connected to 192.168.2.249 (192.168.2.249).
Escape character is '^]'.
GET / HTTP/1.1
Host: 192.168.2.249
HTTP/1.1 200 OK
Content-Length: 1433
Content-Type: text/html
Content-Location: http://192.168.2.249/iisstart.htm
As you can clearly see, the Content-Location header reveals the internal IP address of the web server. Try this against any public web server; you won't always get this leak (just in case you're thinking the internal IP was only revealed here because I tested it on our LAN). Nessus and Qualys both report this finding, but Qualys gives you an internal IP without telling you the exact page it found it on. Sure, sometime later it gives you the request it used, but that doesn't seem to work for some reason; for 2 different clients we've met with failure and have scrapped the finding. So the only way to be 100% sure that an internal IP is not revealed is by making requests to EVERY SINGLE PAGE on the web server. You're busy, you say??? That's where cool tools like wget come in. Along with the great grep, in 2 seconds you have a complete list of all pages that return a Content-Location field. Here's the sequence of commands, which we plan to hack up a script for very soon (Jaideep's idea):
[arvind@pal-lin-arvind ~]$ wget -r --save-headers 192.168.2.249
[arvind@pal-lin-arvind ~]$ cd 192.168.2.249/
[arvind@pal-lin-arvind 192.168.2.249]$ grep -r Location *
index.html:Content-Location: http://192.168.2.249/iisstart.htm
[arvind@pal-lin-arvind 192.168.2.249]$
What we've done is download the entire website on 192.168.2.249, save its response headers, and grep for Location. Every request whose response carried a Content-Location header will be caught, so we can be sure that we've tried every possible page. This one's just based on initial findings though; R&D is still on and we'll update you if we stumble onto something.
And finally, at 11:00 pm on a Friday evening, Jaideep "The Perl dude" is done with the script. Here it is; just save it as a .pl and run it as follows:
perl wget.pl IP_ADDRESS
Here's the magic script:
-------------
use strict;
use warnings;

if ($#ARGV != 0)
{
    die "usage: perl internal_ip.pl <IP_ADDRESS>\n";
}

my $ip = $ARGV[0];

# system("rm -rf $ip");   # uncomment to wipe out a previous run first

# Mirror the whole site, saving the response headers with each page
my $cmd = 'wget -r --save-headers ' . $ip;
system($cmd);

# Grep the mirrored tree for leaked Content-Location headers
chdir($ip) or die "could not cd into $ip: $!\n";
system('grep -r Content-Location * > res.txt');
-------------
Nessus -- A peek under the hood - II
The previous post (below this one) ended with the line "...and the cipher DES-CBC3-MD5 was used to encrypt the connection." So if I want to force a connection with a weak cipher, can I do it? If it's supported on the server, the answer is YES. Nessus throws you a lot of "informational findings" saying RC4-MD5 is a weak cipher and is supported. You can verify each weak cipher Nessus reports by using this command:
openssl s_client -connect 192.168.2.249:443 -<protocol> -cipher <cipher reported by Nessus>
For example: openssl s_client -connect 192.168.2.249:443 -ssl2 -cipher RC4-MD5
Repeat this for all ciphers that Nessus reports as weak ciphers.
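If Nessus reports a long list, you can loop the check instead of typing each command. Here's a rough sketch that shells out to openssl and treats a zero exit status from s_client as a successful handshake; the cipher names and host are placeholders, and you may need to add the protocol flag Nessus names:
-------------
import subprocess

# Loop the openssl weak-cipher check. Host and cipher list are placeholders
# taken from the kind of output Nessus gives; adjust for your scan results.
host = "192.168.2.249:443"
for cipher in ["RC4-MD5", "EXP-RC4-MD5", "DES-CBC-SHA"]:
    result = subprocess.run(
        ["openssl", "s_client", "-connect", host, "-cipher", cipher],
        input=b"", capture_output=True)
    status = "ACCEPTED" if result.returncode == 0 else "rejected"
    print(f"{cipher}: {status}")
-------------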
Nessus -- A peek under the hood - I
This appears primarily on port 443 (HTTPS). You need a Linux box with the OpenSSL client installed (type openssl; if it comes back with options, it's installed). Using the openssl binary we can try to establish a connection to the destination server on port 443 using the SSLv2 protocol. If it connects, shows you the certificate, and reports the connection protocol as SSLv2, then SSLv2 is supported on the remote server. Here's the exact command that you'll use:
openssl s_client -connect 192.168.2.249:443 -ssl2
When you get the response back, check right at the bottom for stuff like this:
---------------------------------------------------------
SSL handshake has read 599 bytes and written 239 bytes
---
New, SSLv2, Cipher is DES-CBC3-MD5
Server public key is 1024 bit
---------------------------------------------------------
This shows that an SSLv2 connection was established to the server and the cipher DES-CBC3-MD5 was used to encrypt the connection.
Calling "User Agent Blocking" BLUFF
There are clients who have internal security teams, and these guys obviously know their stuff, because recently we came across a client who'd actually done his homework and blocked Paros. Now the first question here is: how can you block someone from using software installed on his/her local hard drive? The only way is if this software sends out some kind of (unique) information about itself which a remote IPS or web app firewall can identify. We broke our heads for quite some time trying to figure out why Paros wouldn't work at all while all direct connections, and connections through Burp, Achilles and the rest, worked fine. Very strangely, the first request seemed to get sent okay, but after that... nothing. This meant that Paros was sending out something in its first packet which the destination was catching. Suddenly Jaideep came up with the theory of User-Agent blocking, which made sense: Paros, as we confirmed, appends the string Paros/3.2.13 to the end of the User-Agent header before sending each request. So there's something on the client's side which is pattern matching and checking whether a request contains "Paros/3.2.13". Since we were using Paros as our browser proxy, all our requests went through Paros, so all requests had Paros appended to the User-Agent string, and so all requests were getting blocked at the destination.
So what's the solution to this? I somehow WANT to use Paros. We configured Paros on our machine so all our requests went through it. Now Paros is going to get blocked, so we used the proxy chaining feature and configured Paros to forward its requests to Burp on another machine. So now the flow of traffic is Paros -- Burp -- destination server, and when the final request goes to the server it's going from Burp instead of from Paros. And Burp doesn't add anything to the end of the User-Agent field. So we did manage to use Paros; to be fair we might as well have just used Burp, but the challenge for a techie is finding new ways of doing old things, so we're happy for now.
Next step: this seemed to work only when Burp and Paros were on different machines. If both were on the same machine, the packets didn't even leave the network card for some reason. It was the same in the initial case: with just Paros, all packets after the first never left the network card. Why?? We'll let you know when we find out...
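To make the pattern-matching theory concrete, here's a sketch of the kind of check we suspect was running on the client's side -- entirely our guess at their setup, not their actual code: a server that refuses any request whose User-Agent carries the Paros token.
-------------
from http.server import BaseHTTPRequestHandler, HTTPServer

# Our guess at the blocking logic: reject any request whose User-Agent
# header contains the Paros token, let everything else through.
class UABlockHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get("User-Agent", "")
        if "Paros" in ua:
            self.send_error(403, "Blocked user agent")
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"OK\n")

HTTPServer(("", 8080), UABlockHandler).serve_forever()
-------------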