Archive by Author | Michael Williams

Ransomware everywhere

Three recent cyber security stories have appeared that combine to make a potentially wicked brew.  First, ransomware authors have realised that they can improve monetisation from all their mischief by knocking out businesses in one go rather than individual computers one at a time, as witnessed by the attacks on hospitals in the US.  Second, there are reports that the next generation of ransomware might allow malware to pass from server to server, rather than just from Internet to workstation.  Finally, there is the major announcement, due on April 12th 2016, regarding the Badlock vulnerability in the SMB/Samba technology that computers use to communicate with one another on local networks.

If it proves possible to combine these three factors into a workable attack before critical systems are patched, then a whole business could be hijacked by a user clicking on a single malicious link in an Email or on a website, just as long as they were also connected to corporate network shares such as home drives.

To take a step back, traditional ransomware works by persuading a user to download software, either through a malicious website or via spam Email.  This is normally simple code that downloads a separate malicious payload in such a way as to try to fool antivirus and other traditional protection.  The payload then encrypts files on the computer's drives, or on network drives, and extorts a ransom from the user to get the files decrypted.

Up to now such losses have generally affected individual users or a single computer, but what if that has now changed, and new strains are designed to infect server and storage farms?  The impact may well be to make paying the ransom the only way that IT departments can recover time-critical business systems and get users back to work.

Returning to the attack I described at the beginning of this post.  The user who clicked on a link infected their own laptop; that laptop infects the home drive and, with a little help from Badlock, proceeds onwards across the network.  Such a machine isn't really on the edge of the network at all: it has reach into the network as though it were part of the core, yet it is more likely to be running an older operating system, such as Windows 7, to be less well maintained than servers, and to contain far more uncontrolled software than any server would be allowed to have.

So, what is the real difference between a computer at the edge of the corporate network, where convenience drives design decisions, and one at the data centre core, where availability is king?  When confronted by well-motivated and profitable criminals, maybe the answer is nothing at all.

For more information:

Badlock Samba vulnerability http://badlock.org

Ransomware targeting servers (from Computer Weekly)

http://www.computerweekly.com/news/450280137/Security-researchers-warn-of-server-attacking-ransomware

Targeting of hospitals in the USA (from Healthcare IT News)

http://www.healthcareitnews.com/news/two-more-hospitals-struck-ransomware-california-and-indiana

Antivirus in rugged health

Anyone who is fortunate or unfortunate enough to spend much time at security conferences will probably be used to being told that antivirus is dead by people who want to sell you something else, and to me that has always sounded like more than a touch of exaggeration.

IT can be dogged by statements like this; indeed, a very traditional antivirus product that just compares files with a list of malware signatures is coming to the end of its usefulness. That doesn’t make antivirus technology redundant, just one method of detection.

Modern endpoint protection suites are something very different. Such products will have signatures at their core, not only signatures for files but for behaviour, network traffic and even file origin. Such endpoint protection suites integrate with the computer to provide firewall, intrusion prevention, browser protection and more, providing a layered protection model that is far more effective than the old signature model. Nothing new there for anyone who is remotely interested in such things. Now compare this change with the way modern malware has changed how it attacks clients. Two great examples are the generic downloader and cryptoware:

A generic downloader is normally embedded in an Email attachment, and is, in part, a return to the macro virus. It will attempt to manipulate a user into running an Office macro designed to collect malware from the Internet, or from another infected machine. The generic downloader is not itself the malware. The attacker can change the downloader code almost constantly to avoid traditional signature-based scanners. However, its behaviour must remain broadly similar: connect to the Internet, download the true malware and then execute it. It is this sort of behaviour that modern systems can defend against and traditional systems struggle with.
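To make the behavioural point concrete, here is a minimal, purely illustrative sketch (not a real endpoint protection product) that flags Office processes with outbound network connections, the kind of activity a generic downloader produces. It assumes the third-party psutil package, and the process names are examples.

```python
# Illustrative sketch only: flag Office processes that are talking to the
# network, the behaviour pattern a generic downloader produces.
# Assumes the third-party psutil package; process names are examples.
import psutil

OFFICE_PROCESSES = {"winword.exe", "excel.exe", "powerpnt.exe"}

def suspicious_office_activity():
    """Yield (pid, name, remote address) for Office processes with outbound connections."""
    for proc in psutil.process_iter(["pid", "name"]):
        name = (proc.info["name"] or "").lower()
        if name not in OFFICE_PROCESSES:
            continue
        try:
            for conn in proc.connections(kind="inet"):
                if conn.raddr:  # a connection with a remote endpoint
                    yield proc.info["pid"], name, conn.raddr
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            continue

if __name__ == "__main__":
    for pid, name, raddr in suspicious_office_activity():
        print(f"Review: {name} (pid {pid}) connected to {raddr.ip}:{raddr.port}")
```

A real suite correlates many such signals; the point is simply that behaviour, not a file hash, is what gives the downloader away.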

It is much the same for cryptoware, also known as ransomware. A user is tricked into downloading a cryptographic client, and that client then starts encrypting files in a way almost indistinguishable from a legitimate request from the operating system, until the luckless user is asked to pay a ransom to get their files back. It is therefore the behaviour of the client as the malware loads, and tries to contact the dark corners of the Internet, that creates an opportunity for detection, even if an exact file signature is not available.
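Another behavioural hint, sketched below purely for illustration, is byte entropy: freshly encrypted files look like random data. The threshold and folder are examples rather than tuned values, and legitimately compressed formats (zip, jpeg) also score highly, so no real product would rely on this check alone.

```python
# Illustrative sketch only: high byte entropy is one hint that a file has just
# been encrypted. Threshold and paths are examples, not tuned values.
import math
from collections import Counter
from pathlib import Path

def shannon_entropy(data: bytes) -> float:
    """Entropy in bits per byte: close to 8.0 for encrypted data, lower for documents."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_encrypted(path: Path, threshold: float = 7.5) -> bool:
    sample = path.read_bytes()[:65536]   # the first 64 KiB is enough for a rough check
    return shannon_entropy(sample) >= threshold

if __name__ == "__main__":
    for p in Path("documents").glob("*.docx"):
        if looks_encrypted(p):
            print(f"High entropy, possible encryption: {p}")
```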

This leads to a couple of important security points. Firstly, these modern suites only really work if the whole suite is installed and enabled within a suitable framework; secondly, such suites need to be managed throughout their life to make sure they continue to deliver the required level of protection. An example of this point is that the anti-malware client itself might contain vulnerabilities, so the need to patch all your security software needs to be considered alongside patching your operating systems and applications.

Then there is always the cloud to think about. Antivirus made use of cloud services long before they were called that, for tasks such as the download of signature updates. Over the past few years much more interesting use has been made of cloud services for anti-malware. The cloud can support anti-malware software running on a client, for example by checking cloud databases for a file's reputation or source, or to some extent replace it by forcing all Internet traffic through a proxy server. The cloud proxy server will have the latest signatures, reputation data, blacklists etc continuously refreshed.
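The client-side half of that reputation lookup amounts to hashing a file and asking a cloud service about the hash. The sketch below is illustrative only: the endpoint, header and response format are hypothetical, every vendor's real API differs, and it assumes the third-party requests package.

```python
# Illustrative sketch only: the cloud reputation-lookup pattern described above.
# The endpoint and response format are hypothetical placeholders.
import hashlib
import requests

REPUTATION_URL = "https://reputation.example.com/v1/files/{sha256}"  # hypothetical service

def file_sha256(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def check_reputation(path: str, api_key: str) -> str:
    digest = file_sha256(path)
    resp = requests.get(REPUTATION_URL.format(sha256=digest),
                        headers={"X-Api-Key": api_key}, timeout=10)
    resp.raise_for_status()
    return resp.json().get("verdict", "unknown")   # e.g. "clean" or "malicious"

if __name__ == "__main__":
    print(check_reputation("download.exe", api_key="demo-key"))
```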

There have been, and remain, all sorts of ideas for protecting client computers using technology that doesn’t rely on the endpoint itself, especially when that client is virtualised.   To be really effective in delivering the protection needed, a complex local client is still required. Laptops need additional thought, as they are exposed to so many more threats than a data centre supported virtual desktop.

I think we can be forgiven for occasionally referring to these modern solutions by the old name of antivirus, and the next time a salesman tells you AV is dead, just think what else can work with application-level encryption, third-party removable storage and airport hotspots hundreds of miles away from a friendly network.

A Suite of Changes

Certificates, encryption and lots of TLAs

Things have been pretty quiet in the world of Internet encryption for some time; revelations from Snowden to Hacking Team have had surprisingly little to say on the subject. However, the calm is coming to an end as a raft of changes are beginning to make themselves felt.
Perhaps this lack of noise is because of where these changes are coming from: not from dramatic and media-friendly vulnerabilities such as Heartbleed, BEAST and POODLE, but rather from some of the Internet’s biggest companies, especially Google and Microsoft.
Both are leaders in operating systems, web browsers and cloud services. So, it is little surprise that these companies are trying to drive up the quality of Internet security, not only to help their customers perceive them as secure but also to provide a key differentiator against smaller players.
Given that this is a blog post, I have restricted myself to a couple of pages; a longer version of this article is available from the author.

Web browsers, and annoying everyone

All the browser manufacturers are working to improve the security of their products, and though there are subtleties in approach, the core approach is the same. They are all starting to harden warning messages and to turn alerts into blocks on access to websites where configuration errors are found. Most importantly, such settings will become the defaults for the next generation of web browsers.  Google has also pretty much won the argument on browser updating, and a policy of continuous rolling updates is going to become standard, even for Microsoft. Google’s aggressive deprecation of old products and standards may well also become the norm.  This will leave many businesses with even more complex legacy app support issues than they have now.

Certificates for HTTPS secure web

Web encryption can be divided into two key components: the certificate, used to identify a website and commence secret communication, and the ciphersuite, responsible for the encryption of traffic between a user and a website.

There are moves by the industry to improve the quality and effectiveness of both these components. The first is increasing key lengths, which presents no real problem.   The second is a little more interesting: the SHA-1 problem. Certificates are signed to prove they have not been tampered with, and the legacy algorithms used for this are starting to show their age, which brings us to the SHA-2 issue. This refers to Google’s and Microsoft’s decision to require the modern signing algorithm SHA-2. Google Chrome is already producing error messages, and as Chrome updates they are going to become more forceful. When you renew certificates you are probably going to have to use SHA-2, despite the fact that some very old browsers or systems might have interesting issues.
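A quick way to see where you stand, sketched below for illustration only, is to pull a site's certificate and look at the signing algorithm. It assumes a reasonably recent version of the third-party cryptography package; the site name is a placeholder.

```python
# Illustrative sketch only: fetch a site's certificate and report the signing
# algorithm, so SHA-1 signed certificates can be found before browsers complain.
# Assumes a recent version of the third-party 'cryptography' package.
import ssl
from cryptography import x509

def signature_algorithm(host: str, port: int = 443) -> str:
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode())
    return cert.signature_hash_algorithm.name   # e.g. 'sha1' or 'sha256'

if __name__ == "__main__":
    site = "example.com"
    algo = signature_algorithm(site)
    flag = "plan to reissue with SHA-2" if algo == "sha1" else "ok"
    print(f"{site}: signed with {algo} ({flag})")
```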

The chain of trust

A full description of Internet certificates is beyond the scope of this article, but one component does need further examination, and that is the chain of trust. To work, a certificate forms a chain with other certificates that links the user’s browser to the website’s server. Browsers will produce an error if the chain is not correct, and increasingly will fail to connect at all. Anyone commissioning a certificate needs to consider how that chain of trust will be presented to their customers, and be confident that errors are not produced; look for problems in your supplier’s servers too.
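One simple check, sketched below for illustration, is to connect with a standard client library's default verification, which fails in much the same circumstances a browser will, including when the server presents an incomplete or untrusted chain. The hostname is a placeholder.

```python
# Illustrative sketch only: default certificate verification fails if the server
# does not present a complete, trusted chain, much as a browser would.
import socket
import ssl

def chain_verifies(host: str, port: int = 443) -> bool:
    ctx = ssl.create_default_context()          # uses the local trust store
    try:
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except ssl.SSLCertVerificationError as err:
        print(f"{host}: chain problem - {err.verify_message}")
        return False

if __name__ == "__main__":
    print(chain_verifies("example.com"))
```

This only shows the view from one client and one trust store; a dedicated scanning service gives a fuller picture.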

The Ciphersuite, TLS and confusing names for things

The ciphersuite is a series of configuration items that determine how the encrypted conversation is created. Typically both client and server support a number of different cipher options and a suitable choice is negotiated.

Transport Layer Security (TLS) has replaced the older SSL standard. SSL is an old protocol, and problems such as POODLE, together with architectural issues, are corrected in TLS. This is an issue that seems to cause a lot of concern amongst system owners, because of a largely mistaken belief that clients will have problems connecting to TLS-only systems.  TLS 1.0 appeared in 1999 and support hit mainstream products by 2006. TLS itself has undergone a number of revisions and modern systems are now expected to support the latest version, TLS 1.2 (released in 2008), with TLS 1.3 currently in draft.

AES everywhere

The encryption algorithm is generally the most recognised part of the ciphersuite; examples are triple DES (3DES) or AES256. This refers to the actual algorithm used to encrypt the information, and any current installation should offer AES256 to any client that can use it. It is also important to remove legacy algorithms that are no longer considered secure, as an attacker might be able to “negotiate down” the connection in order to decode the traffic; it looks scruffy too.
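To see what a server actually negotiates with a modern client, both the protocol version discussed above and the ciphersuite, something like the following illustrative sketch will do; a real audit would use a dedicated scanner such as the Qualys SSL Labs test. The hostname is a placeholder.

```python
# Illustrative sketch only: report the protocol version and ciphersuite a server
# negotiates with a modern client, useful for spotting legacy options still on offer.
import socket
import ssl

def negotiated(host: str, port: int = 443):
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            name, _protocol, bits = tls.cipher()   # e.g. ('ECDHE-RSA-AES256-GCM-SHA384', 'TLSv1.2', 256)
            return tls.version(), name, bits

if __name__ == "__main__":
    version, cipher, bits = negotiated("example.com")
    print(f"{version}: {cipher} ({bits}-bit)")
```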

Perfect Forward Secrecy

Though not a new idea, PFS is starting to gain more support for web security as it avoids a significant single point of failure for web encryption. Most commonly used systems have a single encryption key that is used for all connections; if this key is compromised then all traffic could be accessed, even traffic intercepted years earlier. PFS algorithms create a temporary or ephemeral key, and create a new key when required; try Googling ECDHE for equations and graphs galore.
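In practice, enabling PFS is largely a matter of server configuration: prefer ECDHE key exchange and keep the software current. The sketch below shows the idea as a Python server-side TLS context restricted to ECDHE suites, purely for illustration; the certificate paths are placeholders, and a real web server would express the same policy in its own configuration syntax.

```python
# Illustrative sketch only: a server-side TLS context restricted to ECDHE
# ciphersuites so every connection uses an ephemeral key (forward secrecy).
# Certificate and key paths are placeholders.
import ssl

def forward_secret_context(certfile: str = "server.crt",
                           keyfile: str = "server.key") -> ssl.SSLContext:
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.set_ciphers("ECDHE+AESGCM")        # ephemeral ECDH key exchange only
    ctx.load_cert_chain(certfile, keyfile)
    return ctx
```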

…and in conclusion

It’s easy to see web security as a solved problem; technologies such as public key infrastructure, HTTPS etc have been with us a long time and to most users they appear stable and rather dull. The reality is that the Internet can be a hostile place and the cryptography that underpins it is under constant scrutiny. The result is a widening gap between legacy systems, which were adequate for the task 10 years ago but are now cause for serious concern, and the current state of the art. Such weaknesses are also very easy to detect, making automated attacks practical. The big technology and service providers are also increasingly marketing security as a differentiator, and as cloud platforms become more prevalent the need for that security becomes even more pressing.

A shock to the system

Shellshock has been with us for a week now, and as the dust starts to settle I think it is time to take a look.

Shellshock refers to a set of problems in the open source command shell Bash, or the Bourne Again SHell.   There are many blogs and articles on Shellshock, so once again I’m not trying to tell you things that are widely covered elsewhere, but there are some significant problems with the coverage and I want to take a look at them here. This is not a news service; if you want to track Shellshock information please try some of the links at the end of the article, and pretty much any other IT security page at the moment.

One of the less helpful parts of the reporting is the idea that somehow the Heartbleed SSL vulnerability and Shellshock are comparable. True, they both got media-friendly names as soon as the news was released, but Heartbleed got the cool logo.

The superficial comparisons are that they are both open source issues, both potentially externally facing, network exploitable and unauthenticated. They both received a great deal of general publicity, and they both produced a great deal of activity over a short period of time.

Now, considering the differences:

Heartbleed was only present in a small number of SSL installations, those running modern implementations; many systems were too old to be vulnerable to Heartbleed. There are no systems running the Bash shell that are too old; they are all vulnerable unless patched in the last few days. The Bash shell may appear on any Linux system, as well as most conventional Unix systems and some embedded devices.

Heartbleed (2011) is a fault: we know it was a coding error and it is fixed. The problems with Bash are decades old and may not really be a fault at all. Bash (1989) was written before the modern Internet and concepts such as ‘secure by design’, and it may well be that the code was always meant to work this way. So, Heartbleed may allow a backdoor; Shellshock is more like a front door.

Heartbleed is very hard to exploit; it might be necessary to run the attack thousands of times before valuable data is captured. It is possible to exploit vulnerable Bash code with a single attack string. Part of the same issue is that Heartbleed packets are pretty easy to detect using tools like network IDS, despite the fact they aren’t usually logged by the vulnerable asset itself. The strings used to exploit Shellshock are arguably valid code, and so creating a solid detection with a low false positive rate is much harder.
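The widely published local check for the original issue (CVE-2014-6271) shows just how short that attack string is. Here it is wrapped in Python purely as an illustrative sketch; it only tests the Bash on the machine it runs on and is harmless either way.

```python
# Illustrative sketch only: the widely published, harmless local check for the
# original Shellshock issue (CVE-2014-6271). A patched Bash treats the function
# definition in the environment variable as data; a vulnerable one runs the
# trailing command as well.
import os
import subprocess

def bash_is_vulnerable() -> bool:
    env = dict(os.environ, testvar="() { :;}; echo VULNERABLE")
    result = subprocess.run(["bash", "-c", "echo shellshock-check"],
                            env=env, capture_output=True, text=True)
    return "VULNERABLE" in result.stdout

if __name__ == "__main__":
    print("Vulnerable to Shellshock" if bash_is_vulnerable() else "Bash looks patched")
```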

So, is Shellshock worse than Heartbleed? Undoubtedly yes. The Bash shell was never created as a security system, just to be useful. It is as old as the 486 processor and the fall of the Berlin Wall.

Bash trusts user input in a way that modern systems shouldn’t. Attack code is already publicly available and is in use (see web links below). Such code might be useful for compromise as well as denial of service. Where a system is vulnerable, results can be expected immediately, unlike Heartbleed.

The Bash code is simple; any attacker with even basic web and scripting skills can create a new attack and manipulate target systems in new ways. I expect more from Shellshock over the next few weeks, and the discovery of more vulnerabilities in decades-old code that we have all, very quietly, become dependent on.

The following external links are recommendations of the author. This is a fast moving situation, so it is important to keep up with the latest information from security professionals and vendors.

Heartbleed

https://computacenterblogs.com/2014/04/09/heartbleed-cve-2014-0160/

http://heartbleed.com/

Shellshock technical deep dives

http://www.fireeye.com/blog/technical/2014/09/shellshock-in-the-wild.html

http://www.secureworks.com/resources/blog/shellshock-bash-attacks-on-the-rise/

http://blog.cloudflare.com/inside-shellshock/

General news article

http://www.bbc.co.uk/news/technology-29375636

Tracking Shellshock

http://www.incapsula.com/blog/shellshock-bash-vulnerability-aftermath.html


As simple as Password123

No one with an Internet connection can have escaped recent stories reporting the loss of intimate celebrity photographs, probably from Apple’s iCloud. The media have generally focused on who the victims are and what was lost, and rather ignored how it was taken.

As the analysis starts, it does appear to be down to good old passwords. Problems with password choice, password reset procedures and systems that are vulnerable to brute force attacks all appear in the mix.

An interesting report on passwords has recently been produced by Trustwave, focusing on the sorts of passwords most people use. ‘Password1’, ‘Hello123’ and ‘Password’ were the three most common, with 92% of passwords being broken during the analysis. Two things come out of this: firstly, humans seem really bad at selecting passwords; secondly, 8% of passwords were not broken. So, for those 8% of users it is possible to select a high quality password that is not trivial to break.

Passwords are perhaps the best example of the gap between being merely compliant with a policy and delivering real IT security. ‘Password123’ might look to an automated compliance check like a great password: it mixes numbers and letters, it mixes upper and lowercase and it is eleven characters long. It is, of course, terrible, as are any variations on ‘password’, such as P@s5w0rd.
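The sketch below, purely illustrative, shows why: a naive complexity rule of the sort a compliance check might apply happily accepts ‘Password123’ and ‘P@s5w0rd’, while even a crude check for dressed-up dictionary words rejects them. The stem list and substitution table are tiny examples; real password crackers use far larger rule sets.

```python
# Illustrative sketch only: a naive complexity rule versus a crude check for
# dressed-up dictionary words. Stem list and substitution table are tiny examples.
import re

COMMON_STEMS = {"password", "hello", "welcome", "letmein"}
LEET = str.maketrans("@$510!3", "assioie")   # common letter substitutions

def passes_naive_policy(pw: str) -> bool:
    """Length, mixed case and a digit: the rule 'Password123' sails through."""
    return (len(pw) >= 8
            and bool(re.search(r"[a-z]", pw))
            and bool(re.search(r"[A-Z]", pw))
            and bool(re.search(r"\d", pw)))

def dressed_up_common_word(pw: str) -> bool:
    """Undo common substitutions, drop the decoration, look for a known stem."""
    stripped = re.sub(r"[^a-z]", "", pw.lower().translate(LEET))
    return any(stem in stripped for stem in COMMON_STEMS)

if __name__ == "__main__":
    for pw in ["Password123", "P@s5w0rd", "Hello123"]:
        print(f"{pw}: naive policy {passes_naive_policy(pw)}, "
              f"dressed-up common word {dressed_up_common_word(pw)}")
```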

Many users select passwords in common formats, such as word plus number, or word plus date, making such passwords high on an attacker’s priority list.  The Trustwave report breaks these down further; there is a link to the report at the bottom of this post.

A good password is one that an attacker would not consider trying before any other password, so any sort of variation on scrambling a common word with numbers and symbols is likely to be weak, while a random string of numbers, letters and symbols is likely to be strong. The problems of remembering it and, increasingly commonly, of typing it into a mobile device remain, of course, but they are a separate issue.
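Generating that kind of random string is easy; remembering it is what a password manager is for. A minimal sketch, using Python’s cryptographically secure secrets module:

```python
# Illustrative sketch only: a random string drawn from a cryptographic source,
# the kind of password an attacker has no reason to try before any other.
import secrets
import string

def random_password(length: int = 16) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

if __name__ == "__main__":
    print(random_password())   # store it in a password manager rather than memorise it
```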

Systems designed to allow us to reset our own passwords can be fooled; Q&A questions such as your pet’s name or your first school may well be found in your social networking data, so this process too needs the sort of care we give our passwords.

These issues also exist for business users: how to encourage users to choose high quality, as opposed to merely compliant, passwords, and how to properly identify users prior to a password reset.

Cloud services, of course, make this more complex. We are not responsible for the security of cloud providers’ computers, and can have only very limited influence on password resets, though we can select our own passwords with great care.

A now famous quote from IT Security expert Graham Cluley illustrates one important point, “don’t call it the cloud, call it somebody else’s computer.” We need to select our passwords with this in mind, and guard them with suitable care.

Links:

Graham Cluley: http://grahamcluley.com/2013/12/cloud-privacy-computer/

Trustwave report: http://gsr.trustwave.com/topics/business-password-analysis/2014-business-password-analysis/

Apple statement: http://www.apple.com/pr/library/2014/09/02Apple-Media-Advisory.html

Heartbleed CVE-2014-0160

My original intention was to post on the much anticipated death of Microsoft Windows XP, though the arrival of the Heartbleed vulnerability in OpenSSL seems to have rather trumped that.  While this blog was never intended to be a news service, and many excellent such blogs already exist, this is a situation well worthy of comment.
CVE-2014-0160 was posted on the seventh of April 2014 and concerns the leakage of information from systems using some versions of the OpenSSL security and encryption library.  The problem started to appear at the beginning of 2012 and patches have only just become available.
What it does
This post is not designed to give full technical details; those are available at the links listed at the end of this article. Rather, it is to alert our customers and employees to the potential seriousness of this problem.  The basic problem is pretty simple: when a malformed read request is sent to a vulnerable system, it responds with the contents of a 64k chunk of the victim machine’s memory.  That memory could contain all sorts of sensitive data, and tests have confirmed that this could include the website’s private encryption keys, thus compromising the site completely.

The most important considerations that I can think of are:

  • It is an over simplification to say that Linux systems are vulnerable and Microsoft systems are not, but prioritising Linux and open source systems is reasonable
  • Many older builds of OpenSSL are not vulnerable, in particular those based on version 0.9.8
  • The attack appears to be silent; there will be nothing in the server logs, and network IDS vendors are only now starting to provide signatures
  • Just patching does not cure the problem; as you cannot tell if a site has been previously compromised, the vulnerable keys (certificates etc) may need to be replaced
  • Once lost, such information can be used to imitate a site and trick users into accessing a rogue site
  • Proof of concept attack code has already been published
  • The Rapid7 Metasploit framework now has an openssl_heartbleed module
  • Responsible sites have already started patching and renewing HTTPS certificates, and revoking the old ones
  • Checks for revoked certificates are often far from ideal, leaving us with website spoofing problems
  • The attack is reported as bi-directional: clients are at risk as well as servers
  • Don’t rely on the default package included in a distribution; check what is actually running on your systems.  An application (for example) may have replaced the default library with a vulnerable one.
  • Getting the precise version number of the OpenSSL library is not always obvious, so please check carefully with the vendor (see the sketch after this list)
  • Vendors, testing services, applications, repositories etc are all racing to catch up, do not assume that no news is good news
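As one small example of a version check, Python can report the OpenSSL it is linked against; this is only one data point rather than a system-wide answer, since other applications may bundle their own copies, as noted in the list above.

```python
# Illustrative sketch only: report the OpenSSL build that Python is linked against.
# Other applications on the same host may bundle their own copies.
import ssl

print(ssl.OPENSSL_VERSION)       # e.g. 'OpenSSL 1.0.1e 11 Feb 2013'
print(ssl.OPENSSL_VERSION_INFO)  # numeric tuple for programmatic comparison
```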

The following are all external links; please treat them with the usual care.  This is still an emerging problem, so it may be necessary to check back later as more information becomes available.

Getting more information:
CVE details:    http://www.cvedetails.com/cve/CVE-2014-0160 
Specialist site:     http://heartbleed.com (a good place to start)

Coverage report from netcraft:
http://news.netcraft.com/archives/2014/04/08/half-a-million-widely-trusted-websites-vulnerable-to-heartbleed-bug.html

Why revocation might not be enough:
http://news.netcraft.com/archives/2013/05/13/how-certificate-revocation-doesnt-work-in-practice.html

The effect on TOR (those really needing Web anonymity are best advised to wait for things to calm down)
https://blog.torproject.org/blog/openssl-bug-cve-2014-0160

Test a public facing site (SSL from Qualys, expect this site to be rather busy just at the moment)
https://www.ssllabs.com

As for XP, perhaps we need to wait a few weeks to see what happens. I’m not alone in believing that the attacks against XP will come slowly, but the attacks against this vulnerability will come quickly.

Undead XP

Introduction

Microsoft is going to end support for Windows XP SP3 on April 8th 2014.  This is a very well known fact, but it has repercussions for XP systems still in use that are perhaps not appreciated even now.

The most obvious thing is that XP is not going to truly die; it is in more of a zombie state and will continue as such long after Microsoft has stopped patching it.   Despite comments from some, it is important to remember that Windows XP is a product from a kinder age, and it is not possible to back-port the architectural changes seen first in Windows Vista (link); it has to go.

XP also comes with other problems, most obviously Internet Explorer (IE).  For many, going to IE6 was a significant jump in itself, and it has caused some development to enter an unfortunate technical cul-de-sac.   Such dependence on historic browsers may not be as complete as some fear, but it can be a default position from customer IT departments unwilling or unable to create a transition to something more defensible.

Weak XP systems are also likely to be running on old hardware, with its own problems, such as running out of disk space.  It would not be unreasonable to expect that other security controls, such as local antivirus, are also at old versions. These systems are also likely to be running an old version of MS Office; I’ve seen examples going back to Office 97.  These are end of life, or going out of life, and will not run on later versions of Windows.

MS Office also provides a very easily accessible attack surface, as good as or better than the O/S itself, as it is easier to exploit by Email.  The threat from old systems must not stop at consideration of just the operating system; it must also take in the browser and MS Office.

A recommended position

So, that’s easy: upgrade to Windows 7 and a new Office suite.  Such an upgrade needs to integrate into patching, anti-malware, network security, reporting etc.   This would bring the desktop O/S and Office under support, and allow other security problems, such as local drive encryption, to be addressed in the rebuild.

Another position

So, what can we recommend if a customer can’t upgrade?  Well, there are several direct technical issues and solutions discussed below.  The problems of cost and upgrade disruption are largely beyond the scope of this document, but I hope looking at the major technical issues remains worth your time.

What to do about the browser problem?

One of the reasons given for not upgrading is the need to keep either IE6 or IE7 for some “internal reason.”  So, what to do?  First off, what is the extent of the problem, and how bad is it? For example, you might find yourself hearing an argument that an out of support browser is needed to connect to an out of support version of SharePoint, so an obvious, if perhaps time consuming, fix presents itself.

Assuming that the IE6/7-dependent systems can’t be removed immediately, the most interesting solution I have heard is to give users a second browser for Internet facing work and leave the deprecated version of IE as Intranet only.   This means that the vulnerable browser is kept away from the big bad Internet, reducing the attack surface to a much more manageable level, and allows users to access online resources that are no longer interested in supporting legacy browsers.

Probably the best browser for this purpose is Chrome, as Google have already stated they will keep updating the XP version into 2015 (link), and as such updates are automatic, managing the second browser can be a light touch proposition.

The difficult part is the browser isolation, but this can be managed by high quality proxy servers capable of distinguishing the browser version being used and preventing old versions of IE from accessing the Internet.
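As a purely illustrative sketch of the decision such a proxy makes, here is the rule expressed as a plain function; a real deployment would express it in the proxy’s own policy language or a PAC file, and the internal domain shown is hypothetical.

```python
# Illustrative sketch only: the allow/deny decision a filtering proxy makes when
# it distinguishes browsers by User-Agent. Internal domain is a placeholder.
import re

LEGACY_IE = re.compile(r"MSIE [67]\.")        # IE6 and IE7 user-agent strings
INTRANET_SUFFIXES = (".corp.example.com",)    # hypothetical internal domain

def allow_request(user_agent: str, host: str) -> bool:
    """Let legacy IE reach only the Intranet; everything else may go to the Internet."""
    if LEGACY_IE.search(user_agent):
        return host.endswith(INTRANET_SUFFIXES)
    return True

if __name__ == "__main__":
    ie6 = "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"
    print(allow_request(ie6, "sharepoint.corp.example.com"))  # True: Intranet only
    print(allow_request(ie6, "www.bbc.co.uk"))                # False: blocked from the Internet
```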

All the standard browser management techniques, such as IE settings in group policy, can also be used to make WWW access impractical via IE while allowing access to IE for local Intranet applications.  There will be issues for users, and they will need to understand how to work with the “non default” browser correctly.

What to do about the browser plug-in problems

If the reason for not upgrading is legacy browser support, we really need to consider legacy browser plugins.  This is another problem that seems overstated by some IT functions but occasionally turns out to be true.   The free, “make your browsing experience better” plugins, such as Adobe Flash, Shockwave and MS Silverlight, are the most common cause of the pain.  The second group, commercial and often expensive line of business applications, are more likely to cause real problems.

The free stuff needs to be challenged, sometimes line by line and item by item.   Shockwave, Java and Air are all examples of items that weaken a system, often with little need to actually be installed at all.  Where they have to stay, is it everywhere?  Can modern versions or replacements be used to emulate the older version (Adobe Reader, for example)?  Can the plugin be modified in some way to reduce the risk, for example by unbinding Java from the browser (certainly the Internet facing one)?

Where the problems are with high end commercial items that are not supported, and are hard or impossible to replace, a complex support issue results; but moving what is often a very small number of systems into a fully offline configuration is worth considering, leaving a user with two distinct compute instances.

Make sure that XP systems are not used as primary storage, even when in offline mode.  The loss of these point solutions might have a very significant business impact.

What to do about the build image problem

As zombie XP shuffles on it becomes more vulnerable, and more opportunistic infections will hit.  So, in order to perform tasks like rebuild, VDI etc, it becomes necessary to deploy the build fully hardened immediately; you won’t be able to have a base build of XP SP3 and then harden it later.   This build image will probably need its applications, antivirus and system hardening settings updated often, and subjected to frequent testing.

What to do about the anti-malware problem

For the time being there are many antivirus vendors who will supply and support high quality products to defend XP.  My personal view is that this will continue as long as there is a large deployed base; we are just beginning to see the major manufacturers drop support for Windows 2000, after all.  So, this is one of the lesser problems.  But we are now relying on the antivirus to do much more work, so we need to include a full suite of technology including personal firewall, HIPS, download control etc.  AV needs to be set more aggressively, and updates performed more rapidly.  Engine updates become much more important; keep an eye out for any vulnerabilities in the AV itself and make sure they are corrected quickly.

You may also consider something a bit more radical, for example using tools designed to oppose advanced persistent threats (APTs) to further harden the system.  Also monitor systems, networks and traffic for evidence of malware (link).

Beyond anti-malware is full-on application control and application whitelisting, though these will only work on well managed systems, and well managed systems probably wouldn’t have this problem in the first place.

What to do about the patching problem?

You can’t patch XP any more, simple?  But you can patch many applications.  It is also possible to reduce the attack surface by upgrading those applications, for example to a version of MS Office which you can continue to patch.  Also, Microsoft Patch Tuesday needs to feed into your vulnerability lifecycle management, as hackers are going to be reverse engineering patches for IE, Vista SP2 and Office 2007 to find exploitable vulnerabilities that appear in the common code bases.

What to do about old hardware

It has been a very long time since a general desktop hardware refresh has been necessary; many systems purchased for Windows 2000 deployments are perfectly capable of running Windows XP.  Though 15-year-old PCs are pretty rare, there will be many that are incapable of re-use.  Even hardware only a few years old, and more so peripherals, might not be expected to work beyond XP.  It seems unlikely that anyone will find it helpful to make a case for staying on XP just because of the hardware costs, even for traditional desktop rollouts; the advantages in usability and performance are likely to be self evident.  Perhaps more thought might go into the need for that hardware to support the versions of Windows that will ultimately replace Windows 7.

General hardening

There is so much written on general security practice, but poor change management, local working practices and just general neglect weaken controls over time.  These can be restored, with very useful security outcomes.  A few to start with:

  • Shutdown anonymous shares, force credentialed connections
  • Block LM authentication; many systems are still set to use weak authentication (see the sketch after this list)
  • Check and enforce good password policies
  • Remove unused accounts both locally and domain
  • Start logging properly, particularly at gateways, and read the logs occasionally
  • Add robust device control, in particular block execute rights from removable media
  • Stop using XP workstations for storage
  • Control Internet access, inbound and outbound and make sure that the basic controls are mandatory
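As a small, purely illustrative example of auditing one of these settings, the Windows-specific sketch below reads the LmCompatibilityLevel registry value that governs LM/NTLM behaviour; the interpretation in the comments is the usual hardening guidance, and a real estate-wide check would be done through group policy reporting rather than a script.

```python
# Illustrative sketch only, Windows-specific: read the LmCompatibilityLevel
# registry value that governs LM/NTLM behaviour. Level 5 (send NTLMv2 only,
# refuse LM and NTLM) is the usual hardening target; a missing value means the
# operating system default applies.
import winreg

def lm_compatibility_level():
    key_path = r"SYSTEM\CurrentControlSet\Control\Lsa"
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
        try:
            value, _ = winreg.QueryValueEx(key, "LmCompatibilityLevel")
            return value
        except FileNotFoundError:
            return None

if __name__ == "__main__":
    level = lm_compatibility_level()
    print(f"LmCompatibilityLevel = {level} (5 is the hardened setting)")
```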

Go offline

Remove the biggest threat: the Internet.  If people really need to access internal applications, then get them to do it from a different system than their general workstation.  This can be done either by creating a local virtual machine (Windows 7 can support its own) or perhaps by using VMware Workstation; the XP physical machine can be ported to a virtual one.  There are many possibilities where XP really has to stay for an extended period of time; just keep it away from the threats.

The best plan

The best plan is to have a plan: make sure risk, business impact, compliance and user acceptance are all part of it, and allow XP to finally retire to the history books.