Monday, March 20, 2017

Insiders: The often forgotten threat

Insider threats are of particular concern to organisations, as the impact of a rogue insider can be catastrophic to the business. The 2016 Verizon Data Breach Investigations Report showed that 15% of data breaches were a direct result of deliberate or malicious insider behaviour.  Given that it is unlikely that all insider breaches are discovered and/or reported, this number may well be underrepresented in Verizon's statistics. In addition, insiders often have legitimate access to very sensitive information, so it is no wonder that these breaches are difficult to detect. Regardless, they can negatively impact the business in a big way and must not be overlooked.
https://apprenda.com/blog/a-wolf-in-sheeps-clothing/
As a member of the Cisco Security Services team I speak to a lot of customers, and I see views of insider threats vary by industry vertical.  For example, financial services and gaming companies see financial objectives as the main motivator; manufacturing, high technology and biotech see intellectual property theft as their biggest concern; and personal services firms store and process large amounts of personally identifiable information which they must protect from insider theft. The unique challenge is that insiders behaving maliciously are often harder to identify, as they are misusing their legitimate access for inappropriate objectives such as fraud or data theft.
Strong user access policies are a key building block to a good insider threat management strategy.  Regular review of user access rights, along with job rotation, mandatory leave, separation of duties, and prompt removal of access rights for departing employees have been the core of managing insider risk for many years.  Once you have these key components in place it is time to go to the next level.
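These housekeeping controls lend themselves to simple automation. As a minimal sketch, assuming you can export an HR roster of current employees and a list of enabled system accounts (the account names here are hypothetical), a periodic job can flag accounts with no matching current employee for prompt removal:

```python
def accounts_to_disable(active_accounts, current_employees):
    """Return enabled accounts that have no matching current employee."""
    employed = set(current_employees)
    return sorted(acct for acct in active_accounts if acct not in employed)

# Example: 'jdoe' left last month but the account is still enabled.
# Note a real review would also maintain an allow-list of service accounts
# such as 'svc_backup', which this naive check flags too.
active = ["asmith", "jdoe", "svc_backup", "mlee"]
roster = ["asmith", "mlee"]
stale = accounts_to_disable(active, roster)
print(stale)  # ['jdoe', 'svc_backup']
```

The same comparison can drive the regular access-rights review: anything the job flags either gets disabled or gets an explicit, documented exception.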
As with everything in security there is no single answer and frankly you should question anyone that tells you they can fix all of your security problems with one service.
To reduce the risk of the insider threat, we suggest the following strategies:
  1. Classify your sensitive data. This is the most critical step and often difficult, as it requires the technology team and the business to align on what data is sensitive and to ensure there is consistency in the classification strategy. Remember not to boil the ocean; this step should focus solely on identifying sensitive data that could affect the business should it be stolen. Carnegie Mellon University has a good example that can be adapted to most organisations.
  2. Once the data has been classified, proceed with a plan to protect it.
a. Instrument the network so you can detect atypical access to your data. To validate that your instrumentation is set up correctly, you should be able to answer the following questions:
i. Have new users started accessing sensitive data?
ii. Have your authorised users accessed more sensitive data than usual?
iii. Have your authorised users accessed different groups of sensitive data more than before?
Many fraud management professionals would recognise these questions as lead indicators of possible fraudulent activity, and astute HR professionals would recognise these as possible lead indicators of an employee about to leave the business.  Both of these scenarios are very typical lead indicators of insider data loss.  You should try to make use of fraud management and HR personnel to assist you in determining what to look for and actions you can/should take when you detect a possible insider incident.
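Those three questions can be answered with very little code once the access logs exist. A minimal sketch, assuming you can export (user, bytes read) events for your sensitive data stores; the baselines and the factor of 3 are illustrative assumptions, not tuned values:

```python
from collections import defaultdict

def flag_atypical_access(events, baseline_users, baseline_daily_bytes, factor=3):
    """Flag users never seen before, and known users reading far more than usual.

    events: list of (user, bytes_read) tuples for today's sensitive-data access.
    baseline_users: set of users historically seen accessing this data.
    baseline_daily_bytes: dict of user -> typical daily bytes read.
    """
    totals = defaultdict(int)
    for user, nbytes in events:
        totals[user] += nbytes
    alerts = []
    for user, total in totals.items():
        if user not in baseline_users:
            alerts.append((user, "new user accessing sensitive data"))
        elif total > factor * baseline_daily_bytes.get(user, 0):
            alerts.append((user, "volume well above baseline"))
    return sorted(alerts)
```

The output of something like this is exactly what the fraud management and HR conversations above should shape: which alerts warrant a quiet word, and which warrant an incident.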
Data flow analytics can also assist on the technical side.  Cisco Stealthwatch uses NetFlow to build profiles of expected behaviour for every host on the network. When activity falls significantly outside of expected thresholds, an alarm is triggered for suspicious behaviour. Data hoarding is one typical use case where data flow analytics detects anomalous behaviour.  For example, if a user in marketing usually accesses only a few megabytes of network resources a day but suddenly starts collecting gigabytes of proprietary engineering data in a few hours, they could be hoarding data in preparation for exfiltration. Whether the activity is the result of compromised credentials or insider threat activity, the security team is now aware of the suspicious behaviour and can take steps to mitigate it before that data makes it out of the network.
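The profiling idea is simple to illustrate. This is a greatly simplified sketch of Stealthwatch-style per-host profiling, not its actual algorithm: alarm when today's volume for a host sits several standard deviations above its own history (the 3-sigma threshold is an illustrative assumption):

```python
import statistics

def hoarding_alarm(history_bytes, today_bytes, sigma=3):
    """Alarm if today's byte count for a host is more than `sigma`
    standard deviations above that host's historical daily mean."""
    mean = statistics.mean(history_bytes)
    stdev = statistics.pstdev(history_bytes) or 1.0  # avoid zero-width profiles
    return today_bytes > mean + sigma * stdev

# A marketing host that normally moves ~5 MB/day suddenly moves 3 GB.
print(hoarding_alarm([4e6, 5e6, 6e6, 5e6], 3e9))  # True
```

Real flow analytics profiles far more than volume (peers, ports, time of day), but even this crude baseline would catch the marketing-user example above.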
b. Data Loss Prevention software, or DLP as it is more commonly known, monitors data flows much like an IPS, as well as monitoring data usage at the endpoint. Network DLP uses signatures like an IPS, but the signatures are typically keywords in documents or data patterns that can identify sensitive data. Endpoint DLP can be used to control data flow between applications, outside of the network and to physical devices.  This becomes especially important if there are concerns about sending data to external data storage systems (e.g. Google Drive, Box, SkyDrive) or to USB-attached storage.  DLP can control access to all of these systems, but it is a matter of policy and vigilance as new capabilities are released at the endpoint.
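To make "keywords and data patterns" concrete, here is a toy sketch of the kind of content signature a network DLP engine applies, assuming two illustrative rules: a document-classification keyword, and a 16-digit card-number pattern with a Luhn check to cut false positives. Real DLP rule languages are far richer than this.

```python
import re

KEYWORD = re.compile(r"\b(CONFIDENTIAL|INTERNAL ONLY)\b", re.IGNORECASE)
PAN = re.compile(r"\b(?:\d[ -]?){15}\d\b")  # 16 digits, optional separators

def luhn_ok(number):
    """Standard Luhn checksum over the digits of a candidate card number."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def dlp_match(text):
    """Return the list of signature hits found in a chunk of content."""
    hits = []
    if KEYWORD.search(text):
        hits.append("classification keyword")
    for m in PAN.finditer(text):
        if luhn_ok(m.group()):
            hits.append("possible card number")
    return hits
```

Note how directly this depends on step 1: the keyword rule only works if documents are actually labelled, which is the data classification point made above.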
There is a lot of skill in setting up DLP software effectively, and many of the complaints about the lack of effectiveness of DLP come down to poor data classification and poor DLP software configuration.  There is also an argument that network DLP is losing relevance with the increasing amount of encryption of network traffic.  This is certainly true, and enterprises need to have SSL interception properly configured to maximise the effectiveness of their DLP investment.  Still, not all traffic will be able to be decrypted, and you must determine whether your risk appetite allows for users having encrypted communications you cannot monitor.  This is not exclusively an IT decision, but one that needs to be made by a well-briefed executive.
c. Network segmentation is unfortunately something that is often not done well until after a security breach. One of the benefits of a properly segmented network is that a malicious insider keeps bumping into network choke points.  If these choke points are properly instrumented then alerts flow to warn of potential inappropriate access attempts.  This gives the defender more time to detect and respond to an attack before sensitive data leaves the network.  For example, if your Security Operations Centre (SOC) observes a user in Finance trying to access an Engineering Intranet server then you should be raising an incident to address why this user is trying to access a server that most likely holds no relevance for their job function.
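The Finance-to-Engineering example reduces to a simple policy check at the choke point. A sketch, with hypothetical subnets and an illustrative allow-list of segment pairs standing in for the real firewall or SOC correlation rule:

```python
import ipaddress

# Hypothetical segment map and inter-segment policy.
SEGMENTS = {
    "Finance": ipaddress.ip_network("10.10.0.0/24"),
    "Engineering": ipaddress.ip_network("10.30.0.0/24"),
}
ALLOWED = {("Finance", "Finance"), ("Engineering", "Engineering")}

def segment_of(ip):
    addr = ipaddress.ip_address(ip)
    for name, net in SEGMENTS.items():
        if addr in net:
            return name
    return "unknown"

def violation(src_ip, dst_ip):
    """True if this flow crosses segments in a way policy does not allow."""
    return (segment_of(src_ip), segment_of(dst_ip)) not in ALLOWED

print(violation("10.10.0.5", "10.30.0.7"))  # Finance host -> Engineering: True
```

Anything returning True here is exactly the alert the SOC should turn into an incident: why is this user reaching for a server that holds no relevance for their job function?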
  3. Honeypots with decoy sensitive data are one of the more controversial strategies and may not be for everyone. The honeypot should be set up with decoy data and a similar look and feel to the production environment.  The decoy data needs to look authentic, and knowledge of the honeypot's existence needs to be controlled on a need-to-know basis.  The great advantage of a honeypot over other technical strategies is that all traffic that goes to the honeypot can be considered malicious, as by its very nature the honeypot has no business relevance.  The honeypot is only there to trap those that could be looking for sensitive data inappropriately.  Our consultants have found it useful to use the same authentication store as the production environment so you can quickly see which user is acting inappropriately; alternatively, you may have an external attacker using the legitimate credentials of an insider to hunt for sensitive data.  Either way, you need to act quickly and deliberately to head off possible data loss.  Like every data loss scenario, you need a robust process for managing these incident types.
  4. Use of non-core applications, especially social media applications. There has been an explosion of social media applications in recent years, including Skype, WhatsApp, QQ, WeChat, LINE, Viber and many others. One concern we often hear from our customers is that their staff are using these applications to send sensitive data out of the business.  These applications are often used for business purposes, and depending on the sensitivity of the data this may be considered inappropriate behaviour.  Our favoured strategy is to use some of the recommendations above: classify your data, and instrument the network to look for inappropriate use.  But from the user's perspective, they are trying to perform their job in the most efficient manner, and no one wants to discourage "good behaviour"!  If there is a legitimate business use for a social media application, we recommend that a corporate social media application be deployed so staff can be efficient in their jobs.  Security needs to enable users to get their jobs done, not hold up business progress and increase business complexity.  Additionally, users must understand the ramifications of their actions and know what data can be sent externally and what cannot leave the organisation without appropriate protections.  Education is the key to achieving an effective balance, and reminders, like a "nag screen" that alerts the user that they are accessing sensitive data, can reinforce the user's training. Document watermarks and strongly worded document footers about document sensitivity can also serve as valuable reinforcement.
  5. Additionally, we recommend that you have the ability to hunt for caches of sensitive data. One phenomenon that our security consultants see time and again is that people have the habit of creating a cache of sensitive data before they send or take it out of the organisation. This is true not just for insiders, but often for external attackers preparing to exfiltrate data.  Our consultants use endpoint tools to look for caches of documents in user directories, with desktop and temp directories being the most common places to find them.  Often the documents will be compressed into an archive such as a ZIP, RAR or GZ file for quicker data exfiltration and to avoid tripping DLP keyword filters.  Whatever tool you use to hunt for data caches, it must be able to return the name and type of documents when it scans.  You should select a tool that can hunt on the basis of a data volume threshold and dynamically tune that amount.  Some of the more sophisticated DLP solutions can implement this functionality.
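A cache hunt of the kind described above can be prototyped in a few lines. This is a sketch, not a product: the 100 MB threshold is an illustrative assumption, and a real endpoint tool would also inspect archive contents and scan at scale across the fleet.

```python
import os

ARCHIVE_EXTS = (".zip", ".rar", ".gz", ".7z")

def find_caches(root, threshold_bytes=100 * 1024 * 1024):
    """Walk `root` and report archive files larger than the threshold,
    returning (path, size) so the analyst sees both name and type."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.lower().endswith(ARCHIVE_EXTS):
                path = os.path.join(dirpath, name)
                try:
                    size = os.path.getsize(path)
                except OSError:
                    continue  # file vanished or is unreadable; skip it
                if size >= threshold_bytes:
                    hits.append((path, size))
    return sorted(hits, key=lambda h: h[1], reverse=True)
```

Pointing this at user, desktop and temp directories, the common staging spots noted above, and tuning the threshold down or up is exactly the "hunt by data volume" capability to look for in a tool.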

Complexity is the arch nemesis of a good security program

Like every good superhero we have our arch nemesis, and ours is often the complexity of our security environment rather than the bad guys trying to compromise our networks. The 2016 Cisco Annual Security Report found the average number of information security vendors in enterprises was 46!  We were shocked by this number, but it goes to show that there are a lot of point products in this industry.  One of the constant comments from our customers is "can you make all of these products work together?" We hear you, and recommend that when devising your strategy to combat the insider threat you also consider that the output from these controls is going to have to be acted upon; you cannot continue to overburden the existing SOC team.  We recommend that you review how the insider threat strategy will integrate with your existing threat management process and platform as a key consideration before you get involved in "speeds and feeds" bake-offs with products.

The "Five Stages" of being breached

https://eclosure.com.au/5-stages-grief/
Doing data breach investigations in the commercial sector introduces you to many new people.  One of the nicer things that people have said to me is "Great to meet you, but I hope to never see you again".  A few people who have been through a data breach will have a quiet chuckle to themselves and know what this means, but for the fortunate others it means "thanks for helping when we were having a bad time, and we hope to never have to use your services again because that would mean we are going through another data breach".  In the early days I found it hard to understand why some of my customers were less than happy to work with us, and some were even angry with me.  I'm thinking "WTF??? I'm the one trying to help you".  Others have said "If we let you in, how do we know you are not going to steal all of our information", and again I'm feeling like they see me as the bad guy.  I tend to take things a bit personally; I'm an only child, so yeah, "It is all about ME ;-)".

Trouble is that most of the people going through a breach situation are totally unprepared, and experiencing a data breach starts their thoughts spiralling into all sorts of conspiracy theories, thoughts about how everyone has let them down, denial that they had an issue because it "could not possibly be us", and many others.  After a while you start to realise that there is a pattern, and one day at home, looking over my partner's shoulder, I saw what she was reading and had an epiphany: I recognised the behaviours, but from a social sciences model that had nothing to do with technology.  My partner is an almost full-time university lecturer, former perennial part-time student, and part-time counsellor.  In one of her main fields of study I saw the Five Stages of Grief model being discussed and immediately recognised a few of the stages.  For context, the stages of grief in the model are:
  1. Denial;
  2. Anger;
  3. Bargaining;
  4. Depression; and 
  5. Acceptance. 
The Denial and Anger stages were the first trigger 😁 as I had seen so much of this.  But I started to recognise that during investigations customers showed some, or all, of these traits as we worked our way through the incident (i.e. kicking out the bad guys, reporting on what had happened, and advising how to stop it from happening again).

A few examples of the phases that I saw are:
Denial
"It could not possibly be us, we don't store that data"
"We have the lock in the browser so all of our transactions are secure"
"I rang my IT guy and he/she said we are secure"
"Why would anyone want to hack into us in [insert-tiny-location-here] from [insert-known-hive-of-hackers-country-here] and ruin my business"
"How would a hacker find us on the Internet"
There are a few pearls in the list above, but these are often the things that incident responders have to deal with from our customers.  Don't forget that often (typically two-thirds of cases) a third party discovers the breach and the victim is informed without making the discovery themselves.

Anger (limited number for the PG audience)
"Why are you trying to ruin my business"
Being treated poorly, e.g. working in a hot room without air-conditioning when the temperature is over 40°C outside.
"Why do things like this happen to me?"
Angry stream-of-consciousness emails from the customer early in the morning (e.g. 2:30 AM) that lack rational reasoning.

Bargaining
I suspect that many of these are internalised and few are shared with the IR team.  My partner described this as the "if only phase".
"If we made the changes that you are talking about will all of this go away?"
"Can I pay a fine so I can get back to my normal business?"
"Can I install a firewall to fix all these problems?"

Depression
In my opinion this can manifest as a lack of communication with the investigator, as the customer has withdrawn to deal with their situation.  It's OK; they are coming to terms with the new normal and the fact that they have really had a data breach and now need to improve their security.

Acceptance
"What do we need to do to ensure this cannot happen again"

https://eclosure.com.au/wp-content/uploads/2014/11/Grief-300x153.jpg
My partner also pointed out that in her experience, and that of most in the counselling field, the grief process is not linear (1-2-3-4-5); people vacillate between different phases and often go back and forth for a period of time. Thankfully dealing with a computer security incident is not as difficult as dealing with interpersonal grief, so the process does not last as long as dealing with personal loss, but don't be surprised when/if people go backwards.  It does happen, and the better prepared we are, the better we can deal with other people's emotions.

This got me thinking that one of the things we are not trained for as incident responders is dealing with customers in this situation.  My partner worked part time for a few years to complete a Masters degree (another one) to learn how to counsel people going through this cycle, yet as IR professionals dealing with many people going through the breach grief cycle we get no training and have to work out how to deal with customers ourselves.

I have lived and worked in Australia for much of my life, and we have recently passed mandatory data breach disclosure legislation as part of our existing Privacy Act.  Whilst Australian businesses are not yet required to disclose a data breach, the requirement is coming within 12 months of the passing of the amendment to the Act.  Reflecting on the countries in which I have worked in the last decade, I can see a pattern: in those without mandatory breach disclosure legislation, business leaders are more often incredulous that a data breach could affect them than in other regions (e.g. USA, Japan, Europe).  Perhaps it is a matter of awareness of the potential for data breaches?

Please note that the intent of this blog is not to trivialise grief, or the feelings of loss that people have to deal with in their personal lives, merely to observe that IR people can possibly learn from research into grief.  Even knowing that this may be what the customer is experiencing can help the responder deal with them more effectively.  At the end of the day it's not all about the number of records stolen, or the value of sensitive data that has been compromised; it is about how the people feel about it, and I think a small amount of grief is natural in that circumstance.  Understanding the victim's perspective definitely helps us to be empathetic with their position, and therefore we are better placed to help them make the right decisions on what may be one of the toughest days of their professional life.

Thursday, July 7, 2016

Ransomware – a wake up call for effective security controls

“The digital canary in the digital coal mine”

https://share.america.gov/wp-content/uploads/2014/11/canary_art22.jpg
A “canary in the coal mine” is an idiom that refers to an early warning sign of upcoming trouble.  It comes from the days when there was no technology to detect leaks from unseen pockets of toxic gas in the rock of a coal mine. Canaries are more sensitive than humans to the toxic gases in mines, so miners used to take the poor canaries with them as an early warning system.  If the canary is on the bottom of the cage, it’s time to get out of the mine FAST!  So how does this relate to ransomware?  Bear with me for a while and I will explain how ransomware is the early warning sign that security threats have free rein in your environment.

Ransomware is big business today.  Ransomware miscreants encrypt a victim’s files and only provide the decryption keys after the victim pays the “ransom”, usually in the vicinity of US$300 to US$500. Unlike most other online crimes that target businesses exclusively, ransomware impacts end users directly. Ransomware campaigns are indiscriminate about their victims, as this is a volume game: the bad guys will attempt to compromise tens of thousands of victims per day, whether they be a grandparent at home looking at photos or a corporate banker making billion-dollar deals.  The payday for their efforts can be staggering. Cisco recently worked with Level 3 Threat Research Labs to disrupt an Angler exploit kit botnet which Cisco estimates to have been earning at least US$30M annually, and I hope this disruption hurt the bad guys.

The effectiveness of ransomware can be seen in a recent CERT Australia survey, where 72% of companies reported malware incidents in 2015, more than quadruple the 2013 figure (17%).  72% of respondents also stated that ransomware “is the threat of most concern”.  These figures are staggering for a survey targeting corporations, but not surprising: I have seen ransomware execute and encrypt data on ASX Top 200 companies' systems and Fortune 100 enterprise servers, as well as on our relatives' laptops.  Quite frankly, ransomware is everywhere, and one of the key reasons it is such a huge concern is that signature-based anti-malware products such as anti-virus are mostly ineffective: ransomware is written and tested to avoid detection by AV products, and the signatures can change hundreds of times in rapid succession.
Now let's get back to the “canary in the coal mine” analogy.  I believe that the most troubling aspect of ransomware is NOT its effect on the end user, but how incredibly effective it is at:

  • Penetrating corporate network perimeter defences;
  • Executing as a new process on a victim machine;
  • Calling out to a server on the Internet to download an encryption key (refer to the update below); and
  • Going undetected until the victim's work files (or cat videos) cannot be accessed because they are encrypted.
I often get asked “can you restore my files?”  Unfortunately, the answer most often is “No”.  Ironically, most ransomware uses strong and well-implemented cryptography, and it is not economically or technically viable for anyone to attempt decryption.  The point here is that we need to move on from believing all attacks can be prevented; we must also realise that attacks have to be detected quickly to prevent damage to the business.  The fact that most attacks are not directly detected by the victim, but by the action of the external party (encrypting data), is what really troubles me as a security professional.  Security controls should be preventing as close to 100% of attacks as possible, but there remains a fraction of successful attacks that we must detect and respond to before significant damage is done to our businesses.
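One detective control follows directly from that observation: encrypted output is statistically distinguishable from ordinary documents. A minimal sketch of the idea (the 7.5 bits-per-byte threshold is an illustrative assumption, not a tuned value) that an endpoint monitor could apply to newly written files:

```python
import math
from collections import Counter

def shannon_entropy(data):
    """Shannon entropy in bits per byte; encrypted or compressed
    data sits near the 8.0 maximum, plain documents much lower."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def looks_encrypted(data, threshold=7.5):
    return shannon_entropy(data) >= threshold

print(looks_encrypted(b"hello hello hello"))  # False: ordinary text
```

A burst of files being rewritten with high-entropy content is a strong early signal of encryption in progress, catching the attack while it is running rather than after the ransom note appears. Compressed archives trip the same test, so in practice this is one signal among several, not a verdict on its own.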

I think we should be looking closely at the lessons ransomware teaches us about how effective our preventative and detective security controls are, and how they have failed.  Every time ransomware is able to execute and encrypt data, our preventative controls have failed.  We can use this incredibly destructive and annoying malware as a tool to learn about the shortcomings of our security program, the digital canary in the digital coal mine (when the canary is dead it’s time to evacuate), so we can:

  1. Prevent and detect more ransomware and other malware incidents; and
  2. Be better able to defend our enterprises against more skilled and determined attackers such as organised crime and nation state funded actors. 

The point is that if ransomware can operate in your environment, then you have little hope of defending against more skilled and determined attackers.  The critical questions that must be answered are “how did the ransomware get through my perimeter controls?” and “how was it able to execute and encrypt data without being detected before a victim lost access to their critical business documents (and cat photos)?”

Detecting a threat in the environment is critical to minimising the damage malware does in the network, which is why we need multiple layers of controls.  We should not get too far into preventative controls here, but as our mothers used to tell us, “An ounce of prevention is worth a pound of cure” (my mother never went metric).  There have been PLENTY of articles written about preventing ransomware and other malware, so I do not want to rehash what has already been done.  If you want articles on prevention, I suggest you read the Cisco Talos blog “Ransomware: Past, Present, and Future”.  One last word on prevention before we move on to what we are here for.  There’s a simple-to-deploy technique that is being overlooked by most information security professionals: blocking DNS lookups of known malicious sites.  Cisco acquired OpenDNS during 2015.  One of OpenDNS’ main functions is to provide a safe DNS infrastructure for name resolution services.  The differentiator with OpenDNS over many standard DNS services is that they block name resolution for known malicious domains.  The reason blocking DNS lookups is effective against ransomware is that most, if not all, ransomware uses a multi-stage attack: an email typically delivers the payload, and when the payload executes it calls crypto.evil.com (for example) to generate an encryption key. Yes, it is not perfect, as we are playing catch-up, and it would be preferable to prevent the initial infection, but if you don't get your data encrypted we can call that a win!  More details of this functionality can be found at https://www.opendns.com/enterprise-security/threat-enforcement/features/cryptolocker-containment-is-the-new-prevention/.
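The blocking logic itself is simple to sketch. Assuming you have a feed of known-bad domains (crypto.evil.com here is the hypothetical example from above, not a real feed entry), a resolver refuses to answer for a blocklisted name or any subdomain of one:

```python
BLOCKLIST = {"crypto.evil.com", "evil.com"}  # hypothetical malicious domains

def resolve_allowed(qname):
    """Refuse resolution if the name, or any parent domain, is blocklisted,
    mimicking how a protective DNS service sinkholes known-bad domains."""
    labels = qname.lower().rstrip(".").split(".")
    for i in range(len(labels) - 1):
        if ".".join(labels[i:]) in BLOCKLIST:
            return False  # sinkhole: malware never learns the real address
    return True
```

When the payload's key-fetch lookup fails, many variants stall before any file is touched, which is exactly the "containment as prevention" argument the OpenDNS link makes.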

Now let's get to the crux of the canary in the coal mine analogy.  What I’m trying to say is that ransomware is an indicator of bigger problems.  You can think of ransomware as the (unfortunately) dead canary on the bottom of the cage that has detected the gas leak.  I believe that you should be looking for the root cause of the ransomware incident rather than concentrating on your canary problem.  Root cause analysis will show how the ransomware got into the enterprise, and once you understand that, you can start to fix the problem.  Please do not go and buy a shiny new security object before the problem is properly understood; without that understanding you may be fixing something that will not improve your security posture commensurately.  We all have shiny object syndrome, but choose your time to act and resist the pressure from your peers as much as possible.

Consider the points I made above about:
  1. Malware (typically) comes into the network through the corporate email system;
  2. Unknown software (ransomware) is able to run without human intervention on one of your corporate systems inside the corporate boundary; and
  3. It then connects to the outside world through your corporate proxy server, IPS and firewall(s).

This is remarkably like the tactics used by nation state attackers when setting up a beachhead inside the corporate boundary before stealing your intellectual property.  Starting to smell rotten eggs now?  This is the real reason we are so concerned about where ransomware can run: if ransomware can run, so can nation state attackers, and they can do far more damage to your business than encrypting a few files.  The typical motivation of nation state attackers is to steal your intellectual property, pricing information, customer data et al. for the financial benefit of their own country.  This brings into stronger focus the benefits of doing a proper root cause analysis of the ransomware incident, as it’s not just about the one, two or more systems that ran the latest ransomware variant and the ensuing mayhem of trying to minimise the damage and recover the data.  If you have planned ahead and have decent backups of your critical data (kudos to you if you have), then you don't need to get too spun up about the effects of the ransomware, and recovery is pretty straightforward.  Make sure you learn the lesson that the ransomware incident has taught you.  Find out how the ransomware got inside your organisation, and put in better controls to stop it happening again, or at least minimise the chance of it happening again (there’s no panacea for all ransomware).  Then work out what it did on the endpoint and build a strategy for stopping that from happening again.  Next, look at the network communications and determine how you could have a) disconnected it (e.g. by blocking DNS calls to known malicious domains); or b) detected it earlier to minimise the damage.  One of the key differences between nation state attackers and the cybercriminals behind ransomware is the end goal.  Cybercriminals are after money, typically the faster the better, whereas nation state attackers are playing the long game and looking for the data of choice.
They want to maintain access and stay in your network for the long term whilst extracting the data they are looking for.  Nation state attackers move laterally, hopping from system to system, looking for the data they have been tasked with finding and acquiring the administrator credentials often necessary to access it.  All of these actions have signatures, or indicators of compromise, that you can detect with the right tools.  If you have not looked for them, or had a skilled team working on your behalf, you might be shocked at what you discover.
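One of those lateral-movement indicators can be sketched in a few lines. Assuming you can export (source host, destination host) pairs from authentication logs, and treating the threshold of 10 distinct targets per window as an illustrative assumption, score each source for fan-out:

```python
from collections import defaultdict

def fan_out_alerts(auth_events, max_targets=10):
    """auth_events: (src_host, dst_host) pairs from authentication logs.
    A single source authenticating to unusually many distinct destinations
    in one window is a classic lateral-movement indicator."""
    targets = defaultdict(set)
    for src, dst in auth_events:
        targets[src].add(dst)
    return sorted(src for src, dsts in targets.items() if len(dsts) > max_targets)
```

A workstation that suddenly authenticates to a dozen servers it has never touched is worth an analyst's attention, whether the hands on the keyboard are an insider's or a nation state operator's using stolen credentials.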
https://share.america.gov/wp-content/uploads/2014/11/canary_spot_art.jpg

The objective is to learn from the incident and make continuous improvements to your defences and detection capabilities.  If ransomware can run in your environment, then so can the tools that nation state attackers use, and this is a cyber arms race against attackers, whether they be nation states, cybercriminals, or activists with a keyboard.  So when you realise that the adversary is continuously improving their tools and techniques (as recently demonstrated by the cybercriminals and their ransomware campaigns), then you had better be doing the same to maintain your edge so your business can survive.  Remember that ransomware, whilst annoying and inconvenient, is just the canary in the coal mine.  If your yellow bird is on the bottom of the cage, you’ve potentially got bigger problems.

Update: 20 July 2016
A new version of the Locky ransomware operates in offline mode so does not need to call back home to get encryption keys. http://www.pcworld.com/article/3095865/security/new-locky-ransomware-version-can-operate-in-offline-mode.html

Wednesday, April 9, 2014

Exploiting Heartbleed vulnerability


It seems the whole InfoSec world has been talking about the Heartbleed (CVE-2014-0160) vulnerability in OpenSSL for the last 24 hours.  Being an empirical person, I wanted to try it out for myself.  There is a patch available, but patches take a long time to get deployed to many web servers.

The vulnerability is known to affect the main versions of OpenSSL and will also affect many appliances that use OpenSSL for securing web pages.  One important note is that other packages that depend on OpenSSL for key generation (e.g. OpenSSH) are not vulnerable by association, as they do not use TLS.  Also note that Windows is not affected by this vulnerability.
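To illustrate the class of bug, here is a toy Python model of the flaw, not OpenSSL's actual C code: the heartbeat handler echoed back as many bytes as the request *claimed* its payload contained, without checking that claim against the real payload length, so adjacent bytes of process memory leaked.

```python
def heartbeat_response(memory, payload_offset, payload_len, claimed_len):
    """Vulnerable behaviour: trusts the attacker-supplied claimed_len
    with no check that claimed_len <= payload_len."""
    return memory[payload_offset:payload_offset + claimed_len]

def heartbeat_response_patched(memory, payload_offset, payload_len, claimed_len):
    """The fix: silently discard requests whose claimed length
    exceeds the actual payload in the record."""
    if claimed_len > payload_len:
        return b""
    return memory[payload_offset:payload_offset + claimed_len]

# Attacker sends a 4-byte payload but claims it is 22 bytes long;
# the 'memory' adjacent to the payload stands in for session keys, form data etc.
memory = b"PING" + b"secret-session-key"
print(heartbeat_response(memory, 0, 4, 22))  # leaks b'PINGsecret-session-key'
```

In the real attack the request can claim up to 64 KB, and repeated requests sweep different slices of the server process's memory, which is why the script below returns different results on different runs.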

So I used a vulnerable AWS Linux AMI with OpenSSL 1.0.1e installed; OpenSSL is vulnerable up to and including this version.  I put up a dummy payment page in AWS from a previous demonstration and sent some private information to the page, as you would with a payment on the Internet. It should be noted that AWS has requested users patch their systems, and patches are available for all AWS systems I have running.



I then ran the Python script from Jared Stafford found here. I've not been able to retrieve the session keys yet, but apparently others have. I did manage to get some user data out of my test site, though, which included all the variables from the "Customer Details" page above.

I ran the script against the IP address and retrieved differing results on different occasions. Currently I have the script running in a loop looking to see if it's just a brute force effort that's required to get the session key out. Have to wait and see for that.

Here's a sample of the output that shows the user data that was input into the web form. We can also see the URL as well as the browser user agent. More interestingly, you can see the sensitive data that was sent in the "secure SSL browser session" in plain text (bottom right), without any interception, decoding, decrypting or authentication.
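For background, the core trick in scripts like the one above is a heartbeat request that lies about its payload length. Here is a minimal sketch of how such a record can be packed in Python (message layout per RFC 6520; the TLS version and claimed length are illustrative values, and this only builds the raw bytes, it does not perform the TLS handshake):

```python
import struct

def malicious_heartbeat(tls_version=0x0302):
    """Build a TLS heartbeat record that claims a 16 KB payload but
    carries none. A patched server must silently discard it; a
    vulnerable one echoes back up to 16 KB of its own process memory."""
    # TLS record header: content type 24 (heartbeat), version, length 3
    record_hdr = struct.pack(">BHH", 0x18, tls_version, 3)
    # Heartbeat body: type 1 (request), claimed payload length 0x4000,
    # with no actual payload bytes following
    body = struct.pack(">BH", 0x01, 0x4000)
    return record_hdr + body

pkt = malicious_heartbeat()
print(pkt.hex())  # 1803020003014000
```

The whole bug is that the server trusts those two claimed-length bytes and copies that much memory into its reply.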




If anyone has some pointers on how to get the session key (which enables decryption of the session), please drop a note in the comments section. I'd love to hear from you and get that working in the demo.


Monday, January 20, 2014

Preventing POS data breach

Background

Espresso? Sure why not!
With so many people talking about Point of Sale (POS) data breaches, and so many of them happening over the known history of computer crime, you have likely been led to believe that POS is a hard thing to secure and that sophisticated crooks are chasing POS systems every day.  Welllllll, like most things in life, they are not always what you think they are.  For those of you that have ever had to do pen testing or worked on a major incident, you know that everything you read and see in the press and from Hollywood is just pure hokum.  The POS incidents are no different.  Bear in mind I'm not on the inside at the most recent one, but as per my previous post it's most likely yet another RAM dumper (YARD is my new acronym.  God knows the IT industry needs another acronym).  I hope this acronym demonstrates the lack of surprise I have when dealing with this type of malware.

I remember when I first saw this type of malware and thought "wow, how the hell do you stop someone from stealing plain text data in memory?".  After a bit of contemplation I realised the answers are pretty simple, because I was asking the wrong question.  Firstly, like every piece of sensitive data, if you don't need it, don't store, process or transmit it.  Yep, as I said, simple but not always achievable.  If your business is not able to live by the "golden rule", then you must live by the "silver rule", which is "keep the bad guys out of your sensitive data". This is all common sense really, but common sense is not often applied in practice, and businesses large and small get caught out every day.

The moment of realisation

Merchants that become the victim of a POS data breach typically make at least one of the following statements:


  1. We didn't know they had any sensitive data
  2. My vendor told me there was nothing to worry about
  3. It's been working for years, why now?
  4. Why would someone from <Eastern-European-country-here> login to my computer in <anywhere>?
  5. How did they find my computer/network in <anywhere> and how did they know where to look?
  6. I didn't tell anyone so how did they know my password?
From this list of statements you can see that the general understanding of most merchants is pretty low, and they don't have much capacity to look for problems they didn't know existed.

Preventing and stopping RAM dumpers

In the previous blog entry I talked about RAM dumpers and scrapers. Given the US POS environment, it is necessary for the card data to be in the RAM of the POS system in plain text, so we have to protect the system from having bad guys on it. Follow the PCI-DSS and you're almost certain to be good.  The PCI-DSS is a very large document and will take a lot of time to digest, and even more to implement.  A more abridged list to maximise the effectiveness of controls is:

  1. Do not use vendor default passwords, especially for remote access (Yes, this REALLY still happens today). This means using unique passwords for vendors as well.  In days gone by (I hope), support vendors used to use their company name as the password for hundreds or thousands of POS devices.  What could possibly go wrong hey?
  2. Run the POS systems as an unprivileged user. Seriously, why in the world would a POS need to run as admin?  There are only three answers: stupidity, laziness and really bad software.  None of them is acceptable.  
  3. Really restrict access to the payment network.  Most POS systems are protected by a single credential, and a large proportion of them have unfiltered outbound Internet access.
  4. Remove Internet access from the POS network.  Why does a POS need Internet access?  If you are using Internet based payments, filter access to the necessary addresses/ports and nothing else.   I've seen POS malware send data and alerts to drop servers via email, FTP and HTTP.  There is simply no need to provide a direct outbound path for malware to send data.
  5. Any connections in/out of the POS network must be authenticated.  
  6. Monitor your POS devices for new services and processes.  Most POS malware must remain memory resident and will be installed as a service or as an auto-start application.  A new service or auto-start application on a POS is a big red flag and must be investigated. If you have least privilege on  your POS this should not happen anyway.
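A simple way to act on point 6 is to keep a known-good baseline of services and auto-start entries and diff the live list against it. A minimal sketch follows; the service names are made up for illustration, and on a real Windows POS you would feed it the output of something like `sc query` or the contents of the Run registry keys:

```python
def new_autostart_entries(baseline, current):
    """Return entries running now that are not in the known-good
    baseline -- on a locked-down POS, each one is a red flag that
    must be investigated."""
    return sorted(set(current) - set(baseline))

# Hypothetical example: one unexpected service has appeared
baseline = ["posapp", "av-agent", "print-spooler"]
current = ["posapp", "av-agent", "print-spooler", "winhlp32svc"]
print(new_autostart_entries(baseline, current))  # ['winhlp32svc']
```

The point is not the tooling, it's the discipline: on a least-privilege POS the answer should always be an empty list.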
Of course there are ways to circumvent many of these controls, but realistically very few bad guys want to put in a whole lot of effort when there is so much low hanging fruit out there.  It's back to the old saying, "when you are being chased by a bear you only have to outrun one of your mates to be safe".  It's similar in cyber-crime, but the analogy does not hold true forever, so we have to continuously improve our defences and detection of events.


Friday, January 17, 2014

RAM scraping/dumping in the payment card industry - or being a target

There's too much talk and not enough information about this topic in my opinion.  Many people and experts are getting spun up about the Target breach.  It's a lot of cards and from a major US brand, but we've been down this road before, and with the same type of malcode.  I haven't seen the malcode yet, but everything points to it being a RAM scraper/dumper, of which I've seen plenty in the past.  This malcode has been around in various guises since 2009 in the payment card space.  There's not a lot that can be done from an infrastructure point of view in the US given the current merchant payment architecture.

I want to try to take some of the mystique away from this issue by explaining it simply.  Hopefully I can achieve that.  Most payment applications in the US take the payment card data from the magnetic stripe in the form of what is called Track 2.  This is a standard format for reading/writing the necessary information on a credit card.  It has many different components which are not that important in this discussion.  The important thing to note is that Track 2 data follows a standard pattern that is well known, repeatable and predictable.  Couple that with the fact that, over time, the equipment to reproduce magnetic stripe credit cards has become very inexpensive, and the barriers to entry for bad guys have reduced dramatically.

The PCI-DSS addressed the vulnerabilities in plain text payment card data processing by requiring data at rest (on a disk) or in transit (on the wire/Internet) to be encrypted.  PCI-DSS also requires all access to the payment system to be authorised.  The problem is that in many POS environments (like most in the USA) the payment card data cannot be encrypted while it is being processed.  For the POS, the vulnerable point is when the mag stripe card is swiped.  The data from the mag stripe is read and sits in the memory of the POS system while the payment is processed.  Most POS systems are Windows based and have the same well known flaws as any Windows system.  In addition, the poor old POS tends to be overlooked for maintenance: they are hidden under a desk, difficult to get at, and if you break them you literally stop the cash register from ringing.  In short, many POS systems are old machines that are barely functional.  You couldn't even play a decent PC game on them or run a video on YouTube, but they are serviceable for their intended purpose.

Back to the malware that is installed on the POS by bad guys with unauthorised access.  Simply put, this malware performs a three-stage process:

  1. Dump the memory of the POS process - i.e. the process that received the Track 2 data
  2. Run a regular expression (a search for the Track 2 pattern) on the output of the memory dump
  3. Store the extracted data locally for manual collection by the bad guys, or send it out of the network
Believe it or not, this is not terribly difficult to do and can be done with a few simple standard tools.  I made a video of this on a test system in my lab.  I used two Microsoft Sysinternals tools, procdump.exe and strings.exe, to access the memory of my MSR605 process that received the Track 2 data. Then I used grep.exe from UnxUtils to find the Track 2 data.  Once you have the process name, it becomes a repeatable, cookie cutter approach: access the memory, then use the regular expression to find and extract the data.
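The regular expression stage can be sketched in a few lines of Python. This scans a raw buffer for the Track 2 layout (';' start sentinel, PAN, '=' separator, YYMM expiry, service code, discretionary data, '?' end sentinel); the buffer below is fabricated test data, not a real dump:

```python
import re

# Track 2: ';' start sentinel, 13-19 digit PAN, '=' separator,
# 4-digit expiry (YYMM), 3-digit service code, discretionary data, '?'
TRACK2_RE = re.compile(rb";(\d{13,19})=(\d{4})(\d{3})(\d*)\?")

def scrape_track2(memory_dump: bytes):
    """Return (PAN, expiry) pairs found in a raw memory buffer."""
    return [(m.group(1).decode(), m.group(2).decode())
            for m in TRACK2_RE.finditer(memory_dump)]

# Fabricated swipe data surrounded by binary junk, as it might sit in RAM
dump = b"\x00\x01junk;4444333322221111=25121010000000000000?\xff\xfe"
print(scrape_track2(dump))  # [('4444333322221111', '2512')]
```

Because the pattern is so well known and predictable, this is exactly why the "cookie cutter" approach works for the bad guys as well.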

The link to the video is here if you want to see how easily this can be done.

In this test I did not bother to use a real POS application, as this would simply be a cosmetic update to the proof of concept and would likely have got me sued by the POS vendor.  Rather, I used the card reader/writer software that came with the MSR605 reader/writer I purchased off eBay for under $200.  This also came with a number of blank cards that can be written to emulate a legitimate payment card.  I used this equipment to make a fictitious card with test data for the late, great Joey Ramone.  Then, with the data that I was able to extract from memory, I would be able to make a copy of my fictitious card with test data.  Fraudsters use the same process to steal payment card data from POS devices and make fake cards for purchasing goods from stores.

The card looks like a white piece of plastic with my writing on the front side.
Front of fictitious test card
Back of fictitious test card showing magnetic stripe
Test card in reader about to be swiped/read
Of course the fraudsters do this on an industrial scale, and write far better code than I can to ply their trade.  But we need to keep our heads and remember that this is nothing new or frightening, and it's going to take a change by merchants to keep bad guys out of the POS environment to make this problem go away in the short term.  Let's face it: if you have people with unauthorised access in your payment channel, bad stuff is going to happen eventually.  If you have plain text cashable data and bad guys in your payment channel, it's going to get stolen sooner rather than later.  And if you have a LOT of this cashable data, like a US based retail giant, then you are a giant TARGET.

Hope this is readily digestible to the community and helps demystify some of the discussion.

goma