Category: Security

  • Equifax

    Being in the industry, I understand how difficult it is to secure an organization, so I have some sympathy for Equifax. As an ex-NSA colleague noted (paraphrasing), “A defender has to protect everything, an attacker only has to find one hole.” That said, their business is PII, so there is a higher standard there.

    In the end my concern is less that the hack happened than the difficulty of navigating their site and ultimately receiving the credit protection. First of all, the initial page they are directing “customers” to isn’t intuitive:

    It is mostly PR material. You ultimately need to go to the “POTENTIAL IMPACT” button on the bottom:

    Then when you do sign up, they tell you you’ll have to wait for roughly a week then sign up at a different URL. You had better write down the URL because they say, “you will not receive additional reminders”. The URL, if you made the mistake of not writing it down is:

    Then “click through the link to continue through the enrollment process”. What link that is, god knows.

    In fact if you click the above “” today, it goes back to, well, “”, which I assume then you are supposed to click the “ENROLL” button on the bottom???:

    Just mildly confusing.

  • Installing Plixer’s “Scrutinizer” NPMD

    Plixer makes a good “Network Performance Monitoring and Diagnostics” (NPMD) application called “Scrutinizer”. NPMD, as Gartner calls it, mostly means collecting, aggregating, and reporting on Netflow data.

    Plixer provides a VMware OVF for installation of a virtual appliance. I, however, ran into a few issues with the installation:

    • I couldn’t get the OVF install to work through vCenter successfully, or at least not vCenter 6.5. It would install, but when I booted it would come up to a PXE boot rather than the CentOS the appliance runs on. The answer was to install it through the Windows vSphere ESXi client or through the web vSphere ESXi client.
    • Setting up SSL (HTTPS) during the initial install prompts wouldn’t work. Everything seemed fine, but on final boot of the Scrutinizer appliance, the HTTP/HTTPS wouldn’t come up at all. It turned out it hadn’t actually generated the certificates and files were missing. The answer is to select “no” to SSL in the initial dialog, then when fully up, log in using the “plixer” login and use the “set ssl on” option after the fact. SSL then works correctly afterwards.
    • By default it will bind to IPv6 ports and not to IPv4 ports (!) to listen for Netflow data. The solution is to log into the Scrutinizer server/guest as root and disable IPv6 per this document. Specifically, I recommend the “/etc/sysctl.conf” change as it is relatively simple to execute.
    • When logged in as “root”, doing a “yum update” is useful, though I would do the following bullet after.
    • When logged in as “plixer”, it’s useful to run “set tuning” as well as “update packages”, though oddly the latter seems to roll back one of the kernel updates from the last bullet.
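    For reference, the “/etc/sysctl.conf” change from the IPv6 bullet is typically just the standard CentOS keys below (shown as a sketch; apply as root with “sysctl -p” or a reboot):

    ```
    # Append to /etc/sysctl.conf on the Scrutinizer guest
    net.ipv6.conf.all.disable_ipv6 = 1
    net.ipv6.conf.default.disable_ipv6 = 1
    ```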

    Now I just need to figure out why I’m still not seeing the packets from the ASA…

  • Good basic email advice

    Professor Alan Woodward from the Department of Computer Science at the University of Surrey via The Register:

    Educate users not to open files that they are not expecting. Practice your ABCs: “Assume nothing, Believe no one, and Check everything” should be drummed into users. Personally I preach ABCD: if in any doubt, Delete.

    Incidentally internal simulated phishing is extremely effective in my experience.

  • ASA Firewall Rules of Thumb

    Some important Cisco ASA firewall details I and others have learned and shared over the years:

    • Don’t use “security-level” as your method of security. In the long term, at best “security-level” will cause you to block traffic you didn’t expect; at worst, it will allow traffic you didn’t want. Why? Well…
    • If you add an ACL on the “in” side of any interface (that is “into the ASA”), once it’s in the ASA, the security level doesn’t matter anymore. It’s very easy to forget this. However you can protect yourself by…
    • Always add “out” rules. Any “in” rules should be matched by “out” rules on the final destination interface. This is insurance in case you missed or were overly broad on your “in” rules.
    • Configure all of the interfaces to the same “security-level”. If you enable “same-security-traffic permit inter-interface” be careful as it allows traffic to flow to other same security levels without ACLs. You don’t want traffic to flow when you haven’t allowed it explicitly. The only exception to using different security levels might be the “outside” interface, which you may want to set to “security-level 0”. However, assuming “outside” is the Internet, ideally you want to be explicit there too. Otherwise you’re potentially setting yourself up for easy, unlogged, data exfiltration (among other things).
    • Remember that the ASA is a stateful firewall. If you establish some sort of connection out of an interface, the firewall should see that the return traffic belongs to the conversation and allow it through regardless. For the most part you don’t need to explicitly create return rules (or use the old IOS “established” trick).
    • If you’re trying to turn up a firewall on a network that existed but was never firewalled before, and you are having difficulty categorizing the existing traffic, place the rules you know are correct into the ASA, then add a “permit ip any any log” entry at the end. This will send what fell through to the wildcard rule to your syslog server, which you can then evaluate later. Once analyzed and the missing rules are in place, turn it into a “deny ip any any” and you’re done. Remember you can also do packet capture on the ASA as well.
    • Never trust a 3rd party. If they are coming into your network and saying they are properly filtering traffic toward you, filter them again anyway. First, their error could be your exploit, second you can’t assume their firewalls aren’t going to get hacked. Protect your network like it was your own child.
    • Beware of mixing ASA “access-list”s and ASA VPNs on the same firewall. Unless you want to enter “filter” hell (filters generally can only be applied usefully in one direction), turn off VPN bypass with “no sysopt connection permit-vpn”. If you don’t do this YOUR VPN TRAFFIC BYPASSES ALL “access-list” RULES! Note that once you disable “VPN bypass”, your VPN traffic will appear to come from the “in” of the interface it initially arrived at. Since that’s usually “outside” and the Internet, you can have a less-than-pretty mix of private and public addressing to deal with on your Internet interface. This can make it cleaner to get a dedicated ASA for VPN and hang it off an arm of your firewall ASA.
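    The “in”/“out” pairing from the bullets above can be sketched roughly as follows (the interface names, networks, and ACL names here are purely illustrative):

    ```
    ! Illustrative only – invented names and addresses
    interface GigabitEthernet0/0
     nameif inside
     security-level 50
    !
    interface GigabitEthernet0/1
     nameif dmz
     security-level 50
    !
    ! "in" rule on the source interface...
    access-list INSIDE_IN extended permit tcp 10.1.1.0 255.255.255.0 host 10.2.2.10 eq 443
    access-list INSIDE_IN extended deny ip any any log
    !
    ! ...matched by an "out" rule on the destination interface as insurance
    access-list DMZ_OUT extended permit tcp 10.1.1.0 255.255.255.0 host 10.2.2.10 eq 443
    access-list DMZ_OUT extended deny ip any any log
    !
    access-group INSIDE_IN in interface inside
    access-group DMZ_OUT out interface dmz
    ```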

    The most critical thing with firewalls is don’t be lazy. Take the time to do the configuration and rules needed. It takes extra effort up front, but a failure is far more expensive.

  • IC3 Alert on Microchip-Enabled (EMV) Credit Cards

    Unfortunately quite accurate and what a number of us have been saying all along:

    The gist can be found in a single paragraph:

    Although EMV cards will provide greater security than traditional magnetic strip cards, they are still vulnerable to fraud. EMV cards can be counterfeited using stolen card data obtained from the black market. Additionally, the data on the magnetic strip of an EMV card can still be stolen if the PoS terminal is infected with data-capturing malware. Further, the EMV chip will likely not stop stolen or counterfeit credit cards from being used for online or telephone purchases where the card is not physically seen by the merchant and where the EMV chip is not used to transmit transaction data.

    You can look at EMV two ways – a good start, or a lot of effort and money, in retrospect, potentially put toward the wrong solution. Yes, it is better than the status quo in the states, but it doesn’t so much solve the issue as shift it. The fact is, memory scrapers will still be able to get the vast majority of information they need to create counterfeit cards for use in locations or merchants who have yet to embrace EMV, or alternatively, use the cards online where EMV is inapplicable.

    Coupled with the lack of a PIN (we have “Chip and Signature”, not “Chip and PIN”), what we have is something that tends to protect the banks more than the merchants. In fact some argue that it is particularly punitive to small businesses.

    While there is no panacea (the hackers will find a way), perhaps a better investment would be driving merchants to P2PE and E2EE solutions (or hybrids). That too would be expensive for merchants to implement, but it at least addresses most of the major concerns in today’s security environment.

    UPDATE: The above has hit the media, but seems to have disappeared from the FBI site.

    UPDATE 2: While there is nothing official – some outlets have noticed the disappearance. The suspected cause was a concern from the banking industry:

    “We saw the PSA yesterday and spoke to the FBI after we saw it and we thought it was not really reflective of the U.S. marketplace and thought there would have been some level of confusion with the use of PIN.”

    I would have to agree. While it does not make a ton of sense that the PIN portion wasn’t implemented (which would have stopped physically stolen cards from being used), the real concern is not the PIN or lack thereof, but rather that the full track data is still transmitted in the clear by default.

    UPDATE 3:

    It is back with revised language:

    The above paragraph was altered to read as follows:

    Although EMV cards provide greater security than traditional magnetic strip cards, an EMV chip does not stop lost and stolen cards from being used in stores, or for online or telephone purchases when the chip is not physically provided to the merchant, referred to as a card-not-present transaction. Additionally, the data on the magnetic strip of an EMV card can still be stolen if the merchant has not upgraded to an EMV terminal and it becomes infected with data-capturing malware. Consumers are urged to use the EMV feature of their new card wherever merchants accept it to limit the exposure of their sensitive payment data.

    The language “upgraded to an EMV terminal” is either confused or confusing. Just because a “terminal” (PIN pad?) is EMV capable does not mean the transaction is encrypted in the terminal prior to transmission to the POS, nor does it mean that the POS does not decrypt the transaction. If it is not encrypted, or if it is decrypted at the POS, the POS can still be a target for memory scraping (“data-capturing malware”). Again, the PIN pad and merchant payment infrastructure need to support P2PE or E2EE solutions for that kind of protection.

    Note that even if it is encrypted at the “terminal” and not decrypted at the POS, if it is decrypted anywhere within the merchant’s network, that could be a location for “data-capturing malware” to be installed. By using P2PE or E2EE, that risk can essentially be pushed out of the merchant and down to the issuers or processors.

    As always, the opinions above are my own, and do not necessarily represent my employer’s.

  • More on “tiny” URLs…

    I keep getting them from very smart, very security conscious people. However, to make my point:

    I love what they offer but…

    Some do offer a preview, but users aren’t used to seeing that and unfortunately won’t care (ie: they are so used to getting them without preview, they won’t expect it or demand it).


    As a coworker pointed out, there are potentially plugins for Firefox etc. (I couldn’t find one that worked) or you can use a site like this:

    It’s already come in handy for me a few times.

  • Nothing new here…

    But everyone should read it:

    Password strength: longer is better than complexity.

  • Dear Secure Companies…

    Dear Secure Companies,

    Please stop sending me emails to pick up critical documents or surveys where the URLs I need to follow point into random unverifiable domains. A link that leads to a URL like:

    is not going to inspire confidence and, assuming it isn’t spear-phishing or malware, is teaching end users bad practice. That is, it’s teaching end users to follow random links rather than verifiable domains. Encouraging recipients to follow such links is completely at odds with modern security awareness training, which tells users not to follow random links.

    I know that using 3rd party marketing, survey, and even content providers is the norm, but you need to make the effort to ensure the URLs fall under your own verifiable domain, not some random 3rd party domain. Otherwise, unfortunately, you are part of the problem.

    I say this because in my day job I regularly get emails from major security companies or entities handling PII that embed links in their email going to what appear to be random (though undoubtedly valid) sites. This is bad practice and you are not helping the overall picture when doing so.

  • Dumping SSL certificate information

    It seems lately I’m regularly having to dump the information from SSL certificates (for instance to get the “Subject” or CA signer). Since I keep having to look up the exact syntax, I thought it easier to save here and figured it might help others.

    So, if in PEM format, use the following:

    openssl x509 -text -in cert.pem

    If in PKCS#12 format, use this:

    openssl pkcs12 -info -in cert.pfx
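    A common follow-up is converting the PKCS#12 bundle to PEM. This is a sketch using standard openssl flags; the throwaway bundle and the “demo” password exist only to make the example self-contained:

    ```shell
    # Build a throwaway PKCS#12 bundle to work with:
    openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
      -keyout key.pem -out crt.pem -days 1 2>/dev/null
    openssl pkcs12 -export -inkey key.pem -in crt.pem \
      -out cert.pfx -passout pass:demo

    # Convert the bundle (certificate plus unencrypted key) to PEM:
    openssl pkcs12 -in cert.pfx -nodes -out bundle.pem -passin pass:demo
    ```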

    To dump a CSR (Certificate Signing Request), use this:

    openssl req -text -in request.csr

    To dump/check a private key:

    openssl rsa -text -noout -in key.pem
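    When only a field or two is needed, “openssl x509” will print them directly (standard flags; the throwaway self-signed cert below is just to have something to run against):

    ```shell
    # Make a throwaway self-signed certificate (any cert.pem works the same):
    openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo.example" \
      -keyout key.pem -out cert.pem -days 1 2>/dev/null

    # Print only the subject, issuer, validity window, and serial number:
    openssl x509 -noout -subject -issuer -dates -serial -in cert.pem
    ```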

    More can be found here and here.


    You can also pull the certificate (the public side) from an active website, which can be handy. The output will be in PEM format:

    openssl s_client -showcerts -connect <host>:443 >cert.pem </dev/null

    It will print information about the certificate you just pulled; however, you will need to use the PEM dump example above to get things like the serial number.

  • BankInfo Ramnit Article

    Tracy Kitten at BankInfo has an interesting article about the Ramnit worm which is worthy of a read (even I would say by the general public). Ramnit is particularly pernicious because:

    Ramnit’s man-in-the-middle looks like an actual social-media or bank-account sign-in page that captures a user’s ID and password, and sometimes other personal information en route to the actual log-in page. The difference, however, is that the page in the middle captures authentication data and allows the attacker to gain access to the victim’s accounts at will.

    That said, I’m not sure I agree with the solution espoused:

    “Passwords are not very useful for anything anymore,” [Bill] Wansley says. “They are just too easy to forget, copy or break. Everyone needs to go to multifactor authentication [emphasis added] – like Google has recently – for social-media sign-in, and certainly for anything that is for financial or medical-related accounts.”

    Certainly a challenge-response methodology would be effective if the response were dynamic (like say an RSA key fob or equivalent smartphone software), however if the two-factor authentication is two static values then there’s nothing that stops the malware from ultimately being designed to capture both factors. It would be “false security” to believe this is a permanent solution.

    It then goes on to say:

    Passphrases are better than passwords, but multifactor authentication is the new standard. “Nobody should be using their social-media passwords or phrases for their financial accounts,” Wansley says.

    While I absolutely agree that users shouldn’t reuse the same password for financial or other sensitive websites, I’m not absolutely convinced that making stronger passwords is generally the answer. Yes, if you are using straight dictionary words (which the websites should prevent), you are at risk; however, a mix of case plus a numeric makes a password essentially uncrackable externally, provided the website properly implements delays and lockouts.

    In my opinion too much emphasis in the industry is put on strong passwords because people confuse a compromised hash (the hashed form of the password) with an external brute-force attack. If the former happens, one should simply assume the password is compromised regardless of how strong it is. However, most recent compromises involve either external brute-force attacks or outright compromise of the cleartext password – those are different animals from a hash loss. Again, a marginally strong password with delays and lockout will easily survive a brute-force attack from an external source (ie: the web).

    That’s not to say a degree of password strength isn’t important, but making passwords too difficult to remember can be counterproductive, as it encourages users to write the passwords down or use other insecure methods. In that regard “passphrases” can be a benefit – they can be easy to remember and strong at the same time.
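    To put rough numbers on the passphrase point, here is a back-of-the-envelope comparison (assumed sizes: a 62-symbol upper/lower/digit character set, and a 7,776-word Diceware-style list):

    ```shell
    # Search space of an 8-character upper/lower/digit password:
    echo "8-char mixed password: $((62**8)) possibilities"

    # Search space of a 4-word passphrase from a 7,776-word list:
    echo "4-word passphrase:     $((7776**4)) possibilities"
    ```

    The easier-to-remember passphrase is an order of magnitude larger, which is exactly the point.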

    I think too often security professionals focus on what works for them and not the reality of the end user community they are servicing. Sure that gawd awful password complexity requirement is the ideal, but if your end users end up writing it on a post-it or in an Excel spreadsheet the game is over.

  • Why I hate tiny-fied URLs…

    In theory, if the world were filled with universally good people, “bitly” and “”, which given long URLs provide short ones, would be a great idea. However, whenever I get one I find that I’m frankly terrified to click on it.


    Because while they could be going someplace useful, they could also be going to a giant virus-laden web site, or a nasty bug-exercising Flash app, or even a porn site that’s going to get me in dutch at the job.

    I mean here’s one:

    How do you know where it goes? It happens to go to my resume, but it could go to a virus, a trojan, something completely inappropriate (or even illegal).

    Again, it’s a wonderful idea, and certainly more power to those who can stomach them, but I can’t. Heck I even get them sent to me by security professionals.

    Granted, even when the URLs clearly go to well-known sites you are always at risk, but the extra obfuscation (as nice as it is) really increases that risk. No offense to the owners of “bitly” or “”, who certainly are providing a public service, but it’s one that is too nerve-wracking for this security professional.

  • SSL certs – probably not worth the bits they’re printed on…

    This failure of the trusted Certificate Authority (CA) “Comodo”:

    highlights something that is becoming more apparent:

    SSL certificates probably aren’t worth the bits they’re printed on.

    Setting aside the fairly regular stream of issues with the authorities, companies like GoDaddy issue certificates for all of $12 with nearly instantaneous issuance. That is, clearly there’s not much validation going on. Way back when, getting a certificate took days and involved real paperwork, actual calls from issuers, DUNS lookups, etc.

    This may still be the case with organizations like Verisign, but given that for most browsers GoDaddy is equally trusted and that pretty much no one looks at the certificate signers, one weak authority essentially compromises the whole system.

    The answer?

    Certainly Extended Validation (EV) certificates help, though those are generally overpriced and end users for the most part don’t actually care (that is, for most of us, you’re still going to use non-EV sites regardless).

    No, probably the answer is to not trust SSL certs as a metric of “identity”. Just because a site has a valid cert doesn’t mean that it’s a legitimate company or even actually is who it says it is. Instead you need to use other techniques – like Google searches to see if the site is a scam.

    It should be otherwise, but essentially the keys have been given away. In many ways, unfortunately, at this point (at least for non-EV), signed certs are simply a “jab fee”. The browser may as well silently accept self-signed certs – the cert’s true value is mostly enabling encryption (and that doesn’t require a trusted authority).

  • Zone Firewall TCP reassembly size

    If you get something like this in your Cisco’s IOS firewall log:

    Mar 12 15:05:33 3129: 003121: *Mar 12 15:03:03.195 EST: %FW-4-TCP_OoO_SEG: Dropping TCP Segment: seq:525214740 1415 bytes is out-of-order; expected seq:525170856. Reason: TCP reassembly queue overflow – session to on zone-pair ccp-zp-in-out class ccp-protocol-http

    sometimes accompanied by hangs in downloads, then what is happening is you are blowing out the buffers used to reassemble TCP segments when the segments have arrived “out-of-order” (also abbreviated “OoO”).

    The problem for a stateful firewall or IDS/IPS is that it often needs to see more of the packet stream than just the initial segment to make a forwarding/block decision. Thus it has to collect these segments together; however, sometimes the segments don’t arrive “in order”. This can particularly happen when a VPN is used.

    To get around this, the firewall collects the passing segments in a queue until the missing (presumed “out-of-order”) segment arrives. It queues those packets in memory, but for obvious reasons it cannot have infinitely sized queues – it would run out of resources. In fact, if it did, this would offer a very effective DoS (Denial of Service) attack.

    Thus, there are defined limits on the TCP reassembly queue. Those limits start fairly small (16 entries and 1 MB), so you may want to adjust them if you are regularly seeing messages like the above.

    Using the old CBAC method of inspection, you could insert the following command:

    ip inspect tcp reassembly {[queue length packet-number] [timeout seconds] [memory limit size-in-kb] [alarm {on | off}]}

    However, the newer Zone-Based Firewall inspection methods don’t use the same settings. Instead the new command format is:

    parameter-map type ooo global
    tcp reassembly alarm {on | off}
    tcp reassembly memory limit size-in-kb
    tcp reassembly queue length queue-length
    tcp reassembly timeout time-limit-secs

    To note the defaults are as follows:

    parameter-map type ooo global
    tcp reassembly alarm off
    tcp reassembly memory limit 1024
    tcp reassembly queue length 16
    tcp reassembly timeout 5

    So, if say you wanted to quadruple the default queue/memory lengths:

    parameter-map type ooo global
    tcp reassembly memory limit 4096
    tcp reassembly queue length 64

    Note it’s not clear whether a dropped segment looks the same as an “out-of-order” segment to the router – that is, with a dropped/lost segment the router keeps expecting it to arrive, as if it were merely out of order. Thus the error could be telling you that you’re dropping packets rather than blowing out your “out-of-order” queues. Unfortunately I cannot find documentation one way or the other on this.

    Also note that if you’re increasing the queue length, you might want to increase the timeout (“tcp reassembly timeout time-limit-secs”); however, 5 seconds is an awfully long time for an out-of-order segment not to arrive. As bandwidth increases, more packets/bytes might come in and blow out the queue, but it’s unlikely they would take more time to do so (quite the opposite – at higher bandwidth an out-of-order packet is, if anything, likely to show up sooner, not later), so I wouldn’t expect this to need adjustment.

  • The kitchen sink of security tools…

    This seems to be a useful location to find security tools:

    Everything including the kitchen sink!

  • Apparently George Romero was right…

    That a deadly virus would escape from the military possibly causing zombies:

    He was just wrong that humans would be the target.

  • Another case of “With friends like these…”

    Well, researchers have devised a way around most modern anti-virus software. Yet another example of, “With friends like these, who needs enemies.”

    Again, I know “security by obscurity” is false security, but it’s not like the bad guys need as much help as they’re getting!

  • How to kill a session on a Cisco PIX/FWSM

    Completely different from Cisco IOS, so hard to remember:

    Log into the PIX/FWSM and go to “enable” mode. Do a “who”:

    Choose the IP of the session you want to kill and grab the number. In this case I want to kill the “” session, so I want “2”. Then kill it:

    The target session will then drop.

    Note if you’re coming from the same IP it may make it harder because the sessions will reference the same IP. In that case, just assume the later session has a higher number (or conversely, the earlier session has a lower number).

    Be careful. I have no idea what this does if you’re in mid-access-list update.
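    For reference, the sequence looks roughly like this (the session numbers and addresses below are invented for illustration):

    ```
    pix> enable
    pix# who
            0: 192.0.2.15
            2: 192.0.2.87
    pix# kill 2
    ```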

  • A good Blackberry security primer…

    ComputerWorld has published a good Blackberry security primer here:

    I highly recommend all Blackberry owners read it.

  • Rubber Or Glue, It Still Sticks…

    This brings up a sort of interesting if not chilling thought in the world of security, particularly for large organizations:

    Mozilla shuts online store after security breach

    The title of this entry, which I’ve included verbatim, is important.

    To me when I read it, I’m reading “Mozilla has a problem”, or “Mozilla isn’t secure”, or most painfully, “Mozilla is a place I want to avoid because of its lack of security.”

    However Mozilla didn’t screw up and this is in fact no reflection on Mozilla’s security whatsoever. If you actually read the post, you’ll see:

    The Mozilla Foundation has shuttered its e-commerce store after confirming a security breach at GatewayCDI, the third-party vendor that handles the store’s backend operations. [emphasis added]

    Thus it isn’t Mozilla’s “fault” after all, it’s GatewayCDI’s.

    So, what’s the point?

    The point is, even though it isn’t Mozilla’s fault, the headline sure makes it sound like it is. My guess is any large or influential organization would be reported on similarly. That’s going to leave the first impression – which many people never get past (because of human nature and/or not reading past the headline) – that those organizations are insecure, rather than their errant 3rd party vendor.

    Or to put it another way, if your company is using a 3rd party and feels all safe because things like PCI aren’t your concern, think again. Shuffling it off to a 3rd party doesn’t insulate you from the softer liability of public opinion. A liability that can turn out to be nearly as expensive as many of the more traditional ones, like getting sued.

    So it’s incumbent on us as organizations and security teams to make sure our vendors are up to snuff. Signing agreements isn’t sufficient – some hands-on work, potentially including self-conducted audits (if possible), may be required.

    Most of all, this brings into question the assumption that moving to a 3rd party really provides you the insulation you might think it does. Choose carefully, or you may get nearly as burned as if you had done it yourself.

  • Eating ourselves alive…

    Here is yet another example of how the “good guys” are figuring out ways to subvert security to “help” us:

    Basically Peter Kleissner, a young and clearly very smart university student, has figured out how to inject a bootkit in front of TrueCrypt (an excellent and free encryption product) to subvert its protections.

    While I understand that “security by obscurity” is ultimately a flawed paradigm, I really don’t think the bad guys need any help. While some claim the bad guys would ultimately figure this stuff out, I’m not convinced. A lot of the malign stuff out there has at its basis attacks developed by “good guys”.

    While I entirely support the right to do and publish such work (unlike a number of large corporations that have sued to keep these hacks quiet), I do feel in many cases the publishing of these exploits is an act of ego and narcissism, a sort of destructive “showing off”.

    Anyway, down goes another.