
Friday, August 24, 2012

Protecting Cars from Viruses

Reuters is running a story that should amuse any computer security professional: Experts hope to shield cars from computer viruses.

An excerpt:

Intel's McAfee unit, which is best known for software that fights PC viruses, is one of a handful of firms that are looking to protect the dozens of tiny computers and electronic communications systems that are built into every modern car.

It's scary business. Security experts say that automakers have so far failed to adequately protect these systems, leaving them vulnerable to hacks by attackers looking to steal cars, eavesdrop on conversations, or even harm passengers by causing vehicles to crash.
Our guess is that when cars get to the point that they drive themselves, those who understand how malware works-- and more importantly, how undeniably complicated modern software and its hardware architecture can be-- will don a pair of Converse Chuck Taylors and resemble a modern Luddite by driving themselves, a la Will Smith in I, Robot.

When you look at the statistics, you are far more likely to be injured or killed in a car accident than by nearly any other security risk you face in your daily life.  Even with the vast skies being what they are, and the regulations on the airline industry and its pilots, it's not possible to keep air travel 100% safe, though it's safer than driving (once you get past the TSA checkpoint).

Computerized, self-driving cars may improve (emphasis on "may") safety stats; however, not if their software landscape looks like anything else we operate with a CPU in it these days.  There are agencies with an operating budget larger than the GDP of several nations that are terrified about the possibility of malware injected into things like military aircraft or missile guidance systems.  Given that, how in the world is an automobile for ~$20K (which is at most 1% of the price tag of the military's concerns) ever going to be 100% free of malware?  Simple: it won't be.
Toyota Motor Corp, the world's biggest automaker, said it was not aware of any hacking incidents on its cars.
"They're basically designed to change coding constantly. I won't say it's impossible to hack, but it's pretty close," said Toyota spokesman John Hanson. [emphasis ours]
Oh, we've never heard that before...

Officials with Hyundai Motor Co, Nissan Motor Co and Volkswagen AG said they could not immediately comment on the issue.

A spokesman for Honda Motor Co said that the Japanese automaker was studying the security of on-vehicle computer systems, but declined to discuss those efforts.
Mum's the word is a much smarter response to the press.
A spokesman for the U.S. Department of Homeland Security declined to comment when asked how seriously the agency considers the risk that hackers could launch attacks on vehicles or say whether DHS had learned of any such incidents.
They probably declined to comment because they are working on exploits for these as well.  Say it ain't so?  Look no further than Stuxnet and Flame, for which the US government takes full authorship credit.  It's the future of the "cyberwarfarestate".

We can't keep malware out of critical infrastructure SCADA systems.  There's no way we can keep it out of your mom's minivan.

Wednesday, March 21, 2012

Alex Halderman on Internet Voting

Computer Science Professor J. Alex Halderman is an up-and-coming academic star that we at Securology have been watching for a while now, since some of the earliest days of all the great work put on at Princeton by Ed Felten and his Freedom to Tinker group (an excellent blog). Halderman, having completed his PhD (Bell Labs UNIX and C programming language creator Brian Kernighan was on his PhD committee!), has moved on to the University of Michigan as a member of the faculty and is continuing his excellent work at the intersection of technology and public policy, which always means security and privacy issues are in the spotlight.
Here is an excellent interview with Halderman (presumably shot at the 2012 RSA Conference) on how he and his students (legally) hacked the mock trial of Internet Voting put on by Washington D.C., and why Internet Voting should not be employed for a very long time.

In summary, there are two main reasons why Internet Voting is a horrible idea:
  1. Getting the software perfectly correct is, for all intents and purposes, impossible.
  2. Authenticating a voter eliminates the ability to anonymize the voter's vote (major privacy flaw).

Wednesday, March 23, 2011

RSA SecurID Breach - Seed Record Threats

The following is a threat model that assumes the RSA SecurID seed records have been stolen by a sophisticated adversary, which is probably what happened.

But first, a word from our muse, Bruce Schneier, from what he titled back in 2005 "The Failure of Two-Factor Authentication":
Two-factor authentication isn't our savior. It won't defend against phishing. It's not going to prevent identity theft. It's not going to secure online accounts from fraudulent transactions. It solves the security problems we had ten years ago, not the security problems we have today.
[snip]
Here are two new active attacks we're starting to see:
  • Man-in-the-Middle attack. An attacker puts up a fake bank website and entices user to that website. User types in his password, and the attacker in turn uses it to access the bank's real website. Done right, the user will never realize that he isn't at the bank's website. Then the attacker either disconnects the user and makes any fraudulent transactions he wants, or passes along the user's banking transactions while making his own transactions at the same time.
  • Trojan attack. Attacker gets Trojan installed on user's computer. When user logs into his bank's website, the attacker piggybacks on that session via the Trojan to make any fraudulent transaction he wants.
See how two-factor authentication doesn't solve anything? In the first case, the attacker can pass the ever-changing part of the password to the bank along with the never-changing part. And in the second case, the attacker is relying on the user to log in.
[Snip]
Two-factor authentication ... won't work for remote authentication over the Internet.
Bruce was absolutely right. We saw examples of that.

...

Now, let's put the pieces together-- the active MITM attacks that Bruce described, and how they could be turned into an offline/passive attack.

In both cases above, the adversary has to act immediately to essentially take over an authenticated session, using either the real-time MITM scenario or the trojan scenario. But let's assume that the "good guys" have, by now, read Bruce's article [but in all reality, they probably haven't, hence they have an RSA SecurID investment] and have paid attention to the RSA jabber that says to watch for an increase in login attempts. In the examples Bruce describes, the adversary grabs the session and disconnects the valid user (possibly at the presentation layer, by taking over the session in malware that doesn't display what actions are occurring in the authenticated session).

However, let's assume the adversary lets the user keep his authenticated session. The adversary just monitors the credentials that are entered:
  1. The User ID, and
  2. The one-time-passcode (token's readout, a.k.a. "tokencode", plus the user's PIN)
"Relax," says the security administrator. "That's what these RSA SecurID thingies are for-- to make it meaningless when a bad guy eavesdrops on credentials."

Well, except in the case where the "bad guy" has all of the seed records for all RSA SecurID tokens ever sold.

Quoting from our article from yesterday:
Assume an adversary has now in their possession, all of the seed records for all RSA SecurID tokens that are currently valid (which based on above and previous seems very plausible). Assume they have sufficient computing hardware to mass compute all of the tokencodes for all of the tokens represented by those seed records for a range of time (they obviously are well funded to get the "Advanced Persistent Threat" name). This would be the output of the RSA SecurID algorithm taking all the future units of time as input coupled with the serial number/token codes to generate all of the output "hashes" for each RSA SecurID token that RSA has ever made. These mass computed tokencodes for a given range of time would basically be one big rainbow table, a time computing trade-off not too unlike using rainbow tables to crack password hashes.
[Snip]
Since tokencodes are only 6 digits long, and RSA has sold millions of tokens, the chances of a collision of a token's output with another token's output at a random point in time is significant enough, but phish the same user repeatedly (like asking for "next tokencode") and the adversary now can significantly narrow down the possibilities of which tokens belong to which user because different tokens must appear random and not in sync with each other (otherwise RSA SecurID would have much bigger problems). Do this selectively over a period of time for a high valued asset, and chances are the adversary's presence will go undetected, but the adversary will be able to determine exactly which token (serial number, i.e. seed record) belongs to the victim user.
So, now that the adversary has these "rainbow tables" of RSA SecurID tokencodes, and now that the active attacks Bruce described have morphed into a passive attempt, all it will take is watching particular users create valid sessions-- maybe as little as a single attempt, depending upon the mathematics and randomness of the RSA SecurID token output, but probably more like watching a handful of attempts. At that point, the adversary can then impersonate the victim user at any point in the future.
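To make the "rainbow table" of tokencodes concrete, here is a minimal sketch in Python. The real SecurID algorithm is proprietary, so the tokencode() function below is a stand-in PRF-- the point is that any deterministic function of (seed, time) is attackable this way once the seeds are stolen: precompute codes over a time window, then eliminate candidate seeds against each observed login.

    import hmac, hashlib

    def tokencode(seed: bytes, t: int) -> str:
        # Stand-in PRF (the real, proprietary SecurID algorithm differs):
        # derive a 6-digit code from a seed and a 60-second time step.
        step = (t // 60).to_bytes(8, "big")
        digest = hmac.new(seed, step, hashlib.sha256).digest()
        return f"{int.from_bytes(digest[:4], 'big') % 1_000_000:06d}"

    def narrow_candidates(stolen_seeds, observations):
        # Keep only the seeds that reproduce every observed (timestamp,
        # tokencode) pair captured by the malware.
        candidates = set(stolen_seeds)
        for t, code in observations:
            candidates = {s for s in candidates if tokencode(s, t) == code}
        return candidates

With 40 million tokens and only a million possible 6-digit codes, a single observation still leaves dozens of candidate seeds, but a second or third observation of the same user almost always pins down exactly one-- which is why "a handful of attempts" is all the passive attack needs.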

So, if RSA SecurID seed records are compromised, there is really not much advantage in an RSA SecurID implementation. The threats are essentially the same as an adversary grabbing conventional passwords. The only difference is that a passive attack against compromised seed records may take multiple monitoring attempts, as opposed to a single event. But with simple malware, that won't be much more effort, especially for a high valued asset.

So given what we know, we can assume seed records were compromised. And given how little RSA is talking about it, we cannot really know how they are responding to it. Will they just distribute new tokens without compromised seed records, or will they do something much more significant? Based on what we know today, it makes more sense for an organization that is thinking about an RSA SecurID deployment to rely instead on conventional passwords (e.g. Microsoft Active Directory), and spend the extra money on monitoring for fraud and stronger identity validation for things like password resets.

Tuesday, March 22, 2011

More RSA SecurID Reactions

RSA released a new Customer FAQ regarding the RSA SecurID breach. Let's break it down ...
Customer FAQ
Incident Overview

1. What happened?

Recently, our security systems identified an extremely sophisticated cyber attack in progress, targeting our RSA business unit. We took a variety of aggressive measures against the threat to protect our customers and our business including further hardening our IT infrastructure and working closely with appropriate authorities.
Glad to see they didn't use the words "Advanced Persistent Threat" there.
2. What information was lost?

Our investigation to date has revealed that the attack resulted in certain information being extracted from RSA’s systems. Some of that information is related to RSA SecurID authentication products.
Hmmm. Seed Records possibly?
3. Why can’t you provide more details about the information that was extracted related to RSA SecurID technology?

Our customers’ security is our number one priority. We continue to provide our customers with all the information they need to assess their risk and ensure they are protected. Providing additional specific information about the nature of the attack on RSA or about certain elements of RSA SecurID design could enable others to try to compromise our customers’ RSA SecurID implementations.
[Emphasis added by Securology]
Whoa! Pause right there. Obviously they have allowed somebody from a Public/Customer Relations background to write this. This is not coming from anybody who *knows security*.

Like we mentioned previously, Kerckhoffs' Principle and Shannon's Maxim dictate that the DESIGN be open. These ideas are older than the Internet, and pretty much older than computing itself. So, disclosing the RSA SecurID DESIGN should have no adverse effect on customers with implementations unless the DESIGN is flawed to begin with.

Realistically, this is PR-speak for obfuscating details about what was stolen. All things point to seed records. Source code to on-premise implementations at customer sites shouldn't be affected, because those components aren't facing the Internet, and generally who cares about them? Yes, it's possible to hack the backend through things like XSS (think "Cross Site Printing"), but the state-of-the-art would be to compromise it from the outside using weaknesses found at RSA headquarters: seed records.
4. Does this event weaken my RSA SecurID solution against attacks?

RSA SecurID technology continues to be an effective authentication solution. To the best of our knowledge, whoever attacked RSA has certain information related to the RSA SecurID solution, but not enough to complete a successful attack without obtaining additional information that is only held by our customers. We have provided best practices so customers can strengthen the protection of the RSA SecurID information they hold. RSA SecurID technology is as effective as it was before against other attacks.
[Emphasis added by Securology.]
If it wasn't obvious that it's seed records yet, it should be screaming "SEED RECORDS" by this point. RSA SecurID is a two-factor authentication system, meaning you couple your RSA SecurID time-synchronized tokencode with a PIN/password. So, if the seed records are stolen, then the only way an adversary can impersonate you would be if he knew:
  1. Which RSA SecurID token is assigned to you (i.e. the serial number stored in the RSA SecurID database on-site at a customer's site)
  2. Your PIN/password that is the second factor (i.e. another piece of information stored at the customer's site).
More evidence that the RSA breach was seed records: the serial number and seed records give the adversary half the information needed, but the rest is stored on-site.
5. What constitutes a direct attack on an RSA SecurID customer?

To compromise any RSA SecurID deployment, an attacker needs to possess multiple pieces of information about the token, the customer, the individual users and their PINs. Some of this information is never held by RSA and is controlled only by the customer. In order to mount a successful direct attack, someone would need to have possession of all this information.


6. What constitutes a broader attack on an RSA SecurID customer?

To compromise any RSA SecurID deployment, the attacker needs to possess multiple pieces of information about the token, the customer, the individual users and their PINs. Some of this information is never held by RSA and is controlled only by the customer. In order to mount a successful direct attack, someone would need to have possession of all this information.

The broader attack we referenced most likely would be an indirect attack on a customer that uses a combination of technical and social engineering techniques to attempt to compromise all pieces of information about the token, the customer, the individual users and their PINs. Social engineering attacks typically target customers’ end users and help desks. Technical attacks typically target customers’ back end servers, networks and end user machines. Our prioritized remediation steps in the RSA SecurID Best Practices Guides are focused on strengthening your security against these potential broader attacks.
[Emphasis added by Securology]
This PR person is beginning to agree with us. Yes, the seed records are the hard part. If you are an RSA SecurID customer, assume the adversary has them, and now watch out for the pieces you control.
7. Have my SecurID token records been taken?
[Emphasis added by Securology.]
Yes, it's obvious they have.
For the security of our customers, we are not releasing any additional information about what was taken. It is more important to understand all the critical components of the RSA SecurID solution.

To compromise any RSA SecurID deployment, the attacker needs to possess multiple pieces of information about the token, the customer, the individual users and their PINs. Some of this information is never held by RSA and is controlled only by the customer. In order to mount a successful attack, someone would need to have possession of all this information.
This is beginning to look like a broken record.
8. Has RSA stopped manufacturing and/or distributing RSA SecurID tokens or other products?

As part of our standard operating procedures, while we further harden our environment some operations are interrupted. We expect to resume distribution soon and will share information on this when available.
Of course manufacturing/distribution has stopped. Of course anyone worried about security would have an SOP that says "stop shipping the crypto devices when the seed records are compromised." This is just more evidence that the seed records were compromised.
[...snipped for brevity...]
13. How can I monitor my deployment for unusual authentication activity?

To detect unusual authentication activity, the Authentication Manager logs should be monitored for abnormally high rates of failed authentications and/or “Next Tokencode Required” events. If these types of activities are detected, your organization should be prepared to identify the access point being used and shut them down.

The Authentication Manager Log Monitoring Guidelines has detailed descriptions of several additional events that your organization should consider monitoring.
[Emphasis added by Securology]
Warning about failed authentication and next tokencode events further indicates the seed records were stolen, because this would indicate the adversaries are guessing valid tokencodes but invalid PINs, or guessing tokencodes in order to determine a specific user's serial number (to match stolen seed records with a particular user).
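As a sketch of what that log triage could look like (the event names here are hypothetical stand-ins; real Authentication Manager log fields will differ):

    from collections import Counter

    SUSPICIOUS = {"AUTH_FAILED", "NEXT_TOKENCODE_REQUIRED"}  # stand-in names

    def flag_users(records, threshold: int = 5):
        # records: iterable of (user, event) pairs parsed from auth logs.
        # Flag users racking up repeated failures or next-tokencode events,
        # the pattern expected from guessing against stolen seed records.
        counts = Counter(user for user, event in records if event in SUSPICIOUS)
        return {user for user, n in counts.items() if n >= threshold}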
14. How do I protect users and help desks against Social Engineering attacks such as targeted phishing?

Educate your users on a regular basis about how to avoid phishing attacks. Be sure to follow best practices and guidelines from sources such as the Anti-Phishing Working Group (APWG) at http://education.apwg.org/r/en/index.htm.

In addition, make sure your end users know the following:
  • They will never be asked for and should never provide their token serial numbers, tokencodes, PINs, passwords, etc.
Because giving that away is giving away the last parts of information that are "controlled only by the customer", i.e. the mapping of UserIDs to seed records via token serial numbers.
  • Do not enter tokencodes into links that you clicked in an email. Instead, type in the URL of the reputable site to which you want to authenticate
Because a phishing attack that captures a tokencode could be all that is needed to guess which serial number a user has: the moment in time could be recorded, and all seed records could be used in a parallel, offline attack to compute their tokencodes at that instant in time.

Assume an adversary now has in their possession all of the seed records for all RSA SecurID tokens that are currently valid (which based on the above seems very plausible). Assume they have sufficient computing hardware to mass compute all of the tokencodes for all of the tokens represented by those seed records for a range of time (they obviously are well funded to get the "Advanced Persistent Threat" name). This would be the output of the RSA SecurID algorithm taking all the future units of time as input, coupled with the serial numbers/seed records, to generate all of the output "hashes" for each RSA SecurID token that RSA has ever made. These mass-computed tokencodes for a given range of time would basically be one big rainbow table, a time-computing trade-off not too unlike using rainbow tables to crack password hashes.

Then assume the adversaries can phish users into providing a tokencode into a false login prompt. Since tokencodes are only 6 digits long, and RSA has sold millions of tokens, the chance of one token's output colliding with another token's output at a random point in time is significant; but phish the same user repeatedly (like asking for "next tokencode") and the adversary can significantly narrow down which tokens belong to which user, because different tokens must appear random and out of sync with each other (otherwise RSA SecurID would have much bigger problems).

Do this selectively over a period of time for a high-valued asset, and chances are the adversary's presence will go undetected, but the adversary will be able to determine exactly which token (serial number, i.e. seed record) belongs to the victim user. Or do it en masse quickly (think: social media) and it will harvest many userIDs mapped to serial numbers (seed records), which would be valuable on the black market-- especially for e-commerce banking applications.
It is also critical that your Help Desk Administrators verify the end user’s identity before performing any Help Desk operations on their behalf. Recommended actions include:
  • Call the end user back on a phone owned by the organization and on a number that is already stored in the system.
  • Send the user an email to a company email address. If possible, use encrypted mail.
  • Work with the employee’s manager to verify the user’s identity.
  • Verify the identity in person.
  • Use multiple open-ended questions from employee records (e.g., “Name one person in your group” or, “What is your badge number?”). Avoid yes/no questions.

Important: Be wary of using mobile phones for identity confirmation, even if they are owned by the company, as mobile phone numbers are often stored in locations that are vulnerable to tampering or social engineering.
[...snipped for brevity...]
The above is very decent advice, not unlike what we posted recently.


So, in summary: yeah, yeah, yeah, seed records were stolen. Little to no doubt about that now.

Friday, March 18, 2011

RSA SecurID Breach - Initial Reactions


RSA, the security division of EMC, was breached by a sophisticated adversary who stole something of value pertaining to RSA SecurID two factor authentication implementations. That much we know for certain.


It's probably also safe to say that RSA SecurID will be knocked down at least a notch from its place of unreasonably high esteem.


And it wouldn't hurt to take this as a reminder that there is no such thing as a perfectly secure system. Complexity wins every time and the adversary has the advantage.


First, note that the original Securology article entitled "Soft tokens aren't tokens at all" is still as valid as the day it was published over 3 years ago. CNET is reporting that RSA has sold 40 million hardware tokens and 250 million software tokens. That means that 86% of all RSA SecurID "tokens" (250 million of 290 million, the "soft token" variety) are already wide open to all of the problems that an endpoint device has-- and more importantly, that 86% of the "two factor authentication" products sold and licensed by RSA are not really "two factor authentication" in the first place.


Second, we should note a principle from economics, so eloquently described by your mother as "don't put all your eggs in one basket": the principle of diversification. If your organization relies solely on RSA SecurID for security, you were living on borrowed time to begin with. For those organizations, this event is just proof that "the emperor hath no clothes".


Third, the algorithm behind RSA SecurID is not publicly disclosed. This should be a red flag to anyone worth their salt in security. This is a direct violation of Kerckhoffs' Principle and Shannon's Maxim, which say, roughly, that only the encryption keys should be secret and that we should always assume an enemy knows (or can reverse engineer) the algorithm. There have been attempts in the past to reverse engineer the RSA SecurID algorithm, but those attempts are old and not necessarily the way the current version operates.


Fourth, it's probably the seed records that were stolen. Since we know that the algorithm is essentially a black box, taking as input a "seed record" and the current time, then either disclosure of the "seed records" or disclosure of the algorithm could potentially be devastating to any system relying on RSA SecurID for authentication.

Hints that the "seed records" were stolen can be seen in this Network World article:
But there's already speculation that attackers gained some information about the "secret sauce" for RSA SecurID and its one-time password authentication mechanism, which could be tied to the serial numbers on tokens, says Phil Cox, principal consultant at Boston-based SystemExperts. RSA is emphasizing that customers make sure that anyone in their organizations using SecurID be careful in ensuring they don't give out serial numbers on secured tokens. RSA executives are busy today conducting mass briefings via dial-in for customers, says Cox. [emphasis added by Securology]
Suggesting to customers to keep serial numbers secret implies that seed records were indeed stolen.

When a customer deploys newly purchased tokens, the customer must import a file containing a digitally signed list of seed records associated with serial numbers of the device. From that point on, administrators assign a token by serial number, which is really just associating the seed record of the device with the user's future authentication attempts. Any time that user attempts to authenticate, the server takes the current time and the seed record and computes its own tokencode for comparison to the user input. In fact, one known troubleshooting problem happens when the server and token get out of time synchronization. NTP is usually the answer to that problem.
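A minimal sketch of that server-side check (again using a stand-in for the proprietary function of seed and time):

    import hmac, hashlib

    def code(seed: bytes, t: int) -> str:
        # Stand-in for the proprietary SecurID function of (seed, time).
        step = (t // 60).to_bytes(8, "big")
        d = hmac.new(seed, step, hashlib.sha256).digest()
        return f"{int.from_bytes(d[:4], 'big') % 1_000_000:06d}"

    def verify(stored_seed: bytes, submitted: str, now: int, drift: int = 1) -> bool:
        # Recompute the expected tokencode from the seed record imported at
        # deployment, tolerating a window of 60-second steps to absorb the
        # clock drift that NTP usually keeps in check.
        return any(code(stored_seed, now + k * 60) == submitted
                   for k in range(-drift, drift + 1))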

This further strengthens the theory that seed records were stolen by the "advanced persistent threat", since any customer with a copy of the server-side components essentially has the algorithm, through common techniques for reversing binaries. The server's CPU must be able to compute the tokencode via the algorithm, therefore monitoring instructions as they enter the CPU will divulge the algorithm. This is not a new threat, and certainly nothing worthy of a new moniker. The most common example of reversing binaries is bypassing software licensing features-- it doesn't take a world-class threat to do that. What is much, much more likely is that RSA SecurID seed records were indeed stolen.

The only item of value that could be even more damaging might be the algorithm RSA uses to establish seed records and associate them with serial numbers. Assuming there is some repeatable process to that-- and it makes sense to believe there is since that would make production manufacturing of those devices more simple-- then stealing that algorithm is like stealing all seed records: past, present, and future.

Likewise, even if source code is the item that was stolen, it's unlikely that any of that will translate into real attacks, since most RSA SecurID installations do not directly expose the RSA servers to the Internet. They're usually called upon by end-user-facing systems like VPNs or websites, and the Internet tier generally packages up the credentials and passes them along in a different protocol, like RADIUS. The only way a vulnerability in the stolen source code would become very valuable would be if an injection vulnerability were found in it, such as a malicious input passed in a username and password challenge that caused the back-end RSA SecurID systems to fail open, much like a SQL injection attack. It's possible, but it's much more probable that seed records were the stolen item of value.


How to Respond to the News
Lots of advice has been shared for how to handle this bad news. Most of it is good, but a couple items need a reality check.


RSA filed with the SEC and in their filing there is a copy of their customer support note on the issue. At the bottom of the form, is a list of suggestions:
  • We recommend customers increase their focus on security for social media applications and the use of those applications and websites by anyone with access to their critical networks.
  • We recommend customers enforce strong password and pin policies.
  • We recommend customers follow the rule of least privilege when assigning roles and responsibilities to security administrators.
  • We recommend customers re-educate employees on the importance of avoiding suspicious emails, and remind them not to provide user names or other credentials to anyone ...
  • We recommend customers pay special attention to security around their active directories, making full use of their SIEM products and also implementing two-factor authentication to control access to active directories.
  • We recommend customers watch closely for changes in user privilege levels and access rights using security monitoring technologies such as SIEM, and consider adding more levels of manual approval for those changes.
  • We recommend customers harden, closely monitor, and limit remote and physical access to infrastructure that is hosting critical security software.
  • We recommend customers examine their help desk practices for information leakage that could help an attacker perform a social engineering attack.
  • We recommend customers update their security products and the operating systems hosting them with the latest patches.
[emphasis added by Securology]

Unless RSA is sitting on some new way to shim into the Microsoft Active Directory (AD) authentication stacks (and they have not published it), there is no way to accomplish what they have stated there in bold. AD consists mainly of LDAP and Kerberos with a sprinkling of a few other neat features (not going into those for brevity). LDAP/LDAPS (the secure SSL/TLS version) and Kerberos are both based on passwords as the secret to authentication. They cannot simply be upgraded into using two-factor authentication.

Assuming RSA is suggesting installing the RSA SecurID agent for Windows on each Domain Controller in an AD forest, that still does not prevent access to making changes inside of AD Users & Computers, because any client must be able to talk Kerberos and LDAP to at least a single Domain Controller for AD's basic interoperability to function-- those same firewall rules for those services will also allow authenticated and authorized users to browse and modify objects within the directory. What they're putting in there just doesn't seem to be possible and must have been written by somebody who doesn't understand the Microsoft Active Directory product line very well.


Securosis has a how-to-respond list on their blog:
Remember that SecurID is the second factor in a two-factor system… you aren’t stripped naked (unless you’re going through airport security). Assuming it’s completely useless now, here is what you can do:
  1. Don’t panic. We know almost nothing at this point, and thus all we can do is speculate. Until we know the attacker, what was lost, how SecurID was compromised (if it is), and the vector of a potential attack we can’t make an informed risk assessment.
  2. Talk to your RSA representative and pressure them for this information.
  3. Assume SecureID is no longer effective. Review passwords tied to SecurID accounts and make sure they are strong (if possible).
  4. If you are a high-value target, force a password change for any accounts with privileges that could be overly harmful (e.g. admins).
  5. Consider disabling accounts that don’t use a password or PIN.
  6. Set password attempt lockouts (3 tries to lock an account, or similar).
[Emphasis added by Securology]
To their first point, I think we can know what was lost: seed records. Without that, there would be no point in filing with the SEC and publicly disclosing that fact. Anybody can know their algorithm for computing one-time passwords by reversing the server side (see above). The only other component in the process is the current time, which is public information. The only private information is the seed record.

On point #4, if your organization is a high-valued-asset type of target, flagging RSA SecurID users to change their PINs or passwords associated with their user accounts may not be a good idea, because as the defense you have to assume this well-resourced offensive adversary already has your seed records and therefore could respond to the challenge to reset passwords. A better solution, if your organization is small, is to physically meet with and reset credentials for high-valued users. If you cannot do that because your organization is too large, then your only real option is to monitor user behavior for abnormalities-- which is where most of your true value should come from anyway.

This does tie well with their second suggestion-- pressuring your RSA contact for more information. In all likelihood, if our speculation is right and seed records were indeed stolen, then the only solution is to demand new RSA SecurID tokens from RSA to replace the ones you have currently. And if RSA is not quick to respond to that, it's for one of two reasons:
  1. This is going to financially hurt them in a very significant way and it's not easy to just mass produce 40 million tokens overnight, OR,
  2. RSA's algorithm for generating seed records and assigning them to token serial numbers is compromised, and they're going to need some R&D time to come up with a fix without breaking current customers who order new tokens under the new seed record generation scheme in the future.

UPDATED TO ADD: Since all things indicate the seed records were compromised, and since Art Coviello's message is that no RSA customers should have reduced security as a result of their breach, that must mean RSA does not believe SecurID is worth the investment. After all, if RSA SecurID seed records were stolen, it effectively reduces any implementation to just a single factor: the PIN/passwords that are requested in addition to the tokencode. And who would buy all that infrastructure and hand out worthless digital keychains when they can get single-factor password authentication for super cheap with Microsoft's Active Directory?

Wednesday, December 2, 2009

The Reality of Evil Maids

There have been many attacks on whole disk encryption recently:
  1. Cold Boot attacks in which keys hang around in memory a lot longer than many thought, demonstrating how information flow may be more important to watch than many acknowledge.
  2. Stoned Boot attacks in which a rootkit is loaded into memory as part of the booting process, tampering with system level things, like, say, whole disk encryption keys.
  3. Evil Maid attacks in which Joanna Rutkowska of Invisible Things Lab suggests tinkering with the plaintext boot loader. Why is it plain text if the drive is encrypted? Because the CPU has to be able to execute it, duh. So, it's right there for tampering. Funny thing: I suggested tampering with the boot loader as a way to extract keys way back in October of 2007 when debating Jon Callas of PGP over their undocumented encryption bypass feature, so I guess that means I am the original author of the Evil Maid attack concept, huh?

About all of these attacks, Schneier recently said:
This attack exploits the same basic vulnerability as the "Cold Boot" attack from last year, and the "Stoned Boot" attack from earlier this year, and there's no real defense to this sort of thing. As soon as you give up physical control of your computer, all bets are off.
"As soon as you give up physical control of your computer, all bets are off"??? Isn't that the point of these encryption vendors (Schneier is on the technical advisory board of PGP Corp-- he maybe doesn't add that disclaimer plainly enough). Sure enough, that's the opposite of what PGP Corp claims: "Data remains secure on lost devices." Somebody better correct those sales & marketing people to update their powerpoint slides and website promotions.

To put this plainly: if you still believe that whole disk encryption software is going to keep a skilled or determined adversary out of your data, you are sadly misled. We're no longer talking about 3 letter government agencies with large sums of tax dollars to throw at the problem-- we're talking about script kiddies being able to pull this off. (We may not quite be at the point where script kiddies can do it, but we're getting very close.)

Whole Disk Encryption software will only stop a thief who is interested in the hardware from accessing your data-- and that thief may not even be smart enough to know how to take a hard drive out of a laptop and plug it into another computer in the first place. You had better hope that thief won't sell it on eBay to somebody who is more interested in data than hardware.

Whole Disk Encryption fails to deliver what it claims. If you want safe data, you need to keep as little of it as possible on any mobile devices that are easily lost or stolen. Don't rely on magic crypto fairy dust and don't trust anyone who spouts the odds or computation time required to compute a decryption key. It's not about the math; it's about the keys on the endpoints.

Trusted Platform Modules (TPMs) (like what Vista can be configured to use) hold out some hope, assuming that somebody cannot find a way to extract the keys out of them by spoofing a trusted bootloader. After all, a TPM is basically just a blackbox: you give it an input (a binary image of a trusted bootloader, for example) and it gives you an output (an encryption key). Since TPMs are accessible over a system bus, which is shared among all components, it seems plausible that a malicious device or even device driver could be used to either make a copy of the key as it travels back across the system bus, OR, that a malicious device could just feed it the proper input (as in not by booting the bootloader but by booting an alternative bootloader and then feeding it the correct binary image) to retrieve the output it wants.
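The chain-of-trust idea is easy to model. A minimal sketch of TPM-style measured boot (SHA-256 standing in for the platform's hash; real PCR semantics carry more detail):

    import hashlib

    def extend(pcr: bytes, measurement: bytes) -> bytes:
        # PCR extend: new value = H(old value || measurement), so the final
        # PCR commits to every boot stage, in order.
        return hashlib.sha256(pcr + measurement).digest()

    pcr = bytes(32)  # PCRs start zeroed at power-on
    for stage in (b"firmware image", b"boot loader image", b"kernel image"):
        pcr = extend(pcr, hashlib.sha256(stage).digest())

    # The disk key is unsealed only if the PCR equals the value recorded
    # when the key was sealed; a tampered boot loader changes the PCR, so
    # the TPM refuses to release the key.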

Monday, August 24, 2009

Real-Time Keyloggers

I have discussed real-time keyloggers before as a way to defeat some online banking applications, among other things, and argued that, in general, one-time-password generator tokens add complexity but typically do not add any real security.

Now, stealing one-time-passwords from RSA SecurID has made the NY Times as well. (Slashdot thread here.)

Authentication takes the back seat to malware. If you cannot guarantee a malware free end-point (and who can?), then you cannot guarantee an authenticated person on the other side of that end-point device.

Thursday, May 28, 2009

More Fake Security

The uninstallation program for Symantec Anti-Virus requires an administrator password that is utterly trivial to bypass. This probably isn't new for a lot of people. I always figured this was weak under the hood, like the password was stored in plaintext in a configuration file or registry key, or stored as a hash output of the password that any admin could overwrite with their own hash. But it turns out it's even easier than that. The smart developers at Symantec were thoughtful enough to have a configuration switch to turn off that pesky password prompt altogether. Why bother replacing a hash or reading in a plaintext value when you can just flip a bit to disable the whole thing?

Just flip the bit from 1 to 0 on the registry value called UseVPUninstallPassword at HKEY_LOCAL_MACHINE\SOFTWARE\INTEL\LANDesk\VirusProtect6\CurrentVersion\Administrator Only\Security. Then re-run the uninstall program.
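To illustrate just how trivial that is, the whole "bypass" is a single registry write-- a sketch in Python, assuming the value is a REG_DWORD and the process has admin rights:

    import winreg  # Windows only

    KEY = (r"SOFTWARE\INTEL\LANDesk\VirusProtect6\CurrentVersion"
           r"\Administrator Only\Security")

    # Flip the uninstall-password flag off. Any process running as admin--
    # the user, or drive-by malware the user executed-- can do the same.
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "UseVPUninstallPassword", 0,
                          winreg.REG_DWORD, 0)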

I am aware of many large organizations that provide admin rights to their employees on their laptops, but use this setting as a way to prevent them from uninstalling their Symantec security products. Security practitioners worth their salt will tell you that admin rights = game over. This was a gimmick of a feature to begin with. What's worse is that surely at least one developer at Symantec knew that before the code was committed into the product, but security vendors have to sell out and tell you that perpetual motion is possible so you'll spend money with them. These types of features demonstrate the irresponsibility of vendors (Symantec) who build them.

And if you don't think a user with admin rights will do this, how trivial would it be for drive-by malware executed by that user to do this? Very trivial.

Just another example on the pile of examples that security features do not equal security.

Monday, October 27, 2008

Banks, Malware, and More Failing Tokens

The Kaspersky folks have an interesting report on malware that targets the banking and financial markets, supporting and echoing many of the things posted here over the last several months. For one, the banking industry is receiving targeted malware, which makes it more difficult for "signature" based anti-malware solutions to find the malware. For two, second-factor authentication tokens don't solve the malware-in-the-browser problem.
"In order for a cyber criminal to be able to perform transactions when dynamic passwords are in place using phishing, s/he has to use a Man-in-the-Middle attack.... Setting up a MitM attack is inherently more difficult than setting up a standard phishing site; however, there are now MitM kits available, so cyber criminals can create attacks on popular banks with a minimum of effort."

Tuesday, February 19, 2008

Websense CEO on AV Signatures

Websense CEO, Gene Hodges, on the futility of signature based antivirus, just an excerpt:

On the modern attack vector: Antivirus software worked fine when attacks were generally focused on attacking infrastructure and making headlines. But current antivirus isn’t very good at protecting Web protocols, argued Hodges. “Modern attackware is much better crafted and stealthy than viruses so developing an antivirus signature out of sample doesn’t work,” said Hodges. The issue is that antivirus signature sampling starts with a customer being attacked. Then that customer calls the antivirus vendor, submits a sample, and the vendor identifies the malware and then creates the signature. The conundrum for antivirus software comes when there’s malware that’s never detected. If you don’t know you’re being attacked there’s no starting point for a defense. “Infrastructure attacks are noisy because you wanted the victim to know they have been had. You didn’t have to be a brain surgeon to know you were hit by Slammer. Today’s malware attacks are stealthy and don’t want you to know it’s there,” said Hodges.

Is antivirus software necessary? Hodges said that antivirus software in general is still necessary, but the value is decreasing. Hodges recalled discussions at a recent conference and the general feeling from CIOs that viruses and worms were a solved problem. Things will get very interesting if there’s a recession and customers become more selective about how they allocate their security budgets. For instance, Hodges said CIOs could bring in Sophos, Kaspersky and Microsoft as antivirus vendors and “kick the stuffing out of the price structure for antivirus and firewalls.” The dollars that used to be spent on antivirus software could then be deployed against more data-centric attacks that require better access control, encryption and data leakage prevention. My take: Obviously, Hodges has a motive here since these budget dollars would presumably flow in Websense’s direction. That said, the argument that the value of antivirus software is declining makes a lot of sense and is gaining critical mass.

Web 2.0 as security risk. Hodges said Web 2.0–or enterprise 2.0–techniques could become a security risk in the future, but Websense “really hasn’t seen significant exploitation of business transactions of Web 2.0.” That said enterprises are likely to see these attacks in the future. For starters, enterprises generally allow employees to tap sites like YouTube, Facebook and MySpace. Those sites are big targets for attacks and connections to the enterprise can allow “bad people to sneak bad stuff into good places,” said Hodges. In other words, the honey pot isn’t lifting data from Facebook as much as it is following that Facebook user to his place of employment. Meanwhile, Web connections are already well established in the enterprise via automated XML transactions, service oriented architecture and current ERP systems. Hodges noted that Oracle Fusion and SAP Netweaver applications fall into the Web 2.0 category.


Even the security CEOs can see it (the futility of signature based anti-malware, that is).

Thursday, February 14, 2008

Localhost DNS Entries & "Same Site Scripting"

I'm not a big fan of new names for variations of existing attacks, but Tavis Ormandy (of Google) has pointed out an interesting way to leverage non-fully qualified DNS entries for localhost (127.0.0.1) with XSS:
It's a common and sensible practice to install records of the form
"localhost. IN A 127.0.0.1" into nameserver configurations, bizarrely
however, administrators often mistakenly drop the trailing dot,
introducing an interesting variation of Cross-Site Scripting (XSS) I
call Same-Site Scripting. The missing dot indicates that the record is
not fully qualified, and thus queries of the form
"localhost.example.com" are resolved. While superficially this may
appear to be harmless, it does in fact allow an attacker to cheat the
RFC2109 (HTTP State Management Mechanism) same origin restrictions, and
therefore hijack state management data.

The result of this minor misconfiguration is that it is impossible to
access sites in affected domains securely from multi-user systems. The
attack is trivial, for example, from a shared UNIX system, an attacker
listens on an unprivileged port[0] and then uses a typical XSS attack
vector (e.g. in an html email) to lure a victim into
requesting http://localhost.example.com:1024/example.gif, logging the
request. The request will include the RFC2109 Cookie header, which could
then be used to steal credentials or interact with the affected service
as if they were the victim.

Tavis recommends removing localhost entries from DNS servers that do not have the trailing period (i.e. "localhost" vs. "localhost."). The trailing period ensures that somebody cannot set up camp on 127.0.0.1 and steal your web application's cookies or run other malicious dynamic content in the same domain, exploiting DNS for same-origin policy attacks.
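A quick way to test whether a zone you manage is affected, sketched in Python:

    import socket

    def same_site_scripting_exposed(domain: str) -> bool:
        # If localhost.<domain> resolves to 127.0.0.1, the zone contains the
        # non-fully-qualified record Tavis describes, and anyone on a shared
        # host can serve content inside the domain's cookie scope.
        try:
            return socket.gethostbyname(f"localhost.{domain}") == "127.0.0.1"
        except socket.gaierror:
            return False

    # Example: same_site_scripting_exposed("example.com")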

Wednesday, January 30, 2008

Two Words: Code Quality

Dr. Brian Chess, Chief Scientist at Fortify and static analysis guru, has a couple very interesting posts on the company blog: one on the U.S. court system paying attention to source code quality of breathalyzers, and one on the quality of source code in closed systems (Nintendo Wii and Apple's iPhone).

It appears that a custom-crafted Zelda saved-game file can exploit a buffer overflow in Zelda, allowing the execution of any code you want to throw at the console-- step one of software piracy on the Wii. It further illustrates that you should NEVER trust user input-- no matter how unlikely you think untrustworthy input will be (ahem, saved games in a closed video game system).
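The save-file format below is hypothetical, but the defensive pattern is the one the Zelda code skipped: validate a declared length against the fixed buffer before copying anything.

    import struct

    MAX_NAME = 32  # hypothetical fixed buffer size in the save format

    def parse_player_name(blob: bytes) -> str:
        # Untrusted input: read the declared length, then refuse anything
        # larger than the buffer allows instead of copying blindly.
        (n,) = struct.unpack_from("<I", blob, 0)
        if n > MAX_NAME or 4 + n > len(blob):
            raise ValueError("corrupt or hostile save file")
        return blob[4:4 + n].decode("ascii")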

About 1.5 million iPhones are unaccounted for, suggesting they've been unlocked to break free of AT&T contracts. And this further illustrates that controlling a system at a distance is impossible.

Dr. Chess' comments about the Breathalyzers are choice as well:
"One of the teams used Fortify to analyze the code, and lo-and-behold, they found a buffer overflow vulnerability! This raises the possibility that if you mix just the right cocktail at just the right time, you could craft an exploit. (Dream on.) The real lesson here is that our legal system is waking up to the importance of code. If the code isn’t trustworthy, the outcome isn’t trustworthy either. (Electronic voting machine vendors, you might want to read that last line again.)"

Tuesday, January 15, 2008

Targeted Bank Malware

There have been a lot of interesting things going on with malware these days, but this is on the top of the list (for the next few hours anyway ;). Symantec has a blog write-up on a specific trojan that targets over 400 of the most popular banks in the U.S. and abroad. From the post:
This Trojan downloads a configuration file that contains the domain names of over 400 banks. Not only are the usual large American banks targeted but banks in many other countries are also targeted, including France, Spain, Ireland, the UK, Finland, Turkey—the list goes on.
Targeted malware has some interesting economic ramifications. With signature-based anti-malware, the defenses only work if a signature exists (duh!). But the problem is this: will your very large (think yellow) anti-malware vendor really publish a signature to catch malware targeted at only your organization, especially considering that each signature has the possibility of falsely identifying some other binary executable and causing a problem for another of their customers? No doubt anti-malware vendors have seen targeted malware and then specifically not published a signature for all of their customers. They may have released a specific signature for an individual organization, but the support risks are significant. It's better for everyone to NOT publish a signature unless the malware is widespread. Signatures add overhead, even if it's minimal: each one is one more entry to search through on every scan. At best that lookup is logarithmic in the number of signatures, and even that minimal cost across a half-million signatures is expensive.

But that's not all that is interesting about "Trojan.Silentbanker" ...
Targeting over 400 banks ... and having the ability to circumvent two-factor authentication are just two of the features that push Trojan.Silentbanker into the limelight...

The ability of this Trojan to perform man-in-the-middle attacks on valid transactions is what is most worrying. The Trojan can intercept transactions that require two-factor authentication. It can then silently change the user-entered destination bank account details to the attacker's account details instead. Of course the Trojan ensures that the user does not notice this change by presenting the user with the details they expect to see, while all the time sending the bank the attacker's details instead. Since the user doesn’t notice anything wrong with the transaction, they will enter the second authentication password, in effect handing over their money to the attackers. The Trojan intercepts all of this traffic before it is encrypted, so even if the transaction takes place over SSL the attack is still valid. Unfortunately, we were unable to reproduce exactly such a transaction in the lab. However, through analysis of the Trojan's code it can be seen that this feature is available to the attackers.

The Trojan does not use this attack vector for all banks, however. It only uses this route when an easier route is not available. If a transaction can occur at the targeted bank using just a username and password then the Trojan will take that information, if a certificate is also required the Trojan can steal that too, if cookies are required the Trojan will steal those. In fact, even if the attacker is missing a piece of information to conduct a transaction, extra HTML can be added to the page to ask the user for that extra information.
If you understand how two-factor authentication really works (at least how banks are implementing it), then you already understand that it cannot stop fraud by an adversary in the middle (a less chauvinistic way to say "man in the middle"). Bruce Schneier has been preaching "the failure of two-factor authentication" for a long time. What's monumental about this piece of malware is that it is the first to execute well what pundits have been predicting for a long time. Two-factor authentication, including PayPal's Security Key (courtesy of Verisign, a company not on my good list), is broken. SiteKey is broken (for the same reasons). What Schneier said in 2005 took until 2008 to materialize (at least publicly). This will not go down in history as an anomaly; this will go down as the first run at creating a sophisticated "engine" to handle MITM on any web application that has financial transactions.

Here's the real question to answer: How many fund transfer transactions must be hijacked to bank accounts in foreign nations that don't extradite their criminals to the U.S. before we finally realize just how bad malware in the "Web 2.0" world (sorry that's O'Reilly's name, I really mean "XMLHTTPRequest" world) can get?

It's about the transactions, folks. It's time to authenticate the transactions, not the users. (Schneier's been saying that, too.) Bank of America is closer-- yet farther away at the same time-- to getting this multi-factor authentication problem solved by implementing out-of-band multi-factor authentication in their "SafePass" service ... BUT, it's still completely vulnerable to this malware (I cannot state that this specimen does in fact implement a MITM on this BoA service since I have not reversed a sample, but if not in version 1.0, it will eventually). I wanted to write a rebuttal to the entry on Professor Mich Kabay's column on Network World, but the authors of this trojan did a better job!

Now for the downfalls of SafePass, which uses SMS text messages to send one-time-use 6-digit passcodes (think RSA SecurID tokens without the hardware) to a customer's mobile phone. It's nice to see authentication out-of-band (not in the same communication channel as the HTTPS session); however, once the session is authenticated, the trojan can still create fraudulent transactions. SafePass could be improved by: 1) using a confidential communication channel (SMS text messages are sent through the service provider's network in plaintext), and 2) requiring the customer to input an authentication code that validates transaction details sent with the out-of-band authentication token. Obviously you don't want to skip #1, or else you'll have a privacy issue when the details of the transaction are broadcast through the provider's network and over the air.

Suppose Eve can modify Alice's payment to Bob the merchant (the transaction's destination) via a MITM trojan like this one. Eve modifies the payment to go to Charlie (an account in the Cayman Islands). Suppose Alice is presented with a "Please validate the transaction details we have sent to your mobile phone and enter the validation code below" message. Alice receives the message and notices the payment is to be sent to Charlie. Since she doesn't intend to pay Charlie, she calls her bank to report the fraud (as directed in the transaction validation method). Of course, that pipe dream would only work if SMS were a confidential channel [and if we could deliver a trustworthy confidential channel to a mobile phone, we would have ALREADY solved the trustworthy transaction problem-- they're the same problem] AND if we could find a way to make customers actually PAY ATTENTION to the validation message details.
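For what "authenticate the transactions" could look like mechanically, here is a minimal sketch (the MAC construction and field layout are assumptions, not any bank's actual scheme): bind the confirmation code to the destination and amount, so a trojan that rewrites the destination invalidates the code the user was shown.

    import hmac, hashlib, os

    def transaction_code(key: bytes, dest_account: str,
                         amount_cents: int, nonce: bytes) -> str:
        # Bind the 6-digit confirmation code to the transaction details.
        # A MITM that silently changes the destination changes the code the
        # bank computes, so a code the user confirms against the displayed
        # details cannot authorize a different transfer.
        msg = f"{dest_account}|{amount_cents}".encode() + nonce
        d = hmac.new(key, msg, hashlib.sha256).digest()
        return f"{int.from_bytes(d[:4], 'big') % 1_000_000:06d}"

    nonce = os.urandom(8)  # fresh per transaction, to prevent replay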

And of course, incoming SMS messages in the U.S. are charged to the recipient (what a stupid idea-- what happens when people start spamming over SMS like they do over SMTP?).


...

[Side commentary] Oh. And then there's the problem of telling your family members not to click on any random e-cards (or anything else they weren't expecting), because it just got a little scarier out there. Imagine your retired parent just clicked on some pretty animated something-or-other that launched some client-side script, installed Trojan.Silentbanker (or similar), and then the next time they went to check their retirement savings, their life's savings were sent to a Cayman Islands account and liquidated faster than you can say "fraud". Does dear old Grandma or Grandpa have to go back to work? They just might, or become a liability on their children and grandchildren. If that won't scare the older generation away from the unsafe web, what will?

Trust is a Simple Equation

[Begin rant]

OK. If security vendors don't get this simple equation, then we might as well all give up and give in...


If you don't know if a computer has been rooted or botted (apologies to the English grammar people-- computer security words are being invented at an ever-increasing rate), then you cannot use that same computer to find out if it has been rooted or botted. Let me say this slightly differently: If you don't know if a computer is worthy of trust (trustworthy), then you cannot trust it to answer correctly if you ask it if it is trustworthy.

It doesn't work in real life. It's stupid to think that a person you just met on the street is trustworthy enough to hold your life savings just because that person says "you can trust me" (or to hold anything of value, really). My father used to say "never trust a salesman who says the words 'trust me'," because his life experience suggested they're lying most often when they say that (which may not be statistically significant, but it's relevant as an anecdote).

So why in the world would we EVER trust a piece of software to run on a computer whose state is unknown-- whose TRUSTWORTHINESS is unknown-- to determine if it (or any other system for that matter) is trustworthy???

That's why many NAC implementations fail: they're opt-in security. They send some little Java (platform-independent) program to the endpoint to determine trustworthiness-- presence of patches, AV, and so on. Of course, all it takes is a rootkit to tell the NAC code "nothing to see here, move along." We've seen implementations of exactly that.
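A contrived sketch of why endpoint self-reporting is opt-in security (the checks are hypothetical stubs; no real NAC product's protocol is shown here):

```python
def check_patches() -> bool:
    # Stub: a real agent would query the OS patch inventory here.
    return False  # this host is actually missing patches

def check_av() -> bool:
    # Stub: a real agent would verify the AV service is alive.
    return False  # and its AV has been killed

def honest_agent() -> dict:
    """What the NAC agent is supposed to report."""
    return {"patched": check_patches(), "av_running": check_av()}

def rootkitted_agent() -> dict:
    """What a compromised host reports: whatever gets it onto the network.
    The NAC server cannot tell these responses apart, because both come
    from the very machine whose trustworthiness is in question."""
    return {"patched": True, "av_running": True}

print(honest_agent())      # {'patched': False, 'av_running': False}
print(rootkitted_agent())  # {'patched': True, 'av_running': True}
```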

So why in the world is Trend Micro-- a company that should KNOW BETTER-- creating code that does just that? RUBotted is the DUMBEST, STUPIDEST, MOST [insert belittling adjective here] idea I have ever seen. It works by running code on the same CPU, controlled by the same OS, that is already believed to be botted-- why else would you run the tool unless you already suspected its trustworthiness to be low?!?

This had to have been either designed by a marketing person or a complete security amateur. It attempts to defy (or more realistically: ignore) the laws of trust! How long is it going to be before bot code has the staple feature: "hide command and control traffic from the likes of RUBotted"?


And then eWeek is suggesting that this will be a paid-for-use service or application?!? People, please don't pay money for snake oil, or in this case perpetual motion machines.


This just defies nature. If you wouldn't do it in the "real world", don't do it in the "cyber world".



Now, if you could use systems you already know to be trustworthy (i.e., computers that you control and know an adversary does not) to monitor systems about which you are not sure, then you might be able to make a valid assertion about the trustworthiness of some other system-- but you MUST have an external, third-party trusted system FIRST.
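For example, here's a minimal sketch of watching a suspect host from a separate, known-good box-- assuming the scapy package, a SPAN/mirror port feeding the monitor, and hypothetical addresses:

```python
from scapy.all import IP, sniff  # assumes scapy is installed on the TRUSTED box

SUSPECT = "192.168.1.50"      # hypothetical host of unknown trustworthiness
EXPECTED = {"192.168.1.1"}    # destinations we expect it to talk to

def flag_unexpected(pkt):
    """Flag outbound traffic to unexpected destinations-- possible C&C."""
    if IP in pkt and pkt[IP].src == SUSPECT and pkt[IP].dst not in EXPECTED:
        print(f"unexpected connection: {pkt[IP].src} -> {pkt[IP].dst}")

# Crucially, this runs on the trusted monitor, not on the suspect machine,
# so a rootkit on the suspect cannot make this code lie.
sniff(filter=f"host {SUSPECT}", prn=flag_unexpected, store=False)
```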


And don't forget that "trust" and "trustworthiness" are not the same.

[End Rant]

Wednesday, January 9, 2008

MBR Rootkits

There is a new flurry of malware floating around in the wild: boot record rootkits (a.k.a. "bootkits"). Yes, for those of you old enough to remember, infecting a Master Boot Record (MBR) is an ancient practice, but it's back.

There are several key details in these events that should be of interest.

As Michael Howard (of Microsoft Security Development Lifecycle fame) points out and Robert Hensing confirms, Windows Vista with BitLocker Drive Encryption installed (more specifically, with a Trusted Platform Module (TPM), as I have discussed here many times) is immune to these types of attacks. It's not because the drive is encrypted; it's because the TPM validates the chain of trust, starting with the building blocks-- including the MBR.
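For those unfamiliar with how that chain of trust is built, here's a simplified sketch of the TPM's PCR "extend" operation (real measured boot involves firmware, multiple PCRs, and keys sealed to PCR values; this only shows the hash chain):

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM 1.2-style extend: PCR_new = SHA1(PCR_old || measurement)."""
    return hashlib.sha1(pcr + measurement).digest()

pcr = b"\x00" * 20  # PCRs start zeroed at power-on

# Measure the MBR (the first 512 bytes of the disk) before executing it.
# (Illustrative; reading the raw disk requires root privileges.)
mbr = open("/dev/sda", "rb").read(512)
pcr = extend(pcr, hashlib.sha1(mbr).digest())

# BitLocker's key is sealed to the expected PCR values. If a bootkit
# rewrites the MBR, this chain yields a different PCR value and the TPM
# refuses to unseal the key-- the tampering cannot go unnoticed.
```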



But that's not the only interesting thing in this story. What's interesting is best found in Symantec's coverage on the latest MBR rootkit (Trojan.Mebroot) that has been recently found in the wild:
"Analysis of Trojan.Mebroot shows that its current code is at least partially copied from the original eEye BootRoot code. The kernel loader section, however, has been modified to load a custom designed stealth back door Trojan 467 KB in size, stored in the last sectors of the disk."
That's right. Our friends at eEye-- who have been busy hacking the products we use to run our businesses in order to show us the emperor hath no clothes-- created the first Proof-of-Concept "bootkit":
"As we have seen, malicious code that modifies a system's MBR is not a new idea – notable research in the area of MBR-based rootkits was undertaken by Derek Soeder of eEye Digital Security in 2005. Soeder created “BootRoot”, a PoC (Proof-of-Concept) rootkit that targets the MBR."

Saturday, December 29, 2007

AV Signature False Positives

Kaspersky's AV accidentally identified the Windows Explorer process as malware. The same thing happened to Symantec with its Asian-language Windows customers. And Heise is running an article on how AV vendors' ability to protect has decreased since last year.


The problem with these commercial, signature-based anti-malware solutions is that they work 1) backwards, and 2) blind. They operate "backwards" in the sense that they are a default-allow (instead of default-deny) mechanism-- they only block the stuff they know all of their customers will agree is bad (unless they screw up like this). And they operate "blind" in that they don't do any QA on their code in your environment. If you think about it, it's scary: they apply multiple (potentially crippling, as evidenced by these recent events) changes to production systems-- in most organizations, several times per day-- without proper change control processes. Besides anti-malware, what other enterprise application operates in such a six-shooters-blazing, wild-west-cowboy sort of way?
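The difference between the two postures fits in a few lines-- a contrived sketch, with the hash sets standing in for a vendor signature database and an enterprise whitelist:

```python
KNOWN_BAD = {"hash-of-known-virus"}     # signature AV's blacklist
KNOWN_GOOD = {"hash-of-approved-app"}   # an application whitelist

def default_allow(file_hash: str) -> bool:
    """Signature AV: everything runs unless it's on the bad list.
    Brand-new malware isn't on the list yet, so it runs."""
    return file_hash not in KNOWN_BAD

def default_deny(file_hash: str) -> bool:
    """Whitelisting: nothing runs unless it's on the good list."""
    return file_hash in KNOWN_GOOD

assert default_allow("hash-of-brand-new-malware")     # it runs-- scary
assert not default_deny("hash-of-brand-new-malware")  # blocked by default
```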


Surely this is one more nail in the signature-based anti-malware coffin.

Tuesday, December 11, 2007

OpenDNS - I think I like you

I think I really like OpenDNS. It's intelligent. It's closer to the problem than existing solutions. And it's free.


OpenDNS works by using anycast to route you to the best DNS servers based on where you are. But before it quickly gives you your response, it can optionally filter out unwanted content. OpenDNS partners with communities and service providers to maintain a database of adult-content and malicious websites. If you choose to opt in, each DNS query that matches a known-bad site redirects your browser to a customizable page that explains why the page is not allowed.

Now, privacy advocates are well aware that there is a potential data collection and use problem. However, DNS queries already are a privacy risk, since an ISP can create quite the portfolio on you based on which names get resolved to numbers. OpenDNS can collect information about you, including statistics associated with DNS usage on the networks you manage, but that choice is not turned on by default-- you have to opt into it as well. So, all things considered, privacy is well managed.

I really like this approach to filtering unwanted HTTP content because it completely prevents any connection between clients and offending servers. In fact, clients don't even get to know who (if you'll allow me to personify servers for a moment with the term "who") the server is or where it lives. But what I like even more is that this service is simple. There are no complicated client software installs (that users or children can figure out how to disable), no distributed copies of offending-URL databases to replicate and synchronize, and no lexicons for users to tweak. It's lightweight. All it takes is updating a DHCP server's entries for DNS servers to point to 208.67.222.222 and 208.67.220.220 and checking a few boxes in an intuitive web administration console for whichever content needs to be filtered. For a home user, that's as easy as updating the DNS server fields in a home router-- and all current and future clients are ready to go. An enterprise could use this service for its DNS forwarders as well-- and many larger customers do. A non-tech-savvy parent could turn on content filtering without the "my kids program the VCR" syndrome resulting in the kids bypassing the filters. Setting an IP address for a DNS server doesn't stand out as a "net nanny" feature to kids who are left alone with the computer.
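Once the resolver entries are changed, it's easy to check that clients are actually using OpenDNS. A minimal sketch, assuming the dnspython package (the myip.opendns.com name resolves-- to your public IP-- only when asked through OpenDNS's resolvers):

```python
import dns.resolver  # assumes the dnspython package is installed

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["208.67.222.222", "208.67.220.220"]  # OpenDNS

# A successful answer confirms the query reached OpenDNS.
answer = resolver.resolve("myip.opendns.com", "A")  # .query() in dnspython < 2
print("resolving via OpenDNS; public IP is", answer[0].address)
```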

Use OpenDNS
Okay, there have to be caveats, right? Here they are ...

If you're planning on using some third-party DNS service-- especially one that is free-- it had better be performing well, and it had better be a service that you trust (because DNS has been abused in the past to send people to malicious sites). Since its inception in July 2006, OpenDNS has serviced over 500 million DNS requests with a perfect 100% uptime track record. And from its open, collaborative stance on issues like phishing (see phishtank.com), you'll want to trust them.

Any DNS miss (except some common typos) will land you on an OpenDNS web page that tries to "help" you find what you missed. The results look like re-branded Google results. Users taking links off the OpenDNS results page is how OpenDNS makes its revenue-- on a pay-per-click basis. That's how it keeps the services free.

Dynamic IP addresses can mess up a home user's ability to keep content-filtering policies in check (this won't affect enterprises), but there are a number of ways to keep the policies in sync, including their DNS-O-Matic service. What I'd like to see added: native consumer-router support for dynamic IP address changes, so content-filtering policies stay in place no matter what the ISP does. [The Linksys WRT54G wireless router, for example, supports similar functions with TZO and DynDNS today-- it would be nice if OpenDNS were another choice in the drop-down menu.] If my neighbor enrolled in the service, it might be possible for me to inherit my neighbor's OpenDNS filtering policies if we share the same ISP and dynamic IP pool-- but again, that's what the dynamic IP updating services are for.
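For the scripting-inclined, DNS-O-Matic speaks the familiar DynDNS-style update protocol, so keeping your OpenDNS policy glued to your current dynamic IP can be automated. A rough sketch-- the endpoint and the "all.dnsomatic.com" catch-all hostname are assumptions based on DNS-O-Matic's documentation, and the credentials are placeholders:

```python
import urllib.request

# DynDNS-style update endpoint (assumed); updates all registered services.
URL = "https://updates.dnsomatic.com/nic/update?hostname=all.dnsomatic.com"

mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
mgr.add_password(None, URL, "your-opendns-username", "your-password")
opener = urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(mgr))

# A response beginning with "good <ip>" means the update was accepted.
print(opener.open(URL).read().decode())
```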

Enterprises that decide to use OpenDNS for their primary outgoing DNS resolvers must keep in mind that an offending internal user could simply specify a DNS server of his or her own preference-- one that will allow the content filters to be bypassed. However, a quick and simple firewall policy (not some complicated DMZ rule) that blocks all outbound DNS traffic (UDP/TCP 53) except traffic destined for the OpenDNS servers (208.67.222.222 and 208.67.220.220) will quell that concern.
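Verifying that such an egress policy actually works is a one-minute job-- a minimal sketch, again assuming dnspython (4.2.2.2 is just a well-known third-party resolver standing in for "some DNS server the user prefers"):

```python
import dns.exception
import dns.resolver

def can_resolve_via(nameserver: str) -> bool:
    """True if the resolver answered us-- i.e., the firewall let us out."""
    r = dns.resolver.Resolver(configure=False)
    r.nameservers = [nameserver]
    try:
        r.resolve("www.example.com", "A", lifetime=3)  # .query() in dnspython < 2
        return True
    except dns.exception.Timeout:
        return False

print("OpenDNS reachable:", can_resolve_via("208.67.222.222"))  # expect True
print("rogue DNS reachable:", can_resolve_via("4.2.2.2"))       # expect False
```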

So the caveats really are not bad at all.

Since the company is a west coast (SF) startup and since the future seems bright for them as long as they can keep their revenue stream flowing, I imagine they'll be gobbled up by some larger fish [Google?].


So this Christmas, give the gift of safe browsing.




...
This might seem like a blatant advertisement, but (number one) I rarely like a service well enough to advocate or recommend it and (number two) I am not financially affiliated with OpenDNS in any way.

Monday, December 10, 2007

Gary McGraw on Application Layer Firewalls & PCI

This serves as a good follow-up to my dissection of Imperva's Application Layer Firewall vs Code Review whitepaper.

Gary McGraw, the CTO of software security firm Cigital, just published an article on Dark Reading called "Beyond the PCI Bandaid". Some tidbits from his article:

Web application firewalls do their job by watching port 80 traffic as it interacts at the application layer using deep packet inspection. Security vendors hyperbolically claim that application firewalls completely solve the software security problem by blocking application-level attacks caused by bad software, but that’s just silly. Sure, application firewalls can stop easy-to-spot attacks like SQL injection or cross-site scripting as they whiz by on port 80, but they do so using simplistic matching algorithms that look for known attack patterns and anomalous input. They do nothing to fix the bad software that causes the vulnerability in the first place.
Gary's got an excellent reputation fighting information security problems from the software development perspective. His Silver Bullet podcast series is one of a kind, interviewing everyone from Peter Neumann (one of the founding fathers of computer security) to Bruce Schneier (the most well known of the gurus) to Ed Felten (of Freedom to Tinker and Princeton University fame). He is also the author of several very well respected software security books.

Tuesday, December 4, 2007

Client Software Update Mechanisms

It's 2007. Even the SANS Top 20 list has client-side applications as a top priority. Simply put, organizations have figured out how to patch their Microsoft products using one of the myriad automated tools out there. Now it's all the apps that sit in the browser stack, in some way or another, that are getting the attention ... and the patches.

Also, since it's 2007, it's well-agreed that operating a computer without administrative privileges significantly reduces risk-- although it doesn't eliminate it.

So why is it that when all of these apps in the browser stack (Adobe Acrobat Reader, Flash, RealPlayer, QuickTime, etc.) implement automated patch/update mechanisms, those mechanisms are completely broken if you follow the principle of least privilege and operate your computer as a non-admin? Even Firefox's built-in update mechanism behaves the exact same way.
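The root of the problem is simple: the updaters try to overwrite binaries in locations only administrators can write to. A trivial sketch (the path is hypothetical):

```python
# Running as a non-admin user, an in-place updater fails at the first write:
UPDATE_TARGET = r"C:\Program Files\SomeVendor\plugin.dll"  # hypothetical path

try:
    with open(UPDATE_TARGET, "wb") as f:  # non-admins can't write here
        f.write(b"...new version bytes...")
except PermissionError as err:
    # This, in one line, is the error dialog the non-admin user sees.
    print("update failed:", err)
```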

So, here are your options ....

1) Give up on non-admin and operate your computer with privileges under the justification that the patches reduce risk more than decreased privileges.

2) Give up on patching these add-on applications under the justification that decreased privileges reduce more risk than patching the browser-stack.

3) Grant write permissions to the folders (or registry keys) that belong to the applications that need updates so that users can operate the automated update mechanisms without error dialogs, understanding that this could lead to malicious code replacing part or all of the binaries to which the non-admin users now have access.

4) Lobby the vendors to create a trusted update service that runs with privileges, preferably with enterprise controls, such that the service downloads and performs integrity checking upon the needed updates, notifying the user of the progress.

Neither option 1 nor option 2 is ideal. Both are compromises, and the success of each depends heavily upon an ever-changing threat landscape. Option 3 might work for a while, particularly while it remains an obscure option, but it's very risky. And option 4 is long overdue (a sketch of what it could look like follows below). Read this, Firefox, Apple, Adobe, et al: create better software update mechanisms. Apple even created a separate Windows application for this purpose, but it runs with the logged-in user's permissions, so it's useless.
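To sketch what option 4 could look like: a privileged service fetches the update, integrity-checks it, and only then installs it-- so the logged-in user never needs admin rights. Every name and URL below is hypothetical, and a real service would verify a digital signature on the manifest, not merely compare hashes:

```python
import hashlib
import urllib.request

# Hypothetical vendor manifest: one line of "<sha256> <package-url>".
MANIFEST_URL = "https://updates.example-vendor.com/manifest.txt"

def fetch(url: str) -> bytes:
    with urllib.request.urlopen(url) as resp:
        return resp.read()

def install(package: bytes) -> None:
    """Hypothetical: writes to Program Files using the service's privileges."""
    ...

def run_update_service() -> None:
    """Runs as a privileged service; the user stays non-admin throughout."""
    expected_sha256, package_url = fetch(MANIFEST_URL).decode().split()
    package = fetch(package_url)
    if hashlib.sha256(package).hexdigest() != expected_sha256:
        raise RuntimeError("integrity check failed-- refusing to install")
    install(package)
```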


...
And this is not even dealing with all of the patching problems large organizations learned about while introducing automated patching systems for Microsoft products: components used in business-critical applications must be tested prior to deployment. The self-update functions in the apps described above have zero manageability for enterprises. Most of these products ship new versions as complete installers instead of releasing updates that patch broken components. The only real option for enterprises is to keep track of versions as the vendors release them, packaging the installers for enterprise-wide distribution through their favorite tool (e.g., SMS). It would be nice if these vendors could release a simple enterprise proxy-- at least on par with Microsoft's WSUS-- where updates could be authorized by a centralized enterprise source after proper validation testing in the enterprise's environment.