Showing posts with label Trust.

Thursday, August 16, 2012

Classic Trust

Ken Thompson is on the left. That's not Adam Savage on the right.
If you work in computer security or software development and you have never read Unix co-creator Ken Thompson's 1984 Turing Award lecture "Reflections on Trusting Trust", then you are hereby obliged to at least read the following snippet for today's history lesson, which is just as relevant-- actually more so-- today:
The moral is obvious. You can't trust code that you did not totally create yourself. (Especially code from companies that employ people like me.) No amount of source-level verification or scrutiny will protect you from using untrusted code. In demonstrating the possibility of this kind of attack, I picked on the C compiler. I could have picked on any program-handling program such as an assembler, a loader, or even hardware microcode. As the level of program gets lower, these bugs will be harder and harder to detect. A well installed microcode bug will be almost impossible to detect.
Ken was referring to the trojan modifications he embedded into the C compiler, illustrating that you need to trust more than source code: the compiler, the assembler, the loader, all the way down to the instruction sets of the CPUs. Or as Schneier famously put it: "security is a chain; only as strong as its weakest link".
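
The trick is worth spelling out. Below is a toy sketch of Thompson's two-stage trojan, written in Python for readability (his was in C); every name in it is illustrative, not real compiler code:

    # Stage 1 backdoors the target program; stage 2 re-injects the trojan
    # whenever the compiler compiles itself, so the backdoor survives even
    # after the compiler's *source* has been audited and found clean.

    LOGIN_BACKDOOR = '\nif password == "magic": grant_access()  # injected\n'

    def looks_like_login(source: str) -> bool:
        return "def check_password" in source

    def looks_like_compiler(source: str) -> bool:
        return "def compile(" in source

    def honest_compile(source: str) -> str:
        return source  # stand-in for real code generation

    def trojaned_compile(source: str) -> str:
        if looks_like_login(source):      # stage 1: backdoor the login program
            source += LOGIN_BACKDOOR
        if looks_like_compiler(source):   # stage 2: quine-like self-reinfection
            source += "\n# (re-inject this entire trojan into the new compiler)\n"
        return honest_compile(source)

    clean_compiler_source = "def compile(source): ..."
    print(trojaned_compile(clean_compiler_source))  # comes out trojaned again

Once the trojaned binary exists, inspecting the compiler's source code tells you nothing-- which is exactly Thompson's point.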

Who operates on a completely self-built system from software to hardware?  We would venture to say: nary a soul.

Just a good reminder for a random Thursday, in case you forgot.

Tuesday, February 7, 2012

Verisign Hacked!


Verisign was breached, according to an SEC filing (Reuters), yet the company has disclosed almost no details and acts like it's no big deal!

An excerpt from Reuters (emphasis mine):

"Oh my God," said Stewart Baker, former assistant secretary of the Department of Homeland Security and before that the top lawyer at the National Security Agency. "That could allow people to imitate almost any company on the Net."
I knew instantly why Baker is a former Assistant Secretary at DHS: because he understands the gravity of a real security incident. Had he not understood, he would probably still be employed at DHS, along with all of the other laughingstocks and poster children for security theater.

Back on topic: Verisign is probably the single largest peddler of SSL certificates, and its Certificate Authorities (CAs) are probably trusted by more browsers and other applications than anyone else's. Talk about all your eggs in one basket! Not to mention Verisign's role in the control of DNS.

In a past life as a customer of Verisign's certificates, I did not like dealing with them. They were arrogant, acted like they had no competitors, and charged exorbitant prices for their certs. That stated, the fact that mum's the word on what could be the single largest breach in internet history is cause for much concern. If the private keys for any of their CA certs, including their intermediate certs, were compromised, then anybody could impersonate any site they wish on the web.

First it was RSA being tight lipped on their SecurID breach, and now it's Verisign on who knows what was breached.

In the authentication world, there are really only two models: A) hierarchical, or B) web of trust. Public Key Infrastructure (i.e. Certificate Authorities) is hierarchical. Essentially, we all trust a self-appointed few to discern for us who is authentic and who is not. In the web of trust model, that discernment is distributed among all the participants. You may choose to trust that a website is your bank, or you may not. The most common implementation of web of trust is PGP (the protocol, not the PGP company, which is rife with its own history of issues). The con of web of trust is that your Grandma (or maybe even you) won't know whom to trust, so she'll have a hard time setting up her {computer, iPhone, whatever}. In the hierarchical model, you don't have to think, but sometimes not thinking is a bad thing.
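
To make the distinction concrete, here is a minimal sketch of the two validation rules, assuming nothing beyond what's described above (all names are hypothetical):

    # Hierarchical: a cert is good iff one of the anointed roots vouches for
    # it. Web of trust: a cert is good iff *I* can reach it through parties
    # I chose to trust, within some number of hops.

    from collections import deque

    ROOT_CAS = {"BigCorpCA", "OtherGlobalCA"}   # picked for you by your browser

    def hierarchical_valid(issuer: str) -> bool:
        return issuer in ROOT_CAS               # you get no say in ROOT_CAS

    # who-vouches-for-whom edges, accumulated by each participant individually
    vouches = {"me": {"alice"}, "alice": {"bob"}, "bob": {"somebank.example"}}

    def web_of_trust_valid(start: str, target: str, max_hops: int = 3) -> bool:
        frontier, seen = deque([(start, 0)]), {start}
        while frontier:
            node, hops = frontier.popleft()
            if node == target:
                return True
            if hops < max_hops:
                for nxt in vouches.get(node, set()) - seen:
                    seen.add(nxt)
                    frontier.append((nxt, hops + 1))
        return False

    print(hierarchical_valid("SomeUnknownCA"))            # False
    print(web_of_trust_valid("me", "somebank.example"))   # True, via alice -> bob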

...

What can be learned from this?

1) Even the largest internet security giants can fall, and when they fall they hit the ground hard. A large, recognizable brand does not necessarily improve security. Though these incidents do not conclusively prove it, there is reason to believe these companies present a treasure trove to their adversaries: they house assets of far greater value than is generally understood. Aligning your business with those high-value assets may attract unnecessary attention from web thieves to your business.

2) It is probably time to revisit the web of trust model.

Wednesday, February 1, 2012

DNSCrypt

This is a very premature response to what I believe is the single best solution to dangers like SOPA, PIPA, and ACTA: DNSCrypt.

To be fair, I don't think DNSCrypt in and of itself would be a solution to draconian "take the free out of internet" laws; however, it's going to be a very necessary component to maximize liberty in the 21st century. It's amazing the web has lasted this long without end-to-end crypto for DNS.

As stated in the link above, DNSCrypt is no replacement for DNSSEC. In fact, the ideal solution would be to rebuild DNS from the ground up: more of a web-of-trust model, completely end-to-end confidential and authenticated channels everywhere, the ability to choose which "authority" you subscribe to for DNS, and resources listing themselves with as many authorities as they wish to be associated with-- ending the centralized (yet distributed for availability) control of the web. There's probably no reason to put that much power in the hands of a single entity anyway, and as governments continue down the path they appear to be on, locking down the web, isolated technical solutions will essentially create black markets for essential services like DNS.

For maximum persistence, an ideally liberated DNS solution would also need to float through filters, learning what it can from protocols designed to do so, like bittorrent, IRC, and basically any "cloud service" an enterprise IT Security team would want to block (those online drive storage services always seem to find a way through!).

And ultimately, wide-scale adoption is very necessary. It's excellent to see the inklings of that plan in DNSCrypt. Look at the choice of ECC to preserve confidentiality: ECC requires low CPU overhead compared to other public key schemes, like RSA, making it well suited for small devices like phones, wireless routers, and other embedded platforms to natively support DNSCrypt. Also important is for large-scale providers like OpenDNS to support it. But as with so many great technical solutions to problems, market penetration has been the deciding factor in what wins out and what doesn't.

Even if DNSCrypt isn't perfect, it's a step in the right direction.

Sunday, January 30, 2011

Visualize Irony

What's the point of the heavy-duty chain and lock if one of the chain's links is just a zip-tie?

Monday, March 29, 2010

SSL & Big Government. Where's Phil Zimmerman?

What an interesting year 2010 is already turning out to be in technology, politics, and life as we know it. More censorship battles are going on than ever before (e.g. Google vs. the Great Firewall of China), and governments may ramp up control over Internet traffic in their respective countries even further. Australia has content filters on all ISPs in the name of decency, but political dissident websites have slipped into the "indecent" categories. The UK and US are pushing harder to take control of private access to the Internet. Iran shuts down all Internet access within the country during elections. Now this: reports that governments are manipulating the hierarchical Certificate Authority model to eavesdrop on "secure" encrypted connections over the Internet-- and vendors are creating turn-key appliances to make it easy. Do "netizens" still have a Bill of Rights? Who's watching the watchers?

Enter exhibit A: "Has SSL become pointless?" An article on the plausibility of state-sponsored eavesdropping by political coercion of Certificate Authorities to produce duplicate (faked) SSL certificates for Big Brother devices.
In the draft of a research paper released today (PDF available here), Soghoian and Stamm tell the story of a recent security conference where at least one vendor touted its ability to be dropped seamlessly among a cluster of IP hosts, intercept traffic among the computers there, and echo the contents of that traffic using a tunneling protocol. The tool for this surveillance, marketed by an Arizona-based firm called Packet Forensics, purports to leverage man-in-the-middle attack strategies against SSL's underlying cryptographic protocol.
...
As the researchers report, in a conversation with the vendor's CEO, he confirmed that government agencies can compel certificate authorities (CAs) such as VeriSign to provide them with phony certificates that appear to be signed by legitimate root certificates.
...
The researchers have developed a threat model based on their discoveries, superimposing government agencies in the typical role of the malicious user. They call this model the compelled certificate creation attack. As Soghoian and Stamm write, "When compelling the assistance of a CA, the government agency can either require the CA to issue it a specific certificate for each Web site to be spoofed, or, more likely, the CA can be forced to issue an intermediate CA certificate that can then be re-used an infinite number of times by that government agency, without the knowledge or further assistance of the CA. In one hypothetical example of this attack, the US National Security Agency (NSA) can compel VeriSign to produce a valid certificate for the Commercial Bank of Dubai (whose actual certificate is issued by Etisalat, UAE), that can be used to perform an effective man-in-the-middle attack against users of all modern browsers."
There's more info from Wired on the subject as well.

All of this is calling for a return to our roots. Where's Phil Zimmermann when we need him now?

Phil created PGP (Pretty Good Privacy) during the political crypto export wars, creating the first implementation of the "web of trust" model, which is an alternative to the hierarchical model that Certificate Authorities use today in SSL Public Key Infrastructure (PKI). Firefox 3 already saw the introduction of some web-of-trust-like features for unsigned SSL certs. If you've ever browsed to an HTTPS site using a self-signed certificate, then you have probably seen the dialog box that asks if you would like to save the "exception" to trust that SSL certificate, which is very similar to how SSH works in the command line environment. Essentially, that's the basic premise behind the researchers' forthcoming "CertLock" Firefox add-on: extending the web-of-trust model to all SSL certs encountered and adding some decision support for which certificates to trust based on attribute/key changes.
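
The SSH-style behavior is easy to sketch. Here is a minimal trust-on-first-use pinning check in Python-- a toy under stated assumptions (hypothetical host and pin file), not how CertLock itself is implemented:

    import hashlib, json, socket, ssl
    from pathlib import Path

    PIN_FILE = Path("known_certs.json")      # analogous to ~/.ssh/known_hosts

    def cert_fingerprint(host: str, port: int = 443) -> str:
        # Fetch the presented certificate without validating it against CAs.
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        with ctx.wrap_socket(socket.create_connection((host, port)),
                             server_hostname=host) as s:
            der = s.getpeercert(binary_form=True)
        return hashlib.sha256(der).hexdigest()

    def check_pin(host: str) -> str:
        pins = json.loads(PIN_FILE.read_text()) if PIN_FILE.exists() else {}
        fp = cert_fingerprint(host)
        if host not in pins:
            pins[host] = fp                  # first use: remember it
            PIN_FILE.write_text(json.dumps(pins))
            return "pinned on first use"
        return "matches pin" if pins[host] == fp else "CERT CHANGED -- investigate"

    print(check_pin("www.example.com"))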

In the hierarchical model we have today, a bunch of "authorities" tell us which SSL certificates to trust. One CA (Certificate Authority) could tell us a cert for somebank.com at IP address A.B.C.D is OK, while a second CA could assert that a completely different cert for somebank.com hosted at IP address E.F.G.H is also good. Who is the ultimate authority? You are. But your Grandmother may have a hard time telling which certs to trust, which is why this problem exists and exactly why the hierarchical model exists in the first place. In the web-of-trust model, there are no authorities. You trust companyA.com, and if companyA.com trusts companyB.com, you can automatically trust companyB.com, too (or not; it's up to you). You build up links that represent people vouching for other people, just like real life. If you trust somebody who is not worthy of that trust, then bad things can happen, just like real life.

In the hierarchical model, you're basically outsourcing those trust decisions to third parties you've never met. You're asking all of them--at the same time-- to guarantee your banking connection is secure. Or your connection to Facebook is secure. Or your connection to a politically dissident web forum is secure. I repeat: you're asking every single CA, each of which is an organization of people that you have never met, to all make these decisions for you simultaneously. Does that sound crazy? You bet. What if, in the real world analogue of this, you outsourced to a collection of "authorities" which TV shows you should watch, which products you should buy, and which politicians should get your vote? [In the U.S. we may already have that with the Big Media corporations, but thank goodness for the Internet, er, wait, well, before we knew about governments abusing SSL certificates anyway.]

It's in this hierarchical model that governments can subvert the confidentiality of communications. And if governments can do this at will by forcing Certificate Authorities within their jurisdiction to create fraudulent, duplicate certificates, what's going to stop the ISPs or snoops-for-hire that set up the intercepts from saving copies of pertinent data for themselves, outside of the original intent (regardless of its legal status in your home country)? Probably not much. Maybe an audit trail. Maybe. But likely even that is up for manipulation. After all, look at how poorly the Government Accountability Office ranks the various branches of the U.S. federal government's IT systems-- many of them receive failing grades, yet they are still in operation. Can you trust them with your data?

My browser has over 100 Certificate Authority certificates in it by default. I know each cert probably represents a dozen or more people who can have a certificate issued from the CA, but even assuming it's only a single person per certificate, there certainly aren't 100 people out there I would trust in those aspects of my life. [If 100 doesn't seem that high, just count how many Facebook friends you have that you wouldn't really want to know {your credit card number, your plans for next Friday night, the way you voted at the last election, etc.}.]
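
You can check your own platform's default count with a few lines of Python (a quick sanity check; the number varies by OS and browser trust store):

    import ssl

    ctx = ssl.create_default_context()   # loads the platform's default CA store
    print(len(ctx.get_ca_certs()), "CA certificates trusted by default")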

Perhaps we've gone soft. Perhaps we find it a hassle to use PGP to encrypt messages sent through our favorite free webmail service. Perhaps we're trusting that somebody else is securing our information for us. Whatever it is, perhaps we should read Phil Zimmermann's original words, back when the fight for e-mail privacy was so vivid in our daily lives (before most Internet users could even spell "Internet"). Perhaps then we'll revive the fight for privacy in our web traffic as well and look to solutions like the forthcoming CertLock, or maybe a full web-of-trust SSL implementation built into each of our browsers, rather than leaving all of our security decisions up to so many semi-trustworthy and unknown Certificate Authorities. Back then, the "activist" in each one of us-- each security professional-- told people to use PGP to encrypt ALL email. Why? Because if you didn't, the messages that were encrypted automatically stood out, as if you "have something to hide". Whether you do or don't is beside the point; encrypting all the time reveals nothing to traffic pattern analysis. Perhaps we should revert to that and be more vigilant in our CA selection.

The following are Phil Zimmermann's own words on why he created PGP (Pretty Good Privacy):

Why I Wrote PGP

Part of the Original 1991 PGP User's Guide (updated in 1999)
"Whatever you do will be insignificant, but it is very important that you do it."
–Mahatma Gandhi.
It's personal. It's private. And it's no one's business but yours. You may be planning a political campaign, discussing your taxes, or having a secret romance. Or you may be communicating with a political dissident in a repressive country. Whatever it is, you don't want your private electronic mail (email) or confidential documents read by anyone else. There's nothing wrong with asserting your privacy. Privacy is as apple-pie as the Constitution.
The right to privacy is spread implicitly throughout the Bill of Rights. But when the United States Constitution was framed, the Founding Fathers saw no need to explicitly spell out the right to a private conversation. That would have been silly. Two hundred years ago, all conversations were private. If someone else was within earshot, you could just go out behind the barn and have your conversation there. No one could listen in without your knowledge. The right to a private conversation was a natural right, not just in a philosophical sense, but in a law-of-physics sense, given the technology of the time.
But with the coming of the information age, starting with the invention of the telephone, all that has changed. Now most of our conversations are conducted electronically. This allows our most intimate conversations to be exposed without our knowledge. Cellular phone calls may be monitored by anyone with a radio. Electronic mail, sent across the Internet, is no more secure than cellular phone calls. Email is rapidly replacing postal mail, becoming the norm for everyone, not the novelty it was in the past.
Until recently, if the government wanted to violate the privacy of ordinary citizens, they had to expend a certain amount of expense and labor to intercept and steam open and read paper mail. Or they had to listen to and possibly transcribe spoken telephone conversation, at least before automatic voice recognition technology became available. This kind of labor-intensive monitoring was not practical on a large scale. It was only done in important cases when it seemed worthwhile. This is like catching one fish at a time, with a hook and line. Today, email can be routinely and automatically scanned for interesting keywords, on a vast scale, without detection. This is like driftnet fishing. And exponential growth in computer power is making the same thing possible with voice traffic.
Perhaps you think your email is legitimate enough that encryption is unwarranted. If you really are a law-abiding citizen with nothing to hide, then why don't you always send your paper mail on postcards? Why not submit to drug testing on demand? Why require a warrant for police searches of your house? Are you trying to hide something? If you hide your mail inside envelopes, does that mean you must be a subversive or a drug dealer, or maybe a paranoid nut? Do law-abiding citizens have any need to encrypt their email?
What if everyone believed that law-abiding citizens should use postcards for their mail? If a nonconformist tried to assert his privacy by using an envelope for his mail, it would draw suspicion. Perhaps the authorities would open his mail to see what he's hiding. Fortunately, we don't live in that kind of world, because everyone protects most of their mail with envelopes. So no one draws suspicion by asserting their privacy with an envelope. There's safety in numbers. Analogously, it would be nice if everyone routinely used encryption for all their email, innocent or not, so that no one drew suspicion by asserting their email privacy with encryption. Think of it as a form of solidarity.
Senate Bill 266, a 1991 omnibus anticrime bill, had an unsettling measure buried in it. If this non-binding resolution had become real law, it would have forced manufacturers of secure communications equipment to insert special "trap doors" in their products, so that the government could read anyone's encrypted messages. It reads, "It is the sense of Congress that providers of electronic communications services and manufacturers of electronic communications service equipment shall ensure that communications systems permit the government to obtain the plain text contents of voice, data, and other communications when appropriately authorized by law." It was this bill that led me to publish PGP electronically for free that year, shortly before the measure was defeated after vigorous protest by civil libertarians and industry groups.
The 1994 Communications Assistance for Law Enforcement Act (CALEA) mandated that phone companies install remote wiretapping ports into their central office digital switches, creating a new technology infrastructure for "point-and-click" wiretapping, so that federal agents no longer have to go out and attach alligator clips to phone lines. Now they will be able to sit in their headquarters in Washington and listen in on your phone calls. Of course, the law still requires a court order for a wiretap. But while technology infrastructures can persist for generations, laws and policies can change overnight. Once a communications infrastructure optimized for surveillance becomes entrenched, a shift in political conditions may lead to abuse of this new-found power. Political conditions may shift with the election of a new government, or perhaps more abruptly from the bombing of a federal building.
A year after the CALEA passed, the FBI disclosed plans to require the phone companies to build into their infrastructure the capacity to simultaneously wiretap 1 percent of all phone calls in all major U.S. cities. This would represent more than a thousandfold increase over previous levels in the number of phones that could be wiretapped. In previous years, there were only about a thousand court-ordered wiretaps in the United States per year, at the federal, state, and local levels combined. It's hard to see how the government could even employ enough judges to sign enough wiretap orders to wiretap 1 percent of all our phone calls, much less hire enough federal agents to sit and listen to all that traffic in real time. The only plausible way of processing that amount of traffic is a massive Orwellian application of automated voice recognition technology to sift through it all, searching for interesting keywords or searching for a particular speaker's voice. If the government doesn't find the target in the first 1 percent sample, the wiretaps can be shifted over to a different 1 percent until the target is found, or until everyone's phone line has been checked for subversive traffic. The FBI said they need this capacity to plan for the future. This plan sparked such outrage that it was defeated in Congress. But the mere fact that the FBI even asked for these broad powers is revealing of their agenda.
Advances in technology will not permit the maintenance of the status quo, as far as privacy is concerned. The status quo is unstable. If we do nothing, new technologies will give the government new automatic surveillance capabilities that Stalin could never have dreamed of. The only way to hold the line on privacy in the information age is strong cryptography.
You don't have to distrust the government to want to use cryptography. Your business can be wiretapped by business rivals, organized crime, or foreign governments. Several foreign governments, for example, admit to using their signals intelligence against companies from other countries to give their own corporations a competitive edge. Ironically, the United States government's restrictions on cryptography in the 1990's have weakened U.S. corporate defenses against foreign intelligence and organized crime.
The government knows what a pivotal role cryptography is destined to play in the power relationship with its people. In April 1993, the Clinton administration unveiled a bold new encryption policy initiative, which had been under development at the National Security Agency (NSA) since the start of the Bush administration. The centerpiece of this initiative was a government-built encryption device, called the Clipper chip, containing a new classified NSA encryption algorithm. The government tried to encourage private industry to design it into all their secure communication products, such as secure phones, secure faxes, and so on. AT&T put Clipper into its secure voice products. The catch: At the time of manufacture, each Clipper chip is loaded with its own unique key, and the government gets to keep a copy, placed in escrow. Not to worry, though–the government promises that they will use these keys to read your traffic only "when duly authorized by law." Of course, to make Clipper completely effective, the next logical step would be to outlaw other forms of cryptography.
The government initially claimed that using Clipper would be voluntary, that no one would be forced to use it instead of other types of cryptography. But the public reaction against the Clipper chip was strong, stronger than the government anticipated. The computer industry monolithically proclaimed its opposition to using Clipper. FBI director Louis Freeh responded to a question in a press conference in 1994 by saying that if Clipper failed to gain public support, and FBI wiretaps were shut out by non-government-controlled cryptography, his office would have no choice but to seek legislative relief. Later, in the aftermath of the Oklahoma City tragedy, Mr. Freeh testified before the Senate Judiciary Committee that public availability of strong cryptography must be curtailed by the government (although no one had suggested that cryptography was used by the bombers).
The government has a track record that does not inspire confidence that they will never abuse our civil liberties. The FBI's COINTELPRO program targeted groups that opposed government policies. They spied on the antiwar movement and the civil rights movement. They wiretapped the phone of Martin Luther King. Nixon had his enemies list. Then there was the Watergate mess. More recently, Congress has either attempted to or succeeded in passing laws curtailing our civil liberties on the Internet. Some elements of the Clinton White House collected confidential FBI files on Republican civil servants, conceivably for political exploitation. And some overzealous prosecutors have shown a willingness to go to the ends of the Earth in pursuit of exposing sexual indiscretions of political enemies. At no time in the past century has public distrust of the government been so broadly distributed across the political spectrum, as it is today.
Throughout the 1990s, I figured that if we want to resist this unsettling trend in the government to outlaw cryptography, one measure we can apply is to use cryptography as much as we can now while it's still legal. When use of strong cryptography becomes popular, it's harder for the government to criminalize it. Therefore, using PGP is good for preserving democracy. If privacy is outlawed, only outlaws will have privacy.
It appears that the deployment of PGP must have worked, along with years of steady public outcry and industry pressure to relax the export controls. In the closing months of 1999, the Clinton administration announced a radical shift in export policy for crypto technology. They essentially threw out the whole export control regime. Now, we are finally able to export strong cryptography, with no upper limits on strength. It has been a long struggle, but we have finally won, at least on the export control front in the US. Now we must continue our efforts to deploy strong crypto, to blunt the effects of increasing surveillance efforts on the Internet by various governments. And we still need to entrench our right to use it domestically over the objections of the FBI.
PGP empowers people to take their privacy into their own hands. There has been a growing social need for it. That's why I wrote it.
Philip R. Zimmermann
Boulder, Colorado
June 1991 (updated 1999)

[In the PDF of the researchers' paper, special thanks were called out to certain reviewers, including Jon Callas of PGP Corp, with whom so much debate has transpired. To mangle Shakespeare: What a tangled web-of-trust we weave!]


UPDATED 4/12/2010: Bruce Schneier's and Matt Blaze's commentary.

Thursday, February 25, 2010

Earth Shattering Attacks on Disk Encryption


Trusted Platform Modules (TPMs) were the last hope of truly secure distributed computing endpoints. The idea behind TPMs is that they are safe from physical inspection-- resistant to tampering-- but we now know that to no longer be true, thanks to some clever research by Christopher Tarnovsky (pictured at left).

Every disk encryption vendor on the planet tries to sell you the impossible: a product that, on one hand, they claim is impervious to physical access by an adversary, and-- at the same time, on the other hand-- a product they conveniently claim is no better than anything else at preventing data loss once an adversary gains physical access. What? Does that even make sense?

Of course it doesn't make sense. It makes dollar$.

Yeah, for the great majority of laptop thefts disk encryption probably isn't even necessary, since the thieves are just after hardware, but I never advise anyone to risk that. You never know when that casual thief wants to make a quick buck off of a hardware sale to a smart, conniving criminal on eBay, for instance, who just might be equipped with the knowledge and intent to steal the data off of the device.

Look at what I wrote back on October 3, 2007 when dealing with PGP Corp's failure to disclose a dangerous encryption bypass feature:
True. It's not a "backdoor" in the sense of 3 letter agencies' wiretapping via a mathematical-cryptographic hole in the algorithm used for either session key generation or actual data encryption, but how can a PGP WDE customer truly disable this "bypass" feature? As long as the function call to attempt the bypass exists in the boot guard's code, then the feature is "enabled", from my point of view. It may go unused, but it may also be maliciously used in the context of a sophisticated attack to steal a device with higher valued data contained within it:
  1. Trojan Horse prompts user for passphrase (remember, PGP WDE synchronizes with Windows passwords for users, so there are plenty of opportunities to make a semi-realistic user authentication dialog).
  2. Trojan Horse adds bypass by unlocking the master volume key with the user's passphrase.
  3. [Optional] Trojan Horse maliciously alters boot guard to disable the RemBypass() feature. [NOTE: If this were to happen, it would be a permanent bypass, not a one-time-use bypass. Will PGP WDE customers have to rely on their users to notice that their installation of Windows boots without the Boot Guard prompting them? Previous experience should tell us that users will either: A) not notice, or B) not complain.]
  4. Laptop is stolen.
I just described the premise behind the Evil Maid attack years before Joanna Rutkowska coined the term.

Then read the cop-out response by Marc Briceno, Director of Product Management at PGP Corp:
No security product on the market today can protect you if the underlying computer has been compromised by malware with root level administrative privileges. That said, there exists well-understood common sense defenses against “Cold Boot,” “Stoned Boot,” “Evil Maid,” and many other attacks yet to be named and publicized.
You can read his full response, but the gist is that he never admits his product has a flawed assumption: that nobody would ever manipulate the PGP BootGuard-- the software that must remain plaintext on the encrypted drive (if it weren't plaintext, the CPU couldn't read the instructions and execute the decryption routine). At least Microsoft's BitLocker, when used with a TPM, did not have this vulnerability, although we'll have to see if breaking TPMs remains a feat only a handful of experts, like Tarnovsky, can accomplish. If it becomes a repeatable task achievable with inexpensive tools, then BitLocker in TPM mode will be reduced to the lower security status of PGP Whole Disk Encryption.


So which is it, vendors? Are you still letting your marketing people sell encryption products with powerpoint slides that read "Keeps your data safe when your device is lost or stolen", while your technical security people say "Well, about that cold boot or evil maid attack ... well ... all bets are off when you lose physical access to the device"?

It's time for vendors to get their stories straight. Stop selling your products to people who are worried about the physical theft of their devices, unless you make it very clear that a dedicated and resourceful adversary may be able to defeat your product-- disk encryption is only good at keeping the casual thieves out.

Wednesday, December 2, 2009

The Reality of Evil Maids

There have been many attacks on whole disk encryption recently:
  1. Cold Boot attacks in which keys hang around in memory a lot longer than many thought, demonstrating how information flow may be more important to watch than many acknowledge.
  2. Stoned Boot attacks in which a rootkit is loaded into memory as part of the booting process, tampering with system level things, like, say, whole disk encryption keys.
  3. Evil Maid attacks in which Joanna Rutkowska of Invisible Things Lab suggests tinkering with the plaintext boot loader. Why is it plain text if the drive is encrypted? Because the CPU has to be able to execute it, duh. So, it's right there for tampering. Funny thing: I suggested tampering with the boot loader as a way to extract keys way back in October of 2007 when debating Jon Callas of PGP over their undocumented encryption bypass feature, so I guess that means I am the original author of the Evil Maid attack concept, huh?

About all of these attacks, Schneier recently said:
This attack exploits the same basic vulnerability as the "Cold Boot" attack from last year, and the "Stoned Boot" attack from earlier this year, and there's no real defense to this sort of thing. As soon as you give up physical control of your computer, all bets are off.
"As soon as you give up physical control of your computer, all bets are off"??? Isn't that the point of these encryption vendors (Schneier is on the technical advisory board of PGP Corp-- he maybe doesn't add that disclaimer plainly enough). Sure enough, that's the opposite of what PGP Corp claims: "Data remains secure on lost devices." Somebody better correct those sales & marketing people to update their powerpoint slides and website promotions.

To put this plainly: if you still believe that whole disk encryption software is going to keep a skilled or determined adversary out of your data, you are sadly misled. We're no longer talking about 3 letter government agencies with large sums of tax dollars to throw at the problem-- we're talking about script kiddies being able to pull this off. (We may not quite be at the point where script kiddies can do it, but we're getting very close.)

Whole Disk Encryption software will only stop a thief who is interested in the hardware from accessing your data, and that thief may not even be smart enough to know how to take a hard drive out of a laptop and plug it into another computer in the first place. You had better hope that thief won't sell it on eBay to somebody who is more interested in the data than the hardware.

Whole Disk Encryption fails to deliver what it claims. If you want safe data, you need to keep as little of it as possible on any mobile devices that are easily lost or stolen. Don't rely on magic crypto fairy dust and don't trust anyone who spouts the odds or computation time required to compute a decryption key. It's not about the math; it's about the keys on the endpoints.

Trusted Platform Modules (TPMs) (like what Vista can be configured to use) hold out some hope, assuming that nobody can find a way to extract the keys out of them by spoofing a trusted bootloader. After all, a TPM is basically just a blackbox: you give it an input (a binary image of a trusted bootloader, for example) and it gives you an output (an encryption key). Since TPMs are accessible over a system bus, which is shared among all components, it seems plausible that a malicious device or even a device driver could be used either to make a copy of the key as it travels back across the system bus, or to feed the TPM the proper input (not by actually booting the trusted bootloader, but by booting an alternative bootloader and then presenting the correct binary image) to retrieve the output it wants.
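
A toy model makes the blackbox framing concrete. This is a sketch of the sealed-key idea only-- real TPMs use PCR registers and attestation protocols far beyond this-- but it shows why a replayed "correct" measurement releases the key just the same:

    import hashlib, os

    class ToyTPM:
        def __init__(self):
            self._sealed = {}

        def seal(self, name: str, measurement: bytes, key: bytes) -> None:
            self._sealed[name] = (hashlib.sha256(measurement).hexdigest(), key)

        def unseal(self, name: str, measurement: bytes) -> bytes:
            expected, key = self._sealed[name]
            if hashlib.sha256(measurement).hexdigest() != expected:
                raise PermissionError("measurement mismatch: key not released")
            return key

    trusted_bootloader = b"\x90" * 64      # stand-in for the real binary image
    tpm = ToyTPM()
    tpm.seal("disk", trusted_bootloader, os.urandom(32))

    tpm.unseal("disk", trusted_bootloader) # legitimate boot: key released
    # An attacker who can REPLAY the trusted image (the point above) also
    # gets the key-- the check binds the key to bytes, not to intent.
    tpm.unseal("disk", trusted_bootloader)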

Monday, August 31, 2009

Social Engineering at the Age of 4

I guess maybe I was born to be a security-minded person, if "fate" or "nurture" deemed thus. I was just recollecting this morning how, at the age of 4, I successfully pulled off my first social engineering experiment.

I noticed on Day 1 of pre-school an example of what I often refer to as "opt-in" security. Parents completed a form with a checkbox that indicated whether or not their pre-schoolers were required to take a nap. Then, at nap time, the teachers asked the children whose parents didn't require them to take a nap to raise their hands. Those children were then separated from the rest, who had to lie on mats with the lights out. By Day 2, I realized I could simply raise my hand-- albeit a lie-- and skip nap time to play the whole day. From Day 2 on, I always raised my hand.

We, as curious humans, learn about security policies from some of the most common sources-- so common we may even be oblivious to them.

Thursday, May 28, 2009

More Fake Security

The uninstallation program for Symantec Anti-Virus requires an administrator password that is utterly trivial to bypass. This probably isn't new for a lot of people. I always figured this was weak under the hood, like the password was stored in plaintext in a configuration file or registry key, or stored as a hash output of the password that any admin could overwrite with their own hash. But it turns out it's even easier than that. The smart developers at Symantec were thoughtful enough to have a configuration switch to turn off that pesky password prompt altogether. Why bother replacing a hash or reading in a plaintext value when you can just flip a bit to disable the whole thing?

Just flip the bit from 1 to 0 on the registry value called UseVPUninstallPassword at HKEY_LOCAL_MACHINE\SOFTWARE\INTEL\LANDesk\VirusProtect6\CurrentVersion\Administrator Only\Security. Then re-run the uninstall program.
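
In Python, the whole "attack" is a few lines (a sketch assuming the value is a DWORD; run from an elevated prompt on a machine with the product installed):

    import winreg

    KEY_PATH = (r"SOFTWARE\INTEL\LANDesk\VirusProtect6"
                r"\CurrentVersion\Administrator Only\Security")

    # Flip UseVPUninstallPassword from 1 to 0 to disable the password prompt.
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "UseVPUninstallPassword", 0, winreg.REG_DWORD, 0)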

I am aware of many large organizations that provide admin rights to their employees on their laptops, but use this setting as a way to prevent them from uninstalling their Symantec security products. Security practitioners worth their salt will tell you that admin rights = game over. This was a gimmick of a feature to begin with. What's worse is that surely at least one developer at Symantec knew that before the code was committed into the product, but security vendors have to sell out and tell you that perpetual motion is possible so you'll spend money with them. These types of features demonstrate the irresponsibility of vendors (Symantec) who build them.

And if you don't think a user with admin rights will do this, how trivial would it be for drive-by malware executed by that user to do this? Very trivial.

Just another example on the pile of examples that security features do not equal security.

Friday, May 15, 2009

PCI & Content Delivery Networks

Here's an interesting, but commonly overlooked, little security nugget.

If you are running an e-commerce application and rely on a Content Delivery Network (CDN), such as Akamai, beware how your customers' SSL tunnels start and stop.

I came across a scenario in which an organization-- one that has passed several PCI Reports on Compliance (RoCs)-- used Akamai as a redirect for their www.[companyname].com e-commerce site. Akamai does their impressive geographical caching stuff by owning the "www" DNS record and responding with an IP based on where you are. They do great work. The organization hosts the web, application, and database servers in an expensive, state-of-the-art, top-five hosting facility. Since it's known that credit card data passes through the web, app, and database tiers, the organization has PCI binding language in their contract with the hosting provider, which requires the hosting provider to do the usual litany to protect credit cards (firewalls, IDS, biometrics-- must have a note from your mom before you can set foot on-site, that sort of thing). And the organization with the goods follows all appropriate PCI controls, obviously, as they have passed their RoC year after year since the origin of PCI.

Funny thing ... it wasn't until some questions came up about how SSL (TLS) really works under the hood that a big, bad hole was discovered. One of the IT managers was pursuing the concept of Extended Validation certs (even though EV certs are a stupid concept), and an "engineer" (I use that term laughingly) pointed out that if they purchased the fancy certs and put them on the webservers at the hosting provider, they would fail to turn their customers' address bars green. Why? Because of the content delivery network.

You see, SSL/TLS sits below HTTP in the OSI model. That means a customer who wants to start an encrypted tunnel with "www.somecompany.com" must first look up the DNS entry, then attempt SSL/TLS with the resulting address over TCP port 443. This is important: the browser does NOT say "Hey, I want 'www.somecompany.com', is that you? Okay ... NOW ... let's exchange keys and start a tunnel."

In this case, as Akamai hosts the "www" record for "somecompany.com", Akamai must be ready for HTTPS calls into their service. "But wait ... " (you're thinking) " ... Akamai just delivers static content like images or resource files. How can they handle the unique and dynamic behaviors of the application which is required on the other end of the SSL/TLS tunnel?" The answer to your question is: They can't.

On the one hand, the CDN could refuse to accept traffic on port 443 or just refuse to handshake SSL/TLS requests. But that would break transactions to your "https://www.somecompany.com" URLs.

On the other hand, the CDN could accept your customers' HTTPS requests, then serve as a proxy between your customers and your hosting provider's web servers. The entire transaction could be encrypted using HTTPS. But the problem is that the CDN must act as a termination point for your customers' requests-- they must DECRYPT those requests. Then they pass those messages back to the hosting provider using a new-- and separate-- HTTPS tunnel.

Guess which option CDNs choose? That's right-- they don't choose to break customers' HTTPS attempts. They proxy them. And how did this particular organization figure that out? Well, because the EV-SSL cert on their web server is never presented to their customers. The address bar stays the boring white color, because the customer sees the CDN's certificate, not the organization's.
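
You can verify this from any client with a few lines of Python (the hostname here is the post's hypothetical one):

    import socket, ssl

    host = "www.somecompany.com"
    ctx = ssl.create_default_context()
    with ctx.wrap_socket(socket.create_connection((host, 443)),
                         server_hostname=host) as s:
        cert = s.getpeercert()
    print("subject:", cert["subject"])
    print("issuer: ", cert["issuer"])
    # If the subject/issuer name the CDN rather than your organization,
    # the TLS tunnel terminates at the CDN-- exactly the proxy case above.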

Why is this relevant? Because a malicious CDN-- or perhaps a malicious employee at a CDN-- could eavesdrop on its HTTPS proxies and save copies of your customers' credit card numbers (or any other confidential information) for its own benefit. The CDN gets to see the messages between the clients and the servers, even if only for an instant-- the classic man-in-the-middle position. An instant is long enough for a breach to occur.

The moral of this story? 1) Learn how the OSI model works. 2) Don't overlook anything. 3) PCI (or any other compliance regulation for that matter) is far from perfect.

Thursday, June 26, 2008

Breaking Cisco VPN Policy

I am surprised how often I hear an organization operate under the belief that it can really, truly control what a remote client does under any situation. Here is today's lesson on how you can never know what computer is on the other end of a Cisco VPN tunnel or how it is behaving, thanks to more "opt-in security".


Step 1: Break the misconception that Cisco VPN concentrators authenticate which computer is on the other end of the tunnel.

Cisco VPNs authenticate people, not computers. When the connection is initiated from a client, sure the client passes a "group password" before the user specifies her credentials, but there's nothing that really restricts that group password to your organization's computers.

Consider this: any user who runs the Cisco VPN Client has to have at least read access to the VPN profile (the .pcf files stored in the Program Files --> Cisco Systems --> VPN --> Profiles folders on Windows systems). If the user has READ access, any malware unintentionally launched as that user could easily steal the contents of the file ... and the "encrypted" copy of the group password stored within it. Or a Cisco VPN client can be downloaded from the web and the .pcf VPN profile imported into it. At that point, it's no longer certain that the connection is coming from one of your computers.


Step 2: Break the misconception that the client is even running the platform you expect.

So, since any user has READ access to the .pcf VPN profiles, they can open the config file in a text editor like Notepad, peruse the name-value pairs for the "enc_GroupPwd" value, copy everything after the equal sign ("="), and paste it into a Cisco VPN client password decoder script like this one. A novice, in less than a minute, can have everything she needs to make VPN connections not only from an unexpected device, but also from an unexpected platform. Not to mention the group password is not really all that secret.
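
The first half of that recipe is trivial to script. A minimal sketch (the profile path is hypothetical; the actual decoding is left to the published decoder scripts linked above):

    from pathlib import Path

    def read_enc_group_pwd(pcf_path: str) -> str:
        # .pcf profiles are plain name=value text; find the enc_GroupPwd entry.
        for line in Path(pcf_path).read_text().splitlines():
            name, _, value = line.partition("=")
            if name.strip().lower() == "enc_grouppwd":
                return value.strip()
        raise ValueError("no enc_GroupPwd entry found")

    print(read_enc_group_pwd(
        r"C:\Program Files\Cisco Systems\VPN Client\Profiles\corp.pcf"))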

Cisco VPN encrypted group passwords, like any encrypted data that an automated system (i.e. software) needs to unlock without interrogating the user for a key value only a human knows, must be stored in a way the software can read. Even though the name/value pair in Cisco profiles is labeled as an "encrypted" group password, the software needs to be able to decrypt it to use it when establishing VPN tunnels, which means the group password is also read-accessible to any reverse engineer (hence this source code in C). So it is now trivial to decode a Cisco group password. This is not an attack on crypto; this is an attack on key management, coupled with a misunderstanding of how compiled programs work.

Now that a "decrypted" copy of the group password is known, the open source "vpnc" package can pretend to be the same as the commercial Cisco version. Here's how simple Ubuntu makes configuring vpnc to emulate a Cisco client:


A Cisco VPN concentrator (or any other server in a client-server relationship) cannot know for sure what the remote client's platform really is. Any changes Cisco makes to its client to differentiate a "Cisco" client by behavior from, say, a "vpnc" client can be circumvented by doing a little reverse engineering and coding the emulation of the new behavior into the "vpnc" package. An important thing to keep in mind is that compiled applications, although obscure, are not obfuscated beyond recognition. A binary of a program is not like a "ciphertext" in the crypto world: it has to be "plaintext" to the OS & CPU that execute it. If it were not readable, then the instructions could not be loaded into the CPU for execution. So any controls based on hiding things in compiled code are fruitless. Sure, there will be a window of time from when the changes are deployed in the official client to when they are emulated by a hacked client, but reverse engineering and debugging tools (such as bindiff and OllyDbg) are getting more and more sophisticated, reducing that time barrier.


Step 3: Break the misconception that you can remotely control a client's behavior.

A "client" is just that ... it's a "client". It's remote. You cannot physically touch it. Any commands you send to it have to be accepted by it before they are executed. Case in point with the Cisco VPN client is the notion that "network security" people try to perpetuate: "split-tunneling is bad, so let's disable it." Network security people don't like split tunnels or split routes because they view it as a way of bridging a remote client's local network and the organization's internal network. However, it's futile to get all worked up about it. If you trust clients to remote in, then you cannot control how they choose to route packets (though you can pretend that what I show below doesn't really exist, I guess.)


In the same Ubuntu screen shot above, there's a quick and easy way to implement split-tunnels. It's the "Only use VPN connection for these addresses" check box. Check the box and specify the IP ranges. Voila! You've got a split tunnel. Don't want to route the Internet through your company's slow network or cumbersome filters? Check the box. Want to access a local file share while connected to an app your organization refuses to web-enable? Check the box. You get the idea. This is an excellent example of how many "network security" people believe they can control a remote client, yet as you can see, the only way to continue the misconception is to ignore distributed computing principles.

Doing what I described above is certainly not "new" information-- clearly, because there are now GUIs to do it (so it's obviously very mature). However, the principles are still not well known, and we have vendors like Cisco perpetuating the notion that you can remotely control a client by upping the ante. Many organizations' network security people are considering the deployment of NAC ("network access control") with VPN. Microsoft has had an implementation of it (they call it NAP, for "network access protection") for years. The problem is, it's based on the same false sense of "opt-in security" as split tunnels. Let's look at an analogy ...
Physical Security Guard: "Do you have any weapons?"
Visitor: [shoves 9mm handgun further into pants] "No, of course not."
Guard: "Are you on drugs?"
Visitor: [double-checks the dime bag hasn't fallen out of his jacket pocket] "No, I never touch the stuff."
Guard: "Do you have any ill intent?"
Visitor: [pats the "Goodbye cruel world" letter in his front pocket] "Absolutely not!"
How is that any different from this?
Server: "Are you running a platform I expect?"
Client: "Of course" (it says from the unexpected platform)
Server: "Are you patched?"
Client: "Of course" (why does that even matter if I'm on a different platform from the patches?)Server: "Are you running AV?"
Client: "Of course" (your AV doesn't even run on my platform)
The answer: it's not any different. Both are fundamentally flawed ideas.

So, to refute implementation-specific objections, there are two key ways for a project such as vpnc to choose not to "opt in" if it so desires. Both involve lying to the server (VPN concentrator):
  1. Take the "inspection" program the server provides the client to prove the client is "safe" and execute the inspection program in a spoofed or virtual environment. When the inspection program looks for evidence of Microsoft Windows + latest patches + AV, spoof the evidence they exist.
  2. Reverse engineer the "everything is OK, let 'em on the network" message the inspection program sends the server, then don't even bother executing the inspection program, just cut to the chase and send the all-clear message by itself.
Sure, there may be some nuances that will make this slightly more difficult, such as adding some dynamic or more involved deterministic logic into the "inspection" program, but the more sophisticated the checks are, the more likely the whole process will break and generate false positives for legit users who are following the rules. The more false positives, the less likely customers will deploy the flawed technology.


...


So to recap: you cannot control a client or truly know anything about it. It's just not possible, so security practitioners should set policies that account for the fact that you cannot control a remote client. For a great in-depth review of all of these principles (with tons more examples), I suggest picking up a copy of Greg Hoglund and Gary McGraw's book "Exploiting Software: How to Break Code".

Friday, February 1, 2008

WiKID soft tokens

I promised Nick Owen at WiKID Systems a response, and it is long overdue. Nick commented on my "soft tokens aren't tokens at all" post:
Greetings. I too have posted a response on my blog. It just points out that our software tokens use public key encryption and not a symmetric, seed-based system. This pushes the security to the initial validation/registration system where admins can make some choices about trade-offs.

Second, I submit that any device with malware on it that successfully connects to the network is bad. So you're better off saving money on tokens and spending it on anti-malware solutions, perhaps at the gateway, defense-in-depth and all.

Third, I point out that our PC tokens provide https mutual authentication, so if you are confident in your anti-malware systems, and are concerned about MITM attacks at the network, which are increasingly likely for a number of reasons, you should consider https mutual auth in your two-factor thinking.

Here's the whole thing:
On the security of software tokens for two-factor authentication
and thanks for stimulating some conversation!

Here is their whitepaper on their soft token authentication system.

Unfortunately, I would like to point out that WiKID is first and foremost vulnerable to the same sort of session-stealing malware that Trojan.Silentbanker uses. It doesn't matter how strong your authentication system is when you have a large pile of untrustworthy software in between the user and the server-side application (e.g. browser, OS, third party applications, and all the malware that goes with them). I'll repeat the theme: it's time to start authenticating the transactions, not the user sessions. I went into a little of what that might look like.

Nick is aware of that, which is why he said point number two above. But here's the real problem: the web is designed for dynamic code to be pulled down side-by-side with general data, acquired from multiple sources and run in the same security/trust context. Since our browsers don't know which is information and which is instructions until runtime AND since the instructions are dynamic (meaning they may not be there for the next site visit), how is it NOT possible for malware to live in the browser? I submit that it is a wiser choice to NOT trust your clients' browsers, their input into your application, etc., than to trust that a one time password credential really did get input by the proper human on the other end of the software pile. I suggest that organizations should spend resources being able to detect and recover from security failures (out of band mechanisms come to mind-- a good old fashioned phone call to confirm that $5,000 transaction to a foreign national, perhaps?), rather than assuming the money they invested in some new one time password mechanism exempts them from any such problems.
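
What would transaction authentication look like? A minimal sketch, with all names hypothetical: a key provisioned out-of-band to a trusted device signs the transaction details the user actually confirmed, so session-stealing malware in the browser can't silently change the payee or the amount.

    import hashlib, hmac, json

    DEVICE_KEY = b"provisioned-out-of-band"   # never touches the browser

    def sign_transaction(txn: dict) -> str:
        canonical = json.dumps(txn, sort_keys=True).encode()
        return hmac.new(DEVICE_KEY, canonical, hashlib.sha256).hexdigest()

    txn = {"payee": "ACME Corp", "amount": "5000.00", "currency": "USD"}
    tag = sign_transaction(txn)               # computed on the trusted device

    # Server side: recompute over the transaction it is about to execute.
    assert hmac.compare_digest(tag, sign_transaction(txn))

    txn["payee"] = "Attacker LLC"             # malware rewrites the payee...
    assert not hmac.compare_digest(tag, sign_transaction(txn))  # ...and is caught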

Microsoft published a document titled "10 Immutable Laws of Security" (never mind for now that they are neither laws, nor immutable, nor even concise), and point number one is entirely relevant: "Law #1: If a bad guy can persuade you to run his program on your computer, it's not your computer anymore". How does JavaScript, a Turing-complete programming language, fall into that? If you completely disable script in your browser, most applications break. But if you allow it to run, behaviors you cannot control can run on your behalf. Taking Nick's advice, we should be spending all of our time and resources solving the code and data separation problem on the web, not implementing one time passwords (and I agree with him on that).



Second, I have a hard time calling WiKID a token-- not that it couldn't fit that definition-- it's just that it is a public key cryptography system. I have never referred to a PGP key pair as a token, nor have I heard anyone else do so. Likewise, I don't really ever hear anyone say "download this x509 token" ... instead they say "x509 certificate". Smart cards might be the saving-grace example that allows me to stretch my mind around the vocabulary: generally speaking, a smart card is a physical "token", and smart card implementations can have a PKC key pair. So I'll have to extend my personal schema, so to speak, but I guess I'll allow WiKID to fit into the "token" category (but just barely).

The x509 cert example is a great analogy, because under the hood that's basically how WiKID works. Just like an HTTPS session, it swaps public keys (which allows for the mutual authentication) and then a session key is created for challenge/response-- the readout of the "soft token" that the user places into the password form, for example.
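
A toy of that general shape, for the curious-- this is NOT WiKID's actual protocol (and it uses RSA via the pyca/cryptography package in place of NTRU), just the pattern of public keys swapped up front and a one-time value pushed for the user to type:

    import secrets
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # Registration: the soft token generates a keypair; the server keeps the
    # public half (this is where the validation/registration trade-offs live).
    token_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    server_copy_of_pub = token_key.public_key()

    # Login: the server encrypts a one-time passcode to the registered token.
    oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    otp = f"{secrets.randbelow(10**6):06d}".encode()
    challenge = server_copy_of_pub.encrypt(otp, oaep)

    # Only the real token can recover and display the passcode for the user.
    print(token_key.decrypt(challenge, oaep).decode())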


There is one concerning issue with WiKID: it uses a relatively unknown public key encryption algorithm called NTRU. NTRU aims to run in low-resource environments like mobile phones, which is undoubtedly why WiKID employs it. NTRU is also patented by NTRU Cryptosystems, Inc. (the patent may have business/political ramifications similar to PGP's original use of the patented IDEA algorithm). However, when choosing an encryption algorithm, it is best to use one that has withstood significant peer review. Otherwise we end up leaning on the very "security by obscurity" that Kerckhoffs' Principle warns against, and the first decent attack will reduce our security to rubble. Googling for "NTRU cryptanalysis" returns around 3,000 hits; Googling for "RSA cryptanalysis" returns around 186,000-- nearly two orders of magnitude more. This is not the nail in WiKID's coffin, but it could be betting the company on Betamax. NTRU is undoubtedly less popular than, say, RSA or Elliptic Curve. In most aspects of life, backing the underdog can be a great time. Doing it in crypto, however, may not be a good idea.

Before somebody reads the above paragraph and runs to an extreme in either direction, please note my point: the workhorse of WiKID, the NTRU encryption algorithm, has an unknown security value. One could argue that RSA likewise has only a mostly known security value, but you decide: "mostly known" or "unknown"? There may not be any problems in NTRU, and it may be perfectly safe and secure to use. Conversely, using it may be the worst decision ever. That's what peer review helps us decide.


...
To sum up ... WiKID is cheap, open source, interesting, and ... still vulnerable to malware problems. And don't forget: you have to choose to use a less popular encryption algorithm.

Wednesday, January 30, 2008

Two Words: Code Quality

Dr. Brian Chess, Chief Scientist at Fortify and static analysis guru, has a couple very interesting posts on the company blog: one on the U.S. court system paying attention to source code quality of breathalyzers, and one on the quality of source code in closed systems (Nintendo Wii and Apple's iPhone).

It appears that a custom-crafted Zelda saved-game file can exploit a buffer overflow in the game, allowing the execution of any code you want to throw at the console-- step one of software piracy on the Wii. It further illustrates that you should NEVER trust user input-- no matter how unlikely you think untrustworthy input will be (ahem, saved games on a closed video game system).
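
To illustrate the habit (the actual Zelda save format is beside the point, so every name and size below is hypothetical), here is a Python sketch that treats a length field in an attacker-supplied file as hostile until proven otherwise:

import struct

MAX_NAME_LEN = 32  # what our hypothetical parser actually allocates

def load_save(data):
    if len(data) < 4:
        raise ValueError("truncated save file")
    (name_len,) = struct.unpack_from("<I", data, 0)
    # The attacker controls name_len. A C parser that memcpy()s name_len
    # bytes into a fixed 32-byte buffer is exactly this class of overflow.
    if name_len > MAX_NAME_LEN or 4 + name_len > len(data):
        raise ValueError("untrustworthy length field in save file")
    return data[4:4 + name_len].decode("ascii", errors="replace")

print(load_save(b"\x05\x00\x00\x00LINKS"))  # "LINKS"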

About 1.5 million iPhones are unaccounted for, suggesting they've been hijacked to be set free of AT&T contracts. And that further illustrates that controlling a system from a distance is impossible.

Dr. Chess' comments about the Breathalyzers are choice as well:
"One of the teams used Fortify to analyze the code, and lo-and-behold, they found a buffer overflow vulnerability! This raises the possibility that if you mix just the right cocktail at just the right time, you could craft an exploit. (Dream on.) The real lesson here is that our legal system is waking up to the importance of code. If the code isn’t trustworthy, the outcome isn’t trustworthy either. (Electronic voting machine vendors, you might want to read that last line again.)"

Tuesday, January 15, 2008

Targeted Bank Malware

There have been a lot of interesting things going on with malware these days, but this one sits at the top of the list (for the next few hours, anyway ;). Symantec has a blog write-up on a specific trojan that targets over 400 of the most popular banks in the U.S. and abroad. From the post:
This Trojan downloads a configuration file that contains the domain names of over 400 banks. Not only are the usual large American banks targeted but banks in many other countries are also targeted, including France, Spain, Ireland, the UK, Finland, Turkey—the list goes on.
Targeted malware has some interesting economic ramifications. With signature-based anti-malware, the defenses only work if a signature exists (duh!). But here is the problem: will your very large (think yellow) anti-malware vendor really publish a signature to catch malware targeted at only your organization, especially considering that each signature has the possibility of falsely matching some other binary executable and causing a problem for another of their customers? No doubt anti-malware vendors have seen targeted malware and then specifically declined to publish a signature to all of their customers. They may have released a one-off signature for an individual organization, but the support risks are significant. From the vendor's perspective, it's better for everyone NOT to publish a signature unless the malware is widespread. Every signature adds overhead, even if minimal: it's one more entry to check when scanning each object. Even with a sorted database, each lookup costs logarithmic time at best, and even that minimal cost multiplied across a half-million signatures is expensive.
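
To put a toy number on that overhead, here is a sketch (the specimens and hashes are placeholders): even with a sorted signature database, every scanned object pays a logarithmic lookup cost, and that cost is multiplied across every file on every customer's machine:

import bisect, hashlib

# Placeholder database: sorted hashes of known-bad specimens.
signatures = sorted(hashlib.sha256(s).digest()
                    for s in (b"specimen-1", b"specimen-2", b"specimen-3"))

def is_known_bad(sample):
    digest = hashlib.sha256(sample).digest()
    i = bisect.bisect_left(signatures, digest)  # O(log n) per lookup
    return i < len(signatures) and signatures[i] == digest

print(is_known_bad(b"specimen-2"))  # True
print(is_known_bad(b"clean file"))  # False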

But that's not all that is interesting about "Trojan.Silentbanker" ...
Targeting over 400 banks ... and having the ability to circumvent two-factor authentication are just two of the features that push Trojan.Silentbanker into the limelight...

The ability of this Trojan to perform man-in-the-middle attacks on valid transactions is what is most worrying. The Trojan can intercept transactions that require two-factor authentication. It can then silently change the user-entered destination bank account details to the attacker's account details instead. Of course the Trojan ensures that the user does not notice this change by presenting the user with the details they expect to see, while all the time sending the bank the attacker's details instead. Since the user doesn’t notice anything wrong with the transaction, they will enter the second authentication password, in effect handing over their money to the attackers. The Trojan intercepts all of this traffic before it is encrypted, so even if the transaction takes place over SSL the attack is still valid. Unfortunately, we were unable to reproduce exactly such a transaction in the lab. However, through analysis of the Trojan's code it can be seen that this feature is available to the attackers.

The Trojan does not use this attack vector for all banks, however. It only uses this route when an easier route is not available. If a transaction can occur at the targeted bank using just a username and password then the Trojan will take that information, if a certificate is also required the Trojan can steal that too, if cookies are required the Trojan will steal those. In fact, even if the attacker is missing a piece of information to conduct a transaction, extra HTML can be added to the page to ask the user for that extra information.
If you understand how two-factor authentication really works (at least how banks are implementing it), then you already understand that it cannot stop fraud by an adversary in the middle (a less chauvinistic way to say "man in the middle"). Bruce Schneier has been preaching "the failures of two-factor authentication" for a long time. What's monumental about this piece of malware is that it is the first to execute well what the pundits have long predicted. Two-factor authentication, including PayPal's Security Key (courtesy of Verisign, a company not on my good list), is broken. SiteKey is broken (for the same reasons). What Schneier said in 2005 took until 2008 to materialize (at least publicly). This will not go down in history as an anomaly; it will go down as the first run at a sophisticated "engine" for MITM attacks on any web application that handles financial transactions.

Here's the real question to answer: how many fund-transfer transactions must be hijacked into bank accounts in foreign nations that don't extradite their criminals to the U.S. before we finally realize just how bad malware in the "Web 2.0" world (sorry, that's O'Reilly's name; I really mean the "XMLHttpRequest" world) can get?

It's about the transactions, folks. It's time to authenticate the transactions, not the users. (Schneier's been saying that, too.) Bank of America is closer-- yet farther away at the same time-- to getting this multi-factor authentication problem solved with the out-of-band multi-factor authentication in its "SafePass" service... BUT it's still completely vulnerable to this malware (I cannot state that this specimen actually implements a MITM against the BoA service, since I have not reversed a sample, but if it's not in version 1.0, it will be eventually). I wanted to write a rebuttal to the entry in Professor Mich Kabay's Network World column, but the authors of this trojan did a better job!

Now for the downfalls of SafePass, which uses SMS text messages to send one-time-use six-digit passcodes (think RSA SecurID tokens without the hardware) to a customer's mobile phone. It's nice to see the authentication happen out-of-band (not in the same communication channel as the HTTPS session); however, once the session is authenticated, the trojan can still create fraudulent transactions. SafePass could be improved by: 1) using a confidential communication channel (SMS text messages traverse the service provider's network in plaintext), and 2) requiring the customer to input an authentication code that validates transaction details sent along with the out-of-band token. Obviously you don't want to skip #1, or else you'll have a privacy issue when the details of the transaction are broadcast through the provider's network and over the air.

Suppose Eve can modify Alice's payment to Bob the merchant (the transaction's destination) via a MITM trojan like this one. Eve modifies the payment to go to Charlie (an account in the Cayman Islands). Suppose Alice is presented with a "Please validate the transaction details we have sent to your mobile phone and enter the validation code below" message. Alice receives the message and notices the payment is destined for Charlie. Since she doesn't intend to pay Charlie, she calls her bank to report the fraud (as directed in the transaction validation message). Of course, that pipe dream only works if SMS is a confidential channel [and if we could deliver a trustworthy confidential channel to a mobile phone, we would ALREADY have solved the trustworthy transaction problem-- they're the same problem] AND if we could find a way to make customers actually PAY ATTENTION to the validation message details.
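
Here is a minimal sketch of what authenticating the transaction could look like (my own construction, not anything BoA ships): derive the out-of-band confirmation code from the transaction details themselves, so that entering the code endorses one specific destination and amount, and a swapped destination invalidates the code:

import hmac, hashlib

def confirmation_code(key, dest_account, amount_cents, digits=6):
    # Bind the code to the destination and amount the bank actually received.
    msg = ("%s|%d" % (dest_account, amount_cents)).encode()
    mac = hmac.new(key, msg, hashlib.sha256).digest()
    return str(int.from_bytes(mac[:8], "big") % 10**digits).zfill(digits)

# The bank texts the details plus this code. If Eve rewrites the destination
# to Charlie, the code the bank expects no longer matches the one Alice was
# shown for the payment she intended to make to Bob.
print(confirmation_code(b"shared-enrollment-key", "Bob #1234", 500000))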

And of course, incoming SMS messages in the U.S. are charged to the recipient (what a stupid idea-- what happens when people start spamming over SMS like they do over SMTP?).


...

[Side commentary] Oh, and then there's the problem of telling your family members not to click on random e-cards (or anything else they weren't expecting), because it just got a little scarier out there. Imagine your retired parent clicks on some pretty animated something-or-other that launches some client-side script and installs Trojan.Silentbanker (or similar); the next time they check their retirement savings, their life's solvency is sent to a Cayman Islands account and liquidated faster than you can say "fraud". Does dear old Grandma or Grandpa have to go back to work? They just might-- or become a financial burden on their children and grandchildren. If that won't scare the older generation away from the unsafe web, what will?

Trust is a Simple Equation

[Begin rant]

OK. If security vendors don't get this simple equation, then we might as well all give up and give in...


If you don't know if a computer has been rooted or botted (apologies to the English grammar people-- computer security words are being invented at an ever-increasing rate), then you cannot use that same computer to find out if it has been rooted or botted. Let me say this slightly differently: If you don't know if a computer is worthy of trust (trustworthy), then you cannot trust it to answer correctly if you ask it if it is trustworthy.

It doesn't work in real life. It's stupid to think that a person you just met on the street is trustworthy enough to hold your life savings just because that person says "you can trust me" (or to hold anything else of value, for that matter). My father used to say "never trust a salesman who says the words trust me", because his life experience suggested they're lying most often when they say that (which may not be statistically significant, but it's relevant as an anecdote).

So why in the world would we EVER trust a piece of software running on a computer whose state is unknown-- whose TRUSTWORTHINESS is unknown-- to determine whether it (or any other system, for that matter) is trustworthy???

That's why many NAC implementations fail: they are opt-in security. They push down some little Java (platform-independent) program to determine trustworthiness-- presence of patches, AV, and so on. Of course, all it takes is a rootkit saying "nothing to see here, move along" to the NAC code. We've seen implementations of exactly that.
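
The failure mode fits in a few lines of Python-- a sketch, not any particular vendor's code:

# The posture check executes on the very machine whose honesty is in
# question. A rootkit that hooks or replaces this function simply returns
# whatever the gatekeeper wants to hear:
def nac_posture_check():
    return {"patched": True, "av_running": True, "rootkit_free": True}

# The NAC server cannot distinguish a truthful report from a forged one;
# the report's provenance is exactly the thing that is untrusted.
print(nac_posture_check())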

So why in the world is Trend Micro-- a company that should KNOW BETTER-- creating code that does just that? RUBotted is the DUMBEST, STUPIDEST, MOST [insert belittling adjective here] idea I have ever seen. It works by running code inside the same CPU, controlled by the same OS, that is already believed to be botted-- why else would you run the tool unless you already suspected its trustworthiness to be low?!?

This had to have been designed either by a marketing person or by a complete security amateur. It attempts to defy (or, more realistically, ignore) the laws of trust! How long will it be before bot code ships with the staple feature "hide command-and-control traffic from the likes of RUBotted"?


And then eWeek suggests that this will become a paid-for service or application?!? People, please don't pay money for snake oil-- or, in this case, perpetual motion machines.


This just defies nature. If you wouldn't do it in the "real world", don't do it in the "cyber world".



Now, if you could use systems you already know to be trustworthy (i.e., computers that you control and that you know an adversary does not) to monitor systems you are unsure about, then you may be able to make a valid assertion about the trustworthiness of some other system-- but you MUST have an external, third-party trusted system FIRST.
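
In sketch form (the path and contents below are hypothetical): record a baseline of known-good file hashes while the system is still trusted, store that baseline on the trusted monitoring box, and later verify content fetched over a channel the suspect OS cannot falsify:

import hashlib

baseline = {}  # lives on the trusted monitoring system, not the suspect

def record(path, content):
    # Capture hashes while the target is still known-good.
    baseline[path] = hashlib.sha256(content).hexdigest()

def verify(path, content):
    # 'content' must arrive via a channel the suspect OS cannot falsify,
    # e.g. the disk imaged offline and read by the trusted system.
    return baseline.get(path) == hashlib.sha256(content).hexdigest()

record("/bin/login", b"pristine binary bytes")
print(verify("/bin/login", b"pristine binary bytes"))   # True
print(verify("/bin/login", b"trojaned binary bytes"))   # False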


And don't forget that "trust" and "trustworthiness" are not the same.

[End Rant]

Wednesday, January 9, 2008

MBR Rootkits

There is a new flurry of malware floating around in the wild: boot record rootkits (a.k.a. "bootkits"). Yes, for those of you old enough to remember, infecting a Master Boot Record (MBR) is an ancient practice, but it's back.

There are several key details in these events that should be of interest.

As Michael Howard (of Microsoft Security Development Lifecycle fame) points out and Robert Hensing confirms, Windows Vista with BitLocker Drive Encryption installed (most specifically, with its use of the Trusted Platform Module (TPM), as I have discussed here many times) is immune to these types of attacks. It's not because the drive is encrypted; it's because the TPM validates the chain of trust, starting with the building blocks-- including the MBR.
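
The arithmetic behind that chain of trust is simple enough to sketch in Python (TPM 1.2 really does use SHA-1; the MBR contents below are a placeholder): each boot stage "extends" a Platform Configuration Register with a measurement of the next stage before handing over control, so a tampered MBR yields a different PCR value and sealed secrets such as BitLocker's volume key refuse to unseal:

import hashlib

def extend(pcr, measured_data):
    # TPM_Extend: PCR_new = SHA1(PCR_old || SHA1(measured_data))
    return hashlib.sha1(pcr + hashlib.sha1(measured_data).digest()).digest()

pcr = b"\x00" * 20      # PCRs start zeroed at power-on
mbr = b"placeholder for the real 512-byte MBR"
pcr = extend(pcr, mbr)  # firmware measures the MBR before executing it
# ...the MBR then measures the next stage, and so on up the chain.
print(pcr.hex())        # any change to the MBR changes this value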



But that's not the only interesting thing in this story. What's interesting is best found in Symantec's coverage on the latest MBR rootkit (Trojan.Mebroot) that has been recently found in the wild:
"Analysis of Trojan.Mebroot shows that its current code is at least partially copied from the original eEye BootRoot code. The kernel loader section, however, has been modified to load a custom designed stealth back door Trojan 467 KB in size, stored in the last sectors of the disk."
That's right: our friends at eEye-- who have been busy hacking the products we use to run our businesses in order to show us the emperor hath no clothes-- created the first proof-of-concept "bootkit":
"As we have seen, malicious code that modifies a system's MBR is not a new idea – notable research in the area of MBR-based rootkits was undertaken by Derek Soeder of eEye Digital Security in 2005. Soeder created “BootRoot”, a PoC (Proof-of-Concept) rootkit that targets the MBR."

Tuesday, December 11, 2007

OpenDNS - I think I like you

I think I really like OpenDNS. It's intelligent. It's closer to the problem than existing solutions. And it's free.


OpenDNS works by using Anycast to route your queries to the nearest of its DNS servers. But before it quickly hands back your response, it can optionally filter out unwanted content. OpenDNS partners with communities and service providers to maintain a database of adult-content and malicious websites. If you choose to opt in, each DNS query that matches a known bad site sends your browser to a customizable page that explains why the site is not allowed.

Now, privacy advocates are well aware that there is a potential data collection and use problem here. However, DNS queries are already a privacy risk, since an ISP can build quite the portfolio on you based on which names you ask it to resolve to numbers. OpenDNS can collect information about you, including statistics on DNS usage for the networks you manage, but that collection is off by default-- you have to opt into it as well. So, all things considered, privacy is well managed.

I really like this approach to filtering unwanted HTTP content because it completely prevents any connection between clients and offending servers. In fact, clients don't even get to learn who (if you'll allow me to personify servers for a moment with the term "who") the server is or where it lives. But what I like even more is that this service is simple. There are no complicated client software installs (that users or children can figure out how to disable), no distributed copies of offending-URL databases to replicate and synchronize, and no lexicons for users to tweak. It's lightweight.

All it takes is updating a DHCP server's DNS entries to point to 208.67.222.222 and 208.67.220.220 and checking a few boxes in an intuitive web administration console for which content should be filtered. For a home user, that's as easy as updating the DNS server fields in a home router-- and all current and future clients are ready to go. An enterprise could use this service as its DNS forwarders as well, and many larger customers do. A non-tech-savvy parent can turn on content filtering without the "my kids program the VCR" syndrome resulting in the kids bypassing the filters: setting an IP address for a DNS server doesn't stand out as a "net nanny" feature to kids who are left alone with the computer.
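
If you want to confirm that your queries really flow through OpenDNS, here is a quick Python sketch (assuming the third-party dnspython package; older versions spell resolve() as query()):

import dns.resolver  # pip install dnspython

r = dns.resolver.Resolver(configure=False)  # ignore the local resolver config
r.nameservers = ["208.67.222.222", "208.67.220.220"]
for rr in r.resolve("www.opendns.com", "A"):
    print(rr)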

Use OpenDNS
Okay, there have to be caveats, right? Here they are ...

If you're planning on using some third-party DNS service-- especially one that is free-- it had better perform well, and it had better be a service that you trust (DNS has been abused in the past to send people to malicious sites). Since their inception in July 2006, OpenDNS has serviced over 500 million DNS requests with a perfect uptime track record. And given their open, collaborative stance on issues like phishing (see phishtank.com), you'll want to trust them.

Any DNS miss (except some common typos, which get corrected) returns an OpenDNS web page that tries to "help" you find what you missed. The results look like re-branded Google results. Users taking links off the OpenDNS results page is how OpenDNS makes its revenue-- on a pay-per-click basis. That's how they keep the service free.

Dynamic IP addresses can mess up a home user's ability to keep content-filtering policies in check (this won't affect enterprises). But there are a number of ways to keep the policies in sync, including their DNS-O-Matic service. What I'd like to see added: native consumer-router support for dynamic IP address changes, so content-filtering policies stay in place no matter what the ISP does. [The Linksys WRT54G wireless router, for example, supports similar functions with TZO and DynDNS today-- it would be nice if OpenDNS were another choice in the drop-down menu.] If my neighbor enrolled in the service, it might be possible for me to inherit my neighbor's OpenDNS filtering policies if we share the same ISP and dynamic IP pool, but again, that's what the dynamic IP updating services are for.
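
For the curious, DNS-O-Matic speaks the familiar DynDNS-style update API. A hedged Python sketch follows (the endpoint and parameters are per my reading of their docs-- verify against the current documentation before relying on it):

import base64, urllib.request

def update_ip(user, password, new_ip):
    # "all.dnsomatic.com" asks DNS-O-Matic to fan the update out to every
    # service (OpenDNS included) configured on the account.
    url = ("https://updates.dnsomatic.com/nic/update"
           "?hostname=all.dnsomatic.com&myip=" + new_ip)
    req = urllib.request.Request(url)
    creds = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
    req.add_header("Authorization", "Basic " + creds)
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()  # "good <ip>" on success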

Enterprises that decide to use OpenDNS as their primary outgoing DNS resolvers must keep in mind that an offending internal user could simply specify a DNS server of his own preference-- one that lets him bypass the content filters. However, a quick and simple firewall policy (not some complicated DMZ rule) that screens all DNS traffic (UDP/TCP 53) except traffic destined for the OpenDNS servers (208.67.222.222 and 208.67.220.220) will quell that concern.

So the caveats really are not bad at all.

Since the company is a West Coast (SF) startup, and since the future seems bright for them as long as they can keep the revenue stream flowing, I imagine they'll be gobbled up by some larger fish [Google?].


So this Christmas, give the gift of safe.




...
This might seem like a blatant advertisement, but (number one) I rarely like a service well enough to advocate or recommend it and (number two) I am not financially affiliated with OpenDNS in any way.