you were hacked? want to prevent next one?

Talk, discussions and suggestions for the project itself or the forum and wiki. Not for discussing the project's goals.

you were hacked? want to prevent next one?

Postby NickP » Sat Aug 23, 2014 1:04 am

Good to see there's a group trying to maintain the valuable Truecrypt project. What's left of it. I myself do high assurance security engineering work, research, etc. Only kind that beats TLA's. Commercial work is NDA'd, but I post many designs & evaluations on Schneier's blog to help people learn *real* security. The problem you are having is luckily one I've covered a few dozen times: a combination of endpoint, network, and SCM security. Yours appears to be a distributed project with few resources, limited personnel for custom SCM work, and plenty of untrustworthy machines between you. So, my old methods might be too heavyweight for this project. I've modified them into three barebones solutions to your problem if you're interested.

High Assurance & Cheap, But Laborious

This is a variant of my old KVM-based MLS system that works very similarly to TinfoilChat. You essentially use three computers. The Receiver can be something cheap like a NetTop or hand-me-down PC. It just needs to display code people send you and do initial GPG checks. The Sender has a compiler, IDE, and signing mechanism. The code is typed by hand into the Sender. The source and/or binary are archived, signed, and sent to the third PC via one-way media. Such media can be a data diode (see Tinfoil's version), CD-R, or anything else where the one-way property is ultra simple to demonstrate. The third PC is internet connected, maybe your default PC. Development happens by signed code or messages exchanged among project members. Each member looks at the Receiver, verifies the code, ensures everyone has about the same version, and manually types it into the Sender. The Sender keeps the master copy. Now, issues like whitespace can keep the hashes/signatures from syncing. Code guidelines can help here, but it comes down to two main options: either work extra to ensure the documents are all exactly the same, releasing one thing; or each person releases their own copy, verifying the others are at least semantically equivalent, and the user downloads one of many releases at random. Extra security measures can be put into the Receiver or network systems to increase the availability of the system.
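The hash-sync check each member would run can be sketched like this (a minimal sketch with dummy stand-in files; in practice each archive is a member's hand-typed master copy):

```shell
# Dummy stand-ins for two members' hand-typed source archives.
printf 'int main(void) { return 0; }\n' > ours.tar
printf 'int main(void) { return 0; }\n' > theirs.tar

# Hash both copies; a single whitespace difference introduced while
# retyping would change the digest and break the sync.
h1=$(sha256sum ours.tar   | cut -d' ' -f1)
h2=$(sha256sum theirs.tar | cut -d' ' -f1)

if [ "$h1" = "$h2" ]; then
    echo "copies match"
else
    echo "copies diverged -- diff before signing" >&2
fi
```

If the digests diverge, diffing the two archives pinpoints the stray whitespace or typo before anyone signs.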

Medium Assurance and Cheap, Less Painful

Another option involves a trusted third party, likely a hosted server. The host might even be a dev, preferably the one best at secure configuration and with a good WAN. The server runs a hardened OS with a firewall configured to only accept authenticated, encrypted connections (e.g. SSH) from dev team members. The developers' machines, Windows or otherwise, go through a hardened proxy device that only communicates with the SSH server. The machines never get unrestricted access to the Internet. The proxy might be a stripped-down Linux or BSD. Firewall or VPN distros exist that might make this easier. Developers exclusively share source, documentation, and messages through the SSH proxies and server. The software installers or updates for development machines are hashed, verified, and checked on many (non-developer) machines before being put onto development machines. Critical information about them, such as hashes and behavior during testing, is shared among the team via the SSH channel. The code is transferred via one-way media.
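As an illustration of the server side, a fragment of the kind of sshd hardening meant here (assuming OpenSSH; user names and the address are placeholders, and a real config needs more than this):

```
# /etc/ssh/sshd_config fragment -- illustrative only
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
AllowUsers dev1 dev2 dev3        # only the dev team
ListenAddress 203.0.113.10       # the server's WAN interface (example address)
```

The firewall then drops everything except that SSH port, so the only path in is an authenticated, encrypted channel.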

Just remember that the goal is that the Trusted Computing Base of the solution, hardware to software, must be simple enough to estimate a level of confidence in the method. And the kind of attackers hitting you means mainstream COTS/FOSS configurations aren't good enough. You need Medium at the least, with High if you can stand it. You can create a hybrid, as well, where you do most of the development on Medium-rated systems and networking, but critical parts with a High-rated method. So, you'd have the Medium assurance build above with an extra dedicated PC where you manually typed the info in, made the hashes match, did the build, signed it, and released it onto one-way media. Everyone's release should agree. The Medium assurance portion is for rapid development & communication. The high assurance addition keeps the release key safe, eliminates most of your development TCB, is redundant for hardware, and is a check against malicious/unreliable project members.

Hope these tips point you all in the right direction as far as securing your development and release process.
NickP
 
Posts: 4
Joined: Sat Aug 23, 2014 12:28 am

Re: you were hacked? want to prevent next one?

Postby WaywardGeek » Mon Aug 25, 2014 8:23 am

TinfoilChat is cool. Thanks for the tip! Data diodes are a cool idea as well.

I like the redundancy in your "High Assurance" model. The main thing I take issue with is manually typing all the code everywhere, rather than relying on git and other tools to automate this. As you point out, we need to verify that the code base on each developer's machine has the same hash summary. Why not copy the code to our machines, and then verify the hash with git? Ideally we copy with DVD-R or some such media where we believe there is less chance of compromise than with a USB drive. Assuming we get that part right, how does manually typing in the data improve security?

You mentioned keeping the release key safe. Instead of relying only on a Windows executable signing key, we will create a file containing the hashes of each released installer and source tarball, and each of the Security Team members who have personally verified every change in the code base and the hashes in the file will sign it. While any single signature may be compromised and faked by an attacker, it hopefully would be difficult to compromise them all. Also, while an attacker might succeed in introducing a malicious flaw in our git repository, such a flaw will have to pass careful review by the security team members.
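That multi-signature hash file could look something like this (dummy stand-in artifacts; filenames and key IDs are hypothetical, and the gpg commands are shown as comments since they need real keys):

```shell
# Dummy stand-ins for the released installer and source tarball.
printf 'installer bytes\n' > CipherShed-Setup.exe
printf 'source bytes\n'    > ciphershed-src.tar.gz

# One file listing the hash of every release artifact.
sha256sum CipherShed-Setup.exe ciphershed-src.tar.gz > HASHES
cat HASHES

# Each Security Team member who reviewed the code adds a detached
# signature over the same file:
#   gpg --armor --detach-sign -u member1@example.org -o HASHES.sig.member1 HASHES
# Users verify *all* of the signatures, not just one:
#   gpg --verify HASHES.sig.member1 HASHES
```

Faking the release then requires compromising every member's key rather than any single one.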

We are relying on git and multiple manual reviews for a lot of our security. Assuming git is bug-free (haha!), it should be difficult to compromise the code base, since we compare commit hashes. I think someone with control over the shared repository could fool us into accepting a malicious commit. However, it still has to pass review, and all commits to the develop branch have to be signed, so at least we can track down the developer who allowed the malicious commit into the main development branch.
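The commit-hash comparison can be sketched with two throwaway repos (a toy demo; signature checks are left as comments because they need the signer's real public key):

```shell
# Two clones agreeing on a commit hash means they agree on the entire
# history behind it (to the extent SHA-1 still holds up).
git -c init.defaultBranch=main init -q upstream
git -C upstream -c user.name=dev -c user.email=dev@example.org \
    commit -q --allow-empty -m "initial"
git clone -q upstream clone

a=$(git -C upstream rev-parse HEAD)
b=$(git -C clone    rev-parse HEAD)
[ "$a" = "$b" ] && echo "clones agree"

# With signed commits, each developer additionally checks the tip:
#   git verify-commit "$a"
```

The same comparison across every developer's machine is what catches a tampered shared repository.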

I worry about other subtle hacks. Exploiting bugs in git is not the only attack vector. For example, we have bitmaps that we display. No manual review is going to uncover a malicious .bmp file that somehow executes arbitrary code on the user's machine when displayed, by exploiting some buffer overflow somewhere. I doubt such a bug exists in the .bmp display code, but that's just one of many possibilities. There could be a compiler bug in gcc or Visual Studio that causes completely safe-looking code to compile to something quite nasty.

I also worry about a compromised OS. We've suspected for a couple of decades that the NSA has back doors in Windows. Who is to say that they can't force Microsoft to detect whenever a binary signed by CipherShed is run, capture all events to the app, and send them encoded with some sort of steganography in the whole boatload of crap that Windows sends to Microsoft from your PC over https?

It would be soooo much easier if we wanted to sell comfort rather than real security... How about let's all have a release, and just tell the world that if they use CipherShed, there's no need to worry about the security of their data? :-P
WaywardGeek
 
Posts: 40
Joined: Sat Jun 07, 2014 8:38 am

Re: you were hacked? want to prevent next one?

Postby NickP » Wed Aug 27, 2014 2:08 am

You're welcome and yes, that high assurance scheme has a lot of manual typing. That was a shortcut approach that shifted the burden from devices to people. The plus side is it eliminates much risk even from Five Eyes' TLA attacks unless they get really up close & personal. Before I go on, I'm going to link this essay by Wheeler, which covers SCM security very well. It also links to other good papers on the subject with various issues and designs.

http://www.dwheeler.com/essays/scm-security.html

So, here's a few possibilities:

1. The server with the download and the web site with the hash are both hacked to send out a modified binary that subverts or DoS's the tool's users. If it's also signed with private keys, they'll need the keys to add those signatures to their tool.

(A Python-powered wiki would please many black hats or TLA's.)

2. A MITM attack is performed instead to do the same thing. Five Eyes often uses this method to hack computers, even with targeting rules for automation. So, they might use it for this. There are optimizations of this that involve looking at what people downloaded to ensure auditors always get the right code, while others get the subverted one. Or pre-hacking a box to see if they're checking things, then, if not, feeding them subverted security tools to ensure access if data moves.

3. A malicious developer sends in source that looks good, but has a subtle weakness. The obfuscated C contest shows this risk. Big risk areas for your particular app are code with side channels, pointer manipulations, code that creates data that becomes syscall arguments, cryptosystem construction of course, entering/storing/deleting of passwords/keymat, and anything in kernel mode. Watch these kinds of source submissions carefully.

4. Malware on a developer's machine does the above. One must have a way to keep track of what was sent despite being on a malicious machine. Paper copies of what you submit, with visual comparisons, help. You can look at others' copies of your submissions, hash them, etc. too. One trick used by sophisticated banking malware that might apply is to send one thing and show you another. This is easier if you use an automated process than if you use manual methods with different writing styles & mediums. All kinds of deception can happen if you all have standardized tools and they control every dev/review box.

5. A version of 3-4 sends in malicious documentation that leads users into weakening the crypto or opening themselves up to other attacks. This might be in the app. However, as I'm looking at your wiki I see they might modify it (or official documentation) to link to a subverted version of a tool you need to build/test the app. Or hack *that* site, tool, or update process to hit others that leverage it. Black hats of all kinds, if sophisticated, will consider indirect attacks. I'd MITM Git, the signature verification tool, and/or the compiler. Tools, certs, and sigs.

6. A version of 3-4 sends in something that's encumbered by copyright or patent laws. It gets included into your binary, then distributed. The lawyers finish your project off in any number of ways. (!) This is called an 'encumbrance pollution attack.' I can't recall off the top of my head if it's been used offensively, but many accidental forms of this have happened with code cut-and-pastes. So, the developers + reviewers must ensure licensing is appropriate for any code they use. Keeping it non-commercial is a start on patent risk.

So, let's look at your scheme, comparing it to each bullet point. There's a diverse crowd going after software like truecrypt with different operational styles. So, I'm assessing the technical capability here rather than the odds that it will be used.

1. This can still temporarily work. If developers regularly look for this, then it will be caught and the damage limited. It assumes they're easily getting your keys, so it can be repeated whether you change them or not. Stopping the repeats requires increasing the assurance of signing or distribution as my methods do. This requires about 6 machines targeted in your scheme.

2. This can work on individual users. Dev's or Security Team will probably catch it if done to them. It doesn't even require hacking your tool as they can just hack the box, then replace your executable. To fool auditor types, the attack must swap out the public keys and signatures. It's more manual but doable. That would be a quite targeted attack.

3. They're going to try this. NSA did this plenty. Your team's code reviews are a great practice and hopefully will catch it. Your reviewers must know what to look for, though. I added a few risk areas to what they already know. They'll catch anything super obvious, but TLA's are sneaky. This stays a risk, as in most schemes.

4. A number of you have failed the endpoint security part of this test. Both black hats and TLA's exploit that fact. Either might leverage your machine's data or tools in an attack.

5. NSA did this with many standards. Red Team exercises use it with fake tech support. So, the principle works. It means *every aspect* of securely using the software must be signed. If all your docs, tests, source, etc are in your archives and your users only use those, your risk is lower. If your users go with online documentation, such users can be hit via 1-2 with documentation-based attacks. And security of your process still depends on tools others control running on possibly insecure endpoints. So, much of what you do must go with "trust but verify" on the outputs to some degree. Even your tools...

6. I'm almost certain you're vulnerable to this because I doubt you were looking for it. If your Security members were, then I'm quite impressed. If not, it's a little known (but devastating) attack and now they know they need to watch out for encumbered code.

So, it still has risks that medium assurance methods reduce and high assurance reduces further (or eliminates). There might be other risks I'm overlooking. This is just what popped into my head. So, these are some to consider in your choices of software, procedures, tools, etc. Hope it helps.

"We are relying on git and multiple manual reviews for a lot of our security."

Git itself wasn't really designed for secure development. It's also unreasonably easy for developers to mess up their repos in Git. Aegis (below) has many features one would expect of a secure SCM. I link just to illustrate them rather than promote it. I'd put a network & protocol-level guard in front of it if I used it.

http://aegis.sourceforge.net/propaganda/security.html

Your main security comes from the combination of Git, reviews, signatures, etc. Protecting the build process, signatures and repo access still has significant risks in your case. It's why I like verification activities to be done on a dedicated, hardened box with something like a data diode for secure transfer of files. So, in such a scheme you hand-entered public keys, initial hashes, your private key, etc. The archives are moved to the box, checked, operations performed on them, your stuff added, signatures made, and the signature files moved out (manually or by some other method). This means three data moves per build effort: move repo data into the machine, move your own contributions in, and if the result matches your main (untrusted) machine, move the signature out.
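One session on such a box might run roughly like this (a minimal runnable sketch: temp directories stand in for the one-way media and a copy stands in for the build; a real session runs the actual build tools, and the gpg step is a comment since it needs the real offline key):

```shell
# Stand-ins for one-way media.
IN=$(mktemp -d)    # media arriving from the untrusted machine
OUT=$(mktemp -d)   # media leaving with nothing but the signature

printf 'repo snapshot\n' > "$IN/repo.tar"
sha256sum "$IN/repo.tar" > "$IN/repo.sha256"

# Move 1: verify the incoming repo data before touching it.
sha256sum -c --quiet "$IN/repo.sha256" || exit 1

# Move 2: "build" and hash the result.
cp "$IN/repo.tar" release.tar
sha256sum release.tar > release.sha256

# Move 3: if the hash matches what the untrusted machine computed,
# only a detached signature leaves the box, e.g.:
#   gpg --detach-sign -o "$OUT/release.sig" release.tar
echo "verified and built"
```

The private key never leaves the hardened box; only the signature crosses back out.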

That kind of thing is a lot easier with open source compilers and build tools. For Windows or Mac build tools, things get a little more tedious and risky. I once had that use case. I used a Linux or BSD Live CD to do the data moving or verification, but Windows build tools to build. I couldn't do software updates on that box, though. My compromise was buying a (now unavailable) HD with a physical switch for hardware-enforced write protection. It stored code and system data, whereas a regular HD stored the app data.

"For example, we have bitmaps that we display. No manual review is going to uncover a malicious .bmp file that somehow executes arbitrary code on the users machine when displayed, by exploiting some buffer overflow somewhere. "

Good thinking:

https://duckduckgo.com/?t=lm&q=buffer+overflow+bmp+file

One of my rules of thumb is that any input with parsing or input-driven processing in general is untrusted by default. Input validation and sanity checks as a best practice follow naturally. This should also be done not just for app's data files (eg volumes), but its configuration files, too. Attackers with just enough privilege to touch those might inject into them. Sandboxing/deprivileging the tools for each thing you need, along with using less risky formats helps. For example, PDF and XML files are closer to programming languages than pure data formats. I used RTF/HTML3.2 and a modified ASN.1 (now JSON) instead to simplify processing. Sun's XDR is another format I used for networked applications. Simple to specify and code = simpler to code correctly.
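As a toy illustration of that default-untrusted stance (a stand-in file and deliberately minimal checks; real BMP validation also has to bound every length and offset field in the header):

```shell
# Create a stand-in bitmap: correct "BM" magic plus 60 zero bytes.
printf 'BM' > logo.bmp
head -c 60 /dev/zero >> logo.bmp

magic=$(head -c 2 logo.bmp)
size=$(wc -c < logo.bmp)

# Reject anything that fails cheap structural checks before the real
# parser ever sees it. 54 bytes is the smallest header for an
# uncompressed BMP (14-byte file header + 40-byte info header).
if [ "$magic" = "BM" ] && [ "$size" -ge 54 ]; then
    echo "passes basic sanity checks"
else
    echo "rejected" >&2
fi
```

Checks this cheap don't stop a clever exploit on their own, but they shrink the input space the full parser has to survive.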

"There could be a compiler bug in gcc or Visual Studio that causes completely safe looking code to compile to something quite nasty."

The CompCert C compiler was mathematically verified and did amazingly well during fuzz testing. My preliminary solution for that was to use a C++-to-C compiler, visually verify the functions are equivalent, and then run that through CompCert. You could manually generate the equivalent C, but you didn't like typing in source, so I doubt you want that. This would only be done for major releases or certified versions of the software.

"I also worry about a compromised OS."

It's the reason for my focus on medium to high assurance methods. The OS must be assumed compromised if TLA's are the opponent. NSA TAO also does TEMPEST-style attacks, firmware attacks, and physical implants. I'm all about getting rid of low-hanging fruit first, though, so I focus on remote attacks. I described the layer-by-layer risks and the processes it takes to do high assurance in this response on secure code vs systems:

http://www.schneier.com/blog/archives/2 ... l#c1102869

It was basically my proprietary framework for analyzing or designing systems. I posted it because NSA is blocking high assurance in commercial sector, so I'm unlikely to profit off it as a trade secret. We're all probably better off sharing this stuff.

"It would be soooo much easier if we wanted to sell comfort rather than real security... How about let's all have a release, and just tell the world that if they use CipherShed, there's no need to worry about the security of their data?"

You mean we should start a security business, go to the RSA conference, and roll in the cash our coked-out marketers get us? Sounds like the good life! If only I was so rational. I think I'll continue my pointless crusade to bring better security/privacy to a country where most people trade it away for Farmville. I know it sounds foolish and perhaps delusional. Every now and then, though, my posts help a critical project improve their INFOSEC so I stay at it. Hope my efforts help your project as I'd like it to succeed.
NickP
 
Posts: 4
Joined: Sat Aug 23, 2014 12:28 am

