You're welcome, and yes, that high assurance scheme involves a lot of manual typing. It was a shortcut approach that shifted the burden from devices to people. The plus side is that it eliminates much of the risk, even from Five Eyes TLA attacks, unless they get really up close & personal. Before I go on, I'm going to link this essay by Wheeler, which covers SCM security very well. It also links to other good papers on the subject covering various issues and designs.
http://www.dwheeler.com/essays/scm-security.html

So, here are a few possibilities:
1. The server with the download and the web site with the hash are both hacked to send out a modified binary that subverts or DoS's the tool's users. If releases are also signed, the attackers will need the private keys to put valid signatures on their modified binary.
(A Python-powered wiki would please many black hats or TLA's.)
2. A MITM attack is performed instead to do the same thing. Five Eyes often uses this method to hack computers, even with targeting rules for automation, so they might use it here. There are optimizations of this that involve watching what people download so the auditors always get the right code while everyone else gets the subverted one. Or pre-hacking a box to see whether the target checks things, and if not, feeding them subverted security tools to ensure access if data moves.
3. A malicious developer sends in source that looks good but has a subtle weakness. The Obfuscated C Contest shows this risk. Big risk areas for your particular app are code with side channels, pointer manipulations, code that creates data that becomes syscall arguments, cryptosystem construction of course, entering/storing/deleting of passwords/keymat, and anything in kernel mode. Watch these kinds of source submissions carefully (see the sketch right after this list for a concrete example).
4. Malware on a developer's machine does the above. One must have a way to keep track of what was sent despite being on a malicious machine. Paper copies of what you submit, with visual comparisons, help. You can look at others' copies of your submissions, hash them, etc. too. One trick used by sophisticated banking malware that might apply here is to send one thing and show you another. That deception is easier against an automated process than against manual methods with different writing styles & mediums. All kinds of deception can happen if you all have standardized tools and they control every dev/review box.
5. A version of 3-4 sends in malicious documentation that leads users into weakening the crypto or opening themselves up to other attacks. This might target the documentation shipped in the app. However, looking at your wiki, I see they might modify it (or the official documentation) to link to a subverted version of a tool you need to build/test the app. Or hack *that* site, tool, or update process to hit others that leverage it. Sophisticated black hats of all kinds will consider indirect attacks. I'd MITM Git, the signature verification tool, and/or the compiler. Tools, certs, and sigs.
6. A version of 3-4 sends in something that's encumbered by copyright or patent laws. It gets included in your binary, then distributed. The lawyers finish your project off in any number of ways. (!) This is called an 'encumbrance pollution attack.' I can't recall off the top of my head whether it's been used offensively, but many accidental forms of it have happened via code cut-and-pastes. So, the developers + reviewers must ensure licensing is appropriate for any code they use. Keeping it non-commercial is a start on patent risk.
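To make #3 concrete, here's a minimal sketch (hypothetical function, made-up buffer size, not from your codebase) of the kind of subtle weakness reviewers should hunt for: a bounds check that looks right but can be bypassed through integer wraparound, exactly the sort of thing that later feeds bad lengths into pointer math or syscall arguments.

    #include <stdint.h>
    #include <string.h>

    #define BUF_SIZE 4096u

    /* Hypothetical record copier -- names and sizes are illustrative only.
     * LOOKS safe: "does the record fit in the buffer?" */
    int copy_record_bad(unsigned char *dst, const unsigned char *src,
                        uint32_t offset, uint32_t len)
    {
        /* BUG: offset + len wraps in 32-bit unsigned arithmetic. With
         * offset = 0xFFFFFFF0 and len = 0x20, the sum is 0x10, the check
         * passes, and the memcpy below writes far outside the buffer. */
        if (offset + len > BUF_SIZE)
            return -1;
        memcpy(dst + offset, src, len);
        return 0;
    }

    /* Fixed: phrase the check so no overflow is possible. */
    int copy_record_good(unsigned char *dst, const unsigned char *src,
                         uint32_t offset, uint32_t len)
    {
        if (offset > BUF_SIZE || len > BUF_SIZE - offset)
            return -1;
        memcpy(dst + offset, src, len);
        return 0;
    }

A reviewer skimming for obvious off-by-ones will pass the first version; only someone thinking about wraparound catches it.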
So, let's look at your scheme, comparing it to each bullet point. There's a diverse crowd going after software like TrueCrypt, with different operational styles. So, I'm assessing the technical capability here rather than the odds that each attack will be used.
1. This can still work temporarily. If developers regularly look for it, then it will be caught and the damage limited. It also assumes the attackers can easily get your keys, so it can be repeated whether you change them or not. Stopping the repeats requires increasing the assurance of signing or distribution, as my methods do. In your scheme, this requires targeting about six machines.
2. This can work on individual users. Devs or the Security Team will probably catch it if it's done to them. It doesn't even require hacking your tool, as they can just hack the box, then replace your executable. To fool auditor types, the attack must swap out the public keys and signatures. That's more manual but doable. It would be quite a targeted attack.
3. They're going to try this. The NSA has done it plenty. Your team's code reviews are a great practice and hopefully will catch it. Your reviewers must know what to look for, though. I added a few risk areas to what they already know. They'll catch anything super obvious, but TLA's are sneaky. This stays a risk, as in most schemes.
4. A number of you have failed the endpoint security part of this test. Both black hats and TLA's exploit that fact. Either might leverage your machine's data or tools in an attack.
5. NSA did this with many standards. Red Team exercises use it with fake tech support. So, the principle works. It means *every aspect* of securely using the software must be signed. If all your docs, tests, source, etc are in your archives and your users only use those, your risk is lower. If your users go with online documentation, such users can be hit via 1-2 with documentation-based attacks. And security of your process still depends on tools others control running on possibly insecure endpoints. So, much of what you do must go with "trust but verify" on the outputs to some degree. Even your tools...
6. I'm almost certain you're vulnerable to this, because I doubt you were looking for it. If your Security members were, then I'm quite impressed. If not, it's a little-known (but devastating) attack, and now they know to watch out for encumbered code.
So, your scheme still has risks that medium assurance methods reduce and high assurance methods reduce further (or eliminate). There might be other risks I'm overlooking; this is just what popped into my head. So, these are some to consider in your choices of software, procedures, tools, etc. Hope it helps.
"We are relying on git and multiple manual reviews for a lot of our security."
Git itself wasn't really designed for secure development. It's also unreasonably easy for developers to mess up their repos in Git. Aegis (below) has many features one would expect of a secure SCM. I link to it just to illustrate those features rather than to promote it. I'd put a network & protocol-level guard in front of it if I used it.
http://aegis.sourceforge.net/propaganda/security.html

Your main security comes from the combination of Git, reviews, signatures, etc. Protecting the build process, signatures, and repo access still carries significant risks in your case. It's why I like verification activities to be done on a dedicated, hardened box with something like a data diode for secure transfer of files. In such a scheme, you hand-enter public keys, initial hashes, your private key, etc. The archives are moved to the box, checked, operations performed on them, your stuff added, signatures made, and the signature files moved out (manually or by some other method). This means three data moves per build effort: move repo data into the machine, move your own contributions in, and, if the result matches your main (untrusted) machine, move the signature out.
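As a rough sketch of the "check it on the hardened box" step, assuming OpenSSL happens to be on that box, a small tool like this could compare an archive against a SHA-256 digest you hand-entered earlier (the file name and digest on the command line are placeholders):

    /* Minimal sketch: verify a file's SHA-256 against a hand-entered digest.
     * Build (assumes OpenSSL dev headers): cc verify.c -lcrypto -o verify
     * Usage: ./verify repo-snapshot.tar <64-char-hex-digest>
     */
    #include <stdio.h>
    #include <string.h>
    #include <ctype.h>
    #include <openssl/evp.h>

    int main(int argc, char **argv)
    {
        if (argc != 3 || strlen(argv[2]) != 64) {
            fprintf(stderr, "usage: %s <file> <sha256-hex>\n", argv[0]);
            return 2;
        }
        for (char *p = argv[2]; *p; p++)        /* normalize hand-typed hex */
            *p = (char)tolower((unsigned char)*p);

        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 2; }

        EVP_MD_CTX *ctx = EVP_MD_CTX_new();
        EVP_DigestInit_ex(ctx, EVP_sha256(), NULL);

        unsigned char buf[8192];
        size_t n;
        while ((n = fread(buf, 1, sizeof buf, f)) > 0)
            EVP_DigestUpdate(ctx, buf, n);
        fclose(f);

        unsigned char md[EVP_MAX_MD_SIZE];
        unsigned int mdlen;
        EVP_DigestFinal_ex(ctx, md, &mdlen);
        EVP_MD_CTX_free(ctx);

        char hex[2 * EVP_MAX_MD_SIZE + 1];
        for (unsigned int i = 0; i < mdlen; i++)
            sprintf(hex + 2 * i, "%02x", md[i]);

        if (strcmp(hex, argv[2]) == 0) { puts("MATCH"); return 0; }
        printf("MISMATCH: got %s\n", hex);
        return 1;
    }

The point is that the checker is small enough to audit by eye, lives only on the trusted box, and the reference hash arrives by keyboard rather than over the network.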
That kind of thing is a lot easier with open source compilers and build tools. For Windows or Mac build tools, things get a little more tedious and risky. I once had that use case. I used a Linux or BSD Live CD to do the data moving and verification, but Windows build tools to build. I couldn't do software updates on that box, though. My compromise was buying a (now unavailable) HD with a physical switch for hardware-enforced write protection. It stored code and system data, whereas a regular HD stored the app data.
"For example, we have bitmaps that we display. No manual review is going to uncover a malicious .bmp file that somehow executes arbitrary code on the users machine when displayed, by exploiting some buffer overflow somewhere. "
Good thinking:
https://duckduckgo.com/?t=lm&q=buffer+overflow+bmp+file

One of my rules of thumb is that any input with parsing, or input-driven processing in general, is untrusted by default. Input validation and sanity checks as a best practice follow naturally. This should be done not just for the app's data files (e.g. volumes), but for its configuration files too. Attackers with just enough privilege to touch those might inject into them. Sandboxing/deprivileging the tools for each thing you need, along with using less risky formats, helps. For example, PDF and XML files are closer to programming languages than pure data formats. I used RTF/HTML 3.2 and a modified ASN.1 (now JSON) instead to simplify processing. Sun's XDR is another format I used for networked applications. Simple to specify and code = simpler to code correctly.
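To put the "untrusted by default" rule against your .bmp example, here's a hedged sketch of the sanity checks one could run before any rendering code ever touches the file. The offsets follow the standard BITMAPFILEHEADER/BITMAPINFOHEADER layout; the size limits and the 24bpp-only restriction are assumptions you'd tune to your actual assets:

    #include <stdint.h>
    #include <stddef.h>

    /* Arbitrary sanity limits for app-shipped bitmaps; tune to your assets. */
    #define MAX_DIM   4096u
    #define HDR_BYTES 54u   /* 14-byte file header + 40-byte BITMAPINFOHEADER */

    static uint32_t rd32(const unsigned char *p)    /* little-endian read */
    {
        return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
               ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
    }

    /* Returns 0 only if buf plausibly holds a simple bottom-up,
     * uncompressed, 24bpp BMP. Everything else is rejected. */
    int bmp_sane(const unsigned char *buf, size_t len)
    {
        if (len < HDR_BYTES) return -1;
        if (buf[0] != 'B' || buf[1] != 'M') return -1;          /* magic */

        uint32_t data_off = rd32(buf + 10);
        uint32_t width    = rd32(buf + 18);
        uint32_t height   = rd32(buf + 22); /* negative = top-down: rejected */
        uint32_t bpp      = (uint32_t)buf[28] | ((uint32_t)buf[29] << 8);
        uint32_t compress = rd32(buf + 30);

        if (bpp != 24 || compress != 0) return -1;  /* BI_RGB 24bpp only */
        if (width == 0 || width > MAX_DIM) return -1;
        if (height == 0 || height > MAX_DIM) return -1;
        if (data_off < HDR_BYTES || data_off > len) return -1;

        /* 24bpp row size, padded to 4 bytes; values are now small enough
         * that this 64-bit product cannot overflow. */
        uint64_t row = ((uint64_t)width * 3 + 3) & ~(uint64_t)3;
        if (row * height > (uint64_t)(len - data_off)) return -1;
        return 0;
    }

Reject-by-default like this doesn't replace sandboxing the real parser, but it shrinks the input space the parser ever sees.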
"There could be a compiler bug in gcc or Visual Studio that causes completely safe looking code to compile to something quite nasty."
The CompCert C compiler was mathematically verified and did amazingly well during fuzz testing. My preliminary solution for that was to use a C++-to-C compiler, visually verify that the functions are equivalent, and then run the result through CompCert. You could manually write the equivalent C, but you didn't like typing in source, so I doubt you want that. This would only be done for major releases or certified versions of the software.
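As a toy illustration of that "visually verify the functions are equivalent" step (both names are hypothetical, and a real translator's output would be messier), the reviewer compares pairs like this before feeding the C side to CompCert:

    #include <stddef.h>

    /* C++ original, kept alongside for the visual comparison:
     *
     *     size_t Counter::increment() { return ++count_; }
     *
     * Hand-checked C equivalent that CompCert will accept. The implicit
     * `this` becomes an explicit struct pointer; behavior must match
     * exactly, which is what the reviewer signs off on. */
    struct counter {
        size_t count;
    };

    size_t counter_increment(struct counter *self)
    {
        self->count += 1;
        return self->count;
    }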
"I also worry about a compromised OS."
It's the reason for my focus on medium to high assurance methods. The OS must be assumed compromised if TLA's are the opponent. NSA TAO also does TEMPEST-style attacks, firmware attacks, and physical implants. I'm all about getting rid of low-hanging fruit first, though, so I focus on remote attacks. I described the risks, layer by layer, and the processes it takes to do high assurance in this response on secure code vs systems:
http://www.schneier.com/blog/archives/2 ... l#c1102869

It was basically my proprietary framework for analyzing or designing systems. I posted it because the NSA is blocking high assurance in the commercial sector, so I'm unlikely to profit off it as a trade secret. We're all probably better off sharing this stuff.
"It would be soooo much easier if we wanted to sell comfort rather than real security... How about let's all have a release, and just tell the world that if they use CipherShed, there's no need to worry about the security of their data?"
You mean we should start a security business, go to the RSA conference, and roll in the cash our coked-out marketers get us? Sounds like the good life! If only I was so rational. I think I'll continue my pointless crusade to bring better security/privacy to a country where most people trade it away for Farmville. I know it sounds foolish and perhaps delusional. Every now and then, though, my posts help a critical project improve their INFOSEC so I stay at it. Hope my efforts help your project as I'd like it to succeed.