Review by Ross Anderson
to appear in Jan/Feb 2007 IEEE S&P Magazine
Gary McGraw, "Software Security: Building Security In"
Addison-Wesley, 2006
`We must first agree that software security is not security software', writes Gary McGraw in the first chapter of his new book. Spot on! Things break because software is just about everywhere, and we rely on it for just about everything; we had software before the Internet, but we couldn't have the Internet until there was software. Software has bugs, and some of them cause vulnerabilities. Trying to compensate for bugs by adding a layer of special security software can only get you so far - often not far enough.
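To make the point concrete, here is a minimal sketch of my own (it is not an example taken from the book): an unchecked copy into a fixed-size buffer - an everyday coding bug that becomes an exploitable vulnerability, and one that no firewall or antivirus bolted on afterwards can repair.

    #include <stdio.h>
    #include <string.h>

    /* Illustrative sketch only: an ordinary coding bug that becomes a
     * vulnerability. */
    void greet(const char *name) {
        char buf[16];
        strcpy(buf, name);   /* bug: no bounds check on attacker-supplied input */
        printf("hello, %s\n", buf);
    }

    /* The fix lives in the code itself, not in a security layer around it. */
    void greet_safely(const char *name) {
        char buf[16];
        strncpy(buf, name, sizeof buf - 1);
        buf[sizeof buf - 1] = '\0';
        printf("hello, %s\n", buf);
    }

    int main(void) {
        greet_safely("a string longer than sixteen characters");
        return 0;
    }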
But how can you train programmers to stop writing vulnerabilities? The explosion of the software industry over the past fifty years has created far more programming jobs than there are CS graduates to fill them. Most of my teenage contemporaries who studied science subjects - any science, from physics to geology to physiology - ended up writing code of one kind or another. Having run out of trainable people in the USA and Europe, we now have hundreds of thousands of folks in the developing world writing code. And as processors and communications spread from office equipment to domestic appliances and eventually to most inedible things costing more than a few dollars, the software quality gap can only get worse. So people who understand security and know how to write have an opportunity - one might even say a duty - to try to close this gap.
Gary has already written a couple of well-received books on software security, one on attack and one on defense. His latest book sets out to explain both sides. The first part opens with a background chapter that introduces bug metrics - very welcome, as the study of vulnerability statistics has been one of the new and interesting fields of security research in the last few years. There's then a chapter on a risk management framework used in his company.
The second section goes through seven security `touchpoints' - components of an assurance program. Gary lists these in descending order of importance as code review, architectural risk analysis, penetration testing, risk-based security testing, abuse cases, security requirements and security operations.
This ranking got me thinking. My first reaction was disagreement: security needs to be engineered into a system from the start, so you have to begin with the abuse cases, derive the security policy, refine that into security requirements, define the architecture and take it from there. Quite a few times I've worked on an early electronic version of an existing application where initial attempts at security had failed because the designers hadn't stopped to think about what security actually meant for their application. Is the main threat to privacy or to safety?
Are the likely bad guys insiders or outsiders? There are also practical and political aspects to building the security in from the requirements stage - if you wait until the code is almost ready to ship and then point out that it needs extensive rewriting, you'll be unpopular or ignored.
Gary does cheerfully confess, however, that he has a bias coming from years doing security for `code-o-centric organizations'; and he made his bones finding implementation bugs in artefacts such as Java virtual machines whose design was (in theory at least) sound. While I've specialised in finding bugs in specifications, Gary's a specialist at finding bugs in code.
Reading on, we find that many of the things I'd have put in a security requirements chapter turn up early in his doxology, under `architectural risk analysis'. So although his book does not give security requirements the early emphasis that would be prudent for someone doing software security engineering on a completely new application, it is quite workable for engineers tackling a fairly well-understood problem, such as writing the next version of an operating system or a bank branch accounting package.
Moving on, the book is full of war stories; it distills a huge amount of experience in bug-hunting, and brings depth and detail to topics such as code analysis and penetration testing. It also provides some practical guidance on how a consultant might help change the business culture of a software team that produces insecure code (stop the bleeding, harvest the low-hanging fruit, establish a foundation, ...). It has extensive lists of abuse cases, and finishes on a strong note with a massive taxonomy of coding errors.
Stepping aside from the question of whether you look for bugs first in the specification or in the code, Gary's book is clearly going to become one of the classics. I expect it will stay on a nearby shelf and get well-thumbed. His risk-based approach to software lifecycle management also runs nicely parallel to best practice in safety-critical systems, which I expect to become increasingly important; as potentially lethal devices such as automobiles acquire more and more code in more and more systems, security vulnerabilities will become safety hazards. Firms will need a unified approach to managing software safety and security together.
Overall, I reckon this was the best new security book I've seen this year. It certainly made me think more than any other security book I've read recently. I'd consider it a must-buy for the serious practitioner.
Ross Anderson
Cambridge, 8 October 2006
Copyright © 2006, Gary McGraw