Against Security Nihilism

There’s a lot of security nihilism in the technology community, and in the culture generally. Many people believe that “defense is impossible”, that “security is a losing battle”, that nothing can be done, and that we should stop trying and divert the resources spent on security to other worthy things like features and performance. There is even nihilism in the security community itself, although, I suspect, more so on the offensive side.

I disagree that defensive security is impossible. Yes, the software equivalent of a collapsing bridge does happen often. However, software engineering, and software security engineering in particular, are very young engineering disciplines. Imagine how bad bridge building was in year 70 of that discipline; then imagine how much worse it would have been if randos and governments had kept trying to destroy bridges the whole time.

But in the short time we’ve had to learn how to engineer software, we have learned techniques that definitely do work, and some that definitely don’t. I’d say we’ve learned a lot, fast. And we know we have, all too often, ignored things we already knew.

For example, the early programming language designer C. A. R. Hoare recognized that security is really just an ‘extreme’ form of correctness, and that a language’s first duty is to enable programmers to write correct programs. In “The Emperor’s Old Clothes” he says:

The first principle was security: The principle that every syntactically incorrect program should be rejected by the compiler and that every syntactically correct program should give a result or an error message that was predictable and comprehensible in terms of the source language program itself. Thus no core dumps should ever be necessary. It was logically impossible for any source language program to cause the computer to run wild, either at compile time or at run time. A consequence of this principle is that every occurrence of every subscript of every subscripted variable was on every occasion checked at run time against both the upper and the lower declared bounds of the array. Many years later we asked our customers whether they wished us to provide an option to switch off these checks in the interests of efficiency on production runs. Unanimously, they urged us not to: they already knew how frequently subscript errors occur on production runs where failure to detect them could be disastrous. I note with fear and horror that even in 1980, language designers and users have not learned this lesson. In any respectable branch of engineering, failure to observe such elementary precautions would have long been against the law.

...and yet here we are, in 2016, shipping new software in languages we know are unsafe.
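As a small illustration of Hoare’s principle, here is a sketch in Python, chosen only as a convenient example of a memory-safe language: an out-of-bounds subscript is checked at run time and surfaces as a predictable, source-level error, instead of “running wild” the way an unchecked access in C can.

    # Bounds-checked subscripting in a memory-safe language: the bad index
    # is detected at run time and reported in terms of the source program
    # itself, rather than silently reading or corrupting memory.
    values = [10, 20, 30]

    try:
        print(values[7])  # out of bounds: checked, not undefined behavior
    except IndexError as err:
        print(f"caught subscript error: {err}")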

The Big Problem in security engineering is not that it’s impossible. I’d even argue that some sound techniques are not even (technically) difficult. Often, the problems are economic, political, and even interpersonal.

I also often find that software engineers are simply unaware of sound security techniques. Even simple things like HTML templating libraries that automatically defang HTML metacharacters (now common and widely available, and enough to give developers a solid handle on the XSS problem) are unknown to many working programmers (!).
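To make the templating point concrete, here is a minimal sketch using Python’s standard-library html module; render_comment is just a hypothetical helper for illustration. Real templating libraries, such as Jinja2 with autoescaping enabled, apply the same defanging automatically on every interpolation.

    import html

    def render_comment(user_input: str) -> str:
        # Escape HTML metacharacters (&, <, >, quotes) so attacker-controlled
        # text renders as inert data rather than live markup or script.
        return f"<p>{html.escape(user_input)}</p>"

    print(render_comment('<script>alert("xss")</script>'))
    # -> <p>&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>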

Things We Know Work

Things We Know Don’t Work