Last night, I wrote on Facebook that I had been invited to talk at ACSAC about the legacy of the “Orange Book” (TCSEC), as 2013 marks 30 years since it was first published. I was fishing for opinions from a number of people whom I respect (but I’d take them from folks I don’t respect as well 🙂 ), but got few responses. So let me expand on my current thoughts… perhaps this will get folks thinking.
The TCSEC was a seminal publication in security … one of the first security criteria out there. It defined preset groupings of functional and assurance requirements (what today we would call a package or a profile) that were well thought out. This was a strength, but it was also a problem, as the pre-defined packages didn’t work well for anything other than monolithic operating systems. Its assurance paradigm was based on in-depth design analysis — increasing the level of design detail and analysis, as well as testing, to ensure all problems were found.
How well did this all work out? Well, there was the mantra of “C2 by ’92”, which admittedly got more and more systems to have discretionary access control, object reuse, auditing, and I&A. This was one of the factors that led to Windows having more security — NT had to have stronger security to meet C2 by ’92, and Windows NT is the basis of today’s Windows systems. Could one argue that the TCSEC beat up Windows 98 in an alley fight? Perhaps.
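For readers who never worked with an evaluated system, the discretionary access control and auditing at the heart of those C2 requirements can be sketched in a few lines of Python. This is a purely illustrative toy — the class, names, and permission strings are my invention, not any vendor’s actual implementation:

```python
# Hypothetical sketch of C2-style discretionary access control (DAC):
# the object's owner decides who may access it, and every access
# decision is recorded in an audit trail. Names are illustrative only.

class DacObject:
    def __init__(self, name, owner):
        self.name = name
        self.owner = owner
        self.acl = {owner: {"read", "write"}}  # owner starts with full access

    def grant(self, requester, user, perms):
        # "Discretionary" means the owner, at their discretion, sets the ACL.
        if requester != self.owner:
            raise PermissionError("only the owner may grant access")
        self.acl.setdefault(user, set()).update(perms)

    def access(self, user, perm, audit_log):
        allowed = perm in self.acl.get(user, set())
        # C2 also requires auditing of security-relevant events,
        # including failed access attempts.
        audit_log.append((user, self.name, perm,
                          "granted" if allowed else "denied"))
        return allowed

audit = []
doc = DacObject("payroll.txt", owner="alice")
doc.grant("alice", "bob", {"read"})
doc.access("bob", "read", audit)   # allowed: owner granted read
doc.access("bob", "write", audit)  # denied, and the denial is audited
```

Real C2 systems of course did far more (object reuse, trusted I&A paths, protected audit storage), but the owner-controlled ACL plus mandatory audit trail is the shape of the functional requirement.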
However, the assurance paradigm was in some sense flawed, and may have gone down the wrong path. There was a naive assumption that if we could get commercial vendors to follow that path, we would have more secure systems. Did that work? Those working with evaluated systems can answer that question — we never saw higher assurance take off, because it went against commercial practice.
The chafing against the pre-set packages of the digraphs also led to an unbundling of functionality from assurance, eventually leading to the Common Criteria of today. I’d argue that this eventually gave us the “control” notion we now see in 800-53: pick the functional and assurance requirements you need to meet your threats. This is a good thing, but it is also a loss of the forethought that went into the bundling. We are seeing a return to bundling in some sense with the move to standard protection profiles, CNSSI 1253 baselines, and controlled ways to modify things. Did the TCSEC show the value of these bundles?
The TCSEC, just like 800-53 and the CC, is a catalog of requirements; it is not an evaluation process. Yet the evaluation process that grew up around the TCSEC also has a legacy. That process was extremely in-depth and took far too long. The legacy of that process — and the analysis the TCSEC required — affects how evaluation is viewed today. We’re still seeing fights against a process that takes too long, and we still haven’t found a balance between better / faster / cheaper that is satisfactory for both the vendors and the users of evaluated products.
I’d like to think that the TCSEC has a greater legacy than just perl. Hopefully, my preliminary thoughts above have gotten you thinking, and you’ll share your thoughts in the comments. I’m going to keep thinking on this so that I can work all of this together into a coherent presentation.
======
Additional musings added a few days later:
Thinking more about the TCSEC, I’m seeing a number of dimensions of impact (think like a tag cloud), other than (of course) perl. Here are some thoughts on each in alphabetic order — feel free to add more in the comments:
- Assumptions. Familiarity with the TCSEC led to assumptions about functionality that didn’t propagate through to newer criteria. One can see this in the Common Criteria. Notions that were present in the TCSEC — such as protecting authentication data or having process separation — are no longer explicit in the CC. People assume they are there, but they are not. Are they tested for?
- Assurance. The TCSEC codified the notion of design assurance, but most people didn’t see design assurance because they didn’t see above C2. Although the design assurance transferred to the Common Criteria (CC), there it was more of a failure — precisely because it never became commercial practice for the vendors. Instead, documentation was developed after the fact, which doesn’t improve assurance. Today, is there more thought given to making the design small, simple, and minimized… or are things large and complex with multiple failure paths? Did the TCSEC avenues to assurance survive?
- Awareness. Did the TCSEC make people more aware of security? Certainly, those working on the government side know what C2 security is — if only in terms of the functional requirements. But people in general? Most people probably don’t understand access control or audit — they never use the DAC mechanisms in Windows or Apple, and they’ve probably never looked at the event log. To most people, security is Passwords.
- Bundling. One characteristic of the TCSEC is that it bundled functionality with assurance. This was also its downfall, as the bundling was designed for a monolithic world and assumed MLS was a need. Yes, MLS needs high assurance, but high assurance doesn’t demand MLS. That was a failure, and that notion led to the unbundled CC. But bundling is returning with the new standard Protection Profiles, the -53 baselines, and overlays. Thought is being given to which requirements belong together. This is good. The problem is there’s often no thought about what those requirements need in terms of assurance. Assurance is typically “best possible” (the standard PP approach), which doesn’t necessarily correspond to what the functions need in their environment. We’re still faced with the eternal problem: people don’t pay for invisible assurance.
- Commercial Products. In the earlier part of this post, I argued that the TCSEC improved the security of some commercial products. Certainly it influenced Windows NT, and arguably it influenced a number of Unixes, although whether any of those made it into the Unix base of today is a different question. But many vendors still treat security as an afterthought, or don’t incorporate it into the design process.
- Confidentiality. The TCSEC’s focus was confidentiality — access controls to prevent disclosure. This focus remained for many years, and may have hindered efforts to broaden the focus to integrity and availability.
- Controls. Did the TCSEC come up with the “control” paradigm, or did that exist in the financial world before 1983? Certainly the TCSEC begat the Federal Criteria, which begat the CC, and TCSEC requirements and CC requirements influenced the controls in NIST SP 800-53.
- Developmental Assurance. The TCSEC had the notion that a well-thought-out design would lead to a higher-assurance product. Has that been borne out, or is it like FDA studies of marijuana? Is there empirical evidence of which design approaches do best, and does that evidence agree with the TCSEC? Did commercial vendors ever actually follow the TCSEC processes?
- Formal Methods. The TCSEC pushed the notion of formal methods at the higher levels. Yet we rarely see formal methods these days.
- Government Development. Here the TCSEC had more influence. The notion of “C2 by ’92” led to pushes to use C2 functionality, and this was captured in the 8500.2 controls and 800-53 controls. Many of the security requirements for systems today come from C2. However, the focus on functionality led to a loss of focus on assurance, and that’s only lately being recovered. There was also the assumption problem noted above.
- Multilevel Security. The TCSEC envisioned a world where MLS was everywhere. Yet MLS as a concept disappeared, reappeared under different names, and nowadays is present mostly in specialized guarding devices. It’s become a dirty word — is that because of the TCSEC?
- Product Evaluation. The TCSEC showed the value that product evaluation could bring to the overall accreditation process. Countless dollars were saved because operating systems and other products did not need to be reexamined in depth. Yet the original process in the US (TPEP) was very expensive for the government and took a lot of time. Think “Better / Faster / Cheaper”. We moved to a cheaper process of having labs do the work. People still complained it wasn’t faster. The CC came in, and we’ve been tinkering with the process to make it faster and faster because of Internet time scales. Have we made it better than we had it during the TCSEC? Are the requirements as well understood today? How did the legacy of the TCSEC process color today’s process?
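To make the bundling point above concrete, here is a purely illustrative sketch — the dictionaries and requirement names are mine, a loose paraphrase rather than the actual criteria text — contrasting a TCSEC-style class, which fixes functionality and assurance together, with an à-la-carte selection in the CC/800-53 style:

```python
# Illustrative contrast between bundled and unbundled criteria.
# Requirement names are shorthand paraphrases, not official text.

# TCSEC style: picking a class (digraph) buys you a fixed bundle of
# functional AND assurance requirements, chosen together by design.
TCSEC_CLASSES = {
    "C2": {
        "functional": {"DAC", "object reuse", "audit", "I&A"},
        "assurance":  {"security testing", "design documentation"},
    },
    "B1": {
        "functional": {"DAC", "object reuse", "audit", "I&A", "MAC labels"},
        "assurance":  {"security testing", "design documentation",
                       "informal policy model"},
    },
}

def unbundled_selection(functional, assurance):
    # CC / 800-53 style: pick functional and assurance requirements
    # independently, to match your own threats and environment.
    return {"functional": set(functional), "assurance": set(assurance)}

# An unbundled pick can take C2's functions with less (or more)
# assurance than the bundle prescribed -- flexibility at the cost
# of the forethought that went into the original pairing.
profile = unbundled_selection({"DAC", "audit"}, {"security testing"})
```

The point of the sketch is the shape, not the contents: in the bundled model the pairing of functions with assurance was done once, by people who thought hard about it; in the unbundled model every selector must redo that thinking, and many don’t.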
These are just some areas. Perhaps as we explore the legacy, there should be an additional question asked: for those areas where the legacy shows a form of failure, are there places where we can learn lessons and perhaps improve where we are today? Given that I’m dealing with the TL;DR generation, perhaps that should be a topic for a future post … or an ACSAC talk.