Musings on a Book of Orange

Last night, I wrote on Facebook that I had been invited to talk at ACSAC about the legacy of the “Orange Book” (TCSEC), as 2013 marks 30 years since it was first published. I was fishing for opinions from a number of people whom I respect (but I’d take them from folks I don’t respect as well 🙂 ), but got few responses. So let me expand on my current thoughts… perhaps this will get folks thinking.

The TCSEC was a seminal publication in security … one of the first security criteria out there. It defined preset groupings of functionality and assurance requirements (what today we would call a package or a profile) that were well thought out. This was a strength, but it was also a problem, as the pre-defined packages didn’t work well for anything other than monolithic operating systems. Its assurance paradigm was based on in-depth design analysis — increasing the level of design detail and analysis, as well as testing, to ensure all problems were found.

How well did this all work out? Well, there was the mantra of “C2 by ’92”, which admittedly got more and more systems to have discretionary access control, object reuse, auditing, and I&A. This was one of the factors that led to Windows having more security — NT had to have stronger security to meet C2 by ’92, and Windows NT is the basis of today’s Windows systems. Could one argue that the TCSEC beat up Windows 98 in an alley fight? Perhaps.

However, the assurance paradigm was in some sense flawed, and may have gone down the wrong path. There was a naive assumption that if we could get commercial vendors to follow that path, we would have more secure systems. Did that work? Those working with evaluated systems can answer that question — we never saw higher assurance take off, because it went against commercial practice.

The chafing against the pre-set packages of the digraphs also led to an unbundling of functionality from assurance, eventually leading to the Common Criteria of today. I’d argue that this eventually gave us the “control” notion we now see in 800-53: pick the functional and assurance requirements you need to meet your threats. This is a good thing, but it is also a loss of the forethought that went into the bundling. We are seeing a return to bundling in some sense with the move to standard protection profiles, CNSS 1253 baselines, and controlled ways to modify things. Did the TCSEC show the value of these bundles?

The TCSEC, just like 800-53 and the CC, is a catalog of requirements; it is not an evaluation process. Yet the evaluation process that grew up around the TCSEC also has a legacy. That process established a very in-depth process that took far too long. The legacy of that process — and the analysis the TCSEC required — affected how the process is viewed today. We’re still seeing fights against a process that takes too long, and we still haven’t found the balance between better / faster / cheaper that is satisfactory for both the vendors and users of evaluated products.

I’d like to think that the TCSEC has a greater legacy than just perl. Hopefully, my preliminary thoughts above have gotten you thinking, and you’ll share your thoughts in the comments. I’m going to keep thinking on this so that I can work all of this together into a coherent presentation.

======

Additional musings added a few days later:

Thinking more about the TCSEC, I’m seeing a number of dimensions of impact (think like a tag cloud), other than (of course) perl. Here are some thoughts on each in alphabetic order — feel free to add more in the comments:

  • Assumptions. Familiarity with the TCSEC led to assumptions about functionality that didn’t propagate through to newer criteria. One can see this in the Common Criteria. Notions that were present in the TCSEC — such as protecting authentication data or having process separation — are no longer explicit in the CC. People assume they are there, but they are not. Are they tested for? 
  • Assurance. The TCSEC codified the notion of design assurance, but most people didn’t see design assurance because they didn’t see above C2. Although the design assurance transferred to the Common Criteria (CC), there it was more of a failure — precisely because it never became commercial practice for the vendors. Instead, documentation was developed after the fact, which doesn’t improve assurance. Today, is there more thought given to making the design small, simple, and minimized… or are things large and complex with multiple failure paths? Did the TCSEC avenues to assurance survive?
  • Awareness. Did the TCSEC make people more aware of security? Certainly, those working on the government side know what C2 security is — if only in terms of the functional requirements. But people in general? Most people probably don’t understand access control or audit — they never use the DAC mechanisms in Windows or Apple, and they’ve probably never looked at the event log. To most people, security is Passwords.
  • Bundling. One characteristic of the TCSEC is that it bundled functions with assurance. This was also its downfall, as the bundling was designed for a monolithic world and assumed MLS was a need. Yes, MLS needs high assurance, but high assurance doesn’t demand MLS. That was a failure, and that notion led to the unbundled CC. But bundling is returning with the new standard Protection Profiles, the -53 baselines, and overlays. There is thought being given to what requirements belong together. This is good. The problem is there’s often no thought about what these requirements need in terms of assurance. Assurance is typically “best possible” (the standard PP approach), which doesn’t necessarily correspond to what the functions need in their environment. We’re still faced with the eternal problem: people don’t pay for invisible assurance.
  • Commercial Products. In my earlier part of this post, I argued that the TCSEC improved the security of some commercial products. Certainly it influenced Windows NT, and arguably influenced a number of Unixes, although whether any of those made it into the Unix base of today is a different question. But many products still think about security after the fact, or don’t incorporate it into the design process.
  • Confidentiality. The TCSEC’s focus was confidentiality — access controls to prevent disclosure. This focus remained for many years, and might have hurt trying to grow the focus to integrity and availability.
  • Controls. Did the TCSEC come up with the “control” paradigm, or did that exist in the financial world before 1983? Certainly the TCSEC begat the Federal Criteria, which begat the CC, and TCSEC requirements and CC requirements influenced the controls in NIST SP 800-53.
  • Developmental Assurance. The TCSEC had the notion that a well-thought-out design would lead to a higher-assurance product. Has that been borne out, or is it like FDA studies of marijuana? Is there empirical evidence of which design approaches do best, and does that agree with the TCSEC? Did commercial vendors ever actually follow the TCSEC processes?
  • Formal Methods. The TCSEC pushed the notion of formal methods at the higher levels. Yet we rarely see formal methods these days.
  • Government Development. Here the TCSEC had more influence. The notion of C2 by ’92 led to pushes to use C2 functionality, and this was captured in the 8500.2 controls and 800-53 controls. Many of the security requirements for systems today come from C2. However, the focus on functionality led to a loss of focus on assurance, and that’s only lately being recovered. There was also the assumption problem noted above.
  • Multilevel Security. The TCSEC envisioned a world where MLS was everywhere. Yet MLS as a concept disappeared, reappeared under different names, and nowadays is present mostly in specialized guarding devices. It’s become a dirty word — is that because of the TCSEC?
  • Product Evaluation. The TCSEC showed the value that could come from product evaluation in the overall accreditation process. Countless dollars were saved because operating systems and other products did not need to be reexamined in depth. Yet the original process in the US (TPEP) was very expensive for the government and took a lot of time. Think “Better / Faster / Cheaper”. We moved to a cheaper process of having labs do the work. People still complained it wasn’t faster. The CC came in, and we’ve been tinkering with the process to make it faster and faster because of Internet time scales. Have we made it better than we had it during the TCSEC? Are the requirements as well understood today? How did the legacy of the TCSEC process color today’s process?

These are just some areas. Perhaps as we explore the legacy, there should be an additional question asked: For those areas where the legacy shows a form of failure, are there places where we can learn lessons and perhaps improve where we are today? Given I’m dealing with the TL;DR generation, perhaps that should be a topic for a future post … or ACSAC talk.

 


6 Replies to “Musings on a Book of Orange”

  1. From a discussion with Steven Greenwald on Facebook:

    Steven J. Greenwald

    Well, just formal security policy modeling, which doesn’t seem a popular topic now. Nothing more, really. Well, that and the insane decline of MLS (a dirty word now) and, well, the current security establishment that has created the current quagmire can all go to hell, and should (I’d gladly help them along their way if I could).

    Daniel P Faigin

    Yeah, I forgot to mention in the post. I was thinking about the legacy in terms of both formal methods (such promise, but has mostly gone nowhere), and the decline of MLS. Has anything good come from the TCSEC?

    Steven J. Greenwald

    Not in terms of modern security mindset. No one even knows about it. People don’t even understand the use of lattices. The CC and other process-whore garbage has taken over so that the bureaucrats and prostitutes can make a buck without actually doing anything except generating paperwork. (You sure you want my comments on this?)

    Steven J. Greenwald

    By the way, knowing how your mind works, if you have an empty slot on a panel, I will quite happily say all of this in public. Quite happily. With these exact words too, not pulling my punches. Because I really don’t give a shit anymore, Daniel.

    Daniel P Faigin

    Yes, I want your comments. I’d prefer them over on the blog post, so that I can find them later. I think there are some good things — I think we can attribute controls and the notion of product evaluation, although the process is wrong. I think we’re moving back to bundling, but I’m not sure it is well thought out. Perhaps the legacy is that it started people thinking about security and security requirements in an organized fashion. So, please, longer thoughts on the blog (if you can).

    Steven J. Greenwald

    FYI: the notion of controls comes straight from the financial-accounting/banking world (read one of my papers on that), and product evaluation comes earlier (look at the Ware Report or Anderson Report, among others, etc.). Although the notion that security stops at A1 systems certainly came from the TCSEC (although the authors disclaim that).

    Steven J. Greenwald

    Do feel quite free to quote me on your blog though.

    Daniel P Faigin

    ’Twill do. I’ll record the salient parts of this conversation.

  2. The Orange Book kept me safe & warm one camping trip. Having run out of wood, the book was thrown into the fire, and it burned well. Perhaps no more fitting end could be had…

  3. It’s not the product, but the process. The TCSEC was an improvement on all its predecessors, and on its successors as well.

    I believe that in terms of evaluated products, the TCSEC-driven evaluation process was essentially a failure. It yielded a tiny number of evaluated products. The majority of those “products” were far from general-purpose, because they were unrealistically limited configurations that could not be used in the real world. Evaluation placed a significant burden on product developers, and took a long time, ensuring that evaluated products were typically years behind the marketplace. The “C2 by 92” mantra had no effect in terms of delivering system assurance through the use of evaluated products.

    But by other measures, the TCSEC was a tremendous success, because it took computer security out of the narrow government-sponsored confines where it had largely been developed and pushed it into the commercial organizations that were creating the majority of IT products (what we called “data processing” back then). This dissemination of knowledge, philosophy, and attitude unquestionably improved the security quality of products throughout the industry in a way that commercial market pressures never would have. It “took” better in some organizations than in others, to be sure, but I believe it yielded improvement pretty much everywhere it touched, if for no other reason than that it established a common vocabulary.

    I think of this as the “security diaspora”. Part of it was the NSA evaluation teams, but it was also carried forward by engineers moving from company to company after those initial contacts. We didn’t get many useful evaluated products, but we got significantly better product development across the industry.

    I am proud to have been a part of these “golden years” (“orange years”, maybe?) from around 1982-1990, and I imagine that my colleagues from that era feel similarly.

  4. Another little-appreciated aspect of the TCSEC and the evaluation process is that it was open (or, at least, not classified — “open” in those pre-WWW days was not how we think of it today). The visionary intent behind Roger Schell’s establishment of the National Computer Security Center was, I believe, unprecedented for NSA. Although the Agency had long worked with carefully-selected companies on specific products, the idea that it would provide high-quality computer security guidance, not to mention open discussion of potential vulnerabilities, was a great leap forward.

    If the TCSEC and the evaluation process had followed the familiar closed, need-to-know paradigm of the time, it would have had far less influence.

  5. Chuck Menk noted:

    I believe the Orange book series was the quintessential set of computer security doctrine of its day and is still providing immeasurable returns as the “old-line” orange book early adopters, authors and users still provide what I believe to be the foundational expertise in the area of Computer/INFOSEC/IA security. And, the shift away from the rigor it offered weakened the Computer/INFOSEC/IA world forever!

Comments are closed.