There has been a tremendous amount of work done in the area of defining secure design principles. Some of the more notable milestones in this work are summarized here. This summary is by no means comprehensive.
The most frequently referenced source on secure design principles is undoubtedly the "Orange Book". [ref. NCSC. Department of Defense Trusted Computer System Evaluation Criteria (TCSEC). National Computer Security Center, December 1985. DOD-5200.28-STD, Orange Book.] The Orange Book delineated a set of fundamental concepts, practices, and design principles for the design of secure software.
The Orange Book led to other work, but continues to this day to be a referenced standard.
Another often-referenced early source is Saltzer and Schroeder. [J.H. Saltzer and M.D. Schroeder. The protection of information in computer systems. Proceedings of the IEEE, 63(9):1278-1308, September 1975.] This work stemmed from work on the famous "Multics" system (see http://www.multicians.org).
Summarized: Saltzer-Schroeder Practices
The Saltzer-Schroeder practices can be summarized as follows:
Another widely referenced source for principles is the NSA Principles of Secure System Design. [ref. P. Boudra, Jr. Report on rules of system composition: Principles of secure system design. Technical report, National Security Agency, Information Systems Security Organization, Office of Infosec Systems Engineering, I9 Technical Report 1-93, Library No. S-240, 330, March 1993. For Official Use Only.]
Summarized: NSA Principles of Secure System Design
The principles can be summarized as follows:
The International Information Security Foundation (I2SF) has attempted to develop a comprehensive set of secure design principles, resulting in the Generally Accepted Systems Security Principles (GASSP). [ref. W. Ozier. GASSP: Generally Accepted Systems Security Principles. Technical report, International Information Security Foundation, June 1997. (http://web.mit.edu/security/www/gassp1.html).] This work is currently incomplete.
Summarized: GASSP Principles
These principles define the following categories:
Broad Functional Principles
The International Organization for Standardization (ISO) developed the Common Criteria for Information Technology Security Evaluation, currently in version 2.1. [Ref. ISO 15408. ISO/NIST/CCIB, 19 September 2000.]
Summarized: Common Criteria
Common Criteria defines these technical (“functional”) evaluation categories:
“TOE” = “Target Of Evaluation”
“TSF” = “TOE Security Functions”
The Common Criteria is comprehensive in that it not only addresses the technical aspects of security (which are listed above), but also addresses lifecycle processes that impact security, such as configuration management, operation, testing, and vulnerability assessment.
Many organizations have endorsed the Common Criteria, among them BITS (originally the Banking Industry Technology Secretariat), the technology arm of the Financial Services Roundtable, a major banking industry group. To date, mostly security products have been CC-certified: secure operating systems, routers, firewalls, and the like. There is no reason the process could not be applied to an end-use application. In fact, it makes sense to have important applications certified, because certification establishes a concrete security goal instead of a vague claim that the application "should be secure". Indeed, the providers of important applications do their customers a disservice by not certifying them, because without certification there is no concrete and credible promise that security best practices were followed.
The CC defines seven levels of certification, with the lowest representing a minimal level of assurance and the seventh representing a highly secure system whose design has been formally verified and whose stringent lifecycle processes are carried out in a secure manner. Most security products that have been CC-certified have been certified at level 4, which requires a review of the design.
Peter Neumann has been a major contributor to the science of secure computing architectures and methodologies. A recent research report published by SRI with Peter Neumann as principal investigator [ref. Principled Assuredly Trustworthy Composable Architectures: First-Year Interim Report and Working Draft of the Final Report, Jan. 8, 2003; Peter G. Neumann, Principal Investigator; Principal Scientist, Computer Science Laboratory; SRI International EL-243, 333 Ravenswood Ave, Menlo Park, California 94025-3493, USA] lists a set of secure design principles, which are repeated in the Technical Details table.
Summarized: Neumann’s Principles
Neumann’s principles are summarized here and interpreted in terms of their relationship to secure design:
Sound architecture. A well-planned system architecture provides "enormous" benefits for system robustness, and by implication security. This applies to a system's internal design as well as to its external interfaces, and to a lesser degree its internal interfaces.
Minimization of what must be trustworthy. Separate a system into components that must be trustworthy and others that do not require the same high level of trust. That permits attention to be focused where it adds the most to overall security, in recognition of the fact that it is too expensive to make every component completely secure.
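To illustrate the idea of a small trusted component, here is a minimal sketch in Python. It is not drawn from any of the sources above; all names are invented for illustration, and the fixed salt is a deliberate simplification (a real system would use a random per-user salt).

```python
# Sketch: confine all secret handling to one small, auditable component.
# The rest of the system calls verify() and never touches stored hashes.
import hashlib
import hmac

class CredentialChecker:
    """The only component that ever sees stored secrets."""

    def __init__(self):
        self._store = {}  # username -> derived key (kept private)

    def enroll(self, user: str, password: str) -> None:
        salt = b"demo-salt"  # simplification; use a random per-user salt in practice
        self._store[user] = hashlib.pbkdf2_hmac(
            "sha256", password.encode(), salt, 10_000)

    def verify(self, user: str, password: str) -> bool:
        salt = b"demo-salt"
        candidate = hashlib.pbkdf2_hmac(
            "sha256", password.encode(), salt, 10_000)
        stored = self._store.get(user)
        # Constant-time comparison; unknown users fail closed.
        return stored is not None and hmac.compare_digest(stored, candidate)
```

Because only `CredentialChecker` can reach the stored hashes, security review effort can concentrate on this one class rather than on every caller.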
Abstraction. Design abstraction and a layered design add to the clarity of a system’s design and by implication enhance confidence in the security of that design.
Encapsulation. Encapsulation enhances abstraction and by implication enhances confidence in the security of a design.
Modularity. (Highly paraphrased.) Modularity provides loose coupling and clear definition of system components, and therefore enhances overall design clarity. That loose coupling makes it more likely that security of the overall system can be inferred from the security of its independent parts, since internal dependencies are reduced or eliminated.
Layered and distributed protection. Each resource layer within a system should employ a security model that is meaningful for that layer. A single perimeter is less effective than multiple perimeters centered on each kind of resource.
Constrained dependency. Keep track of what information has been authenticated and what has not, and make trust determinations accordingly.
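One way to keep track of what has been authenticated is to tag data with its provenance so that trust decisions can consult it. The following Python sketch is purely illustrative; the type and function names are invented, not taken from any source above.

```python
# Sketch: tag values with their authentication status, and make trust
# determinations depend on provenance rather than on the value alone.
from dataclasses import dataclass

@dataclass(frozen=True)
class Tagged:
    value: str
    authenticated: bool  # True only if an authentication step vouched for it

def from_session(value: str) -> Tagged:
    # Value came from a previously authenticated session.
    return Tagged(value, authenticated=True)

def from_request(value: str) -> Tagged:
    # Raw, unauthenticated input from the outside.
    return Tagged(value, authenticated=False)

def grant_admin(user: Tagged) -> bool:
    # The decision consults provenance: an unauthenticated "admin"
    # string confers nothing.
    return user.authenticated and user.value == "admin"
```

An attacker who merely supplies the string "admin" in a request never satisfies the check, because the provenance tag travels with the value.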
Object orientation. OO's support for abstraction, encapsulation, modularity, type safety, and other features promotes many of the principles espoused above; but OO principles must be applied in a safe manner.
Separation of policy and mechanism. Authorization policy should not be inextricably tied to (e.g. embedded in) an implementation. Despite this, policy models must be considered during design and not added after the design is complete.
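The separation can be sketched in a few lines of Python: the policy is a data table that could be reloaded or replaced, while the mechanism is one generic, fail-closed check. The table contents and names below are illustrative only.

```python
# Sketch: policy as data, mechanism as code.
# Policy: who may do what to which resource. This table could be loaded
# from a file or an administration interface without touching the code.
POLICY = {
    ("alice", "payroll"): {"read"},
    ("bob", "payroll"): {"read", "write"},
}

def is_allowed(subject: str, resource: str, action: str) -> bool:
    # Mechanism: a single generic check with a fail-closed default —
    # anything not explicitly granted is denied.
    return action in POLICY.get((subject, resource), set())
```

Changing the organization's rules then means editing the table, not re-auditing the enforcement code.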
Separation of duties. Define distinct non-overlapping responsibilities with regard to use of the system. These can then become the basis of the definition of roles.
Separation of roles. Roles correspond to sets of privileges defined in a particular domain, and should correspond to responsibilities ("duties") within that domain.
Separation of domains. Separation of domains enables the separation of privilege and compartmentalization of distinct sets of resources.
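The three separations above fit together naturally: duties map to roles, roles carry privileges, and each privilege is scoped to a domain. A minimal Python sketch (role, domain, and action names are invented for illustration):

```python
# Sketch: duties -> roles -> domain-scoped privileges.
# Each role holds privileges as (domain, action) pairs, so privileges
# in one domain say nothing about any other domain.
ROLES = {
    "auditor":  {("ledger", "read")},
    "operator": {("ledger", "read"), ("ledger", "append")},
}

def privileges(assigned_roles):
    # A user's privileges are the union over their assigned roles.
    granted = set()
    for role in assigned_roles:
        granted |= ROLES.get(role, set())
    return granted

def can(assigned_roles, domain: str, action: str) -> bool:
    return (domain, action) in privileges(assigned_roles)
```

Because privileges are keyed by domain, compartmentalization falls out of the data model: granting the "auditor" role confers nothing outside the "ledger" domain.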
Sound authentication. All elements of a system should have a known level of trust, based either on authentication or on some authenticated or otherwise secure mechanism of deployment or runtime verification.
Sound authorization and access control. Authorization should be granular and matched to the semantic capabilities of the resource that is being protected. Authorization should be non-subvertible. Authorization relies on trustworthy (e.g. authenticated) inputs.
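Non-subvertibility suggests a single enforcement point that callers cannot route around. One illustrative way to sketch this in Python is to attach the check to the operation itself with a decorator; the names and permission strings below are invented, not from any source above.

```python
# Sketch: a non-bypassable enforcement point. The permission check is
# bound to the operation, so there is no way to invoke the operation
# without passing through the check.
import functools

def requires(permission: str):
    def decorate(func):
        @functools.wraps(func)
        def wrapper(caller_permissions, *args, **kwargs):
            if permission not in caller_permissions:
                # Fail closed: no permission, no operation.
                raise PermissionError(f"missing {permission}")
            return func(caller_permissions, *args, **kwargs)
        return wrapper
    return decorate

@requires("record:delete")
def delete_record(caller_permissions, record_id):
    return f"deleted {record_id}"
```

The permission string ("record:delete") is deliberately granular — it names both the resource kind and the specific action, matching the check to the semantics of what is being protected.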
Administrative controllability. Administration functions must not be so burdensome or complex that the risk of incorrect administrative maintenance becomes significant.
Comprehensive accountability. Monitoring, auditing, and response (e.g. intrusion detection) systems are extremely important, but must be designed with consideration for their own security and confidentiality.