DIRECTIONS
Matt Bishop
Department of Computer Science, University of California at Davis
Davis, CA 95616-8562
bishop@cs.ucdavis.edu
We are in the midst of a crisis in the deployment and use of
computers, and it is getting worse every day. Our systems are not secure. They are considerably less secure than the paper systems we still use, and that are rapidly being replaced. Worse, we are not taking steps to shore up the infrastructure or systems, and we are neglecting the education necessary to base such improvements upon. Unless we devote the resources necessary to improve computer and network systems, and to educate computer scientists, operators, and users in INFOSEC, the edifice we have so painstakingly constructed will collapse, precipitating a crisis of confidence, trust, and reliance. The threat to our security as a nation is considerable; the threat to our individual privacy and identity even greater.
In this talk, I will explore the role of INFOSEC education in this crisis. I will discuss why we should care, where we are now, where we should be heading, and offer concrete suggestions about how to get there.
First, a comment about the topic. In what follows, I consider the computer security aspects of INFOSEC. INFOSEC, a portmanteau for “information security,” is actually much broader; but my main interest is protecting the information on computers, and the computers themselves. The greatest threats arise in that arena, due to the marriage of technology to the information management systems and techniques.
The Importance of Computer Security
First we must ask: what, exactly, is computer security? The stock definition is that “a system is secure if it conforms to a stated policy that defines what is allowed and what is disallowed” (the security policy). But this statement does not adequately convey the complexity of the problems of providing INFOSEC.
The World Wide Web provides numerous examples of how complex this issue is. Recently, the Social Security Administration made its database of earnings available over the Web. By giving a name, an address, a social security number, and the mother’s maiden name, a user could access his or her past earnings as recorded by the Social Security Administration, and obtain information about his or her account. Was this secure? According to the Social Security Administration, it was; the data was protected by passwords (the mother’s maiden name). According to many others, it was not, because the passwords were easily determined. In the end, the Social Security Administration took the database off line.
Electronic mail is another example where different definitions of “security” affect the analysis. If the mail contains passwords, financial data, or expressions of love or hate, “security” means keeping the contents of the message confidential. If the mail contains information the accuracy of which is critical, such as medical data or contract information, “security” requires that the contents be unalterable while the letter is in transit; it may also require that the sender of the letter can be established to a high degree of accuracy. The flip side occurs when a sender wishes to remain anonymous, such as the student who sent a threatening letter to President Clinton. I’m sure he thought electronic mail had a serious security problem when the Secret Service showed up at his door and informed him they had traced the “anonymous” letter.
Determining how to secure a given system, in a given environment, requires analyzing the situation to determine what “security” means. From that, one can design and implement procedures, programs, and technologies to provide a level of security conforming to the needs of the system.
Some more examples will make this point clearer. A vendor designs and implements a new computer system. What role does security play? Clearly the vendor wants to provide some minimal level of security, but what is that level and how does it impact the use of the system? One school of thought is to provide the mechanisms but initially disable them, and let the managers and users enable those they want. Old versions of the UNIX system were distributed with this philosophy (by default, anyone could write anything). A second approach is to pick some particular policy and configure the mechanisms to enforce that policy. As an example, one vendor sells two different types of computer systems. The newer version is initially set to distrust other hosts on its network. The older version is set to trust all hosts on its network. In both cases, “the network” may well be the Internet. Now, the company at one point reconfigured the older version to distrust all hosts by default, but the outcry from small businesses was so great that the decision was reversed. The problem, it turned out, was that small businesses used local networks not connected to the Internet, and when they installed the systems, no other system on the network could talk to them. The businesses did not have the expertise to fix the problem, and so complained. This is a classic example where one sense of security (“integrity”) is hampered by the need for another sense of security (“availability”). Which decision was better from the point of view of security?
Consider, too, the nature of the environment in which a system was developed. One often hears that “the UNIX operating system was not designed with security in mind.” That’s actually not true. The UNIX system was designed in a research lab, where the only “security” necessary was to prevent one user from accidentally deleting another user’s files. Given that (loosely stated) policy, the UNIX system is quite secure. But then the UNIX system moved out into less friendly environments, in which attackers could (and did, and still do) exploit flaws in programs or configuration errors to acquire privileges. In such an environment, the comment about UNIX not being designed with security in mind is quite correct.
My point is that computer security is more than mechanisms and mathematics. It includes being able to analyze a situation to figure out what constitutes security, being able to specify those requirements, being able to design a system or program to meet those requirements, being able to implement the system or program correctly, and being able to make configuration and maintenance simple.
Now, how well do we do this, in practice? As the above examples showed, rather poorly. I’ll not elaborate on the Social Security fiasco, other than to say at least the designers tried to follow the above steps; their security model of the Internet, or of American citizens, was flawed. At least with paper mail, responses could be sent to a particular address (that of the person about whom the information is requested); on the Internet, this protection is impossible. With respect to electronic mail, the student who threatened President Clinton clearly could not figure out that the implementation of electronic mail failed to provide what he expected in the way of security. And the vendor who configured systems to trust all hosts on the network did not adequately analyze the assumptions made in assessing the trust between system components and the humans who would use and administer them, thereby violating the principle of psychological acceptability [4].
Nowhere do we see our failures better than in the implementation of computer systems. As I’ll discuss later on, writing high-quality code is an art that all too few students ever see, and even fewer ever master. This lack shows in the systems we deploy. How many of you have ever been on a system where the screen display is replaced by a huge list of numbers, letters, and tables of dots? How many knew your goose was cooked at that time? You’re not alone. A study of utility programs on UNIX systems [3] showed that, given random input, about one-fourth to one-third of the programs dumped core. In one case, the kernel panicked, crashing the system! That’s inexcusable, sloppy programming, and it has serious consequences. In cases involving security, program crashes or incorrect behavior are not bugs; they are security holes. And most breaches of security are caused either by poor programming or by improper configuration.
Poor programming is a generic problem, because it causes security flaws under most definitions of “security.” If you look at the USENET newsgroups and traffic on the security-related mailing lists, most security holes reported recently arise from buffer overflows; the attackers alter data, or change the contents of the program stack, causing the program to execute machine-language routines stored on that stack (or elsewhere in memory). Checking buffer bounds is simple. True, it costs a bit more, but the cost is negligible compared to the effects of the failure to check the bounds. Also, I’ve never seen a study showing exactly how much overhead was added by the checking; I suspect it is considerably less than most people think. Before this, race conditions were the rage. What will be next? I wish I knew!
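As a minimal sketch of what such a check looks like (my own illustration, not code from any of the reported holes), consider a routine that copies untrusted input into a fixed-size buffer. Verifying the length before copying is the entire fix; the cost is one comparison per copy, which is the kind of overhead being weighed above.

    #include <stdio.h>
    #include <string.h>

    #define BUF_SIZE 64

    /* Copy untrusted input into a fixed-size buffer, refusing input
       that does not fit instead of overflowing the buffer. */
    int copy_input(char *dst, size_t dstlen, const char *src)
    {
        size_t srclen = strlen(src);

        if (srclen >= dstlen)            /* too long: reject, do not overflow */
            return -1;
        memcpy(dst, src, srclen + 1);    /* the +1 copies the terminating NUL */
        return 0;
    }

    int main(void)
    {
        char buf[BUF_SIZE];

        if (copy_input(buf, sizeof(buf), "some untrusted input") == 0)
            printf("copied: %s\n", buf);
        else
            fprintf(stderr, "input too long for buffer\n");
        return 0;
    }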
Improper configuration is arguably the user’s, or system administrator’s, fault. But why do such configuration problems arise? Configuration is often a very complex matter, and affects not only the part of the system being configured but also its interaction with the rest of the system. Most users and system administrators simply do not have the time, the experience, or the knowledge to consider all the ramifications of their configurations. So they do not configure, and trust the vendor’s configuration. Or they do what seems reasonable to them. But often the vendor’s configuration is for a different environment (as in the trust example above) or is counter-intuitive and without sanity checks. A perfect example of the latter is a program that manages cached DNS data. This data is critical to the correct functioning of the Internet, as it stores host name and IP address associations. Data remains in the cache for a period of time set in a configuration file; this length of time is stored as an integer number of seconds. Most folks, for whatever reason, seem to think it is a number of minutes or hours, so they put a floating-point number in the field. Say they want to purge the data after 30 minutes. If the time is expressed as 0.5, the program reads the “0” as the integer, ignores the “.5”, and sets the time-out to 0. This means that the data is never removed from the cache, which is not what the administrator intended -- and constitutes a security hazard. Now, how hard would a sanity check on the configuration value, one that caught a floating-point number, have been? It’s the “weakest link” phenomenon -- even if you configure 99.5% of your system components correctly, the remaining 0.5% usually leaves you vulnerable to attack. That these links are so weak is in part due to a failure to understand how critical simplicity and verification of configuration data are.
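A sketch of such a sanity check (my own, not the DNS software’s actual code): parse the field as an integer, and reject anything with trailing characters, such as the “.5” of “0.5”, or a value of zero.

    #include <stdio.h>
    #include <stdlib.h>
    #include <errno.h>

    /* Parse a cache time-out that must be a positive integer number of
       seconds.  Reject anything suspicious, such as "0.5" or "0". */
    long parse_timeout(const char *field)
    {
        char *end;
        errno = 0;
        long secs = strtol(field, &end, 10);

        if (errno != 0 || end == field) {
            fprintf(stderr, "time-out \"%s\" is not a number\n", field);
            return -1;
        }
        if (*end != '\0') {   /* trailing text, e.g. the ".5" in "0.5" */
            fprintf(stderr, "time-out \"%s\" is not an integer number of seconds\n", field);
            return -1;
        }
        if (secs <= 0) {      /* a zero time-out would mean "never expire" */
            fprintf(stderr, "time-out \"%s\" must be positive\n", field);
            return -1;
        }
        return secs;
    }

    int main(void)
    {
        printf("%ld\n", parse_timeout("1800"));   /* 30 minutes: accepted */
        printf("%ld\n", parse_timeout("0.5"));    /* rejected with a warning */
        return 0;
    }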
All this relates to INFOSEC education because it suggests a failure somewhere. Not enough computer scientists, system administrators, and programmers are learning about computer security. More knowledge and understanding of the basics of computer security, and an ability to apply these principles and techniques, would ameliorate the sorry state of security greatly. So, what can we do? How can we improve this situation? A good place to begin is with the current state of INFOSEC education.
Where We Are in INFOSEC Education
In academia, research and teaching go hand-in-hand, and it is not surprising that the four largest academic groups in INFOSEC also have the largest concentration of students and faculty in this area. While their research overlaps somewhat, each group has carved out a general, unique niche.
The Computer Security Laboratory at the University of California at Davis has research projects in network security (including the security of the network infrastructure), testing and verification methodologies, intrusion detection, vulnerabilities analysis, policy, and auditing. The COAST Laboratory at Purdue University focuses on host security, intrusion detection, audit technologies, and computer forensics. The Center for Secure Information Systems at George Mason University conducts research in formal models, database security, and authentication technologies. The Center for Cryptography, Computer, and Network Security at the University of Wisconsin in Milwaukee focuses on the application of cryptography and cryptographic methods and their extensions. While there are other groups working on computer security in academia (at the University of Idaho, CMU, MIT, the University of Texas, the University of Maryland, Idaho State University, and Portland State University), the research programs of these four groups are the largest.
Gene Spafford presented some statistics worth repeating in his February 1997 Congressional testimony [6]. Over the last five years, these four academic institutions granted 16 Ph.D.s for security-related research. (Incidentally, seven came from UC Davis.) Of these graduates, three went into academia. In the same time period, about 50 master’s students graduated. But these numbers, while revealing, convey only a very small part of the current state of computer security education.
Some aspects of computer security education are handled very well, some moderately well, and still others poorly. To understand the nature of the weaknesses and strengths, let’s consider two different levels of INFOSEC education: graduate and undergraduate. At the undergraduate level, teachers tend to focus more on applications of principles and operational concerns than on the derivation and deep analysis of those principles themselves. Those are discussed, but teachers show the students how to apply the principles in very different and important situations. At this level, computer security is typically added to existing courses. For example, most books on operating systems devote a chapter or two to issues of information protection, and networking classes emphasize the need for good cryptographic protocols.
Unfortunately, most of this information is presented as an adjunct to the main topic of the course and driven by that topic, so there is little unity in the material among classes. That is, the operating system class will use principles of operating systems to drive the security mechanisms and techniques discussed, and the discussion of INFOSEC in a networking class draws upon principles of networking far more than principles of security. This is unfortunate because students who take those classes come out with a distorted view of computer security. They do not realize that general principles guide the design of security mechanisms; that in both operating systems and networks, policy is central to the definition and implementation of computer security; and that the classes are exploring two different views of the same fundamental subject. As a result, INFOSEC is seen as much more ad hoc than it is. When these students graduate and begin working in the field of computer science, they will not be able to apply the principles of security to their tasks unless the issues of security arise in the contexts of operating systems or networks. Even then, if the context is very different from that in which the issues arose in class, the students may have trouble with the security aspects of their task!
A classic example arises from network security. Network security is in large part based upon cryptography, mainly because the communications media cannot be secured; you can only protect the cryptographic keys at endpoints. One major corporation, which supplies World Wide Web browsers, understood this very well, and used the powerful RSA cipher to protect data that needed to be secured. So far, so good. But they overlooked the issue of key generation. The “unbreakable” cipher was broken in minutes by a couple of graduate students who figured out how the keys were generated, and simply began regenerating the cryptographic keys until they found ones that deciphered the messages correctly! This type of attack is rarely discussed in networking classes, yet it is a greater threat than failures in the cryptographic protocols. Are we contributing to the existing state of security by the way in which we teach it?
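The flavor of the attack can be shown with a toy sketch (hypothetical; the browser’s actual key-generation code was more elaborate). If keys are derived from a pseudo-random generator seeded with the current time, the effective key space is the handful of plausible timestamps, and an attacker can simply try them all. The fix is a design question -- seed from something the attacker cannot narrow down -- not a question of the cipher or protocol.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Toy "key generation": derive a key from rand() seeded with the
       clock.  The key is only as unpredictable as the seed. */
    static unsigned long make_key(unsigned long seed)
    {
        srand((unsigned int)seed);
        return ((unsigned long)rand() << 16) ^ (unsigned long)rand();
    }

    int main(void)
    {
        unsigned long t = (unsigned long)time(NULL);
        unsigned long key = make_key(t);          /* the "victim's" key */

        /* The attacker guesses the generation time to within a few
           minutes and tries every candidate seed. */
        for (unsigned long guess = t - 300; guess <= t + 300; guess++) {
            if (make_key(guess) == key) {
                printf("key recovered with seed guess %lu\n", guess);
                return 0;
            }
        }
        printf("key not found in window\n");
        return 0;
    }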
Superficiality seems to be common in supporting disciplines such as computer security. As another, related example, consider the discipline of programming. Everyone knows that undergraduates are taught a programming language in their first programming class. But when do they learn how to program? Programming is not simply writing code in response to an assignment, or even to a specification. Programming is crafting a program that meets the specifications, and does more -- it handles errors properly, it checks for potential problems, and basically embodies the four basic principles of robust code [2]:
1. Be paranoid about code you did not write, including library routines; expect them to fail.
2. Assume maximum stupidity; if you’re writing a function, assume the caller will pass invalid parameters or bogus data.
3. Don’t hand out dangerous implements; don’t let the caller see what your internal data structures are, and take pains to protect them from a malicious caller.
4. Worry about cases that “can’t happen;” they will, and when you least expect it.
A second course in programming should hammer these rules into the students. But my experience is that most students do not get taught these rules in such a way that they routinely apply them. This opinion is reflected in Weinberg’s Second Law: “if builders built buildings the way programmers wrote programs, then the first woodpecker to come along would destroy civilization.”
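A classroom-scale sketch of those rules in action (my own illustration, not taken from Elliott’s handout [2]): a small routine that distrusts its caller, expects library calls to fail, and handles the “can’t happen” cases explicitly. In a real module the structure would also be opaque to callers (rule 3); here it is visible only so the sketch stays self-contained.

    #include <stdio.h>
    #include <stdlib.h>

    /* A tiny integer stack written defensively: every argument is
       checked, allocation failures are expected, and the "can't
       happen" cases are handled rather than assumed away. */
    struct stack {
        int *items;
        size_t count;
        size_t capacity;
    };

    struct stack *stack_new(size_t capacity)
    {
        if (capacity == 0 || capacity > 1000000)   /* assume maximum stupidity */
            return NULL;
        struct stack *s = malloc(sizeof(*s));
        if (s == NULL)                             /* expect malloc to fail */
            return NULL;
        s->items = calloc(capacity, sizeof(int));
        if (s->items == NULL) {
            free(s);
            return NULL;
        }
        s->count = 0;
        s->capacity = capacity;
        return s;
    }

    int stack_push(struct stack *s, int value)
    {
        if (s == NULL)                             /* bogus caller data */
            return -1;
        if (s->count >= s->capacity)               /* "can't happen" -- it will */
            return -1;
        s->items[s->count++] = value;
        return 0;
    }

    int main(void)
    {
        struct stack *s = stack_new(4);
        if (s == NULL) {
            fprintf(stderr, "could not create stack\n");
            return 1;
        }
        for (int i = 0; i < 10; i++)               /* pushes past capacity are refused */
            if (stack_push(s, i) != 0)
                fprintf(stderr, "push of %d rejected\n", i);
        free(s->items);
        free(s);
        return 0;
    }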
Undergraduates who wish to study computer security are generally relegated to graduate courses or independent study courses. Very few undergraduate computer security courses are taught now. UC Davis introduced one this year, and it was wildly successful in part because it stressed applications more than theory. (The availability of security-related jobs didn’t hurt, either.) The most popular part seemed to be the lectures and assignments on writing secure code; the course evaluations showed that the brevity of this part was the major complaint!
The purpose of a graduate education is to stretch the current state of the art and the current state of knowledge. So at the graduate level, classes focus more on deriving, arguing, proving, and extrapolating from fundamental principles and results, and on extending the underlying theory, than on applying it. Application is discussed when it shows interesting ramifications of the theory, or leads to interesting and novel extensions. Many of the above comments apply to these classes, the major difference being that a number of academic institutions have graduate-level classes concerned with various aspects of computer security: introductory classes, classes focusing on public policy and business, on formal methods, on databases, on cryptography, on intrusion detection, and on any particular subfield of interest to a faculty member. Classes at this level are far more flexible than undergraduate classes, and far more numerous.
Graduate education typically focuses on the design and specification of secure systems, and their development. The study of multi-level security still constitutes a major part of graduate classwork, because so much research has been done in that area; information flow, covert channels, formal models of security and integrity, and trusted computing bases all embody fundamental principles of security. However, students do not learn much about how to analyze existing systems; they may study the theory behind penetration testing, or the basic models of vulnerability analysis, but they rarely put these ideas into practice by testing an existing system, or modeling one.
Graduate education beyond the Master’s level (and sometimes at the Master’s level) involves more than classes, of course. It also requires research. For a master’s degree, the research must contribute to the state of the art in some way, and for a doctorate, the research must extend or deepen our understanding of some aspect of security. In other words, it must be original and contribute to the body of knowledge constituting the field of INFOSEC.
Academic computer security research is excellent in its exploration of principles. However, performing the implementations and testing, or experiments, to support the research is often a problem. Institutions suffer from inadequate or outdated equipment. Two examples should suffice. At UC Davis, we are conducting research in the network infrastructure, specifically attacks on routers, but so far all our router-related experiments have been through simulation. One router company has generously offered to let us come and use their labs, but as the company is over two hours away, and considerable set-up is required, this is an option we can use only occasionally. Having a router in the lab would allow us to experiment much more quickly, thereby speeding the course of the research. As another example, our vulnerabilities analysis project requires experimenting on a wide variety of computers, so we can determine how to build system-independent tools to detect potential problems. Thus far, we have only two types of systems on our network, so we cannot test or port our tools to other systems. This limits the range of our tools, and our ability to test them thoroughly enough to validate some aspects of the underlying theory.
Laboratories do not run on research alone; an infrastructure (administrative support, system administration, and so forth) is necessary. To some extent, departments try to subsidize this, but my experience is that departments do not have funding adequate for their own needs, let alone those of a growing, or mature, research laboratory. To make this personal, we recently cobbled together funds from nine different government grants to hire an administrator (actually, a technical assistant). This amazing and dynamic individual has taken over a lot of the administrative work Karl Levitt, our postdocs, our graduate students, and I used to do. Now, I spend only 8 hours a week doing administration (report writing, not working on papers or my book; sending information to potential sponsors and current sponsors; preparing the non-technical parts of grant proposals; photocopying; scheduling meetings; updating and installing system software; some web page designing; and so forth); the administrator has taken over the rest. With more administrative support, I could cut this time at least in half, and lift much of the burden of system maintenance from the graduate students (we don’t have a system administrator). I do begrudge the use of that time; I understand the work is necessary, but others could do it, and probably much better than the graduate students and I could. We’d rather be teaching or researching!
Academic institutions excel at teaching principles in computer security courses. They do not teach computer security adequately as a supporting discipline in other courses, because the teachers who teach the lessons, and the authors who write the books, focus on those aspects of computer security that affect their subject. Further, the gap between design and implementation is not covered well, even in most computer security courses.
In terms of research, the work that is done is high quality, but because of the lack of necessary equipment and the lack of adequate infrastructural support, the research proceeds more slowly than necessary, and is performed on equipment that is not state of the art. To emphasize this: the problem is not the theory or modeling, or the experimentation to support them; it is that the experimentation is often done via simulation rather than directly on hardware or systems with the characteristics under study. The work is good, but doing it is frustrating, and implicitly it assumes that the simulations are correct.
This very brief survey outlines the current state of INFOSEC education and research in the academic setting. To see how to improve this situation, or if improvement beyond the obvious is needed, consider what the current practice should be.
Where INFOSEC Development Should Be
The conventional wisdom is that we need to advance our understanding of modeling, security theory, and policy in order to improve the state of computer security drastically. While I agree, I think this misses one obvious point. We don’t use what we know already, in either the procedural or technical arenas.
Think about it. A buffer overflow occurs when you write beyond the end of an array; this can cause the program to stop, or it may simply alter data unrelated to the buffer. We have known how to handle buffer overflows since at least the early 1960s. Compilers can generate code to check bounds. If that’s too inefficient, segmented architectures provide a system-oriented mechanism for preventing overflow: make the buffer occupy a single segment. If you overflow, a segmentation trap occurs. The Burroughs systems of that time strictly delineated instructions and data; building that into the system would also prevent many of these types of attacks. This is not new technology; it’s old technology. The same holds for other flaws.
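What a bounds-checking compiler generates is, in effect, a test before every indexed store. A sketch of the idea (my own, not the output of any particular compiler), with abort() standing in for the segmentation trap:

    #include <stdio.h>
    #include <stdlib.h>

    #define N 16

    /* The check a bounds-checking compiler would insert, in effect,
       before every indexed store: verify the index, trap on violation. */
    static void checked_store(int *arr, size_t len, size_t i, int value)
    {
        if (i >= len) {
            fprintf(stderr, "bounds violation: index %zu, length %zu\n", i, len);
            abort();                        /* analogous to a segmentation trap */
        }
        arr[i] = value;
    }

    int main(void)
    {
        int data[N];

        for (size_t i = 0; i <= N; i++)     /* off-by-one: one write past the end */
            checked_store(data, N, i, 0);   /* the check catches it at i == 16 */
        return 0;
    }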
In fact, we recycle flaws as if we forgot about them! In the UNIX arena, a flaw that occurred in 1993 (and would compromise a system) was the same as one found in 1983; the only difference was the name of the program involved, and how the fault was triggered. Another flaw, in an implementation of the Network File System protocol, was exactly the same as a flaw found in the 1970s in the paging of a Burroughs system. As Yogi Berra said, “it’s déjà vu all over again!” The moral? We don’t learn from our errors. We must do so!
We also do not use what we have learned. Clear statements of policies and specifications aid immeasurably in the design cycle, because they highlight the assumptions about the environment in which the programs will function. They also present the goals of the program or system clearly, and the constraints under which it must function. Even if these are stated informally, the designers will know what is expected, and can design towards that goal. The goals and constraints can include security matters; for example, if the program will be writing sensitive data, confidentiality and integrity constraints should be stated explicitly. This serves two functions: to quantify, somewhat, the security desired, and to provide a metric for subsequent testing. But how often do specifications include this information?
Such improvements in the practice of design methodology would reduce the most pernicious, and embarrassing, part of computer security for vendors: the cycle of catch-and-patch. In this cycle, someone catches a security flaw. After considerable work, the vendor distributes a security patch. The patch typically addresses the specific flaw reported. Then the vendor learns of another security flaw. Out goes another patch. Parts of the system are becoming incrementally more secure in that single flaws are being fixed, but never is the design checked, or the flaws looked at as a whole, so other similar flaws go undiscovered. This is not cost effective -- the payment for security is incremental, and at the end, rather than up front. Worse, the patches may introduce new security holes, or aggravate the security problem (as at least one vendor discovered, to its embarrassment!).
Learning from the past, and planning designs thoroughly, will substantially improve our INFOSEC capability. There is more we can do, though.
We need to learn how to build more precise models. Currently our security-related models are crude, to say the least. We can model some aspects of systems designed for security fairly well, because the hierarchical design methods require a model from which the design is drawn. But modeling the security aspects of existing systems is a nightmare. Worse, modeling is always done at an abstract level. Details deemed irrelevant to the purpose of the model are elided or ignored. Unfortunately, in computer security, the flaws often lie in those details. The Trusted Computer System Evaluation Criteria of the Department of Defense captures this quite well; the class A1, while requiring formal proofs at the specification and design level, requires only that the “[trusted computing base] implementation must be informally shown to be consistent with the [formal top-level specification]” [1]. The next section, discussing what lies beyond A1, includes formal verification of the implementation at the software and hardware levels. Currently such verification is not practical. It needs to become practical, if not directly then through techniques such as property-based testing.
Jeremy Frank made an interesting and perceptive observation about this. The models we build often hide the problems, rather than reveal them. For example, we did some work that showed how to derive criteria for auditing from a system model, and then instantiated it using the Network File System as an example. This work skipped over the deeper question of how to create the model from which the derivation could be done. It’s not intuitive, because part of the analysis is to determine what constitutes a transfer of information. Our work assumed this was known. But given a complex enough system, building a model that correctly captured all such flows could be difficult, and the modelers would be likely to miss something. Perhaps techniques akin to software slicing could help here.
We need to study the formulation and implementation of policy. This includes the areas of audit analysis, configuration management, distribution of code and configuration data, and the development of modular techniques for enforcing and defining policy.
But the most important aspect of INFOSEC is people. Programmers make mistakes. Operators make mistakes. Users make mistakes. We need to build systems that reduce the probability of human errors, and that minimize the effects of those errors. In essence, we must combine the fields of cognitive modeling, human factors, and organizational dynamics with the disciplines of software engineering and formal methods. We must understand how these errors occur, and why. Little to no work has been done in this area.
To summarize:
• We need to integrate security into all aspects of computer science education.
• We need to learn from our mistakes, and not repeat the errors from the past.
• We need to improve how we design systems and programs to account for security constraints, and we need to reduce the number of security patches necessary.
• We need to learn how to abstract models that more precisely reflect the characteristics of the system.
• We need to grasp the subtleties of policy more completely, and provide mechanisms for enforcing it with greater precision and completeness.
• We need to understand how humans interact with systems, how security problems arise from this interaction, and use this knowledge to build systems that minimize the possibility and effects of errors.
So we know where we want to go. How do we get there?
How to Improve INFOSEC Education: Meeting the Challenge
To meet these challenges, we must improve both the quality and delivery of computer security education. We need to see computer security not simply as a separate discipline, but as a multi-disciplinary science which includes elements of operating systems, networking, databases, the theory of computation, programming languages, architecture, and human/computer interaction. The body of knowledge must be incorporated as appropriate into these disciplines.
As an example, consider a second course in programming; this is typically a course in software development. We can begin to educate students in computer security at this stage, without even referring to that discipline! For example, a policy provides design constraints, so in the introductory class, we simply state the requirements of the program and the constraints under which it will function. We let the students figure out the informal specification, and require them to argue that their design meets the specification. By teaching robust programming, implementation problems such as argument checking, buffer overflows, and validation of input data become part of writing good code and are not separate aspects of writing secure programs. By spending time on the role of testing, we imbue students with the idea that systems must be validated. My point is that with a little creativity, we can ameliorate the problem of poor code in security-sensitive software.
The problems at the advanced undergraduate and graduate level are more complex. Universities and colleges provide grounding in principles, theory, the ability to analyze problems and potential solutions, and the ability to find or predict future problems. While industry and government are interested in these, their needs are more immediate. They want students to be educated in the systems they use. They want students who can apply technology to problems, and either solve them or figure out what new technology will solve the problems. At first glance, these roles seem contradictory. On reflection, they are complementary.
The most effective way to teach principles is to help the students discover those principles. Rather than simply stating the idea, enable the students to use systems embodying the idea they are to learn. For example, the concepts of multi-level security are simple in principle, but their use raises a host of other questions involving psychological acceptability, usability, implementation, and so forth. What better way to answer these questions than to give the students exercises on such a computer system?
This is where the marriage of industry, government, and academia can drastically improve INFOSEC education. Students learn that security cannot be provided -- indeed, defined -- in a vacuum. Real problems set parameters for applying existing theory and models, testing them, and determining their usefulness. The environment in which the problem arises sets constraints for solutions. Working with these problems teaches students to analyze not just technical issues but also non-technical issues and influences. They bring together multiple types of problems in a single situation, and show how they affect one another. They show how money, risk management, and risk mitigation all influence the design and implementation of computer systems. Consider the World Wide Web. The simplest solution to the threat of malicious downloadable executable code, such as computer viruses, is to disallow such downloads. That is not practical -- non-technical considerations suggest that, regardless of what is done, people will always download Java applets or ActiveX programs. So, let’s modify that solution -- force the browser to ask the user whether the download should proceed. In theory, great; in practice, most people will always say “yes” or call the vendor and ask how to disable that darned warning. Okay, since that won’t work, how about a “sandbox”, where the downloaded program is executed in a contained environment? Great idea in theory, but in practice, the construction of a universal sandbox, designed to meet all local policies, is impossible. Then let’s change ... but you get the idea.
Academia is changing -- slowly, but changing nonetheless. Academic institutions encourage work on problems that are ill defined or ambiguous; part of the challenge of the research is to make the problems well defined. And institutions are changing the standards by which they evaluate faculty; although the aphorism “publish or perish” is still true, experimental disciplines in computer science are becoming accepted as bona fide disciplines, even though they lead to fewer papers than more theoretical research.
Academia has a duty to educate people so that they can contribute via industry, government, or academia. Most academics, and academic institutions, take this duty very seriously -- it’s an integral part of why we stay in academia, resisting the lures of more money elsewhere. We love to teach; we want our research to be useful. But we need help.
• We need more industry and government participation in selecting research topics. Nothing is more frustrating than solving a problem, only to find it is not really a problem, or the “real world” version of the problem has additional constraints that change the approach drastically. Ways to do this are through partnerships with industry, in which we discuss problems and possible approaches and work together to solve them, and through internships, where members of industry come to academic institutions for a period of time to teach and work on projects with students, and where faculty and students go to industry for periods of time to work on problems of interest to the industry. One of the most common complaints of students is the lack of “real world” experience, and of industry and government, that the students lack “real world” experience. These measures would provide that experience.
• We need long-term funding to provide a stable base for our research. Short-term resources for tackling particular problems are helpful, but the distraction of attempting to find funds to continue our work, and to build a long-term research program, is a drain on our resources. The lack of any infrastructure support aggravates this situation; in order to hire an administrative assistant, we had to get approval from the sponsors of the nine grants from which we drew funds. We’re still short-handed. This is a complaint common to the four major labs, and trying to make up for the lack of support drains energy and time from our research.
More importantly, a stable funding base would give industry, government, and the nation a set of resources upon which they could draw without having to start from scratch. The importance of this cannot be overstated. This base of research and knowledge can provide help and research results to deal with the crisis, and to solve the problems causing the crisis.
• We need state-of-the-art equipment. Our students learn computer science by experimentation and using systems, as well as from lectures and books. The better the equipment, the better they will learn, and the less industry and government will need to train them. And the more directly applicable our research will be.
• Industry and government should fund “blue sky” research and long-term, directed research. Blue sky research is speculative; it may succeed, it may fail, but the body of knowledge that comes out of it will advance the field in some manner. Remember, failure can be just as strong a result as success. Long-term research would allow us to turn our academic resources to problems that we could study thoroughly and attempt to solve in a number of different ways. Both these suggestions would produce an immeasurable amount of research and scholarship, upon which short-term projects could be built.
• Finally, industry and government should realize that demanding short-term deliverables such as software takes academia into an arena it was never meant to be in. Prototypes are built to test theories; they are in no sense production quality code, and often use designs unacceptable to production environments. Remember the software engineering adage “build the first one to throw away, and the second one to test and analyze”? A prototype is the first or second implementation. Once the theory is validated, industry should take the results and re-engineer the system to meet its specific needs, for its specific environment. Focus on the research and the results gleaned from it; we’re very good at doing that. That’s what we can contribute to INFOSEC research.
We have to adapt. We no longer have the luxury of fielding systems without thinking about security issues. Working together, academia, industry, and government can improve the state of INFOSEC education and research. But the meltdown point, the point at which the computer infrastructure is about to fall apart, to become Balkanized through attacks, is almost here. We need to act, and act now.
Everyone bemoans the sorry state of computer security; so far, few seem willing to provide the resources to deal with the fundamental research necessary to improve the state. We should take a lesson from the Good Doctor, Dr. Seuss, who wrote a wonderful book about a youngster running away from his troubles to Solla Sollew, the land where there were no more troubles. But after a long journey, he realizes he will never get there. So he returns home [5]:
But I’ve bought a big bat.
I’m all ready, you see;
Now my troubles are going
To have troubles with me!
May we have the wisdom to deal with our INFOSEC troubles in thesame way.
Acknowledgments. Thanks to Rebecca Bace, Jeremy Frank, Karl Levitt, Vic Maconachy, and Alan Paller for helpful ideas and feedback. The contents of this note represent the opinions of the author, and not necessarily those of anyone else.
References
[1] Department of Defense Trusted Computer System Evaluation Criteria, DOD 5200.28-STD, sec. 4.1, p. 44 (Dec. 1985).
[2] Elliott, C., “How to Write Bomb-Proof Code,” handout for COSC 23, Software Design and Implementation, Department of Mathematics and Computer Science, Dartmouth College (Jan. 1992).
[3] Miller, B., Fredriksen, L., and So, B., “An Empirical Study of the Reliability of UNIX Utilities,” Communications of the ACM 33(12), pp. 33–43 (Dec. 1990).
[4] Saltzer, J. and Schroeder, M., “The Protection of Information in Computer Systems,” Proceedings of the IEEE 63(9), pp. 1278–1308 (Sep. 1975).
[5] Dr. Seuss, I Had Trouble in Getting to Solla Sollew, Random House (1965).
[6] Spafford, E., written statement submitted to the Subcommittee on Technology of the U.S. House of Representatives Committee on Science (Feb. 1997).