Chinese Journal of Computers (计算机学报), Vol. 30, No. 10, October 2007
Received 2007-05-23; revised manuscript received 2007-07-19. SIEWIOREK Daniel P., born in 1946, Ph.D., professor; his research interests include dependable computing, wearable computing, context awareness, and computer-aided design. YANG Xiao-Zong, born in 1939, professor and Ph.D. supervisor; his main research areas are dependable computing and mobile computing. CHILLAREGE Ram, born in 1955, Ph.D.; his research interest is dependable computing. KALBARCZYK Zbigniew T., Ph.D., professor; his research interest is dependable computing.
Industry Trends and Research in Dependable Computing

SIEWIOREK Daniel P. 1)  YANG Xiao-Zong 2)  CHILLAREGE Ram 3)  KALBARCZYK Zbigniew T. 4)

1) (Department of Computer Science and Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh 15213, USA)
2) (School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001)
3) (Chillarege Inc., New York 10566, USA)
4) (Coordinated Science Laboratory, University of Illinois at Urbana-Champaign, Urbana 61801, USA)

Abstract  Experimental research in dependable computing has been carried out for more than 30 years and has achieved remarkable results, particularly in safety-critical fields such as aviation, aerospace, finance, securities, and transportation. To survey the development of dependable computing in terms of both quantity and quality, and to further promote research in the area, this paper analyzes the industrial trends of dependable computing, including: (1) shifting error sources; (2) rapidly increasing complexity; and (3) growth in the total number of computing devices. For each trend, it identifies research technologies that can be applied to finished or experimental products, as well as to the processes by which these products are produced. The study gives a framework that both reflects past research in dependable computing and points out future research needs.

Keywords  dependable computing; experimental research in dependability and security; computing industry trends
CLC number: TP301
Industry Trends and Research in Dependable Computing

SIEWIOREK Daniel P. 1)  YANG Xiao-Zong 2)  CHILLAREGE Ram 3)  KALBARCZYK Zbigniew T. 4)

1) (Department of Computer Science and Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh 15213, USA)
2) (Harbin Institute of Technology, Harbin 150001)
3) (Chillarege Inc., New York 10566, USA)
4) (Coordinated Science Laboratory, University of Illinois at Urbana-Champaign, Urbana 61801, USA)
Abstract  Experimental research in dependable computing has evolved over the past 30 years. To understand the magnitude and nature of this evolution, this paper analyzes industrial trends, namely: (1) Shifting Error Sources, (2) Explosive Complexity, and (3) Global Volume. Under each of these trends, the paper explores research technologies that are applicable either to the finished product or artifact, and the processes that are used to produce products. The study gives a framework to not only reflect on the research of the past, but also project the needs of the future.

Keywords  dependable computing; experimental research in dependability and security; computing industry trends
1 Introduction
For over four decades Moore's Law has been a driving force for the computer industry. Doubling on a yearly basis leads to a three orders of magnitude increase in only a decade. Such large increases in capacity (i.e., number of transistors, processing performance, bits of data storage, and communications bandwidth) require fundamental rethinking of all phases of a product's life cycle, from design through usage through maintenance to replacement. In addition to capacity, Moore's Law also applies to volume. Intel produces more transistors yearly than the number of ants on the planet Earth. Doubling in volume means that every couple of years more computers will be produced than were previously produced in all of history.
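As a quick check on the arithmetic behind these two claims, a doubling per year compounds to roughly three orders of magnitude per decade, and each new period's output exceeds everything produced before it. The tiny sketch below uses purely illustrative values, not figures from the paper.

```python
# Yearly doubling: capacity (or yearly volume) grows by 2**n after n years.
def growth(years: float, doublings_per_year: float = 1.0) -> float:
    """Multiplicative growth under exponential, Moore's-Law-style scaling."""
    return 2.0 ** (doublings_per_year * years)

if __name__ == "__main__":
    print(growth(10))                                    # 1024.0 -> ~3 orders of magnitude per decade
    yearly_output = [growth(n) for n in range(12)]
    # With yearly doubling, a single year's output already exceeds the sum of all earlier years.
    print(yearly_output[11] > sum(yearly_output[:11]))   # True
```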
The IT industry has grown in many directions. While the von Neumann machine is still at the core of the computer concept, the industry that built at best a few thousand machines in 1970 today ships tens of millions annually. Employment changed from a few thousand to a few million. And the breadth of the industry spans technology, manufacturing, software, and IT-enabled services, amounting to a worldwide figure in the range of $2-3T.
This paper identifies three trends in a computer industry fuelled by Moore's Law that have a direct impact on computer system dependability and security: shifting error sources, increased system complexity, and global volume. Some of the trends were identified decades ago and, hence, there is a rich history among the research threads with respect to these trends. Other trends are just emerging and can be used to predict future directions for research among the three threads.
Section 2 provides background and a framework for motivating the three industrial trends. The next three sections describe each trend in turn and how research in dependable and secure systems responded to each trend. Section 9 provides concluding observations.

2 Trends, artifacts & process
From the days of early computers (that employed vacuum tubes to perform logic and arithmetic operations) to today's generation of computing systems, dependability has been considered a fundamental system attribute that determines the system's ability to provide continuous service to the end user. Evidence of these early efforts can be found in multiple publications from the 60's, e.g., [1] and [2]. An excellent review describing the advances of IBM computer systems in the RAS (reliability, availability, and serviceability) area from that time may be found in [3].
An important milestone in the evolution of dependable computing (theory and practice) was the establishment of a technical conference on fault-tolerant computing: the First International Symposium on Fault-Tolerant Computing was held in 1971. This forum has established itself as a primary arena for presentation, discussion, and dissemination of new ideas and concepts in the development of dependable systems.
Over the years fault/error models have evolved along with the advances in system hardware and software. Table 1 summarizes the changes over the last four decades in terms of the technology, error/fault sources, number of users, and their level of sophistication/training.
Table 1  Fault/error sources, level of integration, users, and user sophistication over the last four decades

1970s
  Typical systems: mainframes
  Fault/error sources: hardware
  Integration/complexity: closed systems; highly custom designs, where both hardware and OS are fully controlled by the vendor
  People/users: tens of thousands
  Level of sophistication/training: BS in engineering; 5000 h

1980s
  Typical systems: workstations
  Fault/error sources: hardware, network
  Integration/complexity: mostly closed systems; network connectivity; standard interfaces exposed to users
  People/users: millions
  Level of sophistication/training: basic knowledge in computing; 500 h

1990s
  Typical systems: personal computers
  Fault/error sources: hardware, network, software, human errors
  Integration/complexity: open systems; wide access to network; COTS operating systems; third-party hardware and software
  People/users: tens of millions
  Level of sophistication/training: basic computing literacy; 50-100 h

2000s
  Typical systems: mobile devices, e.g., cellphones, PDAs
  Fault/error sources: hardware, software, wireline/wireless networks, environment (e.g., frequent connectivity loss)
  Integration/complexity: open systems; proprietary and COTS operating systems; highly integrated PC-like systems
  People/users: hundreds of millions
  Level of sophistication/training: training at the time of purchase of a device; hours
The technology has evolved through dramatic changes, starting from mainframes in the 1970s (where highly skilled personnel were required to operate the systems), through an era of workstations in the 1980s and personal computers in the 1990s, to the current generation of mobile/handheld devices (e.g., cellphones, PDAs), where the technology reaches the general public. Today devices must often operate in highly variable and harsh environments. As a result the technology must: (1) hide complexity so that a relatively unsophisticated customer can operate the device and (2) provide continuous operations despite errors/failures.
The initial focus (in the 1970s) was mainly on hardware errors, as the hardware devices were the major cause of problems. The following decade (the 1980s), with the introduction of workstations and their network connectivity, made the network an important additional source of errors. The wide use of personal computers (in the 1990s), executing commodity software, made software a primary source of failures. The current decade can be characterized by the dominance of failures due to the environment and operators.
Our framework is defined by three attributes: trends, artifacts, and process. The trends refer to industry trends that have been taking place and which have a direct impact on dependability. Each trend is distinct and has been consistently present for a substantial period of time, often a couple of decades. We identify three trends and discuss them in considerable detail for the purposes of this paper.
Separate from the trends are artifacts and processes, the other two dimensions of our framework. The artifact is the product of the industry, be it a piece of hardware, software, or a service: for example, a computer, a piece of shrink-wrap software, or a cell phone contract. It is the entity of commerce and defines the work product of engineering effort.
The process is the means to produce an artifact. It is the engineering methods, tools, or labor which the industry employs to create a viable method of manufacturing or development. As we reflect on artifact and process, we recognize that much of engineering and research is often directed to one of the two, or both.
Table 2 links together the three dimensions of our framework, with trends as rows, and artifacts and process as the two columns. At the intersection of each row and column is the subject of our study.
Table 2  Industry trends, artifacts and process

Trend 1: Shifting Sources (failure rates drop in hardware and change to other sources)
  Artifact: monitoring
  Process: failure data analysis; fault injection; raising the level of abstraction

Trend 2: Explosive Complexity (growth in system complexity and users, and shrinking user tolerance to failures)
  Artifact: anomaly detection
  Process: trend analysis & formal methods; model checking; ODC; software reliability testing

Trend 3: Global Volume (high level of integration and emerging open systems, a source of new dimensions in failures)
  Artifact/Process: tools to assess resilience to both malicious and non-malicious errors
Trend 1, Shifting Sources, began in the 1980s, was clearly noticeable towards the end of the 1980s, and by the mid-1990s had caused a major change in the dependability area. Trend 2, Explosive Complexity, began toward the early 1990s, when the cost of computing was dropping substantially and distributed computing was on a growth path. By the mid-1990s the internet boom contributed to an even larger growth. Trend 3, Global Volume, is only at its inception, and can be argued to have begun with the huge increase in small yet powerful devices flooding the market. The cell phone, the PDA, and the availability of wireless digital networks will have their impact.
3 Trend 1: Shifting sources
One of the dominant trends has been the change in failure rates, as well as in the sources of failures that dominate in a particular timeframe. By and large, we can conclude that hardware failure rates are down while the relative contribution of software is up. In addition, as technology matures, the user set changes, and the degree of sophistication in the products increases, new sources of failures become prominent.
Transient faults have traditionally been associated with the corruption of stored data values. This phenomenon was reported as early as 1954 in adverse operating conditions such as near nuclear bomb test sites and later in space applications [4-5]. Continuously decreasing feature sizes and supply voltages of devices reduce capacitive node charge and noise margin; even flip-flop circuits inevitably become susceptible to soft errors [6]. Constantly pushing the processor performance envelope will shortly place us in an unfamiliar realm where logically correct implementations alone cannot ensure correct program execution with sufficient confidence. As a result, vendors of high-availability platforms have long incorporated explicit error detection and correction techniques in their architectures. The basic techniques involve information redundancy (e.g., parity and ECC), space redundancy (achieved by carrying out the same computation on multiple, independent hardware units at the same time and corroborating the redundant results to expose errors), and time redundancy (where redundant computation is obtained by repeating the same operations multiple times on the same hardware).
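To make the three redundancy styles concrete, here is a minimal software-level sketch: a parity bit as information redundancy, a majority vote over three copies as space redundancy, and re-execution with comparison as time redundancy. It illustrates the ideas only, not the hardware mechanisms vendors actually ship.

```python
from collections import Counter

def parity_bit(word: int) -> int:
    """Information redundancy: even-parity bit over a 32-bit word."""
    return bin(word & 0xFFFFFFFF).count("1") % 2

def tmr(compute, x):
    """Space redundancy: run the computation three times and take a majority vote.
    In hardware the three copies would be independent units running concurrently."""
    results = [compute(x) for _ in range(3)]
    value, votes = Counter(results).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("no majority: uncorrectable error")
    return value

def time_redundant(compute, x, repeats=2):
    """Time redundancy: repeat the operation on the same hardware and compare."""
    first = compute(x)
    for _ in range(repeats - 1):
        if compute(x) != first:
            raise RuntimeError("mismatch detected on re-execution")
    return first

if __name__ == "__main__":
    square = lambda v: v * v
    print(parity_bit(0b1011))          # 1 (three data ones, so the parity bit makes the total even)
    print(tmr(square, 7))              # 49
    print(time_redundant(square, 7))   # 49
```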
3.1 ECL to CMOS circuit technology
The significant change in technology over the past decades is not only the increase in speed and reduced power consumption, but also the reliability of the devices. Fig. 1, reproduced from IBM data [7], shows that system outage caused by hardware failure has dropped by two orders of magnitude in two decades.
Fig.1 Failure rate changes in hardware
This dramatic change in reliability has been the driver of a major portion of Trend 1. The shift in circuit technology from ECL (the technology for the 308X/3090 series of mainframes) to CMOS is dramatic in terms of reliability, as shown in the figure. TCM in the figure stands for Thermal Conduction Module, a ceramic multi-chip package that is liquid cooled.
Considering the beginnings of the FTCS conference, which dates to 1971, one can see why the focus in the early years was on hardware fault tolerance. Product dependability was defined by how well one could keep a box running in spite of hardware malfunction, be it permanent or temporary. To combat transient failures, up to 25% of the circuitry was utilized for error detection and correction. These architectures allowed for very high data integrity, with no data path inside the CPU left unchecked. An instruction retry mechanism further increased fault tolerance [8].
It was also becoming clear that the next generation of circuit technology, CMOS, would obsolete ECL. As Fig. 1 illustrates, the MTTF of a high-end machine is over 30 years, almost two orders of magnitude better than that class of machines two decades ago.
The same time period has witnessed the growth of the microprocessor and the PC industry. Today Intel, in the P6 family of processors, brings high-end features to the mass market. All of the P6's internal registers are parity-checked, and the 64-bit path between the CPU core and the Level-2 cache uses ECC. Built-in diagnostic features allow monitoring and reporting on more than 100 events and variables inside the chip, including cache misses, register contents, and occurrences of self-modifying code. The P6 also improves support for checkpointing (i.e., rolling back the machine to a known state in the event of an error); however, the operating system has to be written to take advantage of machine-check interrupts.
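Checkpointing of the kind described above, returning the machine to a known-good state when an error is signalled, can be sketched at the software level. The fault model, state representation, and retry policy below are illustrative assumptions, not Intel's mechanism.

```python
import copy
import random

class TransientFault(Exception):
    """Stands in for a machine-check caused by a transient error."""

def run_with_checkpoints(state, steps, max_retries=3):
    """Execute steps in order; on a fault, roll back to the last checkpoint and retry."""
    for step in steps:
        checkpoint = copy.deepcopy(state)                 # known-good state before the step
        for attempt in range(max_retries):
            try:
                step(state)
                break                                     # step committed, move on
            except TransientFault:
                state.clear()
                state.update(copy.deepcopy(checkpoint))   # roll back and retry
        else:
            raise RuntimeError("permanent failure: retries exhausted")
    return state

def flaky_increment(state):
    if random.random() < 0.3:                             # injected transient fault
        raise TransientFault
    state["counter"] = state.get("counter", 0) + 1

if __name__ == "__main__":
    random.seed(1)
    print(run_with_checkpoints({}, [flaky_increment] * 5))   # all five steps eventually commit
```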
The drop in the failure rates of current COTS (commercial off-the-shelf) processors to 100 FITs [9] (an MTTF of over 1110 years) could suggest that device failures are no longer a major dependability problem. However, as indicated in [10], there is an increase in the significance of other threats, including: (1) susceptibility to environmental interference due to reduced device sizes and power levels of logic (as discussed at the beginning of this section); (2) hardware design faults that are discovered after the design is completed (for example, processors of the Intel P6 family in April 1999 had from 45 to 101 reported design faults, of which about 60% remain uncorrected); and (3) uncertainty about wearout, which may lead to an increase in the failure rate over time. This shows that the benefits of very low device failure rates will only be significant if the likelihood of system failures due to transient faults and design faults can be reduced as well.

3.2 Software failures
One of the consequences of the dropping hardware failure rate is that the other failure modes have become more prominent. Software, which has also been growing in complexity, has gradually contributed to a larger proportion of system outages. Through the 1980s, while fault tolerance methods were being developed for hardware and the incidence rate of hardware failures was dropping, software failures became more prominent. At the same time, the focus on software reliability methods was marginal. In the high-end server business, most of the development budgets were focused on new function, since that was a growth segment all the way into the 1990s. The PC segment was at its inception and the focus was functionality. As a consequence, software failures, a class of problems with damaging effects as significant as a hardware outage that took the entire system down, became evident.
The high-end server industry responded rapidly. Both fault avoidance and fault tolerance techniques were applied. Just like the hardware platforms for the high-end servers, the software operating systems included more and more recovery code. The result is impressive. Two decades later, a high-end IBM server has almost no cold starts in an entire year.
3.3 Planned outage
Faults and failures produce a mental image of uncertainty and catastrophic consequence. While they do happen, they are by far less common in high-end servers. However, high-end computing has a disturbing problem called planned outage, when, on purpose, systems need to be shut down. Planned outage used to be common with installation and maintenance of hardware, and then became common with software updates and maintenance. Databases needed to be reorganized or networks needed to be reconfigured. While the surprise element was not present, the unavailability and disruption of services caused just as much of a problem. With businesses running globally, 24×7 availability was vital, and planned outage accounted for more downtime in the 1990s than unplanned outage.
3.4 Desktop software expectations
The desktop industry in the mid-1980's had been a novelty and enjoyed user tolerance of failures. As time progressed and the dependency on desktop software grew, it acquired the burden of any successful segment that enters the stable stage of its lifecycle [11]. In this stage, the importance of novelty fades and dependability rises. Although we have witnessed significant growth in the dependability of products, there is yet much more to be addressed. The issue is complicated by newer sources of failures that arise, such as viruses and security holes. Global usage and lower training levels among the user set push the demands on dependability further.
4 Dependable system research versus industry Trend 1
Research on dependability has advanced following the changes in hardware and software technologies.
4.1 Fault/error classes
Understanding fault/error models is of primary importance in providing sound methods and techniques for dependability assessment and in developing efficient detection and recovery mechanisms and algorithms. Table 3 gives a generic classification of faults based on their temporal persistence and origin. The fault/error classification in Table 3 is a simplified version of the more comprehensive taxonomy provided in [12].
Table 3  Fault classes

Based on temporal persistence [13]:
  Permanent faults, whose presence is continuous and stable.
  Intermittent faults, whose presence is only occasional, due to unstable hardware or varying hardware and software states, e.g., as a function of load or activity.
  Transient faults, resulting from temporary environmental conditions.

Based on origin [14]:
  Physical faults, stemming from physical phenomena internal to the system, such as threshold changes, shorts, opens, etc., or from external changes, such as environmental, electromagnetic, or vibration effects.
  Human-made faults, which may be either design faults, introduced during system design, modification, or establishment of operating procedures, or interaction faults, which are violations of operating or maintenance procedures.
4.2 Monitoring and emulation of fault sources

Table 4 summarizes the trends in experimental dependability research across four decades, organized into methods and focus of monitoring operational systems and artificial evaluation by fault injection. The individual columns highlight emerging: (1) sources of data being collected from systems in the field and (2) fault/error types being integrated into fault injection tools and environments. Analysis of failure data from operational systems provides insight into the dominant error categories in deployed systems. It also gives valuable feedback for driving fault/error injection experiments. Fault/error injection makes it possible to accelerate failure occurrence in the system and, hence, provides very rapid validation of a prototype design and further guidance for design decisions.
Table 4  Examples of experimental dependability research

Operational life monitoring: crash dumps (1970's); error logs (1980's); natural workloads (1990's); human-computer interaction errors (2000's)
Fault injection: stuck-at (1970's); memory (1980's); API (1990's); security (2000's)
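The fault-injection row of Table 4 runs from stuck-at faults through memory, API, and security-level injections. A minimal software-implemented injection campaign can be sketched as a single bit flip into the workload's input followed by outcome classification; the toy workload, checker, and outcome categories below are assumptions made for illustration, not any particular tool.

```python
import random

def inject_bit_flip(buffer: bytearray) -> tuple[int, int]:
    """Emulate a transient memory fault: flip one randomly chosen bit in the buffer."""
    byte_index = random.randrange(len(buffer))
    bit_index = random.randrange(8)
    buffer[byte_index] ^= 1 << bit_index
    return byte_index, bit_index

def run_campaign(workload, make_input, checker, runs=1000):
    """Repeat: build an input, inject one fault, run the workload, classify the outcome."""
    outcomes = {"masked": 0, "detected": 0, "silent_corruption": 0, "crash": 0}
    for _ in range(runs):
        data = make_input()
        golden = workload(bytes(data))                   # fault-free reference run
        inject_bit_flip(data)
        try:
            result = workload(bytes(data))
            if result == golden:
                outcomes["masked"] += 1
            elif checker(result):
                outcomes["silent_corruption"] += 1       # wrong result that still passed the checker
            else:
                outcomes["detected"] += 1
        except Exception:
            outcomes["crash"] += 1
    return outcomes

if __name__ == "__main__":
    random.seed(0)
    workload = lambda b: sum(b) % 251                    # toy computation under test
    print(run_campaign(workload,
                       make_input=lambda: bytearray(random.randbytes(64)),
                       checker=lambda r: 0 <= r < 251))
```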
4.2.1 Operational life monitoring and failure data analysis
Understanding the characteristics of a fault source evolves through several stages. In order to gain an understanding of the significance of a fault source, initial measurements focus on summarizing the underlying statistical distribution with averages such as the mean time to an event. Since little is known about the fault source, existing measurement frameworks are used to make estimates. This monitoring may be primary (such as analysis of system event logs) or secondary (such as reports from the field). In order to discover more about the statistical properties of the source (such as the distribution type and distribution parameter values), customized error monitoring systems that are sensitive to the fault source, while at the same time filtering out extraneous information on other sources, have to be developed. In the next stage, a deeper semantic understanding of the fault source and how it propagates is used to devise real-time anomaly detection so that the onset of a new fault can be discovered and isolated quickly. This pattern of evolution applies to each fault source, and the depth of understanding of a fault source is directly related to how many stages have been explored.
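The first stage described above, summarizing the underlying distribution with averages such as mean time to an event, reduces to simple statistics over timestamped error records. A sketch over a hypothetical event log follows; the log format, field names, and values are assumed for illustration.

```python
from datetime import datetime
from statistics import mean, stdev

# Hypothetical error-log entries: (timestamp, error source); layout and values are invented.
log = [
    ("2007-01-03 04:12", "disk"), ("2007-01-09 17:40", "memory"),
    ("2007-01-21 02:05", "disk"), ("2007-02-02 11:30", "network"),
    ("2007-02-19 08:55", "disk"),
]

def interarrival_hours(entries):
    """Hours between consecutive error events, sorted by time."""
    times = sorted(datetime.strptime(t, "%Y-%m-%d %H:%M") for t, _ in entries)
    return [(b - a).total_seconds() / 3600 for a, b in zip(times, times[1:])]

gaps = interarrival_hours(log)
print(f"mean time between errors: {mean(gaps):.1f} h, std dev: {stdev(gaps):.1f} h")

# Per-source counts hint at the dominant error category in the deployed system.
counts = {}
for _, src in log:
    counts[src] = counts.get(src, 0) + 1
print(sorted(counts.items(), key=lambda kv: -kv[1]))
```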
Direct monitoring, recording, and analysis of naturally occurring errors and failures in the system can provide valuable information on actual error/failure behavior, identify system bottlenecks, quantify dependability measures, and verify assumptions made in analytical models.
There are three basic elements in an online trend diagnosis system [15]:

Gathering data/sensors. Sensors must be provided to detect, store, and forward performance and error information (e.g., event-log data) to a diagnostic server whose task it is to interpret the information.

Interpreting data/analyzers. Once the system performance and error data have been accumulated, they must be interpreted or analyzed. This interpretation is done under the auspices of expert problem-solving modules embedded in the diagnostic server. The diagnostic server provides profiles of normal system behavior as well as hypotheses about behavior exceptions.

Confirming interpretation/effectors. After the diagnostic server interprets the system performance and error information, a hypothesis must be confirmed (or denied) before issuing warning messages to users or operators. For this purpose, there must be effectors for stimulating the hypothesized condition in the system. Effectors can take the form of diagnostics or exercisers that are down-line loaded to the suspected portion of the system, and then run under special conditions to confirm the fault hypothesis or to narrow its range.
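A minimal rendering of these three elements might reduce the sensors, the diagnostic server's analyzer, and the effector to plain functions, with the profile of normal behavior represented by a running mean and standard deviation. All names, thresholds, and readings below are illustrative assumptions, not part of the system described in [15].

```python
from collections import deque
from statistics import mean, stdev

class DiagnosticServer:
    """Toy online trend diagnosis: sensors feed it, it analyzes, an effector confirms."""

    def __init__(self, window=50, sigma=3.0, effector=None):
        self.history = deque(maxlen=window)   # rolling profile of normal behavior
        self.sigma = sigma
        self.effector = effector              # stand-in for a targeted diagnostic/exerciser

    def sensor_reading(self, value):
        """Gathering data: called by a sensor with a performance or error metric."""
        hypothesis = self.analyze(value)
        self.history.append(value)
        if hypothesis and self.effector and self.effector(value):
            return f"warning: {hypothesis} confirmed by effector"
        return None

    def analyze(self, value):
        """Interpreting data: flag readings far outside the normal profile."""
        if len(self.history) < 10:
            return None                       # not enough data to form a profile yet
        mu, sd = mean(self.history), stdev(self.history)
        if sd and abs(value - mu) > self.sigma * sd:
            return f"anomaly (value={value}, mean={mu:.1f})"
        return None

if __name__ == "__main__":
    server = DiagnosticServer(effector=lambda v: v > 100)   # trivial confirmation rule
    for reading in [10, 11, 9, 10, 12, 11, 10, 9, 11, 10, 10, 250]:
        msg = server.sensor_reading(reading)
        if msg:
            print(msg)
```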
Challenged by the increasing number and severity of malicious attacks, security has become an issue of primary importance in designing dependable systems. Analysis of data on security-related system activities permits the identification of security attacks and the vulnerabilities exploited by the attacker, and enables classification of the attacks. Many classifications of attacks have been tendered, often in taxonomic form. A common basis of these taxonomies is that they have been framed from the perspective of an attacker: they organize attacks with respect to the attacker's goals, such as privilege elevation from user to root (from the well-known Lincoln taxonomy). Taxonomies based on attacker goals are attack-centric; those based on defender goals are defense-centric. Defenders need a way of determining whether or not their detectors will detect a given attack. It is suggested that a defense-centric taxonomy would suit this role more effectively than an attack-centric taxonomy. Research has led to a defense-centric attack taxonomy based on the way that attacks manifest as anomalies in monitored sensor data. Unique manifestations, drawn from 25 attacks, were used to organize the taxonomy, which was validated through exposure to an intrusion-detection system, confirming attack detectability. The taxonomy's predictive utility was compared against that of a well-known extant attack-centric taxonomy. The defense-centric taxonomy was shown to be a more effective predictor of a detector's ability to detect specific attacks, hence informing a defender that a given detector is competent against an entire class of attacks.
4.2.2 Network user interaction failures
Resolving network interoperability problems is difficult and time-consuming. Almost every user has experienced such a problem, either directly or as a by-product of a task they were attempting to complete. Problems may originate from, or be complicated by, system heterogeneity, administrative policies, security practices, and end-user errors or improper mental models.
Advances in network flexibility, self-repair, and reconfiguration will improve underlying performance (Meseguer et al., 2003) but lead to increased complexity. Furthermore, it is likely that the user will be unable to rely on a consistent detailed mental model of network state and topology. As such, new human-computer interaction methods and tools will be required to enhance user awareness and problem resolution.
As an example, consider remote network access, which generates frequent problems and has considerable detrimental impact on user efficiency. Data on remote access trouble tickets covering 2.5 years, from June 2000 through December 2003, were analyzed. The analysis of incident reports is a common and accepted data collection methodology (Wickens, 1995; Salvendy & Carayon, 1997).
During the period in question there was a phase-out of DSL service, an introduction of a VPN option, and a beta test of a licensed dial-up ISP for traveling users. The latter two may explain the slight rise in incoming cases near the end of the sampled period, while the DSL phase-out may explain the small hitch in late 2000. However, it was interesting to note that, in general, the rate of incoming cases was remarkably linear (Fig. 2). This suggests that little progress was made over this period in developing methods of reducing caseload.
Fig.2 Case arrival rate
It is important to note that Hours to Resolve is not a good measure of staff time consumed. It is simply the time from the initial user query to the time the case was closed. The most salient observation upon looking at Problem Type was the high caseload and time sink resulting from phone number queries (Table 5). Also apparent was the high mean time to resolve problems stemming from third-party networks. Problems due to single configuration change events were frequent (22%) and likely the result of shifting policies and network options (i.e., the DSL phase-out, VPN roll-out, and licensed dial-up ISP beta test).
Table 5  Problem type statistics (Hours to Resolve)

                  N     Mean   Std Dev   Sum
Core              69    58     111       4013
Network           45    77     132       3447
Leaf              66    60     133       3957
Single            86    52     104       4490
Phone Number      132   ***    ***       ***
Overall           398   49     110       19431
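The per-category figures in Tables 5 and 6 are straightforward aggregates over the ticket set. A sketch of that computation follows; the ticket records are invented for illustration and the population standard deviation is used for simplicity.

```python
from statistics import mean, pstdev

# Hypothetical trouble tickets: (problem type, hours to resolve); values are made up.
tickets = [
    ("Core", 12.0), ("Core", 96.5), ("Leaf", 4.0),
    ("Phone Number", 0.5), ("Network", 30.0), ("Leaf", 200.0),
]

def summarize(rows):
    """Group tickets by problem type and compute N, Mean, Std Dev, and Sum of hours."""
    by_type = {}
    for kind, hours in rows:
        by_type.setdefault(kind, []).append(hours)
    return {kind: {"N": len(h), "Mean": mean(h), "StdDev": pstdev(h), "Sum": sum(h)}
            for kind, h in by_type.items()}

if __name__ == "__main__":
    for kind, stats in summarize(tickets).items():
        print(kind, stats)
```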
Over 25% of the cases, and 3500 h, were related to phone number requests. Some of these were influenced by time zone effects (e.g., the user is in Asia and staff are only fielding queries during the work day).
A large number of cases were resolved within the first day (Fig. 3). Inspection of the pace of problem resolution shows that Leaf cases linger on much longer than other types.
Fig.3 Resolution by type
A comparison of case sample totals for the Win, Mac, and Li/unix categories is shown in Table 6.
Table 6  Operating system statistics (Hours to Resolve)

                    N     Mean   Std Dev   Sum
Win                 92    61     118       5592
Mac                 22    60     68        1322
Li/unix (no Mac)    23    61     99        1392
Unknown             121   54     108       6493
Mixed               8     139    304       1109
Overall             266   60     118       15908
The curves in Fig. 3 exhibit the typical decay pattern expected in help desk operations.
Perhaps the most interesting observation from Fig. 4 is the almost linear dive to full resolution by Mac users shortly after the first week. This manifested as considerably reduced variability for the Mac category when compared to the other categories.
The high rate of problems associated with the VPN implies many cases were specifically due to problems originating from the use or application of