2005-DRM, trusted computing and operating system architecture


CAS Ranking of SCI Journals in Computer Science (released October 2016)

Journal — ISSN
IEEE TRANSACTIONS ON FUZZY SYSTEMS — 1063-6706
International Journal of Neural Systems — 0129-0657
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE — 0162-8828
IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION — 1089-778X
INTEGRATED COMPUTER-AIDED ENGINEERING — 1069-2509
IEEE Transactions on Cybernetics — 2168-2267
IEEE Transactions on Neural Networks and Learning Systems — 2162-237X
MEDICAL IMAGE ANALYSIS — 1361-8415
Information Fusion — 1566-2535
INTERNATIONAL JOURNAL OF COMPUTER VISION — 0920-5691
IEEE TRANSACTIONS ON IMAGE PROCESSING — 1057-7149
IEEE Computational Intelligence Magazine — 1556-603X
EVOLUTIONARY COMPUTATION — 1063-6560
IEEE INTELLIGENT SYSTEMS — 1541-1672
PATTERN RECOGNITION — 0031-3203
ARTIFICIAL INTELLIGENCE — 0004-3702
KNOWLEDGE-BASED SYSTEMS — 0950-7051
NEURAL NETWORKS — 0893-6080
EXPERT SYSTEMS WITH APPLICATIONS — 0957-4174
Swarm and Evolutionary Computation — 2210-6502
APPLIED SOFT COMPUTING — 1568-4946
DATA MINING AND KNOWLEDGE DISCOVERY — 1384-5810
INTERNATIONAL JOURNAL OF APPROXIMATE REASONING — 0888-613X
SIAM Journal on Imaging Sciences — 1936-4954
DECISION SUPPORT SYSTEMS — 0167-9236
Swarm Intelligence — 1935-3812
Fuzzy Optimization and Decision Making — 1568-4539
IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING — 1041-4347
JOURNAL OF MACHINE LEARNING RESEARCH — 1532-4435
ACM Transactions on Intelligent Systems and Technology — 2157-6904
NEUROCOMPUTING — 0925-2312
ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE — 0952-1976
CHEMOMETRICS AND INTELLIGENT LABORATORY SYSTEMS — 0169-7439
ARTIFICIAL INTELLIGENCE IN MEDICINE — 0933-3657
COMPUTER VISION AND IMAGE UNDERSTANDING — 1077-3142
JOURNAL OF AUTOMATED REASONING — 0168-7433
INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS — 0884-8173
COMPUTATIONAL LINGUISTICS — 0891-2017
ADVANCED ENGINEERING INFORMATICS — 1474-0346
JOURNAL OF INTELLIGENT MANUFACTURING — 0956-5515
Cognitive Computation — 1866-9956
IEEE Transactions on Affective Computing — 1949-3045
JOURNAL OF CHEMOMETRICS — 0886-9383
MECHATRONICS — 0957-4158
IEEE Transactions on Human-Machine Systems — 2168-2291
Semantic Web — 1570-0844
IMAGE AND VISION COMPUTING — 0262-8856
Wiley Interdisciplinary Reviews-Data Mining and Knowledge Discovery — 1942-4787

An Efficient Distributed Verification Protocol for Data Storage Security in Cloud Computing

Syam Kumar P. (Dept. of Computer Science, IFHE (Deemed University), Hyderabad, India); Subramanian R., Thamizh Selvam D. (Dept. of Computer Science, School of Engineering and Technology, Pondicherry University, Puducherry, India)
2013 Second International Conference on Advanced Computing, Networking and Security

Abstract— Data storage is an important application of cloud computing, where clients can remotely store their data in the cloud. By uploading their data to the cloud, clients are relieved of the burden of local data storage and maintenance. This new paradigm of data storage service also introduces new security challenges, one of which is the integrity of the data stored in the cloud. To counter this threat, the client must be able to enlist a Third Party Auditor (TPA), who verifies the integrity of data stored in the cloud with the client's public key on behalf of the client. Existing schemes with a single verifier (TPA) may not scale well for this purpose. In this paper, we propose an Efficient Distributed Verification Protocol (EDVP) to verify the integrity of data in a distributed manner with the support of multiple verifiers (multiple TPAs) instead of a single verifier (TPA). Through extensive security, performance and experimental results, we show that our scheme is more efficient than a single-verifier based scheme.

Keywords: cloud storage, integrity, client, TPA, SUBTPAs, verification, cloud computing.

I. INTRODUCTION

Cloud computing is a large-scale distributed computing paradigm in which a pool of computing resources is available to clients via the Internet. Cloud computing resources are accessible as public utility services, such as processing power, storage, software, and network bandwidth. Cloud storage is a new business solution for remote backup outsourcing, as it offers an abstraction of infinite storage space for clients to host data backups in a pay-as-you-go manner [1]. It helps enterprises and government agencies significantly reduce the financial overhead of data management, since they can archive their data backups remotely with third-party cloud storage providers rather than maintaining local storage on their own. Amazon S3 is a well-known example of such a storage service.

The growth of data storage in the cloud has brought much attention and concern over the security of this data. One important issue with cloud data storage is data integrity verification at untrusted cloud servers. For example, a storage service provider that occasionally experiences Byzantine failures may decide to hide the data loss incidents from clients for its own benefit. More seriously, to save money and storage space, the service provider might neglect to keep, or deliberately delete, rarely accessed data files belonging to ordinary clients. Considering the large size of the outsourced data and the client's constrained resource capability, the main problem can be generalized as: how can the client perform periodic integrity verifications efficiently without a local copy of the data files?

To verify the integrity of data in the cloud without holding a local copy of the data files, several integrity verification protocols have recently been developed under different systems [2-13]. All these protocols verify the integrity of data with a single verifier (TPA), using a challenge-response protocol.
In that verification process, the TPA stores the metadata corresponding to the file blocks, creates a challenge and sends it to the CSP. The CSP generates the integrity proof for the corresponding challenge and sends it back to the TPA. The TPA then checks the response against the previously stored metadata and gives the final audit result to the client. However, in this single-auditor system, if the TPA crashes due to heavy workload, the whole verification process is aborted. In addition, during verification the network traffic near the TPA organization will be very high and may create congestion, so performance degrades in single-auditor verification schemes. We therefore need an efficient distributed protocol to verify the integrity of data in the cloud.

In this paper, we propose an Efficient Distributed Verification Protocol (EDVP) to verify the integrity of data in a distributed manner with the support of multiple verifiers (multiple TPAs) instead of the single verifier (TPA) used in prior work [2-13]. In our protocol, many SUBTPAs work concurrently under a single TPA, and the workload is distributed uniformly among the SUBTPAs so that each SUBTPA verifies over the whole file; if the TPA fails, one of the SUBTPAs acts as TPA. Our protocol detects data corruptions in the cloud more efficiently than single-verifier based protocols. Our design applies the RSA-based Dynamic Public Audit Service for integrity verification of cloud data proposed by Syam et al. [11] in a distributed manner. Here, the n verifiers challenge the servers uniformly, and if m out of n responses are correct we conclude that the integrity of the data is ensured. Among the multiple TPAs, one acts as the main TPA and the rest are SUBTPAs. The main TPA uses all SUBTPAs to detect data corruptions efficiently; if the main TPA fails, one of the SUBTPAs takes over as main TPA. The SUBTPAs do not communicate with each other; they verify the integrity of the stored data in the cloud and the consistency of the provider's responses. The proposed system guarantees atomic operations to all TPAs: the operations each TPA observes from every SUBTPA are consistent, in the sense that its own operations, plus those operations whose effects it sees, have occurred atomically in the same sequence.

We consider a centrally controlled, distributed-data setting in which all SUBTPAs are controlled by the TPA and each SUBTPA can communicate with any cloud data storage server, i.e., a synchronous distributed system with multiple TPAs and servers. Every SUBTPA is connected to a server through a synchronous reliable channel that delivers challenges to the server. The SUBTPAs and the server together are called the parties P. A protocol specifies the behaviour of all parties. An execution of P is a sequence of alternating states and state transitions, called events, which occur according to the specification of the system components.
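Before turning to the trust assumptions, the challenge-response audit round at the heart of both the single-auditor schemes and EDVP can be summarized in a small sketch. The Python fragment below is illustrative only and is not the RSA-based homomorphic scheme of [11]: here the auditor keeps one HMAC tag per block as metadata and the server answers a spot-check by returning the challenged blocks, which the auditor re-tags and compares. All names (`make_tags`, `csp_respond`, `tpa_audit`) are hypothetical.

```python
import hashlib
import hmac
import os
import secrets

def make_tags(key: bytes, blocks: list[bytes]) -> list[bytes]:
    # Auditor-side metadata: one keyed tag per block (index-bound to stop block swapping).
    return [hmac.new(key, i.to_bytes(8, "big") + b, hashlib.sha256).digest()
            for i, b in enumerate(blocks)]

def csp_respond(stored_blocks: list[bytes], challenge: list[int]) -> list[bytes]:
    # The (untrusted) server returns the challenged blocks as its "proof".
    return [stored_blocks[i] for i in challenge]

def tpa_audit(key: bytes, tags: list[bytes], challenge: list[int],
              proof: list[bytes]) -> bool:
    # Recompute tags for the returned blocks and compare with stored metadata.
    for i, blk in zip(challenge, proof):
        expected = hmac.new(key, i.to_bytes(8, "big") + blk, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tags[i]):
            return False
    return True

key = secrets.token_bytes(32)
file_blocks = [os.urandom(256) for _ in range(1000)]   # the outsourced file
tags = make_tags(key, file_blocks)                     # kept by the auditor

challenge = secrets.SystemRandom().sample(range(len(file_blocks)), 20)
proof = csp_respond(file_blocks, challenge)
print("audit passed:", tpa_audit(key, tags, challenge, proof))
```

A real protocol avoids shipping whole blocks back by using homomorphic authenticators, so the proof stays constant-size regardless of how many blocks are challenged.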
All SUBTPAs follow the protocol; in particular, they do not crash. Every SUBTPA has a small amount of local trusted memory, which serves to store distribution keys and authentication values. The server might be faulty or malicious and deviate arbitrarily from the protocol; such behaviour is also called a Byzantine failure. The synchronous system assumption comes down to the following two properties:

1. Synchronous computation. There is a known upper bound on processing delays: the time taken by any process to execute a step is always less than this bound. A step comprises the delivery of a message (possibly nil) sent by some other process, a local computation (possibly involving interaction among several layers of the same process), and the sending of a message to some other process.

2. Synchronous communication. There is a known upper bound on challenge/response transmission delays: the time between the instant at which a challenge is sent and the instant at which the response is delivered by the destination process is less than this bound.

II. RELATED WORK

Bowers et al. [2] introduced a High Availability and Integrity Layer (HAIL) protocol to solve the availability and integrity problems in cloud computing using error-correcting codes and Universal Hash Functions (UHFs). This scheme achieves availability and integrity of data, but supports only private verifiability. To support public verifiability of data integrity, Barsoum et al. [3] proposed a scheme for dynamic multiple data copies over cloud servers, based on multiple replicas; it achieves availability and integrity of data stored in the cloud. Public verification enables a third-party auditor (TPA) to verify the integrity of cloud data with the data owner's public key on behalf of the data owner. Wang et al. [4] designed a scheme enabling public auditability and data dynamics for data storage security in cloud computing using a Merkle Hash Tree (MHT); it guarantees data integrity with efficient dynamic data operations and public verifiability. Similarly, Wang et al. [5] proposed a flexible distributed verification protocol to ensure the dependability, reliability and correctness of outsourced data in the cloud by utilizing homomorphic tokens and distributed erasure-coded data. This scheme allows users to audit the outsourced data with low communication and computation cost, and it simultaneously detects malfunctioning servers. In subsequent work, Wang et al. [6] developed privacy-preserving data storage security for cloud computing; their construction uniquely combines a public-key based homomorphic authenticator with random masking, achieving integrity while keeping the data private from the auditor. Similarly, Hao et al. [7] proposed a privacy-preserving remote data integrity checking protocol with data dynamics and public verifiability; it achieves a deterministic integrity guarantee and does not leak any information to third-party auditors. Zhu et al. [8] designed a dynamic audit service to verify the integrity of outsourced data at untrusted cloud servers; their audit system supports public verifiability and timely abnormality detection with the help of a fragment structure, random sampling and an index hash table. Yang et al. [9] proposed provable data possession for resource-constrained mobile devices in cloud computing.
In their framework, the mobile terminal devices only need to generate some secret keys and random numbers with the help of trusted platform module (TPM) chips; the required computing workload and storage space suit mobile devices through the use of bilinear signatures and Merkle hash trees (MHT), and the scheme aggregates the verification tokens of the data file into one small signature to reduce the communication and storage burden.

Although all these schemes achieve integrity assurance for remote data under different systems, they do not provide strong integrity assurance to clients because their verification processes use pseudorandom sequences. Verification that selects blocks with a pseudorandom sequence may fail to detect modifications of data blocks: since a pseudorandom sequence is not uniform (uncorrelated numbers), it does not cover the entire file while generating the integrity proof for a challenge. Probabilistic integrity checking methods using pseudorandom sequences therefore may not provide strong integrity assurance for users' remotely stored data.

To provide better integrity assurance, Syam et al. [10] proposed a homomorphic distributed verification protocol using the Sobol sequence instead of a pseudorandom sequence [2-9]. Their protocol ensures availability and integrity of data and also detects data corruption efficiently. In subsequent work, Syam et al. [11] described an RSA-based dynamic public audit protocol for integrity verification of data stored in the cloud; this scheme gives probabilistic proofs based on random challenges and, like [10], detects modifications of the file. Similarly, Syam et al. [12] developed an efficient and secure protocol for both confidentiality and integrity of data with public verifiability and dynamic operations; their construction uses Elliptic Curve Cryptography instead of RSA because ECC offers the same security as RSA with a smaller key size. Later, Syam et al. [13] proposed a publicly verifiable dynamic secret sharing protocol for availability, integrity and confidentiality of data. Although all these schemes achieve integrity of remote data under different systems with a single TPA, single-auditor verification protocols rely on one Third Party Auditor (TPA) verifying integrity via a challenge-response protocol, and if that TPA crashes under heavy workload, the whole verification process is aborted.

III. PROBLEM STATEMENT

A. Problem Definition

In cloud data storage, the client stores data in the cloud via the cloud service provider. Once data moves to the cloud, the client has no control over it. Even if the Cloud Service Provider (CSP) provides standard security mechanisms to protect the data from attackers, threats to cloud data storage remain because the data is under the control of a third-party provider: data leakage, data corruption and data loss. Thus, how can a user efficiently and frequently verify whether the cloud server is storing the data correctly and that it has not been tampered with? We note that the client can verify the integrity of data stored in the cloud without having a local copy of the data or any knowledge of the entire data. If clients do not have the time to verify the security of their data stored in the cloud, they can assign this task to a trusted Third Party Auditor (TPA).
The TPA verifies the integrity of data on behalf of clients using their public keys.

B. System Architecture

The network architecture for cloud data storage consists of four parts: the Client, the Cloud Service Provider (CSP), the Third Party Auditors (TPAs) and the SUBTPAs, as depicted in Fig 1.

Fig 1: Cloud Data Storage Architecture

Client: Clients are those who have data to be stored and who access the data with the help of the Cloud Service Provider (CSP). They typically use desktop computers, laptops, mobile phones, tablet computers, etc.

Cloud Service Provider (CSP): Cloud Service Providers have major resources and expertise in building and managing distributed cloud storage servers, and provide applications, infrastructure, hardware and enabling technology to customers over the Internet as a service.

Third Party Auditor (TPA): The Third Party Auditor has expertise and capabilities that users may not have, and verifies the security of cloud data storage on behalf of users.

SUBTPAs: The SUBTPAs verify the integrity of data concurrently under the control of the TPA.

Throughout this paper, the terms verifier and TPA, and server and CSP, are used interchangeably.

C. Security Threats

Cloud data storage mainly faces the challenge of data corruption: the cloud service provider, malicious cloud users or other unauthorized users may be self-interested in altering or deleting user data. Two types of attackers disturb data storage in the cloud:

1) Internal attackers: malicious cloud users and malicious third-party users (either the cloud provider or customer organizations) are self-interested in altering or deleting users' personal data stored in the cloud. Moreover, they may decide to hide data loss caused by server hacks or Byzantine failures to preserve their reputation.

2) External attackers: we assume that an external attacker can compromise all storage servers, so that he can intentionally modify or delete users' data as long as the servers remain internally consistent.

D. Goals

To address the integrity of data stored in cloud computing, we propose an Efficient Distributed Verification Protocol that achieves the following goals:

Integrity: data is stored safely in the cloud and maintained there at all times without any alteration.

Low overhead: the proposed scheme verifies the security of data stored in the cloud with little overhead.

E. Preliminaries and Notations

• f_key(·) — a Sobol Random Function (SRF) indexed by some key, defined as f : {0,1}* × key → GF(2^w).

• π_key — a Sobol Random Permutation (SRP) indexed under a key, defined as π : {0,1}^log2(l) × key → {0,1}^log2(l).

IV. EFFICIENT DISTRIBUTED VERIFICATION PROTOCOL: EDVP

The EDVP protocol is designed on top of the RSA-based Dynamic Public Audit Protocol (RSA-DPAP) proposed by Syam et al. [11]. In EDVP, we concentrate mainly on the verification phase of RSA-DPAP. EDVP contains three phases: 1) key distribution, 2) verification, and 3) integrity validation. The process is as follows: first, the TPA generates the keys and distributes them to the SUBTPAs; then the SUBTPAs verify the integrity of the data and report their results to the main TPA; finally, the main TPA validates integrity by observing the reports from the SUBTPAs.
A. Key Distribution

In the key distribution phase, the TPA generates a random key and distributes it to its SUBTPAs as follows. The TPA first generates the random key using a Sobol Random Function [15]:

    K = f_k(i),  1 ≤ i ≤ n,    (1)

where the function is indexed by some (usually secret) key: f : {0,1}* × key → Z_p. The TPA then employs an (m, n) secret sharing scheme [14] and partitions the random key K into n pieces. To divide K into n pieces, it selects a polynomial a(x) of degree m-1 and computes the n pieces:

    K_i = K + a_1·i + a_2·i^2 + ... + a_{m-1}·i^{m-1}    (2)

    K_i = K + Σ_{j=1}^{m-1} a_j·i^j    (3)

The TPA then chooses n SUBTPAs and distributes the n pieces to them. The key distribution procedure is given in Algorithm 1.

Algorithm 1: Key Distribution
1. The TPA generates a random key K using the Sobol sequence: K = f_k(i).
2. The TPA partitions K into n pieces using the (m, n) secret sharing scheme.
3. The TPA selects the number of SUBTPAs, n, and the threshold value m.
4. for i ← 1 to n do
5.     The TPA sends K_i to SUBTPA_i.
6. end for
7. end

B. Verification Process

In the verification process, all SUBTPAs verify the integrity of the data and report their results to the TPA; if the responses of m SUBTPAs meet the threshold, the TPA declares the integrity of the data valid. At a high level, the protocol operates as follows. The TPA assigns a local timestamp to every SUBTPA operation, and every SUBTPA maintains a timestamp vector T in its trusted memory. At SUBTPA_i, entry T[j] equals the timestamp of the operation most recently executed by SUBTPA_j in the view of SUBTPA_i.

To verify the integrity of the data, each SUBTPA creates a challenge and sends it to the CSP. First, the SUBTPA generates a set of random indices c from the set [1, n] using a Sobol Random Permutation (SRP) with its random key:

    j = π_{K_j}(c),    (4)

where 1 ≤ c ≤ l and π_key(·) is a Sobol Random Permutation indexed under the key: π : {0,1}^log2(l) × key → {0,1}^log2(l). Next, each SUBTPA chooses a fresh random key r_j:

    r_j = f_{k_2}(l)    (5)

It then creates the challenge chal = {j, r_j}, a pair of random indices and random values, sends it to the CSP and waits for the response. The CSP computes a response to each SUBTPA challenge and sends the responses back to the SUBTPAs.

When a SUBTPA receives the response message, it first checks the timestamp, making sure that V ≤ T (using vector comparison) and that V[i] = T[i]. If not, the SUBTPA aborts the operation and halts, since the server has violated the consistency of the service. Otherwise, the SUBTPA commits the operation and checks whether the response (integrity proof) matches the stored metadata. If it matches, it stores TRUE in its table and sends a TRUE message to the TPA; otherwise it stores FALSE and sends a FALSE signal to the TPA for the corrupted file blocks. The detailed verification procedure is given in Algorithm 2.

Algorithm 2: Verification Process
1. Procedure: Verification Process
2. Timestamp T
3. Each SUBTPA_i computes:
4.     j = π_k(c)
5.     generates the Sobol random key r_j
6.     sends chal = (j, r_j) as a challenge to the CSP;
7.     the server computes the proof PR_i and sends it back to the SUBTPAs;
8.     PR_i ← Receive(V);
9.     if (V ≤ T and V[i] = T[i])
10.        return COMMIT, then
11.        if PR_i equals the stored metadata then
12.            return TRUE;
13.            send signal (Packet_j, TRUE_i) to the TPA
14.        else
15.            return FALSE;
16.            send signal (Packet_i, FALSE_i) to the TPA;
17.        end if
18.    else
19.        ABORT and halt the process
20.    end if
21. end
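Algorithm 1's (m, n) splitting is standard Shamir secret sharing [14]. The sketch below is a minimal illustration over a prime field, not the paper's implementation; the prime, the helper names (`split`, `reconstruct`) and the use of Python's `secrets` module are assumptions made for the example.

```python
import secrets
from functools import reduce

P = 2**127 - 1  # a Mersenne prime, large enough for a demo field

def split(key: int, m: int, n: int) -> list[tuple[int, int]]:
    # Polynomial a(x) of degree m-1 with a(0) = key; share i is (i, a(i) mod P),
    # matching K_i = K + a_1*i + ... + a_{m-1}*i^(m-1) in eq. (2).
    coeffs = [key] + [secrets.randbelow(P) for _ in range(m - 1)]
    def a(x: int) -> int:
        return reduce(lambda acc, c: (acc * x + c) % P, reversed(coeffs), 0)
    return [(i, a(i)) for i in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    # Lagrange interpolation at x = 0 over GF(P); any m shares recover the key.
    total = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for k, (xk, _) in enumerate(shares):
            if k != j:
                num = num * (-xk) % P
                den = den * (xj - xk) % P
        total = (total + yj * num * pow(den, P - 2, P)) % P
    return total

K = secrets.randbelow(P)           # the TPA's verification key
shares = split(K, m=3, n=5)        # one piece per SUBTPA
assert reconstruct(shares[:3]) == K
assert reconstruct(shares[2:]) == K
print("any 3 of 5 SUBTPA shares recover K")
```

With this scheme the TPA can refresh K by dealing new shares, and any m surviving SUBTPAs can reconstruct the key if the main TPA fails.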
C. Validating Integrity

To validate the integrity of the data, the TPA receives reports from any subset of m out of the n SUBTPAs. If these m SUBTPAs give the TRUE signal, the TPA decides that the data is not corrupted; otherwise it decides that the data has been corrupted. In the final step, the TPA gives the audit result to the client. Algorithm 3 gives the integrity validation procedure, which generalizes the verification protocol of scheme [11] to the distributed setting.

Algorithm 3: Validating Integrity
1. Procedure: validation(i)
2. The TPA receives the responses from the m SUBTPAs
3. for i ← 1 to m do
4.     if (response == TRUE)
5.         the integrity of the data is valid
6.     else if (response == FALSE)
7.         the integrity is not valid
8.     end if
9. end for
10. end

V. ANALYSIS OF EDVP

In this section, we analyse the security and performance of EDVP.

A. Security Analysis

In the security analysis, we analyse the integrity of the data in terms of detection probability.

Probability of detection: verification activities naturally increase the communication and computational overheads of the system. To improve performance, we use the secret sharing technique [14] to distribute the key K, which requires minimal communication and tractable computational complexity and thus reduces the communication overhead between the TPA and the SUBTPAs. For a new verification, the TPA can change the key K for any SUBTPA and send only the differing part of the multiset elements to that SUBTPA. In addition, we use a probabilistic verification scheme based on Sobol sequences, which are uniform not only over whole sequences but also over every subsequence, so each SUBTPA independently verifies over the entire set of file blocks. There is therefore a high probability of detecting a fault location very quickly, and the Sobol sequence provides a strong integrity proof for the remotely stored data. The detection probability of data corruption in this protocol is the same as in the previous protocols [9-12]. In EDVP, we use a Sobol random sequence generator to generate the file block numbers, because the sequences are uniformly distributed over [0, 1] and cover the whole region; to obtain integers, we multiply the generated sequences by constant powers of two. As a concrete example, consider taking 32 numbers from the Sobol sequence.
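The uniformity argument above can be demonstrated numerically. The sketch below, assuming SciPy (≥ 1.7) is available for its `scipy.stats.qmc.Sobol` generator, compares how many distinct blocks of a 1000-block file are touched by Sobol-derived indices versus plain pseudorandom indices for the same number of draws; the scaling by the block count mirrors the "multiply by constant powers of two" step described above.

```python
import random
from scipy.stats import qmc  # assumed dependency (SciPy >= 1.7)

NUM_BLOCKS = 1000
DRAWS = 512  # a power of two, which Sobol sampling prefers

# Sobol low-discrepancy points in [0, 1), scaled to block indices.
sobol = qmc.Sobol(d=1, scramble=True, seed=7)
sobol_idx = {int(u[0] * NUM_BLOCKS) for u in sobol.random(DRAWS)}

# Ordinary pseudorandom indices for comparison.
rng = random.Random(7)
prng_idx = {rng.randrange(NUM_BLOCKS) for _ in range(DRAWS)}

print(f"distinct blocks covered by Sobol: {len(sobol_idx)}")
print(f"distinct blocks covered by PRNG:  {len(prng_idx)}")
```

Because Sobol points are evenly spread, the Sobol set typically covers noticeably more distinct blocks than the pseudorandom set, which is exactly why the authors argue it yields a higher corruption-detection probability per challenge.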
B. Performance Analysis and Experimental Results

In this section, we evaluate the verification time for validating integrity and compare the experimental results with the previous single-verifier based protocol [11], as shown in Tables 1-3. Tables 4 and 5 show the computation cost of the verifier and the CSP respectively.

Table 1: Verification times (sec) with 5 verifiers when different percentages of 100,000 blocks are corrupted

Corrupted data (%) | Single-verifier protocol [11] | EDVP (5 verifiers)
1%  | 25.99  | 12.14
5%  | 53.23  | 26.55
10% | 70.12  | 38.63
15% | 96.99  | 51.22
20% | 118.83 | 86.44
30% | 135.63 | 102.89
40% | 173.45 | 130.85
50% | 216.11 | 153.81

Table 2: Verification times (sec) with 10 verifiers when different percentages of 100,000 blocks are corrupted

Corrupted data (%) | Single-verifier protocol [11] | EDVP (10 verifiers)
1%  | 25.99  | 08.14
5%  | 53.23  | 18.55
10% | 70.12  | 29.63
15% | 96.99  | 42.22
20% | 118.83 | 56.44
30% | 135.63 | 65.89
40% | 173.45 | 80.85
50% | 216.11 | 98.81

Table 3: Verification times (sec) with 20 verifiers when different percentages of 100,000 blocks are corrupted

Corrupted data (%) | Single-verifier protocol [11] | EDVP (20 verifiers)
1%  | 25.99  | 04.14
5%  | 53.23  | 14.55
10% | 70.12  | 21.63
15% | 96.99  | 32.22
20% | 118.83 | 46.44
30% | 135.63 | 55.89
40% | 173.45 | 68.85
50% | 216.11 | 85.81

From Tables 1-3, we can observe that the verification time for detecting data corruption in the cloud is lower than that of the single-verifier based protocol [11].

Table 4: Verifier computation time (ms) for different file sizes

File size | Single-verifier protocol [11] | EDVP
1 MB  | 148.26  | 80.07
2 MB  | 274.05  | 192.65
4 MB  | 526.25  | 447.23
6 MB  | 784.43  | 653.44
8 MB  | 1083.9  | 820.87
10 MB | 2048.26 | 1620.06

Table 5: CSP computation time (ms) for different file sizes

File size | Single-verifier protocol [11] | EDVP
1 MB  | 488.16 | 356.27
2 MB  | 501.23 | 392.55
4 MB  | 542.11 | 421.11
6 MB  | 572.17 | 448.67
8 MB  | 594.15 | 465.17
10 MB | 640.66 | 496.02

From Tables 4 and 5, we can observe that the computation cost of the verifier and the CSP is lower than in the existing scheme [11].

VI. CONCLUSION

In this paper, we presented the EDVP scheme for verifying the integrity of data stored in the cloud in a distributed manner, with the support of multiple verifiers (multiple TPAs) instead of a single verifier (TPA). In this protocol many SUBTPAs work concurrently under a single TPA, with the workload distributed uniformly among them, so that each SUBTPA verifies the integrity of data over the whole file. Through the security and performance analysis, we have shown that the EDVP verification protocol detects data corruption in the cloud more efficiently than a single-verifier based scheme.

REFERENCES

[1] R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg, and I. Brandic, "Cloud Computing and Emerging IT Platforms: Vision, Hype, and Reality for Delivering Computing as the 5th Utility," Future Generation Computer Systems, vol. 25, no. 6, June 2009, pp. 599-616, Elsevier Science, Amsterdam, The Netherlands.
[2] K. D. Bowers, A. Juels, and A. Oprea, "HAIL: A High-Availability and Integrity Layer for Cloud Storage," Cryptology ePrint Archive, Report 2008/489, 2008.
[3] A. F. Barsoum and M. A. Hasan, "On Verifying Dynamic Multiple Data Copies over Cloud Servers," Technical Report, Department of Electrical and Computer Engineering, University of Waterloo, Ontario, Canada, Aug 2011.
[4] Q. Wang, C. Wang, J. Li, K. Ren, and W. Lou, "Enabling Public Verifiability and Data Dynamics for Storage Security in Cloud Computing," IEEE Trans. Parallel and Distributed Systems, vol. 22, no. 5, May 2011.
[5] C. Wang, Q. Wang, K. Ren, N. Cao, and W. Lou, "Towards Secure and Dependable Storage Services in Cloud Computing," IEEE Trans. Services Computing, vol. 5, no. 2, April-June 2012, pp. 220-232.
[6] C. Wang, K. Ren, W. Lou, and J. Li, "Toward Publicly Auditable Secure Cloud Data Storage Services," IEEE Network, vol. 24, no. 4, 2010, pp. 19-24.
[7] Z. Hao, S. Zhong, and N. Yu, "A Privacy-Preserving Remote Data Integrity Checking Protocol with Data Dynamics and Public Verifiability," IEEE Trans. Knowledge and Data Engineering, vol. 23, no. 9, 2011, pp. 1432-1437.
[8] Y. Zhu, H. Wang, Z. Hu, G. Ahn, H. Hu, and S. S. Yau, "Dynamic Audit Services for Integrity Verification of Outsourced Storages in Clouds," Proc. 26th ACM Symposium on Applied Computing (SAC), March 21-24, 2011, Tunghai University, TaiChung, Taiwan.
[9] J. Yang, H. Wang, J. Wang, C. Tan, and D. Yu, "Provable Data Possession of Resource-Constrained Mobile Devices in Cloud Computing," Journal of Networks, vol. 6, no. 7, July 2011, pp. 1033-1040.
[10] P. Syam Kumar and R. Subramanian, "Homomorphic Distributed Verification Protocol for Ensuring Data Storage in Cloud Computing," Journal of Information, vol. 14, no. 10, Oct 2011, pp. 3465-3476.
[11] P. Syam Kumar and R. Subramanian, "RSA-based Dynamic Public Audit Service for Integrity Verification of Data Storage in Cloud Computing using Sobol Sequence," Special Issue on Security, Privacy and Trust in Cloud Systems, International Journal of Cloud Computing (IJCC), Inderscience Publications, vol. 1, no. 2/3, 2012, pp. 167-200.
[12] P. Syam Kumar and R. Subramanian, "An Efficient and Secure Protocol for Ensuring Data Storage Security in Cloud Computing," International Journal of Computer Science Issues (IJCSI), vol. 8, issue 6, Nov 2011, pp. 261-274.
[13] P. Syam Kumar, Marie Stanislas Ashok, and R. Subramanian, "A Publicly Verifiable Dynamic Secret Sharing Protocol for Secure and Dependable Data Storage in Cloud Computing," communicated for publication in the International Journal of Cloud Applications and Computing (IJCAC).
[14] A. Shamir, "How to Share a Secret," Comm. ACM, vol. 22, 1979.
[15] P. Brately and B. L. Fox, "Algorithm 659: Implementing Sobol's Quasi-random Sequence Generator," ACM Trans. Math. Software, vol. 14, no. 1, 1988, pp. 88-100.

Semiconductor Characters of Passive Film on AISI304 Stainless Steel Surface in Electrolytes during Corrosion Process


WANG Chao (1), ZHI Yu-ming (2), SHENG Min-qi (1), ZHONG Qing-dong (1), CHOU Kuo-chih (1,3), LU Xiong-gang (1), CHU Yu-liang (4)
(1. Shanghai Key Laboratory of Modern Metallurgy and Materials Processing, Shanghai University, Shanghai 200072; 2. Baoshan Iron & Steel Co., Ltd., Shanghai 201900; 3. School of Metallurgical and Ecological Engineering, University of Science and Technology Beijing, Beijing 100083; 4. Analysis and Testing Center, Shanghai University, Shanghai 200444)

Abstract: The semiconductor characters of the passive film on AISI304 stainless steel in electrolyte solutions were studied using potential-capacitance measurement and Mott-Schottky analysis.

The results show that for the passive film in sodium hydroxide solution, the semiconductor-type transition potential shifts negatively as the immersion time increases, while in sulfuric acid and sodium sulfate solutions the transition potential shows no obvious change.

As the corrosion time increases, the charge carrier density of the passive film in solution gradually increases; the carrier density in the three solutions, from lowest to highest, is sodium sulfate, sodium hydroxide, sulfuric acid.

The Mott-Schottky curves of the stainless steel in all three solutions show frequency dispersion, possibly because the generation-recombination of charge carriers in the passive film has a time effect. In sodium hydroxide solution, the main cause of passive film corrosion is the increased conductivity of the chromium-rich layer; in sulfuric acid and sodium sulfate solutions, it is the increased conductivity of the iron-rich layer.

Keywords: Mott-Schottky analysis; stainless steel; corrosion; charge carrier; time effect; conductivity
CLC number: TG172.6  Document code: A  Article ID: 1005-748X(2009)06-0369-04

Abstract (English, as published): The semiconductor characters of AISI304 stainless steel's passive film during the corrosion process in three typical electrolytes were investigated using potential-capacitance measurement and Mott-Schottky analysis. The passive film on the surface of the stainless steel was constructed from two different types of semiconductor film in the electrolytes under study. In sodium hydroxide, the semiconductor-type transition potential showed an obvious negative drift, while in the other two electrolytes the transition potential showed no obvious change with immersion time. The charge carrier density in the passive film increased with immersion time; the carrier density at 1000 Hz in the three solutions can be listed in ascending order as sodium sulfate, sodium hydroxide and sulfuric acid. The frequency dependence that appeared in all Mott-Schottky plots of the AISI304 stainless steel's passive film could be attributed to a time effect in the generation-recombination process of charge carriers. The main cause of the passive film's corrosion on the stainless steel surface in sodium hydroxide solution was the rising conductivity of the chromium-rich layer, while in the other two solutions it was the rising conductivity of the iron-rich layer.
Key words: Mott-Schottky analysis; stainless steel; corrosion; charge carrier; time effect; conductivity

The corrosion resistance of stainless steel is determined to a large extent by the surface passive film. (Received 2008-10-11; revised 2008-11-26. Supported by the National Natural Science Foundation of China (Grants No. 50571059, 50615024), the 2007 Program for New Century Excellent Talents in University of the Ministry of Education (NCET-07-0536), and Ministry of Education Innovation Team project IRT0739.)

Case-Based Teaching Design for Cryptographic Techniques in Blockchain


Journal of Beijing Electronic Science and Technology Institute, Vol. 31, No. 3, Sep. 2023

ZHANG Yanshuo, LI Zehao, CHEN Ying (Beijing Electronic Science and Technology Institute, Beijing 100070)

Abstract: Blockchain is a newly proposed cryptographic technology with broad application scenarios and very high value. In essence, a blockchain is a composite data structure built on multiple cryptographic techniques (including protocols and algorithms). Explaining cryptographic techniques in blockchain-related courses is increasingly important, and how to teach the cryptography used in blockchain applications is a key topic of teaching research. For blockchain technology, this paper combines the characteristics of blockchain with its hash algorithms, digital signature techniques and cryptographic protocols, and gives case-based teaching designs for the cryptographic techniques in blockchain, offering suitable designs for readers in different fields and at different levels. The aim is to help more students in different fields and at different levels understand blockchain and the cryptography inside it, and to help cultivate more talent in related fields.

Keywords: blockchain; cryptographic techniques; case-based; teaching; talent cultivation
CLC number: G64  Document code: A  Article ID: 1672-464X(2023)3-116-126
(Funded by the Fundamental Research Funds for the Central Universities (No. 3282023015) and the national first-class undergraduate program construction project in Information Security. ZHANG Yanshuo (1979-), male, PhD, associate professor, master's supervisor, researching the mathematical theory of cryptography, zhang_yanshuo@163.com; LI Zehao (2000-), male, undergraduate in Information Security (class of 2019); CHEN Ying (1977-), male, PhD, associate professor, master's supervisor, researching data mining and information security.)

1 Introduction

A blockchain [1] is a peer-to-peer public ledger built on the Internet; the participants of the blockchain network jointly add, verify and confirm the ledger data according to the rules of a consensus algorithm. Against the background of today's rapid development of information technology, blockchain applies to many scenarios and domains, can address pain points across many industries and fields, and further improves production efficiency, which gives it strong importance and timeliness. In essence [2], a blockchain is an encrypted data structure, a new technology based on multiple cryptographic techniques, involving many scientific and technical questions from mathematics, cryptography, the Internet and computer programming. Cryptography is the core technology of blockchain: the hash algorithms, digital signatures and cryptographic protocols contained in a blockchain are all core topics of cryptographic research. Studying how to teach the cryptographic techniques in blockchain helps more students learn blockchain and the related cryptography well, and plays an important role in cultivating talent in blockchain and cryptography.

At present, as blockchain keeps finding applications and making breakthroughs, teaching the cryptography inside blockchain becomes ever more important, yet teaching schemes for it are relatively scarce and case-based practical schemes are lacking. A relatively systematic case-based teaching design for the cryptographic techniques in blockchain can therefore strengthen the teaching of blockchain knowledge, help more students at different levels (especially in information security and cryptography) understand the core cryptographic techniques of blockchain, and serve degree-program construction. This paper first gives design ideas for case-based teaching of the cryptographic techniques in blockchain, and then gives case-based teaching schemes for the hash algorithms, digital signatures and cryptographic protocols in blockchain.

2 Common Teaching Methods for Cryptographic Techniques in Blockchain

Typical current teaching of the cryptography in blockchain is non-case-based or only partially case-based; this is the most widely used form and is commonly adopted in all kinds of classes. Typical teaching usually starts from the concept of blockchain itself and briefly introduces the cryptographic algorithms involved. This achieves a certain teaching effect, but it is often not detailed enough and insufficiently targeted. Progressive case-based teaching [3] can improve on the current approach and raise teaching quality.

2.1 Common methods. Teaching of the cryptography in blockchain usually appears in blockchain-related courses. It generally starts from the definition and structure of blockchain: students are first introduced to blockchain knowledge such as its definition, structure and properties, and then the frequently used cryptographic techniques are introduced. A typical lesson first explains the basics of blockchain, entering from blockchain applications to arouse interest, and then explains blockchain itself; when the structure of the blockchain is explained, usually only its properties and functions are discussed, while principles and processes are explained briefly or skipped entirely.

2.2 Problems. Existing common teaching usually takes Bitcoin and other virtual currencies as the main cases, explains mainly blockchain knowledge, and only briefly covers the cryptographic techniques involved. Students gain some understanding of basic blockchain concepts, but their understanding of the cryptographic algorithms stays at the level of properties and applications: they lack a deep grasp of the algorithms and generally cannot understand the principles and processes behind the techniques. We therefore need a systematic teaching scheme that lets students understand blockchain and its cryptography in depth, with a method to follow.

2.3 Introducing case-based teaching. Against this background, we can design case-based teaching schemes for the cryptography in blockchain. Teaching with cases [4] makes the study of the cryptographic techniques in blockchain deeper, more concrete and more systematic, improves understanding, and reinforces what is learned.

3 Case-Based Teaching of Cryptographic Techniques in Blockchain

3.1 About case-based teaching. Case teaching is an open, interactive form of teaching. It usually requires careful planning and preparation in advance, uses specific cases to guide students, and combines a certain amount of theory, reaching theoretical insight and stimulating thinking through the collision of information, knowledge, experience and viewpoints. Teaching with written cases, used for classroom discussion and analysis, benefits students and improves their ability to analyse and solve problems; through analysis and comparison, and through their own and others' thinking, students broaden their horizons and enrich their knowledge. The teaching targets of this paper are mainly students who have already studied cryptography and have some foundation: for them, case-based teaching deepens their understanding of how cryptography is applied in blockchain and of the principles by which cryptographic techniques function there, strengthening their grasp of the connection between cryptography and blockchain. The paper also accommodates students who have never studied cryptography: introducing the material through blockchain and its applications, with brief explanations of the underlying principles and techniques, serves as popular science and stimulates interest, laying a foundation for deeper study of cryptography and blockchain.

3.2 Purpose of case-based teaching. Case-based teaching realizes instruction through case explanation, student practice and exchange. Under the blockchain theme it encourages students to think independently, guides them from valuing knowledge to valuing ability, deepens the impression left by learning, and raises teaching quality through cases. Teaching with cases [5] makes the teaching process simple, interesting and vivid, stimulating interest in learning; it also connects theory with concrete practice. Teaching without cases loses the analysis of practice, and teaching divorced from practice is incomplete; case-based teaching remedies exactly this. It therefore has great pedagogical significance and deserves deeper study and appropriate promotion.

3.3 Design ideas for case-based teaching of the cryptography in blockchain. A blockchain is a complex system involving theory and technology from multiple fields, among which cryptography is the key technology. When teaching the cryptography in blockchain with cases, each session can emphasize different content, such as different cryptographic algorithms, protocols and applications; concentrating each session on one technique makes the teaching of that technique deeper and more concrete and raises students' mastery. The designs must be built around cases, so cases must be written for the techniques being taught as teaching content; a good case should be vivid and should explain and reflect the key content of the technique, for better teaching effect. When writing cases, attention should be paid to tight coupling with the technique being explained, and to vividness and suitability for teaching. The cryptographic techniques in blockchain are diverse; case-based teaching should be designed separately around the core parts and the parts that are easy to understand, which include hash algorithms, ordinary and functional digital signatures, and cryptographic protocols. The following sections give feasible case-based teaching schemes for the hash algorithms, digital signature techniques and cryptographic protocols in blockchain and several concrete topics within them.

4 Case-Based Teaching of Hash Algorithms in Blockchain

Hash algorithms are common and important cryptographic techniques in blockchain and occupy an important place in the case-based teaching of blockchain cryptography. One can first briefly introduce the basic definition and properties of hash algorithms, then explain several common hash algorithms as cases, and finally explain application cases of hash algorithms in blockchain.
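Before moving to the definitions in the next subsection, a tiny in-class demo helps make the fixed-length-output and avalanche properties tangible. The following sketch uses only Python's standard `hashlib`; it is an illustration added here, not part of the original lesson plan.

```python
import hashlib

def digest(msg: str) -> str:
    # SHA-256 always yields 256 bits (64 hex chars), whatever the input length.
    return hashlib.sha256(msg.encode("utf-8")).hexdigest()

print(digest("blockchain"))
print(digest("Blockchain"))   # one-character change -> completely different digest
print(digest("a" * 10_000))   # very long input -> still 64 hex characters
```

Running it in class shows that flipping a single character scrambles the whole digest (the avalanche effect), which previews why hashes make blocks tamper-evident.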
4.1 Basic definition and properties of hash algorithms. A hash function is a public function that maps a message M of arbitrary length to a shorter, fixed-length value H(M); it is also called a hashing or digest function, and the value H(M) is called the hash value, digest or message digest. Hash algorithms have fixed-length output, one-wayness and collision resistance. Widely used and relatively secure hash algorithms today include SHA-256 and SM3.

4.2 Applications of hash algorithms in blockchain. Hash algorithms have rich application scenarios in blockchain, roughly covering user address generation, Merkle trees, mining difficulty setting, digital signatures and software release [6][7]. These applications are interesting and widespread, and can serve as the main sources of cases and teaching content.

(1) User address generation. In blockchain systems such as Ethereum, user addresses uniquely distinguish all users, and hashing is used in generating them. Taking Ethereum [8] as an example: a 256-bit random number is first generated as the user's private key; the public key is then computed from the private key on the elliptic curve ECDSA-secp256k1; the Keccak-256 hash of the public key is computed, and the last 20 bytes of the hash are taken as the user address. Deriving the address by hashing is simple and fast, and also allows checking whether the public key is correct.

(2) Merkle tree. A Bitcoin block contains a block header and a set of Bitcoin transactions. The hash values of all transactions in a block form the leaves of the block's Merkle tree, and the root of the Merkle tree is stored in the block header, so all transactions are bound to the header [9]. Through the Merkle tree one obtains the hash of each transaction block and a unique top hash for the block. This structure makes blocks tamper-evident and protects the integrity of the data on the chain.

(3) Mining difficulty setting. Bitcoin difficulty is a measure of how hard mining is [10], i.e., how hard it is to compute a hash below a given target: difficulty = difficulty_1_target / current_target, where difficulty_1_target is 256 bits long with the first 32 bits 0 and the remaining bits 1, and current_target is the target hash of the current block, compressed and then stored in the block. A block's hash must be smaller than the given target for mining to succeed. Setting the mining difficulty via the target hash plays an important role in proof of work [11]: in Bitcoin's proof-of-work system, miners use high-performance machines to keep adjusting a value and computing hashes until the resulting hash has the required number of leading zero bits, at which point mining succeeds and the corresponding reward is obtained.

4.3 A teaching case for hash algorithms in blockchain. To help students understand and master the principles and roles of hash algorithms in blockchain, this section proposes a case-based teaching scheme with the following steps:

(1) Introduce the basic concept and characteristics of hash algorithms, common hash functions such as SHA-256 and MD5, and background such as the history, basic design and structure of hash functions.

(2) Demonstrate computing hash functions with online tools or a programming language, and observe the outputs for different inputs. Here Figure 1 can be used to show students that hash functions are one-way: a bit string of any length generates a fixed-length bit string, while recovering the original content of any length from that fixed-length string is practically impossible.

Figure 1: One-wayness of hash algorithms

(3) Explain the data-protection role of hashing in blockchain: show that each block contains the hash of the previous block as well as the hash of its own data, forming an irreversible, tamper-evident chain structure.

(4) Explain the role of hashing in generating transaction addresses: show how a public/private key pair and repeated hashing yield a unique transaction address that is hard to invert.

(5) Explain the role of hashing in building the Merkle tree: split the content into multiple parts, hash each part separately, then merge pairs of hashes and hash again until only one hash remains. Figure 2 can be used to show how climbing the tree by concatenating two child hashes up to the root makes it possible to locate any transaction quickly and check whether its data has been tampered with.

Figure 2: Merkle tree

(6) Explain the role of hashing in proof of work: show how repeatedly changing the random Nonce value searches for a valid hash meeting the target difficulty (the number of leading zeros), earning the mining reward. This step can be demonstrated with a program that displays the hash computed at each attempt; by changing the number of required leading zeros and comparing the time needed to find a suitable Nonce, students experience how the leading-zero count affects mining difficulty.

Through such a case-based scheme, students get to know and explore the hash algorithms in blockchain from the concrete to the abstract, deepen understanding and memory through hands-on operation and practice, and improve their ability in problem solving and creative thinking.

5 Case-Based Teaching of Digital Signature Techniques in Blockchain

Digital signatures [11] encrypt a digest of the data with the private key of an asymmetric algorithm, and the verifier checks it with the public key, guaranteeing the integrity and non-repudiation of the data; they serve the same function as handwritten signatures in real life and have very important and wide application in blockchain. For this part, teaching can first explain the concept and then walk through the signing process as the case.

5.1 Definition and properties of digital signatures. A digital signature can be understood as data appended to a data unit, or a cryptographic transformation applied to a data unit, that allows the recipient of the data unit to confirm its source and integrity and protects the data against forgery. Digital signatures are unforgeable, non-repudiable, uncopyable and tamper-evident.

5.2 Main flow of digital signatures in blockchain and their applications. Digital signing uses a hash algorithm in its process. Before signing, the signer holds a generated public/private key pair; the public key is published while the private key is kept solely by the signer. To sign, the signer first hashes the message to be signed, then encrypts (signs) the hash with the private key, and sends the message together with the signature to the receiver. The receiver hashes the received message, decrypts the signature with the sender's public key, and compares the result with the locally computed hash; if they match, the signature is valid. Today digital signatures are widely used in blockchain and in website authentication, e-mail, information bulletins, software downloads, public-key certificates, SSL/TLS and other scenarios. In case teaching, several of these scenarios can be combined for livelier lessons, and students can also collect material in their spare time to find applications of digital signatures, as practice and as deeper case study.

5.3 A teaching case for digital signatures in blockchain. In blockchain, digital signatures are an important foundation of decentralization, secure transactions and consensus mechanisms. This section introduces the principle and application of digital signatures in blockchain through a simple case; in class, students can role-play the parties to experience the whole signing process and understand the technique more deeply. Suppose two users A and B want to transact on a blockchain: A transfers 10 bitcoins to B. To guarantee the validity and tamper-evidence of the transaction they use digital signatures, as follows:

(1) A generates a key pair, a public key and a private key. The public key can be published and is used to verify A's identity; the private key must be kept secret and is used to sign the message.

(2) A constructs the transaction message, containing the two parties' addresses, the transfer amount, a timestamp and so on, and signs this message with the private key, obtaining a digital signature.

(3) A broadcasts the transaction message and the digital signature to the blockchain network.

(4) On receiving them, B uses A's public key to check the digital signature against the received transaction message. If they agree, the transaction was indeed issued by A and has not been tampered with.

(5) Having verified it, B accepts the transaction and records it in his own ledger.

(6) Other nodes can verify the transaction in the same way and write it into a newly generated block.

(7) Finally, with Figure 3, explain in detail the roles of digital signatures in blockchain and guide students to summarize: they guarantee the authenticity and non-repudiation of the sender's identity; they guarantee the integrity and tamper-evidence of the message content; and they enable decentralization, secure transactions and consensus.

Figure 3: Main flow of a digital signature

In teaching, walking through the case step by step lets students appreciate the role of each step and the idea and function of digital signatures. This case gives students a foundation in digital signatures, on which they can reach a deeper understanding of the concrete applications and characteristics of signatures in blockchain. Blockchains, however, do not use only this kind of ordinary signature; they also apply functional signature techniques. For students at a higher level, deeper case-based teaching can draw on the functional signatures in blockchain.
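Before moving on to functional signatures, the A-to-B flow of Section 5.3 can be rehearsed in code. The sketch below is a minimal illustration, assuming the third-party `cryptography` package is installed; it uses ECDSA over secp256k1 (the curve Bitcoin uses), and the transaction encoding is a made-up string, not a real Bitcoin transaction format.

```python
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

# (1) A generates a key pair on secp256k1.
private_key = ec.generate_private_key(ec.SECP256K1())
public_key = private_key.public_key()

# (2) A constructs and signs the transaction message.
tx = b"from=A;to=B;amount=10BTC;ts=1700000000"
signature = private_key.sign(tx, ec.ECDSA(hashes.SHA256()))

# (4) B verifies the broadcast (tx, signature) with A's public key.
try:
    public_key.verify(signature, tx, ec.ECDSA(hashes.SHA256()))
    print("valid: issued by A and unmodified")
except InvalidSignature:
    print("invalid: forged or tampered")

# A one-byte tamper makes verification fail, which is the point of step (4).
tampered = tx.replace(b"10BTC", b"99BTC")
try:
    public_key.verify(signature, tampered, ec.ECDSA(hashes.SHA256()))
except InvalidSignature:
    print("tampered transaction rejected")
```

Letting students flip a byte and watch verification fail mirrors the role-play in step (4).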
5.4 Functional signature techniques in blockchain and a teaching case. Besides ordinary signatures, blockchains also use special signature techniques with additional properties [12]: some blockchain systems adopt group signatures, ring signatures or blind signatures to guarantee certain functions and properties. For students with spare capacity, these topics allow further, deeper teaching, giving a more thorough and varied understanding of the signature techniques in blockchain. This part can be taught mainly through simple explanation and concept building, aiming at a basic grasp of these special signatures. The following material on ring and blind signatures can be used for a brief introduction, combined with current state-of-the-art techniques.

5.4.1 Ring signatures. Ring signatures [13] were first proposed by Rivest et al.; a ring has only members and no manager, and no cooperation among ring members is needed. An entertaining story can introduce the topic: suppose Bob is a cabinet member who wants to reveal the prime minister's corruption to a journalist. He must convince the journalist that the message comes from a cabinet member, yet he does not want to reveal his own identity (guaranteeing anonymity), lest the prime minister retaliate. Ordinary signatures and group signatures clearly cannot solve this problem; ring signatures, with no manager and no member cooperation, were born to solve exactly it. The main flow is key generation, signing and verification, and the scheme is correct, unconditionally anonymous and unforgeable.

Ring signatures also have many scenarios and wide application in blockchain. Some blockchain transactions require untraceability and unlinkability: untraceability means that for each transaction input, all possible initiators are equally likely; unlinkability means that for any two transaction outputs, one cannot prove whether they were sent to the same user. To achieve this, CryptoNote adopts one-time public/private key pairs: the initiator derives a fresh one-time public key from the recipient's long-term public key; only the recipient can compute the corresponding private key, and other users cannot link the new key to the recipient's long-term public key. This guarantees a different receiving address for every transaction to the same recipient, while the sender can construct the transaction independently without being told a new public key by the recipient.

5.4.2 Blind signatures. The concept of the blind signature was first proposed by David Chaum in 1982: the signer signs a message from the sender without being able to learn the concrete content of the message. The concept can be taught with the following case: the sender puts a document into an envelope lined with carbon paper, and the signer signs on the envelope without knowing the document's content; this is the idea of blind signing. Blind signatures were proposed to solve the anonymity problem of electronic cash. Current mobile payment tools such as WeChat Pay and Alipay cannot make payment information anonymous: the initiator and recipient of a transaction are traceable. A bank issuing electronic cash with blind signatures can achieve this anonymity; Figure 4 sketches the flow of a bank issuing electronic cash via blind signatures.

Figure 4: A bank issuing electronic cash via blind signatures

In teaching, typical blockchain techniques that adopt blind signatures can be explained; coin mixing is the typical one. Blockchain addresses are generated by users themselves, are unrelated to the users' identity information, and need no third party to create or use, so compared with traditional accounts (for example, bank card numbers) blockchain addresses are fairly anonymous. However, the correlations between blockchain transactions can be used to infer sensitive information: all blockchain data is stored in a public global ledger, and when users participate in blockchain business with their addresses they may leak sensitive information, such as the propagation trajectory of a transaction at the network layer. Mixing protocols that employ blind signatures, such as the Blindcoin protocol, guarantee that no passive adversary can link input/output address pairs within a given mix; Blindcoin thus achieves k-anonymity within the set of all non-malicious users participating in the mix at the same time. Apart from the mixing server's resources, there is no limit on how many users can mix simultaneously, so the more participants, the larger the anonymity set. By hiding the participants' input/output address mapping with a blind signature scheme, anonymity against the mixing service itself is achieved.

6 Case-Based Teaching of Cryptographic Protocols in Blockchain

A cryptographic protocol is a series of steps agreed by two or more participants to accomplish a particular task in cryptographic communication. Blockchain technology involves many cryptographic protocols and related ideas, such as zero-knowledge proofs and key management. Taking zero-knowledge proofs and key management as examples, this section teaches the cryptographic protocols in blockchain through cases.

6.1 Zero-knowledge proofs in blockchain

6.1.1 Introduction to zero-knowledge proofs. A zero-knowledge proof [14] is a protocol by which a prover can convince a verifier that it possesses some knowledge or satisfies some condition without disclosing any useful information. Zero-knowledge proofs have three basic properties: completeness, soundness and zero-knowledge. They come in interactive and non-interactive types; the former needs multiple rounds of communication, the latter only one.

6.1.2 Zero-knowledge proofs in blockchain. Blockchain is a distributed, tamper-evident, decentralized technology for data storage and transactions that enables open, transparent, trustworthy value transfer without third-party intermediaries. Its openness, however, also challenges privacy protection: anyone can view the transaction records and account balances on the chain, which may expose users' identity information, financial situation, business secrets and other sensitive data. To solve this problem, zero-knowledge proofs were introduced into the blockchain field [15]: in blockchain systems, zero-knowledge proofs can be used to realize private transactions, private smart contracts, private identity authentication and other functions.
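A classroom-sized example of an interactive zero-knowledge proof is Schnorr's identification protocol: the prover convinces the verifier that she knows the discrete logarithm x of y = g^x mod p without revealing x. The sketch below uses deliberately tiny, hard-coded parameters (p = 167 = 2·83 + 1, with a subgroup of prime order q = 83) chosen for the example; they are far too small for real security.

```python
import secrets

p, q, g = 167, 83, 4  # toy group: g generates the order-83 subgroup of Z_167*

# Prover's secret x and public value y = g^x mod p.
x = secrets.randbelow(q - 1) + 1
y = pow(g, x, p)

def schnorr_round() -> bool:
    # Commit: prover picks random r, sends t = g^r.
    r = secrets.randbelow(q)
    t = pow(g, r, p)
    # Challenge: verifier sends random c.
    c = secrets.randbelow(q)
    # Response: prover sends s = r + c*x mod q (x itself never leaves the prover).
    s = (r + c * x) % q
    # Check: g^s == t * y^c (mod p) holds iff the response was built from x.
    return pow(g, s, p) == (t * pow(y, c, p)) % p

# Completeness: an honest prover passes every round.
print(all(schnorr_round() for _ in range(20)))
```

A cheating prover without x passes a round only with probability 1/q, and the transcript (t, c, s) can be simulated without knowing x, which is what makes the protocol zero-knowledge; non-interactive variants replace the verifier's challenge with a hash of the commitment (Fiat-Shamir), the bridge to the zero-knowledge proofs used in blockchain systems.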

Peking University Graduate Admissions — Institute of Computer Science and Technology, Advisor Profile: Huang Weiping, Associate Researcher


Huang Weiping, Associate Researcher
Huang Weiping, male, born 1964, associate researcher, graduated from Peking University in 1989 with a Master of Science degree.

He works mainly on the research and development of raster image processors (RIPs), has studied several page description languages (BDPDL, PostScript, PDF, PPML, XPS) in depth, and has participated in and led the development of RIP products supporting PostScript 3, PDF 1.5 and PPML/VDX.

He won the first prize of Peking University's 4th Science and Technology Achievement Award and the first prize of the Beijing Science and Technology Progress Award, and personally received Peking University's Kodak Award for 2002.

Research directions:
· High-speed raster image processing, focusing on the development of RIP products based on page description formats such as PostScript, PDF, PPML and XPS;
· Variable data printing technology
Main research achievements:
· Participated in the Founder color newspaper publishing system project, responsible for extending the Peking University page description language BDPDL, which originally supported only black-and-white description, to color description, implemented jointly with others in the BDPDL interpreter;
· Participated in and led the development of a cross-platform PostScript 3 RIP kernel and the corresponding RIP products, establishing the dominant position of Founder RIP products in China;
· Led the development of the EagleRIP raster image processor for Western markets in North America and Europe, with thousands of copies sold;
· Led the development of a RIP kernel that directly interprets PDF 1.5, successfully applied in high-end color print controllers;
· Led the development of a RIP kernel supporting the variable data printing standard PPML/VDX.

Address: 59 Zhongguancun Street, Haidian District, Beijing

Zeng Xiangquan — exams: ① English/Russian/Japanese/German/French ② Labor Economics ③ Modern Human Resource Management; other supervisors: Yao Yuqun, Yang Weiguo, Zhao Zhong, Chang Kai. Supplementary exams: ① Western Economics ② Labor Law.
Sun Jianmin — exams: ① English/Russian/Japanese/German/French ② Organizational Behavior ③ Modern Human Resource Management; other supervisors: Zhang Lihua, Peng Jianfeng, Cheng Yanyuan, Zhou Wenxia, Lin Xinqi. Supplementary exams: ① Principles of Management ② Research Methods in the Social Sciences.

2010 Doctoral Admissions Catalogue for Students from Hong Kong, Macao and Taiwan
Institution code: 10002; Address: 59 Zhongguancun Street, Haidian District, Beijing; Postcode: 100872; Contact: Graduate Admissions Office, tel. 010-62513113, Liu Yu.
(Columns: program code/name and research directions; supervisors; examination subjects; supplementary subjects.)

120404 Social Security — directions: 01 Social security theory and policy (Zheng Gongcheng); 02 Chinese social security; 03-05 Social security theory and policy (Pan Jintang, Sun Shuhan, Qiu Yulin). Exams: ① English/Russian/Japanese/German/French ② Chinese social security policy ③ Social security studies. Supplementary: ① Principles of Management ② Labor Economics.

100700 Law School. 030101 Jurisprudence — directions: 01 Comparative law; 02 Sociology of law; 03 Judicial theory; 04 Legal system and public policy; 05 Legislation studies; 06 Human rights law theory; 07 Modern Western legal philosophy; 08 Comparative law; 09 Legal philosophy; 10 Theory of socialist law; 11 Legal philosophy; 12 Modern Western legal philosophy. 030102 Legal History — directions 01-12, alternating Chinese legal history, comparative legal culture and foreign legal history. 030103 Constitutional and Administrative Law — directions: 01 Basic theory of constitutional law; 02 Comparative administrative law.

100300 School of Economics. 020201 National Economics — directions: 01 Socio-economic development strategy and planning (Liu Rui); 02 Industrial structure and industrial policy; 03 Macroeconomic management (Gu Haibing); 04 Quantitative methods and models of macroeconomic analysis; 05 Macroeconomic management (Chen Zhang); 06 Quantitative methods and models of macroeconomic analysis. Exams: ① English/Russian/Japanese/German/French ② Socialist economic theory ③ National economic management. Supplementary: ① Western Economics ② International Economics. 020206 International Trade — directions 01-03: International trade.

100400 School of Finance. 020203 Public Finance — directions: 01 Fiscal theory and policy; 02 Macroeconomic theory and policy; 03 Fiscal theory and policy; 04 Tax systems and international taxation; 05 Quantitative public finance; 06 Tax theory and practice; 07 Public finance and public policy design; 08 Fiscal decentralization and local governance structure design; 09 Local public finance theory and empirics; 10 Income distribution theory and empirics. 020204 Finance — directions: 01 Modern securities investment theory; 02 Investment banking; 03 Financial regulation; 04 International financial markets; 05 Banking theory and practice; 06 Financial institutions and financial development; 07 Credit management research; 08 Financial theory and policy.

Computer Technology (085211)


I. About the field. The program obtained the right to confer master's degrees in Computer Application Technology in 2003 and began admissions; it obtained the first-level discipline authorization in Computer Science and Technology in 2010, and the right to confer professional master's degrees in Computer Technology in 2011, with admissions from that year. Joint graduate practice bases have been established with enterprises such as China Tietong's Jiaozuo branch and Shanghai Puhua Chengxin Information Co., Ltd., and strengths and distinctive features have formed in computer network applications, embedded systems and robotics, digital image processing, and digital simulation for engineering. Graduates mainly take technical positions such as application development related to this discipline.

II. Training objectives. To train composite, application-oriented, high-level computer professionals who meet the needs of national and local economic and social development; who have a solid and broad theoretical foundation in computer technology and systematically master advanced technical methods and modern technical means of computing; who can independently engage in scientific research or undertake specialized technical work and, through interdisciplinary work, apply computer technology to a variety of research and application problems; who have some foreign-language listening and speaking ability; and who are physically and mentally healthy.

III. Main research directions.
1. Computer networks and communication technology: studies network architecture, protocols and performance, network and information security, and wireless networks; explores the development directions and application prospects of network and communication technology and develops application systems based on the relevant techniques, covering Internet technology, network development and management, IoT technology and applications, and cloud computing and its applications, with emphasis on solving practical problems related to networks and communications.
2. Intelligent measurement and control technology: studies the theory and technology of intelligent systems, including system modeling, control and optimization, networked control, personnel positioning, safety-oriented intelligent measurement and control, robotics, automatic control systems and artificial intelligence, wireless sensor networks, RFID, and embedded systems and their applications, with emphasis on solving practical measurement and control problems.
3. Information processing and digital simulation technology: studies the basic theory, algorithms and techniques of digital graphics and image processing and computer-aided design, database and data mining technology, and mine informatization and its applications, including information system design and management, virtual and augmented reality, digital mines and geographic information systems, and the integration and application of CAD with manufacturing informatization, with emphasis on solving practical problems in digital graphics/imaging and digital mine technology.

IV. Duration. The program lasts 3 years; the study period may be extended by at most 1 year.

Catalogue of Doctoral and Master's Disciplines

Public Administration (1204): 120401 Administrative Management; 120403 Education Economy and Management.
Library, Information and Archives Management (1205): 120502 Information Science.

Summary (table flattened in extraction; headers and values reproduced as given): discipline categories; first-level discipline authorization points; second-level and first-level discipline levels (master's/doctoral); postdoctoral research stations; national key disciplines; Jiangsu provincial key disciplines; master's programs; doctoral programs — values: 8, 37, 112+15, 37+15, 20, 10, 12, 2, 1+2, 4, 10.
Information and Communication Engineering (0810):
- 081001 Communication and Information Systems
- 081002 Signal and Information Processing
- 081020 Avionics Information Technology (2003)
- 081021 Detection and Imaging (2003)
- 081022 Integrated Circuit Design (2004)
Control Science and Engineering (0811):
- 081101 Control Theory and Control Engineering
- 081102 Detection Technology and Automation Devices
Notes: 2. The two national key first-level disciplines (Mechanics; Aerospace Science and Technology) and one national key second-level discipline (Mechanical Manufacturing and Automation) were approved by the Ministry of Education in August 2007; the two national key cultivated disciplines (Power Electronics and Electric Drive; Navigation, Guidance and Control) were approved by the Ministry of Education in November 2007. The ten Jiangsu provincial key disciplines were approved by the Provincial Department of Education in July 2006, and the four Jiangsu provincial key first-level disciplines (Mechanical Engineering; Electrical Engineering; Control Science and Engineering; Management Science and Engineering) in June 2008. There are also 15 national-defense characteristic disciplines, approved by the Commission of Science, Technology and Industry for National Defense in June 2007.

[Computer Science] Digital Rights Management — hot research keywords in journal publications, recommended by year (compiled 2014-07-22)


2008: standards (1); digital rights management (1); key technologies (1); interoperability (1).
2009: digital rights management (2); group key technology (1); home networks (1); trusted computing (1); interoperability (1); RSA algorithm (1); DRM module (1).
2010: remote attestation (2); authentication protocol (1); fragile watermarking (1); digital signature (1); multimedia file encryption/decryption (1); trusted computing (1); SKAE (1).
2011: digital rights management (3); license certificate (1); rights expression language (1); data encryption (1); post-attack defense strength (1); analytic hierarchy process (1); security model (1); security evaluation (1); trusted computing (1); usage control (1); MP3 (1).
2012: copyright protection (2); remote attestation (1); mobile digital publishing (1); copyright watermarking (1); digital rights management (1); digital content distribution (1); fingerprint watermarking (1); security protocols (1); multimedia social networks (1); trusted computing (1); traitor tracing (1); usage control (1).
2013: copyright protection (1); digital rights management (1); digital watermarking (1); digital fingerprinting (1); multicast communication (1); outsourced databases (1); collusion attacks (1); information hiding (1); USBKey (1).
2014: mobile multimedia (1); digital rights management (1); digital rights sharing (1); Android (1).

(The number after each keyword is its recommendation index.)

Media Security and Digital Rights Management


(Slides: Huang Tiejun, Peking University, 2008)

Encryption-based protection

State of the Art — Industry
• CE:
– CAS (Conditional Access System) for DTV broadcasting
– CSS (Content Scramble System) for DVD
– SDMI (Secure Digital Music Initiative) for MP3 players
– DTCP (Digital Transmission Content Protection)
– CPRM (Content Protection for Removable Media)
– SVP (Secure Video Processor)
– HDCP (High Bandwidth Digital Content Protection)
– AACS (Advanced Access Content System) for NG-DVD
– Intertrust, ContentGuard etc.
– Apple DRM for iPod and iTunes (and Real Helix)
– Microsoft Media DRM
– Sun Microsystems DReaM: open source and royalty-free
Creative Commons

• Jill is a budding photographer who puts a portfolio of her work online. Perhaps one day she will charge for copies of her work, but right now she is building a reputation, so the more people copy her work the better. Her proudest pieces are black-and-white photographs of skyscrapers. Jack is using his new home computer to make a digital film about New York City. He wants to include a still photo of the Empire State Building, but on his last trip to New York he forgot to take one. He searches the web for "Empire State Building" and finds a number of sites, some with photos, but he is not sure whether the photos are copyrighted. He uses a search engine to look for works without a copyright notice, but he knows that some works may be protected by copyright law even without such a notice. He worries that if he uses photos found online and then posts his film on the web, the photographers may see the film, object, and sue him. Creative Commons hopes to help Jack and Jill find each other online more easily and carry out the creative collaboration they want. We build a web application model with which Jill can announce (technically, through a "license") that anyone may copy her photos as long as she is credited as the original photographer. The license conditions must be machine-processable: with the help of search engines and other computer applications, the licensing conditions of Jill's photos can be determined, and Jack can search for photos of the Empire State Building that are licensed under Creative Commons for copying and online publication. He will find Jill's photos and know that Jill allows him to use them in his film.

Research Directions, Research Topics and Research Team


I. Main research directions: theory and implementation of programming languages, verification of system programs, software security, etc. Current work concentrates on using formal program verification to raise the trustworthiness of system software.

As the nation and society depend ever more on software systems, the correctness, safety, security and reliability of complex software are vital for safety-critical national infrastructure and security-critical applications. Software errors in safety-critical systems can cause property damage, physical injury and even death; software errors in security-critical systems can enable break-ins and privacy intrusions, indirectly also causing loss of property and life.

Verification is an important way to raise software trustworthiness. Current verification practice mainly takes two forms: model checking, and formal program verification based on logical inference. Model checking traverses the entire state space of a system; it can automatically verify finite-state systems and automatically construct counterexamples for properties that fail. This approach is popular in industry: it needs minimal user interaction, applies to large complex systems, and in recent years has been widely used to sweep existing code for bugs. Besides the well-known state-space explosion problem, however, model-checking tools cannot output an explicit certificate or proof object that can be mechanically checked to confirm that the analysed program really has the desired property.

Formal program verification based on logical inference originates in Hoare logic. Hoare logic lets programmers use assertions and inference rules drawn from general logic, with strong expressive power. But program correctness proofs tend to be shallow theorems with long proofs, and since automated theorem proving long remained unsolved, the application of Hoare logic progressed slowly. In 1997, George Necula first proposed the concept of proof-carrying code (PCC), applying Hoare logic to safety proofs of assembly programs to support the implementation of security policies for distributed computing and mobile code. Proof-carrying code (also called certified software) comprises a machine-executable program together with a machine-checkable rigorous proof that the code satisfies a given specification, i.e., that the code cannot exhibit errors violating that specification.

University of Electronic Science and Technology of China (UESTC) — Master's Training Program (excerpt)

09 School of Life Science and Technology — 01 Biomedical Engineering (first-level discipline); 02 Biophysics; 03 Biochemistry and Molecular Biology; 04 Signal and Information Processing; 05 Applied Psychology; 88 other disciplines of the school; 41 Biomedical Engineering (professional field)
10 School of Applied Mathematics — 01 Mathematics (first-level discipline); 02 Applied Mathematics; 03 Computational Mathematics; 04 Operations Research and Cybernetics; 88 other disciplines of the school
11 School of Management

In course numbers, the third and fourth digits are the serial number of the discipline or professional field within the offering school; in particular, for courses offered specifically for full-time professional master's degrees, the first of the two digits representing the field is uniformly "4" and the second is the field's serial number. The school codes and each school's discipline serial numbers are listed in the table below.

School code / school name / disciplines or fields and their serial numbers in course codes:
01 School of Communication and Information Engineering

Science: Biophysics (071011).
Engineering:
- Mechanical Engineering (0802): 080201 Mechanical Manufacturing and Automation; 080202 Mechatronic Engineering; 080203 Mechanical Design and Theory
- Optical Engineering (0803): 080300 Optical Engineering
- Instrument Science and Technology (0804): 080401 Precision Instruments and Machinery; 080402 Measurement Technology and Instruments
- Materials Science and Engineering (0805): 080500 Materials Science and Engineering

UESTC's first-level discipline "Electronic Science and Technology" contains five second-level disciplines: Physical Electronics, Microelectronics and Solid-State Electronics, Electromagnetic Fields and Microwave Technology, Circuits and Systems, and Electronic Information Materials and Components. The first four were among the earliest approved doctoral programs and national key disciplines in China, with overall strength leading nationally, and have been key disciplines of the university's "211 Project" construction in recent years; "Electronic Information Materials and Components" is a newly established second-level discipline reflecting the university's characteristics, strengths and frontier research directions. The first-level discipline has formed a team led by academicians Liu Shenggang, Lin Weigan and Chen Xingbi, with a large group of nationally known high-level young and middle-aged scholars as its academic backbone; it has two state key laboratories and a large stock of internationally competitive experimental instruments, computer workstations and advanced software.

Effect of unloading rates on characteristics of damage and fragment morphology for strainburst of granite


Journal of Central South University (Science and Technology), Vol. 54, No. 6, Jun. 2023

LI Chunxiao (1,2), LI Dejian (1,2), LIU Xiaolin (1,2), QI Hao (1,2), WANG Dechen (1,2)
(1. State Key Laboratory for Geomechanics and Deep Underground Engineering, China University of Mining and Technology (Beijing), Beijing 100083; 2. School of Mechanics and Civil Engineering, China University of Mining and Technology (Beijing), Beijing 100083)

Abstract: To study the effect of excavation speed on rock burst damage characteristics during deep mining, first, strainburst experiments on granite under different unloading rates were conducted with a self-developed true-triaxial strainburst experimental system, and the burst process of the specimens was monitored with a dynamic high-speed stress monitoring system and a binocular high-speed photography system. Second, the influence of the unloading rate on the peak strength at burst, the burst failure process, the failure morphology of the specimens, and the fragment size characteristics was analysed. Third, three-dimensional digital models of medium-to-coarse fragments were built with a 3D laser scanning system, and the geometric and surface-amplitude parameters of the fragments' 3D morphology were analysed. Finally, based on fractal theory and the cube-covering method, the effect of different unloading rates on the surface morphology of rock burst fragments was studied quantitatively.

The results show that as the unloading rate increases, the peak strength and the violence of failure at burst increase, the degree of fragmentation of the burst debris decreases, and the failure mode shifts from tensile to shear.

Under a high unloading rate, the internal cracks of the specimen do not develop and propagate fully, so geometrically regular burst fragments form easily and the complexity of the fragment surface morphology decreases.

The surface morphology of the rock burst fragments shows clear fractal characteristics: the higher the unloading rate, the smaller the fractal dimension of the fragment surface morphology.

Keywords: strainburst; unloading rate; damage characteristics; fragment morphology; three-dimensional laser scanning; fractal
CLC number: TU45  Document code: A  Article ID: 1672-7207(2023)06-2298-14

(Received 2022-08-10; revised 2022-11-03. Supported by the National Natural Science Foundation of China (41572334), the Fundamental Research Funds for the Central Universities (2022YJSSB05), and the Innovation Foundation of the State Key Laboratory for Geomechanics and Deep Underground Engineering, Beijing (SKLGDUEK202222). Corresponding author: LI Dejian, PhD, professor, researching deep rock mass mechanics and engineering disaster control; E-mail: ****************)
DOI: 10.11817/j.issn.1672-7207.2023.06.019
Citation: LI Chunxiao, LI Dejian, LIU Xiaolin, et al. Effect of unloading rates on characteristics of damage and fragment morphology for strainburst of granite[J]. Journal of Central South University (Science and Technology), 2023, 54(6): 2298-2311.

Abstract (English, as published): To study the effect of excavation speed on rock burst damage characteristics during the mining process of deep engineering, firstly, a series of strainburst experiments under different unloading rates were conducted on granite using the self-developed true triaxial strainburst experimental system. The rock burst process of the specimen was monitored with the dynamic high-speed stress monitoring system and binocular high-speed photography system. Secondly, the influence of unloading rate on the peak strength of rock burst damage, the rock burst damage process, the damage morphology and the fragmentation characteristics of the specimen was analyzed. Thirdly, the three-dimensional digital model of medium-coarse grained fragments was established using the three-dimensional laser scanning system, and the characteristic parameters of geometry and surface amplitude for the three-dimensional morphology of fragments were studied. Finally, based on fractal theory and the cube-covering method, the influence of unloading rate on the surface morphology complexity of rock burst fragments was quantitatively studied. The results show that with the increase of unloading rate, the peak strength and damage intensity of the specimen during rock burst increase, and the degree of fragmentation of rock burst fragments decreases, while the damage mode of rock burst transforms from tensile damage to shear damage. The cracks within the specimen are not sufficiently developed and penetrated under the influence of a high unloading rate, which makes the geometrical morphology more regular and reduces the complexity of the surface morphology of the fragments. The surface morphology of rock burst fragments also exhibits distinct fractal characteristics: the higher the unloading rate, the smaller the fractal dimension of the surface morphology of the rock burst fragments.
Key words: strainburst; unloading rate; damage characteristic; fragment morphology; three-dimensional laser scanning; fractal

A rock burst is a nonlinear dynamic phenomenon in which an energetic rock mass instantaneously releases energy along the excavation free face [1].
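The cube-covering fractal dimension mentioned in the abstract can be illustrated with a simplified box-counting sketch over a synthetic fragment surface (a height field). This is a generic demonstration written for this text, assuming NumPy; it is not the authors' measurement code, and the real fragment surfaces come from the 3D laser scans.

```python
import numpy as np

def box_count_dimension(height, sizes):
    # For each box edge s, count the boxes of edge s needed to cover the surface
    # over every s-by-s patch, then fit log N(s) against log(1/s);
    # the slope estimates the fractal dimension D of the surface.
    counts = []
    n = height.shape[0]
    for s in sizes:
        total = 0
        for i in range(0, n, s):
            for j in range(0, n, s):
                patch = height[i:i + s, j:j + s]
                total += int(np.ceil((patch.max() - patch.min()) / s)) + 1
        counts.append(total)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(0)
n = 256
# Synthetic rough surface: doubly integrated noise, rescaled to a plausible range.
rough = np.cumsum(np.cumsum(rng.standard_normal((n, n)), axis=0), axis=1)
rough = (rough - rough.min()) / (rough.max() - rough.min()) * n * 0.3

print("estimated fractal dimension:",
      round(box_count_dimension(rough, [2, 4, 8, 16, 32]), 3))
```

For a surface the estimate lies between 2 (perfectly smooth) and 3; the paper's finding that higher unloading rates give a smaller D corresponds to smoother fragment surfaces under this measure.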

DRM standards


Digital Rights Management (DRM) is a class of technical measures for protecting digital content. Through encryption, license management, copy control and similar means, it protects the rights and interests of content providers, prevents piracy and infringement, and ensures that content is distributed and used lawfully. DRM standards, as the foundation of digital rights management, are of great significance. This article introduces the development background, classification and applications of DRM standards.

Background. The rapid development of digital technology and the spread of the Internet have made digital content extremely easy to distribute and copy. Content providers therefore face serious risks of piracy and infringement, and an effective technical means of protecting their rights is needed; DRM technology arose in response. Early DRM relied mainly on encryption algorithms and digital signatures to protect the security of digital content, but such technology often required customization in both hardware and software and was not broadly applicable. To solve this problem, the industry began working on common DRM standards to implement digital rights management better.

Classification. DRM standards can be classified by technical approach and by application domain. Technically, there are two kinds: hardware-based DRM and software-based DRM. Hardware-based DRM takes a hardware protection approach, adding a dedicated hardware module to the terminal device to protect digital content; its advantage is higher security and resistance to cracking, while its disadvantages are the cost of custom hardware and possible incompatibility across different devices. Software-based DRM installs a dedicated software module on the terminal device to protect the content; it costs less and is easier to roll out and apply, but its security is lower than that of hardware schemes. By application domain, DRM standards come in many types, such as ERC (Electronic Rights Commerce) in e-commerce, MPEG-4 AAC (Advanced Audio Coding) in digital music, and WMV (Windows Media Video) in digital video; standards for different domains differ to some extent in application scenarios, technical requirements and levels of protection.

Zhang Hong (professor at Zhejiang University, proposer of the new concept of "transparent pathology")

Zhang Hong, male, is director of the PET Center of Zhejiang University School of Medicine, director of the Department of Nuclear Medicine of the Second Affiliated Hospital of Zhejiang University School of Medicine, director of the Institute of Nuclear Medicine and Molecular Imaging of Zhejiang University, director of the Zhejiang Provincial Key Laboratory of Medical Molecular Imaging, and dean of the College of Biomedical Engineering and Instrument Science of Zhejiang University.

Selected publications:
3. Zhang Y, Chen QZ, Du FL, Hu YN, Chao FF, Tian M, Zhang H*. Frightening Music Triggers Rapid Changes in Brain Monoamine Receptors: A Pilot PET Study. J Nucl Med. 2012; 53:1573-1578. *Corresponding Author.
5. Hou H, Tian M, Zhang H*.

Books:
1. Contributor, Nuclear Medicine, Ministry of Health planning textbook for eight-year clinical medicine programs, People's Medical Publishing House, 2005.
2. Contributor, PET and PET/CT Diagnostics, Chemical Industry Press (Medical Division), 2007.
3. Contributor, PET/CT Diagnostics, People's Medical Publishing House, 2009.

"Transparent pathology": On 15 February 2021, professors Tian Mei and Zhang Hong of Zhejiang University published, by invitation, a review in the European Journal of Nuclear Medicine and Molecular Imaging entitled "Transparent pathology: molecular imaging-based pathology", proposing the new concept of "transparent pathology" for the first time.

Research directions: 1. clinical application and basic research in nuclear medicine; 2. development of molecular imaging probes; 3. applied research on molecular imaging techniques.

Positions: editorial board member of Oncology, World Journal of Nuclear Medicine, Nuclear Medicine Communications, Annals of Nuclear Medicine, European Journal of Nuclear Medicine and Molecular Imaging, Nuclear Medicine & Molecular Imaging (Korea), and Evidence-based Complementary and Alternative Medicine; vice chair of the Molecular Imaging Committee of the Biophysical Society of China; member of the Nuclear Medicine Branch of the Chinese Medical Association; vice chair of the external liaison committee of the Nuclear Medicine Physicians Branch of the Chinese Medical Doctor Association; ...

04 — Information Security and Technology (2nd edition), Zhu Haibo, Tsinghua University Press


II. Classification of information hiding

Information hiding is an emerging interdisciplinary field with very broad content and wide application backgrounds in computing, communications, cryptology and other areas. In 1999, Fabien Petitcolas classified information hiding as shown in the figure below: covert channels; steganography (linguistic steganography and technical steganography); anonymous communication; and copyright marking, which divides into robust copyright marking (digital watermarks and digital fingerprints, including visible digital watermarks) and fragile copyright marking.

Information hiding differs from traditional encryption: its purpose is not to restrict normal access to the data, but to ensure that hidden data is not violated or noticed. The amount of data that can be hidden and the robustness of the hiding are a standing trade-off; no current method fully satisfies both requirements. The difference between information hiding and traditional cryptography is that a cipher hides only the content of a message, while information hiding conceals both the content and the very existence of the message.

The information to be hidden is usually called the secret message (e.g., copyright information, secret data, a software serial number), while the public information is called the cover message (e.g., video, images, audio). The hiding process is generally controlled by a key: an embedding algorithm hides the secret message in the cover message; the stego cover (public information with secret information hidden in it) is transmitted over a communication channel; and the receiver's detector uses the key to recover or detect the secret message from the stego cover. The general model of information hiding is shown in the figure below.

Information hiding is an emerging field that draws on the theory and techniques of many disciplines. It exploits the perceptual redundancy of human sense organs for digital signals to hide one message inside another; since only the external characteristics of the cover message remain visible after hiding, the basic characteristics and usefulness of the cover are unchanged. Digital information hiding has become a hot research topic in information science: the hidden secret can be text, a password, an image, a graphic or sound, and the public host can be an ordinary text file, a digital image, digital video, digital audio, and so on.

UBS nominates new regional executives

Author: anonymous
Journal: Science & Fortune
Year (volume), issue: 2003 (000) 008
Length: 1 page (p. 82)
Language: Chinese
CLC classification: F835.223

Abbreviations for professional master's degrees
0251 Master of Finance (MF)
0252 Master of Applied Statistics (MAS)
0253 Master of Taxation (MT)
0254 Master of International Business (MIB)
0255 Master of Insurance (MI)
0256 Master of Asset Appraisal (MV)
0257 Master of Auditing (MAud)
0351 Juris Master (JM)
0352 Master of Social Work (MSW)
0303 Master of Policing (MP)
0451 Master of Education (MEd)
0452 Master of Physical Education (MPE)
0453 Master of Teaching Chinese to Speakers of Other Languages (MTCSOL)
0454 Master of Applied Psychology (MAP)
0551 Master of Translation and Interpreting (MTI)
0552 Master of Journalism and Communication (MJC)
0553 Master of Publishing (MP)
0651 Master of Cultural Heritage and Museology
0851 Master of Architecture
0852 Master of Engineering (MEM)
0853 Master of Urban Planning (MUP)
0951 Master of Agricultural Extension (MAE)
0952 Master of Veterinary Medicine (MVM)
0953 Master of Landscape Architecture (MLA)
0954 Master of Forestry (MF)
1051 Master of Clinical Medicine (MCM)
1052 Master of Stomatology (MDM)
1053 Master of Public Health (MPH)
1054 Master of Nursing (MN)
1055 Master of Pharmacy (MP)
1056 Master of Chinese Materia Medica (MCM)
1151 Master of Military Science (MMS)
1251 Master of Business Administration (MBA)
1252 Master of Public Administration (MPA)
1253 Master of Professional Accounting (MPAcc)
1254 Master of Tourism Administration (MTA)
1255 Master of Library and Information Studies (MLIS)
1256 Master of Engineering Management (MEM)
1351 Master of Fine Arts (MFA)


DRM, Trusted Computing and Operating System Architecture

Jason F. Reid and William J. Caelli
Information Security Research Centre, Queensland University of Technology
GPO Box 2434, Brisbane, Qld 4001, Australia
jf.reid@.au, w.caelli@.au

Abstract

Robust technological enforcement of DRM licenses assumes that the prevention of direct access to the raw bit representation of decrypted digital content and the license enforcement mechanisms themselves is possible. This is difficult to achieve on an open computing platform such as a PC. Recent trusted computing initiatives, namely the Trusted Computing Group (TCG) specification and Microsoft's Next Generation Secure Computing Base (NGSCB), aim in part to address this problem. The protection architecture and access control model of mainstream operating systems makes them inappropriate as a platform for a DRM content rendering client because decrypted content cannot be protected against a privileged process. If a DRM client is to be deployed on an open computing platform, the operating system should implement the reference monitor concept, which underpins the mandatory access control model. The TCG model of trusted computing has important limitations when combined with an operating system enforcing discretionary access control. We argue that the TCG services of sealed storage and remote attestation, which are important in DRM applications, cannot operate in a secure and efficient manner on such an operating system.

1 Introduction

Advances in digital compression technology coupled with the reduced cost and increased capacity of storage media and network bandwidth have combined to make the distribution of digital content over the Internet a practical reality. The owners of copyrighted works, particularly in the entertainment area, have become increasingly anxious to ensure that evolving digital technology does not limit or reduce their capacity to enforce their copyrights for financial reward. This concern has motivated a steadily growing interest in the field of Digital Rights Management (DRM).

DRM has been defined as "the management of rights to digital goods and content, including its confinement to authorized use and users and the management of any consequences of that use throughout the entire life cycle of the content" (CEN/ISSS, 2003). This definition encompasses two distinct aspects of DRM that are independently recognised as being worthy of protection in the World Intellectual Property Organisation's copyright treaty (WIPO, 1996): firstly, rights management information, which includes "information which identifies the work, the author of the work, the owner of any right in the work, or information about the terms and conditions of use of the work"; and secondly, technological enforcement, which encompasses effective technological mechanisms used to enforce the terms and conditions of use for a work. Although these two aspects of DRM cannot be cleanly separated, it is the second aspect that is the principal focus of this paper.

Copyright © 2005, Australian Computer Society, Inc. This paper appeared at the Australasian Information Security Workshop 2005 (AISW2005), Newcastle, Australia. Conferences in Research and Practice in Information Technology, Vol. 44. Reproduction for academic, not-for-profit purposes permitted provided this text is included.
In particular, we consider the difficult challenge of technological enforcement on open computing platforms such as a general-purpose personal computer (PC).

The essential premise of DRM is that a rights owner wishes to license digital content (which is represented as binary digits, or bits) to a licensee or customer who agrees to be bound by the terms of the license. Note that the customer is not buying the bits themselves. Rather, they are buying the right to use the bits in a defined and restricted manner, as authorised in the terms of the license. Hence the license defines a type of usage policy. A hypothetical license might authorise a single playing of the content on a nominated platform. Copying, multiple 'plays', redistribution and modification may be prohibited. Technological enforcement is necessary because the rights owner does not necessarily trust the customer, yet they would like to have a reasonable level of assurance that the license terms will be complied with even though the content is stored and used on devices that they do not own or control.

Digital information can be protected from unauthorised access in transit and storage by well-understood cryptographic techniques. As Schneck (1999) argues, the more complicated challenge presented by DRM flows from the observation that the content bits must be in the clear (i.e., not protected by encryption) on the client platform in order to be rendered in a manner perceptible to a user. If the decrypted content bits can be accessed (for example, by using a kernel debugger or modified device driver), the technological enforcement of the license conditions can be circumvented. Once dissociated from its protection, the content can be freely copied, played, modified and redistributed, albeit in violation of the license terms. Consequently, to reliably enforce typical DRM policies, it must not be possible for the platform user to access the plaintext bits that represent the content, despite the practical reality that the platform is under the user's direct control¹. This is an access control problem that cannot be solved purely by cryptography. On open computing platforms that can run arbitrary software, it is a difficult problem to which there is currently no practical, deployed solution, particularly in terms of 'software-only' techniques. Recent trusted computing initiatives, namely Microsoft's Next Generation Secure Computing Base (NGSCB) (Microsoft, 2004) and the Trusted Computing Group (TCG) specification (Trusted Computing Group, 2003), formerly known as TCPA, aim in part to address this issue through both hardware- and software-based methods (Anderson, 2003).

The goal of trusted computing is to deliver systems that are highly resistant to subversion by malicious adversaries, allowing them to operate reliably and predictably in almost any circumstance. Trusted computing is an important ingredient in DRM because it provides a sound basis for license enforcement. Given the way the NGSCB and TCG initiatives have been promoted, one could be forgiven for thinking that trusted computing is an entirely new concept. As we discuss in Section 3.1, trusted computing actually has a long history, but the lessons this history can teach have been largely ignored over the last 20 years, particularly in the design of mainstream PC operating systems².
As a consequence, such systems are fundamentally ill-equipped to provide the level of protection that a robust DRM system demands.

1.1 Contribution

In this paper we explain in detail why mainstream, commercial-grade operating systems are an inappropriate platform on which to base a DRM client. We clarify why DRM platforms require an operating system that can enforce mandatory access control. We aim to address common misunderstandings as to the extent to which the TCG specification implements DRM. A key conclusion of our analysis is that the addition of TCG components to a discretionary access control enforcing operating system does not result in a 'trusted system' that can reliably enforce DRM licenses. We identify problems with DRM-related applications of the TCG sealed storage feature that flow from the non-deterministic order of execution of a multi-tasking operating system. We highlight issues that undermine the effectiveness of the TCG remote attestation feature when it is deployed on mainstream operating systems.

¹ An alternative policy enforcement strategy relies on the detection of policy-descriptive watermarks embedded in content. To be effective, all content rendering devices must detect these watermarks and process the content accordingly. Biddle et al. (2003) argue that the requirement for all computing devices to implement and enforce such protections is impractical.

² We adopt the term mainstream operating system to refer to popular commercial operating systems such as Windows 95, NT, 2000 and XP from Microsoft Inc. (USA), Linux from various distributions of this open source system, and various versions of Unix that implement a discretionary access control policy.

1.2 Overview

The remainder of this paper is organised as follows. Section 2 provides background on the relevance of trusted systems to DRM. Section 3 examines operating system properties that are necessary to support secure DRM client applications, highlighting the deficiency of mainstream operating systems in relation to these requirements. Section 4 presents an analysis of the TCG specification and the degree to which it can improve the trustworthiness of mainstream operating systems. Section 5 examines Microsoft's NGSCB initiative and Section 6 provides conclusions to this analysis.

2 DRM Client Security and Trusted Systems

Stefik (1997) argues that trusted systems are a basic requirement for robust DRM. Stefik defines a trusted system as one "that can be relied on to follow certain rules. In the context of digital works, a trusted system follows rules governing the terms, conditions and fees for using digital works." The rules or license terms are expressed in a computer-interpretable format (known as a rights expression language), and the content rendering client ensures that the content is protected and manipulated in a manner consistent with the rules. Stefik does not describe the properties a trusted system should have to reliably enforce the rules. However, it is implicit from the functional description that the trusted system must protect the underlying bit representation of the content from direct access by a user. If this were not the case, bypassing rule enforcement would be trivial. According to Lacy et al. (1997), the license interpretation and enforcement components of the client's content rendering platform (which we will refer to as the DRM client) must be implemented within the boundaries of a Trusted Computing Base (TCB).
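The kind of rule-following Stefik describes can be made concrete with a toy sketch. The licence format below is invented for illustration (deployed systems use rights expression languages such as XrML or ODRL), and it assumes the interpreter itself cannot be tampered with, which is precisely the property the rest of this paper is concerned with.

```python
import time

licence = {
    "content_id": "track-001",
    "device_id": "player-abc",       # bound to one host (cf. property 2 below)
    "plays_remaining": 1,
    "expires": time.time() + 86400,  # valid for one day
    "allow_copy": False,
}

def authorise(action: str, device: str) -> bool:
    """Consult the licence terms before every use; deny anything not granted."""
    if device != licence["device_id"] or time.time() > licence["expires"]:
        return False
    if action == "play" and licence["plays_remaining"] > 0:
        licence["plays_remaining"] -= 1   # consume one authorised play
        return True
    if action == "copy":
        return licence["allow_copy"]
    return False

assert authorise("play", "player-abc")       # the single licensed play
assert not authorise("play", "player-abc")   # licence now exhausted
assert not authorise("copy", "player-abc")   # copying was never granted
```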
Lacy does not consider the complications that are introduced if the DRM client is an open computing device capable of running arbitrary software. Biddle et al. (2003) identify the following necessary enforcement properties for a DRM client:

1. The client cannot remove the encryption from the file and send it to a peer.
2. The client cannot 'clone' its DRM system to make it run on another host.
3. The client obeys the rules set out in the DRM license.
4. The client cannot separate the rules from the payload.

When the DRM client is implemented in application software on a mainstream operating system, Hauser and Wenz (2003) contend that policy enforcement can be bypassed with relative ease. They document a number of successful attacks on deployed schemes. Properties 1, 3 and 4 are particularly difficult to implement in application-level clients that run on mainstream operating systems. In the next section we discuss reasons for the inadequacy of application-level policy enforcement in the absence of certain operating system features.
This is a crucial requirement for DRM clients if Biddle’s enforcement properties, as described in Section 2, are to be met.3.1 Architecture Flaws in Mainstream OSsA key limitation of mainstream operating systems that renders them inappropriate as platforms for a DRM client is their enforcement of an identity based discretionary security policy rather than a mandatory security policy. The DAC model is incompatible with the trust3 See: Ferraiolo, et al. (1995)4 See: Badger et al. (1995) relationship that exists between a content provider and a content consumer, who is not assumed to be trustworthy. Since content is locally rendered, the content provider is forced to rely on the customer’s DRM client to enforce their license terms but the environment that this client operates in is under the customer’s control. In a DAC system, control of the execution environment gives the customer the ability to subvert the policy enforcement mechanisms of their DRM client5. The system owner can run privileged software such as a modified device driver that is able to access the memory space of the DRM client6. In a MAC system based for example on DTE, it is possible to configure the MAC policy so that the DRM client’s memory space cannot be accessed, even by other parts of the operating system. If the content provider can evaluate the MAC policy configuration to confirm that it enforces DRM client isolation, they can establish a degree of trust in the DRM client’s ability to enforce the license terms – on the grounds that the application’s technological enforcement measures cannot be tampered with or bypassed and its address space cannot be snooped by other privileged processes to copy decrypted content bits.A MAC capability provides a sound basis for policy enforcement through rigorous control over information flows between subjects and objects (see Badger et al., 1995). MAC systems rely on the concept of a reference monitor, (Anderson, 1972) which is responsible for enforcing the policy. The reference monitor mediates every access to system resources and data, (collectively known as objects) deciding whether the requested access is consistent with the policy. To ensure that every access is mediated, it must not be possible to bypass the reference monitor. The reference monitor mechanism also needs to be protected against tampering to ensure that an attacker cannot subvert or influence its access decisions. Finally, the reference monitor needs to be small enough to allow it to be validated via analysis and testing. These properties can be achieved by software in concert with hardware protection mechanisms (Schell et al., 1985). The totality of software and hardware that is responsible for enforcing the security policy is known as the Trusted Computing Base (TCB).A further critical weakness in mainstream operating systems is their inability to implement the security principle of least privilege, described for example, by Saltzer and Schroeder (1975). As the name suggests, least privilege requires a program or user to be given only the minimum set of access rights necessary to complete a task. To achieve this, a system needs to be able to express and enforce fine-grained access rights. In today’s mainstream operating systems, privileges are bound to so called user ‘IDs’so access decisions are based on user identity. 
As a consequence, all of a user’s privileges are granted to each program running on behalf of that user.5 A DAC architecture cannot reliably enforce a mandatory security policy (Harrison et al., 1976).6TCG integrity measurement capabilities do not effectively address this problem. See Section 4.3There is no efficient mechanism to reduce the set of available privileges to those that are actually needed.To make matters worse mainstream operating systems have only two major categories of users: the root or super-user, and normal users. As the name ‘super-user’implies, processes with super-user privilege cannot be constrained by access controls as there is no reference monitor. This operating system architecture creates a serious problem for DRM applications, because the platform owner typically has access to the super-user account. Without access controls, a DRM license cannot be enforced against the super-user and plaintext content bits cannot be reliably protected. Observance of the principle of least privilege and enforcement of MAC underpin effective domain confinement through reliable control over information flows within the operating system (Saltzer and Schroeder, 1975). A DRM client based on an open computing platform cannot successfully maintain Biddle’s minimum enforcement properties, (listed in Section 2) without the confinement and information flow control that a reference monitor enables. Device drivers in mainstream operating systems present a particular problem because they must be totally trusted but they are not necessarily trustworthy. Drivers are tailored to a specific piece of hardware (e.g., a particular sound or graphics card) so they are normally provided by the hardware vendor. This creates a problem of trust. Solomon and Russinovich (2000) note: “device driver code has complete access to system memory space and can bypass Windows 2000 security to access objects.” So device driver code is like application code in that it is supplied by a range of sources. But it is effectively operating system code since it has unrestricted access to system resources. To further complicate matters, device drivers can be dynamically loaded at runtime. Thus, a malicious or buggy driver can be used for example, to compromise cryptographic keys and the plaintext of protected content. A digitally signed driver provided by a trusted vendor is a common approach to combat this problem. Unfortunately this offers only a partial solution because drivers are typically too large and complex to evaluate to attain a reasonable degree of assurance that they do not contain exploitable bugs or unexpected behaviours. A signature does not guarantee correct operation. It is also difficult to ensure that the integrity of the signature verification mechanism and signer’s public key are protected.Early trusted systems such as Multics, (Corbato et al. 1972) addressed the device driver trust issue via a hierarchy of hardware enforced execution domains known as rings. Building on the Multics approach, Intel x86 processors have supported a ring based protection architecture (Figure 1) since the 286 chip. This four level design, with ring 0 being the most privileged and ring 3 the least, is intended to allow the operating system kernel, (which implements the reference monitor) to operate at a higher level of hardware enforced privilege than device drivers, other operating system components and code libraries which in turn can have higher privilege than users and application processes. 
The higher level of privilege ensures that the reference monitor mechanism is more resistant to bypass or tampering by other less trusted processes running in rings of lesser privilege. The quantity and complexity of code that must be trusted to enforce the mandatory security policy is thereby substantially reduced. This makes it easier to establish confidence in its correctness. The x86 hardware architecture is capable of supporting highly secure and trustworthy operation. When correctly utilised, its ring-based architecture combined with its fine-grained memory segmentation allow it to enforce effective domain separation and confinement at the hardware level. Unfortunately, with rare exceptions, (e.g. the GEMSOS OS described in Schell et al. (1985)) the protection rings and memory segmentation/capability features of the Intel x86 have not been used by mainstream, general purpose operating system designers as they were intended. Mainstream operating systems use only the most and least privileged rings for system and user space respectively, emulating two state machines. While PC operating systems may not have been designed with security as a high priority, the same cannot be said of the processor onwhich they are based.Figure 1: The Intel x86 Ring ArchitectureThe failure of mainstream operating systems to correctly utilise the ring structure of the x86 processor explains Intel’s announced intention to release a new chip with what is effectively a ‘ring –1’. According to Peinado, et al., (2004) the reason for this is that Microsoft’s new NGSCB trusted computing architecture requires a higher level of privilege to enable effective domain separation within ring 0 which, for reasons of backward compatibility must continue to host device drivers of questionable provenance and other potentially insecure operating system components. This is discussed in more detail in Section 5.In summary, mainstream operating systems lack the essential features that are required to protect decrypted content and thereby support the enforcement of DRM licenses. They do not enforce a mandatory access policy, and they fail to observe the principle of least privilege, which greatly magnifies the threat presented by software bugs and privileged but malicious code. In addition, the sheer volume and complexity of privileged code, (including device drivers) means that there is no possibility of gaining any reasonable level of assurance that a platform will obey DRM license terms. Trust mechanisms based on signed code and drivers do not alter this situation since the problem flows from the access control model and operating system architecture.4 Trusted Computing Group - formerly TCPAIn response to myriad problems created by the insecurity of open computing platforms, the Trusted Computing Group (TCG) has proposed a trusted computing platform specification (Trusted Computing Group 2003). In this section we briefly describe key aspects of the specification. In the context of a DRM client application, we analyse the operating system features that are necessary to make meaningful use of TCG services, particularly remote attestation and sealed storage.The Trusted Computing Group (TCG), successor to the Trusted Computing Platform Alliance (TCPA), is an initiative led by AMD, Hewlett-Packard, IBM, Intel, Microsoft, Sony, and Sun Microsystems. 
The TCG aims to "develop and promote open, vendor-neutral, industry standard specifications for trusted computing building blocks and software interfaces across multiple platforms"⁷. The novelty of the TCG architecture lies in the range of entities that are able to use TCG features as a basis for trust. These include not only the platform user and owner but also remote entities wishing to interact with the platform. The mechanism of remote attestation allows remote third parties to challenge a platform to report details of its current software state. On the basis of the attestation, third parties can decide whether they consider the platform's configuration to be trustworthy. If correctly implemented, remote attestation promises to be an important feature for DRM clients on open platforms, since it may assist a content provider in deciding whether the client is currently configured to enforce the license terms reliably before the content is actually provided.

A closely related TCG objective is to provide reliable, hardware-based protection for secrets such as cryptographic keys. Since open computing platforms can run arbitrary software, this objective aims to ensure that protected secrets will not be revealed unless the platform's software state meets clearly defined and accurately measurable criteria. TCG's sealed storage feature can be used to bind a protected secret, such as a content decryption key, to a particular software configuration. If the configuration is not as specified, the sealed key will not be released.

4.1 TCG Architectural Modifications

The architectural modifications required by the TCG specification include the addition of a cryptographic processor called a Trusted Platform Module (TPM). The TPM must be a fixed part of the computing device that cannot (easily) be transferred to another platform. The TPM provides a range of cryptographic primitives including random number generation, SHA-1 hashing, asymmetric encryption and decryption, signing and verification using 2048-bit RSA, and asymmetric key pair generation⁸. There is also a small amount of protected key storage. Currently available TPMs are based on smart card processors.

⁷ See: https:///home
⁸ See: Menezes et al. (1996) for a description of these terms.

4.2 Integrity Measurement and Reporting

The TCG security services of remote attestation and sealed storage build on an integrity-protected boot technique that was introduced by Arbaugh et al. (1997). Integrity-protected booting is fundamental to the design of the TCG architecture. Figure 2 illustrates the process, with numbers in parentheses denoting the sequence of events.

Figure 2: TCG Integrity Protected Boot Sequence

The boot process starts in a defined state with execution of the BIOS 'boot block' code. The BIOS boot block is called the Core Root of Trust for Measurement (CRTM). Since it initiates the booting and measurement process, it is implicitly trusted. The core idea behind integrity-protected booting is that a precise hash-based measurement or fingerprint of all executable code in the boot chain should be taken and securely stored immediately before that code is given control of the processor. Accordingly, the CRTM takes a hash of the BIOS code (1) and stores the value in a protected hardware register in the TPM (2), called a Platform Configuration Register (PCR). PCRs cannot be deleted or arbitrarily overwritten within a boot cycle.
They are 'update only', using a simple chained hash technique based on the SHA-1 secure hash algorithm that works as follows (where || denotes concatenation):

Updated PCR Value = Hash(Previous PCR Value || Current Measurement To Store)

This operation is known as extending a PCR. It allows a practically unlimited number of measurements to be stored, or committed, in a fixed-size register. The CRTM then passes control to the BIOS code (3), which stores measurements of option ROMs, CPU microcode updates and the OS loader before passing control to the latter. The boot process continues following the same pattern until the kernel is loaded. If any executable stage in this chain has been modified, the change will be reflected in the hash value. Since the PCRs can only be extended, not overwritten, the modified code cannot hide itself when it is given control of the CPU.
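The extend operation, and the sealed-storage behaviour that builds on it, can be illustrated with a short simulation. This is not the real TPM command interface, just the chained-hash arithmetic defined above applied to a hypothetical four-stage boot chain, with a key released only when the measured chain matches the state it was sealed against.

```python
import hashlib

def extend(pcr: bytes, code: bytes) -> bytes:
    """Updated PCR = Hash(Previous PCR || Hash(measured code))."""
    return hashlib.sha1(pcr + hashlib.sha1(code).digest()).digest()

def measured_boot(stages):
    pcr = b"\x00" * 20                 # PCRs start from a defined state
    for code in stages:                # each stage is measured *before*
        pcr = extend(pcr, code)        # it is handed control of the CPU
    return pcr

clean_chain = [b"BIOS", b"option ROMs", b"OS loader", b"kernel"]
sealed_to = measured_boot(clean_chain)          # configuration at seal time

def unseal(content_key: bytes, current_pcr: bytes) -> bytes:
    if current_pcr != sealed_to:
        raise PermissionError("platform state differs from sealed state")
    return content_key

print(unseal(b"content key", measured_boot(clean_chain)))   # key released
tampered = measured_boot([b"BIOS", b"option ROMs", b"evil loader", b"kernel"])
try:
    unseal(b"content key", tampered)   # the modified loader cannot hide itself
except PermissionError as e:
    print("denied:", e)
```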
