The Elements of Data Analytic Style

A guide for people who want to analyze data.

Jeff Leek

This book is for sale at /datastyle. This version was published on 2015-03-02.

This is a Leanpub book. Leanpub empowers authors and publishers with the Lean Publishing process. Lean Publishing is the act of publishing an in-progress ebook using lightweight tools and many iterations to get reader feedback, pivot until you have the right book, and build traction once you do.

©2014-2015 Jeff Leek. A @simplystats publication.

Thank you to Karl Broman and Alyssa Frazee for constructive and really helpful feedback on the first draft of this manuscript. Thanks to Roger Peng, Brian Caffo, and Rafael Irizarry for helpful discussions about data analysis.

Contents

1. Introduction
2. The data analytic question
3. Tidying the data
4. Checking the data
5. Exploratory analysis
6. Statistical modeling and inference
7. Prediction and machine learning
8. Causality
9. Written analyses
10. Creating figures
11. Presenting data
12. Reproducibility
13. A few matters of form
14. The data analysis checklist
15. Additional resources

1. Introduction

The dramatic change in the price and accessibility of data demands a new focus on data analytic literacy. This book is intended for use by people who perform regular data analyses. It aims to give a brief summary of the key ideas, practices, and pitfalls of modern data analysis. One goal is to summarize in a succinct way the most common difficulties encountered by practicing data analysts. It may serve as a guide for peer reviewers, who may refer to specific section numbers when evaluating manuscripts. As will become apparent, it is modeled loosely in format and aim on The Elements of Style by William Strunk.

The book includes a basic checklist that may be useful as a guide for beginning data analysts or as a rubric for evaluating data analyses. It has been used in the author's data analysis class to evaluate student projects. Both the checklist and this book cover a small fraction of the field of data analysis, but the experience of the author is that once these elements are mastered, data analysts benefit most from hands-on experience in their own discipline of application, and that many principles may be non-transferable beyond the basics. If you want a more complete introduction to the analysis of data, one option is the free Johns Hopkins Data Science Specialization¹.

As with rhetoric, it is true that the best data analysts sometimes disregard the rules in their analyses. Experts usually do this to reveal some characteristic of the data that would be obscured by a rigid application of data analytic principles.

¹https:///specialization/jhudatascience/
Unless an analyst is certain of the improvement, they will often be better served by following the rules. After mastering the basic principles, analysts may look to experts in their subject domain for more creative and advanced data analytic ideas.

2. The data analytic question

2.1 Define the data analytic question first

Data can be used to answer many questions, but not all of them. One of the most innovative data scientists of all time said it best:

"The data may not contain the answer. The combination of some data and an aching desire for an answer does not ensure that a reasonable answer can be extracted from a given body of data." - John Tukey

Before performing a data analysis, the key is to define the type of question being asked. Some questions are easier to answer with data and some are harder. What follows is a broad categorization of the types of data analysis questions, ranked by how easy it is to answer the question with data. You can also use the data analysis question type flow chart to help define the question type (Figure 2.1).

Figure 2.1 The data analysis question type flow chart

2.2 Descriptive

A descriptive data analysis seeks to summarize the measurements in a single data set without further interpretation. An example is the United States Census. The Census collects data on the residence type, location, age, sex, and race of all people in the United States at a fixed time. The Census is descriptive because the goal is to summarize the measurements in this fixed data set into population counts and describe how many people live in different parts of the United States. The interpretation and use of these counts is left to Congress and the public, but is not part of the data analysis.

2.3 Exploratory

An exploratory data analysis builds on a descriptive analysis by searching for discoveries, trends, correlations, or relationships between the measurements of multiple variables to generate ideas or hypotheses. An example is the discovery of a four-planet solar system by amateur astronomers using public astronomical data from the Kepler telescope. The data was made available through a website that asked amateur astronomers to look for a characteristic pattern of light indicating potential planets. An exploratory analysis like this one seeks to make discoveries, but rarely can confirm those discoveries. In the case of the amateur astronomers, follow-up studies and additional data were needed to confirm the existence of the four-planet system.

2.4 Inferential

An inferential data analysis goes beyond an exploratory analysis by quantifying whether an observed pattern will likely hold beyond the data set in hand. Inferential data analyses are the most common statistical analysis in the formal scientific literature. An example is a study of whether air pollution correlates with life expectancy at the state level in the United States. The goal is to identify the strength of the relationship in the specific data set and to determine whether that relationship will hold in future data. In non-randomized experiments, it is usually only possible to observe whether a relationship between two measurements exists. It is often impossible to determine how or why the relationship exists - it could be due to unmeasured data, relationships, or incomplete modeling.

2.5 Predictive

While an inferential data analysis quantifies the relationships among measurements at population scale, a predictive data analysis uses a subset of measurements (the features) to predict another measurement (the outcome) on a single person or unit. An example is when organizations use polling data to predict how people will vote on election day.
In some cases, the set of measurements used to predict the outcome will be intuitive. There is an obvious reason why polling data may be useful for predicting voting behavior. But predictive data analyses only show that you can predict one measurement from another; they don't necessarily explain why that choice of prediction works.

2.6 Causal

A causal data analysis seeks to find out what happens to one measurement if you make another measurement change. An example is a randomized clinical trial to identify whether fecal transplants reduce infections due to Clostridium difficile. In this study, patients were randomized to receive a fecal transplant plus standard care or simply standard care. In the resulting data, the researchers identified a relationship between transplants and infection outcomes. The researchers were able to determine that fecal transplants caused a reduction in infection outcomes. Unlike a predictive or inferential data analysis, a causal data analysis identifies both the magnitude and direction of relationships between variables.

2.7 Mechanistic

Causal data analyses seek to identify average effects between often noisy variables. For example, decades of data show a clear causal relationship between smoking and cancer. If you smoke, it is a sure thing that your risk of cancer will increase. But it is not a sure thing that you will get cancer. The causal effect is real, but it is an effect on your average risk. A mechanistic data analysis seeks to demonstrate that changing one measurement always and exclusively leads to a specific, deterministic behavior in another. The goal is to not only understand that there is an effect, but how that effect operates. An example of a mechanistic analysis is analyzing data on how wing design changes air flow over a wing, leading to decreased drag. Outside of engineering, mechanistic data analysis is extremely challenging and rarely undertaken.

2.8 Common mistakes

2.8.1 Correlation does not imply causation

Interpreting an inferential analysis as causal.

Most data analyses involve inference or prediction. Unless a randomized study is performed, it is difficult to infer why there is a relationship between two variables. Some great examples of correlations that can be calculated but are clearly not causally related appear at /¹ (Figure 2.2).

Figure 2.2 A spurious correlation

Particular caution should be used when applying words such as "cause" and "effect" when performing inferential analysis.
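As a quick illustration (this sketch is not from the book), the Python snippet below correlates two independent random walks. Independent trending series routinely show large sample correlations even though, by construction, neither influences the other:

    import numpy as np

    rng = np.random.default_rng(0)

    # Two independent random walks: neither causes the other.
    x = np.cumsum(rng.normal(size=200))
    y = np.cumsum(rng.normal(size=200))

    # The sample correlation is nevertheless often far from zero.
    r = np.corrcoef(x, y)[0, 1]
    print("correlation between independent walks:", round(r, 2))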
Causal language applied to even clearly labeled inferential analyses may lead to misinterpretation - a phenomenon called causation creep².

¹/
²/numbersruleyourworld/causation-creep/

2.8.2 Overfitting

Interpreting an exploratory analysis as predictive.

A common mistake is to use a single, unsplit data set for both model building and testing. If you apply a prediction model to the same data set used to build the model, you can only estimate "resubstitution error" or "training set error". These estimates are very optimistic estimates of the error you would get if using the model in practice. If you try enough models on the same set of data, you eventually can predict perfectly.

2.8.3 n of 1 analysis

Descriptive versus inferential analysis.

When you have a very small sample size, it is often impossible to explore the data, let alone make inference to a larger population. An extreme example is when measurements are taken on a single person or sample. With this kind of data it is possible to describe the person or sample, but generally impossible to infer anything about a population they come from.

2.8.4 Data dredging

Interpreting an exploratory analysis as inferential.

Similar to the idea of overfitting, if you fit a large number of models to a data set, it is generally possible to identify at least one model that will fit the observed data very well. This is especially true if you fit very flexible models that might also capture both signal and noise. Picking any of the single exploratory models and using it to infer something about the whole population will usually lead to mistakes. As Ronald Coase³ said: "If you torture the data enough, nature will always confess."

This chapter builds on and expands the paper "What is the question?"⁴ co-authored by the author of this book.

³/wiki/Ronald_Coase
⁴/content/early/2015/02/25/science.aaa6146.full

3. Tidying the data

The point of creating a tidy data set is to get the data into a format that can be easily shared, computed on, and analyzed.

3.1 The components of a data set

The work of converting the data from raw form to directly analyzable form is the first step of any data analysis. It is important to see the raw data, understand the steps in the processing pipeline, and be able to incorporate hidden sources of variability in one's data analysis. On the other hand, for many data types, the processing steps are well documented and standardized.

These are the components of a processed data set:

1. The raw data.
2. A tidy data set.
3. A code book describing each variable and its values in the tidy data set.
4. An explicit and exact recipe you used to go from 1 to 2 and 3.

3.2 Raw data

It is critical that you include the rawest form of the data that you have access to. Some examples of the raw form of data are as follows:

1. The strange binary file your measurement machine spits out
2. The unformatted Excel file with 10 worksheets the company you contracted with sent you
3. The complicated JSON data you got from scraping the Twitter API
4. The hand-entered numbers you collected looking through a microscope

You know the raw data is in the right format if you ran no software on the data, did not manipulate any of the numbers in the data, did not remove any data from the data set, and did not summarize the data in any way. If you did any manipulation of the data at all, it is not the raw form of the data. Reporting manipulated data as raw data is a very common way to slow down the analysis process, since the analyst will often have to do a forensic study of your data to figure out why the raw data looks weird.

3.3 Raw data is relative
The raw data will be different to each person that handles the data. For example, a machine that measures blood pressure does an internal calculation that you may not have access to when you are given a set of blood pressure measurements. In general you should endeavor to obtain the rawest form of the data possible, but some pre-processing is usually inevitable.

3.4 Tidy data

The general principles of tidy data are laid out by Hadley Wickham in this paper¹ and this video². The paper and the video are both focused on the R package, which you may or may not know how to use. Regardless, the four general principles you should pay attention to are:

• Each variable you measure should be in one column
• Each different observation of that variable should be in a different row
• There should be one table for each "kind" of variable
• If you have multiple tables, they should include a column in the table that allows them to be linked

While these are the hard and fast rules, there are a number of other things that will make your data set much easier to handle.

¹/papers/tidy-data.pdf
²https:///33727555

3.5 Include a row at the top of each data table/spreadsheet that contains full row names

So if you measured age at diagnosis for patients, you would head that column with the name AgeAtDiagnosis instead of something like ADx or another abbreviation that may be hard for another person to understand.

3.6 If you are sharing your data with a collaborator in Excel, the tidy data should be in one Excel file per table

The files should not have multiple worksheets, no macros should be applied to the data, and no columns/cells should be highlighted. Alternatively, share the data in a CSV or TAB-delimited text file.

3.7 The code book

For almost any data set, the measurements you calculate will need to be described in more detail than you can sneak into the spreadsheet. The code book contains this information. At minimum it should contain:

• Information about the variables (including units!) in the data set not contained in the tidy data
• Information about the summary choices you made
• Information about the experimental study design you used

In our genomics example, the analyst would want to know what the unit of measurement for each clinical/demographic variable is (age in years, treatment by name/dose, level of diagnosis and how heterogeneous). They would also want to know how you picked the exons you used for summarizing the genomic data (UCSC/Ensembl, etc.). They would also want to know any other information about how you did the data collection/study design. For example, are these the first 20 patients that walked into the clinic? Are they 20 patients highly selected by some characteristic like age? Are they randomized to treatments?

A common format for this document is a Word file. There should be a section called "Study design" that has a thorough description of how you collected the data. There is a section called "Code book" that describes each variable and its units.

3.8 The instruction list or script must be explicit

You may have heard this before, but reproducibility is kind of a big deal in computational science. That means, when you submit your paper, the reviewers and the rest of the world should be able to exactly replicate the analyses from raw data all the way to final results. If you are trying to be efficient, you will likely perform some summarization/data analysis steps before the data can be considered tidy.

3.9 The ideal instruction list is a script

The ideal thing for you to do when performing summarization is to create a computer script (in R, Python, or something else) that takes the raw data as input and produces the tidy data you are sharing as output. You can try running your script a couple of times and see if the code produces the same output.
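As a concrete illustration, here is a minimal sketch of such a script in Python (added here for illustration; the file names and column layout are hypothetical, and any scripting language would do):

    import pandas as pd

    # Step 1: read the raw data exactly as received (hypothetical file name).
    raw = pd.read_csv("raw_measurements.csv")

    # Step 2: reshape so each variable is a column and each observation a row;
    # here the raw file is assumed to have one column per measured time point.
    tidy = raw.melt(id_vars=["subject_id"],
                    var_name="time_point",
                    value_name="measurement")

    # Step 3: write out the tidy table that will be shared.
    tidy.to_csv("tidy_measurements.csv", index=False)

Running the script twice and comparing the two output files is a quick check that the processing is deterministic.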
3.10 If there is no script, be very detailed about parameters, versions, and order of software

In many cases, the person who collected the data has incentive to make it tidy for a statistician to speed the process of collaboration. They may not know how to code in a scripting language. In that case, what you should provide the statistician is something called pseudocode. It should look something like:

• Step 1 - take the raw file, run version 3.1.2 of summarize software with parameters a=1, b=2, c=3
• Step 2 - run the software separately for each sample
• Step 3 - take column three of outputfile.out for each sample and that is the corresponding row in the output data set

You should also include information about which system (Mac/Windows/Linux) you used the software on and whether you tried it more than once to confirm it gave the same results. Ideally, you will run this by a fellow student/labmate to confirm that they can obtain the same output file you did.

3.11 Common mistakes

3.11.1 Combining multiple variables into a single column

A common mistake is to make one column in a data set represent two variables. For example, combining sex and age range into a single variable.

3.11.2 Merging unrelated data into a single file

If you have measurements on very different topics - for example, a person's finances and their health - it is often a better idea to save each one as a different table or data set with a common identifier.

3.11.3 An instruction list that isn't explicit

When an instruction list that is not a computer script is used, a common mistake is to not report the parameters or versions of software used to perform an analysis.

This chapter builds on and expands the book author's data sharing guide³.

³https:///jtleek/datasharing

4. Checking the data

Data munging or processing is required for basically every data set that you will have access to. Even when the data are neatly formatted, like the data you get from open data sources¹, you'll frequently need to do things that make it slightly easier to analyze or use the data for modeling. The first thing to do with any new data set is to understand the quirks of the data set and potential errors. This is usually done with a set of standard summary measures. The checks should be performed on the rawest version of the data set you have available. A useful approach is to think of every possible thing that could go wrong and make a plot of the data to check if it did.

¹/

4.1 How to code variables

When you put variables into a spreadsheet, there are several main categories you will run into depending on their data type:

• Continuous
• Ordinal
• Categorical
• Missing
• Censored

Continuous variables are anything measured on a quantitative scale that could be any fractional number. An example would be something like weight measured in kg.

Ordinal data are data that have a fixed, small (<100) number of possible values, called levels, that are ordered. This could be, for example, survey responses where the choices are: poor, fair, good.

Categorical data are data where there are multiple categories, but they aren't ordered. One example would be sex: male or female.

Missing data are data that are missing and you don't know the mechanism. You should use a single common code for all missing values (for example, "NA"), rather than leaving any entries blank.

Censored data are data where you know the missingness mechanism on some level. Common examples are a measurement being below a detection limit or a patient being lost to follow-up. They should also be coded as NA when you don't have the data. But you should also add a new column to your tidy data called "VariableNameCensored" which should have values of TRUE if censored and FALSE if not.
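A minimal sketch of this coding pattern in Python/pandas (the variable name and detection limit are hypothetical):

    import numpy as np
    import pandas as pd

    # Hypothetical assay values; 0.05 is the detection limit.
    values = [0.21, 0.05, 0.33, 0.05, 0.40]
    censored = [False, True, False, True, False]

    tidy = pd.DataFrame({
        # Censored measurements are stored as NA, not as the limit value.
        "LabValue": [np.nan if c else v for v, c in zip(values, censored)],
        # The companion column records which entries were censored.
        "LabValueCensored": censored,
    })
    print(tidy)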
4.2 In the code book you should explain why censored values are missing

It is absolutely critical to report if there is a reason you know about that some of the data are missing. The statistical models used to treat missing data and censored data are completely different.

4.3 Avoid coding categorical or ordinal variables as numbers

When you enter the value for sex in the tidy data, it should be "male" or "female". The ordinal values in the data set should be "poor", "fair", and "good", not 1, 2, 3. This will avoid potential mixups about which direction effects go and will help identify coding errors.

4.4 Always encode every piece of information about your observations using text

For example, if you are storing data in Excel and use a form of colored text or cell background formatting to indicate information about an observation ("red variable entries were observed in experiment 1"), then this information will not be exported (and will be lost!) when the data is exported as raw text. Every piece of data should be encoded as actual text that can be exported. For example, rather than highlighting certain data points as questionable, you should include an additional column that indicates which measurements are questionable and which are not.

4.5 Identify the missing value indicator

There are a number of different ways that missing values can be encoded in data sets. Some common choices are "NA", ".", "999", and "-1". There is sometimes also a missing data indicator variable. Missing values coded as numeric values like "-1" are particularly dangerous to an analysis, as they will skew any results. The best ways to find the missing value indicator are to go through the code book or to make histograms and tables of common values. In the R programming language, be sure to use the useNA argument to highlight missing values: table(x, useNA="ifany").

4.6 Check for clear coding errors

It is common for variables to be miscoded. For example, a variable that should take values 0, 1, 2 may have some values of 9. The first step is to determine whether these are missing values, miscodings, or whether the scale was incorrectly communicated. As an example, it is common for the male patients in a clinical study to be labelled as both "men" and "males". This should be consolidated to a single value of the variable.

4.7 Check for label switching

When data on the same individuals are stored in multiple tables, a very common error is to have mislabeled data. The best way to detect these mislabelings is to look for logical inconsistencies. For example, if the same person is labeled as "male" and "female" in two different tables, that is a potential label switching or coding error.

4.8 If you have data in multiple files, ensure that data that should be identical across files is identical

In some cases you will have the same measurements recorded twice. For example, you may have the sex of a patient recorded in two separate data tables. You should check that for each patient in the two files the sex is recorded the same.

4.9 Check the units (or lack of units)

It is important to check that all variables you have take values on the unit scale you expect. For example, if you observe that people are listed at 180 inches tall, it is a good bet that the measurement is actually in centimeters. This mistake is so pervasive it even caused the loss of a Mars satellite².
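A short pandas sketch pulling together the checks of Sections 4.5-4.9 (the table and the -1 missing-value code are hypothetical; value_counts plays the role of R's table(x, useNA="ifany")):

    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt

    # Hypothetical clinical table with a numeric missing-value code.
    df = pd.DataFrame({"sex": ["male", "female", "males"],
                       "height": [180.0, 165.0, -1.0]})

    # Tables of common values reveal miscodings ("males") and
    # suspicious codes such as -1.
    print(df["sex"].value_counts(dropna=False))
    print(df["height"].value_counts(dropna=False))

    # Recode the missing-value indicator before any modeling.
    df["height"] = df["height"].replace(-1.0, np.nan)

    # A histogram makes unit errors obvious (e.g. a height of 180 inches).
    df["height"].hist()
    plt.show()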
Histograms and boxplots are good ways to check that the measurements you observe fall on the right scale.

²/2010/11/1110mars-climate-observer-report/

4.10 Common mistakes

4.10.1 Failing to check the data at all

A common temptation in data analysis is to load the data and immediately leap to statistical modeling. Checking the data before analysis is a critical step in the process.

4.10.2 Encoding factors as quantitative numbers

If a scale is qualitative, but the variable is encoded as 1, 2, 3, etc., then statistical modeling functions may interpret this variable as a quantitative variable and incorrectly order the values.

4.10.3 Not making sufficient plots

A common mistake is to only make tabular summaries of the data when doing data checking. Creating a broad range of data visualizations, one for each potential problem in a data set, is the best way to identify problems.

4.10.4 Failing to look for outliers or missing values

A common mistake is to assume that all measurements follow the appropriate distribution. Plots of the distribution of the data for each measured variable can help to identify outliers.

This chapter builds on and expands the book author's data sharing guide³.

³https:///jtleek/datasharing

5. Exploratory analysis

Exploratory analysis is largely concerned with summarizing and visualizing data before performing formal modeling. The reasons for creating graphs for data exploration are:

• To understand properties of the data
• To inspect qualitative features rather than a huge table of raw data
• To discover new patterns or associations

5.1 Interactive analysis is the best way to explore data

If you want to understand a data set, you play around with it and explore it. You need to make plots, make tables, identify quirks, outliers, missing data patterns and problems with the data. To do this you need to interact with the data quickly. One way is to analyze the whole data set at once using tools like Hive, Hadoop, or Pig. But an often easier, better, and more cost effective approach is to use random sampling. As Robert Gentleman put it, "make big data as small as possible as quickly as possible"¹.

¹https:///EllieMcDonagh/status/469184554549248000
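For example, a random subsample is one line in pandas (the input file is hypothetical):

    import pandas as pd

    big = pd.read_csv("big_data.csv")  # hypothetical large data set

    # Explore a random subsample: trends are usually preserved,
    # and plots and tables stay fast.
    small = big.sample(n=min(10_000, len(big)), random_state=42)
    print(small.describe())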
5.2 Plot as much of the actual data as you can

These boxplots look very similar:

Figure 5.1 Boxplots that look similar

but if you overlay the actual data points you can see that they have very different distributions.

Figure 5.2 Boxplots that look similar with points overlayed

Plotting more of the data allows you to identify outliers, confounders, missing data, relationships, and correlations much more easily than with summary measures.

5.3 Exploratory graphs and tables should be made quickly

Unlike with figures that you will distribute, you are only communicating with yourself. You will be making many graphs and figures; the goal is speed and accuracy at the expense of polish. Avoid spending time on making axis labels clear or spending time on choosing colors. Find a color palette and sizing scheme you like and stick with it.

5.4 Plots are better than summaries

You can explore data by calculating summary statistics, for example the correlation between variables. However, all of these data sets have the exact same correlation and regression line²:

Figure 5.3 Data sets with identical correlations and regression lines

²/wiki/Anscombe%27s_quartet

This means that it is often better to plot, rather than summarize, the data.

5.5 For large data sets, subsample before plotting

In general, most trends you care about observing will be preserved in a random subsample of the data.

Figure 5.4 A large data set and a random subsample

5.6 Use color and size to check for confounding

When plotting the relationship between two variables on a scatterplot, you can use the color or size of points to check for a confounding relationship. For example, in this plot it looks like the more you study the worse score you get on the test:

Figure 5.5 Studying versus score

but if you size the points by the skill of the student, you see that more skilled students don't study as much. So it is likely that skill is confounding the relationship.
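A minimal sketch of this kind of plot, with simulated data in which skill drives both studying time and score (all numbers are made up for illustration):

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(1)

    # Simulated confounder: skilled students study less but score higher.
    skill = rng.uniform(0, 1, 100)
    studying = 10 * (1 - skill) + rng.normal(0, 1, 100)
    score = 50 + 40 * skill + rng.normal(0, 5, 100)

    # The raw scatter suggests studying hurts scores; sizing the
    # points by skill reveals the confounder.
    plt.scatter(studying, score, s=100 * skill + 5, alpha=0.5)
    plt.xlabel("hours studying")
    plt.ylabel("test score")
    plt.show()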
COMMON PHASE ERROR DUE TO PHASE NOISE IN OFDM - ESTIMATION AND SUPPRESSION

Denis Petrovic, Wolfgang Rave and Gerhard Fettweis
Vodafone Chair for Mobile Communications, Dresden University of Technology, Helmholtzstrasse 18, Dresden, Germany
{petrovic, rave, fettweis}@ifn.et.tu-dresden.de

Abstract - Orthogonal frequency division multiplexing (OFDM) has already become a very attractive modulation scheme for many applications. Unfortunately OFDM is very sensitive to synchronization errors, one of them being phase noise, which is of great importance in modern WLAN systems which target high data rates and tend to use higher frequency bands because of the spectrum availability. In this paper we propose a linear Kalman filter as a means for tracking phase noise and its suppression. The algorithm is pilot based. The performance of the proposed method is investigated and compared with the performance of other known algorithms.

Keywords - OFDM, Synchronization, Phase noise, WLAN

I. INTRODUCTION

OFDM has been applied in a variety of digital communications applications. It has been deployed in both wired systems (xDSL) and wireless LANs (IEEE 802.11a). This is mainly due to the robustness to frequency selective fading. The basic principle of OFDM is to split a high rate data stream into a number of lower rate streams which are transmitted simultaneously over a number of orthogonal subcarriers. However this most valuable feature, namely orthogonality between the carriers, is threatened by the presence of phase noise in oscillators. This is especially the case if bandwidth efficient higher order modulations need to be employed or if the spacing between the carriers is to be reduced.

To compensate for phase noise several methods have been proposed. These can be divided into time domain [1][2] and frequency domain approaches [3][4][5]. In this paper we propose an algorithm for tracking the average phase noise offset, also known as the common phase error (CPE) [6], in the frequency domain using a linear Kalman filter. Note that CPE estimation should be considered as a first step within more sophisticated algorithms for phase noise suppression [5] which attempt to suppress also the intercarrier interference (ICI) due to phase noise. CPE compensation alone can, however, suffice for some system design scenarios to suppress phase noise to a satisfactory level. For these two reasons we consider CPE estimation as an important step for phase noise suppression.

II. SYSTEM MODEL

An OFDM transmission system in the presence of phase noise is shown in Fig. 1. Since all phase noise sources can be mapped to the receiver side [7], we assume, without loss of generality, that phase noise is present only at the front end of the receiver. Assuming perfect frequency and timing synchronization, the received OFDM signal samples, sampled at frequency f_s, in the presence of phase noise can be expressed as

$$r(n) = \left(x(n) \ast h(n)\right) e^{j\phi(n)} + \xi(n).$$

Each OFDM symbol is assumed to consist of a cyclic prefix of length N_CP samples and N samples corresponding to the useful signal. The variables x(n), h(n) and φ(n) denote the samples of the transmitted signal, the channel impulse response and the phase noise process at the output of the mixer, respectively. The symbol ∗ stands for convolution. The term ξ(n) represents AWGN noise with variance σ_n².
The phase noise process φ(t) is modelled as a Wiener process [8], the details of which are given below, with a certain 3 dB bandwidth Δf_3dB.

Fig. 1 Block diagram of an OFDM transmission chain.

At the receiver, after removing the N_CP samples corresponding to the cyclic prefix and taking the discrete Fourier transform (DFT) on the remaining N samples, the demodulated carrier amplitude R_{m,l_k} at subcarrier l_k (l_k = 0, 1, ..., N−1) of the m-th OFDM symbol is given as [4]:

$$R_{m,l_k} = X_{m,l_k} H_{m,l_k} I_m(0) + \zeta_{m,l_k} + \eta_{m,l_k} \qquad (1)$$

where X_{m,l_k}, H_{m,l_k} and η_{m,l_k} represent the transmitted symbol on subcarrier l_k, the channel transfer function and linearly transformed AWGN with unchanged variance σ_n² at subcarrier l_k, respectively. The term ζ_{m,l_k} represents intercarrier interference (ICI) due to phase noise and was shown to be a Gaussian distributed, zero mean, random variable with variance σ²_ICI = πNΔf_3dB/f_s [7].

The term I_m(0) also stems from phase noise. It does not depend on the subcarrier index and modifies all subcarriers of one OFDM symbol in the same manner. As its modulus is in addition very close to one [9], it can be seen as a symbol rotation in the complex plane. Thus it is referred to in the literature as the common phase error (CPE) [6].

The constellation rotation due to CPE causes unacceptable system performance [7]. Acceptable performance can be achieved if one estimates I_m(0) or its argument and compensates the effect of the CPE by derotating the received subcarrier symbols in the frequency domain (see Eq. (1)), which significantly reduces the error rate as compared to the case where no compensation is used. The problem of estimating the CPE was addressed by several authors [3][4][10]. In [3] the authors concentrated on estimating the argument of I_m(0) using a simple averaging over pilots. In [10] the argument of I_m(0) was estimated using an extended Kalman filter, while in [4] the coefficient I_m(0) itself was estimated using the LS algorithm. Here we introduce an alternative way for minimum mean square estimation (MMSE) [11] of I_m(0) using a linear scalar Kalman filter. The algorithm is, as in [4], pilot based.

III. PHASE NOISE MODEL

For our purposes we need to consider a discretized phase noise model φ(n) = φ(nT_s), where n ∈ ℕ₀ and T_s = 1/f_s is the sampling period at the front end of the receiver. We adopt a Brownian motion model of the phase noise [8]. The samples of the phase noise process are given as φ(n) = 2πf_c√c B(n), where f_c is the carrier frequency, c = Δf_3dB/(πf_c²) [8] and B(n) represents the discretized Brownian motion process. Using properties of the Brownian motion [12] the following holds: B(0) = 0 and B(n+1) = B(n) + dB_n, n ∈ ℕ₀, where each increment dB_n is an independent random variable and dB_n ∼ √T_s N(0,1). Noting that φ(n) = 2πf_c√c B(n), we can write the discrete time phase noise process equation as

$$\phi(n+1) = \phi(n) + w(n) \qquad (2)$$

where w(n) ∼ N(0, 4π²f_c²cT_s) is a Gaussian random variable with zero mean and variance σ_w² = 4π²f_c²cT_s.
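Substituting the oscillator constant c = Δf_3dB/(πf_c²) defined above into σ_w² makes the dependence on the phase noise bandwidth explicit (a one-line simplification of quantities already defined in this section):

$$\sigma_w^2 = 4\pi^2 f_c^2 c\, T_s = 4\pi^2 f_c^2 \,\frac{\Delta f_{3dB}}{\pi f_c^2}\, T_s = \frac{4\pi\,\Delta f_{3dB}}{f_s},$$

so the per-sample phase increment variance grows linearly with Δf_3dB and is independent of the carrier frequency.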
IV. CPE ESTIMATION USING A KALMAN FILTER

Since all received subcarriers within one OFDM symbol are affected by the same factor, namely I_m(0), the problem at hand can be seen as an example of estimating a constant from several noisy measurements given by Eq. (1), for which purpose a Kalman filter is well suited [11]. For a Kalman filter to be used we need to define the state space model of the system. Define first the set L = {l_1, l_2, l_3, ..., l_P} as a subset of the subcarrier set {0, 1, ..., N−1}. Using Eq. (1) one can write

$$R_{m,l_k} = A_{m,l_k} I_{m,l_k}(0) + \varepsilon_{m,l_k} \qquad (3)$$

where A_{m,l_k} = X_{m,l_k} H_{m,l_k} and I_{m,l_k}(0) = I_m(0) for all k = 1, 2, ..., P. Additional indexing of the CPE terms is done here only for convenience of notation. On the other hand one can write

$$I_{m,l_{k+1}}(0) = I_{m,l_k}(0). \qquad (4)$$

Equations (3) and (4) are the measurement and process equation of the system state space model, where A_{m,l_k} represents the measurement matrix, while the process matrix is equal to 1 and I_{m,l_k}(0) corresponds to the state of the system. The measuring noise is given by ε_{m,l_k}, which combines the ICI and AWGN terms in Eq. (1), the variance of which for all l_k equals σ_ε² = σ²_ICI + σ_n². The process noise equals zero. Note that the defined state space model is valid only for one OFDM symbol.

For the state space model to be fully defined, knowledge of A_{m,l_k} = X_{m,l_k} H_{m,l_k} is needed. Here we assume to have ideal knowledge of the channel. On the other hand, we define the subset L to correspond to the pilot subcarrier locations within one OFDM symbol, so that X_{m,q}, q ∈ L are also known. We assume that at the beginning of each burst perfect timing and frequency synchronization is achieved, so that the phase error at the beginning of the burst equals zero. After the burst reception and demodulation, the demodulated symbols are one by one passed to the Kalman filter.

For the Kalman filter initialization one needs, for each OFDM symbol, an a priori value for Î_{m,l_1}(0) and an a priori error variance K⁻_{m,1}. At the beginning of the burst, when m = 1, it is reasonable to adopt Î⁻_{1,l_1}(0) = 1. Within each OFDM symbol, say the m-th, the filter uses the P received pilot subcarriers to recursively update the a priori value Î⁻_{m,l_1}(0). After all P pilot subcarriers are taken into account, Î_{m,l_P}(0) is obtained, which is adopted as the estimate of the CPE within one OFDM symbol, denoted as Î_m(0). The Kalman filter also provides an error variance of the estimate of I_{m,l_P}(0) as K_{m,P}. Î_{m,l_P}(0) and K_{m,P} are then used as a priori measures for the next OFDM symbol. The detailed structure of the algorithm is as follows.

Step 1: Initialization

$$\hat{I}^-_{m,l_1}(0) = E\{I_{m,l_1}(0)\} = \hat{I}_{m-1}(0)$$
$$K^-_{m,1} = E\{|I_m(0) - \hat{I}_{m-1}(0)|^2\} \cong E\{|\phi_m - \hat{\phi}_{m-1}|^2\} = \sigma^2_{CPE} + K_{m-1,P}$$

where σ²_CPE = 4π²((N²+1)/(3N) + N_CP)·Δf_3dB/f_s (see [10]), K_{0,P} = 0 and φ_m = arg{I_m(0)}.
In this paper we have extended the set to all demodulated symbols.The Kalmanfilter estimation is then applied in an unchanged form for a larger set L.VI.N UMERICAL R ESULTSThe performance of the proposed algorithm is investigated and compared with the proposal of[4]which is shown to outperform other known approaches.The system model is according to the IEEE802.11a standard,where64-QAM modulation is used.We investigate the performance in AWGN channels and frequency selective channels using as an example the ETSI HiperLAN A-Channel(ETSI A). Transmission of10OFDM symbols per burst is assumed.A.Properties of an EstimatorThe quality of an estimation is investigated in terms of the mean square error(MSE)of the estimator for a range of phase noise bandwidths∆f3dB∈[10÷800]Hz.Table1 can be used to relate the phase noise bandwidth with other quantities.Figures2and3compare the MSE of the LS estimator from[4]and our approach for two channel types and both standard correction and using decision feedback. Note that SNRs are chosen such that the BER of a coded system after the Viterbi algorithm in case of phase noise free transmission is around1·10−4.Kalmanfilter shows better performance in all cases and seems to be more effective for small phase noise bandwidths. As expected when DF is used the MSE of an estimator is smaller because we are taking more measurements into account.Fig.2MSE of an estimator for AWGN channel.Fig.3MSE of an estimator for ETSI A channel.Table 1Useful relationsQuantitySymbolRelationTypical values for IEEE802.11aOscillator constant c [1radHz]8.2·10−19÷4.7·10−18Oscillator 3dB bandwidth ∆f 3dB [Hz]∆f 3dB =πf 2cc 70÷400Relative 3dB bandwidth ∆f 3dB ∆f car∆f 3dBfsN 2·10−4÷13·10−4Phase noise energy E PN [rad]E PN =4π∆f 3dB∆fcar0.0028÷0.016Subcarrier spacing∆f car∆f car =f s N312500HzB.Symbol Error Rate DegradationSymbol error rate (SER)degradation due to phase noise is investigated also for a range of phase noise bandwidths ∆f 3dB ∈[10÷800]Hz and compared for different correc-tion algorithms.Ideal CPE correction corresponds to the case when genie CPE values are available.In all cases simpleconstellation derotation with φ=−arg {ˆIm (0)}is used.Fig.4SER degradation for AWGN channel.In Figs.4and 5SER degradation for AWGN and ETSI A channels is plotted,respectively.It is interesting to note that as opposed to the ETSI A channel case in AWGN channel there is a gap between the ideal CPE and both correction approaches.This can be explained if we go back to Eq.(1)where we have seen that phase noise affects the constellation as additive noise.Estimation error of phase noise affects the constellation also in an additive manner.On the other hand the SER curve without phase noise in the AWGN case is much steeper than the corresponding one for the ETSI A channel.A small SNR degradation due to estimation errors will cause therefore large SER variations.This explains why the performance differs much less in the ETSI A channel case.Generally from this discussion a conclusion can be drawn that systems with large order of diversity are more sensitive to CPE estimation errors.Note that this ismeantFig.5SER degradation for ETSI A channel.not in terms of frequency diversity but the SER vs.SNR having closely exponential dependence.It can be seen that our approach shows slightly better performance than [4]especially for small phase noise bandwidths.What is also interesting to note is,that DF is not necessary in the case of ETSI A types of channels (small slope of SER vs.SNR)while in case of AWGN (large 
VII. CONCLUSIONS

We investigated the application of a linear Kalman filter as a means for tracking phase noise and its suppression. The proposed algorithm is of low complexity and its performance was studied in terms of the mean square error (MSE) of the estimator and the SER degradation. The performance of the algorithm was compared with other algorithms, showing equivalent and in some cases better performance.

REFERENCES

[1] R. A. Casas, S. Biracree, and A. Youtz, "Time Domain Phase Noise Correction for OFDM Signals," IEEE Trans. on Broadcasting, vol. 48, no. 3, 2002.
[2] M. S. El-Tanany, Y. Wu, and L. Hazy, "Analytical Modeling and Simulation of Phase Noise Interference in OFDM-based Digital Television Terrestrial Broadcasting Systems," IEEE Trans. on Broadcasting, vol. 47, no. 3, 2001.
[3] P. Robertson and S. Kaiser, "Analysis of the effects of phase noise in OFDM systems," in Proc. ICC, 1995.
[4] S. Wu and Y. Bar-Ness, "A Phase Noise Suppression Algorithm for OFDM-Based WLANs," IEEE Communications Letters, vol. 44, May 1998.
[5] D. Petrovic, W. Rave, and G. Fettweis, "Phase Noise Suppression in OFDM including Intercarrier Interference," in Proc. Intl. OFDM Workshop (InOWo) 03, pp. 219-224, 2003.
[6] A. Armada, "Understanding the Effects of Phase Noise in Orthogonal Frequency Division Multiplexing (OFDM)," IEEE Trans. on Broadcasting, vol. 47, no. 2, 2001.
[7] E. Costa and S. Pupolin, "M-QAM-OFDM System Performance in the Presence of a Nonlinear Amplifier and Phase Noise," IEEE Trans. Commun., vol. 50, no. 3, 2002.
[8] A. Demir, A. Mehrotra, and J. Roychowdhury, "Phase Noise in Oscillators: A Unifying Theory and Numerical Methods for Characterisation," IEEE Trans. Circuits Syst. I, vol. 47, May 2000.
[9] S. Wu and Y. Bar-Ness, "Performance Analysis of the Effect of Phase Noise in OFDM Systems," in IEEE 7th ISSSTA, 2002.
[10] D. Petrovic, W. Rave, and G. Fettweis, "Phase Noise Suppression in OFDM using a Kalman Filter," in Proc. WPMC, 2003.
[11] S. M. Kay, Fundamentals of Statistical Signal Processing, vol. 1. Prentice-Hall, 1998.
[12] D. J. Higham, "An Algorithmic Introduction to Numerical Simulation of Stochastic Differential Equations," SIAM Review, vol. 43, no. 3, pp. 525-546, 2001.
Discourse Awareness in Translation
Substitution falls into three categories: nominal, verbal, and clausal.
Nominal substitution: one/ones; same. For example:

A: If only I could remember where it was that I saw someone putting away the box with those candles in, I could finish the decorations now.
B: You mean the little colored ones?
M.A.K. Halliday (2000: 4): cohesion is a semantic concept, referring to relations of meaning that exist within the text, and it occurs where the interpretation of some element in the discourse is dependent on that of another.
Consider the published Chinese rendering of a passage from Vanity Fair (quoted here in Chinese, since the translator's word choice is the point under discussion):

到游乐场去的那一天，奥斯本中尉到了勒塞尔广场就对太太、小姐们说："塞特笠太太，我希望您这儿有空位子，我请了我们的都宾来吃饭，然后一块儿上游乐场。他跟乔斯差不多一样怕羞。"
The word "modest" in the original has several possible senses: "humble", "demure", "chaste", "shy", "restrained", "plain", and so on. Why, then, did the translator settle on 怕羞 ("shy")? Taken in isolation, this sentence, or even this paragraph, hardly determines the sense, since almost any of these readings seems to fit. Only by starting from the discourse as a whole, analyzing the context of situation and the co-text, can the Chinese equivalent that matches the original meaning be selected. As it turns out, a paragraph at the end of Chapter 3 of Vanity Fair establishes Jos's character: "Poor Joe, why will he be so shy?" And a passage in Chapter 5 reads: "He had arrived with a knock so very timid and quiet." From "shy", "timid" and "quiet" we have ample grounds for rendering "modest" here as 怕羞.
GSPBOX: A toolbox for signal processing on graphs

Nathanael Perraudin, Johan Paratte, David Shuman, Lionel Martin, Vassilis Kalofolias, Pierre Vandergheynst and David K. Hammond

March 16, 2016

arXiv:1408.5781v2 [cs.IT] 15 Mar 2016

Abstract

This document introduces the Graph Signal Processing Toolbox (GSPBox), a framework that can be used to tackle graph related problems with a signal processing approach. It explains the structure and the organization of this software. It also contains a general description of the important modules.

1 Toolbox organization

In this document, we briefly describe the different modules available in the toolbox. For each of them, the main functions are briefly described. This chapter should help making the connection between the theoretical concepts introduced in [7, 9, 6] and the technical documentation provided with the toolbox. We highly recommend to read this document and the tutorial before using the toolbox. The documentation, the tutorials and other resources are available online¹.

The toolbox has first been implemented in MATLAB, but a port to Python, called the PyGSP, has been made recently. As of the time of writing of this document, not all the functionalities have been ported to Python, but the main modules are already available. In the following, functions prefixed by [M]: refer to the MATLAB implementation and the ones prefixed with [P]: refer to the Python implementation.

1.1 General structure of the toolbox (MATLAB)

The general design of the GSPBox focuses around the graph object [7], a MATLAB structure containing the necessary information to use most of the algorithms. By default, only a few attributes are available (see section 2), allowing only the use of a subset of functions. In order to enable the use of more algorithms, additional fields can be added to the graph structure. For example, the following line will compute the graph Fourier basis enabling exact filtering operations.

    G = gsp_compute_fourier_basis(G);

Ideally, this operation should be done on the fly when exact filtering is required. Unfortunately, the lack of a well defined class paradigm in MATLAB makes it too complicated to be implemented. Luckily, the above formulation prevents any unnecessary copy of the data contained in the structure G. In order to avoid name conflicts, all functions in the GSPBox start with [M]: gsp_. A second important convention is that all functions applying a graph algorithm on a graph signal take the graph as first argument. For example, the graph Fourier transform of the vector f is computed by

    fhat = gsp_gft(G, f);

¹See https://lts2.epfl.ch/gsp/doc/ for MATLAB and https://lts2.epfl.ch/pygsp for Python. The full documentation is also available in a single document: https://lts2.epfl.ch/gsp/gspbox.pdf
a natural use of instances of the graph object.For example the equivalent of the MATLAB call:1G=gsp_estimate_lmax(G);can be achieved using a simple method call on the graph object:1G.estimate_lmax()Moreover,the use of class for the"graph object"allows to compute additional graph attributes on the?y,making the code clearer as its MATLAB equivalent.Note though that functionalities are grouped into different modules(one per section below) and that several functions that work on graphs have to be called directly from the modules.For example,one should write:1layers=pygsp.operators.kron_pyramid(G,levels)This is the case as soon as the graph is the structure on which the action has to be performed and not our principal focus.In a similar way to the MATLAB implementation using the UNLocBoX for the convex optimization routines,the Python implementation uses the PyUNLocBoX,which is the Python port of the UNLocBoX. 2GraphsThe GSPBox is constructed around one main object:the graph.It is implemented as a structure in Matlab and as a class in Python.It stores the nodes,the edges and other attributes related to the graph.In the implementation,a graph is fully de?ned by the weight matrix W,which is the main and only required attribute.Since most graph structures are far from fully connected, W is implemented as a sparse matrix.From the weight matrix a Laplacian matrix is computed and stored as an attribute of the graph object.Different other attributes are available such as plotting attributes,vertex coordinates,the degree matrix,the number of vertices and edges.The list of all attributes is given in table1.2Attribute Format Data type DescriptionMandatory?eldsW N x N sparse matrix double Weight matrix WL N x N sparse matrix double Laplacian matrixd N x1vector double The diagonal of the degree matrixN scalar integer Number of verticesNe scalar integer Number of edgesplotting[M]:structure[P]:dict none Plotting parameterstype text string Name,type or short descriptiondirected scalar[M]:logical[P]:boolean State if the graph is directed or notlap_type text string Laplacian typeOptional?eldsA N x N sparse matrix[M]:logical[P]:boolean Adjacency matrixcoords N x2or N x3matrix double Vectors of coordinates in2D or3D.lmax scalar double Exact or estimated maximum eigenvalue U N x N matrix double Matrix of eigenvectorse N x1vector double Vector of eigenvaluesmu scalar double Graph coherenceTable1:Attributes of the graph objectThe easiest way to create a graph is the[M]:gsp_graph[P]:pygsp.graphs.Graph function which takes the weight matrix as input.This function initializes a graph structure by creating the graph Laplacian and other useful attributes.Note that by default the toolbox uses the combinatorial de?nition of the Laplacian operator.Other Laplacians can be computed using the[M]:gsp_create_laplacian[P]:pygsp.gutils.create_laplacian function.Please note that almost all functions are dependent of the Laplacian de?nition.As a result,it is important to select the correct de?nition at? rst.Many particular graphs are also available using helper functions such as:ring,path,comet,swiss roll,airfoil or two moons. 
In addition, functions are provided for usual non-deterministic graphs such as: Erdos-Renyi, community, Stochastic Block Model or sensor network graphs. Nearest Neighbors (NN) graphs form a class which is used in many applications and can be constructed from a set of points (or point cloud) using the [M]: gsp_nn_graph [P]: pygsp.graphs.NNGraph function. The function is highly tunable and can handle very large sets of points using FLANN [3]. Two particular cases of NN graphs have their dedicated helper functions: 3D point clouds and image patch-graphs. An example of the former can be seen in the function [M]: gsp_bunny [P]: pygsp.graphs.Bunny. As for the second, a graph can be created from an image by connecting similar patches of pixels together. The function [M]: gsp_patch_graph creates this graph. Parameters allow the resulting graph to vary between local and non-local and to use different distance functions [12, 4]. A few examples of the graphs are displayed in Figure 1.

Figure 1: Examples of classical graphs: two moons (top left), community (top right), airfoil (bottom left) and sensor network (bottom right).

3 Plotting

As in many other domains, visualization is very important in graph signal processing. The most basic operation is to visualize graphs. This can be achieved using a call to the function [M]: gsp_plot_graph [P]: pygsp.plotting.plot_graph. In order to be displayable, a graph needs to have 2D (or 3D) coordinates (which is a field of the graph object). Some graphs do not possess default coordinates (e.g. Erdos-Renyi).

The toolbox also contains routines to plot signals living on graphs. The function dedicated to this task is [M]: gsp_plot_signal [P]: pygsp.plotting.plot_signal. For now, only 1D signals are supported. By default, the value of the signal is displayed using a color coding, but bars can be displayed by passing parameters.

The third visualization helper is a function to plot filters (in the spectral domain) which is called [M]: gsp_plot_filter [P]: pygsp.plotting.plot_filter. It also supports filter-banks and allows to automatically inspect the related frames. The results obtained using these three plotting functions are visible in Fig. 2.

Figure 2: Visualization of graph and signals using plotting functions.
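A minimal plotting sketch using the helper functions named above (a sensor graph comes with default 2D coordinates, so it is directly displayable; a working plotting back-end such as matplotlib is assumed, and newer PyGSP releases expose these helpers as graph methods instead):

    import numpy as np
    import pygsp

    # A random sensor network graph with default 2D coordinates.
    G = pygsp.graphs.Sensor(N=64)

    # A smooth test signal living on the vertices.
    signal = np.sin(2 * np.pi * G.coords[:, 0])

    # Visualize the graph, then the signal (color-coded by default).
    pygsp.plotting.plot_graph(G)
    pygsp.plotting.plot_signal(G, signal)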
4 Operators

The module operators contains basic spectral graph functions such as Fourier transform, localization, gradient, divergence or pyramid decomposition. Since all operators are based on the Laplacian definition, the necessary underlying objects (attributes) are all stored into a single object: the graph.

As a first example, the graph Fourier transform [M]: gsp_gft [P]: pygsp.operators.gft requires the Fourier basis. This attribute can be computed with the function [M]: gsp_compute_fourier_basis [P]: pygsp.operators.compute_fourier_basis [9] that adds the fields U, e and lmax to the graph structure. As a second example, since the gradient and divergence operate on the edges of the graph, a search on the edge matrix is needed to enable the use of these operators. It can be done with the routines [M]: gsp_adj2vec [P]: pygsp.operators.adj2vec. These operations take time and should be performed only once. In MATLAB, these functions are called explicitly by the user beforehand. However, in Python they are automatically called when needed and the result stored as an attribute.

The module operators also includes a Multi-scale Pyramid Transform for graph signals [6]. Again, it works in two steps. First the pyramid is precomputed with [M]: gsp_graph_multiresolution [P]: pygsp.operators.graph_multiresolution. Second the decomposition of a signal is performed with [M]: gsp_pyramid_analysis [P]: pygsp.operators.pyramid_analysis. The reconstruction uses [M]: gsp_pyramid_synthesis [P]: pygsp.operators.pyramid_synthesis.

The Laplacian is a special operator stored as a sparse matrix in the field L of the graph. Table 2 summarizes the available definitions. We are planning to implement additional ones.

Table 2: Different definitions for graph Laplacian operator and their associated edge derivative. (For directed graphs, d₊, D₊ and d₋, D₋ define the out-degree and in-degree of a node. π, Π is the stationary distribution of the graph and P is a normalized weight matrix W. For sake of clarity, exact definitions of those quantities are not given here, but can be found in [14].)

Name | Edge derivative f_e(i,j) | Laplacian matrix (operator) | Available

Undirected graph:
Combinatorial Laplacian | $\sqrt{W(i,j)}\,(f(j) - f(i))$ | $D - W$ | V
Normalized Laplacian | $\sqrt{W(i,j)}\left(\frac{f(j)}{\sqrt{d(j)}} - \frac{f(i)}{\sqrt{d(i)}}\right)$ | $D^{-\frac{1}{2}}(D - W)D^{-\frac{1}{2}}$ | V

Directed graph:
Combinatorial Laplacian | $\sqrt{W(i,j)}\,(f(j) - f(i))$ | $\frac{1}{2}(D_+ + D_- - W - W^*)$ | V
Degree normalized Laplacian | $\sqrt{W(i,j)}\left(\frac{f(j)}{\sqrt{d_-(j)}} - \frac{f(i)}{\sqrt{d_+(i)}}\right)$ | $I - \frac{1}{2}D_+^{-\frac{1}{2}}[W + W^*]D_-^{-\frac{1}{2}}$ | V
Distribution normalized Laplacian | $\sqrt{\pi(i)}\left(\sqrt{\frac{p(i,j)}{\pi(j)}}f(j) - \sqrt{\frac{p(i,j)}{\pi(i)}}f(i)\right)$ | $\frac{1}{2}\left(\Pi^{\frac{1}{2}}P\Pi^{-\frac{1}{2}} + \Pi^{-\frac{1}{2}}P^*\Pi^{\frac{1}{2}}\right)$ | V

5 Filters

Filters are a special kind of linear operators that are so prominent in the toolbox that they deserve their own module [9, 7, 2, 8, 2]. A filter is simply an anonymous function (in MATLAB) or a lambda function (in Python) acting element-by-element on the input. In MATLAB, a filter-bank is created simply by gathering these functions together into a cell array. For example, you would write:

    % g(x) = x^2 + sin(x)
    g = @(x) x.^2 + sin(x);
    % h(x) = exp(-x)
    h = @(x) exp(-x);
    % Filterbank composed of g and h
    fb = {g, h};

The toolbox contains many predefined filter designs. They all start with [M]: gsp_design_ in MATLAB and are in the module [P]: pygsp.filters in Python. Once a filter (or a filter-bank) is created, it can be applied to a signal with [M]: gsp_filter_analysis in MATLAB and a call to the method [P]: analysis of the filter object in Python. Note that the toolbox uses accelerated algorithms to scale almost linearly with the number of samples [11].

The available types of filter design of the GSPBox can be classified as:

• Wavelets (filters are scaled versions of a mother window)
• Gabor (filters are shifted versions of a mother window)
• Low pass filter (filters to de-noise a signal)
• High pass/Low pass separation filterbank (tight frame of 2 filters to separate the high frequencies from the low ones; no energy is lost in the process)

Additionally, to adapt the filter to the graph eigen-distribution, the warping function [M]: gsp_design_warped_translates [P]: pygsp.filters.WarpedTranslates can be used [10].

6 UNLocBoX Binding

This module contains special wrappers for the UNLocBoX [5]. It allows to solve convex problems containing graph terms very easily [13, 15, 14, 1]. For example, the proximal operator of the graph TV norm is given by [M]: gsp_prox_tv. The optimization module contains also some predefined problems such as graph basis pursuit in [M]: gsp_solve_l1 or wavelet de-noising in [M]: gsp_wavelet_dn. There is still active work on this module, so it is expected to grow rapidly in the future releases of the toolbox.

7 Toolbox conventions

7.1 General conventions

• As much as possible, all small letters are used for vectors (or vectors stacked into a matrix) and capitals are reserved for matrices. A notable exception is the creation of nearest neighbors graphs.
• A variable should never have the same name as an already existing function in MATLAB or Python respectively. This makes the code easier to read and less prone to errors. This is a best coding practice in general, but since both languages allow the override of built-in functions, special care is needed.
errors. This is a best coding practice in general, but since both languages allow built-in functions to be overridden, special care is needed. All function names should be lowercase. This avoids a lot of confusion, because some computer architectures respect upper/lower casing and others do not. As much as possible, functions are named after the action they perform rather than after the algorithm they use or the person who invented it. No global variables: global variables make code harder to debug and harder to parallelize.
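To make the shadowing pitfall behind this rule concrete, a two-line Python illustration (purely illustrative; the same accident happens in MATLAB when a variable is named after a function such as sum):

    # After this assignment the built-in function `sum` is shadowed
    sum = 3
    # sum([1, 2])  # would now raise TypeError: 'int' object is not callable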
7.2 MATLAB

All functions start with gsp_. The graph structure is always the first argument in the function call. Filters are always second. Finally, optional parameters are last. The toolbox does not use any argument-helper functions. As a result, optional arguments are generally stacked into a structure named param. If a transform works on a matrix, it will by default work along the columns. This is a standard in MATLAB (fft does this, among many other functions). Function names are traditionally written in uppercase in MATLAB documentation.

7.3 Python

All functions should be part of a module; there should be no call directly from pygsp ([P]: pygsp.my_function). Inside a given module, functionalities can be further split into different files regrouping those that are used in the same context. MATLAB's matrix operations are sometimes ported in a different way that preserves the efficiency of the code. When matrix operations are necessary, they are all performed through the numpy and scipy libraries. Since Python does not come with a plotting library, we support both matplotlib and pyqtgraph. Users must install the required libraries on their own. If both are correctly installed, pyqtgraph is favoured unless otherwise specified.

Acknowledgements

We would like to thank all coding authors of the GSPBox. The toolbox was ported to Python by Basile Chatillon, Alexandre Lafaye and Nicolas Rod. The toolbox was also improved by Nauman Shahid and Yann Schönenberger.

References

[1] M. Belkin, P. Niyogi, and V. Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. The Journal of Machine Learning Research, 7:2399–2434, 2006.
[2] D. K. Hammond, P. Vandergheynst, and R. Gribonval. Wavelets on graphs via spectral graph theory. Applied and Computational Harmonic Analysis, 30(2):129–150, 2011.
[3] M. Muja and D. G. Lowe. Scalable nearest neighbor algorithms for high dimensional data. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36, 2014.
[4] S. K. Narang, Y. H. Chao, and A. Ortega. Graph-wavelet filterbanks for edge-aware image processing. In Statistical Signal Processing Workshop (SSP), 2012 IEEE, pages 141–144. IEEE, 2012.
[5] N. Perraudin, D. Shuman, G. Puy, and P. Vandergheynst. UNLocBoX: A MATLAB convex optimization toolbox using proximal splitting methods. ArXiv e-prints, Feb. 2014.
[6] D. I. Shuman, M. J. Faraji, and P. Vandergheynst. A multiscale pyramid transform for graph signals. arXiv preprint arXiv:1308.4942, 2013.
[7] D. I. Shuman, S. K. Narang, P. Frossard, A. Ortega, and P. Vandergheynst. The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains. IEEE Signal Processing Magazine, 30(3):83–98, 2013.
[8] D. I. Shuman, B. Ricaud, and P. Vandergheynst. A windowed graph Fourier transform. Statistical Signal Processing Workshop (SSP), 2012 IEEE, pages 133–136, 2012.
[9] D. I. Shuman, B. Ricaud, and P. Vandergheynst. Vertex-frequency analysis on graphs. arXiv preprint arXiv:1307.5708, 2013.
[10] D. I. Shuman, C. Wiesmeyr, N. Holighaus, and P. Vandergheynst. Spectrum-adapted tight graph wavelet and vertex-frequency frames. arXiv preprint arXiv:1311.0897, 2013.
[11] A. Susnjara, N. Perraudin, D. Kressner, and P. Vandergheynst. Accelerated filtering on graphs using the Lanczos method. arXiv preprint arXiv:1509.04537, 2015.
[12] F. Zhang and E. R. Hancock. Graph spectral image smoothing using the heat kernel. Pattern Recognition, 41(11):3328–3342, 2008.
[13] D. Zhou, O. Bousquet, T. N. Lal, J. Weston, and B. Schölkopf. Learning with local and global consistency. Advances in Neural Information Processing Systems, 16(16):321–328, 2004.
[14] D. Zhou, J. Huang, and B. Schölkopf. Learning from labeled and unlabeled data on a directed graph. In Proceedings of the 22nd International Conference on Machine Learning, pages 1036–1043, New York, New York, USA, 2005. ACM Press.
[15] D. Zhou and B. Schölkopf. A regularization framework for learning from graph data. 2004.
How to Write an Essay Introduction
Five Parts: Sample Essay Hooks; Hooking Your Reader; Creating Your Context; Presenting Your Thesis; Bringing It All Together; Community Q&A

The introduction of your essay serves two important purposes. First, it gets your reader interested in the topic and encourages them to read what you have to say about it. Second, it gives your reader a roadmap of what you're going to say and the overarching point you're going to make – your thesis statement. A powerful introduction grabs your reader's attention and keeps them reading.[1]

Quick Summary
To write an essay introduction, start with a relevant anecdote, fun fact, or quote that will entice people to keep reading. Follow your opening with 2-3 sentences containing background information or facts that give your essay context, such as important dates, locations, or historical moments. Finally, present your thesis statement. Write a specific and provable statement that answers a question about your essay topic.

Part 1: Hooking Your Reader

1. Identify your audience. The first sentence or two of your introduction should pull the reader in. You want anyone reading your essay to be fascinated, intrigued, or even outraged. You can't do this if you don't know who your likely readers are.[2]
If you're writing a paper for a class, don't automatically assume your instructor is your audience. If you write directly to your instructor, you'll end up glossing over some information that is necessary to show that you properly understand the subject of your essay.
It can be helpful to reverse-engineer your audience based on the subject matter of your essay. For example, if you're writing an essay about a women's health issue for a women's studies class, you might identify your audience as young women within the age range most affected by the issue.

2. Use the element of surprise. A startling or shocking statistic can grab your audience's attention by immediately teaching them something they didn't know. Having learned something new in the first sentence, people will be interested to see where you go next.[3]
For this hook to be effective, your fact needs to be sufficiently surprising. If you're not sure, test it on a few friends. If they react by expressing shock or surprise, you know you've got something good.
Use a fact or statistic that sets up your essay, not something you'll be using as evidence to prove your thesis statement. Facts or statistics that demonstrate why your topic is important (or should be important) to your audience typically make good hooks.

3. Tug at your reader's heart-strings. Particularly with personal or political essays, use your hook to get your reader emotionally involved in the subject matter of your story. You can do this by describing a related hardship or tragedy.[4]
For example, if you were writing an essay proposing a change to drunk driving laws, you might open with a story of how the life of a victim was changed forever after they were hit by a drunk driver.

4. Offer a relevant example or anecdote. In your reading and research for your essay, you may have come across an entertaining or interesting anecdote that, while related, didn't really fit into the body of your essay.
Such an anecdote can work great as a hook.[5]
For example, if you're writing an essay about a public figure, you might include an anecdote about an odd personal habit that cleverly relates back to your thesis statement.
Particularly with less formal papers or personal essays, humorous anecdotes can be particularly effective hooks.

5. Ask a thought-provoking question. If you're writing a persuasive essay, consider using a relevant question to draw your reader in and get them actively thinking about the subject of your essay.[6]
For example: "What would you do if you could play God for a day? That's exactly what the leaders of the tiny island nation of Guam tried to answer."
If your essay prompt was a question, don't just repeat it in your paper. Make sure to come up with your own intriguing question.

6. Avoid clichés and generalizations. Generalizations and clichés, even if presented to contrast with your point, won't help your essay. In most cases, they'll actually hurt by making you look like an unoriginal or lazy writer.[7]
Broad, sweeping generalizations may ring false with some readers and alienate them from the start. For example, "everyone wants someone to love" would alienate someone who identified as aromantic or asexual.

Part 2: Creating Your Context

1. Relate your hook to a larger topic. The next part of your introduction explains to your reader how that hook connects to the rest of your essay. Start with a broader, more general scope to explain your hook's relevance.[8]
Use an appropriate transitional word or phrase, such as "however" or "similarly," to move from your specific anecdote back out to a broader scope.
For example, if you related a story about one individual, but your essay isn't about them, you can relate the hook back to the larger topic with a sentence like "Tommy wasn't alone, however. There were more than 200,000 dockworkers affected by that union strike."

2. Provide necessary background information. While you're still keeping things relatively general, let your readers know anything that will be necessary for them to understand your main argument and the points you're making in your essay.[9]
For example, if your thesis relates to how blackface was used as a means of enforcing racial segregation, your introduction would describe what blackface performances were, and where and when they occurred.
If you are writing an argumentative paper, make sure to explain both sides of the argument in a neutral or objective manner.

3. Define key terms for the purposes of your essay. Your topic may include broad concepts or terms of art that you will need to define for your reader. Your introduction isn't the place to reiterate basic dictionary definitions. However, if there is a key term that may be interpreted differently depending on the context, let your readers know how you're using that term.[10]
Definitions would be particularly important if your essay is discussing a scientific topic, where some scientific terminology might not be understood by the average layperson.
Definitions also come in handy in legal or political essays, where a term may have different meanings depending on the context in which it is used.

4. Move from the general to the specific. It can be helpful to think of your introduction as an upside-down pyramid. With your hook sitting on top, your introduction welcomes your readers to the broader world in which your thesis resides.[11]
If you're using 2 or 3 sentences to describe the context for your thesis, try to make each sentence a bit more specific than the one before it.
Draw your reader in gradually.
For example, if you're writing an essay about drunk driving fatalities, you might start with an anecdote about a particular victim. Then you could provide national statistics, then narrow it down further to statistics for a particular gender or age group.

Part 3: Presenting Your Thesis

1. Make your point. After you've set up the context within which you're making your argument, tell your readers the point of your essay. Use your thesis statement to directly communicate the unique point you will attempt to make through your essay.[12]
For example, a thesis for an essay on blackface performance might be "Because of its humiliating and demoralizing effect on African American slaves, blackface was used less as a comedy routine and more as a way of enforcing racial segregation."
Be assertive and confident in your writing. Avoid including fluff such as "In this essay, I will attempt to show...." Instead, dive right in and make your claim, bold and proud.
Your thesis should be specific, unique, and provable. Through your essay, you'll make points that will show that your thesis statement is true – or at least persuade your readers that it's most likely true.

2. Describe how you're going to prove your point. Round out your introduction by providing your readers with a basic roadmap of what you will say in your essay to support your thesis statement. In most cases, this doesn't need to be more than a sentence.[13]
If you've created an outline for your essay, this sentence is essentially the main subjects of each paragraph of the body of your essay.
For example, if you're writing an essay about the unification of Italy, you might list 3 obstacles to unification. In the body of your essay, you would discuss details about how each of those obstacles was addressed or overcome.
Instead of just listing all of your supporting points, sum them up by stating "how" or "why" your thesis is true. For example, instead of saying, "Phones should be banned from classrooms because they distract students, promote cheating, and make too much noise," you might say "Phones should be banned from classrooms because they act as an obstacle to learning."

3. Transition smoothly into the body of your essay. In many cases, you'll find that you can move straight from your introduction to the first paragraph of the body. Some introductions, however, may require a short transitional sentence at the end to flow naturally into the rest of your essay.[14]
To figure out if you need a transition sentence, read the introduction and the first paragraph out loud. If you find yourself pausing or stumbling between the paragraphs, work in a transition to make the move smoother.
You can also have friends or family members read your essay. If they feel it's choppy or jumps from the introduction into the essay, see what you can do to smooth it out.

Part 4: Bringing It All Together

1. Read essays by other writers in your discipline. What constitutes a good introduction will vary widely depending on your subject matter. A suitable introduction in one academic discipline may not work as well in another.[15]
If you're writing your essay for a class assignment, ask your instructor for examples of well-written essays that you can look at. Take note of conventions that are commonly used by writers in that discipline.
Make a brief outline of the essay based on the information presented in the introduction.
Then look at that outline as you read the essay to see how the essay follows it to prove the writer's thesis statement.

2. Keep your introduction short and simple. Generally, your introduction should be between 5 and 10 percent of the overall length of your essay. If you're writing a 10-page paper, your introduction should be approximately 1 page.[16]
For shorter essays under 1,000 words, keep your introduction to 1 paragraph, between 100 and 200 words.
Always follow your instructor's guidelines for length. These rules can vary at times based on genre or form of writing.

3. Write your introduction after you write your essay. Some writers prefer to write the body of the essay first, then go back and write the introduction. It's easier to present a summary of your essay when you've already written it.[17]
As you write your essay, you may want to jot down things you want to include in your introduction. For example, you may realize that you're using a particular term that you need to define in your introduction.

4. Revise your introduction to fit your essay. If you wrote your introduction first, go back and make sure your introduction provides an accurate roadmap of your completed paper. Even if you wrote an outline, you may have deviated from your original plans.[18]
Delete any filler or unnecessary language. Given the shortness of the introduction, every sentence should be essential to your reader's understanding of your essay.

5. Structure your introduction effectively. An essay introduction is fairly formulaic, and will have the same basic elements regardless of your subject matter or academic discipline. While it's short, it conveys a lot of information.[19]
The first sentence or two should be your hook, designed to grab your reader's attention and get them interested in reading your essay.
The next couple of sentences create a bridge between your hook and the overall topic of the rest of your essay.
End your introduction with your thesis statement and a list of the points you will make in your essay to support or prove your thesis statement.

Community Q&A

Q: How do I start a paper about extreme sports that kids play?
A: I would first narrow your subject down to one sport so you can be more focused. Note that this will likely be an informative essay. After you do this, an interesting hook statement may be an anecdote describing an intense moment in that chosen sport to get your audience interested. This can be made up or from your own experience with the sport.

Q: How can I start an essay about HIV and lifestyle?
A: An effective hook statement to start your essay about this topic may be a statistic about HIV, or perhaps an anecdote about someone facing this diagnosis and trying to make positive lifestyle changes for their health.

Q: How do you begin an introduction?
A: With something interesting! This is easier said than done of course, but a good intro starts with a quote, fact, or brief story that interests the reader. If it interested you while reading or researching, it's a great thing to start with. Just keep it short and it will be great.

Q: What should I do if I'm stuck on the thesis?
A: Skip it, write down your main points, and build the body of your essay.
Once you know all the areas you want to cover, think about what links them all together, and what the main thing you're trying to convey is.

Q: How should I start a body paragraph?
A: Start off with a mini thesis which states what the body paragraph is talking about.

Q: Where do you get started with a topic and introduction?
A: Start with the basics: what do you think about the topic? What argument can you make about it? Once you have an argument, start jotting down the evidence for the argument. This evidence will make up your paragraphs later on. If it's easiest, just skip the introduction now and come back once you're done; you'll have all the ideas already drawn out.

Q: My assignment is to summarize an already-written essay: could I begin by using the same introduction?
A: To summarize, you really need to condense what's there and put everything into your own words; this will include the introduction. It's fine to use the content of the introduction, but make sure not to copy the writing word-for-word.

Q: How can I write a short introduction about heart disease?
A: Start with something like "Heart disease is a serious condition that takes the lives of (number) Americans every year." Then go on to talk about the causes of heart disease, the symptoms and warning signs, and treatment options. Maybe something about how we can encourage more people to go to the doctor to get a diagnosis before it becomes more serious.

Q: What are some good statements to start with?
A: Generally, one starts an essay with an interesting quote, fact, or story to make the reader want to continue reading. Ex.: Did you know that every year...? Then you can begin to talk about background information and a thesis. A thesis usually lays out a brief summary of the points you want to make and includes your position on the topic. Ex.: Dogs are ideal pets because of their loyalty to humans and their great trainability.

Q: How can I write the introduction for an essay on the effects of peer pressure among teenagers?
A: Talk about the problem first, so the reader can understand why you are talking about effects and gets a good background on the subject.
D2.3.3.v1 SemVersion – Versioning RDF and Ontologies

Max Völkel (University of Karlsruhe), with contributions from: Carlos F. Enguix (National University of Ireland, Galway, Ireland), Sebastian Ryszard Kruk (DERI), Anna V. Zhdanova (DERI), Robert Stevens (U Manchester), York Sure (AIFB)

Abstract. EU-IST Network of Excellence (NoE) IST-2004-507482 KWEB, Deliverable D2.3.3.v1 (WP2.3). This paper describes the requirements for a semantic versioning system. The design, implementation and usage of SemVersion are described.

Document Identifier: KWEB/2004/D2.3.3.a/v1.0
Project: KWEB EU-IST-2004-507482
Version: v1.0
Date: June 6th, 2005
State: final
Distribution: internal

Knowledge Web Consortium. This document is part of a research project funded by the IST Programme of the Commission of the European Communities as project number IST-2004-507482.

• University of Innsbruck (UIBK) – Coordinator. Institute of Computer Science, Technikerstrasse 13, A-6020 Innsbruck, Austria. Contact: Dieter Fensel, dieter.fensel@uibk.ac.at
• École Polytechnique Fédérale de Lausanne (EPFL). Computer Science Department, Swiss Federal Institute of Technology, IN (Ecublens), CH-1015 Lausanne, Switzerland. Contact: Boi Faltings, boi.faltings@epfl.ch
• France Telecom (FT). 4 Rue du Clos Courtel, 35512 Cesson Sévigné, France, PO Box 91226. Contact: Alain Leger, alain.leger@
• Freie Universität Berlin (FU Berlin). Takustrasse 9, 14195 Berlin, Germany. Contact: Robert Tolksdorf, tolk@inf.fu-berlin.de
• Free University of Bozen-Bolzano (FUB). Piazza Domenicani 3, 39100 Bolzano, Italy. Contact: Enrico Franconi, franconi@inf.unibz.it
• Institut National de Recherche en Informatique et en Automatique (INRIA). ZIRST, 655 avenue de l'Europe, Montbonnot Saint Martin, 38334 Saint-Ismier, France. Contact: Jérôme Euzenat, Jerome.Euzenat@inrialpes.fr
• Centre for Research and Technology Hellas / Informatics and Telematics Institute (ITI-CERTH). 1st km Thermi-Panorama road, 57001 Thermi-Thessaloniki, Greece, PO Box 361. Contact: Michael G. Strintzis, strintzi@iti.gr
• Learning Lab Lower Saxony (L3S). Expo Plaza 1, 30539 Hannover, Germany. Contact: Wolfgang Nejdl, nejdl@learninglab.de
• National University of Ireland Galway (NUIG). Science and Technology Building, University Road, Galway, Ireland. Contact: Christoph Bussler, chris.bussler@deri.ie
• The Open University (OU). Knowledge Media Institute, Milton Keynes, MK7 6AA, United Kingdom. Contact: Enrico Motta, e.motta@
• Universidad Politécnica de Madrid (UPM). Campus de Montegancedo sn, 28660 Boadilla del Monte, Spain. Contact: Asunción Gómez Pérez, asun@fi.upm.es
• University of Karlsruhe (UKARL). Institut für Angewandte Informatik und Formale Beschreibungsverfahren (AIFB), Universität Karlsruhe, D-76128 Karlsruhe, Germany. Contact: Rudi Studer, studer@aifb.uni-karlsruhe.de
• University of Liverpool (UniLiv). Chadwick Building, Peach Street, L69 7ZF Liverpool, United Kingdom. Contact: Michael Wooldridge, e-mail
M.J.Wooldridge@
• University of Manchester (UoM). Room 2.32, Kilburn Building, Department of Computer Science, University of Manchester, Oxford Road, Manchester, M13 9PL, United Kingdom. Contact: Carole Goble, carole@
• University of Sheffield (USFD). Regent Court, 211 Portobello Street, S1 4DP Sheffield, United Kingdom. Contact: Hamish Cunningham, hamish@
• University of Trento (UniTn). Via Sommarive 14, 38050 Trento, Italy. Contact: Fausto Giunchiglia, fausto@dit.unitn.it
• Vrije Universiteit Amsterdam (VUA). De Boelelaan 1081a, 1081 HV Amsterdam, The Netherlands. Contact: Frank van Harmelen, Frank.van.Harmelen@cs.vu.nl
• Vrije Universiteit Brussel (VUB). Pleinlaan 2, Building G10, 1050 Brussels, Belgium. Contact: Robert Meersman, robert.meersman@vub.ac.be

Executive Summary

Change management for ontologies becomes a crucial aspect for any kind of ontology management environment, as engineering of ontologies often takes place in distributed settings where multiple independent users have to interact. There is also a variety of ontology languages in use. Although RDF Schema and OWL are gaining more and more popularity, a lot of semantic data still resides in other formats, as is the case in the biology domain (c.f. Sec. 1.2.3). Until now, no standard versioning system or methodology has arisen that can provide a common way to handle versioning issues.

This deliverable describes the RDF-centric versioning approach and implementation SemVersion. It provides structural (purely triple-based) and semantic (ontology-language-based, like RDFS, OWL and OBOL) versioning. It separates language-neutral features for data management from language-specific features like semantic diffs in design and implementation. This way SemVersion offers a common approach for already widely used RDF models and a wide range of ontology languages.

The requirements for our system are derived from a set of practical scenarios, which are documented in detail in this deliverable.

The project experienced a shift in requirements when Robert Stevens from the University of Manchester joined the group in May 2005. WP2.3 decided to tackle the problem of versioning the Gene Ontology. In [1] we suggested reification for data storage. As we now face the large volume of the Gene Ontology data (see 1.2.3), we need more powerful storage solutions than for the other use cases. Addressing triple sets (models) is another challenge. In [1] we argued for reification, which would make models four times as large. To avoid this, we now use native quad stores, which provide a context URI for each triple.
We use the context URI to address models more efficiently. A sub-project, Rdf2Go, has been created to deal with various model abstractions and serves as a unifying triple (and quad) store entry point. Rdf2Go is described in Chapter 2. A second sub-project of SemVersion, RDFReactor, significantly facilitates the usage of RDF Schema based data in Java. Its latest version is based on Rdf2Go. In fact, RDFReactor was designed for SemVersion in the first place. RDFReactor is described in Sec. 1.5.4.

Contents

1 SemVersion – An RDF Versioning System
  1.1 Introduction
    1.1.1 Term Definitions
  1.2 Requirements for an ontology versioning system
    1.2.1 Use Case 1: MarcOnt Collaborative Ontology Development
    1.2.2 Use Case 2: The People's Portal for Community Ontology Development
    1.2.3 Use Case 3: Versioning the Gene Ontology
    1.2.4 Use Case 4: Versioning in a Semantic Wiki
    1.2.5 Use Case 5: Analysis of Wikipedia
    1.2.6 Requirements Summary
  1.3 Data Management Design
    1.3.1 RDF as the structural core of ontology languages
    1.3.2 Version Data Management
  1.4 Versioning Functionality Design
    1.4.1 Structural Diff
    1.4.2 Semantic Diff
    1.4.3 Blank Nodes and the Diff
    1.4.4 Branch and Merge
    1.4.5 Conflict Detection
    1.4.6 Query Language Extension
  1.5 Implementation
    1.5.1 Storage Layer Access
    1.5.2 Handling Commits
    1.5.3 Generating globally unique URIs
    1.5.4 RDFReactor
2 RDF2Go
  2.1 What is RDF2Go?
  2.2 Working Example: Simple FOAF via RDF2Go
  2.3 Architecture
  2.4 The API
    2.4.1 Model and ContextModel
    2.4.2 Queries
  2.5 How to get started
3 Using and Extending SemVersion
  3.1 Using SemVersion
    3.1.1 Typical Actions
    3.1.2 Administration
    3.1.3 Usage and Implementation Notes
    3.1.4 SemVersion Usage Examples
  3.2 Extending SemVersion
4 Conclusions and Outlook

Chapter 1: SemVersion – An RDF Versioning System

1.1 Introduction

As outlined in the Knowledge Web Deliverable D2.3.1, "Specification of a methodology for syntactic and semantic versioning" [1], there is a clear need for RDF data and ontology versioning. This deliverable is a follow-up of D2.3.1, which explains the underlying concepts in detail. Here we focus on the concrete approach and implementation.

Change management for ontologies becomes a crucial aspect for any kind of ontology management environment, as engineering of ontologies often takes place in distributed settings where multiple independent users have to interact. There is also a variety of ontology languages in use. Although RDF Schema and OWL are gaining more and more popularity, a lot of semantic data still resides in other formats, as is the case in the biology domain (c.f. Sec.
1.2.3). Until now, no standard versioning system or methodology has arisen that can provide a common way to handle versioning issues.

This deliverable describes the RDF-centric versioning approach and implementation SemVersion.¹ It provides structural (purely triple-based) and semantic (ontology-language-based, like RDFS, OWL and OBOL) versioning. It separates language-neutral features for data management from language-specific features like semantic diffs in design and implementation. This way SemVersion offers a common approach for already widely used RDF models and a wide range of ontology languages.

SemVersion is published as an open-source software project on the site OntoWare. The current version of the project homepage is depicted in Fig. 1.1.

¹ The name resembles the upcoming de-facto standard subversion ( ) and is also a short form of "Semantic Versioning".

Figure 1.1: Homepage of the SemVersion project

Our approach is inspired by the classical CVS system for version management of textual documents (e.g. Java code). The core element of our approach is the separation of language-specific features (the semantic diff) from general features (such as structural diff, branch and merge, and management of projects and metadata). A speciality of RDF is the usage of so-called blank nodes. As part of our approach we present a method for blank node enrichment which helps in versioning such blank nodes.

1.1.1 Term Definitions

RDF is a data model with the types URI, blank node, plain literal, language-tagged literal and data-typed literal. It consists of triples (also called statements). A set of triples is called a model (or triple set). An ontology is a model in which semantics have been assigned to certain URIs and/or triple constructs, according to an ontology language. We use the term concept to denote the things ontologies talk about: classes, properties and instances. In an RDF context, everything that is addressable by URI or by blank node is considered a concept.

SemVersion versions models. A model under version control is named a versioned model. A versioned model has a root model, which is a version. A version is a model plus versioning metadata. Versions in SemVersion never change. Instead, every operation that changes the state of a versioned model (commit, merge, ...) results in the creation of a new version. More details about SemVersion's conceptual data model can be found in Sec. 1.3.2.

1.2 Requirements for an ontology versioning system

We gathered different requirements from Knowledge Web partners in order to create a more general design. We tried to gather usage requirements as concrete as possible to obtain a usable (and hence testable) design and implementation. In this section we present the different usage requirements. For each use case we name the stakeholder and provide a use case description, characteristics of the data set, and derived versioning requirements.

1.2.1 Use Case 1: MarcOnt Collaborative Ontology Development

Stakeholder: Sebastian Ryszard Kruk (DERI), sebastian.kruk@

The MarcOnt scenario served as the first source of inspiration for SemVersion. MarcOnt is a project to create an ontology for library data exchange. One of the most commonly used bibliographic description formats is MARC21.
Though it is capable of describing most features of library resources, its semantic content is low. This means that while searching for a resource, one has to look for particular keywords in the resource's description fields, but one cannot carry out a search by meaning or concept. This can often result in large sets of results. Also, the data communication between library systems is very hard to extend. One of the earliest shared vocabularies is the Dublin Core Metadata standard for library resource description. Besides the fact that most of the information covered by MARC21 is lost, the full potential of the Semantic Web is not being used.

The project aims at creating the MarcOnt ontology, based on a social agreement, that will combine descriptions from MARC21 together with Dublin Core and make use of the full potential of Semantic Web technologies. This will include translations to/from other ontologies and more efficient searching for resources (users may have an impact on the searching process). The MarcOnt initiative is strongly connected to the Jerome Digital Library project (an e-library with semantics, formerly ElvisDL), which implements a simple library ontology and can be used as a starting point for further work. MarcOnt also assumes that JeromeDL will be a testing platform for experimental results from the MarcOnt initiative.

Data Set: Currently there exists only one version of the MarcOnt ontology, which can be downloaded at /index.php?option=com_content&task=view&id=13&Itemid=27.

Versioning Requirements: The MarcOnt project has a clear view of the process of ontology evolution. It starts with a current main version. People can then suggest (multiple, independent) changes. The community discusses the proposed changes and selects some. The changes are applied and a new main version is created.
The process is illustrated in Fig. 1.2.

Figure 1.2: Versions and suggestions in the MarcOnt use case

The ontology builder of the MarcOnt portal requires not only a GUI for building the ontology through submitting changes. It also needs the ability to (requirements are numbered by "use case number" "." running number):

• Manage a main trunk of the ontology (R1.1)
• Manage versions of suggestions (R1.2)
• Generate snapshots of the main ontology with some suggestions applied (R1.3)
databases.Now one of his main interests is in the definition of formal biological ontologies.He is involved in the transformation of the Gene Ontology controlled vocabulary into a description-logics OWL based ontology.He is interested in contributing to the devel-opment of an ontology-based versioning system to the Gene Ontology which is part of the Open Biological Ontologies.Also he want’s to study how conceptualisations change over time,hence the need for data analysis.Use case description The gene ontology 5community is where collaborative on-tology construction is practiced a long time comparing to other communities.The GO community showed that involvement of multiple parties is a must for a compre-hensive ontology as a result.The GO community is far ahead of other communities constructing ontologies [3].Hence they are the ideal subject to study real-world change operations.”The goal of the Gene Ontology (GO)consortium is to produce a controlled vocabulary that can be applied to all organisms even as knowledge of gene and 4/wiki/KnowledgeWeb/WP23/MeetingAgenda12July20055KWEB/2004/D2.3.3.a/v1.0June 6th,20057protein roles in cells is accumulating and changing.GO provides three structured networks of defined terms to describe gene product attributes.”6Current Gene Ontology versions are maintained by CVS repositories which han-dle only syntactic differences among ontologies.In other words CVS is not able to differentiate class versions for instance,being able only to differentiate text/file differences.Versioning Requirements Essentially,here SemVersion is used for data analysis.In order to study ontology change operations,SemVersion must cope with multiple versions of the Gene Ontology (GO).The GO is authored in Open Biology Language 7(OBOL),for which usable OWL exports exist.The GO has about 19.000concepts.Assuming about 10statements per concept we estimate a size of roughly 100.000statements –per version.The researchers who study the ontology change patterns (Robert Stevens and his team)would like to use a monthly snapshot for a period of 6years.This amounts to 6years ×12month =72versions.Thus the underlying triple store must be able to handle up to 7million triples and search (maybe even reason)over them.The requirements in short form are thus•Store up to 7million triples (R3.1)•Allow meta-data queries over the 72versions (R3.2)•Allow data queries over all versions (7million triples)(R3.3)•OBOL semantic diff(R3.4)•OBOL to RDF converter (R3.5)•A Java interface (R3.6)Data Set The Gene Ontology ”per se”is not an Ontology in the formal sense,it is rather a cross-species controlled biological vocabulary as previously indicated above.The Gene Ontology is divided in three disjoint sub-ontologies,currently stored in big flat files or also stored in persistent repositories such as a relational database (MySQL database).The three sub-ontologies are divided into vocabularies that describe gene products in terms of:Molecular functions,associated biological processes and cellular components.The GO ontology permits to associate biological relationships among molecular functions,the involvement of molecular functions in biological processes and the 6Extracted from the OBO site /7/8June 6th,2005KWEB/2004/D2.3.3.a/v1.0occurrence of biological processes at a given time and space in cells [4].Whereas the molecular function defines what a gene product does at the biochemical level,the bi-ological process normally indicates a transformation process triggered or contributed by a gene product involving 
multiple molecular functions.Finally the cellular com-ponent indicates the cell structure a gene product is part of.The Gene Ontology contains around 20.000concepts which are convertible to OWL.The latest statistics about the GO could be found at the GO site 8:Current term counts (as of June 20,2005at 6:00Pacific time):•17946terms,94.2%with definitions.•6984(38.9%)Molecular functions•9410(52.4%)Biological processes•1552(8.6%)Cellular components•There are 998obsolete terms not included in the above statistics(Total Terms=18944)Further complexity assessments can be found at /~cjm/obol/doc/go-complexity.html .According to [5]the GO is a handcrafted ontology accepting only ”is-a”and ”part-of”relationships.The hierarchical organization is represented via a directed-acyclic-graph (DAG)structure similar to the representation of Web pages or hypertext systems.Members of the Consortium group contribute to updates and revisions of the GO.The Go is maintained by editors and scientific curators who notify GO users of ontology changes via email,or at the GO site by monthly reports 9.Please note that ontology creation and annotation of GO terms in databases (association of GO terms with gene products)are two different operations.Each annotation should include its data provenance or source(a cross database reference,a literature reference,etc).Technically,there are two different data sets,available via public CVS stores.Set I ranges from 1999to 2001and has a snapshot of the GO for each month in GO syntax.The second set runs from 2001up to now and contains for each month a Go snapshot in OBO syntax.As OBO is the newer syntax,we assume the existence of a converter from GO syntax to OBO syntax available from the GO community.In order to use the data sets,one has to decide for a format.There are three options:(a)RDF,(b)OWL generated from DAG-Edit 10or (c)nice OWL generated by Prot´e g´e -Plugin.Whatever choice is made,the exported data should contain the provenance8/GO.downloads.shtml#ont9/MonthlyReports/10/dev/java/dagedit/docs/index.html KWEB/2004/D2.3.3.a/v1.0June 6th,20059information of the source file and the conversion process used.SemVersion offers ways to store such provenance information.1.2.4Use Case 4:Versioning in a Semantic WikiStakeholder:Max V ¨o lkel (U Karl),mvo@aifb.uni-karlsruhe.deA wiki is a browser-based environment to author networked,structured notes,often in a collaborative way.The project SemWiki 11aims at creating a semantic wiki for personal note management.SemWiki extends the wiki syntax with means to enter statements about resources,much like in RDF.In a traditional wiki,users are accustomed to see and compare different versions of a page.In the semantic wiki ”SemWiki”12pages are just a special kind of resource and some attached properties.Hence,a semantic diffhas to be calculated ”by hand”.Data Set A typical personal wiki has up to 3000pages with approximately 10versions per page.Each page consists roughly of 50statements.This leads to approximately 1.5million triples for a snapshot-based versioning system.Versioning Requirements SemWiki users need ways to request a semantic diffbetween two page-versions.As pages partly consist of ”background statements”,which do not belong to a particular page,SemWiki needs a model-based versioning approach (R4.1).Sometimes users want to roll-back page changes,thus we need the ability to revert to old states (R4.2).Additionally,users want to track each statement:Who authored it,when has it been introduced,etc.(R4.3).1.2.5Use Case 5:Analysis of 
WikipediaStakeholder:Denny Vrandecic,Markus Kr ¨o tzsch,Max V ¨o lkel (U Karl){dvr,mkr,mvo}@aifb.uni-karlsruhe.deAn emerging research topic at AIFB is the analysis of changes in the Wikipedia 13.This use case is mostly similar to ”Versioning the Gene Ontology”.Data Set The Wikipedia contains roughly 1.500.000articles across all language versions.11 121310June 6th,2005KWEB/2004/D2.3.3.a/v1.0Versioning Requirements There are no obvious requirements beyond those al-ready mentioned in use case 3.1.2.6Requirements SummaryWe can distinguish rather data management related requirements and rather ontol-ogy language specific features.Data Management Requirements•Store and retrieve versions;store up to 7million triples•Retrieve versions via HTTP or Java function calls;address versions unambigu-ously via URIs and user-friendly via labels•Rich meta data per model /statement:provenance,author,valid time,transaction time•Model based versioning and additionally concept-oriented queries•Queries across versions concerning meta data•Each version can have a number of attached ”suggestions”;ability turn sug-gestions into official versionsOntology Language Requirements•Queries across versions concerning the content•return diffs between arbitrary versions•OBOL semantic diff•OBOL to RDF converter•RDFS semantic diff•OWL semantic diff•Semantic Wiki semantic diff•Conflict detection in OWLKWEB/2004/D2.3.3.a/v1.0June 6th,2005111.3Data Management DesignA versioning system has generally two main parts.One deals with general data management issues,the other part with versioning specific functionality such as cal-culating the difference between two versions.Wefirst present the data management parts and then the ontology specific versioning functions.The data management parts can be used no matter which ontology language is used–as long as the data model is encoded as RDF.RDF encoding of data is crucial in order to have a significant re-use of software across ontology languages.We now present some arguments for this claim.A more detailed discussion can be found in the Knowledge Web Deliverable D2.3.1[1].1.3.1RDF as the structural core of ontology languagesThe most elementary modelling primitive that is needed to model a shared con-ceptualisation of some domain is a way to denote entities and to unambiguously reference them.For this purpose RDF uses URIs,identifiers for resources,that are supposed to be globally unique.Every ontology language needs to provide means to denote entities.For global systems the identifier should be globally unique.Hav-ing entities,that can be referenced,the next step is to describe relations between them.As relations are semantic core elements,they should also be unambiguously addressable.Properties in RDF can be seen as binary relations.This is the very basic type of relations between two entities.More complex types of relations can be modelled by defining a special vocabulary for this purpose on top of RDF,like it has been done in OWL.The two core elements for semantic modelling,mechanisms to identify entities and to identify and state relationships between them,are provided by RDF.Ontol-ogy languages that build upon RDF use these mechanisms and define the semantics of certain relationships,entities,and combinations of relationships and entities.So RDF provides the structure in which the semantic primitives of the ontology lan-guages are embedded.That means we can distinguish three layers here:syntactic layer(e.g.XML),structural layer(RDF),semantic layer(ontology languages).The various 
ontology languages differ in their vocabulary,their logical founda-tions,and epistemological elements,but they have in common that they describe structures of entities and their relations.Therefore RDF is the largest common de-nominator of all ontology languages.RDF is not only a way to encode the ontology languages or just an arbitrary data model,but it is a structured data model that matches exactly the structure of ontology languages.12June6th,2005KWEB/2004/D2.3.3.a/v1.0。
Hung-yi Lee (李宏毅), Machine Learning lecture slides (Bilibili): Backpropagation
Gradient Descent

Network parameters: $\theta = \{w_1, w_2, \ldots, b_1, b_2, \ldots\}$. Starting from an initial point $\theta^0$, compute the gradient of the loss $L$ and take a step:

$$\nabla L(\theta) = \left[ \frac{\partial L}{\partial w_1}, \frac{\partial L}{\partial w_2}, \ldots, \frac{\partial L}{\partial b_1}, \frac{\partial L}{\partial b_2}, \ldots \right]^{\mathsf{T}}, \qquad \theta^1 = \theta^0 - \eta\, \nabla L(\theta^0).$$

Backpropagation is the efficient way to compute all of these partial derivatives; it is built entirely on the chain rule.

Chain Rule

Case 1: $y = g(x)$, $z = h(y)$. Then
$$\frac{dz}{dx} = \frac{dz}{dy}\,\frac{dy}{dx}.$$

Case 2: $x = g(s)$, $y = h(s)$, $z = k(x, y)$. Then
$$\frac{dz}{ds} = \frac{\partial z}{\partial x}\,\frac{dx}{ds} + \frac{\partial z}{\partial y}\,\frac{dy}{ds}.$$

Backpropagation

For a weight $w$ feeding a neuron whose activation function input is $z$, the chain rule splits the gradient into two factors:
$$\frac{\partial L}{\partial w} = \frac{\partial z}{\partial w}\,\frac{\partial L}{\partial z}.$$

Backpropagation – Forward pass: compute $\partial z / \partial w$ for all parameters. This factor is already known once the forward computation is done, because $\partial z / \partial w$ equals the input value connected to the weight $w$.

Backpropagation – Backward pass: compute $\partial L / \partial z$ for all activation function inputs $z$.
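To tie the two passes together, here is a minimal numpy sketch for a toy two-layer sigmoid network with squared-error loss. This is an illustration of the scheme above, not code from the lecture; all names and sizes are ours:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Toy network: x -> (W1, b1) -> sigmoid -> (W2, b2) -> sigmoid -> y
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)
    W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)
    x, t = np.array([0.5, -0.2]), np.array([1.0])   # input and target

    # Forward pass: store every layer input, since dz/dw equals that input
    z1 = W1 @ x + b1;  a1 = sigmoid(z1)
    z2 = W2 @ a1 + b2; y = sigmoid(z2)
    L = 0.5 * np.sum((y - t) ** 2)

    # Backward pass: dL/dz for every activation input z, via the chain rule
    dL_dz2 = (y - t) * y * (1 - y)              # dL/dy * sigma'(z2)
    dL_dz1 = (W2.T @ dL_dz2) * a1 * (1 - a1)    # case-2 chain rule through W2

    # Combine both passes: dL/dW = outer(dL/dz, layer input)
    dL_dW2 = np.outer(dL_dz2, a1); dL_db2 = dL_dz2
    dL_dW1 = np.outer(dL_dz1, x);  dL_db1 = dL_dz1

    # One gradient-descent update with learning rate eta
    eta = 0.1
    W1 -= eta * dL_dW1; b1 -= eta * dL_db1
    W2 -= eta * dL_dW2; b2 -= eta * dL_db2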
Academic English Exercise Answers, Unit 1
Academic English for Science and Engineering, Teacher's Manual
Unit 1 Choosing a Topic

I. Teaching Objectives
In this unit, you will learn how to:
1. choose a particular topic for your research
2. formulate a research question
3. write a working title for your research essay
4. enhance your language skills related to the reading and listening materials presented in this unit

II. Teaching Procedures

1. Deciding on a topic
Task 1: Answers may vary.
Task 2:
1. No, because they all seem like a subject rather than a topic, a subject which cannot be addressed even by a whole book, let alone by a 1500-word essay.
2. Each of them can be broken down into various and more specific aspects. For example, cancer can be classified into breast cancer, lung cancer, liver cancer and so on. Breast cancer can have such specific research topics as causes of breast cancer, effects of breast cancer, and prevention or diagnosis of breast cancer.
3. Actually the topics of each field are endless. Take breast cancer for example; we can have topics like:
• Why Women Suffer from Breast Cancer More Than Men?
• A New Way to Find Breast Tumors
• Some Risks of Getting Breast Cancer in Daily Life
• Breast Cancer and Its Direct Biological Impact
• Breast Cancer: the Symptoms & Diagnosis
• Breastfeeding and Breast Cancer

Task 3:
1. Text 1 illustrates how hackers or unauthorized users use one way or another to get inside a computer, while Text 2 describes the various electronic threats a computer may face.
2. Both focus on the vulnerability of a computer.
3. Text 1 analyzes the ways of computer hackers, while Text 2 describes security problems of a computer.
4. Text 1: The way hackers "get inside" a computer. Text 2: Electronic threats a computer faces. Yes, I think they are interesting, important, manageable and adequate.

Task 4:
1. Lecture 1: Ten Commandments of Computer Ethics; Lecture 2: How to Deal with Computer Hackers; Lecture 3: How I Begin to Develop Computer Applications
2. Answers may vary.
Task 5: Answers may vary.

2. Formulating a research question
Task 1:
Text 3. Research question 1: How many types of cloud services are there and what are they? Research question 2: What is green computing? Research question 3: What are the advantages of cloud computing?
Text 4. Research question 1: What is Web 3.0? Research question 2: What are the advantages and disadvantages of cloud computing? Research question 3: What security benefits can cloud computing provide?

Task 2:
2. Topic 2: Threats of Artificial Intelligence. Research questions: 1) What are the threats of artificial intelligence? 2) How can human beings control those threats? 3) What are the difficulties in controlling those threats?
3. Topic 3: The Potentials of Nanotechnology. Research questions: 1) What are its potentials in medicine? 2) What are its potentials in space exploration? 3) What are its potentials in communications?
4. Topic 4: Global Warming and Its Effects. Research questions: 1) How does it affect the pattern of climates? 2) How does it affect economic activities? 3) How does it affect human behavior?
Task 3: Answers may vary.

3. Writing a working title
Task 1: Answers may vary.
Task 2:
1. Lecture 4 is about the security problems of cloud computing, while Lecture 5 is about the definition and nature of cloud computing; hence it is more elementary than Lecture 4.
2. All four focus on cloud computing. Although Lecture 4 and Text 4 address the same topic, the former is less optimistic while the latter has more confidence in the security of cloud computing.
Text 3 illustrates the various advantages of cloud computing.
3. Lecture 4: Cloud Computing Security; Lecture 5: What Is Cloud Computing?
Task 3: Answers may vary.

4. Enhancing your academic language

Reading: Text 1
1. Match the words with their definitions: 1g, 2a, 3e, 4b, 5c, 6d, 7j, 8f, 9h, 10i.
2. Complete the following expressions or sentences by using the target words listed below, with the help of the Chinese in brackets; change the form if necessary: 1 symbolic, 2 distributed, 3 site, 4 complex, 5 identify, 6 fairly, 7 straightforward, 8 capability, 9 target, 10 attempt, 11 process, 12 parameter, 13 interpretation, 14 technical, 15 range, 16 exploit, 17 networking, 18 involve, 19 instance, 20 specification, 21 accompany, 22 predictable, 23 profile.
3. Read the sentences in the box, paying attention to the parts in bold. Then complete the paragraph by translating the Chinese in brackets, referring to the expressions and sentence patterns listed above: ranging from ... to ...; arise from some misunderstandings; leaves a lot of problems unsolved; opens a path for ...; requires a different frame of mind.
4. Translate the following sentences from Text 1 into Chinese. (Model answer, rendered here in English:) 1) Some people claim that hackers are the good guys who push beyond the frontiers of knowledge without causing harm (or who, even when they do cause harm, do not do so deliberately), and that "crackers" are the real bad guys.
Razavi, Design of Analog CMOS Integrated Circuits: Solutions Manual Corrections (condensed edition)
CORRECTIONS TO SOLUTIONS MANUAL

In the new edition, some chapter problems have been reordered and equation and figure references have changed. The solutions manual is based on the preview edition and therefore must be corrected to apply to the new edition. Below is a list reflecting those changes.

The "NEW" column contains the problem numbers in the new edition. If that problem was originally under another number in the preview edition, that number will be listed in the "PREVIEW" column on the same line. In addition, if a reference used in that problem has changed, that change will be noted under the problem number in quotes. Chapters and problems not listed are unchanged.

For example:

NEW     PREVIEW
4.18    4.5
        "Fig. 4.38"  "Fig. 4.35"
        "Fig. 4.39"  "Fig. 4.36"

The above means that problem 4.18 in the new edition was problem 4.5 in the preview edition. To find its solution, look up problem 4.5 in the solutions manual. Also, the problem 4.5 solution referred to "Fig. 4.35" and "Fig. 4.36" and should now be "Fig. 4.38" and "Fig. 4.39," respectively.

CHAPTER 3 (NEW/PREVIEW): 3.1/3.8; 3.2/3.9; 3.3/3.11; 3.4/3.12; 3.5/3.13; 3.6/3.14; 3.7/3.15 ["From 3.6" was "From 3.14"]; 3.8/3.16; 3.9/3.17; 3.10/3.18; 3.11/3.19; 3.12/3.20; 3.13/3.21; 3.14/3.22; 3.15/3.1; 3.16/3.2; 3.17/3.2'; 3.18/3.3; 3.19/3.4; 3.20/3.5; 3.21/3.6; 3.22/3.7; 3.23/3.10; 3.24/3.23; 3.25/3.24; 3.26/3.25; 3.27/3.26; 3.28/3.27; 3.29/3.28

CHAPTER 4 (NEW/PREVIEW): 4.1/4.12; 4.2/4.13; 4.3/4.14; 4.4/4.15; 4.5/4.16; 4.6/4.17; 4.7/4.18 ["p. 4.6" was "p. 4.17"]; 4.8/4.19; 4.9/4.20; 4.10/4.21; 4.11/4.22; 4.12/4.23; 4.13/4.24 ["p. 4.9" was "p. 4.20"]; 4.14/4.1 ["(4.52)" was "(4.51)"; "(4.53)" was "(4.52)"]; 4.15/4.2; 4.16/4.3; 4.17/4.4; 4.18/4.5 ["Fig. 4.38" was "Fig. 4.35"; "Fig. 4.39" was "Fig. 4.36"]; 4.19/4.6 ["Fig. 4.39(c)" was "Fig. 4.36(c)"]; 4.20/4.7; 4.21/4.8; 4.22/4.9; 4.23/4.10; 4.24/4.11; 4.25/4.25; 4.26/4.26 ["p. 4.9" was "p. 4.20"]

CHAPTER 5 (NEW/PREVIEW): 5.1/5.16; 5.2/5.17; 5.3/5.18; 5.4/5.19; 5.5/5.20; 5.6/5.21; 5.7/5.22; 5.8/5.23; 5.9/5.1; 5.10/5.2; 5.11/5.3; 5.12/5.4; 5.13/5.5; 5.14/5.6; 5.15/5.7; 5.16/5.8; 5.17/5.9; 5.18/5.10 ["Similar to 5.18(a)" was "Similar to 5.10(a)"]; 5.19/5.11; 5.20/5.12; 5.21/5.13; 5.22/5.14; 5.23/5.15

CHAPTER 6 (NEW/PREVIEW): 6.1/6.7; 6.2/6.8; 6.3/6.9 ["from eq. (6.23)" was "from eq. (6.20)"]; 6.4/6.10; 6.5/6.11 ["eq. (6.52)" was "eq. (6.49)"]; 6.6/6.1; 6.7/6.2; 6.8/6.3; 6.9/6.4; 6.10/6.5; 6.11/6.6; 6.13/6.13 ["eq. (6.56)" was "eq. (6.53)"; "problem 3" was "problem 9"]; 6.16/6.16 ["to (6.23) & (6.80)" was "to (6.20) & (6.76)"]; 6.17/6.17 ["equation (6.23)" was "equation (6.20)"]

CHAPTER 7 (NEW/PREVIEW): 7.2/7.2 ["eqn. (7.59)" was "eqn. (7.57)"]; 7.17/7.17 ["eqn. (7.59)" was "eqn. (7.57)"]; 7.19/7.19 ["eqns. 7.66 and 7.67" were "eqns. 7.60 and 7.61"]; 7.21/7.21 ["eqn. 7.66" was "eqn. 7.60"]; 7.22/7.22 ["eqns. 7.70 and 7.71" were "eqns. 7.64 and 7.65"]; 7.23/7.23 ["eqn. 7.71" was "eqn. 7.65"]; 7.24/7.24 ["eqn. 7.79" was "eqn. 7.73"]

CHAPTER 8 (NEW/PREVIEW): 8.1/8.5; 8.2/8.6; 8.3/8.7; 8.4/8.8; 8.5/8.9; 8.6/8.10; 8.7/8.11; 8.8/8.1; 8.9/8.2; 8.10/8.3; 8.11/8.4; 8.13/8.13 ["problem 8.5" was "problem 8.9"]

CHAPTER 13: 3.17/3.17 ["Eq. (3.123)" was "Eq. (3.119)"]

CHAPTER 14 - New chapter, "Oscillators"
CHAPTER 15 - New chapter, "Phase-Locked Loops"
CHAPTER 16 - Was Chapter 14 in the preview edition. Change all chapter references in the solutions manual from 14 to 16.
CHAPTER 17 - Was Chapter 15 in the preview edition. Change all chapter references in the solutions manual from 15 to 17.
CHAPTER 18 - Was Chapter 16 in the preview edition. (NEW/PREVIEW): 18.3/16.3 ["Fig. 18.12(c)" was "Fig. 16.13(c)"]; 18.8/16.8 ["Fig. 18.33(a,b,c,d)" was "Fig. 16.34(a,b,c,d)"]. Also, change all chapter references from 16 to 18.
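The tables above are just a mechanical lookup from new-edition problem numbers to preview-edition ones. A trivial Python helper, purely illustrative (the dictionary encodes only a few Chapter 4 entries):

    # New-edition problem number -> preview-edition number (Chapter 4 excerpt)
    new_to_preview = {
        "4.1": "4.12", "4.2": "4.13", "4.3": "4.14",
        "4.18": "4.5",  # its solution cites Fig. 4.35/4.36, now Fig. 4.38/4.39
    }

    def solution_lookup(new_problem: str) -> str:
        """Return which preview-edition solution to read; unchanged if unlisted."""
        return new_to_preview.get(new_problem, new_problem)

    print(solution_lookup("4.18"))  # -> 4.5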
Adobe Acrobat SDK Developer Guide
This guide is governed by the Adobe Acrobat SDK License Agreement and may be used or copied only in accordance with the terms of this agreement. Except as permitted by any such agreement, no part of this guide may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, recording, or otherwise, without the prior written permission of Adobe. Please note that the content in this guide is protected under copyright law.
Ten Model Summaries

Introduction
This document provides ten model summaries, each covering a different topic and field. The articles aim to give readers a comprehensive, detailed, complete, and in-depth discussion that helps them better understand and master the topics involved.

Article 1: Applications of Artificial Intelligence in Medicine
Overview: This article introduces the applications of artificial intelligence in medicine, covering image recognition, disease diagnosis, and medical decision-making.
Main content:
1. AI in medical image recognition
   - Using deep learning algorithms for early diagnosis of disease
   - Automatic detection and classification of tumors
2. AI in disease diagnosis
   - Big-data medical analysis that helps doctors deliver accurate diagnoses
   - Intelligent assistant systems that support doctors' treatment decisions
3. AI in medical decision-making
   - Personalized treatment plans based on data analysis and machine learning algorithms
   - Helping doctors evaluate treatment effectiveness and predict patients' long-term outcomes

Article 2: The Development and Application of Renewable Energy
Overview: This article discusses the development and application of renewable energy, including solar, wind, and hydro power.
Main content:
1. Solar energy
   - How solar cells work and progress in their development
   - Application areas and development trends of solar power generation
2. Wind energy
   - Principles and technology of wind power generation
   - Advantages and challenges of wind power
3. Hydro energy
   - How hydroelectric generation works and its main types
   - Sustainability and economics of hydropower

Article 3: Applications of Big Data in Business
Overview: This article introduces the applications of big data in business, including market research, customer relationship management, and supply chain optimization.
Main content:
1. Big data in market research
   - Using big-data analysis to study market trends and consumer behavior
   - Discovering new market opportunities through data mining
2. Big data in customer relationship management
   - Providing personalized products and services based on user data
   - Analyzing customer feedback and behavior to improve marketing strategies
3. Big data in supply chain optimization
   - Forecasting demand to reduce inventory and transportation costs
   - Analyzing supply chain data to optimize logistics and production processes

Article 4: The Convergence of Artificial Intelligence and the Internet of Things
Overview: This article explores the convergence of AI and the Internet of Things, covering smart homes, smart cities, and smart factories.
Main content:
1. AI in smart homes
   - Smart home control systems based on speech and image recognition
   - Self-learning and intelligent regulation of smart appliances
2. AI in smart cities
   - Urban traffic management based on sensors and data analysis
   - Big-data-based urban safety monitoring and early-warning systems
3. AI in smart factories
   - Intelligent control and optimization of automated production lines
   - Data analysis and machine learning in factory management

Article 5: The Development and Application of Blockchain Technology
Overview: This article introduces the development and application of blockchain technology, including digital currency, supply chain management, and identity verification.
DRAFT NIST Special Publication 800-38B — Recommendation for Block Cipher Modes of Operation: The RMAC Authentication Mode
NIST Special Publication 800-38B (DRAFT)
Recommendation for Block Cipher Modes of Operation: The RMAC Authentication Mode
Methods and Techniques
Morris Dworkin
November 4, 2002

Abstract
This Recommendation defines an authentication mode of operation, called RMAC, for a symmetric key block cipher algorithm. RMAC can provide cryptographic protection of sensitive, but unclassified, computer data. In particular, RMAC can provide assurance of the authenticity and, therefore, of the integrity of the data.

KEY WORDS: Authentication; block cipher; cryptography; encryption; Federal Information Processing Standard; information security; integrity; mode of operation.

Table of Contents
1 Purpose
2 Authority
3 Introduction
4 Definitions, Abbreviations, and Symbols
  4.1 Definitions and Abbreviations
  4.2 Symbols
    4.2.1 Variables
    4.2.2 Operations and Functions
5 Preliminaries
  5.1 The Underlying Block Cipher Algorithm
  5.2 Elements of RMAC
  5.3 Examples of Operations and Functions
6 RMAC Specification
  6.1 Message Formatting
  6.2 Parameter Sets
  6.3 MAC Generation
  6.4 Tag Generation and Verification
Appendix A: Security Considerations
  A.1 Exhaustive Key Search
  A.2 General Forgery
  A.3 Extension Forgery Based on a Collision
  A.4 Summary of Security Properties of Parameter Sets
Appendix B: The Generation of RMAC Parameters
  B.1 Derivation of RMAC Keys from a Master Key
  B.2 Salt Generation
Appendix C: Example Vectors for the MAC Generation Function
  C.1 RMAC-AES128 Example Vectors (C.1.1–C.1.5: parameter sets I–V)
  C.2 RMAC-AES192 Example Vectors (C.2.1–C.2.5: parameter sets I–V)
  C.3 RMAC-AES256 Example Vectors (C.3.1–C.3.5: parameter sets I–V)
  C.4 RMAC-TDES112 Example Vectors
  C.5 RMAC-TDES168 Example Vectors
Appendix D: References

Table of Figures
Figure 1: The RMAC MAC Generation Function

1 Purpose
This publication is the second part in a series of Recommendations regarding modes of operation of symmetric key block cipher algorithms.

2 Authority
This document has been developed by the National Institute of Standards and Technology (NIST) in furtherance of its statutory responsibilities under the Computer Security Act of 1987 (Public Law 100-235) and the Information Technology Management Reform Act of 1996, specifically 15 U.S.C. 278 g-3(a)(5). This is not a guideline within the meaning of 15 U.S.C. 278 g-3(a)(5).

This Recommendation is neither a standard nor a guideline, and as such, is neither mandatory nor binding on federal agencies. Federal agencies and nongovernment organizations may use this Recommendation on a voluntary basis. It is not subject to copyright.

Nothing in this Recommendation should be taken to contradict standards and guidelines that have been made mandatory and binding upon federal agencies by the Secretary of Commerce under statutory authority.
Nor should this Recommendation be interpreted as altering or superseding the existing authorities of the Secretary of Commerce, the Director of the Office of Management and Budget, or any other federal official.

Conformance testing for implementations of the modes of operation that are specified in this Recommendation will be conducted within the framework of the Cryptographic Module Validation Program (CMVP), a joint effort of NIST and the Communications Security Establishment of the Government of Canada. An implementation of a mode of operation must adhere to the requirements in this Recommendation in order to be validated under the CMVP. The requirements of this Recommendation are indicated by the word "shall."

3 Introduction
This Recommendation specifies an algorithm, RMAC [1], that can provide assurance of data origin authentication and, hence, assurance of data integrity. In particular, RMAC is an algorithm for generating a message authentication code (MAC) from the data to be authenticated and from an associated value called the salt, using a block cipher and two secret keys that the parties to the authentication of the data establish beforehand. One party generates the MAC and provides the MAC and the associated salt as the authentication tag; subsequently, any party with access to the secret keys may verify whether the received MAC was generated from the received data and the received salt. Successful verification of the MAC provides assurance of the authenticity of the data, i.e., that it originated from a source with access to the secret keys. Consequently, successful verification of the MAC also provides assurance of the integrity of the data, i.e., that it was not altered after the generation of the MAC.

A MAC is sometimes called a cryptographic checksum, because it is generated from a keyed cryptographic algorithm in order to provide stronger assurance of data integrity than an ordinary checksum. The verification of an ordinary checksum or an error detecting code is designed to reveal only accidental modifications of the data, while the verification of a MAC is designed to reveal intentional, unauthorized modifications of the data, as well as accidental modifications.

Because RMAC is constructed from a block cipher algorithm, RMAC can be considered a mode of operation of the block cipher algorithm. The block cipher algorithm shall be approved, i.e., specified or adopted in a Federal Information Processing Standard (FIPS) or a NIST Recommendation; for example, FIPS Pub. 197 [2] specifies the AES algorithm, and FIPS Pub. 46-3 [3] adopts the Triple DES algorithm.

FIPS Pub. 198 [4] specifies a different MAC algorithm, called HMAC, that is also appropriate for the protection of sensitive data. Because HMAC is constructed from a hash function rather than a block cipher algorithm, RMAC may be preferable for application environments in which an approved block cipher is more convenient to implement than an approved hash function.

4 Definitions, Abbreviations, and Symbols
4.1 Definitions and Abbreviations
Approved: FIPS approved or NIST recommended: an algorithm or technique that is either 1) specified in a FIPS or NIST Recommendation, or 2) adopted in a FIPS or NIST Recommendation.
Authenticity: The property that data indeed originated from its purported source.
Authentication Mode: A block cipher mode of operation that can provide assurance of the authenticity and, therefore, the integrity of data.
Authentication Tag (Tag): A pair of bit strings associated to data to provide assurance of its authenticity: the salt and the message authentication code that is derived from the data and the salt.
Bit: A binary digit: 0 or 1.
Bit String: An ordered sequence of 0s and 1s.
Block: A bit string whose bit length is the block size of the block cipher algorithm.
Block Cipher: See forward cipher function.
Block Cipher Algorithm: A family of functions and their inverses that is parameterized by cryptographic keys; the functions map bit strings of a fixed length to bit strings of the same length.
Block Size: The number of bits in an input (or output) block of the block cipher.
Cryptographic Key: A parameter used in the block cipher algorithm that determines the forward cipher function.
Data Integrity: The property that data has not been altered by an unauthorized entity.
Exclusive-OR: The bitwise addition, modulo 2, of two bit strings of equal length.
FIPS: Federal Information Processing Standard.
Forward Cipher Function: One of the two functions of the block cipher algorithm that is determined by the choice of a cryptographic key.
Initialization Vector (IV): A data block that some modes of operation require as an initial input.
Message Authentication Code (MAC): A cryptographic checksum on data that is designed to reveal both accidental errors and intentional modifications of the data.
Mode of Operation (Mode): An algorithm for the cryptographic transformation of data that features a symmetric key block cipher algorithm.
Most Significant Bit(s): The left-most bit(s) of a bit string.
Nonce: A value that is used only once within a specified context.
RMAC: The name of the authentication mode that is specified in this Recommendation.
Salt: A parameter of an algorithm whose role is to randomize the value of another parameter.

4.2 Symbols
4.2.1 Variables
b: The block size, in bits.
k: The key length for the block cipher.
m: The bit length of the RMAC MAC.
n: The number of data blocks in the padded message.
r: The bit length of the salt.
CNST_j: The j-th fixed, i.e., constant, block.
K: A block cipher key.
K1: The first RMAC key.
K2: The second RMAC key.
K3: A key that is derived from the second RMAC key and the salt.
M: The message.
Mlen: The bit length of the message.
M_j: The j-th block in the partition of the padded message.
O_j: The j-th output block.
PAD: The padding that is appended to the message.
R: The salt.

4.2.2 Operations and Functions
0^s: The bit string consisting of s '0' bits.
X || Y: The concatenation of two bit strings X and Y.
X ⊕ Y: The bitwise exclusive-OR of two bit strings X and Y of the same length.
CIPH_K(X): The forward cipher function of the block cipher algorithm under the key K applied to the data block X.
MSB_s(X): The bit string consisting of the s most significant bits of the bit string X.
RMAC(R,M): The RMAC message authentication code for message M with salt R.

5 Preliminaries
5.1 The Underlying Block Cipher Algorithm
The RMAC algorithm specified in this Recommendation depends on the choice of an underlying symmetric key block cipher algorithm; the RMAC algorithm is thus a mode of operation (mode, for short) of the symmetric key block cipher. The underlying block cipher algorithm must be approved, and two secret, random keys for the block cipher algorithm shall be established. The keys regulate the functioning of the block cipher algorithm and, thus, by extension, the functioning of the mode.
The specifications of the block cipher algorithm and the mode are public, so the security of the mode depends, at a minimum, on the secrecy of the keys.

For any given key, the underlying block cipher algorithm of the mode consists of two processes that are inverses of each other. As part of the choice of the block cipher algorithm, one of the two processes is designated as the forward cipher function. The inverse of this process is called the inverse cipher function. Because the RMAC mode does not require the inverse cipher function, the forward cipher function in this Part of the Recommendation is simply called the block cipher.

5.2 Elements of RMAC
The block cipher keys that are required for the RMAC mode are bit strings, denoted K1 and K2, whose bit length, denoted k, depends on the choice of the block cipher algorithm. The keys shall be random or pseudorandom, distinct from keys that are used for other purposes, and secret. The two keys shall each be established by an approved key establishment method, or the keys shall be derived from a single key K, which is established by an approved key establishment method. A method for deriving K1 and K2 from a single, master key K is given in Appendix B.1.

The block cipher is a function on bit strings of a fixed bit length. The fixed bit length of the bit strings is called the block size and is denoted b; any bit string whose bit length is b is called a (data) block. Under a key K, the block cipher function is denoted CIPH_K. For the AES algorithm, b=128 and k=128, 192, or 256; for Triple DES, b=64 and k=112 or 168.

The data to be authenticated is one input to the RMAC MAC generation function; the data in this context is called the message, denoted M. Another input to the MAC generation function is a parameter associated with the message called the salt, denoted R. The role of the salt in the MAC generation function is to randomize (i.e., "flavor") the second key, K2. The bit length of the salt, denoted r, is determined by the choice of a parameter set that is specified in Section 6.2. The use of the salt is optional in the sense that a parameter set may be chosen in which r=0. When r>0, the salt shall be generated in a manner that ensures that the expected probability of repeating the salt for different messages is negligible. The generation of the salt is discussed further in Appendix B.2.

The RMAC MAC generation function is denoted RMAC, so that the output of the function, the MAC, is denoted RMAC(R,M). The bit length of the MAC, denoted m, is determined by the choice of a parameter set that is specified in Section 6.2. The authentication tag to the message is the ordered pair (R, RMAC(R,M)); thus, the tag consists of one part, the salt, that may be independent of the message and a second part, the MAC, that depends on both the salt and the message. The total number of bits in the tag is r+m.

5.3 Examples of Operations and Functions
For a nonnegative integer s, the bit string consisting of s '0' bits is denoted 0^s. The concatenation operation on bit strings is denoted ||; for example, 001 || 10111 = 00110111. Given bit strings of equal length, the exclusive-OR operation, denoted ⊕, specifies the addition, modulo 2, of the bits in each bit position, i.e., without carries. Thus, 10011 ⊕ 10101 = 00110, for example. The function MSB_s returns the s most significant bits of the argument.
Thus, for example, MSB_4(111011010) = 1110.

6 RMAC Specification
6.1 Message Formatting
The first steps of the MAC generation function are to append padding to the message and to partition the resulting string into complete blocks. The padding, denoted PAD, is a single '1' bit followed by the minimum number of '0' bits such that the total number of bits in the padded message is a multiple of the block size. The padded message is then partitioned into a sequence of n complete blocks, denoted M1, M2, …, Mn. Thus,

M || PAD = M1 || M2 || … || Mn.

If the bit length of M is a multiple of the block size, then PAD = 1 || 0^(b-1), i.e., a complete block.

6.2 Parameter Sets
A parameter set is a pair of values for the bit lengths r and m of the two parts of the authentication tag, the salt and the MAC. The parameter sets for RMAC depend on the block size of the underlying block cipher algorithm. A parameter set shall be chosen from Table 1 below; five parameter sets are given for the 128 bit block size, and two for the 64 bit block size.

Although parameter set I offers the shortest authentication tags, it is not recommended for general use. The decision to use parameter set I requires a risk-benefit analysis of at least three factors: 1) the relevant attack models, 2) the application environment, and 3) the value and longevity of the data to be protected. In particular, parameter set I shall only be used if the controlling protocol or application environment sufficiently restricts the number of times that verification of an authentication tag can fail under any given pair of RMAC keys. For example, the short duration of a session or, more generally, the low bandwidth of the communication channel may preclude many repeated trials. Parameter sets II, III, IV, and V are appropriate for general use.

Table 1: Parameter Sets

                    b=128        b=64
Parameter Set     r      m     r      m
I                 0      32    0      32
II                0      64    0      64
III               16     80    n/a    n/a
IV                64     96    n/a    n/a
V                 128    128   n/a    n/a

Some of the security considerations that underlie the selection of a parameter set are summarized in Appendix A. The expected work factors for important aspects of the attacks that are discussed in the appendix are summarized for each parameter set in Table 2 in Section A.4.

6.3 MAC Generation
The following is a specification of the RMAC MAC generation function:

Input: block cipher CIPH; block cipher keys K1 and K2 of bit length k; parameter set (r, m); message M; salt R of bit length r.
Output: message authentication code RMAC(R, M) of bit length m.
Steps:
1. Append to M the padding string PAD, as described in Section 6.1.
2. Partition M || PAD into n blocks M1, M2, …, Mn, as described in Section 6.1.
3. O_1 = CIPH_K1(M1).
4. For j = 2 to n, do O_j = CIPH_K1(M_j ⊕ O_(j-1)).
5. If r=0, then K3 = K2; else K3 = K2 ⊕ (R || 0^(k-r)).
6. Return RMAC(R, M) = MSB_m(CIPH_K3(O_n)).

The calculations in Steps 3 and 4 are equivalent to encrypting the padded message using the cipher block chaining (CBC) mode [5] of the block cipher, under the first RMAC key, with the zero block as the initialization vector. However, unlike CBC encryption, in which every output block from Steps 3 and 4 is part of the encryption output (i.e., the ciphertext), in RMAC, the output blocks in Steps 3 and 4 are intermediate results. In Step 6, the block cipher under a new key is applied to the final output block from Step 4, and the result is truncated as specified in the parameter set.
The new key for this final application of the block cipher is obtained in Step 5 by exclusive-ORing the salt into the most significant bits of the second RMAC key. The RMAC MAC generation function is illustrated in Figure 1.

6.4 Tag Generation and Verification
The prerequisites for the authentication process are the establishment of an approved block cipher algorithm, two secret RMAC keys, and a parameter set among the parties to the authentication of the data. (For tag verification, the parameter set is implicit in the bit length of the tag.)

To generate an authentication tag on a message M, a party shall determine an associated salt R in accordance with Appendix B, generate RMAC(R,M), as specified in Section 6.3, and provide the authentication tag (R, RMAC(R,M)) to the data.

To verify an authentication tag (R', MAC'), a party shall apply the RMAC MAC generation function, as specified in Section 6.3, to the received message M' and the received salt R' within the tag. If the computed MAC, i.e., RMAC(R',M'), is identical to the received MAC, i.e., MAC', then verification succeeds; otherwise, verification fails, and the message should not be considered authentic.

Appendix A: Security Considerations
The submitters of RMAC present a security analysis of RMAC in [6]. In this appendix, three types of attacks on general MAC algorithms are summarized and discussed with respect to RMAC: exhaustive key search, general forgery, and extension forgery based on birthday collisions.

A.1 Exhaustive Key Search
In principle, given sufficiently many valid message-tag pairs, an unauthorized party can exhaustively search, off-line, every possible key to the MAC generation algorithm. After recovering the secret key, by this method or any other method, the unauthorized party could generate a forgery, i.e., a valid authentication tag, for any message.

The number of RMAC keys is so large that exhaustive key search of RMAC is impractical for the foreseeable future. In particular, for the key size k, which is at least 112 bits for the approved block cipher algorithms, the exhaustive search for the two RMAC keys would be expected to require the generation of 2^(2k-1) MACs. Even if the two RMAC keys are derived from a single master key, as discussed in Appendix B.1, the exhaustive search for the master key would be expected to require the generation of 2^(k-1) MACs.

A.2 General Forgery
The successful verification of a MAC does not guarantee that the associated message is authentic: there is a small chance that an unauthorized party can guess a valid MAC of an arbitrary (i.e., inauthentic) message. Moreover, if many message forgeries are presented for verification, the probability increases that, eventually, verification will succeed for one of them. This limitation is inherent in any MAC algorithm.

The protection that the RMAC algorithm provides against such forgeries is determined by the bit length of the MAC, m, which in turn is determined by the choice of a parameter set. The probability of successful verification of an arbitrary MAC with any given salt on any given message is expected to be 2^(-m); therefore, larger values of m offer greater protection against general forgery.

A.3 Extension Forgery Based on a Collision
The underlying idea of extension forgery attacks is for the unauthorized party to find a collision, i.e., two different messages with the same MAC (before any truncation).
If the colliding messages are each concatenated with a common string, then, for many MAC algorithms, including RMAC, the two extended messages have a common MAC. Therefore, the knowledge of the MAC of one extended message facilitates the forgery of the other extended message. The unauthorized party can choose the second part of the forged message, i.e., the common string, but generally cannot control the first part, i.e., either of the original, colliding messages.

In principle, collisions may exist, because there are many more possible messages than possible MACs. A collision may be detected by the collection and search of a sufficiently large set of message-MAC pairs. By the so-called "birthday surprise" (see, for example, [7]), the size of this sufficiently large set is expected to be, approximately, the square root of the number of possible MAC strings, before any truncation.

For RMAC, the extension forgery requires that the salt values, R, are the same for the two colliding messages, as well as the untruncated MACs, i.e., CIPH_K3(O_n) in the specification of Section 6.3. Therefore, larger values of the block size, b, and the salt size, r, provide greater protection against extension forgery. In particular, the unauthorized party would have to collect at least 2^((b+r)/2) message-tag pairs in order to expect to detect a collision.

Moreover, if a parameter set is chosen in which m<b, i.e., if CIPH_K3(O_n) is truncated to produce the MAC, then the discarded bits may be difficult for an unauthorized party to determine, so collisions may be difficult to detect. Parameter sets in which m<b may also provide some protection against other types of attacks.

A.4 Summary of Security Properties of Parameter Sets
In Table 2, the expected work factors for the important aspects of the attacks discussed in Sections A.1-A.3 are summarized for the RMAC parameter sets. The values for exhaustive key search are given for the case in which the two RMAC keys are generated from a single master key, as discussed in Section B.1.

Table 2: Expected Work Factors for Three Types of Attacks on RMAC

Parameter  Exhaustive Key Search         General Forgery (Success          Extension Forgery
Set        (MAC Generation Operations)   Probability for a Single Trial)   (Message-Tag Pairs)
I          2^(k-1)                       2^(-32)                           2^32 (b=64) or 2^64 (b=128)
II         2^(k-1)                       2^(-64)                           2^64
III        2^(k-1)                       2^(-80)                           2^72
IV         2^(k-1)                       2^(-96)                           2^96
V          2^(k-1)                       2^(-128)                          2^128

Appendix B: The Generation of RMAC Parameters
B.1 Derivation of RMAC Keys from a Master Key
The two secret RMAC keys, K1 and K2, may be derived from a single master key, K, in order to save bandwidth or storage, at the cost of extra invocations of the block cipher to set up the RMAC keys. For example, let CNST1, CNST2, CNST3, CNST4, CNST5, and CNST6 be constants, i.e., fixed, distinct blocks, and let k and b be the key length and block length of the approved block cipher, as before. If k ≤ 3b (so that three cipher outputs suffice to cover the key length), then K1 and K2 may be derived from the set of constants as follows:

K1 = MSB_k(CIPH_K(CNST1) || CIPH_K(CNST3) || CIPH_K(CNST5))
K2 = MSB_k(CIPH_K(CNST2) || CIPH_K(CNST4) || CIPH_K(CNST6)).

If k=b, then this definition reduces to K1 = CIPH_K(CNST1) and K2 = CIPH_K(CNST2), and thus only two constants are actually required. Similarly, if b<k≤2b, then the definition becomes K1 = MSB_k(CIPH_K(CNST1) || CIPH_K(CNST3)) and K2 = MSB_k(CIPH_K(CNST2) || CIPH_K(CNST4)), and thus only four constants are required.
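The k = b case of this derivation is easy to make concrete. Below is a minimal Python sketch for AES-128 (k = b = 128), assuming the third-party pycryptodome package for the block cipher; the two constant blocks are illustrative values chosen here, not constants mandated by the draft:

```python
# Appendix B.1 key derivation for the k = b case: K1 = CIPH_K(CNST1),
# K2 = CIPH_K(CNST2). Requires pycryptodome (pip install pycryptodome).
from Crypto.Cipher import AES

# Fixed, distinct blocks; any such pair works. These values are illustrative.
CNST1 = b"\x01" * 16
CNST2 = b"\x02" * 16

def derive_rmac_keys(master_key: bytes) -> tuple:
    """Derive the two RMAC keys from a single AES-128 master key."""
    ecb = AES.new(master_key, AES.MODE_ECB)
    return ecb.encrypt(CNST1), ecb.encrypt(CNST2)

k1, k2 = derive_rmac_keys(bytes(16))
print(k1.hex(), k2.hex())
```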
B.2 Salt Generation
The salt values associated with messages shall repeat with no more than negligible probability. In particular, the expected probability that the same salt will be associated with two different messages that are authenticated under the scope of any pair of RMAC keys shall be no greater than for random values of salt. Therefore, one approach to meeting the requirement is to generate the salt by an approved deterministic random number generator. Another approach is to ensure that the probability of associating the same salt to different messages is zero, in other words, to generate a nonce to be the salt. For example, the salt may be a counter or a message number.

Appendix C: Example Vectors for the MAC Generation Function
In this appendix, example vectors are provided for the RMAC MAC generation function with either the AES algorithm or Triple DES as the underlying block cipher. For each allowed key size of the underlying block cipher, MACs are generated on three messages for each parameter set. The lengths of the three messages, denoted Mlen, are 128 bits, 384 bits, and 400 bits. In addition to the MAC for the given input values, intermediate results are provided. All strings are represented in hexadecimal notation.

C.1 RMAC-AES128 Example Vectors
C.1.1 RMAC-AES128-I

RMAC-AES128, r=0, m=32, Mlen=128
M:            000102030405060708090a0b0c0d0e0f
K1:           000102030405060708090a0b0c0d0e0f
K2:           0f0e0d0c0b0a09080706050403020100
R:            none
M || PAD:     000102030405060708090a0b0c0d0e0f 80000000000000000000000000000000
O_1:          0a940bb5416ef045f1c39458c653ea5a
O_n:          3a3807ffe3cb3e978953017210335f0f
K3:           0f0e0d0c0b0a09080706050403020100
CIPH_K3(O_n): bfc3c92e04100777be98f7a93e178381
RMAC(R,M):    bfc3c92e

RMAC-AES128, r=0, m=32, Mlen=384
M:            000102030405060708090a0b0c0d0e0f 101112131415161718191a1b1c1d1e1f 202122232425262728292a2b2c2d2e2f
K1:           000102030405060708090a0b0c0d0e0f
K2:           0f0e0d0c0b0a09080706050403020100
R:            none
M || PAD:     000102030405060708090a0b0c0d0e0f 101112131415161718191a1b1c1d1e1f 202122232425262728292a2b2c2d2e2f 80000000000000000000000000000000
O_1:          0a940bb5416ef045f1c39458c653ea5a
O_2:          3cf456b4ca488aa383c79c98b34797cb
O_3:          7e163e30ea49d32152a51a08a10ec02d
O_n:          c5b089e3e4710856581f28b42824c651
K3:           0f0e0d0c0b0a09080706050403020100
CIPH_K3(O_n): a3c33ae5f5d19094c5f65faa4ee60696
RMAC(R,M):    a3c33ae5

RMAC-AES128, r=0, m=32, Mlen=400
M:            000102030405060708090a0b0c0d0e0f 101112131415161718191a1b1c1d1e1f 202122232425262728292a2b2c2d2e2f 3031
K1:           000102030405060708090a0b0c0d0e0f
K2:           0f0e0d0c0b0a09080706050403020100
R:            none
M || PAD:     000102030405060708090a0b0c0d0e0f 101112131415161718191a1b1c1d1e1f 202122232425262728292a2b2c2d2e2f 30318000000000000000000000000000
O_1:          0a940bb5416ef045f1c39458c653ea5a
O_2:          3cf456b4ca488aa383c79c98b34797cb
O_3:          7e163e30ea49d32152a51a08a10ec02d
O_n:          6a83b72738a946e319702dfd323fae52
K3:           0f0e0d0c0b0a09080706050403020100
CIPH_K3(O_n): 4577d30eac2b9a438e507ecf22cc5fbd
RMAC(R,M):    4577d30e

C.1.2 RMAC-AES128-II

RMAC-AES128, r=0, m=64, Mlen=128
M:            000102030405060708090a0b0c0d0e0f
K1:           000102030405060708090a0b0c0d0e0f
K2:           0f0e0d0c0b0a09080706050403020100
R:            none
M || PAD:     000102030405060708090a0b0c0d0e0f 80000000000000000000000000000000
O_1:          0a940bb5416ef045f1c39458c653ea5a
O_n:          3a3807ffe3cb3e978953017210335f0f
K3:           0f0e0d0c0b0a09080706050403020100
CIPH_K3(O_n): bfc3c92e04100777be98f7a93e178381
RMAC(R,M):    bfc3c92e04100777

RMAC-AES128, r=0, m=64, Mlen=384
M:            000102030405060708090a0b0c0d0e0f 101112131415161718191a1b1c1d1e1f 202122232425262728292a2b2c2d2e2f
K1:           000102030405060708090a0b0c0d0e0f
K2:           0f0e0d0c0b0a09080706050403020100
R:            none
M || PAD:     000102030405060708090a0b0c0d0e0f 101112131415161718191a1b1c1d1e1f 202122232425262728292a2b2c2d2e2f 80000000000000000000000000000000
O_1:          0a940bb5416ef045f1c39458c653ea5a
O_2:          3cf456b4ca488aa383c79c98b34797cb
O_3:          7e163e30ea49d32152a51a08a10ec02d
O_n:          c5b089e3e4710856581f28b42824c651
K3:           0f0e0d0c0b0a09080706050403020100
CIPH_K3(O_n): a3c33ae5f5d19094c5f65faa4ee60696
RMAC(R,M):    a3c33ae5f5d19094
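The generation steps of Section 6.3 are compact enough to transcribe directly: a CBC-MAC under K1 followed by one extra encryption under the salted key K3. Below is a minimal Python sketch, assuming the third-party pycryptodome package for AES and byte-aligned message, salt, and tag lengths (the spec itself works at the bit level):

```python
# RMAC MAC generation (Section 6.3), assuming pycryptodome for AES.
from Crypto.Cipher import AES

def rmac(k1: bytes, k2: bytes, msg: bytes, salt: bytes = b"", m_bits: int = 32) -> bytes:
    b = 16  # AES block size in bytes
    # Steps 1-2: append a single '1' bit, then '0' bits, up to a block multiple.
    pad_len = b - (len(msg) % b)
    padded = msg + b"\x80" + b"\x00" * (pad_len - 1)
    # Steps 3-4: CBC under K1 with a zero IV; keep only the final block O_n.
    o_n = AES.new(k1, AES.MODE_CBC, iv=b"\x00" * b).encrypt(padded)[-b:]
    # Step 5: K3 = K2 XOR (R || 0^(k-r)); the salt lands in the leading bytes.
    k3 = bytes(x ^ y for x, y in zip(k2, salt.ljust(len(k2), b"\x00")))
    # Step 6: one more encryption under K3, truncated to m bits.
    return AES.new(k3, AES.MODE_ECB).encrypt(o_n)[: m_bits // 8]

# Compare with the first vector in C.1.1 above (parameter set I: r=0, m=32).
k1 = bytes.fromhex("000102030405060708090a0b0c0d0e0f")
k2 = bytes.fromhex("0f0e0d0c0b0a09080706050403020100")
msg = bytes.fromhex("000102030405060708090a0b0c0d0e0f")
print(rmac(k1, k2, msg).hex())  # expected per the vector above: bfc3c92e
```

The zero IV means Steps 3-4 reduce to taking the last ciphertext block of a plain CBC encryption, which is why no separate loop over blocks is needed here.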
AForge Documentation (translated from the Chinese edition)

AForge: The AForge namespace contains the core classes of the AForge.NET framework, which are used by the framework's other namespaces and can also be used independently for various purposes.

AForge.Controls: The AForge.Controls namespace contains different useful UI controls that can be used together with other classes of the framework.

AForge.Fuzzy: The AForge.Fuzzy namespace contains a set of interfaces and classes for operating with fuzzy sets.

AForge.Genetic: The AForge.Genetic namespace contains interfaces and classes for genetic computation. The namespace and its sub-namespaces contain classes that allow many different problems (optimization, approximation, prediction, etc.) to be solved with the help of genetic algorithms (GA), genetic programming (GP), and gene expression programming (GEP).

AForge.Imaging: The AForge.Imaging namespace contains interfaces and classes for different image processing routines.

AForge.Imaging.ColorReduction: The AForge.Imaging.ColorReduction namespace contains a set of classes for performing color reduction in color images, including color quantization classes, color re-coding functionality, and color dithering algorithms.

AForge.Imaging.ComplexFilters: This namespace contains interfaces and classes for image processing routines that are performed on complex (Fourier-transformed) images.

AForge.Imaging.Filters: The AForge.Imaging.Filters namespace contains a collection of interfaces and classes that provide different image processing filters. Classes in this namespace apply different transformations to a source image, either operating on the source image directly or producing a new image as the result of the processing.

AForge.Imaging.Formats: The AForge.Imaging.Formats namespace contains interfaces and classes for handling different image file formats.

AForge.Imaging.Textures: The AForge.Imaging.Textures namespace contains a collection of classes that generate different types of textures, which are used to create different effects.

AForge.MachineLearning: The AForge.MachineLearning namespace contains interfaces and classes for machine learning.
FedEx Rate Sheets Self-Service User Guide
Overview
FedEx Rate Sheets allows users to self-serve and obtain rate sheets conveniently and efficiently. The Online Customer Rate Visibility tool, also known as FedEx Rate Sheets, provides a secure way for users to view and generate their net rates by zones and weights directly from fedex.com. Users can use the generated rate sheets to compare rates and to research previous, current, and future rates.

This user guide is divided into 5 sections:
1. 6 Steps to Get Your Account-Specific Shipping Rates
2. 2 Steps to Manage Your Account-Specific Shipping Rates
3. Access to Rate Sheets: Account Qualifications and Blocking
4. Access to Rate Sheets: How to Request Access from FedEx Administrator
5. Access to Rate Sheets: How to Obtain Access to the FedEx Rate Sheet Tool

The important features to remember are:
• Supported AMEA markets: Australia, Bahrain, Botswana, China, Egypt, Guam, Hong Kong SAR, China, India, Indonesia, Japan, Kenya, Kuwait, Macau SAR, China, Malawi, Malaysia, Mozambique, Namibia, New Zealand, Oman, Philippines, Saudi Arabia, Singapore, South Africa, South Korea, Swaziland, Taiwan, China, Thailand, United Arab Emirates, Vietnam, Zambia
• Accepted payors: account holders with 9-digit accounts, AMEA payors
• Available services: international services
• Convenience: users can view their rates online and generate rate sheets in English or a local language (8 local languages offered in AMEA: Simplified Chinese for China, Traditional Chinese for Hong Kong, Traditional Chinese for Taiwan, Japanese, Korean, Thai, Vietnamese, Bahasa, and Arabic)

Section 1 – 6 Steps to Get Your Account-Specific Shipping Rates

Step 1 – Enter the FedEx Rate Sheet page: on the fedex.com home page, click "Account" on the menu bar and select the "FedEx Rate Sheet" tool.

Step 2 – Account login: use your user ID and password to log in to fedex.com.
Notes: If you forget your user ID or password, click the "Forgot your user ID or password?" button; you will be asked to enter your email ID to reset your credentials. Because negotiated rates are confidential information, the administrator can use the FedEx Administration application to control which users may view them (see Section 4).

Step 3 – The Generate tab in the FedEx Rate Sheet tool: once you enter your login credentials, you are redirected to the FedEx Rate Sheet tool, and a screen appears with three tabs: Generate, Manage, and Profile. Click on the Generate tab. There are 3 sections under the Generate tab: Account Information, FedEx Service Selection, and Display Options.

Step 3A: Account Information (required) – select your FedEx account and check the shipping address details.
Notes: You may have multiple accounts appear in the dropdown list. Please choose the account that you would like to see account-specific rates for.

Step 3B: FedEx Service Selection (required) – select the FedEx service that you would like to see rates for.
Step 3C: Display Options (required) – fill in the information for the required fields:
• Effective date: the date for which the pricing is in effect. You may choose as far back as 12 months (limited to the date that the account was transitioned to P01) and as far into the future as 45 days.
• Language of the rate sheet output.
• Currency in which the rates are quoted.
• Unit of weight.
Optional fields:
• Rate Sheet Name, which will appear in your history and make the rate sheet easy to identify.
• Rate Sheet Profile, which stores the selections to create a template that you can use again in the future; you can request a rate sheet even if no profile is available or selected.

Step 4 – Enter a name and profile for the rate sheet (optional): name your rate sheet. You may also name and save the profile used to generate the rate sheet.

Step 5 – Submit the rate sheet request: click "Request Rate Sheet"; a link will be available for you to view and download within minutes. A Rate Sheet Request confirmation pop-up will display on the Generate tab. When you click OK, you will be redirected to the Manage tab, where you can download and view the rate sheets.

Section 2 – 2 Steps to Manage Your Account-Specific Shipping Rates

Step 1 – Download the rate sheet from the Manage tab: once the status changes to "Complete", click on the icon to download rate sheets in PDF/XLS format. A toggle allows you to hide or reveal a column by clicking that column's name.
Notes:
• From the Manage Rate Sheets tab, you can view and download rate sheets from the past 12 months and 45 days into the future.
• Once a rate sheet is submitted, its status will show as "Pending". It should turn to "Complete" within ten minutes. If there is a problem, the status will change to "Failed".
• In order to see a change in status, the page must be refreshed.
• You may scroll through the history by moving to "Next" and "Previous" pages.
• You may generate many reports and may need to recall which report has the data you desire. To view the details for a given rate sheet, hover the cursor over a cell column and click to reveal a Rate Sheet Summary that shows the services that were selected for this rate sheet.

Step 2 – Receive the rate sheet confirmation email and the rate sheet in PDF format: once the rate sheet is available, you will receive an email notification with a link to access the account-based rates online. Alternatively, you can download the rates in PDF or Excel format from the Manage tab. (An example of the PDF output is shown for a net rate test account with the International Priority Express Export (IPE) service selected.)

Section 3 – Access to Rate Sheets: Account Qualifications and Blocking
(Shown: the view from an account with no access to rate sheets.)
Notes: If you encounter this screen while logging into the Rate Sheet tool, it indicates that you do not have the access privilege to view rates for your user account. This could be due to the following reasons:
1. If your shipping account is registered for the FedEx Administration application, you need to request access from your FedEx Admin. Please follow the steps in Section 4 to enable access for your user ID.
2. If you do not have the Rate Admin role for your FedEx shipping account, you will not be able to access the FedEx Rate Sheet tool.
3. If your shipping account is not managed via the FedEx Admin application, then the individual who created the first user ID linked to this FedEx 9-digit shipping account will have access to FedEx Rate Sheets. Any additional users will not be able to log into the FedEx Rate Sheet tool; new users will need to reach out to the holder of the first user ID to get rate access.
4. If you are still unable to log into the FedEx Rate Sheet tool, it could be a system issue.
Please reach out to the following teams for support and assistance:
APAC: *********************

Section 4 – Access to Rate Sheets: How to Request Access from FedEx Administrator

FedEx Administrator scenario: a user has login credentials (FCL) but does not have the FedEx Admin right and needs to request access to the Rate Sheet tool from an admin user in the FedEx Administration application. The FedEx Admin user needs to follow the steps below.

Step 1 – A user with the FedEx Administrator role logs in and goes to "My Profile": go to fedex.com, click the Account tab, then click My Profile.

Step 2 – Get access to FedEx Administration: after logging in with your user ID and password, click Shipping Administration.

Step 3 – The administrator creates a new user: click the "Create new" button under "User" in the "Company Summary".

Step 4 – Fill in the requestor information to create the new user:
• 4A: Fill in the new user information.
• 4B: Check the two invitation boxes.
• 4C: Review and assign the user role from the dropdown menu.
• 4D: Click "Add Account(s)" and select the 9-digit corporate account number.
• 4E: Click the Permissions tab after filling in the information. Once the admin clicks "customize permissions", the Permissions tab becomes available to control access to FedEx Rate Sheets.
Notes: The administrator needs to enter all necessary details for the new user (i.e., the person who wants access to the FedEx Rate Sheet tool) and ensure the new user ID is linked to the same 9-digit FedEx shipping account. The administrator can also choose to invite the user and ask them to create a new user ID themselves for the same 9-digit FedEx shipping account number. There are 3 user roles that an administrator can assign to a new user on each account: Company Administrator, Group Administrator, and Standard User.

Step 5 – Grant access to display the account-based rate sheet:
• 5A: Check the box "Display account based rates sheet".
• 5B: Click the "Save" button to submit. After clicking Save, the new user will be created.
Notes: If an account is managed in FedEx Administration (FedEx Admin), only administrator users will have access to FedEx Rate Sheets. They may permit invited users on the account to access rate sheets by selecting the depicted option in FedEx Shipping Administration. The administrator needs to go to the "Permissions" tab within the FedEx Administration app and ensure the "Display account based rate sheet" flag is enabled while creating the new user.
Step 6 – The requested user follows the steps below after the FedEx Admin invites them to create a user ID:
• 6A: The requested user will receive an email from the system if the FedEx Admin invited them to create the user ID themselves (refer to Step 4B). Click through the invitation link.
• 6B: Enter the required personal information and create a user ID for future logins.
• 6C: Once the requested user gets the email and clicks on the hyperlink in it, they will be redirected to a screen where they have to enter all details and click the "Submit" button to complete. The screen will show that the user ID has been created.

Section 5 – Access to Rate Sheets: How to Obtain Access to the FedEx Rate Sheet Tool

FedEx Rate Sheet Tool scenario: a user may or may not have a user ID, is not registered for the FedEx Admin application for their 9-digit shipping account, and needs to access the FedEx Rate Sheet tool.

Step 1 – Create a user ID (for existing customers): go to fedex.com, click the Sign Up/Log In tab, then click "Create User ID (For Existing Customers)".

Step 2 – Fill in the required information in the login information, secret question, and contact information sections.

Step 3 – Link the 9-digit FedEx account number with the user ID:
• 3A: Check the box and enter the existing 9-digit FedEx account number.
• 3B: Click edit and change the shipping address on that user ID so that it aligns with the account's billing address.
Notes: A security check on address matching will be performed. The shipping address on the user ID should be the same as the billing address of the 9-digit FedEx shipping account.

Step 4 – Fill in invoice numbers from the last 120 days: fill in 2 invoice numbers from the last 120 days.
Notes: If you do not recall or know the last 2 invoices for your 9-digit FedEx account number, please reach out to the following teams for support and assistance — APAC: *********************, MEISA: **************************.

Step 5 – Select the FedEx account:
• 5A: Check the box and select the FedEx account from the dropdown menu.
• 5B: Click continue to complete creating the user ID.
summary-method: Repeated Computation

A Deep Dive into Repeated Computation with summary-method. In programming and data analysis, repeatedly computing summary statistics is an important concept. By computing a data set's summary statistics (for example, the mean, median, and standard deviation) many times, we can better understand the data's characteristics and trends, and thereby make more accurate decisions and predictions. This article explores the topic in depth.

1. Basic concepts
In data analysis, a summary method is a procedure for computing summary statistics of the data, typically including the mean, median, standard deviation, maximum, and minimum. These statistics help us quickly understand how the data are distributed, spot outliers and changes in trend, and provide a reference for subsequent analysis and modeling.
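As a concrete illustration of these basic summary statistics, here is a minimal Python sketch (assuming NumPy; the data values are made up):

```python
import numpy as np

data = np.array([12.1, 9.8, 11.4, 10.9, 35.0, 10.2])  # illustrative values
print({
    "mean": data.mean(),
    "median": float(np.median(data)),
    "std": data.std(ddof=1),  # sample standard deviation
    "max": data.max(),
    "min": data.min(),
})
# The 35.0 pulls the mean well above the median -- a quick sign of an outlier.
```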
2. Why compute repeatedly?
A statistic obtained from a single computation may be affected by randomness in the data and may not fully represent the data's true characteristics. By computing it many times, we obtain the distribution of the statistic and therefore a more complete picture of the data. For example, by computing the mean repeatedly, we can obtain a confidence interval for the mean and thus a clearer sense of how accurate it is.
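One standard way to realize this "repeated computation" is the bootstrap: resample the data many times, recompute the mean each time, and read a confidence interval off the distribution of the recomputed means. A minimal sketch, assuming NumPy:

```python
import numpy as np

def bootstrap_mean_ci(data, n_resamples=10_000, alpha=0.05, seed=0):
    """Estimate a (1 - alpha) confidence interval for the mean by
    recomputing the mean on many bootstrap resamples."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    # Resample with replacement and recompute the summary statistic each time.
    means = np.array([
        rng.choice(data, size=data.size, replace=True).mean()
        for _ in range(n_resamples)
    ])
    lo, hi = np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return means.mean(), (lo, hi)

sample = np.random.default_rng(42).normal(loc=5.0, scale=2.0, size=200)
point, (lo, hi) = bootstrap_mean_ci(sample)
print(f"mean ~ {point:.3f}, 95% CI ~ ({lo:.3f}, {hi:.3f})")
```

The percentile method used here is the simplest bootstrap interval; more refined variants (e.g., bias-corrected intervals) follow the same resample-and-recompute pattern.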
3. Applications
Repeated summary computation is widely used in practical data analysis. In finance, repeatedly computing the mean and standard deviation of stock returns helps us assess a stock's risk and return. In medical research, repeatedly computing patients' survival times and treatment effects yields more reliable statistical conclusions.

4. Personal perspective
In my view, repeated summary computation is an indispensable part of data analysis. It lets us characterize the data more objectively and avoid being misled by a single computation. It also strengthens the robustness and reliability of an analysis, making our conclusions more credible.

In short, repeated summary computation provides more complete and accurate summary statistics and helps us better understand the data's characteristics and trends. In practice, we should give its meaning and methods due weight and combine them with domain knowledge and actual needs when analyzing data and making decisions.
Template for an English Essay Using Cause-and-Effect Analysis
英语作文因果分析法模版Title: Analyzing Cause and Effect in Modern Society.In the intricate web of modern society, understanding the intricate relationships between causes and effects is crucial. This essay aims to explore the concept of cause and effect, its application in various scenarios, and its significance in shaping our world.Cause and Effect: A Fundamental Concept.Cause and effect is a fundamental principle that governs the operation of the universe. It refers to the relationship between an action or event (the cause) and the result or outcome that follows (the effect). This relationship is not limited to physical phenomena but extends to social, psychological, and even cultural contexts. Understanding cause and effect helps us to comprehend how events unfold, predict future outcomes, and make informed decisions.Applications of Cause and Effect.1. Physical Sciences: In the physical sciences, cause and effect is manifested in the laws of nature. For instance, Newton's laws of motion explain how forces act as causes to produce changes in the motion of objects, resulting in specific effects.2. Social Sciences: In the social sciences, cause and effect analysis is used to understand the reasons behind social phenomena. For example, economic policies can be viewed as causes that affect economic growth or decline.3. Psychology: Psychology relies heavily on cause and effect analysis to explain human behavior. For instance, psychological trauma can be seen as a cause that leads to behavioral changes or mental health issues.4. Environmental Studies: Environmental degradation is often analyzed through cause and effect relationships. For instance, deforestation is a cause that leads to climatechange and ecological imbalances.Importance of Cause and Effect Analysis.Cause and effect analysis is crucial for several reasons:1. Prediction and Planning: Understanding cause and effect relationships allows us to predict future outcomes and plan accordingly. This is especially useful in fields like meteorology, where analyzing weather patterns can help predict future weather conditions.2. Decision-Making: Cause and effect analysis helps in making informed decisions by identifying the potential consequences of various actions. This is essential in areas like policy-making, where decisions can have far-reaching effects on society.3. Problem-Solving: Understanding the root causes of problems is crucial for effective problem-solving. By identifying the causes, we can develop targeted solutionsthat address the underlying issues.4. Critical Thinking: Cause and effect analysis promotes critical thinking by encouraging individuals to question assumptions, analyze evidence, and formulate arguments based on logical relationships.Challenges in Cause and Effect Analysis.While cause and effect analysis is a powerful tool, it also faces some challenges:1. Complexity: In real-world scenarios, cause and effect relationships can be incredibly complex, involving multiple interacting factors. This makes it difficult to identify the exact causes of particular effects.2. Temporal Lag: There is often a temporal lag between causes and effects. This means that the effects of certain actions may not be immediately apparent, making itdifficult to attribute specific outcomes to particular causes.3. Causal Ambiguity: In some cases, it may be unclear whether a particular effect is caused by a single factor or a combination of multiple factors. 
This causal ambiguity can lead to confusion and misunderstandings.

Conclusion

Cause and effect analysis is a fundamental tool for understanding the world we live in. It helps us to comprehend the relationships between actions and outcomes, predict future trends, and make informed decisions. However, it is important to recognize its limitations and approach it with a critical mindset, accounting for the complexity and ambiguity of real-world scenarios. By doing so, we can harness the power of cause and effect analysis to gain deeper insights into the world and shape a better future.
ZENworks Suite Product Overview
ZENworks Suite
A robust Unified Endpoint Management and Protection solution, Micro Focus ZENworks Suite helps organisations manage and secure their endpoint devices, protect their endpoint data, package and deliver endpoint software, monitor and manage software licences, and effectively support end users.

Product Highlights
The ZENworks Suite combines the tools you need to manage, secure and protect your endpoint environment and data, and do it from one place. In addition to the complete ZENworks platform—Micro Focus ZENworks Asset Management, Micro Focus ZENworks Configuration Management, Micro Focus ZENworks Endpoint Security Management, Micro Focus ZENworks Full Disk Encryption and Micro Focus ZENworks Patch Management—you also get two additional products. Micro Focus Desktop Containers improves your workforce's productivity, allowing users to get to their applications wherever they are and IT to reduce application conflicts and deliver a better application experience. Micro Focus Service Desk Standard Edition helps you be better organised and responsive to user needs and provides a self-service experience that is tightly integrated with the rest of the solution. The combination of these products gives you everything you need to manage, secure and protect your endpoint devices and empower end users.

Key Benefits
ZENworks Suite provides a unified endpoint management and protection solution that:
■ Reduces the costs and risks of managing endpoint devices. The number of endpoint devices users are accessing has exploded in recent years, and the location users are working from has changed drastically in the recent past. All of this has led IT to find ways to be more responsive to their needs while reducing the risk the business faces. By providing single-console management of all your devices, ZENworks allows you to use a common console and management paradigm that uses management by exception, allowing even the most complex environments to be managed by a small number of IT resources. This lets you drastically reduce costs compared to implementing specialised endpoint management solutions for PC, Linux, Mac and mobile management. Unlike other solutions, ZENworks is simple and easy to get up and running and provides everything you need right in the box. You can further reduce costs associated with licence overages using software asset management.
■ Frees up IT resources to focus on other priorities. Because IT can now be much more efficient and can automate many of their day-to-day tasks, they can be retasked to other priorities for the business. ZENworks allows a small number of endpoint managers to manage tens of thousands of devices effectively.
■ Improves end user productivity and satisfaction. The way that your employees and learners are working has changed. They expect to get access to the applications and data they need from wherever they are, and they expect to do it with little or no direct
With the security capabilitiesof ZENworks Suite you can ensure your devices are secured against common threats such as malware, devices, USB device attacks. You can ensure your devices are not the low hanging fruitfor hackers by automatically patcheing them and you can use the included encryption capabilities to protect your data in case the device is lost or stolen. Key FeaturesZENworks Suite is not just a management suite, and it is not just a protection suite, it is a Unified Endpoint Management and Protection solution—you get everything you need in one box:.■Service Desk, Standard Edition gives you the ability to do more with less. Service Desk helps ensure you are managing incidents and service requests in a timely manner, keeping up with your committed service levels, and keeping your organisation running like a finely tuned machine. Service Desk also integrates with ZENworks to provide an Enterprise Application and Service Store that IT can use to deliver applicationsand services with automated approval workflows and delivery through ZENworks and LDAP integration. A full ITIL addonis also available that extends the solutionto include Change Management, Releaseand Deployment Management and more.■Desktop Containers makes it easy topackage software as containers thatare abstracted from the machine theyare installed on. This reduces conflicts,allows you to extend the support oflegacy applications and improve securitywhile reducing deployment and updatetimes. With an included library of prepackaged applications, you can packageapplications faster than ever. The availableApplication Streaming addon allowsyou to deliver these applications to yourhybrid workers and remote learnersregardless of their location or device.■Software Asset Management allowsyou track and manage the contractsand software assets within yourenvironment. See how many licencesand machines your employees useso you can stop paying for those youdo not. Track that information againstwhat you bought to ensure you haveenough licences to avoid costly licenceoverage fees. Leverage the ConfigurationManagement Database (CMDB) ofZENworks Service Desk to extend assetmanagement to manage the lifecycleof hardware and even nonIT assetssuch as office furniture, vehicles, etc.■Endpoint Device Management managesthe lifecycle of your current and futureendpoints, with support for Windows, Mac,Linux, iOS and Android devices. UsingInternet friendly protocols, ZENworksis suited for the workfromanywhereenvironment of today’s workforce, allowingyou to easily manage and protect deviceswhether the user is in the office, at home,or on the road. With capabilities rangingfrom software distribution and updating,OS imaging, device provisioning, hardwareand software inventory, remote desktopmanagement and more you will be ableto ensure users are safe and productive.■Endpoint Security Management providesfinegrained, policybased control overall your Windows desk¬top and mobilePCs—including the ability to automaticallychange security configurations dependingon a user’s role and location. By creatingand managing policies from a cen¬tralconsole, ZENworks makes it possible toimplement and enforce tightly controlled,highly adaptive security policies withoutplac¬ing any configuration or enforcementburden on end users. With capabilities thatinclude endpoint firewall, folder encryption,WiFi security, VPN enforcement, StorageDevice Control and powerful scriptinginterface you will have everything youneed to protect the organisation’sWindows devices. 
ZENworks EndpointSecurity Management also provides fullyintegrated malware and virus protectionthat protects against known andunknown (zeroday) malware threats.■Full Disk Encryption not only lets youencrypt a hard drive, but lets you do itfrom wherever you are. With prebootauthentication you can further protectthe device by requiring authenticationbefore allowing the system to boot.Your IT staff can manage all encrypteddevices from a webbased console whilestill providing automated data protectionthat completely locks out threats.■Endpoint Software Patch Managementlets you identify, track and remediatemissing patches and the associatedvulnerabilities. Set patch policies soall your endpoints have the right OSand thirdparty application patchesat the right time. Understand whatCVEs are impacting your environmentand automatically remediate them.This results in patch application thatis up to 13 times faster than manualprocesses. Monitor patch complianceand automatically apply updates andpatches to meet predefined standards.(60day Patch subscription included with ZENworks Suite; additional patch subscription required beyond 60days).Learn more at/en-us/portfolio/ zenworks-suite/overviewThe synergies between all the features of ZENworks are too many to name, but the features are allfocused on a single goal: to keep your usersproductive and protect corporate devices and data from those who would cause harm.。
A Summary of GUAM

Please completely fill in every space below with the appropriate information:
Class: CCC1    Date: Spring 2010
Summary Sheet Researcher(s): David Pedosiuk, Sammy Xu
PPT Researcher(s): Kevin Chang, Lisa Zhang
Presenter(s): David Pedosiuk, Tina Li
Which group member didn't work on the project: Lazy Larry

Location: Guam is an island in the western Pacific Ocean and is an organized, unincorporated territory of the United States.

Population: 154,805

Type of Government: Guam is governed by a popularly elected governor and a unicameral 15-member legislature, whose members are known as senators. Guam elects one non-voting delegate to the United States House of Representatives.

Climate: The climate is characterized as tropical marine. The weather is generally hot and very humid with little seasonal temperature variation. The mean high temperature is 86 °F (30 °C) and the mean low is 76 °F (24 °C). The dry season runs from December through June; the remaining months constitute the rainy season. January and February are considered the coolest months of the year. Typhoon season runs from October through November.

Geography: Guam is the 32nd largest island of the United States. It is the southernmost and largest island in the Mariana island chain and is also the largest island in Micronesia. Guam is the closest land mass to the Mariana Trench, a deep subduction zone that lies beside the island chain to the east.

Language(s) spoken: Chamorro, English

Transportation: Cars, boats, planes.

Capital City: Agana

Places of interest: Tumon Bay is a favorite tourist area. Two Lovers Point is a spot where people overlook the ocean. The American naval base, located in the north of the island, is known as the most important US military base in the Pacific. Agania is a district in Guam that has a breathtaking waterfall.

Major religions: 85% Catholic, 15% other.

3 Celebrated Holidays:
1. Guam Liberation Day is celebrated with a parade and a big fiesta following the parade, commemorating the US troops liberating Guam from the Japanese.
2. Christmas is a very big holiday, which includes Catholic traditions like midnight mass, as well as gift exchange and the traditional Christmas tree.
3. Easter is also a big festive holiday for the Chamorro people, remembering the death and resurrection of Christ. A large fiesta is also held to commemorate this day, with families coming together in celebration.

Sources Cited (at least 5):
/wiki/Guam
/Pages/Default.aspx
/guam
/
/guam/

IMPORTANT: You will type up this form, PRINT it off, and give it to me the day you present to the class.