Database Foreign-Language Literature with Chinese-English Translation


Foreign Literature and Translation


(English references and translation) Undergraduate thesis, June 2016. Title: STATISTICAL SAMPLING METHOD, USED IN THE AUDIT. Student: Wang Xueqin (王雪琴); School: School of Management; Department: Accounting; Major: Financial Management; Class: Financial Management 12-2; School code: 10128; Student ID: 201210707016.

Statistics and Audit, Romanian Statistical Review nr. 5 / 2010

STATISTICAL SAMPLING METHOD, USED IN THE AUDIT - views, recommendations, findings
PhD Candidate Gabriela-Felicia UNGUREANU

Abstract
The rapid increase in the size of U.S. companies from the early twentieth century created the need for audit procedures based on the selection of a part of the total population audited, in order to obtain reliable audit evidence that characterizes an entire population consisting of account balances or classes of transactions. Sampling is not used only in auditing: it is also used in sample surveys, market analysis and medical research, whenever someone wants to reach a conclusion about a large body of data by examining only a part of it. The difference lies in the "population" from which the sample is selected, that is, the set of data about which a conclusion is to be drawn. Audit sampling applies only to certain types of audit procedures.

Key words: sampling, sampling risk, population, sampling unit, tests of controls, substantive procedures.

Statistical sampling
The statistical sampling committee of the American Institute of Certified Public Accountants (AICPA) issued a special report in 1962, titled "Statistical Sampling and the Independent Auditors", which allowed the use of the statistical sampling method in accordance with Generally Accepted Auditing Standards (GAAS). During 1962-1974, the AICPA published a series of papers on statistical sampling, "Auditor's Approach to Statistical Sampling", for use in the continuing professional education of accountants. In 1981, the AICPA issued the professional standard "Audit Sampling", which provides general guidelines for both sampling methods, statistical and non-statistical.

Earlier audits included checks of all transactions in the period covered by the audited financial statements. At that time, the literature did not give particular attention to this subject. Only in 1971 did an audit procedures program printed in the "Federal Reserve Bulletin" include several references to sampling, such as selecting a "few items" of inventory. The program was developed by a special committee of the body that later became the American Institute of Certified Public Accountants.

In the first decades of the last century, auditors often applied sampling, but the sample size was not related to the effectiveness of the entity's internal control. In 1955, the American Institute of Accountants published a case study on extending audit sampling, summarizing an audit program developed by certified public accountants, to show why sampling is necessary to extend the audit. The study was important because it was one of the leading publications on sampling to recognize a relationship of dependency between the extent of detail testing and the reliability of internal control.

In 1964, the AICPA's Auditing Standards Board issued a report entitled "The Relationship Between Statistical Sampling and Generally Accepted Auditing Standards (GAAS)", which illustrated the relationship between the accuracy and reliability of sampling and the provisions of GAAS.

In 1978, the AICPA published the work of Donald M.
Roberts, "Statistical Auditing", which explains the theory underlying statistical sampling in auditing.

An auditor does not rely solely on the results of a single procedure to reach a conclusion on an account balance, class of transactions or the operational effectiveness of controls. Rather, audit findings are based on combined evidence from several sources, obtained through a number of different audit procedures. When an auditor selects a sample from a population, the objective is to obtain a representative sample, that is, a sample whose characteristics are identical to those of the population. This means that the selected items are similar to those remaining outside the sample. In practice, auditors never know for certain whether a sample is representative, even after the test has been completed, but they "may increase the probability that a sample is representative by the accuracy of the activities related to its design, selection and evaluation" [1]. A lack of accuracy in the sample results may be caused by observation errors and sampling errors. The risks of producing these errors can be controlled.

Observation error (observation risk) appears when the audit test fails to identify existing deviations in the sample, either because an inadequate audit technique was used or through negligence of the auditor.

Sampling error (sampling risk) is an inherent characteristic of any survey, resulting from the fact that only a fraction of the total population is tested. Sampling error arises because it is possible for the auditor to reach a conclusion, based on a sample, that differs from the conclusion that would be reached if the entire population were subjected to identical audit procedures. Sampling risk can be reduced by adjusting the sample size, depending on the size and characteristics of the population, and by using an appropriate method of selection. Increasing the sample size will reduce sampling risk; a sample of the whole population carries no sampling risk at all.

Audit sampling is a testing method for gathering sufficient and appropriate audit evidence for the purposes of the audit. The auditor may decide to apply audit sampling to an account balance or class of transactions. Audit sampling applies audit procedures to less than 100% of the items within an account balance or class of transactions, such that every sampling unit has a chance of being selected. The auditor is required to determine appropriate ways of selecting items for testing. Audit sampling can follow either a statistical or a non-statistical approach.

Statistical sampling is a method in which the sample is constructed so that each unit of the total population has an equal probability of being included in the sample, the method of sample selection is random, and the results can be evaluated on the basis of probability theory, with quantification of sampling risk. Choosing the population appropriately means that the auditor's findings can be extended to the entire population.

Non-statistical sampling is a method in which the auditor uses professional judgment to select the elements of a sample. Since the purpose of sampling is to draw conclusions about the entire population, the auditor should select a representative sample by choosing sampling units that have characteristics typical of the population.
The results can be extrapolated to the entire population only if the selected sample is representative.

Audit tests can also be applied to all the elements of the population, where the population is small, or to an unrepresentative sample, where the auditor knows the particularities of the population to be tested and is able to identify a small number of items of interest to the audit. If the sample does not share the characteristics of the elements of the entire population, the errors found in the tested sample cannot be extrapolated.

The decision between a statistical and a non-statistical approach depends on the professional judgment of the auditor, who seeks sufficient appropriate audit evidence on which to base the findings supporting the audit opinion.

A statistical sampling method relies on random selection, in which any possible combination of elements of the population is equally likely to enter the sample. Simple random sampling is used when the population has not been stratified for the audit. Random selection typically involves using random numbers generated by a computer. After selecting a random starting point, the auditor finds the first random number that falls within the range of the test document numbers. Statistical assessments of sampling risk are valid only when the approach has the characteristics of statistical sampling.

In another variant of probability sampling, systematic selection (also called mechanical random selection), the elements follow one another naturally in space or time; the auditor has a preliminary listing of the population and has decided on the sample size. "The auditor calculates a counting step and selects the sample elements based on the step size. The counting step is determined by dividing the volume of the population by the desired number of sample units. The advantage of systematic selection is its usability: in most cases a systematic sample can be extracted quickly, and the method automatically arranges the numbers in successive series." [2]

Selection with probability proportional to size is a method which emphasizes those population units with higher recorded values. The sample is constructed so that the probability of selecting any given element of the population is proportional to the recorded value of the item.

Stratified selection is another method of emphasizing units with higher values, achieved by stratifying the population into subpopulations. Stratification gives the auditor a complete picture when the population (the data set to be analyzed) is not homogeneous. In this case the auditor stratifies the population by dividing it into distinct subpopulations which share pre-defined common characteristics. "The objective of stratification is to reduce the variability of the elements within each stratum and therefore allow a reduction in sample size without a proportionate increase in sampling risk." [3] If the stratification is done properly, the total sample size across the strata will be smaller than the sample size that would be required, at the same level of sampling risk, for a sample drawn from the entire population. Audit results applied to a stratum can be projected only onto the items that are part of that stratum.
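To make the two probability-based selection techniques above concrete, the sketch below draws both a simple random sample and a systematic sample from a numbered list of invoices. This is an illustrative Python sketch, not part of the original article; the population of 1,200 invoices and the sample size of 60 are hypothetical figures chosen only for the example.

import random

population = list(range(1, 1201))   # hypothetical invoice numbers 1..1200
sample_size = 60

# Simple random selection: every combination of 60 invoices is equally likely.
random_sample = random.sample(population, sample_size)

# Systematic selection: the counting step is the population volume divided by
# the desired number of sample units; items are then picked at that interval
# starting from a random point inside the first step.
step = len(population) // sample_size            # 1200 / 60 = 20
start = random.randint(0, step - 1)
systematic_sample = population[start::step]

print(len(random_sample), len(systematic_sample), step)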
I also found useful some views on non-statistical sampling methods, which involve guided selection of the sample, that is, selecting each element according to certain criteria determined by the auditor. The method is subjective, because the auditor intentionally selects items containing the features he has defined.

Selection in series (block selection) is done by selecting several series of successive elements. Sampling in series is recommended only if a reasonable number of series is used; with just a few series there is a risk that the sample is not representative. This type of sampling can be used in addition to other samples where there is a high probability of errors occurring.

In arbitrary (haphazard) selection, the auditor does not prefer any particular items, regardless of their size, source or other characteristics. It is not a recommended method, because it is not objective.

Judgmental sampling is based on the auditor's professional judgment, which decides which items will or will not be part of the sample. Because it is not a statistical method, the standard error cannot be calculated. Although the sample structure can be constructed to reproduce the population, there is no guarantee that the sample is representative: if a feature that would be relevant in a particular situation is omitted, the sample is not representative.

Sampling applies when the auditor plans to draw conclusions about a population based on a selection. The auditor considers the audit program and determines the audit procedures to which random selection may apply. Sampling is used by auditors in testing internal control systems and in substantive testing of operations. The general objectives of tests of controls and substantive tests of operations are to verify the application of pre-defined control procedures and to determine whether operations contain material errors.

Tests of controls are intended to provide evidence about the operational effectiveness and design of controls, or about the operation of a control system in preventing or detecting material misstatements in the financial statements. Tests of controls are necessary if the auditor plans to assess control risk for management's assertions.

Controls are generally expected to be applied in the same way to all transactions covered by the records, regardless of transaction value. Therefore, if the auditor uses sampling, it is not advisable to select only high-value transactions; samples must be chosen so as to be representative of the population.

An auditor must be aware that an entity may change a specific control during the course of the audit. If the control is replaced by another that is designed to achieve the same specific objective, the auditor must decide whether to design a sample of all transactions made during the period or only a sample of the transactions subject to the new control. The appropriate decision depends on the overall objective of the audit test.

Verification of an entity's internal control system is intended to provide guidance on the identification of relevant controls and on the design of tests of controls.

Other tests: in testing the internal control system and in testing operations, the audit sample is used to estimate the proportion of elements of a population containing a characteristic or attribute under analysis. This proportion is called the frequency of occurrence or deviation rate and is equal to the ratio between the number of elements containing the specific attribute and the total number of population elements. The deviations found in a sample are used to calculate an estimate of the proportion of deviations in the total population.
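As a worked illustration of the deviation rate just described (the ratio of sample items containing the attribute to the total number of items tested), the following Python sketch uses invented test results; the figures are not taken from the article.

# Hypothetical results of a test of controls on 50 sampled purchase orders:
# True means the prescribed control (e.g. an authorising signature) was missing.
deviations = [False] * 47 + [True] * 3

sample_size = len(deviations)            # 50 items tested
deviation_count = sum(deviations)        # 3 deviations found
sample_deviation_rate = deviation_count / sample_size

# The sample rate is the auditor's estimate of the deviation rate in the
# whole population of purchase orders.
print(f"Sample deviation rate: {sample_deviation_rate:.1%}")   # 6.0%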
The risk associated with sampling refers to the possibility that the selected sample is not representative of the population tested. In other words, the sample itself may contain material errors or deviations from the norm, and a conclusion issued on the basis of a sample may differ from the conclusion that would be reached if the entire population were subject to the audit.

Types of risk associated with sampling: the risk of concluding that controls are more effective than they actually are, or that no significant errors exist when in fact they do, which leads to an inappropriate audit opinion; and the risk of concluding that controls are less effective than they actually are, or that significant errors exist when in fact they do not, which calls for additional work to establish that the initial conclusions were incorrect.

Attributes testing: the auditor should define the characteristics to be tested and the conditions that constitute a deviation. Attributes testing is performed when objective statistical projections about various characteristics of the population are required. The auditor may decide to select items from a population based on knowledge of the entity and its control environment, based on risk analysis, and on the specific characteristics of the population to be tested.

Population is the body of data about which the auditor wishes to generalize the findings obtained from a sample. The population will be defined in accordance with the audit objectives and must be complete and consistent, because the results of the sample can be projected only onto the population from which the sample was selected.

Sampling unit: a sampling unit may be, for example, an invoice, an accounting entry or a line item. Each sampling unit is an element of the population. The auditor defines the sampling unit according to its suitability for the objectives of the audit tests.

Sample size: in determining the sample size, the auditor should consider whether sampling risk is reduced to an acceptably low level. Sample size is affected by the sampling risk that the auditor is willing to accept: the lower the risk the auditor is willing to accept, the larger the sample will need to be.

Error: for tests of details, the auditor should project the monetary errors found in the sample onto the population and should consider the effect of the projected error on the specific audit objective and on other areas of the audit. The auditor projects the total error onto the population to obtain a broad view of the scale of the error and to compare it with the tolerable error. For tests of details, the tolerable error is the tolerable misstatement, and it will be an amount less than or equal to the materiality used by the auditor for the individual classes of transactions or balances being audited. If a class of transactions or an account balance has been divided into strata, the error is projected separately for each stratum. Projected errors and anomalous errors for each stratum are then combined when considering the possible effect on the total class of transactions or account balance.
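The projection of monetary errors described above can be sketched as follows. This is an illustrative Python fragment with invented amounts; the ratio method shown (scaling the sample misstatement by the ratio of population value to sample value) is one common way of projecting errors, not necessarily the one applied by the article's author.

# Hypothetical tests of details on an accounts receivable balance.
population_value = 2_400_000        # recorded value of the whole balance
sample_value = 300_000              # recorded value of the items examined
sample_misstatement = 4_500         # misstatement actually found in the sample
tolerable_error = 50_000            # materiality-based tolerable misstatement

# Ratio projection: scale the sample misstatement up to the population.
projected_error = sample_misstatement * (population_value / sample_value)

if projected_error > tolerable_error:
    print(f"Projected error {projected_error:,.0f} exceeds tolerable error; "
          "extend testing or ask management to investigate.")
else:
    print(f"Projected error {projected_error:,.0f} is within tolerable error, "
          "but consider how close the two figures are.")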
Evaluation of sample results: the auditor should evaluate the sample results to determine whether the assessment of the relevant characteristics of the population is confirmed or needs to be revised. When testing controls, an unexpectedly high sample error rate may lead to an increase in the assessed risk of material misstatement, unless additional audit evidence is obtained to support the initial assessment. For tests of controls, an error is a deviation from the prescribed performance of control procedures. The auditor should obtain evidence about the nature and extent of any significant changes in the internal control system, including changes in staffing.

If significant changes occur, the auditor should revisit the understanding of the internal control environment and consider testing the changed controls. Alternatively, the auditor may consider performing substantive analytical procedures or tests of details covering the audit period.

In some cases, the auditor may not need to wait until the end of the audit to form a conclusion about the operational effectiveness of controls in order to support the control risk assessment. In such cases, the auditor may decide to modify the planned substantive tests accordingly.

In tests of details, an unexpectedly large amount of error in a sample may lead the auditor to believe that a class of transactions or an account balance is materially misstated, in the absence of additional audit evidence showing that no material misstatement exists. When the best estimate of the error is very close to the tolerable error, the auditor recognizes the risk that another sample would produce a different best estimate, which could exceed the tolerable error.

Conclusions
Following the analysis of the sampling methods, we conclude that all methods have advantages and disadvantages. What matters is that the auditor chooses the sampling method on the basis of professional judgment and takes into account the cost/benefit ratio. Thus, if a sampling method proves too costly, the auditor should seek the most efficient method in view of the main and specific objectives of the audit.

The auditor should evaluate the sample results to determine whether the preliminary assessment of the relevant characteristics of the population is to be confirmed or revised. If the evaluation of the sample results indicates that the assessment of the relevant characteristics of the population needs to be revised, the auditor may: require management to investigate the identified errors and the likelihood of further errors and to make any necessary adjustments; modify the nature, timing and extent of further audit procedures; or consider the effect on the audit report.

Selective bibliography:
[1] Law no. 672/2002 (updated) on public internal audit
[2] Arens, A. and Loebbecke, J., "Audit - An Integrated Approach", 8th edition, Arc Publishing House
[3] ISA 530 - Financial Audit 2008 - International Standards on Auditing, IRECSON Publishing House, 2009
- Dictionary of Macroeconomics, C.H. Beck Publishing House, Bucharest, 2008

Database Terminology: Chinese-English Glossary


DBA Dictionary: a Chinese-English glossary of common database design terms.
1. Access method (访问方法): the steps involved in storing records in and retrieving records from a file.

2. Alias (别名): an alternative name for an attribute.

In SQL, an alias can also be used in place of a table name.

3. Alternate keys (备用键, ER/relational model): the candidate keys of an entity/table that were not chosen as the primary key.

4. Anomalies (异常): see update anomalies (更新异常).
5. Application design (应用程序设计): a phase of the database application lifecycle that includes designing the user interface and the application programs that use and process the database.

6. Attribute (属性) (relational model): a named column of a relation.

7. Attribute (属性) (ER model): a property of an entity or a relationship.

8. Attribute inheritance (属性继承): the process by which a member of a subclass may possess its own specific attributes while also inheriting the attributes associated with its superclass.

9. Base table (基本表): a named table whose records are physically stored in the database.

10. Binary relationship (二元关系): an ER term describing a relationship between two entities.

For example, Branch Has Staff.

11. Bottom-up approach (自底向上方法): in database design, a design methodology that starts by identifying each individual design component and then aggregates these components into larger units.

In database design, one can begin at the bottom level with the attributes and then combine them into tables that represent entities and relationships.

12. Business rules (业务规则): additional rules specified by the users or administrators of the database.

13. Candidate key (候选键, ER/relational model): a superkey that contains only the minimum number of attributes/columns necessary to uniquely identify an entity/row.

14. Cardinality (基数): describes the number of possible relationship occurrences for each participating entity.

15. Centralized approach (集中化方法, for database design): merging the requirements of each user view into a single set of requirements for the new database application.
16. Chasm trap (深坑陷阱): occurs when a model suggests that a relationship exists between entity types, but no pathway exists between certain entity occurrences.
A short example using several of these terms appears below.
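Several of the terms above (base table, candidate key, alternate key, alias) can be seen together in a small example. The sketch below uses Python's built-in sqlite3 module and an invented Staff table purely for illustration; it is not part of the original glossary.

import sqlite3

conn = sqlite3.connect(":memory:")

# Base table: a named table whose rows are physically stored in the database.
# staff_no is chosen as the primary key; email is another candidate key and
# therefore an alternate key.
conn.execute("""
    CREATE TABLE Staff (
        staff_no  INTEGER PRIMARY KEY,
        email     TEXT UNIQUE,
        name      TEXT,
        branch_no TEXT
    )
""")
conn.execute("INSERT INTO Staff VALUES (1, 'ann@example.com', 'Ann', 'B005')")

# Alias: 's' replaces the table name Staff inside the query.
rows = conn.execute("SELECT s.name, s.branch_no FROM Staff AS s").fetchall()
print(rows)    # [('Ann', 'B005')]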

SQL Database: Foreign Literature with Chinese-English Translation


SQL数据库中英文对照外文翻译文献中英文对照外文翻译文献(文档含英文原文和中文翻译)Working with DatabasesThis chapter describes how to use SQL statements in embedded applications to control databases. There are three database statements that set up and open databases for access: SET DATABASE declares a database handle, associates the handle with an actual database file, and optionally assigns operational parameters for the database.SET NAMES optionally specifies the character set a client application uses for CHAR, VARCHAR, and text Blob data. The server uses this information to transliterate from a database?s default character set to the client?s character set on SELECT operations, and to transliterate from a client application?s character set to the database character set on INSERT and UPDATE operations.g CONNECT opens a database, allocates system resources for it, and optionally assigns operational parameters for the database.All databases must be closed before a program ends. A database can be closed by using DISCONNECT, or by appending the RELEASE option to the final COMMIT or ROLLBACK in a program.Declaring a databaseBefore a database can be opened and used in a program, it must first be declared with SET DATABASE to:CHAPTER 3 WORKING WITH DATABASES. Establish a database handle. Associate the database handle with a database file stored on a local or remote node.A database handle is aunique, abbreviated alias for an actual database name. Database handles are used in subsequent CONNECT, COMMIT RELEASE, and ROLLBACK RELEASE statements to specify which databases they should affect. Except in dynamic SQL (DSQL) applications, database handles can also be used inside transaction blocks to qualify, or differentiate, table names when two or more open databases contain identically named tables.Each database handle must be unique among all variables used in a program. Database handles cannot duplicate host-language reserved words, and cannot be InterBase reserved words.The following statement illustrates a simple database declaration:EXEC SQLSET DATABASE DB1 = ?employee.gdb?;This database declaration identifies the database file, employee.gdb, as a database the program uses, and assigns the database a handle, or alias, DB1.If a program runs in a directory different from the directory that contains the database file, then the file name specification in SET DATABASE must include a full path name, too. For example, the following SET DATABASE declaration specifies the full path to employee.gdb:EXEC SQLSET DATABASE DB1 = ?/interbase/examples/employee.gdb?;If a program and a database file it uses reside on different hosts, then the file name specification must also include a host name. The following declaration illustrates how a Unix host name is included as part of the database file specification on a TCP/IP network:EXEC SQLSET DATABASE DB1 = ?jupiter:/usr/interbase/examples/employee.gdb?;On a Windows network that uses the Netbeui protocol, specify the path as follows: EXEC SQLSET DATABASE DB1 = ?//venus/C:/Interbase/examples/employee.gdb?; DECLARING A DATABASEEMBEDDED SQL GUIDE 37Declaring multiple databasesAn SQL program, but not a DSQL program, can access multiple databases at the same time. In multi-database programs, database handles are required. A handle is used to:1. Reference individual databases in a multi-database transaction.2. Qualify table names.3. 
Specify databases to open in CONNECT statements.Indicate databases to close with DISCONNECT, COMMIT RELEASE, and ROLLBACK RELEASE.DSQL programs can access only a single database at a time, so database handle use is restricted to connecting to and disconnecting from a database.In multi-database programs, each database must be declared in a separate SET DATABASE statement. For example, the following code contains two SET DATABASE statements: . . .EXEC SQLSET DATABASE DB2 = ?employee2.gdb?;EXEC SQLSET DATABASE DB1 = ?employee.gdb?;. . .4Using handles for table namesWhen the same table name occurs in more than one simultaneously accessed database, a database handle must be used to differentiate one table name from another. The database handle is used as a prefix to table names, and takes the form handle.table.For example, in the following code, the database handles, TEST and EMP, are used to distinguish between two tables, each named EMPLOYEE:. . .EXEC SQLDECLARE IDMATCH CURSOR FORSELECT TESTNO INTO :matchid FROM TEST.EMPLOYEEWHERE TESTNO > 100;EXEC SQLDECLARE EIDMATCH CURSOR FORSELECT EMPNO INTO :empid FROM EMP.EMPLOYEEWHERE EMPNO = :matchid;. . .CHAPTER 3 WORKING WITH DATABASES38 INTERBASE 6IMPORTANTThis use of database handles applies only to embedded SQL applications. DSQL applications cannot access multiple databases simultaneously.4Using handles with operationsIn multi-database programs, database handles must be specified in CONNECT statements to identify which databases among several to open and prepare for use in subsequent transactions.Database handles can also be used with DISCONNECT, COMMIT RELEASE, and ROLLBACKRELEASE to specify a subset of open databases to close.To open and prepare a database with CONNECT, see “Opening a database” on page 41.To close a database with DISCONNECT, COMMIT RELEASE, or ROLLBACK RELEASE, see“Closing a database” on page 49. To learn more about using database handles in transactions, see “Accessing an open database” on page 48.Preprocessing and run time databasesNormally, each SET DATABASE statement specifies a single database file to associate with a handle. When a program is preprocessed, gpre uses the specified file to validate the prog ram?s table and column references. Later, when a user runs the program, the same database file is accessed. Different databases can be specified for preprocessing and run time when necessary.4Using the COMPILETIME clause A program can be designed to run against any one of several identically structured databases. In other cases, the actual database that a program will use at runtime is not available when a program is preprocessed and compiled. In such cases, SET DATABASE can include a COMPILETIME clause to specify a database for gpre to test against during preprocessing. For example, the following SET DATABASE statement declares that employee.gdb is to be used by gpre during preprocessing: EXEC SQLSET DATABASE EMP = COMPILETIME ?employee.gdb?;IMPORTANTThe file specification that follows the COMPILETIME keyword must always be a hard-coded, quoted string.DECLARING A DATABASEEMBEDDED SQL GUIDE 39When SET DATABASE uses the COMPILETIME clause, but no RUNTIME clause, and does not specify a different database file specification in a subsequent CONNECT statement, the same database file is used both for preprocessing and run time. 
To specify different preprocessing and runtime databases with SET DATABASE, use both the COMPILETIME andRUNTIME clauses.4Using the RUNTIME clauseWhen a database file is specified for use during preprocessing, SET DATABASE can specify a different database to use at run time by including the RUNTIME keyword and a runtime file specification:EXEC SQLSET DATABASE EMP = COMPILETIME ?employee.gdb?RUNTIME ?employee2.gdb?;The file specification that follows the RUNTIME keyword can be either ahard-coded, quoted string, or a host-language variable. For example, the following C code fragment prompts the user for a database name, and stores the name in a variable that is used later in SET DATABASE:. . .char db_name[125];. . .printf("Enter the desired database name, including node and path):\n");gets(db_name);EXEC SQLSET DATABASE EMP = COMPILETIME ?employee.gdb?RUNTIME : db_name; . . .Note host-language variables in SET DATABASE must be preceded, as always, by a colon.Controlling SET DATABASE scopeBy default, SET DATABASE creates a handle that is global to all modules in an application.A global handle is one that may be referenced in all host-language modules comprising the program. SET DATABASE provides two optional keywords to change the scope of a declaration:g STATIC limits declaration scope to the module containing the SET DATABASE statement. No other program modules can see or use a database handle declared STATIC.CHAPTER 3 WORKING WITH DATABASES40 INTERBASE 6EXTERN notifies gpre that a SET DATABASE statement in a module duplicates a globally-declared database in another module. If the EXTERN keyword is used, then another module must contain the actual SET DATABASE statement, or an error occurs during compilation.The STATIC keyword is used in a multi-module program to restrict database handle access to the single module where it is declared. The following example illustrates the use of the STATIC keyword:EXEC SQLSET DATABASE EMP = STATIC ?employee.gdb?;The EXTERN keyword is used in a multi-module program to signal that SET DATABASE in one module is not an actual declaration, but refers to a declaration made in a different module. Gpre uses this information during preprocessing. Thefollowing example illustrates the use of the EXTERN keyword: EXEC SQLSET DATABASE EMP = EXTERN ?employee.gdb?;If an application contains an EXTERN reference, then when it is used at run time, the actual SET DATABASE declaration must be processed first, and the database connected before other modules can access it.A single SET DATABASE statement can contain either the STATIC or EXTERN keyword, but not both. A scope declaration in SET DATABASE applies to both COMPILETIME and RUNTIME databases.Specifying a connection character setWhen a client application connects to a database, it may have its own character set requirements. The server providing database access to the client does not know about these requirements unless the client specifies them. The client application specifies its character set requirement using the SET NAMES statement before it connects to the database.SET NAMES specifies the character set the server should use when translating data from the database to the client application. Similarly, when the client sends data to the database, the server translates the data from the client?s character set to the database?s default character set (or the character set for an individual column if it differs from the database?s default character set). 
For example, the followingstatements specify that the client is using the DOS437 character set, then connect to the database:EXEC SQLOPENING A DATABASEEMBEDDED SQL GUIDE 41SET NAMES DOS437;EXEC SQLCONNECT ?europe.gdb? USER ?JAMES? PASSWORD ?U4EEAH?;For more information about character sets, see the Data Definition Guide. For the complete syntax of SET NAMES and CONNECT, see the Language Reference. Opening a database After a database is declared, it must be attached with a CONNECT statement before it can be used. CONNECT:1. Allocates system resources for the database.2. Determines if the database file is local, residing on the same host where the application itself is running, or remote, residing on a different host.3. Opens the database and examines it to make sure it is valid.InterBase provides transparent access to all databases, whether local or remote. If the database structure is invalid, the on-disk structure (ODS) number does not correspond to the one required by InterBase, or if the database is corrupt, InterBase reports an error, and permits no further access. Optionally, CONNECT can be used to specify:4. A user name and password combination that is checked against the server?s security database before allowing the connect to succeed. User names can be up to 31 characters.Passwords are restricted to 8 characters.5. An SQL role name that the user adopts on connection to the database, provided that the user has previously been granted membership in the role. Regardless of role memberships granted, the user belongs to no role unless specified with this ROLE clause.The client can specify at most one role per connection, and cannot switch roles except by reconnecting.6. The size of the database buffer cache to allocate to the application when the default cache size is inappropriate.Using simple CONNECT statementsIn its simplest form, CONNECT requires one or more database parameters, each specifying the name of a database to open. The name of the database can be a: Database handle declared in a previous SET DATABASE statement.CHAPTER 3 WORKING WITH DATABASES42 INTERBASE 61. Host-language variable.2. Hard-coded file name.4Using a database handleIf a program uses SET DATABASE to provide database handles, those handles should be used in subsequent CONNECT statements instead of hard-coded names. For example, . . .EXEC SQLSET DATABASE DB1 = ?employee.gdb?;EXEC SQLSET DATABASE DB2 = ?employee2.gdb?;EXEC SQLCONNECT DB1;EXEC SQLCONNECT DB2;. . .There are several advantages to using a database handle with CONNECT:1. Long file specifications can be replaced by shorter, mnemonic handles.2. Handles can be used to qualify table names in multi-database transactions. DSQL applications do not support multi-database transactions.3. Handles can be reassigned to other databases as needed.4. The number of database cache buffers can be specified as an additional CONNECT parameter.For more information about setting the number of databas e cache buffers, see “Setting database cache buffers” on page 47. 4Using strings or host-language variables Instead of using a database handle, CONNECT can use a database name supplied at run time. The database name can be supplied as either a host-language variable or a hard-coded, quoted string.The following C code demonstrates how a program accessing only a single database might implement CONNECT using a file name solicited from a user at run time:. . .char fname[125];. . 
printf("Enter the desired database name, including node and path):\n");
gets(fname);
. . .
EXEC SQL
CONNECT :fname;
. . .

Tip: This technique is especially useful for programs that are designed to work with many identically structured databases, one at a time, such as CAD/CAM or architectural databases.

MULTIPLE DATABASE IMPLEMENTATION
To use a database specified by the user as a host-language variable in a CONNECT statement in multi-database programs, follow these steps:
1. Declare a database handle using the following SET DATABASE syntax:

English Equivalents of Chinese References


In an academic paper, the reference list is a very important part: it adds to the paper's credibility and scholarly quality, and it may include both Chinese and English sources.

Below are some common examples of references in Chinese with their English equivalents:
1. Book
Chinese: 王小明. 计算机网络技术. 北京:清华大学出版社,2018.
English: Wang, X. Computer Network Technology. Beijing: Tsinghua University Press, 2018.
2. Article in an Academic Journal
Chinese: 张婷婷,李伟. 基于深度学习的影像分割方法. 计算机科学与探索,2019,13(1):61-67.
English: Zhang, T. T., Li, W. Image Segmentation Method Based on Deep Learning. Computer Science and Exploration, 2019, 13(1): 61-67.
3. Conference Paper
Chinese: 王维,李丽. 基于云计算的智慧物流管理系统设计. 2019年国际物流与采购会议论文集,2019:112-117.
English: Wang, W., Li, L. Design of a Smart Logistics Management System Based on Cloud Computing. Proceedings of the 2019 International Conference on Logistics and Procurement, 2019: 112-117.
4. Thesis/Dissertation
Chinese: 李晓华. 基于模糊神经网络的水质评价模型研究. 博士学位论文,长春:吉林大学,2018.
English: Li, X. H. Research on a Water Quality Evaluation Model Based on Fuzzy Neural Networks. Doctoral Dissertation, Changchun: Jilin University, 2018.
5. Report
Chinese: 国家统计局. 2019年国民经济和社会发展统计公报. 北京:中国统计出版社,2019.
English: National Bureau of Statistics. Statistical Communique of the People's Republic of China on the 2019 National Economic and Social Development. Beijing: China Statistics Press, 2019.
These are some common Chinese references with their English equivalents; I hope they help with your writing. A small formatting sketch follows below.
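For writers who want to produce the English form of a reference programmatically, the sketch below assembles the book format used in example 1. It is an illustrative Python fragment based only on the pattern shown above; the field names are assumptions, not a citation standard.

def format_book_en(surname, initials, title, city, publisher, year):
    # Pattern from example 1: Surname, Initials. Title. City: Publisher, Year.
    return f"{surname}, {initials}. {title}. {city}: {publisher}, {year}."

print(format_book_en("Wang", "X.", "Computer Network Technology",
                     "Beijing", "Tsinghua University Press", 2018))
# Wang, X. Computer Network Technology. Beijing: Tsinghua University Press, 2018.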

Database Systems (English Literature)


Database Systems1. Fundamental Concepts of DatabaseDatabase and database technology are having a major impact on the growing use of computers. It is fair to say that database will play a critical role in almost all areas where computers are used, including business, engineering, medicine, law, education, and library science, to name a few. The word "database" is in such common use that we must begin by defining what a database is. Our initial definition is quit general.A database is a collection of related data. By data, we mean known facts that can be recorded and that have implicit meaning. For example, consider the names, telephone numbers, and addresses of all the people you know. Y ou may have recorded this data in an indexed address book, or you may have stored it on a diskette using a personal computer and software such as DBASE III or Lotus 1-2-3. This is a collection of related data with an implic it meaning and hence is a database.The above definition of database is quite general; for example, we may consider the collection of words that make up thispage of text to be related data and hence a database. However, the common use of the term database is usually more restricted.A database has the following implicit properties:.A database is a logically coherent collection of data with some inherent meaning. A random assortment of data cannot bereferred to as a database..A database is designed, built, and populated with data for a specific purpose. It has an intended group of users and somepreconceived applications in which these users are interested..A database represents some aspect of the real world, sometimes called the mini world. Changes to the mini world are reflected in the database.In other words, a database has some source from which data are derived, some degree of interaction with events in the real world, and an audience that is actively interested in the contents of the database.A database can be of any size and of varying complexity. For example, the list of names and addresses referred to earlier may have only a couple of hundred records in it, each with asimple structure. On the other hand, the card catalog of a large library may contain half a million cards stored under different categories-by primary author’s last name, by subject, by book title, and the like-with each category organized in alphabetic order. A database of even greater size and complexity may be that maintained by the Internal Revenue Service to keep track of the tax forms filed by taxpayers of the United States. If we assume that there are 100million taxpayers and each taxpayer files an average of five forms with approximately 200 characters of information per form, we would get a database of 100*(106)*200*5 characters(bytes) of information. Assuming the IRS keeps the past three returns for each taxpayer in addition to the current return, we would get a database of 4*(1011) bytes. This huge amount of information must somehow be organized and managed so that users can search for, retrieve, and update the data as needed.A database may be generated and maintained manually or by machine. Of course, in this we are mainly interested in computerized database. The library card catalog is an example of a database that may be manually created and maintained. 
A computerized database may be created and maintained either by a group of application programs written specifically for that task or by a database management system.A data base management system (DBMS) is a collection of programs that enables users to create and maintain a database. The DBMS is hence a general-purpose software system that facilitates the processes of defining, constructing, and manipulating databases for various applications. Defining a database involves specifying the types of data to be stored in the database, along with a detailed description of each type of data. Constructing the database is the process of storing the data itself on some storage medium that is controlled by the DBMS. Manipulating a database includes such functions as querying the database to retrieve specific data, updating the database to reflect changes in the mini world, and generating reports from the data.Note that it is not necessary to use general-purpose DBMS software for implementing a computerized database. We could write our own set of programs to create and maintain the database, in effect creating our own special-purpose DBMS software. In either case-whether we use a general-purpose DBMS or not-we usually have a considerable amount of software to manipulate the database in addition to the database itself. The database and software are together called a database system.2. Data ModelsOne of the fundamental characteristics of the database approach is that it provides some level of data abstraction by hiding details of data storage that are not needed by most database users. A data model is the main tool for providing this abstraction. A data is a set of concepts that can beused to describe the structure of a database. By structure of a database, we mean the data types, relationships, and constraints that should hold on the data. Most data models also include a set of operations for specifying retrievals and updates on the database.Categories of Data ModelsMany data models have been proposed. We can categorize data models based on the types of concepts they provide to describe the database structure. High-level or conceptual data models provide concepts that are close to the way many users perceive data, whereas low-level or physical data models provide concepts that describe the details of how data is stored in the computer. Concepts provided by low-level data models are generally meant for computer specialists, not for typical end users. Between these two extremes is a class of implementation data models, which provide concepts that may be understood by end users but that are not too far removed from the way data is organized within the computer. Implementation data models hide some details of data storage but can be implemented on a computer system in a direct way.High-level data models use concepts such as entities, attributes, and relationships. An entity is an object that is represented in the database. An attribute is a property that describes some aspect of an object. Relationships among objects are easily represented in high-level data models, which are sometimes called object-based models because they mainly describe objects and their interrelationships.Implementation data models are the ones used most frequently in current commerc ial DBMSs and include the three most widely used data models-relational, network, and hierarchical. 
They represent data using record structures and hence are sometimes called record-based data modes.Physical data models describe how data is stored in the computer by representing information such as record formats, record orderings, and access paths. An access path is a structure that makes the search for particular database records much faster.3. Classification of Database Management SystemsThe main criterion used to classify DBMSs is the data model on which the DBMS is based. The data models used most often in current commercial DBMSs are the relational, network, and hierarchical models. Some recent DBMSs are based on conceptual or object-oriented models. We will categorize DBMSs as relational, hierarchical, and others.Another criterion used to classify DBMSs is the number of users supported by the DBMS. Single-user systems support only one user at a time and are mostly used with personal computer. Multiuser systems include the majority of DBMSs and support many users concurrently.A third criterion is the number of sites over which the database is distributed. Most DBMSs are centralized, meaning that their data is stored at a single computer site. A centralized DBMS can support multiple users, but the DBMS and database themselves reside totally at a single computer site. A distributed DBMS (DDBMS) can have the actual database and DBMS software distributed over many sites connected by a computer network. Homogeneous DDBMSs use the same DBMS software at multiple sites. A recent trend is to develop software to access several autonomous preexisting database stored under heterogeneous DBMSs. This leads to a federated DBMS (or multidatabase system),, where the participating DBMSs are loosely coupled and have a degree of local autonomy.We can also classify a DBMS on the basis of the types of access paty options available for storing files. One well-known family of DBMSs is based on inverted file structures. Finally, a DBMS can be general purpose of special purpose. When performance is a prime consideration, a special-purpose DBMS can be designed and built for a specific application and cannot be used for other applications, Many airline reservations and telephone directory systems are special-purpose DBMSs.Let us briefly discuss the main criterion for classifying DBMSs: the data mode. The relational data model represents a database as a collection of tables, which look like files. Mos t relational databases have high-level query languages and support a limited form of user views.The network model represents data as record types and also represents a limited type of 1:N relationship, called a set type. The network model, also known as the CODASYL DBTG model, has an associated record-at-a-time language that must be embedded in a host programming language.The hierarchical model represents data as hierarchical tree structures. Each hierarchy represents a number of related records. There is no standard language for the hierarchical model, although most hierarchical DBMSs have record-at-a-time languages.4. Client-Server ArchitectureMany varieties of modern software use a client-server architecture, in which requests by one process (the client) are sent to another process (the server) for execution. Database systems are no exception. In the simplest client/server architecture, the entire DBMS is a server, except for the query interfaces that interact with the user and send queries or other commands across to the server. 
For example, relational systems generally use the SQL language for representing requests from the client to the server. The database server then sends the answer, in the form of a table or relation, back to the client. The division of work between client and server can vary, but there is a tendency to put more work in the client, since the server will become a bottleneck if there are many simultaneous database users.
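To illustrate the division of labour described in the client-server discussion above, here is a deliberately simplified Python sketch in which the "server" side owns the DBMS (SQLite here, standing in for any relational engine) and the "client" side only sends SQL text and receives rows. Real systems add networking, authentication and concurrency control; none of that is shown, and the table and data are invented for the example.

import sqlite3

# Server side: owns the database and executes the requests it receives.
class ToyServer:
    def __init__(self):
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE accounts (id INTEGER, balance REAL)")
        self.db.execute("INSERT INTO accounts VALUES (1, 120.0), (2, 75.5)")

    def handle_request(self, sql):
        # The answer goes back to the client as a list of rows (a relation).
        return self.db.execute(sql).fetchall()

# Client side: formulates SQL and displays the result; it never touches files.
server = ToyServer()
rows = server.handle_request("SELECT id, balance FROM accounts WHERE balance > 100")
print(rows)   # [(1, 120.0)]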

Bank Financial Data Analysis: Foreign Literature with Chinese-English Translation


银行金融数据分析中英文对照外文翻译文献(文档含英文原文和中文翻译)Banks analysis of financial dataAbstractA stochastic analysis of financial data is presented. In particular we investigate how the statistics of log returns change with different time delays t. The scale-dependent behaviour of financial data can be divided into two regions. The first time range, the small-timescale region (in the range of seconds) seems to be characterised by universal features. The second time range, the medium-timescale range from several minutes upwards can be characterised by a cascade process, which is given by a stochastic Markov process in the scale τ. A corresponding Fokker–Planck equation can be extracted from given data and provides a non-equilibrium thermodynamical description of the complexity of financial data.Keywords:Banks; Financial markets; Stochastic processes;Fokker–Planck equation1.IntroductionFinancial statements for banks present a different analytical problem than manufacturing and service companies. As a result, analysis of a bank’s financial statements requires a distinct approach that recognizes a bank’s somewhat unique risks.Banks take deposits from savers, paying interest on some of these accounts. They pass these funds on to borrowers, receiving interest on the loans. Their profits are derived from the spread between the rate they pay forfunds and the rate they receive from borrowers. This ability to pool deposits from many sources that can be lent to many different borrowers creates the flow of funds inherent in the banking system. By managing this flow of funds, banks generate profits, acting as the intermediary of interest paid and interest received and taking on the risks of offering credit.2. Small-scale analysisBanking is a highly leveraged business requiring regulators to dictate minimal capital levels to help ensure the solvency of each bank and the banking system. In the US, a bank’s primary regulator could be the Federal Reserve Board, the Office of the Comptroller of the Currency, the Office of Thrift Supervision or any one of 50 state regulatory bodies, depending on the charter of the bank. Within the Federal Reserve Board, there are 12 districts with 12 different regulatory staffing groups. These regulators focus on compliance with certain requirements, restrictions and guidelines, aiming to uphold the soundness and integrity of the banking system.As one of the most highly regulated banking industries in the world, investors have some level of assurance in the soundness of the banking system. As a result, investors can focus most of their efforts on how a bank will perform in different economic environments.Below is a sample income statement and balance sheet for a large bank. The first thing to notice is that the line items in the statements are not the same as your typical manufacturing or service firm. Instead, there are entries that represent interest earned or expensed as well as deposits and loans.As financial intermediaries, banks assume two primary types of risk as they manage the flow of money through their business. Interest rate risk is the management of the spread between interest paid on deposits and received on loans over time. Credit risk is the likelihood that a borrower will default onits loan or lease, causing the bank to lose any potential interest earned as wellas the principal that was loaned to the borrower. As investors, these are the primary elements that need to be understood when analyzing a bank’s financial statement.3. 
Medium scale analysisThe primary business of a bank is managing the spread between deposits. Basically when the interest that a bank earns from loans is greater than the interest it must pay on deposits, it generates a positive interest spread or net interest income. The size of this spread is a major determinant of the profit generated by a bank. This interest rate risk is primarily determined by the shape of the yield curve.As a result, net interest income will vary, due to differences in the timing of accrual changes and changing rate and yield curve relationships. Changes in the general level of market interest rates also may cause changes in the volume and mix of a bank’s balance sheet products. For example, when economic activity continues to expand while interest rates are rising, commercial loan demand may increase while residential mortgage loan growth and prepayments slow.Banks, in the normal course of business, assume financial risk by making loans at interest rates that differ from rates paid on deposits. Deposits often have shorter maturities than loans. The result is a balance sheet mismatch between assets (loans) and liabilities (deposits). An upward sloping yield curve is favorable to a bank as the bulk of its deposits are short term and their loans are longer term. This mismatch of maturities generates the net interest revenue banks enjoy. When the yield curve flattens, this mismatch causes net interest revenue to diminish.4.Even in a business using Six Sigma® methodology. an “optimal” level of working capital management needs to beidentified.The table below ties together the bank’s balance sheet with the income statement and displays the yield generated from earning assets and interest bearing deposits. Most banks provide this type of table in their annual reports. The following table represents the same bank as in the previous examples: First of all, the balance sheet is an average balance for the line item, rather than the balance at the end of the period. Average balances provide a better analytical frame work to help understand the bank’s financial performance. Notice that for each average balance item there is a correspondinginterest-related income, or expense item, and the average yield for the time period. It also demonstrates the impact a flattening yield curve can have on a bank’s net interest income.The best place to start is with the net interest income line item. The bank experienced lower net interest income even though it had grown average balances. To help understand how this occurred, look at the yield achieved on total earning assets. For the current period ,it is actually higher than the prior period. Then examine the yield on the interest-bearing assets. It is substantially higher in the current period, causing higher interest-generating expenses. This discrepancy in the performance of the bank is due to the flattening of the yield curve.As the yield curve flattens, the interest rate the bank pays on shorter term deposits tends to increase faster than the rates it can earn from its loans. This causes the net interest income line to narrow, as shown above. One way banks try o overcome the impact of the flattening of the yield curve is to increase the fees they charge for services. As these fees become a larger portion of the bank’s inco me, it becomes less dependent on net interest income to drive earnings.Changes in the general level of interest rates may affect the volume ofcertain types of banking activities that generate fee-related income. 
For example, the volume of residential mortgage loan originations typically declines as interest rates rise, resulting in lower originating fees. In contrast, mortgage servicing pools often face slower prepayments when rates are rising, since borrowers are less likely to refinance. Ad a result, fee income and associated economic value arising from mortgage servicing-related businesses may increase or remain stable in periods of moderately rising interest rates.When analyzing a bank you should also consider how interest rate risk may act jointly with other risks facing the bank. For example, in a rising rate environment, loan customers may not be able to meet interest payments because of the increase in the size of the payment or reduction in earnings. The result will be a higher level of problem loans. An increase in interest rate is exposes a bank with a significant concentration in adjustable rate loans to credit risk. For a bank that is predominately funded with short-term liabilities, a rise in rates may decrease net interest income at the same time credit quality problems are on the increase.5.Related LiteratureThe importance of working capital management is not new to the finance literature. Over twenty years ago. Largay and Stickney (1980) reported that the then-recent bankruptcy of W.T. Grant. a nationwide chain of department stores. should have been anticipated because the corporation had been running a deficit cash flow from operations for eight of the last ten years of its corporate life. As part of a study of the Fortune 500’s financ ial management practices. Gilbert and Reichert (1995) find that accounts receivable management models are used in 59 percent of these firms to improve working capital projects. while inventory management models were used in 60 percent of the companies. More recently. Farragher. Kleiman andSahu (1999) find that 55 percent of firms in the S&P Industrial index complete some form of a cash flow assessment. but did not present insights regarding accounts receivable and inventory management. or the variations of any current asset accounts or liability accounts across industries. Thus. mixed evidence exists concerning the use of working capital management techniques.Theoretical determination of optimal trade credit limits are the subject of many articles over the years (e.g.. Schwartz 1974; Scherr 1996). with scant attention paid to actual accounts receivable management. Across a limited sample. Weinraub and Visscher (1998) observe a tendency of firms with low levels of current ratios to also have low levels of current liabilities. Simultaneously investigating accounts receivable and payable issues. Hill. Sartoris. and Ferguson (1984) find differences in the way payment dates are defined. Payees define the date of payment as the date payment is received. while payors view payment as the postmark date. Additional WCM insight across firms. industries. and time can add to this body of research.Maness and Zietlow (2002. 51. 496) presents two models of value creation that incorporate effective short-term financial management activities. However. these models are generic models and do not consider unique firm or industry influences. Maness and Zietlow discuss industry influences in a short paragraph that includes the observation that. “An industry a company is located i n may have more influence on that company’s fortunes than overall GNP” (2002. 507). In fact. 
a careful review of this 627-page textbook finds only sporadic information on actual firm levels of WCM dimensions, virtually nothing on industry factors except for some boxed items with titles such as "Should a Retailer Offer an In-House Credit Card" (128), and nothing on WCM stability over time. This research will attempt to fill this void by investigating patterns related to working capital measures within industries and illustrating differences between industries across time.
An extensive survey of library and Internet resources provided very few recent reports about working capital management. The most relevant set of articles was Weisel and Bradley's (2003) article on cash flow management and one on inventory control as a result of effective supply chain management by Hadley (2004).
6. Research Method
The CFO Rankings
The first annual CFO Working Capital Survey, a joint project with REL Consultancy Group, was published in the June 1997 issue of CFO (Mintz and Lezere 1997). REL is a London, England-based management consulting firm specializing in working capital issues for its global list of clients. The original survey reports several working capital benchmarks for public companies using data for 1996. Each company is ranked against its peers and also against the entire field of 1,000 companies. REL continues to update the original information on an annual basis.
REL uses the "cash flow from operations" value located on firm cash flow statements to estimate cash conversion efficiency (CCE). This value indicates how well a company transforms revenues into cash flow. A "days of working capital" (DWC) value is based on the dollar amount in each of the aggregate, equally-weighted receivables, inventory, and payables accounts. The days of working capital represents the time period between the purchase of inventory on account from a vendor, the sale to the customer, the collection of the receivables, and payment receipt. Thus, it reflects the company's ability to finance its core operations with vendor credit. A detailed investigation of WCM is possible because CFO also provides firm and industry values for days sales outstanding (A/R), inventory turnover, and days payables outstanding (A/P).
7. Research Findings
Average and Annual Working Capital Management Performance
Working capital management component definitions and average values cover the entire 1996-2000 period. Across the nearly 1,000 firms in the survey, cash flow from operations, defined as cash flow from operations divided by sales and referred to as "cash conversion efficiency" (CCE), averages 9.0 percent. Incorporating a 95 percent confidence interval, CCE ranges from 5.6 percent to 12.4 percent. The days working capital (DWC), defined as the sum of receivables and inventories less payables divided by daily sales, averages 51.8 days and is very similar to the days that sales are outstanding (50.6), because the inventory turnover rate (once every 32.0 days) is similar to the number of days that payables are outstanding (32.4 days). In all instances, the standard deviation is relatively small, suggesting that these working capital management variables are consistent across CFO reports.
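The two survey metrics just defined reduce to simple ratios. The sketch below uses invented firm figures, not survey data, to show how CCE and DWC are computed:

```python
def cash_conversion_efficiency(cash_flow_from_operations, sales):
    """CCE: how well revenue is transformed into operating cash flow."""
    return cash_flow_from_operations / sales

def days_working_capital(receivables, inventory, payables, sales, days=365):
    """DWC: receivables plus inventory less payables, expressed in days of sales."""
    daily_sales = sales / days
    return (receivables + inventory - payables) / daily_sales

sales, cfo = 1_200.0, 108.0                              # hypothetical annual figures
receivables, inventory, payables = 166.0, 105.0, 107.0

print(f"CCE = {cash_conversion_efficiency(cfo, sales):.1%}")                              # 9.0%
print(f"DWC = {days_working_capital(receivables, inventory, payables, sales):.1f} days")  # ~49.9
```

With these figures the hypothetical firm sits close to the survey averages of 9.0 percent CCE and roughly 50 days of working capital.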
8. Industry Rankings on Overall Working Capital Management Performance
CFO magazine provides an overall working capital ranking for firms in its survey, using an overall ranking equation (not reproduced here). Industry-based differences in overall working capital management are presented for the twenty-six industries that had at least eight companies included in the rankings each year. In the typical year, CFO magazine ranks 970 companies during this period. Industries are listed in order of the mean overall CFO ranking of working capital performance. Since the best average ranking possible for an eight-company industry is 4.5 (this assumes that the eight companies are ranked one through eight for the entire survey), it is quite obvious that all firms in the petroleum industry must have been receiving very high overall working capital management rankings. In fact, the petroleum industry is ranked first in CCE and third in DWC (as illustrated in Table 5 and discussed later in this paper). Furthermore, the petroleum industry had the lowest standard deviation of working capital rankings and the narrowest range of working capital rankings. The only other industry with a mean overall ranking less than 100 was the Electric & Gas Utility industry, which ranked second in CCE and fourth in DWC. The two industries with the worst working capital rankings were Textiles and Apparel. Textiles rank twenty-second in CCE and twenty-sixth in DWC. The apparel industry ranks twenty-third and twenty-fourth in the two working capital measures.
9. Results for Bayer data
The Kramers-Moyal coefficients were calculated according to Eqs. (5) and (6). The timescale was divided into half-open intervals, assuming that the Kramers-Moyal coefficients are constant with respect to the timescale τ in each of these subintervals. The smallest timescale considered was 240 s, and all larger scales were chosen such that τ_i = 0.9 τ_(i+1). The Kramers-Moyal coefficients themselves were then parameterised in a simple functional form. This result shows that the rich and complex structure of financial data, expressed by multi-scale statistics, can be pinned down to coefficients with a relatively simple functional form.
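Since Eqs. (5) and (6) are not reproduced in this excerpt, the following is only a schematic sketch of how the first two Kramers-Moyal coefficients can be approximated from conditional moments of nested log-return increments; the function name, the binning, the per-bin sample threshold and the example scale values are all assumptions, not the paper's actual estimator.

```python
import numpy as np

def km_coefficients(log_price, tau_large, tau_small, n_bins=20):
    """
    Schematic estimate of the first two Kramers-Moyal coefficients in scale.
    x  : log-return over the larger scale tau_large,
    x' : log-return over the nested smaller scale tau_small (same start time),
    dx : x' - x, conditioned on x, with dtau = tau_large - tau_small.
    D1(x) ~ <dx | x> / dtau,   D2(x) ~ <dx**2 | x> / (2 * dtau).
    """
    n = len(log_price) - tau_large
    x = log_price[tau_large:] - log_price[:-tau_large]          # increments at the larger scale
    x_nested = log_price[tau_small:tau_small + n] - log_price[:n]  # nested smaller-scale increments
    dx = x_nested - x
    dtau = tau_large - tau_small

    bins = np.linspace(x.min(), x.max(), n_bins + 1)
    idx = np.digitize(x, bins) - 1
    d1 = np.full(n_bins, np.nan)
    d2 = np.full(n_bins, np.nan)
    for b in range(n_bins):
        sel = idx == b
        if sel.sum() > 50:                   # require enough samples per bin
            d1[b] = dx[sel].mean() / dtau
            d2[b] = (dx[sel] ** 2).mean() / (2 * dtau)
    centers = 0.5 * (bins[:-1] + bins[1:])
    return centers, d1, d2

# Nested scales chosen as in the text, tau_i = 0.9 * tau_(i+1), smallest 240 s;
# with 1-s sampled prices this would be, e.g., km_coefficients(np.log(prices), 267, 240).
```

Fitting the resulting D1 and D2 curves over the bins, scale by scale, is what the text means by pinning the multi-scale statistics down to coefficients with a simple functional form.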
10. Discussion
Credit risk is most simply defined as the potential that a bank borrower or counterparty will fail to meet its obligations in accordance with agreed terms. When this happens, the bank will experience a loss of some or all of the credit it provided to its customer. To absorb these losses, banks maintain an allowance for loan and lease losses. In essence, this allowance can be viewed as a pool of capital specifically set aside to absorb estimated loan losses. This allowance should be maintained at a level that is adequate to absorb the estimated amount of probable losses in the institution's loan portfolio.
A careful review of a bank's financial statements can highlight the key factors that should be considered before making a trading or investing decision. Investors need to have a good understanding of the business cycle and the yield curve; both have a major impact on the economic performance of banks. Interest rate risk and credit risk are the primary factors to consider, as a bank's financial performance follows the yield curve. When it flattens or becomes inverted, a bank's net interest revenue is put under greater pressure. When the yield curve returns to a more traditional shape, a bank's net interest revenue usually improves. Credit risk can be the largest contributor to the negative performance of a bank, even causing it to lose money. In addition, management of credit risk is a subjective process that can be manipulated in the short term. Investors in banks need to be aware of these factors before they commit their capital.
Financial Data Analysis of Banks
Abstract: A stochastic analysis of financial data has been presented; in particular, we examine how to characterise statistically the variation of returns recorded over different time lags τ.

Big Data: Translated Foreign Literature

(The document contains the English original and a Chinese translation.) Original text:
What is Data Mining?
Many people treat data mining as a synonym for another popularly used term, "Knowledge Discovery in Databases", or KDD. Alternatively, others view data mining as simply an essential step in the process of knowledge discovery in databases. Knowledge discovery consists of an iterative sequence of the following steps:
· data cleaning: to remove noise or irrelevant data,
· data integration: where multiple data sources may be combined,
· data selection: where data relevant to the analysis task are retrieved from the database,
· data transformation: where data are transformed or consolidated into forms appropriate for mining by performing summary or aggregation operations, for instance,
· data mining: an essential process where intelligent methods are applied in order to extract data patterns,
· pattern evaluation: to identify the truly interesting patterns representing knowledge based on some interestingness measures, and
· knowledge presentation: where visualization and knowledge representation techniques are used to present the mined knowledge to the user.
(A minimal pipeline sketch of these steps appears at the end of this excerpt.) The data mining step may interact with the user or a knowledge base. The interesting patterns are presented to the user, and may be stored as new knowledge in the knowledge base. Note that according to this view, data mining is only one step in the entire process, albeit an essential one since it uncovers hidden patterns for evaluation.
We agree that data mining is a knowledge discovery process. However, in industry, in media, and in the database research milieu, the term "data mining" is becoming more popular than the longer term "knowledge discovery in databases". Therefore, in this book, we choose to use the term "data mining". We adopt a broad view of data mining functionality: data mining is the process of discovering interesting knowledge from large amounts of data stored either in databases, data warehouses, or other information repositories.
Based on this view, the architecture of a typical data mining system may have the following major components:
1. Database, data warehouse, or other information repository. This is one or a set of databases, data warehouses, spreadsheets, or other kinds of information repositories. Data cleaning and data integration techniques may be performed on the data.
2. Database or data warehouse server. The database or data warehouse server is responsible for fetching the relevant data, based on the user's data mining request.
3. Knowledge base. This is the domain knowledge that is used to guide the search, or evaluate the interestingness of resulting patterns. Such knowledge can include concept hierarchies, used to organize attributes or attribute values into different levels of abstraction. Knowledge such as user beliefs, which can be used to assess a pattern's interestingness based on its unexpectedness, may also be included. Other examples of domain knowledge are additional interestingness constraints or thresholds, and metadata (e.g., describing data from multiple heterogeneous sources).
4. Data mining engine. This is essential to the data mining system and ideally consists of a set of functional modules for tasks such as characterization, association analysis, classification, evolution and deviation analysis.
5. Pattern evaluation module. This component typically employs interestingness measures and interacts with the data mining modules so as to focus the search towards interesting patterns. It may access interestingness thresholds stored in the knowledge base.
Alternatively, the pattern evaluation module may be integrated with the mining module, depending on the implementation of the data mining method used. For efficient data mining, it is highly recommended to push the evaluation of pattern interestingness as deep as possible into the mining process so as to confine the search to only the interesting patterns.
6. Graphical user interface. This module communicates between users and the data mining system, allowing the user to interact with the system by specifying a data mining query or task, providing information to help focus the search, and performing exploratory data mining based on the intermediate data mining results. In addition, this component allows the user to browse database and data warehouse schemas or data structures, evaluate mined patterns, and visualize the patterns in different forms.
From a data warehouse perspective, data mining can be viewed as an advanced stage of on-line analytical processing (OLAP). However, data mining goes far beyond the narrow scope of summarization-style analytical processing of data warehouse systems by incorporating more advanced techniques for data understanding.
While there may be many "data mining systems" on the market, not all of them can perform true data mining. A data analysis system that does not handle large amounts of data can at most be categorized as a machine learning system, a statistical data analysis tool, or an experimental system prototype. A system that can only perform data or information retrieval, including finding aggregate values, or that performs deductive query answering in large databases should be more appropriately categorized as either a database system, an information retrieval system, or a deductive database system.
Data mining involves an integration of techniques from multiple disciplines such as database technology, statistics, machine learning, high performance computing, pattern recognition, neural networks, data visualization, information retrieval, image and signal processing, and spatial data analysis. We adopt a database perspective in our presentation of data mining in this book. That is, emphasis is placed on efficient and scalable data mining techniques for large databases. By performing data mining, interesting knowledge, regularities, or high-level information can be extracted from databases and viewed or browsed from different angles. The discovered knowledge can be applied to decision making, process control, information management, query processing, and so on. Therefore, data mining is considered one of the most important frontiers in database systems and one of the most promising new database applications in the information industry.
A classification of data mining systems
Data mining is an interdisciplinary field, the confluence of a set of disciplines, including database systems, statistics, machine learning, visualization, and information science. Moreover, depending on the data mining approach used, techniques from other disciplines may be applied, such as neural networks, fuzzy and/or rough set theory, knowledge representation, inductive logic programming, or high performance computing.
Depending on the kinds of data to be mined or on the given data mining application, the data mining system may also integrate techniques from spatial data analysis, information retrieval, pattern recognition, image analysis, signal processing, computer graphics, Web technology, economics, or psychology.
Because of the diversity of disciplines contributing to data mining, data mining research is expected to generate a large variety of data mining systems. Therefore, it is necessary to provide a clear classification of data mining systems. Such a classification may help potential users distinguish data mining systems and identify those that best match their needs. Data mining systems can be categorized according to various criteria, as follows.
1) Classification according to the kinds of databases mined. A data mining system can be classified according to the kinds of databases mined. Database systems themselves can be classified according to different criteria (such as data models, or the types of data or applications involved), each of which may require its own data mining technique. Data mining systems can therefore be classified accordingly. For instance, if classifying according to data models, we may have a relational, transactional, object-oriented, object-relational, or data warehouse mining system. If classifying according to the special types of data handled, we may have a spatial, time-series, text, or multimedia data mining system, or a World-Wide Web mining system. Other system types include heterogeneous data mining systems and legacy data mining systems.
2) Classification according to the kinds of knowledge mined. Data mining systems can be categorized according to the kinds of knowledge they mine, i.e., based on data mining functionalities such as characterization, discrimination, association, classification, clustering, trend and evolution analysis, deviation analysis, similarity analysis, etc. A comprehensive data mining system usually provides multiple and/or integrated data mining functionalities. Moreover, data mining systems can also be distinguished based on the granularity or levels of abstraction of the knowledge mined, including generalized knowledge (at a high level of abstraction), primitive-level knowledge (at a raw data level), or knowledge at multiple levels (considering several levels of abstraction). An advanced data mining system should facilitate the discovery of knowledge at multiple levels of abstraction.
3) Classification according to the kinds of techniques utilized. Data mining systems can also be categorized according to the underlying data mining techniques employed. These techniques can be described according to the degree of user interaction involved (e.g., autonomous systems, interactive exploratory systems, query-driven systems), or the methods of data analysis employed (e.g., database-oriented or data warehouse-oriented techniques, machine learning, statistics, visualization, pattern recognition, neural networks, and so on). A sophisticated data mining system will often adopt multiple data mining techniques or work out an effective, integrated technique which combines the merits of a few individual approaches.
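As promised above, here is a minimal, hypothetical illustration of the knowledge-discovery sequence listed at the start of this excerpt (cleaning, integration, selection and transformation, mining, pattern evaluation, presentation). The record layout and thresholds are invented, and a simple frequency count stands in for a real mining algorithm:

```python
from collections import Counter

def clean(records):
    """Data cleaning: drop records with missing or noisy fields."""
    return [r for r in records if r.get("item") and r.get("amount", 0) > 0]

def integrate(*sources):
    """Data integration: combine several data sources."""
    return [r for src in sources for r in src]

def select_and_transform(records):
    """Selection + transformation: keep only the attribute relevant to the task."""
    return [r["item"] for r in records]

def mine(items, min_support=2):
    """Mining: extract frequent items as a stand-in for pattern discovery."""
    counts = Counter(items)
    return {item: n for item, n in counts.items() if n >= min_support}

def evaluate(patterns, interesting=lambda item, n: n >= 3):
    """Pattern evaluation: keep only patterns judged interesting."""
    return {item: n for item, n in patterns.items() if interesting(item, n)}

store_a = [{"item": "milk", "amount": 2}, {"item": "bread", "amount": 1},
           {"item": "milk", "amount": 1}, {"item": None, "amount": 3}]
store_b = [{"item": "milk", "amount": 4}, {"item": "bread", "amount": 2},
           {"item": "eggs", "amount": 0}]

data = select_and_transform(clean(integrate(store_a, store_b)))
print(evaluate(mine(data)))   # knowledge presentation: {'milk': 3}
```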

Data Analysis: Foreign Literature with Translation

Literature 1: "The Application of Data Analysis in Business Decision-making". This paper discusses the importance and application of data analysis in business decision-making. It finds that data analysis can yield accurate business intelligence and help enterprises better understand market trends and consumer demand. By analysing large volumes of data, enterprises can uncover hidden patterns and correlations and thereby formulate more competitive product and service strategies. Data analysis can also provide decision support, helping enterprises make sound decisions in uncertain environments. Data analysis has therefore become one of the key elements of success for the modern enterprise.
Literature 2: "The Application of Machine Learning in Data Analysis". This paper discusses the application of machine learning in data analysis. It finds that machine learning can help enterprises analyse large amounts of data more efficiently and discover valuable information in it. Machine learning algorithms can learn and improve automatically, helping enterprises find patterns and trends in their data. Through the application of machine learning, enterprises can predict market demand more accurately, optimise business processes, and make more strategic decisions. The application of machine learning in data analysis is therefore gradually attracting the attention and adoption of enterprises.
Literature 3: "The Application of Data Visualization in Data Analysis". This paper discusses the importance and application of data visualization in data analysis. It finds that data visualization can present complex data relationships and trends more intuitively, helping enterprises understand their data better and discover patterns and regularities in it. Data visualization can also support data interaction and shared decision-making, improving the efficiency and accuracy of decisions. Data visualization therefore plays a very important role in data analysis.
Translated titles: Literature 1, "The Application of Data Analysis in Business Decision-making"; Literature 2, "The Application of Machine Learning in Data Analysis"; Literature 3, "The Application of Data Visualization in Data Analysis". Translated abstract: these papers study the application of data analysis in business decision-making and the roles of machine learning and data visualization in data analysis.

Intelligent Control Systems: Translated Foreign Literature for a Graduation Thesis (Chinese-English)

Appendix 1: Foreign-language abstract
The Development and Application of Intelligent Control Systems
Modern electronic products change rapidly and have an increasingly profound impact on people's lives, bringing ever greater convenience to the way we live and work. Electronic products can be found in every aspect of daily life, and the single-chip microcomputer (MCU), as one of their most important applications, plays an inestimable role in many areas. Intelligent control is one such MCU application, and its applications and prospects are very broad. Using modern technical tools to develop an intelligent and relatively complete software realisation of an intelligent control system has become a pressing task. Especially in today's era of MCU-based intelligent control technology, building one's own practical control system has far-reaching significance, and working through this subject is a great help towards a fuller understanding of the MCU later on.
So-called intelligent monitoring technology means "the automatic analysis and processing of information about the monitored device". If the monitored object is regarded as one's field of vision, then the intelligent monitoring equipment can be regarded as the human brain. Intelligent monitoring draws on the powerful data-processing capacity of the computer to analyse the information contained in massive amounts of data, filter out irrelevant information, and provide only the key information. Intelligent control, built on a digital and intelligent basis, detects abnormal conditions in the system in a timely fashion, can raise an alarm in the fastest and best way, and provides useful information, which can more effectively assist security personnel in dealing with a crisis and minimise damage and loss. It therefore has great practical significance. Some hazardous jobs, or operations that cannot be completed manually, can be realised with intelligent devices, which solves many problems that cannot be solved by hand. With the development of society, intelligent systems will play an important role in all aspects of social life.
A control and monitoring system with a single-chip microcomputer at its core differs fundamentally from a traditional control system in structure, design philosophy and design method. In a traditional control or monitoring system, the circuit controls or monitors the parameters directly, regulating the monitored parameters through mechanical devices. In an MCU-based control system, the controlling and controlled parameters are not linked directly: the controlling parameter is converted into a digital signal and fed to the microcontroller, and the microcontroller then drives the controlled object through its output signals. In the intelligent load monitoring design examined here, the MCU's I/O port output signal drives a relay, which in turn controls or monitors the load. Like any other single-chip control system, the structure can be simplified into an input section, an output section and an electronic control unit (ECU).
In the traditional control or monitoring system, control or monitoring parameters of circuit, through the mechanical device directly to the monitored parameters to regulate and control, in the single-chip microcomputer as the core of the control system, the control parameters and controlled parameters are not directly change, but the control parameter is transformed into a digital signal input to the microcontroller, the microcontroller according to its output signal to control the controlled object, as intelligent load monitoring test, is the use of single-chip I / O port output signal of relay control, then the load to control or monitor, thus similar to any one single chip control system structure, often simplified to input part, an output part and an electronic control unit ( ECU )Intelligent monitoring system design principle function as follows: the power supply module is 0~220V AC voltage into a0 ~ 5V DC low voltage, as each module to provide normal working voltage, another set of ADC module work limit voltage of 5V, if the input voltage is greater than 5V, it can not work normally ( but the design is provided for the load voltage in the 0~ 5V, so it will not be considered ), at the same time transformer on load current is sampled on the accused, the load current into a voltage signal, and then through the current - voltage conversion, and passes through the bridge rectification into stable voltage value, will realize the load the current value is converted to a single chip can handle0 ~ 5V voltage value, then the D2diode cutoff, power supply module only plays the role of power supply. Signal to the analog-to-digital conversion module, through quantization, coding, the analog voltage value into8bits of the digital voltage value, repeatedly to the analog voltage16AD conversion, and the16the digital voltage value and, to calculate the average value, the average value through a data bus to send AT89C51P0, accepted AT89C51 read, AT89C51will read the digital signal and software setting load normal working voltage reference range [VMIN, VMAX] compared with the reference voltage range, if not consistent, then the P1.0 output low level, close the relay, cut off the load on the fault source, to stop its sampling, while P1.1 output high level fault light, i.e., P1.3 output low level, namely normal lights. The relay is disconnected after about 2minutes, theAT89C51P1.0outputs high level ( software design), automatic closing relay, then to load the current regular sampling, AD conversion, to accept the AT89C51read, comparison, if consistent, then the P1.1 output low level, namely fault lights out, while P1.3 output high level, i.e. 
the normal indicator lights (as set in software). If the reading is still out of range, the operator must flip switch S1 to the "repair" position, which disconnects the relay control so that the load's resistance can be adjusted, that is, the load is inspected and repaired; S1 is then closed again and the load current is sampled repeatedly until the normal indicator lights. This process is repeated, continually testing the load, so that load problems are repaired in time and the load keeps working. (A behavioural sketch of this monitoring loop is given at the end of this excerpt.)
In the intelligent load monitoring system, intelligent detection and control of the load (voltage too high or too low) by the single-chip microcomputer is achieved by controlling the relay and by sampling through the transformer; what the MCU controls directly is the working state of the relay and of the alarm circuit. The technical features this design should achieve are as follows: (1) the relay is controlled according to changes in the load current, so the controlling parameter is the load current and the controlled parameters are the relay's on/off state and the LED states; (2) a reference voltage range for the current (the load's normal working voltage range) is set in the software section of the AT89C51 design to provide a basis for comparison; (3) the MCU drives light-emitting diodes to display changes in the current state (normal / fault / repair). In summary: the transformer samples the load current, which passes through a current-to-voltage converter, filter and regulator and then through analog-to-digital conversion to be read by the AT89C51; the AT89C51 compares the data it reads with the reference voltage; if it is normal, the normal indicator lights, port P1.0 outputs a high level, the relay is closed and the load is supplied; otherwise the fault indicator lights, P1.0 outputs a low level, the relay opens to disconnect the load, and sampling of the voltage stops. Two minutes later the relay is reclosed and periodic sampling resumes.
Through expansion and improvement, the system can be used for temperature alarm circuits, other alarm circuits and traffic monitoring, and it can also be used to monitor how another system is working. In today's rapid development of intelligent technology, using modern technical tools to develop an intelligent and functionally fairly complete software realisation of an intelligent control system has become a pressing task, and building one's own practical control system has far-reaching significance. In the industrial design and application of microcontrollers, no field has developed as fast as intelligent automation and control. As the main manufacturing plants in China and the wider Asian region raise their degree of automation, new technologies that improve efficiency have an important influence on product cost. Although centralized control can improve overall visibility of any particular manufacturing process, it is not suitable for some key applications where faults cause delays in response and processing.
Intelligent control technology, as an important branch of computer technology, is widely used in industrial control, intelligent instruments, household appliances, electronic toys and other fields; it offers the advantages of small size, multiple functions, low price, convenient use and flexible system design.
It is therefore favoured by more and more engineering staff, and this graduation project is of great significance to me. I have a strong interest in designing things, and the project has given me a great deal, taking me from knowing very little to having a clear train of thought; having actually designed something, I now have a sense of how to think about a design, which is very important, and I believe this work will give me many valuable things.
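As noted above, the monitoring loop can be summarised behaviourally. The sketch below is plain Python rather than AT89C51 firmware: `read_adc` and the relay/LED setter callbacks are hypothetical stand-ins for the port operations described in the text, the voltage window and delay are assumed values, and the manual S1 repair branch is omitted.

```python
import time

V_MIN, V_MAX = 2.0, 4.5      # assumed normal working-voltage window (volts)
RETRY_DELAY_S = 120          # reclose the relay after roughly two minutes
SAMPLE_PERIOD_S = 1

def average_sample(read_adc, n=16):
    """Average n successive ADC conversions, as the design specifies."""
    return sum(read_adc() for _ in range(n)) / n

def monitor(read_adc, set_relay, set_fault_led, set_normal_led):
    while True:
        v = average_sample(read_adc)
        if V_MIN <= v <= V_MAX:
            set_relay(True)
            set_fault_led(False)
            set_normal_led(True)
        else:
            # Fault: drop the load, light the fault LED, pause sampling,
            # then reclose the relay and check again after the delay.
            set_relay(False)
            set_fault_led(True)
            set_normal_led(False)
            time.sleep(RETRY_DELAY_S)
            set_relay(True)
        time.sleep(SAMPLE_PERIOD_S)
```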

Data Acquisition: Translated Foreign Literature (Chinese-English)

Data acquisition: translated foreign literature (including the English original and a Chinese translation). Source: Txomin Nieva. DATA ACQUISITION SYSTEMS [J]. Computers in Industry, 2013, 4(2): 215-237.
English original
DATA ACQUISITION SYSTEMS
Txomin Nieva
Data acquisition systems, as the name implies, are products and/or processes used to collect information to document or analyze some phenomenon. In the simplest form, a technician logging the temperature of an oven on a piece of paper is performing data acquisition. As technology has progressed, this type of process has been simplified and made more accurate, versatile, and reliable through electronic equipment. Equipment ranges from simple recorders to sophisticated computer systems. Data acquisition products serve as a focal point in a system, tying together a wide variety of products, such as sensors that indicate temperature, flow, level, or pressure. Some common data acquisition terms are shown below.
Data collection technology has made great progress in the past 30 to 40 years. For example, 40 years ago, in a well-known college laboratory, the equipment used to track temperature rises in bronze made of helium consisted of thermocouples, relays, interrogators, a bundle of paper, and a pencil. Today's university students are likely to process and analyze data automatically on PCs. There are many methods you can choose from to collect data. The choice of method depends on many factors, including the complexity of the task, the speed and accuracy you need, the evidence you want, and more. Whether simple or complex, a data acquisition system can do the job.
The old way of using pencil and paper is still feasible in some situations, and it is cheap, easy to obtain, and quick to start: all you need is a digital multimeter (DMM) and you can begin recording data by hand. Unfortunately, this method is prone to errors, acquires data slowly, and requires too much manual analysis. In addition, it can only collect data from a single channel; and when you use a multi-channel DMM, the system soon becomes very bulky and clumsy. Accuracy depends on the care of the person taking the readings, and you may need to do the scaling yourself; for example, if the DMM is not equipped to handle a temperature sensor, you will have to look up the conversion factor. Given these limitations, it is an acceptable method only when you need to carry out a quick experiment.
Modern versions of the strip chart recorder allow you to retrieve data from multiple inputs. They provide long-term paper records of data, and because the data are in graphic format they are easy to collect on site. Once a strip chart recorder has been set up, most recorders have enough internal intelligence to operate without an operator or computer. The disadvantages are a lack of flexibility and relatively low precision, often limited to a percentage point, and small changes in the signal produce only a small movement of the pen. For long-term, multi-channel monitoring the recorders serve very well, but beyond that their value is limited: for example, they cannot interact with other devices. Other concerns are the maintenance of pens and paper, the supply of paper, and the storage of data, the biggest being the waste of paper. However, recorders are fairly easy to set up and operate and provide a permanent record of the data for quick and easy analysis.
Some benchtop DMMs offer selectable scanning capabilities.
The back of the instrument has a slot that accepts a scanner card which multiplexes more inputs, typically 8 to 10 channels. This is inherently limited by the instrument's front panel, and flexibility is also limited because you cannot exceed the number of available channels. An external PC usually handles the data acquisition and analysis.
PC plug-in cards are single-board measurement systems that use the ISA or PCI bus expansion slots in a PC. They often have reading rates of up to 1,000 readings per second; 8 to 16 channels are common, and the collected data are stored directly in the computer and then analyzed. Because the card is essentially part of the computer, it is easy to set up a test. PC cards are also relatively inexpensive, partly because they rely on the host PC for power, mechanical housing, and the user interface.
Data collection options
On the downside, PC plug-in cards often have only 12-bit resolution, so you cannot detect small changes in the input signal. In addition, the electronic environment inside a PC is noisy, with high clock rates and bus noise, and this electrical interference limits the accuracy of a PC card. These plug-in cards also measure a limited range of voltages; to measure other input signals, such as low-level voltage, temperature, or resistance, you may need external signal-conditioning devices. Other considerations include complex calibration and overall system cost, especially if you need to purchase additional signal-conditioning devices or adapt them to the card. Taking this into account, if your needs fall within the capabilities and limitations of the card, the PC plug-in card provides an attractive method of data collection.
Electronic data loggers are typically stand-alone instruments that, once set up, measure, record, and display data without the involvement of an operator or computer. They can handle multiple signal inputs, sometimes up to 120 channels. Accuracy rivals that of the best benchtop DMMs, since they operate with up to 22-bit, 0.004 percent accuracy. Some electronic data loggers can scale measurements, check results against user-defined limits, and output control signals.
One of the advantages of electronic data loggers is their built-in signal conditioning: most can directly measure several different kinds of input signal without additional signal-conditioning devices, and one channel can monitor thermocouples, RTDs, and voltages. The compensation needed for accurate thermocouple temperature measurements is typically built into the multi-channel cards. The logger's built-in intelligence helps you set the measurement period and specify the parameters for each channel; once everything is set up, the electronic data logger can operate unattended. The data are held in internal memory, which can store 500,000 or more readings. Connecting to a PC makes it easy to transfer the data to a computer for further analysis. Most electronic data loggers are flexible and simple to configure and operate, and most provide options for remote operation via battery packs or other means.
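To make the resolution figures above concrete, here is a small sketch of the smallest voltage step an ideal converter can resolve, assuming a hypothetical 0-10 V input range:

```python
def lsb_volts(full_scale_volts, bits):
    """Smallest step (one LSB) an ideal ADC of the given word length can resolve."""
    return full_scale_volts / (2 ** bits)

for bits in (12, 16, 22):
    print(f"{bits}-bit over 0-10 V: {lsb_volts(10.0, bits) * 1e6:.1f} microvolt per step")
# 12-bit ~ 2441 uV, 16-bit ~ 153 uV, 22-bit ~ 2.4 uV, which is why a 12-bit
# plug-in card cannot see small signal changes that a higher-resolution logger can.
```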
Thanks to their A/D conversion technology, certain electronic data loggers have lower reading rates, especially when compared with PC plug-in cards; however, a reading rate of 250 readings per second is relatively rare. Keep in mind that many of the phenomena being measured are physical in nature, such as temperature, pressure, and flow, and generally change slowly. In addition, because of the measurement accuracy of electronic data loggers, heavy averaging of readings is not necessary, as it often is with PC plug-in cards.
Front-end data acquisition is often packaged as a module and is typically connected to a PC or controller. Front ends are used in automated test applications to collect data and to control and route signals to and from other test equipment. Front ends operate very efficiently and can match the speed and accuracy of the best stand-alone instruments. Front-end data acquisition comes in many forms, including VXI versions such as the Agilent E1419A multifunction measurement and control module, as well as proprietary card cages. Although the cost of front-end units has come down, these systems can be very expensive, and unless you need the high level of performance they provide, their price can be prohibitive. On the other hand, they do provide considerable flexibility and measurement capability.
Good, low-cost electronic data loggers have a suitable number of channels (20-60) and scan rates that are relatively low but adequate for most engineers. Some of the key applications include:
• product characteristics
• hot die cutting of electronic products
• environmental testing
• environmental monitoring
• composition characteristics
• battery testing
• building and computer capacity monitoring
A new system design
The conceptual model of a universal system can be applied to the analysis phase of a specific system to better understand the problem and to specify the best solution more easily based on the specific requirements of that system. The conceptual model of a universal system can also be used as a starting point for designing a specific system. Using a general-purpose conceptual model therefore saves time and reduces the cost of developing a specific system. To test this hypothesis, we developed a DAS for railway equipment based on our generic DAS conceptual model. In this section, we summarize the main results and conclusions of this development.
We analyzed the device model package. The result of this analysis is a partial conceptual model of a system consisting of a three-tier device model. We analyzed the equipment item package in the equipment environment, and based on this analysis we derived a three-level item hierarchy in the conceptual model of the system; equipment items are specialized for individual pieces of equipment.
We analyzed the equipment model monitoring standard package in the equipment context. One of the requirements of this system is the ability to record specific condition monitoring reports using a predefined set of data. We also analyzed the equipment item monitoring standard package in the equipment environment. The requirements of the system are: (i) the ability to record condition monitoring reports and event monitoring reports corresponding to the items, which can be triggered by time-trigger conditions or event-trigger conditions; (ii) the definition of private and public monitoring standards; and (iii) the ability to define custom and predefined train data sets.
We therefore introduced the concepts "equipment item monitoring standard", "public standard", "special standard", "equipment monitoring standard", "equipment condition monitoring standard", "equipment item condition monitoring standard" and "equipment item event monitoring standard". Train item trigger conditions, train item time-trigger conditions and train item event-trigger conditions are specializations of the equipment item trigger condition, equipment item time-trigger condition and equipment item event-trigger condition; and train item data sets, train custom data sets and train predefined data sets are specializations of the equipment item data set, custom data set and predefined data set.
Finally, we analyzed observations and monitoring reports in the equipment environment. The system requirement is to record measurements and category observations; in addition, condition and event monitoring reports can be recorded. We therefore introduced the concepts of observation, measurement, category observation and monitoring report into the conceptual model of the system.
Our generic DAS conceptual model played an important role in the design of the equipment DAS. We used the model to organize the data used by the system components, and the conceptual model also made it easier to design certain components of the system. As a result, our implementation contains a large number of design classes that represent the concepts specified in the generic DAS conceptual model. Through an industrial example, the development of this particular DAS demonstrates the usefulness of a generic system conceptual model for developing a specific system.
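The hierarchy of items, monitoring standards, trigger conditions and data sets just described can be pictured with a few illustrative classes. The names and fields below are hypothetical stand-ins, not the paper's actual design classes:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TriggerCondition:
    pass

@dataclass
class TimeTrigger(TriggerCondition):
    period_s: int                      # e.g. record every N seconds

@dataclass
class EventTrigger(TriggerCondition):
    event_name: str                    # e.g. "door_open", "overcurrent"

@dataclass
class DataSet:
    name: str
    channels: List[str]                # predefined or user-defined channel list

@dataclass
class MonitoringStandard:
    kind: str                          # "condition" or "event"
    trigger: TriggerCondition
    data_set: DataSet

@dataclass
class EquipmentItem:
    item_id: str
    standards: List[MonitoringStandard] = field(default_factory=list)

# A train door item with one time-triggered condition monitoring standard.
door = EquipmentItem("train42/door3")
door.standards.append(MonitoringStandard(
    kind="condition",
    trigger=TimeTrigger(period_s=600),
    data_set=DataSet("door_predefined", ["motor_current", "cycle_count"])))
print(door)
```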

Essential Chinese and English Literature Databases for Thesis Writing

Essential for thesis writing: a comprehensive guide to Chinese and English literature databases. It will serve you for a lifetime; all the references you need for a thesis are here.
I. Chinese databases
1. CNKI: China's largest database, with fairly comprehensive content. It indexes more than 5,000 Chinese journals and several million articles published since 1994, and is currently being updated at a rate of several thousand articles per day. Reading the full text requires downloading the CAJ full-text viewer from the site's home page.
2. VIP (Weipu): holds full text going back to 1989. The scan quality is somewhat poor, and its post-1994 coverage is not as complete as CNKI's. Reading the full text requires downloading the VIP full-text viewer (about 7 MB). At present, the sites listed below offer free searching.
3. Wanfang Data: carries the full text of core journals in PDF format; reading the full text requires Acrobat Reader.
II. Foreign full-text sites (foreign databases)
The world's second-largest free database (the largest free database has no literature in biology or agriculture). The site offers free searching of part of its literature and hyperlinks to the cited literature; free items are marked FREE on the left.
Elsevier Science is a world-famous academic publisher based in the Netherlands. Each year it publishes a large number of academic books and journals in agricultural and biological science, chemistry and chemical engineering, clinical medicine, life science, computer science, earth science, engineering, energy and technology, environmental science, materials science, aerospace, astronomy, physics, mathematics, economics, business, management, social science, and the arts and humanities. Its electronic journals now number more than 1,200 (including 499 biomedical titles), most of which are core journals in their disciplines indexed by internationally recognised databases such as SCI and EI.
Wiley InterScience is a dynamic online content service created by John Wiley & Sons and launched on the web in 1997. Through InterScience, Wiley provides users with online access to full-text content under licence agreements. Wiley InterScience carries more than 360 journals in science, engineering, medicine and related fields, more than 30 large professional reference works, 13 laboratory manuals in full text, and the full text of more than 500 Wiley academic book titles. Nearly 200 of its core journals are indexed by SCI. (Register a user name and password; next time you can log in directly with them and read full-text articles even without a proxy. Registering with Wiley also gives free access to CP, an excellent protocols resource.)
The Springer publishing group issues more than 2,000 new book titles and more than 500 journals a year, over 400 of which have electronic editions.

Translation Studies: Translated Foreign Literature (Chinese-English)

(The document contains the English original and a Chinese translation.)
Translation Equivalence
Despite the fact that the world is becoming a global village, translation remains a major way for languages and cultures to interact and influence each other. And name translation, especially government name translation, occupies a quite significant place in international exchange.
Translation is the communication of the meaning of a source-language text by means of an equivalent target-language text. While interpreting, the facilitating of oral or sign-language communication between users of different languages, antedates writing, translation began only after the appearance of written literature. There exist partial translations of the Sumerian Epic of Gilgamesh (ca. 2000 BCE) into Southwest Asian languages of the second millennium BCE. Translators always risk inappropriate spill-over of source-language idiom and usage into the target-language translation. On the other hand, spill-overs have imported useful source-language calques and loanwords that have enriched the target languages. Indeed, translators have helped substantially to shape the languages into which they have translated. Due to the demands of business documentation consequent to the Industrial Revolution that began in the mid-18th century, some translation specialties have become formalized, with dedicated schools and professional associations. Because of the laboriousness of translation, since the 1940s engineers have sought to automate translation (machine translation) or to mechanically aid the human translator (computer-assisted translation). The rise of the Internet has fostered a world-wide market for translation services and has facilitated language localization.
It is generally accepted that translation, not as a separate entity, blooms into flower under such circumstances as culture, societal functions, politics and power relations. Nowadays, the field of translation studies is immersed in abundantly diversified translation standards, some of which, moreover, are presented by renowned figures and are rather authoritative. In translation practice, however, how should we select the so-called translation standards to serve as our guidelines in the translation process, and how should we adopt translation standards to evaluate a translation product?
In the macro-context of the flourishing of linguistic theories, theorists in the translation circle keep to the golden law of the principle of equivalence. The theory of Translation Equivalence is the central issue in western translation theories, and the presentation of this theory gave great impetus to the development and improvement of translation theory. It is not difficult for us to discover that it is the theory of Translation Equivalence that serves as the guideline in government name translation in China. Name translation, as defined, is the replacement of the name in the source language by an equivalent name or other words in the target language. Translating Chinese government names into English, similarly, is replacing the Chinese government name with an equivalent in English.
Metaphorically speaking, translation is often described as a moving trajectory going from A to B along a path, or as a container carrying something across from A to B. This view is commonly held by both translation practitioners and theorists in the West. In this view, they do not expect that this trajectory or content will change its identity as it moves or as it is carried.
In China, to translate is also normally understood by many people as "translating the whole text sentence by sentence and paragraph by paragraph, without any omission, addition, or other changes". In both views, the source text and the target text must be "the same". This helps explain the etymological source of the term "translation equivalence". It is in essence a word which describes the relationship between the ST and the TT.
Equivalence means the state, fact or property of being equivalent. It is widely used in several scientific fields such as chemistry and mathematics. Therefore, it has come to have a strong scientific meaning that is rather absolute and concise. Influenced by this, translation equivalence has also come to have an absolute denotation, though it was first applied in translation study as a general word. From a linguistic point of view, it can be divided into three sub-types, i.e., formal equivalence, semantic equivalence, and pragmatic equivalence. In actual translation, it frequently happens that they cannot be obtained at the same time, thus forming a kind of relative translation equivalence in terms of quality. In terms of quantity, sometimes the ST and TT are not equivalent either. Absolute translation equivalence in both quality and quantity, even where obtainable, is limited to a few cases.
The following is a brief discussion of translation equivalence study conducted by three influential western scholars: Eugene Nida, Andrew Chesterman and Peter Newmark. It is expected that their studies can inform GNT study in China and provide translators with insightful methods.
Nida's definition of translation is: "Translation consists in reproducing in the receptor language the closest natural equivalent of the source language message, first in terms of meaning and secondly in terms of style." It is a replacement of textual material in one language (SL) by equivalent textual material in another language (TL). The translator must strive for equivalence rather than identity. In a sense, this is just another way of emphasizing the reproduction of the message rather than the conservation of the form of the utterance. The message in the receptor language should match as closely as possible the different elements in the source language so as to reproduce as literally and meaningfully as possible the form and content of the original. Translation equivalence is an empirical phenomenon discovered by comparing SL and TL texts, and it is a useful operational concept like the term "unit of translation".
Nida argues that there are two different types of equivalence, namely formal equivalence and dynamic equivalence. Formal correspondence focuses attention on the message itself, in both form and content, whereas dynamic equivalence is based upon "the principle of equivalent effect". Formal correspondence consists of a TL item which represents the closest equivalent of an ST word or phrase. Nida and Taber make it clear that there are not always formal equivalents between language pairs. Therefore, formal equivalents should be used wherever possible if the translation aims at achieving formal rather than dynamic equivalence. The use of formal equivalents might at times have serious implications in the TT, since the translation will not be easily understood by the target readership.
According to Nida and Taber, formal correspondence distorts the grammatical and stylistic patterns of the receptor language, and hence distorts the message, causing the receptor to misunderstand or to labor unduly hard.
Dynamic equivalence is based on what Nida calls "the principle of equivalent effect", whereby the relationship between receptor and message should be substantially the same as that which existed between the original receptors and the message. The message has to be modified to the receptor's linguistic needs and cultural expectations, and aims at complete naturalness of expression. Naturalness is a key requirement for Nida. He defines the goal of dynamic equivalence as seeking the closest natural equivalent to the SL message. This receptor-oriented approach considers adaptations of grammar, of lexicon and of cultural references to be essential in order to achieve naturalness; the TL should not show interference from the SL, and the "foreignness" of the ST setting is minimized.
Nida is in favor of the application of dynamic equivalence as a more effective translation procedure. Thus, the product of the translation process, that is, the text in the TL, must have the same impact on the different readers it is addressing. Only in Nida and Taber's edition is it clearly stated that dynamic equivalence in translation is far more than mere correct communication of information.
As Andrew Chesterman points out in his recent book Memes of Translation, equivalence is one of the five elements of translation theory, standing shoulder to shoulder with source-target, untranslatability, free-vs-literal, and all-writing-is-translating in importance. Pragmatically speaking, observed Chesterman, "the only true examples of equivalence (i.e., absolute equivalence) are those in which an ST item X is invariably translated into a given TL as Y, and vice versa. Typical examples would be words denoting numbers (with the exception of contexts in which they have culture-bound connotations, such as 'magic' or 'unlucky'), certain technical terms (oxygen, molecule) and the like. From this point of view, the only true test of equivalence would be invariable back-translation. This, of course, is unlikely to occur except in the case of a small set of lexical items, or perhaps simple isolated syntactic structures".
Peter Newmark, departing from Nida's receptor-oriented line, argues that the success of equivalent effect is "illusory" and that the conflict of loyalties and the gap between emphasis on source and target language will always remain the overriding problem in translation theory and practice. He suggests narrowing the gap by replacing the old terms with those of semantic and communicative translation. The former attempts to render, as closely as the semantic and syntactic structures of the second language allow, the exact contextual meaning of the original, while the latter "attempts to produce on its readers an effect as close as possible to that obtained on the readers of the original." Newmark's description of communicative translation resembles Nida's dynamic equivalence in the effect it is trying to create on the TT reader, while semantic translation has similarities to Nida's formal equivalence.
Semantic translation requires the translator retain the aesthetic value of the original, trying his best to keep the linguistic feature and characteristic style of the author. According to semantic translation, the translator should always retain the semantic and syntactic structures of the original. Deletion and abridgement lead to distortion of the author‟s intention and his writing style.翻译对等尽管全世界正在渐渐成为一个地球村,但翻译仍然是语言和和文化之间的交流互动和相互影响的主要方式之一。

Data Acquisition Systems: Translated Foreign Literature (Chinese-English)

(The document contains the English original and a Chinese translation.)
Data Acquisition Systems
Data acquisition systems are used to acquire process operating data and store it on secondary storage devices for later analysis. Many of the data acquisition systems acquire this data at very high speeds, and very little computer time is left to carry out any necessary, or desirable, data manipulation or reduction. All the data are stored on secondary storage devices and manipulated subsequently to derive the variables of interest. It is very often necessary to design special-purpose data acquisition systems and interfaces to acquire the high-speed process data, and this special-purpose design can be an expensive proposition.
Powerful mini- and mainframe computers are used to combine data acquisition with other functions, such as comparisons between the actual output and the desirable output values, and to then decide on the control action which must be taken to ensure that the output variables lie within preset limits. The computing power required will depend upon the type of process control system implemented. Software requirements for carrying out proportional, ratio or three-term control of process variables are relatively trivial, and microcomputers can be used to implement such process control systems. It would not be possible to use many of the currently available microcomputers for the implementation of high-speed adaptive control systems, which require the use of suitable process models and considerable on-line manipulation of data.
Microcomputer-based data loggers are used to carry out intermediate functions such as data acquisition at comparatively low speeds, simple mathematical manipulation of raw data and some forms of data reduction. The first generation of data loggers, without any programmable computing facilities, was used simply for slow-speed data acquisition from up to one hundred channels. All the acquired data could be punched out on paper tape or printed for subsequent analysis. Such hardwired data loggers are being replaced by a new generation of data loggers which incorporate microcomputers and can be programmed by the user. They offer an extremely good method of collecting process data, using standardized interfaces, and subsequently performing the necessary manipulations to provide the information of interest to the process operator. The data acquired can be analyzed to establish correlations, if any, between process variables and to develop the mathematical models necessary for adaptive and optimal process control.
The data acquisition function carried out by data loggers varies from one system to another. Simple data logging systems acquire data from a few channels, while complex systems can receive data from hundreds, or even thousands, of input channels distributed around one or more processes. Rudimentary data loggers scan the selected number of channels, connected to sensors or transducers, in a sequential manner, and the data are recorded in a digital format. A data logger can be dedicated in the sense that it can only collect data from particular types of sensors and transducers. It is best to use a non-dedicated data logger, since any transducer or sensor can then be connected to the channels via suitable interface circuitry. This facility requires the use of appropriate signal conditioning modules.
Microcomputer-controlled data acquisition facilitates the scanning of a large number of sensors.
The scanning rate depends upon the signal dynamics, which means that some channels must be scanned at very high speeds in order to avoid aliasing errors, while there is very little loss of information in scanning other channels at slower speeds. In some data logging applications the faster channels require sampling at speeds of up to 100 times per second, while slow channels can be sampled once every five minutes. Conventional hardwired, non-programmable data loggers sample all the channels in a sequential manner, and the sampling frequency of all the channels must be the same. This procedure results in the accumulation of very large amounts of data, some of which is unnecessary, and also slows down the overall effective sampling frequency. Microcomputer-based data loggers can be used to scan some fast channels at a higher frequency than other slow-speed channels.
The vast majority of user-programmable data loggers can be used to scan up to 1,000 analog and 1,000 digital input channels. A small number of data loggers, with a higher degree of sophistication, are suitable for acquiring data from up to 15,000 analog and digital channels. The data from digital channels can be in the form of Transistor-Transistor Logic or contact closure signals. Analog data must be converted into digital format before it is recorded, which requires the use of suitable analog-to-digital converters (ADC). The characteristics of the ADC define the resolution that can be achieved and the rate at which the various channels can be sampled. An increase in the number of bits used in the ADC improves the resolution capability. Successive-approximation ADCs are faster than integrating ADCs. Many microcomputer-controlled data loggers include a facility to program the channel scanning rates. Typical scanning rates vary from 2 channels per second to 10,000 channels per second.
Most data loggers have a resolution capability of ±0.01% or better, and it is also possible to achieve a resolution of 1 microvolt. The resolution capability, in absolute terms, also depends upon the range of input signals. Standard input signal ranges are 0-10 volt, 0-50 volt and 0-100 volt. The lowest measurable signal varies from 1 microvolt to 50 microvolt. A higher degree of recording accuracy can be achieved by using modules which accept data in small, selectable ranges. An alternative is the auto-ranging facility available on some data loggers.
The accuracy with which the data are acquired and logged on the appropriate storage device is extremely important. It is therefore necessary that the data acquisition module should be able to reject common-mode noise and common-mode voltage. Typical common-mode noise rejection capabilities lie in the range 110 dB to 150 dB. A decibel (dB) is a term which defines the ratio of the power levels of two signals. Thus, if the reference and actual signals have power levels of Nr and Na respectively, they will have a ratio of n decibels, where
n = 10 log10(Na / Nr)
Protection against maximum common-mode voltages of 200 to 500 volt is available on typical microcomputer-based data loggers. The voltage input to an individual data logger channel is measured, scaled and linearised before any further data manipulation or comparisons are carried out.
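The decibel relation above is easy to evaluate directly; the figures below are purely illustrative:

```python
import math

def power_ratio_db(p_signal, p_reference):
    """n = 10 * log10(Na / Nr), as defined in the text."""
    return 10 * math.log10(p_signal / p_reference)

# 120 dB of common-mode rejection corresponds to attenuating the unwanted
# common-mode signal by a factor of 10**12 in power terms.
print(power_ratio_db(1.0, 1e-12))    # 120.0
```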
If, however, the sensor signals approach the alarm limit, then it is obviously desirable to sample that channel once every minute or even faster so that the operators can be informed, thereby avoiding any catastrophes. Microcomputer controlledintel-ligent data loggers may be programmed to alter the sampling frequencies depending upon the values of process signals. Other data loggers include self-scanning modules which can initiate sampling.The conventional hardwired data loggers, without any programming facilities, simply record the instantaneous values of transducer outputs at a regular samplingin-terval. This raw data often means very little to the typical user. To be meaningful, this data must be linearised and scaled, using a calibration curve, in order to determine the real value of the variable in appropriate engineering units. Prior to the availability of programmable data loggers, this function was usually carried out in the off-line mode on a mini- or mainframe computer. The raw data values had to be punched out on pa-per tape, in binary or octal code, to be input subsequently to the computer used for analysis purposes and converted to the engineering units. Paper tape punches are slow speed mechanical devices which reduce the speed at which channels can be scanned. An alternative was to print out the raw data values which further reduced the data scanning rate. It was not possible to carry out any limit comparisons or provide any alarm information. Every single value acquired by the data logger had to be recorded eventhough it might not serve any useful purpose during subsequent analysis; many data values only need recording when they lie outside the pre-set low and high limits.If the analog data must be transmitted over any distance, differences in ground potential between the signal source and final location can add noise in the interface design. In order to separate common-mode interference form the signal to be recorded or processed, devices designed for this purpose, such as instrumentation amplifiers, may be used. An instrumentation amplifier is characterized by good common-mode- rejection capability, a high input impedance, low drift, adjustable gain, and greater cost than operational amplifiers. They range from monolithic ICs to potted modules, and larger rack-mounted modules with manual scaling and null adjustments. When a very high common-mode voltage is present or the need for extremely-lowcom-mon-mode leakage current exists(as in many medical-electronics applications),an isolation amplifier is required. Isolation amplifiers may use optical or transformer isolation.Analog function circuits are special-purpose circuits that are used for a variety of signal conditioning operations on signals which are in analog form. When their accu-racy is adequate, they can relieve the microprocessor of time-consuming software and computations. Among the typical operations performed are multiplications, division, powers, roots, nonlinear functions such as for linearizing transducers, rimsmeasure-ments, computing vector sums, integration and differentiation, andcurrent-to-voltage or voltage- to-current conversion. 
Many of these operations can be purchased in available devices as multiplier/dividers, log/antilog amplifiers, and others.When data from a number of independent signal sources must be processed by the same microcomputer or communications channel, a multiplexer is used to channel the input signals into the A/D converter.Multiplexers are also used in reverse, as when a converter must distribute analog information to many different channels. The multiplexer is fed by a D/A converter which continually refreshes the output channels with new information.In many systems, the analog signal varies during the time that the converter takes to digitize an input signal. The changes in this signal level during the conversion process can result in errors since the conversion period can be completed some time after the conversion command. The final value never represents the data at the instant when the conversion command is transmitted. Sample-hold circuits are used to make an acquisition of the varying analog signal and to hold this signal for the duration of the conversion process. Sample-hold circuits are common in multichannel distribution systems where they allow each channel to receive and hold the signal level.In order to get the data in digital form as rapidly and as accurately as possible, we must use an analog/digital (A/D) converter, which might be a shaft encoder, a small module with digital outputs, or a high-resolution, high-speed panel instrument. These devices, which range form IC chips to rack-mounted instruments, convert ana-log input data, usually voltage, into an equivalent digital form. The characteristics of A/D converters include absolute and relative accuracy, linearity, monotonic, resolu-tion, conversion speed, and stability. A choice of input ranges, output codes, and other features are available. The successive-approximation technique is popular for a large number ofapplications, with the most popular alternatives being the counter-comparator types, and dual-ramp approaches. The dual-ramp has been widely-used in digital voltmeters.D/A converters convert a digital format into an equivalent analog representation. The basic converter consists of a circuit of weighted resistance values or ratios, each controlled by a particular level or weight of digital input data, which develops the output voltage or current in accordance with the digital input code. A special class of D/A converter exists which have the capability of handling variable reference sources. These devices are the multiplying DACs. Their output value is the product of the number represented by the digital input code and the analog reference voltage, which may vary form full scale to zero, and in some cases, to negative values.Component Selection CriteriaIn the past decade, data-acquisition hardware has changed radically due to ad-vances in semiconductors, and prices have come down too; what have not changed, however, are the fundamental system problems confronting the designer. Signals may be obscured by noise, rfi,ground loops, power-line pickup, and transients coupled into signal lines from machinery. Separating the signals from these effects becomes a matter for concern.Data-acquisition systems may be separated into two basic categories:(1)those suited to favorable environments like laboratories -and(2)those required for hostile environments such as factories, vehicles, and military installations. 
The latter group includes industrial process control systems where temperature information may be gathered by sensors on tanks, boilers, vats, or pipelines that may be spread over miles of facilities. That data may then be sent to a central processor to provide real-time process control. The digital control of steel mills, automated chemical production, and machine tools is carried out in this kind of hostile environment. The vulnerability of the data signals leads to the requirement for isolation and other techniques.

At the other end of the spectrum (laboratory applications, such as test systems for gathering information on gas chromatographs, mass spectrometers, and other sophisticated instruments), the designer's problems are concerned with the performing of sensitive measurements under favorable conditions rather than with the problem of protecting the integrity of collected data under hostile conditions.

Systems in hostile environments might require components for wide temperatures, shielding, common-mode noise reduction, conversion at an early stage, redundant circuits for critical measurements, and preprocessing of the digital data to test its reliability. Laboratory systems, on the other hand, will have narrower temperature ranges and less ambient noise. But the higher accuracies require sensitive devices, and a major effort may be necessary for the required signal/noise ratios.

The choice of configuration and components in data-acquisition design depends on consideration of a number of factors:
1. Resolution and accuracy required in final format.
2. Number of analog sensors to be monitored.
3. Sampling rate desired.
4. Signal-conditioning requirement due to environment and accuracy.
5. Cost trade-offs.

Some of the choices for a basic data-acquisition configuration include:
1. Single-channel techniques.
A. Direct conversion.
B. Preamplification and direct conversion.
C. Sample-hold and conversion.
D. Preamplification, sample-hold, and conversion.
E. Preamplification, signal-conditioning, and direct conversion.
F. Preamplification, signal-conditioning, sample-hold, and conversion.
2. Multichannel techniques.
A. Multiplexing the outputs of single-channel converters.
B. Multiplexing the outputs of sample-holds.
C. Multiplexing the inputs of sample-holds.
D. Multiplexing low-level data.
E. More than one tier of multiplexers.

Signal-conditioning may include:
1. Ratiometric conversion techniques.
2. Range biasing.
3. Logarithmic compression.
4. Analog filtering.
5. Integrating converters.
6. Digital data processing.

We shall consider these techniques later, but first we will examine some of the components used in these data-acquisition system configurations.

Multiplexers

When more than one channel requires analog-to-digital conversion, it is necessary to use time-division multiplexing in order to connect the analog inputs to a single converter, or to provide a converter for each input and then combine the converter outputs by digital multiplexing.

Analog Multiplexers

Analog multiplexer circuits allow the timesharing of analog-to-digital converters between a number of analog information channels. An analog multiplexer consists of a group of switches arranged with inputs connected to the individual analog channels and outputs connected in common (as shown in Fig. 1). The switches may be addressed by a digital input code. Many alternative analog switches are available in electromechanical and solid-state forms.
Electromechanical switch types include relays, stepper switches, cross-bar switches, mercury-wetted switches, and dry-reed relay switches. The best switching speed is provided by reed relays (about 1 ms). The mechanical switches provide high dc isolation resistance, low contact resistance, and the capacity to handle voltages up to 1 kV, and they are usually inexpensive. Multiplexers using mechanical switches are suited to low-speed applications as well as those having high resolution requirements. They interface well with the slower A/D converters, like the integrating dual-slope types. Mechanical switches have a finite life, however, usually expressed in number of operations. A reed relay might have a life of 10^9 operations, which would allow a 3-year life at 10 operations/second.

Solid-state switch devices are capable of operation at 30 ns, and they have a life which exceeds most equipment requirements. Field-effect transistors (FETs) are used in most multiplexers. They have superseded bipolar transistors, which can introduce large voltage offsets when used as switches. FET devices have a leakage from drain to source in the off state and a leakage from gate or substrate to drain and source in both the on and off states. Gate leakage in MOS devices is small compared to other sources of leakage. When the device has a Zener-diode-protected gate, an additional leakage path exists between the gate and source.

Enhancement-mode MOSFETs have the advantage that the switch turns off when power is removed from the MUX. Junction-FET multiplexers always turn on with the power off. A more recent development, the CMOS (complementary MOS) switch, has the advantage of being able to multiplex voltages up to and including the supply voltages. A ±10-V signal can be handled with a ±10-V supply.

Trade-off Considerations for the Designer

Analog multiplexing has been the favored technique for achieving lowest system cost. The decreasing cost of A/D converters and the availability of low-cost, digital integrated circuits specifically designed for multiplexing provide an alternative with advantages for some applications. A decision on the technique to use for a given system will hinge on trade-offs between the following factors:
1. Resolution. The cost of A/D converters rises steeply as the resolution increases due to the cost of precision elements. At the 8-bit level, the per-channel cost of an analog multiplexer may be a considerable proportion of the cost of a converter. At resolutions above 12 bits, the reverse is true, and analog multiplexing tends to be more economical.
2. Number of channels. This controls the size of the multiplexer required and the amount of wiring and interconnections. Digital multiplexing onto a common data bus reduces wiring to a minimum in many cases. Analog multiplexing is suited for 8 to 256 channels; beyond this number, the technique is unwieldy and analog errors become difficult to minimize. Analog and digital multiplexing is often combined in very large systems.
3. Speed of measurement, or throughput. High-speed A/D converters can add a considerable cost to the system. If analog multiplexing demands a high-speed converter to achieve the desired sample rate, a slower converter for each channel with digital multiplexing can be less costly.
4. Signal level and conditioning. Wide dynamic ranges between channels can be difficult with analog multiplexing.
Signals less than 1 V generally require differential low-level analog multiplexing, which is expensive, with programmable-gain amplifiers after the MUX operation. The alternative of fixed-gain converters on each channel, with signal-conditioning designed for the channel requirement, with digital multiplexing may be more efficient.
5. Physical location of measurement points. Analog multiplexing is suited for making measurements at distances up to a few hundred feet from the converter, since analog lines may suffer from losses, transmission-line reflections, and interference. Lines may range from twisted wire pairs to multiconductor shielded cable, depending on signal levels, distance, and noise environments. Digital multiplexing is operable to thousands of miles, with the proper transmission equipment, for digital transmission systems can offer the powerful noise-rejection characteristics that are required for long-distance transmission.

Digital Multiplexing

For systems with small numbers of channels, medium-scale integrated digital multiplexers are available in TTL and MOS logic families. The 74151 is a typical example. Eight of these integrated circuits can be used to multiplex eight A/D converters of 8-bit resolution onto a common data bus. This digital multiplexing example offers little advantage in wiring economy, but it is lowest in cost, and the high switching speed allows operation at sampling rates much faster than analog multiplexers. The A/D converters are required only to keep up with the channel sample rate, and not with the commutating rate. When large numbers of A/D converters are multiplexed, the data-bus technique reduces system interconnections. This alone may in many cases justify multiple A/D converters. Data can be bussed onto the lines in bit-parallel or bit-serial format, as many converters have both serial and parallel outputs. A variety of devices can be used to drive the bus, from open collector and tristate TTL gates to line drivers and optoelectronic isolators. Channel-selection decoders can be built from 1-of-16 decoders to the required size. This technique also allows additional reliability in that a failure of one A/D does not affect the other channels. An important requirement is that the multiplexer operate without introducing unacceptable errors at the sample-rate speed. For a digital MUX system, one can determine the speed from propagation delays and the time required to charge the bus capacitance.

Analog multiplexers can be more difficult to characterize. Their speed is a function not only of internal parameters but also external parameters such as channel, source impedance, stray capacitance and the number of channels, and the circuit layout. The user must be aware of the limiting parameters in the system to judge their effect on performance. The nonideal transmission and open-circuit characteristics of analog multiplexers can introduce static and dynamic errors into the signal path. These errors include leakage through switches, coupling of control signals into the analog path, and interactions with sources and following amplifiers. Moreover, the circuit layout can compound these effects.

Since analog multiplexers may be connected directly to sources which may have little overload capacity or poor settling after overloads, the switches should have a break-before-make action to prevent the possibility of shorting channels together.
It may be necessary to avoid shorted channels when power is removed, and a channels-off-with-power-down characteristic is desirable. In addition to the channel-addressing lines, which are normally binary-coded, it is useful to have inhibit or enable lines to turn all switches off regardless of the channel being addressed. This simplifies the external logic necessary to cascade multiplexers and can also be useful in certain modes of channel addressing. Another requirement for both analog and digital multiplexers is the tolerance of line transients and overload conditions, and the ability to absorb the transient energy and recover without damage.

数据采集系统

数据采集系统用于采集过程运行数据,并将其存储在二级存储设备上,以供日后分析。
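To make the decibel relationship and the resolution figures quoted above concrete, the short sketch below computes a power ratio in decibels and the voltage represented by one least significant bit of an ideal ADC. It is an illustrative aid only: the bit count and input range are assumed for the example and are not the specifications of any particular data logger.

```python
import math

def power_ratio_db(p_actual: float, p_reference: float) -> float:
    """Ratio of two power levels expressed in decibels: n = 10 * log10(Na / Nr)."""
    return 10.0 * math.log10(p_actual / p_reference)

def adc_lsb_volts(full_scale_volts: float, n_bits: int) -> float:
    """Voltage step represented by one least significant bit of an ideal ADC."""
    return full_scale_volts / (2 ** n_bits)

if __name__ == "__main__":
    # A ratio of 10**12 between two power levels corresponds to 120 dB,
    # the order of common-mode rejection quoted for typical data loggers.
    print(power_ratio_db(1.0, 1e-12))             # 120.0

    # Resolution of an assumed 16-bit converter on an assumed 0-10 V range.
    lsb = adc_lsb_volts(10.0, 16)
    print(f"1 LSB = {lsb * 1e6:.1f} microvolts")  # about 152.6 microvolts
```

On these assumed figures one step is roughly 0.0015% of full scale, comfortably inside the "±0.01% or better" resolution quoted above; a higher bit count or a smaller selectable input range shrinks the step further.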

互联网大数据金融中英文对照外文翻译文献(文档含英文原文和中文翻译)原文:Internet Finance's Impact on Traditional FinanceAbstractAs the advances in modern information and Internet technology, especially the develop of cloud computing, big data, mobile Internet, search engines and social networks, profoundly change, even subvert many traditional industries, and the financial industry is no exception. In recent years, financial industry has become the most far-reaching area influenced by Internet, after commercial distribution and the media. Many Internet-based financial service models have emerged, and have had a profound and huge impact on traditional financial industries. "Internet-Finance" has win the focus of public attention.Internet-Finance is low cost, high efficiency, and pays more attention to the user experience, and these features enable it to fully meet the special needs of traditional "long tail financial market", to flexibly provide more convenient and efficient financial services and diversified financial products, to greatly expand the scope and depth of financial services, to shorten the distance between people space and time, andto establish a new financial environment, which effectively integrate and take use of fragmented time, information, capital and other scattered resources, then add up to form a scale, and grow a new profit point for various financial institutions. Moreover, with the continuous penetration and integration in traditional financial field, Internet-Finance will bring new challenges, but also opportunities to the traditional. It contribute to the transformation of the traditional commercial banks, compensate for the lack of efficiency in funding process and information integration, and provide new distribution channels for securities, insurance, funds and other financial products. For many SMEs, Internet-Finance extend their financing channels, reduce their financing threshold, and improve their efficiency in using funds. However, the cross-industry nature of the Internet Finance determines its risk factors are more complex, sensitive and varied, and therefore we must properly handle the relationship between innovative development and market regulation, industry self-regulation.Key Words:Internet Finance; Commercial Banks; Effects; Regulatory1 IntroductionThe continuous development of Internet technology, cloud computing, big data, a growing number of Internet applications such as social networks for the business development of traditional industry provides a strong support, the level of penetration of the Internet on the traditional industry. The end of the 20th century, Microsoft chairman Bill Gates, who declared, "the traditional commercial bank will become the new century dinosaur". Nowadays, with the development of the Internet electronic information technology, we really felt this trend, mobile payment, electronic bank already occupies the important position in our daily life.Due to the concept of the Internet financial almost entirely from the business practices, therefore the present study focused on the discussion. Internet financial specific mode, and the influence of traditional financial industry analysis and counter measures are lack of systemic research. Internet has always been a key battleground in risk investment, and financial industry is the thinking mode of innovative experimental various business models emerge in endlessly, so it is difficult to use a fixed set of thinking to classification and definition. 
The mutual penetration andintegration of Internet and financial, is a reflection of technical development and market rules requirements, is an irreversible trend. The Internet bring traditional financial is not only a low cost and high efficiency, more is a kind of innovative thinking mode and unremitting pursuit of the user experience. The traditional financial industry to actively respond to. Internet financial, for such a vast blue ocean enough to change the world, it is very worthy of attention to straighten out its development, from the existing business model to its development prospects."Internet financial" belongs to the latest formats form, discusses the Internet financial research of literature, but the lack of systemic and more practical. So this article according to the characteristics of the Internet industry practical stronger, the several business models on the market for summary analysis, and the traditional financial industry how to actively respond to the Internet wave of financial analysis and Suggestions are given, with strong practical significance.2 Internet financial backgroundInternet financial platform based on Internet resources, on the basis of the big data and cloud computing new financial model. Internet finance with the help of the Internet technology, mobile communication technology to realize financing, payment and information intermediary business, is a traditional industry and modern information technology represented by the Internet, mobile payment, cloud computing, data mining, search engines and social networks, etc.) Produced by the combination of emerging field. Whether financial or the Internet, the Internet is just the difference on the strategic, there is no strict definition of distinction. As the financial and the mutual penetration and integration of the Internet, the Internet financial can refer all through the Internet technology to realize the financing behavior. Internet financial is the Internet and the traditional financial product of mutual infiltration and fusion, the new financial model has a profound background. The emergence of the Internet financial is a craving for cost reduction is the result of the financial subject, is also inseparable from the rapid development of modern information technology to provide technical support.2.1 Demands factorsTraditional financial markets there are serious information asymmetry, greatly improve the transaction risk. Exhibition gradually changed people's spending habits, more and more high to the requirement of service efficiency and experience; In addition, rising operating costs, to stimulate the financial main body's thirst for financial innovation and reform; This pulled by demand factors, become the Internet financial produce powerful inner driving force.2.2 Supply driving factorData mining, cloud computing and Internet search engines, such as the development of technology, financial and institutional technology platform. Innovation, enterprise profit-driven mixed management, etc., for the transformation of traditional industry and Internet companies offered financial sector penetration may, for the birth and development of the Internet financial external technical support, become a kind of externalization of constitution. 
In the Internet "openness, equality, cooperation, share" platform, third-party financing and payment, online investment finance, credit evaluation model, not only makes the traditional pattern of financial markets will be great changes have taken place, and modern information technology is more easily to serve various financial entities. For the traditional financial institutions, especially in the banking, securities and insurance institutions, more opportunities than the crisis, development is better than a challenge.3 Internet financial constitute the main body3.1 Capital providersBetween Internet financial comprehensive, its capital providers include not only the traditional financial institutions, including penetrating into the Internet. In terms of the current market structure, the traditional financial sector mainly include commercial Banks, securities, insurance, fund and small loan companies, mainly includes the part of the Internet companies and emerging subject, such as the amazon, and some channels on Internet for the company. These companies is not only the providers of capital market, but also too many traditional so-called "low net worth clients" suppliers of funds into the market. In operation form, the former mainly through the Internet, to the traditional business externalization, the latter mainlythrough Internet channels to penetrate business, both externalization and penetration, both through the Internet channel to achieve the financial business innovation and reform.3.2 Capital demandersInternet financial mode of capital demanders although there is no breakthrough in the traditional government, enterprise and individual, but on the benefit has greatly changed. In the rise and development of the Internet financial, especially Internet companies to enter the threshold of made in the traditional financial institutions, relatively weak groups and individual demanders, have a more convenient and efficient access to capital. As a result, the Internet brought about by the universality and inclusive financial better than the previous traditional financial pattern.3.3 IntermediariesInternet financial rely on efficient and convenient information technology, greatly reduces the financial markets is the wrong information. Docking directly through Internet, according to both parties, transaction cost is greatly reduced, so the Internet finance main body for the dependence of the intermediary institutions decreased significantly, but does not mean that the Internet financial markets, there is no intermediary institutions. In terms of the development of the Internet financial situation at present stage, the third-party payment platform plays an intermediary role in this field, not only ACTS as a financial settlement platform, but also to the capital supply and demand of the integration of upstream and downstream link multi-faceted, in meet the funds to pay at the same time, have the effect of capital allocation. Especially in the field of electronic commerce, this function is more obvious.3.4 Large financial dataBig financial data collection refers to the vast amounts of unstructured data, through the study of the depth of its mining and real-time analysis, grasp the customer's trading information, consumption habits and consumption information, and predict customer behavior and make the relevant financial institutions in the product design, precise marketing and greatly improve the efficiency of risk management, etc. 
Financial services platform based on the large data mainly refers to with vast tradingdata of the electronic commerce enterprise's financial services. The key to the big data from a large number of chaotic ability to rapidly gaining valuable information in the data, or from big data assets liquidation ability quickly. Big data information processing, therefore, often together with cloud computing.4 Global economic issuesFOR much of the past year the fast-growing economies of the emerging world watched the Western financial hurricane from afar. Their own banks held few of the mortgage-based assets that undid the rich world’s financial firms. Commodity exporters were thriving, thanks to high prices fo r raw materials. China’s economic juggernaut powered on. And, from Budapest to Brasília, an abundance of credit fuelled domestic demand. Even as talk mounted of the rich world suffering its worst financial collapse since the Depression, emerging economies seemed a long way from the centre of the storm.No longer. As foreign capital has fled and confidence evaporated, the emerging world’s stockmarkets have plunged (in some cases losing half their value) and currencies tumbled. The seizure in the credit market caused havoc, as foreign banks abruptly stopped lending and stepped back from even the most basic banking services, including trade credits.Like their rich-world counterparts, governments are battling to limit the damage (see article). That is easiest for those with large foreign-exchange reserves. Russia is spending $220 billion to shore up its financial services industry. South Korea has guaranteed $100 billion of its banks’ debt. Less well-endowed countries are asking for help.Hungary has secured a EURO5 billion ($6.6 billion) lifeline from the European Central Bank and is negotiating a loan from the IMF, as is Ukraine. Close to a dozen countries are talking to the fund about financial help.Those with long-standing problems are being driven to desperate measures. Argentina is nationalising its private pension funds, seeminglyto stave off default (see article). But even stalwarts are looking weaker. Figures released this week showed that China’s growth slowed to 9% in the year to the third quarter-still a rapid pace but a lot slower than the double-digit rates of recent years.The various emerging economies are in different states of readiness, but the cumulative impact of all this will be enormous. Most obviously, how these countries fare will determine whether the world economy faces a mild recession or something nastier. Emerging economies accounted for around three-quarters of global growth over the past 18 months. But their economic fate will also have political consequences.In many places-eastern Europe is one example (see article)-financial turmoil is hitting weak governments. But even strong regimes could suffer. Some experts think that China needs growth of 7% a year to contain social unrest. More generally, the coming strife will shape the debate about the integration of the world economy. Unlike many previous emerging-market crises, today’s mess spread from the rich world, largely thanks to increasingly integrated capital markets. If emerging economies collapse-either into a currency crisis or a sharp recession-there will be yet more questioning of the wisdom of globalised finance.Fortunately, the picture is not universally dire. All emerging economies will slow. Some will surely face deep recessions. 
But many are facing the present danger in stronger shape than ever before, armed with large reserves, flexible currencies and strong budgets. Good policy-both at home and in the rich world-can yet avoid a catastrophe.One reason for hope is that the direct economic fallout from the rich world’s d isaster is manageable. Falling demand in America and Europe hurts exports, particularly in Asia and Mexico. Commodity prices have fallen: oil is down nearly 60% from its peak and many crops and metals have done worse. That has a mixed effect. Although it hurtscommodity-exporters from Russia to South America, it helps commodity importers in Asia and reduces inflation fears everywhere. Countries like Venezuela that have been run badly are vulnerable (see article), but given the scale of the past boom, the commodity bust so far seems unlikely to cause widespread crises.The more dangerous shock is financial. Wealth is being squeezed as asset prices decline. China’s house prices, for instance, have started falling (see article). This will dampen domestic confidence, even though consumers are much less indebted than they are in the rich world. Elsewhere, the sudden dearth of foreign-bank lending and the flight of hedge funds and other investors from bond markets has slammed the brakes on credit growth. And just as booming credit once underpinned strong domestic spending, so tighter credit will mean slower growth.Again, the impact will differ by country. Thanks to huge current-account surpluses in China and the oil-exporters in the Gulf, emerging economies as a group still send capital to the rich world. But over 80 have deficits of more than 5% of GDP. Most of these are poor countries that live off foreign aid; but some larger ones rely on private capital. For the likes of Turkey and South Africa a sudden slowing in foreign financing would force a dramatic adjustment. A particular worry is eastern Europe, where many countries have double-digit deficits. In addition, even some countries with surpluses, such as Russia, have banks that have grown accustomed to easy foreign lending because of the integration of global finance. The rich world’s bank bail-outs may limit the squeeze, but the flow of capital to the emerging world will slow. The Institute of International Finance, a bankers’ group, expects a 30% decline in net flows of private capital from last year.This credit crunch will be grim, but most emerging markets can avoid catastrophe. The biggest ones are in relatively good shape. The morevulnerable ones can (and should) be helped.Among the giants, China is in a league of its own, with a $2 trillion arsenal of reserves, a current-account surplus, little connection to foreign banks and a budget surplus that offers lots of room to boost spending. Since the country’s leaders have made clear that they will do whatev er it takes to cushion growth, China’s economy is likely to slow-perhaps to 8%-but not collapse. Although that is not enough to save the world economy, such growth in China would put a floor under commodity prices and help other countries in the emerging world.The other large economies will be harder hit, but should be able to weather the storm. India has a big budget deficit and many Brazilian firms have a large foreign-currency exposure. But Brazil’s economy is diversified and both countries have plenty of reserves to smooth the shift to slower growth. With $550 billion of reserves, Russia ought to be able to stop a run on the rouble. 
In the short-term at least, the most vulnerable countries are all smaller ones.There will be pain as tighter credit forces adjustments. But sensible, speedy international assistance would make a big difference. Several emerging countries have asked America’s Federal Reserve for liquidity support; some hope that China will bail them out. A better route is surely the IMF, which has huge expertise and some $250 billion to lend. Sadly, borrowing from the fund carries a stigma. That needs to change. The IMF should develop quicker, more flexible financial instruments and minimise the conditions it attaches to loans. Over the past month deft policymaking saw off calamity in the rich world. Now it is time for something similar in the emerging world.5 ConclusionsInternet financial model can produce not only huge social benefit, lower transaction costs, provide higher than the existing direct and indirect financingefficiency of the allocation of resources, to provide power for economic development, will also be able to use the Internet and its related software technology played down the traditional finance specialized division of labor, makes the financial participants more mass popularization, risk pricing term matching complex transactions, tend to be simple. Because of the Internet financial involved in the field are mainly concentrated in the field of traditional financial institutions to the current development is not thorough, namely traditional financial "long tail" market, can complement with the original traditional financial business situation, so in the short term the Internet finance from the Angle of the size of the market will not make a big impact to the traditional financial institutions, but the Internet financial business model, innovative ideas, and its apparent high efficiency for the traditional financial institutions brought greater impact on the concept, also led to the traditional financial institutions to further accelerate the mutual penetration and integration with the Internet.译文:互联网金融对传统金融的影响作者:罗萨米;拉夫雷特摘要网络的发展,深刻地改变甚至颠覆了许多传统行业,金融业也不例外。

文学作品中英文对照外文翻译文献

本文旨在汇总文学作品中的英文和中文对照外文翻译文献,共有以下几篇:
1. 《傲慢与偏见》
翻译:英文原版名为“Pride and Prejudice”,通行的中文译本由王科一翻译。

该小说是英国作家简·奥斯汀的代表作之一,描绘了19世纪英国中上层社会的生活和爱情故事。

2. 《了不起的盖茨比》
翻译:英文原版名为“The Great Gatsby”,通行的中文译本由巫宁坤翻译。

小说主要讲述了居住在纽约长岛的神秘富豪盖茨比为重新赢得旧爱黛西而付出的努力,是20世纪美国文学的经典之作。

3. 《麦田里的守望者》
翻译:英文原版名为“The Catcher in the Rye”,通行的中文译本由施咸荣翻译。

该小说主人公霍尔顿是美国现代文学中最为知名的反英雄形象之一,作品深刻地揭示了青少年内心的孤独和矛盾。

4. 《1984》
翻译:英文原版名为“1984”,通行的中文译本由董乐山翻译。

该小说是英国作家乔治·奥威尔的代表作之一,描绘了一个虚构的极权主义社会。

以上是部分文学作品的中英文对照外文翻译文献,可以帮助读者更好地理解和学习相关文学作品。

仓储物流外文文献翻译中英文原文及译文2023-2023原文1:The Current Trends in Warehouse Management and LogisticsWarehouse management is an essential component of any supply chain and plays a crucial role in the overall efficiency and effectiveness of logistics operations. With the rapid advancement of technology and changing customer demands, the field of warehouse management and logistics has seen several trends emerge in recent years.One significant trend is the increasing adoption of automation and robotics in warehouse operations. Automated systems such as conveyor belts, robotic pickers, and driverless vehicles have revolutionized the way warehouses function. These technologies not only improve accuracy and speed but also reduce labor costs and increase safety.Another trend is the implementation of real-time tracking and visibility systems. Through the use of RFID (radio-frequency identification) tags and GPS (global positioning system) technology, warehouse managers can monitor the movement of goods throughout the entire supply chain. This level of visibility enables better inventory management, reduces stockouts, and improves customer satisfaction.Additionally, there is a growing focus on sustainability in warehouse management and logistics. Many companies are implementing environmentally friendly practices such as energy-efficient lighting, recycling programs, and alternativetransportation methods. These initiatives not only contribute to reducing carbon emissions but also result in cost savings and improved brand image.Furthermore, artificial intelligence (AI) and machine learning have become integral parts of warehouse management. AI-powered systems can analyze large volumes of data to optimize inventory levels, forecast demand accurately, and improve operational efficiency. Machine learning algorithms can also identify patterns and anomalies, enabling proactive maintenance and minimizing downtime.In conclusion, warehouse management and logistics are continuously evolving fields, driven by technological advancements and changing market demands. The trends discussed in this article highlight the importance of adopting innovative solutions to enhance efficiency, visibility, sustainability, and overall performance in warehouse operations.译文1:仓储物流管理的当前趋势仓储物流管理是任何供应链的重要组成部分,并在物流运营的整体效率和效力中发挥着至关重要的作用。

中英文双语外文文献翻译:一种基于...


A Novel Divide-and-Conquer Model for CPI Prediction UsingARIMA, Gray Model and BPNNAbstract:This paper proposes a novel divide-and-conquer model for CPI prediction with the existing compilation method of the Consumer Price Index (CPI) in China. Historical national CPI time series is preliminary divided into eight sub-indexes including food, articles for smoking and drinking, clothing, household facilities, articles and maintenance services, health care and personal articles, transportation and communication, recreation, education and culture articles and services, and residence. Three models including back propagation neural network (BPNN) model, grey forecasting model (GM (1, 1)) and autoregressive integrated moving average (ARIMA) model are established to predict each sub-index, respectively. Then the best predicting result among the three models’for each sub-index is identified. To further improve the performance, special modification in predicting method is done to sub-CPIs whose forecasting results are not satisfying enough. After improvement and error adjustment, we get the advanced predicting results of the sub-CPIs. Eventually, the best predicting results of each sub-index are integrated to form the forecasting results of the national CPI. Empirical analysis demonstrates that the accuracy and stability of the introduced method in this paper is better than many commonly adopted forecasting methods, which indicates the proposed method is an effective and alternative one for national CPI prediction in China.1.IntroductionThe Consumer Price Index (CPI) is a widely used measurement of cost of living. It not only affects the government monetary, fiscal, consumption, prices, wages, social security, but also closely relates to the residents’daily life. As an indicator of inflation in China economy, the change of CPI undergoes intense scrutiny. For instance, The People's Bank of China raised the deposit reserve ratio in January, 2008 before the CPI of 2007 was announced, for it is estimated that the CPI in 2008 will increase significantly if no action is taken. Therefore, precisely forecasting the change of CPI is significant to many aspects of economics, some examples include fiscal policy, financial markets and productivity. Also, building a stable and accurate model to forecast the CPI will have great significance for the public, policymakers and research scholars.Previous studies have already proposed many methods and models to predict economic time series or indexes such as CPI. Some previous studies make use of factors that influence the value of the index and forecast it by investigating the relationship between the data of those factors and the index. These forecasts are realized by models such as Vector autoregressive (VAR)model1 and genetic algorithms-support vector machine (GA-SVM) 2.However, these factor-based methods, although effective to some extent, simply rely on the correlation between the value of the index and limited number of exogenous variables (factors) and basically ignore the inherent rules of the variation of the time series. As a time series itself contains significant amount of information3, often more than a limited number of factors can do, time series-based models are often more effective in the field of prediction than factor-based models.Various time series models have been proposed to find the inherent rules of the variation in the series. Many researchers have applied different time series models to forecasting the CPI and other time series data. 
For example, the ARIMA model once served as a practical method in predicting the CPI4. It was also applied to predict submicron particle concentrations frommeteorological factors at a busy roadside in Hangzhou, China5. What’s more, the ARIMA model was adopted to analyse the trend of pre-monsoon rainfall data forwestern India6. Besides the ARIMA model, other models such as the neural network, gray model are also widely used in the field of prediction. Hwang used the neural-network to forecast time series corresponding to ARMA (p, q) structures and found that the BPNNs generally perform well and consistently when a particular noise level is considered during the network training7. Aiken also used a neural network to predict the level of CPI and reached a high degree of accuracy8. Apart from the neural network models, a seasonal discrete grey forecasting model for fashion retailing was proposed and was found practical for fashion retail sales forecasting with short historical data and better than other state-of-art forecastingtechniques9. Similarly, a discrete Grey Correlation Model was also used in CPI prediction10. Also, Ma et al. used gray model optimized by particle swarm optimization algorithm to forecast iron ore import and consumption of China11. Furthermore, to deal with the nonlinear condition, a modified Radial Basis Function (RBF) was proposed by researchers.In this paper, we propose a new method called “divide-and-conquer model”for the prediction of the CPI.We divide the total CPI into eight categories according to the CPI construction and then forecast the eight sub- CPIs using the GM (1, 1) model, the ARIMA model and the BPNN. To further improve the performance, we again make prediction of the sub-CPIs whoseforecasting results are not satisfying enough by adopting new forecasting methods. After improvement and error adjustment, we get the advanced predicting results of the sub-CPIs. Finally we get the total CPI prediction by integrating the best forecasting results of each sub-CPI.The rest of this paper is organized as follows. In section 2, we give a brief introduction of the three models mentioned above. And then the proposed model will be demonstrated in the section 3. In section 4 we provide the forecasting results of our model and in section 5 we make special improvement by adjusting the forecasting methods of sub-CPIs whose predicting results are not satisfying enough. And in section 6 we give elaborate discussion and evaluation of the proposed model. Finally, the conclusion is summarized in section 7.2.Introduction to GM(1,1), ARIMA & BPNNIntroduction to GM(1,1)The grey system theory is first presented by Deng in 1980s. In the grey forecasting model, the time series can be predicted accurately even with a small sample by directly estimating the interrelation of data. The GM(1,1) model is one type of the grey forecasting which is widely adopted. It is a differential equation model of which the order is 1 and the number of variable is 1, too. The differential equation is:Introduction to ARIMAAutoregressive Integrated Moving Average (ARIMA) model was first put forward by Box and Jenkins in 1970. The model has been very successful by taking full advantage of time series data in the past and present. ARIMA model is usually described as ARIMA (p, d, q), p refers to the order of the autoregressive variable, while d and q refer to integrated, and moving average parts of the model respectively. 
When one of the three parameters is zero, the model is changed to model “AR”, “MR”or “ARMR”. When none of the three parameters is zero, the model is given by:where L is the lag number,?t is the error term.Introduction to BPNNArtificial Neural Network (ANN) is a mathematical and computational model which imitates the operation of neural networks of human brain. ANN consists of several layers of neurons. Neurons of contiguous layers are connected with each other. The values of connections between neurons are called “weight”. Back Propagation Neural Network (BPNN) is one of the most widely employed neural network among various types of ANN. BPNN was put forward by Rumelhart and McClelland in 1985. It is a common supervised learning network well suited for prediction. BPNN consists of three parts including one input layer, several hidden layers and one output layer, as is demonstrated in Fig 1. The learning process of BPNN is modifying the weights of connections between neurons based on the deviation between the actual output and the target output until the overall error is in the acceptable range.Fig. 1. Back-propagation Neural Network3.The Proposed MethodThe framework of the dividing-integration modelThe process of forecasting national CPI using the dividing-integration model is demonstrated in Fig 2.Fig. 2.The framework of the dividing-integration modelAs can be seen from Fig. 2, the process of the proposed method can be divided into the following steps: Step1: Data collection. The monthly CPI data including total CPI and eight sub-CPIs are collected from the official website of China’s State Statistics Bureau (/doc/d62de4b46d175f0e7cd184254b35eefdc9d31514.html /).Step2: Dividing the total CPI into eight sub-CPIs. In this step, the respective weight coefficient of eight sub- CPIs in forming the total CPI is decided by consulting authoritative source .(/doc/d62de4b46d175f0e7cd184254b35eefdc9d31514.html /). The eight sub-CPIs are as follows: 1. Food CPI; 2. Articles for Smoking and Drinking CPI; 3. Clothing CPI; 4. Household Facilities, Articles and Maintenance Services CPI; 5. Health Care and Personal Articles CPI; 6. Transportation and Communication CPI;7. Recreation, Education and Culture Articles and Services CPI; 8. Residence CPI. The weight coefficient of each sub-CPI is shown in Table 8.Table 1. 8 sub-CPIs weight coefficient in the total indexNote: The index number stands for the corresponding type of sub-CPI mentioned before. Other indexes appearing in this paper in such form have the same meaning as this one.So the decomposition formula is presented as follows:where TI is the total index; Ii (i 1,2, ,8) are eight sub-CPIs. To verify the formula, we substitute historical numeric CPI and sub-CPI values obtained in Step1 into the formula and find the formula is accurate.Step3: The construction of the GM (1, 1) model, the ARIMA (p, d, q) model and the BPNN model. The three models are established to predict the eight sub-CPIs respectively.Step4: Forecasting the eight sub-CPIs using the three models mentioned in Step3 and choosing the best forecasting result for each sub-CPI based on the errors of the data obtained from the three models.Step5: Making special improvement by adjusting the forecasting methods of sub-CPIs whose predicting results are not satisfying enough and get advanced predicting results of total CPI. 
Step6: Integrating the best forecasting results of 8 sub-CPIs to form the prediction of total CPI with the decomposition formula in Step2.In this way, the whole process of the prediction by the dividing-integration model is accomplished.3.2. The construction of the GM(1,1) modelThe process of GM (1, 1) model is represented in the following steps:Step1: The original sequence:Step2: Estimate the parameters a and u using the ordinary least square (OLS). Step3: Solve equation as follows.Step4: Test the model using the variance ratio and small error possibility.The construction of the ARIMA modelFirstly, ADF unit root test is used to test the stationarity of the time series. If the initial time series is not stationary, a differencing transformation of the data is necessary to make it stationary. Then the values of p and q are determined by observing the autocorrelation graph, partial correlation graph and the R-squared value.After the model is built, additional judge should be done to guarantee that the residual error is white noise through hypothesis testing. Finally the model is used to forecast the future trend ofthe variable.The construction of the BPNN modelThe first thing is to decide the basic structure of BP neural network. After experiments, we consider 3 input nodes and 1 output nodes to be the best for the BPNN model. This means we use the CPI data of time , ,toforecast the CPI of time .The hidden layer level and the number of hidden neurons should also be defined. Since the single-hidden- layer BPNN are very good at non-liner mapping, the model is adopted in this paper. Based on the Kolmogorov theorem and testing results, we define 5 to be the best number of hidden neurons. Thus the 3-5-1 BPNN structure is determined.As for transferring function and training algorithm, we select ‘tansig’as the transferring function for middle layer, ‘logsig’for input layer and ‘traingd’as training algorithm. The selection is based on the actual performance of these functions, as there are no existing standards to decide which ones are definitely better than others.Eventually, we decide the training times to be 35000 and the goal or the acceptable error to be 0.01.4.Empirical AnalysisCPI data from Jan. 2012 to Mar. 2013 are used to build the three models and the data from Apr. 2013 to Sept. 2013 are used to test the accuracy and stability of these models. What’s more, the MAPE is adopted to evaluate the performance of models. The MAPE is calculated by the equation:Data sourceAn appropriate empirical analysis based on the above discussion can be performed using suitably disaggregated data. We collect the monthly data of sub-CPIs from the website of National Bureau of Statistics of China(/doc/d62de4b46d175f0e7cd184254b35eefdc9d31514.html /).Particularly, sub-CPI data from Jan. 2012 to Mar. 2013 are used to build the three models and the data from Apr. 2013 to Sept. 2013 are used to test the accuracy and stability of these models.Experimental resultsWe use MATLAB to build the GM (1,1) model and the BPNN model, and Eviews 6.0 to build the ARIMA model. The relative predicting errors of sub-CPIs are shown in Table 2.Table 2.Error of Sub-CPIs of the 3 ModelsFrom the table above, we find that the performance of different models varies a lot, because the characteristic of the sub-CPIs are different. Some sub-CPIs like the Food CPI changes drastically with time while some do not have much fluctuation, like the Clothing CPI. 
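Before moving on to the combination step, a minimal sketch may help make the per-sub-index modelling concrete. The code below implements the GM(1,1) procedure summarised above (form the accumulated series, estimate a and u by ordinary least squares, solve the whitening equation dx(1)/dt + a·x(1) = u, then de-accumulate) and scores the fit with MAPE. The monthly values are hypothetical placeholders rather than the CPI data used in the paper, and the same MAPE routine would be applied unchanged to the ARIMA and BPNN candidates.

```python
import numpy as np

def gm11_fit_predict(x0: np.ndarray, n_ahead: int) -> np.ndarray:
    """GM(1,1): return fitted values for the original series x0 plus
    n_ahead out-of-sample predictions."""
    n = len(x0)
    x1 = np.cumsum(x0)                                   # accumulated (1-AGO) series
    z1 = 0.5 * (x1[1:] + x1[:-1])                        # background values
    B = np.column_stack((-z1, np.ones(n - 1)))
    Y = x0[1:]
    a, u = np.linalg.lstsq(B, Y, rcond=None)[0]          # OLS estimates of a and u
    k = np.arange(n + n_ahead)
    x1_hat = (x0[0] - u / a) * np.exp(-a * k) + u / a    # solution of the whitening equation
    x0_hat = np.empty_like(x1_hat)                       # de-accumulate back to the original scale
    x0_hat[0] = x0[0]
    x0_hat[1:] = np.diff(x1_hat)
    return x0_hat

def mape(actual: np.ndarray, predicted: np.ndarray) -> float:
    """Mean absolute percentage error used to compare the candidate models."""
    return float(np.mean(np.abs((actual - predicted) / actual)))

# Hypothetical monthly values of one sub-CPI (same month of previous year = 100).
food_cpi = np.array([104.8, 103.9, 103.4, 102.7, 103.1, 104.0,
                     104.5, 104.7, 103.2, 102.5, 103.0, 103.7])
fitted = gm11_fit_predict(food_cpi, n_ahead=6)
print("in-sample MAPE :", round(mape(food_cpi, fitted[:len(food_cpi)]), 4))
print("next six months:", np.round(fitted[len(food_cpi):], 2))
```

For each sub-index the candidate with the lowest error would be retained, and the total CPI forecast then formed as the weighted sum of the eight retained forecasts using the weights of Table 1, which is the combination step described next.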
We use different models to predict the sub- CPIs and combine them by equation 7.Where Y refers to the predicted rate of the total CPI, is the weight of the sub-CPI which has already been shown in Table1and is the predicted value of the sub-CPI which has the minimum error among the three models mentioned above. The model chosen will be demonstrated in Table 3:Table 3.The model used to forecastAfter calculating, the error of the total CPI forecasting by the dividing-integration model is 0.0034.5.Model Improvement & Error AdjustmentAs we can see from Table 3, the prediction errors of sub-CPIs are mostly below 0.004 except for two sub- CPIs: Food CPI whose error reaches 0.0059 and Transportation & Communication CPI 0.0047.In order to further improve our forecasting results, we modify the prediction errors of the two aforementioned sub-CPIs by adopting other forecasting methods or models to predict them. The specific methods are as follows.Error adjustment of food CPIIn previous prediction, we predict the Food CPI using the BPNN model directly. However, the BPNN model is not sensitive enough to investigate the variation in the values of the data. For instance, although the Food CPI varies a lot from month to month, the forecasting values of it are nearly all around 103.5, which fails to make meaningful prediction.We ascribe this problem to the feature of the training data. As we can see from the original sub-CPI data on the website of National Bureau of Statistics of China, nearly all values of sub-CPIs are around 100. As for Food CPI, although it does have more absolute variations than others, its changes are still very small relative to the large magnitude of the data (100). Thus it will be more difficult for the BPNN model to detect the rules of variations in training data and the forecastingresults are marred.Therefore, we use the first-order difference series of Food CPI instead of the original series to magnify the relative variation of the series forecasted by the BPNN. The training data and testing data are the same as that in previous prediction. The parameters and functions of BPNN are automatically decided by the software, SPSS.We make 100 tests and find the average forecasting error of Food CPI by this method is 0.0028. The part of the forecasting errors in our tests is shown as follows in Table 4:Table 4.The forecasting errors in BPNN testError adjustment of transportation &communication CPIWe use the Moving Average (MA) model to make new prediction of the Transportation and Communication CPI because the curve of the series is quite smooth with only a few fluctuations. We have the following equation(s):where X1, X2…Xn is the time series of the Transportation and Communication CPI, is the value of moving average at time t, is a free parameter which should be decided through experiment.To get the optimal model, we range the value of from 0 to 1. 
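Since the exact form of the moving-average equation is not shown above, the sketch below assumes the commonly used exponentially weighted form M_t = a·X_t + (1 - a)·M_{t-1} and simply searches the free parameter a over [0, 1]. The series values are hypothetical placeholders for the Transportation and Communication sub-CPI, not the paper's data.

```python
import numpy as np

def smoothed_forecast(series: np.ndarray, a: float) -> np.ndarray:
    """One-step-ahead forecasts from the smoother M_t = a*X_t + (1-a)*M_{t-1}."""
    m = np.empty_like(series, dtype=float)
    m[0] = series[0]
    for t in range(1, len(series)):
        m[t] = a * series[t] + (1.0 - a) * m[t - 1]
    # The forecast for period t+1 is the smoothed value at period t.
    return np.concatenate(([series[0]], m[:-1]))

def mape(actual, predicted):
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.mean(np.abs((actual - predicted) / actual)))

# Hypothetical monthly values of the Transportation and Communication sub-CPI.
tc_cpi = np.array([99.4, 99.1, 99.0, 98.9, 99.2, 99.3,
                   99.5, 99.4, 99.6, 99.5, 99.7, 99.6])

best_a, best_err = None, float("inf")
for a in np.arange(0.0, 1.0001, 0.05):     # range the free parameter over [0, 1]
    err = mape(tc_cpi[1:], smoothed_forecast(tc_cpi, a)[1:])
    if err < best_err:
        best_a, best_err = a, err
print(f"best a = {best_a:.2f}, MAPE = {best_err:.4f}")
```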
Finally we find that when the value of a is 0.95, the forecasting error is the smallest, which is 0.0039.The predicting outcomes are shown as follows in Table5:Table 5.The Predicting Outcomes of MA modelAdvanced results after adjustment to the modelsAfter making some adjustment to our previous model, we obtain the advanced results as follows in Table 6: Table 6.The model used to forecast and the Relative ErrorAfter calculating, the error of the total CPI forecasting by the dividing-integration model is 0.2359.6.Further DiscussionTo validate the dividing-integration model proposed in this paper, we compare the results of our model with the forecasting results of models that do not adopt the dividing-integration method. For instance, we use the ARIMA model, the GM (1, 1) model, the SARIMA model, the BRF neural network (BRFNN) model, the Verhulst model and the Vector Autoregression (VAR) model respectively to forecast the total CPI directly without the process of decomposition and integration. The forecasting results are shown as follows in Table7.From Table 7, we come to the conclusion that the introduction of dividing-integration method enhances the accuracy of prediction to a great extent. The results of model comparison indicate that the proposed method is not only novel but also valid and effective.The strengths of the proposed forecasting model are obvious. Every sub-CPI time series have different fluctuation characteristics. Some are relatively volatile and have sharp fluctuations such as the Food CPI while others are relatively gentle and quiet such as the Clothing CPI. As a result, by dividing the total CPI into several sub-CPIs, we are able to make use of the characteristics of each sub-CPI series and choose the best forecasting model among several models for every sub-CPI’s prediction. Moreover, the overall prediction error is provided in the following formula:where TE refers to the overall prediction error of the total CPI, is the weight of the sub-CPI shown in table 1 and is the forecasting error of corresponding sub-CPI.In conclusion, the dividing-integration model aims at minimizing the overall prediction errors by minimizing the forecasting errors of sub-CPIs.7.Conclusions and future workThis paper creatively transforms the forecasting of national CPI into the forecasting of 8 sub-CPIs. In the prediction of 8 sub-CPIs, we adopt three widely used models: the GM (1, 1) model, the ARIMA model and the BPNN model. Thus we can obtain the best forecasting results for each sub-CPI. Furthermore, we make special improvement by adjusting the forecasting methods of sub-CPIs whose predicting results are not satisfying enough and get the advanced predicting results of them. Finally, the advanced predicting results of the 8 sub- CPIs are integrated to formthe forecasting results of the total CPI.Furthermore, the proposed method also has several weaknesses and needs improving. Firstly, The proposed model only uses the information of the CPI time series itself. If the model can make use of other information such as the information provided by factors which make great impact on the fluctuation of sub-CPIs, we have every reason to believe that the accuracy and stability of the model can be enhanced. For instance, the price of pork is a major factor in shaping the Food CPI. If this factor is taken into consideration in the prediction of Food CPI, the forecasting results will probably be improved to a great extent. 
Second, since these models forecast the future by looking at the past, they cannot sense sudden or recent changes in the environment. If the model could take web news or rapid public reactions into account, it would react much faster to sudden incidents and events. Finally, the performance of sub-CPI prediction can still be improved. In this paper we use GM (1, 1), ARIMA and BPNN to forecast the sub-CPIs; newer prediction methods could also be applied. For instance, besides the BPNN there are other neural networks, such as the genetic algorithm neural network (GANN) and the wavelet neural network (WNN), which might perform better in predicting the sub-CPIs. Methods such as the VAR model and the SARIMA model should also be taken into consideration so as to enhance the accuracy of prediction.

References
1. Wang W, Wang T, Shi Y. Factor analysis on consumer price index rising in China from 2005 to 2008. Management and Service Science 2009; p. 1-4.
2. Qin F, Ma T, Wang J. The CPI forecast based on GA-SVM. Information Networking and Automation 2010; p. 142-147.
3. George EPB, Gwilym MJ, Gregory CR. Time series analysis: forecasting and control. 4th ed. Canada: Wiley; 2008.
4. Weng D. The consumer price index forecast based on ARIMA model. WASE International Conference on Information Engineering 2010; p. 307-310.
5. Jian L, Zhao Y, Zhu YP, Zhang MB, Bertolatti D. An application of ARIMA model to predict submicron particle concentrations from meteorological factors at a busy roadside in Hangzhou, China. Science of the Total Environment 2012;426:336-345.
6. Priya N, Ashoke B, Sumana S, Kamna S. Trend analysis and ARIMA modelling of pre-monsoon rainfall data for western India. Comptes Rendus Geoscience 2013;345:22-27.
7. Hwang HB. Insights into neural-network forecasting of time series corresponding to ARMA(p, q) structures. Omega 2001;29:273-289.
8. Aiken M. Using a neural network to forecast inflation. Industrial Management & Data Systems 1999;7:296-301.
9. Min X, Wong WK. A seasonal discrete grey forecasting model for fashion retailing. Knowledge-Based Systems 2014;57:119-126.
11. Weimin M, Xiaoxi Z, Miaomiao W. Forecasting iron ore import and consumption of China using grey model optimized by particle swarm optimization algorithm. Resources Policy 2013;38:613-620.
12. Zhen D, Feng S. A novel DGM (1, 1) model for consumer price index forecasting. Grey Systems and Intelligent Services (GSIS) 2009; p. 303-307.
13. Yu W, Xu D. Prediction and analysis of Chinese CPI based on RBF neural network. Information Technology and Applications 2009;3:530-533.
14. Zhang GP. Time series forecasting using a hybrid ARIMA and neural network model. Neurocomputing 2003;50:159-175.
15. Pai PF, Lin CS. A hybrid ARIMA and support vector machines model in stock price forecasting. Omega 2005;33(6):497-505.
16. Tseng FM, Yu HC, Tzeng GH. Combining neural network model with seasonal time series ARIMA model. Technological Forecasting and Social Change 2002;69(1):71-87.
17. Cho MY, Hwang JC, Chen CS. Customer short term load forecasting by using ARIMA transfer function model. Energy Management and Power Delivery, Proceedings of EMPD '95, 1995 International Conference on IEEE, 1995;1:317-322.

Translation: A Novel Divide-and-Conquer Model for Forecasting the CPI (Consumer Price Index) Based on ARIMA, the Grey Model and BPNN. Abstract: In this paper, building on the method currently used to compute China's consumer price index (CPI), we propose a new divide-and-conquer model for CPI forecasting.
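To make the dividing-integration step described above concrete, here is a minimal Python sketch of the weighted combination in equation 7 and of the overall error TE. It is only an illustration under stated assumptions: the category names are approximations of China's eight CPI classes, and the weights, forecasts and errors are invented placeholders rather than the values in Tables 1, 3 or 6.

```python
# Illustrative sketch of the integration step: the total CPI forecast is the
# weighted sum of the best sub-CPI forecasts (equation 7), and the overall
# error is the weighted sum of the sub-CPI errors (TE). All numbers are
# placeholders, not the paper's data.

sub_cpi_weights = {                     # w_i: weight of each sub-CPI in the total CPI
    "Food": 0.31,
    "Tobacco & Liquor": 0.04,
    "Clothing": 0.09,
    "Household Facilities": 0.06,
    "Health Care": 0.10,
    "Transportation & Communication": 0.10,
    "Recreation & Education": 0.14,
    "Residence": 0.16,
}

sub_cpi_forecasts = {name: 101.0 for name in sub_cpi_weights}   # y_i from the chosen model
sub_cpi_errors = {name: 0.003 for name in sub_cpi_weights}      # e_i of the chosen model

# Equation 7: Y = sum_i w_i * y_i
total_cpi_forecast = sum(w * sub_cpi_forecasts[name] for name, w in sub_cpi_weights.items())

# Overall prediction error: TE = sum_i w_i * e_i
overall_error = sum(w * sub_cpi_errors[name] for name, w in sub_cpi_weights.items())

print(f"integrated CPI forecast: {total_cpi_forecast:.2f}")
print(f"overall prediction error TE: {overall_error:.4f}")
```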

JAVA Learning Process Paper: Chinese-English Bilingual Foreign Translation Literature

During my process of learning Java, I have found that everyone has their own unique study method; what works for one person may not work for another. As I am studying Java independently, I have not asked ___, and I have had to rely on my own ___ out. While I cannot say whether this is the best method, I can offer it as a reference for others.

When I first started learning Java, ___ understanding of the language. As I progressed, I began to work on small projects to apply what I had learned. One of the biggest challenges I faced was understanding object-oriented programming; ___, once I understood the basics, everything else started to fall into place. ___ practicing programming, I also made sure to take breaks and give my brain time to rest, and I found that this helped me to ___. Overall, my learning process has been a ___ of ___.

Database English References (120 Latest Recommendations)

With the rapid development of China's economy, computer science and technology has advanced quickly across today's scientific and technological fields and has become one of the most widely applied technologies. Among its branches, database technology is one of the fastest-growing and most widely used, and it has become an essential technical foundation and pillar of computer information systems and computer application systems.

Below is a selection of English references on databases; we hope it is helpful to you.

数据库英文参考文献一:[1]Nú?ez Matías,Weht Ruben,Nú?ez Regueiro Manuel. Searching for electronically two dimensional metals in high-throughput ab initio databases[J]. Computational Materials Science,2020,182.[2]Izabela Karsznia,Marta Przychodzeń,Karolina Sielicka. Methodology of the automatic generalization of buildings, road networks, forests and surface waters: a case study based on the Topographic Objects Database in Poland[J]. Geocarto International,2020,35(7).[3]Alankrit Chaturvedi. Secure Cloud Migration Challenges and Solutions[J]. Journal of Research in Science and Engineering,2020,2(4).[4]Ivana Nin?evi? Pa?ali?,Maja ?uku?i?,Mario Jadri?. Smart city research advances in Southeast Europe[J]. International Journal of Information Management,2020.[5]Jongseong Kim,Unil Yun,Eunchul Yoon,Jerry Chun-Wei Lin,Philippe Fournier-Viger. One scan based high average-utility pattern mining in static and dynamic databases[J]. Future Generation Computer Systems,2020.[6]Jo?o Peixoto Martins,António Andrade-Campos,Sandrine Thuillier. Calibration of Johnson-Cook Model Using Heterogeneous Thermo-Mechanical Tests[J]. Procedia Manufacturing,2020,47.[7]Anna Soriani,Roberto Gemignani,Matteo Strano. A Metamodel for the Management of Large Databases: Toward Industry 4.0 in Metal Forming[J]. Procedia Manufacturing,2020,47.[8]Ayman Elbadawi,Karim Mahmoud,Islam Y. Elgendy,Mohammed Elzeneini,Michael Megaly,Gbolahan Ogunbayo,Mohamed A. Omer,Michelle Albert,Samir Kapadia,Hani Jneid. Racial disparities in the utilization and outcomes of transcatheter mitral valve repair: Insights from a national database[J]. Cardiovascular Revascularization Medicine,2020.[9]Maurizio Boccia,Antonio Sforza,Claudio Sterle. Simple Pattern Minimality Problems: Integer Linear Programming Formulations and Covering-Based Heuristic Solving Approaches[J]. INFORMS Journal on Computing,2020.[10]. Inc.; Patent Issued for Systems And User Interfaces For Dynamic Access Of Multiple Remote Databases And Synchronization Of Data Based On User Rules (USPTO 10,628,448)[J]. Computer Technology Journal,2020.[11]. Bank of America Corporation; Patent Issued for System For Electronic Data Verification, Storage, And Transfer (USPTO 10,628,058)[J]. Computer Technology Journal,2020.[12]. Information Technology - Database Management; Data from Technical University Munich (TU Munich) Advance Knowledge in Database Management (Make the most out of your SIMD investments: counter control flow divergence in compiled query pipelines)[J]. Computer Technology Journal,2020.[13]. Information Technology - Database Management; Studies from Pontifical Catholic University Update Current Data on Database Management (General dynamic Yannakakis: conjunctive queries with theta joins under updates)[J]. Computer Technology Journal,2020.[14]Kimothi Dhananjay,Biyani Pravesh,Hogan James M,Soni Akshay,Kelly Wayne. Learning supervised embeddings for large scale sequence comparisons.[J]. PloS one,2020,15(3).[15]. Information Technology; Studies from University of California San Diego (UCSD) Reveal New Findings on Information Technology (A Physics-constrained Data-driven Approach Based On Locally Convex Reconstruction for Noisy Database)[J]. Information Technology Newsweekly,2020.[16]. Information Technology; Researchers from National Institute of Information and Communications Technology Describe Findings in Information Technology (Efficient Discovery of Weighted Frequent Neighborhood Itemsets in Very Large Spatiotemporal Databases)[J]. Information Technology Newsweekly,2020.[17]. 
Information Technology; Investigators at Gdansk University of Technology Report Findings in Information Technology (A Framework for Accelerated Optimization of Antennas Using Design Database and Initial Parameter Set Estimation)[J]. Information Technology Newsweekly,2020.[18]. Information Technology; Study Results from Palacky University Update Understanding of Information Technology (Evaluation of Replication Mechanisms on Selected Database Systems)[J]. Information Technology Newsweekly,2020.[19]Runfola Daniel,Anderson Austin,Baier Heather,Crittenden Matt,Dowker Elizabeth,Fuhrig Sydney,Goodman Seth,Grimsley Grace,Layko Rachel,MelvilleGraham,Mulder Maddy,Oberman Rachel,Panganiban Joshua,Peck Andrew,Seitz Leigh,Shea Sylvia,Slevin Hannah,Youngerman Rebecca,Hobbs Lauren. geoBoundaries: A global database of political administrative boundaries.[J]. PloS one,2020,15(4).[20]Dupré Damien,Krumhuber Eva G,Küster Dennis,McKeown Gary J. A performance comparison of eight commercially available automatic classifiers for facial affect recognition.[J]. PloS one,2020,15(4).[21]Partha Pratim Banik,Rappy Saha,Ki-Doo Kim. An Automatic Nucleus Segmentation and CNN Model based Classification Method of White Blood Cell[J]. Expert Systems With Applications,2020,149.[22]Hang Dong,Wei Wang,Frans Coenen,Kaizhu Huang. Knowledge base enrichment by relation learning from social tagging data[J]. Information Sciences,2020,526.[23]Xiaodong Zhao,Dechang Pi,Junfu Chen. Novel trajectory privacy-preserving method based on clustering using differential privacy[J]. Expert Systems With Applications,2020,149.[24]. Information Technology; Researchers at Beijing University of Posts and Telecommunications Have Reported New Data on Information Technology (Mining top-k sequential patterns in transaction database graphs)[J]. Internet Weekly News,2020.[25]Sunil Kumar Sharma. An empirical model (EM: CCO) for clustering, convergence and center optimization in distributive databases[J]. Journal of Ambient Intelligence and Humanized Computing,2020(prepublish).[26]Naryzhny Stanislav,Klopov Nikolay,Ronzhina Natalia,Zorina Elena,Zgoda Victor,Kleyst Olga,Belyakova Natalia,Legina Olga. A database for inventory of proteoform profiles: "2DE-pattern".[J]. Electrophoresis,2020.[27]Noel Varela,Jesus Silva,Fredy Marin Gonzalez,Pablo Palencia,Hugo Hernandez Palma,Omar Bonerge Pineda. Method for the Recovery of Images in Databases of Rice Grains from Visual Content[J]. Procedia Computer Science,2020,170.[28]Ahmad Rabanimotlagh,Prabhu Janakaraj,Pu Wang. Optimal Crowd-Augmented Spectrum Mapping via an Iterative Bayesian Decision Framework[J]. Ad Hoc Networks,2020.[29]Ismail Boucherit,Mohamed Ould Zmirli,Hamza Hentabli,Bakhtiar Affendi Rosdi. Finger vein identification using deeply-fused Convolutional Neural Network[J]. Journal of King Saud University - Computer and Information Sciences,2020.[30]Sachin P. Patel,S.H. Upadhyay. Euclidean Distance based Feature Ranking andSubset Selection for Bearing Fault Diagnosis[J]. Expert Systems With Applications,2020.[31]Julia Fomina,Denis Safikanov,Alexey Artamonov,Evgeniy Tretyakov. Parametric and semantic analytical search indexes in hieroglyphic languages[J]. Procedia Computer Science,2020,169.[32]Selvine G. Mathias,Sebastian Schmied,Daniel Grossmann. An Investigation on Database Connections in OPC UA Applications[J]. Procedia Computer Science,2020,170.[33]Abdourrahmane Mahamane Atto,Alexandre Benoit,Patrick Lambert. Timed-image based deep learning for action recognition in video sequences[J]. 
Pattern Recognition,2020.[34]Yonis Gulzar,Ali A. Alwan,Abedallah Zaid Abualkishik,Abid Mehmood. A Model for Computing Skyline Data Items in Cloud Incomplete Databases[J]. Procedia Computer Science,2020,170.[35]Xiaohan Yang,Fan Li,Hantao Liu. Deep feature importance awareness based no-reference image quality prediction[J]. Neurocomputing,2020.[36]Dilana Hazer-Rau,Sascha Meudt,Andreas Daucher,Jennifer Spohrs,Holger Hoffmann,Friedhelm Schwenker,Harald C. Traue. The uulmMAC Database—A Multimodal Affective Corpus for Affective Computing in Human-Computer Interaction[J]. Sensors,2020,20(8).[37]Tomá? Pohanka,Vilém Pechanec. Evaluation of Replication Mechanisms on Selected Database Systems[J]. ISPRS International Journal of Geo-Information,2020,9(4).[38]Verheggen Kenneth,Raeder Helge,Berven Frode S,Martens Lennart,Barsnes Harald,Vaudel Marc. Anatomy and evolution of database search engines-a central component of mass spectrometry based proteomic workflows.[J]. Mass spectrometry reviews,2020,39(3).[39]Moscona Leon,Casta?eda Pablo,Masrouha Karim. Citation analysis of the highest-cited articles on developmental dysplasia of the hip.[J]. Journal of pediatric orthopedics. Part B,2020,29(3).[40]Nasseh Daniel,Schneiderbauer Sophie,Lange Michael,Schweizer Diana,Heinemann Volker,Belka Claus,Cadenovic Ranko,Buysse Laurence,Erickson Nicole,Mueller Michael,Kortuem Karsten,Niyazi Maximilian,Marschner Sebastian,Fey Theres. Optimizing the Analytical Value of Oncology-Related Data Based on an In-Memory Analysis Layer: Development and Assessment of the Munich OnlineComprehensive Cancer Analysis Platform.[J]. Journal of medical Internet research,2020,22(4).数据库英文参考文献二:[41]Meiling Chai,Changgeng Li,Hui Huang. A New Indoor Positioning Algorithm of Cellular and Wi-Fi Networks[J]. Journal of Navigation,2020,73(3).[42]Mandy Watson. How to undertake a literature search: a step-by-step guide[J]. British Journal of Nursing,2020,29(7).[43]. Patent Application; "Memorial Facility With Memorabilia, Meeting Room, Secure Memorial Database, And Data Needed For An Interactive Computer Conversation With The Deceased" in Patent Application Approval Process (USPTO 20200089455)[J]. Computer Technology Journal,2020.[44]. Information Technology; Data on Information Technology Detailed by Researchers at Complutense University Madrid (Hr-sql: Extending Sql With Hypothetical Reasoning and Improved Recursion for Current Database Systems)[J]. Computer Technology Journal,2020.[45]. Science - Metabolomics; Study Data from Wake Forest University School of Medicine Update Knowledge of Metabolomics (Software tools, databases and resources in metabolomics: updates from 2018 to 2019)[J]. Computer Technology Journal,2020.[46]. Sigma Computing Inc.; Researchers Submit Patent Application, "GeneratingA Database Query To Dynamically Aggregate Rows Of A Data Set", for Approval (USPTO 20200089796)[J]. Computer Technology Journal,2020.[47]. Machine Learning; Findings on Machine Learning Reported by Investigators at Tongji University (Comparing Machine Learning Algorithms In Predicting Thermal Sensation Using Ashrae Comfort Database Ii)[J]. Computer Technology Journal,2020.[48]. Sigma Computing Inc.; "Generating A Database Query Using A Dimensional Hierarchy Within A Graphical User Interface" in Patent Application Approval Process (USPTO 20200089794)[J]. Computer Technology Journal,2020.[49]Qizhi He,Jiun-Shyan Chen. A physics-constrained data-driven approach based on locally convex reconstruction for noisy database[J]. 
Computer Methods in Applied Mechanics and Engineering,2020,363.[50]José A. Delgado-Osuna,Carlos García-Martínez,JoséGómez-Barbadillo,Sebastián Ventura. Heuristics for interesting class association rule mining a colorectal cancer database[J]. Information Processing andManagement,2020,57(3).[51]Edival Lima,Thales Vieira,Evandro de Barros Costa. Evaluating deep models for absenteeism prediction of public security agents[J]. Applied Soft Computing Journal,2020,91.[52]S. Fareri,G. Fantoni,F. Chiarello,E. Coli,A. Binda. Estimating Industry 4.0 impact on job profiles and skills using text mining[J]. Computers in Industry,2020,118.[53]Estrela Carlos,Pécora Jesus Djalma,Dami?o Sousa-Neto Manoel. The Contribution of the Brazilian Dental Journal to the Brazilian Scientific Research over 30 Years.[J]. Brazilian dental journal,2020,31(1).[54]van den Oever L B,Vonder M,van Assen M,van Ooijen P M A,de Bock G H,Xie X Q,Vliegenthart R. Application of artificial intelligence in cardiac CT: From basics to clinical practice.[J]. European journal of radiology,2020,128.[55]Li Liu,Deborah Silver,Karen Bemis. Visualizing events in time-varying scientific data[J]. Journal of Visualization,2020,23(2–3).[56]. Information Technology - Database Management; Data on Database Management Discussed by Researchers at Arizona State University (Architecture of a Distributed Storage That Combines File System, Memory and Computation In a Single Layer)[J]. Information Technology Newsweekly,2020.[57]. Information Technology - Database Management; New Findings from Guangzhou Medical University Update Understanding of Database Management (GREG-studying transcriptional regulation using integrative graph databases)[J]. Information Technology Newsweekly,2020.[58]. Technology - Laser Research; Reports from Nicolaus Copernicus University in Torun Add New Data to Findings in Laser Research (Nonlinear optical study of Schiff bases using Z-scan technique)[J]. Journal of Technology,2020.[59]Loeffler Caitlin,Karlsberg Aaron,Martin Lana S,Eskin Eleazar,Koslicki David,Mangul Serghei. Improving the usability and comprehensiveness of microbial databases.[J]. BMC biology,2020,18(1).[60]Caitlin Loeffler,Aaron Karlsberg,Lana S. Martin,Eleazar Eskin,David Koslicki,Serghei Mangul. Improving the usability and comprehensiveness of microbial databases[J]. BMC Biology,2020,18(1).[61]Dean H. Barrett,Aderemi Haruna. Artificial intelligence and machine learningfor targeted energy storage solutions[J]. Current Opinion in Electrochemistry,2020,21.[62]Chenghao Sun. Research on investment decision-making model from the perspective of “Internet of Things + Big data”[J]. Future Generation Computer Systems,2020,107.[63]Sa?a Adamovi?,Vladislav Mi?kovic,Nemanja Ma?ek,Milan Milosavljevi?,Marko ?arac,Muzafer Sara?evi?,Milan Gnjatovi?. An efficient novel approach for iris recognition based on stylometric features and machine learning techniques[J]. Future Generation Computer Systems,2020,107.[64]Olivier Pivert,Etienne Scholly,Grégory Smits,Virginie Thion. Fuzzy quality-Aware queries to graph databases[J]. Information Sciences,2020,521.[65]Javier Fernando Botía Valderrama,Diego José Luis Botía Valderrama. Two cluster validity indices for the LAMDA clustering method[J]. Applied Soft Computing Journal,2020,89.[66]Amer N. Kadri,Marie Bernardo,Steven W. Werns,Amr E. Abbas. TAVR VS. SAVR IN PATIENTS WITH CANCER AND AORTIC STENOSIS: A NATIONWIDE READMISSION DATABASE REGISTRY STUDY[J]. Journal of the American College of Cardiology,2020,75(11).[67]. 
Information Technology; Findings from P. Sjolund and Co-Authors Update Knowledge of Information Technology (Whole-genome sequencing of human remains to enable genealogy DNA database searches - A case report)[J]. Information Technology Newsweekly,2020.[68]. Information Technology; New Findings from P. Yan and Co-Researchers in the Area of Information Technology Described (BrainEXP: a database featuring with spatiotemporal expression variations and co-expression organizations in human brains)[J]. Information Technology Newsweekly,2020.[69]. IDERA; IDERA Database Tools Expand Support for Cloud-Hosted Databases[J]. Information Technology Newsweekly,2020.[70]Adrienne Warner,David A. Hurley,Jonathan Wheeler,Todd Quinn. Proactive chat in research databases: Inviting new and different questions[J]. The Journal of Academic Librarianship,2020,46(2).[71]Chidentree Treesatayapun. Discrete-time adaptive controller based on IF-THEN rules database for novel architecture of ABB IRB-1400[J]. Journal of the Franklin Institute,2020.[72]Tian Fang,Tan Han,Cheng Zhang,Ya Juan Yao. Research and Construction of the Online Pesticide Information Center and Discovery Platform Based on Web Crawler[J]. Procedia Computer Science,2020,166.[73]Dinusha Vatsalan,Peter Christen,Erhard Rahm. Incremental clustering techniques for multi-party Privacy-Preserving Record Linkage[J]. Data & Knowledge Engineering,2020.[74]Ying Xin Liu,Xi Yuan Li. Design and Implementation of a Business Platform System Based on Java[J]. Procedia Computer Science,2020,166.[75]Akhilesh Kumar Bajpai,Sravanthi Davuluri,Kriti Tiwary,Sithalechumi Narayanan,Sailaja Oguru,Kavyashree Basavaraju,Deena Dayalan,Kavitha Thirumurugan,Kshitish K. Acharya. Systematic comparison of the protein-protein interaction databases from a user's perspective[J]. Journal of Biomedical Informatics,2020,103.[76]P. Raveendra,V. Siva Reddy,G.V. Subbaiah. Vision based weed recognition using LabVIEW environment for agricultural applications[J]. Materials Today: Proceedings,2020,23(Pt 3).[77]Christine Rosati,Emily Bakinowski. Preparing for the Implementation of an Agnis Enabled Data Reporting System and Comprehensive Research Level Data Repository for All Cellular Therapy Patients[J]. Biology of Blood and Marrow Transplantation,2020,26(3).[78]Zeiser Felipe André,da Costa Cristiano André,Zonta Tiago,Marques Nuno M C,Roehe Adriana Vial,Moreno Marcelo,da Rosa Righi Rodrigo. Segmentation of Masses on Mammograms Using Data Augmentation and Deep Learning.[J]. Journal of digital imaging,2020.[79]Dhaked Devendra K,Guasch Laura,Nicklaus Marc C. Tautomer Database: A Comprehensive Resource for Tautomerism Analyses.[J]. Journal of chemical information and modeling,2020,60(3).[80]Pian Cong,Zhang Guangle,Gao Libin,Fan Xiaodan,Li Fei. miR+Pathway: the integration and visualization of miRNA and KEGG pathways.[J]. Briefings in bioinformatics,2020,21(2).数据库英文参考文献三:[81]Marcello W. M. Ribeiro,Alexandre A. B. Lima,Daniel Oliveira. OLAP parallel query processing in clouds with C‐ParGRES[J]. Concurrency and Computation: Practice and Experience,2020,32(7).[82]Li Gao,Peng Lin,Peng Chen,Rui‐Zhi Gao,Hong Yang,Yun He,Jia‐Bo Chen,Yi ‐Ge Luo,Qiong‐Qian Xu,Song‐Wu Liang,Jin‐Han Gu,Zhi‐Guang Huang,Yi‐Wu Dang,Gang Chen. A novel risk signature that combines 10 long noncoding RNAs to predict neuroblastoma prognosis[J]. Journal of Cellular Physiology,2020,235(4).[83]Julia Krzykalla,Axel Benner,Annette Kopp‐Schneider. Exploratory identification of predictive biomarkers in randomized trials with normal endpoints[J]. 
Statistics in Medicine,2020,39(7).[84]Jianye Ching,Kok-Kwang Phoon. Measuring Similarity between Site-Specific Data and Records from Other Sites[J]. ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part A: Civil Engineering,2020,6(2).[85]Anne Kelly Knowles,Justus Hillebrand,Paul B. Jaskot,Anika Walke. Integrative, Interdisciplinary Database Design for the Spatial Humanities: the Case of the Holocaust Ghettos Project[J]. International Journal of Humanities and Arts Computing,2020,14(1-2).[86]Sheng-Feng Sung,Pei-Ju Lee,Cheng-Yang Hsieh,Wan-Lun Zheng. Medication Use and the Risk of Newly Diagnosed Diabetes in Patients with Epilepsy: A Data Mining Application on a Healthcare Database[J]. Journal of Organizational and End User Computing (JOEUC),2020,32(2).[87]Rashkovits Rami,Lavy Ilana. Students' Difficulties in Identifying the Use of Ternary Relationships in Data Modeling[J]. International Journal of Information and Communication Technology Education (IJICTE,2020,16(2).[88]Yusuf Akhtar,Dipti Prasad Mukherjee. Context-based ensemble classification for the detection of architectural distortion in a digitised mammogram[J]. IET Image Processing,2020,14(4).[89]Gurpreet Kaur,Sukhwinder Singh,Renu Vig. Medical fusion framework using discrete fractional wavelets and non-subsampled directional filter banks[J]. IET Image Processing,2020,14(4).[90]Qian Liu,Bo Jiang,Jia-lei Zhang,Peng Gao,Zhi-jian Xia. Semi-supervised uncorrelated dictionary learning for colour face recognition[J]. IET Computer Vision,2020,14(3).[91]Yipo Huang,Leida Li,Yu Zhou,Bo Hu. No-reference quality assessment for live broadcasting videos in temporal and spatial domains[J]. IET Image Processing,2020,14(4).[92]Panetta Karen,Wan Qianwen,Agaian Sos,Rajeev Srijith,Kamath Shreyas,Rajendran Rahul,Rao Shishir Paramathma,Kaszowska Aleksandra,Taylor Holly A,Samani Arash,Yuan Xin. A Comprehensive Database for Benchmarking Imaging Systems.[J]. IEEE transactions on pattern analysis and machine intelligence,2020,42(3).[93]Rahnev Dobromir,Desender Kobe,Lee Alan L F,Adler William T,Aguilar-Lleyda David,Akdo?an Ba?ak,Arbuzova Polina,Atlas Lauren Y,Balc? Fuat,Bang Ji Won,Bègue Indrit,Birney Damian P,Brady Timothy F,Calder-Travis Joshua,Chetverikov Andrey,Clark Torin K,Davranche Karen,Denison Rachel N,Dildine Troy C,Double Kit S,Duyan Yaln A,Faivre Nathan,Fallow Kaitlyn,Filevich Elisa,Gajdos Thibault,Gallagher Regan M,de Gardelle Vincent,Gherman Sabina,Haddara Nadia,Hainguerlot Marine,Hsu Tzu-Yu,Hu Xiao,Iturrate I?aki,Jaquiery Matt,Kantner Justin,Koculak Marcin,Konishi Mahiko,Ko? Christina,Kvam Peter D,Kwok Sze Chai,Lebreton Ma?l,Lempert Karolina M,Ming Lo Chien,Luo Liang,Maniscalco Brian,Martin Antonio,Massoni Sébastien,Matthews Julian,Mazancieux Audrey,Merfeld Daniel M,O'Hora Denis,Palser Eleanor R,Paulewicz Borys?aw,Pereira Michael,Peters Caroline,Philiastides Marios G,Pfuhl Gerit,Prieto Fernanda,Rausch Manuel,Recht Samuel,Reyes Gabriel,Rouault Marion,Sackur Jér?me,Sadeghi Saeedeh,Samaha Jason,Seow Tricia X F,Shekhar Medha,Sherman Maxine T,Siedlecka Marta,Skóra Zuzanna,Song Chen,Soto David,Sun Sai,van Boxtel Jeroen J A,Wang Shuo,Weidemann Christoph T,Weindel Gabriel,WierzchońMicha?,Xu Xinming,Ye Qun,Yeon Jiwon,Zou Futing,Zylberberg Ariel. The Confidence Database.[J]. Nature human behaviour,2020,4(3).[94]Taipalus Toni. The Effects of Database Complexity on SQL Query Formulation[J]. Journal of Systems and Software,2020(prepublish).[95]. 
Information Technology; Investigators from Deakin University Target Information Technology (Conjunctive query pattern structures: A relational database model for Formal Concept Analysis)[J]. Computer Technology Journal,2020.[96]. Machine Learning; Findings from Rensselaer Polytechnic Institute Broaden Understanding of Machine Learning (Self Healing Databases for Predictive Risk Analytics In Safety-critical Systems)[J]. Computer Technology Journal,2020.[97]. Science - Library Science; Investigators from Cumhuriyet University Release New Data on Library Science (Scholarly databases under scrutiny)[J]. Computer Technology Journal,2020.[98]. Information Technology; Investigators from Faculty of Computer Science and Engineering Release New Data on Information Technology (FGSA for optimal quality of service based transaction in real-time database systems under different workload condition)[J]. Computer Technology Journal,2020.[99]Muhammad Aqib Javed,M.A. Naveed,Azam Hussain,S. Hussain. Integrated data acquisition, storage and retrieval for glass spherical tokamak (GLAST)[J]. Fusion Engineering and Design,2020,152.[100]Vinay M.S.,Jayant R. Haritsa. Operator implementation of Result Set Dependent KWS scoring functions[J]. Information Systems,2020,89.[101]. Capital One Services LLC; Patent Issued for Computer-Based Systems Configured For Managing Authentication Challenge Questions In A Database And Methods Of Use (USPTO 10,572,653)[J]. Journal of Robotics & Machine Learning,2020.[102]Ikawa Fusao,Michihata Nobuaki. In Reply to Letter to the Editor Regarding "Treatment Risk for Elderly Patients with Unruptured Cerebral Aneurysm from a Nationwide Database in Japan".[J]. World neurosurgery,2020,135.[103]Chen Wei,You Chao. Letter to the Editor Regarding "Treatment Risk for Elderly Patients with Unruptured Cerebral Aneurysm from a Nationwide Database in Japan".[J]. World neurosurgery,2020,135.[104]Zhitao Xiao,Lei Pei,Lei Geng,Ying Sun,Fang Zhang,Jun Wu. Surface Parameter Measurement of Braided Composite Preform Based on Faster R-CNN[J]. Fibers and Polymers,2020,21(3).[105]Xiaoyu Cui,Ruifan Cai,Xiangjun Tang,Zhigang Deng,Xiaogang Jin. Sketch‐based shape‐constrained fireworks simulation in head‐mounted virtual reality[J]. Computer Animation and Virtual Worlds,2020,31(2).[106]Klaus B?hm,Tibor Kubjatko,Daniel Paula,Hans-Georg Schweiger. New developments on EDR (Event Data Recorder) for automated vehicles[J]. Open Engineering,2020,10(1).[107]Ming Li,Ruizhi Chen,Xuan Liao,Bingxuan Guo,Weilong Zhang,Ge Guo. A Precise Indoor Visual Positioning Approach Using a Built Image Feature Database and Single User Image from Smartphone Cameras[J]. Remote Sensing,2020,12(5).[108]Matthew Grewe,Phillip Sexton,David Dellenbach. Use Risk‐Based Asset Prioritization to Develop Accurate Capital Budgets[J]. Opflow,2020,46(3).[109]Jose R. Salvador,D. Mu?oz de la Pe?a,D.R. Ramirez,T. Alamo. Predictive control of a water distribution system based on process historian data[J]. Optimal Control Applications and Methods,2020,41(2).[110]Esmaeil Nourani,Vahideh Reshadat. Association extraction from biomedicalliterature based on representation and transfer learning[J]. Journal of Theoretical Biology,2020,488.[111]Ikram Saima,Ahmad Jamshaid,Durdagi Serdar. Screening of FDA approved drugs for finding potential inhibitors against Granzyme B as a potent drug-repurposing target.[J]. Journal of molecular graphics & modelling,2020,95.[112]Keiron O’Shea,Biswapriya B. Misra. 
Software tools, databases and resources in metabolomics: updates from 2018 to 2019[J]. Metabolomics,2020,16(D1).[113]. Information Technology; Researchers from Virginia Polytechnic Institute and State University (Virginia Tech) Describe Findings in Information Technology (A database for global soil health assessment)[J]. Energy & Ecology,2020.[114]Moosa Johra Muhammad,Guan Shenheng,Moran Michael F,Ma Bin. Repeat-Preserving Decoy Database for False Discovery Rate Estimation in Peptide Identification.[J]. Journal of proteome research,2020,19(3).[115]Huttunen Janne M J,K?rkk?inen Leo,Honkala Mikko,Lindholm Harri. Deep learning for prediction of cardiac indices from photoplethysmographic waveform: A virtual database approach.[J]. International journal for numerical methods in biomedical engineering,2020,36(3).[116]Kunxia Wang,Guoxin Su,Li Liu,Shu Wang. Wavelet packet analysis for speaker-independent emotion recognition[J]. Neurocomputing,2020.[117]Fusao Ikawa,Nobuaki Michihata. In Reply to Letter to the Editor Regarding “Treatment Risk for Elderly Patients with Unruptured Cerebral Aneurysm from a Nationwide Database in Japan”[J]. World Neurosurgery,2020,135.[118]Wei Chen,Chao You. Letter to the Editor Regarding “Treatment Risk for Elderly Patients with Unruptured Cerebral Aneurysm from a Nationwide Database in Japan”[J]. World Neurosurgery,2020,135.[119]Lindsey A. Parsons,Jonathan A. Jenks,Andrew J. Gregory. Accuracy Assessment of National Land Cover Database Shrubland Products on the Sagebrush Steppe Fringe[J]. Rangeland Ecology & Management,2020,73(2).[120]Jing Hua,Yilu Xu,Jianjun Tang,Jizhong Liu,Jihao Zhang. ECG heartbeat classification in compressive domain for wearable devices[J]. Journal of Systems Architecture,2020,104.以上就是关于数据库英文参考文献的全部内容,希望看完后对你有所启发。


Chinese-English Bilingual Foreign Translation: Database Management Systems

A database (sometimes spelled data base) is also called an electronic database, referring to any collection of data, or information, that is specially organized for rapid search and retrieval by a computer. Databases are structured to facilitate the storage, retrieval, modification, and deletion of data in conjunction with various data-processing operations. Databases can be stored on magnetic disk or tape, optical disk, or some other secondary storage device.

A database consists of a file or a set of files. The information in these files may be broken down into records, each of which consists of one or more fields. Fields are the basic units of data storage, and each field typically contains information pertaining to one aspect or attribute of the entity described by the database. Using keywords and various sorting commands, users can rapidly search, rearrange, group, and select the fields in many records to retrieve or create reports on particular aggregates of data.

Complex data relationships and linkages may be found in all but the simplest databases. The system software package that handles the difficult tasks associated with creating, accessing, and maintaining database records is called a database management system (DBMS). The programs in a DBMS package establish an interface between the database itself and the users of the database. (These users may be applications programmers, managers and others with information needs, and various OS programs.)

A DBMS can organize, process, and present selected data elements from the database. This capability enables decision makers to search, probe, and query database contents in order to extract answers to nonrecurring and unplanned questions that aren't available in regular reports. These questions might initially be vague and/or poorly defined, but people can "browse" through the database until they have the needed information. In short, the DBMS will "manage" the stored data items and assemble the needed items from the common database in response to the queries of those who aren't programmers.

A database management system (DBMS) is composed of three major parts: (1) a storage subsystem that stores and retrieves data in files; (2) a modeling and manipulation subsystem that provides the means with which to organize the data and to add, delete, maintain, and update the data; and (3) an interface between the DBMS and its users. Several major trends are emerging that enhance the value and usefulness of database management systems:

Managers: who require more up-to-date information to make effective decisions.
Customers: who demand increasingly sophisticated information services and more current information about the status of their orders, invoices, and accounts.
Users: who find that they can develop custom applications with database systems in a fraction of the time it takes to use traditional programming languages.
Organizations: that discover information has a strategic value; they utilize their database systems to gain an edge over their competitors.

The Database Model

A data model describes a way to structure and manipulate the data in a database.
The structural part of the model specifies how data should be represented (such as trees, tables, and so on). The manipulative part of the model specifies the operations with which to add, delete, display, maintain, print, search, select, sort and update the data.

Hierarchical Model

The first database management systems used a hierarchical model; that is, they arranged records into a tree structure. Some records are root records and all others have unique parent records. The structure of the tree is designed to reflect the order in which the data will be used: the record at the root of a tree will be accessed first, then records one level below the root, and so on.

The hierarchical model was developed because hierarchical relationships are commonly found in business applications. As you know, an organization chart often describes a hierarchical relationship: top management is at the highest level, middle management at lower levels, and operational employees at the lowest levels. Note that within a strict hierarchy, each level of management may have many employees or levels of employees beneath it, but each employee has only one manager. Hierarchical data are characterized by this one-to-many relationship among data.

In the hierarchical approach, each relationship must be explicitly defined when the database is created. Each record in a hierarchical database can contain only one key field, and only one relationship is allowed between any two fields. This can create a problem because data do not always conform to such a strict hierarchy.

Relational Model

A major breakthrough in database research occurred in 1970 when E. F. Codd proposed a fundamentally different approach to database management, called the relational model, which uses a table as its data structure.

The relational database is the most widely used database structure. Data are organized into related tables. Each table is made up of rows, called records, and columns, called fields. Each record contains fields of data about some specific item. For example, in a table containing information on employees, a record would contain fields of data such as a person's last name, first name, and street address.

Structured Query Language (SQL) is a query language for manipulating data in a relational database. It is nonprocedural, or declarative: the user need only give an English-like description that specifies the operation and the records, or combination of records, involved. A query optimizer translates the description into a procedure that performs the database manipulation.

Network Model

The network model creates relationships among data through a linked-list structure in which subordinate records can be linked to more than one parent record. This approach combines records with links, which are called pointers. The pointers are addresses that indicate the location of a record. With the network approach, a subordinate record can be linked to a key record and at the same time itself be a key record linked to other sets of subordinate records. The network model historically has had a performance advantage over other database models. Today, such performance characteristics are only important in high-volume, high-speed transaction processing, such as automatic teller machine networks or airline reservation systems.

Both hierarchical and network databases are application specific. If a new application is developed, maintaining the consistency of databases in different applications can be very difficult.
For example, suppose a new pension application is developed. The data are the same, but a new database must be created.

Object Model

The newest approach to database management uses an object model, in which records are represented by entities called objects that can both store data and provide methods or procedures to perform specific tasks.

The query language used for the object model is the same object-oriented programming language used to develop the database application. This can create problems because there is no simple, uniform query language such as SQL. The object model is relatively new, and only a few examples of object-oriented databases exist. It has attracted attention because developers who choose an object-oriented programming language want a database based on an object-oriented model.

Distributed Database

Similarly, a distributed database is one in which different parts of the database reside on physically separated computers. One goal of distributed databases is access to information without regard to where the data might be stored. Keep in mind that once the users and their data are separated, communication and networking concepts come into play.

Distributed databases require software that resides partially in the larger computer. This software bridges the gap between personal and large computers and resolves the problems of incompatible data formats. Ideally, it would make the mainframe databases appear to be large libraries of information, with most of the processing accomplished on the personal computer.

A drawback to some distributed systems is that they are often based on what is called a mainframe-centric model, in which the larger host computer is seen as the master and the terminal or personal computer is seen as a slave. There are some advantages to this approach. With databases under centralized control, many of the problems of data integrity that we mentioned earlier are solved. But today's personal computers, departmental computers, and distributed processing require computers and their applications to communicate with each other on a more equal, or peer-to-peer, basis. In a database, the client/server model provides the framework for distributing databases.

One way to take advantage of many connected computers running database applications is to distribute the application into cooperating parts that are independent of one another. A client is an end user or computer program that requests resources across a network. A server is a computer running software that fulfills those requests across a network. When the resources are data in a database, the client/server model provides the framework for distributing databases.

A file server is software that provides access to files across a network. A dedicated file server is a single computer dedicated to being a file server. This is useful, for example, if the files are large and require fast access. In such cases, a minicomputer or mainframe would be used as a file server. A distributed file server spreads the files around on individual computers instead of placing them on one dedicated computer.

Advantages of the latter approach include the ability to store and retrieve files on other computers and the elimination of duplicate files on each computer. A major disadvantage, however, is that individual read/write requests are moved across the network, and problems can arise when updating files.
Suppose a user requests a record from a file and changes it while another user requests the same record and changes it too. The solution to this problem is called record locking, which means that the first request makes other requests wait until the first request is satisfied. Other users may be able to read the record, but they will not be able to change it.

A database server is software that services requests to a database across a network. For example, suppose a user types in a query for data on his or her personal computer. If the application is designed with the client/server model in mind, the query-language part on the personal computer simply sends the query across the network to the database server and asks to be notified when the data are found.

Examples of distributed database systems can be found in the engineering world. Sun's Network File System (NFS), for example, is used in computer-aided engineering applications to distribute data among the hard disks in a network of Sun workstations.

Distributing databases is an evolutionary step because it is logical that data should exist at the location where they are being used. Departmental computers within a large corporation, for example, should have data reside locally, yet those data should be accessible by authorized corporate management when they want to consolidate departmental data. DBMS software will protect the security and integrity of the database, and the distributed database will appear to its users as no different from the non-distributed database.

In this information age, the data server has become the heart of a company. This one piece of software controls the rhythm of most organizations and is used to pump the information lifeblood through the arteries of the network. Because of the critical nature of this application, the data server is also one of the most popular targets for hackers. If a hacker owns this application, he can cause the company's "heart" to suffer a fatal arrest.

Ironically, although most users are now aware of hackers, they still do not realize how susceptible their database servers are to hack attacks. Thus, this article presents a description of the primary methods of attacking database servers (also known as SQL servers) and shows you how to protect yourself from these attacks.

You should note that this information is not new. Many technical white papers go into great detail about how to perform SQL attacks, and numerous vulnerabilities have been posted to security lists that describe exactly how certain database applications can be exploited. This article was written for the curious non-SQL experts who do not care to know the details, and as a review for those who do use SQL regularly.

What Is a SQL Server?

A database application is a program that provides clients with access to data. There are many variations of this type of application, ranging from the expensive enterprise-level Microsoft SQL Server to the free and open-source MySQL. Regardless of the flavor, most database server applications have several things in common.

First, database applications use the same general programming language known as SQL, or Structured Query Language. This language, also known as a fourth-generation language because of its simple syntax, is at the core of how a client communicates its requests to the server. Using SQL in its simplest form, a programmer can select, add, update, and delete information in a database. However, SQL can also be used to create and design entire databases, perform various functions on the returned information, and even execute other programs.

To illustrate how SQL can be used, the following are examples of a simple standard SQL query and a more powerful SQL query:

Simple: "Select * from dbFurniture.tblChair"
This returns all information in the table tblChair from the database dbFurniture.

Complex: "EXEC master..xp_cmdshell 'dir c:\'"
This short SQL command returns to the client the list of files and folders under the c:\ directory of the SQL server. Note that this example uses an extended stored procedure that is exclusive to MS SQL Server.

The second characteristic that database server applications share is that they all require some form of authenticated connection between client and host. Although the SQL language is fairly easy to use, at least in its basic form, any client that wants to perform queries must first provide some form of credentials that authorize the client; the client also must define the format of the request and response.

This connection is defined by several attributes, depending on the relative location of the client and what operating systems are in use. We could spend a whole article discussing various technologies such as DSN connections, DSN-less connections, RDO, ADO, and more, but these subjects are outside the scope of this article. If you want to learn more about them, a little Googling will provide you with more than enough information. However, the following is a list of the more common items included in a connection request.
However, SQL can also be used to create and design entire databases, perform various functions on the returned information, and even execute other programs.To illustrate how SQL can be used, the following is an example of a simple standard SQL query and a more powerful SQL query:Simple: "Select * from dbFurniture.tblChair"This returns all information in the table tblChair from the database dbFurniture.Complex: "EXEC master..xp_cmdshell 'dir c:\'"This short SQL command returns to the client the list of files and folders under the c:\ directory of the SQL server. Note that this example uses an extended stored procedure that is exclusive to MS SQL Server.The second function that database server applications share is that they all require some form of authenticated connection between client and host. Although the SQL language is fairly easy to use, at least in its basic form, any client that wants to perform queries must first provide some form of credentials that will authorize the client; the client also must define the format of the request and response.This connection is defined by several attributes, depending on the relative location of the client and what operating systems are in use. We could spend a whole article discussing various technologies such as DSN connections, DSN-less connections, RDO, ADO, and more, but these subjects are outside the scope of this article. If you want to learn more about them, a little Google'ing will provide you with more than enough information. However, the following is a list of the more common items included in a connection request.Database sourceRequest typeDatabaseUser IDPasswordBefore any connection can be made, the client must define what type of database server it is connecting to. This is handled by a software component that provides the client with the instructions needed to create the request in the correct format. In addition to the type of database, the request type can be used to further define how the client's request will be handled by the server. Next comes the database name and finally the authentication information.All the connection information is important, but by far the weakest link is the authentication information—or lack thereof. In a properly managed server, each database has its own users with specifically designated permissions that control what type of activity they can perform. For example, a user account would be set up as read only for applications that need to only access information. Another account should be used for inserts or updates, and maybe even a third account would be used for deletes.This type of account control ensures that any compromised account is limited in functionality. Unfortunately, many database programs are set up with null or easy passwords, which leads to successful hack attacks.译文数据库管理系统介绍数据库(database,有时拼作data base)又称为电子数据库,是专门组织起来的一组数据或信息,其目的是为了便于计算机快速查询及检索。
