University Graduation Project: Warehouse Management System Database, Foreign Computer Reference Literature (Originals and Translations)


Warehouse Management System: Foreign Literature Translation (English Text)



Warehouse Management Systems (WMS)

The evolution of warehouse management systems (WMS) is very similar to that of many other software solutions. Initially a system to control movement and storage of materials within a warehouse, the role of WMS is expanding to include light manufacturing, transportation management, order management, and complete accounting systems. To use the grandfather of operations-related software, MRP, as a comparison, material requirements planning (MRP) started as a system for planning raw material requirements in a manufacturing environment. Soon MRP evolved into manufacturing resource planning (MRPII), which took the basic MRP system and added scheduling and capacity planning logic. Eventually MRPII evolved into enterprise resource planning (ERP), incorporating all the MRPII functionality with full financials and customer and vendor management functionality. Now, whether WMS evolving into a warehouse-focused ERP system is a good thing or not is up for debate. What is clear is that the expansion of the overlap in functionality between Warehouse Management Systems, Enterprise Resource Planning, Distribution Requirements Planning, Transportation Management Systems, Supply Chain Planning, Advanced Planning and Scheduling, and Manufacturing Execution Systems will only increase the level of confusion among companies looking for software solutions for their operations.

Even though WMS continues to gain added functionality, the initial core functionality of a WMS has not really changed. The primary purpose of a WMS is to control the movement and storage of materials within an operation and process the associated transactions. Directed picking, directed replenishment, and directed put-away are the key to WMS. The detailed setup and processing within a WMS can vary significantly from one software vendor to another; however, the basic logic will use a combination of item, location, quantity, unit of measure, and order information to determine where to stock, where to pick, and in what sequence to perform these operations.

At a bare minimum, a WMS should:

Have a flexible location system.

Utilize user-defined parameters to direct warehouse tasks and use live documents to execute these tasks.

Have some built-in level of integration with data collection devices.

Do You Really Need WMS?

Not every warehouse needs a WMS. Certainly any warehouse could benefit from some of the functionality, but is the benefit great enough to justify the initial and ongoing costs associated with WMS? Warehouse Management Systems are big, complex, data-intensive applications. They tend to require a lot of initial setup, a lot of system resources to run, and a lot of ongoing data management to continue to run. That's right, you need to "manage" your warehouse "management" system. Oftentimes, large operations will end up creating a new IS department with the sole responsibility of managing the WMS.

The Claims:

WMS will reduce inventory!

WMS will reduce labor costs!

WMS will increase storage capacity!

WMS will increase customer service!

WMS will increase inventory accuracy!

The Reality:

The implementation of a WMS along with automated data collection will likely give you increases in accuracy, reduction in labor costs (provided the labor required to maintain the system is less than the labor saved on the warehouse floor), and a greater ability to service the customer by reducing cycle times. Expectations of inventory reduction and increased storage capacity are less likely.
While increased accuracy and efficiencies in the receiving process may reduce the level of safety stock required, the impact of this reduction will likely be negligible in comparison to overall inventory levels. The predominant factors that control inventory levels are lot sizing, lead times, and demand variability. It is unlikely that a WMS will have a significant impact on any of these factors. And while a WMS certainly provides the tools for more organized storage, which may result in increased storage capacity, this improvement will be relative to just how sloppy your pre-WMS processes were.

Beyond labor efficiencies, the determining factors in deciding to implement a WMS tend to be more often associated with the need to do something to service your customers that your current system does not support (or does not support well), such as first-in-first-out, cross-docking, automated pick replenishment, wave picking, lot tracking, yard management, automated data collection, automated material handling equipment, etc.

Setup

The setup requirements of a WMS can be extensive. The characteristics of each item and location must be maintained either at the detail level or by grouping similar items and locations into categories. An example of item characteristics at the detail level would include exact dimensions and weight of each item in each unit of measure the item is stocked in (each, cases, pallets, etc.), as well as information such as whether it can be mixed with other items in a location, whether it is rackable, max stack height, max quantity per location, hazard classifications, finished goods or raw material, fast versus slow mover, etc. Although some operations will need to set up each item this way, most operations will benefit by creating groups of similar products. For example, if you are a distributor of music CDs you would create groups for single CDs and double CDs, maintaining the detailed dimension and weight information at the group level and only needing to attach the group code to each item. You would likely need to maintain detailed information on special items such as boxed sets or CDs in special packaging. You would also create groups for the different types of locations within your warehouse. An example would be to create three different groups (P1, P2, P3) for the three different sized forward picking locations you use for your CD picking. You then set up the quantity of single CDs that will fit in a P1, P2, and P3 location, the quantity of double CDs that fit in a P1, P2, and P3 location, etc. You would likely also be setting up case quantities and pallet quantities of each CD group, and quantities of cases and pallets per each reserve storage location group.

If this sounds simple, it is ... well ... sort of. In reality most operations have a much more diverse product mix and will require much more system setup. And setting up the physical characteristics of the product and locations is only part of the picture. You have set up enough so that the system knows where a product can fit and how many will fit in that location. You now need to set up the information needed to let the system decide exactly which location to pick from, replenish from/to, and put away to, and in what sequence these events should occur (remember, WMS is all about "directed" movement).
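The group-based setup just described can be pictured as a small lookup table keyed by item group and location group. The sketch below is only an illustration of that idea; the group codes, SKU numbers, and fit quantities are invented, not taken from any particular WMS.

```python
# Illustrative only: item groups, location groups, and the "how many fit"
# table described in the text. All codes and quantities are invented.
ITEM_GROUPS = {"CD1": "single CD", "CD2": "double CD"}

# Units of each item group that fit in each forward-pick location group.
FIT_QTY = {
    ("CD1", "P1"): 30, ("CD1", "P2"): 60, ("CD1", "P3"): 120,
    ("CD2", "P1"): 18, ("CD2", "P2"): 36, ("CD2", "P3"): 72,
}

# Each SKU only carries its group code; dimensions live at the group level.
SKUS = {
    "CD-0001": {"title": "Album A", "group": "CD1"},
    "CD-0002": {"title": "Box Set B", "group": "CD2"},
}

def fits(sku: str, location_group: str, qty: int) -> bool:
    """Does qty units of this SKU fit in one location of this group?"""
    return qty <= FIT_QTY[(SKUS[sku]["group"], location_group)]

print(fits("CD-0001", "P2", 50))   # True:  50 single CDs in a P2 slot
print(fits("CD-0002", "P1", 24))   # False: 24 double CDs exceed a P1 slot
```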
You direct these decisions by assigning specific logic to the various combinations of item/order/quantity/location information that will occur. Below I have listed some of the logic used in determining actual locations and sequences.

Location Sequence. This is the simplest logic; you simply define a flow through your warehouse and assign a sequence number to each location. In order picking this is used to sequence your picks to flow through the warehouse; in put-away the logic would look for the first location in the sequence in which the product will fit.

Zone Logic. By breaking down your storage locations into zones you can direct picking, put-away, or replenishment to or from specific areas of your warehouse. Since zone logic only designates an area, you will need to combine this with some other type of logic to determine the exact location within the zone.

Fixed Location. This logic uses predetermined fixed locations per item in picking, put-away, and replenishment. Fixed locations are most often used as the primary picking location in piece-pick and case-pick operations; however, they can also be used for secondary storage.

Random Location. Since computers cannot be truly random (nor would you want them to be), the term random location is a little misleading. Random locations generally refer to areas where products are not stored in designated fixed locations. Like zone logic, you will need some additional logic to determine exact locations.

First-in-first-out (FIFO). Directs picking from the oldest inventory first.

Last-in-first-out (LIFO). Opposite of FIFO. I didn't think there were any real applications for this logic until a visitor to my site sent an email describing their operation that distributes perishable goods domestically and overseas. They use LIFO for their overseas customers (because of longer in-transit times) and FIFO for their domestic customers.

Pick-to-clear. This logic directs picking to the locations with the smallest quantities on hand. It is great for space utilization.

Reserved Locations. This is used when you want to predetermine specific locations to put away to or pick from. An application for reserved locations would be cross-docking, where you may specify that certain quantities of an inbound shipment be moved to specific outbound staging locations or directly to an awaiting outbound trailer.

Maximize Cube. Cube logic is found in most WMS systems; however, it is seldom used. Cube logic basically uses unit dimensions to calculate cube (cubic inches per unit) and then compares this to the cube capacity of the location to determine how much will fit. Now, if the units are capable of being stacked into the location in a manner that fills every cubic inch of space in the location, cube logic will work. Since this rarely happens in the real world, cube logic tends to be impractical.

Consolidate. Looks to see if there is already a location with the same product stored in it with available capacity. May also create additional moves to consolidate like product stored in multiple locations.

Lot Sequence. Used for picking or replenishment, this will use the lot number or lot date to determine which locations to pick from or replenish from.

It's very common to combine multiple logic methods to determine the best location. For example, you may choose to use pick-to-clear logic within first-in-first-out logic when there are multiple locations with the same receipt date. You also may change the logic based upon current workload. During busy periods you may choose logic that optimizes productivity, while during slower periods you switch to logic that optimizes space utilization.
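As a sketch of combining methods, here is first-in-first-out as the primary rule with pick-to-clear breaking ties between locations that share a receipt date. It is a toy illustration with invented data, not any vendor's algorithm.

```python
# FIFO first, pick-to-clear (smallest quantity) as the tie-breaker.
# Location codes, dates, and quantities are invented for illustration.
from datetime import date

candidate_locations = [
    {"loc": "R-11", "receipt_date": date(2024, 3, 1),  "qty": 40},
    {"loc": "R-07", "receipt_date": date(2024, 2, 12), "qty": 15},
    {"loc": "R-02", "receipt_date": date(2024, 2, 12), "qty": 6},
]

def pick_sequence(locations):
    """Oldest receipt first; within the same date, smallest quantity first."""
    return sorted(locations, key=lambda l: (l["receipt_date"], l["qty"]))

for loc in pick_sequence(candidate_locations):
    print(loc["loc"], loc["receipt_date"], loc["qty"])
# R-02 2024-02-12 6
# R-07 2024-02-12 15
# R-11 2024-03-01 40
```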
Other Functionality/Considerations

Wave Picking/Batch Picking/Zone Picking. Support for various picking methods varies from one system to another. In high-volume fulfillment operations, picking logic can be a critical factor in WMS selection. See my article on Order Picking for more info on these methods.

Task Interleaving. Task interleaving describes functionality that mixes dissimilar tasks such as picking and put-away to obtain maximum productivity. Used primarily in full-pallet-load operations, task interleaving will direct a lift truck operator to put away a pallet on his/her way to the next pick. In large warehouses this can greatly reduce travel time, not only increasing productivity, but also reducing wear on the lift trucks and saving on energy costs by reducing lift truck fuel consumption. Task interleaving is also used with cycle counting programs to coordinate a cycle count with a picking or put-away task.

Integration with Automated Material Handling Equipment. If you are planning on using automated material handling equipment such as carousels, ASRS units, AGVs, pick-to-light systems, or sortation systems, you'll want to consider this during the software selection process. Since these types of automation are very expensive and are usually a core component of your warehouse, you may find that the equipment will drive the selection of the WMS. As with automated data collection, you should be working closely with the equipment manufacturers during the software selection process.

Advanced Shipment Notifications (ASN). If your vendors are capable of sending advanced shipment notifications (preferably electronically) and attaching compliance labels to the shipments, you will want to make sure that the WMS can use this to automate your receiving process. In addition, if you have requirements to provide ASNs for customers, you will also want to verify this functionality.

Yard Management. Yard management describes the function of managing the contents (inventory) of trailers parked outside the warehouse, or the empty trailers themselves. Yard management is generally associated with cross-docking operations and may include the management of both inbound and outbound trailers.

Labor Tracking/Capacity Planning. Some WMS systems provide functionality related to labor reporting and capacity planning. Anyone that has worked in manufacturing should be familiar with this type of logic. Basically, you set up standard labor hours and machine (usually lift truck) hours per task and set the available labor and machine hours per shift. The WMS system will use this info to determine capacity and load. Manufacturing has been using capacity planning for decades with mixed results. The need to factor in efficiency and utilization to determine rated capacity is an example of the shortcomings of this process. Not that I'm necessarily against capacity planning in warehousing; I just think most operations don't really need it and can avoid the disappointment of trying to make it work. I am, however, a big advocate of labor tracking for individual productivity measurement. Most WMS maintain enough data to create productivity reporting. Since productivity is measured differently from one operation to another, you can assume you will have to do some minor modifications here (usually in the form of custom reporting).
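The arithmetic behind this kind of capacity planning is simple: standard hours per task times the planned task count gives the load, which is compared with the hours available in a shift. A rough sketch with invented figures:

```python
# Standard-hours capacity check; all figures are invented for illustration.
STD_HOURS = {                 # standard labor hours per task
    "pallet_putaway": 0.08,
    "case_pick":      0.03,
    "cycle_count":    0.05,
}

planned_tasks = {"pallet_putaway": 120, "case_pick": 900, "cycle_count": 60}
available_hours = 5 * 7.5     # 5 operators x 7.5 productive hours per shift

load = sum(STD_HOURS[task] * count for task, count in planned_tasks.items())
print(f"load {load:.1f} h vs capacity {available_hours:.1f} h "
      f"({load / available_hours:.0%} loaded)")
# load 39.6 h vs capacity 37.5 h (106% loaded) -> the shift is overloaded
```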
Integration with existing accounting/ERP systems. Unless the WMS vendor has already created a specific interface with your accounting/ERP system (such as those provided by an approved business partner), you can expect to spend some significant programming dollars here. While we are all hoping that integration issues will be magically resolved someday by a standardized interface, we aren't there yet. Ideally you'll want an integrator that has already integrated the WMS you chose with the business software you are using. Since this is not always possible, you at least want an integrator that is very familiar with one of the systems.

WMS + everything else = ? As I mentioned at the beginning of this article, a lot of other modules are being added to WMS packages. These would include full financials, light manufacturing, transportation management, purchasing, and sales order management. I don't see this as a unilateral move of WMS from an add-on module to a core system, but rather an optional approach that has applications in specific industries such as 3PLs. Using ERP systems as a point of reference, it is unlikely that this add-on functionality will match the functionality of best-of-breed applications available separately. If warehousing/distribution is your core business function and you don't want to have to deal with the integration issues of incorporating separate financials, order processing, etc., you may find that these WMS-based business systems are a good fit.

Implementation Tips

Outside of the standard "don't underestimate", "thoroughly test", "train, train, train" implementation tips that apply to any business software installation, it's important to emphasize that WMS are very data dependent and restrictive by design. That is, you need to have all of the various data elements in place for the system to function properly. And, when they are in place, you must operate within the set parameters.

When implementing a WMS, you are adding an additional layer of technology onto your system. And with each layer of technology there is additional overhead and additional sources of potential problems. Now don't take this as a condemnation of Warehouse Management Systems. Coming from a warehousing background, I definitely appreciate the functionality WMS have to offer, and, in many warehouses, this functionality is essential to their ability to serve their customers and remain competitive. It's just important to note that every solution has its downsides, and having a good understanding of the potential implications will allow managers to make better decisions related to the levels of technology that best suit their unique environment.

Management Systems: Graduation Project Foreign Literature Translation


What's New in the .NET Compact Framework 2.0. Version 2.0 of the .NET Compact Framework provides many improvements over the previous release, the .NET Compact Framework version 1.0.

Although the improvements are broad, they all focus on common goals: improving developer productivity, providing greater compatibility with the full .NET Framework, and increasing support for device features.

This article provides a high-level summary of the changes and improvements in the .NET Compact Framework 2.0.

User interface. The small size of mobile device displays demands that applications use the available space efficiently.

In the past, this has required developers to spend a great deal of time designing and implementing the application's user interface.

Recent advances in mobile display capabilities, such as high resolution and support for multiple orientations, make user interface development even more challenging.

To simplify the task of creating application user interfaces, the .NET Compact Framework 2.0 provides many new features in this area, described below.

Windows Forms controls. At the heart of the user interface are controls, and the .NET Compact Framework 2.0 provides many new ones.

These new controls include, besides controls designed specifically for devices, controls that are not device-specific.

The latter are controls that the .NET Compact Framework provides with the same full functionality as the .NET Framework.

MonthCalendar. The MonthCalendar control is a customizable calendar control that displays dates and gives users a graphical way to pick a date.

DateTimePicker. The DateTimePicker control is a customizable control for displaying date and time information and allowing users to enter it.

Because it combines a compact display with a graphical date-selection format, it is especially well suited to mobile device applications.

When displaying information, the DateTimePicker control is similar to a text box; however, when the user selects a date, it can display a pop-up calendar similar to the MonthCalendar control.

WebBrowser. The WebBrowser control wraps the device's web browser, providing powerful display capabilities and exposing many events.

Besides allowing your application to provide customized behavior for these events, the events also let your application track the user's interaction with the content shown in the web browser.

Graduation Thesis: English References and Translation


Inventory Management

Inventory Control

When people speak of so-called "inventory control", many interpret it as "warehouse management", which is actually a serious distortion. The traditional, narrow view covers mainly warehouse-level control of materials: taking stock, processing data, storing and distributing goods, and using means such as corrosion protection and temperature and humidity control to keep the physical inventory in the best possible condition. This is just one form of inventory control, and can be defined as physical inventory control.

How, then, should inventory control be understood from a broader perspective? Inventory control should be tied to the company's financial and operational objectives, in particular operating cash flow. By optimizing the entire demand and supply chain management (DSCM) process, setting a reasonable ERP control strategy, and supporting it with appropriate information processing tools, the aim is to reduce inventory levels as far as possible while ensuring timely delivery, and to reduce the risks of inventory obsolescence and devaluation. In this sense, physical inventory control is just one means, or one necessary part, of achieving the financial goals of overall inventory control. From the perspective of organizational functions, physical inventory control is mainly the responsibility of warehouse management, while broad inventory control is demand and supply chain management and is the responsibility of the whole company.

Why, even now, do many people understand inventory control only as physical inventory control? Two reasons cannot be ignored.

First, our enterprises do not attach importance to inventory control. Especially in businesses whose profits are relatively good, as long as there is money, few people consider the problem of inventory turnover. Inventory control is simply interpreted as warehouse management; only when money runs short does anyone look at the inventory problem, and the conclusion is often very simple: purchasing bought too much, or the warehouse department did not do its job.

Second, ERP is misleading. Simple invoicing software is audaciously called ERP, and companies assume that with their so-called ERP they can reduce inventory, as if inventory control could be achieved by relying on a small software package. Even SAP and BAAN, the big players in the ERP world, define the warehouse management functionality inside their modules simply as "inventory management" or "inventory control". This leaves those who did not quite understand inventory control in the first place even less sure what inventory control really is.

In fact, understood broadly, inventory control should include the following.

First, the fundamental purpose of inventory control. We know that for so-called world-class manufacturing, the two key performance indicators (KPIs) are customer satisfaction and inventory turns, and inventory turns is in fact the fundamental objective of inventory control.
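Inventory turns itself is simple arithmetic, commonly taken as annual cost of goods sold divided by average inventory value, with days of inventory as the same ratio expressed in days. A quick sketch with invented figures:

```python
# Inventory turns and days of inventory; all figures are invented.
cogs_annual = 24_000_000.0          # annual cost of goods sold
avg_inventory_value = 4_000_000.0   # average inventory carrying value

turns = cogs_annual / avg_inventory_value
days_of_inventory = 365 / turns

print(f"inventory turns   = {turns:.1f}")              # 6.0
print(f"days of inventory = {days_of_inventory:.0f}")  # 61
```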
Second, the means of inventory control. Increasing inventory turns cannot rely solely on so-called physical inventory control; it has to come out of the whole demand and supply chain management process. Besides warehouse management itself, the more important links in this process include forecasting and order processing, production planning and control, materials planning and purchasing control, inventory planning and forecasting itself, the distribution and delivery strategies for finished products and raw materials, and even customs management processes. Running through the entire demand and supply chain management process is the management of information flow and capital flow. In other words, inventory itself spans every link of the demand and supply management process; to achieve the fundamental purpose of inventory control, every one of these links must be controlled, rather than just the physical inventory at hand.

Third, the organizational structure and assessment of inventory control. Since inventory control is the output of the demand and supply chain management process, achieving its fundamental purpose requires an organizational structure compatible with that process. Even now we can see that many companies have only a purchasing department, with the warehouse reporting to it. This falls far short of what inventory control requires. From an analysis of the demand and supply chain management process, we know that purchasing and warehouse management are typical executive arms, whereas inventory control should focus on prevention; it is very difficult for the executive branch to "prevent inventory", for the simple reason that their assessment indicators are largely about ensuring supply (to production and to customers). How to analyze the actual situation, establish a reasonable demand and supply chain management process, and then set up a corresponding, rational organizational structure is a question many of our enterprises still have to explore.

The role of inventory control

Inventory management is an important part of business management. In production and operation activities, inventory management must ensure that the plant's demand for raw materials and spare parts is met, and it also directly affects purchasing, the sales share, and sales activities. Keeping corporate funds liquid and accelerating cash flow while minimizing the capital tied up in inventory, under the premise of secure supply, directly affects operational efficiency.
The objectives are: to ensure production and operating needs while keeping inventories at a reasonable level; to control inventory dynamically and propose orders in a timely and appropriate way, avoiding both overstock and stockout; to reduce the space occupied by inventory and lower the total cost of inventory; and to control the funds tied up in stock and accelerate cash flow.

Problems arising from excessive inventory: it increases warehouse space and inventory storage costs, thereby increasing product costs; it ties up a large amount of working capital, resulting in sluggish capital, which not only increases the burden of interest payments but also affects the time value of money and opportunity income; it causes physical and intangible losses to finished products and raw materials; it leaves a large amount of enterprise resources idle, affecting their rational allocation and optimization; and it covers up the various contradictions and problems in the whole process of production and operation, which is not conducive to raising the level of management.

Problems arising from inventory that is too small: service levels decline, affecting marketing profits and corporate reputation; inadequate supply of raw materials or other materials disrupts the normal production process; shortened lead times and an increased number of orders raise ordering (production) costs; and the balance of production and the assembly of complete sets are affected.

Notes

Inventory management should particularly consider the following two questions:

First, according to the sales plan and the planned volume of goods circulating in the market, where should goods be stored, and in what quantity?

Second, starting from service levels and economic benefits, how should inventory levels and replenishment be determined?

These two problems correspond to the functions of inventory in the logistics process. In general, the functions of inventory are:

(1) To prevent interruptions: to shorten the time from receiving an order to delivering the goods, ensuring service quality while preventing stockouts.

(2) To ensure proper inventory levels, saving inventory costs.

(3) To reduce logistics costs: replenishing at appropriate time intervals matched to a reasonable quantity of goods in order to reduce logistics costs and eliminate or avoid sales fluctuations.

(4) To keep production planning smooth, eliminating or avoiding sales fluctuations.

(5) A display function.

(6) A reserve function: buying in volume when prices fall reduces losses and provides for disasters and other contingencies.

As to where warehouses (inventory) should be located, both the number and the location must be considered. A distribution center should, as far as possible, be set up at a place appropriate to customer needs; a central store, following the principle of minimizing replenishment to the distribution centers, has no particular locational requirements. Once the warehouse bases are established, it must also be decided which commodities are to be stored at each location.
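The "timely, appropriate ordering" mentioned above is usually driven by a reorder point. The sketch below uses the common textbook formula (expected demand over the lead time plus safety stock); the demand figures, lead time, and service factor are invented.

```python
# Reorder point = expected demand over the lead time + safety stock.
# All inputs are invented; z = 1.65 corresponds to roughly a 95% service level.
import math

daily_demand_mean = 120     # units per day
daily_demand_std = 35       # standard deviation of daily demand
lead_time_days = 6
z = 1.65

safety_stock = z * daily_demand_std * math.sqrt(lead_time_days)
reorder_point = daily_demand_mean * lead_time_days + safety_stock

print(f"safety stock  = {safety_stock:.0f} units")    # 141
print(f"reorder point = {reorder_point:.0f} units")   # 861
```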

Foreign Literature with Chinese Translation: Databases


English original 2: DBA Survivor: Become a Rock Star DBA, by Thomas LaRock, published by Apress, 2010.

You know that a database is a collection of logically related data elements that may be structured in various ways to meet the multiple processing and retrieval needs of organizations and individuals. There's nothing new about databases—early ones were chiseled in stone, penned on scrolls, and written on index cards. But now databases are commonly recorded on magnetizable media, and computer programs are required to perform the necessary storage and retrieval operations.

You'll see in the following pages that complex data relationships and linkages may be found in all but the simplest databases. The system software package that handles the difficult tasks associated with creating, accessing, and maintaining database records is called a database management system (DBMS). The programs in a DBMS package establish an interface between the database itself and the users of the database. (These users may be applications programmers, managers and others with information needs, and various OS programs.)

A DBMS can organize, process, and present selected data elements from the database. This capability enables decision makers to search, probe, and query database contents in order to extract answers to nonrecurring and unplanned questions that aren't available in regular reports. These questions might initially be vague and/or poorly defined, but people can "browse" through the database until they have the needed information. In short, the DBMS will "manage" the stored data items and assemble the needed items from the common database in response to the queries of those who aren't programmers. In a file-oriented system, users needing special information may communicate their needs to a programmer, who, when time permits, will write one or more programs to extract the data and prepare the information[4]. The availability of a DBMS, however, offers users a much faster alternative communications path.

If the DBMS provides a way to interactively enter and update the database, as well as interrogate it, this capability allows for managing personal databases. However, it does not automatically leave an audit trail of actions and does not provide the kinds of controls necessary in a multiuser organization. These controls are only available when a set of application programs is customized for each data entry and updating function.

Software for personal computers that performs some of the DBMS functions has been very popular. Personal computers were intended for use by individuals for personal information storage and processing. These machines have also been used extensively by small enterprises and professionals such as doctors, architects, engineers, lawyers and so on. By the nature of their intended usage, database systems on these machines are exempt from several of the requirements of full-fledged database systems. Since data sharing is not intended, and concurrent operations even less so, the software can be less complex. Security and integrity maintenance are de-emphasized or absent. As data volumes will be small, performance efficiency is also less important. In fact, the only aspect of a database system that is important here is data independence. Data independence, as stated earlier, means that application programs and user queries need not be aware of the physical organization of data on secondary storage. The importance of this aspect, particularly for the personal computer user, is that it greatly simplifies database usage. The user can store, access and manipulate data at a high level (close to the application) and be totally shielded from the low-level (close to the machine) details of data organization.
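As a concrete illustration of both points above, ad hoc querying through the DBMS rather than through purpose-written extraction programs, and the shielding of the user from storage details, here is a minimal sketch using the SQLite engine bundled with Python; the table, columns, and data are invented.

```python
# Ad hoc querying through a DBMS (SQLite); schema and rows are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, "
             "amount REAL, shipped INTEGER)")
conn.executemany(
    "INSERT INTO orders (customer, amount, shipped) VALUES (?, ?, ?)",
    [("Acme", 120.0, 1), ("Acme", 75.5, 0), ("Globex", 300.0, 1)])

# The nonrecurring, unplanned question: unshipped order value by customer.
for customer, total in conn.execute(
        "SELECT customer, SUM(amount) FROM orders "
        "WHERE shipped = 0 GROUP BY customer"):
    print(customer, total)          # Acme 75.5
conn.close()
```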
We will not discuss details of specific PC DBMS software packages here. Let us summarize in the following the strengths and weaknesses of personal computer database software systems.

The most obvious positive factor is the user-friendliness of the software. A user with no prior computer background would be able to use the system to store personal and professional data, retrieve it, and perform related processing. The user should, of course, satisfy himself about the quality of the software and its freedom from errors (bugs) so that investments in data are protected. For the programmer implementing applications with them, the advantage lies in the support for applications development, in terms of input screen generation, output report generation, and so on, offered by these systems.

The main negative point concerns the absence of data protection features. Unless encrypted, data can be accessed by whoever has access to the machine. Data can be destroyed through mistakes or malicious intent. The second weakness of many of the PC-based systems is performance. If data volumes grow to a few thousand records, performance could be a bottleneck. For organizations where growth in data volumes is expected, the availability of the same or compatible software on larger machines should be considered.

This is one of the most common misconceptions about database management systems that are used on personal computers. Thoroughly comprehensive and sophisticated business systems can be developed in dBASE, Paradox and other DBMSs. However, they are created by experienced programmers using the DBMS's own programming language. That is not the same as users who create and manage personal files that are not part of the mainstream company system.

Transaction Management of Database

The objective of long-duration transactions is to model long-duration, interactive database access sessions in application environments. The fundamental assumption about the short duration of transactions that underlies the traditional model of transactions is inappropriate for long-duration transactions. The implementation of the traditional model of transactions may cause intolerably long waits when transactions attempt to acquire locks before accessing data, and may also cause a large amount of work to be lost when transactions are backed out in response to user-initiated aborts or system failure situations.

The objective of a transaction model is to provide a rigorous basis for automatically enforcing a criterion for database consistency for a set of multiple concurrent read and write accesses to the database in the presence of potential system failure situations. The consistency criterion adopted for traditional transactions is the notion of serializability. Serializability is enforced in conventional database systems through the use of locking for automatic concurrency control, and logging for automatic recovery from system failure situations. A "transaction" that doesn't provide a basis for automatically enforcing database consistency is not really a transaction. To be sure, a long-duration transaction need not adopt serializability as its consistency criterion. However, there must be some consistency criterion.
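The guarantee described above, that a set of writes either takes effect as a whole or not at all, with the locking and logging hidden inside the DBMS, can be seen in miniature with SQLite's transaction handling. This is only a sketch of atomic commit and rollback, not of the long-duration transaction models the text discusses; the schema and amounts are invented.

```python
# Atomic commit/rollback: either both updates happen or neither does.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance REAL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("stores", 1000.0), ("production", 200.0)])
conn.commit()

def transfer(amount, fail=False):
    with conn:  # a transaction: commits on success, rolls back on an exception
        conn.execute("UPDATE accounts SET balance = balance - ? "
                     "WHERE name = 'stores'", (amount,))
        if fail:
            raise RuntimeError("simulated crash between the two updates")
        conn.execute("UPDATE accounts SET balance = balance + ? "
                     "WHERE name = 'production'", (amount,))

try:
    transfer(500.0, fail=True)
except RuntimeError:
    pass

print(dict(conn.execute("SELECT name, balance FROM accounts")))
# {'stores': 1000.0, 'production': 200.0} -- the half-done transfer was undone
conn.close()
```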
Version System Management of Database

Despite a large number of proposals on version support in the context of computer-aided design and software engineering, the absence of a consensus on version semantics has been a key impediment to version support in database systems. Because of the differences between files and databases, it is intuitively clear that the model of versions in database systems cannot be as simple as that adopted in file systems to support software engineering. For databases, it may be necessary to manage not only versions of single objects (e.g., a software module or a document) but also versions of a collection of objects (e.g., a compound document, a user manual, etc.), and perhaps even versions of the database schema (e.g., a table or a class, or a collection of tables or classes).

Broadly, there are three directions of research and development in versioning. First is the notion of "parameterized versioning", that is, designing and implementing a versioning system whose behavior may be tailored by adjusting system parameters. This may be the only viable approach, in view of the fact that there are various plausible choices for virtually every single aspect of versioning. The second is to revisit these plausible choices for every aspect of versioning, with a view to discarding some of them as either impractical or flawed. The third is the investigation into the semantics and implementation of versioning collections of objects and of versioning the database.

There is no consensus on the definition of the term "management information system". Some writers prefer alternative terminology such as "information processing system", "information and decision system", "organizational information system", or simply "information system" to refer to the computer-based information processing system which supports the operations, management, and decision-making functions of an organization. This text uses "MIS" because it is descriptive and generally understood; it also frequently uses "information system" instead of "MIS" to refer to an organizational information system.

A definition of a management information system, as the term is generally understood, is an integrated, user-machine system for providing information to support operations, management, and decision-making functions in an organization. The system utilizes computer hardware and software; manual procedures; models for analysis, planning, control and decision making; and a database. The fact that it is an integrated system does not mean that it is a single, monolithic structure; rather, it means that the parts fit into an overall design. The elements of the definition are highlighted below.

Computer-based user-machine system. Conceptually, a management information system can exist without computers, but it is the power of the computer which makes MIS feasible. The question is not whether computers should be used in management information systems, but the extent to which information use should be computerized. The concept of a user-machine system implies that some tasks are best performed by humans, while others are best done by machine. The user of an MIS is any person responsible for entering input data, instructing the system, or utilizing the information output of the system.
For many problems, the user and the computer form a combined system with results obtained through a set of interactions between the computer and the user.

User-machine interaction is facilitated by operation in which the user's input-output device (usually a visual display terminal) is connected to the computer. The computer can be a personal computer serving only one user or a large computer that serves a number of users through terminals connected by communication lines. The user input-output device permits direct input of data and immediate output of results. For instance, a person using the computer interactively in financial planning poses "what if" questions by entering input at the terminal keyboard; the results are displayed on the screen in a few seconds.

The computer-based user-machine characteristics of an MIS affect the knowledge requirements of both system developer and system user. "Computer-based" means that the designer of a management information system must have a knowledge of computers and of their use in processing. The "user-machine" concept means the system designer should also understand the capabilities of humans as system components (as information processors) and the behavior of humans as users of information. Information system applications should not require users to be computer experts. However, users need to be able to specify their information requirements; some understanding of computers, the nature of information, and its use in various management functions aids users in this task.

Management information systems typically provide the basis for integration of organizational information processing. Individual applications within information systems are developed for and by diverse sets of users. If there are no integrating processes and mechanisms, the individual applications may be inconsistent and incompatible. Data items may be specified differently and may not be compatible across applications that use the same data. There may be redundant development of separate applications when actually a single application could serve more than one need. A user wanting to perform analysis using data from two different applications may find the task very difficult and sometimes impossible.

The first step in integration of information system applications is an overall information system plan. Even though application systems are implemented one at a time, their design can be guided by the overall plan, which determines how they fit in with other functions. In essence, the information system is designed as a planned federation of small systems.

Information system integration is also achieved through standards, guidelines, and procedures set by the MIS function. The enforcement of such standards and procedures permits diverse applications to share data, meet audit and control requirements, and be shared by multiple users. For instance, an application may be developed to run on a particular small computer. Standards for integration may dictate that the equipment selected be compatible with the centralized database. The trend in information system design is toward separating application processing from the data used to support it. The separate database is the mechanism by which data items are integrated across many applications and made consistently available to a variety of users.
The need for a database in MIS is discussed below.

The terms "information" and "data" are frequently used interchangeably; however, information is generally defined as data that is meaningful or useful to the recipient. Data items are therefore the raw material for producing information.

The underlying concept of a database is that data needs to be managed in order to be available for processing and to have appropriate quality. This data management includes both software and organization. The software to create and manage a database is a database management system. When all access to and use of the database is controlled through a database management system, all applications utilizing a particular data item access the same data item, which is stored in only one place. A single updating of the data item updates it for all uses. Integration through a database management system requires a central authority for the database. The data can be stored in one central computer or dispersed among several computers; the overriding requirement is that there be an organizational function to exercise control.

It is usually insufficient for human recipients to receive only raw data or even summarized data. Data usually needs to be processed and presented in such a way that the result is directed toward the decision to be made. To do this, processing of data items is based on a decision model. For example, an investment decision relative to new capital expenditures might be processed in terms of a capital expenditure decision model.

Decision models can be used to support different stages in the decision-making process. "Intelligence" models can be used to search for problems and/or opportunities. Models can be used to identify and analyze possible solutions. Choice models such as optimization models may be used to find the most desirable solution. In other words, multiple approaches are needed to meet a variety of decision situations. In a comprehensive information system, the decision maker has available a set of general models that can be applied to many analysis and decision situations plus a set of very specific models for unique decisions. Similar models are available for planning and control. The set of models is the model base for the MIS. Models are generally most effective when the manager can use interactive dialog to build a plan or to iterate through several decision choices under different conditions.
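As a toy example of the interactive "what if" iteration and choice models described above (not anything prescribed by the text), here is a capital-expenditure decision evaluated by net present value under a few discount-rate scenarios; the cash flows and rates are invented.

```python
# Net present value of one candidate project under different discount rates.
def npv(rate: float, cash_flows: list) -> float:
    """cash_flows[0] is the time-0 outlay (negative); later items are inflows."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

project = [-100_000.0, 30_000.0, 40_000.0, 45_000.0, 25_000.0]

for rate in (0.06, 0.10, 0.14):        # the "different conditions"
    print(f"discount rate {rate:.0%}: NPV = {npv(rate, project):,.0f}")
# discount rate 6%: NPV = 21,487 -- positive, so attractive in that scenario
```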

Graduation Project Foreign Literature Translation (Original and Translation)


Environmental problems caused by Istanbul subway excavation and suggestions for remediation

Ibrahim Ocak

Abstract: Many environmental problems caused by subway excavations have inevitably become an important point in city life. These problems can be categorized as transporting and stocking of excavated material, traffic jams, noise, vibrations, piles of dust and mud, and lack of supplies. Although these problems cause many difficulties, the most pressing for a big city like Istanbul is excavation, since the other listed difficulties result from it. Moreover, these problems are environmentally and regionally restricted to the period over which construction projects are underway and disappear when construction is finished. Currently, in Istanbul, there are nine subway construction projects in operation, covering approximately 73 km in length; over 200 km are to be constructed in the near future. The amount of material excavated from ongoing construction projects is approximately 12 million m3. In this study, the problems—primarily, the problem of excavation waste (EW)—caused by subway excavation are analyzed and suggestions for remediation are offered.

Graduation Project (Thesis): Foreign Original Text and Translation


I. Original Text

MCU

A microcontroller (or MCU) is a computer-on-a-chip. It is a type of microcontroller emphasizing self-sufficiency and cost-effectiveness, in contrast to a general-purpose microprocessor (the kind used in a PC). With the development of technology and of control systems across a wide range of applications, and as equipment becomes smaller and more intelligent, the single-chip microcomputer, with its small size, powerful functions, low cost, and flexible use, has shown strong vitality. It generally has better anti-interference ability than comparable integrated circuits and better adaptability to ambient temperature and humidity, and it can work stably under industrial conditions. Single-chip microcomputers are widely used in all kinds of instruments and meters, making instrumentation intelligent, improving measurement speed and accuracy, and strengthening control functions. In short, with the advent of the information age, the inherent structural weaknesses of the traditional single-chip microcomputer have exposed many drawbacks: its speed, scale, and performance indicators increasingly fail to meet users' needs, and the development and upgrading of single-chip chipsets face new challenges.

The Description of the AT89S52

The AT89S52 is a low-power, high-performance CMOS 8-bit microcontroller with 8K bytes of In-System Programmable Flash memory. The device is manufactured using Atmel's high-density nonvolatile memory technology and is compatible with the industry-standard 80C51 instruction set and pinout. The on-chip Flash allows the program memory to be reprogrammed in-system or by a conventional nonvolatile memory programmer. By combining a versatile 8-bit CPU with In-System Programmable Flash on a monolithic chip, the Atmel AT89S52 is a powerful microcontroller which provides a highly flexible and cost-effective solution to many embedded control applications.

The AT89S52 provides the following standard features: 8K bytes of Flash, 256 bytes of RAM, 32 I/O lines, Watchdog timer, two data pointers, three 16-bit timer/counters, a six-vector two-level interrupt architecture, a full duplex serial port, on-chip oscillator, and clock circuitry. In addition, the AT89S52 is designed with static logic for operation down to zero frequency and supports two software selectable power saving modes. The Idle Mode stops the CPU while allowing the RAM, timer/counters, serial port, and interrupt system to continue functioning. The Power-down mode saves the RAM contents but freezes the oscillator, disabling all other chip functions until the next interrupt or hardware reset.

Features
• Compatible with MCS-51® Products
• 8K Bytes of In-System Programmable (ISP) Flash Memory – Endurance: 1000 Write/Erase Cycles
• 4.0V to 5.5V Operating Range
• Fully Static Operation: 0 Hz to 33 MHz
• Three-level Program Memory Lock
• 256 x 8-bit Internal RAM
• 32 Programmable I/O Lines
• Three 16-bit Timer/Counters
• Eight Interrupt Sources
• Full Duplex UART Serial Channel
• Low-power Idle and Power-down Modes
• Interrupt Recovery from Power-down Mode
• Watchdog Timer
• Dual Data Pointer
• Power-off Flag

Pin Description

VCC
Supply voltage.

GND
Ground.

Port 0
Port 0 is an 8-bit open drain bidirectional I/O port. As an output port, each pin can sink eight TTL inputs. When 1s are written to Port 0 pins, the pins can be used as high-impedance inputs. Port 0 can also be configured to be the multiplexed low-order address/data bus during accesses to external program and data memory.
In this mode, P0 has internal pullups. Port 0 also receives the code bytes during Flash programming and outputs the code bytes during program verification. External pullups are required during program verification.

Port 1
Port 1 is an 8-bit bidirectional I/O port with internal pullups. The Port 1 output buffers can sink/source four TTL inputs. When 1s are written to Port 1 pins, they are pulled high by the internal pullups and can be used as inputs. As inputs, Port 1 pins that are externally being pulled low will source current (IIL) because of the internal pullups. In addition, P1.0 and P1.1 can be configured to be the timer/counter 2 external count input (P1.0/T2) and the timer/counter 2 trigger input (P1.1/T2EX), respectively. Port 1 also receives the low-order address bytes during Flash programming and verification.

Port 2
Port 2 is an 8-bit bidirectional I/O port with internal pullups. The Port 2 output buffers can sink/source four TTL inputs. When 1s are written to Port 2 pins, they are pulled high by the internal pullups and can be used as inputs. As inputs, Port 2 pins that are externally being pulled low will source current (IIL) because of the internal pullups. Port 2 emits the high-order address byte during fetches from external program memory and during accesses to external data memory that use 16-bit addresses (MOVX @DPTR). In this application, Port 2 uses strong internal pull-ups when emitting 1s. During accesses to external data memory that use 8-bit addresses (MOVX @Ri), Port 2 emits the contents of the P2 Special Function Register. Port 2 also receives the high-order address bits and some control signals during Flash programming and verification.

Port 3
Port 3 is an 8-bit bidirectional I/O port with internal pullups. The Port 3 output buffers can sink/source four TTL inputs. When 1s are written to Port 3 pins, they are pulled high by the internal pullups and can be used as inputs. As inputs, Port 3 pins that are externally being pulled low will source current (IIL) because of the pullups. Port 3 also serves the functions of various special features of the AT89S52, as shown in the following table. Port 3 also receives some control signals for Flash programming and verification.

RST
Reset input. A high on this pin for two machine cycles while the oscillator is running resets the device. This pin drives high for 96 oscillator periods after the Watchdog times out. The DISRTO bit in SFR AUXR (address 8EH) can be used to disable this feature. In the default state of bit DISRTO, the RESET HIGH out feature is enabled.

ALE/PROG
Address Latch Enable (ALE) is an output pulse for latching the low byte of the address during accesses to external memory. This pin is also the program pulse input (PROG) during Flash programming. In normal operation, ALE is emitted at a constant rate of 1/6 the oscillator frequency and may be used for external timing or clocking purposes. Note, however, that one ALE pulse is skipped during each access to external data memory. If desired, ALE operation can be disabled by setting bit 0 of SFR location 8EH. With the bit set, ALE is active only during a MOVX or MOVC instruction. Otherwise, the pin is weakly pulled high. Setting the ALE-disable bit has no effect if the microcontroller is in external execution mode.

PSEN
Program Store Enable (PSEN) is the read strobe to external program memory.
When the AT89S52 is executing code from external program memory, PSEN is activated twice each machine cycle, except that two PSEN activations are skipped during each access to external data memory.

EA/VPP
External Access Enable. EA must be strapped to GND in order to enable the device to fetch code from external program memory locations starting at 0000H up to FFFFH. Note, however, that if lock bit 1 is programmed, EA will be internally latched on reset. EA should be strapped to VCC for internal program executions. This pin also receives the 12-volt programming enable voltage (VPP) during Flash programming.

XTAL1
Input to the inverting oscillator amplifier and input to the internal clock operating circuit.

XTAL2
Output from the inverting oscillator amplifier.

Special Function Registers
Note that not all of the addresses are occupied, and unoccupied addresses may not be implemented on the chip. Read accesses to these addresses will in general return random data, and write accesses will have an indeterminate effect. User software should not write 1s to these unlisted locations, since they may be used in future products to invoke new features. In that case, the reset or inactive values of the new bits will always be 0.

Timer 2 Registers: Control and status bits are contained in registers T2CON and T2MOD for Timer 2. The register pair (RCAP2H, RCAP2L) are the Capture/Reload registers for Timer 2 in 16-bit capture mode or 16-bit auto-reload mode.

Interrupt Registers: The individual interrupt enable bits are in the IE register. Two priorities can be set for each of the six interrupt sources in the IP register.

Dual Data Pointer Registers: To facilitate accessing both internal and external data memory, two banks of 16-bit Data Pointer Registers are provided: DP0 at SFR address locations 82H-83H and DP1 at 84H-85H. Bit DPS = 0 in SFR AUXR1 selects DP0 and DPS = 1 selects DP1. The user should always initialize the DPS bit to the appropriate value before accessing the respective Data Pointer Register.

Power Off Flag: The Power Off Flag (POF) is located at bit 4 (PCON.4) in the PCON SFR. POF is set to "1" during power up. It can be set and reset under software control and is not affected by reset.

Memory Organization
MCS-51 devices have a separate address space for Program and Data Memory. Up to 64K bytes each of external Program and Data Memory can be addressed.

Program Memory
If the EA pin is connected to GND, all program fetches are directed to external memory. On the AT89S52, if EA is connected to VCC, program fetches to addresses 0000H through 1FFFH are directed to internal memory and fetches to addresses 2000H through FFFFH are to external memory.

Data Memory
The AT89S52 implements 256 bytes of on-chip RAM. The upper 128 bytes occupy a parallel address space to the Special Function Registers. This means that the upper 128 bytes have the same addresses as the SFR space but are physically separate from SFR space. When an instruction accesses an internal location above address 7FH, the address mode used in the instruction specifies whether the CPU accesses the upper 128 bytes of RAM or the SFR space. Instructions which use direct addressing access the SFR space. For example, the following direct addressing instruction accesses the SFR at location 0A0H (which is P2).

MOV 0A0H, #data

Instructions that use indirect addressing access the upper 128 bytes of RAM.
For example, the following indirect addressing instruction, where R0 contains 0A0H, accesses the data byte at address 0A0H, rather than P2 (whose address is 0A0H).

MOV @R0, #data

Note that stack operations are examples of indirect addressing, so the upper 128 bytes of data RAM are available as stack space.

Timer 0 and 1
Timer 0 and Timer 1 in the AT89S52 operate the same way as Timer 0 and Timer 1 in the AT89C51 and AT89C52.

Timer 2
Timer 2 is a 16-bit Timer/Counter that can operate as either a timer or an event counter. The type of operation is selected by bit C/T2 in the SFR T2CON (shown in Table 2). Timer 2 has three operating modes: capture, auto-reload (up or down counting), and baud rate generator. The modes are selected by bits in T2CON. Timer 2 consists of two 8-bit registers, TH2 and TL2. In the Timer function, the TL2 register is incremented every machine cycle. Since a machine cycle consists of 12 oscillator periods, the count rate is 1/12 of the oscillator frequency.

In the Counter function, the register is incremented in response to a 1-to-0 transition at its corresponding external input pin, T2. In this function, the external input is sampled during S5P2 of every machine cycle. When the samples show a high in one cycle and a low in the next cycle, the count is incremented. The new count value appears in the register during S3P1 of the cycle following the one in which the transition was detected. Since two machine cycles (24 oscillator periods) are required to recognize a 1-to-0 transition, the maximum count rate is 1/24 of the oscillator frequency. To ensure that a given level is sampled at least once before it changes, the level should be held for at least one full machine cycle.

Interrupts
The AT89S52 has a total of six interrupt vectors: two external interrupts (INT0 and INT1), three timer interrupts (Timers 0, 1, and 2), and the serial port interrupt. These interrupts are all shown in Figure 10. Each of these interrupt sources can be individually enabled or disabled by setting or clearing a bit in Special Function Register IE. IE also contains a global disable bit, EA, which disables all interrupts at once. Note that Table 5 shows that bit position IE.6 is unimplemented. In the AT89S52, bit position IE.5 is also unimplemented. User software should not write 1s to these bit positions, since they may be used in future AT89 products.

Timer 2 interrupt is generated by the logical OR of bits TF2 and EXF2 in register T2CON. Neither of these flags is cleared by hardware when the service routine is vectored to. In fact, the service routine may have to determine whether it was TF2 or EXF2 that generated the interrupt, and that bit will have to be cleared in software. The Timer 0 and Timer 1 flags, TF0 and TF1, are set at S5P2 of the cycle in which the timers overflow. The values are then polled by the circuitry in the next cycle. However, the Timer 2 flag, TF2, is set at S2P2 and is polled in the same cycle in which the timer overflows.
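The machine-cycle arithmetic above translates directly into timer reload values. The sketch below assumes a 12 MHz crystal, a common choice but not something stated in this excerpt, and computes the reload value for a 1 ms overflow of a 16-bit timer.

```python
# Timer arithmetic for an 8051-family part; the 12 MHz crystal is an assumption.
f_osc = 12_000_000                 # oscillator frequency, Hz (assumed)
machine_cycle = 12 / f_osc         # 12 oscillator periods = 1 us at 12 MHz

interval = 0.001                   # desired overflow period: 1 ms
counts = round(interval / machine_cycle)   # 1000 machine cycles
reload = 0x10000 - counts          # a 16-bit timer counts up until it overflows

print(f"machine cycle = {machine_cycle * 1e6:.2f} us, counts = {counts}, "
      f"reload = {reload:#06x} (TH = 0x{reload >> 8:02X}, TL = 0x{reload & 0xFF:02X})")
# machine cycle = 1.00 us, counts = 1000, reload = 0xfc18 (TH = 0xFC, TL = 0x18)

# In counter mode a 1-to-0 transition needs 24 oscillator periods, so the
# maximum external count rate is f_osc / 24, i.e. 500 kHz at 12 MHz.
print(f"max external count rate = {f_osc / 24 / 1000:.0f} kHz")
```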

Graduation Design: Foreign Literature on Database Management


Graduation Design (Thesis): Foreign Reference Material and Translation

1. Database management system
A Database Management System (DBMS) is a set of computer programs that controls the creation, maintenance, and use of a database. It allows organizations to place control of database development in the hands of database administrators (DBAs) and other specialists. A DBMS is a system software package that supports the use of an integrated collection of data records and files known as a database. It allows different user application programs to easily access the same database. DBMSs may use any of a variety of database models, such as the network model or the relational model. In large systems, a DBMS allows users and other software to store and retrieve data in a structured way. Instead of having to write computer programs to extract information, a user can ask simple questions in a query language. Thus, many DBMS packages provide fourth-generation programming languages (4GLs) and other application development features. A DBMS helps to specify the logical organization for a database and to access and use the information within the database. It provides facilities for controlling data access, enforcing data integrity, managing concurrency, and restoring the database from backups. A DBMS also provides the ability to logically present database information to users.

2. Overview
A DBMS is a set of software programs that controls the organization, storage, management, and retrieval of data in a database. DBMSs are categorized according to their data structures or types. The DBMS accepts requests for data from an application program and instructs the operating system to transfer the appropriate data. The queries and responses must be submitted and received according to a format that conforms to one or more applicable protocols. When a DBMS is used, information systems can be changed much more easily as the organization's information requirements change. New categories of data can be added to the database without disruption to the existing system.

Database servers are computers that hold the actual databases and run only the DBMS and related software. Database servers are usually multiprocessor computers, with generous memory and RAID disk arrays used for stable storage. Hardware database accelerators, connected to one or more servers via a high-speed channel, are also used in large-volume transaction processing environments. DBMSs are found at the heart of most database applications. DBMSs may be built around a custom multitasking kernel with built-in networking support, but modern DBMSs typically rely on a standard operating system to provide these functions.

3. History
Databases have been in use since the earliest days of electronic computing. Unlike modern systems, which can be applied to widely different databases and needs, the vast majority of older systems were tightly linked to custom databases in order to gain speed at the expense of flexibility. Originally, DBMSs were found only in large organizations with the computer hardware needed to support large data sets.

3.1 1960s Navigational DBMS
As computers grew in speed and capability, a number of general-purpose database systems emerged; by the mid-1960s there were a number of such systems in commercial use. Interest in a standard began to grow, and Charles Bachman, author of one such product, Integrated Data Store (IDS), founded the "Database Task Group" within CODASYL, the group responsible for the creation and standardization of COBOL.
In 1971 they delivered their standard, which generally became known as the "Codasyl approach", and soon a number of commercial products based on it became available.

The Codasyl approach was based on the "manual" navigation of a linked data set which was formed into a large network. When the database was first opened, the program was handed back a link to the first record in the database, which also contained pointers to other pieces of data. To find any particular record, the programmer had to step through these pointers one at a time until the required record was returned. Simple queries like "find all the people in India" required the program to walk the entire data set and collect the matching results. There was, essentially, no concept of "find" or "search". This might sound like a serious limitation today, but in an era when data was most often stored on magnetic tape, such operations were too expensive to contemplate anyway.

IBM also had its own DBMS in 1968, known as IMS. IMS was a development of software written for the Apollo program on the System/360. IMS was generally similar in concept to Codasyl, but used a strict hierarchy for its model of data navigation instead of Codasyl's network model. Both concepts later became known as navigational databases due to the way data was accessed, and Bachman's 1973 Turing Award presentation was The Programmer as Navigator. IMS is classified as a hierarchical database. IDMS, a CODASYL database, and Cincom's TOTAL database are classified as network databases.

3.2 1970s Relational DBMS
Edgar Codd worked at IBM in San Jose, California, in one of their offshoot offices that was primarily involved in the development of hard disk systems. He was unhappy with the navigational model of the Codasyl approach, notably the lack of a "search" facility, which was becoming increasingly useful. In 1970, he wrote a number of papers that outlined a new approach to database construction and eventually culminated in the groundbreaking A Relational Model of Data for Large Shared Data Banks.[1]

In this paper, he described a new system for storing and working with large databases. Instead of records being stored in some sort of linked list of free-form records as in Codasyl, Codd's idea was to use a "table" of fixed-length records. A linked-list system would be very inefficient when storing "sparse" databases where some of the data for any one record could be left empty. The relational model solved this by splitting the data into a series of normalized tables, with optional elements being moved out of the main table to where they would take up room only if needed.

For instance, a common use of a database system is to track information about users: their names, login information, and various addresses and phone numbers. In the navigational approach, all of these data would be placed in a single record, and unused items would simply not be placed in the database. In the relational approach, the data would be normalized into a user table, an address table, and a phone number table (for instance). Records would be created in these optional tables only if the address or phone numbers were actually provided.

Linking the information back together is the key to this system. In the relational model, some bit of information is used as a "key", uniquely defining a particular record. When information is being collected about a user, information stored in the optional (or related) tables is found by searching for this key.
For instance, if the login name of a user is unique, addresses and phone numbers for that user would be recorded with the login name as their key. This "re-linking" of related data back into a single collection is something that traditional computer languages are not designed for.

Just as the navigational approach would require programs to loop in order to collect records, the relational approach would require loops to collect information about any one record. Codd's solution to the necessary looping was a set-oriented language, a suggestion that would later spawn the ubiquitous SQL. Using a branch of mathematics known as tuple calculus, he demonstrated that such a system could support all the operations of normal databases (inserting, updating, etc.) as well as providing a simple system for finding and returning sets of data in a single operation.

Codd's paper was picked up by two people at Berkeley, Eugene Wong and Michael Stonebraker. They started a project known as INGRES using funding that had already been allocated for a geographical database project, using student programmers to produce code. Beginning in 1973, INGRES delivered its first test products, which were generally ready for widespread use in 1979. During this time, a number of people had moved "through" the group; perhaps as many as 30 people worked on the project, about five at a time. INGRES was similar to System R in a number of ways, including the use of a "language" for data access, known as QUEL. QUEL was in fact relational, having been based on Codd's own Alpha language, but has since been corrupted to follow SQL, thus violating much the same concepts of the relational model as SQL itself.

IBM itself did one test implementation of the relational model, PRTV, and a production one, Business System 12, both now discontinued. Honeywell did MRDS for Multics, and now there are two new implementations: Alphora Dataphor and Rel. All other DBMS implementations usually called relational are actually SQL DBMSs.

In 1968, the University of Michigan began development of the Micro relational database management system. It was used to manage very large data sets by the US Department of Labor, the Environmental Protection Agency, and researchers from the University of Alberta, the University of Michigan, and Wayne State University. It ran on mainframe computers using the Michigan Terminal System. The system remained in production until 1996.

3.3 End 1970s SQL DBMS
IBM started working on a prototype system loosely based on Codd's concepts as System R in the early 1970s. The first version was ready in 1974/5, and work then started on multi-table systems in which the data could be split so that all of the data for a record (much of which is often optional) did not have to be stored in a single large "chunk". Subsequent multi-user versions were tested by customers in 1978 and 1979, by which time a standardized query language, SQL, had been added. Codd's ideas were establishing themselves as both workable and superior to Codasyl, pushing IBM to develop a true production version of System R, known as SQL/DS and, later, Database 2 (DB2).

Many of the people involved with INGRES became convinced of the future commercial success of such systems and formed their own companies to commercialize the work, but with an SQL interface. Sybase, Informix, NonStop SQL, and eventually Ingres itself were all being sold as offshoots of the original INGRES product in the 1980s. Even Microsoft SQL Server is actually a re-built version of Sybase, and thus of INGRES.
Only Larry Ellison's Oracle started from a different chain, based on IBM's papers on System R, and beat IBM to market when the first version was released in 1978.

Stonebraker went on to apply the lessons from INGRES to develop a new database, Postgres, which is now known as PostgreSQL. PostgreSQL is often used for global mission-critical applications (the .org and .info domain name registries use it as their primary data store, as do many large companies and financial institutions).

In Sweden, Codd's paper was also read, and Mimer SQL was developed from the mid-70s at Uppsala University. In 1984, this project was consolidated into an independent enterprise. In the early 1980s, Mimer introduced transaction handling for high robustness in applications, an idea that was subsequently implemented in most other DBMSs.

3.4 1980s Object-Oriented Databases
The 1980s, along with a rise in object-oriented programming, saw a growth in how data in various databases was handled. Programmers and designers began to treat the data in their databases as objects. That is to say, if a person's data were in a database, that person's attributes, such as their address, phone number, and age, were now considered to belong to that person instead of being extraneous data. This allows relationships between data to relate to objects and their attributes rather than to individual fields.

Another big game changer for databases in the 1980s was the focus on increasing reliability and access speeds. In 1989, two professors from the University of Wisconsin at Madison published an article at an ACM-associated conference outlining their methods for increasing database performance. The idea was to replicate specific important, and often queried, information and store it in a smaller temporary database that linked these key features back to the main database. This meant that a query could search the smaller database much more quickly, rather than searching the entire data set. This eventually led to the practice of indexing, which is used by almost every operating system from Windows to the system that operates Apple iPod devices.

4. DBMS building blocks
A DBMS includes four main parts: modeling language, data structure, database query language, and transaction mechanisms.

4.1 Components of DBMS
• DBMS Engine accepts logical requests from the various other DBMS subsystems, converts them into physical equivalents, and actually accesses the database and data dictionary as they exist on a storage device.
• Data Definition Subsystem helps the user create and maintain the data dictionary and define the structure of the files in a database.
• Data Manipulation Subsystem helps the user add, change, and delete information in a database and query it for valuable information. Software tools within the data manipulation subsystem are most often the primary interface between the user and the information contained in a database. It allows the user to specify its logical information requirements.
• Application Generation Subsystem contains facilities to help users develop transaction-intensive applications. It usually requires that the user perform a detailed series of tasks to process a transaction.
It facilitates easy-to-use data entry screens, programming languages, and interfaces.
• Data Administration Subsystem helps users manage the overall database environment by providing facilities for backup and recovery, security management, query optimization, concurrency control, and change management.

4.2 Modeling language
A data modeling language is used to define the schema of each database hosted in the DBMS, according to the DBMS database model. The four most common types of models are:
• the hierarchical model,
• the network model,
• the relational model, and
• the object model.
Inverted lists and other methods are also used. A given database management system may provide one or more of the four models. The optimal structure depends on the natural organization of the application's data and on the application's requirements, which include transaction rate (speed), reliability, maintainability, scalability, and cost.

The dominant model in use today is the ad hoc one embedded in SQL, despite the objections of purists who believe this model is a corruption of the relational model, since it violates several of its fundamental principles for the sake of practicality and performance. Many DBMSs also support the Open Database Connectivity (ODBC) API, which provides a standard way for programmers to access the DBMS.

Before the database management approach, organizations relied on file processing systems to organize, store, and process data files. End users became aggravated with file processing because data was stored in many different files, each organized in a different way. Each file was specialized to be used with a specific application. Needless to say, file processing was bulky, costly, and inflexible when it came to supplying needed data accurately and promptly. Data redundancy is an issue with the file processing system because the independent data files produce duplicate data, so when updates were needed, each separate file would need to be updated. Another issue is the lack of data integration. The data is dependent on other data to organize and store it. Lastly, there was not any consistency or standardization of the data in a file processing system, which makes maintenance difficult. For all these reasons, the database management approach was produced.

Database management systems (DBMS) are designed to use one of five database structures to provide simple access to information stored in databases. The five database structures are the hierarchical, network, relational, multidimensional, and object-oriented models.

The hierarchical structure was used in early mainframe DBMS. Records' relationships form a treelike model. This structure is simple but inflexible because the relationship is confined to a one-to-many relationship. IBM's IMS system and the RDM Mobile are examples of a hierarchical database system with multiple hierarchies over the same data. RDM Mobile is a newly designed embedded database for a mobile computer system. The hierarchical structure is used primarily today for storing geographic information and file systems.

The network structure consists of more complex relationships. Unlike the hierarchical structure, it can relate to many records and accesses them by following one of several paths. In other words, this structure allows for many-to-many relationships.

The relational structure is the most commonly used today. It is used by mainframe, midrange, and microcomputer systems. It uses two-dimensional rows and columns to store data. The tables of records can be connected by common key values.
While working for IBM, E. F. Codd designed this structure in 1970. The model is not easy for the end user to run queries with, because it may require a complex combination of many tables.

The multidimensional structure is similar to the relational model. The dimensions of the cube-like model hold data relating to elements in each cell. This structure gives a spreadsheet-like view of data. It is easy to maintain because records are stored as fundamental attributes, the same way they are viewed, and the structure is easy to understand. Its high performance has made it the most popular database structure when it comes to enabling online analytical processing (OLAP).

The object-oriented structure has the ability to handle graphics, pictures, voice, and text types of data without difficulty, unlike the other database structures. This structure is popular for multimedia Web-based applications. It was designed to work with object-oriented programming languages such as Java.

4.3 Data structure
Data structures (fields, records, files, and objects) are optimized to deal with very large amounts of data stored on a permanent data storage device (which implies relatively slow access compared to volatile main memory).

4.4 Database query language
A database query language and report writer allows users to interactively interrogate the database, analyze its data, and update it according to the user's privileges on the data. It also controls the security of the database. Data security prevents unauthorized users from viewing or updating the database. Using passwords, users are allowed access to the entire database or to subsets of it called subschemas. For example, an employee database can contain all the data about an individual employee, but one group of users may be authorized to view only payroll data, while others are allowed access to only work history and medical data.

If the DBMS provides a way to interactively enter and update the database, as well as interrogate it, this capability allows for managing personal databases. However, it may not leave an audit trail of actions or provide the kinds of controls necessary in a multi-user organization. These controls are only available when a set of application programs is customized for each data entry and updating function.

4.5 Transaction mechanism
A database transaction mechanism ideally guarantees ACID properties in order to ensure data integrity despite concurrent user accesses (concurrency control) and faults (fault tolerance). It also maintains the integrity of the data in the database. The DBMS can maintain the integrity of the database by not allowing more than one user to update the same record at the same time. The DBMS can help prevent duplicate records via unique index constraints; for example, no two customers with the same customer number (key field) can be entered into the database. See ACID properties for more information (redundancy avoidance).

5. DBMS topics

5.1 External, Logical and Internal view
A database management system provides the ability for many different users to share data and process resources. But as there can be many different users, there are many different database needs. The question now is: how can a single, unified database meet the differing requirements of so many users?

A DBMS minimizes these problems by providing three views of the database data: an external view (or user view), a logical view (or conceptual view), and a physical (or internal) view.
The user's view of a database program represents data in a format that is meaningful to a user and to the software programs that process those data. That is, the logical view tells the user, in user terms, what is in the database. The physical view deals with the actual, physical arrangement and location of data in the direct access storage devices (DASDs). Database specialists use the physical view to make efficient use of storage and processing resources. With the logical view, users can see data differently from how they are stored, and they do not want to know all the technical details of physical storage. After all, a business user is primarily interested in using the information, not in how it is stored.

One strength of a DBMS is that while there is typically only one conceptual (or logical) and one physical (or internal) view of the data, there can be an endless number of different external views. This feature allows users to see database information in a more business-related way rather than from a technical, processing viewpoint. Thus the logical view refers to the way the user views the data, and the physical view to the way the data are physically stored and processed.

5.2 DBMS features and capabilities
Alternatively, and especially in connection with the relational model of database management, the relation between attributes drawn from a specified set of domains can be seen as being primary. For instance, the database might indicate that a car that was originally "red" might fade to "pink" in time, provided it was of some particular "make" with an inferior paint job. Such higher-arity relationships provide information on all of the underlying domains at the same time, with none of them being privileged above the others.

5.3 DBMS simple definition
A database management system is a system in which related data is stored in an "efficient" and "compact" manner. "Efficient" means that the data stored in the DBMS can be accessed quickly, and "compact" means that the data stored in the DBMS occupies little space in the computer's memory. The phrase "related data" in this definition means that the data stored in the DBMS concerns some particular topic.

Throughout recent history, specialized databases have existed for scientific, geospatial, imaging, document storage, and similar uses. Functionality drawn from such applications has lately begun appearing in mainstream DBMSs as well. However, the main focus there, at least when aimed at the commercial data processing market, is still on descriptive attributes on repetitive record structures.

Thus, the DBMSs of today roll together frequently needed services and features of attribute management. By externalizing such functionality to the DBMS, applications effectively share code with each other and are relieved of much internal complexity. Features commonly offered by database management systems include:

5.3.1 Query ability
Querying is the process of requesting attribute information from various perspectives and combinations of factors. Example: "How many 2-door cars in Texas are green?" A database query language and report writer allow users to interactively interrogate the database, analyze its data, and update it according to the users' privileges on the data.

5.3.2 Backup and replication
Copies of attributes need to be made regularly in case primary disks or other equipment fails. A periodic copy of attributes may also be created for a distant organization that cannot readily access the original.
DBMSs usually provide utilities to facilitate the process of extracting and disseminating attribute sets. When data is replicated between database servers, so that the information remains consistent throughout the database system and users cannot tell or even know which server in the DBMS they are using, the system is said to exhibit replication transparency.

5.3.3 Rule enforcement
Often one wants to apply rules to attributes so that the attributes are clean and reliable. For example, we may have a rule that says each car can have only one engine associated with it (identified by Engine Number). If somebody tries to associate a second engine with a given car, we want the DBMS to deny such a request and display an error message. However, with changes in the model specification, such as, in this example, hybrid gas-electric cars, rules may need to change. Ideally, such rules should be able to be added and removed as needed without significant data layout redesign.

5.3.4 Security
Often it is desirable to limit who can see or change which attributes or groups of attributes. This may be managed directly by individual, or by the assignment of individuals and privileges to groups, or (in the most elaborate models) through the assignment of individuals and groups to roles which are then granted entitlements.

5.3.5 Computation
There are common computations requested on attributes, such as counting, summing, averaging, sorting, grouping, cross-referencing, and so on. Rather than have each computer application implement these from scratch, applications can rely on the DBMS to supply such calculations.

5.3.6 Change and access logging
Often one wants to know who accessed what attributes, what was changed, and when it was changed. Logging services allow this by keeping a record of access occurrences and changes.

5.3.7 Automated optimization
If there are frequently occurring usage patterns or requests, some DBMSs can adjust themselves to improve the speed of those interactions. In some cases the DBMS will merely provide tools to monitor performance, allowing a human expert to make the necessary adjustments after reviewing the statistics collected.

5.4 Meta-data repository
Metadata is data describing data. For example, a listing that describes what attributes are allowed to be in data sets is called "meta-information". Meta-data is also known as data about data.

5.5 Current trends
In 1998, database management was in need of a new style of databases to solve the database management problems of the time. Researchers realized that the old trends of database management were becoming too complex and that there was a need for automated configuration and management. Surajit Chaudhuri, Gerhard Weikum, and Michael Stonebraker were the pioneers that dramatically affected thinking about database management systems. They believed that database management needed a more modular approach and that there are many different specification needs for various users. Since this new development process of database management, we currently have endless possibilities. Database management is no longer limited to "monolithic entities". Many solutions have been developed to satisfy the individual needs of users. The development of numerous database options has created flexible solutions in database management.

Today there are several ways database management has affected the technology world as we know it. Organizations' demand for directory services has become an extreme necessity as organizations grow.
Businesses are now able to use directory services that provide prompt searches for their company information. Mobile devices are not only able to store users' contact information but have grown to have bigger capabilities. Mobile technology is able to cache large amounts of information that is used on computers and to display it on smaller devices. Web searches have even been affected by database management. Search engine queries are able to locate data.
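The query-ability example given in Section 5.3.1 ("How many 2-door cars in Texas are green?") can be made concrete with a small sketch. The table name, columns, and rows below are hypothetical, and SQLite is used only as a stand-in for a full DBMS; the point is simply that the user states what is wanted and the engine decides how to find it.

```python
import sqlite3

# In-memory SQLite database stands in for a full DBMS.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE cars (car_id INTEGER PRIMARY KEY,"
    " doors INTEGER, state TEXT, color TEXT)"
)
conn.executemany(
    "INSERT INTO cars (doors, state, color) VALUES (?, ?, ?)",
    [(2, "TX", "green"), (4, "TX", "green"), (2, "TX", "red"), (2, "CA", "green")],
)

# Declarative query: the DBMS, not the application program, plans the search.
(count,) = conn.execute(
    "SELECT COUNT(*) FROM cars WHERE doors = 2 AND state = 'TX' AND color = 'green'"
).fetchone()
print(count)  # -> 1
conn.close()
```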

University Graduation Design: Two Foreign-Language Translations on Databases


Original text: Structure of the Relational Database, from Database System Concepts

Part 1: Relational Databases
The relational model is the basis for any relational database management system (RDBMS). A relational model has three core components: a collection of objects or relations, operators that act on the objects or relations, and data integrity methods. In other words, it has a place to store the data, a way to create and retrieve the data, and a way to make sure that the data is logically consistent.

A relational database uses relations, or two-dimensional tables, to store the information needed to support a business. Let's go over the basic components of a traditional relational database system and look at how a relational database is designed. Once you have a solid understanding of what rows, columns, tables, and relationships are, you'll be well on your way to leveraging the power of a relational database.

Tables, Rows, and Columns
A table in a relational database, alternatively known as a relation, is a two-dimensional structure used to hold related information. A database consists of one or more related tables.

Note: Don't confuse a relation with relationships. A relation is essentially a table, and a relationship is a way to correlate, join, or associate two tables.

A row in a table is a collection or instance of one thing, such as one employee or one line item on an invoice. A column contains all the information of a single type, and the piece of data at the intersection of a row and a column, a field, is the smallest piece of information that can be retrieved with the database's query language. For example, a table with information about employees might have a column called LAST_NAME that contains all of the employees' last names. Data is retrieved from a table by filtering on both the row and the column.

Primary Keys, Datatypes, and Foreign Keys
The examples throughout this article will focus on the hypothetical work of Scott Smith, database developer and entrepreneur. He just started a new widget company and wants to implement a few of the basic business functions using the relational database to manage his Human Resources (HR) department.

Relation: A two-dimensional structure used to hold related information, also known as a table.

Note: Most of Scott's employees were hired away from one of his previous employers, some of whom have over 20 years of experience in the field. As a hiring incentive, Scott has agreed to keep the new employees' original hire dates in the new database.

Row: A group of one or more data elements in a database table that describes a person, place, or thing.

Column: The component of a database table that contains all of the data of the same name and type across all rows.

You'll learn about database design in the following sections, but let's assume for the moment that the majority of the database design is completed and some tables need to be implemented. Scott creates the EMP table to hold the basic employee information, and it looks something like this:

Notice that some fields in the Commission (COMM) and Manager (MGR) columns do not contain a value; they are blank. A relational database can enforce the rule that fields in a column may or may not be empty. In this case, it makes sense for an employee who is not in the Sales department to have a blank Commission field.
It also makes sense for the president of the company to have a blank Manager field, since that employee doesn't report to anyone.

Field: The smallest piece of information that can be retrieved by the database query language. A field is found at the intersection of a row and a column in a database table.

On the other hand, none of the fields in the Employee Number (EMPNO) column are blank. The company always wants to assign an employee number to an employee, and that number must be different for each employee. One of the features of a relational database is that it can ensure that a value is entered into this column and that it is unique. The EMPNO column, in this case, is the primary key of the table.

Primary Key: A column (or columns) in a table that makes the row in the table distinguishable from every other row in the same table.

Notice the different datatypes that are stored in the EMP table: numeric values, character or alphabetic values, and date values.

As you might suspect, the DEPTNO column contains the department number for the employee. But how do you know what department name is associated with what number? Scott created the DEPT table to hold the descriptions for the department codes in the EMP table.

The DEPTNO column in the EMP table contains the same values as the DEPTNO column in the DEPT table. In this case, the DEPTNO column in the EMP table is considered a foreign key to the same column in the DEPT table.

A foreign key enforces the concept of referential integrity in a relational database. The concept of referential integrity not only prevents an invalid department number from being inserted into the EMP table, but it also prevents a row in the DEPT table from being deleted if there are employees still assigned to that department.

Foreign Key: A column (or columns) in a table that draws its values from a primary or unique key column in another table. A foreign key assists in ensuring the data integrity of a table.

Referential Integrity: A method employed by a relational database system that enforces one-to-many relationships between tables.

Data Modeling
Before Scott created the actual tables in the database, he went through a design process known as data modeling. In this process, the developer conceptualizes and documents all the tables for the database. One of the common methods for modeling a database is called ERA, which stands for entities, relationships, and attributes. The database designer uses an application that can maintain entities, their attributes, and their relationships. In general, an entity corresponds to a table in the database, and the attributes of the entity correspond to columns of the table.

Data Modeling: A process of defining the entities, attributes, and relationships between the entities in preparation for creating the physical database.

The data-modeling process involves defining the entities, defining the relationships between those entities, and then defining the attributes for each of the entities. Once a cycle is complete, it is repeated as many times as necessary to ensure that the designer is capturing what is important enough to go into the database. Let's take a closer look at each step in the data-modeling process.

Defining the Entities
First, the designer identifies all of the entities within the scope of the database application. The entities are the persons, places, or things that are important to the organization and need to be tracked in the database. Entities will most likely translate neatly to database tables.
For example, for the first version of Scott's widget company database, he identifies four entities: employees, departments, salary grades, and bonuses. These will become the EMP, DEPT, SALGRADE, and BONUS tables.

Defining the Relationships Between Entities
Once the entities are defined, the designer can proceed with defining how each of the entities is related. Often, the designer will pair each entity with every other entity and ask, "Is there a relationship between these two entities?" Some relationships are obvious; some are not.

In the widget company database, there is most likely a relationship between EMP and DEPT, but depending on the business rules, it is unlikely that the DEPT and SALGRADE entities are related. If the business rules were to restrict certain salary grades to certain departments, there would most likely be a new entity that defines the relationship between salary grades and departments. This entity would be known as an associative or intersection table and would contain the valid combinations of salary grades and departments.

Associative Table: A database table that stores the valid combinations of rows from two other tables and usually enforces a business rule. An associative table resolves a many-to-many relationship.

In general, there are three types of relationships in a relational database:

One-to-many: The most common type of relationship is one-to-many. This means that for each occurrence in a given entity, the parent entity, there may be one or more occurrences in a second entity, the child entity, to which it is related. For example, in the widget company database, the DEPT entity is a parent entity, and for each department, there could be one or more employees associated with that department. The relationship between DEPT and EMP is one-to-many.

One-to-one: In a one-to-one relationship, a row in a table is related to only one or none of the rows in a second table. This relationship type is often used for subtyping. For example, an EMPLOYEE table may hold the information common to all employees, while the FULLTIME, PARTTIME, and CONTRACTOR tables hold information unique to full-time employees, part-time employees, and contractors, respectively. These entities would be considered subtypes of an EMPLOYEE and maintain a one-to-one relationship with the EMPLOYEE table. These relationships are not as common as one-to-many relationships, because if one entity has an occurrence for a corresponding row in another entity, in most cases, the attributes from both entities should be in a single entity.

Many-to-many: In a many-to-many relationship, one row of a table may be related to many rows of another table, and vice versa. Usually, when this relationship is implemented in the database, a third entity is defined as an intersection table to contain the associations between the two entities in the relationship. For example, in a database used for school class enrollment, the STUDENT table has a many-to-many relationship with the CLASS table: one student may take one or more classes, and a given class may have one or more students. The intersection table STUDENT_CLASS would contain the combinations of STUDENT and CLASS to track which students are in which classes.

Once the designer has defined the entity relationships, the next step is to assign the attributes to each entity.
This is physically implemented using columns, as shown here for the SALGRADE table as derived from the salary grade entity.

After the entities, relationships, and attributes have been defined, the designer may iterate the data modeling many more times. When reviewing relationships, new entities may be discovered. For example, when discussing the widget inventory table and its relationship to a customer order, the need for a shipping restrictions table may arise.

Once the design process is complete, the physical database tables may be created. Logical database design sessions should not involve physical implementation issues, but once the design has gone through an iteration or two, it's the DBA's job to bring the designers "down to earth." As a result, the design may need to be revisited to balance the ideal database implementation against the realities of budgets and schedules.

Translation (excerpt): Structure of the Relational Database, from Database System Concepts. Chapter 1: Relational Databases. The relational model is the basis for any relational database management system (RDBMS).
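A minimal sketch of the EMP/DEPT design discussed above, written in Python against SQLite purely for illustration: the column lists are abbreviated and the sample values are invented, but it shows a primary key, a foreign key, and the referential-integrity behaviour described, namely that an employee row cannot reference a department that does not exist.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite only enforces FKs when asked

# Parent table: DEPT, with DEPTNO as its primary key.
conn.execute("CREATE TABLE dept (deptno INTEGER PRIMARY KEY, dname TEXT)")

# Child table: EMP.  EMPNO is the primary key, COMM may be NULL (non-sales
# staff have no commission), and DEPTNO is a foreign key referencing DEPT.
conn.execute("""
    CREATE TABLE emp (
        empno  INTEGER PRIMARY KEY,
        ename  TEXT NOT NULL,
        comm   REAL,
        deptno INTEGER REFERENCES dept(deptno)
    )""")

conn.execute("INSERT INTO dept VALUES (10, 'ACCOUNTING')")
conn.execute("INSERT INTO emp VALUES (7839, 'KING', NULL, 10)")       # OK: dept 10 exists

try:
    conn.execute("INSERT INTO emp VALUES (7999, 'SMITH', NULL, 99)")  # dept 99 missing
except sqlite3.IntegrityError as exc:
    print("rejected by referential integrity:", exc)
conn.close()
```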

Database Foreign References and Translation


SQL ALL-IN-ONE DESK REFERENCE FOR DUMMIES

Data Files and Databases

I. Irreducible complexity
Any software system that performs a useful function is going to be complex. The more valuable the function, the more complex its implementation will be. Regardless of how the data is stored, the complexity remains. The only question is where that complexity resides. Any non-trivial computer application has two major components: the program and the data. Although an application's level of complexity depends on the task to be performed, developers have some control over the location of that complexity. The complexity may reside primarily in the program part of the overall system, or it may reside in the data part.

Operations on the data can be fast. Because the program interacts directly with the data, with no DBMS in the middle, well-designed applications can run as fast as the hardware permits. What could be better? A data organization that minimizes storage requirements and at the same time maximizes speed of operation seems like the best of all possible worlds. But wait a minute. Flat file systems came into use in the 1940s. We have known about them for a long time, and yet today they have been almost entirely replaced by database systems. What's up with that? Perhaps it is the not-so-beneficial consequences.
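To make the point about where the complexity lives more concrete, here is a small sketch of the flat-file style the excerpt describes: the application program itself must know the file layout and do all of the searching, since there is no DBMS in the middle. The file name and field layout below are invented for illustration.

```python
import csv

# With a flat file, both the record layout and the search logic live in the
# application program; nothing sits between the program and the data.
def unpaid_invoices(path="invoices.csv"):            # hypothetical flat file
    unpaid = []
    with open(path, newline="") as f:
        # Assumed columns: invoice_no, customer, amount, paid ("Y"/"N").
        for row in csv.DictReader(f):
            if row["paid"] == "N":                   # the program, not a query engine, filters
                unpaid.append((row["invoice_no"], float(row["amount"])))
    return unpaid
```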

Inventory Management Foreign Literature and Translation


Undergraduate Thesis: Foreign Literature and Translation
Title: Zero Inventory Approach
Source: The IUP Journal of Supply Chain Management
Publication date: June 2012
School (Department): School of Management Engineering; Major: Industrial Engineering; Class: Industrial Engineering 112; Student: 张金丰 (student no. 2011021527); Supervisor: 孔海花; Translation date: 2015-06-14

Foreign literature: Zero Inventory Approach

Managing optimal inventory in the supply chain is critical for an enterprise. The ability to increase inventory turns and the use of best inventory practices will reduce inventory costs across the supply chain. Moving towards zero inventory will result in effective inventory management in the business process. Inventory optimization solutions can be implemented easily using inventory optimization software. With Radio Frequency Identification (RFID) technology, inventory can be updated in real time without product movement, scanning, or human involvement. Companies have to adopt best practices to optimize operational processes and lower their cost structure through inventory strategies.

Introduction
With supply chain planning and the latest software, companies are managing their inventory in the best possible manner, keeping inventory holdings to the minimum without sacrificing customer service needs. The zero inventory concept has been around since the 1980s. It tries to reduce inventory to a minimum and enhances profit margins by reducing the need for warehousing and the expenses related to it.

The concept of a supply chain is to have items flowing from one stage of supply to the next, both within the business and outside, in a seamless fashion. Any stock in the system is caused either by delay between the processes (demand, distribution, transfer, recording, and production) or by variation in the flow. Eliminating or reducing stock can be achieved by linking processes, making the throughput rate the same across processes, locating processes near each other, and coordinating flows. Recent advanced software has made the zero inventory strategy executable.

"Inventory optimization is an emerging practical approach to balancing investment and service-level goals over a very large assortment of Stock-Keeping Units (SKUs). In contrast to traditional 'one-at-a-time' marginal stock level setting, inventory optimization simultaneously determines all SKU stock levels to fulfill total service and investment constraints or objectives."

Inventory optimization techniques provide a new logic to drive the system with information systems. To effectively manage inventory, businesses must also optimize the costs of buying, holding, producing, moving, and selling inventory.

The objective of inventory optimization is to sustain minimal levels of inventory while providing the maximum possible levels of service. Supply Chain Design and Optimization (SCDO) is an inventory optimization solution which helps companies satisfy customer demands while balancing limitations on supply and the need for operational efficiency. Inventory optimization focuses on modeling uncertainty and variability and minimizing the risks they impose on the supply chain.

Inventory optimization can help resolve total supply chain cost options like:
• in-house manufacturing vs. contract manufacturing;
• domestic vs. offshore;
• new suppliers' cost vs. current suppliers' cost.

Companies can benefit from inventory optimization, provided they control their supply chain processes and the complexity of the supply chain. In case the supply chain is very complex, besides inventory optimization, network design has to be used to reap the benefits fully.
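The idea of balancing investment against service level can be illustrated with the textbook reorder-point calculation. The formula (safety stock = z x demand standard deviation x square root of lead time) and the numbers below are a standard illustration chosen for this sketch, not figures taken from the paper.

```python
from math import sqrt

def reorder_point(daily_demand, demand_sd, lead_time_days, z):
    """Reorder point = expected lead-time demand + safety stock.

    z is the service-level factor (about 1.65 for a 95% cycle service level);
    a higher z means more safety stock -- more investment, better service.
    """
    safety_stock = z * demand_sd * sqrt(lead_time_days)
    return daily_demand * lead_time_days + safety_stock

# Illustrative numbers only: 40 units/day, sd of 12, 5-day lead time, 95% service.
print(round(reorder_point(40, 12, 5, 1.65)))  # -> about 244 units
```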
This paper covers the various inventory models that are available and then describes technologies like Radio Frequency Identification (RFID) and networking used for the optimization of inventory. The paper also describes the software solutions available for achieving the same. It concludes by giving a few examples where inventory optimization has been successfully implemented.

Inventory Models

Hexagon Model
The hexagon model was developed out of the need to structure day-to-day work, reduce headcount and other inventory costs, and improve customer satisfaction.

In the first phase, operation strategies were established in alignment with internal customers. Later, continuous improvement plans and business continuity plans were added. The five strategies used were: forecasting future consumption, setting financial targets to minimize inventory costs, preparing daily reports to monitor inventory operational performance, studying critical success indicators to track accomplishments, and forming inventory strategic objectives and inventory health and operating strategies. The hexagon model is a combination of two triangular structures (Figure 1). The upper triangle focuses on the soft management of human resources, customer orientation, and supplier relations; the lower focuses on the execution of inventory plans with their success criteria, continuous improvement methodology, and business continuity plans.

The inventory indicators are: total inventory value, availability of spares, days of inventory, cost of inventory, cost saving and cash saving output expenditure, and quality improvement. The hexagon model combines the elements of the people involved in managing inventory with operational excellence (Figure 2). Managing inventory with operational excellence was achieved by reducing the number of employees in the material department, changing the mix of people skills, such as introducing engineering into the department structure, and reducing the cost of ownership of the material department to the operation that it supports.

Normally, this is implemented with a reduction in the headcount of the material department, leaving fewer people, with engineering skills, in the department. Operational results include improvement in raw-material supply line quality indicators, competitive days of inventory, and improved and stabilized spares availability. The financial results include an increase in cost savings and a reduced cost of inventory. It can be established by outsourcing some of the inventory functions as required. The level of efficiency of the inventory managed can be measured against a specific risk level, changing requirements, or changes in the environment.

Just-In-Time (JIT)
The just-in-time (JIT) inventory system is a concept developed by the Japanese, wherein the suppliers deliver the materials to the factory just in time for their processing, eliminating the need for storage and retrieval.
The rate of output and the rate of supply of inputs are synchronized to manage zero inventory.

The main benefits of JIT are: set-up times are significantly reduced in the factory; the flow of goods from warehouse to shelves improves; employees who possess multiple skills are utilized more efficiently; scheduling and employee work hours become more consistent; there is an increased emphasis on supplier relationships; and continuous round-the-clock supplies keep workers productive and businesses focused on turnover.

And though a JIT system might even be a necessity, given the inventory demands of certain business types, its many advantages are realized only when some significant risks, like delays in the movement of goods over long distances, are mitigated.

Vendor-Managed Inventory (VMI)
Vendor-Managed Inventory (VMI) is a planning and management system in which the vendor is responsible for maintaining the customer's inventory levels. VMI is defined as a process or mechanism in which the supplier creates the purchase orders based on demand information. VMI is a combination of e-commerce, software, and people. It has resulted in a dramatic reduction of inventory across the supply chain. VMI is categorized in the real world as collaboration, automation, and cost transference.

The main objectives of VMI are better, cheaper, and faster transactions. In order to establish the VMI process, management commitment, data synchronization, setting up agreements, data exchange, ordering, invoice matching, and measurement have to be undertaken.

The benefits of VMI to an organization are a reduction in inventory, besides a reduction in stock-outs and an increase in customer satisfaction. Accurate information, which is required for optimizing the supply chain, is facilitated by the efficient transfer of information. The concept of VMI can succeed only when there is trust between the organization and its suppliers, since all the demand information made available to the suppliers could be revealed to competitors. VMI optimizes inventory in the supply chain and reduces stock-outs through proper planning and centralized forecasting.

Consignment Model
The consignment inventory model is an extension of VMI in which the vendor places inventory at the customer's location while retaining ownership of the inventory.

The consignment inventory model works best in the case of new and unproven products where there is a high degree of demand uncertainty, highly expensive products, and service parts for critical equipment.
The types of consignment inventory ownership transfer models are: pay as sold during a pre-defined period, ownership changes after a pre-defined period, and order-to-order consignment.

The issues that the VMI and consignment inventory models encounter are the cost of developing the VMI system, invoicing problems, cash flow problems, Electronic Data Interchange (EDI) problems, and obsolete stock.

Enabling Practices
The decision makers have to make prudent decisions on the future course of action of a project relating to the following variables: forecasting and inventory management, inventory management practices, inventory planning, optimal purchasing, multichannel inventory, and moving towards zero inventory.

To improve inventory management for better forecasting, the 14 best practices that will most likely benefit the business the most are:
• Synchronize promotions;
• Revamp the organizational structure;
• Take a longer view of item planning;
• Enforce vendor compliance;
• Track key inventory metrics;
• Select the right systems;
• Master the art of master scheduling;
• Adhere to exception reporting;
• Identify lost demand;
• Plan by assortment;
• Track inbound receipts;
• Create coverage reports;
• Balance understock/overstock; and
• Optimize SKUs.

This will leverage the retailer's ability to buy larger quantities across all channels while buying only what is required for a specified period, in order to manage risk in a better way. In most multichannel companies, inventory is the largest asset on the balance sheet, which means that their profitability will be determined to a large degree by the way they plan, forecast, and manage inventory (Curt Barry, 2007). They can follow steps like creating a strategy, integrating planning and forecasting, equipping themselves with the best-laid plans, building strong vendor relationships, and liquidating effectively.

Moving Towards Zero Inventory
At the fore is the development and widespread adoption of nimble, sophisticated software systems such as Manufacturing Resource Planning (MRP II), Enterprise Resource Planning (ERP), and Advanced Planning and Scheduling (APS) systems, as well as dedicated supply chain management software systems. These systems offer manufacturers greater functionality. To implement a "zero stock" system, companies need to have a good information system to handle customer orders, sub-contractor orders, product inventory, and all issues related to production. If a company has no IT infrastructure, it will need to build it from scratch.

A good information system can help managers get accurate data and make strategic decisions. IT infrastructure is not a cost, but an investment. A company can use the RFID method, network inventory, and other software tools for inventory optimization.

Radio Frequency Identification (RFID)
RFID is an automatic identification method which relies on storing and remotely retrieving data using devices called RFID tags or transponders. RFID use in enterprise supply chain management increases the efficiency of inventory tracking and management. RFID application improves asset utilization by tracking reusable assets and providing visibility, improves quality control by tagging raw material, work-in-progress, and finished-goods inventory, and improves production execution and supply chain performance by providing accurate, timely, and detailed information to enterprise resource planning and manufacturing execution systems. The status of inventory can be obtained automatically by using RFID.
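A toy sketch of the automatic update just described: each tag read adjusts on-hand inventory without scanning or manual entry. The tag-to-SKU mapping, portal names, and event format are assumptions invented for the example, not details from the paper.

```python
from collections import Counter

on_hand = Counter({"SKU-100": 25, "SKU-200": 8})                # current stock by SKU
tag_to_sku = {"E200-0001": "SKU-100", "E200-0002": "SKU-200"}   # hypothetical EPC map

def on_tag_read(epc, portal):
    """Called for every tag read (assumed callback from the RFID middleware)."""
    sku = tag_to_sku.get(epc)
    if sku is None:
        return                       # unknown tag: ignore or flag for investigation
    if portal == "receiving":
        on_hand[sku] += 1            # goods arriving
    elif portal == "shipping":
        on_hand[sku] -= 1            # goods leaving

on_tag_read("E200-0001", "receiving")
on_tag_read("E200-0002", "shipping")
print(dict(on_hand))                 # {'SKU-100': 26, 'SKU-200': 7}
```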
There are many benefits of using RFID, such as reduced inventory, reduced time, reduced errors, increased accessibility, and high security.

Network Inventory
A Network Inventory Management System (NIMS) tracks the movement of items across the system and thus can locate malfunctioning equipment or processes and provide the information required to diagnose and correct problem areas. It also determines where capacity is to be added, calculates the impact of market conditions, and assesses the impact of new products and of a new customer. NIMS is very important when the complexity of a supply chain is high. It determines the manufacturing and distribution strategies for the future. It should take into consideration production, location, inventory, and transportation.

The NIMS software, including asset configuration information and change management, is an essential component of a robust network management architecture. NIMS provides information that administrators can use to improve network management performance and helps develop effective network asset control processes.

A network inventory solution manages network resource information for multiple network technologies as well as multiple vendors in one common, accurate database. It is an extremely useful tool for improving several operational processes, such as resource trouble management, service assurance, network planning and provisioning, field maintenance, and spare parts management.

The NIMS software, including asset configuration information and change management, is an essential component of a strong network management architecture. In addition, software tools that provide planning, design, and life cycle management for network assets should appear prominently on enterprise radar screens.

Inventory Optimization Software

i2 Inventory Optimization
i2 solutions enable customers to realize top- and bottom-line benefits through the use of superior inventory management practices. i2 Inventory Optimization can help companies monitor, manage, and optimize strategies to decide what to make, what to buy and from whom, and what inventories to carry, where, in what form, and how much, across the supply chain. It enables customers to learn and continuously improve inventory management policies and processes, strategic analysis, and optimization.

A product-oriented industry can install i2 Inventory Optimization and develop its supply chain. Through this, the company can reduce inventory levels and overall logistics costs. It can also achieve higher service-level performance, greater customer satisfaction, improved asset utilization, accelerated inventory turns, better product availability, reduced risk, and more precise and comprehensive supply chain visibility.

Oracle Inventory Optimization
Oracle Inventory Optimization considers the demand, supply, constraints, and variability in the extended supply chain to optimize strategic inventory investment decisions. It allows retailers to provide higher service levels to customers at a lower total cost. Oracle Inventory Optimization is part of the Oracle E-Business Suite, an integrated set of applications that are engineered to work together. Oracle Inventory Optimization provides solutions when demand and supply are uncertain. It provides a graphic representation of the plan. It calculates cost and risk.

MRO Software
MRO Software (now a part of IBM's Tivoli software business) announced a marketing alliance with inventory optimization specialists Xtivity to enhance the service offering of inventory management solutions for MRO Software customers.
MRO offers Xtivity's Inventory Optimizer (XIO) service as an extension of its asset and service management solutions.

Structured Query Language (SQL)
Successful implementation of an inventory optimization solution requires significant effort and can pose certain risks to companies implementing such solutions. Structured Query Language (SQL) can be used on a common ERP platform. An optimal inventory policy can be determined by using it. Along with it, other metrics such as projected inventory levels, projected backlogs, and their confidence bands can also be calculated. The only drawback of this method is that it may not be possible to obtain quick real-time results because of architectural and algorithmic complexity. However, potential scenarios can be analyzed in anticipation of user requests, with the results stored beforehand.

Some Examples

Toyota's Practice in India
Toyota, a quality-conscious company working towards zero inventory, has selected Mitsui and Transport Corporation of India Ltd. (TCI) for its entire logistics solution, encompassing planning, transportation, warehousing, distribution, MIS, and related documentation. Infrastructure is a bottleneck that continues to dog economic growth in India. Transystem renders services to Toyota Kirloskar Motors Pvt. Ltd. such as procurement, consolidation, and transportation of original equipment manufacturers' parts through milk-run operations from various suppliers all over India on a JIT basis; transportation of Complete Built-up Units (CBU) from the plant to all dealers in the country and operation of CBU yards; coordination and transportation of Knock Down (KD) parts from the port of entry to the manufacturing plant; and transportation of aftermarket parts to dealers by road and air.

Wal-Mart
Wal-Mart is the largest retailer in the United States, with an estimated 20% of the retail grocery and consumables business, as well as the largest toy seller in the US, with an estimated 22% share of the toy market. Wal-Mart also operates in Argentina, Brazil, Canada, Japan, Mexico, Puerto Rico, and the UK. Wal-Mart keeps close track of its inventories by extensively adopting vendor-managed inventory to streamline the flow of goods from manufacturer to the store shelf. This results in more turns and therefore less inventory. Wal-Mart is an early adopter of RFID to monitor the movement of stock in different stages of the supply chain. The company keeps tabs on all of its merchandise by outfitting its products with RFID tags. Wal-Mart has indicated recently that it is moving towards the aggressive theoretical zero inventory model.

Chordus Inc.
Chordus Inc. has the largest division of office furniture in the USA. It has advanced logistics and a zero inventory model. It has an Internet-based system for its distribution network, with real-time updates and low costs. Chordus determined that only SAP R/3 could accommodate this cutting-edge operational model for its network of 150 dealer-owned franchises in 44 states, supported by five nationwide Distribution Centers (DCs) and a fleet of 65 delivery trucks.

Small-Scale Cycle Industry Around Ludhiana
In and around Ludhiana, there are many small bicycle units which are not organized. They have a sharp focus on financial and raw material management and enjoy low employee turnover. They have been practicing zero inventory models, which became popular in Japan only much later. Raw material is brought into the unit in the morning, processed during the day, and by evening the finished product is passed on to the next unit.
Some Examples
Toyota's Practice in India
Toyota, a quality-conscious company working towards zero inventory, has selected Mitsui and Transport Corporation of India Ltd. (TCI) for its entire logistics solution, encompassing planning, transportation, warehousing, distribution, MIS and related documentation. Infrastructure is a bottleneck that continues to dog economic growth in India. Transystem renders services such as procurement, consolidation and transportation of original equipment manufacturers' parts through milk-run operations from various suppliers all over India on a JIT basis; transportation of Complete Built-up Units (CBU) from the plant to all dealers in the country and operation of CBU yards; coordination and transportation of Knock-Down (KD) parts from the port of entry to the manufacturing plant; and transportation of aftermarket parts to dealers by road and air for Toyota Kirloskar Motors Pvt. Ltd.
Wal-Mart
Wal-Mart is the largest retailer in the United States, with an estimated 20% of the retail grocery and consumables business, and the largest toy seller in the US, with an estimated 22% share of the toy market. Wal-Mart also operates in Argentina, Brazil, Canada, Japan, Mexico, Puerto Rico and the UK.
Wal-Mart keeps close track of its inventories by extensively adopting vendor-managed inventory to streamline the flow of goods from manufacturer to the store shelf. This results in more turns and therefore lower inventories. Wal-Mart is an early adopter of RFID to monitor the movement of stock in different stages of the supply chain; the company keeps tabs on all of its merchandise by outfitting its products with RFID tags. Wal-Mart has indicated recently that it is moving towards the aggressive theoretical zero-inventory model.
Chordus Inc.
Chordus Inc. has the largest division of office furniture in the USA. It has advanced logistics and a zero-inventory model, with an Internet-based distribution network offering real-time updates at low cost. Chordus determined that only SAP R/3 could accommodate this cutting-edge operational model for its network of 150 dealer-owned franchises in 44 states, supported by five nationwide Distribution Centers (DCs) and a fleet of 65 delivery trucks.
Small-Scale Cycle Industry Around Ludhiana
In and around Ludhiana there are many small bicycle units, which are not organized. They have a sharp focus on financial and raw-material management and enjoy low employee turnover. They have been practicing zero-inventory models, which became popular in Japan only much later. Raw material is brought into a unit in the morning, processed during the day, and by evening the finished product is passed on to the next unit. Thus the chain continues till the ultimate finished product is manufactured. In this way, bicycles used to be produced in Ludhiana at half the production cost of TI Cycles. Even the large manufacturers of cycles, like Hero Cycles, Atlas Cycles and Avon Cycles, are reported to maintain only one week's inventory.

Conclusion
Inventory managers are faced with high service-level requirements and many SKUs, and they appreciate the complexity of inventory optimization, as well as the explicit control that is needed over total investment in warehousing, moving and logistics. Inventory optimization can provide both an enormous performance improvement for the supply chain and ongoing continuous improvement over competitors. The company achieves the stability needed to have enough stock to meet unpredictable demand without wasteful allocation of capital. Having the right amount of stock in the right place at the right time improves customer satisfaction, market share and the bottom line. Certainly, the organizations that are able to take inventory optimization to the enterprise level will reap greater benefits. Zero inventory may be wishful thinking, but embracing new technologies and processes to manage one's inventory more efficiently could move one much closer to that ideal.

中文译文:零库存方法
对于一个企业来说,在供应链中优化库存管理是至关重要的。

计算机 数据库 外文文献翻译 中英文

科技外文文献
Microsoft's Future "Soul" - Exploring SQL Server 2005
Author: CHEN Bao-lin

A "Brief History" of SQL Server Development
Before beginning, let us look at a "brief history" of Microsoft SQL Server development.
1988: SQL Server is jointly developed by Microsoft and Sybase, running on the OS/2 platform.
1993-09-14: SQL Server 4.2, a desktop database system with limited functionality, is integrated with Windows and provides an easy-to-use user interface.
1994: Microsoft and Sybase suspend their cooperation on database development.
1995: SQL Server 6.0, code-named "SQL95". Microsoft rewrites most of the core system, providing a low-cost database solution for small business applications.
1996-04-16: SQL Server 6.5. This version brings significant performance improvements and a wide variety of useful functions.
1998-11-16: SQL Server 7.0, code-named "Sphinx". The core database engine is completely rewritten, providing a database solution for small and medium business applications and containing initial Web support. SQL Server becomes widely used starting from this version.
2000-08-07: The birth of SQL Server 2000, code-named "Shiloh". Microsoft positions the product as an enterprise-class database system comprising three components (DB, OLAP, English Query). Rich front-end tools, improved development tools, and XML support drive the adoption of this version. It is offered in the following editions.
Enterprise Edition: supports giant terabyte-class databases and thousands of concurrent online users through cluster deployment.
Standard Edition: supports small and medium enterprises.
Personal Edition: supports desktop applications.
Developer Edition: for enterprise developers and for building Windows CE enterprise applications.
Windows CE Edition: can be applied to any Windows CE mobile device.
2003-04-24: SQL Server 2000 64-bit edition, code-named "Liberty", competes with Oracle on Unix/Linux.
2005-11-07: SQL Server 2005, code-named "Yukon", the latest version of Microsoft SQL Server. Microsoft describes this product, which took five years of major changes, as a landmark release. From SQL Server 4.2 to 2005: since entering the database market in the early 1990s and up to the launch of SQL Server 2005, Microsoft has moved from follower to a force reshaping the enterprise database market. The sword was sharpened for ten years; through many storms, Microsoft has extended its database management perspective to a broader and deeper realm. This paper attempts to trace and summarize the formative history of Microsoft SQL Server.
In 1987 Sybase developed a version of SQL Server running on Unix systems. In 1988, Microsoft invited Sybase, then gathering momentum in the database field, to jointly develop SQL Server. Microsoft's intent to enter the database market was plain for all to see, and the database market was bound to be stirred up. Sure enough, the following ten years were an intensely contested period for the database market. On 1993-04-12 Microsoft released SQL Server version 4.2, echoing the earlier introduction of Windows NT and marking Microsoft's formal entry into the enterprise application market, of which the database is the most important part. Although SQL Server 4.2 was still just a desktop version, it already showed considerable potential.
1994, Microsoft and Sybase formal suspension of the database development cooperation This meaningfully.From 1995 to 2000, Microsoft has adopted 6.0, 6.5,7.0, 2000 Version 4. From the perspective view, SQL Server 2000 version has been able to provide the following services.Online Services (On-line services) : "On-Line" refers to real-time online users use data services.Online transaction processing OLTP (On-Line Transaction Processing) : OLTP operation by the order-processing services transactions, or transactions follow completion or undoes all the principles. It also did not include the type of services. This is a sector that is the most universal and most widely forms of service. Analysis of online services OLAP (On-Line Analytical Processing) : OLAP is a kind of multidimensional data display (such as data warehousing, data mart, data cube), usually to do data mining. As OLTP used to operate and SQL data definition, OLAP is used and MDX (MultiDimensional Expressions) visit and definitions of data. From the technical structure of SQL Server 2000, as follows.Data structure•physical structure of data structure.•logical framework : how to define Tables, ro ws, columns, and other data objectsData Processing• data processing storage engine : it is responsible for dealing with how the data retention.• engine : it is responsible for how the data for the visit and relations.• SQL Server Agent : it is respo nsible for task scheduling and events management.Data manipulation• DB APIs : ADO (ActiveX Data Objects).OLE DB (linking and embedding data objects).DB-Library for C + +.ODBC (Open Data Internet).ESQL (Embedded SQL.)• URLs (uniform resource locat or address).• English inquiries (English Query).SQL Server Enterprise Manager.Tools : Inquiry analyzers, DTS (Data Transformation Services), Backup and restore and replication, metadata services, storage expansion process, SQL tracking, can be used for performance tuning.Experiences from users, SQL Server 2000 version of a number of new characteristics, such as XML support, many examples of support, data warehouse and business intelligence to enhance performance and scalability will improve, operating guide, and the inquiries, DTS, Transact SQL enhancements.From the license price, Microsoft SQL Server 2000, the price and total cost of ownership (TCO) only to the Oracle or D B2 2 / 1 to 1 / 3.In summary, Microsoft high-performance low-cost access to the product concept on the market success SQL Server 2000 database can meet the OLTP and OLAP application deployment, and better performance, and prices relative Oracle, DB2 and other databases low. Meanwhile, SQL Server 2000 Enterprise Edition also includes the standard version and other versions to meet different levels of user demand, These factors prompted the SQL Server 2000 was a significant part of the SME market share Microsoft has the opportunity to enter the mainstream database vendors ranks.At the same time, we should realize that SQL Server 2000 and Oracle launched late in the G 10 high-end enterprise-level functions in surviving deficient, so bridging the gap to catch up on the historic mission to the code-named "Yukon," the new version.Killer code-named "Yukon"From the 1989 release of Microsoft SQL Server 1.0 is now a full 15 years. In that 15 years of SQL Server fromscratch, from small to large, experiencing a once legendary. It has not only eroded with IBM, Oracle database market share, and the next generation of SQL Server has begun to gradually become the next Windows operating system core. 
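Stepping back to the OLTP/OLAP distinction drawn above, the following T-SQL fragment contrasts a typical short transaction with a typical analytical aggregation. The orders and order_lines tables are invented for illustration and are not objects shipped with SQL Server 2000:

```sql
-- OLTP style: a short, atomic transaction that records one order line.
BEGIN TRANSACTION;
INSERT INTO order_lines (order_id, sku, qty, unit_price)
VALUES (10342, 'PEN-BLUE-05', 24, 0.85);
UPDATE orders SET line_count = line_count + 1 WHERE order_id = 10342;
COMMIT TRANSACTION;

-- OLAP style: a read-only aggregation over history, summarised by month.
SELECT YEAR(o.order_date)  AS order_year,
       MONTH(o.order_date) AS order_month,
       SUM(l.qty * l.unit_price) AS revenue
FROM orders o
JOIN order_lines l ON l.order_id = o.order_id
GROUP BY YEAR(o.order_date), MONTH(o.order_date)
ORDER BY order_year, order_month;
```

The first block would run many times per second in an OLTP workload; the second scans history once for an analyst.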
China and the Bill Gates mouth • The constant repetition of "seamless calculation" is the core of Yukon, The code-named "Yukon," the next generation of our database will be brought into what kind of world? Internet "soft" pillarIn today's era of the network, data searching,data storage, classification of data, etc. All this has become the Internet network constitutes the "soft" pillars, and the database system is the pillar of the most critical. If there is no database support, we would never be able to Google or Baidu in the search for the information they need. can not use the convenient electronic mailbox, but that Network World because it is a large database consisting of.According to IDC's latest data show that the global database software market seems to be stirring Tension 2003 total revenue reached 13.6 billion U.S. dollars, compared with 2002's 12.6 billion U.S. dollars have increased. Oracle, IBM and Microsoft now controls 75% market share. Oracle last year for a market share of 39.8%, 31.3% for IBM, Microsoft to 12.1%.What is the database? In the University's computer textbooks, the database is being interpreted in this way : The database is the computer application system in a specialized data resource management system. There are many forms of data, such as text, digital, symbols, graphics, images and voices, and so on. All computer data system to deal with the subject. People familiar approach of a document is produced, will soon compile a program processing documents, will be covered by the procedural requirements of data organized into data files, documentation of procedures to call. Data files and program files maintain a certain relationship. Computer Application in the rapid development of the situation, by means of such a document will highlight deficiencies. For example, it allows poor definitive data, facilitate transplantation, in different documents stored information much duplication and waste of storage space, Update inconvenience. Database system will solve this problem. Database systems from the application of specific procedures, but based on the data management, All data will be stored in a database, scientific organizations, and by means of the database management system, using it as an intermediary, with a variety of applications or application interface to make it easy access to the data in the database.This note describes is indeed very detailed, but you may not always seem dizziness, In fact, a simple database that is after a group of computer collation of data stored in one or more documents, and the management of the database software called on the database management system. A general database system (104217) can be divided into the database (Database ) and Data Management System (Database Management System, DBMS) in two parts, all of these constitute the Internet is a "soft" pillars all.Microsoft's SQL Server database software, as many of the upgrade from 6.5 to the 7.0 version, gradually become mainstream database software, and SQL Server 2000 also proved that the Windows operating system can bear the same high-end data application, as the mainstream business application of database management software. It broke the rule by the large Unix database software myth and the next generation of SQL Server 2005 there will be what kind of change?Live Yukon core secretsMicrosoft in the next version of SQL Server (codenamed "Yukon") at the planning stage , considered more of the future development of the database, and SQL Server programming capabilities. 
Microsoft's internal development staff had long been aware that the future must introduce a more unified programming model but for a different data model to provide more flexibility. The unified programming model means that the ordinary data access and operation tasks can be carried out through various channels. For example, you can choose to use XML or Framework, or Transact-S QL (T-SQL) code, and so on.Such planning will result is a new database programming platform, which in many ways a natural extension. First, host. NET Framework common language runtime (CLR) to the function of the process of expansion of database programming and managed code area. Secondly,. NET framework provides a host integration from within SQL Server powerful object database functions. XML is the in-depth support functions through the XML data typeto achieve, and It has a data type of relationship between all the functions. In addition, also added a pair of XML Query (XQuery) and XML structure definition language (XSD) standard server support. Finally, SQL Server Yukon includes T-SQL language to enhance the important function.XML in SQL Server Yukon's history really began with SQL Server 2000. SQL Server 2000 with the introduction of the XML format to relational data. large load and segmentation XML documents and databases will be open targets for XML-based Web services, and other functions, However Yukon provide a more senior XML Query function, After perfecting the Y ukon will be full play all of the advantages of XML. XML Why so critical? In fact, from the initial XML an alternative HTML said the technical development of a line format, now be seen as a storage format. XML lasting memory has drawn widespread attention, the Internet has also been a lot of XML data type applications. XML itself can be an across any platform data format, It started as a file format for use, as XML in the enterprise has been widely recognized, Users began to use XML to solve thorny business problems, such as data integration. This makes as a data storage format XML development today, Because XML can be displayed on any platform to produce the same results, XML has become a mainstream database storage format. This built-in the Yukon comprehensive XML support will trigger a new database technology revolution.These new programming models and enhanced common language to create a series of programmable, They complement and expand the current relational database model. This architecture has the ultimate aim is to build more scalable, more reliable, more robust applications, and to enhance the development of efficiency. These models Another result is a service called SQL Agent new application framework -- for Asynchronous sources delivering the Distributed Application Framework.Yukon joining century gambleConstantly talking before we say a string of technology advantages, then you may very curious, Why should we introduce this appears to be a high-end database application software technologies? Perhaps we should kick the answer.The richest on Earth doing computer predictions for the future, he believes, in the next world, every one ordinary computer will have a large enough super hard disks, At that time the hard disk is no longer simply an 80 GB is likely to be 80 TB, Although it is only a change GB TB, but that means hard disk capacity of a full upgrade of 1000 times. And the existing Windows disk data storage NTFS format, simply unable to cope with such a large capacity hard disk data search. 
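Returning briefly to the xml data type and XQuery support described earlier in this section, here is a minimal T-SQL sketch of what that support looks like in practice; the table and element names are made up for illustration:

```sql
-- Sketch of the xml data type and XQuery support (illustrative names only).
CREATE TABLE product_doc (
    product_id INT PRIMARY KEY,
    spec       XML NOT NULL
);

INSERT INTO product_doc (product_id, spec)
VALUES (1, '<product><name>Stapler</name><weight unit="g">180</weight></product>');

-- XQuery against the stored document via the xml type's query() method.
SELECT product_id,
       spec.query('/product/weight') AS weight_fragment
FROM product_doc;
```

The query() method shown here is only one of the methods the xml type exposes; the point is simply that relational columns and XML documents can live in the same table and be queried together.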
Said an image of the example, if the 100 TB of disk space on your computer, At that time, or you use Windows XP, You collate debris disk of the time required is likely to be for two days and two nights, if you want to find a particular document, You will have waited for several hours. That feeling is like to return to 286 times.In order to solve this thorny problem, the next generation Windows operating system Longhorn decided with the previous non-Windows diametrically with the programming model. The core is Avalon (development code). Avalon is the new Windows GUI library. New Longhorn into the Indigo (Web services) and WinFS (file system) of the new function. Including Avalon, these three new function called hell. Longhorn is the founder of a new "local" API. Although now is to the Win32 API compatibility and grow, However, to use the new Longhorn functions, under normal circumstances the use of hell. Max belongs to the present. NET Framework in the city. Present. NET Framework used in the category, which has hell, DLL support for the procedural mechanisms and the operation. NET basically the same.. NET Framework in SQL Server Yukon Availability when major version upgrade ( Major VersionUp), the specific date is the end of 2004. In the Yukon. NET Framework to run. In the storage process (Stored Procedures) use. NET Framework The class library. Yukon operations. NET Framework version 2.0. Supplementary to the present. NET Framework 1.1 is no relevant category of multimedia. WinFS use Yukon engines. In other words, Longhorn, the file system will use database engine.This time you understand, the next generation Windows operating system, the whole document data management will be introduced SQL Server configuration management, when Our computer data querycapabilities, data integration capability will be greatly enhanced. This of course, that the rich keep saying that the "seamless calculation" is a critical step on Microsoft, Let database software and operating systems integration projects century is undoubtedly a gamble, which, if successful, Microsoft will gradually become the dominant database, but if it fails, The almost even harden the next generation Windows listing of the normal schedule.Microsoft has provided some tools for SQL server and client applications on the network between the transmission of data increases secret. However, the Microsoft product manager said Kirsten Ward, plans to release next year a new SQL Server database will be stored in the data encryption, Hacker attacks increase defense capabilities.Microsoft earlier this year "SQL Server 2005" release time postponed until the first half of next year. The database software will enhance the launch of Microsoft database computing power and better with Oracle and IBM compete. Microsoft will also introduce a unified storage concept, locating and retrieving data more convenient. Oracle in Windows and Unix database market has been in a leading position. However, the recently adopted this year, Microsoft SQL Server to increase more advanced functions have also made remarkable progress.In addition, Microsoft will also provide a service called "Best Practices Analyzer Tool" (best practice analyzer tool) software. Database administrators can use the software using Microsoft editor of the Guide database software debugging. 
This software tool applies to the current version of Microsoft's database software, "SQL Server 2000", and provides database administrators with operations guides in various areas, for example how to improve performance and how to perform more effective data backups. Ward said that the software tool also includes an "Upgrade Advisor" program, which can scan database applications and warn "SQL Server 2000" users of the amendments needed to make those applications compatible with the upcoming "SQL Server 2005".
(Source: China Computer Education)

中文译文
微软未来的"灵魂"—SQL Server 2005探密
作者:陈宝林
SQL Server的发展"简史"
在开始本文之前,先让我们来看一下微软SQL Server的发展"简史"。

数据库外文参考文献及翻译.

数据库管理系统——实施数据完整性
一个数据库,只有在用户对它特别有信心的时候才是有用的。

这就是为什么服务器必须实施数据完整性规则和商业政策的原因。

执行SQL Server的数据完整性的数据库本身,保证了复杂的业务政策得以遵循,以及强制性数据元素之间的关系得到遵守。

因为SQL Server的客户机/服务器体系结构允许你使用各种不同的前端应用程序去操纵和从服务器上呈现同样的数据,这把一切必要的完整性约束,安全权限,业务规则编码成每个应用,是非常繁琐的。

如果企业的所有政策都在前端应用程序中被编码,那么各种应用程序都将随着每一次业务的政策的改变而改变。

即使您试图把业务规则编码为每个客户端应用程序,其应用程序失常的危险性也将依然存在。

大多数应用程序都是不能完全信任的,只有当服务器可以作为最后仲裁者,并且服务器不能为一个很差的书面或恶意程序去破坏其完整性而提供一个后门。

SQL Server使用了先进的数据完整性功能,如存储过程,声明引用完整性(DRI),数据类型,限制,规则,默认和触发器来执行数据的完整性。

所有这些功能在数据库里都有各自的用途;通过这些完整性功能的结合,可以实现您的数据库的灵活性和易于管理,而且还安全。

声明数据完整性
定义一个表时,指定构成主键的列。

这就是所谓的主键约束。

SQL Server使用主键约束,以保证指定列中所有值的唯一性不被破坏。

通过确保这个表有一个主键来实现这个表的实体完整性。

有时,在一个表中一个以上的列(或列的组合)可以唯一标志一行,例如,雇员表可能有员工编号( emp_id )列和社会安全号码( soc_sec_num )列,两者的值都被认为是唯一的。

这种列经常被称为替代键或候选键。

这些项也必须是唯一的。

虽然一个表只能有一个主键,但是它可以有多个候选键。

SQL Server通过唯一性约束来支持多个候选键的概念。
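下面给出一个简化的T-SQL示意(仅作说明:employee表及emp_id、soc_sec_num等列名沿用上文的雇员示例,并非某个真实系统的定义),演示主键约束与唯一性约束如何同时作用于一个表:

```sql
-- 示意:主键约束保证实体完整性,唯一性约束用于候选键
CREATE TABLE employee (
    emp_id       INT          NOT NULL,
    soc_sec_num  CHAR(11)     NOT NULL,
    emp_name     VARCHAR(60)  NOT NULL,
    CONSTRAINT pk_employee PRIMARY KEY (emp_id),       -- 主键约束
    CONSTRAINT uq_employee_ssn UNIQUE (soc_sec_num)    -- 候选键的唯一性约束
);

-- 违反主键约束或唯一性约束的插入会被服务器拒绝,例如重复的soc_sec_num:
-- INSERT INTO employee VALUES (2, '123-45-6789', '...');  -- 若该号码已存在则失败
```

一个表只能有一个主键约束,但可以有多个唯一性约束,这正对应上文所说的多个候选键。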

仓储物流外文文献翻译中英文原文及译文2023-2023

原文1: The Current Trends in Warehouse Management and Logistics
Warehouse management is an essential component of any supply chain and plays a crucial role in the overall efficiency and effectiveness of logistics operations. With the rapid advancement of technology and changing customer demands, the field of warehouse management and logistics has seen several trends emerge in recent years.
One significant trend is the increasing adoption of automation and robotics in warehouse operations. Automated systems such as conveyor belts, robotic pickers, and driverless vehicles have revolutionized the way warehouses function. These technologies not only improve accuracy and speed but also reduce labor costs and increase safety.
Another trend is the implementation of real-time tracking and visibility systems. Through the use of RFID (radio-frequency identification) tags and GPS (global positioning system) technology, warehouse managers can monitor the movement of goods throughout the entire supply chain. This level of visibility enables better inventory management, reduces stockouts, and improves customer satisfaction.
Additionally, there is a growing focus on sustainability in warehouse management and logistics. Many companies are implementing environmentally friendly practices such as energy-efficient lighting, recycling programs, and alternative transportation methods. These initiatives not only contribute to reducing carbon emissions but also result in cost savings and improved brand image.
Furthermore, artificial intelligence (AI) and machine learning have become integral parts of warehouse management. AI-powered systems can analyze large volumes of data to optimize inventory levels, forecast demand accurately, and improve operational efficiency. Machine learning algorithms can also identify patterns and anomalies, enabling proactive maintenance and minimizing downtime.
In conclusion, warehouse management and logistics are continuously evolving fields, driven by technological advancements and changing market demands. The trends discussed in this article highlight the importance of adopting innovative solutions to enhance efficiency, visibility, sustainability, and overall performance in warehouse operations.

译文1: 仓储物流管理的当前趋势
仓储物流管理是任何供应链的重要组成部分,并在物流运营的整体效率和效力中发挥着至关重要的作用。
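The real-time tracking and visibility trend described in 原文1 above can be made concrete with a small SQL sketch. The table and column names below (rfid_scan_event, tag_id, and so on) are hypothetical and only illustrate how RFID reads might be recorded and queried; they are not taken from any specific system:

```sql
-- Illustrative only: RFID scan events feeding a real-time visibility view.
CREATE TABLE rfid_scan_event (
    tag_id     VARCHAR(32) NOT NULL,   -- identifier read from the tag
    sku        VARCHAR(20) NOT NULL,
    location   VARCHAR(20) NOT NULL,   -- dock door, aisle, trailer, store
    scanned_at DATETIME    NOT NULL
);

-- Latest known location of every tagged unit (the "visibility" the text refers to).
SELECT e.tag_id, e.sku, e.location, e.scanned_at
FROM rfid_scan_event e
JOIN (
    SELECT tag_id, MAX(scanned_at) AS last_seen
    FROM rfid_scan_event
    GROUP BY tag_id
) last ON last.tag_id = e.tag_id AND last.last_seen = e.scanned_at;
```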

(完整版)_毕业设计英文文献51单片机中英文文献翻译_

AT89C51的概况The General Situation of AT89C51Chapter 1 The application of AT89C51Microcontrollers are used in a multitude of commercial applications such as modems, motor-control systems, air conditioner control systems, automotive engine and among others. The domains also require that these microcontrollers are be ensured by a robust testing process and a proper tools environment for the validation of these microcontrollers both at the component and at the system level. Intel Plaform Engineering department developed anobject-oriented multi-threaded test environment for the validation of its AT89C51 automotive microcontrollers. The goals of thisenvironment was not only to provide a robust testing environment for the AT89C51 automotive microcontrollers, but to develop an environment which can be easily extended and reused for the validation of several other future microcontrollers. The environment was developed in conjunction with Microsoft Foundation Classes (AT89C51). The paper describes the design and mechanism of this test environment, its interactions with variousThe 8-bit AT89C51 CHMOS microcontrollers are designed to engine-control systems, airbags, suspension systems, and antilock braking systems (ABS). The AT89C51 is especially well suited to applications that benefit from its processing speed and enhancedon-chip peripheral functions set, such as automotive power-train control, vehicle dynamic suspension, antilock braking, and stability control applications. Because of these critical applications, the market requires a reliable cost-effective controller with a low interrupt latency response, ability to service the integrated peripherals needed in real time applications, and a CPU with above average processing power in a single package. The financial and legal risk of the market, particularly in mission criticalapplications such as an autopilot or anti-lock braking system, mistakes are financiallyprohibitive. Redesign costs can run as flaw. In addition, field replacements of components is extremely expensive, as the devices are typically sealed in modules with a total value several times that of the component. To mitigate these problems, it is essential that comprehensive testing of the controllers be carried out at both the component level and system level under worst case environmental and voltage conditions.This complete and thorough validation necessitates not only a well-defined process but also a proper environment and tools to facilitate and execute the mission successfully.Intel Chandler Platform Engineering group provides post silicon system validation (SV) of various micro-controllers and processors. The system validation process can be broken into three major parts.The type of the device and its application requirements determine which types of testing are performed on the device.1.2 The AT89C51 provides the following standard features:4Kbytes of Flash, 128 bytes of RAM, 32 IO lines, two 16-bittimercounters, a five vector two-level interrupt architecture,a full duple ser -ial port, on-chip oscillator and clock circuitry.In addition, the AT89C51 is designed with static logic for operation down to zero frequency and supports two software selectable power saving modes. The Idle Mode stops the CPU while allowing the RAM, timercounters,serial port and interrupt sys -tem to continue functioning. 
The Power-down Mode saves the RAM contents but freezes the oscil –lator disabling all other chip functions until the next DescriptionVCC Supply voltage.GND Ground.Port 0:Port 0 is an 8-bit open-drain bi-directional IO port. As an output port, each pin cansink eight TTL inputs. When 1s are written to port 0 pins, the pins can be used as this mode P0 . External pullups are required during programverification.Port 1:Port 1 is an 8-bit bi-directional IO port with internal pullups.The Port 1 output buffers can sinkso -urce four TTL inputs.When 1s are written to Port 1 pins they are pulled be used as inputs. As inputs, Port 1 pins that are externally being pulled low will source current (IIL) because of the internal pullups.Port 1 also receives the low-order address bytes during Flash programming and verification.Port 2:Port 2 is an 8-bit bi-directional IO port with internal pullups.The Port 2 outputbuffers can sinksource four TTL inputs.When 1s are written to Port 2 pins they arepulled be used as inputs. As inputs, Port 2 pins that are externally being pulled low will source current (IIL) because of the internal pullups. Port 2 emits the this application, it uses strong internal pull-ups when emitting 1s. During accesses to external data memory that use 8-bit addresses (MOVX @ RI), Port 2 emits the contents of the P2 Special Function Register.Port 2 also receives the Flash programming and verification.Port 3:Port 3 is an 8-bit bi-directional IO port with internal pullups.The Port 3 outputbuffers can sinksou -rce four TTL inputs.When 1s are written to Port 3 pins they are pulled be used as inputs. As inputs,Port 3 pins that are externally being pulled low will source current (IIL) because of the pullups.Port 3 also serves the functions of various special featuresof the AT89C51 as listed below:RST:Reset input. A this pin for two machine cycles while the oscillator is running resets the device.ALEPROG:Address Latch Enable output pulse for latching the low byte of the address duringaccesses to external memory.This pin is also the program pulse input (PROG) during Flash programming.In normal operation ALE is emitted at a constant rate of 16 the oscillator frequency,and may be used for external timing or clocking purposes. Note, be disabled by setting bit 0 of SFR location 8EH.With the bit set, ALE is active onlyduring a MOVX or MOVC instruction. Otherwise, the pin is weakly pulled external execution mode.PSEN:Program Store Enable is the read strobe to external program memory. When theAT89C51 is executing code from external program memory, PSEN is activated twiceeach machine cycle, except that two PSEN activations are skipped during each access toexternal data memory.EAVPP:External Access Enable. EA must be strapped to GND in order to enable the deviceto fetch code from external program memory locations starting at 0000H up to FFFFH.Note, alsreceives the 12-volt programming enable voltage (VPP) during Flash programming, forparts that require 12-volt VPP.XTAL1:Input to the inverting oscillator amplifier and input to the internal clock operatingcircuit.XTAL2:Output from the inverting oscillator amplifier.Oscillator CharacteristicsXTAL1 and XTAL2 are the input and output, respectively, of an inverting amplifierwhich can be configured for use as an on-chip oscillator, as shown in Figure 1. Either aquartz crystal or ceramic resonator may be used. 
To drive the device from an externalclock source, XTAL2 should be left unconnected while XTAL1 is driven as shown in Figure 2.There are no requirements on the duty cycle of the external clock signal, since the input to the internal clocking circuitry is through a divide-by-two flip-flop, but minimum and maximum voltage idle mode, the CPU puts itself to sleep while all the onchip peripherals remainactive. The mode is invoked by software. The content of the on-chip RAM and all the special functions registers remain unchanged during this mode. The idle mode can be terminated by any enabled interrupt or by a idle is terminated by a ,from where it left off, up to two machine cycles before the internal reset algorithm takes control. On-chip this event, but access to the port pins is not inhibited. To eliminate the possibility of an unexpected write to a port pin when Idle is terminated by reset, the instruction following the one that invokes Idle should not be one that writes to a port pin or to external memory.Power-down ModeIn the power-down mode, the oscillator is stopped, and the instruction that invokes power-down is the last instruction executed. The on-chip RAM and Special Function Registers retain their values until the power-down mode is terminated. The only exit from power-down is a -chip RAM. The reset should not be activated before VCC is restored to its normal operating level and must be either programming mode. To program any nonblank byte in the on-chip Flash Memory, the entire memory must be erased using the Chip Erase Mode.2 Programming AlgorithmBefore programming the AT89C51, the address, data and control signals should be set up according to the Flash programming mode table and Figure 3 and Figure 4. To program the AT89C51, take the following steps.1. Input the desired memory location on the addresslines.2. Input the appropriate data byte on the data lines. 3. Activate the correct combination of control signals. 4. Raise EAVPP to 12V for the the Flash array or the lock bits. The byte-write cycle is self-timed and typically takes no more than 1.5 ms. Repeat steps 1 through 5, changing the address and data for the entire array or until the end of the object file is reached. Data Polling: The AT89C51 features Data Polling to indicate the end of a write cycle. During a write cycle, an attempted read of the last byte written will result in the complement of the written datum on PO.7. Once the write cycle completed, true data are valid on all outputs, and the next cycle may begin. Data Polling may begin any time after a write cycle initiated.2.1ReadyBusy:The progress of byte programming can also be monitored by the RDYBSY output signal. P3.4 is pulled low after ALE goes when programming is done to indicate READY.Program Verify:If lock bits LB1 and LB2 programmed, the programmed code data can be read back via the address and data lines for verification. The lock bits cannot be verified directly. Verification of the lock bits is achieved by observing that their features are enabled.Figure 2-1-1 Programming the Flash Figure 2-2-2 Verifying the Flash2.2 Chip Erase:The entire Flash array is erased electrically by using the proper combination of control signals and by with all “1”s. The chip erase operation must be executed before the code memory can be re-programmed.2.3 Reading the Signature Bytes:The signature bytes are read by the same procedure as a normal verification of locations 030H, 031H, and 032H, except that P3.6 and P3.7 must be pulled to a logic low. 
The values returned areas follows.(030H) = 1EH indicates manufactured by Atmel(031H) = 51H indicates 89C51(032H) = FFH indicates 12V programming(032H) = 05H indicates 5V programming2.4 Programming InterfaceEvery code byte in the Flash array can be written and the entire array can be erased by using the appropriate combination of control signals. The write operation cycle is selftimed and once initiated, will automatically time itself to completion. A microcomputer interface converts information between two forms. Outside the microcomputer the information electronic system exists as a physical signal, but within the program, it is represented numerically. The function of any interface can be broken down into a number of operations which modify the data in some way, so that the process of conversion between the external and internal forms is carried out in a number of steps. An analog-to-digital converter(ADC) is used to convert a continuously variable signal to a corresponding digital form which can take any one of a fixed number of possible binary values. If the output of the transducer does not vary continuously, no ADC is necessary. In this case the signal conditioning section must convert the incoming signal to a form which can be connected directly to the next part of the interface, the inputoutput section of the microcomputer itself. Output interfaces take a similar form, the obvious difference being that is in the opposite direction; it is passed from the program to the outside world. In this case the program may call an output subroutine which supervises the operation of the interface andperforms the scaling numbers which may be needed for digital-to-analog converter(DAC). This subroutine passesinformation in turn to an output device which produces a corresponding electrical signal, which could be converted into analog form using a DAC. Finally the signal is conditioned(usually amplified) to a form suitable for operating an actuator.The signals used within microcomputer circuits are almost always too small to be connected directly to the outside world” and some kind of interface must be used to translate them to a more appropriate form. The design of section of interface circuits is one of the most important tasks facing the engineer wishing to apply microcomputers. We that in microcomputers information is represented as discrete patterns of bits; this digital form is most useful when the microcomputer is to be connected to equipment which can only be switched on or off, where each bit might represent the state of a switch or actuator. To solve real-world problems, a microcontroller must just a CPU, a program, and a data memory. In addition, it must contain from the outside world. Once the CPU gathers information and processes the data, it must also be able to effect change on some portion of the outside world. These microcontrollers is the general purpose I70 port. Each of the IO pins can be used as either an input or an output. The function of each pin is determined by setting or clearing corresponding bits in a corresponding data direction register during the initialization stage of a program. Each output pin may be driven to either a logic one or a logic zeroby using CPU instructions to pin may be viewed (or read.) by the CPU using program instructions. Some type of serial unit is included on microcontrollers to allow the CPU to communicate bit-serially with external devices. 
Using a bit serial format instead of bit-parallel format requires fewer IO pins to perform the communication function, which makes it less expensive, but slower.Serial transmissions are performed either synchronously orasynchronously.翻译AT89C51的概况1 AT89C51应用单片机广泛应用于商业:诸如调制解调器,电动机控制系统,空调控制系统,汽车发动机和其他一些领域。

毕业设计 物流 外文文献翻译 中英文 仓储

WarehousingThis chapter presents a description of a small, fictitious warehouse that distributes office supplies and some office furniture to small retail stores and individual mail-order customers. The facility was purchased from another company, and it is larger than required for the immediate operation. The operation, currently housed in an older facility, will move in a few months. The owners foresee substantial growth in theirhigh-quality product lines, so the extra space will accommodate the growth for the next few years. The description of the warehouse is of the planned operation after moving into the facility.The purpose of this chapter is to introduce the reader to the operations of warehouses. Basic function sare described, typical equipment types are illustrated, and operations within departments are presented in some detail so that the reader can understand the relationships among products, orders, order lines, storage space, and labor requirements. Storage assignment and retrieval strategies are briefly discussed.Evaluation of the planned operation includes turnover, performance, and cost analyses. Additional information can be found in other chapters of this volume and in the reference material.Role of the Warehouse in the Supply ChainWarehouses can serve different roles within the larger organization. For example, a stock room serving a manufacturing facility must provide a fast response time. The major activities would be piece (item)picking, carton picking, and preparation of assembly kits (kitting). A mail-order retailer usually must provide a great variety of products in small quantities at low cost to many customers. A factory warehouse usually handles a limited number of products in large quantities. A large, discount chain ware hou se typically “pushes” some products out to its retailers based on marketing campaigns, with other products being “pulled” by the store managers. Shipments are oft en full and half truckloads. The Ware house described here is a small, chain warehousethat carries a limited product line for distributionto its retailers and independent customers.The purpose of the warehouse is to provide the utility of time and place to its customers, both retail in the quantities requested by small retailers and individual customers. Production schedules often result in long runs and large lot sizes. Thus, manufacturers usually are not able to meet the delivery dates of small retailers and individuals. The warehouse bridges the gap and enables both parties, manufacturer and customer, to operate within their own spheres.Product and Order Descriptions1.Product DescriptionsThe products handled include paper products, pens, staplers, small storage units, other desktop products, electronic products are delivered directly from other distributors and not handled by the warehouse.One would say that the warehouse handles relatively low-value products from the viewpoint of manufacturing cost. ships among these load types. Individuals usually request pieces; retailers may also request pieces of slow movers, products that are not in high demand. Retailers usually request fast movers, products that are in high demand, in carton quantities. Bulky products like large desktop storage units may be in high enough demand so that they are sold by the warehouse in pallets. Furniture units are also sold on pallets for ease of movement in the warehouse and in the delivery trucks.shows the number of products to be stored and the number of storage locations needed. 
The latter issue is discussed inSection The typical dimensions of a piece is 10 × 25 × 3.5 cm, with a typical volume of 0.875 liters. A carton has typical dimensions of 33 × 43 × 30 cm, with a typical volume of 42.6 liters. Thus, a typical carton contains 48.7 pieces. The typical dimension of a pallet is 80 × 120 × 140 cm, with the last dimension being and individual. Manufacturers of office supplies and furniture are usually not willing to supply products low-priced media like CD and DVD blanks, book and electronic titles, and office furniture. High-value Products are sold by the warehouse as pieces, cartons, and on pallets. Figure 12.1 shows the relation- the height. Thepallet base is about 10 cm high, so the typical product volume is 1.25 m3, corresponding to 29.3 cartons. The pallet base allows for pickup by forklift truck from any of the four sides. Table 12.2 summarizes these values. Different products, of course, have different dimensions and relationships. The conversion factors can vary depending on whether the product is sold mainly in piece, carton, or pallet quantities. We will not introduce further complexity here and use the values given here for determining storage and labor requirements.2.Order DescriptionsThere are two types of orders processed at the warehouse. Large orders are placed by the retailers who belong to the same corporation; these are delivered by less-than-truckload (LTL) carrier. Small orders are placed by individuals, and these are delivered by package courier service like United States Postal Service (USPS), United Parcel Service (UPS), and Federal Express (Fed EX). Large orders contain more products and the quantity per product is greater than for small orders.Pallet Pick OperationsFull pallet picking is done primarily in the floor storage area and occasionally in the pallet rack area. These pallets move directly to outbound staging. A forklift truck has the capacity to transport one pallet at a time. Travel within the pallet floor storage area follows the rectilinear distance metric (Francis et al. 1992).Sorting, Packing, Staging, Shipping OperationsPieces and cartons that are picked using batch picking must first be sorted by order before further processing. The method of batch picking, described in the following, is designed to facilitate this process without requiring extensive conveyor equipment. In addition, all pieces must be packed into over pack cartons, and these are then consolidated with regular (single product) cartons by order. Some cartons and over packs move to outbound staging for package courier services like USPS, UPS, and FedEx. Others move to outbound staging for LTL carrier service. The package courier services load their vehicles manually, and the LTL carriers are loaded by warehouse personnel using either forklift trucks or pallet jacks.Support Operations, Reware housing, Returns ProcessingAt irregular times, the warehouse staff must perform additional functions that are not part of the normal process. Whenever a new store is being prepared for opening, a large quantity of product, for the full product line, must be picked and staged. There is a separate area set aside for this staging.Occasionally, some products need to be repackaged and/or labeled for retail stores. Th is value-added processing is performed between picking and packing. Returned merchandise must be inspected, possibly repackaged, and then returned to storage locations. The volume is not significant, and it is handled in the value-added area. 
Periodically, product locations must be changed to reflect changing demand. This reware housing is performed during slack periods so as not to require additional labor.In addition, the warehouse contains an office for management and sales personnel, toilets for both staff and truck drivers, and a break room with space for vending machines and dining. There is a battery charging room for the electric batteries used by forklifts and pallet jacks, and a small maintenance room.Storage Department Descriptions and OperationsThis section presents details on the individual storage departments and their operations. Here we determine the storage space requirements, and we describe the pick methods and obtain labor requirements.Bin ShelvingTh e bin shelving area contains 1000 slow moving products that are picked as pieces. Th ey are housed in shelving units that are 40 cm deep, 180 cm high, and 100 cm wide, for a cubic volume of 0.72 m3. Using a cubic space utilization factor of 0.6 to allow for clearances and mismatches of carton dimensions with the shelves, each shelving unit can accommodate on average 0.72 × 0.6/0.0426 = 10.14 cartons. If each product requires at most one carton, then we need 1000/10.14 = 98.6 or 99 shelving units. Rounding this to 100 units implies a pick line 100/2 = 50 m. One way to implement this is to establish two pick aisles, each 25m long, as shown in Figure 12.9. In the final layout, the system is expanded to a length of 30 m. In addition, space is provided for two future aisles. Although all the products stored here are considered slow movers, with some exceptions for products with small total required inventory measured in cubic volume, the principle of activity-based storage is extended further to identify the faster moving products (among the slow movers). These are placed in the ergonomically desirable golden zone.The small number of requests per order for slow moving products makes it appropriate to use a sort-while-pick (SWP) method for retrieval. An order picker uses a cart with multiple compartments to pick items for several orders on one trip past the shelves. The compartments items for different orders being mixed . Later, when the cart is moved to sorting, consolidation, and packing, there is actually little sorting work to do, but mainly consolidation and packing.Warehouse ManagementThe operation of the warehouse requires careful and constant management. The scanning of received products is just one example of the functions performed by the WMS. It is beyond the scope of this chapter to present details of a typical WMS. However, some main features should be mentioned here.The tracking of flows throughout the warehouse is one of the basic functions of a WMS. This can be done manually, but most facilities today use barcode scanners, and many use barcode scanners intedatabase. A typical WMS enables the functions listed below. These requirements are not inclusive, but only indicate the types of functions desired. Further details are in (Sharp, 2001).The WMS should enable scheduling of personnel, including regular full-time employees and temporary and part-time employees. Tracking of employee productivity is useful for training and workload balancing. Workload scheduling should be linked to forecast information, and the conversion of product volumes should be automatically translated to labor hours by function and employee productivity. out-of-stock conditions, process partial receipts, and quarantine products requiringinspection. 
It should generate labels for pallets and cartons with data on SKU (unique product type), description, date received, lot or purchase order number, expiration code(s), and location code(s). It should assign storage locations recognizing physical characteristics of the product, physical characteristics of the location, environmental restrictions, and stock rotation. It should also have the ability to send products directly to outbound vehicles (cross-docking). The ability to schedule trucks and assign them to docks is also useful. It should include confirmation of the stow (storage) action, updating of inventory upon stow, stock reservation capability, and provision for cycle counting. The WMS should support more than one location per SKU and more than one SKU per location. Report generation should include stock activity reports (fast, medium, slow, dead), empty location reports, and anticipated replenishment of forward pick areas.

仓储
本章描述了一个小型的虚拟仓库,它向小零售商店和邮购个人客户分发办公用品和部分办公家具。
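As a rough illustration of the reporting functions listed above, the following SQL sketch shows a stock activity (velocity) report and an empty location report. The tables item_master, pick_transaction, storage_location and stock_by_location are hypothetical, the 200/20 pick thresholds are arbitrary, and the date arithmetic uses T-SQL functions (DATEADD, GETDATE); treat it as a sketch rather than any WMS vendor's actual schema:

```sql
-- Stock activity report: classify SKUs as fast / medium / slow / dead movers
-- from picks recorded over the last 90 days.
SELECT i.sku,
       COUNT(p.pick_id) AS picks_90d,
       CASE
           WHEN COUNT(p.pick_id) >= 200 THEN 'FAST'
           WHEN COUNT(p.pick_id) >= 20  THEN 'MEDIUM'
           WHEN COUNT(p.pick_id) >= 1   THEN 'SLOW'
           ELSE 'DEAD'
       END AS velocity_class
FROM item_master i
LEFT JOIN pick_transaction p
       ON p.sku = i.sku AND p.picked_at >= DATEADD(DAY, -90, GETDATE())
GROUP BY i.sku;

-- Empty location report: storage slots with nothing currently stowed.
SELECT l.location_code
FROM storage_location l
LEFT JOIN stock_by_location s ON s.location_code = l.location_code
WHERE s.location_code IS NULL;
```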

仓储系统控制技术毕业论文中英文资料外文翻译文献

仓储系统控制技术中英文资料外文翻译文献一篇对于入库系统规划与控制的调查文献1我们提出了一个关于方法以及规划和仓储系统控制技术文献调查。

规划是指影响中期(一个或多个月)的管理决策,如库存管理和储存位分配。

控制是指影响短期(小时、天)的经营决策,如路由、排序、调度和订单分批。

在此之前的文献调查,我们展现了仓储系统介绍和仓库管理问题的分类。

说明1.1仓库的递增GUDEHUS与GRAVES,HAUSMAN,SCHWARZ通过把入库系统规划与控制作为一个新的研究主题而对此介绍构思。

入库系统的操作在文献中自始自终受到了相当大的关注。

入库系统的研究在70年代就得到了关注,这不足为奇,管理部门将眼光从生产力的提高转移到财产目录的消减,这是研究领域的一个新纪元。

信息系统的采用使得这个策略有了实施的可能,随着把制造业资源规划作为一个显著的范例,日本出现了一个新的管理哲学:及时生产(JIT)。

及时生产试图实现在短时间内用极小的一部分存货清单实现高产量的任务。

这个新的发展需要人们通过仓库在短期的回复期内频繁的运送低量货物到一个显著的宽广而多样化的储存保管单元(SKU's)中实现。

对于质量的关注,使得仓库负责人要从产品损坏的角度反复检查他们的仓库操作,在建立短而可靠的交易时期同时提升汇单采购的准确性。

当前在入库与分配后勤学的趋势中,是供应链管理与高效消费响应(ECR)。

供应链管理与高效消费响应负责小量存货清单供应链与贯穿于供应链的可靠短期响应机构的驱动。

所有的交付都是在供应链中销售额日趋下降的情况下促成的。

这样一个机构需要各个公司之间在供应链与当前销售信息的反馈中形成一个严密的合作。

现今,信息技术使得这些手段能够通过电子数据的交换(EDI)与类似基于MRP的企业资源规划(ERP)软件系统与仓库管理系统(WMS)实现。

新市场极大的影响着仓库的经营。

一方面,他们需要一个增长的生产力;另一方面,迅速变换的市场将金融风险强加于采用密集资本的高成果上,由此可能很难重新装配甚至需要摒弃入库设备。


河北工程大学毕业论文(设计)英文参考文献原文复印件及译文数据仓库数据仓库为商务运作提供结构与工具,以便系统地组织、理解和使用数据进行决策。

大量组织机构已经发现,在当今这个充满竞争、快速发展的世界,数据仓库是一个有价值的工具。

在过去的几年中,许多公司已花费数百万美元,建立企业范围的数据仓库。

许多人感到,随着工业竞争的加剧,数据仓库成了必备的最新营销武器——通过更多地了解客户需求而保住客户的途径。

“那么”,你可能会充满神秘地问,“到底什么是数据仓库?”数据仓库已被多种方式定义,使得很难严格地定义它。

宽松地讲,数据仓库是一个数据库,它与组织机构的操作数据库分别维护。

数据仓库系统允许将各种应用系统集成在一起,为统一的历史数据分析提供坚实的平台,对信息处理提供支持。

按照W. H. Inmon,一位数据仓库系统构造方面的领头建筑师的说法,"数据仓库是一个面向主题的、集成的、时变的、非易失的数据集合,支持管理决策制定"。这个简短、全面的定义指出了数据仓库的主要特征。四个关键词,即面向主题的、集成的、时变的、非易失的,将数据仓库与其它数据存储系统(如关系数据库系统、事务处理系统和文件系统)区别开来。

(1)面向主题的:数据仓库围绕一些主要主题组织,如顾客、供应商、产品和销售。数据仓库关注决策者的数据建模与分析,而不是组织机构的日常操作和事务处理,因此排除对决策无用的数据,提供围绕特定主题的简明视图。

(2)集成的:数据仓库通常由多个异种数据源构造,如关系数据库、一般文件和联机事务处理记录,集成在一起。

使用数据清理和数据集成技术,确保命名约定、编码结构、属性度量的一致性等。

(3)时变的:数据存储从历史的角度(例如,过去5-10 年)提供信息。

数据仓库中的关键结构,隐式或显式地包含时间元素。

(4) 非易失的:数据仓库总是物理地分离存放数据;这些数据源于操作环境下的应用数据。

由于这种分离,数据仓库不需要事务处理、恢复和并行控制机制。

通常,它只需要两种数据访问:数据的初始化装入和数据访问。

概言之,数据仓库是一种语义上一致的数据存储,它充当决策支持数据模型的物理实现,并存放企业决策所需信息。

数据仓库也常常被看作一种体系结构,通过将异种数据源中的数据集成在一起而构造,支持结构化和启发式查询、分析报告和决策制定。

“好”,你现在问,“那么,什么是建立数据仓库?”根据上面的讨论,我们把建立数据仓库看作构造和使用数据仓库的过程。

数据仓库的构造需要数据集成、数据清理和数据统一。
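为了直观说明这里所说的数据集成、数据清理和数据统一,下面给出一个极简的SQL示意。其中sales_east、sales_west、dw_sales等表名和编码规则均为假设,仅用于演示,并不代表任何真实系统:

```sql
-- 示意:把两个编码约定不一致的业务库数据清理、统一后装入数据仓库
INSERT INTO dw_sales (sale_date, region, product_code, amount)
SELECT s.sale_date,
       'EAST' AS region,
       UPPER(s.prod_cd) AS product_code,                    -- 统一编码为大写
       s.amount
FROM sales_east s
WHERE s.amount IS NOT NULL                                  -- 清理:剔除缺失金额的记录
UNION ALL
SELECT w.sell_day,
       'WEST' AS region,
       UPPER(REPLACE(w.item_no, '-', '')) AS product_code,  -- 统一编码格式
       w.sale_amt
FROM sales_west w
WHERE w.sale_amt IS NOT NULL;
```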

利用数据仓库常常需要一些决策支持技术。

这使得“知识工人”(例如,经理、分析人员和主管)能够使用数据仓库,快捷、方便地得到数据的总体视图,根据数据仓库中的信息做出准确的决策。

有些作者使用术语“建立数据仓库”表示构造数据仓库的过程,而用术语“仓库DBMS”表示管理和使用数据仓库。

我们将不区分二者。

“组织机构如何使用数据仓库中的信息?”许多组织机构正在使用这些信息支持商务决策活动,包括:(1)、增加顾客关注,包括分析顾客购买模式(如,喜爱买什么、购买时间、预算周期、消费习惯);(2)、根据季度、年、地区的营销情况比较,重新配置产品和管理投资,调整生产策略;(3)、分析运作和查找利润源;(4)、管理顾客关系、进行环境调整、管理合股人的资产开销。

从异种数据库集成的角度看,数据仓库也是十分有用的。

许多组织收集了形形色色数据,并由多个异种的、自治的、分布的数据源维护大型数据库。

集成这些数据,并提供简便、有效的访问是非常希望的,并且也是一种挑战。

数据库工业界和研究界都正朝着实现这一目标竭尽全力。

对于异种数据库的集成,传统的数据库做法是:在多个异种数据库上,建立一个包装程序和一个集成程序(或仲裁程序)。

这方面的例子包括IBM 的数据连接程序和Informix的数据刀。

当一个查询提交客户站点,首先使用元数据字典对查询进行转换,将它转换成相应异种站点上的查询。

然后,将这些查询映射和发送到局部查询处理器。

由不同站点返回的结果被集成为全局回答。

这种查询驱动的方法需要复杂的信息过滤和集成处理,并且与局部数据源上的处理竞争资源。

这种方法是低效的,并且对于频繁的查询,特别是需要聚集操作的查询,开销很大。

对于异种数据库集成的传统方法,数据仓库提供了一个有趣的替代方案。

数据仓库使用更新驱动的方法,而不是查询驱动的方法。

这种方法将来自多个异种源的信息预先集成,并存储在数据仓库中,供直接查询和分析。

与联机事务处理数据库不同,数据仓库不包含最近的信息。

然而,数据仓库为集成的异种数据库系统带来了高性能,因为数据被拷贝、预处理、集成、注释、汇总,并重新组织到一个语义一致的数据存储中。

在数据仓库中进行的查询处理并不影响在局部源上进行的处理。

此外,数据仓库存储并集成历史信息,支持复杂的多维查询。

这样,建立数据仓库在工业界已非常流行。

1.操作数据库系统与数据仓库的区别
由于大多数人都熟悉商用关系数据库系统,将数据仓库与之比较,就容易理解什么是数据仓库。

联机操作数据库系统的主要任务是执行联机事务和查询处理。

这种系统称为联机事务处理(OLTP)系统。

它们涵盖了一个组织的大部分日常操作,如购买、库存、制造、银行、工资、注册、记帐等。

另一方面,数据仓库系统在数据分析和决策方面为用户或“知识工人”提供服务。

这种系统可以用不同的格式组织和提供数据,以便满足不同用户的形形色色需求。

这种系统称为联机分析处理(OLAP)系统。

OLTP 和OLAP 的主要区别概述如下。

(1)用户和系统的面向性:OLTP 是面向顾客的,用于办事员、客户、和信息技术专业人员的事务和查询处理。

OLAP 是面向市场的,用于知识工人(包括经理、主管、和分析人员)的数据分析。

(2)数据内容:OLTP 系统管理当前数据。

通常,这种数据太琐碎,难以方便地用于决策。

OLAP 系统管理大量历史数据,提供汇总和聚集机制,并在不同的粒度级别上存储和管理信息。

这些特点使得数据容易用于见多识广的决策。

(3)数据库设计:通常,OLTP 系统采用实体-联系(ER)模型和面向应用的数据库设计。

而OLAP 系统通常采用星形或雪花模型和面向主题的数据库设计。

(4)视图:OLTP 系统主要关注一个企业或部门内部的当前数据,而不涉及历史数据或不同组织的数据。

相比之下,由于组织的变化,OLAP 系统常常跨越数据库模式的多个版本。

OLAP 系统也处理来自不同组织的信息,由多个数据存储集成的信息。

由于数据量巨大,OLAP 数据也存放在多个存储介质上。

(5)、访问模式:OLTP 系统的访问主要由短的、原子事务组成。

这种系统需要并行控制和恢复机制。

然而,对OLAP系统的访问大部分是只读操作(由于大部分数据仓库存放历史数据,而不是当前数据),尽管许多可能是复杂的查询。

OLTP 和OLAP 的其它区别包括数据库大小、操作的频繁程度、性能度量等。
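下面用一个极简的SQL示意来说明上文差异(3)中提到的星形模型。事实表与维表的名称均为假设,仅作演示:

```sql
-- 示意:一个最简单的星形模型,一张销售事实表加两张维表
CREATE TABLE dim_time (
    time_key   INT PRIMARY KEY,
    the_date   DATE,
    month_no   INT,
    year_no    INT
);

CREATE TABLE dim_product (
    product_key  INT PRIMARY KEY,
    product_name VARCHAR(60),
    category     VARCHAR(40)
);

CREATE TABLE fact_sales (
    time_key    INT NOT NULL REFERENCES dim_time(time_key),
    product_key INT NOT NULL REFERENCES dim_product(product_key),
    units_sold  INT,
    revenue     DECIMAL(12,2)
);

-- 典型的OLAP查询:按年份和产品类别汇总销售额
SELECT t.year_no, p.category, SUM(f.revenue) AS total_revenue
FROM fact_sales f
JOIN dim_time t    ON t.time_key = f.time_key
JOIN dim_product p ON p.product_key = f.product_key
GROUP BY t.year_no, p.category;
```

其中fact_sales是事实表,dim_time和dim_product是维表;典型的OLAP查询沿维表分组,对事实表中的度量进行汇总。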

2.但是,为什么需要一个分离的数据仓库
"既然操作数据库存放了大量数据",你注意到,"为什么不直接在这种数据库上进行联机分析处理,而是另外花费时间和资源去构造一个分离的数据仓库?"分离的主要原因是提高两个系统的性能。

操作数据库是为已知的任务和负载设计的,如使用主关键字索引和散列,检索特定的记录,和优化“罐装的”查询。

另一方面,数据仓库的查询通常是复杂的,涉及大量数据在汇总级的计算,可能需要特殊的数据组织、存取方法和基于多维视图的实现方法。

在操作数据库上处理OLAP 查询,可能会大大降低操作任务的性能。

此外,操作数据库支持多事务的并行处理,需要加锁和日志等并行控制和恢复机制,以确保一致性和事务的强健性。

通常,OLAP 查询只需要对数据记录进行只读访问,以进行汇总和聚集。

如果将并行控制和恢复机制用于这OLAP 操作,就会危害并行事务的运行,从而大大降低OLTP 系统的吞吐量。

最后,数据仓库与操作数据库分离是由于这两种系统中数据的结构、内容和用法都不相同。

决策支持需要历史数据,而操作数据库一般不维护历史数据。

在这种情况下,操作数据库中的数据尽管很丰富,但对于决策,常常还是远远不够的。

决策支持需要将来自异种源的数据统一(如,聚集和汇总),产生高质量的、纯净的和集成的数据。

相比之下,操作数据库只维护详细的原始数据(如事务),这些数据在进行分析之前需要统一。

由于两个系统提供很不相同的功能,需要不同类型的数据,因此需要维护分离的数据库。

Data warehousing provides architectures and tools for business executives to sy stematically organize, understand, and use their data to make strategic decisions. A lar ge number of organizations have found that data warehouse systems are valuable tools in today's competitive, fast evolving world. In the last several years, many firms have spent millions of dollars in building enterprise-wide data warehouses. Many people feel that with competition mounting in every ind ustry, data warehousing is the latest must-have marketing weapon ——a way to keep customers by learning more about their needs.“So", you may ask, full of intrigue, “what exactly is a data warehouse?"Data warehouses have been defined in many ways, making it difficult to formulat e a rigorous definition. Loosely speaking, a data warehouse refers to a database that is maintained separately from an organization's operational databases. Data warehouse s ystems allow for the integration of a variety of application systems. They support info rmation processing by providing a solid platform of consolidated, historical data for a nalysis.According to W. H. Inmon, a leading architect in the construction of data wareho use systems, “a data warehouse is a subject-oriented, integrated, time-variant, and nonvolatile collection of data in support of management's decision makin g process." This short, but comprehensive definition presents the major features of a d ata warehouse. The four keywords, subject-oriented, integrated, time-variant, and nonvolatile, distinguish data warehouses from other data repository syste ms, such as relational database systems, transaction processing systems, and file syste ms. Let's take a closer look at each of these key features.(1).Subject-oriented: A data warehouse is organized around major subjects, such as customer, ven dor, product, and sales. Rather than concentrating on the day-to-day operations and transaction processing of an organization, a data warehouse focuse s on the modeling and analysis of data for decision makers. Hence, data warehouses ty pically provide a simple and concise view around particular subject issues by excluding data that are not useful in the decision support process.(2) Integrated: A data warehouse is usually constructed by integrating multiple he terogeneous sources, such as relational databases, flat files, and on-line transaction records. Data cleaning and data integration techniques are applied to e nsure consistency in naming conventions, encoding structures, attribute measures, and so on.(3).Time-variant: Data are stored to provide information from a historical perspective (e.g., the past 5-10 years). Every key structure in the data warehouse contains, either implicitly or expl icitly, an element of time.(4)Nonvolatile: A data warehouse is always a physically separate store of data transformed from the application data found in the operational environment. Due to this separation, a data warehouse does not require transaction processing, recovery, and co ncurrency control mechanisms. It usually requires only two operations in data accessi ng: initial loading of data and access of data.In sum, a data warehouse is a semantically consistent data store that serves as a p hysical implementation of a decision support data model and stores the information on which an enterprise needs to make strategic decisions. 
A data warehouse is also often viewed as an architecture, constructed by integrating data from multiple heterogeneous sources to support structured and/or ad hoc queries, analytical reporting, and decision making.

"OK", you now ask, "what, then, is data warehousing?"

Based on the above, we view data warehousing as the process of constructing and using data warehouses. The construction of a data warehouse requires data integration, data cleaning, and data consolidation. The utilization of a data warehouse often necessitates a collection of decision support technologies. This allows "knowledge workers" (e.g., managers, analysts, and executives) to use the warehouse to quickly and conveniently obtain an overview of the data, and to make sound decisions based on information in the warehouse. Some authors use the term "data warehousing" to refer only to the process of data warehouse construction, while the term warehouse DBMS is used to refer to the management and utilization of data warehouses. We will not make this distinction here.

"How are organizations using the information from data warehouses?" Many organizations are using this information to support business decision making activities, including:

(1) increasing customer focus, which includes the analysis of customer buying patterns (such as buying preference, buying time, budget cycles, and appetites for spending),

(2) repositioning products and managing product portfolios by comparing the performance of sales by quarter, by year, and by geographic regions, in order to fine-tune production strategies,

(3) analyzing operations and looking for sources of profit,

(4) managing customer relationships, making environmental corrections, and managing the cost of corporate assets.

Data warehousing is also very useful from the point of view of heterogeneous database integration. Many organizations typically collect diverse kinds of data and maintain large databases from multiple, heterogeneous, autonomous, and distributed information sources. To integrate such data, and to provide easy and efficient access to it, is highly desirable, yet challenging. Much effort has been spent in the database industry and research community towards achieving this goal.

The traditional database approach to heterogeneous database integration is to build wrappers and integrators (or mediators) on top of multiple, heterogeneous databases. A variety of data joiner and data blade products belong to this category. When a query is posed to a client site, a metadata dictionary is used to translate the query into queries appropriate for the individual heterogeneous sites involved. These queries are then mapped and sent to local query processors. The results returned from the different sites are integrated into a global answer set. This query-driven approach requires complex information filtering and integration processes, and competes for resources with processing at local sources. It is inefficient and potentially expensive for frequent queries, especially for queries requiring aggregations.

Data warehousing provides an interesting alternative to the traditional approach of heterogeneous database integration described above. Rather than using a query-driven approach, data warehousing employs an update-driven approach in which information from multiple, heterogeneous sources is integrated in advance and stored in a warehouse for direct querying and analysis.
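The contrast between the query-driven and update-driven approaches can be sketched in a few lines of Python. The two in-memory "sources" and their field names below are purely hypothetical; the point is only that the mediator-style function must visit and translate every source at query time, while the warehouse-style function reads a local copy that was consolidated in advance.

```python
# Two hypothetical autonomous sources holding sales rows in different shapes.
SOURCE_A = [{"item": "widget", "amount": 120.0}, {"item": "gadget", "amount": 80.0}]
SOURCE_B = [{"product": "widget", "total": 55.0}]

def query_driven_total(item: str) -> float:
    """Query-driven (wrapper/mediator) style: fan the query out to every source
    at query time and merge the partial answers into a global result."""
    total = sum(r["amount"] for r in SOURCE_A if r["item"] == item)
    total += sum(r["total"] for r in SOURCE_B if r["product"] == item)
    return total

# Update-driven style: integrate the sources in advance into one warehouse table...
WAREHOUSE = (
    [{"item": r["item"], "amount": r["amount"]} for r in SOURCE_A]
    + [{"item": r["product"], "amount": r["total"]} for r in SOURCE_B]
)

def warehouse_total(item: str) -> float:
    """...so analysis reads the local, already-consolidated copy and never
    competes with processing at the operational sources."""
    return sum(r["amount"] for r in WAREHOUSE if r["item"] == item)

print(query_driven_total("widget"))  # 175.0, computed by visiting both sources
print(warehouse_total("widget"))     # 175.0, computed from the preloaded warehouse
```

Both calls return the same answer; the difference is where the integration cost is paid, at every query or once at load time.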
Unlike on-line transaction processing databases, data warehouses do not contain the most current information. However, a data warehouse brings high performance to the integrated heterogeneous database system, since data are copied, preprocessed, integrated, annotated, summarized, and restructured into one semantic data store. Furthermore, query processing in data warehouses does not interfere with the processing at local sources. Moreover, data warehouses can store and integrate historical information and support complex multidimensional queries. As a result, data warehousing has become very popular in industry.

1. Differences between operational database systems and data warehouses

Since most people are familiar with commercial relational database systems, it is easy to understand what a data warehouse is by comparing these two kinds of systems.

The major task of on-line operational database systems is to perform on-line transaction and query processing. These systems are called on-line transaction processing (OLTP) systems. They cover most of the day-to-day operations of an organization, such as purchasing, inventory, manufacturing, banking, payroll, registration, and accounting. Data warehouse systems, on the other hand, serve users or "knowledge workers" in the role of data analysis and decision making. Such systems can organize and present data in various formats in order to accommodate the diverse needs of the different users. These systems are known as on-line analytical processing (OLAP) systems.

The major distinguishing features between OLTP and OLAP are summarized as follows.

(1) Users and system orientation: An OLTP system is customer-oriented and is used for transaction and query processing by clerks, clients, and information technology professionals. An OLAP system is market-oriented and is used for data analysis by knowledge workers, including managers, executives, and analysts.

(2) Data contents: An OLTP system manages current data that, typically, are too detailed to be easily used for decision making. An OLAP system manages large amounts of historical data, provides facilities for summarization and aggregation, and stores and manages information at different levels of granularity. These features make the data easier to use in informed decision making.

(3) Database design: An OLTP system usually adopts an entity-relationship (ER) data model and an application-oriented database design. An OLAP system typically adopts either a star or a snowflake model, and a subject-oriented database design.

(4) View: An OLTP system focuses mainly on the current data within an enterprise or department, without referring to historical data or data in different organizations. In contrast, an OLAP system often spans multiple versions of a database schema, due to the evolutionary process of an organization. OLAP systems also deal with information that originates from different organizations, integrating information from many data stores. Because of their huge volume, OLAP data are stored on multiple storage media.

(5) Access patterns: The access patterns of an OLTP system consist mainly of short, atomic transactions. Such a system requires concurrency control and recovery mechanisms.
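As a rough illustration of point (3), the snippet below sketches a tiny star schema in plain Python: a sales fact table that holds only keys and measures, dimension tables for product and time, and a roll-up that summarizes revenue at different levels of granularity. The tables and figures are invented for illustration; a real warehouse would express the same structure in SQL over a dedicated schema.

```python
from collections import defaultdict

# Dimension tables of a small, hypothetical star schema.
DIM_PRODUCT = {1: {"name": "widget", "category": "hardware"},
               2: {"name": "gizmo",  "category": "hardware"}}
DIM_TIME    = {10: {"month": "2024-01", "quarter": "2024-Q1"},
               11: {"month": "2024-04", "quarter": "2024-Q2"}}

# Fact table: one row per sale, holding only dimension keys and numeric measures.
FACT_SALES = [
    {"product_id": 1, "time_id": 10, "units": 5, "revenue": 500.0},
    {"product_id": 1, "time_id": 11, "units": 3, "revenue": 300.0},
    {"product_id": 2, "time_id": 10, "units": 7, "revenue": 210.0},
]

def rollup_revenue_by(dimension_attr: str) -> dict:
    """Summarize the fact table along one dimension attribute,
    e.g. the quarter from the time dimension or the category from the product dimension."""
    totals = defaultdict(float)
    for fact in FACT_SALES:
        if dimension_attr == "quarter":
            key = DIM_TIME[fact["time_id"]]["quarter"]
        else:  # "category"
            key = DIM_PRODUCT[fact["product_id"]]["category"]
        totals[key] += fact["revenue"]
    return dict(totals)

print(rollup_revenue_by("quarter"))   # {'2024-Q1': 710.0, '2024-Q2': 300.0}
print(rollup_revenue_by("category"))  # {'hardware': 1010.0}
```

The same facts can be summarized along any dimension or hierarchy level, which is exactly the kind of multidimensional, subject-oriented access an OLAP system is designed for.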
However, accesses to OLAP systems are mostly read-only operations (since most data warehouses store historical rather than up-to-date information), although many could be complex queries.

Other features which distinguish OLTP and OLAP systems include database size, frequency of operations, performance metrics, and so on.

2. But, why have a separate data warehouse?

"Since operational databases store huge amounts of data", you observe, "why not perform on-line analytical processing directly on such databases instead of spending additional time and resources to construct a separate data warehouse?"

A major reason for such a separation is to help promote the high performance of both systems. An operational database is designed and tuned for known tasks and workloads, such as indexing and hashing using primary keys, searching for particular records, and optimizing "canned" queries. Data warehouse queries, on the other hand, are often complex. They involve the computation of large groups of data at summarized levels, and may require the use of special data organization, access, and implementation methods based on multidimensional views. Processing OLAP queries in operational databases would substantially degrade the performance of operational tasks.

Moreover, an operational database supports the concurrent processing of several transactions. Concurrency control and recovery mechanisms, such as locking and logging, are required to ensure the consistency and robustness of transactions. An OLAP query often needs only read-only access to data records for summarization and aggregation. Concurrency control and recovery mechanisms, if applied to such OLAP operations, may jeopardize the execution of concurrent transactions and thus substantially reduce the throughput of an OLTP system.

Finally, the separation of operational databases from data warehouses is based on the different structures, contents, and uses of the data in these two systems. Decision support requires historical data, whereas operational databases do not typically maintain historical data. In this context, the data in operational databases, though abundant, is usually far from complete for decision making. Decision support requires consolidation (such as aggregation and summarization) of data from heterogeneous sources, resulting in high-quality, cleansed, and integrated data. In contrast, operational databases contain only detailed raw data, such as transactions, which need to be consolidated before analysis. Since the two systems provide quite different functionalities and require different kinds of data, it is necessary to maintain separate databases.
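One way to picture why analytical work is kept off the operational system is the following minimal sketch (the sample transactions are made up): detailed rows are aggregated once, at load time, into daily and monthly summary tables, so later analytical reads touch only the small precomputed summaries rather than scanning and locking the live transaction store.

```python
from collections import defaultdict

# Detailed raw transactions, as an operational system would record them.
TRANSACTIONS = [
    {"date": "2024-03-01", "amount": 40.0},
    {"date": "2024-03-01", "amount": 25.0},
    {"date": "2024-03-15", "amount": 60.0},
    {"date": "2024-04-02", "amount": 10.0},
]

def summarize(rows: list, key_length: int) -> dict:
    """Aggregate transaction amounts at a chosen granularity:
    key_length 10 keeps the full YYYY-MM-DD date (daily), 7 keeps YYYY-MM (monthly)."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["date"][:key_length]] += row["amount"]
    return dict(totals)

# Computed once during warehouse loading; analysts then query these summary
# tables instead of the transaction store, so OLTP throughput is unaffected.
DAILY_SUMMARY = summarize(TRANSACTIONS, 10)
MONTHLY_SUMMARY = summarize(TRANSACTIONS, 7)

print(DAILY_SUMMARY)    # {'2024-03-01': 65.0, '2024-03-15': 60.0, '2024-04-02': 10.0}
print(MONTHLY_SUMMARY)  # {'2024-03': 125.0, '2024-04': 10.0}
```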
