2014 University Graduation Projects: Warehouse Management System and Database Computing, Foreign Reference Literature, Originals and Translations


Warehouse Management Systems: Foreign Literature Translation (English Original)

Warehouse Management Systems (WMS)

The evolution of warehouse management systems (WMS) is very similar to that of many other software solutions. Initially a system to control movement and storage of materials within a warehouse, the role of WMS is expanding to include light manufacturing, transportation management, order management, and complete accounting systems. To use the grandfather of operations-related software, MRP, as a comparison: material requirements planning (MRP) started as a system for planning raw material requirements in a manufacturing environment. Soon MRP evolved into manufacturing resource planning (MRPII), which took the basic MRP system and added scheduling and capacity planning logic. Eventually MRPII evolved into enterprise resource planning (ERP), incorporating all the MRPII functionality with full financials and customer and vendor management functionality. Now, whether WMS evolving into a warehouse-focused ERP system is a good thing or not is up for debate. What is clear is that the expanding overlap in functionality between Warehouse Management Systems, Enterprise Resource Planning, Distribution Requirements Planning, Transportation Management Systems, Supply Chain Planning, Advanced Planning and Scheduling, and Manufacturing Execution Systems will only increase the level of confusion among companies looking for software solutions for their operations.

Even though WMS continues to gain added functionality, the initial core functionality of a WMS has not really changed. The primary purpose of a WMS is to control the movement and storage of materials within an operation and process the associated transactions. Directed picking, directed replenishment, and directed put away are the keys to WMS. The detailed setup and processing within a WMS can vary significantly from one software vendor to another; however, the basic logic will use a combination of item, location, quantity, unit of measure, and order information to determine where to stock, where to pick, and in what sequence to perform these operations.

At a bare minimum, a WMS should:

- Have a flexible location system.
- Utilize user-defined parameters to direct warehouse tasks and use live documents to execute these tasks.
- Have some built-in level of integration with data collection devices.

Do You Really Need WMS?

Not every warehouse needs a WMS. Certainly any warehouse could benefit from some of the functionality, but is the benefit great enough to justify the initial and ongoing costs associated with WMS? Warehouse Management Systems are big, complex, data-intensive applications. They tend to require a lot of initial setup, a lot of system resources to run, and a lot of ongoing data management to continue to run. That's right: you need to "manage" your warehouse "management" system. Often, large operations will end up creating a new IS department with the sole responsibility of managing the WMS.

The Claims:

WMS will reduce inventory!
WMS will reduce labor costs!
WMS will increase storage capacity!
WMS will increase customer service!
WMS will increase inventory accuracy!

The Reality:

The implementation of a WMS along with automated data collection will likely give you increases in accuracy, reductions in labor costs (provided the labor required to maintain the system is less than the labor saved on the warehouse floor), and a greater ability to service the customer by reducing cycle times. Expectations of inventory reduction and increased storage capacity are less likely.
While increased accuracy and efficiencies in the receiving process may reduce the level of safety stock required, the impact of this reduction will likely be negligible in comparison to overall inventory levels. The predominant factors that control inventory levels are lot sizing, lead times, and demand variability. It is unlikely that a WMS will have a significant impact on any of these factors. And while a WMS certainly provides the tools for more organized storage, which may result in increased storage capacity, this improvement will be relative to just how sloppy your pre-WMS processes were.

Beyond labor efficiencies, the determining factors in deciding to implement a WMS tend to be more often associated with the need to do something to service your customers that your current system does not support (or does not support well), such as first-in-first-out, cross-docking, automated pick replenishment, wave picking, lot tracking, yard management, automated data collection, automated material handling equipment, etc.

Setup

The setup requirements of WMS can be extensive. The characteristics of each item and location must be maintained either at the detail level or by grouping similar items and locations into categories. An example of item characteristics at the detail level would include exact dimensions and weight of each item in each unit of measure the item is stocked in (each, cases, pallets, etc.) as well as information such as whether it can be mixed with other items in a location, whether it is rackable, max stack height, max quantity per location, hazard classifications, finished goods or raw material, fast versus slow mover, etc. Although some operations will need to set up each item this way, most operations will benefit by creating groups of similar products. For example, if you are a distributor of music CDs you would create groups for single CDs and double CDs, maintaining the detailed dimension and weight information at the group level and only needing to attach the group code to each item. You would likely need to maintain detailed information on special items such as boxed sets or CDs in special packaging. You would also create groups for the different types of locations within your warehouse. An example would be to create three different groups (P1, P2, P3) for the three different sized forward picking locations you use for your CD picking. You then set up the quantity of single CDs that will fit in a P1, P2, and P3 location, the quantity of double CDs that fit in a P1, P2, and P3 location, and so on. You would likely also be setting up case quantities and pallet quantities of each CD group, and quantities of cases and pallets per each reserve storage location group.

If this sounds simple, it is... well... sort of. In reality most operations have a much more diverse product mix and will require much more system setup. And setting up the physical characteristics of the product and locations is only part of the picture. You have set up enough so that the system knows where a product can fit and how many will fit in that location. You now need to set up the information needed to let the system decide exactly which location to pick from, replenish from/to, and put away to, and in what sequence these events should occur (remember, WMS is all about "directed" movement). You do this by assigning specific logic to the various combinations of item/order/quantity/location information that will occur.
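To make the group-based setup concrete, here is a minimal sketch of the kind of item-group/location-group capacity table described above. It is illustrative only, not any vendor's schema; all names and numbers (ItemGroup, LocationGroup, the capacities) are invented.

    #include <stdio.h>

    /* Hypothetical setup tables: item groups vs. location groups.
       capacity[i][l] = how many units of item group i fit in one
       location of group l (the "single CD in a P1/P2/P3" idea). */
    enum ItemGroup { SINGLE_CD, DOUBLE_CD, ITEM_GROUPS };
    enum LocationGroup { P1, P2, P3, LOCATION_GROUPS };

    static const int capacity[ITEM_GROUPS][LOCATION_GROUPS] = {
        /* P1   P2   P3 */
        {  40,  80, 160 },   /* SINGLE_CD (illustrative numbers) */
        {  25,  50, 100 },   /* DOUBLE_CD */
    };

    int units_that_fit(enum ItemGroup item, enum LocationGroup loc)
    {
        return capacity[item][loc];
    }

    int main(void)
    {
        printf("Double CDs per P2 location: %d\n",
               units_that_fit(DOUBLE_CD, P2));
        return 0;
    }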
Below I have listed some of the logic used in determining actual locations and sequences.

Location Sequence. This is the simplest logic: you simply define a flow through your warehouse and assign a sequence number to each location. In order picking this is used to sequence your picks to flow through the warehouse; in put away the logic would look for the first location in the sequence in which the product would fit.

Zone Logic. By breaking down your storage locations into zones you can direct picking, put away, or replenishment to or from specific areas of your warehouse. Since zone logic only designates an area, you will need to combine it with some other type of logic to determine the exact location within the zone.

Fixed Location. This logic uses predetermined fixed locations per item in picking, put away, and replenishment. Fixed locations are most often used as the primary picking location in piece-pick and case-pick operations; however, they can also be used for secondary storage.

Random Location. Since computers cannot be truly random (nor would you want them to be), the term random location is a little misleading. Random locations generally refer to areas where products are not stored in designated fixed locations. Like zone logic, you will need some additional logic to determine exact locations.

First-in-first-out (FIFO). Directs picking from the oldest inventory first.

Last-in-first-out (LIFO). Opposite of FIFO. I didn't think there were any real applications for this logic until a visitor to my site sent an email describing their operation that distributes perishable goods domestically and overseas. They use LIFO for their overseas customers (because of longer in-transit times) and FIFO for their domestic customers.

Pick-to-clear. This logic directs picking to the locations with the smallest quantities on hand. It is great for space utilization.

Reserved Locations. This is used when you want to predetermine specific locations to put away to or pick from. An application for reserved locations would be cross-docking, where you may specify that certain quantities of an inbound shipment be moved to specific outbound staging locations or directly to an awaiting outbound trailer.

Maximize Cube. Cube logic is found in most WMS systems; however, it is seldom used. Cube logic basically uses unit dimensions to calculate cube (cubic inches per unit) and then compares this to the cube capacity of the location to determine how much will fit. If the units are capable of being stacked into the location in a manner that fills every cubic inch of space in the location, cube logic will work. Since this rarely happens in the real world, cube logic tends to be impractical.

Consolidate. Looks to see if there is already a location with the same product stored in it with available capacity. May also create additional moves to consolidate like product stored in multiple locations.

Lot Sequence. Used for picking or replenishment, this will use the lot number or lot date to determine locations to pick from or replenish from.

It's very common to combine multiple logic methods to determine the best location, as in the sketch that follows. For example, you may choose to use pick-to-clear logic within first-in-first-out logic when there are multiple locations with the same receipt date. You may also change the logic based upon current workload: during busy periods you may choose logic that optimizes productivity, while during slower periods you switch to logic that optimizes space utilization.
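As a concrete illustration of combining methods, here is a minimal sketch, not any vendor's algorithm, of pick-to-clear used as a tiebreaker inside first-in-first-out: choose the location with the oldest receipt date, and among equally old locations, the one with the smallest quantity on hand.

    #include <stddef.h>

    /* Hypothetical location record: receipt date as YYYYMMDD plus
       quantity on hand. */
    struct Location {
        int receipt_date;   /* e.g., 20140607 */
        int qty_on_hand;
    };

    /* FIFO first (oldest receipt date), pick-to-clear as the
       tiebreaker (smallest quantity on hand). Returns the index of
       the chosen location, or -1 if none has stock. */
    int choose_pick_location(const struct Location *locs, size_t n)
    {
        int best = -1;
        for (size_t i = 0; i < n; i++) {
            if (locs[i].qty_on_hand <= 0)
                continue;
            if (best < 0 ||
                locs[i].receipt_date < locs[best].receipt_date ||
                (locs[i].receipt_date == locs[best].receipt_date &&
                 locs[i].qty_on_hand < locs[best].qty_on_hand))
                best = (int)i;
        }
        return best;
    }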
Other Functionality/Considerations

Wave Picking/Batch Picking/Zone Picking. Support for various picking methods varies from one system to another. In high-volume fulfillment operations, picking logic can be a critical factor in WMS selection. See my article on Order Picking for more info on these methods.

Task Interleaving. Task interleaving describes functionality that mixes dissimilar tasks such as picking and put away to obtain maximum productivity. Used primarily in full-pallet-load operations, task interleaving will direct a lift truck operator to put away a pallet on his/her way to the next pick. In large warehouses this can greatly reduce travel time, not only increasing productivity but also reducing wear on the lift trucks and saving on energy costs by reducing lift truck fuel consumption. Task interleaving is also used with cycle counting programs to coordinate a cycle count with a picking or put away task.

Integration with Automated Material Handling Equipment. If you are planning on using automated material handling equipment such as carousels, ASRS units, AGVs, pick-to-light systems, or sortation systems, you'll want to consider this during the software selection process. Since these types of automation are very expensive and are usually a core component of your warehouse, you may find that the equipment will drive the selection of the WMS. As with automated data collection, you should be working closely with the equipment manufacturers during the software selection process.

Advanced Shipment Notifications (ASN). If your vendors are capable of sending advanced shipment notifications (preferably electronically) and attaching compliance labels to the shipments, you will want to make sure that the WMS can use this to automate your receiving process. In addition, if you have requirements to provide ASNs for customers, you will also want to verify this functionality.

Yard Management. Yard management describes the function of managing the contents (inventory) of trailers parked outside the warehouse, or the empty trailers themselves. Yard management is generally associated with cross-docking operations and may include the management of both inbound and outbound trailers.

Labor Tracking/Capacity Planning. Some WMS systems provide functionality related to labor reporting and capacity planning. Anyone who has worked in manufacturing should be familiar with this type of logic. Basically, you set up standard labor hours and machine (usually lift truck) hours per task and set the available labor and machine hours per shift. The WMS system will use this info to determine capacity and load. Manufacturing has been using capacity planning for decades with mixed results. The need to factor in efficiency and utilization to determine rated capacity is an example of the shortcomings of this process. Not that I'm necessarily against capacity planning in warehousing; I just think most operations don't really need it and can avoid the disappointment of trying to make it work. I am, however, a big advocate of labor tracking for individual productivity measurement. Most WMS maintain enough data to create productivity reporting. Since productivity is measured differently from one operation to another, you can assume you will have to do some minor modifications here (usually in the form of custom reporting).

Integration with existing accounting/ERP systems. Unless the WMS vendor has already created a specific interface with your accounting/ERP system (such as those provided by an approved business partner), you can expect to spend some significant programming dollars here.
While we are all hoping that integration issues will be magically resolved someday by a standardized interface, we aren't there yet. Ideally you'll want an integrator that has already integrated the WMS you chose with the business software you are using. Since this is not always possible, you at least want an integrator that is very familiar with one of the systems.

WMS + everything else = ? As I mentioned at the beginning of this article, a lot of other modules are being added to WMS packages. These include full financials, light manufacturing, transportation management, purchasing, and sales order management. I don't see this as a wholesale move of WMS from an add-on module to a core system, but rather an optional approach that has applications in specific industries such as 3PLs. Using ERP systems as a point of reference, it is unlikely that this add-on functionality will match the functionality of best-of-breed applications available separately. If warehousing/distribution is your core business function and you don't want to have to deal with the integration issues of incorporating separate financials, order processing, etc., you may find these WMS-based business systems are a good fit.

Implementation Tips

Outside of the standard "don't underestimate", "thoroughly test", "train, train, train" implementation tips that apply to any business software installation, it's important to emphasize that WMS are very data dependent and restrictive by design. That is, you need to have all of the various data elements in place for the system to function properly. And, when they are in place, you must operate within the set parameters.

When implementing a WMS, you are adding an additional layer of technology onto your system, and with each layer of technology there is additional overhead and additional sources of potential problems. Now don't take this as a condemnation of Warehouse Management Systems. Coming from a warehousing background, I definitely appreciate the functionality WMS have to offer, and, in many warehouses, this functionality is essential to their ability to serve their customers and remain competitive. It's just important to note that every solution has its downsides, and a good understanding of the potential implications will allow managers to make better decisions related to the level of technology that best suits their unique environment.

Management Systems: Graduation Project Foreign Literature Translation

What's New in .NET Compact Framework 2.0

The .NET Compact Framework version 2.0 provides many improvements over the previous version, .NET Compact Framework 1.0. Although the improvements are broad, they focus on common goals: improving developer productivity, providing stronger compatibility with the full .NET Framework, and expanding support for device features. This article provides a high-level summary of the changes and improvements in .NET Compact Framework 2.0.

User interface

The small size of mobile device displays requires applications to use the available space efficiently. In the past this meant that developers had to spend a lot of time designing and implementing the application's user interface. Recent advances in mobile display capabilities, such as high resolution and multiple-orientation support, make user interface development even more challenging. To simplify the task of creating application user interfaces, .NET Compact Framework 2.0 provides the many new features described in this section.

Windows Forms controls

At the center of any user interface are controls, and .NET Compact Framework 2.0 provides many new ones. These new controls consist of device-specific controls as well as controls that the .NET Compact Framework now provides with the same completeness as the full .NET Framework.

MonthCalendar

The MonthCalendar control is a customizable calendar control for displaying dates, and it is useful for giving users a graphical way to select dates.

DateTimePicker

The DateTimePicker control is a customizable control for displaying date and time information and for letting users enter it. Because it combines a compact display with a graphical date-selection format, it is especially well suited to mobile device applications. When displaying information, the DateTimePicker control resembles a text box; however, when the user selects a date, it can display a pop-up calendar similar to the MonthCalendar control.

WebBrowser

The WebBrowser control wraps the device's web browser and provides strong display capabilities while exposing many events. Besides allowing your application to provide customized behavior in response to these events, the events let your application track the user's interaction with the content shown in the web browser.

English Reference Literature: Copy of the Original and Translation

Major: Electrical Engineering and Automation. Name: Cao Liqian. Student ID: 100062630. Supervisor: Gao Jingge. Date of completion: June 2014.

Relays: The Programmable Logic Controller

Early machines were controlled by mechanical means using cams, gears, levers and other basic mechanical devices. As the complexity grew, so did the need for a more sophisticated control system. This system contained wired relay and switch control elements. These elements were wired as required to provide the control logic necessary for the particular type of machine operation. This was acceptable for a machine that never needed to be changed or modified, but as manufacturing techniques improved and plant changeover to new products became more desirable and necessary, a more versatile means of controlling this equipment had to be developed. Hardwired relay and switch logic was cumbersome and time consuming to modify. Wiring had to be removed and replaced to provide for the new control scheme required. This modification was difficult and time consuming to design and install, and any small "bug" in the design could be a major problem to correct, since that also required rewiring of the system. A new means to modify control circuitry was needed. The development and testing ground for this new means was the U.S. auto industry. The time period was the late 1960s and early 1970s, and the result was the programmable logic controller, or PLC. Automotive plants were confronted with a change in manufacturing techniques every time a model changed and, in some cases, with changes on the same model if improvements had to be made during the model year. The PLC provided an easy way to reprogram the wiring rather than actually rewiring the control system.

The PLC that was developed during this time was not very easy to program. The language was cumbersome to write and required highly trained programmers. These early devices were merely relay replacements and could do very little else. The PLC has, at first gradually and in recent years rapidly, developed into a sophisticated and highly versatile control system component. Units today are capable of performing complex math functions, including numerical integration and differentiation, and operate at the fast microprocessor speeds now available. Older PLCs were capable of only handling discrete inputs and outputs (that is, on-off type signals), while today's systems can accept and generate analog voltages and currents as well as a wide range of voltage levels and pulsed signals. PLCs are also designed to be rugged. Unlike their personal computer cousins, they can typically withstand the vibration, shock, elevated temperatures, and electrical noise to which manufacturing equipment is exposed.

As more manufacturers become involved in PLC production and development, and PLC capabilities expand, the programming language is also expanding. This is necessary to allow the programming of these advanced capabilities. Also, manufacturers tend to develop their own versions of ladder logic language (the language used to program PLCs). This complicates learning to program PLCs in general, since one language cannot be learned that is applicable to all types. However, as with other computer languages, once the basics of PLC operation and programming in ladder logic are learned, adapting to the various manufacturers' devices is not a complicated process.
Most system designers eventually settle on one particular manufacturer that produces a PLC that is personally comfortable to program and has the capabilities suited to his or her area of applications.

It should be noted that in usage, a programmable logic controller is generally referred to as a "PLC" or "programmable controller". Although the term "programmable controller" is generally accepted, it is not abbreviated "PC", because the abbreviation "PC" is usually used in reference to a personal computer. As we will see in this chapter, a PLC is by no means a personal computer.

Programmable controllers (the shortened name used for programmable logic controllers) are much like personal computers in that the user can be overwhelmed by the vast array of options and configurations available. Also, like personal computers, the best teacher of which one to select is experience. As one gains experience with the various options and configurations available, it becomes less confusing to select the unit that will best perform in a particular application.

Graduation Project: Foreign Literature Translation

Undergraduate Graduation Project Foreign Literature Translation (June 2014). School code: 10128. Student ID: (blank). Title: Packet Handling Hardware Support. Student name, college, department, major, class, supervisor: (blank).

Packet Handling Hardware Support

Reference: Texas Instruments. CC1101 Low-Power Sub-1 GHz RF Transceiver. 2013.

The CC1101 has built-in hardware support for packet-oriented radio protocols.

In transmit mode, the packet handler can be configured to add the following elements to the packet stored in the TX FIFO:

● A programmable number of preamble bytes
● A two-byte synchronization (sync) word. It can be duplicated to give a 4-byte sync word (recommended). It is not possible to insert only a preamble or only a sync word
● A CRC checksum computed over the data field

The recommended setting is a 4-byte preamble and a 4-byte sync word, except for the 500 kBaud data rate, where the recommended preamble length is 8 bytes. In addition, the following can be implemented on the data field and the optional 2-byte CRC checksum:

● Whitening of the data with a PN9 sequence
● Forward Error Correction (FEC) by the use of interleaving and coding of the data (convolutional coding)

In receive mode, the packet handling support will de-construct the data packet by implementing the following (if enabled):

● Preamble detection
● Sync word detection
● CRC computation and CRC check
● One-byte address check
● Packet length check (length byte checked against a programmable maximum length)
● De-whitening
● De-interleaving and decoding

Optionally, two status bytes (see Table 27 and Table 28) with the RSSI value, Link Quality Indication, and CRC status can be appended in the RX FIFO.

Table 27: Received Packet Status Byte 1 (first byte appended after the data)
  Bit    Field Name    Description
  7:0    RSSI          RSSI value

Table 28: Received Packet Status Byte 2 (second byte appended after the data)
  Bit    Field Name    Description
  7      CRC_OK        1: CRC for received data OK (or CRC disabled); 0: CRC error in received data
  6:0    LQI           Indicating the link quality

Note: Register fields that control the packet handling features should only be altered when the CC1101 is in the IDLE state.

1. Data Whitening

From a radio perspective, the ideal over-the-air data are random and DC free. This results in the smoothest power distribution over the occupied bandwidth. This also gives the regulation loops in the receiver uniform operating conditions (no data dependencies).

Real data often contain long sequences of zeros and ones. In these cases, performance can be improved by whitening the data before transmitting, and de-whitening the data in the receiver. With the CC1101, this can be done automatically. By setting PKTCTRL0.WHITE_DATA=1, all data, except the preamble and the sync word, will be XOR-ed with a 9-bit pseudo-random (PN9) sequence before being transmitted. This is shown in Figure 16. At the receiver end, the data are XOR-ed with the same pseudo-random sequence. In this way, the whitening is reversed, and the original data appear in the receiver. The PN9 sequence is initialized to all 1's.
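The whitening operation is easy to model in software. The following sketch is for illustration only (the CC1101 does this in hardware): a 9-bit LFSR with polynomial x^9 + x^5 + 1, initialized to all ones, whose low byte is XOR-ed with each data byte. The bit ordering follows the common software model of this generator and should be verified against the datasheet before relying on it. Because the operation is an XOR, running the same function again de-whitens the data.

    #include <stdint.h>
    #include <stddef.h>

    /* Illustrative PN9 whitening: a 9-bit LFSR (x^9 + x^5 + 1) seeded
       with all ones; each data byte is XOR-ed with the LFSR's low
       byte, then the LFSR is clocked 8 times. Applying the function
       twice restores the original data. */
    void pn9_whiten(uint8_t *buf, size_t len)
    {
        uint16_t lfsr = 0x1FF;                    /* 9 bits, all ones */
        for (size_t i = 0; i < len; i++) {
            buf[i] ^= (uint8_t)(lfsr & 0xFF);     /* XOR with PN9 byte */
            for (int b = 0; b < 8; b++) {         /* advance the LFSR */
                uint16_t fb = ((lfsr & 1u) ^ ((lfsr >> 5) & 1u)) << 8;
                lfsr = (lfsr >> 1) | fb;
            }
        }
    }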
2. Packet Format

The format of the data packet can be configured and consists of the following items (see Figure 17):

● Preamble
● Synchronization word
● Optional length byte
● Optional address byte
● Payload
● Optional 2-byte CRC

The preamble pattern is an alternating sequence of ones and zeros (10101010...). The minimum length of the preamble is programmable through the value of MDMCFG1.NUM_PREAMBLE. When TX is enabled, the modulator will start transmitting the preamble. When the programmed number of preamble bytes has been transmitted, the modulator will send the sync word and then data from the TX FIFO, if data is available. If the TX FIFO is empty, the modulator will continue to send preamble bytes until the first byte is written to the TX FIFO. The modulator will then send the sync word and then the data bytes.

The synchronization word is a two-byte value set in the SYNC1 and SYNC0 registers. The sync word provides byte synchronization of the incoming packet. A one-byte sync word can be emulated by setting the SYNC1 value to the preamble pattern. It is also possible to emulate a 32-bit sync word by setting MDMCFG2.SYNC_MODE to 3 or 7. The sync word will then be repeated twice.

The CC1101 supports both constant packet length protocols and variable length protocols. Variable or fixed packet length mode can be used for packets up to 255 bytes. For longer packets, infinite packet length mode must be used.

Fixed packet length mode is selected by setting PKTCTRL0.LENGTH_CONFIG=0. The desired packet length is set by the PKTLEN register. This value must be different from 0.

In variable packet length mode, PKTCTRL0.LENGTH_CONFIG=1, the packet length is configured by the first byte after the sync word. The packet length is defined as the payload data, excluding the length byte and the optional CRC. The PKTLEN register is used to set the maximum packet length allowed in RX. Any packet received with a length byte greater than PKTLEN will be discarded. The PKTLEN value must be different from 0, and the length byte written to the TX FIFO must be different from 0.

With PKTCTRL0.LENGTH_CONFIG=2, the packet length is set to infinite, and transmission and reception will continue until turned off manually. As described in the next section, this can be used to support packet formats with a different length configuration than natively supported by the CC1101. One should make sure that TX is not turned off during the transmission of the first half of any byte. Refer to the CC1101 Errata Notes [4] for more details.

Note: The minimum packet length supported (excluding the optional length byte and CRC) is one byte of payload data.

2.1 Arbitrary Length Field Configuration

The packet length register, PKTLEN, can be reprogrammed during receive and transmit. In combination with fixed packet length mode (PKTCTRL0.LENGTH_CONFIG=0), this opens the possibility of supporting a length field configuration different from the natively supported variable length packets (in variable packet length mode the length byte is the first byte after the sync word). At the start of reception, the packet length is set to a large value. The MCU reads out enough bytes to interpret the length field in the packet, and the PKTLEN value is then set according to this value. The end of packet will occur when the byte counter in the packet handler is equal to the PKTLEN register. Thus, the MCU must be able to program the correct length before the internal counter reaches the packet length.

2.2 Packet Length > 255

The packet automation control register, PKTCTRL0, can be reprogrammed during TX and RX. This opens the possibility of transmitting and receiving packets that are longer than 256 bytes while still being able to use the packet handling hardware support. At the start of the packet, the infinite packet length mode (PKTCTRL0.LENGTH_CONFIG=2) must be active. On the TX side, the PKTLEN register is set to mod(length, 256). On the RX side, the MCU reads out enough bytes to interpret the length field in the packet and sets the PKTLEN register to mod(length, 256). When less than 256 bytes remain of the packet, the MCU disables infinite packet length mode and activates fixed packet length mode.
When the internal byte counter reaches the PKTLEN value, the transmission or reception ends (the radio enters the state determined by TXOFF_MODE or RXOFF_MODE). Automatic CRC appending/checking can also be used (by setting PKTCTRL0.CRC_EN=1).

When, for example, a 600-byte packet is to be transmitted, the MCU should do the following (see also Figure 18; a driver-level sketch follows this list):

● Set PKTCTRL0.LENGTH_CONFIG=2.
● Pre-program the PKTLEN register to mod(600, 256) = 88.
● Transmit at least 345 bytes (600 - 255), for example by filling the 64-byte TX FIFO six times (384 bytes transmitted).
● Set PKTCTRL0.LENGTH_CONFIG=0.
● The transmission ends when the packet counter reaches 88. A total of 600 bytes are transmitted.
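The sequence above translates almost directly into driver code. The sketch below is illustrative only: spi_write_reg and spi_write_txfifo are assumed helper functions, not part of any TI library, and for simplicity the writes to PKTCTRL0 set only the LENGTH_CONFIG bits (a real driver would read-modify-write to preserve the other bits, and would wait for TX FIFO space before each fill).

    #include <stdint.h>

    /* Assumed SPI helpers; replace with your radio driver's own. */
    void spi_write_reg(uint8_t addr, uint8_t value);
    void spi_write_txfifo(const uint8_t *data, uint8_t len);

    #define PKTCTRL0 0x08  /* packet automation control (datasheet address) */
    #define PKTLEN   0x06  /* packet length register (datasheet address) */

    /* Transmit a 600-byte packet: start in infinite packet length
       mode, then switch to fixed length for the tail. */
    void tx_600_byte_packet(const uint8_t pkt[600])
    {
        uint16_t total = 600, sent = 0;

        spi_write_reg(PKTCTRL0, 0x02);                 /* LENGTH_CONFIG = 2 (infinite) */
        spi_write_reg(PKTLEN, (uint8_t)(total % 256)); /* mod(600, 256) = 88 */

        /* Send at least 600 - 255 = 345 bytes in infinite mode,
           e.g. six 64-byte FIFO fills = 384 bytes. */
        while (sent < 384) {
            spi_write_txfifo(pkt + sent, 64);
            sent += 64;
        }

        spi_write_reg(PKTCTRL0, 0x00);                 /* LENGTH_CONFIG = 0 (fixed) */

        while (sent < total) {                         /* feed the remaining bytes */
            uint8_t chunk = (uint8_t)(total - sent > 64 ? 64 : total - sent);
            spi_write_txfifo(pkt + sent, chunk);
            sent += chunk;
        }
        /* TX ends when the internal byte counter reaches PKTLEN (88). */
    }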
3. Packet Filtering in Receive Mode

The CC1101 supports three different types of packet filtering: address filtering, maximum length filtering, and CRC filtering.

3.1 Address Filtering

Setting PKTCTRL1.ADR_CHK to any value other than zero enables the packet address filter. The packet handler engine will compare the destination address byte in the packet with the programmed node address in the ADDR register, and with the 0x00 broadcast address when PKTCTRL1.ADR_CHK=10, or with both the 0x00 and 0xFF broadcast addresses when PKTCTRL1.ADR_CHK=11. If the received address matches a valid address, the packet is received and written into the RX FIFO. If the address match fails, the packet is discarded and receive mode is restarted (regardless of the MCSM1.RXOFF_MODE setting).

If the received address matches a valid address when using infinite packet length mode and address filtering is enabled, 0xFF will be written into the RX FIFO, followed by the address byte and then the payload data.

3.2 Maximum Length Filtering

In variable packet length mode, PKTCTRL0.LENGTH_CONFIG=1, the PKTLEN.PACKET_LENGTH register value is used to set the maximum allowed packet length. If the received length byte has a larger value than this, the packet is discarded and receive mode is restarted (regardless of the MCSM1.RXOFF_MODE setting).

3.3 CRC Filtering

Filtering out a packet when the CRC check fails is enabled by setting PKTCTRL1.CRC_AUTOFLUSH=1. The CRC auto-flush function will flush the entire RX FIFO if the CRC check fails. After auto-flushing the RX FIFO, the next state depends on the MCSM1.RXOFF_MODE setting.

When using the auto-flush function, the maximum packet length is 63 bytes in variable packet length mode. Note that when PKTCTRL1.APPEND_STATUS is enabled, the maximum allowed packet length is reduced by two bytes in order to make room in the RX FIFO for the two status bytes appended at the end of the packet. Since the entire RX FIFO is flushed when the CRC check fails, the previously received packet must be read out of the FIFO before receiving the current packet. The MCU must not read from the current packet until the CRC has been checked as OK.

4. Packet Handling in Transmit Mode

The payload that is to be transmitted must be written into the TX FIFO. The first byte written must be the length byte when variable packet length is enabled. The length byte has a value equal to the payload of the packet (including the optional address byte). If address recognition is enabled on the receiver, the second byte written to the TX FIFO must be the address byte. If fixed packet length is enabled, the first byte written to the TX FIFO should be the address (assuming the receiver uses address recognition).

The modulator will first send the programmed number of preamble bytes. If data is available in the TX FIFO, the modulator will send the two-byte (optionally 4-byte) sync word followed by the payload in the TX FIFO. If CRC is enabled, the checksum is calculated over all the data pulled from the TX FIFO, and the result is sent as two extra bytes following the payload data. If the TX FIFO runs empty before the complete packet has been transmitted, the radio will enter the TXFIFO_UNDERFLOW state. The only way to exit this state is by issuing an SFTX strobe. Writing to the TX FIFO after it has underflowed will not restart TX mode.

If whitening is enabled, everything following the sync word will be whitened. This is done before the optional FEC/interleaver stage. Whitening is enabled by setting PKTCTRL0.WHITE_DATA=1.

If FEC/interleaving is enabled, everything following the sync word will be scrambled by the interleaver and FEC encoded before being modulated. FEC is enabled by setting MDMCFG1.FEC_EN=1.

5. Packet Handling in Receive Mode

In receive mode, the demodulator and packet handler will search for a valid preamble and the sync word. When found, the demodulator has obtained both bit and byte synchronization and will receive the first payload byte.

If FEC/interleaving is enabled, the FEC decoder will start to decode the first payload byte. The interleaver will de-scramble the bits before any other processing is done to the data. If whitening is enabled, the data will be de-whitened at this stage.

When variable packet length mode is enabled, the first byte is the length byte. The packet handler stores this value as the packet length and receives the number of bytes indicated by the length byte. If fixed packet length mode is used, the packet handler will accept the programmed number of bytes.

Next, the packet handler optionally checks the address and only continues the reception if the address matches. If automatic CRC check is enabled, the packet handler computes the CRC and matches it with the appended CRC checksum.

At the end of the payload, the packet handler will optionally write two extra packet status bytes (see Table 27 and Table 28) that contain the CRC status, link quality indication, and RSSI value.

6. Packet Handling in Firmware

When implementing a packet-oriented radio protocol in firmware, the MCU needs to know when a packet has been received or transmitted. Additionally, for packets longer than 64 bytes, the RX FIFO needs to be read while in RX and the TX FIFO needs to be refilled while in TX. This means that the MCU needs to know the number of bytes that can be read from or written to the RX FIFO and TX FIFO respectively. There are two possible solutions for getting the necessary status information:

a) Interrupt-Driven Solution

The GDO pins can be used in both RX and TX to give an interrupt when a sync word has been received/transmitted or when a complete packet has been received/transmitted, by setting IOCFGx.GDOx_CFG=0x06. In addition, there are two configurations for the IOCFGx.GDOx_CFG register that can be used as an interrupt source to provide information on how many bytes are in the RX FIFO and TX FIFO respectively. The IOCFGx.GDOx_CFG=0x02 and IOCFGx.GDOx_CFG=0x03 configurations are associated with the TX FIFO. See Table 41 for more information.

b) SPI Polling

The PKTSTATUS register can be polled at a given rate to get information about the current GDO2 and GDO0 values respectively. The RXBYTES and TXBYTES registers can be polled at a given rate to get information about the number of bytes in the RX FIFO and TX FIFO respectively.
Alternatively, the number of bytes in the RX FIFO and the TX FIFO can be read from the chip status byte returned on the MISO line each time a header byte, data byte, or command strobe is sent on the SPI bus.

It is recommended to employ an interrupt-driven solution, since high-rate SPI polling reduces the RX sensitivity. Furthermore, as explained in Section 10.3 and the CC1101 Errata Notes [4], when using SPI polling there is a small, but finite, probability that a single read from the PKTSTATUS, RXBYTES, and TXBYTES registers will be corrupt. The same is the case when reading the chip status byte. Refer to the TI website for software examples ([9] and [10]).
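As a small illustration of Table 27 and Table 28, the following sketch unpacks the two optional status bytes appended to a received packet. The RSSI-to-dBm conversion uses the approximate formula from the CC1101 datasheet (the signed RSSI byte divided by two, minus an offset of roughly 74 dB); treat the offset as an assumption to verify for your data rate and frequency.

    #include <stdint.h>

    /* Unpack the two status bytes optionally appended after the
       payload: byte 1 = raw RSSI; byte 2 = CRC_OK in bit 7, LQI in
       bits 6:0 (Tables 27 and 28). */
    struct PacketStatus {
        int     rssi_dbm;
        uint8_t lqi;
        uint8_t crc_ok;
    };

    struct PacketStatus parse_status(uint8_t status1, uint8_t status2)
    {
        struct PacketStatus s;
        /* RSSI is a signed byte; dBm ~ rssi/2 - offset, with an
           offset of about 74 dB per the datasheet (assumption to
           verify for your configuration). */
        s.rssi_dbm = ((int8_t)status1) / 2 - 74;
        s.crc_ok   = (status2 >> 7) & 0x01;   /* bit 7: CRC_OK */
        s.lqi      = status2 & 0x7F;          /* bits 6:0: LQI */
        return s;
    }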

Graduation Project (Thesis): Foreign Reference Literature Translation

Wuhan Polytechnic University, Graduation Project (Thesis) Foreign Reference Literature Translation, Class of 2011. Source of the original: IBM SYSTEMS JOURNAL, VOL 35, NOS 3&4, 1996. Thesis title: Design and Implementation of a Music Image Browser. College (Department): Computer and Information Engineering. Major: Computer Science and Technology. Student name: Guo Qian. Student ID: 070501103. Supervisor: Feng Hongcai. Translation requirements: 1. The content of the translation must be related to the thesis topic (or major); 2. The translation must be no fewer than 4,000 Chinese characters.

Research on Data Hiding Techniques

Data hiding, a covert form of data encoding, embeds data into digital media for the purposes of identification, annotation, and copyright protection. This application is constrained, however: first by the quantity of data that needs to be hidden, and second by the need for those hidden data to remain reliable when the "host" signal is subjected to distorting conditions, for example lossy compression, and by the degree to which the data must be immune to interception, modification, or removal by a third party.

We explore both traditional and novel techniques for addressing the data-hiding problem, and evaluate these techniques in three applications: copyright protection, tamper-proofing, and augmentation data embedding.

Digital media are very easy to access, which potentially improves their portability, the efficiency of information presentation, and the accuracy of the information presented. The side effects of such convenient data access include an increased chance of copyright violation and a greater possibility that content will be tampered with or modified. The purpose of this work is to study means of protecting intellectual property rights, indicating content modification, and adding annotations.

Data hiding represents a class of operations for inserting data, such as copyright information, into various forms of media such as images, audio, or text, using the smallest perceptible change to the "host" signal. That is to say, the embedded data should be both invisible and inaudible to a human observer. It is worth noting that although data hiding resembles compression, it is entirely distinct from encryption and decryption: its goal is not to restrict or regulate access to the "host" signal, but to ensure that the embedded data remain intact and recoverable.

Two important applications of data hiding in digital media are providing proof of copyright and assuring content integrity. Therefore, even if the host signal is subjected to degrading operations such as filtering, resampling, cropping, or lossy compression, the data should remain hidden in the "host" signal.
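To make the idea of minimally perceptible embedding concrete, here is a sketch of the simplest textbook technique, least-significant-bit (LSB) substitution in image samples. This is a generic illustration, not the specific method evaluated in this work, and plain LSB embedding does not survive the distortions discussed above (lossy compression, resampling, and the like).

    #include <stdint.h>
    #include <stddef.h>

    /* Embed nbits bits of msg into the least significant bits of the
       8-bit image samples in pix (one bit per sample). Changing only
       the LSB alters each sample by at most 1, which is normally
       imperceptible. */
    void lsb_embed(uint8_t *pix, const uint8_t *msg, size_t nbits)
    {
        for (size_t i = 0; i < nbits; i++) {
            uint8_t bit = (msg[i / 8] >> (7 - i % 8)) & 1;
            pix[i] = (uint8_t)((pix[i] & 0xFE) | bit);
        }
    }

    /* Recover the hidden bits from the samples. */
    void lsb_extract(const uint8_t *pix, uint8_t *msg, size_t nbits)
    {
        for (size_t i = 0; i < nbits; i++) {
            if (pix[i] & 1)
                msg[i / 8] |= (uint8_t)(1u << (7 - i % 8));
            else
                msg[i / 8] &= (uint8_t)~(1u << (7 - i % 8));
        }
    }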

Foreign Literature and Translation: Warehouse Management System (FMS)

Overview

This paper introduces a warehouse management system (FMS) based on RFID technology. The system is scalable and efficient and can be used in many kinds of environments. Based on tag-based tracking, the system automatically monitors the items in the warehouse, improving the efficiency of inventory management. In addition, the system incorporates multiple quality-control and security measures to ensure that the items in the warehouse are effectively managed and protected.

System Components

The system consists of several components, chiefly RFID readers, tags, sensors, a database, and a user interface. The RFID readers and tags are used to monitor the location and quantity of the items in the warehouse. The sensors are used to monitor the warehouse's environmental conditions, such as temperature and humidity. The database is used to store and manage item information, and it also provides data analysis and reporting functions. The user interface gives users a visual, interactive interface for monitoring the items in the warehouse in real time.

System Advantages

Compared with traditional warehouse management approaches, the system has the following advantages:
- Real-time monitoring of the location and quantity of the items in the warehouse.
- Less manual work, with improved efficiency and accuracy.
- Multiple quality-control and security measures that ensure the items in the warehouse are effectively managed and protected.
- High scalability, suitable for many kinds of environments.

System Applications

The system can be widely applied in many industries and settings, for example:
- Warehousing and logistics
- Pharmaceuticals and life sciences
- Industrial manufacturing
- Customer service and retail

Conclusion

The warehouse management system (FMS) is an efficient, RFID-based management system with advantages including real-time monitoring, quality control, and security protection. It can be widely applied in many industries and settings, and it is a warehouse management approach worth promoting.

Graduation Thesis: English References and Translation

Inventory Management

Inventory Control

When it comes to so-called "inventory control", many people interpret it as "warehouse management", which is actually a serious distortion. The traditional, narrow view mainly covers warehouse-level control of stocked materials: taking stock, data processing, storage, and distribution, using means such as corrosion protection and temperature and humidity control to keep the physical inventory in optimal condition. This is just one form of inventory control, which can be defined as physical inventory control.

How, then, should inventory control be understood from a broad perspective? Inventory control should be tied to the company's financial and operational objectives, in particular operating cash flow. By optimizing the entire demand and supply chain management (DSCM) process, setting a reasonable ERP control strategy, and supporting it with appropriate information-processing tools, the aim is to reduce inventory levels as far as possible, and to reduce the risk of obsolete and devalued stock, while ensuring timely delivery. In this sense, physical inventory control is just one means, and one necessary part, of achieving the financial goals of overall inventory control. From the perspective of organizational functions, physical inventory control is mainly the responsibility of warehouse management, whereas broad inventory control is demand and supply chain management, the responsibility of the whole company.

Why, even now, is many people's understanding of inventory control limited to physical inventory control? Two reasons cannot be ignored. First, our enterprises do not attach importance to inventory control. Especially in businesses that are doing relatively well, as long as there is money, few people consider the problem of inventory turnover. Inventory control is simply interpreted as warehouse management; only when money runs short does anyone look at the inventory problem, and the conclusions drawn are often very simple: purchasing bought too much, or the warehouse department did a poor job. Second, ERP is misleading. Simple invoicing software audaciously calls itself ERP, and companies believe that their so-called ERP can reduce inventory, as though inventory control could be achieved by relying on a small software package. Even SAP and BAAN, the big players in the world ERP field, define the simple warehouse-management functionality inside their modules as "inventory management" or "inventory control". This leaves those who already did not quite understand inventory control even less sure what it is.

In fact, understood broadly, inventory control should include the following.

First, the fundamental purpose of inventory control. We know that for so-called world-class manufacturing, two key performance indicators (KPIs) are customer satisfaction and inventory turns, and inventory turns is actually the fundamental objective of inventory control.

Second, the means of inventory control.
To increase inventory turns, relying solely on so-called physical inventory control is not enough; inventory turns are the output of the whole demand and supply chain management process flow. Besides warehouse management, this large process also includes, more importantly: forecasting and order processing, production planning and control, materials planning and purchasing control, inventory planning and forecasting itself, as well as distribution and delivery strategies for finished products and raw materials, and even customs management processes. And running through the entire demand and supply chain management process is the management of information flow and capital flow. In other words, inventory itself spans every link of the demand and supply management process; to achieve the fundamental purpose of inventory control, every one of those links must be controlled, rather than just managing the physical inventory at hand.

Third, the organizational structure and assessment of inventory control. Since inventory control is the output of the demand and supply chain management process, achieving its fundamental purpose requires an organizational structure compatible with that process. Even now we can see that many companies have only a purchasing department, with the warehouse reporting to the purchasing department. This falls far short of what inventory control requires. From an analysis of the demand and supply chain management process, we know that purchasing and warehouse management are typical executive arms, while inventory control should focus on prevention. It is very difficult for the executive branch to "prevent inventory", for the simple reason that its assessment indicators consist largely of ensuring supply (to production and to customers). What the actual situation is, what a reasonable demand and supply chain management process looks like, and how to set up a correspondingly rational organizational structure are questions many of our enterprises still need to explore.

The Role of Inventory Control

Inventory management is an important part of business management. In production and operation activities, inventory management must ensure that the plant's demand for raw materials and spare parts is met, and it also directly affects purchasing and sales activities and the share of funds they occupy. Keeping corporate funds liquid and accelerating cash flow, while minimizing the funds tied up in stock on the premise of secure supply, directly affects operational efficiency.
On the premise of meeting production and operating needs, keep inventories at a reasonable level; control inventory dynamically, placing orders at the right time and in the right quantities to avoid overstocking or stockouts; reduce the space inventory occupies and lower the total cost of inventory; and control the funds tied up in stock to accelerate cash flow.

Problems arising from excessive inventory: increased warehouse space and inventory storage costs, which raise product costs; large amounts of working capital tied up in sluggish stock, which not only increases the burden of interest payments but also affects the time value of money and opportunity income; physical and intangible losses of finished products and raw materials; large amounts of idle enterprise resources, which hinder their rational allocation and optimization; and the covering up of contradictions and problems throughout production and operations, which works against raising the level of management.

Problems arising from too little inventory: a decline in service levels, which hurts sales profits and corporate reputation; inadequate supply of raw materials or other inputs to the production system, which disrupts the normal production process; shortened lead times and more frequent orders, which raise ordering (production) costs; and disruption of the balance of production and of complete-set assembly.

Notes

Inventory management should particularly consider the following two questions:

First, according to the sales plan and the planned quantities of goods circulating in the market, where should stock be held, and how much?

Second, starting from service levels and economic benefits, how should inventory levels be guaranteed and replenished?

These two problems correspond to the functions of inventory in the logistics process. In general, inventory serves:

(1) To prevent interruption: shortening the time from receiving an order to delivering the goods, ensuring service quality while preventing stockouts.

(2) To ensure appropriate inventory levels, saving inventory costs.

(3) To reduce logistics costs: replenishing reasonable quantities of goods at appropriate intervals compatible with demand, in order to reduce logistics costs and eliminate or avoid sales fluctuations.

(4) To ensure smooth production planning, eliminating or avoiding sales fluctuations.

(5) A display function.

(6) A reserve: stockpiling when prices fall to reduce losses, and to respond to disasters and other contingencies.

On the question of warehouses (inventory) themselves, we must consider their number and location. A distribution center should, as far as possible, be located in a place appropriate to customer needs; central storage, following the principle of minimizing replenishment runs to the distribution centers, has no particular location requirements. Once the stocking bases are established, it must also be decided which commodities are stored at each location.
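Since inventory turns is named above as the fundamental KPI, a one-line computation makes the definition concrete. The formula (annual cost of goods sold divided by average inventory value) is the standard textbook one, not something specific to this text, and the figures below are purely illustrative.

    #include <stdio.h>

    /* Inventory turns = annual cost of goods sold / average
       inventory value (standard definition). */
    double inventory_turns(double annual_cogs, double avg_inventory)
    {
        return annual_cogs / avg_inventory;
    }

    int main(void)
    {
        /* Illustrative figures only. */
        printf("turns = %.1f per year\n",
               inventory_turns(12000000.0, 2000000.0)); /* 6.0 */
        return 0;
    }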

Graduation Project Foreign Literature Translation (Original + Translation)

Environmental problems caused by Istanbul subway excavation and suggestions for remediation

Ibrahim Ocak

Abstract: Many environmental problems caused by subway excavations have inevitably become an important part of city life. These problems can be categorized as transporting and stocking of excavated material, traffic jams, noise, vibrations, piles of dust and mud, and lack of supplies. Although these problems cause many difficulties, the most pressing for a big city like Istanbul is excavation, since the other listed difficulties result from it. Moreover, these problems are environmentally and regionally restricted to the period over which construction projects are underway and disappear when construction is finished. Currently, in Istanbul, there are nine subway construction projects in operation, covering approximately 73 km in length; over 200 km are to be constructed in the near future. The amount of material excavated from ongoing construction projects is approximately 12 million m³. In this study, the problems caused by subway excavation, primarily the problem of excavation waste (EW), are analyzed, and suggestions for remediation are offered.

Graduation Project (Thesis): Foreign Original and Translation

I. Foreign Original

MCU

A microcontroller (or MCU) is a computer-on-a-chip. It is a type of microprocessor emphasizing self-sufficiency and cost-effectiveness, in contrast to a general-purpose microprocessor (the kind used in a PC).

With the development of technology, control systems are applied in a wide range of fields, and equipment is becoming smaller and more intelligent. As a high-technology product, the single-chip microcontroller shows strong vitality thanks to its small size, powerful functions, low cost, and flexibility of use. Compared with ordinary integrated circuits, it generally has better immunity to interference and adapts better to environmental temperature and humidity, so it can work stably under industrial conditions. Single-chip microcontrollers are widely used in all kinds of instruments and meters, making instrumentation intelligent and improving measurement speed and accuracy while strengthening control functions. In short, with the advent of the information age, the inherent structural weaknesses of the traditional single-chip microcontroller have exposed many drawbacks; its speed, scale, and performance indicators increasingly fail to meet users' needs, and the development and upgrading of single-chip chipsets face new challenges.

The Description of the AT89S52

The AT89S52 is a low-power, high-performance CMOS 8-bit microcontroller with 8K bytes of In-System Programmable Flash memory. The device is manufactured using Atmel's high-density nonvolatile memory technology and is compatible with the industry-standard 80C51 instruction set and pinout. The on-chip Flash allows the program memory to be reprogrammed in-system or by a conventional nonvolatile memory programmer. By combining a versatile 8-bit CPU with In-System Programmable Flash on a monolithic chip, the Atmel AT89S52 is a powerful microcontroller which provides a highly flexible and cost-effective solution for many embedded control applications.

The AT89S52 provides the following standard features: 8K bytes of Flash, 256 bytes of RAM, 32 I/O lines, a Watchdog timer, two data pointers, three 16-bit timer/counters, a six-vector two-level interrupt architecture, a full-duplex serial port, an on-chip oscillator, and clock circuitry. In addition, the AT89S52 is designed with static logic for operation down to zero frequency and supports two software-selectable power-saving modes. The Idle Mode stops the CPU while allowing the RAM, timer/counters, serial port, and interrupt system to continue functioning. The Power-down mode saves the RAM contents but freezes the oscillator, disabling all other chip functions until the next interrupt or hardware reset.

Features
• Compatible with MCS-51 Products
• 8K Bytes of In-System Programmable (ISP) Flash Memory; Endurance: 1000 Write/Erase Cycles
• 4.0V to 5.5V Operating Range
• Fully Static Operation: 0 Hz to 33 MHz
• Three-level Program Memory Lock
• 256 x 8-bit Internal RAM
• 32 Programmable I/O Lines
• Three 16-bit Timer/Counters
• Eight Interrupt Sources
• Full Duplex UART Serial Channel
• Low-power Idle and Power-down Modes
• Interrupt Recovery from Power-down Mode
• Watchdog Timer
• Dual Data Pointer
• Power-off Flag

Pin Description

VCC: Supply voltage.

GND: Ground.

Port 0: Port 0 is an 8-bit open-drain bidirectional I/O port. As an output port, each pin can sink eight TTL inputs. When 1s are written to Port 0 pins, the pins can be used as high-impedance inputs. Port 0 can also be configured to be the multiplexed low-order address/data bus during accesses to external program and data memory.
In this mode, P0 has internal pull-ups. Port 0 also receives the code bytes during Flash programming and outputs the code bytes during program verification. External pull-ups are required during program verification.

Port 1: Port 1 is an 8-bit bidirectional I/O port with internal pull-ups. The Port 1 output buffers can sink/source four TTL inputs. When 1s are written to Port 1 pins, they are pulled high by the internal pull-ups and can be used as inputs. As inputs, Port 1 pins that are externally being pulled low will source current (IIL) because of the internal pull-ups. In addition, P1.0 and P1.1 can be configured to be the timer/counter 2 external count input (P1.0/T2) and the timer/counter 2 trigger input (P1.1/T2EX), respectively. Port 1 also receives the low-order address bytes during Flash programming and verification.

Port 2: Port 2 is an 8-bit bidirectional I/O port with internal pull-ups. The Port 2 output buffers can sink/source four TTL inputs. When 1s are written to Port 2 pins, they are pulled high by the internal pull-ups and can be used as inputs. As inputs, Port 2 pins that are externally being pulled low will source current (IIL) because of the internal pull-ups. Port 2 emits the high-order address byte during fetches from external program memory and during accesses to external data memory that use 16-bit addresses (MOVX @DPTR). In this application, Port 2 uses strong internal pull-ups when emitting 1s. During accesses to external data memory that use 8-bit addresses (MOVX @RI), Port 2 emits the contents of the P2 Special Function Register. Port 2 also receives the high-order address bits and some control signals during Flash programming and verification.

Port 3: Port 3 is an 8-bit bidirectional I/O port with internal pull-ups. The Port 3 output buffers can sink/source four TTL inputs. When 1s are written to Port 3 pins, they are pulled high by the internal pull-ups and can be used as inputs. As inputs, Port 3 pins that are externally being pulled low will source current (IIL) because of the pull-ups. Port 3 also serves the functions of various special features of the AT89S52, as shown in the following table, and receives some control signals for Flash programming and verification.

RST: Reset input. A high on this pin for two machine cycles while the oscillator is running resets the device. This pin drives high for 96 oscillator periods after the Watchdog times out. The DISRTO bit in SFR AUXR (address 8EH) can be used to disable this feature. In the default state of bit DISRTO, the RESET HIGH out feature is enabled.

ALE/PROG: Address Latch Enable (ALE) is an output pulse for latching the low byte of the address during accesses to external memory. This pin is also the program pulse input (PROG) during Flash programming. In normal operation, ALE is emitted at a constant rate of 1/6 the oscillator frequency and may be used for external timing or clocking purposes. Note, however, that one ALE pulse is skipped during each access to external data memory. If desired, ALE operation can be disabled by setting bit 0 of SFR location 8EH. With the bit set, ALE is active only during a MOVX or MOVC instruction. Otherwise, the pin is weakly pulled high. Setting the ALE-disable bit has no effect if the microcontroller is in external execution mode.

PSEN: Program Store Enable (PSEN) is the read strobe to external program memory.
When the AT89S52 is executing code from external program memory, PSEN is activated twice each machine cycle, except that two PSEN activations are skipped during each access to external data memory.

EA/VPP: External Access Enable. EA must be strapped to GND in order to enable the device to fetch code from external program memory locations starting at 0000H up to FFFFH. Note, however, that if lock bit 1 is programmed, EA will be internally latched on reset. EA should be strapped to VCC for internal program executions. This pin also receives the 12-volt programming enable voltage (VPP) during Flash programming.

XTAL1: Input to the inverting oscillator amplifier and input to the internal clock operating circuit.

XTAL2: Output from the inverting oscillator amplifier.

Special Function Registers

Note that not all of the addresses are occupied, and unoccupied addresses may not be implemented on the chip. Read accesses to these addresses will in general return random data, and write accesses will have an indeterminate effect. User software should not write 1s to these unlisted locations, since they may be used in future products to invoke new features. In that case, the reset or inactive values of the new bits will always be 0.

Timer 2 Registers: Control and status bits are contained in registers T2CON and T2MOD for Timer 2. The register pair (RCAP2H, RCAP2L) are the Capture/Reload registers for Timer 2 in 16-bit capture mode or 16-bit auto-reload mode.

Interrupt Registers: The individual interrupt enable bits are in the IE register. Two priorities can be set for each of the six interrupt sources in the IP register.

Dual Data Pointer Registers: To facilitate accessing both internal and external data memory, two banks of 16-bit Data Pointer Registers are provided: DP0 at SFR address locations 82H-83H and DP1 at 84H-85H. Bit DPS = 0 in SFR AUXR1 selects DP0 and DPS = 1 selects DP1. The user should always initialize the DPS bit to the appropriate value before accessing the respective Data Pointer Register.

Power Off Flag: The Power Off Flag (POF) is located at bit 4 (PCON.4) in the PCON SFR. POF is set to "1" during power up. It can be set and reset under software control and is not affected by reset.

Memory Organization

MCS-51 devices have a separate address space for Program and Data Memory. Up to 64K bytes each of external Program and Data Memory can be addressed.

Program Memory: If the EA pin is connected to GND, all program fetches are directed to external memory. On the AT89S52, if EA is connected to VCC, program fetches to addresses 0000H through 1FFFH are directed to internal memory, and fetches to addresses 2000H through FFFFH are directed to external memory.

Data Memory: The AT89S52 implements 256 bytes of on-chip RAM. The upper 128 bytes occupy a parallel address space to the Special Function Registers. This means that the upper 128 bytes have the same addresses as the SFR space but are physically separate from SFR space. When an instruction accesses an internal location above address 7FH, the address mode used in the instruction specifies whether the CPU accesses the upper 128 bytes of RAM or the SFR space. Instructions which use direct addressing access the SFR space. For example, the following direct addressing instruction accesses the SFR at location 0A0H (which is P2):

    MOV 0A0H, #data

Instructions that use indirect addressing access the upper 128 bytes of RAM. For example, the following indirect addressing instruction, where R0 contains 0A0H, accesses the data byte at address 0A0H, rather than P2 (whose address is 0A0H):

    MOV @R0, #data
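Shown together with comments, the datasheet's two examples make the distinction visible; the MOV into R0 is added here only to establish the precondition the text states (R0 contains 0A0H).

    ; Direct addressing at or above 80H selects the SFR space:
    MOV  0A0H, #data     ; writes the SFR at 0A0H, i.e. port P2

    ; Indirect addressing selects the upper 128 bytes of RAM:
    MOV  R0, #0A0H       ; point R0 at internal RAM address 0A0H
    MOV  @R0, #data      ; writes RAM byte 0A0H, not P2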
For example, the following indirect addressing instruction, where R0 contains 0A0H, accesses the data byte at address 0A0H, rather than P2 (whose address is 0A0H).

MOV @R0, #data

Note that stack operations are examples of indirect addressing, so the upper 128 bytes of data RAM are available as stack space.

Timer 0 and 1

Timer 0 and Timer 1 in the AT89S52 operate the same way as Timer 0 and Timer 1 in the AT89C51 and AT89C52.

Timer 2

Timer 2 is a 16-bit Timer/Counter that can operate as either a timer or an event counter. The type of operation is selected by bit C/T2 in the SFR T2CON (shown in Table 2). Timer 2 has three operating modes: capture, auto-reload (up or down counting), and baud rate generator. The modes are selected by bits in T2CON. Timer 2 consists of two 8-bit registers, TH2 and TL2. In the Timer function, the TL2 register is incremented every machine cycle. Since a machine cycle consists of 12 oscillator periods, the count rate is 1/12 of the oscillator frequency.

In the Counter function, the register is incremented in response to a 1-to-0 transition at its corresponding external input pin, T2. In this function, the external input is sampled during S5P2 of every machine cycle. When the samples show a high in one cycle and a low in the next cycle, the count is incremented. The new count value appears in the register during S3P1 of the cycle following the one in which the transition was detected. Since two machine cycles (24 oscillator periods) are required to recognize a 1-to-0 transition, the maximum count rate is 1/24 of the oscillator frequency. To ensure that a given level is sampled at least once before it changes, the level should be held for at least one full machine cycle.

Interrupts

The AT89S52 has a total of six interrupt vectors: two external interrupts (INT0 and INT1), three timer interrupts (Timers 0, 1, and 2), and the serial port interrupt. These interrupts are all shown in Figure 10. Each of these interrupt sources can be individually enabled or disabled by setting or clearing a bit in Special Function Register IE. IE also contains a global disable bit, EA, which disables all interrupts at once. Note that Table 5 shows that bit position IE.6 is unimplemented. In the AT89S52, bit position IE.5 is also unimplemented. User software should not write 1s to these bit positions, since they may be used in future AT89 products.

The Timer 2 interrupt is generated by the logical OR of bits TF2 and EXF2 in register T2CON. Neither of these flags is cleared by hardware when the service routine is vectored to. In fact, the service routine may have to determine whether it was TF2 or EXF2 that generated the interrupt, and that bit will have to be cleared in software. The Timer 0 and Timer 1 flags, TF0 and TF1, are set at S5P2 of the cycle in which the timers overflow. The values are then polled by the circuitry in the next cycle. However, the Timer 2 flag, TF2, is set at S2P2 and is polled in the same cycle in which the timer overflows.

II. Translation: The Single-Chip Microcomputer

A single-chip microcomputer (microcontroller) is a microcomputer that integrates the central processing unit, memory, timer/counters, and input/output interfaces on a single integrated-circuit chip.

Database Management System Graduation Thesis: Bilingual Reference Materials, Foreign Literature Translation and Review

English Translation: An Introduction to Database Management Systems, by Raghu Ramakrishnan. A database (sometimes spelled "data base"), also called an electronic database, is a collection of data or information organized specifically so that a computer can search and retrieve it rapidly.

The structure of a database is specially designed so that, supported by various data-processing commands, the storage, retrieval, modification, and deletion of data are simplified.

A database can be stored on magnetic disk, magnetic tape, optical disc, or other secondary storage devices.

A database consists of one file or a set of files; the information in these files can be broken down into records, and each record contains one or more fields.

The field is the basic unit of data access.

A database describes entities, and a field typically holds information related to one attribute of an entity.

Using keywords and various sorting commands, users can query, rearrange, group, or select the fields of many records in order to retrieve a particular category of data, and can also generate reports.

All but the simplest databases contain complex data relationships and links.

The system software package that handles the complex tasks of creating, accessing, and maintaining database records is called a database management system (DBMS).

The programs in a DBMS package provide the interface between the database and its users.

(These users may be application programmers, administrators, other people who need information, and various operating-system programs.) A DBMS can organize, process, and present data elements selected from the database.

This capability lets decision makers search, probe, and query the contents of the database in order to answer questions that are not covered by regular reports, that do not recur, and that cannot be anticipated.

Such questions may initially be vague and/or poorly defined, but users can browse the database until they obtain the information they need.

In short, the DBMS "manages" the stored data items and assembles the needed items from the common database to answer the queries of nonprogrammers.

A DBMS consists of three major components: (1) a storage subsystem for storing and retrieving data in files; (2) a modeling and manipulation subsystem that provides the means for organizing data and for adding, deleting, maintaining, and updating it; and (3) an interface between the user and the DBMS.

Several important trends are increasing the value and effectiveness of database management systems: 1. Managers need up-to-date information in order to make effective decisions.

Computer Science Graduation Project -- English Literature (with Translation)

Original Foreign Literature

THE TECHNICAL DEVELOPMENT HISTORY OF JSP

Java Server Pages (JSP) is a web-based scripting technology, similar to Netscape's server-side JavaScript (SSJS) and Microsoft's Active Server Pages (ASP). Compared with SSJS and ASP, JSP is more extensible, and it is not tied to any one vendor or to any one particular Web server. Although the JSP specification is drawn up by Sun, any vendor may implement JSP on its own system.

After Sun formally released JSP (Java Server Pages), this new technique for developing Web applications very quickly attracted attention. JSP provides a dedicated development environment for building highly dynamic Web applications. According to Sun, JSP can run on 85% of the server products on the market, including Apache Web Server and IIS 4.0.

This chapter introduces JSP, its relationship to databases, and the related subject of JavaBeans. The coverage of these basics is necessarily rough and should be read only as a guide; readers who need more detail should consult a dedicated JSP book.

1.1 OVERVIEW

JSP (Java Server Pages) is a dynamic web page standard initiated by Sun Microsystems and built up jointly with many other companies. It is powerful for constructing dynamic web pages, though it is not unique in that role. JSP is very similar to Microsoft's ASP: both provide the ability to mix program code into HTML and have a language engine interpret and execute that code. Below is a brief introduction.

JSP pages are translated into servlets. So, fundamentally, any task JSP pages can perform could also be accomplished by servlets. However, this underlying equivalence does not mean that servlets and JSP pages are equally appropriate in all scenarios. The issue is not the power of the technology; it is the convenience, productivity, and maintainability of one or the other. After all, anything you can do on a particular computer platform in the Java programming language you could also do in assembly language. But it still matters which you choose.

JSP provides the following benefits over servlets alone:

• It is easier to write and maintain the HTML. Your static code is ordinary HTML: no extra backslashes, no double quotes, and no lurking Java syntax.

• You can use standard Web-site development tools. Even HTML tools that know nothing about JSP can be used, because they simply ignore the JSP tags.

• You can divide up your development team. The Java programmers can work on the dynamic code, and the Web developers can concentrate on the presentation layer. On large projects, this division is very important. Depending on the size of your team and the complexity of your project, you can enforce a weaker or stronger separation between the static HTML and the dynamic content.

Now, this discussion is not to say that you should stop using servlets and use only JSP instead. By no means. Almost all projects will use both. For some requests in your project you will use servlets; for others you will use JSP; for still others you will combine them with the MVC architecture.
You want the appropriate tool for the job, and servlets, by themselves, do not complete your toolkit.

1.2 THE ORIGIN OF JSP

Sun's JSP technology lets Web page developers design and format the final page using HTML or XML markup, and use JSP tags or small scriptlets to generate the dynamic content on the page (the content that changes according to the request).

Java Servlets are the technical foundation of JSP, and developing a large Web application requires Java Servlets working together with JSP. The name "Servlet" derives from "Applet"; there are many ways to translate it, but to avoid misunderstanding this book adopts the term Servlet directly, without translation. If you like, you can think of it as a "small server-side program". Servlets serve the same purpose as traditional Web development tools such as CGI, ISAPI, and NSAPI. With Java Servlets, users no longer need the inefficient CGI method, nor the server-API methods that can generate dynamic Web pages only on one fixed Web server platform. Many Web servers support Servlets, and even Web servers that do not support Servlets directly can support them through add-on application servers and modules. Benefiting from Java's cross-platform character, Servlets are also platform independent: as long as it conforms to the Java Servlet specification, a Servlet has nothing to do with either the platform or the Web server. Because Java Servlets provide service internally through threads, a new process need not be started for each request, and the multithreading mechanism can serve several requests at the same time, so Java Servlets are highly efficient.

Java Servlets are not without weaknesses, however. Like the traditional CGI, ISAPI, and NSAPI methods, Java Servlets produce dynamic web pages by emitting HTML statements from code, so developing an entire website with Java Servlets makes integrating the dynamic parts with the static pages simply a nightmare. To solve this weakness of the Java Servlet, Sun released JSP.

A number of years ago, Marty was invited to attend a small 20-person industry roundtable discussion on software technology. Sitting in the seat next to Marty was James Gosling, inventor of the Java programming language. Sitting several seats away was a high-level manager from a very large software company in Redmond, Washington. During the discussion, the moderator brought up the subject of Jini, which at that time was a new Java technology. The moderator asked the manager what he thought of it, and the manager responded that it was too early to tell, but that it seemed to be an excellent idea. He went on to say that they would keep an eye on it, and if it seemed to be catching on, they would follow his company's usual "embrace and extend" strategy. At this point, Gosling lightheartedly interjected, "You mean disgrace and distend."

Now, the grievance that Gosling was airing was that he felt that this company would take technology from other companies and suborn it for their own purposes. But guess what? The shoe is on the other foot here.
The Java community did not invent the idea of designing pages as a mixture of static HTML and dynamic code marked with special tags. For example, Cold Fusion did it years earlier. Even ASP (a product from the very software company of the aforementioned manager) popularized this approach before JSP came along and decided to jump on the bandwagon. In fact, JSP not only adopted the general idea, it even used many of the same special tags as ASP did.

JSP is a presentation-layer technology built on the Java servlet model that makes writing HTML simpler. Like SSJS, it allows you to mix static HTML content with server-side scripting to produce dynamic output. Java is JSP's default scripting language; however, just as ASP can use other languages (such as JavaScript and VBScript), the JSP specification also allows other languages.

1.3 JSP CHARACTERISTICS

By the argument that a scripting language is one that serves within some host system, JSP should be seen as a kind of scripting language. As a scripting language, however, JSP seems almost too powerful: nearly all of Java can be used in JSP.

As a text-based development technology that takes presentation as its center, JSP provides all the advantages of the Java Servlet and, when combined with JavaBeans, provides a simple way to separate content from presentation logic. The advantage of separating content from presentation logic is that the people who update a page's appearance need not know Java code, and the people who update the JavaBeans need not be expert web page designers. JSP pages with JavaBeans can define Web templates for building a website made up of pages with a consistent appearance. The JavaBeans supply the data, so the templates contain no Java code, which means they can be maintained by HTML authors. (A short JavaBean sketch illustrating this separation appears at the end of this section's material.) Of course, Java Servlets can also be used to control the logic of the website, with the Servlets invoking JSP files to separate the website's logic from its content.

Generally speaking, in actual JSP engines, a JSP page is compiled when it executes, not interpreted. Interpreted dynamic web page development tools, such as ASP and PHP3, can no longer satisfy today's large e-commerce applications, for reasons of speed among others, and traditional development techniques are all changing toward compiled execution, such as ASP to ASP+ and PHP3 to PHP4.

The JSP specification does not require that the program in the JSP code part (called a scriptlet) must be written in Java. Indeed, some JSP engines adopt other scripting languages such as ECMAScript, but in practice those scripting languages are also built on Java and are compiled into Servlets for execution.
According to the JSP specification, it is even possible to write scriptlets with no relation to Java at all; however, because JSP's strength lies mainly in its ability to work with JavaBeans and Enterprise JavaBeans, even when the scriptlet part does not use Java, the compiled executable code is still Java-related.

1.4 THE JSP MECHANISM

To understand how JSP unites the technical advantages described above to achieve its results so readily, you must first understand the distinction between "component-centric web page development" and "page-centric web page development".

SSJS and ASP were both released several years ago, when the network was still very young and nobody knew a better solution than piling all the business, data, and presentation logic into the web page itself. This page-centric model was easy to learn and developed very quickly. As time passed, however, people realized that this method is unsuitable for building large, scalable Web applications. Presentation logic written in a scripting environment is locked inside the page and can be reused only by cutting and pasting. Presentation logic is usually mixed together with business and data logic, which makes maintaining an application feel like walking on eggshells whenever a programmer tries to change an application's appearance without breaking the business logic tied to it. In fact, reuse of application components is already very mature in the enterprise, and nobody wants to rewrite those logics for each application.

HTML and graphic designers hand the implementation of their designs over to the Web authors, forcing double work, usually hand coding, because there are no suitable tools for joining server-side script with HTML content. In short, as the complexity of Web applications keeps rising, the limitations of the page-centric development method become obvious.

At the same time, people kept looking for better methods of building Web applications, and componentization spread into the client/server realm. JavaBeans and ActiveX were published by their companies to let Java and Windows application developers build complex programs quickly with "rapid application development" (RAD) tools. These techniques let experts in a given domain write components for vertical applications in their area of skill, while developers can take and use them directly without mastering that domain's expertise.

JSP appeared as a component-centric development platform. It takes JavaBeans and Enterprise JavaBeans (EJB) components, which encapsulate the models of business and data logic, as its foundation, and provides a rich set of tags and a scripting platform for displaying in the HTML page the content created by JavaBeans, or for sending content back. Because of JSP's component-centric nature, it can be used by Java and non-Java developers alike.
Non-Java developers can use JSP tags to work with JavaBeans that senior Java developers have built. Java developers can not only create and use JavaBeans, but also use the Java language inside the JSP page to exercise more precise control over the presentation logic that sits on top of the underlying JavaBeans.

Now consider how JSP handles an HTTP request. In the basic request model, a request is sent directly to a JSP page. The JSP code controls the logic processing and the interaction with the JavaBeans components, and dynamically generates the displayed result, mixed with the static HTML of the page. The Beans can be JavaBeans or EJB components. Moreover, in more complex request models, the requested page can call other JSP pages or Java Servlets.

The JSP engine actually converts the JSP tags, the Java code in the JSP page, and the static HTML content into one large block of Java code. These code blocks are organized by the JSP engine into a Java Servlet that the user never sees, and that Servlet is then automatically compiled into Java bytecode.

Thus, when a site visitor requests a JSP page, a generated, precompiled Servlet does all the work without the visitor knowing it, concealed yet efficient. Because the Servlet is compiled, the JSP code in a web page does not need to be interpreted every time the page is requested. The JSP engine needs to compile the Servlet only once, after the code is finished or modified, and the compiled Servlet can then be executed. Since the JSP engine generates and compiles the Servlet automatically, and programmers never have to compile the code themselves, JSP delivers both efficiency and the flexibility that rapid development requires.

Compared with traditional CGI, JSP has considerable advantages. First, on speed: a traditional CGI program uses the system's standard input and output to generate the dynamic web page, whereas JSP connects with the server directly. Furthermore, with CGI, each visit requires an additional process to handle it, and the constant creation and destruction of processes is no small burden for a computer serving as a Web server. Second, JSP was designed and developed specifically for the Web: its purpose is to build Web-based applications, and it includes a whole set of specifications and tools. Using JSP technology, many JSP pages can be combined very conveniently into one Web application.

JSP's six built-in objects

The request object: This object packages the information submitted by the user; by calling the object's corresponding methods, you can access the packaged information, that is, use the object to access the user's information.

The response object: Sends data to the client in dynamic response to the client's request.

The session object:
1. What is a session: The session object is a JSP built-in object. It is created automatically when the first JSP page loads, and it manages the conversation. The interval from when a customer opens a browser and connects to the server until the browser is closed and the customer leaves the server is called one session. When a customer visits a server, the customer may move back and forth among several linked pages and repeatedly refresh a page; the server needs some way to know that this is the same client, and that is what the session object is for.

2. The session object's ID: When a customer first visits a JSP page on a server, the JSP engine creates a session object and assigns it an ID number of type String. The JSP engine sends this ID number to the client, where it is stored in a Cookie; the session object and the customer thus have a one-to-one relationship. When the customer connects to other pages on the server, no new session object is allocated. Only when the browser is closed does the server cancel the session object, and the relationship between the conversation and the customer disappears. When the customer reopens the browser and connects to the server, the server creates a new session object for that customer.

The application object

1. What is the application object: The application object is launched once the server starts. While a customer browses among the various pages of the site, this application object remains the same one, until the server shuts down. Unlike the session object, the application object is the same for all customers; that is, all customers share this built-in application object.

2. Commonly used methods of the application object:

(1) public void setAttribute(String key, Object obj): adds the specified object obj to the application object and indexes the added object with the designated keyword.

(2) public Object getAttribute(String key): retrieves the object in the application object that matches the keyword.

The out object

The out object is an output stream used to send data to the client.

Cookies

1. What is a Cookie: A Cookie is a piece of text that a Web server stores on the user's hard drive. Cookies allow a Web site to store information on a user's computer and later retrieve it. For example, a Web site may generate a unique ID for each visitor and store it, in the form of a Cookie file, on each user's machine. If you use the IE browser to visit the Web, you can see all the Cookies stored on your hard drive. They are most often stored in c:\windows\cookies (on Windows 2000, in C:\Documents and Settings\your user name\Cookies). A Cookie is saved in the record format "keyword key = value".

2. Creating a Cookie object: Calling the Cookie constructor creates a Cookie object. The Cookie constructor has two string parameters: the Cookie name and the Cookie value.

Cookie c = new Cookie("username", "john");

3. If a JSP page has prepared a Cookie object to send to the client, use the response object's addCookie() method.

Format: response.addCookie(c)

4. To read the Cookies saved on the client, use the request object's getCookies() method, which returns all the Cookies the client sent as an array of Cookie objects. To find the desired Cookie object, you must loop over the array and compare each object's keyword.

The Technical Development History of JSP (Translation)

Java Server Pages (JSP) is a web-based scripting technology, similar to Netscape's server-side JavaScript (SSJS) and Microsoft's Active Server Pages (ASP).
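To tie the preceding sections together, here are two brief, hypothetical sketches. The first shows the built-in objects and the Cookie methods quoted above working together in one page; the page name, attribute key, and displayed text are invented for the example.

<%-- counter.jsp: a minimal sketch using the built-in objects described above --%>
<%
    // Track this client's page views with the session object.
    Integer visits = (Integer) session.getAttribute("visits");
    visits = (visits == null) ? 1 : (visits + 1);
    session.setAttribute("visits", visits);

    // Store a user name on the client, as in the text's Cookie example.
    Cookie c = new Cookie("username", "john");
    response.addCookie(c);

    // Read back any Cookies the client sent, looping over the array
    // and comparing each object's keyword, as described in point 4 above.
    String name = "guest";
    Cookie[] cookies = request.getCookies();
    if (cookies != null) {
        for (Cookie k : cookies) {
            if ("username".equals(k.getName())) {
                name = k.getValue();
            }
        }
    }
    out.println("<p>Hello " + name + ", this is visit number " + visits + ".</p>");
%>

The second sketch illustrates the content/presentation separation of Section 1.3: a hypothetical JavaBean supplies the data, and a template page displays it through the standard jsp:useBean and jsp:getProperty tags with no Java in the template itself. The class name, package, and property values are invented.

// ProductBean.java: a hypothetical bean supplying data to a JSP template
package example;

public class ProductBean implements java.io.Serializable {
    private String name  = "stapler";   // sample data for the sketch
    private double price = 3.50;

    public ProductBean() { }            // beans require a no-argument constructor

    public String getName()  { return name; }
    public double getPrice() { return price; }
    public void setName(String name)   { this.name = name; }
    public void setPrice(double price) { this.price = price; }
}

A template page would then contain only markup plus tags such as <jsp:useBean id="product" class="example.ProductBean"/> and <jsp:getProperty name="product" property="name"/>, so an HTML author can maintain it without touching Java code.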

Computer / Database: Foreign Literature Translation (Chinese and English)

Foreign Scientific Literature

Microsoft's Future "Soul" - Exploring the Secrets of SQL Server 2005

Author: CHEN Bao-lin

A brief history of SQL Server development

Before beginning, let us look at the "brief history" of Microsoft SQL Server.

1988: SQL Server is jointly developed by Microsoft and Sybase, running on the OS/2 platform.

1993-09-14: SQL Server 4.2, a desktop database system with limited functionality, is integrated with Windows and provides an easy-to-use user interface.

1994: Microsoft and Sybase suspend their cooperation on database development.

1995: SQL Server 6.0, code-named "SQL95". Microsoft rewrites most of the core system, providing a low-cost database solution for small business applications.

1996-04-16: SQL Server 6.5. This version brings significant performance improvements and provides a wide variety of useful functions.

1998-11-16: SQL Server 7.0, code-named "Sphinx". The core database engine is completely rewritten, providing a database solution for small and medium business applications and including initial Web support. SQL Server has been widely used since this version.

2000-08-07: The birth of SQL Server 2000, code-named "Shiloh". Microsoft positions the product as an enterprise-class database system comprising three components (DB, OLAP, English Query). Rich front-end tools, improved development tools, and XML support promote the adoption of this version. It contains the following editions:

Enterprise Edition: supports giant terabyte-class databases and thousands of concurrent online users through cluster deployment.

Standard Edition: supports small and medium enterprises.

Personal Edition: supports desktop applications.

Developer Edition: for developers building enterprise applications for enterprises and Windows CE.

Windows CE Edition: can be applied to any Windows CE mobile device.

2003-04-24: SQL Server 2000 64-bit edition, code-named "Liberty", competes with Oracle on Unix/Linux.

2005-11-07: SQL Server 2005, code-named "Yukon", the latest version of Microsoft's SQL Server products. Microsoft describes this product, which took five years of major changes, as a landmark release. From SQL Server 4.2 to 2005: since entering the database market in the early 1990s, Microsoft has behaved like a follower restructuring the enterprise database market in order to lead it, a sword sharpened for ten years through many storms. Microsoft has already extended its enterprise database management perspective into a broader and deeper realm. This paper attempts to explore that history and summarize how Microsoft SQL Server took shape.

In 1987, Sybase developed a version of SQL Server running on Unix systems. In 1988, Microsoft invited Sybase, then building momentum in the database field, to jointly develop SQL Server. Microsoft's intention to enter the database market was obvious, and the database market was bound to be whipped into action. Sure enough, the following ten years were an intense "Warring States" period for the database market. On 1993-04-12, Microsoft released SQL Server version 4.2. Echoing the earlier introduction of Windows NT, this marked Microsoft's formal entry into the enterprise application market, for which the enterprise database is the most important component. Although SQL Server 4.2 was still just a desktop version, it showed considerable potential.
In 1994, Microsoft and Sybase formally suspended their cooperation on database development, a meaningful turning point. From 1995 to 2000, Microsoft released four versions: 6.0, 6.5, 7.0, and 2000. Functionally, SQL Server 2000 is able to provide the following services.

Online services (on-line services): "on-line" refers to data services that users consume in real time.

Online transaction processing, OLTP (On-Line Transaction Processing): OLTP handles order-processing transactions, following the principle that a transaction either completes or is undone in full. It does not include the analytical type of service. This is the most universal and most widespread form of service in any sector.

Online analytical processing, OLAP (On-Line Analytical Processing): OLAP is a kind of multidimensional data presentation (as in data warehouses, data marts, and data cubes), usually used for data mining. Where OLTP uses SQL to operate on and define data, OLAP uses MDX (MultiDimensional Expressions) to access and define data.

The technical structure of SQL Server 2000 is as follows.

Data structures
• Physical data structures.
• Logical structures: how tables, rows, columns, and other data objects are defined.

Data processing
• Storage engine: responsible for how data is persisted.
• Relational engine: responsible for how data is accessed and related.
• SQL Server Agent: responsible for task scheduling and event management.

Data manipulation
• DB APIs: ADO (ActiveX Data Objects), OLE DB (Object Linking and Embedding, Database), DB-Library for C++, ODBC (Open Database Connectivity), ESQL (Embedded SQL).
• URLs (uniform resource locators).
• English Query.
• SQL Server Enterprise Manager.

Tools: Query Analyzer, DTS (Data Transformation Services), backup/restore and replication, metadata services, extended stored procedures, and SQL tracing, which can be used for performance tuning.

From the user's experience, SQL Server 2000 added a number of new characteristics, such as XML support, multiple-instance support, data warehousing and business intelligence enhancements, performance and scalability improvements, operational wizards, and enhancements to queries, DTS, and Transact-SQL.

In license price, SQL Server 2000's price and total cost of ownership (TCO) are only one half to one third of Oracle's or DB2's.

In summary, Microsoft's high-performance, low-cost product philosophy succeeded in the market. SQL Server 2000 can satisfy both OLTP and OLAP application deployments with good performance, and its price is low relative to Oracle, DB2, and other databases. Meanwhile, SQL Server 2000 also includes the Enterprise Edition, Standard Edition, and other editions to meet different levels of user demand. These factors earned SQL Server 2000 a significant share of the SME market and gave Microsoft the opportunity to enter the ranks of mainstream database vendors.

At the same time, we should recognize that SQL Server 2000 remained deficient in the high-end, enterprise-class functions that Oracle later offered in 10g, so the historic mission of closing that gap fell to the new version, code-named "Yukon".

The killer code-named "Yukon"

Fifteen full years have passed since Microsoft released SQL Server 1.0 in 1989. In those 15 years, SQL Server grew from nothing, from small to large, experiencing a legend. It has not only eroded the database market shares of IBM and Oracle, but the next generation of SQL Server has also begun to gradually become the core of the next Windows operating system.
The phrase "seamless computing", constantly repeated by Bill Gates, is the core of Yukon. What kind of world will the next-generation database code-named "Yukon" bring us?

The "soft" pillar of the Internet

In today's network era, searching data, storing data, classifying data, and so on have all become the "soft" pillars that make up the Internet, and the database system is the most critical of these pillars. Without database support, we could never find the information we need on Google or Baidu, nor use the convenience of e-mail, because the online world is composed of large databases.

According to IDC's latest figures, the global database software market seems to be stirring again: total revenue in 2003 reached 13.6 billion US dollars, up from 12.6 billion US dollars in 2002. Oracle, IBM, and Microsoft now control 75% of the market. Last year Oracle held a 39.8% market share, IBM 31.3%, and Microsoft 12.1%.

What is a database? University computer textbooks interpret it this way: a database is a specialized data resource management system within a computer application system. Data takes many forms, such as text, numbers, symbols, graphics, images, and sound. Data is the object that all computer systems process. A familiar approach is document-based: a program is compiled to process documents, the data is organized into data files according to the program's requirements, and the documents are called by the program; data files and program files maintain a fixed relationship. With the rapid development of computer applications, the deficiencies of this document-based approach stand out. For example, data has poor independence and is hard to port, much information is duplicated across different documents and wastes storage space, and updating is inconvenient. Database systems solve these problems. A database system separates data management from specific application programs: all data is stored in a database and organized scientifically, and the database management system serves as an intermediary, interfacing with all kinds of applications or application systems so that they can access the data in the database conveniently.

This description is indeed very detailed, but it may leave you dizzy. In fact, put simply, a database is a group of collated data stored in one or more computer files, and the software that manages the database is called a database management system. A general database system can be divided into two parts, the database (Database) and the database management system (Database Management System, DBMS), and together these constitute the "soft" pillars of the Internet.

Microsoft's SQL Server database software gradually became mainstream through many upgrades, from 6.5 to version 7.0, and SQL Server 2000 proved that the Windows operating system can bear the same high-end data applications and serve as mainstream database management software for business applications. It broke the myth that large databases must be ruled by Unix. What kind of change will the next generation, SQL Server 2005, bring?

The core secrets of Yukon

In planning the next version of SQL Server (code-named "Yukon"), Microsoft gave much consideration to the future development of the database and to SQL Server's programming capabilities.
Microsoft's internal developers had long been aware that the future required a more unified programming model while still providing flexibility for different data models. A unified programming model means that ordinary data access and manipulation tasks can be carried out through various channels. For example, you can choose to use XML, or the .NET Framework, or Transact-SQL (T-SQL) code, and so on.

The result of such planning is a new database programming platform, which is a natural extension in many respects. First, hosting the .NET Framework common language runtime (CLR) extends stored-procedure database programming into the realm of managed code. Second, hosting the .NET Framework from within SQL Server provides powerful object-database functionality. Deep XML support is achieved through the XML data type, which has all the functionality of a relational data type; in addition, server support was added for the XML Query language (XQuery) and the XML Schema Definition (XSD) standards. Finally, SQL Server Yukon includes important enhancements to the T-SQL language.

XML's history in SQL Server Yukon really began with SQL Server 2000. SQL Server 2000 introduced the ability to represent relational data in XML format, to bulk-load and shred XML documents, and to expose database objects to XML-based Web services, among other functions. Yukon, however, provides far more advanced XML query functionality; once perfected, Yukon will bring all the advantages of XML into full play. Why is XML so critical? XML, which began as an alternative to HTML, a wire format, is now seen as a storage format. XML persistence has drawn widespread attention, and XML data types see heavy application on the Internet. XML itself is a data format that crosses any platform. It started out as a file format, and as XML gained wide acceptance in the enterprise, users began using XML to solve thorny business problems such as data integration. Because XML produces the same results on any platform, it has become a mainstream database storage format, and the comprehensive XML support built into Yukon will trigger a new database technology revolution.

These new programming models and the enhanced common language create a series of programmable features that complement and extend the current relational database model. The ultimate aim of this architecture is to build more scalable, more reliable, more robust applications and to enhance development efficiency. Another result of these models is a new application framework (SQL Service Broker in the released product) for delivering asynchronous messages in distributed applications.

Yukon joins a gamble of the century

Having recited this string of technical advantages, you may be curious: why introduce what appears to be such high-end database technology? Perhaps the answer lies ahead. The richest man on Earth has made predictions about the future of computing. He believes that in the coming world, every ordinary computer will have a large enough super hard disk: no longer a mere 80 GB, but quite possibly 80 TB. Although that is only a change from GB to TB, it means a full thousand-fold upgrade in hard disk capacity, and NTFS, the disk format in which existing Windows stores data, is simply unable to cope with data searches across such a large-capacity hard disk.
To give a concrete picture: if your computer had 100 TB of disk space and you were still using Windows XP, defragmenting the disk would likely take two days and two nights, and if you wanted to find a particular document, you would wait for hours. The feeling would be like going back to the days of the 286.

To solve this thorny problem, the next-generation Windows operating system, Longhorn, adopts a programming model diametrically different from previous versions of Windows. Its core is Avalon (a development code name), the new Windows GUI library. Longhorn also brings in the new Indigo (Web services) and WinFS (file system) functions. Together with Avalon, these three new functions form the pillars of Longhorn's new "native" API. Although compatibility with the Win32 API continues, using the new Longhorn functions normally means using these new APIs, which belong to the .NET Framework's class libraries; their support for programming mechanisms and operation is basically the same as .NET's.

The .NET Framework arrives in SQL Server Yukon as a major version upgrade (Major Version Up), with a target date at the end of 2004. The .NET Framework runs inside Yukon, and stored procedures can use the .NET Framework class libraries. Yukon runs version 2.0 of the .NET Framework, which supplements .NET Framework 1.1 with the multimedia-related classes it lacked. WinFS uses the Yukon engine; in other words, the Longhorn file system will use a database engine.

Now you can understand that in the next-generation Windows operating system, the management of all file data will be brought under SQL Server-style database management, and the data query and data integration capabilities of our computers will be greatly enhanced. This, of course, is the critical step for Microsoft in what the richest man keeps calling "seamless computing". Integrating the database software with the operating system is undoubtedly a gamble of the century: if it succeeds, Microsoft will gradually become dominant in databases, but if it fails, it could even jeopardize the normal release schedule of the next-generation Windows.

Microsoft has provided some tools that encrypt data transmitted over the network between SQL Server and client applications. However, Microsoft product manager Kirsten Ward said that the new SQL Server database planned for release next year will encrypt the data stored in it, increasing its defenses against hacker attacks.

Earlier this year, Microsoft postponed the release of "SQL Server 2005" until the first half of next year. The launch of this database software will enhance Microsoft's database computing power and let it compete better with Oracle and IBM. Microsoft will also introduce a unified storage concept, making it more convenient to locate and retrieve data. Oracle has held the leading position in the Windows and Unix database markets; however, by adding more advanced functions to SQL Server this year, Microsoft has also made remarkable progress.

In addition, Microsoft will provide software called the "Best Practices Analyzer Tool". Database administrators can use this software, following guidance compiled by Microsoft, to debug their database software.
This software tool applies to the current version of Microsoft's database software, "SQL Server 2000", and provides database administrators with operational guidance in various areas, for example, how to improve performance and how to perform more effective data backups.

Ward said that the software tool also includes an "Upgrade Advisor" program. This program can scan database applications and warn "SQL Server 2000" users to make the necessary amendments so that their programs are compatible with the upcoming "SQL Server 2005".

(Source: China Computer Education)

Chinese Translation: Microsoft's Future "Soul" - Exploring the Secrets of SQL Server 2005. Author: CHEN Bao-lin. A brief history of SQL Server development: before beginning this article, let us first look at the "brief history" of Microsoft SQL Server's development.

Database Foreign Reference Literature and Translation

Database Management Systems: Enforcing Data Integrity. A database is useful only when its users have full confidence in it.

That is why the server must enforce data integrity rules and business policies.

SQL Server enforces data integrity within the database itself, guaranteeing that complex business policies are followed and that mandatory relationships between data elements are complied with.

Because SQL Server's client/server architecture allows you to use a variety of different front-end applications to manipulate and present the same data from the server, it would be cumbersome to encode all the necessary integrity constraints, security permissions, and business rules into every application.

If all of an enterprise's policies were encoded in the front-end applications, every application would have to change each time a business policy changed.

Even if you attempted to encode the business rules into every client application, the risk of an application misbehaving would still remain.

Most applications cannot be fully trusted; the server must act as the final arbiter, and it must not provide a back door through which a poorly written or malicious program can compromise data integrity.

SQL Server uses advanced data integrity features such as stored procedures, declarative referential integrity (DRI), data types, constraints, rules, defaults, and triggers to enforce data integrity.

Each of these features serves its own purpose in the database; by combining these integrity features, you can make your database flexible, easy to manage, and secure.

Declarative Data Integrity. When you define a table, you specify the columns that make up the primary key.

This is known as the PRIMARY KEY constraint.

SQL Server uses the PRIMARY KEY constraint to guarantee that the uniqueness of values in the designated columns is never violated.

Entity integrity for a table is enforced by ensuring that the table has a primary key.

Sometimes more than one column (or combination of columns) in a table can uniquely identify a row; for example, an employee table might have an employee ID (emp_id) column and a Social Security number (soc_sec_num) column, both of whose values are considered unique.

Such columns are often referred to as alternate keys or candidate keys.

These keys must also be unique.

Although a table can have only one primary key, it can have multiple candidate keys.

SQL Server supports the concept of multiple candidate keys through UNIQUE constraints.
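A minimal sketch, in Java via JDBC, of how the PRIMARY KEY and UNIQUE (candidate key) constraints described above might be declared. The table layout, column sizes, and connection string are placeholders invented for the example, while the PRIMARY KEY and UNIQUE syntax is standard SQL Server DDL.

// ConstraintDemo.java: hypothetical sketch of declaring entity integrity via JDBC
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class ConstraintDemo {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection string; adjust server, database, and credentials.
        String url = "jdbc:sqlserver://localhost;databaseName=hr;user=sa;password=secret";
        try (Connection con = DriverManager.getConnection(url);
             Statement stmt = con.createStatement()) {
            // emp_id is the primary key; soc_sec_num is a candidate (alternate)
            // key enforced through a UNIQUE constraint, as the text describes.
            stmt.executeUpdate(
                "CREATE TABLE employee ("
              + " emp_id INT NOT NULL CONSTRAINT pk_employee PRIMARY KEY,"
              + " soc_sec_num CHAR(11) NOT NULL CONSTRAINT uq_ssn UNIQUE,"
              + " emp_name VARCHAR(60) NOT NULL)");
        }
    }
}

With these declarations in place, the server itself, not the client applications, rejects any insert or update that would duplicate an emp_id or soc_sec_num value.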

University Graduation Project: Warehouse Management System Database - Computer Foreign Reference Literature, Original and Translation

Hebei University of Engineering Graduation Thesis (Design): Photocopy of the Original English Reference Literature and Its Translation. Data Warehouses. A data warehouse provides architectures and tools for business operations to systematically organize, understand, and use data for decision making.

Many organizations have found that data warehouses are a valuable tool in today's competitive, fast-evolving world.

In the last several years, many firms have spent millions of dollars building enterprise-wide data warehouses.

Many people feel that, with competition mounting in every industry, data warehousing has become the latest indispensable marketing weapon: a way to keep customers by learning more about their needs.

"So," you may ask, full of mystery, "what exactly is a data warehouse?" Data warehouses have been defined in many ways, making it difficult to formulate a rigorous definition.

Loosely speaking, a data warehouse is a database that is maintained separately from an organization's operational databases.

Data warehouse systems allow the integration of a variety of application systems, providing a solid platform for the consolidated analysis of historical data and supporting information processing.

According to W. H. Inmon, a leading architect in the construction of data warehouse systems, "A data warehouse is a subject-oriented, integrated, time-variant, and nonvolatile collection of data in support of management's decision making."

This short but comprehensive definition identifies the major features of a data warehouse.

The four keywords, subject-oriented, integrated, time-variant, and nonvolatile, distinguish data warehouses from other data repository systems.

A data warehouse is kept separate from the organization's operational databases and the application data they maintain.

Because of this separation, a data warehouse does not require transaction processing, recovery, or concurrency control mechanisms.

Usually, it requires only two kinds of data operations: the initial loading of data and data access.

In summary, a data warehouse is a semantically consistent data store that serves as a physical implementation of a decision-support data model and stores the information an enterprise needs for strategic decision making.

A data warehouse is also often viewed as an architecture, constructed by integrating data from heterogeneous sources to support structured and ad hoc queries, analytical reporting, and decision making.

"OK," you now ask, "what, then, is data warehousing?" Based on the discussion above, we view data warehousing as the process of constructing and using data warehouses.

The construction of a data warehouse requires data integration, data cleaning, and data consolidation.

Using a data warehouse often requires a collection of decision-support technologies.

This allows "knowledge workers" (e.g., managers, analysts, and executives) to use the warehouse to quickly and conveniently obtain an overall view of the data and to make sound decisions based on the information it contains.

Some authors use the term "data warehousing" to refer only to the process of constructing a data warehouse, and the term "warehouse DBMS" to refer to managing and using the data warehouse.

Warehousing and Logistics Foreign Literature Translation: English Original and Chinese Translation, 2023-2023

Original 1: The Current Trends in Warehouse Management and Logistics

Warehouse management is an essential component of any supply chain and plays a crucial role in the overall efficiency and effectiveness of logistics operations. With the rapid advancement of technology and changing customer demands, the field of warehouse management and logistics has seen several trends emerge in recent years.

One significant trend is the increasing adoption of automation and robotics in warehouse operations. Automated systems such as conveyor belts, robotic pickers, and driverless vehicles have revolutionized the way warehouses function. These technologies not only improve accuracy and speed but also reduce labor costs and increase safety.

Another trend is the implementation of real-time tracking and visibility systems. Through the use of RFID (radio-frequency identification) tags and GPS (global positioning system) technology, warehouse managers can monitor the movement of goods throughout the entire supply chain. This level of visibility enables better inventory management, reduces stockouts, and improves customer satisfaction.

Additionally, there is a growing focus on sustainability in warehouse management and logistics. Many companies are implementing environmentally friendly practices such as energy-efficient lighting, recycling programs, and alternative transportation methods. These initiatives not only contribute to reducing carbon emissions but also result in cost savings and improved brand image.

Furthermore, artificial intelligence (AI) and machine learning have become integral parts of warehouse management. AI-powered systems can analyze large volumes of data to optimize inventory levels, forecast demand accurately, and improve operational efficiency. Machine learning algorithms can also identify patterns and anomalies, enabling proactive maintenance and minimizing downtime.

In conclusion, warehouse management and logistics are continuously evolving fields, driven by technological advancements and changing market demands. The trends discussed in this article highlight the importance of adopting innovative solutions to enhance efficiency, visibility, sustainability, and overall performance in warehouse operations.

Translation 1: Current Trends in Warehouse Management and Logistics

Warehouse management is an essential component of any supply chain and plays a crucial role in the overall efficiency and effectiveness of logistics operations.

Graduation Project: Logistics Foreign Literature Translation (Chinese and English) - Warehousing

Warehousing

This chapter presents a description of a small, fictitious warehouse that distributes office supplies and some office furniture to small retail stores and individual mail-order customers. The facility was purchased from another company, and it is larger than required for the immediate operation. The operation, currently housed in an older facility, will move in a few months. The owners foresee substantial growth in their high-quality product lines, so the extra space will accommodate the growth for the next few years. The description of the warehouse is of the planned operation after moving into the facility.

The purpose of this chapter is to introduce the reader to the operations of warehouses. Basic functions are described, typical equipment types are illustrated, and operations within departments are presented in some detail so that the reader can understand the relationships among products, orders, order lines, storage space, and labor requirements. Storage assignment and retrieval strategies are briefly discussed. Evaluation of the planned operation includes turnover, performance, and cost analyses. Additional information can be found in other chapters of this volume and in the reference material.

Role of the Warehouse in the Supply Chain

Warehouses can serve different roles within the larger organization. For example, a stock room serving a manufacturing facility must provide a fast response time. The major activities would be piece (item) picking, carton picking, and preparation of assembly kits (kitting). A mail-order retailer usually must provide a great variety of products in small quantities at low cost to many customers. A factory warehouse usually handles a limited number of products in large quantities. A large, discount chain warehouse typically "pushes" some products out to its retailers based on marketing campaigns, with other products being "pulled" by the store managers. Shipments are often full and half truckloads. The warehouse described here is a small, chain warehouse that carries a limited product line for distribution to its retailers and independent customers.

The purpose of the warehouse is to provide the utility of time and place to its customers, both retail and individual. Production schedules often result in long runs and large lot sizes; thus, manufacturers usually are not able to meet the delivery dates of small retailers and individuals. Manufacturers of office supplies and furniture are usually not willing to supply products in the quantities requested by small retailers and individual customers. The warehouse bridges the gap and enables both parties, manufacturer and customer, to operate within their own spheres.

Product and Order Descriptions

1. Product Descriptions

The products handled include paper products, pens, staplers, small storage units, other desktop products, low-priced media like CD and DVD blanks, book and electronic titles, and office furniture. High-value electronic products are delivered directly from other distributors and are not handled by the warehouse. One would say that the warehouse handles relatively low-value products from the viewpoint of manufacturing cost.

Products are sold by the warehouse as pieces, cartons, and on pallets. Figure 12.1 shows the relationships among these load types. Individuals usually request pieces; retailers may also request pieces of slow movers, products that are not in high demand. Retailers usually request fast movers, products that are in high demand, in carton quantities. Bulky products like large desktop storage units may be in high enough demand that they are sold by the warehouse in pallets. Furniture units are also sold on pallets for ease of movement in the warehouse and in the delivery trucks.

A table shows the number of products to be stored and the number of storage locations needed; the latter issue is discussed in a later section. The typical dimensions of a piece are 10 × 25 × 3.5 cm, with a typical volume of 0.875 liters. A carton has typical dimensions of 33 × 43 × 30 cm, with a typical volume of 42.6 liters. Thus, a typical carton contains 48.7 pieces. The typical dimensions of a pallet are 80 × 120 × 140 cm, with the last dimension being the height. The pallet base is about 10 cm high, so the typical product volume is 1.25 m3, corresponding to 29.3 cartons. The pallet base allows for pickup by forklift truck from any of the four sides. Table 12.2 summarizes these values. Different products, of course, have different dimensions and relationships. The conversion factors can vary depending on whether the product is sold mainly in piece, carton, or pallet quantities. We will not introduce further complexity here and will use the values given here for determining storage and labor requirements.

2. Order Descriptions

There are two types of orders processed at the warehouse. Large orders are placed by the retailers who belong to the same corporation; these are delivered by less-than-truckload (LTL) carrier. Small orders are placed by individuals, and these are delivered by package courier services like the United States Postal Service (USPS), United Parcel Service (UPS), and Federal Express (FedEx). Large orders contain more products, and the quantity per product is greater than for small orders.

Pallet Pick Operations

Full pallet picking is done primarily in the floor storage area and occasionally in the pallet rack area. These pallets move directly to outbound staging. A forklift truck has the capacity to transport one pallet at a time. Travel within the pallet floor storage area follows the rectilinear distance metric (Francis et al. 1992).

Sorting, Packing, Staging, Shipping Operations

Pieces and cartons that are picked using batch picking must first be sorted by order before further processing. The method of batch picking, described in the following, is designed to facilitate this process without requiring extensive conveyor equipment. In addition, all pieces must be packed into overpack cartons, and these are then consolidated with regular (single-product) cartons by order. Some cartons and overpacks move to outbound staging for package courier services like USPS, UPS, and FedEx. Others move to outbound staging for LTL carrier service. The package courier services load their vehicles manually, and the LTL carriers are loaded by warehouse personnel using either forklift trucks or pallet jacks.

Support Operations, Rewarehousing, Returns Processing

At irregular times, the warehouse staff must perform additional functions that are not part of the normal process. Whenever a new store is being prepared for opening, a large quantity of product, for the full product line, must be picked and staged. There is a separate area set aside for this staging. Occasionally, some products need to be repackaged and/or labeled for retail stores. This value-added processing is performed between picking and packing. Returned merchandise must be inspected, possibly repackaged, and then returned to storage locations. The volume is not significant, and it is handled in the value-added area. (The piece/carton/pallet conversions quoted above are illustrated in the short sketch that follows.)
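A minimal sketch, in Java, of the unit-conversion arithmetic quoted in the product descriptions; the constants are the typical piece, carton, and pallet volumes from the text, and the rounding and output conventions are assumptions made for the example.

// UnitConversions.java: sketch of the piece/carton/pallet arithmetic in the text
public class UnitConversions {
    public static void main(String[] args) {
        double pieceLiters  = 0.875;    // typical piece volume from the text
        double cartonLiters = 42.6;     // typical carton volume from the text
        double palletLiters = 1250.0;   // 1.25 m^3 typical pallet load from the text

        double piecesPerCarton  = cartonLiters / pieceLiters;   // about 48.7
        double cartonsPerPallet = palletLiters / cartonLiters;  // about 29.3

        System.out.printf("Pieces per carton:  %.1f%n", piecesPerCarton);
        System.out.printf("Cartons per pallet: %.1f%n", cartonsPerPallet);
    }
}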
Periodically, product locations must be changed to reflect changing demand. This rewarehousing is performed during slack periods so as not to require additional labor. In addition, the warehouse contains an office for management and sales personnel, toilets for both staff and truck drivers, and a break room with space for vending machines and dining. There is a battery charging room for the electric batteries used by forklifts and pallet jacks, and a small maintenance room.

Storage Department Descriptions and Operations

This section presents details on the individual storage departments and their operations. Here we determine the storage space requirements, and we describe the pick methods and obtain labor requirements.

Bin Shelving

The bin shelving area contains 1000 slow-moving products that are picked as pieces. They are housed in shelving units that are 40 cm deep, 180 cm high, and 100 cm wide, for a cubic volume of 0.72 m3. Using a cubic space utilization factor of 0.6 to allow for clearances and mismatches of carton dimensions with the shelves, each shelving unit can accommodate on average 0.72 × 0.6/0.0426 = 10.14 cartons. If each product requires at most one carton, then we need 1000/10.14 = 98.6, or 99 shelving units. Rounding this to 100 units implies a pick line of 100/2 = 50 m. One way to implement this is to establish two pick aisles, each 25 m long, as shown in Figure 12.9. In the final layout, the system is expanded to a length of 30 m. In addition, space is provided for two future aisles. Although all the products stored here are considered slow movers, with some exceptions for products with small total required inventory measured in cubic volume, the principle of activity-based storage is extended further to identify the faster-moving products (among the slow movers). These are placed in the ergonomically desirable golden zone.

The small number of requests per order for slow-moving products makes it appropriate to use a sort-while-pick (SWP) method for retrieval. An order picker uses a cart with multiple compartments to pick items for several orders on one trip past the shelves. The compartments keep items for different orders from being mixed. Later, when the cart is moved to sorting, consolidation, and packing, there is actually little sorting work to do, but mainly consolidation and packing.

Warehouse Management

The operation of the warehouse requires careful and constant management. The scanning of received products is just one example of the functions performed by the WMS. It is beyond the scope of this chapter to present details of a typical WMS; however, some main features should be mentioned here.

The tracking of flows throughout the warehouse is one of the basic functions of a WMS. This can be done manually, but most facilities today use barcode scanners, many of them integrated with the WMS database. A typical WMS enables the functions listed below. These requirements are not all-inclusive but only indicate the types of functions desired. Further details are in (Sharp, 2001).

The WMS should enable scheduling of personnel, including regular full-time employees and temporary and part-time employees. Tracking of employee productivity is useful for training and workload balancing. Workload scheduling should be linked to forecast information, and the conversion of product volumes should be automatically translated to labor hours by function and employee productivity. Receiving functions should handle out-of-stock conditions, process partial receipts, and quarantine products requiring inspection.
It should generate labels for pallets and cartons with data on SKU (unique product type), description, date received, lot or purchase order number, expiration code(s), and location code(s). It should assign storage locations recognizing the physical characteristics of the product, the physical characteristics of the location, environmental restrictions, and stock rotation. It should also have the ability to send products directly to outbound vehicles (cross-docking). The ability to schedule trucks and assign them to docks is also useful. The WMS should provide confirmation of stow (storage) actions, updating of inventory upon stow, stock reservation capability, and provision for cycle counting. The WMS should support more than one location per SKU and more than one SKU per location. Report generation should include stock activity reports (fast, medium, slow, dead), empty location reports, and anticipated replenishment of forward pick areas.

仓储:本章描述了一个小型虚拟仓库,该仓库向小型零售商店和邮购个人客户分发办公用品和办公家具。
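The sizing arithmetic quoted in this excerpt (pieces per carton, cartons per pallet, and the bin-shelving count) can be reproduced in a few lines. A small script, using only the typical values given above:

```python
import math

# Typical unit dimensions (cm) from the text; volumes in liters.
piece_vol  = 10 * 25 * 3.5 / 1000          # 0.875 L
carton_vol = 33 * 43 * 30 / 1000           # 42.6 L
pallet_vol = 80 * 120 * (140 - 10) / 1000  # 1248 L; the 10 cm pallet base is excluded

print(f"pieces per carton : {carton_vol / piece_vol:.1f}")   # ~48.7
print(f"cartons per pallet: {pallet_vol / carton_vol:.1f}")  # ~29.3

# Bin-shelving sizing: 1000 slow movers, at most one carton each,
# shelving unit 0.4 x 1.8 x 1.0 m with a 0.6 cubic utilization factor.
unit_cube = 0.4 * 1.8 * 1.0 * 0.6                  # usable m^3 per shelving unit
cartons_per_unit = unit_cube / (carton_vol / 1000) # ~10.14
units = math.ceil(1000 / cartons_per_unit)         # 99; rounded up to 100 in the text
print(f"shelving units    : {units}")
print(f"pick line length  : {100 * 1.0 / 2:.0f} m")  # 1 m wide units, both sides of aisle
```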

毕业设计数据库管理外文文献

毕业设计(论文)外文参考资料及译文

1. Database management system
A Database Management System (DBMS) is a set of computer programs that controls the creation, maintenance, and use of a database. It allows organizations to place control of database development in the hands of database administrators (DBAs) and other specialists. A DBMS is a system software package that supports the use of an integrated collection of data records and files known as a database. It allows different user application programs to easily access the same database. DBMSs may use any of a variety of database models, such as the network model or relational model. In large systems, a DBMS allows users and other software to store and retrieve data in a structured way. Instead of having to write computer programs to extract information, users can ask simple questions in a query language. Thus, many DBMS packages provide fourth-generation programming languages (4GLs) and other application development features. A DBMS helps to specify the logical organization for a database and to access and use the information within the database. It provides facilities for controlling data access, enforcing data integrity, managing concurrency, and restoring the database from backups. A DBMS also provides the ability to logically present database information to users.

2. Overview
A DBMS is a set of software programs that controls the organization, storage, management, and retrieval of data in a database. DBMSs are categorized according to their data structures or types. The DBMS accepts requests for data from an application program and instructs the operating system to transfer the appropriate data. The queries and responses must be submitted and received according to a format that conforms to one or more applicable protocols. When a DBMS is used, information systems can be changed much more easily as the organization's information requirements change. New categories of data can be added to the database without disruption to the existing system.

Database servers are computers that hold the actual databases and run only the DBMS and related software. Database servers are usually multiprocessor computers, with generous memory and RAID disk arrays used for stable storage. Hardware database accelerators, connected to one or more servers via a high-speed channel, are also used in large-volume transaction processing environments. DBMSs are found at the heart of most database applications. DBMSs may be built around a custom multitasking kernel with built-in networking support, but modern DBMSs typically rely on a standard operating system to provide these functions.

3. History
Databases have been in use since the earliest days of electronic computing. Unlike modern systems, which can be applied to widely different databases and needs, the vast majority of older systems were tightly linked to custom databases in order to gain speed at the expense of flexibility. Originally, DBMSs were found only in large organizations with the computer hardware needed to support large data sets.

3.1 1960s Navigational DBMS
As computers grew in speed and capability, a number of general-purpose database systems emerged; by the mid-1960s there were a number of such systems in commercial use. Interest in a standard began to grow, and Charles Bachman, author of one such product, Integrated Data Store (IDS), founded the "Database Task Group" within CODASYL, the group responsible for the creation and standardization of COBOL.
In 1971 they delivered their standard, which generally became known as the "Codasyl approach", and soon a number of commercial products based on it were available.

The Codasyl approach was based on the "manual" navigation of a linked data set which was formed into a large network. When the database was first opened, the program was handed back a link to the first record in the database, which also contained pointers to other pieces of data. To find any particular record, the programmer had to step through these pointers one at a time until the required record was returned. Simple queries like "find all the people in India" required the program to walk the entire data set and collect the matching results. There was, essentially, no concept of "find" or "search". This might sound like a serious limitation today, but in an era when the data was most often stored on magnetic tape, such operations were too expensive to contemplate anyway.

IBM also had its own DBMS in 1968, known as IMS. IMS was a development of software written for the Apollo program on the System/360. IMS was generally similar in concept to Codasyl, but used a strict hierarchy for its model of data navigation instead of Codasyl's network model. Both concepts later became known as navigational databases due to the way data was accessed, and Bachman's 1973 Turing Award presentation was The Programmer as Navigator. IMS is classified as a hierarchical database; IDMS, a CODASYL database, and Cincom's TOTAL database are classified as network databases.

3.2 1970s Relational DBMS
Edgar Codd worked at IBM in San Jose, California, in one of their offshoot offices that was primarily involved in the development of hard disk systems. He was unhappy with the navigational model of the Codasyl approach, notably the lack of a "search" facility, which was becoming increasingly useful. In 1970, he wrote a number of papers outlining a new approach to database construction that eventually culminated in the groundbreaking A Relational Model of Data for Large Shared Data Banks.[1]

In this paper, he described a new system for storing and working with large databases. Instead of records being stored in some sort of linked list of free-form records as in Codasyl, Codd's idea was to use a "table" of fixed-length records. A linked-list system would be very inefficient when storing "sparse" databases where some of the data for any one record could be left empty. The relational model solved this by splitting the data into a series of normalized tables, with optional elements being moved out of the main table to where they would take up room only if needed.

For instance, a common use of a database system is to track information about users: their names, login information, various addresses, and phone numbers. In the navigational approach, all of these data would be placed in a single record, and unused items would simply not be placed in the database. In the relational approach, the data would be normalized into a user table, an address table, and a phone number table (for instance). Records would be created in these optional tables only if the address or phone numbers were actually provided.

Linking the information back together is the key to this system. In the relational model, some bit of information was used as a "key", uniquely defining a particular record. When information was being collected about a user, information stored in the optional (or related) tables would be found by searching for this key.
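To make the contrast concrete, pointer navigation versus key-based lookup, here is a minimal sketch using plain in-memory structures as stand-ins (hypothetical records, not any particular DBMS):

```python
from dataclasses import dataclass
from typing import Optional

# Navigational style: the application walks pointers record by record.
@dataclass
class Rec:
    name: str
    country: str
    next: Optional["Rec"] = None  # pointer to the next record

tail = Rec("Asha", "India")
head = Rec("Bob", "USA", next=tail)

matches, r = [], head
while r is not None:              # no "find" operation; scan everything
    if r.country == "India":
        matches.append(r.name)
    r = r.next
print(matches)                    # ['Asha']

# Relational style: normalized tables re-linked through a common key.
users  = {"asha": "Asha Rao"}                  # key -> name
phones = {"asha": ["555-0100", "555-0199"]}    # same key -> optional rows
login = "asha"
print(users[login], phones.get(login, []))     # lookup by key, no scanning
```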
For instance, if the login name of a user is unique, addresses and phone numbers for that user would be recorded with the login name as the key. This "re-linking" of related data back into a single collection is something that traditional computer languages are not designed for.

Just as the navigational approach would require programs to loop in order to collect records, the relational approach would require loops to collect information about any one record. Codd's solution to the necessary looping was a set-oriented language, a suggestion that would later spawn the ubiquitous SQL. Using a branch of mathematics known as tuple calculus, he demonstrated that such a system could support all the operations of normal databases (inserting, updating, etc.) as well as providing a simple system for finding and returning sets of data in a single operation.

Codd's paper was picked up by two people at the University of California, Berkeley: Eugene Wong and Michael Stonebraker. They started a project known as INGRES using funding that had already been allocated for a geographical database project, using student programmers to produce code. Beginning in 1973, INGRES delivered its first test products, which were generally ready for widespread use in 1979. During this time, a number of people had moved "through" the group; perhaps as many as 30 people worked on the project, about five at a time. INGRES was similar to System R in a number of ways, including the use of a "language" for data access, known as QUEL. QUEL was in fact relational, having been based on Codd's own Alpha language, but it has since been corrupted to follow SQL, thus violating much the same concepts of the relational model as SQL itself.

IBM itself did one test implementation of the relational model, PRTV, and a production one, Business System 12, both now discontinued. Honeywell did MRDS for Multics, and now there are two new implementations: Alphora Dataphor and Rel. All other DBMS implementations usually called relational are actually SQL DBMSs. In 1968, the University of Michigan began development of the Micro DBMS relational database management system. It was used to manage very large data sets by the US Department of Labor, the Environmental Protection Agency, and researchers from the University of Alberta, the University of Michigan, and Wayne State University. It ran on mainframe computers using the Michigan Terminal System. The system remained in production until 1996.

3.3 End 1970s SQL DBMS
IBM started working on a prototype system loosely based on Codd's concepts as System R in the early 1970s. The first version was ready in 1974/5, and work then started on multi-table systems in which the data could be split so that all of the data for a record (much of which is often optional) did not have to be stored in a single large "chunk". Subsequent multi-user versions were tested by customers in 1978 and 1979, by which time a standardized query language, SQL, had been added. Codd's ideas were establishing themselves as both workable and superior to Codasyl, pushing IBM to develop a true production version of System R, known as SQL/DS and, later, Database 2 (DB2).

Many of the people involved with INGRES became convinced of the future commercial success of such systems and formed their own companies to commercialize the work, but with an SQL interface. Sybase, Informix, NonStop SQL, and eventually Ingres itself were all being sold as offshoots of the original INGRES product in the 1980s. Even Microsoft SQL Server is actually a re-built version of Sybase, and thus of INGRES.
Only Larry Ellison's Oracle started from a different chain, based on IBM's papers on System R, and beat IBM to market when the first version was released in 1978.

Stonebraker went on to apply the lessons from INGRES to develop a new database, Postgres, which is now known as PostgreSQL. PostgreSQL is often used for global mission-critical applications (the .org and .info domain name registries use it as their primary data store, as do many large companies and financial institutions).

In Sweden, Codd's paper was also read, and Mimer SQL was developed from the mid-70s at Uppsala University. In 1984, this project was consolidated into an independent enterprise. In the early 1980s, Mimer introduced transaction handling for high robustness in applications, an idea that was subsequently implemented in most other DBMSs.

3.4 1980s Object-Oriented Databases
The 1980s, along with a rise in object-oriented programming, saw a growth in how data in various databases were handled. Programmers and designers began to treat the data in their databases as objects. That is to say, if a person's data were in a database, that person's attributes, such as their address, phone number, and age, were now considered to belong to that person instead of being extraneous data. This allows relationships between data to be defined in terms of objects and their attributes, not individual fields.

Another big game changer for databases in the 1980s was the focus on increasing reliability and access speeds. In 1989, two professors from the University of Wisconsin at Madison published an article at an ACM-associated conference outlining their methods for increasing database performance. The idea was to replicate specific important, and often queried, information and store it in a smaller temporary database that linked these key features back to the main database. This meant that a query could search the smaller database much more quickly, rather than searching the entire data set. This eventually led to the practice of indexing, which is used by almost every operating system, from Windows to the system that operates Apple iPod devices.

4. DBMS building blocks
A DBMS includes four main parts: modeling language, data structure, database query language, and transaction mechanisms.

4.1 Components of DBMS
• DBMS Engine accepts logical requests from the various other DBMS subsystems, converts them into physical equivalents, and actually accesses the database and data dictionary as they exist on a storage device.
• Data Definition Subsystem helps the user create and maintain the data dictionary and define the structure of the files in a database.
• Data Manipulation Subsystem helps the user add, change, and delete information in a database and query it for valuable information. Software tools within the data manipulation subsystem are most often the primary interface between the user and the information contained in a database. It allows the user to specify logical information requirements.
• Application Generation Subsystem contains facilities to help users develop transaction-intensive applications. It usually requires that the user perform a detailed series of tasks to process a transaction. It facilitates easy-to-use data entry screens, programming languages, and interfaces.
• Data Administration Subsystem helps users manage the overall database environment by providing facilities for backup and recovery, security management, query optimization, concurrency control, and change management.

4.2 Modeling language
A data modeling language defines the schema of each database hosted in the DBMS, according to the DBMS database model. The four most common types of models are the:
• hierarchical model,
• network model,
• relational model, and
• object model.
Inverted lists and other methods are also used. A given database management system may provide one or more of the four models. The optimal structure depends on the natural organization of the application's data and on the application's requirements, which include transaction rate (speed), reliability, maintainability, scalability, and cost.

The dominant model in use today is the ad hoc one embedded in SQL, despite the objections of purists who believe this model is a corruption of the relational model, since it violates several of its fundamental principles for the sake of practicality and performance. Many DBMSs also support the Open Database Connectivity API, which provides a standard way for programmers to access the DBMS.

Before the database management approach, organizations relied on file processing systems to organize, store, and process data files. End users became aggravated with file processing because data is stored in many different files, each organized in a different way. Each file was specialized to be used with a specific application. Needless to say, file processing was bulky, costly, and inflexible when it came to supplying needed data accurately and promptly. Data redundancy is an issue with the file processing system, because the independent data files produce duplicate data, so when updates were needed, each separate file would need to be updated. Another issue is the lack of data integration: the data is dependent on other data to organize and store it. Lastly, there was no consistency or standardization of the data in a file processing system, which makes maintenance difficult. For all these reasons, the database management approach was produced. Database management systems (DBMS) are designed to use one of five database structures to provide simple access to information stored in databases. The five database structures are the hierarchical, network, relational, multidimensional, and object-oriented models.

The hierarchical structure was used in early mainframe DBMSs. Records' relationships form a treelike model. This structure is simple but inflexible because the relationship is confined to a one-to-many relationship. IBM's IMS system and the RDM Mobile are examples of a hierarchical database system with multiple hierarchies over the same data. RDM Mobile is a newly designed embedded database for a mobile computer system. The hierarchical structure is used primarily today for storing geographic information and file systems.

The network structure consists of more complex relationships. Unlike the hierarchical structure, it can relate to many records and accesses them by following one of several paths. In other words, this structure allows for many-to-many relationships.

The relational structure is the most commonly used today. It is used by mainframe, midrange, and microcomputer systems. It uses two-dimensional rows and columns to store data. The tables of records can be connected by common key values.
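A short sqlite3 sketch of two record tables connected by a common key value; the schema is hypothetical:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE users  (login TEXT PRIMARY KEY, name TEXT);
    CREATE TABLE phones (login TEXT REFERENCES users(login), number TEXT);
    INSERT INTO users  VALUES ('asha', 'Asha Rao');
    INSERT INTO phones VALUES ('asha', '555-0100'), ('asha', '555-0199');
""")

# The common key (login) re-links the normalized rows at query time.
for row in con.execute("""
        SELECT u.name, p.number
        FROM users u JOIN phones p ON p.login = u.login"""):
    print(row)
```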
While working for IBM, E. F. Codd designed this structure in 1970. The model is not easy for the end user to run queries with, because it may require a complex combination of many tables.

The multidimensional structure is similar to the relational model. The dimensions of the cube-like model have data relating to elements in each cell. This structure gives a spreadsheet-like view of data. This structure is easy to maintain because records are stored as fundamental attributes, the same way they are viewed, and the structure is easy to understand. Its high performance has made it the most popular database structure when it comes to enabling online analytical processing (OLAP).

The object-oriented structure has the ability to handle graphics, pictures, voice, and text data types without difficulty, unlike the other database structures. This structure is popular for multimedia Web-based applications. It was designed to work with object-oriented programming languages such as Java.

4.3 Data structure
Data structures (fields, records, files, and objects) are optimized to deal with very large amounts of data stored on a permanent data storage device (which implies relatively slow access compared to volatile main memory).

4.4 Database query language
A database query language and report writer allow users to interactively interrogate the database, analyze its data, and update it according to the users' privileges on the data. It also controls the security of the database. Data security prevents unauthorized users from viewing or updating the database. Using passwords, users are allowed access to the entire database or to subsets of it called subschemas. For example, an employee database can contain all the data about an individual employee, but one group of users may be authorized to view only payroll data, while others are allowed access only to work history and medical data.

If the DBMS provides a way to interactively enter and update the database, as well as interrogate it, this capability allows for managing personal databases. However, it may not leave an audit trail of actions or provide the kinds of controls necessary in a multi-user organization. These controls are only available when a set of application programs is customized for each data entry and updating function.

4.5 Transaction mechanism
A database transaction mechanism ideally guarantees the ACID properties in order to ensure data integrity despite concurrent user accesses (concurrency control) and faults (fault tolerance). It also maintains the integrity of the data in the database. The DBMS can maintain the integrity of the database by not allowing more than one user to update the same record at the same time. The DBMS can help prevent duplicate records via unique index constraints; for example, no two customers with the same customer number (key field) can be entered into the database. See the ACID properties for more information.

5. DBMS topics
5.1 External, Logical and Internal view
A database management system provides the ability for many different users to share data and process resources. But as there can be many different users, there are many different database needs. The question now is: how can a single, unified database meet the differing requirements of so many users?

A DBMS minimizes these problems by providing three views of the database data: an external view (or user view), a logical view (or conceptual view), and a physical (or internal) view.
The user's (external) view of a database represents data in a format that is meaningful to the user and to the software programs that process those data. The logical view tells the user, in user terms, what is in the database. The physical view deals with the actual, physical arrangement and location of data in the direct access storage devices (DASDs). Database specialists use the physical view to make efficient use of storage and processing resources. With the logical view, users can see data differently from how they are stored, without needing to know all the technical details of physical storage. After all, a business user is primarily interested in using the information, not in how it is stored.

One strength of a DBMS is that while there is typically only one conceptual (or logical) and one physical (or internal) view of the data, there can be an endless number of different external views. This feature allows users to see database information in a more business-related way rather than from a technical, processing viewpoint. Thus the logical view refers to the way the user views the data, and the physical view to the way the data are physically stored and processed.

5.2 DBMS features and capabilities
Alternatively, and especially in connection with the relational model of database management, the relation between attributes drawn from a specified set of domains can be seen as being primary. For instance, the database might indicate that a car that was originally "red" might fade to "pink" in time, provided it was of some particular "make" with an inferior paint job. Such higher-arity relationships provide information on all of the underlying domains at the same time, with none of them being privileged above the others.

5.3 DBMS simple definition
A database management system is a system in which related data is stored in an "efficient" and "compact" manner. "Efficient" means that the data stored in the DBMS can be accessed quickly, and "compact" means that the data stored in the DBMS occupies little space in the computer's memory. The phrase "related data" means that the data stored in the DBMS is about some particular topic.

Throughout recent history, specialized databases have existed for scientific, geospatial, imaging, document storage, and similar uses. Functionality drawn from such applications has lately begun appearing in mainstream DBMSs as well. However, the main focus there, at least when aimed at the commercial data processing market, is still on descriptive attributes on repetitive record structures.

Thus, the DBMSs of today roll together frequently needed services and features of attribute management. By externalizing such functionality to the DBMS, applications effectively share code with each other and are relieved of much internal complexity. Features commonly offered by database management systems include:

5.3.1 Query ability
Querying is the process of requesting attribute information from various perspectives and combinations of factors. Example: "How many 2-door cars in Texas are green?" A database query language and report writer allow users to interactively interrogate the database, analyze its data, and update it according to the users' privileges on the data.

5.3.2 Backup and replication
Copies of attributes need to be made regularly in case primary disks or other equipment fails. A periodic copy of attributes may also be created for a distant organization that cannot readily access the original.
DBMSs usually provide utilities to facilitate the process of extracting and disseminating attribute sets. When data is replicated between database servers, so that the information remains consistent throughout the database system and users cannot tell or even know which server in the DBMS they are using, the system is said to exhibit replication transparency.

5.3.3 Rule enforcement
Often one wants to apply rules to attributes so that the attributes are clean and reliable. For example, we may have a rule that says each car can have only one engine associated with it (identified by engine number). If somebody tries to associate a second engine with a given car, we want the DBMS to deny such a request and display an error message. However, with changes in the model specification, such as, in this example, hybrid gas-electric cars, rules may need to change. Ideally, such rules should be able to be added and removed as needed without significant data layout redesign.

5.3.4 Security
Often it is desirable to limit who can see or change which attributes or groups of attributes. This may be managed directly by individual, or by the assignment of individuals and privileges to groups, or (in the most elaborate models) through the assignment of individuals and groups to roles which are then granted entitlements.

5.3.5 Computation
There are common computations requested on attributes, such as counting, summing, averaging, sorting, grouping, cross-referencing, etc. Rather than have each computer application implement these from scratch, applications can rely on the DBMS to supply such calculations.

5.3.6 Change and access logging
Often one wants to know who accessed which attributes, what was changed, and when it was changed. Logging services allow this by keeping a record of access occurrences and changes.

5.3.7 Automated optimization
If there are frequently occurring usage patterns or requests, some DBMSs can adjust themselves to improve the speed of those interactions. In some cases the DBMS will merely provide tools to monitor performance, allowing a human expert to make the necessary adjustments after reviewing the collected statistics.

5.4 Meta-data repository
Metadata is data describing data; it is also known as data about data. For example, a listing that describes what attributes are allowed to be in data sets is called "meta-information".

5.5 Current trends
In 1998, database management was in need of a new style of databases to solve current database management problems. Researchers realized that the old trends of database management were becoming too complex, and there was a need for automated configuration and management. Surajit Chaudhuri, Gerhard Weikum, and Michael Stonebraker were pioneers who dramatically affected thinking about database management systems. They believed that database management needed a more modular approach and that there were many different specification needs for various users. Since this new development process of database management, we currently have endless possibilities. Database management is no longer limited to "monolithic entities". Many solutions have been developed to satisfy the individual needs of users. The development of numerous database options has created flexible solutions in database management.

Today there are several ways database management has affected the technology world as we know it. Organizations' demand for directory services has become an extreme necessity as organizations grow.
Businesses are now able to use directory services that provide prompt searches for their company information. Mobile devices are not only able to store users' contact information but have grown to much bigger capabilities: mobile technology is able to cache large amounts of information that is used on computers and to display it on smaller devices. Web searches have even been affected by database management: search engine queries are able to locate data.
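Two of the features above, rule enforcement (5.3.3) and computation (5.3.5), can be seen in miniature with sqlite3; the one-engine-per-car rule from the text becomes a uniqueness constraint (schema hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE engines (engine_no TEXT, car_id TEXT UNIQUE)")  # one engine per car
con.execute("INSERT INTO engines VALUES ('E-1', 'CAR-7')")

# Rule enforcement: a second engine for CAR-7 is denied by the DBMS itself.
try:
    con.execute("INSERT INTO engines VALUES ('E-2', 'CAR-7')")
except sqlite3.IntegrityError as e:
    print("rejected:", e)

# Computation: counting and grouping are delegated to the DBMS, not the application.
con.execute("CREATE TABLE cars (car_id TEXT, state TEXT, doors INT, color TEXT)")
con.executemany("INSERT INTO cars VALUES (?, ?, ?, ?)",
                [("CAR-7", "TX", 2, "green"), ("CAR-8", "TX", 4, "green")])
n, = con.execute(
    "SELECT COUNT(*) FROM cars WHERE state='TX' AND doors=2 AND color='green'").fetchone()
print("green 2-door cars in Texas:", n)  # 1
```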

库存管理外文文献及翻译

本科毕业论文外文文献及译文
文献、资料题目:Zero Inventory Approach
文献、资料来源:The IUP Journal of Supply Chain Management
文献、资料发表(出版)日期:2012.06
院(部):管理工程学院  专业:工业工程  班级:工业112  姓名:张金丰  学号:2011021527
指导教师:孔海花  翻译日期:2015.06.14

外文文献:Zero Inventory Approach
Managing optimal inventory in the supply chain is critical for an enterprise. The ability to increase inventory turns and the use of best inventory practices will reduce inventory costs across the supply chain. Moving towards zero inventory will result in effective inventory management in the business process. Inventory optimization solutions can be implemented easily using inventory optimization software. With Radio Frequency Identification (RFID) technology, inventory can be updated in real time without product movement, scanning, or human involvement. Companies have to adopt best practices to optimize operational processes and lower their cost structure through inventory strategies.

Introduction
With supply chain planning and the latest software, companies are managing their inventory in the best possible manner, keeping inventory holdings to the minimum without sacrificing customer service needs. The zero inventory concept has been around since the 1980s. It tries to reduce inventory to a minimum and enhances profit margins by reducing the need for warehousing and the expenses related to it.

The concept of a supply chain is to have items flowing from one stage of supply to the next, both within the business and outside, in a seamless fashion. Any stock in the system is caused either by delay between the processes (demand, distribution, transfer, recording, and production) or by variation in the flow. Eliminating or reducing stock can be achieved by linking processes, equalizing the throughput rates of processes, locating processes near each other, and coordinating flows. Recent advanced software has made the zero inventory strategy executable.

"Inventory optimization is an emerging practical approach to balancing investment and service-level goals over a very large assortment of Stock-Keeping Units (SKUs). In contrast to traditional 'one-at-a-time' marginal stock level setting, inventory optimization simultaneously determines all SKU stock levels to fulfill total service and investment constraints or objectives."

Inventory optimization techniques provide a new logic to drive the system with information systems. To effectively manage inventory, businesses must also optimize the costs of buying, holding, producing, moving, and selling inventory.

The objective of inventory optimization is to sustain minimal levels of inventory while providing the maximum possible levels of service. Supply Chain Design and Optimization (SCDO) is an inventory optimization solution which helps companies satisfy customer demands while balancing limitations on supply and the need for operational efficiency. Inventory optimization focuses on modeling uncertainty and variability and minimizing the risks they impose on the supply chain.

Inventory optimization can help resolve total supply chain cost options like:
• In-house manufacturing vs. contract manufacturing;
• Domestic vs. offshore;
• New suppliers' cost vs. current suppliers' cost.

Companies can benefit from inventory optimization, provided they control their supply chain processes and the complexity of the supply chain. In case the supply chain is very complex, besides inventory optimization, network design has to be used to reap the benefits fully.
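The balance the paper describes, total service level against total inventory investment, can be sketched with the standard textbook safety-stock formula; this is an illustration only, not the method of any tool discussed here, and all numbers are hypothetical:

```python
from statistics import NormalDist

def base_stock(mean_lt_demand: float, sd_lt_demand: float, service_level: float) -> float:
    """Stock level covering lead-time demand at the target service level."""
    z = NormalDist().inv_cdf(service_level)  # safety factor for the target
    return mean_lt_demand + z * sd_lt_demand

# Setting all SKU levels together exposes the service/investment trade-off.
skus = {"SKU-A": (100, 30, 4.0),   # mean demand, std dev, unit cost
        "SKU-B": (40, 25, 9.5)}
for sl in (0.90, 0.95, 0.99):
    invest = sum(base_stock(m, s, sl) * cost for m, s, cost in skus.values())
    print(f"service {sl:.0%} -> investment ${invest:,.0f}")
```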
This paper covers the various inventory models that are available and then describes technologies, like Radio Frequency Identification (RFID) and networking, used for the optimization of inventory. The paper also describes the software solutions available for achieving the same. It concludes by giving a few examples where inventory optimization has been successfully implemented.

Inventory Models
Hexagon Model
The hexagon model was developed out of the need to structure day-to-day work, reduce headcount and other inventory costs, and improve customer satisfaction.

In the first phase, operation strategies were established in alignment with internal customers. Later, continuous improvement plans and business continuity plans were added. The five strategies used were: forecasting future consumption, setting financial targets to minimize inventory costs, preparing daily reports to monitor inventory operational performance, studying critical success indicators to track accomplishments, and forming inventory strategic objectives and inventory health and operating strategies. The hexagon model is a combination of two triangular structures (Figure 1). The upper triangle focuses on the soft management of human resources, customer orientation, and supplier relations; the lower focuses on the execution of inventory plans with their success criteria, continuous improvement methodology, and business continuity plans.

The inventory indicators are: total inventory value, availability of spares, days of inventory, cost of inventory, cost saving and cash saving output expenditure, and quality improvement. The hexagon model combines the elements of the people involved in managing inventory with operational excellence (Figure 2). Managing inventory with operational excellence was achieved by reducing the number of employees in the material department, changing the mix of people skills, such as introducing engineering into the department structure, and reducing the cost of ownership of the material department to the operation that it supports. Normally, this is implemented with a reduction in the headcount of the material department, leaving fewer people, with engineering skills, in the department. Operational results include improvement in raw material supply line quality indicators, competitive days of inventory, and improved and stabilized spares availability. The financial results include an increase in cost savings and a reduced cost of inventory. Some inventory functions can be outsourced as required, and the efficiency of the inventory managed can be measured against a specific risk level, changing requirements, or changes in the environment.

Just-In-Time (JIT)
The just-in-time (JIT) inventory system is a concept developed by the Japanese, wherein the suppliers deliver the materials to the factory just in time for their processing, eliminating the need for storage and retrieval.
The rate of output and the rate of supply of inputs are synchronized to manage a zero inventory.

The main benefits of JIT are: significantly reduced setup times in the factory, improved flow of goods from the warehouse to the shelves, more efficient use of employees who possess multiple skills, better consistency of scheduling and of employee work hours, increased emphasis on supplier relationships, and continuous round-the-clock supplies that keep workers productive and businesses focused on turnover.

And though a JIT system might even be a necessity, given the inventory demands of certain business types, its many advantages are realized only when some significant risks, like delays in the movement of goods over long distances, are mitigated.

Vendor-Managed Inventory (VMI)
Vendor-Managed Inventory (VMI) is a planning and management system in which the vendor is responsible for maintaining the customer's inventory levels. VMI is defined as a process or mechanism where the supplier creates the purchase orders based on demand information. VMI is a combination of e-commerce, software, and people. It has resulted in a dramatic reduction of inventory across the supply chain. VMI is categorized in the real world as collaboration, automation, and cost transference.

The main objectives of VMI are better, cheaper, and faster transactions. In order to establish the VMI process, management commitment, data synchronization, setting up agreements, data exchange, ordering, invoice matching, and measurement have to be undertaken.

The benefits of VMI to an organization are a reduction in inventory, besides a reduction in stock-outs and an increase in customer satisfaction. Accurate information, which is required for optimizing the supply chain, is facilitated by the efficient transfer of information. The concept of VMI can be successful only when there is trust between the organization and its suppliers, as all the demand information made available to the suppliers could be revealed to competitors. VMI optimizes inventory in the supply chain and reduces stock-outs by proper planning and centralized forecasting.

Consignment Model
The consignment inventory model is an extension of VMI where the vendor places inventory at the customer's location while retaining ownership of the inventory. The consignment inventory model works best in the case of new and unproven products where there is a high degree of demand uncertainty, highly expensive products, and service parts for critical equipment.
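Before turning to the consignment ownership-transfer variants, the VMI mechanism described above, where the supplier creates the purchase order from shared demand data, can be sketched in a few lines (the ordering rule and numbers are hypothetical):

```python
def vmi_order(on_hand: int, on_order: int, forecast_demand: int, target_cover: float) -> int:
    """Supplier-side ordering rule: order up to the agreed coverage target."""
    target_position = round(forecast_demand * target_cover)
    shortfall = target_position - (on_hand + on_order)
    return max(0, shortfall)

# The vendor sees the customer's position through data exchange (EDI, etc.).
qty = vmi_order(on_hand=120, on_order=40, forecast_demand=200, target_cover=1.5)
print("supplier creates PO for", qty, "units")  # 140
```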
The types of consignment inventory ownership transfer models are: pay as sold during a pre-defined period, ownership changes after a pre-defined period, and order-to-order consignment.

The issues that the VMI and consignment inventory models encounter are the cost of developing the VMI system, invoicing problems, cash flow problems, Electronic Data Interchange (EDI) problems, and obsolete stock.

Enabling Practices
Decision makers have to make prudent decisions on the future course of action of a project relating to the following variables: forecasting and inventory management, inventory management practices, inventory planning, optimal purchase, multichannel inventory, and moving towards zero inventory.

To improve inventory management for better forecasting, the 14 best practices that will most likely benefit business the most are:
• Synchronize promotions;
• Revamp the organizational structure;
• Take a longer view of item planning;
• Enforce vendor compliance;
• Track key inventory metrics;
• Select the right systems;
• Master the art of master scheduling;
• Adhere to exception reporting;
• Identify lost demands;
• Plan by assortment;
• Track inbound receipts;
• Create coverage reports;
• Balance understock/overstock; and
• Optimize SKUs.

This will leverage the retailer's ability to buy larger quantities across all channels while buying only what is required for a specified period, in order to manage risk in a better way. In most multichannel companies, inventory is the largest asset on the balance sheet, which means that their profitability will be determined to a large degree by the way they plan, forecast, and manage inventory (Curt Barry, 2007). They can follow some steps like creating a strategy, integrating planning and forecasting, equipping themselves with the best-laid plans, building strong vendor relationships, and ensuring effective liquidation.

Moving Towards Zero Inventory
At the fore is the development and widespread adoption of nimble, sophisticated software systems such as Manufacturing Resource Planning (MRP II), Enterprise Resource Planning (ERP), and Advanced Planning and Scheduling (APS) systems, as well as dedicated supply chain management software systems. These systems offer manufacturers greater functionality. To implement a "zero stock" system, companies need to have a good information system to handle customer orders, subcontractor orders, product inventory, and all issues related to production. If a company has no IT infrastructure, it will need to build it from scratch. A good information system can help managers get accurate data and make strategic decisions. IT infrastructure is not a cost, but an investment. A company can use RFID, network inventory, and other software tools for inventory optimization.

Radio Frequency Identification (RFID)
RFID is an automatic identification method which relies on storing and remotely retrieving data using devices called RFID tags or transponders. RFID use in enterprise supply chain management increases the efficiency of inventory tracking and management. RFID improves asset utilization by tracking reusable assets and providing visibility; improves quality control by tagging raw material, work-in-progress, and finished goods inventory; and improves production execution and supply chain performance by providing accurate, timely, and detailed information to enterprise resource planning and manufacturing execution systems. The status of inventory can be obtained automatically by using RFID.
There are many benefits of using RFID, such as reduced inventory, reduced time, reduced errors, increased accessibility, and high security.

Network Inventory
A Network Inventory Management System (NIMS) tracks the movement of items across the system and thus can locate malfunctioning equipment or processes and provide the information required to diagnose and correct problem areas. It also determines where capacity is to be added, calculates the impact of market conditions, and assesses the impact of new products and of a new customer. NIMS is very important when the complexity of a supply chain is high. It determines the manufacturing and distribution strategies for the future. It should take into consideration production, location, inventory, and transportation.

The NIMS software, including asset configuration information and change management, is an essential component of a robust network management architecture. NIMS provides information that administrators can use to improve network management performance and helps develop effective network asset control processes.

A network inventory solution manages network resource information for multiple network technologies as well as multiple vendors in one common, accurate database. It is an extremely useful tool for improving several operational processes, such as resource trouble management, service assurance, network planning and provisioning, field maintenance, and spare parts management. In addition, software tools that provide planning, design, and life cycle management for network assets should prominently appear on enterprise radar screens.

Inventory Optimization Software
i2 Inventory Optimization
i2 solutions enable customers to realize top- and bottom-line benefits through the use of superior inventory management practices. i2 Inventory Optimization can help companies monitor, manage, and optimize strategies to decide what to make, what to buy and from whom, and what inventories to carry, where, in what form, and how much, across the supply chain. It enables customers to learn and continuously improve inventory management policies and processes, strategic analysis, and optimization.

A product-oriented company can install i2 Inventory Optimization and develop its supply chain. Through this, the company can reduce inventory levels and overall logistics costs. It can also achieve higher service-level performance, greater customer satisfaction, improved asset utilization, accelerated inventory turns, better product availability, reduced risk, and more precise and comprehensive supply chain visibility.

Oracle Inventory Optimization
Oracle Inventory Optimization considers the demand, supply, constraints, and variability in the extended supply chain to optimize strategic inventory investment decisions. It allows retailers to provide higher service levels to customers at a lower total cost. Oracle Inventory Optimization is part of the Oracle e-Business Suite, an integrated set of applications that are engineered to work together. Oracle Inventory Optimization provides solutions when demand and supply are uncertain. It provides a graphic representation of the plan, and it calculates cost and risk.

MRO Software
MRO Software (now a part of IBM's Tivoli software business) announced a marketing alliance with inventory optimization specialists Xtivity to enhance the service offering of inventory management solutions for MRO Software customers.
MRO offers Xtivity's Inventory Optimizer (XIO) service as an extension of its asset and service management solutions.

Structured Query Language (SQL)
Successful implementation of an inventory optimization solution requires significant effort and can pose certain risks to companies implementing such solutions. Structured Query Language (SQL) can be used on a common ERP platform, and an optimal inventory policy can be determined with it. Along with it, other metrics such as projected inventory levels, projected backlogs, and their confidence bands can also be calculated. The only drawback of this method is that it may not be possible to obtain quick real-time results because of architectural and algorithmic complexity. However, potential scenarios can be analyzed in anticipation, with results stored prior to user requests.

Some Examples
Toyota's Practice in India
Toyota, a quality-conscious company working towards zero inventory, has selected Mitsui and Transport Corporation of India Ltd. (TCI) for its entire logistics solution, encompassing planning, transportation, warehousing, distribution, and MIS and related documentation. Infrastructure is a bottleneck that continues to dog economic growth in India. Transystem renders services like procurement, consolidation, and transportation of original equipment manufacturers' parts through milk-run operations from various suppliers all over India on a JIT basis; transportation of Complete Built-up Units (CBU) from the plant to all dealers in the country and operation of CBU yards; coordination and transportation of Knock Down (KD) parts from the port of entry to the manufacturing plant; and transportation of aftermarket parts to dealers by road and air for Toyota Kirloskar Motors Pvt. Ltd.

Wal-Mart
Wal-Mart is the largest retailer in the United States, with an estimated 20% of the retail grocery and consumables business, as well as the largest toy seller in the US, with an estimated 22% share of the toy market. Wal-Mart also operates in Argentina, Brazil, Canada, Japan, Mexico, Puerto Rico, and the UK. Wal-Mart keeps close track of its inventories by extensively adopting vendor-managed inventory to streamline the flow of goods from manufacturer to store shelf. This results in more turns and therefore lower inventories. Wal-Mart is an early adopter of RFID to monitor the movement of stocks in the different stages of the supply chain. The company keeps tabs on all of its merchandise by outfitting its products with RFID tags. Wal-Mart has indicated recently that it is moving towards the aggressive theoretical zero inventory model.

Chordus Inc.
Chordus Inc. has the largest division of office furniture in the USA. It has advanced logistics and a zero inventory model, with an Internet-based system for its distribution network offering real-time updates and low costs. Chordus determined that only SAP R/3 could accommodate this cutting-edge operational model for its network of 150 dealer-owned franchises in 44 states, supported by five nationwide Distribution Centers (DCs) and a fleet of 65 delivery trucks.

Small-Scale Cycle Industry Around Ludhiana
In and around Ludhiana, there are many small bicycle units which are not organized. They have a sharp focus on financial and raw material management, enjoying a low employee turnover. They have been practicing zero inventory models, which became popular in Japan only much later. Raw material is brought into the unit in the morning, processed during the day, and by evening the finished product is passed on to the next unit.
Thus, the chain continues till the ultimate finished product is manufactured. In this way, bicycles used to be produced in Ludhiana at half the production cost of TI Cycles. Even the large manufacturers of cycles, like Hero Cycles, Atlas Cycles, and Avon Cycles, are reported to maintain only one week's inventory.

Conclusion
Inventory managers, faced with high service-level requirements and many SKUs, appreciate the complexity of inventory optimization, as well as the explicit control that is needed over total investment in warehousing, moving, and logistics. Inventory optimization can provide both an enormous performance improvement for the supply chain and ongoing continuous improvements over competitors. The company achieves the stability needed to have enough stock to meet unpredictable demands without wasteful allocation of capital. Having the right amount of stock in the right place at the right time improves customer satisfaction, market share, and the bottom line. Certainly, the organizations that are able to take inventory optimization to the enterprise level will reap greater benefits. Zero inventory may be wishful thinking, but embracing new technologies and processes to manage one's inventory more efficiently could move one much closer to that ideal.

中文译文:零库存方法
对于一个企业来说,在供应链中优化库存管理是至关重要的。
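The paper's suggestion of using SQL on a common ERP platform can be illustrated with a small sqlite3 example that flags SKUs at or below their reorder point; the schema is hypothetical:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE stock (sku TEXT PRIMARY KEY, on_hand INT, reorder_point INT);
    INSERT INTO stock VALUES ('SKU-A', 12, 20), ('SKU-B', 55, 20), ('SKU-C', 0, 10);
""")

# An optimal policy would set reorder_point per SKU; here we only report against it.
for sku, on_hand, rop in con.execute(
        "SELECT sku, on_hand, reorder_point FROM stock WHERE on_hand <= reorder_point"):
    print(f"{sku}: on hand {on_hand}, reorder point {rop} -> replenish")
```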


河北工程大学毕业论文(设计)英文参考文献原文复印件及译文

数据仓库
数据仓库为商务运作提供结构与工具,以便系统地组织、理解和使用数据进行决策。

大量组织机构已经发现,在当今这个充满竞争、快速发展的世界,数据仓库是一个有价值的工具。

在过去的几年中,许多公司已花费数百万美元,建立企业范围的数据仓库。

许多人感到,随着工业竞争的加剧,数据仓库成了必备的最新营销武器——通过更多地了解客户需求而保住客户的途径。

“那么”,你可能会充满神秘地问,“到底什么是数据仓库?”数据仓库已被多种方式定义,使得很难严格地定义它。

宽松地讲,数据仓库是一个数据库,它与组织机构的操作数据库分别维护。

数据仓库系统允许将各种应用系统集成在一起,为统一的历史数据分析提供坚实的平台,对信息处理提供支持。

按照W. H. Inmon,一位数据仓库系统构造方面的领头建筑师的说法,“数据仓库是一个面向主题的、集成的、时变的、非易失的数据集合,支持管理决策制定”。

这个简短、全面的定义指出了数据仓库的主要特征。

四个关键词,面向主题的、集成的、时变的、非易失的,将数据仓库与其它数据存储系统(如,关系数据库系统、事务处理系统、和文件系统)相区别。

让我们进一步看看这些关键特征。

(1) 面向主题的:数据仓库围绕一些主题,如顾客、供应商、产品和销售组织。

数据仓库关注决策者的数据建模与分析,而不是构造组织机构的日常操作和事务处理。

因此,数据仓库排除对于决策无用的数据,提供特定主题的简明视图。

(2) 集成的:通常,构造数据仓库是将多个异种数据源,如关系数据库、一般文件和联机事务处理记录,集成在一起。

使用数据清理和数据集成技术,确保命名约定、编码结构、属性度量的一致性等。

(3) 时变的:数据存储从历史的角度(例如,过去5-10 年)提供信息。

数据仓库中的关键结构,隐式或显式地包含时间元素。

(4) 非易失的:数据仓库总是物理地分离存放数据;这些数据源于操作环境下的应用数据。

由于这种分离,数据仓库不需要事务处理、恢复和并行控制机制。

通常,它只需要两种数据访问:数据的初始化装入和数据访问。

概言之,数据仓库是一种语义上一致的数据存储,它充当决策支持数据模型的物理实现,并存放企业决策所需信息。

数据仓库也常常被看作一种体系结构,通过将异种数据源中的数据集成在一起而构造,支持结构化和启发式查询、分析报告和决策制定。

“好”,你现在问,“那么,什么是建立数据仓库?”根据上面的讨论,我们把建立数据仓库看作构造和使用数据仓库的过程。

数据仓库的构造需要数据集成、数据清理、和数据统一。

利用数据仓库常常需要一些决策支持技术。

这使得“知识工人”(例如,经理、分析人员和主管)能够使用数据仓库,快捷、方便地得到数据的总体视图,根据数据仓库中的信息做出准确的决策。

有些作者使用术语“建立数据仓库”表示构造数据仓库的过程,而用术语“仓库DBMS”表示管理和使用数据仓库。

我们将不区分二者。

“组织机构如何使用数据仓库中的信息?”许多组织机构正在使用这些信息支持商务决策活动,包括:(1)、增加顾客关注,包括分析顾客购买模式(如,喜爱买什么、购买时间、预算周期、消费习惯);(2)、根据季度、年、地区的营销情况比较,重新配置产品和管理投资,调整生产策略;(3)、分析运作和查找利润源;(4)、管理顾客关系、进行环境调整、管理合股人的资产开销。

从异种数据库集成的角度看,数据仓库也是十分有用的。

许多组织收集了形形色色数据,并由多个异种的、自治的、分布的数据源维护大型数据库。

集成这些数据,并提供简便、有效的访问是非常希望的,并且也是一种挑战。

数据库工业界和研究界都正朝着实现这一目标竭尽全力。

对于异种数据库的集成,传统的数据库做法是:在多个异种数据库上,建立一个包装程序和一个集成程序(或仲裁程序)。

这方面的例子包括IBM 的数据连接程序和Informix的数据刀。

当一个查询提交到客户站点时,首先使用元数据字典对查询进行转换,将它转换成相应异种站点上的查询。

然后,将这些查询映射和发送到局部查询处理器。

由不同站点返回的结果被集成为全局回答。

这种查询驱动的方法需要复杂的信息过滤和集成处理,并且与局部数据源上的处理竞争资源。

这种方法是低效的,并且对于频繁的查询,特别是需要聚集操作的查询,开销很大。

对于异种数据库集成的传统方法,数据仓库提供了一个有趣的替代方案。

数据仓库使用更新驱动的方法,而不是查询驱动的方法。

这种方法将来自多个异种源的信息预先集成,并存储在数据仓库中,供直接查询和分析。

与联机事务处理数据库不同,数据仓库不包含最近的信息。

然而,数据仓库为集成的异种数据库系统带来了高性能,因为数据被拷贝、预处理、集成、注释、汇总,并重新组织到一个语义一致的数据存储中。

在数据仓库中进行的查询处理并不影响在局部源上进行的处理。

此外,数据仓库存储并集成历史信息,支持复杂的多维查询。

这样,建立数据仓库在工业界已非常流行。

1. 操作数据库系统与数据仓库的区别
由于大多数人都熟悉商品关系数据库系统,将数据仓库与之比较,就容易理解什么是数据仓库。

联机操作数据库系统的主要任务是执行联机事务和查询处理。

这种系统称为联机事务处理(OLTP)系统。

它们涵盖了一个组织的大部分日常操作,如购买、库存、制造、银行、工资、注册、记帐等。

另一方面,数据仓库系统在数据分析和决策方面为用户或“知识工人”提供服务。

这种系统可以用不同的格式组织和提供数据,以便满足不同用户的形形色色需求。

这种系统称为联机分析处理(OLAP)系统。

OLTP 和OLAP 的主要区别概述如下。

(1) 用户和系统的面向性:OLTP 是面向顾客的,用于办事员、客户、和信息技术专业人员的事务和查询处理。

OLAP 是面向市场的,用于知识工人(包括经理、主管、和分析人员)的数据分析。

(2) 数据内容:OLTP 系统管理当前数据。

通常,这种数据太琐碎,难以方便地用于决策。

OLAP 系统管理大量历史数据,提供汇总和聚集机制,并在不同的粒度级别上存储和管理信息。

这些特点使得数据容易用于见多识广的决策。

(3) 数据库设计:通常,OLTP 系统采用实体-联系(ER)模型和面向应用的数据库设计。

而OLAP 系统通常采用星形或雪花模型和面向主题的数据库设计。

(4) 视图:OLTP 系统主要关注一个企业或部门内部的当前数据,而不涉及历史数据或不同组织的数据。

相比之下,由于组织的变化,OLAP 系统常常跨越数据库模式的多个版本。

OLAP 系统也处理来自不同组织的信息,由多个数据存储集成的信息。

由于数据量巨大,OLAP 数据也存放在多个存储介质上。

(5)、访问模式:OLTP 系统的访问主要由短的、原子事务组成。

这种系统需要并行控制和恢复机制。

然而,对OLAP系统的访问大部分是只读操作(由于大部分数据仓库存放历史数据,而不是当前数据),尽管许多可能是复杂的查询。

OLTP 和OLAP 的其它区别包括数据库大小、操作的频繁程度、性能度量等。

2. 但是,为什么需要一个分离的数据仓库?
"既然操作数据库存放了大量数据",你注意到,"为什么不直接在这种数据库上进行联机分析处理,而是另外花费时间和资源去构造一个分离的数据仓库?"分离的主要原因是提高两个系统的性能。

操作数据库是为已知的任务和负载设计的,如使用主关键字索引和散列,检索特定的记录,和优化“罐装的”查询。

另一方面,数据仓库的查询通常是复杂的,涉及大量数据在汇总级的计算,可能需要特殊的数据组织、存取方法和基于多维视图的实现方法。

在操作数据库上处理OLAP 查询,可能会大大降低操作任务的性能。

此外,操作数据库支持多事务的并行处理,需要加锁和日志等并行控制和恢复机制,以确保一致性和事务的强健性。

通常,OLAP 查询只需要对数据记录进行只读访问,以进行汇总和聚集。

如果将并行控制和恢复机制用于这OLAP 操作,就会危害并行事务的运行,从而大大降低OLTP 系统的吞吐量。

最后,数据仓库与操作数据库分离是由于这两种系统中数据的结构、内容和用法都不相同。

决策支持需要历史数据,而操作数据库一般不维护历史数据。

在这种情况下,操作数据库中的数据尽管很丰富,但对于决策,常常还是远远不够的。

决策支持需要将来自异种源的数据统一(如,聚集和汇总),产生高质量的、纯净的和集成的数据。

相比之下,操作数据库只维护详细的原始数据(如事务),这些数据在进行分析之前需要统一。

由于两个系统提供很不相同的功能,需要不同类型的数据,因此需要维护分离的数据库。

Data warehousing provides architectures and tools for business executives to systematically organize, understand, and use their data to make strategic decisions. A large number of organizations have found that data warehouse systems are valuable tools in today's competitive, fast-evolving world. In the last several years, many firms have spent millions of dollars in building enterprise-wide data warehouses. Many people feel that with competition mounting in every industry, data warehousing is the latest must-have marketing weapon, a way to keep customers by learning more about their needs.

"So," you may ask, full of intrigue, "what exactly is a data warehouse?"

Data warehouses have been defined in many ways, making it difficult to formulate a rigorous definition. Loosely speaking, a data warehouse refers to a database that is maintained separately from an organization's operational databases. Data warehouse systems allow for the integration of a variety of application systems. They support information processing by providing a solid platform of consolidated, historical data for analysis.

According to W. H. Inmon, a leading architect in the construction of data warehouse systems, "a data warehouse is a subject-oriented, integrated, time-variant, and nonvolatile collection of data in support of management's decision making process." This short but comprehensive definition presents the major features of a data warehouse. The four keywords, subject-oriented, integrated, time-variant, and nonvolatile, distinguish data warehouses from other data repository systems, such as relational database systems, transaction processing systems, and file systems. Let's take a closer look at each of these key features.

(1) Subject-oriented: A data warehouse is organized around major subjects, such as customer, vendor, product, and sales. Rather than concentrating on the day-to-day operations and transaction processing of an organization, a data warehouse focuses on the modeling and analysis of data for decision makers. Hence, data warehouses typically provide a simple and concise view around particular subject issues by excluding data that are not useful in the decision support process.

(2) Integrated: A data warehouse is usually constructed by integrating multiple heterogeneous sources, such as relational databases, flat files, and on-line transaction records. Data cleaning and data integration techniques are applied to ensure consistency in naming conventions, encoding structures, attribute measures, and so on.

(3) Time-variant: Data are stored to provide information from a historical perspective (e.g., the past 5-10 years). Every key structure in the data warehouse contains, either implicitly or explicitly, an element of time.

(4) Nonvolatile: A data warehouse is always a physically separate store of data transformed from the application data found in the operational environment. Due to this separation, a data warehouse does not require transaction processing, recovery, and concurrency control mechanisms. It usually requires only two operations in data accessing: initial loading of data and access of data.

In sum, a data warehouse is a semantically consistent data store that serves as a physical implementation of a decision support data model and stores the information on which an enterprise needs to make strategic decisions.
"OK," you now ask, "what, then, is data warehousing?"

Based on the above, we view data warehousing as the process of constructing and using data warehouses. The construction of a data warehouse requires data integration, data cleaning, and data consolidation. The utilization of a data warehouse often necessitates a collection of decision support technologies. This allows "knowledge workers" (e.g., managers, analysts, and executives) to use the warehouse to quickly and conveniently obtain an overview of the data, and to make sound decisions based on information in the warehouse. Some authors use the term "data warehousing" to refer only to the process of data warehouse construction, while the term warehouse DBMS is used to refer to the management and utilization of data warehouses. We will not make this distinction here.

"How are organizations using the information from data warehouses?" Many organizations are using this information to support business decision-making activities, including:

(1) increasing customer focus, which includes the analysis of customer buying patterns (such as buying preference, buying time, budget cycles, and appetites for spending);

(2) repositioning products and managing product portfolios by comparing the performance of sales by quarter, by year, and by geographic region, in order to fine-tune production strategies;

(3) analyzing operations and looking for sources of profit; and

(4) managing customer relationships, making environmental corrections, and managing the cost of corporate assets.

Data warehousing is also very useful from the point of view of heterogeneous database integration. Many organizations typically collect diverse kinds of data and maintain large databases from multiple, heterogeneous, autonomous, and distributed information sources. Integrating such data and providing easy and efficient access to it is highly desirable, yet challenging. Much effort has been spent in the database industry and research community towards achieving this goal.

The traditional database approach to heterogeneous database integration is to build wrappers and integrators (or mediators) on top of multiple, heterogeneous databases. A variety of data joiner and data blade products belong to this category. When a query is posed to a client site, a metadata dictionary is used to translate the query into queries appropriate for the individual heterogeneous sites involved. These queries are then mapped and sent to local query processors. The results returned from the different sites are integrated into a global answer set. This query-driven approach requires complex information filtering and integration processes, and competes for resources with processing at local sources. It is inefficient and potentially expensive for frequent queries, especially for queries requiring aggregations.

Data warehousing provides an interesting alternative to the traditional approach of heterogeneous database integration described above. Rather than using a query-driven approach, data warehousing employs an update-driven approach in which information from multiple, heterogeneous sources is integrated in advance and stored in a warehouse for direct querying and analysis. Unlike on-line transaction processing databases, data warehouses do not contain the most current information. However, a data warehouse brings high performance to the integrated heterogeneous database system, since data are copied, preprocessed, integrated, annotated, summarized, and restructured into one semantic data store. Furthermore, query processing in data warehouses does not interfere with the processing at local sources. Moreover, data warehouses can store and integrate historical information and support complex multidimensional queries. As a result, data warehousing has become very popular in industry.
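The update-driven idea can be sketched in a few lines of Python. The source tables, schema, and refresh_warehouse routine below are all hypothetical; the point is only that integration and summarization happen in advance, on the warehouse side, so analyst queries never touch the sources:

```python
import sqlite3
from datetime import date

# In-memory stand-ins for two heterogeneous sources (hypothetical schemas).
orders_east = [("widget", 3), ("gadget", 1)]
orders_west = [("widget", 5)]

wh = sqlite3.connect(":memory:")
wh.execute("""CREATE TABLE sales_summary (
    load_date TEXT, product TEXT, total_units INTEGER)""")

def refresh_warehouse():
    # Update-driven: integrate and summarize on a schedule, in advance,
    # rather than translating each analyst query to the individual sources.
    totals = {}
    for product, units in orders_east + orders_west:
        totals[product] = totals.get(product, 0) + units
    wh.executemany(
        "INSERT INTO sales_summary VALUES (?, ?, ?)",
        [(date.today().isoformat(), p, u) for p, u in totals.items()],
    )
    wh.commit()

refresh_warehouse()
# Analysts query the warehouse directly; the sources are not involved.
for row in wh.execute("SELECT product, total_units FROM sales_summary"):
    print(row)
```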
1. Differences between operational database systems and data warehouses

Since most people are familiar with commercial relational database systems, it is easy to understand what a data warehouse is by comparing these two kinds of systems.

The major task of on-line operational database systems is to perform on-line transaction and query processing. These systems are called on-line transaction processing (OLTP) systems. They cover most of the day-to-day operations of an organization, such as purchasing, inventory, manufacturing, banking, payroll, registration, and accounting. Data warehouse systems, on the other hand, serve users or "knowledge workers" in the role of data analysis and decision making. Such systems can organize and present data in various formats in order to accommodate the diverse needs of different users. These systems are known as on-line analytical processing (OLAP) systems.

The major distinguishing features between OLTP and OLAP are summarized as follows.

(1) Users and system orientation: An OLTP system is customer-oriented and is used for transaction and query processing by clerks, clients, and information technology professionals. An OLAP system is market-oriented and is used for data analysis by knowledge workers, including managers, executives, and analysts.

(2) Data contents: An OLTP system manages current data that, typically, are too detailed to be easily used for decision making. An OLAP system manages large amounts of historical data, provides facilities for summarization and aggregation, and stores and manages information at different levels of granularity. These features make the data easier to use for informed decision making.

(3) Database design: An OLTP system usually adopts an entity-relationship (ER) data model and an application-oriented database design. An OLAP system typically adopts either a star or a snowflake model, and a subject-oriented database design.

(4) View: An OLTP system focuses mainly on the current data within an enterprise or department, without referring to historical data or data in different organizations. In contrast, an OLAP system often spans multiple versions of a database schema, due to the evolutionary process of an organization. OLAP systems also deal with information that originates from different organizations, integrating information from many data stores. Because of their huge volume, OLAP data are stored on multiple storage media.

(5) Access patterns: The access patterns of an OLTP system consist mainly of short, atomic transactions. Such a system requires concurrency control and recovery mechanisms. However, accesses to OLAP systems are mostly read-only operations (since most data warehouses store historical rather than up-to-date information), although many could be complex queries.

Other features that distinguish OLTP and OLAP systems include database size, frequency of operations, performance metrics, and so on.
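To make the star-model point above concrete, here is a small Python/sqlite3 sketch of a star schema; the table and column names are invented for illustration. One fact table references two dimension tables, and an OLAP-style query summarizes the facts along a dimension attribute:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# A minimal star schema: one fact table keyed to two dimension tables.
cur.executescript("""
CREATE TABLE dim_time    (time_id INTEGER PRIMARY KEY, quarter TEXT, year INTEGER);
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE fact_sales  (time_id INTEGER REFERENCES dim_time,
                          product_id INTEGER REFERENCES dim_product,
                          units INTEGER, revenue REAL);
""")
cur.executemany("INSERT INTO dim_time VALUES (?, ?, ?)",
                [(1, "Q1", 2014), (2, "Q2", 2014)])
cur.executemany("INSERT INTO dim_product VALUES (?, ?, ?)",
                [(1, "widget", "hardware"), (2, "gadget", "hardware")])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)",
                [(1, 1, 10, 100.0), (1, 2, 4, 200.0), (2, 1, 7, 70.0)])

# An OLAP-style query: aggregate the facts by a dimension attribute.
for row in cur.execute("""
    SELECT t.quarter, SUM(f.revenue)
    FROM fact_sales f JOIN dim_time t ON f.time_id = t.time_id
    GROUP BY t.quarter"""):
    print(row)
```

A snowflake model would differ only in normalizing the dimension tables further (e.g., splitting category out of dim_product into its own table).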
2. But why have a separate data warehouse?

"Since operational databases store huge amounts of data," you observe, "why not perform on-line analytical processing directly on such databases instead of spending additional time and resources to construct a separate data warehouse?"

A major reason for such a separation is to help promote the high performance of both systems. An operational database is designed and tuned for known tasks and workloads, such as indexing and hashing using primary keys, searching for particular records, and optimizing "canned" queries. Data warehouse queries, on the other hand, are often complex. They involve the computation of large groups of data at summarized levels, and may require the use of special data organization, access, and implementation methods based on multidimensional views. Processing OLAP queries in operational databases would substantially degrade the performance of operational tasks.

Moreover, an operational database supports the concurrent processing of several transactions. Concurrency control and recovery mechanisms, such as locking and logging, are required to ensure the consistency and robustness of transactions. An OLAP query often needs read-only access to data records for summarization and aggregation. Concurrency control and recovery mechanisms, if applied to such OLAP operations, may jeopardize the execution of concurrent transactions and thus substantially reduce the throughput of an OLTP system.

Finally, the separation of operational databases from data warehouses is based on the different structures, contents, and uses of the data in these two systems. Decision support requires historical data, whereas operational databases do not typically maintain historical data. In this context, the data in operational databases, though abundant, is usually far from complete for decision making. Decision support requires consolidation (such as aggregation and summarization) of data from heterogeneous sources, resulting in high-quality, cleansed, and integrated data. In contrast, operational databases contain only detailed raw data, such as transactions, which need to be consolidated before analysis. Since the two systems provide quite different functionalities and require different kinds of data, it is necessary to maintain separate databases.
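The following Python sketch, with invented dimensions and figures, hints at why such summarization-heavy work belongs in a separate store: it precomputes totals at every level of a tiny cube, the sort of bulk, read-only aggregation over multidimensional views that would burden an OLTP system if run against live transaction data:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical detailed facts: (product, region, quarter, revenue).
facts = [
    ("widget", "north", "Q1", 100.0),
    ("widget", "south", "Q1", 60.0),
    ("gadget", "north", "Q2", 200.0),
]

DIMS = ("product", "region", "quarter")

# Roll the facts up to every subset of dimensions (a simple data cube),
# the kind of precomputed summary that makes OLAP queries cheap.
cube = defaultdict(float)
for product, region, quarter, revenue in facts:
    values = dict(zip(DIMS, (product, region, quarter)))
    for r in range(len(DIMS) + 1):
        for group in combinations(DIMS, r):
            key = tuple((d, values[d]) for d in group)
            cube[key] += revenue

print(cube[()])                                           # grand total
print(cube[(("product", "widget"),)])                     # one product
print(cube[(("product", "widget"), ("quarter", "Q1"))])   # finer slice
```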
