Evaluation of Grid-Based Relevance Filtering for Multicast Group Assignment

Steven J. Rak
Daniel J. Van Hook
MIT Lincoln Laboratory
244 Wood St.
Lexington, MA 02173

Keywords: Communications Architecture, Communication Network, Local Area Network (LAN), Multicasting, Scalability, Wide Area Network (WAN)

ABSTRACT

This paper examines the performance of a grid-based relevance filtering algorithm. The goal is the reduction of network traffic between simulation entities to that which is relevant to their collective interaction. This implementation of relevance filtering uses multicast groups to allow entities to communicate with only those directly affected by their actions. A grid-based filtering technique is discussed and evaluated, using militarily relevant scenarios, for its potential to reduce network loading and for its consumption of multicast group addresses. An idealized relevance filtering algorithm is presented to establish the benchmark that defines the minimum network traffic necessary to support the simulation. The shape of an entity's area of interest and the grid alignment relative to battlefield activity are both evaluated for their effect on filtering performance. The analysis leads to a recommendation for a grid cell size which minimizes network traffic flow and optimizes use of scarce multicast resources, while recognizing that these results may be somewhat scenario and simulation dependent.

1. INTRODUCTION

As the world of distributed simulation expands and becomes increasingly complex, the demand on a single networked application, or simulation host, increases dramatically, often exceeding its capability. This is the result of interaction with larger numbers of local entities and higher volumes of network traffic from both local and remote entities, which must be sorted through to maintain the current state of its own simulated entities.
The problem is due in part to the broadcast communication architecture currently specified by the Distributed Interactive Simulation (DIS) standards and by other legacy networked simulations. That is, all participating applications broadcast their simulation updates to a common network, which all applications monitor to maintain a consistent simulation state. With the ever increasing scale of simulated engagements and new services like weather and dynamic terrain, many new types of simulation traffic are circulating on the network. In general, these enhancements will also dramatically increase the network traffic load and require additional processing power to conduct a simulation exercise, especially if all state and event information is sent to all participants, as defined by the current broadcast architecture standard. It is important to find ways to minimize the impact on a simulation host as it struggles to maintain a complete and accurate state of the simulation. Fortunately, significant portions of the data transmitted in a DIS exercise are either redundant or irrelevant to a large fraction of the entities.

1.1 Relevance Filtering

The aim of relevance filtering is to reduce the communication and processing requirements of simulation hosts by relaying event and state information only to those applications which require it [1]. It attempts to reduce the load on each application to the minimum level necessary for its entities to participate in the simulation. Typically, simulation interaction is limited to that occurring within the range of an entity's sensors. The notion of sensor range is important since human visual range may be the maximum range of influence for one vehicle, while an entity with a long range sensor may detect information from a greater range, or detect different information about remote simulated entities.
Relevance filtering is thus a useful technique for reducing transmitted data to minimum levels, especially as the scale and scope of simulated engagements continue to expand. A simple method of implementing relevance filtering in simulation traffic is a terrain-based grid cell scheme. The terrain database is arbitrarily divided into equally spaced grid cells, and simulation traffic is filtered by sending and receiving traffic based on the location of the entities in individual grid cells. It is easy to visualize the potential for traffic reduction; events and updates are transmitted only to entities that are members of the grid cell in which the event occurs (or one of its neighbors). Potentially, large areas of simulation interaction will not be required to forward traffic to a large fraction of the participants, saving significant amounts of processing and network bandwidth. However, it is not enough to just label the simulation data destined for select groups of entities. The traffic must be diverted in such a way as to minimize the processing a host must do to reject irrelevant data.

1.2 Multicast Network Communication

The concept of multicast network addressing allows simulation hosts to send data to specific network addresses reserved for certain groups of entities, rather than broadcasting their updates to all entities [2]. In this manner, the simulation hosts can reject traffic not addressed to them more efficiently, at a lower level in the operating system, rather than running a function at the application level to ascertain the relevance of the data. This should certainly be more efficient than simply calculating the distance from the entity or event to each entity simulated by the host. In the terrain-based grid cell approach, the terrain database is divided into regular, uniformly spaced grid cells, each having its own multicast address.
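The position-to-cell-to-address mapping can be sketched in a few lines. This is a minimal illustration, not the paper's software: the cell size, the column count, and the function names are hypothetical, and a real system would fold the resulting index into a reserved multicast address range.

```python
def grid_cell(x, y, cell_size=2000.0):
    """Map a terrain position (meters) to its (column, row) grid cell."""
    return (int(x // cell_size), int(y // cell_size))

def multicast_group(cell, columns=512):
    """Fold a (column, row) cell into a single group index.

    A real implementation would map this integer onto an actual
    reserved multicast address; here it only identifies the group.
    """
    col, row = cell
    return row * columns + col
```

With a 2000 m cell, an entity at (4500, 2500) falls in cell (2, 1), and every host simulating entities near that cell sends and receives on the same group index.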
Entities subscribe to the multicast addresses representing the grid cells within their sensor range. Unfortunately, multicast addressing and transfer capabilities are limited or non-existent in current WANs. Even when such capabilities become widely available, it is anticipated that the number of multicast network addresses, or multicast groups, will be a limited resource, and there will be significant overhead associated with changing group membership. It is, therefore, critical to make use of multicast addresses in the most effective manner. This must also be done with a net reduction in overall network throughput and without substituting multicast routing overhead for simulation traffic.

This paper will characterize the performance of a grid-based approach to relevance filtering. It will also attempt to identify an optimum grid cell size which achieves a significant reduction in network traffic balanced with conservative use of multicast resources. Finally, the overall reduction in network traffic will be compared with results for an idealized relevance filtering algorithm, which calculates the absolute minimum network traffic necessary to support the entities in the simulation.

2. RELEVANCE FILTERING ALGORITHMS

The overall reduction of simulation traffic, both to the network and to the individual simulators, is the driving force for selecting an appropriate relevance filtering algorithm. This is necessary to make the most effective use of available network resources even as engagement scale expands. For instance, relevance filtering based on small regularly spaced grid cells may require numerous multicast addresses and may generate a large number of requests to `join' or `leave' multicast groups as simulators pass through a region. Multicast groups are currently a limited resource, so they must be used conservatively, and join and leave requests have an inherent latency which limits the ultimate request rate.
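The join/leave churn generated as an entity moves can be estimated by diffing successive subscription sets. The cell enumeration below follows the square-extent scheme discussed in the text; the helper names and parameter values are illustrative assumptions, not taken from the fielded software.

```python
def subscribed_cells(x, y, roi, cell_size):
    """Cells intersected by the 2*ROI x 2*ROI square centered on (x, y)."""
    lo_c = int((x - roi) // cell_size)
    hi_c = int((x + roi) // cell_size)
    lo_r = int((y - roi) // cell_size)
    hi_r = int((y + roi) // cell_size)
    return {(c, r) for c in range(lo_c, hi_c + 1)
                   for r in range(lo_r, hi_r + 1)}

def membership_changes(old_cells, new_cells):
    """Return the (joins, leaves) needed to move between subscription sets."""
    return new_cells - old_cells, old_cells - new_cells
```

For a 4000 m ROI on a 2000 m grid, a 2000 m eastward move joins one new column of five cells and leaves another, which is exactly the kind of per-movement request rate the text is concerned with.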
However, relevance filtering with an overly large grid size may not sufficiently reduce network traffic to be effective. Thus, selection of a relevance filtering algorithm is an optimization process. Both the number of available multicast group addresses and the marginal bandwidth reduction must be considered during selection and parameterization of the relevance filtering criteria.

Relevance filtering algorithms attempt to minimize simulation traffic based on an entity's location or area of interest. Since each entity must be aware of events occurring in its immediate surroundings, a radius of interest (ROI) is chosen to correspond to either the maximum viewing range or maximum sensor range for each simulator. This area of interest may be a circular region with radius ROI or, for computational simplicity, a square region of 2ROI x 2ROI, either one centered about the entity. Some manned ground vehicle simulators use a 4000 meter ROI centered about the vehicle. This limit reduces the load on the computer graphics processor as it calculates the out-the-window views for the simulator crew. Likewise, the ModSAF Semi-Automated Forces simulation uses a 4000 meter ROI, simulating the visibility limit of a human crew and the maximum range of onboard sensors.

The filtering algorithms must deliver events from an area which meets or exceeds the minimum area for each entity. This illustrates how unnecessarily large multicast grid cells can increase the network traffic each host receives, as it tries to approximate a circular ROI with square grid cells. Ultimately, the simulation host must determine the relevance of the received simulation updates, relative to the ROI. Events are often screened at the application level with what is referred to as an `R-squared' test.
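Such a screen is normally implemented on squared quantities, so no square root is needed. A minimal sketch of an `R-squared' style test (names are illustrative; the paper does not give code):

```python
def is_relevant(entity_pos, event_pos, roi):
    """True if the event lies strictly inside the entity's radius of
    interest, comparing squared distance against ROI squared."""
    dx = event_pos[0] - entity_pos[0]
    dy = event_pos[1] - entity_pos[1]
    return dx * dx + dy * dy < roi * roi
```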
The distance from the entity to the event or update is calculated and must be less than the ROI for it to be processed further.

2.1 Grid-Based Filtering

Grid-based relevance filtering associates multicast groups with cells defined by a grid system overlaid on the terrain. All units within a specific grid cell are assigned membership to that multicast group. Entities send position updates and event information to the multicast group (grid cell) where they are positioned, and subscribe to receive updates from all grid cells (multicast groups) which are contacted by their region of interest. This method may require large numbers of multicast groups to cover wide areas of terrain, depending on the grid size and layout. To ease the computational burden on the simulation host, Subscription Agents, architectural constructs designed for resource management in large scale simulations, may be employed to keep track of multicast memberships on behalf of the simulation hosts [3]. Grids may be organized on a hexagonal basis, as in some constructive wargames, or on a square basis. They may be allocated with uniform area to make location and membership assignments easy, or, to minimize multicast group usage, they may be defined adaptively, with non-uniform areas. A careful analysis is required to assess the trade-offs between the invested computation and the consumption of network resources for these different approaches.

2.2 Idealized Relevance Filtering

Idealized relevance filtering refers to the absolute minimum required network throughput and minimum state data which must be received and processed by each simulation host in accordance with the current DIS standards and practices. This algorithm restricts data transmission to events which occur within the actual region of interest about an entity, noting that each entity must receive updates affecting its surroundings so that it may accurately participate in the simulation.
This algorithm determines the absolute minimum network traffic load, to illustrate the maximum benefit which can be achieved by developing new relevance filtering algorithms. It is literally an idealized case, since it is unlikely that, in large scale simulations, the computational resources or the global knowledge of entity locations would ever be readily available to achieve this complete filtering. This algorithm is the benchmark by which to measure the performance of other relevance filtering techniques, such as grid-based filtering.

2.3 Constraints

Now that the basics of grid-based relevance filtering are clear, it is easy to see that smaller grid cells, or at least smaller regions of entity concentration, will more closely approximate the absolute minimum area from which entities require simulation traffic. However, there are costs associated with the creation and use of multicast groups. There is a limit to the number of multicast groups a single host computer may join, and there can be a significant latency associated with the join request. In general, host processing power is limited; as hosts join increasing numbers of groups, a larger fraction of that power is spent managing group overhead, resulting in reduced simulation performance. Other network components, such as routers, also have a limit to the number of groups they can support.

Certain classes of entities, such as wide area viewers like plan view displays (PVDs) or wide area surveillance systems, must receive information from large areas and consequently subscribe to large numbers of multicast groups. Also, high speed aircraft and missiles, and certain surveillance radars, referred to as rapidly steerable imaging systems (RSISs), must join new multicast groups quickly to get the simulation updates flowing to them for realistic participation in the simulation. Thus, it is important to minimize the impacts of multicast join overhead and latency by the selection of a conservative multicast group grid size.

3. ALGORITHM EVALUATION

Performance of the relevance filtering algorithms described in the previous section was evaluated by comparing their network throughput with that of a broadcast algorithm. Broadcast network throughput was determined by summing the total traffic that would flow through the network if all PDUs were broadcast to all hosts. This measure, referred to as total host download, was then calculated using relevance filtering with multicasting. Filtered network throughput was determined by totaling the multicast traffic which was addressed to hosts around the network. In this way, traffic reduction due to relevance filtering could be expressed as a ratio of total host downloads, comparing multicast vs. broadcast communication architectures. The grid cell relevance filter was tested over a range of grid sizes to evaluate its sensitivity and its effectiveness in reducing overall network loading. The effect of filtering was also assessed for different definitions of the entity's area of interest, and for different locations of the grid origin relative to the terrain database.

The algorithms were evaluated using STOW-E exercise logfiles, which were developed in support of the Synthetic Theater of War (STOW) effort and recorded at a milestone demonstration. The nominally ninety minute vignettes were designed to demonstrate network traffic for large scale distributed simulations. They were composed with input from military subject matter experts and are considered militarily relevant. The scenarios were run on a variety of simulation platforms, including ModSAF, BBS, CMTC, and manned simulator sites, and logged for later analysis.

Table 3.1 - Summary of site, host, entity, and packet counts in the evaluation scenario.

Site          Site Type            Hosts   Entities/Host   Download* (pkts/sec)
Ft. Rucker    Manned Rotary Wing   8       1               1440
Grafenwoehr   Manned Ground        62      1               11090
BBS           Constructive         9       100+            110
CMTC          Live Range           1       600+            1535

* Download reported is the mean observed for the site under the broadcast network architecture.
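The total-host-download comparison described above can be sketched as follows, assuming a hypothetical trace format in which each packet is recorded with the set of hosts it was multicast to; the evaluation software itself is not described at this level of detail in the paper.

```python
def download_reduction(packets, num_hosts):
    """Filtered/broadcast ratio of total host download.

    packets: list of sets, each set holding the hosts a packet was
    multicast to. Broadcast would deliver every packet to every host,
    so the denominator is len(packets) * num_hosts.
    """
    broadcast_total = len(packets) * num_hosts
    multicast_total = sum(len(receivers) for receivers in packets)
    return multicast_total / broadcast_total
```

A ratio of 0.4, for example, corresponds to the 60% packet-rate reduction at the upper end of the range reported below.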
Four logfiles, which contained the interaction of over fifteen hundred entities at four sites, were used for this evaluation.

Note that this evaluation looked at traffic broadcast or multicast to hosts, not to individual entities. The four sites used for this evaluation have very different entity/host ratios. The manned simulator sites, Rucker and Grafenwoehr, are all single entity hosts; one entity is simulated by each computer host. The CMTC site is an instrumented live range where 500+ vehicles are represented in the DIS world by a single host computer. Lastly, the BBS site is composed of 9 SAF simulations, each representing 100+ entities in the DIS world. Thus, measures of multicast groups per host or network traffic download reduction should account for these different classes of sites.

Figure 3.1 - Normalized packet flow, cumulative for hosts exercise-wide.

In the results discussed below, multicast group subscriptions were calculated using a square area of interest of size 2ROI x 2ROI centered about each entity. The currently fielded software uses this method to assign membership to multicast groups. Relevance filtering performance of square regions of interest is compared with that for round regions of interest below.

3.1 Grid-Based Filtering Results

The grid-based relevance filter determined which grid cells intersected an entity's ROI, designated those cells to be valid multicast groups, and enrolled that entity's host as a subscriber. This algorithm is simple, and computationally efficient when determining which cells are active multicast groups and who subscribes to them. Multicast grid cells were calculated using a square area of interest of size 2ROI x 2ROI centered about each entity.
This was typical of the method used by the SAFs to assign multicast group subscriptions.

The performance of the grid-based filtering algorithm was evaluated over a range of grid cell sizes, both for reduction of network throughput, measured by total host download, and for sensitivity to multicast group usage. Figure 3.1 plots normalized aggregate network throughput vs. time for a single exercise with several grid cell sizes, to illustrate the reduction multicast grid-based filtering achieves compared with the aggregate broadcast traffic. Recall these curves represent the cumulative multicast traffic for all hosts, normalized by the traffic which would have flowed if all PDUs were broadcast to all hosts. The plot indicates a packet rate reduction of 30-60% over the entire network compared to broadcast throughput. This example supports the earlier statement that grid-based relevance filtering will significantly reduce network traffic. The next step is to characterize the gains and parameterize the behavior of the algorithm.

Figure 3.2 - Mean download reduction ratio, cumulative for hosts exercise-wide.

Figure 3.2 further illustrates the gain due to relevance filtering, by plotting the mean host download reduction ratio averaged over 1 second intervals for the duration of the logfile. However, there is no clear operating point, only a trend that smaller grid sizes are better. As multicast groups are a limited resource, further study is necessary. The analysis now shifts to the behavior of the hosts at each site.

Figure 3.3 - Normalized packet flow, cumulative for hosts at individual sites.

Figure 3.3 displays the aggregate network throughput for each site, filtered using various grid sizes and normalized to the traffic each site would have received in broadcast mode. Now the data characterization gets interesting. At the Ft.
Rucker site, a flight of helicopters passing near the main battle shows a markedly different aggregate traffic flow depending on the size of the multicast group grid cells. As this mission flies near many clusters of entities, the grid size plays a big factor in determining how much traffic flows to the site. Naturally, when the grids are large, large amounts of irrelevant traffic flow to the site; for smaller grids, considerably less traffic is required. At Grafenwoehr, the SIMNET manned ground vehicle site, the trend is for smaller grids to gradually reduce traffic per host, as would be expected in a dense ground vehicle environment. CMTC, the live range site which is managed by a single host computer, shows almost no improvement due to decreasing multicast group grid cell size. This is also as expected, since the network traffic and multicast subscriptions are calculated on a per-host basis. BBS, a site of 9 SAF proxy simulators for the BBS aggregate simulation, shows an improvement with decreasing grid size, but it is diminished by the fact that each simulation host has spread its entities across a wide area.

Figure 3.4 - Download reduction ratio, cumulative for hosts at individual sites.

Figure 3.4 plots the mean host download reduction ratio, here for each site separately. The trend of smaller grid sizes transmitting less traffic holds, but the optimum grid size is still not obvious.

Figure 3.5 - Mean multicast group usage per host.

Recall that the number of multicast groups available is potentially limited, as is the number of groups a host may join. Figure 3.5 shows group membership per host, for each of the sites, as a function of multicast grid size.
Here again, the differences in the sites become obvious when comparing the single entity hosts at Rucker and Grafenwoehr with the single host live range at CMTC. Group membership for single entity hosts reflects group membership for a single entity, as calculable from the ROI with simple geometry [4]. Membership for the CMTC host is high, likely too high to be realizable. The BBS site has reasonable numbers of groups per host, and these numbers also indicate something about the density and location of the entities simulated by a single host computer.

Figure 3.6 - Mean download reduction ratio, cumulative for hosts at each site.

Finally, Figure 3.6 plots the critical resource, multicast groups per host, against the achieved download reduction ratio. The "knee" of the curves suggests an operating point beyond which there is little significant traffic flow reduction, even with the addition of large numbers of multicast groups. This makes the selection of an optimum grid size straightforward. First, select a download reduction ratio which maximizes reduction with the fewest multicast groups, and read off the number of groups. Then, using Figure 3.5, map the number of groups to the multicast grid size. A grid size of 2 or 2.5 km achieves the greatest reduction ratio while minimizing the use of multicast groups. This trend is similar for all sites, noting that the entity/host ratio affects the optimal selection of multicast grid size.

3.2 Idealized Filtering Results

To further quantify the potential reduction in network traffic using relevance filtering, an idealized filter was designed. This filter determined whether each event or update occurred within the ROI of a remote entity, and then forwarded that data to the remote host simulating that entity. The data was sent to a host only once, even if multiple entities were affected by it.
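A sketch of the idealized filter's per-event host selection, with the once-per-host rule implemented as a set; the data layout (host mapped to its entity positions) is an assumption for illustration, not the paper's implementation.

```python
def idealized_receivers(event_pos, entities_by_host, roi):
    """Hosts with at least one entity whose ROI contains the event.

    Each host is counted once, no matter how many of its entities
    are affected, matching the idealized filter's accounting.
    """
    ex, ey = event_pos
    receivers = set()
    for host, positions in entities_by_host.items():
        for (x, y) in positions:
            if (x - ex) ** 2 + (y - ey) ** 2 < roi * roi:
                receivers.add(host)  # count the host once
                break
    return receivers
```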
The idealized filter established the baseline: the absolute minimum traffic a host-to-host multicast relevance filter would be required to transmit during the course of an exercise. Note that these results are scenario dependent and are based upon the current DIS standards and guidelines.

Referring back to Figure 3.1, normalized aggregate network throughput is plotted to illustrate the potential reduction idealized relevance filtering could achieve compared with aggregate host-to-host broadcast traffic. The plot indicates approximately a 70% reduction in total network load. This illustrates the absolute minimum traffic which must be transmitted, based on the current simulation parameters, since each host only receives simulation traffic for events which occur within its entities' cumulative regions of interest. Figure 3.3 plots a line representing the idealized filtering case for each site independently. As observed for the varied grid sizes, the behavior of the idealized filter is highly dependent on the entity/host ratio, with the biggest win occurring for fewer entities per host.

3.3 Effect of Grid Alignment

It is a natural observation that the performance gains due to grid-based relevance filtering may have a dependence on the location of areas of high battlefield activity relative to grid cell boundaries. To explore the spatial sensitivity of performance to alignment with the multicast grid, a series of experiments was conducted which calculated the normalized total host download for various grid origins, shifted by some fraction of a multicast grid cell.

Figure 3.7 - Normalized packet flow range for 20% offsets in 5 km grid alignment.

Figure 3.8 - Normalized packet flow range for 20% offsets in 2.5 km grid alignment.
For example, using a 5 km grid, individual tests were run by shifting the origin of the grid relative to the terrain coordinates by 1 km in either the X or the Y dimension. A full series consisted of 25 runs, as the origin was shifted in 20% increments for X and for Y. The same experiment was run using a smaller grid size of 2.5 km, also with 20% increments. The results, shown in Figures 3.7 and 3.8, indicate a spread in the normalized host download of up to 15% for the 5 km grid and about 5% for the smaller 2.5 km grid, as would be expected. These fairly small fluctuations indicate the "error bars" on achievable performance for a given grid size. There is some overlap between the performance ranges of the 5 km and 2.5 km grid cases. Noting that it would be unrealistic to plan an entire exercise to be aligned with the relevance filtering grid, these results indicate that a potentially significant traffic reduction could be achieved if grid alignment were optimized on a local level. That is, if the multicast grid could be dynamically re-sized and re-aligned locally, relative to the areas of highest activity, a significant reduction in total host download would be achieved.

3.4 Comparison of Subscription Area: Circular Area of Interest vs. Square Extent

Relevance filtering performance using circular subscription templates was compared with that for square extent subscription templates.
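One way to subscribe with a circular template is to keep only those cells of the bounding square whose nearest point to the entity lies within the ROI. This is an illustrative sketch under assumed names and parameters, not the evaluated software.

```python
def circle_cells(x, y, roi, cell_size):
    """Grid cells intersected by a circular area of interest."""
    cells = set()
    lo_c = int((x - roi) // cell_size)
    hi_c = int((x + roi) // cell_size)
    lo_r = int((y - roi) // cell_size)
    hi_r = int((y + roi) // cell_size)
    for c in range(lo_c, hi_c + 1):
        for r in range(lo_r, hi_r + 1):
            # Clamp (x, y) into the cell to find its nearest point.
            nx = min(max(x, c * cell_size), (c + 1) * cell_size)
            ny = min(max(y, r * cell_size), (r + 1) * cell_size)
            if (nx - x) ** 2 + (ny - y) ** 2 <= roi * roi:
                cells.add((c, r))
    return cells
```

With a 4000 m ROI on a 2000 m grid, this drops the four corner cells of the 5x5 square extent, subscribing to 21 groups instead of 25; the savings grow as cells shrink relative to the ROI.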
The download reduction results studied and discussed above were all calculated with multicast subscriptions using a square extent of 2ROI x 2ROI, centered about the entities. Smaller grid cells reduced total host download, but the performance did not seem to converge with that of the entity's internally modeled circular area of interest. Total host download was recalculated for the two sites which benefited most from relevance filtering, using circular areas of interest to calculate the multicast group subscriptions. The results are given in Figure 3.9.

Figure 3.9 - Normalized packet flow, cumulative for hosts at individual sites, calculated for entities with circular areas of interest.

Comparing the circular area of interest results with the square extent results for the Grafenwoehr and BBS sites (shown previously in Figure 3.3), it is apparent that the performance of the circular area of interest templates does converge with that of the ideal filtered results. This confirms the original hypothesis that smaller multicast grid cells more closely approximate the entity's circular area of interest. The performance improvement achieved by substituting circular areas of interest for square extents is almost negligible at the larger (10 km) grid sizes and increases to about 5% for the 1 km grid sizes. The benefit from using this approach will be more significant for large scale exercises and for certain classes of wide area viewers and rapidly steerable imaging systems.

3.5 Multicast Group Join Rates

Figure 3.10 - Average multicast group join rate for hosts at individual sites.

Figure 3.11 - Peak multicast group join rate for hosts at individual sites.

To determine the impact of grid size on multicast group overhead, Figures 3.10 and 3.11 plot average and peak multicast group join rates for hosts as a function of grid size.
This helps to determine how much network overhead will be generated by decreasing grid size, as entities move about during the course of a scenario. While it is presently unclear how much network overhead can be tolerated, it is certainly clear that decreasing multicast grid size below 2 km will dramatically increase both the average and peak multicast group join rates. These plots further reinforce the earlier recommendation that grid size be set to 2 to 2.5 km for the most efficient relevance filtering and handling of multicast group overhead.

4. CONCLUSIONS

This brief study has shown that, in general, relevance filtering based on spatial location significantly reduces network traffic and reduces the load on the average host computer. It achieves this by limiting data transmission between hosts to just that which is required to maintain the consistency of the simulation, eliminating the transmission of unnecessary or irrelevant data. Relevance filtering will certainly continue to help maintain host and network performance as the scale of simulated exercises increases. Significant results and observations include:

* Grid-based filtering is a conceptually simple algorithm which requires few computational cycles to determine multicast group membership.

* A grid size of 2 to 2.5 km provides a significant reduction in total host download.

* Decreasing grid size to less than 2 km has only marginal benefit, and is not recommended due to the prohibitively large number of multicast groups joined by each host computer.

* The arbitrary location of concentrated activity relative to grid cell boundaries generates a range of achievable performance for a particular grid size.
This benefit may be exploited by dynamically restructuring the multicast grid local to areas of high activity.

* Calculation of multicast groups using circular areas of interest for subscription templates will more closely approximate the entities' actual areas of interest, resulting in improved relevance filtering.

* Idealized filtering based on entity ROI is a good metric by which to measure the performance of other relevance filtering schemes, although in large scale engagements it is recognized to be far too computationally intensive to be a viable relevance filtering algorithm.

Note that the results reported here are scenario dependent and subject to the concentrations of entities present in these scenarios. Common sense indicates, however, that the trends will likely hold. Parameter settings

η-Scaling of dN_ch/dη at √s_NN = 200 GeV by the PHOBOS Collaboration and the Ornstein-Uhlenbeck Process

a r X i v :n u c l -t h /0209004v 2 20 D e c 20021η-Scaling of dN ch /dηat√s NN =200GeV,we have analyzed themby means of stochastic theory named the Ornstein-Uhlenbeck process with two sources.Moreover,we display that z r =η/s NN =130GeV by PHOBOS Collaboration.2)Those distributions have been explained by a stochastic approach named the Ornstein-Uhlenbeck (OU)process with the evolution parameter t ,the frictional coefficient γand the variance σ2:∂P (y,t )∂y y +1γ∂2s NN /m N )at t =0and P (y,0)=0.5[δ(y +y max )+δ(y −y max )],we obtain the following distribution function for dn/dη(assuming y ≈η)using the probability density P (y,t )1)dn2V 2(t )+exp−(η−ηmax e −γt )2η2 )scaling function with z max =ηmax /ηrms and V 2r (t )=V 2(t )/η2rms.dn2V 2r (t )+exp−(z r −z max e −γt )22Letters2.Semi-phenomenological analyses of data In Fig.1,we show distributions of dn/dη.As is seen in Fig.1,the intercepts of dn/dη|η=0with different centrality cuts are located in the following narrow intervaldns NN=200GeV.Next,we should examine a power-like law in(0.5 N part )−1(dN ch/dη)|η=0( N part being number of participants)which is proposed by WA98Collaborations4)as1dη η=0=A N part α.(5) As seen in Fig.2,it can be stressed that the power-like law holds fairly well∗).Using estimated parameters A andα,we can express c as0.5 N partc=2πV2(t))·exp −(ηmax e−γt)2/2V2(t) andχ2are shown in Table II.The results are shown in Fig.3.To describe the dip structures,thefinite evolution time is necessary in our approach.Letters322.533.544.55(d N c h /d η)|η=0 /(0.5〈N p a r t 〉)〈N part 〉22.533.544.55(d N c h /d η)|η=0 /(0.5〈N p a r t 〉)〈N part 〉Fig.2.Determination of the parameters A and α.The method of linear regression is used.Thecorrelation coefficient (c.c.)is 0.991(200GeV).From data at 130GeV we have A =1.79,α=0.103and c.c.=0.993.Table I.Empirical examination of Eq.(6)(√N part 93±5138±6200±7.5277±8.5344.5±11N (Ex)ch 1230±601870±902750±1403860±1904960±250c 
(Ex): 0.123±0.012, 0.124±0.012, 0.127±0.012, 0.129±0.012, 0.130±0.012
  c (Eq. (6)): 0.122±0.009, 0.124±0.008, 0.127±0.008, 0.130±0.008, 0.128±0.008
  p (= 1 − e^{−2γt}): 0.855±δp, 0.861±δp, 0.864±δp, 0.868±δp, 0.873±δp, 0.876±δp
  V²(t): 3.62±0.26, 3.49±0.23, 3.31±0.20, 3.09±0.17, 2.95±0.15, 2.79±0.13
  N_ch(Th): 780±13, 1270±21, 1930±30, 2821±43, 3951±60, 5050±77
  c*(Th): 0.121±δc_t, 0.123±δc_t, 0.124±δc_t, 0.125±δc_t, 0.127±δc_t, 0.127±δc_t
  χ²/n.d.f.: 1.07/51, 0.91/51, 0.88/51, 1.18/51, 1.06/51, 1.46/51
  η_rms =

3.2 Comparison with other approaches

First we consider the problem of the role of the Jacobian in the dip structure at η ≈ 0. The authors of Refs. 5) and 6) have explained dN_ch/dη by means of the Jacobian between the rapidity variable (y) and the pseudorapidity (η). The following relation is well known:

  dn/dη = (p/E) dn/dy,  (7)

where dn/dy is taken as a single Gaussian distribution. We examine whether dn/dη at √s_NN = 200 GeV can be explained by Eq. (7). As is seen in Fig. 4, for dn/dη in the full phase space (|η| < 5.4) it is difficult to explain the η distribution. On the other hand, if we restrict ourselves to the central region (|η| < 4), i.e., neglecting the data in 4.0 < |η| < 5.4, we have a better description. These results are actually utilized in Refs. 5) and 6). In other words, this fact suggests that we have to consider other approaches to explain the dip structure in the central region as well as the behavior in the fragmentation region. In our case it is the stochastic theory named the OU process with two sources at ±y_max at t = 0.

[Fig. 3. Analyses of dn/dη with centrality cuts using Eq. (2). (See Table II.)]

3.3 z_r scaling in dn/dη distributions

Using the values of η_rms at √s_NN = 200 GeV, we obtain the z_r scaling shown in Fig. 5(a). To compare the z_r scaling at 200 GeV with that at 130 GeV,^1) we show the latter in Fig. 5(b). It is difficult to distinguish them. This coincidence means that there is no change in dn/dz_r as the colliding energy increases, except for the region |z_r| ≳ 2.2.

4. Interpretation of the evolution parameter t with γ

In our present treatment the evolution parameter t and the frictional coefficient γ are dimensionless. When we assign the
meaning of seconds [s] to t, the frictional coefficient γ acquires the dimension of [s⁻¹] (1 s⁻¹ = (1/3)×10⁻²³ fm⁻¹, with c = 1). The magnitude of the interaction region in an Au-Au collision is assumed to be 10 fm (see, for example, Ref. 7)), so t is estimated as

  t ≈ 10 fm/c ≈ 3.3×10⁻²³ s.  (8)

The frictional coefficient and the variance obtained in this way are given in Table III. They are comparable with the values [τ_Y⁻¹ = 0.1−0.08 fm⁻¹] of Ref. 8), which were obtained from the proton spectra at the SPS collider.

[Fig. 4. Analyses of dn/dη by means of a single Gaussian and Eq. (7). (a) Data in the full η range are described by V²(t) = 5.27±0.26 and m/p_t = 1.13±0.10; best χ² = 20.0/51. (b) Data in |η| < 4.0 are used, i.e., 4.0 < |η| < 5.4 are neglected; V²(t) = 7.41±0.85 and m/p_t = 0.82±0.13 are used; best χ² = 4.0/37. Introduction of a renormalization is necessary, due to the Jacobian.]

[Fig. 5. Normalized distribution dn/dz_r with z_r = η/η_rms scaling and parameters estimated using Eq. (3). (a) √s_NN = 200 GeV. (b) √s_NN = 130 GeV: p = 0.854±0.002, V_r²(t) = 0.494±0.010, χ²/n.d.f. = 25.5/321. Dashed lines are magnitudes of error bars. Notice that ⟨z_r²⟩ = z_max²(1 − p) + V_r²(t) = 1.0, due to the sum of the two Gaussian distributions.]

Table III. Values of γ and σ² at √s_NN = 200 GeV, by centrality bin (last value: average):
  γ [fm⁻¹]: 0.096, 0.099, 0.100, 0.101, 0.103, 0.104, 0.101
  σ² [fm⁻¹]: 0.817, 0.800, 0.763, 0.720, 0.696, 0.666, 0.744
  σ²/γ: 8.51, 8.08, 7.63, 7.13, 6.76, 6.40, 7.42

5. Concluding remarks

1) We have analyzed the dn/dη distributions by Eqs. (2) and (3). 2) Comparing √s_NN = 200 GeV and 130 GeV, we have shown that both z_r distributions coincide with each other. If there were no labels (200 GeV and 130 GeV) in Fig. 5, we could not distinguish them. This coincidence means that there is no particular change in dn/dη between √s_NN = 130 GeV and 200 GeV.

References
1) M. Biyajima, M. Ide, T. Mizoguchi and N. Suzuki, Prog. Theor. Phys. 108 (2002) 559 and Addenda (to appear); see also hep-ph/0110305.
2) B. B. Back et al. [PHOBOS Collaboration], Phys. Rev. Lett. 87 (2001), 102303.
3) R. Nouicer et al. [PHOBOS Collaboration], nucl-ex/0208003.
4) M. M. Aggarwal et
al. [WA98 Collaboration], Eur. Phys. J. C 18 (2001) 651.
5) D. Kharzeev and E. Levin, Phys. Lett. B 523 (2001), 79.
6) K. J. Eskola, K. Kajantie, P. V. Ruuskanen and K. Tuominen, Phys. Lett. B 543 (2002), 208.
7) K. Morita, S. Muroya, C. Nonaka and T. Hirano, nucl-th/0205040; to appear in Phys. Rev. C.
8) G. Wolschin, Eur. Phys. J. A 5 (1999), 85.

*) Using y_max = ln(√s_NN/m_N), the ratio of the theoretical intercepts is

  c(200 GeV)/c(130 GeV) = [V(t)^(130)/V(t)^(200)] · exp[−(η_max e^{−γt})²/2V²(t)]^(200) / exp[−(η_max e^{−γt})²/2V²(t)]^(130) ≈ 0.94,

where the suffixes denote the colliding energies.
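To make Eq. (2) concrete, the two-source OU solution can be checked numerically. The sketch below uses illustrative parameter values, not the paper's fitted ones, and verifies two properties discussed in the text: the distribution is normalized to unity, and a dip appears at η = 0 whenever V²(t) < (η_max e^{−γt})².

```python
import math

def dn_deta(eta, eta_max, gamma_t, V2):
    """Two-source Ornstein-Uhlenbeck solution, Eq. (2).

    Two Gaussians of variance V2 = V^2(t), centred at +-eta_max*exp(-gamma*t);
    gamma_t is the product gamma*t, which is all Eq. (2) depends on here.
    """
    shift = eta_max * math.exp(-gamma_t)
    norm = 1.0 / (2.0 * math.sqrt(2.0 * math.pi * V2))
    return norm * (math.exp(-(eta + shift) ** 2 / (2.0 * V2))
                   + math.exp(-(eta - shift) ** 2 / (2.0 * V2)))

# Illustrative values only: eta_max = ln(sqrt(s_NN)/m_N) for 200 GeV, m_N ~ 0.938 GeV
eta_max = math.log(200.0 / 0.938)   # ~ 5.36
gamma_t = 1.0
V2 = 3.0

# Normalization: Riemann sum over a range wide enough to hold both Gaussians
step = 0.01
total = sum(dn_deta(-15.0 + i * step, eta_max, gamma_t, V2) * step
            for i in range(3001))

# Dip at eta = 0: present here because V2 < (eta_max * exp(-gamma_t))**2
shift = eta_max * math.exp(-gamma_t)
has_dip = dn_deta(0.0, eta_max, gamma_t, V2) < dn_deta(shift, eta_max, gamma_t, V2)
```

The same function with η and η_max divided by η_rms reproduces the z_r form of Eq. (3).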

CJAS 2013, Vol. 1, No. 1


Launching editorial

Following the adoption of economic reform and open door policies over the last two decades, China's economy is now significantly integrated with international economies. Concurrently, Chinese accounting/auditing standards have made substantial strides towards convergence with International Financial Reporting Standards as well as International Standards of Auditing. However, China's economy is still in a transitional phase with strong remnants of previous central planning regimes. Consequently, state ownership still has a strong presence in many sectors and enterprises, and the government plays a significant role in business affairs, accounting and finance regulation, and enforcement.

With the strong trend towards economic globalisation and accounting convergence, accounting and finance research in China can and should no longer be undertaken in isolation. At the same time, the transitional nature of China's economy offers numerous opportunities for identifying unique accounting and finance issues and solutions. To promote and support significant scholarship in this environment, we are pleased to launch The China Journal of Accounting Studies (CJAS) as a forum for knowledge exchange between and among Chinese and international academics, research students, policy makers and others interested in accounting and finance research and developments in China and elsewhere.

CJAS is the official international research journal of the Accounting Society of China. The Society was established in 1980 and has been the largest accounting association in China, with over 2400 individual members and over 250 institutional members. Even with strict admission criteria, the numbers of both individual and institutional members continue to increase from year to year.

As an association journal, CJAS will adhere to a principle of openness and inclusiveness. This means that it welcomes high-quality papers in financial accounting, management accounting, auditing, corporate finance, corporate
governance, public sector accounting, social and environmental accounting, accounting education, accounting history, accounting information systems, and related areas. The Journal will embrace a wide range of theoretical paradigms based on economics, sociology, psychology and other related sciences and social sciences, and research methodologies (e.g. analytical, archival, experimental, survey and qualitative case methods). In addition, it will publish original papers on Chinese as well as non-Chinese accounting and finance theories, methods and issues.

In particular, the Journal welcomes submissions that investigate Chinese and international issues in comparative terms, whether comparing China with advanced economies or with other emerging and transitional economies. It expects such comparative studies to offer opportunities for identifying unique accounting and finance issues and solutions in individual countries, and also to provide settings for testing established theories and paradigms as well as developing new ones.

It welcomes submissions in English or Chinese and will evaluate them on their originality, rigor, relevance and quality of exposition.

The Editors

Vol. 1, No. 1, 1, doi: 10.1080/21697221.2013.781768 © 2013 Accounting Society of China

COMMENTARY

Global comparability in financial reporting: What, why, how, and when?

Mary E. Barth*
Graduate School of Business, Stanford University, Stanford, CA 94305, USA

The Conceptual Framework identifies comparability as a qualitative characteristic of useful financial reporting information. This paper explains what comparability is, why comparability is desirable, how comparability is achieved, and when we might achieve it. In particular, comparability is the qualitative characteristic that enables users to identify and understand similarities in, and differences among, items; comparability aids investors, lenders and other creditors in making informed capital allocation decisions; and achieving
comparability depends on firms applying a common set of financial reporting standards and on requirements in the standards, especially measurement requirements. The paper discusses research showing that greater comparability can lower the costs of comparing investment opportunities and improve financial reporting information quality. When comparability might be achieved is uncertain, although much progress has been made recently.

Keywords: Comparability; global financial reporting; International Financial Reporting Standards; Conceptual Framework

The Conceptual Framework of the International Accounting Standards Board (IASB, 2010) specifies comparability as one of the qualitative characteristics of financial reporting information, which enable that information to achieve the objective of financial reporting. That objective is to provide investors, lenders and other creditors with information that helps them in making their capital allocation decisions. Because capital is a scarce resource, comparability is a crucial characteristic of financial reporting information. If investors, lenders and other creditors cannot make informed comparisons of alternative investment opportunities, their capital allocation decisions will be suboptimal. In fact, some believe that enabling investors to compare investment opportunities is the main reason we need financial accounting standards to prescribe the contents of financial reports. Without such standards, firms could portray and provide information about their financial position and performance in any way they choose. Given that financial reporting does not derive from a law of nature, there are innumerable ways firms could do that, and comparability would be lost. Thus, comparability is crucial to high quality financial reporting. This discussion seeks to explain what comparability is, why comparability creates benefits for investors and the firms in which they invest, how comparability can be achieved, and when we might achieve it.

*Email: mbarth@
Paper
accepted by Jason Xiao.

Vol. 1, No. 1, 2–12, doi: 10.1080/21697221.2013.781765 © 2013 Accounting Society of China

What is comparability?

Although the word 'comparability' has a meaning in the English language, financial accounting standard setters have a precise definition in mind. The Conceptual Framework explains that comparability is the qualitative characteristic of financial reporting information that enables users to identify and understand similarities in, and differences among, items. That is, comparability results in like things looking alike and different things looking different. The Conceptual Framework goes on to explain that comparability makes financial reporting information useful because the information can be compared to similar information about other entities or about the same entity at a different time. Comparability does not relate to a single item; comparability requires at least two items that are being compared.

To avoid misunderstanding, it is important to clarify what comparability is not. Comparability is not consistency. Consistency refers to the use of the same accounting methods or principles by a firm for the same items over time. Comparability is a goal of consistency and, thus, consistency helps achieve comparability. In itself, however, consistency does not ensure comparability. Comparability also is not uniformity. This is a source of confusion for some. Comparability results in like things looking alike and different things looking different. Uniformity requires treating all things in the same way. As a result, uniformity can make unlike things look alike, which impairs, not enhances, comparability. For example, consider an accounting rule specifying that all buildings be depreciated on a straight-line basis using a 30-year useful life and assuming a 10% residual value. Thus, the depreciation method for all buildings would be the same. What if some buildings last 20 years and others last 200 years? What if some buildings have a 5% residual value and others
have a 25% residual value? What if some buildings deteriorate more rapidly at first and others deteriorate more rapidly nearer the end of their lives? Unless all buildings have a 30-year useful life, a 10% residual value, and economic benefits that are consumed in a straight-line pattern, using the same depreciation method achieves uniformity, but not comparability. It makes all buildings look alike, when in fact they are different. That is not comparability.

The Conceptual Framework also explains that some degree of comparability can be achieved by faithful representation, which is one of the fundamental qualitative characteristics of financial reporting information. That is, if financial statements faithfully represent an item – e.g., an asset or a liability – then comparability should follow. This is because a faithful representation would reflect the characteristics of the item. In the building example, if the residual value of a particular building is 25%, not 10%, then depreciating the building assuming a 10% residual value would not result in a faithful representation of the building.

Comparability: why?

Why is comparability so crucial to financial reporting? The primary reason – as with all qualitative characteristics of financial reporting information – is to help meet the objective of financial reporting. That objective is to provide financial information about the entity that is useful to existing and potential investors, lenders and other creditors in making decisions about providing resources to the entity. The Conceptual Framework explains that decisions to buy, sell, or hold equity and debt instruments require allocation of capital, and financial reporting is aimed at those who cannot demand the information they need to make those capital allocation decisions. Thus, comparability in financial reporting across entities and over time is crucial to enabling investors, lenders and
other creditors to make more informed capital allocation decisions.

Comparability: how?

Having established the importance of comparability, the next question is how can it be achieved? This, of course, is not a simple task.

Global standards

When many people think about comparability in financial reporting, they think about the increasing use of International Financial Reporting Standards (IFRS). This is, in part, because the stated vision for IFRS is one single set of high quality global standards used on the global capital markets. Many equate use of the same set of standards with achieving comparability. This vision for IFRS is based on the belief that use of global standards will improve the functioning of global capital markets. This should occur by increasing comparability and the quality of information, and by decreasing the costs of preparing financial reports, particularly for multinational firms, and information risk. Information risk is the risk that investors perceive when they know that they do not fully understand the information they are given, which would be the case if they are not 'fluent' in the accounting standards on which a firm's financial statements are based. Decreasing information risk should decrease the cost of capital.

It is difficult to imagine how comparability can be achieved without the use of global financial reporting standards. However, use of global standards is only a necessary step – not a sufficient step – to achieving comparability because, for example, the standards need to be rigorously applied and enforced. Ensuring any set of financial reporting standards achieves its potential ultimately depends on firms applying the standards as written, auditors auditing the resulting financial statements to ensure compliance with the standards, and regulators enforcing the standards.

Requirements in standards

Although a focus over the last decade or so has been on the widespread adoption of IFRS as a means of achieving comparability, even strict adherence to a single set
of standards does not ensure comparability. The requirements in the standards also affect comparability and, thus, should not be overlooked. Let me explain how and why.

The Conceptual Framework considers financial statement elements, e.g., assets, liabilities, income, and expense, item by item. Examples of financial statement elements are accounts receivable, inventory, and long-term debt. The aim of focusing on financial statement elements item by item is to provide investors with comparable information about the entity's assets and claims against those assets. Profit or loss is the change in the assets and claims that does not arise from other assets, liabilities, or transactions with equity holders in their capacity as equity holders. The assumption underlying this focus is that comparability results from portraying financial statement elements in the same way, for example by recognizing the same (sub)set of assets and liabilities and measuring them in the same way.

However, does recognizing the same (sub)set of assets and liabilities achieve comparability? What if different assets and liabilities are important for some firms versus others? For example, what about intellectual property assets of knowledge-based firms; property, plant, and equipment for durable manufacturers; and insurance liabilities for insurers? Do we achieve comparability if some assets – e.g., intangibles – or particular types of claims – e.g., claims with uncertain outcomes – are omitted? Do we achieve comparability if we omit intellectual property assets for all firms? Unrecognized assets and liabilities also have direct consequences for comparability of profit or loss because profit or loss depends on changes in recognized asset and liability amounts. Thus, if an asset is omitted, by construction the change in its recognized amount is zero and, as a result, it has no effect on profit or loss. If we omit intellectual property assets for all firms, do the
financial statements of a knowledge-based firm and a durable manufacturer reflect their similarities and differences in a way that enables investors, lenders and other creditors to make informed capital allocation decisions? I cannot see how they can.

Measurement

Measurement also plays a crucial role in comparability that is often overlooked. Because financial reporting standards focus on financial statement elements item by item, one is lulled into thinking that measuring the same asset in the same way helps achieve comparability. But does it? What if the measure is modified historical cost?¹ Although the method is the same, the resulting amounts likely differ. For example, the same asset purchased at different times will likely have a different measure. More differences can emerge over the life of the asset, e.g., if the asset is impaired or is part of a fair value hedge and, thus, its carrying amount is adjusted for the change in fair value attributable to the risk identified in the fair value hedge. How can modified historical cost achieve comparability?

What if the measure is fair value? IFRS 13 Fair Value Measurement defines fair value as the price that would be obtained to sell an asset or transfer a liability between market participants at the measurement date. Fair value has the potential to achieve comparability because one would expect economic differences and similarities to be reflected in value. Thus, using fair values makes like things look alike and different things look different. However, a concern about using fair value is the potential effect of discretion in estimating the fair values. Although some assets and liabilities have readily determinable market values, others do not, which means that their fair values must be estimated by managers. Whenever estimates are used in financial reporting – which is almost everywhere – there is concern that managers will use their discretion opportunistically to affect the estimates. There is a vast academic literature that finds
evidence of managers' opportunistic exercise of discretion relating to many accounting amounts, regardless of whether they are based on modified historical cost or on fair value.

What if the measure were something else? Perhaps another measure exists that overcomes the undesirable features of both modified historical cost and fair value and possesses desirable features. Unfortunately, standard setters have yet to identify such an alternative measure.

As an example to illustrate the effects on comparability of using cost or fair value to measure all assets, consider three entities: Entity A, Entity B, and Entity C. Entities A, B, and C each owns one share of common stock in Entity Z. The acquisition cost is 20 for Entity A, 40 for Entity B, and 60 for Entity C, and the current fair value of a share of common stock in Entity Z is 45. Are the financial statements of Entities A, B, and C comparable if each measures its investment at cost? The answer is 'no' because the three cost amounts – 20, 40, and 60 – make the asset look different when it is the same. Thus, cost makes like things look different, thereby failing to achieve comparability. Are the financial statements of Entities A, B, and C comparable if each measures its investment at fair value? The answer is 'yes' because all three entities would measure the asset at 45, which means that the same asset held by different entities looks the same. That achieves comparability. However, this conclusion is only clear because the fair value was specified in this example. To the extent that Entities A, B, and C need to estimate the fair value, they might estimate different amounts. In such a case, whether cost or fair value provides more comparability depends on whether the range of fair value estimates is smaller than 40, i.e., 60 minus 20. That is, it depends on whether fair value results in making the same asset held by the three entities look more alike than
cost. Some might argue that comparability is best achieved by reporting both amounts, for example if each entity measures the investment at cost and discloses the fair value. However, the Conceptual Framework is clear that disclosure is not a substitute for recognition, and the limited academic research that exists on recognition versus disclosure tends to support that view.

What about measuring different assets of the entity in different ways? Can this achieve comparability? For example, presently we measure many financial assets at fair value and property, plant, and equipment at modified historical cost. We impair accounts receivable for incurred credit losses; property, plant, and equipment to recoverable amount; and inventory to lower of cost or fair value less costs to sell. We measure most long-term debt at amortized cost and derivative liabilities at fair value. There are many measurement methods used in financial reporting, and most often the different measures apply to different assets. The question is whether this approach can achieve comparability.

As an example to illustrate this issue, consider the assets of Entity A and Entity B.

                                      A       B
  Cash                              500     500
  Accounts receivable              1000    1000
  Property, plant, and equipment   1500    1500
  Total assets                     3000    3000

The reported assets of these two entities – 3000 – make the assets look the same, and it appears that each entity's accounts receivable represents one-third of its assets and property, plant, and equipment represents one-half of its assets. If this reporting achieves comparability, then the two entities should have the same assets and these proportions should reflect the economics of the assets each entity holds.

What if Entity A measures accounts receivable at fair value and property, plant, and equipment at modified historical cost, with all amounts in US dollars? Assume Entity A's property, plant, and equipment was purchased at various times over the last ten years. What if Entity B measures all assets at fair value and cash is stated in US
dollars, accounts receivable is stated in euros, and property, plant, and equipment is stated in Swiss francs? Are these two entities comparable? Do the proportions of total assets each reports for the three assets reflect the economics of the assets? The answer is 'no'. How does a financial statement user compare Entity A's property, plant, and equipment with Entity B's? In addition, what do the 3000 in total assets for Entities A and B represent? Are these amounts comparable to each other? Are they comparable to anything? Each 3000 is the sum of three numbers derived on different bases. Like the sum of apples and oranges, its meaning is unclear. Many might react to this example by saying that 'we would never account for Entity B's assets using different currencies and that would be true'. However, the distinction between using different currencies for Entity B's assets and using different measurements for different assets for Entity A is not clear. Why do we recoil at one and accept the other without question? One likely reason is that we are not accustomed to one but are accustomed to the other. Is there any other reason?

Can we achieve comparability if we measure the same asset in different ways – either for the same entity or different entities? For example, presently we treat computers as inventory for some entities (e.g., Apple) and as equipment for others (e.g., General Electric). We treat warranty obligations relating to sales of goods by retailers differently from insurance contracts issued by insurance companies even though they are both insurance contracts. We treat real estate as investment property for some entities (and measure it at fair value) and as property, plant, and equipment for others (and measure it at amortized cost). We treat the gain or loss on an item designated as hedged in a fair value hedge differently from the same gain or loss on an item that is not so designated (i.e., in a designated
fair value hedge, we adjust the carrying amount of the hedged item for its change in value attributable to the hedged risk). In addition, we permit optional asset revaluation, application of fair value, and hedge accounting itself. All of these differences result in differences in amounts in the financial statements for what seem to be the same assets and liabilities.

Consider an example. Assume Entity A and Entity B each buys a piece of construction equipment for 200. Entity A classifies the equipment as inventory and Entity B classifies it as property, plant, and equipment. Six months later, both entities still own the equipment; Entity A holds it in inventory and B has depreciated it because it is available for use, but Entity B has not used it. Entities A and B both dispose of the equipment for a gain. In its income statement, Entity A displays revenue and expense, which net to the amount of the gain, and Entity B displays the gain net. The asset – a piece of construction equipment – is the same for both entities. Yet the asset is measured at different amounts and the gain on disposal of the asset is presented differently. The question is whether the financial statements of Entities A and B are comparable.

Consider another example, which is often used to illustrate this issue. Assume that Bank A and Bank B each buys US Treasury securities at a cost of US$1 million. At the reporting date, the fair value of the securities is US$1.2 million. Bank A classifies the securities as trading (or fair value through profit or loss) and recognizes US$1.2 million in assets and a gain of US$0.2 million. Bank B classifies the securities as held to maturity (or amortized cost) and recognizes US$1.0 million in assets and no gain or loss. In both cases, the bank owns the same asset – US Treasury securities – purchased for the same amount – US$1 million – and now worth the same amount – US$1.2 million. Yet the financial statements of Bank A and Bank B are quite different. Does this financial reporting make like things
look alike? That is, are the financial statements of Entities A and B comparable? The answer is 'no'.

Does 'use' of an asset affect its economics?

A question relating to comparability to which we do not have a good answer is whether two assets that seem the same (e.g., computers) are economically the same if they are used differently (e.g., as inventory or property, plant, and equipment). If the use of an asset affects its economics, we need to identify how the economics are affected so that we can determine whether and how the differences in economics should be reflected when accounting for the asset. Accountants have been treating seemingly similar assets differently for a long time – the inventory versus property, plant, and equipment example is not new. However, we have never articulated why. Without knowing why different uses of an asset affect the economics of the asset that should be reflected in the accounting, it is not possible to determine when and how to reflect such differences. In addition, we need to know whether the notion of different uses of assets applies to all assets. In particular, does it apply to financial assets and liabilities? IFRS 13 concludes 'no' and explains why. But more thought needs to be devoted to this notion as it applies to non-financial assets.

Recently, the notion that the use of assets should affect the accounting for the assets has been characterized as reflecting an entity's 'business model'. Thus, the question can be rephrased as whether different 'use' of an asset depends on the entity's 'business model'. Unfortunately, the Conceptual Framework has no concepts about the role of the business model in financial reporting. In addition, there is no definition of a business model and, thus, it is unclear what the term means. Some question whether an entity's business model differs in any substantive way from management intent
(Leisenring, Linsmeier, Schipper, & Trott, 2012). Is the business model something that management is doing, plans to do, or only hopes to do? Is an entity's business model verifiable? Is it auditable? The answers to these questions are not obvious, which only adds to the lack of clarity about why and how a business model or intent should affect financial reporting.

What does research say?

Academic research provides evidence that global financial reporting with greater comparability can be beneficial to investors, by lowering the costs of comparing cross-border investment opportunities and, for some countries and firms, by improving the quality of their financial reporting information. Research also shows that comparability can be beneficial to firms by increasing cross-border investment and by lowering the cost of capital, presumably from reducing information risk and, for some, by increasing financial reporting quality. However, research also provides evidence that these potential benefits are tempered by cross-country differences in implementation, incentives, and enforcement.

One example of a study in this literature is 'Market Reaction to the Adoption of IFRS in Europe' (Armstrong, Barth, Jagolinzer, & Riedl, 2010). The questions motivating this study are: (1) did investors perceive net benefits to the adoption of IFRS in Europe? and (2) if there were net benefits, are the net benefits associated with increased comparability or increased quality of financial reporting information? As with any single research study, this study cannot directly answer these questions. However, these motivating questions lead to two research questions that the study can answer. The first is: did the European stock market react positively (negatively) to regulatory events that increased (decreased) the likelihood of IFRS adoption in Europe? An affirmative answer to this question indicates that investors perceived net benefits to IFRS adoption in Europe. The second is: were there differences across firms depending on their
pre-adoption information environment? Identifying differences across firms in the market reaction to the regulatory events provides insights into what firm characteristics are associated with the perceived benefits of IFRS adoption.

The study focuses on 16 regulatory events, beginning with the 2002 European Parliament resolution requiring all listed firms in the European Union to apply IFRS by 2005 and ending with the 2005 European Commission endorsement of the revised fair value option.

CASS Command Reference


CASS命令新建图形文件...new打开已有图形...open图形存盘...qsave图形改名存盘...saveas电子传递...etransmit网上发布...publishtoweb输出...export图形核查...audit修复破坏的图形...recover清理图形......purge编组....managroup页面设置...pagesetup打印机管理器...plottermanager打印样式管理器...stylesmanager打印预览...preview打印...plot宗地图表批量打印.....zdtbplot图形属性....dwgpropsCASS6.1参数配置.....setparaCASS6.1系统配置文件.....setsimboldef AutoCAD系统配置.....preferences操作回退......U取消回退......redo物体捕捉....OSNAP捕捉圆心点......CEN端点......endp插入点......ins交点......int中间点 (i)最近点......nea节点......nod垂直点......per四分圆点......qua切点......tan取消捕捉............non前方交会......qfjh边长交会......intersu方向交会......angdist支距量算......zhiju画直线........line徒手画........sketch画弧..........arc画圆..........circle画椭圆........ellipse;画多边形......polygon画点..........point画曲线........quxian画复合线......pline多功能复合线..Pdjf3画圆环........donut制作图块......wblock插入图块......ddinsert批量插入图块..plinsert插入光栅图象..image光栅图象纠正..rectify光栅图象赋予..imageattach光栅图象剪裁..imageclip光栅图象调整..imageadjust光栅图象质量..imagequality光栅图象透明度...transparency 光栅图象框架....imageframe写文字...dtext编辑文字...ddedit批量写文字..mtext沿线条注记...linetext插入文本文件..rtext炸碎文字......TXTEXP MTEXTTEXT...MTEXTTOTEXT文字消隐......textmask取消文字消隐..textunmask查找替换文字..find定义字型......style变换字体... 
fonts1查询列图形表..list查询工作状态..status编缉文本文件..notepad对象特性管理..properties图元编辑......ddmodify图层设定yer目标实体层ymch当前层ycur仅留实体所在层yiso冻结实体所在层yfrz关闭实体所在层yoff锁定实体所在层ylck解锁实体所在层yulk转移实体所在层ymrg删除实体所在层ydel打开所有图层yon解冻所有图层ythw图层叠放顺序.....draworder删除多重目标选择..erase删除单个目标选择..erase;_si; 删除上个选定目标...erase;_l; 删除实体所在图层...scsd删除实体所在编码...scdaima延伸 ....extend修剪 ....trim对齐 ....align移动 ....move旋转 ....rotate比例缩放....scale伸展 ....stretch阵列 ....array复制 ....copy镜像 ....mirror圆角 ....nfillet偏移拷贝....offset局部偏移....partoffset批量选目标..mssx修改性质....change修改颜色....scsc炸开实体....explode重画屏幕....redraw显示缩放....zoom鹰眼....dsviewer视口........+vports 1命名视图....view平面视图....plan文本窗口....textscr工具栏......toolbar查看实体编码...GETP加入实体编码...PUTP生成用户编码...changecode编辑实体地物编码...modifycode 生成交换文件...INMAP读入交换文件...OUTMAP屏幕菜单功能切换...PP导线记录....ADJRECORD导线平差....ADJUST读取全站仪数据...totalstation微机-E500....stran微机-南方NTS-320...r_nts320;微机-拓普康GTS-211...r_gts211;微机-拓普康GTS-602...r_gts602;微机-索佳SET系列.....r_set500;微机-宾得PCS-300 CSV.R_PCS300;南方RTK格式..NGK300;南方GPS后处理格式...gpshcl;南方S-CASS GRP格式...s_cass;南方S-CASS HTT格式...readhtt;索佳SET2C LST格式....SET2C;索佳SET2C DAT格式....SET2CDAT;索佳POWERSET坐标格式.SET2010;索佳POWERSET SDR2X格式..POWERSDR;杰科全站数据格式........JIEKE测图精灵格式转换读入...readspda转出...writespda原始测量数据录入需要控制点坐标文件...inputsource;1;不需控制点坐标文件...inputsource;2; 原始数据格式转换需要控制点坐标文件...data;1;不需控制点坐标文件...data;2;批量修改坐标数据.....CHDATA数据合并.......SJHB数据分幅.......SJFF坐标显示与打印.......SHOWGPS设置..............jihuo实时GPS跟踪..........gpsin定显示区..........HTCS改变当前图形比例尺...gbblc1展高程点.........zhkzd;1;高程点建模设置...gcddtm;高程点过滤.......gcdguolv水上高程点一般注记法.......zhkzd;2旋转注记.........xiewater海图注记法.......zhkzd;3;打散高程注记.....explodegcd合成打散的高程注记....resumegcd展野外测点点号.....zhdm;2;展野外测点代码.....zhdm;3;展野外测点点位.....zhdm;4;切换展点注记.......changezdh;展控制点.......drawkzd;编码引导....bmyd;简码识别....bmsb;图幅网格(指定长宽).....tfwg;加方格网...............hfgw;方格注记...............FGZJ;建立格网...............fenfu;批量输出...............fenfuout;普通分幅...............plxietf;700米公路分幅..........fenfu700;标准图幅 (50X50cm).....tfzs;2;标准图幅 
(50X40cm).....tfzs;1;任意图幅...............tfzs;3;小比例尺图幅...........XBLTF;倾斜图幅...............tfzs;4;工程0 号图框...........HZTK;0;工程1 号图框...........hztk;1;工程2 号图框...........hztk;2;工程3 号图框.........hztk;3;图纸空间图幅youtprint;1; youtprint;2;任意图幅youtprint;3; 图形梯形纠正......ROTA地籍参数设置.....CADAPARA绘制权属线.......JZLINE权属合并.........QSHB由图形生成.......HANDQS由复合线生成.....PLINEQS由界址线生成.....JIEZHIQS;权属信息文件合并......UNITEQS;依权属文件绘权属图....hqst;修改界址点号..........JZNUMBER重排界址点号..........requeuejzp设置最大界址点号......setmaxjzd修改界址点号前缀......setprefix删除无用界址点........delunusejzd注记界址点点名注记..............zjzdm删除..............delzjzdm界址点圆圈修饰剪切.........xiushijzd;1;消隐.........xiushijzd;2;调整宗地内界址点顺序......arrangejzd界址点生成数据文件........jzptofile;查找宗地..............zhizong查找界址点............zhijzp宗地合并..............joinjzx宗地分割..............splitjzx;宗地重构..............regenzd;修改建筑物属性设置结构和层数.............jzwxx注记建筑物边长.............bianchang计算宗地内建筑面积.............jmdmj注记建筑占地面积.............jsmj;4建筑物注记重构.............regenbuildtext修改宗地属性.............setjiezhi修改界址线属性.............jzxinfo修改界址点属性.............jzdinfo输出宗地属性.............zdinfomdb绘制地籍表格界址点成果表.............hjzdb界址点成果表(excel).............jzdcgb_excel界址点坐标表.............jzdzb以街坊为单位界址点坐标表.............jzdtable以街道为单位宗地面积汇总表.............huizong城镇土地分类面积统计表.............chenzhen街道面积统计表.............TONGJI;1街坊面积统计表.............TONGJI;2面积分类统计表.............FENLEI;1街道面积分类统计表.............FENLEI;2街坊面积分类统计表.............FENLEI;3绘制宗地图框32开单块宗地.............hzdtk;1批量处理.............pltf;116开单块宗地.............hzdtk;4批量处理.............pltf;4A4竖单块宗地.............hzdtk;2批量处理.............pltf;2A4横单块宗地.............hzdtk;5批量处理.............pltf;5A3竖单块宗地.............hzdtk;3批量处理.............pltf;3A3横单块宗地.............hzdtk;6批量处理.............pltf;6自定义尺寸单块宗地.............hzdtk;0批量处理.............pltf;0土地详查行政区村界绘制.............drawxzq;1村界内部点.............xzqinsert;1乡镇界绘制.............drawxzq;2乡镇界内部点.............xzqinsert;2县区界绘制.............drawxzq;3县区界内部点.............xzqinsert;3权属区绘制.............qsline内部点生成.............qsinside图斑绘图生成...
..........dljline内部点生成.............dljinside统计面积.............dljarea线状地类.............linedlj零星地类.............pointdlj地类要素属性修改.............dljinfo线状地类扩面.............fromlinedlj检查线状地类.............checklinedlj分级面积控制.............areacontrol统计土地利用面积.............statdlj等高线(&S)建立DTM.............LINKSJX图面DTM完善.............APPENDSJX删除三角形_erase过滤三角形.............filter_sjx增加三角形.............jsjw三角形内插点.............insert_sjx删三角形顶点.............erase_sjx重组三角形.............re_sjx加入地性线.............valley删三角网.............delsjx三角网存取写入文件.............writesjw读出文件.............readsjw修改结果存盘.............ssjw绘制等高线.............dzx绘制等深线.............dsx等高线内插.............CONTOUR等值线过滤.............dgxguolv删全部等高线.............deldgx查询指定点高程.............height等高线修剪批量修剪等高线.............pltrdgx切除指定二线间等高线.............trtwoline切除指定区域内等高线.............tregion取消等高线消隐.............(arxload "wipeout");erasewipeout 等高线注记单个高程注记.............GCZJ沿直线高程注记.............GCSPZJ;1单个示坡线.............spzj沿直线示坡线.............GCSPZJ;2等高线局部替换已有线.............dgxsegment新画线.............dgxsegment1复合线滤波.............jjjd三维模型绘制三维模型.............vshow低级着色方式.............SHADE;高级着色方式.............RENDER;返回平面视图.............VEND;坡度分析颜色配置.............slopeconfig颜色填充.............slopecolor低级着色方式.............SHADE;高级着色方式.............RENDER;地物编辑(&A)重新生成.............recass;线型换向.............huan修改墙宽.............wallwidth修改坎高.............askan电力电信 >.............$i=dldxx $i=*--植被填充稻田 .............tian;211100;211102旱地 .............tian;211200;211202菜地 .............tian;211400;211402果园 .............tian;212100;212102桑园 .............tian;212200;212202茶园 .............tian;212300;212302橡胶园 .............tian;212400;212402其他园林.............tian;212500;212502有林地 .............tian;213100;213102灌木林 .............tian;213201;213204疏林 .............tian;213300;213302未成林 .............tian;213400;0苗圃 .............tian;213500;213502迹地 .............tian;213600;0竹林 
.............tian;213901;213903天然草地.............tian;214100;214102改良草地.............tian;214200;0人工草地.............tian;214300;214302芦苇地 .............tian;215100;215102半荒植物地.............tian;215200;215202植物稀少地.............tian;215300;215302花圃 .............tian;215400;215402水生经济作物地.............tian;211300;211302土质填充肥气池.............tian;153901;0沙地 .............tian;206100;0石块地.............tian;206300;206302盐碱地.............tian;206400;206402小草丘地.............tian;206502;206504龟裂地.............tian;206600;206602能通行沼泽地.............tian;206701;0不能通行沼泽地.............tian;206702;0小比例房屋填充.............tian;141103;0图案填充.............sotian--符号等分内插.............neicha批量缩放文字 .............ctext符号 .............cblock圆圈 ircle复合线处理批量拟合复合线.............plind批量闭合复合线.............plbihe批量修改复合线高.............changeheight批量改变复合线宽.............linewidth--线型规范化.............pludd--复合线编辑............._pedit复合线上加点.............polyins复合线上删点.............erasevertex移动复合线顶点.............movevertex--相邻的复合线连接.............polyjoin;分离的复合线连接.............sepapolyjoin;重量线轻量线.............tolwpoly;--直线复合线.............linetopline;圆弧复合线.............arctopline;SPLINE复合线.............splinetopline;椭圆复合线.............ellipsetopline;图形接边.............mapjoin--图形属性转换图层图层单个处理.............cetoce;1批量处理.............cetoce;2图层编码单个处理.............cetoce;3批量处理.............cetoce;4编码编码单个处理.............bmtobm;1批量处理.............bmtobm;2编码图层单个处理.............bmtobm;3批量处理.............bmtobm;4编码颜色单个处理.............bmtobm;5批量处理.............bmtobm;6编码线形单个处理.............bmtobm;7批量处理.............bmtobm;8编码图块单个处理.............bmtobm;9批量处理.............bmtobm;10图块图块单个处理totk;1批量处理totk;2图块图层单个处理totk;3批量处理totk;4图块编码单个处理totk;5批量处理totk;6线形线形单个处理.............xxtoxx;1批量处理.............xxtoxx;2线形图层单个处理.............xxtoxx;3批量处理.............xxtoxx;4线形编码单个处理.............xxtoxx;5批量处理.............xxtoxx;6字型字型单个处理.............zxtozx;1批量处理.............zxtozx;2字型图层单个处理.............zxtozx;3批量处理.............zxtozx;4--坐标转换.............transform测站改正.............modizhan二维
图形.............toplane房檐改正.............changeeaves直角纠正整体纠正.............rightangle单角纠正.............singleangle--批量删剪窗口删剪.............cksj依指定多边形删剪.............plsj批量剪切窗口剪切.............ckjq依指定多边形剪切.............pljq局部存盘窗口内的图形存盘.............savet;2多边形内图形存盘.............savet;1--打散独立图块.............explodeblock打散复杂线型.............explodeline检查入库(&G)地物属性结构设置.............attsetup编辑实体附加属性.............modiappinfo--图形实体检查.............checkdwg--过滤无属性实体.............guolv删除伪结点.............check_node删除复合线多余点.............jjjd;2删除重复实体.............check_repeat--等高线穿越地物检查.............checkdgxcross等高线高程注记检查.............checkdgxtext等高线拉线高程检查.............checkfromline等高线相交检查.............checkdgxinter--坐标文件检查.............check_datfile点位误差检查.............checkcoorderror边长误差检查.............checksideerror--输出ARC/INFO SHP格式.............casstoshp输出MAPINFO MIF/MID格式.............mifmid输出国家空间矢量格式.............vctout工程应用(&C)查询指定点坐标.............CXZB查询两点距离及方位.............distuser查询线长.............getlength查询实体面积.............areauser计算表面积根据坐标文件.............surfacearea;1根据图上高程点.............surfacearea;2 
--生成里程文件由纵断面线生成新建.............hdmcreate添加.............hdmadd变长.............hdmlength剪切.............hdmtrim设计.............hdmdesign生成.............fromzdline由复合线生成普通断面.............plptdm隧道断面.............plsddm由等高线生成.............dmfromdgx;1由三角网生成.............dmfromdgx;2由坐标文件生成.............getlicheng--DTM法土方计算根据坐标文件.............DTMTF;1根据图上高程点.............DTMTF;2根据图上三角网.............tstf;--计算两期间土方.............twosjw断面法土方计算道路设计参数文件.............roadpara;--道路断面.............transect;1;场地断面.............transect;2;任意断面.............transect;3;--图上添加断面线.............appenddmx--修改设计参数.............designpara编辑断面线.............editdmx修改断面里程.............chglicheng图面土方计算.............mapretf--二断面线间土方计算.............betweendmx方格网法土方计算.............fgwtf;等高线法土方计算.............dgxtf;区域土方量平衡根据坐标文件.............tfbalance;1根据图上高程点.............tfbalance;2--绘断面图根据已知坐标.............dmt_dat根据里程文件.............dmt_licheng根据等高线.............dmt_dgx;1根据三角网.............dmt_dgx;2--绘设计线.............sjline计算断面面积.............dmarea查询断面点.............dmpoint--公路曲线设计单个交点处理.............pointcurve;--要素文件录入.............putroadata;要素文件处理.............roadcurve;--计算指定范围的面积.............jsmj统计指定区域的面积.............tjmj指定点所围成的面积.............parea--线条长度调整.............linefy面积调整调整一点.............movept调整一边.............mjfy在一边调整一点.............ptatside--指定点生成数据文件.............shzht高程点生成数据文件有编码高程点.............LINKSJX1无编码高程点.............gcdtodat控制点生成数据文件.............kzdtodat等高线生成数据文件.............datincontour图幅管理(&M)图幅信息操作.............MAPMANAGE图幅显示.............SELMAP图幅列表.............MAPBAR--绘超链接索引图.............hypertfgl移动............._move镜像............._mirror旋转............._rotate缩放............._scale拉伸............._stretch基点.............base复制.............copy参照.............reference放弃.............._u特性............._properties转至 
................_gotourl退出................_exit剪切............._cutclip复制............._copyclip带基点复制............._copybase粘贴............._pasteclip粘贴为块............._pasteblock粘贴到原坐标............._pasteorig放弃(&U)............._u重做..............._redo平移................pan缩放.............._zoom--快速选择................_qselect查找................_find选项................_options剪切............._cutclip复制............._copyclip带基点复制............._copybase粘贴............._pasteclip粘贴为块............._pasteblock粘贴到原坐标............._pasteorig--删除............._erase移动..................move复制选择..............copy缩放................._scale旋转................._rotate全部不选(&A).............(ai_deselect)--快速选择(&Q)................_qselect查找(&F)................_find特性(&S)............._properties平移.......................pan缩放......................_zoom标注对象的上下文菜单标注文字位置在尺寸线上............._ai_dim_textabove置中............._ai_dim_textcenter默认位置............._ai_dim_texthome单独移动文字............._aidimtextmove _2与引线一起移动............._aidimtextmove _1 与尺寸线一起移动............_aidimtextmove _0精度0............._aidimprec _00.0............._aidimprec _10.00............._aidimprec _20.000............._aidimprec _30.0000............._aidimprec _40.00000............._aidimprec _50.000000............._aidimprec _6标注样式(&D)另存为新样式(&S)................_aidimstyle _S 标注样式 MRU1............._aidimstyle _1标注样式 MRU2............._aidimstyle _2标注样式 MRU3............._aidimstyle _3标注样式 MRU4............._aidimstyle _4标注样式 MRU5............._aidimstyle _5标注样式 MRU6............._aidimstyle _6其他......................_aidimstyle _O视口对象的上下文菜单视口剪裁(&V)............._vpclip显示视口对象是............._-vports _on _p;; 否............._-vports _off _p;;显示锁定是(&Y)............._-vports _lock _on _p否(&N)............._-vports _lock _off _p消隐出图(&H)是(&Y)............._-vports _hide _on _p否(&N)............._-vports _hide _off 
_p外部参照对象的上下文菜单外部参照剪裁(&I)............._xclip外部参照管理器(&N)................_xref多行文字对象的上下文菜单编辑多行文字(&I)................_mtedit文字对象的上下文菜单编辑文字(&I)................_ddedit图案填充对象的上下文菜单编辑图案填充................_hatchedit多段线对象的上下文菜单编辑多段线............._pedit样条曲线对象的上下文菜单编辑样条曲线............._splinedit多段线对象的上下文菜单编辑多段线............._pedit标注线性标注............_dimlinear对齐标注............._dimaligned坐标标注............._dimordinate--半径标注............._dimradius直径标注............._dimdiameter角度标注.............._dimangular--快速标注............._qdim基线标注............._dimbaseline连续标注............._dimcontinue快速引线............._qleader公差............._tolerance圆心标记............._dimcenter--编辑标注............._dimedit编辑标注文字............._dimtedit标注更新............._-dimstyle _apply 标注样式...............dimstyle绘图直线............._line构造线............._xline多线............._mline多段线............._pline正多边形............._polygon矩形............._rectang圆弧............._arc 圆............._circle样条曲线............._spline椭圆............_ellipse椭圆弧............_ellipse _a块创建块............._block 点.................point图案填充............._bhatch面域............._region多行文字............._mtext查询距离........................dist面积..............area面域/质量特性............._massprop列表......................_list点坐标 (i)插入插入块............._insert外部参照............._xref图像............._image输入............._importOLE 对象............._insertobj布局新建布局....................._layout _n 来自样板的布局............._layout _t页面设置............._pagesetup显示“视口”对话框............_vports修改删除............._erase复制对象........................copy)镜像.............................mirror) 偏移............._offset阵列............_array移动..............move旋转..............rotate缩放..............scale拉伸..............stretch拉长............._lengthen修剪............._trim延伸............._extend打断于点............._break \f 
\@打断............._break倒角............_chamfer圆角............._fillet分解............._explode修改_II显示顺序............._draworder--编辑图案填充............._hatchedit编辑多段线............._pedit编辑样条曲线............._splinedit编辑多线............._mledit--编辑属性............._eattedit块属性管理器............._BattMan同步属性............._AttSync属性提取............._EAttExt对象特性将对象的图层置为当前............_ai_molc图层yer 上一个图层............._LayerP对象捕捉临时追踪点............................tt 捕捉自..................from捕捉到端点................endp捕捉到中点 (i)捕捉到交点................int捕捉到外观交点............appint捕捉到延长线 (x)捕捉到圆心................cen捕捉到象限点..............qua捕捉到切点................tan捕捉到垂足................per捕捉到平行线..............par捕捉到插入点..............ins捕捉到节点................nod捕捉到最近点..............nea无捕捉....................non对象捕捉设置..............dsettings 2三维动态观察器三维平移..................3dpan三维缩放".................3dzoom三维动态观察...............3dorbit三维连续观察...............3dcorbit三维旋转...................3dswivel三维调整距离...............3ddistance三维调整剪裁平面...........3dclip前向剪裁开/关............._dview后向剪裁开/关............._dview着色二维线框...............shademode _2三维线框...............shademode _3消隐...................shademode _h平面着色...............shademode _f体着色.................shademode _g带边框平面着色.........hademode _l带边框体着色...........shademode _o参照编辑编辑块或外部参照............._refedit;向工作集添加对象..............refset _add从工作集删除对象..............refset _rem放弃对参照的修改..............refclose _disc 将修改保存到参照..............refclose _sav参照外部参照............_xref附着外部参照............._xattach外部参照剪裁............._xclip外部参照绑定............._xbind外部参照剪裁边框......xclipframe 
1--图像.................image附着图像............imageattach图像剪裁.............imageclip图像调整............._imageadjust图像质量"............._imagequality图像透明............._transparency图像边框............._imageframe渲染消隐............._hide渲染............._render场景............._scene光源............._light材质............_rmat材质库............._matlib贴图............._setuv背景............._background雾化............._fog新建配景............._lsnew编辑配景............._lsedit配景库............._lslib--渲染系统配置............._rpref统计信息................._stats实体长方体............._box球体............._sphere圆柱体............._cylinder圆锥体............._cone楔体............._wedge圆环............._torus拉伸............._extrude旋转............._revolve剖切............._slice切割............._section干涉............._interfere设置图形............._soldraw设置视图"............._solview设置轮廓............._solprof实体编辑并集............._union差集............._subtract交集............._intersect拉伸面............._solidedit _face _extrude 移动面............._solidedit _face _move偏移面............._solidedit _face _offset 删除面............._solidedit _face _delete 旋转面............._solidedit _face _rotate 倾斜面............._solidedit _face _taper 复制面............._solidedit _face _copy着色面............._solidedit _face _color --复制边............._solidedit _edge _copy着色边............._solidedit _edge _color --压印............._solidedit _body _imprint 清除............._solidedit _body _clean分割............._solidedit _body _separate 抽壳............._solidedit _body _shell检查............._solidedit _body 
_check标准新建............._new打开............._open保存............._qsave打印............._plot打印预览............._preview查找和替换............._find剪切到剪贴板............._cutclip复制到剪贴板............._copyclip从剪贴板粘贴............._pasteclip特性匹配................_matchprop--放弃............._u重做............._redo--今日............._Today三维动态观察器.......3dorbit实时平移.............._pan实时缩放.............zoom标准配置标准............._Standards检查标准............._CheckStandards图层转换............._LayTrans曲面二维填充............._solid三维面............._3dface--长方体表面............._ai_box楔体表面............._ai_wedge圆锥面............._ai_cone球面............._ai_sphere上半球面............._ai_dome下半球面............._ai_dish圆环面............._ai_torus-- 边............_edge三维网格............._3dmesh旋转曲面............._revsurf平移曲面............._tabsurf直纹曲面............._rulesurf边界曲面............._edgesurf文字多行文字............._mtext单行文字............._dtext编辑文字............._ddedit查找和替换............._find文字样式................style缩放文字............._scaletext对正文字............._justifytext在空间之间转换距离...._spacetransUCSUCS............._ucs显示 UCS 对话框............._+ucsman 0上一个 UCS............._ucs _p--世界 UCS............._ucs _w对象 UCS............._ucs _ob面 UCS"............._ucs _fa视图 UCS............._ucs _v原点 UCS............._ucs _oZ 轴矢量 UCS............._ucs _zaxis 三点 UCS............._ucs _3X 轴旋转 UCS............._ucs _xY 轴旋转 UCS............._ucs _yZ 轴旋转 UCS............._ucs _z应用 UCS............._ucs _apply显示 UCS 对话框............._+ucsman 0移动 UCS 原点............._ucs _move视图命名视图", "ICON_16_DDVIEW", "ICON_16_DDVIEW")............._view俯视图", "ICON_16_VIETOP", "ICON_16_VIETOP")............._-view _top仰视图", "ICON_16_VIEBOT", "ICON_16_VIEBOT")............._-view _bottom左视图", "ICON_16_VIELEF", "ICON_16_VIELEF")............._-view _left右视图", "ICON_16_VIERIG", "ICON_16_VIERIG")............._-view _right主视图", "ICON_16_VIEFRO", "ICON_16_VIEFRO")............._-view _front后视图", "ICON_16_VIEBAC", "ICON_16_VIEBAC")............._-view _back西南等轴测视图", "ICON_16_VIESWI", 
"ICON_16_VIESWI")............._-view _swiso 东南等轴测视图", "ICON_16_VIESEI", "ICON_16_VIESEI")............._-view _seiso 东北等轴测视图", "ICON_16_VIENEI", "ICON_16_VIENEI")............._-view _neiso 西北等轴测视图", "ICON_16_VIENWI", "ICON_16_VIENWI")............._-view _nwiso 相机", "ICON_16_CAMERA", "ICON_16_CAMERA")............._camera视口显示“视口”对话框............._vports单个视口.......................-vports剪裁现有视口 ............._vpclipWEB后退..........................._hyperlinkBack前进..........................._hyperlinkFwd停止浏览......................._hyperlinkStop浏览 Web......................._browser缩放窗口缩放.........................zoom _w动态缩放.........................zoom _d比例缩放.........................zoom _s中心缩放.........................zoom _c放大.............................zoom 2x缩小.............................zoom .5x全部缩放.........................zoom _all范围缩放.........................zoom _e标准工具栏图层管理yer把对象的图层置为当前.............ai_molc线型管理.........................linetype 编组选择关.......................PICKSTYLE 0 编组选择开.......................PICKSTYLE 1 打开老图.............open图形存盘.............qsave重画屏幕.............redraw平移.................pan缩放.................zoom窗选.................zoom _w全图.................zoom _e前图.................zoom 
_p回退.................u取消回退.............redo对象特性.............properties设计中心.............adcenter删除.................erase移动.................move复制.................copy修剪............._trim延伸............._extendCASS实用工具栏查看实体编码.............getp加入实体编码.............putp重新生成.............recass批量选目标线型换向.............huan修改坎高.............askan查询坐标.............cxzb查询距离和方位角.............distuser注记文字.............wzzj多点房屋.............drawddf四点房屋.............fourpt依比例围墙...........drawwq陡坎.................drawdk自然斜坡等...........xp交互展点.............drawgcd图根点...............drawtgd电力线...............drawdlx道路.................drawdl地籍地籍参数设置.............CADAPARA绘制权属线.................JZLINE权属线生成依权属文件绘权属图.............hqst修改界址点号.............JZNUMBER重排界址点号.............requeuejzp设置最大界址点号.............setmaxjzd 删除无用界址点号.............delunusejzd 注记界址点点名界址点圆圈修饰.............xiushijzd调整界址点顺序.............arrangejzd界址点生成数据文件.............jzptofile 查找指定宗地.............zhizong查找指定界址点.............zhijzp宗地合并.............joinjzx宗地分割.............splitjzx宗地重构.............regenzd--修改宗地属性.............setjiezhi修改界址线属性.............jzxinfo修改界址点属性.............jzdinfo输出宗地属性.............zdinfomdb等高线由数据文件建立.............LINKSJX图面DTM完善.............APPENDSJX删除三角形.............._erase过滤三角形.............filter_sjx增加三角形.............jsjw三角形内插点.............insert_sjx删三角形顶点.............erase_sjx重组三角形.............re_sjx删三角网.............delsjx三角网存取修改结果存盘.............ssjw绘制等高线.............dzx绘制等深线.............dsx等高线内插.............CONTOUR等值线过滤.............dgxguolv删全部等高线.............deldgx查询指定点高程.............height等高线修剪切除穿建筑物等高线.............plsx切除穿坡坎等高线.............trkan切除穿围墙等高线.............trwall切除指定二线间等高线.............trtwoline切除指定区域内等高线.............tregion切除穿控制点注记等高线.............kzdtrim消隐穿独立地物等高线.............(ARXLOAD "WIPEOUT");blockmask 切除穿独立地物等高线.............blocktrim消隐穿文字注记等高线.............(ARXLOAD "WIPEOUT");textmask 取消穿注记等高线消隐.............(arxload "wipeout");textunmask 
切除穿文字注记等高线.............btxt单个示坡线...............spzj沿直线示坡线.............GCSPZJ;2复合线滤波...............jjjd三维模型绘制三维模型.............vshow低着色方式.............SHADE;高级着色方式.............RENDER;返回平面视图.............VEND;地物编辑修改墙宽.............wallwidth修改坎高.............askan图案填充.............sotian符号等分内插.............neicha线型规范化.............pludd图形接边.............mapjoin坐标转换.............transform测站改更.............modizhan质量控制打散独立图块.............explodeblock。

DB11/T 969-2013 Standard of Storm Water Runoff Calculation for Urban Storm Drainage System Planning and Design

Annex: Catalogue of approved and issued Beijing local standards

Beijing Municipal Bureau of Quality and Technical Supervision; Beijing Municipal Commission of Urban Planning

April 18, 2013

General Office of the Beijing Municipal Bureau of Quality and Technical Supervision

Printed and distributed on April 19, 2013

Annex

Catalogue of approved and issued Beijing local standards
No. | Local standard code | Local standard name | Approval date | Implementation date
1 | DB11/T 969-2013 | Standard of storm water runoff calculation for urban storm drainage system planning and design | 2013-04-18 |
Explanation of Provisions ...................................................... 21

DB11/T 969-2013

CONTENTS
1 General Provisions ............................................................. 1
2 Terms and Definitions .......................................................... 2
3 Calculation Method and Parameters .............................................. 4
Beijing Local Standard
Code: DB11/T 969-2013    Record No.: J 12340-2013

城市雨水系统规划设计暴雨径流计算标准
Standard of storm water runoff calculation for urban storm drainage system planning and design
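For orientation, the peak runoff that a standard of this kind governs is conventionally computed in Chinese drainage design with the rational method: a design storm intensity q (in L/(s·ha)) from an intensity-duration-frequency formula, multiplied by a runoff coefficient Ψ and the catchment area F (in ha). The sketch below only illustrates that arithmetic; the intensity-formula coefficients (A1, C, b, n) are placeholder values, not the parameters prescribed by DB11/T 969-2013.

```python
import math

def storm_intensity(P_years, t_minutes, A1=12.0, C=0.81, b=8.0, n=0.71):
    """Design rainstorm intensity q in L/(s*ha).

    Uses the common Chinese intensity formula
        q = 167 * A1 * (1 + C * lg P) / (t + b)**n
    where P is the return period (years) and t the storm duration
    (minutes). The default coefficients are placeholders, NOT the
    values prescribed by DB11/T 969-2013.
    """
    return 167.0 * A1 * (1.0 + C * math.log10(P_years)) / (t_minutes + b) ** n

def design_flow(psi, q, F_ha):
    """Rational-method peak runoff Q = psi * q * F, in L/s."""
    return psi * q * F_ha

q = storm_intensity(P_years=3, t_minutes=15)
Q = design_flow(psi=0.6, q=q, F_ha=2.5)  # 2.5 ha catchment, runoff coefficient 0.6
print(f"q = {q:.1f} L/(s*ha), Q = {Q:.1f} L/s")
```

Raising the return period P or shortening the duration t increases q, which is why the choice of design return period is a central parameter in such a standard.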

SWAT User's Manual (Chinese Translation)


Soil and Water Assessment Tool User's Manual, Version 2000. S.L. Neitsch, J.G. Arnold, J.R. Kiniry, R. Srinivasan, J.R. Williams, 2002.

Chapter 1 Overview

1.1 Watershed Configuration

✧ Subbasins
  - an unlimited number of HRUs (at least 1 per subbasin)
  - one pond (optional)
  - one wetland (optional)
✧ Tributary/main channel reach (one per subbasin)
✧ Impounded water on the main channel network (optional)
✧ Point sources (optional)

1.1.1 Subbasins
Subbasins are the first level of watershed subdivision. Each subbasin occupies a geographic position within the watershed and is spatially connected to the other subbasins.

1.1.2 Hydrologic Response Units (HRUs)
HRUs are the portions of a subbasin that possess a unique combination of land-use/management/soil attributes; each HRU lumps together all areas within the subbasin that share that combination, and the HRUs are assumed not to interact with one another.

The advantage of HRUs is that they improve the accuracy of load predictions within a subbasin.

In general, a subbasin will have 1-10 HRUs.

To incorporate more diverse information into one dataset, it is generally better to create many subbasins with a moderate number of HRUs each than a few subbasins that each contain a large number of HRUs.

1.1.3 Main Channels (Reaches)
The routing of water, sediment, and other materials through a reach is described in Section 7 of the theoretical documentation.

1.1.4 Tributary Channels
Tributary channel inputs distinguish the channelized flow that carries surface runoff generated within a subbasin.

These inputs are used to calculate the time of concentration for runoff generated within the subbasin and the transmission losses incurred as the runoff travels to the main channel.

Tributary channel inputs define the longest flow path within the subbasin.

For some subbasins the main channel may be the longest flow path, in which case the tributary channel length equals that of the main channel.

In other subbasins the tributary channel length differs from the main channel length.

1.1.5 Ponds, Wetlands, and Reservoirs
Two types of impoundments (a pond and a wetland) may be defined within each subbasin.
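The hierarchy described above (watershed → subbasins → HRUs, with optional water bodies) can be sketched as a small data model. The class and field names below are illustrative only; they are not SWAT's actual input-file structure.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class HRU:
    """A hydrologic response unit: one land-use/management/soil combination."""
    land_use: str
    soil: str
    area_ha: float

@dataclass
class Subbasin:
    """First level of watershed subdivision; its HRUs do not interact."""
    subbasin_id: int
    hrus: List[HRU] = field(default_factory=list)
    has_pond: bool = False      # at most one pond (optional)
    has_wetland: bool = False   # at most one wetland (optional)

    def area_ha(self) -> float:
        # Subbasin area is the sum of its HRU areas.
        return sum(h.area_ha for h in self.hrus)

    def hru_fraction(self, hru: HRU) -> float:
        """Fraction of the subbasin area contributed by one HRU."""
        return hru.area_ha / self.area_ha()

sb = Subbasin(1, hrus=[HRU("agricultural", "loam", 120.0),
                       HRU("forest", "clay", 80.0)])
print(sb.area_ha(), sb.hru_fraction(sb.hrus[0]))
```

Because the HRUs are assumed not to interact, per-HRU loads can be computed independently and summed at the subbasin outlet.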

In Memory of William Kohnlauer: Nevada Silver Strike Print Catalog and Checklist

Aladdin on Carpet
Las Vegas
Year Guide
MM .999 Released Price
CC Rim 1994 $22-$28
(V) Ali Baba (3 Dots)
Aladdin on Carpet
CC Rim 1994 $100-150
Sinbad
Aladdin on Carpet
$18.00 $15.60
$17.00
$22.75
Approx
2019
2020 Silver
Auction Auction Weight
$18-$25 $16-$20 .6010 Oz
.4898 Oz
$16.50 $25.00 .4948 Oz
$16-$20 $20.00 .5881 Oz
$15-$25
Monorail
Bally's (3 7s)
GDC Rim 1995 $22-$28
Running 7 (LV Top)
Bally's (3 7s) (LV Top)
GDC Rim 1995 $22-$28
(E) Running 7(WR)(LV NV.Bottom) Bally's (3 7s) (LV Top)
Reno
MM .999
G Rim
Year Guide Released Price
1996 $22-$28
Toucan/Angel Fish
Atlantis Hotel Tower
G Inner 1998 $22-$28
(E) Toucan/Angel Fish(50++)
Atlantis (Many Blds) (WL)

________________________________________________ Session S3C MINORITY ENGINEERING PROGRAM C


________________________________________________
1 Joseph E. Urban, Arizona State University, Department of Computer Science and Engineering, P.O. Box 875406, Tempe, Arizona 85287-5406, joseph.urban@
2 Maria A. Reyes, Arizona State University, College of Engineering and Applied Sciences, P.O. Box 874521, Tempe, Arizona 852189-955, maria@
3 Mary R. Anderson-Rowland, Arizona State University, College of Engineering and Applied Sciences, P.O. Box 875506, Tempe, Arizona 85287-5506, mary.Anderson@

MINORITY ENGINEERING PROGRAM COMPUTER BASICS WITH A VISION

Joseph E. Urban1, Maria A. Reyes2, and Mary R. Anderson-Rowland3

Abstract - Basic computer skills are necessary for success in an undergraduate engineering degree program. Students who lack basic computer skills are immediately at risk when entering the university campus. This paper describes a one-semester, one-unit course that provided basic computer skills to minority engineering students during the Fall semester of 2001. Computer applications and software development were the primary topics covered in the course that are discussed in this paper. In addition, there is a description of the manner in which the course was conducted. The paper concludes with an evaluation of the effort and future directions.

Index Terms - Minority, Freshmen, Computer Skills

INTRODUCTION

Entering engineering freshmen are assumed to have basic computer skills. These skills include, at a minimum, word processing, sending and receiving emails, using spreadsheets, and accessing and searching the Internet. Some entering freshmen, however, have had little or no experience with computers. Their home did not have a computer, and access to a computer at their school may have been very limited. Many of these students are underrepresented minority students. This situation provided the basis for the development of a unique course for minority engineering students.
The pilot course described here represents a work in progress that helped enough of the students that there is a basis to continue to improve the course.

It is well known that, in general, enrollment, retention, and graduation rates for underrepresented minority engineering students are lower than for others in engineering, computer science, and construction management. For this reason the Office of Minority Engineering Programs (OMEP, which includes the Minority Engineering Program (MEP) and the outreach program Mathematics, Engineering, Science Achievement (MESA)) in the College of Engineering and Applied Sciences (CEAS) at Arizona State University (ASU) was reestablished in 1993 to increase the enrollment, retention, and graduation of these underrepresented minority students. Undergraduate underrepresented minority enrollment has increased from 400 students in Fall 1992 to 752 students in Fall 2001 [1]. Retention has also increased during this time, largely due to a highly successful Minority Engineering Bridge Program conducted for two weeks during the summer before matriculation to the college [2]-[4]. These Bridge students were further supported with a two-unit Academic Success class during their first semester. This class included study skills, time management, and concept building for their mathematics class [5]. The underrepresented minority students in the CEAS were also supported through student chapters of the American Indian Science and Engineering Society (AISES), the National Society of Black Engineers (NSBE), and the Society of Hispanic Professional Engineers (SHPE). The students received additional support from a model collaboration within the minority engineering student societies (CEMS), later expanded to CEMS/SWE with the addition of the student chapter of the Society of Women Engineers (SWE) [6].
However, one problem still persisted: many of these same students found that they were lacking in the basic computer skills expected of them in the Introduction to Engineering course, as well as in introductory computer science courses. Therefore, during the Fall 2001 semester an MEP Computer Basics pilot course was offered. Nineteen underrepresented students took this one-unit course conducted weekly. Most of the students were also in the two-unit Academic Success class. The students, taught by a Computer Science professor, learned computer basics, including the sending and receiving of email, word processing, spreadsheets, sending files, algorithm development, design reviews, group communication, and web page development. The students were also given a vision of advanced computer science courses, of engineering, and of computing careers. An evaluation of the course was conducted through a short evaluation completed by each of five teams at the end of each class, as well as the end-of-semester student evaluations of the course and the instructor. This paper describes the class, the students, the course activities, and an assessment of the short-term overall success of the effort.

MINORITY ENGINEERING PROGRAMS

The OMEP works actively to recruit, to retain, and to graduate historically underrepresented students in the college. This is done through targeted programs in the K-12 system and at the university level [7], [8]. The retention aspects of the program are delivered through the Minority Engineering Program (MEP), which has a dedicated program coordinator. Although the focus of the retention initiatives is centered on the disciplines in engineering, the MEP works with retention initiatives and programs campus-wide. The students' efforts to work across disciplines and to collaborate with other culturally based organizations give them the opportunity to work with their peers. At ASU the result was the creation of culturally based coalitions.
Some of these coalitions include the American Indian Council, El Concilio (a coalition of Hispanic student organizations), and the Black & African Coalition. The students' efforts are significant because they are mirrored at the program/staff level. As a result, significant collaboration occurs among the programs that serve minority students, bringing continuity to the students. It is through this collaborative effort that the MEP works closely with other campus programs that serve minority students, such as the Math/Science Honors Program, the Hispanic Mother/Daughter Program, the Native American Achievement Program, the Phoenix Union High School District Partnership Program, and the American Indian Institute. In particular, the MEP office had a focus on the retention and success of the Native American students in the College. This was due in large part to the outreach efforts of the OMEP, which are channeled through the MESA Program. The ASU MESA Program works very closely with constituents on the Navajo Nation and the San Carlos Apache Indian Reservation. It was through the MESA Program, and working with the other campus support programs, that the CEAS began investigating the success of the Native American students in the College. The discovery process was not very positive: a cohort investigation initiated by the Associate Dean of Student Affairs found that the retention rate of the Native American students in the CEAS was significantly lower than the rate of other minority populations within the College. In the spring of 2000, the OMEP and the CEAS Associate Dean of Student Affairs called a meeting with other Native American support programs from across the campus.
In attendance were representatives from the American Indian Institute, the Native American Achievement Program, and the Math/Science Honors Program, the Assistant Dean of Student Life, who works with the student coalitions, and the Counselor to the ASU President on American Indian Affairs, Peterson Zah. It was through this dialogue that many issues surrounding student success and retention were discussed. Although the issues and concerns of each participant were very serious, the positive effect of the collaboration should be mentioned and noted. One of the many issues discussed was a general reality that a high number of Native American students were coming to the university with minimal exposure to technology. Even with the efforts of the MESA program to expose students to technology and related careers, in most cases the schools in their local areas either lacked connectivity or basic hardware. In other cases, where students had access to technology, they lacked teachers with the skills to help them in their endeavors to learn about it. Some students were entering the university with the intention to pursue degrees in the Science, Technology, Engineering, and Mathematics (STEM) areas, but were ill prepared in the skills to utilize technology as a tool. This was particularly disturbing in the areas of Computer Science and Computer Systems Engineering, where the basic entry-level course expected students to have a general knowledge of computers and applications. The result was evident in the cohort study. Students were failing the entry-level courses of CSE 100 (Principles of Programming with C++) or CSE 110 (Principles of Programming with Java) and CSE 200 (Concepts of Computer Science), which has the equivalent of CSE 100 or CSE 110 as a prerequisite. The students were also reporting difficulty with ECE 100 (Introduction to Engineering Design) due to a lack of assumed computer skills.
During the discussion, it became evident that assistance in the area of technology skill development would be of significance to some students in the CEAS. The MEP had been offering a seminar course in Academic Success, ASE 194. This two-credit course covered topics in study skills, personal development, academic culture issues, and professional development. The course was targeted to historically underrepresented minority students in the CEAS [3]. The MEP and the Associate Dean of Student Affairs proposed adding a one-credit option to the ASE 194 course that would focus entirely on preparing students in the use of technology.

A COMPUTER BASICS COURSE

The course, ASE 194 - MEP Computer Basics, was offered during the Fall 2001 semester as a one-unit class that met on Friday afternoons from 3:40 pm to 4:30 pm. The course was originally intended for entering computer science students who had little or no background using computer applications or developing computer programs. However, enrollment was open to non-computer science students, who subsequently took advantage of the opportunity. The course was offered in a computer-mediated classroom, which meant that lectures, in-class activities, and examinations could all be administered on computers. During course development prior to the start of the semester, the faculty member analyzed existing courses at other universities that are used by students to assimilate computing technology. In addition, he reviewed the computer applications that were expected of the students in the courses found in most freshman engineering programs. The weekly class meetings consisted of lectures, group quizzes, accessing computer applications, and group activities. The lectures covered hardware, software, and system topics with an emphasis on software development [9]. The primary goals of the course were twofold.
Firstly, the students needed to achieve a familiarity with using the computer applications that would be expected in the freshman engineering courses. Secondly, the students were to get a vision of the type of activities that would be expected during the upper-division courses in computer science and computer systems engineering and later in the computer industry. Initially, there were twenty-two students in the course: sixteen freshmen, five sophomores, and one junior. One student, a nursing freshman, withdrew early on and never attended the course. Of the remaining twenty-one students, seven had no degree program preference; six of these are now declared in engineering degree programs and the seventh remains undecided. After completion of the course, ten of the twenty-one students were in computing degree programs, with four in computer science and six in computer systems engineering. The remaining nine students included one student in social work, one student still undecided, and the rest widely distributed over the College, with two students in the civil engineering program and one student each in bioengineering, electrical engineering, industrial engineering, material science & engineering, and mechanical engineering. These student degree program demographics presented a challenge in maintaining interest for the non-computing degree program students when covering the software development topics. Conversely, the computer science and computer systems engineering students needed motivation when covering applications. This balance was maintained for the most part by developing an understanding that each could help the other in the long run by working together. The computer applications covered during the semester included e-mail, word processing, web searching, and spreadsheets.
The original plan included the use of databases, but that was not possible due to the time limitation of one hour per week. The software development aspects included discussion of software requirements through specification, design, coding, and testing. The emphasis was on algorithm development and design review. The course grade was composed of twenty-five percent each for homework, class participation, the midterm examination, and the final examination. An example of a homework assignment involved searching the web in a manner that was more complex than a simple search. To submit the assignment, each student had to send an email message to the faculty member with the information requested below. The email message had to be sent from a student email address so that a reply could be sent by email. Included in the body of the email message was to be an answer for each item below and the URLs that were used for determining each answer: the expected high temperature in Centigrade on September 6, 2001 for Lafayette, LA; the conversion of one US Dollar to Peruvian Nuevo Sols, then those Peruvian Nuevo Sols to Polish Zlotys, and then those Polish Zlotys back to US Dollars; the birth date and birth place of the current US Secretary of State; between now and Thursday, September 6, 2001 at 5:00 pm, the expected and actual arrival times for any US domestic flight that is not departing from or arriving to Phoenix, AZ; and your favorite web site and why the web site is your favorite. With the exception of the favorite web site, each item required either multiple sites or multiple levels of search. The identification of the favorite web site was introduced for comparison purposes later in the semester. The midterm and final examinations were composed of problems that built on the in-class and homework activities. Both examinations required the use of computers in the classroom.
A completed examination was submitted much like the homework assignments, as an e-mail message with attachments. This approach of electronic submission worked well for reinforcing the use of computers for course deliverables, for date/time stamping of completed activities, and as a means for delivering graded results. The current technology leaves much to be desired for marking up a document in the traditional sense of hand grading an assignment or examination. However, the students and faculty member worked well with this form of response. One major problem did occur after the completion of the final examination: one of the students, through an accident, submitted the executable part of a browser as an attachment, which brought the e-mail system to such a degraded state that grading was impossible until the problem was corrected. An ftp drop box would be a simple solution to avoid this type of accident in the future until another solution is found for the e-mail system. In order to get students to work together on various aspects of the course, a group quiz and assignment component was added about midway through the course. The group activities did not count towards the final grade; however, the students were promised an award for the group that scored the highest number of points. There were two group quizzes on algorithm development and one out-of-class group assignment. The assignment was a group effort in website development, involving the development of a website that instructs. The conceptual functionality the group selected for the assignment was to be described in a one-page typed, double-spaced written report by November 9, 2001. During the November 30, 2001 class, each group presented to the rest of the class a prototype of what the website would look like to the end user. The reports and prototypes were subject to approval and/or refinement.
Group members were expected to contribute an approximately equal amount of effort. There were five groups, four with four members and one with three members, determined randomly in class. Each group had one or more students in the computer science or computer systems engineering degree programs. The three group activities were graded on a basis of one million points. This amount of points was interesting from the standpoint of understanding relative value: one group was elated over earning 600,000 points on the first quiz until it found out that this was the lowest score. In searching for the group award, the faculty member sought a computer circuit board in order to retrieve chips for each member of the best group. During the search, a staff member pointed out another staff member who salvages computers for the College. This second staff member obtained defective parts for each student in the class. As a result, each member of the highest scoring group received a motherboard, in other words, most of the internals that form a complete PC. All the other students received central processing units. Although these "awards" were defective parts, the students viewed them as display artifacts that could be kept throughout their careers.

COURSE EVALUATION

On a weekly basis, small assessments were made of the progress of the course. One student was selected from each team to answer three questions about the activities of the day: "What was the most important topic covered today?", "What topic covered was the 'muddiest'?", and "About what topic would you like to know more?", as well as the opportunity to provide "Other comments." Typically, the muddiest topic was the one introduced at the end of a class period, to be elaborated on in the next class.
By collecting these evaluations each class period, the instructor was able to keep a pulse on the class, to answer questions, to elaborate on areas considered "muddy" by the students, and to discuss, as time allowed, topics about which the students wished to know more. The overall course evaluation was quite good. Nineteen of the 21 students completed a course evaluation. A five-point scale was used to evaluate aspects of the course and the instructor: an A was "very good," a B was "good," a C was "fair," a D was "poor," and an E was "not applicable." The mean ranking was 4.35 for the course. An average ranking of 4.57, the highest for the seven criteria on the course in general, was for "Textbook/supplementary material in support of the course." The "Definition and application of criteria for grading" received the next highest marks in the course category, with an average of 4.44. The lowest evaluation of the seven criteria for the course was a 4.17 for "Value of assigned homework in support of the course topics." The mean student ranking of the instructor was 4.47. Of the nine criteria for the instructor, the highest ranking of 4.89 was for "The instructor exhibited enthusiasm for and interest in the subject." Given the nature and purpose of this course, this is a very meaningful measure of the success of the course. "The instructor was well prepared" was also judged high, with a mean rank of 4.67. Two other important aspects of this course, "The instructor's approach stimulated student thinking" and "The instructor related course material to its application," were ranked at 4.56 and 4.50, respectively.
The lowest average rank of 4.11 was for "The instructor or assistants were available for outside assistance." The instructor kept posted office hours, but there was no assistant for the course. The "Overall quality of the course and instruction" received an average rank of 4.39, and "How do you rate yourself as a student in this course?" received an average rank of 4.35. Only a few of the students responded to the number of hours per week that they studied for the course. All of the students reported attending at least 70% of the time, and 75% of the students said that they attended over 90% of the time. The students' estimates seemed to be accurate. A common comment from the student evaluations was that "the professor was a fun teacher, made class fun, and explained everything well." A common complaint was that the class was taught late (3:40 to 4:30) on a Friday. Some students judged the class to be an easy class that taught some basics about computers; other students did not think that there was enough time to cover all of the topics. These opposite reactions make sense when we recall that the students were a broad mix of degree programs and of basic computer abilities. Similarly, some students liked that the class projects "were not overwhelming," while other students thought that there was too little time to learn too much and that too much work was required for a one-credit class. Several students expressed that they wished the course could have been longer because they wanted to learn more about the general topics in the course. The instructor was judged to be a good role model by the students. This matched the pleasure that the instructor had with this class; he thoroughly enjoyed working with the students.

ASSESSMENTS AND CONCLUSIONS

Near the end of the Spring 2002 semester, a follow-up survey consisting of three questions was sent to the students from the Fall 2001 semester computer basics course.
These questions were: "Which CSE course(s) were you enrolled in this semester?"; "How did ASE 194 - Computer Basics help you in your coursework this semester?"; and "What else should be covered that we did not cover in the course?". Eight students responded to the follow-up survey. Only one of these eight students had enrolled in a CSE course. There was consistency in reporting that the computer basics course helped in terms of being able to use computer applications in courses, as well as in understanding concepts of computing. Many of the students asked for shortcuts in using the word processing and spreadsheet applications. A more detailed analysis of the survey results will be used for enhancements to the next offering of the computer basics course. During the Spring 2002 semester, another set of eight students from the Fall 2001 semester computer basics course enrolled in one of the next possible computer science courses mentioned earlier, CSE 110 or CSE 200. The grade distribution among these students was one grade of A, four grades of B, two withdrawals, and one grade of D. The two withdrawals appear to be consistent with concerns in the other courses. The one grade of D was unique in that the student was enrolled in a CSE course concurrently with the computer basics course, contrary to the advice of the MEP program. Those students who were not enrolled in a computer science course during the Spring 2002 semester will be tracked through future semesters. The results of the follow-up survey and the computer science course grade analysis will provide a foundation for enhancements to the computer basics course, which is planned to be offered again during the Fall 2002 semester.

SUMMARY AND FUTURE DIRECTIONS

This paper described a computer basics course. In general, the course was considered to be a success.
The true evaluation of this course will be measured as we do follow-up studies of these students to determine how they fare in subsequent courses that require basic computer skills. Future offerings of the course are expected to address non-standard computing devices, such as robots, as a means to inspire the students to excel in the computing field.

REFERENCES

[1] Office of Institutional Analysis, Arizona State University Enrollment Summary, Fall Semester, 1992-2001, Tempe, Arizona.
[2] Reyes, Maria A., Gotes, Maria Amparo, McNeill, Barry, Anderson-Rowland, Mary R., "MEP Summer Bridge Program: A Model Curriculum Project," 1999 Proceedings, American Society for Engineering Education, Charlotte, North Carolina, June 1999, CD-ROM, 8 pages.
[3] Reyes, Maria A., Anderson-Rowland, Mary R., and McCartney, Mary Ann, "Learning from our Minority Engineering Students: Improving Retention," 2000 Proceedings, American Society for Engineering Education, St. Louis, Missouri, June 2000, Session 2470, CD-ROM, 10 pages.
[4] Adair, Jennifer K., Reyes, Maria A., Anderson-Rowland, Mary R., McNeill, Barry W., "An Education/Business Partnership: ASU's Minority Engineering Program and the Tempe Chamber of Commerce," 2001 Proceedings, American Society for Engineering Education, Albuquerque, New Mexico, June 2001, CD-ROM, 9 pages.
[5] Adair, Jennifer K., Reyes, Maria A., Anderson-Rowland, Mary R., Kouris, Demitris A., "Workshops vs. Tutoring: How ASU's Minority Engineering Program is Changing the Way Engineering Students Learn," Frontiers in Education '01 Conference Proceedings, Reno, Nevada, October 2001, CD-ROM, pp. T4G-7 - T4G-11.
[6] Reyes, Maria A., Anderson-Rowland, Mary R., Fletcher, Shawna L., and McCartney, Mary Ann, "Model Collaboration within Minority Engineering Student Societies," 2000 Proceedings, American Society for Engineering Education, St. Louis, Missouri, June 2000, CD-ROM, 8 pages.
[7] Anderson-Rowland, Mary R., Blaisdell, Stephanie L., Fletcher, Shawna, Fussell, Peggy A., Jordan, Cathryne, McCartney, Mary Ann, Reyes, Maria A., and White, Mary, "A Comprehensive Programmatic Approach to Recruitment and Retention in the College of Engineering and Applied Sciences," Frontiers in Education '99 Conference Proceedings, San Juan, Puerto Rico, November 1999, CD-ROM, pp. 12a7-6 - 12a7-13.
[8] Anderson-Rowland, Mary R., Blaisdell, Stephanie L., Fletcher, Shawna L., Fussell, Peggy A., McCartney, Mary Ann, Reyes, Maria A., and White, Mary Aleta, "A Collaborative Effort to Recruit and Retain Underrepresented Engineering Students," Journal of Women and Minorities in Science and Engineering, vol. 5, pp. 323-349, 1999.
[9] Pfleeger, S. L., Software Engineering: Theory and Practice, Prentice-Hall, Inc., Upper Saddle River, NJ, 1998.

Bain战略分析工具英文版
Creating and managing a profit pool: profit pool analysis may indicate new opportunities or threats. Imperatives: be open to a new perspective on your business and industry.

[Slide chart: profit pool map with share of industry revenue on the horizontal axis and operating margin on the vertical axis, spanning value-chain segments such as microprocessors, other components, personal computers, software, and peripherals, and, in the automotive example, service repair, aftermarket parts, and auto rental; segments together cover 100% of industry revenue.]

Profit Pools: Company Examples. Companies: Automakers, U-Haul, Elevators (OTIS), Harley Davidson, Polaroid. Current strategy: change product focus; change customer focus.

Application to our cases: retail industry (Wal*Mart), soft drink industry (Coca-Cola and

Optimal Parameter Estimation for a Mixed Weibull Distribution Based on Improved Cuckoo Search (基于改进CS的混合威布尔分布最优化参数估计)
Authors: 池阔, 康建设, 王广彦, 吴坤
Journal: Modern Manufacturing Engineering (现代制造工程)
Year (Volume), Issue: 2017 (000) 004
Abstract: The mixed Weibull distribution is commonly used to fit lifetime data of equipment with multiple failure modes, but because of the distribution's complex form and large number of parameters, its parameter estimation is difficult. To address this problem, an optimal parameter estimation method for the mixed Weibull distribution is proposed, based on a Cuckoo Search (CS) algorithm whose step-size scaling and host-bird discovery probability are both improved. The method takes minimizing the residual sum of squares as its objective, builds a parameter estimation optimization model, and searches for the optimal parameters with the improved CS algorithm. In a case study on aircraft windshield lifetime data, the CS algorithm and three improved CS variants each performed 2,000 parameter estimations for a two-component, two-parameter Weibull distribution. Comparing the optimization results of the algorithms shows that the CS variant combining both the step-size improvement and the discovery-probability improvement yields higher estimation accuracy and more reliable results.
Pages: 6 (pp. 149-154)
Affiliation (all four authors): Ordnance Engineering College, Shijiazhuang 050003
Language: Chinese
CLC classification: TB114
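The abstract's objective, minimizing the residual sum of squares between the fitted mixture CDF and the empirical CDF of the lifetime data, can be sketched as follows. This is a hedged illustration, not the paper's implementation: the crude random search merely stands in for the improved Cuckoo Search, and the median-rank plotting position is one common (assumed) choice of empirical CDF; all function names are illustrative.

```python
import math
import random

def weibull_cdf(t, shape, scale):
    # two-parameter Weibull CDF
    return 1.0 - math.exp(-((t / scale) ** shape))

def mixture_cdf(t, params):
    # params = (w, shape1, scale1, shape2, scale2); component weights w and 1 - w
    w, b1, e1, b2, e2 = params
    return w * weibull_cdf(t, b1, e1) + (1.0 - w) * weibull_cdf(t, b2, e2)

def rss(params, lifetimes):
    # residual sum of squares between the mixture CDF and the empirical
    # CDF (median-rank plotting positions) of the sorted lifetimes
    n = len(lifetimes)
    total = 0.0
    for i, t in enumerate(sorted(lifetimes), start=1):
        f_emp = (i - 0.3) / (n + 0.4)
        total += (mixture_cdf(t, params) - f_emp) ** 2
    return total

def fit_by_random_search(lifetimes, bounds, iters=5000, seed=1):
    # stand-in optimizer: sample parameter vectors inside the given bounds
    # and keep the one with the smallest residual sum of squares
    rng = random.Random(seed)
    best, best_val = None, float("inf")
    for _ in range(iters):
        cand = tuple(rng.uniform(lo, hi) for lo, hi in bounds)
        val = rss(cand, lifetimes)
        if val < best_val:
            best, best_val = cand, val
    return best, best_val
```

A real CS implementation would replace `fit_by_random_search` with Lévy-flight steps and nest abandonment; the objective `rss` is the part the abstract actually pins down.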
Related literature:
1. A parameter estimation method for mixed Weibull distributions in aircraft reliability analysis [J], 吴江
2. Parameter estimation of an improved mixed Weibull distribution based on the ECM algorithm [J], 刘小宁
3. An L-M algorithm for parameter estimation of mixed Weibull distributions [J], 凌丹, 黄洪钟, 张小玲, 蒋工亮
4. A Weibull mixture parameter estimation method with adaptive SA-PSO optimization and its application [J], 郭森, 王大为, 张绍伟, 姚永超
5. Parameter estimation of the mixed exponential-Weibull distribution [J], 张晓勤, 王煜, 卢殿军

Introduction to the ANSYS Finite Element Analysis Software (ANSYS有限元分析软件介绍)
... the orientation of the elements. By default, the origin of the working plane coincides with the origin of the global coordinate system, but it can be moved or rotated to any desired position. With its grid displayed, the working plane can be used as a drawing board. [Slide figure: working-plane axes WX/WY, a point WP(X, Y), and ranges X1-X2, Y1-Y2.] (Chap9-17)

Boolean operations. To use Boolean operations: 1. ...; 2. ...; 3. ... (Main Menu: Preprocessor > -Modeling- Operate >)

3. Review the results: 1) examine the analysis results; 2) check whether the results are correct. (Chap9-7)

The ANSYS interface and menus. 1. Build the finite element model: Main Menu. 2. Apply loads and solve. 3. Review the results. Utility Menu: File, Select, List, Plot, PlotCtrls, WorkPlane, Parameters, Macro, MenuCtrls, Help. (Chap9-8)

ANSYS units: ANSYS assumes a self-consistent system of units; commonly used units are listed in the table below. [Slide figure: main window regions — graphics area, model-control toolbar, user prompt line, current settings.] (Chap9-6)

A typical ANSYS analysis consists of three main steps. 1. Create the finite element model (preprocessor): 1) create or import the finite element model; 2) define material properties; 3) mesh. 2. Apply loads and solve (solver): 1) apply loads and set constraint conditions; 2) solve. 3. Review the results (postprocessor). (Chap9-9)

ANSYS file management: ANSYS reads and writes files during an analysis. The file format is jobname.ext, where jobname is the chosen job name and ext is an ANSYS-defined extension that identifies the purpose and type of the file. The default job name is file.

Byzantine Fault Tolerance Algorithms

Byzantine Fault Tolerance (BFT) is a class of algorithms for coping with faulty nodes in distributed systems. It originates in the Byzantine Generals Problem of the mid-1980s, which models generals of the Byzantine Empire who must reach a common decision even though some of them may be traitors. The goal of BFT is to keep a distributed system operating correctly even in the presence of faulty or malicious nodes.

BFT has important applications in distributed systems, especially in domains where reliability and security are critical, such as financial transactions, electronic payments, and aerospace. It achieves fault tolerance through message passing and consensus mechanisms between nodes, ensuring consistency and trustworthiness across them.

The core idea of BFT is to reach consensus by electing a leader node. The leader collects the decisions of all nodes and makes the final decision based on what it receives. To guarantee data correctness, BFT uses message signing and verification, ensuring message integrity and authenticated identities.

Faulty nodes can cause system inconsistency or wrong decisions. To tolerate them, the algorithm requires that at least 2/3 of the nodes be correct and able to reach agreement with one another. In each round, a node sends its decision to the other nodes and votes on the decisions it receives. Once more than 2/3 of the received votes agree, the node accepts that decision and broadcasts it to the other nodes. Eventually all nodes reach the same decision, preserving the safety and reliability of the algorithm.

In practice, BFT must also account for factors such as node trustworthiness, network latency, and communication errors. To improve fault tolerance and performance, optimizations can be applied, such as choosing a suitable leader node, using fast message-passing protocols, and introducing super-nodes.

In summary, BFT is an important approach to the problem of faulty nodes in distributed systems. Through leader election, message signatures, and voting mechanisms it ensures consistency and trust between nodes. Although the algorithms are complex and costly, they play an essential role in guaranteeing system reliability and security, and they merit further study and application.

Handbook: A Handbook of Statistical Analyses Using R, 2nd Edition — Brian S. Everitt and Torsten Hothorn (manual excerpt)
A Handbook of Statistical Analyses Using R — 2nd Edition
Brian S. Everitt and Torsten Hothorn

CHAPTER 11
Survival Analysis: Glioma Treatment and Breast Cancer Survival

11.1 Introduction
11.2 Survival Analysis
11.3 Analysis Using R

11.3.1 Glioma Radioimmunotherapy

Figure 11.1 leads to the impression that patients treated with the novel radioimmunotherapy survive longer, regardless of the tumour type. In order to assess if this informal finding is reliable, we may perform a log-rank test via

R> survdiff(Surv(time, event) ~ group, data = g3)

Call:
survdiff(formula = Surv(time, event) ~ group, data = g3)

               N Observed Expected (O-E)^2/E (O-E)^2/V
group=Control  6        4     1.49      4.23      6.06
group=RIT     11        2     4.51      1.40      6.06

Chisq = 6.1 on 1 degrees of freedom, p = 0.01

which indicates that the survival times are indeed different in both groups. However, the number of patients is rather limited and so it might be dangerous to rely on asymptotic tests. As shown in Chapter 4, conditioning on the data and computing the distribution of the test statistics without additional assumptions is one alternative. The function surv_test from package coin (Hothorn et al., 2006, 2008) can be used to compute an exact conditional test answering the question whether the survival times differ for grade III patients. For all possible permutations of the groups on the censored response variable, the test statistic is computed and the fraction of those being greater than the observed statistic defines the exact p-value:

R> library("coin")
R> logrank_test(Surv(time, event) ~ group, data = g3,
+               distribution = "exact")

Exact Two-Sample Logrank Test

data: Surv(time, event) by group (Control, RIT)
Z = -2, p-value = 0.03
alternative hypothesis: true theta is not equal to 1

[Figure 11.1: Survival times comparing treated and control patients — Kaplan-Meier curves for Grade III and Grade IV glioma, produced by:]

R> data("glioma", package = "coin")
R> library("survival")
R> layout(matrix(1:2, ncol = 2))
R> g3 <- subset(glioma, histology == "Grade3")
R> plot(survfit(Surv(time, event) ~ group, data = g3),
+       main = "Grade III Glioma", lty = c(2, 1),
+       ylab = "Probability", xlab = "Survival Time in Month",
+       legend.text = c("Control", "Treated"), legend.bty = "n")
R> g4 <- subset(glioma, histology == "GBM")
R> plot(survfit(Surv(time, event) ~ group, data = g4),
+       main = "Grade IV Glioma", ylab = "Probability",
+       lty = c(2, 1), xlab = "Survival Time in Month",
+       xlim = c(0, max(glioma$time) * 1.05))

which, in this case, confirms the above results. The same exercise can be performed for patients with grade IV glioma

R> logrank_test(Surv(time, event) ~ group, data = g4,
+               distribution = "exact")

Exact Two-Sample Logrank Test

data: Surv(time, event) by group (Control, RIT)
Z = -3, p-value = 2e-04
alternative hypothesis: true theta is not equal to 1

which shows a difference as well. However, it might be more appropriate to answer the question whether the novel therapy is superior for both groups of tumours simultaneously. This can be implemented by stratifying, or blocking, with respect to tumour grading:

R> logrank_test(Surv(time, event) ~ group | histology,
+               data = glioma, distribution = approximate(B = 10000))

Approximative Two-Sample Logrank Test

data: Surv(time, event) by group (Control, RIT)
      stratified by histology
Z = -4, p-value = 1e-04
alternative hypothesis: true theta is not equal to 1

Here, we need to approximate the exact conditional distribution since the exact distribution is hard to compute. The result supports the initial impression implied by Figure 11.1.

11.3.2 Breast Cancer Survival

Before fitting a Cox model to the GBSG2 data, we again derive a Kaplan-Meier estimate of the survival function of the data, here stratified with respect to whether a patient received a hormonal therapy or not (see Figure 11.2).

[Figure 11.2: Kaplan-Meier estimates for breast cancer patients who either received a hormonal therapy or not, produced by plot(survfit(Surv(time, cens) ~ horTh, data = GBSG2)) with data("GBSG2", package = "TH.data").]

Fitting a Cox model follows roughly the same rules as shown for linear models in Chapter 6, with the exception that the response variable is again coded as a Surv object. For the GBSG2 data, the model is fitted via

R> GBSG2_coxph <- coxph(Surv(time, cens) ~ ., data = GBSG2)

and the results as given by the summary method are given in Figure 11.3. Since we are especially interested in the relative risk for patients who underwent a hormonal therapy, we can compute an estimate of the relative risk and a corresponding confidence interval via

R> ci <- confint(GBSG2_coxph)
R> exp(cbind(coef(GBSG2_coxph), ci))["horThyes", ]

              2.5 %  97.5 %
0.707         0.549   0.911

This result implies that patients treated with a hormonal therapy had a lower risk and thus survived longer compared to women who were not treated this way.

Model checking and model selection for proportional hazards models are complicated by the fact that easy-to-use residuals, such as those discussed in Chapter 6 for linear regression models, are not available, but several possibilities do exist. A check of the proportional hazards assumption can be done by looking at the parameter estimates β1, ..., βq over time. We can safely assume proportional hazards when the estimates don't vary much over time. The null hypothesis of constant regression coefficients can be tested, both globally as well as for each covariate, by using the cox.zph function

R> GBSG2_zph <- cox.zph(GBSG2_coxph)
R> GBSG2_zph

           chisq df      p
horTh      0.239  1 0.6253
age       10.438  1 0.0012
menostat   5.406  1 0.0201
tsize      0.191  1 0.6620
tgrade    10.712  2 0.0047
pnodes     0.808  1 0.3688
progrec    4.386  1 0.0362
estrec     5.893  1 0.0152
GLOBAL    24.421  9 0.0037

There seems to be some evidence of time-varying effects, especially for age and tumour grading. A graphical representation of the estimated regression coefficient over time is shown in Figure 11.4. We refer to Therneau and Grambsch (2000) for a detailed theoretical description of these topics.

[Figure 11.3: R output of the summary method for GBSG2_coxph (n = 686, number of events = 299):]

              coef  exp(coef)  se(coef)      z  Pr(>|z|)
horThyes  -0.346278   0.707316  0.129075  -2.68   0.00730
age       -0.009459   0.990585  0.009301  -1.02   0.30913
menostatPost 0.258445 1.294915  0.183476   1.41   0.15895
tsize      0.007796   1.007827  0.003939   1.98   0.04779
tgrade.L   0.551299   1.735506  0.189844   2.90   0.00368
tgrade.Q  -0.201091   0.817838  0.121965  -1.65   0.09920
pnodes     0.048789   1.049998  0.007447   6.55   5.7e-11
progrec   -0.002217   0.997785  0.000574  -3.87   0.00011
estrec     0.000197   1.000197  0.000450   0.44   0.66131

Concordance = 0.692 (se = 0.015)
Likelihood ratio test = 105 on 9 df, p = <2e-16
Wald test = 115 on 9 df, p = <2e-16
Score (logrank) test = 121 on 9 df, p = <2e-16

[Figure 11.4: Estimated regression coefficient for age depending on time for the GBSG2 data, produced by plot(GBSG2_zph, var = "age").]

[Figure 11.5: Martingale residuals for the GBSG2 data, plotted against age, pnodes, and log(progrec) via res <- residuals(GBSG2_coxph).]

The tree-structured regression models applied to continuous and binary responses in Chapter 9 are applicable to censored responses in survival analysis as well. Such a simple prognostic model with only a few terminal nodes might be helpful for relating the risk to certain subgroups of patients. Both rpart and the ctree function from package party can be applied to the GBSG2 data, where the conditional trees of the latter select cutpoints based on log-rank statistics

R> GBSG2_ctree <- ctree(Surv(time, cens) ~ ., data = GBSG2)

and the plot method applied to this tree produces the graphical representation in Figure 11.6. The number of positive lymph nodes (pnodes) is the most important variable in the tree, corresponding to the p-value associated with this variable in Cox's regression; see Figure 11.3. Women with not more than three positive lymph nodes who have undergone a hormonal therapy seem to have the best prognosis, whereas a large number of positive lymph nodes and a small value of the progesterone receptor indicates a bad prognosis.

[Figure 11.6: Conditional inference tree for the GBSG2 data with the survival function, estimated by Kaplan-Meier, shown for every subgroup of patients identified by the tree.]

Bibliography

Hothorn, T., Hornik, K., van de Wiel, M., and Zeileis, A. (2008), coin: Conditional Inference Procedures in a Permutation Test Framework, URL /package=coin, R package version 1.0-21.
Hothorn, T., Hornik, K., van de Wiel, M. A., and Zeileis, A. (2006), "A Lego system for conditional inference," The American Statistician, 60, 257-263.
Therneau, T. M. and Grambsch, P. M. (2000), Modeling Survival Data: Extending the Cox Model, New York, USA: Springer-Verlag.


Shift-Nets and Salzburg Tables: Power Computing in Number-Theoretical Numerics*

Wolfgang Ch. Schmid† and Rudolf Schürer‡

Dedicated to Professor Peter Zinterhof on the occasion of his 60th birthday

Abstract. We report on some aspects of our research during the last 10 years, which are closely related to the Research Institute for Software Technology (RIST) of the Paris Lodron University Salzburg (PLUS), founded and led by Peter Zinterhof. Unfortunately, the RIST++ no longer exists.

1. Introduction

Number-theoretical numerics has had a long tradition within the Austrian mathematical community, mainly due to mathematicians like Edmund Hlawka. In Salzburg, Peter Zinterhof has focused on number-theoretical numerics and its computational aspects for thirty years: diaphony or gratis lattice points are highlights which everyone will associate with Peter Zinterhof. In the following we report on some of our results in the field of numerical integration and number-theoretical numerics which were obtained by computational methods. The next two sections deal with numerical integration, the first one focusing on the parallelization of adaptive algorithms, the second one comparing various approaches to high-dimensional integration. In Sections 4 and 5 we discuss two special types of low-discrepancy point sets, namely "shift-nets" and (t,m,s)-nets with parameters recorded in the "Salzburg Tables".

2. Parallel Adaptive Integration

We consider the problem of estimating the multi-dimensional integral

    I f := ∫_{C^s} f(x) dx

of a function f: C^s → R, where C^s denotes an s-dimensional hyper-rectangular region in R^s. The integral is approximated by an integration formula Q_n of the form

    Q_n f = Σ_{i=1}^{n} w_i f(x_i),

where the abscissas x_i and weights w_i are chosen by the integration routine. The number of abscissas, n, is either fixed or determined by the algorithm to obtain a requested integration error |I f − Q_n f|.

Figure 1. Time line for a parallel algorithm with manager/worker topology

We consider integration routines based on interpolatory
cubature rules and adaptive subdivision. Interpolatory cubature rules [25] are integration formulas that integrate all multi-variate polynomials up to a certain degree exactly. Here, the number of abscissas cannot be chosen freely, but depends on the structure of the formula. It usually increases rapidly with the dimension s and the desired degree. Cubature rule based algorithms create a partition R = {B_1, ..., B_r} of the integration domain C^s and apply the cubature rule to each of these subregions.

The integration routine is adaptive if the partition R is not determined a priori, but constructed during the execution of the algorithm, depending on the information gathered about the integrand [15]. Starting with R = {C^s}, the algorithm proceeds by repeatedly selecting the region with the largest estimated error from R. This region is split, and the cubature rule is applied to the new subregions, which replace the original region in R. This process is repeated until the estimated total error falls below a given threshold or the allowed number of integrand evaluations or calculation time is exhausted.
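This globally adaptive refinement loop can be sketched as follows. The sketch is a minimal one-dimensional illustration, not the authors' implementation: a Simpson/trapezoid pair stands in for the interpolatory cubature rule pair, and the local error is estimated by the difference of the two rules.

```python
import heapq

def adaptive_integrate(f, a, b, tol=1e-6, max_evals=10_000):
    """Globally adaptive quadrature: repeatedly split the region
    with the largest estimated error until the total error estimate
    falls below tol or the evaluation budget is exhausted."""
    def rule_pair(lo, hi):
        # Simpson's rule as Q, the trapezoid rule as the embedded
        # lower-degree rule; their difference is the error estimate.
        flo, fmid, fhi = f(lo), f((lo + hi) / 2), f(hi)
        simpson = (hi - lo) * (flo + 4 * fmid + fhi) / 6
        trapezoid = (hi - lo) * (flo + fhi) / 2
        return simpson, abs(simpson - trapezoid), 3  # value, error, evals

    q, e, evals = rule_pair(a, b)
    heap = [(-e, a, b, q)]            # max-heap on the estimated error
    total_err = e
    while total_err > tol and evals < max_evals:
        neg_e, lo, hi, _ = heapq.heappop(heap)   # worst region
        mid = (lo + hi) / 2
        q1, e1, n1 = rule_pair(lo, mid)          # split it in two
        q2, e2, n2 = rule_pair(mid, hi)
        heapq.heappush(heap, (-e1, lo, mid, q1))
        heapq.heappush(heap, (-e2, mid, hi, q2))
        total_err += e1 + e2 + neg_e             # neg_e = -(old error)
        evals += n1 + n2
    return sum(region[3] for region in heap), total_err

# Example: ∫₀¹ x² dx = 1/3
value, err_est = adaptive_integrate(lambda x: x * x, 0.0, 1.0)
```

In a parallel setting it is exactly this priority queue of regions that must be kept consistent, or redistributed, across processing nodes.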
Maintaining R and keeping it sorted is straightforward in a sequential program. However, if multiple processing nodes work on the approximation of the integral (see e.g. [9] for a general introduction), communication is needed for redistributing critical regions among nodes. The communication topology as well as the number of refinement steps between communication events have to be chosen carefully; otherwise the efficiency of the parallel integration routine degrades rapidly compared with the sequential algorithm.

2.1. Topology

There are many possible ways for setting up inter-process communication. Some approaches [2, 4] adhere to the manager/worker paradigm, using a dedicated manager node, which collects, sorts, and redistributes regions while the worker nodes perform refinements. Figure 1 contains a schematic diagram of this topology, together with a time line diagram of the execution of such an algorithm. It shows when a certain processing node is busy performing calculations and when it is waiting for messages from other nodes. The figure illustrates the main problem of the manager/worker approach. If the manager (in the bottom line) cannot handle all necessary communication and region management tasks fast enough, worker nodes are starved. In this case, adding further processing nodes does not increase the overall performance. Therefore, the achievable speedup is bounded and good scalability cannot be expected.

To overcome this problem, hierarchical and completely decentralized topologies have been conceived, which avoid the bottleneck of a single manager node (see for instance [1, 3, 5]). Figure 2 shows a time line diagram of such an algorithm, using a hypercube topology with 2^3 = 8 nodes, each communicating in turn with one of its three neighbors.

Figure 2. Time line for a parallel algorithm with 2^3-hypercube topology

In [21] we investigate the impact of various topologies and communication protocols on the execution time as well as the accuracy of the integration routine. It turns out that decentralized
approaches usually perform best, given that the integrand is inhomogeneous enough to require a significant amount of load balancing. However, problems like termination detection (Is the total error already small enough?) can often be handled more efficiently by a manager/worker topology.

2.2. Communication

Another important parameter for parallel adaptive integration routines is the communication interval, i.e., the number of refinement steps performed between two communication events. If it is too low, the integration routine wastes much of its time on superfluous communication, without doing any actual work. If the interval is too large, difficult regions emerging on one node may not be redistributed to other nodes fast enough. The work performed by these nodes in the meanwhile is wasted, because it does not lead to any significant improvement of the total error.

Figure 3 illustrates the effect of insufficient communication. The left picture visualizes the optimal subdivision structure produced by the sequential algorithm when applied to a corner peak function.
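The corner peak functions used here (and again for the test series in Section 3) come from the well-known Genz test-integrand families. As an illustration, here is a minimal sketch of such an integrand, assuming the standard Genz form with difficulty parameters a_i, which the text itself does not spell out:

```python
def corner_peak(x, a):
    """Genz 'corner peak' test integrand on [0,1]^s:
    f(x) = (1 + a_1*x_1 + ... + a_s*x_s)^(-(s+1)).
    The peak sits in the corner x = 0, which is why adaptive
    routines concentrate their subdivisions there."""
    s = len(x)
    return (1.0 + sum(ai * xi for ai, xi in zip(a, x))) ** (-(s + 1))
```

For s = 1 the exact value ∫₀¹ (1 + a x)⁻² dx = 1/(1 + a) makes a handy correctness check for any integration routine.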
If the same task is performed in parallel on four nodes, a lack of communication can lead to the subdivision structure shown in the right picture: three nodes waste integrand evaluations in the smooth upper and right part of the integration domain, whereas the lower left corner does not receive enough attention.

The problem of determining the optimal communication interval has been investigated in [20]. It turns out that it depends primarily on the structure of the integrand and is comparatively little affected by properties of the host system: if an integrand has a very unbalanced optimal subdivision tree, a certain amount of communication is required, even if it is very expensive.
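The hypercube scheme of Section 2.1, in which each node communicates in turn with one of its neighbors, can be sketched as a simple pairing schedule. This is an illustrative sketch, not the authors' implementation:

```python
def hypercube_rounds(d, num_rounds):
    """Pairing schedule for a 2^d-node hypercube: in round r every
    node exchanges regions with its neighbor across dimension r mod d,
    so the whole machine communicates in disjoint pairs."""
    n = 1 << d
    schedule = []
    for r in range(num_rounds):
        bit = 1 << (r % d)
        # list every unordered pair {i, i XOR bit} exactly once
        schedule.append([(i, i ^ bit) for i in range(n) if i < (i ^ bit)])
    return schedule

# 8 nodes, one sweep over all three dimensions (as in Figure 2)
rounds = hypercube_rounds(3, 3)
```

After d rounds every node has exchanged regions with each of its d neighbors, which is how difficult regions eventually diffuse through the whole machine without a central manager.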
Krause.Even though the term log s n dominates this bound for reasonable values of s and n,empirical results show that a convergence rate of O(1/n)is obtained for many integrand types.Therefore,quasi-Monte Carlo routines are well suited for high dimensions.Table1.Cubature rule pairsName n Q(2)5-3O(s2)[23]7[13]9-7O(s4)[13]7[6]3.2.Methods based on Interpolatory Cubature RulesA definition of interpolatory cubature rules is given in Section2.Algorithms based on cubature rules can be adaptive or non-adaptive and differ by the applied cubature rule.3.2.1.Adaptive vs.Non-AdaptiveCubature rules have afixed number n of abscissas.Therefore,integration routines based on a rule Q n apply this rule to r:= n1It should be noted that algorithms exist which split a region into afixed or variable number k≥2of parts at each subdivision step.However,algorithms of this type have not been evaluated here.DimensionCorner Peak C 0DiscontinuousGaussian Product Peak Oscillatory Figure 4.Best algorithm depending on integrand family and dimensionbase algorithms are depicted by shaded areas,with an additional vertical pattern if the performance of the non-adaptive algorithm exceeds that of the adaptive routine.If two algorithms are reported as “best”for a problem,the one in the major part of the field achieves the best performance in all tests,whereas the one in the lower right corner can be expected to beat the first one asymptotically.Experimental tests of this kind require a lot of computation time.69test cases are listed in Figure 4.For each test case each algorithm was evaluated,allowing it to use from 2up to 225integrand eval-uations.Finally,each of these tests was repeated 20times with randomized integrands in order to smooth out fluctuations in the result and to estimate the reliability of the algorithm.Since all these tests were carried out on computing facilities at the RIST++and the Department of Scientific Computing,this work would not have been possible without the support of 
Professor Zinterhof and these two departments.4.Shift-NetsA central issue in quasi-Monte Carlo methods is the effective construction of low-discrepancy point sets and sequences.Currently,the most powerful methods are based on the concepts of (t,m,s )-nets and (t,s )-sequences.A detailed theory was developed in [11](see also [12,Chapter 4]for a survey of this theory).(t,m,s )-nets in a base b provide point sets of b m points in the half-open s -dimensional unit cube [0,1)s for s ≥1.They are extremely well distributed if the quality parameters t ∈0are “small”.All construction methods which have been relevant for applications in quasi-Monte Carlo methods are so-called digital methods.To avoid technicalities,in the following we restrict ourselves to digital point sets defined over a finite field q of prime power order q .Let C (1),...,C (s )be m ×m matrices over q .For 0≤n <q m let n = m −1k =0a k q kbe the q -adic representation of n in base q .Consider an arbitrary bijection ϕ:{0,...,q −1}→q .Lety (i )1(n ),...,y (i )m (n ) T:=C (i )·(ϕ(a 0),...,ϕ(a m −1))Tfor i=1,...,s andx n :=(x(1)n,...,x(s)n)∈[0,1)s with x(i)n:=mk=1ϕ−1(y(i)k(n)) f(x)=∞k=1u(i)kx−k∈q((x−1))for1≤i≤s,and define the elements c(i)jr of the matrices C(i)in the above mentioned construction principle fordigital nets byc(i)jr=u(i)j+r∈q for1≤i≤s,1≤j≤m,0≤r<m.C(1),...,C(s)provide a digital(t,m,s)-net over q with t=m+1−min s i=1(1+deg(h i)), where the minimum is extended over all nonzero s-tuples(h1,...,h s)∈q[x]s for which f dividesTable2.Shift-net parameters.For entries marked with an asterisk,shift-nets are the only construction known today.b=2m,s131527*39*411*513615m,s168181020*11221224*1426*1528*1730*b=3m,s1351729*311*413*515*617*719*921*b=4m,s11122334455b=5m,s1111233b=7m,s13579111*113*s i=1g i h i.If we set g i=g i−1mod f for some g∈q[x],we denote the resulting point set by P(s)(g,f).To obtain suitable parameters for an explicit construction of these nets,one has to resort to a computer search.Thefirst 
step in this direction was taken by Hansen,Mullen,and Niederreiter[8].Their search in the binary case provided point sets of up to220points in dimensions3and4and up to210points in higher dimensions(therefore too small for applications).The calculation of congruences modulo arbitrary polynomials f is very expensive.Therefore,in all of the former search procedures the most convenient choice f(x)=x m was used.In[10],by using a fast implementation on several workstations,we provided and tabulated the parameters of concrete digital nets P(s)(g,x m)of high quality with2m points for m≤25and in dimensions3≤s≤15. These tables are known as“Salzburg Tables”.A parallelized version of our search algorithm ran under PVM(Parallel Virtual Machine)version3.1on a cluster of seven DEC3000AXP/400Alpha stations at the RIST++.We parallelized the algorithm by splitting up the range of polynomials g into smaller parts.Then the search was performed independently on each processor and the results were collected by a manager node.Later,we extended the Salzburg Tables to s≤25,but for higher dimensions,even with an exhaustive search over all2m possible candidates for g,the quality was quite bad.Existence results showing a much better behavior for irreducible denominator polynomials f motivated a substantial change in the algorithm,abandoning the restriction to f(x)=x m.In[16]we calculated polynomials g providing digital(t,m,s)-nets P(s)(g,f)of high quality(i.e.,with a small quality parameter t)for3≤m≤39and dimensions3≤s≤50.For m≤18we carried out an exhaustive search through all2m possible polynomials g,in smaller dimensions also for larger m.The calculations(implemented in C)were carried out on an SGI Power Challenge GR at the RIST++.Only the high performance of this computer(partially we used all of its20nodes for our calculations)enabled the great extensions and improvements of the Salzburg Tables.In[14]we extended the algorithms and the search to arbitrary prime power bases.Again,the 
calculations (implementation now in C++) were carried out on the SGI Power Challenge GR at the RIST++. Some of the obtained nets are still better than any other known construction; see, e.g., MinT, the online database for (t,m,s)-net parameters at http://mint.sbg.ac.at/.

6. Concluding Remarks

We have given an overview of some aspects of our research in the past and the present; several projects are still going on. Keywords like number-theoretical numerics, scientific computing, computational mathematics, high-dimensional numerical integration, or power computing shape this part of our work. Undoubtedly, many of the theoretical achievements would have had much less significance without having been supported by computational results. As mentioned before, the bulk of the computations were carried out on computing facilities at the RIST++. We are deeply indebted to Peter Zinterhof, its founder and head, who made all our computational efforts possible and supported us throughout without any restrictions.

References

[1] J. M. Bull and T. L. Freeman. Parallel algorithms for multi-dimensional integration. Parallel Distrib. Comput. Practices, 1:89–102, 1998.

[2] R. Čiegis, R. Šablinskas, and J. Waśniewski. Numerical integration on distributed-memory parallel systems. Informatica, 9:123–140, 1998.

[3] M. D'Apuzzo, M. Lapegna, and A. Murli. Scalability and load balancing in adaptive algorithms for multidimensional integration. Parallel Comput., 23:1199–1210, 1997.

[4] E. H. de Doncker and A. Gupta. Multivariate integration on hypercubic and mesh networks. Parallel Comput., 24:1223–1244, 1998.

[5] A. C. Genz. Parallel adaptive algorithms for multiple integrals. In Mathematics for Large Scale Computing, number 120 in Lecture Notes in Pure and Applied Mathematics, pages 35–47. Springer-Verlag, 1989.

[6] A. C. Genz and A. A. Malik. Remarks on algorithm 006: An adaptive algorithm for numerical integration over an N-dimensional rectangular region. J. Comput. Appl. Math., 6:295–302, 1980.

[7] P. C. Hammer and A. H. Stroud. Numerical evaluation of multiple integrals II. Math. Tables Aids
Comput., 12:272–280, 1958.

[8] T. Hansen, G. L. Mullen, and H. Niederreiter. Good parameters for a class of node sets in quasi-Monte Carlo integration. Math. Comp., 61:225–234, 1993.

[9] A. R. Krommer and Ch. W. Überhuber. Numerical Integration on Advanced Computer Systems. Number 848 in Lecture Notes in Computer Science. Springer-Verlag, 1994.

[10] G. Larcher, A. Lauß, H. Niederreiter, and W. Ch. Schmid. Optimal polynomials for (t,m,s)-nets and numerical integration of multivariate Walsh series. SIAM J. Numer. Anal., 33:2239–2253, 1996.

[11] H. Niederreiter. Point sets and sequences with small discrepancy. Monatsh. Math., 104:273–337, 1987.

[12] H. Niederreiter. Random Number Generation and Quasi-Monte Carlo Methods, volume 63 of CBMS-NSF Regional Conference Series in Applied Mathematics. SIAM Society for Industrial and Applied Mathematics, 1992.

[13] G. M. Phillips. Numerical integration over an n-dimensional rectangular region. Comput. J., 10:297–299, 1967.

[14] G. Pirsic and W. Ch. Schmid. Calculation of the quality parameter of digital nets and application to their construction. J. Complexity, 17:827–839, 2001.

[15] J. R. Rice. A metalgorithm for adaptive quadrature. J. ACM, 22:61–82, January 1975.

[16] W. Ch. Schmid. Improvements and extensions of the "Salzburg Tables" by using irreducible polynomials. In H. Niederreiter and J. Spanier, editors, Monte Carlo and Quasi-Monte Carlo Methods 1998, pages 436–447. Springer-Verlag, 2000.

[17] W. Ch. Schmid. Shift-nets: A new class of binary digital (t,m,s)-nets. In H. Niederreiter, P. Hellekalek, G. Larcher, and P. Zinterhof, editors, Monte Carlo and Quasi-Monte Carlo Methods 1996, volume 127 of Lecture Notes in Statistics, pages 369–381. Springer-Verlag, 1998.
[18] R. Schürer. Parallel high-dimensional integration: Quasi-Monte Carlo versus adaptive cubature rules. In Vassil N. Alexandrov et al., editors, Computational Science – ICCS 2001, volume 2073 of Lecture Notes in Computer Science, pages 1262–1271. Springer-Verlag, May 2001.

[19] R. Schürer. A comparison between (quasi-)Monte Carlo and cubature rule based methods for solving high-dimensional integration problems. Math. Comput. Simulation, 62:509–517, 2003.

[20] R. Schürer. Optimal communication interval for parallel adaptive integration. Parallel Distrib. Comput. Practices, 2005. Accepted for publication.

[21] R. Schürer and A. Uhl. An evaluation of adaptive numerical integration algorithms on parallel systems. Parallel Algorithms Appl., 18:27–47, 2003.

[22] F. Stenger. Numerical integration in n dimensions. Master's thesis, University of Alberta, Canada, 1963.

[23] A. H. Stroud. Remarks on the disposition of points in numerical integration formulas. Math. Tables Aids Comput., 11:257–261, 1957.

[24] A. H. Stroud. Extensions of symmetric integration formulas. Math. Comp., 22:271–274, 1968.

[25] A. H. Stroud. Approximate Calculation of Multiple Integrals. Prentice-Hall, Englewood Cliffs, NJ, USA, 1971.
