Experimental constraints on fourth generation quark masses
Literature Translation: Seismic Testing of an Unfinished Two-Story Precast Concrete Structure
Tests on a Half-Scale Two-Story Seismic-Resisting Precast Concrete Building

This paper describes experimental studies on the seismic behavior and design of precast concrete buildings. A half-scale two-story precast concrete building incorporating a dual system and representing a parking structure in Mexico City was investigated. The structure was tested up to failure in a laboratory under simulated seismic loading. In some of the beam-to-column joints, the bottom longitudinal bars of the beams were purposely undeveloped due to dimensional constraints. Emphasis is given in the study to the evaluation of the observed global behavior of the test structure. This behavior showed that the walls of the test structure controlled the force path mechanism and significantly reduced the lateral deformation demands in the precast frames. Seismic design criteria and code implications for precast concrete structures resulting from this research are discussed. The end result of this research is that a better understanding of the structural behavior of this type of building has been gained.

Results of simulated seismic load tests of a two-story precast concrete building constructed with precast concrete elements that are used in Mexico are described herein. The structural system chosen for the test structure is the so-called dual type, defined as the combination of structural walls and beam-to-column frames. Connections between precast beams and columns in the test structure are of the "window" type. This type of construction is typically used in low- and medium-rise buildings in which columns are connected with "windows" at each story level. These "windows" contain the top and bottom reinforcement. Fig. 1 shows this type of construction for a commercial building in Mexico City. In most precast concrete frames such as those shown in Fig. 1, longitudinal beam bottom bars are not fully developed due to constraints imposed by the dimensions of the columns in beam-to-column joints. In an effort to overcome this deficiency, and as described later, some practicing engineers in Mexico design these joints by providing hoops around the hooks of that reinforcement in order to achieve its required continuity. However, this practice is not covered in the ACI Building Code (ACI 318-02), nor in the Mexico City Building Code (MCBC, 1993). Part of this research was done to address this issue. The objectives of this research were to evaluate the observed behavior of a precast concrete structure in the laboratory and to propose the use of precast structural elements or precast structures with both an acceptable level of expected seismic performance and appealing features from the viewpoint of construction. Emphasis is given in this paper to the global behavior of the test structure. In the second part of this research, which will be presented in a companion paper, the observed behavior of connections between precast elements in the test structure, as well as the behavior of the precast floor system, will be discussed in detail.

Structural and non-structural damage observed in buildings during past earthquakes throughout the world has shown the importance of controlling lateral displacement in structures to reduce building damage during earthquakes. It is also relevant to mention that there are several cases of structures in moderate earthquakes in which the observed damage in non-structural elements in buildings was considerable even though the structural elements showed little or no damage.
This behavior is also related to excessive lateral displacement demands in the structure. To minimize seismic damage during earthquakes, the above discussion suggests the convenience of using a structural system capable of controlling lateral displacements in structures. A solution of this type is the so-called dual system. Studies by Paulay and Priestley4 on the seismic response of dual systems have shown that the presence of walls reduces the dynamic moment demands in structural elements in the frame subsystem. Also, in conjunction with shake table tests conducted on a cast-in-place reinforced concrete dual system, Bertero5 has shown the potential of the dual system in achieving excellent seismic behavior. In this investigation, the dual system is applied to the case of precast concrete structures.

DUCTILITY DEMAND IN DUAL SYSTEMS
In order to develop a base for a later analysis of the observed seismic response of the test structure studied in this project, a simple analytical model is used to evaluate the main features of ductility demands in dual systems. Fig. 2 shows the results of a simple approach to analyzing the lateral load response in a dual system. The lateral load has been normalized in such a manner that the combination of maximum lateral resistance in both subsystems, i.e., walls and frames, leads to a lateral resistance of the global system equal to unity. It is also assumed that both subsystems have the same maximum lateral resistance. In the first case (Fig. 2a), it is assumed that the wall and frame subsystems have global displacement ductility capacities equal to 4 and 2, respectively. In the second case (Fig. 2b), the frame subsystem response is assumed to be elastic, and the lateral stiffness of the wall subsystem is taken to be 4 times that of the frame subsystem. As shown in Fig. 2, the lateral deformation compatibility of the combined system is controlled by the lateral deformation capacity of the wall subsystem. In the first case (Fig. 2a), an elastic-plastic envelope for the lateral global response of the dual system is assumed, and the corresponding displacement ductility (μ) is equal to 3.3. For the second case (Fig. 2b), with elastic behavior of the frame subsystem, this ductility is equal to 2.5. These simple examples illustrate that in the analyzed cases, due to the higher flexibility of the frame subsystem as compared to that of the wall subsystem, the ductility demands in the frame subsystem of a dual system result in smaller ductility values than those of the wall subsystem. This analytical finding was verified in this study from the experimental studies conducted on the test structure. This verification is discussed later in the paper. It is of interest to note that results of the type shown in Fig. 2 have also been found by Bertero5 in shake table tests of a dual system.
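The combined-envelope calculation behind Fig. 2 can be reproduced with a short sketch. The following is a minimal illustration, not the authors' computation: it assumes both subsystems are elastic-plastic, sums their resistances at a common displacement, and fits an equivalent elastic-plastic envelope using the initial stiffness of the combined system. The inputs shown reproduce the second case (Fig. 2b); the subsystem yield displacements needed for the first case (Fig. 2a) are not given in the text and would have to be assumed.

```python
def system_ductility(k_wall, k_frame, r_wall, r_frame, mu_wall):
    """Combined displacement ductility of a dual system with an
    elastic-plastic wall subsystem and an elastic frame subsystem
    (the Fig. 2b idealization). The equivalent elastic-plastic
    envelope is fitted using the initial stiffness."""
    d_yw = r_wall / k_wall          # wall yield displacement
    d_u = mu_wall * d_yw            # system capacity set by the wall's ductility limit
    # the frame must still be elastic at d_u for this idealization to hold
    assert k_frame * d_u <= r_frame + 1e-9
    v_max = r_wall + k_frame * d_u  # total resistance: wall plateau + elastic frame
    k0 = k_wall + k_frame           # initial stiffness of the dual system
    d_y_eq = v_max / k0             # equivalent yield displacement
    return d_u / d_y_eq

# Fig. 2b case: equal strengths (0.5 + 0.5 = 1.0 after normalization),
# wall 4 times stiffer than the frame, wall ductility capacity of 4.
mu = system_ductility(k_wall=4.0, k_frame=1.0, r_wall=0.5, r_frame=0.5, mu_wall=4)
print(round(mu, 2))  # -> 2.5, matching the ductility reported for Fig. 2b
```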
DESCRIPTION OF TEST STRUCTURE
The test structure used in this investigation is a two-story precast concrete building, representative of a low-rise parking structure located in the highest seismic zone of Mexico City. The prototype was constructed at one-half scale. For the sake of simplicity, ramps required in a parking structure have not been considered in the selected prototype structure. Their use, requiring large openings in the floor system, would have required a very complex model of the floor system for both linear and nonlinear analysis of the structure. A detailed description of the dimensions, materials, design procedures, and construction of the test structure can be found elsewhere.6 A summary of this information is given below.

The dimensions and some characteristics of the test structure are shown in Fig. 3. The longitudinal and transverse directions are shown in Fig. 3a. Also, the exterior (longitudinal) frames containing the walls (Column Lines 1 and 3) are termed the lateral frames (see Fig. 3b), and the internal (longitudinal) frame with the single tee (Column Line 2) is termed the central frame. Double tees spanning in the longitudinal direction are supported by L-shaped precast beams in the transverse direction, as shown in Fig. 3a. The structure uses precast frames and precast structural walls, the latter elements functioning as the main lateral load resisting system. Fig. 4 shows an early phase of the construction of the test structure. As can be seen, the "windows" in the columns and walls are left in these elements for a later assemblage with the precast beams.

The unfactored design base shear required by the Mexico City Building Code (MCBC, 1993)2 is 0.2WT, where WT is the total weight of the prototype structure, assuming a dead load of 5.15 kPa (108 psf) and a live load of 0.98 kPa (20.5 psf). The prototype structure was designed using procedures of elastic analysis and the proportioning requirements of the MCBC. In these analyses, the gross moment of inertia of the members in the structure was considered, and rigid offsets (distances from the joints to the face of the supports) were assumed for all beams in the structure except for beams in the central frame, which had substandard detailing, as will be described later. Results from these analyses indicated that the structural walls in the test structure would take about 65 percent of the design lateral loads. A review of the nominal lateral resistance of the structure using the MCBC procedures showed that this resisting force was about 1.3 times the required code lateral resistance (0.2WT). This is one of several factors, later discussed, that contributed to the overstrength of the structure.

The longitudinal reinforcement in all the structural elements of the test structure was deformed bars of Grade 420 steel. Table 1 lists the concrete compressive cylinder strengths for different members of the prototype structure. Fig. 5 shows typical reinforcing details for precast beams spanning in the direction of the applied lateral load (see Fig. 3). Figs. 6 and 7 show reinforcing details for the columns, and for the structural walls and their foundation, respectively. It should be mentioned that the test structure was designed with the requirements for moderately ductile structures specified by the MCBC. According to these provisions, the test structure did not require special structural walls with boundary elements such as those specified in Chapter 21 of ACI 318-02. The precast two-story columns were connected to the precast foundation by embedding them in a grouted socket-type connection. The reinforcing details of the foundation, as well as its design procedure and behavior in the test structure, are discussed in the companion paper.3 The beam-to-column joints in the test structure were cast-in-place to enable positioning of the longitudinal reinforcement of the framing beams.
The beam top reinforcement was placed in situ on top of the precast beams. Fig. 8 shows typical reinforcing details for the joints in the beams of the central frame. Since these beams and their supporting L-shaped beams in Axes A or C (see Fig. 3) had the same depth, the hooked bottom longitudinal bars in these beams could not pass through the full depth of the column because of interference with the bottom bars from the transverse beam (see Fig. 8). As a result, these hooked bars possessed only about 55 percent of the development length required by Chapter 21 of ACI 318-02. In an attempt to anchor these hooked bars, some designers in Mexico provide closed hoops around the hooks, as shown in Fig. 8. The effectiveness of this approach is also studied in the companion paper.3 Beam-to-column joints in the lateral frames of the test structure had transverse beams that were deeper than the longitudinal beams. This made it possible for the top and bottom bars of the longitudinal beams to pass through the full joint, and, therefore, these bars achieved their required development length.

Cast-in-place topping slabs in the test structure were 30 mm (1.18 in.) thick and formed the diaphragms of the structural system.
Fig. 3. Plan and elevation of test structure: (a) Plan; (b) Lateral frame; (c) Transverse frame. Dimensions in mm. Note: 1 mm = 0.0394 in.
Welded wire reinforcement (WWR) was used as reinforcement for the topping slabs. The amount of WWR in the topping slabs was controlled by the temperature and shrinkage provisions of the MCBC, which are similar to those of ACI 318-02. It is of interest to mention that the requirements for shear strength in the diaphragms given by these provisions, which are similar to those of ACI 318-89, did not control the design. A wire size of 6 x 6 - 10/10 led to a reinforcing ratio of 0.002 in the topping slab. The strengths of the WWR at yield and fracture obtained from tests were 400 and 720 MPa (58 and 104 ksi), respectively.

TEST PROGRAM AND INSTRUMENTATION
Test Program
The test structure was subjected to simulated seismic loading in the longitudinal direction (see Fig. 3a). Quasi-static cyclic lateral loads F1 and F2 were applied at the first and second levels of the structure, respectively (see Fig. 3b). The ratio of F2 to F1 was held constant throughout the test, with a value equal to 2.0. This ratio represents an inverted triangular distribution of loads, which is consistent with the assumptions of most seismic codes, including the MCBC. The test setup is shown in Fig. 9. The structure had Hinges A, B, and C at each slab level, as shown in Fig. 9b. The purpose of the hinges was to avoid unrealistic restrictions in the structure by allowing the ends of the slabs to rotate freely during lateral load testing. As can be seen in Fig. 9, the lateral loads were applied by hydraulic actuators that worked in either tension or compression. When the actuators worked in compression, they applied the loads directly at one side of the structure. However, when the actuators applied tensile loads at one side of the test structure, these were converted to compression loads at the other side by means of four high-strength reinforcing bars for each actuator [see the D32 (32 mm) reinforcing bars in Fig. 9]. Both ends of these bars were attached to 50 mm (2 in.) thick steel plates. At each of the floor levels, two of these plates were part of Hinge A, and the other two plates, at the actuator side, were part of Hinge B (see Fig. 9b).
As can be seen in Fig. 9b, before the application of tensile loads in the actuators, the ends of the plates left a clear space at the end of the slab. This space at zero lateral load was about 50 mm (2 in.), and it allowed for the beam elongation of the structure which occurs during the formation of plastic hinges in the beams. For the case of compressive loads in the actuators acting on the transverse beam at the side of Hinge B (see Fig. 9b), the system also allowed 50 mm (2 in.) of beam elongation. These particular features of the test setup allowed the application of compressive loads at points in the slabs with no special reinforcement at these points. If tensile forces had been applied to loading points in the slabs, it is very likely that these loading points would have required unrealistic, special reinforcing details that are not representative of those in an actual structure.

The gravity load was represented by 53 steel ingots acting at each level of the structure, with the layout shown in Fig. 9a. The weight per unit area of the ingots per level was 2.79 kPa (58.3 psf), which, added to the self-weight of the slab, led to a total floor dead load of 5.37 kPa (112 psf). This is 88 percent of the code-required gravity load for seismic design (MCBC, 1993). It was not possible to apply the remaining 12 percent of the gravity load due to space limitations in the slabs. The total weight of the structure, omitting the weight of the foundation, was 284.2 kN (63.9 kips). Transverse displacements in the structure (perpendicular to the loading direction) were precluded by using steel ball bearings installed on the lateral faces of the beam-to-column joints of the second level at Column Lines A1 and A3 (see Fig. 9a). These ball bearings were supported by a steel frame. As shown in Fig. 10, the precast columns and walls were fixed to the strong floor using steel beams supported by the foundation and anchored to the floor.

The lateral loading history used in the test structure was based on force control during the elastic response of the structure, followed by displacement control during the inelastic response, using the lateral roof displacement of the structure, Δ. The target lateral load or top displacement was typically reached by incremental loading of both actuators, in which the bottom actuator was controlled to half the load value of the top actuator. A cycle of lateral load of about 0.75VR was initially applied, where VR is the nominal theoretical base shear strength computed using the ACI 318-02 provisions. The value of VR, 198 kN (44.5 kips), can be assumed to correspond to first yield in the structure. This parameter was computed using measured material properties, a strength reduction factor (φ) equal to unity, and common assumptions for the flexural strength of reinforced concrete sections. For the lateral force of 0.75VR, the corresponding lateral roof displacement was defined as 0.75Δy'. Using this value and assuming elastic behavior, the calculated value of the displacement at first yielding, Δy', was equal to 4.5 mm (0.18 in.). After the cycle at 0.75Δy', the structure was subjected to three cycles for each of the maximum prescribed roof displacements 2Δy', 4Δy', 6Δy', 13Δy', and 20Δy', which correspond to roof drift ratios, Dr, of 0.003, 0.006, 0.009, 0.020, and 0.031, respectively. The roof drift ratio is defined as:

Dr = Δ/H (1)

where H is the height of the structure measured from the fixed ends of the first-story columns; this value is equal to 2920 mm (115 in.).
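As a quick check of Eq. (1), the prescribed displacement levels and the reported values of Δy' and H reproduce the roof drift ratios listed above:

```python
# Roof drift ratios Dr = (n * dy') / H for the prescribed displacement levels,
# using dy' = 4.5 mm and H = 2920 mm as reported in the text.
dy_prime, H = 4.5, 2920.0
for n in (2, 4, 6, 13, 20):
    print(f"{n:>2} * dy' -> Dr = {n * dy_prime / H:.3f}")
# Prints 0.003, 0.006, 0.009, 0.020, 0.031, matching the reported drift ratios.
```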
The complete cyclic lateral loading history applied to the structure is shown in Fig. 11. This loading history is expressed in terms of both displacement ductility, Δ/Δy', and roof drift ratio, Dr. In addition, the peaks of lateral displacements in Fig. 11 are related to the measured base shear force, V, expressed as the ratio V/VR.

Instrumentation
A detailed description of the instrumentation of the structure can be found in the report by Rodriguez and Blandon.6 Lateral displacements of the structure were measured with linear potentiometers placed at each level of the structure, as shown in Fig. 12. Beam elongation, discussed later, was evaluated from measurements using this instrumentation. Twelve pairs of potentiometers were used for measuring the average curvatures in critical sections of two columns of the structure. In addition, nine pairs of potentiometers were installed in vertical sections of beams in the central and lateral frames, and in sections of a wall base. Electrical resistance strain gauges were placed on the longitudinal reinforcement of some beams, columns, and walls of the test structure, as well as on the hoops of beam-to-column joints with substandard reinforcing details. Some of the measurements from this instrumentation and from the potentiometers in the beams are discussed in the companion paper. Here, the observed experimental response of the beam-to-column joints with substandard reinforcing details is evaluated.

EXPERIMENTAL RESULTS
The applied lateral loading history in the test structure is shown again in Fig. 13. In this case, the peaks of lateral displacement are related to the most important events observed during testing, such as first yielding of the longitudinal reinforcement, first cracking in walls and topping slabs, loss of concrete cover, and buckling of longitudinal reinforcement. Fig. 14 shows the measured base shear force, V, versus roof drift ratio hysteretic loops. The ordinate of this graph represents the measured base shear expressed in dimensionless form as the ratio V/VR. As shown in Fig. 14a, first yielding of the longitudinal reinforcement occurred at the walls in the critical sections at the foundation faces, at a roof drift ratio of about 0.0030, corresponding to a base shear force equal to about 1.4VR. The maximum measured base shear was equal to 549 kN (123.4 kips), or 2.77VR, corresponding to a roof drift ratio of 0.020. Fig. 14b shows envelopes of the interstory drift ratio, dr, measured during testing at the two levels of the structure. The results show that the two levels had similar values of interstory drift, which is due to the significant contribution of the structural walls to the global response of the structure. This feature of the displacement profile of the structure suggests that for a given Dr value, interstory drift values would be similar in magnitude. Cyclic lateral loading was terminated at a roof drift ratio of 0.034, corresponding to a base shear force of 462 kN (103.8 kips), or 2.3VR. At this level of lateral displacement, the buckling of the longitudinal reinforcement at the fixed ends of the first-story walls was excessive, and it led to significant out-of-plane displacements of the walls, along with wide cracks in the topping slab-to-wall joints, as discussed in the companion paper.3 Fig. 14a also shows some of the most important events observed during testing.
A summary of relevant damage observed during testing of the structure is described below. Wall damage was initiated by the loss of concrete cover and buckling of the longitudinal reinforcement at the fixed ends of the first-story walls. These events occurred at a Dr value equal to about 0.020. Buckling of the longitudinal reinforcement occurred immediately after the loss of concrete cover in these critical sections. Figs. 15 and 16 illustrate the cracking pattern and damage observed at the end of testing for the lateral frame and for the central frame, respectively. Fig. 17 shows an overall view of the damage to the lateral frame at the end of testing, and Fig. 18 provides a closer look at the buckling of the longitudinal reinforcement at the fixed end of a first-story wall at the end of testing. Figs. 15 through 18 indicate that the observed damage at the end of testing in the columns, beams, and beam-to-column joints was significantly less than that of the walls. The formation of plastic hinges in the critical sections of the structural elements, such as at the fixed ends of the first-story columns and walls, and at beam ends at wall faces, was observed during testing, especially after reaching the maximum base shear. Evidence of buckling of the longitudinal reinforcement was also observed in some of these sections of columns and beams (see Figs. 15 and 16).

Seismic Testing of an Unfinished Two-Story Precast Concrete Structure: this article presents an experimental study of the seismic behavior and design of precast concrete buildings.
Design of Experiments
Design of Experiments
**Delving into the Intricacies of Design of Experiments: A Comprehensive Exploration**

The realm of design of experiments (DOE) stands as a cornerstone of scientific research, empowering us to unravel complex relationships between variables and optimize outcomes. This meticulous approach enables researchers to systematically manipulate independent variables and observe their corresponding effects on dependent variables, providing valuable insights into the underlying mechanisms at play.

DOE encompasses a vast array of techniques, each tailored to specific experimental objectives. Factorial designs, for instance, allow researchers to investigate the effects of multiple factors simultaneously, identifying their individual and interactive influences. Fractional factorial designs, on the other hand, prove particularly useful when dealing with a large number of factors, offering a cost-effective and efficient alternative.

At the heart of DOE lies the principle of randomization, which ensures that experimental units are assigned to treatments in a random manner. This randomization serves to eliminate bias and uncontrolled variability, enhancing the reliability and validity of the results obtained.

In practice, DOE involves a meticulous sequence of steps, commencing with the precise definition of the experimental objectives. Researchers must carefully consider the factors to be investigated, the desired responses, and the constraints imposed by the experimental context. Subsequently, the appropriate DOE technique is selected, and the experiment is meticulously designed to minimize confounding factors and maximize the accuracy of the results.

The analysis of DOE data involves a range of statistical techniques, including analysis of variance (ANOVA) and regression analysis. ANOVA serves to determine the statistical significance of the effects observed, while regression analysis quantifies the relationships between variables. Through these analytical methods, researchers can identify the most influential factors, optimize process parameters, and draw meaningful conclusions from their experimental findings.

DOE has found widespread application across a diverse spectrum of scientific disciplines, including engineering, medicine, agriculture, and the social sciences. In engineering, for instance, DOE is employed to optimize manufacturing processes, enhance product quality, and reduce production costs. Within the medical realm, DOE has played a pivotal role in the development of new drugs, the evaluation of treatment protocols, and the understanding of disease mechanisms.

The benefits of DOE are myriad. By enabling researchers to systematically explore the effects of variables, DOE provides valuable insights into complex systems. It facilitates the optimization of processes, leading to increased efficiency, cost savings, and improved outcomes. Moreover, DOE strengthens the foundation of scientific knowledge, contributing to advancements in various fields of study.

However, the application of DOE is not without its challenges. Researchers must exercise caution to ensure that the experimental design is sound and that the data analysis is appropriate. The interpretation of results requires careful consideration of the experimental context and the limitations of the study. Despite these challenges, DOE remains an indispensable tool for researchers seeking to unravel the complexities of real-world phenomena.
Its systematic approach, coupled with rigorous statistical analysis, provides a robust framework for exploring relationships between variables and optimizing outcomes. Through the judicious application of DOE, researchers can make significant contributions to scientific knowledge and drive innovation across a multitude of disciplines.
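As an illustration of these ideas, the sketch below enumerates a two-level full factorial design for three factors and estimates main effects from a simulated response. The factor names and the response function are hypothetical, chosen only to show the mechanics of randomization and effect estimation.

```python
import itertools
import random
import statistics

# Hypothetical 2^3 full factorial design: three factors at coded levels -1/+1.
factors = ["temperature", "pressure", "catalyst"]
runs = list(itertools.product([-1, +1], repeat=len(factors)))

def response(levels):
    """Simulated process yield: a made-up model with main effects,
    one interaction, and random noise (a stand-in for real measurements)."""
    t, p, c = levels
    return 50 + 5 * t + 3 * p - 2 * c + 1.5 * t * p + random.gauss(0, 0.5)

# Randomize the run order (randomization guards against drift and bias),
# then collect one observation per design point.
order = random.sample(range(len(runs)), len(runs))
y = {i: response(runs[i]) for i in order}

# Main effect of a factor: mean response at +1 minus mean response at -1.
for f, name in enumerate(factors):
    hi = statistics.mean(y[i] for i in y if runs[i][f] == +1)
    lo = statistics.mean(y[i] for i in y if runs[i][f] == -1)
    print(f"{name}: estimated main effect = {hi - lo:.2f}")
```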
job-shop scheduling problem
Multistage-based genetic algorithm for flexible job-shop scheduling problem
Haipeng Zhang and Mitsuo Gen
Graduate School of Information, Production & Systems, Waseda University, Wakamatsu-ku, Kitakyushu 808-0135, JAPAN
Email: *******************.jp, *************

Abstract
The flexible Job-shop Scheduling Problem is expanded from the traditional Job-shop Scheduling Problem and possesses wider availability of machines for all the operations. Considering the two states of the problem, two definitions of flexibility (total and partial) are offered to separate the different availability information of the machines. In this paper, a new multistage operation-based representation is proposed to make the chromosome simpler. By using this approach, all the crossover and mutation methods can be applied to this optimization strategy. The efficiency has been improved after using the new representation, and the objective values also outperform those of other approaches.

1. Introduction
The classical Job-shop Scheduling Problem (JSP) concerns the sequencing of a set of jobs on a set of machines so that the makespan is minimized. It is a single-criterion combinatorial optimization problem and has been proved to be NP-hard under several assumptions, as follows: each job has a specified processing order through the machines, which is fixed and known in advance; processing times are also fixed and known for the operations of each job; set-up times between operations are either negligible or included in processing times (sequence-independent); each machine is continuously available from time zero; there are no precedence constraints among operations of different jobs; each operation cannot be interrupted; and each machine can process at most one operation at a time.

The flexible Job-shop Scheduling Problem (f-JSP) extends the JSP by assuming that a machine may be capable of performing more than one type of operation (Najid, Dauzere-Peres & Zaidat, 2002). That means that for any given operation, there must exist at least one machine capable of performing it. In this paper two kinds of flexibility are considered to describe the performance of the f-JSP (Kacem, Hammadi & Borne, 2002). First, total flexibility: in this case all operations are achievable on all the available machines. Second, partial flexibility: in this case some operations are only achievable on part of the available machines.

Most of the literature on the shop scheduling problem concentrates on the JSP case (Gen & Cheng, 1997; Gen & Cheng, 2000; Blazewicz, Domschke & Pesch, 1996). The f-JSP has recently captured the interest of many researchers. The first paper to address the f-JSP was by Brucker and Schlie (Brucker & Schlie, 1990), which proposes a polynomial algorithm for solving the f-JSP with two jobs, in which the machines able to perform an operation have the same processing time. For solving the general case with more than two jobs, two types of approaches have been used: hierarchical approaches and integrated approaches. The first is based on the idea of decomposing the original problem in order to reduce its complexity. Brandimarte (Brandimarte, 1993) was the first to use this decomposition for the f-JSP. He solved the assignment problem using some existing dispatching rules and then focused on the resulting job-shop subproblems, which are solved using a tabu search heuristic. Mati proposed a greedy heuristic for simultaneously dealing with the assignment and the sequencing subproblems of the flexible job shop model (Mati, Rezg & Xie, 2001).
The advantage of Mati's heuristic is its ability to take into account the assumption of identical machines. Kacem (Kacem, Hammadi & Borne, 2002) used a GA to solve the f-JSP, adapting two approaches to solve the assignment and the JSP jointly (with total or partial flexibility). The first is the approach by localization (AL): it makes it possible to solve the problem of resource allocation and build an ideal assignment model (assignment schemata). The second is an evolutionary approach controlled by the assignment model, applying a GA to solve the f-JSP.

In this paper, we propose a more efficient method, called multistage-based GA, to solve the f-JSP (including total flexibility and partial flexibility), compared with Kacem's approach. The considered objective is to minimize the makespan, the total workload of the machines, and the maximum workload of the machines. This multi-objective optimization is done by a multistage-based GA which includes K stages (the total number of operations over all the jobs) and m states (the total number of machines). Computational experiments are carried out to evaluate the efficiency of our method on a large set of representative problem instances based on practical data.

The rest of the paper is organized as follows: In Section 2, we describe the assumptions of the flexible Job-shop Scheduling Problem in detail and propose the mathematical model of this problem. In Section 3, a heuristic method is applied to solve this problem. Section 4 introduces the GA methods and describes the implementations used for this problem. Then, the experimental results are illustrated and analysed in Section 5. Finally, Section 6 provides conclusions and suggestions for further work on this problem.

2. Mathematical model
In this paper, the flexible Job-shop Scheduling Problem we are treating is to minimize the makespan and balance the workload over all machines. Before defining the problem concretely, we add several assumptions:
1. There is a set of jobs and a set of machines.
2. Each job consists of one fixed sequence of operations.
3. Each machine can process at most one operation at a time.
4. Each machine becomes available to other operations once the operation currently assigned to it is completed.
5. All machines are available at t = 0.
6. All jobs can be started at t = 0.
7. There are no precedence constraints among operations of different jobs.
8. No operation can be interrupted.
9. Neither release times nor due dates are specified.
The f-JSP we consider here is a problem including n jobs operated on m machines. Some symbols and notations are defined as follows:
i: index of jobs, i = 1, 2, ..., n
J_i: the i-th job
n: total number of jobs
k: index of operations, k = 1, 2, ..., K_i
o_ik: the k-th operation of job i (or J_i)
K_i: total number of operations in job i (or J_i)
j: index of machines, j = 1, 2, ..., m
M_j: the j-th machine
m: total number of machines
p_ikj: processing time of operation o_ik on machine j (or M_j)
U: the set of machines, of size m
U_ik: the set of machines available for operation o_ik
t^F_ik: completion time of operation o_ik
W_j: workload (total processing time) of machine M_j
The objective function can be described by the following three equations. Eq. (1) gives the first objective, the makespan, and means minimizing the maximum finishing time over all the operations. Eq. (2) gives the second objective, which is to minimize the maximum of the workloads over all machines. Eq. (3) gives the objective of total workload.
Eq. (2) combined with Eq. (3) gives a physical meaning to the f-JSP: reducing the total processing time and dispatching the operations evenly over the machines. Considering both equations, our objective is to balance the workloads of all machines. Eq. (4) and Eq. (5) give two basic processing constraints:

\min t_M, \quad t_M = \max_{1 \le i \le n} \; \max_{1 \le k \le K_i} \{ t^F_{ik} \}   (1)
\min W_M, \quad W_M = \max_{1 \le j \le m} W_j   (2)
\min W_T, \quad W_T = \sum_{j=1}^{m} W_j   (3)
\text{s.t.} \quad t^F_{i,k-1} + p_{ikj} \le t^F_{ik}, \quad \forall i, k \text{ and the machine } M_j \text{ assigned to } o_{ik}   (4)
t^F_{ik} \ge 0, \quad \forall i, k   (5)

3. Heuristic method
To demonstrate the f-JSP model clearly, we first prepare a simple example. Table 1 gives the data set of an f-JSP including 3 jobs operated on 4 machines. It is obviously a problem with total flexibility, because all the machines are available for each operation (U_ik = U). There are several traditional heuristic methods that can be used to make a feasible schedule. In this case, we use SPT (select the operation with the shortest processing time) as the selective strategy to find an optimal solution, and the algorithm is based on the procedure in Figure 1. Before selection we first make some initializations:
• starting from a table D presenting the processing-time possibilities on the various machines, create a new table D' whose size is the same as that of table D;
• create a table S whose size is the same as that of table D (S is going to represent the chosen assignments);
• initialize all elements of S to 0 (S_ikj = 0);
• recopy D into D'.

Table 1. Data set of a 3-job 4-machine problem.

procedure: SPT Assignment
input: dataset table D
output: best schedule S
begin
  for (i = 1; i <= n)
    for (k = 1; k <= K_i)
      min = +∞; pos = 1;
      for (j = 1; j <= m)
        if (p'_ikj < min) then { min = p'_ikj; pos = j; }
      S_{i,k,pos} = 1 (assignment of o_ik to the machine M_pos);
      // updating of D'
      for (k' = k+1; k' <= K_i)
        p'_{i,k',pos} = p'_{i,k',pos} + p_{i,k,pos};
      for (i' = i+1; i' <= n)
        for (k' = 1; k' <= K_{i'})
          p'_{i',k',pos} = p'_{i',k',pos} + p_{i,k,pos};
  output best schedule S
end
Figure 1. SPT Assignment Procedure.

Following this algorithm, we assign o11 to M1, and add the processing time p111 = 1 to the elements of the first column of D' (shown in Table 2).
Table 2. D' (for i = 1 and k = 1).
Table 3. D' (for i = 1 and k = 2).
Secondly, we assign o12 to M4, and add the processing time p124 = 1 to the elements of the fourth column of D', shown in Table 3. By following the same method, we obtain the assignment S shown in Table 4. Furthermore, we can denote the schedule based on the job sequence as:
S = {(o11, M1), (o12, M4), (o13, M1), (o21, M2), (o22, M2), (o23, M1), (o31, M3), (o32, M4)}
  = {(o11, M1: 0-1), (o12, M4: 1-2), (o13, M1: 2-5), (o21, M2: 0-1), (o22, M2: 1-4), (o23, M1: 4-6), (o31, M3: 1-3), (o32, M4: 3-4)}
Finally we can calculate the solution by Eq. (1), Eq. (2) and Eq. (3) as follows:
t_M = max{t^F_11, t^F_12, t^F_13, t^F_21, t^F_22, t^F_23, t^F_31, t^F_32} = max{1, 2, 5, 1, 4, 6, 3, 4} = 6
W_M = max{(1+3), (1+3), (3+2), (1+1)} = 5
W_T = 4 + 4 + 5 + 2 = 15
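To complement the worked example, here is a runnable sketch of the Figure 1 procedure in Python. The 3-job, 4-machine processing times below are hypothetical stand-ins (the numerical contents of Table 1 are not reproduced in the text), and the greedy timing rule used to turn the machine assignment into start times is an added assumption, so the printed objective values apply to the stand-in data, not to the example above.

```python
# Sketch of the SPT assignment of Figure 1 plus a greedy timing pass.
# p[i][k][j]: processing time of operation k of job i on machine j
# (hypothetical data; the paper's Table 1 values are not in the text).
p = [
    [[1, 3, 4, 1], [3, 8, 2, 1], [3, 5, 4, 7]],  # job 0: 3 operations
    [[4, 1, 1, 4], [2, 3, 9, 3], [9, 1, 2, 2]],  # job 1: 3 operations
    [[8, 6, 3, 5], [4, 5, 8, 1]],                # job 2: 2 operations
]
n, m = len(p), 4

def spt_assign(p):
    """Pick the fastest machine for each operation in turn, then penalize
    that machine's column in the working copy D' (Figure 1's update)."""
    d_prime = [[row[:] for row in job] for job in p]  # D' starts as a copy of D
    assign = {}
    for i in range(n):
        for k in range(len(p[i])):
            pos = min(range(m), key=lambda j: d_prime[i][k][j])
            assign[(i, k)] = pos
            for k2 in range(k + 1, len(p[i])):        # later ops of job i
                d_prime[i][k2][pos] += p[i][k][pos]
            for i2 in range(i + 1, n):                # all ops of later jobs
                for k2 in range(len(p[i2])):
                    d_prime[i2][k2][pos] += p[i][k][pos]
    return assign

def evaluate(assign):
    """Greedy timing: an operation starts once its job predecessor and its
    machine are both free; returns (t_M, W_T, W_M) as in Eqs. (1)-(3)."""
    machine_free, W, t_M = [0] * m, [0] * m, 0
    for i in range(n):
        job_free = 0
        for k in range(len(p[i])):
            j = assign[(i, k)]
            finish = max(job_free, machine_free[j]) + p[i][k][j]
            job_free = machine_free[j] = finish
            W[j] += p[i][k][j]
            t_M = max(t_M, finish)
    return t_M, sum(W), max(W)

assignment = spt_assign(p)
print("assignment:", assignment)
print("(t_M, W_T, W_M) =", evaluate(assignment))
```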
4. Genetic Algorithm Approach
There are three parts in this section: firstly, some traditional representations (Mesghouni, 1999); secondly, Imed Kacem's approach (Kacem, Hammadi & Borne, 2002); and thirdly, the multistage operation-based representation.

4.1 Traditional Representations of GA
4.1.1 Parallel Machine Representation (PM-R)
The chromosome is a list of machines placed in parallel (see Table 5). For each machine, we associate the operations to execute. Each operation is coded by three elements: operation o_k, job J_i, and t^S_ikj (starting time of operation o_ik on the machine M_j).
Table 5. Parallel machine representation.

4.1.2 Parallel Jobs Representation (PJ-R)
The chromosome is represented by a list of jobs, shown in Table 6. Information for each job is shown in the corresponding row, where each entry consists of two terms: the machine M_j which executes the operation, and the corresponding starting time t^S_ikj.
Table 6. Parallel jobs representation.

4.2 Imed Kacem's approach
Imed Kacem proposed the Operations Machines Representation (OM-R) approach (Kacem, Hammadi & Borne, 2002), which is based on a traditional representation called the Schemata Theorem Representation (ST-R). It was first introduced into GAs by Holland (Charon, Germa & Hudry, 1996). In the case of a binary coding, a schemata is a chromosome model in which some genes are fixed and the others are free (see Figure 2). Positions 4 and 6 are occupied by the symbol "*". This symbol indicates that the considered genes can take "0" or "1" as value. Thus, chromosomes C1 and C2 respect the model imposed by the schemata S.
Figure 2. Schemata example over positions 1-8: S = 00*1*001, with a conforming chromosome such as C1 = 00110001.
Based on the ST-R approach, Kacem expanded it to the Operations Machines Representation (OM-R). It consists in representing the schedule in the same assignment table S. We replace each entry S_ikj = 1 by the couple (t^S_ik, t^F_ik), while the entries S_ikj = 0 are unchanged. To explain this coding, we present the same schedule introduced before (Table 7). Furthermore, operation-based crossover and two kinds of mutation (an operator of mutation reducing the effective processing time, and an operator of mutation balancing the workloads of the machines) are included in this approach.

4.3 Multistage operation-based approach
Considering the GA approach proposed by Imed Kacem, it is complex even when all the objectives are taken into account, because all the crossover and mutation operators act on a chromosome described as a constructed table. Therefore, it spends more CPU time finding solutions; hence, a multistage operation-based GA approach has been proposed. Figure 3 presents an f-JSP which includes 3 jobs operated on 4 machines; we add another two nodes (a starting node and a terminal node) to the figure to make it a formal network presentation. Denoting each operation as one stage and each machine as one state, the problem can be formulated as an 8-stage, 4-state problem. Connected by the dashed arcs, a feasible schedule can be obtained (see Figure 3). It is obviously simpler than all the representations presented before, and it can easily be combined with almost all kinds of classic crossover and mutation methods. Figure 4 and Figure 5 give the encoding and decoding procedures, respectively.
Figure 3. Example of the Multistage Operation-based Representation (MO-R): chromosome V = [4, 3, 4, 2, 2, 1, 4, 1] for operation IDs 1-8.
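Before the numerical experiments, the genetic operators named in Section 5 can be sketched directly on the MO-R chromosome. The following is a minimal illustration under assumed data: the availability sets and processing times are hypothetical stand-ins (not the Table 8/9 instances), "local-search mutation" is interpreted here as moving one operation to its fastest available machine, and under total flexibility the children need no repair because every gene remains a valid machine.

```python
import random

OPS = 8                                      # stages: operation IDs 1-8 of Figure 3
U = {op: [1, 2, 3, 4] for op in range(OPS)}  # total flexibility: U_ik = U
p_time = {(op, mach): random.randint(1, 9)   # hypothetical p_ikj stand-ins
          for op in range(OPS) for mach in U[op]}

def one_cut_crossover(v1, v2):
    """Swap the tails of two parent chromosomes at a random cut point."""
    cut = random.randint(1, OPS - 1)
    return v1[:cut] + v2[cut:], v2[:cut] + v1[cut:]

def local_search_mutation(v):
    """Reassign one random operation to its fastest available machine
    (one reading of the 'mutation reducing effective processing time')."""
    op = random.randrange(OPS)
    child = v[:]
    child[op] = min(U[op], key=lambda mach: p_time[(op, mach)])
    return child

v1 = [4, 3, 4, 2, 2, 1, 4, 1]                # the Figure 3 chromosome
v2 = [random.choice(U[op]) for op in range(OPS)]
c1, c2 = one_cut_crossover(v1, v2)
print("children:", c1, c2)
print("mutated :", local_search_mutation(c1))
```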
5. Numerical Experiment
In this paper, we use the same datasets (shown in Table 8 and Table 9) as in Kacem's paper to compare the results. They are f-JSP instances with both partial flexibility (U_ik ⊆ U) and total flexibility (U_ik = U). The symbol "-" in Table 8 shows that the machine is not available for the corresponding operation. We have used random selection to generate the initial population. Then we applied the multistage operation-based GA (moGA, combining one-cut-point crossover and local-search mutation) with the following parameters: popSize = 100; p_M = 0.3; p_C = 0.6. All results are summarized in Table 10 and Table 11; the values for the different approaches show that moGA outperforms all the others.

Table 10. Result comparisons (8x8).
        Heuristic (SPT)   Classic GA   Kacem's approach   moGA
t_M     19                16           16                 15
W_T     91                77           75                 73
W_M     16                14           14                 14

Table 11. Result comparisons (10x10).
        Heuristic (SPT)   Classic GA   Kacem's approach   moGA
t_M     16                7            7                  7
W_T     59                53           45                 43
W_M     16                7            6                  5

6. Conclusion
Some GA approaches have been used for solving the f-JSP recently; however, their efficiency is mainly affected by the complexity of the chromosome representation. In this paper, a new multistage operation-based GA (moGA) is proposed to solve the f-JSP. The proposed algorithm is designed to optimize three objectives: the makespan t_M, the total workload of all machines W_T, and the maximum workload over all machines W_M. Using numerical examples from related works, we demonstrate the efficiency of moGA; its results are better than those of the other related approaches.

References
Najid, N.M., Dauzere-Peres, S. and Zaidat, A. (2002), A modified simulated annealing method for flexible job shop scheduling problem, IEEE International Conference on Systems, Man and Cybernetics, 5: 6.
Kacem, I., Hammadi, S. and Borne, P. (2002), Approach by localization and multiobjective evolutionary optimization for flexible job-shop scheduling problems, IEEE Transactions on Systems, Man and Cybernetics, Part C, 32(1): 408-419.
Gen, M. and Cheng, R. (1997), Genetic Algorithms & Engineering Design, John Wiley & Sons.
Gen, M. and Cheng, R. (2000), Genetic Algorithms & Engineering Optimization, John Wiley & Sons.
Blazewicz, J., Domschke, W. and Pesch, E. (1996), The job shop scheduling problem: conventional and new solution techniques, European Journal of Operational Research, 93: 1-33.
Brucker, P. and Schlie, R. (1990), Job-shop scheduling with multi-purpose machines, Computing, 45: 369-375.
Brandimarte, P. (1993), Routing and scheduling in a flexible job shop by tabu search, Annals of Operations Research, 41: 157-183.
Mati, Y., Rezg, N. and Xie, X. (2001), An integrated greedy heuristic for a flexible job shop scheduling problem, IEEE International Conference on Systems, Man, and Cybernetics, 4: 2534-2539.
Mesghouni, K. (1999), Application des algorithmes évolutionnistes dans les problèmes d'optimisation en ordonnancement de production, Ph.D. dissertation, USTL 2451.
Charon, I., Germa, A. and Hudry, O. (1996), Méthodes d'optimisation combinatoires, Paris, France: Masson.
A Review of Research on the Influence of Emotion on Directed Forgetting
A Review of Research on the Influence of Emotion on Directed Forgetting
[Abstract] Directed forgetting refers to impairment of memory content induced by an experimenter's "forget" instruction, and it emphasizes the intentional and directed nature of an individual's forgetting.
The influence of emotion on directed forgetting is jointly shaped by the emotional materials used, the individual's emotional state, personality traits, and personal experience.
This paper summarizes the field from four angles: emotional materials, emotional states, special populations, and the shortcomings of and prospects for existing research. Its main purpose is to give readers a fuller picture of research relating directed forgetting to emotion and to open up new lines of inquiry for research at home and abroad.
[Keywords] emotion; directed forgetting; review
1 Introduction
Intentional forgetting of emotional memories means that people consciously and actively forget memory content that carries emotional coloring, mostly negative emotional memories that bring pain.
Research on directed forgetting in different individuals has likewise focused mainly on individuals with negative traits and on patients with special experiences.
2 Emotional Materials
As for emotional materials, emotional words, pictures, faces, and autobiographical events can all serve as experimental materials.
The directed forgetting effect is stronger for words than for pictures, while negative emotional pictures are more arousing than negative words.
Differences in experimental materials may be the reason why findings on the intentional forgetting of negative emotional memories are inconsistent.
2.1 Emotional Words
Directed forgetting experiments generally use high-frequency positive, negative, and neutral words as the emotional word stimuli, with the words rated for valence, arousal, and familiarity before the experiment.
Most results support the conclusions that the directed forgetting effect for negative words is smaller than that for neutral words, and that the effect for neutral words is stronger than that for positive and negative words; studies differ mainly over whether the differences in the directed forgetting effect across word valences are significant, and over whether negative words can likewise be intentionally forgotten.
Research has found that forgetting negative words is no harder than forgetting neutral words, but participants tend to remember negative words [1].
Although the emotionality of the material may interfere with the directed forgetting effect, participants showed a clear directed forgetting effect for positive, neutral, and negative words alike, indicating that individuals can also intentionally forget emotional information; negative words are simply harder to forget.
2.2 Emotional Pictures
How picture valence and arousal are controlled affects the final experimental results in different ways.
Studies have found that the intentional forgetting effect appears only in the neutral-picture condition: when highly arousing emotional pictures and neutral pictures are used as experimental materials, only the neutral materials show a directed forgetting effect, and participants cannot directedly forget the emotional materials [2].
Negative emotional pictures interfere with intentional forgetting: compared with neutral pictures, participants find negative pictures harder to forget and easier to remember.
Geo-Neutrinos
Geo-Neutrinos
S.T. Dye a,b
a Department of Physics and Astronomy, University of Hawaii at Manoa, 2505 Correa Road, Honolulu, Hawaii, 96822 USA
b College of Natural Sciences, Hawaii Pacific University, 45-045 Kamehameha Highway, Kaneohe, Hawaii 96744 USA

This paper briefly reviews recent developments in the field of geo-neutrinos. It describes current and future detection projects, discusses modeling projects, suggests an observational program, and visits geo-reactor hypotheses.

1. INTRODUCTION
Geo-neutrinos are electron antineutrinos emitted in the beta decay of long-lived, terrestrial isotopes and their daughters. Due to the distributed nature of their source, geo-neutrinos are not suited for studies of neutrino oscillation. Geo-neutrino flux measurements do provide experimental evidence for the quantity and distribution of radioactive elements internally heating Earth. Radiogenic heating helps power plate tectonics, hot-spot volcanism, mantle convection, and possibly the geo-dynamo. Information on the extent and location of this heating better defines the thermal dynamics and chemical composition of Earth. Fiorentini et al. [1] provide a comprehensive review of geo-neutrinos.

Geo-neutrino detection typically uses the same technology employed by reactor antineutrino experiments for many decades [2]. In this technique, an array of hemispherical photomultiplier tubes monitors a large central volume of scintillating liquid. Free protons in the scintillating liquid are the targets for geo-neutrinos. The detected signal is a coincidence of products from the inverse beta interaction. A prompt positron provides a measure of the geo-neutrino energy and is followed by a mono-energetic neutron capture. This technique allows a spectral measurement of geo-neutrinos from uranium-238 and thorium-232. Geo-neutrinos from all other isotopes lack the energy to initiate the inverse beta reaction on free protons. Fig. 1 shows the calculated geo-neutrino energy spectrum [3]. A project to directly measure these spectra is ongoing [4]. The highest energy geo-neutrinos derive only from uranium-238. This enables separate measurement of geo-neutrinos from uranium-238 and thorium-232. The traditional inverse beta coincidence technique affords scant information on geo-neutrino direction. This impedes determination of geo-neutrino source locations.

Figure 1. The calculated energy spectrum of geo-neutrinos from uranium-238, thorium-232, and potassium-40. Note the maximum energy of geo-neutrinos from potassium-40 is less than the threshold energy of the inverse beta reaction.

Two underground detectors are currently recording interactions of geo-neutrinos from uranium-238 and thorium-232, in Japan and Italy, using the inverse beta coincidence on free protons in scintillating liquid. Several other projects are in the planning stages. Future projects dedicated to measuring and modeling the planet's geo-neutrino flux would define the amount and distribution of heat-producing elements in the Earth and provide transformative insights into the thermal history and dynamic processes of the mantle.
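A short calculation makes the threshold statement in the Figure 1 caption concrete. The sketch below computes the inverse beta threshold from the particle masses and compares it with beta endpoint energies taken from the general literature (the endpoint values are not quoted in this paper):

```python
# Kinematic threshold of inverse beta decay, anti-nu_e + p -> e+ + n,
# for a proton target at rest: E_thr = ((m_n + m_e)^2 - m_p^2) / (2 m_p).
m_p, m_n, m_e = 938.272, 939.565, 0.511   # masses in MeV
E_thr = ((m_n + m_e) ** 2 - m_p ** 2) / (2 * m_p)
print(f"IBD threshold: {E_thr:.3f} MeV")  # ~1.806 MeV

# Beta endpoint energies (MeV) of the geo-neutrino sources, from the
# general literature:
endpoints = {"K-40": 1.311,
             "Th-232 series (212Bi)": 2.25,
             "U-238 series (214Bi)": 3.27}
for src, E in endpoints.items():
    status = "detectable via IBD" if E > E_thr else "below threshold"
    print(src, status)
# K-40 falls below threshold, which is why its geo-neutrinos are invisible
# to the inverse beta technique, as noted in the Figure 1 caption.
```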
2. CURRENT PROJECTS
The Kamioka Liquid Scintillator Antineutrino Detector (KamLAND), operating since March 2002 in a mine in central Japan with 1000 tons of scintillating liquid, meets its physics agenda by measuring the oscillated spectrum of electron antineutrinos streaming from many nearby nuclear reactors. A recent analysis of KamLAND data estimates the geo-neutrino flux with 36% uncertainty when fixing the thorium-to-uranium mass ratio [5]. This estimate does not meaningfully constrain radiogenic heat production; it implies an upper limit greater than the measured terrestrial heat flow. Moreover, the measured spectrum shows no evidence for detection of the highest energy geo-neutrinos from uranium-238. The large antineutrino flux from the nearby reactors and radioactivity inside the detector compete with the geo-neutrino signal. However, removal of radioactivity in the scintillating liquid should be complete soon, with more sensitive geo-neutrino studies ensuing. Borexino, operating since May 2007 with 300 tons of scintillating liquid in a tunnel in the Laboratori Nazionali del Gran Sasso in Italy, meets its physics agenda by measuring low energy solar neutrinos scattering on electrons [6]. An analysis of electron antineutrino data including geo-neutrinos is in progress. The geo-neutrino signal-to-background ratio is expected to be substantially larger for Borexino than for KamLAND due to lower radioactivity and a lower flux of antineutrinos from nuclear reactors. Although their locations and sizes are not optimized for geo-neutrino investigations, KamLAND and Borexino are pioneering measurements that advance new scientific inquiry and aid future project development.

3. FUTURE PROJECTS
Scintillation detector projects in the design or planning stage offer opportunities for precision geo-neutrino measurements with low background. The next phase of the Sudbury Neutrino Observatory, called SNO+, would operate in a mine in Ontario, Canada, perhaps by early 2011 [7]. Comparable in size to KamLAND, it would be the world's deepest geo-neutrino observatory and the first situated mid-continent in North America. Other mid-continent projects would exploit the low reactor antineutrino flux at the Homestake mine in South Dakota [8] and at the Baksan Neutrino Observatory in the Caucasus [9]. The Low Energy Neutrino Astrophysics detector, called LENA, is under consideration for operation in a mine in Finland. It would be the largest project, at 50,000 tons of scintillating liquid [10]. The Hawaii Anti-Neutrino Observatory, called Hanohano, is designed for deployment in the deep ocean with 10,000 tons of scintillating liquid. Operating in the tropical Pacific Ocean far from continental crust and nuclear reactors, it would principally observe geo-neutrinos from the mantle [11]. Being capable of redeployment at alternate sites, it could potentially measure lateral heterogeneity of uranium and thorium in the mantle. The locations and overburdens of existing and proposed geo-neutrino project sites are compared in Table 1.

Table 1. Location and overburden of existing and proposed geo-neutrino project sites.
Project     Lat (N)   Lon (E)    H (mwe)
KamLAND     36.43     137.31     2700
Borexino    42.45     13.57      3700
SNO+        46.47     -81.20     6000
Homestake   44.35     -103.75    4200
Baksan      43.27     42.68      4800
LENA        63.66     26.05      4060
Hanohano    19.72     -156.32    4500

4. MODELING PROJECTS
The two existing geo-neutrino flux models are largely the product of physicists [12,13]. These models enhance neutrino oscillation studies and enable geological investigations. Models typically establish a budget of uranium and thorium prescribed by a primitive mantle composition. Applying mass balance relationships to estimates of uranium and thorium in various Earth reservoirs predicts the distribution of these elements and the resulting geo-neutrino flux. The predictions by both models agree with the geo-neutrino flux recently estimated by KamLAND. The predicted geo-neutrino flux at existing and proposed detection sites depends on the modeled distribution of uranium and thorium. Detection rates using the inverse beta reaction are given in terrestrial neutrino units (TNU). One TNU is equivalent to one interaction per 10^32 free protons per year. The predicted detection rate depends strongly on location relative to the continents. This is illustrated in Fig. 2. The predicted contribution from a radially symmetric mantle ranges from 7-22 TNU.

Figure 2. The geo-neutrino detection rate at the surface of the Earth due to only the crust, in TNU, predicted using the methods and parameters from Enomoto et al. [13].
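To put the TNU unit into perspective, here is a minimal sketch assuming a CH2-like scintillator stoichiometry (an approximation; real scintillator compositions differ slightly) and a purely hypothetical detection rate of 35 TNU:

```python
AVOGADRO = 6.022e23

def free_protons(mass_kilotons):
    """Free (hydrogen) protons in a CH2-like scintillator:
    2 H atoms per 14 g of CH2 unit mass (approximate stoichiometry)."""
    grams = mass_kilotons * 1e9
    return grams * 2.0 / 14.0 * AVOGADRO

def events_per_year(rate_tnu, mass_kilotons):
    """1 TNU = 1 interaction per 10^32 free protons per year."""
    return rate_tnu * free_protons(mass_kilotons) / 1e32

# Hypothetical 35 TNU rate, for the detector sizes mentioned above:
for name, kt in [("KamLAND-size (1 kt)", 1.0), ("Hanohano (10 kt)", 10.0)]:
    print(name, round(events_per_year(35, kt), 1), "events/yr (before cuts)")
# A 1 kt detector holds roughly 0.86e32 free protons, so 35 TNU corresponds
# to about 30 interactions per year before efficiency and selection cuts.
```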
5. OBSERVATIONAL PROGRAM
Determining the average concentrations of uranium and thorium in the mantle and continental crust is possible by comparing geo-neutrino observations at two geologically distinct locations. Whereas a continental observatory would primarily measure geo-neutrinos from the continental crust, an oceanic observatory would primarily measure geo-neutrinos from the mantle. Observations with Hanohano in the mid-Pacific and a detector with 3000 tons of scintillating liquid operating at Homestake would determine the global uranium content to within about 20% uncertainty in three to four years [14]. The period of observation, which is greater for determination of thorium content, depends on the predicted geo-neutrino flux, background, and detection efficiencies. Complementary measurements of geo-neutrinos from mid-continental and mid-oceanic detectors would constrain the global content of uranium and thorium, and thereby radiogenic heating of the Earth.

6. GEO-REACTOR
Whereas radiogenic heating of Earth is certain, although imperfectly quantified, heating by natural fission reactors is speculative. Provocatively, proposals suggest nuclear reactors may exist in or near Earth's core. These geo-reactors would emit electron antineutrinos just as do human-made nuclear reactors. KamLAND data restrict the power of an Earth-centered geo-reactor to less than 20% of the terrestrial heat flow [5]. This eliminates some geo-reactor models [15] yet allows others, including one recently highlighted model that suggests a geo-reactor at the core-mantle boundary [16]. An antineutrino detector operating at a location where the flux from human-made reactors is a minimum, such as Hanohano in the mid-Pacific, would be sensitive to geo-reactors with power as low as a few percent of the terrestrial heat flow.

7. CONCLUSIONS
The detection of geo-neutrinos for the study of Earth's composition and energy dynamics is established by two ongoing projects. Meaningful constraints on terrestrial radiogenic heating are unlikely to result from these projects due to their limited statistical reach. Larger, strategically located future projects could estimate average uranium and thorium concentrations in the main Earth reservoirs and test geo-reactor hypotheses, thereby providing transformative insights into Earth's thermal history and dynamic processes. Measurement of geo-neutrino direction and determination of the flux due to potassium-40 would significantly advance these investigations but require new techniques.

REFERENCES
1. G. Fiorentini, M. Lissia, and F. Mantovani, Phys. Rep. 453 (2007) 117-172.
2. F. Reines and C.L. Cowan Jr., Phys. Rev. 92 (1953) 830-831.
3. T. Araki, et al., Nature 436 (2005) 499-503.
4. G. Bellini, et al., arXiv:0712.0298 [hep-ph].
5. S. Abe, et al., Phys. Rev. Lett. 100 (2008) 221803.
6. C. Arpesella, et al., Phys. Lett. B 658 (2008) 101-108.
7. M. Chen, Earth Moon Planets 99 (2006) 221-228; M.C. Chen et al., arXiv:0810.3694 [hep-ex].
8. N. Tolich, et al., Earth Moon Planets 99 (2006) 229-240.
9. G.V. Domogatsky et al., Phys. Atom. Nucl. 69 (2006) 1894-1898.
10. K.A. Hochmuth, et al., Earth Moon Planets 99 (2006) 253-264.
11. S.T. Dye, et al., Earth Moon Planets 99 (2006) 241-252; J.G. Learned, S.T. Dye, and S. Pakvasa, arXiv:0810.4975 [hep-ex].
12. F. Mantovani et al., Phys. Rev. D 69 (2004) 013001.
13. S. Enomoto et al., Earth Planet. Sci. Lett. 258 (2007) 147-159.
14. S.T. Dye and E.H. Guillian, Proc. Natl. Acad. Sci. 105 (2008) 44-47.
15. V.D. Rusov et al., J. Geophys. Res. 112 (2007) B09203.
16. R.J. de Meijer and W. van Westrenen, S. Afr. J. Sci. 104 (2008) 111-118.
PEER-REVIEWED PUBLICATIONS
PEER-REVIEWED PUBLICATIONS

1. Low-frequency ultrasonic Bessel-like collimated beam generation from radial modes of piezoelectric transducers
V.K. Chillara, C. Pantea, and D.N. Sinha
Appl. Phys. Lett., vol. 110, (2017), 064101.

2. Acoustic Characterization of Fluorinert FC-43 Liquid with Helium Gas Bubbles: Numerical Experiments
C. Vanhille, C. Pantea, and D.N. Sinha
Shock and Vibration, vol. 2017, (2017), 2518168.

3. High frequency signal acquisition using a smartphone in an undergraduate teaching laboratory: Applications in ultrasonic resonance spectra
B.T. Sturtevant, C. Pantea, and D.N. Sinha
J. Acoust. Soc. Am., vol. 140, issue 4, (2016), pp. 2810.

4. Resonant Ultrasound Spectroscopy Studies of Berea Sandstone at High Temperature
E.S. Davis, B.T. Sturtevant, D.N. Sinha, and C. Pantea
J. Geophys. Res.: Solid Earth, vol. 121, issue 9, (2016), pp. 6401.

5. Measured sound speeds and acoustic nonlinearity parameter in liquid water up to 523 K and 14 MPa
B.T. Sturtevant, C. Pantea, D.N. Sinha
AIP Advances, vol. 6, issue 7, (2016), pp. 075310.

6. The acoustic nonlinearity parameter in Fluorinert up to 381 K and 13.8 MPa
B.T. Sturtevant, C. Pantea, D.N. Sinha
J. Acoust. Soc. Am., vol. 138, issue 1, (2015), pp. EL31-35.

7. Broadband Unidirectional Ultrasound Propagation Using Sonic Crystal and Nonlinear Medium
D.N. Sinha and C. Pantea
Emerging Materials Research, vol. 2, issue EMR3, (2013), pp. 117-126.

8. Evaluation of the Transmission Line Model for Couplant Layer Corrections in Pulse-Echo Measurements
B.T. Sturtevant, C. Pantea, D.N. Sinha
IEEE Trans. Ultrason., Ferroelect., Freq. Contr., vol. 60, no. 5, (2013), pp. 943-953.

9. Determination of acoustical nonlinear parameter β of water using the finite amplitude method
C. Pantea, C.F. Osterhoudt, D.N. Sinha
Ultrasonics, vol. 53, no. 5, (2013), pp. 1012-1019.

10. An acoustic resonance measurement cell for liquid property determinations up to 250°C
B.T. Sturtevant, C. Pantea, D.N. Sinha
Rev. Sci. Instrum., vol. 83, no. 11, (2012), art. no. 115106.

11. Creating a collimated ultrasound beam in highly attenuating fluids
B. Raeymaekers, C. Pantea, D.N. Sinha
Ultrasonics, vol. 52, no. 4, (2012), pp. 564-570.

12. Manipulation of diamond nanoparticles using bulk acoustic waves
B. Raeymaekers, C. Pantea, D.N. Sinha
J. Appl. Phys., vol. 109, (2011), pp. 014317.

13. High-pressure neutron diffraction studies at LANSCE
Y. Zhao, J. Zhang, H. Xu, K.A. Lokshin, D. He, J. Qian, C. Pantea, L.L. Daemen, S.C. Vogel, Y. Ding, J. Xu
Appl. Phys. A: Mater. Sci. & Processing, vol. 99, no. 3, (2010), pp. 585-599. Special Issue: "Emerging Applications of Neutron Scattering in Materials Science and Engineering"

14. Elastic constants of osmium between 5 and 300 K
C. Pantea, I. Stroe, H. Ledbetter, J.B. Betts, Y. Zhao, L.L. Daemen, H. Cynn, A. Migliori
Phys. Rev. B, vol. 80, no. 2, (2009), pp. 024112-1-10.

15. Bulk modulus of osmium, 4-300 K
C. Pantea, I. Mihut, H. Ledbetter, J.B. Betts, Y. Zhao, L.L. Daemen, H. Cynn, A. Migliori
Acta Mater., vol. 57, iss. 2, (2009), pp. 544-548.

16. Diamond's elastic stiffnesses from 322 K to 10 K
A. Migliori, H. Ledbetter, R.G. Leisure, C. Pantea, J.B. Betts
J. Appl. Phys., vol. 104, no. 5, (2008), pp. 053512-1-4.

17. Structure of diamond-silicon carbide nanocomposites as a function of sintering temperature at 8 GPa
L. Balogh, S. Nauyoks, T.W. Zerda, C. Pantea, S. Stelmakh, B. Palosz, T. Ungar
Mat. Sci. Eng. A, vol. 487, no. 1-2, (2008), pp. 180-8.

18. Direct measurement of spin correlation using magnetostriction
V.S. Zapf, V.F. Correa, P. Sengupta, C.D. Batista, M. Tsukamoto, N. Kawashima, P. Egan, C. Pantea, A. Migliori, J.B. Betts, M. Jaime, A. Paduan-Filho
Phys. Rev. B, vol. 77, no. 2, (2008), pp. 020404(R)-1-4.

19. Osmium's Debye temperature
C. Pantea, I. Stroe, H. Ledbetter, J.B. Betts, Y. Zhao, L.L. Daemen, H. Cynn, A. Migliori
J. Phys. Chem. Solids, vol. 69, no. 1, (2008), pp. 211-213.

20. High-Temperature Phase Transitions in CsH2PO4 under Ambient and High-Pressure Conditions: A Synchrotron X-ray Diffraction Study
C.E. Botez, J.D. Hermosillo, J. Zhang, J. Qian, Y. Zhao, J. Majzlan, R.R. Chianelli, C. Pantea
J. Chem. Phys., vol. 127, (2007), pp. 194701-1-6.

21. Alpha-plutonium's polycrystalline elastic constants over its full temperature range
A. Migliori, C. Pantea, H. Ledbetter, J.B. Betts, J.E. Mitchell, M. Ramos, F. Freibert, D. Dooley, S. Harrington, C. Mielke
J. Acoust. Soc. Am., vol. 122, no. 4, (2007), pp. 1994-2001.

22. Temperature and time-dependence of the elastic moduli of Pu and Pu-Ga alloys
A. Migliori, I. Mihut, J.B. Betts, M. Ramos, C. Mielke, C. Pantea, D. Miller
J. Alloy. Compd., vol. 444-445, (2007), pp. 133-137.

23. Investigation of relaxation of nanodiamond surface in real and reciprocal spaces
B. Palosz, C. Pantea, E. Grzanka, S. Stelmakh, Th. Proffen, T.W. Zerda, W. Palosz
Diam. Relat. Mater., vol. 15, no. 11-12, (2006), pp. 1813.

24. Microstructure of diamond-SiC nanocomposites determined by X-ray line profile analysis
J. Gubicza, T. Ungar, Y. Wang, G.A. Voronin, C. Pantea, T.W. Zerda
Diam. Relat. Mater., vol. 15, no. 9, (2006), pp. 1452.

25. Pressure-induced elastic softening of monocrystalline zirconium tungstate at 300 K
C. Pantea, A. Migliori, P.B. Littlewood, Y. Zhao, H. Ledbetter, J.C. Lashley, T. Kimura, J. Van Duijn, and G.R. Kowach
Phys. Rev. B, vol. 73, no. 21, (2006), art. no. 214118.

26. Evidence for a Structural Transition to a Superprotonic CsH2PO4 Phase Under High Pressure
C.E. Botez, R.R. Chianelli, J. Zhang, J. Qian, Y. Zhao, J. Majzlan, C. Pantea
in Materials in Extreme Environments, edited by C. Mailhiot, P.B. Saganti, D. Ila (Mater. Res. Soc. Symp. Proc. 929E, Warrendale, PA, 2006), 0929-II02-01.

27. Digital ultrasonic pulse-echo overlap system and algorithm for unambiguous determination of pulse transit time
C. Pantea, D.G. Rickel, A. Migliori, J. Zhang, Y. Zhao, S. El-Khatib, R.G. Leisure, B. Li
Rev. Sci. Instrum., vol. 76, no. 11, (2005), art. no. 114902.

28. Kinetics of the reaction between diamond and silicon at high pressure and temperature
C. Pantea, G.A. Voronin, T.W. Zerda
J. Appl. Phys., vol. 98, no. 7, (2005), art. no. 073512.

29. Kinetics of SiC formation during the high P-T reaction between diamond and silicon
C. Pantea, G.A. Voronin, T.W. Zerda, J. Zhang, L. Wang, Y. Wang, T. Uchida, Y. Zhao
Diam. Relat. Mater., vol. 14, no. 10, (2005), pp. 1611.

30. Experimental Constraints on the Phase Diagram of Zirconium Metal
J. Zhang, Y. Zhao, C. Pantea, J. Qian, L.L. Daemen, P.A. Rigg, R.S. Hixson, C.W. Greeff, G.T. Gray III, Y. Yang, L. Wang, Y. Wang, T. Uchida
J. Phys. Chem. Solids, vol. 66, (2005), pp. 1213.

31. Thermal equations of state of α, β, and ω phases of zirconium
Y. Zhao, J. Zhang, C. Pantea, J. Qian, L.L. Daemen, P.A. Rigg, R.S. Hixson, G.T. Gray III, Y. Yang, L. Wang, Y. Wang, T. Uchida
Phys. Rev. B, vol. 71, no. 18, (2005), pp. 184119.

32. Yield Strength of α-Silicon Nitride at High Pressure and High Temperature
J. Qian, C. Pantea, J. Zhang, L.L. Daemen, Y. Zhao, M. Tang, T. Uchida, Y. Wang
J. Am. Ceram. Soc., vol. 88, no. 4, (2005), pp. 903.

33. Microstructure of nanocrystalline diamond powders studied by powder diffractometry
B. Palosz, E. Grzanka, C. Pantea, T.W. Zerda, Y. Wang, J. Gubicza, T. Ungar
J. Appl. Phys., vol. 97, no. 6, (2005), pp. 064316.

34. Thermal equation of state of osmium: a synchrotron x-ray diffraction study
G.A. Voronin, C. Pantea, T.W. Zerda, L. Wang, Y. Zhao
J. Phys. Chem. Solids, vol. 66, no. 5, (2005), pp. 706.

35. Size and shape of crystallites and internal stresses in carbon blacks
T. Ungar, J. Gubicza, G. Tichy, C. Pantea, T.W. Zerda
Compos. Part A-Appl. S., vol. 36, (2005), pp. 431.

36. Structural influence of erbium centers on silicon nanocrystal phase transitions
R.A. Senter, C. Pantea, Y. Wang, H. Liu, T.W. Zerda, J.L. Coffer
Phys. Rev. Lett., vol. 93, no. 17, (2004), pp. 175502.

37. Graphitization of diamond of different sizes at high pressure-high temperature
J. Qian, C. Pantea, J. Huang, T.W. Zerda, Y. Zhao
Carbon, vol. 42, no. 12-13, (2004), pp. 2691.

38. High pressure effect on dislocation density in nano-size diamond crystals
C. Pantea, J. Gubicza, T. Ungar, G.A. Voronin, N.H. Nam, T.W. Zerda
Diam. Relat. Mater., vol. 13, no. 10, (2004), pp. 1753.

39. Powder Neutron Diffraction of Wustite (Fe0.93O) to 12 GPa using large moissanite anvils
J. Xu, Y. Ding, S.D. Jacobsen, H.K. Mao, R.J. Hemley, J. Zhang, J. Qian, C. Pantea, S.C. Vogel, D.J. Williams, Y. Zhao
High Press. Res., vol. 24, no. 2, (2004), pp. 247.

40. Enhancement of fracture toughness in nanostructured diamond-SiC composites
Y. Zhao, J. Qian, L.L. Daemen, C. Pantea, J. Zhang, G.A. Voronin, T.W. Zerda
Appl. Phys. Lett., vol. 84, no. 8, (2004), pp. 1356.

41. In situ x-ray diffraction study of silicon at pressures up to 15.5 GPa and temperatures up to 1073 K
G.A. Voronin, C. Pantea, T.W. Zerda, L. Wang, Y. Zhao
Phys. Rev. B, vol. 68, no. 2, (2003), pp. 020102.

42. In situ x-ray diffraction study of germanium at pressures up to 11 GPa and temperatures up to 950 K
G.A. Voronin, C. Pantea, T.W. Zerda, J. Zhang, L. Wang, Y. Zhao
J. Phys. Chem. Solids, vol. 64, no. 11, (2003), pp. 2113.

43. Dislocation density and graphitization of diamond crystals
C. Pantea, J. Gubicza, T. Ungar, G.A. Voronin, T.W. Zerda
Phys. Rev. B, vol. 66, no. 9, (2002), pp. 094106.

44. Microstructure of carbon blacks determined by X-ray diffraction profile analysis
T. Ungar, J. Gubicza, G. Ribarik, C. Pantea, T.W. Zerda
Carbon, vol. 40, no. 6, (2002), pp. 929.

45. High pressure study of graphitization of diamond crystals
C. Pantea, J. Qian, G.A. Voronin, T.W. Zerda
J. Appl. Phys., vol. 91, no. 4, (2002), pp. 1957.

46. Oriented growth of β-SiC on diamond crystals at high pressure
G. Voronin, C. Pantea, T.W. Zerda
J. Appl. Phys., vol. 91, no. 4, (2002), pp. 1957.

47. Partial graphitization of diamond crystals under high-pressure and high-temperature conditions
J. Qian, C. Pantea, G. Voronin, T.W. Zerda
J. Appl. Phys., vol. 90, no. 3, (2001), pp. 1632.

48. Structure of carbon blacks
T.W. Zerda, J. Qian, C. Pantea, T. Ungar
Mat. Res. Soc. Symp. Proc., vol. 661, (2001), pp. KK6.4.1.

49. A study on the electrodic process by Electrochemical Impedance Spectroscopy (Studiul procesului electrodic prin Spectroscopie de Impedanta Electrochimica)
F. Kormos, L. Sziraki, C. Pantea
Rev. Chim.-Bucharest, vol. 51, no. 4, (2000), pp. 415.

50. Enzimatic determination of urea in animal-origin whole blood and blood serum (Determinarea enzimatica a ureei din sange integral si ser sanguin de provenienta animala)
I. Tarsiche, F. Kormos, C. Pantea
Rev. Chim.-Bucharest, vol. 51, no. 1, (2000), pp. 8.

51. Redox sensors based on semiconductor film (Félvezető redoxi szenzor)
F. Kormos, C. Pantea
Magy. Kem. Foly., vol. 105, no. 9, (1999), pp. 379.

52. Raman spectroscopic investigations of the xCuO·(1-x)[3B2O3·K2O] glasses
D. Maniu, I. Ardelean, T. Iliescu, C. Pantea
J.
Mater. Sci. Lett., vol. 16, (1997), pp. 19.BOOK CHAPTERDevelopment of high P-T neutron diffraction at LANSCEY. Zhao, D. He, J. Qian, C. Pantea, K.A. Lokshin, J. Zhang, L.L. Daemenin Advances in High-Pressure Technology for Geophysical Applications, Elsevier, pp.461-474, 2005LANL INTERNAL PUBLICATIONFilling the Gap in Plutonium Properties. Studies at Intermediate Temperatures andPressuresA. Migliori, A.J. Hurd, Y. Zhao and C. PanteaLos Alamos Science, vol. 30, (2006), pp. 86-89.PATENTS1.Apparatus and method for visualization of particles suspended in a fluid and fluid flowpatterns using ultrasound - European Patent EP2612113, Nov 16, 20162.Apparatus and method for acoustic monitoring of steam quality and flow – United StatesPatent US 9,442,094, Sep 13, 20163.Acoustic source for generating an acoustic beam – United States Patent US 9,354,346,May 31, 20164.Device and method for generating a collimated beam of acoustic energy in a borehole -European Patent EP2577357, Sep 02, 20155.System and method for sonic wave measurements using an acoustic beam source - UnitedStates Patent US 9,103,944, Aug 11, 20156.Device and method for generating a collimated beam of acoustic energy in a borehole -European Patent EP2577358, Jun 17, 20157.Device and method for generating a beam of acoustic energy from a borehole, andapplications thereof - European Patent EP2297595, May 21, 20148.Device and method for generating a beam of acoustic energy from a borehole, andapplications thereof - United States Patent US 8,559,269, Oct 15, 20139.Device and method for generating a beam of acoustic energy from a borehole, andapplications thereof - United States Patent US 8,547,791, Oct 1, 201310.Device and method for generating a beam of acoustic energy from a borehole, andapplications thereof - United States Patent US 8,547,790, Oct 1, 201311.System for generating a beam of acoustic energy from a borehole, and applicationsthereof - United States Patent US 8,259,530, Sep 4, 201212.System for generating a beam of acoustic energy from a borehole, and applicationsthereof - United States Patent US 8,233,349, Jul 31, 20113.Device and method for generating a beam of acoustic energy from a borehole, andapplications thereof - United States Patent US 7,839,718, Nov 23, 2010.CONFERENCE PROCEEDINGS1.Broad-band acoustic low frequency collimated beam for ultrasonic imagingC. Pantea,D.N. SinhaProceedings of Meetings on Acoustics (POMA), vol. 19, (2013), pp. 045058.2.Broadband directional ultrasound propagation using sonic crystal and nonlinear mediumD.N. Sinha, C. PanteaProceedings of Meetings on Acoustics (POMA), vol. 19, (2013), pp. 065047.3.Determination of the Acoustic Nonlinearity Parameter in Liquid Water up to 250°C and14 MPaB.T. Sturtevant,C. Pantea,D.N. SinhaProc. 2012 IEEE Int'l Ultrason. Symp., pp. 285-288.4.Acoustic Nonlinearity in Fluorinert FC-43C. Pantea,D.N. Sinha, C.F. Osterhoudt, P.C. MombourquetteProceedings of Meetings on Acoustics (POMA), vol. 6, (2009), pp. 045005-1-14.5.Nano-Diamond compressibility at pressures up to 85 GPaC. Pantea, J. Zhang, J. Qian, Y. Zhao, A. Migliori, E. Grzanka, B. Palosz, Y. Wang,T.W. Zerda, H. Liu, Y. Ding, P.W. Stephens and C.E. BotezTechnical Proceedings of the 2006 Nanotechnology Conference and Trade Show, Vol.1, (2006), pp. 823-826.PRESENTATIONS1.Ultrasonic techniques for measuring physical properties of fluids in harsh environmentsC. 
PanteaKeithley Award Session, APS March Meeting 2016, Baltimore, MD, 14-18 Mar 20162.Nuclear material identification using resonant ultrasound spectroscopyC. Pantea, T.A. Saleh, A. Migliori, J.B. Betts, E.P. Luther,D.B. Byler167th Meeting of the Acoustical Society of America, Providence, RI, 5-9 May 20143.Broad-band Acoustic Low Frequency Collimated Beam for Ultrasonic ImagingC. Pantea andD.N. Sinha21st International Congress on Acoustics, ICA 2013, Montreal, Canada, 2-7 June 20134.Acoustical Filters and Nonlinear Acoustic Wave Propagation in LiquidsC. Pantea andD.N. Sinha161st Meeting of the Acoustical Society of America, Seattle, WA, 23-27 May 20115.Acoustical shock formation in highly nonlinear fluidsC. Pantea andD.N. SinhaJoint 159th ASA Meeting and Noise-Con 2010, Baltimore, MD, 19-23 April 20106.Nonlinear Acoustical Beam Formation and Beam Profiles in FluidsC. Pantea andD.N. Sinha158th Meeting of the Acoustical Society of America, San Antonio, TX, 26-30 Oct 20097.Acoustic Nonlinearity in Fluorinert FC-43C. Pantea,D.N. Sinha, C.F. Osterhoudt, P.C. Mombourquette157th Meeting of the Acoustical Society of America, Portland, OR, 18-22 May 20098.Acoustic nonlinear beam formation and imagingC. PanteaTexas Christian University, Department of Physics and Astronomy, Fort Worth, TX, January 23, 20099.Negative-thermal-expansion ZrW2O8. Elasticity and pressure.C. Pantea, A. Migliori, P. B. Littlewood, Y. Zhao, H. Ledbetter, J. C. Lashley, T.Kimura, J. Van Duijn, and G. R. KowachAPS March Meeting 2007, March 5-9, Denver, CO.10.Osmium’s full elastic tensor between 5K and 300KC. Pantea152nd Meeting (4th joint meeting of the Acoustical Society of America and the Acoustical Society of Japan), Honolulu, Hawaii, 28 November-2 December 200611.Pressure-induced elastic softening of monocrystalline zirconium tungstate at 300KC. PanteaMSCookies and Tea, LANL, August 2nd, 200612.Nano-Diamond compressibility at pressures up to 85 GPaC. Pantea, J. Zhang, J. Qian, Y. Zhao, A. Migliori, E. Grzanka, B. Palosz, Y. Wang,T.W. Zerda, H. Liu, Y. Ding, P.W. Stephens and C.E. BotezNSTI Nanotech 2006, May 7-11, Boston, MA.13.Digital ultrasonic pulse-echo overlap system and algorithm for unambiguousdetermination of pulse transit timeC. Pantea,D.G. Rickel, A. Migliori, J. Zhang, Y. Zhao, S. El-Khatib, R.G. Leisure, B. LiAPS March Meeting 2006, March 13-17, Baltimore, MD.14.Unusual compressibility in the negative-thermal-expansion material ZrW2O8C. Pantea, A. Migliori, P. B. Littlewood, Y. Zhao, H. Ledbetter, T. Kimura, J. VanDuijn, G. R. KowachICAM/I2CAM Annual Conference on Frontiers in Complex Adaptive Matter & Satellite EventsNovember 8-12, 2005, Bishop's Lodge, Santa Fe, NM15.Nano-Diamond compressibility at pressures up to 85 GPaC. Pantea, J. Zhang, J. Qian, Y. Zhao, B. Palosz, T.W. ZerdaStewardship Science Academic Alliances (SSAA) Program SymposiumMarch 29-31, 2004, Albuquerque, NM.16.Phase-coherent pulse-echo ultrasound in a SiC anvil pressure cellC. Pantea,D.G. Rickel, R.G. Leisure, A. Migliori, Y. ZhaoStewardship Science Academic Alliances (SSAA) Program SymposiumMarch 29-31, 2004, Albuquerque, NM.17.Diamond Composites and control of graphitizationC. Pantea, J. Qian, G.A. Voronin, T.W. Zerda, Y. ZhaoIndustrial Materials For The Future (IMF), Annual Review MeetingJune 23-25, 2003, Golden, CO.18.Structure Study of Diamond-SiC Composites Obtained Under High Pressure-HighTemperature ConditionsC. Pantea, G.A. Voronin, T.W. Zerda, J. Qian, Y. 
ZhaoAPS March Meeting 2003, March 3-7, Austin, TX.19.Diamond-silicon reaction under high pressure - high temperature conditionsC. Pantea, G.A. Voronin, T. W. ZerdaMRCEDM Research Festival 2002, April 5, UTA, Arlington, TX.20.β-SiC formation on diamond crystals under high pressure-high temperature conditionsC. Pantea, G.A. Voronin, T. W. ZerdaTSAPS Fall Meeting 2001, October 4-6, TCU, Fort Worth, TX.21.X-ray diffraction study of diamond-graphite phase transition at high pressures andtemperaturesC. Pantea, J. Qian, T. W. ZerdaTSAPS Fall Meeting 2000, October 27-29, Rice University, Houston, TX.。
Low Power SRAM Design Using Multi-Bit Flip-Flop (MBFF)

Lincy J, Sivasankar Rajamani P
PG Scholar, Department of ECE, K.S.R College of Engineering, Tiruchengode, India; Associate Professor, Department of ECE, K.S.R College of Engineering, Tiruchengode, India.
Email: ************************, *************************.

ABSTRACT: The increasing demand for battery-powered and green-compliant applications has made power management a dominant factor in SoC design, and clock power contributes about 40% of total chip power. To obtain the maximum reduction in power, an algorithm is proposed in which single-bit flip-flops are replaced with the largest possible number of Multi-Bit Flip-Flops (MBFFs) without affecting the performance of the original circuit. First, mergeable flip-flops are identified based on synchronous clocking and replaced without affecting performance; however, replacement changes the locations of flip-flops, introducing timing and capacity constraints. Tanner EDA v13.0 has been used, and the resulting design reduces power by 15%.

Keywords: low power, clock power, merging, multi-bit flip-flop, timing violation.

1 INTRODUCTION
Power has become one of the main implementation bottlenecks for modern integrated circuit design. In particular, high power consumption may prevent a high-speed design from running at its full speed, while low power dissipation is a must for consumer and portable electronic products [3], [4]. Moreover, because the clock signal toggles in each cycle, the total power dissipation in the clock network can be significant:

    Pclk = Cclk x Vdd^2 x Fclk    (1)

where Pclk is the clock power, Fclk is the clock frequency, Vdd is the supply voltage, and Cclk is the switching capacitance, including the gate capacitance. The power consumed by the clock plays a dominant role: the clock system consumes 20-45% of the total chip power [7]. Moreover, due to technology advances, systems operate at very high frequencies, which leads to shorter signal transition times, and the transition time affects power consumption. The clock distribution therefore needs a more careful design planning methodology for low power in modern VLSI. Clock distribution networks, in particular, are an essential element of a synchronous digital circuit and a significant power consumer. The clock power varies with the application, as shown in Fig. 1.
Fig. 1. Power distribution in various applications.

2 MULTI-BIT FLIP-FLOP
A single-bit flip-flop has two latches (a master latch and a slave latch). The latches need Clk and Clk' signals to perform their operations. In order to obtain a better Clk->Q delay, Clk is regenerated from Clk'; hence there are two inverters in the clock path, which act as a buffer.
Fig. 2 shows an example of merging two 1-bit flip-flops into one 2-bit flip-flop. Each 1-bit flip-flop contains two inverters, a master latch, and a slave latch. Due to manufacturing rules, inverters in flip-flops tend to be oversized [8]. As process technology advances into smaller geometry nodes, minimum-size clock drivers can drive more than one flip-flop. Merging single-bit flip-flops into one multi-bit flip-flop avoids duplicated inverters and lowers the total clock dynamic power consumption; the total area contributed by the flip-flops is reduced as well. After the two 1-bit flip-flops are replaced by the 2-bit flip-flop, the wire lengths of the affected nets change. To avoid timing violations caused by the replacement, the Manhattan distance of the new nets cannot be longer than specified values [5], [6].
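As a concrete illustration of this merging step, the following is a minimal Python sketch (not the paper's implementation) of one way mergeable pairs could be identified: flip-flops on the same clock are greedily paired whenever placing a shared 2-bit flip-flop at their midpoint keeps every net's added Manhattan wirelength within a bound. All names, coordinates, and the distance budget are illustrative assumptions.

    from itertools import combinations

    def manhattan(p, q):
        return abs(p[0] - q[0]) + abs(p[1] - q[1])

    def find_mergeable_pairs(flip_flops, max_extra_dist):
        """Greedily pair same-clock flip-flops into 2-bit MBFFs.

        flip_flops: dict name -> {"clk": str, "pos": (x, y), "pins": [(x, y), ...]}
        max_extra_dist: budget on added Manhattan wirelength per pin (illustrative).
        """
        merged, used = [], set()
        for a, b in combinations(sorted(flip_flops), 2):
            if a in used or b in used:
                continue
            fa, fb = flip_flops[a], flip_flops[b]
            if fa["clk"] != fb["clk"]:          # only synchronously clocked FFs merge
                continue
            # Candidate MBFF location: midpoint of the two original flip-flops.
            mx = (fa["pos"][0] + fb["pos"][0]) / 2
            my = (fa["pos"][1] + fb["pos"][1]) / 2
            # Accept only if no connected pin's wirelength grows past the budget.
            ok = all(
                manhattan((mx, my), pin) - manhattan(ff["pos"], pin) <= max_extra_dist
                for ff in (fa, fb) for pin in ff["pins"]
            )
            if ok:
                merged.append((a, b, (mx, my)))
                used.update((a, b))
        return merged

    # Toy usage with hypothetical coordinates (microns):
    ffs = {
        "ff1": {"clk": "clk0", "pos": (0, 0), "pins": [(5, 0)]},
        "ff2": {"clk": "clk0", "pos": (2, 2), "pins": [(2, 6)]},
        "ff3": {"clk": "clk1", "pos": (1, 1), "pins": [(0, 4)]},
    }
    print(find_mergeable_pairs(ffs, max_extra_dist=3.0))  # pairs ff1 with ff2

The greedy pairing is the simplest policy consistent with the text; a library offering 4-bit cells would repeat the same feasibility check over groups of four.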
A positive-edge, master-slave D flip-flop is composed of two level-triggered latches, called master and slave. The basic principle is that the second latch (slave) is driven by the clock signal, while the first one (master) is driven by the inverted version of the clock signal. The master latch is transparent while the clock is low, during which the slave latch holds its value; the master latch holds its value while the clock is high, during which the slave latch becomes transparent. This makes the flip-flop sensitive to the low-to-high transition of the clock. There are various ways to implement D latches [4].
Fig. 2. Example of MBFF.

3 RELATED WORK
Shyu et al. [3] proposed an algorithm to reduce clock power consumption by replacing some flip-flops with fewer multi-bit flip-flops. First, the feasible placement regions of a flip-flop associated with different pins are found based on the timing constraints defined on the pins. Then, the legal placement region of the flip-flop fi can be obtained as the overlapped area of these regions. However, because these regions are diamond-shaped, it is not easy to identify the overlapped area; to find it, they used a coordinate transformation technique that yields a rectangular region. In the second stage, they build a combination table, which defines all possible combinations of flip-flops that form a new multi-bit flip-flop provided by the library. The flip-flops can be merged with the help of this table. After the legal placement regions of the flip-flops are found and the combination table is built, flip-flops can be merged. To speed up the program, they divided the chip into several bins and merged flip-flops within a local bin, repeating this process by combining bins until no flip-flop could be merged anymore. The limitations of this approach are that it was implemented in C++, which involves a complex instruction set, complicated memory management, and long execution times, and it requires an additional module for hardware integration.

4 OUR ALGORITHM
The algorithm is roughly divided into two steps. First, mergeable flip-flops are identified based on synchronous clocking. Second, flip-flops are merged in such a way that the performance of the design is not affected. Single-bit flip-flops can be replaced with multi-bit flip-flops if and only if the library supports it; the library supports only 1-bit, 2-bit, and 4-bit flip-flops. However, MBFF replacement induces longer wirelength between a flip-flop and its connection pins, which introduces larger delay and leads to timing constraints, as shown in Fig. 3. Hence the wirelength should be kept small to obtain minimum power dissipation. The capacity constraint is also taken into account.

4.1 Timing Constraint Violation
Timing constraint violations can be checked with the Elmore delay model. The delay of a wire is a quadratic function of its length. The network can be modeled as either lumped or distributed RC; the delay of a distributed RC line is one-half of the delay predicted by the lumped model, which combines the total resistance and capacitance into single elements with an equivalent time constant. For a 10 cm long wire of width 1 µm with R = 0.075 Ω/µm and C = 110 aF/µm, the distributed RC model gives a delay of 41.3 ns. Note, however, that the Elmore delay is not equal to the actual delay time [1].
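The 41.3 ns figure can be checked directly from the Elmore expression for a uniform distributed RC line, t = 0.5 * r * c * L^2. The short script below is ours, not the paper's, and simply reproduces the quoted numbers:

    # Elmore delay of a uniform distributed RC line: t = 0.5 * r * c * L^2
    # (the lumped model, t = r * c * L^2, predicts twice this value)
    r = 0.075          # resistance per unit length, ohm/um
    c = 110e-18        # capacitance per unit length, F/um
    L = 10e4           # 10 cm expressed in um

    t_distributed = 0.5 * r * c * L**2
    t_lumped = r * c * L**2
    print(f"distributed: {t_distributed * 1e9:.2f} ns")  # 41.25 ns, ~41.3 ns as quoted
    print(f"lumped:      {t_lumped * 1e9:.2f} ns")       # 82.50 ns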
Fig. 3. Timing constraint violation.

4.2 SRAM Design
A 4x4 SRAM has been designed which consists of 16 flip-flops in a 2D array. It uses a 2-to-4 decoder to select a row [2]. A single W/R input serves as both the write and read signal: when it is '0' a write operation is performed, and when it is '1' a read operation is performed. Address lines are used to select one of the four words, as shown in Fig. 4(a). According to the algorithm, flip-flops whose inputs do not depend on previous outputs are the ones that can be merged without affecting the performance of the design; the merged design, shown in Fig. 4(b), uses four 4-bit flip-flops. The output obtained with both methods is shown in Fig. 4(c); it matches that of the original circuit, satisfying the proposed algorithm and the capacity constraint.
Fig. 4(a). 4x4 SRAM design using single-bit FFs.
Fig. 4(b). 4x4 SRAM design using MBFFs.
Fig. 4(c). Output of the 4x4 SRAM.

5 RESULTS AND DISCUSSION
Power results for the multi-bit flip-flop designs were determined using Tanner Tool v13.0 under 25 µm technology at 5 V. Power results were obtained for 1-bit, 2-bit, and 4-bit flip-flops; the power consumed by a single-bit flip-flop is greater than that of an MBFF. The power results for the 4x4 SRAM are shown in Table I.
TABLE I. POWER RESULTS OF SRAM.
The power consumed by the single-bit flip-flop design is 6 times that of the Multi-Bit Flip-Flop (MBFF) design, which gives a power reduction ratio [3] of 85%.

6 CONCLUSION
This paper has proposed a flip-flop replacement algorithm for power reduction in memories. The direct way is to repeatedly search for a set of flip-flops that can be replaced by a new multi-bit flip-flop until none can be found. However, as the number of flip-flops in a chip increases, the complexity also increases, which makes that method impractical. By following the replacement guidelines from the library, impossible combinations of flip-flops are never considered, which reduces the execution time. The experimental results show that our algorithm reduces power by 6 times. Future work involves constructing a layout for the 4x4 SRAM in Tanner v13.0, finding the wirelength for the aluminium wire, and applying the proposed algorithm to the S1423 sequential benchmark circuit (74 D flip-flops) from the International Symposium on Circuits and Systems 89 (ISCAS 89) benchmark suite and checking its behavior.

REFERENCES
[1] J. Rabaey, Digital Integrated Circuits: A Design Perspective, 2nd ed., 2013, pp. 120-124.
[2] S. Dandamudi, Fundamentals of Computer Organization and Design, Springer, 2003.
[3] Shyu Y.T., Lin J.M., Huang C.P., Lin C.W., Lin Y.Z., Chang S.J., "Effective and Efficient Approach for Power Reduction by Using Multi-bit Flip-flops," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, pp. 624-635, April 2013.
[4] Wang S.H., Liang Y.Y., Kuo T.Y., Mak W.K., "Power-driven flip-flop merging and relocation," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, pp. 180-191, 2012.
[5] Jiang J.H.R., Chang C.L., Yang Y.M., Tsai E.Y.W., and Chen L.S.F., "INTEGRA: Fast Multi-bit Flip-flop Clustering for Clock Power Saving Based on Interval Graphs," Proceedings of the ACM International Symposium on Physical Design, pp. 115-121, 2011.
[6] Chang Y.T., Hsu C.C., Lin P.H., Tsai Y.W., Chen S.F., "Post-placement power optimization with multi-bit flip-flops," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, pp. 1870-1882, 2011.
[7] Kawaguchi H., Sakurai T., "A reduced clock-swing flip-flop (RCSFF) for 63% clock power reduction," IEEE Journal of Solid-State Circuits, 1998.
[8] Chang A.C., Hwang T.T., "Synthesis of Multi-Bit Flip-Flops for Clock Power Reduction," Interdisciplinary Information Sciences, vol. 18, pp. 145-159, 2012.
US-China Foreign Language, January 2019, Vol. 17, No. 1, 43-47
doi:10.17265/1539-8080/2019.01.006

A Critical Review: Evaluating the Effectiveness of Explicit Instruction on Implicit and Explicit L2 Knowledge
Sami Sulaiman Alsalmi
School of Education, University of Bristol, Bristol, United Kingdom

This review discusses Akakura's research study, entitled "Evaluating the Effectiveness of Explicit Instruction on Implicit and Explicit L2 Knowledge". The argument presented here is developed by means of a critique of Akakura's study, addressing in turn a summary of the study, with the focus placed on the method and statistical techniques, and an evaluation of the study. Some explanations are added to the summary to make certain points more obvious, specifically those that do not receive enough description in the study. In addition, given that the study is broad and includes many aspects worthy of discussion that cannot all be covered in one paper, the evaluation is narrowed down to concentrate on two aspects of the study: measures of implicit knowledge and explicit instruction (the treatment stage).

Keywords: explicit instruction, implicit knowledge, explicit knowledge

Summary of the Study
Akakura's (2012) study sought to explore to what extent explicit instruction can develop second-language learners' implicit and explicit knowledge of English articles. Explicit instruction is concerned with "developing a metalinguistic awareness of the target rule" (Ellis, 2009, p. 54). That is, learners are provided with instruction in the target grammatical rules. Implicit knowledge refers to the procedures comprising "knowledge which can be easily and rapidly accessed in unplanned language use. In contrast, explicit knowledge exists as a declarative fact that can only be accessed through the application of attentional processes" (Ellis, 2009, p. 12). The study claims that research has not sufficiently discussed which measures can best test the spontaneous status of implicit grammatical knowledge.

The study employs a quasi-experimental design with a pretest/posttest and delayed-test model entailing two groups: an experimental group (N = 49) and a control group (N = 45). In each testing stage, participants were exposed to four measures: an elicited imitation task and an oral production task (for implicit knowledge), and a grammaticality judgement task and a metalinguistic knowledge task (for explicit knowledge). A pretest was run first for the two groups, and then the experimental group was exposed to explicit instruction, using computer-assisted language learning, for one week following the pretest. The form/function mappings of articles were explained to participants, who were then provided with form-focused exercises and quizzes. The posttest was administered after the participants completed the article lessons delivered through explicit instruction. The delayed posttest was then completed six weeks after the treatment.

Sami Sulaiman Alsalmi, Ph.D. candidate at Bristol University in the UK.

As indicated above, four instruments were employed to measure the two types of knowledge; a succinct description of each measure is provided below.

Elicited Imitation Task (EIT)
This task required a participant to listen to a story while looking at a sequence of pictures depicting it.
Half of the recorded story contained sentences that were incompatible with the pictures, and the sentences simultaneously included grammatical (N = 10) and ungrammatical (N = 10) articles. Participants were asked to decide whether or not these sentences matched the picture and to repeat each statement when they heard a bell sound. The inclusion of picture plausibility in the story ensures that a participant's attention is on meaning and not form. The study does not describe the overall goal of this task. According to the literature in the field of implicit knowledge acquisition, the underlying assumption of this task is that if a participant can repeat the statement under time constraints and orally correct the ungrammatical articles spontaneously, this implies that the participant has internalized the target articles.

Oral Production Task (OPT)
In this task, participants were required to narrate, in their own words, the same story that they had been exposed to in the EIT. The author hypothesized that using identical pictures could minimize cognitive load during performance and raise the possibility of language complexity. Participants, furthermore, were asked to imagine that their audience was children; the author hypothesized that this technique would reduce the likelihood of reliance on hearer knowledge and hence the overuse of the definite article.

Grammaticality Judgement Task (GJT)
Participants in this task were asked to judge the grammaticality of the underlined portions of sentences. The judgement scale was created as a confidence measure requiring coding such as: 1) correct, 2) probably correct, 3) probably incorrect, and 4) incorrect. Although such a task allows a participant to process the sentence for its form, the time constraint is hypothesized to push a participant to access implicit knowledge. That is, the likelihood of the participant re-examining and monitoring the response is heavily reduced while that of intuitive linguistic judgement is raised, indicating a high degree of automaticity of implicit knowledge.

Metalinguistic Knowledge Task (MKT)
Participants were required to correct 10 sentences, each of which included an underlined article error (N = 10). Next, participants were required to give written explanations for the ungrammatical articles (N = 5). Participants were given unlimited time to complete the task and completed two practice items before commencing. Responses were scored as either correct (1 point) or incorrect (0 points).

As described above, the participants of the experimental and control groups were exposed to each measure three times (pretest, posttest, and delayed test). To find out whether the explicit instruction exerted an influence on implicit and explicit knowledge, the mean of the observed data for each measure was calculated to explore the extent to which the mean (e.g., of the EIT) in the pretest was statistically different from the posttest and delayed test, and different between the two groups. A statistical test should therefore be employed to test the strength of the mean differences between the three testing stages within one group and between the two groups. One-way ANOVA, thus, was the appropriate technique to employ in this case.
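As a minimal illustration of the procedure (with synthetic scores, not the study's data; the means and standard deviations are loosely modeled on Table 1 below), a one-way ANOVA comparing the two groups at a single testing stage can be run as follows:

    # One-way ANOVA comparing two groups at one testing stage (synthetic data).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    experimental = rng.normal(loc=13.8, scale=3.0, size=49)  # hypothetical posttest scores
    control = rng.normal(loc=11.1, scale=3.7, size=45)

    f_stat, p_value = stats.f_oneway(experimental, control)
    # With groups of 49 and 45, F has (1, 92) degrees of freedom, matching the
    # F(1, 92) values reported in the study.
    print(f"F(1, 92) = {f_stat:.3f}, p = {p_value:.3f}")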
The study provided a detailed and complex description of the statistical data; for instance, the observed data for each measure in each group (experimental and control) were divided into four sections: non-generic articles, generic articles, grammatical articles, and ungrammatical articles. Descriptive statistics were also calculated for each section. The author has reorganized part of the observed data from one measure in a way that gives a clear, representative sample of the results.

Table 1
Elicited Imitation Task (Sample), 20 articles

                Experimental group          Control group
                M       SD      N      D    M       SD      N
Pretest         10.29   3.202   49  -0.20   10.96   3.567   45
Posttest        13.80   2.993   49   0.79   11.14   3.690   45
Delayed test    14.98   3.058   49   1.37   10.64   3.276   45

The output presented above shows that scores generally increased over both the posttest and the delayed test in the experimental group, which outperformed the control group. ANOVA was computed to test the mean differences across the three testing stages of the elicited imitation task between the two groups. A statistical difference was found in the posttest (F(1,92) = 14.866, p = 0.000), with an increase in the delayed test (F(1,92) = 44.023, p = 0.000). Based on the EIT measure, the results indicate that implicit knowledge can be promoted by explicit instruction.

ANOVA was also computed for the other tasks and revealed no significant difference between the groups in the posttest of the OPT (F(1,92) = 1.609, p = 0.208), but a significant difference in the delayed test (F(1,92) = 5.161, p = 0.025). It likewise showed no significant difference between the groups in the posttest of the GJT (F(1,92) = 3.496, p = 0.065), but a significant difference in the delayed test (F(1,92) = 4.457, p = 0.037). Finally, ANOVA showed a significant difference between the groups in the posttest of the MKT (F(1,92) = 28.787, p = 0.000), which was sustained in the delayed test (F(1,92) = 27.344, p = 0.000). (In statistics, a p-value can never be exactly zero; the zeros here were reported based on the SPSS output.)

The overall findings suggest that implicit and explicit knowledge can be developed as a result of explicit instruction. In addition, the study demonstrated that the measures of implicit and explicit grammatical knowledge can be reasonably separated: the two measures of implicit knowledge involved time constraints and a focus placed on meaning, whereas the measures of explicit knowledge entailed no time pressure and placed the focus on form (greater discussion is presented under "Critique of the Study").

Critique of the Study
As suggested in the introduction, the author presents a concise critical discussion of the measures of implicit knowledge and of the explicit instruction. The former was selected because one of the biggest challenges in psycholinguistics-based research is how best to assess the spontaneous level of acquired language (implicit knowledge). The latter was chosen because it is the independent variable upon which the change in the dependent variables (explicit and implicit knowledge) depends.

Measuring implicit knowledge, unlike explicit knowledge, entails more cautious treatment to ensure that it accurately assesses the unconscious status of specific acquired structures.
Historically, the oral production task has been employed to measure implicit knowledge, and although it supplies an amount of natural speech, it lacks the accurate elicitation of the spontaneous use of a specific language structure (Ellis, 2009). Put differently, it might give the learner a chance to use his/her explicit knowledge (to plan and monitor responses) rather than testing the spontaneous, contextualized use of a specific structure. This appears in Akakura's study, where the oral production task reflected the difficulty of generating the specific articles; thus, the impact of explicit instruction was evident only in the delayed test, although the effect was supposed to appear in the posttest.

The elicited imitation task was developed later (more description of this task is provided under "Summary of the Study"), but some threats have appeared that might potentially affect its validity. Studies have revealed that an L2 learner's attention can be turned to the form of the sentence rather than the meaning. Other studies have also shown that L2 learners imitate the stimulus statement by rote (Erlam, Loewen, & Philp, 2009); they repeat the statement verbatim without understanding the stimulus sentence. Akakura's (2012) study achieved pioneering success in controlling these two limitations to provide a higher level of validity. Picture plausibility is employed in the task to direct a participant's attention to meaning rather than form; in addition, it provides a means of delaying repetition so that participants do not repeat the sentence verbatim. In Rebuschat and Williams's (2012) study, semi-artificial grammar is used, and participants are required to listen to statements on an item-by-item basis, to judge the plausibility of the semantics of each stimulus statement, and then to repeat the statement. In Erlam, Loewen, and Philp's (2009) study, the statements are designed to enable the subjects to decide whether they agree with, disagree with, or do not comprehend a statement. However, a story-based elicited imitation task had not previously been employed to measure implicit knowledge; to my knowledge, its first use was in the Akakura (2012) study.

When deeper scrutiny is applied to the measures of implicit knowledge, we find that the EIT includes a choice that can increase the level of internal validity, namely the option "not sure" (whether the sentence fits the picture). If some participants guessed the option correctly more by luck than by judgement, they would not be expected to guess correctly during the posttest or delayed-test phase, and would then obtain a low score. Statistically, this threat implies that the scores in the distribution regress to the mean as a result of guesswork, not of the explicit instruction itself (Gravetter & Forzano, 2011).

The oral production test, as indicated above, might fail to bolster the rigour of the measurement of implicit knowledge, and it could additionally be influenced by the tutors' personal prejudices, resulting in a low level of reliability. For instance, some tutors might lose control or confidence when assessing large amounts of free natural speech, and accordingly might be inclined to give participants scores in the middle range to ward off severe errors (Morgan, Dunn, Parry, & O'Reilly, 2003).
The study further failed to provide a clear description of how the oral production test was conducted and how the data were gathered in numerical form.

At the treatment stage of the study procedure, moreover, the study does not clearly elucidate the role of the researcher, for example regarding who taught the participants: the researcher himself or a hired tutor. In addition, the treatment stage is confined to only one learning condition; other learning conditions of explicit instruction are not addressed in the study. Sonbul and Schmitt (2013), for instance, employed three learning conditions (enriched input, enhanced input, and decontextualized input) to evaluate under which conditions both adult native speakers and advanced non-native speakers of English acquire collocations. Tagarelli, Mota, and Rebuschat (2015) used two conditions: implicit and explicit input. In the implicit learning condition, subjects were aware of neither the underlying goal of the experiment nor the target knowledge that would be learned or tested. The explicit learning condition is similar to the condition employed in Akakura's (2012) study, where participants were aware of what knowledge they would acquire. The author considers that the Akakura (2012) study could be enhanced if more learning conditions were included, to determine under which conditions implicit and explicit knowledge of English articles might best be developed.

In summary, the study succeeded in bolstering control of the limitations of the EIT, in clearly describing the instruction process, and in employing the appropriate statistical test (ANOVA). Nonetheless, some threats in the study need to be reduced by more careful treatment of validity and reliability, such as guesswork and the oral production task. Some further directions have been suggested to improve the treatment stage, such as employing more than one learning condition.

Finally, the method used has many crucial implications, and the most promising one appears to be for pedagogy. For instance, when an L2 learner obtains a high score in grammar, this does not imply that the learner has internalized the target rule and thus can use it spontaneously in unplanned language use. Rather, it shows how well the learner can apply the rule in a context in which close analysis of text is involved. Therefore, policy makers in education should be aware of the learning conditions that enhance not only explicit knowledge but also implicit knowledge, which is considered the chief aim of language learning.

References
Akakura, M. (2012). Evaluating the effectiveness of explicit instruction on implicit and explicit L2 knowledge. Language Teaching Research, 16(1), 9-37.
Ellis, R. (2009). Implicit and explicit knowledge in second language learning, testing and teaching. London: Multilingual Matters.
Erlam, R., Loewen, S., & Philp, J. (2009). Form-focused instruction and the acquisition of implicit and explicit knowledge. In R. Ellis (Ed.), Implicit and explicit knowledge in second language learning, testing and teaching (pp. 237-261). Bristol: Multilingual Matters.
Gravetter, F., & Forzano, L. A. (2011). Research methods for the behavioral sciences. New York, NY: Cengage Learning.
Morgan, C., Dunn, L., Parry, S., & O'Reilly, M. (2003). The student assessment handbook: New directions in traditional and online assessment. New York, NY: Routledge.
Rebuschat, P., & Williams, J. N. (2012).
Implicit and explicit knowledge in second language acquisition. Applied Psycholinguistics, 33(4), 829-885.
Sonbul, S., & Schmitt, N. (2013). Explicit and implicit lexical knowledge: Acquisition of collocations under different input conditions. Language Learning, 63(1), 121-159.
Tagarelli, K. M., Mota, M. B., & Rebuschat, P. (2015). Working memory, learning conditions and the acquisition of L2 syntax. In Z. E. Wen, M. B. Mota, & A. McNeill (Eds.), Working memory in second language acquisition and processing (pp. 224-247). Bristol, UK: Multilingual Matters.
An English Essay on Space Experiments

The Space Experiment
Space exploration has always been a fascinating and challenging endeavor for mankind. It has opened up new frontiers for scientific research and technological advancement. In recent years, interest in conducting experiments in space has grown exponentially, as scientists seek to understand the effects of microgravity and the space environment on various materials, organisms, and physical processes. In this essay, we will explore the significance of space experiments, the challenges they pose, and the potential benefits they offer.

The significance of conducting experiments in space cannot be overstated. The unique conditions of microgravity and vacuum in space provide an unparalleled opportunity to study physical and biological phenomena that are not possible to study on Earth. For example, the behavior of fluids, combustion, and materials in microgravity can reveal fundamental insights that have practical applications in fields such as engineering, medicine, and materials science. Similarly, studying the effects of space radiation and microgravity on living organisms can provide valuable data for understanding human health and developing new medical treatments.

However, conducting experiments in space also presents significant challenges. The cost of launching experiments into space is prohibitively high, and the limited availability of space on spacecraft and space stations imposes strict constraints on the size, weight, and complexity of experimental setups. Furthermore, the harsh space environment, including extreme temperatures, radiation, and vacuum, requires careful design and testing of experimental hardware to ensure its reliability and functionality in space. Finally, the logistics of conducting experiments in space, including astronaut training, mission planning, and communication with ground control, add another layer of complexity to the process.

Despite these challenges, the potential benefits of space experiments are substantial. The knowledge gained from space experiments can lead to new technologies and innovations that improve our daily lives on Earth. For example, research on materials processing in microgravity has led to the development of advanced alloys, ceramics, and pharmaceuticals with improved properties and performance. Similarly, studies on bone and muscle loss in space have contributed to our understanding of osteoporosis and muscle atrophy on Earth, leading to new treatments and rehabilitation methods. Moreover, the unique perspective of space experiments can inspire and educate the public about the wonders of science and the possibilities of space exploration.

In conclusion, space experiments are a valuable and essential component of space exploration. They provide unique opportunities to study physical and biological phenomena in ways that are not possible on Earth, and they offer the potential for significant scientific and technological advancements. While the challenges of conducting experiments in space are considerable, the benefits they offer make them a worthwhile and rewarding endeavor. As we continue to push the boundaries of space exploration, space experiments will undoubtedly play a crucial role in expanding our knowledge and understanding of the universe.
Self-enforcing Strategic Demand Reduction
Paul S.A. Reitsma, Peter Stone, János A. Csirik, and Michael L. Littman
Computer Science Department, Brown University, Box 1910, Providence, RI 02912; psar@—/~psar
Department of Computer Sciences, University of Texas at Austin, Austin, TX 78712; pstone@—/~pstone
AT&T Labs — Research, 180 Park Ave., Florham Park, NJ 07932; janos@—/~janos
Department of Computer Science, Rutgers University, Piscataway, NJ 08854; mlittman@—/~mlittman

Abstract. Auctions are an area of great academic and commercial interest, from tiny auctions for toys on eBay to multi-billion-dollar auctions held by governments for resources or contracts. Although there has been significant research on auction theory, especially from the perspective of auction mechanisms, studies of autonomous bidding agents and their interactions are relatively few and recent. This paper examines several autonomous agent bidding strategies in the context of FAucS, a faithful simulation of a complex FCC spectrum auction. We introduce punishing randomized strategic demand reduction (PRSDR), a novel bidding strategy by which bidders can partition available goods in a mutually beneficial way without explicit inter-agent communication. When all use PRSDR, bidders obtain significantly better results than when using a reasonable baseline approach. The strategy automatically detects and punishes non-cooperating bidders to achieve robustness in the face of agent defection, and performs well under alternative conditions. The PRSDR strategy is fully implemented and we present detailed empirical results.

1 Introduction
Some of the largest auctions held involve hundreds of goods in hundreds of categories, scores of bidders, complicated rules, and everything being auctioned simultaneously over a hundred or more rounds. In these auctions, it is very difficult for humans to grasp all of the nuances regarding the effects of their strategic decisions. Computer aid could ease the burden of efficiently competing in these auctions, especially if the user need only input his or her values for the goods in the auction into a parameterized strategy.

Simple autonomous agents have started to appear as bidders in some auctions. However, they tend to be straightforward incremental bidders that raise the price of a single good up to a user's stated maximum (e.g., on eBay). Bidding agents that support dependent values among multiple interacting goods have been deployed in some experimental scenarios [5, 10, 8]. Here, we present agent bidding strategies in a large-scale and realistic auction scenario, namely the FCC Spectrum Auction Simulator, or FAucS [1].

Here, we build on our previous work in FAucS, in which we created Knapsack agents that optimized the set of goods they bid on given a budget constraint, but without taking into account the needs and strategies of other agents [1]. In that previous work we also created alternative agent strategies that outperformed the Knapsack agents, but that relied on the unrealistic assumption that agents knew each other's valuations of the goods with complete certainty.

In this paper, we present punishing randomized strategic demand reduction (PRSDR), a strategy by which cooperative agents can significantly outperform the Knapsack agents despite having highly uncertain knowledge of each other's goals and without any explicit inter-agent communication. The strategy is self-enforcing in that agents cannot benefit by defecting back to the Knapsack strategy. PRSDR serves as an example of a general strategy for bidding in simultaneous multiple-round (SMR) auctions in which:
– there is room to bid below the expected sell prices; and in which
– there are few value interdependencies among goods.
The basic idea of PRSDR is that bidders bid for randomized subsets of their true desired goods while retaining the ability to effectively punish bidders who defect from this mutually beneficial strategy. The punishment takes the form of driving prices up in the markets of interest to a defecting bidder.

The remainder of the paper is organized as follows. An overview of FAucS and the general setup appears in Sections 2 and 3. Details of the PRSDR algorithm follow in Sections 4 and 5. Empirical results demonstrating its utility and robustness are presented in Section 6, along with a suggestion on how to improve the efficiency of FCC spectrum auctions by a rule change designed to inhibit communication-free cooperation. Section 7 relates our results to the game-theoretic construct of iterated bimatrix games, and Section 8 concludes.

2 FCC Spectrum Auction Simulator
Our prototype implementation of PRSDR is tailored to the FAucS domain [1], a detailed and realistic simulator of the FCC spectrum auctions. The goods available in the FCC spectrum auctions are a set of licenses, or blocks of spectrum, each in a market, or region of the United States. In this paper we focus on FCC Auction 35, in which licenses were 10 or 15 megahertz in size, and each of the 195 markets had between 1 and 4 licenses available. A total of 422 licenses and more than 80 bidders were involved.

To a first approximation, the rules of the auction are straightforward (official rules are presented in FCC document DA00-2038). All of the FCC spectrum auctions, including Auction 35, use an SMR system in which all goods are available at the same time and bidding occurs in discrete rounds. After each round, each bidder's bids are announced publicly. The provisionally winning bids and corresponding provisional winners are also announced: these are the highest bid received up to that point on each license and the party that placed the bid (in case of a tie, the first bid submitted to the FCC's system wins). The auction ends immediately after the first round with no new activity. Each license is then sold to its provisional winner for a price equal to the provisionally winning bid. As is customary in auctions, the FCC had anti-collusion rules forbidding bidders from explicitly exchanging information before and/or during the auction.

While this underlying mechanism is simple, many subtle modifications and complex rules have been added over time. The two that are most important for the purposes of this paper are the rules governing allowable bids and eligibility constraints.

Allowable bids. Bids on a license that has received no bids in previous rounds must be at a predetermined minimum bid price. After a license has received bids, further bids can only increase the value of the current provisionally winning bid by 1 to 9 bid increments. A bid increment is a value between and , calculated by the FCC and increasing with bidding activity.

Eligibility constraints. Each license is worth a certain number of bidding units (BUs), correlated with the population of the market. Each round, a bidder has a specific eligibility and may not bid on more BUs than that. A bidder must maintain bidding activity in order to maintain eligibility to bid in later rounds.
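As a sketch of how these two rules could be enforced in code (ours, for illustration; all function and parameter names are assumptions, and the per-license increments whose bounds are elided above are simply taken as inputs):

    def legal_bids(proposed, provisional, opened, min_bid, increment, bu, eligibility):
        """Filter a bidder's proposed bids by Auction 35-style rules (sketch).

        proposed: dict license -> bid amount this bidder wants to submit
        provisional: dict license -> current provisionally winning bid
        opened: set of licenses that have received at least one bid so far
        min_bid / increment: per-license values set by the auctioneer (dicts)
        bu: dict license -> bidding units; eligibility: bidder's BU cap this round
        """
        legal = {}
        for lic, amount in proposed.items():
            if lic not in opened:
                # A license with no prior bids may only be bid at the minimum price.
                if amount == min_bid[lic]:
                    legal[lic] = amount
            else:
                # Otherwise the bid must raise the provisional bid by 1 to 9 increments.
                allowed = {provisional[lic] + k * increment[lic] for k in range(1, 10)}
                if amount in allowed:
                    legal[lic] = amount
        if sum(bu[lic] for lic in legal) > eligibility:
            raise ValueError("bids exceed current eligibility (BUs)")
        return legal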
FAucS models these rules and all others relevant to Auction 35 in their entirety. It uses a client-server architecture, with the server and the bidding agents (clients) all written in Perl and using TCP sockets to communicate with each other. Typical auctions last between 100 and 150 rounds; Auction 35 lasted 101 rounds.

3 Simulation Overview
While the original FCC Auction 35 had more than 80 bidders, only a few (about five) were of significant individual importance in terms of the licenses won. We designate a set of five agents as strategic bidders to emulate the presence of five national companies participating in Auction 35. By virtue of their size, particularly their estimated budget size, it is immediately apparent to all agents which bidders are strategic bidders.

The other 75 bidders in the original auction were budget-constrained regional companies with interests only in specific markets. The primary effect of these smaller bidders was to create competition in every market and drive up prices: they did not win very much in the end. Accordingly, we have modeled the aggregate role of these bidders using five secondary bidders, which are straightforward bidders with no budget constraints. These bidders each desire a license from each market but are willing to pay only about 75% as much as the strategic bidders for any one license. The secondary bidders ensure competition in every market until license prices are beyond the levels they are willing to pay, effectively setting the price floor for licenses at about 75% of strategic bidder valuations. This bidding approach raises prices to realistic levels without requiring us to model all 75 regional bidders explicitly.
3.1 Agent Utility
For the purposes of this paper, the utility function for all agents is profit, defined as the value to the agent of the set of licenses it won minus its expenditures. We assume no inter-market dependencies in market valuations. This hypothesis is a reasonable starting point for our explorations and can be partially justified by the existence of an (inefficient) aftermarket for licenses. There is also some evidence that human bidders in these auctions ignore intermarket dependencies.

Each agent has significantly different goals drawn from a realistic model based on analysis of real auction data, information from real bidders, and a Merrill Lynch analysis of the estimated theoretical values of particular markets [1]. These goals consist of priorities and valuations for each market. Each agent's priority for a given market is the number of licenses the agent would like to acquire from that market. A market's value to the agent increases with the size of the licenses in megahertz and the population of the market (mhzpop). This Market Value (MV) is the value to the agent (the maximum amount the agent is willing to pay) for a license in a market where it wants to acquire one more license (whether or not it already has any). If an agent wants two more licenses in a market, it is willing to pay an Enhanced Market Value (EMV) of approximately 5% more for the first of those licenses, to try to ensure that it gets at least one. Otherwise, if an agent wants no licenses in a market, any licenses in that market have no value to the agent. Profit earned is the difference between the value of a license to the agent and the price paid for that license, and the only effect of priority is to control these values.

For example, if the New York market had a population of 30,000,000, 10-mhz licenses, and agent 3 valued the market at $5 per mhzpop and had a priority of 2 in the market, it would be willing to pay $1.575 billion for the first license, $1.5 billion for the second, and $0 for the third. Purchasing one license for $1.2 billion would generate a profit of $375 million.

These priorities introduce a simple type of inter-good dependency that models the real bidders' apparent interests, as expressed by one of the bidding teams from the original (real) auction. It is interesting to note that, while this value characterization may not be optimal, since it simplifies away information about inter-good dependencies, it is nevertheless approximately what the real human bidders used. Our conversations with them suggested that the most probable explanation for this simplified representation is that the auction is so tremendously complex that even highly skilled human bidders were unable to reason efficiently with the full complexity of all dependencies, and had to simplify the representation they used in order to make the problem tractable. One of our hopes in carrying out auction agent research is to relieve the human bidders of some of this complexity burden, allowing them to use richer goal representations. Potentially, this would improve the efficiency of the allocation of goods, leading to a consequent increase in value for the bidders, the auctioneer, and society at large through the more efficient utilisation of goods.

3.2 Uncertain Knowledge
In our preliminary experiments [1], agents had full knowledge of each other's utility functions; in reality, however, agents have only rough estimates of each other's utilities, arising from market research on a company's competitors. In this paper, we only consider strategies that are robust to uncertain knowledge.

We obtained an agent's estimate of the budget and MVs of other agents by taking the actual budget and MVs of those agents and randomly perturbing the values by up to 20% in either direction, separately for each value. Every priority for every other agent had approximately a chance of being guessed incorrectly. Note that under this pessimistic uncertainty scheme, any set of priorities can be guessed for any agent. Particularly problematic is the interaction between this extreme randomness, the large variation in market size, and strategic bidders with relatively modest goals; simply guessing the priority of the largest market (New York) incorrectly for the strategic bidder with the smallest budget would add to the estimate of that bidder's overall desired value of licenses. This large uncertainty poses a significant problem for any agent strategy that makes use of knowledge about the other agents, requiring successful strategies to be robust to misinformation.
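A minimal sketch of this uncertainty model (the function name, market names, and dollar figures are illustrative; the paper's additional randomization of rivals' priorities is noted in the comment but omitted):

    import random

    def estimate_rival(true_budget, true_mvs, noise=0.20, rng=random):
        """Build one agent's noisy estimate of a rival's budget and market values
        by perturbing each true value independently by up to +/-20%, as in the
        paper's uncertainty model. (The paper also randomizes guesses of rivals'
        per-market priorities; that step is omitted here.)"""
        def jitter(x):
            return x * (1.0 + rng.uniform(-noise, noise))
        return jitter(true_budget), {m: jitter(v) for m, v in true_mvs.items()}

    # Example: estimating a rival assumed to value New York at $1.5B.
    est_budget, est_mvs = estimate_rival(2.0e9, {"New York": 1.5e9, "Los Angeles": 9.0e8})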
3.2 Uncertain Knowledge

In our preliminary experiments [1], agents had full knowledge of each other's utility functions; however, in reality, agents have only rough estimates of each other's utilities, arising from market research on a company's competitors. In this paper, we only consider strategies that are robust to uncertain knowledge. We obtained an agent's estimate of the budget and MVs of other agents by taking the actual budget and MVs of those agents and randomly perturbing the values by up to 20% in either direction, separately for each value. Every priority for every other agent had approximately a [...] chance of being guessed incorrectly. Note that under this pessimistic uncertainty scheme, any set of priorities can be guessed for any agent. Particularly problematic is the interaction between this extreme randomness, the large variation in market size, and strategic bidders with relatively modest goals; simply guessing the priority of the largest market (New York) incorrectly for the strategic bidder with the smallest budget would add [...] to the estimate of that bidder's overall desired value of licenses. This large uncertainty poses a significant problem for any agent strategy that makes use of knowledge about the other agents, requiring successful strategies to be robust to misinformation.

4 General Agent Strategy

Table 1 summarizes the agent bidding algorithm. The quantities in Steps 2 and 3 are:
- Remaining eligibility: the bidder's current eligibility minus the bidding units tied up in licenses of which it is provisional winner;
- Remaining budget: the bidder's total budget minus the money tied up in licenses of which it is provisional winner;
- Current values for markets: use the MV or EMV, depending on the agent's priority for the market and the number of licenses in the market for which the bidder is already provisional winner;
- Current costs for each market: the prices of the least expensive licenses in each market.

Table 1: High-level overview of the general agent bidding algorithm. Agents differ in operationalizations of Step 4.
REPEAT (once per round)
1. Get market prices from server
2. Compute remaining budget and eligibility
3. Compute current values and costs of markets
4. Choose desired licenses within constraints
5. Submit bids to server at cheapest increment
UNTIL game over

Once the set of desired licenses is determined, the agent bids at the 1-increment price for those and only those licenses (Step 5 in Table 1). All agents in this paper use this basic strategy; they differ only in how they choose desired licenses (Step 4). Note that in some implementations Step 5 is conducted in several stages throughout the execution of Step 4, but is otherwise unchanged.

4.1 Knapsack and Improvements

The baseline agent we used was the Knapsack agent. This agent was built on the realization that determining which licenses to bid on for maximum profit given a limited budget is very similar to the classic knapsack problem. The addition of BUs and eligibility costs for each license makes this problem more complex; however, treating it as a knapsack problem usually yields optimal solutions [1]. While the Knapsack agent is extremely effective, it has one major weakness: it does not explicitly take into account the presence of other bidders. Our search for improved strategies is driven by two questions:
1. Can we develop a strategy that will beat a field of Knapsack agents?
2. Can we find a strategy that will outperform Knapsack when used by all agents?
The first question assumes that the other agents have not discovered a strategy superior to Knapsack, and so will all be Knapsack agents. Accordingly, it is necessary to adopt a strategy that, when faced with competing Knapsack agents, will generate more profit than when using Knapsack. Certain types of strategies (such as Budget Stretching [1]) which exploit the myopic nature of Knapsack initially appeared promising, allowing significant profit gains in the perfect-knowledge case. However, when we relaxed the unrealistic assumption of perfect knowledge, the gains all but disappeared; the strategies seemed too fragile to be effective in the more realistic setting. In this paper, we focus on the second question, describing a strategy (PRSDR) which, when used by all agents, will significantly outperform Knapsack, and which additionally has the stability property that in an auction of all PRSDR agents, each agent is better off sticking with PRSDR than reverting back to Knapsack.
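A minimal sketch of the knapsack-style selection in Step 4 may help; it treats the remaining budget as the knapsack capacity and profit (value minus price) as the gain, and it omits the BU/eligibility complications mentioned above. Names and numbers are illustrative.

```python
# Minimal sketch of knapsack-style license selection (Step 4, Section 4.1):
# choose licenses maximizing total (value - price) subject to remaining budget.
# Classic 0/1 knapsack DP with price as weight; eligibility constraints, which
# the full agent also enforces, are omitted for brevity.

def choose_licenses(licenses, budget):
    """licenses: list of (name, price, value); returns the chosen names."""
    best = {0: (0.0, [])}                      # total spent -> (profit, names)
    for name, price, value in licenses:
        for spent, (profit, names) in list(best.items()):
            s, p = spent + price, profit + (value - price)
            if s <= budget and p > best.get(s, (float("-inf"), None))[0]:
                best[s] = (p, names + [name])
    return max(best.values())[1]

licenses = [("NY-C1", 1200, 1575), ("NY-C2", 1200, 1500), ("LA-A", 900, 1100)]
print(choose_licenses(licenses, budget=2200))  # -> ['NY-C1', 'LA-A']
```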
5 Randomized SDR

One simple strategy that can lead to an improvement in the utilities of all involved is Strategic Demand Reduction (SDR) [9]. In SDR, bidders avoid competing with each other for licenses, keeping the prices on those licenses low for all involved. Essentially, bidders using SDR allocate the goods available among themselves and then do not compete on goods they have not been allocated. For example, consider two identical bidders each bidding on two identical items of equal worth to both. Under myopic greedy bidding, like Knapsack, each bidder ends up obtaining one item at a bid-up price, since each can afford the price of any single item but not of both. Realizing this, the bidders can reduce their demands to one item each, ensuring that both bidders obtain an item at the minimum price and earn correspondingly higher profit.

Since explicit communication is not permitted, the strategy needs implicit methods to allocate licenses [4]. Hence, it is necessary to use a variant of SDR that can allocate licenses dynamically and with only implicit communication. The essential idea in Randomized SDR (RSDR) is to bid for any desired unclaimed license; recall that ties are broken randomly. The first strategic bidder to become the provisional winner of a license is said to own that license. This approach (shown in Table 2) provides Step 4 of Table 1's general agent bidding algorithm.

Table 2: Description of the naive RSDR implementation of Step 4 in Table 1.
4.1. Use Knapsack to bid for an optimal set of desired licenses that are not owned by another strategic bidder (i.e., don't steal licenses from strategic bidders).

An agent always takes any licenses it owns back from secondary bidders. The simulator reports a provisional winner for each license that has received a bid, ensuring that each license any agent desires will become owned by a strategic bidder, and hence can be acquired by that bidder without competition from other RSDR agents. No profit above the valuations of the secondary bidders (the effective price floor) is wasted in figuring out this allocation; with this approach, the strategic bidders will determine their allocation while the secondary bidders are still active.

5.1 RSDR Algorithm

The expected net value to an agent when all agents use naive RSDR is directly related to the total value the agent desires and inversely related to the amount of competition in those desired markets, as is reasonable. However, due to the underlying randomness, it is possible for an agent to be unlucky and receive only a small amount of value. For each agent, one can compute a satisfaction value, which is simply the total value to that agent of the licenses it owns divided by the total value to that agent of all of its goals. If the satisfaction of an agent is sufficiently below that of the other agents, it has been an unlucky bidder. If the satisfaction of an agent is low enough, the agent might actually receive less profit from RSDR than from Knapsack. An unlucky agent can notice this well before the end of the auction, since license allocations are largely determined early. Additionally, an agent can always start out using RSDR and then later switch to using Knapsack. Thus, the unlucky agent would likely switch to Knapsack and earn more while lowering the profits of the other agents, which may be desirable if they are competitors. Accordingly, an unlucky bidder will defect from the RSDR scheme and go back to using Knapsack. This risk of defection lowers the utility of RSDR, and so should be minimized.
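The satisfaction bookkeeping just described is simple enough to sketch directly; the 90% threshold anticipates the fairing rule of the next section, and all values below are made up.

```python
# Sketch of the satisfaction value used to detect unlucky RSDR bidders:
# satisfaction = (value of licenses the agent owns) / (value of all its goals).

def satisfaction(owned_values, goal_values):
    return sum(owned_values) / sum(goal_values)

def unlucky(my_satisfaction, all_satisfactions, threshold=0.90):
    """Unlucky if sufficiently below the average satisfaction; 90% of the
    average is the threshold suggested in the text."""
    avg = sum(all_satisfactions) / len(all_satisfactions)
    return my_satisfaction < threshold * avg

sats = [0.95, 0.88, 0.40, 0.91, 0.85]      # five strategic bidders
print([unlucky(s, sats) for s in sats])    # only the 0.40 agent is unlucky
```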
One simple method of avoiding such unlucky bidders is to introduce a process of fairing. The algorithm allows an unlucky bidder to randomly take licenses that it wants, regardless of ownership, until its satisfaction is no longer too low (Table 3). If this strategy drives another agent's satisfaction too low, the newly unlucky agent would then take licenses from others, until every agent has a reasonable amount. Since not all agents can be above average satisfaction, the threshold that determines an unlucky bidder must be less than the average; in practice, a value around 90% works well, although any threshold low enough to prevent profit loss due to spurious fairing would also suffice.

Table 3: Description of the RSDR implementation of Step 4 in Table 1.
4.1. Use Knapsack to bid for an optimal set of desired licenses that are not owned by a strategic bidder.
4.2. Submit bids on randomly selected licenses until no longer an unlucky bidder (i.e., only steal as much as is fair).

Table 3 shows Step 4 of the RSDR algorithm. In Step 4.2, ownership of a license can be taken just by bidding on that license and becoming provisional winner. This process greatly smoothes out the randomness of the scheme, but is not without risks of its own.

5.2 Cheater Detection

The fairing process allows agents to take licenses that other agents had considered their own. This behavior provides great temptation to cheat by deciding not to use RSDR and defecting to Knapsack instead: if the other agents are using RSDR, they will allow you to take any licenses you want, and won't compete for them. To dissuade agents from cheating, there must be a punishment for it such that cheaters expect to make a smaller profit than when they don't cheat. Adding such punishments to RSDR makes a self-enforcing algorithm, called Punishing RSDR (PRSDR).

Before cheaters can be punished, they must be detected, by observing which agents take licenses from other bidders when they shouldn't. In the FAucS domain, three observations are key to our detection algorithm:
1. A PRSDR agent only takes licenses owned by another agent through either fairing or punishing a cheater.
2. Fairing only occurs when an agent is unlucky (below-average satisfaction rating).
3. A PRSDR agent punishes at most one of the five strategic bidders.
These points tightly circumscribe the situations in which a PRSDR agent will take licenses from another agent; it must either have low satisfaction (fairing) or take only from one agent (punishing). Accordingly, our agents identified cheaters as follows:
- If an agent has a high satisfaction in a round (at least 10% higher than average for that round) and yet it bids on licenses owned by two or more other agents, then it is showing evidence of cheating.
- If an agent shows evidence of cheating in five or more total rounds in the auction, then it is considered a cheater for the remainder of the auction.
The second requirement is a result of the agents' uncertain knowledge about each other's goals, and thus satisfaction levels. Since an agent that appears to have a high satisfaction may in fact not, evidence of cheating in a round is not proof that the agent is in fact a cheater. To reduce false positives, the satisfaction threshold for an agent to be considered a cheater must be greater than the average satisfaction for the round, and this evidence of cheating must be present for more than one round.

In practice, using a threshold of 10% over average satisfaction and requiring evidence of cheating in 5 total rounds led to good results. These settings correctly identified every cheater in all of 60 test runs, while incorrectly identifying a complying PRSDR agent as a cheater in none.
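A sketch of the two-part test follows, with the 10% and 5-round thresholds from the text; the data structures and names are illustrative.

```python
# Sketch of the two-part cheater test: high satisfaction plus stealing from
# two or more owners counts as evidence; persistent evidence marks a cheater.
from collections import defaultdict

evidence = defaultdict(int)      # agent -> rounds with evidence of cheating
cheaters = set()

def update_cheater_evidence(rnd_satisfaction, victims_bid_on):
    """rnd_satisfaction: agent -> satisfaction this round;
    victims_bid_on: agent -> set of distinct owners whose licenses it bid on."""
    avg = sum(rnd_satisfaction.values()) / len(rnd_satisfaction)
    for agent, sat in rnd_satisfaction.items():
        if sat >= 1.10 * avg and len(victims_bid_on.get(agent, set())) >= 2:
            evidence[agent] += 1          # high satisfaction, yet stealing
            if evidence[agent] >= 5:      # persistent evidence -> cheater
                cheaters.add(agent)

# One illustrative round: agent "B" is well above average satisfaction and
# bids on licenses owned by two different agents.
update_cheater_evidence({"A": 0.80, "B": 0.95, "C": 0.78},
                        {"B": {"A", "C"}})
print(dict(evidence), cheaters)   # {'B': 1} set()
```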
Further experimentation indicated that good cheater detection is not particularly sensitive to these two parameters: values within a modest window of those above gave perfect or near-perfect ability to identify all cheaters. Thus, there is no requirement that all agents agree upon a precise set of parameters. Due to this high accuracy and the minimal effect of the rare errors potentially caused by randomness, we assumed correct detection in our tests. While cheating was all-or-nothing in this implementation, one could envision a more careful cheater who attempted to stay hidden by cheating in moderation. However, any cheating that pushes the cheater's satisfaction over the threshold (usually 10% over the average) will be detected, leaving little room for hidden cheating even with this simple all-or-nothing algorithm. Moreover, the uncertain nature of the other agents' information about an agent means that any amount of cheating raises the risk of being flagged as a cheater, with more cheating causing more risk. In particular, even cheating by adhering to the PRSDR algorithm but with a higher fairing constant (say, insisting on having 100% of average satisfaction instead of the default 90%) raises the chance of being flagged as a cheater and severely punished. We leave the investigation of more sophisticated cheating and cheating-detection strategies for future work.

5.3 PRSDR Algorithm

Once the cheater(s) have been identified, many methods of punishment are possible. Our work aimed for aggressive simplicity: if a cheater takes one of your licenses, make it your first priority to take it back, and then keep that license away from the cheater (Table 4).

Table 4: Description of the full PRSDR implementation of Step 4 in Table 1.
4.1. Bid for any licenses stolen by a cheater (i.e., don't let anyone steal more than is fair).
4.2. Use Knapsack to bid for an optimal set of desired licenses that are not owned by a strategic bidder.
4.3. Submit bids on randomly selected licenses until no longer an unlucky bidder.

Cheaters are not considered to own licenses, and so any license that the other strategic bidders have any interest in will become owned by one of them. Thereafter, any license the cheater tries to take will provoke an immediate reaction from the agent perceived to be the legitimate owner of that license, who will not let the cheater have the license until it becomes too expensive to take back, at which point the cheater will receive only a small amount of profit from the license. This removes any advantages due to PRSDR from the cheater while still allowing other agents to profitably cooperate among themselves in other markets.
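Putting the three stages of Table 4 together, a compact and deliberately simplified sketch of Step 4 might look as follows; the greedy selection stands in for the real knapsack step, and all structures are illustrative stand-ins for the actual agent state.

```python
# Compact sketch of PRSDR's Step 4 (Table 4): punish, then knapsack over
# unowned licenses, then fairing. Plain data stands in for real agent state.
import random

def prsdr_step4(me, owners, cheaters, my_stolen, values, prices,
                budget, is_unlucky, strategic):
    bids = []
    # 4.1 Punishment: reclaim our licenses taken by a detected cheater,
    # until a license becomes too expensive to take back.
    bids += [l for l in my_stolen
             if owners.get(l) in cheaters and prices[l] < values[l]]
    # 4.2 Cooperation: greedy stand-in for the knapsack step, over licenses
    # not owned by a (non-cheating) strategic bidder.
    free = [l for l in values
            if owners.get(l) not in strategic - cheaters and l not in bids]
    free.sort(key=lambda l: values[l] - prices[l], reverse=True)
    spent = 0
    for l in free:
        if values[l] > prices[l] and spent + prices[l] <= budget:
            bids.append(l)
            spent += prices[l]
    # 4.3 Fairing: if still unlucky, randomly take a desired license.
    if is_unlucky and free:
        bids.append(random.choice(free))
    return bids

owners = {"NY-1": "B", "NY-2": None, "LA-1": "C"}
print(prsdr_step4("A", owners, cheaters={"B"}, my_stolen=["NY-1"],
                  values={"NY-1": 1575, "NY-2": 1500, "LA-1": 1100},
                  prices={"NY-1": 1200, "NY-2": 1000, "LA-1": 900},
                  budget=2500, is_unlucky=False,
                  strategic={"A", "B", "C", "D", "E"}))  # ['NY-1', 'NY-2']
```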
5.4 Algorithm Detail

Table 5 recaps PRSDR in full algorithmic detail. The table presents only Step 4 of Table 1; otherwise, an agent using PRSDR acts identically to a Basic Agent. The set of strategic bidders participating in this improved strategy is denoted [...]. Without loss of generality, we use the convention that an agent always refers to itself as agent [...], and all other strategic bidders are agents [...] through [...]. Following this convention, [...] refers to the item corresponding to agent [...]. In Table 5, license selection covers the first half of Part 1 (lines 1–10) and all of Part 2. Part 1 handles internal state, while Part 2 computes the sets of available licenses in each market, which are simply those licenses that are not owned by a strategic bidder and whose price is less than this agent's value for the license. Part 3 implements fairing, and punishment is implemented in the second half of Part 1 (lines 11–16).

6 Empirical Results

In this section we present empirical results demonstrating the effectiveness of PRSDR. Our runs included only the largest 67 markets from Auction 35 (the subset of the top 100 U.S. markets that were available in Auction 35), which constitute a large majority of the value in the auction. This helped reduce run times to manageable levels. There were 163 licenses available in these markets. As described above, we used five strategic bidders and five aggregated secondary bidders, with priorities, market values, and budgets randomly selected from a constrained distribution so as to realistically represent the Auction 35 scenario. The secondary bidders are used to simulate realistic auction conditions; hence only strategic bidders are included in the results.

6.1 Homogeneous Strategy Results

Our initial experiment measured the potential gain when all agents use the PRSDR strategy compared to when they all use the Knapsack strategy. Aggregate results from [...]
ABAQUS Frequently Asked Questions (Version 2.0)

Table of Contents: click a section title to jump to the corresponding content (some versions of Word may require holding the Ctrl key).
0. Introductory ABAQUS materials
6.1 How to install ABAQUS
6.2 Display problems in ABAQUS (grid not shown, graphics-card conflicts, changing interface colors)
6.3 Documentation search not working
6.4 Insufficient disk space
6.5 Linux systems
6.6 Recovering a model after a crash
English Essays on Wang Yaping, China's Female Astronaut
Three full sample essays are provided below for the reader's reference.

Sample 1: The Pioneering Spirit of Wang Yaping, China's First Woman Astronaut in Space

As a high school student with a passion for science and space exploration, I have been captivated by the remarkable achievements of Wang Yaping, China's first woman astronaut to perform extravehicular activities (EVAs) in space. Her groundbreaking mission not only shattered glass ceilings but also ignited the dreams of countless young girls like myself, inspiring us to reach for the stars and pursue careers in the fields of science, technology, engineering, and mathematics (STEM).

Born in 1980 in Yantai, Shandong Province, Wang Yaping's journey to the cosmos was paved with determination and unwavering dedication. After graduating from the Air Force Aviation University with a degree in aviation, she joined the People's Liberation Army Air Force and quickly rose through the ranks, becoming a skilled pilot. Her exceptional skills and commitment caught the attention of the Chinese space program, leading to her selection as one of the nation's first female astronaut candidates in 2010.

Wang Yaping's first historic mission took place in June 2013, when she became a member of the Shenzhou-10 crew, embarking on a 15-day journey to the Tiangong-1 space station. During this groundbreaking expedition, she not only conducted numerous scientific experiments but also delivered a captivating live lecture from space, inspiring millions of students across China and the world with her infectious enthusiasm for science and exploration.

An even more awe-inspiring moment came in November 2021, during the Shenzhou-13 mission, when Wang Yaping carried out China's first-ever spacewalk by a female astronaut. Donning a bulky spacesuit and tethered to the Tiangong space station, she fearlessly stepped into the void of space, becoming the first Chinese woman to venture beyond the confines of a spacecraft. For more than six hours, she and her crewmate Zhai Zhigang performed a series of complex tasks, including installing equipment on the exterior of the station.

As I watched the live footage of Wang Yaping floating gracefully in the vastness of space, her determination and courage were palpable. She embodied the pioneering spirit that has driven human exploration since the dawn of time, reminding us that boundaries are meant to be pushed and that the pursuit of knowledge knows no gender.

Beyond her remarkable achievements in space, Wang Yaping has become a role model and inspiration for countless young girls and women around the world. Through her educational outreach efforts, she has tirelessly championed the importance of STEM education, encouraging young minds to embrace their curiosity and pursue their dreams, regardless of societal expectations or limitations.

One of the most memorable moments for me was when Wang Yaping conducted a live video lesson from space, demonstrating physics experiments in microgravity to over 60 million students across China. With her infectious enthusiasm and engaging teaching style, she not only made complex scientific concepts accessible but also ignited a passion for learning in the hearts and minds of her young audience.

As a student myself, watching Wang Yaping's journey has been a profound and empowering experience. Her accomplishments have shattered stereotypes and challenged the notion that certain fields are "off-limits" for women.
Through her unwavering determination and groundbreaking achievements, she has paved the way for future generations of female scientists, engineers, and explorers, proving that with hard work and dedication, anything is possible.

Wang Yaping's story serves as a powerful reminder that the boundaries of human potential are limitless, and that the pursuit of knowledge and exploration should know no barriers of gender, race, or nationality. Her courage and resilience in the face of adversity have inspired me to dream bigger and to never let societal expectations or limitations define my aspirations.

As I look towards the future, I am filled with hope and excitement, knowing that trailblazers like Wang Yaping have opened doors that were once thought to be closed. Her journey has ignited a fire within me, fueling my desire to contribute to the advancement of science and exploration, and to one day follow in her footsteps, venturing into the vast unknown with the same pioneering spirit that has propelled humanity forward.

In the words of Wang Yaping herself, "The world is vast, and I have seen its true face from space. My dream is that one day, more Chinese women will go out and experience the wonders of space." As a student and aspiring scientist, I am determined to make that dream a reality, carrying the torch of exploration and discovery into the future, inspired by the trailblazing achievements of remarkable women like Wang Yaping.

Sample 2: Wang Yaping, China's Pioneering Female Taikonaut

As a young student growing up in China, I have been inspired by the incredible achievements of my country's space program and the brave taikonauts who have ventured into the cosmos. Among these pioneering space explorers, one figure stands out as a true trailblazer and role model: Wang Yaping, China's first female taikonaut to conduct a spacewalk and deliver a lecture from space.

Born in 1980 in Shandong Province, Wang's journey to becoming an astronaut was not an easy one. From a young age, she exhibited a keen interest in aviation and science, fueled by her fascination with the night sky. Despite facing societal and cultural barriers that often discouraged women from pursuing careers in traditionally male-dominated fields, Wang remained undeterred in her pursuit of her dreams.

After completing her studies, Wang joined the ranks of the People's Liberation Army Air Force. Her tenacity, intelligence, and physical prowess quickly set her apart, and in 2010 she was selected as part of China's second group of taikonaut trainees, becoming one of only two women in the cohort.

Wang's opportunity to make history came in 2013, when she was chosen as a crew member for the Shenzhou-10 mission, China's fifth crewed spaceflight and the longest-duration mission at that time. During this historic journey, Wang became the second Chinese woman and the 59th woman worldwide to travel to space. But her accomplishments didn't stop there.

In a groundbreaking moment during the later Shenzhou-13 mission, in November 2021, Wang conducted China's first-ever spacewalk by a woman, a feat that captured the attention and admiration of people around the globe. Clad in her bulky spacesuit, she gracefully emerged from the Tiangong space station and spent more than six hours in the vacuum of space, performing various tasks and experiments.

But Wang's most widely seen contribution, during the earlier Shenzhou-10 mission, was her remarkable lecture delivered from the Tiangong-1 space module.
In a live broadcast that was watched by millions of students across China, Wang conducted a captivating physics lesson, demonstrating the principles of motion and gravity in the microgravity environment of space. Her engaging and accessible teaching style not only inspired countless young minds but also helped to demystify the wonders of space exploration and promote scientific literacy.

Wang's achievements have made her a household name in China and a source of immense pride for the nation. Her success has shattered glass ceilings and challenged traditional gender stereotypes, proving that women can excel in even the most demanding and technological fields.

As a student, I find Wang's story deeply inspiring. Her unwavering determination, resilience, and commitment to excellence serve as a powerful reminder that no dream is too big or too audacious when pursued with passion and perseverance. Wang's journey has shown that barriers can be broken, and that women can achieve greatness in any field they choose, including the final frontier of space exploration.

Moreover, Wang's success has had a profound impact on the way young girls in China view their potential and aspirations. By seeing a fellow countrywoman accomplish such remarkable feats, they are emboldened to pursue their dreams, regardless of societal constraints or gender norms. Wang has become a living embodiment of the boundless possibilities that await those who dare to dream and work hard.

Beyond her personal accomplishments, Wang's contributions have also played a crucial role in advancing China's space program and solidifying the country's position as a major player in the global space arena. Her spacewalk and lecture from space have not only showcased China's technological prowess but have also fostered national pride and inspired a new generation of scientists, engineers, and explorers.

As I look towards the future, I am filled with hope and excitement at the prospect of what lies ahead for China's space endeavors. With trailblazers like Wang Yaping paving the way, I am confident that my generation will witness even greater achievements and breakthroughs in space exploration. Perhaps one day, we too might have the opportunity to follow in Wang's footsteps and venture into the vast expanse of the cosmos, pushing the boundaries of human knowledge and understanding.

In the meantime, Wang Yaping's legacy will continue to inspire and empower young people like me to pursue our passions, embrace challenges, and strive for excellence in whatever field we choose. Her story serves as a powerful reminder that with determination, hard work, and an unwavering spirit, anything is possible, even defying gravity and reaching for the stars.

Sample 3: Wang Yaping, China's Pioneering Female Astronaut

As a young student in China, I have always been captivated by the wonders of space exploration. From the first time I learned about Yuri Gagarin's historic journey as the first human in space, to the awe-inspiring achievements of the Apollo missions that landed humans on the Moon, the idea of venturing beyond Earth's atmosphere has filled me with a sense of curiosity and amazement.

However, my fascination with space took on a whole new dimension when I learned about Wang Yaping, China's first female astronaut to walk in space.
Her remarkable journey not only shattered glass ceilings but also inspired countless young girls like myself to dream big and pursue their passions, no matter how daunting the challenges may seem.

Born in 1980 in Shandong Province, Wang Yaping's path to becoming an astronaut was anything but ordinary. Growing up in a family of educators, she developed a keen interest in science and technology from an early age. Her unwavering determination and academic excellence propelled her to enroll at the prestigious Mu Dan Jiang Aviation University, where she honed her skills as a pilot in the Chinese Air Force.

Wang's breakthrough came in 2010, when she was selected to join the second batch of Chinese astronauts, becoming one of two female candidates chosen from thousands of applicants. This achievement was a testament to her exceptional abilities and unwavering dedication, as she defied societal norms and gender stereotypes that often discouraged women from pursuing careers in traditionally male-dominated fields.

In 2013, Wang Yaping made history when she became the second Chinese woman and the 59th woman worldwide to travel to space. As part of the Shenzhou-10 mission, she spent 15 days aboard the Tiangong-1 space station, conducting a wide range of scientific experiments and engaging in educational outreach activities that captured the imagination of millions of Chinese students.

One of the most memorable moments of her mission was the live Science Lecture from Space, where Wang Yaping demonstrated physics concepts such as surface tension and inertia, using simple objects like a ring and a ball of water. This groundbreaking event, watched by over 60 million Chinese students, not only served as an invaluable educational resource but also shattered stereotypes about women in science and technology.

Wang Yaping's accomplishments didn't stop there. In November 2021, during the Shenzhou-13 mission, she embarked on another historic flight, becoming the first Chinese woman to conduct a spacewalk. For more than six hours, she and her crewmate ventured outside the Tiangong space station, performing a series of complex tasks, including installing equipment and testing technology that would pave the way for future missions.

As I watched her gracefully maneuver in the vast expanse of space, tethered only by a thin lifeline, I was struck by her courage, resilience, and unwavering determination. Wang Yaping's achievements transcended mere scientific milestones; they were a powerful symbol of the boundless potential of women and a testament to the transformative power of education and perseverance.

In the aftermath of her historic spacewalk, Wang Yaping's legacy has continued to inspire and empower young girls across China and around the world. Her journey has shown us that gender should never be a barrier to pursuing one's dreams, and that with hard work, dedication, and a thirst for knowledge, even the most daunting challenges can be overcome.

As a student, I am deeply grateful for the example set by trailblazers like Wang Yaping.
Her achievements have not only expanded the frontiers of human exploration but have also ignited a passion within me to pursue my own aspirations, no matter how lofty they may seem.

In a world where gender inequality and societal biases persist, Wang Yaping's story serves as a powerful reminder that change is possible, and that each of us has the potential to inspire and uplift others through our actions and determination.

As I look towards the future, I am filled with hope and excitement, knowing that the path paved by Wang Yaping and other pioneering women has opened up a world of possibilities. Perhaps one day, I too may have the opportunity to venture into the vast expanse of space, contributing to the ongoing quest for knowledge and exploration that has captivated humanity for generations.

Until then, I will continue to draw inspiration from Wang Yaping's remarkable journey, using her example as a guiding light to navigate the challenges and obstacles that lie ahead. For in her story, I see a powerful reminder that with courage, perseverance, and an unwavering belief in oneself, even the seemingly impossible can be achieved.
The Model Verification Process in Structural Design
When it comes to the process of model verification in structural design, there are several important steps that need to be carefully followed. Firstly, it is crucial to establish the objectives and scope of the model verification process. This involves clearly defining which aspects of the design will be verified, as well as the specific criteria that will be used to determine the success or failure of the model. Additionally, it is important to consider the available resources and time constraints when planning the verification process. Once the objectives and scope have been established, the next step is to select the appropriate verification methods; a toy example of checking such a pass/fail criterion is sketched below.
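As an illustration of such a pass/fail criterion (the quantities, tolerance, and names below are assumptions, not taken from the text), one might compare predicted and measured modal frequencies:

```python
# Minimal sketch of an acceptance check for model verification.
# Hypothetical criterion: predicted modal frequencies must match measured
# ones within a relative tolerance; names and tolerances are assumptions.

def verify_model(predicted, measured, rel_tol=0.10):
    """Return (passed, worst_error) comparing prediction to measurement."""
    errors = [abs(p - m) / m for p, m in zip(predicted, measured)]
    worst = max(errors)
    return worst <= rel_tol, worst

predicted_hz = [1.52, 4.87, 9.10]   # model's first three natural frequencies
measured_hz  = [1.60, 4.75, 8.60]   # values from an ambient-vibration test
passed, worst = verify_model(predicted_hz, measured_hz)
print(f"verification {'passed' if passed else 'failed'} (worst error {worst:.1%})")
```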
Fitting a Four-Parameter Logistic Curve with CurveExpert

1. Introduction

In scientific research and data analysis, fitting logistic function curves is a common technique, and for fitting the four-parameter logistic curve in particular, CurveExpert is a very practical tool. The software helps researchers fit four-parameter logistic curves quickly and accurately, supporting both analysis of experimental data and prediction from it. This article introduces and analyzes four-parameter logistic curve fitting with CurveExpert and demonstrates its advantages and value in practical applications.

2. What is a four-parameter logistic curve?

The four-parameter logistic curve is a common curve-fitting model that describes growth and trend changes in data through four parameters. The model is typically used to describe growth, decay, and saturation processes in fields such as biology, medicine, and environmental science. Its functional form is usually

\[ y = d + \frac{a-d}{1+(x/c)^b} \]

where a is the upper limit of the curve, b is the slope of the curve, c is the midpoint of the curve, and d is the lower limit of the curve.

3. Features and advantages of CurveExpert

CurveExpert is a powerful data-fitting tool; it not only supports common linear and nonlinear fitting models but can also fit four-parameter logistic curves precisely. Its advantages show mainly in the following respects:
- Automatic fitting: from the data the user supplies, CurveExpert automatically computes the best-fit curve parameters, with no manual parameter tuning required.
- High accuracy: CurveExpert uses advanced numerical algorithms so that, during fitting, the agreement between the fitted curve and the actual data is driven to an optimum, providing high precision.
- Versatility: besides the four-parameter logistic curve, CurveExpert supports many common fitting models, including linear, polynomial, and exponential fits, meeting the needs of different studies and applications.

4. How to fit a four-parameter logistic curve with CurveExpert

Using CurveExpert to fit a four-parameter logistic curve is straightforward. Prepare the data to be fitted and load it into the CurveExpert software. Select the four-parameter logistic curve as the fitting model and click the "Start Fit" button; the software automatically computes the best-fit parameters and displays the fitted curve and the quality of the fit.
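For readers who wish to reproduce such a fit outside CurveExpert, the sketch below performs an equivalent four-parameter logistic fit with SciPy; the data points and starting values are synthetic and purely illustrative.

```python
# Sketch of the same four-parameter logistic fit using scipy instead of
# CurveExpert's GUI; the data below are made-up dose-response values.
import numpy as np
from scipy.optimize import curve_fit

def logistic4(x, a, b, c, d):
    """4PL model: y = d + (a - d) / (1 + (x / c)**b)."""
    return d + (a - d) / (1.0 + (x / c) ** b)

x = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
y = np.array([9.8, 9.5, 8.2, 5.1, 2.3, 1.4, 1.1])

# Reasonable starting guesses: a ~ max(y), d ~ min(y), c ~ mid-range x, b ~ 1.
p0 = [y.max(), 1.0, float(np.median(x)), y.min()]
params, cov = curve_fit(logistic4, x, y, p0=p0)
a, b, c, d = params
print(f"a={a:.3f}, b={b:.3f}, c={c:.3f}, d={d:.3f}")
```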
An English Essay on Sculpture
Sculpture: An Enduring Art Form

Sculpture, the art of creating three-dimensional forms, has captivated the human imagination for millennia. From the intricate carvings of ancient civilizations to the bold, modern installations of contemporary artists, this timeless medium has the power to evoke emotions, challenge perceptions, and leave a lasting impression on those who encounter it.

At its core, sculpture is a testament to the human desire to shape and manipulate the physical world around us. Whether working with clay, stone, metal, or a myriad of other materials, sculptors possess a unique ability to transform the raw and the mundane into the extraordinary. Through their skilled hands and creative vision, they breathe life into inanimate objects, imbuing them with a sense of movement, emotion, and meaning.

One of the most remarkable aspects of sculpture is its ability to transcend the boundaries of time and culture. The ancient sculptures of Egypt, Greece, and Rome continue to captivate and inspire audiences today, their timeless beauty and profound symbolism resonating across the centuries. Similarly, the avant-garde works of modern sculptors, such as Auguste Rodin, Henry Moore, and Anish Kapoor, have challenged and expanded the very definition of the art form, pushing the boundaries of what is possible.

Beyond their aesthetic qualities, sculptures often serve as powerful vehicles for social, political, and cultural expression. Throughout history, sculptors have used their craft to commemorate important events, honor influential figures, and give voice to marginalized communities. The towering statues of revolutionary leaders, the haunting memorials to victims of war and oppression, and the bold, provocative installations that challenge societal norms all bear witness to the transformative power of sculpture.

Moreover, the act of creating sculpture itself is a profoundly meaningful and transformative process. Sculptors must possess a deep understanding of their chosen materials, as well as a keen eye for proportion, balance, and form. They must also be willing to embrace the unpredictable nature of their craft, adapting to the challenges and surprises that arise during the creative process. This constant negotiation between the artist's vision and the physical constraints of the medium is what gives sculpture its unique and captivating qualities.

In the contemporary art world, the role of sculpture has continued to evolve, with artists pushing the boundaries of traditional techniques and materials. From the use of found objects and recycled materials to the incorporation of technology and digital processes, the sculptural landscape has become increasingly diverse and experimental. This dynamism and innovation have allowed sculpture to remain a vital and relevant art form, capable of addressing the pressing issues and concerns of our time.

Ultimately, the enduring appeal of sculpture lies in its ability to connect us to the human experience in a tangible and profound way. Whether we are admiring the intricate details of an ancient statue or pondering the conceptual complexity of a modern installation, the act of engaging with sculpture invites us to pause, reflect, and connect with the world around us in a deeper, more meaningful way. It is this transformative power that has made sculpture an enduring and vital art form, one that will continue to inspire and captivate audiences for generations to come.
A Conceptual Experiment for Detecting Gamma-Ray Bursts with the Single-Particle Method at Altitudes above 5000 m, Based on Water Cherenkov Technology

Liu Maoyuan; Li Haijin; Zhaxi Sangzhu; Zhou Yi

Abstract: Ground-based extensive air shower experiments have so far been powerless to detect cosmic-ray particles in the tens-of-GeV energy range from GRBs (gamma-ray bursts) because of their threshold energies. The experimental altitude needs to be increased in order to achieve more effective observation. In the present paper, setting up a water Cherenkov detector (WCD) array at 5200 m altitude in Tibet is proposed; by using single-particle technology, ground-based observation of multiple GRBs and their tens-of-GeV photons could be realized, which would also provide predictive support for large-scale experiments.
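The single-particle (scaler) technique mentioned in the abstract monitors the summed counting rate of all detectors and looks for a short excess coincident with a satellite-reported burst. A minimal sketch of the corresponding significance estimate is given below, using a simple Gaussian approximation rather than a full Li-Ma calculation; all rates are invented for illustration.

```python
# Sketch of the single-particle (scaler-mode) search: compare the summed
# counting rate during the burst window against the background expectation.
# All numbers are illustrative, not from the proposed array.
import math

background_rate_hz = 50_000.0   # summed single-particle rate of the array
burst_window_s     = 10.0       # duration of the GRB search window
observed_counts    = 503_500    # counts recorded in the window

expected = background_rate_hz * burst_window_s
excess = observed_counts - expected
significance = excess / math.sqrt(expected)   # simple Gaussian approximation
print(f"excess = {excess:.0f} counts, significance = {significance:.1f} sigma")
```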
Three Modes of Foundation Damage Caused by Shear Failure of the Subsoil

1. Theoretical background for the plate load test

In site investigation, the plate load test consists mainly of loading continuously until the soil fails, in order to collect the relevant bearing-capacity and deformation parameters; to do this, one first needs to know the modes in which a foundation soil fails. Under load, foundation damage is usually shear failure caused by insufficient bearing capacity, and shear failure of the subsoil can be divided into the following three modes.

General shear failure: at small loads the p-s (pressure-settlement) curve has a straight-line segment. As the load increases, shear failure begins in the soil at the edge of the footing; with further loading the p-s curve becomes nonlinear, and as the load continues to grow, the shear failure zone keeps expanding until a continuous slip surface finally forms in the subsoil. The footing sinks abruptly or tilts to one side while the ground surface around it heaves, and the subsoil fails in general shear.

Punching shear failure occurs when compression of the weak soil beneath the footing causes it to settle continuously; when the load reaches a certain level, the footing "punches" downward and the soil along the sides of the footing fails in vertical shear. During this process there is no obvious continuous slip surface in the subsoil, the ground around the footing does not heave, the footing does not tilt significantly, and the p-s curve shows no obvious turning point.

Local shear failure lies between the two. Shear failure again starts from the edge of the footing, but the slip surface does not develop up to the ground surface and remains confined within a certain zone of the subsoil; the ground around the footing heaves somewhat, but there is no obvious tilting or collapse.

Which failure mode occurs depends mainly on the compressibility of the soil. In general, dense sand and stiff clay will show general shear failure, whereas highly compressible loose sand and soft clay may show local shear or punching shear failure. In addition, the failure mode is related to factors such as the embedment depth of the footing and the loading rate: with shallow embedment and a low loading rate, general shear failure tends to occur; otherwise local shear or punching failure dominates. For normally consolidated saturated clays, if the applied load does not cause volume change, general shear failure will occur. The theoretical formulas for computing foundation bearing capacity are all derived under the condition of general shear failure.
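For concreteness, the sketch below evaluates one such classical formula, the Terzaghi-style superposition equation for a strip footing; the bearing-capacity factor expressions are the commonly tabulated Reissner/Vesic forms and, like the input numbers, are assumptions for illustration rather than anything prescribed by the text.

```python
# Sketch of a classical bearing-capacity check (general shear failure assumed).
# Terzaghi-style strip-footing equation: qu = c*Nc + q*Nq + 0.5*gamma*B*Ngamma.
# Factor expressions below are the common Reissner/Vesic forms; numbers are
# illustrative only.
import math

def bearing_capacity_factors(phi_deg):
    phi = math.radians(phi_deg)
    nq = math.exp(math.pi * math.tan(phi)) * math.tan(math.pi / 4 + phi / 2) ** 2
    nc = (nq - 1.0) / math.tan(phi) if phi_deg > 0 else 5.14
    ngamma = 2.0 * (nq + 1.0) * math.tan(phi)   # Vesic's approximation
    return nc, nq, ngamma

c, phi_deg = 10.0, 30.0       # cohesion (kPa) and friction angle (deg)
gamma, B, D = 18.0, 1.5, 1.0  # unit weight (kN/m^3), footing width (m), depth (m)

nc, nq, ng = bearing_capacity_factors(phi_deg)
qu = c * nc + gamma * D * nq + 0.5 * gamma * B * ng   # surcharge q = gamma*D
print(f"Nc={nc:.1f}, Nq={nq:.1f}, Ngamma={ng:.1f}, qu={qu:.0f} kPa")
```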
2. Testing

1. Test methods. Test methods can be divided into stress-controlled and strain-controlled methods. The stress-controlled method is the conventional test method and has two variants, the slow method (relative-stabilization method) and the fast method (constant-rate-of-loading method); the strain-controlled method is also called the constant-rate-of-settlement method. For sands, soils of low to medium compressibility, and highly compressible soils, the required settlement rates and pressure-reading schedules differ. When the purpose of the test is to obtain moduli, the slow method may be used. When only the bearing capacity is required, the fast or strain-controlled method may be used; this is generally applicable to clayey soils in plastic to stiff states, sandy soils, gravelly soils, and the like. The strain-controlled method can also be used to obtain the undrained shear strength and the undrained shear modulus.
arXiv:0711.4353v2 [hep-ph] 2 Jan 2008

Experimental constraints on fourth generation quark masses

P. Q. Hung, Dept. of Physics, University of Virginia, 382 McCormick Road, P.O. Box 400714, Charlottesville, Virginia 22904-4714, USA
Marc Sher, Dept. of Physics, College of William and Mary, Williamsburg, Virginia 23187, USA
(Dated: February 2, 2008)

The existing bounds from CDF on the masses of the fourth generation quarks, t′ and b′, are reexamined. The bound of 256 GeV on the t′ mass assumes that the primary decay of the t′ is into q + W, which is not the case for a substantial region of parameter space. The bound of 268 GeV on the b′ mass assumes that the branching ratio for b′ → b + Z is very large, which is not only untrue for much of parameter space, but is never true for b′ masses above 255 GeV. In addition, it is assumed that the heavy quarks decay within the silicon vertex detector, and for small mixing angles this will not be the case. The experimental bounds, including all of these effects, are found as a function of the other heavy quark mass and the mixing angle.

The question of whether or not there exist quarks and leptons beyond the known three generations has generated a number of theoretical and experimental investigations [1, 2]. Although direct experimental constraints did not (and do not) rule out a heavy fourth generation, until recently electroweak precision data appeared to disfavour its existence. In addition, there is a strong prejudice against quarks and leptons beyond the third generation, which is usually paraphrased by the question: why is the fourth neutrino so much heavier (m_ν4 > M_Z/2) than the other three? Of course, one can very well imagine several scenarios in which this can "easily" be accomplished, since much is yet to be learned about neutrino masses. In the end, it will be the verdict of experiments which will be the determining factor.

There is still the question: why should one bother with a fourth generation? There might be several answers. First, it is possible that a heavy fourth generation might help in bringing the SU(3) ⊗ SU(2)_L ⊗ U(1)_Y couplings close to a unification point at a scale ∼ 10^16 GeV in the simplest non-supersymmetric Grand Unification model, SU(5) [3]. Second, its existence might have some interesting connections with the mass of the SM Higgs boson [4]. Last but not least, there is no theoretical reason that dictates the number of families to be three. We still have no satisfactory explanation for the mystery of family replication.

Recent reexaminations [4, 5] of electroweak precision data led to the conclusion that the possible existence of a fourth generation is not only allowed, but its mass range is correlated in an interesting way with that of the Higgs boson in the minimal Standard Model (SM). In [4], the masses of the fourth generation quarks (t′ and b′) [...]

FIG. 1: Bound on the t′ mass in the m_b′–sin²θ_bt′ plane. The shaded region corresponds to the CDF lower bound of 256 GeV.
The shaded region above and to the right of the curve with m_t′ = 256 GeV represents the CDF lower bound on m_t′. In this region, the CDF bound applies. Below the shaded region, the curves correspond to the new lower bound on m_t′ from CDF based on the requirement that the t′ → q + W decay is dominant. These curves all end to the left at sin²θ_bt′ ∼ 6×10⁻¹⁵. This corresponds to a decay length of approximately 1 cm. (Let us recall that present searches are sensitive to decays which occur very close to the beam pipe, out to a distance of about 1 cm.) To the left of those curves lies an unexplored window situated between sin²θ_bt′ ∼ 6×10⁻¹⁵ and sin²θ_bt′ ∼ 2×10⁻¹⁷, corresponding to a distance of roughly 1 cm out to 3 m. The far-left of the plot represents the constraint coming from the search [11] for a stable t′ (at distances greater than approximately 3 m).

FIG. 2: B(b′ → b + Z) as a function of M_b′ for various M_t′.

For the bounds on the b′ mass, CDF assumed that the branching ratio B(b′ → b + Z) was 100 percent. In Fig. 2, we plot the branching ratio B(b′ → b + Z) as a function of m_b′ for different values of m_t′. Here we use the results of [12], where the assumption sin θ_bt′ = −sin θ_tb′ = x was made, resulting in a GIM cancellation when m_t′ ∼ m_t. Furthermore, as in [12], we will assume that |sin θ_cb′| < x², so that the decay of b′ into "lighter" quarks will be mainly into the t quark. Note that this assumption may not be justified, and if it is false the branching ratio will be even lower, weakening the CDF bound even further. The decay into t is three-body for m_b′ < m_t + m_W and two-body otherwise. This is to be compared with b′ → b + Z. This analysis has been performed in [12] (Table I) and [1] (Fig. 14). The results are shown in Fig. 2. It can be seen that B(b′ → b + Z) < 100% for a wide range of b′ masses above 200 GeV. Note that the bound of 268 GeV on m_b′ assuming B(b′ → b + Z) = 100% does not hold. As m_b′ gets above 200 GeV, the decay mode b′ → t + W∗ begins to be comparable with the mode b′ → b + Z and starts to dominate for m_b′ ≥ 250 GeV [12]. In particular, for m_b′ > 255 GeV, the decay b′ → t + W will be into real particles, and thus this decay will always dominate. The CDF bound can thus never exceed 255 GeV.
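The correspondence quoted above between mixing angle and decay distance can be checked with a rough estimate. The sketch below uses the standard two-body width for t′ → q + W (the same form as the top-quark width) scaled by sin²θ and ignores the boost factor γβ of the produced quark, so it should reproduce the 6×10⁻¹⁵ and 2×10⁻¹⁷ figures only at the order-of-magnitude level.

```python
# Rough check of the mixing angle <-> decay distance correspondence for t'.
# Uses the standard two-body t' -> q + W width scaled by sin^2(theta);
# the boost gamma*beta is ignored, so expect only O(1) agreement.
import math

G_F   = 1.166e-5       # Fermi constant, GeV^-2
HBARC = 1.973e-14      # hbar*c in GeV*cm
M_W   = 80.4           # W mass, GeV

def ctau_cm(m_tprime, sin2_theta):
    """Proper decay length (cm) of t' -> q + W for mixing sin^2(theta)."""
    x = (M_W / m_tprime) ** 2
    width = (G_F * m_tprime**3 / (8.0 * math.sqrt(2.0) * math.pi)
             * sin2_theta * (1.0 - x) ** 2 * (1.0 + 2.0 * x))
    return HBARC / width

m = 256.0  # GeV, the CDF lower bound discussed in the text
for dist_cm in (1.0, 300.0):           # the 1 cm and 3 m boundaries
    sin2 = ctau_cm(m, 1.0) / dist_cm   # ctau scales as 1/sin^2(theta)
    print(f"L = {dist_cm:6.0f} cm  ->  sin^2(theta) ~ {sin2:.1e}")
# Prints roughly 4e-15 and 1e-17, consistent with the quoted window.
```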
lower bounds on the t ′and b ′masses under the re-quirement that t ′and b ′decay primarily into quarks of the first three generations as shown in Fig.1and Fig.3.For b ′,we found that the CDF lower bound on its mass can never exceed 255GeV,contrary to an earlier claim of 268GeV which had made use of the assumption B (b ′→b +Z )=100%and which is not correct when the b ′mass exceeds 200GeV .For t ′,bounds are shown,start-ing with the CDF bound 256GeV.Region (3)(greater than 3m)is bounded by searches for stable quarks.Re-gion (2)(between 1cm and 3m)is unexplored and cor-responds to a range of mixing angle sin 2θbt ′∼6×10−15and sin 2θbt ′∼2×10−17.Such a small mixing angle might seem unlikely,but it could arise very naturally in a 3+1scenario.For example,if one simply had a Z 2symmetry in which the fourth generation fields were odd and all other fields were even,then the mixing angle would vanish.However,discrete symmetries will gener-ally be broken by Planck mass effects,which can lead to sin 2θbt ′of M W /M P l ∼10−17.Thus,such a small mix-ing angle could be natural,and we urge our experimental colleagues to explore this region.If the fourth generation quarks are indeed found in this region,it would shed light on the question of family replication.AcknowledgmentsPQH is supported by the US Department of Energy under grant No.DE-A505-89ER40518.MS is supported by the NSF under grant No.PHY-0554854.We thank David Stuart for useful communications.[1]For an extensive discussion and a comprehensive listof references prior to 2000,see P.H.Frampton,P.Q.Hung and M.Sher,Phys.Rept.330,263(2000)[arXiv:hep-ph/9903387].[2]F.del Aguila,M.Perez-Victoria and J.Santi-ago,JHEP 0009,011(2000)[arXiv:hep-ph/0007316];J.E.Cieza Montalvo and M.D.Tonasse,Nucl.Phys.B 623,325(2002)[arXiv:hep-ph/0008196];S.Nie and M.Sher,Phys.Rev.D 63,053001(2001)[arXiv:hep-ph/0011077]; A.Arhrib and W.S.Hou,Phys.Rev.D 64,073016(2001)[arXiv:hep-ph/0012027];H.Ciftci and S.Sultansoy,Mod.Phys.Lett.A 18,859(2003)[arXiv:hep-ph/0107321];D.Choudhury,T.M.P.Tait and C.E.M.Wagner,Phys.Rev.D 65,053002(2002)[arXiv:hep-ph/0109097];J. A.Aguilar-Saavedra,Phys.Rev.D 67,035003(2003)[Erratum-ibid.D69,099901(2004)][arXiv:hep-ph/0210112];A.Arhrib and W.S.Hou,Eur.Phys.J.C27,555(2003)[arXiv:hep-ph/0211267];J. E.Cieza Mon-talvo and M. D.Tonasse,Phys.Rev.D67,075022 (2003)[arXiv:hep-ph/0302235]; D. E.Morrissey andC. E.M.Wagner,Phys.Rev.D69,053001(2004)[arXiv:hep-ph/0308001];P.Q.Hung,Int.J.Mod.Phys.A20,1276(2005)[arXiv:hep-ph/0406257];G.A.Kozlov,A.N.Sisakian,Z.I.Khubua,G.Arabidze,G.Khoriauliand T.Morii,J.Phys.G30,1201(2004);J.A.Aguilar-Saavedra,Phys.Lett.B625,234(2005)[Erratum-ibid.B633,792(2006)][arXiv:hep-ph/0506187];A.T.Alan,A.Senol and N.Karagoz,Phys.Lett.B639,266(2006)[arXiv:hep-ph/0511199];M.Y.Khlopov,Pisma Zh.Eksp.Teor.Fiz.83,3(2006)[JETP Lett.83,1 (2006)][arXiv:astro-ph/0511796];R.Mehdiyev,S.Sul-tansoy,G.Unel and M.Yilmaz,Eur.Phys.J.C49,613 (2007)[arXiv:hep-ex/0603005];J.A.Aguilar-Saavedra, PoS TOP2006,003(2006)[arXiv:hep-ph/0603199];A.Arhrib and W.S.Hou,JHEP0607,009(2006)[arXiv:hep-ph/0602035].[3]P.Q.Hung,Phys.Rev.Lett.80,3000(1998)[arXiv:hep-ph/9712338].[4]G.D.Kribs,T.Plehn,M.Spannowsky and T.M.P.Tait,arXiv:0706.3718[hep-ph].[5]H.J.He,N.Polonsky and S. f.Su,Phys.Rev.D64,053004(2001)[arXiv:hep-ph/0102144];M.Mal-toni,V. A.Novikov,L. B.Okun, A.N.Rozanov and M.I.Vysotsky,Phys.Lett.B476,107(2000) [arXiv:hep-ph/9911535];V. A.Novikov,L. 
[1] For an extensive discussion and a comprehensive list of references prior to 2000, see P. H. Frampton, P. Q. Hung and M. Sher, Phys. Rept. 330, 263 (2000) [arXiv:hep-ph/9903387].
[2] F. del Aguila, M. Perez-Victoria and J. Santiago, JHEP 0009, 011 (2000) [arXiv:hep-ph/0007316]; J. E. Cieza Montalvo and M. D. Tonasse, Nucl. Phys. B 623, 325 (2002) [arXiv:hep-ph/0008196]; S. Nie and M. Sher, Phys. Rev. D 63, 053001 (2001) [arXiv:hep-ph/0011077]; A. Arhrib and W. S. Hou, Phys. Rev. D 64, 073016 (2001) [arXiv:hep-ph/0012027]; H. Ciftci and S. Sultansoy, Mod. Phys. Lett. A 18, 859 (2003) [arXiv:hep-ph/0107321]; D. Choudhury, T. M. P. Tait and C. E. M. Wagner, Phys. Rev. D 65, 053002 (2002) [arXiv:hep-ph/0109097]; J. A. Aguilar-Saavedra, Phys. Rev. D 67, 035003 (2003) [Erratum-ibid. D 69, 099901 (2004)] [arXiv:hep-ph/0210112]; A. Arhrib and W. S. Hou, Eur. Phys. J. C 27, 555 (2003) [arXiv:hep-ph/0211267]; J. E. Cieza Montalvo and M. D. Tonasse, Phys. Rev. D 67, 075022 (2003) [arXiv:hep-ph/0302235]; D. E. Morrissey and C. E. M. Wagner, Phys. Rev. D 69, 053001 (2004) [arXiv:hep-ph/0308001]; P. Q. Hung, Int. J. Mod. Phys. A 20, 1276 (2005) [arXiv:hep-ph/0406257]; G. A. Kozlov, A. N. Sisakian, Z. I. Khubua, G. Arabidze, G. Khoriauli and T. Morii, J. Phys. G 30, 1201 (2004); J. A. Aguilar-Saavedra, Phys. Lett. B 625, 234 (2005) [Erratum-ibid. B 633, 792 (2006)] [arXiv:hep-ph/0506187]; A. T. Alan, A. Senol and N. Karagoz, Phys. Lett. B 639, 266 (2006) [arXiv:hep-ph/0511199]; M. Y. Khlopov, Pisma Zh. Eksp. Teor. Fiz. 83, 3 (2006) [JETP Lett. 83, 1 (2006)] [arXiv:astro-ph/0511796]; R. Mehdiyev, S. Sultansoy, G. Unel and M. Yilmaz, Eur. Phys. J. C 49, 613 (2007) [arXiv:hep-ex/0603005]; J. A. Aguilar-Saavedra, PoS TOP2006, 003 (2006) [arXiv:hep-ph/0603199]; A. Arhrib and W. S. Hou, JHEP 0607, 009 (2006) [arXiv:hep-ph/0602035].
[3] P. Q. Hung, Phys. Rev. Lett. 80, 3000 (1998) [arXiv:hep-ph/9712338].
[4] G. D. Kribs, T. Plehn, M. Spannowsky and T. M. P. Tait, arXiv:0706.3718 [hep-ph].
[5] H. J. He, N. Polonsky and S. f. Su, Phys. Rev. D 64, 053004 (2001) [arXiv:hep-ph/0102144]; M. Maltoni, V. A. Novikov, L. B. Okun, A. N. Rozanov and M. I. Vysotsky, Phys. Lett. B 476, 107 (2000) [arXiv:hep-ph/9911535]; V. A. Novikov, L. B. Okun, A. N. Rozanov and M. I. Vysotsky, Phys. Lett. B 529, 111 (2002) [arXiv:hep-ph/0111028]; V. A. Novikov, L. B. Okun, A. N. Rozanov and M. I. Vysotsky, JETP Lett. 76, 127 (2002) [Pisma Zh. Eksp. Teor. Fiz. 76, 158 (2002)] [arXiv:hep-ph/0203132].
[6] J. Conway et al. [CDF Collaboration], /physics/new/top/2005/ljets/tprime/gen6/public.html
[7] T. Aaltonen et al. [CDF Collaboration], Phys. Rev. D 76, 072006 (2007) [arXiv:0706.3264 [hep-ex]].
[8] In their paper, CDF actually referred to any particle which decays into b + Z, not just a sequential fourth generation quark. Thus, in some models their analysis would be relevant (for example, in some models with isosinglet quarks). However, we are pointing out that the bounds are substantially weakened if one assumes that the new particle is a sequential fourth generation quark. The example in their paper of a sequential fourth generation quark was used since the production cross section is well understood, unlike that of other models.
[9] W. S. Hou and G. G. Wong, Phys. Rev. D 49, 3643 (1994) [arXiv:hep-ph/9308312].
[10] I. I. Y. Bigi, Y. L. Dokshitzer, V. A. Khoze, J. H. Kuhn and P. M. Zerwas, Phys. Lett. B 181, 157 (1986).
[11] D. E. Acosta et al. [CDF Collaboration], Phys. Rev. Lett. 90, 131801 (2003) [arXiv:hep-ex/0211064].
[12] P. H. Frampton and P. Q. Hung, Phys. Rev. D 58, 057704 (1998) [arXiv:hep-ph/9711218].