1 TECHNIQUES AND METHODOLOGIES FOR MULTIMEDIA SYSTEMS DEVELOPMENT A SURVEY OF

合集下载

针刺方法中英文对照

针刺方法中英文对照
Evaluation of therapeutic effect of acupuncture and moxibustion on diarrhea In China, the efficacy evaluation of acupuncture and moxibustion in the treatment of diarrhea is mainly based on the improvement of patients' symptoms, the change of stool frequency and shape. In English speaking countries, the efficacy evaluation of acupuncture and moxibustion in the treatment of diarrhea is usually based on internationally accepted clinical efficacy evaluation criteria, including evaluation scale, quality of life questionnaire, etc.
04
By inserting needs into these points, acquisition can restore balance and harmony in the body, leading to improved health and well being
Principles of Acquisition
Needlework technique
Needlework technique
Different cupping methods

Methodology and Techniques I.6.6 [Simulation and Modeling]

Methodology and Techniques I.6.6 [Simulation and Modeling]

Comparing LIC and Spot Noise Wim de Leeuw Robert van Liere Center for Mathematics and Computer Science,CWIAbstractSpot noise and line integral convolution(LIC)are two texture syn-thesis techniques for vectorfield visualization.In this paper the two techniques are compared.Continuous directional convolution is used as a common basis for comparing the techniques.It is shown that the techniques are based on the same mathematical concept. Comparisons of the visual appearance of the output and perfor-mance of the algorithms are made.CR Categories and Subject Descriptors:I.3.3[Computer Graphics]:Picture/Image Generation;I.3.6[Computer Graphics]: Methodology and Techniques I.6.6[Simulation and Modeling]: Simulation Output Analysis.Additional Keywords:flow visualization,texture synthesis1INTRODUCTIONAmong the techniques used for the visualization of vectorfields, texture based methods are a recent development.By using texture, a continuous visualization of a two-dimensional vectorfield can be presented.Figure8shows a visualization of a slice from a direct numerical simulation using texture.The images show the turbulent flow around a block.These images clearly show the power of tex-ture as a medium for visualization.The visual effect of direction is achieved by line structures in the direction of the vectorfield.These lines are the result of higher coherency between neighboring pixels in thefield direction.Spot noise and line integral convolution(LIC) are two texture based techniques for vectorfield visualization that make use of this principle.Before texture was used for data visualization,many papers ap-peared dealing with the generation of textures[7,9,8,3].The purpose was artistic or for giving images a realistic appearance. Perlin[9]used directed convolution of random images as a texture synthesis technique to produce images offlames.Spot noise,introduced by van Wijk[13],was thefirst texture synthesis technique for the visualization of vector data.A spot noise texture is synthesized by distributing a large number of small intensity functions–called spots–over the domain of the data. Data is visualized by transforming the spot as a function of the un-derlying vectorfield.Furthermore the concept of texture animation was introduced for static vectorfields.Subsequent texture frames are generated by considering the spots as particles and advecting the spot positions in the vectorfield.Line integral convolution,introduced by Cabral and Leedom[1], uses a piece of a streamline as afilter kernel for the convolution of a random texture.Animation can be realized by cyclic shifting of thefilter kernel in subsequent frames.In later papers both LIC and spot noise have been improved and extended.Improvements include increased texture synthesis speeds,generalizations of grid types,usage of the techniques with time dependent vectorfields,and zooming in on details.In this paper we compare both techniques and some extensions. In Section2,spot noise and LIC are briefly described.We describe CWI,Department of Software Engineering,P.O.Box94097,1090GB Amsterdam,Netherlands.E-mail wimc robertl@cwi.nl the governing algorithms and some extensions.In Section3a com-mon basis is given.By using continuous directional convolution as a basis,it is shown that both techniques share a common underly-ing principle.We also discuss texture synthesis for time dependent vectorfields using this common basis.The techniques will be com-pared with respect to output texture and performance in Section4. The conclusions will be presented in Section5.2LIC AND SPOT NOISE2.1AlgorithmA LIC texture is generated by convolution of an input texture with a one-dimensionalfilter kernel.The shape of the kernel is determined by the shape of the streamline through the pixel.A pixel in thefinal texture is determined by the weighted sum of a number of pixels along a line in the input texture:(1)where is the set of pixels in the input texture used for convolution, is the value of the input texture pixel at grid cell andis the convolutionfilter.Figure1:Schematic representation of the LIC-algorithm.Figure1gives an overview of the main components of the al-gorithm.The inputs of the algorithm are a random texture and the vectorfield.Streamlines are calculated from the vectorfield and are used for convolution of the input texture.This results in the output texture.A spot noise texture is generated by blending together a large number of small intensity functions at random positions on a plane. The shape of the intensity functions is deformed in relation to the vectorfield.Spot noise is described by the following equation:(2)in which is called the spot function.It is an intensity function which has a non zero value only in the neighborhood of the origin.is a random scaling factor with a zero mean and is a random position.The deformation used in[13]was a rotation in the direc-tion of the vectorfield and scaled by a factor of in the velocity direction and perpendicular to thefield.Figure2shows a schematic representation of the algorithm.The vectorfield together with a set of random positions and intensities form the input of the algorithm.From these two inputs the positions and shapes of the spots can be determined.This results,after spot blending,in the output texture.Figure2:Schematic representation of the spot noise algorithm.2.2ExtensionsTo be a useful tool for the analysis of vectorfields,a number of extensions to spot noise and LIC have been proposed.Here we limit our discussion to six extensions:Non-uniform grids.The data may be defined on a grid with an irregular geometry or topology.Mapping the texture to an curved surface will introduce deformations.If the data has a regular topol-ogy the deformation problem can be addressed by transforming the data to aflat geometry.This can be a uniform grid as was described in[2]or a rectilinear grid[6].Using a rectilinear grid in which the size of the cells matches the cell sizes in the undeformed data results in better mapped textures because the scaling of the texture elements is more uniform.Performance.Performance is crucial for interactive visualiza-tion.Both techniques require substantial computations and perfor-mance of the algorithm is therefore important.LIC can be paral-lelized by partitioning the texture in a number of sub domains[15]. Each sub domain can then be processed in parallel.Furthermore, the generation of LIC texture can be accelerated by using the co-herence between successive pixels on a streamline.A substantial performance gain can be achieved by generating long streamlines and reusing them for a large number of pixels.Spot noise can be parallelized because the processing of one spot can be done inde-pendently from the other spots.Therefore,processing of all spots can be distributed over a number of processors.A second method to speed up spot noise is by utilizing graphics hardware[6].Spot rendering and blending can be mapped on functions for which hard-ware support is available.Both ways to speed up generation have been combined[5].Animation and time dependentflow.Although the information of a stationaryflow is available in a single texture,animation can provide important additional information.Animation is even more important when theflow simulation is time dependent.For sta-tionaryflow,a technique was presented where the kernel is shifted for subsequent frames.Animation of time dependentflow can be achieved by UFLIC,described in[10].Here the values in the tex-ture are deposited along path lines to generate subsequent textures. Using spot noise animation can be achieved by regarding the spots as particles and use advection equations to calculate new spot posi-tions for subsequent frames[13].More about the use of texture in time dependentflow can be found in Section3.4.Zooming.In high resolution simulationsflow features can vary three orders of magnitude in size.A single image can impossibly provide all information.Small details in the data can only be per-ceived if the user is able to magnify a part of thefield.In LIC, zooming can be achieved by using high resolution input textures and relating the output texture resolution to the scale at which an image is desired.In spot noise,zooming requires that the texture is regenerated using a smaller part of the data using smaller spots.3D.Visualization of3D vectorfields is possible with V olume-LIC introduced by Interrante and Grosch[4].They used the gen-eralization of LIC to3D in combination with volume rendering for the visualization of3Dflows.Because the presentation of dense volumes is very difficult,selection methods were used tofilter the input texture and therefor the area in which theflow is shown.Work on3D spot noise has not been reported.Flow direction.Both spot noise and LIC are ambivalent with respect to the direction of theflow.Wegenkittl et al.[14]present a modification of LIC algorithm in which sparse input textures are combined with an orientedfilter.In this way the ambivalence in the direction is addressed.3COMMON BASISIn the previous section,we have shown that spot noise and Line In-tegral Convolution textures are synthesized by considering a neigh-borhood of a pixel.In this section we will show that both techniques can be described in terms of convolution over a certain region.As an introduction we will use a simplified model to explain the idea. Then,a more formal treatment will be given by using continuous directional convolution.3.1SimplificationTo illustrate the commonality between the LIC and spot noise tech-niques,we start with two simplified variations of LIC and spot noise.For LIC,a straight line segment is used in the direction of the flow at the center of the calculated pixel.This is the DDA(Digital Differential Analyzer)convolution as described in Cabral and Lee-dom[1].(see equation1)is the set of pixels determined by the line segment when it is rasterized.If a constant convolution kernel is assumed,then a pixel value is calculated as:(3) For spot noise,scan converted lines of afixed length are used.The spots are placed at the center of each pixel in the direction of the flow.Now equation2can be rewritten as:(4) where is the value of the spot at position.These equations produce equivalent output texture becauseis equivalent to and is the same as.The intensity of a spot and the pixel value in the input texture are both uniformly distributed random values.That the set of input values and are the same can be seen in Figure3.On the left,the kernel and the pixels in the input texture are shown.These pixels(grey region)determine the output pixel(the small box)for DDA-convolution.The right image shows the location of spots that influence the pixel of interest(box).A spot influences the pixel of interest if it covers this pixel.Seven spots(the center is shown by black dots)define.Dotted lines indicate a spot’s extent(to avoid cluttering only two dotted lines are drawn).3.2Continuous directional convolutionIt is not possible to generate images using streamlines of a con-stant width and constant density.Due to convergence/divergenceFigure3:Pixels making up the kernel shape of DDA-Convolution and the location of spots influencing the texture value in simplified spot noise.the density of lines changes resulting in a higher/lower density. Turks and Banks[12]propose a method which results in an ap-proximation of constant density by calculating the local density of an initial random set of streamlines.This set is iteratively improved by adding or deleting stream lines based on the local density.They also suggest variation of the width of stream lines to get a uniform coverage.Random value based texture techniques also give an ap-proximation of this idea.In texture based techniques the pixels in the direction of theflow do not have equal intensity,but the impres-sion of lines is achieved.This is the result of a higher correlation of pixels in the direction of theflow,compared to perpendicular to the flow.Due to the slow variations which occur,no discontinuities are introduced.For a more detailed study of this idea,we in introduce continuous directional convolution.The input texture is a continuous function of position:and the convolution equation can be written as:(5) where is the two-dimensional kernel and is a two-dimensional deformation function.The function is a two-dimensional white noise signal.Since the kernel is usually non-zero only in afinite region,the integral need only to be evaluated in a re-gion around.The shape of thefilter depends on the desired effect.Possibili-ties are a line with or without a certain width or a more complicated two-dimensional shape(see Figure4).In terms of frequencies,theFigure4:Possiblefilter shapes for continuous directional convo-lution,one-dimensionalfilter,triangle swept along a streamline, swept rectangle,‘spot’,and low-passfilter.purpose of thefilter is to achieve an anisotropicfiltering of the in-put where the maximum frequency in thefield direction is lower as in the perpendicular direction.According tofiltering theory,the best result would be achieved using the an anisotropic low-passfil-ter(see Figure4lower right)with the main axis deformed using a streamline.In practice,however,the convolution must be car-ried out at afinite resolution.To suppress artifacts due to aliasing introduced by thefinite resolution of the texture the same type of anti-aliasing used for the rendering of lines could be used.It is also possible to encode information regarding the velocity magnitude.This can be done by a parameterizing thefilter kernel with the velocity magnitude.In spot noise,the velocity magnitude is encoded in the difference of the frequency in the direction of the flow and the frequency perpendicular to theflow.The same princi-ple could be incorporated in the continuous directional convolution by parameterizing the width of thefilter inversely proportional to the velocity magnitude.Spot noise and LIC are both approximations of continuous direc-tional convolution.The following observations can be made with respect to the approximation:In LIC,a scan converted curve is used as the kernel.The ker-nel has the shape of a connected set of pixels of a particular size.This leads to irregularfiltering in the direction perpen-dicular to thefield.In spot noise,there are only afinite number of spots.In terms of convolution,the input texture consists of afinite number of randomly placed impulses with a random energy.Since con-volution is a linear operator,the convolution integral(equa-tion5)over this input texture can be evaluated by summation of the separate responses to each impulse(equation2).3.3Conversion between the methodsContinuous directional convolution can be used to‘translate’con-cepts used by both techniques.Concepts in spot noise can be trans-lated to concepts in LIC and vise versa.For example,in spot noise, the magnitude of theflow is visualized using spot scaling.Because the spot shape performs the same role as the kernel shape in LIC, one might expect that similar results might be obtained in LIC by variation of the length of the kernel.Another example:in spot noise it is easy to generate animations in a stationaryflow by spot advec-tion.In LIC this could be realized by advection of the input tex-ture.Each pixel could be regarded as a particle which is advected for some time step.Alternatively,the phase shifting of the kernel technique proposed for LIC could be implemented in spot noise by using a spot shape with a shifting phase in different frames.These examples show that concepts used by the two techniques can be mapped onto each other.The table below lists a number of similar concepts for both techniques.spot noise LICrandom spot intensity random input texturespot function kernel shapespot scaling kernel length variationstandard spots DDA convolutionbent spots streamline convolutionspot advection texture advectionWe could generate spot noise textures using a variant of LIC and, vise versa,LIC textures using a variant of spot noise.The line integral convolution algorithm can be adapted to generate spot noise textures by using a two-dimensionalfilter domain.The shape of the domain is determined by all positions around thefiltered points which,if a spot would be placed there,would cover thefiltered point.The spot noise algorithm can be adapted to generate LIC textures.Spots would have the shape of a streamline and would be rendered at the center of each pixel in the texture.3.4Time dependent vectorfieldsAfirst application of the common basis is the study of texture an-imation of time dependent vectorfields.The challenge of texture animation is to maintain two types of coherence in the textures. To perceiveflow,two issues must be addressed.First,spatial co-herency(the lower frequency in thefield direction as described in the previous section)must be maintained.Second,temporal co-herency must be maintained.Temporal coherence is defined as the movement of patterns between texture frames.The impression of movement results when patterns are displaced for a small distance in subsequent frames.In a stationaryflow,temporal coherence is obtained by using the same principle as is used for spatial coherence.Between succes-sive frames texture values are advected along streamlines.Particle positions on a streamline are calculated by:(6)where is the velocity at position For time dependentflow, temporal coherence is obtained only if particle paths are used to advect the texture.A particle path is expressed as:(7)tt−1t+1flowparticle pathstreamlineFigure5:Stream lines or path lines used for coherence in texture For spatial coherence,streamlines should be used to determine thefiltering domain.On the other hand,to get the best possible temporal coherence particle paths should used.This is illustrated in Figure5.The three columns show different time steps of aflow field.Theflow is spatially uniform and the direction varies linear with time,as illustrated in the top row of th Figure.In rows two and three,two kernels are followed over time.Dotted lines indicate particle paths.Bold lines indicate the shape of the kernel.Note that in this particular case streamlines are straight line segments which is not true in general.Temporal coherence is maintained if particle paths are used, however,it compromises the spatial coherence of the texture.This is because particle paths,and therefore kernels,may intersect,(as is shown in thefirst column of the Figure)resulting in artifacts in the texture.The crossing of kernels introduces high frequency compo-nents in the direction of thefling streamlines compromises the temporal coherence of the texture,as the actual path of theflow may differ from the path suggested by the texture.Figure6compares textures generated using particle paths and streamlines for the kernel shape.The data used for this images is the rotating uniform vectorfield described in Figure5.The Figure shows that using particle paths results in high frequencies in all di-rections and thus compromises spatial coherency in thefield ing streamlines high frequencies occur only perpendicular to thefielddirection.Figure6:particle paths(left)and streamlines(right)used for tex-ture synthesis in a rotatingflow.There is no perfect solution for the coherency problem.How-ever,useful compromises were taken by UFLIC[10]in which the texture at a certain moment might not represent the current vector field perfectly.Spot noise uses streamlines for the spot shape com-promising temporal coherence.These partial solutions are useful as long as the user knows the limitations.4COMPARISONA comparison of the of both algorithms is difficult to realize be-cause the algorithms produce different outputs.It is not possible to adjust the parameters for the methods such that they produce equal textures.Furthermore,there is no metric to compare the informa-tion content of texture.Nevertheless,in this section we will present a metric for com-paring textures.We use this metric as an measure to compare the performance of the techniques.4.1Output texture comparisonBecause both methods do not produce the same texture a metric must be found to compare output textures.We define this metric by introducing the notion of pixel coverage.Pixel coverage is defined as the number of random values contributing to a pixel.Textures are defined to be equivalent if the pixel coverage for each pixel is the same.In the previous section we found that a spot and a kernel are comparable concepts.For a certain kernel a spot can be found such that the area covered is equal to the area of the pixels under a kernel in LIC(see Figure7).The average area covered be a kernel is thefilter length multiplied by the width of a single pixel.If we normalize the width and length of the complete texture to and measure the length of the kernel in pixels,then the area of a kernel is calculated byFigure7:Equal coverage of a pixel:the number of input values which influence a pixel is equal for LIC(left)and spot noise(right)(8)where is the resolution of the texture.A disc shaped spotwith a comparable surface has a radius of(9) Each pixel must be covered by spots therefore the number ofspots to be used is(10)In Figure8the same vectorfield is visualized using both tech-niques.The image on the left shows thefield using LIC while spotnoise is used for the right image.The data is a slice from a directnumerical simulation of a turbulentflow around a block and is de-fined on a rectilinear grid.The resolution of the data is.Theflow is from the bottom to the top of the image.The visual-ization shows the vortex shedding in the wake behind the block.The resolution of both textures is.For the LIC texture akernel length of20pixels was ing Equations9and10weobtained values giving an equal pixel coverage for the spot noiseimage.Figure8:LIC(left)and spot noise(right)images of turbulentflowaround a block.Because the large majority of grid cells in the data is smallerthan a texel the amount of detail which could be visible is limitedby the texture resolution.For further investigation we used a detailbehind the block.This is shown in In Figure9.In the lower part ofthe images the block is visible.The data resolution of the sectionshown ing textures the smallest grid cells in the data are about texels in size.In the top left image thefilter length used for LIC is20pixels while in the bottom left imageafilter length of40used.The spot noise images have equal pixelcoverage using parameters calculated by Equations9and10.From this side by side comparison a number of differences canbe noted.In the LIC image rotation centers are visualized moreaccurately.For example:the LIC images clearly show the two dis-tinct rotation centers in the rotation area in the top left of thefield.The velocity in the rotation centers is relatively low and thereforthe spot noise textures become almost isotropic.In addition to dis-playing velocity direction information,the spot noise image showsthe magnitude of thefield;e.g.the upper right corner of theimage Figure9:LIC(left)and spot noise(right)images with equal pixel coverage using afilter length of20and a spot radius of0.005.(top) and using afilter length of40and a spot radius of0.007.(bottom) shows a region of higher velocity.Unfortunately,this extra infor-mation decreases resolution of the directional information.The fre-quency range needed to encode the velocity magnitude reduces the frequency at which the directional information is displayed.4.2Performance comparisonThe performance can be compared in several ways.First we will do some order estimation of the performance of the algorithms.Sec-ond,we will look at extensions to the algorithms for increased per-formance.Finally,we will present some measurements.In unaccelerated LIC the time needed to generate a texture in-creases linear with the number of pixels in the output texture and linear with the length of the kernel.For spot noise the time needed increases linear with the number of the spots and linear with the area of the spots.If the comparable textures are generated,such as described in the previous section,it is easy to see that the order of generation time of the algorithms is equal.However,the basic operations which have to be carried out are different.For spot noise scan conversion operations are needed and for LIC convolution and stream line integration operations are needed.The previous analysis is valid for the original algorithms.Sev-eral ways have been proposed to speed up the algorithms.In the al-gorithm for LIC proposed by Stalling and Hege[11]the algorithm consists of two steps.In thefirst step stream lines are calculated and in the second step the convolution is carried out by successive processing of pixels along streamlines where results are reused.In this way,the complexity of the algorithm becomes independent of thefilter length.For spot noise,graphics hardware can be utilized to speed up scan conversion an blending of the spots[6].Although this does not change the order of complexity,substantial gains can be achieved in the generation time.Parallelization is another way to speed up the algorithms.LIC can be parallelized by dividing the texture in tiles[15].Parallelization of spot noise is possible by distributing the spots over the processors.The combination of hard-ware acceleration and parallelization for spot noise is possible if the available processor power is matched by the speed of the graphics hardware[5].As a test case to compare the speed the algorithms we used the detail of the DNS described in the previous section(see Figure9). The tests were run on a SGI Indigo workstation equipped with a 250MHz R4400processor and a High Impact graphics board.For LIC the original implementation as described in[1]was used.For spot noise an implementation taking advantage of graphics hard-ware was used.The results for this unfair comparison were93.5 seconds for LIC and6.7seconds for spot noise.For the LIC image afilter length of20was used,the spot noise image was generated with a spot radius of0.005.For the images with afilter length of40 and a spot radius of0.007the times were182.3and6.6respectively.We get a better comparison if use the results of accelerated LIC with the times presented for spot noise.The timing results pre-sented in[11]indicate that4.6seconds are needed for the gener-ation a similar LIC texture on slightly slower hardware using the acceleration techniques described in the paper.This would suggest that LIC is slightly faster than spot noise.In this comparison we used textures with equal pixel coverage. However,a number of parameters in spot noise allow trading qual-ity for speed.Figure10shows spot noise images in which less spots were pared to the spot noise textures in Figure9, 20percent of the number of spots were used in the left image,and 5percent in the right image.The times needed to generate these images were1.3and0.35seconds.Figure10:Trading quality for speed in spot noise.Texture using 50000spots(left)and12500spots(right)5CONCLUSIONIn this paper LIC and spot noise were compared.Both techniques use texture synthesis for vectorfield visualization.Directional in-formation in textures is encoded by coherence between neighboring pixels.Due to differences between the techniques and the concepts used to discribe them,a direct comparison would be very difficult. By using continuous directional convolution as a model,the simi-larity in the underlying mathematical basis becomes clear,and sim-ilar concepts in the techniques can be found.The diffences in information presented by both techniques are due to the fact that LIC does not encode velocity magintude.There-fore,the spatial resolution for presenting directional information is higher than for a comparable spot noise texture.If the acceleration scemes proposed for the techniques are taken into account the dif-ferences in performance of LIC is slightly better than of spot noise. Spot noise is moreflexible with respect to trading texture quality for generation speed.Continuous Directional Convolution is used to show that for tex-ture animation of time dependentflow,it is not possible to fully satisfy the requirements of spatial coherence in the texture and tem-poral coherence in frames.We believe that continuous directional convolution can serve as a basis for future study and improvements of texture synthesis techniques forflow visualization.AcknowledgmentsThanks to Arthur Veldman and Roel Verstappen University of Groningen for using their data.We are grateful to the reviewers who gave valuable ideas for improvements of the paper.This work is partially funded by the Dutch foundation High Performance Com-puting and Networking(High Performance Visualization project). References[1] B.Cabral and L.Leedom.Imaging Vector Fields Using LineIntegral Convolution.In SIGGRAPH93Conference Proceed-ings,Annual Conference Series,pages263–272.ACM SIG-GRAPH,August1993.[2]L.K.Forssell and ing Line Integral Convo-lution for Flow Visualization:Curvilinear Grids,Variable-speed Animation,and Unsteady Flows.IEEE Transactions on Visualization and Computer Graphics,1(2):133–141,1995.[3]P.Haeberli.Painting by Numbers:Abstract Image Represen-tations.In Computer Graphics(SIGGRAPH90Conference Proceedings),volume24,pages207–214,July1990.[4]Victoria Interrante and Chester Grosch.Strategies for Effec-tively Visualizing3D Flow with V olume LIC.In R.Yagel andH.Hagen,editors,Proceedings of Visualization’97,pages421–424,Los Alamitos(CA),1997.IEEE Computer Society Press.[5]W.C.de Leeuw and R.van Liere.Divide and Con-quer Spot Noise.In Proceedings Super Computing’97 (/sc97/program/TECH/DELEEUW/INDEX.HTM),1997.[6]W.C.de Leeuw and J.J.van Wijk.Enhanced Spot Noise forVector Field Visualization.In G.M.Nielson and D.Silver, editors,Proceedings Visualization’95,pages233–239,Los Alamitos(CA),1995.IEEE Computer Society Press.[7]J-P Lewis.Texture Synthesis for Digital Painting.In Com-puter Graphics(SIGGRAPH84Conference Proceedings), volume18,pages245–251,July1984.[8] D.R.Peachey.Solid Texturing of Complex -puter Graphics(SIGGRAPH85Conference Proceedings), 19(3):279–286,July1985.[9]K.Perlin.An Image Synthesizer.In Computer Graphics(SIGGRAPH85Conference Proceedings),volume19,pages 287–296,July1985.[10]H.-W.Shen and D.L.Kao.UFLIC:A Line Integral Convolu-tion Algorithm for Visualizing Unsteady Flows.In R.Yagel and H.Hagen,editors,Proceedings Visualization’97,pages 317–322,Los Alamitos(CA),1997.IEEE Computer Society Press.[11] D.Stalling and H.C.Hege.Fast and Resolution IndependentLine Integral Convolution.In SIGGRAPH95Conference Pro-ceedings,Annual Conference Series,pages249–256,August 1995.[12]G.Turk and D.Banks.Image-Guided Streamline Placement.In SIGGRAPH96Conference Proceedings,pages453–460, July1996.。

Mathematical Modelling and Numerical Analysis Will be set by the publisher Modelisation Mat

Mathematical Modelling and Numerical Analysis Will be set by the publisher Modelisation Mat
& luskin@
c EDP Sciences, SMAI 1999
2

PAVEL BEL K AND MITCHELL LUSKIN
In general, the analysis of stability is more di cult for transformations with N = 4 such as the tetragonal to monoclinic transformations studied in this paper and N = 6 since the additional wells give the crystal more freedom to deform without the cost of additional energy. In fact, we show here that there are special lattice constants for which the simply laminated microstructure for the tetragonal to monoclinic transformation is not stable. The stability theory can also be used to analyze laminates with varying volume fraction 24 and conforming and nonconforming nite element approximations 25, 27 . We also note that the stability theory was used to analyze the microstructure in ferromagnetic crystals 29 . Related results on the numerical analysis of nonconvex variational problems can be found, for example, in 7 12,14 16,18,19,22,26,30 33 . We give an analysis in this paper of the stability of a laminated microstructure with in nitesimal length scale that oscillates between two compatible variants. We show that for any other deformation satisfying the same boundary conditions as the laminate, we can bound the pertubation of the volume fractions of the variants by the pertubation of the bulk energy. This implies that the volume fractions of the variants for a deformation are close to the volume fractions of the laminate if the bulk energy of the deformation is close to the bulk energy of the laminate. This concept of stability can be applied directly to obtain results on the convergence of nite element approximations and guarantees that any nite element solution with su ciently small bulk energy gives reliable approximations of the stable quantities such as volume fraction. In Section 2, we describe the geometrically nonlinear theory of martensite. We refer the reader to 2,3 and to the introductory article 28 for a more detailed discussion of the geometrically nonlinear theory of martensite. We review the results given in 34, 35 on the transformation strains and possible interfaces for tetragonal to monoclinic transformations corresponding to the shearing of the square and rectangular faces, and we then give the transformation strain and possible interfaces corresponding to the shearing of the plane orthogonal to a diagonal in the square base. In Section 3, we give the main results of this paper which give bounds on the volume fraction of the crystal in which the deformation gradient is in energy wells that are not used in the laminate. These estimates are used in Section 4 to establish a series of error bounds in terms of the elastic energy of deformations for the L2 approximation of the directional derivative of the limiting macroscopic deformation in any direction tangential to the parallel layers of the laminate, for the L2 approximation of the limiting macroscopic deformation, for the approximation of volume fractions of the participating martensitic variants, and for the approximation of nonlinear integrals of deformation gradients. Finally, in Section 5 we give an application of the stability theory to the nite element approximation of the simply laminated microstructure.

Machine translation theoretical and methodological issues

Machine translation theoretical and methodological issues

Smart Networking Requests: Supporting Application Development Relying on Network Connectivity forWearable ComputersFriedel van Megen1, Andreas Timm-Giel2, Ivo Salmre1, Carmelita Görg21 European Microsoft Innovation Centre, Aachen, Germany, fmegen@2 University of Bremen, tzi ikom, Bremen, Germany, atg@tzi.deAbstract. In mobile and wearable computing the availability of communication networks changes overtime and location in the same way as the requirements on communication from the application point ofview change. Recent mobile and wearable devices support different networking technologies likeUMTS, WLAN, and Bluetooth simultaneously and can use different networking protocols, such asinfrastructure, ad hoc and MobileIP to extend coverage and support mobility. However, the complexityof managing these network functions increases dramatically, making it more and more difficult forapplication developers to control. Therefore within this paper a novel approach is presented providing atool to the application developer, which allows to attach the business logic to Smart NetworkingRequests, ensuring the communication when a suitable network connectivity has been set up andpossibly further requirements are met, such as a certain context (e.g. location) of the wearablecomputer. The approach presented is based on work achieved in the EC funded WearIT@Work project,which is an Integrated Project of the 6th IST Framework Programme. Within the WearIT@Work projectthe so-called Communication Service Module has been developed as part of the middleware, allowingto control the selection and configuration of the network connectivity by profiles only. ThisCommunication Service Module has now been integrated into the Software Framework allowing theabove addressed attachment of business and networking logic. A first prototype has been implementedproving the feasibility of the approach. Within this paper the main focus is on the requirements andconceptual design of the solution.IntroductionDuring the last two decades, mobile devices have become nearly ubiquitous, a trend that continues in the advent of wearable computing. As a result of this development, it is increasingly important to extend services traditionally targeted at desktop computers to seamlessly interoperate with mobile and wearable devices in order to improve customer satisfaction and service availability. In addition to existing applications being extended for mobile use, an entirely new class of services provides support specifically targeted at nomadic users. Among other obstacles, dealing with intermittent network connectivity proves to be a hard to solve problem for these applications.One problem is handling network interruptions at the connection level. When changing location with a mobile device different communication technology, networks and services become available and the most suitable communication network and service for a specific application can be different over time and location. At some locations the desired quality-of-service might not be available at all or only for an unacceptably high price. Truly mobile devices, like wearable computers, therefore have to be able to cope with changing and intermittent network connectivity. Latest mobile devices are capable of using different network technology, such as UMTS, GPRS, WLAN, and Bluetooth. New network protocols are being researched e.g. providing roaming across heterogeneous networks while maintaining addressability (e.g. MobileIP), or allowing infrastructureless, direct (ad hoc) communications.This means with the mobile device moving, the QoS experienced by the application changes over time and location and the connection might even be interrupted from time to time. This is markedly different from desktop or laptop connectivity as users don’t typically walk around with their laptop open and running. Laptop users typically stay connected with the same network service throughout an application session.In a wearable device that supports more than one mechanism of communication applications will want to perform different kinds of tasks on different kinds of networks; for example, for any given physical location a range of different networks may offer different speed, cost, and security and reliability characteristics. Different network protocols, e.g. ad hoc or infrastructure can be most suitable for the application at different locations and or different times.However, configuration of the different network drivers and protocols requires in depth communication networking knowledge.This paper present a novel programming model allowing programmers to decorate their networking code with attributes describing the network characteristics necessary to start requests. These attributes are used to queue the requests for concurrent execution, to execute them efficiently when the supplied attributes match on a physical connection, and to give applications simple notifications of both successes and failures that they can connect to program logic. The authors believe such a programming model will enable application developers to design software that better matches the requirements that intermittent network connectivity imposes.The authors identified three areas in which current wearable applications would benefit from support of a framework that were specifically tailored for these devices. These are in the network configuration, the execution of networking related code and the binding of results to custom business logic.Concepts for Improved Handling of Networking ConnectivityIn the past, several solutions were introduced trying to hide connection problems from peer nodes. One approach attempts to handle network interruptions transparently to the peer node by application level proxies [NFS, SMB]. Another class of solutions proposes protocols with built-in support for transparent reconnections [SCTP] where link failures are handled at the protocol level instead of being forwarded to the application [MOBILEIP].Although appealing, these solutions have drawbacks that make them unsuitable for many wearable device needs. Both, application level proxies and hiding link failures from the application layer result in unexpected delays that confuse the user (especially if GUI threads experience unexpected delays) since delays are likely to occur and their duration is less predictable compared to a fixed node.Configuring the NetworkTraditional explicit configuring of communications requires explicit knowledge of the communication network, technology, services, and protocols used. The application developer needs this knowledge and therefore the information on the hardware and network configuration needs to be available at the application. From the application developer’s point of view each link, each networking capability and protocol needs to be explicitly configured and controlled. If new communication networking capabilities are later added to the system, e.g. IEEE 802.16 (WiMax) [WIMAX], [802.16] or UWB (Ultra Wide Band), each application that wants to take advantage of the new connectivity has to be adapted.For this reasons it is reasonable to decouple the configuration of the networking capabilities from the application and define a simple and common interface between application and communication functions. Therefore, in the framework of the EC funded WearIT@Work project [WEARIT], [SUMMIT] all functions for configuring and controlling the communication have been realized in the communication service module (CSM) [WWRF], which can be configured and controlled by the application via a simple, standardized interface.The application only provides a set of requirements it expects the link to meet. These requirements are collected in a profile that is attached to an application initiating the session. The application can update the profile, stop the session and obtain the network status. Therefore the interface is defined by four commands only:•start_communication (process_ID, profileID)•update_communication (sessionID, profileID)•stop_communication (sessionID)•get_network_status (sessionID)The communication service module collects all requirements (either given implicitly by selecting a predefined profile) from all applications and aggregates them. It selects automatically the most suitable network connection and protocol out of all available. Possibly several connections over different networks are active concurrently. If there are changes in the available networks during the session, e.g. one link breaks or another becomes available, the selection is repeated and in case the requirements of a session cannot be matched any more it is notified about the change.While this concept helps legacy applications and traditional applications being ported quickly to the new environment, newly written applications can take even more advantage from an abstraction between business logic and network subsystem. Utilizing the communication service module, the authors suggest that application developers attach their network requirements declaratively to the code that accesses the network instead of having this being part of that code. This allows a framework like the one being developed within the WearIT@Work project, to optimize the available resources while supporting a configuration that is dependent of the application state rather than bound to the process as a whole.There are two points in the lifetime of a network request that can be significantly improved with a communication service module and declarative specification of network attributes: First, before the actual networking code is executed, the application has to monitor the available links to meet some set of application dependent criteria before the code may run. This paper introduces criteria allowing to declaratively specify conditions to be met before the user supplied networking code is executed. The set of available criteria is extensible and related criteria can be grouped by the application writer.Fig. 1. Example of a criteria setSince criteria are extendable, new and application specific criteria can be implemented by applicationdevelopers that are able to seamlessly integrate with the set of already existing criteria.Executing networking related codeNetwork related code can be executed in two operating models. The first model follows a synchronous design pattern, i.e., any operation that involves remote communication waits for the communication tocomplete before it continues with its execution. It does this even in cases where communication results are not subsequently used. In the following discussions, this mode is referenced as synchronous programming, where an application programmer treats calls that require networking connectivity like any other call to a library, basically ignoring the additional problems arising from the unreliability of the underlying transport mechanism.The second model follows an event driven approach. This model decouples starting the communication from handing the result of the operation. While this improves performance for communications where data is only pushed to a peer node, it introduces additional programming complexity. Subsequently, this model is referenced as asynchronous programming. This approach introduces additional complexity to the programmer by decoupling the start of an operation from its completion handling code. For example, thread issues arise when updating the Graphical User Interface (GUI) from a thread that is not the dedicated GUI thread; and there is still the need to know the underlying link type in the completion routine to react appropriate on network and link error conditions.A programming framework would serve several purposes. It eases the asynchronous programming model by combining networking configuration options, user code, and hooks for command completion, thus allowing the application developer to attach business logic to the outcome of a networking operation. Additionally, the framework allows selecting the set of required criteria on a per request basis, thus, building a rich environment allowing to dynamically attach criteria to the network by the business logic developer rather than by the developer who implemented the networking code. This concept enables three different kinds of applications to be supported:1.Legacy Applications can be accompanied by a specific User Interface. The user can manually select a predefined profile and select the application for which to set the profile. The settings are reset either by explicitly terminating the profile or closing the application.2.Applications can be specifically written or legacy applications can be adapted to be able to use the communication service module interfaces directly. An application would use the provided connection service modules interfaces, i.e. start a communication session by selecting a profile, update it if required and stop it at the end. By getting the network status information it is also possible to adapt the application behavior to the network status, e.g. if no suitable network is available save the data and transfer it later or switching automatically from video to voice communication.3.The third option uses a software framework as described in the next section and associates the code to be executed with specific link criteria. The framework allows the application developer to specify the required networking conditions on a level suitable for his code.The application developer may either chose to specify a generic profile as exposed by the communications module directly or may choose from the exposed network properties directly, e.g., available bandwidth or latency. The framework will in both cases call the communications subsystem to configure the network such that the code provided by the application developer either runs with the requested settings or the request will not be executed.Binding results to Business logicThe term business logic describes that part of an application’s code that implements its core logic. For example, an account management application’s core logic is to support the user with his financial decisions while only a small part of the application is network related to retrieve data from the bank server.As described in the previous chapter, access to networking functionality can be done either via synchronous access or by an event driven approach. The same is true for business logic that relies (directly or indirectly) on data that is only available on a remote node.With synchronous programming, an application programmer treats operations that require networking connectivity like any other operation, basically ignoring the additional problems arising from the unreliability of the underlying transport mechanism. An application would follow one of the next two patterns when accessing the network subsystem.Fig. 2. Network code interweaved business logicThe code starts by configuring the network to prepare connections and environment such that the following code should execute smoothly. This includes (but is not limited) to resolving peer names, bringing up links, and the validation of network related criteria (like available bandwidth). The application programmer might do this with the help of the Communications Service Module, selecting one of the predefined profiles that matches the requirements best.The second step is to use the previously configured network subsystem to perform application related tasks like downloading data for local consumption or to share results with a remote host.Asynchronous programming, i.e., event driven code uses a different approach, avoiding the problems depicted for the synchronous programming case. An application developer would follow the following pattern when accessing the network.Fig. 4. Event Driven Programming approachThe code follows the same four basic steps identified for the synchronous programming case. First, the application configures the network by calling into the CSM. But instead of waiting for the configuration call to complete, it continues with other work.The CSM will notify the application by raising an event when the configuration operation has finished, either successful or indicating a failure. On successful completion, the application then triggers a network operation, again without waiting for the operation to complete. An event is raised when the network call finally completed. The application then handles any error returned by the operation and uses the results for its business logic [SYMBIAN, TINYOS].While this results in more responsive application behavior, this introduces concurrency into the design, complicating application development. The different events denote possibly different threads handling the actions. This requires careful synchronization between threads to avoid updating data concurrently by two threads. Updates to GUI elements in general need special attention since current GUI libraries expect calls to a GUI element taking place only from a designated thread and, thus, updates from concurrent threads must be forwarded to the GUI thread [WINIDLE, SWINGIDLE].Shifting code utilizing the network to background threads instead of using the described approach eases the interaction with the networking subsystem but still introduces additional complexity. The programmer still has to deal with the low level details of the supported network and has to adapt the application for any new network type instead of being able to just describe the required connection properties and letting a framework select the appropriate link.The goal of the intermittent networking framework is to make network connectivity explicit, i.e., visible to the programmer as this helps separating the networking code from the business program logic.Smart Networking SolutionThis chapter describes part a framework the authors have developed that integrates the concepts described above through Smart Networking Requests having wearable computing constraints in mind. The architecture is based on the following ideas: The code performing network operations is identified and divided into independent blocks organized in a set of methods that have to be called in an application defined order. For each method, a set of criteria is identified that is needed before the code may execute. These criteria include (but are not limited to) properties of the current network connectivity and the successful or failed completion of other methods. For each independent block of code, a Smart Networking Request is instantiated having the corresponding criteria set being attached to. Finally, these requests are queued and executed when their criteria are matched.Smart Networking RequestsA Smart Networking Request connects three parties; criteria, networking code, and business logic.Fig. 5. Sample Smart Networking RequestCriteriaCriteria define a set of properties that must be matched before the code provided by the application developer is executed. In the wearable environment criteria feature two novel properties.First, criteria are groupable and extendable so that an application writer can define and implement new criteria matching the needs of a specific domain. Custom criteria can reference other information, not directly bounded to the networking layer, to specify the exact prerequisites for the attached code to execute successfully. It is up to the criteria implementer to map these criteria down to the specific implementation, like the available data from the networking subsystem in the case of network related criteria. Criteria are groupable on both, the logical level and implementation level.On the logical level, application writers are able to create a generic container and add any available criteria to it. This is used for criteria sets that are shared between requests.On the implementation level, criteria may be grouped using specialized containers. For example, a set of link layer criteria allows selecting a link based on available bandwidth and link technology in a device that features a WiFi and a GPRS connection. Ungrouped, there is no way to specify that the application requires a link that has an available bandwidth of at least 1MBit but is not a WiFi link. A generic container would not ensure that the requested bandwidth belongs to the same link that is not WiFi. Thus, the application may erroneously select the GPRS connection although it does not supply the requested bandwidth.For this, the framework provides a specific container for link layer related criteria and assures that it does report a criteria-match condition only if all containing criteria report a match on the same interface.Fig. 6. Initially available CriteriaThis set of criteria is mapped internally either to a predefined profile exposed by the communication service module, or – if no such profile is available – results in the dynamically creation of a new profile comprising the requested criteria set.Second, criteria feature a “latchable” property that is used to reduce the number of unnecessary evaluations and as such helps to save resources like battery, link bandwidth, and computing power. The idea is that under normal conditions, a single criterion is able to not only distinguish between “matched” and “not matched”, but additionally identifies if the outcome is permanent. For example, an application may add expensive (in terms of bandwidth) criteria to a request ensuring that the device is able to connect to a specific service. Without latching, the criteria would re-evaluate the condition on each call, thus wasting resources. Instead, the criteria latches a permanent result (e.g., the service is malfunctioning) until either a reset of the criteria is performed or a change in network availability is detected. For this information (latching) is exposed, the framework can decide when to force a re-evaluation by resetting the latch. The framework resets the criteria set if necessary and checks periodically until a match is reported.Networking CodeNetworking code is attached to a network request via a callback mechanism. This allows an application writer to add any code, probably private to the library performing the networking request. It is up to the request implementer to register a result for a request with the framework. Unexpected failures are caught by the framework and will cause the request to fail.After the networking code has completed, the session is updated so that the communication service module no longer needs to satisfy the requirements that were set by the request.Business CodeFinally, business logic that deals with the result of the request is attached to the request via a delegate. Special care has to be taken with code updating the user interface. For common GUI programming models allow only a designated thread to update GUI elements, the application writer has to indicate to the Smart Networking Request that some code is going to update the GUI if the application needs to call into the graphics libraries. The framework then assures that the delegate is called from the GUI thread. Otherwise, a delegate is called – for performance reasons – concurrently to the main thread.Request Queue ManagerThe request queue manager component controls the execution of all Smart Networking Request that were added to the request queue. Requests can be added to the queue or removed from the queue if it is no longer requested to run the attached code. An example would be two requests running the same user code but having different criteria. The first request is valid for 1 hour (waiting for the user to reach the home network) while the other request is valid in 1 hour (requiring any network access at all). The first request being executed removes the other from the queue, thus, ensuring that the code is only executed once.The application can register for both, successfully completed and failed requests. This allows custom business logic to monitor the queue while at the same time not require the programmer to add a delegate to any request (when added to the queue).Designing requestsThe authors think that enabling a casual programmer to easily create/modify asynchronous working packages is a key to take advantage of heterogeneous networks.In the described case of intermittent network connectivity that exists for wearable computing, the GUI metaphor is the key to simplify the efficient design of such requests. GUI controls like buttons and labels share a common programming model having two key elements: (1) Properties are used to control the appearance and behavior of a control. (2) Events are used by the control to relay notifications back to user code. An application normally does not have to deal with internals of the control unless required for specific tasks. Making these concepts visually accessible through code designers allows even inexperienced programmers to write complex GUIs in relatively short time, which is why the authors think that it is crucial to implement designer support of Criteria and Requests.Like a GUI control, a Smart Networking Request consists of a set of properties, and events fired by the control whenever the associated condition is met. However, unlike controls, application specific code in form of network access code is attachable to a request as well.With current designer support, an application writer starts with dragging a Request-Manager and a first Smart Networking Request to a form. The manager component is required only once per form/application while for each independent piece of code a separate request has to be defined.In a next step, the programmer visually creates a set of criteria, populates them with application specific properties, and attaches them to the request. Grouping related criteria is supported and allows testing for complex conditions.The application writer may add any code to the request that accesses the network while the framework and underlying communication service module assures that a link has been set up that matches the expected link criteria, if link layer criteria were part of the criteria set.Finally, the application writer adds business logic based on the success or failure event generated after completion of the request.Communication Network LibrarySince the communications service module is a system wide service, there will be a library that is local to the application that builds the connection from either the framework or direct API calls by user code and the service component.By wrapping the Communication Service Module into a library of the Framework (Communication Network Library), Smart Networking Requests can access the functionality and utilise it to execute the networking code requested. This would then be on one hand to configure the network, service and communication protocol and on the other hand to obtain the status of the communication network.As discussed in the chapter describing the concepts for improved handling of networking connectivity, the Communication Service Module (CSM) utilises the concept of profiles in addition to a custom set of network related criteria to simplify the configuration of the communication technology for the application developer or end user directly. The same approach is followed here. The wrapper class of the library translates the requirements coming with the Smart Networking Request either to a predefined profile or creates a new profile based on the required data. In any case, such a profile contains all the QoS, security and cost and power constraints and differs between mandatory and preferred values for each parameter. The CSM aggregates the requirements from all active profiles from possibly different applications. It compares the aggregated requirements with the list of available services and networks and activates the most suitable. “Most suitable” can mean to setup or maintain one or several simultaneous network connections. Via the network status command feed back on the actual connection is returned.A Smart Networking Request utilizes the communications network library the following way: Once the request is registered for execution with the request manager, its criteria set is computed and that part related to the networking subsystem is built and mapped to a profile. A request to satisfy the requirements of the profile is set up and sent to the communication service module. If successful, the code attached to the Smart Networking Request can be executed, if the communication set up has not been successful, the code is not executed and the Smart Networking Request would try again later or eventually fail, depending on the other criteria associated with the request (like timing constraints).。

Handbook of Numerical Analysis. Volume V. Techniques of Scientific Computing. (Part 2).

Handbook of Numerical Analysis. Volume V. Techniques of Scientific Computing. (Part 2).

By P. G. Ciarlet and J. L. Lions. Elsevier, Inc., oretically oriented. The authors analyze the basic Amsterdam, 1997. 818 pp., hardcover. ISBN 0-444- spectral methods on model problems (Dirichlet and 82278-X. Neumann problems the Laplace and bilaplacian operators) for tensorized domains (intervals, squares, and This book contains ve solicited articles by experts cubes). This allows them to carry on a very deliin techniques of scienti c computing. These articles cate analysis which is not readily available in other are written mainly for researchers in related elds, sources. The usage of the tensorized bases of polytherefore, some preliminary knowledge is required. nomials is intended to take advantage of the properties of one-dimensional operators when possible, and, Numerical Path Following by E. L. Allgower and K. hence, to reduce the computational cost by some fast Georg is an extended and updated version of the au- algorithms. thors 1993 survey article 2]. However, the contents of Collocation method is the main emphasis and the the present article are substantially increased. This Galerkin method is discussed for the theoretical con200-page article provides more than 400 references venient. In some cases, Galerkin methods with indating to 1994. More detailed material up to 1990 can tegrals replaced by appropriate quadrature formube found in the authors' earlier book 1] where 900 las coincide with collocation methods. Two topics references are listed. Also, for numerical solutions of which are not considered in this article are the Tau multivariate polynomial systems by homotopy con- method and the Fourier series. The Legendre spectinuation methods, see the recent survey article by tral methods are extensively analyzed. The analysis is carried out on Sobolev spaces with a Jacobi type Li 5]. In the literature, \numerical path following" and weight which includes both Legendre and Chebyshev \numerical continuation method" are used inter- approximations in a uni ed framework. However, a changeably. Numerical continuation methods are number of new and important results are only stated techniques for numerically approximating a solution or proven for the Legendre techniques. More than a curve C which is implicitly de ned by an underdeter- quarter of the contents are devoted to spectral discretizations of the Navier-Stokes equations. The apmined system of equations. This article gives an introduction to the main ideas plication to hyperbolic systems of conservation laws of numerical path following and presents some ad- is brie y discussed. vances in the subject regarding adaptations, appli- Although the spectral method is closely related to cations, analysis of e ciency, and complexity. Both the p-version nite element method, only a few works theoretical and implementing aspects of the predic- on the p-version are quoted. About one quarter of 125 tor corrector path following methods are thoroughly references are the authors' own works. The newest discussed. Piecewise linear methods are also studied. reference is dated 1994. For the literature before At the end, a list for some available software related 1987, the reader is referred to 3] where more than 500 references are listed; additionally, more than 200 to path following is provided. supplemental references from 1986 to 1991 can be Spectral Methods by C. Bernardi and Y. Maday is found in the second and third printings of 3]. the longest article in Volume V (270 pages) which The article, which provides a solid theoretical founis based on the authors' 1992 book in French with dation for spectral methods by rigorous mathematia slight extension. Although the main advantage of cal arguments, is well organized and clearly written. spectral methods is in computational uid dynamics, For the readers' convenience, the authors even prothis article concerns mainly elliptic problems where vide a List of Symbols with the page numbers at the the theory is more complete and appealing. Com- end. The article is surely an important addition to pared with an early book by Canuto, Hussaini, Quar- the literature.

物理化学特有的研究方法

物理化学特有的研究方法

物理化学特有的研究方法Physics chemistry is a branch of science that combines the principles of physics and chemistry to study the physical properties of matter and the energy changes that occur during chemical reactions. 物理化学是一门将物理学和化学原理相结合的科学分支,用来研究物质的物理特性以及化学反应过程中所发生的能量变化。

One of the unique research methods in physical chemistry is spectroscopy, which involves the study of the interaction between matter and electromagnetic radiation. This method allows scientists to analyze the structure and composition of various substances by measuring the way they absorb or emit light at different wavelengths. 物理化学中独特的研究方法之一是光谱学,这涉及了物质与电磁辐射之间的相互作用。

这种方法使科学家能够通过测量物质在不同波长下的吸收或发射光线的方式来分析各种物质的结构和组成。

Another important method in physical chemistry is computational chemistry, which uses computer algorithms to simulate and predict the behavior of molecules and materials at the atomic and molecular level. This allows researchers to understand the underlying principlesof chemical reactions and the properties of new materials without the need for costly and time-consuming experiments. 物理化学另一个重要的方法是计算化学,它使用计算机算法来模拟和预测分子和材料在原子和分子水平上的行为。

2017年11月高考真题浙江卷英语试卷-学生用卷

2017年11月高考真题浙江卷英语试卷-学生用卷

2017年11月高考真题浙江卷英语试卷-学生用卷一、阅读理解(共10小题,每小题2.5分,共25分)1、【来源】 2017年11月高考真题浙江卷(A篇)第21~24题10分When I was in fourth grade, I worked part-time as a paperboy. Mrs. Stanley was one of my customers. She'd watch me coming down her street, and by the time I'd biked up to her doorstep, there'd be a cold drink waiting. I'd sit and drink while she talked.Mrs. Stanley talked mostly about her dead husband, "Mr. Stanley and I went shopping this morning." she'd say. The first time she said that, soda (汽水) went up my nose.I told my father how Mrs. Stanley talked as if Mr. Stanley were still alive. Dad said she was probably lonely, and that I ought to sit and listen and nod my head and smile, and maybe she'd TAL#NBSP work it out of her system. So that's what I did, and it turned out Dad was right. After a while she seemed content to leave her husband over at the cemetery (墓地) .I finally quit delivering newspapers and didn't see Mrs. Stanley for several years. Then we crossed paths at a church fund-raiser (募捐活动) . She was spooning mashed potatoes and looking happy. Four years before, she'd had to offer her paperboy a drink to have someone to talk with. Now she had friends. Her husband was gone, but life went on.I live in the city now, and my paperboy is a lady named Edna with three kids. She asks me how I'm doing. When I don't say "fine, " she sticks around to hear my problems. She's lived in the city most of her life, but she knows about community. Community isn't so much a place as it is a state of mind. You find it whenever people ask how you're doing because they care, and not because they're getting paid to do so. Sometimes it's good to just smile, nod your head and listen.(1) Why did soda go up the author's nose one time?A. He was talking fast.B. He was shocked.C. He was in a hurry.D. He was absent-minded.(2) Why did the author sit and listen to Mrs. Stanley according to Paragraph 3?A. He enjoyed the drink.B. He wanted to be helpful.C. He took the chance to rest.D. He tried to please his dad.(3) Which of the following can replace the underlined phrase "work it out of her system"?A. recover from her sadnessB. move out of the neighborhoodC. turn to her old friendsD. speak out about her past(4) What does the author think people in a community should do?A. Open up to others.B. Depend on each other.C. Pay for others' help.D. Care about one another.2、【来源】 2017年11月高考真题浙江卷(B篇)第25~27题7.5分It's surprising how much simple movements of the body can affect the way we think. Using expansive gestures with open arms makes us feel more powerful, crossing your arms makes you more determined and lying down can bring more insights(领悟).So if moving the body can have these effects, what about the clothes we wear? We're all well aware of how dressing up in different ways can make us feel more attractive, sporty or professional, depending on the clothes we wear, but can the clothes actually change cognitive (认知的) performance or is it just a feeling?Adam and Galinsky tested the effect of simply wearing a white lab coat on people's powers of attention. The idea is that white coats are associated with scientists, who are in turn thought to have close attention to detail.What they found was that people wearing white coats performed better than those who weren't. Indeed, they made only half as many errors as those wearing their own clothes on the Stroop Test(one way of measuring attention). The researchers call the effect "enclothed cognition" suggesting that all manner of different clothes probably affect our cognition in many different ways.This opens the way for all sorts of clothes-based experiments. Is the writer who wears a fedora more creative? Is the psychologist wearing little round glasses and smoking a cigar more insightful? Does a chef's hat make the resultant food taste better?From now on I will only be editing articles for PsyBlog while wearing a white coat to help keep the typing error count low. Hopefully you will be doing your part by reading PsyBlog in a cap and gown(学位服).(1) What is the main idea of the text?A. Body movements change the way people think.B. How people dress has an influence on their feelings.C. What people wear can affect their cognitive performance.D. People doing different jobs should wear different clothes.(2) Adam and Galinsky's experiment tested the effect of clothes on theirwearers'.A. insightsB. movementsC. attentionD. appearance(3) How does the author sound in the last paragraph?A. AcademicB. HumorousC. FormalD. Hopeful3、【来源】 2017年11月高考真题浙江卷(C篇)第28~30题7.5分There are energy savings to be made from all recyclable materials, sometimes huge savings. Recycling plastics and aluminum, for instance, uses only 5% to 10% as much energy as producing new plastic or smelting (提炼)aluminum.Long before most of us even noticed what we now call "the environment, " Buckminster Fuller said, "Pollution is nothing but the resources(资源)we are not harvesting. We allow them to be left around because we've been ignorant of their value." To take one example, let's compare the throwaway economy (经济)with a recycling economy as we feed a cat for life.Say your cat weighs 5kg and eats one can of food each day. Each empty can of its food weighs 40g. In a throwaway economy, you would throw away 5, 475 cans over the cat's 15-year lifetime. That's 219kg of steel—more than a fifth of a ton and more than 40 times the cat's weight.In a recycling economy, we would make one set of 100 cans to start with, then replace them over and over again with recycled cans. Since almost 3% of the metal is lost during reprocessing, we'd have to make an extra 10 cans each year. But in all, only 150 cans will be used up over the cat's lifetime—and we'll still have 100 left over for the next cat.Instead of using up 219kg of steel, we've used only 6kg. And because the process of recycling steel is less polluting than making new steel, we've also achieved the following significant savings: in energy use—47% to 74%; in air pollution—85%; in water pollution—35%; in water use—40%.(1) What does Buckminster Fuller say about pollution?A. It is becoming more seriousB. It destroys the environmentC. It benefits the economyD. It is the resources yet to be used(2) How many cans will be used up in a cat's 15-year lifetime in a recycling economy?A. 50.B. 100.C. 150.D. 250.(3) What is the author's purpose in writing the text?A. To promote the idea of recycling.B. To introduce an environmentalist.C. To discuss the causes of pollution.D. To defend the throwaway economy.二、七选五(共5小题,每小题2分,共10分)4、【来源】 2017年11月高考真题浙江卷第31~35题10分(每题2分)2017~2018学年12月湖北荆州公安县车胤中学高三上学期月考第36~40题10分(每题2分)How to Remember What You Read Reading is important. But the next step is making sure that you remember what you'veread!1You may have just read the text, but the ideas concepts and images (形象) may fly right out of your head. Here are a few tricks for remembering what you read.2If the plot, characters, or word usage is confusing for you, you likely won't be able to remember what you read. It's a bit like reading a foreign language. If you don't understand what you're reading, how would you remember it? But there are a few things you can do… Use a dictionary: look up the difficult words.●Are you connected?Does a character remind you of friend? Doesn't the setting make you want to visit the place? Does the book inspire you, and make you want to read more? With some books, you may feel a connection right away.3How willing are you to make the connections happen?●Read it; hear it; be it!Read the lines. Then, speak them out loud. And, put some character into words. When he was writing his novels, Charles Dickens would act out the parts of the characters. He'd make faces in the mirror, and change his voice for each character.4●How often do you read?If you read frequently, you'll likely have an easier time with remembering what you're reading (and what you've read).5As you make reading a regular part of your life, you'll make more connections, stay more focused and understand the text better. You'll learn to enjoy literature—as you remember what you read!A. Are you confused?B. Practice makes perfect.C. What's your motivation?D. Memory is sometimes a tricky thing.E. Marking helps you remember what you read.F. But other books require a bit more work on your part.G. You can do the same thing when you are reading the text!三、完形填空(共20小题,每小题1.5分,共30分)5、【来源】 2017年11月高考真题浙江卷第36~55题30分2020~2021学年10月浙江嘉兴海宁市上海外国语大学附属浙江宏达学校高二上学期月考第36~55题30分2018~2019学年山东淄博张店区淄博实验中学高一上学期期中第41~60题30分2017~2018学年四川成都郫都区郫都区成都外国语学校高一下学期期末第21~40题30分A young English teacher saved the lives of 30 students when he took1of a bus after its driver suffered a serious heart attack. Guy Harvold, 24,had2the students and three course leaders from Gatwick airport, and they were travelling to Bournemouth to3their host families. They were goingto4 a course at the ABC Language School in Bournemouth where Harvold works as a5.Harvold, who has not6his driving test, said, "I realized the bus was out of control when I was7the students." The bus ran into trees at the side of the road and he8the driver was slumped(倒伏) over the wheel. The driver didn't9. He was unconscious. The bus10 a lamp post and it broke the glass on the front door before Harvold11to bring the bus to a stop. Police12the young teacher's quick thinking. If hehadn't13quickly, there could have been aterrible14.The bus driver never regained consciousness and died at Easy Surrey Hospital. He had worked regularly with the15and was very well regarded by the teachers and students. Harvold said, I was16that no one else was hurt, but I hoped that the driver would17.The head of the language school told the local newspaper that the school is going to send Harvold on a weekend18to Dublin with a friend, thanking him forhis19. A local driving school has also offered himsix20driving lessons.A. controlB. careC. advantageD. noteA. taken inB. picked upC. tracked downD. helped outA. greetB. thankC. inviteD. meetA. presentB. introduceC. takeD. organizeA. driverB. doctorC. librarianD. teacherA. givenB. markedC. passedD. conductedA. speaking toB. waiting forC. returning toD. looking forA. learnedB. noticedC. mentionedD. doubtedA. sleepB. cryC. moveD. recoverA. ran overB. went byC. carriedD. hitA. rememberedB. continuedC. preparedD. managedA. witnessedB. recordedC. praisedD. understoodA. appearedB. reactedC. escapedD. interruptedA. delayB. accidentC. mistakeD. experienceA. airportB. hospitalC. schoolD. policeA. happyB. fortunateC. touchedD. sorryA. surviveB. retireC. relaxD. succeedA. projectB. tripC. dinnerD. dutyA. braveryB. skillC. qualityD. knowledgeA. necessaryB. easyC. differentD. free四、语法填空(共10小题,每小题1.5分,共15分)6、【来源】 2017年11月高考真题浙江卷第56~65题15分2020~2021学年广东广州荔湾区广东实验中学高一上学期期末第66~75题15分(每题1.5分) 2017~2018学年广东广州越秀区高一上学期期末广大附、铁一等三校联考第66~75题15分2018~2019学年4月湖南长沙开福区长沙市第一中学高三下学期月考第61~70题15分2020~2021学年广东广州荔湾区广东实验中学高一上学期期末第66~75题15分(每题1.5分)阅读下面短文,按照句子结构的语法性和上下文连贯的要求,在空格处填入一个适当的词或使用括号中词语的正确形式填空。

一种科学方法英语作文

一种科学方法英语作文

一种科学方法英语作文A scientific method refers to a set of systematic and logical techniques used to gather, analyze, and interpret data in order to answer scientifically important questions. This method involves a series of steps that include asking questions, conducting background research, formulating hypotheses, designing and carrying out an experiment, collecting and analyzing data, and drawing conclusions.The first step in the scientific method is to identify a problem or question to be answered. Once the problem has been identified, background research is conducted to gather information on the topic. This information is used to formulate a hypothesis, which is an educated guess about what will happen in the experiment.Next, an experiment is designed to test the hypothesis. This involves specifying the variables that will be changed and tested, as well as the procedures that will be followed to collect data. The experiment is then carried out, with careful attention paid to consistent procedures and measurements.Once the data has been collected, it is analyzed statistically to determine if there is a significant difference between the groups being tested. Ifthere is a significant difference, the hypothesis is supported and conclusions are drawn. If there is no significant difference, the hypothesis is rejected.Finally, the findings are communicated to the scientific community through publications, presentations, and discussions. Other scientists can then replicate the experiment and test the findings to determine their validity.The scientific method is essential for producing reliable and valid data in scientific research. It allows for rigorous testing of hypotheses and ensures that conclusions are based on empirical evidence rather than opinions or biases. By following the scientific method, scientists can make important discoveries and contribute to our understanding of the world around us.。

毕业论文英语翻译

毕业论文英语翻译

Title: English Translation ofGraduation ThesisAbstract:1. IntroductionThe introduction section provides a background to thetopic of English translation of graduation theses. Itoutlines the purpose of the study, the research questions,and the significance of the topic. [Please note: This section is a placeholder for the introduction and should be filled in the author.]1.1 BackgroundGraduation theses represent the culmination of astudent's academic journey, often showcasing their research skills and knowledge in a specific field. With the increasing globalization of education, there is a growing need for these theses to be translated into English to reach a wider audience. This section will discuss the reasons behind this need and the challenges associated with the translation process.1.2 Purpose of the Study1.3 Research QuestionsWhat are the main challenges faced translators when translating graduation theses?What are the most effective methodologies and tools for ensuring the quality of translated graduation theses?1.4 Significance of the StudyThe findings of this study will contribute to the understanding of the translation process in the academic context. It will provide insights for translators, researchers, and educators on how to improve the quality and accessibility of translated graduation theses.2. Literature ReviewThis section reviews existing literature on the topic of English translation of graduation theses. It includes studies on translation theory, the challenges of translating academic texts, and the role of cultural adaptation in translation. [Please note: This section is a placeholder for theliterature review and should be filled in the author.]3. MethodologyThis section describes the research methodology employed in the study. It includes the data collection process, the participants, and the tools used for analysis. [Please note: This section is a placeholder for the methodology and should be filled in the author.]3.1 Data CollectionDescription of the data collection methods used (e.g., surveys, interviews, case studies)Information on the participants involved in the study3.2 Data AnalysisExplanation of the analytical techniques used (e.g., qualitative analysis, quantitative analysis)Description of the tools and software employed for data analysis4. Results5. DiscussionThe discussion section interprets the results in the context of the existing literature and theories. It explores the implications of the findings for translators, researchers, and educators, and suggests areas for future research. [Please note: This section is a placeholder for thediscussion and should be filled in the author.]6. ConclusionThe conclusion summarizes the key findings of the study and reiterates the importance of accurate and culturally sensitive translation of graduation theses. It alsohighlights the limitations of the study and suggestsdirections for future research. [Please note: This section is a placeholder for the conclusion and should be filled in the author.]ReferencesThis section lists all the sources cited in the paper, formatted according to the relevant citation style. [Please note: This section is a placeholder for the references and should be filled in the author.][Please note: The above content is a template and should be filled in with specific details relevant to the author's research.]Title: English Translation of Graduation ThesisAbstract:1. Introduction1.1 Background1.2 Purpose of the StudyThe purpose of this study is to gain a deeper understanding of the English translation of graduation theses addressing the following objectives:To identify the linguistic and cultural challenges encountered translators during the translation process.To assess the effectiveness of various translation strategies in addressing these challenges.1.3 Research QuestionsWhat are the primary linguistic and cultural challenges faced translators when translating graduation theses?How do different translation strategies impact the quality and accuracy of translated theses?What are the most effective tools and techniques for ensuring the consistency and quality of translated graduation theses?2. Literature ReviewThe literature review section synthesizes existing research on the topic of English translation of graduation theses. It discusses the theoretical frameworks underpinning translation studies, the role of cultural adaptation in translation, and the impact of translation technology on the quality of translations. The review also highlights the limitations of previous studies and identifies gaps in the existing literature that this study aims to address.3. Methodology3.1 Data CollectionThe data collection process involved the following steps: Selection of a diverse sample of graduation theses from various disciplines.Translation of the selected theses into English professional translators.Collection of feedback from both the translators and the original authors regarding the quality of the translations.3.2 Data AnalysisThe data analysis process involved the following steps: Quantitative analysis of the translated theses to evaluate the consistency and accuracy of the translations.Comparative analysis of the translations against the original theses to identify areas of improvement.4. ResultsThe results section presents the findings of the study, focusing on the challenges faced translators, the effectiveness of translation strategies, and the impact of translation technology on the quality of translations.4.1 Linguistic ChallengesThe study identified several linguistic challenges encountered translators, including:The use of specialized terminology in the source language.The presence of cultural references and idiomatic expressions.4.2 Cultural ChallengesCultural challenges were also identified, such as:The need for cultural adaptation to ensure that the translated text resonates with the target audience.The potential for misunderstandings due to differences in cultural norms and values.4.3 Translation StrategiesThe study evaluated various translation strategies employed translators to address the challenges identified, including:The use of glossaries and terminology databases.The adoption of cultural adaptation techniques.The application of advanced translation technology, such as machine translation and postediting tools.4.4 Impact of Translation TechnologyThe study found that the adoption of advanced translation technology had a positive impact on the quality and consistency of translations. However, it also highlighted the need for human oversight to ensure the accuracy and appropriateness of the translations.5. DiscussionThe discussion section interprets the results in the context of the existing literature and theories. It explores the implications of the findings for translators, researchers, and educators, and suggests areas for future research.5.1 Implications for Translators5.1 Implications for Translators5.2 Implications for ResearchersFor researchers, the study highlights the significance of clear and concise writing, as well as the use of universally understood terminology. It also emphasizes the need forthorough review and editing of translated theses to ensure accuracy and to avoid potential misinterpretations.5.3 Implications for Educators5.4 Limitations and Future ResearchThe study acknowledges several limitations, including the small sample size and the potential for bias in the data collection process. Future research could address these limitations expanding the sample size and employing more rigorous data collection methods. Additional areas for future research include the impact of translation quality on the perceived credibility of the research and the development of standardized evaluation metrics for translated academic texts.6. ConclusionIn conclusion, this study has provided valuable insights into the challenges and strategies involved in the English translation of graduation theses. The findings demonstratethe need for a holistic approach to translation thatconsiders linguistic, cultural, and technical factors. By addressing these challenges, translators can contribute tothe global dissemination of academic knowledge and facilitate international collaboration in research.References[Please note: The following is a placeholder for the references section and should be filled in with actual citations according to the relevant citation style.]1. Baker, M. (1992). In other words: A coursebook on translation. London: Routledge.2. Hatim, B., & Mason, I. (1990). Translation as encounter: A narrative account. London: Pinter.3. Newmark, P. (1988). A textbook of translation. London: Prentice Hall International.4. SnellHorn, M. (1993). Translation Studies: An Advanced Resource Book. London: Routledge.5. Toury, G. (1995). Descriptive Translation Studies and Beyond. Philadelphia: John Benjamins Publishing Company.References2. Hatim, B., & Mason, I. (1990). Translation as encounter: A narrative account. London: Pinter. This book offers a unique perspective on translation as a dynamic encounter between cultures, emphasizing the importance of cultural sensitivity and the translator's role in mediating between source and target languages.3. Newmark, P. (1988). A textbook of translation. London: Prentice Hall International. Newmark's textbook is a classic in the field of translation studies, providing a clear andconcise overview of the key concepts and techniques involvedin translating both literary and nonliterary texts.4. SnellHorn, M. (1993). Translation Studies: An Advanced Resource Book. London: Routledge. This resource book is an invaluable tool for advanced students and professionals in translation studies, offering a wide range of articles and case studies that explore various aspects of translationtheory and practice.5. Toury, G. (1995). Descriptive Translation Studies and Beyond. Philadelphia: John Benjamins Publishing Company.Toury's book delves into the theoretical foundations of descriptive translation studies, examining the relationship between translation norms and the cultural context in which translations are produced.6. Albrecht, J. (2006). Translating Academic Texts: A Practical Guide. Manchester: St. Jerome Publishing.Albrecht's guide is a practical resource for translators working on academic texts, offering practical advice and examples to help navigate the specific challenges oftranslating this genre.7. Jirousek, P. (2004). The Translation of Academic Texts. Manchester: St. Jerome Publishing. Jirousek's book provides a detailed analysis of the translation of academic texts,focusing on the linguistic and cultural factors that influence the translation process.8. Koller, W., & Jäger, T. (Eds.). (2003). Specialized Translation: A European Perspective. Manchester: St. Jerome Publishing. This collection of essays offers a European perspective on specialized translation, exploring the challenges and strategies involved in translating texts across various disciplines.10. Muñoz, J. (2003). The Translation of Academi c Texts. Manchester: St. Jerome Publishing. Muñoz's book focuses on the translation of academic texts, offering practical advice and insights into the specific challenges faced translators in this domain.11. Neubert, A., & Shlesinger, M. (Eds.). (1998). Translation as Text. Manchester: St. Jerome Publishing. This collection of essays examines the relationship between translation and textuality, exploring the ways in which translation can influence the reception and understanding ofa text.13. Toury, G. (1995). Descriptive Translation Studies and Beyond. Philadelphia: John Benjamins Publishing Company. This book is a foundational text in descriptive translation studies, providing a detailed analysis of the relationshipbetween translation norms and the cultural context in which translations are produced.16. Jirousek, P. (2006). The Translator's Invisibility: A Practical Guide to Translating Academic Texts. Manchester: St. Jerome Publishing. Jirousek's guide offers practical adviceon how to achieve transparency in translation, ensuring that the translated text is as clear and accessible as theoriginal.17. Koller, W., & Jäger, T. (2003). Specialized Translation: A European Perspective. Manchester: St. Jerome Publishing. This collection of essays provides a European perspective on specialized translation, covering a variety of disciplines and exploring the challenges faced translatorsin these areas.18. Lederer, E. (2002). Translation Studies: An Advanced Reader. Manchester: St. Jerome Publishing. Lederer's reader brings together key texts from the field of translation studies, offering a critical analysis of the discipline's major theories and debates.20. Newmark, P. (2001). About Translation. Manchester: St. Jerome Publishing. Newmark's book is a collection of essays that reflect on his extensive experience as a translator and translation theorist. It offers valuable insights into the nature of translation and the translator's craft.21. Olohan, M. (2005). Translation and Cognition:Focusing on the Reader. Manchester: St. Jerome Publishing. Olohan's work explores the cognitive processes involved in translation, with a particular focus on the reader's role in understanding and interpreting translated texts.22. Pym, A. (2004). Translation and Cognition: The Cognitive Approach Explained. Manchester: St. Jerome Publishing. Pym's book provides a clear and accessible introduction to the cognitive approach in translation studies, explaining how cognitive theory can inform our understandingof the translation process.23. Reiss, K., & Vermeer, H. J. (1984). Grundzüge einer Theorie der Übersetzung. Berlin: Erich Schmidt. This foundational text Koller and Vermeer introduces the Skopos Theory, which emphasizes the purpose or goal of translationas central to the translation process.24. Schäffner, C. (2006). Specialized Translation: A Discourse Approach. Manchester: St. Jerome Publishing.Schäffner's book offers a discourse approach to specialized translation, analyzing how language use is shaped specific social contexts and purposes.25. SnellHorn, M. (2006). Translation Studies: An Advanced Resource Book. London: Routledge. This updated resource book provides an extensive overview of translationstudies, including new developments and case studies that reflect the evolving nature of the field.。

粘弹性功能梯度有限元法使用对应原理

粘弹性功能梯度有限元法使用对应原理

Viscoelastic Functionally Graded Finite-Element MethodUsing Correspondence PrincipleEshan V.Dave,S.M.ASCE1;Glaucio H.Paulino,Ph.D.,M.ASCE2;andWilliam G.Buttlar,Ph.D.,P.E.,A.M.ASCE3Abstract:Capability to effectively discretize a problem domain makes thefinite-element method an attractive simulation technique for modeling complicated boundary value problems such as asphalt concrete pavements with material non-homogeneities.Specialized “graded elements”have been shown to provide an efficient and accurate tool for the simulation of functionally graded materials.Most of the previous research on numerical simulation of functionally graded materials has been limited to elastic material behavior.Thus,the current work focuses onfinite-element analysis of functionally graded viscoelastic materials.The analysis is performed using the elastic-viscoelastic correspondence principle,and viscoelastic material gradation is accounted for within the elements by means of the generalized iso-parametric formulation.This paper emphasizes viscoelastic behavior of asphalt concrete pavements and several examples, ranging from verification problems tofield scale applications,are presented to demonstrate the features of the present approach.DOI:10.1061/͑ASCE͒MT.1943-5533.0000006CE Database subject headings:Viscoelasticity;Asphalt pavements;Concrete pavements;Finite-element method.Author keywords:Viscoelasticity;Functionally graded materials;Asphalt pavements;Finite-element method;Correspondence principle.IntroductionFunctionally Graded Materials͑FGMs͒are characterized by spa-tially varied microstructures created by nonuniform distributionsof the reinforcement phase with different properties,sizes andshapes,as well as,by interchanging the role of reinforcement andmatrix materials in a continuous manner͑Suresh and Mortensen1998͒.They are usually engineered to produce property gradientsaimed at optimizing structural response under different types ofloading conditions͑thermal,mechanical,electrical,optical,etc.͒͑Cavalcante et al.2007͒.These property gradients are produced in several ways,for example by gradual variation of the content ofone phase͑ceramic͒relative to another͑metallic͒,as used in ther-mal barrier coatings,or by using a sufficiently large number ofconstituent phases with different properties͑Miyamoto et al.1999͒.Designer viscoelastic FGMs͑VFGMs͒can be tailored tomeet design requirements such as viscoelastic columns subjectedto axial and thermal loads͑Hilton2005͒.Recently,Muliana ͑2009͒presented a micromechanical model for thermoviscoelastic behavior of FGMs.Apart from engineered or tailored FGMs,several civil engi-neering materials naturally exhibit graded material properties. Silva et al.͑2006͒have studied and simulated bamboo,which is a naturally occurring graded material.Apart from natural occur-rence,a variety of materials and structures exhibit nonhomoge-neous material distribution and constitutive property gradations as an outcome of manufacturing or construction practices,aging, different amount of exposure to deteriorating agents,etc.Asphalt concrete pavements are one such example,whereby aging and temperature variation yield continuously graded nonhomogeneous constitutive properties.The aging and temperature induced prop-erty gradients have been well documented by several researchers in thefield of asphalt pavements͑Garrick1995;Mirza and Witc-zak1996;Apeagyei2006;Chiasson et al.2008͒.The current state-of-the-art in viscoelastic simulation of asphalt pavements is limited to either ignoring non-homogeneous property gradients ͑Kim and Buttlar2002;Saad et al.2006;Baek and Al-Qadi2006; Dave et al.2007͒or considering them through a layered ap-proach,for instance,the model used in the American Association of State Highway and Transportation Officials͑AASHTO͒Mechanistic Empirical Pavement Design Guide͑MEPDG͒͑ARA Inc.,EC.2002͒.Significant loss of accuracy from the use of the layered approach for elastic analysis of asphalt pavements has been demonstrated͑Buttlar et al.2006͒.Extensive research has been carried out to efficiently and ac-curately simulate functionally graded materials.For example, Cavalcante et al.͑2007͒,Zhang and Paulino͑2007͒,Arciniega and Reddy͑2007͒,and Song and Paulino͑2006͒have all reported onfinite-element simulations of FGMs.However,most of the previous research has been limited to elastic material behavior.A variety of civil engineering materials such as polymers,asphalt concrete,Portland cement concrete,etc.,exhibit significant rate and history effects.Accurate simulation of these types of materi-als necessitates the use of viscoelastic constitutive models.1Postdoctoral Research Associate,Dept.of Civil and EnvironmentalEngineering,Univ.of Illinois at Urbana-Champaign,Urbana,IL61801͑corresponding author͒.2Donald Biggar Willett Professor of Engineering,Dept.of Civil andEnvironmental Engineering,Univ.of Illinois at Urbana-Champaign,Ur-bana,IL61801.3Professor and Narbey Khachaturian Faculty Scholar,Dept.of Civiland Environmental Engineering,Univ.of Illinois at Urbana-Champaign,Urbana,IL61801.Note.This manuscript was submitted on April17,2009;approved onOctober15,2009;published online on February5,2010.Discussion pe-riod open until June1,2011;separate discussions must be submitted forindividual papers.This paper is part of the Journal of Materials in CivilEngineering,V ol.23,No.1,January1,2011.©ASCE,ISSN0899-1561/2011/1-39–48/$25.00.JOURNAL OF MATERIALS IN CIVIL ENGINEERING©ASCE/JANUARY2011/39The current work presents afinite element͑FE͒formulation tailored for analysis of viscoelastic FGMs and in particular,as-phalt concrete.Paulino and Jin͑2001͒have explored the elastic-viscoelastic correspondence principle͑CP͒in the context of FGMs.The CP-based formulation has been used in the current study in conjunction with the generalized iso-parametric formu-lation͑GIF͒by Kim and Paulino͑2002͒.This paper presents the details of thefinite-element formulation,verification,and an as-phalt pavement simulation example.Apart from simulation of as-phalt pavements,the present approach could also be used for analysis of other engineering systems that exhibit graded vis-coelastic behavior.Examples of such systems include metals and metal composites at high temperatures͑Billotte et al.2006;Koric and Thomas2008͒;polymeric and plastic based systems that un-dergo oxidative and/or ultraviolet hardening͑Hollaender et al. 1995;Hale et al.1997͒and gradedfiber reinforced cement and concrete structures.Other application areas for the graded vis-coelastic analysis include accurate simulation of the interfaces between viscoelastic materials such as the layer interface between different asphalt concrete lifts or simulations of viscoelastic glu-ing compounds used in the manufacture of layered composites ͑Diab and Wu2007͒.Functionally Graded Viscoelastic Finite-Element MethodThis section describes the formulation for the analysis of vis-coelastic functionally graded problems using FE framework and the elastic-viscoelastic CP.The initial portion of this section es-tablishes the basic viscoelastic constitutive relationships and the CP.The subsequent section provides the FE formulation using the GIF.Viscoelastic Constitutive RelationsThe basic stress-strain relationships for viscoelastic materials have been presented by,among other writers,Hilton͑1964͒and Christensen͑1982͒.The constitutive relationship for quasi-static, linear viscoelastic isotropic materials is given as␴ij͑x,t͒=2͵tЈ=−ϱtЈ=t G͓x,␰͑t͒−␰͑tЈ͔͒ͫ␧ij͑x,tЈ͒−13␦ij␧kkͬdtЈ+͵tЈ=−ϱtЈ=t K͓x,␰͑t͒−␰͑tЈ͔͒␦ij␧kk dtЈ͑1͒where␴ij=stresses;␧ij=strains at any location x.The parameters G and K=shear and bulk relaxation moduli;␦ij=Kronecker delta; and tЈ=integration variable.Subscripts͑i,j,k,l=1,2,3͒follow Einstein’s summation convention.The reduced time␰is related to real time t and temperature T through the time-temperature super-position principle␰͑t͒=͵0t a͓T͑tЈ͔͒dtЈ͑2͒For a nonhomogeneous viscoelastic body in quasi-static condi-tion,assume a boundary value problem with displacement u i on volume⍀u,traction P i on surface⍀␴and body force F i,the equilibrium and strain-displacement relationships͑for small de-formations͒are as shown in Eq.͑3͒␴ij,j+F i=0,␧ij=12͑u i,j+u j,i͒͑3͒respectively,where,u i=displacement and͑•͒,j=ץ͑•͒/ץx j.CP and Its Application to FGMsThe CP allows a viscoelastic solution to be readily obtained by simple substitution into an existing elastic solution,such as a beam in bending,etc.The concept of equivalency between trans-formed viscoelastic and elastic boundary value problems can be found in Read͑1950͒.This technique been extensively used by researchers to analyze variety of nonhomogeneous viscoelastic problems including,but not limited to,beam theory͑Hilton and Piechocki1962͒,finite-element analysis͑Hilton and Yi1993͒, and boundary element analysis͑Sladek et al.2006͒.The CP can be more clearly explained by means of an ex-ample.For a simple one-dimensional͑1D͒problem,the stress-strain relationship for viscoelastic material is given by convolution integral shown in Eq.͑4͒.␴͑t͒=͵0t E͑t−tЈ͒ץ␧͑tЈ͒ץtЈdtЈ͑4͒If one is interested in solving for the stress and material properties and imposed strain conditions are known,using the elastic-viscoelastic correspondence principle the convolution integral can be reduced to the following relationship using an integral trans-form such as the Laplace transform:␴˜͑s͒=sE˜͑s͒␧˜͑s͒͑5͒Notice that the above functional form is similar to that of the elastic problem,thus the analytical solution available for elastic problems can be directly applied to the viscoelastic problem.The transformed stress quantity,␴˜͑s͒is solved with known E˜͑s͒and ␧˜͑s͒.Inverse transformation of␴˜͑s͒provides the stress␴͑t͒.Mukherjee and Paulino͑2003͒have demonstrated limitations on the use of the correspondence principle in the context of func-tionally graded͑and nonhomogenous͒viscoelastic boundary value problems.Their work establishes the limitation on the func-tional form of the constitutive properties for successful and proper use of the CP.Using correspondence principle,one obtains the Laplace trans-form of the stress-strain relationship described in Eq.͑1͒as ␴˜ij͑x,s͒=2G˜͓x,␰˜͑s͔͒␧˜ij͑x,s͒+K˜͓x,␰˜͑s͔͒␦ij␧˜kk͑x,s͒͑6͒where s=transformation variable and the symbol tilde͑ϳ͒on top of the variables represents transformed variable.The Laplace transform of any function f͑t͒is given byL͓f͑t͔͒=f˜͑s͒=͵0ϱf͑t͒Exp͓−st͔dt͑7͒Equilibrium͓Eq.͑3͔͒for the boundary value problem in the trans-formed form becomes␴˜,j͑x,s͒=2G˜͑x,s͒␧˜,j d͑x,s͒+2G˜,j͑x,s͒␧˜d͑x,s͒+K˜͑x,s͒␧˜,j͑x,s͒+K˜,j͑x,s͒␧˜͑x,s͒͑8͒where superscript d indicates the deviatoric component of the quantities.Notice that the transformed equilibrium equation for a nonho-mogeneous viscoelastic problem has identical form as an elastic40/JOURNAL OF MATERIALS IN CIVIL ENGINEERING©ASCE/JANUARY2011nonhomogeneous boundary value problem.This forms the basis for using CP-based FEM for solving graded viscoelastic problems such as asphalt concrete pavements.The basic FE framework for solving elastic problems can be readily used through the use of CP,which makes it an attractive alternative when compared to more involved time integration schemes.However,note that due to the inapplicability of the CP for problems involving the use of the time-temperature superposition principle,the present analysis is applicable to problems with nontransient thermal conditions.In the context of pavement analysis,this makes the present proce-dure applicable to simulation of traffic ͑tire ͒loading conditions for given aging levels.The present approach is not well-suited for thermal-cracking simulations,which require simulation of con-tinuously changing and nonuniform temperature conditions.FE FormulationThe variational principle for quasi-static linear viscoelastic mate-rials under isothermal conditions can be found in Gurtin ͑1963͒.Taylor et al.͑1970͒extended it for thermoviscoelastic boundary value problem͟=͵⍀u͵t Ј=−ϱt Ј=t ͵t Љ=−ϱt Љ=t −t Ј12C ijkl ͓x ,␰ijkl ͑t −t Љ͒−␰ijklЈ͑t Ј͔͒ϫץ␧ij ͑x ,t Ј͒ץt Јץ␧kl ͑x ,t Љ͒ץt Љdt Јdt Љd ⍀u−͵⍀u͵t Ј=−ϱt Ј=t ͵t Љ=−ϱt Љ=t −t ЈC ijkl ͓x ,␰ijkl ͑t −t Љ͒−␰ijklЈ͑t Ј͔͒ϫץ␧ij ء͑x ,t Ј͒ץt Јץ␧kl ء͑x ,t Љ͒ץt Љdt Јdt Љd ⍀u −͵⍀␴͵t Љ=−ϱt Љ=tP i ͑x ,t −t Љ͒ץu i ͑x ,t Љ͒ץt Љdt Љd ⍀␴=0͑9͒where ⍀u =volume of a body;⍀␴=surface on which tractions P iare prescribed;u i =displacements;C ijkl =space and time depen-dent material constitutive properties;␧ij =mechanical strains and ␧ij ء=thermal strains;while ␰ijkl =reduced time related to real time t and temperature T through time-temperature superposition prin-ciple of Eq.͑2͒.The first variation provides the basis for the FE formulation␦͟=͵⍀u ͵t Ј=−ϱt Ј=t ͵t Љ=−ϱt Љ=t −t ЈͭC ijkl ͓x ,␰ijkl ͑t −t Љ͒−␰ijklЈ͑t Ј͔͒ץץt Ј͑␧ij ͑x ,t Ј͒−␧ij ء͑x ,t Ј͒͒ץ␦␧kl ͑x ,t Љ͒ץt Љͮdt Јdt Љd ⍀u−͵⍀␴͵t Љ=−ϱt Љ=tP i ͑x ,t −t Љ͒ץ␦u i ͑x ,t Љ͒ץt Љdt Љd ⍀␴=0͑10͒The element displacement vector u i is related to nodal displace-ment degrees of freedom q j through the shape functions N iju i ͑x ,t ͒=N ij ͑x ͒q j ͑t ͒͑11͒Differentiation of Eq.͑11͒yields the relationship between strain ␧i and nodal displacements q i through derivatives of shape func-tions B ij␧i ͑x ,t ͒=B ij ͑x ͒q j ͑t ͒͑12͒Eqs.͑10͒–͑12͒provide the equilibrium equation for each finite element͵tk ij ͓x ,␰͑t ͒−␰͑t Ј͔͒ץq j ͑t Ј͒ץt Јdt Ј=f i ͑x ,t ͒+f i th ͑x ,t ͒͑13͒where k ij =element stiffness matrix;f i =mechanical force vector;and f i th =thermal force vector,which are described as follows:k ij ͑x ,t ͒=͵⍀uB ik T ͑x ͒C kl ͓x ,␰͑t ͔͒B lj ͑x ͒d ⍀u ͑14͒f i ͑x ,t ͒=͵⍀␴N ij ͑x ͒P j ͑x ,t ͒d ⍀␴͑15͒f i th ͑x ,t ͒=͵⍀u͵−ϱtB ik ͑x ͒C kl ͓x ,␰͑t ͒−␰͑t Ј͔͒ץ␧l ء͑x ,t Ј͒ץt Јdt Јd ⍀u͑16͒␧l ء͑x ,t ͒=␣͑x ͒⌬T ͑x ,t ͒͑17͒where ␣=coefficient of thermal expansion and ⌬T =temperaturechange with respect to initial conditions.On assembly of the individual finite element contributions for the given problem domain,the global equilibrium equation can be obtained as͵tK ij ͓x ,␰͑t ͒−␰͑t Ј͔͒ץU j ͑t Ј͒ץt Јdt Ј=F i ͑x ,t ͒+F i th ͑x ,t ͒͑18͒where K ij =global stiffness matrix;U i =global displacement vec-tor;and F i and F i th =global mechanical and thermal force vectors respectively.The solution to the problem requires solving the con-volution shown above to determine nodal displacements.Hilton and Yi ͑1993͒have used the CP-based procedure for implementing the FE formulation.However,the previous re-search efforts were limited to use of conventional finite elements,while in the current paper graded finite elements have been used to efficiently and accurately capture the effects of material non-homogeneities.Graded elements have benefit over conventional elements in context of simulating non-homogeneous isotropic and orthotropic materials ͑Paulino and Kim 2007͒.Kim and Paulino ͑2002͒proposed graded elements with the GIF,where the consti-tutive material properties are sampled at each nodal point and interpolated back to the Gauss-quadrature points ͑Gaussian inte-gration points ͒using isoparametric shape functions.This type of formulation allows for capturing the material nonhomogeneities within the elements unlike conventional elements which are ho-mogeneous in nature.The material properties,such as shear modulus,are interpolated asG Int.Point =͚i =1mG i N i ͑19͒where N i =shape functions;G i =shear modulus corresponding tonode i ;and m =number of nodal points in the element.JOURNAL OF MATERIALS IN CIVIL ENGINEERING ©ASCE /JANUARY 2011/41A series of weak patch tests for the graded elements have been previously established ͑Paulino and Kim 2007͒.This work dem-onstrated the existence of two length scales:͑1͒length scale as-sociated with element size,and ͑2͒length scale associated with material nonhomogeneity.Consideration of both length scales is necessary in order to ensure convergence.Other uses of graded elements include evaluation of stress-intensity factors in FGMs under mode I thermomechanical loading ͑Walters et al.2004͒,and dynamic analysis of graded beams ͑Zhang and Paulino 2007͒,which also illustrated the use of graded elements for simulation of interface between different material layers.In a recent study ͑Silva et al.2007͒graded elements were extended for multiphys-ics applications.Using the elastic-viscoelastic CP,the functionally graded vis-coelastic finite element problem could be deduced to have a func-tional form similar to that of elastic place transform of the global equilibrium shown in Eq.͑18͒isK ˜ij ͑x ,s ͒U ˜j ͑s ͒=F ˜i ͑x ,s ͒+F ˜i th ͑x ,s ͒͑20͒Notice that the Laplace transform of hereditary integral ͓Eq.͑18͔͒led to an algebraic relationship ͓Eq.͑23͔͒,this is major benefit of using CP as the direct integration for solving hereditary integrals will have significant computational cost.As discussed in a previ-ous section,the applicability of correspondence principle for vis-coelastic FGMs imposes limitations on the functional form of constitutive model.With this knowledge,it is possible to further customize the FE formulation for the generalized Maxwell model.Material constitutive properties for generalized Maxwell model is given asC ij ͑x ,t ͒=͚h =1n͓C ij ͑x ͔͒h Exp ͫ−t ͑␶ij ͒hͬ͑no sum ͒͑21͒where ͑C ij ͒h =elastic contributions ͑spring coefficients ͒;͑␶ij ͒h=viscous contributions from individual Maxwell units,commonly called relaxation times;and n ϭnumber of Maxwell unit.Fig.1illustrates simplified 1D form of the generalized Max-well model represented in Eq.͑21͒.Notice that the generalized Maxwell model discussed herein follows the recommendations made by Mukherjee and Paulino ͑2003͒for ensuring success of the correspondence principle.For the generalized Maxwell model,the global stiffness matrix K of the system can be rewritten asK ij ͑x ,t ͒=K ij 0͑x ͒Exp ͩ−t ␶ijͪ=K ij 0͑x ͒K ij t͑t ͒͑no sum ͒͑22͒where K ij 0=elastic contribution of stiffness matrix and K t=time dependent portion.Using Eqs.͑20͒and ͑22͒,one can summarize the problem asK ij 0͑x ͒K ˜ij t ͑s ͒U ˜j ͑s ͒=F ˜i ͑x ,s ͒+F ˜i th ͑x ,s ͒͑no sum ͒͑23͒FE ImplementationThe FE formulation described in the previous section was imple-mented and applied to two-dimensional plane and axisymmetricproblems.This section provides the details of the implementation of formulation along with brief description method chosen for numerical inversion from Laplace domain to time domain.The implementation was coded in the commercially available software Matlab.The implementation of the analysis code is di-vided into five major steps as shown in Fig.2.The first step is very similar to the FE method for a time dependent nonhomogeneous problem,whereby local contribu-tions from various elements are assembled to obtain the force vector and stiffness matrix for the system.Notice that due to the time dependent nature of the problem the quantities are evaluated throughout the time duration of analysis.The next step is to trans-form the quantities to the Laplace domain from the time domain.For the generalized Maxwell model,the Laplace transform of the time-dependent portion of the stiffness matrix,K t ,can be directly ͑and exactly ͒determined using the analytical transform given byK ij 0͑x ͒K ˜ij t ͑s ͒U ˜j ͑s ͒=F ˜i ͑x ,s ͒+F ˜i th ͑x ,s ͓͒no sum for K ij 0͑x ͒K ˜ij t ͑s ͔͒͑24͒Laplace transform of quantities other than the stiffness matrix canbe evaluated using the trapezoidal rule,assuming that the quanti-ties are piecewise linear functions of time.Thus,for a given time dependent function F ͑t ͒,the Laplace transform F ˜͑s ͒is estimated asF ˜͑s ͒=͚i =1N −11s 2⌬t͕s ⌬t ͑F ͑t i ͒Exp ͓−st i ͔−F ͑t i +1͒Exp ͓−st i +1͔͒+⌬F ͑Exp ͓−st i ͔−Exp ͓−st i +1͔͖͒͑25͒where ⌬t =time increment;N =total number of increments;and⌬F =change in function F for the given increment.Once the quantities are calculated on the transformed domain the system of linear equations are solved to determine the solu-tion,which in this case produces the nodal displacements in thetransformed domain,U˜͑s ͒.The inverse transform provides the solution to the problem in the time domain.It should be noted that()1C x ()2C x ()n C x τττ12nFig.1.Generalized Maxwell modelDefine problem in time-domain (evaluate load vector F(t)and stiffnessmatrix components K 0(t)and K t (x ))Perform Laplace transform to evaluate F(s)and K t (s)~~Solve linear system of equations to evaluate nodal displacement,U(s)Perform inverse Laplace transforms to get the solution,U(t)~Post-process to evaluate field quantities of interestFig.2.Outline of finite-element analysis procedure42/JOURNAL OF MATERIALS IN CIVIL ENGINEERING ©ASCE /JANUARY 2011the formulation as well as its implementation is relatively straight forward using the correspondence principle based transformed ap-proach when compared to numerically solving the convolution integral.The inverse Laplace transform is of greater importance in the current problem as the problem is ill posed due to absence of a functional description in the imaginary prehensive comparisons of various numerical inversion techniques have been previously presented ͑Beskos and Narayanan 1983͒.In the current study,the collocation method ͑Schapery 1962;Schapery 1965͒was used on basis of the recommendations from previous work ͑Beskos and Narayanan 1983;Yi 1992͒.For the current implementation the numerical inverse trans-form is compared with exact inversion using generalized Maxwell model ͓c.f.Eq.͑21͔͒as the test function.The results,shown in Fig.3,compare the exact analytical inversion with the numerical inversion results.The numerical inversion was carried out using 20and 100collocation points.With 20collocation points,the average relative error in the numerical estimate is 2.7%,whereas with 100collocation points,the numerical estimate approaches the exact inversion.Verification ExamplesIn order to verify the present formulation and its implementation,a series of verifications were performed.The verification was di-vided into two categories:͑1͒verification of the implementation of GIF elements to capture material nonhomogeneity,and ͑2͒verification of the viscoelastic portion of the formulation to cap-ture time and history dependent material response.Verification of Graded ElementsA series of analyzes were performed to verify the implementation of the graded elements.The verifications were performed for fixed grip,tension and bending ͑moment ͒loading conditions.The material properties were assumed to be elastic with exponential spatial variation.The numerical results were compared with exact analytical solutions available in the literature ͑Kim and Paulino 2002͒.The comparison results for fixed grip loading,tensile load-ing,and bending were performed.The results for all three casesshow a very close match with the analytical solution verifying the implementation of the GIF graded parison for the bending case is presented in Fig.4.Verification of Viscoelastic AnalysisVerification results for the implementation of the correspondence principle based viscoelastic functionally graded analysis were performed and are provided.The first verification example repre-sents a functionally graded viscoelastic bar undergoing creep de-formation under a constant load.The analysis was conducted for the Maxwell model.Fig.5compares analytical and numerical results for this verification problem.The analytical solution ͑Mukherjee and Paulino 2003͒was used for this analysis.It can be observed that the numerical results are in very good agreement with the analytical solution.The second verification example was simulated for fixed grip loading of an exponentially graded viscoelastic bar.The numeri-cal results were compared with the available analytical solution24F u n c t i o n ,f (t )T e s t Time (sec)Fig.3.Numerical Laplace inversion using collocation method-y -y D i r e c t i o n (M P a )E (x)--S t r e s s i n -x (mm)parison of exact ͑line ͒and numerical solution ͑circular markers ͒for bending of FGM bar ͑insert illustrates the boundary value problem along with material gradation ͒m )y e n t i n y D i r e c t i o n (m E(x)D i s p a l c e m x (mm)parison of exact and numerical solution for the creep ofexponentially graded viscoelastic barJOURNAL OF MATERIALS IN CIVIL ENGINEERING ©ASCE /JANUARY 2011/43͑Mukherjee and Paulino 2003͒for a viscoelastic FGM.Fig.6compares analytical and numerical results for this verification problem.Notice that the results are presented as function of time,and in this boundary value problem the stresses in y-direction are constant over the width of bar.Excellent agreement between nu-merical results and analytical solution further verifies the veracity of the viscoelastic graded FE formulation derived herein and its successful implementation.Application ExamplesIn this section,two sets of simulation examples using the gradedviscoelastic analysis scheme discussed in this paper are presented.The first example is for a simply supported functionally graded viscoelastic beam in a three-point bending configuration.In order to demonstrate the benefits of the graded analysis approach,com-parisons are made with analysis performed using commercially available software ͑ABAQUS ͒.In the case,of ABAQUS simula-tions,the material gradation is approximated using a layered ap-proach and different refinement levels.The second example is that of an aged conventional asphalt concrete pavement loaded with a truck tire.Simply Supported Graded Viscoelastic BeamFig.7shows the geometry and boundary conditions for the graded viscoelastic simply supported beam.A creep load,P ͑t ͒,is imposed at midspanP ͑t ͒=P 0t͑26͒The viscoelastic relaxation moduli on the top ͑y=y 0͒and bottom ͑y=0͒of the beam are shown in Fig.8.The variation of moduli is assumed to vary linearly from top to bottom as follows:E ͑y ,t ͒=ͩyy 0ͪE Top ͑t ͒+ͩy 0−y y 0ͪE Bottom ͑t ͒͑27͒The problem was solved using three approaches namely,͑1͒graded viscoelastic analysis procedure ͑present paper ͒;͑2͒com-mercial software ABAQUS with different levels of mesh refine-ments and averaged material properties assigned in the layered manner;and ͑3͒assuming averaged material properties for the whole beam.In the case of the layered approach using commer-cial software ABAQUS,three levels of discretization were used.A sample of the mesh discritization used for each of the simula-tion cases is shown in Fig.9.Table 1presents mesh attributes for each of the simulation cases.The parameter selected for comparing the various analysis op-tions is the mid span deflection for the beam problem discussed earlier ͑c.f.Fig.7͒.The results from all four simulation options are presented in Fig.10.Due to the viscoelastic nature of the problem,the beam continues to undergo creep deformation with increasing loading time.The results further illustrate the benefit of using the graded analysis approach as a finer level of meshTable 1.Mesh Attributes for Different Analysis Options Simulation case Number of elements Number of nodes Total degrees of freedom FGM/average/6-layer 7201,5733,1469-layer 1,6203,4396,87812-layer2,8806,02512,050y E 0(x )t i me (sec)parison of exact and numerical solution for the exponen-tially graded viscoelastic bar in fixed grip loadingy()()0P t P h t =xx 0=10y 0=1x 0/2Fig.7.Graded viscoelastic beam problemconfigurationE 0(M P a )l x a t i o n M o d u l u s ,E (t )/N o r m a l i z e d R e 10101010time (sec)Fig.8.Relaxation moduli on top and bottom of the graded beamFG M A verage 6-Layer 9-Layer 12-LayerFig.9.Mesh discretization for various simulation cases ͑1/5th beam span shown for each case ͒。

一种新科学方法英文

一种新科学方法英文

一种新科学方法英文*Abstract:*The scientific method has been the backbone of experimental research for centuries. However, in recent years, there has been a growing need for a new scientific method that can keep up with the rapid advancements in technology and expanding knowledge. This article proposes a new approach to scientific inquiry that takes into account the complexities of the modern world.IntroductionSince its inception, the scientific method has provided a systematic way to acquire knowledge, test hypotheses, and make accurate predictions. However, with the advent of new technologies and the increasing complexity of scientific questions, the traditional scientific method may no longer be sufficient. This article aims to introduce a new scientific method that can better cater to the demands of the modern scientific community.The Limitations of the Traditional Scientific MethodThe traditional scientific method, often referred to as the "hypothesis-driven" approach, is a linear process that involves making observations, formulating a hypothesis, conducting experiments, analyzing data, and drawing conclusions. While this method has been immensely successful in advancing scientific knowledge, it has itslimitations.One significant limitation is the exclusion of complex real-world systems. Traditional experiments are often conducted in highly controlled environments, which fail to capture the complex interactions and interdependencies that exist in nature. Additionally, the traditional scientific method tends to favor reductionism, dissecting complex problems into simpler components, and focusing only on one variable at a time. This reductionist approach may work in some cases, but it fails to address the holistic nature of interconnected systems found in nature. Introducing the Holistic Scientific MethodThe proposed holistic scientific method recognizes the limitations of the traditional approach and aims to bridge the gap between reductionism and the complexity of real-world systems. This method combines elements from other scientific approaches, such as systems thinking, network analysis, and computational modeling, to provide a more comprehensive understanding of complex systems.The holistic scientific method employs a multidisciplinary approach, integrating knowledge from various fields. Instead of starting with a single hypothesis, this method begins with a "conceptual framework" that represents the system under investigation. The conceptual framework takes into account the interdependencies, feedback loops, and emergent properties of the system. This framework is then used toguide the collection of data, design experiments, and analyze results.A key aspect of this method is the use of computational modeling and simulation techniques. These tools allow researchers to simulate the behavior of complex systems under different conditions, making it possible to explore scenarios that would be challenging or unethical to study in real life. The holistic scientific method also emphasizes the importance of collecting large datasets and utilizing advanced data analysis techniques, such as machine learning, to extract meaningful patterns and insights.Advantages and Potential ApplicationsThe holistic scientific method offers several advantages over the traditional approach. By considering the complexity and interconnectedness of natural systems, this method allows researchers to study phenomena that were previously difficult to tackle. It provides a more realistic representation of the real world, enabling the development of more accurate models and predictions.Moreover, the holistic scientific method can be applied to a wide range of disciplines, such as biology, ecology, economics, and social sciences. It can help understand complex biological systems, analyze social networks, predict economic fluctuations, and optimize resource allocation.ConclusionIn conclusion, the holistic scientific method aims to address the limitations of the traditional scientific method by incorporating interdisciplinary knowledge, computational modeling, and large datasets. This method provides a more comprehensive understanding of complex systems and enables researchers to tackle the challenges of the modern scientific landscape. By embracing the holistic scientific method, scientists can advance our knowledge and find solutions to the intricate problems we face in the 21st century.*Keywords: scientific method, holistic approach, complex systems, computational modeling, data analysis.*。

非母语英语教师的定位与自我调适解析

非母语英语教师的定位与自我调适解析

非母語英語教師的定位與自我調適壹、緒論國人對英語教學常有個「母語者的謬思」(the native speaker fallacy),認為以英語為母語的外籍教師(Native English Speaking Teachers,簡稱N-NESTs NESTs )比本國合格的英語老師(Non-Native English Speaking Teachers, 簡稱N-NESTs)教得好。

目前一些公私立國民中小學聘任外籍英語教師,教育部也有意引進外籍英語教師以協助偏遠地區的學校充實師資,但是外籍英語教師的引進真的能改善偏遠地區的英語教學嗎?這一切似乎顯示本國合格英語老師的專業知識與教學技能受到質疑。

本文將探討本國中小學英語老師目前所遭遇到的困境與挑戰,同時建議教師如何自我定位、自我調適且建立起自己教學上的自信。

「母語者的謬思」起源於1961年,在Makarere, Uganda(烏干達)的一場TESOL學術研討會中所提出的一個觀念:理想的英語教師最好是以英語為母語的人(An ideal teacher of English is a native speaker.)。

毫無疑問,任何一種語言的母語者最能「感受」到此語言的微妙處,且能流利、自在的使用此語言。

但這並不代表他們有資格去教他們的母語。

大部份人對自己母語的知識都是來自直覺,很難有系統、合邏輯的解釋其母語的語音與語法。

如同作者本身的母語是閩南語,常用語言是國語,雖然能輕鬆自如地運用這兩種語言,但絕對沒有資格擔任這兩種語言的教師。

唯有受過專業訓練的語言教師(即使是非母語者)也才有資格擔任國語文的教學工作。

事實上,受過專業訓練的非母語教師可能比母語者更懂得目標語言的知識與文化內涵。

話雖如此,目前國內的N-NESTs尌如同世界各國非英語系國家的N-NESTs一樣,遭遇很多的質疑與困境。

貳、N-NESTs的挑戰一、語言知識本身的深奧根據Phillipson(1996)針對198位的「教英語為外語的教師」進行調查,請他們將自認最困難的英語文能力加以排序。

TECHNIQUES AND

TECHNIQUES AND
TECHNIQUESAND SSUES IN MULTICAST
SECURITY
Peter S. Kruus Naval Research Labortory Washington, DC 20375 kruus@itd.nrl.navy .mil
ABSTRACT Mdticast networking support is becoming an increasingly important future technology area for both commercial and military distributed and group-based applications. Integrating a multicast security solution involves numerous engineering tradeoffs. The end goal of effective operational performance and scalabili~ over a heterogeneous internetwork is of prima~ interest for widescale adoption and application ofsucha capabili~. Various techniques that have beenproposedto support multicast security are discussed and their relative merits are explored.
SECURITY
SERVICES
In order to counter the common threats to multicast communications, we can apply several of the fundamental security services, including authentication, integrity, and confidentiality as defined in [5]. A secure multicast session may use all or a combination of these services to achieve the desired security level. The amount or type of service required is dictated by the specific security policy defined for the session. Authentication services provide assurance of a participating host identity. Authentication mechanisms can be applied to several aspects of multicast communications. Foremost, authentication is an essential part in providing access control to keying material. If the group employs cryptographic for confidentiality, then techniques such as encryption authentication measures may additionally provide a means to restricted access to the keys used to secure group communications. For an encrypted multicast session, active group membership is essentially defined by access to this keying material. Therefore, the availability and distribution of keys should be restricted to only authorized group members according to the policy of trust established for the session. In order to identifi the source of multicast traffic, authentication mechanisms may be applied by the traffic source. This application serves to fiuther define group membership by positively identifying group members along with their data ProtoGols such as the 1P being sourwd to the group. Authentication Header (AH) can provide authentication for 1P datagrams and may be used for host authentication [15]. Authentication is also an essential part of any key distribution

冷原子光谱法 英语

冷原子光谱法 英语

冷原子光谱法英语Okay, here's a piece of writing on cold atom spectroscopy in an informal, conversational, and varied English style:Hey, you know what's fascinating? Cold atom spectroscopy! It's this crazy technique where you chill atoms down to near absolute zero and study their light emissions. It's like you're looking at the universe in a whole new way.Just imagine, you've got these tiny particles, frozen in place almost, and they're still putting out this beautiful light. It's kind of like looking at a fireworks display in a snow globe. The colors and patterns are incredible.The thing about cold atoms is that they're so slow-moving, it's easier to measure their properties. You can get really precise data on things like energy levels andtransitions. It's like having a super-high-resolution microscope for the quantum world.So, why do we bother with all this? Well, it turns out that cold atom spectroscopy has tons of applications. From building better sensors to understanding the fundamental laws of nature, it's a powerful tool. It's like having a key that unlocks secrets of the universe.And the coolest part? It's just so darn cool! I mean, chilling atoms to near absolute zero? That's crazy science fiction stuff, right?。

选择性注意机制研究的新进展——负启动效应与分心信息抑制

选择性注意机制研究的新进展——负启动效应与分心信息抑制

选择性注意机制研究的新进展——负启动效应与分心信息抑制张雅旭张厚粲北京师范大学心理系【正文】1引言早在1890年,著名心理学家詹姆斯就说过,注意是心理学的中心课题。

在现代认知心理学中,它也是信息加工理论框架的中心概念。

但是,自本世纪20年代以后,统治心理学近半个世纪的行为主义却把注意看成是心灵主义的概念,认为科学心理学中不应该有注意的地位。

因此,长期以来,注意一直得不到很好研究。

直到认知心理学兴起以后,研究者才开始关注注意的问题。

近几年来,选择性注意的机制一直是注意研究的重要课题之一。

在选择性注意机制的研究中,长期争论的问题与目标信息的选择发生在哪个阶段有关。

研究者曾提出两个形成鲜明对照的模型,一个是Broadbent 于1958年提出的早期选择模型,另一个是Deutsch和Deutsch于1963年提出的晚期选择模型。

这两个模型主要试图揭示注意过滤器在信息加工系统中的具体位置问题。

值得注意的是,Kahneman(1973)避开了注意选择机制在信息加工系统中的具体位置问题,而从心理资源分配的角度来解释注意,提出了注意的资源分配理论。

Neisser(1967)也曾从淡化注意位置的思想出发,关注注意与知觉操作的联系,提出了注意加工的两阶段说。

受Neisser的影响,Treisman和Gelade(1980)根据知觉的特征分析说,提出了一个影响较大的注意新理论——特征整合论(featureintegration theory)[1]。

可以说,上述这三方面工作在某种程度上淡化了有关选择机制位置的争论。

长期以来,研究者普遍接受的选择性注意机制实际上只包括兴奋机制。

然而,近年来的一些工作表明,抑制(inhibition)也是选择性注意的一个重要机制[2—4]。

用Tipper的话说,对有关信息的成功选择也需要对无关信息的抑制[4]。

这样,选择性注意机制不仅包括目标激活(target activation),还包括分心信息抑制(distractor inhibition)。

galvanic replacement method

galvanic replacement method

galvanic replacement method Galvanic Replacement Method: Unlocking the Secrets of NanotechnologyIntroduction:Nanotechnology is a field that has gained immense popularity and significance in recent years. Its applications span a wide range of industries, including electronics, medicine, energy, and materials science. One of the key techniques used in nanotechnology is the galvanic replacement method. This method has revolutionized the synthesis and fabrication of nanomaterials, leading to breakthroughs in various research areas. In this article, we will delve into the intricacies of the galvanic replacement method, exploring its step-by-step process and its impact on nanoscale science.Step 1: Understanding the BasicsTo comprehend the galvanic replacement method, we first need to understand the fundamentals. This method involves the chemical transformation of metal structures through a redox reaction. It typically occurs between two different metals, where one metal isoxidized, and the other metal is reduced. This transformation process leads to the formation of a new nanoscale structure, exhibiting unique properties that differ from those of the starting materials.Step 2: Preparation of Reactant MetalsThe first step in the galvanic replacement method is acquiring the reactant metals. These metals should have diverse properties to facilitate the redox reaction. For example, one metal could be chosen for its high reactivity towards a particular ion, while the other metal acts as a template to control the shape and size of the resulting nanomaterial. The selection of suitable reactant metals forms the foundation for a successful galvanic replacement process.Step 3: Immersion in Ionic SolutionFollowing the acquisition of reactant metals, they are immersed in a solution containing an appropriate ionic compound. The choice of this compound depends on the desired final nanomaterial and its specific properties. For instance, if one wishes to create a magneticnanomaterial, an iron salt solution may be suitable. The immersion of metals in the ionic solution initiates the redox reaction, leading to the replacement of one metal ions with the other.Step 4: Manipulating Reaction ParametersThe galvanic replacement reaction is influenced by various parameters, including temperature, pH, concentration, and reaction time. Manipulating these parameters provides control over the reaction kinetics and morphology of the resulting nanomaterial. Higher temperatures typically accelerate the reaction, while pH and concentration affect the deposition rate and the distribution of the new metal ions. These parameters need to be carefully optimized to achieve the desired nanomaterial properties.Step 5: Characterization and AnalysisOnce the galvanic replacement reaction is complete, the resulting nanomaterial needs to be characterized and analyzed. Various techniques are employed to examine its structure, composition, and properties. Electron microscopy techniques such as transmission electron microscopy (TEM) and scanning electronmicroscopy (SEM) provide high-resolution images, enabling researchers to observe the nanomaterial's morphology and size.X-ray diffraction (XRD) and energy-dispersive X-ray spectroscopy (EDS) are used to analyze the crystal structure and element composition of the nanomaterial. These characterization techniques shed light on the success of the galvanic replacement method and offer insights into potential applications.Conclusion:The galvanic replacement method is a powerful tool in the realm of nanotechnology. By harnessing the redox reaction between metals, researchers can synthesize nanomaterials with unique properties. Understanding the step-by-step process of this method, from selecting suitable reactant metals to characterizing the resulting nanomaterial, empowers scientists to unlock the secrets of nanoscale science. As advancements in nanotechnology continue to reshape various industries, the galvanic replacement method is poised to play a crucial role in the discovery and development of novel materials and technologies.。

Method and materials useful for the analysis and d

Method and materials useful for the analysis and d

专利名称:Method and materials useful for the analysis and detection of genetic material发明人:Yang, Huey-Lang,Kirtikar, Dollie,Pergolizzi, Robert G.申请号:EP84104848申请日:19840430公开号:EP0124124A3公开日:19870826专利内容由知识产权出版社提供摘要:in techniques for the analysis of genetic material, such as DNA and/or RNA, wherein the genetic material is denatured, fixed to a substrate and hybridized with a probe, superior results are obtainable when there is employed as the probe a chemically labeled probe which, preferably after hybridization, has added or attached thereto, such as to the chemical label or moiety of the probe, an enzyme component effective upon contact with a chromogen to produce an insoluble color precipitate or product. Techniques for the analysis of genetic material which are improved in sensitivity, definition, accuracy and/or speed, by employing a chemically labeled probe, include such well known techniques as «Southern» blot analysis, «Northern» blot analysis, «Western»blot analysis, colony hybridization, plaque lifts, cytoplasmic dot hybridization, in situ hybridization to cells and/or tissues and other such techniques.申请人:ENZO BIOCHEM, INC.代理人:VOSSIUS & PARTNER更多信息请下载全文后查看。

scientificmethod:科学的方法

scientificmethod:科学的方法

THE SCIENTIFIC METHODScientists, like most people, try and use a common sense approach to solving problems. This approach is logical and systematic and is sometimes called the scientific method.What is the scientific method?The following steps have been found useful in working out the solution to problems.1.Understand the problemAfter making observations, you realize that there is something you do not know or understand. As a result, you wonder why something happens as it does. You need to state the problem facing you clearly in the form of a question. Recall your inquiryinvestigation at the beginning of this unit. You made observations about the beaker.Then you may have wondered if here was something inside the beaker. Observing the beaker and thinking about the situation might have led you to state the problem in the form of a question: Is the beaker empty or is it full of something?2. Collect informationThe first thing you need to do to answer your question is conduct research. Refer to books and other written materials that relate you your problem. You may want tointerview people with knowledge of your problem. It is possible that someone else may have information concerning the same problem. The information you gather mayprovide you with clues that can help you find a way to answer your question.3.Form a hypothesisA hypothesis is a suggested solution to a problem or a possible answer to your question.A hypothesis is not a wild guess. It should be based on the information you collectedand the observations you made. Keep in mind that an observation is information yougather using your senses. There are two types of observations. The first one is known as quantitative observations, which use numbers, and are often a measurement of some kind. The second type of observation is known as qualitative observations, which aregathered by using the five senses. Some examples of qualitative observations are color, texture, aroma and flavor. Learn to be a careful observer. In the case of the inquiryinvestigation, your hypothesis might have been: If the beaker is open in a room filledwith air, then the beaker is also filled with air.4.Test the HypothesisIt is necessary to design and carry out an experiment that will test the hypothesis. The sign of a good hypothesis is that it can be tested. There must be a possible answer that can be shown to be true or false. It is not enough to say, “It’s obvious that the bottle is full of air.” You must obtain scientific evidence.The steps that you follow to test a hypothesis are known as the procedure. You mustwrite the procedure in such a way that someone else could repeat the same experiment in exactly the same way. You need to include the materials you used, the amount orsize of each material, and what you did with the materials in the order you did it.5.Keep a record of your experimentAs you carry out an experiment, observe carefully and record your observations, or data, in a laboratory notebook. Be sure to write the date and time of the observation and theconditions under which it was made. Do not try to remember what you observed and wait to write it down at a later time. You must record your data as you gather it. You can always go back and present your data in a way that makes it easier to follow. For example, you might arrange numerical data in a table or graph. You might presentother kinds of data in a diagram or map.6.Analyze the resultsOnce you have data from your experiment, you need to study it to figure out what it shows. When data consists of numbers, you might look for patterns and trends. In other words, are values increasing, decreasing or staying the same? Once you analyze the results, you can reach a conclusion, which is a statement indicating if the hypothesis was supported by the data. A conclusion is a kind of inference, which is a statement based on a series of observations. An inference is an attempt to explain why theobservations occurred. An example of an inference follows: “th e beaker was not able to fill with water so there must be air in the space where the water did not go.”If the data does not support the hypothesis, the hypothesis must be rejected. Do not consider this a failure. Showing that the hypothesis is not correct provides newinformation. It guides you to develop and test a new hypothesis. That is part of the process of scientific inquiry. Scientific inquiry is the nature of scientific thinking and the process scientists use to solve problems.7.Repeat the experimentIn any experiment, it is always possible that some error was introduced. To avoid“jumping to conclusions,” you should repeat your experiment as many times as possible.If you, or someone else, can duplicate the results of your experiment with little or no change, your conclusions are probably valid. Results that are valid are reasonable,logical, and can be achieved by repeated testing. This is another reason why yourprocedure needs to be well- written. Other people need to be able to repeat yourexperiment to confirm your results.8.Confirm the conclusionRepeating an experiment and getting the same results is one way to confirm theconclusion. Another important method of confirming a conclusion is to try to use the information revealed by the experiment to explain other related problems. If successful, you can be more certain that the conclusion is reasonable.municate the results to othersIt is important for others to know what you have learned so that they will be able to confirm your results and possibly use them to solve other problems. Scientists publicize their findings in several ways, such as writing reports in magazines known as scientific journals or giving lectures to other scientists at professional meetings.10.Identify new problems for investigationScientific knowledge has grown because the solution to one problem often leads to the investigation of new problems. For example, you showed that the “empty” beakeractually contained a gas. The next logical step would be to show that the gas is air.HOW ARE THEORIES DEVELOPED?After scientists have performed many experiments, made many observations, and tested several hypotheses, they may try to provide an explanation of their results that can be applied to related situations. They do this by formulating a theory. A theory is a detailed explanation of large bodies of information, represents the most logical explanation of the evidence, and is generally accepted as fact (unless shown to be otherwise). Theories are often revised as technology improves and new observations are made. Most theories are based on the work of many scientists over a long period of time and become stronger as more supporting evidence and experimental data are gathered.No single experiment can prove that a theory is correct. The experimental results can support only the possibility that the theory is right. However, a single experiment can prove a theory wrong if the results do not support the theory.What are variables and controls?Experiments must be conducted in an organized way so that you can reach valid results. Two important considerations for obtaining valid results are variables and controls.What are experimental variables?Any conditions that can be changed during an experiment are called variables. In an experiment, you need to keep all but one of the variables constant, or unchanged. The one variable that you change is the independent variable. The hypothesis should relate these two variables. Suppose, for example, you want to find out how the height from which you drop a ball affects the height to which it bounces. Your hypothesis might be: if the height from which a ball is dropped increases, the height to which the ball bounces will increase. The starting height would be the independent variable and the height to which the ball bounces would be the dependent variable.Would it be a good experiment to drop a golf ball, a tennis ball, and a basketball from different heights in different places to observe how they bounce in order to test the hypothesis about height? No. All the balls bounce differently, so you will not know if it was the type of ball, the surface on which it bounced, or the height from which it was dropped that affected the height of the bounce. An experiment must have only one independent variable; otherwise, the results cannot be attributed to a single cause.What are controlled experiments?In order for the results of an experiment to be useful, they must be compared with some standard that does not change during the experiment. For example, suppose a teacher wants to find out if listening to classical music helps students perform better on exams. Theteacher gathers a group of students for an exam. Should the teacher let all the students listen to music while taking the exam? If the teacher lets all the students listen to music and finds that most do well on the exam, the teacher cannot be sure if the change was because of the music. There might have been some other variable that affected all the students. For example, the exam might have been easier than usual, or it might have been on a topic that students find interesting.To establish something for comparison, the teacher needs to have only some of the students listen to music during the exam. Then, if all the students do well on the exam, the teacher knows it was not because of the music. If instead, the students who listened to music did substantially better on the exam than the other students, the teacher can conclude that it was because of the music. The group of students who were not allowed to listen to music is known as the control for this investigation.How can you use the Scientific Method in everyday life?You might be surprised to find out that you use the scientific method even when you are not in science class. The work of science requires a variety of abilities and qualities that are helpful in daily life. Suppose for example, you have a cell phone that stops working. Perhaps you have been in that situation before, so you assume the battery needs to be charged. After plugging it into the charger, you find that the phone still does not work. What do you do next? Before you give up on your cell phone, you would probably analyze the situation more closely. You might ask some friends who have similar cell phones if they have ever had the same problem. You might then go on line to search the information at the manufacturer’s Web site. There you might discover that other people have reported that their chargers stopped working, which might lead you to consider that your charger has stopped working as well. To check, you find a friend who has a working charger and use it to charge your cell phone. Your phone works fine! You buy a new charger and your problem is solved.What does this example have to do with the scientific method? You will see if you think about the example in scientific terms. You made an observation that your cell phone did not work. Then you conducted research to investigate the problem. Based on your research, you made a hypothesis about the charger being the cause of the problem. Then you performed a test to evaluate your hypothesis. The test led you to reach a conclusion which you explored by trying another charger.This is just one of the many examples you could consider to realize that you use the scientific method every day. Thinking scientifically is a useful way to solve problems and can be applied to common situations.。

材料测试技术3

材料测试技术3

消光离ξ g的性质
对于确定的波长,消光距离是晶体的一种物理
性质,同时也是不同衍射波矢量g的函数; 同一晶体中,不同的晶面产生的衍射波处于双 束条件时,有不同的消光距离,即不同的ξ g 值。
Modern Analytical Instruments and Technology for Materials
下表面
晶粒II
倾斜界面示意图
明场像
暗场像
等厚条纹形成原理示意图
Modern Analytical Instruments and Technology for Materials
20
2. 等倾条纹(衍射强度随样品位向的变化)
如果样品的厚度不变,但是局部晶面取向发生变化,衍射




强度将随偏离参量的变化而变化,有 Ig=(π 2t2/ξ g2)sin2(π ts)/(π ts)2 s=0时,衍射强度有极大值,当s=(2n+1)/2t(n=1,2,„)时, 衍射强度都有极大值,不过随着s的增大,衍射强度的极 大峰值迅速下降。 当s=1/t时,出现第一个极小值。 当s=n/t时,衍射强度出现第n个极小值,是衍射强度发生 消光的位置。 所以当t一定时,随着s的增加,衍射强度也发生周期性振 荡。
I0=1
I II III
1)双光束近似 2)柱体近似
I Ig2 Ig3 1-Ig Ig Modern Analytical Instruments and Technology forg1Materials
15
双束近似 (运动学基本假设一)
在获得电子显微像时,通常采用双束成像条件:即除透
射电子束外,只有一个强衍射束,且让它偏离精确的布 拉格衍射条件。 用非常薄的样品,这时因吸收而引起的能量损失和多次 散射以及严格双束情况下有限的透射和衍射束之间的交 互作用可以忽略不计。 实际上,要做到这两条是非常困难的,只能尽可能地调 整样品的取向,以期达到双束成像条件。
  1. 1、下载文档前请自行甄别文档内容的完整性,平台不提供额外的编辑、内容补充、找答案等附加服务。
  2. 2、"仅部分预览"的文档,不可在线预览部分如存在完整性等问题,可反馈申请退款(可完整预览的文档不适用该条件!)。
  3. 3、如文档侵犯您的权益,请联系客服反馈,我们会尽快为您处理(人工客服工作时间:9:00-18:30)。

1 TECHNIQUES AND METHODOLOGIESFOR MULTIMEDIA SYSTEMSDEVELOPMENT: A SURVEY OFINDUSTRIAL PRACTICEMichael LangChris BarryDepartment of Accountancy and FinanceNational University of Ireland, GalwayIrelandAbstractThis paper tries to answer the question, how are multimediasystems being developed in practice? Herein are reported thefindings of a preliminary postal survey of the top 1,000 com-panies from general industry and the principal 100 companiesfrom the multimedia industry in Ireland, which reveal that thereis no uniform approach to multimedia systems development andthat approaches prescribed by the literature are not being usedin practice. Nonetheless, the findings are clear that practi-tioners are favorably inclined toward the use of systematicmethods and techniques for multimedia development. This sur-vey paves the way for more detailed and insightful qualitativeresearch into development practices.1.INTRODUCTIONUntil quite recently, most multimedia systems were relatively simple, stand-alone applications. The prolific expansion of enterprise-wide intranets and of Web-based electronic commerce systems has altered this dramatically and has ushered organizational multimedia through the back door, thus presenting formidable and pressing challenges to information systems developers.15Part Number: Title 16Of late, the world of systems development has been dominated by structured methodologies for large-scale projects and by object-oriented or visual-oriented approaches. It would appear that such methods are not entirely appropriate or adequate for multimedia and Web-based systems development (Lowe and Hall 1999; Nanard and Nanard 1995). While researchers have made valuable contri-butions toward a better understanding of the nature of multimedia systems, and have prescribed methods by which they may be constructed, it has been the authors’ experience that practitioners are not using these methods. The moti-vation for this study is therefore to answer the question, how are multimedia systems being developed in practice?2.BACKGROUND TO THE PROJECTWithin information systems research, it has been usual for specialized development techniques and methodologies to be proposed in response to the emergence of new types of systems that are considered to be somehow fundamentally different from all else before, such as decision support systems, workgroup systems, and, now, multimedia and Web-based systems. Since the international multimedia software industry is, in relative terms, new, it is not surprising that multimedia development approaches are at present inconsistent, immature, and lacking formal or tool-based modeling techniques (Britton et al. 1997; Murugesan et al. 1999; Shneiderman 1997).To date, very little is known about the actual practice of multimedia systems development. The only previous empirical studies published in the literature are those of Britton et al. (1997), Liu et al. (1998), Whitley (1998), and Eriksen et al. (1998), three of which focus specifically on multimedia-based training systems.The principal objectives of this survey were, therefore:(1)To examine the current practice of multimedia systems development inIreland.(2)To see if there are differences between the techniques and methodologiesused to develop multimedia systems suggested in the literature and those actually used in practice.3.RESEARCH METHODOLOGYFor the purposes of this study, the authors adopted a broad definition of multimedia information systems that is inclusive of Web-based multimedia applications.Lang & Barry/Multimedia Systems Development17 Two parallel studies were conducted—one that examined the main 100 Irish companies in the multimedia industry, and another that looked at the top 1,000 Irish companies in general industry. The rationale for selecting two parallel samples was to compare and contrast how respondents from both samples approach multimedia systems development, given that general industry respon-dents are presumed to have a tradition of conventional IS development whereas multimedia industry respondents specialize in multimedia systems and may or may not have the same traditions.The questionnaire distributed to general industry was an adaptation of that distributed to the multimedia industry. Both questionnaires examined systems development environments; development practices; the usage of techniques, methodologies, and tools; and future development plans. From general industry, a response rate of 10% was achieved. A higher response rate of 15% was obtained from the multimedia industry. Some apparent volatility in this sector was revealed when it emerged that quite a few recent start-up companies were no longer trading, making the response rate closer to 20% of all trading com-panies. These response rates are in line with those of the previous studies earlier cited, and are reasonable given the growing reluctance of companies to reply to unsolicited questionnaires (Falconer and Hodgett 1999).4.FINDINGS4.1Usage of Approaches and Methodologies for MultimediaSystems DevelopmentRespondents were provided with a list of general approaches toward multimedia development, drawn from the literature, and asked to indicate which if any of these they have ever used (see Table 1). It emerged that general industry respondents have all at some time used a semi-structured systems development life cycle (SDLC) approach in multimedia development. A smaller number use a more formalised SDLC-based approach, or an object-oriented approach. The focus on the SDLC within general industry contrasts with a much broader mix in approaches used by multimedia industry respondents. While prototyping is the most widely used here, production-oriented approaches, semi-structured SDLC approaches, and advertising/graphic design are also in common use. The popularity of the production-oriented approach, an approach that originates from the film industry, is particularly interesting and highlights the potential contribution of other disciplines outside traditional information systems development.Part Number: Title 18Table 1. General Approaches Used in Multimedia Systems DevelopmentApproach General Industry(n = 8)MultimediaIndustry (n = 15)AggregateResponse (n = 23) Incidence Incidence IncidenceSemi-structured SDLC8100%640%1461% Prototyping225%960%1148% Production-orientedApproach113%853%939% Structured SDLC338%427%730% Advertising / GraphicDesign225%533%730% Object-oriented Approach338%320%626% Other113%427%522% Artistic Approach00%427%417% Media Design Approach00%427%417%The finding that respondents from the multimedia industry use a multiplicity of approaches reveals, ipso facto, that they do not agree on a common develop-ment approach. Such diversity may of course reflect the distinct nature of multi-media applications: that different approaches are suitable for different types of applications. However, perhaps the explanation is to be found elsewhere. Diffe-rences in approaches to multimedia systems development may also be explained by reference to the differing backgrounds of developers. Examples of these backgrounds are publishing, software engineering, film production, advertising, product development, graphic design, and information systems development. Each of these root disciplines has its own firmly established development paradigms.With regard to the use of specific methodologies (as opposed to general approaches), it was found that respondents are predominantly using their own in-house methods. Some use is made of the traditional SDLC and of object-oriented methodologies. Revealingly, although a number of methodologies specifically for multimedia and Web-based systems development are set forth in the literature (see Table 2), the findings of this study are that none of these are used at all in practice! It may be because these methodologies are too complex to understand or implement, or that they have little CASE-based support. Of course, it may also be because of a lack of awareness among practitioners, as few of these methodologies have ever been publicized outside of academic journals and conferences.Lang & Barry/Multimedia Systems Development19 Table 2. Web and Multimedia Systems Development MethodologiesHypertext Design Model (HDM)Garzotto and Paolini (1993) Relationship Management Methodology (RMM) Isakowitz et al. (1995) Object-Oriented Hypermedia Design Methodology (OOHDM)Schwabe and Rossi (1995) World Wide Web Design Technique (W3DT/SHDT/eW3DT) Bichler and Nusser (1996);Scharl (1999)Web Site Design Method (WSDM)De Troyer and Leune(1998)Scenario-based Object-Oriented Hypermedia DesignLee et al. (1999a) Methodology (SODHM)View-Based Hypermedia Design Methodology (VHDM) Lee et al. (1999b)4.2Attitudes Toward Methodology UsageWhen asked about the advantages or benefits of their principal methodology, the response from both samples (aggregated in Table 3) highlights cost effective-ness (74%), speed of development (63%), understandability (58%), and adapt-ability (53%) as the most important ones. The relative weight given by respondents to benefits that emphasize improved efficiencies in cost and speed suggests an inclination to use methodologies that assist project management. Approaches like HDM or RMM do not emphasize project management, a task that is crucial to commercial development where budgets and time constraints are everyday realities. This finding may point to reasons why such methodo-logies have not been widely adopted.Table 3. Primary Advantages/Benefits of Principal Multimedia MethodologyBenefit/Advantage Affirmative Responses (n = 19) Cost effectiveness1474% Speed of development1263% Understandability1158% Adaptability1053% Widespread acceptance/Reputation842% Results obtained842% Ease of use737% Comprehensiveness737% Attention to detail526% Broadness/Inclusiveness421% Other211% Narrowness/Specificity00%Part Number: Title 20Table 4. Primary Disadvantages/Drawbacks of Principal Multimedia Methodology Disadvantage/Drawback Affirmative Responses (n = 17) Not widely known741% Complexity741% High level of detail741% Obsolescence741% Cost529% Difficulty in use318% Broadness/Inclusiveness318% Difficulty in understanding212% Other212%Too narrow /Specific16%With regard to disadvantages or drawbacks of the respondents’ principal methodology (shown in Table 4), a number of concerns clearly dominate. Two of these, obsolescence and the use of a technique that is not widely known, suggest fears about failing to use a more universal methodology. A total of 41% of the respondents also cite complexity and the high level of detail required by their principal methodology as being disadvantageous. Overall, fewer disadvan-tages than advantages were reported, suggesting a general satisfaction with methodologies.4.3Use of Techniques in Multimedia Systems DevelopmentRespondents were asked about their usage of techniques in multimedia systems development. They were presented with a list of techniques drawn from the literature. These included traditional structured techniques (such as data flow diagrams), modern techniques (such as use case diagrams), as well as others specifically intended for multimedia development (such as storyboarding and RMDM diagrams). The findings are presented in Table 5.There is an interesting mix of multimedia-specific technique usage like RMDM and MAD. It is evident that project management and prototyping are, as one would anticipate, widely used. Storyboarding, a technique widely used in film-making ever since Walt Disney refined and popularized it in the 1930s, is very clearly in widespread use, consistent with the findings of an earlier study by Britton et al. Flowcharts and menu maps are also popular, presumably because both are conceptual, block-diagraming techniques that may be used to specify navigational structure, links, and interaction branches. The fairly common usage of DFDs (41%) is more difficult to interpret since they represent neither sequential flows nor data modeling. It may simply be that they are popular be-Lang & Barry/Multimedia Systems Development21 Table 5. Use of Techniques in Multimedia Systems Development.TechniqueGeneralIndustry(n = 7)MultimediaIndustry(n = 15)AggregateResponse(n = 22) (Affirmativeresponses)(Affirmativeresponses)(Affirmativeresponses)Project Management6131986% Prototyping5111673% Flowcharting3121568% Storyboarding3101359% Menu Maps291150% Data Flow Diagrams (DFD)27941% Object-Oriented techniques22418% Relationship Management Data-Model(RMDM) Diagram13418% Movie Authoring and Design (MAD)13418% Class Diagrams21314% Entity Relationship Diagrams (ERD)12314% Dialogue Charts03314% Other1129% State Transition Diagrams (STD)0229% Functional Decomposition Diagrams(FDD)1015% Use Case Diagrams1015% Joint Application Design (JAD)0115% cause they are well understood as a legacy technique. Perhaps to compensate for the lack of comprehensive or specific modeling tools, developers are impro-vising with the use of techniques not designed for multimedia development but which perform some useful modeling function.In response to another question, it was found that the most widely used programming languages in multimedia systems development are HTML/ DHTML, Visual Basic, Java, C++, SQL and Javascript. This reveals a clear par-tiality toward visual and object-oriented programming environments. The finding (see Table 5) that object-oriented techniques are in relatively low use is, therefore, somewhat surprising. It may well be the case that object-oriented techniques are too difficult to understand, particularly for those from back-grounds other than software development. Comprehension difficulties may also explain why real-time modeling techniques, which are intended to specifyPart Number: Title 22interactions and time-based constraints, are also not as often used as might be expected.4.4Future Multimedia DevelopmentAll respondents from general industry were asked if they expect to develop multimedia systems in the future, regardless of whether or not they had pre-viously done so. A total of 48% said they will, or are likely to, do so within at most two years. General industry respondents were also asked if they expect that their large-scale, organizational information systems will contain multimedia data in the future. Respondents were given a longer time frame (five years) within which they might expect this to happen. Over half (51%) expect that their conventional organizational systems will or are likely to include multimedia data within the next five years. This is a significant signal that preparation needs to be made in anticipation of the widespread introduction of multimedia applica-tions and consequent technologies.5.CONCLUSIONS AND FURTHER WORKCompanies in general industry are not developing multimedia systems on a widespread basis today. However, the future plans of companies suggest that this is about to change. The finding that most general industry respondents expect that their large-scale information systems will soon contain multimedia data is significant. This has consequences for all aspects of systems development. Staffing, adoption of new technologies, upgrading to hardware and software systems that can handle multimedia data, and use of development methods and techniques will all need to be considered in light of this expectation.While research and development into multimedia information systems deve-lopment has been superseded in the recent past by efforts in the more popular Web-based world, it is to be hoped that work on improving structural under-standing and enhancing process support will soon be revisited and that the pursuit of improved practice in multimedia and Web-based systems development will continue. The findings of this preliminary study reveal that practitioners are not using the multimedia development methodologies prescribed by academic literature, and that most are using their own in-house methods. Further work is necessary to examine why prescribed methodologies are not being used and to reveal richer insights into the methods and techniques that practitioners are adopting.Lang & Barry/Multimedia Systems Development23 6.REFERENCESBichler, M., and Nusser, S. “Modular Design of Complex Web-Applications with W3DT,” in Proceedings of the Fifth IEEE Workshop on Enabling Technology: Infrastructure for Collaborative Enterprises (WET ICE’96), Stanford, California, June 19-21, 1996. Britton, C., Jones, S., Myers, M., and Shair, M. “A Survey of Current Practice in the Development of Multimedia Systems,” Information and Software Technology (39), 1997, pp. 695-705. De Troyer, O. M. F.., and Leune, C. J. “WSDM: A User Centered Design Method for Web Sites,”Computer Networks (30:1-7), 1998, pp. 85-94.Eriksen, L. B., Skov, M., and Stage, J. An Empirical Study of Two Training and Assessment Multimedia Design Processes, Intermedia Research Centre, Aalborg University, Aalborg, Denmark, 1998.Falconer, D. J., and Hodgett, R. A. “Why Executives Don't Respond to Your Survey,” in Proceedings of the Tenth Australasian Conference on Information Systems, Wellington, New Zealand, December 1-3, 1999, pp. 279-285.Garzotto, F., and Paolini, P. “HDM: A Model-Based Approach to Hypertext Application Design,” ACM Transactions on Information Systems (11:1), 1993, pp. 1-26. Isakowitz, T., Stohr, E. A., and Balasubramanian, P. “RMM: A Methodology for Structured Hypermedia Design,” Communications of the ACM (38:8), 1995, pp. 34-44.Lee, H., Lee, C., and Yoo, C. “A Scenario-Based Object-Oriented Hypermedia Design Methodology,” Information & Management (36), 1999a, pp. 121-138.Lee, H., Kim, J., Kim, Y. G., and Cho, S. H. “A View-Based Hypermedia Design Methodology,”Journal of Database Management (10:2), 1999b, pp. 3-13.Liu, M., Jones, C., and Hemstreet, S. “Interactive Multimedia Design and Production Processes,”Journal of Research on Computing in Education (30:3), 1998, pp. 254-280.Lowe, D., and Hall, W. Hypermedia and the Web: An Engineering Approach, Chichester, England: Wiley, 1999.Murugesan, S., Deshpande, Y., Hansen, S., and Ginige, A. “Web Engineering: A New Discipline for Web-Based System Development,” in Proceedings of the First International Conference on Software Engineering Workshop on Web Engineering, S. Murugesan and Y. Deshpande (eds.), Los Angeles, California, May 1999.Nanard, J., and Nanard, M. “Hypertext Design Environments and the Hypertext Design Process,”Communications of the ACM (38:8), 1995, pp. 49-56.Scharl, A. “A Conceptual, User-Centric Approach to Modeling Web Information Systems,” in Proceedings of the Fifth Australian World Wide Web Conference, R. Debreceny and A. Ellis (eds.), Ballina, NSW, Australia. April 17-20, 1999, pp. 33-49.Schwabe, D., and Rossi, G. “The Object-Oriented Hypermedia Design Model,” Communications of the ACM (38:8), 1995, pp. 45-48.Shneiderman, B. “Designing Information-Abundant Websites: Issues and Recommendations,”International Journal of Human-Computer Studies (47:1), 1997, pp. 5-29.Whitley, E. A. “Method-ism in Practice: Investigating the Relationship between Method and Understanding in Web Page Design,” in Proceedings of the Nineteenth International Conference on Information Systems, R. Hirschheim, M. Newman, and J. I. DeGross (eds.), Helsinki, Finland, December 1998, pp. 68-75.Part Number: Title 24About the AuthorsMichael Lang is a lecturer in information systems within the Department of Accountancy and Finance at the National University of Ireland, Galway. His research interests are business systems analysis and design; hypermedia and multimedia information systems development; requirements management; and IS/IT education. He is currently undertaking a Ph.D. in the area of hypermedia systems design at University College Cork, Ireland. He can be reached by e-mail at ng@nuigalway.ie.Chris Barry is a lecturer in information systems at the Department of Accoun-tancy and Finance, Faculty of Commerce, National University of Ireland, Galway. He has 15 years experience of research, lecturing, consultancy and practice in the information systems field. His expertise lies in the fields of systems analysis and design techniques and methodologies; requirements determination; multimedia and Web design and development; computer-aided systems engineering (CASE); management decision systems. He can be reached by e-mail at Chris.Barry@nuigalway.ie.。

相关文档
最新文档