

Data Mining, Data Analysis and Data Visualization Exam Questions


1. Data Mining is also referred to as ________. Options: data analysis | data discovery (correct) | data recovery | Data visualization
2. Data Mining is a method and technique inclusive of ________. Options: data analysis (correct) | data discovery | Data visualization | data recovery
3. Which step of Data Science consumes almost 80% of the work period of the procedure? Options: Accumulating the data | Analyzing the data | Wrangling the data (correct) | Recapitulation of the Data
4. Which step of Data Science allows the model to consistently improve, provide punctual performance, and deliver approximate results? Options: Wrangling the data | Accumulating the data | Recapitulation of the Data (correct) | Analyzing the data
5. Which Data Science tool is a robust machine learning library that allows the implementation of deep learning algorithms? Options: Tableau | D3.js | Apache Spark | TensorFlow (correct)
6. What is the main aim of Data Mining? Options: to obtain data from a small number of sources and transform it into a more useful version of itself | to obtain data from a small number of sources and transform it into a less useful version of itself | to obtain data from a great number of sources and transform it into a less useful version of itself | to obtain data from a great number of sources and transform it into a more useful version of itself (correct)
7. In which step of data mining are irrelevant patterns eliminated to avoid cluttering? Options: Cleaning the data (correct) | Evaluating the data | Conversion of the data | Integration of data
8. Data Science is mainly used for ________ purposes; data mining is mainly used for ________ purposes. Options: scientific, business (correct) | business, scientific | scientific, scientific | None
9. A pandas ________ is a one-dimensional labeled array capable of holding data of any type (integer, string, float, Python objects, etc.). Options: Series (correct) | Frame | Panel | None
10. How many principal components does a pandas DataFrame consist of? Options: 4 | 2 | 1 | 3 (correct)
11. The important data structure(s) of pandas is/are ________. Options: Series | DataFrame | Both (correct) | None of the above
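The pandas answers above (a Series is the one-dimensional labeled structure) can be checked by running them; a minimal sketch:

```python
# Verifying the Series answers: pandas Series is a 1-D labeled array that
# can hold data of any type (mixed types fall back to object dtype).
import pandas as pd

s = pd.Series([10, "a", 3.5])          # integer, string, and float together
print(s.index.tolist())                # default labels: [0, 1, 2]
print(s.ndim)                          # 1 -> one-dimensional
```

Note that the constructor is the capitalized `Series()`, matching the keyed answer to question 13 below.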
12. Which of the following commands is used to install pandas? Options: pip install pandas (correct) | install pandas | pip pandas | None of the above
13. Which of the following functions helps to create a Series? Options: series() | Series() (correct) | createSeries() | None of the above
14. NumPy stands for ________. Options: Numbering Python | Number In Python | Numerical Python (correct) | None of the above
15. Which of the following is not a sub-package of SciPy? Options: scipy.integrate | scipy.source (correct) | scipy.interpolate | scipy.signal
16. How do you import the constants package in SciPy? Options: import scipy.constants | from scipy.constants (correct) | import scipy.constants.package | from scipy.constants.package
17. ________ involves looking at and describing the data set from different angles and then summarizing it. Options: Data Frame | Data Visualization | EDA (correct) | All of the above
18. What involves the preparation of data sets for analysis by removing irregularities in the data, so that these irregularities do not affect further steps in the process of data analysis and machine learning model building? Options: Data Analysis | EDA (correct) | Data Frame | None of the above
19. Which is not a utility of EDA? Options: Maximize the insight in the data set | Detect outliers and anomalies | Visualization of data | Test underlying assumptions (correct)
20. What can hamper the further steps in the machine learning model building process if not performed properly? Options: Recapitulation of the Data | Accumulating the data | EDA (correct) | None of the above
21. Which EDA plot is used to check the dependency between two variables? Options: Histograms | Scatter plots (correct) | Maps | Time series plots
22. What function will tell you the top records in the data set? Options: shape | head (correct) | show | all of the above
23. What type of data is useful for internal policymaking and business strategy building for an organization? Options: public data | private data (correct) | both | None of the above
24. The ________ function can "fill in" NA values with non-null data. Options: head | fillna (correct) | shape | all of the above
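The `head()` and `fillna()` answers above can be demonstrated directly; a minimal sketch:

```python
# head() returns the top records; fillna() fills NA values with non-null data.
import numpy as np
import pandas as pd

df = pd.DataFrame({"x": [1.0, np.nan, 3.0, 4.0]})
top = df.head(2)                       # first two records only
filled = df.fillna(0.0)                # every NaN replaced by 0.0
```

`fillna` returns a new DataFrame by default; the original `df` keeps its NaN unless `inplace=True` or reassignment is used.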
25. If you want to simply exclude the missing values, which function, used along with the axis argument, will be applied? Options: fillna | replace | dropna (correct) | isnull
26. Which attribute of DataFrame is used to display the data type of each column? Options: Dtypes | DTypes | dtypes (correct) | datatypes
27. Which function is used to load data from a CSV file into a DataFrame? Options: read.csv() | readcsv() | read_csv() (correct) | Read_csv()
28. How do you display the first row of DataFrame 'DF'? Options: print(DF.head(1)) | print(DF[0:1]) | print(DF.iloc[0:1]) | All of the above (correct)
29. The spread function is known as ________ in spreadsheets. Options: pivot | unpivot (correct) | cast | order
30. ________ extracts a subset of rows from a data frame based on logical conditions. Options: rename | filter (correct) | set | subset
31. We can shift a DataFrame's index by a certain number of periods using the ________ method. Options: melt() | merge() | tail() | shift() (correct)
32. We can join melted DataFrames into one Analytical Base Table using the ________ function. Options: join() | append() | merge() (correct) | truncate()
33. What method is used to concatenate datasets along an axis? Options: concatenate() | concat() (correct) | add() | merge()
34. Rows can be ________ if the number of missing values is insignificant, as this would not impact the overall analysis results. Options: deleted (correct) | updated | added | all
35. There is a specific reason behind a missing value. Which term stands for "missing not at random"? Options: MCAR | MAR | MNAR (correct) | None of the above
36. While plotting data, some values of one variable may not lie beyond the expected range, but when you plot the data against some other variable, these values may lie far from the expected value. Identify the type of outliers. Options: Univariate outliers | Multivariate outliers (correct) | ManyVariate outliers | None of the above
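The `dropna`, `dtypes`, and `read_csv` answers above can be confirmed in a few lines; a minimal sketch using an in-memory CSV:

```python
# read_csv() (not read.csv()) loads a CSV; the dtypes attribute reports
# per-column types; dropna(axis=0) excludes rows that contain missing values.
import io
import numpy as np
import pandas as pd

csv = io.StringIO("a,b\n1,x\n2,y\n")
df = pd.read_csv(csv)                  # two rows, two columns
print(df.dtypes)                       # one dtype entry per column

df2 = pd.DataFrame({"v": [1.0, np.nan]})
kept = df2.dropna(axis=0)              # the NaN row is excluded
```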
37. If numeric values are stored as strings, it is not possible to calculate metrics such as mean, median, etc. What type of data cleaning exercise would you perform? Options: Convert incorrect data types (correct) | Correct the values that lie beyond the range | Correct the values not belonging in the list | Fix incorrect structure
38. Some rows are not required in the analysis, e.g. if only observations before or after a particular date are needed. Which step do we perform when filtering data? Options: Deduplicate data / remove duplicated data | Filter rows to keep only the relevant data (correct) | Filter columns: pick columns relevant to the analysis | Bring the data together: group by required keys, aggregate the rest
39. You need to ________ the data in order to get what you need for your analysis. Options: search | length | order | filter (correct)
40. Write the output of the following: >>> import pandas as pd; >>> series1 = pd.Series([10,20,30]); >>> print(series1). Options: "0 10 / 1 20 / 2 30 / dtype: int64" (correct) | "10 / 20 / 30 / dtype: int64" | "0 1 2 / dtype: int64" | None of the above
41. What will be the output of the following code? import numpy as np; a = np.array([1, 2, 3], dtype = complex); print(a). Options: [[ 1.+0.j, 2.+0.j, 3.+0.j]] | [ 1.+0.j] | Error | [ 1.+0.j, 2.+0.j, 3.+0.j] (correct)
42. What will be the output of the following code? import numpy as np; a = np.array([1,2,3]); print(a). Options: [[1, 2, 3]] | [1] | [1, 2, 3] (correct) | Error
43. What will be the output of the following code? import numpy as np; dt = np.dtype('i4'); print(dt). Options: int32 (correct) | int64 | int128 | int16
44. What will be the output of the following code? import numpy as np; dt = np.dtype([('age', np.int8)]); a = np.array([(10,),(20,),(30,)], dtype = dt); print(a['age']). Options: [[10 20 30]] | [10 20 30] (correct) | [10] | Error
45. We can add a new row to a DataFrame using the ________ method. Options: rloc[ ] | iloc[ ] | loc[ ] (correct) | None of the above
46. Function ________ can be used to drop missing values. Options: fillna() | isnull() | dropna() (correct) | delna()
47. The function to perform pivoting with DataFrames having duplicate values is ________. Options: pivot(unique = True) | pivot() | pivot_table(unique = True) | pivot_table() (correct)
48. A technique which, when performed on a DataFrame, rearranges the data from rows and columns into a report form is called ________. Options: summarising | reporting | grouping | pivoting (correct)
49. The normal distribution is symmetric about the ________. Options: Variance | Mean (correct) | Standard deviation | Covariance
50. Write a statement to display "Amount" as the x-axis label (consider plt as an alias of matplotlib.pyplot). Options: bel("Amount") | plt.xlabel("Amount") (correct) | plt.xlabel(Amount) | None of the above
51. Fill in the blank in the given code, if we want to plot a line chart of list 'a' vs list 'b': a = [1, 2, 3, 4, 5]; b = [10, 20, 30, 40, 50]; import matplotlib.pyplot as plt; plt.plot ________. Options: (a, b) (correct) | (b, a) | [a, b] | None of the above
52. # Loading the dataset: import seaborn as sns; tips = sns.load_dataset("tips"); tips.head(). In this code, what is tips? Options: plot | dataset name (correct) | palette | None of the above
53. Visualization can make sense of information by helping to find ________(s) in the data and supporting (or disproving) ideas about the data. Options: Analyze | Relationship (correct) | Accessible | Precise
54. Which option provides a detailed data analysis tool with an easy-to-use interface and graphical design options for visuals? Options: Jupyter Notebook | Sisense | Tableau Desktop | MATLAB (correct)
55. Consider a bank having thousands of ATMs across China; in every transaction, many variables are recorded. Which among the following is not a fact variable? Options: Transaction charge amount | Withdrawal amount | Account balance after withdrawal | ATM ID (correct)
56. Which module of the matplotlib library is required for plotting graphs? Options: plot | matplot | pyplot (correct) | None of the above
57. Write a statement to display "Amount" as the x-axis label (consider plt as an alias of matplotlib.pyplot). Options: bel("Amount") | plt.xlabel("Amount") (correct) | plt.xlabel(Amount) | None of the above
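The "what will be the output" NumPy questions above (41-44) can be verified by executing them; a minimal sketch (the quiz shows Python 2 `print a`, rewritten here as a function call):

```python
# Running the quiz snippets to confirm the keyed answers.
import numpy as np

a = np.array([1, 2, 3], dtype=complex)
print(a)                               # three complex entries, each with +0.j

dt = np.dtype('i4')                    # 'i4' means a 4-byte signed integer
print(dt)                              # int32

rec = np.array([(10,), (20,), (30,)], dtype=np.dtype([('age', np.int8)]))
print(rec['age'])                      # the 'age' field of every record
```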
58. What will happen when you pass 'h' as a value to the orient parameter of the barplot function? Options: It will make the orientation vertical | It will make the orientation horizontal (correct) | It will make a line graph | None of the above
59. What is the name of the function used to view the available style parameters? Options: set_style() | axes_style() (correct) | despine() | show_style()
60. In a stacked barplot, subgroups are displayed as bars on top of each other. How many parameters does the barplot() function have to draw stacked bars? Options: One | Two | None (correct) | three
61. In a line chart or line plot, which parameter is an object determining how to draw the markers for different levels of the style variable? Options: x, y | hue | markers (correct) | legend
62. ________ is similar to a box plot, but with a rotated plot on each side giving more information about the density estimate on the y-axis. Options: Pie Chart | Line Chart | Violin Chart (correct) | None
63. By default, the plot() function plots a ________. Options: Histogram | Bar graph | Line chart (correct) | Pie chart
64. ________ are column charts where each column represents a range of values, and the height of a column corresponds to how many values fall in that range. Options: Bar graphs | Histograms (correct) | Line charts | Pie charts
65. The ________ project builds on top of pandas and matplotlib to provide easy plotting of data. Options: yhat | Seaborn (correct) | Vincent | Pychart
66. A palette means a ________ surface on which a painter arranges and mixes paints. Options: circle | rectangular | flat (correct) | all
67. The default theme of the plot will be ________. Options: Darkgrid (correct) | Whitegrid | Dark | Ticks
68. Outliers should be treated after investigating the data and drawing insights from the dataset.
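The matplotlib answers above (pyplot as the plotting module, plot() drawing a line chart by default, xlabel() setting the x-axis label) can be checked headlessly; a minimal sketch:

```python
# plot() produces Line2D artists by default (a line chart); xlabel() sets
# the x-axis label. The Agg backend keeps this runnable without a display.
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

a = [1, 2, 3, 4, 5]
b = [10, 20, 30, 40, 50]
lines = plt.plot(a, b)                 # fills the blank of question 51: (a, b)
plt.xlabel("Amount")                   # the keyed answer to questions 50/57
```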

Automatic identification method for dead chicken in cage based on infrared thermal imaging technology


Journal of Hebei Agricultural University, Vol. 46, No. 3, May 2023

Automatic identification method for dead chicken in cage based on infrared thermal imaging technology

JIA Yanlin, XUE Hao, ZHOU Zixuan, ZHAO Xueqian, HUO Xiaojing, LI Lihua (College of Mechanical and Electrical Engineering, Hebei Agricultural University, Baoding 071001, Hebei, China)

Abstract: Intensive chicken farms currently rely mainly on stacked multi-tier cage housing. During dead-chicken inspection rounds, workers must climb ladders many times; the labor intensity is high while the work itself is simple, mechanical, and repetitive.

To improve labor efficiency and supplement manual labor with artificial intelligence, this paper combines image recognition analysis with infrared thermal imaging and adopts a dead-chicken identification method that first separates the chicken-head region with a temperature threshold, then extracts morphological features, and finally classifies them with a support vector machine.

First, the images are preprocessed. Using the linear function relating infrared temperature (T) to gray value (G), the temperature threshold that separates the chicken head from the background is determined; the average gray value of the chicken head in the infrared image is then computed, compared with the set separation threshold, and the marked sample targets in the image are saved.
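The threshold step described above can be sketched as follows; the linear T-G calibration constants and the threshold value are illustrative assumptions, not the paper's calibration:

```python
# Sketch of the temperature-threshold separation step, assuming a linear
# mapping T = k*G + b from gray value G to temperature T. The constants
# k, b and T_strip are hypothetical, not taken from the paper.
import numpy as np

k, b = 0.15, 10.0                 # assumed T-G calibration: T = k*G + b
T_strip = 32.0                    # assumed stripping temperature [deg C]
G_strip = (T_strip - b) / k       # same threshold expressed as a gray value

def head_mask(gray):
    """Boolean mask of pixels warmer than the stripping threshold."""
    return gray >= G_strip

def is_candidate(gray, mask):
    """Keep the target when the mean gray value of the masked region
    reaches the stripping threshold, mirroring the comparison step."""
    region = gray[mask]
    return region.size > 0 and region.mean() >= G_strip
```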

Morphological feature vectors are extracted from the chicken-head samples and screened with XGBoost; the five top-scoring features, circularity R, compactness J, eccentricity E, major-axis length L, and minor-axis length S, are selected as the classification feature vector, and a support vector machine classifier finally distinguishes live chickens from dead ones.
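The paper does not print its exact feature formulas, so the following sketch uses the common textbook definitions of the selected shape features (an assumption):

```python
# Standard-definition sketches of the screened shape features.
import math

def circularity(area, perimeter):
    # R = 4*pi*A / P^2; equals 1 for a perfect circle
    return 4.0 * math.pi * area / perimeter ** 2

def compactness(area, perimeter):
    # J = P^2 / (4*pi*A); the reciprocal of circularity
    return perimeter ** 2 / (4.0 * math.pi * area)

def eccentricity(major_axis, minor_axis):
    # E = sqrt(1 - (S/L)^2) from fitted-ellipse axis lengths L >= S
    return math.sqrt(1.0 - (minor_axis / major_axis) ** 2)
```

A circle of radius 3 (area 9π, perimeter 6π) gives circularity 1, and equal axis lengths give eccentricity 0, which is a quick sanity check on the definitions.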

Experimental results show that, when comparing classification models, the dead-chicken classification accuracy of the decision-tree algorithm was 87.5%, while both the BP neural network and the support vector machine reached 91.67%; among these, the support vector machine had the highest recall, F1 score, and AUC.
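The reported accuracy, recall, and F1 follow from a confusion matrix; the counts below (a hypothetical 24-sample test set) are chosen only so the accuracy matches the 91.67% reported for the SVM and are not the paper's data:

```python
# Classification metrics from confusion-matrix counts.
def accuracy(tp, fp, fn, tn):
    return (tp + tn) / (tp + fp + fn + tn)

def recall(tp, fn):
    return tp / (tp + fn)

def precision(tp, fp):
    return tp / (tp + fp)

def f1(tp, fp, fn):
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)
```

With 11 true positives, 1 false positive, 1 false negative, and 11 true negatives, accuracy is 22/24 = 91.67%.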

This provides a new method for the automatic identification of dead poultry in multi-tier cage housing.

Keywords: infrared thermal imaging; image recognition; dead chicken; multiple feature values; XGBoost; SVM
CLC numbers: S24; S831    Open Science Identifier (OSID)    Document code: A
Article number: 1000-1573(2023)03-0105-08    DOI: 10.13320/ki.jauh.2023.0049

Automatic identification method for dead chicken in cage based on infrared thermal imaging technology

JIA Yanlin, XUE Hao, ZHOU Zixuan, ZHAO Xueqian, HUO Xiaojing, LI Lihua (College of Mechanical and Electrical Engineering, Hebei Agricultural University, Baoding 071001, China)

Abstract: At present, the intensive chicken farm mainly adopts the layered three-dimensional cage mode to inspect the dead chickens. Workers need to climb the escalator many times to screen out 10-15 dead chickens from nearly 30000 chickens in the cages, which takes at least one and a half hours. The labor intensity is high, but the work is simple and repetitive. In order to improve labor efficiency and increase artificial intelligence to supplement labor-oriented talents, this paper combined image recognition analysis with infrared thermal imaging technology, and proposed a method for dead chicken recognition. The chicken head feature was stripped by temperature threshold followed by extraction of the morphological feature combined with support vector machine. The image was preprocessed according to the linear function of infrared temperature (T) - gray value (G) to determine the stripping temperature threshold of chicken head from background. The average gray value of the chicken head in the infrared thermal image was calculated and compared with the set stripping threshold to save the sample target marked in the image.

Received: 2021-07-08
Funding: National Natural Science Foundation of China (31902209); Hebei Provincial Key R&D Program (20327220D; 20326630D); Hebei Modern Agricultural Industry Innovation Team Post Scientist (HBCT2018060204); Hebei Provincial Science and Technology Program (22326607D).
First author: JIA Yanlin (1996-), female, from Shijiazhuang, Hebei; master's student engaged in research on environmental control and intelligent equipment for facility livestock and poultry farming. E-mail: *****************
Corresponding author: LI Lihua (1979-), female, from Tangshan, Hebei; Ph.D., professor engaged in research on environmental control and intelligent equipment for facility livestock and poultry farming. E-mail: *************.cn

Domestic farms mostly raise laying hens in multi-tier cage mode, with high stocking density and a poor indoor environment. During manual inspection for dead chickens, keepers inhale floating dust and harmful gases and must climb up and down ladders to pick out the dead birds, which is time-consuming and laborious [1-3].

Translated Literature


Generation of planar and helical elliptical gears by application of rack-cutter, hob, and shaper

Faydor L. Litvin (a), Ignacio Gonzalez-Perez (b,*), Kenji Yukishima (c), Alfonso Fuentes (b), Kenichi Hayasaka (c)

(a) Department of Mechanical and Industrial Engineering, University of Illinois at Chicago, United States
(b) Department of Mechanical Engineering, Polytechnic University of Cartagena, Spain
(c) Gear R&D Department, Research and Development Operations, Yamaha Motor Co., Japan

Received 27 March 2007; accepted 3 May 2007; available online 13 May 2007
Comput. Methods Appl. Mech. Engrg. 196 (2007) 4321-4336
(*) Corresponding author. Address: Universidad Politecnica de Cartagena, Departamento de Ingenieria Mecanica, Campus Universitario Muralla del Mar, C/ Doctor Fleming, s/n, 30202 Cartagena, Spain. Tel.: +34 968 326 429; fax: +34 968 326 449. E-mail address: ignacio.gonzalez@upct.es (I. Gonzalez-Perez).

Abstract

The developed approaches are directed at obtaining: (a) the generated surfaces of elliptical gears by an enveloping process, and (b) computerization of the process of generation by application of existing equipment and tools. A matrix approach is proposed and developed for derivation of the equation of meshing. The developed theory is illustrated with examples of generation of planar and helical elliptical gears. © 2007 Elsevier B.V. All rights reserved.

Keywords: Theory of gearing; Elliptical gears; Generation methods

1. Introduction

Designers have tried for many years to apply non-circular gears in automatic machines and instruments. The obstacle was the lack of effective methods of generation based on enveloping theory applied to non-circular gears. Previously, generation was based on simulation of meshing of the generating tool with master-gears, as developed by Fellows [6] and Bopp and Reuther [3]. The breakthrough happened in 1949-1951 with the development of enveloping methods based on meshing of the generating tool (rack-cutter, hob, shaper) with a non-circular gear [11,12,15]. The idea is illustrated for the case of application of a rack-cutter as follows:

(i) Centrode 3 of the rack-cutter is a straight line t-t that is a common tangent to centrodes 1 and 2 of mating non-circular gears 1 and 2 and rolls over them (Fig. 1).
(ii) Rolling is provided wherein the rack-cutter translates along tangent t-t and is rotated about the instantaneous center of rotation I.

The related motions of the rack-cutter and the non-circular gear may be determined by considering the motions of the generating tool and one of the centrodes of the pair as follows (Fig. 2):

(a) The rack-cutter centrode (denoted I) is in mesh with centrode II of a non-circular gear.
(b) Rolling is provided by observation of the equation

$\mathbf{v}^{(I)} = \mathbf{v}_{\mathrm{rot}}^{(II)} + \mathbf{v}_{\mathrm{tr}}^{(II)}.$  (1)

Eq. (1) is obtained considering that the rack-cutter performs only translational motion with velocity $\mathbf{v}^{(I)}$ along the common tangent to centrodes I and II. The non-circular gear II performs rotational motion about the center of rotation $O_{II}$ and translational motion in the direction perpendicular to t-t. Vectors $\mathbf{v}_{\mathrm{rot}}^{(II)}$ and $\mathbf{v}_{\mathrm{tr}}^{(II)}$ represent the rotational and translational velocities of gear II (Fig. 2a). The drawings of Fig. 2b illustrate the positions of rack-cutter I and non-circular gear II in the fixed coordinate system. It will be shown in Section 2.1 that the functions $\phi^{(II)}(x_f^{(O_I)})$ and $y_f^{(II)}(x_f^{(O_I)})$ are nonlinear.

Fig. 3 shows the remodelled cutting machine (1951) applied for generation of non-circular gears by the enveloping method [11,13]. The functions $\phi^{(II)}(x_f^{(O_I)})$ and $y_f^{(II)}(x_f^{(O_I)})$ were generated by application of two cam mechanisms.

Fig. 3. Remodelled cutting machine for generation of non-circular gears (year 1951).
These functions have been computerized as discussed in [15].The list of references[1,2,4,7,9–14,16–20]cover titles of works of many researchers.Numerical examples illustrate the ideas developed in the paper.2.Generation of elliptical gears by rack-cutter2.1.Algorithm of rolling motionsThe derivation of the algorithm is based on application of following planar coordinate systems(Fig.4a and b):(i) movable coordinate systems S c and S1rigidly connected to the rack-cutter and the non-circular gear,(ii)an auxiliary movable coordinate system S n andfixed coordinate system S f.Coordinate systems S c,S1,S f,and S n are planar ones, however,the developed algorithm covers the concept of rolling for generation of planar and as well helical gears. Although the derivations are discussed for the cases of elliptical gears,the obtained results may be applied for other types of non-circular gears.Figs.4a and b show the initial and current positions ofFig.3.Remodelled cutting machine for generation of noncircular gears(year1951).4322 F.L.Litvin et al./Comput.Methods Appl.Mech.Engrg.196(2007)4321–4336Observation of conditions of pure rolling,yield the follow-ing relations between the motions of rack-cutter c and non-circular gear1(see as well[15])xðO cÞf ðhÞ¼ÀsðhÞþrðhÞcos l¼ÀZ hrðhÞsin ld hþrðhÞcos l;ð2ÞyðO cÞfðhÞ¼ÀrðhÞsin l;ð3Þw 1ðhÞ¼hþlÀl0;l0¼p2;ð4ÞlðhÞ¼arctanÀ1Àe cos h e sin h:ð5ÞHere r(h)is the polar equation of the centrode of non-cir-cular gear;l is the angle formed by position vector O1M1 and the tangent to centrodes c and1;h is the polar angle of centrode1;M0and M1are the initial and current points of tangency of centrode c and1;sðhÞ¼M0M1¼O c M1is the length of the arc that is traced out on centrodes c and1;w1represents the current position of coordinate sys-tem S1infixed coordinate system S f;yðO1Þf ¼O n O f repre-sents the current position of coordinate system S n infixed coordinate system S f;coordinate system S n is a movable coordinate system that performs translational 
motionalong axis y f(Fig.4b).2.2.Derivation of rack-cutter generating surfaces represented in coordinate system S cTwo types of generating surfaces are considered:(i)a planar rack-cutter applied for generation of planar gears and is represented in coordinate system S t(Fig.5a),and skew rack-cutter that is formed by the motion of S t in coor-dinate system S c as illustrated in Fig.5b and c.Surface R c of the skew rack-cutter and the surface unit normal are represented in S c considering the coordinate transformation from S t to S c.This yields the following equa-r cðu c;l cÞ¼Æu c sin a cÇp m n4ÀÁcos b cþl c sin b cu cos a cÇu c sin a cÆp m nÀÁsin b cþl c cos b c12666437775;ð6Þn c¼Çcos a c cos b csin a cÆcos a c sin b c2435:ð7ÞEq.(6)represent in S c the skew rack-cutter R c as a plane one with surface parametersðu c;l cÞ.The unit normal n c is represented by Eq.(7).The upper and lower signs in Eqs.(6)and(7)correspond to the left and right sides of the R c.Taking in Eqs.(6)and(7)b c=0,we obtain the planar rack-cutter surface and its unit normal for generation ofF.L.Litvin et al./Comput.Methods Appl.Mech.Engrg.196(2007)4321–433643232.3.Generation of surface R1of elliptical gear by rack-cutterThe derivation of R1is based on following steps:(i)Determination of coordinate transformation betweencoordinate systems S c and S1;this allows to represent the family of generating rack-cutter surfaces in coor-dinate system S1in three-parametric form asr1ðu c;l c;hÞ¼M1cðhÞr cðu c;l cÞ;ð8Þwhere matrix M1c represents the coordinate transfor-mation in procedures applied from S c to S1.(ii)Derivation of equation of meshingf1cðu c;l c;hÞ¼0ð9Þthat relates parametersðu c;l c;hÞ.Simultaneous con-sideration of Eqs.(8)and(9)determines surface R1 of helical elliptical gear generated by the skew rack-cutter.2.3.1.Coordinate transformation in transition from coordinate system S c to S1Derivation of Eq.(8)is performed as follows:r1ðu c;l c;hÞ¼M1nðhÞM nfðhÞM fcðhÞr cðu c;l cÞ¼cos w1sin w100Àsin 
w1cos w10000100001266437751000010ÀyðO1Þf0010000126643775Â100xðO cÞf0100001000012666437775r cðu c;l cÞ¼cos w1sin w10xðO cÞfcos w1ÀyðO1Þfsin w1Àsin w1cos w10ÀxðO cÞfsin w1ÀyðO1Þfcos w1001000012666437775r cðu c;l cÞ:ð10Þ2.3.2.Matrix derivation of equation of meshingf1cðu c;l c;hÞ¼0The derivation of equation of meshing for a family of generating surfaces is represented in Differential Geometry (see,for instance[8])aso r1 o u c Âo r1 o l cÁo r1o h¼0:ð11ÞThe authors have applied matrix approach for derivation of equation of meshing applying(see below)matrices 3·3.Such an approach enables to computerize the deriva-tions and is performed as follows:r1ðu c;l c;hÞ¼cos w1sin w10xðO cÞfcos w1ÀyðO1Þfsin w1Àsin w1cos w10ÀxðO cÞfsin w1ÀyðO1Þfcos w1001000012666437775Considering cartesian coordinates instead of homogeneouscoordinates,gear tooth surface may be obtained asq1ðu c;l c;hÞ¼cos w1sin w1Àsin w1cos w10001264375qcðu c;l cÞþcos w1sin w1Àsin w1cos w10001264375xðO cÞfÀyðO1Þf26643775¼L1c q cþL1c R:ð12ÞHere,R¼O n O c¼xðO cÞfÀyðO1Þfh i T:ð13ÞThe proposed approach allows to obtain:(i)relative velocity of rack-cutter c with respect to gear1vðc1Þ1¼d q1d t¼_L1c q cþ_L1c RþL1c_Rð14Þ(ii)or relative velocity of gear1with respect to rack-cut-ter cvð1cÞc¼ÀL c1vðc1Þ1¼ÀL c1ð_L1c q cþ_L1c RþL1c_RÞ:ð15ÞThefinal expression of equation of meshing isfðc1Þ1ðu c;l c;hÞ¼ðL1c nðcÞcÞÁð_L1c q cþ_L1c RþL1c_RÞ¼0ð16Þorfð1cÞcðu c;l c;hÞ¼nðcÞcÁ½ÀL c1ð_L1c q cþ_L1c RþL1c_RÞ ¼0;ð17Þwhere_L1c¼Àsin w1cos w10Àcos w1Àsin w10000264375_w1;ð18Þ_R¼_O n O c¼_xðO cÞfÀ_yðO1Þfh i T:ð19ÞDerivations of derivatives_w1,_xðO cÞf,and_yðO1Þfare as follows:_w1¼d w1d hd hd t;ð20Þ_xðO cÞf¼d xðO cÞfd hd hd t;ð21Þ_yðO1Þf¼d yðO1Þfd hd hd t;ð22Þwhere d w1d h,d xðO cÞfd h,and d yðO1Þfd hare given in Appendix B.The tooth surface of helical elliptical gear is determined4324 F.L.Litvin et al./Comput.Methods Appl.Mech.Engrg.196(2007)4321–43362.4.Examples of planar and helical elliptical gears 2.4.1.Planar elliptical gearThe design 
parameters applied are represented in Table 1.The planar elliptical gear with parameters above is shown in Fig.6.The contact lines between the rack-cutter and the gear (Fig.7)are shown on the respective contacting surfaces:(a)for tooth number 1,and (b)tooth number 8(see the notation of tooth numbers in Fig.6).A line of contact of generating and generated surfaces (the rack-cutter and generated gear)is obtained as follows:(i)The line of contact on the generating surface (on the rack-cutter)is obtained considering simultaneously Eq.(6)of the rack-cutter and equation of meshing (17)and taking parameter w 1=const.Parameter w 1(h )is determined by Eqs.(4)and (5).Parameter b c is equal to zero since a planar gear is generated.(ii)The line of contact on the generated surface of theelliptic gear is obtained considering simultaneously Eq.(10)(or (12))of the family of generating surfaces and equation of meshing (17)and taking again w 1=const.2.4.2.Helical elliptical gearThe designed parameters for the discussed example are represented in Table 2.The helical elliptical gear with parameters above is shown in Fig.8.The contact lines between the rack-cutter and the gear (Fig.9a,(b))are shown on the respective contacting surfaces:(a)on the rack-cutter,the contact lines are parallel straight lines;(b)on helical elliptical gear;the contact lines are straightTable 1Planar elliptical gearModule,m n [mm]2.0Pressure angle,a c [degrees]20Eccentricity,e0.5Number of teeth,N 127Half length of major axis,a [mm]28.901Face width,w [mm]15Fig.6.Illustration of planar elliptical gear with parameters shown in Table 1.Table 2Helical elliptical gearModule,m n [mm]2.0Pressure angle,a c [degrees]20Eccentricity,e0.5Helix angle,b c [degrees]20(left hand)Number of teeth,N 127Half length of major axis,a [mm]30.756Face width,w [mm]15F.L.Litvin et al./Comput.Methods Appl.Mech.Engrg.196(2007)4321–43364325lines as well.The contact lines shown in Fig.9a and b have been determined for tooth numbers 1and 
8(Fig.8).The contact lines discussed above are determined by application of Eqs.(12)of family of generating rack-cutter surfaces and equation of meshing (17),but by taking w 1=const.3.Generation of elliptical gears by hob3.1.Derivation of worm thread generating surfaces represented in coordinate system S wApplication of a grinding worm or a hob for generation of elliptical gears may result in an improvement of produc-tivity and reliability of this type of gears.A worm thread surface R w ,that is in imaginary meshing with rack-cutter tooth surface R c ,is being determined.Conditions of meshing between both surfaces,R w and R c ,allows worm thread surface to be determined.The proce-dure is as follows:(1)Worm shaft and gear shaft are crossing by angle c wg .Fig.10a shows the installation of worm axode on rack-cutter axode P .Angle c wg is given asc wg ¼p2Àb c þk w ;ð23Þwherein b c is the helix angle of the skew rack-cutter and k w is the lead angle of the worm.(2)Moveable coordinate systems S c and S w will be rig-idly connected to surfaces R c and R w ,respectively.Fixed coordinate systems S f and S s are considered for definition of motions of worm and rack-cutter tooth surfaces,respectively (see Fig.10b).(3)Surface R c is considered as given.Two motions areapplied as follows:(i)Translation s c of coordinate system S c is per-formed along the straight line t –t .Line t –t ,at the pitch point O f ,and the helix of the gear (see Fig.10a),are in tangency.(ii)Rotation of coordinate system S w about axis z s ,w w ,is determined byw w ¼s cq w ;ð24Þwherein q w is the pitch radius of the grinding worm.(4)Worm thread surface R w is obtained by simultaneousconsideration of vector equationr w ðu c ;l c ;w w Þ¼M wc ðw w Þr c ðu c ;l c Þð25Þand equation ofmeshingFig.8.Illustration of helical elliptical gear with parameters shown in Table 2.4326 F.L.Litvin et al./Comput.Methods Appl.Mech.Engrg.196(2007)4321–4336o r c o u c Âo r c o l cÁo r co w w¼0:ð26ÞHere,w w is the generalized 
parameter of meshing;M wc is a4·4matrix that represents coordinate transformation of homogeneous coordinates from system S c to system S w.Worm thread surface R w may be considered for the pur-pose of simplicity as a surface with two independent parametersðh w;e wÞ.3.2.Generation of surface R1of elliptical gear by wormThe derivation of R1by a worm thread surface R w is based on following procedure(see Fig.11):(i)Two coordinate systems S w and S1are considered rig-idly connected to the worm thread surface and the to be determined gear tooth surface.Afixed reference system S f is considered for definition of motions of systems S w and S1.(ii)Worm thread surface R w is considered as given by vector R wðh w;e wÞ.(iii)Two sets of motions are provided to the worm:(a)Rotation/w about axis z w of the worm.(b)Translation s w along axis z f that is parallel to theaxis of the gear.Coordinate system S s is a movable coordinate system that is translated with system S w.(iv)Rotation and translation of the worm are accompa-nied by rotation and translation of the elliptical gear as follows:(a)Rotation w1about axis z1of the gear.Magnitudes w1and yðO1Þfmay be determined as func-tions of polar angle h(see Eqs.(3)and(4)).Magni-tudes w1and yðO1Þfare related with motions/w andF.L.Litvin et al./Comput.Methods Appl.Mech.Engrg.196(2007)4321–43364327y ðO c Þfðh Þ¼Àr ðh Þsin l ;ð27Þw 1ðh Þ¼h þl Àl 0;l 0¼p 2ð28Þand new function to be determined g ð/w ;s w ;h Þ¼0:ð29Þ(v)Coordinate transformation between systems S w andS 1determines the family of worm thread surfaces in system S 1asr 1ðh w ;e w ;/w ;s w Þ¼M 1w ð/w ;s w ÞR w ðh w ;e w Þ:ð30ÞHere,/w and s w are independent generalized param-eters of motion,that means that the generation is a double enveloping process;M 1w 4·4matrix de-scribes coordinate transformation from system S w to system S 1.(vi)Equations of meshingf ðw 1Þ1ðh w ;e w ;/w ;s w Þ¼0;ð31Þf ðw 1Þ2ðh w ;e w ;/w ;s w Þ¼0ð32Þrelate parameters ðh w ;e w ;/w ;s w Þ.Simultaneous 
consideration of Eqs. (30)–(32) determines surface $\Sigma_1$ of the elliptical gear.

3.2.1. Derivation of function $g(\phi_w, s_w, \theta) = 0$

The derivation is performed as follows (see Fig. 12):

(i) An imaginary rack-cutter is considered in simultaneous meshing with the elliptical gear and the worm. At the initial position, system $S_c$ coincides with system $S_f$, and the common tangent line $t$–$t$ between the three surfaces $\Sigma_w$, $\Sigma_1$, and $\Sigma_c$ is at position $t_0$.

(ii) Due to rotation and translation of the worm on $\phi_w$ and $s_w$, the common tangent $t$–$t$ will take position $t_2$. The location of system $S_c$ in $S_f$ is determined by $x_f^{(O_c)}$.

(iii) The displacement of system $S_c$ may be obtained as the sum of independent displacements $\Delta x_{f1}^{(O_c)}$ and $\Delta x_{f2}^{(O_c)}$:

$$\Delta x_f^{(O_c)} = \Delta x_{f1}^{(O_c)} + \Delta x_{f2}^{(O_c)}. \quad (33)$$

Displacement $\Delta x_{f1}^{(O_c)} = \overline{O_f S}$ is caused by translation $s_w$ and is defined by positions $t_0$ and $t_1$. Displacement $\Delta x_{f2}^{(O_c)} = \overline{S O_c}$ results from rotation $\phi_w$ and is defined by positions $t_1$ and $t_2$.

(iv) The illustrations of Fig. 12b yield the following relations:

$$-\Delta x_{f1}^{(O_c)} = \overline{O_f S} = \tan\beta_c\, s_w, \quad (34)$$
$$-\Delta x_{f2}^{(O_c)} = \overline{S O_c} = \frac{p_w}{\cos\lambda_w \cos\beta_c}\,\phi_w, \quad (35)$$

where $p_w$ is the pitch of the worm.

(v) Since $x_f^{(O_c)}$ depends on polar angle $\theta$ (see relation (2)), function $g(\phi_w, s_w, \theta) = 0$ is finally obtained as

$$g(\phi_w, s_w, \theta) = x_f^{(O_c)}(\theta) + \tan\beta_c\, s_w + \frac{p_w}{\cos\lambda_w \cos\beta_c}\,\phi_w = 0. \quad (36)$$

F.L. Litvin et al. / Comput. Methods Appl. Mech. Engrg. 196 (2007) 4321–4336

3.2.2. Coordinate transformation in transition from coordinate system $S_w$ to $S_1$

Derivation of Eq. (30) is performed as follows:

$$\mathbf{r}_1(\theta_w, \varepsilon_w, \phi_w, s_w) = \mathbf{M}_{1f}(\phi_w, s_w)\,\mathbf{M}_{fs}(s_w)\,\mathbf{M}_{sw}(\phi_w)\,\mathbf{R}_w(\theta_w, \varepsilon_w)$$
$$= \begin{bmatrix} \cos\psi_1 & \sin\psi_1 & 0 & -y_f^{(O_1)}\sin\psi_1 \\ -\sin\psi_1 & \cos\psi_1 & 0 & -y_f^{(O_1)}\cos\psi_1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} \cos\gamma_{wg} & 0 & -\sin\gamma_{wg} & 0 \\ 0 & 1 & 0 & q_w \\ \sin\gamma_{wg} & 0 & \cos\gamma_{wg} & s_w \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} \cos\phi_w & -\sin\phi_w & 0 & 0 \\ \sin\phi_w & \cos\phi_w & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\mathbf{R}_w(\theta_w, \varepsilon_w). \quad (37)$$

3.2.3. Matrix derivation of equations of meshing $f_1^{(w1)}(\theta_w, \varepsilon_w, \phi_w, s_w) = 0$ and $f_2^{(w1)}(\theta_w, \varepsilon_w, \phi_w, s_w) = 0$

Matrix derivation of the equations of meshing is applied as follows:

(i) Vector position $\mathbf{r}_1$ in homogeneous coordinates is given by

$$\mathbf{r}_1(\theta_w, \varepsilon_w, \phi_w, s_w) = \mathbf{M}_{1w}(\phi_w, s_w)\,\mathbf{R}_w(\theta_w, \varepsilon_w) = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \sin\psi_1\,(q_w - y_f^{(O_1)}) \\ a_{21} & a_{22} & a_{23} & \cos\psi_1\,(q_w - y_f^{(O_1)}) \\ a_{31} & a_{32} & a_{33} & s_w \\ 0 & 0 & 0 & 1 \end{bmatrix} \mathbf{R}_w(\theta_w, \varepsilon_w), \quad (38)$$

wherein

$$a_{11} = \cos\psi_1 \cos\gamma_{wg} \cos\phi_w + \sin\psi_1 \sin\phi_w, \quad (39)$$
$$a_{12} = -\cos\psi_1 \cos\gamma_{wg} \sin\phi_w + \sin\psi_1 \cos\phi_w, \quad (40)$$
$$a_{13} = -\cos\psi_1 \sin\gamma_{wg}, \quad (41)$$
$$a_{21} = -\sin\psi_1 \cos\gamma_{wg} \cos\phi_w + \cos\psi_1 \sin\phi_w, \quad (42)$$
$$a_{22} = \sin\psi_1 \cos\gamma_{wg} \sin\phi_w + \cos\psi_1 \cos\phi_w, \quad (43)$$
$$a_{23} = \sin\psi_1 \sin\gamma_{wg}, \quad (44)$$
$$a_{31} = \sin\gamma_{wg} \cos\phi_w, \quad (45)$$
$$a_{32} = -\sin\gamma_{wg} \sin\phi_w, \quad (46)$$
$$a_{33} = \cos\gamma_{wg}. \quad (47)$$

(ii) Vector position $\mathbf{r}_1$ in Cartesian coordinates may be represented as

$$\boldsymbol{\rho}_1(\theta_w, \varepsilon_w, \phi_w, s_w) = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \boldsymbol{\rho}_w(\theta_w, \varepsilon_w) + \begin{bmatrix} \sin\psi_1\,(q_w - y_f^{(O_1)}) \\ \cos\psi_1\,(q_w - y_f^{(O_1)}) \\ s_w \end{bmatrix} = \mathbf{L}_{1w}(\phi_w, s_w)\,\boldsymbol{\rho}_w(\theta_w, \varepsilon_w) + \mathbf{R}. \quad (48)$$

Here, $\mathbf{L}_{1w}$ is a $3 \times 3$ matrix obtained from $\mathbf{M}_{1w}$. Matrix $\mathbf{L}_{1w}$ is given by

$$\mathbf{L}_{1w} = \mathbf{L}_{1f}\,\mathbf{L}_{fs}\,\mathbf{L}_{sw}, \quad (49)$$

whereas vector $\mathbf{R}$ is defined as

$$\mathbf{R} = \begin{bmatrix} \sin\psi_1\,(q_w - y_f^{(O_1)}) & \cos\psi_1\,(q_w - y_f^{(O_1)}) & s_w \end{bmatrix}^{\mathrm{T}}. \quad (50)$$

(iii) Considering $s_w$ as constant ($s_w = c$), the relative velocity of the worm thread surface with respect to the gear tooth surface may be obtained as

$$\mathbf{v}_{1,\,s_w=c}^{(w1)} = \dot{\boldsymbol{\rho}}_1 = \dot{\mathbf{L}}_{1w}\,\boldsymbol{\rho}_w + \dot{\mathbf{R}}, \quad (51)$$

wherein

$$\dot{\mathbf{L}}_{1w} = \dot{\mathbf{L}}_{1f}\,\mathbf{L}_{fs}\,\mathbf{L}_{sw} + \mathbf{L}_{1f}\,\mathbf{L}_{fs}\,\dot{\mathbf{L}}_{sw}, \quad (52)$$
$$\dot{\mathbf{L}}_{1f} = \begin{bmatrix} -\sin\psi_1 & \cos\psi_1 & 0 \\ -\cos\psi_1 & -\sin\psi_1 & 0 \\ 0 & 0 & 0 \end{bmatrix} \dot{\psi}_1, \quad (53)$$
$$\dot{\mathbf{L}}_{sw} = \begin{bmatrix} -\sin\phi_w & -\cos\phi_w & 0 \\ \cos\phi_w & -\sin\phi_w & 0 \\ 0 & 0 & 0 \end{bmatrix} \dot{\phi}_w, \qquad \dot{\phi}_w = -\frac{\cos\beta_c \cos\lambda_w}{p_w}\,\dot{x}_f^{(O_c)}, \quad (54)$$
$$\dot{\mathbf{R}} = \begin{bmatrix} \cos\psi_1\,(q_w - y_f^{(O_1)})\,\dot{\psi}_1 - \sin\psi_1\,\dot{y}_f^{(O_1)} \\ -\sin\psi_1\,(q_w - y_f^{(O_1)})\,\dot{\psi}_1 - \cos\psi_1\,\dot{y}_f^{(O_1)} \\ 0 \end{bmatrix}. \quad (55)$$

Then, the equation of meshing may be obtained as

$$f_1^{(w1)} = \mathbf{n}_1 \cdot \mathbf{v}_{1,\,s_w=c}^{(w1)} = 0, \quad (56)$$

wherein

$$\mathbf{n}_1 = \mathbf{L}_{1w}\,\mathbf{n}_w. \quad (57)$$

Here, $\mathbf{n}_w$ is the unit normal to the worm thread surface.

(iv) Considering $\phi_w$ as constant ($\phi_w = c$), the relative velocity of the worm thread surface with respect to the gear tooth surface may be obtained as

$$\mathbf{v}_{1,\,\phi_w=c}^{(w1)} = \dot{\boldsymbol{\rho}}_1 = \dot{\mathbf{L}}_{1w}\,\boldsymbol{\rho}_w + \dot{\mathbf{R}}, \quad (58)$$

wherein

$$\dot{\mathbf{L}}_{1w} = \dot{\mathbf{L}}_{1f}\,\mathbf{L}_{fs}\,\mathbf{L}_{sw}, \quad (59)$$
$$\dot{\mathbf{L}}_{1f} = \begin{bmatrix} -\sin\psi_1 & \cos\psi_1 & 0 \\ -\cos\psi_1 & -\sin\psi_1 & 0 \\ 0 & 0 & 0 \end{bmatrix} \dot{\psi}_1, \quad (60)$$
$$\dot{\mathbf{R}} = \begin{bmatrix} \cos\psi_1\,(q_w - y_f^{(O_1)})\,\dot{\psi}_1 - \sin\psi_1\,\dot{y}_f^{(O_1)} \\ -\sin\psi_1\,(q_w - y_f^{(O_1)})\,\dot{\psi}_1 - \cos\psi_1\,\dot{y}_f^{(O_1)} \\ \dot{s}_w \end{bmatrix}, \quad (61)$$
$$\dot{s}_w = -\frac{1}{\tan\beta_c}\,\dot{x}_f^{(O_c)}. \quad (62)$$

Then, the equation of meshing may be obtained as

$$f_2^{(w1)} = \mathbf{n}_1 \cdot \mathbf{v}_{1,\,\phi_w=c}^{(w1)} = 0. \quad (63)$$

(v) Derivatives $\dot{\psi}_1$, $\dot{y}_f^{(O_1)}$, and $\dot{x}_f^{(O_c)}$ are obtained as follows:

$$\dot{\psi}_1 = \frac{\mathrm{d}\psi_1}{\mathrm{d}\theta}\frac{\mathrm{d}\theta}{\mathrm{d}t}, \quad (64) \qquad
\dot{y}_f^{(O_1)} = \frac{\mathrm{d}y_f^{(O_1)}}{\mathrm{d}\theta}\frac{\mathrm{d}\theta}{\mathrm{d}t}, \quad (65) \qquad
\dot{x}_f^{(O_c)} = \frac{\mathrm{d}x_f^{(O_c)}}{\mathrm{d}\theta}\frac{\mathrm{d}\theta}{\mathrm{d}t}. \quad (66)$$

Derivatives $\mathrm{d}\psi_1/\mathrm{d}\theta$, $\mathrm{d}x_f^{(O_c)}/\mathrm{d}\theta$, and $\mathrm{d}y_f^{(O_1)}/\mathrm{d}\theta$ are represented in Appendix B.

4. Generation of elliptical gears by shaper

4.1. Derivation of surface $\Sigma_1$ of elliptical gear generated by shaper

The derivation is based on the following procedure:

(1) Two coordinate systems $S_s$ and $S_1$ are considered rigidly connected to the shaper and to the surface to be determined.

(2) An involute tooth surface $\mathbf{r}_s(u_s, \theta_s)$ is considered as given for a shaper with pitch radius $\rho_s$.

(3) Coordinate system $S_s$ is rotated while coordinate system $S_1$ is rotated and translated. Motions of systems $S_s$ and $S_1$ are defined by the kinematic relation of their centrodes (see Fig. 13):

$$\mathbf{v}_I^{(s)} = \mathbf{v}_I^{(1)}, \quad (67)$$

where

$$\mathbf{v}_I^{(1)} = \mathbf{v}_{I,\mathrm{rot}}^{(1)} + \mathbf{v}_{I,\mathrm{tr1}}^{(1)} + \mathbf{v}_{I,\mathrm{tr2}}^{(1)}. \quad (68)$$

(4) Coordinate system $S_s$ is rotated on $\psi_s$, which is related to polar angle $\theta$ by the function

$$\psi_s = \frac{s(\theta)}{\rho_s}. \quad (69)$$

(5) Coordinate system $S_n$ is translated with system $S_1$ on magnitudes $x_f^{(O_1)}$ and $y_f^{(O_1)}$, while system $S_1$ is rotated on the magnitude $\psi_1$. Such magnitudes may be represented as functions of polar angle $\theta$:

$$x_f^{(O_1)} = -r(\theta)\cos\mu, \quad (70)$$
$$y_f^{(O_1)} = r(\theta)\sin\mu, \quad (71)$$
$$\psi_1 = \theta + \mu - \frac{\pi}{2}. \quad (72)$$

(6) Elliptical gear tooth surface $\Sigma_1$ is obtained by simultaneous consideration of the matrix transformation

$$\mathbf{r}_1(u_s, \theta_s, \theta) = \mathbf{M}_{1s}(\theta)\,\mathbf{r}_s(u_s, \theta_s) \quad (73)$$

and the equation of meshing

$$f(u_s, \theta_s, \theta) = \left(\frac{\partial \mathbf{r}_1}{\partial u_s} \times \frac{\partial \mathbf{r}_1}{\partial \theta_s}\right) \cdot \frac{\partial \mathbf{r}_1}{\partial \theta} = 0. \quad (74)$$

However, the approach proposed in this paper is based on matrix derivations (see below) that allow the derivations to be computerized.

4.2. Coordinate transformation in transition from coordinate system $S_s$ to $S_1$

Derivation of Eq. (73) is performed as follows:
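The matrix chain above can be checked numerically. The sketch below is our own illustration (plain Python, arbitrary angle values, not part of the paper): it multiplies the 3×3 rotation blocks of the three matrices in Eq. (37) and compares the product, Eq. (49), with the closed-form entries a11…a33 of Eqs. (39)–(47).

```python
import math

def matmul3(A, B):
    """Multiply two 3x3 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def L1w(psi1, gamma, phi):
    """L_1w = L_1f * L_fs * L_sw: the 3x3 rotation parts of the matrices in Eq. (37)."""
    c1, s1 = math.cos(psi1), math.sin(psi1)
    cg, sg = math.cos(gamma), math.sin(gamma)
    cp, sp = math.cos(phi), math.sin(phi)
    L1f = [[c1, s1, 0], [-s1, c1, 0], [0, 0, 1]]
    Lfs = [[cg, 0, -sg], [0, 1, 0], [sg, 0, cg]]
    Lsw = [[cp, -sp, 0], [sp, cp, 0], [0, 0, 1]]
    return matmul3(matmul3(L1f, Lfs), Lsw)

def a_entries(psi1, gamma, phi):
    """Closed-form entries a_ij of Eqs. (39)-(47)."""
    c1, s1 = math.cos(psi1), math.sin(psi1)
    cg, sg = math.cos(gamma), math.sin(gamma)
    cp, sp = math.cos(phi), math.sin(phi)
    return [[ c1*cg*cp + s1*sp, -c1*cg*sp + s1*cp, -c1*sg],
            [-s1*cg*cp + c1*sp,  s1*cg*sp + c1*cp,  s1*sg],
            [ sg*cp,            -sg*sp,             cg   ]]

# Arbitrary test angles (radians); the product must equal the closed form entrywise
psi1, gamma, phi = 0.4, 0.7, 1.1
M = L1w(psi1, gamma, phi)
A = a_entries(psi1, gamma, phi)
ok = all(abs(M[i][j] - A[i][j]) < 1e-12 for i in range(3) for j in range(3))
print(ok)
```

Spot-checking the product this way is a cheap guard against sign errors when the transformation chain is re-implemented in a CAD or meshing code.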

Data Analysis English Test Questions and Answers


Part I. Multiple Choice (2 points each, 10 points total)

1. Which of the following is not a common data type in data analysis?
A. Numerical  B. Categorical  C. Textual  D. Binary

2. What is the process of transforming raw data into an understandable format called?
A. Data cleaning  B. Data transformation  C. Data mining  D. Data visualization

3. In data analysis, what does the term "variance" refer to?
A. The average of the data points
B. The spread of the data points around the mean
C. The sum of the data points
D. The highest value in the data set

4. Which statistical measure is used to determine the central tendency of a data set?
A. Mode  B. Median  C. Mean  D. All of the above

5. What is the purpose of using a correlation coefficient in data analysis?
A. To measure the strength and direction of a linear relationship between two variables
B. To calculate the mean of the data points
C. To identify outliers in the data set
D. To predict future data points

Part II. Fill in the Blanks (2 points each, 10 points total)

6. The process of identifying and correcting (or removing) errors and inconsistencies in data is known as ________.
7. A type of data that can be ordered or ranked is called ________ data.
8. The ________ is a statistical measure that shows the average of a data set.
9. A ________ is a graphical representation of data that uses bars to show comparisons among categories.
10. When two variables move in opposite directions, the correlation between them is ________.

Part III. Short Answer (5 points each, 20 points total)

11. Explain the difference between descriptive and inferential statistics.
12. What is the significance of a p-value in hypothesis testing?
13. Describe the concept of data normalization and its importance in data analysis.
14. How can data visualization help in understanding complex data sets?

Part IV. Calculation (10 points each, 20 points total)

15. Given a data set with the following values: 10, 12, 15, 18, 20, calculate the mean and standard deviation.
16. If a data analyst wants to compare the performance of two different marketing campaigns, what type of statistical test might they use and why?

Part V. Case Analysis (15 points each, 30 points total)

17. A company wants to analyze the sales data of its products over the last year. What steps should the data analyst take to prepare the data for analysis?
18. Discuss the ethical considerations a data analyst should keep in mind when handling sensitive customer data.

Answers:

Part I. Multiple Choice
1. D  2. B  3. B  4. D  5. A

Part II. Fill in the Blanks
6. Data cleaning
7. Ordinal
8. Mean
9. Bar chart
10. Negative

Part III. Short Answer
11. Descriptive statistics summarize and describe the features of a data set, while inferential statistics make predictions or inferences about a population based on a sample.
12. A p-value indicates the probability of observing the data, or something more extreme, if the null hypothesis is true. A small p-value suggests that the observed data is unlikely under the null hypothesis, leading to its rejection.
13. Data normalization is the process of scaling data to a common scale. It is important because it allows for meaningful comparisons between variables and can improve the performance of certain algorithms.
14. Data visualization can help in understanding complex data sets by providing a visual representation of the data, making it easier to identify patterns, trends, and outliers.

Part IV. Calculation
15. Mean = (10 + 12 + 15 + 18 + 20) / 5 = 75 / 5 = 15. Population standard deviation = √[Σ(xi − mean)² / N] = √[(25 + 9 + 0 + 9 + 25) / 5] = √13.6 ≈ 3.69.
16. A t-test (or ANOVA, for more than two groups) might be used to compare the mean performance of the two campaigns, as these tests can determine whether there is a statistically significant difference between the groups.

Part V. Case Analysis
17. The data analyst should first clean the data by removing any errors or inconsistencies. Then, they should transform the data into a suitable format for analysis, such as creating a time series for monthly sales. They might also normalize the data if necessary and perform exploratory data analysis to identify any patterns or trends.
18. A data analyst should ensure the confidentiality and privacy of customer data, comply with relevant data protection laws, and obtain consent where required. They should also be transparent about how the data will be used and take steps to prevent any potential misuse of the data.
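The Part IV calculation in question 15 can be reproduced with Python's standard `statistics` module (a quick sketch, not part of the original test; note the population vs. sample distinction, which the question leaves open):

```python
import statistics

data = [10, 12, 15, 18, 20]

mean = statistics.fmean(data)     # arithmetic mean
pstdev = statistics.pstdev(data)  # population standard deviation (divide by N)
stdev = statistics.stdev(data)    # sample standard deviation (divide by N - 1)

print(mean)               # 15.0
print(round(pstdev, 2))   # 3.69
print(round(stdev, 2))    # 4.12
```

If the five values are treated as a sample rather than the whole population, the answer changes from about 3.69 to about 4.12, so a full-credit answer should state which estimator it uses.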

Understanding Optics with Python


Optics, the branch of physics dealing with the properties and behavior of light, plays a crucial role in various fields, including physics, engineering, and even everyday life. In this article, we will explore the fascinating world of optics using the programming language Python. We will take a step-by-step approach to delve into the fundamental concepts and principles of optics and demonstrate how Python can be leveraged to solve optical problems.

Introduction to Optics:
Optics is the science of light and its interaction with matter. It encompasses the study of reflection, refraction, diffraction, interference, and the production and detection of light. Understanding optics is essential in various applications, such as designing lenses, cameras, telescopes, and lasers.

Using Python for Optical Calculations:
Python is a versatile programming language that is widely used in scientific research and engineering. Its extensive libraries and intuitive syntax make it a powerful tool for performing optical calculations and simulations. Let's take a look at how Python can be leveraged in optics.

1. Calculating Refraction using Snell's Law:
One of the fundamental principles in optics is Snell's law, which describes the behavior of light when it passes from one medium to another. We can write a Python code snippet to calculate the angle of refraction using Snell's law for a given incident angle and the refractive indices of the media involved.

2. Modeling Lens Systems:
Lenses are crucial optical devices used in various applications, from eyeglasses to cameras. Python's numerical and plotting libraries allow us to model and simulate lens systems. We can create a simple program that models the behavior of different types of lenses and their effect on light rays.

3. Simulation of Interference Patterns:
Interference is a fascinating phenomenon that occurs when two or more waves superpose.
Python's numerical libraries enable us to simulate and visualize interference patterns. We can write code that generates interference patterns by combining waves of different frequencies and phases.

4. Analyzing Diffraction Patterns:
Diffraction refers to the bending of light around obstacles or through narrow openings. Python can be used to analyze and visualize the diffraction patterns produced by various apertures. By employing the Fourier transform, we can calculate and plot diffraction patterns for different scenarios.

5. Designing Optical Filters:
Optical filters are essential in many applications, such as photography, spectroscopy, and signal processing. Python can be utilized to design and optimize optical filters by manipulating the spectral transmission or reflection properties of materials. We can create a program that designs bandpass filters based on given specifications.

6. Ray Tracing for Optical Systems:
Ray tracing is a powerful technique used in optics to simulate the path of light rays through complex systems. Python's libraries can be employed to create a ray tracing program that models the behavior of light in optical systems, including reflection, refraction, and multiple interactions with lenses and mirrors.

Conclusion:
Python provides a comprehensive set of tools and libraries for understanding and solving problems in optics. From the basics of refraction and lens systems to the complex phenomena of interference and diffraction, Python enables us to simulate and analyze optical systems with ease. By leveraging the power of Python, students, researchers, and professionals can enhance their understanding and explore the fascinating world of optics. So, pick up Python and embark on a journey to explore the wonders of optics!
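To make items 1 and 3 above concrete, here is a minimal, self-contained sketch using only the standard library (function names and parameter values are our own, not from the article): a Snell's-law refraction calculation and a two-source fringe intensity in the small-angle double-slit approximation.

```python
import math

def snell_refraction(n1, n2, theta_i_deg):
    """Angle of refraction (degrees) from Snell's law n1*sin(ti) = n2*sin(tt);
    returns None when total internal reflection occurs."""
    s = n1 * math.sin(math.radians(theta_i_deg)) / n2
    if abs(s) > 1.0:
        return None  # no refracted ray: total internal reflection
    return math.degrees(math.asin(s))

def two_slit_intensity(x, wavelength=1.0, slit_separation=5.0, screen_distance=100.0):
    """Normalized fringe intensity on a distant screen for two coherent sources:
    I is proportional to cos^2(pi * d * x / (lambda * L)) in the small-angle limit."""
    phase = math.pi * slit_separation * x / (wavelength * screen_distance)
    return math.cos(phase) ** 2

# Air -> water (n ~ 1.33) at 30 degrees incidence
angle = snell_refraction(1.0, 1.33, 30.0)
print(round(angle, 2))                    # about 22.08

# Glass -> air beyond the critical angle (~41.8 degrees): total internal reflection
print(snell_refraction(1.5, 1.0, 60.0))   # None

# Coarse text rendering of the bright/dark interference fringes
row = "".join("#" if two_slit_intensity((i - 30) * 2.0) > 0.5 else " "
              for i in range(61))
print(row)
```

With plotting libraries such as matplotlib, the same intensity function can be sampled densely and drawn as the familiar fringe pattern; the text rendering above is just the dependency-free version of that idea.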

VISUALIZATION TOOL FOR ORTHOPEDIC OPERATIONS


Henrik Fransson
Department of Computer Science and Engineering
Mälardalen University
Västerås, Sweden

Table of Contents

Abstract
1 Introduction
  1.1 Medical robotics
  1.2 Objectives and requirements
  1.3 Approach
  1.4 Layout
2 Orthopedic operation and 3D visualization
  2.1 Comparing current and new procedures
  2.2 Medical visualization review
  2.3 Viewing in 3D
    2.3.1 Model
    2.3.2 Perspective projection
      2.3.2.1 Clipping
    2.3.3 Viewport
  2.4 Related work
    2.4.1 Computer-aided simulation for bone surgery
    2.4.2 3D Reconstruction of the Femoral Bone using Two X-ray Images from Orthogonal Views
    2.4.3 Amira
    2.4.4 ANALYZE
    2.4.5 MEDx
    2.4.6 VolVis
3 MRVRViewer functionality
  3.1 The main application Pintrace
    3.1.1 Pintrace functionality
  3.2 Run-time visualization
  3.3 Visualizing the models in 3D
    3.3.1 Robot tool
    3.3.2 Reference pin
    3.3.3 Robot path
    3.3.4 Screws
    3.3.5 Femur
  3.4 Placing the models in the viewer
4 The viewing algorithm
  4.1 Translation value calculation
  4.2 Rotate towards X-axis
  4.3 Rotate towards Z-plane
5 The model algorithms
  5.1.1 Adapt the femur head
  5.1.2 Adapt the caput neck
  5.1.3 Translate the caput in X direction
  5.1.4 Translate the caput in Y and Z direction
  5.2 Triangulate
6 Implementation of viewer
  6.1 ActiveX
  6.2 View port
  6.3 Transparency
  6.4 Hide
  6.5 Show
  6.6 Testing
    6.6.1 Window
    6.6.2 World
    6.6.3 Integration
7 Future development
  7.1.1 Volume Rendering
  7.1.2 Voxel
  7.1.3 Volume Visualization
  7.1.4 Survey of Algorithms for Volume Visualization
    7.1.4.1 Forward Viewing
    7.1.4.2 Backward Viewing
    7.1.4.3 Template-based ray-casting
  7.2 Hardware
    7.2.1 VolumePro Architecture
    7.2.2 VolumePro Algorithm
8 Conclusion
9 References
Appendix

Table of Figures

Figure 1: Flow chart description of the viewer
Figure 2: Where the vertices are placed, which will create the converting vertices for all objects in the viewer
Figure 3: X-ray picture taken in direction AP; drawn on it is the geometry that defines the 3D model in the viewer
Figure 4: X-ray picture taken in direction LAT; drawn on it is the geometry that defines the 3D model in the viewer
Figure 5: The interpolation curve in the function, which starts from zero and interpolates to one using the sine-squared function for smoother interpolation
Figure 6: The complementary curve to Figure 8, which translates the model with the true translation value
Figure 7: Line geometry model in AP over a femur with 10 degrees dislocation
Figure 8: (a) Three triangles in the femur that will be affected during the dislocation phase; (b) the dislocated caput; (c) how the triangle follows the dislocation
Figure 9: Finding the strictly largest side of the triangle
Figure 10: The combination of triangles operated on
Figure 11: The direction d of the new vertex
Figure 12: The new vertex that has been created for making the two new triangles
Figure 13: The two new triangles D and E created by the triangulate function, giving a better and more natural surface in the dislocation area
Figure 14: Screenshot of the ActiveX component MRVR Viewer with femur, reference pin, robot path, and robot tool
Figure 15: The difference between a voxel and a pixel

Tables

Table 1: Comparison of volume graphics to surface graphics
Table 2: An overview of algorithms in the field of volume graphics

Abstract

This thesis describes how to create a three-dimensional (3D) viewer, in order to display the course of events during an orthopedic femur operation. The viewer is an external application that can co-operate with a main application.
It will show the femur and the tools used during a Pintrace (PT) operation.

The viewer has been created to assist surgeons in their work by increasing their knowledge of the form and shape of the femur. It will provide precise information helping them to decide where to insert the screws in the femur, and should reduce the fault rate for this kind of operation.

The viewer allows us to fully visualize a femur operation. The viewer is able to update the image after every completed X-ray session and whenever the surgeon makes any adjustment to the femur between two X-ray sessions. Furthermore, it is able to display the adjustment results by re-shaping the model.

The viewer also shows the position of a robot in relation to the target femur. This information is obtained by attaching a reference pin to the femur, revealing this position. The reference pin and the robot are calibrated and give coordinates for the robot to the viewer. When the surgeon is satisfied with the preparation of positions, the femur is positioned and the robot is set for surgery. The viewer will show the suggested positions of the screws in the femur, which by default will be the two positions that are most convenient for the patient. The screws can be moved physically, but not in the viewer, due to difficulties moving three-dimensional objects.

1 Introduction

Applications aimed at improved visualization are finding their way into an increasing number of software programs. Today with personal computers we are able to visualize simple 2D graphic programs as well as advanced and sophisticated 3D modeling and visualization applications by using computer graphics technologies. Using visualization applications in orthopedic operations increases the operator's ability to select the best course of action. Opportunities exist to reduce the fault rate by extending the information in the 2D X-ray pictures and applying the information in a 3D model.

In approaching the task of creating the viewer, we have created our own models and used an open graphical library as support.

1.1 Medical robotics

Medical Robotics AB (MR) is a recently founded company that develops robotic surgery tools. In order to effectively use the surgery tools, they have developed software called Pintrace. This software makes it possible to perform robot-controlled femur operations. In the PT application, MR felt that it would be advantageous to clarify and simplify the visualization and placement of screws and reference pins, and to visualize the robot's position during each step of the femur operation.

1.2 Objectives and requirements

The objective was to write an application to create a viewer for MR. The viewer should be able to view a patient's femur and also the equipment and tools that are used in a PT operation. The viewer should specifically be able to visualize the placement of the screws and also assist in the placing of screws during a femur operation.

The robot's movements should be seen in the viewer in real time during an operation, and the robot should be able to adapt the patient's skeleton from a theoretical model.

The application should be written on an NT-based platform using Microsoft Visual C++ programming software. The viewer should be written as an option of the main application, in the form of a stand-alone component solution. This component should open on command from the PT application, which is controlled by the surgeon. When the viewer is visible in the PT application, the models that are viewed should be based on 3D technology. The opportunity to load different skeleton models should be included in the viewer, as should the possibility to reconstruct the different skeleton models through local adaptations to fit the patients. The input for the reconstruction should use values that are being used in the PT application.

The viewer should also be able to view an assistant pin called the reference pin, and the screws that are placed by PT as a suggestion. Finally, the viewer must be able to let the robot move at run-time on a command from the surgeon.

1.3 Approach

The course of action is to adapt a theoretical femur model[1] to a real patient model. The transformation phase is defined through measuring the femur from two X-ray pictures taken in lateral[2] (LAT) and anterior-posterior[3] (AP) directions. The data will then be transferred from the sketch via PT to the viewer. It is possible to reshape the model for every new set of X-ray pictures, which gives the surgeon the possibility of visualizing the patient's femur status while trying to correct the dislocation. Once the surgeon is satisfied with the result and the femur is placed in the right position, it is time to place the reference pin in the femur. The reference pin is placed in the femur in order to calibrate the robot to the femur. The viewer does the opposite, setting the femur as the origin and then setting the reference pin in relation to the femur. When the reference pin is placed, the robot position can be calculated. From this calculation we can position the robot to the correct location in the viewer, and the femur and the robot will now display in the same view.

[1] We received the theoretical model from Frontec R&T Jönköping. It is a CT data file that is converted to our file format.
[2] Lateral: X-ray picture taken from the outside of a femur laid flat.
[3] Anterior-posterior: X-ray picture taken from front side to back.

1.4 Layout

Chapter Two gives a short introduction to how visualization tools are used for medical purposes, followed by further background information. It also describes some related work on the subject.
Chapter Three gives a short description of how it is possible to show 3D geometry on screen. The chapter assumes some knowledge of the field of computer graphics.
Chapter Four describes the essential communication algorithm between all input data and the output view.
Chapter Five describes the modification algorithms that are necessary to adapt the femur to a specific individual patient. Some knowledge of linear algebra is assumed.
Chapter Six describes the design architecture of the visualization tool and presents the test description.
Chapter Seven presents another method for visualizing skeletons, in part by using other input data, which could be used in the future development of this visualization tool.
Chapter Eight is the conclusion.

2 Orthopedic operation and 3D visualization

Every year Sweden performs 18,000 hip operations at a cost of 18,000 Euros per operation. According to MR, every fifth hip operation needs to be redone each year. In addition, this number will increase, according to Dr. Stig Lundquist, MR's founder, because of the rise in the average age of the Swedish population. Today the average age of a patient is 80 years, with 25% of the patients being men and 75% women. As a result of this, MR saw the opportunity to create a new robot-controlled screw countersink method to improve the result of hip operations.
Other methods for performing operations use artificial limbs or the traditional manual screw countersink method. The former method is relatively secure; however, it requires a larger surgical operation at a higher cost.

2.1 Comparing current and new procedures

In a normal operating room for femur surgery, the surgeon's equipment is one C-arc and an air-pressured drilling machine. The C-arc is a tool that is used to take 2D X-ray pictures of a femur during the operation. With the air-pressured drilling machine it is possible to manually make space for the screws that will stabilize the femur after the operation. MR believes that they can help and improve the placement of the screws by using a robot instead of hand-drilling the holes for the screws. Using the manual procedure it is necessary to stop in the middle of the drilling procedure to see if the drill is on track, with the help of the X-ray pictures taken from the C-arc. If the surgeons notice that they have drilled slightly obliquely, they have to make a new hole. There are often just a few places where the surgeon can drill because of the thin bone material in the femur neck. In the cases where a screw is placed obliquely, the strength of the screw bandage will be changed and the patient can be harmed. In these cases the operation procedure would benefit from using the MR procedure with the PT system. The screws will be placed in the thickest material of the bone and parallel to each other, which is important for the strength of the screw bandage. The whole operation process can be followed and visualized using the viewer. Training operations can also be made with the system before the surgeon actually performs the operation.

2.2 Medical visualization review

In the field of medicine, surgeons have had access to different assisting tools since 1895, when the first X-ray beam was developed. With X-ray pictures it is possible to describe an object and display it in a 2D picture. Computed tomography (CT) was first used in 1963. With CT, X-ray beams pass through a slice of the body from different directions. The beams are generated in an X-ray tube that circulates in a frame around the gantry where the patient is placed. The investigated body section is seen as a plane, built of small cubes called voxels. In each voxel a small attenuation of the X-ray beam is created. Sensitive sensors measure the beam intensity in each direction; a measured value shows the total attenuation of the voxels passed by the beam in that direction. Calculations for each voxel produce a value that can be used to generate a volume or a picture. Letting such a tool generate the volume for visualization would be an advantage for the correctness of the femur model. Unfortunately this sophisticated tool is not included in every operating theatre, so the method is to use a C-arc for generating 2D pictures that will be used during the operation and displayed in the PT application.

2.3 Viewing in 3D

The complexity of viewing in 3D is partly caused by the fact that display devices are only 2D. In 3D viewing, it is necessary to specify a view volume in 3D space. The models will be positioned in the viewing volume, where they can be transformed into position. Through the transformation, the viewing volume will determine how an object is projected onto the screen, using perspective or orthographic projection. The view volume will also define which objects, or portions of objects, are clipped out of the final image.

2.3.1 Model

An object is built up of one or several polygons. Polygons are typically drawn by filling in all the pixels enclosed within their boundaries. Observe that polygons are drawn in such a way that if adjacent polygons share an edge or vertex, the pixels making up the edge or vertex are drawn exactly once. This can lead to narrow polygons that have no pixels in one or more rows or columns of pixels. A polygon has two sides, front and back, and these might be rendered differently depending on which side is facing the viewer, which can be helpful when one would like to cull polygons in the model.

2.3.2 Perspective projection

The important characteristic of perspective projection is that the further an object is from the camera, the smaller it appears in the final image. This is possible through the shape of the viewing volume, which has the shape of a truncated pyramid whose top has been cut off by a plane parallel to its base. Objects that fit into the view volume are projected toward the apex of the pyramid, where the viewpoint is. Objects that are closer to the viewpoint appear larger because they occupy a proportionally larger amount of the viewing volume than those that are farther away, in the larger part of the truncated pyramid.

2.3.2.1 Clipping

After the vertices of the objects in the scene have been placed in the truncated pyramid, any primitives that lie outside the view volume are clipped. The objects can be clipped against six different planes: those that define the sides and ends of the viewing volume.

2.3.3 Viewport

The component responsible for opening a window is explained in Chapter 4.1; it displays the view volume in the PT application. The viewport is the rectangular region of the window where the image is drawn. The viewport is measured in window coordinates, which reflect the positions of pixels on the screen relative to the lower-left corner of the window. The depth (z) coordinate is encoded during the viewport transformation and later stored (see Chapter 4.4).

2.4 Related work

In the area of visualization tools for surgery, most of the tools are created for training and simulation purposes and are not focused on run-time viewing. Their main data input is from CT or MRI data. 3D visualization of the femur is important in a number of clinical problems (fracture of the femoral neck, planning of hip replacement, etc.). The visualization methods used can be conventional 2D X-rays from several different directions, or CT scans with subsequent 3D reconstruction. CT scans provide the best 3D visualization results, but are not always available, are expensive, and produce a considerable radiation dose. On the other hand, 2D X-rays only provide 3D information in the sense that the radiologist mentally assembles them into a 3D model.

2.4.1 Computer-aided simulation for bone surgery

A system for evaluating bone deformities using a 3D model directly recovered from 2D images, and for simulating surgery, is described. It derives a 3D object representation from only two X-ray images. It also offers user-friendly simulation of bone surgery using low-cost hardware and software. The system exhibits satisfactory behavior for reconstructing the bone shape, providing suitable data for the simulation and evaluation of bone surgery. Although the spline interpolation of the bone surface does not produce a realistic 3D visualization of the tibia, which is used as an example, the reconstruction is useful in solving problems inherent in the pathology considered [14].

2.4.2 3D Reconstruction of the Femoral Bone using Two X-ray Images from Orthogonal Views

Nikkhaha-Dehkordi et al. present in [20] a method that produces a real 3D model of the femur, but only requires two 2D X-ray images from orthogonal directions as input. Their method uses information from the X-ray images to reconstruct a 3D model; the particular method they used reconstructs the model from two X-ray images positioned orthogonal to each other. Since the outline does not define the entire 3D shape of the femur, they had to use regularization to estimate the 3D shape. In this case they assumed that the surface was round and smooth. This assumption is reasonably valid on the shaft, but not on the caput and condyles. Therefore they separated the femur into sub-parts that individually satisfied their assumptions. To generate the 3D models from these sub-parts they used Hermite surface patches, and later assembled the sub-parts together. Hermite surfaces are cubic parametric surface patches defined by four corner points and by the tangent vectors to the surface at the corner points (see Chapter 11 in [18]).

With this method 80% of the femur shaft is within less than 2 mm of the CT scan and 93% is within 4 mm. For the shaft these numbers increase to 95% within 2 mm and 99.8% within 4 mm. We believe our method provides a better estimate of the shape of the femur by using both a CT model and model modifications through edge detection from two different X-ray pictures.

2.4.3 Amira

Amira is a modern, user-friendly interactive workbench for researchers and developers. It offers fascinating new possibilities wherever 3-dimensional data sets occur, and state-of-the-art visualization techniques provide detailed insights into the data. Powerful automatic and interactive segmentation tools can process 3D image data, and novel, fast, and robust reconstruction algorithms make it easy to create polygonal models from segmented objects. This makes Amira an ideal tool for research in medicine, biology, and engineering. In addition, true volumetric tetrahedral meshes can be generated, suitable for advanced finite-element simulations. Simulation results, as well as other data defined on a variety of different grids, can be investigated using a large set of powerful visualization methods [32].

2.4.4 ANALYZE

The ANALYZE system features integrated, complementary tools for fully interactive display, manipulation, and measurement of multidimensional image data. It can be applied to data from many different imaging modalities, including CT, MRI, ultrasound, and digital microscopy [33].

2.4.5 MEDx

MEDx is a functional image visualization and analysis program capable of processing and combining multidimensional data from various imaging modalities, including MRI and CT. It includes a wide variety of tools needed for functional image analysis.
These tools can be combined using a scripting language to construct customized analysis modules [34].

2.4.6 VolVis

VolVis is a volume visualization system that unites numerous visualization methods within a comprehensive visualization system, providing a flexible tool for the scientist and engineer as well as the visualization developer and researcher. VolVis supplies a wide range of functionality with numerous methods provided within each functional component. For example, VolVis provides various projection methods including ray casting, ray tracing, radiosity, Marching Cubes, and splatting [35].

3 MRVRViewer functionality

This section describes the functionality of the viewer in detail. By introducing the main application and its user interface, the understanding of how the viewer receives its information should be greatly enhanced. The viewer has two major tasks: to adapt and modify the femur to the patient's shape, and to move the robot and screws via modifications in run-time.

3.1 The main application Pintrace

The main application Pintrace is based on a Windows NT platform and created with the programming tool Microsoft Visual C++. The program is an interface between an operator and a robot. The intention is that a surgeon, giving instructions via the PT application, controls a robot that carries out the operation. The PT application has been primarily developed for hip joint operations, but structure options exist in the program for other kinds of bone operation, such as for a knee joint. However, currently MR has put its main focus on hip joint operations.

3.1.1 Pintrace functionality

The PT application window layout is divided into six different windows: three on the upper half and the other three on the lower half.

A. AP X-ray picture
B. Lat X-ray picture
C. Functional window
D. Information window
E. 3D viewer
F. Pintrace control window for drawing functions and robot synchronizing

1. Patient femur
2. Reference pin
3. Robot path
4. Robot tool
5. Planned screws

Figure 1. Screenshot of an active PT application with the viewer running.

The windows on the lower half are used to calibrate the robot for this specific operation and to configure the settings for the robot in the application. The robot can be remotely controlled by joystick and through preprogramming. The left one is an information window that gives the operator information about what is being done during the configuration process in the program. In this window it is possible to change the default settings for the robot. The window to the right is where the course of action for the entire operation can be created. The window in the middle is used for visualization.

After completing calibration and settings, the X-ray pictures should be loaded. They will be presented in the larger windows on the upper half of the screen. Panning and zooming are possible in the small window to the right on the upper half. Marking the objects in the two X-ray pictures will follow. If adjustments are needed it is possible to reload a new set of X-ray pictures and redraw the markings. When this is prepared we can open the model with the patient dimensions in the viewer. Through marking the objects in the X-ray pictures we can define the contour of the patient's femur, which will be used in the viewer for adapting the theoretical model to the patient. The method for the above is explained in Chapter 5. It will also be used for recalculation of the positioning of the screws in the femur. The program's suggestion can be visualized in the viewer, with the number of screws and the path for the robot also being viewed at this stage. At this moment the operator decides whether s/he will use the suggestion from the program or use the robot manually to insert the screws.
The method the operator chooses does not affect the visualization in the viewer.

3.2 Run-time visualization

Having the opportunity to view changes during surgery through the viewer is a huge benefit for the operator. It could give operators the possibility to perform an operation from another location. Another possibility is that the PT application with the viewer can be used as a simulation and training tool for performing real operations. Through the data information from the PT application it is possible to change the geometric parameters of the robot tool, the reference pin, and the screws in run-time, which can be useful if the surgeon decides to change, for example, the screws during the operation. We believe that the PT application has strong commercial potential because of the varied options the viewer provides.

3.3 Visualizing the models in 3D

In chapter 2.3 we described how a viewport is created. In the viewer it is the control function that sets up the model matrix and the viewing volume. The control function adds the light source that will give the sensation that the bone is real. Further, the ability to walk around in the viewing volume is also handled by the control function. In run-time the PT application sends a permanent flow of input data.
This data is received into the control function from the PT application, distributed to the internal database, and then handled and distributed from the viewer to the local database. By default the local database contains the vertices that can be updated and modified through input data sent by the PT application:

- The robot tool: start and end vertices and the radius of the measuring pin.
- The reference pin: start and end vertex and the radius.
- The robot path-leg: every vertex where the direction changes.
- The screw: start and end vertex as well as the radius and the screw number.
- Femur head: center point vertex and the head radius.
- Femur neck: the vertex along the neck axis at the narrowest section of the neck diameter, and the two directions of the diameter.
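As an illustration, the records listed above might be modeled as follows. This is a hedged Python sketch with type and field names of our own choosing; the actual viewer is written in C++ and the thesis does not show its internal types:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

Vertex = Tuple[float, float, float]  # (x, y, z)

@dataclass
class RobotTool:
    start: Vertex
    end: Vertex
    pin_radius: float   # radius of the measuring pin

@dataclass
class Screw:
    start: Vertex
    end: Vertex
    radius: float
    number: int         # the screw number reported by the PT application

@dataclass
class FemurHead:
    center: Vertex
    radius: float

@dataclass
class LocalDatabase:
    """Run-time geometry kept by the viewer, updated from PT input data."""
    tool: Optional[RobotTool] = None
    head: Optional[FemurHead] = None
    screws: List[Screw] = field(default_factory=list)

    def update_screw(self, screw: Screw) -> None:
        # Replace any existing screw with the same number, then add it,
        # mirroring the run-time modifications described in section 3.2.
        self.screws = [s for s in self.screws if s.number != screw.number]
        self.screws.append(screw)
```

A permanent flow of input data would then translate into repeated calls such as `update_screw`, letting the surgeon's changes appear in the 3D view immediately.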

An English Essay on Artificial Intelligence


Artificial Intelligence (AI) has been a topic of significant interest and discussion in the modern world. As an English teacher, I would like to guide you through writing an English essay on this subject. Here are some points to consider when constructing your essay:

1. Introduction to AI: Begin your essay by defining what artificial intelligence is. You might mention that AI refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions.
2. Historical Context: Provide a brief history of AI, starting from its early conceptualization in the mid-20th century to the development of the first AI programs. Discuss key milestones and figures in the field, such as Alan Turing and his Turing Test.
3. Types of AI: Explain the different types of AI, including narrow or weak AI, which is designed for a particular task, and general or strong AI, which has the potential for broader cognitive abilities.
4. Applications of AI: Discuss various applications of AI in today's society. This could include areas such as healthcare, where AI is used for diagnosis and treatment planning, or the automotive industry with self-driving cars.
5. Impact on Employment: Address the concern that AI might replace human jobs. Analyze both the positive and negative impacts of AI on employment, including job displacement and the creation of new job opportunities in AI-related fields.
6. Ethical Considerations: Delve into the ethical implications of AI, such as privacy concerns, the potential for bias in AI algorithms, and the responsibility of AI developers to ensure their creations are used ethically.
7. Future Prospects: Speculate on the future of AI, including advancements in machine learning, neural networks, and the potential for AI to achieve consciousness or self-awareness.
8. Conclusion: Summarize the main points of your essay and offer your personal perspective on the role of AI in society. You might consider ending with a call to action for responsible development and use of AI technologies.
9. Citations and References: Ensure that you cite any sources you use to support your arguments and provide a list of references at the end of your essay.
10. Proofreading: Finally, proofread your essay for grammatical errors, clarity, and coherence. Make sure your essay flows logically and that your arguments are well supported.

Remember, an effective essay on AI should be informative, engaging, and thought-provoking, encouraging readers to consider the implications of AI on a personal and societal level.

ADL (Architecture Description Language)


component DataStore {
    provide landerValues;
}
component Calculation {
    require landerValues;
    provide calculationService;
}
component UserInterface {
    require calculationService;
    require landerValues;
}
component LunarLander {
    inst U: UserInterface;
         C: Calculation;
         D: DataStore;
    bind U.landerValues -- D.landerValues;
         C.landerValues -- D.landerValues;
         U.calculationService -- C.calculationService;
}


An ADL describes three kinds of architectural elements:
1) Components: units of computation or data storage.
2) Connectors: architectural building blocks used to model interactions among components, together with the rules governing those interactions.
3) Architectural configurations: connection graphs describing how the components and connectors of the architecture are wired together.
Characteristics an ADL should have:
1. An ADL should first of all have a formal theoretical basis, such as Petri nets, statecharts, Z, or CSP. Only with such a formal foundation can the described system be analyzed and verified.
2. As a description language, an ADL should have rigorous syntax and semantics, and its descriptive power should be strong enough at least to describe the basic constructs: components, connectors, and the relevant configuration specifications. In addition, for practical application an ADL should have supporting tools; the capability of these tools directly reflects how usable the ADL is and how widely it can be applied.
3. A very important purpose of describing a software architecture is to ease understanding and communication among software developers, so an ADL description should be simple and easy to understand, preferably aided by diagrams. Different developers need to understand the same architecture at different levels of abstraction, which requires that the ADL be able to describe the software architecture at different degrees of abstraction.
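A configuration like the Lunar Lander example above can be checked mechanically. The following is a hedged Python sketch (our own illustration, not a real ADL toolchain) that verifies every required service is bound to an instance whose component type actually provides it:

```python
# Component types, each with its provided and required services,
# mirroring the Lunar Lander ADL example above.
components = {
    "DataStore":     {"provide": {"landerValues"}, "require": set()},
    "Calculation":   {"provide": {"calculationService"}, "require": {"landerValues"}},
    "UserInterface": {"provide": set(), "require": {"calculationService", "landerValues"}},
}

# Instances and bindings: (requiring instance, service) -> providing instance.
insts = {"U": "UserInterface", "C": "Calculation", "D": "DataStore"}
binds = {("U", "landerValues"): "D",
         ("C", "landerValues"): "D",
         ("U", "calculationService"): "C"}

def well_formed(insts, binds):
    """Every required service of every instance must be bound to an
    instance whose component type actually provides that service."""
    for name, ctype in insts.items():
        for svc in components[ctype]["require"]:
            provider = binds.get((name, svc))
            if provider is None:
                return False                      # unbound requirement
            if svc not in components[insts[provider]]["provide"]:
                return False                      # bound to a non-provider
    return True
```

This is the kind of analysis the formal foundation of an ADL is meant to enable: a configuration is a connection graph, and well-formedness is a simple property of that graph.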

An English Essay on the Innovative Path of Scientific Exploration


The path of scientific exploration is a journey filled with innovation and discovery. It is a road paved with curiosity, dedication, and a relentless pursuit of knowledge. This essay will delve into the essence of scientific exploration and how innovation is the driving force behind it.

The Importance of Curiosity

Curiosity is the spark that ignites the flame of scientific exploration. It is the innate desire to understand the unknown, to question the status quo, and to seek answers to the mysteries of the universe. Without curiosity, the scientific community would stagnate, and the world would be deprived of the advancements that have shaped our lives.

The Role of Innovation

Innovation is the lifeblood of scientific exploration. It is the process of translating curiosity into tangible results. Innovation in science is not just about creating new technologies or products; it is about challenging existing paradigms, developing new theories, and finding novel solutions to complex problems.

The Process of Scientific Exploration

The journey of scientific exploration begins with observation and hypothesis. Researchers observe phenomena, formulate hypotheses to explain these observations, and then design experiments to test these hypotheses. This process is iterative and often involves a great deal of trial and error.

The Role of Collaboration

Scientific exploration is not a solitary endeavor. It relies heavily on collaboration among researchers, institutions, and even across disciplines. Collaboration allows for the sharing of ideas, resources, and expertise, which can lead to breakthroughs that might not have been possible in isolation.

The Impact of Technology

The advancement of technology plays a crucial role in scientific exploration. Modern tools and techniques, such as high-speed computing, advanced imaging, and data analysis software, have expanded the horizons of what is possible in research. They enable scientists to delve deeper into the unknown and to gather data at an unprecedented scale.

Overcoming Challenges

The path of scientific exploration is fraught with challenges. These can range from technical difficulties and funding constraints to ethical dilemmas and societal implications. Overcoming these challenges requires resilience, creativity, and a commitment to the scientific method.

The Future of Scientific Exploration

As we look to the future, the potential for scientific exploration and innovation is limitless. With the ongoing development of new technologies and the increasing interconnectedness of the global scientific community, we stand on the brink of discoveries that could transform our understanding of the world and our place in it.

In conclusion, the road of scientific exploration is a dynamic and ever-evolving journey. It is driven by the human spirit's insatiable thirst for knowledge and the desire to push the boundaries of what is known. Innovation is the key that unlocks new frontiers, and it is through this process that we continue to learn, grow, and advance as a species.

HORIBA A-TEEM Molecular Fingerprinting Technology Datasheet


ELEMENTAL ANALYSIS | FLUORESCENCE | OPTICAL COMPONENTS | CUSTOM SOLUTIONS | SPR IMAGING

Aqualog® A-TEEM™

Introducing the NEW HMMP tool for easy batch regression and discrimination analysis of Aqualog A-TEEM data

HORIBA's patented A-TEEM molecular fingerprinting is an ideal optical technique for product characterization involving component quantification and identification. The HMMP Add-In tool, powered by Eigenvector Inc. Solo, ideally complements the A-TEEM by supporting the development and batch-wise application of methods for an unlimited number of component regression models as well as discrimination models. The HMMP breaks the time- and labor-consuming barrier of analyzing individual models and collating results into a cohesive report to meet the requirements of industrial QA/QC applications. The HMMP tool facilitates administrator-level method model development but, more importantly, push-button operator-level application and report generation.

The HMMP tool is exclusive to the Aqualog A-TEEM and supports enhanced model robustness by combining the absorbance and fluorescence excitation-emission matrix (EEM) data using the Solo Multiblock Model tools! HMMP incorporates a direct, exclusive link to the Aqualog's batch file output directory for trouble-free file browsing and automatic concatenation of absorbance and EEM data as well as all model-dependent pre-processing.

The HMMP tool mates seamlessly with data collected using the Fast-01 autosampler as well as any other sampling method that employs the Aqualog SampleQ toolbox.

The HMMP tool supports an unlimited number of regression models in a given method to provide comprehensive reports of all parameters of interest. Discrimination model methods with multiple class groups are also supported to facilitate product characterization as functions of unique compositions and component or contaminant threshold concentrations, among other QA/QC scenarios. The HMMP tool can employ a wide range of algorithms for discrimination and regression including Principal Components Analysis (PCA), Partial Least Squares (PLS), Artificial Neural Networks (ANN), Support Vector Machine (SVM) and Extreme Gradient Boost (XGB). Key applications supported include wine quality chemistry, water contamination and pharmaceutical product identification and composition, among many others.

Key Features and Benefits
• Easy, Rapid Operator-Level Analysis
• Facilitated Administration of Method Model Development and Editing
• Complete Parameter Profile and Classification Reports
• HMMP Add-In Fully Integrated into Eigenvector Inc. Solo/Solo+Mia and Exclusively Activated and Supported by HORIBA Instruments Inc.
• HMMP Reports include all required parameter information and are saved in a comma-separated format for LIMS system compatibility.
• The HMMP tool is provided with ample online Help support powered by the Eigenvector Inc. Wiki platform and HORIBA's fully featured user manual.

Aqualog A-TEEM Spectrometer with FAST-01 Autosampler
Powered by Solo Predictor software from Eigenvector Research, Incorporated

HMMP Specifications

To learn more about the A-TEEM molecular fingerprinting technique, applications and uses of this autosampler, refer also to *******************/scientific

USA: +1 732 494 8660 | France: +33 (0)1 69 74 72 00 | Germany: +49 (0) 6251 8475 0 | UK: +44 (0)1604 542 500 | Italy: +39 06 51 59 22 1 | Japan: +81 (75) 313-8121 | China: +86 (0)21 6289 6060 | India: +91 80 41273637 | Singapore: +65 (0)6 745 8300 | Taiwan: +886 3 5600606 | Brazil: +55 (0)11 2923 5400 | Other: +33 (0)1 69 74 72 00

The HMMP user interface facilitates method development and selection, fully articulated data file browsing with data integrity warnings, and push-button report generation.

Method of Automated Code Feature Extraction


Method of Automated Code Feature Extraction
SHI Zhicheng 1,2, ZHOU Yu 1,2,3+
1. College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
2. Key Laboratory for Safety-Critical Software Development and Verification, Ministry of Industry and Information Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
3. State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China
+ Corresponding author, E-mail: ***************.cn

Abstract: The application of neural networks in software engineering has greatly eased the pressure of traditional manual extraction of code features.

Existing studies often simplify code into natural language or rely on experts' domain knowledge to extract code features. Treating code simply as natural language is too crude and easily loses information, while models that introduce expert-defined heuristic rules tend to be overly complex and lack extensibility and generality.

In view of these problems, an automatic code feature extraction model based on convolutional and recurrent neural networks is proposed; the model extracts code features with the help of the code's abstract syntax tree (AST).

To alleviate the vanishing gradient problem caused by overly large ASTs, the AST is split and converted into a sequence of ASTs, which then serves as the model input.

The model uses a convolutional network to extract the structural information of the code and a bidirectional recurrent neural network to extract its sequential information.

The whole pipeline requires no expert domain knowledge to guide training; the model learns how to extract code features automatically from labeled code given as input.

Applying the trained classification encoder to a similar-code-search task, the model achieves Top-1, NDCG, and MRR values of 0.560, 0.679, and 0.638 respectively, a significant advantage over state-of-the-art deep learning models for code feature extraction and over code similarity detection tools commonly used in industry.

Keywords: code feature extraction; code classification; program comprehension; similar code search
Document code: A    CLC number: TP391

Method of Code Features Automated Extraction
SHI Zhicheng 1,2, ZHOU Yu 1,2,3+
1. College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
2. Key Laboratory for Safety-Critical Software Development and Verification, Ministry of Industry and Information Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
3. State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China

Abstract: The application of neural networks in software engineering has greatly eased the pressure of traditional method of extracting code features manually. Previous code feature extraction models usually regard code as natural language or heavily depend on the domain knowledge of experts. The method of transferring code into natural

Journal of Frontiers of Computer Science and Technology, 1673-9418/2021/15(03)-0456-12, doi: 10.3778/j.issn.1673-9418.2005048
Funding: National Key Research and Development Program of China (2018YFB1003902); National Natural Science Foundation of China (61972197); Fundamental Research Funds for the Central Universities (NS2019055); Jiangsu "Qing Lan Project" for universities.
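The AST-splitting step described in the abstract can be loosely illustrated with Python's standard ast module (our own sketch, not the authors' implementation): each top-level statement subtree is linearized into a short sequence of node-type tokens, so the encoder receives a sequence of small sequences instead of one oversized tree.

```python
import ast

def subtree_tokens(node):
    """Linearize one AST subtree into a breadth-first list of node-type names."""
    return [type(n).__name__ for n in ast.walk(node)]

def split_ast(source):
    """Split a program's AST at the top level: one token sequence per statement.

    This mimics, very loosely, cutting an oversized AST into a sequence of
    smaller trees before feeding it to a CNN/BiRNN encoder.
    """
    tree = ast.parse(source)
    return [subtree_tokens(stmt) for stmt in tree.body]

seqs = split_ast("x = 1\nif x:\n    print(x)")
```

In the paper's pipeline these token sequences would be embedded and fed to the convolutional and bidirectional recurrent layers; here they are plain Python lists for illustration only.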

adaptive


Modeling a tool for the generation of programming environments for adaptive formalisms

A.R. Camolesi
Departamento de Engenharia de Computação e Sistemas Digitais, Universidade de São Paulo, Brasil
Coordenadoria de Informática, Fundação Educacional do Município de Assis, Brasil
E-mail: camolesi@.br

Abstract

This paper aims to present the logical model that makes up the structure of a tool for the definition of environments for rule-driven adaptive formalisms.

1 Introduction

Adaptive applications need resources to adapt themselves to the environment's momentary needs and to foresee internal and external demands, thus making up a complex, robust, and fault-tolerant structure that is nevertheless flexible and responsive. Such applications offer modern capabilities that are very difficult to model using present techniques of software development. In order to solve the modeling of adaptive applications, a generic formalism was proposed in [1] that allows (underlying) rule-driven non-adaptive devices to be extended with the concepts of adaptive mechanisms. Such a formalism is based on an Adaptive Mechanism (AM) that involves the kernel of an underlying non-adaptive device (ND). This way, an Adaptive Device (AD) is formally defined by AD = (C, AR, S, c0, A, NA, BA, AA). In this formulation, C is the set of all possible configurations of ND and c0 ∈ C is its initial configuration. S is the set of all possible events that make up AD's entry chain, and set A represents the acceptance configurations for ND. Sets BA and AA are sets of adaptive actions. NA is the set of all symbols that can be generated as output by AD in response to the application of adaptive rules. AR is the set of adaptive rules that define the adaptive behavior of AD and is given by the relationship AR ⊆ BA × C × S × C × NA × AA, in which adaptive actions modify the current set AR of adaptive rules of AD into a new set AR' by adding and/or deleting adaptive rules in AR.
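As a purely illustrative sketch of the formulation above (our own toy code, not the tool described in this paper): a rule-driven device whose rules map a (configuration, event) pair to a next configuration, where a rule may carry an adaptive action that adds or deletes rules at run time.

```python
# Toy adaptive rule-driven device. Each rule maps (configuration, symbol)
# to (next configuration, optional adaptive action); an adaptive action
# receives the rule set and may add or delete rules, modifying the
# device's own behavior. All names here are illustrative.

class AdaptiveDevice:
    def __init__(self, rules, start, accepting):
        self.rules = dict(rules)       # (config, symbol) -> (next, action)
        self.config = start
        self.accepting = set(accepting)

    def step(self, symbol):
        nxt, action = self.rules[(self.config, symbol)]
        self.config = nxt
        if action:                     # adaptive action: rewrite the rule set
            action(self.rules)

    def accepts(self, symbols):
        for s in symbols:
            self.step(s)
        return self.config in self.accepting

# An adaptive action that, once 'a' has been consumed, deletes the rule
# that allowed it -- so this device accepts 'a' at most once.
def forbid_a(rules):
    rules.pop(("q0", "a"), None)

dev = AdaptiveDevice(
    rules={("q0", "a"): ("q1", forbid_a), ("q1", "b"): ("q0", None)},
    start="q0", accepting=["q1"],
)
```

After `dev.accepts("a")` the rule for ("q0", "a") is gone: the device's behavior has been changed by its own execution, which is exactly the effect the AR relationship formalizes.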
Based on these definitions, this paper proposes a logical model for the representation of the formal elements shown in [1]. Such a model is fundamental to the development of tools that support a design methodology for adaptive applications. This paper is organized as follows: in section 2, the stages of extension for adaptive devices and their use are described; in section 3, the logical representation for adaptive devices is shown; and finally, in section 4, some conclusions and future work are discussed.

2 Stages of extension for non-adaptive rule-driven devices

When extending the formalism of an underlying device with the concepts of adaptive rule-driven mechanisms, a specialist should involve the non-adaptive device with an adaptive layer. In order to carry out this job, the specialist should possess good knowledge both of the underlying formalism and of the concepts of adaptive mechanisms. On the other hand, a planner that uses a device extended by a specialist does not need formal knowledge as deep as that needed by the specialist in extension of devices. The planner needs to know the extended specification language and how to use it in the design of his applications. When extending a non-adaptive rule-driven device to support adaptive characteristics, three stages need to be accomplished: the stage of extension of the formal (mathematical) model, the stage of definition of the logical model, and the stage of definition of the physical model. Figure 1 illustrates the stages and the relationships among them. The stage of extension of the formal model (Figure 1(A)) offers a view in which a specialist with good mathematical knowledge of the underlying formalism accomplishes the conceptual definition of the device extended with the concepts of adaptive mechanisms. In [1] and [2], extensions of underlying devices with the concepts of adaptive devices are presented.
In this phase, the junction of the formal concepts of both (underlying and adaptive) formalisms is achieved, thus obtaining a new underlying device extended with concepts of adaptive mechanisms. In [3], a methodology was proposed to support the design of adaptive applications by using the concepts presented in this paper. Figure 2 shows the design methodology for adaptive applications, formed by the following phases: specification, transformation of the specification, and validation and simulation of the specification. In the specification phase the application is specified by using either a text or a graphic tool. Soon afterwards, the transformation of the produced specification into an intermediate representation (logical model) is accomplished and, based on the obtained representation, the planner can supply entry string sequences and evaluate the specification. If mistakes or inconsistencies occur, the planner can make changes in the specification and restart the process.

Figure 1. Stages of extension of non-adaptive rule-driven devices.

After obtaining the adaptive formalism, a mapping of its concepts into an intermediate representation is necessary, as shown in Figure 1(B). This stage consists of the definition of the logical structure that represents the formal concepts of the new adaptive device. Such a structure is of fundamental importance, because it is part of the information storage structure necessary for the development of tools that will help the planner in designing adaptive applications. In the stage of physical definition, as shown in Figure 1(C), a planner with knowledge of the developed adaptive formalism accomplishes the specification of his application. At this stage, the yielded specifications are to be later analyzed and implemented.
When performing his work the planner instantiates the objects defined in the logical stage and defines the physical elements that represent the behavior of the application. In this phase, it can be observed that the instantiated objects belong to two different classes: objects that represent the behavior of the developed application, and objects that represent the adaptive functions and actions responsible for modifications in the behavior of the application in execution. Based on the set of objects defined in this phase, the presentation, simulation, verification and execution of the designed application are made possible. During the simulation and execution of a specification in the adaptive kernel, adaptive actions can be executed and rules can be added to or removed from the behavior represented by the specification, thus modifying its structure.

Figure 2. Methodology of Design of Adaptive Applications.

The proposed methodology is linked to the need for tools to help the planner in the performance of his job. During the specification phase there is the need for a text or graphic tool to aid the planner in the specification of an application. The phase of transformation of the application's specification into an intermediate representation can be accomplished in two ways: automatically (generated by the editors at the moment of edition), or through a translator that performs the transformation after the specification process. Finally, tools are needed that allow the visualization, simulation and verification of the designed applications. In this phase the planner, using an integrated environment, supplies the values of the entry chain and submits the specification to the executor of the adaptive kernel. Initially, in case they exist, prior adaptive actions are performed, followed by elementary actions of the underlying device and finally the subsequent adaptive actions.
This way, at each step the designer gets a new configuration (state of the system) and a new set of rules (behavior of the application) according to the adaptive actions that were performed. The obtained results should be displayed to the user, who can analyze them and, if necessary, make changes and restart the whole process.

3 Logical model for adaptive formalisms

Based on the concepts shown in [1], a logical model is proposed that allows the construction of tools that help to design adaptive applications. Such a model is represented by a data structure that supports the storage of the intermediate representation and allows the construction of a program that can manage the execution of the resulting specification by using the facilities available from adaptive devices. In [3], a proposal was presented for the logical structuring of the formal definition of the concepts of adaptive devices. Figure 3 shows an entity-relationship diagram of the conceptual model for adaptive devices. Such a diagram is structured by objects of three types, Underlying Kernel (UK), Specification (S) and Adaptive Layer (AL), according to the characteristics they represent. The objects with horizontal hatching (Device, Component Type, Connection Type and Attribute Type) are of the Underlying Kernel (UK) type and correspond to the intermediate representation of the basic elements of an underlying device. In this structure, the conceptual elements of the underlying devices formally represented by set C are defined.

Figure 3. Entity-relationship diagram of the intermediate representation.

Solid-color objects (Project, Attributes, Components, Connections and Environment Variables) are of the Specification (S) type and aim to represent the specifications yielded by a planner. Each object of this structure corresponds to elements of the formal definition, in which each rule c that is part of the set of rules NR of an underlying device ND can be represented by the objects in S.
The planner, when defining a specification, instantiates objects of the S type (elements that constitute the underlying kernel) and defines the behavior of the application. This structure also stores the elements of set A that correspond to the acceptance rules of an adaptive device and, furthermore, the information on the values of both the entry and exit chains in the Environment Variables object. The objects with vertical hatching are of the Adaptive Layer (AL) type and aim to provide the necessary resources to support the adaptive layer that involves the underlying kernel. The Adaptive Layer is structured in objects that correspond to the configuration of the adaptive device (Adaptive Action Type), and in objects that correspond to the AR conceptual elements, which in turn correspond to adaptive functions and actions. When defining the Underlying Kernel of a new device (Petri nets, automata, context-free grammars, etc.), the specialist needs to store information related to the name of the device, the creation and update dates, and so on. Such information is stored in the Devices object. Information on the types of components (places and transitions of a Petri net, final and non-final states of automata, etc.) that represent the behavior of an application and that is used by a planner when specifying his application can be represented by the Component Type object. When specifying a rule that represents the behavior of an application it is necessary to represent the form of the connection existing between its components. The Connection Type object represents the information on the connection type for a device (transitions for automata, Petri net connections, etc.), while the Attribute Type object contains information on the types of data that are available for attribution to a component of an application's behavior.
When accomplishing the Specification of an application it is necessary to store information on the description of the specification, on the planner in charge, and so on. Such information is represented by the Projects object. At first, when defining the behavior of a project, one should define the components that constitute the application's behavior. Such components are parts of the NR rules and are represented by the Components object. As examples of such components one can mention the description of the states that constitute a specification of automata, or the description of the places and transitions of a Petri net. Following the definition of the components, one defines the rules (set c of the formal representation) that constitute the behavior of an application (formally represented by NR). This structure establishes the relationship among the elements defined in both the Component Type and Connection Type objects and defines the behavior of an application. The value of each attribute associated with a (Component or Connection) object is represented by the Attributes object. The values of stimuli, information related to the exit, and other information necessary during execution are represented by the Environment Variables object. The Adaptive Layer is associated with the elements of the specification of an application. This results, at first, in the definition of the information on the type of adaptive action that can occur: consultation, insertion or removal. Such information is stored in the Adaptive Action Type object. When the adaptive mechanism is joined to the underlying kernel it is necessary to define the adaptive functions (the conceptual elements BA and AA) that should be associated with the elements of the Components object.
The Adaptive Functions object allows the extension of the underlying kernel to have the features of adaptive mechanisms, and it makes the connection between the elements of the underlying kernel and their respective adaptive actions, which are represented by the Adaptive Actions object. The Adaptive Actions object represents the set of adaptive actions belonging to AR whose function is to accomplish changes in the behavior of the designed application. Based on the logical structure, a tool is being developed that will allow a specialist to configure the conceptual elements of a non-adaptive device and to accomplish its extension to the adaptive mechanisms. Such a tool will also allow a planner, by using a textual language (intermediate representation), to develop the design of his applications. In a second stage, other tools will be developed that will allow the specification and display of graphic elements of the extended adaptive devices. The tool development is being done in Java [4] due to the portability and reuse features inherent to this programming language. Figure 4 shows the interface of the tool that is responsible for the definition of the connections of a specification.

4 Conclusion

This work aimed to present how to accomplish the extension of a non-adaptive device to support the characteristics of adaptive mechanisms. Initially, the general structure of an adaptive mechanism was presented, followed by the stages for the extension of a non-adaptive formalism to support the characteristics of adaptive mechanisms. Next, the methodology for the design of adaptive applications using these concepts was shown. Finally, a logical model was presented for the construction of tools that will give support to a design methodology for adaptive applications.
The proposed methodology was used in [2] for the modeling of applications that support the use of a graphic interface, and tools are being implemented to facilitate specialists and planners in their job with adaptive technology. Regarding the stages of definition of adaptive formalisms, several works have been accomplished on formal definition, and as a result adaptive formalisms were developed. Such works served as a base for the definition of the extension stages for adaptive formalisms and for the proposal of a logical model that represents adaptive rule-driven formalisms. The defined logical structure represents the conceptual elements of adaptive formalisms and constitutes an intermediate representation for the definition of tools that will support the methodology of design of adaptive applications. As a continuation of this work, a deeper study is suggested for the validation of the proposed logical model and the definition of a physical (computational) model for the validation of the proposed structure.

References

[1] Neto, J.J. (2001) Adaptive rule-driven devices: general formulation and case study. Sixth International Conference on Implementation and Application of Automata, Pretoria, South Africa.
[2] Camolesi, A.R. and Neto, J.J. (2004) Modelagem Adaptativa de Aplicações Complexas. XXX Conferencia Latinoamericana de Informática (CLEI), Arequipa, Peru.
[3] Camolesi, A.R. and Neto, J.J. (2003) An adaptive model for modelling of distributed system. Conferencia Argentina en Ciencias de la Computación (CACIC), La Plata, Argentina.
[4] Programming Language JAVA in (September 2004).

Figure 4. Interface of an Adaptive Tool System.


Distance Transform of Sampled Functions: An Interpretation

Introduction

The distance transform of a sampled function is a fundamental concept in digital image processing and computer vision. It serves as a powerful tool for applications such as object recognition, image segmentation, and shape analysis. In this article, we examine the distance transform of a sampled function, its key properties, and its significance in computer science.

Definition and Basic Principles

The distance transform is an operation that assigns a distance value to each pixel in an image, based on its proximity to a specific target object or region. It quantifies the distance between each pixel and the nearest boundary of the object, providing valuable geometric information about the image.

To compute the distance transform, a binary image is first created, in which the target object or region is represented by foreground pixels (usually white) and the background by background pixels (usually black). This binary image serves as the input for the distance transform algorithm.

Distance Transform Algorithms

Several distance transform algorithms have been developed over the years. One of the most widely used is the chamfer distance transform, also known as the 3-4-5 algorithm. This algorithm assigns a distance value to each pixel by considering the neighboring pixels and their corresponding distances. Other popular algorithms include the Euclidean distance transform, the Manhattan distance transform, and the Voronoi distance transform.

Properties of the Distance Transform

The distance transform possesses a set of important properties that make it a versatile tool for image analysis. These properties include:

1. Distance Metric Preservation: The distance values assigned to the pixels accurately represent their geometric proximity to the boundary of the target object.

2. Locality: The distance transform efficiently encodes local shape information. It provides a detailed description of the object's boundary and captures fine-grained details.

3. Invariance to Object Shape: The distance transform is independent of the object's shape, making it robust to variations in object size, rotation, and orientation.

Applications of the Distance Transform

The distance transform finds numerous applications across various domains. Some notable applications include:

1. Image Segmentation: The distance transform can be used in conjunction with segmentation algorithms to accurately delineate objects in an image. It helps in distinguishing objects from the background and separating overlapping objects.

2. Skeletonization: By considering the foreground pixels with a distance value of 1, the distance transform can be used to extract the object's skeleton. The skeleton represents the object's medial axis, aiding in shape analysis and recognition.

3. Path Planning: The distance transform can assist in path planning algorithms by providing a distance map that guides the navigation of robots or autonomous vehicles. It helps in finding the shortest path between two points while avoiding obstacles.

Conclusion

The distance transform of a sampled function plays a vital role in digital image processing and computer vision. Its ability to capture geometric information, preserve distance metrics, and provide valuable insights into the spatial structure of objects makes it indispensable in various applications. The proper understanding and utilization of the distance transform contribute to the advancement of image analysis techniques, enabling more accurate and efficient solutions in computer science.
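As a concrete illustration of the chamfer approach described above, the sketch below implements a two-pass 3-4 chamfer distance transform; the function name and the NumPy-based formulation are our own choices for illustration, not taken from any of the cited algorithms' reference implementations:

```python
import numpy as np

def chamfer_distance_transform(binary):
    """Two-pass 3-4 chamfer distance transform.

    `binary` is a 2-D array where nonzero marks foreground (object)
    pixels; the result approximates 3x the Euclidean distance from
    each pixel to the nearest foreground pixel (orthogonal steps
    cost 3, diagonal steps cost 4).
    """
    h, w = binary.shape
    INF = 10**9
    d = np.where(binary != 0, 0, INF).astype(np.int64)

    # Forward pass: scan top-left to bottom-right, pulling distances
    # from the upper and left neighbors.
    for i in range(h):
        for j in range(w):
            if i > 0:
                d[i, j] = min(d[i, j], d[i - 1, j] + 3)
                if j > 0:
                    d[i, j] = min(d[i, j], d[i - 1, j - 1] + 4)
                if j < w - 1:
                    d[i, j] = min(d[i, j], d[i - 1, j + 1] + 4)
            if j > 0:
                d[i, j] = min(d[i, j], d[i, j - 1] + 3)

    # Backward pass: scan bottom-right to top-left, pulling distances
    # from the lower and right neighbors.
    for i in range(h - 1, -1, -1):
        for j in range(w - 1, -1, -1):
            if i < h - 1:
                d[i, j] = min(d[i, j], d[i + 1, j] + 3)
                if j < w - 1:
                    d[i, j] = min(d[i, j], d[i + 1, j + 1] + 4)
                if j > 0:
                    d[i, j] = min(d[i, j], d[i + 1, j - 1] + 4)
            if j < w - 1:
                d[i, j] = min(d[i, j], d[i, j + 1] + 3)
    return d
```

For a single foreground pixel, orthogonal neighbors receive distance 3 and diagonal neighbors 4, approximating three times the Euclidean distance; dividing by 3 recovers the usual scale.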

2012_The KITTI Vision Benchmark Suite_citation131


Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite

Andreas Geiger and Philip Lenz, Karlsruhe Institute of Technology, {geiger,lenz}@
Raquel Urtasun, Toyota Technological Institute at Chicago, rurtasun@

Abstract

Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: /datasets/kitti

1. Introduction

Developing autonomous systems that are able to assist humans in everyday tasks is one of the grand challenges in modern computer science. One example are autonomous driving systems which can help decrease fatalities caused by traffic accidents. While a variety of novel sensors have been used in the past few years for tasks such as recognition, navigation and manipulation of objects, visual sensors are rarely exploited in robotics applications: Autonomous driving systems rely mostly on GPS, laser range finders, radar as well as very accurate maps of the environment.

In the past few years an increasing number of benchmarks have been developed to push forward the performance of visual recognition systems, e.g., Caltech-101 [17], Middlebury for stereo [41] and optical flow [2] evaluation. However, most of these datasets are simplistic, e.g., are taken in a controlled environment. A notable exception is the PASCAL VOC challenge [16] for detection and segmentation.

Figure 1. Recording platform with sensors (top-left), trajectory from our visual odometry benchmark (top-center), disparity and optical flow map (top-right) and 3D object labels (bottom).

In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for stereo, optical flow, visual odometry/SLAM and 3D object detection. Our benchmarks are captured by driving around a mid-size city, in rural areas and on highways. Our recording platform is equipped with two high resolution stereo camera systems (grayscale and color), a Velodyne HDL-64E laser scanner that produces more than one million 3D points per second and a state-of-the-art OXTS RT 3003 localization system which combines GPS, GLONASS, an IMU and RTK correction signals. The cameras, laser scanner and localization system are calibrated and synchronized, providing us with accurate ground truth. Table 1 summarizes our benchmarks and provides a comparison to existing datasets.

Our stereo matching and optical flow estimation benchmark comprises 194 training and 195 test image pairs at a resolution of 1240×376 pixels after rectification, with semi-dense (50%) ground truth. Compared to previous datasets [41, 2, 30, 29], this is the first one with realistic non-synthetic imagery and accurate ground truth. Difficulties include non-lambertian surfaces (e.g., reflectance, transparency), large displacements (e.g., high speed), a large variety of materials (e.g., matte vs. shiny), as well as different lighting conditions (e.g., sunny vs. cloudy).

Our 3D visual odometry/SLAM dataset consists of 22 stereo sequences, with a total length of 39.2 km. To date, datasets falling into this category are either monocular and short [43] or consist of low quality imagery [42, 4, 35]. They typically do not provide an evaluation metric, and as a consequence there is no consensus on which benchmark should be used to evaluate visual odometry/SLAM approaches. Thus often only qualitative results are presented, with the notable exception of laser-based SLAM [28]. We believe a fair comparison is possible in our benchmark due to its large scale nature as well as the novel metrics we propose, which capture different sources of error by evaluating error statistics over all sub-sequences of a given trajectory length or driving speed.

Our 3D object benchmark focuses on computer vision algorithms for object detection and 3D orientation estimation. While existing benchmarks for those tasks do not provide accurate 3D information [17, 39, 15, 16] or lack realism [33, 31, 34], our dataset provides accurate 3D bounding boxes for object classes such as cars, vans, trucks, pedestrians, cyclists and trams. We obtain this information by manually labeling objects in 3D point clouds produced by our Velodyne system, and projecting them back into the image. This results in tracklets with accurate 3D poses, which can be used to assess the performance of algorithms for 3D orientation estimation and 3D tracking.

In our experiments, we evaluate a representative set of state-of-the-art systems using our benchmarks and novel metrics. Perhaps not surprisingly, many algorithms that do well on established datasets such as Middlebury [41, 2] struggle on our benchmark. We conjecture that this might be due to their assumptions which are violated in our scenarios, as well as overfitting to a small set of training (test) images. In addition to the benchmarks, we provide MATLAB/C++ development kits for easy access. We also maintain an up-to-date online evaluation server. We hope that our efforts will help increase the impact that visual recognition systems have in robotics applications.

2. Challenges and Methodology

Generating large-scale and realistic evaluation benchmarks for the aforementioned tasks poses a number of challenges, including the collection of large amounts of data in real time, the calibration of diverse sensors working at different rates, the generation of ground truth minimizing the amount of supervision required, the selection of the appropriate sequences and frames for each benchmark as well as the development of metrics for each task. In this section we discuss how we tackle these challenges.

2.1. Sensors and Data Acquisition

We equipped a standard station wagon with two color and two grayscale Point Grey Flea 2 video cameras (10 Hz, resolution: 1392×512 pixels, opening: 90°×35°), a Velodyne HDL-64E 3D laser scanner (10 Hz, 64 laser beams, range: 100 m), a GPS/IMU localization unit with RTK correction signals (open sky localization errors < 5 cm) and a powerful computer running a real-time database [22].

We mounted all our cameras (i.e., two units, each composed of a color and a grayscale camera) on top of our vehicle. We placed one unit on the left side of the rack, and the other on the right side. Our camera setup is chosen such that we obtain a baseline of roughly 54 cm between the same type of cameras and that the distance between color and grayscale cameras is minimized (6 cm). We believe this is a good setup since color images are very useful for tasks such as segmentation and object detection, but provide lower contrast and sensitivity compared to their grayscale counterparts, which is of key importance in stereo matching and optical flow estimation.

We use a Velodyne HDL-64E unit, as it is one of the few sensors available that can provide accurate 3D information from moving platforms. In contrast, structured-light systems such as the Microsoft Kinect do not work in outdoor scenarios and have a very limited sensing range. To compensate egomotion in the 3D laser measurements, we use the position information from our GPS/IMU system.

2.2. Sensor Calibration

Accurate sensor calibration is key for obtaining reliable ground truth. Our calibration pipeline proceeds as follows: First, we calibrate the four video cameras intrinsically and extrinsically and rectify the input images. We then find the 3D rigid motion parameters which relate the coordinate system of the laser scanner, the localization unit and the reference camera. While our Camera-to-Camera and GPS/IMU-to-Velodyne registration methods are fully automatic, the Velodyne-to-Camera calibration requires the user to manually select a small number of correspondences between the laser and the camera images. This was necessary as existing techniques for this task are not accurate enough to compute ground truth estimates.

Table 1. Comparison of current state-of-the-art benchmarks and datasets.

Stereo Matching      | type       | #images | resolution | ground truth
EISATS [30]          | synthetic  | 498     | 0.3 Mpx    | dense
Middlebury [41]      | laboratory | 38      | 0.2 Mpx    | dense
Make3D Stereo [40]   | real       | 257     | 0.8 Mpx    | 0.5%
Ladicky [29]         | real       | 70      | 0.1 Mpx    | manual
Proposed Dataset     | real       | 389     | 0.5 Mpx    | 50%

Optical Flow         | type       | #images | resolution | ground truth
EISATS [30]          | synthetic  | 498     | 0.3 Mpx    | dense
Middlebury [2]       | laboratory | 24      | 0.2 Mpx    | dense
Proposed Dataset     | real       | 389     | 0.5 Mpx    | 50%

Visual Odometry/SLAM | setting | #sequences | length  | #frames | resolution
TUM RGB-D [43]       | indoor  | 27         | 0.4 km  | 65k     | 0.3 Mpx
New College [42]     | outdoor | 1          | 2.2 km  | 51k     | 0.2 Mpx
Malaga 2009 [4]      | outdoor | 6          | 6.4 km  | 38k     | 0.8 Mpx
Ford Campus [35]     | outdoor | 2          | 5.1 km  | 7k      | 1.0 Mpx
Proposed Dataset     | outdoor | 22         | 39.2 km | 41k     | 0.5 Mpx

Object Detection/3D Estimation | #categories | avg. #labels/category | orientations
Caltech 101 [17]               | 101         | 40-800                |
MIT StreetScenes [3]           | 9           | 3,000                 |
LabelMe [39]                   | 399         | 760                   |
ETHZ Pedestrian [15]           | 1           | 12,000                |
PASCAL 2011 [16]               | 20          | 1,150                 |
Daimler [8]                    | 1           | 56,000                |
Caltech Pedestrian [13]        | 1           | 350,000               |
COIL-100 [33]                  | 100         | 72                    | 72 bins
EPFL Multi-View Car [34]       | 20          | 90                    | 90 bins
Caltech 3D Objects [31]        | 100         | 144                   | 144 bins
Proposed Dataset               | 2           | 80,000                | continuous

Camera-to-Camera calibration. To automatically calibrate the intrinsic and extrinsic parameters of the cameras, we mounted checkerboard patterns onto the walls of our garage and detect corners in our calibration images. Based on gradient information and discrete energy minimization, we assign corners to checkerboards, match them between the cameras and optimize all parameters by minimizing the average reprojection error [19].

Velodyne-to-Camera calibration. Registering the laser scanner with the cameras is non-trivial as correspondences are hard to establish due to the large amount of noise in the reflectance values. Therefore we rely on a semi-automatic technique: First, we register both sensors using the fully automatic method of [19]. Next, we minimize the number of disparity outliers with respect to the top performing methods in our benchmark jointly with the reprojection errors of a few manually selected correspondences between the laser point cloud and the images. As correspondences, we select edges which can be easily located by humans in both domains (i.e., images and point clouds). Optimization is carried out by drawing samples using Metropolis-Hastings and selecting the solution with the lowest energy.

GPS/IMU-to-Velodyne calibration. Our GPS/IMU to Velodyne registration process is fully automatic. We cannot rely on visual correspondences, however, if motion estimates from both sensors are provided, the problem becomes identical to the well-known hand-eye calibration problem, which has been extensively explored in the robotics community [14]. Making use of ICP, we accurately register laser point clouds of a parking sequence, as this provides a large variety of orientations and translations necessary to well condition the minimization problem. Next, we randomly sample 1000 pairs of poses from this sequence and obtain the desired result using [14].

2.3. Ground Truth

Having calibrated and registered all sensors, we are ready to generate ground truth for the individual benchmarks shown in Fig. 1.

To obtain a high stereo and optical flow ground truth density, we register a set of consecutive frames (5 before and 5 after the frame of interest) using ICP. We project the accumulated point clouds onto the image and automatically remove points falling outside the image. We then manually remove all ambiguous image regions such as windows and fences. Given the camera calibration, the corresponding disparity maps are readily computed. Optical flow fields are obtained by projecting the 3D points into the next frame. For both tasks we evaluate both non-occluded pixels as well as all pixels for which ground truth is available. Our non-occluded evaluation excludes all surface points falling outside the image plane. Points occluded by objects within the same image could not be reliably estimated in a fully automatic manner due to the properties of the laser scanner. To avoid artificial errors, we do not interpolate the ground truth disparity maps and optical flow fields, leading to a ~50% average ground truth density.

The ground truth for visual odometry/SLAM is directly given by the output of the GPS/IMU localization unit projected into the coordinate system of the left camera after rectification.

Figure 2. Object Occurrence and Object Geometry Statistics of our Dataset. This figure shows (from left to right and top to bottom): the different types of objects occurring in our sequences, the power-law shaped distribution of the number of instances within an image and the orientation histograms and object size distributions for the two most predominant categories 'cars' and 'pedestrians'.

To generate 3D object ground truth we hired a set of annotators, and asked them to assign tracklets in the form of 3D bounding boxes to objects such as cars, vans, trucks, trams, pedestrians and cyclists. Unlike most existing benchmarks, we do not rely on online crowd-sourcing to perform the labeling. Towards this goal, we create a special purpose labeling tool, which displays 3D laser points as well as the camera images to increase the quality of the annotations. Following [16], we asked the annotators to additionally mark each bounding box as either visible, semi-occluded, fully occluded or truncated. Statistics of our labeling effort are shown in Fig. 2.

2.4. Benchmark Selection

We collected a total of ~3 TB of data from which we select a representative subset to evaluate each task. In our experiments we currently concentrate on grayscale images, as they provide higher quality than their color counterparts.

For our stereo and optical flow benchmarks we select a subset of the sequences where the environment is static. To maximize diversity, we perform k-means (k = 400) clustering on the data using a novel representation, and chose the elements closest to the center of each cluster for the benchmark. We describe each image using a 144-dimensional image descriptor, obtained by subdividing the image into 12×4 rectangular blocks and computing the average disparity and optical flow displacement for each block. After removing scenes with bad illumination conditions as, e.g., tunnels, we obtain 194 training and 195 test image pairs for both benchmarks.

For our visual odometry/SLAM evaluation we select long sequences of varying speed with high-quality localization, yielding a set of 41,000 frames captured at 10 fps and a total driving distance of 39.2 km with frequent loop closures which are of interest in SLAM.

Our 3D object detection and orientation estimation benchmark is chosen according to the number of non-occluded objects in the scene, as well as the entropy of the object orientation distribution. High entropy is desirable in order to ensure diversity. Towards this goal we utilize a greedy algorithm: We initialize our dataset X to the empty set ∅ and iteratively add images using the following rule

X ← X ∪ argmax_x [ α · noc(x) + (1/C) Σ_{c=1}^{C} H_c(X ∪ x) ]    (1)

where X is the current set, x is an image from our dataset, noc(x) stands for the number of non-occluded objects in image x and C denotes the number of object classes. H_c is the entropy of class c with respect to orientation (we use 8/16 orientation bins for pedestrians/cars). We further ensure that images from one sequence do not appear in both training and test set.

2.5. Evaluation Metrics

We evaluate state-of-the-art approaches utilizing a diverse set of metrics. Following [41, 2] we evaluate stereo and optical flow using the average number of erroneous pixels in terms of disparity and end-point error. In contrast to [41, 2], our images are not downsampled. Therefore, we employ a disparity/end-point error threshold of τ ∈ {2, .., 5} px for our benchmark, with τ = 3 px the default setting which takes into consideration almost all calibration and laser measurement errors. We report errors for both non-occluded pixels as well as all pixels where ground truth is available.

Evaluating visual odometry/SLAM approaches based on the error of the trajectory end-point can be misleading, as this measure depends strongly on the point in time where the error has been made, e.g., rotational errors earlier in the sequence lead to larger end-point errors. Kümmerle et al. [28] proposed to compute the average of all relative relations at a fixed distance. Here we extend this metric in two ways. Instead of combining rotation and translation errors into a single measure, we treat them separately. Furthermore, we evaluate errors as a function of the trajectory length and velocity. This allows for deeper insights into the qualities and failure modes of individual methods. Formally, our error metrics are defined as

E_rot(F) = (1/|F|) Σ_{(i,j)∈F} ∠[(p̂_j ⊖ p̂_i) ⊖ (p_j ⊖ p_i)]    (2)

E_trans(F) = (1/|F|) Σ_{(i,j)∈F} ‖(p̂_j ⊖ p̂_i) ⊖ (p_j ⊖ p_i)‖₂    (3)

where F is a set of frames (i, j), p̂ ∈ SE(3) and p ∈ SE(3) are estimated and true camera poses respectively, ⊖ denotes the inverse compositional operator [28] and ∠[·] is the rotation angle.

Our 3D object detection and orientation estimation benchmark is split into three parts: First, we evaluate classical 2D object detection by measuring performance using the well established average precision (AP) metric as described in [16]. Detections are iteratively assigned to ground truth labels starting with the largest overlap, measured by bounding box intersection over union. We require true positives to overlap by more than 50% and count multiple detections of the same object as false positives.

We assess the performance of jointly detecting objects and estimating their 3D orientation using a novel measure which we called the average orientation similarity (AOS), which we define as:

AOS = (1/11) Σ_{r ∈ {0, 0.1, .., 1}} max_{r̃ : r̃ ≥ r} s(r̃)    (4)

Here, r = TP / (TP + FN) is the PASCAL object detection recall, where detected 2D bounding boxes are correct if they overlap by at least 50% with a ground truth bounding box. The orientation similarity s ∈ [0, 1] at recall r is a normalized ([0..1]) variant of the cosine similarity defined as

s(r) = (1/|D(r)|) Σ_{i ∈ D(r)} [(1 + cos Δθ^(i)) / 2] · δ_i    (5)

where D(r) denotes the set of all object detections at recall rate r and Δθ^(i) is the difference in angle between estimated and ground truth orientation of detection i. To penalize multiple detections which explain a single object, we set δ_i = 1 if detection i has been assigned to a ground truth bounding box (overlaps by at least 50%) and δ_i = 0 if it has not been assigned.

Figure 3. Stereo Results for PCBP [46]. Input image (top), estimated disparity map (middle), disparity errors (bottom). Error range: 0 px (black) to ≥5 px (white). (a) Best: <1% errors. (b) Worst: 21% errors.

Figure 4. Optical Flow Results for TGV2CENSUS [45]. Input image (top), estimated flow field (middle), end point errors (bottom). Error range: 0 px (black) to ≥5 px (white). (a) Best: <1% errors. (b) Worst: 59% errors.

Finally, we also evaluate pure classification (16 bins for cars) and regression (continuous orientation) performance on the task of 3D object orientation estimation in terms of orientation similarity.

3. Experimental Evaluation

We run a representative set of state-of-the-art algorithms for each task. Interestingly, we found that algorithms ranking high on existing benchmarks often fail when confronted with more realistic scenarios. This section tells their story.

3.1. Stereo Matching

For stereo matching, we run global [26, 37, 46], semi-global [23], local [5, 20, 38] and seed-growing [27, 10, 9] methods. The parameter settings we have employed can be found on /datasets/kitti. Missing disparities are filled-in for each algorithm using background interpolation [23] to produce dense disparity maps which can then be compared. As Table 2 shows, errors on our benchmark are higher than those reported on Middlebury [41], indicating the increased level of difficulty of our real-world dataset.

Table 2. Stereo (left) and Optical Flow (right) Ranking from April 2, 2012. Numbers denote the percentage of pixels with disparity error or optical flow end-point error (euclidean distance) larger than τ = 3 px, averaged over all test images. Here, non-occluded refers to pixels which remain inside the image after projection in both images and all denotes all pixels for which ground truth information is available. Density refers to the number of estimated pixels. Invalid disparities and flow vectors have been interpolated for comparability.

Stereo          | Non-Occluded | All    | Density
PCBP [46]       | 4.72%        | 6.16%  | 100.00%
ITGV [37]       | 6.31%        | 7.40%  | 100.00%
OCV-SGBM [5]    | 7.64%        | 9.13%  | 86.50%
ELAS [20]       | 8.24%        | 9.95%  | 94.55%
SDM [27]        | 10.98%       | 12.19% | 63.58%
GCSF [9]        | 12.06%       | 13.26% | 60.77%
GCS [10]        | 13.37%       | 14.54% | 51.06%
CostFilter [38] | 19.96%       | 21.05% | 100.00%
OCV-BM [5]      | 25.39%       | 26.72% | 55.84%
GC+occ [26]     | 33.50%       | 34.74% | 87.57%

Optical Flow     | Non-Occluded | All    | Density
TGV2CENSUS [45]  | 11.14%       | 18.42% | 100.00%
HS [44]          | 19.92%       | 28.86% | 100.00%
LDOF [7]         | 21.86%       | 31.31% | 100.00%
C+NL [44]        | 24.64%       | 33.35% | 100.00%
DB-TV-L1 [48]    | 30.75%       | 39.13% | 100.00%
GCSF [9]         | 33.23%       | 41.74% | 48.27%
HAOF [6]         | 35.76%       | 43.36% | 100.00%
OCV-BM [5]       | 63.46%       | 68.16% | 100.00%
Pyramid-LK [47]  | 65.74%       | 70.09% | 99.90%

Interestingly, methods ranking high on Middlebury perform particularly bad on our dataset, e.g., guided cost-volume filtering [38], pixel-wise graph cuts [26]. This is mainly due to the differences in the data sets: Since the Middlebury benchmark is largely well textured and provides a smaller label set, methods concentrating on accurate object boundary segmentation perform well. In contrast, our data requires more global reasoning about areas with little, ambiguous or no texture where segmentation performance is less critical. Purely local methods [5, 38] fail if fronto-parallel surfaces are assumed, as this assumption is often strongly violated in real-world scenes (e.g., road or buildings).

Fig. 3 shows the best and worst test results for the (currently) top ranked stereo method PCBP [46]. While small errors are made in natural environments due to the large degree of textureness, inner-city scenarios prove to be challenging. Here, the predominant error sources are image saturation (wall on the left), disparity shadows (RV occludes road) and non-lambertian surfaces (reflections on RV body).

3.2. Optical Flow Estimation

For optical flow we evaluate state-of-the-art variational [24, 6, 48, 44, 7, 9, 45] and local [5, 47] methods. The results of our experiments are summarized in Table 2. We observed that classical variational approaches [24, 44, 45] work best on our images. However, the top performing approach TGV2CENSUS [45] still produces about 11% of errors on average. As highlighted in Fig. 4, most errors are made in regions which undergo large displacements between frames, e.g., close range pixels on the street. Furthermore, pyramidal implementations lack the ability to estimate flow fields at higher levels of the pyramid due to missing texture. While best results are obtained at small motions (Fig. 4 left, flow ≤ 55 px), driving at high speed (Fig. 4 right, flow ≤ 176 px) leads to large displacements, which can not be reliably handled by any of the evaluated methods. We believe that to overcome these problems we need more complex models that utilize prior knowledge of the world. Previously hampered by the lack of sufficient training data, such approaches will become feasible in the near future with larger training sets as the one we provide.

Figure 5. Visual Odometry Evaluation. Translation and rotation errors, averaged over all sub-sequences of a given length or speed.

3.3. Visual Odometry/SLAM

We evaluate five different approaches on our visual odometry/SLAM dataset: VISO2-S/M [21], a real-time stereo/monocular visual odometry library based on incremental motion estimates, the approach of [1] with and without Local Bundle Adjustment (LBA) [32] as well as the flow separation approach of [25]. All algorithms are comparable as none of them uses loop-closure information. All approaches use stereo with the exception of VISO2-M [21] which employs only monocular images. Fig. 5 depicts the rotational and translational errors as a function of the trajectory length and driving speed.

In our evaluation, VISO2-S [21] comes closest to the ground truth trajectories with an average translation error of 2.2% and an average rotation error of 0.016 deg/m. Akin to our optical flow experiments, large motion impacts performance, especially in terms of translation. With a recording rate of 10 frames per second, the vehicle moved up to 2.8 meters per frame. Additionally, large motions mainly occur on highways which are less rich in terms of 3D structure. Large errors at lower speeds stem from the fact that incremental or sliding-window based methods slowly drift over time, with the strongest relative impact at slow speeds. This problem can be easily alleviated if larger timespans are optimized when the vehicle moves slowly or is standing still. In our experiments, no ground truth information has been used to train the model parameters. We expect detecting loop closures, utilizing more enhanced bundle adjustment techniques as well as utilizing the training data for parameter fitting to further boost performance.

Figure 6. Object Detection and Orientation Estimation Results. (a) Precision-Recall. (b) Average Orientation Similarity. Details about the metrics can be found in Sec. 2.5.

3.4. 3D Object Detection/Orientation Estimation

We evaluate object detection as well as joint detection and orientation estimation using average precision and average orientation similarity as described in Sec. 2.5. Our benchmark extracted from the full dataset comprises 12,000 images with 40,000 objects. We first subdivide the training set into 16 orientation classes and use 100 non-occluded examples per class for training the part-based object detector of [18] using three different settings: We train the model in an unsupervised fashion (variable), by initializing the components to the 16 classes but letting the components vary during optimization (fixed init) and by initializing the components and additionally fixing the latent variables to the 16 classes (fixed).

We evaluate all non- and weakly-occluded (<20%) objects which are neither truncated nor smaller than 40 px in height. We do not count detecting truncated or occluded objects as false positives. For our object detection experiment, we require a bounding box overlap of at least 50%, results are shown in Fig. 6(a). For detection and orientation estimation we require the same overlap and plot the average orientation similarity (Eq. 5) over recall for the two unsupervised variants (Fig. 6(b)). Note that the precision is an upper bound to the average orientation similarity.

Overall, we could not find any substantial difference between the part-based detector variants we investigated. All of them achieve high precision, while the recall seems to be limited by some hard to detect objects. We plan to extend our online evaluation to more complex scenarios such as semi-occluded or truncated objects and other object classes like vans, trucks, pedestrians and cyclists.

Table 3. Object Orientation Errors for Cars. Performance measured in terms of orientation similarity (Eq. 5). Higher is better.

Classification | Similarity
SVM [11]       | 0.93
NN             | 0.85

Regression | Similarity
GP [36]    | 0.92
SVM [11]   | 0.91
NN         | 0.86

Finally, we also evaluate object orientation estimation. We extract 100 car instances per orientation bin, using 16 orientation bins. We compute HOG features [12] on all cropped and resized bounding boxes with 19×13 blocks, 8×8 pixel cells and 12 orientation bins. We evaluate multiple classification and regression algorithms and report average orientation similarity (Eq. 5). Table 3 shows our results. We found that for the classification task SVMs [11] clearly outperform nearest neighbor classification. For the regression task, Gaussian Process regression [36] performs best.

4. Conclusion and Future Work

Throwing new light on existing methods, we hope that the proposed benchmarks will complement others and help to reduce overfitting to datasets with little training or test examples and contribute to the development of algorithms that work well in practice. As our recorded data provides more information than compiled into the benchmarks so far, our intention is to gradually increase their difficulties. Furthermore, we also plan to include visual SLAM with loop-closure capabilities, object tracking, segmentation, structure-from-motion and 3D scene understanding into our evaluation framework.

References

[1] P. Alcantarilla, L. Bergasa, and F. Dellaert. Visual odometry priors for robust EKF-SLAM. In ICRA, 2010.
[2] S. Baker, D. Scharstein, J. Lewis, S. Roth, M. Black, and R. Szeliski. A database and evaluation methodology for optical flow. IJCV, 92:1-31, 2011.
[3] S. M. Bileschi. Streetscenes: Towards scene understanding in still images. Technical report, MIT, 2006.
[4] J.-L. Blanco, F.-A. Moreno, and J. Gonzalez. A collection of outdoor robotic datasets with centimeter-accuracy ground truth. Auton. Robots, 27:327-351, 2009.
[5] G. Bradski. The OpenCV library. Dr. Dobb's Journal of Software Tools, 2000.
[6] T. Brox, A. Bruhn, N. Papenberg, and J. Weickert. High accuracy optical flow estimation based on a theory for warping. In ECCV, 2004.
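Returning to the evaluation metrics of Sec. 2.5: as a rough illustration (the function names are ours, and the official MATLAB/C++ development kits remain the authoritative implementation), the orientation similarity of Eq. (5) and the interpolated 11-point average of Eq. (4) can be sketched as:

```python
import numpy as np

def orientation_similarity(angle_diffs, assigned):
    """s(r) of Eq. (5): mean of (1 + cos(delta)) / 2 over all detections
    at a given recall, zeroing unassigned (duplicate) detections."""
    angle_diffs = np.asarray(angle_diffs, dtype=float)
    assigned = np.asarray(assigned, dtype=float)
    return float(np.mean((1.0 + np.cos(angle_diffs)) / 2.0 * assigned))

def average_orientation_similarity(recalls, sims):
    """AOS of Eq. (4): for each of the 11 recall levels {0, 0.1, ..., 1},
    take the best similarity achieved at that recall or higher, then
    average (the standard PASCAL-style interpolation)."""
    recalls = np.asarray(recalls, dtype=float)
    sims = np.asarray(sims, dtype=float)
    total = 0.0
    for r in np.linspace(0.0, 1.0, 11):
        mask = recalls >= r - 1e-12  # tolerate float rounding
        total += sims[mask].max() if mask.any() else 0.0
    return total / 11.0
```

A detector whose orientation estimates are perfect at every recall level attains AOS = 1, matching the property that precision upper-bounds AOS.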

Senior High English (Beijing Normal University Press, Compulsory Book 2), Unit 6 The Admirable, Lesson 2 History Makers


Part I. Fill in each word from its first letter (word spelling)
1. Anger arose in her heart when he glanced briefly towards her but there was no sign of r_________.
2. D________ the traffic jam, we arrived there on time.
3. On the Mid-Autumn Festival, it is a tradition that we a__________ the moon and eat mooncakes with our family.
Part II. Fill in each word according to the Chinese meaning (word spelling)
4. He made a lot of ____________ (female) friends in 2022.
5. During the Mid-Autumn Festival, family members often get together to share a meal and ________ (admire) the moon.
6. These programs will ________ (merge, become one) with your existing software.
Part III. Fill in each word according to the Chinese and English hints (word spelling)
7. She had hoped for years to work for an English newspaper and e__________ (eventually, at last) got a job with English Coaching Paper.
8. The government will take some e________ (effective) measures to reduce the white pollution.
Part IV. Complete the sentence
9. Today, we will discuss some situations in which beginners of English fail to use the language correctly.

An essay in English on the relationship between humans and robots


The relationship between humans and robots has evolved significantly over the past few decades, and it is a topic of great interest and debate in modern society. Here are some key aspects to consider when discussing this relationship in an essay:

1. Historical Context: Begin by providing a brief history of how robots have been developed and integrated into various aspects of human life. Mention the early stages of industrial automation and how it has progressed to more sophisticated forms of artificial intelligence (AI).
2. Economic Impact: Discuss the economic implications of robots in the workforce. Robots have been instrumental in increasing productivity and efficiency in manufacturing, agriculture, and service industries. However, they also raise concerns about job displacement and the need for reskilling the workforce.
3. Social Implications: Explore the social impact of robots, including their role in caregiving, companionship, and education. Robots can provide assistance to the elderly, help children with learning, and even serve as therapy tools for those with mental health issues.
4. Ethical Considerations: Delve into the ethical questions surrounding the use of robots. This includes issues of privacy, consent, and the potential for robots to make decisions that have moral implications.
5. Cultural Perceptions: Examine how different cultures perceive robots and the impact of these perceptions on the development and acceptance of robotic technologies.
6. Technological Advancements: Highlight the rapid advancements in robotics and AI, such as machine learning, natural language processing, and autonomous systems. Discuss how these advancements are changing the nature of the human-robot relationship.
7. Future Prospects: Speculate on the future of human-robot interactions. Consider scenarios where robots might become more integrated into daily life, possibly even forming emotional bonds with humans.
8. Regulatory and Legal Frameworks: Discuss the need for laws and regulations to govern the use of robots, ensuring safety, accountability, and ethical use.
9. Human-Robot Collaboration: Explore the concept of human-robot collaboration, where robots and humans work together to achieve tasks that neither could accomplish alone.
10. Education and Training: Address the importance of educating the public and training professionals to work alongside robots, understanding their capabilities and limitations.
11. Safety and Security: Discuss the importance of ensuring that robots are designed with safety features and are secure from potential misuse or hacking.
12. Inclusivity and Accessibility: Consider how robots can be designed to be inclusive and accessible to people with disabilities, improving their quality of life.
13. Environmental Impact: Evaluate the environmental implications of increased robotic use, including the energy consumption of robots and the potential for robots to assist in environmental conservation efforts.
14. Human Identity and Self-Reflection: Contemplate how the presence of robots can influence human identity and self-reflection, potentially leading to a deeper understanding of what it means to be human.
15. Conclusion: Summarize the main points discussed in the essay and provide a balanced view of the potential benefits and challenges of the human-robot relationship. Offer suggestions for how society can best navigate this evolving relationship to ensure a harmonious coexistence.

When writing your essay, ensure that you provide examples and case studies to support your arguments and that you approach the topic with a critical yet open-minded perspective.

Senior High English: Unit Quality Assessment Test (3) (Senior One English exam paper)


感顿市安乐阳光实验学校 Unit Quality Assessment Test (3)
(Time allowed: 120 minutes; full marks: 150 points)

Part One: Listening (two sections, 30 points)

Section A (5 questions; 1.5 points each, 7.5 points)

Listen to the following 5 conversations. Each conversation is followed by one question; choose the best answer from the three options A, B and C. After each conversation you will have 10 seconds to answer the question and to read the next one. Each conversation is played only once.

1. What is the man speaker's idea?
A. To London Eye.  B. To the Thames River.  C. To Piccadilly Circus.
2. What does the woman speaker think of James?
A. Stone-hearted.  B. Considerate.  C. Warm-hearted.
3. What kind of room does Braine want?
A. A single room.  B. A double room.  C. A two-bed room.
4. What makes the man lose his match?
A. Without his coach's help.  B. His poor performance.  C. Not following his coach.
5. What does the woman think of her presentation?
A. Done with Dr. Willy's help.  B. Done with her own effort.  C. Easily done by Dr. Willy.

Section B (15 questions; 1.5 points each, 22.5 points)

Listen to the following 5 conversations or monologues. Each conversation or monologue is followed by several questions; choose the best answer from the three options A, B and C. Before listening to each conversation or monologue you will have time to read the questions, 5 seconds per question; after listening, 5 seconds will be given to answer each question.

Acceleration of stable TTI P-wave reverse-time migration with GPUs


Youngseo Kim a,*, Yongchae Cho b, Ugeun Jang b, Changsoo Shin b

a Seoul National University, Research Institute of Energy and Resources, 151-744/Building 135, College of Engineering, Seoul National University, Daehak-dong, Gwanak-gu, Seoul, Republic of Korea
b Seoul National University, Research Institute of Energy and Resources, 151-744/36-2061 College of Engineering, Seoul National University, Daehak-dong, Gwanak-gu, Seoul, Republic of Korea

Article history: Received 7 May 2012; received in revised form 25 September 2012; accepted 19 October 2012; available online 29 October 2012.

Keywords: GPU; MPI; REM; RTM; TTI

Abstract. When a pseudo-acoustic TTI (tilted transversely isotropic) coupled wave equation is used to implement reverse-time migration (RTM), shear-wave energy is significantly included in the migration image. Because anisotropy has intrinsic elastic characteristics, coupling P-wave and S-wave modes in the pseudo-acoustic wave equation is inevitable. In RTM with only primary energy, or the P-wave mode in seismic data, the S-wave energy is regarded as noise for the migration image. To solve this problem, we derive a pure P-wave equation for TTI media that excludes the S-wave energy. Additionally, we apply the rapid expansion method (REM), based on a Chebyshev expansion, and a pseudo-spectral method (PSM) to calculate spatial derivatives in the wave equation. When REM is incorporated with the PSM for the spatial derivatives, wavefields with high numerical accuracy can be obtained without grid dispersion when performing numerical wave modeling. Another problem in the implementation of TTI RTM is that wavefields in an area with high gradients of dip or azimuth angles can be blown up in the progression of the forward and backward algorithms of the RTM. We stabilize the wavefields by applying a spatial-frequency-domain high-cut filter when calculating the spatial derivatives using the PSM. In addition, to increase performance speed, the graphics processing
unit (GPU) architecture is used instead of traditional CPU architecture. To confirm the degree of acceleration compared to the CPU version of our RTM, we then analyze the performance measurements according to the number of GPUs employed.

© 2012 Elsevier Ltd. All rights reserved.

1. Introduction

RTM (reverse-time migration) (Baysal et al., 1983) is the most effective tool for imaging the sequence structure of strata with distinct velocity contrasts and geologically complex structures. Although practical implementation requires substantial computing costs, the rapid development of the computer industry and the improvement of the algorithm have made RTM a leader in high-end imaging. As the outcomes have become common in interpreting geological structures, many studies have focused on enhancing the images obtained by RTM. One factor for improving RTM is the consideration of anisotropy. Although most rocks have the characteristics of anisotropy, many geophysicists have not wanted to use an elastic wave equation containing anisotropy parameters, because the S-wave must be inherently included in the equation.

To apply the anisotropic characteristics of rocks to RTM and eliminate the S-wave modes in the wave equation, Alkhalifah (2000) derived a simplified dispersion relation in VTI (vertical transversely isotropic) media by setting the SV-wave velocity to zero. The dispersion relation was referred to as the pseudo-acoustic approximation, and the VTI pseudo-acoustic wave equation was applicable to describing the seismic anisotropy of subsurface formations. However, because the symmetry axis perpendicular to the bedding is not always vertical, migration images with the VTI approximation cannot always provide the best quality in terms of the definition of layers and salt boundaries. To consider various geological structures, Zhou et al. (2006) and Fletcher et al. (2009) derived the TTI (tilted transversely isotropic) pseudo-acoustic wave equation based on the Alkhalifah approximation.

Three major problems exist in the RTM with the TTI
pseudo-acoustic wave equation.First,SV-waves are generated as artifactsin numerical modeling,although it seems that there should beonly P-wave components.Grechka et al.(2004)proved that theartifacts are actually correctly modeled SV-waves of a TI(trans-versely isotropic)medium that has v s¼0.Although we regard v sas zero,this assumption does not mean that the SV-wave phasevelocity is zero for all propagation angles.To solve this problem,Duveneck et al.(2008)set Thomsen’s(1986)parameters,E and d,to be equal in a small region close to a source position,and ZhangContents lists available at SciVerse ScienceDirectjournal homepage:/locate/cageoComputers&Geosciences0098-3004/$-see front matter&2012Elsevier Ltd.All rights reserved./10.1016/j.cageo.2012.10.013n Corresponding author.Tel.:þ821042285568;fax:þ8228756296.E-mail address:kysgood0@snu.ac.kr(Y.Kim).Computers&Geosciences52(2013)204–217et al.(2009)proposed a set of new equations based on the eigenvalue analysis of the original acoustic wave equation.The second problem is that the numerical modeling of the TTI pseudo-acoustic wave equation takes much more time than does the acoustic wave equation in isotropic media.The TTI pseudo-acoustic wave equation is composed of the combination of a pressure wavefield and an auxiliary wavefield and includes many terms of second derivatives and coupled first derivatives.The wave equation in isotropic media requires only the P -wave velocity,while five parameters are required to describe the wave propagation using the TTI pseudo-acoustic wave equation.Third,the wavefield values obtained through numerical modeling with the TTI pseudo-acoustic wave equation may be blown up in areas where the dip or azimuth angles are substantially changed (Crawley et al.,2010).Fletcher et al.(2009)stabilized wave propagation by setting the SV-wave velocity to half of the P -wave vertical velocity;however,this method generates additional energy from the SV-wave,which can produce incorrect reflectors or 
make the image unclear. Recently, Yoon et al. (2010) addressed the instability of wave propagation by making δ equal to ε around spots with high gradients of the symmetry axis.

In this study, we derive a pure P-wave equation in 2D and 3D TTI media to exclude SV-wave energy in RTM. The spatial-frequency-domain dispersion relation obtained from the exact dispersion relations for VTI media derived by Tsvankin (1996) is used to derive the pure P-wave equation in VTI media (Zhan et al., 2011). Then, the TTI version of the wave equation is obtained by rotating wavenumbers. To obtain a stable solution of the wave equation even with large time steps, we employ the rapid expansion method (REM) to propagate wavefields in time (Pestana and Stoffa, 2010). To calculate the spatial derivatives in the wave equation, we select a pseudo-spectral method (PSM) (Kosloff and Baysal, 1982; Fornberg, 1987), because wave propagation incorporated with this method does not incur numerical dispersion, and REM combined with this method can generate a highly accurate solution for wave propagation. In the progression of applying the PSM, we multiply a spatial-frequency-domain high-cut filter function with the Fourier-transformed wavefields at each time step to prevent the wavefield values from being blown up in areas with high gradients of dip or azimuth angles (Zhan et al., 2011). In addition, to accelerate the performance speed of the numerical modeling, we calculate the 2D or 3D Fourier transforms and their inverses using GPUs with CUDA and parallel computing with MPI (Gropp et al., 1999). In addition to the kernel (a function executed in parallel on the GPU device) for the FFT, all algorithms required for the RTM are computed on the GPUs.

2. Mathematical expression of the pure TTI P-wave equation

The 3D spatial-frequency-domain (kx, ky, kz) dispersion relation used by Etgen and Brandsberg-Dahl (2009) and Crawley et al. (2010) is expressed as follows:

    ω² = vp0² [ (1 + 2ε)(kx² + ky²) + kz² − 2(ε − δ)(kx² + ky²)kz² / (kx² + ky² + kz²) ],    (1)

where ω is the angular frequency, vp0 is the P-wave velocity, and ε and δ are the Thomsen (1986) parameters. The dispersion relation in Eq. (1) applies to VTI media and can be transformed into the form in TTI media by rotating the wavenumber components (kx, ky, kz). The rotated wavenumbers are expressed as follows:

    k̂x = cos θ cos φ kx + cos θ sin φ ky + sin θ kz,
    k̂y = −sin φ kx + cos φ ky,
    k̂z = −sin θ cos φ kx − sin θ sin φ ky + cos θ kz.    (2)

Eq. (1) can be rewritten in a rotated coordinate system as

    −ω² = vp0² ( c1 kx² + c2 ky² + c3 kz² + c4 kx ky + c5 ky kz + c6 kz kx
          + c7 kx⁴/kr² + c8 ky⁴/kr² + c9 kz⁴/kr²
          + c10 kx²ky²/kr² + c11 ky²kz²/kr² + c12 kz²kx²/kr²
          + c13 kx³kz/kr² + c14 kx kz³/kr² + c15 kx³ky/kr² + c16 kx ky³/kr²
          + c17 ky³kz/kr² + c18 ky kz³/kr²
          + c19 kx²ky kz/kr² + c20 kx ky²kz/kr² + c21 kx ky kz²/kr² ),    (3)

where

    c1 = 1 + 2ε (sin²φ + cos²θ cos²φ),
    c2 = 1 + 2ε (cos²φ + cos²θ sin²φ),
    c3 = 1 + 2ε sin²θ,
    c4 = −2ε sin²θ sin 2φ,
    c5 = 2ε sin 2θ sin φ,
    c6 = 2ε sin 2θ cos φ,
    c7 = 2(δ − ε) sin²θ cos²φ (sin²φ + cos²θ cos²φ),
    c8 = 2(δ − ε) sin²θ sin²φ (cos²φ + cos²θ sin²φ),
    c9 = 2(δ − ε) sin²θ cos²θ,
    c10 = 0.5(δ − ε) { sin²θ (cos 4φ + 3) + 8 sin²θ sin²φ cos²φ (3 cos²θ − 2) },
    c11 = 0.5(δ − ε) { sin²φ (cos 4θ + 3) + 4 cos²θ (cos²φ − 4 sin²θ sin²φ) },
    c12 = 0.5(δ − ε) { cos²φ (cos 4θ + 3) + 4 cos²θ (sin²φ − 4 sin²θ cos²φ) },
    c13 = 4(δ − ε) sin θ cos θ cos φ (2 sin²θ cos²φ − 1),
    c14 = 4(δ − ε) sin θ cos θ cos 2θ cos φ,
    c15 = 4(δ − ε) sin²θ sin φ cos φ (cos²φ cos²θ + sin²φ),
    c16 = 4(δ − ε) sin²θ sin φ cos φ (sin²φ cos²θ + cos²φ),
    c17 = −4(δ − ε) sin θ cos θ sin φ (sin²φ cos²θ + cos²φ),
    c18 = 4(δ − ε) sin θ cos θ cos 2θ sin φ,
    c19 = 4(δ − ε) sin θ cos θ sin φ (6 sin²θ cos²φ − 1),
    c20 = 4(δ − ε) sin θ cos θ cos φ (4 sin²θ sin²φ − 1),
    c21 = 4(δ − ε) sin²θ sin φ cos φ (6 sin²θ − 5).

In Eq. (3), θ is the dip angle and φ is the azimuth angle. Using a Fourier transform, we can transform
Eq. (3) from the frequency domain to the time domain. When the iω term is substituted by ∂/∂t, Eq. (3) is changed to the following:

    ∂²u(x,t)/∂t² = vp0² ( c1 kx² + c2 ky² + c3 kz² + c4 kx ky + c5 ky kz + c6 kz kx
          + c7 kx⁴/kr² + c8 ky⁴/kr² + c9 kz⁴/kr²
          + c10 kx²ky²/kr² + c11 ky²kz²/kr² + c12 kz²kx²/kr²
          + c13 kx³kz/kr² + c14 kx kz³/kr² + c15 kx³ky/kr² + c16 kx ky³/kr²
          + c17 ky³kz/kr² + c18 ky kz³/kr²
          + c19 kx²ky kz/kr² + c20 kx ky²kz/kr² + c21 kx ky kz²/kr² ) u(x,t),    (4)

where u(x,t) is the wavefield at time t. The solution of the 2D TTI pure P-wave equation can be obtained by setting φ to 0. In the 2D case, the 21 terms of spatial derivatives in Eq. (4) can be reduced to 7 terms.

3. The solution of the wave equation using REM with a pseudo-spectral method

By replacing the multiplication of the square of the P-wave velocity and the term of the spatial derivatives in Eq. (4) with the symbol −F², the wave equation in Eq. (4) can be written as follows:

    ∂²u(x,t)/∂t² = −F² u(x,t).    (5)

The formal solution to Eq. (5) with the two initial conditions u0 = u(x,0) and u̇0 = ∂u(x,t)/∂t |_{t=0} is given by the following:

    u(x,t) = cos(Ft) u0 + F⁻¹ sin(Ft) u̇0.    (6)

The wavefields u(x,t+Δt) and u(x,t−Δt) can be obtained by setting the t term in Eq. (6) to t+Δt and t−Δt, respectively.
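As a quick numerical check of the VTI relation in Eq. (1), its right-hand side can be evaluated directly. This is an illustrative sketch, not code from the paper; the function name and argument order are our own.

```python
def vti_pwave_omega2(kx, ky, kz, vp0, eps, delta):
    """Squared angular frequency from the VTI pure-P dispersion relation
    of Eq. (1):
      omega^2 = vp0^2 [ (1+2*eps)*(kx^2+ky^2) + kz^2
                        - 2*(eps-delta)*(kx^2+ky^2)*kz^2 / (kx^2+ky^2+kz^2) ]"""
    kh2 = kx * kx + ky * ky          # squared horizontal wavenumber
    kr2 = kh2 + kz * kz              # squared total wavenumber
    if kr2 == 0.0:                   # zero wavenumber: omega = 0
        return 0.0
    return vp0 ** 2 * ((1.0 + 2.0 * eps) * kh2 + kz * kz
                       - 2.0 * (eps - delta) * kh2 * kz * kz / kr2)
```

Two convenient sanity checks: for eps = delta = 0 the relation collapses to the isotropic omega = vp0*|k|, and for purely vertical propagation (kx = ky = 0) the anisotropy correction vanishes.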
Adding these two wavefields removes the odd part of the solution, resulting in

    u(x,t+Δt) + u(x,t−Δt) = 2 cos(FΔt) u(x,t).    (7)

Because the PSM provides optimal spatial accuracy for a given grid size, we select this method to calculate the spatial derivatives within the REM. Based on the PSM, the form of Eq. (7) can be written using the Fourier transform as follows:

    u(x,t+Δt) = 2 FT⁻¹[ cos(FΔt) FT[u(x,t)] ] − u(x,t−Δt),    (8)

where FT and FT⁻¹ represent the Fourier transform and its inverse transform, respectively.

The cosine operator in Eq. (8) can be expanded by its Chebyshev expansion for one-step REM as proposed by Kosloff et al. (1989), and the cosine is expanded as follows (Pestana and Stoffa, 2010):

    cos(FΔt) = Σ_{k=0}^{M} C_{2k} J_{2k}(B) Q_{2k}(ix),    (9)

where C0 = 1 and Ck = 2 for k ≥ 1, |Jk(z)| ≤ |z|^k / (2^k k!), and Q_{2k} is presented as

    Q0(ix) = 1,
    Q2(ix) = 1 − 2x²,
    Q4(ix) = 1 − 8x² + 8x⁴,
    Q6(ix) = 1 − 18x² + 48x⁴ − 32x⁶,
    Q8(ix) = 1 − 32x² + 160x⁴ − 256x⁶ + 128x⁸,
    ...
    Q_{k+2}(ix) = (−4x² + 2) Q_k(ix) − Q_{k−2}(ix).    (10)

In Eq. (9), Jk represents the Bessel function of order k, Qk represents the modified Chebyshev polynomials that are recursively obtained from the initial conditions Q0 and Q2, and the characters B, x and m represent RΔt, F/R and 1/R, respectively.
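The recursion at the bottom of Eq. (10) can be exercised against the closed forms listed above it. A minimal sketch (the helper name is our own, and it assumes even orders only, as in the cosine expansion):

```python
def chebyshev_q(k, x):
    """Modified Chebyshev polynomial Q_k(ix) of Eq. (10) for even k,
    built from Q_0 = 1, Q_2 = 1 - 2*x**2 and the recursion
    Q_{k+2}(ix) = (-4*x**2 + 2) * Q_k(ix) - Q_{k-2}(ix)."""
    if k % 2 != 0 or k < 0:
        raise ValueError("only non-negative even orders are used here")
    q_prev = 1.0                 # Q_0
    q_curr = 1.0 - 2.0 * x * x   # Q_2
    if k == 0:
        return q_prev
    for _ in range((k - 2) // 2):
        q_prev, q_curr = q_curr, (-4.0 * x * x + 2.0) * q_curr - q_prev
    return q_curr
```

For example, chebyshev_q(6, x) reproduces the closed form 1 − 18x² + 48x⁴ − 32x⁶ listed in Eq. (10).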
The orthogonal polynomial series expansion for the cosine function was presented by Tal-Ezer et al. (1987). For the 3D TTI or VTI modeling, the R value is given by the following:

    R = π Vmax (1 + 2|ε|max) sqrt( 1/Δx² + 1/Δy² + 1/Δz² ),    (11)

where Vmax is the highest P-wave velocity in the direction of the symmetry axis, and Δx, Δy, and Δz are the spatial grid spacings in the x, y, and z directions, respectively. M should satisfy the condition M > RΔt. When we use five Chebyshev polynomial terms, Eq. (9) can be written as follows:

    cos(FΔt) = Σ_{k=0}^{M} C_{2k} J_{2k}(B) Q_{2k}(ix)
             = C0 J0(B) Q0(ix) + C2 J2(B) Q2(ix) + C4 J4(B) Q4(ix) + C6 J6(B) Q6(ix) + C8 J8(B) Q8(ix)
             = J0(B) + 2 J2(B) (1 − 2x²) + 2 J4(B) (1 − 8x² + 8x⁴) + 2 J6(B) (1 − 18x² + 48x⁴ − 32x⁶)
               + 2 J8(B) (1 − 32x² + 160x⁴ − 256x⁶ + 128x⁸)    (12)

and the final equation to perform the numerical modeling can be expressed as follows:

    u(x,t+Δt) = 2 [ J0(B) u(x,t)
                  + 2 J2(B) { u(x,t) − 2m² F(x) }
                  + 2 J4(B) { u(x,t) − 8m² F(x) + 8m⁴ F²(x) }
                  + 2 J6(B) { u(x,t) − 18m² F(x) + 48m⁴ F²(x) − 32m⁶ F³(x) }
                  + 2 J8(B) { u(x,t) − 32m² F(x) + 160m⁴ F²(x) − 256m⁶ F³(x) + 128m⁸ F⁴(x) } ]
                − u(x,t−Δt),    (13)

where F(x) = FT⁻¹[ F² FT[u(x,t)] ].

4. Pseudo-spectral method with CUDA and MPI

When the model size for RTM is small enough to implement the numerical modeling with one GPU, message passing with MPI is not required in the PSM application. However, the device memory size of the GPUs is not large enough to implement the actual application on wide-azimuth real exploration data. In addition, modeled data for the illumination zone should be stored in a global memory in the forward algorithm when RTM is performed on a cluster without a blade hard disk in each node. To solve these problems, multiple GPUs and CPU processors are employed to implement the RTM for one shot.

Fig. 1 displays the algorithm structure of a PSM for obtaining F(x) in Eq. (13) when four GPU devices are employed to implement the modeling for a shot. Let nx, ny and nz denote the number of grids in the x, y, and z directions and the symbol x represents the
location of the wavefield (x, y, z). k and k̃ are (kx, ky, kz) and (kx, ky, z), respectively. #1 in Fig. 1 demonstrates the domain partition for four GPU devices, where the wavefields at time t are partitioned vertically into four parts (divided by colors), and each color represents the subdomain assigned to a GPU device. A GPU device does not need to store the wavefields on the total domain, which makes it possible to implement the actual application on a large-sized model by using many GPUs.

The wavefields in the total domain are larger than nx x ny x nz because the grids in each axis are padded until the number of grid points is suitable for a prime-factor-length FFT (Fig. 2(a)). When N GPUs, i = (0, 1, 2, ..., N-1), are employed, the array of wavefields assigned to the i-th GPU is expressed as

    [1:nxfft, 1:nyfft, (nzfft/N)*i+0.5 : (nzfft/N)*(i+1)+0.5-1].

Because a GPU has enough wavefields to implement the Fourier transform in the x and y directions, the 2D FFT can be performed in every xy-plane along the z direction. In this study, we implement the 2D FFT on u(x) by using the complex cuFFT supported by NVIDIA and then obtain u(k̃) (shown in #2 in Fig. 1 and Fig. 2(b)).

To perform the FFT in the z direction, communication among the GPU devices is needed to exchange wavefields. The data exchange between GPUs involves three memory copies: from GPU to CPU, from CPU to CPU, and from CPU to GPU (Micikevicius, 2009). In the communication from CPU to CPU (#3 to #4) after the memory copy is made from GPU to CPU (#2 to #3), data exchange among CPUs is achieved using MPI as follows.

    /* Send wavefields to other processors */
    for (i = 0; i < N; i++) {                /* 'me' is my rank */
        buffer[i] = uk_tilde[1:nxfft,
                             (nyfft/N)*i+0.5 : (nyfft/N)*(i+1)+0.5-1,
                             (nzfft/N)*me+0.5 : (nzfft/N)*(me+1)+0.5-1];
        if (i == me)
            xzwork[1:nxfft,
                   (nyfft/N)*me+0.5 : (nyfft/N)*(me+1)+0.5-1,
                   (nzfft/N)*me+0.5 : (nzfft/N)*(me+1)+0.5-1] = buffer[me];
        else
            send data in buffer[i] to processor i using MPI_Bsend;
    }
    /* Receive wavefields from other processors */
    nrecv = 0;
    while (nrecv < N) {
        if (nrecv != me) {
            receive data from the nrecv-th processor using MPI_Recv
                and store it into buffer[nrecv];
            xzwork[1:nxfft,
                   (nyfft/N)*me+0.5 : (nyfft/N)*(me+1)+0.5-1,
                   (nzfft/N)*nrecv+0.5 : (nzfft/N)*(nrecv+1)+0.5-1] = buffer[nrecv];
        }
        nrecv += 1;
    }

Fig. 1. Diagram of the parallel 3D pseudo-spectral modeling when four GPU devices are employed to implement numerical modeling for a shot. The symbol x represents the location of the wavefield (x, y, z). k and k̃ are (kx, ky, kz) and (kx, ky, z), respectively. (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this article.) Legend: communication among nodes; multiplication of the pseudo-Laplacian with the wavefields; Fourier transform; inverse Fourier transform.

To facilitate understanding, we also illustrate the manner of data exchange in Fig. 2(d). After the memory copy is made from CPU to GPU (#4 to #5), the wavefields u(k) can be obtained by performing a 1D FFT in the z direction and are then stored in the device memory of each GPU device. The array of wavefields u(k) assigned to the i-th GPU is expressed as

    [1:nxfft, (nyfft/N)*i+0.5 : (nyfft/N)*(i+1)+0.5-1, nzfft],

and is also displayed in Fig. 2(c). The processes from #7 to #12 are the inverses of the processes that occur from #1 to #6. Whereas the processes from #1 to #6 are for preparing a PSM, the processes from #7 to #12 constitute the main routines for calculating the derivatives in the spatial directions using the PSM.

Fig. 2. The structure of the prime-numbered grids. (a) and (b) are the structures before and after the memory exchange, respectively, between the CPUs and GPUs in the progression from #1 to #6 in Fig. 1. (c) presents the manner of exchange.

To implement the REM, we must calculate F(x) in Eq. (13), which is expressed in detail as follows:

    F(x) = vp0² ( c1 FT⁻¹[kx² FT[u(x,t)]] + c2 FT⁻¹[ky² FT[u(x,t)]] + c3 FT⁻¹[kz² FT[u(x,t)]]
                + c4 FT⁻¹[kx ky FT[u(x,t)]] + c5 FT⁻¹[ky kz FT[u(x,t)]] + c6 FT⁻¹[kz kx FT[u(x,t)]]
                + ... + c21 FT⁻¹[(kx ky kz²/kr²) FT[u(x,t)]] )
         = vp0² Σ_{i=1}^{21} ci FT⁻¹[ Li(k) · u(k) ],    (14)

where the Li(k) are the 21 wavenumber terms of Eq. (4). Because Eq. (14) consists of 21 terms of spatial derivatives, the processes from #7 to #12 should be repeated 21 times. Each GPU device calculates H(k) = Li(k) · u(k) on the subdomain assigned to it (#7 in Fig. 1) and performs the inverse Fourier transform with cuFFT in the z direction to obtain Hi(k̃). Because data exchange is required to take the inverse Fourier transform of Hi(k̃) in the xy-plane, data communication among the CPUs is performed with MPI; its computational algorithm can be summarized as follows.

    /* Send wavefields to other processors */
    for (i = 0; i < N; i++) {                /* 'me' is my rank */
        buffer[i] = uk_tilde[1:nxfft,
                             (nyfft/N)*me+0.5 : (nyfft/N)*(me+1)+0.5-1,
                             (nzfft/N)*i+0.5 : (nzfft/N)*(i+1)+0.5-1];
        if (i == me)
            xzwork[1:nxfft,
                   (nyfft/N)*me+0.5 : (nyfft/N)*(me+1)+0.5-1,
                   (nzfft/N)*i+0.5 : (nzfft/N)*(i+1)+0.5-1] = buffer[me];
        else
            send data in buffer[i] to processor i using MPI_Bsend;
    }
    /* Receive wavefields from other processors */
    nrecv = 0;
    while (nrecv < N) {
        if (nrecv != me) {
            receive data from the nrecv-th processor using MPI_Recv
                and store it into buffer[nrecv];
            xzwork[1:nxfft,
                   (nyfft/N)*nrecv+0.5 : (nyfft/N)*(nrecv+1)+0.5-1,
                   (nzfft/N)*me+0.5 : (nzfft/N)*(me+1)+0.5-1] = buffer[nrecv];
        }
        nrecv += 1;
    }

After the completion of the data exchange among the CPUs and the memory copy from the CPUs to the GPUs, the wavefields on the subdomain assigned to a GPU are taken through the inverse transform in each xy-plane along the z direction (shown in #11 to #12 in Fig. 1). Then, by multiplying the coefficient ci by Hi(x), each GPU stores the values of ci FT⁻¹(Li(k) · u(k)) in Eq. (14) in the device memory. Finally, by repeating steps #7 to #12 21 times, we can calculate F(x) in Eq. (14) and obtain the wavefields at the next time step, u(x,t+Δt), in Eq. (13).

Fig. 3 shows the 2D algorithm with the GPUs and MPI. The 2D algorithm can be obtained by excluding the wavefields in the x direction from the 3D algorithm, because the data in the x direction are not shared among processors. In addition, when we perform 2D TTI modeling, the processes from #7 to #12 in Fig. 3 can be reduced to one-third of the repetitions used in the 3D algorithm.

To compare wavefields obtained by the TTI P-wave equation based on our proposed algorithm and those obtained by the pseudo-acoustic TTI wave equation suggested by Fletcher et al. (2009), we display 2D and 3D wavefield snapshots in Fig. 4. The P-wave vertical velocity in the medium is constant at 2000 m/s. The Thomsen parameters ε and δ are 0.24 and 0.1, respectively. The tilt angle is fixed to 45 degrees in the 2D and 3D cases, and the azimuth angle is set to 45 degrees in the 3D case. The wave snapshots in Fig. 4(a)-(d) are obtained using the pseudo-acoustic TTI wave equation based on a second-order time-domain, eighth-order space-domain finite-difference stencil. The SV-wave velocity is set to zero in Fig. 4(a) and (b). Fig. 4(a) and (b) demonstrates the diamond shape of the SV wavefront. In the P-wave RTM implementation, the SV-waves may act as artifacts that have a harmful effect on the quality of migration images. When we set the SV-wave velocity to 0, Fletcher et al. (2009) showed that wavefields in areas with a high gradient of dip angles can be blown up. To stabilize the wavefields, Fletcher et al. (2009) set the SV-wave velocity to half of the P-wave vertical velocity when a high contrast existed in the dip field. However, as shown in Fig. 4(c) and (d), the drawback of this approach is that additional S-wave noise is generated. The snapshots presented in Fig. 4(e) and (f) are generated by our proposed algorithm. Fig. 4(e) and (f) indicates that the S-waves are completely removed and that only P-wave wavefronts are clearly observed.

5. Algorithm of reverse time migration

When the cross-correlation imaging condition is employed in the RTM, the migration image at the k-th node can be expressed as follows:

    φk = Σ_{i=1}^{nshot} ∫_0^{Tmax} Sk(t) Rk(Tmax − t) dt,    (15)

where Tmax is the maximum recording time, i is the shot number, Sk(t) is the source wavefield, Rk(Tmax − t) is the backward-propagated receiver wavefield, and φk is the image at the k-th node. The source wavefield Sk(t) can be obtained by propagating a mathematical function, e.g., a Ricker wavelet, as a source signature forward in time. The receiver wavefield Rk(t)

Chengdu Shishi High School (Sichuan Province), 2024-2025 academic year, Senior Two, first semester: October monthly English exam paper


Part One: Reading Comprehension

Studying abroad programs are transformative. You'll go to class, but you could also explore the timeless and historical landscape of the countries that you may not have the chance to visit on your own. Since 1987, International Studies Abroad (ISA) has provided college students the opportunity to explore the world. We offer high-quality education abroad programs in Africa, Asia, Europe, Latin America, and the Pacific which are bound to turn your dreams into possibilities with unbeatable value.

Enjoy Flexibility with ISA

We've partnered with Arizona State University (ASU) to help you overcome common study abroad challenges by adding ASU online courses to your ISA program. With more than 2,100 online courses to choose from, ISA minimizes academic challenges, like major requirements, so you can make the most of your experience abroad.

1. What is special about the University of Westminster?
A. A long history.  B. Diverse subjects.  C. Small classes.  D. Traditional campuses.
2. When can you apply for the Spring 1, 2025 program?
A. Oct 7, 2024.  B. Oct 11, 2024.  C. Jan 15, 2025.  D. May 10, 2025.
3. How does ISA address the study abroad problems?
A. By removing academic burdens.  B. Through innovative cooperation.  C. Through abundant online courses.  D. By increasing major requirements.

Katalin Karikó, along with her colleague Drew Weissman, won the Nobel Prize in Physiology or Medicine in 2023 for the development of messenger RNA (mRNA) technology.

Karikó was born in January 1955, in a small village in Hungary. She had an ambition from early on to become a scientist. As a young adult, she became interested in mRNA, which carries DNA instructions to the protein-making engine of cells. She hoped that mRNA could play a key role in the treatment of various diseases. It became her mission to make her dream a reality to help cure patients.
However, Karikó faced a shortage of money for her research in her country, and she then faced the choice of stopping and doing something not connected to her mission, or continuing her research at the price of having to leave her country.

After searching for posts and scholarships worldwide, Karikó accepted an offer from Temple University in Philadelphia for a postdoctoral fellowship. Karikó and her husband gave up everything they had in their homeland and bought a one-way ticket to the U.S., where they knew no one. She was aware of the risks but didn't feel discouraged. As she put it in an award acceptance speech, "Follow your dreams and don't hesitate to learn anything from anyone."

She was initially on track to become a full professor but received repeated fund rejections. Undeterred by the problems and challenges, she chose to continue her research. By focusing on what mattered to her every day, she "accidentally" met her work partner Drew Weissman, who was also interested in mRNA.

They teamed up to work on mRNA and published papers about their groundbreaking discovery for years. Then the pandemic hit the world. The modified mRNA technology Karikó and Weissman invented was then used in vaccines that prevented the infection effectively.

Karikó's life is a testament to finding one's passion and then pursuing it every single day. Many of us know what we are fond of, but we are not good self-motivators on a daily basis.

4. What can we learn about Karikó from paragraph 2?
A. She had a clear sense of purpose.  B. She was poor when she was young.  C. She was hesitant to leave her country.  D. She longed to be a doctor to cure patients.
5.What does the underlined part “Undeterred by” in paragraph 4 probably mean?A.Being afraid of.B.Not motivated by.C.Being unaware of.D.Not discouraged by.6.What is probably the main contribution of Karikó?A.Simplifying the mRNA technology.B.Making the structure of mRNA clear.C.Laying the foundation for mRNA vaccines.D.Developing a vaccine for a serious disease.7.What can we learn from Karikó’s success?A.Every minute counts.B.Two heads are better than one.C.Where there’s a will, there’s a way.D.Necessity is the mother of invention.Does a happy person live longer? Many studies have convinced us that happiness brings good health, which has resulted in an increasing demand for speakers and products encouragingpositive thinking. However, being happy does not promise that one is going to be healthy. There are other factors that influence one’s health and long life such as a person’s genes or even a person’s socio-economic condition.Some research even suggests that positive thinking can be dangerous. Positive thinking, when taken to the extreme, can cause a person to be separated from reality. For example, a person who thinks that staying happy and positive can help him recover from an illness like cancer but later fails to recover from it, may blame himself for not being happy. In this case, positive thinking may potentially make the victim disregard other factors. Sometimes the pursuit of happiness is even associated with serious mental health problems such as depression.All types of happiness are not good for us either. For example, pride, a pleasant feeling, can sometimes rob us of the ability to empathize with others or understand another’s viewpoint. This anti-social behavior can cause people around us to turn away from us, and this could, in turn, make us feel lonely and do harm to our mental and even physical health.Moreover, unpleasant feelings can be beneficial to a person’s well-being. 
Researchers believe that unpleasant feelings can help us make sense of our challenges and experiences in a way that supports psychological well-being. For example, if I have behaved badly towards my good friend, the feelings of guilt and sadness might motivate me to apologize and ask for forgiveness. The rebuilding of a broken relationship can be a lift to one’s mental well-being.In trying to experience happiness, we should remember that seeking for happiness as an end in itself can be self-defeating, and does not necessarily lead to better health. After all, one will surely experience setbacks and conflicts in life. Instead, learning to cope with negative emotions with a realistic positive attitude is key to a person’s good health.8.What’s the writer’s opinion in this passage?A.Negative thinking can be dangerous.B.Staying happy can bring good health.C.Unpleasant feelings cannot be beneficial.D.Happiness cannot ensure one’s good health.9.When can positive thinking be dangerous according to the passage?A.When we use it with a realistic attitude to solve problems.B.When we focus on it as an only determinant of happy life.C.When we think it one of the necessary factor for good health.D.When we realize it may rob us of the ability to understand others.10.How can unpleasant feelings be beneficial to a person’s well-being?A.They rebuild a broken relationship.B.They lead to self-reflection and personal growth.C.They help keep the problems and challenges away.D.They prevent long-term negative effects on mental health.11.Which of the following has the similar meaning of “an end in itself”?A.An ultimate goal.B.An individual plan.C.A final decision.D.A great start.Robert Chmielewski has had quadriplegia (四肢瘫痪) since his teens. Sensors implanted (植入) in his brain read his thoughts to control two robotic arms, which helps him to perform daily tasks. 
Now he can use one robotic arm to control a knife and the other a fork.Modern technology can reach inside someone’s head and pull out what he is thinking. Maybe he intends to move a robotic arm or type something on a computer screen. Such thought-controlled devices can help people who aren’t able to move or perform different tasks and promote the well-being of the disabled.Decoding (解码) thought usually requires placing sensors directly on or in someone’s brain. Those implanted sensors can catch the electrical signals passing between the person’s brain cells, or neurons. Such signals carry messages that allow brains to think, feel and control the body.Using brain implants, researchers have picked up electrical signals in the brain linked to certain words or letters. This has allowed brain implants to transform thoughts into text or speech on a computer. Likewise, brain implants have transformed imagined handwriting into text on a screen. Implanted sensors have even allowed scientists to turn the signals they caught that are associated with a song in someone’s head into real music.In a recent study, scientists decoded full stories from people’s brains using MRI scans (磁共振成像扫描). This did not require any brain implants. But building the thought decoder did require many hours of brain scans for each person. What’s more, the system only worked on the person whose brain scans helped build it and only when that person was willing to have their mindread.So devices that might let someone secretly read your mind from across the room are still a long, long way off. Still, it’s clear that mind-reading tech is getting more advanced. 
As it does, scientists are thinking hard about what it would mean to live in a world where not even the inside of your head is completely private.12.What is the purpose of the first paragraph?A.To give an example.B.To compare the facts.C.To explain the reason.D.To introduce the topic.13.Which of the following is mentioned in the text?A.What principles a thought decoder should follow.B.How MRI monitors the work of the implanted sensors.C.What’s used to catch signals passing through the brain.D.How robotic arms are designed to satisfy different needs.14.What is scientists’ attitude towards the future of the technology discussed in the text?A.Concerned.B.Confident.C.Doubtful.D.Indifferent. 15.What can be a suitable title for the text?A.Mind Reading is Stealing Our PrivacyB.Mind Reading is Hard, but not ImpossibleC.Mind Reading—Good News for MusiciansD.Mind Reading—a Brain-scanning TechnologyThe experience of modern life is a constant stream of reviews where criticism is common, which makes us struggle both to give and receive it gracefully. 16 It’s not personal (even when it’s personal).When criticized, we often make it personal in two ways. 17 Additionally, we interpret criticism as a reflection of our inborn abilities rather than our actions. This tendency, observed even in young children, can lead to reduced self-esteem, mood, and persistence. To settle it, adopting a mindset that separates the feedback from the person giving it can help maintain objectivity and resilience (适应力) in the face of criticism. 18Once you depersonalize criticism in this way, you can start to see it for what it is: a rare glimpseinto what outsiders think about your performance, and thus a potential opportunity to correct course and improve. Studies indicate that actively engaging with feedback improves academic outcomes. 
If this doesn’t come easily to you, consider forming a critics’ circle with trusted peers to exchange honest feedback and enhance resilience.Make criticism a gift, never a weapon.We all have to give criticism from time to time. 19 Research suggests five key elements for providing constructive feedback: the care of the receiver in mind; respectful delivery; good intentions; a pathway to improvement; and appropriate targeting of the receiver’s needs.Praise in public, criticize in private.20 He used it to motivate players. Research confirms its effectiveness: public praise boosts motivation by 9%, while private criticism increases motivation by 11% compared to public settings.A.View criticism as objective feedback.B.Treat criticism like secret information.C.We shifts the focus from emotion to analysis.D.Therefore, we should approach criticism positively.E.This rule is credited to the football coach Vince Lombardi.F.We may naturally analyze the critic rather than the criticism.G.The key to criticizing is to remember it is intended to help, not harm.二、完形填空As a teenager growing up in Great Britain, Lola Anderson was inspired by the rowing events at the 2012 London Olympics.Moved by the athletes' strength and determination, she decided to 21 the sport herself. In her diary, she expressed her dream of winning an Olympic gold medal in rowing. Embarrassed by her 22 dream, Lola tore out the diary page.” I threw that away because I didn't believe, “Anderson 23 ” I was 14 then, so why would I believe? Young girls struggle to see themselves as strong, athletic individuals, but that's 24 now." Despite her initial 25 , Anderson pursued rowing with her father's support.In 2019, as Don Anderson 26 cancer, he presented Lola with a 27 . He heldopen his hand to 28 the page she had torn from her diary years earlier. 
Don had found it in the trash(垃圾筒) and kept it, 29 she would need it one day, Don 30 months later, but his faith in his daughter's dream remained.His predictive gesture 31 on Wednesday when Anderson competed in her first Olympic Games as part of the women's quadruple sculls(四人双桨)rowing team. Her team 32 the gold medal by a mere 0.15 seconds. After the race, Anderson reflected on her father's firm support.“It's a piece of paper,but it's the most valuable thing I have," she said. " Maybe jointly with the 33 now." Lola Anderson’s 34 from a self-doubting teenager to an Olympic champion serves as 35 of the power of dreams and the lasting impact of a father's love. 21.A.take up B.look into C.live upon D.fight for 22.A.greedy B.achievable C.shallow D.wild 23.A.imagined B.regretted C.added D.recalled 24.A.changing B.strengthening C.worsening D.speeding 25.A.resolution B.doubt C.confidence D.worry 26.A.studied B.battled C.defeated D.prevented 27.A.wish B.blow C.promise D.surprise 28.A.throw B.reveal C.fold D.release 29.A.advocating B.proving C.feeling D.wondering 30.A.passed away B.died off C.set off D.went away 31.A.happened B.mattered C.arrived D.worked 32.A.bagged B.forgotten C.lost D.recovered 33.A.support B.team C.medal D.rowing 34.A.journey B.range C.departure D.achievement 35.A.advice B.belief C.memory D.proof三、语法填空阅读下面材料,在空白处填入适当的内容(1个单词)或括号内单词的正确形式。

Lathe

Lathe
Belt drive
In some latitudes, a belt drive system is used to transmit power from the motor to the spindle, promoting a cost effective and related solution
Characteristics
The main characteristics of a Lathe include high precision, high efficiency, wide applicability, and easy operation Modern lathes also have features such as automatic tool change, automatic feeding and speed control, and programmable control systems
Regular maintenance
Regular maintenance, including cleaning and lubrication, helps to maintain the accuracy of the over time
Calibration
Periodic calibration of the Lathe's components helps to ensure that they are operating within specified tolerances
Application fields and market demand
Application fields Lates are widely used in the manufacturing industry for machining variant types of workpieces such as shares, disks, and complex shapes They are also used in the automotive industry for machining engine blocks, cylinder heads, and other components In addition, these are used in the aerospace industry for machining precision parts such as turbine blades and landing gear components
  1. 1、下载文档前请自行甄别文档内容的完整性,平台不提供额外的编辑、内容补充、找答案等附加服务。
  2. 2、"仅部分预览"的文档,不可在线预览部分如存在完整性等问题,可反馈申请退款(可完整预览的文档不适用该条件!)。
  3. 3、如文档侵犯您的权益,请联系客服反馈,我们会尽快为您处理(人工客服工作时间:9:00-18:30)。

Proceedings of the Ninth IEEE International Conference on Engineering of Complex Computer Systems (ICECCS 2004): Navigating Complexity in the e-Engineering Age. 1050-4729/04 $20.00 © 2004 IEEE

Centre for Research on Embedded Systems (CERES)
Halmstad University, Halmstad, Sweden

Abstract

An increasing concern in the development of embedded systems is that fundamental design problems often remain undetected until the final tests, after implementation and integration of all components, or maybe even later, at runtime. This is particularly important when it comes to meeting non-functional constraints such as performance or resource utilization requirements. Correcting problems with their sources in design, after implementation, may be very costly, as it often requires both redesign and re-implementation. Therefore, much effort has been put into the development of methods and tools that help system designers and developers to detect problems as early as possible during system development. This paper contributes an addition to that field by presenting a method, and a prototype tool based on it, for the derivation of implementation constraints.

1. Introduction

An increasing concern in the development of embedded systems is that fundamental design problems often remain undetected until the final tests, after implementation and integration of all components, or maybe even later, at runtime. This is particularly important when it comes to meeting non-functional constraints such as performance or resource utilization requirements. Correcting problems with their sources in design, after implementation, may be very costly, as it often requires both redesign and re-implementation. One way to detect such problems earlier is to assign each component a budget, derived from the overall requirements, such that if every budget is kept there is a high probability that the overall system requirements can be met. In that way, failure to keep a budget will be an immediate warning, meaning that further component development should be postponed until any necessary modifications of the design and/or the budget assignments have been made. Budgets may also serve as guides for the way components are implemented or compiled, by suggesting trade-offs between, for example, execution time and memory consumption. These budgets are referred to as implementation-time constraints (ITCs); they should not be confused with real-time constraints (RTCs).

In this paper, we present both a method for ITC generation and the results of a preliminary evaluation of a prototype implementation. The prototype embeds the method in a software tool, in order to determine whether the method is practical. The suggested approach to ITC generation is to perform a design space exploration and to optimize the solutions with respect to the relative amount of end-to-end work compared to the end-to-end constraints. The validation of the method for small and medium-sized systems shows that generation of ITCs is possible in a practical way and in limited time, based on relative estimates of the execution times and the end-to-end timing constraints. If an ITC for a task does not hold, there is a set of implementation directives from which an alternative ITC can be chosen. If no budget in the set holds, it is possible to rerun the budgeting method using updated execution-time estimates.

Our solution is in some central parts inspired by the real-time budgeting algorithm described in [4], which focuses on how to derive constraints, or budgets, for implementation based on end-to-end latencies, and how to provide a measure of the "flexibility" of each budget, its tightness. In [5], a framework for a systems engineering method is presented, focusing on preserving correctness at all steps of the formal process from specification and dimensioning to implementation. There are a number of other methods for real-time constraint derivation, such as the one described in [6], but these focus on implementation validation rather than design. Another closely related area is that of the system co-design methods presented in [7] [8]. The main focus of co-design methods is to simultaneously generate a hardware architecture and the software that runs on it, where the architecture may consist of a mix of programmable CPUs, DSPs, and ASICs.

The rest of the paper is organized as follows. Section 2 presents the problem statement. Section 3 presents the budgeting method in more detail, along with a step-by-step example and suggestions for improving the method. Finally, Section 4 presents the evaluation of the budgeting method, followed by the conclusions.
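The budgeting idea outlined above can be illustrated with a small sketch. Assuming, hypothetically, that relative execution-time estimates are available for a set of tasks sharing an end-to-end deadline, one simple budgeting policy is to split the deadline among the tasks in proportion to their estimates, withholding part of it as end-to-end slack; a broken budget then triggers a rerun of the budgeting with the measured times as updated estimates. The proportional split, the `margin` parameter, and all function and task names below are illustrative assumptions, not the paper's actual algorithm or tool interface.

```python
from dataclasses import dataclass

@dataclass
class Budget:
    task: str
    limit_ms: float   # the ITC: maximum execution time budgeted for the task
    slack_ms: float   # budget minus estimate; headroom left to the implementer

def derive_budgets(estimates_ms, deadline_ms, margin=0.1):
    """Split an end-to-end deadline into per-task time budgets (ITCs).

    Each budget is proportional to the task's relative execution-time
    estimate, and a fraction `margin` of the deadline is withheld as
    end-to-end slack.  If the estimates do not fit even before any
    implementation exists, the design itself needs rework.
    """
    total = sum(estimates_ms.values())
    usable = deadline_ms * (1.0 - margin)
    if total > usable:
        raise ValueError("execution-time estimates exceed the usable deadline")
    budgets = {}
    for task, est in estimates_ms.items():
        limit = est / total * usable
        budgets[task] = Budget(task, limit, limit - est)
    return budgets

def rebudget_on_violation(budgets, measured_ms, deadline_ms):
    """React to the 'immediate warning' of a broken budget: if any
    measured execution time exceeds its ITC, rerun the budgeting with
    the measurements as updated estimates."""
    if all(measured_ms[t] <= b.limit_ms for t, b in budgets.items()):
        return budgets  # all ITCs hold; development can proceed unchanged
    return derive_budgets(measured_ms, deadline_ms)
```

For example, a three-task chain with relative estimates of 2, 5 and 3 ms and a 20 ms end-to-end deadline receives budgets of 3.6, 9.0 and 5.4 ms under these assumptions; if the middle task later measures 9.5 ms, rebudgeting redistributes the deadline according to the new estimates, and `derive_budgets` raises once the estimates can no longer fit, signalling that the design or the budget assignments must be revised.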