Example-Based Metonymy Recognition for Proper Nouns
Yves Peirsman
Quantitative Lexicology and Variational Linguistics
University of Leuven, Belgium
yves.peirsman@arts.kuleuven.be

Abstract

Metonymy recognition is generally approached with complex algorithms that rely heavily on the manual annotation of training and test data. This paper will relieve this complexity in two ways. First, it will show that the results of the current learning algorithms can be replicated by the 'lazy' algorithm of Memory-Based Learning. This approach simply stores all training instances in its memory and classifies a test instance by comparing it to all training examples. Second, this paper will argue that the number of labelled training examples that is currently used in the literature can be reduced drastically. This finding can help relieve the knowledge acquisition bottleneck in metonymy recognition, and allow the algorithms to be applied on a wider scale.

1 Introduction

Metonymy is a figure of speech that uses "one entity to refer to another that is related to it" (Lakoff and Johnson, 1980, p.35). In example (1), for instance, China and Taiwan stand for the governments of the respective countries:

(1) China has always threatened to use force if Taiwan declared independence. (BNC)

Metonymy resolution is the task of automatically recognizing these words and determining their referent. It is therefore generally split up into two phases: metonymy recognition and metonymy interpretation (Fass, 1997). The earliest approaches to metonymy recognition identify a word as metonymical when it violates selectional restrictions (Pustejovsky, 1995). Indeed, in example (1), China and Taiwan both violate the restriction that threaten and declare require an animate subject, and thus have to be interpreted metonymically. However, it is clear that many metonymies escape this characterization. Nixon in example (2) does not violate the selectional restrictions of the verb to bomb, and yet it metonymically refers to the army under Nixon's
command.

(2) Nixon bombed Hanoi.

This example shows that metonymy recognition should not be based on rigid rules, but rather on statistical information about the semantic and grammatical context in which the target word occurs. This statistical dependency between the reading of a word and its grammatical and semantic context was investigated by Markert and Nissim (2002a) and Nissim and Markert (2003; 2005). The key to their approach was the insight that metonymy recognition is basically a subproblem of Word Sense Disambiguation (WSD). Possibly metonymical words are polysemous, and they generally belong to one of a number of predefined metonymical categories. Hence, like WSD, metonymy recognition boils down to the automatic assignment of a sense label to a polysemous word. This insight thus implied that all machine learning approaches to WSD can also be applied to metonymy recognition.

There are, however, two differences between metonymy recognition and WSD. First, theoretically speaking, the set of possible readings of a metonymical word is open-ended (Nunberg, 1978). In practice, however, metonymies tend to stick to a small number of patterns, and their labels can thus be defined a priori. Second, classic WSD algorithms take training instances of one particular word as their input and then disambiguate test instances of the same word. By contrast, since all words of the same semantic class may undergo the same metonymical shifts, metonymy recognition systems can be built for an entire semantic class instead of one particular word (Markert and Nissim, 2002a).

To this goal, Markert and Nissim extracted from the BNC a corpus of possibly metonymical words from two categories: country names (Markert and Nissim, 2002b) and organization names (Nissim and Markert, 2005). All these words were annotated with a semantic label, either literal or the metonymical category they belonged to. For the country names, Markert and Nissim distinguished between place-for-people, place-for-event and
place-for-product. For the organization names, the most frequent metonymies are organization-for-members and organization-for-product. In addition, Markert and Nissim used a label mixed for examples that had two readings, and othermet for examples that did not belong to any of the predefined metonymical patterns.

For both categories, the results were promising. The best algorithms returned an accuracy of 87% for the countries and of 76% for the organizations. Grammatical features, which gave the function of a possibly metonymical word and its head, proved indispensable for the accurate recognition of metonymies, but led to extremely low recall values, due to data sparseness. Therefore Nissim and Markert (2003) developed an algorithm that also relied on semantic information, and tested it on the mixed country data. This algorithm used Dekang Lin's (1998) thesaurus of semantically similar words in order to search the training data for instances whose head was similar, and not just identical, to the test instances. Nissim and Markert (2003) showed that a combination of semantic and grammatical information gave the most promising results (87%).

However, Nissim and Markert's (2003) approach has two major disadvantages. The first of these is its complexity: the best-performing algorithm requires smoothing, backing-off to grammatical roles, iterative searches through clusters of semantically similar words, etc. In section 2, I will therefore investigate if a metonymy recognition algorithm needs to be that computationally demanding. In particular, I will try and replicate Nissim and Markert's results with the 'lazy' algorithm of Memory-Based Learning.

The second disadvantage of Nissim and Markert's (2003) algorithms is their supervised nature.
Because they rely so heavily on the manual annotation of training and test data, an extension of the classifiers to more metonymical patterns is extremely problematic. Yet, such an extension is essential for many tasks throughout the field of Natural Language Processing, particularly Machine Translation. This knowledge acquisition bottleneck is a well-known problem in NLP, and many approaches have been developed to address it. One of these is active learning, or sample selection, a strategy that makes it possible to selectively annotate those examples that are most helpful to the classifier. It has previously been applied to NLP tasks such as parsing (Hwa, 2002; Osborne and Baldridge, 2004) and Word Sense Disambiguation (Fujii et al., 1998). In section 3, I will introduce active learning into the field of metonymy recognition.

2 Example-based metonymy recognition

As I have argued, Nissim and Markert's (2003) approach to metonymy recognition is quite complex. I therefore wanted to see if this complexity can be dispensed with, and if it can be replaced with the much simpler algorithm of Memory-Based Learning. The advantages of Memory-Based Learning (MBL), which is implemented in the TiMBL classifier (Daelemans et al., 2004), are twofold. First, it is based on a plausible psychological hypothesis of human learning. It holds that people interpret new examples of a phenomenon by comparing them to "stored representations of earlier experiences" (Daelemans et al., 2004, p.19). This contrasts with many other classification algorithms, such as Naive Bayes, whose psychological validity is an object of heavy debate. Second, as a result of this learning hypothesis, an MBL classifier such as TiMBL eschews the formulation of complex rules or the computation of probabilities during its training phase. Instead it stores all training vectors in its memory, together with their labels. In the test phase, it computes the distance between the test vector and all these training vectors, and simply returns the most
frequent label of the most similar training examples.

One of the most important challenges in Memory-Based Learning is adapting the algorithm to one's data. This includes finding a representative seed set as well as determining the right distance measures. For my purposes, however, TiMBL's default settings proved more than satisfactory. TiMBL implements the IB1 and IB2 algorithms that were presented in Aha et al. (1991), but adds a broad choice of distance measures. Its default implementation of the IB1 algorithm, which is called IB1-IG in full (Daelemans and Van den Bosch, 1992), proved most successful in my experiments. It computes the distance between two vectors X and Y by adding up the weighted distances \delta between their corresponding feature values x_i and y_i:

\Delta(X, Y) = \sum_{i=1}^{n} w_i \, \delta(x_i, y_i)    (3)

The most important element in this equation is the weight that is given to each feature. In IB1-IG, features are weighted by their Gain Ratio (equation 4), the division of the feature's Information Gain by its split info. Information Gain, the numerator in equation (4), "measures how much information it [feature i] contributes to our knowledge of the correct class label [...] by computing the difference in uncertainty (i.e. entropy) between the situations without and with knowledge of the value of that feature" (Daelemans et al., 2004, p.20). In order not "to overestimate the relevance of features with large numbers of values" (Daelemans et al., 2004, p.21), this Information Gain is then divided by the split info, the entropy of the feature values (equation 5). In the following equations, C is the set of class labels, H(C) is the entropy of that set, and V_i is the set of values for feature i.

w_i = \frac{H(C) - \sum_{v \in V_i} P(v) \times H(C|v)}{si(i)}    (4)

si(i) = -\sum_{v \in V_i} P(v) \log_2 P(v)    (5)

(Footnote: This data is publicly available and can be downloaded from /mnissim/mascara.)

Table 1: Results for the mixed country data.

         P        F
TiMBL    86.6%    49.5%
N&M      81.4%    62.7%

TiMBL: my TiMBL results. N&M: Nissim and Markert's (2003) results.

[...] simple learning phase, TiMBL is able to replicate the results from Nissim and Markert (2003; 2005).
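The IB1-IG scheme in equations (3) to (5) can be sketched in a few lines of code. This is a minimal illustration, not TiMBL itself: the toy features (grammatical role, head word) and labels are invented, and real TiMBL supports many more metrics and optimizations.

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(X, y, i):
    """Gain ratio of feature i: information gain divided by split info (eqs. 4-5)."""
    n = len(y)
    ig = entropy(y)
    si = 0.0
    for v, cnt in Counter(x[i] for x in X).items():
        p = cnt / n
        subset = [lab for x, lab in zip(X, y) if x[i] == v]
        ig -= p * entropy(subset)   # subtract P(v) * H(C|v)
        si -= p * math.log2(p)      # split info: entropy of the feature values
    return ig / si if si > 0 else 0.0

def ib1_ig_distance(a, b, weights):
    # weighted overlap distance (eq. 3): delta is 0 on a match, 1 otherwise
    return sum(w * (x != y) for w, x, y in zip(weights, a, b))

def classify(X_train, y_train, x, k=1):
    """Return the most frequent label among the k nearest training examples."""
    weights = [gain_ratio(X_train, y_train, i) for i in range(len(x))]
    dists = sorted((ib1_ig_distance(t, x, weights), lab)
                   for t, lab in zip(X_train, y_train))
    return Counter(lab for _, lab in dists[:k]).most_common(1)[0][0]

# invented toy data: (grammatical role, head word) -> reading
X_train = [("subj", "threaten"), ("subj", "declare"), ("pp", "in"), ("pp", "from")]
y_train = ["metonymical", "metonymical", "literal", "literal"]
print(classify(X_train, y_train, ("subj", "threaten")))  # metonymical
```

The key design point mirrors the paper's description: no model is built at training time; all work happens at classification time, when distances to every stored instance are computed.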
As table 1 shows, accuracy for the mixed country data is almost identical to Nissim and Markert's figure, and precision, recall and F-score for the metonymical class lie only slightly lower. TiMBL's results for the Hungary data were similar, and equally comparable to Markert and Nissim's (Katja Markert, personal communication). Note, moreover, that these results were reached with grammatical information only, whereas Nissim and Markert's (2003) algorithm relied on semantics as well.

Next, table 2 indicates that TiMBL's accuracy for the mixed organization data lies about 1.5% below Nissim and Markert's (2005) figure. This result should be treated with caution, however. First, Nissim and Markert's available organization data had not yet been annotated for grammatical features, and my annotation may slightly differ from theirs. Second, Nissim and Markert used several feature vectors for instances with more than one grammatical role and filtered all mixed instances from the training set. A test instance was treated as mixed only when its several feature vectors were classified differently. My experiments, in contrast, were similar to those for the location data, in that each instance corresponded to one vector. Hence, the slightly lower performance of TiMBL is probably due to differences between the two experiments.

These first experiments thus demonstrate that Memory-Based Learning can give state-of-the-art performance in metonymy recognition. In this respect, it is important to stress that the results for the country data were reached without any semantic information, whereas Nissim and Markert's (2003) algorithm used Dekang Lin's (1998) clusters of semantically similar words in order to deal with data sparseness.

[Table 2, partially lost in extraction: Acc, R; TiMBL 78.65%, 65.10%, 76.0%]

[Figure 1: Accuracy learning curves for the mixed country data with and without semantic information.]

This fact, together [...] in more detail. As figure 1 indicates, with respect to overall accuracy, semantic features have a negative influence: the learning curve
with both features climbs much more slowly than that with only grammatical features. Hence, contrary to my expectations, grammatical features seem to allow a better generalization from a limited number of training instances. With respect to the F-score on the metonymical category in figure 2, the differences are much less outspoken. Both features give similar learning curves, but semantic features lead to a higher final F-score. In particular, the use of semantic features results in a lower precision figure, but a higher recall score. Semantic features thus cause the classifier to slightly overgeneralize from the metonymic training examples.

There are two possible reasons for this inability of semantic information to improve the classifier's performance. First, WordNet's synsets do not always map well to one of our semantic labels: many are rather broad and allow for several readings of the target word, while others are too specific to make generalization possible. Second, there is the predominance of prepositional phrases in our data. With their closed set of heads, the number of examples that benefits from semantic information about its head is actually rather small.
Nevertheless, my first round of experiments has indicated that Memory-Based Learning is a simple but robust approach to metonymy recognition. It is able to replace current approaches that need smoothing or iterative searches through a thesaurus with a simple, distance-based algorithm.

[Figure 3: Accuracy learning curves for the country data with random and maximum-distance selection of training examples.]

[...] over all possible labels. The algorithm then picks those instances with the lowest confidence, since these will contain valuable information about the training set (and hopefully also the test set) that is still unknown to the system.

One problem with Memory-Based Learning algorithms is that they do not directly output probabilities. Since they are example-based, they can only give the distances between the unlabelled instance and all labelled training instances. Nevertheless, these distances can be used as a measure of certainty, too: we can assume that the system is most certain about the classification of test instances that lie very close to one or more of its training instances, and less certain about those that are further away. Therefore the selection function that minimizes the probability of the most likely label can intuitively be replaced by one that maximizes the distance from the labelled training instances.

However, figure 3 shows that for the mixed country instances, this function is not an option.
Both learning curves give the results of an algorithm that starts with fifty random instances, and then iteratively adds ten new training instances to this initial seed set. The algorithm behind the solid curve chooses these instances randomly, whereas the one behind the dotted line selects those that are most distant from the labelled training examples. In the first half of the learning process, both functions are equally successful; in the second the distance-based function performs better, but only slightly so.

There are two reasons for this bad initial performance of the active learning function. First, it is not able to distinguish between informative and unusual training instances. This is because a large distance from the seed set simply means that the particular instance's feature values are relatively unknown. This does not necessarily imply that the instance is informative to the classifier, however. After all, it may be so unusual and so badly representative of the training (and test) set that the algorithm had better exclude it, something that is impossible on the basis of distances only. This bias towards outliers is a well-known disadvantage of many simple active learning algorithms. A second type of bias is due to the fact that the data has been annotated with a few features only. More particularly, the present algorithm will keep adding instances whose head is not yet represented in the training set. This entails that it will put off adding instances whose function is pp, simply because other functions (subj, gen, ...) have a wider variety in heads. Again, the result is a labelled set that is not very representative of the entire training set.

[Figure 4: Accuracy learning curves for the country data with random and maximum/minimum-distance selection of training examples.]

There are, however, a few easy ways to increase the number of prototypical examples in the training set. In a second run of experiments, I used an active learning function that added not only those instances that
were most distant from the labelled training set, but also those that were closest to it. After a few test runs, I decided to add six distant and four close instances on each iteration. Figure 4 shows that such a function is indeed fairly successful. Because it builds a labelled training set that is more representative of the test set, this algorithm clearly reduces the number of annotated instances that is needed to reach a given performance.

Despite its success, this function is obviously not yet a sophisticated way of selecting good training examples. The selection of the initial seed set in particular can be improved upon: ideally, this seed set should take into account the overall distribution of the training examples. Currently, the seeds are chosen randomly. This flaw in the algorithm becomes clear if it is applied to another data set: figure 5 shows that it does not outperform random selection on the organization data, for instance.

[Figure 5: Accuracy learning curves for the organization data with random and distance-based (AL) selection of training examples with a random seed set.]

As I suggested, the selection of prototypical or representative instances as seeds can be used to make the present algorithm more robust. Again, it is possible to use distance measures to do this: before the selection of seed instances, the algorithm can calculate for each unlabelled instance its distance from each of the other unlabelled instances.
In this way, it can build a prototypical seed set by selecting those instances with the smallest distance on average. Figure 6 indicates that such an algorithm indeed outperforms random sample selection on the mixed organization data. For the calculation of the initial distances, each feature received the same weight. The algorithm then selected 50 random samples from the 'most prototypical' half of the training set. The other settings were the same as above.

With the present small number of features, however, such a prototypical seed set is not yet always as advantageous as it could be. A few experiments indicated that it did not lead to better performance on the mixed country data, for instance. However, as soon as a wider variety of features is taken into account (as with the organization data), the advan- [...] pling can help choose those instances that are most helpful to the classifier. A few distance-based algorithms were able to drastically reduce the number of training instances that is needed for a given accuracy, both for the country and the organization names.

If current metonymy recognition algorithms are to be used in a system that can recognize all possible metonymical patterns across a broad variety of semantic classes, it is crucial that the required number of labelled training examples be reduced.
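The two distance-based heuristics described above, the six-distant-plus-four-close batch selection and the prototypical seeding by smallest average distance, might be sketched as follows. The 1-D instances and the distance function are invented purely for illustration; the paper's actual instances are feature vectors compared with the IB1-IG metric.

```python
import random

def min_dist_to_labelled(x, labelled, dist):
    """Distance from an unlabelled instance to its nearest labelled neighbour."""
    return min(dist(x, l) for l in labelled)

def select_batch(unlabelled, labelled, dist, n_far=6, n_near=4):
    """One active-learning iteration: add the instances farthest from the
    labelled set (potentially informative) plus those closest to it
    (prototypical), as in the second run of experiments."""
    ranked = sorted(unlabelled, key=lambda x: min_dist_to_labelled(x, labelled, dist))
    return ranked[:n_near] + ranked[-n_far:]

def prototypical_seeds(unlabelled, dist, n_seeds=50):
    """Rank instances by their mean distance to all other unlabelled
    instances, then draw random seeds from the 'most prototypical' half."""
    n = len(unlabelled)
    ranked = sorted(unlabelled,
                    key=lambda x: sum(dist(x, y) for y in unlabelled) / (n - 1))
    half = ranked[: n // 2]
    return random.sample(half, min(n_seeds, len(half)))

# toy run: 1-D instances with absolute distance
random.seed(0)
pool = [(i,) for i in range(100)]
dist = lambda a, b: abs(a[0] - b[0])
seeds = prototypical_seeds(pool, dist, n_seeds=5)
batch = select_batch([x for x in pool if x not in seeds], seeds, dist)
print(len(seeds), len(batch))  # 5 10
```

Note the design trade-off the paper discusses: the farthest instances counter the head-sparsity bias but attract outliers, while the closest instances keep the labelled set representative.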
This paper has taken the first steps along this path and has set out some interesting questions for future research. This research should include the investigation of new features that can make classifiers more robust and allow us to measure their confidence more reliably. This confidence measurement can then also be used in semi-supervised learning algorithms, for instance, where the classifier itself labels the majority of training examples. Only with techniques such as selective sampling and semi-supervised learning can the knowledge acquisition bottleneck in metonymy recognition be addressed.

Acknowledgements

I would like to thank Mirella Lapata, Dirk Geeraerts and Dirk Speelman for their feedback on this project. I am also very grateful to Katja Markert and Malvina Nissim for their helpful information about their research.

References

D. W. Aha, D. Kibler, and M. K. Albert. 1991. Instance-based learning algorithms. Machine Learning, 6:37-66.

W. Daelemans and A. Van den Bosch. 1992. Generalisation performance of backpropagation learning on a syllabification task. In M. F. J. Drossaers and A. Nijholt, editors, Proceedings of TWLT3: Connectionism and Natural Language Processing, pages 27-37, Enschede, The Netherlands.

W. Daelemans, J. Zavrel, K. Van der Sloot, and A. Van den Bosch. 2004. TiMBL: Tilburg Memory-Based Learner. Technical report, Induction of Linguistic Knowledge, Computational Linguistics, Tilburg University.

D. Fass. 1997. Processing Metaphor and Metonymy. Stanford, CA: Ablex.

A. Fujii, K. Inui, T. Tokunaga, and H. Tanaka. 1998. Selective sampling for example-based word sense disambiguation. Computational Linguistics, 24(4):573-597.

R. Hwa. 2002. Sample selection for statistical parsing. Computational Linguistics, 30(3):253-276.

G. Lakoff and M. Johnson. 1980. Metaphors We Live By. London: The University of Chicago Press.

D. Lin. 1998. An information-theoretic definition of similarity. In Proceedings of the International Conference on Machine Learning, Madison, USA.

K. Markert and M. Nissim. 2002a. Metonymy resolution as a classification task. In
Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2002), Philadelphia, USA.

K. Markert and M. Nissim. 2002b. Towards a corpus annotated for metonymies: the case of location names. In Proceedings of the Third International Conference on Language Resources and Evaluation (LREC 2002), Las Palmas, Spain.

M. Nissim and K. Markert. 2003. Syntactic features and word similarity for supervised metonymy resolution. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL-03), Sapporo, Japan.

M. Nissim and K. Markert. 2005. Learning to buy a Renault and talk to BMW: A supervised approach to conventional metonymy. In H. Bunt, editor, Proceedings of the 6th International Workshop on Computational Semantics, Tilburg, The Netherlands.

G. Nunberg. 1978. The Pragmatics of Reference. Ph.D. thesis, City University of New York.

M. Osborne and J. Baldridge. 2004. Ensemble-based active learning for parse selection. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL), Boston, USA.

J. Pustejovsky. 1995. The Generative Lexicon. Cambridge, MA: MIT Press.
Marine Paint Engineer FR Examination
Day 1

Specification: A document defining what is wanted and what shall be done to achieve this.
Procedure: A detailed description of how to perform the tasks called for in the specification.
Specification: "what" to do. Procedure: "how" to do.

Coating system specification:
● Area
● Substrate
● Surface preparation
● Coating system

Coating work specification:
● Coating system specification
● Procedure

Standard: Definitions and procedures made by a third-party authority, used as a common reference.

Why do we use standards?
● Uniform understanding
● Normative procedures and equipment
● Rationalization of production
● Avoidance of discussions

Five levels of standards:
● Company standards
● Industry standards
● National standards (BS British, DIN German, ASTM American)
● Regional standards (CEN)
● International standards (ISO)

Quality assurance (QA): Implementation of all the systematic and planned actions necessary to ensure that a product or service is delivered with the required quality.
Quality control (QC): Checking of individual operations to ensure conformity with the specifications; one of the actions in QA.

Surface treatment inspector inspection phases:
● Before surface preparation
● During surface preparation
● Before paint application
● During paint application
● After paint application

Inspector duties:
● Observe, monitor
● Advise
● Check
● Report
● Stop work if required (given authority)

How to perform:
● Technically authoritative
● Flexible (within the limits of the specification)
● Well-behaved / diplomatic

Why use coatings?
● Protection
● Signal ownership, company colour, image
● Warning, attention
● Ease of cleaning

Substrates:
● Steel
● Non-ferrous metals
● Concrete
● Wood
● Plastics

Coatings:
● Paint
● Metallizing
● Glass
● Powder coating
● Wax coating (soft coating)

Considerations:
● Substrate
● Environment
● Surface preparation
● Coating quality
● Coating type
● Paint application
● Paint film thickness

Day 2

Surface preparation: The process of making a surface suitable for application of a coating.

Increase adhesion (long-term protection):
● Removal of contaminants (cleanliness)
● Increase of contact area (anchor pattern)

Contaminants, general:
● Dust
● Oil/grease
● Moisture
● Salt (soluble: osmosis risk)
● Non-intact old coating

Contaminants, substrate:
● Millscale: steel (acts as a battery, i.e. a galvanic couple); St2 and St3 grades still have millscale
● Red rust: steel
● White rust: galvanizing, zinc coating
● Laitance: concrete

Methods for removal (ISO 8504):
● Chemical
● Thermal
● Mechanical

Chemical methods:
● Water (<700 bar): soluble salts
● Alkaline solutions: grease (not mineral oil)
● Emulsifying solutions: oil, grease
● Solvents: oil, grease
● Pickling (acid treatment): millscale, rust
● Electrolytic descaling: rust
● Phosphatizing: forms a phosphate layer that promotes adhesion and protects against corrosion

Thermal methods:
● Flame cleaning: millscale, rust, oil

Mechanical methods:
● Needle gunning, chipping: rust
● Grinding (ISO 8501): rust, old coating
● Wire brushing: rust, old coating (not millscale)
● Sanding: rust, old coating (roughening)
● Abrasive blasting: millscale, rust, old coating
● Water jetting: rust, old coating (does not roughen)

Sequence of cleaning:
1. Oil, grease: emulsifying solutions
2. Soluble salts: water hosing, water jetting
3. Millscale: blasting, sanding, water jetting, etc.
4. Dust: blowing, vacuum cleaning
5. Moisture: heating

Methods, relative durability on carbon steel (least to most durable):
● No preparation
● Flame cleaning
● Power tool
● Water jetting
● Blasting

Standards:
● Cleanliness: ISO 8501-1
● Contaminants: ISO 8501-1, ISO 8502
● Methods: ISO 8504
● Roughness characteristics: ISO 8503
● Water jetting: SSPC-SP12 / NACE 5

Equipment for abrasive blasting:
● Centrifugal force: wheel abrator
● Pressurized air: open blasting

Wheel abrator characteristics:
● Almost exclusively in-line (production lines)
● Typically arranged in sets (2, 4, ...)
● Low to medium roughness

Wheel abrator: what to look for?
● Result
● Refill frequency
● Type and mix of abrasives
● Synchronization of wheel abrators

Nozzle characteristics:
● On-site
● Versatile
● High roughness possible (depends on pressure and distance)

Nozzle: what to look for?
● Result
● Oil
● Water

Nozzle variants:
● Dry
● Wet
● Vacuum

Abrasive types:
● Metallic
● Non-metallic: mineral, organic, others

Abrasive shapes:
● Grit: angular
● Shot: round
● Wire cut: cylindrical

Metallic abrasives:
● Chilled iron grit
● High-carbon cast steel grit/shot
● Low-carbon cast steel shot
● Wire cut

Non-metallic abrasives:
● Natural minerals: garnet, olivine, staurolite
● Synthetic minerals (slags): copper slag, nickel slag, coal furnace slag, iron furnace slag, glass, aluminium oxide
● Organic: ground walnut shells, ground cherry stones
● Other: CO2 ice, baking soda (NaHCO3), plastic granulates

Surface characteristics for abrasive blasting:
● Profile: sharp or dimpled
● Roughness: high or low
● Density: high or low

Ry5:
Mean of the five maximum peak-to-valley heights measured over five adjoining sampling lengths.
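The Ry5 definition above can be expressed as a small computation. The profile heights below are made-up illustrative values, not measurements from a real instrument:

```python
def ry5(sampling_lengths):
    """Ry5: mean of the maximum peak-to-valley height (max - min) over
    five adjoining sampling lengths of a roughness profile."""
    assert len(sampling_lengths) == 5, "Ry5 is defined over five sampling lengths"
    peak_to_valley = [max(seg) - min(seg) for seg in sampling_lengths]
    return sum(peak_to_valley) / 5

# five adjoining segments of profile heights in micrometres (illustrative)
segments = [
    [10, 55, 20],
    [15, 60, 25],
    [5, 50, 30],
    [12, 58, 18],
    [8, 52, 22],
]
print(ry5(segments))  # mean of [45, 45, 45, 46, 44] = 45.0
```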
Student Approaches to Learning
ISSN 1306-3065 Copyright © 2006-2013 by ESER, Eurasian Society of Educational Research. All Rights Reserved.

Student Approaches to Learning in Physics: Validity and Exploration Using an Adapted SPQ

Manjula Devi Sharma*1, Chris Stewart1, Rachel Wilson1, and Muhammed Sait Gökalp2

Received 22 December 2012; Accepted 9 March 2013
Doi: 10.12973/ijese.2013.203a

Abstract: The aim of this study was to investigate an adaptation of the Study Processes Questionnaire for the discipline of physics. A total of 2030 first year physics students at an Australian metropolitan university completed the questionnaire over three different year cohorts. The resultant data has been used to explore whether the adaptation of the questionnaire is justifiable and if meaningful interpretations can be drawn for teaching and learning in the discipline. In extracting scales for deep and surface approaches to learning, we have excised several items, retaining an adequate subset. Reflecting trends in the literature, our deep scale is very reliable while the surface scale is not so reliable. Our results show that the behaviour of the mean scale scores for students in different streams in first year physics is in agreement with expectations. Furthermore, different year cohort performance on the scales reflects changes in the senior high school syllabus. Our experiences in adaptation, validation and checking for reliability are of potential use for others engaged in contextualising the Study Processes Questionnaire, and add value to the use of the questionnaire for improving student learning in specific discipline areas.

Keywords: Student approaches to learning, learning in disciplines, university physics education

1 University of Sydney, Australia
* Corresponding author. Sydney University Physics Education Research group, School of Physics, University of Sydney, NSW 2006, Australia.
Email: sharma@.au
2 Dumlupınar University, Turkey

International Journal of Environmental & Science Education, Vol. 8, No. 2, April 2013, 241-253

Introduction

Since the mid 1960s a series of inventories exploring student learning in higher education have been developed, based on learning theories, educational psychology and study strategies. For reviews of the six major inventories see Entwistle and McCune (2004) and Biggs (1993a). As can be seen from the reviews, these inventories have two common components. One of these components is related to study strategies and the other is about cognitive processes. Moreover, these inventories usually have similar conceptual structures and include re-arrangement of the items (Christensen et al., 1991; Wilson et al., 1996).

In the current study, one of these inventories, the Study Processes Questionnaire (SPQ), has been selected to be adapted for use in physics. The SPQ is integrated with the presage-process-product model (3P model) of teaching and learning (Biggs, 1987). Several studies have successfully used the SPQ across different cultures and years to compare students' approaches in different disciplines (Gow et al., 1994; Kember & Gow, 1990; Skogsberg & Clump, 2003; Quinnell et al., 2005; Zeegers, 2001). Moreover, several other researchers used modified versions of the SPQ in their studies (Crawford et al., 1998a, b; Fox, McManus & Winder, 2001; Tooth, Tonge, & McManus, 1989; Volet, Renshaw, & Tietzel, 1994). For example, Volet et al. (2001) used a shortened SPQ of 21 items to assess cross-cultural differences. Fox et al. (2001) modified the SPQ and tested its structure with confirmatory factor analysis. In their study the modified version of the SPQ had 18 items, and this shortened version had the same factor structure as the original SPQ. In another study, Crawford et al. (1998a, b) adapted the SPQ for the discipline of mathematics.
The adapted questionnaire was named the Approaches to Learning Mathematics Questionnaire.

Three different approaches to learning are represented in the SPQ: surface, deep, and achieving. The idea of approaches to learning was introduced by Marton and Säljö (1976) and further discussed by several other researchers (e.g., Biggs, 1987; Entwistle & Waterston, 1988). Basically, a surface approach indicates that the student's motivation to learn is only for external consequences, such as gaining the appreciation of the teacher; more specifically, fulfilling course requirements is enough for students with a surface approach. A deep approach to learning, on the other hand, indicates that the motivation is intrinsic; this approach involves higher quality learning outcomes (Marton & Säljö, 1976; Biggs, 1987). Students with a deep approach to learning try to connect what they learn with daily life, and they examine the content of the instruction more carefully. The achieving approach, in turn, is about excelling in a course by doing whatever is necessary to obtain a good mark. The current study is not focused on this approach; only the first two approaches were included in the adapted SPQ.

Inventories like the SPQ are used in higher education for several reasons. Such inventories can help educators to evaluate teaching environments (Biggs, 1993b; Biggs, Kember, & Leung, 2001). Moreover, with the use of these inventories, university students often relate their intentions and study strategies for a learning context in a coherent manner. The SPQ itself is not a discipline-specific inventory; it can be used across different disciplines. If the research questions concern the common features of learning and teaching within the 3P model framework, the SPQ can be used satisfactorily for all disciplines.
However, a discipline-specific version of the SPQ is required if the research questions demand resolution of details specific to a discipline area. A discipline-specific version may also be required to reduce the systematic error and bias that can result from surveying students in different discipline areas. As a community of educators, we are aware that thinking, knowing, and learning processes can differ across discipline areas. A direct consequence of this acknowledgement is the need to understand and model learning in specific discipline areas, such as by adapting the SPQ. However, for the theoretical framework to remain valid, the conceptual integrity of the inventory must be maintained.

This paper reports on how the SPQ has been adapted for physics. The teaching context is first year physics at a research-focused Australian university where students are grouped according to differing senior high school experiences into Advanced, Regular, and Fundamentals streams. We report on the selection of items for the deep and surface scales and on reliability and validity analyses. A comparison of the Advanced, Regular and Fundamentals streams is carried out to ensure that interpretations associated with the deep and surface scales are meaningful. This work is one stage of a large-scale project that aims to understand and improve student learning based on the deep and surface approaches to learning inherent in the 3P model (Marton & Säljö, 1976; Biggs, 1987).

The study

As mentioned above, the SPQ was designed for higher education in general; it is not discipline specific. Therefore, in this study, we adapted the SPQ to physics for the following reasons: (1) First year students are often confused about university studies when they arrive (White et al., 1995), which can lead to misinterpretation of the items.
However, items specific to physics can reduce these misinterpretations. For example, students enrolled in a general science degree would view questions related to employment differently from those in professional degrees, and the students we have surveyed are from a range of degree programs. (2) In order to compare students from different discipline areas, we need discipline-specific inventories. (3) We believe that there are contentious items in the original SPQ, as well as aspects that are specific to physics. For example, the use of "truth" in the following item was strongly challenged by a group of physicists validating the questionnaire:

While I realize that truth is forever changing as knowledge is increasing, I feel compelled to discover what appears to me to be the truth at this time (Biggs, 1987, p. 132).

The item was changed to the following, more in line with the post-positivist paradigm and agreeable to physicists:

While I realize that ideas are always changing as knowledge is increasing, I feel a need to discover for myself what is understood about the physical world at this time.

One could argue that this is an issue of clarifying the item rather than making it specific to physics. However, to our knowledge the clarity of this item has not been debated in the literature.

Just after we commenced this study in 2001, we became aware that Biggs et al. (2001) had produced a revised Study Processes Questionnaire (R-SPQ-2F). However, it was too late for our study and we did not switch midway. There are four main differences between the SPQ and the R-SPQ-2F: first, the removal of all items on employment after graduation; second, increased emphasis on examination; third, removal of words that imply specificity; and fourth, exclusion of the contentious achieving factor identified by Christensen et al. (1991). We focus on the deep and surface approaches and not on the strategy and motive sub-scales, as the latter are not pertinent to our larger study.
The SPQ deep and surface scales, in particular, have been shown to be robust (see for example Burnett & Dart, 2000).

The participants in the current study were from a university in New South Wales, Australia. In their first semester, students are offered three basic physics units in the School: Fundamentals, Regular or Advanced. Students are divided into these three units based on their senior high school physics backgrounds. Students in the Fundamentals unit have done no physics in senior high school or have done poorly; the Regular unit comprises students who scored high grades in senior high school physics; and the Advanced unit is suitable for those who have done extremely well overall in physics throughout their years in senior high school.

The three physics units that students can register in serve the degree programs in Engineering, Medical Science and Arts. Students who intend to major in physics, as well as eventual postgraduate physics students, are drawn from those enrolled in all three basic physics courses in their first semester at university. The largest proportion of physics majors comes from the Advanced stream, followed by the Regular stream, and finally the Fundamentals stream. The data were collected from these streams from 2001 to 2004. Over this period, the high school physics syllabus and assessment system were changed in the state of New South Wales, Australia; the details of the changes can be seen in Binnie (2004). Due to these changes, the 2004 cohort of students in this study was instructed under a different curriculum.

Within the above context, we have adapted the SPQ to generate a Study Processes Questionnaire for Physics (SPQP). The research questions addressed in this paper are as follows.

(a) How do the factor solutions for the SPQP compare with those of the SPQ?
(b) Is the SPQP reliable and valid?
(c) Are the scales robust enough to reflect detail in senior high school syllabus change?

The answers to these research questions will determine whether the SPQP is a reliable and valid measure of student approaches to learning physics in our context.

Method

Revising the items for the SPQP

We have adapted the SPQ by simply inserting the word "physics" in some items and making substantial changes to others. The adaptations are based on our experiences of student responses to open-ended questions and on discipline knowledge, and have been extensively discussed amongst a group of physics educators. The adaptations are of the types listed below (see Appendix A for all items and the types of adaptations).

Type 0: No change.

Type 1: A simple insertion of terms such as "physics" or "studying physics".
Original: I find that at times studying gives me a feeling of deep personal satisfaction.
Adapted: I find that at times studying physics gives me a feeling of deep personal satisfaction.

Type 2: A substantial change in wording that can change the meaning, without intending to.
Original: I usually become increasingly absorbed in my work the more I do.
Adapted: When studying physics, I become increasingly absorbed in my work the more I do.

Type 3: An intentional change in meaning.
Original: My studies have changed my views about such things as politics, my religion, and my philosophy of life.
Adapted: My studies in physics have challenged my views about the way the world works.

The number of items corresponding to each type of change is displayed in Table 1, as are the number of items of each type selected for inclusion in the SPQP. Type 1 items were the most useful in generating the items used in the SPQP.

Administering the SPQP

The SPQP was administered at the beginning of the first semester to students in the Advanced, Regular and Fundamentals streams in 2001, 2002 and 2004.
On the questionnaire, students were asked to indicate their level of agreement with each item on a Likert scale with the options Strongly Disagree, Disagree, Neutral, Agree, and Strongly Agree. The response rates of the 2001, 2002, and 2004 cohorts were 95%, 65%, and 85%, respectively. Except for the 2002 cohort, the response rates were satisfactory; the main reasons for the lower response rate in 2002 were changes in class organization and questionnaire administration. Over these three years, a total of 2030 first year physics students responded to the SPQP. About 63 percent of students in the Fundamentals stream were female, compared with about 30 percent in the Regular and Advanced streams; the three streams are otherwise similar. The sample size of 2030 is large enough to capture the natural variance within this diverse population. Some cases were excluded from the analysis due to missing answers, but these exclusions amounted to only about 3% of the whole sample, so the missing data did not affect the overall results.

Data Analysis Methods

The following analyses were carried out to answer the research questions.
(a) Both exploratory and confirmatory factor analyses were performed to validate the two-factor solution: the deep and surface scales.
(b) Cronbach's alpha coefficients were calculated to determine the reliability of the deep and surface scales for the complete data set and for each stream.
(c) ANOVA and boxplots were used to determine whether the SPQP is able to differentiate between the three streams and between changes in syllabus.

Results

Factor analysis

In order to gain construct-related evidence for the validity of the SPQP, exploratory and confirmatory factor analyses were conducted. Exploratory factor analysis (EFA) was carried out using principal components as the factor extraction method with quartimax, an orthogonal rotation.
The complete data set was included in this analysis. Before interpreting the results, each item was checked for normality and sphericity. In order to check multicollinearity, the correlation matrix was examined: we expect the items to be intercorrelated, but the correlations should not be so high (0.90 or above) as to cause multicollinearity and singularity. Intercorrelation was checked by Bartlett's test of sphericity, which showed that the correlation matrix is not an identity matrix. Multicollinearity was checked with the determinant of the correlation matrix; the determinant was greater than 0, indicating no multicollinearity (Field, 2000). Extraction of factors was based on two criteria: the Scree test and the Kaiser criterion (eigenvalues). Based on the eigenvalues and the Scree test, two factors were extracted, together accounting for 48% of the variance. Items with factor loadings of less than .4 were excluded from further analyses (Field, 2000). Appendix A shows the two-factor solution for all items, including loadings. Those retained for the SPQP are starred: 10 items form the deep scale and 6 items the surface scale. From the results of the EFA, we note that the deep scale is in better agreement with Biggs's deep scale than the surface scale is with its counterpart; there are more "usable" items on the deep scale than on the surface scale.

Following the EFA, confirmatory factor analysis (CFA) was performed. This second step of the factor analysis helped us to confirm the factor structure of the SPQP (see Figure 1). Maximum likelihood (ML) was used as the method of estimation in the CFA. The relative chi-square, that is, the chi-square/degrees-of-freedom ratio, was 3.1. RMSEA and CFI were found to be 0.07 and 0.69, respectively.
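The two retention criteria used in the EFA above (eigenvalues greater than one, and a minimum factor loading of .4) are mechanical enough to sketch in code. The loadings matrix below is purely illustrative, not the study's data:

```python
import numpy as np

def kaiser_count(corr):
    """Number of factors to retain under the Kaiser criterion (eigenvalue > 1)."""
    eigvals = np.linalg.eigvalsh(corr)
    return int(np.sum(eigvals > 1.0))

def retain_items(loadings, threshold=0.4):
    """Indices of items whose largest absolute loading meets the threshold."""
    keep = np.max(np.abs(loadings), axis=1) >= threshold
    return np.where(keep)[0]

# Illustrative loadings of four items on two factors (deep, surface).
L = np.array([[0.72, 0.10],
              [0.65, 0.05],
              [0.08, 0.55],
              [0.20, 0.30]])   # last item loads below .4 on both factors
print(retain_items(L))          # the fourth item would be excluded
```

In practice the loadings come from a rotated principal-components solution; the sketch only shows the filtering step described in the text.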
According to Browne and Cudeck (1992), RMSEA values of less than 0.05 indicate close fit, and models with values greater than 0.10 should not be employed. Here the RMSEA indicates a moderate fit of the model, while the relative chi-square indicates a good fit. The CFI, however, should be above 0.90 for a good fit. On balance, the first two indices support this two-factor model of the SPQP and indicate moderate fit.

Figure 1. Validated two-factor structure of the SPQP.

Reliability of the SPQP

Cronbach alpha coefficients for each scale were calculated for each stream and for the whole data set. The results are shown in Table 2. It is apparent that the surface scale has the lowest Cronbach alpha coefficient in each stream. Similar findings have been reported in other studies (Biggs, 1987; Biggs et al., 2001; Wilson and Fowler, 2005), and such low reliability calls the foundational efficacy of these scales into question. In our study, however, higher levels of internal consistency were apparent (lowest α = .61).

Comparing reliabilities across streams, students who have less experience with physics report surface approaches more reliably than students with more experience, while students who have more experience with physics report deep approaches more reliably than those with less experience. Considering reliabilities within streams, the Fundamentals students report deep approaches as reliably as surface approaches, with values greater than 0.80, while the Advanced students report very different reliabilities for the two scales. These trends are not surprising, since Advanced students would tend to be more confident in content and study strategies.

The above trends also raise the question: are the persistently low reliabilities noted for the surface scale due to student 'behaviours' or to poor items on the inventory?
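Cronbach's alpha, the reliability coefficient reported throughout this section, can be computed directly from a respondents-by-items score matrix. A minimal sketch (the Likert responses below are invented for illustration):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) matrix of item scores.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)          # per-item sample variances
    total_var = scores.sum(axis=1).var(ddof=1)      # variance of summed scale score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Illustrative Likert responses (1-5) from six respondents on a three-item scale.
demo = [[4, 5, 4], [3, 3, 3], [5, 5, 4], [2, 2, 3], [4, 4, 5], [1, 2, 2]]
print(round(cronbach_alpha(demo), 2))
```

Items that covary strongly push alpha towards 1; items that vary independently pull it towards 0, which is why a heterogeneous surface scale yields the low values discussed above.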
An adequate reliability for the surface scale for the Fundamentals stream (α > .80), similar in magnitude to that of the deep scale, implies that there is internal consistency amongst the items of each scale for this group of students. We note that the Fundamentals students have experienced junior high school physics and are taking concurrent science and mathematics subjects at university. University students tend to have high internal coherence among learning components, intentions and study strategies, and are able to adapt their ideas of knowledge and study methods to their expectations of studying in a particular context; this internal coherence is demonstrated in the reliability scores. So why are the reliabilities for the surface scale as low as 0.61 for the Advanced stream? Is it because the nature of the surface approach differs between the Advanced and Fundamentals streams, possibly requiring different items? Or is it because the Advanced students adapt their surface approaches in diverse ways, and hence report on this scale less reliably? Answers to such questions would indeed add to our understanding of student learning.

ANOVA and Boxplots

To determine whether the SPQP is able to differentiate between the three streams, item and scale means were compared using one-way ANOVA. When comparing the means of the three streams for each item on the SPQP, the assumption of homogeneity of variances underpinning ANOVA was tested using Mauchly's test of sphericity. Items A5, A13 and A25 were excluded from the ANOVA because they violated the assumption of sphericity; this does not affect their use on the SPQP scales. The ANOVA showed a significant difference among the SPQP scores of students from the Fundamentals, Regular, and Advanced streams for both the surface and deep scales (p < .05).

There is debate among researchers about using ANOVA with ordinal data, mainly because of the normality concern.
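The one-way ANOVA used for the stream comparison reduces to a ratio of between-group to within-group variance. A self-contained sketch, with made-up scale scores standing in for the three streams:

```python
import numpy as np

def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA over a list of 1-D samples."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    all_data = np.concatenate(groups)
    grand_mean = all_data.mean()
    k = len(groups)              # number of groups
    n = all_data.size            # total observations
    # Between-groups and within-groups sums of squares.
    ss_between = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical deep-scale sums for Fundamentals, Regular and Advanced students.
fund = [28, 30, 27, 29, 31]
reg  = [32, 33, 31, 34, 32]
adv  = [36, 35, 37, 38, 36]
print(one_way_anova_F([fund, reg, adv]))   # large F -> group means differ
```

The F value would then be referred to an F(k−1, n−k) distribution for the p-value; statistical packages do this final step.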
As stated in Glass, Peckham, and Sanders (1972), violation of normality is not fatal to the ANOVA. Performing ANOVA with ordinal data, Likert-type items in this case, remains a controversial issue among researchers; however, our large sample increases the power of the test, so failure to meet the normality assumption should not affect the results of the ANOVA. As a check, we also performed the Kruskal-Wallis test, the non-parametric alternative to ANOVA; its results supported the ANOVA results given above.

In order to investigate whether the SPQP is robust enough to differentiate changes in syllabus even when a simple sum of item scores is used instead of factor scores, boxplots were examined (see Figure 2). The boxplots show a sequence for each scale, with the first panel representing the factor scores, the second panel the simple sums of the item scores for the SPQP, and the third panel the simple sums of all 16 item scores that should have loaded on each scale. We note two important features. First, the three panels representing the deep scale are sufficiently similar, implying that when an adaptation such as that in this study is made, the sums of the 10 SPQP item scores, and indeed of all 16 item scores, provide a reasonable measure of deep approaches to learning. This is not so for the surface scale: while the sum of the 6 SPQP item scores provides a reasonable measure of surface approaches to learning, the sum of all 16 items does not. This raises concerns regarding the surface scale and is a reflection of its low reliabilities.

Figure 2. A comparison by stream and year of the deep and surface scales. The boxplots show a vertical sequence for each scale, with panel (a) representing the factor scores for the SPQP deep scale and panel (d) those for the SPQP surface scale. Panel (b) represents the simple sum of item scores for the SPQP deep scale and panel (e) those for the SPQP surface scale. Panel (c) represents the simple sum of all 14 item scores that were intended to load on the deep scale and panel (f) those for the surface scale.

Discussion

As we are particularly interested in issues to do with learning physics, the rationale and manner in which items were modified and the SPQ adapted have been discussed in detail. The advantages of adapting an established, well-implemented inventory with a sound theoretical framework, both for its development and for its practical use in a teaching environment, are evident in the meaningful interpretations of our results summarized below.

1. The SPQ items were modified for our context based on our experiences, and all changes were extensively discussed amongst a group of physics educators. Ten items were retained for the deep scale and six for the surface scale. The rejection of items with anomalous factor loadings could be conceptually justified. The two-factor solution of the SPQP was confirmed by the EFA and CFA, and supports the factor structure of the original SPQ (Biggs, 1987).

2. The trends in reliabilities across streams are as expected, with students with less experience in physics reporting less reliably on the deep scale and more reliably on the surface scale, and vice versa. The low reliability of the surface scale for the Advanced stream raises the question of whether Advanced students have more diverse ways of exhibiting surface approaches to learning. This issue with the surface scale coincides with previous studies (Biggs, 1987; Biggs et al., 2001; Wilson and Fowler, 2005).

3. Comparisons of deep factor scores, simple sums of the 10 SPQP items, and sums of all 16 items suggest that the deep scale is reliable and particularly robust (see Figure 2).
The surface factor scores compare well with the simple sums of the 6 SPQP items, but not with all 16 items, suggesting that reliability and validity checks are particularly important for the surface scale. The implication is twofold: first, the SPQ is robust when contextualised, as shown by the reliability scores; and second, the contextualisation did not alter the overall coherency of the inventory, as shown by the meaningful interpretations across streams and years. This, together with the conceptual meanings associated with the items, provides confidence that the SPQP is consistent with the theoretical framework of the SPQ.

4. Changes in the senior high school physics syllabus have impacted approaches to study in the cohorts sampled in this study. The SPQP can illustrate differences between streams and years. From our study we are confident that the SPQP is a reliable and valid measure of approaches to learning physics in our context.

5. The adaptation of the SPQ to physics adds value to our project findings, as it allows us to illustrate physics-specific detail between the streams. We are confident that features that could have systematically biased the findings have been minimized. Lastly, the ways of thinking, learning and knowing in physics are embedded in the larger context of intentions and study methods in higher education.

Conclusion

We have adapted the Study Processes Questionnaire to physics and confirmed that a two-factor solution provides two subsets of selected items representing deep and surface approaches to learning. The resulting inventory is called the Study Processes Questionnaire for Physics, or SPQP. Further reliability and validation checks demonstrate that the two-scale SPQP is a usable inventory for our context.
Reliabilities for the Advanced, Regular and Fundamentals streams are adequate, and the behaviour of the mean scale scores for the three streams is consistent with expected student behaviours.

The process of adapting the SPQ has provided useful insights into the way physicists interpret the items and into how deep and surface approaches can be conceptualised in physics. The sound theoretical framework and research underpinning the SPQ have added value to the use of questionnaires for understanding student learning in our project. Such contextualised inventories have the potential to provide context-specific understandings of teaching and learning issues and to improve student learning.

Acknowledgements

The authors acknowledge Science Faculty Education Research (SciFER) grants, University of Sydney, and support from staff and students. The authors are grateful to Professor David Boud for his constructive feedback on this paper.
A comprehensive model of PMOS NBTI degradation

M. A. Alam a,*, S. Mahapatra b
a Agere Systems, Allentown, PA, USA
b Department of Electrical Engineering, IIT, Bombay, India

Received 13 November 2003; received in revised form 31 March 2004. Available online 3 August 2004.

Abstract

Negative bias temperature instability has become an important reliability concern for ultra-scaled silicon IC technology, with significant implications for both analog and digital circuit design. In this paper, we construct a comprehensive model for NBTI phenomena within the framework of the standard reaction–diffusion model. We demonstrate how to solve the reaction–diffusion equations in a way that emphasizes the physical aspects of the degradation process and allows easy generalization of the existing work. We also augment this basic reaction–diffusion model by including the temperature and field dependence of the NBTI phenomena so that reliability projections can be made under arbitrary circuit operating conditions. © 2004 Published by Elsevier Ltd.

1. Introduction

1.1. Background

The instability of PMOS transistor parameters (e.g., threshold voltage, transconductance, saturation current, etc.) under negative (inversion) bias and relatively high temperature has been known to be a reliability concern since the 1970s [1–4], and the modeling effort to understand it is just as old [5]. Named negative bias temperature instability (NBTI) in analogy to the Na+-induced bias temperature instability (BTI) that caused similar shifts in threshold voltages under operating conditions, the original technological motivation for NBTI studies was the shift of threshold voltage of p-channel MNOS [1,2] or FAMOS non-volatile memories [3] under high program and erase fields. New NMOS-based non-volatile memory architectures eventually replaced the PMOS-based systems [6], and thereby reduced the importance of NBTI for those specific systems. However, other processing and scaling changes, introduced over the last 30 years to improve device and circuit performances, have
inadvertently reintroduced NBTI as a major reliability concern for mainstream analog and digital circuits [7–17]. These changes include:

(a) The introduction of CMOS in the early 1980s, which made PMOS and NMOS devices equally important for IC design.
(b) The introduction of the dual-poly process, which allowed replacement of buried-channel PMOS devices with surface-channel PMOS devices. Although the circuit performance of surface-channel devices is better than that of buried-channel devices, their NBTI performance is actually worse [18,19].
(c) Slower scaling of operating voltages for both analog and digital circuits, compared to the more aggressive scaling of oxide thickness, which has gradually increased the effective field across the oxide and in turn enhanced NBTI. Also, at a given oxide field, thin oxide devices were found to be more susceptible to NBTI than their thick oxide counterparts.
(d) Thinner oxides have brought the poly-silicon gate closer to the Si/SiO2 interface. Note that diffusion of hydrogen away from the Si/SiO2 interface controls NBTI-specific interface trap generation at that interface. Since hydrogen diffusion through poly-silicon is faster than that in oxide, scaling of gate oxides has increased NBTI susceptibility [16].
(e) The introduction of nitrogen to reduce gate leakage and to inhibit boron penetration in thin oxide devices has made NBTI worse [10].

These technological factors, along with other circuit-specific usages, have made NBTI one of the foremost reliability concerns for modern integrated circuits (almost as important as gate oxide TDDB reliability).

* Corresponding author. Address: Room 313D, Electrical Engineering Building, School of Electrical and Computer Engineering, Purdue University, 465 Northwestern Avenue, West Lafayette, IN 47907-2035, USA. Tel.: +1-765-494-5988; fax: +1-765-494-6441. E-mail address: alam@ (M.A. Alam).
0026-2714/$ - see front matter © 2004 Published by Elsevier Ltd. doi:10.1016/j.microrel.2004.03.019
Microelectronics Reliability 45 (2005) 71–81
It is in this modern context that we need to understand the origin of the NBTI phenomena and the means to control it.

1.2. Experimental signatures of NBTI

Any theory of NBTI must be able to explain the following observations regarding the NBTI phenomena [7–27].

(a) The time evolution is described by a power-law relationship (i.e., ΔV_T ∼ t^n with n ∼ 1/4) that holds over many decades of the timescale, ΔV_T being the threshold voltage shift, often attributed entirely to generation of interface traps, N_IT (more on this at the end of Section 2.1). However, there are intriguing deviations from the perfect, single-exponent power-law degradation that need careful analysis:
(i) An increase in stress temperature increases n at early stress times. At higher temperatures, n becomes time-dependent and the degradation rate begins to saturate at longer times (e.g., n ≈ 0.25–0.3 initially, changing to n ≈ 0.2–0.24 at later stages of degradation) [15,27].
(ii) Under stronger gate biases, and especially for thick oxides, ΔV_T increases dramatically in the later part of the stress (i.e., n changes from ≈0.25 to ≈0.5, as shown in Fig. 7). The exact point of this dramatic transition is temperature, bias and oxide thickness dependent [15].
(iii) For thinner oxides, NBTI degradation increases unexpectedly at later times at elevated stress temperatures [16].
(b) A fraction of NBTI defects can be annealed once the stress is removed. This makes NBTI lifetimes (to reach a certain amount of degradation) longer for AC stress than for DC stress [20–23].
(c) BTI appears to be associated with PMOS devices under inversion bias conditions; NMOS devices at the same voltage show much lower NBTI [12].
(d) Unlike the voltage-dependent TDDB phenomena [28], NBTI appears to be electric-field dependent [1,2,5,15,16,25].
(e) NBTI is an activated process, with activation energies of 0.12–0.15 eV [9–16]. Replacing hydrogen with deuterium during interface passivation reduces NBTI [10]; however, the effect of such replacement on NBTI is far less dramatic than in hot carrier degradation.
(f) Both interface and gate oxide processing influence NBTI robustness. The presence of water [26], boron [9,11], and nitrogen [17] near the Si/SiO2 interface can modify NBTI substantially.
(g) Although NBTI is usually associated with interface traps only, there are some reports that the number of positive charges in the oxide bulk equals the number of interface traps under NBTI stress conditions [1,5,26], indicating correlated generation. Whether this equality is preserved during the entirety of the degradation is debatable.

None of the existing NBTI models is comprehensive enough to consistently explain the above observations within a common framework. The goal of this paper is to develop such a comprehensive framework by generalizing the classical reaction–diffusion (R–D) model [5,13–16] for N_IT generation. This allows us to identify the roles of hole density, hydrogen diffusion, oxide thickness, and interface quality in NBTI reliability. We will use the above experimental observations to systematically test the R–D model whenever possible, and augment the model whenever necessary.

2. The standard R–D model of NBTI

2.1. Description of the R–D model

To date, the R–D model is the only model that can interpret the power-law dependence of interface trap generation during NBTI, as discussed in Section 1.2(a), without making any a priori assumption regarding the distribution of interface bond strengths. As discussed in Refs. [5,13,14], the model assumes that when a
gate voltage is applied, it initiates a field-dependent reaction at the Si/SiO2 interface that generates interface traps by breaking the passivated Si–H bonds. Although the exact mechanism that causes such bond dissociation remains unspecified in the original model [5], and a number of groups are exploring the mechanics of trap generation by first-principle calculations [29–31], some have suggested [13–17] (and we will discuss this in detail in Section 3) that the dissociation is preceded by the capture (via field-assisted tunneling) of inversion layer holes by the Si–H covalent bonds. This weakens the existing Si–H covalent bond [24], which is then easily broken at higher temperatures. The newly released hydrogen diffuses away from the interface, leaving behind a positively charged interface state (Si+) that is responsible for higher threshold voltage and lower transconductance. This process is schematically explained in Fig. 1. The R–D model is described by the following equations:

dN_IT/dt = k_F (N_0 − N_IT) − k_R N_H(x=0) N_IT    (x = 0)        (1a)
dN_IT/dt = D_H (dN_H/dx) + (δ/2) (dN_H/dt)        (0 < x < δ)     (1b)
D_H (d²N_H/dx²) = dN_H/dt                          (δ < x < T_PHY) (1c)
D_H (dN_H/dx) = k_P N_H                            (x > T_PHY)     (1d)

where x = 0 denotes the Si–SiO2 interface and x > 0 is (in the oxide) towards the gate, N_IT is the number of interface traps at any given instant, N_0 is the initial number of unbroken Si–H bonds, N_H is the hydrogen concentration, k_F is the oxide-field-dependent forward dissociation rate constant, k_R is the annealing rate constant, D_H is the hydrogen diffusion coefficient, δ is the interface thickness, T_PHY is the oxide thickness, and k_P is the surface recombination velocity at the oxide/poly-silicon interface. Note that the absence of a field-dependent term in Eq. (1c) means that the diffusing species is assumed neutral. Also, the microscopic details of the trap generation and trap annealing processes that occur within a few angstroms of the Si/SiO2 interface (i.e., within δ) are assumed hidden in the constants k_F and k_R. For now, these parameters are measured and parameterized in terms of device variables like stress bias, stress temperature, and oxide thickness (see Section 3). As the Si–H bond-dissociation process becomes better understood [29–31], one may be able to predict k_F and k_R from first principles.

Before we proceed to compare the solutions of the above R–D model to experimental data, note that this model attributes the V_T shift during NBTI entirely to N_IT generation; contributions from slow traps, if any, are not separately distinguished. Moreover, note that the measured ΔV_T, especially for thick oxides at high voltages, contains contributions from both interface traps due to NBTI and bulk traps (N_OT) due to hot holes, e.g., ΔV_T = (q/C_OX)(N_IT + N_OT), where q is the electronic charge and C_OX is the oxide capacitance [15]. Therefore, bulk-trap contributions must be separated before R–D model predictions can be compared with measurements. Two separate approaches can be taken to de-embed the effects of bulk traps from the measured threshold voltage.
First, one can theoretically estimate the bulk-trap contribution for a given oxide thickness at every stress voltage and temperature [15,28] and subtract this contribution to extract the N_IT contribution from measured ΔV_T data. Second, during measurement one can carefully avoid such "non-NBTI" contributions to ΔV_T from bulk traps by optimizing the accelerated stress conditions and demonstrating a good correlation between N_IT generation and V_T shift [16]. We will use both approaches where appropriate in the following discussion.

2.2. Solution of the R–D model

2.2.1. Stress phase

The general solution of the R–D model in the stress phase anticipates five different regimes of time-dependent interface trap generation [14], as shown in Fig. 2. During the very early phase, both N_IT and N_H are small with negligible hydrogen diffusion, and therefore, by Eq. (1a), N_IT ~ k_F N_0 t. During the second phase, the forward and reverse reactions in Eq. (1a) are both large but approximately equal (i.e., k_F N_0 ~ k_R N_H(x=0) N_IT), and all the released hydrogen is still at the interface, so that N_H(x=0) = N_IT. Therefore, Eq. (1a) suggests N_IT ~ (k_F N_0 / k_R)^{1/2} t^0. For a particular set of parameters relevant for NBTI in modern MOSFETs, both these phases last up to a few microseconds, and therefore are not observed in typical NBTI measurements.

The phase of NBTI most readily observed in typical measurements is when hydrogen diffusion (Eqs. (1b) and (1c)) begins to control the trap generation process. In this regime, Eq. (1c) suggests x = (D_H t)^{1/2}, and by Eq. (1b), dN_IT/dt ~ D_H (dN_H/dx) ~ D_H N_H(x=0)/(D_H t)^{1/2}, so that N_H(x=0) ~ (t/D_H)^{1/2} (dN_IT/dt). Substituting this in Eq. (1a) and assuming that net trap generation is so slow that it is negligible in comparison to the large fluxes on the right-hand side of Eq. (1a) (i.e., dN_IT/dt ~ 0), we find

N_IT ~ (k_F N_0 / k_R)^{1/2} (D_H t)^{1/4}   (2a)

The above result is important enough to re-derive in a slightly different, but perhaps more instructive, manner.
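The t^{1/4} regime of Eq. (2a) can also be checked with a minimal explicit finite-difference sketch of Eqs. (1a) and (1c) in the thick-oxide limit. All parameter values below are illustrative, not fitted to any device, and the interfacial layer δ of Eq. (1b) is collapsed to a single grid cell, so this is a sketch of the model's structure rather than a device simulation.

```python
import numpy as np

def simulate_rd(k_F=1.0, k_R=1.0, N0=1e3, D_H=1e-3,
                L=1.0, M=100, dt=1e-3, t_end=50.0):
    """Explicit Euler solution of the stress-phase R-D equations
    (illustrative parameters; reflecting far boundary = thick oxide)."""
    dx = L / M
    N_H = np.zeros(M)                 # hydrogen profile in the oxide
    N_IT = 0.0                        # interface trap density
    times, nits = [], []
    steps = int(t_end / dt)
    for step in range(1, steps + 1):
        # Eq. (1a): interface generation minus re-passivation
        dNIT = k_F * (N0 - N_IT) - k_R * N_H[0] * N_IT
        N_IT += dNIT * dt
        N_H[0] += dNIT * dt / dx      # released hydrogen enters the first cell
        # Eq. (1c): diffusion with a reflecting boundary at x = L
        lap = np.empty(M)
        lap[1:-1] = N_H[2:] - 2.0 * N_H[1:-1] + N_H[:-2]
        lap[0] = N_H[1] - N_H[0]
        lap[-1] = N_H[-2] - N_H[-1]
        N_H += D_H * dt / dx**2 * lap
        if step % 100 == 0:
            times.append(step * dt)
            nits.append(N_IT)
    return np.array(times), np.array(nits)

t, nit = simulate_rd()
mask = t > t[-1] / 10                 # fit over the last decade of the run
slope = np.polyfit(np.log(t[mask]), np.log(nit[mask]), 1)[0]
# slope comes out near the analytic 1/4 (numerically ~0.25-0.28, as in the text)
```

The fitted log–log slope lands close to 1/4 once the quasi-equilibrium, diffusion-limited regime is reached, consistent with the derivation above.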
The total number of interface traps ever generated must equal the number of hydrogen atoms released; therefore N_IT = (1/2) N_H(x=0) (D_H t)^{1/2} (assuming a linear hydrogen profile), and by Eq. (1a), N_H(x=0) N_IT ~ k_F N_0 / k_R. Using these two relations, we can re-derive Eq. (2a) and also find that in this regime, while the total amount of released hydrogen increases at the same rate as N_IT, the density of hydrogen at the interface decreases as N_H(x=0) ~ t^{−1/4} (see profiles 2, 3, 4 in Fig. 1(b)).

The NBTI time dependence (t^{1/4}) derived above is based on analytical simplifications; numerical solution of Eqs. (1a)–(1c) shows that the exponent can be somewhat larger (n ~ 0.25–0.28). Also, x = (D_H t)^{1/2} from Eq. (1c) implies that the hydrogen diffusion front is yet to reach the poly-silicon interface. Therefore, this is the only regime that need be considered for very thick oxides [5]. Finally, had the diffusing species been positively charged (a proton?), the time dependence would have been t^{1/2} [13], which is not observed during this phase of NBTI degradation.

Once the hydrogen reaches the oxide/poly interface (profile 5, Fig. 1(b)), according to Eq. (1d) the incoming flux is D_H (N_H(x=0) − N_H(x=T_PHY))/T_PHY = k_P N_H(x=T_PHY), so that N_H(x=0) = (1/k_P + T_PHY/D_H)(dN_IT/dt) by Eq. (1b). Substituting this in Eq. (1a) and assuming the trap generation rate is slow compared to the generation and annealing fluxes in Eq. (1a) (i.e., dN_IT/dt ~ 0), we find [14,16]

N_IT = A (k_F N_0 / k_R)^{1/2} (D_H t)^{1/2}   (2b)

with A = [2(D_H/k_P + T_PHY)]^{−1/2}. This regime (n = 1/2) is likely to be observed in thinner oxides, where the diffusion front reaches the poly interface within the measurement window. This is explicitly shown in Fig. 3 [16], where the measured V_T shift (for thinner oxides at high stress temperature) breaks off from the initial trend and increases at longer times. However, although the break is significant and clear, note that the value of the post-break exponent, n, is not equal to 1/2, as one might anticipate from Eq. (2b). We speculate that if the diffusion channels in poly-silicon grain boundaries saturate over time, the surface recombination velocity, k_P, will also decrease over time. This will cause the N_H concentration to build up at the poly/oxide interface (Fig. 1, profile 5), with a corresponding reduction of hydrogen diffusion away from the Si/oxide interface. This will in turn reduce N_IT below the values predicted by Eq. (2b). Therefore, although the break (in degradation rate) will occur once the hydrogen front reaches the poly/oxide interface, the exponent n can be smaller than 1/2.

Finally, when all the Si–H bonds are broken, N_IT ~ N_0 = const. It is unlikely that this phase of NBTI degradation can be observed experimentally, because other failure modes like TDDB are likely to cause oxide breakdown in the meantime.

2.2.2. Annealing phase

Apart from the five-phase degradation discussed above, another key prediction of the R–D model is a similar multi-phase annealing of the interface traps (created during the stress phase) once the stress is removed (see Section 1.2(b)). Once the stress is removed, the dissociation of the Si–H bond that forced forward diffusion of hydrogen away from the interface no longer exists; therefore the hydrogen can now diffuse back and recombine with silicon dangling bonds, restoring them to their passive Si–H state. This diffusion-controlled annealing phase (analogous to the diffusion-controlled generation phase described by Eq. (2a)) can be treated analytically in the following manner [20]. Assume that at the end of the stress phase, at time t_0, the number of traps generated is N_IT^(0) and the density of interface hydrogen is N_H^(0), so that N_IT^(0) = (1/2) N_H^(0) (D_H t_0)^{1/2} (assuming a linear hydrogen profile). During the recovery phase, N_IT^(*) traps are annealed at time (t + t_0) (the start of the annealing phase defines the time origin here), so that N_IT^(*) = (1/2) N_H^(*) (ξ D_H t)^{1/2} (ξ = 1/2 for one-sided diffusion). With k_F = 0, Eq. (1a) can be rewritten as k_R [N_H^(0) − N_H^(*)][N_IT^(0) − N_IT^(*)] ~ 0. Assuming that the original hydrogen front continues to diffuse as [D_H (t + t_0)]^{1/2}, one finds

N_IT = N_IT^(0) [1 − (ξ t / t_0)^{1/2} / (1 + t / t_0)^{1/2}]   (t > 0)   (2c)

Such annealing effects have indeed been reported in the literature by a number of groups [21–23,25]. Fig. 4 shows that Eqs. (2a) and (2c) interpret the stress and relaxation phases of NBTI degradation well. This is yet another indication that the diffusing species is charge neutral, because a positively charged species would diffuse asymmetrically between stress (negative) bias and anneal (zero) bias conditions, which would have made Eq. (2c) inconsistent with experimental data.

2.3. Discussion of the R–D model

The R–D model successfully explains both the power-law dependence of interface trap generation and the mechanics of interface trap annealing, the two key features of NBTI discussed in Sections 1.2(a) and (b). It does so without requiring that holes be present for NBTI degradation (although it does not forbid it either), and apart from requiring that the diffusing species be neutral, the theory places no restriction on the nature or type of diffusing species involved. Furthermore, it offers no quantitative predictions regarding the field or temperature dependence of the NBTI phenomena. Therefore, one cannot optimize operating or processing conditions to reduce NBTI effects using the basic R–D model. In Sections 3.1 and 3.2 below we will generalize the R–D model to predict the field and temperature dependence of the NBTI phenomena. Moreover, the prediction of precise power exponents (e.g., n = 1/4) also makes the R–D model difficult to reconcile with the wide variety of exponents observed in various experiments.
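The relaxation prediction of Eq. (2c) is simple enough to evaluate directly; a minimal sketch (with ξ = 1/2 for one-sided diffusion, as above):

```python
def nit_recovery_fraction(t, t0, xi=0.5):
    """Fraction of stress-generated traps still present a time t after the
    stress is removed, per Eq. (2c); t0 is the preceding stress duration."""
    return 1.0 - (xi * t / t0) ** 0.5 / (1.0 + t / t0) ** 0.5

# At t = t0 the model predicts that exactly half of the traps have annealed;
# the long-time limit leaves a fraction 1 - sqrt(xi) ~ 0.29 unannealed.
print(nit_recovery_fraction(t=1.0, t0=1.0))   # 0.5
```

Note the relaxation depends only on the ratio t/t_0, which is also the basis of the measurement-delay footnote in the conclusions.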
Failure to account for the wide variety of power exponents is another reason to generalize the standard model. We take this up in Section 3.2.4.

3. Enhancement of the R–D model

3.1. Model for NBTI field (or voltage) dependence

According to the R–D model, the field dependence of the NBTI phenomena must be confined to the processes at the interface (k_F and k_R), because the diffusing species is charge neutral. We anticipate that the forward dissociation constant, k_F, should depend on the number of holes (p), their ability to tunnel to the Si–H bonds (T_COEFF), the capture cross-section of the Si–H bonds (σ_0), and any field dependence of Si–H bond dissociation (B), so that k_F(..., E_OX) = B σ_0 p T_COEFF. We discuss these individual processes below.

3.1.1. Role of holes and field dependence through hole density

The presence of holes has been linked to NBTI degradation from the beginning of the study of this process, in part because the effect was first observed in PMOS devices in inversion. However, the lack of degradation in NMOS devices stressed in accumulation with voltages similar to those used for PMOS NBTI tests has sometimes been used to argue against the role of holes in NBTI degradation [12]. However, since we must compare NBTI degradation at comparable hole densities, the surface field must be the same, which means that the NMOS device must be stressed approximately one volt higher than the PMOS device for comparable NBTI effects (to account for the flatband voltage difference). Fig. 5 shows that this is indeed the case. This explains the experimental observation discussed in Section 1.2(c) and leads us to conclude that holes play a key role in NBTI degradation. The density dependence can be summarized as p = C_OX (V_G − V_T) ∝ E_OX.

3.1.2. Role of hole capture and its field dependence

Once the holes are available, they must tunnel through the interface layer of 1–2 Å to be captured by the Si–H bonds. The field-dependent tunneling coefficient depends
exponentially on the local electric field at the interface (e.g., T_COEFF ~ exp(E_OX/E_0), although other forms are also suitable [1,2,15,23]). Fig. 6 shows that this is indeed the case for oxides of various thicknesses, confirming the general validity of the approach [24–26,28].

3.1.3. Role of field-dependent bond dissociation

A key unknown is the influence of the electric field on the dissociation of the Si–H bond itself. Assuming that such modification is associated with a reduction in the barrier height of bond dissociation, we anticipate that there may be a field-dependent activation energy associated with this process. Such field dependence of bond dissociation has not been demonstrated experimentally yet, but may have to be included in future models.

Taken together, the field dependence is given by

k_F(..., E_OX) = B σ_0 E_OX exp(E_OX/E_0)   (3a)

3.1.4. Role of bulk traps

Finally, as discussed earlier in the paper, it is important to realize that at high negative electric fields (especially for thicker gate oxides that involve large stress voltages) NBTI data are often contaminated by the generation of bulk traps [15]. Such contributions from bulk traps must be isolated before a universal representation of k_F (Eq. (3a)) can be found. According to the Anode Hole Injection model, at large negative voltages in both PMOS and NMOS transistors, electrons are injected from the poly-gate into the silicon substrate.
These energetic electrons in turn produce hot holes that are injected back into the oxide, creating bulk defects within the oxide [28]. As shown in Fig. 7, this contribution from bulk defect densities can easily be identified and isolated by its time dependence (n = 1/2) and its unique voltage and temperature dependence (for details, see [15]). Once such corrections are accounted for, we find that the field dependence of Eq. (3a) describes the forward reaction rate well and is consistent with the observation in Section 1.2.

3.2. Model for NBTI temperature dependence

3.2.1. Concept of universal scaling

Eq. (2a) provides a universal scaling relationship that can be used to decouple the field and temperature dependence of NBTI. This equation predicts that if the stress time is measured in units of (1/D_H) and the trap density is normalized by (k_F N_0/k_R)^{1/2}, then the resulting curve should be universal. Moreover, since D_H is a function of temperature alone and is independent of electric field, and since (k_F N_0/k_R)^{1/2} is field dependent but (approximately, see below) temperature independent, the NBTI curves at different temperatures (but at a constant electric field) can be scaled along the time axis to identify the activation energy of the diffusion process [14–16]. Fig. 8 highlights the scaling methodology and shows that universal scaling holds for NBTI data measured under a wide range of stress bias and temperature.

3.2.2. Activation energy of the diffusion process

Although the hole density (p) and the hole capture cross-section (σ) are largely temperature independent, the dissociation and annealing of Si–H bonds (k_F and k_R) and the diffusion of hydrogen (D_H) through the oxide are not. Therefore, based on Eq. (2a), the following form must describe N_IT:

N_IT = [k_F0(..., E_OX) N_0 / k_R0]^{1/2} exp{−[E_a(k_F) − E_a(k_R)]/2kT} [D_H0 exp(−E_a(D_H)/kT) t]^{1/4},

with a net activation energy of

E_a(NBTI) = [E_a(k_F) − E_a(k_R)]/2 + E_a(D_H)/4   (3b)

We use two methods to determine the
activation energies. The first method uses the time-scaling idea discussed above, which allows us to determine E_a directly (Fig. 8). The second technique uses the fact that the change in power-law exponent from n ~ 1/4 to n ~ 1/2 (see Fig. 3) is determined by the time at which the hydrogen front reaches the poly interface, i.e., D_H = T_PHY² (1/t_BREAK) (see Fig. 2). Plotting (1/t_BREAK) as a function of temperature, one determines the activation energy of the diffusion process, E_a(D_H). Fig. 9 shows that the activation energies determined by both methods are essentially identical, demonstrating that while the barrier heights for dissociation and annealing of Si–H bonds may themselves be high, the difference [E_a(k_F) − E_a(k_R)]/2 must be small. Moreover, the activation energy of 0.5–0.6 eV obtained by both methods indicates neutral H2 diffusion [32]. Therefore, the temperature activation of the NBTI process is essentially determined by that of the diffusion process, i.e., E_a(NBTI) ~ E_a(D_H)/4 = 0.12–0.15 eV, consistent with Section 1.2(e).

3.2.3. Debate regarding the nature of the diffusing species

Although there is a debate regarding the exact diffusing species (–OH, H, and H2O have been suggested), the activation energy of the diffusing species (E_a(D_H) = 4 E_a(NBTI) ~ 0.5 eV) appears to indicate that the diffusing species is hydrogen, most probably a molecular species [32]. Moreover, the experiment in Ref. [10], when viewed in light of Eq. (2a), also supports the hydrogen hypothesis: when hydrogen annealing of Si dangling bonds to create Si–H bonds is replaced with deuterium annealing to create Si–D bonds, Eq. (2a) anticipates that N_IT(D)/N_IT(H) = (D_D/D_H)^{1/4} = (m_H/m_D)^{1/8} (m_H and m_D being the effective masses of hydrogen and deuterium, respectively), making the dependence finite but weak [10]. Note that if –OH were the diffusing species, the difference in masses of –OH and –OD would be small, resulting in an essentially indistinguishable impact of deuterium annealing. The lack of a strong deuterium effect also highlights the chemical nature of
the Si–H (or Si–D) bond breaking, as opposed to the kinetic nature of Si–H bond breaking during HCI experiments. Such chemical dissociation of Si–H bonds is consistent with the R–D model.

3.2.4. Hypothesis regarding the non-standard NBTI exponents

The R–D model predicts a very precise power-law exponent that is independent of temperature, time, and oxide thickness (e.g., n = 1/4 analytically, or n = 0.25–0.28 numerically). However, as reported in the literature [7–26] and summarized in Section 1.2(a), the experimentally measured power exponent has a wide range that depends on oxide thickness, temperature, and time. One reason behind this wide range of observed power exponents is the NBTI recovery that takes place during the delay between stopping the stress for measurement and the start of the next stress phase.¹ The other reason for the discrepancy lies in the assumption of a constant, time-independent diffusion coefficient for hydrogen diffusion in the amorphous oxide, as discussed below.

Since the assumption of a constant diffusion coefficient is only appropriate for isotropic media with spatially and temporally uniform hopping rates, it may not be appropriate for diffusion in amorphous media, where the hopping distances and hopping times are exponentially distributed [33–35]. Therefore, although the diffusion process can be described by a constant coefficient in the earliest phases, the diffusing particles soon find themselves trapped in states with exponentially long release times, which in turn slows the overall diffusion process.
Since such dispersive diffusion is time dependent (reflecting the broad distribution of release times), D_H = D_H0 (x_0 t)^{−a} [36,37]. Therefore, using the same argument we used to derive Eq. (2a), we find that the interface trap density increases as N_IT ~ t^{n − a/4}. In other words, such dispersive transport gives rise to an effective exponent n′ that is less than the n predicted by the standard R–D model (i.e., n′ = n − a/4). Assuming that a ~ 0.1–0.2 [36,37] and an initial n = 0.25, dispersive power exponents of the order of n′ ~ 0.225–0.20 are expected. Indeed, such lower power-law exponents have been reported in the literature [16].

One of the consequences of this dispersive diffusion in the amorphous oxide is the temperature and time dependence of the power exponents observed in measured data. At lower temperatures, diffusion is slow, with a lower hopping probability. Therefore, at a given time and a given site, diffusing hydrogen has time to relax to deeper states with long release times. Consequently, at lower temperature the observed power exponents are typically lower, as shown in Fig. 10. At higher temperature, the hopping probability is higher, so initially the probability of getting trapped in a deep level is reduced. This explains the higher value of the degradation exponent at higher temperature (see Fig. 10, [38]). In time, however, a fraction of the hydrogen will eventually find itself trapped in deeper levels with long release times, making the asymptotic n′ value similar to the low-temperature value [15].

The hypothesis discussed above, if supported by other experiments, would imply that the more amorphous the oxide, the better its NBTI performance, because deep-level trapping with long release times reduces the NBTI power exponent. This provides a technique to control NBTI degradation through processing changes for gate-oxide deposition. Clearly, more numerical and experimental work is necessary to support these ideas.

4. Conclusions

In this paper, we have discussed a theory of NBTI that uses the theoretical framework of the R–D
model to encapsulate the field, temperature, and processing dependencies of the NBTI phenomena. This overall

¹ The absolute degradation exponents are likely to be somewhat lower than the measured exponents reported here (n ~ 0.32–0.23 in Fig. 10), because there is a finite time delay at the end of each stress section before measurements are made. Since the ratio of the relaxation time (fixed in this case) to the stress time (increasing logarithmically) determines the degree of relaxation (Eq. (2c)), the effective relaxation for each data point decreases as stress progresses. This makes the measured exponents slightly higher than what an absolute no-delay measurement would indicate. Ideally, such no-delay, absolute exponents should be compared with the various theoretical predictions presented in this paper (n ~ 0.27–0.2).
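Three of the quantitative claims in Sections 3.2.2–3.2.4 can be spot-checked numerically. The data below are synthetic, and the 0.55 eV diffusion activation energy is an assumed value inside the 0.5–0.6 eV range quoted above; this is a consistency sketch, not a re-analysis of the measured data.

```python
import math

k_B = 8.617e-5  # Boltzmann constant, eV/K

# (1) Section 3.2.2: recover Ea(D_H) from 1/t_BREAK vs temperature.
# Generate synthetic 1/t_BREAK = C * exp(-Ea/kT), then fit
# log(1/t_BREAK) against 1/kT; the (negated) slope is Ea.
Ea_true = 0.55  # eV, assumed
temps = [300.0, 350.0, 400.0, 450.0]
inv_t_break = [1e3 * math.exp(-Ea_true / (k_B * T)) for T in temps]
xs = [1.0 / (k_B * T) for T in temps]
ys = [math.log(v) for v in inv_t_break]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
Ea_fit = -sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
         / sum((x - xbar) ** 2 for x in xs)          # recovers ~0.55 eV

# (2) Section 3.2.3: isotope effect, N_IT(D)/N_IT(H) = (m_H/m_D)^(1/8)
isotope_ratio = (1.0 / 2.0) ** 0.125    # ~0.917, i.e. only an ~8% reduction

# (3) Section 3.2.4: dispersive-transport exponent n' = n - a/4
n_eff = [0.25 - a / 4 for a in (0.1, 0.2)]   # ~0.225 and ~0.20
```

The weak (m_H/m_D)^{1/8} dependence is what makes the deuterium effect on NBTI "finite but weak", in contrast to the much stronger isotope effect in HCI.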
Scale Issues in Remote-Sensing Estimation of NPP
Scale Effects in Remote-Sensing Estimation of NPP
Contents
Introduction to NPP estimation
Overview of NPP estimation methods
Scale issues in NPP estimation
Applicable scale of models
Resolution of remote-sensing data
Introduction to NPP Estimation
The concept of NPP:
GPP: the amount of organic carbon fixed per unit time by organisms (mainly green plants) through photosynthesis; also called gross primary productivity.
NPP: the portion of the carbon fixed by plants that remains after subtracting the plants' own respiratory consumption. This portion is used for plant growth and reproduction and is also called net primary productivity.
NPP = GPP − Ra
After correcting for scale effects, the agreement is much higher: RMSE = 1.87, R² = 0.84. Therefore, the scale-effect-corrected NPP improves considerably in both correlation and error.
Other scale issues
Temporal scale: remote sensing provides instantaneous data, whereas much current research derives NPP at monthly or yearly time scales.
Scales in validation:
1. Field measurements — using points to represent areas.
2. Data derived from high-resolution imagery — upscaling is required first.
Light-use-efficiency (LUE) models take the resource-balance view as their theoretical basis.
NPP(x, t) = APAR(x, t) × ε(x, t)
In 1989, Heimann and Keeling built the first global net-primary-productivity model based on this algorithm, in which the value of ε does not vary with time or land-cover type. However, treating ε as a constant across the globe introduces large errors. In 1993, Potter and Field developed the CASA model, in which ε varies with season and within communities, modulated by temperature and soil-moisture availability.
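The LUE formulation above, with CASA-style down-regulation of a maximum efficiency, can be sketched in a few lines. The stress-scalar forms here are simplified assumptions, not CASA's exact formulas; 0.389 gC/MJ is CASA's global maximum light-use efficiency.

```python
# Illustrative light-use-efficiency sketch of NPP(x,t) = APAR(x,t) * eps(x,t),
# with the realized efficiency reduced by temperature and moisture scalars.

def npp_lue(par, fpar, t_scalar, w_scalar, eps_max=0.389):
    """NPP in gC/m^2 per time step.
    par      : incident photosynthetically active radiation (MJ/m^2)
    fpar     : fraction of PAR absorbed by the canopy (0-1)
    t_scalar : temperature stress scalar (0-1), simplified assumption
    w_scalar : soil-moisture stress scalar (0-1), simplified assumption
    eps_max  : maximum light-use efficiency (gC/MJ)
    """
    apar = par * fpar                       # absorbed PAR
    eps = eps_max * t_scalar * w_scalar     # realized light-use efficiency
    return apar * eps

npp = npp_lue(par=200.0, fpar=0.6, t_scalar=0.9, w_scalar=0.8)
```

With a constant ε (the 1989 approach) the two scalars are simply fixed at 1, which is exactly the simplification that the CASA model relaxes.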
Varian, Intermediate Microeconomics: A Modern Approach — Study Notes and Detailed Solutions to Exercises (Measurement)
Chapter 17: Measurement

17.1 Review Notes

1. Summarizing data
(1) Methods for summarizing data include tables and bar plots (bar charts); the data can also be classified into categories.
(2) Simpson's paradox: under certain conditions, two groups of data may each satisfy some property with no apparent bias when examined separately, yet lead to the opposite conclusion when considered together, giving the impression of bias.
For example, Table 17-1 shows the numbers of male and female applicants and the admission rates for the individual departments and for the graduate school as a whole at UC Berkeley in the fall of 1973. Although no individual department showed a clear tendency to favor admitting men, overall men were more likely than women to be admitted.
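The arithmetic behind the paradox can be reproduced with small illustrative numbers (hypothetical figures, not the actual Berkeley data): women apply mostly to the harder department, so they do better in every department yet worse in the aggregate.

```python
# Two hypothetical departments:
# dept: (male applicants, male admits, female applicants, female admits)
apps = {
    "easy": (80, 60, 20, 16),
    "hard": (20, 4, 80, 20),
}

# Women are admitted at a higher rate in each department...
for dept, (ma, madm, fa, fadm) in apps.items():
    assert fadm / fa > madm / ma

# ...but at a lower rate overall, because of how applications are distributed.
male_rate = sum(v[1] for v in apps.values()) / sum(v[0] for v in apps.values())
female_rate = sum(v[3] for v in apps.values()) / sum(v[2] for v in apps.values())
print(male_rate, female_rate)   # 0.64 0.36
```

The aggregate comparison mixes the department effect (how selective each department is) with the gender effect, which is why the pooled rates reverse the within-department pattern.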
Table 17-1: Applicant numbers and admission rates, UC Berkeley graduate school, fall 1973

2. Methods for estimating how a price cut affects sales
(1) The ideal way to estimate how a price cut affects the quantity of a good sold is to use some random device, such as a coin flip, to choose the regions where the price is cut.
The regions with the price cut are called the treatment group, and the regions where the price is unchanged are called the control group.
Such randomized treatment helps eliminate sources of systematic bias.
(2) Another way to estimate how a price cut affects demand is to distribute coupons to a randomly chosen group of people and observe how many of them use the coupons to buy the good.
One problem with coupons is that the consumers who use them may differ from consumers as a whole.
Plausibly, people who take the trouble to use coupons are generally more price sensitive than people who do not bother.
In general, the consumers who choose to be treated are those most interested in the treatment effect, and they may respond to it differently from consumers as a whole.
For consumers who choose to use coupons (i.e., accept the treatment), the treatment effect may be quite different from the effect of cutting the price for everyone.
On the other hand, rather than the treatment effect for consumers as a whole, the experimenter may be interested precisely in the "treatment effect on the treated."
3. Using observational data to estimate demand
The statistical technique most commonly used by economists to study questions such as the effect of price changes on demand is called regression.
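A minimal sketch of the regression approach, with hypothetical data: if demand has the constant-elasticity form q = A·p^b (an assumption, not something the notes specify), then the elasticity b is the slope of an ordinary-least-squares fit of log q on log p.

```python
import math

# Hypothetical (price, quantity) observations generated from q = 100 * p^(-1.5)
prices = [1.0, 2.0, 4.0, 8.0]
quantities = [100 * p ** -1.5 for p in prices]

# OLS slope of log q = log A + b * log p
xs = [math.log(p) for p in prices]
ys = [math.log(q) for q in quantities]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
    / sum((x - xbar) ** 2 for x in xs)
print(b)   # recovers the assumed elasticity, ~ -1.5
```

With real observational data the fit would of course be noisy, and — as the randomization discussion above emphasizes — the estimated slope is only a causal price effect if price variation is not itself driven by demand.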
Hoymiles Single-Phase Microinverter HMS-1600-4T / HMS-1800-4T / HMS-2000-4T
Open Energy for All
Single-phase Microinverter
HMS-1600-4T HMS-1800-4T HMS-2000-4T
USER MANUAL

About Microinverter
This system is composed of a group of microinverters that convert direct current (DC) into alternating current (AC) and feed the power into the public grid. The system is designed around 4-in-1 microinverters, i.e., one microinverter is connected to four PV modules. Each microinverter works independently so as to guarantee the maximum power generation of each PV module. This setup is highly flexible and reliable, as the system enables direct control of the production of each PV module.

About the Manual
This manual contains important instructions for the HMS-1600-4T/HMS-1800-4T/HMS-2000-4T microinverters, and users shall read it in its entirety before installing or commissioning the equipment. For safety reasons, only qualified technicians who have received training or demonstrated relevant skills may install and maintain this microinverter under the guidance of this document.

Other Information
Product information is subject to change without notice. The user manual is updated regularly, so please refer to the official Hoymiles website for the latest version.

CONTENTS
1. Important Notes
1.1 Product Range
1.2 Target Group
1.3 Symbols Used
1.4 Radio Interference Statement
2. About Safety
2.1 Important Safety Instructions
2.2 Explanation of Symbols
3. About Product
3.1 About PV Microinverter System
3.2 About Microinverter
3.3 About 4-in-1 Unit
3.4 About Sub-1G Technology
3.5 Highlights
3.6 Terminals Introduction
3.7 Dimensions (mm)
4. Installation Preparation
4.1 Position and Space Required
4.2 Connecting Multiple PV Modules to Microinverter
4.3 Installation Tools
4.4 AC Branch Circuit Capacity
4.5 Precautions
5. Microinverter Installation
5.1 Accessories
5.2 Installation Steps
6. Troubleshooting
6.1 Troubleshooting List
6.2 LED Indicator Status
6.3 On-site Inspection (For qualified installers only)
6.4 Routine Maintenance
6.5 Microinverter Replacement
7. Decommission
7.1 Decommission
7.2 Storage and Transportation
7.3 Disposal
8. Technical Data
9. Appendix 1
9.1 Installation Map
10. Appendix 2
10.1 Wiring Diagram – 230 VAC Single Phase
10.2 Wiring Diagram – 230 VAC / 400 VAC Three Phase
10.3 Wiring Diagram – 120 VAC / 240 VAC Split Phase
10.4 Wiring Diagram – 120 VAC / 208 VAC Three Phase

1. Important Notes
1.1 Product Range
This manual describes the assembly, installation, commissioning, maintenance and troubleshooting of the following models of Hoymiles Microinverter:
· HMS-1600-4T
· HMS-1800-4T
· HMS-2000-4T
*Note: "1600" means 1600 W, "1800" means 1800 W, and "2000" means 2000 W.
*Note: The HMS-1600/1800/2000-4T is only compatible with the Hoymiles gateways DTU-Pro-S and DTU-Lite-S.

1.2 Target Group
This manual is only for qualified technicians. For safety purposes, only those who have been trained or have demonstrated relevant skills may install and maintain this microinverter under the guidance of this document.

1.3 Symbols Used
The safety symbols used in this user manual are shown below.

1.4 Radio Interference Statement
This microinverter has been tested and complies with the requirements of CE EMC, meaning that it will not be affected by electromagnetic interference. Please note that incorrect installation may cause electromagnetic disturbances. You can turn the equipment off and on to see whether radio or television reception is affected by this equipment. If this equipment does cause harmful interference to radio or television reception, please try the following measures to fix the interference:
1) Relocate the antenna of the other apparatus.
2) Move the microinverter farther away from the antenna.
3) Separate the microinverter and the antenna with metal/concrete materials or a roof.
4) Contact your dealer or an experienced radio/TV technician for help.

2. About Safety
2.1 Important Safety Instructions
The HMS-1600-4T/HMS-1800-4T/HMS-2000-4T microinverter is designed and tested according to international safety requirements. However, certain safety precautions must be taken when installing and operating this inverter. The installer must read and follow all instructions, cautions and warnings in this installation manual.

2.2 Explanation of Symbols

3. About Product
3.1 About PV Inverter System
A typical grid-tied PV inverter system includes PV modules, a PV inverter, a meter and the power grid, as shown below. The PV inverter converts the DC power generated by the PV modules into AC power that meets the requirements of the power grid. The AC power is then fed into the grid via the meter.
A PV module
B PV inverter
C Grid-connected metering device
D Power grid

3.2 About Microinverter
A PV microinverter is a module-level solar inverter that tracks the maximum DC power point of each PV module, which is known as Maximum Power Point Tracking (MPPT). Module-level MPPT means that when a PV module fails or is shaded, the other modules are not affected, boosting the overall power production of the system. The microinverter can monitor the current, voltage and power of each module to realize module-level data monitoring. Moreover, the microinverter only carries a few dozen volts of DC voltage (less than 80 volts), which reduces safety hazards to the greatest extent. Hoymiles microinverters feature module-level monitoring: microinverter data are collected by the DTU via wireless transmission and are sent to the Hoymiles monitoring platform S-Miles Cloud.

3.3 About 4-in-1 Unit
Microinverters can be divided into 1-in-1, 2-in-1, 4-in-1, etc., depending on how many PV modules are connected to them: one module, two modules or four modules respectively, as shown below. This manual covers the Hoymiles 4-in-1 microinverter.
With an output power of up to 2000 VA, the new Hoymiles HMS-2000 series ranks among the highest for 4-in-1 microinverters. Each microinverter connects to at most four PV modules with independent MPPT and monitoring, enabling greater energy harvest and easier maintenance.

3.4 About Sub-1G Technology
The HMS-2000 series of microinverters adopts the new Sub-1G wireless solution, which enables more stable communication with the Hoymiles gateway DTU. Sub-1G technology is particularly useful for PV microinverters; it differs from 2.4 GHz in that it has a substantially longer range and better interference suppression performance.
Range of Sub-1 GHz wireless: Unlike Wi-Fi or Zigbee, which both operate in the 2.4 GHz band, Sub-1 GHz operates in the 868 MHz or 915 MHz band. Generally speaking, Sub-1 GHz wireless transmission covers 1.5 to 2 times the distance of the 2.4 GHz spectrum.
Interference: Sub-1 GHz wireless networking handles interference better. Because it operates at a lower frequency, the communication between the microinverter and the DTU is more stable. As a result, it is especially useful in industrial or commercial PV power plants.
Lower power consumption: Sub-1 GHz wireless communication uses less power than Wi-Fi or Zigbee. Because of the longer range and better interference suppression performance, Sub-1 GHz networking is particularly well suited for rooftop PV power stations.

3.5 Highlights
- Maximum output power up to 1600/1800/2000 W
- Peak efficiency 96.50%
- Static MPPT efficiency 99.80%; dynamic MPPT efficiency 99.76% in overcast weather
- Power factor (adjustable): 0.8 leading … 0.8 lagging
- Sub-1G for stronger communication with the DTU
- High reliability: IP67 (NEMA 6) enclosure, 6000 V surge protection

3.6 Terminals Introduction

3.7 Dimensions (mm)

4. Installation Preparation
4.1 Position and Space Required
Please install the microinverter and all DC connections under the PV module to avoid direct sunlight, rain exposure, snow buildup, UV, etc.
The silver side of the microinverter should face up, towards the PV module. Leave a minimum of 2 cm of space around the microinverter enclosure to ensure ventilation and heat dissipation.
*Note: In some countries, a DTU is required to meet local grid regulations (e.g. G98/99 for the UK).

4.2 Connecting Multiple PV Modules to Microinverter
General guidelines:
Note: The voltage of the modules (considering the effect of local temperature) must not exceed the maximum input voltage of the microinverter. Otherwise, the microinverter may be damaged (refer to the Technical Data section to determine the absolute maximum input voltage).
Single-phase Microinverter HMS-1600/1800/2000

4.3 Installation Tools
Besides the tools recommended below, other auxiliary tools can also be used on site: screwdriver, multimeter, socket wrench or Allen wrench, marker pen, diagonal pliers, steel tape, wire cutters, cable ties, wire stripper, torque and adjustable wrench, utility knife, safety gloves, dust masks, protective goggles, safety shoes.

4.4 AC Branch Circuit Capacity
The Hoymiles HMS-1600-4T/HMS-1800-4T/HMS-2000-4T can be used with the 12 AWG or 10 AWG AC Trunk Cable and the AC Trunk Connector provided by Hoymiles. The number of microinverters on each 12 AWG or 10 AWG AC branch shall not exceed the limit shown below.
Note:
1. The number of microinverters that can be connected to each AC branch is determined by the ampacity (also known as current-carrying capacity) of the cable.
2. 1-in-1, 2-in-1 and 4-in-1 microinverters can be connected to the same AC branch, as long as the total current does not exceed the ampacity specified in local regulations.

4.5 Precautions
The equipment is installed based on the system design and the location of installation. The installation location shall meet the following conditions:
The AC cable contains an earth wire, so grounding can be done directly with it.
For regions with special requirements, we offer optional grounding brackets that can be used to complete external grounding. Torque each grounding cleat screw to 2 N·m.
Note:
1. Microinverter installation and DC connections must be done under the PV module to avoid direct sunlight, rain exposure, snow buildup, UV, etc.
2. Leave a minimum of 2 cm of space around the microinverter enclosure to ensure ventilation and heat dissipation.
3. Mounting torque of the 8 mm screw is 9 N·m. Do not over-torque.
4. Do not pull or hold the AC cable with your hand. Hold the handle instead.

Step 2. Plan and Build the AC Trunk Cable
The AC Trunk Cable is used to connect the microinverters to the power distribution box.
A) Select the appropriate AC Trunk Cable according to the spacing between microinverters. The connectors of the AC Trunk Cable should be spaced to match the spacing between microinverters so that they can be properly mated. (Hoymiles provides AC Trunk Cable with different AC Trunk Connector spacings.)
B) Determine how many microinverters you plan to install on each AC branch and prepare AC Trunk Connectors accordingly.
C) Take out segments of AC Trunk Cable as needed to make each AC branch.
1) Disassemble the AC Trunk Connector and remove the cable.
2) Install the AC Trunk End Cap at one side of the AC Trunk Cable (the end of the AC Trunk Cable).
1. Tightening torque of the cap: 4.0 ± 0.5 N·m. Do not over-torque.
2. Torque of the locking screw: 0.4 ± 0.1 N·m.
3. Do not damage the sealing ring in the AC Trunk Connector during disassembly and assembly.
4. Wires used in Hoymiles microinverters:
D) Repeat the above steps to make all the AC Trunk Cables you need. Then lay out the cable on the rail as appropriate so that the microinverters can be connected to the Trunk Connectors.
B) Connect the AC end cable to the distribution box, and wire it to the local grid network.

Step 4. Create an Installation Map
If you need to remove the microinverter AC cable from the AC Trunk Connector, insert the AC Trunk Port Disconnect Tool into the side of the AC Sub Connector to complete the removal.

Step 6. Energize the System
A) Turn on the AC breaker of the branch circuit.
B) Turn on the main AC breaker of the house. Your system will start to generate power in about two minutes.

Step 7. Set Up the Monitoring System
Refer to the "DTU User Manual", "DTU Quick Installation Guide", and "Quick Installation Guide for S-Miles Cloud" to install the DTU and set up the monitoring system.
Product information is subject to change without notice. (Please download reference manuals at )

6. Troubleshooting
6.1 Troubleshooting List
6.2 LED Indicator Status
The LED flashes five times at start-up. All-green flashes (1 s gap) indicate normal start-up.
*Note:
1. The microinverter is powered from the DC side. If the LED is not on, check the DC-side connection. If the connection and input voltage are normal, contact your dealer or the Hoymiles technical support team.
2. All faults are reported to the DTU. Refer to the local DTU app or the Hoymiles Monitoring Platform for more information.

6.3 On-site Inspection (For qualified installers only)
Troubleshoot a malfunctioning microinverter according to the following steps.

6.4 Routine Maintenance
1. Only authorized personnel are allowed to carry out maintenance operations, and they are responsible for reporting any anomalies.
2. Always use the personal protective equipment provided by the employer during maintenance operations.
3. During normal operation, check the environmental conditions regularly to make sure that they have not changed over time and that the equipment is not exposed to adverse weather conditions or obstructed.
4. DO NOT use the equipment if any problems are detected. Restore its working condition after the fault is fixed.
5. Conduct annual inspections of the various components, and clean the equipment with a vacuum cleaner or special brushes.

6.5 Microinverter Replacement

7. Decommissioning
7.1 Decommissioning
Disconnect the microinverter from the DC input and AC output, remove all connection cables from the microinverter, and remove the microinverter from the frame.
Pack the microinverter in its original packaging. If the original packaging is no longer available, use a carton that can hold 5 kg and can be fully closed.

7.2 Storage and Transportation
Hoymiles packaging is specially designed to protect the components and to facilitate transportation and subsequent handling. Transportation of the equipment, especially by road, must be done in a way that protects the components (particularly the electronic components) from violent shocks, humidity, vibration, etc. Dispose of the packaging elements appropriately to avoid injury.
Examine the condition of the components to be transported. Upon receiving the microinverter, check the container for any external damage and verify receipt of all items. Call the carrier immediately if there is any damage or if any parts are missing. In case of damage to the inverter, contact the supplier or authorized distributor to request a repair/return and ask for instructions regarding the process.
The storage temperature range of the microinverter is -40 to 85 °C.

7.3 Disposal
• If the equipment is not used immediately or is stored for a long period of time, make sure that it is properly packed.
The equipment must be stored indoors with good ventilation and without any potential damage to its components.
• Perform a complete inspection when restarting the equipment after it has been out of operation for a long time.
• Dispose of scrapped microinverters properly in accordance with local regulations, as they can harm the environment.
• The maximum open-circuit voltage rating of the PV module must be within the operating voltage range of the microinverter.
• We recommend that the maximum current rating at MPP be equal to or less than the maximum input DC current.
2. The output DC power of the PV modules shall not exceed 1.35 times the output AC power of the microinverter. Refer to "Hoymiles Warranty Terms & Conditions" for more information.
*1 Nominal voltage/frequency range can vary depending on local requirements.
*2 Refer to local requirements for the exact number of microinverters per branch.
*3 Hoymiles Monitoring System

9.1 Installation Map
10.1 WIRING DIAGRAM – 230 VAC SINGLE PHASE
Analysis of Differences in Transfer Conditions for the Completion of Domestic and Overseas Power Construction Projects and Related Contract Disputes
Engineering Economy, Vol. 30, No. 2, Feb. 2020
Analysis of Differences in Transfer Conditions for the Completion of Domestic and Overseas Power Construction Projects and Related Contract Disputes
HUA Renfeng (Zhejiang Thermal Power Construction Co., Ltd., China Energy Engineering Group, Hangzhou 310016, China)
Abstract: When domestic and overseas power construction projects are handed over at completion, there are many differences in the responsible party, timing, and content of the performance tests. These lead to inconsistencies in the dates on which the project enters commercial operation, the take-over certificate is issued, and the defect notification period begins and ends. Such differences and inconsistencies can give rise to contract disputes between the owner and the EPC contractor.
This paper analyzes the above differences and the disputes they may cause, and proposes countermeasures and suggestions.
Keywords: power construction; project completion take-over; EPC contract dispute
CLC number: TU723  Document code: A  Article ID: 1672-2442(2020)02-0028-04  DOI: 10.19298/ki.1672-2442.202002028

Analysis of Differences in Transfer Condition for the Completion of Power Construction Projects & its Contract Disputes
HUA Renfeng (Zhejiang Thermal Power Construction Co., Ltd., China Energy Engineering Group, Hangzhou 310016, China)
Abstract: When domestic and foreign power construction projects are handed over, there are many differences in the responsibility, time and content of the performance test, which has led to many inconsistencies in the time of the project being put into commercial operation, the issue of the take-over certificate, and the beginning of the defect notification period. These differences and inconsistencies are likely to cause contract disputes between the owner and the EPC contractor. This article analyzes the differences and possible disputes, and proposes countermeasures and suggestions.
Keywords: power construction; project completion take-over; EPC contract dispute

1 Introduction
The completion and take-over of international power construction projects often involves several milestone concepts: the Take-Over Certificate (TOC), the Commercial Operation Date (COD), and the Final Acceptance Certificate (FAC). These are the key points at which responsibility transfers between the owner and the EPC contractor.
Solar Charging MPPT, Explained
What is Maximum Power Point Tracking (MPPT) and How Does it Work?
Maximum Power Point Tracking, frequently referred to as MPPT, is an electronic system that operates the photovoltaic (PV) modules in a manner that allows the modules to produce all the power they are capable of. MPPT is not a mechanical tracking system that physically moves the modules to make them point more directly at the sun. MPPT is a fully electronic system that varies the electrical operating point of the modules so that they can deliver their maximum available power. The additional power harvested from the modules is then made available as increased battery charge current. MPPT can be used in conjunction with a mechanical tracking system, but the two systems are completely different.
To understand how MPPT works, first consider the operation of a conventional (non-MPPT) charge controller. When a conventional controller is charging a discharged battery, it simply connects the modules directly to the battery. This forces the modules to operate at battery voltage, typically not the ideal operating voltage at which the modules are able to produce their maximum available power. The PV Module Power/Voltage/Current graph shows the traditional current/voltage curve for a typical 75 W module at standard test conditions of 25 °C cell temperature and 1000 W/m² of insolation. This graph also shows PV module power delivered vs. module voltage. In the example shown, the conventional controller simply connects the module to the battery and therefore forces the module to operate at 12 V. By forcing the 75 W module to operate at 12 V, the conventional controller artificially limits power production to ≈53 W.
Rather than simply connecting the module to the battery, the patented MPPT system in a Solar Boost™ charge controller calculates the voltage at which the module is able to produce maximum power. In this example the maximum power voltage of the module (V_MP) is 17 V.
The MPPT system then operates the modules at 17 V to extract the full 75 W, regardless of the present battery voltage. A high-efficiency DC-to-DC power converter converts the 17 V module voltage at the controller input to battery voltage at the output. If the whole system, wiring and all, were 100% efficient, battery charge current in this example would be V_MODULE ÷ V_BATTERY × I_MODULE, or 17 V ÷ 12 V × 4.45 A = 6.30 A. A charge current increase of 1.85 A, or 42%, would be achieved by harvesting module power that would have been left behind by a conventional controller and turning it into usable charge current. But nothing is 100% efficient, and the actual charge current increase will be somewhat lower, as some power is lost in wiring, fuses, circuit breakers, and in the Solar Boost charge controller itself.
Actual charge current increase varies with operating conditions. As shown above, the greater the difference between the PV module's maximum power voltage V_MP and the battery voltage, the greater the charge current increase will be. Cooler PV module cell temperatures tend to produce a higher V_MP and therefore a greater charge current increase, because V_MP and available power increase as module cell temperature decreases, as shown in the PV Module Temperature Performance graph. Modules with a 25 °C V_MP rating higher than 17 V will also tend to produce more charge current increase, because the difference between the actual V_MP and battery voltage will be greater. A highly discharged battery will also increase charge current, since battery voltage is lower, and the output to the battery during MPPT can be thought of as "constant power".
What most people see in cool, comfortable temperatures with typical battery conditions is a charge current increase of between 10 and 25%. Cooler temperatures and highly discharged batteries can produce increases in excess of 30%.
Customers in cold climates have reported charge current increases in excess of 40%. This means that the current increase tends to be greatest when it is needed most: in cooler conditions when days are short, the sun is low on the horizon, and batteries may be more highly discharged. In conditions where extra power is not available (a highly charged battery and hot PV modules), a Solar Boost charge controller performs as a conventional PWM-type controller. Home Power Magazine has presented RV Power Products (now Blue Sky Energy, Inc.) with two Things-That-Work articles: Solar Boost 2000 in HP#73, Oct./Nov. 1999, and Solar Boost 50 in HP#77, June/July 2000. Links to these articles can be found on the Blue Sky Energy, Inc. web site.
Richard A. Cullen, President, Blue Sky Energy, Inc.
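The lossless charge-current arithmetic in the article's 75 W example is easy to check in a few lines (a sketch; the function name is ours):

```python
def mppt_charge_current(v_mp, v_batt, i_mp):
    # Ideal (lossless) DC-DC conversion: output power equals input power,
    # so the battery-side current is V_mp * I_mp / V_batt.
    return v_mp * i_mp / v_batt

i_out = mppt_charge_current(v_mp=17.0, v_batt=12.0, i_mp=4.45)  # about 6.30 A
gain = i_out / 4.45 - 1.0                                       # about 0.42, i.e. 42%
```

As the article notes, real converters, wiring, and fuses all dissipate some power, so the measured gain will be somewhat below this ideal figure.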
A Comparison of Power Spectrum Estimation Methods
A Comparison of Power Spectrum Estimation Methods
Abstract: This paper surveys a key analysis method in signal processing: spectral estimation. It outlines the principles of the periodogram method, the modified covariance method, and the Burg recursion method, and compares the three methods through simulation.
Keywords: power spectrum estimation; AR model; parameters

Introduction: Spectral estimation is the estimation of the spectrum of a stationary random signal from a limited number of observed samples. Because the spectrum carries much of a signal's frequency information, analyzing and estimating the spectrum is an important part of signal processing. Spectral estimation has a long history and a very wide range of applications, spanning radar, sonar, communications, geological exploration, astronomy, biomedical engineering, and many other fields; its content and methods are constantly being updated, making it a research area of enduring vitality. The theory and methods of spectral estimation developed alongside the statistics of random signals and their spectra; the earliest methods were built on second-order statistics, i.e. power spectrum estimation based on the autocorrelation function. Power spectrum estimation has gone through two stages of research, classical spectral estimation and modern spectral estimation, and for a long time it has occupied the core position in spectral estimation theory. Classical spectral estimation, also called linear spectral estimation, includes the Blackman-Tukey (BT) method and the periodogram method. Modern spectral estimation, also called nonlinear spectral estimation, includes the autocorrelation method, the modified covariance method, the Burg recursion method, the eigendecomposition method, and others.

Principles: Classical spectral estimation methods are computationally simple; their defining characteristic is that the estimate does not depend on any model parameters, making them a class of nonparametric methods. Their main problem is that, because the signal's autocorrelation function is assumed to be zero outside the observation interval, the estimated power spectrum rarely matches the signal's true spectrum well. In general, the asymptotic performance of the classical methods cannot give a satisfactory approximation of the actual power spectrum, so they are low-resolution spectral estimation methods. Modern spectral estimation methods use parametric models and are collectively called parametric power spectrum estimation; because they achieve much higher frequency resolution than the classical methods, they are also called high-resolution methods. The periodogram method, the modified covariance method, and the Burg recursion method are introduced below; the latter two both use the AR model.

(1) Periodogram method. The periodogram method first estimates the autocorrelation function and then applies the Fourier transform to obtain the power spectrum. Suppose only one segment of sample data of the random signal x(n) is observed, n = 0, 1, 2, …, N−1.
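As a concrete illustration, here is a minimal sketch of the periodogram using the equivalent direct form P(k) = |X(k)|²/N, computed with a naive DFT (function and variable names are ours; a real implementation would use an FFT):

```python
import math

def periodogram(x):
    # Direct periodogram: P(k) = |X(k)|^2 / N, where X(k) is the DFT of x.
    n = len(x)
    spectrum = []
    for k in range(n):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        spectrum.append((re * re + im * im) / n)
    return spectrum

# A pure sinusoid at DFT bin 8 of 64 produces a sharp peak at k = 8.
x = [math.sin(2 * math.pi * 8 * t / 64) for t in range(64)]
spec = periodogram(x)
peak = max(range(64 // 2 + 1), key=lambda k: spec[k])
```

For a noisy signal, the raw periodogram's variance does not decrease with N, which is exactly the low-resolution, high-variance behavior the text attributes to classical methods.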
MPPT Control of a Direct-Drive Small Wind Turbine Generator System
Electric Drive, 2015, Vol. 45, No. 10
MPPT Control of a Direct-Drive Small Wind Turbine Generator System
GONG Jianying¹, XIE Rong² (1. School of Electronic and Control Engineering, Chang'an University, Xi'an 710064, Shaanxi, China; 2. School of Automation, Northwestern Polytechnical University, Xi'an 710072, Shaanxi, China)
Abstract: This work studies a direct-drive small wind turbine generator system based on a permanent magnet synchronous generator. To achieve maximum power point tracking (MPPT) control without mechanical sensors, improve system reliability, and reduce control cost, an MPPT control method is proposed that exploits the relationship curve between the RMS value of the generator's output voltage and its power. The main contributions of the method are: 1) good MPPT control performance is obtained without mechanical sensors; 2) modeling the system composed of the wind turbine and the permanent magnet synchronous generator reduces the difficulty of theoretical analysis and computation. Simulation experiments verify the effectiveness of the proposed method.
Keywords: wind turbine; maximum power point tracking; voltage-power curve; permanent magnet synchronous generator

MPPT Control of Directly Driven Small Wind Turbine Generator System
GONG Jianying¹, XIE Rong² (1. School of Electronic & Control Engineering, Chang'an University, Xi'an 710064, Shaanxi, China; 2. School of Automation, Northwestern Polytechnical University, Xi'an 710072, Shaanxi, China)
Abstract: This paper addresses the control of a directly driven small wind turbine generator system (WTGS) based on a permanent magnet synchronous generator (PMSG). In order to achieve maximum power point tracking (MPPT) with no mechanical sensors, improve the reliability of the WTGS and reduce its cost, a method is proposed that uses the U-P curve to achieve MPPT. The contributions of the method are as follows: 1) a good MPPT control effect is obtained without sensors; 2) in order to reduce the difficulty of theoretical analysis and computation of the whole system, a WTGS model consisting of the wind turbine and the PMSG is presented. The efficiency of the proposed method is verified.
Key words: wind turbine; maximum power point tracking (MPPT); U-P curve; permanent magnet synchronous generator (PMSG)
Funding: Natural Science Foundation of Shaanxi Province (2014JQ8342). About the author: GONG Jianying (1980-), male, Ph.D., lecturer. Email: ********************.cn
In recent years, with the development of new-energy technology, the cost of wind power generation has fallen steadily, and the installed capacity of wind turbines (WT) has surged worldwide [1-2].
A Parameter Identification Method for the Equivalent Model of a Supercapacitor Module Based on Recursive Least Squares with a Variable Forgetting Factor
Transactions of China Electrotechnical Society, Vol. 36, No. 5, Mar. 2021. DOI: 10.19595/ki.1000-6753.tces.200023
A Parameter Identification Method for the Equivalent Model of a Supercapacitor Module Based on Recursive Least Squares with a Variable Forgetting Factor
XIE Wenchao¹, ZHAO Yanming¹,², FANG Ziwei¹, LIU Shuli¹ (1. School of Information and Electrical Engineering, Hunan University of Science and Technology, Xiangtan 411201, China; 2. Hunan Engineering Research Center for the Mining and Utilization of Wind Turbine Operation Data, Hunan University of Science and Technology, Xiangtan 411201, China)
Abstract: To accurately identify the parameters of the equivalent model of the supercapacitor module in the backup power supply of a wind turbine pitch system, and to overcome the overly fast gain decay caused by the "data saturation" phenomenon, a three-branch equivalent circuit model of the supercapacitor module is established, and a parameter identification method based on recursive least squares (RLS) with a variable forgetting factor is proposed. A Simulink model for multi-method parameter identification of the supercapacitor module is then built, and simulations and analysis are carried out. The results show that the comprehensive error of the proposed method in the static phase after charging is 0.19%, which is 6.92% lower than that of the circuit analysis method and 0.09% lower than that of the segmentation optimization method. The comprehensive error over the whole charge-discharge process is 1.22%, which is 9.5% lower than the circuit analysis method and 1.6% lower than the segmentation optimization method. The RLS method with a variable forgetting factor achieves higher identification accuracy than both the circuit analysis method and the segmentation optimization method.
Keywords: supercapacitor module; equivalent model; parameter identification; variable forgetting factor

Variable Forgetting Factor Recursive Least Squares Based Parameter Identification Method for the Equivalent Circuit Model of the Supercapacitor Cell Module
Xie Wenchao¹, Zhao Yanming¹,², Fang Ziwei¹, Liu Shuli¹ (1. School of Information and Electrical Engineering, Hunan University of Science and Technology, Xiangtan 411201, China; 2. Engineering Research Center of Hunan Province for the Mining and Utilization of Wind Turbine Operation Data, Hunan University of Science and Technology, Xiangtan 411201, China)
Abstract: In order to accurately identify the parameters of the equivalent model of the supercapacitor cell module in the backup power supply of the pitch system of a megawatt wind turbine, and to solve the problem that the gain decreases too fast due to the data saturation phenomenon, a three-branch equivalent circuit model of the supercapacitor cell module was established, and a parameter identification method based on variable forgetting factor recursive least squares (RLS) was proposed. A Simulink simulation model was also established for the multi-method parameter identification of the supercapacitor cell module, and simulation and analysis were performed. The comprehensive error in the static self-discharge phase of this new method is 0.19%, which is 6.92% and 0.09% lower than the circuit analysis method and the segmentation optimization method, respectively. Its comprehensive error over the whole process is 1.22%, which is 9.5% and 1.6% lower than the circuit analysis method and the segmentation optimization method, respectively.
Funding: National Key R&D Program of China (2016YFF0203400) and Hunan Provincial Postgraduate Innovation Project (CX2018B670).
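The core recursion is easiest to see in the scalar case. The sketch below identifies a single gain with RLS and an error-driven variable forgetting factor; the adaptation law for λ is a common heuristic of ours, not necessarily the paper's, and all names and constants are assumptions:

```python
def rls_variable_forgetting(xs, ys, lam_min=0.9):
    # Scalar recursive least squares for the model y = theta * x.
    # The forgetting factor lambda is lowered when the prediction error is
    # large, so old data is discounted and the gain does not collapse
    # ("data saturation"); it returns toward 1 as the error shrinks.
    theta, p = 0.0, 1000.0              # initial estimate and covariance
    for x, y in zip(xs, ys):
        e = y - theta * x               # a-priori prediction error
        lam = min(1.0, max(lam_min, 1.0 - 0.01 * e * e))
        k = p * x / (lam + x * p * x)   # gain
        theta += k * e
        p = (p - k * x * p) / lam       # covariance update, rescaled by lambda
    return theta

# Noise-free data from y = 2.5 x: the estimate converges to 2.5.
xs = [float(i) for i in range(1, 31)]
theta_hat = rls_variable_forgetting(xs, [2.5 * x for x in xs])
```

The three-branch circuit model in the paper leads to a multi-parameter regression, so the scalars `theta`, `p`, and `k` become a vector, a covariance matrix, and a gain vector, but the structure of the recursion is unchanged.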
An Improved Adaptive Variable-Step Photovoltaic MPPT Algorithm with Power Prediction
Computer Simulation, Vol. 38, No. 3, Mar. 2021. Article ID: 1006-9348(2021)03-0034-06
An Improved Adaptive Variable-Step Photovoltaic MPPT Algorithm with Power Prediction
YUAN Chenhu¹,², WANG Kun¹, LIU Xiaoming¹,², DU Yongheng¹ (1. School of Electrical Engineering and Automation, Tianjin Polytechnic University, Tianjin 300387, China; 2. Postdoctoral Research Station in Mechanical Engineering, Tianjin Polytechnic University, Tianjin 300387, China)
Abstract: The perturb-and-observe method is one of the most widely used MPPT methods in solar photovoltaic systems. During tracking, it suffers from power-point misjudgment when the illumination changes non-uniformly, and a fixed perturbation step cannot serve both the dynamic and the steady-state performance of MPPT. Based on traditional MPPT power prediction and the idea of adaptive variable steps, an improved MPPT algorithm is proposed. An illumination change rate is introduced to linearize non-uniform illumination changes, which resolves the dP misjudgment of the perturb-and-observe method under nonlinear illumination changes. Combining the behavior of dP/dU near the MPP, the step factor is normalized with an arctangent function, so that it adapts automatically during tracking and both the dynamic and the steady-state performance of MPPT are taken into account. Simulation results show that the improved algorithm avoids perturbation misjudgment under changing illumination, achieves good dynamic and steady-state performance, and is feasible in engineering.
Keywords: maximum power point tracking; power prediction; illumination change rate; self-adaptive; perturb-and-observe method

Improved Adaptive Variable Step PV MPPT Algorithm for Power Prediction
YUAN Chenhu¹,², WANG Kun¹, LIU Xiaoming¹,², DU Yongheng¹ (1. College of Electrical Engineering and Automation, Tianjin Polytechnic University, Tianjin 300387, China; 2. Postdoctoral Research Station in Mechanical Engineering, Tianjin Polytechnic University, Tianjin 300387, China)
ABSTRACT: The perturb-observe method is one of the methods widely used in the MPPT of photovoltaic power systems. In the process of tracking, the perturb-observe method has the problem of power-point misjudgment when the illumination is non-uniform; meanwhile, a fixed disturbance step size cannot serve both the dynamic and the steady-state performance of MPPT. Based on traditional MPPT power prediction and adaptive variable step size, an improved MPPT algorithm was put forward. Firstly, the illumination change rate was introduced to linearize the non-uniform illumination change, so that the dP misjudgment of the perturb-observe method under nonlinearly changing illumination is avoided. Then, according to the characteristics of dP/dU at the MPP, the step factors were normalized by the arctangent function, so that the step-length factors automatically adapt to dynamic changes during MPPT. Therefore, this algorithm takes into account both the dynamic and the steady-state performance of MPPT. Simulation results prove that the improved algorithm can avoid misjudgment of the disturbance when the illumination changes, and its dynamic and steady-state performance is good, so the algorithm has engineering feasibility.
KEYWORDS: Maximum power point tracking (MPPT); Power prediction; Illumination change rate; Self-adaptive; Perturb-observe method

1 Introduction
With the development of society, human demand for energy keeps growing, which has led to the gradual depletion of fossil energy; at the same time, environmental pollution is worsening. Against this background, solar photovoltaic power generation has emerged as a representative modern clean energy source.
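To make the variable-step idea concrete, here is a minimal sketch of perturb-and-observe with an arctangent-normalized adaptive step on a hypothetical single-peak P-V curve under constant illumination. The curve, the constants, and the exact step law are our assumptions, not the paper's (in particular, the paper's illumination-change-rate correction is omitted):

```python
import math

def pv_power(v):
    # Hypothetical P-V curve: 75 W peak at Vmp = 17 V.
    return max(0.0, 75.0 - (v - 17.0) ** 2)

def po_adaptive_mppt(v0=12.0, step_max=1.0, iters=60):
    v, p = v0, pv_power(v0)
    dv = step_max                       # initial perturbation
    for _ in range(iters):
        v_new = v + dv
        p_new = pv_power(v_new)
        dp = p_new - p
        # Step magnitude: arctangent-normalised |dP/dV|, in [0, step_max).
        # Far from the MPP the slope is large, so the step is large; near
        # the MPP the slope vanishes, so the step shrinks automatically.
        mag = step_max * (2.0 / math.pi) * math.atan(abs(dp / dv))
        mag = max(mag, 1e-3)            # keep a minimal perturbation alive
        # Classic P&O rule: keep direction if power rose, else reverse.
        dv = mag if (dp > 0) == (dv > 0) else -mag
        v, p = v_new, p_new
    return v
```

Because the step collapses near the MPP, the steady-state oscillation is tiny, while the near-full step far from the MPP keeps the transient fast; this is exactly the dynamic/steady-state trade-off that a fixed step cannot satisfy.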
Comparing Probabilistic Inference Algorithms in Probabilistic Graphical Models (Part 4)
A probabilistic graphical model (PGM) is a mathematical model for describing dependency relationships among random variables. It is a powerful tool for modeling complex real-world problems in fields such as natural language processing, bioinformatics, and machine learning. In probabilistic graphical models, probabilistic inference algorithms are a key technique for computing the posterior distribution of latent variables given observed evidence. In this article we compare commonly used inference algorithms: variational inference, belief propagation, and Monte Carlo methods.

Variational inference is an approximate inference algorithm for computing posterior distributions. It approximates the posterior by maximizing a variational lower bound. Its strength is computational efficiency: it can handle large-scale datasets. It also has drawbacks: for non-convex problems, variational inference can get stuck in local optima. In addition, it requires choosing a suitable variational family, which may demand domain knowledge and experience.
Belief propagation is an exact inference algorithm for computing marginal distributions in a probabilistic graphical model. It computes the marginal probability at each variable node by passing messages over the graph. Its advantage is that it can deliver the globally correct answer, and for certain model classes, such as tree-structured graphical models, it is efficient. It also has limitations: it applies only to particular graph structures, and on general graph structures belief propagation may fail to converge.
Monte Carlo methods are inference algorithms based on random sampling. They approximate the posterior distribution by drawing samples from it. Their advantage is that estimates of arbitrary precision can be obtained, and for some complex posterior distributions, Monte Carlo methods may be the only feasible approach. Their drawbacks are low computational efficiency and the need for many samples to obtain accurate estimates; for high-dimensional data, the computational cost can become very high.
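As a tiny illustration of sampling-based posterior computation, the sketch below estimates the posterior mean of a coin's bias by importance sampling from a uniform prior. The example and all names are ours; with 7 heads, 3 tails, and a uniform prior, the exact posterior is Beta(8, 4) with mean 8/12 ≈ 0.667:

```python
import random

def posterior_mean_coin(heads, tails, n=50000, seed=0):
    # Draw theta from the uniform prior and weight each draw by the
    # likelihood of the data; the weighted average of theta then
    # approximates the posterior mean E[theta | data].
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        theta = rng.random()
        w = theta ** heads * (1.0 - theta) ** tails
        num += w * theta
        den += w
    return num / den

est = posterior_mean_coin(7, 3)   # exact answer: (7 + 1) / (7 + 3 + 2)
```

The estimate sharpens only as 1/√n, which is the low-efficiency, many-samples drawback described above; in higher dimensions naive weighting degenerates and MCMC variants are used instead.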
In summary, each probabilistic inference algorithm has its own strengths and weaknesses. In practice, the right choice depends on the specific problem and the characteristics of the data. Future research directions include designing more efficient inference algorithms and combining different algorithms so as to exploit their complementary strengths. We hope this discussion is helpful for research on, and application of, probabilistic inference algorithms in graphical models.
An MPPT Algorithm Based on the Constant Voltage Comparison Method
An MPPT Algorithm Based on the Constant Voltage Comparison Method
MIN Xuan, JIANG Zhijun, WANG Huafeng (School of Information Engineering, Nanchang University, Nanchang 330031, China)
Abstract: Based on the strengths and weaknesses of common MPPT algorithms, a new algorithm named the constant voltage comparison method (CVCM) is designed. The new algorithm splits the MPPT process into two stages: it first locks the output voltage rapidly to the neighborhood of a preset maximum power point, and then performs multi-point, multi-round comparisons to steer it to the true maximum power point. Simulink simulations confirm that the new algorithm balances speed and accuracy, maintains stable operation after reaching the maximum power point, and responds correctly to changes in the solar cell's environment. Compared with the perturb-and-observe method, the new algorithm is 25% faster, 3% more accurate, reduces oscillation amplitude by 66%, and improves the response to temperature changes by 89% and to irradiance changes by 94%, helping to raise solar cell efficiency.
Journal: Journal of Nanchang University (Engineering & Technology), 2018, 40(3), pp. 303-306
Keywords: solar cell; circuit model; maximum power point tracking; constant voltage comparison method; Simulink simulation
CLC number: TM615
Among the various technologies of solar power generation, the single most effective technique for raising efficiency is maximum power point tracking (MPPT); research to date has produced many MPPT algorithms.
Reference [1] proposed an optimal gradient method, applying the mathematical concept of the gradient to MPPT and solving the problem of insufficient search accuracy. Reference [2] designed a curve-fitting method that completes the search by finding the extremum of a fitted function, addressing slow search speed. Reference [3] presented a trust-region-based algorithm, introducing the idea of adaptive adjustment and solving the problem of overly short search steps. There are now close to a hundred MPPT algorithms; they are numerous but of uneven quality. The common MPPT algorithms each have their own strengths and weaknesses. The perturb-and-observe method, for example, is simple to implement and tolerant of parameter inaccuracy, but it cannot balance tracking speed against control accuracy and cannot respond promptly to environmental changes. The incremental conductance method controls very precisely and produces little oscillation, but it tracks slowly and is easily affected by environmental changes. The fuzzy control method is extremely stable, but the algorithm is complex and hard to apply in real production.
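Here is a minimal sketch of the two-stage CVCM idea on a hypothetical single-peak P-V curve. The curve, the 0.76·Voc lock-in ratio, and the shrinking comparison window are our assumptions, since the paper's exact constants are not given here:

```python
def pv_power(v):
    # Hypothetical P-V curve: 75 W peak at Vmp = 17 V.
    return max(0.0, 75.0 - (v - 17.0) ** 2)

def cvcm_mppt(v_oc=21.0, k=0.76, delta=0.5, rounds=25):
    v = k * v_oc                     # stage 1: jump near the expected MPP
    for _ in range(rounds):          # stage 2: multi-point comparison
        v = max((v - delta, v, v + delta), key=pv_power)
        delta *= 0.7                 # shrink the comparison window
    return v

v_star = cvcm_mppt()
```

Stage 1 gives the speed (one large jump instead of many perturbations), while the shrinking comparison window of stage 2 gives the accuracy and the small steady-state oscillation claimed for the method.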
Bayesian Estimation of the Incapability Index Cpp under Asymmetric Specifications
Management Science Research, Special Issue of the 2nd Conference on Management and Decision-Making, 2007, pp. 97-110
Bayesian Estimations of the Incapability Index Cpp under Asymmetric Specifications
曾信超¹, 許淑卿², 郭信霖³
Abstract: Process capability indices are commonly used to evaluate the capability of manufacturing industries and provide an important way of measuring process performance; traditional results are mostly derived from the frequentist point of view. In this paper we use the Bayes approach with a non-informative prior, a conjugate prior, and a Weibull hazard function to obtain point estimates of the incapability index and, for the general case μ ≠ T, a 100×p% credible interval. Under the Bayesian approach, we must first specify an appropriate prior distribution for the parameters in order to obtain the posterior distribution; under the squared-error loss function, the posterior expectation of the process capability index is its Bayes estimate.
Keywords: incapability index; credible interval; non-informative prior; conjugate prior; Weibull hazard function

Abstract: Process capability indices are used to measure whether a manufacturing process meets a set of requirements preset in the workshop; they are designed to quantify the relation between the actual performance of the process and its specified requirements. In this paper we consider the problem of estimating the index Cpp (μ ≠ T) in a Bayesian framework. Several priors, such as the non-informative prior, the conjugate prior, and the Weibull hazard function, have been considered. Some estimates of the hyperparameters are also studied. Some Bayesian estimators are derived, and a lower bound of the credible interval under some conditions is also given.
Keywords: incapability index; credible interval; non-informative prior; conjugate prior; Weibull hazard function

1. Introduction
Process capability studies have become increasingly common in quality improvement and in the evaluation of productivity and the assessment of process capability.
Ultra-Short-Term Wind Power Forecasting Based on Similar Curve Clusters and GBRT
Ultra-Short-Term Wind Power Forecasting Based on Similar Curve Clusters and GBRT
ZHANG Yingchao, HUANG Fei, DENG Hua, ZHI Xingliang, LI Huiling (School of Information and Control, Nanjing University of Information Science and Technology, Nanjing 210044, China; Collaborative Innovation Center on Forecast and Evaluation of Meteorological Disasters, Nanjing University of Information Science and Technology, Nanjing 210044, China)
Abstract: To reduce redundant information in the training data and improve the accuracy of wind power forecasting, an ultra-short-term wind power forecasting model based on similar curve clusters and GBRT is proposed. First, similar curve clusters are extracted from the historical wind-speed series, using the similarity deviation as the similarity criterion: a large number of historical wind-speed series are compared against the test-set wind-speed series to find the cluster of highly similar wind-speed curves, together with the power corresponding to each wind-speed point in the cluster, and these serve as the final training samples. A gradient boosted regression tree (GBRT) model is then used to forecast the wind power. Comparative experiments on data from a wind farm in Shanghai show that the method clearly improves the accuracy of ultra-short-term wind power forecasting and is of practical value.
Journal: Journal of North China Electric Power University (Natural Science Edition), 2018, 45(6), pp. 15-20
Keywords: ultra-short-term wind power forecasting; similar curve clusters; similarity deviation; GBRT

0 Introduction
With the rapid development of the world economy, energy demand has grown substantially, and traditional fossil energy faces the threat of depletion. As a green and renewable energy source, wind power has been widely applied and developed around the world [1]. However, because wind itself is unstable, wind power generation is fluctuating and intermittent, so accurate ultra-short-term wind power forecasting helps the grid dispatch wind power [2]. Ultra-short-term wind power forecasting methods can be roughly divided into physical methods [3], statistical methods [4-7], and combined methods [8, 9].
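To illustrate the curve-selection step, here is a small sketch of ranking historical wind-speed curves against a target curve. The paper's exact similarity-deviation formula is not reproduced here, so we use a plausible variant combining a value term and a shape term; all names, constants, and the metric itself are our assumptions:

```python
def similarity_deviation(x, y):
    # Pointwise differences between two equal-length wind-speed curves.
    d = [a - b for a, b in zip(x, y)]
    m = sum(d) / len(d)
    value_term = sum(abs(di) for di in d) / len(d)      # overall closeness
    shape_term = sum(abs(di - m) for di in d) / len(d)  # similarity of shape
    return value_term + shape_term                      # 0 means identical

def most_similar(history, target, k=2):
    # Return the k historical curves most similar to the target curve.
    return sorted(history, key=lambda c: similarity_deviation(c, target))[:k]

target = [5.0, 6.0, 7.0, 6.5]
history = [
    [5.1, 6.1, 7.1, 6.6],   # nearly identical
    [9.0, 4.0, 8.0, 3.0],   # very different shape
    [5.0, 6.2, 6.9, 6.4],   # close
]
```

The selected curves, with their corresponding power values, would then form the reduced training set passed to the GBRT regressor.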
An Improved Image Segmentation Algorithm Based on a Higher-Order Conditional Random Field Model
An Improved Image Segmentation Algorithm Based on a Higher-Order Conditional Random Field Model
WANG Lingjiao, ZHONG Yiqun, GUO Hua, PENG Zhiqiang (School of Information Engineering, Xiangtan University, Xiangtan 411105, China)
Abstract: In image segmentation, the conditional random field (CRF) model and its higher-order extensions are widely used as energy functions. The higher-order model builds on the pairwise CRF by introducing higher-order potentials that capture the labeling consistency of the pixels within each segment, making segmented object boundaries more precise, but the computational efficiency of its energy minimization is unsatisfactory. To address this problem, an improved image segmentation algorithm based on the robust P^n Potts higher-order CRF model is proposed. A max-flow/min-cut algorithm is run on the given label set to obtain a local optimum, which is used to update node labels; the α-expansion algorithm is then run only on the nodes whose labels remain undetermined, and in each iteration the graph's flow and residual edge capacities are updated dynamically, so that the time per iteration decreases rapidly. Experimental results show that, while preserving the original segmentation quality, the improved algorithm's energy minimization converges 2 to 3 times faster than the α-expansion algorithm on the same images.
Journal: Computer Engineering, 2016, 42(6), pp. 241-246
Keywords: higher-order conditional random field model; image segmentation; energy minimization; max-flow/min-cut; local optimum; α-expansion algorithm
Citation: Wang Lingjiao, Zhong Yiqun, Guo Hua, et al. Improved Image Segmentation Algorithm Based on High-order Conditional Random Field Model [J]. Computer Engineering, 2016, 42(6): 241-246.
Discrete optimization has become an important tool for solving computer vision problems.