Multi-Kernel Multi-Label Learning with Max-Margin Concept Network (IJCAI 2011)


In-orbit calibration of the center-of-mass offset of a gravity satellite using accelerometer data


Acta Scientiarum Naturalium Universitatis Sunyatseni (Journal of Sun Yat-sen University, Natural Science Edition), Vol. 63, No. 2, Mar. 2024

In-orbit calibration of the center-of-mass offset of a gravity satellite using accelerometer data*

LIU Chaoqun 1, GU Defeng 1,2, HUANG Zhiyong 3, WANG Aoming 1, LIU Daoping 2
1. School of Artificial Intelligence, Sun Yat-sen University, Zhuhai 519082, China
2. MOE Key Laboratory of TianQin Mission / Frontiers Science Center for TianQin / CNSA Research Center for Gravitational Waves, Sun Yat-sen University, Zhuhai 519082, China
3. State Key Laboratory of Geo-Information Engineering, Xi'an 710054, China

Abstract: To address the problem of in-orbit calibration of the center-of-mass (CoM) offset of gravity satellites, a calibration method that uses only accelerometer data is proposed: Level-1A accelerometer (ACC1A) data are taken as input, a Butterworth filter removes noise from the data, the linear-acceleration and angular-acceleration signals produced by the CoM calibration maneuvers are extracted, and the CoM offset is finally estimated by least squares.

The proposed calibration method is validated with accelerometer data from the GRACE-FO C satellite, and the CoM offset of the C satellite from launch to the present is estimated.

The results show that the calibration accuracy is better than 10 μm on all three axes; the RMS differences with respect to the CoM offset computed from attitude data are [9.6, 9.3, 7.9] μm on the three axes, and the RMS differences with respect to the results published by the Jet Propulsion Laboratory (JPL) are [7.4, 3.8, 4.7] μm.

Unlike traditional methods, because only accelerometer measurements are used, the proposed method can calibrate the CoM in orbit even when satellite attitude data are unavailable.

Keywords: GRACE-FO; gravity satellite; accelerometer; center-of-mass calibration
CLC number: V19    Document code: A    Article ID: 2097-0137(2024)02-0123-08
DOI: 10.13471/ki.acta.snus.ZR20230018
Received: 2023-11-21    Accepted: 2023-12-07    Published online: 2024-01-05
Funding: National Natural Science Foundation of China (41874028); Fundamental Research Funds for the Central Universities (23xkjc001)
First author: LIU Chaoqun (born 1998), female; research interests: intelligent perception and information processing; E-mail: *******************
Corresponding author: GU Defeng (born 1980), male; research interests: GNSS precise orbit determination and positioning, satellite test and evaluation, and related applications; E-mail: ******************

The onboard accelerometer of a GRACE-type gravity satellite is used to measure the non-conservative forces acting on the satellite (Tapley et al., 2004; Flury et al., 2008; Christophe et al., 2015).
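The calibration principle summarized in the abstract can be sketched as follows: during a CoM calibration maneuver, an accelerometer proof mass offset by d from the center of mass senses an extra acceleration α × d + ω × (ω × d), which is linear in d. The snippet below is a minimal illustrative sketch, not the authors' implementation: it assumes the band-pass-filtered linear-acceleration residual `acc_lin`, the angular acceleration `ang_acc`, and the angular rate `ang_rate` have already been extracted from the ACC1A data, and the filter band and all parameter values are placeholders.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(x, f_lo, f_hi, fs, order=4):
    """Zero-phase Butterworth band-pass filter, applied column-wise (band edges are placeholders)."""
    b, a = butter(order, [f_lo, f_hi], btype="bandpass", fs=fs)
    return filtfilt(b, a, x, axis=0)

def skew(v):
    """Cross-product matrix, so that skew(a) @ d == np.cross(a, d)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def estimate_com_offset(acc_lin, ang_acc, ang_rate):
    """Least-squares CoM offset d from the rigid-body model a = alpha x d + omega x (omega x d),
    which is linear in d. acc_lin, ang_acc, ang_rate: (N, 3) arrays of the filtered maneuver signals."""
    A = np.vstack([skew(al) + skew(om) @ skew(om) for al, om in zip(ang_acc, ang_rate)])
    y = acc_lin.reshape(-1)
    d, *_ = np.linalg.lstsq(A, y, rcond=None)
    return d  # CoM offset estimate; sign convention depends on the adopted measurement model
```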

Reference (10): Semi-supervised and unsupervised extreme learning machines


Semi-supervised and unsupervised extreme learningmachinesGao Huang,Shiji Song,Jatinder N.D.Gupta,and Cheng WuAbstract—Extreme learning machines(ELMs)have proven to be an efficient and effective learning paradigm for pattern classification and regression.However,ELMs are primarily applied to supervised learning problems.Only a few existing research studies have used ELMs to explore unlabeled data. In this paper,we extend ELMs for both semi-supervised and unsupervised tasks based on the manifold regularization,thus greatly expanding the applicability of ELMs.The key advantages of the proposed algorithms are1)both the semi-supervised ELM (SS-ELM)and the unsupervised ELM(US-ELM)exhibit the learning capability and computational efficiency of ELMs;2) both algorithms naturally handle multi-class classification or multi-cluster clustering;and3)both algorithms are inductive and can handle unseen data at test time directly.Moreover,it is shown in this paper that all the supervised,semi-supervised and unsupervised ELMs can actually be put into a unified framework. This provides new perspectives for understanding the mechanism of random feature mapping,which is the key concept in ELM theory.Empirical study on a wide range of data sets demonstrates that the proposed algorithms are competitive with state-of-the-art semi-supervised or unsupervised learning algorithms in terms of accuracy and efficiency.Index Terms—Clustering,embedding,extreme learning ma-chine,manifold regularization,semi-supervised learning,unsu-pervised learning.I.I NTRODUCTIONS INGLE layer feedforward networks(SLFNs)have been intensively studied during the past several decades.Most of the existing learning algorithms for training SLFNs,such as the famous back-propagation algorithm[1]and the Levenberg-Marquardt algorithm[2],adopt gradient methods to optimize the weights in the network.Some existing works also use forward selection or backward elimination approaches to con-struct network dynamically during the training process[3]–[7].However,neither the gradient based methods nor the grow/prune methods guarantee a global optimal solution.Al-though various methods,such as the generic and evolutionary algorithms,have been proposed to handle the local minimum This work was supported by the National Natural Science Foundation of China under Grant61273233,the Research Fund for the Doctoral Program of Higher Education under Grant20120002110035and20130002130010, the National Key Technology R&D Program under Grant2012BAF01B03, the Project of China Ocean Association under Grant DY125-25-02,and Tsinghua University Initiative Scientific Research Program under Grants 2011THZ07132.Gao Huang,Shiji Song,and Cheng Wu are with the Department of Automation,Tsinghua University,Beijing100084,China(e-mail:huang-g09@;shijis@; wuc@).Jatinder N.D.Gupta is with the College of Business Administration,The University of Alabama in Huntsville,Huntsville,AL35899,USA.(e-mail: guptaj@).problem,they basically introduce high computational cost. 
One of the most successful algorithms for training SLFNs is the support vector machines(SVMs)[8],[9],which is a maximal margin classifier derived under the framework of structural risk minimization(SRM).The dual problem of SVMs is a quadratic programming and can be solved conveniently.Due to its simplicity and stable generalization performance,SVMs have been widely studied and applied to various domains[10]–[14].Recently,Huang et al.[15],[16]proposed the extreme learning machines(ELMs)for training SLFNs.In contrast to most of the existing approaches,ELMs only update the output weights between the hidden layer and the output layer, while the parameters,i.e.,the input weights and biases,of the hidden layer are randomly generated.By adopting squared loss on the prediction error,the training of output weights turns into a regularized least squares(or ridge regression)problem which can be solved efficiently in closed form.It has been shown that even without updating the parameters of the hidden layer,the SLFN with randomly generated hidden neurons and tunable output weights maintains its universal approximation capability[17]–[19].Compared to gradient based algorithms, ELMs are much more efficient and usually lead to better generalization performance[20]–[22].Compared to SVMs, solving the regularized least squares problem in ELMs is also faster than solving the quadratic programming problem in standard SVMs.Moreover,ELMs can be used for multi-class classification problems directly.The predicting accuracy achieved by ELMs is comparable with or even higher than that of SVMs[16],[22]–[24].The differences and similarities between ELMs and SVMs are discussed in[25]and[26], and new algorithms are proposed by combining the advan-tages of both models.In[25],an extreme SVM(ESVM) model is proposed by combining ELMs and the proximal SVM(PSVM).The ESVM algorithm is shown to be more accurate than the basic ELMs model due to the introduced regularization technique,and much more efficient than SVMs since there is no kernel matrix multiplication in ESVM.In [26],the traditional RBF kernel are replaced by ELM kernel, leading to an efficient algorithm with matched accuracy of SVMs.In the past years,researchers from variesfields have made substantial contribution to ELM theories and applications.For example,the universal approximation ability of ELMs has been further studied in a classification context[23].The gen-eralization error bound of ELMs has been investigated from the perspective of the Vapnik-Chervonenkis(VC)dimension theory and the initial localized generalization error model(LGEM)[27],[28].Varies extensions have been made to the basic ELMs to make it more efficient and more suitable for specific problems,such as ELMs for online sequential data [29]–[31],ELMs for noisy/missing data[32]–[34],ELMs for imbalanced data[35],etc.From the implementation aspect, ELMs has recently been implemented using parallel tech-niques[36],[37],and realized on hardware[38],which made ELMs feasible for large data sets and real time reasoning. 
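As a concrete illustration of this two-stage training, the following minimal NumPy sketch (not the authors' code) builds a random sigmoid hidden layer and solves the regularized least squares problem for the output weights in closed form, handling both the case with more training patterns than hidden neurons and the reverse case, as formalized later in Eqs. (7) and (9); all parameter values are illustrative.

```python
import numpy as np

def elm_train(X, Y, n_hidden=200, C=1.0, seed=None):
    """Minimal ELM: random sigmoid hidden layer + ridge-regression output weights.
    X: (N, n_i) inputs; Y: (N, n_o) targets (one-hot for classification)."""
    rng = np.random.default_rng(seed)
    # Stage 1: random input weights and biases, uniform on (-1, 1), never updated.
    A = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ A + b)))                 # hidden-layer output matrix
    # Stage 2: closed-form regularized least squares for the output weights beta.
    N, nh = H.shape
    if N >= nh:   # overdetermined case: beta = (H'H + I/C)^-1 H'Y
        beta = np.linalg.solve(H.T @ H + np.eye(nh) / C, H.T @ Y)
    else:         # underdetermined case: beta = H'(HH' + I/C)^-1 Y
        beta = H.T @ np.linalg.solve(H @ H.T + np.eye(N) / C, Y)
    return A, b, beta

def elm_predict(X, A, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ A + b)))
    return H @ beta                                        # class = argmax over output columns
```

The same random feature mapping (A, b and the resulting H) is reused unchanged by the semi-supervised and unsupervised variants discussed later; only the output-weight step differs.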
Though ELMs have become popular in a wide range of domains,they are primarily used for supervised learning tasks such as classification and regression,which greatly limits their applicability.In some cases,such as text classification, information retrieval and fault diagnosis,obtaining labels for fully supervised learning is time consuming and expensive, while a multitude of unlabeled data are easy and cheap to collect.To overcome the disadvantage of supervised learning al-gorithms that they cannot make use of unlabeled data,semi-supervised learning(SSL)has been proposed to leverage both labeled and unlabeled data[39],[40].The SSL algorithms assume that the input patterns from both labeled and unlabeled data are drawn from the same marginal distribution.Therefore, the unlabeled data naturally provide useful information for exploring the data structure in the input space.By assuming that the input data follows some cluster structure or manifold in the input space,SSL algorithms can incorporate both la-beled and unlabeled data into the learning process.Since SSL requires less effort to collect labeled data and can offer higher accuracy,it has been applied to various domains[41]–[43].In some other cases where no labeled data are available,people may be interested in exploring the underlying structure of the data.To this end,unsupervised learning(USL)techniques, such as clustering,dimension reduction or data representation, are widely used to fulfill these tasks.In this paper,we extend ELMs to handle both semi-supervised and unsupervised learning problems by introducing the manifold regularization framework.Both the proposed semi-supervised ELM(SS-ELM)and unsupervised ELM(US-ELM)inherit the computational efficiency and the learn-ing capability of traditional pared with existing algorithms,SS-ELM and US-ELM are not only inductive (straightforward extension for out-of-sample examples at test time),but also can be used for multi-class classification or multi-cluster clustering directly.We test our algorithms on a variety of data sets,and make comparisons with other related algorithms.The results show that the proposed algorithms are competitive with state-of-the-art algorithms in terms of accuracy and efficiency.It is worth to mention that all the supervised,semi-supervised and unsupervised ELMs can actually be put into a unified framework,that is all the algorithms consist of two stages:1)random feature mapping;and2)output weights solving.Thefirst stage is to construct the hidden layer using randomly generated hidden neurons.This is the key concept in the ELM theory,which differs it from many existing feature learning methods.Generating feature mapping randomly en-ables ELMs for fast nonlinear feature learning and alleviates the problem of over-fitting.The second stage is to solve the weights between the hidden layer and the output layer, and this is where the main difference of supervised,semi-supervised and unsupervised ELMs lies.We believe that the unified framework for the three types of ELMs might provide us a new perspective to understand the underlying behavior of the random feature mapping in ELMs.The rest of the paper is organized as follows.In Section II,we give a brief review of related existing literature on semi-supervised and unsupervised learning.Section III and IV introduce the basic formulation of ELMs and the man-ifold regularization framework,respectively.We present the proposed SS-ELM and US-ELM algorithms in Sections V and VI.Experiment results are given in Section VII,and 
Section VIII concludes the paper.II.R ELATED WORKSOnly a few existing research studies on ELMs have dealt with the problem of semi-supervised learning or unsupervised learning.In[44]and[45],the manifold regularization frame-work was introduce into the ELMs model to leverage both labeled and unlabeled data,thus extended ELMs for semi-supervised learning.However,both of these two works are limited to binary classification problems,thus they haven’t explore the full power of ELMs.Moreover,both algorithms are only effective when the number of training patterns is more than the number of hidden neurons.Unfortunately,this condition is usually violated in semi-supervised learning since the training data is relatively scarce compared to the hidden neurons,whose number is commonly set to several hundreds or several thousands.Recently,a co-training approach have been proposed to train ELMs in a semi-supervised setting [46].In this algorithm,the labeled training sets are augmented gradually by moving a small set of most confidently predicted unlabeled data to the labeled set at each loop,and ELMs are trained repeatedly on the pseudo-labeled set.Since the algo-rithm need to train ELMs repeatedly,it introduces considerable extra computational cost.The proposed SS-ELM is related to a few other mani-fold assumption based semi-supervised learning algorithms, such as the Laplacian support vector machines(LapSVMs) [47],the Laplacian regularized least squares(LapRLS)[47], semi-supervised neural networks(SSNNs)[48],and semi-supervised deep embedding[49].It has been shown in these works that manifold regularization is effective in a wide range of domains and often leads to a state-of-the-art performance in terms of accuracy and efficiency.The US-ELM proposed in this paper are related to the Laplacian Eigenmaps(LE)[50]and spectral clustering(SC) [51]in that they both use spectral techniques for embedding and clustering.In all these algorithms,an affinity matrix is first built from the input patterns.The SC performs eigen-decomposition on the normalized affinity matrix,and then embeds the original data into a d-dimensional space using the first d eigenvectors(each row is normalized to have unit length and represents a point in the embedded space)corresponding to the d largest eigenvalues.The LE algorithm performs generalized eigen-decomposition on the graph Laplacian,anduses the d eigenvectors corresponding to the second through the(d+1)th smallest eigenvalues for embedding.When LE and SC are used for clustering,then k-means is adopted to cluster the data in the embedded space.Similar to LE and SC,the US-ELM are also based on the affinity matrix,and it is converted to solving a generalized eigen-decomposition problem.However,the eigenvectors obtained in US-ELM are not used for data representation directly,but are used as the parameters of the network,i.e.,the output weights.Note that once the US-ELM model is trained,it can be applied to any presented data in the original input space.In this way,US-ELM provide a straightforward way for handling new patterns without recomputing eigenvectors as in LE and SC.III.E XTREME LEARNING MACHINES Consider a supervised learning problem where we have a training set with N samples,{X,Y}={x i,y i}N i=1.Herex i∈R n i,y i is a n o-dimensional binary vector with only one entry(correspond to the class that x i belongs to)equal to one for multi-classification tasks,or y i∈R n o for regression tasks,where n i and n o are the dimensions of input and output respectively.ELMs aim to 
learn a decision rule or an approximation function based on the training data.

Generally, the training of ELMs consists of two stages. The first stage is to construct the hidden layer using a fixed number of randomly generated mapping neurons, which can be any nonlinear piecewise continuous functions, such as the Sigmoid function and the Gaussian function given below.

1) Sigmoid function
g(x; θ) = 1 / (1 + exp(−(a^T x + b)));    (1)

2) Gaussian function
g(x; θ) = exp(−b ‖x − a‖);    (2)

where θ = {a, b} are the parameters of the mapping function and ‖·‖ denotes the Euclidean norm.

A notable feature of ELMs is that the parameters of the hidden mapping functions can be randomly generated according to any continuous probability distribution, e.g., the uniform distribution on (−1, 1). This makes ELMs distinct from traditional feedforward neural networks and SVMs. The only free parameters that need to be optimized in the training process are the output weights between the hidden neurons and the output nodes. By doing so, training ELMs is equivalent to solving a regularized least squares problem, which is considerably more efficient than the training of SVMs or backpropagation algorithms.

In the first stage, a number of hidden neurons which map the data from the input space into an n_h-dimensional feature space (n_h is the number of hidden neurons) are randomly generated. We denote by h(x_i) ∈ R^{1×n_h} the output vector of the hidden layer with respect to x_i, and by β ∈ R^{n_h×n_o} the output weights that connect the hidden layer with the output layer. Then, the outputs of the network are given by

f(x_i) = h(x_i) β,  i = 1, ..., N.    (3)

In the second stage, ELMs aim to solve the output weights by minimizing the sum of the squared losses of the prediction errors, which leads to the following formulation

min_{β ∈ R^{n_h×n_o}}  (1/2) ‖β‖^2 + (C/2) Σ_{i=1}^{N} ‖e_i‖^2
s.t.  h(x_i) β = y_i^T − e_i^T,  i = 1, ..., N,    (4)

where the first term in the objective function is a regularization term which controls the complexity of the model, e_i ∈ R^{n_o} is the error vector with respect to the i-th training pattern, and C is a penalty coefficient on the training errors.

By substituting the constraints into the objective function, we obtain the following equivalent unconstrained optimization problem:

min_{β ∈ R^{n_h×n_o}}  L_ELM = (1/2) ‖β‖^2 + (C/2) ‖Y − Hβ‖^2    (5)

where H = [h(x_1)^T, ..., h(x_N)^T]^T ∈ R^{N×n_h}.

The above problem is widely known as ridge regression or regularized least squares. By setting the gradient of L_ELM with respect to β to zero, we have

∇L_ELM = β − C H^T (Y − Hβ) = 0.    (6)

If H has more rows than columns and is of full column rank, which is usually the case when the number of training patterns is larger than the number of hidden neurons, the above equation is overdetermined, and we have the following closed-form solution for (5):

β* = (H^T H + I_{n_h}/C)^{−1} H^T Y,    (7)

where I_{n_h} is an identity matrix of dimension n_h. Note that in practice, rather than explicitly inverting the n_h × n_h matrix in the above expression, we can use Gaussian elimination to directly solve a set of linear equations in a more efficient and numerically stable manner.

If the number of training patterns is less than the number of hidden neurons, then H will have more columns than rows, which often leads to an underdetermined least squares problem. In this case, β may have an infinite number of solutions. To handle this problem, we restrict β to be a linear combination of the rows of H: β = H^T α (α ∈ R^{N×n_o}). Notice that when H has more columns than rows and is of full row rank, then HH^T is invertible. Multiplying both sides of (6) by (HH^T)^{−1} H, we get

α − C (Y − HH^T α) = 0.    (8)

This yields

β* = H^T α* = H^T (HH^T + I_N/C)^{−1} Y,    (9)

where I_N is an
identity matrix of dimension N. Therefore,in the case where training patterns are plentiful compared to the hidden neurons,we use(7)to compute the output weights,otherwise we use(9).IV.T HE MANIFOLD REGULARIZATION FRAMEWORK Semi-supervised learning is built on the following two assumptions:(1)both the label data X l and the unlabeled data X u are drawn from the same marginal distribution P X ;and (2)if two points x 1and x 2are close to each other,then the conditional probabilities P (y |x 1)and P (y |x 2)should be similar as well.The latter assumption is widely known as the smoothness assumption in machine learning.To enforce this assumption on the data,the manifold regularization framework proposes to minimize the following cost functionL m=12∑i,jw ij ∥P (y |x i )−P (y |x j )∥2,(10)where w ij is the pair-wise similarity between two patterns x iand x j .Note that the similarity matrix W =[w ij ]is usually sparse,since we only place a nonzero weight between two patterns x i and x j if they are close,e.g.,x i is among the k nearest neighbors of x j or x j is among the k nearest neighbors of x i .The nonzero weights are usually computed using Gaussian function exp (−∥x i −x j ∥2/2σ2),or simply fixed to 1.Intuitively,the formulation (10)penalizes large variation in the conditional probability P (y |x )when x has a small change.This requires that P (y |x )vary smoothly along the geodesics of P (x ).Since it is difficult to compute the conditional probability,we can approximate (10)with the following expression:ˆLm =12∑i,jw ij ∥ˆyi −ˆy j ∥2,(11)where ˆyi and ˆy j are the predictions with respect to pattern x i and x j ,respectively.It is straightforward to simplify the above expression in a matrix form:ˆL m =Tr (ˆY T L ˆY ),(12)where Tr (·)denotes the trace of a matrix,L =D −W isknown as the graph Laplacian ,and D is a diagonal matrixwith its diagonal elements D ii =l +u∑j =1w i,j .As discussed in [52],instead of using L directly,we can normalize it byD −12L D −12or replace it by L p (p is an integer),based on some prior knowledge.V.S EMI -SUPERVISED ELMIn the semi-supervised setting,we have few labeled data and plenty of unlabeled data.We denote the labeled data in the training set as {X l ,Y l }={x i ,y i }l i =1,and unlabeled dataas X u ={x i }ui =1,where l and u are the number of labeled and unlabeled data,respectively.The proposed SS-ELM incorporates the manifold regular-ization to leverage unlabeled data to improve the classification accuracy when labeled data are scarce.By modifying the ordinary ELM formulation (4),we give the formulation ofSS-ELM as:minβ∈R n h ×n o12∥β∥2+12l∑i =1C i ∥e i ∥2+λ2Tr (F T L F )s.t.h (x i )β=y T i −e T i ,i =1,...,l,f i =h (x i )β,i =1,...,l +u(13)where L ∈R (l +u )×(l +u )is the graph Laplacian built fromboth labeled and unlabeled data,and F ∈R (l +u )×n o is the output matrix of the network with its i th row equal to f (x i ),λis a tradeoff parameter.Note that similar to the weighted ELM algorithm (W-ELM)introduced in [35],here we associate different penalty coeffi-cient C i on the prediction errors with respect to patterns from different classes.This is because we found that when the data is skewed,i.e.,some classes have significantly more training patterns than other classes,traditional ELMs tend to fit the classes that having the majority of patterns quite well but fits other classes poorly.This usually leads to poor generalization performance on the testing set (while the prediction accuracy may be high,but the some classes are neglected).Therefore,we 
propose to alleviate this problem by re-weighting instances from different classes.Suppose that x i belongs to class t i ,which has N t i training patterns,then we associate e i with a penalty ofC i =C 0N t i.(14)where C 0is a user defined parameter as in traditional ELMs.In this way,the patterns from the dominant classes will not be over fitted by the algorithm,and the patterns from a class with less samples will not be neglected.We substitute the constraints into the objective function,and rewrite the above formulation in a matrix form:min β∈R n h×n o 12∥β∥2+12∥C 12( Y −Hβ)∥2+λ2Tr (βT H TL Hβ)(15)where Y∈R (l +u )×n o is the training target with its first l rows equal to Y l and the rest equal to 0,C is a (l +u )×(l +u )diagonal matrix with its first l diagonal elements [C ]ii =C i ,i =1,...,l and the rest equal to 0.Again,we compute the gradient of the objective function with respect to β:∇L SS −ELM =β+H T C ( Y−H β)+λH H T L H β.(16)By setting the gradient to zero,we obtain the solution tothe SS-ELM:β∗=(I n h +H T C H +λH H T L H )−1H TC Y .(17)As in Section III,if the number of labeled data is fewer thanthe number of hidden neurons,which is common in SSL,we have the following alternative solution:β∗=H T (I l +u +C H H T +λL L H H T )−1C Y .(18)where I l +u is an identity matrix of dimension l +u .Note that by settingλto be zero and the diagonal elements of C i(i=1,...,l)to be the same constant,(17)and (18)reduce to the solutions of traditional ELMs(7)and(9), respectively.Based on the above discussion,the SS-ELM algorithm is summarized as Algorithm1.Algorithm1The SS-ELM algorithmInput:The labeled patterns,{X l,Y l}={x i,y i}l i=1;The unlabeled patterns,X u={x i}u i=1;Output:The mapping function of SS-ELM:f:R n i→R n oStep1:Construct the graph Laplacian L from both X l and X u.Step2:Initiate an ELM network of n h hidden neurons with random input weights and biases,and calculate the output matrix of the hidden neurons H∈R(l+u)×n h.Step3:Choose the tradeoff parameter C0andλ.Step4:•If n h≤NCompute the output weightsβusing(17)•ElseCompute the output weightsβusing(18)return The mapping function f(x)=h(x)β.VI.U NSUPERVISED ELMIn this section,we introduce the US-ELM algorithm for unsupervised learning.In an unsupervised setting,the entire training data X={x i}N i=1are unlabeled(N is the number of training patterns)and our target is tofind the underlying structure of the original data.The formulation of US-ELM follows from the formulation of SS-ELM.When there is no labeled data,(15)is reduced tomin β∈R n h×n o ∥β∥2+λTr(βT H T L Hβ)(19)Notice that the above formulation always attains its mini-mum atβ=0.As suggested in[50],we have to introduce addtional constraints to avoid a degenerated solution.Specifi-cally,the formulation of US-ELM is given bymin β∈R n h×n o ∥β∥2+λTr(βT H T L Hβ)s.t.(Hβ)T Hβ=I no(20)Theorem1:An optimal solution to problem(20)is given by choosingβas the matrix whose columns are the eigenvectors (normalized to satisfy the constraint)corresponding to thefirst n o smallest eigenvalues of the generalized eigenvalue problem:(I nh +λH H T L H)v=γH H T H v.(21)Proof:We can rewrite the problem(20)asminβ∈R n h×n o,ββT Bβ=I no Tr(βT Aβ),(22)Algorithm2The US-ELM algorithmInput:The training data:X∈R N×n i;Output:•For embedding task:The embedding in a n o-dimensional space:E∈R N×n o;•For clustering task:The label vector of cluster index:y∈N N×1+.Step1:Construct the graph Laplacian L from X.Step2:Initiate an ELM network of n h hidden neurons withrandom input weights,and calculate the output 
matrix of thehidden neurons H∈R N×n h.Step3:•If n h≤NFind the generalized eigenvectors v2,v3,...,v no+1of(21)corresponding to the second through the n o+1smallest eigenvalues.Letβ=[ v2, v3,..., v no+1],where v i=v i/∥H v i∥,i=2,...,n o+1.•ElseFind the generalized eigenvectors u2,u3,...,u no+1of(24)corresponding to the second through the n o+1smallest eigenvalues.Letβ=H T[ u2, u3,..., u no+1],where u i=u i/∥H H T u i∥,i=2,...,n o+1.Step4:Calculate the embedding matrix:E=Hβ.Step5(For clustering only):Treat each row of E as a point,and cluster the N points into K clusters using the k-meansalgorithm.Let y be the label vector of cluster index for allthe points.return E(for embedding task)or y(for clustering task);where A=I nh+λH H T L H and B=H T H.It is easy to verify that both A and B are Hermitianmatrices.Thus,according to the Rayleigh-Ritz theorem[53],the above trace minimization problem attains its optimum ifand only if the column span ofβis the minimum span ofthe eigenspace corresponding to the smallest n o eigenvaluesof(21).Therefore,by stacking the normalized eigenvectors of(21)corresponding to the smallest n o generalized eigenvalues,we obtain an optimal solution to(20).In the algorithm of Laplacian eigenmaps,thefirst eigenvec-tor is discarded since it is always a constant vector proportionalto1(corresponding to the smallest eigenvalue0)[50].In theUS-ELM algorithm,thefirst eigenvector of(21)also leadsto small variations in embedding and is not useful for datarepresentation.Therefore,we suggest to discard this trivialsolution as well.Letγ1,γ2,...,γno+1(γ1≤γ2≤...≤γn o+1)be the(n o+1)smallest eigenvalues of(21)and v1,v2,...,v no+1be their corresponding eigenvectors.Then,the solution to theoutput weightsβis given byβ∗=[ v2, v3,..., v no+1],(23)where v i=v i/∥H v i∥,i=2,...,n o+1are the normalizedeigenvectors.If the number of labeled data is fewer than the numberTABLE ID ETAILS OF THE DATA SETS USED FOR SEMI-SUPERVISED LEARNINGData set Class Dimension|L||U||V||T|G50C2505031450136COIL20(B)2102440100040360USPST(B)225650140950498COIL2020102440100040360USPST1025650140950498of hidden neurons,problem(21)is underdetermined.In this case,we have the following alternative formulation by using the same trick as in previous sections:(I u+λL L H H T )u=γH H H T u.(24)Again,let u1,u2,...,u no +1be generalized eigenvectorscorresponding to the(n o+1)smallest eigenvalues of(24), then thefinal solution is given byβ∗=H T[ u2, u3,..., u no +1],(25)where u i=u i/∥H H T u i∥,i=2,...,n o+1are the normal-ized eigenvectors.If our task is clustering,then we can adopt the k-means algorithm to perform clustering in the embedded space.We summarize the proposed US-ELM in Algorithm2. Remark:Comparing the supervised ELM,the semi-supervised ELM and the unsupervised ELM,we can observe that all the algorithms have two similar stages in the training process,that is the random feature learning stage and the out-put weights learning stage.Under this two-stage framework,it is easy tofind the differences and similarities between the three algorithms.Actually,all the algorithms share the same stage of random feature learning,and this is the essence of the ELM theory.This also means that no matter the task is a supervised, semi-supervised or unsupervised learning problem,we can always follow the same step to generate the hidden layer. 
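The two specialized solvers can be sketched in the same style. The code below is a minimal NumPy/SciPy illustration, not the authors' code, of the SS-ELM output weights of Eq. (17) and the US-ELM embedding of Eqs. (21)-(23). It assumes the number of hidden neurons does not exceed the number of training patterns, that the labeled rows come first in H, that Y_l is a 0/1 one-hot target matrix, and that a simple k-nearest-neighbor Gaussian graph is used for L; the k, sigma, and jitter values are placeholders.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def graph_laplacian(X, k=10, sigma=1.0):
    """Unnormalized graph Laplacian L = D - W from a k-NN graph with Gaussian weights."""
    D2 = cdist(X, X, "sqeuclidean")
    n = X.shape[0]
    W = np.zeros((n, n))
    nn = np.argsort(D2, axis=1)[:, 1:k + 1]                 # k nearest neighbours, self excluded
    rows = np.repeat(np.arange(n), k)
    W[rows, nn.ravel()] = np.exp(-D2[rows, nn.ravel()] / (2.0 * sigma ** 2))
    W = np.maximum(W, W.T)                                  # symmetrize
    return np.diag(W.sum(axis=1)) - W

def ss_elm_weights(H, Y_l, L, C0=1.0, lam=0.01):
    """SS-ELM output weights, Eq. (17); labeled rows of H come first, Y_l is 0/1 one-hot."""
    n, nh = H.shape
    l = Y_l.shape[0]
    Y_tilde = np.zeros((n, Y_l.shape[1])); Y_tilde[:l] = Y_l
    class_size = Y_l.sum(axis=0) @ Y_l.T                    # N_{t_i} for every labeled pattern
    c = np.zeros(n); c[:l] = C0 / class_size                # per-pattern penalties C_i = C0 / N_{t_i}
    HtC = H.T * c                                           # H^T C with C diagonal
    A = np.eye(nh) + HtC @ H + lam * H.T @ L @ H
    return np.linalg.solve(A, HtC @ Y_tilde)

def us_elm_embedding(H, L, n_o=2, lam=0.1):
    """US-ELM embedding E = H beta, Eqs. (21)-(23); assumes n_h <= N."""
    nh = H.shape[1]
    A = np.eye(nh) + lam * H.T @ L @ H
    B = H.T @ H + 1e-10 * np.eye(nh)                        # small jitter keeps B positive definite
    _, V = eigh(A, B)                                       # generalized eigenvectors, ascending order
    V = V[:, 1:n_o + 1]                                     # drop the trivial first eigenvector
    V = V / np.linalg.norm(H @ V, axis=0)                   # normalize so that ||H v_i|| = 1
    return H @ V
```

For the clustering task, k-means (e.g. scipy.cluster.vq.kmeans2) can then be run on the rows of the returned embedding, as in Step 5 of Algorithm 2.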
The differences of the three types of ELMs lie in the second stage on how the output weights are computed.In supervised ELM and SS-ELM,the output weights are trained by solving a regularized least squares problem;while the output weights in the US-ELM are obtained by solving a generalized eigenvalue problem.The unified framework for the three types of ELMs might provide new perspectives to further develop the ELM theory.VII.E XPERIMENTAL RESULTSWe evaluated our algorithms on wide range of semi-supervised and unsupervised parisons were made with related state-of-the-art algorithms, e.g.,Transductive SVM(TSVM)[54],LapSVM[47]and LapRLS[47]for semi-supervised learning;and Laplacian Eigenmap(LE)[50], spectral clustering(SC)[51]and deep autoencoder(DA)[55] for unsupervised learning.All algorithms were implemented using Matlab R2012a on a2.60GHz machine with4GB of memory.TABLE IIIT RAINING TIME(IN SECONDS)COMPARISON OF TSVM,L AP RLS,L AP SVM AND SS-ELMData set TSVM LapRLS LapSVM SS-ELMG50C0.3240.0410.0450.035COIL20(B)16.820.5120.4590.516USPST(B)68.440.9210.947 1.029COIL2018.43 5.841 4.9460.814USPST68.147.1217.259 1.373A.Semi-supervised learning results1)Data sets:We tested the SS-ELM onfive popular semi-supervised learning benchmarks,which have been widely usedfor evaluating semi-supervised algorithms[52],[56],[57].•The G50C is a binary classification data set of which each class is generated by a50-dimensional multivariate Gaus-sian distribution.This classification problem is explicitlydesigned so that the true Bayes error is5%.•The Columbia Object Image Library(COIL20)is a multi-class image classification data set which consists1440 gray-scale images of20objects.Each pattern is a32×32 gray scale image of one object taken from a specific view.The COIL20(B)data set is a binary classification taskobtained from COIL20by grouping thefirst10objectsas Class1,and the last10objects as Class2.•The USPST data set is a subset(the testing set)of the well known handwritten digit recognition data set USPS.The USPST(B)data set is a binary classification task obtained from USPST by grouping thefirst5digits as Class1and the last5digits as Class2.2)Experimental setup:We followed the experimental setup in[57]to evaluate the semi-supervised algorithms.Specifi-cally,each of the data sets is split into4folds,one of which was used for testing(denoted by T)and the rest3folds for training.Each of the folds was used as the testing set once(4-fold cross-validation).As in[57],this random fold generation process were repeated3times,resulted in12different splits in total.Every training set was further partitioned into a labeled set L,a validation set V,and an unlabeled set U.When we train a semi-supervised learning algorithm,the labeled data from L and the unlabeled data from U were used.The validation set which consists of labeled data was only used for model selection,i.e.,finding the optimal hyperparameters C0andλin the SS-ELM algorithm.The characteristics of the data sets used in our experiment are summarized in Table I. 
The training of SS-ELM consists of two stages:1)generat-ing the random hidden layer;and2)training the output weights using(17)or(18).In thefirst stage,we adopted the Sigmoid function for nonlinear mapping,and the input weights and biases were generated according to the uniform distribution on(-1,1).The number of hidden neurons n h wasfixed to 1000for G50C,and2000for the rest four data sets.In the second stage,wefirst need to build the graph Laplacian L.We followed the methods discussed in[52]and[57]to compute L,and the hyperparameter settings can be found in[47],[52] and[57].The trade off parameters C andλwere selected from。

The principle of the L1 regularization term in sparse autoencoders


A sparse autoencoder is an unsupervised neural network model used to learn a compact representation of data.

Its goal is to capture the important features of the data by learning a sparse representation of the input.

In a sparse autoencoder, the L1 regularization term is used to push the encoder to produce a sparse code.

Let me now explain how the L1 regularization term works.

Adding an L1 regularization term means adding an L1-norm penalty to the loss function; in a sparse autoencoder the penalty is placed on the code h produced by the encoder.

In a sparse autoencoder, the L1 term is incorporated by minimizing a loss function of the following form:
L(W, b) = ||x − decode(encode(x))||^2 + λ||h||_1

Here W denotes the encoder and decoder weights, b the biases, x the input data, encode and decode the encoder and decoder functions, h the encoder output (the code), and λ a hyperparameter that controls the degree of sparsity.
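A minimal NumPy sketch of this loss, assuming a single-layer ReLU encoder and a linear decoder (the activation choices and shapes are illustrative, not prescribed by the text above):

```python
import numpy as np

def sparse_ae_loss(x, W_enc, b_enc, W_dec, b_dec, lam=1e-3):
    """Reconstruction error plus L1 penalty on the code h, matching the loss above."""
    h = np.maximum(0.0, x @ W_enc + b_enc)        # encoder output h = encode(x) (ReLU, assumed)
    x_hat = h @ W_dec + b_dec                     # decode(h) with a linear decoder (assumed)
    reconstruction = np.sum((x - x_hat) ** 2)     # ||x - decode(encode(x))||^2
    sparsity = lam * np.sum(np.abs(h))            # λ ||h||_1
    return reconstruction + sparsity
```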

The idea behind the L1 term is that, by sparsifying the code, it pushes the model to learn a sparse representation of the data.

During training, the L1 penalty drives many code activations to zero, which amounts to selecting a small set of active features and enforcing sparsity.

This means that only a few neurons play a significant role in encoding a given input, while the activations of the remaining neurons are compressed to values near zero, giving a highly compressed, sparse representation of the data's features.

In summary, the L1 regularization term works by penalizing the L1 norm, pushing the sparse autoencoder to learn a sparse representation of the data and thereby capture and compress the data's features efficiently.

Such a sparse representation helps improve the model's generalization ability and robustness to noise, and it also makes the structure and features of the data easier to understand.

Transfer models in deep learning


• Objective: source domain classification loss + domain distance loss (see the sketch after this list)
• Where to adapt and which distance is minimized:
  – Tzeng et al. 2014: one specific layer; distance between marginal distributions
  – Long et al. 2015: multiple layers; distance between marginal distributions
• General architecture: a Siamese architecture with tied layers and adaptation layers; a source classifier is trained on the source input while the target input passes through the shared network, and the domain distance between the two streams is minimized
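A minimal sketch of this objective, assuming a single adaptation layer and a linear-kernel MMD estimate of the domain distance (in the spirit of Tzeng et al. 2014); the function and variable names are illustrative:

```python
import numpy as np

def mmd_linear(feat_s, feat_t):
    """Squared distance between the mean source and target activations of one adaptation layer
    (a simple linear-kernel MMD estimate of the domain distance)."""
    return float(np.sum((feat_s.mean(axis=0) - feat_t.mean(axis=0)) ** 2))

def transfer_objective(logits_s, y_s, feat_s, feat_t, lam=1.0):
    """Source-domain classification loss + lam * domain distance loss."""
    z = logits_s - logits_s.max(axis=1, keepdims=True)       # softmax cross-entropy on source labels
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    cls_loss = -log_p[np.arange(len(y_s)), y_s].mean()
    return cls_loss + lam * mmd_linear(feat_s, feat_t)
```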
Unsupervised Deep Transfer Learning
• GFK: Geodesic Flow Kernel: Gong, Boqing, Yuan Shi, Fei Sha, and Kristen Grauman. "Geodesic flow kernel for unsupervised domain adaptation." In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pp. 2066-2073. IEEE, 2012.
Multimodal learning and transfer learning
Multimodal Transfer Deep Learning with Applications in Audio-Visual Recognition, Seungwhan Moon, Suyoun Kim, Haohan Wang, arXiv:1412.3121

A feature extraction and classification method for multi-pose crop pests based on machine vision (Li Wenyong)



(( I R (i, j ) I MRGB (i, j ))2 ( I G (i, j ) I MRGB (i, j ))2 ( I B (i, j ) I MRGB (i, j )) 2 )
15 N PW
( i , j )PW

...the size and pose of 23 individual moths were normalized to enhance feature extraction, and a digital recognition system was then used to automatically identify live moths [5]. Wang et al. developed an automatic insect-image recognition system at the order level, collecting images of 225 insect species from 9 orders; the insects were positioned manually, and incomplete or overlapping specimens were removed to facilitate automatic feature extraction [6]. Qiu Daoyin et al. designed a machine-vision-based pest detection system that automatically traps pests and adjusts their pose, extracts features such as perimeter and invariant moments, and classifies 9 common pest species with a neural-network classifier [7]. Wen et al. [8-9] recognized orchard pests from images using both global and local features, and pointed out that pests occur in multiple poses, which increases the difficulty of recognition. Lü Jun et al. [10-11] studied a template-matching method for recognizing multiple light-trapped rice pests in two poses (dorsal and ventral), but pests in the field also show other pose variations (dorsal or ventral trunk, differently extended wings, tilted bodies). On the pattern-recognition side, support vector machines (SVM) have been widely applied to agricultural image analysis and processing in recent years [12-15], and they are more effective than artificial neural networks (ANN) especially when the sample set is small [16-17]. Moreover, for multi-class recognition problems, a multi-class support vector machine (MSVM) can be built on top of standard binary SVMs to classify multiple target classes. In summary, most existing studies are based on pest specimens...
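As a sketch of the MSVM idea mentioned above (a multi-class SVM assembled from standard binary SVMs), the following uses scikit-learn's one-vs-one wrapper; the feature values and class labels are random placeholders, and this is not the implementation of any of the cited studies:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.multiclass import OneVsOneClassifier

# Placeholder pest feature vectors (e.g. perimeter, invariant moments, color statistics)
# and integer class labels for 9 pest species; real data would come from the imaging pipeline.
X = np.random.rand(60, 8)
y = np.random.randint(0, 9, size=60)

# MSVM: a multi-class SVM built from pairwise (one-vs-one) binary SVMs.
msvm = OneVsOneClassifier(SVC(kernel="rbf", C=10.0, gamma="scale"))
msvm.fit(X, y)
predictions = msvm.predict(X)
```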

Inkjet-printed high-mobility indium-zinc-tin-oxide thin-film transistors


赵泽贤;徐萌;彭聪;张涵;陈龙龙;张建华;李喜峰
Journal: Acta Physica Sinica (《物理学报》)
Year (Volume), Issue: 2024, 73(12)
Abstract: Indium-zinc-tin-oxide (IZTO) semiconductor films were prepared by an inkjet-printing process and applied to bottom-gate, top-contact thin-film transistors (TFTs). The influence of the ink's solvent composition and solute concentration on the printed film profile was studied. The results show that, in a binary-solvent IZTO ink, the ethylene glycol component effectively balances the inward Marangoni backflow of solute against the outward capillary flow, avoiding the coffee-ring deposition profile caused by unbalanced solute flow in single-solvent inks and yielding uniform, flat film profiles with good contact characteristics (contact resistance of 820 Ω). The optimized IZTO TFT devices reach a saturation mobility of 16.6 cm^2/(V·s), a threshold voltage of 0.84 V, an on/off ratio as high as 3.74×10^9, and a subthreshold swing of 0.24 V/dec. A gelation model of the printed film is used to explain the relation between the ink's solvent composition, solute concentration, and the final film morphology.
Pages: 8 (pp. 377-384)
Authors: 赵泽贤; 徐萌; 彭聪; 张涵; 陈龙龙; 张建华; 李喜峰
Affiliations: School of Materials Science and Engineering, Shanghai University; Shanghai University
Language: Chinese
CLC number: TN3

2011 Latest JCR Journal Partition Table


Columns: journal abbreviation | full journal title | ISSN | major category | partition within the category (1-4) | TOP journal (Y/N) | 2010 impact factor. Sample rows:

ANNU REV MAR SCI | Annual Review of Marine Science | 1941-1405 | Geosciences (地学) | 1 | Y | 15.000
ATMOS CHEM PHYS | Atmospheric Chemistry and Physics | 1680-7316 | Geosciences (地学) | 1 | Y | 5.309
B AM METEOROL SOC | Bulletin of the American Meteorological Society | 0003-0007 | Geosciences (地学) | 1 | Y | 5.078
CLIM DYNAM | Climate Dynamics | 0930-7575 | Geosciences (地学) | 1 | Y | 3.843
EARTH-SCI REV | Earth-Science Reviews | 0012-8252 | Geosciences (地学) | 1 | Y | 5.833
NAT GEOSCI | Nature Geoscience | 1752-0894 | Geosciences (地学) | 1 | Y | 10.392
REV GEOPHYS | Reviews of Geophysics | 8755-1209 | Geosciences (地学) | 1 | Y | 9.538

[Remaining entries: several hundred additional journals in the Geosciences (地学), Geosciences & Astronomy (地学天文), and Engineering & Technology (工程技术) categories, partitions 1-4, with ISSNs and 2010 impact factors.]

基于多层特征嵌入的单目标跟踪算法

1. 内容描述
基于多层特征嵌入的单目标跟踪算法是一种在计算机视觉领域中广泛应用的跟踪技术。

该算法的核心思想是通过多层特征嵌入来提取目标物体的特征表示,并利用这些特征表示进行目标跟踪。

该算法首先通过预处理步骤对输入图像进行降维和增强,然后将降维后的图像输入到神经网络中,得到不同层次的特征图。

通过对这些特征图进行池化操作,得到一个低维度的特征向量。

将这个特征向量输入到跟踪器中,以实现对目标物体的实时跟踪。
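下面给出一段示意性的 Python(PyTorch)代码,用来说明上文"多层特征提取—池化为低维特征向量—与模板比较"的基本流程。需要说明的是,这只是在若干假设下的最小化草图,并非原文算法:骨干网络选用 torchvision 的 ResNet-18、各阶段特征采用全局平均池化、候选块与模板用余弦相似度比较,均为本示例的假设。

import torch
import torch.nn.functional as F
import torchvision.models as models

# 示意:用 ResNet-18 各阶段特征构成多层特征嵌入(骨干与池化方式均为假设)
backbone = models.resnet18().eval()

def multi_level_embedding(patch):
    """patch: (1, 3, H, W) 归一化图像张量,返回单位化的一维嵌入向量。"""
    feats = []
    x = backbone.conv1(patch)
    x = backbone.bn1(x)
    x = backbone.relu(x)
    x = backbone.maxpool(x)
    for stage in (backbone.layer1, backbone.layer2, backbone.layer3, backbone.layer4):
        x = stage(x)
        # 全局平均池化:把每一层特征图压成低维向量
        feats.append(F.adaptive_avg_pool2d(x, 1).flatten(1))
    emb = torch.cat(feats, dim=1)      # 拼接多层特征
    return F.normalize(emb, dim=1)     # 单位化,便于余弦相似度比较

@torch.no_grad()
def track_step(template_emb, candidate_patches):
    """返回与模板嵌入最相似的候选块下标(即当前帧的跟踪结果)。"""
    sims = [F.cosine_similarity(template_emb, multi_level_embedding(c)).item()
            for c in candidate_patches]
    return int(max(range(len(sims)), key=sims.__getitem__))

实际跟踪器通常还需要搜索区域约束、在线模板更新与尺度处理,此处从略。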

为了提高单目标跟踪算法的性能,本研究提出了一种基于多层特征嵌入的方法。

该方法首先引入了一个自适应的学习率策略,使得神经网络能够根据当前训练状态自动调整学习率。

通过引入注意力机制,使得神经网络能够更加关注重要的特征信息。

为了进一步提高跟踪器的鲁棒性,本研究还采用了一种多目标融合的方法,将多个跟踪器的结果进行加权融合,从而得到更加准确的目标位置估计。

通过实验验证,本研究提出的方法在多种数据集上均取得了显著的性能提升,证明了其在单目标跟踪领域的有效性和可行性。

1.1 研究背景随着计算机视觉和深度学习技术的快速发展,目标跟踪在许多领域(如安防、智能监控、自动驾驶等)中发挥着越来越重要的作用。

单目标跟踪(SOT)算法是一种广泛应用于视频分析领域的技术,它能够实时跟踪视频序列中的单个目标物体,并将其位置信息与相邻帧进行比较,以估计目标的运动轨迹。

传统的单目标跟踪算法在处理复杂场景、遮挡、运动模糊等问题时表现出较差的鲁棒性。

为了解决这些问题,研究者们提出了许多改进的单目标跟踪算法,如基于卡尔曼滤波的目标跟踪、基于扩展卡尔曼滤波的目标跟踪以及基于深度学习的目标跟踪等。

这些方法在一定程度上提高了单目标跟踪的性能,但仍然存在一些局限性,如对多目标跟踪的支持不足、对非平稳运动的适应性差等。

开发一种既能有效跟踪单个目标物体,又能应对多种挑战的单目标跟踪算法具有重要的理论和实际意义。

1.2 研究目的本研究旨在设计一种基于多层特征嵌入的单目标跟踪算法,以提高目标跟踪的准确性和鲁棒性。

Regularization Paths for Generalized Linear Models via Coordinate Descent
Jerome Friedman
Stanford University
Trevor Hastie
Stanford University
Rob Tibshirani
Stanford University
Abstract
We develop fast algorithms for estimation of generalized linear models with convex penalties. The models include linear regression, two-class logistic regression, and multinomial regression problems while the penalties include ℓ1 (the lasso), ℓ2 (ridge regression) and mixtures of the two (the elastic net). The algorithms use cyclical coordinate descent, computed along a regularization path. The methods can handle large problems and can also deal efficiently with sparse features. In comparative timings we find that the new algorithms are considerably faster than competing methods.
Journal of Statistical Software (JSS), January 2010, Volume 33, Issue 1.
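A minimal sketch of the cyclical coordinate descent update for the lasso that the abstract refers to (this is not the glmnet implementation itself; the (1/2n) least-squares scaling, the toy data and the fixed lambda below are assumptions made only for illustration): each coefficient is updated in turn by soft-thresholding its partial-residual correlation.

import numpy as np

def soft_threshold(z, gamma):
    """Soft-thresholding operator S(z, gamma) = sign(z) * max(|z| - gamma, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def lasso_cd(X, y, lam, n_iter=100):
    """Cyclical coordinate descent for (1/(2n))||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual with feature j removed from the current fit
            r_j = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r_j / n
            beta[j] = soft_threshold(rho, lam) / (X[:, j] @ X[:, j] / n)
    return beta

# toy usage with made-up data and lambda
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
X = (X - X.mean(axis=0)) / X.std(axis=0)     # standardize columns
y = 3.0 * X[:, 0] + 0.1 * rng.standard_normal(50)
print(lasso_cd(X, y, lam=0.1)[:3])

A production implementation would also add convergence checks and warm starts along a decreasing sequence of lambda values, which is where most of glmnet's speed comes from.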

文献检索格式[整理]

参考文献(即引文出处)的类型以单字母方式标识:M——专著,C——论文集,N——报纸文章,J——期刊文章,D——学位论文,R——报告,S——标准,P——专利;对于不属于上述的文献类型,采用字母“Z”标识。

参考文献的格式要求很多,一般来说参考文献的格式都要符合国标GB7714-87《文后参考文献著录规则》,但实际中很多出版社和期刊对论文的要求也不尽相同。

发现周围的很多人对论文参考文献的规范格式不是很清楚,所以把规范格式贴出来。

参考文献著录格式及示例
1 专著著录格式
[序号]著者.书名[M].版本(第一版不写).出版地:出版者,出版年.起止页码
例:[1]孙家广,杨长青.计算机图形学[M].北京:清华大学出版社,1995.26~28
Sun Jiaguang, Yang Changqing. Computer graphics[M]. Beijing: Tsinghua University Press, 1995.26~28(in Chinese)
例:[2]Skolink M I. Radar handbook[M]. New York: McGraw-Hill, 1990
2 期刊著录格式
[序号]作者.题名[J].刊名,出版年份,卷号(期号):起止页码
例:[3]李旭东,宗光华,毕树生,等.生物工程微操作机器人视觉系统的研究[J].北京航空航天大学学报,2002,28(3):249~252
Li Xudong, Zong Guanghua, Bi Shusheng, et al. Research on global vision system for bioengineering-oriented micromanipulation robot system[J]. Journal of Beijing University of Aeronautics and Astronautics, 2002,28(3):249~252(in Chinese)
3 论文集著录格式
[序号]作者.题名[A].见(英文用In):主编.论文集名[C].出版地:出版者,出版年.起止页码
例:[4]张佐光,张晓宏,仲伟虹,等.多相混杂纤维复合材料拉伸行为分析[A].见:张为民编.第九届全国复合材料学术会议论文集(下册)[C].北京:世界图书出版公司,1996.410~416
例:[5]Odoni A R. The flow management problem in air traffic control[A]. In: Odoni A R, Szego G, eds. Flow Control of Congested Networks[C]. Berlin: Springer-Verlag, 1987.269~298
4 学位论文著录格式
[序号]作者.题名[D].保存地点:保存单位,年
例:[6]金宏.导航系统的精度及容错性能的研究[D].北京:北京航空航天大学自动控制系,1998
5 科技报告著录格式
[序号]作者.题名[R].报告题名及编号,出版年
例:[7]Kyungmoon Nho. Automatic landing system design using fuzzy logic[R]. AIAA-98-4484, 1998
6 国际或国家标准著录格式
[序号]标准编号,标准名称[S]
例:[8]GB/T 16159-1996,汉语拼音正词法基本规则[S]
7 专利著录格式
[序号]专利所有者.专利题名[P].专利国别:专利号,出版日期
例:[9]姜锡洲.一种温热外敷药制备方案[P].中国专利:881056073,1989-07-06
8 电子文献著录格式
[序号]作者.题名[电子文献/载体类型标识].电子文献的出处或可获得地址,发表或更新日期/引用日期
例:[10]王明亮.关于中国学术期刊标准化数据系统工程的进展[EB/OL]./pub/wm1.txt...8-16/1998-10-04
说明:
①参考文献应是公开出版物,按在论著中出现的先后用阿拉伯数字连续排序.
②参考文献中外国人名书写时一律姓前,名后,姓用全称,名可缩写为首字母(大写),不加缩写点(见例2).
③参考文献中作者为3人或少于3人应全部列出,3人以上只列出前3人,后加"等"或"et al"(见例3).
④在著录中文参考文献时应提供英文著录,见例1、例3.
⑤参考文献类型及其标识见表1,电子参考文献类型及其标识见表2.
⑥电子文献的载体类型及其标识为:磁带——MT,磁盘——DK,光盘——CD,联机网络——OL.
表1 参考文献类型及文献类型标识
参考文献类型:专著(M)、论文集(C)、报纸文章(N)、期刊文章(J)、学位论文(D)、报告(R)、标准(S)、专利(P)
表2 电子参考文献类型及其标识
电子参考文献类型:数据库(DB)、计算机程序(CP)、电子公告(EB)
科技期刊论文的参考文献
1 参考文献的功能与作用
(1)参考文献是科技论文的重要组成部分,它不仅能为作者的论点提供有力的论据,而且可以精练文字节约篇幅,增加论文的信息量,具有很高的信息价值。

一种多级再分类技术耕地提取方法

一种多级再分类技术耕地提取方法陈磊;周询;陈明叶【摘要】鉴于土地利用中耕地类型的遥感光谱特征差异大,以及我国北方农牧交错带中撂荒地、耕地、裸地和草地混淆严重,耕地信息的获取难度大、精度低,提出了利用长时间序列遥感数据,通过多级再分类技术方法(multilevel reclassification,MLRC)提取可耕种区域.首先利用最大似然法对长时间序列的多期遥感数据进行监督分类,提取出耕地区域,之后在初级分类的基础上,通过统计不同区域在多期分类结果中被判定为耕地的次数进而确定可耕作区域的范围.通过对闪电河湿地实验区的研究表明,利用MLRC方法的精度达到了82.56%.%Due to the difference of the remote sensing spectral characteristics of the cultivated land,and the serious confusion with the abandoned land,cultivated land,grassland and bare land,it is more difficult to get the accurate area of the cultivated land by using remote sensing technology.This paper proposes a multi-level reclassification (MLRC)method to extract the arable area by using long time series remote sensing data.In this method,we use the maximum likelihood method to classify the long time series remote sensing data,and to extract the cultivated land area.After that,we calculate the number of the pixel belonging to cultivated land in the primary classification results as the threshold to extract the arable areas.The study in Shandian River shows that the accuracy of the MLRC method reaches 82.56%.【期刊名称】《遥感信息》【年(卷),期】2017(032)004【总页数】6页(P120-125)【关键词】多级再分类技术;长时间序列;最大似然法;监督分类;可耕种区域【作者】陈磊;周询;陈明叶【作者单位】北京师范大学地理科学学部,北京100875;北京师范大学全球变化与地球系统科学研究院,北京100875;北京师范大学地理科学学部,北京100875;河北农业大学林学院,河北保定071000【正文语种】中文【中图分类】TP753耕地作为重要的土地利用类型,在粮食问题等方面具有极其关键的作用[1-2]。
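下面用一段示意性的 Python 代码说明上文"多级再分类(MLRC)"的核心步骤(仅为本文给出的草图,并非原文代码;类别编码 CROPLAND=1、投票阈值 min_count 等均为假设):先对多期影像分别做监督分类得到逐年分类图,再逐像元统计被判为耕地的次数,次数达到阈值的像元即认定为可耕种区域。

import numpy as np

CROPLAND = 1  # 假设:各年度分类图中耕地的类别编码

def arable_area(yearly_class_maps, min_count):
    """yearly_class_maps: 各年度监督分类(如最大似然法)得到的二维类别图列表;
    某像元在至少 min_count 个年度被判为耕地,则认定其属于可耕种区域。"""
    count = np.zeros_like(yearly_class_maps[0], dtype=np.int32)
    for cls_map in yearly_class_maps:
        count += (cls_map == CROPLAND).astype(np.int32)
    return count >= min_count   # 可耕种区域的布尔掩膜

# 玩具示例:5 个年度的 4x4 分类图,要求至少 3 次被判为耕地
maps = [np.random.randint(0, 3, size=(4, 4)) for _ in range(5)]
print(arable_area(maps, min_count=3).sum(), "个可耕种像元")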

基于双层路由注意力及特征融合的细粒度图像分类

基于双层路由注意力及特征融合的细粒度图像分类沈宇麒;崔衍【期刊名称】《计算机技术与发展》【年(卷),期】2024(34)6【摘要】近年来,视觉Transformer(Vision Transformer,ViT)在图像识别领域取得了突破性进展,其自注意力机制能够从图像中提取出不同像素块的判别性标记信息,进而提升图像分类的精度。

在图像分类领域中,细粒度图像分类具有类与类之间的特征差距小、类内的特征差距大的特点,从而导致了分类困难。

针对细粒度图像分类中数据分布具有小型、非均匀和难以发现类与类之间的差异等特征,提出一种基于双层路由注意力(Bi-level Routing Attention,BRA)的细粒度图像分类模型。

基准骨干网络采用多阶段层级架构设计的新型视觉Transformer模型作为视觉特征提取器,从中获得局部信息和全局信息以及多尺度的特征。

同时引入特征增强、融合模块,以此提高网络对关键特征的学习能力。

实验结果表明,该模型在CUB-200-2011和Stanford Dogs这两个细粒度图像数据集上的分类精度分别达到了91.7%和92.2%,相较于多个主流细粒度图像分类模型,该模型具有更好的分类结果。

【总页数】6页(P23-28)【作者】沈宇麒;崔衍【作者单位】南京邮电大学物联网学院【正文语种】中文【中图分类】TP391.41【相关文献】1.基于多尺度特征融合与反复注意力机制的细粒度图像分类算法2.基于注意力自身线性融合的弱监督细粒度图像分类算法3.基于注意力特征融合的SqueezeNet细粒度图像分类模型4.基于自适应特征融合的小样本细粒度图像分类5.一种新的基于通道-空间融合注意力及SwinT的细粒度图像分类算法因版权原因,仅展示原文概要,查看原文内容请购买。
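作为对上文摘要的补充,下面给出一个"先对多阶段特征做注意力增强、再融合分类"的简化 PyTorch 示例。注意:这里用 SE 式通道注意力代替原文的双层路由注意力(BRA),模块结构、通道数与压缩比均为假设,仅用于说明特征增强与融合的一般做法,并非原文模型。

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """SE 式通道注意力,此处仅作为特征增强模块的替代示意。"""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                      # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))        # 全局平均池化后得到每通道权重
        return x * w[:, :, None, None]

class FuseHead(nn.Module):
    """对两个不同阶段的特征分别增强,再拼接融合并分类。"""
    def __init__(self, c_local, c_global, num_classes):
        super().__init__()
        self.att_l = ChannelAttention(c_local)
        self.att_g = ChannelAttention(c_global)
        self.cls = nn.Linear(c_local + c_global, num_classes)

    def forward(self, f_local, f_global):
        f_local = self.att_l(f_local).mean(dim=(2, 3))
        f_global = self.att_g(f_global).mean(dim=(2, 3))
        return self.cls(torch.cat([f_local, f_global], dim=1))

# 用法示意:logits = FuseHead(256, 512, 200)(torch.randn(2, 256, 28, 28), torch.randn(2, 512, 14, 14))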

不同坐标系下六相PMSM_单相开路容错MPC_控制

第45卷 第3期 包 装 工 程2024年2月PACKAGING ENGINEERING ·165·收稿日期:2023-10-23基金项目:国家自然科学基金(5200070339);电磁能技术全国重点实验室资助课题(6142217210301);湖北省教育厅科学技术研究计划重点项目(D2*******) 不同坐标系下六相PMSM 单相开路容错MPC 控制袁凯1,蒋云昊1,袁雷1*, 郭勇2,丁怡丹1(1.湖北工业大学 太阳能高效利用及储能运行控制湖北省重点实验室,武汉 430068;2. 91184部队舰船保障室,青岛 266071)摘要:目的 目前六相永磁同步电机单相开路故障的模型预测容错控制的研究已逐步成为热点,本文将对α-β和d-q 2种坐标系控制下的故障机理进行对比分析,并对比不同坐标系中下正常和故障容错运行模型的控制效果。

方法 基于矢量空间解耦坐标变换矩阵不变原理,对A 相开路进行故障模型的理论计算分析,分别在α-β和d-q 这2种不同坐标系中对其进行模型预测控制容错建模。

最后在MATLAB/Simulink 中对2种坐标系下的电机正常运行和故障容错运行中的工作性能采用相同电机参数进行实时仿真。

结果 仿真结果显示,正常运行时,2种坐标系下总谐波失真(THD )值分别为 2.09%和2.77%;故障运行时,d-q 坐标系下的THD 值比α-β坐标系小了13.15%;容错运行时2种坐标系下的THD 值分别为1.19%和1.79%。

结论 从仿真结果可以看出,d-q 坐标系控制下的电机在故障时具有更稳定的性能,而在正常和容错运行状态下,2种坐标系下的控制效果几乎等效。

关键词:六相永磁同步电机;模型预测电流;矢量空间解耦;开路故障分析;容错控制
中图分类号:TB486.3 文献标志码:A 文章编号:1001-3563(2024)03-0165-11 DOI:10.19554/ki.1001-3563.2024.03.019
Single-phase Open Fault-tolerant MPC Control for Six-phase PMSM in Different Coordinate Systems
YUAN Kai1, JIANG Yunhao1, YUAN Lei1*, GUO Yong2, DING Yidan1 (1. Hubei Collaborative Innovation Center for High-efficiency Utilization of Solar Energy, Hubei University of Technology, Wuhan 430068, China; 2. 91184 Troop Ship Support Office, Qingdao 266071, China)
ABSTRACT: At present, the model predictive fault-tolerant control of single-phase open fault of six-phase permanent magnet synchronous motor has gradually become a hot topic. The work aims to comparatively analyze the fault mechanism under α-β and d-q coordinate system control and compare the control effect of normal and fault-tolerant operating models in different coordinate systems. Based on the vector space decoupling coordinate transformation matrix invariant principle, the A-phase open fault model was theoretically calculated and analyzed, and the model predictive control fault-tolerant modeling was carried out in two different coordinate systems, α-β and d-q respectively. Finally, in MATLAB/Simulink, the same motor parameters were used for real-time simulation of the normal operation and fault-tolerant operation of the motor in the two coordinate systems. The simulation results showed that under normal operation, the THD was 2.09% and 2.77% respectively. In fault operation, the THD in the d-q coordinate system was 13.15% smaller than that in the α-β coordinate system. In fault-tolerant operation, THD was 1.19% and 1.79% respectively in the two coordinate systems. It can be seen from the simulation results that the motor controlled by d-q coordinate system has more stable performance when fault occurs, and the control effect under normal and fault-tolerant operation conditions is almost equivalent.
KEY WORDS: six-phase permanent magnet synchronous motor; model predictive current; vector space decoupling; open fault analysis; fault-tolerant control
永磁同步电机(Permanent Magnet Synchronous Motor, PMSM)驱动系统多用于包装产业的自动化生产线中,尤其是食品加工链等一些具有复杂包装工艺的场景应用更为广泛[1-2]。
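下面给出有限集模型预测电流控制(FCS-MPC)在 d-q 坐标系下"单步预测 + 代价评估"的极简 Python 示意。需要强调:这只是原理草图,仅考虑一个 d-q 子空间,忽略了六相电机的谐波子空间与开路故障约束;电阻、电感、磁链、采样周期以及候选电压矢量集合均为假设值,并非文中的完整容错模型。

import numpy as np

# 假设的电机参数(仅作演示):定子电阻、d/q 轴电感、永磁磁链、采样周期
Rs, Ld, Lq, psi_f, Ts = 0.5, 2e-3, 2e-3, 0.1, 1e-4

def predict_dq(i_dq, v_dq, omega_e):
    """对 d-q 轴电流做一步欧拉离散预测。"""
    i_d, i_q = i_dq
    v_d, v_q = v_dq
    di_d = (v_d - Rs * i_d + omega_e * Lq * i_q) / Ld
    di_q = (v_q - Rs * i_q - omega_e * (Ld * i_d + psi_f)) / Lq
    return np.array([i_d + Ts * di_d, i_q + Ts * di_q])

def mpc_select(i_dq, i_ref, omega_e, v_candidates):
    """遍历候选电压矢量,返回使电流跟踪误差代价最小的矢量下标。"""
    costs = [np.sum((i_ref - predict_dq(i_dq, v, omega_e)) ** 2)
             for v in v_candidates]
    return int(np.argmin(costs))

# 玩具示例:9 个假设的 d-q 候选电压组合,目标 i_d=0、i_q=5 A
cands = [np.array([vd, vq]) for vd in (-100.0, 0.0, 100.0) for vq in (-100.0, 0.0, 100.0)]
best = mpc_select(np.array([0.0, 4.0]), np.array([0.0, 5.0]), 300.0, cands)
print("选中的电压矢量:", cands[best])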

AR机器视觉分拣系统的设计与实现

信息与电脑(China Computer & Communication)2021年第5期
AR机器视觉分拣系统的设计与实现
黄萍 巫钊 张智林(玉林师范学院物理与电信工程学院,广西玉林 537000)
摘要:随着自动化的快速发展以及社会的不断进步,需要分拣的货物越来越多,这就要求现实的货物分拣应具有快速、实时、高精度、不接触等特点。基于此,笔者设计了基于OpenCV的机器视觉分拣系统,是机器视觉技术在机械制造自动化系统中的有效应用。该设计的系统硬件由STM32F767单片机、OV7725工业摄像头、滚轴传送带以及分拣装置组成,其工作原理如下:摄像头不断采集信息发送至单片机,单片机通过调用OpenCV库进行图像信息数据转换、数据预处理、颜色空间分割、特征提取、颜色识别等图像处理后,识别出货物的颜色以及形状大小,然后控制分拣隔板使货物进入不同的储存柜,从而实现货物分拣。
关键词:机器视觉;STM32F767;OpenCV;货物分拣
中图分类号:TP391.41 文献标识码:A 文章编号:1003-9767(2021)05-124-03
Design and Implementation of AR Machine Vision Sorting System
HUANG Ping, WU Zhao, ZHANG Zhilin (College of Physics and Telecommunication Engineering, Yulin Normal University, Yulin Guangxi 537000, China)
Abstract: With the rapid development of automation and the continuous progress of society, more and more goods need to be sorted, which requires that the actual sorting of goods should be fast, real-time, high-precision, non-contact and other characteristics. Based on this, the author designed a machine vision sorting system based on OpenCV, which is an effective application of machine vision technology in machine manufacturing automation systems. The system hardware of this design is composed of STM32F767 single-chip microcomputer, OV7725 industrial camera, roller conveyor and sorting device. Its working principle is as follows: the camera continuously collects information and sends it to the single-chip microcomputer, and the single-chip uses the OpenCV library for image information data conversion, data preprocessing, color space segmentation, feature extraction, color recognition and other image processing, identifies the color and shape size of the goods, and then controls the sorting partition to allow the goods to enter different storage cabinets, thereby realizing the sorting of the goods.
Keywords: machine vision; STM32F767; OpenCV; goods sorting
0 引言
目前,基于OpenCV的机器视觉技术已成功应用在工业分拣领域中,大幅度提高了产品分拣的精准度和可靠性,保证了生产的速度[1]。基于OpenCV的机器视觉分拣系统相对应的电气以及控制系统也比较简单,并且OpenCV库中有很多算法,能满足绝大多数产品特征的识别需求,且运算速度较快,可移植到单片机上。
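下面给出一段用 OpenCV(Python 接口)做颜色识别与尺寸判断的简化示例,对应上文"颜色空间分割—特征提取—颜色识别"的处理流程。其中 HSV 阈值范围、最小面积阈值等均为假设值;实际系统中图像来自 OV7725 摄像头、识别结果用于控制分拣隔板,此处从略。

import cv2
import numpy as np

# 假设的 HSV 颜色阈值与最小轮廓面积
COLOR_RANGES = {
    "red":   ((0, 120, 70),  (10, 255, 255)),
    "green": ((35, 80, 60),  (85, 255, 255)),
    "blue":  ((100, 80, 60), (130, 255, 255)),
}
MIN_AREA = 500

def classify(frame_bgr):
    """返回画面中最大目标的 (颜色, 面积);未检测到目标时返回 None。"""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    best = None
    for color, (lo, hi) in COLOR_RANGES.items():
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            area = cv2.contourArea(c)
            if area >= MIN_AREA and (best is None or area > best[1]):
                best = (color, area)
    return best

# 用法示意:result = classify(cv2.imread("goods.jpg"))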

一种图像分形特征提取的近似算法
朱红;赵亦工
【期刊名称】《西安电子科技大学学报(自然科学版)》
【年(卷),期】1999(026)002
【摘要】提出了一种提取图像分形特征的近似算法.该算法对经典的"毯覆盖"算法中按不同几何度量尺度分别计算图像的"膨胀层"和"腐蚀层"进而计算不同几何度量尺度图像表面积的计算过程进行了简化,降低了利用"毯覆盖"算法提取图像分形特征的计算量和存储量,在现有硬件条件下使实时提取图像分形特征成为可能.
【总页数】3页(P243-245)
【作者】朱红;赵亦工
【作者单位】西安电子科技大学测控工程与仪器系,西安,710071;西安电子科技大学测控工程与仪器系,西安,710071
【正文语种】中文
【中图分类】TP391.4
【相关文献】
1.一种基于分形的木材细胞图像特征提取方法 [J], 任洪娥;高莹;董本志
2.基于分形理论的图像边缘特征提取算法 [J], 冯学晓;古险峰
3.基于分形理论的树皮图像特征提取方法 [J], 潘世豪;程玉柱;许正昊;谢文锴;石玲玉
4.手指静脉图像分形特征提取方法 [J], 杨金锋;李乾司茂;贾桂敏
5.基于纹理和分形的鲜茶叶图像特征提取在茶树品种识别中的应用 [J], 刘自强;周铁军;傅冬和
因版权原因,仅展示原文概要,查看原文内容请购买。
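作为背景补充,下面给出经典"毯覆盖"法估计图像分形维数的一个简化 Python 实现草图(并非原文提出的近似算法本身;灰度图、尺度范围等均为假设)。可以看到每个尺度都要重新计算一次"膨胀层"和"腐蚀层",这正是原文试图简化的计算量来源。

import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def blanket_dimension(img, max_eps=10):
    """经典毯覆盖法:由不同尺度下的表面积估计灰度图像的分形维数。"""
    u = img.astype(np.float64).copy()   # 膨胀层(上毯)
    b = img.astype(np.float64).copy()   # 腐蚀层(下毯)
    areas = []
    for eps in range(1, max_eps + 1):
        u = np.maximum(u + 1, grey_dilation(u, size=(3, 3)))
        b = np.minimum(b - 1, grey_erosion(b, size=(3, 3)))
        areas.append(np.sum(u - b) / (2.0 * eps))   # 尺度 eps 下的表面积
    eps_arr = np.arange(1, max_eps + 1)
    # log(面积) 对 log(尺度) 的斜率约为 2 - D,故 D = 2 - 斜率
    slope = np.polyfit(np.log(eps_arr), np.log(areas), 1)[0]
    return 2.0 - slope

# 用法示意:d = blanket_dimension(np.random.rand(64, 64) * 255)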

华中科大着手研发高分辨全脑神经元网络可视化仪器

湖北新闻网武汉5月2日电 (王潇潇 杨义勇)科技发展日新月异,但大脑仍然是最大的未解之谜。

结构决定功能,脑的基本功能都依赖于神经元的聚集体或神经网络,这些精细的脑解剖结构是理解脑功能和脑疾病的基础。

为了观察全脑神经网络,至少需要在厘米大小范围内具备1 μm的三维空间分辨能力,但经典的磁共振、电镜等成像技术均不能解决大样本和高分辨的矛盾。

因为缺少合适的研究方法,人们对哺乳动物完整脑的神经元网络结构、连接关系的认识还非常匮乏。

笔者近日从华中科技大学了解到,该校正在着手研发高分辨全脑神经元网络可视化仪器,该技术将为揭示大脑奥秘做出重要贡献。

该校骆清铭教授领导的团队经过8年的攻关,在国际上率先建立了可对厘米大小样本进行突起水平精细结构三维成像,具有自主知识产权的显微光学切片断层成像系统(MOST),该研究成果曾发表于2010年第330卷第6009期的《科学》(Science)期刊上。

这是我校首次在《科学》发文,也是我国仪器设备自主开发研究在《科学》上发表的第一篇文章。

MOST技术相对于传统成像技术优势明显,创造出迄今为止最精细的小鼠全脑神经元三维连接图谱,为实现全脑网络可视化创造了必要条件。

此研究成果将在脑结构、脑功能、脑疾病,以及药物作用效果等研究中发挥非常重要的作用。

骆清铭介绍说,通过MOST技术将会更全面深入地了解大脑结构和功能,为治愈多种神经性疾病提供了重要的手段。

该成果曾在2012年初入选“2011年度中国十大科学进展”。

教育部科技司副司长雷朝滋表示:“骆教授的项目获批体现了国家的关注和支持,在欧美发达国家相继开展脑计划的背景下,该技术对于我国在脑科学研究领域的发展,以及提升我国原始创新能力方面将起到重要作用。

”据了解,国家重大科学仪器设备开发专项项目是为了贯彻落实《国家中长期科学和技术发展规划纲要(2006-2020年)》,由科技部与财政部共同设立的专项支持资金。

基于二维特征的快速分形图像编码方案
李高平
【期刊名称】《西南民族大学学报(自然科学版)》
【年(卷),期】2011(037)003
【摘要】分形图像编码方法因其具有高压缩比、重建速度快、分辨率无关性等优点使得它有很好的发展前景.但是,编码过程需要在一个海量码本中寻找每个range块的最佳匹配domain块而耗去特别多的时间,这阻碍了它的广泛应用.为了加快编码过程,提取图像块的能量和方差来构成它的二维特征,作为图像子块间的相似性度量,在range块与domain块匹配时,大多数与range块的二维特征不相近的domain块,可以在匹配前排除.因此,对一个range块,能够在较小的搜索范围内找到它的最佳匹配domain块,使编码时间极大地减小.仿真实验显示,它能够大大缩短编码时间,同时实现和全搜索分形编码方法相近的重建图像质量.
【总页数】6页(P485-490)
【作者】李高平
【作者单位】西南民族大学计算机科学与技术学院,四川成都610041
【正文语种】中文
【中图分类】TP391.41
【相关文献】
1.基于形态特征的快速分形图像编码 [J], 何传江;杨静
2.基于FGSE的快速分形图像编码算法及其多描述编码方案 [J], 刘美琴;赵耀
3.基于结构信息特征的快速分形图像编码 [J], 王强;梁德群;毕胜;张涛
4.基于图像子块特征的快速分形图像编码算法 [J], 周一鸣;张超;张曾科
5.基于双交叉和特征的快速分形图像编码研究 [J], 张璟;张爱华;汪玮玮;唐婷婷因版权原因,仅展示原文概要,查看原文内容请购买。
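下面用一段示意性的 Python 代码说明"先按二维特征(能量、方差)筛选候选 domain 块,再做匹配"的基本做法(仅为思路草图:块特征容差、匹配误差度量均为假设,且省略了分形编码中对 domain 块的抽样收缩与仿射变换搜索)。

import numpy as np

def block_features(block):
    """图像子块的二维特征:能量与方差。"""
    b = block.astype(np.float64)
    return np.array([np.mean(b ** 2), np.var(b)])

def best_match(range_block, domain_blocks, feat_tol=0.2):
    """只在二维特征与 range 块接近的 domain 块中搜索最佳匹配;
    若没有候选则退回全搜索。"""
    f_r = block_features(range_block)
    scale = np.linalg.norm(f_r) + 1e-9
    cand = [i for i, d in enumerate(domain_blocks)
            if np.linalg.norm(block_features(d) - f_r) <= feat_tol * scale]
    if not cand:
        cand = list(range(len(domain_blocks)))
    errs = {i: np.mean((domain_blocks[i].astype(np.float64) - range_block) ** 2)
            for i in cand}
    return min(errs, key=errs.get)

# 用法示意:idx = best_match(r_blk, [d1, d2, d3])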

基于图像内容的小波包数字水印算法
杨红颖;王向阳
【期刊名称】《微型机与应用》
【年(卷),期】2004(023)002
【摘要】以最优小波包变换与人眼视觉特性为基础,提出了一种基于图像内容的小波包数字水印算法,并对其透明性与鲁棒性进行了实验分析.
【总页数】3页(P51-53)
【作者】杨红颖;王向阳
【作者单位】大连辽宁师范大学计算机与信息技术学院,116029;大连辽宁师范大学计算机与信息技术学院,116029
【正文语种】中文
【中图分类】TP3
【相关文献】
1.基于图像内容的局部化自适应数字水印算法 [J], 王贤敏;关泽群;吴沉寒
2.一种基于图像内容的数字水印算法 [J], 李昌利
3.一种基于图像内容的半脆弱数字水印算法 [J], 高铁杠;顾巧论;陈增强
4.一种基于图像内容检索技术的数字视频水印算法 [J], 周支元;周素萍
5.基于图像内容敏感度分析数字水印算法 [J], 黄娜;侯刚;王国祥;孙大鑫;陈贺新因版权原因,仅展示原文概要,查看原文内容请购买。
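下面给出一个基于小波包分解、在某个子带系数中嵌入水印比特的极简示例(一般性示意,并非原文结合人眼视觉特性与最优小波包基选择的自适应算法;小波基 db2、分解层数、所选子带与嵌入强度 alpha 均为假设)。

import numpy as np
import pywt

def embed_watermark(img, bits, alpha=2.0, subband="hv"):
    """在灰度图像二级小波包分解的某个子带中,对幅值最大的系数嵌入 +/-alpha。"""
    wp = pywt.WaveletPacket2D(data=img.astype(np.float64),
                              wavelet="db2", mode="periodization", maxlevel=2)
    coeffs = wp[subband].data.copy()
    flat = coeffs.ravel()
    idx = np.argsort(np.abs(flat))[::-1][:len(bits)]   # 选幅值最大的若干系数
    for k, bit in zip(idx, bits):
        flat[k] += alpha if bit else -alpha
    wp[subband] = flat.reshape(coeffs.shape)           # 写回修改后的子带系数
    return wp.reconstruct(update=False)                # 重构得到含水印图像

# 用法示意(假设 64x64 灰度图与 32 位水印):
# marked = embed_watermark(img, np.random.randint(0, 2, 32))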

逐像素注意力驱动的红外小目标检测网络

逐像素注意力驱动的红外小目标检测网络王啸林;方厚章;李雪婷;吴辰星;王黎明【期刊名称】《西北工业大学学报》【年(卷),期】2024(42)2【摘要】红外小目标检测在军事和民用领域获得了广泛应用,但其存在目标尺度小、细节少、复杂背景干扰等问题,现有经典深度学习检测方法往往适用于通用目标检测,对红外小目标适配性较差。

针对上述问题,构建了一种新的基于U形注意力块和逐像素注意力块的红外小目标检测网络。

设计了U形注意力块,在单层级内通过局部U形子网络提取多尺度特征,并通过逐像素注意力精细化增强小目标特征,丰富多尺度小尺度目标特征表示,提升网络对小尺度目标判别能力;通过稠密融合方式进一步保留小目标信息,缓解不同层特征融合时的语义鸿沟,降低漏检率;将空间与通道2个维度逐像素注意力块应用于融合后的特征图,避免小目标特征被衰减,同时抑制复杂背景干扰。

实验结果表明,提出的网络在2个红外小目标数据集NUDT-SIRST与IRSTD-1k上交并比、检测概率、虚警率指标均超过最新基准方法。

此外,所提网络在检测精度和效率上也达到较好平衡。

【总页数】9页(P335-343)【作者】王啸林;方厚章;李雪婷;吴辰星;王黎明【作者单位】西安电子科技大学计算机科学与技术学院【正文语种】中文【中图分类】TP391【相关文献】1.基于双通道特征增强集成注意力网络的红外弱小目标检测方法2.结合跨尺度特征融合与瓶颈注意力模块的轻量型红外小目标检测网络3.基于注意力机制的红外小目标检测方法4.融合多尺度分形注意力的红外小目标检测模型5.基于梯度可感知通道注意力模块的红外小目标检测前去噪网络因版权原因,仅展示原文概要,查看原文内容请购买。
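下面给出"通道注意力 + 逐像素(空间)注意力"串联使用的一个简化 PyTorch 示例,用于说明逐像素注意力的一般形式。注意:这并非原文的 U 形注意力块或其具体网络结构,卷积核大小、压缩比等均为假设。

import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """逐像素(空间)注意力:为每个空间位置生成一个 [0,1] 权重。"""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w

class ChannelAttention(nn.Module):
    """通道注意力:为每个通道生成一个权重。"""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))
        return x * w[:, :, None, None]

class PixelAttentionBlock(nn.Module):
    """通道注意力与逐像素空间注意力串联。"""
    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        return self.sa(self.ca(x))

# 用法示意:y = PixelAttentionBlock(32)(torch.randn(2, 32, 64, 64))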

基于双分支卷积网络的玉米叶片叶绿素含量高光谱和多光谱协同反演

基于双分支卷积网络的玉米叶片叶绿素含量高光谱和多光谱协同反演王亚洲;肖志云【期刊名称】《农业机械学报》【年(卷),期】2024(55)1【摘要】针对智慧农业中叶绿素的精准预测问题,本文提出了基于双分支网络的玉米叶片叶绿素含量高光谱与多光谱协同反演的方法。

使用欠完备自编码器进行数据降维,捕捉数据中最为显著的特征,使降维后的数据可以代替原始数据进行训练,从而加快训练效率,使用双分支卷积网络将多光谱数据用于填充高光谱数据信息,充分利用高光谱数据的空间细节信息,再结合1DCNN建立玉米叶片叶绿素含量预测模型。

结果表明,与传统降维算法相比较,欠完备自编码器处理后预测结果最佳,决定系数R2为0.988,均方根误差(RMSE)为0.273,表明使用欠完备自编码器进行降维可以有效提高数据反演精度;与单一的高光谱数据反演模型和多光谱数据反演模型相比,双分支卷积网络预测模型均取得较优的预测结果,R2在0.932以上,RMSE均在1.765以下,表明基于双分支卷积网络的高光谱与多光谱图像协同反演模型可以有效地利用数据的特征;对于其他数据结合本文提及的双分支卷积网络模型进行反演,其R2均在0.905以上,RMSE均在2.149以下,表明该预测模型具有一定的普适性。

【总页数】8页(P196-202)【作者】王亚洲;肖志云【作者单位】内蒙古工业大学电力学院【正文语种】中文【中图分类】S513;S127【相关文献】1.玉米叶片叶绿素含量的高光谱反演模型探究2.基于GA-BP神经网络高光谱反演模型分析\r玉米叶片叶绿素含量3.基于双分支卷积网络的高光谱与多光谱图像协同土地利用分类4.干旱胁迫下玉米叶片叶绿素含量与含水量高光谱成像反演方法因版权原因,仅展示原文概要,查看原文内容请购买。
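下面给出"欠完备自编码器降维"这一步的简化 PyTorch 示例(示意性草图,并非原文代码;波段数、隐层与瓶颈维度、训练轮数均为假设,且不包含文中的双分支卷积网络部分):编码器把高光谱波段压缩为低维表示,训练好后用编码器输出代替原始光谱参与后续反演建模。

import torch
import torch.nn as nn

class UndercompleteAE(nn.Module):
    """欠完备自编码器:瓶颈维度远小于输入波段数。"""
    def __init__(self, n_bands=200, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_bands, 64), nn.ReLU(),
                                     nn.Linear(64, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                     nn.Linear(64, n_bands))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def train_ae(model, spectra, epochs=100, lr=1e-3):
    """spectra: (N, n_bands) 的高光谱样本张量,以重构误差训练自编码器。"""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        recon, _ = model(spectra)
        loss = loss_fn(recon, spectra)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

# 用法示意:z = UndercompleteAE()(torch.rand(8, 200))[1]   # 低维特征,供后续反演模型使用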

Wei Zhang1 , Xiangyang Xue1 , Jianping Fan2 , Xiaojing Huang1 , Bin Wu1 , Mingjie Liu1 1 School of Computer Science, Fudan University, Shanghai, China 2 Department of Computer Science, UNC-Charlotte, NC28223, USA {weizh, xyxue}@ jfan@ {hxj, wubin, mjliu}@
Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence
Multi-Kernel Multi-Label Learning with Max-Margin Concept Network

For many real-world applications, semantic richness requires multiple labels to sufficiently describe the data, thus one object (image, video, text, etc.) might be related with more than one semantic concepts simultaneously. For example, in the image annotation task, an image that shows a bird flying in the sky is associated with two labels (concepts) bird and sky at the same time. Multi-label learning deals with the data associated with more than one concepts simultaneously and has already been applied to web page classification, text categorization, image annotation, bioinformatics etc. One strategy for multi-label learning is to deem a multi-label problem with c labels as a classification problem with 2^c classes, and standard multi-class algorithms can be applied straightforwardly [Tsoumakas and Katakis, 2007]. The main drawbacks of this strategy include: 1) high cost of computation; 2) most classes might have no positive training data [Hariharan et al., 2010]. An alternative strategy for multi-label learning is to independently decompose the task into c binary classification problems, one per label [Boutell et al., 2004; Li et al., 2009]; however, it would lose the correlations between labels, which is significant to the performance of multi-label classification. For example, the concepts bird and sky often co-occur in the same image, while bird and office may seldom
co-occur. To exploit the correlations between labels, many algorithms have been introduced recently, such as CMLF (Collective Multi-Label with Features) [Ghamrawi and McCallum, 2005], M3 N(Max-Margin Markov Network) [Tasker et al., 2003], SSVM(Structural SVM ) [Tsochantaridis et al., 2004], SMML (Structured Max-Margin Learning) [Xue et al., 2010] and CML(Correlative Multi-Label framework) [Qi et al., 2007]. Another strategy is to transform multi-label learning into a ranking problem(ranking the proper labels before others for each data) [Elisseeff and Weston, 2002]. The above existing algorithms all employ the same feature extractor for different concepts and ignore the similarity diversity, which might be unsuitable for the real applications. For example, suppose that there are two images: one contains the concepts sky and bird, the other contains the concepts sky and building . These two images are similar when the concept sky is concerned; however, they are dissimilar to each other when the concept bird or building is concerned. It is well-accepted that extracting more suitable features and designing more accurate similarity functions play an essential role in achieving more precise classification [Sonnenburg et al., 2007]. With the proliferation of kernel-based methods such as SVM, kernel function or kernel matrix has been widely used to implement feature transformation or determine the data similarity. Many existing algorithms employ the same kernel for all the labels (concepts) and show that Gaussian kernel is powerful [Jebara, 2004]. However, the diverse data similarity cannot be characterized effectively by using one single kernel and multiple kernels are necessary [Tang et al., 2009; Bach et al., 2004]. To overcome the disadvantage of traditional one-kernel-fit-all setting, some algorithms learn multiple kernels for each label (concept) [Xiang et al., 2010; Rakotomamonjy et al., 2007]; however, the inter-label correlations are not leveraged sufficiently for achieving more effective multi-kernel learning. In this paper, a novel method is developed for achieving Multi-Kernel Multi-Label Learning with Max-Margin Concept Network such that inter-label dependency and similarity diversity are sufficiently leveraged at the same time. The concept network is constructed for characterizing the inter-label correlations more effectively, so that we can leverage such inter-label correlations for classifier training and enhancing the discrimination power of the classifiers significantly. The site potentials encode the feature-label associations while the