On Regularization of Diffusion-Based Direction Maps for the Tracking


Multi-source irrigation information fusion method based on fuzzy rough set and D-S evidence theory

CODEN JYIIDU    http://www.joca.cn
doi: 10.11772/j.issn.1001-9081.2013.10.2811
CLC number: TP391; S274.3    Document code: A

To handle the uncertainty in the fusion process of multi-source irrigation information, a decision fusion method combining fuzzy rough set and Dempster-Shafer (D-S) evidence theory was proposed. Fuzzy rough set theory was used to build the basic probability assignment functions, compute the degree of dependence between each irrigation factor and the irrigation decision, and construct a frame of discernment over which multiple irrigation factors are fused toward the irrigation decision; an improved D-S evidence theory was then applied to fuse the multi-source irrigation information at the decision level, improving decision accuracy and reducing the uncertainty of irrigation decisions.

Keywords: fuzzy rough set; uncertainty information; D-S evidence theory; multi-source information fusion; weight coefficient
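The Dempster combination step at the heart of such a fusion method can be sketched in a few lines. The following toy is my own illustration, not the paper's algorithm: the two "sensors", their mass assignments, and the frame {irrigate (I), do not irrigate (N)} are assumed for demonstration.

```python
# Dempster's rule of combination for two basic probability assignments (BPAs).
# Focal elements are frozensets over the frame of discernment; masses sum to 1.

def combine_dempster(m1, m2):
    """Combine two BPAs: m(A) is proportional to the sum of m1(B)*m2(C) over B∩C=A."""
    combined = {}
    conflict = 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            inter = b & c
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mb * mc
            else:
                conflict += mb * mc  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: the sources cannot be combined")
    norm = 1.0 - conflict
    return {a: v / norm for a, v in combined.items()}

# Two hypothetical evidence sources judging whether to irrigate (I) or not (N)
m_soil = {frozenset({"I"}): 0.6, frozenset({"N"}): 0.1, frozenset({"I", "N"}): 0.3}
m_weather = {frozenset({"I"}): 0.5, frozenset({"N"}): 0.2, frozenset({"I", "N"}): 0.3}
fused = combine_dempster(m_soil, m_weather)
```

After combination, the mass on "irrigate" rises above either source's individual belief, which is exactly the accuracy gain the abstract describes.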

时间分数阶扩散方程扩散系数反演问题的唯一性


Uniqueness for the inverse diffusion-coefficient problem of a time-fractional diffusion equation

WANG Bingxian; TONG Dongfu

Abstract: The inverse problem of determining the diffusion coefficient in a time-fractional parabolic equation is considered. Using the solution representation of the fractional parabolic equation, an input-output mapping is established, and by discussing its properties the uniqueness of the inverse problem is proved.

Journal: Journal of Huaiyin Normal University (Natural Science Edition)
Year (volume), issue: 2018, 17(3)
Pages: 4 (P194-197)
Keywords: parabolic equation; coefficient inversion; input-output mapping; uniqueness
Affiliation: School of Mathematical Science, Huaiyin Normal University, Huai'an, Jiangsu 223300
CLC number: O175.26

0 Introduction

In physics, convection-diffusion equations are commonly used to describe diffusion under Brownian motion and external force fields [1-2]. For anomalous diffusion, however, the classical convection-diffusion equation is inadequate, mainly because convection and transport under an external force field behave differently. The fractional convection-diffusion equation derived from the generalized continuous-time random walk (CTRW) model describes anomalous diffusion very well. Direct problems for fractional convection-diffusion equations, such as initial-value and initial-boundary-value problems, have been studied extensively: Yamamoto et al. proved existence and uniqueness of weak solutions of initial-boundary-value problems for time-fractional diffusion equations [3]; exact-solution formulas for some fractional diffusion equations are given in [4]; numerical schemes are given in [5]. Inverse problems for fractional diffusion equations are also numerous, e.g. the Cauchy problem, source-term identification, and the sideways problem, each of which has been studied at home and abroad. Xu et al. [6] proved uniqueness of the solution of the inverse Cauchy problem; source identification for time-fractional diffusion equations is studied in [7]; the sideways problem is treated in [8] by optimal filtering, Fourier, iterative, and convolution-type regularization methods, with convergence analysis; Liu et al. [9-10] applied regularization methods to initial-value and boundary-value inversion for time-fractional diffusion equations.

Consider the initial-boundary-value problem for a time-fractional parabolic equation

(1)

where the time derivative is the fractional derivative in the Caputo-Dzherbashyan sense, defined through the Riemann-Liouville fractional integral I^α. Assume the data of problem (1) satisfy:

(c1) k(x) ∈ C¹[0,1] and 0 < c₀ ≤ k(x) < c₁;
(c3) ψ₀(t), ψ₁(t) ∈ C[0,T];
(c4) q(t) ∈ C[0,T].

Under conditions (c1)-(c4), problem (1) has a unique solution [11]. This paper studies the inversion of the thermal conductivity k(x) from the measured data at the boundary x = 0:

u(0,t) = f(t), t ∈ (0,T].  (2)

1 Assumptions and main result

First, define the admissible set

M := {k(x) ∈ C¹[0,1] : 0 < c₀ ≤ k(x) < c₁, x ∈ [0,1]} ⊂ C[0,1].

Next, the generalized Mittag-Leffler function is defined by

E_{α,β}(z) = Σ_{k=0}^{∞} z^k / Γ(αk + β), α > 0, β ∈ R.

Then introduce the input-output mapping Φ: M → C[0,T] with Φ[k] = u(x,t;k)|_{x=0}, so that the uniqueness discussion for the inverse problem reduces to the mapping equation

Φ[k] = f, f ∈ C[0,T].  (3)

The main result is the following.

Theorem 1. Suppose conditions (c1)-(c4) hold and equation (3) defines an input-output mapping Φ[k] on the admissible set M, and let k₁(0) = k₂(0) = k(0). Then Φ[k] has the property: Φ[k₁] = Φ[k₂] implies k₁ = k₂.

2 Proof of Theorem 1

First introduce an auxiliary function v(x,t), by means of which problem (1) is transformed into a problem for v(x,t) with homogeneous boundary conditions:

(4)

By [12], the initial-boundary-value problem (4) has a unique solution, with the representation

(5)

Setting x = 0 in (5) and using (2) gives

(6)

Let φ_n(x) be the solutions of the Sturm-Liouville problem

(7)

For convenience, the solution (5) can be written as a series with coefficients

χ_n(t) = ⟨ζ(θ), φ_n(θ)⟩ E_{α,1}(−λ_n t^α),
ω_n(t) = ∫₀ᵗ s^{α−1} E_{α,α}(−λ_n s^α) ⟨ξ(θ, t−s), φ_n(θ)⟩ ds.

Combining with (6) yields

(8)

i.e. f(t) is expressed as a series, so (8) defines the input-output mapping Φ[k] on the admissible set M for all t ∈ [0,T].

Proof of Theorem 1. For k₁(x), k₂(x) ∈ M, denote the corresponding solutions v(x,t;k₁), v(x,t;k₂) of problem (4) by v₁(x,t), v₂(x,t). By (8), by the definition of ω_n(t), and since k₁(0) = k₂(0) = k(0), it follows from the definitions of χ_n(t) and ω_n(t) that if ⟨ξ₁(x,t) − ξ₂(x,t), φ_n(θ)⟩ = 0 for n = 0, 1, …, then Φ[k₁] = Φ[k₂] for all t ∈ [0,T]. Conversely, if ⟨ξ₁(x,t) − ξ₂(x,t), φ_n(θ)⟩ ≠ 0 for some n, then k₁(x) ≠ k₂(x), and the input-output mapping satisfies: k₁(x) ≠ k₂(x) implies Φ[k₁] ≠ Φ[k₂]. Hence the conclusion of Theorem 1 holds.

References:
[1] Catania F, Massabo M, Paladino O. Estimation of transport and kinetic parameters using analytical solutions of the 2D advection-dispersion-reaction model[J]. Environmetrics, 2006, 17: 199-216.
[2] Khalifa M E. Some analytical solutions for the advection-dispersion equation[J]. Appl Math Comp, 2003(139): 299-310.
[3] Sakamoto K, Yamamoto M. Initial value/boundary value problems for fractional diffusion-wave equations and applications to some inverse problems[J]. J Math Anal Appl, 2011, 382(1): 426-447.
[4] Ding X L, Jiang Y L. Analytical solutions for the multi-term time-space fractional advection-diffusion equations with mixed boundary conditions[J]. Nonl Anal, 2013, 14(2): 1026-1033.
[5] Ren J, Sun Z Z, Zhao X. Compact scheme for the fractional sub-diffusion equations with Neumann boundary conditions[J]. J Comp Phys, 2013, 232(1): 456-467.
[6] Xu X, Cheng J, Yamamoto M. Carleman estimate for a fractional diffusion equation with half order and application[J]. Appl Anal, 2011, 90(9): 1355-1371.
[7] Kirane M, Malik S A. Determination of an unknown source term and the temperature distribution for the linear heat equation involving fractional derivative in time[J]. Appl Math Compu, 2011, 218(1): 163-170.
[8] Qian Z. Optimal modified method for a fractional-diffusion inverse heat conduction problem[J]. Inve Prob Scie Engi, 2010, 18(4): 521-533.
[9] Wang L Y, Liu J J. Total variation regularization for a backward time-fractional diffusion problem[J]. Inve Prob, 2013, 29: 1-12.
[10] Liu J J, Yamamoto M, Yan L. On the uniqueness and reconstruction for an inverse problem of the fractional diffusion process[J]. Appl Nume Math, 2015(10): 1-19.
[11] Podlubny I. Fractional differential equations[M]. San Diego: Academic, 1999.
[12] Luchko Y. Initial boundary value problems for the one dimensional time-fractional diffusion equation[J]. Frac Calc Appl Anal, 2012, 15: 141-160.
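The generalized Mittag-Leffler function that appears in the solution representation can be evaluated numerically by truncating its defining series E_{α,β}(z) = Σ_{k≥0} z^k / Γ(αk + β). A minimal sketch (my own illustration, not part of the paper), adequate for moderate |z|:

```python
import math

def mittag_leffler(alpha, beta, z, terms=100):
    """Partial sum of the Mittag-Leffler series; the Gamma factor in the
    denominator grows fast, so a modest number of terms suffices for small |z|."""
    return sum(z**k / math.gamma(alpha * k + beta) for k in range(terms))
```

Two classical special cases serve as sanity checks: E_{1,1}(z) = e^z and E_{1,2}(z) = (e^z − 1)/z.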

Machine learning terminology: Chinese-English glossary


机器学习专业词汇中英⽂对照activation 激活值activation function 激活函数additive noise 加性噪声autoencoder ⾃编码器Autoencoders ⾃编码算法average firing rate 平均激活率average sum-of-squares error 均⽅差backpropagation 后向传播basis 基basis feature vectors 特征基向量batch gradient ascent 批量梯度上升法Bayesian regularization method 贝叶斯规则化⽅法Bernoulli random variable 伯努利随机变量bias term 偏置项binary classfication ⼆元分类class labels 类型标记concatenation 级联conjugate gradient 共轭梯度contiguous groups 联通区域convex optimization software 凸优化软件convolution 卷积cost function 代价函数covariance matrix 协⽅差矩阵DC component 直流分量decorrelation 去相关degeneracy 退化demensionality reduction 降维derivative 导函数diagonal 对⾓线diffusion of gradients 梯度的弥散eigenvalue 特征值eigenvector 特征向量error term 残差feature matrix 特征矩阵feature standardization 特征标准化feedforward architectures 前馈结构算法feedforward neural network 前馈神经⽹络feedforward pass 前馈传导fine-tuned 微调first-order feature ⼀阶特征forward pass 前向传导forward propagation 前向传播Gaussian prior ⾼斯先验概率generative model ⽣成模型gradient descent 梯度下降Greedy layer-wise training 逐层贪婪训练⽅法grouping matrix 分组矩阵Hadamard product 阿达马乘积Hessian matrix Hessian 矩阵hidden layer 隐含层hidden units 隐藏神经元Hierarchical grouping 层次型分组higher-order features 更⾼阶特征highly non-convex optimization problem ⾼度⾮凸的优化问题histogram 直⽅图hyperbolic tangent 双曲正切函数hypothesis 估值,假设identity activation function 恒等激励函数IID 独⽴同分布illumination 照明inactive 抑制independent component analysis 独⽴成份分析input domains 输⼊域input layer 输⼊层intensity 亮度/灰度intercept term 截距KL divergence 相对熵KL divergence KL分散度k-Means K-均值learning rate 学习速率least squares 最⼩⼆乘法linear correspondence 线性响应linear superposition 线性叠加line-search algorithm 线搜索算法local mean subtraction 局部均值消减local optima 局部最优解logistic regression 逻辑回归loss function 损失函数low-pass filtering 低通滤波magnitude 幅值MAP 极⼤后验估计maximum likelihood estimation 极⼤似然估计mean 平均值MFCC Mel 倒频系数multi-class classification 多元分类neural networks 神经⽹络neuron 神经元Newton’s method ⽜顿法non-convex function ⾮凸函数non-linear feature ⾮线性特征norm 范式norm bounded 有界范数norm constrained 范数约束normalization 归⼀化numerical roundoff errors 
数值舍⼊误差numerically checking 数值检验numerically reliable 数值计算上稳定object detection 物体检测objective function ⽬标函数off-by-one error 缺位错误orthogonalization 正交化output layer 输出层overall cost function 总体代价函数over-complete basis 超完备基over-fitting 过拟合parts of objects ⽬标的部件part-whole decompostion 部分-整体分解PCA 主元分析penalty term 惩罚因⼦per-example mean subtraction 逐样本均值消减pooling 池化pretrain 预训练principal components analysis 主成份分析quadratic constraints ⼆次约束RBMs 受限Boltzman机reconstruction based models 基于重构的模型reconstruction cost 重建代价reconstruction term 重构项redundant 冗余reflection matrix 反射矩阵regularization 正则化regularization term 正则化项rescaling 缩放robust 鲁棒性run ⾏程second-order feature ⼆阶特征sigmoid activation function S型激励函数significant digits 有效数字singular value 奇异值singular vector 奇异向量smoothed L1 penalty 平滑的L1范数惩罚Smoothed topographic L1 sparsity penalty 平滑地形L1稀疏惩罚函数smoothing 平滑Softmax Regresson Softmax回归sorted in decreasing order 降序排列source features 源特征sparse autoencoder 消减归⼀化Sparsity 稀疏性sparsity parameter 稀疏性参数sparsity penalty 稀疏惩罚square function 平⽅函数squared-error ⽅差stationary 平稳性(不变性)stationary stochastic process 平稳随机过程step-size 步长值supervised learning 监督学习symmetric positive semi-definite matrix 对称半正定矩阵symmetry breaking 对称失效tanh function 双曲正切函数the average activation 平均活跃度the derivative checking method 梯度验证⽅法the empirical distribution 经验分布函数the energy function 能量函数the Lagrange dual 拉格朗⽇对偶函数the log likelihood 对数似然函数the pixel intensity value 像素灰度值the rate of convergence 收敛速度topographic cost term 拓扑代价项topographic ordered 拓扑秩序transformation 变换translation invariant 平移不变性trivial answer 平凡解under-complete basis 不完备基unrolling 组合扩展unsupervised learning ⽆监督学习variance ⽅差vecotrized implementation 向量化实现vectorization ⽮量化visual cortex 视觉⽪层weight decay 权重衰减weighted average 加权平均值whitening ⽩化zero-mean 均值为零Letter AAccumulated error backpropagation 累积误差逆传播Activation Function 激活函数Adaptive Resonance Theory/ART ⾃适应谐振理论Addictive model 加性学习Adversarial Networks 对抗⽹络Affine Layer 仿射层Affinity matrix 亲和矩阵Agent 代理 / 智能体Algorithm 算法Alpha-beta 
pruning α-β剪枝Anomaly detection 异常检测Approximation 近似Area Under ROC Curve/AUC Roc 曲线下⾯积Artificial General Intelligence/AGI 通⽤⼈⼯智能Artificial Intelligence/AI ⼈⼯智能Association analysis 关联分析Attention mechanism 注意⼒机制Attribute conditional independence assumption 属性条件独⽴性假设Attribute space 属性空间Attribute value 属性值Autoencoder ⾃编码器Automatic speech recognition ⾃动语⾳识别Automatic summarization ⾃动摘要Average gradient 平均梯度Average-Pooling 平均池化Letter BBackpropagation Through Time 通过时间的反向传播Backpropagation/BP 反向传播Base learner 基学习器Base learning algorithm 基学习算法Batch Normalization/BN 批量归⼀化Bayes decision rule 贝叶斯判定准则Bayes Model Averaging/BMA 贝叶斯模型平均Bayes optimal classifier 贝叶斯最优分类器Bayesian decision theory 贝叶斯决策论Bayesian network 贝叶斯⽹络Between-class scatter matrix 类间散度矩阵Bias 偏置 / 偏差Bias-variance decomposition 偏差-⽅差分解Bias-Variance Dilemma 偏差 – ⽅差困境Bi-directional Long-Short Term Memory/Bi-LSTM 双向长短期记忆Binary classification ⼆分类Binomial test ⼆项检验Bi-partition ⼆分法Boltzmann machine 玻尔兹曼机Bootstrap sampling ⾃助采样法/可重复采样/有放回采样Bootstrapping ⾃助法Break-Event Point/BEP 平衡点Letter CCalibration 校准Cascade-Correlation 级联相关Categorical attribute 离散属性Class-conditional probability 类条件概率Classification and regression tree/CART 分类与回归树Classifier 分类器Class-imbalance 类别不平衡Closed -form 闭式Cluster 簇/类/集群Cluster analysis 聚类分析Clustering 聚类Clustering ensemble 聚类集成Co-adapting 共适应Coding matrix 编码矩阵COLT 国际学习理论会议Committee-based learning 基于委员会的学习Competitive learning 竞争型学习Component learner 组件学习器Comprehensibility 可解释性Computation Cost 计算成本Computational Linguistics 计算语⾔学Computer vision 计算机视觉Concept drift 概念漂移Concept Learning System /CLS 概念学习系统Conditional entropy 条件熵Conditional mutual information 条件互信息Conditional Probability Table/CPT 条件概率表Conditional random field/CRF 条件随机场Conditional risk 条件风险Confidence 置信度Confusion matrix 混淆矩阵Connection weight 连接权Connectionism 连结主义Consistency ⼀致性/相合性Contingency table 列联表Continuous attribute 连续属性Convergence 收敛Conversational agent 会话智能体Convex quadratic programming 凸⼆次规划Convexity 凸性Convolutional neural network/CNN 
卷积神经⽹络Co-occurrence 同现Correlation coefficient 相关系数Cosine similarity 余弦相似度Cost curve 成本曲线Cost Function 成本函数Cost matrix 成本矩阵Cost-sensitive 成本敏感Cross entropy 交叉熵Cross validation 交叉验证Crowdsourcing 众包Curse of dimensionality 维数灾难Cut point 截断点Cutting plane algorithm 割平⾯法Letter DData mining 数据挖掘Data set 数据集Decision Boundary 决策边界Decision stump 决策树桩Decision tree 决策树/判定树Deduction 演绎Deep Belief Network 深度信念⽹络Deep Convolutional Generative Adversarial Network/DCGAN 深度卷积⽣成对抗⽹络Deep learning 深度学习Deep neural network/DNN 深度神经⽹络Deep Q-Learning 深度 Q 学习Deep Q-Network 深度 Q ⽹络Density estimation 密度估计Density-based clustering 密度聚类Differentiable neural computer 可微分神经计算机Dimensionality reduction algorithm 降维算法Directed edge 有向边Disagreement measure 不合度量Discriminative model 判别模型Discriminator 判别器Distance measure 距离度量Distance metric learning 距离度量学习Distribution 分布Divergence 散度Diversity measure 多样性度量/差异性度量Domain adaption 领域⾃适应Downsampling 下采样D-separation (Directed separation)有向分离Dual problem 对偶问题Dummy node 哑结点Dynamic Fusion 动态融合Dynamic programming 动态规划Letter EEigenvalue decomposition 特征值分解Embedding 嵌⼊Emotional analysis 情绪分析Empirical conditional entropy 经验条件熵Empirical entropy 经验熵Empirical error 经验误差Empirical risk 经验风险End-to-End 端到端Energy-based model 基于能量的模型Ensemble learning 集成学习Ensemble pruning 集成修剪Error Correcting Output Codes/ECOC 纠错输出码Error rate 错误率Error-ambiguity decomposition 误差-分歧分解Euclidean distance 欧⽒距离Evolutionary computation 演化计算Expectation-Maximization 期望最⼤化Expected loss 期望损失Exploding Gradient Problem 梯度爆炸问题Exponential loss function 指数损失函数Extreme Learning Machine/ELM 超限学习机Letter FFactorization 因⼦分解False negative 假负类False positive 假正类False Positive Rate/FPR 假正例率Feature engineering 特征⼯程Feature selection 特征选择Feature vector 特征向量Featured Learning 特征学习Feedforward Neural Networks/FNN 前馈神经⽹络Fine-tuning 微调Flipping output 翻转法Fluctuation 震荡Forward stagewise algorithm 前向分步算法Frequentist 频率主义学派Full-rank matrix 满秩矩阵Functional neuron 功能神经元Letter GGain ratio 增益率Game theory 博弈论Gaussian kernel function 
⾼斯核函数Gaussian Mixture Model ⾼斯混合模型General Problem Solving 通⽤问题求解Generalization 泛化Generalization error 泛化误差Generalization error bound 泛化误差上界Generalized Lagrange function ⼴义拉格朗⽇函数Generalized linear model ⼴义线性模型Generalized Rayleigh quotient ⼴义瑞利商Generative Adversarial Networks/GAN ⽣成对抗⽹络Generative Model ⽣成模型Generator ⽣成器Genetic Algorithm/GA 遗传算法Gibbs sampling 吉布斯采样Gini index 基尼指数Global minimum 全局最⼩Global Optimization 全局优化Gradient boosting 梯度提升Gradient Descent 梯度下降Graph theory 图论Ground-truth 真相/真实Letter HHard margin 硬间隔Hard voting 硬投票Harmonic mean 调和平均Hesse matrix 海塞矩阵Hidden dynamic model 隐动态模型Hidden layer 隐藏层Hidden Markov Model/HMM 隐马尔可夫模型Hierarchical clustering 层次聚类Hilbert space 希尔伯特空间Hinge loss function 合页损失函数Hold-out 留出法Homogeneous 同质Hybrid computing 混合计算Hyperparameter 超参数Hypothesis 假设Hypothesis test 假设验证Letter IICML 国际机器学习会议Improved iterative scaling/IIS 改进的迭代尺度法Incremental learning 增量学习Independent and identically distributed/i.i.d. 独⽴同分布Independent Component Analysis/ICA 独⽴成分分析Indicator function 指⽰函数Individual learner 个体学习器Induction 归纳Inductive bias 归纳偏好Inductive learning 归纳学习Inductive Logic Programming/ILP 归纳逻辑程序设计Information entropy 信息熵Information gain 信息增益Input layer 输⼊层Insensitive loss 不敏感损失Inter-cluster similarity 簇间相似度International Conference for Machine Learning/ICML 国际机器学习⼤会Intra-cluster similarity 簇内相似度Intrinsic value 固有值Isometric Mapping/Isomap 等度量映射Isotonic regression 等分回归Iterative Dichotomiser 迭代⼆分器Letter KKernel method 核⽅法Kernel trick 核技巧Kernelized Linear Discriminant Analysis/KLDA 核线性判别分析K-fold cross validation k 折交叉验证/k 倍交叉验证K-Means Clustering K – 均值聚类K-Nearest Neighbours Algorithm/KNN K近邻算法Knowledge base 知识库Knowledge Representation 知识表征Letter LLabel space 标记空间Lagrange duality 拉格朗⽇对偶性Lagrange multiplier 拉格朗⽇乘⼦Laplace smoothing 拉普拉斯平滑Laplacian correction 拉普拉斯修正Latent Dirichlet Allocation 隐狄利克雷分布Latent semantic analysis 潜在语义分析Latent variable 隐变量Lazy learning 懒惰学习Learner 学习器Learning by analogy 类⽐学习Learning rate 学习率Learning Vector Quantization/LVQ 
学习向量量化Least squares regression tree 最⼩⼆乘回归树Leave-One-Out/LOO 留⼀法linear chain conditional random field 线性链条件随机场Linear Discriminant Analysis/LDA 线性判别分析Linear model 线性模型Linear Regression 线性回归Link function 联系函数Local Markov property 局部马尔可夫性Local minimum 局部最⼩Log likelihood 对数似然Log odds/logit 对数⼏率Logistic Regression Logistic 回归Log-likelihood 对数似然Log-linear regression 对数线性回归Long-Short Term Memory/LSTM 长短期记忆Loss function 损失函数Letter MMachine translation/MT 机器翻译Macron-P 宏查准率Macron-R 宏查全率Majority voting 绝对多数投票法Manifold assumption 流形假设Manifold learning 流形学习Margin theory 间隔理论Marginal distribution 边际分布Marginal independence 边际独⽴性Marginalization 边际化Markov Chain Monte Carlo/MCMC 马尔可夫链蒙特卡罗⽅法Markov Random Field 马尔可夫随机场Maximal clique 最⼤团Maximum Likelihood Estimation/MLE 极⼤似然估计/极⼤似然法Maximum margin 最⼤间隔Maximum weighted spanning tree 最⼤带权⽣成树Max-Pooling 最⼤池化Mean squared error 均⽅误差Meta-learner 元学习器Metric learning 度量学习Micro-P 微查准率Micro-R 微查全率Minimal Description Length/MDL 最⼩描述长度Minimax game 极⼩极⼤博弈Misclassification cost 误分类成本Mixture of experts 混合专家Momentum 动量Moral graph 道德图/端正图Multi-class classification 多分类Multi-document summarization 多⽂档摘要Multi-layer feedforward neural networks 多层前馈神经⽹络Multilayer Perceptron/MLP 多层感知器Multimodal learning 多模态学习Multiple Dimensional Scaling 多维缩放Multiple linear regression 多元线性回归Multi-response Linear Regression /MLR 多响应线性回归Mutual information 互信息Letter NNaive bayes 朴素贝叶斯Naive Bayes Classifier 朴素贝叶斯分类器Named entity recognition 命名实体识别Nash equilibrium 纳什均衡Natural language generation/NLG ⾃然语⾔⽣成Natural language processing ⾃然语⾔处理Negative class 负类Negative correlation 负相关法Negative Log Likelihood 负对数似然Neighbourhood Component Analysis/NCA 近邻成分分析Neural Machine Translation 神经机器翻译Neural Turing Machine 神经图灵机Newton method ⽜顿法NIPS 国际神经信息处理系统会议No Free Lunch Theorem/NFL 没有免费的午餐定理Noise-contrastive estimation 噪⾳对⽐估计Nominal attribute 列名属性Non-convex optimization ⾮凸优化Nonlinear model ⾮线性模型Non-metric distance ⾮度量距离Non-negative matrix factorization ⾮负矩阵分解Non-ordinal attribute 
⽆序属性Non-Saturating Game ⾮饱和博弈Norm 范数Normalization 归⼀化Nuclear norm 核范数Numerical attribute 数值属性Letter OObjective function ⽬标函数Oblique decision tree 斜决策树Occam’s razor 奥卡姆剃⼑Odds ⼏率Off-Policy 离策略One shot learning ⼀次性学习One-Dependent Estimator/ODE 独依赖估计On-Policy 在策略Ordinal attribute 有序属性Out-of-bag estimate 包外估计Output layer 输出层Output smearing 输出调制法Overfitting 过拟合/过配Oversampling 过采样Letter PPaired t-test 成对 t 检验Pairwise 成对型Pairwise Markov property 成对马尔可夫性Parameter 参数Parameter estimation 参数估计Parameter tuning 调参Parse tree 解析树Particle Swarm Optimization/PSO 粒⼦群优化算法Part-of-speech tagging 词性标注Perceptron 感知机Performance measure 性能度量Plug and Play Generative Network 即插即⽤⽣成⽹络Plurality voting 相对多数投票法Polarity detection 极性检测Polynomial kernel function 多项式核函数Pooling 池化Positive class 正类Positive definite matrix 正定矩阵Post-hoc test 后续检验Post-pruning 后剪枝potential function 势函数Precision 查准率/准确率Prepruning 预剪枝Principal component analysis/PCA 主成分分析Principle of multiple explanations 多释原则Prior 先验Probability Graphical Model 概率图模型Proximal Gradient Descent/PGD 近端梯度下降Pruning 剪枝Pseudo-label 伪标记Letter QQuantized Neural Network 量⼦化神经⽹络Quantum computer 量⼦计算机Quantum Computing 量⼦计算Quasi Newton method 拟⽜顿法Letter RRadial Basis Function/RBF 径向基函数Random Forest Algorithm 随机森林算法Random walk 随机漫步Recall 查全率/召回率Receiver Operating Characteristic/ROC 受试者⼯作特征Rectified Linear Unit/ReLU 线性修正单元Recurrent Neural Network 循环神经⽹络Recursive neural network 递归神经⽹络Reference model 参考模型Regression 回归Regularization 正则化Reinforcement learning/RL 强化学习Representation learning 表征学习Representer theorem 表⽰定理reproducing kernel Hilbert space/RKHS 再⽣核希尔伯特空间Re-sampling 重采样法Rescaling 再缩放Residual Mapping 残差映射Residual Network 残差⽹络Restricted Boltzmann Machine/RBM 受限玻尔兹曼机Restricted Isometry Property/RIP 限定等距性Re-weighting 重赋权法Robustness 稳健性/鲁棒性Root node 根结点Rule Engine 规则引擎Rule learning 规则学习Letter SSaddle point 鞍点Sample space 样本空间Sampling 采样Score function 评分函数Self-Driving ⾃动驾驶Self-Organizing Map/SOM ⾃组织映射Semi-naive Bayes classifiers 半朴素贝叶斯分类器Semi-Supervised 
Learning 半监督学习semi-Supervised Support Vector Machine 半监督⽀持向量机Sentiment analysis 情感分析Separating hyperplane 分离超平⾯Sigmoid function Sigmoid 函数Similarity measure 相似度度量Simulated annealing 模拟退⽕Simultaneous localization and mapping 同步定位与地图构建Singular Value Decomposition 奇异值分解Slack variables 松弛变量Smoothing 平滑Soft margin 软间隔Soft margin maximization 软间隔最⼤化Soft voting 软投票Sparse representation 稀疏表征Sparsity 稀疏性Specialization 特化Spectral Clustering 谱聚类Speech Recognition 语⾳识别Splitting variable 切分变量Squashing function 挤压函数Stability-plasticity dilemma 可塑性-稳定性困境Statistical learning 统计学习Status feature function 状态特征函Stochastic gradient descent 随机梯度下降Stratified sampling 分层采样Structural risk 结构风险Structural risk minimization/SRM 结构风险最⼩化Subspace ⼦空间Supervised learning 监督学习/有导师学习support vector expansion ⽀持向量展式Support Vector Machine/SVM ⽀持向量机Surrogat loss 替代损失Surrogate function 替代函数Symbolic learning 符号学习Symbolism 符号主义Synset 同义词集Letter TT-Distribution Stochastic Neighbour Embedding/t-SNE T – 分布随机近邻嵌⼊Tensor 张量Tensor Processing Units/TPU 张量处理单元The least square method 最⼩⼆乘法Threshold 阈值Threshold logic unit 阈值逻辑单元Threshold-moving 阈值移动Time Step 时间步骤Tokenization 标记化Training error 训练误差Training instance 训练⽰例/训练例Transductive learning 直推学习Transfer learning 迁移学习Treebank 树库Tria-by-error 试错法True negative 真负类True positive 真正类True Positive Rate/TPR 真正例率Turing Machine 图灵机Twice-learning ⼆次学习Letter UUnderfitting ⽋拟合/⽋配Undersampling ⽋采样Understandability 可理解性Unequal cost ⾮均等代价Unit-step function 单位阶跃函数Univariate decision tree 单变量决策树Unsupervised learning ⽆监督学习/⽆导师学习Unsupervised layer-wise training ⽆监督逐层训练Upsampling 上采样Letter VVanishing Gradient Problem 梯度消失问题Variational inference 变分推断VC Theory VC维理论Version space 版本空间Viterbi algorithm 维特⽐算法Von Neumann architecture 冯 · 诺伊曼架构Letter WWasserstein GAN/WGAN Wasserstein⽣成对抗⽹络Weak learner 弱学习器Weight 权重Weight sharing 权共享Weighted voting 加权投票法Within-class scatter matrix 类内散度矩阵Word embedding 词嵌⼊Word sense disambiguation 词义消歧Letter ZZero-data learning 零数据学习Zero-shot learning 
零次学习Aapproximations近似值arbitrary随意的affine仿射的arbitrary任意的amino acid氨基酸amenable经得起检验的axiom公理,原则abstract提取architecture架构,体系结构;建造业absolute绝对的arsenal军⽕库assignment分配algebra线性代数asymptotically⽆症状的appropriate恰当的Bbias偏差brevity简短,简洁;短暂broader⼴泛briefly简短的batch批量Cconvergence 收敛,集中到⼀点convex凸的contours轮廓constraint约束constant常理commercial商务的complementarity补充coordinate ascent同等级上升clipping剪下物;剪报;修剪component分量;部件continuous连续的covariance协⽅差canonical正规的,正则的concave⾮凸的corresponds相符合;相当;通信corollary推论concrete具体的事物,实在的东西cross validation交叉验证correlation相互关系convention约定cluster⼀簇centroids 质⼼,形⼼converge收敛computationally计算(机)的calculus计算Dderive获得,取得dual⼆元的duality⼆元性;⼆象性;对偶性derivation求导;得到;起源denote预⽰,表⽰,是…的标志;意味着,[逻]指称divergence 散度;发散性dimension尺度,规格;维数dot⼩圆点distortion变形density概率密度函数discrete离散的discriminative有识别能⼒的diagonal对⾓dispersion分散,散开determinant决定因素disjoint不相交的Eencounter遇到ellipses椭圆equality等式extra额外的empirical经验;观察ennmerate例举,计数exceed超过,越出expectation期望efficient⽣效的endow赋予explicitly清楚的exponential family指数家族equivalently等价的Ffeasible可⾏的forary初次尝试finite有限的,限定的forgo摒弃,放弃fliter过滤frequentist最常发⽣的forward search前向式搜索formalize使定形Ggeneralized归纳的generalization概括,归纳;普遍化;判断(根据不⾜)guarantee保证;抵押品generate形成,产⽣geometric margins⼏何边界gap裂⼝generative⽣产的;有⽣产⼒的Hheuristic启发式的;启发法;启发程序hone怀恋;磨hyperplane超平⾯Linitial最初的implement执⾏intuitive凭直觉获知的incremental增加的intercept截距intuitious直觉instantiation例⼦indicator指⽰物,指⽰器interative重复的,迭代的integral积分identical相等的;完全相同的indicate表⽰,指出invariance不变性,恒定性impose把…强加于intermediate中间的interpretation解释,翻译Jjoint distribution联合概率Llieu替代logarithmic对数的,⽤对数表⽰的latent潜在的Leave-one-out cross validation留⼀法交叉验证Mmagnitude巨⼤mapping绘图,制图;映射matrix矩阵mutual相互的,共同的monotonically单调的minor较⼩的,次要的multinomial多项的multi-class classification⼆分类问题Nnasty讨厌的notation标志,注释naïve朴素的Oobtain得到oscillate摆动optimization problem最优化问题objective function⽬标函数optimal最理想的orthogonal(⽮量,矩阵等)正交的orientation⽅向ordinary普通的occasionally偶然的Ppartial 
derivative偏导数property性质proportional成⽐例的primal原始的,最初的permit允许pseudocode伪代码permissible可允许的polynomial多项式preliminary预备precision精度perturbation 不安,扰乱poist假定,设想positive semi-definite半正定的parentheses圆括号posterior probability后验概率plementarity补充pictorially图像的parameterize确定…的参数poisson distribution柏松分布pertinent相关的Qquadratic⼆次的quantity量,数量;分量query疑问的Rregularization使系统化;调整reoptimize重新优化restrict限制;限定;约束reminiscent回忆往事的;提醒的;使⼈联想…的(of)remark注意random variable随机变量respect考虑respectively各⾃的;分别的redundant过多的;冗余的Ssusceptible敏感的stochastic可能的;随机的symmetric对称的sophisticated复杂的spurious假的;伪造的subtract减去;减法器simultaneously同时发⽣地;同步地suffice满⾜scarce稀有的,难得的split分解,分离subset⼦集statistic统计量successive iteratious连续的迭代scale标度sort of有⼏分的squares平⽅Ttrajectory轨迹temporarily暂时的terminology专⽤名词tolerance容忍;公差thumb翻阅threshold阈,临界theorem定理tangent正弦Uunit-length vector单位向量Vvalid有效的,正确的variance⽅差variable变量;变元vocabulary词汇valued经估价的;宝贵的Wwrapper包装分类:。

Analytical solutions of the convection-diffusion equation


The convection-diffusion equation (CDE) describes the diffusion of matter and the convective transport of heat in physical systems.

It originated in the equations of electron motion found in vacuum-magnet theory in the 1930s, and in the 1950s it came into widespread use across engineering, physics, and chemistry, in areas such as electronics, heat transport, and hydraulics, where it plays an indispensable role.

In general, the convection-diffusion equation can be written as
$$\frac{\partial y}{\partial t} = a\frac{\partial^2 y}{\partial x^2} + b\frac{\partial y}{\partial x} + c\,y + d$$
where a, b, c and d are constants, and t and x denote time and spatial position.

Projecting the spatial coordinates onto their plane gives the more concrete form
$$\frac{\partial y}{\partial t} = a\frac{\partial^2 y}{\partial x^2} + b\frac{\partial y}{\partial x} + c\,y + d + \frac{\partial y}{\partial z}$$
where z is the projected spatial coordinate, and a, b, c and d can be adjusted to fit different applications.

There are two basic approaches to analytical solutions of the convection-diffusion equation: the method of indefinite integrals, and the differential-plane method, also known as asymptotic (perturbation) analysis.

In principle, the indefinite-integral method decomposes the convection-diffusion equation into several simpler, solvable differential equations, solves each of them separately, and then combines the results into the full solution.

It can also be combined with standard integral methods for approximate solutions, which is especially useful for complicated multivariable equations.

Perturbation analysis, by contrast, splits a complicated problem into a sequence of asymptotic steps, each reducing the problem to a state that can be solved approximately, and iterates in this way until an approximate solution is reached.

This technique is commonly used for nonlinear equations; it is also very effective for the convection-diffusion equation, improving both accuracy and computational speed.

Other solution methods exist as well, such as the Lagrange method, Laplace regularization, and functional-analytic methods for partial differential equations.
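As a concrete check on the ideas above, the constant-coefficient 1-D case u_t = a·u_xx − v·u_x has a classical closed-form fundamental solution: a Gaussian that drifts at speed v while spreading diffusively, u(x, t) = exp(−(x − v t)² / (4 a t)) / sqrt(4 π a t). A short numerical sketch (my own illustration; the values of a and v are assumed):

```python
import math

def cde_fundamental_solution(x, t, a=1.0, v=0.5):
    """Fundamental solution of u_t = a*u_xx - v*u_x: a Gaussian centred at
    x = v*t with variance 2*a*t, normalized to unit mass for all t > 0."""
    return math.exp(-(x - v * t) ** 2 / (4 * a * t)) / math.sqrt(4 * math.pi * a * t)
```

Because the solution is a probability-density-like pulse, its spatial integral stays 1 at every time, which makes a convenient numerical test of any discretized solver.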

Diffusion models (Jianshu)


A Brief Introduction to Diffusion Models

Diffusion models have gained immense popularity in the field of machine learning, particularly in the generation of images, text, and audio.

The core concept of diffusion models lies in simulating the process of a data distribution evolving from a noisy state to a clean, high-quality state.

These models are trained to reverse this process, effectively generating new, high-quality data from random noise.

One of the key advantages of diffusion models is their ability to generate data with high fidelity and diversity, making them highly suitable for various applications such as image synthesis, text generation, and audio processing.
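The forward "noising" half of this process has a convenient closed form: with a variance schedule β_t, the noisy sample at step t is x_t = sqrt(ᾱ_t)·x_0 + sqrt(1 − ᾱ_t)·ε with ᾱ_t = Π_s (1 − β_s) and ε drawn from a standard Gaussian. A toy sketch (the schedule, step count, and data vector are my own assumptions, not from any particular paper):

```python
import math
import random

def forward_diffuse(x0, t, betas):
    """Jump straight from clean data x0 to the noisy x_t using the
    closed-form marginal q(x_t | x0) of the forward diffusion process."""
    alpha_bar = 1.0
    for beta in betas[: t + 1]:
        alpha_bar *= 1.0 - beta  # cumulative product of (1 - beta_s)
    return [
        math.sqrt(alpha_bar) * v + math.sqrt(1.0 - alpha_bar) * random.gauss(0.0, 1.0)
        for v in x0
    ]

betas = [0.02] * 50  # toy constant schedule
random.seed(0)
noisy = forward_diffuse([1.0, -1.0, 0.5], 49, betas)
```

As t grows, ᾱ_t shrinks toward zero, so the signal fades and the sample approaches pure noise; the generative model is trained to undo exactly these steps.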

Consistency regularization: origins and overview


Consistency Regularization: An Overview and Applications

Introduction

Consistency regularization has emerged as a powerful technique in machine learning, specifically in the field of deep learning. It aims to improve the generalization and robustness of models by encouraging consistency in their predictions. This regularization technique has found applications in various domains, including image classification, natural language processing, and speech recognition. In this article, we will provide an overview of consistency regularization, discuss its theoretical foundations, and explore its applications in different areas.

Theoretical Foundations

Consistency regularization is rooted in the principle of encouraging smoothness and stability in model predictions. The underlying assumption is that small changes in the input should not significantly alter the output of a well-trained model. This principle is particularly relevant in scenarios where the training data may contain noisy or ambiguous samples.

One of the commonly used methods for achieving consistency regularization is known as consistency training. In this approach, two different input transformations are applied to the same sample, creating two augmented versions. The model is then trained to produce consistent predictions for the transformed samples. Intuitively, this process encourages the model to focus on the underlying patterns in the data rather than being influenced by specific input variations.

Consistency regularization can be formulated using several loss functions. One popular choice is the mean squared error (MSE) loss, which measures the discrepancy between predictions on the original input and its transformed versions. Other approaches include cross-entropy loss and Kullback-Leibler divergence.

Applications in Image Classification

Consistency regularization has yielded promising results in image classification tasks. One notable application is semi-supervised learning, where the goal is to leverage a small amount of labeled data together with a larger set of unlabeled data. By enforcing consistent predictions on both labeled and unlabeled data, models can effectively learn from the unlabeled data and improve their performance on the labeled data. This approach has been shown to outperform traditional supervised learning methods in scenarios with limited labeled samples.

Additionally, consistency regularization has been explored in the context of adversarial attacks. Adversarial attacks attempt to fool a model by introducing subtle perturbations to the input data. By training models to give consistent predictions for both original and perturbed inputs, their robustness against such attacks can be significantly improved.

Applications in Natural Language Processing

Consistency regularization has also demonstrated promising results in natural language processing (NLP) tasks. In NLP, models often face the challenge of understanding and generating coherent sentences. By applying consistency regularization, models can be trained to produce consistent predictions for different representations of the same text. This encourages the model to focus on the meaning and semantics of the text rather than being influenced by superficial variations, such as different word order or sentence structure.

Furthermore, consistency regularization can be used in machine translation tasks, where the goal is to translate text from one language to another. By enforcing consistency between translations of the same source text, models can generate more accurate and consistent translations.

Applications in Speech Recognition

Speech recognition is another domain where consistency regularization has found applications. One of the key challenges in speech recognition is handling variations in pronunciation and speaking styles. By training models with consistent predictions for different acoustic representations of the same speech utterance, models can better capture the underlying patterns and improve their accuracy in recognizing speech in different conditions. This can lead to more robust and reliable speech recognition systems in real-world scenarios.

Conclusion

Consistency regularization has emerged as an effective technique for improving the generalization and robustness of models in various machine learning tasks. By encouraging consistency in predictions, models can better learn the underlying patterns in the data and generalize well to unseen examples. This regularization technique has been successfully applied in image classification, natural language processing, and speech recognition tasks, among others. As research in consistency regularization continues to advance, we can expect further developments and applications in the future.
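The MSE-based consistency objective described above can be sketched in a few lines. Everything in this toy is my own stand-in, not from the article: a fixed smooth function plays the role of the model, Gaussian jitter plays the role of data augmentation, and the sample vector is arbitrary.

```python
import random

def model(x):
    """Stand-in model: a fixed smooth function of the input vector."""
    return [0.5 * v + 0.1 for v in x]

def augment(x, rng, scale=0.01):
    """Stand-in augmentation: a small random perturbation of each input value."""
    return [v + rng.gauss(0.0, scale) for v in x]

def consistency_loss(x, rng):
    """MSE between predictions on two independently augmented views of x."""
    p1 = model(augment(x, rng))
    p2 = model(augment(x, rng))
    return sum((a - b) ** 2 for a, b in zip(p1, p2)) / len(p1)

rng = random.Random(0)
loss = consistency_loss([1.0, 2.0, 3.0], rng)
```

In practice this term is added to the supervised loss on the labeled subset; because it needs no labels, it can be evaluated on unlabeled data, which is what enables the semi-supervised setup described above.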

Principles of distillation algorithms for optimizing diffusion models


The Principles of Distillation Algorithms for Optimizing Diffusion Models

Distillation algorithms represent a powerful technique in the realm of machine learning, particularly for optimizing diffusion models. These algorithms essentially boil down complex information into a more manageable and efficient form, akin to the process of distilling a liquor to concentrate its essence.


In the context of diffusion models, distillation algorithms are employed to compress the knowledge learned by a larger, more complex model into a smaller, faster, and potentially more robust model. This process involves training the smaller model to mimic the behavior of the larger one, thereby inheriting its predictive powers without incurring the computational overhead.
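Stripped to its core, distillation fits a small student to reproduce a teacher's outputs. The toy below is my own stand-in: linear models and a tiny training set take the place of real networks (real diffusion distillation matches denoising outputs along the sampling trajectory, not a line), but the mechanic of minimizing the student-teacher gap by gradient descent is the same.

```python
def teacher(x):
    """Stand-in for a large trained model: a fixed function of the input."""
    return 2.0 * x + 1.0

def student(x, w, b):
    """Stand-in for the small model being distilled, with parameters (w, b)."""
    return w * x + b

def distill(inputs, steps=1000, lr=0.01):
    """Fit the student to the teacher by gradient descent on the MSE gap."""
    w, b = 0.0, 0.0
    n = len(inputs)
    for _ in range(steps):
        grad_w = sum(2 * (student(x, w, b) - teacher(x)) * x for x in inputs) / n
        grad_b = sum(2 * (student(x, w, b) - teacher(x)) for x in inputs) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

w, b = distill([-1.0, -0.5, 0.0, 0.5, 1.0])
```

After training, the student's parameters approach the teacher's (w ≈ 2, b ≈ 1): the student now answers like the teacher at a fraction of the cost, which is the whole point of the technique.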

Removing cloud and haze noise from remote-sensing images


Removing cloud and haze noise from remote-sensing images

SHI Wenxuan; WU Minyuan; DENG Dexiang

Abstract: To remove clouds and haze from camera-acquired remote-sensing images, a new non-local means algorithm is proposed for cloud/haze noise in remote-sensing image sequences. First, from the behavior of image gradients under cloud shadow, it is observed that the gray level drops under cloud cover while the gradient changes little, so gradient information is coupled into the computation of the weights. Then, inter-frame redundancy in the image sequence is used to compute the new weights, and finally the image is restored with them. Experiments on two remote-sensing image sequences taken over the Xinjiang and Shanxi regions of China with an UltraCamD camera show that the proposed method restores images with better quality than traditional restoration algorithms; compared with the original images, the peak signal-to-noise ratio of the restored images is improved by more than 9 dB. The results show that the algorithm can effectively restore remote-sensing images covered by thin clouds and haze without knowledge of the cloud motion, the camera motion, or the noise model.

Journal: Optics and Precision Engineering
Year (volume), issue: 2010, 18(1)
Pages: 7 (P266-272)
Keywords: non-local means; gradient feature; image sequence; image denoising; cloud and haze
Affiliation: School of Electronic Information, Wuhan University, Wuhan 430079, Hubei, China
CLC number: TP751

1 Introduction

Most terrain imagery acquired by remote-sensing satellites is optical imagery.

Optical imagery is highly susceptible to weather, and occlusion by clouds and haze is one such influence.

Cloud and haze noise directly hinders the interpretation, analysis, and use of the image content, lowering the effective utilization of the image data.

Using image-processing techniques to remove cloud and haze noise effectively and to raise the utilization of remote-sensing image data is therefore of real importance.

As a denoising problem, this has attracted many algorithms in recent years, for example denoising with anisotropic heat-conduction partial differential equations [1], denoising based on the total-variation (TV) model [2-3], and adaptive regularization models [4]. These methods apply heat-conduction PDE theory to image denoising: they diffuse the image isotropically or anisotropically according to the properties of heat conduction, use the image to be processed as the boundary condition of the PDE, and add a regularization model for denoising.
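For orientation, here is what a plain non-local means filter looks like in 1-D. This is a generic sketch of the baseline technique only, not the paper's variant (which additionally folds gradient information and inter-frame redundancy into the weights); the patch size, filtering parameter h, and test signal are my own assumptions.

```python
import math

def nlm_1d(signal, patch=1, h=0.5):
    """Non-local means: each sample becomes a similarity-weighted average of all
    samples, with weights w = exp(-||patch_i - patch_j||^2 / h^2)."""
    n = len(signal)
    out = []
    for i in range(n):
        num = den = 0.0
        for j in range(n):
            # squared distance between the patches centred at i and j (edges clamped)
            d2 = sum(
                (signal[min(max(i + k, 0), n - 1)] - signal[min(max(j + k, 0), n - 1)]) ** 2
                for k in range(-patch, patch + 1)
            )
            w = math.exp(-d2 / (h * h))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

raw = [1.0, 1.1, 0.9, 1.0, 5.0, 1.0, 0.95, 1.05]
restored = nlm_1d(raw)
```

Because every output value is a convex combination of input values, the result stays within the input's range, and samples whose neighborhoods look alike are averaged together, which is what suppresses noise in the flat regions.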

About the Journal: Journal of Chinese Computer Systems (《小型微型计算机系统》)

Journal of Chinese Computer Systems (《小型微型计算机系统》) was founded in 1980. It is supervised by the Chinese Academy of Sciences, sponsored by the Shenyang Institute of Computing Technology of the Chinese Academy of Sciences, and serves as a journal of the China Computer Federation. Over the 40 years since its founding, the journal has mainly served researchers and university teachers engaged in computing in China, devoting itself to disseminating the latest research and application results of the Chinese computing community, publishing high-quality academic and application papers, and maintaining a rigorous editorial style, for which it is widely welcomed in the computing field.

The journal's scope covers all areas of computer science, including theory of computer science, architecture, software, database theory, networks (including sensor networks), artificial intelligence and algorithms, service computing, and computer graphics and image processing.

In terms of indexing, the journal is included domestically in A Guide to the Core Journals of China, Chinese Science Abstracts (Chinese and English editions), the Chinese Science Citation Database (CSCD), the Statistical Source Journals of Chinese Sci-Tech Papers, and RCCSE, and is covered by the Chinese sci-tech paper and citation database, the China Journal Full-text Database, and several other Chinese journal and evaluation databases. Internationally it is indexed by INSPEC (UK), Abstracts Journal (Russia), Cambridge Scientific Abstracts (CSA(NS) and CSA(T), USA), Ulrich's Periodicals Directory (USA), the JST China database (Japan), and Index Copernicus (Poland).

Stable Diffusion UNet: Principles


Stable Diffusion UNet: Understanding the Principle

Stable Diffusion UNet is an advanced algorithm used in image segmentation tasks that combines the strengths of UNet and diffusion models. This approach provides an effective and robust solution for various computer vision applications.

UNet is a convolutional neural network architecture widely used for image segmentation. It has a contracting path to capture context and a symmetric expanding path for precise localization. However, conventional UNet suffers from certain limitations, such as difficulty handling complex, larger datasets and generating sharp boundaries between objects.

To overcome these limitations, Stable Diffusion UNet incorporates diffusion models into the architecture. Diffusion models use partial differential equations to smooth images while preserving detail. This integration allows the network to process larger datasets effectively and produce more accurate boundaries between objects.

The principle behind Stable Diffusion UNet involves two main steps: diffusion regularization and segmentation. In the diffusion regularization step, the input images are gradually refined by applying diffusion equations. This process removes noise, enhances features, and smooths the image without losing important information. By combining the strengths of UNet with diffusion regularization, the network becomes more stable and resistant to perturbations.

In the segmentation step, the processed images from the diffusion regularization are fed into the UNet architecture for further analysis. The contracting path captures the contextual information of the image, while the expanding path precisely localizes the objects.
The fusion of both paths allows for accurate segmentation with clear object boundaries.

The stability of the diffusion regularization process in Stable Diffusion UNet is achieved by carefully controlling the step size and using adaptive gradient-based methods. These techniques minimize the accumulation of diffusion errors, ensuring reliable and consistent results.

The principle of Stable Diffusion UNet has shown remarkable performance in various medical imaging tasks, such as tumor segmentation, organ delineation, and cell segmentation. Its ability to handle complex datasets and generate sharp boundaries has made it a valuable tool for image analysis in fields like biomedicine and computer vision.

In conclusion, Stable Diffusion UNet combines the strengths of UNet and diffusion models to address the limitations of conventional image segmentation algorithms. By integrating diffusion regularization into the architecture, it enables the network to process large datasets effectively and produce accurate segmentations with clear object boundaries. Its stability and robustness make it a powerful tool for various computer vision applications.
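The diffusion-regularization step described above, gradually refining the input with diffusion equations, can be illustrated with the simplest such PDE, the isotropic heat equation (a minimal NumPy sketch; the step count, step size, and isotropic form are illustrative assumptions, since practical pipelines use edge-aware variants):

```python
import numpy as np

def heat_smooth(img, steps=5, dt=0.2):
    """Isotropic heat-equation smoothing, the simplest instance of
    PDE-based diffusion regularization. Explicit Euler steps of
    u_t = laplacian(u), stable for dt <= 0.25 on this stencil."""
    u = img.astype(float).copy()
    for _ in range(steps):
        p = np.pad(u, 1, mode="edge")
        lap = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, 2:] + p[1:-1, :-2] - 4 * u
        u = u + dt * lap
    return u

rng = np.random.default_rng(0)
noisy = rng.normal(0.5, 0.2, size=(8, 8))
smooth = heat_smooth(noisy)
```

Each step averages every pixel toward its neighbours, reducing noise variance while (with replicated borders) preserving the overall mean intensity.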

Pedestrian Leg Protection in Vehicle Collisions

Review

Pedestrian Leg Protection in Vehicle Collisions

郑巍

Abstract: From a biomechanical perspective, this paper analyzes the injury mechanisms of pedestrians' legs in vehicle collisions, and builds a finite element model of the legform impactor according to the EEVC pedestrian protection test regulations. Using this numerical model, pedestrian leg protection is studied for a domestically produced passenger car, and a corresponding structural improvement scheme is proposed. The results show that improving the bumper structure can effectively reduce vehicle-inflicted leg injuries to pedestrians and is highly feasible.

Most tibia injuries are attributed to the bending moment induced by bumper impact: bending produces compressive stress on the struck side of the tibia and tensile stress on the opposite side, and when the stress exceeds its limit the tibia fractures. The femur and fibula share the same injury mechanism.

Figure 1: Principal injury modes of the pedestrian lower limb.

Kajzer studied in detail the injury mechanisms of the knee joint under lateral impact, showing that knee injuries arise mainly from two mechanisms: shear due to lateral translational displacement and bending due to angular displacement. The knee region of a pedestrian's leg is usually struck directly by the vehicle bumper; because the femur's motion lags, shear dislocation occurs between the joint surfaces. This shear dislocation stretches the knee ligaments and generates a lateral compressive force between the femoral condyles and the tibial intercondylar eminence. The lateral compressive force concentrates stress on the joint contact surfaces; when the stress exceeds its tolerance limit, the intercondylar eminence of the tibia or the femoral condyle fractures laterally. When the knee bends laterally, the ligaments on one side are stretched, while the joint surface on the other side bears an axial compressive force, producing concentrated stress. When the concentrated stress exceeds the compressive strength of the bone, fracture injuries also occur, as shown in Figure 2.

Analysis of the bumper structure shows that the whole assembly behaves like a simply supported beam: the outermost layer is the bumper fascia, fixed by screws to the bumper armature, which in turn is connected to the front side members of the body through bumper brackets. Impacts at the beam supports are clearly more severe than impacts at other positions, and impact location L2 lies near a bumper bracket (a beam support).

Program of the 14th National Conference on Complex Networks

Session times (October 13-14): 8:30-10:10, 10:10-10:40 (break), 10:40-12:20, 12:20-13:30 (lunch), 13:30-14:50, 14:50-15:10 (break), 15:10-16:40, 16:20-18:00, 18:30
Session chairs include: 贾韬, 陈关荣, 吕金虎, 汪小帆, 蒋国平, 曹进德

Selected talks:
- (China University of Geosciences, Beijing) Modeling the complex multidimensional information time series to characterize the volatility pattern evolution
- 16:40-17:00, 黄昌巍 (School of Science, Beijing University of Posts and Telecommunications): Persistence paves the way for cooperation in evolutionary games
- Predicting Human Contacts Through Alternating Direction Method of Multipliers

Parallel session A1: Spreading on networks (chair: 许小可), Conference Room 201 (2nd floor)
- 14:00-14:20, 刘宗华 (East China Normal University): Progress on heat transfer on complex networks

October 13, Best Student Paper defenses I (chair: 方锦清), Conference Room 401 (4th floor)
- 14:00-14:20, 孙孟锋 (Shanghai University): An Exploration and Simulation of Epidemic Spread and its Control in Multiplex Networks

Stable Diffusion UNet Principles (a reply)


Stable Diffusion UNet: A Comprehensive Understanding

Introduction:

In recent years, deep learning has revolutionized the field of medical image segmentation, allowing us to automate and improve the accuracy of this critical task. One such model is the Stable Diffusion UNet, which has proven effective at segmenting various medical images. In this article, we will go through the principles behind this model step by step.

1. Overview of UNet Architecture:

The UNet architecture plays a central role in Stable Diffusion UNet. It consists of an encoder path for capturing contextual information and a decoder path for precise localization. The encoder path follows a typical convolutional neural network (CNN), while the decoder path uses upsampling and skip connections to refine the segmentation results.

2. Incorporating Diffusion Regularization:

To stabilize the training process and improve the model's performance on complex medical images, Stable Diffusion UNet introduces diffusion regularization. This technique encourages spatially coherent segmentation by penalizing abrupt changes in predictions across neighboring pixels. With this regularization, the model learns smoother boundaries between regions, resulting in more accurate and visually appealing segmentations.

3. Diffusion Regularization Loss:

Diffusion regularization is achieved by adding a diffusion regularization loss term to the training objective. This loss term measures the variation in predicted probabilities between neighboring pixels. When adjacent pixels have similar semantic meanings, the regularization loss encourages the model to produce consistent probabilities; when adjacent pixels have different semantic meanings, it discourages abrupt changes in predictions.

4.
Mathematical Formulation of Diffusion Regularization Loss:

The diffusion regularization loss is mathematically defined as the sum of the squared differences between the probabilities of neighboring pixels. It can be computed with a convolutional operation that extracts the probability differences along the spatial dimensions of the predicted segmentation map. By backpropagating this loss, the model adjusts its weights to minimize abrupt changes and improve the consistency of predictions across neighboring pixels.

5. Improved Spatial Coherency:

Diffusion regularization plays a crucial role in improving the spatial coherency of the segmentation results. It helps the model overcome the tendency to produce noisy segmentations with fragmented boundaries. By encouraging smoother transitions between regions, the model produces segmentations that better align with the underlying anatomical structures in the medical images.

6. Training on Diverse Medical Datasets:

Stable Diffusion UNet has the advantage of being applicable to diverse medical datasets. By training the model on a wide range of imaging modalities, such as MRI, CT, or ultrasound, it can learn to segment structures specific to different medical domains. This flexibility makes Stable Diffusion UNet a versatile tool for medical professionals working with multiple imaging modalities.

7. Architectural Flexibility:

The Stable Diffusion UNet architecture can be customized to suit specific segmentation tasks. For example, different types and depths of convolutional layers or skip connections can be added to match the model's capacity to the complexity of the segmentation problem at hand. This flexibility ensures that Stable Diffusion UNet can tackle a wide range of medical image segmentation challenges effectively.

Conclusion:

Stable Diffusion UNet is a robust and effective model for medical image segmentation.
By incorporating diffusion regularization, it addresses the challenge of producing coherent segmentations, especially in complex medical images. The diffusion regularization loss encourages smoother transitions between neighboring pixels, resulting in visually appealing segmentations that align well with anatomical structures. With its flexibility and versatility, Stable Diffusion UNet holds great promise for advancing the field of medical image segmentation and facilitating clinical decision-making.
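The squared-difference formulation in point 4 can be transcribed directly (a minimal NumPy sketch; how this term is weighted against the main segmentation loss is not specified in the text):

```python
import numpy as np

def diffusion_reg_loss(prob_map):
    """Sum of squared differences between the predicted probabilities of
    horizontally and vertically neighbouring pixels, as described above.
    Zero for a spatially constant prediction; large for noisy ones."""
    p = np.asarray(prob_map, dtype=float)
    dh = p[:, 1:] - p[:, :-1]    # horizontal neighbour differences
    dv = p[1:, :] - p[:-1, :]    # vertical neighbour differences
    return float((dh ** 2).sum() + (dv ** 2).sum())

flat = np.full((4, 4), 0.5)              # perfectly coherent prediction
checker = np.indices((4, 4)).sum(0) % 2  # maximally incoherent prediction
```

A flat probability map incurs no penalty, while a checkerboard map, where every neighbouring pair disagrees completely, incurs the maximum penalty.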

The StableDiffusion-UNET Model


StableDiffusion-UNET is a variant of the U-Net model that uses the StableDiffusion technique for more robust and stable image segmentation. The U-Net architecture is a popular choice for image segmentation tasks, known for its ability to capture both low-level and high-level features.

StableDiffusion is a regularization technique that improves the stability and generalization of the U-Net model. It does so by applying a diffusion process during training, which encourages smooth and consistent predictions across neighboring pixels. This diffusion-based regularization helps overcome the common problem of U-Net models producing noisy, inconsistent segmentations.

Implementing StableDiffusion-UNET involves modifying the existing U-Net architecture to include the diffusion process. Specifically, the diffusion process is incorporated into the skip connections of the U-Net, where information is passed from the contracting path to the expanding path. This allows the diffusion regularization to be applied at multiple scales, capturing both global and local image context.

With StableDiffusion-UNET, the model is encouraged to produce more stable and smooth segmentations, reducing noise and inconsistency in the final predictions. This can lead to improved performance and better generalization on various segmentation tasks, such as segmentation in medical imaging or semantic segmentation in computer vision applications.

Overall, StableDiffusion-UNET combines the power of the U-Net architecture with the stability and regularization provided by the diffusion process, making it a robust and effective model for image segmentation tasks.
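One way to picture the skip-connection placement described above is to run a few diffusion steps on the encoder features before fusing them into the decoder (a hypothetical sketch: `smooth_skip`, `fuse`, and all parameters are illustrative assumptions, not the actual StableDiffusion-UNET implementation):

```python
import numpy as np

def smooth_skip(feature, alpha=0.2, steps=2):
    """Apply a few isotropic diffusion steps to a 2-D feature map before
    it travels through a U-Net skip connection, so that the decoder
    receives spatially smoothed encoder features."""
    u = feature.astype(float).copy()
    for _ in range(steps):
        p = np.pad(u, 1, mode="edge")
        lap = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, 2:] + p[1:-1, :-2] - 4 * u
        u += alpha * lap
    return u

def fuse(decoder_feat, encoder_feat):
    """Skip fusion by simple addition of the smoothed encoder features
    (real U-Nets typically concatenate; addition keeps the sketch short)."""
    return decoder_feat + smooth_skip(encoder_feat)

enc = np.eye(4)           # a noisy, single-pixel-wide encoder activation
dec = np.zeros((4, 4))
fused = fuse(dec, enc)
```

Running the smoothing at each skip level is what applies the regularization "at multiple scales": shallow skips smooth fine detail, deep skips smooth coarse context.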

Advances in Molecular Simulation of Diffusion in Zeolites

CHEMICAL INDUSTRY AND ENGINEERING PROGRESS (化工进展), 2011, Vol. 30, No. 7, p. 1406

Research progress of molecular simulation of diffusion in zeolites

刘立凤, 赵亮, 陈玉, 高金森 (State Key Laboratory of Heavy Oil Processing, China University of Petroleum (Beijing), Beijing 102249, China)

Abstract: This review surveys multi-angle, multi-level progress in studying the diffusion behavior of molecules in zeolites with molecular simulation techniques, including the influence of zeolite structure, loading, temperature, and multi-component diffusion on diffusion coefficients. On this basis, progress on diffusion interaction energies and transition state theory is further introduced, and the open problems and future directions of molecular simulation in zeolite diffusion research are discussed.

Keywords: molecular simulation; zeolite; diffusion coefficients; diffusion interaction energy; transition state theory
CLC number: O 641; TQ 021.4    Document code: A    Article ID: 1000-6613(2011)07-1406-10

Zeolites, owing to their distinctive pore-channel structures, have excellent shape-selective and catalytic properties, and are widely used as catalytic materials and as separation and adsorption agents in fields such as petrochemicals and pharmaceuticals.
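A core output of such simulations is the self-diffusion coefficient, typically extracted from the mean squared displacement (MSD) of the trajectory via the Einstein relation, D = MSD(t) / (2 d t) in the long-time limit. A generic sketch of that analysis step (the synthetic random-walk trajectory and all parameters are illustrative assumptions, not data from the reviewed studies):

```python
import numpy as np

def diffusion_coefficient(positions, dt):
    """Estimate the self-diffusion coefficient from a trajectory of shape
    (n_frames, n_atoms, d) via the Einstein relation: fit a straight line
    through the MSD curve and divide the slope by 2*d."""
    pos = np.asarray(positions, dtype=float)
    n_frames, _, d = pos.shape
    disp = pos - pos[0]                           # displacement from frame 0
    msd = (disp ** 2).sum(axis=2).mean(axis=1)    # average over atoms
    t = np.arange(n_frames) * dt
    slope = np.polyfit(t, msd, 1)[0]              # MSD ~ slope * t
    return slope / (2 * d)

# Synthetic 3-D random walk with known D = step_var / (2 * dt) = 0.005.
rng = np.random.default_rng(1)
dt, step_std = 1.0, 0.1
steps = rng.normal(0.0, step_std, size=(5000, 64, 3))
traj = np.concatenate([np.zeros((1, 64, 3)), steps.cumsum(axis=0)])
D_est = diffusion_coefficient(traj, dt)
```

Production analyses usually also average over multiple time origins and discard the short-time ballistic regime before fitting; this sketch keeps only the essential slope fit.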

A Concept Lattice Mining Algorithm Fusing Variable-Precision Rough Entropy and Co-evolution

A Concept Lattice Mining Algorithm Fusing Variable-Precision Rough Entropy and Co-evolution

DING Wei-ping, WANG Jian-dong, GUAN Zhi-jin

(1. College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China; 2. Jiangsu Provincial Key Laboratory of Computer Information Processing Technology, Soochow University, Suzhou 215006, China)

Abstract: To solve the optimization problem of concept lattice mining, drawing on the variable-precision rough set model and the idea of co-evolution, a co-evolutionary concept lattice mining algorithm that fuses variable-precision rough entropy with a global particle swarm is proposed. The algorithm introduces variable-precision rough entropy to dynamically measure each concept sub-lattice population and build rough approximation lattices, and improves the global mining and optimization capability of concept lattices by cooperatively sharing search experience among populations, effectively reducing the size of the original lattice population and mining out …

Artificial Intelligence Vocabulary (117)


实用标准文档常用英语词汇-andrew Ng课程intensity 强度Regression 回归Loss function损失函数non-convex三非凸函数neural network 神经网络supervised learning 监督学习regression problem回归问题处理的是连续的问题classification problem 分类问题discreet value 离散值support vector machines 支持向量机learning theory 学习理论learning algorithms 学习算法unsupervised learning 无监督学习gradient descent 梯度下降linear regression 线性回归Neural Network 神经网络gradient descent 梯度下降normal equationslinear algebra 线性代数superscript 上标exponentiation 指数training set训练集合training example 训练样本hypothesis假设,用来表示学习算法的输出LMS algorithm “least mean squares 最小二乘法算法batch gradient descent 批量梯度下降constantly gradient descent 随机梯度下降iterative algorithm 迭代算法partial derivative 偏导数contour等高线quadratic function 二元函数locally weighted regression 局部加权回归underfitting 欠拟合overfitting 过拟合non-parametric learning algorithms 无参数学习算法parametric learning algorithm 参数学习算法activation 激活值activation function 激活函数additive noise 力口性噪声autoencoder自编码器Autoencoders自编码算法average firing rate 平均激活率实用标准文档diagonal 对角线diffusion of gradients 梯度的弥散 eigenvalue 特征值 eigenvector 特征向量 error term 残差feature matrix 特征矩阵feature standardization 特征标准化feedforward architectures 前馈结构算法 feedforward neural network 前馈神经网络 feedforward pass 前馈传导 fine-tuned 微调first-order feature 一阶特征 forward pass 前向传导 forward propagation 前向传播 Gaussian prior 高斯先验概率 generative model 生成模型 gradient descent 梯度下降Greedy layer-wise training 逐层贪婪训练方法 grouping matrix 分组矩阵 Hadamard product 阿达马乘积 Hessian matrix Hessian 矩阵 hidden layer 隐含层average sum-of-squares error 均方差 backpropagation 后向传播 basis 基basis feature vectors 特征基向量 batch gradient ascent 批量梯度上升法Bayesian regularization method 贝叶斯规则化方 法Bernoulli random variable 伯努利随机变量 bias term 偏置项binary classfication 二元分类 class labels 类型标记 concatenation 级联conjugate gradient 共轭梯度 contiguous groups 联通区域convex optimization software 凸优化软件 convolution 卷积 cost function 代价函数 covariance matrix 协方差矩阵 DC component 直流分量 decorrelation 去相关 degeneracy 退化demensionality reduction 降维 derivative 导函数hidden units隐藏神经元Hierarchical grouping 层次型分组higher-order 
features 更高阶特征highly non-convex optimization problem高度非凸的优化问题histogram直方图hyperbolic tangent双曲正切函数hypothesis估值,假设identity activation function 恒等激励函数IID独立同分布illumination 照明inactive 抑制independent component analysis 独立成份分析input domains 输入域input layer 输入层intensity亮度/灰度intercept term 截距KL divergence 相对熵KL divergence KL 分散度k-Means K-均值learning rate学习速率least squares 最小二乘法linear correspondence 线性响应linear superposition 线性叠加line-search algorithm 线搜索算法local mean subtraction 局部均值消减local optima局部最优解logistic regression 逻辑回归loss function损失函数low-pass filtering 低通滤波magnitude 幅值MAP极大后验估计maximum likelihood estimation 极大似然估计mean平均值MFCC Mel倒频系数multi-class classification 多元分类neural networks 神经网络neuron神经元Newton,s method 牛顿法non-convex function 三非凸函数non-linear feature 非线性特征norm范式norm bounded有界范数norm constrained 范数约束normalization 归一化numerical roundoff errors 数值舍入误差numerically checking 数值检验numerically reliable 数值计算上稳定object detection 物体检测objective function 目标函数off-by-one error 缺位错误orthogonalization 正交化output layer 输出层overall cost function 总体代价函数over-complete basis 超完备基over-fitting 过拟合parts of objects目标的部件part-whole decompostion 部分-整体分解PCA主元分析penalty term惩罚因子per-example mean subtraction 逐样本均值消减pooling 池化pretrain 预训练principal components analysis 主成份分析quadratic constraints 二次约束RBMs 受限Boltzman 机reconstruction based models 基于重构的模型reconstruction cost 重建代价reconstruction term 重构项redundant 冗余reflection matrix 反射矩阵regularization 正则化regularization term 正则化项rescaling 缩放robust鲁棒性run行程second-order feature 二阶特征sigmoid activation function S 型激励函数significant digits 有效数字singular value 奇异值singular vector 奇异向量smoothed L1 penalty平滑的L1范数惩罚Smoothed topographic L1 sparsity penalty 平滑地形L1稀疏惩罚函数smoothing 平滑Softmax Regresson Softmax 回归sorted in decreasing order 降序排列source features 源特征sparse autoencoder 消减归一化文案大全实用标准文档Sparsity稀疏性sparsity parameter 稀疏性参数sparsity penalty 稀疏惩罚square function 平方函数squared-error 方差stationary平稳性(不变性)stationary stochastic process 平稳随机过程step-size步长值supervised learning 监督学习symmetric positive 
semi-definite matrix对称半正定矩阵symmetry breaking 对称失效tanh function双曲正切函数the average activation 平均活跃度the derivative checking method 梯度验证方法the empirical distribution 经验分布函数the energy function 能量函数the Lagrange dual拉格朗日对偶函数the log likelihood对数似然函数the pixel intensity value 像素灰度值the rate of convergence 收敛速度topographic cost term 拓扑代价项topographic ordered 拓扑秩序transformation 变换translation invariant 平移不变性trivial answer 平凡解under-complete basis 不完备基unrolling组合扩展unsupervised learning 无监督学习variance 方差vecotrized implementation 向量化实现vectorization 矢量化visual cortex视觉皮层weight decay权重衰减weighted average加权平均值whitening 白化zero-mean均值为零Accumulated error backpropagation 累积误差逆传播Activation Function 激活函数Adaptive Resonance Theory/ART 自适应谐振理论Addictive model 加性学习Adversarial Networks 对抗网络Affine Layer 仿射层Affinity matrix 亲和矩阵Agent代理/智能体Algorithm 算法Alpha-beta pruning a-p 剪枝Anomaly detection 异常检测Approximation 近似Area Under ROC Curve / AUC Roc 曲线下面积Artificial General Intelligence/AGI 通用人工智能Artificial Intelligence/AI 人工智能Association analysis 关联分析Attention mechanism 注意力机制Attribute conditional independence assumption属性条件独立性假设Attribute space 属性空间Attribute value 属性值Autoencoder自编码器Automatic speech recognition 自动语音识另ijAutomatic summarization 自动摘要Average gradient 平均梯度Average-Pooling 平均池化Backpropagation Through Time 通过时间的反向传播Backpropagation/BP 反向传播Base learner基学习器Base learning algorithm 基学习算法Batch Normalization/BN 批量归一化Bayes decision rule贝叶斯判定准则Bayes Model Averaging / BMA 贝叶斯模型平均Bayes optimal classifier贝叶斯最优分类器Bayesian decision theory 贝叶斯决策论Bayesian network贝叶斯网络Between-class scatter matrix 类间散度矩阵Bias偏置/偏差Bias-variance decomposition 偏差一方差分解Bias-Variance Dilemma 偏差-方差困境Bi-directional Long-Short Term Memory/Bi-LSTM 双向长短期记忆Binary classification 二分类Binomial test 二项检验Bi-partition 二分法Boltzmann machine 玻尔兹曼机Bootstrap sampling自助采样法/可重复采样Bootstrapping 自助法Break-Event Point / BEP 平衡点Calibration 校准Cascade-Correlation 级联相关Categorical attribute 离散属性Class-conditional probability 类条件概率Classification and regression tree/CART 
分类与回Classifier 分类器Class-imbalance类别不平衡Closed -form 闭式Cluster簇/类/集群Cluster analysis 聚类分析Clustering 聚类Clustering ensemble 聚类集成Co-adapting 共适应Coding matrix编码矩阵COLT国际学习理论会议Committee-based learning 基于委员会的学习Competitive learning 竞争型学习Component learner 组件学习器Comprehensibility 可解释性Computation Cost 计算成本Computational Linguistics 计算语言学Computer vision计算机视觉Concept drift概念漂移Concept Learning System /CLS 概念学习系统Conditional entropy 条件熵Conditional mutual information 条件互信息Conditional Probability Table / CPT 条件概率表Conditional random field/CRF 条件随机场Conditional risk 条件风险Confidence 置信度Confusion matrix 混淆矩阵Connection weight 连接权Connectionism 连结主义Consistency 一致性/相合性Contingency table 歹汁联表Continuous attribute 连续属性Convergence 收敛Conversational agent 会话智能体Convex quadratic programming 凸二次规戈U Convexity 凸性Convolutional neural network/CNN 卷积神经网络Co-occurrence 同现Correlation coefficient 相关系数Cosine similarity 余弦相似度Cost curve成本曲线Cost Function成本函数Cost matrix成本矩阵Cost-sensitive 成本敏感归树Cross entropy 交叉熵Cross validation 交叉验证Crowdsourcing 众包Curse of dimensionality 维数灾难Cut point截断点Cutting plane algorithm 割平面法Data mining数据挖掘Data set数据集Decision Boundary 决策边界Decision stump 决策树桩Decision tree决策树/判定树Deduction 演绎Deep Belief Network深度信念网络Deep Convolutional Generative Adversarial NetworkDCGAN深度卷积生成对抗网络Deep learning深度学习Deep neural network/DNN 深度神经网络Deep Q-Learning 深度Q 学习Deep Q-Network 深度Q 网络Density estimation 密度估计Density-based clustering 密度聚类Differentiable neural computer 可微分神经计算机Dimensionality reduction algorithm 降维算法Directed edge 有向边Disagreement measure 不合度量Discriminative model 判别模型Discriminator 判别器Distance measure 距离度量Distance metric learning 距离度量学习Distribution 分布Divergence 散度Diversity measure多样性度量/差异性度量Domain adaption领域自适应Downsampling 下采样D-separation (Directed separation)有向分离Dual problem对偶问题Dummy node哑结点Dynamic Fusion 动态融合Dynamic programming 动态规划Eigenvalue decomposition 特征值分解Embedding 嵌入Emotional analysis 情绪分析Empirical conditional entropy 经验条件熵Empirical entropy 经验熵Empirical error 经验误差Empirical risk 经验风险End-to-End 
端到端Energy-based model基于能量的模型Ensemble learning 集成学习Ensemble pruning 集成修剪Error Correcting Output Codes / ECOC 纠错输出码Error rate 错误率Error-ambiguity decomposition 误差-分歧分解Euclidean distance 欧氏距离Evolutionary computation 演化计算Expectation-Maximization 期望最大化Expected loss期望损失Exploding Gradient Problem 梯度爆炸问题Exponential loss function 指数损失函数Extreme Learning Machine/ELM 超限学习机Factorization 因子分解False negative 假负类False positive 假正类False Positive Rate/FPR 假正例率Feature engineering 特征工程Feature selection 特征选择Feature vector 特征向量Featured Learning 特征学习Feedforward Neural Networks/FNN 前馈神经网络Fine-tuning 微调Flipping output 翻转法Fluctuation 震荡Forward stagewise algorithm 前向分步算法Frequentist频率主义学派Full-rank matrix 满秩矩阵Functional neuron 功能神经元Gain ratio增益率Game theory 博弈论Gaussian kernel function 高斯核函数Gaussian Mixture Model 高斯混合模型General Problem Solving 通用问题求解Generalization 泛化Generalization error 泛化误差Generalization error bound 泛化误差上界Generalized Lagrange function 广义拉格朗日函数Generalized linear model 广义线性模型Generalized Rayleigh quotient 广义瑞利商Generative Adversarial Networks/GAN 生成对抗网络Generative Model 生成模型Generator生成器Genetic Algorithm/GA 遗传算法Gibbs sampling吉布斯采样Gini index基尼指数Global minimum 全局最小Global Optimization 全局优化Gradient boosting 梯度提升Gradient Descent 梯度下降Graph theory 图论Ground-truth 真相 /真实Hard margin 硬间隔Hard voting 硬投票Harmonic mean调和平均Hesse matrix 海塞矩阵Hidden dynamic model 隐动态模型Hidden layer 隐藏层Hidden Markov Model/HMM隐马尔可夫模型Hierarchical clustering 层次聚类Hilbert space希尔伯特空间Hinge loss function合页损失函数Hold-out留出法Homogeneous 同质Hybrid computing 混合计算Hyperparameter 超参数Hypothesis 假设Hypothesis test 假设验证ICML国际机器学习会议Improved iterative scaling/IIS 改进的迭代尺度法Incremental learning 增量学习Independent and identically distributed/i.i.d.独立同分布Independent Component Analysis/ICA 独立成分分析Indicator function 指示函数Individual learner 个体学习器Induction 归纳Inductive bias 归纳偏好Inductive learning 归纳学习Inductive Logic Programming / ILP 归纳逻辑程序设计Information entropy 信息熵Information gain 信息增益Input layer 输入层Inter-cluster similarity 簇间相似度International Conference for 
MachineLearning/ICML国际机器学习大会Intra-cluster similarity 簇内相似度Intrinsic value 固有值Isometric Mapping/Isomap 等度量映射Isotonic regression 等分回归Iterative Dichotomiser 迭代二分器Kernel method 核方法Kernel trick 核技巧Kernelized Linear Discriminant Analysis / KLDA核线性判别分析K-fold cross validation k折交叉验证/ k倍交叉验证K-Means Clustering K-均值聚类K-Nearest Neighbours Algorithm/KNN K近邻算法Knowledge base 矢口识库Knowledge Representation 矢口识表征Label space标记空间Lagrange duality拉格朗日对偶性Lagrange multiplier拉格朗日乘子Laplace smoothing拉普拉斯平滑Laplacian correction 拉普拉斯修正Latent Dirichlet Allocation 隐狄利克雷分布Latent semantic analysis 潜在语义分析Latent variable 隐变量Lazy learning 懒惰学习Learner学习器Learning by analogy 类比学习Learning rate 学习率Learning Vector Quantization/LVQ 学习向量量化Least squares regression tree 最小二乘回归树Leave-One-Out/LOO 留一法linear chain conditional random field线性链条件随机场Linear Discriminant Analysis / LDA 线性判别分析Linear model线性模型Linear Regression 线性回归Link function联系函数Local Markov property局部马尔可夫性Local minimum局部最小Log likelihood 对数似然Log odds / logit 对数几率Logistic Regression Logistic 回归Log-likelihood 对数似然Log-linear regression 对数线性回归Long-Short Term Memory/LSTM 长短期记忆Loss function损失函数Machine translation/MT 机器翻译Macron-P宏查准率Macron-R宏查全率Majority voting绝对多数投票法Manifold assumption 流形假设Manifold learning 流形学习Margin theory间隔理论Marginal distribution 边际分布Marginal independence 边际独立性Marginalization 边际化Markov Chain Monte Carlo/MCMC马尔可夫链蒙特卡罗方法Markov Random Field马尔可夫随机场Maximal clique 最大团Maximum Likelihood Estimation/MLE极大似然估计/极大似然法Maximum margin 最大间隔Maximum weighted spanning tree 最大带权生成树Max-Pooling最大池化Mean squared error 均方误差Meta-learner元学习器Metric learning 度量学习Micro-P微查准率Micro-R微查全率Minimal Description Length/MDL 最小描述长度Minimax game极小极大博弈Misclassification cost 误分类成本Mixture of experts 混合专家Momentum 动量Moral graph道德图/端正图Multi-class classification 多分类Multi-document summarization 多文档摘要Multi-layer feedforward neural networks多层前馈神经网络Multilayer Perceptron/MLP 多层感知器Multimodal learning 多模态学习Multiple Dimensional Scaling 多维缩放Multiple linear regression 多元线性回归Multi-response Linear 
Regression / MLR多响应线性回归Mutual information 互信息Naive bayes朴素贝叶斯Naive Bayes Classifier朴素贝叶斯分类器Named entity recognition 命名实体识另LlNormalization 归一化 Nuclear norm 核范数 Numerical attribute 数值属性 Letter OObjective function 目标函数 Oblique decision tree 斜决策树 Occam,s razor 奥卡姆剃刀 Odds 几率 Off-Policy 离策略One shot learning 一次性学习One-Dependent Estimator / ODE 独依赖估计 On-Policy 在策略Ordinal attribute 有序属性Out-of-bag estimate 包外估计Output layer 输出层Output smearing 输出调制法Overfitting 过拟合/过配Oversampling 过采样 Paired t-test 成对 t 检验 Pairwise 成对型Pairwise Markov property 成对马尔可夫性 Parameter 参数Nash equilibrium 纳什均衡Natural language generation/NLG 自然语言生成 Natural language processing 自然语言处理 Negative class 负类Negative correlation 负相关法 Negative Log Likelihood 负对数似然 Neighbourhood Component Analysis/NCA近邻成分分析Neural Machine Translation 神经机器翻译 Neural Turing Machine 神经图灵机 Newton method 牛顿法 NIPS 国际神经信息处理系统会议No Free Lunch Theorem / NFL 没有免费的午餐定 理Noise-contrastive estimation 噪音对比估计 Nominal attribute 列名属性 Non-convex optimization 非凸优化Nonlinear model 非线性模型 Non-metric distance 非度量距离Non-negative matrix factorization 非负矩阵分解 Non-ordinal attribute 无序属性 Non-Saturating Game 非饱和博弈 Norm 范数Parameter estimation 参数估计Parameter tuning 调参Parse tree解析树Particle Swarm Optimization/PSO 粒子群优化算法Part-of-speech tagging 词性标注Perceptron 感知机Performance measure 性能度量Plug and Play Generative Network 即插即用生成网络Plurality voting相对多数投票法Polarity detection 极性检测Polynomial kernel function 多项式核函数Pooling 池化Positive class 正类Positive definite matrix 正定矩阵Post-hoc test后续检验Post-pruning 后剪枝potential function 势函数Precision查准率/准确率Prepruning 预剪枝Principal component analysis/PCA 主成分分析Principle of multiple explanations 多释原则Probability Graphical Model 概率图模型Proximal Gradient Descent/PGD 近端梯度下降Pruning 剪枝Pseudo-label 伪标记Quantized Neural Network 量子化神经网络Quantum computer量子计算机Quantum Computing 量子计算Quasi Newton method 拟牛顿法Radial Basis Function / RBF 径向基函数Random Forest Algorithm 随机森林算法Random walk随机漫步Recall查全率/召回率Receiver Operating Characteristic/ROC受试者工作特征Rectified Linear Unit/ReLU 
线性修正单元Recurrent Neural Network 循环神经网络Recursive neural network 递归神经网络Reference model 参考模型Regression 回归Regularization 正则化Reinforcement learning/RL 强化学习Representation learning 表征学习Representer theorem 表示定理reproducing kernel Hilbert space/RKHS再生核希尔伯特空间Re-sampling重采样法Rescaling再缩放Residual Mapping 残差映射Residual Network 残差网络Restricted Boltzmann Machine/RBM 受限玻尔兹曼机Restricted Isometry Property/RIP 限定等距性Re-weighting重赋权法Robustness稳健性/鲁棒性Root node根结点Rule Engine规则引擎Rule learning规则学习Saddle point 鞍点Sample space样本空间Sampling 采样Score function 评分函数Self-Driving自动驾驶Self-Organizing Map / SOM 自组织映射Semi-naive Bayes classifiers半朴素贝叶斯分类器Semi-Supervised Learning 半监督学习semi-Supervised Support Vector Machine半监督支持向量机Sentiment analysis 情感分析Separating hyperplane 分离超平面Sigmoid function Sigmoid 函数Similarity measure 相似度度量Simulated annealing 模拟退火Simultaneous localization and mapping同步定位与地图构建Singular Value Decomposition 奇异值分解Slack variables 松弛变量Smoothing 平滑Soft margin 软间隔Soft margin maximization 软间隔最大化Soft voting 软投票Sparse representation 稀疏表征Sparsity稀疏性Specialization 特化Spectral Clustering 谱聚类Speech Recognition 语音识另ijSplitting variable 切分变量Squashing function 挤压函数Stability-plasticity dilemma 可塑性-稳定性困境Statistical learning 统计学习实用标准文档Status feature function 状态特征函Stochastic gradient descent 随机梯度下降Stratified sampling 分层采样Structural risk 结构风险Structural risk minimization/SRM 结构风险最小化Subspace子空间Supervised learning监督学习/有导师学习support vector expansion 支持向量展式Support Vector Machine/SVM 支持向量机Surrogat loss替代损失Surrogate function 替代函数Symbolic learning 符号学习Symbolism符号主义Synset同义词集T-Distribution Stochastic Neighbour Embedding t-SNE T -分布随机近邻嵌入Tensor张量Tensor Processing Units/TPU 张量处理单元The least square method 最小二乘法Threshold 阈值Threshold logic unit阈值逻辑单元Threshold-moving 阈值移动Time Step时间步骤Tokenization 标记化Training error 训练误差Training instance 训练示例/训练例Transductive learning 直推学习Transfer learning 迁移学习Treebank 树库Tria-by-error 试错法True negative 真负类True positive 真正类True Positive Rate/TPR 真正例率Turing Machine 图灵机Twice-learning 
二次学习Underfitting欠拟合/欠配Undersampling 欠采样Understandability 可理解性Unequal cost非均等代价Unit-step function单位阶跃函数Univariate decision tree 单变量决策树Unsupervised learning无监督学习/无导师学习Unsupervised layer-wise training 无监督逐层训练Upsampling 上采样实用标准文档Vanishing Gradient Problem 梯度消失问题 Variational inference 变分推断 VC Theory VC 维理论 Version space 版本空间 Viterbi algorithm 维特比算法Von Neumann architecture 冯•诺伊曼架构 Wasserstein GAN/WGAN Wasserstein 生成对抗网络Weak learner 弱学习器 Weight 权重Weight sharing 权共享 Weighted voting 加权投票法Within-class scatter matrix 类内散度矩阵 Word embedding 词嵌入Word sense disambiguation 词义消歧 Zero-data learning 零数据学习convex 凸的Zero-shot learning 零次学习contours 轮廓approximations 近似值constraint 约束arbitrary 随意的constant 常理affine 仿射的commercial 商务的arbitrary 任意的complementarity 补充amino acid 氨基酸coordinate ascent 同等级上升amenable 经得起检验的axiom 公理,原则 abstract 提取architecture 架构,体系结构;建造业 absolute 绝对的 arsenal 军火库 assignment 分酉己 algebra 线性代数 asymptotically 无症状的appropriate 恰当的 bias 偏差brevity 简短,简洁;短暂 [800 ] broader 广泛 briefly 简短的 batch 批量convergence 收敛,集中到一点clipping剪下物;剪报;修剪component分量;部件continuous 连续的covariance 协方差canonical正规的,正则的concave非凸的corresponds相符合;相当;通信corollary 推论concrete具体的事物,实在的东西cross validation 交叉验证correlation相互关系convention 约定cluster 一簇centroids质心,形心converge 收敛computationally 计算(机)的calculus 计算derive获得,取得dual二元的duality二元性;二象性;对偶性derivation求导;得到;起源denote预示,表示,是…的标志;意味divergence散度;发散性dimension尺度,规格;维数dot小圆点distortion 变形density概率密度函数discrete离散的discriminative有识别能力的diagonal 对角dispersion分散,散开determinant决定因素disjoint不相交的encounter 遇至" ellipses 椭圆equality 等式extra额外的empirical经验;观察ennmerate例举,计数exceed超过,越出expectation 期望efficient生效的endow赋予exponential family 指数家族equivalently 等价的feasible可行的forary初次尝试finite有限的,限定的forgo摒弃,放弃fliter过滤frequentist最常发生的forward search前向式搜索formalize使定形generalized 归纳的generalization概括,归纳;普遍化;判断(根据不足)guarantee保证;抵押品generate形成,产生geometric margins 几何边界gap 裂口generative生产的;有生产力的heuristic启发式的;启发法;启发程序hone怀恋;磨hyperplane 超平面initial最初的implement 执行intuitive凭直觉获知的incremental 增加的intercept 截距intuitious 
直觉instantiation 例子indicator指示物,指示器interative重复的,迭代的integral 积分identical相等的;完全相同的indicate表示,指出invariance不变性,恒定性impose把…强加于intermediate 中间的interpretation 解释,翻译joint distribution 联合概率lieu替代logarithmic对数的,用对数表示的latent潜在的Leave-one-out cross validation 留一法交叉验证magnitude 巨大mapping绘图,制图;映射matrix矩阵实用标准文档mutual相互的,共同的monotonically 单调的minor较小的,次要的multinomial 多项的multi-class classification 二分类问题nasty讨厌的notation标志,注释naive朴素的obtain得到oscillate 摆动optimization problem 最优化问题objective function 目标函数optimal最理想的orthogonal(矢量,矩阵等)正交的orientation 方向ordinary普通的occasionally 偶然的partial derivative 偏导数property 性质proportional成比例的primal原始的,最初的permit允许pseudocode 伪代码permissible可允许的polynomial 多项式preliminary 预备precision 精度perturbation 不安,扰乱poist假定,设想positive semi-definite 半正定的parentheses 圆括号posterior probability 后验概率plementarity 补充pictorially 图像的parameterize确定…的参数poisson distribution 柏松分布pertinent相关的quadratic 二次的quantity量,数量;分量query疑问的regularization使系统化;调整reoptimize重新优化restrict限制;限定;约束reminiscent回忆往事的;提醒的;使人联想…的(of)文案大全remark注意random variable 随机变量respect 考虑respectively各自的;分别的redundant过多的;冗余的susceptible 敏感的stochastic可能的;随机的symmetric对称的sophisticated 复杂的spurious假的;伪造的subtract减去;减法器simultaneously同时发生地;同步地suffice 满足scarce稀有的,难得的split分解,分离subset子集statistic统计量successive iteratious 连续的迭代scale标度sort of有几分的squares 平方trajectory 轨迹文案大全实用标准文档temporarily 暂时的terminology专用名词tolerance容忍;公差thumb翻阅threshold 阈,临界theorem 定理tangent 正弦unit-length vector 单位向量valid有效的,正确的variance 方差variable变量;变元vocabulary 词匚valued经估价的;宝贵的wrapper 包装总计1038词汇。

物理专业英语词汇(D)

物理专业英语词汇(D)物理专业英语词汇(D)物理专业英语词汇(D)d bandd 带d quarkd 夸克d'alembert's operator达朗伯算符d'alembert's paradox达朗伯佯谬d'alembert's principle达朗伯原理d'alembertian达朗伯算符d'arsonval galvanometer达松伐耳电疗d/a conversion数模转换d/a converter数字模拟转换器dalitz pair达利兹对dalitz plot达利兹图damped阻尼的damped oscillation阻尼振荡damped vibration阻尼振荡damped wave阻尼波damping阻尼damping coefficient阻尼因数damping coil阻尼线圈damping constant阻尼常数damping factor阻尼因数damping force阻尼力dangerous diagram危险图dangling bond自由键daniell cell丹尼耳电池dark adaptation暗适应dark cathode space阴极暗区dark current暗电流dark discharge暗放电dark field暗场dark field image暗场象dark field method暗场法dark line spectrum吸收光谱dark matter暗物质dark nebula黑暗星云dark space暗区data数据data bank数据库data base数据库data communication数据传输data compression数据压缩data processing数据处理data recorder数据记录器data relay satellite数据中继卫星data set数据集date line日界线dating测定年代daughter atom子体原子daughter nucleus子核davisson germer's experiment船逊革茉实验davydov splitting达维多夫分裂daylight昼光daylight factor昼光因数daylight illumination昼光照明daylight lamp昼光灯daylight light昼光照明daylight lighting昼光照明dc current transformer直龄流dc generator直立电机dc motor直羚动机dc transformers直龄压器dc voltage直羚压de broglie frequency德布罗意频率de broglie wave德布罗意波de broglie wavelength德布罗意波长de gennes factor德坚涅因子de haas van alphen effect德哈斯范阿尔文效应de sitter space time德呜时空de sitter universe德呜宇宙deactivation减活化dead beat不摆的dead beat galvanometer不摆电疗dead load静负荷dead room静室dead time死时间dead water死水dead weight静重dead weight piston gage自重活塞式压力计deafness聋度debuncher散束器debye德拜debye function德拜函数debye radius德拜半径debye scherrer camera德拜谢乐照相机debye shielding distance德拜屏蔽距离debye temperature德拜温度debye waller factor德拜瓦勒因数debye's specific heat formula德拜比热公式deca十decahedron十面体decametre wave十米波decatron十进位计数管decay衰变decay chain衰变链decay constant蜕变常数decay curve衰变曲线decay event衰变事件decay particle衰变粒子decay product衰变产物decay rate衰变率decay scheme衰变图decay series衰变链decay time衰变时间deceleration减速deci分decibel分贝decibelmeter分贝计decimal counting circuit十进位计数电路decimal system十进制decimeter分米decimetre wave分米波declination赤纬declination axis赤纬轴declination 
circle赤纬盘declinometer磁偏计decoder译码机decoloring消色decomposition分解decontamination去污decoration装饰decoupling去耦decrease减少decrement衰减deduction推论deductive method演绎法deed 形电极deep hologram深全息图deep inelastic scattering深度非弹性散射deep irradiation深部照射deep survey深巡天deexcitation去激活defect缺陷defect cluster缺陷群defecton缺陷子deflagration wave爆燃波deflecting coil偏转线圈deflecting electrode致偏电极deflecting field偏转场deflecting force偏转力deflecting plate偏转板deflection偏转deflection method偏转法deflection system偏转系统deflector偏转器deflector coil偏转线圈deflector field偏转场deflector plate偏转板defocusing散焦defocusing lens发散透镜deformation应变deformation energy形变能deformation potential形变势deformed nucleus变形核degassing除气degeneracy简并degenerate fermi gas简并费密气体degenerate gas简并气体degenerate level简并能级degenerate semiconductor简并型半导体degenerate star简并星degenerate state简并态degeneration简并化degradation of energy能的退降degraded color递减色degree度degree celsius摄氏度degree fahrenheit华氏度degree of accuracy精确度degree of burn up燃耗度degree of coherence相干度degree of coupling耦合度degree of crystallinity结晶度degree of depolarization去偏振度degree of dissociation离解度degree of freedom自由度degree of ionization电离度degree of obscuration食分degree of order秩序度degree of purity纯度degree of stability稳定度deionization消电离dekatron十进位计数管delay时滞delay circuit延迟电路delay line延迟线delay time延迟时间delayed coincidence延迟符合delayed fluorescence迟滞荧光delayed fracture延迟破坏delayed neutron缓发中子dellinger phenomenon德林格尔现象delphinus海豚座delta connection三角形连接delta electron电子delta function函数delta rays射线delta resonance共振delta state状态demagnetization退磁demagnetization curve退磁曲线demagnetization energy去磁能demagnetization field去磁场demagnetizing factor去磁因数demagnetizing force去磁力demarcation划分界线dematerialization消没demodulation解调demodulator解调demonstration表演denaturation变性dendrite枝晶体dendritic structure师状结构denominator分母densimeter比重计densimetry密度测定densitometer显象密度计density密度density distribution密度分布density matrix密度矩阵density of states态密度density wave密度波deoxyribonucleic acid脱氧核糖核酸depleted uranium贫铀depletion 
layer过渡层depolarization去极化depolarizer退极化剂deposit淀积depression气旋depression of freezing point冰点降低depth of focus焦深depth of penetration穿透深度derived unit导出单位desaturated color不饱和色descent velocity下降速度desensitization减敏感design of experiments试验设计deslandres table德兰得尔表desorption解吸destruction破坏destruction test破坏试验detachment脱离detailed balance细致平衡detailed balancing principle细致平衡原理detectable可检测的detection探测detection limit检出界限detection of nuclear explosions核爆炸的探测detector探测器检波器detector tube检波管detonation爆炸detonation wave爆炸波deuterium氘deuterium nuclei氘核deuterium oxide重水deuteron氘核deuteron beam氘核束deuteron stripping氘核剥裂deuton氘核developer显象剂developing显象developing solution显象剂development显象development accelerator显象加速剂deviation偏差deviation prism偏转棱镜devitrification失去透迷dew point露点dew point hygrometer露点湿度计dewar bottle杜瓦瓶dewar vessel杜瓦瓶df laser氟化氘激光器df 激光器diabatic potential传热势diagram图diagrammatical method图解法dial标度盘dialysis渗析diamagnet抗磁体diamagnetic抗磁性的diamagnetic material抗磁质diamagnetic substance抗磁质diamagnetic susceptibility抗磁磁化率diamagnetism抗磁性diameter直径diamond金刚石;稳定区diamond structure金刚石结构diaphanous透媚diaphragm光栏;隔膜;振动板diaphragm gauge膜片真空仪表diapositive透谬片diascopic projection透射放映diastrophism地壳蚀变diathermancy透热性diathermic透热的diatomic二原子的diatomic gas二原子气体diatomic molecule二原子分子dichroism二色性dichromatism二色性dielectric电介质;电介质的dielectric absorption电介质吸收dielectric after effect电介质后效dielectric amplification电介质放大dielectric amplifier电介质放大器dielectric anomaly电介质异常dielectric breakdown电介质哗dielectric constant介电常数dielectric dispersion电介质色散dielectric function电介质函数dielectric heating电介质加热dielectric hysteresis电介质滞后dielectric lens电介质透镜dielectric loss介电损耗dielectric loss angle介电损耗角dielectric loss factor电介质损耗系数dielectric polarization电介质极化dielectric property介电特性dielectric relaxation电介质弛豫dielectric strain电分质变形dielectric strength电介质强度dielectric substance电介质dielectrics电介质dielectronic recombination双电子性复合difference equation差分方程difference in optical path程差difference method差动法difference number中子盈余difference tone差音differential 
amplifier差动放大器differential cross section微分截面differential galvanometer差动电疗differential method差动法differential operator微分算符differential permeability微分磁导率differential scattering cross section微分散射截面differential thermometer差示温度计differential transformer差动变压器differentiating circuit微分电路diffracted wave衍射波diffraction衍射diffraction absorption衍射吸收diffraction angle衍射角diffraction crystallography衍射晶体学diffraction disc衍射盘diffraction dissociation衍射离解diffraction efficiency衍射效率diffraction figure衍射图diffraction fringe衍射条纹diffraction function衍射函数diffraction grating衍射光栅diffraction image衍射象diffraction microscopy衍射显微术diffraction pattern衍射图diffraction ring衍射环diffraction scattering衍射散射diffraction spectrograph衍射光栅摄谱仪diffraction spectrum衍射光谱diffractometer衍射计diffuse double layer扩散双层diffuse molecular spectrum扩散分子光谱diffuse nebula弥漫星云diffuse reflection漫反射diffuse scattering漫散射diffuse series漫线系diffused junction diode扩散结二极管diffused junction transistor扩散结型晶体管diffused light漫射光diffused radiation散射辐射diffuser扩散器扩压段;漫射体diffusing surface扩散面diffusion扩散diffusion cloud chamber扩散云室diffusion coefficient扩散系数diffusion constant扩散常数diffusion current扩散电流diffusion equation扩散方程diffusion equilibrium扩散平衡diffusion layer扩散层diffusion length扩散长度diffusion potential扩散电位diffusion pump扩散泵diffusion zone扩散带diffusive equilibrium扩散平衡diffusivity of heat热扩散率digital circuit数字电路digital computer数字计算机digital filter数字滤波器数字过滤器digital multimeter数字万用表digital oscilloscope数字示波器digital signal数字信号digital to analog conversion数模转换digital to analog converter数字模拟转换器digital voltmeter数字伏特计digitizer数字转换器dilatancy膨胀性dilatant flow膨胀流dilatation膨胀dilatation invariance扩张不变性dilation伸长dilatometer膨胀计dilute magnetic alloy稀磁合金dilute solution稀溶液dilution稀释dimension因次dimensional analysis量纲分析dimensional equation量纲方程dimensional method量纲分析法dimensional regularization维数正规化法dimensional theory量纲理论dimensionless parameter无量纲参数dimensionless quantity无量纲量dimensionsless unit无量纲单位dimer二聚物dimorphism二形diode二极管diode laser二极管激光器diode transistor logic二极管晶体管逻辑diopter屈光度dioptric屈光的dioptric 
system折光组dioptrics屈光学dip倾角diplopia复视dipolar gas偶极气体dipole偶极子dipole antenna偶极天线dipole electric field偶极电场dipole magnetic field偶极磁场dipole moment偶极矩dipole oscillation偶极振动dipole radiation偶极辐射dipole transition偶极跃迁dipping refractometer浸式折射计;浸没折射计diquark双夸克dirac bracket狄喇克括号dirac electron狄喇克电子dirac equation狄喇克方程dirac theory of electron狄喇克电子论direct current直流direct current amplifier直僚大器direct driven antenna直接驱动天线direct exchange interaction直接交换相互酌direct illumination直接照明直照direct lighting直接照明直照direct reaction直接反应direct transition直接跃迁direct vision prism直视棱镜direct vision spectroscope直视分光镜direction方向directional定向的directional antenna指向天线directional coupler定向耦合器directional effect方向效应directional focusing方向聚焦directional gain指向性指数directivity方向性directivity index指向性指数disagreement不符合disbalance不平衡disc laser盘形激光器discharge放电discharge chamber放电室discharge in vacuum真空放电discharge lamp放电灯discharge potential放电电位discharge tube放电管discharge voltage放电电位disclination向错discomposition effect位移效应discontinuity不连续discontinuity surface不连续面discontinuous flow不连续流discord不谐和音discrete absorption线吸收discrete energy level离散能级discrete filter离散滤波器discrete pulses不连续脉冲discrete spectrum不连续光谱discrete state不连续状态discriminator甄别器鉴频器disintegration分解disintegration chain蜕变链disintegration constant蜕变常数disintegration particle衰变粒子disintegration rate衰变率disintegration series蜕变链disjoining pressure楔裂压disk laser盘形激光器disk of galaxy星系盘dislocation位错dislocation density位错密度dislocation line位错线dislocation pile up位错堆积dislocation source弗朗克里德源disorder无序disorder order transformation无序有序转变disorder parameter无序度disordered crystal无序晶体disordered state无序态disordered system无序系统disperse system弥散系dispersed fluorescence method分散荧光法dispersed phase弥散相dispersing medium色散介质dispersing prism色散棱镜dispersion色散dispersion forces分散力dispersion formula色散公式dispersion medium色散介质dispersion relations色散关系dispersion surface色散面dispersive medium色散介质dispersive power色散率dispersoid弥散体displacement位移displacement current位移电流displacement flux位移通量displacement law位移律displacement 
operator位移算符displacement spike离位峰displacive type ferroelectrics位移型铁电体display unit显示装置disposition配置disruption破裂disruptive voltage哗电压dissipated power耗散功率dissipation耗散dissipation function耗散函数dissipation of energy能量耗散dissipative force耗散力dissipative process耗散过程dissipative structure耗散结构dissipative system耗散系统dissociation离解dissociation constant离解常数dissociation energy离解能dissociation equilibrium离解平衡dissociation limit离解极限dissociative capture离解俘获dissociative diffusion离解扩散dissociative ionization离解电离dissociative recombination离解复合distance of distinct vision糜距离distant collision远程碰撞distant control遥控distillation蒸镏distilled water蒸馏水distorted image畸变象distorted wave畸变波distorted wave born approximation扭曲波玻饵似distorted wave impulse approximation扭曲波脉冲近似distortion畸变distortion factor失真系数distortionless不失真的distributed amplifier分布式放大器distributed capacity分布电容distributed constant分布常数distributed feedback laser分布反馈型激光器distributed ion pump分布型离子泵distributed load分布负载distributed parameter分布参数distribution超函数distribution coefficient分布系数distribution function分布函数distribution law分布定律distribution ratio分布系数disturbance扰动disturbed motion受摄运动disturbed sun活动太阳diurnal aberration周日光行差diurnal parallax周日视差divergence发散divergent pencil of rays发散光束diverging lens发散透镜divertor分流偏滤器divided circle分度弧division of wavefront波阵面分割dna脱氧核糖核酸dna repairdna 修复document information system文献信息系dodecahedron十二面体domain畴domain energy磁畴能量domain pattern磁畴图样domain wall磁畴壁dominant wave吱dominant wavelength吱长donkey electron驴电子donor施主给予体donor atom施汁子donor bond施贮donor center施中心donor level施周级dopant掺杂物doped crystal掺杂晶体doping加添加剂doping density掺杂浓度doppler broadening多普勒致宽doppler displacement多普勒位移doppler effect多普勒效应doppler free spectroscopy多普勒自由光谱学doppler free two photon spectroscopy多普勒自由双光子光谱学doppler principle多普勒原理doppler shift多普勒位移doppler width多普勒宽度dorado剑鱼座dosage剂量dosage measurement剂量测定dosage rate剂量率dose剂量dose equivalent剂量当量dose meter剂量计dose rate剂量率dose rate meter剂量率测量计dosimeter剂量计dosimetry剂量测定double beam spectroscopy双线束光谱学double beta decay双衰变double 
bond双键double bridge双臂电桥double charged双电荷的double clad optical fiber双层光学纤维double coincidence双符合double concave lens双凹透镜double convex lens双凸透镜double curvature双曲率double diffraction method双衍射法double exchange interaction双变换相互酌double focusing双聚焦double galaxy双重星系double group双值群double harmonic buncher双谐波聚束器double heterojunction双异质结double heterostructure semiconductor laser双异质结构半导体激光器double layer双层double lens双透镜double mirror双镜double pendulum双摆double pulse双脉冲double pulse generator双脉冲发生器double reference beam holography双参考光束全息学double reflection双反射double refraction双折射double refraction of flow怜双折射double resonance spectrograph双重共振摄谱仪double star双星doublet双重线doublet method双重线法doublet objective双合透镜doublet splitting双重线分裂doublet structure双重线结构doublet system双重系统doubly charged双电荷的doughnut环形室dove prism道威棱镜down quarkd 夸克draco天龙座draconic month交点月drag阻力drag coefficient阻力系数draper classification德雷伯分类drell ratio德勒尔比drift漂移drift chamber漂移室drift energy漂移能drift instability漂移不稳定性drift transistor漂移晶体管drift tube漂移管drift velocity漂移速度driving force驱动力dry adiabatic process干绝热过程dry air干空气dry bulb thermometer干球温度表dry cell干电池dry type recording干式记录dryelement cell干电池dual control两级控制dual lattice双晶格dual model对偶模型dual space对偶空间dual temperature exchange process双温交换过程dual transformation对偶变换dualism二重duality二象性duantd 形电极duct导管ductility延性dufour effect达福效应dulong petit's law杜隆普蒂定律dumbbell model哑铃模型dummy load虚负载duopigatron双彭宁源装置duoplasmatron双等离子管duration持续时间dust core压粉铁芯dust counter尘埃计数器计尘器dwarf矮星dwarf cepheid矮造父变星dwarf galaxy矮星系dwarf star矮星dyadic system二进位制dye laser染料激光器dynamic behavior动态行为dynamic characteristic动特性dynamic electricity动电dynamic equilibrium动态平衡dynamic error动态误差dynamic head动压头dynamic instability动态不稳定性dynamic load动负载dynamic polarizability动态极化率dynamic pressure动压力dynamic programming动态程序设计dynamic resistance动阻力dynamic response动特性dynamic stability动态稳定性dynamic stress动态应力dynamic test冲辉验dynamic transition动态跃迁dynamic viscosity动力粘度dynamical friction动摩擦dynamical parallax动力学视差dynamical similarity动力学相似dynamical structure 
factor动力学结构因子dynamical symmetry动力学对称性dynamical theory of diffraction动力学衍射理论dynamical variables动力学变量dynamics动力学dynamitron地那米加速器高频高压发生器并激式高频高压加速器dynamo发电机dynamo theory发电机理论dynamometer测力计dynatron负阻管dynatron effect负阻效应dyne达因dynode倍增电极dyon双荷子dyson equation捶方程dysprosium镝物理专业英语词汇(D) 相关内容:。

Experimental Research on GAN and Diffusion in Traditional Pattern Design (GAN与Diffusion在传统纹样设计中的实验研究)

Authors: 张驰, 王祥荣, 李莉, 毛子晗, 吕思奇, 袁晨旭, 彭玉旭. Source: Journal of Silk (《丝绸》), 2024, No. 8.

Abstract: Traditional patterns are an important part of China's outstanding traditional culture. Manual design alone can no longer meet the modern design demand for such patterns, and generative AI offers new design paths and methods for them.

This article applies generative AI to traditional pattern design. Through adaptation experiments, two mainstream image-generation models were selected for testing: the GAN-based Style GAN and the diffusion-based Stable Diffusion. Combining technical analysis with artistic analysis, the experimental results are compared from multiple angles and dimensions, providing designers with a reference for choosing a generative design method.

The results show that both models can meet basic artistic design needs.

The pattern images generated by the Style GAN model are closer to the distribution of real images and show higher image quality and diversity, while the Stable Diffusion model better preserves the genes of traditional patterns, combines artistry with creativity, and better fits the artistic design needs of traditional patterns.
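The claim that one model's outputs are "closer to the distribution of real images" is usually quantified with a Fréchet Inception Distance (FID)-style statistic. Below is a minimal numpy sketch of the Fréchet distance between Gaussian fits of two feature sets; note the assumptions: in a real evaluation the features would be Inception-network embeddings of the pattern images, whereas the arrays here are synthetic stand-ins of our own.

```python
import numpy as np

def _psd_sqrt(a):
    """Matrix square root of a symmetric positive semi-definite matrix."""
    vals, vecs = np.linalg.eigh(a)
    return (vecs * np.sqrt(np.clip(vals, 0, None))) @ vecs.T

def frechet_distance(feats_real, feats_gen):
    """FID-style Frechet distance between Gaussian fits of two feature sets,
    each of shape (n_samples, n_features)."""
    mu1, mu2 = feats_real.mean(0), feats_gen.mean(0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_gen, rowvar=False)
    r1 = _psd_sqrt(s1)
    covmean = _psd_sqrt(r1 @ s2 @ r1)  # same trace as sqrtm(s1 @ s2)
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(s1 + s2 - 2 * covmean))

rng = np.random.default_rng(1)
real = rng.normal(0.0, 1.0, size=(2000, 8))
close = rng.normal(0.1, 1.0, size=(2000, 8))  # generator near the real distribution
far = rng.normal(1.5, 2.0, size=(2000, 8))    # generator far from it
print(frechet_distance(real, close) < frechet_distance(real, far))  # True
```

A lower distance means the generated feature distribution is closer to the real one, which is the sense in which the article compares the two models.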

Keywords: GAN; Diffusion; traditional patterns; evaluation metrics; comparative analysis; experimental research. CLC number: TS941.26; document code: A; article ID: 1001-7003(2024)08-0009-14; DOI: 10.3969/j.issn.1001-7003.2024.08.002. Received 2024-03-30; revised 2024-06-23. Funding: Humanities and Social Sciences Research Planning Fund of the Ministry of Education (22YJA760038); Graduate Research Innovation Project of Changsha University of Science and Technology (CSLGCX23124). Author biography: 李莉 (b. 1981), female, associate professor; research interests include ethnic pattern studies, digital and intelligent design of traditional culture, and interdisciplinary visual innovation design.

Chinese traditional patterns are treasures of art and culture; they carry Chinese wisdom and aesthetic memory, and they take on new life in modern reinterpretation.

Inheriting and developing traditional patterns so as to help spread outstanding traditional culture is the root of traditional pattern design, and the continual renewal of design methods is the inexhaustible driving force, and the guide, of traditional pattern creation.

Chinese traditional patterns are already widely used in architecture, painting, sculpture, graphic design, interior design, industrial design, and other fields; art and design practitioners can draw nourishment and inspiration from their graceful motifs, rich formal connotations, and distinctive compositional styles [1].

federated learning based on dynamic regularization

With the development of artificial intelligence, more and more companies and institutions are applying it to commercial and scientific domains.

In practice, however, concerns about data confidentiality and privacy make data sharing and collaborative learning a bottleneck for the development of AI.

To address this, researchers have proposed a new collaborative learning method: federated learning based on dynamic regularization.

The basic idea of federated learning is to keep the training data distributed across multiple devices or nodes. Each node trains only on its local data and uploads its local model parameters to a central server, where the models are aggregated to update the global model.

This protects data privacy effectively, but differences in data distribution and sample size across nodes cause the model to overfit or underfit.

To mitigate this, a new dynamic regularization scheme was proposed: federated learning based on dynamic regularization.

The method extends conventional federated learning with a regularization term that constrains the model parameters, reducing overfitting and underfitting.

Unlike conventional regularization, the dynamic variant adjusts the regularization coefficient according to each node's data distribution and sample size, which yields better generalization.

Concretely, federated learning with dynamic regularization proceeds in the following steps:

1. Distribute the training data across multiple nodes; each node trains only on its local data and obtains local model parameters.

2. Upload the local model parameters to the central server for aggregation.

3. During aggregation, introduce a dynamic regularization term that constrains the model parameters.

4. Adjust the regularization coefficient dynamically according to each node's data distribution and sample size, to obtain better generalization.
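The four steps above can be sketched in a few lines of numpy. This is a toy illustration under our own assumptions (least-squares local models, a quadratic proximal penalty pulling each local model toward the global one, and a coefficient scaled inversely to a node's sample share); it is in the spirit of methods such as FedProx/FedDyn, not a faithful reproduction of any specific published algorithm.

```python
import numpy as np

def local_update(w_global, X, y, mu, lr=0.1, steps=100):
    """One node: gradient descent on the local least-squares loss plus a
    proximal penalty mu * ||w - w_global||^2 (the regularization term)."""
    w = w_global.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y) + 2 * mu * (w - w_global)
        w -= lr * grad
    return w

def federated_round(w_global, nodes, mu0=0.1):
    """One communication round: each node trains locally with a coefficient
    scaled by its sample share (fewer samples -> stronger pull toward the
    global model), then the server averages models weighted by sample count."""
    n_total = sum(len(y) for _, y in nodes)
    models, weights = [], []
    for X, y in nodes:
        mu = mu0 * n_total / (len(nodes) * len(y))  # dynamic coefficient
        models.append(local_update(w_global, X, y, mu))
        weights.append(len(y) / n_total)
    return sum(a * m for a, m in zip(weights, models))

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
nodes = []
for n in (20, 200):  # deliberately imbalanced nodes
    X = rng.normal(size=(n, 2))
    nodes.append((X, X @ w_true + 0.01 * rng.normal(size=n)))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, nodes)
print(w)  # close to w_true
```

The proximal term keeps the small node from drifting far from the consensus, which is the intuition behind reducing overfitting on nodes with little data.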

In experiments, federated learning with dynamic regularization has performed well.

Compared with conventional federated learning, it effectively reduces overfitting and underfitting and improves the generalization ability of the model.

Because the regularization coefficient adapts to each node's data distribution and sample size, the method is also more adaptive and robust.

In summary, federated learning based on dynamic regularization is a new collaborative learning method that effectively addresses data privacy and sharing, adapting the regularization coefficient to each node's data distribution and sample size to achieve better generalization.


Regularization of Diffusion-Based Direction Maps for the Tracking of Brain White Matter Fascicles

C. Poupon,*,† C. A. Clark,* V. Frouin,* J. Régis,‡ I. Bloch,† D. Le Bihan,* and J.-F. Mangin*

*Service Hospitalier Frédéric Joliot, CEA, 4 Place du Général Leclerc, 91401 Orsay Cedex, France; †Département Signal et Image, ENST, Paris, France; and ‡Service de Neurochirurgie Fonctionnelle et Stéréotaxique, CHU, La Timone, France

Received November 2, 1999

Magnetic resonance diffusion tensor imaging (DTI) provides information about fiber local directions in brain white matter. This paper addresses inference of the connectivity induced by fascicles made up of numerous fibers from such diffusion data. The usual fascicle tracking idea, which consists of following locally the direction of highest diffusion, is prone to erroneous forks because of problems induced by fiber crossing. In this paper, this difficulty is partly overcome by the use of a priori knowledge of the low curvature of most of the fascicles. This knowledge is embedded in a model of the bending energy of a spaghetti-plate representation of the white matter, used to compute a regularized fascicle direction map. A new tracking algorithm is then proposed to highlight putative fascicle trajectories from this direction map. This algorithm takes into account potential fan-shaped junctions between fascicles. A study of the tracking behavior according to the influence given to the a priori knowledge is proposed, and concrete tracking results obtained with in vivo human brain data are illustrated. These results include putative trajectories of some pyramidal, commissural, and various association fibers. © 2000 Academic Press

Key Words: diffusion; connectivity; white matter; fiber; regularization.

INTRODUCTION

The study of connectivity in the human brain is of interest to a wide range of investigators in the neuroscience community and encompasses fields of research concerned with the functional and anatomical organization of the brain, focal lesion-deficit studies, and brain development (Dejerine, 1895; Rye, 1999).

MRI is a unique tool for providing in vivo images of the brain for analysis of brain structure and organization. T1-weighted volume scans have been used to segment brain structures, allowing automatic parcellation of the white matter and cortex (Filipek et al., 1989; Mangin et al., 1995; Meyer et al., 1999; Makris et al., 1999). These high-resolution MRI-based methods can be used to investigate the spatial relationships between anatomical regions. They are, however, unable to infer the connectivity induced by axonal fibers.

Tract-tracing methodologies dedicated to the human brain have been restricted to post mortem studies. These methods include the dissection of white matter (Dejerine, 1895), strychnine neuronography (Pribam and MacLean, 1953), and the Nauta (Whitlock and Nauta, 1956) and Fink-Heimer methods (Turner et al., 1980) of tracing neuronal degeneration after localized lesions. New histological methods have recently been proposed to study heavily myelinated fiber tracts in the normal brain (Bürgel et al., 1999). Methods based on active axonal transport of tracer molecules, however, are restricted to animals for all but the shortest of pathways (Young et al., 1995). These approaches, which require animal sacrifice, are limited by the number of injections (each of which may track only a few fiber bundles) in the same individual. In vivo MR imaging of the tracer could overcome this difficulty (Pautler et al., 1998).

MRI methods based on the study of water diffusion in the human brain represent a promising new approach for fiber tracking and the study of connectivity.
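At the core of such diffusion-based approaches is the extraction of a per-voxel direction from a measured diffusion tensor, namely the eigenvector associated with its largest eigenvalue. A minimal numpy sketch of this step follows; the cigar-shaped test tensor is a synthetic toy example of our own, not data from this study.

```python
import numpy as np

def principal_direction_map(tensors):
    """Per-voxel eigendecomposition of a field of 3x3 symmetric diffusion
    tensors (shape (..., 3, 3)); returns the eigenvector of the largest
    eigenvalue, taken as the putative local fiber direction."""
    vals, vecs = np.linalg.eigh(tensors)  # eigenvalues in ascending order
    return vecs[..., :, -1]               # column for the largest eigenvalue

# Toy voxel field: strong diffusion along x, weak across it (cigar-shaped
# tensor, roughly at white-matter diffusivity scale in mm^2/s).
d = np.diag([1.7e-3, 0.3e-3, 0.3e-3])
field = np.broadcast_to(d, (4, 4, 4, 3, 3))
e1 = principal_direction_map(field)
print(np.abs(e1[0, 0, 0]))  # ~[1, 0, 0]: the fiber axis
```

The sign of an eigenvector is arbitrary, which is why direction maps are usually interpreted modulo 180 degrees; this also matters for the regularization discussed below.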
Diffusion imaging provides unique quantitative information about brain structure, which is completely noninvasive and covers the entire brain (see Le Bihan, 1991; Le Bihan, 1995, for a review). The basic principle stems from the orientational information provided by the phenomenon of diffusion anisotropy in white matter. Diffusion tensor imaging (DTI) characterizes the diffusional behavior of water in tissue on a pixel-by-pixel basis. This is done in terms of a matrix quantity from which the diffusion coefficient can be obtained corresponding to any direction in space (Basser et al., 1994b). Subsequent diagonalization of the diffusion tensor yields its eigenvalues and eigenvectors. As such, the eigenvector corresponding to the largest eigenvalue is taken to represent the main direction of diffusion in a voxel. Given that one may ascribe diffusion anisotropy in white matter to a greater hindrance or restriction to diffusion across the fiber axes than along them, the principal eigenvector may be considered to point along the direction of a putative fiber bundle traversing the voxel. Thus direction maps made up of the principal eigenvectors can be produced, providing a striking visualization of the white matter pathways and their orientation (Makris et al., 1997).

[NeuroImage 12, 184-195 (2000); doi:10.1006/nimg.2000.0607]

Several approaches have recently been proposed to study anatomical connectivity from such direction maps (Basser, 1998; Poupon et al., 1998b, 1998a; Mori et al., 1999; Conturo et al., 1999; Xu et al., 1999). The general aim is the possibility of asserting which cortical areas or basal ganglia are connected by fascicles embedded in white matter bundles. All these approaches rely on variants of the idea of locally following the direction of highest diffusion to reveal a fascicle trajectory. Unfortunately, in vivo protocols for diffusion image acquisition suffer from partial volume effects and various artifacts which lead to corrupted direction maps. For instance, as a result of partial volume, the direction of highest diffusion may not be related in a simple way to the underlying fascicle direction because of fiber crossing (Pierpaoli et al., 1996a, 1996b; Tuch et al., 1999). As each error in the direction map may lead to an erroneous fork for a local tracking process, this situation prevents robust fascicle tracking.

This paper proposes a new approach which consists of using a priori knowledge of white matter geometry to attempt to overcome such difficulties. The approach relies on a global model of the likelihood of any fascicle direction map. This model is used within a Bayesian framework to compute a regularized direction map allowing a global trade-off between diffusion tensor data and a priori knowledge regarding the low curvature of most of the fascicles. This regularized direction map is then used to perform tracking operations according to a new algorithm dealing with potential fan-shaped junctions between fascicles. Various results obtained in a series of normal volunteers show that the regularization approach improves the tracking robustness.

METHODS

Data Acquisition

All scans were acquired on a 1.5 T Signa Horizon Echospeed MRI system (Signa, General Electric Medical Systems, Milwaukee, WI) equipped with magnetic field gradients of up to 22 mT/m. A standard quadrature head coil was used for RF transmission and reception of the NMR signal. Head motion was minimised with standard foam padding as provided by the manufacturer. The images were acquired after a sagittal localizer scan.

An inversion recovery prepared spoiled gradient echo sequence was used to obtain a volume image of the entire brain with a resolution of 256 x 192 x 124 slice locations, each 1.3 mm thick. The field of view was 24 x 18 cm along the read and phase directions, respectively. Flip angle = 10 degrees, TE = 2.2 ms, TI = 600 ms. Four excitations were acquired. Imaging time was approximately 27 min.

Echoplanar diffusion-weighted images were acquired in the axial plane. Blocks of eight contiguous slices were
acquired, each 2.8 mm thick. Seven blocks were acquired, covering the entire brain, corresponding to 56 slice locations. For each slice location 31 images were acquired: a T2-weighted image with no diffusion sensitisation, followed by 5 diffusion-sensitised sets (b values linearly incremented to a maximum value of 1000 s/mm^2) in each of 6 noncolinear directions. These directions were as follows: {(1,1,0), (1,0,1), (0,1,1), (1,-1,0), (1,0,-1), (0,1,-1)}, providing the best precision in the tensor component when 6 directions are used (Pierpaoli et al., 1996b).

In order to improve the signal-to-noise ratio this was repeated 4 times, providing 124 images per slice location. The image resolution was 128 x 128, field of view 24 x 24 cm, TE = 84.4 ms, TR = 2.5 s. Imaging time, excluding time for on-line reconstruction, was approximately 37 min. A database of 8 normal volunteers (men, age range 25-34 years) has been acquired with this protocol. All subjects gave informed consent and the study was approved by the local Ethics Committee.

Computation of the Diffusion Tensor

Before performing the tensor estimation, an unwarping algorithm is applied to the diffusion-weighted data set to correct for distortions related to eddy currents induced by the large diffusion-sensitizing gradients. This algorithm relies on a three-parameter distortion model including scale, shear, and linear translation in the phase-encoding direction (Haselgrove and Moore, 1996). The optimal parameters are assessed independently for each slice, relative to the corresponding T2-weighted image, by the maximization of an entropy-related similarity measure called mutual information (Wells III et al., 1997). Following the distortion correction, the diffusion tensor and, subsequently, the eigensystem are calculated for each voxel of the brain using a robust version of the method described in (Basser et al., 1994a). This robust regression method is an M-estimator, which can be regarded as an iterated weighted least squares (Meer et al., 1991). This method remains reliable if less than half of the data are contaminated by outliers. A detailed description of this tensor reconstruction process is beyond the scope of this paper (Poupon, 1999b). The improvements induced by distortion correction and robust regression are illustrated by Fig. 1.

Direction Map Regularization

Ill-posed nature of the tracking problem. Let us assume now that the fascicle tracking problem consists of inferring trajectories from a fascicle direction map extracted from tensor diffusion data. It should be noted that the spatial sampling of this direction map may be higher than the spatial resolution of the diffusion data. This sampling may even be continuous when an interpolation method is used (Conturo et al., 1999). Consequently, the fascicle tracking problem is very similar to a number of problems where "flow lines" have to be inferred from a vector field. The flow line running through a given location may be reconstructed bidirectionally by incremental displacements in the local vector direction. All the methods proposed to date follow this idea in conjunction with a stop criterion, which relies on thresholding tensor anisotropy (Conturo et al., 1999) or trajectory curvature (Mori et al., 1999). While such flow-line local reconstruction approaches may give very good results with a smooth vector field, stemming for instance from the simulation of some partial differential equation, they turn out to be questionable when applied to a corrupted vector field resulting from noisy observations. Indeed, it has to be understood that each erroneous vector may yield an erroneous fork of the trajectory reconstruction process. Hence, this local tracking method is unable to tolerate even one corrupted vector along a fascicle.

All the methods described up to now use the vector field made up of the tensor eigenvector related to the largest tensor eigenvalue. Unfortunately, this vector field is error prone. First, the low spatial resolution of diffusion-weighted images with respect to the
usual fiber bundle diameters encountered in the human brain leads to significant partial volume effects. If a voxel includes several fiber directions, the tensor is difficult to interpret. In such situations, the main eigenvector might follow a "mean direction" which may differ significantly from the directions of the underlying fascicles (Pierpaoli et al., 1996a, 1996b; Tuch et al., 1999). Furthermore, DTI is highly sensitive to physiological motion, which may systematically lead to incorrect tensor estimation in some brain areas.

Considering that the corrupted nature of the eigenvector-based direction maps is intrinsic to in vivo DTI of brain white matter, the tracking problem turns out to be "ill-posed." This mathematical terminology means here that two different DTI acquisitions of the same brain could lead to two highly different tracking results because of a high sensitivity to corrupted local directions. This type of situation, which is common in the field of computer vision, calls for the use of a priori knowledge on the tracking solution. Indeed, a priori knowledge may allow the tracking algorithm to find the correct fascicle trajectory whatever the corrupted local directions.

The regularized approach. A vast amount of pathway-specific knowledge could be used to constrain the tracking algorithm. For instance, some connections are known from post mortem studies, and animal studies can be used to infer others. Furthermore, pathway diameter, dispersion, and anisotropy can be determined from previous studies. However, our current limited understanding of the variability of human brain connectivity makes the use of such knowledge problematic (Bürgel et al., 1999). Additionally, the use of such specific knowledge may prevent consistent studies of pathological brains. We have chosen therefore to use only a simple global assumption regarding the low curvature of most white matter fascicles. This kind of a priori knowledge leads to a classical paradigm of applied mathematics called regularization theory (Tikhonov and Arsenin, 1977).

FIG. 1. Correction for the spatial distortions induced by eddy currents and computation of the diffusion tensor using a robust regression method highly improve anisotropy maps. For instance, artifactual white areas induced by these distortions in the phase-encoding direction have disappeared and the gray/white boundary is better defined.

The above introduction to the ill-posed nature of the fascicle tracking problem has led to two important observations. First, some principal eigenvectors do not follow the underlying fascicle directions. This will lead us to abandon the idea of using the tensor eigensystem to compute the direction map. Second, a local trajectory reconstruction is oblivious to corrupted vectors that produce erroneous forks. This will lead us to use a larger-scale strategy in order to interpret ambiguous tensors.

Our new fascicle tracking method is made up of two stages. The first stage consists of computing a regularized fascicle direction map as the optimal trade-off between diffusion data and the low-curvature hypothesis. For instance, while the direction of the first eigenvector of a "flat tensor" may largely differ from actual fiber directions, the plane defined from the first two eigenvectors is supposed to include these directions. Hence, the low-curvature hypothesis will lead the regularization algorithm to choose, within this plane, the fascicle direction which provides the best agreement with the surrounding trajectories (see Fig. 2). The second stage of our method consists of studying brain connectivity using a simple tracking algorithm applied to this regularized direction map. Since this map is supposed to be error free, this algorithm relies on local information.

The initial step of our method consists of defining a mask W within which all the algorithms are restricted.
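The first stage, which trades off fidelity to the diffusion data against the low-curvature prior, can be illustrated on a deliberately tiny one-dimensional analogue. Everything here is our own simplification for illustration and not the energy actually used in the paper: a chain of voxels carries in-plane direction angles, the data term rewards alignment with the locally measured direction, the smoothness term penalizes bending between neighbors, and the minimization is done by iterated conditional modes over a discrete set of candidate angles.

```python
import numpy as np

def regularize_directions(measured, alpha=2.0, sweeps=50):
    """Toy 1-D MAP direction-map estimate: for a chain of voxels with
    measured direction angles (radians), minimize
        E(theta) = sum_i sin^2(theta_i - measured_i)           # data term
                 + alpha * sum_i sin^2(theta_i - theta_{i+1})  # curvature prior
    by iterated conditional modes over discrete candidate angles.
    sin^2 makes angles equivalent modulo 180 deg, as fiber directions are."""
    candidates = np.linspace(0, np.pi, 64, endpoint=False)
    theta = measured.copy()
    for _ in range(sweeps):
        for i in range(len(theta)):
            e = np.sin(candidates - measured[i]) ** 2
            if i > 0:
                e = e + alpha * np.sin(candidates - theta[i - 1]) ** 2
            if i < len(theta) - 1:
                e = e + alpha * np.sin(candidates - theta[i + 1]) ** 2
            theta[i] = candidates[np.argmin(e)]
    return theta

# A straight fascicle (angle 0) with one corrupted voxel measured at 80 deg:
measured = np.zeros(9)
measured[4] = np.deg2rad(80.0)
smoothed = regularize_directions(measured)
print(np.rad2deg(smoothed[4]))  # pulled far below the corrupted 80 deg
```

The corrupted voxel is dragged back toward the fascicle direction because its neighbors outweigh its own data term, which is the intuition behind letting the low-curvature prior resolve locally ambiguous tensors.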
A first volume of interest made up of the voxels belonging to white matter is automatically extracted from the T1-weighted image using an algorithm developed in our institution (Mangin et al., 1995, 1998). A sophisticated morphological dilation is then applied to this segmentation in order to be robust to potential distortions occurring in relating the echo-planar images (DTI) and the high resolution T1-weighted image mask. This dilation is homotopic, which means that the topology of white matter is preserved (Mangin et al., 1995), thus preventing the creation of impossible pathways through cortical folds.

The inference of the regularized direction map stems from a classical Bayesian interpretation. The regularized map $V_{\mathrm{opt}}$ is the optimal map which maximizes the a posteriori probability $p(V \mid D)$, where $D$ denotes the diffusion tensor data set and $V$ denotes a random vector field which covers all possible direction maps defined on $W$. Bayes rule allows us to introduce the a priori knowledge on the low curvature of fascicles:

$$p(V \mid D) = \frac{p(D \mid V)\,p(V)}{p(D)}. \quad (1)$$

Since $p(D)$ does not depend on $V$, maximizing $p(V \mid D)$ amounts to maximizing the product $p(D \mid V)\,p(V)$. In the following, we propose a model of these two probabilities which allows the computation of $V_{\mathrm{opt}}$. The a priori probability $p(V)$ will favor the direction maps made up of low curvature fascicles. The probability $p(D \mid V)$ will be related to the nature of water diffusion in white matter: for each voxel $M$, the diffusion coefficient in the direction $\vec{v}(M)$ should be as high as possible. Hence, the optimal map will be the best trade-off between the observations (the diffusion data) and the a priori knowledge (the low curvature assumption). An important point to be noted is that if $p(V)$ is the uniform probability density (no a priori knowledge), $V_{\mathrm{opt}}$ turns out to be the usual map made up of the tensor principal eigenvectors.

The spaghetti plate model. Let us consider any direction map $V$ and a voxel $M$ of this map. According to this map, $\vec{v}(M)$ corresponds to the local
direction of an underlying fascicle crossing voxel $M$. Assume now that we can find a way to compute the local curvature of this fascicle from the neighborhood of $M$ in the map. This can be achieved using local geometric relationships. Because fascicle curvature is the only information used to assess the likelihood of any direction map (without diffusion observations), $p(\vec{v}(M))$ depends only on the realizations of the random vector field in the neighborhood of $M$. Random fields endowed with this property are called Markov fields. These fields have been intensively studied in statistical physics as models of spin glasses. Such fields are very interesting from a practical point of view because the global field probability $p(V)$ follows a Gibbs distribution (Geman and Geman, 1984):

$$p(V) = \frac{1}{Z}\, e^{-\sum_{c} P_c(V)}, \quad (2)$$

where $Z$ is a normalizing constant and the $P_c$ are potential functions defined on subsets of neighboring points which interact with each other. These subsets are called cliques. Each clique $c$ possesses a potential function $P_c$, which embeds the nature of the local interaction. Intuitively, the lower the potential, the higher the probability of the clique realization. It should be noted that one random variable (or one point) usually belongs to several cliques. Combining all potential functions leads to the global energy of the field, which is minimal for the most likely realizations, i.e., those having the highest probability.

187 REGULARIZATION OF DIFFUSION-BASED DIRECTION MAPS

We have recently introduced a class of Markovian models dedicated to direction map regularization (Poupon et al., 1998a). In this paper, we focus on one specific model of this class which appears especially suitable to the tracking problem and stems from an intuitive analogy. At the resolution of our diffusion data, many white matter voxels include some trans-callosal, some projectional, and some associational fibers (Dejerine, 1895). As a first approximation, however, the geometry of white matter may be compared with the geometry of spaghetti
plates illustrated by Fig. 3. While this analogy will be used to describe our methodology, one has to bear in mind that this point of view is a significant simplification of the actual white matter geometry. Indeed, the striking simplicity of the dissection of Fig. 3 results in part from the fact that the anatomist was a talented sculptor who has discarded finer fascicles and fan-shaped endings.

Let us consider a single strand of spaghetti. Before any cooking, this spaghetti can be considered as a straight line. Put in hot water, the spaghetti accumulates some energy and becomes a bended curve. The magnitude of the total curvature is related to the total amount of energy which has been accumulated. A simple way to assess the cooking effect on the spaghetti geometry consists of integrating the curvature to derive the spaghetti bending energy $E$:

$$E(\text{spaghetti}) = \int_{\text{length}} \alpha\, c^2(s)\, ds, \quad (3)$$

where $s$ is simply a curvilinear length measured along the spaghetti, $c(s)$ is the spaghetti curvature at distance $s$ from the origin, and $\alpha$ is the spaghetti rigidity. This energy, well known in chemistry as the Kratky-Porod model of semi-flexible polymers (Chaikin and Lubensky, 1995), can be extended in a straightforward way to the whole spaghetti plate. Note now that the spaghetti energy defined in Eq. (3) breaks down to a sum of local terms related to the local curvature, which are exactly the kind of local interactions we are seeking to define a Gibbs distribution for the fascicle direction maps. Hence, the low curvature hypothesis will be translated into a low bending energy constraint.

It is still necessary, however, to design the way to compute the fascicle local bending energy. Let us return to the fascicle going through voxel $M$. Since fascicles cannot end inside white matter, we must find two prolonging voxels or perhaps the boundary of white matter. To be consistent with the spaghetti plate analogy, these linked voxels are defined from a criterion $e(M, P)$, which mimics the spaghetti local bending energy (see Fig. 4):

$$e(M, P) = \frac{\max^2\big((\vec{v}(M), \vec{u}_{MP}),\ (\vec{v}(P), \vec{u}_{MP}),\ (\vec{v}(M), \vec{v}(P))\big)}{\|\vec{MP}\|}, \quad (4)$$

where $\vec{u}_{MP} = \vec{MP}/\|\vec{MP}\|$ and $(\vec{u}, \vec{v})$ denotes the angle between directions $\vec{u}$ and $\vec{v}$. The neighborhood of $M$ is split in two by a plane that defines forward and backward neighbors. Then one point minimizing $e(M, P)$ is selected in each of these half-neighborhoods (forward: $f(M)$, backward: $b(M)$). The underlying fascicle is supposed to follow the $f(M) - M - b(M)$ trajectory. The potentials used to define the Gibbs distribution follow directly:

$$P_M^S(V) = e(M, f(M)) + e(M, b(M)). \quad (5)$$

Because of the definitions of $f(M)$ and $b(M)$, the clique related to $P_M^S$ is the 26-neighborhood of $M$ (more precisely, the points of the 26-neighborhood of $M$ included in $W$).

FIG. 2. The regularization idea illustrated by a Y-shaped trajectory extracted from the pyramidal pathway. The diffusion tensors are represented by ellipsoids, the tensor principal eigenvectors by light yellow cylinders, and the regularized fascicle directions by dark blue cylinders. The trajectory has been obtained using a local tracking algorithm applied to the regularized direction map. This trajectory includes some flat tensors at the level of the Y junction. The principal eigenvectors of these flat tensors give low confidence information on actual fascicle directions. Therefore, a local tracking algorithm applied to the eigenvector map may lead to erroneous forks at the level of these flat tensors. The regularization algorithm has reoriented the putative fascicle directions in order to create low curvature trajectories while keeping a high diffusion coefficient along these directions. The largest corrections occur for the low anisotropy tensors.

FIG. 3. The spaghetti plate like geometry of white matter illustrated by a brain dissection (Williams et al., 1997). The preservation method of Klinger was utilized. Fine forceps (straight or curved) were used to dissect the delicate nerve bundle preparations.

The Gibbs distribution made up of
these potentials gives a high probability to the direction maps endowed with a low global bending energy. Because of the analogy introduced above, this model is hereafter called the spaghetti plate model. It should be noted that although this model is made up of local potentials, the probability distribution $p(V)$ is global. This endows our tracking model with a non-local field of view.

The diffusion information. A model will now be proposed for the second probability $p(D \mid V)$. It should be noted by a reader unfamiliar with the Bayesian framework that $V$ is now fixed and supposed to represent an actual fascicle direction map. Let us assume that the tensor measurement in one voxel $M$ depends only on the local fascicle direction given by $\vec{v}(M)$. This reasonable hypothesis will lead to a second Gibbs distribution corresponding to a field without interaction. The probability $p(D \mid V)$ can be rewritten in the following way:

$$p(D \mid V) = \prod_{M} p(D(M) \mid \vec{v}(M)).$$

Assuming now that any tensor measurement $D(M)$ has a non-zero probability, $p(D \mid V)$ can be rewritten as:

$$p(D \mid V) = \frac{1}{Z_D} \prod_{M} e^{-P_M^D(\vec{v}(M))} = \frac{1}{Z_D}\, e^{-\sum_M P_M^D(\vec{v}(M))},$$

where $Z_D$ is a normalizing constant and $P_M^D(\vec{v}(M)) = -\ln\big(p(D(M) \mid \vec{v}(M))\big)$. Finally, in order to get a probability $p(D(M) \mid \vec{v}(M))$ decreasing with the discrepancy between diffusion in the direction $\vec{v}(M)$ and the largest diffusion coefficient $\lambda_1$ (the largest tensor eigenvalue), we propose the following potential function:

$$P_M^D(\vec{v}(M)) = \left(\frac{\vec{v}(M)^t\, D(M)\, \vec{v}(M) - \lambda_1}{\|D(M)\|}\right)^2. \quad (6)$$

The discrepancy is normalized by the tensor norm (Basser and Pierpaoli, 1996) in order to remove all diffusion-based information apart from anisotropy. For highly anisotropic tensors, this potential acts like a spring which tries to align $\vec{v}(M)$ along the direction of the largest diffusion coefficient. In the case of flat tensors, this potential is more flexible and allows $\vec{v}(M)$ to turn in the high diffusion plane. Finally, for isotropic tensors, this potential allows any fascicle direction.

The entire model. Combining all
the previous equations leads to an expression for $p(V \mid D)$ which turns out to be a new Gibbs distribution:

$$p(V \mid D) = \frac{1}{Z}\, e^{-\sum_M \left[ P_M^D(\vec{v}(M)) + \alpha P_M^S(V) \right]}, \quad (7)$$

where $Z$ is a normalizing constant. This new distribution now leads to the definition of $V_{\mathrm{opt}}$ as the direction map that minimizes the energy given by:

$$E(V) = \sum_M P_M^D(\vec{v}(M)) + \alpha \sum_M P_M^S(V). \quad (8)$$

Finally, this last definition shows that the optimal direction map $V_{\mathrm{opt}}$ is a trade-off between the measured tensor data and the a priori knowledge on the low curvature of fascicles. The constant $\alpha$ reflects the influence of this a priori knowledge, which is equivalent to the rigidity of the underlying spaghetti plate. Hence, $V_{\mathrm{opt}}$ represents the low energy spaghetti plate with the highest diffusion along each spaghetti.

FIG. 4. 2-D illustration of the computation of fascicle local bending energy. (A) The spaghetti local bending energy. Note that $c^2\,ds = (d\alpha/ds)^2\,ds = d\alpha^2/ds$, where $c$ denotes local curvature (cf. Eq. 3). (B) A discrete bending energy is defined between two neighboring points in order to mimic the continuous case (cf. Eq. 4). (C) The 26-neighborhood of each point $M$ is split in two by a plane orthogonal to $\vec{v}(M)$ which defines forward and backward neighbors. One point minimizing the discrete bending energy is selected in each of these half-neighborhoods (forward: $f(M)$, backward: $b(M)$). The underlying fascicle is supposed to follow the $f(M) - M - b(M)$ trajectory.

Tracking Experiments

Once the regularized fascicle direction map $V_{\mathrm{opt}}$ has been computed from the minimization of $E(V)$, the system user can perform tracking experiments according to a simple local tracking scheme. This scheme relies on the following organization of the direction map. Each $M - f(M)$ and $M - b(M)$ link defined according to the principle used during the regularization stage (see Fig. 4) is converted into a two-way link (see Fig. 5A). Recall that each voxel is endowed with forward and backward half-neighborhoods (see
Fig. 4C). After the conversion operation, some of the voxels have more than one two-way link in the same half-neighborhood. These voxels prove to be junctions related either to fan-shaped divergences (see Fig. 2) or to variations of a bundle thickness.

The system user first selects one area on the white matter surface. Next, a real-time propagation algorithm follows the two-way links until other areas of the white matter surface are reached (see Fig. 5B). Several areas may be reached if the propagation encounters junction voxels. Other kinds of experiments may also be performed. For example, the input area may be located inside white matter ($W$), in which case the propagation algorithm is bidirectional. Or two input areas, defined for instance by functional activations, may be specified. In this case the system performs the propagation from the first area and yields only the trajectories reaching the second area.

RESULTS

For all the following results, the state space of the random vectors $\vec{v}(M)$ has been discretized into 162 uniformly distributed directions. A deterministic minimization algorithm has been used to find the local minimum of $E(V)$ nearest to the principal eigenvector map (Besag, 1986). The computation time is about one hour on a conventional workstation. The resulting fascicle direction map is used to perform the tracking experiments.

Influence of the a Priori Model Weight

Regularization of one set of diffusion tensor data has been performed with eleven different values of the weighting parameter $\alpha$ (the spaghetti rigidity). In order to study the influence of regularization on the fascicle direction map geometry, the voxels have been classified into four types according to topological considerations. In order to assess the number of voxels leading to high local curvature trajectories, a threshold on the local bending energy has been used to remove some links. Since we will only focus on the evolution of the number of the created "dead ends" relative to $\alpha$, this threshold has
been arbitrarily fixed to remove links leading to angular variations of more than 45 degrees (see Fig. 4). The four types are:

Simple fascicle nodes. Voxels having exactly one two-way link in each half-neighborhood;

Junctions. Voxels related to the merge (or split) of several fascicles made up of points of the previous type;

Gate to gray matter. Voxels leading to the outside of the white matter;

Dead ends. Voxels without a link in one half-neighborhood.

Figure 6 presents the evolution from no regularization (left asymptotes) to high regularization (right). First, the number of pathological sites (see Fig. 6.3) decreases dramatically with the regularization, which demonstrates the efficiency of the model.

FIG. 5. The tracking system. (A) The regularized direction map is endowed with a system of two-way links. Some of the voxels which have more than one two-way link in the same half-neighborhood turn out to be fascicle junctions. (B) The user selects one area on the white matter surface. Then a propagation algorithm follows the links until other white matter surface areas are reached. If the propagation comes across a junction voxel, several trajectories may be followed in parallel. The rule that prevents the propagation from proceeding backwards at junction voxels is simple: when a voxel is reached, the voxel mid-plane has to be crossed before using a new link. Black arrows indicate the links used in the example.
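The link-following propagation of Fig. 5B can be sketched as a simple breadth-first traversal. This is an illustrative sketch, not the authors' code: the voxel-id graph, the function name, and the stopping convention are assumptions, and the mid-plane rule that prevents backtracking at junctions is deliberately omitted for brevity — junction voxels here simply spawn parallel branches.

```python
from collections import deque

def propagate(links, surface, seeds):
    """Follow two-way links from seed voxels through white matter,
    branching at junction voxels, until white matter surface voxels
    are reached.  `links` maps each voxel id to the ids it is linked
    to; `surface` is the set of surface voxel ids."""
    reached = set()
    visited = set(seeds)
    front = deque(seeds)
    while front:
        voxel = front.popleft()
        if voxel in surface and voxel not in seeds:
            reached.add(voxel)        # this branch stops at the surface
            continue
        for neighbor in links.get(voxel, ()):
            if neighbor not in visited:
                visited.add(neighbor)
                front.append(neighbor)
    return reached

# Toy example: a Y-shaped bundle; voxel 0 is the seed on the surface,
# voxel 3 is a junction, voxels 5 and 7 are other surface voxels.
links = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4, 6],
         4: [3, 5], 5: [4], 6: [3, 7], 7: [6]}
print(propagate(links, surface={0, 5, 7}, seeds={0}))  # -> {5, 7}
```

Starting instead from a voxel inside white matter makes the traversal naturally bidirectional, matching the behavior described for interior input areas.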
