Non invariant zeta-function regularization in quantum Liouville theory
The Riemann zeta function and modular forms
The Riemann zeta function and modular forms are important concepts in mathematics, playing key roles in number theory, analytic number theory, and the theory of automorphic forms.
This article introduces their basic definitions, properties, and related results from both a theoretical and an applied point of view.
1. The Riemann zeta function. The Riemann zeta function is a fundamental function of number theory. For a complex variable s with Re(s) > 1 it is defined by the absolutely convergent series \[ \zeta(s) = \sum_{n=1}^{\infty}\frac{1}{n^s} , \] and it extends to the rest of the complex plane by analytic continuation.
The function was introduced by Riemann in his study of the distribution of primes and occupies a central place in analytic number theory.
It has many important properties, such as its analyticity in the complex plane and the Riemann functional equation.
1.1 Analyticity. Apart from the simple pole at s = 1, the zeta function is defined and complex-differentiable at every point of the complex plane.
This property gives the function a prominent place in complex analysis and is the starting point for the study of its finer properties.
1.2 The functional equation. The zeta function satisfies the famous Riemann functional equation: for all s ∈ C \ {1}, \[ \zeta(s) = 2^s\pi^{s-1}\sin\!\left(\frac{\pi s}{2}\right)\Gamma(1-s)\,\zeta(1-s) . \] The functional equation expresses a symmetry of the zeta function in the complex plane and is an essential tool in the study of its properties.
1.3 Applications in number theory. The zeta function has many important applications in number theory, the most famous of which is the Riemann hypothesis.
The Riemann hypothesis states that every non-trivial zero of the zeta function has real part 1/2.
The hypothesis has far-reaching consequences for number theory and the distribution of primes, but it has so far resisted proof.
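The definition and the functional equation above lend themselves to a quick numerical check. The sketch below uses the mpmath arbitrary-precision library (an assumption of this example, not something used in the original text) to sum the series at s = 2 and to verify the functional equation at a complex point.

```python
# Numerical check of the series definition and the functional equation (mpmath assumed).
from mpmath import mp, zeta, gamma, sin, pi, mpf, nsum, inf

mp.dps = 30  # working precision in decimal digits

# Series definition: for Re(s) > 1 the series converges.
s = mpf(2)
series_value = nsum(lambda n: 1 / n**s, [1, inf])
print(series_value)          # 1.6449340668... = pi^2 / 6
print(pi**2 / 6)

# Functional equation zeta(s) = 2^s pi^(s-1) sin(pi s / 2) Gamma(1-s) zeta(1-s),
# checked at a point off the real axis.
s = mpf("0.3") + 0.4j
lhs = zeta(s)
rhs = 2**s * pi**(s - 1) * sin(pi * s / 2) * gamma(1 - s) * zeta(1 - s)
print(abs(lhs - rhs))        # ~1e-30, i.e. zero to working precision
```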
2. Modular forms. Modular forms are an important concept in complex function theory; they originated in number theory and have since developed into an independent field of study.
Existence of solutions for several classes of elliptic equations with nonlocal terms
This thesis uses critical point theory to study several classes of nonlocal elliptic partial differential equations, discussing the existence of ground-state solutions, positive solutions, and sign-changing solutions. Under the assumptions of the thesis the nonlinearity f is only required to be continuous, so the Nehari manifold need not be of class C1. Chapter 2 studies the existence of nontrivial solutions of a Kirchhoff equation with a periodic potential, where f satisfies a rather general super-quartic growth condition but need not be of class C1. When f has subcritical or critical growth at infinity, the generalized Nehari manifold method of Szulkin and Weth is used to prove the existence of a ground-state solution. Finally, when the potential V(x) is constant, the existence of a ground-state solution of the autonomous Kirchhoff-type equation is proved. Chapter 3 studies the existence of positive solutions of a class of Schrödinger–Poisson equations with critical exponential growth, where f satisfies a subcritical growth condition and the potential V decays to zero at infinity; variational methods show that the equation has at least one positive solution. Chapter 4 studies the existence of sign-changing solutions of a class of p-Laplacian Kirchhoff equations with decaying potentials, where f is not of class C1; the Nehari manifold method and a minimization principle yield a sign-changing solution. Furthermore, when f is odd, the equation is shown to have infinitely many nontrivial solutions.
Optimization Theory
Optimization theory and algorithms in finite-dimensional spaces
Introduction
Liu Hongying, School of Mathematics and Systems Science
1.1 Mathematical formulation and examples
• Objective: a quantitative measure of the system's performance (profit, time, potential energy) — any single quantity, or combination of quantities, expressed as a number.
• Variables: the controllable characteristics of the system on which the objective depends.
• Constraints: the variables are often restricted in some way (for example, an electron density in a molecule or a loan amount cannot be negative).
Hence the necessary condition is that, for all directions p, the first- and second-order terms of the expansion are non-negative; equivalently,
g(x*) = 0 (first-order condition) and G* positive semidefinite (second-order condition).
Stationary point: a point x* with g(x*) = 0.
Sufficient conditions for a local minimizer
Theorem. Sufficient conditions for x* to be a strict local minimizer are g(x*) = 0 and G* positive definite.
Example. Consider the Rosenbrock function f(x) = 100(x₂ − x₁²)² + (1 − x₁)².
At x* = (1, 1) both conditions hold, so x* is a strict local minimizer — in fact the global minimizer. The conditions are sufficient but not necessary.
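The sufficient conditions can be checked numerically for the Rosenbrock example. The sketch below assumes NumPy; the gradient and Hessian formulas follow directly from the definition of f given above.

```python
# Check first- and second-order conditions for the Rosenbrock function at x* = (1, 1).
import numpy as np

def rosenbrock_grad_hess(x1, x2):
    g = np.array([-400 * x1 * (x2 - x1**2) - 2 * (1 - x1),
                  200 * (x2 - x1**2)])
    H = np.array([[1200 * x1**2 - 400 * x2 + 2, -400 * x1],
                  [-400 * x1, 200.0]])
    return g, H

g, H = rosenbrock_grad_hess(1.0, 1.0)
print(g)                        # [0. 0.]  -> stationary point
print(np.linalg.eigvalsh(H))    # both eigenvalues > 0 -> positive definite
```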
The general model of an optimization problem — the mathematical programming problem
A small example
• Feasible region / feasible set • Optimal solution / solution • Graphical method
Optimization modeling: the process of identifying the objective, variables, and constraints of a given problem.
• Building an appropriate model is the first and most important step (too simple a model gives no useful information about the real problem; too complex a model is hard to solve).
• Choosing a suitable algorithm is also important — it determines the speed and quality of the solution (there is no universal optimization algorithm, only algorithms for particular classes of problems).
Active set (the set of indices of the active constraints). [Figure: the (x₁, x₂) plane showing the feasible region and the point x*.]
Lagrangian function:
First-order conditions: the KKT conditions
Regularity assumption 1:
Theorem (first-order conditions). If x* is a local minimizer and regularity assumption 1 holds at x*, then there exist Lagrange multipliers such that the conditions below are satisfied.
◎ Karush–Kuhn–Tucker conditions; KKT conditions / KKT point
Conditions for a local minimum — sufficient conditions (continued)
Theorem. Every stationary point of a differentiable convex function is a global minimizer.
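As an illustration of the KKT conditions, the following sketch checks them for a tiny hand-solved problem. The problem, the candidate point, and the multiplier are assumptions of this example and do not come from the lecture notes.

```python
# Tiny worked KKT instance: minimize x1^2 + x2^2  subject to  c(x) = x1 + x2 - 1 >= 0.
import numpy as np

x_star = np.array([0.5, 0.5])                # candidate minimizer (assumed)
lam = 1.0                                    # Lagrange multiplier (assumed)

grad_f = 2 * x_star                          # gradient of the objective
grad_c = np.array([1.0, 1.0])                # gradient of the constraint
c_val = x_star.sum() - 1.0                   # constraint value at x*

print(grad_f - lam * grad_c)                 # stationarity of the Lagrangian: [0. 0.]
print(c_val >= 0, lam >= 0)                  # primal and dual feasibility: True True
print(lam * c_val)                           # complementary slackness: 0.0
```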
The quasi-inverse regularization method for the Cauchy problem of elliptic equations
The Cauchy problem for an elliptic equation asks for the unknown function in the whole domain, given boundary and initial data on part of the boundary.
Because the problem is ill-posed and its numerical solution often involves nonlinear and high-dimensional computations, suitable algorithms are needed.
In recent years the quasi-inverse regularization method has been widely applied to the solution of elliptic Cauchy problems.
The method constructs a regularized equation and, exploiting the relationship between the regularized equation and the original one, solves for the unknown function step by step.
Its advantages are that it avoids the instability and numerical error of naive numerical schemes and, for certain special problems, offers good numerical stability and speed.
In the quasi-inverse method one first constructs a regularized equation and then solves it step by step to obtain the unknown function.
The construction of the regularized equation is usually based on a particular solution strategy and on the choice of an inverse regularization operator.
The inverse regularization operator is a mapping that carries the solution of the original problem into a simpler space, making the problem easier to solve.
In practice, the quasi-inverse regularization method can be combined with other solution methods, such as the finite element method or the boundary element method, to obtain more accurate and efficient solutions.
The method can also be applied to other types of partial differential equations, for example parabolic and hyperbolic equations.
In summary, quasi-inverse regularization is an effective method for the Cauchy problem of elliptic equations, with broad prospects for practical application.
With the continuing development of computer technology and further research on algorithmic optimization, the method will show its potential and value in ever more fields.
Some properties of the Riemann zeta function and its integral representation in the complex plane
Take δ > 1 and consider the uniform convergence of the series Σ_{n=1}^∞ (ln n)/n^s on [δ, +∞). First, since (δ+1)/2 > 1, the positive series Σ_{n=1}^∞ 1/n^{(δ+1)/2} converges, and
lim_{n→∞} [(ln n)/n^δ] / [1/n^{(δ+1)/2}] = lim_{n→∞} (ln n)/n^{(δ−1)/2} = 0,
so the positive series Σ_{n=1}^∞ (ln n)/n^δ converges as well.
I wonder whether the Riemann hypothesis has really been settled. The hypothesis is also tied to many other mathematical statements: it has been estimated that more than a thousand propositions in today's mathematical literature take the truth of the Riemann hypothesis, or of its generalizations, as a premise. This shows that a proof of the hypothesis and its generalizations would have an enormous impact on mathematics; a single conjecture so closely linked to so many propositions is all but unique in the subject. Even more surprisingly, the Riemann hypothesis has deep connections with quantum mechanics and string theory as well. Small wonder, then, that Sir Michael Atiyah's claim to have proved it caused such a stir. Sadly, Atiyah passed away on 11 January 2019; his final manuscript on the Riemann hypothesis [1], although not accepted by mainstream mathematicians, will nevertheless be remembered, because it once again rekindled enthusiasm for discussing and studying the hypothesis. With the continued efforts of many mathematicians, the problem will, one hopes, eventually be fully resolved.
1. Definition of the Riemann zeta function
We know that the p-series
Σ_{n=1}^∞ 1/n^p = 1 + 1/2^p + 1/3^p + ⋯
converges for p > 1 and diverges for p ≤ 1 (for p = 1 it is the harmonic series). From this we see that, however small p > 0 may be, the positive series Σ_{n=1}^∞ 1/n^p …
A nonmonotone self-determined trust-region algorithm for unconstrained optimization
About the author: Liu Ning, male, from Yulin, Guangxi; master's student; main research interests: optimization theory and algorithms.
At iterate x_k the quadratic model is m_k(p) = f(x_k) + g_kᵀp + ½ pᵀB_k p, and the trial step p_k is required to give a sufficient predicted reduction:
m_k(0) − m_k(p_k) ≥ c ‖g_k‖ min{Δ_k, ‖g_k‖ / ‖B_k‖}.
Let f_{l(k)} = max_{0≤j≤m(k)} f(x_{k−j}), where m(0) = 0 and m(k) = min{m(k−1) + 1, M} for k ≥ 1, with M a fixed non-negative integer.
Such algorithms easily run into trouble when the feasible region contains a long, narrow, curved valley [6]; their effectiveness then drops sharply, because once the iterates enter the valley they can only creep along it, which leads to very small steps and even to zig-zagging [7]. To address the difficulties described above …
Step 5: set k := k + 1 and return to Step 1.
This completes the proof.
(4)
Lemma 2. Algorithm 1 is well defined if conditions (1) and (2) of Assumption 1 hold.
Step 4: update B_k by the BFGS formula to obtain B_{k+1}.
Proof. By the mean value theorem and Assumption 1, and since f_{l(k)} ≥ f(x_k), …
… convergence results are also obtained under suitable conditions.
Non-negative local sparse coding based on elastic net and histogram intersection
DOI: 10. 11772 / j. issn. 1001-9081. 2018071483
万 源,张景会 ,陈治平,孟晓静
(School of Science, Wuhan University of Technology, Wuhan 430070, China) (*Corresponding author, e-mail: Jingzhang@whut.edu.cn)
Abstract: To address the facts that sparse coding models ignore the group effect when selecting dictionary bases and that the Euclidean distance does not effectively measure the distance between a feature and a dictionary basis, a non-negative local sparse coding method based on the elastic net and histogram intersection (EH-NLSC) is proposed. First, an elastic-net model is introduced into the optimization function, removing the restriction on the number of selected dictionary bases, so that multiple groups of correlated features can be selected while redundant features are discarded, which improves the discriminability and effectiveness of the coding. Then, histogram intersection is introduced into the locality constraint to redefine the distance between features and dictionary bases, ensuring that similar features share their local bases. Finally, a multi-class linear support vector machine is used for classification. Experimental results on four public datasets show that, compared with locality-constrained linear coding (LLC) and sparse coding based on the non-negative elastic net (NENSC), EH-NLSC improves the classification accuracy by an average of 10 and 9 percentage points respectively, which fully demonstrates its effectiveness for image representation and classification.
Key words: sparse coding; elastic net model; locality; histogram intersection; image classification
0 Introduction
Image classification is an important research direction in computer vision, widely applied in biometric recognition, web image retrieval, robot vision, and other areas; the key is how to extract features that represent an image effectively. Sparse coding is an effective method of image feature representation. Since the bag-of-words (BoW) model [1] and the spatial pyramid matching (SPM) model [2] easily introduce quantization error, Yang et al. [3] combined the SPM model with sparse coding and proposed the spatial pyramid matching using sparse coding (ScSPM) algorithm, performing sparse coding at different scales of the image and obtaining good classification results. In sparse coding models the l1 norm considers only sparsity and ignores the group effect when selecting dictionary bases, so Zou et al. [4] proposed a new regularization method that uses the elastic net as both the regularizer and the variable-selection mechanism. Zhang et al. [5] proposed a discriminative elastic-net regularized linear …
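As a hedged illustration of the elastic-net coding step discussed above (not the authors' EH-NLSC implementation, which additionally uses histogram-intersection locality), one can compute a non-negative elastic-net code for a single feature over a dictionary with scikit-learn; the library and all parameter values here are assumptions of the sketch.

```python
# Minimal sketch: non-negative elastic-net coding of one feature over a dictionary.
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
D = np.abs(rng.standard_normal((128, 64)))    # dictionary: 128-dim space, 64 atoms (columns)
x = np.abs(rng.standard_normal(128))          # one feature vector to encode

coder = ElasticNet(alpha=0.05, l1_ratio=0.5,  # l1/l2 mix: removes the limit on how many
                   positive=True,             # correlated atoms can be selected together
                   fit_intercept=False, max_iter=5000)
coder.fit(D, x)                               # solve for codes c with x ~ D @ c, c >= 0

codes = coder.coef_                           # non-negative, sparse code vector
print(np.count_nonzero(codes), codes.min() >= 0.0)
```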
Analytic continuation of the zeta function
"Analytic Continuation of the Zeta Function". 1. Introduction. The zeta function is an important special function, with significant applications in number theory, statistical physics, and quantum mechanics.
It plays a role in the search for large primes, in deciding the reachability of Markov chains, in quantum numerical simulation, and in the analysis of the Goldbach conjecture.
In physics it is used to compute energy values in many-particle interference; from cosmology down to atomic physics, it is part of the foundation for understanding the universe past and present.
It also plays a role in machine learning and large-scale data mining.
In statistical physics the zeta function is used to infer free energies, many-particle interference, and thermodynamic properties; in computer science it is used for network and optimization problems.
In computer science it is used for network and optimization problems; in quantum mechanics, for atomic and electronic structure; in statistical physics, for many-particle interference energies and thermodynamic properties; and in number theory, for large primes, the reachability of Markov chains, the Goldbach conjecture, and related questions.
The properties of the zeta function are described by its definition and by its continuation, and the results of the continuation can be used to solve some difficult mathematical problems.
2. Definition of the zeta function. The zeta function was studied by the German mathematician Riemann in 1859, in his memoir on the number of primes less than a given magnitude, where he applied it to the distribution of the primes.
It is defined by
$$\zeta(s) = \sum_{n=1}^{\infty}\frac{1}{n^s} = \lim_{N\to\infty}\sum_{n=1}^{N}\frac{1}{n^s},$$
where the parameter s is a complex number; the series converges for Re(s) > 1.
Some special values are ζ(2) = π²/6 and ζ(3) = 1.2021… (Apéry's constant); at s = 1 the series is the divergent harmonic series. There is likewise a list of special values at other points.
To understand the function better, one can think of it this way: form the sequence whose n-th term is 1/n^s; the zeta function is by definition the sum of all the terms of this sequence.
3. Continuation of the zeta function. The continuation of the zeta function means extending it to further situations: complex parameters, several parameters, non-integer and variable parameters, and so on.
The purpose of the continuation is to widen the range of application of the zeta function so that it can be used to solve more mathematical problems.
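A minimal numerical illustration of the continuation: the mpmath library (an assumption of this sketch) evaluates the analytically continued zeta function at points where the defining series no longer converges.

```python
# Values of the analytically continued zeta function (mpmath assumed).
from mpmath import mp, zeta

mp.dps = 25
print(zeta(2))                 # 1.644934... = pi^2/6   (series converges here)
print(zeta(0))                 # -0.5                   (continuation only)
print(zeta(-1))                # -0.08333... = -1/12    (continuation only)
print(zeta(0.5 + 14.134725j))  # ~ 0, close to the first non-trivial zero
```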
The Riemann zeta function and the gamma function
The Riemann zeta function and the gamma function — overview and explanation. 1. Introduction. 1.1 Overview. The Riemann zeta function and the gamma function are two important functions in mathematics.
The Riemann zeta function was introduced by the German mathematician Riemann in the 19th century, while the gamma function was first introduced by the Swiss mathematician Euler in the 18th century.
Both functions are used widely in mathematical analysis, complex function theory, number theory, and other fields.
The Riemann zeta function was originally introduced to study the distribution of primes.
It is defined by a series: its values are obtained by summing reciprocal powers of the positive integers.
However, the zeta function is not defined only for such arguments; by analytic continuation it acquires a much wider domain.
The Riemann zeta function has a very rich set of properties and is closely related to the distribution of primes, to harmonic series, to the Γ function, and to other objects.
The gamma function is a special function of a complex variable, defined by an improper integral.
It has several important properties, including being defined for all complex values (apart from its poles), a reflection (complementarity) property, and an analytic continuation.
The gamma function is used widely in probability theory, number theory, complex analysis, and in physics, for example in quantum mechanics and field theory.
There is a close relationship between the Riemann zeta function and the gamma function.
Their connection can be described through their definitions and through the functional equations and reflection properties that link them.
The relationship between the two functions is important in mathematical research and applications; together they give mathematicians a deeper way of understanding number theory, complex function theory, analytic number theory, and related branches of mathematics.
In summary, this article introduces the definitions and properties of the Riemann zeta function and the gamma function and the relationship between them.
Through a deeper study of them and of their applications, we can better understand some important problems in number theory, complex function theory, and related areas.
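Two of the properties quoted above are easy to check numerically. The sketch below, which assumes the mpmath library, verifies that Γ(n+1) = n! for small n and that the integral definition reproduces Γ(3.5).

```python
# Numerical checks of the factorial property and the integral definition of Gamma.
from mpmath import mp, gamma, quad, exp, factorial, inf

mp.dps = 25
for n in range(1, 6):
    print(n, gamma(n + 1), factorial(n))     # Gamma(n+1) equals n!

# Gamma(z) = int_0^inf t^(z-1) e^(-t) dt, checked at z = 3.5
z = 3.5
integral = quad(lambda t: t**(z - 1) * exp(-t), [0, inf])
print(integral, gamma(z))                    # both 3.32335097...
```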
The structure of the article is as follows. The main text is divided into four parts: the introduction, the Riemann zeta function, the gamma function, and the relationship between the two. Each part contains several subsections introducing the corresponding material.
The introduction presents the topic of the article, namely the Riemann zeta function and the gamma function.
The overview briefly introduces the definitions and properties of the two functions, to draw the reader's interest.
The section on the structure of the article then describes the organization of the text and the contents of each part, so that the reader has a clear picture of the whole.
Nonnegative matrix factorization and applications
NONNEGATIVE MATRIX F ACTORIZATION AND APPLICATIONSMOODY CHU and ROBERT PLEMMONSDepartment of Mathematics,North Carolina State University,Raleigh,NC27695-8205. Departments of Computer Science and Mathematics,Wake Forest University,Winston-Salem,NC27109. 1IntroductionData analysis is pervasive throughout science,engineering and business applications.Very often the data to be analyzed is nonnegative,and it is very often preferable to take this constraint into account in the analysis process.In this paper we provide a survey of some aspects of nonnegative matrix factorization and its applications to nonnegative matrix data analysis.In general the problem is the following:given a nonnegative data matrix Yfind reduced rank nonnegative matrices U and V so thatY≈UV.Here,U is often thought of as the source matrix and V as the mixing matrix associated with the data in Y.A more formal definition of the problem is given below.This approximate factorization process is an active area of research in several disciplines(a Google search on this topic recently provided over250references to papers involving nonnegative matrix factorization and applications written in the past ten years),and the subject is certainly a fertile area of research for linear algebraists.An indispensable task in almost every discipline is to analyze a certain data to search for relationships between a set of exogenous and endogenous variables.There are two special concerns in data analysis.First, most of the information gathering devices or methods at present have onlyfinite bandwidth.One thus cannot avoid the fact that the data collected often are not exact.For example,signals received by antenna arrays often are contaminated by instrumental noises;astronomical images acquired by telescopes often are blurred by atmospheric turbulence;database prepared by document indexing often are biased by subjective judgment; and even empirical data obtained in laboratories often do not satisfy intrinsic physical constraints.Before any deductive sciences can further be applied,it is important tofirst reconstruct or represent the data so that the inexactness is reduced while certain feasibility conditions are satisfied.Secondly,in many situations the data observed from complex phenomena represent the integrated result of several interrelated variables acting together.When these variables are less precisely defined,the actual information contained in the original data might be overlapping and ambiguous.A reduced system model could provide afidelity near the level of the original system.One common ground in the various approaches for noise removal,model reduction,feasibility reconstruction,and so on,is to replace the original data by a lower dimensional representation obtained via subspace approximation.The notion of low rank approximations therefore arises in a wide range of important applications.Factor analysis and principal component analysis are two of the many classical methods used to accomplish the goal of reducing the number of variables and detecting structures among the variables.However,as indicated above,often the data to be analyzed is nonnegative,and the low rank data are further required to be comprised of nonnegative values only in order to avoid contradicting physical realities. 
Classical tools cannot guarantee to maintain the nonnegativity.The approach of low-rank nonnegative matrix factorization(NNMF)thus becomes particularly appealing.The NNMF problem,probably due originally to Paatero and Tapper[21],can be stated in generic form as follows:(NNMF)Given a nonnegative matrix Y∈R m×n and a positive integer p<min{m,n},find nonnegative matrices U∈R m×p and V∈R p×n so as to minimize the functionalf(U,V):=12Y−UV 2F.(1)The product UV of the least squares solution is called a nonnegative matrix factorization of Y,although Y is not necessarily equal to the product UV.Clearly the product UV is of rank at most p.An appropriate decision on the value of p is critical in practice,but the choice of p is very often problem dependent.The objective function(1)can be modified in several ways to reflect the application need.For example,penalty1terms can be added to f(U,V)in order to enforce sparsity or to enhance smoothness in the solution U and V[13,24].Also,because UV=(UD)(D−1V)for any invertible matrix D∈R p×p,sometimes it is desirable to“normalize”columns of U The question of uniqueness of the nonnegative factors U and V also arises, which is easily seen by considering case where the matrices D and D−1are nonnegative.For simplicity, we shall concentrate on(1)only in this essay,but the metric to be minimized in the NNMF problem can certainly be generalized and constraints beyond nonnegativity are sometimes imposed for specific situations, e.g.,[5,13,14,15,18,19,24,25,26,27].In many applications,we will see that the p factors,interpreted as either sources,basis elements,or concepts,play a vital role in data analysis.In practice,there is a need to determine as few factors as possible and,hence the need for a low rank NNMF of the data matrix Y arises. 2Some ApplicationsThe basic idea behind the NNMF is the linear model.The matrix Y=[y ij]∈R m×n in the NNMF formulation denotes the“observed”data whereas each entry y ij represents,in a broad sense,the score obtained by entity j on variable i.One way to characterize the interrelationships among multiple variables that contribute to the observed data Y is to assume that y ij is a linearly weighted score by entity j based on several“factors”. 
We shall temporarily assume that there are p factors,but often it is precisely the point that the factors are to be retrieved in the mining process.A linear model,therefore,assumes the relationshipY=AF,(2) where A=[a ik]∈R m×p is a matrix with a ik denoting the loading of variable i to factor k or,equivalently, the influence of factor k on variable i,and F=[f kj]∈R p×n with f kj denoting the score on factor k by entity j or the response of entity j to factor k.Depending on the applications,there are many ways to interpret the meaning of the linear model.We briefly describe a few applications below.2.1Air Emission QualityIn the air pollution research community,one observational technique makes use of the ambient data and source profile data to apportion sources or source categories[12,15].The fundamental principle in this model is that mass conservation can be assumed and a mass balance analysis can be used to identify and apportion sources of airborne particulate matter in the atmosphere.For example,it might be desirable to determine a large number of chemical constituents such as elemental concentrations in a number of samples.The relationships between p sources which contribute m chemical species to n samples,therefore,lead to a mass balance equation,y ij=pk=1a ik f kj,(3)where y ij is the elemental concentration of the i th chemical measured in the j th sample,a ik is the gravimetric concentration of the i th chemical in the k th source,and f kj is the airborne mass concentration that the k th source has contributed to the j th sample.In a typical scenario,only values of y ij are observable whereas neither the sources are known nor the compositions of the local particulate emissions are measured.Thus,a critical question is to estimate the number p,the compositions a ik,and the contributions f kj of the sources.Tools that have been employed to analyze the linear model include principal component analysis,factor analysis,cluster analysis,and other multivariate statistical techniques.In this receptor model,however,there is a physical constraint imposed upon the data.That is,the source compositions a ik and the source contributions f kj must all be nonnegative.The identification and apportionment,therefore,becomes a nonnegative matrix factorization problem of Y.2.2Image and Spectral Data ProcessingDigital images are represented as nonnegative matrix arrays,since pixel intensity values are nonnegative.It is sometimes desirable to process data sets of images represented by column vectors as composite objects in2many articulations and poses,and sometimes as separated parts for in,for example,biometric identification applications such as face or iris recognition.It is suggested that the factorization in the linear model would enable the identification and classification of intrinsic “parts”that make up the object being imaged by multiple observations [7,16,26].More specifically,each column y j of a nonnegative matrix Y now represents m pixel values of one image.The columns a k of A are basis elements in R m .The columns of F ,belonging to R p ,can be thought of as coefficient sequences representing the n images in the basis elements.In other words,the relationship,y j =p k =1a k f kj ,(4)can be thought of as that there are standard parts a k in a variety of positions and that each image represented as a vector y j ,making up the factor U of basis elements is made by superposing these parts together in specific ways by a mixing matrix represented by V in (1).Those parts,being images 
themselves,are necessarily nonnegative.The superposition coefficients,each part being present or absent,are also necessarily nonnegative.A related application to the identification of object materials from spectral reflectance data at different optical wavelengths has been investigated in [25].2.3Text MiningAssume that the textual documents are collected in an indexing matrix Y =[y ij ]∈R m ×n .Each document is represented by one column in Y .The entry y ij represents the weight of one particular term i in document j whereas each term could be defined by just one single word or a string of phrases.To enhance discrimination between various documents and to improve retrieval effectiveness,a term-weighting scheme of the form,y ij =t ij g i d j ,(5)is usually used to define Y [2],where t ij captures the relative importance of term i in document j ,g i weights the overall importance of term i in the entire set of documents,and d j =( m i =1t ij g i )−1/2is the scaling factor for normalization.The normalization by d j per document is necessary because,otherwise,one could artificially inflate the prominence of document j by padding it with repeated pages or volumes.After the normalization,the columns of Y are of unit length and usually nonnegative.The indexing matrix contains lot of information for retrieval.In the context of latent semantic indexing (LSI)application [2,10],for example,suppose a query represented by a row vector q =[q 1,...,q m ]∈R m ,where q i denotes the weight of term i in the query q ,is submitted.One way to measure how the query q matches the documents is to calculate the row vector s =q Y and rank the relevance of documents to q according to the scores in s .The computation in the LSI application seems to be merely the vector-matrix multiplication.This is so only if Y is a “reasonable”representation of the relationship between documents and terms.In practice,however,the matrix Y is never exact.A major challenge in the field has been to represent the indexing matrix and the queries in a more compact form so as to facilitate the computation of the scores [6,23].The idea of representing Y by its NNMF approximation seems plausible.In this context,the standard parts a k indicated in (4)may be interpreted as subcollections of some “general concepts”contained in these documents.Like images,each document can be thought of as a linear composition of these general concepts.The column-normalized matrix A itself is a term-concept indexing matrix.Nonnegative matrix factorization has many other applications,including linear sparse coding [13,29],chemometric [11,21],image classification [9],neural learning process [20],sound recognition [14],remote sensing and object characterization [25,30].We stress that,in addition to low-rank and nonnegativity,there are applications where other conditions need to be imposed on U and V .Some of these constraints include sparsity,smoothness,specific structures,and so on.The NNMF formulation and resulting computational methods need to be modified accordingly,but it will be too involved to include that discussion in this brief survey.33OptimalityQuite a few numerical algorithms have been developed for solving the NNMF.The methodologies adapted are following more or less the principles of alternating direction iterations,the projected Newton,the reduced quadratic approximation,and the descent search.Specific implementations generally can be categorized into alternating least squares algorithms[21],multiplicative update algorithms[16,17,13],gradient descent 
algorithm,and hybrid algorithm[24,25].Some general assessments of these methods can be found in[5, 18,28].It appears that there is much room for improvement of numerical methods.Although schemes and approaches are different,any numerical method is essentially centered around satisfying thefirst order optimality conditions derived from the Kuhn-Tucker theory.Recall that the computed factors U and V may only be local minimizers of(1).Theorem3.1Necessary conditions for(U,V)∈R m×p+×R p×n+to solve the nonnegative matrix factorizationproblem(1)areU.∗(Y−UV)V=0∈R m×p,(6)V.∗U (Y−UV)=0∈R p×n,(7) (Y−UV)V ≤0,(8) U (Y−UV)≤0,(9)where.∗denotes the Hadamard product.4Conclusions and Some Open ProblemsWe have attempted to outline some of the major concepts related to nonnegative matrix factorization and to briefly discuss a few of the many practical applications.Several open problems remain,and we list just a few of them.•Preprocessing the data matrix Y.It has been observed,e.g.[25,27],that noise removal or a particular basis representation for Y can improve the effectiveness of algorithms for solving(1).This is an active area of research and is unexplored for many applications.•Initializing the factors.Methods for choosing,or seeding,the initial matrices U and V for various algorithms(see,e.g.,[30])is a topic in need of further research.•Uniqueness.Sufficient conditions for uniqueness of solutions to the NNMF problem can be considered in terms of simplicial cones[1],and have been studied in[7].Algorithms for computing the factors U and V generally produce local minimizers of f(U,V),even when constraints are imposed.It would thus be interesting to apply global optimization algorithms to the NNMF problem.•Updating the factors.Devising efficient and effective updating methods when columns are added to the data matrix Y in(1)appears to be a difficult problem and one in need of further research.Our survey in this short essay is of necessity incomplete,and we apologize for resulting omission of other material or ments by readers to the authors on the material are welcome. References[1] A.Berman and R.J.Plemmons,Nonnegative Matrices in the Mathematical Sciences,SIAM,Philadelphia,1994.[2]M.W.Berry,Computational Information Retrieval,SIAM,Philadelphia,2000.[3]M.Catral,L.Han,M.Neumann and R.J.Plemmons,On reduced rank nonnegative matrix factorizations forsymmetric matrices,Lin.Alg.and Appl.,Special Issue on Positivity in Linear Algebra,393(2004),107-126. 
[4]M.T.Chu,On the statistical meaning of the truncated singular decomposition,preprint,North Carolina StateUniversity,November,2000.4[5]M.T.Chu,F.Diele,R.Plemmons,and S.Ragni,Optimality,computation,and interpretation of nonnegativematrix factorizations,preprint,2004.[6]I.S.Dhillon and D.M.Modha,Concept decompositions for large sparse text data using clustering,MachineLearning J.,42(2001),143-175.[7] D.Donoho and V.Stodden,When does nonnegative matrix factorization give a correct decomposition into parts,Stanford University,2003,report,available at /~donoho.[8]EPA,National air quality and emissions trends report,Office of Air Quality Planning and Standards,EPA,Research Traingle Park,EPA454/R-01-004,2001.[9] D.Guillamet,B.Schiele,and J.Vitri.Analyzing non-negative matrix factorization for image classification.InProc.16th Internat.Conf.Pattern Recognition(ICPR02),Vol.II,116119.IEEE Computer Society,August2002.[10]T.Hastie,R.Tibshirani,and J.Friedman,The Elements of Statistical Learning:Data Mining,Inference,andPrediction,Springer-Verlag,New York,2001.[11]P.K.Hopke,Receptor Modeling in Environmental Chemistry,Wiley and Sons,New York,1985.[12]P.K.Hopke,Receptor Modeling for Air Quality Management,Elsevier,Amsterdam,Hetherlands,1991.[13]P.O.Hoyer,Nonnegative sparse coding,Neural Networks for Signal Processing XII,Proc.IEEE Workshop onNeural Networks for Signal Processing,Martigny,2002.[14]T.Kawamoto,K.Hotta,T.Mishima,J.Fujiki,M.Tanaka,and T.Kurita.Estimation of single tones from chordsounds using non-negative matrix factorization,Neural Network World,3(2000),429-436.[15] E.Kim,P.K.Hopke,and E.S.Edgerton,Source identification of Atlanta aerosol by positive matrix factorization,J.Air Waste Manage.Assoc.,53(2003),731-739.[16] D.D.Lee and H.S.Seung,Learning the parts of objects by nonnegative matrix factorization,Nature,401(1999),788-791.[17] D.D.Lee and H.S.Seung,Algorithms for nonnegative matrix factorization,in Advances in Neural InformationProcessing13,MIT Press,2001,556-562.[18]W.Liu and J.Yi,Existing and new algorithms for nonnegative matrix factorization,University of Texas at Austin,2003,report,available at /users/liuwg/383CProject/final_report.pdf.[19] E.Lee,C.K.Chun,and P.Paatero,Application of positive matrix factorization in source apportionment ofparticulate pollutants,Atmos.Environ.,33(1999),3201-3212.[20]M.S.Lewicki and T.J.Sejnowski.Learning overcomplete representations.Neural Comput.,12:337365,2000.[21]P.Paatero and U.Tapper,Positive matrix factorization:A non-negative factor model with optimal utilization oferror estimates of data values,Environmetrics,vol.5,pp.111126,1994.[22]P.Paatero and U.Tapper,Least squares formulation of robust nonnegative factor analysis,Chemomet.Intell.Lab.Systems,37(1997),23-35.[23]H.Park,M.Jeon,and J.B.Rosen,Lower dimensional representation of text data in vector space based informationretrieval,in Computational Information Retrieval,ed.M.Berry,rm.Retrieval Conf.,SIAM, 2001,3-23.[24]V.P.Pauca,F.Shahnaz,M.W.Berry,and R.J.Plemmons,Text mining using nonnegative matrix factorizations,In Proc.SIAM Inter.Conf.on Data Mining,Orlando,FL,April2004.[25]J.Piper,V.P.Pauca,R.J.Plemmons,and M.Giffin,Object characterization from spectral data using non-negative factorization and information theory.In Proc.Amos Technical Conf.,Maui,HI,September2004,see /~plemmons.[26]R.J.Plemmons,M.Horvath,E.Leonhardt,V.P.Pauca,S.Prasad,S.Robinson,H.Setty,T.Torgersen,J.vander Gracht,E.Dowski,R.Narayanswamy,and P.Silveira,Computational imaging Systems for iris recognition,In 
Proc.SPIE49th Annual Meeting,Denver,CO,5559(2004),335-345.[27] F.Shahnaz,M.Berry,P.Pauca,and R.Plemmons,Document clustering using nonnegative matrix factorization,to appear in the Journal on Information Processing and Management,2005,see /~plemmons.[28]J.Tropp,Literature survey:Nonnegative matrix factorization,University of Texas at Asutin,preprint,2003.[29]J.Tropp,Topics in Sparse Approximation,Ph.D.Dissertation,University of Texas at Austin,2004.[30]S.Wild,Seeding non-negative matrix factorization with the spherical k-means clustering,M.S.Thesis,Universityof Colorado,2002.5。
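Returning to the NNMF problem (1) stated in the introduction above: the following is a minimal sketch of the multiplicative-update iteration of Lee and Seung [16, 17], one of the algorithm families mentioned in Section 3. The synthetic data, rank, and iteration count are assumptions of this sketch, not part of the survey.

```python
import numpy as np

def nnmf_multiplicative(Y, p, iters=500, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates minimizing ||Y - U V||_F^2 with U, V >= 0."""
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    U = rng.random((m, p)) + eps
    V = rng.random((p, n)) + eps
    for _ in range(iters):
        V *= (U.T @ Y) / (U.T @ U @ V + eps)   # elementwise update keeps V nonnegative
        U *= (Y @ V.T) / (U @ V @ V.T + eps)   # elementwise update keeps U nonnegative
    return U, V

# Small synthetic nonnegative data matrix of exact rank 3.
rng = np.random.default_rng(1)
Y = rng.random((20, 3)) @ rng.random((3, 15))
U, V = nnmf_multiplicative(Y, p=3)
print(np.linalg.norm(Y - U @ V) / np.linalg.norm(Y))   # small relative residual
```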
A nonmonotone self-adaptive trust-region method for solving nonlinear complementarity problems
… numerical experiments.
Key words: nonlinear complementarity problem; nonmonotone; automatic determination; trust region method; global …
Abstract: Based on the Fischer–Burmeister function (FB function), we can reformulate the nonlinear complementarity …
2. School of Mathematics and Computer Science, Fujian Normal University, Fuzhou 350007, China)
Abstract: Based on the Fischer–Burmeister function (the FB function for short), the nonlinear complementarity problem is transformed into an equivalent unconstrained problem and solved. Trust region and …
An adaptive trust-region method for solving nonsmooth convex minimization problems
Abstract: A self-adaptive trust region method for nonsmooth convex minimization is presented. This paper first transforms the nonsmooth convex minimization into a differentiable convex minimization by using the Moreau–Yosida regu…
唐江花
(School of Mathematics and Computational Science, Guilin University of Electronic Technology, Guilin, Guangxi 541004, China)
Abstract: An adaptive trust-region method is proposed for nonsmooth convex minimization problems. Using the Moreau–Yosida regularization, the nonsmooth convex minimization …
Key words: convex minimization; trust-region method; global convergence; semismoothness
Non-standard neutrino interactions
中微子非标准相互作用Neutrinos are fascinating subatomic particles that have captured the interest of scientists for decades. They are electrically neutral and interact only weakly with matter, making them extremely elusive to detect and study. However, recent research has revealed the existence of non-standard neutrino interactions, which deviate from the predictions of the Standard Model of particle physics. This discovery has opened up new avenues for understanding the fundamental nature of neutrinos and the laws that govern the universe.One of the main motivations for studying non-standard neutrino interactions is to shed light on the phenomenon of neutrino oscillation. Neutrinos come in three different flavors – electron, muon, and tau – and they can change from one flavor to another as they travel through space. This phenomenon, known as neutrino oscillation, impliesthat neutrinos must have mass, which contradicts theinitial assumption of the Standard Model. Non-standard interactions could provide an explanation for this massgeneration mechanism and help us understand why neutrinos behave the way they do.Another reason why non-standard neutrino interactions are important is their potential impact on astrophysics and cosmology. Neutrinos are produced in the core of the Sun, in supernovae explosions, and in other high-energy astrophysical processes. By studying the properties of neutrinos, scientists can gain insights into the inner workings of these cosmic phenomena. Non-standard interactions could affect the production, propagation, and detection of neutrinos, leading to observable differences in astrophysical data. Understanding these effects is crucial for accurately interpreting astronomical observations and unraveling the mysteries of the universe.Furthermore, non-standard neutrino interactions have implications for the search for new physics beyond the Standard Model. The Standard Model is a highly successful theory that describes the behavior of elementary particles and their interactions. However, it is known to be incomplete and does not account for several phenomena, suchas dark matter and the matter-antimatter asymmetry in the universe. Non-standard neutrino interactions could be a manifestation of new, yet undiscovered particles or forces that lie beyond the reach of the Standard Model. By studying these interactions, scientists hope to uncover clues about the nature of this new physics and pave the way for a more comprehensive theory of the universe.From a technological perspective, non-standard neutrino interactions could have practical applications in the field of neutrino detectors. Neutrino detectors play a crucial role in studying neutrinos and their properties. They are used to measure the flux, energy, and flavor composition of neutrinos, providing valuable data for theoretical models and experimental tests. Understanding non-standard interactions can help improve the design and sensitivity of these detectors, leading to more precise measurements and a deeper understanding of neutrinos.In conclusion, non-standard neutrino interactions represent an exciting area of research with far-reaching implications. They provide a unique window into themysteries of neutrino oscillation, astrophysics, new physics beyond the Standard Model, and even technological advancements in neutrino detectors. By studying these interactions, scientists are not only unraveling the secrets of the universe but also pushing the boundaries of human knowledge and understanding. 
The pursuit of knowledge about non-standard neutrino interactions is a testament to the curiosity and ingenuity of the human mind, and it holds the promise of revolutionizing our understanding of the fundamental laws that govern the cosmos.
An acceleration technique for the forward–backward splitting algorithm with different inertial terms in nonconvex optimization
① Corresponding author: Liu Haiyu (1996–), male, master's degree; research interest: the theory of optimization algorithms.
E-mail: *****************.
DOI: 10.16660/ki.1674-098X.2012-5640-5432. An acceleration technique for the forward–backward splitting algorithm with different inertial terms in nonconvex optimization.① Liu Haiyu* (School of Science, Hebei University of Technology, Tianjin 300401, China). Abstract: This paper considers minimizing a class of nonconvex, nonsmooth optimization problems; the step size in the forward–backward splitting algorithm with different inertial terms is modified, and a nonmonotone line-search technique is used to accelerate convergence.
The new algorithm uses the nonmonotone line-search technique so that a preset condition is satisfied at each iteration, which overall yields a larger decrease in the objective value.
Assuming that the sequence generated by the algorithm is bounded, the convergence of the sequence is proved by mathematical induction.
Finally, numerical experiments on nonconvex quadratic programs show that, with suitable parameter choices, the new algorithm effectively reduces the number of iterations needed to reach a prescribed stopping condition.
Key words: nonconvex optimization; nonmonotone line-search technique; forward–backward splitting algorithm with different inertial terms; convergence speed. CLC number: O224. Document code: A. Article ID: 1674-098X(2021)03(a)-0184-04.
An Acceleration Technique of the Forward-backward Splitting Algorithm with Different Inertial Terms for Non-convex Optimization. LIU Haiyu* (Hebei University Of Technology, School of Science, Tianjin, 300401 China). Abstract: In this paper, we consider minimizing a class of non-convex and non-smooth optimization problems, improve the step size of the forward-backward algorithm with different inertia terms, and use the nonmonotone proximal gradient technology to speed up the convergence. The new algorithm uses the nonmonotone proximal gradient technology to select the maximum value of adjacent objective functions in the iteration, so that the value of the function drops more. We prove the convergence of the new algorithm under strong hypothetical conditions. Finally, the numerical experiment is carried out on the non-convex quadratic programming problem, which proves that the new algorithm effectively improves the convergence speed of the original algorithm. Key Words: Non-convex optimization; Nonmonotone proximal gradient method; Forward-backward algorithm with different inertial terms; Convergence speed
1 Problem background. This paper solves first-order optimization problems of the form
$$\min_{x\in\mathbb{R}^n} F(x)=f(x)+g(x) \qquad (1)$$
where the two terms of the objective satisfy:
1) f : ℝⁿ → ℝ is continuously differentiable (possibly nonconvex) and its gradient is Lipschitz continuous, i.e. there is a constant L_f > 0 such that for all x, y ∈ ℝⁿ,
$$\|\nabla f(x)-\nabla f(y)\|\le L_f\,\|x-y\|;$$
2) g : ℝⁿ → ℝ is proper, closed, and convex (possibly nonsmooth) and is bounded below on its effective domain.
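To make the setting of problem (1) concrete, the sketch below performs plain forward–backward (proximal-gradient) steps with a single inertial term on a small lasso-type instance: f is a smooth least-squares term and g = λ‖x‖₁, whose proximal map is soft-thresholding. The test problem, step size, and inertial weight are assumptions of this illustration; the paper's algorithm additionally adapts the step size through a nonmonotone line search.

```python
# Forward-backward splitting with an inertial (extrapolation) term, illustrated
# on F(x) = 0.5*||Ax - b||^2 + lam*||x||_1.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)   # prox of t*||.||_1

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
b = rng.standard_normal(40)
lam = 0.1
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad f
step, beta = 1.0 / L, 0.4              # step size and inertial weight (assumed)

x_prev = x = np.zeros(100)
for _ in range(300):
    y = x + beta * (x - x_prev)        # inertial point
    grad = A.T @ (A @ y - b)           # forward (gradient) step on f
    x_prev, x = x, soft_threshold(y - step * grad, step * lam)  # backward (prox) step on g

F = 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum()
print(F, np.count_nonzero(x))          # objective value and sparsity of the solution
```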
The quasi-inverse regularization method for the Cauchy problem of elliptic equations
The Cauchy problem for elliptic equations is an important class of partial differential equation problems, with wide applications in science, engineering, medicine, and other fields.
Traditional numerical methods for the Cauchy problem often face ill-posedness, so the accuracy and stability of the computed solution are poor.
To overcome this difficulty, researchers have proposed many regularization methods.
Among them, the quasi-inverse regularization method has been widely applied to the solution of elliptic Cauchy problems.
Based on regularization theory, the method introduces an inverse operator and converts the original problem into an approximate regularized problem.
The quasi-inverse method is simple, fast, and effective; without additional computational cost it can markedly improve the quality and stability of the numerical solution.
For a general elliptic Cauchy problem the mathematical model can be written as
$$L(u)=f,\quad u|_{\partial\Omega}=g,$$
where L is an elliptic differential operator, u is the unknown function, f is a given function, and g is the boundary data. The quasi-inverse regularization method solves the problem in the following steps:
1. Introduce an inverse operator $A^{-1}$ and rewrite the problem as
$$A^{-1}L(u)=A^{-1}f,\quad u|_{\partial\Omega}=g.$$
2. Using regularization, construct the auxiliary problem
$$A^{-1}L(u_\alpha)+\alpha u_\alpha=h,\quad u_\alpha|_{\partial\Omega}=g,$$
where α is the regularization parameter and h is a known function.
3. Subtracting the auxiliary problem from the original one gives the new equation
$$A^{-1}L(\tilde u_\alpha)=f-h+\alpha(u-u_\alpha),\quad \tilde u_\alpha|_{\partial\Omega}=0,$$
where $\tilde u_\alpha=u-u_\alpha$.
4. Solve the new equation by a conventional numerical method to obtain the numerical solution $\tilde u_\alpha^*$.
5. The final result is $u_\alpha^*=u_\alpha+\tilde u_\alpha^*$.
The quasi-inverse regularization method can thus effectively improve the quality and stability of numerical solutions of the elliptic Cauchy problem.
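The regularization idea has a simple discrete analogue: replace an ill-conditioned linear system A u = f (a stand-in for the discretized Cauchy problem) by the Tikhonov-type system (AᵀA + αI)u = Aᵀf. The Hilbert-matrix test problem, the noise level, and α below are assumptions of this sketch, not part of the method described above.

```python
# Discrete analogue of regularization: stabilizing an ill-conditioned solve.
import numpy as np

n = 12
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])  # Hilbert matrix
u_true = np.ones(n)
f = A @ u_true + 1e-8 * np.random.default_rng(0).standard_normal(n)      # noisy data

u_naive = np.linalg.solve(A, f)                       # unregularized: noise is amplified
alpha = 1e-8                                          # regularization parameter (assumed)
u_reg = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ f)

print(np.linalg.norm(u_naive - u_true))               # large error
print(np.linalg.norm(u_reg - u_true))                 # much smaller error
```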
The zeta function and the gamma function
The zeta function and the gamma function are both important functions in mathematics, with wide applications in number theory, analysis, the physical sciences, and other fields.
The zeta function was first discovered by Euler; the notion of the gamma function first appeared in the work of Euler and Oldenburg and was later developed and applied further by Weierstrass and Stokes.
The zeta function is a function of a complex variable, defined as the sum of the Riemann series
$$\zeta(s)=\sum_{n=1}^{\infty} n^{-s},$$
where s is a complex variable; the series converges when the real part of s is greater than 1.
Its most famous application is in number theory, where it is related to the distribution of the primes.
Specifically, Euler showed that the sum of the reciprocals of the primes, 1/2 + 1/3 + 1/5 + 1/7 + 1/11 + ⋯, diverges, just as ζ(1), the harmonic series 1 + 1/2 + 1/3 + 1/4 + ⋯, diverges.
In spirit this says that the primes are distributed rather densely among the positive integers — densely enough that the sum of their reciprocals grows without bound.
Indeed, essentially any mathematically precise statement about how densely the primes are distributed is connected with the zeta function.
The gamma function is another function of a complex variable, defined by an integral first written down by Euler:
$$\Gamma(z)=\int_0^{\infty} t^{z-1}e^{-t}\,dt.$$
The gamma function is used widely in analysis, where it plays the role of a continuous factorial.
For example, when n is a positive integer, n! = Γ(n+1).
Beyond that, the gamma function has wide applications in probability theory, statistics, physics, engineering, and other fields.
The zeta function and the gamma function are closely related.
In particular, Euler proved the formula
$$\zeta(2n)=(-1)^{n+1}\,\frac{(2\pi)^{2n}\,B_{2n}}{2\,(2n)!},$$
valid for every positive integer n, where the $B_{2n}$ are Bernoulli numbers.
This formula shows that the even values of the zeta function can be written in terms of Bernoulli numbers and powers of 2π, constants that can in turn be related to special values of the gamma function.
It is one of Euler's important discoveries and is one example of the link between the zeta function and the gamma function.
The zeta function and the gamma function are linked in other ways as well.
For example, in analysis the gamma function enters the standard integral representations used to continue the zeta function analytically.
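The even-value formula quoted above can be verified numerically. The sketch assumes the mpmath library; for n = 1 it reproduces ζ(2) = π²/6.

```python
# Check zeta(2n) = (-1)^(n+1) (2*pi)^(2n) B_{2n} / (2 * (2n)!) for small n (mpmath assumed).
from mpmath import mp, zeta, bernoulli, pi, factorial

mp.dps = 30
for n in range(1, 5):
    lhs = zeta(2 * n)
    rhs = (-1) ** (n + 1) * (2 * pi) ** (2 * n) * bernoulli(2 * n) / (2 * factorial(2 * n))
    print(n, lhs, abs(lhs - rhs))   # difference is ~1e-30 for every n
```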
The zeta method — a reply
What is the zeta method? The zeta method is a mathematical tool used to compute the sums of infinite series.
It was proposed by the mathematician Laurent Schwartz in the 1950s.
The zeta method is based on the properties of the Riemann zeta function; through a sequence of steps it produces the sum of an infinite series.
The Riemann zeta function is a function designed for studying infinite series, defined by
$$\zeta(s)=1^{-s}+2^{-s}+3^{-s}+4^{-s}+\cdots$$
where s is a complex number.
When the real part of s is greater than 1 the series converges; when the real part is less than or equal to 1 it diverges.
So if we want to compute the sum of this series, i.e. the value of the Riemann zeta function, we can use the zeta method.
The first step is to split the complex number s into real and imaginary parts:
$$s=\sigma+it,$$
where σ and t are real. With this decomposition the zeta function becomes a function of two real variables.
The second step is a change of variables that rewrites the infinite series as an integral; this step uses the Gamma function, defined by
$$\Gamma(x)=\int_0^{\infty} t^{x-1}e^{-t}\,dt.$$
The zeta function can then be written in integral form as
$$\zeta(s)\,\Gamma(s)=\int_0^{\infty}\frac{t^{s-1}}{e^{t}-1}\,dt.$$
通过解析延拓,我们可以获得更广阔的计算范围,并计算黎曼zeta函数在其他复数点上的值。
第四步是计算积分。
我们可以通过数值方法或解析方法计算积分。
数值方法可以使用数值积分技术,例如梯形法则或辛普森法则。
解析方法则需要利用复数的性质和公式进行计算。
最后一步是根据计算结果,得到黎曼zeta函数在特定复数点上的值。
通过这个值,我们可以了解无穷级数的和。
In summary, the zeta method is a mathematical tool for computing sums of infinite series, based on the properties of the Riemann zeta function.
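Step 4 (numerical evaluation of the integral) can be illustrated with the integral representation given above. The sketch assumes SciPy; evaluating the integral at s = 2 and dividing by Γ(2) recovers ζ(2) = π²/6.

```python
# Numerically evaluate zeta(2) from  zeta(s) Gamma(s) = int_0^inf t^(s-1)/(e^t - 1) dt.
import numpy as np
from scipy.integrate import quad
from math import gamma, pi

s = 2.0
integral, _ = quad(lambda t: t ** (s - 1) / np.expm1(t), 0.0, np.inf)
zeta_s = integral / gamma(s)
print(zeta_s, pi ** 2 / 6)        # both 1.6449340668...
```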
An improved maximum principle for nonlinear elliptic equations (in English)
保继光
[Journal] Journal of Beijing Normal University (Natural Science)
[Year (volume), issue] 2000, 36(5)
[Abstract] An improved maximum principle is proved for the Dirichlet problem of second-order nonlinear elliptic equations on arbitrary bounded domains.
[Pages] 5 (pp. 574–578)
[Key words] nonlinear elliptic equation; maximum principle; bounded domain
[Author] 保继光
[Affiliation] Department of Mathematics, Beijing Normal University
[Language] Chinese
[CLC number] O175.25
[Related literature]
1. A maximum principle for a class of second-order elliptic characteristic equations [J], 郝江浩; 张亚静
2. New developments on the maximum principle for a class of nonlinear elliptic equations [J], 张海亮; 张武
3. Comparison principles for nonlinear degenerate elliptic equations and boundary value problems for systems [J], 杨世廞
4. A maximum principle for nonlinear elliptic equations containing gradient terms [J], 冀振斌; 燕蜻
5. A maximum principle for solutions of a class of nonlinear fourth-order elliptic equations [J], 郝江浩
Depth image super-resolution based on shape-adaptive non-local regression and a non-local gradient regularizer
Depth image super-resolution based on shape-adaptive non-local regression and a non-local gradient regularizer. 张莹莹; 任超; 朱策. [Journal] Journal of Computer Applications. [Year (volume), issue] 2022, 42(6). To address the low resolution of depth images and the blurring of depth discontinuities, a depth image super-resolution method based on shape-adaptive non-local regression and a non-local gradient regularizer is proposed.
To exploit the correlation among non-local similar patches of a depth image, shape-adaptive non-local regression is proposed.
For each pixel the method extracts a shape-adaptive patch and, based on it, builds a group of pixels similar to the target pixel; then, for each pixel in the group, a non-local weight is obtained with the help of a high-resolution colour image of the same scene, which yields the non-local regression prior.
To preserve the edge information of the depth image, the non-local behaviour of the image gradient is examined.
Unlike total variation (TV) regularization, which assumes a zero-mean Laplacian distribution for the gradients of all pixels, the method uses the non-local similarity of depth-image gradients, estimates the gradient mean of a given pixel from non-local patches, and fits the gradient distribution of each pixel with the learned mean.
Experimental results show that, compared with the edge inconsistency evaluation model (EIEM), the proposed method lowers the mean absolute difference (MAD) on the Middlebury dataset by 41.1% and 40.8% at upsampling rates of 2 and 4 respectively.
[Pages] 9 (pp. 1941–1949). [Authors] 张莹莹; 任超; 朱策. [Affiliations] School of Information and Communication Engineering, University of Electronic Science and Technology of China; School of Electronic Information, Sichuan University. [Language] Chinese. [CLC number] TP391.41. [Related literature] 1. Single-image super-resolution reconstruction using foveated non-local means and local kernel regression; 2. A single-frame image super-resolution algorithm based on non-negative neighbourhood embedding and non-local regularization; 3. A medical image super-resolution method based on non-local autoregressive learning; 4. Single-image super-resolution reconstruction based on deep learning of local and non-local information; 5. Depth image super-resolution reconstruction based on non-local means constraints.
Abstract We consider two possible zeta-function regularization schemes of quantum Liouville theory. One refers to the Laplace-Beltrami operator covariant under conformal transformations, the other to the naive non invariant operator. The first produces an invariant regularization which however does not give rise to a theory invariant under the full conformal group. The other is equivalent to the regularization proposed by Zamolodchikov and Zamolodchikov and gives rise to a theory invariant under the full conformal group.
$$\cdots\left(\frac{dz}{z-z_n}-\frac{d\bar z}{\bar z-\bar z_n}\right)+\eta_n^2\log\varepsilon_n^2\;+\;\frac{1}{4\pi i}\oint_{\partial\Gamma_R}\varphi_B\left(\frac{dz}{z}-\frac{d\bar z}{\bar z}\right)+\log R^2\Bigg]$$
$$S_q[\varphi_B,\chi]=\lim_{\substack{\varepsilon_n\to 0\\ R\to\infty}}\Bigg[\frac{1}{4\pi}\int_\Gamma\Big((\partial_a\chi)^2+4\pi\mu\, e^{\varphi_B}\big(e^{2b\chi}-1-2b\chi\big)\Big)\,d^2z$$
$$\;+\;(2+b^2)\log R^2+\frac{b}{4\pi i}\oint_{\partial\Gamma_R}\varphi_B\left(\frac{dz}{z}-\frac{d\bar z}{\bar z}\right)+\frac{1}{2\pi i}\oint_{\partial\Gamma_R}\chi\left(\frac{dz}{z}-\frac{d\bar z}{\bar z}\right)\Bigg].\qquad(2)$$
In eq.(2) Γ is a disk of radius R from which disks of radius εn around the singularities have been removed.
We recall that Scl is O(1/b2) while the first integral appearing in the quantum action (2)
two loops while in [8] it was shown that such a result holds true to all order perturbation
theory on the pseudosphere. Here we consider the approach in which the determinant of
1Work supported in part by M.I.U.R.
Quantum Liouville theory has been the subject of intense study following different lines of
attack. While the bootstrap [1, 2, 3, 4, 5] starts from the requirement of obtaining a theory
under the full infinite dimensional conformal group.
It came somewhat of a surprise that in the functional approach the regularization which
realizes the full conformal invariance is the non invariant regularization introduced by
$$K^{-\frac{1}{2}}=\int D[\chi]\;e^{-\frac{1}{2}\int \chi(z)\,D\,\chi(z)\,d^2z}\qquad(7)$$
where
$$D=-\frac{2}{\pi}\,\partial_z\partial_{\bar z}+4\mu b^2 e^{\varphi_B}\equiv-\frac{1}{2\pi}\Delta+m^2 e^{\varphi_B}.\qquad(8)$$
The usual invariant zeta-function technique [12] for the computation of the functional
function at coincident points proposed by ZZ [4], and extensively used in [7, 8, 9, 10]. For
definiteness we shall refer to the case of sphere topology.
The complete action is given by SL[ϕB, χ] = Scl[ϕB] + Sq[ϕB, χ] where [7]
$$S_{cl}[\varphi_B]=\lim_{\substack{\varepsilon_n\to 0\\ R\to\infty}}\frac{1}{b^2}\Bigg[\frac{1}{8\pi}\int_\Gamma\Big(\frac{1}{2}(\partial_a\varphi_B)^2+8\pi\mu b^2 e^{\varphi_B}\Big)\,d^2z\qquad(1)$$
and
$$-\;\sum_{n=1}^{N}\frac{\eta_n}{4\pi i}\oint_{\partial\Gamma_n}\varphi_B(z)\,\cdots$$
Alternatively one can consider the elliptic operator D defined in (8) and its determinant
generated by the zeta function
$$\zeta_D(s)=\sum_{i=1}^{\infty}\mu_i^{-s}\qquad(15)$$
being $\mu_i$ the eigenvalues
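For completeness, the standard zeta-function definition of the regularized determinant, which is what ties ζ_D(s) in (15) to the functional integral (7); the precise conventions here are an assumption, since they are not spelled out in the surviving fragment:

```latex
% Standard zeta-function regularization of the functional determinant
% (conventions may differ from the paper's).
\zeta_D(s) = \sum_{i=1}^{\infty} \mu_i^{-s},
\qquad
\log\det D = -\,\zeta_D'(0),
\qquad
K^{-\frac{1}{2}} = (\det D)^{-\frac{1}{2}} = e^{\frac{1}{2}\zeta_D'(0)} .
```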
arXiv:hep-th/0612270v1 26 Dec 2006
IFUP-TH/2006-41
Non invariant zeta-function regularization in quantum Liouville theory 1
Pietro Menotti
Dipartimento di Fisica dell’Universita`, Pisa 56100, Italy and INFN, Sezione di Pisa
determinant K consists in writing
$$\int \chi(z)\,D\,\chi(z)\,d^2z=\int\chi(z)\left(-\frac{1}{2\pi}\Delta_{LB}+m^2\right)\chi(z)\,d\rho(z)\qquad(9)$$
being $d\rho(z)=e^{\varphi_B(z)}d^2z$ the conformal invariant measure and
$$\Delta_{LB}=e^{-\varphi_B}\Delta\qquad(10)$$
$$\prod_{i=1}^{N}V_{\alpha_i}(z_i)\qquad(5)$$
where $\eta_i=b\alpha_i$ and the vertex functions are given by
$$V_\alpha(z)=e^{2\alpha\phi(z)}=e^{\eta\varphi(z)/b^2}\,;\qquad \varphi=2b\phi=\varphi_B+2b\chi.\qquad(6)$$
We recall that the action (1) ascribes to the vertex function Vα(z) the semiclassical dimension ∆sc(α) = α(1/b − α) [11]. In performing the perturbative expansion in b we have to keep η1, . . . ηn constant [3]. The one loop contribution to the n-point function is given by
way on the regularization scheme adopted. In the hamiltonian treatment [6] for the theory
compactified on a circle the normal ordering regularization gives rise to a theory invariant
a non covariant operator is computed in the framework of the zeta function regularization
and show that this procedure is equivalent to the non invariant regularization of the Green
Associated to the operator $H=-\frac{1}{2\pi}\Delta_{LB}+m^2$ we can consider the Green function
$$H\,G(z,z')=\delta^2(z-z')\,e^{-\varphi_B(z')}\equiv\delta_I(z,z')\qquad(14)$$
where $\delta_I(z,z')$ is the invariant delta function.
Zamolodchikov and Zamolodchikov (ZZ) [4]. In [7] it was shown that such a regularization
provides the correct quantum dimensions to the vertex functions on the sphere at least to
to provide an invariant regularization scheme as the eigenvalues λi are invariant under