关于0-1加权网络的翻译


英语单词翻译


1.point and click (鼠标)点击2.integrated circuit 集成电路3.online transactions 网上交易puter monitor 电脑显示器5. projector 投影仪6. screen saver 电脑保护系统7. virtual currency 虚拟货币8. computerized system 计算机系统9. internet distance learning 网络远程教育10. anti-virus programs 杀毒软件11. bar code 条形码12. cordless telephone 无线电话13. cyberspace 网络空间14. desktop 桌面,台式机15. digital television 数字电视16. video camera 摄像机17. electronic hearing aid 电子助听器18. fiber optic technology 光纤技术19. firewall 防火墙20. genetic engineering 基因工程21. hacker 黑客22. intelligent system 智能系统23. it-industry 信息产业24. minicomputer 小型计算机25. multimedia learning system 多媒体学习系统26. palmtop 掌上电脑27. password 密码,口令28. software package 软件包29. solar collector 太阳能集热器30. terminal 终端文化教育词汇1. educational background 教育背景2. educational history 学历3. curriculum 课程4. major 主修5. minor 未成年的;次要的;较小的未成年人副修6. educational highlights 课程重点部分7. specialized courses 专业课8. social practice 社会实践9. part-time jobs 兼职10. extracurricular activities 课外活动11. recreational activities 娱乐活动12. academic activities 学术活动13. scholarship 奖学金14. excellent leader 优秀干部15. student council 学生会16. off-job training 脱产培训17. in-job training 在职培训18. faculty 学院,系19. president 校长20.dean 院长21. department chairman 系主任22. associate professor 副教授23. guest professor 客座教授24. lecturer 讲师25. teaching assistant 助教,教学助理26. research fellow 研究员27. supervisor 监督人,监管员28. talent fair 人才招聘会29. post doctorate 博士后30. undergraduate 本科31. senior 大学四年级学生,高三学生32.junior 大学三年级学生,高二学生33. sophomore 大学二年级学生,高中一年级34. freshman 大学一年级学生35. guest student 旁听生(英)36. auditor 旁听生(美)37. government-supported student 公费生38. day-student 走读生39. intern 实习生40. boarder 寄宿生41. grade-point average (gpa) (加权)平均成绩42. graduate-equivalency certificate 同等学力文凭43. adult-literacy program 成人读写能力课程44. higher education 高等教育45. out-of-class work 课外作业46. admission standard 入学标准47. semester 学期48. expand enrollment 扩大招生49. intellectual property 知识产权50. duck-stuffing 填鸭式51. linguistic and cultural diversity 语言及文化多样性52. bilingualism 双语53. cultural integrity 文化的完整性54.endangered languages 濒危语种55. native tongue 母语社会生活类词汇i. baby boom 婴儿潮2. questionnaire 问卷调查3. social change 社会变迁4. social interrelationship 社会关系5. social status 社会地位6. mass media 大众媒体7. commercial spheres 商业领域8. marital instability 婚姻关系不稳定9. rotating shifts 轮班工作10. familial relationships 家庭关系11. single mother 单身母亲12. graveyard shifts 夜班13. one-on-one interaction 一对一交流14. telephone operator 电话接线员15. tollbooth collector 公路收费员16. service economy 服务经济17. expertise economy 经济专业知识18. 50 highest-paying occupation 50大高收入工作19. extra money 外快20. nest egg 私房钱21. joint declaration 联合声明22. audience rating 收视率23. free-lance writer 自由撰稿人24. practitioner 从业者25. publishing house 出版社26. the space race 太空竞赛27. jury system 陪审制度28. literacy 读写能力29. democratic values 民主观念30. racial discrimination 种族歧视(三)近年真题经济管理类1. auction houses 拍卖行2. auctioneer 拍卖商3. bankruptcy 破产4. bonus 奖金5. bull run 布尔溪6. category 种类,分类;范畴7. chief executive 行政长官;董事长8. compensation committee 薪酬委员会9. consumerism 消费主义10. decline 拒绝;下降;衰退;谢绝11. depression 沮丧;不景气12. domestic 国内的;家庭的;国货13. downturn 衰退;低迷时期14. elite 精英,精华15. financially prudent 财务稳健16. income 收入,收益17. investor 投资者18. manufacture 制造,生产19. marketing trend 营销趋势20. marketing trick 市场营销技巧21. momentum 动量22. monopoly 垄断23. outside director 外部董事24. patent 专利25. private industry 私营企业26. profit driven 利益驱动27. profit margin 利润率28. recession 经济衰退29. revenue 收入30. ruthless advertising 无情的广告31. slump 暴跌32. stagnation 停滞,萧条,不景气33. stock 库存;股份;存货34. strategy 战略35. subsidy 补贴,津贴36. term 学期;术语文化教育类1. academic achievement 学术成就2. 
advanced course 高级课程3. advertising 广告,广告学;广告的4. anecdote 轶事,奇闻5. approach 方法,途径;接近,靠近6. artwork circulation 艺术品流通7. assign 分配,指派8. associate professor 副教授9. authority 权威,权利10. campus 校园,校区11. contemporary art 当代艺术12. conversational partners 对话伙伴13. educational policy 教育政策14. educational ritual 教育仪式15. educational standard 教育标准16. enormous ego 自负,自我17. gallery 美术馆,画廊18. generation 一代人19. gesture 手势,姿势20. historian 历史学家21. impressionist 印象派,印象主义者22. journalist 记者,新闻工作者23. masterpiece 杰作24. passion 激情,热情社会生活类1. administrator 管理员,行政人员2. appearance 外貌,外观3. biotech industry 生物科技产业4. blue-ribbon juries 陪审团5. bottled water 瓶装水6. charity 慈善,施舍7. convention 大会;公约;风俗8. currency 货币,通货9. decoration 装饰,装潢10. defendants 被告11. development 发展12. disinfecting wipes 消毒湿巾13. domain 领域14. efficacy 功效,疗效15. encode 编码16. era 时代17. euro zone 欧元区18. fabric softeners 织物柔软剂19. far-offspring 后代,子孙20. femininity 女性化21. gender difference 性别差异22. genes 基因23. globalization 全球化24. identity 身份25. imagination 想象力26. immigrant 移民27. impoverish 使贫困28. individual 个体,个人29. inflexible policy 僵化的政策30. innocence 清白,无辜31. innovation 创新,革新32. interaction 相互作用33. joblessness 失业34. jury selection procedures 陪审团的选择过程35. jury system 陪审制度36. lifestyle 生活方式3 7. masculine 阳性,男性的38. molecules 分子,微粒39. public hearing 公开审理,听证会40. questionable beauty cream41. racial discrimination 种族歧视42. skin moisturizers 护肤液43. social fabric 社会结构44. suburban 郊区居民,郊外45. supreme court 最高法院46. talkative 健谈,多话的4 7. tangible inequities 实在的不公平现象48. unemployment 失业,失业率49. verdict 裁决,判决50. washing machine 洗衣机科技技术类1. air-traffic-control regulations 空中交通管制规定2. biotech industry3. development4. domain 领域5. efficacy6. encode7. genes8. innovation9. interaction10. jet-fuel 喷气燃料11. molecules12. Nitrogen-oxide emissions 氮氧化物排放量13. operational guidelines 操作规范14. routine military flight 常规军事飞行15. senior medical figures 高级医疗数据16. washing machine。

transformer模型中英文互译


一、概述Transformer模型是由Google于2017年提出的一种自然语言处理模型,其在机器翻译、文本生成等任务中取得了巨大成功。

该模型通过利用自注意力机制和位置编码,实现了并行化处理和捕捉长距离依赖关系的能力。

其革命性的设计和优越的性能使得它成为了自然语言处理领域的一颗新星。

而其中的中英文互译任务更是其重要应用之一,本文将讨论Transformer模型在中英文互译任务中的应用和性能。

二、Transformer模型的基本原理1. 自注意力机制Transformer模型的核心是自注意力机制,该机制允许模型同时对输入序列中的所有位置进行加权处理,从而实现了并行化的处理能力。

具体来说,通过计算每个位置与其他位置的相似度得到权重,然后将权重作为每个位置的加权值进行计算,这使得模型能够捕捉输入序列中不同位置之间的依赖关系。

2. 位置编码为了将位置信息引入模型中,Transformer模型采用了位置编码的方式。

其通过在输入词向量中加上位置编码向量,使得每个词向量都具有了位置信息,从而使得模型能够捕捉到输入序列中词之间的位置信息。

这种处理方式避免了传统循环神经网络中的局限性,使得模型能够更好地处理长距离依赖关系。
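
位置编码的常见实现之一是原论文中的正弦/余弦编码。下面给出一个最小的 NumPy 实现草图(函数名与参数为本文补充的示例,并非某个框架的既有 API):

import numpy as np

def sinusoidal_positional_encoding(max_len, d_model):
    """生成形状为 (max_len, d_model) 的正弦位置编码矩阵。"""
    positions = np.arange(max_len)[:, np.newaxis]            # (max_len, 1)
    dims = np.arange(d_model)[np.newaxis, :]                  # (1, d_model)
    # 每一对 (sin, cos) 维度共享同一个频率
    angle_rates = 1.0 / np.power(10000.0, (2 * (dims // 2)) / d_model)
    angles = positions * angle_rates
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles[:, 0::2])                     # 偶数维用 sin
    pe[:, 1::2] = np.cos(angles[:, 1::2])                     # 奇数维用 cos
    return pe

# 用法示例:把位置编码加到词向量序列上(假设 50 个词、维度 512)
word_embeddings = np.random.randn(50, 512)
encoded = word_embeddings + sinusoidal_positional_encoding(50, 512)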

三、Transformer模型在中英文互译任务中的应用1. 输入编码在中英文互译任务中,Transformer模型首先利用词嵌入将输入的中文句子和英文句子分别映射为词向量序列,然后通过位置编码的方式将位置信息引入词向量序列中。

这样,输入序列的信息就被编码在了词向量序列中,为后续的处理做好了准备。

2. 编码器接下来,输入的词向量序列会经过多层的编码器进行处理,每个编码器块包括自注意力机制和前馈神经网络。

自注意力机制允许模型同时对输入序列中的所有位置进行加权处理,从而捕捉输入序列中的依赖关系;而前馈神经网络则通过多层感知机对输入序列进行非线性变换。

这些编码器的处理使得输入序列的信息得到了丰富的表示。
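
作为参考,下面用 PyTorch 勾勒一个编码器块的最小结构(多头自注意力 + 前馈网络 + 残差与层归一化)。这只是一个示意性草图,类名与超参数取值为本文假设,并非 Transformer 的原始实现:

import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    def __init__(self, d_model=512, num_heads=8, d_ff=2048, dropout=0.1):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, num_heads,
                                          dropout=dropout, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                 nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.drop = nn.Dropout(dropout)

    def forward(self, x):
        # 自注意力子层:对序列中所有位置同时做加权整合
        attn_out, _ = self.attn(x, x, x)
        x = self.norm1(x + self.drop(attn_out))
        # 前馈子层:逐位置的非线性变换
        x = self.norm2(x + self.drop(self.ffn(x)))
        return x

# 用法示例:batch=2,序列长度=10,模型维度=512
x = torch.randn(2, 10, 512)
out = EncoderBlock()(x)   # 输出形状仍为 (2, 10, 512)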

Network的含义


Network是路由进程下的一条命令。它的作用是:如果接口下配置了IP地址,那么被network命令匹配到的接口就会加入该路由选择协议的进程,这些接口随后会发送hello包来建立邻居关系(RIP没有hello包)。network宣告可以只包含一个接口,也可以包含多个接口,还可以覆盖一个或多个网段上的接口;总之,只要被network宣告包含的接口都会加入该进程(前提是接口处于开启状态,并且配置了地址和掩码)。network宣告并不影响路由条目的掩码位数,不同的network宣告只是所覆盖的IP地址范围大小不同。network可以把路由表中直连的接口宣告进来,对应的路由条目可以是标记为C(直连)的,也可以是标记为S(静态)的(静态路由写的是出接口而不是下一跳地址)。

格式:network + 网段或单个地址 + 反掩码

子网掩码:子网掩码(IP subnet mask)是在接口下配置地址时使用的,用若干个连续的1开头、后面跟连续的0的32位二进制数表示,用于指明IP地址中哪些位是网络位。例如 1.1.1.1 255.255.255.0 中,255.255.255.0 就是一个子网掩码。子网掩码也是32位的,与IP地址一一对应:子网掩码的第一位对应IP地址的第一位,第二位对应第二位,依此类推直到第32位。子网掩码中的1表示它所对应的IP地址位为网络位,0表示对应的位为主机位。由于IP地址前面连续的是网络位、后面连续的是主机位,所以子网掩码必须前面是连续的1、后面是连续的0,否则是不正确的,也无法输入到路由器当中。

反掩码:之所以叫反掩码(wildcard bits),是因为它的写法和子网掩码(IP subnet mask)正好相反:子网掩码是以连续的1开头、后面跟连续的0,而反掩码是以连续的0开头、后面跟连续的1。

反掩码中,0表示精确匹配network后面网络号的对应二进制位,1表示忽略(不比较)对应的二进制位。比如在进程下配置 network 12.1.1.0 0.0.0.255:网络号12.1.1.0换算为二进制是 00001100 00000001 00000001 00000000,反掩码0.0.0.255换算为二进制是 00000000 00000000 00000000 11111111。通过比较可以看出,网络号的前24位需要精确匹配,而反掩码的后8位为1,表示忽略;也就是说,只要接口IP地址的前24位为 00001100 00000001 00000001,该接口就被宣告进了路由协议的进程。由于12.1.1.1--12.1.1.254这些地址的前24位都是 00001100 00000001 00000001,所以配置了这些地址的接口都会被宣告进进程。实际的判断过程是:用反掩码与network后面的网络号做比较(反掩码为0的位照抄网络号,为1的位不参与比较)得到一个结果,再用同样的反掩码与接口IP地址做比较得到另一个结果,两个结果相同就表明该接口被宣告进了路由进程,不同则表示没有宣告进进程。network 0.0.0.0 这条宣告是个特例:0.0.0.0表示匹配所有,在 show run 中显示为 network 0.0.0.0;如果写成 network 0.0.0.0 255.255.255.255,show run 中显示的还是 network 0.0.0.0。当在OSPF进程中输入 network 0.0.0.0 0.0.0.0 命令时,路由器会自动将其翻译成 0.0.0.0 255.255.255.255,因为0.0.0.0表示匹配所有,而255.255.255.255表示全部忽略,结果同样是匹配所有地址;而在EIGRP中显示的是 network 0.0.0.0,把所有接口都宣告进了路由进程。如果写反掩码时写成了连续的1开头、连续的0跟随,系统会自动把1转换成0、把0转换成1。
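
下面用一段 Python 小脚本演示上述"network + 反掩码"的按位匹配逻辑(函数名与地址均为本文假设的示例,仅用于说明原理):

import ipaddress

def matches_network_statement(interface_ip, network, wildcard):
    """反掩码为0的位必须与network一致,为1的位忽略。"""
    ip = int(ipaddress.ip_address(interface_ip))
    net = int(ipaddress.ip_address(network))
    wc = int(ipaddress.ip_address(wildcard))
    care_bits = ~wc & 0xFFFFFFFF          # 反掩码取反,得到需要精确匹配的位
    return (ip & care_bits) == (net & care_bits)

# 对应正文例子:network 12.1.1.0 0.0.0.255
print(matches_network_statement("12.1.1.1", "12.1.1.0", "0.0.0.255"))        # True
print(matches_network_statement("12.1.2.1", "12.1.1.0", "0.0.0.255"))        # False
print(matches_network_statement("12.1.2.1", "0.0.0.0", "255.255.255.255"))   # True,匹配所有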

net英语单词


在英语中,"net" 是一个前缀,常用于构成单词,意味着“网络的”或“互联网的”。

以下是一些常见的以"net" 开头的英文单词:
1. Network -网络,通常指计算机网络或社交网络。

2. Internet -互联网,全球范围内的计算机网络系统。

3. Netscape -曾经的一个网络浏览器名称,由net(网络)和scape(景观)组合而成。

4. Netizen -网络公民,指在互联网上有活跃参与行为的个人。

5. Netiquette -网络礼仪,网络上行为的规范或礼节。

6. Net-zero -净零,指碳排放量为零或能源消耗与产生相平衡的状态。

7. Net worth -净资产,个人或公司的总资产减去总负债后的价值。

8. Net income -净收入,个人或公司从经营活动中获得的收入减去费用后的金额。

9. Net gain/loss -净收益/损失,指的是总的收益减去总的损失。

请注意,"net" 作形容词时通常表示"净"的意思,即扣除相关项目之后的数额,与 "gross"(总的、毛的)相对,如 net income(净收入)、net loss(净亏损)。

在不同的语境中,"net" 的含义可能会有所不同。

self-attention计算公式


Self-attention是一种用于构建神经网络的重要机制,它在自然语言处理和计算机视觉等领域得到了广泛的应用。

本文将介绍self-attention 的计算公式,并对其在神经网络中的作用进行详细解析。

一、self-attention的基本原理Self-attention是一种能够将输入序列中不同位置的信息进行关联和整合的机制。

在自然语言处理中,输入序列通常是一句话或一段文本;在计算机视觉中,输入序列可以是一幅图像的像素。

Self-attention的基本原理是,对输入序列中的每个元素都计算一个权重,然后将这些权重与相应元素的特征向量进行加权求和,得到整合后的表示。

这样一来,每个元素都能够同时融合整个序列的信息,从而达到全局关联的效果。

二、self-attention的计算公式1. 计算权重对于输入序列中的每个元素,首先需要计算其与其他所有元素的相关度。

这可以通过以下公式来实现:\[ E_{ij} = q(i) \cdot k(j) \]其中,\( E_{ij} \) 表示元素i与元素j的相关度,\( q(i) \) 表示元素i的查询向量,\( k(j) \) 表示元素j的键向量。

在实际应用中,查询向量和键向量都是通过输入序列的特征向量经过线性变换得到的。

2. 归一化在计算完所有元素与其他元素的相关度后,需要对这些相关度进行归一化处理,以确保它们都在0到1之间,并且相加等于1。

这可以通过softmax函数来实现:\[ a_{ij} = \frac{\exp(E_{ij})}{\sum_{k} \exp(E_{ik})} \]其中,\( a_{ij} \) 表示元素i与元素j的归一化权重。

3. 加权求和将归一化后的权重与相应的特征向量(值向量)进行加权求和,得到整合后的表示:\[ z(i) = \sum_{j} a_{ij} \cdot v(j) \]其中,\( z(i) \) 表示元素i的整合表示,\( v(j) \) 表示元素j的值向量(特征向量)。
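
按照上述三个步骤,可以写出如下 NumPy 计算草图(Q、K、V 的线性变换权重取随机值,仅演示公式的计算流程,变量名为本文假设):

import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (n, d) 输入特征;Wq/Wk/Wv: (d, d_k) 线性变换矩阵。"""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    E = Q @ K.T                                              # 步骤1:相关度 E_ij = q(i)·k(j)
    E = E - E.max(axis=1, keepdims=True)                     # 数值稳定化
    A = np.exp(E) / np.exp(E).sum(axis=1, keepdims=True)     # 步骤2:softmax 归一化
    return A @ V                                             # 步骤3:加权求和得到整合表示

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                                  # 5 个元素,每个 8 维
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
Z = self_attention(X, Wq, Wk, Wv)                            # 输出形状 (5, 8)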

AI专用词汇


AI专⽤词汇LetterAAccumulatederrorbackpropagation累积误差逆传播ActivationFunction激活函数AdaptiveResonanceTheory/ART⾃适应谐振理论Addictivemodel加性学习Adversari alNetworks对抗⽹络AffineLayer仿射层Affinitymatrix亲和矩阵Agent代理/智能体Algorithm算法Alpha-betapruningα-β剪枝Anomalydetection异常检测Approximation近似AreaUnderROCCurve/AUCRoc曲线下⾯积ArtificialGeneralIntelligence/AGI通⽤⼈⼯智能ArtificialIntelligence/AI⼈⼯智能Associationanalysis关联分析Attentionmechanism注意⼒机制Attributeconditionalindependenceassumption属性条件独⽴性假设Attributespace属性空间Attributevalue属性值Autoencoder⾃编码器Automaticspeechrecognition⾃动语⾳识别Automaticsummarization⾃动摘要Aver agegradient平均梯度Average-Pooling平均池化LetterBBackpropagationThroughTime通过时间的反向传播Backpropagation/BP反向传播Baselearner基学习器Baselearnin galgorithm基学习算法BatchNormalization/BN批量归⼀化Bayesdecisionrule贝叶斯判定准则BayesModelAveraging/BMA贝叶斯模型平均Bayesoptimalclassifier贝叶斯最优分类器Bayesiandecisiontheory贝叶斯决策论Bayesiannetwork贝叶斯⽹络Between-cla ssscattermatrix类间散度矩阵Bias偏置/偏差Bias-variancedecomposition偏差-⽅差分解Bias-VarianceDilemma偏差–⽅差困境Bi-directionalLong-ShortTermMemory/Bi-LSTM双向长短期记忆Binaryclassification⼆分类Binomialtest⼆项检验Bi-partition⼆分法Boltzmannmachine玻尔兹曼机Bootstrapsampling⾃助采样法/可重复采样/有放回采样Bootstrapping⾃助法Break-EventPoint/BEP平衡点LetterCCalibration校准Cascade-Correlation级联相关Categoricalattribute离散属性Class-conditionalprobability类条件概率Classificationandregressiontree/CART分类与回归树Classifier分类器Class-imbalance类别不平衡Closed-form闭式Cluster簇/类/集群Clusteranalysis聚类分析Clustering聚类Clusteringensemble聚类集成Co-adapting共适应Codin gmatrix编码矩阵COLT国际学习理论会议Committee-basedlearning基于委员会的学习Competiti velearning竞争型学习Componentlearner组件学习器Comprehensibility可解释性Comput ationCost计算成本ComputationalLinguistics计算语⾔学Computervision计算机视觉C onceptdrift概念漂移ConceptLearningSystem/CLS概念学习系统Conditionalentropy条件熵Conditionalmutualinformation条件互信息ConditionalProbabilityTable/CPT条件概率表Conditionalrandomfield/CRF条件随机场Conditionalrisk条件风险Confidence置信度Confusionmatrix混淆矩阵Connectionweight连接权Connectionism连结主义Consistency⼀致性/相合性Contingencytable列联表Continuousattribute连续属性Convergence收敛Conversationalagent会话智能体Convexquadraticprogramming凸⼆次规划Convexity凸性Convolutionalneuralnetwork/CNN卷积神经⽹络Co-oc currence同现Correlationcoefficient相关系数Cosinesimilarity余弦相似度Costcurve成本曲线CostFunction成本函数Costmatrix成本矩阵Cost-sensitive成本敏感Crosse ntropy交叉熵Crossvalidation交叉验证Crowdsourcing众包Curseofdimensionality维数灾难Cutpoint截断点Cuttingplanealgorithm割平⾯法LetterDDatamining数据挖掘Dataset数据集DecisionBoundary决策边界Decisionstump决策树桩Decisiontree决策树/判定树Deduction演绎DeepBeliefNetwork深度信念⽹络DeepConvolutionalGe nerativeAdversarialNetwork/DCGAN深度卷积⽣成对抗⽹络Deeplearning深度学习Deep neuralnetwork/DNN深度神经⽹络DeepQ-Learning深度Q学习DeepQ-Network深度Q⽹络Densityestimation密度估计Density-basedclustering密度聚类Differentiab leneuralcomputer可微分神经计算机Dimensionalityreductionalgorithm降维算法D irectededge有向边Disagreementmeasure不合度量Discriminativemodel判别模型Di scriminator判别器Distancemeasure距离度量Distancemetriclearning距离度量学习D istribution分布Divergence散度Diversitymeasure多样性度量/差异性度量Domainadaption领域⾃适应Downsampling下采样D-separation(Directedseparation)有向分离Dual problem对偶问题Dummynode哑结点DynamicFusion动态融合Dynamicprogramming动态规划LetterEEigenvaluedecomposition特征值分解Embedding嵌⼊Emotionalanalysis情绪分析Empiricalconditionalentropy经验条件熵Empiricalentropy经验熵Empiricalerror经验误差Empiricalrisk经验风险End-to-End端到端Energy-basedmodel基于能量的模型Ensemblelearning集成学习Ensemblepruning集成修剪ErrorCorrectingOu tputCodes/ECOC纠错输出码Errorrate错误率Error-ambiguitydecomposition误差-分歧分解Euclideandistance欧⽒距离Evolutionarycomputation演化计算Expectation-Maximization期望最⼤化Expectedloss期望损失ExplodingGradientProblem梯度爆炸问题Exponentiallossfunction指数损失函数ExtremeLearningMachine/ELM超限学习机LetterFFactorization因⼦分解Falsenegative假负类Falsepositive假正类False 
PositiveRate/FPR假正例率Featureengineering特征⼯程Featureselection特征选择Featurevector特征向量FeaturedLearning特征学习FeedforwardNeuralNetworks/FNN前馈神经⽹络Fine-tuning微调Flippingoutput翻转法Fluctuation震荡Forwards tagewisealgorithm前向分步算法Frequentist频率主义学派Full-rankmatrix满秩矩阵Func tionalneuron功能神经元LetterGGainratio增益率Gametheory博弈论Gaussianker nelfunction⾼斯核函数GaussianMixtureModel⾼斯混合模型GeneralProblemSolving通⽤问题求解Generalization泛化Generalizationerror泛化误差Generalizatione rrorbound泛化误差上界GeneralizedLagrangefunction⼴义拉格朗⽇函数Generalized linearmodel⼴义线性模型GeneralizedRayleighquotient⼴义瑞利商GenerativeAd versarialNetworks/GAN⽣成对抗⽹络GenerativeModel⽣成模型Generator⽣成器Genet icAlgorithm/GA遗传算法Gibbssampling吉布斯采样Giniindex基尼指数Globalminimum全局最⼩GlobalOptimization全局优化Gradientboosting梯度提升GradientDescent梯度下降Graphtheory图论Ground-truth真相/真实LetterHHardmargin硬间隔Hardvoting硬投票Harmonicmean调和平均Hessematrix海塞矩阵Hiddendynamicmodel隐动态模型H iddenlayer隐藏层HiddenMarkovModel/HMM隐马尔可夫模型Hierarchicalclustering层次聚类Hilbertspace希尔伯特空间Hingelossfunction合页损失函数Hold-out留出法Homo geneous同质Hybridcomputing混合计算Hyperparameter超参数Hypothesis假设Hypothe sistest假设验证LetterIICML国际机器学习会议Improvediterativescaling/IIS改进的迭代尺度法Incrementallearning增量学习Independentandidenticallydistributed/i.i.d.独⽴同分布IndependentComponentAnalysis/ICA独⽴成分分析Indicatorfunction指⽰函数Individuallearner个体学习器Induction归纳Inductivebias归纳偏好I nductivelearning归纳学习InductiveLogicProgramming/ILP归纳逻辑程序设计Infor mationentropy信息熵Informationgain信息增益Inputlayer输⼊层Insensitiveloss不敏感损失Inter-clustersimilarity簇间相似度InternationalConferencefor MachineLearning/ICML国际机器学习⼤会Intra-clustersimilarity簇内相似度Intrinsicvalue固有值IsometricMapping/Isomap等度量映射Isotonicregression等分回归It erativeDichotomiser迭代⼆分器LetterKKernelmethod核⽅法Kerneltrick核技巧K ernelizedLinearDiscriminantAnalysis/KLDA核线性判别分析K-foldcrossvalidationk折交叉验证/k倍交叉验证K-MeansClusteringK–均值聚类K-NearestNeighb oursAlgorithm/KNNK近邻算法Knowledgebase知识库KnowledgeRepresentation知识表征LetterLLabelspace标记空间Lagrangeduality拉格朗⽇对偶性Lagrangemultiplier拉格朗⽇乘⼦Laplacesmoothing拉普拉斯平滑Laplaciancorrection拉普拉斯修正Latent DirichletAllocation隐狄利克雷分布Latentsemanticanalysis潜在语义分析Latentvariable隐变量Lazylearning懒惰学习Learner学习器Learningbyanalogy类⽐学习Learn ingrate学习率LearningVectorQuantization/LVQ学习向量量化Leastsquaresre gressiontree最⼩⼆乘回归树Leave-One-Out/LOO留⼀法linearchainconditional randomfield线性链条件随机场LinearDiscriminantAnalysis/LDA线性判别分析Linearmodel线性模型LinearRegression线性回归Linkfunction联系函数LocalMarkovproperty局部马尔可夫性Localminimum局部最⼩Loglikelihood对数似然Logodds/logit对数⼏率Lo gisticRegressionLogistic回归Log-likelihood对数似然Log-linearregression对数线性回归Long-ShortTermMemory/LSTM长短期记忆Lossfunction损失函数LetterM Machinetranslation/MT机器翻译Macron-P宏查准率Macron-R宏查全率Majorityvoting绝对多数投票法Manifoldassumption流形假设Manifoldlearning流形学习Margintheory间隔理论Marginaldistribution边际分布Marginalindependence边际独⽴性Marginalization边际化MarkovChainMonteCarlo/MCMC马尔可夫链蒙特卡罗⽅法MarkovRandomField马尔可夫随机场Maximalclique最⼤团MaximumLikelihoodEstimation/MLE极⼤似然估计/极⼤似然法Maximummargin最⼤间隔Maximumweightedspanningtree最⼤带权⽣成树Max-P ooling最⼤池化Meansquarederror均⽅误差Meta-learner元学习器Metriclearning度量学习Micro-P微查准率Micro-R微查全率MinimalDescriptionLength/MDL最⼩描述长度Minim axgame极⼩极⼤博弈Misclassificationcost误分类成本Mixtureofexperts混合专家Momentum动量Moralgraph道德图/端正图Multi-classclassification多分类Multi-docum entsummarization多⽂档摘要Multi-layerfeedforwardneuralnetworks多层前馈神经⽹络MultilayerPerceptron/MLP多层感知器Multimodallearning多模态学习Multipl eDimensionalScaling多维缩放Multiplelinearregression多元线性回归Multi-re sponseLinearRegression/MLR多响应线性回归Mutualinformation互信息LetterN 
Naivebayes朴素贝叶斯NaiveBayesClassifier朴素贝叶斯分类器Namedentityrecognition命名实体识别Nashequilibrium纳什均衡Naturallanguagegeneration/NLG⾃然语⾔⽣成Naturallanguageprocessing⾃然语⾔处理Negativeclass负类Negativecorrelation负相关法NegativeLogLikelihood负对数似然NeighbourhoodComponentAnalysis/NCA近邻成分分析NeuralMachineTranslation神经机器翻译NeuralTuringMachine神经图灵机Newtonmethod⽜顿法NIPS国际神经信息处理系统会议NoFreeLunchTheorem /NFL没有免费的午餐定理Noise-contrastiveestimation噪⾳对⽐估计Nominalattribute列名属性Non-convexoptimization⾮凸优化Nonlinearmodel⾮线性模型Non-metricdistance⾮度量距离Non-negativematrixfactorization⾮负矩阵分解Non-ordinalattribute⽆序属性Non-SaturatingGame⾮饱和博弈Norm范数Normalization归⼀化Nuclearnorm核范数Numericalattribute数值属性LetterOObjectivefunction⽬标函数Obliquedecisiontree斜决策树Occam’srazor奥卡姆剃⼑Odds⼏率Off-Policy离策略Oneshotlearning⼀次性学习One-DependentEstimator/ODE独依赖估计On-Policy在策略Ordinalattribute有序属性Out-of-bagestimate包外估计Outputlayer输出层Outputsmearing输出调制法Overfitting过拟合/过配Oversampling过采样LetterPPairedt-test成对t检验Pairwise成对型PairwiseMarkovproperty成对马尔可夫性Parameter参数Parameterestimation参数估计Parametertuning调参Parsetree解析树ParticleSwarmOptimization/PSO粒⼦群优化算法Part-of-speechtagging词性标注Perceptron感知机Performanceme asure性能度量PlugandPlayGenerativeNetwork即插即⽤⽣成⽹络Pluralityvoting相对多数投票法Polaritydetection极性检测Polynomialkernelfunction多项式核函数Pooling池化Positiveclass正类Positivedefinitematrix正定矩阵Post-hoctest后续检验Post-pruning后剪枝potentialfunction势函数Precision查准率/准确率Prepruning预剪枝Principalcomponentanalysis/PCA主成分分析Principleofmultipleexplanations多释原则Prior先验ProbabilityGraphicalModel概率图模型ProximalGradientDescent/PGD近端梯度下降Pruning剪枝Pseudo-label伪标记LetterQQuantizedNeu ralNetwork量⼦化神经⽹络Quantumcomputer量⼦计算机QuantumComputing量⼦计算Quasi Newtonmethod拟⽜顿法LetterRRadialBasisFunction/RBF径向基函数RandomFo restAlgorithm随机森林算法Randomwalk随机漫步Recall查全率/召回率ReceiverOperatin gCharacteristic/ROC受试者⼯作特征RectifiedLinearUnit/ReLU线性修正单元Recurr entNeuralNetwork循环神经⽹络Recursiveneuralnetwork递归神经⽹络Referencemodel参考模型Regression回归Regularization正则化Reinforcementlearning/RL强化学习Representationlearning表征学习Representertheorem表⽰定理reproducingke rnelHilbertspace/RKHS再⽣核希尔伯特空间Re-sampling重采样法Rescaling再缩放Residu alMapping残差映射ResidualNetwork残差⽹络RestrictedBoltzmannMachine/RBM受限玻尔兹曼机RestrictedIsometryProperty/RIP限定等距性Re-weighting重赋权法Robu stness稳健性/鲁棒性Rootnode根结点RuleEngine规则引擎Rulelearning规则学习LetterS Saddlepoint鞍点Samplespace样本空间Sampling采样Scorefunction评分函数Self-Driving⾃动驾驶Self-OrganizingMap/SOM⾃组织映射Semi-naiveBayesclassifiers半朴素贝叶斯分类器Semi-SupervisedLearning半监督学习semi-SupervisedSupportVec torMachine半监督⽀持向量机Sentimentanalysis情感分析Separatinghyperplane分离超平⾯SigmoidfunctionSigmoid函数Similaritymeasure相似度度量Simulatedannealing模拟退⽕Simultaneouslocalizationandmapping同步定位与地图构建SingularV alueDecomposition奇异值分解Slackvariables松弛变量Smoothing平滑Softmargin软间隔Softmarginmaximization软间隔最⼤化Softvoting软投票Sparserepresentation稀疏表征Sparsity稀疏性Specialization特化SpectralClustering谱聚类SpeechRecognition语⾳识别Splittingvariable切分变量Squashingfunction挤压函数Stability-plasticitydilemma可塑性-稳定性困境Statisticallearning统计学习Statusfeaturefunction状态特征函Stochasticgradientdescent随机梯度下降Stratifiedsampling分层采样Structuralrisk结构风险Structuralriskminimization/SRM结构风险最⼩化S ubspace⼦空间Supervisedlearning监督学习/有导师学习supportvectorexpansion⽀持向量展式SupportVectorMachine/SVM⽀持向量机Surrogatloss替代损失Surrogatefunction替代函数Symboliclearning符号学习Symbolism符号主义Synset同义词集LetterTT-Di stributionStochasticNeighbourEmbedding/t-SNET–分布随机近邻嵌⼊Tensor张量TensorProcessingUnits/TPU张量处理单元Theleastsquaremethod最⼩⼆乘法Th reshold阈值Thresholdlogicunit阈值逻辑单元Threshold-moving阈值移动TimeStep时间步骤Tokenization标记化Trainingerror训练误差Traininginstance训练⽰例/训练例Tran 
sductivelearning直推学习Transferlearning迁移学习Treebank树库Tria-by-error试错法Truenegative真负类Truepositive真正类TruePositiveRate/TPR真正例率TuringMachine图灵机Twice-learning⼆次学习LetterUUnderfitting⽋拟合/⽋配Undersampling⽋采样Understandability可理解性Unequalcost⾮均等代价Unit-stepfunction单位阶跃函数Univariatedecisiontree单变量决策树Unsupervisedlearning⽆监督学习/⽆导师学习Unsupervisedlayer-wisetraining⽆监督逐层训练Upsampling上采样LetterVVanishingGradientProblem梯度消失问题Variationalinference变分推断VCTheoryVC维理论Versionspace版本空间Viterbialgorithm维特⽐算法VonNeumannarchitecture冯·诺伊曼架构LetterWWassersteinGAN/WGANWasserstein⽣成对抗⽹络Weaklearner弱学习器Weight权重Weightsharing权共享Weightedvoting加权投票法Within-classscattermatrix类内散度矩阵Wordembedding词嵌⼊Wordsensedisambiguation词义消歧LetterZZero-datalearning零数据学习Zero-shotlearning零次学习。

分子生物网络分析---期末复习课

2.网络科学(Network Science):网络科学是研究利用网络来描述物理、生物和社会现象,建立这些现象预测模型的科学。
有关原文:
Logically, the notion of network science is straightforward. It is organized knowledge of networks based on their study using the scientific method. This notion is immediately valuable in that it distinguishes science from technology. Throughout human history technology often evolved far earlier than the scientific knowledge on which it is based.
随机图→复杂网络
在20世纪的后40年中,随机图理论一直是研究复杂网络的基本理论。
在此期间,人们也做了一些试图解释社会网络的实验。
下面介绍小世界实验。
1.2.3 小世界实验
网络科学是一门新兴科学,研究物理、信息、生物、认知和社会网络的相互联系。这一科学领域致力于发现主导网络行为的普适性规律、算法和工具。美国科学院国家研究委员会把网络科学定义为"使用科学方法研究网络的有组织的知识"。
网络科学的子科学包括:动态网络分析,社会网络分析, 复杂网络研究,网络优化、生物网络和图论。

gru方法


GRU(Gated Recurrent Unit,门控循环单元)是一种常用于处理序列数据的循环神经网络(RNN)模型。

相较于传统的RNN结构,GRU具有更强的建模能力和更好的长期依赖处理能力。

在本文中,我们将介绍GRU的原理、结构以及应用,并给出一些相关论文和案例作为参考。

一、GRU的原理与结构GRU是RNN的一种变体,它在传统RNN的基础上引入了门控机制,以便更好地控制信息的更新和遗忘。

相比于长短期记忆网络(LSTM),GRU减少了一部分门控单元,从而简化了结构,并减少了计算消耗。

GRU的主要思想是通过两个门控信号,即“重置门”(reset gate)和“更新门”(update gate),来调节信息的流动。

1. 重置门(reset gate):重置门决定了在当前时间步,模型应该从之前的状态中丢弃哪些信息。

它通过一个sigmoid激活函数对输入进行加权求和,并输出一个取值范围为0到1之间的向量。

当重置门的输出接近0时,将丢弃之前的状态信息;当接近1时,保留之前的状态信息。

2. 更新门(update gate):更新门用于控制当前时间步的输入与先前的状态之间的权衡。

它也通过sigmoid函数对输入进行加权求和,并输出一个取值范围为0到1之间的向量。

接近0时,忽略更新;接近1时,允许状态信息的更新。

3. 隐藏状态(hidden state):在GRU中,隐藏状态在每个时间步都会更新。

通过重置门和更新门的调整机制,隐藏状态有效地捕捉到了输入序列中的相关信息,并在后续时间步中保留了这些信息。
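
把上述重置门、更新门与隐藏状态的更新写成公式,常见的一种记号约定如下(符号为本文补充;不同文献中 z_t 与 1-z_t 的位置可能互换):

\begin{aligned}
r_t &= \sigma(W_r x_t + U_r h_{t-1} + b_r) &&\text{(重置门)}\\
z_t &= \sigma(W_z x_t + U_z h_{t-1} + b_z) &&\text{(更新门)}\\
\tilde{h}_t &= \tanh\bigl(W_h x_t + U_h (r_t \odot h_{t-1}) + b_h\bigr) &&\text{(候选隐藏状态)}\\
h_t &= (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t &&\text{(隐藏状态更新)}
\end{aligned}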

二、GRU的应用GRU作为一种强大的序列建模工具,在自然语言处理、语音识别、图像描述生成等任务中被广泛应用。

下面是一些相关论文和案例,供参考:1. "Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation"这篇论文提出了一个基于RNN的编码器-解码器框架,其中编码器部分采用了GRU单元来将输入序列编码为一个固定长度的向量表示,用于机器翻译任务。

华为词汇汇总(中-英)(0)

四端口cE1接口前处理卡 | [hw] 4-port Channelized E1 Interface Pre-Processor
4端口千兆以太网内容线路板 (单模 10km) | [hw] 4-port Gigabit Ethernet Content Line Card (Single-mode 10km)
32端口快速以太网电接口内容线路板 | [hw] 32-port Fast Ethernet Electrical Interface Content Line Card
32端口快速以太网电接口线路板 | [hw] 32-port Fast Ethernet Electrical Interface Line Card
1端口千兆以太网接口后转接卡-1550nm长距离单模光接口 | [hw] 1-port Gigabit Ethernet Interface Back Board (Single-Mode Long Haul)
1端口千兆以太网接口后转接卡-1550nm超长距离单模光接口 | [hw] 1-port Gigabit Ethernet Interface Back Board (Single-Mode Ultra-long Haul)
1端口千兆以太网接口后转接卡-1310nm单模光接口 | [hw] 1-port Gigabit Ethernet Interface Back Board (Single-mode)
1端口千兆以太网接口前处理卡 | [hw] 1-port Gigabit Ethernet Interface Pre-Processor
16路POTS业务分离板 | [hw] 16-port POTS service detachment board

netmask翻译


netmask是一种网络地址掩码,它在计算机网络中被用于将IP地址划分为子网(subnet)。

netmask也可以用来确定属于同一网络的主机,这样就可以防止数据包发送到其他网络中。

netmask由32位二进制数字组成,每位上的“1”表示用于子网地址,而每位上的“0”表示用于主机地址。

netmask一般常见的格式有255.255.255.0(/24)、255.255.0.0(/16)、255.0.0.0(/8)等。

所有的netmask都可以用一种称为CIDR(Classless Inter-Domain Routing,无类别域间路由)的表示法来表示:在IP地址后面跟一个斜杠和一个数字,数字表示netmask中有多少位是1。

例如,255.255.255.0(/24)的CIDR表示法为/24,因为其中有24位是1。

netmask的使用能够帮助计算机网络中的设备快速地确定哪些数据包需要发送到本地网络,哪些需要发送到其他网络中。

例如,如果两台计算机A和B都在同一子网中,A想发送数据给B,那么A就会检查B的IP地址和netmask,如果IP地址前面的位都相同,就说明这两台计算机在同一子网中,那么A就会把数据发送到B;如果IP地址前面的位不同,就说明A和B不在同一子网中,那么A 就会通过路由器发送数据给B。

netmask也可以用来确定子网中可用的IP地址数量。

例如,255.255.255.0(/24)的netmask有24位是1,也就是说只有最后8位是0,可以用来表示可用的IP地址,因此这个子网中有2^8-2=254个可用的IP地址。
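
上述同子网判断与可用地址数的计算,可以用 Python 标准库 ipaddress 直观验证(下面的地址均为示例):

import ipaddress

net = ipaddress.ip_network("192.168.1.0/24")
print(net.netmask)                      # 255.255.255.0
print(net.num_addresses - 2)            # 254,去掉网络地址和广播地址后的可用主机数

# 判断两台主机是否在同一子网
a = ipaddress.ip_interface("192.168.1.10/24")
print(ipaddress.ip_address("192.168.1.200") in a.network)   # True,可直接投递
print(ipaddress.ip_address("192.168.2.5") in a.network)     # False,需经路由器转发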

总之,netmask是一种网络地址掩码,它可以帮助我们把IP地址划分为子网,以及确定哪些数据包应该发送到本地网络,哪些应该发送到其他网络中,还可以用来计算某个子网中可用的IP地址数量。

lse注意力机制原理 -回复


LSE注意力机制是一种基于循环神经网络(Recurrent Neural Network)、以长短期记忆(LSTM)单元为核心的注意力模型,其原理是通过记忆单元及门控机制实现对长时依赖关系的建模。

本文将从LSE模型的基本结构和原理出发,逐步解释LSE注意力机制的工作原理。

具体而言,我们将分为以下几个部分进行阐述:LSE模型的基本结构、LSE模型的记忆单元、LSE模型的门控机制、LSE模型的注意力机制。

一、LSE模型的基本结构LSE模型是一种具有循环结构的神经网络,由输入层、隐藏层和输出层组成。

其输入层接受输入序列,隐藏层中的单元通过循环连接接收隐藏层自身在上一时刻的输出作为输入,并根据当前输入和上一时刻的输出计算当前时刻的输出。

输出层将隐藏层的输出映射为对应的输出序列。

二、LSE模型的记忆单元LSE模型的记忆单元是LSTM(Long Short-Term Memory)单元,它的设计目标是解决传统循环神经网络无法处理长时依赖关系的问题。

LSTM 单元包含一个细胞状态(Cell State)和三个门控单元:输入门(Input Gate)、遗忘门(Forget Gate)和输出门(Output Gate)。

细胞状态是LSTM单元中核心的记忆单元,它存储了之前的状态信息,并负责决定传递给下一时刻的状态。

输入门决定了何时更新细胞状态,遗忘门决定了何时忘记细胞状态的一部分,输出门决定了何时将细胞状态输出到隐藏层。

三、LSE模型的门控机制LSTM单元的门控机制使其能够选择性地记住或忘记状态信息。

输入门根据当前输入和上一时刻的隐藏状态来确定更新细胞状态的权重,它通过将当前输入和上一时刻的隐藏状态输入到一个sigmoid函数中,得到一个在0和1之间的权重向量。

遗忘门则通过将当前输入和上一时刻的隐藏状态输入到一个sigmoid函数中,得到一个在0和1之间的权重向量,该向量决定了保留上一时刻细胞状态的程度。
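
上述输入门、遗忘门、输出门与细胞状态的更新可以写成如下标准的LSTM公式(通用记号,为本文补充的参考):

\begin{aligned}
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) &&\text{(输入门)}\\
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) &&\text{(遗忘门)}\\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) &&\text{(输出门)}\\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) &&\text{(候选细胞状态)}\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t &&\text{(细胞状态更新)}\\
h_t &= o_t \odot \tanh(c_t) &&\text{(隐藏状态输出)}
\end{aligned}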

《医学影像成像原理》专业名词解释与翻译


62.window width,WW:窗宽
63.窗位:window level,WL
64.投影:projection
65.CT值:computed tomography number
66.采集时间:acquisition time
67.半程扫描时间:half-scan time
76.magnetic resonance imaging:磁共振成像
77.磁旋比:gyromagnetic ratio
78.magnetization vector:磁化强度矢量M
79.横向磁化矢量MXY:transverse magnetization
2.方法 4分;
3.R(ω)、H(ω)曲线 4分。
答:
11.分析ROC曲线面积值Az的物理意义 (10分)。
评分标准:
分析Az三个不同阈值范围及意义 每个2分。
答:
17.Groedel 技术:空气间隙效应
18.栅比:grid ratio
19.栅密度:grid density
20.contrast improvement factor:提高对比度的能力(对比度改善系数)
21.空间分辨力:spatial resolution
85.纵向弛豫时间:longitudinal relaxation time
86.自由感应衰减:Free induction decay,FID
87.T1WI:T1加权像
88.T2WI:T2 weighted image,T2加权像
89.质子密度加权像:proton density weighted image,PDWI

各种数学语言的英语翻译.


各种数学语言的英语翻译数学 mathematics, maths(BrE), math(AmE)公理 axiom 定理 theorem计算 calculation 运算 operation证明 prove 假设 hypothesis, hypotheses(pl.)命题 proposition 算术 arithmetic加plus(prep.), add(v.), addition(n.) 被加数 augend, summand加数 addend 和 sum减 minus(prep.), subtract(v.), subtraction(n.) 被减数 minuend减数 subtrahend 差 remainder乘 times(prep.), multiply(v.), multiplication(n.)被乘数 multiplicand, faciend 乘数 multiplicator积 product 除 divided by(prep.), divide(v.), division(n.) 被除数 dividend 除数 divisor商 quotient 等于 equals, is equal to, is equivalent to大于 is greater than 小于 is lesser than大于等于 is equal or greater than 小于等于 is equal or lesser than运算符 operator 平均数mean算术平均数arithmatic mean 几何平均数geometric mean n个数之积的n次方根倒数(reciprocal) x的倒数为1/x 有理数 rational number无理数 irrational number 实数 real number虚数 imaginary number 数字 digit数 number 自然数 natural number整数 integer 小数 decimal小数点 decimal point 分数 fraction分子 numerator 分母 denominator比 ratio 正 positive负 negative 零 null, zero, nought, nil 十进制 decimal system 二进制 binary system十六进制 hexadecimal system 权 weight, significance进位 carry 截尾 truncation四舍五入 round 下舍入 round down上舍入 round up 有效数字 significant digit 无效数字 insignificant digit 代数 algebra公式 formula, formulae(pl.) 单项式 monomial多项式 polynomial, multinomial 系数 coefficient未知数 unknown, x-factor, y-factor, z-factor等式,方程式 equation 一次方程 simple equation 二次方程 quadratic equation 三次方程 cubic equation四次方程 quartic equation 不等式 inequation阶乘 factorial 对数 logarithm指数,幂 exponent 乘方 power二次方,平方 square 三次方,立方 cube四次方 the power of four, the fourth powern次方 the power of n, the nth power 开方 evolution, extraction 二次方根,平方根 square root 三次方根,立方根 cube root 四次方根 the root of four, the fourth rootn次方根 the root of n, the nth rootsqrt(2)=1.414 sqrt(3)=1.732 sqrt(5)=2.236常量 constant 变量 variable坐标系 coordinates 坐标轴 x-axis, y-axis, z-axis横坐标 x-coordinate 纵坐标 y-coordinate原点 origin 象限quadrant截距(有正负之分)intercede (方程的)解solution几何geometry 点 point线 line 面 plane体 solid 线段 segment射线 radial 平行 parallel相交 intersect 角 angle角度 degree 弧度 radian锐角 acute angle 直角 right angle钝角 obtuse angle 平角 straight angle周角 perigon 底 base边 side 高 height三角形 triangle 锐角三角形 acute triangle直角三角形 right triangle 直角边 leg斜边 hypotenuse 勾股定理 Pythagorean theorem钝角三角形 obtuse triangle 不等边三角形 scalene triangle等腰三角形 isosceles triangle 等边三角形 equilateral triangle四边形 quadrilateral 平行四边形 parallelogram矩形 rectangle 长 length宽 width 周长 perimeter面积 area 相似 similar全等 congruent 三角 trigonometry正弦 sine 余弦 cosine正切 tangent 余切 cotangent正割 secant 余割 cosecant反正弦 arc sine 反余弦 arc cosine反正切 arc tangent 反余切 arc cotangent反正割 arc secant 反余割 arc cosecant补充:集合aggregate 元素 element空集 void 子集 subset交集 intersection 并集 union补集 complement 映射 mapping函数 function 定义域 domain, field of definition 值域 range 单调性 monotonicity奇偶性 parity 周期性 periodicity图象 image 数列,级数 series微积分 calculus 微分 differential导数 derivative 极限 limit无穷大 infinite(a.) infinity(n.) 
无穷小 infinitesimal积分 integral 定积分 definite integral不定积分 indefinite integral 复数 complex number矩阵 matrix 行列式 determinant圆 circle 圆心 centre(BrE), center(AmE) 半径 radius 直径 diameter圆周率 pi 弧 arc半圆 semicircle 扇形 sector环 ring 椭圆 ellipse圆周 circumference 轨迹 locus, loca(pl.)平行六面体 parallelepiped 立方体 cube七面体 heptahedron 八面体 octahedron九面体 enneahedron 十面体 decahedron十一面体 hendecahedron 十二面体 dodecahedron二十面体 icosahedron 多面体 polyhedron旋转 rotation 轴 axis球 sphere 半球 hemisphere底面 undersurface 表面积 surface area体积 volume 空间 space双曲线 hyperbola 抛物线 parabola四面体 tetrahedron 五面体 pentahedron六面体 hexahedron 菱形 rhomb, rhombus, rhombi(pl.), diamond 正方形 square 梯形 trapezoid直角梯形 right trapezoid 等腰梯形 isosceles trapezoid五边形 pentagon 六边形 hexagon七边形 heptagon 八边形 octagon九边形 enneagon 十边形 decagon十一边形 hendecagon 十二边形 dodecagon多边形 polygon 正多边形 equilateral polygon相位 phase 周期 period振幅 amplitude 内心 incentre(BrE), incenter(AmE)外心 excentre(BrE), excenter(AmE) 旁心 escentre(BrE), escenter(AmE)垂心 orthocentre(BrE), orthocenter(AmE)重心 barycentre(BrE), barycenter(AmE)内切圆 inscribed circle 外切圆 circumcircle统计 statistics 平均数 average加权平均数 weighted average 方差 variance标准差 root-mean-square deviation, standard deviation比例 propotion 百分比 percent 百分点 percentage百分位数 percentile 排列 permutation组合 combination 概率,或然率 probability分布 distribution 正态分布 normal distribution 非正态分布 abnormal distribution 图表 graph条形统计图 bar graph 柱形统计图 histogram折线统计图 broken line graph 曲线统计图 curve diagram扇形统计图 pie diagram。

银行业主要指标中英文对照(全)1

正常
pass
关注
special mention
次级
substandard
可疑
doubtful
流动性比率
liquidity ratio
实收资本
paid-in capital
留存收益
retained earnings
盈余公积
surplus reserve
一般风险准备
general reserve
未分配利润
undistributed profits
风险加权资产占总资产比率
risk-weighted assets to total assets ratio
利差
interest rate spread
应收账款
receivables
预算外支出
off-budget expenditure
活期贷款
call money
大额存单
certificate of deposit(CD)
结算、清算和现金管理
settlement, clearing business and
cash management
投资银行业务
investment banking business
对公理财
corporate wealth management services
担保及承诺
guarantee and commitment business
集中度情况
单一集团客户授信集中度
the credit concentration ratio of a single group client
单一客户贷款集中度
the loan concentration ratio of a single client

中文翻译The Cross-Section of Expected Stock Returns


The Cross-Section of Expected Stock Returns
EUGENE F. FAMA and KENNETH R. FRENCH (1992)

摘要:规模和账面对市价比这两个易于衡量的变量,结合起来就能刻画与市场β、规模、财务杠杆、账面对市价比、收益价格比相关的股票平均回报率的横截面变动。

而且,当检验允许β的变动与规模无关时,即使β是唯一的解释变量,市场β与平均股票回报率之间的关系也是平坦的(不显著)。

Sharpe(1964), Linter(1965), 和 Black(1972)所提出的资产定价模型长期被学术界及实务界用来探讨平均回报率与风险的关系。

该模型的核心预测是:作为财富总投资的市场组合是马科维茨意义上的均值-方差有效组合。

市场组合的有效性意味着:(a)证券的预期回报率与其市场β(即证券收益对市场收益回归的斜率,估计方式可参见下面的示例)是正的线性函数关系;

(b)市场β足以解释预期回报率的横截面差异。
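
作为对"β是证券收益对市场收益回归斜率"这一定义的直观说明,下面用模拟数据估计β(数据与参数均为本文假设的示例):

import numpy as np

rng = np.random.default_rng(1)
market = rng.normal(0.01, 0.04, size=250)                        # 假设的市场收益序列
stock = 0.002 + 1.2 * market + rng.normal(0, 0.02, size=250)     # 构造一只"真实β=1.2"的股票

beta = np.cov(stock, market)[0, 1] / np.var(market, ddof=1)      # β = Cov(r_i, r_m) / Var(r_m)
print(round(beta, 2))                                            # 输出约为 1.2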

实证上的发现有许多与Sharpe-Lintner-Black(SLB)模型相矛盾的地方。

最突出的是Banz(1981)的规模效应:在给定市场βs下预期股票回报率的横截面,加入市值ME(股票价格乘以流通在外股数)这个解释变量,结果显示在给定他们的β估计下,低市值股票的平均回报率太高;高市值股票的平均回报率则太低。

另一个有关SLB模型的矛盾则是Bhandari(1988)所提出的财务杠杆与平均回报率间的正相关。

财务杠杆与风险及回报率相关看起来似乎合理,但在SLB模型下,财务杠杆风险应已包含于市场β中。

然而Bhandari发现财务杠杆能协助解释包含规模(ME)和β的平均股票回报率的横截面变动。

Stattman(1980), Rosenberg, Reid , and Lanstein (1985)发现美国股票的平均回报率与普通股账面价值(BE)市值(ME)比有正相关。

Chan, Hamao, and Lakonishok(1991)发现账面对市价比(BE/ME)对于解释日本股票的横截面平均回报率也扮演很重要的角色。

如何利用神经网络进行机器翻译


如何利用神经网络进行机器翻译近年来,随着人工智能技术的飞速发展,机器翻译作为自然语言处理领域的重要应用之一,受到了广泛关注。

而神经网络作为一种强大的模型,被广泛应用于机器翻译任务中。

本文将探讨如何利用神经网络进行机器翻译,并对其优势和挑战进行分析。

一、神经网络在机器翻译中的应用神经网络在机器翻译任务中的应用主要体现在两个方面:编码器-解码器架构和注意力机制。

编码器-解码器架构是神经网络机器翻译的核心模型。

编码器负责将源语言句子编码成一个固定长度的向量,而解码器则根据该向量生成目标语言句子。

这种架构的优势在于能够处理变长输入和输出序列,并且可以通过训练来学习输入和输出之间的对应关系。

注意力机制是神经网络机器翻译中的重要组成部分。

传统的编码器-解码器架构在处理长句子时存在信息丢失的问题,而注意力机制通过引入一个注意力向量,可以在解码器生成每个单词时,动态地对编码器的不同部分进行加权,从而更好地捕捉源语言句子的信息。
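
注意力机制的核心计算可以用下面几行 NumPy 代码示意(变量名为本文假设,省略了线性变换与缩放等细节):

import numpy as np

def attention_context(decoder_state, encoder_states):
    """decoder_state: (d,);encoder_states: (T, d)。返回上下文向量 (d,)。"""
    scores = encoder_states @ decoder_state      # 解码器状态与每个编码器位置的相关度
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                     # softmax 得到注意力权重
    return weights @ encoder_states              # 按权重加权编码器各位置的表示

rng = np.random.default_rng(0)
context = attention_context(rng.normal(size=64), rng.normal(size=(12, 64)))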

二、神经网络机器翻译的优势相比传统的统计机器翻译方法,神经网络机器翻译具有以下几个优势:1. 端到端学习:神经网络机器翻译采用端到端的学习方式,即从源语言句子直接生成目标语言句子,无需依赖中间的对齐和翻译模型。

这样可以简化整个翻译流程,提高翻译效率。

2. 上下文信息的处理:神经网络机器翻译能够更好地处理上下文信息。

通过编码器-解码器架构和注意力机制,神经网络可以更好地捕捉源语言句子的语义和句法信息,从而提高翻译的准确性。

3. 可迁移性:神经网络机器翻译的模型可以迁移到其他语言对的翻译任务中。

这意味着只需要进行少量的调整和微调,就可以将已经训练好的模型应用于新的语言对,大大提高了模型的可用性和效率。

三、神经网络机器翻译的挑战虽然神经网络机器翻译具有很多优势,但也面临一些挑战:1. 数据稀缺性:神经网络机器翻译需要大量的平行语料进行训练,但现实中很难获得大规模的高质量平行语料。

这导致了数据稀缺性的问题,使得模型的泛化能力受到限制。

加权网络


[图(c)(d):链路介数与节点介数按秩排序(Rank of link / Rank of Vertex)的分布,双对数坐标,比较 Random、Real、Inverse 三种权重赋值方式]

加权最短路径:对于节点 i 经过 j 到达 k 的一条路径,其长度为 w_ij + w_jk;在所有路径中取最小值即为最短路径的距离。

度分布

[图:Degree 随 Rank 的变化曲线]

点介数

[图:节点介数(Betweenness of Vertex)与节点权重(Vertex Weight)随 Rank 的变化,比较 multiplicative 与 logarithm 两种权重定义以及 Unweighted 情形]

单位权

[图:单位权(Weight per Degree)随 Rank 的变化,multiplicative 与 logarithm 两种定义]
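
对应上面"取最小值即为最短路径的距离"的定义,下面给出加权最短路径的一个 Dijkstra 实现草图(图的数据为假设示例):

import heapq

def weighted_shortest_paths(graph, source):
    """graph: {节点: [(邻居, 权重), ...]};返回 source 到各节点的最短距离。"""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w                       # 路径长度 = 沿途权重之和
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {"i": [("j", 2.0), ("k", 5.0)], "j": [("k", 1.0)], "k": []}
print(weighted_shortest_paths(g, "i"))       # {'i': 0.0, 'j': 2.0, 'k': 3.0}
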
加权、有向网络的静态统计性质
In-Out度和权的分布,度权的相关性,单位权
网络的演化性质
偏好性的实证检验
网络上思想的传播及效率分析
科学家的类聚分析

权重系数 英语


权重系数英语English:Weight coefficients, also known as weighting factors or weighting coefficients, are numerical values assigned to different elements or variables in a mathematical equation or model to represent their relative importance or contribution to the overall result. In various fields such as statistics, economics, engineering, and machine learning, weight coefficients play a crucial role in decision-making processes, optimization algorithms, and predictive models. These coefficients are typically determined through various techniques such as expert judgment, statistical analysis, optimization algorithms, or machine learning algorithms like linear regression or neural networks. The choice of weight coefficients can significantly impact the performance and accuracy of the model or system they are applied to, as they directly influence how each variable contributes to the final outcome. It's essential to carefully select and validate weight coefficients to ensure the model accurately reflects the real-world scenario it aims to represent. Moreover, weight coefficients may need to be adjusted or updated over time to adapt to changing circumstances or to improve the model's predictive capabilities.中文翻译:权重系数,又称为加权因子或加权系数,是分配给数学方程或模型中不同元素或变量的数字值,以表示它们对整体结果的相对重要性或贡献。
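
作为一个最小示例,下面演示权重系数如何决定各变量对整体结果的贡献(数值均为本文假设):

import numpy as np

weights = np.array([0.5, 0.3, 0.2])          # 三个指标的权重系数,之和为1
scores = np.array([80.0, 90.0, 70.0])        # 某个对象在三个指标上的取值
weighted_score = np.dot(weights, scores)     # 加权求和:0.5*80 + 0.3*90 + 0.2*70
print(weighted_score)                        # 81.0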


1. 引言

图论长期以来被用来描述大量实际现象,而近来无论从经验[1]–[6]还是理论[7]–[9]的角度,图论研究的重点都已从二值(binary)图转向加权图。

经验文献在因特网流量、航空运输、国际贸易等领域发现了许多稳健的程式化事实。

尤其是,已有研究表明加权网络往往表现出:(i)带有限尺寸截断的连接度幂律分布P(K)[3, 10];(ii)连接权重P(w)以及节点总权重(节点强度)P(W)的偏态分布[11, 12];(iii)节点强度与节点度之间的幂律关系,指数θ约在1.3到1.5之间[10, 13]。本文提出一个简单的随机模型,其中连接数量与连接权重按比例增长,该模型能够描述加权网络的结构与演化,并解释上述规律。

在模型设定中,我们扩展了Barabási和Albert(BA)模型[14],以刻画加权网络的增长动力学。

为此,我们借助了Stanley及其合作者近期提出的、用于处理复杂系统中标度结构问题的理论框架[15]–[17]。

我们用国际贸易流量数据来检验模型,贸易流量是现实世界中天然带权网络的经典例子。

国际贸易流量过去通常用所谓的引力模型来刻画,该模型把双边贸易量与两国的经济规模及相互距离联系起来。

然而,该方法的最大局限在于无法处理双边贸易矩阵中大量为零的取值。

尽管这一问题已在标准经济理论[19]中得到处理,图论方法仍然是分析这类数据的自然选择。我们选择国际贸易网络(International Trade Network,简称ITN)作为模型的检验背景,基于以下考虑:第一,ITN已经被广泛研究[5, 6], [20]–[24],以往关于ITN的研究结论可以用来验证我们模型的预测。

已有研究表明,ITN中连接权重的分布近似对数正态,其增长率分布呈现厚尾现象[24]。

第二,节点强度与节点度之间的关系对应着经济学文献中贸易的集约边际(intensive margin)与广泛边际(extensive margin)之间的相互作用,是解释贸易流量的关键[25]。

第三,尽管ITN在结构上具有很强的惯性,2008年全球金融危机引发的贸易流量大幅波动仍然引起了广泛关注。

我们的理论为解释节点中心性的增长及网络流量波动的方差提供了依据。

本文结构如下:第二部分介绍模型。

第三部分和第四部分利用ITN数据对模型进行实证检验与模拟验证。

最后,我们总结主要结论并指出后续研究方向。

注6:贸易的广泛边际由贸易伙伴和出口产品的数量k构成,而集约边际对应每个国家单位产品的贸易额w。

2. 模型

Barabási和Albert[14]基于对现实世界的观察所总结出的程式化事实——优先连接(preferential attachment),提出了一个简单的随机网络增长模型。
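
按照上文概述的两条假设(优先连接决定连边的增长,几何布朗运动决定连接权重的增长),可以写出如下简化模拟草图(参数取值为本文假设,仅示意模型机制,并非论文的原始实现):

import math
import random

def simulate_weighted_network(steps=2000, n0=5, a=0.05, sigma=0.1, seed=0):
    """优先连接 + 权重的几何布朗运动的简化模拟。"""
    rng = random.Random(seed)
    degree = {i: 1 for i in range(n0)}        # 初始节点各带一个自环,度计为1
    endpoints = list(range(n0))               # 每条边把两个端点放入列表,实现按度抽样
    weights = {}                              # (i, j) -> 当前连接权重

    def pick():
        # 假设1:以概率a引入新节点,否则按度比例(优先连接)选择已有节点
        if rng.random() < a:
            new = len(degree)
            degree[new] = 0
            return new
        return rng.choice(endpoints)

    for _ in range(steps):
        i, j = pick(), pick()
        while j == i:                         # 模型要求两个端点不同
            j = rng.choice(endpoints)
        degree[i] += 1
        degree[j] += 1
        endpoints += [i, j]
        weights.setdefault((min(i, j), max(i, j)), 1.0)
        # 假设2:每一步所有已有连接的权重乘以一个随机增长因子(几何布朗运动)
        for link in weights:
            weights[link] *= math.exp(rng.gauss(0.0, sigma))
    return degree, weights

degree, weights = simulate_weighted_network()
strength = {}                                 # 节点强度 W = 与该节点相连的权重之和
for (i, j), w in weights.items():
    strength[i] = strength.get(i, 0.0) + w
    strength[j] = strength.get(j, 0.0) + w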

[7, 9]. The route we take here exploits the theoretical framework recently put forward by Stanley and co-authors [16] to deal with the growth dynamics of complex systems. We prove that our model is capable of accurately matching the structural properties that characterize a number of real-world weighted networks.We therefore propose a generalized version of the BA model to describe the dynamicsand growth of weighted networks, by modeling them as a set of links of different weights occurring among nodes. In particular, we assume that the weight of links grows accordingto a geometric Brownian motion (also known as Gibrat’s law of proportionate effects [26]),so that the expected value of the growth rate of link weights is independent of their current level.The key sets of assumptions in the model are the following [14, 16, 27]:1. The network begins at time t = 0 with N0 nodes, each with a self-loop. At each timestep t = {1, . . . , M}, a new link among two nodes arises; thus the number of links(excluding self-loops that are used only for initialization) existing at time t is mt = t .We write Ki (t) for the number of links of node i at time t (node degree). To identifythe nodes connected by the newly formed link at time t , we adopt the following procedure: with probability a the new link is assigned to a new source node, whereaswith probability 1−a it is allocated to an existing node i . In the latter case, theprobability of choosing node i is given by pi (t) = Ki (t −1)/2t . Edg e endpoints i andj of the new link are chosen symmetrically with i = j . Thus with probability a the newlink is assigned to a new target node, whereas with probability 1−a it is allocated to an existing node with probability pj (t) = Kj (t −1)/(2t − Ki (t −1)) if j = i and pj (t) = 0 otherwise. Hence, at each time t, this rule identifies the pair of (distinct) nodes to beLinked.2. At time t , each existing link between nodes i and j has weight wi j (t) > 0, where Ki , Kj and wi j are independent random variables. At time t + 1, the weight of each link is increasedor decreased by a random factor xi j (t), so that wi j (t + 1) = wi j (t)xi j (t). The shocksand initial link weights are taken from a distribution with finite mean and standard deviation.Thus, we assume that each link weight grows in time according to a random process. Moreover, the two processes governing link formation and weight growth are assumed to be independent. We therefore combine a preferential attachment mechanism (assumption 1), with an independent geometric Brownian motion of link weights (assumption 2). In this way, we obtain a generalization of the BA set-up capable of accounting for the growth of weighted networks.Based on the first assumption, we derive the degree distribution P(K) [14, 28]. In the absence of the entry of new nodes (a = 0), the probability distribution of the number of links at 4large t , i.e. 
the distribution P(K), is exponential:P(K) ≈ 1¯ Kexp(−K/ ¯ K), (1)where ¯ K = 2t/N0 is the average number of links per node, which linearly grows with time7 .If a > 0, P(K) becomes a Yule distribution that behaves as a power law for small K,P(K) ∼ K−ϕ, (2)where ϕ= 2 + a/(1−a) > 2, followed by the exponential decay of equation (1) for large Kwith ¯ K = (1 + 2t/N0)1−a−1 [15].Hence, in the limit of large t when a = 0 (no entry), the distribution of P(K) convergesto an exponential; conversely, when a > 0 and small, the connectivity distribution at large t converges to a power law with an exponential cut-off [15].Using the second assumption, we can compute the growth rate of the strength of nodes.The strength of node i is given by Wi =PKiwi j . The growth rate is measured as g =ln(W(t + 1)/W(t)). Thus, the resulting distribution of the growth rates of node strength P(g) is determined byP(g) ≡∞ XK=1P(K)P(g|K), (3)where P(K) is the connectivity distribution, computed in the previous stage of the model, and P(g|K) is the conditional distribution of the growth rates of nodes with a given number of links determined by the distribution P(w) and P(x).Fu et al [16] found an analytical solution for the distribution of the growth rates of the weights of links P(g) for the case when a→0 and t →∞,P(g) ≈ 2Vgpg2 + 2Vg (|g| +pg2 + 2Vg)2. (4)P(g) has similar behavior to the Laplace distribution for small g, i.e. P(g) ≈exp(−√2|g|/pVg)/p2Vg, whereas for large g, P(g) has power-law tails, P(g) ∼ g−3,which are eventually truncated for g→∞ by the distribution P(x) of the growth rate of asingle link.A further implication of the model that can be derived from the second assumptionconcerns the distribution of link weights P(w). The proportional growth process (assumption 2) implies that the distribution of weights P(w) converges to a log-normal. Thus node strengthW is given by the sum of K log-normally distributed stochastic values. Since the log-normal distribution is not stable on aggregation, the distribution of node strength P(W) is multiplied by a stretching factor that, depending on the distribution of the number of links P(K), could lead to a Pareto upper tail [29, 30].Moreover, a negative relationship exists among the weight of links and the variance oftheir growth rate. Our model implies an approximate power-law behavior for the variance of growth rates of the form σ(g) = W−β(W), where β(W) is an exponent that weakly depends onthe strength W. In particular, β = 0 for small values of W, β = 1/2 for W →∞, and it is well approximated by β ≈ 0.2 for a wide range of intermediate values of W [17].7 ¯ K does not include initial self-loops.Finally, the model also yields a prediction on the relation between degree K and strengthW of each node. In section 4, we show that since the weight of each link is sampled from a log- normal distribution (w are log-normally distributed), and given the skewness of such a density function, the law of large numbers does not work effectively. In other words, the probabilityto draw a large value for a link weight increases with the number of draws, thus generating a positive power-law relationship between W, and K, for small K.3. Empirical evidenceTo test our model, we use the NBER–United Nations Trade Data [31] available through theCenter for International Data at UC Davis. This database provides bilateral trade flows between countries over 1962–2000, disaggregated at the level of commodity groups (four-digit level of the Standard International Trade Classification, SITC). 
Data are in thousands of US dollars and, for product-level flows, there is a lower threshold at 100 000 dollars, b elow which transactions are not recorded. One point to note is that disaggregated data are not always consistent with country trade flows; in a number of cases we do not observe any four-digit transaction recorded between two countries, but nevertheless find a positive total trade, and vice versa. Since we take the number of products traded among any pair of countries as the empirical counterpart of the number of transactions, to avoid inconsistency we compute the total trade by aggregating commodity-level data.In this section, we test the predictions of our model, while in the following section weuse the data to calibrate the simulations and check for the ability of the model to replicatereal-world phenomena by comparing simulated and actual trade flows. We already know from previous work [24] that the main features of the ITN are broadly consistent with our model. Here, we look in more detail at some specific characteristics of the ITN.Figure 1 shows that the distribution P(K), which is the number of four-digit SITCproducts traded by countries, is power-law distributed with an exponential cut-off. The main plot displays the probability distribution in log–log scale, where the power law is the straight line body, and the exponential cut-off is represented by the right tail. The inset presents the same distribution in semi-log scale; this time it is the exponential part of the distribution that becomes a straight line, so that we can magnify what happens to the probability distribution as K grows large. As discussed in section 2, the power-law distribution of K hints at the existence of moderate entry of new nodes into the network. Indeed, 17 new countries enter into theITN during the observed time frame, mostly due to the collapse of the Soviet Union and Yugoslavia.Moving to the weighted version of the network, one can look at the distribution of positivelink weights as measured by bilateral trade flows at the commodity level, P(w), as well asthe total value of country trade or node strength P(W). Figure 2 shows the complementary cumulative probability distribution of trade flows in log–log scale, both for product-level transactions and for aggregate flows. Figure 2 refers to 1997 data (other years display the same behavior).We observe that both distributions show the parabolic shape typical of the log-normal distribution, thus conforming to previous findings [23, 24]. As predicted, on aggregation the power-law behavior of the upper tail becomes more pronounced [29]. However, this departure from log-normality concerns a very small number of observations (0.16% in the case of commodities flows, 2.21% for aggregate flows) since only a few new nodes (countries) enter the network over time.Figure 1. Distribution of the number of products traded, 1997. Double logarithmic scale (main plot) and semi-logarithmic scale (inset).Figure 2. Distribution of the link weights and node strength in the year1997. Complementary cumulative distribution of the strength distribution P(W) (aggregate flows) and link weights P(w) (commodity flows) and their power-law fits (dashed lines) [34].Figure 3. 
Distribution of the growth rates of aggregate trade flows P(g).As for the growth of trade flows, figure 3 shows the empirical distribution P(g), togetherwith themaximumlikelihood fit of equa tion (4) and also the generalized exponential distribution (GED, with shape parameter 0.7224).Goodness-of-fit tests, reported in table 1, show that P(g) is neither Gaussian nor Laplace, whereas the distribution in equation (4) performs much better in terms of KS and AD tests8.Hence, the growth of node centrality, as measured by strength W, follows the same law of thefluctuations of the size of complex systems [16, 35]. This is not surprising, since the size of an airport can be measured by the number of p assengers who travel through it, and the size of a firm in terms of sales is given by the sum of the value of each product it sells. Thus, the theoretical framework of Stanley and co-workers [16] complements and completes the BA proportional growth model in the case of weighted networks.As discussed in section 2, our model implies a negative relationship between node strengthand the variance of its growth rate. Figure 4 reports the standard deviation of the annual growth8KS and AD are non-parametric tests used to evaluate whether a sample comes from a population with a specificdistribution. Both KS and AD tests quantify a distance between the empirical distribution function of the sampleand the cumulative distribution function of the reference distribution. The AD test gives moreweight to the tailsthan the KS test. More detailed information is available in [32, 33].Figure 4. Size–variance relationship between node strength W (trade values) andthe standard deviation of its growth rate σ(g), double logarithmic scale.rates of node strength σ(g) and their initial magnitude (W). The standard deviation of the growth rate of link weights exhibits a power-law relationship σ(g) = W−β with β ≈ 0.2, as predictedby the model [17]. This implies that the fluctu ations of the most intense trade relationships are more volatile than expected based on the central limit theorem.All in all, our model accurately predicts the growth and weight distribution of trade flows,the number of commodities traded and the size–var iance relationship of trade flows. Thus,we can conclude that a stochastic model that assumes a proportional growth of the number of links combined with an independent proportional growth process of link weights can reproduce most of the observed structural features of the world trade web and should be taken as a valid stochastic benchmark to test the explanatory power of alternative theories of the evolution of international trade and weighted networks in general. In the next section we compare the structure of random networks generated according to our model with the real-world trade network.4. Simulation resultsBased on the assumptions in section 2, we generate a set of random networks and fit themwith real-world data in order to test the predictive capability of our theoretical framework.We proceed in two steps. First, we generate the unweighted network according to the firstset of assumptions. Next, we assign the value of weights based on a random sampling of K values from a log-normal distribution P(w) whose parameters are obtained through a maximum likelihood fit of the real-world data.We model a system where at every time t a new link is added, which represents thepossibility to exchange one product with a trading partner. 
We slightly modify the original setting in order to account for the possibility that the new links could be assigned randomly rather than proportionally to node connectivity. Thus, in our simulations, parameter a governsthe entry of new nodes according to assumption 1, whereas parameter b is the probability that a new link is assigned randomly. Thus with probability a the new link is assigned to a new source node, whereas with probability 1−a it is allocated to an existing node i . In the latter case, the probability of choosing no de i is now given by pi (t) = (1−b)Ki (t −1)/2t + b/Nt−1, whereNt−1 is the number of nodes at time t −1. The target of the new link is chosen symmetricallywith i = j .Tuning the two model parameters a and b, we generate different networks interms of the connectivity distribution of trade links P(K). In particular, without entry (a = 0)and completely random allocation of opportunities (b = 1), one obtains a random graph characterized by a Poisson connectivity distribution [36], whereas allowing entry (a > 0),P(K) is exponentially distributed. Keeping a positive entry rate, but assigning opportunities according to a preferential attachment model (b = 0), the model leads to a power-law connectivity distribution with an exponential cut-off, which is more pronounced, thehigher the number of initial nodes N0. In the limit case in which entry of new nodes isruled out (a = 0), the connectivity distribution tends toward a Bose–Einstein geometric distribution.We compare the structure of random scale-free model networks with the real-world trade network in 1997. Since the structure of the network is highly stable over time, results do not change substantially if we compare simulations with the structure of the real-world network in different years. In the first s tage, we generate one million networks, with a and b both ranging from 0 to 1.We simulate random networks of 166 nodes (countries) and 1 079 398 links (number of different commodities traded by two countries). The number of commodities traded is takenas a proxy of the number of transactions. Next, we select the random networks that better fit the real-world pattern in terms of correlation, as measured by the Mantel r test, and connectivity distribution9.9The Mantel test is a non-parametric statistical test of the correlation between two matrices [37]. The test is basedon the distance or dissimilarity matrices that, in the present case, summarize the number of links between twonodes in the simulated and real networks. A typical use of the test entails comparing an observed connectivitymatrix with one posed by a model. The significance of a correlation is evaluated via permutations, whereby therows and columns of the matrices are randomly rearranged.Figure 5. Mantel test comparing simulated and real networks.Figure 5 reports the value of theMantel test for networks with 0 6 b 6 1 and an entry rate a,which implies the entry of 0 to 66 countries. The Mantel correlation statistics reach a peak of0.88 (p-value < 0.01) in the case of pure preferential attachment regimes (b = 0). However,the Mantel test does not discriminate between different entry regimes. We next compare the connectivity distribution of simulated networks with the real-world distribution of the numberof traded commodities P(K) by means of the KS goodness-of-fit test. Figure 6 confirms thatthe best fit is obtained in the case of a purely preferential attachment network (b = 0). 
However, the KS tests provide additional information on the most likely value of a (entry rate of new nodes).Figure 7 shows that our model can better reproduce the connectivity distribution withan entry rate a > 0, which implies the entry of 14–18 countries. This closely correspondsto the empirically observed number of new countries. Thus, we can conclude that a simple proportional growth model with mild entry can account for the distribution of the number of commodities traded by each pair of countries.Introducing the value of transactions, we can show that the model generates theobserved relationship between intensive and extensive margins of trade. Figure 8 depicts the relationship between total trade flows (W) and the number of trade links maintained by each country (K). Empirically, we proxy the number of transactions by means of the number of products traded by each country. Figure 8 displays the relationship that emerges from 1997trade data, and confirms that there exists a positive correlation between the two variables. The slope of the interpolating line (1.33) in double logarithmic scale reveals a positive relationship between the number of commodities and their average value of the kind W = Kθwith θ ≈ 1.33.Figure 6. KS goodness-of-fit test for different entry rates and probabilities of random assignment.Figure 7. KS goodness-of-fit test for different entry rates in a pure preferential attachment regime (b = 0).Figure 8. Relationship between the number of products traded and trade value.Double logarithmic scale. Simulated (black) and real-world (red) data, mean andone standard deviation in each direction. The dashed line represents the referenceline W = Kθwith θ ≈ 1.33.The curve displays an upward departure in the upper tail. This can be explained by notingthat the product classification used imposes a ceiling on the number of products a country can trade as there are only around 1300 four-digit categories (vertical dotted line)10.Apart from the upper decile of the distribution, the simulated version of the networkshows exactly the same dependence among the magnitude and the number of transactions. This seems surprising, considering that the model assumes two independent growth processes forthe number of transactions K and their values w. However, it should be noted that the law of large numbers does not work properly in the case of skew distributions such as the log-normal. Given a random number of transactions with a finite expected value, if its values are repeatedly sampled from a log-normal, as the number of links increases the average link weight will tendto approach and stay close to the expected value (the average for the population). However, thisis true only for large K, while according to the distribution P(K), the vast majority of nodeshave few links (small K). The higher the variance of the growth process of link weights, thelarg er K has to be to start observing convergence toward W = wKθ, with θ = 1 predicted bythe law of large numbers. Thus, only the largest countries approach the critical threshold. In10Another possible explanation is that for large enough K, some scale effects kick in establishing a correlationbetween Kand W. This could be tested in other real networks with larger Kand no cut-off. We areaware that ourmodeling strategy to assume away any relationship between the mechanisms governing the binary structure of thenetwork and the one assigning link weights is as extreme as other strategies that simply assume a single processgoverning the two parts. 
Yet, we consider the positive relationship between Kand W as an interesting emergingproperty of the model.sum, our simulations demonstrate that our model can account for the relationship between Kand W, which has been observed in many real-world weighted networks [10, 13].5. Discussion and conclusionsUsing a simple model of proportionate growth and preferential attachment, we are able to replicate some of the main topological properties of real-world weighted networks. In particular, we provide an explanation for the power-law distribution of connectivity, as well as for thefat tails displayed by the distribution of the growth rates of link weights and node strength. Additionally, the model matches the log-normal distribution of positive link weights (tradeflows in the present context) and the negative relationship between node strength and varianceof growth fluctuations σ(g) = W−β with β ≈ 0.2.The main contribution of the paper is to offer an extension of the BA model for weighted networks. We also provide further evidence that such a unifying stochastic framework is able to capture the dynamics of a vast array of phenomena concerning complex system dynamics [16]. Further refinements of our model entail investigating its ability to match other topological properties of the networks such as assortativity and clustering.AcknowledgmentsWe acknowledge Gene Stanley, Sergey Buldyrev, Fabio Pammolli, Jakub Growiec, Dongfeng Fu, Kazuko Yamasaki, Kaushik Matia, Lidia Ponta, Giorgio Fagiolo and Javier Reyes for their previous work on which this contribution builds.引用文章[1] Newman M 2001 The structure of scientific collaboration networks Proc. Natl Acad. Sci. USA 98 404–9[2] Pastor-Satorras R and Vespignani A 2004 Evolution and Structure of the Internet: a Statistical PhysicsApproach (Cambridge: Cambridge University Press)[3] GuimeràR, Mossa S, Turtschi A and Amaral L 2005 The worldwide air transportation network: anomalouscentrality, community structure and cities’global roles Proc. Natl Acad. Sci. USA 102 7794–9 [4] Garlaschelli D, Battiston S, Castri M, Servedio V and Caldarelli G 2005 The scale-free topology of marketinvestments Physica A 350 491–9[5] Fagiolo G, Reyes J and Schiavo S 2008 On the topological properties of the world trade web: a weightednetwork analysis Physica A 387 3868–73[6] Bhattacharya K, Mukherjee G, Saramaki J, Kaski K and Manna S 2008 The International Trade Network:weighted network analysis and modelling J. Stat. Mech.: Theor. Exp. P02002[7] Yook S, Jeong H, Barabási A-L and Tu Y 2001 Weighted evolving networks Phys. Rev. Lett.86 5835[8] Zheng D, Trimper S, Zheng B and Hui P 2003 Weighted scale-free networks with stochastic weightassignments Phys. Rev. E 67 040102[9] Barrat A, Barthélemy M and Vespignani A 2004 Modeling the evolution of weighted networks Phys. Rev. E70 066149[10] Barrat A, Barthélemy M, Pastor-Satorras R and Vespignani A 2004 The architecture of complex weightednetworks Proc. Natl Acad. Sci. USA 101 3747–52[11] Ramasco J and Gonçalves B 2007 Transport on weighted networks: when correlations are independentof degree Phys. Rev. E 76 066106[12] Serrano M, BogunáM and Vespignani A 2009 Extracting the multiscale backbone of complex weightednetworks Proc. Natl Acad. Sci. USA 106 6483–8[13] Eom Y-H, Jeon C, Jeong H and Kahng B 2008 Evolution of weighted scale-free networks in empirical dataPhys. Rev. 
E 77 056105[14] Barabási A and Albert R 1999 Emergence of scaling in random networks Science 286 509[15] Yamasaki K, Matia K, Buldyrev S, Fu D, Pammolli F, Riccaboni M and Stanley H 2006 Preferentialattachment and growth dynamics in complex systems Phys. Rev. E 74 035103[16] Fu D, Pammolli F, Buldyrev S, Riccaboni M, Matia K, Yamasaki K and Stanley H 2005 The growth ofbusiness firms: theoretical framework and empirical evidence Proc. Natl Acad. Sci. USA 102 18801–6[17] RiccaboniM, Pammolli F, Buldyrev S, Ponta L and Stanley H 2008 The size variance relationship of businessfirm growth rates Proc. Natl Acad. Sci. USA 105 19595[18] Tinbergen J 1962 Shaping the World Economy: Suggestions for an International Economic Policy (New York:The Twentieth Century Fund)[19] Helpman E, Melitz M and Rubinstein Y 2008 Estimating trade flows: trading partners and trading volumesQ. J. Econ. 123 441–87[20] Serrano M and BogunáM 2003 Topology of the world trade web Phys. Rev. E 68 15101[21] Garlaschelli D and Loffredo M 2004 Fitness-dependent topological properties of the world trade web Phys.Rev. Lett. 93 188701[22] Garlaschelli D and Loffredo M 2005 Structure and evolution of the world trade network Physica A 355138–44[23] Bhattacharya K, Mukherjee G and Manna S 2007 The International Trade Network. arXiv:0707.4347[24] Fagiolo G, Reyes J and Schiavo S 2009 World-trade web: topological properties, dynamics and evolutionPhys. Rev. E 79 36115[25] Chaney T 2008 Distorted gravity: the intensive and extensive margins of international trade Am. Econ. Rev.98 1707–21[26] Gibrat R 1931 Les Inegalites Economiques (Paris: Sirey)[27] Bollobás B and Riordan O 2003 Mathematical results on scale-free random graphs Handbook of Graphs andNetworks ed S Bornholdt and H G Schuster (New York: Wiley–VCH), pp. 1–34[28] Buldyrev S, Growiec J, Pammolli F, Riccaboni M and Stanley H 2007 The growth of business firms: factsand theory J. Eur. Econ. Assoc. 5 574–84[29] Growiec J, Pammolli F, Riccaboni M and Stanley H 2008 On the size distribution of business firms Econ.Lett. 98 207–12[30] De Fabritiis G, Pammolli F and Riccaboni M 2003 On size and growth of business firms Physica A324 38–4[31] Feenstra R, Lipsey R, Deng H,Ma A,Mo H and Drive O 2005World trade flows: 1962–2000 NBER workingpaper[32] Chakravarti I, Laha R and Roy J 1967 Handbook of Methods of Applied Statistics V ol I (New York: Wiley),pp. 392–4[33] Stephens M 1974 EDF statistics for goodness of fit and some comparisons J. Am. Stat. Assoc.69 730–7[34] Clauset A, Shalizi C and Newman M 2009 Power-law distributions in empirical data SIAM Rev. 51 661–703[35] Fagiolo G, Napoletano M and Roventini A 2008 Are output growth-rate distributions fat-tailed? Someevidence from OECD countries J. Appl. Econ. 23 639–9[36] Erdos P and Renyi A 1959 On random graphs Publ. Math. Debrecen 6 156[37] Mantel N 1967 The detection of disease clustering and a generalized regression approach Cancer Res.27 209–20。
