An Information-Based Neural Approach to Constraint Satisfaction


Reservoir Computing Approaches to Recurrent Neural Network Training

Email addresses: m.lukosevicius@jacobs-university.de (Mantas Lukoševičius), h.jaeger@jacobs-university.de (Herbert Jaeger). Preprint submitted to Computer Science Review, January 18, 2010.
1. Introduction

Artificial recurrent neural networks (RNNs) represent a large and varied class of computational models that are designed by more or less detailed analogy with biological brain modules. In an RNN, numerous abstract neurons (also called units or processing elements) are interconnected by likewise abstracted synaptic connections (or links), which enable activations to propagate through the network. The characteristic feature of RNNs that distinguishes them from the more widely used feedforward neural networks is that the connection topology possesses cycles. The existence of cycles has a profound impact:

• An RNN may develop a self-sustained temporal activation dynamics along its recurrent connection pathways, even in the absence of input. Mathematically, this renders an RNN a dynamical system, while feedforward networks are functions.

• If driven by an input signal, an RNN preserves in its internal state a nonlinear transformation of the input history — in other words, it has a dynamical memory, and is able to process temporal context information.

This review article concerns a particular subset of RNN-based research in two aspects:

• RNNs are used for a variety of scientific purposes, and at least two major classes of RNN models exist: they can be used for purposes of modeling biological brains, or as engineering tools for technical applications. The first usage belongs to the field of computational neuroscience, while the second
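The dynamical-memory property described above is exactly what reservoir computing exploits: a fixed, randomly connected recurrent network (the reservoir) is driven by the input, and only a linear readout is trained. The following is a minimal Python/NumPy sketch of an echo-state-network style state update and ridge-regression readout; the weight scales, leaking rate, and network size are illustrative assumptions, not values taken from the review.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reservoir dimensions (illustrative choices, not from the paper)
n_in, n_res = 1, 200

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))     # input weights (fixed, never trained)
W = rng.uniform(-0.5, 0.5, (n_res, n_res))       # recurrent weights (fixed, never trained)
W *= 0.9 / max(abs(np.linalg.eigvals(W)))        # scale spectral radius below 1

def run_reservoir(u_seq, leak=0.3):
    """Drive the reservoir with an input sequence and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        pre = W_in @ np.atleast_1d(u) + W @ x
        x = (1 - leak) * x + leak * np.tanh(pre)  # leaky-integrator state update
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave
t = np.arange(2000)
u_seq = np.sin(0.05 * t)
y_seq = np.sin(0.05 * (t + 1))

X = run_reservoir(u_seq)
# Train only the linear readout (ridge regression); the reservoir stays fixed
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y_seq)

pred = X @ W_out
print("train MSE:", np.mean((pred - y_seq) ** 2))
```

The point of the sketch is the division of labor: the recurrent part only supplies a nonlinear transformation of the input history, and all learning happens in the readout.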


Will Artificial Intelligence Cause Brain Degeneration? (TEM-4 English Essay)


人工智能会使大脑退化吗专四英语作文全文共3篇示例,供读者参考篇1Will Artificial Intelligence Cause Brain Degeneration?As technology continues its rapid advancement, artificial intelligence (AI) has become increasingly ubiquitous in our daily lives. From virtual assistants like Siri and Alexa to self-driving cars and advanced robotics, AI is transforming the way we live, work, and interact with the world around us. However, amid this technological revolution, a concerning question arises: will the widespread adoption of AI lead to a decline in our cognitive abilities and ultimately cause brain degeneration?To understand the potential impact of AI on our brain functions, we must first grasp the fundamental role that cognitive activities play in maintaining and enhancing our mental faculties. The human brain is a remarkable organ, possessing an incredible capacity for adaptability and growth, a phenomenon known as neuroplasticity. Engaging in cognitively demanding tasks, such as problem-solving, critical thinking, and memory exercises, stimulates the formation of new neuralconnections and strengthens existing ones, thereby enhancing our cognitive abilities.With the advent of AI, there is a growing concern that we may become overly reliant on these advanced systems, leading to a reduction in the cognitive demands placed on our brains. As AI algorithms become increasingly adept at performing tasks that were once exclusively within the realm of human intelligence, we may find ourselves outsourcing more and more cognitive functions to these systems, potentially resulting in a lack of mental stimulation and ultimately, brain degeneration.One area of particular concern is memory. AI-powered virtual assistants and search engines have made it easier than ever to access and retrieve information, reducing the need for us to actively engage in memorization and recall. While this convenience is undeniably beneficial in many aspects of our lives, it may also lead to a decline in our ability to store and retrieve information from our own memory banks, potentially weakening the neural pathways associated with these processes.Additionally, AI-driven automation and decision-making systems could potentially diminish our problem-solving and critical thinking skills. As we increasingly rely on these systems to handle complex tasks and make decisions for us, we maybecome complacent and fail to exercise the cognitive processes that were once essential for navigating challenges and solving problems.However, it is important to note that the impact of AI on our brain functions is not necessarily a one-way street. Just as AI could potentially lead to cognitive decline, it could also serve as a powerful tool for enhancing our mental abilities. For instance, AI-powered educational technologies and brain-training applications could provide personalized and adaptive learning experiences, challenging our minds in novel ways and stimulating the growth of new neural connections.Moreover, the integration of AI into fields such as neuroscience and cognitive research could lead to groundbreaking discoveries and advancements in our understanding of the human brain, potentially enabling us to develop more effective strategies for maintaining and enhancing our cognitive abilities.Ultimately, the question of whether AI will cause brain degeneration is a complex one, with arguments on both sides. While the potential risks of cognitive decline due to over-reliance on AI cannot be ignored, it is essential to approach this issue with a balanced perspective. 
By actively engaging with AItechnologies in a mindful and responsible manner, we can harness their potential benefits while mitigating potential negative impacts on our cognitive abilities.One way to strike this balance is through education and awareness. Incorporating AI literacy into curricula at all levels of education can equip individuals with the knowledge and skills necessary to navigate the AI-driven world while maintaining a healthy balance between technology and cognitive engagement. This could involve teaching critical thinking and problem-solving strategies, encouraging active learning and memorization techniques, and promoting a deeper understanding of the principles underlying AI systems.Furthermore, fostering a culture of lifelong learning and cognitive enrichment can help counteract the potential negative effects of AI on our brain functions. Engaging in activities that challenge our minds, such as reading, puzzles, learning new skills, and participating in intellectually stimulating hobbies, can help maintain and strengthen our cognitive abilities, regardless of our reliance on AI technologies.In addition, it is imperative that we approach the development and implementation of AI systems with a strong emphasis on ethical considerations and human-centric designprinciples. By ensuring that AI technologies are developed and deployed in a manner that respects human agency and autonomy, we can mitigate the risk of excessive dependence and cognitive complacency.As we navigate this era of unprecedented technological advancement, it is crucial that we remain vigilant and proactive in safeguarding our cognitive abilities. While AI undoubtedly holds immense potential for enhancing various aspects of our lives, we must not lose sight of the fundamental importance of maintaining and nurturing our innate human intelligence.In conclusion, the relationship between AI and brain degeneration is a complex and multifaceted issue that requires careful consideration and a balanced approach. While the risks of cognitive decline due to over-reliance on AI should not be dismissed, the potential benefits of AI in enhancing our cognitive abilities and advancing our understanding of the human brain should also be recognized. By embracing a mindful and responsible integration of AI into our lives, fostering a culture of lifelong learning, and prioritizing ethical and human-centric design principles, we can harness the power of AI while safeguarding the integrity and vitality of our cognitive abilities.篇2Will AI Cause Brain Degeneration?The rapid development of artificial intelligence (AI) has sparked concerns about its potential impact on human cognition. Some argue that our reliance on AI technologies could lead to a decline in mental capacities, ultimately causing our brains to degenerate. As a student facing the challenges of the modern world, I find this topic particularly relevant and worth exploring.On one hand, the convenience and efficiency offered by AI technologies are undeniable. From voice assistants that can answer our queries to recommendation algorithms that curate personalized content, AI has seamlessly integrated into our daily lives. The ability to outsource tasks to these intelligent systems could potentially reduce the cognitive load on our brains, allowing us to conserve mental energy for more demanding endeavors.However, there is a valid concern that this reliance on AI could lead to a gradual atrophy of our cognitive abilities. 
Just as muscles can weaken from lack of use, our brains may lose their sharpness if we become too dependent on AI for tasks that traditionally challenged our problem-solving and critical thinking skills.One area where this potential degeneration could manifest is in our ability to navigate and orient ourselves in physical spaces. With the advent of GPS and navigation apps, many of us have become accustomed to relying on these technologies to guide us through unfamiliar territories. While convenient, this dependence could potentially weaken our innate sense of direction and spatial awareness, skills that our ancestors had to hone for survival.Similarly, the proliferation of search engines and information retrieval systems has made it easier than ever to access vast amounts of knowledge at our fingertips. While this accessibility is undoubtedly valuable, it could also diminish our motivation to commit information to memory. The ability to instantly look up facts and figures may lead to a decline in our capacity for memorization and recall, cognitive functions that were once essential for academic and professional success.Moreover, the algorithms behind many AI systems are designed to cater to our preferences and biases, creating personalized echo chambers that reinforce our existing beliefs and perspectives. This could potentially stunt the development of critical thinking skills, as we become less exposed to diverse viewpoints and challenging ideas that would traditionallyprompt us to question our assumptions and broaden our perspectives.On the other hand, proponents of AI argue that these technologies can actually enhance our cognitive abilities if utilized correctly. For instance, AI-powered educational tools can provide personalized learning experiences tailored to individual strengths and weaknesses, potentially improving our ability to acquire and retain knowledge more effectively.Additionally, the automation of routine and repetitive tasks could free up mental resources for more complex and creative endeavors, allowing us to focus our cognitive capacities on higher-order thinking and problem-solving. AI systems could also augment our decision-making processes by providing data-driven insights and analyses, potentially mitigating the effects of cognitive biases and emotional influences.Ultimately, the impact of AI on our cognitive abilities will likely depend on how we choose to integrate these technologies into our lives. If we become overly reliant on AI as a crutch, outsourcing tasks that would traditionally challenge and exercise our brains, then there is a risk of cognitive degeneration. However, if we approach AI as a tool to augment and enhance our existing abilities, leveraging its strengths while activelyengaging our own cognitive faculties, then it could potentially amplify our mental capacities.As a student navigating the complexities of the modern world, I believe it is essential to strike a balance between embracing the conveniences of AI and maintaining a commitment to exercising our cognitive abilities. We should remain vigilant about the potential risks of over-reliance on these technologies and actively seek opportunities to challenge our minds through activities that promote problem-solving, critical thinking, and lifelong learning.Furthermore, educational institutions and policymakers should prioritize the development of curricula and programs that foster cognitive resilience in the face of technological advancements. 
This could involve incorporating activities that strengthen skills such as memorization, spatial awareness, and analytical reasoning, while also promoting digital literacy and responsible use of AI technologies.In conclusion, the question of whether AI will cause brain degeneration is a complex one with valid arguments on both sides. While the convenience and efficiency of AI technologies are undeniable, we must remain mindful of the potential risks of over-reliance and strive to maintain a balance betweenleveraging these tools and actively exercising our cognitive abilities. By adopting a thoughtful and measured approach to AI integration, we can harness its potential while preserving and enhancing our mental capacities for generations to come.篇3Will AI Cause Brain Degeneration?The rapid advancement of artificial intelligence (AI) technology has ignited a heated debate about its potential impact on human cognitive abilities. As AI systems become increasingly sophisticated, capable of performing tasks that were once solely within the human domain, concerns have arisen that our reliance on these technologies may lead to a decline in our brain's capabilities. In this essay, I will explore the arguments on both sides of this contentious issue and ultimately conclude that while AI does present some risks, it is unlikely to cause widespread brain degeneration if used judiciously.On one hand, proponents of the brain degeneration hypothesis argue that our growing dependence on AI could result in a phenomenon akin to the atrophy of unused muscles. Just as our physical muscles weaken when we become sedentary, the cognitive muscles of our brains may deteriorate if we offloadtoo many mental tasks onto AI systems. They contend that by outsourcing cognitive functions like memory, problem-solving, and decision-making to AI, we may lose the ability to exercise and maintain these crucial mental faculties.This line of reasoning is bolstered by numerous examples of how technology has already impacted our cognitive abilities. The advent of calculators and spell-checkers, for instance, has arguably diminished our ability to perform mental arithmetic and spelling tasks. Similarly, the ubiquity of GPS navigation systems has reduced our reliance on mental mapping and spatial reasoning skills. Proponents argue that AI poses an even greater threat, as it has the potential to automate increasingly complex cognitive tasks, leaving our brains underutilized and susceptible to degeneration.Moreover, there are concerns that the convenience and efficiency offered by AI could foster a culture of intellectual laziness and complacency. If we become accustomed to relying on AI for mental tasks, we may lose the motivation to challenge ourselves and develop our cognitive abilities, leading to a gradual erosion of our mental capabilities.On the other hand, opponents of the brain degeneration hypothesis argue that AI is a tool, and like any tool, its impact onour cognitive abilities depends on how we choose to use it. They contend that AI has the potential to augment and enhance our mental capabilities rather than diminish them. By offloading routine and tedious tasks to AI systems, we can free up cognitive resources to focus on higher-order thinking, creativity, and problem-solving.Furthermore, AI can serve as a powerful educational tool, providing personalized learning experiences and adaptive curricula tailored to individual needs and learning styles. 
This could potentially enhance our cognitive development and foster a deeper understanding of complex concepts, rather than promoting intellectual atrophy.Opponents also argue that the human brain is remarkably plastic and adaptable, capable of reorganizing and rewiring itself in response to new challenges and experiences. As we interact with AI systems, our brains may develop new cognitive pathways and strategies to integrate and leverage these technologies effectively. This process of cognitive adaptation could ultimately strengthen and diversify our mental capabilities, rather than causing degeneration.Additionally, the development of AI is not a one-way street; it is a collaborative process that requires human intelligence andoversight. By actively engaging with AI systems, we can continually challenge ourselves to understand and improve these technologies, fostering a symbiotic relationship that stimulates cognitive growth and innovation.In my opinion, the truth likely lies somewhere between these two extremes. While the potential for brain degeneration due to excessive reliance on AI cannot be dismissed entirely, it is unlikely to occur on a widespread scale if we adopt a balanced and judicious approach to integrating AI into our lives.The key lies in striking a careful balance between leveraging the benefits of AI while still engaging in activities that exercise and stimulate our cognitive faculties. Rather than outsourcing all mental tasks to AI, we should selectively offload routine and repetitive tasks, freeing up mental resources to focus on more complex and creative endeavors. This approach could potentially enhance our cognitive abilities by allowing us to concentrate on higher-order thinking and problem-solving, while still maintaining and exercising our fundamental cognitive skills.Moreover, it is crucial to foster a culture of lifelong learning and intellectual curiosity, where we continually challenge ourselves to acquire new knowledge and skills, both with and without the aid of AI. By embracing a growth mindset andactively seeking opportunities for cognitive enrichment, we can counteract the potential risks of complacency and intellectual laziness.Education and awareness also play a vital role in mitigating the risks of brain degeneration. By understanding the potential pitfalls of overreliance on AI, we can develop strategies and best practices for integrating these technologies in a responsible and cognitively-enriching manner. This includes promoting media literacy, critical thinking skills, and a healthy skepticism towards the outputs of AI systems, ensuring that we remain actively engaged and discerning users.In conclusion, while the potential for AI to cause brain degeneration should not be dismissed entirely, it is a risk that can be effectively mitigated through a balanced and thoughtful approach to integrating these technologies into our lives. By selectively leveraging the benefits of AI while actively engaging in cognitively-stimulating activities, fostering a culture of lifelong learning, and promoting responsible AI usage through education and awareness, we can harness the power of AI to enhance and augment our cognitive abilities, rather than diminish them.。

Challenges and Progress of Federated Learning under Edge Computing


Table 1  Summary of research on communication resource limitation

Reference | Scenario | Technical features | Test dataset | Baseline | Improvement
[2] | H-FEEL | Theoretical analysis, via nonlinear programming, of sub-channel allocation and helper scheduling | MNIST | - | -
[3] | H-FEEL | Synchronous client-edge aggregation plus asynchronous edge-cloud aggregation | MNIST; Fashion-MNIST; CIFAR-10 | |

1.1 Limited resources
First, federated learning requires devices to exchange model parameters and gradients frequently. Because the bandwidth of edge devices is usually very limited, and the devices cannot process and transmit large volumes of data at the same time, communication between devices may suffer high latency, which degrades the efficiency and accuracy of model training. Second, the computing power and storage of edge devices are also limited, and edge devices face stricter energy-consumption requirements. These constraints pose challenges for federated learning as a distributed machine-learning method.

1.2 Heterogeneity
Data heterogeneity, in particular non-IID data, has always been a challenge for federated learning, and under edge computing it becomes even more pronounced because of the heterogeneity of the edge devices themselves. The devices differ greatly, for example smart sensors, intelligent edge routers, and converged ICT gateways, and the data they produce are correspondingly complex and diverse. Constrained by device performance and network conditions, edge devices may produce incomplete data, and the data they generate are often real-time in nature, which readily causes data-freshness problems. In addition, edge devices are mobile and unstable: the bandwidth of links in the edge network may change, which not only affects the effective link bandwidth but can even lead to dropped connections. All of this poses greater challenges for federated learning.

1.3 Privacy and security
Privacy and security have long been a hot research topic in federated learning, so the privacy and security of federated learning in the edge-computing setting receive the same attention. In addition, for the local sub-network of an edge server, …
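The communication bottleneck described in Section 1.1 arises because every round of federated learning ships model updates between edge devices and the aggregator. The sketch below is a minimal, framework-free federated averaging (FedAvg) round in Python/NumPy, intended only to make that parameter-exchange pattern concrete; the model (linear regression), the client data, and the learning rate are illustrative assumptions, not part of the surveyed systems such as H-FEEL.

```python
import numpy as np

rng = np.random.default_rng(1)

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient-descent epochs on its private data."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)     # squared-error gradient
        w -= lr * grad
    return w

def fedavg_round(w_global, clients):
    """One communication round: broadcast, local training, size-weighted averaging."""
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    local_models = [local_update(w_global, X, y) for X, y in clients]
    weights = sizes / sizes.sum()
    return sum(wk * lw for lw, wk in zip(local_models, weights))

# Toy setup: 4 edge clients with small, differently distributed local datasets (non-IID inputs)
true_w = np.array([2.0, -1.0])
clients = []
for k in range(4):
    X = rng.normal(loc=k, scale=1.0, size=(30, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=30)
    clients.append((X, y))

w = np.zeros(2)
for rnd in range(20):
    w = fedavg_round(w, clients)   # each round costs one up/down transfer per client
print("estimated weights after 20 rounds:", w)
```

Every call to fedavg_round corresponds to one full exchange of model parameters over the edge links, which is why reducing the number or size of such exchanges is the focus of the works summarized in Table 1.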

Research on Social Media Sentiment Analysis Based on ANNs and Deep Learning


Introduction

Social media has become an integral part of our lives, and people can express their emotions and opinions online freely. The analysis of sentiment on social media can provide valuable information to companies, governments, and individuals. Traditional sentiment analysis methods rely on manual labeling, which is time-consuming and requires a large amount of effort. Recently, many researchers have explored the use of artificial neural network (ANN) and deep learning techniques to perform sentiment analysis on social media platforms. In this article, we will review the latest developments in sentiment analysis based on ANN and deep learning.

Overview of ANN and Deep Learning

The ANN is an information processing system inspired by the structure and function of a biological brain. ANNs consist of interconnected nodes (neurons) that process information. Deep learning is a type of ANN composed of multiple layers, which allows the network to learn increasingly complex features of data. Deep learning has revolutionized the field of machine learning, especially in computer vision and natural language processing.

Social Media Sentiment Analysis

Social media sentiment analysis is the process of identifying people's opinions, attitudes, and emotions expressed in social media data. Sentiment analysis can be used in various fields, including marketing, finance, politics, and healthcare. The traditional approach to sentiment analysis involves the use of machine learning algorithms, such as support vector machines (SVMs) and Naive Bayes classifiers, which require labeled data to train the model. However, labeling a large amount of data is difficult and time-consuming.

ANN-based Sentiment Analysis

ANN-based sentiment analysis involves the use of ANNs to automatically classify text data as positive, negative, or neutral. The most commonly used ANN-based models for sentiment analysis are the multilayer perceptron (MLP) and the recurrent neural network (RNN).

MLP is a type of feedforward ANN, where the signal flows only in one direction through the network. MLP consists of an input layer, hidden layers, and an output layer. The input layer receives the input data, which is then processed by the hidden layers. The output layer produces the final classification result. MLP has been shown to be effective in sentiment analysis tasks, especially when combined with other techniques such as feature selection and weighting.

RNN is a type of ANN that has loops in the network, which allow information to persist from previous time steps. RNNs have shown remarkable results in natural language processing tasks such as language translation, speech recognition, and text generation. RNNs can also be used for sentiment analysis, where the network can learn the context of the text and capture the temporal relationships between words.

Deep Learning-based Sentiment Analysis

Deep learning-based sentiment analysis involves the use of deep neural networks, such as convolutional neural networks (CNNs) and long short-term memory (LSTM) networks. CNNs are a type of deep neural network that is mainly used for image processing. However, CNNs can also be used for sentiment analysis, where the network learns high-level features of the input data automatically. CNN-based models have shown outstanding performance in sentiment analysis tasks, especially when combined with pre-trained models such as word embeddings.

LSTM is a type of deep neural network that is used to process sequential data, such as text and speech. LSTM networks can learn long-term dependencies in the input data, which makes them suitable for sentiment analysis tasks. LSTM-based models have achieved state-of-the-art performance in sentiment analysis tasks, especially when combined with techniques such as attention mechanisms and bidirectional LSTM.

Conclusion

In conclusion, sentiment analysis is a crucial task in understanding people's opinions and emotions in social media data. ANN and deep learning techniques have shown promising results in sentiment analysis tasks, where the models can learn the high-level features of the input data automatically. ANNs such as MLP and RNNs have been used for sentiment analysis tasks, while deep learning techniques such as CNN and LSTM networks have achieved state-of-the-art results. Future research in sentiment analysis should focus on improving the models' interpretability and domain adaptation.
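To make the LSTM-based approach discussed above concrete, here is a minimal PyTorch sketch of a sentiment classifier that embeds word indices, runs them through a bidirectional LSTM, and classifies the final hidden states. The vocabulary size, dimensions, and the tiny toy batch are illustrative assumptions rather than a reproduction of any system reviewed in this article.

```python
import torch
import torch.nn as nn

class LSTMSentimentClassifier(nn.Module):
    def __init__(self, vocab_size=10_000, embed_dim=128, hidden_dim=256, num_classes=3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)  # 2x for bidirectional

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer word indices
        embedded = self.embedding(token_ids)          # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(embedded)             # h_n: (2, batch, hidden_dim)
        final = torch.cat([h_n[0], h_n[1]], dim=-1)   # concat forward/backward states
        return self.classifier(final)                 # (batch, num_classes) logits

# Toy usage: two padded "sentences" of word indices, labels 0/1/2 = negative/neutral/positive
model = LSTMSentimentClassifier()
tokens = torch.tensor([[5, 42, 7, 0, 0], [9, 13, 2, 6, 1]])
labels = torch.tensor([2, 0])

logits = model(tokens)
loss = nn.functional.cross_entropy(logits, labels)
loss.backward()
print("logits shape:", logits.shape, "loss:", float(loss))
```

In practice the embedding layer would typically be initialized from pre-trained word embeddings, which is the combination the survey highlights as most effective.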

APA-Style Reference Examples


APA格式参考文献示例期刊文章1.一位作者写的文章Hu,L.G.[胡莲香].(20XX).走向大数据知识服务:大数据时代图书馆服务模式创新.农业图书情报学刊(2):173-177.Olsher,D.(20XX).Semantically-basedpriorsandnuancedknowledgecoreforBigData,Soc ialAI,andlanguageunderstanding.NeuralNetworks,58,131-147.2.两位作者写的文章Li,J.Z.,&Liu,G.M.[李建中,刘显敏].(20XX).大数据的一个重要方面:数据可用性.计算机研究与发展(6):1147-1162.Mendel,J.M.,&Korjani,M.M.(20XX).Onestablishingnonlinearcombinationsofvariable rmationSciences,280,98-110. 3.三位及以上的作者写的文章Weichselbraun,A.etal.(20XX).Enrichingsemanticknowledgebasesforopinionmininginb igdataapplications.Knowledge-BasedSystems,69,78-85.Zhang,P.etal.[张鹏等].(20XX).云计算环境下适于工作流的数据布局方法.计算机研究与发展(3):636-647.专著1.一位作者写的书籍Rossi,P.H.(1989).DownandoutinAmerica:Theoriginsofhomelessness.Chicago:Universi tyofChicagoPress.Wang,B.B.[王彬彬].(20XX).文坛三户:金庸·王朔·余秋雨——当代三大文学论争辨析.郑州:大象出版社.2.两位作者写的书籍Plant,R.,&Hoover,K.(20XX).ConservativecapitalisminBritainandtheUnitedStates:Acr iticalappraisal.London:Routledge.Yin,D.,&Shang,H.[隐地,尚海].(20XX).到绿光咖啡屋听巴赫读余秋雨.上海:上海世界图书出版公司.3.三位作者写的书籍Chen,W.Z.etal.[陈维政等].(20XX).人力资源管理.大连:大连理工大学出版社. Hall,S.etal.(1991).Culture,media,language:Workingpapersinculturalstudies,1972-79( CulturalstudiesBirmingham).London:Routledge.4.新版书Kail,R.(1990).Memorydevelopmentinchildren(3rded.).NewYork:Freeman.编著1.一位主编编撰的书籍Loshin,D.(Ed.).(20XXa).Bigdataanalytics.Boston:MorganKaufmann.Zhong,L.F.[钟兰凤](编).(20XX).英文科技学术话语研究.镇江:江苏大学出版社. 2.两位主编编撰的书籍Hyland,K.,&Diani,G.(Eds.).(20XX).Academicevaluation:Reviewgenresinuniversityset tings.London:PalgraveMacmillan.Zhang,D.L.,&Zhang,G.[张德禄,张国](编).(20XX).英语文体学教程.北京:高等教育出版社.3.三位及以上主编编撰的书籍Zhang,K.D.etal.[张克定等](编).(20XX).系统评价功能.北京:高等教育出版社. Campbell,C.M.etal.(Eds.).(20XX).GroupsStAndrews20XXinOGford:Volume2.NewYor k:CambridgeUniversityPress.4.书中的文章DelaRosaAlgarín,A.,&Demurjian,S.A.(20XX).Anapproachtofacilitatesecurityassuran ceforinformationsharingandeGchangeinbig-dataapplications.InB.Akhgar&H.R.A rabnia(Eds.),EmergingtrendsinICTsecurity(pp.65-83).Boston:MorganKaufmann. He,J.M.,&Yu,J.P.[何建敏,于建平].(20XX).学术论文引言部分的经验功能分析.张克定等.(编).系统功能评价(pp.93-101).北京:高等教育出版社.翻译的书籍Bakhtin,M.M.(1981).Thedialogicimagination:Fouressays(C.Emerson&M.Holquist,Tr ans.).Austin:UniversityofTeGasPress.Le,D.L.[勒代雷].(20XX).释意学派口笔译理论(刘和平译).北京:中国对外翻译出版公司.Kontra,M.etal.(20XX).语言:权利和资源(李君,满文静译).北京:外语教学与研究出版社.Wang,R.D.,&Yu,Q.Y.[王仁定,余秋雨].(20XX).吴越之间——余秋雨眼里的中国文化(彩图本)(梁实秋,董乐天译).上海:上海文化出版社.硕博士论文Huan,C.P.(2015).JournalisticstanceinChineseandAustralianhardnews.Unpublisheddo ctorialdissertation,MacquarieUniversity,Sydney.Wang,G.Z.[王璇子].(20XX).功能对等视角下的英语长句翻译.南京大学硕士学位论文.注:1.APA格式参考文献中的文章标题、书籍名称,冒号后第一个单词,括号里第一个单词和专有名词的首字母大写,其余单词首字母均小写。

Injection Molding Parameter Optimization Based on a Hybrid Neural Network and Genetic Algorithm Approach


Injection Molding Parameter Optimization Based on a Hybrid Neural Network and Genetic Algorithm Approach
ZHENG Sheng-rong, XIN Yong, YANG Guo-tai, HE Cheng-hong
(Mechanical and Electronic Engineering College, Nanchang University, Nanchang, Jiangxi 330029, China) (zsrcm @ )

Abstract: An optimization system for injection molding process parameters was established based on a hybrid neural network and genetic algorithm approach. An application program was written in the Matlab language to carry out the neural network's parameter prediction and the genetic algorithm's optimization. The network predictions were compared with CAE simulation results and the errors analyzed, demonstrating the stability and reliability of the BP network. The optimized result was verified by CAE simulation and by experiment and proved correct, indicating that the injection molding process parameter optimization method based on the hybrid neural network and genetic algorithm approach is feasible.
Keywords: artificial neural network; genetic algorithm; hybrid approach; Matlab; CAE; parameter optimization
CLC number: TP183   Document code: A   Article ID: 1001-9081(2004)02-0091-04
Received: 2003-08-01; revised: 2003-11-04. Funding: Key Science and Technology Research Project of the Ministry of Education (0366); Science and Technology Project of the Jiangxi Provincial Science and Technology Commission (Z1891). About the authors: ZHENG Sheng-rong (1964-), male, Nanchang, Jiangxi, Ph.D. candidate; research interests: materials-forming CAD/CAE/CAM and artificial intelligence. XIN Yong (1959-), male, Pingxiang, Jiangxi, professor, doctoral supervisor, and postdoctoral researcher; research interests: injection molding CAD/CAE/KBE and integrated CAX/KBE technology for material-forming processes and mold design and manufacture.

Optimization of Injection Parameters Based on Hybrid Neural Network and Genetic Algorithm
ZHENG Sheng-rong, XIN Yong, YANG Guo-tai, HE Cheng-hong (Mechanical and Electronic Engineering College, Nanchang University, Nanchang Jiangxi 330029, China)
Abstract: In this paper, an optimization system is established based on a hybrid neural network and genetic algorithm approach. The application program is compiled in the Matlab engineering computing language, which is used to calculate the parameter values predicted by the neural network and the result of the genetic algorithm optimization. A comparison and error analysis has been carried out between the results predicted by the network and the CAE simulated results, which shows that the BP network is stable and reliable. The optimized outcome, verified by CAE simulation and tested by experiment, has been proved to be correct. This indicates that the injection parameter optimization method based on the hybrid neural network and genetic algorithm approach is feasible.
Key words: artificial neural network; genetic algorithm; hybrid approach; Matlab; CAE; parameter optimization

1 Introduction
An artificial neural network (ANN) is a theoretical mathematical model constructed on the basis of our understanding of the brain's neural networks: it consists of multiple layers of interconnected neurons, can realize particular functions, and is an information system built by imitating the structure and function of biological neural networks [1].
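The pipeline summarized in the abstract is: train a BP neural network on simulation samples as a surrogate for the process-quality response, then let a genetic algorithm search the process-parameter space using that network as its fitness function. The original implementation is in Matlab; the following Python/NumPy sketch only illustrates that coupling, with assumed parameter ranges, a made-up stand-in for the CAE training data, and a generic real-coded GA, none of which come from the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Assumed process-parameter box: melt temperature (C), mold temperature (C), injection pressure (MPa)
lo = np.array([200.0, 40.0, 60.0])
hi = np.array([280.0, 90.0, 120.0])

# Stand-in for CAE samples (parameters -> warpage); in the paper these come from simulation runs
X_train = rng.uniform(lo, hi, size=(200, 3))
y_train = ((X_train - [240, 60, 90]) ** 2 / [1600, 625, 900]).sum(axis=1) + rng.normal(0, 0.02, 200)

# BP-style network acting as a surrogate model of the process response
surrogate = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
surrogate.fit(X_train, y_train)

def ga_minimize(predict, pop_size=60, generations=80, mut_scale=0.05):
    """Simple real-coded GA: tournament selection, blend crossover, Gaussian mutation."""
    pop = rng.uniform(lo, hi, size=(pop_size, 3))
    for _ in range(generations):
        fitness = predict(pop)                                  # lower predicted warpage = better
        idx = rng.integers(0, pop_size, size=(pop_size, 2))     # tournament selection
        parents = pop[np.where(fitness[idx[:, 0]] < fitness[idx[:, 1]], idx[:, 0], idx[:, 1])]
        alpha = rng.uniform(0, 1, size=(pop_size, 1))           # blend crossover
        children = alpha * parents + (1 - alpha) * np.roll(parents, 1, axis=0)
        children += rng.normal(0, mut_scale * (hi - lo), size=children.shape)  # mutation
        pop = np.clip(children, lo, hi)                         # stay inside the parameter box
    return pop[np.argmin(predict(pop))]

best_params = ga_minimize(lambda P: surrogate.predict(P))
print("GA-optimized parameters [melt T, mold T, pressure]:", best_params)
print("predicted warpage:", surrogate.predict(best_params[None, :])[0])
```

The design choice that makes the hybrid attractive is that each fitness evaluation costs only a cheap network prediction instead of a full CAE simulation, so the GA can afford thousands of evaluations.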

Word Vector Models in Natural Language Processing


Natural language processing (NLP) is an important research branch of artificial intelligence (AI). Its goal is to enable computers to understand and process natural language so that humans and machines can communicate effectively.

Within NLP, word vector models are an important research direction. Their purpose is to convert textual information into vector form so that it can be processed and analyzed in a vector space, supporting specific NLP applications and functions.

1. Overview of word vector models
A word vector model is a technique that maps each word in a vocabulary to a point in a vector space.

Common word vector models fall into two families: statistics-based models and neural-network-based models.

The statistics-based models mainly include latent semantic analysis (LSA), probabilistic latent semantic analysis (PLSA), and latent Dirichlet allocation (LDA).

基于神经网络的模型主要包括嵌入式层(Embedded Layer)、循环神经网络(Recursive Neural Network,RNN)和卷积神经网络(Convolutional Neural Network,CNN)等。
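Among the neural approaches, a skip-gram word2vec model is the most common entry point. The following is a minimal, hedged sketch using the gensim library (4.x API); the toy corpus and hyperparameters are illustrative and not part of the original article.

```python
# Hedged sketch: training skip-gram word vectors with gensim on a toy corpus.
from gensim.models import Word2Vec

sentences = [
    ["natural", "language", "processing", "maps", "words", "to", "vectors"],
    ["word", "vectors", "capture", "semantic", "similarity", "between", "words"],
]

model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, sg=1)  # sg=1: skip-gram
vec = model.wv["vectors"]                  # 100-dimensional embedding of one word
similar = model.wv.most_similar("words")   # nearest neighbours in the vector space
```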

II. Applications of word vector models
Word vector models are widely used in NLP.

The most important applications include text classification and sentiment analysis.

1. Text classification
Text classification is the task of assigning a document or a sentence to one of a set of predefined categories.

For example, a news article may be assigned to the politics, technology, or sports category.

In text classification, a word vector model can map words into the vector space and compute a vector representation for each category, so that test texts can be classified.

Common text classification algorithms include Naive Bayes, support vector machines (SVM), and logistic regression.
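A simple way to combine the two ideas is to average a document's word vectors and feed the result to one of these linear classifiers. The sketch below is illustrative only; the stand-in random embeddings and toy labels are assumptions, and in practice the vectors would come from a trained model such as the one above.

```python
# Hedged sketch: classify documents from averaged word vectors with logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
vocab = ["election", "policy", "vote", "match", "goal", "league", "parliament"]
wv = {w: rng.normal(size=50) for w in vocab}   # stand-in embeddings; use trained vectors in practice

def doc_vector(tokens, wv, dim=50):
    vecs = [wv[t] for t in tokens if t in wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

X = np.vstack([doc_vector(d, wv) for d in [["election", "policy", "vote"],
                                           ["match", "goal", "league"]]])
y = ["politics", "sports"]
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict([doc_vector(["parliament", "vote"], wv)]))   # expected: "politics"
```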

2. Sentiment analysis
Sentiment analysis determines, by analyzing the content of a text, the emotional state of people as they write or read an article, watch a video, or use a product.

Research Proposal on Vietnamese Text-and-Speech Conversion Technology

Title: Research on Vietnamese text-and-speech conversion technology

I. Research background and significance
With the development of globalization and the strengthening of international exchange, Vietnam, as an important Southeast Asian country, occupies a position that cannot be ignored in the economy, culture, and other fields.

Vietnamese, as the official language of Vietnam, is widely used in the country's political, economic, and cultural spheres.

The linguistic structure and grammar of Vietnamese and Chinese differ considerably, so translation and language conversion between Chinese and Vietnamese face many difficulties.

To improve the efficiency of communication between Vietnamese and Chinese and to promote cooperation and exchange between the two countries, research on Vietnamese text-and-speech conversion technology is needed.

II. Research content and methods
1. Research content
This study investigates Vietnamese text-and-speech conversion technology and explores the linguistic difficulties and translation techniques between Vietnamese and Chinese, in order to improve the efficiency and accuracy of Chinese-Vietnamese communication and translation.

The specific content includes:
(1) Study the similarities and differences between Vietnamese and Chinese in pronunciation, grammatical structure, and other aspects, and identify the difficulties and key techniques of Chinese-Vietnamese translation and language conversion.
(2) Study translation techniques between Vietnamese and Chinese, including rule-based translation and statistical machine translation, and examine their advantages, disadvantages, and applicable scope.
(3) Build a deep-learning-based Vietnamese language conversion model to realize speech and text translation between Chinese and Vietnamese (a minimal sketch of such a model follows this list).
(4) Develop an internet-based Vietnamese online translation system that provides real-time, accurate, and convenient Chinese-Vietnamese translation services.
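The kind of model item (3) refers to is typically a sequence-to-sequence encoder-decoder. The following is a hedged toy sketch in PyTorch; the vocabulary sizes, dimensions, and tokenization are placeholder assumptions, and positional encodings are omitted for brevity.

```python
# Hedged sketch: a toy Transformer encoder-decoder for Chinese-to-Vietnamese text translation.
import torch
import torch.nn as nn

class ToyTranslator(nn.Module):
    def __init__(self, src_vocab=8000, tgt_vocab=8000, d_model=256):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, d_model)
        self.tgt_emb = nn.Embedding(tgt_vocab, d_model)
        self.transformer = nn.Transformer(d_model=d_model, nhead=4,
                                          num_encoder_layers=2, num_decoder_layers=2,
                                          batch_first=True)
        self.out = nn.Linear(d_model, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        # causal mask so each target position only attends to earlier positions
        mask = self.transformer.generate_square_subsequent_mask(tgt_ids.size(1))
        h = self.transformer(self.src_emb(src_ids), self.tgt_emb(tgt_ids), tgt_mask=mask)
        return self.out(h)   # per-position logits over the Vietnamese vocabulary

model = ToyTranslator()
logits = model(torch.randint(0, 8000, (2, 10)), torch.randint(0, 8000, (2, 12)))
print(logits.shape)  # (2, 12, 8000)
```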

2. Research methods
This study will adopt multiple research methods, including theoretical research, empirical research, and model building, and will synthesize theories and techniques from computer science, linguistics, and machine learning to carry out research on Vietnamese text-and-speech conversion technology.

III. Expected results and outcomes
The expected results and outcomes of this study are as follows:
(1) Identify the linguistic difficulties and translation techniques between Vietnamese and Chinese, providing a theoretical basis and practical guidance for Chinese-Vietnamese translation and language conversion.
(2) Build a deep-learning-based Vietnamese language conversion model to realize speech and text translation between Chinese and Vietnamese.
(3) Develop an internet-based Vietnamese online translation system that provides real-time, accurate, and convenient Chinese-Vietnamese translation services.
(4) Produce papers and patents on Vietnamese text-and-speech conversion technology, offering new ideas and methods for communication and translation between Vietnamese and Chinese.

IV. Research plan and schedule
1. Research plan
(1) Phase 1: literature review and selection of research methods (1 month).

(2) Phase 2: research on the linguistic characteristics of Vietnamese and Chinese and on translation techniques (3 months).

Improving Semi-supervised Neural Machine Translation With Variational Information Bottleneck

基于变分信息瓶颈的半监督神经机器翻译于志强1, 2, 3余正涛 1, 3 黄于欣 1, 3 郭军军 1, 3 高盛祥1, 3摘 要 变分方法是机器翻译领域的有效方法, 其性能较依赖于数据量规模. 然而在低资源环境下, 平行语料资源匮乏,不能满足变分方法对数据量的需求, 因此导致基于变分的模型翻译效果并不理想. 针对该问题, 本文提出基于变分信息瓶颈的半监督神经机器翻译方法, 所提方法的具体思路为: 首先在小规模平行语料的基础上, 通过引入跨层注意力机制充分利用神经网络各层特征信息, 训练得到基础翻译模型; 随后, 利用基础翻译模型, 使用回译方法从单语语料生成含噪声的大规模伪平行语料, 对两种平行语料进行合并形成组合语料, 使其在规模上能够满足变分方法对数据量的需求; 最后, 为了减少组合语料中的噪声, 利用变分信息瓶颈方法在源与目标之间添加中间表征, 通过训练使该表征具有放行重要信息、阻止非重要信息流过的能力, 从而达到去除噪声的效果. 多个数据集上的实验结果表明, 本文所提方法能够显著地提高译文质量, 是一种适用于低资源场景的半监督神经机器翻译方法.关键词 神经机器翻译, 跨层注意力机制, 回译, 变分信息瓶颈引用格式 于志强, 余正涛, 黄于欣, 郭军军, 高盛祥. 基于变分信息瓶颈的半监督神经机器翻译. 自动化学报, 2022, 48(7):1678−1689DOI 10.16383/j.aas.c190477Improving Semi-supervised Neural Machine Translation WithVariational Information BottleneckYU Zhi-Qiang 1, 2, 3 YU Zheng-Tao 1, 3 HUANG Yu-Xin 1, 3 GUO Jun-Jun 1, 3 GAO Sheng-Xiang 1, 3Abstract Variational approach is effective in the field of machine translation, its performance is highly dependent on the scale of the data. However, in low-resource setting, parallel corpus is limited, which cannot meet the de-mand of variational approach on data, resulting in suboptimal translation effect. To address this problem, we pro-pose a semi-supervised neural machine translation approach based on variational information bottleneck. The cent-ral ideas are as follows: 1) cross-layer attention mechanism is introduced to train the basic translation model;2) the trained basic translation model is used on the basis of small-scale parallel corpus, then get large-scale noisy pseudo-parallel corpus by back-translation with the input of monolingual corpus. Finally, pseudo-parallel and paral-lel corpora are merged into combinatorial corpora; 3) variational information bottleneck is used to reduce data noise and eliminate information redundancy in the combinatorial corpus. Experiment results on multiple language pairs show that the model we proposed can effectively improve the quality of translation.Key words Neural machine translation, cross-layer attention mechanism, back-translation, variational information bottleneckCitation Yu Zhi-Qiang, Yu Zheng-Tao, Huang Yu-Xin, Guo Jun-Jun, Gao Sheng-Xiang. Improving semi-super-vised neural machine translation with variational information bottleneck. Acta Automatica Sinica , 2022, 48(7):1678−1689自端到端的神经机器翻译(Neural machine translation)模型[1−2]提出以来, 神经机器翻译得到了飞速的发展. 基于注意力机制[2]的神经机器翻译模型提出之后, 更使得神经机器翻译在很多语言对上的翻译性能超越了传统的统计机器翻译(Statist-ical machine translation)[3], 成为自然语言处理领域的热点研究方向[4], 也因此促进了很多神经网络方法在其上的迁移与应用, 变分方法[5−6]即是其中一种重要方法. 变分方法已证明能够显著提升神经机器翻译的性能[7], 但是由于数据驱动特性, 其性能较收稿日期 2019-06-24 录用日期 2020-01-17Manuscript received June 24, 2019; accepted January 17, 2020国家重点研发计划(2019QY1800), 国家自然科学基金(61732005, 61672271, 61761026, 61762056, 61866020), 云南省高新技术产业专项基金(201606), 云南省自然科学基金(2018FB104)资助Supported by National Key Research and Development Pro-gram of China (2019QY1800), National Natural Science Founda-tion of China (61732005, 61672271, 61761026, 61762056, 61866020), Yunnan High-Tech Industry Development Project (201606), and Natural Science Foundation of Yunnan Province (2018FB104)本文责任编委 张民Recommended by Associate Editor ZHANG Min1. 昆明理工大学信息工程与自动化学院 昆明 6505002. 云南民族大学数学与计算机科学学院 昆明 6505003. 云南省人工智能重点实验室 昆明 6505001. Faculty of Information Engineering and Automation, Kun-ming University of Science and Technology, Kunming 6505002. School of Mathematics and Computer Science, Yunnan MinzuUniversity, Kunming 650500 3. Yunnan Key Laboratory of Ar-tificial Intelligence, Kunming 650500第 48 卷 第 7 期自 动 化 学 报Vol. 48, No. 
72022 年 7 月ACTA AUTOMATICA SINICAJuly, 2022依赖于平行语料的规模与质量, 只有当训练语料规模达到一定数量级时, 变分方法才会体现其优势.然而, 在低资源语言对上, 不同程度的都面临平行语料缺乏的问题, 因此如何利用相对容易获取的单语语料、实现语料扩充成为应用变分方法的前提.针对此问题, 本文采用能够同时利用平行语料和单语语料的半监督学习方式展开研究. 半监督神经机器翻译(Semi-supervised neural machine transla-tion)主要通过两种方式对单语语料进行利用:1)语料扩充−再训练: 利用小规模平行语料训练基础翻译模型, 在此模型基础上利用回译[8]等语料扩充方法对大规模单语语料进行翻译, 形成伪平行语料再次参与训练; 2)联合训练: 利用自编码[9−10] 等方法, 以平行语料和单语语料共同作为输入, 进行联合训练. 本文重点关注语料扩充后的变分方法应用, 因此采用语料扩充−再训练方式.k −1k 目前被较多采用的语料扩充方法为: 首先利用小规模平行语料训练基础翻译模型, 在此基础上通过回译将大规模单语语料翻译为伪平行语料, 进而组合两种语料进行再次训练. 因此, 基础翻译模型作为任务的起始点, 它的性能直接影响后续任务的执行质量. 传统提升基础翻译模型性能的手段限于使用深层神经网络和在解码端最高层网络应用注意力机制. 然而, 由于深层神经网络在应用于自然语言处理任务中时, 不同层次的神经网络侧重学习的特征不同: 低层网络倾向于学习词法和浅层句法特征, 高层网络则倾向于获取更好的句法结构特征和语义特征[11]. 因此, 很多研究者通过层级注意力机制, 利用神经网络每一层编码器产生的上下文表征指导解码. 层级注意力机制使高层网络的特征信息得以利用的同时, 也挖掘低层网络对输入序列的表征能力. 然而, 上述研究多采用层内融合方式实现层级注意力机制, 其基本方式为将 层上下文向量融入第 层的编码中. 事实上在低资源环境中,受限的语料规模易导致模型训练不充分, 在此情况下引入层级注意力, 可能会加重网络复杂性, 造成性能下降. 因此, 本文设想通过融入跨层注意力机制, 使低层表征能够跨越层次后对高层表征产生直接影响, 既能弥补因网络复杂性增加带来的性能损失, 又能更好地利用表征信息提升翻译效果. 除此以外, 由于在基础模型的训练过程中缺少双语监督信号, 导致利用其产生的伪平行语料中不可避免的存在大量的数据噪声, 而在增加使用层级注意力机制后, 并不能减少噪声, 相反, 噪声随着更多表征信息的融入呈正比例增长[12−13]. 在随后的再训练过程中, 虽然语料规模能够满足变分方法的需求, 但含有较多噪声的语料作为编码器的输入, 使训练在源头就产生了偏差, 因此对整个再训练过程均造成影x y 响. 针对上述问题, 本文提出了一种融入变分信息瓶颈的神经机器翻译方法. 首先利用小规模平行语料训练得到基础翻译模型, 在其基础上利用回译将大规模单语语料翻译为伪平行语料, 进而合并两种平行语料, 使语料规模达到能够较好地应用变分方法的程度. 在此过程中, 针对基础翻译模型的训练不充分问题, 通过引入跨层注意力机制加强不同层次网络的内部交互, 除了通过注意力机制学习高层网络编码器产生的语义特征之外, 也关注低层网络产生上下文表征的能力和对高层表征的直接影响.随后, 针对生成的语料中的噪声问题, 使用变分信息瓶颈[12]方法, 利用其信息控制特性, 在编码端输入(源语言 )与解码端输出(目标语言 )之间的位置引入中间表征, 通过优化中间表征的分布, 使通过瓶颈的有效信息量最大, 从而最大程度放行重要信息、忽略与任务无关的信息, 实现噪声的去除.本文的创新点包括以下两个方面: 1)通过融入跨层注意力机制加强基础翻译模型的训练, 在增强的基础翻译模型上利用回译产生伪平行语料、增大数据规模, 使其达到能够有效应用变分方法的程度.2)首次将变分信息瓶颈应用于神经机器翻译任务,在生成的语料的基础上, 利用变分特性提升模型的性能, 同时针对生成语料中的噪声, 利用信息瓶颈的控制特性进行去除. 概括来说, 方法整体实现的是一种语料扩充−信息精炼与利用的过程, 并预期在融合该方法的神经机器翻译中取得翻译效果的提升. 在IWSLT 和WMT 等数据集上进行的实验结果表明, 本文提出的方法能显著提高翻译质量.1 相关工作1.1 层级注意力机制注意力机制的有效性得到证明之后, 迅速成为研究者们关注的热点. 很多研究者在神经网络的不同层次上应用注意力机制构建层级注意力模型, 在此基础上展开训练任务. Yang 等[14]将网络划分为两个注意力层次, 第一个层次为 “词注意”, 另一个层次为 “句注意”, 每部分通过双向循环神经网络(Recurrent neural network)结合注意力机制实现文本分类. Pappas 等[15]提出了一种用于学习文档结构的多语言分层注意力网络, 通过跨语言的共享编码器和注意力机制, 使用多任务学习和对齐的语义空间作为文本分类任务的输入, 显著提升分类效果. Zhang 等[16]提出一种层次结构摘要方法, 使用分层结构的自我关注机制来创建句子和文档嵌入,通过层次注意机制提供额外的信息源来获取更佳的特征表示, 从而更好地指导摘要的生成. Miculi-7 期于志强等: 基于变分信息瓶颈的半监督神经机器翻译1679cich等[17]提出了一个分层关注模型, 将其作为另一个抽象层次集成在传统端到端的神经机器翻译结构中, 以结构化和动态的方式捕获上下文, 显著提升了结果的BLEU (Bilingual evaluation under-study)值. Zhang等[18]提出了一种深度关注模型,模型基于低层网络上的注意力信息, 自动确定从相应的编码器层传递信息的阈值, 从而使词的分布式表示适合于高层注意力, 在多个数据集上验证了模型的有效性. 研究者们通过融入层级注意力机制到模型训练中, 在模型之上直接执行文本分类、摘要和翻译等任务, 与上述研究工作不同的是, 本文更关注于跨层次的注意力机制, 并期待将融入跨层注意力机制的基础翻译模型用于进一步任务.1.2 单语语料扩充如何在低资源场景下进行单语语料的扩充和利用一直是研究者们关注的热点问题之一. 早在2007年, Ueffing等[19]就提出了基于统计机器翻译的语料扩充方法: 利用直推学习来充分利用单语语料库.他们使用训练好的翻译模型来翻译虚拟的源文本,将其与译文配对, 形成一个伪平行语料库. 在此基础上, Bertoldi等[20]通过改进的网络结构进行训练,整个过程循环迭代直至收敛, 取得了性能上的进一步提升. Klementiev等[21]提出了一种单语语料库短语翻译概率估计方法, 在一定程度上缓解了生成的伪平行语料中的重复问题. 与前文不同, Zhang等[22]使用检索技术直接从单语语料库中提取平行短语.另一个重要的研究方向是将基于单语语料库的翻译视为一个解密问题, 将译文的生成过程等同于密文到明文的转换[23−24].以上的单语语料扩充方法主要应用于统计机器翻译中. 随着深度学习的兴起, 神经机器翻译成为翻译任务的主流方法, 探索在低资源神经机器翻译场景下的语料扩充方法成为研究热点. Sennrich等[8]在神经机器翻译框架基础上提出了语料扩充方法.他们利用具有网络结构普适性的两种方法来使用单语语料. 第1种方法是将单语句子与虚拟输入配对,然后在固定编码器和注意力模型参数的情况下利用这些伪平行句对进行训练. 在第2种方法中, 他们首先在平行语料库上训练初步的神经机器翻译模型, 然后使用该模型翻译单语语料, 最后结合单语语料及其翻译构成伪平行语料, 第2种方法也称为回译. 回译可在不依赖于神经网络结构的情况下实现平行语料的构建, 因此广泛应用于半监督和无监督神经机器翻译中. Cheng等[25]提出一种半监督神经机器翻译模型, 通过将回译与自编码进行结合重构源与目标语言的伪平行语料, 取得了翻译性能上的提升. Skorokhodov等[26]提出了一种将知识从单独训练的语言模型转移到神经机器翻译系统的方法, 讨论了在缺乏平行语料和计算资源的情况下,利用回译等方法提高翻译质量的几种技术. Ar-tetxe等[27]利用共享编码器, 在两个解码器上分别应用回译与去噪进行联合训练, 实现了只依赖单语语料的非监督神经机器翻译. 
Lample等[28]提出了两个模型变体: 一个神经网络模型和一个基于短语的模型. 利用回译、语言模型去噪以及迭代反向翻译自动生成平行语料. Burlot等[29]对回译进行了系统研究, 并引入新的数据模拟模型实现语料扩充. 与上述研究工作不同的是, 本文同时关注于伪平行语料生成所依赖的基础翻译模型的训练. 在训练过程中, 不仅利用注意力机制关注高层网络中对句法结构和语义信息的利用, 同时也关注低层网络信息对高层网络信息的直接影响.1.3 变分信息瓶颈为了实现信息的压缩和去噪, Tishby等[30]提出基于互信息的信息瓶颈(Information bottleneck)方法. 深度神经网络得到广泛应用后, Alemi等[12]在传统信息瓶颈的基础上进行改进, 提出了适用于神经网络的变分信息瓶颈(Variational informa-tion bottleneck), 变分信息瓶颈利用深度神经网络来建模和训练, 通过在源和目标之间添加中间表征来进行信息过滤.在神经机器翻译中, 尚未发现利用变分信息瓶颈进行噪声去除的相关研究工作, 但是一些基于变分的方法近期已经在神经机器翻译中得到应用, 有效提高了翻译性能. Zhang等[7]提出一个变分模型,通过引入一个连续的潜在变量来显式地对源语句的底层语义建模并指导目标翻译的生成, 能够有效的提高翻译质量. Eikema等[31]提出一个双语句对的深层生成模型, 该模型从共享的潜在空间中共同生成源句和目标句, 通过变分推理和参数梯度化来完成训练, 在域内、混合域等机器翻译场景中证明了模型的有效性. Su等[32]基于变分递归神经网络, 提出了一种变分递归神经机器翻译模型, 利用变分自编码器将随机变量添加到解码器的隐藏状态中, 能够在不同的时间步长上进一步捕获依赖关系.2 模型本节首先介绍传统基于注意力机制的基础翻译模型, 接着介绍了融入跨层注意力机制的基础翻译模型. 区别于传统的基础翻译模型, 本文通过融入跨层注意力机制, 除关注高层编码器产生的上下文1680自 动 化 学 报48 卷表征向量之外, 也关注低层编码器产生的上下文表征向量对高层编码的直接影响. 最后介绍了变分信息瓶颈模型, 展示了利用该模型对回译方法生成的伪平行语料中的噪声进行去除的过程.2.1 传统注意力机制模型y t x =(x 1,x 2,···,x n )y =(y 1,y 2,···,y t −1),y t 传统方法中, 最初通过在解码端最高层网络引入注意力机制进行基础翻译模型的训练. 如图1所示的2层编解码器结构中, 它通过在每个时间步长生成一个目标单词 来进行翻译. 给定编码端输入序列 和已生成的翻译序列 解码端产生下一个词 的概率为g s t t 其中, 是非线性函数, 为在时间步时刻的解码端隐状态向量, 由下式计算得到f c t t 其中,是激活函数, 是 时刻的上下文向量, 其计算式为h j =[h j ;h j ]x j αt,j 其中, 是输入序列 的向量表征, 由前向和后向编码向量拼接得到. 权重的定义为e t,j s t −1h j 其中,是对 和 相似性的度量, 其计算式为k k −1通过在最高层网络引入注意力机制来改善语义表征、辅助基础翻译模型的训练, 能够有效地提升翻译性能, 但仅利用最高层信息的方式使得其他层次的词法和浅层句法等特征信息被忽略, 进而影响生成的伪平行语料质量. 针对此问题, 能够利用每层网络上下文表征的层级注意力机制得到关注, 成为众多翻译系统采用的基础方法. 这些系统往往采用层内融合方式的层级注意力机制, 如图2所示的编解码器结构中, 第 层的输入融合了 层的上下文向量和隐状态向量. 具体计算式为d k −1t =tanh (W d [s k −1t ;c k −1t ]+b d )(6)s k t =f (s k t −1,d k −1t)(7)p t =tanh (W p ([s r t ;c r t ])+b p )(8)f r 其中, 为激活函数, 为神经网络层数.AttentionAttentionMLP c 2y 2y 1x nx 2x 1〈GO 〉kd t c 2k −1k −1c t k −1s t k −1编码器解码器图 2 层内融合方式的层级注意力机制融入Fig. 2 Model with hierarchical attention mechanismbased on inner-layer merge2.2 跨层注意力机制模型c k t ,r c t层内融合方式加强了低层表征利用, 但难以使低层表征跨越层次对高层表征产生直接影响. 因此,本文设想利用跨层融合, 在利用低层表征的同时促进低层表征对高层表征的直接影响. 通过融入跨层注意力机制, 使各层特征信息得到更加充分的利用.如图3所示, 模型通过注意力机制计算每一层的上下文向量 在最高层对它们进行拼接, 得到跨层融合的上下文向量 s t 同样, 通过跨层拼接操作得到 , 随后通过非注意力分布注意力打分编码器c 2y 2x 2x 2x n〈GO 〉图 1 传统作用于最高层网络的注意力机制融入Fig. 1 Model with traditional attention mechanismbased on top-layer merge7 期于志强等: 基于变分信息瓶颈的半监督神经机器翻译1681p t p t softmax 线性变换得到 , 用于输入到 函数中计算词表中的概率分布s t =[s 1t ;s 2t ;···;s r t ](10)p t =tanh (W p ([s t ;c t ])+b p )(11)2.3 变分信息瓶颈模型X Y Z X X →Z →Y R IB (θ)Z X 在基础翻译模型的训练中, 通过融入不同层次的上下文向量来改善语义表征, 但也因此带来更多的噪声信息. 针对此问题, 本文通过在编解码结构中引入适用于神经网络的变分信息瓶颈方法来进行解决. 需要注意的是, 编解码结构中, 编码端的输入通过编码端隐状态隐式传递到解码端. 变分信息瓶颈要求在编码端输入与解码端最终输出之间的位置引入中间表征, 因此为了便于实现, 将变分信息瓶颈应用于解码端获取最终输出之前, 以纳入损失计算的方式进行模型训练, 其直接输入为解码端的隐状态, 以此种方式实现对编码端输入中噪声的过滤.具体流程为: 在给定的 到 的转换任务中, 引入 作为源输入 的中间表征, 构造从 的信息瓶颈 , 利用 实现对 中信息的筛选和过滤. 计算过程为R IB (θ)=I (Z,Y ;θ)−βI (Z,X ;θ)(13)I (Z,Y ;θ)Y Z Z X →Y 其中, 表示 和 之间的互信息量. 变分信息瓶颈的目标是以互信息作为信息量的度量, 通过学习编码 的分布, 使 的信息量最小, 强迫模型让最重要的信息流过信息瓶颈而忽略与任务无关的信息, 从而实现噪声的去除.D ={⟨x (n),y (n )⟩}Nn =1给定输入平行语料 , 神经机器翻译的标准训练目标是极大化训练数据的似然概率P (y |x ;θ)x →y θ其中, 是 的翻译模型, 为模型的参数集合. 训练过程中, 寻求极大化似然概率等价于寻求损失的最小化z =f (x,y <t )z y 本文引入信息瓶颈作为编码的中间表征, 构造从中间表征 到输出序列 的损失, 作为训练的交叉熵损失, 计算式为P (z |x ;θ)Q (z )同时加入约束, 目标为 的分布与标准正态分布 的KL 散度(Kullback-Leibler diver-gence)最小化, 在引入变分信息瓶颈之后, 训练过程的损失函数为λλ10−3其中, 为超参数, 实验结果表明, 设置为 时取得最优结果.D a,b ={⟨a (m ),b (m )⟩}Mm =1D x ={⟨x (n )⟩}Nn =1D a,b D x D x,y D b +y,a +x ={⟨b (m )+y (n ),a (m )+x (n )⟩}M +Nm,n =1D x ={⟨x (n )⟩}Nn =1D y ={⟨y (n )⟩}Nn =1D b +D y D a +D x ,图4显示了引入了变分信息瓶颈后的模型结构, 同样地, 为了利用不同层次的上下文表征信息,在变分信息瓶颈模型中也引入了跨层注意力机制.模型的输入为平行语料和伪平行语料的组合. 
以给定小规模平行语料 和单语语料 为例, 表1展示了由原始小规模平行语料 和由单语语料 生成的伪平行语料 进行组合, 形成最终语料 的过程. 需要注意的是, 变分信息瓶颈是通过引入中间表征来实现去除源输入中的噪声信息, 对于单语语料 而言, 噪声信息存在于通过回译生成的对应伪语料 中. 因此在模型训练时, 需调换翻译方向, 将包含噪声信息的 作为源语言语料进行输入, 对其进行噪声去除. 而目标语言为不含噪声的 利于损失的计算.c 2k c 2y 2y 1x 2x 2x n〈GO 〉k −1AttentionAttention编码器解码器图 3 跨层融合方式的层级注意力机制融入Fig. 3 Model with hierarchical attention mechanismbased on cross-layer merge1682自 动 化 学 报48 卷表 1 语料组合结构示例Table 1 Examples of the combined corpus structure 语料类别源语言语料目标语言语料原始语料D a D b单语语料D x—伪平行语料D x D y组合语料D b+D y D a+D x3 实验设置3.1 数据集本文选择机器翻译领域的通用数据集作为平行语料来源, 表2显示了平行语料的构成情况. 为观察本文方法在不同规模数据集上的作用, 采用不同规模的数据集进行对比实验. 小规模训练语料中,英−越、英−中和英−德平行语料均来自IWSLT15数据集, 本文选择tst2012作为验证集进行参数优化和模型选择, 选择tst2013作为测试集进行测试验证. 大规模训练来自WMT14数据集, 验证集和测试集分别采用newstest2012和newstest2013.表3显示了单语语料的构成情况, 英−越和英−中翻译中, 英文和中文使用的单语语料来源于GIGAWORD数据集, 越南语方面为互联网爬取和人工校验结合处理后得到的1 M高质量语料. IWSLT 和WMT上的英−德翻译任务中, 使用的单语语料来源于WMT14数据集的单语部分, 具体由Euro-parl v7、News Commentary和News Crawl 2011组合而成. 本文对语料进行标准化预处理, 包括词切分、过长句对过滤, 其中, 对英语、德语还进行了去停用词操作. 本文选择BPE作为基准系统, 源端和目标端词汇表大小均设置为30000.表 3 实验使用的单语语料的构成, 其中越南语使用本文构建的单语语料Table 3 The composition of monolingual corpus, in which Vietnamese was collected by ourselves翻译任务语言数据集句数 (M)单语语料en↔vien GIGAWORD22.3vi None1en↔zhen GIGAWORD22.3zh GIGAWORD18.7en↔de(IWSLT15)en WMT1418de WMT1417.3en↔de(WMT14)en WMT1418de WMT1417.3 3.2 参数设置本文选择以下模型作为基准系统:1) RNNSearch模型: 编码器和解码器分别采编码器解码器SoftmaxVIBWord embedding (from original+pseudo corpus) A t t e n t i o n图 4 融入变分信息瓶颈后的神经机器翻译模型Fig. 4 NMT model after integrating variational information bottleneck表 2 平行语料的构成Table 2 The composition of parallel corpus语料类型数据集语言对训练集验证集测试集小规模平行语料IWSLT15en↔vi133 K15531268 IWSLT15en↔zh209 K8871261 IWSLT15en↔de172 K8871565大规模平行语料WMT14en↔de 4.5 M30033000注: en: 英语, vi: 越南语, zh: 中文, de: 德语.7 期于志强等: 基于变分信息瓶颈的半监督神经机器翻译1683用6层双向长短期记忆网络(Bi-directional long short-term memory, Bi-LSTM)和长短期记忆网络(Long short-term memory, LSTM)构建. 隐层神经元个数设置为1 000, 词嵌入维度设置为620.使用Adam 算法[33]进行模型参数优化, dropout 率设定为0.2, 批次大小设定为128. 使用集束宽度为4的集束搜索(Beam search)算法进行解码.2) Transformer 模型: 编码器和解码器分别采用默认的6层神经网络, 头数设置为8, 隐状态和词嵌入维度设置为512. 使用Adam 算法进行模型参数优化, dropout 率设定为0.1, 批次大小设置为4 096.测试阶段使用集束搜索算法进行解码, 集束宽度为4.利用IWSLT15数据集进行的小规模平行语料实验中, 本文参考了Sennrich 等[34]关于低资源环境下优化神经机器翻译效果的设置, 包括层正则化和激进dropout.3.3 评价指标本文选择大小写不敏感的BLEU 值[35]作为评价指标, 评价脚本采用大小写不敏感的multi-bleu.perl. 为了从更多角度评价译文质量, 本文另外采用RIBES 进行辅助评测. RIBES (Rank-based in-tuitive bilingual evaluation score)是另一种评测机器翻译性能的方法[36], 与BLEU 评测不同的是,RIBES 评测方法侧重于关注译文的词序是否正确.4 实验结果分析本节首先通过机器翻译评价指标对提出的模型进行量化评价, 接着通过可视化的角度对模型效果进行了分析.4.1 BLEU 值评测本文提出的方法和基准系统在不同翻译方向上的BLEU 值如表4所示, 需要注意的是, 为了应用变分信息瓶颈、实现对源端噪声信息进行去除, 最终翻译方向与基础翻译模型方向相反(具体原因见第2.3节中对表1的描述). 表4中RNNSearch 和Transformer 为分别在基线系统上, 利用基础模型进行单语语料回译, 接着将获得的组合语料再次进行训练后得到的BLEU 值. 表4同时展示了消融不同模块后的BLEU 值变化, 其中CA 、VIB 分别表示跨层注意力、变分信息瓶颈模块.通过实验结果可以观察到, 本文提出的融入跨层注意力和变分信息瓶颈方法在所有翻译方向上均取得了性能提升. 以在IWSLT15数据集上的德→英翻译为例, 相较Transformer 基准系统, 融入两种方法后提升了0.69个BLEU 值. 同时根据德英翻译任务结果可以观察到, BLEU 值的提升幅度随着语料规模的上升而减小. 出现该结果的一个可能原因是在低资源环境下, 跨层注意力的使用能够挖掘更多的表征信息、使低层表征对高层表征的影响更为直接. 而在资源丰富的环境下, 平行语料规模提升所引入的信息与跨层注意力所挖掘信息在一定程度上有所重合. 
另一个可能原因是相对于资源丰富环境, 低资源环境产生的伪平行语料占组合语料的比例更大, 变分信息瓶颈进行了更多的噪声去除操作.表 4 BLEU 值评测结果(%)Table 4 Evaluation results of BLEU (%)模型BLEUen →vi vi →en en →zh zh →en en →de (IWSLT15)de →en (IWSLT15)en →de (WMT14)de →en (WMT14)RNNSearch 26.5524.4721.1819.1525.0328.5126.6229.20RNNSearch+CA 27.0424.9521.6419.5925.3928.9427.0629.58RNNSearch+VIB 27.3525.1221.9419.8425.7729.3127.2729.89RNNSearch+CA+VIB27.83*25.61*22.3920.2726.14*29.66*27.61*30.22*△ +1.28+1.14+1.21+1.12+1.11+1.15+0.99+1.02Transformer 29.2026.7323.6921.6127.4830.6628.7431.29Transformer+CA 29.5327.0023.9521.8227.7430.9828.9331.51Transformer+VIB 29.9627.3824.3022.1328.0431.2429.1631.75Transformer+CA+VIB30.17*27.56*24.4322.3228.11*31.35*29.25*31.89*△+0.97+0.83+0.74+0.71+0.63+0.69+0.51+0.60△p <0.05注: 表示融入CA+VIB 后相较基准系统的BLEU 值提升, * 表示利用bootstrap resampling [37] 进行了显著性检验 ( )1684自 动 化 学 报48 卷。
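The training objective described in Section 2.3 of the paper above, token-level cross-entropy plus a λ-weighted KL term that pushes the latent representation z toward a standard normal prior (the paper reports λ = 10⁻³ as the best setting), can be sketched as follows. The shapes and the encoder producing mu/logvar are illustrative assumptions, not the paper's exact architecture.

```python
# Hedged sketch of a variational-information-bottleneck training loss.
import torch
import torch.nn.functional as F

def sample_z(mu, logvar):
    # reparameterisation trick: z = mu + sigma * eps
    return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

def vib_loss(logits, targets, mu, logvar, lam=1e-3):
    # logits: (batch, seq, vocab); targets: (batch, seq)
    ce = F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
    # KL( N(mu, sigma^2) || N(0, I) ), averaged over the batch
    kl = -0.5 * torch.mean(torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1))
    return ce + lam * kl
```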

USTC Blind Signal Processing, Chapter 3 (Part 2)

Equation (3-45) can be written as

$$\ell(\mathbf{W}) = \log\{\det(\mathbf{W})\} + \sum_{i=1}^{N} E_{p(x)}\big[\log g_i'(y_i)\big] \qquad (3\text{-}47)$$

II. Ordinary stochastic gradient and natural stochastic gradient of the cost function. Since

$$\frac{\partial \log|\det(\mathbf{W})|}{\partial \mathbf{W}} = (\mathbf{W}^{\mathrm T})^{-1}, \qquad \frac{\partial}{\partial \mathbf{W}}\sum_{i=1}^{N} E_{p(x)}\big[\log g_i'(y_i)\big] = -\,E_{p(x)}\big[\boldsymbol{\psi}(\mathbf{y})\,\mathbf{x}^{\mathrm T}\big]$$

Infomax: ψ(·) is determined entirely by the g_i(·); in theory the g_i(·) should be chosen according to the source pdfs, but in practical applications this requirement is not very strict.

MMI: ψ(·) is determined by k₃ and k₄, which in practical applications are estimated recursively from the observed data.

For a detailed analysis and further details of ICA algorithms based on the Infomax criterion, please consult the references listed below:

$$\ell(\mathbf{W}) = \sum_{i=1}^{N} H(y_i) - \log|\det(\mathbf{W})| \qquad (3\text{-}28)$$

$$y_i = \sum_{j=1}^{N} w_{ij}\, x_j, \qquad i, j = 1, 2, \ldots, N \qquad (3\text{-}29)$$

According to (3-23) or (3-25), H(y_i) can be estimated from the third- and fourth-order cumulants k₃(y_i) and k₄(y_i) of y_i, i.e.

$$k_3(y_i \mid k+1) = k_3(y_i \mid k) - \eta_k\big[k_3(y_i \mid k) - y_i^3(k)\big]$$
$$k_4(y_i \mid k+1) = k_4(y_i \mid k) - \eta_k\big[k_4(y_i \mid k) - y_i^4(k) + 3\big]$$

where k₃(y_i | k) and k₄(y_i | k) denote the estimates of the third- and fourth-order cumulants at time k. Using equations (3-41) to (3-44), the separation matrix W can be estimated adaptively and recursively.
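Numerically, the adaptive separation rule sketched above is usually implemented with the natural-gradient Infomax/MMI update W ← W + η (I − ψ(y) yᵀ) W. The following is a hedged batch sketch; the common ψ(y) = tanh(y) score function stands in for the g_i and cumulant-based nonlinearities discussed in the text.

```python
# Hedged sketch: natural-gradient ICA update on zero-mean mixed observations.
import numpy as np

def ica_natural_gradient(X, eta=0.01, iters=200):
    """X: (n_sources, n_samples) zero-mean mixtures; returns the separation matrix W."""
    n = X.shape[0]
    W = np.eye(n)
    for _ in range(iters):
        Y = W @ X
        psi = np.tanh(Y)                                   # score function (assumed)
        grad = (np.eye(n) - psi @ Y.T / X.shape[1]) @ W    # natural gradient of the contrast
        W += eta * grad
    return W
```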

English Essays: Artificial Intelligence and the Development of Technology

随着科技的发展人工智能英语作文全文共3篇示例,供读者参考篇1The Development of Artificial Intelligence with the Advancement of TechnologyHey there, guys! Let me share my thoughts on this super fascinating topic – the development of artificial intelligence (AI) and how it's being shaped by advancements in technology. As a student deeply intrigued by the rapid pace of innovation, I can't help but be in awe of the incredible progress we've made in this field.First off, let's start with a basic understanding of what AI really is. In simple terms, AI refers to the simulation of human intelligence processes by machines, particularly computer systems. These processes include learning, reasoning, problem-solving, perception, and even creativity. The goal of AI is to create intelligent machines that can perform tasks that typically require human intelligence.Now, the development of AI has been an ongoing journey, with its roots dating back to the 1950s. However, it's only inrecent years that we've witnessed a remarkable acceleration in AI capabilities, largely due to the advancements in computer hardware, software, and data availability.One of the key drivers of AI's growth has been the exponential increase in computing power. Thanks to Moore's Law, which predicted that the number of transistors on a microchip would double approximately every two years, we've seen a massive surge in processing power. This has enabled AI systems to crunch through massive amounts of data and perform complex calculations at lightning-fast speeds.Another critical factor has been the availability of vast amounts of data. With the rise of the internet, social media, and the Internet of Things (IoT), we're generating an unprecedented amount of data every single day. This data serves as the fuel for AI algorithms, allowing them to learn, adapt, and improve their performance over time.One of the most exciting developments in AI has been the rise of machine learning (ML) and deep learning (DL) techniques. ML algorithms have the ability to learn from data without being explicitly programmed, while DL takes this a step further by mimicking the neural networks of the human brain. These techniques have revolutionized fields like computer vision,natural language processing, and speech recognition, enabling AI systems to perform tasks that were once considered impossible.For instance, have you ever used a virtual assistant like Siri, Alexa, or Google Assistant? These AI-powered assistants can understand and respond to our voice commands, thanks to advancements in speech recognition and natural language processing. Similarly, facial recognition systems used for security and surveillance purposes rely on computer vision algorithms powered by deep learning.But AI's impact extends far beyond just consumer applications. It's transforming industries across the board, from healthcare and finance to transportation and manufacturing. For example, AI algorithms are being used to analyze medical images and assist in disease diagnosis, while self-driving cars rely on AI to navigate through complex environments.However, as with any powerful technology, AI also comes with its fair share of challenges and ethical considerations. One of the biggest concerns is the potential for job displacement, as AI systems become capable of automating various tasks traditionally performed by humans. 
There are also valid concernsaround privacy, data bias, and the potential for AI systems to be misused for malicious purposes.Despite these challenges, I firmly believe that the benefits of AI outweigh the risks, and it's up to us – students, researchers, policymakers, and the wider society – to ensure that AI is developed and deployed responsibly and ethically.Personally, I'm excited about the prospect of pursuing a career in AI or a related field. The opportunities are truly limitless, from developing innovative AI applications to conducting groundbreaking research that pushes the boundaries of what's possible.As we move forward, I believe we'll see AI becoming even more integrated into our daily lives, revolutionizing how we work, learn, and interact with the world around us. Who knows, perhaps one day we'll have AI systems that can not only assist us but also engage in deep, meaningful conversations and even exhibit genuine creativity and emotional intelligence.However, it's important to remember that AI is not a magic solution or a replacement for human intelligence. Instead, it should be viewed as a powerful tool that can augment and enhance our capabilities, allowing us to tackle challenges that were once thought to be insurmountable.In conclusion, the development of AI is a testament to the incredible power of human ingenuity and our relentless pursuit of knowledge. As technology continues to advance at a breakneck pace, we can expect AI to evolve and transform in ways we can't even imagine today. It's an exciting time to be a student, and I can't wait to see what the future holds for this fascinating field.篇2The Rise of AI: Revolutionizing English Writing and BeyondAs a student in the rapidly evolving digital age, I can't help but marvel at the incredible advancements in artificial intelligence (AI) and its profound impact on various aspects of our lives, including the realm of English writing. AI has emerged as a game-changer, offering unprecedented opportunities and challenges that are reshaping the way we perceive and approach language and communication.From humble beginnings as a theoretical concept to the sophisticated systems we have today, AI has undergone a remarkable transformation. Its ability to process and analyze vast amounts of data, recognize patterns, and generate human-like responses has opened up a world of possibilities in the field ofEnglish writing. No longer confined to the realm of science fiction, AI is now an integral part of our daily lives, assisting us in tasks ranging from spell-checking and grammar correction to content generation and translation.One of the most significant contributions of AI in English writing is its capacity to provide real-time feedback and suggestions. Gone are the days when writers had to rely solely on human editors and proofreaders. AI-powered writing assistants can now analyze our work, identify potential errors, and offer recommendations for improvement. These tools not only enhance our writing skills but also save precious time, allowing us to focus on the creative aspects of the writing process.Moreover, AI has the potential to revolutionize the way we approach writing in educational settings. Personalized learning experiences tailored to individual strengths and weaknesses can be facilitated through AI-driven adaptive learning systems. These systems can analyze a student's writing samples, identify areas for improvement, and provide customized exercises and resources to enhance their skills. 
This level of personalization was previously unimaginable, and it promises to transform the way we teach and learn English writing.Another fascinating aspect of AI in English writing is its ability to generate original content. While the debate surrounding the ethics and implications of AI-generated writing continues, it is undeniable that these systems can produce coherent and well-structured pieces on a wide range of topics. From creative writing to academic essays and even news articles, AI has the potential to augment human creativity and augment our writing capabilities.However, it is crucial to acknowledge the potential risks and limitations associated with AI in English writing. As with any emerging technology, there are valid concerns about the possibility of plagiarism, bias, and the perpetuation of misinformation. Additionally, the lack of emotional intelligence and nuanced understanding of human experiences may pose challenges in capturing the depth and complexity of certain writing genres, such as literary fiction or personal narratives.Furthermore, the rise of AI in English writing raises questions about the future of human writers and the potential impact on employment in industries that rely heavily on writing and communication. While some argue that AI will eventually replace human writers, others believe that it will create newopportunities and allow for a more symbiotic relationship between humans and machines.As a student navigating this rapidly changing landscape, I find myself both excited and apprehensive about the implications of AI in English writing. On one hand, I am amazed by the potential for AI to enhance our writing skills, provide personalized learning experiences, and augment human creativity. On the other hand, I am cognizant of the ethical considerations and potential risks that must be addressed to ensure responsible and beneficial integration of AI into the writing process.Ultimately, I believe that the key to harnessing the full potential of AI in English writing lies in striking a balance between leveraging its capabilities and maintaining a strong foundation in human creativity, critical thinking, and ethical reasoning. We must embrace AI as a powerful tool to augment our writing abilities while retaining our unique perspectives, emotional intelligence, and personal narratives.As we move forward, it is essential to approach the integration of AI in English writing with a growth mindset, continuously adapting and learning to harness its potential while mitigating its risks. Interdisciplinary collaboration betweenwriters, educators, technologists, and ethicists will be crucial in shaping the responsible development and implementation of AI in this field.In conclusion, the rise of AI in English writing is a remarkable testament to the unprecedented pace of technological advancement. While it presents both opportunities and challenges, it is our responsibility as students and future leaders to navigate this landscape with curiosity, critical thinking, and a commitment to ethical principles. By embracing AI as a powerful tool while maintaining our human essence, we can unlock new frontiers in English writing and shape a future where technology and human creativity coexist in a harmonious and enriching manner.篇3The Rise of Artificial Intelligence: Reshaping Our WorldAs technology continues its relentless march forward, one field that has captured the world's imagination like no other is artificial intelligence (AI). 
What was once confined to the realms of science fiction has now become an integral part of our everyday lives. From the virtual assistants on our smartphones tothe recommendation algorithms that curate our entertainment, AI has firmly entrenched itself in the fabric of modern society.As a student navigating this rapidly evolving landscape, I can't help but feel both excitement and trepidation at the prospect of what AI holds in store for our future. On one hand, the potential benefits are staggering – AI has the power to revolutionize fields as diverse as healthcare, education, and scientific research. On the other hand, the ethical and societal implications of this technology are profound and warrant careful consideration.One area where AI has already made significant strides is in the field of healthcare. From machine learning algorithms that can detect early signs of disease to robots assisting in complex surgeries, AI is transforming the way we approach medicine. Imagine a world where early detection and personalized treatment plans become the norm, potentially saving countless lives and reducing the burden on healthcare systems worldwide.In education, AI has the potential to revolutionize the way we learn. Intelligent tutoring systems can adapt to each student's unique learning style, providing personalized instruction and feedback. AI-powered language learning tools can help students master new languages more efficiently, breaking down barriersand fostering global communication. The possibilities are endless, and as a student, I can't help but feel excited about the prospect of learning in a more engaging and tailored manner.However, as with any transformative technology, AI also raises significant ethical and societal concerns. One of the most pressing issues is the potential for job displacement as AI systems become increasingly capable of automating tasks traditionally performed by humans. While some argue that this will lead to the creation of new jobs and industries, others fear that the pace of change may outstrip our ability to adapt, leaving many workers behind.Another concern is the potential for AI systems to perpetuate or even amplify existing biases and discrimination. If the data used to train these systems is biased or incomplete, the resulting AI models may reflect and reinforce those biases, leading to unfair and discriminatory outcomes. As AI becomes more integrated into decision-making processes, from hiring to lending decisions, it is crucial that we address these issues and strive for transparency and accountability.Furthermore, the rise of AI raises important questions about privacy and data ownership. As AI systems become more sophisticated, they require vast amounts of data to train andoperate effectively. This data is often collected from individuals without their explicit consent or understanding of how it will be used. We must strike a delicate balance between harnessing the power of data-driven AI and protecting individual privacy and autonomy.Despite these challenges, I remain cautiously optimistic about the future of AI. The potential benefits are too significant to ignore, and as a society, we have a responsibility to harness this technology in a responsible and ethical manner. This will require collaboration between researchers, policymakers, and the public to ensure that AI development is guided by principles of transparency, fairness, and accountability.As a student, I believe that education will play a crucial role in shaping the future of AI. 
We must equip ourselves with not only the technical skills to develop and deploy AI systems but also the critical thinking skills to navigate the complex ethical and societal implications of this technology. Interdisciplinary programs that combine computer science, ethics, and public policy will be essential in preparing the next generation of AI practitioners and thought leaders.Moreover, we must foster a culture of lifelong learning and adaptability. As AI continues to reshape the job market and thenature of work, we must be prepared to continuously upskill and pivot to new industries and roles. Embracing a growth mindset and developing resilience in the face of change will be key to thriving in the age of AI.In conclusion, the rise of artificial intelligence represents one of the most transformative technological shifts of our time. While the potential benefits are immense, ranging from advancements in healthcare to revolutionizing education, we must also confront the ethical and societal challenges that come with this technology. As students, we have a unique opportunity to shape the future of AI by acquiring the necessary skills, fostering critical thinking, and advocating for responsible and ethical development.The path ahead is uncertain, but one thing is clear: AI will continue to reshape our world in ways we can scarcely imagine. It is up to us to navigate this journey with wisdom, foresight, and a commitment to using this powerful technology for the betterment of humanity. Let us embrace the future with open minds and a determination to harness the potential of AI while safeguarding the values that make us human.。

Supply Chain Abbreviations
LRP: Logistics Resource Planning (abbreviation reconstructed; the source gives only the full term)
TOC: Theory of Constraints
AFR: Aggregate Forecasting and Replenishment (glossed in the source as collaborative forecasting and replenishment)
JMI: Jointly Managed Inventory
POS: Point of Sale
CIO: Chief Information Officer (reconstructed; the source retains only the gloss "chief information officer" for this entry)
CEO: Chief Executive Officer
ABC: Activity-Based Costing approach
ANN: Artificial Neural Network
QFD: Quality Function Deployment (given in the source as "quality function development")
SRM: Supplier Relationship Management
Inventory management strategy (listed in the source without an abbreviation)
CIM: Coordinated Inventory Management

Chapter 10

OEM: Original Equipment Manufacturer
FMS: Flexible Manufacturing System
CIM: Computer Integrated Manufacturing
MIS: Measuring Index System (evaluation index system)
NSM: Negotiation Selection Method
EFT: Electronic Funds Transfer (the source entry is truncated to "onic Funds Transfer")

Sample Paper: Relation Extraction in English Based on Deep Learning

《基于深度学习的英文关系体抽取》篇一High-Quality Paper on Relation Extraction Based on Deep LearningAbstract:This paper presents a novel approach for high-quality relation extraction using deep learning techniques. Relation extraction is a fundamental task in Natural Language Processing (NLP) and plays a pivotal role in various fields, including information extraction, knowledge discovery, and machine comprehension. In this paper, we detail our method, which utilizes deep learning to improve the accuracy and efficiency of relation extraction. We introduce the underlying theory, our proposed model, and the results of our empirical analysis.I. IntroductionRelation extraction is a crucial task in NLP that involves identifying and categorizing the relationships between entities in textual data. This process is essential for various applications, such as information retrieval, question answering, and knowledge graph construction. However, traditional methods of relation extraction often suffer from limited accuracy and scalability. To address these issues, we propose a novel approach based on deep learning.II. Background and Related WorkIn recent years, deep learning has achieved remarkable success in various NLP tasks. As such, several approaches have been proposedfor relation extraction using deep learning. However, these methods still suffer from certain limitations. Firstly, most of these models require a considerable amount of labeled data to train, which can be costly and time-consuming. Secondly, the accuracy of existing models may not be satisfactory for high-quality applications that require precise and comprehensive information extraction.III. Proposed ModelOur proposed model leverages the powerful capabilities of deep learning to enhance the accuracy and efficiency of relation extraction. Our model is built upon an attention-based recurrent neural network architecture, which is designed to capture the contextual information between entities and their relationships effectively. Specifically, our model uses attention mechanisms to identify the most relevant portions of textual data and generate more accurate relationship labels. Additionally, our model can learn the structure of textual data automatically and adjust its internal parameters to improve performance over time.IV. Experimental AnalysisWe conducted extensive experiments to evaluate the performance of our proposed model. We used a large-scale dataset for relation extraction that includes various types of relationships between entities in textual data. We compared our model with several state-of-the-art models in the literature and observed significant improvements in both accuracy and efficiency. Specifically, our model achieved higher accuracy rates while also reducing the overall time required for training and testing compared to previous models. Furthermore, weanalyzed the generalization ability of our model using various datasets and observed consistent performance improvements across different datasets.V. ConclusionIn this paper, we presented a novel approach for high-quality relation extraction based on deep learning techniques. Our model utilizes an attention-based recurrent neural network architecture to capture contextual information between entities and their relationships effectively. We conducted extensive experiments to evaluate the performance of our model and observed significant improvements in both accuracy and efficiency compared to previous models. 
The results demonstrate that our approach is effective in extracting precise and comprehensive relationships from textual data, thereby enhancing its application in various fields such as information retrieval, question answering, and knowledge graph construction.In future work, we plan to further optimize our model by exploring different architectures and techniques to improve its performance even further. Additionally, we aim to apply our model to more complex datasets with diverse relationships between entities to demonstrate its generalizability and scalability across different applications. Overall, we believe that our approach offers a promising solution for high-quality relation extraction in NLP tasks.Overall, the success of our proposed model is promising and provides a new direction for future research in the field of NLP. By leveraging the powerful capabilities of deep learning, we have achieved high-quality relation extraction with improved accuracy and efficiency. This notonly contributes to the advancement of NLP but also has potential applications in various fields that require precise and comprehensive information extraction.We hope that our work can inspire further research and development in the area of deep learning for relation extraction, as well as other NLP tasks. We believe that by continuously exploring and improving deep learning models, we can achieve even better performance in relation extraction and other NLP tasks, thereby advancing the field of Natural Language Processing.。
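The attention-based recurrent architecture the paper describes can be illustrated with a short sketch. This is a hedged toy implementation, not the authors' model; the vocabulary size, dimensions, and number of relation classes are placeholder assumptions.

```python
# Hedged sketch: a BiLSTM relation classifier with attention over token positions.
import torch
import torch.nn as nn

class AttnRelationClassifier(nn.Module):
    def __init__(self, vocab=10000, emb=128, hidden=128, n_relations=10):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.rnn = nn.LSTM(emb, hidden, bidirectional=True, batch_first=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.out = nn.Linear(2 * hidden, n_relations)

    def forward(self, token_ids):
        h, _ = self.rnn(self.emb(token_ids))            # (batch, seq, 2*hidden)
        scores = torch.softmax(self.attn(h), dim=1)     # attention weights over positions
        context = (scores * h).sum(dim=1)               # weighted sentence vector
        return self.out(context)                        # logits over relation types

model = AttnRelationClassifier()
logits = model(torch.randint(0, 10000, (4, 25)))
print(logits.shape)  # (4, 10)
```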

English Essays: Will Artificial Intelligence Make Our Brains Dumber?

人工智能会让人大脑变笨吗英语作文全文共3篇示例,供读者参考篇1Will Artificial Intelligence Make Our Brains Dumber?Artificial Intelligence (AI) has rapidly advanced in recent years, raising concerns about its potential impact on human intelligence and cognitive abilities. As a student, I can't help but wonder if our increasing reliance on AI-powered tools and technologies might lead to a decline in our critical thinking and problem-solving skills. While AI undoubtedly offers numerous benefits, it's crucial to examine its potential drawbacks and how it might affect our mental capabilities in the long run.To begin with, one of the primary concerns is the risk of cognitive offloading – the tendency to rely excessively on external sources, such as AI assistants or search engines, to perform tasks that our brains are capable of handling. As AI becomes more sophisticated and accessible, we might find ourselves outsourcing more and more cognitive tasks to these systems, leading to a potential atrophy of our mental faculties.For instance, instead of memorizing facts, formulas, or even basic arithmetic operations, we might become overly dependent on AI tools to retrieve and process information for us. While this convenience might seem appealing initially, it could gradually erode our ability to retain and recall knowledge independently. Consequently, our brains might become less adept at forming connections, drawing inferences, and applying critical thinking skills.Moreover, AI-generated content and solutions, while often impressive, might discourage us from engaging in the creative process and developing our own unique problem-solving approaches. When we rely too heavily on AI-generated outputs, we risk becoming passive consumers of information rather than active participants in the learning process. This could potentially stifle our ability to think outside the box, challenge assumptions, and come up with innovative ideas.However, it's important to note that AI is not inherently detrimental to human intelligence. In fact, when used judiciously and in conjunction with human input, AI can be a powerful tool for enhancing learning and cognitive development. For example, AI-powered adaptive learning platforms can personalize educational content based on individual needs and learningstyles, potentially improving our understanding and retention of complex concepts.Additionally, AI can assist in automating tedious or repetitive tasks, freeing up mental resources for more complex and creative endeavors. By offloading mundane tasks to AI systems, we can allocate our cognitive capacities toward higher-order thinking, problem-solving, and creative pursuits that truly challenge and stimulate our minds.Ultimately, the impact of AI on human intelligence will largely depend on how we choose to integrate and utilize these technologies. If we adopt a balanced approach, leveraging AI as a supportive tool while actively engaging our own cognitive abilities, we can potentially enhance our mental capacities rather than diminishing them.On the other hand, if we become overly reliant on AI and neglect to exercise our critical thinking and problem-solving skills, there is a risk of cognitive stagnation or even regression. It is our responsibility as students and lifelong learners to strike a harmonious balance between embracing the benefits of AI and nurturing our innate human intelligence.In conclusion, the question of whether AI will make our brains dumber is a complex one, with valid concerns andpotential benefits to consider. 
While the risk of cognitive offloading and overreliance on AI-generated solutions is real, AI can also serve as a powerful tool for enhancing learning and cognitive development when used judiciously.As students in an era of rapid technological advancement, it is crucial for us to cultivate a critical mindset, actively engage in the learning process, and continuously challenge ourselves to think independently and creatively. By embracing a balanced approach to AI integration, we can harness its potential while simultaneously nurturing and expanding our own cognitive abilities.Ultimately, the impact of AI on human intelligence will be determined by our collective choices and actions. It is up to us to ensure that AI remains a supportive tool that augments and enriches our mental capabilities, rather than a crutch that leads to their erosion.篇2Will AI Make Our Brains Dull?A Controversial Look at the Effects of Artificial Intelligence on Human CognitionAs technology rapidly advances, one of the most heavily debated topics is the rise of artificial intelligence (AI) and its potential impacts on humanity. Among the concerns raised is the notion that as we increasingly rely on AI to handle complex tasks, our brains may become "dull" or less capable over time. In this essay, I will explore both sides of this argument and attempt to arrive at a nuanced understanding.On the one hand, there is a valid fear that the convenience and capabilities of AI could lead to cognitive laziness and atrophy of our mental faculties. Just as physical muscles deteriorate without regular exercise, our brains require constant stimulation and problem-solving to stay sharp. With AI assistants handling everything from information lookup to complex analysis, there is a risk that we may become overly reliant on these systems and fail to exercise our own cognitive abilities.Furthermore, the very process of learning and struggling through challenges is what strengthens our neural pathways and promotes neuroplasticity – the brain's ability to adapt and rewire itself. If AI handles all the difficult tasks for us, we may miss out on these valuable opportunities for intellectual growth. Studies have shown that mentally taxing activities like learning a new language or skill can actually increase brain mass andconnectivity, while a sedentary intellectual lifestyle may contribute to cognitive decline.Proponents of this view argue that throughout history, labor-saving technologies have indeed made us lazier in certain domains. For instance, the advent of calculators and spreadsheet software has diminished our ability to perform complex mathematical calculations by hand. While this may free up cognitive resources for other pursuits, it also represents a loss of a once-prized skill. Applied to AI, this could mean atrophying of critical thinking, problem-solving, and creativity as we become overdependent on machines.On the flip side, advocates of AI argue that these technologies will actually enhance and augment our cognitive capabilities rather than diminish them. By offloading mundane or computationally intensive tasks to AI, we free up valuable brain power and attention to focus on higher-order thinking, creativity, and intellectual pursuits that machines cannot yet match.For example, an AI research assistant could rapidly synthesize vast amounts of data and surface key insights, while a human researcher applies critical analysis, contextual understanding, and creative ideation to generate truly novel concepts. 
Similarly, AI tutoring systems could providepersonalized, adaptive learning experiences, identifying and addressing individual strengths and weaknesses in a way that enhances human understanding and skill acquisition.Moreover, as AI permeates various industries, it may give rise to entirely new disciplines and modes of thinking that exercise our cognitive faculties in novel ways. Humans may need to develop enhanced skills in areas like algorithm design, data curation, and human-AI collaboration – intellectual pursuits that could stimulate our brains in ways we haven't yet experienced.Ultimately, the relationship between AI and human cognition may not be a zero-sum game. Just as the industrial revolution did not make our bodies obsolete, but rather shifted the balance of mental and physical labor, AI could redefine the types of cognitive efforts required of us while amplifying our overall intellectual potential.Another counterargument is that the fears of cognitive atrophy may be overblown, as humanity has consistently risen to the challenge of adapting to technological change throughout history. From the invention of the printing press to the rise of the internet, each new advancement has sparked concerns about diminishing intellectual capabilities. Yet, time and again, we have found ways to leverage these technologies to enhance ourknowledge and understanding, rather than succumbing to intellectual laziness.In the modern era, we have already witnessed how technologies like search engines and digital information repositories have fundamentally changed the way we acquire and process information. While rote memorization may be less necessary, we have developed new skills in areas like information evaluation, synthesis, and application. As AI progresses, we may need to further evolve our cognitive strategies, but our ability to adapt and capitalize on these tools should not be underestimated.Ultimately, the impact of AI on human cognition will likely depend on how we choose to integrate and utilize these technologies. If we approach AI as a crutch or a replacement for our own mental efforts, then the fears of cognitive dullness may indeed be realized. However, if we embrace AI as a powerful tool to augment and elevate our intellectual capacities, while still exercising and challenging our brains in new ways, then the outcome could be a renaissance of human cognition and achievement.As with any transformative technology, the responsibility lies with us to thoughtfully shape the human-AI relationship andsteer it towards positive outcomes. This may require a concerted effort in areas like education, ethics, and public policy to ensure that AI enhances rather than diminishes our cognitive abilities.In conclusion, the notion that AI will make our brains "dull" is an oversimplification of a complex issue. While there are valid concerns about cognitive atrophy and over-reliance on technology, there is also tremendous potential for AI to augment and expand our intellectual capabilities in powerful ways. The key will be striking the right balance, leveraging AI as a tool while continuing to challenge and exercise our uniquely human cognitive faculties. 
With thoughtful integration and a commitment to lifelong learning, AI could usher in a new era of human cognitive enhancement and achievement.篇3Will AI Make Our Brains Dumber?As technology continues to advance at a breakneck pace, one of the biggest concerns people have is whether artificial intelligence (AI) will make our brains "dumber" or less capable over time. With AI assistants like Siri, Alexa, and ChatGPT able to quickly provide information, solve problems, and even write essays for us, it's a valid worry that we might become overlyreliant on these tools and experience cognitive decline as a result. As a student preparing for an increasingly AI-driven world, I've spent a lot of time thinking about this issue. Here's my take.First, it's important to understand that AI is a tool, not a replacement for human intelligence. Just like a calculator doesn't make us worse at math, AI assistants aren't inherently making us dumber. In fact, they can be incredibly useful aids that augment and extend our mental capabilities in powerful ways. Need to quickly find a key fact for a research paper? Ask an AI assistant. Stuck on a tricky coding problem。

Absolute Scale Estimation of Monocular SLAM Based on Depth Prediction

Computer Engineering and Design, Vol. 42, No. 6, June 2021

Absolute Scale Estimation of Monocular SLAM Based on Depth Prediction
ZHANG Jian-bo, YUAN Liang, HE Li, RAN Teng, TANG Ding-xin
(School of Mechanical Engineering, Xinjiang University, Urumqi 830047, China)

Abstract: In view of the scale uncertainty of monocular simultaneous localization and mapping (SLAM), an approach based on a depth prediction network is proposed to estimate the absolute scale of the SLAM system. The MonoDepth convolutional neural network is used to predict the depth of monocular images, and the ORB feature points extracted from the monocular images are associated with the predicted depth values. Feature points with unreliable depth values are removed by setting a depth threshold, and the absolute scale of the monocular system is recovered. Using the real depth information of the feature points, the pose graph is optimized through bundle adjustment, which corrects scale drift and reduces the cumulative error. Comparison experiments on the outdoor KITTI data set show that the proposed method achieves higher positioning accuracy.

Key words: SLAM; depth prediction network; scale drift; absolute scale estimation; data association
CLC number: TP242; Document code: A; Article No.: 1000-7024(2021)06-1749-07; doi: 10.16208/j.issn1000-7024.2021.06.033

0 Introduction
Because monocular cameras are low in cost, widely applicable, and simple to calibrate, monocular visual simultaneous localization and mapping (SLAM) has become an important research direction for the autonomous localization of robots in unknown environments.
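The depth-association and scale-recovery step described in the abstract can be illustrated with a short sketch: attach network-predicted depths to ORB keypoints, drop points whose depth exceeds a reliability threshold, and estimate a single scale factor against the up-to-scale SLAM depths. The threshold value and input formats are illustrative assumptions.

```python
# Hedged sketch: associate predicted depths with keypoints and recover the absolute scale.
import numpy as np

def associate_depth(keypoints, depth_map, max_depth=40.0):
    """keypoints: (N, 2) pixel coords (u, v); depth_map: (H, W) predicted metric depth."""
    us, vs = keypoints[:, 0].astype(int), keypoints[:, 1].astype(int)
    depths = depth_map[vs, us]
    keep = depths < max_depth                 # discard unreliable far-field predictions
    return keypoints[keep], depths[keep]

def estimate_scale(slam_depths, predicted_depths):
    """Absolute scale as a robust ratio between metric and up-to-scale depths."""
    return np.median(predicted_depths / slam_depths)
```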

Document Classification in Cyberattack Text Based on BBNN Model

第22卷第1期信息工程大学学报Vol.22No.12021年2月Journal of Information Engineering UniversityFeb.2021㊀㊀收稿日期:2020-08-31;修回日期:2020-09-08㊀㊀基金项目:国家自然科学基金资助项目(61502528)㊀㊀作者简介:欧昀佳(1989-),男,硕士生,主要研究方向为网络安全㊂DOI :10.3969/j.issn.1671-0673.2021.01.008基于BBNN 的网络攻击文本自动化分类方法欧昀佳,周天阳,朱俊虎,臧艺超(信息工程大学,河南郑州450001)摘要:基于描述文本的网络攻击自动化分类是实现APT 攻击知识智能抽取的重要基础㊂针对网络攻击文本专业词汇多㊁难识别,语义上下文依赖强㊁难判断等问题提出一种基于上下文语义分析的文本词句特征自动抽取方法,通过构建BERT 与BiLSTM 的混合神经网络模型BBNN (BERT and BiLSTM Neural Network ),计算得到网络攻击文本的初步分类结果,再利用方差过滤器对分类结果进行自动筛选㊂在CAPEC (Common Attack Pattern Enumeration and Classifica-tion )攻击知识库上的实验结果显示,该方法的准确率达到了79.17%,相较于单一的BERT 模型和BiLSTM 模型的分类结果分别提高了7.29%和3.00%,实现了更好的网络攻击文本自动化分类㊂关键词:神经网络;APT 网络攻击;文本分类中图分类号:TP393㊀㊀㊀文献标识码:A文章编号:1671-0673(2021)01-0044-07Document Classification in Cyberattack Text Based on BBNN ModelOU Yunjia,ZHOU Tianyang,ZHU Junhu,ZANG Yichao(Information Engineering University,Zhengzhou 450001,China)Abstract :The document classification in cyberattack text is fundamental to automatic knowledge ex-traction from APT attack information.In this paper,an automatic method based on context analysis is proposed to tackle the problems rooted in cyberattack,such as having too many terminologies,be-ing hard to distinguish and classify,over-relying on context,etc.,by extracting text features in words level and sentences level respectively.This method,BBNN (BERT and BiLSTM Neural Net-work)model,synthesized BERT and BiLSTM Neural Network,can compute the preliminary classifi-cation results of cyberattack,and automatically filter the classification results of the text via vari-ance.The experiment results from the attack knowledge base of CAPEC (Common Attack Pattern Enumeration and Classification)suggests this method can reach a 79.17%accuracy,which is in-creased by 7.29%and 3.00%compared to singular BERT or BiLSTM models,and thus achieve a better automatic classification of cyberattack.Key words :neural network;cyber attack;document classification㊀㊀随着信息技术的飞速发展,网络攻击手段层出不穷,在互联网上的黑客论坛㊁技术博客以及安全厂商发布的研究报告中含有大量关于高级可持续威胁(Advanced Persistent Threat,APT)攻击的研究成果㊂这些原始的攻击技术描述信息和攻击案例的流程分析文本是安全研究人员构建网络攻击安全知识体系,深化攻击研究的重要数据来源㊂从复杂多样的攻击描述文本中抽取APT 攻击㊀第1期欧昀佳,等:基于BBNN的网络攻击文本自动化分类方法45㊀知识,构建完整的网络攻击知识图谱,可以更好地确定安全威胁程度,推测攻击者意图,拟定有针对性地防御措施㊂通过网络攻击文本分类,将攻击知识映射到ATT&CK(Adversarial Tactics,Tech-niques,and Common Knowledge)㊁CAPEC等成熟攻击知识框架,是认知网络攻击,抽取攻击知识的重要基础㊂传统的人工阅读专家判定的分类方法存在人员素质要求严苛㊁效率低下㊁成本过高等问题㊂如何自动㊁快速㊁准确地识别大量网络安全文本中攻击手段的所属类别是当前网络安全研究的新热点㊂开源的网络攻击描述文本具有来源多样㊁表达各异㊁语法不规范等特点㊂例如,技术性短文本偏重于对某一具体的攻击技术实现进行描述,而大多APT报告则通过长期跟踪着重对捕获案例的整个攻击流程进行描述㊂原始网络攻击文本的这些特点极大地增加了自动化识别与分类的难度㊂针对该问题,本文构建了一种基于新型混合神经网络模型的网络攻击文本分类方法,通过分析网络攻击文本的上下文语义,自动抽取文本词句特征,并以此作为自动化攻击判定与分类的计算依据㊂该方法利用模型输出的概率分布方差评估预测项的确定程度,对特定的低概率分类项进行随机选取预测,可以有效提高网络攻击文本分类的准确率㊂1㊀相关研究文本分类是自然语言处理的一大热门研究领域㊂早期的研究大多基于规则和统计的方法,但该类方法存在效率低下㊁人工成本高㊁泛化能力弱㊁特征提取不充分等缺点㊂神经网络能够弥补上述缺点更好地解决分类问题㊂文献[1]首先提出基于卷积神经网络的文本特征提取,并用于文本倾向性分类任务上㊂文献[2]在此基础上提出VDCNN模型,该方法采用深度卷积网络对文本局部特征进行提取,提高文本分类效果㊂但是卷积神经网络在考虑文本的上下文关系时有很大局限性,其固定的卷积核不能建模更长的序列信息,使得文本分类效果很难进一步提升㊂RNN(Recurrent Neural Network)模型不仅可以处理不同长度的文本,而且能够学习输入序列的上下文语义关系㊂双向RNN[3]可以让输出获取当前输出之前以及之后的时间步信息㊂但由于RNN是通过时间序列进行输入,随着输入的增多,RNN对很久以前信息的感知能力下降,将会产生长期依赖和梯度消失问题[4]㊂在RNN基础之上改进而来的门控制单元(GRU)[5]和长短时序记忆模型(Long Short Term 
Memory,LSTM)[6]利用门机制可以解决RNN的长期依赖和梯度消失问题㊂文献[7]利用RNN设计了3种用于多任务文本分类问题的模型㊂文献[8]提出了一种基于word2vec与LSTM模型的健康文本分类方法㊂为进一步提高文本分类的准确率,文献[9]首先提出了一种用于文档分类的分层注意力网络模型㊂该模型在单词和句子级别应用了两个不同的注意力机制,使得模型在构建文档时能够给予重要内容更大权重,同时也可以缓解RNN在捕捉文档序列信息时产生的梯度消失问题㊂文献[10]提出一种基于注意力机制的深度学习态势信息推荐模型,该模型能学习指挥员与态势信息之间的潜在关系,为用户推荐其关心的重要信息㊂在注意力机制基础上,Google团队提出了Transformer模型[11],它摒弃了常用的CNN或者RNN模型,采用Encoder-Decoder架构,对文本进行加密㊁解密处理㊂Trans-former通过利用大量原始的语料库训练,从而得到一个泛化能力很强的模型,只需微调参数进行训练,就可以将模型应用到特定的文本分类任务中[12]㊂词嵌入(word embedding)是单词的一种数值化表示方法,是机器学习方法用于文本分类的基础㊂通过此技术可以将文本中的词语转化为在向量空间上的分布式表示,并大大降低词语向量化后的维度㊂预训练模型是在具体分类任务训练前使用大量文本训练词嵌入模型,使模型通过学习一般语义规律,获得单词的特定词嵌入表达㊂文献[13]在2013年提出了word2vec词嵌入训练模型,包括CBOW模型和Skip-Gram模型[14],这两种模型可以通过学习得到高质量的词语分布式表达,能训练大量的语料且捕捉文本之间的相似性㊂但其缺点是只考虑到了文本的局部信息,未考虑整体信息㊂文献[15]提出GloVe模型,利用共现矩阵,同时考虑局部信息和整体信息㊂上述提及的词嵌入技术均属于静态词嵌入,即训练后词向量固定不变㊂但在实际应用中,同一词在不同语境下的语义是不一样的,其表达需要根据不同语境进行变化㊂针对该问题,文献[16]提出了BERT预训练模型,使用多层双向Transformer编码器对海量语料进行训练,结合所有层的上下文信息进行提取,实现了文本的46㊀信息工程大学学报㊀2021年㊀深度双向表示㊂网络攻击的描述文本相对特殊,大多采用人工分析,专家定义的方式㊂该类文本具有长度短㊁专业词汇多㊁文字特征稀疏㊁上下文依赖性强等特点㊂神经网络的优势在于对文本的特征提取上,但短文本的文字特征稀疏,单一的神经网络难以进行有效处理㊂为此,可以在此基础上引入迁移学习的思想,即利用经过预训练的深度神经网络对文本进行编码,经过这样编码的文本蕴含了在大量文本上提取的一般特征,然后再针对具体分类任务将这些特征用于神经网络的训练㊂由于具体分类任务的关注点不同,导致文本信息对分类预测的重要程度不同㊂注意力机制可以使分类模型关注到重点信息,从而有效提升模型性能㊂本文将神经网络㊁预训练和注意力机制进行有机融合,开展网络攻击分类方法研究㊂2㊀网络攻击文本分类模型网络攻击文本分类任务不同于一般的新闻文本分类任务,其存在描述范围有限㊁各类攻击关联性强㊁区分度不大且专业性词汇多的特点㊂例如:句1:Client side injection-induced buffer over-flow,this type of attack exploits a buffer overflow vul-nerability in targeted client software through injection of malicious content from a custom-built hostile serv-ice.句2:Log injection-tampering-forging,this attack targets the log files of the target host.句1㊁句2都有对注入injection的描述,但两者描述的攻击类别分别属于Manipulate Data Struc-tures类和Manipulate System Resources类㊂句子中的exploits属于专业词汇,与平时所指代的 开发㊁开拓 意义不同,这里指的是 对软件的脆弱性利用 ㊂这导致了普通的静态预训练模型在该任务上性能不佳㊂针对以上问题,本文提出从词语级和句子级分别对攻击文本进行特征提取㊂利用BERT模型的动态编码机制对输入攻击文本进行动态编码表示,使计算出的句子向量能包含具体攻击描述文本中的语义信息,解决文本中存在的语义偏移和专业词汇识别难等问题㊂同时,采用BiLSTM模型提取攻击文本中词与词的前后关联关系特征,学习的词向量利用注意力机制强化特殊词汇对于攻击文本分类的权重比值,解决文本中存在的描述类似导致区分度小等问题㊂然后,结合以上两个模型学习到的特征,分别计算出攻击文本分类的预估值㊂继而对获取的两类预估值进行分析判定,求出最后的分类预测值㊂如图1所示,BBNN攻击文本分类模型分为4层,输入层㊁模型计算层㊁预值分析层㊁输出层㊂下面将依次对模型的各层结构进行详细介绍㊂图1㊀BBNN攻击文本分类模型输入层的数据处理要经过数据输入㊁分词㊁向量计算3个阶段㊂对于要输入给BiLSTM模块的数据只需经过简单的分词和词嵌入处理㊂而对于要输入给BERT模块的数据则需要由词向量㊁分段向量和位置向量组成㊂BERT模块的分词阶段主要包含两个部分:①将输入的文本进行wordpiece 处理,wordpiece处理是将词划分为更小的单元,如图1中rapport一词就被划分为rap和##port,这样做的目的是压缩了词典大小,并且增大了可表示词数量;②在句子首尾分别嵌入[CLS]和[SEP]两个特殊token㊂对于BERT模块来说,由于文本分类任务的输入对象是一个单句,所以分段向量E A全部以0进行填充㊂位置向量(E1至E7)则通过正弦余弦编码来获取,表征该句子序列的位置信息㊂将上述3部分向量相加得到BERT模块的输入向量㊂模型计算层包含BiLSTM计算模块和BERT 计算模块㊂这两个模块需要进行单独训练,再加入到整体的BBNN攻击文本分类模型结构中,如图2所示㊂从图2可以看到,BiLSTM模块包含一个前向LSTM神经网络和一个后向LSTM神经网络,分别学习输入序列中各个词的前后信息㊂该模块接收输入层编码的词嵌入向量Embedding作为输入㊂E代表经过词嵌入处理的词向量,t代表第t个时间步㊂hңt是第t个时间步的前向隐藏状态编码,它由上一个时间步的隐藏状态编码hңt-1和当前输㊀第1期欧昀佳,等:基于BBNN 的网络攻击文本自动化分类方法47㊀图2㊀BiLSTM 模块结构入向量E t 计算得出㊂h ѳt 是第t 个时间步的后向隐藏状态编码,由下一个时间步隐藏状态编码h ѳt-1和当前输入向量E t 计算得出㊂连接前向后向隐藏状态向量得到含有上下文信息的t 时刻隐藏状态向量h =[h ңt ,h ѳt ]㊂综合各个时间步隐藏状态向量得到文本词向量矩阵H ={h 0,h 1 ,h n }作为输出,传递给实现注意力机制的Attention 网络㊂Atten-tion 网络接收文本向量矩阵H 作为输入,为每个输入值分配不同的权重ω和偏置项b ,计算出文本中每个单词的权重μ,得到权重矩阵U ㊂W 是随机初始化生成的矩阵:u t =tanh(h t ∗ωt +b )(1)U =tanh(H ∗W +b )(2)V 是随机初始化生成的一维矩阵,使用softmax 函数计算出每个时间步对当前时刻的权值向量αt ,得到得分矩阵A :αt =e u t∗Vðt e u t∗V(3)A =Softmax(UV )(4)将每个得分权重和对应的隐藏状态编码进行加权求和,得到Attention 网络的输出s :s =ðnt =0αt ∗h t(5)最后将得到的向量s 进行softmax 归一化处理,计算各个分类的概率情况作为BiLSTM 模块的输出㊂BERT 模块接收输入层的输出向量作为输入,其模型结构如图3所示㊂双向Transformer 模型能利用内部的self_attetnion 机制计算出该句其他词和当前编码词的语义关联程度,从而增强上下文语义信息的提取㊂词嵌入向量通过双向Transformer 模型结构进行编码学习,其第一位词嵌入向量(即[CLS]对应词向量编码)包含该句所有信息㊂抽取第一位词嵌入向量作为该句子的向量表示,利用tanh 函数激活后做softmax 
The BERT module takes the output vectors of the input layer as its input; its structure is shown in figure 3. The bidirectional Transformer uses its internal self-attention mechanism to compute how strongly the other words of the sentence are semantically related to the word currently being encoded, which strengthens the extraction of contextual semantic information. The word embeddings are encoded by the bidirectional Transformer, and the embedding at the first position (the vector corresponding to [CLS]) contains the information of the whole sentence. This first embedding is extracted as the representation of the sentence; after a tanh activation and a softmax, the class probability distribution of the sentence is obtained as the output of the BERT module.

Figure 3: Structure of the BERT module

The input of the analysis layer consists of the probability distributions predicted for the text by the BiLSTM module and the BERT module. The analysis layer works in two stages: ① the prediction distribution of the model with the higher certainty is selected as the final distribution; ② within the final distributions, the predictions with the higher uncertainty are singled out, and for those the final output label is drawn at random from the two largest values of the distribution.

In the first stage, the dispersion of the two probability distributions reflects how certain each model is about its prediction: the more dispersed the distribution, the more certain the model is about the classification. Conversely, the less dispersed the distribution, the closer the class probabilities are to each other and the vaguer the decision. By comparing the probability distributions produced by the two models one can therefore tell which model is more certain about this text, and the more certain distribution is taken as the final probability distribution output by the BBNN attack text classification model, as in algorithm 1.

Algorithm 1: Prediction-certainty analysis based on the probability distributions
Input: the class probability distributions P_bert and P_lstm predicted by the BERT and BiLSTM modules
Output: the final probability distribution P_out
1  v_bert = GetVariance(P_bert)
2  v_lstm = GetVariance(P_lstm)
3  if v_bert > v_lstm
4    P_out = P_bert
5  else
6    P_out = P_lstm
7  return P_out

Steps 1 and 2 compute the variances of the probability distributions predicted by the BERT module and the BiLSTM module; steps 3 to 6 compare the variances and select the distribution with the higher certainty; step 7 outputs the final probability distribution.

In the second stage, the predictions in the low-probability, low-variance region of the final output distribution are considered the most uncertain: the predicted values of such a distribution are close to each other, and the class with the largest predicted value is not necessarily the one most likely to be the true class. The selection range can therefore be widened, and the final prediction is drawn at random among the relatively high predicted values. Here a random choice between the two best values is used: thresholds define the random region, and for the predictions whose variance and probability fall into that region the label is chosen at random between the two largest values, as in algorithm 2.

Algorithm 2: Random selection for low-certainty predictions based on the variance and the probability values
Input: the predicted probability distribution P_in, the probability threshold α and the variance threshold β
Output: the final predicted class label
1  v = GetVariance(P_in)
2  P_max1 = Max(P_in(x_i))
3  if P_max1 < α and v < β
4    P_max2 = Max(P_in(X = x_i) - P_max1)
5    P_final = Random(P_max1, P_max2)
6  else
7    P_final = P_max1
8  x_out = Classify(P_out)
9  return x_out

Step 1 obtains the variance v of the input distribution; step 2 obtains its largest probability P_max1; step 3 tests whether the largest probability and the variance are below the thresholds α and β respectively; steps 4 and 5 state that, if the condition of step 3 holds, the second largest probability is determined and the final probability P_final is chosen at random between the largest and the second largest value; steps 6 and 7 state that otherwise the largest probability is assigned to P_final; step 8 determines the class label x_out corresponding to the chosen probability; step 9 outputs the label.
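The selection logic of algorithms 1 and 2 above can be condensed into a few lines. The sketch below is our own illustration (numpy assumed); the thresholds α and β are passed in as parameters, with defaults following the settings reported later in the experiments:

import numpy as np

def bbnn_predict(p_bert, p_lstm, alpha=0.75, beta=0.06, rng=np.random):
    # Algorithm 1: keep the distribution with the larger variance (more certain)
    p_out = p_bert if np.var(p_bert) > np.var(p_lstm) else p_lstm
    best = int(np.argmax(p_out))
    # Algorithm 2: in the low-probability, low-variance region, draw the label
    # at random from the two most probable classes
    if p_out[best] < alpha and np.var(p_out) < beta:
        second = int(np.argsort(p_out)[-2])
        return int(rng.choice([best, second]))
    return best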
3 Experiments

3.1 Experimental Setup

The experiments use the attack class description texts of the Common Attack Pattern Enumeration and Classification project (CAPEC) as the data set; it contains attack definitions, attack examples, attack prerequisites and other information. From the viewpoint of the attack mechanism, CAPEC currently divides all attacks into 9 basic classes [17] (as of 30 September 2019).

The experimental data were obtained by extracting and labeling the attack names, attack definitions and example descriptions of the data set, which yielded 1452 samples. The sample distribution is shown in figure 4. Among the 9 CAPEC classes, Collect and Analyze Information has the most samples, because several steps of a network attack involve collecting and analyzing information, so its share is the largest. Employ Probabilistic Techniques and Manipulate Timing and State have fewer than 50 samples each and the smallest share, which indicates that these two attack classes are used in narrower domains.

Figure 4: Sample distribution

The samples were split at random, with 70% used for training, 10% for validation and 20% for testing. The experimental environment is listed in table 1. To verify the effectiveness of the BBNN model, the following models are compared: ① a BiLSTM + Attention model fine-tuned on word embeddings obtained from the pre-trained GloVe model; ② a BERT model fine-tuned on the data; ③ the BBNN model without random selection; ④ the BBNN model with threshold-based random selection.

Table 1: Experimental environment
Item: Version
Operating system: Linux Mint 19.3 Cinnamon 4.4.5
Processor: 2.6 GHz 18-core Intel Core i9-7980XE
Memory: 125.6 GiB
Graphics card: NVIDIA Corporation GV100 [TITAN V]
Python: 3.6
TensorFlow: Gpu-2.2.0
Numpy: 1.18.1
Pandas: 1.0.3
Keras: 2.3.1
Matplotlib: 3.1.3

The GloVe pre-trained model used in the experiments is the ready-made glove.6B.300d, i.e. the encoded word embeddings have 300 dimensions; the number of LSTM hidden units is set to 64 and the attention network has 16 hidden units. The BERT pre-trained structure is uncased_L-12_H-768_A-12 released by Google, and the word vectors produced by this BERT structure have 768 dimensions. To avoid over-fitting, the dropout after the outputs of the BiLSTM and BERT models is set to 0.5, and the maximum sentence length of the models is 512. The thresholds of the random-selection region of the BBNN model are set to 0.75 for the probability term and 0.06 for the variance term, i.e. the best-two random choice is applied to predictions whose probability is below 0.75 and whose variance is below 0.06. Because the random selection is stochastic, the evaluation scores of the BBNN model with random prediction are the averages of 100 predictions and are used as the final reference values.

3.2 Results and Analysis

The variances of the prediction distributions and the predicted probabilities of the models on the test set are shown in figure 5: figure 5(a) shows the predictions of the BERT model, figure 5(b) those of the BiLSTM model, figure 5(c) those of the BBNN model without random selection in the low-hit region, and figure 5(d) those of the BBNN model with random selection in the low-hit region. The horizontal axis is the predicted probability and the vertical axis the variance of the prediction distribution; dots mark correctly classified texts and crosses mark misclassified ones. Comparing figures 5(a) and 5(b), the variance and probability of most wrong predictions are smaller than those of the correct predictions; the correct predictions concentrate in the region of high variance and high probability, and the smaller the variance, the smaller the probability. Comparing figure 5(c) with figures 5(a) and 5(b) shows that after the BBNN selection the number of wrong predictions in the low-variance, low-probability region drops markedly. Comparing figures 5(c) and 5(d), setting thresholds and guessing at random in the low-probability, low-variance region raises the hit rate of that region.

Figure 5: Distribution of the models' predictions

The performance of the models on the test set is given in table 2; the BBNN model outperforms the other models on every metric. Compared with the BERT and the BiLSTM + Attention model, the selection mechanism of the BBNN model improves the accuracy by 7.29% and 3.00% respectively; the F1 score rises by 7.62% and 1.73%, the precision by 8.29% and 1.72%, and the recall by 6.22% and 1.90%.

Table 2: Comparison of the prediction results on the test set
Model | Accuracy (%) | F1 (%) | Precision (%) | Recall (%)
BERT | 71.88 | 72.05 | 72.70 | 72.94
BiLSTM | 76.17 | 77.94 | 79.27 | 77.26
BBNN | 78.52 | 78.90 | 79.92 | 78.69
BBNN (Random) | 79.17 | 79.67 | 80.99 | 79.16

In the experiments the BBNN model made 100 predictions with random selection; the results are shown in figure 6, where the horizontal axis is the prediction run and the vertical axis the percentage. Among the four metrics, the best run achieved an accuracy of 80.46%, a precision of 82.21%, an F1 of 80.63% and a recall of 80.08%; the worst run achieved an accuracy of 77.73%, a precision of 79.45%, an F1 of 78.33% and a recall of 78.14%. Even the worst run beats the single-model predictions on every metric. The random-selection mechanism makes the metrics fluctuate within about 3%, so a certain instability remains.

Figure 6: Evaluation results of the 100 random predictions

4 Conclusion

The network contains a large number of attack description texts, and classifying them quickly helps researchers distill knowledge from raw data and form a structured body of knowledge. By analyzing network attack description texts, this paper proposes the BBNN hybrid selection model for automatically identifying the attack technique described in a network attack text. Unlike traditional machine learning models, it learns text features at the word level and at the sentence level, and its selection mechanism yields a more certain predicted class. The experiments show that the BBNN model identifies attack texts best, but the model is more complex: training and prediction take longer than with the other models, more resources are consumed, and the predictions show a certain instability. Future work will enlarge the data set to improve sample coverage, and will simplify and optimize the model structure in a targeted way to raise the efficiency and stability of the attack text classification method.

References:
[1] KIM Y. Convolutional neural networks for sentence classification[C]//Empirical Methods in Natural Language Processing. Doha, Qatar, 2014: 1746-1751.
[2] CONNEAU A, SCHWENK H, BARRAULT L, et al. Very deep convolutional networks for text classification[C]//Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics. Valencia, Spain, 2017: 193-207.
[3] LI Yang, DONG Hongbin. Text sentiment analysis based on feature fusion of CNN and BiLSTM networks[J]. Journal of Computer Applications, 2018, 38(11): 29-34.
[4] BENGIO Y, SIMARD P, FRASCONI P. Learning long-term dependencies with gradient descent is difficult[J]. IEEE Transactions on Neural Networks, 1994, 5(2): 157-166.
[5] CHO K, VAN MERRIENBOER B, GULCEHRE C, et al. Learning phrase representations using RNN encoder-decoder for statistical machine translation[C]//2014 Conference on Empirical Methods in Natural Language Processing. Doha, Qatar, 2014: 1724-1734.
[6] HOCHREITER S, SCHMIDHUBER J. LSTM can solve hard long time lag problems[C]//Advances in Neural Information Processing Systems, 1997: 473-479.
[7] LIU P, QIU X, HUANG X. Recurrent neural network for text classification with multi-task learning[C]//International Joint Conference on Artificial Intelligence. New York, 2016: 2873-2879.
[8] ZHAO Ming, DU Huifang, DONG Cuicui, et al. Diet health text classification based on word2vec and LSTM[J]. Transactions of the Chinese Society for Agricultural Machinery, 2017, 48(10): 202-208.
[9] YANG Z, YANG D, DYER C, et al. Hierarchical attention networks for document classification[C]//Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. San Diego, 2016: 1480-1489.
[10] ZHOU Chunhua, GUO Xiaofeng, SHEN Jianjing, et al. An attention-based deep learning model for recommending situation information[J]. Journal of Information Engineering University, 2019, 20(5): 597-603.
[11] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Advances in Neural Information Processing Systems, 2017: 5998-6008.
[12] QIU X, SUN T, XU Y, et al. Pre-trained models for natural language processing: a survey[EB/OL]. [2020-08-24]. https://arxiv.org/abs/2003.08271.
[13] MIKOLOV T, CHEN K, CORRADO G, et al. Efficient estimation of word representations in vector space[EB/OL]. [2013-09-07]. arXiv preprint arXiv:1301.3781, 2013.
[14] MIKOLOV T, SUTSKEVER I, CHEN K, et al. Distributed representations of words and phrases and their compositionality[C]//Advances in Neural Information Processing Systems, 2013: 3111-3119.
[15] PENNINGTON J, SOCHER R, MANNING C D. GloVe: global vectors for word representation[C]//Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Doha, Qatar, 2014: 1532-1543.
[16] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[C]//2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Minneapolis, USA, 2019: 4171-4186.
[17] CAPEC TEAM. Schema documentation[EB/OL]. (2019-09-30). [2020-06-01]. https://capec.mitre.org/documents/schema/index.html.
(Editor: GAO Mingxia)

A Pairs Trading Strategy Based on Deep Learning
叶映彤  蔡熙腾  李雅妮


Pairs trading is a classic quantitative trading strategy whose basic idea goes back to the "sister stocks" strategy of the legendary Wall Street trader Jesse Livermore in the 1920s.

Its key point, and also its main difficulty, is to find highly correlated stock pairs and the quantitative relation that holds between the prices of the two stocks [1].

At present there are four main traditional methods in this field: the correlation-coefficient method, the cointegration method proposed by Vidyamurthy in 2004 [2], the stochastic spread method proposed by Elliott et al. in 2005 [3], and the minimum-distance method proposed by Gatev et al. in 2006 [4].
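As a small illustration of the first of these traditional methods (not part of the original article; the data layout and the threshold are assumptions), candidate pairs can be screened by the correlation of their log prices:

import numpy as np

def correlated_pairs(prices, threshold=0.95):
    # prices: dict mapping ticker -> numpy array of closing prices on the same dates
    tickers = sorted(prices)
    pairs = []
    for i in range(len(tickers)):
        for j in range(i + 1, len(tickers)):
            a, b = tickers[i], tickers[j]
            corr = np.corrcoef(np.log(prices[a]), np.log(prices[b]))[0, 1]
            if corr >= threshold:
                pairs.append((a, b, corr))
    return sorted(pairs, key=lambda p: -p[2])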

These methods, however, share a common limitation: they all rely on the assumption that the price spread of every stock pair follows the postulated pattern.

In reality the price relation between two stocks can take many different forms and cannot be described by a single model.

The pairs found by these methods are therefore likely to be only a small subset of those that exist.

Moreover, the traditional methods can usually only detect linear relations between two stocks and are largely powerless against nonlinear relations among several stocks.

Deep learning is an emerging class of multi-layer neural network learning algorithms; because it alleviates the local-minimum problem of traditional training algorithms, it has attracted wide attention in the machine learning community [5].

Deep learning combines low-level features into more abstract high-level representations (attributes, categories or features) in order to discover distributed feature representations of the data and to approximate complex high-dimensional nonlinear functions.

Replacing the traditional methods of pairs trading with deep learning techniques, both for finding highly correlated stock pairs and for finding the quantitative relation between their prices, helps to uncover more complex and better hidden nonlinear profit opportunities, and can easily be extended to searching for arbitrage opportunities among a large number of securities.

This article uses the stacked autoencoder from deep learning as the basic model and, following the basic idea of pairs trading, designs a quantitative trading strategy that automatically mines arbitrage opportunities from the correlations between stocks and realizes profits by buying or selling the stocks.
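A minimal sketch of the kind of stacked autoencoder the article refers to is shown below. It is purely illustrative, not the authors' model; the layer sizes and training settings are assumptions, and for brevity it is trained end to end rather than layer by layer as in classical stacked autoencoders:

from tensorflow import keras
from tensorflow.keras import layers

def build_autoencoder(input_dim, hidden_dims=(64, 16)):
    # Encoder: progressively narrower dense layers; decoder mirrors the encoder.
    inputs = keras.Input(shape=(input_dim,))
    x = inputs
    for d in hidden_dims:
        x = layers.Dense(d, activation="relu")(x)
    encoded = x
    for d in reversed(hidden_dims[:-1]):
        x = layers.Dense(d, activation="relu")(x)
    outputs = layers.Dense(input_dim, activation="linear")(x)
    autoencoder = keras.Model(inputs, outputs)
    encoder = keras.Model(inputs, encoded)
    autoencoder.compile(optimizer="adam", loss="mse")
    return autoencoder, encoder

# Hypothetical usage: fit on windows of normalized returns of candidate stocks and
# use the encoder output to measure how closely two stocks move together.
# autoencoder, encoder = build_autoencoder(input_dim=window_length)
# autoencoder.fit(X, X, epochs=50, batch_size=32)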

In addition, the article uses backtesting on historical data to verify the practical value of this strategy.

1 The Pairs Trading Strategy Based on Deep Learning
1.1 Basic Idea
The prices of different stocks can be highly correlated, often rising and falling together.

How to pronounce "grey" correctly in English


Everyone can say the word for grey in Chinese, but can you pronounce it correctly in English? Here is the correct English pronunciation of grey.

The English pronunciation of grey is shown by the phonetic symbols: gray, British [greɪ], American [ɡre].
adj. grey, greyish; v. to turn grey, to age; n. the colour grey.

All cats are grey in the dark. (At night it is hard to tell good from bad, beautiful from ugly.)

The ceiling was grey and cracked.

His hair has greyed a lot.

The lady in grey is my teacher.

A situational dialogue with "gray": making a phone call.
A: Hello. Can I speak to Alice, please?
B: Hold on, please.
A: Thank you.
B: Sorry, but she's out.
A: Would you tell her Tom Gray called?
B: I'd be glad to.

Bilingual example sentences with grey:
1. The high light-fastness series of reactive dyes produced by the Korean company Kyungin has the three primary colours K-HL red, yellow and blue, plus two single colours with high light fastness, grey and purple, five varieties in all.
2. For a given type of ink, the CMY values that mix to grey are constant, i.e. the grey-balance proportions of the ink are fixed; once the grey balance of the ink has been measured, images can be corrected according to it, so that the images used for business cards and membership cards reproduce colours accurately after adjustment, remedying the shortcomings of this type of ink.


arXiv:cond-mat/15319v1 [cond-mat.dis-nn] 16 May 21
LU TP 00-19

An Information-Based Neural Approach to Constraint Satisfaction

Henrik Jönsson and Bo Söderberg
Complex Systems Division, Department of Theoretical Physics, Lund University, Sölvegatan 14A, S-223 62 Lund, Sweden
http://www.thep.lu.se/complex/

To appear in Neural Computation

Abstract

A novel artificial neural network approach to constraint satisfaction problems is presented. Based on information-theoretical considerations, it differs from a conventional mean-field approach in the form of the resulting free energy. The method, implemented as an annealing algorithm, is numerically explored on a testbed of K-SAT problems. The performance shows a dramatic improvement to that of a conventional mean-field approach, and is comparable to that of a state-of-the-art dedicated heuristic (Gsat+Walk). The real strength of the method, however, lies in its generality – with minor modifications it is applicable to arbitrary types of discrete constraint satisfaction problems.

1 Introduction

In the context of difficult optimization problems, artificial neural networks (ANN) based on the mean-field approximation provide a powerful and versatile alternative to problem-specific heuristic methods, and have been successfully applied to a number of different problem types (Hopfield and Tank 1985; Peterson and Söderberg 1998).

In this paper, an alternative ANN approach to combinatorial constraint satisfaction problems (CSP) is presented. It is derived from a very general information-theoretical idea, which leads to a modified cost function as compared to the conventional mean-field based neural approach.

A particular class of binary CSP that has attracted recent attention is K-SAT (Papadimitriou 1994; Du et al. 1997); many combinatorial optimization problems can be cast in K-SAT form. We will demonstrate in detail how to apply the information-based ANN approach, to be referred to as INN, to K-SAT as a modified mean-field annealing algorithm.

The method is evaluated by means of extensive numerical explorations on suitable testbeds of random K-SAT instances. The resulting performance shows a substantial improvement as compared to that of the conventional ANN approach, and is comparable to that of a good dedicated heuristic – Gsat+Walk (Selman et al. 1994; Gu et al. 1997).
2 K-SAT

A CSP amounts to determining whether a given set of simple constraints over a set of discrete variables can be simultaneously fulfilled. Most heuristic approaches to a CSP attempt to find a solution, i.e. an assignment of values to the variables consistent with the constraints, and are hence incomplete in the sense that they cannot prove unsatisfiability. If the heuristic succeeds in finding a solution, satisfiability is proven; a failure, however, does not imply unsatisfiability.

A commonly studied class of binary CSP is K-SAT. A K-SAT instance is defined as follows: For a set of N Boolean variables x_i, determine whether an assignment can be found such that a given Boolean function U evaluates to True, where U has the form

U = (a_11 OR a_12 OR ... OR a_1K) AND (a_21 OR ... OR a_2K) AND ... AND (a_M1 OR ... OR a_MK),   (1)

i.e. U is the Boolean conjunction of M clauses, indexed by m = 1...M, each defined as the Boolean disjunction of K simple statements (literals) a_mk, k = 1...K. Each literal represents one of the elementary Boolean variables x_i or its negation ¬x_i. For K = 2 we have a 2-SAT problem; for K = 3 a 3-SAT problem, etc. If the clauses are not restricted to have equal length the problem is referred to as a satisfiability problem (SAT).

There is a fundamental difference between K-SAT problems for different values of K. While a 2-SAT instance can be exactly solved in a time polynomial in N, K-SAT with K >= 3 is NP-complete. Every K-SAT instance with K > 3 can be transformed in polynomial time into a 3-SAT instance (Papadimitriou 1994). In this paper we will focus on 3-SAT.

3 Conventional ANN Approach

3.1 ANN Approach to CSP in General

In order to apply the conventional mean-field based ANN approach as a heuristic to a Boolean CSP problem, the latter is encoded in terms of a non-negative cost function H(s) in terms of a set of N binary (±1) spin variables, s = {s_i, i = 1, ..., N}, such that a solution corresponds to a combination of spin values that makes the cost function vanish.

The cost function can be extended to continuous arguments in a unique way, by demanding it to be a multi-linear polynomial in the spins (i.e. containing no squared spins). Assuming a multi-linear cost function H(s), one considers mean-field variables (or neurons) v_i ∈ [−1, 1], approximating the thermal spin averages <s_i> in a Boltzmann distribution P(s) ∝ exp(−H(s)/T). They are defined by the mean-field equations

v_i = tanh(u_i / T),   (2)
u_i = −∂H(v)/∂v_i.   (3)

These equations characterize local minima of the mean-field free energy

F(v) = H(v) − T S(v),   (4)

where S(v) is the spin entropy,

S(v) = −Σ_i [ (1 + v_i)/2 log((1 + v_i)/2) + (1 − v_i)/2 log((1 − v_i)/2) ].   (5)

The conventional ANN algorithm consists in solving the mean-field equations (2, 3) iteratively, combined with annealing in the temperature. A typical algorithm is described in figure 1.

• Initiate the mean-field spins v_i to random values close to zero, and T to a high value.
• Repeat the following (a sweep), until the mean-field variables have saturated (i.e. become close to ±1):
  – For each spin, calculate its local field from (3), and update the spin according to (2).
  – Decrease T slightly (typically by a few percent).
• Extract the resulting solution candidate, using s_i = sign(v_i).

Figure 1: A mean-field annealing ANN algorithm.

3.2 Application to K-SAT

When applying the ANN approach to K-SAT the Boolean variables are encoded using ±1-valued spin variables s_i, i = 1...N, with s_i = +1 representing True, and s_i = −1 representing False. In terms of the spins, a suitable multi-linear cost function H(s) is given by the following expression,

H(s) = Σ_{m=1}^{M} Π_{i ∈ M_m} (1 − C_mi s_i)/2,   (6)

where M_m denotes the set of indices of the variables appearing in clause m, and C_mi = +1 (−1) if x_i appears unnegated (negated) in the clause; each term then vanishes unless the corresponding clause is unsatisfied.

In the annealing algorithm (figure 1), the temperature T is initiated at a high value, and then slowly decreased (annealing), while a solution to (2, 3) is tracked iteratively. At high temperatures there will be a stable fixed point with all neurons close to zero, while at a low temperature they will approach ±1 (the neurons have saturated) and an assignment can be extracted.
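In code, the scheme of figure 1 applied to the cost function (6) might look like the following minimal sketch. It is not the authors' implementation; the clause representation, parameter defaults and stopping rule are our own assumptions:

import math
import random

def local_field(i, clauses_at, v):
    # u_i = -dH/dv_i for the multilinear cost (6): every clause m containing i
    # contributes (C_mi/2) * prod_{j in M_m, j != i} (1 - C_mj v_j)/2
    u = 0.0
    for clause in clauses_at[i]:
        prod, c_i = 1.0, 0
        for j, c in clause:              # clause = [(variable index, C_mj = ±1), ...]
            if j == i:
                c_i = c
            else:
                prod *= 0.5 * (1.0 - c * v[j])
        u += 0.5 * c_i * prod
    return u

def unsatisfied(clauses, s):
    # number of unsatisfied clauses for a ±1 assignment, i.e. H(s) of (6)
    return sum(all(s[j] == -c for j, c in clause) for clause in clauses)

def ann_anneal(n, clauses, t_start=3.0, rate=0.99, t_stop=0.1):
    v = [random.uniform(-0.01, 0.01) for _ in range(n)]
    clauses_at = [[] for _ in range(n)]
    for clause in clauses:
        for j, _ in clause:
            clauses_at[j].append(clause)
    t = t_start
    while t > t_stop:
        for i in range(n):                           # one sweep, eqs. (2) and (3)
            v[i] = math.tanh(local_field(i, clauses_at, v) / t)
        s = [1 if x >= 0 else -1 for x in v]
        if unsatisfied(clauses, s) == 0:
            return s                                 # solution found
        t *= rate                                    # anneal
    return [1 if x >= 0 else -1 for x in v]          # best-effort assignment

Each sweep updates every neuron from its local field and then lowers the temperature by a few percent, as prescribed in figure 1.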
For the K-SAT cost function (6) the local field u_i in (3) is given by

u_i = Σ_{m: i ∈ M_m} (C_mi / 2) Π_{j ∈ M_m, j ≠ i} (1 − C_mj v_j)/2,   (7)

which, due to the multi-linearity of H, does not depend on v_i; this lack of self-coupling is beneficial for the stability of the dynamics.

4 Information-Based ANN Approach: INN

4.1 The Basic Idea

For problems of the CSP type, we suggest an information-based neural network approach, based on the idea of balance of information, considering the variables as sources of information, and the constraints as consumers thereof. This suggests constructing an objective function (or free energy) F of the general form

F = const. × (information demand) − const. × (available information),   (8)

that is to be minimized. The meaning of the two terms can be made precise in a mean-field-like setting, where a factorized artificial Boltzmann distribution is assumed, with each Boolean variable having an independent probability to be assigned the value True. We will give a detailed derivation below for K-SAT. Other problem types can be treated in an analogous way. We will refer to this type of approach as INN.

4.2 INN Approach to K-SAT

Here we describe in detail how to apply the general ideas above to the specific case of K-SAT. The average information resource residing in a spin is given by its entropy,

S(s_i) = −P(s_i = 1) log P(s_i = 1) − P(s_i = −1) log P(s_i = −1),   (9)

where P are probabilities. If the spin is completely random, P(s_i = 1) = P(s_i = −1) = 1/2 and the entropy takes its maximal value log 2; for a fully determined spin it vanishes. The information demanded by a clause m is taken as the negative logarithm of its probability of being satisfied,

I_m = −log(1 − P_m^unsat),   (10)

where P_m^unsat denotes the probability that clause m is unsatisfied. With

P(s_i = ±1) = (1 ± v_i)/2   (11)

in terms of mean-field variables v_i = <s_i> ∈ [−1, 1], the probabilities used above for the clauses become

P_m^unsat = Π_{i ∈ M_m} (1 − C_mi v_i)/2,   (12)

and the total information demand of the constraints is

I(v) = −Σ_m log( 1 − Π_{i ∈ M_m} (1 − C_mi v_i)/2 ).   (13)

We now have the necessary prerequisites to define an information-based free energy, which we choose as F(v) = I(v) − T S(v) (in analogy with ANN), which is to be minimized. Demanding that F have a local minimum with respect to the mean-field variables yields equations similar to the mean-field equations (2, 3), but with H(v) replaced by I(v):

u_i = −∂I(v)/∂v_i.   (14)
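For comparison with the multilinear cost H of (6), the clause information I of (13) can be evaluated directly. The sketch below is only an illustration with our own names, reusing the clause representation of the previous sketch:

import math

def clause_unsat_prob(clause, v):
    # P_m^unsat = prod_{i in M_m} (1 - C_mi v_i)/2, cf. (12)
    p = 1.0
    for j, c in clause:
        p *= 0.5 * (1.0 - c * v[j])
    return p

def information(clauses, v):
    # I(v) = -sum_m log(1 - P_m^unsat), eq. (13)
    total = 0.0
    for clause in clauses:
        p = clause_unsat_prob(clause, v)
        total += float("inf") if p >= 1.0 else -math.log(1.0 - p)
    return total

Unlike H, which never exceeds the number of clauses, I diverges as soon as one clause has all its neurons saturated with the wrong sign, which is what concentrates the INN dynamics on the few critical clauses.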
temperature up to10sweeps are allowed in order for the neurons to converge,as signalled by the maximal change in value for a single neuron being less than0.01.At every tenth temperature value,the cost function is evaluated using the signs of the mean-field variables,s i=sign(v i);if this vanishes, a solution is found and the algorithm exits.If no solution has been found when the temperature reaches a certain lower bound(set to0.1),the algorithm also exits;at that temparature,most neurons typically will have stabilized close to±1(or occasionally0). Neurons that wind up at zero are those that are not needed at all or equally needed as ±1.5.3.2INNFor the INN approach,the same temperature parameters as in ANN are used except for the low T bound,which is set to0.5.Because of the divergent nature of the cost function I(13)and the localfield u i(16),extra precaution has to be taken when updating the neurons–infinities appear when all the neurons in a clause are±1with the wrong sign:v i=−C mi.When calculating u i,the infinite clause contributions are counted separately.If the positive(negative)infinities are more(less)numerous,v i is set to+1 (-1);otherwise,v i is randomly set to±1if infinities exist but in equal numbers,else the finite part of u i is used.This introduces randomness in the low temperature region if a solution has not been found;the algorithm then acquires a local search behaviour increasing its ability tofind a solution.In this mode the neurons do not change smoothly and the maximum numberof updates per temperature sweep(set to10)is frequently used,which explains why INN needs more time than the conventional ANN for difficult problem instances.Performance can be improved,at the cost of increasing the CPU time used,with a slower annealing rate and/or a lower low-T bound.Restarts of the algorithm also improves performance.5.3.3gsat+walkThe source code for gsat+walk can be found at SATLIB4.We have attempted to follow the recommendations in the enclosed documentation for parameter settings.The prob-ability at eachflip of choosing a greedy move instead of a restricted random walk move is set to0.5.We have chosen to use a single run with200×Nflips per problem,instead of several runs with lessflips per try,since this appears to improve overall performance. 
Making several runs or using moreflips per run will improve performance at the cost of an increased CPU consumption.5.4ResultsHere follow the results from our numerical investigations for the two testbeds.All ex-plorations have been made on a600MHz AMD Athlon computer running Linux.The results from INN,ANN and Gsat+Walk for the uniform random testbed are sum-marized infigures3,4,and5.Infigure3the fraction of the problems not satisfied by the separate algorithms(f U) is shown as a function ofαfor different problem sizes N.The three algorithms show different transitions inαabove which they fail tofind solutions.For INN and gsat+walk the transition appears slightly beneath the realαc,while for ANN the transition is situated belowα=3.7.The average number of unsatisfied clauses per problem instance(H)is presented infigure 4for the three algorithms.H is shown as a function ofαfor different N.This can be used as a performance measure also when an algorithm fails tofind solutions5.The average CPU-time consumption(t)is shown infigure5for all algorithms.The CPU-time is presented as a function of N for differentαin order to show how the algorithms scale with problem size.The results(f U,H,t)for the solvable testbed for all three algorithms are summarized1f Uα = M/N1α = M/N1α = M/NFigure 3:Fraction unsatisfied problems (f U )versus αfor ANN (A ),INN (B )and gsat+walk (C ),for N =100(+),500(×),1000(∗),1500( )and 2000().The fractions are calculated from 200instances;the error in each point is less than 0.035.10Hα = M/N10α = M/N10α = M/NFigure 4:Number of unsatisfied clauses H per instance versus α,for ANN (A ),INN (B )and gsat+walk (C ),for N =100(+),500(×),1000(∗),1500( )and 2000( ).Average over 200instances.in table 1.5.5DiscussionThe first point to be made is the dramatic performance improvement in INN as compared to ANN.This is partly due to the divergent nature of the INN cost function I ,leading to a progressively increased focus on the neurons involved in the relatively few critical0.11101001000tN(A)0.11101001000N(B)0.11101001000N(C)Figure 5:Log-log plot of used CPU-time t (given in seconds)versus N ,for ANN (A ),INN (B )and gsat+walk (C ),for α=3.7(+),3.9(×),4.1(∗)and 4.3( ).N ranges from 100to 2000.Averaged over 200instances.num ANNINN gsat+walkN M inst.f U H t 209110000.0760.0780.010.6070.7590.040.0080.0080.02753251000.410.440.110.844 1.4850.090.0720.0740.091255381000.390.410.180.892.070.140.160.170.191757531000.510.610.3913.060.200.320.340.392259601000.520.670.510.993.530.250.390.440.53Table 1:Results for the solvable 3-SAT problems close to αc .f U is the fraction of problems not satisfied by the algorithm,H is the average number of unsatisfied clauses (6)and t is the average CPU-time used (given in seconds).The third column (num inst.)is the number of instances in the problem set.clauses on the virge of becoming unsatisfied.This improves the revision capability which is beneficial for the performance.The choice of randomizing v i to ±1(which appears very natural)in cases of balancing infinities in u i contributes to this effect.A performance comparison of INN and gsat+walk indicates that the latter appears to have the upper hand for small N .For larger N however,INN seems to be quite compa-rable to gsat+walk.6Summary and OutlookWe have presented a heuristic algorithm,INN,for binary satisfiability problems.It is a modification of the conventional mean-field based ANN annealing algorithm,and differs from this mainly by a replacement of the usual multilinear cost function by one derived from 
6 Summary and Outlook

We have presented a heuristic algorithm, INN, for binary satisfiability problems. It is a modification of the conventional mean-field based ANN annealing algorithm, and differs from this mainly by a replacement of the usual multilinear cost function by one derived from an information-theoretical argument.

This modification is shown empirically to dramatically enhance the performance on a testbed of random K-SAT problem instances; the resulting performance is for large problem sizes comparable to that of a good dedicated heuristic, tailored to K-SAT.

An important advantage of the INN approach is its generality. The basic philosophy – the balance of information – can be applied to a host of different types of binary as well as non-binary problems; work in this direction is in progress.

Acknowledgement

Thanks are due to C. Peterson for inspiring discussions. This work was in part supported by the Swedish Foundation for Strategic Research.

References

Cook, S. A. and D. G. Mitchell (1997). Finding hard instances of the satisfiability problem: A survey. See Du, Gu, and Pardalos (1997), pp. 1-18.
Du, D., J. Gu, and P. M. Pardalos (Eds.) (1997). Satisfiability Problem: Theory and Applications, DIMACS Series in Discrete Mathematics and Theoretical Computer Science. American Mathematical Society.
Gu, J., P. W. Purdom, J. Franco, and B. W. Wah (1997). Algorithms for the satisfiability (sat) problem: A survey. See Du, Gu, and Pardalos (1997), pp. 19-152.
Hogg, T., B. A. Hubermann, and C. P. Williams (1996). Special volume on frontiers in problem solving: Phase transitions and complexity. Artificial Intelligence 81(1,2).
Hopfield, J. J. and D. W. Tank (1985). Neural computation of decisions in optimization problems. Biological Cybernetics 52, 141-152.
Monasson, R., R. Zecchina, S. Kirkpatrick, B. Selman, and L. Troyansky (1999). Determining computational complexity from characteristic 'phase transitions'. Nature 400(6740), 133-137.
Ohlsson, M., C. Peterson, and B. Söderberg (1993). Neural networks for optimization problems with inequality constraints - the knapsack problem. Neural Computation 5(2), 331-339.
Papadimitriou, C. H. (1994). Computational Complexity. Reading, Massachusetts: Addison-Wesley Publishing Company.
Peterson, C. and B. Söderberg (1998). Neural optimization. In M. A. Arbib (Ed.), The Handbook of Brain Research and Neural Networks, (2nd edition), pp. 617-622. Bradford Books/The MIT Press.
Selman, B., H. A. Kautz, and B. Cohen (1994). Noise strategies for improving local search. In Proceedings of the Twelfth National Conference on Artificial Intelligence, Menlo Park, California, pp. 337-343. AAAI Press.
