Top-Down Visual Saliency via Joint CRF and Dictionary Learning


Top-down proteomics: large-scale identification of proteoforms


When an identified peptide matches several highly homologous proteins at once, it is impossible to determine which of those proteins are actually present, and this situation is widespread among human proteins [17]. "Top-down" proteomics, which takes the intact protein as its object of study, offers a new line of thought and a technically feasible route to solving this problem, owing largely to the rapid development of mass spectrometry in recent years.
Early mass spectrometry was used mainly to measure elemental isotopes; instruments capable of measuring small molecules came later. The invention of two soft-ionization techniques in the late 1980s, electrospray ionization (ESI) and matrix-assisted laser desorption and ionization (MALDI), allowed mass spectrometry to be applied broadly to biological macromolecules, and peptide-centric mass spectrometric analysis has since become the routine means of protein identification. As high-resolution mass spectrometry matured, it became possible to measure the molecular mass and fragmentation spectra of intact proteins directly, and further advances now allow direct measurement of the mass spectra of protein complexes. Across this development, the measurable object has grown steadily larger, from elements to small molecules, from peptides to proteins, and on to protein complexes, reflecting both the trajectory of mass spectrometry and the technical demands of protein research.
2015; 42 (2)
… the association becomes difficult. The BU limitations described above have hindered the deeper application of "bottom-up" proteomics.
To mitigate the limitations of BU, "middle-down" (MD) proteomics employs different proteolytic enzymes that yield longer peptides, for example above 6.3 ku [7]. MD can thus identify several post-translational modifications occurring simultaneously on a longer peptide chain; compared with BU it covers a wider range of peptides, and it has found application in the analysis of polyubiquitin chain structures [8] and monoclonal antibodies [9].

Comparison of the efficacy of moxifloxacin versus isoniazid in treating pulmonary tuberculosis in elderly patients


① Xuzhou Infectious Disease Hospital, Xuzhou 221000, Jiangsu, China
Corresponding author: WU Xiaohua
SHEN Chenchen①, WU Xiaohua①
[Abstract] Objective: To compare the efficacy of moxifloxacin versus isoniazid in treating pulmonary tuberculosis in elderly patients.

Methods: A total of 102 elderly patients with pulmonary tuberculosis admitted to Xuzhou Infectious Disease Hospital from January 2020 to January 2023 were enrolled and divided by the random number table method into group A (n = 51) and group B (n = 51).

Visual salient object detection via weakly supervised learning


Computer Engineering and Design, Vol. 38, No. 5, May 2017
Visual salient object detection via weakly supervised learning
LI Ce¹,², DENG Haohai¹, XIAO Limei¹, ZHANG Aihua¹ (1. College of Electrical and Information Engineering, Lanzhou University of Technology, Lanzhou 730050, China; 2. School of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an 710049, China)
Abstract: To simulate the human visual behavior of gradually attending to a specific object across a set of images containing that object, a visual salient object detection algorithm based on weakly supervised learning is proposed.

Salient regions are first obtained with an existing visual saliency method; low-level visual features are extracted from these regions and used to train appearance models of the salient object; a conditional random field (CRF) then learns these appearance models jointly, yielding their weights in the final saliency map; finally, the ROC curve of the saliency map is computed at each iteration to select the optimal appearance model of the salient object and its optimal weight in the final saliency map.
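The iterate-and-score loop described above (fuse weighted per-feature saliency maps, evaluate the fused map against the annotation with ROC, keep the best weights) can be sketched in a few lines. The sketch below is a deliberately simplified stand-in: it replaces the paper's CRF-based joint learning with a random search over convex weights, and every name in it is illustrative rather than taken from the authors' code.

```python
import numpy as np
from scipy.stats import rankdata

def auc_score(y, s):
    """ROC AUC via the rank-sum (Mann-Whitney) identity."""
    r = rankdata(s)
    n_pos = int(y.sum())
    n_neg = y.size - n_pos
    return (r[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def best_feature_weights(feature_maps, gt_mask, n_trials=200, seed=0):
    """Random search over convex combinations of per-feature saliency
    maps, keeping the weights whose fused map scores the highest ROC AUC
    against the ground-truth mask (a toy stand-in for CRF weight learning)."""
    rng = np.random.default_rng(seed)
    y = gt_mask.ravel().astype(int)
    best_auc, best_w = -1.0, None
    for _ in range(n_trials):
        w = rng.dirichlet(np.ones(len(feature_maps)))   # convex weights
        fused = sum(wi * f for wi, f in zip(w, feature_maps))
        a = auc_score(y, fused.ravel())
        if a > best_auc:
            best_auc, best_w = a, w
    return best_w, best_auc
```

Run on synthetic maps, the search concentrates weight on whichever feature map agrees best with the annotation, which is the behavior the iteration above relies on.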

Experimental results show that the algorithm achieves higher detection accuracy than many existing methods and can effectively detect visually salient objects.

The algorithm simulates how human vision perceives a specific attended target: visually salient objects that appear repeatedly are reinforced through learning, giving high accuracy.

Key words: conditional random fields (CRF); appearance of visually salient object; visual saliency; weakly supervised learning; low-level visual feature
CLC: TP391  Document code: A  Article ID: 1000-7024 (2017) 05-1335-07  doi: 10.16208/j.issn1000-7024.2017.05.040
Visual salient object detection via weakly supervised learning
LI Ce¹,², DENG Hao-hai¹, XIAO Li-mei¹, ZHANG Ai-hua¹ (1. College of Electrical and Information Engineering, Lanzhou University of Technology, Lanzhou 730050, China; 2. School of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an 710049, China)
Abstract: Aiming at simulating the human visual tendency to focus gradually on a specific object in a target image set, a visual salient object detection method via weakly supervised learning was proposed. According to a state-of-the-art saliency method, the salient regions of each image were obtained. The low-level visual features of the salient regions were extracted and used to train appearances of the visually salient object. A conditional random field (CRF) model was built to learn the model coefficients together with the appearances of the salient object. The area under the ROC curve was calculated after each iteration so as to obtain the best appearance of the visually salient object and its weight in the final saliency map. Experimental results on the dataset indicate that this method performs much better than existing state-of-the-art approaches, and it can detect the visually salient object efficiently. This method simulates the human visual attention process: the repeatedly occurring salient object can be learnt and emphasized while good accuracy is maintained.
0 Introduction
When viewing an image, the human visual system tends to attend first to a particular region or object; we consider such a region or object to be salient.

Evaluation of the effect of high-quality nursing in the care of patients with cerebral infarction


Cerebrovascular disease refers to brain dysfunction caused by lesions of the cerebral blood vessels.

An introduction to science-fiction films (English version)

• Social Commentary: Science-fiction films often serve as a platform for social commentary, examining issues such as the impact of technology on privacy, the ethics of scientific experimentation, and the potential dangers of unchecked scientific progress.
• Modern Era: In recent decades, science-fiction films have continued to evolve and expand in scope, taking on more complex themes and ideas. Films such as "Blade Runner" (1982), "The Matrix" (1999), and "Interstellar" (2014) have pushed the genre forward with their innovative visual effects and thought-provoking narratives.
• "Frankenstein" (1931): Based on Mary Shelley's novel, this film by James Whale tells the story of a scientific who created a monster in his laboratory, only to have it turn on him and write havoc It explores themes of scientific responsibility and the nature of monostability

Advances in the application of magnetic resonance imaging in evaluating and predicting the response to neoadjuvant therapy for breast cancer


TIAN Liwen¹,², WANG Cuiyan² (1. Cheeloo College of Medicine, Shandong University, Jinan 250012, China; 2. Department of Medical Imaging, Shandong Provincial Hospital)
Abstract: Neoadjuvant therapy (NAT) is an important component of the comprehensive treatment of breast cancer.

At present, histopathology is the gold standard for evaluating NAT response, but it requires postoperative specimens and therefore lags considerably.

Accurately evaluating and predicting NAT response is of great value for adjusting the treatment regimen in time and formulating a precise surgical plan.

A variety of imaging methods have been used to evaluate and predict NAT response in breast cancer; among them, magnetic resonance (MR) imaging has become the most accurate imaging modality owing to its superior soft-tissue resolution and its multiplanar, multiparametric imaging capability.

Conventional MR imaging can evaluate and predict NAT response from morphological changes such as the longest tumor diameter, tumor volume, and shrinkage pattern.

In recent years, as functional MR imaging techniques have continued to evolve, dynamic contrast-enhanced MRI, diffusion-weighted imaging (including intravoxel incoherent motion DWI), amide proton transfer imaging, and MR spectroscopy have enabled early prediction of breast cancer NAT response from different dimensions, further improving the early predictive power of MR imaging.

Key words: breast cancer; neoadjuvant therapy; magnetic resonance imaging
doi: 10.3969/j.issn.1002-266X.2023.13.022  CLC: R737.9  Document code: A  Article ID: 1002-266X (2023) 13-0087-05
According to the latest global cancer burden data for 2020 released by the International Agency for Research on Cancer of the World Health Organization, breast cancer has surpassed lung cancer to become the malignant tumor with the largest number of new cases worldwide.

Foundation item: Shandong Medical Association Breast Disease Research Fund (YXH2020ZX068)
Corresponding author: WANG Cuiyan (E-mail: wcyzhang@ )
The latest data released by the National Cancer Center show that in 2022 the incidence of breast cancer among women in China was about 29.05 per 100,000, the highest incidence of any malignant tumor in women.

Foreign-language literature on the pharmaceutical industry


Some recommended foreign-language papers on the pharmaceutical industry:
1. "The Role of AI in Personalized Medicine" by Ritu Banerjee, published in the Journal of Biomedical Informatics.
2. "The Application of Machine Learning in Drug Discovery" by Arjun Panesar and colleagues, published in the journal Drug Discovery Today.
3. "Deep Learning for Drug Discovery and Development" by Markus Helfrich and colleagues, published in the journal Expert Opinion on Drug Discovery.
4. "Artificial Intelligence in Healthcare: Opportunities and Challenges" by Joshua Kohn and colleagues, published in the Journal of the American Medical Association.
5. "Machine Learning in Biomarker Development: A Review" by Gaurav Banga and colleagues, published in the journal Trends in Molecular Medicine.
These papers offer in-depth insight into the applications and impact of artificial intelligence in the pharmaceutical industry, as well as the challenges and opportunities it faces.

THE INTERNATIONAL JOURNAL OF MEDICAL ROBOTICS AND COMPUTER ASSISTED SURGERY Int J Med Robot


Introduction
Computer-assisted surgery (CAS) is a methodology that translates into accurate and reliable image-to-surgical-space guidance. Neurosurgery is a very complex procedure and the surgeon has to integrate multi-modal data to produce an optimal surgical plan. Often the lesion of interest is surrounded by vital structures, such as the motor cortex, temporal cortex, vision and audio sensors, etc., and has irregular configurations. Slight damage to such eloquent brain structures can severely impair the patient (1,2). CASMIL, an image-guided neurosurgery toolkit, is being developed to produce optimum plans resulting in minimally invasive surgeries. This system has many innovative features needed by neurosurgeons that are not available in other academic and commercial systems. CASMIL is an integration of various vital modules, such as rigid and non-rigid co-registration (image–image, image–atlas and
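Rigid co-registration of the kind listed above is commonly grounded in point-based alignment: given matched fiducial points in image space and surgical space, the least-squares rotation and translation can be recovered with the Kabsch/SVD method. The sketch below illustrates only that general idea; it is not CASMIL's implementation, and the function name is ours.

```python
import numpy as np

def rigid_register(src, dst):
    """Estimate the rigid transform (R, t) mapping 3-D point set `src`
    onto `dst` via the Kabsch/SVD method, so that dst ~= src @ R.T + t."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

Applied to fiducials related by a known rotation and translation, the routine recovers that transform exactly up to floating-point error, which is the property an image-to-surgical-space guidance pipeline depends on.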

A large collection of papers and source code in computer vision and machine learning


A large collection of papers and source code in computer vision and machine learning (continuously updated) … zouxy09@/zouxy09
Note: most of the project sites below provide the paper and the corresponding code.

The code is generally C/C++ or Matlab.

Last updated: 2013-3-17

1. Feature Extraction:
· SIFT [1] [Demo program] [SIFT Library] [VLFeat]
· PCA-SIFT [2] [Project]
· Affine-SIFT [3] [Project]
· SURF [4] [OpenSURF] [Matlab Wrapper]
· Affine Covariant Features [5] [Oxford project]
· MSER [6] [Oxford project] [VLFeat]
· Geometric Blur [7] [Code]
· Local Self-Similarity Descriptor [8] [Oxford implementation]
· Global and Efficient Self-Similarity [9] [Code]
· Histogram of Oriented Gradients [10] [INRIA Object Localization Toolkit] [OLT toolkit for Windows]
· GIST [11] [Project]
· Shape Context [12] [Project]
· Color Descriptor [13] [Project]
· Pyramids of Histograms of Oriented Gradients [Code]
· Space-Time Interest Points (STIP) [14] [Project] [Code]
· Boundary Preserving Dense Local Regions [15] [Project]
· Weighted Histogram [Code]
· Histogram-based Interest Points Detectors [Paper] [Code]
· An OpenCV - C++ implementation of Local Self Similarity Descriptors [Project]
· Fast Sparse Representation with Prototypes [Project]
· Corner Detection [Project]
· AGAST Corner Detector: faster than FAST and even FAST-ER [Project]
· Real-time Facial Feature Detection using Conditional Regression Forests [Project]
· Global and Efficient Self-Similarity for Object Classification and Detection [code]
· WαSH: Weighted α-Shapes for Local Feature Detection [Project]
· HOG [Project]
· Online Selection of Discriminative Tracking Features [Project]

2. Image Segmentation:
· Normalized Cut [1] [Matlab code]
· Greg Mori's Superpixel code [2] [Matlab code]
· Efficient Graph-based Image Segmentation [3] [C++ code] [Matlab wrapper]
· Mean-Shift Image Segmentation [4] [EDISON C++ code] [Matlab wrapper]
· OWT-UCM Hierarchical Segmentation [5] [Resources]
· TurboPixels [6] [Matlab code 32bit] [Matlab code 64bit] [Updated code]
· Quick-Shift [7] [VLFeat]
· SLIC Superpixels [8] [Project]
· Segmentation by Minimum Code Length [9] [Project]
· Biased Normalized Cut [10] [Project]
· Segmentation Tree [11-12] [Project]
· Entropy Rate Superpixel Segmentation [13] [Code]
· Fast Approximate Energy Minimization via Graph Cuts [Paper] [Code]
· Efficient Planar Graph Cuts with Applications in Computer Vision [Paper] [Code]
· Isoperimetric Graph Partitioning for Image Segmentation [Paper] [Code]
· Random Walks for Image Segmentation [Paper] [Code]
· Blossom V: A new implementation of a minimum cost perfect matching algorithm [Code]
· An Experimental Comparison of Min-Cut/Max-Flow Algorithms for Energy Minimization in Computer Vision [Paper] [Code]
· Geodesic Star Convexity for Interactive Image Segmentation [Project]
· Contour Detection and Image Segmentation Resources [Project] [Code]
· Biased Normalized Cuts [Project]
· Max-flow/min-cut [Project]
· Chan-Vese Segmentation using Level Set [Project]
· A Toolbox of Level Set Methods [Project]
· Re-initialization Free Level Set Evolution via Reaction Diffusion [Project]
· Improved C-V active contour model [Paper] [Code]
· A Variational Multiphase Level Set Approach to Simultaneous Segmentation and Bias Correction [Paper] [Code]
· Level Set Method Research by Chunming Li [Project]
· ClassCut for Unsupervised Class Segmentation [code]
· SEEDS: Superpixels Extracted via Energy-Driven Sampling [Project] [other]

3. Object Detection:
· A simple object detector with boosting [Project]
· INRIA Object Detection and Localization Toolkit [1] [Project]
· Discriminatively Trained Deformable Part Models [2] [Project]
· Cascade Object Detection with Deformable Part Models [3] [Project]
· Poselet [4] [Project]
· Implicit Shape Model [5] [Project]
· Viola and Jones's Face Detection [6] [Project]
· Bayesian Modelling of Dynamic Scenes for Object Detection [Paper] [Code]
· Hand detection using multiple proposals [Project]
· Color Constancy, Intrinsic Images, and Shape Estimation [Paper] [Code]
· Discriminatively trained deformable part models [Project]
· Gradient Response Maps for Real-Time Detection of Texture-Less Objects: LineMOD [Project]
· Image Processing On Line [Project]
· Robust Optical Flow Estimation [Project]
· Where's Waldo: Matching People in Images of Crowds [Project]
· Scalable Multi-class Object Detection [Project]
· Class-Specific Hough Forests for Object Detection [Project]
· Deformed Lattice Detection In Real-World Images [Project]
· Discriminatively trained deformable part models [Project]

4. Saliency Detection:
· Itti, Koch, and Niebur's saliency detection [1] [Matlab code]
· Frequency-tuned salient region detection [2] [Project]
· Saliency detection using maximum symmetric surround [3] [Project]
· Attention via Information Maximization [4] [Matlab code]
· Context-aware saliency detection [5] [Matlab code]
· Graph-based visual saliency [6] [Matlab code]
· Saliency detection: A spectral residual approach [7] [Matlab code]
· Segmenting salient objects from images and videos [8] [Matlab code]
· Saliency Using Natural statistics [9] [Matlab code]
· Discriminant Saliency for Visual Recognition from Cluttered Scenes [10] [Code]
· Learning to Predict Where Humans Look [11] [Project]
· Global Contrast based Salient Region Detection [12] [Project]
· Bayesian Saliency via Low and Mid Level Cues [Project]
· Top-Down Visual Saliency via Joint CRF and Dictionary Learning [Paper] [Code]
· Saliency Detection: A Spectral Residual Approach [Code]

5. Image Classification, Clustering:
· Pyramid Match [1] [Project]
· Spatial Pyramid Matching [2] [Code]
· Locality-constrained Linear Coding [3] [Project] [Matlab code]
· Sparse Coding [4] [Project] [Matlab code]
· Texture Classification [5] [Project]
· Multiple Kernels for Image Classification [6] [Project]
· Feature Combination [7] [Project]
· SuperParsing [Code]
· Large Scale Correlation Clustering Optimization [Matlab code]
· Detecting and Sketching the Common [Project]
· Self-Tuning Spectral Clustering [Project] [Code]
· User Assisted Separation of Reflections from a Single Image Using a Sparsity Prior [Paper] [Code]
· Filters for Texture Classification [Project]
· Multiple Kernel Learning for Image Classification [Project]
· SLIC Superpixels [Project]

6. Image Matting:
· A Closed Form Solution to Natural Image Matting [Code]
· Spectral Matting [Project]
· Learning-based Matting [Code]

7. Object Tracking:
· A Forest of Sensors - Tracking Adaptive Background Mixture Models [Project]
· Object Tracking via Partial Least Squares Analysis [Paper] [Code]
· Robust Object Tracking with Online Multiple Instance Learning [Paper] [Code]
· Online Visual Tracking with Histograms and Articulating Blocks [Project]
· Incremental Learning for Robust Visual Tracking [Project]
· Real-time Compressive Tracking [Project]
· Robust Object Tracking via Sparsity-based Collaborative Model [Project]
· Visual Tracking via Adaptive Structural Local Sparse Appearance Model [Project]
· Online Discriminative Object Tracking with Local Sparse Representation [Paper] [Code]
· Superpixel Tracking [Project]
· Learning Hierarchical Image Representation with Sparsity, Saliency and Locality [Paper] [Code]
· Online Multiple Support Instance Tracking [Paper] [Code]
· Visual Tracking with Online Multiple Instance Learning [Project]
· Object detection and recognition [Project]
· Compressive Sensing Resources [Project]
· Robust Real-Time Visual Tracking using Pixel-Wise Posteriors [Project]
· Tracking-Learning-Detection [Project] [OpenTLD/C++ Code]
· The HandVu: vision-based hand gesture interface [Project]
· Learning Probabilistic Non-Linear Latent Variable Models for Tracking Complex Activities [Project]

8. Kinect:
· Kinect toolbox [Project]
· OpenNI [Project]
· zouxy09 CSDN Blog [Resource]
· FingerTracker (finger tracking) [code]

9. 3D:
· 3D Reconstruction of a Moving Object [Paper] [Code]
· Shape From Shading Using Linear Approximation [Code]
· Combining Shape from Shading and Stereo Depth Maps [Project] [Code]
· Shape from Shading: A Survey [Paper] [Code]
· A Spatio-Temporal Descriptor based on 3D Gradients (HOG3D) [Project] [Code]
· Multi-camera Scene Reconstruction via Graph Cuts [Paper] [Code]
· A Fast Marching Formulation of Perspective Shape from Shading under Frontal Illumination [Paper] [Code]
· Reconstruction: 3D Shape, Illumination, Shading, Reflectance, Texture [Project]
· Monocular Tracking of 3D Human Motion with a Coordinated Mixture of Factor Analyzers [Code]
· Learning 3-D Scene Structure from a Single Still Image [Project]

10. Machine Learning Algorithms:
· Matlab class for computing Approximate Nearest Neighbor (ANN) [Matlab class providing interface to ANN library]
· Random Sampling [code]
· Probabilistic Latent Semantic Analysis (pLSA) [Code]
· FASTANN and FASTCLUSTER for approximate k-means (AKM) [Project]
· Fast Intersection / Additive Kernel SVMs [Project]
· SVM [Code]
· Ensemble learning [Project]
· Deep Learning [Net]
· Deep Learning Methods for Vision [Project]
· Neural Network for Recognition of Handwritten Digits [Project]
· Training a deep autoencoder or a classifier on MNIST digits [Project]
· THE MNIST DATABASE of handwritten digits [Project]
· Ersatz: deep neural networks in the cloud [Project]
· Deep Learning [Project]
· sparseLM: Sparse Levenberg-Marquardt nonlinear least squares in C/C++ [Project]
· Weka 3: Data Mining Software in Java [Project]
· Invited talk "A Tutorial on Deep Learning" by Dr. Kai Yu [Video]
· CNN - Convolutional neural network class [Matlab Tool]
· Yann LeCun's Publications [Website]
· LeNet-5, convolutional neural networks [Project]
· Training a deep autoencoder or a classifier on MNIST digits [Project]
· Deep-learning pioneer Geoffrey E. Hinton's HomePage [Website]
· Multiple Instance Logistic Discriminant-based Metric Learning (MildML) and Logistic Discriminant-based Metric Learning (LDML) [Code]
· Sparse coding simulation software [Project]
· Visual Recognition and Machine Learning Summer School [Software]

11. Object, Action Recognition:
· Action Recognition by Dense Trajectories [Project] [Code]
· Action Recognition Using a Distributed Representation of Pose and Appearance [Project]
· Recognition Using Regions [Paper] [Code]
· 2D Articulated Human Pose Estimation [Project]
· Fast Human Pose Estimation Using Appearance and Motion via Multi-Dimensional Boosting Regression [Paper] [Code]
· Estimating Human Pose from Occluded Images [Paper] [Code]
· Quasi-dense wide baseline matching [Project]
· ChaLearn Gesture Challenge: Principal motion: PCA-based reconstruction of motion histograms [Project]
· Real Time Head Pose Estimation with Random Regression Forests [Project]
· 2D Action Recognition Serves 3D Human Pose Estimation [Project]
· A Hough Transform-Based Voting Framework for Action Recognition [Project]
· Motion Interchange Patterns for Action Recognition in Unconstrained Videos [Project]
· 2D articulated human pose estimation software [Project]
· Learning and detecting shape models [code]
· Progressive Search Space Reduction for Human Pose Estimation [Project]
· Learning Non-Rigid 3D Shape from 2D Motion [Project]

12. Image Processing:
· Distance Transforms of Sampled Functions [Project]
· The Computer Vision Homepage [Project]
· Efficient appearance distances between windows [code]
· Image Exploration algorithm [code]
· Motion Magnification [Project]
· Bilateral Filtering for Gray and Color Images [Project]
· A Fast Approximation of the Bilateral Filter using a Signal Processing Approach [Project]

13. Useful Tools:
· EGT: a Toolbox for Multiple View Geometry and Visual Servoing [Project] [Code]
· A development kit of matlab mex functions for OpenCV library [Project]
· Fast Artificial Neural Network Library [Project]

14. Hand and Fingertip Detection and Recognition:
· finger-detection-and-gesture-recognition [Code]
· Hand and Finger Detection using JavaCV [Project]
· Hand and fingers detection [Code]

15. Scene Parsing:
· Nonparametric Scene Parsing via Label Transfer [Project]

16. Optical Flow:
· High accuracy optical flow using a theory for warping [Project]
· Dense Trajectories Video Description [Project]
· SIFT Flow: Dense Correspondence across Scenes and its Applications [Project]
· KLT: An Implementation of the Kanade-Lucas-Tomasi Feature Tracker [Project]
· Tracking Cars Using Optical Flow [Project]
· Secrets of optical flow estimation and their principles [Project]
· Implementation of the Black and Anandan dense optical flow method [Project]
· Optical Flow Computation [Project]
· Beyond Pixels: Exploring New Representations and Applications for Motion Analysis [Project]
· A Database and Evaluation Methodology for Optical Flow [Project]
· Optical flow relative [Project]
· Robust Optical Flow Estimation [Project]
· Optical flow [Project]

17. Image Retrieval:
· Semi-Supervised Distance Metric Learning for Collaborative Image Retrieval [Paper] [code]

18. Markov Random Fields:
· Markov Random Fields for Super-Resolution [Project]
· A Comparative Study of Energy Minimization Methods for Markov Random Fields with Smoothness-Based Priors [Project]

19. Motion Detection:
· Moving Object Extraction, Using Models or Analysis of Regions [Project]
· Background Subtraction: Experiments and Improvements for ViBe [Project]
· A Self-Organizing Approach to Background Subtraction for Visual Surveillance Applications [Project]
· A new change detection benchmark dataset [Project]
· ViBe - a powerful technique for background detection and subtraction in video sequences [Project]
· Background Subtraction Program [Project]
· Motion Detection Algorithms [Project]
· Stuttgart Artificial Background Subtraction Dataset [Project]
· Object Detection, Motion Estimation, and Tracking [Project]

Comparative analysis of the influencing factors on postoperative survival between robot-assisted surgery and laparotomy for locally advanced cervical cancer after neoadjuvant chemotherapy


ZHOU Xiaoni¹, TANG Xuxiu², CAI Liping³, TU Chunhua³, ZHANG Zhi³, ZHANG Qiling³, XIAO Ziwen², ZHAO Na⁴ (1. Department of Gynecology, the Second People's Hospital of Jingdezhen, Jingdezhen 333000, China; 2. The First Clinical Medical College of Nanchang University, Nanchang 330000, China; 3. Department of Obstetrics and Gynecology, the First Affiliated Hospital of Nanchang University, Nanchang 330000, China; 4. Department of Clinical Laboratory, the Second People's Hospital of Jingdezhen, Jingdezhen 333000, China)
Abstract Objective: To explore differences in postoperative quality of survival between robot-assisted laparoscopic surgery and laparotomy after neoadjuvant chemotherapy in patients with locally advanced cervical cancer, and to analyze the influencing factors.

Methods: A retrospective study was conducted on 76 patients with cervical cancer treated in the Department of Gynecology of the First Affiliated Hospital of Nanchang University from January 2016 to December 2016.

In the study (robotic surgery) group, all 38 patients underwent robot-assisted laparoscopic radical hysterectomy and pelvic lymphadenectomy after neoadjuvant chemotherapy; in the control (laparotomy) group, 38 patients underwent open radical hysterectomy and pelvic lymphadenectomy after neoadjuvant chemotherapy. In both groups, para-aortic lymph node sampling was performed in some patients.

The general clinical characteristics and postoperative surgical-quality indicators of the enrolled patients were recorded, and their progression-free survival and overall survival were compiled and analyzed.

Results: The general clinical characteristics of the two groups did not differ, but several postoperative surgical-quality indicators of the study group differed significantly from those of the control group (P<0.05).

There were no statistically significant differences between the two groups in pathological prognostic factors, disease-free survival, overall survival, or 3-year and 5-year survival rates (P>0.05). Univariate and multivariate Cox proportional-hazards analyses identified the depth of cervical invasion and whether adjuvant chemoradiotherapy was given after surgery as independent prognostic risk factors.

Conclusion: For patients with locally advanced cervical cancer, robot-assisted laparoscopic surgery achieves a better surgical-quality evaluation than conventional laparotomy, but the two approaches show no obvious difference in pathological prognostic factors or postoperative survival.

Key words: cervical cancer; robot-assisted surgery; laparotomy; overall survival; disease-free survival
CLC: R608, R713.4  Document code: A  Article ID: 2096-7721 (2024) 02-0178-08
Received: 2022-04-29  Accepted: 2022-11-31
Foundation items: Natural Science Foundation of Jiangxi Province (20192ACBL20038); Major Science and Technology Project of Jiangxi Province (20152ACG70022)
Corresponding author: CAI Liping, E-mail: *********************
Citation: ZHOU X N, TANG X X, CAI L P, et al. Comparative analysis of the influencing factors on postoperative survival between robotic surgery and conventional laparotomy for locally advanced cervical cancer after neoadjuvant chemotherapy [J]. Chinese Journal of Robotic Surgery, 2024, 5(2): 178-185.
Comparative analysis of the influencing factors on postoperative survival between robotic surgery and conventional laparotomy for locally advanced cervical cancer after neoadjuvant chemotherapy
ZHOU Xiaoni¹, TANG Xuxiu², CAI Liping³, TU Chunhua³, ZHANG Zhi³, ZHANG Qiling³, XIAO Ziwen², ZHAO Na⁴ (1. Department of Gynecology, the Second People's Hospital of Jingdezhen, Jingdezhen 333000, China; 2. The First Clinical Medical College of Nanchang University, Nanchang 330000, China; 3. Department of Obstetrics and Gynecology, the First Affiliated Hospital of Nanchang University, Nanchang 330000, China; 4. Department of Clinical Laboratory, the Second People's Hospital of Jingdezhen, Jingdezhen 333000, China)
Abstract Objective: To explore the differences in postoperative survival between patients who underwent robot-assisted laparoscopic surgery and those who underwent conventional laparotomy for locally advanced cervical cancer after neoadjuvant chemotherapy.
Methods: A retrospective study was performed on 76 patients with cervical cancer who were treated in the Department of Gynecology of the First Affiliated Hospital of Nanchang University from January 2016 to December 2016. Among them, 38 patients who underwent robot-assisted laparoscopic radical hysterectomy and pelvic lymph node dissection after neoadjuvant chemotherapy formed the study group, and 38 patients treated with conventional laparotomy after neoadjuvant chemotherapy formed the control group. Some patients in both groups underwent para-aortic lymph node sampling. Results: There was no difference in general clinical characteristics between the two groups, but significant differences in surgical-quality evaluation indicators were found (P<0.05). There was no significant difference in pathological prognostic factors, disease-free survival, overall survival, 3-year survival rate, or 5-year survival rate (P>0.05). Whether chemoradiotherapy was given after surgery was an independent prognostic risk factor. Conclusion: For patients with locally advanced cervical cancer, robot-assisted surgery receives a better postoperative surgical-quality evaluation than conventional laparotomy, but no significant difference was found in pathological prognostic factors or postoperative survival between the two approaches.
Key words: Cervical Cancer; Robot-assisted Surgery; Laparotomy; Overall Survival; Disease-free Survival
Cervical cancer is a serious public-health problem worldwide. The mean age at first diagnosis is 53 years and the mean age at death is 59 years; with about 570,000 new cases and 310,000 deaths each year, it is a leading cause of death among women with cancer.
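The prognostic analysis in the abstract rests on the Cox proportional-hazards model, whose partial likelihood can be maximized directly. The snippet below fits a one-covariate Cox model from scratch on simulated data to illustrate that machinery; it is a toy sketch, not the authors' analysis, and `cox_fit` is our own name.

```python
import numpy as np
from scipy.optimize import minimize

def cox_fit(x, time, event):
    """Fit a one-covariate Cox proportional-hazards model by maximizing
    the partial likelihood (Breslow form, assuming no tied event times).
    Returns the estimated log hazard ratio."""
    order = np.argsort(time)            # sort so risk sets become suffixes
    x, event = x[order], event[order]

    def neg_partial_loglik(beta):
        eta = beta[0] * x
        # log of the risk-set sum at each event time (suffix logsumexp)
        log_risk = np.logaddexp.accumulate(eta[::-1])[::-1]
        return -np.sum(event * (eta - log_risk))

    res = minimize(neg_partial_loglik, x0=[0.0], method="BFGS")
    return res.x[0]
```

On data simulated with a true log hazard ratio of 1 for a binary covariate, the fit recovers a value close to 1, which is the sanity check one would apply before trusting such a model on clinical data.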

A roundup of CVPR 2013


The CVPR 2013 decisions came out not long ago. First, congratulations to a former labmate of mine, now graduated and working, who had a paper accepted.

The full paper list has already been published on the CVPR homepage (); today I am organizing some papers that interest me. Although most of the download links are not out yet, this makes it possible to follow the latest developments.

I will add each download link as it becomes available.

Since there are no download links yet, I can only estimate each paper's content from its title and authors.

There will inevitably be inaccuracies; I will correct them after reading the papers.

Saliency:
Saliency Aggregation: A Data-driven Approach. Long Mai, Yuzhen Niu, Feng Liu. No material has surfaced yet; it is presumably saliency detection via adaptive fusion of multiple cues.
PISA: Pixelwise Image Saliency by Aggregating Complementary Appearance Contrast Measures with Spatial Priors. Keyang Shi, Keze Wang, Jiangbo Lu, Liang Lin. Neither of the two cues here looks new; the aggregation framework is probably the strong point. And being pixelwise, it can presumably reach segmentation- or matting-level results.

⽽且像素级的,估计能达到分割或者matting的效果Looking Beyond the Image: Unsupervised Learning for Object Saliency and Detection Parthipan Siva, Chris Russell, Tao Xiang, 基于学习的的显著性检测Learning video saliency from human gaze using candidate selection , Dan Goldman, Eli Shechtman, Lihi Zelnik-Manor这是⼀个做视频显著性的,估计是选择显著的视频⽬标Hierarchical Saliency Detection Qiong Yan, Li Xu, Jianping Shi, Jiaya Jia的学⽣也开始做显著性了,多尺度的⽅法Saliency Detection via Graph-Based Manifold Ranking Chuan Yang, Lihe Zhang, Huchuan Lu, Ming-Hsuan Yang, Xiang Ruan这个应该是扩展了那个经典的 graph based saliency,应该是⽤到了显著性传播的技巧Salient object detection: a discriminative regional feature integration approach , Jingdong Wang, Zejian Yuan, , Nanning Zheng⼀个多特征⾃适应融合的显著性检测⽅法Submodular Salient Region Detection , Larry Davis⼜是⼤⽜下⾯的⽂章,提法也很新颖,⽤了submodular。

Relationship between different cerebral small vessel disease burden scores and cognitive function in patients with cerebral small vessel disease with asymptomatic lacunes


DU Xiaoguang, WEI Rong, LIU Qihui, YU Liqun, ZHOU Li
Abstract: Objective To investigate the relationship between different cerebral small vessel disease (CSVD) burden scores and cognitive function in CSVD patients with asymptomatic lacunes.

Methods: A total of 128 CSVD patients with asymptomatic lacunes who visited the Department of Neurology of Weifang People's Hospital between July 2021 and October 2023 were enrolled. Cognitive function and CSVD burden were assessed with the Montreal Cognitive Assessment (MoCA), the total CSVD burden score, and the modified burden score. Patients were divided into a cognitive-impairment group (MoCA < 26) and a non-cognitive-impairment group (MoCA ≥ 26), and the demographic information, vascular risk factors, and CSVD burden scores of the two groups were compared.

Linear regression was used to analyze the relationship between the MoCA score and the two CSVD burden scores, and a trend test was used to analyze the trend in the incidence of cognitive impairment in CSVD patients with asymptomatic lacunes.

Results: Of the 128 enrolled CSVD patients with asymptomatic lacunes, 68 (53.1%) were in the cognitive-impairment group and 60 (46.9%) in the non-cognitive-impairment group; demographic information and vascular risk factors did not differ significantly between the groups (P>0.05).

两组患者的CSVD总负荷评分和改良负荷评分比较均存在统计学差异(P<0.05)。

Spearman秩相关分析显示,CSVD 总负荷评分和改良负荷评分均与MoCA评分呈负相关(P<0.001)。

线性趋势χ2检验分析显示,伴无症状腔隙的CSVD患者认知障碍发病风险随CSVD改良负荷评分增加而增加(P trend<0.05),该发病风险与CSVD总负荷评分间趋势分析无统计学意义(P trend=0.069)。

结论 CSVD总负荷评分和改良负荷评分均可用于筛检伴无症状腔隙的CSVD认知障碍患者。

改良负荷评分可能在识别认知障碍高风险患者方面更具优势。

关键词:脑小血管病;认知;腔隙中图分类号:R743 文献标识码:ARelationship between cerebral small vessel disease burden scores and cognitive function in patients with cerebral small vessel disease with asymptomatic lacunes DU Xiaoguang,WEI Rong,LIU Qihui, et al.(Department of Neurol⁃ogy, Weifang People′s Hospital, Weifang 261000, China)Abstract:Objective To investigate the relationship between cerebral small vessel disease (CSVD) burden scores and cognitive function in patients with CSVD with asymptomatic lacunes.Methods A total of 128 patients with CSVD with asymptomatic lacunes who visited the Department of Neurology of Weifang People′s Hospital from July 2021 to October 2023 were included. All the patients were scored using the Montreal Cognitive Assessment (MoCA) for cognitive function and using the total CSVD score and the modified CSVD score for CSVD burden. They were divided into cognitive impairment group (MoCA score<26)and non-cognitive impairment group (MoCA score≥26).The demographic information,vascular disease risk factors, and the CSVD scores of the two groups were compared. A linear regression analysis was performed to as⁃sess the relationship between the MoCA score and the two CSVD scores. A trend analysis was conducted to analyze the trend of incidence of cognitive impairment in patients with CSVD with asymptomatic lacunes.Results Among the 128 patients with CSVD with asymptomatic lacunes, 68 (53.1%) were in the cognitive impairment group and 60 (46.9%) were in the non-cognitive impairment group. There were no significant differences in the demographic information and vascular disease risk factors between the two groups (P>0.05). The total CSVD score and the modified CSVD score differed significantly be⁃tween the two groups (P<0.05). The Spearman correlation analysis showed that the total and modified CSVD scores were sig⁃nificantly negatively correlated with the MoCA score (P<0.001). 
The chi-square test for linear trend revealed that the cogni⁃tive impairment risk increased significantly with the modified CSVD score in patients with CSVD with asymptomatic lacunes (P trend<0.05), but with no significance for the total CSVD score (P trend=0.069).Conclusion Both the total and modified CSVD scores are useful tools to detect cognitive impairment in patients with CSVD with asymptomatic lacunes, and the modi⁃fied CSVD score may be superior in identifying patients at high risk of cognitive impairment.Key words:Cerebral small vessel disease;Cognition;Lacune脑小血管病(cerebral small vessel disease,CSVD)是血管性认知障碍的主要原因[1]。

治疗窗课件

治疗窗课件
治疗窗 Therapeutic Window
药物
剂量
毒物
安眠药
安眠药
•所有的物质都是毒药;所有物质都是毒物,正确的 剂量是用以区分一种物质是毒药还是补品的标准。
All substances are poisons; the right dose differentiates a poisons from a remedy.
6 mg/kg 浓度:0-50 mg/L
• 疗效在血药浓度约6 mg/L时达 到峰值
• 严重不良反应在11 mg/L时开 始出现
• 有疗效而无严重不良反应的有 效浓度范围为3-11mg/L,称为 治疗浓度范围
更连续,更广泛的数值范围内研究人群中的浓度与反应的关系
治疗窗
苯妥英: 10-20 mg/L 疗效好,不良反应少
浣葛草与冰续丹的区别在哪里?
浣葛草
一屋子
冰续丹
治疗指数
• 当达到药物的预期疗效剂量与限制性不良反应高发生率的 剂量非常接近时,治疗指数低。
• 大多数药物治疗指数都是中,高。
•剂量决定了毒性
“Dose determines toxicity.”
——— 帕拉塞尔苏斯
Paracelsus (1493-1541) physician and philosopher
治疗暴露
• 给予了过高的剂量,所有的药物无一例外地都会产生不良反应。 • 剂量—平衡药物的利益与不良反应的风险。 • 临床药理学—判断药物应用于患者的剂量范围。
嗜睡 步态共济失调 眼球震颤
治疗指数
• 药物的效应是相对于它的安全性(治疗指数) • 治疗指数是与治疗窗相关的概念
跟着琅琊榜学临床药理
“此草名叫浣葛草” “解肌退热,生津止渴” “会使血脉凝滞,肝行不畅”

七氟醚下调TRPV4

七氟醚下调TRPV4

前期研究发现,机械通气可上调花生四烯酸(AA )代谢途径关键限速酶胞质型磷脂酶A2(C-PLA2)活性使肺内AA 及其致炎性代谢产物生成增加继而引起VILI ,而七氟醚可通过阻断上述病理过程发挥其抗VILI 保护作用[1,2],但机械通气及七氟醚调控C-PLA2的具体机制尚未完全阐明。

由于细胞内钙离子浓度的增加是C-PLA2活化必不可少的条件[3],而TRPV4作为一种位于细胞膜上的钙离子通道,可被机械力直接激活[4],活化的TRPV4可上调C-PLA2表达继而引起高血压小鼠动脉内皮细胞收缩[5]。

体外实验研究发现,机械通气可激活TRPV4造成肺屏障功能破坏[6],而七氟醚可通过阻断瞬时受体电位钙离子通道发挥其心肌保护作用[7]。

据此我们推测机械通气所产生的机械力激活C-PLA2的具体机制与TRPV4有关,而七氟醚可通过阻断TRPV4/C-PLA2信号通路发挥其抗VILI 保护作用。

Sevoflurane alleviates ventilator-induced lung injury in rats by down-regulating the TRPV4/C-PLA2signaling pathwayWANG Wenfa 1,YANG Yong 2,WANG LI 3,GUO Xin 3,TIAN Lingfang 1,WANG He 1,HU Yuzhen 1,LIU Rui 31Department of Anesthesiology,Chuxiong Yi Autonomous Prefecture People's Hospital,Chuxiong 675000,China;2Experimental Center of Medical Function,Kunming Medical University,Kunming 650500,China;3Department of Anesthesiology,First People's Hospital of Yunnan Province/Affiliated Hospital of Kunming University of Science and Technology,Kunming 650032,China摘要:目的探讨七氟醚抗呼吸机诱导的肺损伤(VILI )的保护作用机制。

双歧杆菌三联活菌胶囊联合匹维溴铵在肠易激综合征患者中的应用效果

双歧杆菌三联活菌胶囊联合匹维溴铵在肠易激综合征患者中的应用效果

- 5 -①中国人民解放军海军第971医院消化内科 山东 青岛 266071通信作者:苑刚双歧杆菌三联活菌胶囊联合匹维溴铵在肠易激综合征患者中的应用效果苑刚① 孙海源① 孙波①【摘要】 目的:探究双歧杆菌三联活菌胶囊联合匹维溴铵在肠易激综合征患者中的应用效果及对肛肠动力学的影响。

方法:选择2020年6月—2022年12月中国人民解放军海军第971医院收治的80例肠易激综合征患者,根据随机数字表法分为两组,每组40例。

对照组采用匹维溴铵进行治疗,观察组则在对照组的基础上加用双歧杆菌三联活菌胶囊治疗。

比较两组治疗总有效率、治疗前后的症状体征积分(腹痛、腹胀及大便性状)、肛肠动力学指标(肛管静息压、肛管收缩压及直肠最大容量)及生存质量[肠易激综合征-生存质量量表(IBS-QOL)]。

结果:观察组治疗总有效率显著高于对照组,差异有统计学意义(P <0.05)。

治疗前两组症状体征积分、肛肠动力学指标及IBS-QOL 评分比较,差异均无统计学意义(P >0.05);治疗后观察组症状体征积分、IBS-QOL 评分均显著低于对照组,肛管静息压、肛管收缩压及直肠最大容量均显著高于对照组,差异均有统计学意义(P <0.05)。

结论:双歧杆菌三联活菌胶囊联合匹维溴铵在肠易激综合征患者中的应用效果较好,可显著改善患者的肛肠动力学,减轻症状体征,提高生活质量。

【关键词】 双歧杆菌三联活菌胶囊 匹维溴铵 肠易激综合征 肛肠动力学 Application Effect of Live Combined Bifidobacterium, Lactobacillus and Enterococcus Capsules Combined with Pinaverium Bromide in Patients with Irritable Bowel Syndrome/YUAN Gang, SUN Haiyuan, SUN Bo. //Medical Innovation of China, 2024, 21(03): 005-008 [Abstract] Objective: To investigate the application effect of Live Combined Bifidobacterium, Lactobacillus and Enterococcus Capsules combined with Pinaverium Bromide in patients with irritable bowel syndrome and its influence on anorectal dynamics. Method: A total of 80 patients with irritable bowel syndrome admitted to the 971st Hospital of the Chinese People's Liberation Army Navy from June 2020 to December 2022 were selected and divided into two groups according to random number table method, with 40 cases in each group. The control group was treated with Pinaverium Bromide, and the observation group was treated with Live Combined Bifidobacterium, Lactobacillus and Enterococcus Capsules on the basis of the control group. The total effective rate, symptoms and signs integrations (abdominal pain, abdominal bloating and fecal characteristics), anorectal dynamics indexes (resting anal pressure, anal systolic pressure and maximum rectal volume) and quality of life [irritable bowel syndrome-与妊娠期高血压疾病患者围产结局相关性分析[J].河北医药,2022,44(6):852-855.[17]杨慧,崔海峰.血清IFI16、sFlt-1、VEGF 在子痫前期孕妇中的表达和相关性探究[J].中国妇产科临床杂志,2020,21(5):530-531.[18]胡小娜,郭敏,熊杰,等.孕中期血清PLGF、sFlt-1、sEng、sCD40L 与子痫前期及胎儿不良结局的关系研究[J].中国妇产科临床杂志,2022,23(1):53-56.[19]张种,贺锐,赵翠生,等.孕妇血清PIGF、sFlt-1及sEng联合使用在预测子痫前期发病中的诊断价值分析[J].中国实验诊断学,2018,22(9):1534-1536.[20]赵影庭,卢海英,刘玮.血清PLGF、sFlt-1和sEng 水平与妊娠期高血压和子痫前期的严重程度及其不良结局关系[J].中国妇幼保健,2019,34(12):2714-2716.[21]陈勇,杨琴,李倩,等.高敏C 反应蛋白同型半胱氨酸与胰岛素抵抗对妊娠期糖尿病并发妊娠期高血压综合征的影响[J].中国预防医学杂志,2019,20(6):557-560.(收稿日期:2023-12-06) (本文编辑:占汇娟) 肠易激综合征在临床常见,其中以腹泻型相对多见。

逆行的高手

逆行的高手

逆行的高手
佚名
【期刊名称】《软件世界》
【年(卷),期】2004(000)003
【摘要】股市里有一种人非常善于逆势操作,他们专门去“经营”别人都不买的股票,因为他们善于发现这类股票的成长性。

同样,许多企业在被收购之后,原来的创始人常常会带着满满的荷包离开,而Jonathan schwartz的软件公司Lighthouse Design在1996年被Sun收购后,他却留了下来——因为他的个性与Sun公司的企业文化非常符合,他们都擅长于逆势操作。

对此,Schwartz深有体会地说:“这里不是人云亦云者的温床!在Sun公司、我们总可以提出与业界盛行的论调截然相反的想法,并且于行进中证明自己就是那掌握真理的少数人。

”【总页数】3页(P34-36)
【正文语种】中文
【中图分类】F407.67
【相关文献】
1.提高手背静脉逆行穿刺成功率的方法体会 [J], 刘迎春
2.原稿照登:当咏春高手遇上散打高手 [J], 周向前
3.原稿照登:当咏春高手遇上散打高手 [J], 周向前;
4.没养成高手的《终极高手》 [J], 曹珺萌
5.主流游戏高手盈通GTX660-2048GD5游戏高手显卡 [J],
因版权原因,仅展示原文概要,查看原文内容请购买。

  1. 1、下载文档前请自行甄别文档内容的完整性,平台不提供额外的编辑、内容补充、找答案等附加服务。
  2. 2、"仅部分预览"的文档,不可在线预览部分如存在完整性等问题,可反馈申请退款(可完整预览的文档不适用该条件!)。
  3. 3、如文档侵犯您的权益,请联系客服反馈,我们会尽快为您处理(人工客服工作时间:9:00-18:30)。

Top-Down Visual Saliency via Joint CRF and Dictionary Learning

Jimei Yang and Ming-Hsuan Yang
University of California at Merced
{jyang44,mhyang}@

Abstract

Top-down visual saliency facilitates object localization by providing a discriminative representation of target objects and a probability map for reducing the search space. In this paper, we propose a novel top-down saliency model that jointly learns a Conditional Random Field (CRF) and a discriminative dictionary. The proposed model is formulated based on a CRF with latent variables. By using sparse codes as latent variables, we train the dictionary modulated by CRF, and meanwhile a CRF with sparse coding. We propose a max-margin approach to train our model via fast inference algorithms. We evaluate our model on the Graz-02 and PASCAL VOC 2007 datasets. Experimental results show that our model performs favorably against the state-of-the-art top-down saliency methods. We also observe that the dictionary update significantly improves the model performance.

1. Introduction

Bottom-up visual saliency models the unconscious visual processing in early vision and is mainly driven by low-level cues (e.g., oriented filter responses and color).
In the last two decades, some basic principles, such as center-surround contrast [10], self-information [3], topological connectivity [8] and spectral residual [9], have been established for computing bottom-up saliency maps, which are shown to be effective for predicting human eye movements [3, 8] and for highlighting the informative regions of images [10, 9].

However, the data-driven nature of bottom-up saliency limits its applications in target-oriented computer vision tasks, such as object localization, detection and segmentation. In some cases when backgrounds are highly cluttered, due to lack of top-down prior knowledge, bottom-up saliency algorithms usually respond to numerous unrelated low-level visual stimuli (i.e., false positives) and thus miss the objects of interest (i.e., false negatives). In Figure 1, for instance, two typical bottom-up saliency maps ((b) and (c)) highlight a stop sign as interesting regions and do not distinguish the bicycle from two persons. In contrast, top-down saliency models learn from training examples to generate probability maps for localizing objects of interest, which are bicycle (d), car (e) and person (f), respectively.

Figure 1. Bottom-up and top-down saliency. Given an input image (a), we present two bottom-up saliency maps produced by [10] in (b) and [9] in (c). In the bottom panel, we present three top-down saliency maps for bicycle (d), car (e) and person (f) generated by our algorithm (best viewed in color).

Classic visual recognition problems entail detection (position and scale) and identification of objects (on the instance or category level). The difficulties of visual recognition mainly result from exploring a large search space (over position and scale) and modeling the high variability of object appearance (due to the changes of pose and illumination as well as occlusions). Recent progress on Bag-of-Words (BoW) models [27, 5, 2] reveals the effectiveness of patch-based representation. On the patch level, we
represent the object appearance by a dictionary of visual words and sample the image patches to reduce the complexity of searching over the parametric space. The performance of BoW models highly depends on the dictionary [20] and the sampling strategy [21]. We propose a novel top-down saliency model that facilitates visual recognition from those two perspectives.

The central idea of our top-down saliency model is to build a conditional random field (CRF) upon sparse coding of image patches with a joint learning approach. For any image patch, we use a binary variable to label the presence or absence of target objects. The use of conditional random field enables us to exploit the connectivity of adjacent image patches so that we can compute the saliency maps by incorporating local context information. On the other hand, the use of sparse coding facilitates us to model feature selectivity for the saliency map, which typically results in a more compact and discriminative representation.

We note that the proposed model is more than a straightforward combination of CRF and sparse coding. Instead, we formulate a novel CRF with sparse latent variables. By using sparse codes as latent variables, we learn a discriminative dictionary modulated by CRF, and meanwhile a CRF driven by sparse coding. We propose a max-margin approach to train the model by exploiting fast inference algorithms, such as graph cut [13]. We empirically evaluate our model on the Graz-02 [22] and PASCAL VOC 2007 [4] datasets and measure the quality of saliency maps by patch-level precision-recall rates. The experimental results show that our model performs favorably against several state-of-the-art top-down saliency algorithms [6, 12]. We also show that the dictionary update component of our algorithm significantly improves the performance of our model.

2. Related Work

We first discuss the related algorithms on top-down saliency maps and then briefly describe the CRF and dictionary learning methods that are related to the proposed joint learning
algorithm.

2.1. Top-Down Saliency Maps

Top-down visual saliency involves the processes of feature learning and saliency computation [6]. Gao et al. [6] propose a top-down saliency algorithm by selecting discriminant features from a pre-defined filter bank. Their discriminant features are characterized by the statistical difference of target presence or absence in the training images. With the selected features, the saliency values of interest points can be computed based on mutual information. Instead of using a pre-defined filter bank, Kanan et al. [12] propose to learn features with independent component analysis (ICA) from natural images, and construct a top-down saliency model by training a support vector machine (SVM). In our model, the target object features are learned from training images by CRF-modulated dictionary learning. In [12], the top-down saliency map is evaluated by both the appearance component (probabilistic output of the SVM) and a contextual prior of target location. This location prior performs well when there is a strong correlation between the target locations and holistic scenes, such as cars in urban scenes, but becomes less effective when target objects appear randomly anywhere in general cases (e.g., images from the Graz-02 and PASCAL VOC 2007 datasets). In contrast, we compute the saliency map by inference on CRF, which is more flexible in leveraging local context information.

2.2. CRFs

CRFs have been demonstrated as a flexible framework for incorporating different kinds of features for visual recognition [25, 24, 5, 7, 1]. In particular, CRFs are used 1) to learn an optimal combination of low-level cues (color and edge) and pre-learned high-level modules (e.g., part-based detectors, Bag-of-Words classifiers), and 2) to accommodate inference functions (e.g., graph cut and belief propagation) for graphical models of specific visual recognition problems.
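As a minimal illustration of what such inference computes, the toy example below evaluates exact node marginals of a 2×2-grid binary CRF by brute-force enumeration; the unary scores, edge set and smoothness weight are made-up values. On real image grids this enumeration is intractable and is replaced by graph cut (for MAP labeling) or belief propagation (for approximate marginals):

```python
import itertools
import numpy as np

# A toy 4-node (2x2 grid, four-connected) binary CRF with a Potts-style
# smoothness term. node_score[i] is a hypothetical unary preference for
# label +1 at node i; w2 penalizes disagreeing neighbors.
node_score = np.array([1.2, 0.8, -0.5, -1.0])
edges = [(0, 1), (0, 2), (1, 3), (2, 3)]
w2 = 0.5

def energy(labels):
    """E(Y) = -sum_i y_i * score_i + w2 * sum_(i,j) [y_i != y_j]"""
    e = -sum(y * s for y, s in zip(labels, node_score))
    e += w2 * sum(labels[i] != labels[j] for i, j in edges)
    return e

# Exact marginals p(y_i = +1) by enumerating all 2^4 labelings.
labelings = list(itertools.product([-1, 1], repeat=4))
weights = np.array([np.exp(-energy(Y)) for Y in labelings])
Z = weights.sum()                       # partition function
marginals = np.array([
    sum(w for w, Y in zip(weights, labelings) if Y[i] == 1) / Z
    for i in range(4)
])
```

Nodes with a positive unary score end up with marginals above 0.5, and the pairwise term pulls neighboring nodes toward agreeing labels; this per-node marginalization is exactly the quantity the saliency model below reads out.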
In this sense, CRFs are used to integrate different cues [24] or to refine labeling results [5]. In our model, the CRF parameters include the node classifier built on sparse coding, so that the number of CRF parameters is not several combination coefficients but hundreds or thousands of classifier coefficients, up to the number of bases in sparse coding. A similar idea has been explored in the Discriminative Random Field model [14], which learns node and edge logistic classifiers simultaneously. We note that it is rather challenging to learn a large set of parameters from limited training samples. Instead of using the pseudo-likelihood method [14], we take a discriminative training approach by converting the likelihood maximization into an inequality constrained optimization problem [11, 25]. Aside from the node classifier, our model also involves learning a dictionary, which is essential for representing object appearance on the patch level. Therefore, our saliency formulation can be considered as a latent variable model that trains a CRF classifier jointly with dictionary learning. Although our model bears some resemblance to the hidden CRFs [23, 26] developed for object and action recognition, they are intrinsically different. The hidden CRFs use a vector of latent variables to represent unobserved part labels of local patches in an observed image, whereas our model uses latent variables to model the sparse representations of local observations with the dictionary. In addition, the hidden CRFs predict one category label for the input image, while our model produces a saliency map predicting the presence of target objects.

2.3. Dictionary Learning

Recent advances in machine learning enable us to train task-specific dictionaries in a supervised manner [18, 28, 17]. Mairal et al. [18] combine sparse coding and classification loss in a single optimization objective. Although this method shows promising results on digit recognition and texture classification, it is not clear how it performs on
complex object images, as it has no mechanism for integrating local evidences. Yang et al. [28] propose to learn a translation-invariant dictionary for image classification via back-projection techniques. Their method performs well for face and digit recognition because of the translation invariance property obtained from max pooling. Our model is able to learn dictionaries from complex object images (e.g., bicycles, cars, persons) in cluttered backgrounds. Unlike [28], which uses max pooling to resolve geometric ambiguity, we use CRF to regulate the patches within their local contexts for learning salient visual words in complex scenes.

3. Problem Formulation

Given an image, we are interested in knowing whether and where the target objects appear. For a local image patch x ∈ R^p, we assign a binary label y to indicate the presence (y = 1) or absence (y = −1) of a target object. We sample a set of patches X = {x_1, x_2, ..., x_m} from different locations of the image as the observations. The corresponding labels Y = {y_1, y_2, ..., y_m} carry the information of target presence. In a particular scale, a sampled patch x_i usually carries partial information about the target object. It is thus challenging to directly infer the presence of the target from x_i without considering the others, due to the semantic and geometric ambiguities of patch-level representations.

Suppose that there exists a dictionary D ∈ R^{p×k} that stores the most representative object parts (visual words) {d_1, d_2, ..., d_k} learned from the training data. We introduce a vector of latent variables s_i ∈ R^k to model the sparse representation of x_i = D s_i, which is usually obtained by optimizing the following problem,

    s(x, D) = \arg\min_s \tfrac{1}{2}\|x - Ds\|_2^2 + \lambda\|s\|_1,    (1)

where λ is a parameter controlling the sparse penalty. We denote the latent variables for all the patches by S(X, D) = [s(x_1, D), s(x_2, D), ..., s(x_m, D)]. Note that we use the notations s(x, D) and S(X, D) to emphasize that the sparse latent variables are a function of the dictionary. In the following sections, we simplify the notations by s_i ≜
s(x_i, D) and S ≜ S(X, D) for presentation clarity when necessary. The sparse coding formulation in Eqn. 1 can be solved efficiently [15]. Through our sparse coding formulation, the visual information contained in the dictionary is transferred into the latent variables by S(X, D), which is thus more informative than the image patches X.

If a local patch shows evidence about target objects, it is likely that nearby patches also exhibit similar support. We build a four-connected graph G = <V, E> on the sampled patches based on their spatial adjacency, where V denotes the nodes and E the edges. Assuming that the labels Y enjoy the Markov property on the graph G conditioned on the sparse latent variables S(X, D), we formulate a novel CRF model by

    P(Y | S(X, D), w) = \frac{1}{Z} e^{-E(S(X, D), Y, w)},    (2)

where Z is the partition function, E(S(X, D), Y, w) is the energy function and w is the CRF weight vector. This formulation enables us to jointly learn the CRF weight w and the dictionary D. Given the CRF weight w, the model in Eqn. 2 can be viewed as CRF-supervised dictionary learning, whereas given the dictionary D, it can be viewed as CRF learning with sparse coding. In this model, we can easily retrieve the target information at a particular node i ∈ V from its marginal probability

    p(y_i | s_i, w) = \sum_{y_{N(i)}} p(y_i, y_{N(i)} | s_i, w),    (3)

where N(i) denotes the neighbors of node i on the graph G. We define the saliency value of the patch i as

    u(s_i, w) = p(y_i = 1 | s_i, w),    (4)

and thus the saliency map U(S, w) = {u_1, u_2, ..., u_m} can be inferred by message passing algorithms. This probabilistic definition of top-down saliency map leverages not only the appearance information [6, 12], but also the local contextual information through the marginalization in Eqn. 3.

We decompose the energy function E(S(X, D), Y, w) into node and pairwise energy terms. For each node i ∈ V, the energy is measured by the total contribution of sparse codes, ψ(s_i, y_i, w_1) = −y_i w_1^T s_i, where w_1 ∈ R^k is the weight vector. For each edge (i, j) ∈ E, we only consider the data-independent smoothness ψ(y_i, y_j, w_2) = w_2 I(y_i, y_j), where the
scalar w_2 measures the weight of labeling smoothness and I is an indicator function equaling one for different labels. Therefore, the random field energy can be detailed as

    E(S, Y, w, D) = \sum_{i \in V} \psi(s_i, y_i, w_1) + \sum_{(i,j) \in E} \psi(y_i, y_j, w_2).    (5)

Note that our energy function is linear in the parameter w = [w_1; w_2], which is similar to most CRF models [24, 25, 1], but is nonlinear in the dictionary D that is implicitly defined by s(x, D) in Eqn. 1. This nonlinear parametrization makes it challenging to learn the model. We discuss our learning approach in the next section.

Let us now assume that we have learned the optimal CRF parameters ŵ and the dictionary D̂. Our top-down saliency formulation in Eqn. 2 does not involve complex evaluations of latent variables [23, 18], and makes it feasible to infer the saliency map in a straightforward manner without alternating between the evaluation of latent variables and label inference. For a test image X = {x_1, x_2, ..., x_m}, we compute its saliency map U as follows:

1. evaluate the sparse latent variables S(X, D̂) by Eqn. 1;
2. infer the saliency map U(S, ŵ) by Eqn. 3 and Eqn. 4.

4. Joint CRF and Dictionary Learning

Let X = {X^(1), X^(2), ..., X^(N)} be a collection of training images and Y = {Y^(1), Y^(2), ..., Y^(N)} be the corresponding labels. We aim to learn the CRF parameters ŵ and the dictionary D̂ that maximize the joint likelihood of the training samples,

    \max_{w \in R^{k+1}, D \in \mathcal{D}} \prod_{n=1}^{N} P(Y^{(n)} | S(X^{(n)}, D), w),    (6)

where S^(n) is a shorthand for S(X^(n), D) and 𝒟 is the convex set of dictionaries that satisfies the following constraint:

    \mathcal{D} = \{D \in R^{p \times k}, \|d_j\|_2 \le 1, \forall j = 1, 2, ..., k\}.    (7)

4.1. Max-Margin Approach

The difficulties in CRF learning mainly lie in evaluating the partition function Z of Eqn. 2. Inspired by the max-margin CRF learning approaches [25, 1], we pursue the optimal w and D so that for all Y ≠ Y^(n), n = 1, ..., N,

    P(Y^{(n)} | S(X^{(n)}, D), w) \ge P(Y | S(X^{(n)}, D), w).    (8)

This constrained optimization allows us to cancel the partition function Z from both sides of the constraints and express them in terms of energies:

    E(Y^{(n)}, S^{(n)}, w) \le E(Y, S^{(n)}, w).    (9)
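The two ingredients of these constraints — the sparse codes s(x, D) of Eqn. 1 and the node energies of Eqn. 5 — can be sketched numerically as follows. The ISTA solver below is a stand-in for the efficient solver of [15], and the dictionary, patch and node weights are random toy data, not values from the paper:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code(x, D, lam=0.15, n_iter=500):
    """ISTA for Eqn. 1: min_s 0.5 * ||x - D s||^2 + lam * ||s||_1."""
    L = np.linalg.norm(D, 2) ** 2      # Lipschitz constant of the smooth part
    s = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ s - x)       # gradient of the quadratic term
        s = soft_threshold(s - grad / L, lam / L)
    return s

rng = np.random.default_rng(0)
p, k = 64, 128                         # toy feature and dictionary sizes
D = rng.standard_normal((p, k))
D /= np.linalg.norm(D, axis=0)         # columns obey ||d_j||_2 <= 1 (Eqn. 7)
x = rng.standard_normal(p)
s = sparse_code(x, D)

# Node energy term of Eqn. 5 for a patch with label y in {-1, +1}:
# psi(s, y, w1) = -y * w1^T s, with stand-in node weights w1.
w1 = rng.standard_normal(k) * 0.1
psi = lambda y: -y * (w1 @ s)
```

The resulting code s has far fewer active entries than the dictionary has atoms, and flipping the label flips the sign of the node energy, which is what makes the energy comparisons in the constraints above discriminative.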
Furthermore, we expect the ground truth energy E(Y^(n), S(X^(n), D), w) to be less than any other energy E(Y, S(X^(n), D), w) by a large margin Δ(Y, Y^(n)). We thus have a new constraint set

    E(Y^{(n)}, S^{(n)}, w) \le E(Y, S^{(n)}, w) - \Delta(Y, Y^{(n)}).    (10)

In this paper, we define the margin function Δ(Y, Y^(n)) = \sum_{i=1}^{m} I(y_i, y_i^{(n)}). There is an exponentially large number of constraints with respect to the labeling Y^(n) for each training sample. Similar to the cutting plane algorithm [11], we seek the most violated constraints by solving

    \hat{Y}^{(n)} = \arg\min_Y E(Y, S^{(n)}, w) - \Delta(Y, Y^{(n)}).    (11)

Therefore, we are able to learn the weight w and the dictionary D by minimizing the following objective function,

    \min_{w, D \in \mathcal{D}} \frac{\gamma}{2}\|w\|^2 + \sum_{n=1}^{N} \ell_n(w, D),    (12)

where ℓ_n(w, D) ≜ E(Ŷ^(n), S^(n), w) − E(Y^(n), S^(n), w) and γ controls the regularization of w.

We note that our approach shares a similar objective function with the latent structural SVM [29]. The difference is that the latent structural SVM is linearly parameterized while ours is nonlinear in the dictionary D.

4.2. Learning Algorithm

We propose a stochastic gradient descent algorithm for optimizing the objective function in Eqn. 12. The basic idea is simple and easy to implement. At the t-th iteration, we randomly select a training instance (X^(n), Y^(n)), and then

1. evaluate the sparse latent variables with the dictionary D^(t−1) by Eqn. 1,
2. obtain the most violated labeling with the weight w^(t−1) by Eqn. 11,
3. update the weight w^(t) and the dictionary D^(t) by the gradients of the loss function ℓ_n.

We next describe the methods of computing the gradients with respect to the weight and the dictionary. When the latent variables S are known, the energy function E(Y, S, w) is linear in w (Eqn. 5),

    E(Y, S, w) = \langle w, f(S, Y) \rangle,    (13)

where f(S, Y) = [−\sum_{i \in V} s_i y_i; \sum_{(i,j) \in E} I(y_i, y_j)]. We can thus compute the gradient with respect to w,

    \frac{\partial \ell_n}{\partial w} = f(S^{(n)}, \hat{Y}^{(n)}) - f(S^{(n)}, Y^{(n)}) + \gamma w.    (14)

The dictionary is not explicitly defined in the energy function of Eqn. 12 but implicitly by sparse coding (Eqn. 1). We use the chain rule of differentiation to compute the gradient of ℓ_n with respect to the
dictionary,

    \frac{\partial \ell_n}{\partial D} = \sum_{i \in V} \frac{\partial \ell_n}{\partial s_i} \frac{\partial s_i}{\partial D}.    (15)

The difficulty of computing this gradient lies in that there is no explicit differentiation of the sparse code s with respect to the dictionary D. We overcome this difficulty by using implicit differentiation on the fixed point equation, in a way similar to [28] and [17]. We first establish the fixed point equation of Eqn. 1,

    D^{\top}(Ds - x) = -\lambda \, \mathrm{sign}(s),    (16)

where sign(s) denotes the sign of s in a point-wise manner and sign(0) = 0. We calculate the derivative with respect to D on both sides of Eqn. 16, and have

    \frac{\partial s_{\Lambda}}{\partial D} = (D_{\Lambda}^{\top} D_{\Lambda})^{-1} \Big( \frac{\partial D_{\Lambda}^{\top} x}{\partial D} - \frac{\partial D_{\Lambda}^{\top} D_{\Lambda}}{\partial D} \Big),    (17)

where we denote Λ as the index set of the non-zero codes of s and Λ̄ as the index set of the zero codes. To simplify the gradient computation in Eqn. 15, we introduce a vector of auxiliary variables z for each s,

    z_{\bar{\Lambda}} = 0, \qquad z_{\Lambda} = (D_{\Lambda}^{\top} D_{\Lambda})^{-1} \frac{\partial \ell_n}{\partial s_{\Lambda}},    (18)

where ∂ℓ_n/∂s_Λ = (y_i − ŷ_i) w_Λ. In addition, we denote Z = [z_1, z_2, ..., z_m]. Therefore, the gradient of ℓ_n with respect to D is computed by

    \frac{\partial \ell_n}{\partial D} = -DZS^{\top} + (X - DS)Z^{\top}.    (19)

The proposed joint learning algorithm is summarized in Algorithm 1.

Algorithm 1: Joint CRF and dictionary learning.
Input: X (training images) and Y (ground truth labels); D^(0) (initial dictionary); w^(0) (initial CRF weight); λ (in Eqn. 1); T (number of cycles); γ (in Eqn. 12); ρ_0 (initial learning rate).
Output: D̂ and ŵ.
for t = 1, ..., T do
    Permute the training samples (X, Y)
    for n = 1, ..., N do
        Evaluate the latent variables s_i by Eqn. 1, ∀ i ∈ V;
        Solve the most violated labeling Ŷ^(n) by Eqn. 11;
        Update the weight w by Eqn. 14: w^(t) = w^(t−1) − ρ_t ∂ℓ_n/∂w^(t−1);
        Find the active set Λ_i for s_i, ∀ i ∈ V;
        Compute the auxiliary variables z_i by Eqn. 18;
        Update the dictionary D by Eqn. 19: D^(t) = D^(t−1) + ρ_t ∂ℓ_n/∂D^(t−1);
        Project the dictionary D^(t) onto 𝒟 by Eqn. 7;
        Update the learning rate ρ: ρ_t = ρ_0 / n
    end for
end for

5. Experiments

We evaluate the proposed top-down saliency model on the Graz-02 and PASCAL VOC 2007 datasets. We choose these two datasets because they both contain real-world images with large amounts of intra-class variations, occlusions and background clutters. The MATLAB code and experimental results are available from our
website (http:///mhyang/pubs.html).

5.1. Graz-02

The Graz-02 dataset contains three categories (bicycles, cars and persons) and a background class. Each category has 300 images of size 640×480 pixels and the corresponding pixel-level foreground/background annotations. The task is to evaluate the performance of top-down saliency maps in localizing target objects against the background. We sample image patches of 64×64 pixels by shifting 16 pixels, so that we collect 999 patches on a 27×37 grid for each image. We use the same patch sampling method for all the following experiments. SIFT descriptors [16] are extracted from each image patch to represent the object appearance. We label a patch as positive if at least one quarter of its total pixels are foreground; otherwise we label it as negative. We thus obtain patch-level ground truth from the original pixel-level annotations. For each category, we use the 150 odd-numbered images of the category and an additional 150 odd-numbered images from the background class as the training set, and the remaining 150 even-numbered images of the category and 150 even-numbered background images as the test set.

To train our saliency model by Algorithm 1, we need to initialize the dictionary and the CRF. We collect all the SIFT descriptors from the training set and use the K-means algorithm to initialize the dictionary D^(0). After evaluating the latent variables by sparse coding, we initialize the CRF node energy weight w_1^(0) by training a linear SVM on the sparse codes and the corresponding patch labels. For the pairwise energy weight w_2^(0), we simply set it to 1. All the models are trained with 20 cycles.

There are two important parameters in our model. One is the number of visual words (atoms) k in the dictionary, which controls the capacity of modeling the appearance variations. Usually, a dictionary of larger size will produce better results but is more difficult to learn, as it requires more training examples with a higher computational cost. In our experiments, we train the models with 256 or 512 visual
words. The other parameter λ controls the sparse penalty defined in Eqn. 1. The greater λ is, the sparser the latent variables are and the fewer visual words are selected to represent an image patch. We use two values, 0.15 and 0.30, for λ in the experiments. In Algorithm 1, we set the initial learning rate ρ_0 = 1e−3 and the weight penalty γ = 1e−5. To demonstrate the effectiveness of joint CRF and dictionary learning, we build a baseline model by directly combining sparse coding and CRF, that is, learning the CRF weight by using sparse codes computed from the initial dictionary as features. We also compare our model with two state-of-the-art top-down saliency algorithms [6, 12] using our own implementations. For the discriminant saliency detection algorithm [6] (DSD), we first construct a DCT (Discrete Cosine Transform) dictionary with 256 filters of size 64×64, and then select the 100 salient features with the largest mutual information. More details can be found in [6]. For the saliency using natural statistics algorithm [12] (SUN), we first reduce the dimension of the image patches by Principal Component Analysis (PCA) and then learn 724 ICA filters from the training data. Using the ICA filter responses as features, a linear SVM is trained to compute the saliency values of patches.

All these models (ours, baseline, DSD, SUN) are evaluated by patch-level precision-recall rates on the test set of each category. Figure 2 shows the precision-recall curves for the three object categories.

Figure 2. Patch-level precision-recall curves (precision vs. recall) on the Graz-02 dataset for (a) bicycle, (b) car and (c) person.

In Table 1, we compare our results for different parameters (k, λ) with the other models by precision rates at equal error rates (EER, where precision is equal to recall).

Figure 3. Comparing top-down saliency maps produced by the proposed, DSD and SUN models for (a) bicycle, (b) car and (c) person.

    Method                      Bicycle   Car    Person
    DSD [6]                     62.5      37.6   48.2
    SUN [12]                    61.9      45.7   52.2
    Baseline, k=512, λ=0.15     71.9      39.3   56.8
    Ours, k=256, λ=0.15         73.3      57.5   64.2
    Ours, k=512, λ=0.15         80.1      68.6   72.4
    Ours, k=512, λ=0.30         73.5      66.6   69.6

Table 1. Precision rates (%) at EER on the Graz-02 dataset.

    Method                      Bicycle   Car    Person
    Ours                        62.4      60.0   62.0
    [19] (full framework)       61.8      53.8   44.1

Table 2. Precision rates (%) at EER against shape mask [19].

The best results are obtained by our model with the parameters k = 512, λ = 0.15. We can see the clear improvements of our models over the baseline and the other algorithms. The DSD algorithm selects salient features based on image-level statistics, which usually has limited ability to suppress background image patches. In general, the DSD method generates a high recall rate but a low precision rate. The SUN algorithm performs better than the DSD method due to its use of a strong classifier. Without considering the local context, the SUN algorithm tends to produce noisy saliency maps. Our models are able to produce clear saliency maps when target objects appear in different viewpoints and scales with partial occlusions. We compare saliency maps of the DSD, SUN and proposed models in Figure 3. In Figure 4, we present more saliency maps produced by our models. Note that our saliency model is able to locate heavily occluded objects (e.g., bicycles and cars), whereas state-of-the-art object detection methods are not expected to perform well in such cases.

A saliency map of an image has the size of its patch grid, i.e., 27×37. To visualize the localization performance, we upsample the original saliency map to the size of the image so that we get pixel-level results. We notice that our pixel-level saliency maps are similar to the output of the shape mask model [19] (approximate object regions). In Table 2, we compare our results with shape masks by measuring pixel-level precision-recall rates on the same test set (150 even-numbered images from each object category). Our results are consistently better than those by the shape mask model [19].

Figure 4. Our saliency maps of bicycle, car and person categories from the Graz-02 dataset. Our model is robust
to viewpoint changes,scale variations and partial occlusions.model[19].Compared with patch-level results,the perfor-mance drop we observe in Table2(compared with Table1) is mainly because:1)there are many background pixels in-cluded with object regions,especially for bicycle images;2) object boundaries are not preserved in our saliency maps.Our saliency model jointly learns CRF weight and dic-tionary from the training examples by gradient updates(Al-gorithm1).We are interested in how the dictionary update help improve the model performance.We record the CRF weight and the dictionary at each training cycle and evalu-ate them on the test set.Figure6shows the precision rates at EER of each cycle.It can be seen that the performance improves dramatically in thefirst several cycles and get con-verged after10cycles.The stochastic nature of our learning algorithm results in some performance perturbation at someFigure5.Saliency maps generated by our model.aeroplane bicycle bird boat bottle bus car cat chair cow15.239.09.4 5.7 3.422.030.515.8 5.78dining table dog horse motorbike person potted plant sheep sofa train tv monitor11.112.810.923.742.0 2.020.210.424.710.5Table3.Precision rates(%)at EER on the PASCAL VOC2007dataset.cycles.The results show that dictionary update significantlyimproves the model performance.051015200.70.710.720.730.740.750.76PrecisionrateatEERBicycle0.450.50.550.60.65PrecisionrateatEERCar0.590.60.610.620.630.640.65PrecisionrateatEERPerson(a)bicycle(b)car(c)personFigure6.Performance gain with training cycles.The dictionarysize k=256and the sparse penaltyλ=0.15.5.2.PASCAL VOC2007The PASCAL VOC2007dataset consists of9963im-ages from20categories and background class where objectsegmentation annotations are available for632images.Weevaluate our top-down saliency models for the task of local-izing target objects against the background and the objectsfrom other categories.Objects from different categories of-ten share similar part appearance.For 
example,bicycles,motorbikes and buses share similar wheel structures.Thisphenomenon makes it challenging to discriminate target ap-pearance from the others on the patch level.We use thesame training and test split as the PASCAL VOC2007ob-ject segmentation challenge,i.e.,422images for trainingand210images for tests.Similar to the experiments on theGraz-02dataset,we create20object saliency masks fromsegmentation annotations for each image.We notice thatonly few examples contain target objects for each categoryin the training set,compared with negative examples.Tolearn a model from a unbalanced dataset,we also use thebounding box annotations of the positive examples for train-ing.We create saliency masks for those images by measur-ing whether the sampled patches fall into target boundingboxes.For each category,we train a saliency model with20cycles.The number of visual words in the dictionary is512and the sparse penaltyλis0.15.We present some representative saliency maps in Fig-ure5.We observe that our saliency model performs well forthose objects that have rich inner structures,such as bicycle,motorbike,person and train;while it does not perform wellfor objects that are identified by their global shapes and col-ors,such as dinning tables,potted plants,bottles and sofas.We quantitatively evaluate our results with the precision-recall rates.The precision rates at EER are shown in Ta-ble3.Figure7shows saliency maps on two images thatcontain instances from more than one categories.There aresome categories that share similar part appearance on thepatch level,which causes confusions between relevant cat-egory models(7(a)),such as(1)bicycle and motorbike;(2)train and bus;(3)dog and cat.Our model partially depends on whether the target ob-jects contain rich structured information on the patch level.Taking airplane as example,our model does not work wellfor the cases where large airplanes are the dominant in theimages(e.g.,only parts of airplanes are viewable)becauselocal 
patches of those images contain limited relevant infor-mation(i.e.,plain patches only)while our model success-fully localizes small airplanes at a scale close to the patchsize used in the experiments.Considering we sample image。
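All results in this section are reported as the precision rate at EER, i.e., the precision at the point on the precision-recall curve where precision equals recall. As a rough illustration of this metric (not the authors' evaluation code; the threshold sweep and the equal-point selection below are assumptions), a patch-level saliency map can be scored against its binary object mask as follows:

```python
import numpy as np

def precision_at_eer(saliency, mask, num_thresholds=256):
    """Precision at the equal error rate (precision == recall) of the PR curve.

    saliency: array of per-patch saliency scores in [0, 1].
    mask: binary array of the same shape (1 = object patch).
    """
    saliency = saliency.ravel()
    mask = mask.ravel().astype(bool)
    best = None  # (|precision - recall|, precision) at the closest-to-EER threshold
    for t in np.linspace(0.0, 1.0, num_thresholds):
        pred = saliency >= t
        if pred.sum() == 0 or mask.sum() == 0:
            continue
        tp = np.sum(pred & mask)
        precision = tp / pred.sum()
        recall = tp / mask.sum()
        gap = abs(precision - recall)
        if best is None or gap < best[0]:
            best = (gap, precision)
    return best[1] if best else 0.0
```

In practice the curve is computed over all test images of a category; this sketch only shows the per-map arithmetic behind the numbers in Tables 1-3.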

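The pixel-level results above come from upsampling the 27×37 patch-grid saliency map to the image size. The paper does not state which interpolation is used; a minimal bilinear sketch (the function name and interface are ours) could look like:

```python
import numpy as np

def upsample_saliency(patch_map, out_h, out_w):
    """Bilinearly upsample a patch-grid saliency map (e.g. 27x37) to image size."""
    h, w = patch_map.shape
    # Fractional source coordinates for every output pixel.
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]  # vertical interpolation weights
    wx = (xs - x0)[None, :]  # horizontal interpolation weights
    top = patch_map[np.ix_(y0, x0)] * (1 - wx) + patch_map[np.ix_(y0, x1)] * wx
    bot = patch_map[np.ix_(y1, x0)] * (1 - wx) + patch_map[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy
```

Any standard image-resize routine would serve the same purpose; the point is that pixel-level precision-recall (Table 2) is measured on this upsampled map, which is why object boundaries are not preserved.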
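For the unbalanced PASCAL VOC training described in Section 5.2, saliency masks are derived from bounding boxes by testing whether each sampled patch falls inside a target box. The paper does not give the overlap criterion, so the 50% overlap threshold, the regular sampling stride, and the interface below are assumptions for illustration only:

```python
import numpy as np

def patch_mask_from_boxes(grid_h, grid_w, patch_size, stride, boxes, min_overlap=0.5):
    """Mark a grid patch as salient if enough of its area lies inside a target box.

    boxes: list of (x0, y0, x1, y1) bounding boxes in pixel coordinates.
    Returns a binary (grid_h, grid_w) saliency mask.
    """
    mask = np.zeros((grid_h, grid_w), dtype=np.uint8)
    area = patch_size * patch_size
    for i in range(grid_h):
        for j in range(grid_w):
            # Pixel extent of patch (i, j) under a regular sampling stride.
            px0, py0 = j * stride, i * stride
            px1, py1 = px0 + patch_size, py0 + patch_size
            for (bx0, by0, bx1, by1) in boxes:
                iw = max(0, min(px1, bx1) - max(px0, bx0))
                ih = max(0, min(py1, by1) - max(py0, by0))
                if iw * ih >= min_overlap * area:
                    mask[i, j] = 1
                    break
    return mask
```

Masks built this way are noisier than segmentation-derived ones (background inside a box is labeled positive), which is consistent with using them only to augment the scarce positive examples.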