毕设-基于DSP的运动目标图像跟踪算法研究与实现-正文-下
基于FPGA+DSP架构的目标跟踪系统设计与实现

摘要:本文提出了一种基于FPGA+DSP架构的目标跟踪系统设计与实现方法。
该系统主要通过FPGA实现特征提取和目标检测,通过DSP实现目标跟踪与位置估算。
本文首先介绍了系统设计的基本框架、硬件实现和软件算法,然后详细说明了系统设计过程中遇到的问题及其解决方案,最后对系统的性能进行了评估。
结果表明,该系统可以完成快速、准确的目标跟踪,实现了较高的跟踪精度和实时性。
关键词:FPGA+DSP,目标跟踪,特征提取,目标检测,位置估算,跟踪精度
一、引言
目标跟踪技术是计算机视觉领域中的重要研究方向之一,其应用范围广泛,例如智能监控、自动驾驶、无人机导航等。
当前,基于计算机的目标跟踪系统大多采用CPU或GPU为核心架构,但由于计算量大、处理速度难以满足实时性要求等问题,其在实际应用中存在诸多限制。
为此,本文提出了一种基于FPGA+DSP架构的目标跟踪系统,旨在提高跟踪系统的速度和精度。
二、系统设计
该系统主要由图像采集模块、图像处理模块、位置估算模块和跟踪控制模块组成。其中图像采集模块采用USB摄像头获取场景图像,图像处理模块主要通过FPGA实现图像特征提取和目标检测,位置估算模块通过DSP实现目标位置的计算,跟踪控制模块主要实现目标跟踪和控制。
三、硬件实现
图像采集模块采用AVT-Basler acA2040摄像头,通过USB 3.0接口连接到PC机上。
图像处理模块采用Xilinx 的 Kintex-7 FPGA,使用Verilog HDL 实现图像处理算法。
位置估算模块采用TI 的TMS320C6678 DSP,使用C语言实现目标位置的计算。
四、软件算法
该系统主要采用了基于HOG+SVM的目标检测算法和KCF(Kernelized Correlation Filters)跟踪算法。
其中HOG算法可以有效地提取图像中目标的梯度方向特征,在SVM分类器的支持下实现目标的检测;而KCF算法利用循环矩阵结构,将核相关滤波运算转换到频域中高效求解,从而实现图像中目标的跟踪。
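作为补充说明,下面给出一段示意性的C代码,演示HOG特征中单个cell的梯度方向直方图是如何统计的。代码仅为便于理解的草图:cell大小、方向bin数等参数均为常用的假设值,并非上述系统的实际实现。
#include <math.h>
#ifndef M_PI
#define M_PI 3.14159265358979
#endif
#define CELL  8     /* cell边长(像素),假设值 */
#define NBINS 9     /* 方向直方图的bin数,假设值 */
/* img:按行存储的灰度图;w/h:图像宽高;cx/cy:cell左上角坐标;hist:输出NBINS维直方图 */
void hog_cell_hist(const unsigned char *img, int w, int h,
                   int cx, int cy, float hist[NBINS])
{
    int i, j, bin;
    float gx, gy, mag, ang;
    for (i = 0; i < NBINS; i++) hist[i] = 0.0f;
    for (j = cy; j < cy + CELL; j++) {
        for (i = cx; i < cx + CELL; i++) {
            if (i <= 0 || j <= 0 || i >= w - 1 || j >= h - 1) continue;
            gx = (float)img[j * w + (i + 1)] - (float)img[j * w + (i - 1)];   /* 水平梯度 */
            gy = (float)img[(j + 1) * w + i] - (float)img[(j - 1) * w + i];   /* 垂直梯度 */
            mag = sqrtf(gx * gx + gy * gy);
            ang = atan2f(gy, gx);                     /* -pi ~ pi */
            if (ang < 0.0f) ang += (float)M_PI;       /* 取无符号方向 0 ~ pi */
            bin = (int)(ang / (float)M_PI * NBINS);
            if (bin >= NBINS) bin = NBINS - 1;
            hist[bin] += mag;                         /* 按梯度幅值加权投票 */
        }
    }
}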
基于DSP的图像识别与跟踪技术研究

在当今大数据技术不断发展的背景下,计算机视觉技术得到了快速的发展。
其中,基于数字信号处理(DSP)的图像识别与跟踪技术是计算机视觉中重要的一部分。
本文主要对基于DSP的图像识别与跟踪技术进行研究,探讨其优势、应用及未来发展趋势。
一、基于DSP的图像识别技术
数字信号处理是指运用数字信号处理器进行信号处理的技术。
在计算机视觉领域,数字信号处理器通常被用于加速图像处理和图像分析,用于实现实时图像处理和优化图像处理算法的运行速度。
基于DSP的图像识别技术在计算机视觉研究中具有重要作用。
基于DSP的图像识别技术主要应用于图像分类、目标检测和物体识别等方面。
其中,图像分类是将输入的图像分成不同类别的过程。
目标检测是指在图像中找到目标的位置并标记出来,对于视频监控、安防等领域有着广泛的应用。
物体识别是对物体进行分类、检测和跟踪定位,具有广泛的应用前景。
在图像识别技术中,深度学习算法是目前最优秀的图像识别技术之一。
深度学习算法是模拟人脑神经网络,通过多层的神经网络学习,实现对不同类别图片的识别。
此外,支持向量机和特征提取等算法也是常用的图像识别算法。
二、基于DSP的图像跟踪技术
图像跟踪是图像处理中一种重要的技术,它能够追踪目标物体在图像序列中的位置和大小。
基于DSP的图像跟踪技术可以实现实时定位、追踪目标物体,在机器视觉、自动控制、视频监控等领域得到广泛应用。
基于DSP的图像跟踪技术的发展趋势主要有以下几个方向:一是跟踪算法的改进和优化;二是融合多种跟踪算法进行跟踪;三是实现大数据量的实时处理,提高跟踪的精度和效率;四是深度学习算法在图像跟踪中的应用和研究。
三、应用及未来发展趋势
基于DSP的图像识别与跟踪技术在现代工业、医学、航空、航天、农业等众多领域拥有广泛的应用。
在工业生产中,基于DSP的图像识别技术可以实现对产品检测、制造流程监控等工作的自动化处理和控制。
在医学领域,基于DSP的图像识别技术可以实现对疾病的检测和诊断,提高医学诊疗的精度和效率。
基于DSP的运动目标识别与跟踪系统的设计

1 引言
运动目标的识别与跟踪是智能视频分析领域的一部分,它通过分析视频序列图像中具有相关性的有效特征信息,对运动目标进行提取、定位、识别和跟踪,分析运动目标的行为并对其作出解释,为最终的行为决策提供科学依据。
目前,基于DSP 的目标识别与跟踪研究中,大都实现算法简单,实用性不强[1],或者需要与上位机、FPGA 芯片等配合使用[2-3]。
笔者设计了基于TMS320DM642 DSP芯片和相邻帧差Cam-shift(Continuously Adaptive Mean-shift)算法的嵌入式运动目标识别与跟踪系统,实现了简单背景下对单目标的有效跟踪,具有一定的抗干扰能力,能满足实时性要求。
2 Cam-shift目标跟踪算法
目前,比较成熟的目标识别算法主要有帧差法[4]、背景减法[5]、光流法[6]等。
目标跟踪算法可分为基于光流法的方法和基于特征的方法。
其中基于特征的方法包括基于模型特征、基于几何特征、基于颜色特征、基于频域特征等方法。
Bradski 提出的Cam-shift 算法基于色彩信息,利用目标连续分布的概率变化特征,自动调节搜索窗口大小,可以有效解决目标变形和遮挡问题,运算效率高[7]。
算法核心是在视频图像的概率分布图中执行均值漂移(Mean-shift)算法[8]迭代,利用目标固有的概率分布特征在单帧图像中对目标进行检测。
Cam-shift 算法基于每帧图像的反向投影图迭代计算,使当前搜索窗中的灰度分布趋向于能够反映目标特征的分布模式,并把当前帧的计算结果用于指导下一帧图像中目标的跟踪,使跟踪算法能够持续进行。
Cam-shift 算法流程图如图1所示。
反向投影图计算中,首先选定包含目标的初始搜索窗,然后在HSV 空间中对H 分量进行直方图统计,把最大直方图统计值作为目标亮度值,并按式(1)计算出各个亮度值与目标亮度值的相似概率,为了方便计算,将相似概率值线性扩展到0~255,最终得到固定不变的概率相似度查找表。
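为便于理解上述查找表的构造过程,下面给出一段示意性的C代码:先由目标区域的H分量直方图求出最大统计值,再把各亮度值的相似概率线性扩展到0~255,最后对整帧图像做反向投影。代码只是按本节文字描述写出的草图,直方图bin数等参数为假设值,并非文献[7]的原始实现。
#define HBINS 256   /* H分量量化级数,假设值 */
/* hue:整帧的H分量图;hist:初始搜索窗内统计得到的H分量直方图;prob:输出的概率分布图 */
void camshift_backproject(const unsigned char *hue, int w, int h,
                          const unsigned int hist[HBINS], unsigned char *prob)
{
    unsigned int maxv = 1;
    unsigned char lut[HBINS];
    int i;
    for (i = 0; i < HBINS; i++)            /* 找最大直方图统计值,作为目标亮度值的统计量 */
        if (hist[i] > maxv) maxv = hist[i];
    for (i = 0; i < HBINS; i++)            /* 相似概率线性扩展到0~255,得到固定的查找表 */
        lut[i] = (unsigned char)(hist[i] * 255u / maxv);
    for (i = 0; i < w * h; i++)            /* 反向投影:逐像素查表 */
        prob[i] = lut[hue[i]];
}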
运动目标检测与跟踪的DSP实现

陈延利;施永豪
【摘要】研究了运动目标检测与跟踪的DSP(Digital Signal Processor)实现算法,以形心跟踪算法为整个处理系统的核心。采用目标形心跟踪算法,通过目标分割阶段的目标标记(如目标面积、周长、形心位置等信息)的提取建立目标跟踪波门,实现目标的连续跟踪,并将此算法移植到SEED-VPM642硬件平台,实验结果表明能够达到预定目标。此外,为了克服形心算法的准确性和实时性缺陷,采用粒子滤波对算法进行必要的扩展。从MATLAB的仿真结果看,除个别采样点存在误差较大的情况外,真实值曲线与粒子滤波跟踪曲线拟合较好。
【Abstract】Study of the DSP (Digital Signal Processor) algorithm for detecting and tracking a moving target, with the centroid tracking algorithm as the core of the whole processing system. Using the target marks extracted in the segmentation stage, such as area, perimeter and centroid position, the centroid tracking algorithm constructs a tracking gate to achieve continuous tracking. The algorithm was ported to the SEED-VPM642 hardware platform, and the experimental results indicate that the predetermined goal can be achieved. In addition, a particle filter is used to extend the algorithm and overcome the accuracy and real-time defects of the centroid method. The MATLAB simulation results show that the particle-filter tracking curve fits the true-value curve well, except for a few sampling points with larger errors.
【期刊名称】《计算机技术与发展》
【年(卷),期】2012(022)008
【总页数】3页(P82-84)
【关键词】中值滤波;形心跟踪;粒子滤波;运动目标;检测与跟踪
【作者】陈延利;施永豪
【作者单位】西藏大学工学院电子信息工程系,西藏拉萨 850000;西南交通大学信息科学与技术学院通信与信息系统系,四川成都 610031
【正文语种】中文
【中图分类】TP391.9
0 引言
多目标检测识别与跟踪技术,主要用于空中超视距的多目标探测、跟踪与攻击,空中交通管制,海洋、港口监视和机器视觉等领域。
基于DSP的运动目标跟踪算法的研究与实现的开题报告

一、研究背景和意义
随着计算机技术的不断发展,数字信号处理(DSP)技术已成为一种重要的高效率数据处理形式,其在运动目标跟踪上的应用越来越受到研究者的重视。
运动目标跟踪是指在运动的背景下,对目标进行连续跟踪,是计算机视觉与图像处理领域的一个热点课题。
基于DSP的运动目标跟踪算法可以实现较高的速度和精度,在机器视觉、视频监控、自动驾驶、医学图像处理等领域具有广泛的应用前景。
因此,开展基于DSP的运动目标跟踪算法的研究与实现,对于推进图像处理与计算机视觉领域的发展,具有重要的实际意义和理论意义。
二、研究内容
本文将研究基于DSP平台的运动目标跟踪算法,主要包括以下内容:
1、DSP平台的介绍:介绍DSP的基本原理和发展现状,重点深入探讨TI公司的C6000系列DSP芯片。
2、运动目标跟踪的相关算法:对于运动目标跟踪领域常用的算法进行深入分析和总结,包括模板匹配法、卡尔曼滤波法、粒子滤波法、基于深度学习的方法等。
3、基于DSP的运动目标跟踪算法设计:选定一种或多种算法,针对DSP所具备的特点进行改进和优化,设计高效率、高精度、实时性强的运动目标跟踪算法。
4、DSP平台的实现与测试:借助TI公司的C6000系列DSP芯片,结合相应的算法设计,实现基于DSP的运动目标跟踪算法原型,并进行测试和分析。
三、研究目标和预期成果
本文的研究目标是:探究基于DSP的运动目标跟踪算法的设计和实现方法,实现高效率、高精度、实时性强的运动目标跟踪。
研究预期成果:1、对DSP芯片及其相关算法的研究和应用方面有所贡献。
2、设计出高效率、高精度、实时性强的运动目标跟踪算法。
3、实现基于DSP的运动目标跟踪算法原型,并进行测试和评估。
4、推进运动目标跟踪领域的研究和发展,在视频监控、自动驾驶、医学图像处理等方面具有实际应用前景。
四、研究方法
本文的研究方法主要包括以下几个方面:
1、数据收集:收集运动目标在不同条件下的视频数据,包括不同运动速度、不同航向角度、不同光照条件等。
基于DSP的全向运动控制系统软件设计毕业设计-精品

南阳理工学院(Nanyang Institute of Technology)本科生毕业设计(论文)
基于DSP的全向运动控制系统软件设计
Software Design of Omni-directional Motion Control System Based on DSP
学院(系):电子与电气工程系  专业:自动化  完成日期:2011年5月
总计:42页  表格:5个  插图:31幅
[摘要] 本文基于DSP C2000系列TMS320LF2407A核心控制芯片,以CCStudio V3.3软件为开发平台,在了解RoboCup中型组足球机器人和其他全向机器人的基础上,主要完成了全向运动控制系统的软件设计。
通过对全向运动控制系统的研究,建立了三轴全向运动学数学模型,并对整体平移运动、原地旋转运动、边平移边旋转三种运动方式进行分析和数学建模;利用上位机,经无线模块发送运动方式和各种运行参数,控制运动系统实现各种运动模式;经全向运动控制系统软件的编写和与相关硬件联机调试,系统实现了预订的全向运动形式。
通过试验验证和结果分析,各种运动状态的准确性已达到性能的基本要求。
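为直观说明摘要中提到的三轴全向运动学数学模型,下面给出一段示意性的C代码,由机体平移速度和自转角速度反解出三个全向轮的线速度。其中轮子安装角、轮心到机器人中心的距离等参数均为本文为举例而假设的数值,并非原设计的实际参数。
#include <math.h>
#ifndef M_PI
#define M_PI 3.14159265358979
#endif
#define NWHEEL  3
#define ROBOT_R 0.20f   /* 轮心到机器人中心的距离(m),假设值 */
/* vx、vy:机体坐标系下的平移速度(m/s);w:自转角速度(rad/s);v[]:输出各轮线速度 */
void omni3_inverse_kinematics(float vx, float vy, float w, float v[NWHEEL])
{
    /* 假设三个全向轮均匀分布,安装角依次为90°、210°、330°,轮子驱动方向为切向 */
    static const float ang[NWHEEL] = { 90.0f, 210.0f, 330.0f };
    float a;
    int i;
    for (i = 0; i < NWHEEL; i++) {
        a = ang[i] * (float)M_PI / 180.0f;
        v[i] = -sinf(a) * vx + cosf(a) * vy + ROBOT_R * w;   /* 平移分量+旋转分量 */
    }
}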
[关键词] DSP;全向运动控制;数学建模;MATLAB仿真;串行通信;软件设计
Software Design of Omni-directional Motion Control System Based on DSP
Automation Specialty  LI Hai-qing
Abstract: This design is based on the TMS320LF2407A of the DSP C2000 series as the core control chip, with CCStudio V3.3 as the development platform. Building on an understanding of the RoboCup middle-size-league soccer robot and other omnidirectional robots, it mainly accomplishes the software design of an omni-directional motion control system. Through research on the motion control system, a three-axis omnidirectional kinematic model is established, and three kinds of movement (pure translation, spinning in place, and combined translation and rotation) are analyzed and modeled mathematically. A host PC sends the motion mode and operating parameters through a wireless module to control the motion system in the various motion modes. After the control software was written and debugged online with the hardware, the system realized the expected omnidirectional movements, and the experimental results show that the accuracy of the various motion states meets the basic performance requirements.
Key Words: DSP; omnidirectional motion control; mathematical modeling; MATLAB simulation; serial communication; software design
目录
1 引言
1.1 全向运动控制系统发展现状
1.2 本课题的研究意义及前景
1.3 论文组织结构
2 全向运动控制系统分析
2.1 全向运动控制系统运动学模型建立
2.2 不同运动方式的运动特性
2.2.1 平移运动
2.2.2 原地旋转运动
2.2.3 边平移边旋转运动
3 基于DSP的硬件系统简介
3.1 控制芯片选择
3.2 硬件系统结构图
3.3 硬件系统基本模块
4 系统运动控制的MATLAB仿真
4.1 电机PID控制
4.2 转速检测
4.3 MATLAB仿真
5 系统软件设计
5.1 软件开发平台及仿真器
5.2 运动控制软件设计
5.2.1 主程序
5.2.2 三种基本运动状态子程序
5.2.3 电机控制子程序
5.2.4 无线发送子程序
5.3 上位机软件及通讯协议
6 实验验证及结果分析
6.1 试验场地
6.2 各项性能测试
6.2.1 速度PID测试
6.2.2 平移运动测试
6.2.3 原地旋转测试
6.2.4 平移+旋转运动测试
6.3 影响因素分析
结束语
参考文献
附录
致谢
1 引言
随着机器人技术的日新月异,机器人应用领域也已从工业走向普通生活。
基于DSP的图像目标跟踪算法设计与实现

基于DSP的图像目标跟踪算法设计与实现
李转;李永红;岳凤英;陈坤泽
【期刊名称】《自动化与仪表》
【年(卷),期】2024(39)6
【摘要】为了准确地检测和定位图像中的目标,并在连续帧之间实现目标的持续跟踪,该文提出了一种基于数字信号处理(DSP)的图像目标跟踪算法的设计与实现。
目标跟踪是计算机视觉领域中的重要任务之一,该文深入研究了图像目标跟踪技术,研究了MeanShift算法的理论和实现过程,以及用MeanShift算法进行目标跟踪的流程,根据设计方案的需求选择了DSP与FPGA相结合的总体架构,将目标跟踪算法与嵌入式硬件平台相结合。
该文采用“先识别、后跟踪”的方式实现了图像的目标跟踪。
通过实验证明,基于DSP的图像目标跟踪算法在实时性、准确性和稳定性方面都取得了较好的效果。
【总页数】5页(P81-85)
【作者】李转;李永红;岳凤英;陈坤泽
【作者单位】中北大学仪器与电子学院;中北大学电气与控制工程学院;中北大学信息与通信工程学院
【正文语种】中文
【中图分类】TP391
【相关文献】
1.基于DSP+FPGA的实时图像识别系统硬件与算法设计
2.基于机器视觉的教室多目标跟踪算法设计与实现
3.一种基于FPGA+DSP架构的雷达目标跟踪算法设计与实现
4.基于UKF的快速地面集群目标跟踪算法设计和实现
5.基于DSP的计算机图像处理算法设计
目标跟踪算法的研究毕业设计论文

目录
摘要
ABSTRACT
第一章 绪论
1.1 课题研究背景和意义
1.2 国内外研究现状
1.3 本文的具体结构安排
第二章 运动目标检测
2.1 检测算法及概述
2.1.1 连续帧间差分法
2.1.2 背景去除法
2.1.3 光流法
第三章 运动目标跟踪方法
3.1 引言
3.2 运动目标跟踪方法
3.2.1 基于特征匹配的跟踪方法
3.2.2 基于区域匹配的跟踪方法
3.2.3 基于模型匹配的跟踪方法
3.3 运动目标搜索算法
3.3.1 绝对平衡搜索法
3.4 绝对平衡搜索法实验结果
3.4.1 归一化互相关搜索法
3.5 归一化互相关搜索法实验结果及分析
第四章 模板更新与轨迹预测
4.1 模板更新简述及策略
4.2 轨迹预测
4.2.1 线性预测
4.2.2 平方预测器
4.3 实验结果及分析
致谢
参考文献
毕业设计小结
摘要
图像序列目标跟踪是计算机视觉中的经典问题,它是指在一组图像序列中,根据所需目标模型,实时确定图像中目标所在位置的过程。
它最初吸引了军方的关注,逐渐被应用于电视制导炸弹、火控系统等军用装备中。
序列图像运动目标跟踪是通过对传感器拍摄到的图像序列进行分析,计算出目标在每帧图像上的位置。
它是计算机视觉系统的核心,是一项融合了图像处理、模式识别、人工智能和自动控制等领域先进成果的高技术课题,在航天、监控、生物医学和机器人技术等多种领域都有广泛应用。
因此,非常有必要研究运动目标的跟踪。
针对图像的单目标跟踪问题,本文重点研究了帧间差分法和背景去除法等目标检测方法,并研究了基于模板相关匹配的跟踪算法,主要包括:最小均方误差函数(MSE)、最小平均绝对差值函数(MAD)和最大匹配像素统计(MPC)的跟踪算法。
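下面给出一段示意性的C代码,演示用最小平均绝对差值(MAD)准则在搜索窗口内做模板匹配的基本过程;把其中的绝对差换成平方差即可得到MSE准则。代码为理解性的草图,函数名与参数均为本文假设,并非论文实验程序的原始实现。
#include <limits.h>
/* img:当前帧灰度图(w x h);tmpl:模板(tw x th);(x0,y0):上一帧目标位置;r:搜索半径 */
void mad_search(const unsigned char *img, int w, int h,
                const unsigned char *tmpl, int tw, int th,
                int x0, int y0, int r, int *best_x, int *best_y)
{
    long best = LONG_MAX, sad;
    int x, y, i, j, d;
    for (y = y0 - r; y <= y0 + r; y++) {
        for (x = x0 - r; x <= x0 + r; x++) {
            if (x < 0 || y < 0 || x + tw > w || y + th > h) continue;   /* 越界跳过 */
            sad = 0;
            for (j = 0; j < th; j++)
                for (i = 0; i < tw; i++) {
                    d = (int)img[(y + j) * w + (x + i)] - (int)tmpl[j * tw + i];
                    sad += d > 0 ? d : -d;          /* 累加绝对差 */
                }
            /* MAD = sad/(tw*th),比较大小时可省去除法 */
            if (sad < best) { best = sad; *best_x = x; *best_y = y; }
        }
    }
}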
基于DSP的动态目标检测跟踪算法实现

摘要:目标检测和跟踪在计算机视觉领域有着广泛的应用。针对动态场景中的目标检测和跟踪问题,本文提出一种基于DSP的动态目标检测跟踪算法。
该算法基于Haar特征和Adaboost分类器实现目标检测,并采用卡尔曼滤波和相关滤波相结合的方法,提高目标跟踪的准确性和实时性。
实验结果表明,该算法在不同场景下都能够实现较高的检测率和跟踪精度,具有较好的应用前景。
关键词:动态目标检测、目标跟踪、DSP、Haar特征、Adaboost分类器、卡尔曼滤波、相关滤波
一、引言
随着计算机硬件和算法的不断发展,目标检测和跟踪在计算机视觉领域应用越来越广泛。
在众多应用场景中,动态场景的目标检测和跟踪问题尤为突出。
动态场景中,目标可能出现遮挡、变形、光照变化等现象,使得目标的检测和跟踪变得复杂和困难。
为了解决这一问题,本文提出了一种基于DSP的动态目标检测跟踪算法。
二、相关技术
2.1 Haar特征
Haar特征是一种计算速度较快的特征,在目标检测中被广泛应用。
Haar特征是基于图像的区域灰度值的差别,通过计算各种不同大小和形状的Haar小波函数的特征来检测目标。
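为说明Haar特征为何计算速度较快,下面给出一段示意性的C代码:先构造积分图,再利用积分图以常数时间求任意矩形区域的灰度和,从而得到最简单的两矩形(边缘型)Haar特征值。代码仅为草图,接口与命名均为本文假设。
/* ii为(w+1)x(h+1)的积分图,ii[y*(w+1)+x]表示原图左上角到(x-1,y-1)的像素和 */
void integral_image(const unsigned char *img, int w, int h, unsigned int *ii)
{
    unsigned int rowsum;
    int x, y;
    for (x = 0; x <= w; x++) ii[x] = 0;                 /* 第0行置零 */
    for (y = 1; y <= h; y++) {
        rowsum = 0;
        ii[y * (w + 1)] = 0;                            /* 第0列置零 */
        for (x = 1; x <= w; x++) {
            rowsum += img[(y - 1) * w + (x - 1)];
            ii[y * (w + 1) + x] = ii[(y - 1) * (w + 1) + x] + rowsum;
        }
    }
}
/* 利用积分图求矩形(x,y,rw,rh)内的像素和,只需4次查表 */
static unsigned int rect_sum(const unsigned int *ii, int w, int x, int y, int rw, int rh)
{
    return ii[(y + rh) * (w + 1) + (x + rw)] - ii[y * (w + 1) + (x + rw)]
         - ii[(y + rh) * (w + 1) + x]        + ii[y * (w + 1) + x];
}
/* 左右两个等宽矩形的灰度和之差,即一种最简单的边缘型Haar特征 */
int haar_edge_feature(const unsigned int *ii, int w, int x, int y, int rw, int rh)
{
    return (int)rect_sum(ii, w, x, y, rw, rh) - (int)rect_sum(ii, w, x + rw, y, rw, rh);
}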
2.2 Adaboost分类器
Adaboost分类器是一种常见的机器学习算法,常用于目标检测中。
Adaboost分类器是一种集成学习方法,通过迭代添加基分类器,不断提高整体分类器的准确率。
2.3 卡尔曼滤波
卡尔曼滤波是一种基于状态空间模型的估计方法,广泛应用于目标跟踪中。
卡尔曼滤波通过预测目标的状态,更新目标状态的同时,考虑到噪声的影响,提高目标跟踪的稳定性。
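下面给出一段示意性的C代码,以一维匀速运动模型为例演示卡尔曼滤波的预测与更新两个步骤;跟踪时可对目标质心的x、y坐标各使用一个这样的滤波器。过程噪声、观测噪声等参数均为假设值,仅用于说明原理。
/* 状态为[位置, 速度]的一维匀速模型卡尔曼滤波 */
typedef struct {
    float x[2];      /* 状态估计:位置、速度 */
    float P[2][2];   /* 估计误差协方差 */
    float q;         /* 过程噪声强度,假设值 */
    float r;         /* 观测噪声方差,假设值 */
} Kalman1D;
void kalman_predict(Kalman1D *k, float dt)
{
    float p00, p01, p10, p11;
    k->x[0] += dt * k->x[1];                      /* 状态预测:x = F x, F = [1 dt; 0 1] */
    p00 = k->P[0][0] + dt * (k->P[0][1] + k->P[1][0]) + dt * dt * k->P[1][1] + k->q;
    p01 = k->P[0][1] + dt * k->P[1][1];           /* 协方差预测:P = F P F' + Q */
    p10 = k->P[1][0] + dt * k->P[1][1];
    p11 = k->P[1][1] + k->q;
    k->P[0][0] = p00; k->P[0][1] = p01; k->P[1][0] = p10; k->P[1][1] = p11;
}
void kalman_update(Kalman1D *k, float z)          /* z为本帧检测到的位置观测 */
{
    float s, k0, k1, y, p00, p01, p10, p11;
    s  = k->P[0][0] + k->r;                       /* 新息协方差(观测矩阵H = [1 0]) */
    k0 = k->P[0][0] / s;                          /* 卡尔曼增益 */
    k1 = k->P[1][0] / s;
    y  = z - k->x[0];                             /* 新息:观测值与预测位置之差 */
    k->x[0] += k0 * y;
    k->x[1] += k1 * y;
    p00 = (1.0f - k0) * k->P[0][0];               /* P = (I - K H) P */
    p01 = (1.0f - k0) * k->P[0][1];
    p10 = k->P[1][0] - k1 * k->P[0][0];
    p11 = k->P[1][1] - k1 * k->P[0][1];
    k->P[0][0] = p00; k->P[0][1] = p01; k->P[1][0] = p10; k->P[1][1] = p11;
}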
2.4 相关滤波
相关滤波是一种基于模板匹配的目标跟踪方法,具有较高的实时性。
该方法通过计算目标区域与模板的相关系数,确定目标位置,不断更新模板以适应目标的变化。
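下面给出一段示意性的C代码,计算模板与图像窗口之间的归一化互相关(NCC)系数,用来说明"计算目标区域与模板的相关系数"这一步的含义;它是空间域的简化写法,并非频域相关滤波器的实际实现,函数名与参数均为本文假设。
#include <math.h>
/* 计算以(x,y)为左上角的图像窗口与模板的NCC系数,取值范围[-1,1],越接近1越相似 */
float ncc_score(const unsigned char *img, int w,
                const unsigned char *tmpl, int tw, int th, int x, int y)
{
    double sum_i = 0, sum_t = 0, sum_ii = 0, sum_tt = 0, sum_it = 0;
    double a, b, cov, var_i, var_t;
    int n = tw * th, i, j;
    for (j = 0; j < th; j++)
        for (i = 0; i < tw; i++) {
            a = img[(y + j) * w + (x + i)];
            b = tmpl[j * tw + i];
            sum_i += a; sum_t += b;
            sum_ii += a * a; sum_tt += b * b; sum_it += a * b;
        }
    cov   = sum_it - sum_i * sum_t / n;      /* 去均值后的互相关 */
    var_i = sum_ii - sum_i * sum_i / n;
    var_t = sum_tt - sum_t * sum_t / n;
    if (var_i <= 0.0 || var_t <= 0.0) return 0.0f;
    return (float)(cov / sqrt(var_i * var_t));
}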
三、算法设计
本文提出的基于DSP的动态目标检测跟踪算法主要包括以下几个步骤:
3.1 目标检测
采用Haar特征和Adaboost分类器实现目标检测。
基于图像识别的运动目标检测与跟踪系统共3篇

基于图像识别的运动目标检测与跟踪系统(一)
随着科技的快速发展,运动目标检测与跟踪系统也逐渐得到了广泛的应用。
一个高效的运动目标检测与跟踪系统,能够很好地解决安防监控、自动驾驶、智能医疗等领域中的问题,对于我们的生活也产生了巨大的影响。
在运动目标检测与跟踪系统中,基于图像识别的方法是一种重要的技术手段。
基于图像识别的运动目标检测与跟踪系统,在实现过程中一般包含三个主要模块:图像预处理模块、目标检测模块和目标跟踪模块。
首先,图像预处理模块是对输入的图像进行处理,将图像提取特征、减少噪声等,为后续的目标检测和跟踪提供基础。
其次,目标检测模块则是通过图像识别技术,对图像中的目标进行检测和定位。
最后,目标跟踪模块则是在目标检测基础上,对运动目标进行跟踪,一般引入多目标跟踪方法,避免因目标之间的互相遮挡而造成运动目标跟踪的误判。
在基于图像识别的运动目标检测与跟踪系统中,图像预处理的重要性不容忽视。
通过预处理,我们可以将图像中的信息提取出来,而且可以排除对后续识别所产生的干扰。
预处理主要包括图像过滤、亮度修正、直方图均衡化等。
其中,图像过滤的主要目的是去噪,避免由于图像噪声而引起的误识别。
亮度修正则是为了提升图像的亮度和清晰度,以更加准确的了解目标形态信息。
直方图均衡化则能够增强图像的对比度和清晰度,有助于更好的分析图像信息。
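下面给出一段示意性的C代码,演示8位灰度图像直方图均衡化的基本步骤(统计直方图、计算累计分布、查表映射)。代码为便于理解的草图,仅处理单通道灰度图。
void hist_equalize(unsigned char *img, int n)   /* n为像素总数 */
{
    unsigned int hist[256] = {0};
    unsigned char lut[256];
    unsigned int cdf = 0, cdf_min = 0;
    long num, denom;
    int i;
    for (i = 0; i < n; i++) hist[img[i]]++;          /* 统计直方图 */
    for (i = 0; i < 256; i++) {                      /* 累计分布,复用hist存放,并记录第一个非零值 */
        cdf += hist[i];
        hist[i] = cdf;
        if (cdf_min == 0 && cdf > 0) cdf_min = cdf;
    }
    denom = (long)n - (long)cdf_min;
    if (denom <= 0) denom = 1;
    for (i = 0; i < 256; i++) {                      /* 生成灰度映射查找表 */
        num = (long)hist[i] - (long)cdf_min;
        if (num < 0) num = 0;
        lut[i] = (unsigned char)(num * 255 / denom);
    }
    for (i = 0; i < n; i++) img[i] = lut[img[i]];    /* 查表完成均衡化 */
}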
在目标检测模块中,图像识别是一个重要的技术手段。
通常情况下,图像识别需要先通过选定合适的物体检测算法进行初步的识别工作,如Viola-Jones算法、HOG+SVM算法等。
通过此类算法,我们可以对目标进行初步的分类识别,从而为后续的目标检测和跟踪提供基础。
在初步识别的基础上,可以引入卷积神经网络(CNN)等更深层次的神经网络进行目标特征提取,提高识别准确率。
实际应用中,目标跟踪模块的效果往往受到多种因素的影响,如目标姿态、光照等,而且多目标跟踪算法则更加复杂。
毕设-基于DSP的运动目标图像跟踪算法研究与实现-外文文献-Fast Object Tracking Using Adaptive Block Matching

Fast Object Tracking Using Adaptive Block Matching Karthik Hariharakrishnan and Dan Schonfeld,Senior Member,IEEEAbstract—We propose a fast object tracking algorithm that predicts the object contour using motion vector information.The segmentation step common in region-based tracking methods is avoided,except for the initialization of the object.Tracking is achieved by predicting the object boundary using block motion vectors followed by updating the contour using occlusions/disoc-clusion detection.An adaptive block-based approach has been used for estimating motion between frames.An efficient modu-lation scheme is used to control the gap between frames used for motion estimation.The algorithm for detecting disocclusion pro-ceeds in two steps.First,uncovered regions are estimated from the displaced frame difference.These uncovered regions are classified into actual disocclusions and false alarms by observing the motion characteristics of uncovered regions.Occlusion and disocclusion are considered as dual events and this relationship is explained in detail.The algorithm for detecting occlusion is developed by modifying the disocclusion detection algorithm in accordance with the duality principle.The overall tracking algorithm is com-putationally superior to existing region-based methods for object tracking.The immediate applications of the proposed tracking algorithm are video compression using MPEG-4and content retrieval based on standards like H.264.Preliminary simulation results demonstrate the performance of the proposed algorithm. Index Terms—Adaptive motion estimation,K-means clustering, segmentation,visual tracking.I.I NTRODUCTIONV ISUAL tracking has been an area of intensive research in thefield of computer vision.With the advent of emerging multimedia standards like MPEG-4it has become even more essential to develop a system that performs visual tracking in a robust as well as computationally efficient manner.Applica-tions of object tracking range from video compression,video retrieval,interactive video,scene composition,etc.A variety of techniques have been employed for segmenting a semantically meaningful object out of a video scene.The most common approaches that have been proposed fall into the fol-lowing categories.Region based tracking,active contours(a.k.a snakes)and mesh-based tracking.In thefirst approach(region-based tracking),the video ob-ject is initially defined by the user/object recognition algorithm. 
Video sequences are then segmented using a classical tool likeManuscript received October29,2002;revised March14,2004.The associate editor coordinating the review of this manuscript and approving it for publica-tion was Dr.Chalapathy Neti.K.Hariharakrishnan was with the Multimedia Communications Laboratory, Department of Electrical and Computer Engineering(M/C154),University of Illinois at Chicago,Chicago,IL60607-7053USA.He is now with the Multi-media Group,Motorola,Inc.,Bangalore,India(e-mail:h_karthik@).D.Schonfeld is with the Multimedia Communications Laboratory,De-partment of Electrical and Computer Engineering(M/C154),University of Illinois at Chicago,Chicago,IL60607-7053USA(e-mail:dans@; /~ds).Digital Object Identifier10.1109/TMM.2005.854437the watershed transformation.Correspondence between the seg-mented regions in consecutive frames is established and this en-ables tracking of the video object in subsequent frames[3],[7], [13].Contour-based approaches usually do not make use of the spatial and motion information of the entire object and rely only on the information closer to the boundary of the video object[2], [5],[11],[12],[16].Snakes[10]was proposed as a method for tracking the boundary of the video objects by using a parametric planar curve(active contours).Mesh based approaches[1],[9],[15],[18]define an initial set of nodes on the boundary,as well as the interior of the object using gradient and motion information.These set of nodes are then joined using a triangulation procedure like the Delaunay triangulation to produce a content-based mesh.These set of nodes are tracked forward by sampling the node motion vec-tors from the estimated opticalflow[8].A variation of region-based tracking,referred to as motion-based tracking,is introduced in[17].Motion-based clustering on the opticalflow has been employed for generating the re-gions that are coherent in motion.Although motion provides a powerful description of the visual scene,motion-based criteria in isolation are insufficient for object tracking.One of the main problems confronted by tracking algorithms is partial occlusion.Several approaches have been proposed for occlusion detection in video sequences[1],[5],[14].Occlusion detection methods proposed thus far have focused exclusively on the progressive hiding of a portion of the object by an oc-cluding body[1],[5],[14].In the literature,a complementary effort to address the task of disocclusion detection has been lim-ited.The computational complexity of object tracking systems is due to the costly segmentation,opticalflow or motion esti-mation operations.Practical real-time systems must,therefore, avoid costly repetition of any of these operations.The algorithm outlined in this paper aims to predict the object boundary for a long duration without the need for user interaction.Section II gives detailed information on the proposed ap-proach.Section III discusses the algorithm used for identifying occlusion/disocclusions in video sequences.Section IV includes experimental results to demonstrate the proposed method.Con-clusions and further research have been included in the last sec-tion.II.B ASIC T RACKING A LGORITHMA.General ApproachThe proposed algorithm can be categorized as a region-based tracking algorithm.The occlusion and disocclusion techniques that have been developed can be employed in other region-based1520-9210/$20.00©2005IEEEFig.1.Tracking algorithm.techniques to improve the tracking accuracy.The overall algo-rithm has been outlined in Fig.1.All the steps in 
Fig.1are explained in the following sections.In the following sections,re-segmentation refers to segmentation of the frame using the al-gorithm mentioned below followed by user interaction to re-ini-tialize the object partition.B.Object Mask InitializationFor initializing the tracking algorithm,we use a segmentation algorithm followed by an association operator.The association operator gives information about the regions that belong to the object.The segmentation proposed by [6]was used as it gives better results when compared to standard watershed segmenta-tion.The algorithm for initializing the object is given below.1)Segment the initial frame using four-band multi-valued segmentation [6].The initial segmentation map is denotedby .2)The aim of this step is to locate regionsfromthat be-long to the object.To locate these regions,we compute motion vectors of all the regions in the segmentationmap.All the regions that have reasonable motion are la-beled as object regions in a mask.3)The previous step might include regions that belong to the background.Hence,a post-processing operation is per-formed to remove small regions.The area opening op-erator has been employed for morphological post-pro-cessing.The holes in the mask are then filled to give the final mask.If the obtained mask is too erroneous,manual initialization is performed.The result of applying the above steps on the bream sequence is shown in Fig.2.The above-mentioned method works well for static camera scenes.The object can also be drawn using a graphical interface for manual initialization.Automatic initial-ization can also be done if the class of objects to be tracked is known.For example,skin color can be used to initialize a face tracker.The result of the tracking algorithm depends to agreatFig.2.Object initialization:(a)segmentation map and (b)regions in the segmentation map with reasonablemotion.Fig.3.Estimated seed blocks for bream sequence:(a)frame 75and (b)seed motion blocks in frame 78(black —uncertain blocks,white —seed motion blocks of object,gray —seed motion blocks of background).extent on the initialization.A perfect initial contour is the best input that can be given to a tracking algorithm.Tracking results with automatic and manual initialization are given in Section IV .C.Motion EstimationMotion estimation is a fundamental measurement to object tracking and hence an accurate estimation of motion is one of the most important steps.The method presented in this paper calculates the block sizes depending on the location of the block.Varying the block sizes near the boundary of the object is equivalent to mesh-based motion estimation.Hence,motion estimation is more accurate than traditional block-matching techniques.1)Block Classification:Let I(x,y,k)denote the kth frame of a video sequence with (x,y)denoting x and y coordinates of a pixel.Seed motion blocks are estimated for every frame.A seed motion block is de fined as one that lies either entirely within the object or background.The algorithm starts with an initial block size of1616pixels and backward motion is estimated (Everyblock fromI(x,y,)is matched with the corresponding block in I(x,y,k).Exhaustive search has been employed for com-puting motion vectors on a search window of3232.Motion estimation has been performed in the Y ,U,V space.Blocks that lie on the boundary are labeled as uncertain blocks and are pro-cessed by the next stage of the estimation step.Fig.3shows the seed blocks that have been computed for the bream sequence.The uncertain blocks are 
subdivided into smaller blocks(88)and new seed blocks are estimated.The search range is also reduced to make sure that misclassi fications do not occur.This procedure is continued until a fixed block size(44pixels)is reached.HARIHARAKRISHNAN AND SCHONFELD:FAST OBJECT TRACKING USING ADAPTIVE BLOCK MATCHING 855D.Modulation SchemeIn many video sequences,the motion between consecu-tive frames is comparatively less.In the proposed approach,tracking is performed once everythreeframes.A modula-tion scheme is provided that calculates object motion betweenframes&and slows down motion estimation if the estimated motion is relatively high.This modulation scheme enables motion estimation to be applied infrequently if the motion in the video sequence is low.Average motion between the frames is found by fitting an af fine motion model to the initial seed blocks estimated in the initial part (Fig.3)of the motion estimation algorithm.The af fine model is de finedbywhere,in thematrix are the model param-eters.The transformation displaces apoint in the reference frame to a newlocation in the previous frame.A least squares based algorithm is used to extract the mo-tion model parameters.The translational component of the af fine model re flects the amount of motion undergone by the object.If theabove norm is greater of the translational component is greater than athreshold,motion estimation is repeated over frames that areadjacent.This corrects for substantial tracking errors,if the algorithm were applied once inthree .If the size of the object is small compared to the size of the frame,there might not be any object seed blocks present.In this case,motion vectors computed from smaller blocks (e.g.,44blocks)are used to find the af fine model.E.Object Mask GenerationThe partition of previous frame that corresponds to the object is denotedby and the goal of this component is to generatethe current partition of objectsupport,given the motion vectors.Let denote the motion vector that has been computed for the region(block).Ad-ditionallylet represent the translated version ofregionby .In our case h repre-sents the computed motion vector for each block.Every block in the current frame is motioncompensatedto find out the portion of the block that lieswithin .Thisgives us the object support for the currentframe.The objectmask needs to be modi fied suitably to take care of occlusions and disocclusions.Occlusions and disocclusions are handled using the method detailed in the next section.III.O CCLUSIONS AND D ISOCCLUSIONSThe most common method to deal with occlusion/disocclu-sion consists of finding the motion compensated frame from the computed motion vector field [1].The compensated frame issubtracted from the original frame andthresholdedto give the outlier pixels in the current frame.If forward motion is esti-matedand frame k is reconstructed usingframe the outlier pixels detected correspond to regions that willbe covered inframe.For backward motion,frame is reconstructed using frame k.The outlier pixels detected in this case correspond to new (uncovered)regions that have appearedinframe.Ideally,the new regions should correspond to disocclusions and the covered regions to occlusions.But there are many cases under which some of the uncovered regions do not correspond to disocclusions as explained below.The aim of the disocclusion algorithm is to detect uncovered regions that are actual disocclusions.The following paragraph explains the algorithm for disocclusion detection.The duality principle can be employed to 
derive the algorithm for occlusion detection.The regions which cannot be motion compensated accurately are declared as covered or uncovered regions.In many instances,existing regions cannot be motion compensated accurately due to nonrigid structure or illumination changes.In such cases,some covered and uncovered regions detected do not correspond to occlusions and disocclusions.It is clear that a further classi fication is required to find actual disocclusions from uncovered regions.This can be achieved by using a motion-based criterion.A.Duality PrincipleOcclusion and disocclusion are viewed as dual events.For detecting disocclusion (occlusion),the current frame is motion compensated using the previous (next)frame to give uncovered (covered)regions.In the case of disocclusion (occlusion),the uncovered (covered)regions that belong to the object exhibit motion characteristics that are similar to (different from)the ob-ject.The algorithm that detects disocclusion (occlusion)looks for this motion similarity (dissimilarity).Based on this duality,an algorithm that performs disocclusion detection can be formu-lated and suitably modi fied to detect occlusions.B.Disocclusion Detection1)Uncovered Regions:To enforce the disocclusion detec-tion step,regions that will be uncovered in future frames need to be estimated.The object contour has already been predicted using the motion vectorscalculated .New regions ofthe object might appear in the currentframe.To esti-mate these regions,the currentframeis motion com-pensated usingframe .find the uncovered regions Some post-processing operations are applied on the mask to remove noise.2)Region Classi fication:As already noted,uncovered re-gions do not correspond to actual disocclusions.We have used the color based criterion to estimate the uncovered regions.In the second stage of classi fication,motion is used as a criterion.The following is true for uncovered regions that belong to the object.•The uncovered region belonging to the object will exhibit motion characteristics similar to the object.•Check the motion consistency of the uncovered regions with the rest of the object.Average motion vector for the uncovered regions is estimated.The similarity test is then employed to classify the uncovered regions as actual disocclusions and false alarms.856IEEE TRANSACTIONS ON MULTIMEDIA,VOL.7,NO.5,OCTOBER2005Fig.4.Extracted foremanobject.Fig.5.Extracted carphone object.a)Motion vector clustering:Removing the covered re-gionsfromforms a newmask .The motion vectorsin are clustered using a K-means algorithm [4]described below.The covered regions are removed because the motion vectors are inaccurate in those regions,and hence might lead to errors during clustering.Having used a block-based approach for motion estimation,we have only one motion vector for a block.The vectors to be clustered are these block motion vectors.Clustering needs tobe performed only on pixels in themask.Hence,the number of pixels that lie in the mask for each block is calculated to give a weighted sample set forclusteringwhere represents the motion vector of the ith macro-block,denotes the number of pixels in the ith macro-block that lieinside,and is the total number of macro-blocks which are partially or completelyinside .A clustering algorithm that chooses the number of clusters adap-tively is used.b)Similarity test:The following similarity test is per-formed for all the uncovered regions:Letdenotethe uncovered region ’s forward motion vector,and represent the centroid ofthe motion cluster 
of the object.For everycalculateOnlyif(where ),the region is treated as a disocclusion and includedin.The updated object mask is givenby .The uncovered regions that do not satisfy the above condition are false alarms.The algorithm for occlusion detection can be derived in a straightforward manner and hence is skipped here.IV .S IMULATIONSThe tracking has been tested on some common MPEG test se-quences and real video sequences.The proposed method for ob-ject tracking can be considered as a variation of the region based tracking techniques that have been reported in literature.Theapproach predicts the object contour mostly using motion vec-tors and this has wide implications when a tracking algorithm needs to be applied for compressed video data.The computa-tion time required for tracking of one frame has been compared with two other region based tracking approaches.The software written for all the algorithms have not been optimized to give better performance.A.Video SequencesIn the foreman sequence,the object motion is not uniform and hence the tracking is slowed down when signi ficant motion has been observed.Fig.4shows the extracted video objects obtained at various instances.The method generates object masks almost accurately as the segmentation-based approaches and at the same time it is not computationally very expensive.A comparison with other region-based methods is provided in the later section for refer-ence.Fig.5shows the extracted object masks for the carphone sequence.B.Modulation Detection SchemeThe modulation scheme described in Section II-D is used to skip frames when the motion of the object is relatively less.The object outlines for the skipped frames can be interpolated.Fig.6shows the number of frames skipped versus frame number for the foreman sequence and the bream sequence.In the bream sequence,the motion of the object is very less until the 110th frame.The modulation scheme detects high motion and slows down motion estimation.Slowing down the tracking process en-ables accurate tracking for a longer period.The Foreman se-quence has high motion at various instances,as shown in the figure.C.Occlusion and Disocclusion DetectionFig.7shows the effect of the disocclusion detection mecha-nism detailed in the previous section.The tail of the fish reap-pears and this region is not similar in color to the body and hence tracking algorithms that consider color similarity for re-gion association would fail.The disocclusion is detected in our method and associated with the corresponding object.The al-gorithm has also been applied to real life videos.The followingHARIHARAKRISHNAN AND SCHONFELD:FAST OBJECT TRACKING USING ADAPTIVE BLOCK MATCHING857Fig.6.Number of frames skipped versus frame number for the foreman and breamsequence.Fig.7.Disocclusion (tail of the fish)detected and associated withobject.Fig.8.Occlusion and disocclusion detection for the Shirleysequence.Fig.9.Tracking the person with occlusion/disocclusion detection.figure (Fig.8)illustrates the performance of the occlusion de-tection algorithm.Another person obstructs the tracked object partially and then moves away.Block matching algorithms rely on a translational model and are usually not suited for non-rigid objects.However,when tracking nonrigid objects,the oc-clusion and disocclusion detection algorithms include/discard pixels close to the boundary of the object.This handles non-rigid objects to some extent.D.Algorithm ComparisonsThe following sequence shows the performance comparison of the proposed algorithm with 
two region-based methods for object extraction.A table of the computation times for the dif-ferent methods is also given.The following methods are used for comparison.1)Object Extraction using Partition Lattice Operators [6],[7].2)Region-based video coding using Mathematical Mor-phology [12],[13].Fig.9shows the output of the algorithm with occlusion/dis-occlusion detection using both forward and backward motion.Figs.10and 11show the results of the algorithms used for com-parison.For this example the tracked object is better than the re-gion-based methods.Avoiding forward motion speeds up the858IEEE TRANSACTIONS ON MULTIMEDIA,VOL.7,NO.5,OCTOBER2005Fig.10.Tracking the person with partition latticeoperators.Fig.11.Tracking the person with partition projection.TABLE IC OMPARISON OF C OMPUTATION TIMESalgorithm but affects the quality of extraction.The Adaptive K-Means clustering,followed by the occlusion disocclusion classi fication are necessary for the successful tracking of the hand.The tracking algorithm designed was primarily for use in object-based video coding,where pixel accurate tracking of objects is not very critical.Thus the tracked object boundary is not accurate compared to the region-based methods.The following table (Table I)shows the computation times in-volved for the various implementations.All the algorithms were implemented in MATLAB.Hence the absolute timings do not make much sense.We also have a C implementation of the al-gorithm and it runs from 4–10frames/s depending on the size of the object to be tracked.The speed can be increased by a factor of two by eliminating forward motion and using only backward motion.We are investigating this issue further.E.Results for Automatic and Manual InitializationIn the following video sequence (Figs.12and 13),we com-pare the results for tracking a video for automatic and manual initializations.Automatic initialization works well for a static background.For backgrounds that are changing,a sophisticated method must be used for initialization.In Fig.12and 13,we show the tracking results for the hand sequence.For automatic initialization,we have also used a skin color model in conjunc-tion with the algorithm mentioned in Section II.WeactuallyFig.12.Tracking the hand with manualinitialization.Fig.13.Tracking the hand with automatic initialization (both the hands are tracked).would like to track only the drawing hand.Automatic initial-ization enables us to track both the hands.In this case,since we have a strong skin color model the initialization is almost perfect.In many other videos,using the trivial initialization al-gorithm mentioned in Section II yields inaccurate results.V .D ISCUSSIONIn this paper,we have proposed a simple tracking algorithm that avoids segmentation except for initialization of the object partition during the initial frame.Object tracking using block motion vectors have seldom been exploited.Such an approach can be implemented using parallel processors and hence lends itself to a real-time implementation.The goal was to develop an algorithm that extracts video objects whose accuracy is close to the region-based approaches reported in the literature and at the same time perform tracking in a computationally ef fi-cient manner.Occlusion and disocclusion are viewed as reverse problems.An ef ficient algorithm for detecting occlusions is pro-posed and modi fied in accordance with the duality principle toHARIHARAKRISHNAN AND SCHONFELD:FAST OBJECT TRACKING USING ADAPTIVE BLOCK MATCHING859develop a disocclusion 
algorithm.As the object mask is modi-fied to take care of occlusions/disocclusions,the object can be tracked accurately for a longer time without requiring re-initial-ization/re-segmentation.The tracking algorithm proposed in-herently can be extended in a straightforward manner to ex-tract multiple objects.The tracking algorithm,however,per-forms poorly for objects that are relatively small and relevant changes are currently being investigated.The tracking obtained using the method can be used as a predicted position for methods that employ snakes to track contours.The prediction can be based on the computed affine model and the innovations process can be used to correct for errors in the prediction.This approach is similar to the Kalmanfilter based approaches.This approach is also being considered for future research.Motion estima-tion using the sum of absolute differences does not deal with gaussian noise and hence the motion vectors tend to be erro-neous causing the tracked result to deteriorate.This is another aspect that requires study.R EFERENCES[1]Y.Altunbasak and A.M.Tekalp,“Occlusion-adaptive,content-basedmesh design and forward tracking,”IEEE Trans.Image Process.,vol.6, no.9,pp.1270–1280,Sep.1997.[2] A.A.Amini,T.E.Weymouth,and R.C.Jain,“Using dynamic program-ming for solving variational problems in vision,”IEEE Trans.Pattern Anal.Mach.Intell.,vol.12,no.9,pp.855–867,Sep.1990.[3]M.J.Black and A.Jepson,“Eigentracking:robust matching and trackingof articulated objects using a view-based representation,”put.Vis.,vol.26,no.1,pp.63–84,1998.[4] B.Everitt,Cluster Analysis,3rd ed.London,U.K.:Hodder,1993.[5]Y.Fu,T.Erdem,and A.M.Tekalp,“Tracking visible boundary of objectsusing occlusion adaptive motion snake,”IEEE Trans.Image Process., vol.9,no.12,pp.2051–2060,Dec.2000.[6] D.Gatica-Perez,C.Gu,and M.T.Sun,“Semantic video object extrac-tion using four-band watershed and partition lattice operators,”IEEE Trans.Circuits Syst.Video Technol.,vol.11,pp.603–618,May2001.[7],“Multiview extensive partition operators for semantic video objectextraction,”IEEE Trans.Circuits Syst.Video Technol.,vol.11,no.7,pp.788–801,Jul.2001.[8] B.K.P.Horn and B.G.Schunck,“Determining opticalflow,”Artif.In-tell.,vol.17,pp.185–203,1981.[9] C.-L.Huang and C.-Y.Hsu,“A new motion compensation method forimage sequence coding using hierarchical grid interpolation,”IEEE Trans.Circuits Syst.Video Technol.,vol.4,p.4251,Feb.1994. [10]M.Kass,A.Witkin,and D.Terzopoulos,“Snakes:active contourmodels,”put.Vis.,vol.1,pp.321–331,1987.[11]N.Peterfreund,“The velocity snake,”Proc.IEEE Nonrigid ArticulatedMotion,pp.70–79,1997.[12]P.Salembier and M.Pardas,“Hierarchical morphological segmentationfor image sequence coding,”IEEE Trans.Image Process.,vol.3,no.5, pp.639–651,Sep.1994.[13]P.Salembier,L.Torres,F.Meyer,and C.Gu,“Region-based videocoding using mathematical morphology,”Proc.IEEE,vol.83,no.6, pp.843–857,Jun.1995.[14] D.Schonfeld and D.Lelescu,“VORTEX:video retrieval and trackingfrom compressed multimedia databases multiple object tracking from MPEG-2bitstream,”mun.Image Represent.(Special Issue on Multimedia Database Management),vol.11,pp.154–182,2000. 
[15] A.M.Tekalp,P.Van Beek,C.Toklu,and B.Gunsel,“Two-dimensionalmesh-based visual object representation for interactive synthetic.Nat-ural digital video,”Proc.IEEE,vol.86,no.5,pp.1029–1051,Jun.1998.[16] D.Terzopoulos and R.Szeliski,“Tracking with Kalman snakes,”in Ac-tive Vision,A.Blake and A.Yuille,Eds.Cambridge,MA:MIT Press, 1992,pp.3–20.[17]J.Y.A.Wang and E.H.Adelson,“Representing moving images withlayers,”IEEE Trans.Image Process.,vol.3,no.5,pp.625–638,Sep.1994.[18]Y.Wang and O.Lee,“Active meshA feature seeking and tracking imagesequence representation scheme,”IEEE Trans.Image Process.,vol.3, pp.610–624,Sep.1994.Karthik Hariharakrishnan was born in TamilNadu,India,in1979.He re-ceived the B.E.degree in electronics and instrumentation from the Birla Insti-tute of Technology and Science,Pilani,India,and the M.S.degree in electrical and computer engineering from the University of Illinois at Chicago in2003. In July2004,he joined the DSP and Multimedia Group of Motorola,India. His current research interests are in multimedia compression and retrieval and signal,image,and video processing.Dan Schonfeld(SM’05)was born in Westchester,PA,in1964.He received the B.S.degree in electrical engineering and computer science from the University of California,Berkeley,and the M.S.and Ph.D.degrees in electrical and com-puter engineering from the Johns Hopkins University,Baltimore,MD,in1986, 1988,and1990,respectively.In August1990,he joined the Department of Electrical Engineering and Computer Science,University of Illinois,Chicago,where he is currently an As-sociate Professor in the Departments of Electrical and Computer Engineering, Computer Science,and Bioengineering,and Co-Director of the Multimedia Communications Laboratory(MCL)and member of the Signal and Image Research Laboratory(SIRL).He has authored over60technical papers in various journals and conferences.He has served as a Consultant and Technical Standards Committee Member in the areas of multimedia compression, storage,retrieval,communications,and networks.He has previously served as President of Multimedia Systems Corporation and provided consulting and technical services to various corporations including AOL Time Warner, Chicago Merchantile Exchange,Dell Computer Corp.,Getco Corp.,EarthLink, Fish&Richardson,IBM,Jones Day,Latham&Watkins,Mirror Image Internet,Motorola,Multimedia Systems Corp.,nCUBE,NeoMagic,Nixon& Vanderhye,PrairieComm,Teledyne Systems,Touchtunes Music,Xcelera,and 24/7Media.His current research interests are in multimedia communication networks,multimedia compression,storage,and retrieval,signal,image,and video processing,image analysis and computer vision,and pattern recognition and medical imaging.Dr.Schonfeld served as an Associate Editor for the IEEE T RANSACTIONS ON I MAGE P ROCESSING and the IEEE T RANSACTIONS ON S IGNAL P ROCESSING.He was a member of the organizing committees of the IEEE International Confer-ence on Image Processing and the IEEE Workshop on Nonlinear Signal and Image Processing.He was the plenary speaker at the INPT/ASME International Conference on Communications,Signals,and Systems.。
基于DSP的多目标跟踪系统设计与实现

江晨晓;张桦;孙志海
【摘要】该文提出了一种基于卡尔曼滤波的多信息融合匹配的多目标快速跟踪算法,并在TI公司开发的DaVinci系列TMS320DM6446上实现了多目标跟踪系统。该算法由卡尔曼滤波器预测和修正目标质心轨迹,然后融合质心和外接矩形框面积等多个信息对其进行匹配。实验结果表明,该系统具有良好的跟踪效果,能有效解决目标遮挡后再分离的跟踪情况,具有计算量小、存储量低、实时性高的优点。经过优化后,跟踪一帧图像的时间约为50.2ms,可实现实时跟踪。
【期刊名称】《杭州电子科技大学学报》
【年(卷),期】2012(032)006
【总页数】4页(P69-72)
【关键词】多目标跟踪;卡尔曼滤波;多信息融合
【作者】江晨晓;张桦;孙志海
【作者单位】杭州电子科技大学计算机应用技术研究所,浙江杭州310018
【正文语种】中文
【中图分类】TP391.41
0 引言
多运动目标跟踪是计算机视觉领域的重要分支,在视频监控、视觉导航等领域具有广阔的应用前景,研究这个课题具有重要的理论意义和应用价值。
目前常用的算法有融合颜色和边缘信息的采样算法[1]和结合Mean Shift方法和置信传播的算法[2]等。
但上述算法需要颜色信息,计算量较大,在嵌入式开发平台上难以实现实时检测和跟踪。
本文以TI公司的DaVinci系列TMS320DM6446为硬件平台,提出了一种基于卡尔曼滤波[3]的多信息融合匹配[4]的多目标快速跟踪算法,设计并实现了多目标实时跟踪系统。
本文算法使用卡尔曼滤波器为运动目标建立模型,预测运动目标在下一帧的状态,提取特征进行融合匹配,有效解决遮挡问题,具有低存储高实时性的优点。
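下面给出一段示意性的C代码,说明"融合质心和外接矩形框面积等多个信息进行匹配"的基本思路:预测质心与检测质心的距离小于门限、且面积比落在给定区间内,才认为匹配成功。其中的门限数值均为本文假设,并非原文的实际参数。
#include <math.h>
typedef struct { float cx, cy; float area; } Blob;   /* 质心坐标与外接矩形面积 */
#define DIST_GATE 30.0f   /* 质心距离门限(像素),假设值 */
#define AREA_LO   0.5f    /* 面积比下限,假设值 */
#define AREA_HI   2.0f    /* 面积比上限,假设值 */
/* pred:卡尔曼滤波预测得到的目标状态;det:当前帧检测到的目标块;返回1表示匹配成功 */
int blob_match(const Blob *pred, const Blob *det)
{
    float dx = pred->cx - det->cx;
    float dy = pred->cy - det->cy;
    float dist = sqrtf(dx * dx + dy * dy);
    float ratio = det->area / (pred->area > 1.0f ? pred->area : 1.0f);
    return dist < DIST_GATE && ratio > AREA_LO && ratio < AREA_HI;
}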
1 系统硬件设计
本文采用TI公司的DaVinci系列DSP处理器TMS320DM6446作为硬件平台,系统结构框图如图1所示。
基于DSP的运动目标检测与跟踪系统设计

第37卷第6期2009年12月浙江工业大学学报J OU RNAL OF ZH E J IAN G UN IV ERSIT Y OF TECHNOLO GYVol.37No.6Dec.2009收稿日期:2008211211作者简介:滕 游(1985—),男,浙江永嘉人,硕士研究生,研究方向为嵌入式系统及其应用.基于DSP 的运动目标检测与跟踪系统设计滕 游,董 辉,俞 立(浙江工业大学浙江省嵌入式系统联合重点实验室,浙江杭州310032)摘要:运动目标检测与跟踪是智能视频监控系统的重要组成部分,其主要功能为检测监控场景中的运动物体,分析其运动轨迹,为高级运动分析提供必要的信息.随着人们对社会安全水平和设备智能化程度要求的提高,运动目标检测与跟踪必将具有广泛的应用前景.针对实时运动目标检测与跟踪,提出了一种基于DSP 的运动目标检测与跟踪系统的设计方案,给出了系统总体结构框图,分析了系统工作原理.实现了一种采用结合帧间差分和背景减除的运动物体检测方法,以及一种应用K alman 滤波的运动目标跟踪方法.同时,给出的实验结果表明,该系统能够检测并跟踪特定场合的运动目标.关键词:运动检测;目标跟踪;背景减除;Kalman 滤波中图分类号:TP391.41 文献标识码:A 文章编号:100624303(2009)0620607203Design of moving object detection and tracking system based on DSPTEN G Y ou ,DON G Hui ,YU Li(Zhejiang Provincial United K ey Laboratory of Embedded System ,Zhejiang University of T echnology ,Hangzhou 310032,China )Abstract :The moving object detection and t racking technology is t he most basic composition of t he intelligent video surveillance system ,by which t he surveillance system can detect moving object ,analyze moving traces ,and provide t he necessary information for advanced motion analysis.Wit h t he people ’s demands for t he imp rovement of t he level of social security and t he intelligent level of surveillance equip ment s ,t he moving object detection and t racking technology will have a bright application prospect.Focusing on t he real 2time application ,a system design solution based o n DSP is presented.The st ruct ure of system is propo sed and t he system principle is analzed.A moving detection algorithm by combining the frame difference with background subtraction and a moving tracking algorithm by utilizing K alman filter is proposed and implemented.The experiment shows that the system can detect and track the moving target in special scene.K ey w ords :moving detection ;target t racking ;background subt raction ;Kalman filter0 引 言视频图像序列运动目标实时检测与跟踪系统的应用非常广泛,但由于此类系统是一个数据密集型系统,其必需的高速数据运算能力制约了此类系统在微型实时处理系统中的应用.近年来,随着高速数字信号处理器DSP 的出现,在低功耗、微小体积的DSP 系统中实现视频图像序列的实时运动目标检测与跟踪成为可能.笔者所研究的基于DSP 的运动目标检测与跟踪系统采用TMS320DM642作为主处理器,结合帧间差分法[1]和背景减除法[2]检测运动目标,应用Kalman 滤波器跟踪检测到的运动目标.该系统体积小、功耗低,在实时监控方面有很大的应用前景.1 系统总体结构和工作流程系统硬件结构如图1所示,摄像机输出模拟视频信号,视频解码器SAA7113将该模拟视频信号变成B T.656格式的视频信号,并送给DSP 的视频接口.DSP 的视频接口结合EDMA 通道把视频信号传送到SDRAM 的缓存区中,CPU 处理完毕后,由SAA7121将数据转换成模拟视频信号传送到监视器上显示.图1 硬件结构Fig.1 Hardware structure系统工作流程如图2所示,主要包括四个模块:DM642初始化模块、系统驱动初始化模块、运动物体检测模块和运动物体跟踪模块.其中,DM642初始化模块完成DM642芯片内部存储器接口、外设选择模块、中断模块等的初始化,系统驱动初始化主要完成视频编解码器、DM642视频端口和EDMA 通道等的初始化.在系统初始化完毕之后,视频信号的输入输出不需要DM642CPU 的干预,系统进入一个无限循环任务,在该任务中,DM642CPU 专注于视频处理算法,包括运动物体检测和运动目标跟踪两个模块.图2 系统程序流程Fig.2 Flow of system process2 运动物体检测运动物体检测的目的是将图像序列中的变化区域分离出来.目前有三种主要的运动物体检测方法:背景减除法、帧间差分法、光流法[3].本系统综合了背景减除法和帧间差分法各自的优点,采用背景减除法来检测运动物体,采用帧间差分法处理监控场景背景的突变.2.1 背景的估计和更换当像素点的估计背景值无效或被运动物体覆盖时,对该像素点使用连续帧间差分法,该方法可以处理背景突变,如背景失效、物体融入或离开背景等情况,其基本原理为:采集前后两帧实时图像,若两帧图像的相应位置的像素点的值的差分不大于系统为该像素点维护的阈值,即满足条件|I (x ,y ,t )-I (x ,y ,t -1)|<T ,则对应计数值Count (x ,y )增加1,否则,Count (x ,y )=0.当Count (x ,y )增加到设定值N 时,系统将实时像素值作为估计背景像素值.2.2 运动物体检测采用背景差分法检测运动物体.系统将采集到的实时图像与背景估计图像进行差分,通过差值与阈值的比较来判断相应的像素点是否属于运动区域,并产生相应的二值图像:B W (x ,y )=1I (x ,y ,t )-B (x ,y ,t )≥T0I (x ,y ,t )-B (x ,y ,t )<T(1)其中T 为阈值.检测结果若为1,则该像素点为运动像素点,否则为背景像素点.针对实际监控的场景中背景的变化,系统根据检测结果自动实时更新背景:若B W (x ,y )=1,则B (x ,y ,t )=B (x ,y ,t -1)(2)若B W (x ,y )=0,则B (x ,y ,t )=αB (x ,y ,t -1)+βI (x ,y ,t )(3)其中,α+β=1,α为学习率.α决定了当前实时图像对背景图像的影响程度.这是一种选择性更新方法.采用这种方法的优点是既保证了背景能够快速适应场景的变化,又避免了运动像素对背景的污染.2.3 
运动物体检测后处理通过背景减除法得到的二值图像往往包含很多孤立噪声,考虑到视频图像中人体的长宽特性,系统采用长度为16个像素、宽度为4个像素的长方形形状的掩膜图像对检测结果进行开运算[4].滤波后,系统使用四连通的连通分量标记区分二值图像中的连通区域[5],为了避免标签不连续的情况,系统使用一个等价数组来标记等价标签.所有运动像素得到标签后,系统对等价标签进行合并,从而得到运动物体的区域,并计算出运动物体的相关参数,同时将连通区・806・浙江工业大学学报第37卷域像素点数目小于设定阈值的小区域作为噪声滤除.2.4 运动物体检测实验结果针对一个实际环境,系统进行了实验.图3(a )为系统实时运行时的实时背景,图3(b )为系统采集到的监控画面的一帧,图中有两个运动人体,图3(c )为图3(a )和图3(b )差分并阈值化后的二值图,可以发现图3(c )中存在噪声,图3(d )为对图3(c )进行形态学滤波、连通性分析、小区域滤除后的结果,图中噪声已经滤除,并使用矩形框将两个人体表示出来.实验结果表明算法能够提取背景、检测运动物体、滤除噪声和提取运动物体.图3 运动物体检测Fig.3 Moving object detection3 运动目标跟踪运动跟踪的目的是对连续的视频图像序列中的运动物体进行匹配,得到运动目标的运动轨迹.视频图像中的运动目标的实际运动是一个随机过程,但由于视频图像的采集速度远远高于目标的运动速度,可以将目标的运动假设成线性运动.本系统运用Kalman 滤波方法[6]对运动目标的位置、长度和宽度等参数进行预测,然后利用运动物体的位置、长度和宽度对物体和目标进行匹配.3.1 基于K alman 滤波的运动目标跟踪定义系统中维护的运动目标的状态变量为X k=(x k ,y k ,Δx k ,Δy k ,w k ,h k ,Δw k ,Δh k )T,其中x k ,y k ,Δx k ,Δy k 分别为运动目标在X 方向和Y 方向上的实时位置和相对位移,w k ,h k ,Δw k ,Δh k 为运动目标的长度、高度、相对长度变化和相对高度变化.定义观测变量为Z k =(x k ,y k ,w k ,h k )T .在第k 帧,应用K alman 滤波器预测所有目标的状态变量,得到 x k+1, y k+1, w k+1, h k+1.在第k +1帧,应用运动目标检测得到所有运动物体的实际参数x k+1,y k+1,w k+1,h k+1.遍历所有目标和物体,若同时满足公式(4—6),则运动物体和一个运动目标匹配:x k+1- x k+1+y k+1- y k+1<T (4)w k+1/2< w k+1<2w k+1(5)h k+1/2< h k+1<2h k+1(6)结合Kalman 滤波的运动目标跟踪方法的具体描述如下:(1)若运动物体没有相匹配的目标,则认为该物体为一个新的目标,并将该目标加入到目标列表中.(2)若运动物体找到相匹配的目标,则利用Kalman 滤波方法预测该目标的系统状态,并更新该目标的参数.(3)若目标没有找到相匹配的物体,则认为该目标丢失,并且将目标的预测状态作为当前状态进行下一步的预测.(4)若目标连续5帧内都为丢失状态,则认为该目标失踪,从目标列表中删除.3.2 K alman 滤波方法的实验结果图4为系统实际运行过程中一个运动目标的运动特性及其Kalman 滤波结果.图中x ,y 分别为该运动目标在实时图像中的中心位置在X 方向和Y 方向上的坐标,图4中3条曲线分别表示该运行目标的中心坐标的实际检测结果、Kalman 预测结果和滤波结果.若运动目标运动特性的线性化程度越高,则预测结果越正确,如图中a 段所示的预测结果比b 段所示的预测结果更加接近实际结果.同时由于两个运动人体在场景中是沿着水平方向相向而行,水平方向的运动比垂直方向的运动平稳.因此,图4中X 方向的预测结果要优于Y 方向的预测结果.图4 Kalman 滤波结果Fig.4 Kalman filtering result(下转第618页)・906・第6期滕 游,等:基于DSP 的运动目标检测与跟踪系统设计3 结 论采用“卷层法”对分散染料在涤纶织物中的扩散渗透进行了考察.结果表明:压力不变,温度升高,染料在最外层织物中的上染量升高,但是在涤纶织物中的扩散渗透距离减少,扩散渗透参数也降低;温度不变,压力升高,染料在最外层织物中的上染量升高,在涤纶织物中的扩散渗透距离增加,扩散渗透参数也增高.参考文献:[1] SCHNITZL ER J V,EGGERS R.Mass transfer in polymersin a supercritical CO22at mosphere[J].Journal of Supercritical Fluids,1999,16(1):81292.[2] MANNA L,BANCH ERO M.Diffusion of disperse dyes inPET films during impregnation wit h a supercritical fluid[J].The Journal of Supercritical Fluids,2000,17(2):1872194. 
[3] SICARDI S,MANNA L,BANCH ERO parison ofdye diffusion in poly films in t he presence of a supercritical oraqueous solvent[J].Indust rial and Engineering Chemistry Re2 search,2000,39(12):470724713.[4] BAIRA GI,NIL ANJ ANA,GUL RAJ ANI M L.Studies ondyeing wit h shikonin extracted from Ratanjot by supercritical carbon dioxide[J].Indian Journal of Fibre&Textile Re2 search,2005,30(2):1962199.[5] O EZCAN A S.Adsorption behavior of a disperse dye on poly2ester in supercritical carbon dioxide[J].Journal of Supercriti2 cal Fluids,2005,35(2):1332139.[6] FL EMIN G O S,STEPAN EK F,KAZARIAN S G.Dye diffu2sion in polymer films subjected to supercritical CO2:confocal raman microscopy and modeling[J].Macromolecular Chemis2 try and Physics,2005,206(11):107721083.[7] FL EMIN G O S,KAZARIAN S G,BACH E,et al.ConfocalRaman study of poly(et hylene terepht halate)fibres dyed in supercritical carbon dioxide:dye diffusion and polymer mor2 phology[J].Polymer,2005,46(9):294322949.[8] 阿瑟・D・布罗德贝特.纺织品染色[M].马渝茳,译.北京:中国纺织出版社,2004:3522354.[9] 金咸穰.染整工艺实验[M].北京:中国纺织出版社,1987:83287.(责任编辑:翁爱湘)(上接第609页)4 总 结笔者提出并实现了一种基于DSP的视频图像运动目标检测与跟踪系统的设计方法.系统基于高性能数字信号处理器DM642,采用背景减除法检测运动目标,采用帧间差分法处理背景的突变,采用K alman 滤波方法预测运动目标的运动参数、并与相应的运动目标进行匹配,进而得出运动目标的运动轨迹.实验结果表明,系统能够实时检测并且跟踪运动目标.参考文献:[1] L IP TON A,FUJ IYOSHI H,PA TIL R.Moving target classi2fication and tracking from real2time video[C]//Proceedings ofIEEE Workshop on Applications of Computer A: IEEE,1998:8214.[2] STAU FFER C,GRIMSON W.Adaptive background mixturemodels for real2time tracking[C]//Proceedings of IEEE Con2 ference on Computer Vision and Pattern A: IEEE,1999:2462252.[3] BARRON J,FL EET D,B EAUCH EMIN S.Performance ofoptical flow techniques[J].International Journal of Computer Vision,1994,12(1):42277.[4] 朱虹.数字图像处理基础[M].北京:科学出版社,2005:1592160.[5] ACHAR YA T,RA Y A K.数字图像处理:原理与应用[M].北京:清华大学出版社,2007:2282229.[6] 陈亮,杨吉斌,张雄伟.信号处理算法的实时DSP实现[M].北京:电子工业出版社,2008:2972308.(责任编辑:翁爱湘)・816・浙江工业大学学报第37卷。
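结合上文式(2)、式(3)所述的选择性背景更新策略,下面给出一段示意性的C代码:运动像素保持原背景值,背景像素按学习率α与当前帧加权融合。学习率取值为假设值,仅用于说明原理,并非上述论文的原始程序。
#define ALPHA 0.95f   /* 学习率α,假设值;β = 1 - α */
/* bg:背景估计(浮点保存以便累积);cur:当前帧;motion_mask:1表示运动像素,0表示背景像素 */
void update_background(float *bg, const unsigned char *cur,
                       const unsigned char *motion_mask, int n)
{
    int i;
    for (i = 0; i < n; i++) {
        if (motion_mask[i])
            continue;                                      /* 式(2):运动像素 B(t)=B(t-1) */
        bg[i] = ALPHA * bg[i] + (1.0f - ALPHA) * cur[i];   /* 式(3):背景像素 B=αB+βI */
    }
}
这种选择性更新既能让背景快速适应场景变化,又能避免运动像素对背景估计造成污染。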
实验6,运动目标跟踪算法与实现

一. 实验名称:运动目标跟踪算法设计与实现
二. 实验目的
1. 熟悉各种图像预处理方法,如直方图处理、图像去噪声、图像增强与复原、图像变换等,了解这些方法在图像分析与识别、目标检测及跟踪等各种应用中所起的作用。
2. 熟悉基本的图像分割原理。
3. 能够利用MATLAB 工具完整实现从图像预处理、图像分割、特征提取与分析及各种实际应用的完整流程。
4. 该实验为一个综合设计及应用的实验,目的是要求学生综合利用学到的光电图像处理知识,解决图像识别、目标检测及目标定位与跟踪问题。
进一步深入理解光电图像处理的重要性,提高学生利用光电图像处理基本理论、方法等解决实际问题及提高分析问题的能力。
三. 实验原理及步骤
1. 序列图像中的运动目标形心跟踪
(1) 序列图像的读取与显示
本实验提供了200帧的图像序列,为BMP文件,文件名后缀的序号按场景出现的先后顺序排列。
(2) 图像分割首先,对图像进行必要的阈值分析。
根据实际情况自行确定合适的阈值后,再对图像进行二值化处理。
(3) 形心计算在分割的单帧图像上,计算图像中目标区的形心坐标(Xc ,Yc ),确定目标在像素平面上的位置坐标。
(4) 形心跟踪① MATLAB 确定跟踪波门:即以形心位置为中心,在图像中包含目标的区域添加一个合适的矩形框(如取32×16pixels ,32×32 pixels ,64×32 pixels 等),② 实时跟踪:循环读取序列图像,对每帧图像均计算目标区的形心坐标,连续绘制包含目标区域的波门(即矩形框)。
实现对200 帧序列图像中运动目标的实时稳定跟踪。
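下面给出一段示意性的C代码,概括形心跟踪的核心步骤:按阈值二值化、统计目标像素求形心、再以形心为中心生成32×16像素的跟踪波门并限制在图像范围内。阈值等参数为假设值,实际实验中可由直方图分析确定,也可在MATLAB中实现同样的流程。
typedef struct { int x, y, w, h; } Gate;   /* 跟踪波门(矩形框) */
#define THRESH 128   /* 分割阈值,假设值 */
/* img:单帧灰度图;返回1表示检测到目标并输出波门,返回0表示未检测到目标 */
int centroid_gate(const unsigned char *img, int w, int h, Gate *gate)
{
    long sx = 0, sy = 0, cnt = 0;
    int x, y, cx, cy;
    for (y = 0; y < h; y++)
        for (x = 0; x < w; x++)
            if (img[y * w + x] > THRESH) {   /* 二值化后属于目标的像素 */
                sx += x; sy += y; cnt++;
            }
    if (cnt == 0) return 0;
    cx = (int)(sx / cnt);                    /* 形心坐标(Xc, Yc) */
    cy = (int)(sy / cnt);
    gate->w = 32; gate->h = 16;              /* 波门尺寸取32x16像素 */
    gate->x = cx - gate->w / 2;
    gate->y = cy - gate->h / 2;
    if (gate->x < 0) gate->x = 0;            /* 波门限制在图像范围内 */
    if (gate->y < 0) gate->y = 0;
    if (gate->x + gate->w > w) gate->x = w - gate->w;
    if (gate->y + gate->h > h) gate->y = h - gate->h;
    return 1;
}
对每帧图像循环调用上述过程并绘制波门,即可实现对序列图像中运动目标的实时稳定跟踪。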
2. 序列图像中的运动目标相关跟踪
实验原理及步骤如下:
(1) 序列图像的读取与显示:同实验内容1,序列图像数据另选。
(2)参考模板制作在起始帧图像中,手动选取包含目标的矩形区域(根据目标尺寸,确定模板尺寸)。
第4章 运动目标图像跟踪系统的硬件知识
4.1 运动目标图像跟踪系统原理
这一章的内容主要以原理框图的形式展现。
图4-1 图像跟踪器原理框图
图4-2 图像识别与跟踪子系统原理框图
图4-3 图像信息综合教学实验系统原理框图
4.2 FPGA图像预处理子系统
用户现场可编程门阵列器件FPGA(Field Programmable Gate Array),顾名思义,是一种可由用户根据所设计的数字系统的要求,在现场由自己配置、定义的高密度专用数字集成电路。
FPGA有效地将LSI/VLSI门阵列技术的高逻辑密度和通用性,与用户现场可编程带来的设计灵活、上市快捷的特性结合了起来。
它具有以下三个主要优点:(1)FPGA的用户现场可编程的特性大大缩短了设计实现的周期;(2)FPGA可以提供比PLD和EPLD器件足够大的有效逻辑容量密度,大大提高了系统设计的工艺可实现性和产品的可靠性。
(3)FPGA可以反复编程,反复使用,可以在开发系统中直接进行系统仿真,降低了成本。
本系统采用的是Xilinx公司Virtex-E系列FPGA,型号为XCV400E。
XCV400E为57万门规模,具有153kb的内存可配置分布存储器和16kb的同步数据块存储器,可存储大量的中间数据、图像行数据和图像卷积数据。
4.3 双口RAM实现数据的传输
双口RAM读写操作灵活方便,具有两个端口进行独立的异步操作的能力,并且其接口电路的设计也比较简单。
双端口RAM内一般有一个总线抢占优先级比较器,当两边的CPU访问同一存储单元时,较先送出地址的CPU具有优先访问权,而另一个CPU的地址和读写信号将被屏蔽掉。
位于FPGA与DSP之间的双口RAM的数据传输过程如图4-4所示。
图4-4 双口RAM数据传输
该系统使用的CY7C057V是低功耗CMOS 32K×36的双口静态RAM。
器件中包含了多种仲裁机制来处理多处理器存取相同数据块的情况。
两个端口提供了相互独立的访问通道,可以对存储器中的任意位置进行异步的读写存取。
该器件既可以单独用来作为36-bit双口静态RAMs,也可以多个器件相结合生成72-bit或者更宽的主/副双口静态RAM。
CY7C057V的组成单元包括:32K×36bit的双口RAM单元,以及每个端口各自的I/O线、地址线和控制信号(CE0/CE1、OE、R/W)。这些引脚允许两个端口分别对存储器内部的任意位置进行读写存取。
每个端口提供了一个BUSY信号,用来指示两个端口同时对同一存储位置进行写/读操作的情况。
两个中断(INT)引脚用来实现双口RAM两个端口之间(即FPGA端与DSP的EMIF端之间)端到端的通信,它允许端口间或系统间通过信箱(mailbox)的方式进行通信。
两个旗语控制引脚用来指定享有的资源。
旗语逻辑由8个共享的锁存器组成。
任何时候只有一边能够控制锁存器。
旗语的控制意味着共享资源在被使用。
M/S引脚确定器件工作于主模式(BUSY引脚为输出)还是从模式(BUSY引脚为输入)。
器件还提供了由CE控制的自动电源关闭特性。
每个端口还提供了它自己的输出使能控制(OE),它使得数据能够被从器件中读出。
将双口RAM置于FPGA和DSP之间作为数据缓存器,如图4-4所示,因为通常DSP对双口RAM的读操作速度要高于FPGA对其写入图像数据的速度。
这样可以使FPGA对双口RAM的写操作连续进行,从而达到数据的实时传送。
双口RAM与EMIF的接口属于异步接口,其连线如图4-5所示。
图4-5 异步器件的接口连线图
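下面给出一段示意性的C代码,说明DSP一侧如何通过EMIF映射地址从双口RAM中读取一帧数据。其中双口RAM的映射基地址、帧长度均为本文假设的示例值,具体取决于硬件连接与EMIF配置;实际系统中通常改用DMA搬移以减轻CPU负担。
#define DPRAM_BASE  ((volatile unsigned int *)0x02000000)  /* 双口RAM映射地址,假设值 */
#define FRAME_WORDS 1024                                   /* 一帧数据的字数,假设值 */
void read_frame_from_dpram(unsigned int *dst)
{
    int i;
    for (i = 0; i < FRAME_WORDS; i++)
        dst[i] = DPRAM_BASE[i];   /* 逐字读出;实际系统中可交给DMA后台完成 */
}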
4.4 DSP子系统
4.4.1 TMS320C6201 DSP
C6000系列DSP不仅运算速度高,而且片内集成了许多外围设备,支持多种工业标准的接口协议,能够提供高带宽的数据I/O能力。
4.4.1.1 综述
TMS320C6201是一种高性能的数字信号处理器,片内锁相环路(PLL)将外部输入时钟频率乘以2或4,使得其最高工作频率达到200MHz,每个指令周期为5ns,运算速度可达到1600MIPS;硬件上采用超长指令字(VLIW)体系结构,每个周期最多能有8条32bit的指令并行执行(但对硬件资源的使用不能有冲突);两个数据通道,每个通道有4个处理单元,包括2个16bit×16bit的乘法器和6个算术逻辑单元;采用加载/存储体系结构,数据依靠32个32bit的通用寄存器进行多数据单元间的传输。
TMS320C6201的片内存储器分为程序区和数据区两个部分,片内程序RAM的容量为64KB,根据分配方式的不同位于不同的地址,可存放16K条32bit指令,也就是2K个256bit宽度的取指包。
当片内程序RAM设置为映射模式时,可以利用DMA控制器对寄存器进行读写。
片内数据RAM也是64KB的一块,地址空间为80000000h~8000FFFFh。
整个RAM块被分为4个8K深的存储体,每个存储体的数据宽度16bit。
DMA可以对片内数据RAM进行8bit、16bit、32bit的数据存取。
然而,若要在同一周期内对片内数据RAM进行多个不同的访问,则要求这些访问的数据处在不同的存储体或块中。
4.4.1.2 外部存储器接口EMIF的设计
C6201的EMIF接口如图4-6所示。
图4-6 EMIF接口图
本实验系统中的EMIF接口程序如下所示:
/******* 系统寄存器变量定义 *******/
#define EMIF_GCTL 0x1800000
#define EMIF_CE1  0x1800004
#define EMIF_CE0  0x1800008
#define EMIF_CE2  0x1800010
#define EMIF_CE3  0x1800014
/******* 外部存储器接口参数初始化 *******/
void emif_init()
{
    *(int*)EMIF_GCTL = 0x3020;
    *(int*)EMIF_CE1  = 0x5CB53203;
    *(int*)EMIF_CE0  = 0x40;
    *(int*)EMIF_CE2  = 0xFFFF3F23;
    *(int*)EMIF_CE3  = 0x11910620;
    return;
}
通过对EMIF各空间控制寄存器的设置,确定了不同外部存储器数据读写的建立、触发和保持时间。
4.4.1.3 DMA直接存储器访问机制
直接存储器访问(Direct Memory Access,DMA)是一种重要的数据访问方式,由DMA控制器可以完成DSP存储空间内的数据搬移,搬移的源/目的可以是片内存储器、片内外设或外部器件。
DMA控制器的主要特点有以下:(1)后台操作:DMA控制器可以独立于CPU工作(2)高吞吐量:可以按照CPU时钟的速度进行数据存储(3)个通道:DMA控制器可以控制4个独立通道的传输(4)多帧传输:传送的数据块可以分成多个数据帧(5)优先级可编程:每个通道对于CPU的优先级可以编程设置(6)2位的址范围:可对下列任何一个的址映射区进行访问●片内数据存储器●片内程序存储器●片内的集成外设●通过EMFI接口的外部存储器●扩展总线上的扩展存储器(7)数据的字长可编程:每个通道可以独立设置数据单元为字节、半字或字(8)自动初始化:每传送完一批数据,DMA通道可以自动配置下一批数据块的传送参数.在本实验系统中主要用到了DMA 数据通道0,其的址如下表所示:表4-1 DMA 控制寄存器向DMA 主控制寄存器START 域写入01b ,将立即启动该通道的DMA ,并且START 的值变为01b 。
DMA 执行期间,向STATR 写10b 可以暂停DMA 传输,如果某个数据单元的读传输已经完成,此时DMA 通道会继续完成对应的写传输,START 在完成写传输后变为10b 。
START=00b 时,DMA 控制器被停止,除非DMA 工作于自动初始化模式下,否则一旦DMA 完成数据传输,START 的值变成00b 。
向主控制寄存器SATTR 位写入11b 将以自动初始化的方式启动DMA 。
每次块在传送任务完成后,DMA 控制器会自动调用DMA 全局数据寄存器的值重新设置有关传输参数,为下一批数据传输做准备。
自动初始化可以使DMA 进行连续操作和重复操作。
本实验系统按照下列步骤设置通道0控制寄存器:(1)设置PRICTL0的START=00b(2)设置SECCTL0(3)设置DMA 通道源/目的的址寄存器,以及传输寄存器(4)向PRICTL 寄存器START 域写01b 或者1lb ,启动传输系统软件程序如下:*(ini*)DMA0_SRC_ADDR_ADDR=0x03000000+0x1000*ad_num; //DMA 通道0的源的址 寄存器名 缩写 的址(HEX byte ) DMA 辅助控制 AUXCTL 0184 0070 DMA 通道0目的的址 DST0 0184 0018 DMA 通道0主控 PRICTL0 0184 0000 DMA 通道0副控 SECCTL0 0184 0008 DMA 通道0源的址 STC0 0184 0010 DMA 通道0传输计数 XFRCNT0 0184 0020*(int*)DMA0_DEST_ADDR_ADDR=(int)Ori;//DMA通道0目的的址*(int*)DMA0_XFER_COUNTER_ADDR=0X400;//DMA通道0传送计数,一帧1024个数据单元*(int*)DMA0_RIMARY_CTRL_ADDR=0X01000055;//DMA通道0主控,START域写入01b,立即启动DMA计数While(GET_FIELD(DMA0_PRIMARY_CTRL_ADDR,STATUA,2)=1{};//查看主控制寄存器状态域STATUS是否为工作状态,等待直至数据传输工作完成4.4.1.4 中断建立建立和使用C中断的四个步骤:1.选择中断源并写SIR(1)中断服务表(IST)和中断服务表指针寄存器(ISTP)IST是一个包含中断服务代码的取指包表,当CPU开始处理一个中断时,它指向IST。
ISTP是用来定位中断服务例程的。
Reset的取指包必须定位在0的址,而其他的中断可以在程序寄存器的任意一个256字的边界。
(2) 中断服务取指包(ISFP)ISFP是一个服务于中断的取指包,图4-7中所示,要使中断跳回主程序,取指包包含了一条转移指令(BIRP),后面紧跟5个空操作指令。
其中的ISTB 位确定了中断向量表的基的址。
图4-7 中断取指包2.创建和初始化中断向量表首先要为中断向量表保留一定的空间(在汇编语言文件的“.sect”命令中),并告诉连接器要在存储器的那个部分安装中断向量表。
3.设置适当的寄存器来使能和处理中断在一个典型的DSP系统中,硬件中断是由DSP的外部器件或者片上外设触发的,在任何一种情况下,中断能够使处理器跳到ISTB入口处。
控制状态寄存器(CSR)和中断使能寄存器(IER)能够使能和禁止中断处理。
中断标志寄存器(IFR)确定被挂起的中断,中断设置寄存器(ISR)和中断清除寄存器(ICR)能够进行中断处理操作。