Distributed passive radar sensor networks with near-space vehicle-borne receivers


Distributed Kalman Filter

The distributed Kalman filter (DKF) is a filtering algorithm built on distributed computation. Its purpose is to estimate the state of a system observed by multiple sensors, where each sensor can observe only part of the system state.

The conventional Kalman filter is built around a single central controller that is responsible for state estimation and control of the entire system.

In practice, however, systems typically consist of sensors at multiple locations, so the centralized approach introduces problems such as inter-sensor communication latency and limited network bandwidth, degrading the system's real-time performance and stability.

The distributed Kalman filter decomposes the Kalman filter into multiple local filters. Each local filter estimates only the states it observes locally; the filters exchange local measurements and related information to update one another, ultimately producing a global state estimate.

Compared with the conventional Kalman filter, the distributed Kalman filter spreads the computational load, has low communication overhead, and offers good real-time performance; it is therefore widely used in UAVs, sensor networks, intelligent transportation and related fields.

The basic framework of the distributed Kalman filter is:

- System model: the system state equation and the measurement equation;
- Local estimators: each local estimator uses its local measurements to estimate the local state and predict the state at the next time step;
- Information exchange: each local estimator exchanges its local measurements and estimates with neighboring estimators and updates its own estimate accordingly;
- Global estimator: the global estimator collects the messages from all local estimators and fuses them into a global state estimate.

Concretely, a distributed Kalman filter can be implemented in the following steps:

1. Determine the system model: the state and measurement equations are the key ingredients of the distributed Kalman filter. Linear state and measurement equations, written in matrix form, are the most common choice.

2. Choose one node as the global estimator: a node is needed to fuse all local estimates into the global state estimate. Usually a central node is chosen, or the best node is selected according to specific criteria.

3. Assign a set of observed variables to each local estimator: since each local estimator observes only part of the system state, its set of observed variables must be specified in advance. Note that the observed variable sets of different local estimators should be mutually independent.
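The three steps above can be sketched in code. This is a minimal hypothetical example (a 2-state system with two sensors, each observing one state component), not a production implementation: each local filter runs a standard Kalman predict/update on its own measurement, and a global estimator fuses the local estimates with covariance weighting.

```python
import numpy as np

F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (step 1)
Q = 0.01 * np.eye(2)                     # process noise covariance

class LocalFilter:
    def __init__(self, H, R):
        self.H, self.R = H, R            # local measurement model (step 3)
        self.x = np.zeros(2)
        self.P = np.eye(2)

    def step(self, z):
        # predict
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + Q
        # update with the local measurement only
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x, self.P

def fuse(estimates):
    # global estimator (step 2): covariance-weighted fusion of local estimates
    info = sum(np.linalg.inv(P) for _, P in estimates)
    vec = sum(np.linalg.inv(P) @ x for x, P in estimates)
    P = np.linalg.inv(info)
    return P @ vec, P

# node A observes position, node B observes velocity
a = LocalFilter(np.array([[1.0, 0.0]]), np.array([[0.1]]))
b = LocalFilter(np.array([[0.0, 1.0]]), np.array([[0.1]]))
x_global, P_global = fuse([a.step(np.array([1.2])), b.step(np.array([0.9]))])
print(x_global)
```

A real deployment would also exchange information between neighboring estimators and account for correlation between local estimates, which the simple covariance-weighted fusion here ignores.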

DAS Technology

Distributed Acoustic Sensing (DAS)

1. Development of the sensing technique

In the 1970s, low-loss silica optical fiber became available, and researchers took a strong interest in the loss mechanisms of optical fiber. Their studies showed that the optical loss in the near-infrared absorption window is dominated by Rayleigh scattering.

Fiber loss and defects can be tested by detecting the Rayleigh backscatter; on this basis researchers invented the optical time-domain reflectometer (OTDR), a technique that greatly advanced optical fiber communications.

In the 1980s, interference effects of Rayleigh scattering were discovered during OTDR use: the detected Rayleigh backscatter intensity fluctuated in time and space, which seriously degraded the accuracy of fiber-loss evaluation.

To solve this problem, extensive research focused on the mechanism and characteristics of coherent Rayleigh scattering. This accelerated the birth of coherent OTDR, which was then used to monitor the state of ultra-long-haul fiber communication links.

In the early 1990s, H. F. Taylor and colleagues proposed exploiting this interference effect for distributed disturbance detection along a fiber, and carried out proof-of-concept experiments and tests.

Subsequently, R. Juskaitis et al. published the first academic paper on distributed fiber-optic vibration sensing based on coherent Rayleigh scattering.

In the early 2000s, as narrow-linewidth single-frequency laser technology matured and became commercially available, the technique developed rapidly and became known as phase-sensitive optical time-domain reflectometry (Φ-OTDR).

At this stage, Φ-OTDR used direct detection to acquire the intensity of the coherent Rayleigh backscatter, and differenced the intensity between successive times to detect external disturbances dynamically.

However, the backscatter intensity is not a monotonic function of the physical quantity applied to the fiber, so this demodulation scheme could only qualitatively indicate whether a disturbance event had occurred; it could not directly recover an accurate disturbance waveform.

Φ-OTDR at this qualitative-detection stage is usually called distributed fiber-optic vibration sensing (DVS).

In 2011, the Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, was the first internationally to propose and pursue Φ-OTDR research based on extracting the phase of the fiber's Rayleigh backscatter.

Exploiting the linear mapping between the spatial difference of the Rayleigh backscatter phase and the external vibration, the researchers used digital coherent phase demodulation to achieve, for the first time, distributed quantitative measurement of vibration signals along a fiber. This marked Φ-OTDR's entry into the quantitative-measurement stage: distributed acoustic sensing (DAS).

Radar Low-Altitude Detection Algorithms

Radar low-altitude detection algorithms are techniques for detecting and tracking low-altitude targets, used in both military and civilian applications.

The main challenges in low-altitude radar detection are clutter from the ground and from low-altitude objects, and occlusion of targets by terrain and buildings.

Some commonly used low-altitude detection algorithms are:

1. CFAR: the constant false alarm rate (CFAR) algorithm is an adaptive clutter-suppression method. It estimates the clutter power level around each cell and adjusts the detection threshold accordingly, so as to maintain a constant false-alarm probability.

In low-altitude detection, CFAR effectively suppresses ground and low-altitude clutter and improves the probability of target detection.
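A minimal cell-averaging CFAR (CA-CFAR) sketch of the idea just described, in one dimension; the window sizes, false-alarm rate and injected target below are illustrative values, not parameters from the text.

```python
import numpy as np

def ca_cfar(x, num_train=16, num_guard=2, pfa=1e-3):
    """Return a boolean detection mask for the power samples in x."""
    n = len(x)
    detections = np.zeros(n, dtype=bool)
    # threshold factor for CA-CFAR with exponentially distributed noise power
    alpha = num_train * (pfa ** (-1.0 / num_train) - 1.0)
    half = num_train // 2
    for i in range(half + num_guard, n - half - num_guard):
        lead = x[i - num_guard - half : i - num_guard]
        lag = x[i + num_guard + 1 : i + num_guard + 1 + half]
        noise = (lead.sum() + lag.sum()) / num_train   # local clutter estimate
        detections[i] = x[i] > alpha * noise
    return detections

rng = np.random.default_rng(0)
power = rng.exponential(1.0, 512)   # clutter/noise power samples
power[200] += 40.0                  # injected target
mask = ca_cfar(power)
print(mask[200])
```

The threshold factor `alpha` is the standard CA-CFAR value for exponentially distributed noise power; other CFAR variants (GO-, SO-, OS-CFAR) differ mainly in how the noise level is estimated from the training cells.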

2. MTI: the moving target indication (MTI) algorithm exploits the difference in Doppler shift between moving targets and stationary clutter.

The echo signal is passed through a filter bank that removes the stationary clutter while retaining the moving-target signal.

MTI reduces clutter interference and improves the ability to detect moving targets.
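The simplest MTI filter, a two-pulse canceller, illustrates this: subtracting successive echoes cancels returns whose phase is constant from pulse to pulse (stationary clutter) while passing returns with a pulse-to-pulse Doppler phase rotation. The PRF, Doppler shift and amplitudes below are made-up values.

```python
import numpy as np

prf = 1000.0                       # pulse repetition frequency, Hz
n_pulses = 64
t = np.arange(n_pulses) / prf

clutter = np.exp(1j * 0.3) * np.ones(n_pulses)    # zero-Doppler return
target = 0.1 * np.exp(2j * np.pi * 200.0 * t)     # 200 Hz Doppler return

echo = clutter + target
mti_out = echo[1:] - echo[:-1]     # two-pulse canceller

# clutter is removed exactly; the weak moving target survives
print(np.mean(np.abs(echo)), np.mean(np.abs(mti_out)))
```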

3. DPC: the Doppler Power Coherence (DPC) algorithm is a detection method based on Doppler shift.

It analyses the Doppler spectrum and detects target signals with high power spectral density.

DPC offers some robustness for low-altitude targets and can mitigate the effects of occlusion by terrain and buildings.

4. Cooperative detection: cooperative detection algorithms use multiple radar stations to detect low-altitude targets.

By combining signal processing and information fusion across the stations, they improve the detection probability and localisation accuracy for low-altitude targets.

Cooperative detection mitigates the clutter and occlusion problems faced by any single radar station.

Each of these algorithms has its own strengths; the appropriate one should be chosen for the specific application scenario.

Radar system parameters such as beam width, pulse width and sampling rate also need to be set carefully to obtain good low-altitude detection performance.

Research and Implementation of Clustering Routing Algorithms in Wireless Sensor Networks

Abstract: A wireless sensor network consists of a large number of low-power sensor nodes distributed over a monitored region. The nodes cooperate in a self-organising way to perform environment sensing and data collection.

Because the nodes have limited energy and computing resources, designing efficient routing algorithms is an important challenge in wireless sensor networks.

This paper studies and implements clustering routing algorithms for wireless sensor networks, focusing on the basic principles of clustering, its advantages and disadvantages, and its performance in practical applications.

Keywords: wireless sensor network; clustering routing algorithm; self-organisation; energy efficiency.

1. Introduction

A wireless sensor network (WSN) is a wireless network composed of a large number of low-power, small, distributed sensor nodes, capable of monitoring, collecting and processing various kinds of environmental information in real time.

WSNs have broad application prospects in environmental monitoring, agriculture, healthcare, transportation and other fields.

However, because the nodes have limited energy and computing resources, designing efficient routing algorithms has become an important problem in wireless sensor networks.

2. Basic principles of clustering routing

Clustering routing is a common routing mechanism in wireless sensor networks. It partitions the network's nodes into multiple clusters; each cluster has a cluster head that communicates with the other cluster heads and forwards data to the base station.

The basic principle of clustering routing is as follows:

(1) Cluster-head election: nodes compete to become cluster head based on parameters such as their energy and distance. Normally, nodes with higher residual energy are more likely to be elected cluster head.

(2) Intra-cluster communication: the cluster head receives data from the other nodes in its cluster, aggregates them, and sends the result to other cluster heads. Communication within a cluster normally uses short-range radio links to reduce energy consumption and network congestion.

(3) Inter-cluster communication: cluster heads communicate with one another over longer distances to deliver the aggregated data to the base station, typically using higher-power, longer-range radio links.
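The cluster-head election in step (1) is often implemented with a randomized, rotating threshold. The sketch below follows the classic LEACH threshold T(n) = P / (1 - P * (r mod 1/P)) as a representative example; LEACH is not named in the text, and the node count and head fraction are illustrative.

```python
import random

P = 0.05   # desired fraction of cluster heads per round (example value)

def threshold(round_no, was_head_recently):
    # nodes that served as head in the last 1/P rounds sit out
    if was_head_recently:
        return 0.0
    return P / (1.0 - P * (round_no % int(1.0 / P)))

def elect(nodes, round_no):
    # each node draws a random number; below the threshold -> cluster head
    return [n["id"] for n in nodes
            if random.random() < threshold(round_no, n["recent_head"])]

random.seed(1)
nodes = [{"id": i, "recent_head": False} for i in range(100)]
heads = elect(nodes, round_no=0)
print(heads)
```

Rotating the role this way is what balances the cluster heads' energy load over time.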

3. Advantages and disadvantages of clustering routing

Clustering routing has the following advantages:

(1) Lower energy consumption: by keeping communication local, clustering reduces the number of long-range transmissions each node must make, lowering energy consumption.

(2) Longer network lifetime: by balancing the cluster heads' load and allocating the cluster-head role sensibly, clustering routing extends the network's lifetime.

Hikvision Perimeter Protection Series Product Introduction

Perimeter Protection in All Weather: Hikvision Thermal Products

ABOUT HIKVISION

Industry Pioneer. Since 2001, Hikvision has grown from a single-product supplier into the world's leading provider of security products and solutions. From the early digital age to today's intelligence era, the company has seized every opportunity to advance the industry with innovative technologies, venturing into areas such as artificial intelligence, cloud computing, and the fusion of deep learning with multi-dimensional perception. Hikvision leads the security industry as an IoT provider with video as its core competency.

Global Operations. Hikvision has established one of the most extensive marketing networks in the industry, comprising 44 international subsidiaries and branch offices, to ensure quick responses to the needs of customers, users and partners.

Core technologies: visual perception, cloud storage, big data, cross-media perception and reasoning, audio and video data storage, streaming-media networking and management, video codecs, embedded systems development.

BASIC PRINCIPLES OF THERMAL CAMERAS

Each type of radiation has a unique wavelength. Any object with a temperature above absolute zero emits a detectable amount of infrared radiation; the higher an object's temperature, the more infrared radiation it emits. An infrared camera's effective range is what is meant by "seeing an object". Defined thresholds, known as Johnson's Criteria, give the minimum number of pixels necessary to detect, recognize, or identify targets captured by scene imagers.
The lower limits of detection, recognition and identification (DRI), according to the Johnson criteria, are:

- Detection: to distinguish an object from the background, the object's image must span at least 1.5 pixels.
- Recognition: to classify the object (animal, human, vehicle, boat, etc.), the image must span at least 6 pixels across its critical dimension.
- Identification: to identify the object and describe it in detail, the critical dimension must span at least 12 pixels.

While invisible to human eyes, thermal cameras detect this kind of radiation (wavelengths of 8-14 μm, i.e. 8,000-14,000 nm) and produce images from temperature differences, making it possible to see the environment without visible light.

Detection, recognition and identification distances are tabulated per lens in the original brochure (based on a 17-μm sensor; the flattened table cannot be reliably reconstructed here). VCA distances, also based on a 17-μm sensor, cover the rules line crossing, intrusion, region entrance and region exit, with separate figures for vehicles and humans.

WHY DO WE USE THERMAL CAMERAS FOR PERIMETER PROTECTION?

Superior environmental adaptability. Thermal cameras capture sharp images around the clock, regardless of environmental factors such as light levels, contrast, backlighting, shadows, fog, smog and rain.

High alarm accuracy. Based on deep-learning algorithms, thermal cameras provide ultra-effective detection for line crossing, intrusion, and region entry and exit; false alarms triggered by non-human and non-vehicle objects are vastly reduced.

Better visuals. With thermal cameras, you can easily discover objects and potential risks that are invisible to conventional cameras.
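The Johnson-criteria thresholds above translate into maximum DRI ranges via simple pinhole geometry: a target of critical dimension d at range R spans roughly n = f·d/(p·R) pixels for focal length f and pixel pitch p. The sketch below uses the 8 mm lens and 17 μm pitch mentioned in the text with an assumed 0.5 m human critical dimension; the resulting ranges are illustrative and will not exactly reproduce the brochure's tables.

```python
def max_range(focal_m, pitch_m, critical_dim_m, pixels_required):
    """Longest range (m) at which the target still spans `pixels_required` pixels."""
    return focal_m * critical_dim_m / (pitch_m * pixels_required)

f, p, d = 8e-3, 17e-6, 0.5   # 8 mm lens, 17 um pitch, assumed 0.5 m target
for name, n in [("detection", 1.5), ("recognition", 6), ("identification", 12)]:
    print(f"{name}: {max_range(f, p, d, n):.0f} m")
```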
In addition to thermal images, the built-in visible-light module can provide supplementary recorded evidence, lowering installation costs.

Extended distances. Thermal detection covers much larger distances and requires fewer devices to do it, compared with conventional optical cameras.

MEDIUM-RANGE PROTECTION

Advanced Intelligence. Based on deep-learning algorithms, Hikvision's thermal products deliver powerful and accurate behavior analyses, including detections such as line crossing, intrusion, region entrance and exit, and more. The intelligent human/vehicle detection feature helps reduce false alarms caused by animals, camera shake, falling leaves, or other irrelevant objects, significantly improving alarm accuracy. Deep-learning-based dynamic fire-source detection takes advantage of Hikvision's security big data, containing over 100,000 samples of global climate information, to provide the highest possible detection accuracy. This front-end device can detect fire from raw, frame-by-frame data, ensuring firsthand image analysis and rapid alarm triggering.

Smart Tracking Linkage. The Hikvision Thermal Smart Linkage Tracking System is formed by one bi-spectrum bullet camera offering a panorama and one optical PTZ camera smartly tracking moving targets. The bullet camera provides all-weather protection with 24/7 live view of significant passageways, highly accurate detection in specified areas, and human and vehicle classification.
The speed dome identifies trespassers with automatic or manual tracking of multiple targets and can zoom in for more detail. Within the linkage system, one-touch connection and automatic alignment between the bi-spectrum bullet camera and the optical PTZ camera are easy to achieve.

LONG-RANGE PROTECTION: PRODUCT MODELS

Short range (20-70 m): DS-2TDxx17-2/QA, DS-2TDxx17-3/QA, DS-2TDxx17-6/QA, DS-2TD2617-10/PA(QA), DS-2TD1228-2/QA, DS-2TDxx28-3/QA, DS-2TDxx28-7/QA, DS-2TD2628-10/QA.

Representative short-range specifications (thermal network bullet/turret cameras DS-2TD1228/QA, DS-2TD2628/QA, DS-2TD1217/QA and bi-spectrum bullet DS-2TD2617/QA): thermal resolution 256 × 192 (12 μm) or 160 × 120 (17 μm); optical 2688 × 1520; thermal lenses 2/3/7 mm or 3/6/10 mm; optical lenses 2/4/6 mm or 4/6/8 mm; VCA rules: line crossing, intrusion, region entrance, region exiting; audible alert and strobe light; temperature-exception range -20 to 150°C with ±8°C accuracy; working temperature -40°C to 65°C (-40°F to 149°F); IP66/IP67.

HeatPro series and industrial fixed cameras (e.g. DS-2TD2138/QY): thermal resolutions 384 × 288 or 640 × 512 (12 or 17 μm); optical 2688 × 1520; thermal lenses from 7 to 35 mm, with fields of view such as 7 mm: 88.5° × 73.2°, 10 mm: 37.5° × 28.5°, 25 mm: 14.9° × 11.2°, 35 mm: 17.67° × 14.18°; alarm I/O (2-ch inputs, 0-5 VDC; 2-ch relay outputs); anti-corrosion coating (PY).

Industrial PT cameras, speed domes and positioning systems (DS-2TD6237-H4L/W(Y), DS-2TD6267-H4L/W(Y), DS-2TD8167-ZC(E/G)F(L)W(Y)): thermal 640 × 512 (17 μm); optical 2688 × 1520 or 1920 × 1080; thermal lenses 25/50/75/100/150/190/230 mm; optical zoom lenses up to 16.7-1000 mm; the same VCA rules and temperature-exception specifications; working temperature -40°C to 65°C; anti-corrosion coating (PY); IP66.

Medium range (70-350 m): DS-2TD2137-4/P, DS-2TD2137-7/P, DS-2TD2x37-10/P, DS-2TD2x37-15/P, DS-2TD2x37-25/P, DS-2TD2x37-35/P, DS-2TD2138-4/QY, DS-2TD2138-7/QY, DS-2TD2138-10/QY, DS-2TD2138-15/QY.

Long range (over 350 m): DS-2TD2138-25/QY, DS-2TD2167-7/P, DS-2TD2x67-15, DS-2TD23x6-75, DS-2TD23x6-100, DS-2TD2x67-25, DS-2TD2x67-35, DS-2TD23x6-50, DS-2TD28x6-50, DS-2TD4237-10, DS-2TD41x7-50, DS-2TD62x7-50, DS-2TD4237-25, DS-2TD41x7-25, DS-2TD62x7-75, DS-2TD6267-100, DS-2TD8167-150, DS-2TD8167-190, DS-2TD8167-230.

The per-model effective-coverage and VCA range figures (vehicles 1.4 × 4.0 m, humans 1.8 × 0.5 m) in the original tables are flattened beyond reliable reconstruction. The brochure closes with contact details for Hikvision's European offices (Netherlands, France, Poland, Romania, Benelux, Hungary, Czech Republic, Germany).

Overview of Common Remote Sensing Satellites and Sensors

EOS-era sun-synchronous Earth-observation instruments mentioned include:

- CERES (Clouds and the Earth's Radiant Energy System): two sensors, one cross-track scanning and one rotating through 360°, each with three channels, measuring reflected solar radiation at about 0.3-5 μm, window radiation at about 8-12 μm, and total radiance at about 0.3-100 μm, from limb to limb.
- MOPITT: Measurements of Pollution in the Troposphere.
- ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer): VNIR, SWIR and TIR subsystems; 60 km swath.
- MODIS: Moderate Resolution Imaging Spectroradiometer.
- AIRS (Atmospheric Infrared Sounder): 4 visible/near-infrared bands at 0.4-1.0 μm and 2378 infrared channels at 3.74-15.4 μm; 1650 km swath; roughly 1 km vertical resolution.

The Aqua satellite (EOS PM-1, the "afternoon" satellite, named for water) carries CERES, MODIS and AIRS, and in addition:

- AMSR-E: Advanced Microwave Scanning Radiometer for EOS.
- AMSU-A (Advanced Microwave Sounding Unit-A): units A1 (13 channels) and A2 (2 channels), 15 channels in total; 1690 km swath.
- HSB: Humidity Sounder for Brazil.

The Aura satellite carries:

- HIRDLS: High Resolution Dynamics Limb Sounder.
- MLS: Microwave Limb Sounder.

Spirent Test Cases in Civil Aviation

Spirent has carried out a range of tests in the civil aviation field; some cases follow:

1. Aviation communication testing: Spirent provides airlines with comprehensive communication test solutions, covering voice, data link and satellite communications.

These tests verify the reliability and security of the airlines' various communication systems.

2. Aviation navigation testing: Spirent's navigation test solutions cover the performance and accuracy of instrument landing systems (ILS), the Global Positioning System (GPS) and other navigation equipment.

These tests help ensure that pilots can accurately hold course and locate the touchdown point in flight.

3. Avionics testing: Spirent provides test solutions for avionics such as flight control computers, automatic flight control systems and weather radar.

These tests are used to verify the performance, reliability and safety of the equipment.

4. Aviation safety testing: Spirent also offers aviation safety test solutions, covering aircraft collision-avoidance systems, emergency evacuation systems and other safety-related equipment.

These tests help ensure the safety of the aircraft and the lives of its passengers.

5. Aviation network testing: as airlines and airports continue to expand their networks, Spirent provides network test solutions covering the performance, reliability and security of network equipment.

These tests help airlines and other aviation organisations operate their networks efficiently.

In summary, Spirent has conducted a broad range of tests in civil aviation to ensure the reliability and security of airline and airport equipment and networks.

Research on Energy-Aware Routing Algorithms for Wireless Sensor Networks

Wireless sensor networks (WSNs) consist of large numbers of distributed sensor nodes that autonomously sense and collect information and transmit it to other nodes or to a base station for processing.

However, the nodes' limited energy supply is one of the main challenges facing WSNs.

Reducing energy consumption is essential to extending network lifetime, so research on energy-aware routing algorithms for wireless sensor networks is both important and urgent.

Energy-aware routing algorithms treat energy consumption as a key routing metric.

When selecting a transmission path, they consider factors such as the nodes' residual energy, the communication quality between nodes, and distance, in order to reduce the network's energy consumption.

The following discusses some key research topics in energy-aware routing for wireless sensor networks.

1. Requirements and goals of energy-aware routing

The requirements and goals of energy-aware routing mainly include:

1.1 Energy balance: consume node energy evenly across the whole network, so that no nodes die prematurely and interrupt the network.

1.2 Path stability: choose stable transmission paths and limit path churn, reducing the energy cost of path switching.

1.3 Distance optimisation: choose the shortest paths according to inter-node distance, reducing energy consumption and transmission delay.

1.4 Coverage: choose transmission paths according to node coverage, so that the network remains fully covered.

2. Research topics in energy-aware routing

2.1 Distance-aware routing. Distance-aware routing selects the shortest paths based on inter-node distances, reducing energy consumption and transmission delay.

Common distance-aware routing algorithms include those based on the shortest-path tree (SPT) and those based on distance vectors.

These algorithms compute the distances between nodes to choose the best transmission path, thereby lowering energy consumption.
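A hypothetical sketch of such a selection: ordinary Dijkstra shortest-path search, with each edge cost augmented by a penalty on the receiving node's residual energy so that routes avoid nearly depleted nodes. The weighting and the toy topology are illustrative, not from the text.

```python
import heapq

def energy_aware_path(graph, energy, src, dst, alpha=2.0):
    """graph: {node: {neighbor: distance}}; energy: residual energy in (0, 1]."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            # distance plus a penalty that grows as residual energy shrinks
            cost = d + w + alpha * (1.0 / energy[v] - 1.0)
            if cost < dist.get(v, float("inf")):
                dist[v], prev[v] = cost, u
                heapq.heappush(heap, (cost, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

graph = {
    "s": {"a": 1.0, "b": 1.5},
    "a": {"t": 1.0}, "b": {"t": 1.0},
    "t": {},
}
energy = {"s": 1.0, "a": 0.2, "b": 0.9, "t": 1.0}
print(energy_aware_path(graph, energy, "s", "t"))
```

In this toy network the geometrically shorter route through node `a` is rejected because `a` has only 20% energy left; the selected path is s → b → t.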

2.2 Energy-balanced routing. Energy-balanced routing aims to equalise energy consumption across the network's nodes, so that no nodes die prematurely and interrupt the network.

Measurement Principle of the Vaisala Carbon Dioxide Sensor

The Vaisala carbon dioxide sensor, introduced in 1997, featured a new capability: a micro-machined, electrically tunable Fabry-Perot interferometer (FPI) filter for built-in reference measurement.

Since the late 1990s, this reliable and stable sensor has been delivering accurate measurements across numerous industries and applications, from building automation and CO2 safety to life science and ecological research.

Operating principle: gases have absorption bands in the infrared (IR) region, each gas corresponding to a characteristic wavelength.

When IR radiation passes through a gas mixture containing the gas we are measuring, part of the radiation is absorbed.

The amount of radiation that passes through the gas therefore depends on how much of the measured gas is present, and this can be detected with an IR detector.

The Vaisala CARBOCAP® sensor has an electrically tunable FPI filter.

Besides measuring the gas absorption, the micro-mechanical FPI filter can take a reference measurement at a wavelength where no absorption occurs.

For the reference measurement, the FPI filter is electrically tuned to switch its passband from the absorption wavelength to a non-absorbing wavelength.

The reference measurement compensates for potential changes in light-source intensity, as well as for contamination and dirt accumulation in the optical path.
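The compensation can be sketched numerically: concentration is derived from the ratio of the detector signal at the absorbing wavelength to the signal at the non-absorbing reference wavelength, and that ratio cancels common-mode factors such as source aging or dirt on the optics. The Beer-Lambert parameters below are made-up illustrative values, not Vaisala calibration data.

```python
import math

EPS_L = 0.002   # assumed absorption coefficient x path length, per ppm

def detector_signals(conc_ppm, source_intensity, fouling=1.0):
    """Measured intensities at the absorbing and reference wavelengths."""
    i_abs = source_intensity * fouling * math.exp(-EPS_L * conc_ppm)
    i_ref = source_intensity * fouling      # no absorption at the reference
    return i_abs, i_ref

def concentration(i_abs, i_ref):
    # the ratio removes source intensity and fouling; invert Beer-Lambert
    return -math.log(i_abs / i_ref) / EPS_L

# the same gas, measured before and after the source dims and optics get dirty
for source, fouling in [(1.0, 1.0), (0.7, 0.8)]:
    i_abs, i_ref = detector_signals(420.0, source, fouling)
    print(round(concentration(i_abs, i_ref), 1))
```

Both measurements recover the same concentration even though the raw signal levels differ, which is exactly the benefit the reference wavelength provides.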

This capability means the CARBOCAP® sensor can sustain extremely stable measurement operation over long periods.

An instrument that uses a single light source to measure at several absorption wavelengths and a reference wavelength is called a single-beam, multi-wavelength instrument.

The technique is widely used in expensive analysers.

What distinguishes the CARBOCAP® sensor is its micro-mechanical FPI filter, which lets the sensor perform the multi-wavelength measurement with a single detector.

The sensor is physically small, which means the technology can be integrated into compact probes, modules and transmitters.

Typical applications: Vaisala CARBOCAP® sensor technology suits a wide range of applications, but since the end-customer value differs for each industrial application, much depends on how each product line implements the CARBOCAP® technology.

In the carbon dioxide measurement products GMP251 and GMP252, the technology is used for measurements at the ppm (parts-per-million) and percentage levels.

Because carbon dioxide displaces oxygen, high CO2 concentrations can endanger human health.

Percentage-level CO2 occurs only in closed processes, such as fermentation and controlled-atmosphere storage.

Percentage-level measurement is also typical in life-science applications, such as CO2 incubators.

In normal ambient air, CO2 is present at ppm levels.

Clustering Algorithm of Wireless Sensor Networks for Intelligent Transportation

SU Hang

Abstract: Wireless sensor networks (WSNs) are important components of intelligent transportation systems, and their energy efficiency can benefit from a suitable clustering technique. This paper studies a WSN clustering algorithm based on the isolated-node-aware energy-efficient clustering (INEEC) scheme. Operating time is divided into rounds, and a cluster head (CH) is determined in each round to balance CH energy consumption. During the CH selection phase, each node draws a random number and computes a threshold from its residual energy and the average regional energy of the sensors in its cluster; if the random number is below the threshold, the node becomes a CH for the current round. Each CH broadcasts a join-request message to the remaining nodes. A non-CH node that receives several join requests joins the closest cluster accordingly; non-CH nodes that receive no join request are treated as isolated nodes, and to improve an isolated node's energy efficiency, INEEC adjusts its transmission power. Compared with the HEED and LEACH algorithms, simulation results show that INEEC performs better at reducing energy consumption: it has the fewest isolated nodes, the lowest transmission delay, and a more uniform CH distribution. In particular, network lifetime increases by nearly 23.5% over HEED. Key problems in the application of this class of algorithms, namely high energy consumption, low energy efficiency and short network lifetime, are well addressed in this study.

Journal: Journal of Transport Information and Safety
Year (volume), issue: 2017, 35(3)
Pages: 7 (pp. 74-79, 106)
Keywords: intelligent transportation; wireless sensor network; clustering; isolated node
Author: SU Hang
Affiliation: National Engineering Laboratory for Transport Safety and Emergency Informatics, China Transport Telecommunications and Information Center, Beijing 100011, China
Language: Chinese
CLC number: TP393

Intelligent transportation systems are applied in traffic mainly for traffic information collection, traffic control and guidance; wireless sensing is the principal technical means of traffic information collection and processing, and the foundation of system construction [1].

Research on Distributed Random Sensing Theory in Wireless Sensor Networks

With the continuing development of science and technology, wireless sensor networks (WSNs) have emerged as a new network communication technology and are widely applied in many fields, such as environmental monitoring, intelligent transportation and healthcare.

Within wireless sensor networks, the application and study of distributed random sensing (DRS) has become a hot research area.

I. Overview of Distributed Random Sensing

Distributed Random Sensing is a technique in which multiple distributed sensor nodes randomly sense information in the environment, then integrate, analyse and transmit the collected information.

It exploits the cooperation of many nodes to sense and process environmental information at large scale, improving the network's performance and reliability.

Compared with traditional sensing techniques, DRS has the following advantages:

(1) It fully exploits the distributed character of the network's sensor nodes, reducing the influence of any single node on the network and improving robustness.

(2) Its randomised approach balances the load across nodes and reduces conflicts and redundant sensing between sensing nodes.

(3) It tolerates node failure and blocking well, keeping the network stable over the long term.

II. Research on Distributed Random Sensing Algorithms

Current DRS research concentrates on two aspects: first, the collection of sensed information, including node selection and determination of sensing ranges; and second, data processing and transmission, including the processing and integration of node data and protocol design.

(1) Node selection and sensing-range determination. Selecting sensor nodes is a very important problem: in DRS, node selection determines which nodes take part in the sensing process.

Current research focuses mainly on two node-selection algorithms:

① Coverage-based node selection. This algorithm selects nodes according to their sensing ranges; the selected nodes can monitor the chosen region, improving the efficiency and accuracy of network sensing.

② Load-balancing node selection. This algorithm selects nodes according to their current load and saturation; the selected nodes should satisfy the specified sensing-load conditions so that the sensing process remains balanced.
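Approach ① can be illustrated with a simple greedy sketch: repeatedly pick the node whose sensing disc covers the most still-uncovered points of a discretised region. The grid, sensing radius and node positions are made-up values.

```python
import itertools

def covers(node, point, radius=2.0):
    return (node[0] - point[0]) ** 2 + (node[1] - point[1]) ** 2 <= radius ** 2

def select_nodes(nodes, region_points):
    uncovered = set(region_points)
    chosen = []
    while uncovered:
        # greedily take the node covering the most uncovered points
        best = max(nodes, key=lambda n: sum(covers(n, p) for p in uncovered))
        gained = {p for p in uncovered if covers(best, p)}
        if not gained:          # remaining points cannot be covered
            break
        chosen.append(best)
        uncovered -= gained
    return chosen, uncovered

region = list(itertools.product(range(5), range(5)))     # 5 x 5 grid
nodes = [(1, 1), (1, 3), (3, 1), (3, 3), (2, 2)]
chosen, uncovered = select_nodes(nodes, region)
print(len(chosen), len(uncovered))
```

Greedy set cover of this kind is a common baseline; published coverage-based selection schemes add energy and redundancy considerations on top of it.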

Energy-Efficient Cooperative Spectrum Sensing against Malicious Users in Cognitive Radio Networks

REN Xiaoyue, CHEN Changxing

Abstract: Current cooperative spectrum sensing strategies for cognitive radio networks consume too much energy and do not account for malicious users or communication-link interruption. To address this, an energy-efficient cooperative spectrum sensing strategy based on a trust (reputation) management system is proposed. First, expressions for the system trust values and the total number of sensing reports are derived and analysed, and a sensing-strategy model is established that effectively suppresses the behaviour of malicious users. Next, an estimation method and closed-form expressions for the strategy's false-alarm and missed-detection probabilities are proposed, and the impact of communication-link interruption is analysed in depth. Finally, simulation results show that, for the same detection target, the proposed energy-efficient strategy requires fewer sensing reports than the traditional strategy, effectively reducing energy consumption, and that its sensing performance is less affected when communication links are interrupted.

Journal: Systems Engineering and Electronics
Year (volume), issue: 2017, 39(6)
Pages: 11 (pp. 1347-1357)
Keywords: cognitive radio network; malicious user; energy efficiency; spectrum sensing
Authors: REN Xiaoyue, CHEN Changxing
Affiliation: School of Science, Air Force Engineering University, Xi'an, Shaanxi 710051, China
Language: Chinese
CLC number: TN92

Cognitive radio (CR) technology is one of the effective ways to relieve the shortage of spectrum resources [1].

MIMO Channel Capacity Analysis for Distributed Antenna Systems

I. Overview

With the continuing development of wireless communication technology, the distributed antenna system (DAS) has become an important component of modern communication systems.

Particularly in the context of MIMO (multiple-input multiple-output) technology, distributed antenna systems provide strong support for improving system performance and spectral efficiency.

This article surveys MIMO channel-capacity analysis for distributed antenna systems, aiming to give researchers and engineers in the field a theoretical reference and practical guidance.

It first introduces the basic concept and structure of distributed antenna systems and their advantages in MIMO communication.

On that basis, it elaborates the principles and methods of MIMO channel-capacity analysis, including the definition of channel capacity, its computation, and the associated performance metrics.
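As a reminder of the standard computation involved, the equal-power MIMO capacity expression is C = log₂ det(I + (ρ/Nₜ)·HHᴴ); this is a generic textbook formula, not one stated in this text. A Monte-Carlo sketch for an i.i.d. Rayleigh channel with illustrative antenna counts and SNR:

```python
import numpy as np

def mimo_capacity(H, snr_linear):
    """Capacity in bit/s/Hz for channel H with equal power allocation."""
    nr, nt = H.shape
    A = np.eye(nr) + (snr_linear / nt) * (H @ H.conj().T)
    sign, logdet = np.linalg.slogdet(A)
    return logdet / np.log(2.0)

rng = np.random.default_rng(0)
nt = nr = 4
snr = 10.0 ** (10.0 / 10.0)          # 10 dB

caps = []
for _ in range(2000):
    # i.i.d. complex Gaussian entries, unit average power per element
    H = (rng.standard_normal((nr, nt))
         + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
    caps.append(mimo_capacity(H, snr))
print(f"ergodic capacity ~ {np.mean(caps):.2f} bit/s/Hz")
```

Distributed antenna systems change the statistics of H (per-antenna path losses differ), but the capacity expression being averaged is the same.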

It also discusses channel-modelling methods for distributed antenna systems in MIMO communication, such as the Shannon-Fano and Gauss-Shepherd equations, together with the limitations of these models in practice and strategies for improving them.

Next, the article studies the MIMO channel capacity of distributed antenna systems in depth, covering both single-user and multi-user scenarios.

For the single-user scenario, it examines how distributed antenna systems raise channel capacity by introducing adaptive array techniques and spatial diversity. For the multi-user scenario, it studies how they use beamforming, space-time block coding (STBC) and related techniques to let multiple users transmit simultaneously and share channel resources, improving overall system performance.

Finally, drawing on research results at home and abroad, the article summarises MIMO channel-capacity analysis for distributed antenna systems and offers an outlook.

Based on this analysis of existing theory and practice, it proposes targeted suggestions and development directions to support the further application and development of distributed antenna systems in MIMO communication.

1.1 Background

With the rapid development of wireless communication technology, multiple-input multiple-output (MIMO) technology has become an essential part of modern wireless communication systems.

By introducing multiple antennas at the transmitter and receiver, MIMO greatly improves the spectral efficiency, interference resistance and data rate of wireless systems.

However, as MIMO system capacity grows, channel-capacity analysis becomes increasingly complex, especially in distributed antenna systems.

An Aerosol Characteristic-Parameter Sensing Method Based on Dual-Wavelength Scattering Signals and Its Application [invention patent]

Patent title: An aerosol characteristic-parameter sensing method based on dual-wavelength scattering signals and its application
Patent type: invention patent
Inventors: WANG Shu, DENG Tian, DOU Zheng
Application number: CN201580043588.1
Filing date: 2015-06-23
Publication number: CN106663357A
Publication date: 2017-05-10
Patent text provided by the Intellectual Property Publishing House
Abstract: This invention concerns an aerosol characteristic-parameter sensing method based on dual-wavelength light-scattering signals, and its application, in the field of fire-alarm technology.

The method receives the scattered optical power at two wavelengths, computes the aerosol's surface-area concentration and volume concentration, obtains the aerosol's Sauter mean diameter, and compares it with the corresponding thresholds to issue the appropriate fire-alarm signal.

With this invention, the Sauter mean diameter indicates the aerosol particle size, so a fire can be identified promptly and a correct fire alarm, or a warning of non-fire interference, can be issued in time. Moreover, the surface-area and volume concentrations yield the characteristic parameters of the detected aerosol, so the type of fire can be judged and signalled, enabling targeted and appropriate countermeasures.
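The Sauter mean diameter mentioned above follows from a standard relation (not spelled out in the abstract): for surface-area concentration Cs and volume concentration Cv, d₃₂ = 6·Cv/Cs. A quick sanity-check sketch with made-up numbers:

```python
import math

def sauter_mean_diameter(volume_conc, surface_conc):
    """d32 in metres, given Cv in m^3/m^3 and Cs in m^2/m^3."""
    return 6.0 * volume_conc / surface_conc

# a monodisperse aerosol of 1 um spheres should give d32 = 1 um exactly
d = 1e-6
n = 1e12                            # number concentration, particles per m^3
cv = n * math.pi * d**3 / 6.0       # total volume concentration
cs = n * math.pi * d**2             # total surface-area concentration
print(sauter_mean_diameter(cv, cs))
```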

Applicant: Huazhong University of Science and Technology
Address: 1037 Luoyu Road, Wuhan, Hubei 430074, China
Country: CN
Agency: Nanjing Lifeng Intellectual Property Agency (special general partnership)
Agent: REN Li

Research on High-Precision Optically Controlled Beamforming for Distributed Coherent ISAR Radar

In recent years, with continuing technological development, growing demand, and ever higher requirements on information acquisition and processing, optically controlled beamforming has become a new research focus. Among its many application areas, it has been widely applied in radar technology exemplified by distributed coherent ISAR radar. Research on high-precision optically controlled beamforming for distributed coherent ISAR radar therefore has important significance and application value.

Optically controlled beamforming uses the phase difference between the transmitted beam and the beam reflected from the target to control and form beams, enabling effective target recognition, tracking and localisation. In distributed coherent ISAR radar, multiple aircraft linked to ground receiving stations can image targets at high resolution, yielding higher-precision target information.

In practice, the precision and real-time performance of optically controlled beamforming are the key factors. Beamforming precision is determined by several factors, such as optical path length, wavelength and phase-control accuracy. Improving that precision requires in-depth study of the underlying mechanism and optimisation of each parameter. In distributed coherent ISAR radar, coordination among the aircraft and real-time signal processing are also critical, requiring efficient algorithms and real-time data transfer.

In summary, research on high-precision optically controlled beamforming for distributed coherent ISAR radar has important significance and application value. Continued optimisation and development of the technique will enable more accurate and efficient target recognition and tracking, providing stronger technical support for precise, intelligent radar target monitoring and control.
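The sensitivity to wavelength and phase-control accuracy mentioned above can be illustrated with a uniform-linear-array sketch: the steering weights are wₙ = exp(-j·2π·n·d·sin θ / λ), so random phase errors directly reduce the gain in the intended direction. Carrier frequency, geometry and error level are made-up values.

```python
import numpy as np

c = 3e8
f = 10e9                          # 10 GHz carrier
lam = c / f
n_elem, d = 16, lam / 2           # 16 elements at half-wavelength spacing
theta0 = np.deg2rad(20.0)         # intended steering angle

n = np.arange(n_elem)
weights = np.exp(-2j * np.pi * n * d * np.sin(theta0) / lam)

def array_gain(theta, w):
    # normalized response of the weighted array toward direction theta
    steer = np.exp(2j * np.pi * n * d * np.sin(theta) / lam)
    return np.abs(w @ steer) / len(w)

# a small random phase-control error degrades gain in the intended direction
rng = np.random.default_rng(0)
err = np.exp(1j * rng.normal(0.0, 0.3, n_elem))   # ~17 deg rms phase error
print(array_gain(theta0, weights), array_gain(theta0, weights * err))
```

With perfect phases the normalized gain toward θ₀ is exactly 1; phase errors pull it below 1, which is why phase-control precision (optical or electronic) drives beamforming performance.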

Passive Terahertz Spectroscopic Detection

A passive terahertz human-body security imaging system transmits no terahertz waves at the person being screened; it forms images entirely from the person's own thermal radiation.

It consists mainly of a quasi-optical system, terahertz detectors, and a signal and image processing system.

Figure: schematic of a passive terahertz human-body security imaging system.

According to Planck's blackbody radiation law, the peak emitted power of the human body (37°C) lies in the far-infrared region of the electromagnetic spectrum.

Taking a 350 GHz centre frequency as an example, the power emitted by the human body within a 50 GHz bandwidth exceeds that of a 20°C object by only about 10 nW/cm². Detecting, and then imaging, such a small power difference is the greatest challenge facing passive terahertz imaging systems.
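The quoted figure can be checked directly from Planck's law. The sketch below assumes ideal blackbodies (emissivity 1) and Lambertian emission (exitance = π × radiance); under those assumptions the in-band difference works out to roughly 10 nW/cm², consistent with the text.

```python
import math

h, k, c = 6.626e-34, 1.381e-23, 2.998e8

def planck(nu, T):
    """Spectral radiance B(nu, T) in W m^-2 Hz^-1 sr^-1."""
    return (2.0 * h * nu**3 / c**2) / math.expm1(h * nu / (k * T))

# 37 C body (310.15 K) vs 20 C object (293.15 K), 50 GHz band around 350 GHz
nu, bw = 350e9, 50e9
delta = math.pi * bw * (planck(nu, 310.15) - planck(nu, 293.15))   # W/m^2
print(f"{delta * 1e5:.1f} nW/cm^2")   # W/m^2 -> nW/cm^2 is a factor of 1e5
```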

Yunfan Ruida (Iotrda Technology (Shenzhen) Co., Ltd.) R24BBD1 Biosensing Radar Product Manual

R24BBD1 biosensing radar product manual (V1.1)

Product features:
- Stationary human detection and vital-sign detection
- 24 GHz millimetre-wave radar sensor
- Doppler radar technology senses people within the scanned region, detecting moving and stationary persons simultaneously
- Maximum range for sleep-quality monitoring: ≤ 2.75 m
- Maximum range for respiration-rate detection: ≤ 1.5 m
- Antenna beam width (R24BBD1): 40° horizontal × 40° vertical fan beam
- Scene recognition: occupied/unoccupied states and occupant activity, with body-motion output
- Unaffected by temperature, humidity, noise, airflow, dust or light; suitable for harsh environments
- Low output power; prolonged exposure is harmless to the human body
- Vacant-to-occupied detection time: within 0.5 s; occupied-to-vacant: over 1 minute

Model: R24BBD1, narrow-beam human-sensing radar sensor, 40° × 40° fan beam (high measurement accuracy; recommended for use within 6 m).

Product applications: sleep monitoring (sleep curve) and respiration-rate monitoring.

Package: ≤ 46 mm × 27.5 mm × 5 mm; interface: 2.0 mm pitch dual-row pin header.

The manual covers electrical characteristics and parameters (detection angle and range, RF performance), module dimensions and pin descriptions, wiring, operating and installation modes, typical applications (smart appliances, home settings, bedroom installation, energy-saving control, healthy living), precautions (start-up time, effective detection range, biosensing performance, power supply), FAQs, disclaimer, copyright, contact information and version history.

1. Overview: the R24BBD1 radar module uses millimetre-wave radar technology to implement human motion sensing, human biosensing and human respiration detection.

Distributed Wireless Sensor Network Localization Algorithm MDS-MAP(D) (MA Zhen)

Journal on Communications, Vol. 29, No. 6, June 2008

Distributed localization algorithm MDS-MAP(D) for wireless sensor networks

MA Zhen, LIU Yun, SHEN Bo (Beijing Municipal Key Laboratory of Communication and Information Systems, Beijing Jiaotong University, Beijing 100044, China)

Abstract: For the localization problem in wireless sensor networks, a distributed algorithm, MDS-MAP(D), is proposed. The computation of node relative coordinates and the fusion of local networks are specified explicitly, and the algorithm's computational complexity is analysed and simulated. MDS-MAP(D) builds on distributed node clustering and uses the network's connectivity to estimate node coordinates without requiring high-precision ranging, reducing the computational complexity and energy consumption of node localization. Analysis and simulation show that the computational complexity falls from O(N³) to O(Nm²), m < N, and that the localization accuracy improves by 1%-3%.

Keywords: wireless sensor network; localization; multidimensional scaling; distributed
CLC number: TP393; Document code: A; Article ID: 1000-436X(2008)06-0057-06

Distributed locating algorithm for wireless sensor networks: MDS-MAP(D)

MA Zhen, LIU Yun, SHEN Bo (Key Laboratory of Communication and Information Systems, Beijing Jiaotong University, Beijing Municipal Commission of Education, Beijing 100044, China)

Abstract: A new distributed locating algorithm, MDS-MAP(D), was proposed, which attempts to improve the performance of node localization in wireless sensor networks. The computation of node relative coordinates and the aggregation from local networks to the global network are introduced explicitly. Further, analyses of the computational complexity and simulations of the algorithm are also presented. MDS-MAP(D), which is based on a node clustering mechanism and uses the connectivity of nodes to estimate their coordinates, reduces the complexity and energy consumption of node localization in the absence of high-precision distance measurement. The simulation and analysis results indicate that the complexity of node localization falls from O(N³) to O(Nm²), m < N, and the accuracy is improved by 1%-3%.

Key words: wireless networks; location; multidimensional scaling; distributed

1 Introduction

Wireless sensor network (WSN) technology has developed rapidly in recent years and is increasingly being used in military, transportation, environmental and industrial applications to measure many physical quantities, such as temperature, humidity, pressure and speed [1].
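The core step shared by MDS-MAP variants is classical multidimensional scaling: recovering relative coordinates from a matrix of pairwise distances by double centering and eigendecomposition. A minimal sketch with an exact toy distance matrix (MDS-MAP itself feeds in shortest-path estimates of inter-node distance):

```python
import numpy as np

def classical_mds(D, dims=2):
    """Relative coordinates from a pairwise-distance matrix via double centering."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J               # double-centred Gram matrix
    w, V = np.linalg.eigh(B)                  # eigenvalues in ascending order
    w, V = w[::-1][:dims], V[:, ::-1][:, :dims]
    return V * np.sqrt(np.maximum(w, 0.0))

truth = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
D = np.linalg.norm(truth[:, None, :] - truth[None, :, :], axis=-1)
X = classical_mds(D)

# the recovered layout matches the truth up to rotation/translation, so the
# reconstructed pairwise distances agree with the originals
D_hat = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
print(np.allclose(D, D_hat))
```

The distributed variant runs this per cluster on small local matrices and then stitches (fuses) the local maps together, which is where the complexity reduction over whole-network MDS comes from.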


Published in IET Wireless Sensor Systems
Received on 22nd December 2010; Revised on 10th February 2012
doi: 10.1049/iet-wss.2011.0178; ISSN 2043-6386

Distributed passive radar sensor networks with near-space vehicle-borne receivers

W.-Q. Wang

School of Communication and Information Engineering, University of Electronic Science and Technology of China (UESTC), Chengdu 611731, People's Republic of China
State Laboratory of Remote Sensing Science, Institute of Remote Sensing Applications, Chinese Academy of Sciences, Beijing 100101, People's Republic of China
E-mail: wqwang@

Abstract: In this study, we propose a distributed passive radar sensor network with near-space vehicle-borne receivers for regional remote sensing surveillance. Note that near-space refers to the altitude range between 20 and 100 km, which is too high for airplanes but too low for satellites. Near-space vehicles can offer wide coverage like a satellite and fast maneuverability like an airplane. The system operation mode, imaging coverage and imaging resolution of the distributed passive radar sensor network are analysed. As there is a big speed difference between the transmit and receive platforms, we propose a combined multi-beamforming and scan-on-receive approach to extend the limited imaging coverage. Since conventional motion compensation techniques may not be feasible for the system because of its limited load capability, an overlapped subaperture-based motion compensation algorithm is proposed. The effectiveness of the approaches is validated by numerical simulation results.

1 Introduction

In recent years, radar sensor networks have received enormous research interest [1-7]. Radar sensor networks enable potential applications such as surveillance and environment monitoring that are not accessible to conventional communication networks [8]. In a radar sensor network, each radar sensor can be an independent system that transmits a known waveform and receives the returns. Radar sensor networks can be utilised to obtain improved performance [9]. They can be arranged to survey a large area and observe targets from a number of different angles. Moreover, radar sensor networks help alleviate the blind-speed problem that occurs when the Doppler shift equals the pulse repetition frequency (PRF) or a multiple of it [10]. A radar sensor network that works in an ad hoc fashion but is grouped together by an intelligent clusterhead was proposed in [11].

Different from the literature, in this study we propose a distributed passive radar sensor network with near-space vehicle-borne receivers for regional remote sensing surveillance. Although spaceborne and airborne radars have become valuable remote sensing tools, they are not well suited to persistent monitoring because of their low revisit frequency. In contrast, near-space vehicles can fill the gap between satellites and airplanes [12]. The proposed distributed passive radar sensor network can therefore be used for regional remote monitoring.
Near-space refers to the altitude range between 20 and 100 km, which is too high for airplanes but too low for satellites [13, 14]. Near-space offers many capabilities that are not accessible to low earth orbit (LEO) satellites or airplanes [15]. Generally speaking, satellites usually operate in orbits higher than 200 km, and air-breathing airplanes routinely operate lower than 18 km. Consequently, there are few sensors at the altitudes between airplanes and satellites [16]. Compared with satellites and airplanes, vehicles operating in near-space offer three obvious advantages. First, near-space is above the troposphere, where most weather occurs, so both stationary operation and ultrasonic-speed flight can be obtained for near-space vehicles. Secondly, not constrained by orbital mechanics like satellites or by high fuel consumption like airplanes, they can stay over a specific site almost indefinitely to provide persistent regional coverage. Thirdly, they are low cost: their inherent simplicity, recoverability and freedom from space-hardening requirements all contribute to this advantage. These advantages offer promising potential for specific remote sensing applications [17, 18]. Near-space has thus received much attention in recent years, which is why several types of near-space vehicles are being studied, developed or employed [19-23].

The proposed radar sensor network can be seen as a kind of distributed radar system. Interest in distributed radars has increased rapidly in recent years [24, 25], owing to the specific advantages of the bistatic configuration in comparison with the monostatic configuration. However, most distributed radar systems investigated in the literature are azimuth-invariant configurations [26], in which the transmitter and the receiver move along parallel trajectories with the same velocity. The distributed passive radar sensor network considered in this paper has an azimuth-variant configuration in which the transmitter and the receiver have
different trajectories or velocities. Consequently, the Doppler signal will be azimuth-variant, and it is thus necessary to analyse the corresponding system performance. Moreover, conventional motion compensation algorithms cannot be employed for the system because near-space vehicles have only a limited load capability, so a dedicated motion compensation algorithm is required.

The remaining sections are organised as follows. In Section 2, the system parameters and imaging performance are analysed. Next, Section 3 presents the overlapped subaperture-based motion compensation technique. Section 4 designs the conceptual system and provides the simulation results. Finally, Section 5 concludes the study.

2 Distributed passive radar sensor network via near-space vehicle-borne receivers

The distributed passive radar sensor network involves placing a receiver inside a near-space vehicle and utilising opportunistic transmitters such as spaceborne and airborne radar sensors. Fig. 1 shows an example geometry of the passive radar sensor network. Even when the near-space vehicle-borne receiver is stationary, an aperture synthesis can still be obtained from the transmitter motion alone. The near-space vehicle-borne receiver consists of two channels. One channel is fixed to collect the direct-path signals coming from the transmitter antenna sidelobes, which are used as the reference function for matched filtering and synchronisation compensation [27]. The other channel is configured to gather the reflected signals with which navigation and surveillance are attempted.

2.1 Operational mode and system parameters

As distributed radar sensor networks can be represented by bistatic radars, we consider only bistatic radars in the following discussion. Near-space vehicle-borne bistatic radars can operate in strip, spotlight and scan modes. Without loss of generality, only the strip mode with a spaceborne transmitter is considered in the following sections. Suppose the transmitter and the receiver are flying with a parallel
trajectory but at unequal velocities. Considering Fig. 1 (Tx and Rx denote the transmitter and receiver, respectively), as only the volume common to both the transmit and receive beams can be imaged, the overlap time T_overlap is limited by

\[ T_{\mathrm{overlap}} = \frac{|W_{az,t} - W_{az,r}|}{|v_t - v_r|} \qquad (1) \]

where W_az,t and W_az,r are the ground coverage in azimuth for the transmitter and the receiver, and v_t and v_r are the velocities of the transmitter and the receiver (Fig. 2).

Two parameters on the transmitter side influence the imaging time during which the area of interest can be illuminated. The first is the transmitter velocity, which is very high relative to the receiver. The second is the very small antenna steering range in azimuth. To increase the scene extension in azimuth, the receiver must perform antenna steering. A pulse-chasing technique was proposed in [28], but it cannot easily be implemented for non-cooperative bistatic radar configurations. An approach combining transmitter sliding spotlight with receiver footprint chasing was proposed for spaceborne/airborne bistatic radar systems in [29]. Because antenna steering is employed in both the transmitter and the receiver, it is difficult to develop the subsequent image formation algorithms; moreover, the satellite antenna direction is often not controllable by us. To overcome these disadvantages, we can use a wide antenna beamwidth on receive, exploiting its advantage of high signal-to-noise ratio (SNR). The bistatic radar equation is given by [30]

\[ P_r = \frac{P_t \lambda^2 G_t G_r \sigma^{o}_{b} A_{\mathrm{res}}}{(4\pi)^3 R_t^2 R_r^2} \qquad (2) \]

where P_t and P_r are the average transmit and receive power, λ is the wavelength, G_t and G_r are the gains of the transmit and receive antennas, R_t and R_r are the distances from the transmitter and the receiver to the imaged scene, σ°_b is the bistatic scattering coefficient and A_res is the size of the resolution cell. The SNR in the receiver can then be represented by

\[ \mathrm{SNR}_b = \frac{P_t \lambda^2 G_t G_r \sigma^{o}_{b} A_{\mathrm{res}} \zeta_{\mathrm{int}} \eta}{(4\pi)^3 R_t^2 R_r^2 K_B T_0 F_n} \qquad (3) \]

where ζ_int is the coherent integration time, η is the duty cycle, K_B is the Boltzmann constant, T_0 is the system noise temperature and F_n is the noise figure. For simplicity, we suppose the transmitter and the receiver fly along parallel trajectories (but with unequal velocities); the size of the resolution cell is then expressed as

\[ A_{\mathrm{res}} = \frac{\lambda}{v_t/R_t + v_r/R_r} \times \frac{1}{\zeta_{\mathrm{int}}} \times \frac{c_0}{2 B_r \cos(\beta/2)\sin(\gamma_b)} \qquad (4) \]

Fig. 1 Geometry of the passive radar sensor network with near-space vehicle-borne receivers and spaceborne transmitter
Fig. 2 Illustration of the overlapped swath covered by the transmitter and receiver

where v_t (v_r) is the transmitter (receiver) velocity, R_t (R_r) is the transmitter-to-target (target-to-receiver) distance, c_0 is the speed of light, B_r is the transmitted signal bandwidth, β is the bistatic angle and γ_b is the incidence angle of the bistatic-angle bisector [31]. Equation (3) can then be changed into

\[ \mathrm{SNR}_b = \frac{P_t \lambda^2 G_t G_r \sigma^{o}_{b} \eta}{(4\pi)^3 R_t^2 R_r^2 K_B T_0 F_n} \times \frac{\lambda}{v_t/R_t + v_r/R_r} \times \frac{c_0}{2 B_r \cos(\beta/2)\sin(\gamma_b)} \qquad (5) \]

Similarly, for the corresponding monostatic spaceborne radar there is

\[ \mathrm{SNR}_m = \frac{P_t \lambda^2 G_t^2 \sigma^{o}_{m} \eta}{(4\pi)^3 R_t^4 K_B T_0 F_n} \times \frac{\lambda}{2 v_t/R_t} \times \frac{c_0}{2 B_r \sin(\gamma_m)} \qquad (6) \]

where σ°_m and γ_m are the monostatic scattering coefficient and radar incidence angle, respectively. Note that equal system noise temperatures and noise figures are assumed here. We then have

\[ K_m = \frac{\mathrm{SNR}_b}{\mathrm{SNR}_m} = \frac{G_r}{G_t}\left(\frac{R_t}{R_r}\right)^2 \times \frac{2 v_t/R_t}{v_t/R_t + v_r/R_r} \times \frac{\sigma^{o}_{b}}{\sigma^{o}_{m}} \times \frac{\sin(\gamma_m)}{\cos(\beta/2)\sin(\gamma_b)} \qquad (7) \]

As an example, supposing the following parameters: R_t = 800 km, v_t = 7000 m/s, R_r = 30 km, v_r = 5 m/s, γ_b = 45°, γ_m = 60°, β = 30° and σ°_b = σ°_m, K_m is found to be 1973.70 G_r/G_t. This shows that the receiver antenna beamwidth can be significantly extended while still providing the same SNR as the monostatic case [32]; hence, an extended bistatic radar imaging coverage can be obtained by widening the beamwidth of the near-space vehicle-borne receiver antenna. To ensure the
transmitter and the receiver have common beam coverage, the synthetic aperture time is limited by

\[ T_s = \min\left( \frac{\lambda R_{t0}}{L_t v_t}, \; \frac{\lambda R_{r0}}{L_r v_r} \right) \qquad (8) \]

where R_t0 and R_r0 are the nearest slant ranges for the transmitter and the receiver, and L_t and L_r denote the transmitting and receiving antenna lengths. In the case that a spaceborne transmitter is employed, there should be

\[ L_r \le \rho_a, \qquad L_t \le \frac{R_{t0} D_r}{R_{r0}} \qquad (9) \]

Fig. 3 Geometry of the relations between the transmit and receive beams

Considering the geometry shown in Fig. 3, the transmitting and receiving antenna widths D_t and D_r are determined, respectively, by

\[ D_t = \frac{\lambda}{\theta_t}, \qquad \theta_t = \frac{W_r \cos(\phi_t)}{R_{t0}}, \qquad R_{t0} = \frac{h_t}{\cos(\phi_t)} \qquad (10) \]

\[ D_r = \frac{\lambda}{\theta_r}, \qquad \theta_r = \frac{W_r \cos(\phi_r)}{R_{r0}}, \qquad R_{r0} = \frac{h_r}{\cos(\phi_r)} \qquad (11) \]

The corresponding imaging swath W_r is determined by

\[ W_r = \frac{c_0}{\mathrm{PRF}} \times \frac{1}{\sin(\phi_t)/\sin(\theta_t/2) + \sin(\phi_r)/\sin(\theta_r/2)} \qquad (12) \]

where c_0 and PRF denote the speed of light and the pulse repetition frequency, respectively.

2.2 Imaging spatial resolution

From Fig. 3 we know that the instantaneous range history from the transmitter and the receiver to an arbitrary point target (x, y, 0) is

\[ R = \sqrt{(x-x_t)^2 + (y-y_t)^2 + h_t^2} + \sqrt{(x-x_r)^2 + (y-y_r)^2 + h_r^2} \qquad (13) \]

where (x_t, y_t, h_t) and (x_r, y_r, h_r) are the coordinates of the transmitter and the receiver, respectively. We then have

\[ \nabla R = \frac{\partial R}{\partial x}\,\mathbf{i}_x + \frac{\partial R}{\partial y}\,\mathbf{i}_y = [\sin(\alpha_t)\cos(\zeta_t) + \sin(\alpha_r)\cos(\zeta_r)]\,\mathbf{i}_x + [\sin(\zeta_t) + \sin(\zeta_r)]\,\mathbf{i}_y \qquad (14) \]

where α_t = α_t(x) and α_r = α_r(x) are the instantaneous looking-down angles, and ζ_t = ζ_t(x, y; y_t) and ζ_r = ζ_r(x, y; y_r) (y_t, y_r being the instantaneous locations in the y-direction) are the instantaneous squint angles. There is

\[ |\nabla R| = \sqrt{[\sin(\alpha_t)\cos(\zeta_t)+\sin(\alpha_r)\cos(\zeta_r)]^2 + [\sin(\zeta_t)+\sin(\zeta_r)]^2} \qquad (15) \]

As the range resolution of a monostatic radar is c_0/(2B_r), with B_r the transmitted signal bandwidth, the range resolution of the near-space vehicle-borne bistatic radar can then be derived as

\[ \rho_r = \frac{c_0/B_r}{|\nabla R|} \times \frac{1}{\cos(\xi_{xy})} \qquad (16) \]

with

\[ \xi_{xy} = \arctan\frac{\sin(\zeta_t)+\sin(\zeta_r)}{\sin(\alpha_t)\cos(\zeta_t)+\sin(\alpha_r)\cos(\zeta_r)} \qquad (17) \]

We then have

\[ \rho_r = \frac{c_0/B_r}{\sin(\alpha_t)\cos(\zeta_t)+\sin(\alpha_r)\cos(\zeta_r)} \qquad (18) \]

Unlike in the monostatic case, the
range resolution is determined not only by the transmitted signal bandwidth, but also by the specific bistatic radar configuration geometry.

To investigate the azimuth resolution, we consider the range history to an arbitrary reference point at azimuth time t:

\[ R_b(t) = \sqrt{R_{t0}^2 + (v_t t)^2} + \sqrt{R_{r0}^2 + (v_r t)^2} \qquad (19) \]

As the propagation speed of the electromagnetic signal is much faster than the speed of the platforms, the stop-and-go hypothesis [33] is still reasonable here. The instantaneous Doppler chirp rate is derived as

\[ k_d(t) = -\frac{1}{\lambda}\,\frac{\partial^2 R_b(t)}{\partial t^2} \simeq -\frac{1}{\lambda}\left[ \frac{v_t^2}{R_{t0}}\cos\!\left(\frac{v_t t}{R_{t0}}\right) + \frac{v_r^2}{R_{r0}}\cos\!\left(\frac{v_r t}{R_{r0}}\right) \right] \qquad (20) \]

The azimuth resolution can then be expressed as

\[ \rho_a = \frac{\sqrt{v_t^2 + v_r^2 - 2 v_t v_r \cos(\pi-\gamma)}}{2\left[ (v_t/\lambda)\sin\!\left(\dfrac{\lambda R_{r0} v_t}{2 L_r R_{t0} v_r}\right) + (v_r/\lambda)\sin\!\left(\dfrac{\lambda}{2 L_r}\right) \right]} \qquad (21) \]

where γ is defined as the angle between the transmitter and receiver velocity vectors. If v_r = 0, the case is just that of a fixed-receiver bistatic radar [34].

3 Overlapped subaperture-based motion compensation

In the previous discussion we did not consider motion errors. For a short coherent processing interval, we ignore the acceleration errors in the along-track direction and consider only the motion errors in the cross-track direction in the following. As shown in Fig. 4, suppose the ideal transmitter and receiver instantaneous positions at azimuth time t_m are (v_t t_m, y_t0, h_t) and (v_r t_m, y_r0, h_r), respectively, but their actual positions are (v_t t_m, y_t0 + Δy_t(t_m), h_t) and (v_r t_m, y_r0 + Δy_r(t_m), h_r). Suppose the transmitter motion error in the cross-track direction is Δr_t(t_m); then Δy_t(t_m) = Δr_t(t_m) cos(α_t0) (α_t0 being the instantaneous incidence angle from the transmitter to the point target P_n(x_n, y_n, 0)), Δz_t(t_m) = Δr_t(t_m) sin(α_t0) and y_n − y_t0 = r_tn cos(α_t0), with r_tn = sqrt(h_t² + (y_n − y_t0)²) and h_t = r_tn sin(α_t0). The transmitter range history can then be represented by

\[ R_t(t_m) = \sqrt{(x_n - v_t t_m)^2 + (y_n - y_{t0} - \Delta y_t(t_m))^2 + (h_t - \Delta z_t(t_m))^2} = \sqrt{(x_n - v_t t_m)^2 + r_{tn}^2 - 2 r_{tn}\,\Delta r_t(t_m) + \Delta r_t^2(t_m)} \qquad (22) \]

Assume the
instantaneous transmitter squint angle is θ_tm, and denote x_t(t_m) = x_n − v_t t_m and tan(θ_tm) = x_t(t_m)/r_tn; we can then get

\[ R_t(t_m) = \sqrt{r_{tn}^2 + x_t^2(t_m)} - \Delta r_t(t_m)\cos(\theta_{tm}) + \frac{\sin(\theta_{tm})\sin(2\theta_{tm})\,\Delta r_t^2(t_m)}{2 r_{tn}} + O\!\left(\frac{\Delta r_t(t_m)}{r_{tn}}\right) \simeq \sqrt{r_{tn}^2 + x_t^2(t_m)} - \Delta r_t(t_m)\cos(\theta_{tm}) \qquad (23) \]

Similarly, for the receiver range history we have

\[ R_r(t_m) \simeq \sqrt{r_{rn}^2 + x_r^2(t_m)} - \Delta r_r(t_m)\cos(\theta_{rm}) \qquad (24) \]

where r_rn, x_r(t_m), Δr_r(t_m) and θ_rm are defined in a like manner to r_tn, x_t(t_m), Δr_t(t_m) and θ_tm. The bistatic range history can then be represented by

\[ R_b(t_m) \simeq \sqrt{r_{tn}^2 + x_t^2(t_m)} - \Delta r_t(t_m)\cos(\theta_{tm}) + \sqrt{r_{rn}^2 + x_r^2(t_m)} - \Delta r_r(t_m)\cos(\theta_{rm}) \qquad (25) \]

As the first and third terms are the ideal range history for the subsequent image formation processing, we consider only the second and fourth terms. We have

\[ \frac{\partial\left[\Delta r_t(t_m)\cos(\theta_{tm}) + \Delta r_r(t_m)\cos(\theta_{rm})\right]}{\partial t_m} = -\frac{\Delta r_t(t_{m1})}{r_{tn}}\sin(\theta_{tm})\, v_t\,(t_{m2}-t_{m1}) - \frac{\Delta r_r(t_{m1})}{r_{rn}}\sin(\theta_{rm})\, v_r\,(t_{m2}-t_{m1}) \qquad (26) \]

Fig. 4 Illustration of the relative motion errors between the transmitter and receiver

As Δr_t(t_m1)/r_tn ≪ 1 and Δr_r(t_m1)/r_rn ≪ 1, (26) will be approximately zero when t_m2 − t_m1 is short. Therefore, we present a subaperture-based motion compensation algorithm.
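The accuracy of the first-order expansion above can be checked numerically: with a cross-track error Δr, the exact range to the displaced platform differs from sqrt(r² + x²) − Δr·cos(θ) only at order Δr²/r. The values below are illustrative, chosen near the spaceborne-transmitter geometry of this paper; the check itself is not part of the paper.

```python
import math

# Numerical check of the first-order range-error approximation (cf. (23)):
# exact range with a cross-track displacement dr versus
# sqrt(r_tn**2 + x**2) - dr*cos(theta), cos(theta) = r_tn/sqrt(r_tn**2 + x**2).
# r_tn and dr are illustrative values, not taken from the paper.
r_tn = 800e3                 # closest slant range, m
dr = 1.0                     # cross-track motion error, m
for x in (0.0, 10e3, 50e3):  # along-track offsets v_t * t_m, m
    ideal = math.hypot(r_tn, x)            # error-free range history
    exact = math.hypot(r_tn - dr, x)       # range from the displaced platform
    approx = ideal - dr * (r_tn / ideal)   # first-order approximation
    print(f"x = {x/1e3:5.1f} km, residual = {exact - approx:+.2e} m")
```

Even at a 50 km along-track offset the residual is many orders of magnitude below the 0.03 m wavelength used in Section 4, which is why retaining only the first-order terms in (23)-(25) is safe.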
It can easily be derived that the Doppler signal received by each radar sensor can be represented by

\[ G_B(f) = \exp\!\left( \mathrm{j}\pi \frac{f^2}{k_a} \right), \qquad -\frac{B_a}{2} < f < \frac{B_a}{2} \qquad (27) \]

where f is the Doppler frequency, B_a is the Doppler bandwidth and k_a is the Doppler chirp rate. As shown in Fig. 5, we divide the azimuth Doppler data into multiple (N) subapertures, each with a bandwidth of B_as. Consider two adjacent subapertures:

\[ G_{Li}(f) = G_B\!\left(f - \frac{B_{as}}{2}\right) = \exp\!\left( \mathrm{j}\pi \frac{(f - B_{as}/2)^2}{k_a} \right), \qquad f_i - \frac{B_{as}}{2} < f < f_i + \frac{B_{as}}{2} \qquad (28) \]

\[ G_{Hi}(f) = G_B\!\left(f + \frac{B_{as}}{2}\right) = \exp\!\left( \mathrm{j}\pi \frac{(f + B_{as}/2)^2}{k_a} \right), \qquad f_i - \frac{B_{as}}{2} < f < f_i + \frac{B_{as}}{2} \qquad (29) \]

We then have

\[ G_{mi}(f) = G_{Li}(f)\, G_{Hi}^{*}(f) = \exp\!\left( -\mathrm{j} 2\pi \frac{B_{as}}{k_a} f \right), \qquad f_i - \frac{B_{as}}{2} < f < f_i + \frac{B_{as}}{2} \qquad (30) \]

Applying an inverse Fourier transform, we get

\[ g_{mi}(t) = \mathrm{sinc}\!\left( t - \frac{B_{as}}{k_a} \right) \qquad (31) \]

whose maximum arrives at t_i = B_as/k_a. The chirp rate in the i-th subaperture can then be estimated as

\[ k_a = \frac{B_{as}}{t_i} \qquad (32) \]

The detailed processing flow chart of the overlapped subaperture-based motion compensation algorithm is given in Figs. 5 and 6.

Fig. 5 Illustration of overlapped azimuth Doppler subapertures
Fig. 6 Flow chart of the overlapped subaperture-based motion compensation algorithm

Table 1 Typical system parameters for the radar sensor network with near-space vehicle-borne receivers

| Parameters | Tx-A | Rx-A | Tx-B | Rx-B | Tx-C | Rx-C |
|---|---|---|---|---|---|---|
| flying altitude in km | 515 | 20 | 800 | 20 | 645 | 20 |
| flying velocity in m/s | 7600 | 5 | 7450 | 5 | 7530 | 5 |
| incidence angle in ° | 45 | 60 | 45 | 60 | 30 | 60 |
| carrier frequency in GHz | 9.65 | 9.65 | 5.33 | 5.33 | 1.26 | 1.26 |
| transmit power in kW | 2.26 | – | 2.30 | – | 4 | – |
| PRF in Hz | 4000 | – | 2000 | – | 2000 | – |
| system SNR loss in dB | 2 | 3 | 2 | 3 | 2 | 3 |
| pulse duration in μs | 45 | – | 25 | – | 35 | – |
| receiver noise figure in dB | – | 5 | – | 5 | – | 5 |
| receiver noise temperature in K | – | 300 | – | 300 | – | 300 |
| signal bandwidth in MHz | 150 | – | 16 | – | 85 | – |
| range beamwidth in ° | 2.30 | 10 | 2.30 | 10 | 2.3 | 15 |
| azimuth beamwidth in ° | 0.33 | 10 | 0.33 | 10 | 0.28 | 10 |
| imaging scene in km | (4.04, 4.03) | – | (4.04, 4.04) | – | (6.08, 6.07) | – |

4 Conceptual systems and simulation results

To obtain a quantitative evaluation, we consider the spaceborne radars TerraSAR-X, Envisat and TerraSAR-L as example transmitters; the corresponding typical system parameters are given in Table 1 (Tx-A, Tx-B and Tx-C denote TerraSAR-X [35], Envisat
[36] and TerraSAR-L, respectively; Rx-A/B/C denote three different near-space vehicle-borne receivers). We notice that an imaging scene coverage of dozens of square kilometres (from 4 × 4 km² to 6 × 6 km² in the simulation examples) can be obtained.

Fig. 7 Range resolution results of the example transmitter and receiver configurations: (a) Case A: Tx-A and Rx-A; (b) Case B: Tx-B and Rx-B; (c) Case C: Tx-C and Rx-C
Fig. 8 Azimuth resolution results of the example transmitter and receiver configurations: (a) Case A: Tx-A and Rx-A; (b) Case B: Tx-B and Rx-B; (c) Case C: Tx-C and Rx-C

Fig. 7 gives the range resolution results of the example radar sensor configurations. The range resolution has geometry-variant characteristics, depending not only on the slant range but also on the azimuth range; it degrades as the azimuth displacement from the scene centre increases. To obtain a consistent range resolution, the imaged scene coverage should be limited or a long slant range should be employed. Fig. 8 gives the azimuth resolution results of the example radar sensor configurations. It can be noticed that the angle between the transmitter and receiver velocity vectors also has an impact on the azimuth resolution. Additionally, the azimuth resolution shows only a small performance difference between the scene centre and the scene edge; this phenomenon is caused by the change of azimuth time.

It is well known that the final imaging swath is inversely proportional to the height of the antenna aperture. To obtain a wider swath, the elevation dimension of the near-space vehicle-borne receive aperture should be small. However, a smaller antenna height implies a reduction in the radiometric resolution. Therefore, to further extend the imaging scene coverage, we present a combined multi-beamforming and scan-on-receive approach, as shown in Fig. 9. The receive antenna is formed by multiple channels or apertures in elevation and azimuth. The height of each receive subaperture should be as small as possible, so that each
of them can cover a wide area illuminated by the spaceborne transmit aperture. The proper combination of the signals from the different channels is performed through a digital beamforming technique [37, 38], like the scan-on-receive discussed in [39]. The basic idea is to shape a time-varying elevation beam on reception such that it follows the radar echoes on the ground.

To evaluate the performance of the overlapped subaperture-based motion compensation algorithm, we made a simulation using the following system parameters: the radar carrier wavelength is λ = 0.03 m, the coherent processing interval is 1 s, the transmitter's speed is 7000 m/s, the distance from the transmitter to the target is 800 km, the ideal near-space vehicle-borne receiver's speed is 0 m/s and the distance from the receiver to the target is 30 km. We further assume phase errors caused by the motion errors, as shown in Fig. 10. Fig. 11 shows the comparative processing results. It can be noticed that significant processing performance improvements are obtained after applying the motion compensation algorithm.

5 Conclusion

This study proposes a distributed passive radar sensor network with near-space vehicle-borne receivers for regional remote sensing surveillance. We analysed the corresponding system performance, such as imaging coverage and imaging resolution. Since there is a large speed difference between the transmit and receive platforms, we proposed an approach to extend the imaging coverage through multi-beamforming and scan-on-receive. The numerical analysis results show that satisfactory imaging performance can be obtained by this approach. An overlapped subaperture-based motion compensation algorithm is also proposed for this passive radar sensor network, which is validated by the point-target simulation results. Note that only one transmitter is employed in this study. Radar sensor networks can employ multiple transmitters; in that case, however, orthogonal waveforms may be required. Another remaining problem is
synchronisation for distributed radar sensor networks. These problems will be further investigated in our subsequent work.

6 Acknowledgment

This work was supported in part by the National Natural Science Foundation of China under grant No. 41101317, the Fundamental Research Funds for the Central Universities under grant No. ZYGX2010J001, the First Grade of 49th Chinese Post-Doctor Research Funds under grant No. 20110490143 and the Open Funds of the State Laboratory of Remote Sensing Science, Institute of Remote Sensing Applications, Chinese Academy of Sciences under grant No. OFSLRSS201011.

Fig. 9 Extending the imaged scene by using the combined multi-beamforming and scan-on-receive approach
Fig. 10 Assumed phase errors caused by the motion errors
Fig. 11 Comparative processing results before and after applying the motion compensation algorithm

7 References

1 Arik, M., Akan, O.B.: 'Collaborative mobile target imaging in UWB wireless radar sensor networks', IEEE J. Sel. Areas Commun., 2010, 28, (6), pp. 950–961
2 Liang, Q.L.: 'Radar sensor wireless channel modeling in foliage environment: UWB versus narrowband', IEEE Sensors J., 2011, 11, (6), pp. 1448–1457
3 Schuerger, J., Garmatyuk, D.: 'Deception jamming modeling in radar sensor networks'. Proc. IEEE Military Communications Conf., San Diego, CA, November 2008, pp. 1–7
4 Chiani, M., Giorgetti, A., Mazzotti, M., Minutolo, R., Paolini, E.: 'Target detection metrics and tracking for UWB radar sensor networks'. Proc. IEEE Int. Ultra-Wideband Conf., Vancouver, Canada, September 2009, pp. 469–474
5 Bartoletti, S., Conti, A., Giorgetti, A.: 'Analysis of UWB radar sensor networks'. Proc. IEEE Int. Communications Conf., Cape Town, South Africa, May 2010, pp. 1–6
6 Bielefeld, D., Mathar, R., Hirsch, O., Thoma, R.S.: 'Power-aware distributed target detection in wireless sensor networks with UWB-radar nodes'. Proc. IEEE Radar Conf., Arlington, VA, May 2010, pp. 842–847
7 Xu, L., Liang, J.: 'Radar sensor network using a set of new ternary codes: theory and application', IEEE Sensors J., 2011, 11, (2), pp. 439–450
8 Jiang, D.D., Wang, L.: 'Communication networks
Mahalanobis distance-based traffic matrix estimation', Eur. Trans. Telecommun., 2010, 21, (3), pp. 195–201
9 Hume, A.L., Baker, C.J.: 'Netted radar sensing'. Proc. IEEE Radar Conf., Atlanta, GA, May 2001, pp. 23–26
10 Liang, Q.: 'Radar sensor networks for automatic target recognition with delay-Doppler uncertainty'. Proc. IEEE Military Communications Conf., Washington, DC, June 2006, pp. 1–7
11 Liang, J., Liang, Q.: 'Design and analysis of distributed radar sensor networks', IEEE Trans. Parallel Distrib. Syst., 2011, 22, (11), pp. 1926–1933
12 Wang, W.Q.: 'Near-space vehicles: supply a gap between satellites and airplanes', IEEE Aerosp. Electron. Syst. Mag., 2011, 25, (4), pp. 4–9
13 Allen, E.H.: 'The case for near-space', Aerosp. Am., 2006, 22, (1), pp. 31–34
14 Tomme, E.B.: 'Balloons in today's military: an introduction to the near-space concept', Airspace J., 2005, 19, (1), pp. 39–50
15 Wang, W.Q., Cai, J.Y., Peng, Q.C.: 'Near-space microwave radar remote sensing: potentials and challenge analysis', Remote Sens., 2010, 2, (3), pp. 717–739
16 Wang, W.Q., Cai, J.Y.: 'A technique for jamming bi- and multistatic SAR systems', IEEE Geosci. Remote Sens. Lett., 2007, 4, (1), pp. 80–82
17 Wang, W.Q.: 'Application of near-space passive radar for homeland security', Sens. Imag.: An Int. J., 2007, 8, (1), pp. 39–52
18 Wang, W.Q., Cai, J.Y., Peng, Q.C.: 'Near-space SAR: a revolutionizing remote sensing mission'. Proc. Asia-Pacific Synthetic Aperture Radar Conf., Huangshan, China, November 2007, pp. 127–131
19 Marcel, M.J., Baker, J.: 'Interdisciplinary design of a near-space vehicle'. Proc. IEEE Southeast Conf., Richmond, USA, March 2007, pp. 421–426
20 Guan, M.X., Guo, Q., Li, L.: 'A novel access protocol for communication system in near-space'. Proc. Wireless Communication Network Mobile Computation Conf., Shanghai, China, September 2007, pp. 1849–1852
21 Galletti, M., Krieger, G., Thomas, B., Marquart, M., Johannes, S.S.: 'Concept design of a near-space radar for tsunami detection'. Proc. IEEE Geosci. Remote Sens. Symp., Barcelona, June 2007, pp. 34–37
22 Romeo, G., Frulla, G.: 'HELIPLAT: high altitude very-long endurance solar powered UAV for telecommunication and earth
observation applications', Aeronaut. J., 2004, 108, (4), pp. 277–293
23 Wang, W.Q.: 'Near-space remote sensing: potential and challenges' (Springer, New York, 2011)
24 Krieger, G., Moreira, A.: 'Spaceborne bi- and multistatic SAR: potential and challenges', IET Radar Sonar Navig., 2006, 153, (3), pp. 184–198
25 Wang, W.Q.: 'GPS-based time & phase synchronization processing for distributed SAR', IEEE Trans. Aerosp. Electron. Syst., 2009, 45, (3), pp. 1040–1051
26 Lv, X., Xing, M., Wu, Y., Zhang, S.: 'Azimuth-invariant bistatic multichannel synthetic aperture radar for moving target detection and location', IET Radar Sonar Navig., 2009, 3, (5), pp. 461–473
27 Wang, W.Q., Ding, C.B., Liang, X.D.: 'Time and phase synchronization via direct-path signal for bistatic synthetic aperture radar systems', IET Radar Sonar Navig., 2008, 2, (1), pp. 1–11
28 Purdy, D.S.: 'Receiver antenna scan rate requirements needed to implement pulse chasing in a bistatic radar receiver', IEEE Trans. Aerosp. Electron. Syst., 2001, 37, (1), pp. 285–288
29 Gebhardt, U., Loffeld, O., Nies, H., Natroshvili, M., Knedlik, S.: 'Bistatic spaceborne/airborne experiment: geometrical modeling and simulation'. Proc. IEEE Int. Geosci. Remote Sens. Symp., Denver, July 2006, pp. 1832–1835
30 Willis, N.J.: 'Bistatic radar' (Artech House, 1991)
31 Walterscheid, I., Klare, J., Brenner, A.R., Ender, J.H.G., Loffeld, O.: 'Challenges of a bistatic spaceborne/airborne SAR experiment'. Proc. European Synthetic Aperture Radar Symp., Dresden, Germany, May 2006
32 Zhou, P., Pi, Y.M.: 'A technique for beam synchronization in non-cooperative hybrid bistatic SAR', J. Electron. Inf. Tech., 2009, 31, (5), pp. 1122–1126
33 Curlander, J.C., McDonough, R.N.: 'Synthetic aperture radar: systems and signal processing' (Wiley-Interscience, 1991)
34 Marcos, J.S., Dekker, P.L., Mallorqui, J.J., Aguasca, A., Prats, P.: 'SABRINA: a SAR bistatic receiver for interferometric applications', IEEE Geosci. Remote Sens. Lett., 2007, 4, (2), pp. 307–311
35 Lenz, R., Schuler, K., Younis, M., Wiesbeck, W.: 'TerraSAR-X active radar ground calibrator system', IEEE Aerosp. Electron. Syst. Mag., 2006, 21, (5), pp. 30–33
36 Liebe, J.R., van
de Giesen, N., Andreini, M.S., Steenhuis, T.S., Walter, M.T.: 'Suitability and limitations of ENVISAT ASAR for monitoring small reservoirs in a semiarid area', IEEE Trans. Geosci. Remote Sens., 2009, 47, (5), pp. 1536–1547
37 Gebert, N., Krieger, G., Moreira, A.: 'Digital beamforming on receive: techniques and optimization strategies for high-resolution wide-swath SAR imaging', IEEE Trans. Aerosp. Electron. Syst., 2009, 45, (2), pp. 564–592
38 Younis, M., Fischer, C., Wiesbeck, W.: 'Digital beamforming in SAR systems', IEEE Trans. Geosci. Remote Sens., 2003, 41, (7), pp. 1735–1739
39 Suess, M., Grafmueller, B., Zahn, R.: 'A novel high resolution, wide swath SAR system'. Proc. IEEE Int. Geosci. Remote Sens. Symp., June 2001, pp. 1013–1015
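As a closing illustration, the overlapped-subaperture chirp-rate estimator of Section 3, equations (27)-(32), can be sketched in a few lines: two frequency-shifted sub-bands of the azimuth chirp spectrum are conjugate-multiplied, which turns the unknown chirp rate into the delay of a single tone whose peak position yields k_a = B_as/t_i. All parameter values below are illustrative, chosen near the Section 4 simulation geometry (k_a ≈ v_t²/(λR_t) ≈ 2×10³ Hz/s); they are not taken from the paper.

```python
import numpy as np

# Sketch of the overlapped-subaperture chirp-rate estimator of Section 3,
# equations (27)-(32). All numbers are illustrative, not from the paper.
ka_true = 2000.0   # Doppler chirp rate, Hz/s
Ba      = 2000.0   # full azimuth Doppler bandwidth, Hz
Bas     = 500.0    # subaperture bandwidth, Hz
N       = 4096     # frequency samples across Ba

f  = (np.arange(N) - N // 2) * (Ba / N)      # frequency grid, step df = Ba/N
GB = np.exp(1j * np.pi * f**2 / ka_true)     # equation (27)

s  = int(round((Bas / 2) / (Ba / N)))        # half the sub-band shift, in samples
Gm = GB[: N - 2 * s] * np.conj(GB[2 * s :])  # (30): tone exp(-j*2*pi*(Bas/ka)*f)

g  = np.abs(np.fft.ifft(Gm))                 # (31): peak at t_i = Bas/ka
M  = len(Gm)
t_i = np.argmax(g) / (M * (Ba / N))          # peak position in seconds
ka_hat = Bas / t_i                           # (32)
print(ka_hat)                                # recovers 2000.0 Hz/s
```

On a discrete grid the estimate is quantised by the time-bin spacing 1/(M·Δf); in this configuration the true delay B_as/k_a = 0.25 s falls exactly on a bin, so the chirp rate is recovered exactly.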
