Sparse Representation Algorithms and Techniques for Image Denoising

Image denoising is an important problem in digital image processing; its goal is to remove noise from an image so as to improve image quality and clarity.
In practice, images are corrupted by many factors, such as sensor noise, imperfections in the acquisition process, and distortion during signal transmission. These factors introduce noise into the image and degrade its quality. Image denoising has therefore long been an active research topic in digital image processing.
Sparse representation is a method commonly used for image denoising. Its basic idea is to find a set of sparse basis vectors to represent the image, thereby separating the signal from the noise. The concept of sparse representation grew out of a family of theories and algorithms in signal processing, such as the wavelet transform and compressed sensing. By expressing the image as a linear combination of a few basis vectors, the noise component can be suppressed, achieving the goal of denoising.
To denoise an image with a sparse representation algorithm, one first needs to build a sparse representation model. Commonly used sparse solvers include orthogonal matching pursuit (OMP), matching pursuit (MP), and sparse coding via l1-norm minimization (basis pursuit). These methods have proven effective for image denoising both in theory and in practice. With these algorithms, the sparse features of an image can be extracted and used to build the sparse representation model.
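As an illustration of the greedy pursuit family mentioned above, here is a minimal NumPy sketch of orthogonal matching pursuit; the dictionary `D`, the target sparsity `k`, and the function name are illustrative choices, not taken from the original text.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily pick k atoms of D to approximate y."""
    residual = y.astype(float).copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        # select the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares refit of y on all atoms selected so far
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x
```

In a denoising setting, `D` would be a learned or analytic dictionary over image patches, and the k-sparse approximation of each noisy patch serves as its denoised estimate.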
Beyond the sparse model itself, sparsity constraints are another important issue in image denoising. A sparsity constraint imposes additional sparsity requirements to improve the accuracy and robustness of solving the sparse representation problem. Common approaches include multi-scale sparsity constraints and structured sparsity constraints. Such constraints reduce the influence of errors and improve the denoising result.
In practice, several techniques and optimizations can further improve denoising quality. First, the sparsity parameter can be tuned to control the sparse representation and achieve a better denoising result. Second, prior knowledge or learned models can guide the sparse representation so that the model better matches the actual data; for example, priors can be trained and updated for a specific scene or a specific noise type. Third, sparse representation can be combined with other denoising methods, such as the wavelet transform or total variation regularization, to further improve the result.
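To make the "tune the sparsity parameter" idea concrete, here is a hedged sketch of transform-domain denoising by soft-thresholding, using the DCT as a stand-in for whichever sparsifying transform is chosen; the threshold `lam` plays the role of the sparsity parameter, and all names are illustrative.

```python
import numpy as np
from scipy.fft import dct, idct

def soft_threshold(c, lam):
    # shrink coefficients toward zero; larger lam gives a sparser representation
    return np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)

def dct_denoise(signal, lam):
    """Denoise a 1-D signal: transform, shrink small coefficients, invert."""
    c = dct(signal, norm='ortho')
    return idct(soft_threshold(c, lam), norm='ortho')
```

Choosing `lam` trades noise suppression against loss of genuine detail, which is exactly the tuning discussed above; for images the same shrinkage is applied to 2-D transform coefficients.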
In summary, sparse representation is a common and effective approach to image denoising. By building a sparse representation model and applying sparsity constraints, noise can be removed from an image, improving its quality and clarity. In practice, the result can be optimized further by tuning parameters, introducing prior knowledge, and combining the method with other techniques.
Malicious Code Detection Based on Multi-Channel Image Deep Learning
Journal of Computer Applications, 2021, 41(4): 1142-1147. Published 2021-04-10. ISSN 1001-9081, CODEN JYIIDU. http://
JIANG Kaolin, BAI Wei, ZHANG Lei, CHEN Jun, PAN Zhisong*, GUO Shize
(Command and Control Engineering College, Army Engineering University, Nanjing 210007, China)
(*Corresponding author, e-mail: hotpzs@ )
Abstract: Existing deep-learning-based malicious code detection methods suffer from weak deep-level feature extraction, relatively complex models, and insufficient model generalization.
Meanwhile, code reuse is widespread among malicious samples of the same family, and reused code produces similar visual features; this similarity can be exploited for malicious code detection. We therefore propose a malicious code detection method based on multi-channel image visual features and the AlexNet neural network. The method first converts the code under inspection into a multi-channel image, then uses AlexNet to extract its color texture features and classify them, thereby detecting possible malicious code. By combining multi-channel image feature extraction, Local Response Normalization (LRN), and related techniques, the method improves generalization while effectively keeping model complexity low. Tested on the class-balanced Malimg dataset, the method reaches an average classification accuracy of 97.8%, improving accuracy by 1.8% and detection efficiency by 60.2% over a VGGNet-based method. The experimental results show that the color texture features of multi-channel images reflect the class information of malicious code well, that AlexNet's relatively simple structure effectively improves detection efficiency, and that local response normalization improves the model's generalization and detection performance.
Keywords: multi-channel image; color texture feature; malicious code; deep learning; Local Response Normalization (LRN)
CLC number: TP309  Document code: A

Malicious code detection based on multi-channel image deep learning
JIANG Kaolin, BAI Wei, ZHANG Lei, CHEN Jun, PAN Zhisong*, GUO Shize
(Command and Control Engineering College, Army Engineering University, Nanjing Jiangsu 210007, China)
Abstract: Existing deep learning-based malicious code detection methods have problems such as weak deep-level feature extraction capability, relatively complex models and insufficient model generalization capability. At the same time, code reuse occurs in large numbers of malicious samples of the same type, resulting in similar visual features of the code. This similarity can be used for malicious code detection. Therefore, a malicious code detection method based on multi-channel image visual features and AlexNet was proposed. In the method, the codes to be detected were converted into multi-channel images at first. After that, AlexNet was used to extract and classify the color texture features of the images, so as to detect possible malicious codes. Meanwhile, multi-channel image feature extraction, Local Response Normalization (LRN) and other technologies were used comprehensively, which effectively improved the generalization ability of the model while reducing its complexity. The Malimg dataset after equalization was used for testing; the results showed that the average classification accuracy of the proposed method was 97.8%, with the accuracy increased by 1.8% and the detection efficiency increased by 60.2% compared with the VGGNet method. Experimental results show that the color texture features of multi-channel images can better reflect the type information of malicious codes, the simple network structure of AlexNet can effectively improve the detection efficiency, and local response normalization can improve the generalization ability and detection effect of the model.
Key words: multi-channel image; color texture feature; malicious code; deep learning; Local Response Normalization (LRN)

Introduction
Malicious code has become one of the main sources of threats in cyberspace.
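The first step of the pipeline described in the abstract, turning raw code bytes into image planes, can be sketched as follows; the fixed width and the way channels would be formed are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def bytes_to_plane(data: bytes, width: int = 256) -> np.ndarray:
    """Reshape raw code bytes into one 2-D grayscale plane (values 0-255).

    Stacking several such planes derived from the same binary (assumption:
    e.g. from different file sections) would form the multi-channel image
    that is then fed to the CNN."""
    buf = np.frombuffer(data, dtype=np.uint8)
    rows = len(buf) // width  # drop the ragged tail, if any
    return buf[: rows * width].reshape(rows, width)
```

Because reused code yields near-identical byte runs, such planes from samples of the same family show similar texture, which is the visual regularity the classifier learns.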
CH101 Ultra-Low-Power Miniature Ultrasonic Time-of-Flight Ranging Sensor: Datasheet
CH101 Ultra-low Power Integrated Ultrasonic Time-of-Flight Range Sensor

Chirp Microsystems reserves the right to change specifications and information herein without notice.
Chirp Microsystems, 2560 Ninth Street, Ste 200, Berkeley, CA 94710 U.S.A, +1 (510) 640-8155
Document Number: DS-000331  Revision: 1.2  Release Date: 07/17/2020

CH101 HIGHLIGHTS
The CH101 is a miniature, ultra-low-power ultrasonic Time-of-Flight (ToF) range sensor. Based on Chirp's patented MEMS technology, the CH101 is a system-in-package that integrates a PMUT (Piezoelectric Micromachined Ultrasonic Transducer) together with an ultra-low-power SoC (system on chip) in a miniature, reflowable package. The SoC runs Chirp's advanced ultrasonic DSP algorithms and includes an integrated microcontroller that provides digital range readings via I2C.
Complementing Chirp's long-range CH201 ultrasonic ToF sensor, the CH101 provides accurate range measurements to targets at distances up to 1.2 m. Using ultrasonic measurements, the sensor works in any lighting condition, including full sunlight to complete darkness, and provides millimeter-accurate range measurements independent of the target's color and optical transparency. The sensor's Field-of-View (FoV) can be customized and enables simultaneous range measurements to multiple objects in the FoV. Many algorithms can further process the range information for a variety of use cases in a wide range of applications.
The CH101-00ABR is a Pulse-Echo product intended for range-finding and presence applications using a single sensor to transmit and receive ultrasonic pulses. The CH101-02ABR is a frequency-matched Pitch-Catch product intended for applications using one sensor to transmit and a second sensor to receive the frequency-matched ultrasonic pulse.

DEVICE INFORMATION
| PART NUMBER | OPERATION | PACKAGE |
| CH101-00ABR | Pulse-Echo | 3.5 x 3.5 x 1.26 mm LGA |
| CH101-02ABR | Pitch-Catch | 3.5 x 3.5 x 1.26 mm LGA |
RoHS and Green-Compliant Package

APPLICATIONS
• Augmented and Virtual Reality
• Robotics
• Obstacle avoidance
• Mobile and Computing Devices
• Proximity/Presence sensing
• Ultra-low power remote presence-sensing nodes
• Home/Building automation

FEATURES
• Fast, accurate range-finding
• Operating range from 4 cm to 1.2 m
• Sample rate up to 100 samples/sec
• 1.0 mm RMS range noise at 30 cm range
• Programmable modes optimized for medium- and short-range sensing applications
• Customizable field of view (FoV) up to 180°
• Multi-object detection
• Works in any lighting condition, including full sunlight to complete darkness
• Insensitive to object color; detects optically transparent surfaces (glass, clear plastics, etc.)
• Easy to integrate
• Single sensor for receive and transmit
• Single 1.8 V supply
• I2C Fast-Mode compatible interface, data rates up to 400 kbps
• Dedicated programmable range interrupt pin
• Platform-independent software driver enables turnkey range-finding
• Miniature integrated module
• 3.5 mm x 3.5 mm x 1.26 mm, 8-pin LGA package
• Compatible with standard SMD reflow
• Low-power SoC running advanced ultrasound firmware
• Operating temperature range: -40°C to 85°C
• Ultra-low supply current
  1 sample/s:
  o 13 µA (10 cm max range)
  o 15 µA (1.0 m max range)
  30 samples/s:
  o 20 µA (10 cm max range)
  o 50 µA (1.0 m max range)

TABLE OF CONTENTS
CH101 Highlights; Device Information; Applications; Features; Simplified Block Diagram; Absolute Maximum Ratings; Package Information; 8-Pin LGA; Pin Configuration; Pin Descriptions; Package Dimensions; Electrical Characteristics; Typical Operating Characteristics; Detailed Description; Theory of Operation; Device Configuration; Applications; Chirp CH101 Driver; Object Detection; Interfacing to the CH101 Ultrasonic Sensor; Device Modes of Operation; Layout Recommendations; PCB Reflow Recommendations; Use of Level Shifters; Typical Operating Circuits; Ordering Information; Part Number Designation; Package Marking; Tape & Reel Specification; Shipping Label; Revision History

SIMPLIFIED BLOCK DIAGRAM
Figure 1. Simplified Block Diagram

ABSOLUTE MAXIMUM RATINGS
| PARAMETER | MIN | MAX | UNIT |
| AVDD to VSS | -0.3 | 2.2 | V |
| VDD to VSS | -0.3 | 2.2 | V |
| SDA, SCL, PROG, RST_N to VSS | -0.3 | 2.2 | V |
| ESD, Human Body Model (HBM)(1) | -2 | 2 | kV |
| ESD, Charge Device Model (CDM)(2) | -500 | 500 | V |
| Latchup | -100 | 100 | mA |
| Temperature, Operating | -40 | 85 | °C |
| Relative Humidity, Storage | | 90 | %RH |
| Continuous Input Current (Any Pin) | -20 | 20 | mA |
| Soldering Temperature (reflow) | | 260 | °C |
Table 1. Absolute Maximum Ratings
Notes:
1. HBM tests conducted in compliance with ANSI/ESDA/JEDEC JS-001-2014 or JESD22-A114E
2. CDM tests conducted in compliance with JESD22-C101

PACKAGE INFORMATION
8-PIN LGA
| DESCRIPTION | DOCUMENT NUMBER |
| CH101 Mechanical Integration Guide | AN-000158 |
| CH101 and CH201 Ultrasonic Transceiver Handling and Assembly Guidelines | AN-000159 |
Table 2. 8-Pin LGA

PIN CONFIGURATION
Top View
Figure 2. Pin Configuration (Top View)

PIN DESCRIPTIONS
| PIN | NAME | DESCRIPTION |
| 1 | INT | Interrupt output. Can be switched to input for triggering and calibration functions. |
| 2 | SCL | SCL input. I2C clock input. This pin must be pulled up externally. |
| 3 | SDA | SDA input/output. I2C data I/O. This pin must be pulled up externally. |
| 4 | PROG | Program enable. Cannot be floating. |
| 5 | VSS | Power return. |
| 6 | VDD | Digital logic supply. Connect to externally regulated 1.8 V supply. Common connection to AVDD is suggested. If not connected locally to AVDD, bypass with a 0.1 µF capacitor as close as possible to the VDD I/O pad. |
| 7 | AVDD | Analog power supply. Connect to externally regulated supply. Bypass with a 0.1 µF capacitor as close as possible to the AVDD I/O pad. |
| 8 | RESET_N | Active-low reset. Cannot be floating. |
Table 3. Pin Descriptions

PACKAGE DIMENSIONS
Figure 3. Package Dimensions

ELECTRICAL CHARACTERISTICS
AVDD = VDD = 1.8 VDC, VSS = 0 V, TA = +25°C; min/max are from TA = -40°C to +85°C, unless otherwise specified.
| PARAMETER | SYMBOL | CONDITIONS | MIN | TYP | MAX | UNITS |
POWER SUPPLY
| Analog Power Supply | AVDD | | 1.62 | 1.8 | 1.98 | V |
| Digital Power Supply | VDD | | 1.62 | 1.8 | 1.98 | V |
ULTRASONIC TRANSMIT CHANNEL
| Operating Frequency | | | | 175 | | kHz |
TX/RX OPERATION (GPR FIRMWARE USED UNLESS OTHERWISE SPECIFIED)
| Maximum Range | Max Range | Wall target | | 1.2(1) | | m |
| | | 58 mm diameter post | | 0.7 | | m |
| Minimum Range | Min Range | Short-Range F/W used | | 4(2) | | cm |
| Measuring Rate | SR | | | | 100 | S/s |
| Field of View | FoV | Configurable | | | 180 | deg |
| Current Consumption (AVDD + VDD) | IS | SR = 1 S/s, Range = 10 cm | | 13 | | µA |
| | | SR = 1 S/s, Range = 1.0 m | | 15 | | µA |
| | | SR = 30 S/s, Range = 10 cm | | 20 | | µA |
| | | SR = 30 S/s, Range = 1.0 m | | 50 | | µA |
| Range Noise | NR | Target range = 30 cm | | 1.0 | | mm, rms |
| Measurement Time | | 1 m max range | | 18 | | ms |
| Programming Time | | | | 60 | | ms |
Table 4. Electrical Characteristics
Notes:
1. Tested with a stationary target.
2. For non-stationary objects. While objects closer than 4 cm can be detected, the range measurement is not ensured.

ELECTRICAL CHARACTERISTICS (CONT'D)
AVDD = VDD = 1.8 VDC, VSS = 0 V, TA = +25°C, unless otherwise specified.
| PARAMETER | SYMBOL | CONDITIONS | MIN | TYP | MAX | UNITS |
DIGITAL I/O CHARACTERISTICS
| Output Low Voltage | VOL | SDA, INT | | | 0.4 | V |
| Output High Voltage | VOH | INT | 0.9*VVDD | | | V |
| I2C Input Voltage Low | VIL_I2C | SDA, SCL | | | 0.3*VVDD | V |
| I2C Input Voltage High | VIH_I2C | SDA, SCL | 0.7*VVDD | | | V |
| Pin Leakage Current | IL | SDA, SCL, INT (inactive), TA = 25°C | | | ±1 | µA |
DIGITAL/I2C TIMING CHARACTERISTICS
| SCL Clock Frequency | fSCL | I2C Fast Mode | | | 400 | kHz |
Table 5. Electrical Characteristics (Cont'd)

TYPICAL OPERATING CHARACTERISTICS
AVDD = VDD = 1.8 VDC, VSS = 0 V, TA = +25°C, unless otherwise specified.
Typical Beam Pattern – MOD_CH101-03-01 Omnidirectional FoV module (measured with a 1 m² flat plate target at a 30 cm range)
Figure 4. Beam pattern measurements of CH101 module

DETAILED DESCRIPTION
THEORY OF OPERATION
The CH101 is an autonomous, digital-output ultrasonic rangefinder.
The Simplified Block Diagram, previously shown, details the main components at the package level. Inside the package are a piezoelectric micromachined ultrasonic transducer (PMUT) and a system-on-chip (SoC). The SoC controls the PMUT to produce pulses of ultrasound that reflect off targets in the sensor's Field of View (FoV). The reflections are received by the same PMUT after a short time delay, amplified by sensitive electronics, digitized, and further processed to produce the range to the primary target. Many algorithms can further process the range information for a variety of use cases in a wide range of applications.
The time it takes the ultrasound pulse to propagate from the PMUT to the target and back is called the time-of-flight (ToF). The distance to the target is found by multiplying the time-of-flight by the speed of sound and dividing by two (to account for the round trip). The speed of sound in air is approximately 343 m/s. The speed of sound is not a constant but is generally stable enough to give measurement accuracies within a few percent error.

DEVICE CONFIGURATION
A CH101 program file must be loaded into the on-chip memory at initial power-on. The program, or firmware, is loaded through a special I2C interface. Chirp provides a default general-purpose rangefinder (GPR) firmware that is suitable for a wide range of applications. This firmware enables autonomous range-finding operation of the CH101. It also supports hardware triggering of the CH101 for applications requiring multiple transceivers. Program files can also be tailored to the customer's application. Contact Chirp for more information.
CH101 has several features that allow for low-power operation. An ultra-low-power, on-chip real-time clock (RTC) sets the sample rate and provides the reference for the time-of-flight measurement.
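The time-of-flight-to-distance conversion described in the Theory of Operation above can be stated in a few lines; the function name and the fixed 343 m/s figure (dry air near 20 °C) are our own illustrative assumptions.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C; varies a few percent with temperature

def tof_to_range_m(tof_s: float, speed: float = SPEED_OF_SOUND) -> float:
    """Convert a round-trip time-of-flight in seconds to target range in meters."""
    return tof_s * speed / 2.0  # divide by two for the out-and-back path
```

For example, a 2 ms round trip corresponds to a target about 0.343 m away, consistent with the datasheet's 1.2 m maximum range taking roughly 7 ms of echo time.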
The host processor does not need to provide any stimulus to the CH101 during normal operation, allowing the host processor to be shut down into its lowest-power mode until the CH101 generates a wake-up interrupt. There is also a general-purpose input/output (INT) pin that is optimized to be used as a system wake-up source. The interrupt pin can be configured to trigger on motion or proximity.

APPLICATIONS
CHIRP CH101 DRIVER
Chirp provides a compiler- and microcontroller-independent C driver for the CH101 which greatly simplifies integration. The CH101 driver implements high-level control of one or more CH101s attached to one or more I2C ports on the host processor. The CH101 driver allows the user to program, configure, trigger, and read out data from the CH101 through C function calls, without direct interaction with the CH101 I2C registers. The CH101 driver only requires the customer to implement an I/O layer which communicates with the host processor's I2C hardware and GPIO hardware. Chirp highly recommends that all designs use the CH101 driver.

OBJECT DETECTION
Detecting the presence of objects or people can be optimized via software, by setting the sensor's full-scale range (FSR), and via hardware, using an acoustic housing to narrow or widen the sensor's field-of-view. The former means that the user may set the maximum distance at which the sensor will detect an object. FSR values refer to the one-way distance to a detected object. In practice, the FSR setting controls the amount of time that the sensor spends in the listening (receiving) period during a measurement cycle. Therefore, the FSR setting affects the time required to complete a measurement: longer full-scale range values require more time for a measurement to complete.
Ultrasonic signal processing using the CH101's General Purpose Rangefinder (GPR) firmware will detect echoes that bounce off the first target in the Field-of-View. The size, position, and material composition of the target will affect the maximum range at which the sensor can detect the target. Large targets, such as walls, are much easier to detect than smaller targets, so the associated operating range for smaller targets will be shorter. The range at which people can be detected is affected by a variety of factors such as a person's size, clothing, orientation to the sensor, and the sensor's field-of-view. In general, given these factors, people can be detected at a maximum distance of 0.7 m from the CH101 sensor.
For additional guidance on the detection of people/objects using the NEMA standard, the AN-000214 Presence Detection Application Note discusses the analysis of presence detection using the long-range CH201 ultrasonic sensor.

INTERFACING TO THE CH101 ULTRASONIC SENSOR
The CH101 communicates with a host processor over the 2-wire I2C protocol. The CH101 operates as an I2C slave and responds to commands issued by the I2C master. The CH101 contains two separate I2C interfaces, running on two separate slave addresses. The first is for loading firmware into the on-chip program memory, and the second is for in-application communication with the CH101. The 7-bit programming address is 0x45, and the 7-bit application address defaults to 0x29. The application address can be reprogrammed to any valid 7-bit I2C address.
The CH101 uses clock stretching to allow enough time to respond to the I2C master. The CH101 clock-stretches before the acknowledge (ACK) bit on both transmit and receive. For example, when the CH101 transmits, it will hold SCL low after it transmits the 8th bit of the current byte while it loads the next byte into its internal transmit buffer. When the next byte is ready, it releases the SCL line, reads the master's ACK bit, and proceeds accordingly. When the CH101 is receiving, it holds the SCL line low after it receives the 8th bit in a byte. The CH101 then chooses whether to ACK or NACK depending on the received data and releases the SCL line.
The figure below shows an overview of the I2C slave interface. In the diagram, 'S' indicates I2C start, 'R/W' is the read/write bit, 'Sr' is a repeated start, 'A' is acknowledge, and 'P' is the stop condition. Grey boxes indicate I2C master actions; white boxes indicate I2C slave actions.
Figure 5. CH101 I2C Slave Interface Diagram

DEVICE MODES OF OPERATION
FREE-RUNNING MODE
In the free-running measurement mode, the CH101 runs autonomously at a user-specified sample rate. In this mode, the INT pin is configured as an output. The CH101 pulses the INT pin high when a new range sample is available. At this point, the host processor may read the sample data from the CH101 over the I2C interface.

HARDWARE-TRIGGERED MODE
In the hardware-triggered mode, the INT pin is used bidirectionally. The CH101 remains in an idle condition until triggered by pulsing the INT pin. The measurement starts with deterministic latency relative to the rising edge on INT. This mode is most useful for synchronizing several CH101 transceivers. The host controller can use the individual INT pins of several transceivers to coordinate the exact timing.

CH101 BEAM PATTERNS
The acoustic Field of View is easily customizable for the CH101 and is achieved by adding an acoustic housing to the transceiver that is profiled to realize the desired beam pattern. Symmetric, asymmetric, and omnidirectional (180° FoV) beam patterns are realizable. An example beam pattern is shown in the Typical Operating Characteristics section of this document, and several acoustic housing designs for various FoVs are available from Chirp.

LAYOUT RECOMMENDATIONS
RECOMMENDED PCB FOOTPRINT
Dimensions in mm
Figure 6. Recommended PCB Footprint

PCB REFLOW RECOMMENDATIONS
See App Note AN-000159, CH101 and CH201 Ultrasonic Transceiver Handling and Assembly Guidelines.

USE OF LEVEL SHIFTERS
While the use of autosense level shifters for all the digital I/O signals is acceptable, special handling of the INT line while using a level shifter is required to ensure proper resetting of this line. As the circuit stage is neither a push-pull nor an open-drain configuration (see the representative circuit below), it is recommended that a level shifter with a manual direction-control line be used. The TI SN74LVC2T45 bus transceiver is a recommended device for level shifting of the INT signal line.
Figure 7. INT Line I/O Circuit Stage

TYPICAL OPERATING CIRCUITS
Figure 8. Single Transceiver Operation
Figure 9. Multi-Transceiver Operation

ORDERING INFORMATION
PART NUMBER DESIGNATION
Figure 10. Part Number Designation
CH101-xxABx: CH101 = Ultrasonic ToF Sensor (product family); 00AB = Pulse-Echo product variant; 02AB = Pitch-Catch product variant; R = Tape & Reel (shipping carrier).
This datasheet specifies the following part numbers:
| PART NUMBER | OPERATION | PACKAGE BODY | QUANTITY | PACKAGING |
| CH101-00ABR | Pulse-Echo | 3.5 mm x 3.5 mm x 1.26 mm LGA-8L | 1,000 | 7" Tape and Reel |
| CH101-02ABR | Pitch-Catch | 3.5 mm x 3.5 mm x 1.26 mm LGA-8L | 1,000 | 7" Tape and Reel |
Table 6. Part Number Designation

PACKAGE MARKING
Figure 11. Package Marking

TAPE & REEL SPECIFICATION
Figure 12. Tape & Reel Specification

SHIPPING LABEL
A shipping label will be attached to the reel, bag, and box. The information provided on the label is as follows:
• Device: the full part number
• Lot Number: Chirp manufacturing lot number
• Date Code: date the lot was sealed in the moisture-proof bag
• Quantity: number of components on the reel
• 2D Barcode: contains lot no., quantity, and reel/bag/box number
Label example (dimensions in mm): DEVICE: CH101-XXXXX-X; LOT NO: XXXXXXXX; DATE CODE: XXXX; QTY: XXXX
Figure 13. Shipping Label

REVISION HISTORY
| DATE | REVISION | DESCRIPTION |
| 09/30/19 | 1.0 | Initial release |
| 10/22/19 | 1.1 | Changed CH-101 to CH101. Updated Figure 7 to current markings. |
| 07/17/20 | 1.2 | Format update. Incorporated "Maximum Ratings Table" and "Use of Level Shifters" sections. |

This information furnished by Chirp Microsystems, Inc. ("Chirp Microsystems") is believed to be accurate and reliable. However, no responsibility is assumed by Chirp Microsystems for its use, or for any infringements of patents or other rights of third parties that may result from its use. Specifications are subject to change without notice. Chirp Microsystems reserves the right to make changes to this product, including its circuits and software, in order to improve its design and/or performance, without prior notice. Chirp Microsystems makes no warranties, neither expressed nor implied, regarding the information and specifications contained in this document. Chirp Microsystems assumes no responsibility for any claims or damages arising from information contained in this document, or from the use of products and services detailed therein. This includes, but is not limited to, claims or damages based on the infringement of patents, copyrights, mask work and/or other intellectual property rights. Certain intellectual property owned by Chirp Microsystems and described in this document is patent protected. No license is granted by implication or otherwise under any patent or patent rights of Chirp Microsystems. This publication supersedes and replaces all information previously supplied. Trademarks that are registered trademarks are the property of their respective companies.
Chirp Microsystems sensors should not be used or sold in the development, storage, production or utilization of any conventional or mass-destructive weapons or for any other weapons or life threatening applications, as well as in any other life critical applications such as medical equipment, transportation, aerospace and nuclear instruments, undersea equipment, power plant equipment, disaster prevention and crime prevention equipment.
©2020 Chirp Microsystems. All rights reserved. Chirp Microsystems and the Chirp Microsystems logo are trademarks of Chirp Microsystems, Inc. The TDK logo is a trademark of TDK Corporation. Other company and product names may be trademarks of the respective companies with which they are associated.
Low-Complexity MIMO Detection for High-Order Modulation
Authors: QIU Xiaoying, YU Guanxia, WU Di
Source: Computer Knowledge and Technology, 2015, Issue 26
Abstract: To address the high complexity of detection algorithms in multiple-input multiple-output (MIMO) receivers under high-order modulation, a low-complexity nonlinear MIMO detection algorithm for high-order modulation is proposed.
The algorithm adopts a layered processing approach and supports parallel hardware implementation, effectively improving detection efficiency. On the basis of a system simulation model, the bit-error-rate performance and complexity of several algorithms are analyzed, and the parameter selection for the proposed algorithm is carried out. Simulation results show that the proposed low-complexity MIMO detection algorithm approaches the bit-error-rate performance of maximum-likelihood decoding at much lower computational complexity, achieving a good trade-off between performance and complexity. The algorithm provides a theoretical basis for the VLSI implementation of MIMO wireless receivers and has high practical value.
Keywords: multi-input multi-output detection; low-complexity detection algorithm; high-order modulation; layered processing; sphere decoding
CLC number: TN919  Document code: A  Article ID: 1009-3044(2015)26-0185-04

Low Complexity MIMO Detection for High-Order Modulation Schemes
QIU Xiao-ying1, YU Guan-xia2, WU Di3
(1. College of Information Science and Technology, Nanjing Forestry University, Nanjing 210037, China; 2. College of Science, Nanjing Forestry University, Nanjing 210037, China; 3. School of Electronic and Information Engineering, Soochow University, Suzhou 215006, China)
Abstract: Aiming at the high complexity of detection for multi-input multi-output (MIMO) systems using high-order modulation schemes, a low-complexity nonlinear detection algorithm for high-order modulation was proposed. Based on a layered approach, the proposed algorithm can effectively improve the detection efficiency. Based on link-level simulation, the bit error rate performance and the complexity of several algorithms were analyzed. The simulation results show that the proposed low-complexity MIMO detection algorithm can achieve performance similar to maximum likelihood detection with much lower computational complexity, thus obtaining a good trade-off between performance and complexity. The detection algorithm is straightforward to implement in hardware, which makes it valuable for practical application.
Key words: multi-input multi-output detection; low complexity detection algorithm; high-order modulation scheme; layered approach; sphere decoding

Multi-input multi-output (MIMO) technology is widely adopted in the latest wireless communication standards (such as 3GPP LTE and 802.11ac).
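To see why low-complexity detectors matter, compare against exhaustive maximum-likelihood detection, whose cost grows as |C|^Nt in the constellation size and transmit-antenna count; the brute-force sketch below (function name and the toy BPSK constellation are illustrative, not from the paper) is what layered and sphere-decoding detectors try to avoid.

```python
import numpy as np
from itertools import product

def ml_detect(H, y, constellation):
    """Exhaustive ML detection: argmin over all transmit vectors s of ||y - H s||^2.

    Complexity is len(constellation) ** n_tx, which explodes for high-order
    modulation (e.g. 64-QAM) and many antennas."""
    best, best_metric = None, np.inf
    for cand in product(constellation, repeat=H.shape[1]):
        s = np.array(cand, dtype=complex)
        metric = np.linalg.norm(y - H @ s) ** 2
        if metric < best_metric:
            best, best_metric = s, metric
    return best
```

With 64-QAM and 4 transmit antennas this loop already visits 64^4 ≈ 16.8 million candidates per received vector, which is the cost the proposed layered detector sidesteps while approaching the same error rate.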
Visual Inspection Algorithms for Defect Detection in Textured Objects (Graduate Thesis)
Abstract

In the fiercely competitive field of industrial automation, machine vision plays a pivotal role in product quality control, and its application to defect detection has become increasingly common. Compared with conventional inspection techniques, automated visual inspection systems are more economical, faster, more efficient, and safer. Textured objects are ubiquitous in industrial production: substrates used for semiconductor assembly and packaging, light-emitting diodes, printed circuit boards in modern electronic systems, and cloth and fabrics in the textile industry can all be regarded as objects with textured features. This thesis focuses on defect detection techniques for textured objects, aiming to provide efficient and reliable algorithms for their automated inspection.

Texture is an important feature for describing image content, and texture analysis has been successfully applied to texture segmentation and texture classification. This work proposes a defect detection algorithm based on texture analysis and reference comparison. The algorithm tolerates image registration errors caused by object deformation and is robust to the influence of texture. It is designed to provide rich and meaningful physical descriptions of the detected defect regions, such as their size, shape, brightness contrast, and spatial distribution. Moreover, when a reference image is available, the algorithm can be used to inspect both homogeneously and non-homogeneously textured objects, and it also achieves good results on non-textured objects.

Throughout the detection process, we use steerable-pyramid texture analysis and reconstruction. Unlike traditional wavelet texture analysis, we add tolerance-control algorithms in the wavelet domain to handle object deformation and texture influence, achieving tolerance to deformation and robustness to texture. Finally, steerable-pyramid reconstruction guarantees the accuracy of the recovered physical description of the defect regions. In the experimental stage, we tested a series of images of practical value. The results show that the proposed defect detection algorithm for textured objects is efficient and easy to implement.
Keywords: defect detection, texture, object distortion, steerable pyramid, reconstruction
A DI-StyleGANv2 Model for Digital Image Restoration
Journal of Anhui Polytechnic University, Vol. 38, No. 6, Dec. 2023. Article ID: 1672-2477(2023)06-0084-08. Received: 2022-12-16. Funding: Natural Science Foundation of Anhui Provincial Universities (KJ2020A0367). About the authors: WANG Kun (1999-), male, from Chizhou, Anhui, master's student. Corresponding author: ZHANG Yue (1972-), female, from Wuhu, Anhui, professor, Ph.D.

A DI-StyleGANv2 Model for Digital Image Restoration
WANG Kun, ZHANG Yue*
(School of Mathematics, Physics and Finance, Anhui Polytechnic University, Wuhu 241000, China)

Abstract: When a digital image has large missing areas or the missing parts contain texture structure, traditional image enhancement techniques cannot recover the required information from the limited information in the image, so their enhancement capability is limited. This paper proposes DI-StyleGANv2, a deep-learning image restoration method that fuses decoding information with StyleGANv2. First, a U-net module generates a latent code carrying the image's primary information and decoding signals carrying its secondary information; then a StyleGANv2 module introduces a generative image prior. During image generation the model not only uses the latent-code primary information but also integrates the decoding signals containing the secondary information, achieving semantically enhanced restoration results. Experimental results on the FFHQ and CelebA databases verify the effectiveness of the proposed method.
Keywords: image restoration; decoding signal; generative adversarial network; U-net; StyleGANv2
CLC number: O29  Document code: A

Digital images are the most common format of modern image signals, but they often degrade during acquisition, transmission, and storage owing to noise, lossy compression, and other factors. Digital image restoration is the process of reconstructing a high-quality image from the information present in a degraded one; it can be applied to high-definition biomedical imaging, restoration of historical visual archives, military science, and other fields. Studying digital image restoration models therefore has significant practical and commercial value.

Traditional digital image restoration techniques mostly infer the missing content from inter-pixel correlation and content similarity. In texture synthesis, Criminisi et al. [1] proposed a patch-based copy-and-paste repair method that fills in the texture structure of the missing part by patch-based sampling. On structural similarity, Telea [2] proposed the interpolation-based Fast Marching Method (FMM), and Bertalmio et al. [3] proposed an image and video inpainting model based on the Navier-Stokes equations and fluid dynamics. These methods lack an understanding of image content and structure: once the texture information of the image to be restored is limited, the results tend to lose contextual semantics, and they require a manually created mask image of the same height and width marking the region to be restored.

In recent years, with the rapid development of deep learning, restoration methods based on convolutional neural networks have been widely applied. Yu et al. [4] used spatial transformer networks to recover high-quality face images with realistic texture from low-quality ones. Restoration algorithms based on three-dimensional facial priors [5] and adaptive spatial feature fusion enhancement [6] have also performed well. These methods work well on artificially degraded image sets, but their performance on real, complexly degraded images still needs improvement, while methods such as Pix2Pix [7] and Pix2PixHD [8] tend to over-smooth images and lose detail.

Generative adversarial networks (GANs) [9] were first applied to image generation, while U-net is used for encoding and decoding. The proposed DI-StyleGANv2 model exploits the strengths of StyleGANv2 [10] and U-net [11]: StyleGANv2 introduces a high-quality generative image prior, and U-net produces guidance information for the restored image. While injecting the latent code containing deep features into the pretrained StyleGANv2, the model also fuses in decoding signals carrying local image detail, so that context-rich, realistic images can be reconstructed from severely degraded inputs.

1 Theoretical background
1.1 U-net
U-net is an auto-encoder consisting of an encoding path that captures the contextual semantics of the image and a symmetric decoding path that reconstructs it. The first half, the contracting path, performs downsampling; the second half, the expanding path, performs upsampling; they correspond to encoding and decoding and are used to capture and recover spatial information. The encoder repeatedly applies convolutions, activation functions, and max-pooling for downsampling. As the feature-map scale decreases during downsampling, the number of feature channels increases; the decoding process along the expanding path is symmetric, and feature maps in the expanding path are concatenated with the corresponding feature maps of the contracting path.

1.2 Generative adversarial networks
The GAN framework consists of two modules, a generator G and a discriminator D. The generator G uses the captured data distribution to generate images; the discriminator D takes as input either dataset images or images produced by G, and its output describes the probability that the input comes from the real data. To let G learn the distribution p_g of data x, a noise variable z is defined, with the mapping from z to data space written G(z; θ_g), where G is a differentiable function implemented by a neural network with parameters θ_g. A second network D(x; θ_d) outputs a single scalar D(x) representing the probability that x came from the data rather than from p_g. The discriminator is trained to distinguish real samples from generated samples as well as possible, while the generator is trained to minimize log(1 - D(G(z))). This mutual game is defined by:

min_G max_D V(D, G) = E_{x~p_data(x)}[log D(x)] + E_{z~p_z(z)}[log(1 - D(G(z)))],  (1)

Through the adversarial game between generator and discriminator, the network eventually reaches a Nash equilibrium. At that point the generator G is considered to have learned the underlying distribution of the real data, and the images it synthesizes exhibit essentially the same characteristics as real data, making them hard to distinguish visually.

1.3 StyleGANv2
The high-resolution image generation method StyleGANv2 [10] is chosen as the prior network embedded in DI-StyleGANv2. StyleGANv2 [10] has two main parts. The first is the mapping network (Figure 1a): the mapping network f spatially transforms the received latent code Z through eight fully connected layers into an intermediate latent code W. Different subspaces of the feature space correspond to different class information or overall styles of the data; because they are highly coupled and exhibit feature entanglement, the mapping network f effectively disentangles the latent space, finally producing an intermediate latent code that need not follow the training-data distribution. The second part is called the synthesis network (Figure 1b). It is composed of convolution and upsampling layers and generates the required image according to the latent code produced by the mapping network. The synthesis network uses fully connected layers to transform the intermediate latent code W into style parameters A that shape the backbone features of the generated image at different scales, and uses noise to influence the details so that the generated texture looks more natural. Compared with the previous version, StyleGANv2 [10] redesigns the generator architecture, using weight demodulation to remove the effect of feature-map scale directly from the statistics of the convolution's output feature maps, fixing the feature-artifact problem and further improving result quality.

Figure 1. Basic structure of StyleGANv2

According to the incoming style parameters, weight modulation adjusts the scale of the convolution weights instead of adjusting the input feature maps:

w'_{ijk} = s_i · w_{ijk},  (2)

where w and w' are the original and modulated weights, and s_i is the scale corresponding to the i-th input feature map. StyleGANv2 then removes the effect of s_i from the statistics of the convolution's output feature maps by scaling the weights rather than the feature maps. After modulation and convolution, the standard deviation of the weights is:

σ_j = sqrt( Σ_{i,k} (w'_{ijk})² ),  (3)

Weight demodulation scales the weights by the L2 norm of the corresponding weights: StyleGANv2 applies 1/σ_j to the convolution weights, where η is a small constant that keeps the denominator from being zero:

w''_{ijk} = w'_{ijk} / sqrt( Σ_{i,k} (w'_{ijk})² + η ).  (4)

The demodulation operation fuses the modulation, convolution, and normalization of the first-generation StyleGAN, and is gentler than the first-generation operation because it modulates the weight signal rather than the actual content of the feature maps.

2 The DI-StyleGANv2 model
The structure of the DI-StyleGANv2 model is shown in Figure 2. Its main parts are a decoding-extraction module, U-net, and a pretrained StyleGANv2, which comprises the synthesis network and the discriminator. Before being fed into the DI-StyleGANv2 network, the input image to be restored is resized by bilinear interpolation to a fixed resolution of 1024×1024. The input image then passes through the U-net on the left of Figure 2 to produce the latent code Z and the decoding information (DI).

As shown in Figure 2, the latent code Z is disentangled by the mapping network, which consists of several fully connected layers, into the intermediate latent code W. The style parameters produced from W by affine transformation serve as the primary style information, and the synthesis network of the StyleGANv2 module, pretrained on the FFHQ dataset, restores the image under their guidance. The StyleGAN Block structure of the synthesis network is shown at the bottom of Figure 2; the Mod and Demod operations it contains are given by Eqs. (2) and (4). At the same time, the decoding information from each layer of the U-net decoder is added, as secondary style information, to the feature maps in the StyleGAN Block by tensor concatenation, helping the DI-StyleGANv2 model generate better detail. Finally the images produced by the synthesis network are fed to the discriminator, which judges whether they are real or generated.

Figure 2. Structure of the digital image restoration network

The model's discriminator combines three loss functions into the total loss: an adversarial loss L_A, a content loss L_C, and a feature-matching loss L_F. L_A is the adversarial loss of the original GAN, defined in Eq. (5), where X and X̂ denote the real high-definition image and the low-quality image to be restored, G is the generator during training, and D is the discriminator:

L_A = min_G max_D E_X log(1 + exp(-D(G(X̂)))),  (5)

L_C is defined as the L1-norm distance between the final restored image and the corresponding real image. L_F is built on the discriminator's feature layers and is defined in Eq. (6), where T is the total number of intermediate feature-extraction layers and D_i(X) is the feature extracted by the i-th layer of discriminator D:

L_F = min_G E_X ( Σ_{i=0}^{T} || D_i(X) - D_i(G(X̂)) ||_2 ),  (6)

The final total loss is:

L = L_A + α L_C + β L_F,  (7)

In Eq. (7), the content loss L_C measures the difference in fine features and color information between the restoration result and the real high-definition image; the feature-matching loss L_F, obtained from the discriminator's intermediate feature layers, balances the adversarial loss L_A and helps restore high-definition digital images. α and β are balancing parameters, set empirically to α = 1 and β = 0.02 in our experiments.

3 Analysis of experimental results
3.1 Dataset selection and preprocessing
Considering the diversity and resolution of image datasets, we train the digital image restoration model on the FFHQ (Flickr Faces High Quality) dataset, which contains 70,000 high-definition PNG images at 1024×1024 resolution. FFHQ covers highly diverse images of different ethnicities, genders, expressions, face shapes, and backgrounds. These rich attributes provide the model with abundant prior information during training; Figure 3 shows 31 photos selected from FFHQ.

Figure 3. Sample images from the FFHQ dataset

The degradation simulation during training, i.e., generating the low-quality images corresponding to the dataset, is implemented as follows: using the CV library, images are randomly flipped horizontally, color-jittered (with random changes of exposure, saturation, and hue), and converted to grayscale, and mixed Gaussian blur is applied, including isotropic and anisotropic Gaussian kernels. The blur kernels are 41×41; for anisotropic kernels the rotation angle is sampled uniformly from [-π, π]. Downsampling, Gaussian noise, and lossy compression are also applied. The overall simulated degradation is shown in Figure 4.

Figure 4. Simulated degradation

For model evaluation, the CelebA dataset is used to generate low-quality images for restoration and comparison with the originals, and to quantitatively compare this model with other methods proposed in recent years.

3.2 Evaluation metrics
To quantify the visual quality of different algorithms fairly, we use the two most widely used image-quality metrics, the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM), to measure the similarity between restored and real images.

PSNR is the ratio of a signal's maximum possible power to the power of the corrupting noise; larger values indicate less distortion. PSNR is defined via the per-pixel mean squared error. Let I be the high-quality reference image and I' the restored image, both of size m×n; their mean squared error is:

MSE = (1/mn) Σ_{i=1}^{m} Σ_{j=1}^{n} (I[i,j] - I'[i,j])²,  (8)

PSNR is defined as Eq. (9), where Peak denotes the maximum pixel intensity:

PSNR = 10 × log10(Peak² / MSE) = 20 × log10(Peak / √MSE),  (9)

SSIM is another widely used image-similarity metric. Whereas PSNR evaluates per-pixel differences between images, SSIM models the criteria of the human visual system: it emphasizes the structural information of the image and is closer to human judgments of image quality. SSIM estimates luminance similarity with the mean, contrast similarity with the variance, and structural similarity with the covariance. Its range is 0 to 1; larger values mean more similar images, and SSIM equals 1 when two images are identical. Given two image signals x and y, SSIM is defined as:

SSIM(x, y) = [l(x,y)]^α [c(x,y)]^β [s(x,y)]^γ,  (10)

where the luminance term l(x,y), contrast term c(x,y), and structure term s(x,y) are defined as:

l(x,y) = (2 μ_x μ_y + C1) / (μ_x² + μ_y² + C1),
c(x,y) = (2 σ_x σ_y + C2) / (σ_x² + σ_y² + C2),
s(x,y) = (σ_xy + C3) / (σ_x σ_y + C3),  (11)

where α > 0, β > 0, γ > 0 adjust the relative importance of luminance, contrast, and structure; μ_x, μ_y and σ_x, σ_y are the means and standard deviations of x and y; σ_xy is the covariance of x and y; and C1, C2, C3 are constants that stabilize the result. In practice, for simplicity, the parameters are set to α = β = γ = 1 and C3 = C2/2, giving:

SSIM(x, y) = (2 μ_x μ_y + C1)(2 σ_xy + C2) / ((μ_x² + μ_y² + C1)(σ_x² + σ_y² + C2)).  (12)

When actually computing the structural similarity of two images, local windows are specified and the structural similarity index is computed within each window; the window is then moved pixel by pixel until the local SSIM has been computed at every position of the whole image, and the mean is taken.

3.3 Experimental results
(1) DI-StyleGANv2 restoration results. Figure 5 shows restoration results of the DI-StyleGANv2 model on degraded images; Figures 5b and 5d are the restorations of Figures 5a and 5c. The comparison shows that images restored by DI-StyleGANv2 are realistic and faithful: details of hair, eyebrows, eyes, and teeth in Figures 5b and 5d are clearly visible, and even the image background is partially restored; the perceived overall quality of the restored images is excellent.

Figure 5. Restoration results

(2) Comparison with other methods. We synthesized a set of low-quality images from the CelebA-HQ dataset and, on these simulated degraded images, compared the restoration performance of our model against the recent deep-learning restoration algorithms GPEN [12], PSFRGAN [13], and HiFaceGAN [14], using the model structures and pretrained parameters trained by the original authors. The PSNR and SSIM results of each model are listed in Table 1; larger PSNR and SSIM values indicate higher similarity between restored and real high-definition images and thus better restoration. As Table 1 shows, our digital image restoration model achieves a PSNR comparable to the other state-of-the-art algorithms and improves the LPIPS index by 12.47% relative to HiFaceGAN; in our experimental environment, restoring a single 512×512 image takes 1.12 s on average.

Table 1. Restoration performance of our algorithm and related algorithms
| Method | PSNR | SSIM |
| GPEN | 20.4 | 0.6291 |
| PSFRGAN | 21.6 | 0.6557 |
| HiFaceGAN | 21.3 | 0.5495 |
| ours | 20.7 | 0.6180 |

Note that PSNR and SSIM values are only references and do not absolutely reflect the quality of a digital image restoration algorithm. Figure 6 shows restoration results of DI-StyleGANv2, GPEN, PSFRGAN, and HiFaceGAN. As Figure 6 shows, DI-StyleGANv2 restores both global consistency and local detail very well and is not inferior to the other algorithms. DI-StyleGANv2 also performs well on natural scenes other than faces. In Figure 7, the left image is before restoration and the right after. The details marked by red boxes show that noise in the restored image on the right is clearly reduced compared with the left, giving a better visual impression, which is especially evident in the enlarged detail comparison below the figure. In the signboard text region, the increased contrast and sharpening make graphic edges inside the image more distinct and the whole image clearer; much of the noise on the road surface is also removed, leaving it cleaner. Overall, the restored image is more colorful and more layered than the original degraded image, with a better visual experience.

Figure 6. Comparison of restoration results
Figure 7. Natural-scene restoration results (left: image to be restored; right: restored image)

4 Conclusion
Deep-learning-based image restoration has received wide attention and application in super-resolution, medical imaging, and other fields in recent years. Addressing the problem that traditional restoration techniques tend to lose image semantics when handling images with large missing areas or missing texture structure, and building on existing image restoration theory and methods, this paper combines the convolutional neural network U-net with the recently very effective generative adversarial network StyleGANv2, and proposes guiding a deep neural network to restore images with three kinds of information: image decoding information, the latent code, and a generative image prior. The DI-StyleGANv2 network is trained on the FFHQ dataset with random simulated image degradation, and the PSNR and SSIM metrics measure the similarity between restored and high-definition samples.

Experiments show that the DI-StyleGANv2 model can recover clear facial details, and its restoration results have good global consistency and fine local texture. It has advantages over existing techniques and produces satisfactory results given only the image to be restored, without a mask of the missing region. This is mainly because the DI-StyleGANv2 model learns a rich generative image prior through training on large-sample data, and the latent code and decoding signals generated from the degraded image guide the neural network to learn more structural and textural information of the image.

References:
[1] CRIMINISI A, PEREZ P, TOYAMA K. Region filling and object removal by exemplar-based image inpainting[J]. IEEE Transactions on Image Processing, 2004, 13(9): 1200-1212.
[2] TELEA A. An image inpainting technique based on the fast marching method[J]. Journal of Graphics Tools, 2004, 9(1): 23-34.
[3] BERTALMIO M, BERTOZZI A L, SAPIRO G. Navier-Stokes, fluid dynamics, and image and video inpainting[C]//IEEE Computer Society Conference on Computer Vision & Pattern Recognition. Kauai: IEEE Computer Society, 2001: 990497.
[4] YU X, PORIKLI F. Hallucinating very low-resolution unaligned and noisy face images by transformative discriminative autoencoders[C]//CVPR. Honolulu: IEEE Computer Society, 2017: 3760-3768.
[5] HU X B, REN
W Q,L AMA S T E RJ,e t a l.F a c e s u p e r-r e s o l u t i o n g u i d e db y3d f a c i a l p r i o r s.[C]//C o m p u t e rV i s i o n–E C C V2020:16t hE u r o p e a nC o n f e r e n c e.G l a s g o w:S p r i n g e r I n t e r n a t i o n a l P u b l i s h i n g,2020:763-780.[6] L IX M,L IW Y,R E ND W,e t a l.E n h a n c e d b l i n d f a c e r e s t o r a t i o nw i t hm u l t i-e x e m p l a r i m a g e s a n d a d a p t i v e s p a t i a l f e a-t u r e f u s i o n[C]//P r o c e e d i n g so f t h eI E E E/C V F C o n f e r e n c eo nC o m p u t e rV i s i o na n dP a t t e r n R e c o g n i t i o n.V i r t u a l:I E E EC o m p u t e r S o c i e t y,2020:2706-2715.[7] I S O L A P,Z HUJY,Z HO U T H,e t a l.I m a g e-t o-i m a g e t r a n s l a t i o nw i t hc o n d i t i o n a l a d v e r s a r i a l n e t w o r k s[C]//P r o-c e ed i n g s o f t h eI E E E C o n fe r e n c eo nC o m p u t e rV i s i o na n dP a t t e r n R e c o g n i t i o n.H o n o l u l u:I E E E C o m p u t e rS o c i e t y,2017:1125-1134.[8] WA N G TC,L I U M Y,Z HUJY,e t a l.H i g h-r e s o l u t i o n i m a g es y n t h e s i sa n ds e m a n t i cm a n i p u l a t i o nw i t hc o n d i t i o n a lg a n s[C]//P r o c e e d i n g so f t h eI E E E C o n f e r e n c eo nC o m p u t e rV i s i o na n dP a t t e r n R e c o g n i t i o n.S a l tL a k eC i t y:I E E EC o m p u t e r S o c i e t y,2018:8798-8807.[9] G O O D F E L L OWI A N,P O U G E T-A B A D I EJ,M I R Z A M,e t a l.G e n e r a t i v ea d v e r s a r i a ln e t s[C]//A d v a n c e s i n N e u r a lI n f o r m a t i o nP r o c e s s i n g S y s t e m s.M o n t r e a l:M o r g a nK a u f m a n n,2014:2672-2680.[10]K A R R A ST,L A I N ES,A I T T A L A M,e t a l.A n a l y z i n g a n d i m p r o v i n g t h e i m a g e q u a l i t y o f s t y l e g a n[C]//P r o c e e d i n g so f t h e I E E E/C V FC o n f e r e n c e o nC o m p u t e rV i s i o n a n dP a t t e r nR e c o g n i t i o n.S n o w m a s sV i l l a g e:I E E EC o 
m p u t e r S o c i e-t y,2020:8110-8119.[11]O L A FR O N N E B E R G E R,P H I L I P PF I S C H E R,T HOMA SB R O X.U-n e t:c o n v o l u t i o n a l n e t w o r k s f o r b i o m e d i c a l i m a g es e g m e n t a t i o n[C]//M e d i c a l I m a g eC o m p u t i n g a n dC o m p u t e r-A s s i s t e dI n t e r v e n t i o n-M I C C A I2015:18t hI n t e r n a t i o n a lC o n f e r e n c e.M u n i c h:S p r i n g e r I n t e r n a t i o n a l P u b l i s h i n g,2015:234-241.[12]Y A N G T,R E NP,X I EX,e t a l.G a n p r i o r e m b e d d e dn e t w o r k f o r b l i n d f a c e r e s t o r a t i o n i n t h ew i l d[C]//P r o c e e d i n g s o ft h eI E E E/C V F C o n f e r e n c eo n C o m p u t e r V i s i o na n d P a t t e r n R e c o g n i t i o n.V i r t u a l:I E E E C o m p u t e rS o c i e t y,2021: 672-681.[13]C H E NCF,L IX M,Y A N GLB,e t a l.P r o g r e s s i v e s e m a n t i c-a w a r e s t y l e t r a n s f o r m a t i o n f o r b l i n d f a c e r e s t o r a t i o n[C]//P r o c e e d i n g s o f t h e I E E E/C V FC o n f e r e n c e o nC o m p u t e rV i s i o n a n dP a t t e r nR e c o g n i t i o n.V i r t u a l:I E E EC o m p u t e r S o c i-e t y,2021:11896-11905.[14]Y A N GLB,L I U C,WA N GP,e t a l.H i f a c e g a n:f a c e r e n o v a t i o nv i a c o l l a b o r a t i v e s u p p r e s s i o na n dr e p l e n i s h m e n t[C]//P r o c e e d i n g s o f t h e28t hA C MI n t e r n a t i o n a l C o n f e r e n c e o n M u l t i m e d i a.S e a t t l e:A s s o c i a t i o n f o rC o m p u t i n g M a c h i n e r y, 2020:1551-1560.D I -S t y l e G A N v 2M o d e l f o rD i g i t a l I m a geR e s t o r a t i o n WA N G K u n ,Z H A N G Y u e*(S c h o o l o fM a t h e m a t i c s a n dF i n a n c e ,A n h u i P o l y t e c h n i cU n i v e r s i t y ,W u h u241000,C h i n a )A b s t r a c t :W h e n t h e r e a r e l a r g e a r e a s o fm i s s i n g o r c o m p l e x t e x t u r e s t r u c t u r e s i n d i g i t a l i m a g e s ,t r 
a d i t i o n -a l i m a g e e n h a n c e m e n t t e c h n i q u e s c a n n o t r e s t o r em o r e r e q u i r e d i n f o r m a t i o n f r o mt h e l i m i t e d i n f o r m a t i o n i n t h e i m a g e ,r e s u l t i n g i n l i m i t e d e n h a n c e m e n t p e r f o r m a n c e .T h e r e f o r e ,t h i s p a p e r p r o p o s e s a d e e p l e a r n -i n g i n p a i n t i n g m e t h o d ,D I -S t y l e G A N v 2,w h i c hc o m b i n e sd e c o d i n g i n f o r m a t i o n (D I )w i t hS t y l e G A N v 2.F i r s t l y ,t h eU -n e tm o d u l e g e n e r a t e s ah i d d e n e n c o d i n g s i g n a l c o n t a i n i n g t h em a i n i n f o r m a t i o no f t h e i m -a g e a n d a d e c o d i n g s i g n a l c o n t a i n i n g t h e s e c o n d a r y i n f o r m a t i o n .T h e n ,t h e S t y l e G A N v 2m o d u l e i s u s e d t o i n t r o d u c e a n i m a g e g e n e r a t i o n p r i o r .D u r i n g t h e i m a g e g e n e r a t i o n p r o c e s s ,n o t o n l y t h eh i d d e ne n c o d i n gm a i n i n f o r m a t i o n i s u s e d ,b u t a l s o t h e d e c o d i n g s i g n a l c o n t a i n i n g t h e s e c o n d a r y i n f o r m a t i o no f t h e i m a g e i s i n t e g r a t e d ,t h e r e b y a c h i e v i n g s e m a n t i c e n h a n c e m e n t o f t h e r e p a i r r e s u l t s .T h e e x pe r i m e n t a l r e s u l t s o n F F H Qa n dC e l e b Ad a t a b a s e s v a l i d a t e t h e ef f e c t i v e n e s s o f t h e p r o p o s e da p p r o c h .K e y w o r d s :i m ag e r e s t o r a t i o n ;d e c o d e s i g n a l ;g e n e r a t i v e a d v e r s a r i a l n e t w o r k ;U -n e t ;S t y l e G A N v 2(上接第83页)P r o g r e s s i v e I n t e r p o l a t i o nL o o p S u b d i v i s i o n M e t h o d w i t hD u a lA d ju s t a b l eF a c t o r s S H IM i n g z h u ,L I U H u a y o n g*(S c h o o l o fM a t h e m a t i c s a n dP h y s i c s ,A n h u i J i a n z h uU n i v e r s i t y ,H e f e i 230601,C h i n a )A b s t 
r a c t :A i m i n g a t t h e p r o b l e mt h a t t h e l i m i ts u r f a c e p r o d u c e db y t h ea p p r o x i m a t eL o o p s u b d i v i s i o n m e t h o d t e n d s t o s a g a n d s h r i n k ,a p r o g r e s s i v e i n t e r p o l a t i o nL o o p s u b d i v i s i o nm e t h o dw i t hd u a l a d j u s t a -b l e f a c t o r s i s p r o p o s e d .T h i sm e t h o d i n t r o d u c e s d i f f e r e n t a d j u s t a b l e f a c t o r s i n t h e t w o -p h a s eL o o p s u b d i -v i s i o nm e t h o d a n d t h e p r o g r e s s i v e i t e r a t i o n p r o c e s s ,s o t h a t t h e g e n e r a t e d l i m i t s u r f a c e i s i n t e r p o l a t e d t o a l l t h e v e r t i c e s o f t h e i n i t i a l c o n t r o lm e s h .M e a n w h i l e ,i t h a s a s t r i n g e n c y ,l o c a l i t y a n d g l o b a l i t y .I t c a nn o t o n l y f l e x i b l y c o n t r o l t h e s h a p e o f l i m i t s u r f a c e ,b u t a l s o e x p a n d t h e c o n t r o l l a b l e r a n g e o f s h a p e t o a c e r -t a i ne x t e n t .F r o mt h e n u m e r i c a l e x p e r i m e n t s ,i t c a nb e s e e n t h a t t h em e t h o d c a n r e t a i n t h e c h a r a c t e r i s t i c s o f t h e i n i t i a l t r i a n g u l a rm e s hb e t t e rb y c h a n g i n g t h ev a l u eo f d u a l a d j u s t a b l e f a c t o r s ,a n d t h e g e n e r a t e d l i m i t s u r f a c eh a s a s m a l l d e g r e e o f s h r i n k a g e ,w h i c h p r o v e s t h em e t h o d t ob e f e a s i b l e a n d e f f e c t i v e .K e y w o r d s :L o o p s u b d i v i s i o n ;p r o g r e s s i v e i n t e r p o l a t i o n ;d u a l a d j u s t a b l e f a c t o r s ;a s t r i n g e n c y ㊃19㊃第6期王 坤,等:数字图像修复的D I -S t y l e G A N v 2模型。
Low-Light Enhancement Algorithm Metrics

Low-Light Enhancement Algorithm Metrics: A Complete Walkthrough from Theory to Application

Introduction: Images captured in low light often suffer from uneven brightness, blurred detail, and severe noise, which challenge image quality and clarity.
To address this problem, researchers have proposed many low-light enhancement algorithms.
These algorithms apply image-processing techniques in different ways, changing brightness and contrast or repairing noise, to improve the quality of low-light images.
Before choosing a suitable low-light enhancement algorithm, we need to understand some key metrics so we can better evaluate an algorithm's performance and effect.
This article describes the metrics of low-light enhancement algorithms in detail, answering the relevant questions step by step from theory to application.
I. Image quality evaluation metrics

1. Signal-to-Noise Ratio (SNR). SNR is an important measure of image quality that describes the ratio between signal and noise. The higher the SNR, the better the image quality. In low-light enhancement, we want to strengthen the signal while reducing noise, thereby raising the SNR.

2. Peak Signal-to-Noise Ratio (PSNR). PSNR is a specific form of SNR, commonly used to measure the effect of image enhancement algorithms. A higher PSNR value indicates better image quality.

3. Structural Similarity Index (SSIM). SSIM evaluates an image enhancement algorithm by comparing the structural similarity of the original and enhanced images. SSIM ranges from 0 to 1; the closer the value is to 1, the better the image quality.
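As a concrete illustration of the first metric, SNR can be computed as the ratio of total signal power to total noise power, expressed in decibels. A minimal NumPy sketch (the function name and conventions are ours, not from any particular library):

```python
import numpy as np

def snr_db(clean, noisy):
    # SNR in dB: total signal power over total noise power, where the
    # noise is estimated as the difference from the clean reference
    clean = clean.astype(np.float64)
    noise = noisy.astype(np.float64) - clean
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))
```

For example, a constant image of value 10 corrupted by a unit-amplitude error has 100 times as much signal power as noise power, i.e., 20 dB.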
II. Common metrics for low-light enhancement algorithms

1. Brightness enhancement. Brightness is the lightness of the pixels in an image. The first task of a low-light enhancement algorithm is to increase the image's brightness so that it becomes clearer and brighter. Common metrics include brightness gain and brightness-contrast gain.

2. Detail preservation. In low-light conditions, image detail is often suppressed or blurred. A good low-light enhancement algorithm should accurately preserve the detail in the image with low detail loss. Common metrics include detail-enhancement capability and detail sharpness.

3. Noise suppression. Images captured in low light are usually accompanied by heavy noise. A low-light enhancement algorithm must suppress this noise effectively while improving image clarity. Common noise-suppression metrics include noise density and mean squared error.
Visual Saliency Detection Based on a Multi-Level Global Information Propagation Model

Journal of Computer Applications, 2021, 41(1): 208-214. ISSN 1001-9081, CODEN JYIIDU. 2021-01-10.
WEN Jing*, SONG Jianwei (School of Computer and Information Technology, Shanxi University, Taiyuan 030006, China) (*Corresponding author, e-mail: wjing@)

Abstract: Hierarchical processing of convolutional features in neural networks markedly improves salient object detection. However, when integrating hierarchical features, how to obtain rich global information, and how to effectively fuse the global information of the higher-level feature space with low-level detail information, remain open problems. This paper therefore proposes a saliency detection algorithm based on a multi-level global information propagation model. To extract rich multi-scale global information, a Multi-scale Global Feature Aggregation Module (MGFAM) is introduced at the higher levels, and the global information extracted from multiple levels is fused. In addition, to obtain both the global information of the high-level feature space and rich low-level detail information, the extracted discriminative high-level global semantic information is fused with lower-level features by means of feature propagation. These operations extract high-level global semantic information to the greatest extent while avoiding the losses incurred when that information is propagated step by step to lower levels. Experiments on four datasets (ECSSD, PASCAL-S, SOD, HKU-IS) show that, compared with the advanced NLDF (Non-Local Deep Features) model, the proposed algorithm raises the F-measure (F) by 0.028, 0.05, 0.035, and 0.013 respectively, and lowers the Mean Absolute Error (MAE) by 0.023, 0.03, 0.023, and 0.007 respectively. The proposed algorithm also outperforms several classical image saliency detection methods in terms of precision, recall, F-measure, and MAE.

Key words: saliency detection; global information; neural network; information propagation; multi-scale pooling

Introduction. Visual saliency originates from visual attention models in cognitive science and aims to mimic the human visual system's automatic detection of the most distinctive, eye-catching target regions in an image.
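The F-measure and MAE used in the comparison above can be sketched as follows. This is a common formulation in the saliency literature, with β² = 0.3 and an adaptive threshold of twice the mean saliency; the paper's exact evaluation protocol may differ.

```python
import numpy as np

def f_measure(pred, gt, beta2=0.3):
    # binarize the saliency map with the common adaptive threshold
    # (twice the mean), then combine precision and recall
    t = pred >= 2 * pred.mean()
    tp = np.logical_and(t, gt).sum()
    prec = tp / max(t.sum(), 1)
    rec = tp / max(gt.sum(), 1)
    return (1 + beta2) * prec * rec / max(beta2 * prec + rec, 1e-12)

def mae(pred, gt):
    # mean absolute error between the saliency map and the binary mask
    return np.abs(pred - gt.astype(np.float64)).mean()
```

A perfect prediction gives F = 1 and MAE = 0; real saliency maps fall in between.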
Ch3. Spread Spectrum and Multiple Access Techniques

Chapter 3 Spread Spectrum and Multiple Access Techniques - Spread Spectrum
Direct sequence SS
Signal flow
(Reference: ZHANG Xianda, Communication Signal Processing, Chapter 6)
Direct sequence SS - PN sequence
A possible PN sequence generator is shown in the figure:
Wireless Communications Theory and Techniques
Chapter 3 Spread Spectrum and Multiple Access Techniques

Outline:
- Spread spectrum theory and multiple access techniques
- Signal processing in CDMA systems
- Multicarrier and OFDM
b_n can be written as:

b_n = C_1 b_{n−1} ⊕ C_2 b_{n−2} ⊕ … ⊕ C_r b_{n−r}

where C_1, …, C_r are the switch-setting coefficients of the generator and r is the shift-register length.
As r increases, the period 2^r − 1 grows, the off-peak autocorrelation (on the order of 1/(2^r − 1)) becomes negligible, and the sequence becomes more and more random.
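The recurrence above describes a Fibonacci LFSR over GF(2). A small Python sketch (tap convention: `taps[i-1]` is the coefficient C_i multiplying b_{n−i}); with r = 3 and C_1 = C_3 = 1, it produces an m-sequence of period 2³ − 1 = 7 containing 2^{r−1} = 4 ones:

```python
def lfsr_pn(taps, state, n):
    """Generate n chips of a PN sequence from a Fibonacci LFSR.

    taps  -- [C1, ..., Cr], the switch-setting coefficients
    state -- initial register contents [b_{n-1}, ..., b_{n-r}], not all zero
    """
    out = []
    for _ in range(n):
        out.append(state[-1])              # output the last register stage
        fb = 0
        for c, b in zip(taps, state):      # b_n = C1 b_{n-1} ^ ... ^ Cr b_{n-r}
            fb ^= c & b
        state = [fb] + state[:-1]          # shift the register
    return out
```

With taps `[1, 0, 1]` (polynomial x³ + x + 1) and any nonzero seed, the same 7-chip pattern repeats indefinitely.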
Design of an Image Reconstruction System Using a Wavelet-Domain Block Compressed Sensing Algorithm Based on Internet of Things Technology
Author: ZHAO Bo. Source: Computing Technology and Automation, 2021, No. 1. (Shaanxi Xueqian Normal University, Xi'an, Shaanxi 710100, China)

Abstract: To address the low resolution and long reconstruction time of low-illumination reconstructed images, an image reconstruction system based on a wavelet-domain block compressed sensing algorithm is proposed. A low-illumination image sampling model is established, and the image's depth-of-field adaptive adjustment method is used for wavelet-domain block compressed sensing and information fusion. A multi-scale Retinex algorithm then performs wavelet-domain block compressed sensing and information extraction, extracting the image's information-entropy features. Adaptive image enhancement is applied to enhance the low-illumination image; Internet of Things technology is used to reconstruct the three-dimensional information of the low-illumination image; and detail enhancement is combined for further low-illumination enhancement, completing the design of the reconstruction system and realizing contour detection and feature reconstruction of the transmission map. Simulation results show that low-illumination image reconstruction with this method achieves higher resolution, better edge perception, shorter reconstruction time, and higher practical application efficiency.

Key words: Internet of Things technology; wavelet-domain blocking; compressed sensing; image reconstruction

To improve the resolution of low-illumination images, they must be optimally reconstructed so as to raise low-illumination imaging capability [1].
Low-Light Enhancement Algorithm Metrics

A low-light enhancement algorithm is an image-processing technique for improving the quality of images captured in low-illumination environments.
It increases the brightness and contrast of an image to improve the visibility of the picture.
In this article, we answer questions about low-light enhancement algorithm metrics step by step, covering their principles, common techniques, and evaluation methods.
I. How low-light enhancement algorithms work

The goal of a low-light enhancement algorithm is to improve image visibility in low-illumination environments. It uses a variety of techniques to increase the image's brightness and contrast, making the image clearer and easier to inspect. These techniques typically involve gray-level stretching, histogram equalization, multi-scale transforms, and image-enhancement filtering.

Gray-level stretching is a common low-light enhancement technique that enhances light-dark detail by expanding the image's gray range. Histogram equalization is a statistical method that increases image contrast according to the distribution characteristics of the image histogram. Multi-scale transforms extract and enhance image detail by decomposing the image into scales of different proportions. Image-enhancement filtering adjusts brightness and contrast by applying filters to achieve the enhancement effect.
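Gray-level stretching and histogram equalization can be sketched in a few lines of NumPy (illustrative helper names; 8-bit grayscale and a non-constant image are assumed):

```python
import numpy as np

def gray_stretch(img, lo=0, hi=255):
    # linearly map [img.min(), img.max()] onto [lo, hi];
    # assumes the image is not constant
    img = img.astype(np.float64)
    return (img - img.min()) / (img.max() - img.min()) * (hi - lo) + lo

def hist_equalize(img):
    # map each gray level through the normalized cumulative histogram,
    # flattening the gray-level distribution (8-bit input assumed)
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum() / img.size
    return np.round(cdf[img] * 255).astype(np.uint8)
```

A dark image occupying, say, levels 10-30 is spread across the full 0-255 range by `gray_stretch`, while `hist_equalize` redistributes the levels according to how often they occur.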
II. Common low-light enhancement techniques

1. Bilateral filtering. Bilateral filtering is a widely used low-light enhancement technique that smooths the image to reduce the influence of noise while preserving edge information, so it can enhance the image without losing its detail.

2. Sharpness enhancement. Sharpness enhancement improves image quality by increasing the image's sharpness. It strengthens the detail and edges of the image, making it clearer and more vivid.

3. Deep learning. In recent years, deep learning has achieved remarkable results in low-light enhancement. Trained on large amounts of image data, deep learning models learn image features and improve image quality by enhancing detail and contrast.
III. Evaluating low-light enhancement algorithms

Evaluating the effect of a low-light enhancement algorithm is very important, because it helps us understand the algorithm's strengths and weaknesses. Some common evaluation methods:

1. Subjective evaluation: evaluation based on human subjective perception. Experts or ordinary users are asked to inspect and rate the enhanced images to determine how well the enhancement algorithm performs; this is usually conducted via questionnaires or laboratory experiments.

2. Objective evaluation: evaluation using metrics and measurement methods. For example, image-quality indices such as Peak Signal-to-Noise Ratio (PSNR), the Structural Similarity Index (SSIM), and perceptual quality indices (PQI) can be used to assess the enhancement algorithm's effect.
Low-Light Enhancement Algorithm Metrics

Low-light enhancement algorithm metrics are standards for measuring how an algorithm performs in low-illumination conditions.
In cameras, surveillance devices, and similar equipment, dim lighting degrades image quality: detail is unclear and the picture is heavily noisy.
Low-light enhancement algorithms are used to address these problems.
This article examines low-light enhancement algorithm metrics in depth, answering the relevant questions step by step to help readers better understand the topic.
Part 1: What are low-light enhancement algorithm metrics?

First, we need to clarify what low-light enhancement algorithm metrics are. They form an evaluation system for judging the performance of low-light enhancement algorithms: by measuring the quality of the processed image, how well detail is retained, how noisy the picture is, and similar indicators, they determine the algorithm's actual effect in low-light conditions.
1. 图像灰度/对比度:低光条件下,图像往往灰暗,缺乏明暗变化,而增强算法应该能够提高图像的亮度,增加对比度,以使细节更清晰可见。
2. 细节保留度:在低光条件下,细节往往被模糊或丢失。
好的低光增强算法应该能够保留图像的细节,使其更加清晰可辨。
3. 噪点程度:低光条件下,图像的噪点会变得更加明显,而低光增强算法应该能够减少噪点的数量,使图像更加清晰。
Part 3: How low-light enhancement algorithm metrics are assessed

Some common methods for assessing these metrics:

1. Subjective evaluation: judging the algorithm's effect by direct visual inspection. This depends on an individual's subjective observation and judgment, so it is not very objective or precise, but it provides feedback from real viewing scenarios.

2. Objective evaluation: measuring the algorithm's effect with objective indices, e.g., image enhancement degree or structural-similarity metrics. This is comparatively objective, yet still cannot fully represent what the human eye perceives.

3. Practical application testing: applying the algorithm in real scenarios and checking how much it improves image quality, detail retention, and picture noise. This is the most direct form of assessment, but obtaining accurate results may require considerable time and cost.
Part 4: Research progress and challenges

Finally, we turn to the research progress on low-light enhancement algorithm metrics and the challenges currently faced.
A Low-Complexity Detection Algorithm for Generalized Spatial Modulation Systems
LI Xiaowen, FENG Yongshuai, ZHANG Dingquan

Abstract: To overcome the extremely high computational complexity of maximum likelihood (ML) detection at the receiver of a generalized spatial modulation (GSM) system, a low-complexity signal detection algorithm based on compressed sensing (CS) signal reconstruction theory is investigated. Firstly, under a multiple-input multiple-output (MIMO) channel model, an improved orthogonal matching pursuit (OMP) algorithm produces a candidate set of active-antenna indices. The ML detection algorithm is then used to perform an exhaustive search within the candidate set to obtain the active-antenna indices and the constellation modulation symbols. Simulation results show that the detection performance of the proposed algorithm is similar to that of ML detection while its computational complexity is about 2% of ML's. The proposed algorithm therefore reduces computational complexity while ensuring detection performance, achieving a favorable performance-complexity trade-off.
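The core reconstruction step the abstract relies on, orthogonal matching pursuit, can be sketched as follows. This is generic noiseless OMP; the paper's improved variant for building the antenna-index candidate set is not reproduced here.

```python
import numpy as np

def omp(A, y, k):
    # Orthogonal Matching Pursuit: greedily add the column of A most
    # correlated with the current residual, then re-fit by least squares
    residual, support = y.astype(np.float64).copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x, sorted(support)
```

For a well-conditioned sensing matrix and a truly k-sparse signal, the support (here, the set of active-antenna indices) is recovered exactly, after which an ML search need only consider those candidates.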
A High-Capacity Watermarking Algorithm for Halftone Images

XIE Jianquan; YANG Chunhua; HUANG Dazu; XIE Qing

【Journal】 Journal of Computer Applications
【Year (Volume), Issue】 2008, 28(5)
【Abstract】 Hiding information in halftone images is an effective measure for protecting the copyright of printed matter, authenticating its source, and preventing forgery. This paper proposes a hiding algorithm that can embed a large amount of data in a halftone image: using error diffusion, the information to be hidden is embedded as an error signal spread over several pixels adjacent to the current position, so that information embedding is completed simultaneously with halftoning. Experiments show that the algorithm can embed a binary watermark image of the same size as the halftone image itself, and the watermark can still be extracted effectively after the watermarked image suffers a certain amount of noise, smudging, and rotation. The algorithm is applicable to the anti-counterfeiting of ordinary certificates.
【Pages】 3 (1211-1213)
【Authors】 XIE Jianquan; YANG Chunhua; HUANG Dazu; XIE Qing
【Affiliations】 School of Information Science and Engineering, Central South University, Changsha 410083; Department of Information Management, Hunan Finance College, Changsha 410205
【Language】 Chinese
【CLC】 TP309.2
【Related literature】
1. A high-capacity digital image watermarking algorithm based on the discrete wavelet transform [J], XU Fuzhi
2. A halftone image watermarking algorithm based on pixel swapping [J], AN Yongcong, ZHAO Xiaomei, LI Shenliang, WANG Ying, CHEN Xuejing
3. An adaptive high-capacity DWT-based audio watermarking algorithm [J], HOU Jian, FU Yongsheng, GUO Kai
4. A high-capacity robust digital watermarking algorithm for Chinese text [J], DONG Xiangzhi, LIU An, SU Qingtang, CHEN Weibo, ZOU Hailin
5. Research on a high-capacity text watermarking algorithm [J], JIN Zhenyi, LI De
For copyright reasons, only an outline of the original is shown; to view the full text, please purchase the original.
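The error-diffusion process that the watermarking algorithm piggybacks on can be illustrated with classic Floyd-Steinberg halftoning (a plain halftoner only; the paper's watermark-embedding step is not shown):

```python
import numpy as np

def floyd_steinberg(img):
    # error-diffusion halftoning: quantize each pixel to 0/255 and push
    # the quantization error onto the not-yet-processed neighbours
    f = img.astype(np.float64).copy()
    h, w = f.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            old = f[y, x]
            new = 255.0 if old >= 128 else 0.0
            out[y, x] = int(new)
            err = old - new
            if x + 1 < w:
                f[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    f[y + 1, x - 1] += err * 3 / 16
                f[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    f[y + 1, x + 1] += err * 1 / 16
    return out
```

Because the error is carried forward rather than discarded, the binary output preserves the local average gray level, which is exactly the property that lets a watermark be folded into the error signal.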
Resolution of the Louvain Algorithm

The resolution of the Louvain algorithm refers to the smallest community size the algorithm can detect. Louvain is a community detection algorithm based on modularity optimization: it discovers the community structure of a network by optimizing modularity. However, because the definition of modularity depends on a global property of the network (the total weight of its edges), when a network contains several small but structurally significant communities, those communities may be swallowed by larger ones and fail to be identified. This phenomenon is known as the "resolution limit". The resolution limit of the Louvain algorithm can be mitigated by adjusting a "resolution parameter", which in implementations typically controls the granularity of the detected communities.
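The role of the resolution parameter is easiest to see in the generalized (Reichardt-Bornholdt) modularity that Louvain implementations optimize, Q(γ) = (1/2m) Σ_ij [A_ij − γ k_i k_j / 2m] δ(c_i, c_j): γ > 1 favors many small communities, γ < 1 favors few large ones. A NumPy sketch:

```python
import numpy as np

def modularity(A, labels, gamma=1.0):
    # resolution-parameterized modularity for an undirected adjacency
    # matrix A: observed within-community edges minus gamma times the
    # configuration-model expectation
    k = A.sum(axis=1)
    two_m = A.sum()
    same = labels[:, None] == labels[None, :]
    return ((A - gamma * np.outer(k, k) / two_m) * same).sum() / two_m
```

For two triangles joined by a single edge, the split partition wins at γ = 1, but at a low resolution such as γ = 0.1 the merged partition scores higher: exactly the resolution-limit behavior described above.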
Design of an Enhancement Algorithm for Low-Quality Fingerprint Evidence Images

LU Chaochao

【Journal】 Modern Computer
【Year (Volume), Issue】 2024, 30(7)
【Abstract】 Rotation, translation, and incomplete capture during fingerprint image acquisition mean that the number of minutiae extracted varies from capture to capture, and matching such features later is computationally expensive. Facing the high matching complexity of low-quality fingerprint evidence images, current template-matching-based enhancement algorithms filter the original image with a fixed threshold and do not consider the loss of orientation-field features in single-pixel thinning. This paper proposes a Kalman-filter-based enhancement method for low-quality fingerprint evidence images. According to the Gaussian noise distribution function of the image, VisuShrink threshold denoising is selected and a Kalman filter is constructed to preprocess the fingerprint image. The original image is converted into a single-pixel thinned map, and the singular points and feature points of the filtered fingerprint image are extracted. The fingerprint image is then partitioned, the orientation field of the fingerprint is determined from the singular points and direction vectors, and enhancement filtering is applied, completing the design of the Kalman-filter-based enhancement method for low-quality fingerprint evidence images. An experimental stage is constructed to validate the method's effect: after applying it, low-quality fingerprint images become clearer, the average error rate is at most 6.21%, and the mean evaluation score reaches 0.85, showing that the method effectively improves the enhancement effect.
【Pages】 7 (17-23)
【Author】 LU Chaochao
【Affiliation】 Department of Investigation, Shanxi Police College
【Language】 Chinese
【CLC】 TP3
【相关文献】
1.一种基于指纹线方向的指纹图像增强算法
2.Matlab GUI在低质量指纹图像增强中的应用
3.低质量指纹图像的增强算法研究
4.面向低质量指纹图像的增强算法及应用
5.一种织物背景上的指纹图像增强方法
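The Kalman filtering step at the heart of the preprocessing can be illustrated with a scalar random-walk filter (a one-dimensional toy, not the paper's image-domain filter; q and r are assumed process- and measurement-noise variances):

```python
import numpy as np

def kalman_1d(z, q=1e-4, r=0.5):
    # scalar Kalman filter for the model x_k = x_{k-1} + w (var q),
    # z_k = x_k + v (var r); returns the filtered state estimates
    x, p = float(z[0]), 1.0
    out = [x]
    for zk in z[1:]:
        p = p + q                  # predict: state uncertainty grows
        kg = p / (p + r)           # Kalman gain
        x = x + kg * (zk - x)      # update with the new measurement
        p = (1.0 - kg) * p
        out.append(x)
    return np.array(out)
```

Run over a noisy constant signal, the filtered trace settles near the true level with far less variance than the raw measurements, which is why such a filter serves as a denoising preprocessor.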