Wavelets, well ties, and the search for subtle stratigraphic traps
Wavelets
• Fast algorithms (FFT, DFT, DCT, 2D-DFT, …)
• Infinite resolution in the frequency domain
• Zero resolution in the time domain
2.2 Application: Signal Approximation (Lossy Compression)
• Consider a discrete signal
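As a concrete sketch of the lossy-compression idea (a hypothetical illustration, not from the original slides; function names are my own): approximate a discrete signal by keeping only its K largest-magnitude transform coefficients and reconstructing from those. Here the DFT stands in for the transform.

```python
import numpy as np

# Hypothetical sketch: lossy compression of a discrete signal by keeping
# only the `keep` largest-magnitude DFT coefficients and zeroing the rest.
def compress(signal, keep):
    coeffs = np.fft.fft(signal)
    order = np.argsort(np.abs(coeffs))[::-1]   # indices by descending magnitude
    kept = np.zeros_like(coeffs)
    kept[order[:keep]] = coeffs[order[:keep]]
    return np.fft.ifft(kept).real

t = np.linspace(0, 1, 256, endpoint=False)
x = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)
x_hat = compress(x, keep=4)        # two sinusoids occupy 4 DFT bins
err = np.max(np.abs(x - x_hat))    # tiny here, since the signal is sparse in frequency
```

For signals that are sparse in the chosen basis (as this two-tone example is in the Fourier basis), very few coefficients reconstruct the signal almost exactly; wavelet bases play the same role for transient, localized signals.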
• Youth is what they have in common
• Innovation is their strength
• Steven Chu has said that innovation demands the courage to imagine. Chinese schools place too much emphasis on book knowledge and written examinations, while doing too little to encourage students' innovative spirit.

The noted creative-thinking scientist Edward de Bono declares the IT revolution over

• Speaking in Sydney on the 17th, he said the end has come for the computer professionals who launched the information-technology (IT) revolution; they will be replaced by a new, creative class of people who can bring greater value to enterprises.
• He explained: "Once something has taken shape and become commonplace, it no longer carries value."
• Creativity and the intrinsic value of ideas, products, and services, he said, will matter more in business than facts and information. We have now entered the age of value, and the popular profession of the future will be the "value designer." These value designers will be employed by far-sighted companies.
∫ ψ(t) dt = 0        (1)
Wavelets that are useful in DSP and other applications often possess additional features:
• They generate (via dilation and translation) orthonormal or biorthogonal basis systems in L²
• They are local in both the time domain and the frequency domain
• They admit fast decomposition and reconstruction algorithms
• They have vanishing moments
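A minimal sketch of two of these features, assuming nothing beyond NumPy (function names are my own, not a library API): the orthonormal Haar pair gives perfect reconstruction, and its single vanishing moment makes the detail coefficients of a constant signal exactly zero.

```python
import numpy as np

# One level of the orthonormal Haar wavelet transform (a teaching sketch,
# not a production implementation).
def haar_dwt(x):
    x = np.asarray(x, dtype=float)
    s = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation (scaling) coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail (wavelet) coefficients
    return s, d

def haar_idwt(s, d):
    x = np.empty(2 * len(s))
    x[0::2] = (s + d) / np.sqrt(2)
    x[1::2] = (s - d) / np.sqrt(2)
    return x

s, d = haar_dwt(np.ones(8))
# d == [0, 0, 0, 0]: the Haar wavelet's vanishing moment annihilates constants
```

Because the transform is orthonormal, `haar_idwt(haar_dwt(x))` recovers any even-length signal exactly; that is the "fast decomposition and reconstruction" property in its simplest form.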
Translated Foreign-Language Article
Eliminate Signal Noise With Discrete Wavelet Transformation

The wavelet transform is a mathematical tool that's becoming quite useful for analyzing many types of signals. It has been proven especially useful in data compression, as well as in adaptive equalizer and transmultiplexer applications.

A wavelet is a small, localized wave of a particular shape and finite duration. Several families, or collections of similar types of wavelets, are in use today. A few go by the names of Haar, Daubechies, and Biorthogonal. Wavelets within each of these families share common properties. For instance, the Biorthogonal wavelet family exhibits linear phase, which is an important characteristic for signal and image reconstruction.
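A toy sketch of the denoising idea the article's title refers to (assumptions: a single-level Haar transform and soft thresholding; practical denoisers use deeper decompositions and wavelets such as the Daubechies or biorthogonal families named above):

```python
import numpy as np

# Illustrative wavelet denoising: split the signal into smooth and detail
# halves, soft-threshold the details (where most of the noise lives), and
# reconstruct.  Threshold value is an assumption for this toy example.
def denoise(x, threshold):
    s = (x[0::2] + x[1::2]) / np.sqrt(2)              # smooth part
    d = (x[0::2] - x[1::2]) / np.sqrt(2)              # detail part
    d = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)  # soft threshold
    out = np.empty_like(x)
    out[0::2] = (s + d) / np.sqrt(2)
    out[1::2] = (s - d) / np.sqrt(2)
    return out

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 512, endpoint=False)
clean = np.sin(2 * np.pi * 2 * t)
noisy = clean + 0.1 * rng.standard_normal(512)
smoothed = denoise(noisy, threshold=0.2)
# `smoothed` should sit closer to `clean` than `noisy` does
```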
Research on Visual Inspection Algorithms for Defects in Textured Objects (graduate thesis)

Abstract

In today's highly competitive industrial automation, machine vision plays a pivotal role in product quality control, and its application to defect inspection is becoming increasingly common. Compared with conventional inspection techniques, automated visual inspection systems are more economical, faster, more efficient, and safer. Textured objects are ubiquitous in industrial production: substrates for semiconductor assembly and packaging, light-emitting diodes, printed circuit boards in modern electronic systems, and cloth and fabric in the textile industry can all be regarded as objects bearing texture features. This thesis focuses on defect-inspection techniques for textured objects, aiming to provide efficient and reliable algorithms for their automated inspection.

Texture is an important feature for describing image content, and texture analysis has been successfully applied to texture segmentation and texture classification. This study proposes a defect-inspection algorithm based on texture analysis and reference comparison. The algorithm tolerates image-registration errors caused by object distortion and is robust to the influence of texture. It aims to provide rich and physically meaningful descriptions of detected defect regions, such as their size, shape, brightness contrast, and spatial distribution. When a reference image is available, the algorithm can inspect both homogeneously and non-homogeneously textured objects, and it also performs well on non-textured objects.

Throughout the inspection process we employ steerable-pyramid texture analysis and reconstruction. Unlike traditional wavelet texture analysis, we introduce tolerance-control algorithms in the wavelet domain to handle object distortion and texture influence, achieving tolerance to distortion and robustness to texture. Finally, steerable-pyramid reconstruction guarantees accurate recovery of the physical meaning of the defect regions. In the experimental stage we tested a series of images of practical application value; the results show that the proposed defect-inspection algorithm for textured objects is efficient and easy to implement.
Keywords: defect detection, texture, object distortion, steerable pyramid, reconstruction
Investigating the properties of light waves
Light is a form of electromagnetic radiation that is visible to the human eye. It has been studied extensively by scientists for centuries, but many of its properties are still not fully understood. In this article, we will explore some of the key properties of light waves and how they are investigated.

Wavelength and Frequency

One of the most fundamental properties of light waves is their wavelength, which refers to the distance between peaks in the wave. The wavelength of light determines its color, with longer wavelengths appearing as red and shorter wavelengths appearing as blue. Another important property of light waves is their frequency, which is the number of peaks that pass a given point in a given amount of time. Frequency is measured in hertz (Hz), with one hertz equal to one wave crest per second. The frequency of light is directly related to its energy, with higher frequencies corresponding to higher energies.

Measurement Techniques

In order to study the properties of light waves, scientists often use a variety of measurement techniques. One commonly used method is spectroscopy, which involves analyzing the wavelengths of light emitted or absorbed by a particular substance. This can provide valuable information about the chemical makeup of the substance and how it interacts with light. Another technique is interferometry, which involves combining multiple sources of light waves to create interference patterns. This can be used to measure very small changes in distance or to create highly precise measurements of wavelength or frequency.

Applications

The properties of light waves have a wide range of applications in science and technology. For example, the colors of light are crucial for understanding the behavior of chemical compounds, as well as for designing and testing new photonic materials and devices. Electromagnetic waves are also used extensively in communications technology, with radio waves and microwaves carrying things like Wi-Fi signals and cell phone transmissions. In addition, scientists are actively exploring the potential of infrared and terahertz radiation for a wide range of applications, from imaging and sensing to biomedical research and cancer treatment.

Conclusion

In conclusion, the properties of light waves are incredibly complex and diverse, with a wide range of applications in science and technology. Studying these properties requires sophisticated measurement techniques and a deep understanding of the underlying physics, but the potential benefits are vast. Whether we are exploring the outer reaches of the cosmos or designing new forms of communication technology, light waves will continue to play a crucial role in shaping our world.
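The wavelength-frequency-energy relations described above can be checked in a few lines, using c = λν and E = hν (constants rounded; a back-of-the-envelope sketch, with 500 nm chosen as a representative green wavelength):

```python
# Relations between wavelength, frequency, and photon energy for light.
c = 2.998e8        # speed of light, m/s
h = 6.626e-34      # Planck constant, J*s

wavelength = 500e-9                 # green light, 500 nm
frequency = c / wavelength          # about 6e14 Hz
energy = h * frequency              # about 4e-19 J per photon
```

Doubling the frequency (halving the wavelength) doubles the photon energy, which is the "higher frequencies correspond to higher energies" statement made quantitative.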
J-STD-035: Acoustic Scanning
JOINT INDUSTRY STANDARD
Acoustic Microscopy for Non-Hermetic Encapsulated Electronic Components
IPC/JEDEC J-STD-035
APRIL 1999
Supersedes IPC-SM-786
Supersedes IPC-TM-650, 2.6.22

Notice: EIA/JEDEC and IPC Standards and Publications are designed to serve the public interest through eliminating misunderstandings between manufacturers and purchasers, facilitating interchangeability and improvement of products, and assisting the purchaser in selecting and obtaining with minimum delay the proper product for his particular need. Existence of such Standards and Publications shall not in any respect preclude any member or nonmember of EIA/JEDEC or IPC from manufacturing or selling products not conforming to such Standards and Publications, nor shall the existence of such Standards and Publications preclude their voluntary use by those other than EIA/JEDEC and IPC members, whether the standard is to be used either domestically or internationally.

Recommended Standards and Publications are adopted by EIA/JEDEC and IPC without regard to whether their adoption may involve patents on articles, materials, or processes. By such action, EIA/JEDEC and IPC do not assume any liability to any patent owner, nor do they assume any obligation whatever to parties adopting the Recommended Standard or Publication. Users are also wholly responsible for protecting themselves against all claims of liabilities for patent infringement.

The material in this joint standard was developed by the EIA/JEDEC JC-14.1 Committee on Reliability Test Methods for Packaged Devices and the IPC Plastic Chip Carrier Cracking Task Group (B-10a). The J-STD-035 supersedes IPC-TM-650, Test Method 2.6.22.

For Technical Information Contact: Electronic Industries Alliance / JEDEC (Joint Electron Device Engineering Council), 2500 Wilson Boulevard, Arlington, VA 22201, Phone (703) 907-7560, Fax (703) 907-7501; IPC, 2215 Sanders Road, Northbrook, IL 60062-6135, Phone (847) 509-9700, Fax (847) 509-9798.

Please use the Standard Improvement Form shown at the end of this document.

© Copyright 1999. The Electronic Industries Alliance, Arlington, Virginia, and IPC, Northbrook, Illinois. All rights reserved under both international and Pan-American copyright conventions. Any copying, scanning or other reproduction of these materials without the prior written consent of the copyright holder is strictly prohibited and constitutes infringement under the Copyright Law of the United States.

A joint standard developed by the EIA/JEDEC JC-14.1 Committee on Reliability Test Methods for Packaged Devices and the B-10a Plastic Chip Carrier Cracking Task Group of IPC. Users of this standard are encouraged to participate in the development of future revisions. Contact: EIA/JEDEC Engineering Department, 2500 Wilson Boulevard, Arlington, VA 22201, Phone (703) 907-7500, Fax (703) 907-7501; IPC, 2215 Sanders Road, Northbrook, IL 60062-6135, Phone (847) 509-9700, Fax (847) 509-9798.

ASSOCIATION CONNECTING ELECTRONICS INDUSTRIES

Acknowledgment

Members of the Joint IPC-EIA/JEDEC Moisture Classification Task Group have worked to develop this document. We would like to thank them for their dedication to this effort. Any standard involving a complex technology draws material from a vast number of sources. While the principal members of the Joint Moisture Classification Working Group are shown below, it is not possible to include all of those who assisted in the evolution of this standard. To each of them, the members of the EIA/JEDEC and IPC extend their gratitude.

IPC Packaged Electronic Components Committee Chairman: Martin Freedman, AMP, Inc.
IPC Plastic Chip Carrier Cracking Task Group, B-10a Chairman: Steven Martell, Sonoscan, Inc.
EIA/JEDEC JC-14.1 Committee Chairman: Jack McCullen, Intel Corp.
EIA/JEDEC JC-14 Chairman: Nick Lycoudes, Motorola

Joint Working Group Members: Charlie Baker, TI; Christopher Brigham, Hi/Fn; Ralph Carbone, Hewlett Packard Co.; Don Denton, TI; Matt Dotty, Amkor; Michele J. DiFranza, The Mitre Corp.; Leo Feinstein, Allegro Microsystems Inc.; Barry Fernelius, Hewlett Packard Co.; Chris Fortunko, National Institute of Standards; Robert J. Gregory, CAE Electronics, Inc.; Curtis Grosskopf, IBM Corp.; Bill Guthrie, IBM Corp.; Phil Johnson, Philips Semiconductors; Nick Lycoudes, Motorola; Steven R. Martell, Sonoscan Inc.; Jack McCullen, Intel Corp.; Tom Moore, TI; David Nicol, Lucent Technologies Inc.; Pramod Patel, Advanced Micro Devices Inc.; Ramon R. Reglos, Xilinx; Corazon Reglos, Adaptec; Gerald Servais, Delphi Delco Electronics Systems; Richard Shook, Lucent Technologies Inc.; E. Lon Smith, Lucent Technologies Inc.; Randy Walberg, National Semiconductor Corp.; Charlie Wu, Adaptec; Edward Masami Aoki, Hewlett Packard Laboratories; Fonda B. Wu, Raytheon Systems Co.; Richard W. Boerdner, EJE Research; Victor J. Brzozowski, Northrop Grumman ES&SD; Macushla Chen, Wus Printed Circuit Co. Ltd.; Jeffrey C. Colish, Northrop Grumman Corp.; Samuel J. Croce, Litton Aero Products Division; Derek D'Andrade, Surface Mount Technology Centre; Rao B. Dayaneni, Hewlett Packard Laboratories; Rodney Dehne, OEM Worldwide; James F. Maguire, Boeing Defense & Space Group; Kim Finch, Boeing Defense & Space Group; Alelie Funcell, Xilinx Inc.; Constantino J. Gonzalez, ACME; Munir Haq, Advanced Micro Devices Inc.; Larry A. Hargreaves, D.C. Scientific Inc.; John T. Hoback, Amoco Chemical Co.; Terence Kern, Axiom Electronics Inc.; Connie M. Korth, K-Byte/Hibbing Manufacturing; Gabriele Marcantonio, NORTEL; Charles Martin, Hewlett Packard Laboratories; Richard W. Max, Alcatel Network Systems Inc.; Patrick McCluskey, University of Maryland; James H. Moffitt, Moffitt Consulting Services; Robert Mulligan, Motorola Inc.; James E. Mumby, Ciba; John Northrup, Lockheed Martin Corp.; Dominique K. Numakura, Litchfield Precision Components; Nitin B. Parekh, Unisys Corp.; Bella Poborets, Lucent Technologies Inc.; D. Elaine Pope, Intel Corp.; Ray Prasad, Ray Prasad Consultancy Group; Albert Puah, Adaptec Inc.; William Sepp, Technic Inc.; Ralph W. Taylor, Lockheed Martin Corp.; Ed R. Tidwell, DSC Communications Corp.; Nick Virmani, Naval Research Lab; Ken Warren, Corlund Electronics Corp.; Yulia B. Zaks, Lucent Technologies Inc.

Table of Contents
1 SCOPE
2 DEFINITIONS
2.1 A-mode; 2.2 B-mode; 2.3 Back-Side Substrate View Area; 2.4 C-mode; 2.5 Through Transmission Mode; 2.6 Die Attach View Area; 2.7 Die Surface View Area; 2.8 Focal Length (FL); 2.9 Focus Plane; 2.10 Leadframe (L/F) View Area; 2.11 Reflective Acoustic Microscope; 2.12 Through Transmission Acoustic Microscope; 2.13 Time-of-Flight (TOF); 2.14 Top-Side Die Attach Substrate View Area
3 APPARATUS: 3.1 Reflective Acoustic Microscope System; 3.2 Through Transmission Acoustic Microscope System
4 PROCEDURE: 4.1 Equipment Setup; 4.2 Perform Acoustic Scans
Appendix A: Acoustic Microscopy Defect Check Sheet
Appendix B: Potential Image Pitfalls
Appendix C: Some Limitations of Acoustic Microscopy
Appendix D: Reference Procedure for Presenting Applicable Scanned Data
Figures: Figure 1, Example of A-mode Display; Figure 2, Example of B-mode Display; Figure 3, Example of C-mode Display; Figure 4, Example of Through Transmission Display; Figure 5, Diagram of a Reflective Acoustic Microscope System; Figure 6, Diagram of a Through Transmission Acoustic Microscope System

1 SCOPE

This test method defines the procedures for performing acoustic microscopy on non-hermetic encapsulated electronic components. This method provides users with an acoustic microscopy process flow for detecting defects non-destructively in plastic packages while achieving reproducibility.

2 DEFINITIONS

2.1 A-mode: Acoustic data collected at the smallest X-Y-Z region defined by the limitations of the given acoustic microscope. An A-mode display contains amplitude and phase/polarity information as a function of time of flight at a single point in the X-Y plane. See Figure 1, Example of A-mode Display.

2.2 B-mode: Acoustic data collected along an X-Z or Y-Z plane versus depth using a reflective acoustic microscope. A B-mode scan contains amplitude and phase/polarity information as a function of time of flight at each point along the scan line. A B-mode scan furnishes a two-dimensional (cross-sectional) description along a scan line (X or Y). See Figure 2, Example of B-mode Display (bottom half of picture on left).

2.3 Back-Side Substrate View Area (refer to Appendix A, Type IV): The interface between the encapsulant and the back of the substrate within the outer edges of the substrate surface.

2.4 C-mode: Acoustic data collected in an X-Y plane at depth (Z) using a reflective acoustic microscope. A C-mode scan contains amplitude and phase/polarity information at each point in the scan plane. A C-mode scan furnishes a two-dimensional (area) image of echoes arising from reflections at a particular depth (Z). See Figure 3, Example of C-mode Display.

2.5 Through Transmission Mode: Acoustic data collected in an X-Y plane throughout the depth (Z) using a through transmission acoustic microscope. A Through Transmission mode scan contains only amplitude information at each point in the scan plane. A Through Transmission scan furnishes a two-dimensional (area) image of transmitted ultrasound through the complete thickness/depth (Z) of the sample/component. See Figure 4, Example of Through Transmission Display.

2.6 Die Attach View Area (refer to Appendix A, Type II): The interface between the die and the die attach adhesive and/or the die attach adhesive and the die attach substrate.

2.7 Die Surface View Area (refer to Appendix A, Type I): The interface between the encapsulant and the active side of the die.

2.8 Focal Length (FL): The distance in water at which a transducer's spot size is at a minimum.

2.9 Focus Plane: The X-Y plane at a depth (Z) at which the amplitude of the acoustic signal is
maximized.

2.10 Leadframe (L/F) View Area (refer to Appendix A, Type V): The imaged area which extends from the outer L/F edges of the package to the L/F "tips" (wedge bond/stitch bond region of the innermost portion of the L/F).

2.11 Reflective Acoustic Microscope: An acoustic microscope that uses one transducer as both the pulser and receiver. (This is also known as a pulse/echo system.) See Figure 5, Diagram of a Reflective Acoustic Microscope System.

2.12 Through Transmission Acoustic Microscope: An acoustic microscope that transmits ultrasound completely through the sample from a sending transducer to a receiver on the opposite side. See Figure 6, Diagram of a Through Transmission Acoustic Microscope System.

3.1.6 A broad band acoustic transducer with a center frequency in the range of 10 to 200 MHz for subsurface imaging.

3.2 Through Transmission Acoustic Microscope System (see Figure 6), comprised of:
3.2.1 Items 3.1.1 to 3.1.6 above
3.2.2 Ultrasonic pulser (can be a pulser/receiver as in 3.1.1)
3.2.3 Separate receiving transducer or ultrasonic detection system

3.3 Reference packages or standards, including packages with delamination and packages without delamination, for use during equipment setup.

3.4 Sample holder for pre-positioning samples. The holder should keep the samples from moving during the scan and maintain planarity.

4 PROCEDURE

This procedure is generic to all acoustic microscopes. For operational details related to this procedure that apply to a specific model of acoustic microscope, consult the manufacturer's operational manual.

4.1 Equipment Setup

4.1.1 Select the transducer with the highest useable ultrasonic frequency, subject to the limitations imposed by the media thickness and acoustic characteristics, package configuration, and transducer availability, to analyze the interfaces of interest. The transducer selected should have a low enough frequency to provide a clear signal from the interface of interest. The transducer should have a high enough frequency to delineate the interface of interest. Note: Through transmission mode may require a lower frequency and/or longer focal length than reflective mode. Through transmission is effective for the initial inspection of components to determine if defects are present.

4.1.2 Verify setup with the reference packages or standards (see 3.3 above) and settings that are appropriate for the transducer chosen in 4.1.1 to ensure that the critical parameters at the interface of interest correlate to the reference standard utilized.

4.1.3 Place units in the sample holder in the coupling medium such that the upper surface of each unit is parallel with the scanning plane of the acoustic transducer. Sweep air bubbles away from the unit surface and from the bottom of the transducer head.

4.1.4 At a fixed distance (Z), align the transducer and/or stage for the maximum reflected amplitude from the top surface of the sample. The transducer must be perpendicular to the sample surface.

4.1.5 Focus by maximizing the amplitude, in the A-mode display, of the reflection from the interface designated for imaging. This is done by adjusting the Z-axis distance between the transducer and the sample.

4.2 Perform Acoustic Scans

4.2.1 Inspect the acoustic image(s) for any anomalies, verify that the anomaly is a package defect or an artifact of the imaging process, and record the results. (See Appendix A for an example of a check sheet that may be used.) To determine if an anomaly is a package defect or an artifact of the imaging process it is recommended to analyze the A-mode display at the location of the anomaly.

4.2.2 Consider potential pitfalls in image interpretation listed in, but not limited to, Appendix B and some of the limitations of acoustic microscopy listed in, but not limited to, Appendix C. If necessary, make adjustments to the equipment setup to optimize the results and rescan.

4.2.3 Evaluate the acoustic images using the failure criteria specified in other appropriate
documents, such as J-STD-020.

4.2.4 Record the images and the final instrument setup parameters for documentation purposes. An example checklist is shown in Appendix D.

Appendix A: Acoustic Microscopy Defect Check Sheet (continued)

CIRCUIT SIDE SCAN
Image File Name/Path
Delamination:
(Type I) Die Circuit Surface/Encapsulant — Number Affected, Average %, Location: Corner / Edge / Center
(Type II) Die/Die Attach — Number Affected, Average %, Location: Corner / Edge / Center
(Type III) Encapsulant/Substrate — Number Affected, Average %, Location: Corner / Edge / Center
(Type V) Interconnect tip — Number Affected, Average %; Interconnect — Number Affected, Max. % Length
(Type VI) Intra-Laminate — Number Affected, Average %, Location: Corner / Edge / Center
Comments
Cracks:
Are cracks present: Yes / No. If yes: Do any cracks intersect: bond wire / ball bond / wedge bond / tab bump / tab lead. Does crack extend from lead finger to any other internal feature: Yes / No. Does crack extend more than two-thirds the distance from any internal feature to the external surface of the package: Yes / No. Additional verification required: Yes / No. Comments
Mold Compound Voids:
Are voids present: Yes / No. If yes: Approx. size; Location (if multiple voids, use comment section). Do any voids intersect: bond wire / ball bond / wedge bond / tab bump / tab lead. Additional verification required: Yes / No. Comments

NON-CIRCUIT SIDE SCAN
Image File Name/Path
Delamination:
(Type IV) Encapsulant/Substrate — Number Affected, Average %, Location: Corner / Edge / Center
(Type II) Substrate/Die Attach — Number Affected, Average %, Location: Corner / Edge / Center
(Type V) Interconnect — Number Affected, Max. % Length, Location: Corner / Edge / Center
(Type VI) Intra-Laminate — Number Affected, Average %, Location: Corner / Edge / Center
(Type VII) Heat Spreader — Number Affected, Average %, Location: Corner / Edge / Center
Additional verification required: Yes / No. Comments
Cracks:
Are cracks present: Yes / No. If yes: Does crack extend more than two-thirds the distance from any internal feature to the external surface of the package: Yes / No. Additional verification required: Yes / No. Comments
Mold Compound Voids:
Are voids present: Yes / No. If yes: Approx. size; Location (if multiple voids, use comment section). Additional verification required: Yes / No. Comments

Appendix B: Potential Image Pitfalls

Observation: Unexplained loss of front surface signal. Causes/comments: gain setting too low; symbolization on package surface; ejector pin knockouts; pin 1 and other mold marks; dust, air bubbles, fingerprints, residue; scratches, scribe marks, pencil marks; cambered package edge.
Observation: Unexplained loss of subsurface signal. Causes/comments: gain setting too low; transducer frequency too high; acoustically absorbent (rubbery) filler; large mold compound voids; porosity/high concentration of small voids; angled cracks in package; "dark line boundary" (phase cancellation); burned molding compound (ESD/EOS damage).
Observation: False or spotty indication of delamination. Causes/comments: low acoustic impedance coating (polyimide, gel); focus error; incorrect delamination gate setup; multilayer interference effects.
Observation: False indication of adhesion. Causes/comments: gain set too high (saturation); incorrect delamination gate setup; focus error; overlap of front surface and subsurface echoes (transducer frequency too low); fluid filling delamination areas.
Observation: Apparent voiding around die edge. Causes/comments: reflection from wire loops; incorrect setting of void gate.
Observation: Graded intensity. Causes/comments: die tilt or lead frame deformation; sample tilt.

Appendix C: Some Limitations of Acoustic Microscopy

Acoustic microscopy is an analytical technique that provides a non-destructive method for examining plastic encapsulated components for the existence of delaminations, cracks, and voids. This technique has limitations that include the following:

Limitation: Acoustic microscopy has difficulty in finding small defects if the package is too thick. Reason: The ultrasonic signal becomes more attenuated as a function of two factors: the depth into the package and the transducer frequency. The greater the depth, the greater the attenuation. Similarly, the higher the transducer frequency, the greater the attenuation as a function of depth.
Limitation: There are limitations on the Z-axis (axial) resolution. Reason: This is a function of the transducer frequency. The higher the transducer frequency, the better the resolution. However, the higher frequency signal becomes attenuated more quickly as a function of depth.
Limitation: There are limitations on the X-Y (lateral) resolution. Reason: The X-Y (lateral) resolution is a function of a number of different variables, including transducer characteristics (frequency, element diameter, and focal length); absorption and scattering of acoustic waves as a function of the sample material; and electromechanical properties of the X-Y stage.
Limitation: Irregularly shaped packages are difficult to analyze. Reason: The technique requires some kind of flat reference surface. Typically, the upper surface of the package or the die surface can be used as references. In some packages, cambered package edges can cause difficulty in analyzing defects near the edges and below their surfaces.
Limitation: Edge effect. Reason: The edges cause difficulty in analyzing defects near the edge of any internal features.

Appendix D: Reference Procedure for Presenting Applicable Scanned Data

Most of the settings described may be captured as a default for the particular supplier/product, with specific changes recorded on a sample or lot basis.

Setup Configuration (digital setup file name and contents)
Calibration Procedure and Calibration/Reference Standards used
Transducer: manufacturer; model; center frequency; serial number; element diameter; focal length in water
Scan Setup: scan area (X-Y dimensions); scan step size (horizontal, vertical); displayed resolution (horizontal, vertical); scan speed
Pulser/Receiver Settings: gain; bandwidth; pulse; energy; repetition rate; receiver attenuation; damping; filter; echo amplitude
Pulse Analyzer Settings: front surface gate delay relative to trigger pulse; subsurface gate (if used); high pass filter; detection threshold for positive oscillation, negative oscillation; A/D settings (sampling rate, offset setting)
Per Sample Settings: sample orientation (top or bottom (flipped) view and location of pin 1 or some other distinguishing characteristic); focus (point, depth, interface); reference plane; non-default parameters; sample identification information to uniquely distinguish it from others in the same group

Reference Procedure for Presenting Scanned Data
Image file types and names
Gray scale and color image legend definitions
Significance of colors
Indications or definition of delamination
Image dimensions
Depth scale of TOF
Deviation from true aspect ratio
Image type: A-mode, B-mode, C-mode, TOF, Through Transmission
A-mode waveforms should be provided for points of interest, such as delaminated areas. In addition, an A-mode image should be provided for a bonded area as a control.

Standard Improvement Form — IPC/JEDEC J-STD-035

The purpose of this form is to provide the Technical Committee of IPC with input from the industry regarding usage of the subject standard. Individuals or companies are invited to submit comments to IPC. All comments will be collected and dispersed to the appropriate committee(s). If you can provide input, please complete this form and return to: IPC, 2215 Sanders Road, Northbrook, IL 60062-6135, Fax 847 509.9798.
1. I recommend changes to the following: Requirement, paragraph number; Test Method number, paragraph number. The referenced paragraph number has proven to be: Unclear / Too Rigid / In Error / Other.
2. Recommendations for correction:
3. Other suggestions for document improvement:
Submitted by: Name, Telephone, Company, E-mail, Address, City/State/Zip, Date

ISBN #1-580982-28-X
IPC, 2215 Sanders Road, Northbrook, IL 60062-6135. Tel. 847.509.9700, Fax 847.509.9798
Wavelet (notes on the wavelet transform)
Sum of results: X1·cos(w) + X2·cos(2w) + X3·cos(3w) + … + Xy·cos(yw)
Figure 4 depicts the process pictorially: The vectors in the figure just happen to be pointing in a cardinal direction because the strobe frequencies are all multiples of the vector (phasor) rotation rate, but that is not normally the case. Usually the vectors will point in a number of different directions, with a resultant in some direction other than straight up.
For example, in an eight filter bank, a DFT would require 512 computations, while an FFT would only require 56, significantly speeding up processing time.
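The exact operation counts depend on the implementation, so the figures quoted above should be read as illustrative. The sketch below (NumPy; function name my own) verifies that a naive O(N²) direct DFT agrees with the FFT, and tallies the textbook complex-multiply counts for an 8-point transform:

```python
import numpy as np

# Direct DFT: an N x N matrix of twiddle factors applied to the signal,
# i.e. N^2 complex multiplies.  The FFT computes the same result in
# roughly (N/2) * log2(N) multiplies for a radix-2 algorithm.
def dft(x):
    N = len(x)
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)   # twiddle-factor matrix
    return W @ x

x = np.random.default_rng(1).standard_normal(8)
assert np.allclose(dft(x), np.fft.fft(x))  # both transforms agree

direct_mults = 8 * 8          # 64 complex multiplies for N = 8, direct
fft_mults = (8 // 2) * 3      # 12 for radix-2, (N/2) * log2(N)
```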
[Figure: the fundamental, third harmonic, and fifth harmonic, and their sum approximating a square wave]
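The harmonic sum in the figure can be reproduced numerically. A square wave's Fourier series is (4/π)[sin(ωt) + sin(3ωt)/3 + sin(5ωt)/5 + …], so summing just the fundamental, third, and fifth harmonics already gives a recognizable square wave:

```python
import numpy as np

# Partial Fourier series of a 1 Hz square wave: odd harmonics weighted 1/k.
t = np.linspace(0, 1, 1000, endpoint=False)
w = 2 * np.pi                                    # fundamental angular frequency
approx = (4 / np.pi) * sum(np.sin(k * w * t) / k for k in (1, 3, 5))
square = np.sign(np.sin(w * t))                  # ideal square wave
# Each added odd harmonic reduces the mean-squared error of the approximation
```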
Digital Sampling of Waveforms
In order to process a signal digitally, we need to sample the signal frequently enough to create a complete “picture” of the signal. The discrete Fourier transform (DFT) may be used in this regard. Samples are taken at uniform time intervals as shown in Figure 2 and processed.
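A minimal sketch of this sampling-plus-DFT idea (values are hypothetical, chosen only for illustration): a 50 Hz sine sampled at uniform intervals well above the Nyquist rate, whose DFT then peaks in the bin nearest 50 Hz.

```python
import numpy as np

# Uniform sampling of a 50 Hz sine at fs = 1000 Hz (Nyquist rate is 100 Hz),
# followed by a DFT to locate the tone in frequency.
fs = 1000                         # sampling rate, Hz
n = np.arange(256)                # 256 uniform samples
x = np.sin(2 * np.pi * 50 * n / fs)

spectrum = np.abs(np.fft.rfft(x))
peak_hz = np.argmax(spectrum) * fs / 256   # convert bin index to frequency
# peak_hz lands within one bin width (fs/256 ~ 3.9 Hz) of the true 50 Hz
```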
GOCAD (reference material)
Seismic Acoustic Impedance Inversion in Reservoir Characterization Utilizing gOcad
By Steven Clawson and Hai-Zui ("Hai-Ray") Meng, presented at the 2000 gOcad Users Meeting

Introduction

Workflows for utilizing seismic data inverted to acoustic impedance data in reservoir characterization will be shown. We are using the public-domain 3D seismic dataset at Boonsville Field in North-Central Texas for our example. This public-domain dataset is fairly complete with seismic, well, and production data:
• 5.5 sq. miles of 3D seismic data
• Vertical seismic profile (VSP) near the center of the survey
• Digital well logs from 38 wells
• Well markers for the Bend Conglomerate Group
• Perforations, reservoir pressures, production, and petrophysical data for the 38 wells

We acknowledge Oxy USA, Inc., Enserch, Arch Petroleum, the Bureau of Economic Geology, GRI, and the DOE as contributing members for making this dataset available. This data is made publicly available as part of the technology-transfer activities of the Secondary Gas Recovery (SGR) program funded by the U.S. Department of Energy and the Gas Research Institute.

Boonsville Field is in the Fort Worth Basin in North-Central Texas. The main productive intervals are clastic sandstones in the Pennsylvanian Atokan Bend Conglomerate Group. A type log shows the interbedded sandstones and shales over about 1300 feet of section. The Bend Conglomerate is underlain by the Marble Falls Limestone, a platform carbonate. The Bend Conglomerates were sourced from the northwest on the Red River Arch as the Fort Worth Basin was forming during the Ouachita orogeny. These Bend Conglomerate sandstones then pinch out to the southeast, outside of this project area, as they become distal to the source, prograding into the Fort Worth Basin. Historical gas production has been from the lowermost sequence in the Vineyard.
Additional potential is expected in the middle sequences of the Runaway and Vineyard intervals.

This example seismic line shows the Bend Conglomerate Group structure. Most striking are the karst collapse features from dissolution of the underlying Ellenburger Limestone, some 2000 feet below the Atoka. These collapse features are seen to cause compartmentalization in the Bend Conglomerate sand bodies.

Previous conclusions from the Bureau of Economic Geology's GRI study are:
1) Karsting from Ellenburger carbonates causes collapse features compartmentalizing the reservoir. A large range of compartment sizes exists.
2) 3D seismic is needed to image the collapse features.
3) Seismic attributes can sometimes predict the reservoir facies: Upper Caddo, amplitude; Lower Caddo, instantaneous frequency; lower Bend Conglomerate sequences, not definitive.
4) Reservoirs often exist as stacked compartments of genetic sequences.

The utility of the seismic attributes derived from the amplitude data is limited and typically very dependent on the particular interval analyzed. In this project we integrate the well log data with the seismic for a better-defined reservoir model. This integration is accomplished by inverting the seismic amplitude data to acoustic impedance (AI) properties and depth-converting the seismic so correlation with the well logs is possible.
In this presentation I will only highlight the features of Structural Framework and Rock Property modeling in the overall Reservoir Modeling workflows:

Structural Framework => Stratigraphic Gridding => Lithology and Facies Mapping => Pressure Field => Rock Properties => Fracture Network and Stress Field => Reservoir Fluids and Dynamic Response.

Motivations for Reservoir Modeling include:
1) Integration of all relevant and available data.
2) Merging data of different scales (cores, well logs, seismic, and production).
3) Dynamically updating the model as new information becomes available.
4) Measurement of errors and uncertainty as well as expected value.

The specific workflows used depend on the number and type of data available. In this case there is substantial well control and the seismic data is of high resolution (80 Hz).

Structural Framework Workflow

The Structural Framework Workflow is shown below. Obtaining the structural framework from the seismic gives a much better description than from the well control alone; the karst features were not known until the 3D seismic data was acquired. Integration of the well marker tops and the seismic time horizons proceeds by two pathways:
1) A reference horizon (the Caddo Limestone) was an excellent reflector that also tied the well tops. This is depth converted by a co-located co-kriging method.
2) Time horizons below this reference did not exactly tie the associated well markers due to tuning effects of the thin-bedded Bend Conglomerate Group. For these horizons a velocity field was constructed by interpolating the sonic logs, calibrated to the seismic and checkshot survey. The depth was then created from the time and velocity relationship.
3) The fault network will be incorporated in the future using a seismic continuity analysis.
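The time-and-velocity relationship used in pathway 2 above can be sketched in a few lines (a hypothetical illustration with made-up numbers, not values from the Boonsville dataset; the key detail is that seismic times are two-way, so they are halved before multiplying by interval velocity):

```python
import numpy as np

# Hypothetical time-to-depth conversion with an interval-velocity field.
twt_tops = np.array([0.0, 0.400, 0.700, 0.900])   # two-way times to horizons, s
v_int = np.array([6000.0, 9000.0, 12000.0])       # interval velocities, ft/s

dt = np.diff(twt_tops)                            # two-way interval times
depths = np.concatenate(([0.0], np.cumsum(v_int * dt / 2.0)))   # depths, ft

# Average velocity down to each horizon: depth divided by one-way time.
v_avg = depths[1:] / (twt_tops[1:] / 2.0)
```

Going the other way, from average (or RMS) velocities back to interval velocities, is the Dix-type conversion mentioned in the depth-conversion discussion.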
Depth conversion of the reference horizon is accomplished through the strong correlation between time and depth at the well locations. Co-located co-Kriging of the seismic times and well marker depths produces a very accurate depth structure for the Caddo Limestone.

Interpolating the sonic logs across the survey produces an interval velocity field. Converting these interval velocities to average velocities (inverse Dix equation) provides the information for depth converting the intervening horizons. And here are the depth-converted intervening seismic horizons.

Rock Properties Workflow
This rock property modeling workflow uses the seismic information obtained via inversion to acoustic impedance to better control the well log interpolation of rock properties, together with the accurate structural information that the seismic provides. The workflow is necessarily iterative due to the dependency of one data type on another and the iteration between time (for the seismic data) and depth (for the log data) referencing.

Seismic-to-log calibration is the first step in integrating the seismic amplitude data with the log properties. At the start, one may not know the seismic wavelet other than by qualitative correlation; in this case a reverse-polarity wavelet is assumed. The synthetic is then tied to the seismic data, performing a constrained stretching and/or squeezing to fit major events. This stretching/squeezing is primarily due to dispersion between seismic velocities and sonic log velocities.

A final seismic wavelet is then extracted. Always use more than a single seismic-to-log calibration tie. In this case 4 well ties were averaged for a consistent wavelet, showing that the seismic wavelet is nearly −90 degrees out of phase and slightly ringy. The ringing suggests that the deconvolution was not sufficient to collapse the source wavelet.
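The bulk time shift of a well tie like the one above can be estimated by scanning a normalized cross-correlation between the synthetic and the seismic trace. This is only a minimal sketch with made-up toy traces (real ties also need constrained stretch/squeeze, as described above); the function names are hypothetical.

```python
# Sketch: estimate the bulk time shift between a synthetic seismogram and a
# seismic trace by scanning normalized cross-correlation over candidate lags.
# Toy traces for illustration only.

def best_lag(synthetic, trace, max_lag):
    """Return (lag, correlation) maximizing the normalized cross-correlation."""
    def corr_at(lag):
        pairs = [(synthetic[i], trace[i + lag])
                 for i in range(len(synthetic))
                 if 0 <= i + lag < len(trace)]
        if len(pairs) < 2:
            return -1.0
        xs, ys = zip(*pairs)
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        num = sum((x - mx) * (y - my) for x, y in pairs)
        den = (sum((x - mx) ** 2 for x in xs) *
               sum((y - my) ** 2 for y in ys)) ** 0.5
        return num / den if den else -1.0
    return max(((lag, corr_at(lag)) for lag in range(-max_lag, max_lag + 1)),
               key=lambda t: t[1])

# Toy example: the "trace" is the synthetic delayed by 3 samples.
synthetic = [0, 0, 1, 2, 1, 0, -1, 0, 0, 0, 0, 0]
trace     = [0, 0, 0, 0, 0, 1, 2, 1, 0, -1, 0, 0]
lag, c = best_lag(synthetic, trace, max_lag=5)
```

With the delayed toy trace, the scan recovers a 3-sample shift with perfect correlation at that lag.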
The seismic bandwidth is very good (20–80 Hz).

A background acoustic impedance model is needed to supply the low-frequency component missing from the seismic trace data in the inversion. The first iteration uses a simple gridding of the 4 sonic logs in the survey. A model-based inversion using Hampson-Russell Software's Strata program shows the transform of the qualitative amplitude data into rock property information. The result is very dependent on the background model used, and later we'll see an improved background model for a better result.

Checking this inversion at our key well, B Yates 18D, the seismically inverted acoustic impedance ties well qualitatively with the well log acoustic impedance. The depth-converted AI volume is also compared to the well log for quality control.

Now that we have seismically derived rock properties in depth, let's see how they correlate to the well logs. In general we see that:
1) Low AI relates to shales on the gamma ray log.
2) High AI relates to resistive sandstones on the RT log.
3) Correlation of AI to porosity is more complicated, since the shales measure a high porosity with low AI, the more porous sandstones lie in an intermediate range of AI, and the tight sandstones are resistive and also high AI.
Cross-plotted against the raw log curves, these properties show a rather low correlation coefficient.

An observation on the relative scales of information is needed. The well logs are of course of higher resolution than the seismic data, as shown by the lower variance of AI derived from the seismic than that represented in the well log data. Smoothing the log curves is required to statistically correlate the respective information. The correlation is also strongly influenced by the exactness of the depth conversion of the seismic information to tie the wells.
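The interval-to-average velocity conversion used for depth converting the intervening horizons can be sketched as follows. This is a minimal illustration with hypothetical layer values, using the basic relation that average velocity to a layer base is cumulative distance over cumulative time.

```python
# Sketch: convert a layered interval-velocity model to average velocities for
# depth conversion. Assumes one-way vertical times; the values are made up.

def average_velocities(v_int, dt):
    """Average velocity from the surface to the base of each layer."""
    avgs, depth, time = [], 0.0, 0.0
    for v, t in zip(v_int, dt):
        depth += v * t          # thickness contributed by this layer
        time += t
        avgs.append(depth / time)
    return avgs

v_int = [2000.0, 3000.0, 4000.0]   # interval velocities, ft/s (hypothetical)
dt    = [0.5, 0.5, 0.25]           # one-way interval times, s
v_avg = average_velocities(v_int, dt)
depth_base = v_avg[-1] * sum(dt)   # depth to the base of the model
```

Multiplying the average velocity by the total time returns the cumulative depth, which is exactly how the average-velocity field is used to depth convert a time horizon.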
Due to the thin-bedded nature of the Bend Conglomerate Group, a mistie of only a few feet will severely affect the correlation. Cross-plotting the seismically derived AI against the smoothed well logs (20-foot averaging) increases the correlation, as the data are now on a more nearly equal sample support (resolution). These correlations are still low; the seismically derived AI values are also influenced by the simple background impedance model used in the inversion.

The well log acoustic impedance (AI) is highly correlated with Log10(RT) in the survey. The spatial variogram shows a fairly long correlation range, enough to provide a good background AI model for a second iteration of inversion. First, Kriging of the Log10(RT) logs is performed; next, co-Kriging with the 4 wells that have acoustic impedance information is run. Spatially, this new background impedance model provides features not available from just the 4 wells with sonic logs. Areas near the well control have very high frequency information content, while away from well control the response is subdued towards an average by the Kriging system.

Since the seismic is principally used for interpolating the interwell region, this background impedance model is low-pass filtered to 20 Hz. This way the well control adds only the very long wavelength trends to the inversion result, and the interwell region is justly controlled by the seismic data.

Qualitative correlation of the resulting rock properties at the key well, B Yates 18D, yields similar results as before. Cross-plotting the seismically derived acoustic impedance and the log properties in depth now shows a better correlation.
These correlations are good enough to use in a co-located co-Kriging of the well log properties. Rock property models are now generated by co-located co-Kriging of the gamma ray logs (for lithology discrimination) and the resistivity logs, controlled by the seismically derived AI properties.

A reservoir model of sandstone porosity can then be derived from the relationships of lithology to gamma ray and resistivity, where these models of gamma ray and resistivity are related back to the seismically derived acoustic impedance. By segmenting the data into a sandstone region defined by:
Gamma ray less than 90, and
Log10(Resistivity) greater than 0.8,
a sandstone porosity relationship is defined. Constructing the density model in the sandstone facies is then represented here.
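The sandstone cutoffs above amount to a simple two-condition classifier. A minimal sketch, with hypothetical log samples (the cutoff values GR < 90 and log10(RT) > 0.8 are from the text; everything else is made up):

```python
import math

# Sketch: apply the sandstone-facies cutoffs described above
# (gamma ray < 90 API and log10(resistivity) > 0.8) to log samples.

def is_sandstone(gr, rt):
    """Facies flag from gamma ray (API) and resistivity (ohm-m)."""
    return gr < 90 and math.log10(rt) > 0.8

samples = [
    {"gr": 45.0, "rt": 20.0},   # clean and resistive -> sandstone
    {"gr": 120.0, "rt": 3.0},   # shaly -> fails the gamma ray cutoff
    {"gr": 60.0, "rt": 4.0},    # log10(4) ~ 0.60 -> fails the resistivity cutoff
]
flags = [is_sandstone(s["gr"], s["rt"]) for s in samples]
```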
English Original: The Wavelet Transform
The Wavelet Transform

The wavelet transform is a rapidly developing area of modern mathematics; its theory is deep and its applications are very extensive. The concept was first proposed in 1974 by J. Morlet, a French engineer working in petroleum signal processing, who built up the inversion formula empirically from physical intuition and the practical demands of signal processing; at the time it did not win the acceptance of mathematicians. This recalls 1807, when the French engineer J.B.J. Fourier proposed the creative concept that any function can be expanded in an infinite series of trigonometric functions, and likewise failed to win the acceptance of the famous mathematicians Lagrange and A.M. Legendre. Fortunately, as early as the 1970s, A. Calderón's representation theorem, the atomic decomposition of Hardy spaces, and deep research on unconditional bases had prepared the theoretical ground for the birth of the wavelet transform, and J.O. Strömberg had already constructed a basis very similar to today's wavelet bases. The famous mathematician Y. Meyer constructed a genuine wavelet basis in 1986 and, together with S. Mallat, established the systematic method of constructing wavelet bases from multiresolution analysis; after that, wavelet analysis began to develop rapidly. Among the contributors, the Belgian mathematician I. Daubechies' book "Ten Lectures on Wavelets" played an important role in popularizing wavelets. Compared with the Fourier transform and the windowed Fourier (Gabor) transform, the wavelet transform is localized in both time and frequency, so it can extract information from a signal effectively; through operations such as dilation and translation it performs a multiscale analysis of a function or signal, solving many difficult problems that the Fourier transform cannot. The wavelet transform has therefore been praised as a "mathematical microscope", and it is a milestone in the history of harmonic analysis.

The applications of wavelet analysis are closely combined with its theory. It has already achieved results that attract wide attention in science, technology, and the information industry. Electronic information technology is one of the six key high-technology fields, and an important aspect of it is image and signal processing. Signal processing has become an essential part of contemporary science and technology; its purposes are accurate analysis and diagnosis, coding compression and quantization, fast transmission or storage, and precise reconstruction (or restoration). From a mathematical point of view, signal and image processing can be unified as signal processing (an image can be regarded as a two-dimensional signal), and many problems in the applications of wavelet analysis can be reduced to signal processing problems. For signals whose properties are stationary in time, the ideal processing tool is still Fourier analysis; but the great majority of signals encountered in practice are non-stationary, and a tool especially suited to non-stationary signals is wavelet analysis.

In fact the application domain of wavelet analysis is extremely broad. It includes: many branches of mathematics; signal analysis and image processing; quantum mechanics and theoretical physics; military electronic countermeasures and intelligent weapons; computer classification and recognition; artificial synthesis of music and speech; medical imaging and diagnosis; seismic exploration data processing; fault diagnosis of large machinery; and so on. For example, in mathematics it has been used in numerical analysis, the construction of fast numerical methods, curve and surface construction, the solution of differential equations, and control theory. In signal analysis it is used in filtering, denoising, compression, and transmission. In image processing it is used in compression, classification, recognition and diagnosis, and cleaning. In medical imaging it reduces the imaging time of B-ultrasound, CT, and nuclear magnetic resonance, and improves resolution.

The use of the wavelet transform for signal and image compression is an important aspect of wavelet applications. Its characteristics are a high compression ratio and fast compression; after compression, the characteristics of the signal or image are preserved, and the result resists interference during transmission. There are many compression methods based on wavelet analysis; among the more successful are the best-basis wavelet packet method, the wavelet-domain texture model method, wavelet-transform zero-tree compression, and wavelet-transform vector compression.

The application of the wavelet transform to signals is also very extensive. It can be used for boundary processing and filtering, time-frequency analysis, signal-noise separation and the extraction of weak signals, the estimation of fractal indices, signal recognition and diagnosis, multiscale edge detection, and so on. Applications in engineering and technology include computer vision, computer graphics, curve design, turbulence, long-range cosmic research, and biomedical science. In video communication, a video coding technique must not only have high coding efficiency but also produce code streams with various kinds of flexibility. In this research field, many new coding ideas and techniques have emerged; among them, video coding algorithms based on the wavelet transform are among the most promising. This text classifies and studies the typical wavelet-domain video coding algorithms in the literature, analyzes the performance of the different wavelet-based video coding algorithms, contrasts the respective merits and shortcomings, and points out research directions for wavelet-domain video coding.

The wavelet transform is a kind of tool that cuts data, functions, or operators into components of different frequencies, and then studies each component with a resolution matched to its scale. The early work on this technique was done independently in different research fields, for example: Calderón's atomic decomposition (1964) in harmonic analysis in pure mathematics; in physics, the coherent states of quantum mechanics constructed by Aslaksen and Klauder (1968), and Paul's (1985) study of the Hamiltonian of the hydrogen atom; in engineering, the QMF filter design of Esteban and Galand (1977), and later the filters with the exact reconstruction property (CQF) studied in electrical engineering by Smith and Barnwell (1986) and by Vetterli (1986). J. Morlet (1983) formally put forward the concept of the wavelet in the analysis of seismic data. Over roughly the last five years, people have synthesized the above work from each field and turned it into a general method applicable to every domain. Let us restrict the discussion for the moment to wavelet methods in signal analysis. In the wavelet transform, a signal (for example, the sound pressure exerted on the eardrum) is described in terms of two quantities: time and scale (or frequency). The wavelet transform is a localized time-frequency analysis tool; Chapter 1 of this book explains the meaning of localization and why it has aroused the greatest interest, and afterwards describes wavelets of different types.
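The lossy-compression idea described above (transform, keep only the significant coefficients, invert) can be sketched with the simplest wavelet, the Haar. This is a minimal illustration, not any of the named methods (zero-tree, wavelet packet, etc.); the signal values are made up.

```python
import math

# Sketch of wavelet-based lossy compression: Haar-transform the signal, keep
# only the largest-magnitude coefficients, and invert.

SQRT2 = math.sqrt(2.0)

def haar_forward(x):
    """Full multi-level Haar transform; len(x) must be a power of two."""
    x = list(x)
    n = len(x)
    while n > 1:
        avg = [(x[2 * i] + x[2 * i + 1]) / SQRT2 for i in range(n // 2)]
        dif = [(x[2 * i] - x[2 * i + 1]) / SQRT2 for i in range(n // 2)]
        x[:n] = avg + dif
        n //= 2
    return x

def haar_inverse(c):
    c = list(c)
    n = 1
    while n < len(c):
        avg, dif = c[:n], c[n:2 * n]
        pairs = [((a + d) / SQRT2, (a - d) / SQRT2) for a, d in zip(avg, dif)]
        c[:2 * n] = [v for p in pairs for v in p]
        n *= 2
    return c

def compress(x, keep):
    """Zero all but the `keep` largest-magnitude coefficients."""
    c = haar_forward(x)
    cutoff = sorted((abs(v) for v in c), reverse=True)[keep - 1]
    return [v if abs(v) >= cutoff else 0.0 for v in c]

x = [4.0, 4.0, 4.0, 4.0, 8.0, 8.0, 8.0, 8.0]
approx = haar_inverse(compress(x, keep=2))
```

For this piecewise-constant signal only 2 of the 8 Haar coefficients are nonzero, so keeping 2 coefficients reconstructs the signal exactly; this sparsity is the source of the high compression ratios mentioned above.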
Wavelet (Wikipedia article)
Wavelet
A wavelet is a wave-like oscillation with an amplitude that begins at zero, increases, and then decreases back to zero. It can typically be visualized as a "brief oscillation" like one might see recorded by a seismograph or heart monitor. Generally, wavelets are purposefully crafted to have specific properties that make them useful for signal processing. Wavelets can be combined, using a "reverse, shift, multiply and integrate" technique called convolution, with portions of a known signal to extract information from the unknown signal.

For example, a wavelet could be created to have a frequency of Middle C and a short duration of roughly a 32nd note. If this wavelet was to be convolved with a signal created from the recording of a song, then the resulting signal would be useful for determining when the Middle C note was being played in the song. Mathematically, the wavelet will correlate with the signal if the unknown signal contains information of similar frequency. This concept of correlation is at the core of many practical applications of wavelet theory.

As a mathematical tool, wavelets can be used to extract information from many different kinds of data, including – but certainly not limited to – audio signals and images. Sets of wavelets are generally needed to analyze data fully. A set of "complementary" wavelets will decompose data without gaps or overlap so that the decomposition process is mathematically reversible.
Thus, sets of complementary wavelets are useful in wavelet-based compression/decompression algorithms where it is desirable to recover the original information with minimal loss. In formal terms, this representation is a wavelet series representation of a square-integrable function with respect to either a complete, orthonormal set of basis functions, or an overcomplete set or frame of a vector space, for the Hilbert space of square integrable functions.

Name
The word wavelet has been used for decades in digital signal processing and exploration geophysics.[1] The equivalent French word ondelette, meaning "small wave", was used by Morlet and Grossmann in the early 1980s.

Wavelet theory
Wavelet theory is applicable to several subjects. All wavelet transforms may be considered forms of time-frequency representation for continuous-time (analog) signals and so are related to harmonic analysis. Almost all practically useful discrete wavelet transforms use discrete-time filter banks. These filter banks are called the wavelet and scaling coefficients in wavelets nomenclature. These filter banks may contain either finite impulse response (FIR) or infinite impulse response (IIR) filters. The wavelets forming a continuous wavelet transform (CWT) are subject to the uncertainty principle of Fourier analysis and the corresponding sampling theory: given a signal with some event in it, one cannot assign simultaneously an exact time and an exact frequency response scale to that event. The product of the uncertainties of time and frequency response scale has a lower bound. Thus, in the scaleogram of a continuous wavelet transform of this signal, such an event marks an entire region in the time-scale plane, instead of just one point.
Also, discrete wavelet bases may be considered in the context of other forms of the uncertainty principle. Wavelet transforms are broadly divided into three classes: continuous, discrete and multiresolution-based.

Continuous wavelet transforms (continuous shift and scale parameters)
In continuous wavelet transforms, a given signal of finite energy is projected on a continuous family of frequency bands (or similar subspaces of the function space L2(R)). For instance, the signal may be represented on every frequency band of the form [f, 2f] for all positive frequencies f > 0. Then, the original signal can be reconstructed by a suitable integration over all the resulting frequency components.

The frequency bands or subspaces (sub-bands) are scaled versions of a subspace at scale 1. This subspace in turn is in most situations generated by the shifts of one generating function ψ in L2(R), the mother wavelet. For the example of the scale one frequency band [1, 2] this function is

    ψ(t) = 2 sinc(2t) − sinc(t) = (sin(2πt) − sin(πt)) / (πt)

with the (normalized) sinc function. Meyer's wavelet and other mother wavelets can also be used.

The subspace of scale a or frequency band [1/a, 2/a] is generated by the functions (sometimes called child wavelets)

    ψ_{a,b}(t) = (1/√a) ψ((t − b)/a),

where a is positive and defines the scale and b is any real number and defines the shift. The pair (a, b) defines a point in the right half-plane R+ × R.

The projection of a function x onto the subspace of scale a then has the form

    x_a(t) = ∫_R WT_ψ{x}(a, b) ψ_{a,b}(t) db

with wavelet coefficients

    WT_ψ{x}(a, b) = ⟨x, ψ_{a,b}⟩ = ∫_R x(t) ψ_{a,b}(t) dt.

See a list of some continuous wavelets. For the analysis of the signal x, one can assemble the wavelet coefficients into a scaleogram of the signal.

Discrete wavelet transforms (discrete shift and scale parameters)
It is computationally impossible to analyze a signal using all wavelet coefficients, so one may wonder if it is sufficient to pick a discrete subset of the upper half-plane to be able to reconstruct a signal from the corresponding wavelet coefficients. One such system is the affine system for some real parameters a > 1, b > 0.
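A continuous-wavelet coefficient at a given scale and shift is just an inner product, which can be approximated numerically. A minimal sketch with an (unnormalized) Mexican-hat mother wavelet; the grid and signals are made up:

```python
import math

# Sketch: numerically evaluating a continuous-wavelet coefficient
# WT(a, b) = (1/sqrt(a)) * integral of x(t) * psi((t - b)/a) dt,
# using an unnormalized Mexican-hat (Ricker) mother wavelet.

def psi(t):
    """Mexican-hat mother wavelet, unnormalized."""
    return (1.0 - t * t) * math.exp(-t * t / 2.0)

def cwt_coeff(x, a, b, lo=-8.0, hi=8.0, n=4000):
    """Trapezoidal approximation of the coefficient at scale a, shift b."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        t = lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * x(t) * psi((t - b) / a)
    return total * h / math.sqrt(a)

# Zero mean: the coefficient of a constant signal is (numerically) zero,
# reflecting the zero-mean property required of a mother wavelet.
c_const = cwt_coeff(lambda t: 1.0, a=1.0, b=0.0)

# The wavelet correlates strongly with itself at the matching scale and shift.
c_self = cwt_coeff(psi, a=1.0, b=0.0)
```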
The corresponding discrete subset of the half-plane consists of all the points (a^m, n a^m b) with m, n in Z. The corresponding baby wavelets are now given as

    ψ_{m,n}(t) = a^{−m/2} ψ(a^{−m} t − n b).

A sufficient condition for the reconstruction of any signal x of finite energy by the formula

    x(t) = Σ_{m∈Z} Σ_{n∈Z} ⟨x, ψ_{m,n}⟩ ψ_{m,n}(t)

is that the functions {ψ_{m,n} : m, n ∈ Z} form a tight frame of L2(R).

Multiresolution based discrete wavelet transforms
In any discretised wavelet transform, there are only a finite number of wavelet coefficients for each bounded rectangular region in the upper half-plane. Still, each coefficient requires the evaluation of an integral. In special situations this numerical complexity can be avoided if the scaled and shifted wavelets form a multiresolution analysis. This means that there has to exist an auxiliary function, the father wavelet φ in L2(R), and that a is an integer. A typical choice is a = 2 and b = 1. The most famous pair of father and mother wavelets is the Daubechies 4-tap wavelet. Note that not every orthonormal discrete wavelet basis can be associated to a multiresolution analysis; for example, the Journé wavelet admits no multiresolution analysis.[2]

From the mother and father wavelets one constructs the subspaces

    V_m = span(φ_{m,n} : n ∈ Z), where φ_{m,n}(t) = 2^{−m/2} φ(2^{−m} t − n),
    W_m = span(ψ_{m,n} : n ∈ Z), where ψ_{m,n}(t) = 2^{−m/2} ψ(2^{−m} t − n).

The mother wavelet keeps the time domain properties, while the father wavelet keeps the frequency domain properties. From these it is required that the sequence

    {0} ⊂ … ⊂ V_1 ⊂ V_0 ⊂ V_{−1} ⊂ … ⊂ L2(R)

forms a multiresolution analysis of L2(R) and that the subspaces W_m are the orthogonal "differences" of the above sequence, that is, W_m is the orthogonal complement of V_m inside the subspace V_{m−1}:

    V_m ⊕ W_m = V_{m−1}.

In analogy to the sampling theorem one may conclude that the space V_m with sampling distance 2^m more or less covers the frequency baseband from 0 to 2^{−m−1}. As orthogonal complement, W_m roughly covers the band [2^{−m−1}, 2^{−m}].

From those inclusions and orthogonality relations follows the existence of sequences h = {h_n} and g = {g_n} that satisfy the identities

    h_n = ⟨φ_{0,0}, φ_{−1,n}⟩, so that φ(t) = √2 Σ_n h_n φ(2t − n), and
    g_n = ⟨ψ_{0,0}, φ_{−1,n}⟩, so that ψ(t) = √2 Σ_n g_n φ(2t − n).

The second identity of the first pair is a refinement equation for the father wavelet φ.
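The refinement equation above can be verified numerically for the simplest case, the Haar scaling function (the indicator of [0, 1), with filter h = [1/√2, 1/√2]). A minimal sketch:

```python
import math

# Sketch: check the refinement equation
#   phi(t) = sqrt(2) * sum_n h_n * phi(2t - n)
# for the Haar scaling function phi = indicator of [0, 1).

def phi(t):
    return 1.0 if 0.0 <= t < 1.0 else 0.0

h = [1.0 / math.sqrt(2.0), 1.0 / math.sqrt(2.0)]  # Haar scaling filter

def refined(t):
    return math.sqrt(2.0) * sum(hn * phi(2.0 * t - n) for n, hn in enumerate(h))

# Sample points straddling the support of phi.
ts = [i / 16.0 for i in range(-8, 24)]
ok = all(abs(phi(t) - refined(t)) < 1e-12 for t in ts)
```

The right-hand side collapses to φ(2t) + φ(2t − 1), which stitches the two half-width copies back into the unit indicator.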
Both pairs of identities form the basis for the algorithm of the fast wavelet transform. From the multiresolution analysis derives the orthogonal decomposition of the space L2 as

    L2(R) = ⊕_{m∈Z} W_m.

For any signal or function x in L2(R) this gives a representation in basis functions of the corresponding subspaces as

    x(t) = Σ_m Σ_n c_{m,n} ψ_{m,n}(t),

where the coefficients are

    c_{m,n} = ⟨x, ψ_{m,n}⟩.

Mother wavelet
For practical applications, and for efficiency reasons, one prefers continuously differentiable functions with compact support as mother (prototype) wavelet (functions). However, to satisfy analytical requirements (in the continuous WT) and in general for theoretical reasons, one chooses the wavelet functions from a subspace of the space L1(R) ∩ L2(R). This is the space of measurable functions that are both absolutely and square integrable:

    ∫ |ψ(t)| dt < ∞  and  ∫ |ψ(t)|² dt < ∞.

Being in this space ensures that one can formulate the conditions of zero mean and square norm one:

    ∫ ψ(t) dt = 0  (zero mean), and
    ∫ |ψ(t)|² dt = 1  (square norm one).

For ψ to be a wavelet for the continuous wavelet transform (see there for the exact statement), the mother wavelet must satisfy an admissibility criterion (loosely speaking, a kind of half-differentiability) in order to get a stably invertible transform.

For the discrete wavelet transform, one needs at least the condition that the wavelet series is a representation of the identity in the space L2(R). Most constructions of discrete WT make use of the multiresolution analysis, which defines the wavelet by a scaling function. This scaling function itself is a solution to a functional equation.

In most situations it is useful to restrict ψ to be a continuous function with a higher number M of vanishing moments, i.e.
    ∫ t^m ψ(t) dt = 0 for all integers m < M.

The mother wavelet is scaled (or dilated) by a factor of a and translated (or shifted) by a factor of b to give (under Morlet's original formulation):

    ψ_{a,b}(t) = (1/√a) ψ((t − b)/a).

For the continuous WT, the pair (a, b) varies over the full half-plane R+ × R; for the discrete WT this pair varies over a discrete subset of it, which is also called the affine group. These functions are often incorrectly referred to as the basis functions of the (continuous) transform. In fact, as in the continuous Fourier transform, there is no basis in the continuous wavelet transform. Time-frequency interpretation uses a subtly different formulation (after Delprat). Restrictions: (1) ψ_{a1,b1} coincides with ψ_{a,b} when a1 = a and b1 = b; (2) ψ has a finite time interval.

Comparisons with Fourier transform (continuous-time)
The wavelet transform is often compared with the Fourier transform, in which signals are represented as a sum of sinusoids. In fact, the Fourier transform can be viewed as a special case of the continuous wavelet transform with a particular choice of the mother wavelet. The main difference in general is that wavelets are localized in both time and frequency, whereas the standard Fourier transform is only localized in frequency. The short-time Fourier transform (STFT) is similar to the wavelet transform, in that it is also time- and frequency-localized, but there are issues with the frequency/time resolution trade-off.

In particular, assuming a rectangular window region, one may think of the STFT as a transform with a slightly different kernel: a windowed complex exponential, where the window has a fixed length and a temporal offset u. Using Parseval's theorem, one may define the wavelet's energy, and from it the square of the temporal support of the window offset by time u and the square of the spectral support of the window acting on a frequency. As stated by the Heisenberg uncertainty principle, the product of the temporal and spectral supports is bounded below for any given time-frequency atom, or resolution cell.
The STFT windows restrict the resolution cells to spectral and temporal supports determined by the window width. Multiplication with a rectangular window in the time domain corresponds to convolution with a sinc function in the frequency domain, resulting in spurious ringing artifacts for short/localized temporal windows. With the continuous-time Fourier transform, the window is unbounded and this convolution is with a delta function in Fourier space, resulting in the true Fourier transform of the signal. The window function may be some other apodizing filter, such as a Gaussian. The choice of windowing function will affect the approximation error relative to the true Fourier transform.

A given resolution cell's time-bandwidth product may not be exceeded with the STFT. All STFT basis elements maintain a uniform spectral and temporal support for all temporal shifts or offsets, thereby attaining an equal resolution in time for lower and higher frequencies. The resolution is purely determined by the sampling width.

In contrast, the wavelet transform's multiresolutional properties enable large temporal supports for lower frequencies while maintaining short temporal widths for higher frequencies by the scaling properties of the wavelet transform. This property extends conventional time-frequency analysis into time-scale analysis.[3]

The discrete wavelet transform is less computationally complex, taking O(N) time as compared to O(N log N) for the fast Fourier transform. This computational advantage is not inherent to the transform, but reflects the choice of a logarithmic division of frequency, in contrast to the equally spaced frequency divisions of the FFT (fast Fourier transform), which uses the same basis functions as the DFT (discrete Fourier transform).[4] It is also important to note that this complexity only applies when the filter size has no relation to the signal size. A wavelet without compact support, such as the Shannon wavelet, would require O(N²).
(For instance, a logarithmic Fourier transform also exists with O(N) complexity, but the original signal must be sampled logarithmically in time, which is only useful for certain types of signals.[5])

Definition of a wavelet
There are a number of ways of defining a wavelet (or a wavelet family).

Scaling filter
An orthogonal wavelet is entirely defined by the scaling filter – a low-pass finite impulse response (FIR) filter of length 2N and sum 1. In biorthogonal wavelets, separate decomposition and reconstruction filters are defined. For analysis with orthogonal wavelets, the high-pass filter is calculated as the quadrature mirror filter of the low-pass filter, and the reconstruction filters are the time reverse of the decomposition filters. Daubechies and Symlet wavelets can be defined by the scaling filter.

Scaling function
Wavelets are defined by the wavelet function ψ(t) (i.e. the mother wavelet) and the scaling function φ(t) (also called the father wavelet) in the time domain. The wavelet function is in effect a band-pass filter, and scaling it for each level halves its bandwidth. This creates the problem that in order to cover the entire spectrum, an infinite number of levels would be required. The scaling function filters the lowest level of the transform and ensures all the spectrum is covered. See [1] for a detailed explanation. For a wavelet with compact support, φ(t) can be considered finite in length and is equivalent to the scaling filter g. Meyer wavelets can be defined by scaling functions.

Wavelet function
The wavelet only has a time domain representation as the wavelet function ψ(t). For instance, Mexican hat wavelets can be defined by a wavelet function. See a list of a few continuous wavelets.

History
The development of wavelets can be linked to several separate trains of thought, starting with Haar's work in the early 20th century. Later work by Dennis Gabor yielded Gabor atoms (1946), which are constructed similarly to wavelets, and applied to similar purposes.
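The quadrature mirror relation between the scaling (low-pass) filter and the wavelet (high-pass) filter can be sketched directly. A minimal illustration using the Haar filter and the common alternating-sign, time-reversed construction g[n] = (−1)^n h[L−1−n]:

```python
import math

# Sketch: derive the high-pass (wavelet) filter from a scaling (low-pass)
# filter via the quadrature mirror relation g[n] = (-1)**n * h[L-1-n],
# illustrated with the Haar scaling filter.

def qmf(h):
    L = len(h)
    return [(-1) ** n * h[L - 1 - n] for n in range(L)]

h = [1.0 / math.sqrt(2.0), 1.0 / math.sqrt(2.0)]  # Haar low-pass
g = qmf(h)                                        # Haar high-pass

# The analysis filters are orthogonal: sum_n h[n] * g[n] == 0.
dot = sum(a * b for a, b in zip(h, g))
```

For Haar this yields g = [1/√2, −1/√2]: the low-pass averages adjacent samples while the high-pass differences them, and the two are orthogonal.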
Notable contributions to wavelet theory can be attributed to Zweig's discovery of the continuous wavelet transform in 1975 (originally called the cochlear transform and discovered while studying the reaction of the ear to sound),[6] Pierre Goupillaud, Grossmann and Morlet's formulation of what is now known as the CWT (1982), Jan-Olov Strömberg's early work on discrete wavelets (1983), Daubechies' orthogonal wavelets with compact support (1988), Mallat's multiresolution framework (1989), Akansu's Binomial QMF (1990), Nathalie Delprat's time-frequency interpretation of the CWT (1991), Newland's harmonic wavelet transform (1993), and many others since.

Timeline
First wavelet (Haar wavelet) by Alfréd Haar (1909).
Since the 1970s: George Zweig, Jean Morlet, Alex Grossmann.
Since the 1980s: Yves Meyer, Stéphane Mallat, Ingrid Daubechies, Ronald Coifman, Ali Akansu, Victor Wickerhauser.

Wavelet transforms
A wavelet is a mathematical function used to divide a given function or continuous-time signal into different scale components. Usually one can assign a frequency range to each scale component. Each scale component can then be studied with a resolution that matches its scale. A wavelet transform is the representation of a function by wavelets. The wavelets are scaled and translated copies (known as "daughter wavelets") of a finite-length or fast-decaying oscillating waveform (known as the "mother wavelet"). Wavelet transforms have advantages over traditional Fourier transforms for representing functions that have discontinuities and sharp peaks, and for accurately deconstructing and reconstructing finite, non-periodic and/or non-stationary signals.

Wavelet transforms are classified into discrete wavelet transforms (DWTs) and continuous wavelet transforms (CWTs). Note that both DWT and CWT are continuous-time (analog) transforms. They can be used to represent continuous-time (analog) signals.
CWTs operate over every possible scale and translation, whereas DWTs use a specific subset of scale and translation values or representation grid.

There are a large number of wavelet transforms, each suitable for different applications. For a full list see the list of wavelet-related transforms, but the common ones are listed below:
Continuous wavelet transform (CWT)
Discrete wavelet transform (DWT)
Fast wavelet transform (FWT)
Lifting scheme & generalized lifting scheme
Wavelet packet decomposition (WPD)
Stationary wavelet transform (SWT)
Fractional Fourier transform (FRFT)
Fractional wavelet transform (FRWT)

Generalized transforms
There are a number of generalized transforms of which the wavelet transform is a special case. For example, Joseph Segman introduced scale into the Heisenberg group, giving rise to a continuous transform space that is a function of time, scale, and frequency. The CWT is a two-dimensional slice through the resulting 3D time-scale-frequency volume. Another example of a generalized transform is the chirplet transform, in which the CWT is also a two-dimensional slice through the chirplet transform.

An important application area for generalized transforms involves systems in which high frequency resolution is crucial. For example, darkfield electron optical transforms intermediate between direct and reciprocal space have been widely used in the harmonic analysis of atom clustering, i.e. in the study of crystals and crystal defects.[7] Now that transmission electron microscopes are capable of providing digital images with picometer-scale information on atomic periodicity in nanostructures of all sorts, the range of pattern recognition[8] and strain[9]/metrology[10] applications for intermediate transforms with high frequency resolution (like brushlets[11] and ridgelets[12]) is growing rapidly. The fractional wavelet transform (FRWT) is a generalization of the classical wavelet transform in the fractional Fourier transform domains.
This transform is capable of providing the time- and fractional-domain information simultaneously and representing signals in the time-fractional-frequency plane.[13]

Applications of the wavelet transform
Generally, an approximation to the DWT is used for data compression if a signal is already sampled, and the CWT for signal analysis.[14] Thus, a DWT approximation is commonly used in engineering and computer science, and the CWT in scientific research.

Like some other transforms, wavelet transforms can be used to transform data, then encode the transformed data, resulting in effective compression. For example, JPEG 2000 is an image compression standard that uses biorthogonal wavelets. This means that although the frame is overcomplete, it is a tight frame (see types of frame of a vector space), and the same frame functions (except for conjugation in the case of complex wavelets) are used for both analysis and synthesis, i.e., in both the forward and inverse transform. For details see wavelet compression.

A related use is for smoothing/denoising data based on wavelet coefficient thresholding, also called wavelet shrinkage. By adaptively thresholding the wavelet coefficients that correspond to undesired frequency components, smoothing and/or denoising operations can be performed.

Wavelet transforms are also starting to be used for communication applications. Wavelet OFDM is the basic modulation scheme used in HD-PLC (a power line communications technology developed by Panasonic), and in one of the optional modes included in the IEEE 1901 standard. Wavelet OFDM can achieve deeper notches than traditional FFT OFDM, and wavelet OFDM does not require a guard interval (which usually represents significant overhead in FFT OFDM systems).[15]

As a representation of a signal
Often, signals can be represented well as a sum of sinusoids.
However, consider a non-continuous signal with an abrupt discontinuity; this signal can still be represented as a sum of sinusoids, but requires an infinite number, which is an observation known as the Gibbs phenomenon. This, then, requires an infinite number of Fourier coefficients, which is not practical for many applications, such as compression. Wavelets are more useful for describing these signals with discontinuities because of their time-localized behavior (both Fourier and wavelet transforms are frequency-localized, but wavelets have an additional time-localization property). Because of this, many types of signals in practice may be non-sparse in the Fourier domain but very sparse in the wavelet domain. This is particularly useful in signal reconstruction, especially in the recently popular field of compressed sensing. (Note that the short-time Fourier transform (STFT) is also localized in time and frequency, but there are often problems with the frequency-time resolution trade-off. Wavelets are better signal representations because of multiresolution analysis.)

This motivates why wavelet transforms are now being adopted for a vast number of applications, often replacing the conventional Fourier transform. Many areas of physics have seen this paradigm shift, including molecular dynamics, ab initio calculations, astrophysics, density-matrix localisation, seismology, optics, turbulence and quantum mechanics. This change has also occurred in image processing, EEG, EMG,[16] and ECG analyses, brain rhythms, DNA analysis, protein analysis, climatology, human sexual response analysis,[17] general signal processing, speech recognition, acoustics, vibration signals,[18] computer graphics, multifractal analysis, and sparse coding. In computer vision and image processing, the notion of scale-space representation and Gaussian derivative operators is regarded as a canonical multi-scale representation.

Wavelet denoising
Suppose we measure a noisy signal.
Call the measurement x = s + v, where v is the noise. Assume s has a sparse representation in a certain wavelet basis, and v ~ N(0, σ²I). Applying the wavelet transform W gives

    y = Wx = Ws + Wv = p + z.

Most elements in p are 0 or close to 0, and z ~ N(0, σ²I). Since W is orthogonal, the estimation problem amounts to recovery of a signal in iid Gaussian noise. As p is sparse, one method is to apply a Gaussian mixture model for p. Assume a prior p ~ aN(0, σ₁²) + (1 − a)N(0, σ₂²), where σ₁² is the variance of "significant" coefficients and σ₂² is the variance of "insignificant" coefficients. Then the posterior-mean estimate is

    p̃ = τ(y) · y,

where τ(y) is called the shrinkage factor, which depends on the prior variances σ₁² and σ₂². The effect of the shrinkage factor is that small coefficients are set early to 0 and large coefficients are unaltered. Small coefficients are mostly noise, and large coefficients contain the actual signal. Finally, apply the inverse wavelet transform to obtain s̃ = Wᵀp̃.

List of wavelets

Discrete wavelets
• Beylkin (18)
• BNC wavelets
• Coiflet (6, 12, 18, 24, 30)
• Cohen–Daubechies–Feauveau wavelet (sometimes referred to as CDF N/P or Daubechies biorthogonal wavelets)
• Daubechies wavelet (2, 4, 6, 8, 10, 12, 14, 16, 18, 20, etc.)
• Binomial-QMF (also referred to as Daubechies wavelet)
• Haar wavelet
• Mathieu wavelet
• Legendre wavelet
• Villasenor wavelet
• Symlet[19]

Continuous wavelets
Real-valued:
• Beta wavelet
• Hermitian wavelet
• Hermitian hat wavelet
• Meyer wavelet
• Mexican hat wavelet
• Shannon wavelet
Complex-valued:
• Complex Mexican hat wavelet
• fbsp wavelet
• Morlet wavelet
• Shannon wavelet
• Modified Morlet wavelet

See also
Chirplet transform, curvelet, digital cinema, filter banks, fractal compression, fractional Fourier transform, JPEG 2000, multiresolution analysis, noiselet, scale space, shearlet, short-time Fourier transform, ultra wideband radio (transmits wavelets), wave packet, Gabor wavelet,[20] dimension reduction, Fourier-related transforms, spectrogram.

Notes
1. Ricker, Norman (1953). "Wavelet contraction, wavelet expansion, and the control of seismic resolution". Geophysics 18 (4). doi:10.1190/1.1437927.
2. Larson, David R. (2007). "Unitary systems and wavelet sets". Wavelet Analysis and Applications. Appl. Numer. Harmon. Anal. Birkhäuser. pp. 143–171.
3. Mallat, Stéphane.
"A wavelet tour of signal processing. 1998." 250–252.
4. Smith, Steven W. The Scientist and Engineer's Guide to Digital Signal Processing, chapter 8, equation 8-1: /ch8/4.htm
5. http://homepages.dias.ie/~ajones/publications/28.pdf
6. Zweig, George. Biography: /biography/Zweig.html
7. P. Hirsch, A. Howie, R. Nicholson, D. W. Pashley and M. J. Whelan (1965/1977). Electron microscopy of thin crystals (Butterworths, London / Krieger, Malabar FLA). ISBN 0-88275-376-2.
8. P. Fraundorf, J. Wang, E. Mandell and M. Rose (2006). Digital darkfield tableaus, Microscopy and Microanalysis 12:S2, 1010–1011 (cf. arXiv:cond-mat/0403017).
9. M. J. Hÿtch, E. Snoeck and R. Kilaas (1998). Quantitative measurement of displacement and strain fields from HRTEM micrographs, Ultramicroscopy 74:131–146.
10. Martin Rose (2006). Spacing measurements of lattice fringes in HRTEM images using digital darkfield decomposition (M.S. thesis in physics, U. Missouri – St. Louis).
11. F. G. Meyer and R. R. Coifman (1997). Applied and Computational Harmonic Analysis 4:147.
12. A. G. Flesia, H. Hel-Or, A. Averbuch, E. J. Candès, R. R. Coifman and D. L. Donoho (2001). Digital implementation of ridgelet packets (Academic Press, New York).
13. J. Shi, N.-T. Zhang, and X.-P. Liu, "A novel fractional wavelet transform and its applications," Sci. China Inf. Sci., vol. 55, no. 6, pp. 1270–1279, June 2012. URL: /content/q01np2848m388647/
14. A. N. Akansu, W. A. Serdijn and I. W. Selesnick, Emerging applications of wavelets: A review, Physical Communication, Elsevier, vol. 3, issue 1, pp. 1–18, March 2010.
15. Stefano Galli, O. Logvinov (July 2008). "Recent developments in the standardization of power line communications within the IEEE". IEEE Communications Magazine 46 (7): 64–71. doi:10.1109/MCOM.2008.4557044. An overview of the P1901 PHY/MAC proposal.
16. J. Rafiee et al. Feature extraction of forearm EMG signals for prosthetics, Expert Systems with Applications 38 (2011) 4058–67.
17. J. Rafiee et al.
Female sexual responses using signal processing techniques, The Journal of Sexual Medicine 6 (2009) 3086–96. (pdf)
18. J. Rafiee and Peter W. Tse, Use of autocorrelation in wavelet coefficients for fault diagnosis, Mechanical Systems and Signal Processing 23 (2009) 1554–72.
19. Matlab Toolbox – URL: http://matlab.izmiran.ru/help/toolbox/wavelet/ch06_a32.html
20. Erik Hjelmås (1999-01-21). Gabor Wavelets. URL: http://www.ansatt.hig.no/erikh/papers/scia99/node6.html
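The wavelet-shrinkage recipe from the denoising section above (transform, threshold the detail coefficients, invert) can be sketched in a few lines. This is a hedged, self-contained Python illustration using a single Haar level and a fixed soft threshold; all names and the threshold value are ours, chosen for the example:

```python
import math
import random

def haar_step(x):
    """One-level Haar analysis: returns (approximation, detail) lists."""
    s = math.sqrt(2.0)
    n = len(x) // 2
    return ([(x[2 * i] + x[2 * i + 1]) / s for i in range(n)],
            [(x[2 * i] - x[2 * i + 1]) / s for i in range(n)])

def haar_step_inv(a, d):
    """One-level Haar synthesis, inverse of haar_step."""
    s = math.sqrt(2.0)
    out = []
    for ai, di in zip(a, d):
        out += [(ai + di) / s, (ai - di) / s]
    return out

def soft(c, t):
    """Shrink a coefficient toward zero by t; kill it if |c| <= t."""
    return math.copysign(max(abs(c) - t, 0.0), c)

def denoise(x, t):
    a, d = haar_step(x)            # forward wavelet transform
    d = [soft(c, t) for c in d]    # threshold the detail coefficients
    return haar_step_inv(a, d)     # inverse transform

random.seed(0)
clean = [math.sin(2 * math.pi * i / 16) for i in range(64)]
noisy = [c + random.gauss(0, 0.1) for c in clean]
rec = denoise(noisy, 0.2)
```

With threshold 0, the sketch reduces to a perfect-reconstruction round trip; with a positive threshold, the small (mostly noise) detail coefficients are zeroed while large ones survive, which is exactly the shrinkage behavior described above.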
Regularization of wavelet approximations
Regularization of Wavelet Approximations
Anestis Antoniadis and Jianqing Fan

In this paper, we introduce nonlinear regularized wavelet estimators for estimating nonparametric regression functions when sampling points are not uniformly spaced. The approach can apply readily to many other statistical contexts. Various new penalty functions are proposed. The hard-thresholding and soft-thresholding estimators of Donoho and Johnstone are specific members of nonlinear regularized wavelet estimators. They correspond to the lower and upper envelopes of a class of the penalized least squares estimators. Necessary conditions for penalty functions are given for regularized estimators to possess thresholding properties. Oracle inequalities and universal thresholding parameters are obtained for a large class of penalty functions. The sampling properties of nonlinear regularized wavelet estimators are established and are shown to be adaptively minimax. To efficiently solve penalized least squares problems, nonlinear regularized Sobolev interpolators (NRSI) are proposed as initial estimators, which are shown to have good sampling properties. The NRSI is further ameliorated by regularized one-step estimators, which are the one-step estimators of the penalized least squares problems using the NRSI as initial estimators. The graduated nonconvexity algorithm is also introduced to handle penalized least squares problems. The newly introduced approaches are illustrated by a few numerical examples.

KEY WORDS: Asymptotic minimax; Irregular designs; Nonquadratic penalty functions; Oracle inequalities; Penalized least squares; ROSE; Wavelets.

1. INTRODUCTION

Wavelets are a family of orthogonal bases that can effectively compress signals with possible irregularities. They are good bases for modeling statistical functions. Various applications of wavelets in statistics have been made in the literature. See, for example, Donoho and Johnstone (1994), Antoniadis, Grégoire, and McKeague (1994), Hall and Patil (1995), Neumann and
Spokoiny (1995), Antoniadis (1996), and Wang (1996). Further references can be found in the survey papers by Donoho et al. (1995), Antoniadis (1997), and Abramovich, Bailey, and Sapatinas (2000) and books by Ogden (1997) and Vidakovic (1999).

Yet, wavelet applications to statistics are hampered by the requirements that the designs be equispaced and the sample size be a power of 2. Various attempts have been made to relax these requirements. See, for example, the interpolation method of Hall and Turlach (1997), the binning method of Antoniadis, Grégoire, and Vial (1997), the transformation method of Cai and Brown (1997), the isometric method of Sardy et al. (1999), and the interpolation method to a fine regular grid of Kovac and Silverman (2000). However, it poses some challenges to extend these methods to other statistical contexts, such as generalized additive models and generalized analysis of variance models.

In an attempt to make genuine wavelet applications to statistics, we approach the denoising problem from a statistical modeling point of view. The idea can be extended to other statistical contexts. Suppose that we have noisy data at irregular design points {t_1, ..., t_n}:

    Y_i = f(t_i) + ε_i,    ε_i iid N(0, σ²),

[Anestis Antoniadis is Professor, Laboratoire de Modélisation et Calcul, Université Joseph Fourier, 38041 Grenoble Cedex 9, France. Jianqing Fan is Professor of Statistics and Chairman, Department of Statistics, The Chinese University of Hong Kong, Shatin, Hong Kong. Antoniadis's research was supported by the IDOPT project, INRIA-CNRS-IMAG. Fan's research was partially supported by NSF grants DMS-9803200, DMS-9977096, and DMS-0196041 and RGC grant CUHK-4299/00P of HKSAR. The major part of the research was conducted while Fan visited the Université Joseph Fourier. He is grateful for the generous support of the university. The authors thank Arne Kovac and Bernard Silverman for providing their Splus procedure makegrid required for some of the data analyses. The Splus procedure is available at
http://www.stat.mathematik.uni-essen.de/Q kovac/makegrid.tar. The authors thank the editor, the associate editor, and the referees for their comments, which substantially improved this article.]

where f is an unknown regression function to be estimated from the noisy sample. Without loss of generality, assume that the function f is defined on [0, 1]. Assume further that t_i = n_i/2^J for some n_i and some fine resolution J that is determined by users. Usually, 2^J ≥ n, so that the approximation errors incurred by moving nondyadic points to dyadic points are negligible. Let f be the underlying regression function collected at all dyadic points {i/2^J, i = 0, ..., 2^J − 1}. Let W be a given wavelet transform and θ = Wf be the wavelet transform of f. Because W is an orthogonal matrix, f = Wᵀθ.

From a statistical modeling point of view, the unknown signals are modeled by N = 2^J parameters. This is an overparameterized linear model, which aims at reducing modeling biases. One cannot find a reasonable estimate of θ by using the ordinary least squares method. Because wavelets are used to transform the regression function f, its representation in the wavelet domain is sparse; namely, many components of θ are small, for the function f in a Besov space. This prior knowledge enables us to reduce the effective dimensionality and to find reasonable estimates of θ.

To find a good estimator of θ, we apply a penalized least squares method. Denote the sampled data vector by Y_n. Let A be the n × N matrix whose ith row corresponds to the row of the matrix Wᵀ for which the signal f(t_i) is sampled with noise.
Then, the observed data can be expressed as the linear model

    Y_n = Aθ + ε,    ε ~ N(0, σ²I_n),    (1.1)

where ε is the noise vector. The penalized least squares problem is to find θ to minimize

    2⁻¹ ‖Y_n − Aθ‖² + λ Σ_{i=1}^N p(|θ_i|)    (1.2)

for a given penalty function p and regularization parameter λ > 0. The penalty function p is usually nonconvex on [0, ∞) and irregular at point zero to produce sparse solutions. See Theorem 1 for necessary conditions. It poses some challenges to optimize such a high-dimensional nonconvex function.

© 2001 American Statistical Association, Journal of the American Statistical Association, September 2001, Vol. 96, No. 455, Theory and Methods

Our overparameterization approach is complementary to the overcomplete wavelet library methods of Chen, Donoho, and Saunders (1998) and Donoho et al. (1998). Indeed, even when the sampling points are equispaced, one can still choose a large N (N = O(n log n), say) to have better ability to approximate unknown functions. Our penalized method in this case can be viewed as a subbasis selection from an overcomplete family of nonorthogonal bases consisting of N columns of the matrix A.

When n = 2^J, the matrix A becomes a square orthogonal matrix Wᵀ. This corresponds to the canonical wavelet denoising problems studied in the seminal paper by Donoho and Johnstone (1994). The penalized least squares estimator (1.2) can be written as

    2⁻¹ ‖WY_n − θ‖² + λ Σ_{i=1}^N p(|θ_i|).

The minimization of this high-dimensional problem reduces to componentwise minimization problems, and the solution can be easily found. Theorem 1 gives necessary conditions for the solution to be unique and continuous in the wavelet coefficients. In particular, the soft-thresholding rule and the hard-thresholding rule correspond, respectively, to the penalized least squares estimators with the L1 penalty and the hard-thresholding penalty (2.8) discussed in Section 2. These penalty functions have some unappealing features and can be further ameliorated by the smoothly clipped absolute
deviation (SCAD) penalty function and the transformed L1 penalty function. See Section 2.3 for more discussion.

The hard-thresholding and soft-thresholding estimators play no monopoly role in choosing an ideal wavelet subbasis to efficiently represent an unknown function. Indeed, for a large class of penalty functions, we show in Section 3 that the resulting penalized least squares estimators perform within a logarithmic factor of the oracle estimator in choosing an ideal wavelet subbasis. The universal thresholding parameters are also derived. They can easily be translated in terms of regularization parameters λ for a given penalty function p. The universal thresholding parameter given by Donoho and Johnstone (1994) is usually somewhat too large in practice. We expand the thresholding parameters up to the second order, allowing users to choose smaller regularization parameters to reduce modeling biases. The work on the oracle inequalities and universal thresholding is a generalization of the pioneering work of Donoho and Johnstone (1994). It allows statisticians to use other penalty functions with the same theoretical backup.
The risk of the oracle estimator is relatively easy to compute. Because the penalized least squares estimators perform comparably with the oracle estimator, following a calculation similar to, but easier than, that of Donoho et al. (1995), we can show that the penalized least squares estimators with simple data-independent (universal) thresholds are adaptively minimax for the Besov class of functions, for a large class of penalty functions.

Finding a meaningful local minimum of the general problem (1.2) is not easy, because it is a high-dimensional problem with a nonconvex target function. A possible method is to apply the graduated nonconvexity (GNC) algorithm introduced by Blake and Zisserman (1987) and Blake (1989) and ameliorated by Nikolova (1999) and Nikolova, Idier, and Mohammad-Djafari (in press) in the imaging analysis context. The algorithm contains good ideas on optimizing high-dimensional nonconvex functions, but its implementation depends on several tuning parameters. It is reasonably fast, but it is not nearly as fast as the canonical wavelet denoising. See Section 6 for details. To have a fast estimator, we impute the unobserved data by using regularized Sobolev interpolators. This allows one to apply coefficientwise thresholding to obtain an initial estimator. This yields a viable initial estimator, called nonlinear regularized Sobolev interpolators (NRSI).
This estimator is shown to have good sampling properties. By using this NRSI to create synthetic data and applying the one-step penalized least squares procedure, we obtain a regularized one-step estimator (ROSE). See Section 4. Another possible approach to denoising nonequispaced signals is to design adaptively nonorthogonal wavelets to avoid overparameterization problems. A viable approach is the wavelet networks proposed by Bernard, Mallat, and Slotine (1999).

An advantage of our penalized wavelet approach is that it can readily be applied to other statistical contexts, such as likelihood-based models, in a manner similar to smoothing splines. One can simply replace the normal likelihood in (1.2) by a new likelihood function. Further, it can be applied to high-dimensional statistical models such as generalized additive models. Details of these require a lot of new work and hence are not discussed here. Penalized likelihood methods were successfully used by Tibshirani (1995), Barron, Birgé, and Massart (1999), and Fan and Li (1999) for variable selection. Thus, they should also be viable for wavelet applications to other statistical problems.

When the sampling points are equispaced, the use of penalized least squares for regularizing wavelet regression was proposed by Solo (1998), McCoy (1999), Moulin and Liu (1999), and Belge, Kilmer, and Miller (2000). In Solo (1998), the penalized least squares with an L1 penalty is modified to a weighted least squares to deal with correlated noise, and an iterative algorithm is discussed for its solution. The choice of the regularization parameter is not discussed. By analogy to smoothing splines, McCoy (1999) used a penalty function that simultaneously penalizes the residual sum of squares and the second derivative of the estimator at the design points. For a given regularization parameter, the solution of the resulting optimization problem is found by using simulated annealing, but there is no suggestion in her work of a possible method of choosing the smoothing
parameter. Moreover, although the proposal is attractive, the optimization algorithm is computationally demanding. In Moulin and Liu (1999), the soft- and hard-thresholded estimators appeared as maximum a posteriori (MAP) estimators in the context of Bayesian estimation under zero-one loss, with generalized Gaussian densities serving as a prior distribution for the wavelet coefficients. A similar approach was used by Belge et al. (2000) in the context of wavelet-domain image restoration. The smoothing parameter in Belge et al. (2000) was selected by the L-curve criterion (Hansen and O'Leary 1993). It is known, however (Vogel 1996), that such a criterion can lead to nonconvergent solutions, especially when the function to be recovered presents some irregularities. Although there is no conceptual difficulty in applying the penalized wavelet method to other statistical problems, the dimensionality involved is usually very high. Its fast implementations require some new ideas, and the GNC algorithm offers a generic numerical method.

This article is organized as follows. In Section 2, we introduce Sobolev interpolators and penalized wavelet estimators. Section 3 studies the properties of penalized wavelet estimators when the data are uniformly sampled. Implementations of penalized wavelet estimators in the general setting are discussed in Section 4. Section 5 gives numerical results of our newly proposed estimators. Two other possible approaches are discussed in Section 6. Technical proofs are presented in the Appendix.

2. REGULARIZATION OF WAVELET APPROXIMATIONS

The problem of signal denoising from nonuniformly sampled data arises in many contexts. The signal recovery problem is ill posed, and smoothing can be formulated as an optimization problem with side constraints to narrow the class of candidate solutions.

We first briefly discuss wavelet interpolation by using a regularized wavelet method. This serves as a crude initial value to our proposed
penalized least squares method. We then discuss the relation between this and nonlinear wavelet thresholding estimation when the data are uniformly sampled.

2.1 Regularized Wavelet Interpolations

Assume for the moment that the signals are observed with no noise, i.e., ε = 0 in (1.1). The problem becomes an interpolation problem, using a wavelet transform. Being given signals only at the nonequispaced points {t_i, i = 1, ..., n} necessarily means that we have no information at other dyadic points. In terms of the wavelet transform, this means that we have no knowledge about the scaling coefficients at points other than the t_i's. Let

    f_n = (f(t_1), ..., f(t_n))ᵀ

be the observed signals. Then, from (1.1) and the assumption ε = 0, we have

    f_n = Aθ.    (2.1)

Because this is an underdetermined system of equations, there exist many different solutions for θ that match the given sampled data f_n. For the minimum Sobolev solution, we choose the f that interpolates the data and minimizes the weighted Sobolev norm of f. This would yield a smooth interpolation to the data. The Sobolev norms of f can be simply characterized in terms of the wavelet coefficients θ. For this purpose, we use the double-array sequence θ_{j,k} to denote the wavelet coefficient at the jth resolution level and kth dyadic location (k = 1, ..., 2^{j−1}). A Sobolev norm of f with degree of smoothness s can be expressed as

    ‖θ‖²_S = Σ_j 2^{2sj} ‖θ_{j·}‖²,

where θ_{j·} is the vector of the wavelet coefficients at the resolution level j. Thus, we can restate this problem as a wavelet-domain optimization problem: minimize ‖θ‖²_S subject to constraint (2.1). The solution (Rao 1973) is what is called the normalized method of frame, whose solution is given by

    θ = DAᵀ(ADAᵀ)⁻¹ f_n,

where D = diag(2^{−2s j_i}), with j_i denoting the resolution level with which θ_i is associated. An advantage of the method of frame is that it does not involve the choice of a regularization parameter (unless s is regarded as a smoothing parameter). When s = 0, θ = Aᵀf_n by orthogonality. In this case, the interpolator is particularly easy to compute.

As an illustration of how the regularized wavelet interpolations work, we took 100 data points (located at the tick marks) from the function depicted in Figure 1(a). Figure 1, (b)–(d), shows how the method of frame works for different values of s. As s increases, the interpolated functions become smoother. In fact, for a large range of values of s, the wavelet interpolations do not create excessive biases.

2.2 Regularized Wavelet Estimators

Assume now that the observed data follow model (1.1). The traditional regularization problem can be formulated in the wavelet domain as follows: find the minimum of

    2⁻¹ ‖Y_n − Aθ‖² + λ‖θ‖²_S.    (2.2)

The resulting estimation procedure parallels standard spline-smoothing techniques. Several variants of such a penalized approach for estimating less regular curves via their wavelet decomposition have been suggested by several authors (Antoniadis 1996; Amato and Vuza 1997; Dechevsky and Penev 1999). The resulting estimators are linear estimators of shrinkage type with a level-dependent shrinking of the empirical wavelet coefficients. Several data-driven methods were proposed for the determination of the penalty parameter λ, and we refer the readers to the cited papers for rigorous treatments on the choice of the regularization parameter for such linear estimators.

The preceding leads to a regularized linear estimator. In general, one can replace the Sobolev norm by other penalty functions, leading to minimizing

    ℓ(θ) = 2⁻¹ ‖Y_n − Aθ‖² + λ Σ_{i ≥ i_0} p(|θ_i|)    (2.3)

for a given penalty function p(·) and given value i_0. This corresponds to penalizing wavelet coefficients above a certain resolution level j_0. Here, to facilitate the presentation, we changed the notation θ_{j,k} from a double-array sequence into a single-array sequence θ_i. The problem (2.3) produces stable and sparse solutions for functions p satisfying certain properties. The solutions, in general, are nonlinear. See the results of Nikolova (2000) and Section 3 below.

2.3 Penalty
Functions and Nonlinear Wavelet Estimators

[Figure 1. Wavelet interpolations by method of frame. As the degree of smoothness s becomes larger, the interpolated functions become smoother. (a) The target function and sampling points, true curve (tick marks); (b)–(d) wavelet interpolations with s = 0.5, s = 1.4, and s = 6.0.]

The regularized wavelet estimators are an extension of the soft- and hard-thresholding rules of Donoho and Johnstone (1994). When the sampling points are equally spaced and n = 2^J, the design matrix A in (2.1) becomes the inverse wavelet transform matrix Wᵀ. In this case, (2.3) becomes

    2⁻¹ Σ_{i=1}^n (z_i − θ_i)² + λ Σ_{i ≥ i_0} p(|θ_i|),    (2.4)

where z_i is the ith component of the wavelet coefficient vector z = WY_n. The solution to this problem is a componentwise minimization problem, whose properties are studied in the next section. To reduce abuse of notation, and because p(|θ|) is allowed to depend on λ, we use p_λ to denote the penalty function λp in the following discussion. For the L1 penalty [Figure 2(a)],

    p_λ(|θ|) = λ|θ|,    (2.5)

the solution is the soft-thresholding rule (Donoho et al. 1992). A clipped L1 penalty

    p(θ) = λ min(|θ|, λ)    (2.6)

leads to a mixture of soft- and hard-thresholding rules (Fan 1997):

    θ̂_j = (|z_j| − λ)_+ I{|z_j| ≤ 1.5λ} + |z_j| I{|z_j| > 1.5λ}.    (2.7)

When the penalty function is given by

    p_λ(|θ|) = λ² − (|θ| − λ)² I(|θ| < λ)    (2.8)

[see Figure 2(b)], the solution is the hard-thresholding rule (Antoniadis 1997). This is a smoother penalty function than p_λ(|θ|) = |θ| I(|θ| < λ) + (λ/2) I(|θ| ≥ λ), suggested by Fan (1997), and the entropy penalty p_λ(|θ|) = 2⁻¹λ² I(|θ| ≠ 0), which lead to the same solution. The hard-thresholding rule is discontinuous, whereas the soft-thresholding rule shifts the estimator by an amount of λ even when |z_i| stands way out of the noise level, which creates unnecessary bias when θ is large.
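The two classical rules just discussed can be written down directly. Here is a minimal Python sketch (the function names are ours; z is a single wavelet coefficient and lam the threshold λ):

```python
import math

def soft_threshold(z, lam):
    """Soft-thresholding rule, the solution under the L1 penalty (2.5):
    shrink |z| by lam and kill anything that crosses zero."""
    return math.copysign(max(abs(z) - lam, 0.0), z)

def hard_threshold(z, lam):
    """Hard-thresholding rule, the solution under penalty (2.8):
    keep the coefficient unchanged or kill it."""
    return z if abs(z) > lam else 0.0
```

The sketch reproduces the two drawbacks noted in the text: hard_threshold is discontinuous at |z| = lam, while soft_threshold shifts every surviving coefficient by lam (e.g. z = 10, lam = 1 gives 9 rather than 10), a bias even when |z| is far above the noise level.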
To ameliorate these two drawbacks, Fan (1997) suggests using the quadratic spline penalty, called the smoothly clipped absolute deviation (SCAD) penalty [see Figure 2(c)], with derivative

    p′_λ(θ) = λ { I(θ ≤ λ) + (aλ − θ)_+ / ((a − 1)λ) · I(θ > λ) }  for θ > 0 and a > 2,    (2.9)

leading to the piecewise linear thresholding

    θ̂_j = sgn(z_j)(|z_j| − λ)_+                   when |z_j| ≤ 2λ,
    θ̂_j = {(a − 1)z_j − aλ sgn(z_j)}/(a − 2)      when 2λ < |z_j| ≤ aλ,
    θ̂_j = z_j                                     when |z_j| > aλ.    (2.10)

Fan and Li (1999) recommended using a = 3.7 based on a Bayesian argument. This thresholding estimator is in the same spirit as that of Gao and Bruce (1997). This penalty function does not overpenalize large values of |θ| and hence does not create excessive biases when the wavelet coefficients are large.

[Figure 2. Typical penalty functions that preserve sparsity. (a) L_p penalty with p = 1 (long dash), p = 0.6 (short dash), and p = 0.2 (solid); (b) hard-thresholding penalty (2.8); (c) SCAD (2.9) with a = 3.7; (d) transformed L1 penalty (2.11) with b = 3.7.]

Nikolova (1999b) suggested the following transformed L1 penalty function [see Figure 2(d)]:

    p_λ(|x|) = λ b|x| (1 + b|x|)⁻¹  for some b > 0.    (2.11)

This penalty function behaves quite similarly to the SCAD suggested by Fan (1997). Both are concave on [0, ∞) and do not intend to overpenalize large |θ|. Other possible functions include the L_p penalty introduced (in image reconstruction) by Bouman and Sauer (1993):

    p_λ(|θ|) = λ|θ|^p  (p ≥ 0).    (2.12)

As shown in Section 3.1, the choice p ≤ 1 is a necessary condition for the solution to be a thresholding estimator, whereas p ≥ 1 is a necessary condition for the solution to be continuous in z. Thus, the L1 penalty function is the only member in this family that yields a continuous thresholding solution.
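The piecewise rule (2.10) translates directly into code. A small Python sketch (our naming): soft-thresholding near the origin, a linear transition in the middle zone, and the identity for large |z|, so large coefficients are left unbiased:

```python
import math

def scad_threshold(z, lam, a=3.7):
    """SCAD thresholding rule (2.10); a = 3.7 is the value
    recommended by Fan and Li (1999)."""
    az = abs(z)
    if az <= 2 * lam:
        # soft-thresholding zone
        return math.copysign(max(az - lam, 0.0), z)
    if az <= a * lam:
        # linear transition between shrinkage and identity
        return ((a - 1) * z - math.copysign(a * lam, z)) / (a - 2)
    # large coefficients pass through unchanged: no bias
    return z
```

One can check that the three branches agree at the breakpoints |z| = 2λ and |z| = aλ, so the rule is continuous, unlike hard thresholding.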
Finally, we note that the regularization parameter λ for different penalty functions has a different scale. For example, the value λ in the L1 penalty function is not the same as that in the L_p penalty (0 ≤ p < 1). Figure 2 depicts some of these penalty functions. Their componentwise solutions to the corresponding penalized least squares problem (2.4) are shown in Figure 3.

3. ORACLE INEQUALITIES AND UNIVERSAL THRESHOLDING

As mentioned in Section 2.3, there are many competing thresholding policies. They provide statisticians and engineers a variety of choices of penalty functions with which to estimate functions with irregularities and to denoise images with sharp features. However, these have not yet been systematically studied. We first study the properties of penalized least squares estimators, and then we examine the extent to which they can mimic the oracle in choosing the regularization parameter λ.

3.1 Characterization of Penalized Least Squares Estimators

Let p(·) be a nonnegative, nondecreasing, and differentiable function on (0, ∞). The clipped L1 penalty function (2.6) does not satisfy this condition and will be excluded in the study.
All other penalty functions satisfy this condition. Consider the following penalized least squares problem: minimize with respect to θ

    ℓ(θ) = (z − θ)²/2 + p_λ(|θ|)    (3.1)

for a given penalty parameter λ. This is a componentwise minimization problem of (2.4). Note that the function in (3.1) tends to infinity as |θ| → ∞. Thus, minimizers do exist. Let θ̂(z) be a solution. The following theorem gives the necessary conditions (indeed, they are sufficient conditions, too) for the solution to be thresholding, to be continuous, and to be approximately unbiased when |z| is large.

[Figure 3. Penalized least squares estimators that possess thresholding properties. (a) The penalized L1 estimator and the hard-thresholding estimator (dashed); (b) the penalized L_p estimator with p = 0.6; (c) the penalized SCAD estimator (2.10); (d) the penalized transformed L1 estimator with b = 3.7.]

Theorem 1. Let p_λ(·) be a nonnegative, nondecreasing, and differentiable function on (0, ∞). Further, assume that the function −θ − p′_λ(θ) is strictly unimodal on (0, ∞). Then we have the following results.

1. The solution to the minimization problem (3.1) exists and is unique. It is antisymmetric: θ̂(−z) = −θ̂(z).
2. The solution satisfies

    θ̂(z) = 0                             if |z| ≤ p_0,
    θ̂(z) = z − sgn(z) p′_λ(|θ̂(z)|)      if |z| > p_0,

where p_0 = min_{θ ≥ 0} {θ + p′_λ(θ)}. Moreover, |θ̂(z)| ≤ |z|.
3. If p′_λ(·) is nonincreasing, then for |z| > p_0, we have

    |z| − p_0 ≤ |θ̂(z)| ≤ |z| − p′_λ(|z|).

4. When p′_λ(θ) is continuous on (0, ∞), the solution θ̂(z) is continuous if and only if the minimum of |θ| + p′_λ(|θ|) is attained at point zero.
5. If p′_λ(|z|) → 0 as |z| → +∞, then

    θ̂(z) = z − p′_λ(|z|) + o(p′_λ(|z|)).

We now give the implications of these results. When p′_λ(0+) > 0, p_0 > 0. Thus, for |z| ≤ p_0, the estimate is thresholded to 0. For |z| > p_0, the solution has a shrinkage property. The amount of shrinkage is sandwiched between the soft-thresholding and hard-thresholding estimators, as shown in result 3. In other words, the hard- and soft-thresholding estimators of Donoho and
Johnstone (1994)correspond to theextreme cases of a large class of penalized least squares esti-mators.We add that a different estimator O ˆmay require dif-ferent thresholding parameter p 0and,hence the estimator O ˆis not necessarily sandwiched by the hard-and soft-thresholding estimators by using different thresholding parameters.Further,the amount of shrinkage gradually tapers off as —z —gets largewhen p 0‹4—z —5goes to zero.For example,the penalty func-tion p ‹4—ˆ—5D ‹r ƒ1—ˆ—r for r 240117satis es this condition.The case r D 1corresponds to the soft-thresholding.When 0<r <1,p 0D 42ƒr5841ƒr5r ƒ1‹91=42ƒr51and when —z —>p 0,O ˆ4z5satis es the equationO ˆC ‹O ˆr ƒ1D z0In particular,when r !0,O ˆ!O ˆ0²4z C pz 2ƒ4‹5=2D z=41C ‹z ƒ25C O4z ƒ450The procedure corresponds basically to the Garotte estimatorin Breiman (1995).When the value of —z —is large,one is cer-tain that the observed value —z —is not noise.Hence,one does not wish to shrink the value of z ,which would result in under-estimating ˆ.Theorem 1,result 4,shows that this property holds when p ‹4—ˆ—5D ‹r ƒ1—ˆ—r for r 240115.This amelioratesAntoniadis and Fan:Regularization of Wavelet Approximations 945the property of the soft-thresholding rule,which always shifts the estimate z by an amount of ….However,by Theorem 1,result 4,the solution is not continuous.3.2Risks of Penalized Least Squares EstimatorsWe now study the risk function of the penalized leastsquares estimator O ˆthat minimizes (3.1).Assume Z N 4ˆ115.Denote byR p 4ˆ1p 05D E8O ˆ4Z5ƒˆ920Note that the thresholding parameter p 0is equivalent to theregularization parameter ‹.We explicitly use the thresholding parameter p 0because it is more relevant.For wavelet appli-cations,the thresholding parameter p 0will be in the order of magnitude of the maximum of the Gaussian errors.Thus,we consider only the situation where the thresholding level is large.In the following theorem,we give risk bounds for penal-ized least squares estimators for general penalty 
functions.The bounds are quite sharp because they are comparable with those for the hard-thresholding estimator given by Donoho and Johnstone (1994).A shaper bound will be considered numer-ically in the following section for a speci c penalty function.Theorem 2.Suppose p satis es conditions in Theorem 1and p 0‹40C 5>0.Then 1.R p 4ˆ1p 05µ1C ˆ2.2.If p 0‹4¢5is nonincreasing,thenR p 4ˆ1p 05µp 20C p2= p 0C 103.R p 401p 05µp 2= 4p 0C p ƒ105exp 4ƒp 20=25.4.R p 4ˆ1p 05µR p 401ˆ5C 2ˆ2.Note that properties 1–4are comparable with those for the hard-thresholding and soft-thresholding rules given by Donoho and Johnstone (1994).The key improvement here is that the results hold for a larger class of penalty functions.3.3Oracle Inequalities and Universal ThresholdingFollowing Donoho and Johnstone (1994),when the true sig-nal ˆis given,one would decide whether to estimate the coef- cient,depending on the value of —ˆ—.This leads to an idealoracle estimator O ˆo D ZI4—ˆ—>15,which attains the ideal L 2-risk min 4ˆ2115.In the following discussions,the constant n can be arbitrary.In our nonlinear wavelet applications,the constant n is the sample size.As discussed,the selection of ‹is equivalent to the choice of p 0.Hence,we focus on the choice of the thresholding parameter p 0.When p 0D p 2log n ,the universal thresholding proposed by Donoho and Johnstone (1994),by property (3)of Theorem 2,R p 401p 05µp2log n51=2C 19=n when p 0¶11which is larger than the ideal risk.To bound the risk of thenonlinear estimator O ˆ4Z5by that of the oracle estimator O ˆo ,we need to add an amount cn ƒ1for some constant c to the risk of the oracle estimator,because it has no risk at point ˆD 0.More precisely,we de neån1c1p 04p5D supˆR p 4ˆ1p 05cn ƒ12115and denote ån1c1p 04p5by ån1c 4p5for the universal thresh-olding p 0D p2log n .Then,ån1c1p 04p5is a sharp risk upper bound for using the universal thresholding parameter p 0.That is,R p 4ˆ1p 05µån1c1p 04p58cn ƒ1C min 4ˆ211590(3.2)Thus,the penalized least 
squares estimator O ˆ4Z5performscomparably with the oracle estimator within a factor of ån1c1p 04p5.Likewise,letån1c 4p5D inf p 0supˆR p 4ˆ1p 05cn ƒ12115andp n D the largest constant attaining ån1c 4p50Then,the constant ån1c 4p5is the sharp risk upper bound using the minimax optimal thresholding p n .Necessarily,R p 4ˆ1p n 5µån1c 4p n 58cn ƒ1C min 4ˆ211590(3.3)Donoho and Johnstone (1994)noted that the universal thresholding is somewhat too large.This is observed in prac-tice.In this section,we propose a new universal thresholding policy,that takes the second order into account.This gives a lower bound under which penalized least squares estima-tors perform comparably with the oracle estimator.We then establish the oracle inequalities for a large variety of penalty functions.I mplications of these on the regularized wavelet estimators are given in the next section.By Theorem 2,property 2,for any penalized least squares estimator,we haveR p 4ˆ1p 05µ2log n Cp4= 4log n51=2C 1(3.4)if p 0µp2log n .This is a factor of log n order larger than the oracle estimator.The extra log n term is necessary because thresholding estimators create biases of order p 0at —ˆ—p 0.The risk in 60117can be better bounded by using the following lemma.Lemma 1.I f the penalty function satis es conditions ofTheorem 1and p 0‹4¢5is nonincreasing and p 0‹40C 5>0,thenR p 4ˆ1p 05µ42log n C 2log 1=2n58c=n C min 4ˆ21159for the universal thresholdingp 0Dp0µd µc 21with n ¶4and c ¶1and p 0>1014.。
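The thresholding rules characterized above are easy to experiment with numerically. The sketch below is an illustration, not code from the paper: it implements soft and hard thresholding and solves the componentwise problem (3.1) for the penalty $p_\lambda(|\theta|) = \lambda r^{-1}|\theta|^r$ with $0 < r < 1$ by bisection, so that Theorem 1's threshold formula and the sandwich bound of result 3 can be checked. All function names are ours.

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding: the solution for the L1 penalty p(|x|) = t*|x|."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def hard(z, t):
    """Hard-thresholding: keep-or-kill."""
    return z * (np.abs(z) > t)

def lr_threshold(z, lam=1.0, r=0.5, iters=80):
    """Componentwise minimizer of (z - x)^2/2 + (lam/r)*|x|^r for 0 < r < 1.

    Below the threshold p0 = (2-r)*{(1-r)^(r-1)*lam}^(1/(2-r)) the minimizer
    is 0; above it, it is the larger root of x + lam*x^(r-1) = |z| (Theorem 1,
    result 2), found here by bisection on the increasing branch of g."""
    p0 = (2 - r) * ((1 - r) ** (r - 1) * lam) ** (1.0 / (2 - r))
    az = abs(z)
    if az <= p0:
        return 0.0
    # g(x) = x + lam*x^(r-1) attains its minimum p0 at x = {lam*(1-r)}^(1/(2-r))
    # and increases beyond it, so [x_min, az] brackets the larger root.
    lo = (lam * (1 - r)) ** (1.0 / (2 - r))
    hi = az
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mid + lam * mid ** (r - 1) < az:
            lo = mid
        else:
            hi = mid
    return np.sign(z) * 0.5 * (lo + hi)
```

For $\lambda = 1$, $r = 0.5$, and $z = 3$, the solver returns a value strictly between the soft- and hard-thresholded estimates at $p_0 \approx 1.89$, as result 3 predicts.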
Detection of Ship Magnetic Field Signal Based on Wavelet Transform*

LUO Jingbo  LIN Chunsheng
(Department of Weapon Engineering, Naval University of Engineering, Wuhan 430033)

* Received October 11, 2019; revised November 22, 2019. LUO Jingbo, male, master's degree candidate; research interests: military target signatures and information sensing technology. LIN Chunsheng, male, Ph.D., professor; research interests: military target signatures and information sensing technology.

Abstract  The wavelet transform is an effective multi-resolution analysis tool with good time-frequency analysis capability that can extract weak signals under non-stationary noise. Aimed at the low signal-to-noise ratio and low-frequency characteristics of the ship magnetic field, a preliminary method for extracting the ship magnetic field signal based on the wavelet transform is proposed. Wavelet decomposition is performed on the noisy ship magnetic field signal, the coefficients at each level are processed nonlinearly, and the signal is then reconstructed, achieving a preliminary extraction of the target. Simulation results show that when the signal-to-noise ratio of the original signal is above 5 dB, the target can be extracted well. Experimental results also show that the ship magnetic field signal can be preliminarily extracted based on the wavelet transform. This work provides a precondition for subsequent ship identification and localization.

Key Words  wavelet transform; filtering; ship magnetic field; signal extraction
CLC Number  TN911.72; TM936.1    DOI: 10.3969/j.issn.1672-9730.2020.04.038    (Serial No. 310)

1 Introduction

The magnetic field is a special signal source for underwater-warfare detection. It is not restricted by water conditions, offers high sensitivity and resolution, is compatible with all kinds of electronic equipment, and provides a source signal for direction recognition, ship navigation, and other applications; it has therefore been widely used in weapon fuzes [1~5]. The magnetic fuze is an important fuzing mode for sea mines. In practical ship magnetic field detection, however, the target field strength and the geomagnetic field strength differ by orders of magnitude, and the geomagnetic field also exhibits normal fluctuations, so the target signal is buried in the geomagnetic field at a low signal-to-noise ratio, making effective detection and analysis of the ship magnetic field difficult [6]. The wavelet transform has good time-frequency analysis capability and can extract weak signals under non-stationary noise [7]. This paper uses a wavelet-transform-based extraction method to separate the ship magnetic field signal from geomagnetic noise, to facilitate subsequent ship magnetic field recognition and target localization.

2 Ship Magnetic Field and Wavelet Filtering

2.1 Analysis of the Ship Magnetic Field

The vast majority of modern ships are built of steel; they are magnetized by the geomagnetic field and thereby form the ship magnetic field.
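The decomposition, nonlinear processing, and reconstruction pipeline summarized in the abstract can be sketched as follows. This is an illustrative stand-in, not the paper's implementation: it uses an orthonormal Haar transform and soft thresholding at the universal threshold, whereas the paper's choice of wavelet basis and nonlinear rule is developed in its later sections.

```python
import numpy as np

def haar_dwt(x, levels):
    """Multi-level orthonormal Haar decomposition.
    len(x) must be divisible by 2**levels. Returns the coarse approximation
    and the detail coefficients, finest level first."""
    coeffs, a = [], np.asarray(x, dtype=float)
    for _ in range(levels):
        even, odd = a[0::2], a[1::2]
        coeffs.append((even - odd) / np.sqrt(2.0))   # detail
        a = (even + odd) / np.sqrt(2.0)              # approximation
    return a, coeffs

def haar_idwt(a, coeffs):
    """Exact inverse of haar_dwt."""
    for d in reversed(coeffs):
        out = np.empty(2 * a.size)
        out[0::2] = (a + d) / np.sqrt(2.0)
        out[1::2] = (a - d) / np.sqrt(2.0)
        a = out
    return a

def denoise(x, levels=5, sigma=None):
    """Soft-threshold all detail coefficients at the universal threshold."""
    a, coeffs = haar_dwt(x, levels)
    if sigma is None:
        # robust noise estimate from the finest-level details
        sigma = np.median(np.abs(coeffs[0])) / 0.6745
    t = sigma * np.sqrt(2.0 * np.log(len(x)))
    coeffs = [np.sign(d) * np.maximum(np.abs(d) - t, 0.0) for d in coeffs]
    return haar_idwt(a, coeffs)
```

On a simulated slow "target passage" anomaly buried in Gaussian geomagnetic-like noise, the reconstruction error after thresholding is substantially lower than that of the raw noisy record, consistent with the extraction effect described in the abstract.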
Petrel official seismic interpretation training: M4_Well_Ties_Synthetics
[Slide screenshot: despiked log display, with numbered callouts.]
Log estimation – Make density from sonic
1. Right-click the well where an estimated log should be generated.
2. Click Use existing, select Global Well Log from the list, then click Execute.
[Numbered callouts on the slide screenshot mark these steps.]
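When no measured density log exists, the estimated log is typically derived from sonic with an empirical relation. A common choice, assumed here purely for illustration (Petrel offers several estimation methods, and the coefficients are calibrated locally), is Gardner's equation $\rho = a\,V_p^{\,b}$:

```python
import numpy as np

def density_from_sonic(dt_us_per_ft, a=0.23, b=0.25):
    """Gardner's relation rho = a * Vp**b (Vp in ft/s, rho in g/cc).

    dt_us_per_ft is the sonic log: slowness in microseconds per foot.
    The defaults are Gardner's average coefficients for brine-saturated
    sedimentary rocks; in practice a and b are fitted to local data.
    """
    vp_ft_s = 1.0e6 / np.asarray(dt_us_per_ft, dtype=float)
    return a * vp_ft_s ** b
```

For example, a sonic reading of 100 µs/ft corresponds to Vp = 10,000 ft/s and a density of 0.23 × 10,000^0.25 = 2.3 g/cc.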
Acoustic impedance and Reflection coefficient series
1. In the Seismogram tab, click on Create synthetic seismogram
2. Select the Density log to use.
[Numbered callouts on the slide screenshot mark these steps. Toolbar icons: open the settings page for the object; create a new object; open the spreadsheet for the object.]
Well selection
1. Select a single well, or the Wells folder, to work with from the drop-down menu.
Synthetic seismogram
1. Click on the Synthetic seismogram pull-down menu to view its content.
2. The reflection coefficients and the wavelet are input when Apply or OK is clicked.
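The computation behind these dialogs can be sketched end to end: acoustic impedance from velocity and density, reflection coefficients at layer interfaces, and convolution with a wavelet. A zero-phase Ricker wavelet is assumed here for illustration; in Petrel the wavelet is chosen or extracted separately, and the function names below are ours.

```python
import numpy as np

def ricker(f, dt, length=0.128):
    """Zero-phase Ricker wavelet with peak frequency f (Hz), sampled at dt (s)."""
    t = np.arange(-length / 2, length / 2 + dt / 2, dt)
    a = (np.pi * f * t) ** 2
    return t, (1 - 2 * a) * np.exp(-a)

def synthetic_seismogram(vp, rho, f=30.0, dt=0.002):
    """Acoustic impedance -> reflection coefficient series -> synthetic trace."""
    ai = np.asarray(vp, float) * np.asarray(rho, float)   # acoustic impedance
    rc = (ai[1:] - ai[:-1]) / (ai[1:] + ai[:-1])          # reflection coefficients
    _, w = ricker(f, dt)
    return rc, np.convolve(rc, w, mode="same")            # convolutional model
```

For a two-layer model the reflection series is a single spike at the interface, and the synthetic trace is that spike convolved with the wavelet, peaking at the interface sample.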
On the computation of periodic spline wavelets
Gerlind Plonka and Manfred Tasche, Fachbereich Mathematik, Universität Rostock, D-18051 Rostock, Germany
Abstract
The presented algorithms can be applied to the decomposition and reconstruction of L2(R)-functions, too. Further, using the output of the decomposition and reconstruction algorithms, respectively, a fast algorithm for the computation of function values is given. In Section 6 we efficiently construct the Fourier-transformed coefficients of the M-periodic spline interpolant f_j in V_j of a given continuous function f in L2_M such that our decomposition and reconstruction algorithms can be applied. Finally, in Section 7 the new decomposition algorithm is used to analyze the local regularity of a given M-periodic function.
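Fourier-domain decomposition algorithms of this kind rest on a standard fact: for M-periodic sequences, circular convolution becomes pointwise multiplication of DFT coefficients, so a periodic filter bank can be run entirely on FFT data. A minimal sketch of that building block follows; it is generic DSP, not the authors' specific algorithm.

```python
import numpy as np

def circular_filter(x, h):
    """Circular (periodic) convolution via the DFT: ifft(fft(x) * fft(h)).
    x and h must share the same period M; cost is O(M log M)."""
    return np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real

def circular_filter_direct(x, h):
    """Reference O(M^2) definition: y[n] = sum_k x[k] * h[(n - k) mod M]."""
    m = len(x)
    return np.array([sum(x[k] * h[(n - k) % m] for k in range(m))
                     for n in range(m)])
```

Both routines agree to machine precision; the FFT route is what makes periodic spline decomposition and reconstruction fast.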
Wavelet Theory Demystified
Michael Unser, Fellow, IEEE, and Thierry Blu, Member, IEEE

Abstract—In this paper, we revisit wavelet theory starting from the representation of a scaling function as the convolution of a B-spline (the regular part of it) and a distribution (the irregular or residual part). This formulation leads to some new insights on wavelets and makes it possible to rederive the main results of the classical theory, including some new extensions for fractional orders, in a self-contained, accessible fashion. In particular, we prove that the B-spline component is entirely responsible for five key wavelet properties: order of approximation, reproduction of polynomials, vanishing moments, multiscale differentiation property, and smoothness (regularity) of the basis functions. We also investigate the interaction of wavelets with differential operators, giving explicit time-domain formulas for the fractional derivatives of the basis functions. This allows us to specify a corresponding dual wavelet basis and helps us understand why the wavelet transform provides a stable characterization of the derivatives of a signal. Additional results include a new peeling theory of smoothness, leading to the extended notion of wavelet differentiability in the L_p sense.

[Introduction; several formulas in this chunk were lost in extraction.] The transforms of the filters satisfy the perfect reconstruction (PR) equations also given in Fig. 1 [1], [3]. In the tree-structured wavelet transform, the decomposition step is further iterated on the lowpass component [...], which is an expression that involves some number of "regularity" factors [...] satisfying the lowpass constraint. A nontrivial factor [...] in all the desired mathematical properties. Although the effect of the regularity factors is well understood by mathematicians working with wavelets, we are not aware of any deliberate effort to explain these properties from the perspective of B-splines.
While it could be argued that this is essentially a matter of interpretation (regularity factors are equivalent to B-splines), we believe that the present formulation makes the whole matter more transparent and accessible. Our only prerequisite here is to have a complete understanding of the properties of B-splines, which is much easier than for other wavelets, since these are the only ones that have explicit formulas. We then use relatively simple manipulations to show that these properties carry over to all scaling functions through the convolution relation.

A. Scope and Organization of the Paper

This paper is written for researchers in signal processing who have a basic knowledge of wavelets and filterbanks and who would like to improve their understanding of the more theoretical aspects of wavelets that are usually confined to the mathematical literature. The paper is largely self-contained but assumes some familiarity with Mallat's multiresolution formulation of the wavelet transform that rests on the fundamental notion of scaling function [4]. From a theoretical point of view, the scaling function is more important than the wavelet because it provides the elementary building blocks for the decomposition; it is responsible for the key properties of the transform, and this is precisely what this paper is all about. The presentation is organized as follows. In Section II, we recall the main definitions and mathematical concepts that are required for the [...], which are not covered by traditional wavelet theory. This brings in two novelties: 1) the extension of the theory for fractional wavelets such as those introduced in [11]; 2) a new "peeling" theory of smoothness which generalizes an interpretation of integer orders of differentiability given by Strang [6]. The motivation here is to come up with a more intuitive, and perhaps even more general, understanding of the concept of wavelet smoothness. Another important goal of this paper is to clarify the notion that the wavelet representation
is stable with respect to (fractional) differentiation.

II. SCALING FUNCTIONS AND WAVELETS

In [...] formulation and interpretation of the wavelet transform. It also contains a short primer on fractional B-splines.

A. Continuous Function Spaces and Notations

The continuous-time version of the wavelet transform applies to functions [...] (1), where the integral is taken in Lebesgue's sense. The energy of a function [...] is equivalent to the statement that [...] is finite. More generally, one defines the [...]. The Fourier transform of [...] is given by [...], as well as for generalized functions [12]. In wavelet theory, one usually considers some wavelet function [...] can be represented in a unique and stable fashion using the expansion [...] of the wavelet [...] such that [...] is either given explicitly, as is the case with the B-splines, or it is derived indirectly from the refinement filter [...].

[Fig. 2. Illustration of the key properties for the Haar case phi(x) = beta^0(x) (B-spline of degree zero): (a) orthogonality; (b) two-scale relation; (c) wavelet relation; (d) partition of unity.]

[...] is an admissible scaling function of [...] such that [...] generates a stable basis for the basic function space [...] and the [...], where [...]. The basis is orthonormal if and only if [...]olution analysis of [...] that generates a Riesz basis of [...] (8), where the weights [...] and wavelet are defined in a similar fashion using the dual versions of (6) and (8), respectively; these involve the analysis filters [...] in Fig. 1.

C. Fractional B-Splines

The simplest example of a scaling function is the B-spline of degree 0 (cf. Fig. 2). This function can be constructed by taking the difference of two step functions shifted by one. This yields the following mathematical representation: [...]. Convolving it with itself [...] times, one generates the classical B-splines, which are piecewise polynomials of degree [...], which is best defined in the Fourier domain [...] (integer [...] displayed in thicker lines). These functions have a number of interesting properties that are briefly summarized here; for more details, see [11].
• They generate valid Riesz bases for [...]. In particular, this means that they are square integrable, i.e., [...].
• They belong to [...]. This
comes as a direct consequence of their definition.
• They reproduce polynomials of degree [...]. In particular, they satisfy the partition of unity for [...] is integer. For [...] when [...] is noninteger, although it decays like [...].

D. Differentiation and Smoothness

Our primary motivation for considering fractional splines instead of conventional ones is that the enlarged family happens to be closed under fractional differentiation. This will prove extremely useful for understanding and characterizing the smoothness properties of scaling functions and wavelets. Our fractional derivative operator [...] is integer. Applying the operator to the fractional B-splines yields the following explicit differentiation formula [...] that is piecewise constant and bounded (because [...] is infinitely differentiable everywhere except at the knots (integers), where it has discontinuities of order [...]). Hölder smoothness is a pointwise measure of continuity that is often used to characterize wavelets [19]. In this context, the Sobolev smoothness, which is a more global measure of differentiability in the [...]. In other words, [...].

III. WAVELET THEORY REVISITED

Our goal in this section is to reformulate wavelet theory using a nontraditional point of view. Our argumentation is entirely based on a B-spline factorization theorem (Section III-B), which is intimately related to the crucial notion of approximation order. We will use this representation to derive the most important wavelet properties using relatively straightforward manipulations. In other words, we will show that the order of approximation of the transform (Section III-A), the reproduction of polynomials property (Section III-C), the vanishing moments of the wavelet (Section III-D), and the multiscale differentiation property of the wavelet (Section III-E) can all be attributed to the B-spline that lies hidden within.

A. Order of Approximation

A fundamental idea in wavelet theory is the concept of multiresolution analysis. There, one usually considers a sequence of embedded subspaces with a dyadic scale
progression [...]. Specifically, the approximation space at scale (or at resolution level) [...] gets closer to zero. In other words, we want to be able to approximate any [...] has order of approximation [...] is a constant that may depend on [...]) ensures that the [...]th derivative of [...]th power of the scale. All popular families (Daubechies, Battle-Lemarié, etc.) satisfy the order constraint (29) and are indexed by their order (e.g., the 9/7 Daubechies and cubic spline wavelets). While traditional wavelet theory requires [...].

[Fig. 4. Plot of [...]; remainder of caption lost in extraction.]

[...] has a [...] can be factorized as [...] (19), where [...] compactly supported, or equivalently [...] integer, is a standard result in wavelet theory (cf. [6]). The important point here is that the present version holds for any real [...] with minimal restriction on [...] [cf. (13)]. The [...] is a valid scaling function of order [...] is the Fourier transform of a fractional B-spline as given by (10), and [...]; this corresponds to a well-defined convolution product in the time domain [...] with [...] (20). It is therefore always possible to express a scaling function as the convolution between a B-spline and a distribution. What Theorem 2 also tells us is that the B-spline part is entirely responsible for the approximation order of the transform. We will now use the convolution relation (20) to show that the B-spline part brings in three other very useful properties.

C. Reproduction of Polynomials

We have already mentioned that the B-splines of degree [...]. Practically, this means that we can generate all polynomials by taking suitable linear combinations of B-splines. In particular, one can construct the following series of polynomials for [...] with [...] because [...] integer, is not too surprising because the B-splines are themselves piecewise polynomial of degree [...]. The [...] case [cf. Fig. 4(b)] is less intuitive but is only truly relevant for fractional wavelets, which exhibit a few exotic properties [11]. The main point of this section is that the polynomial reproduction property is preserved through the convolution relation (20).

Proposition 3: Let [...]. When [...] has sufficient (inverse polynomial) decay for the moments of [...]. We then use this result to
evaluate the sum [...]. This polynomial reproduction property is illustrated in Fig. 5 for the Daubechies scaling function of order 2. Although no one will question the fact that the linear B-splines reproduce the constant and the ramp, it is much less obvious that the fractal-like Daubechies functions are endowed with the same property. For the sake of completeness, we mention the existence of a converse implication, which goes back to the Strang–Fix theory of approximation.

Theorem 4: If a function [...] reproduces the polynomials of degree [...] compactly supported, and [...]. We have already encountered some counterexamples with the fractional B-splines (i.e., [...]), which are not compactly supported. In addition, note that the results in this section do not require the two-scale relation. As such, they are also applicable outside the wavelet framework, for instance, in the context of interpolation [27].

D. Vanishing Moments

A wavelet transform is said to have [...] is such that [...] [5], [21]. This property translates into a sparse representation of piecewise smooth signals because the wavelet coefficients will be essentially zero over all regions where the signal is well approximated by a polynomial, e.g., the first few terms of its Taylor series. This will produce streams of zero coefficients that can be coded with very few bits [28], [29]. The vanishing moments also allow the characterization of singularities based on the decay of the coefficients across scale, the so-called persistence across scale property of the wavelet transform [10], [30], [31]. The study of this decay plays an important role in the analysis of fractal and multifractal signals [32]. The vanishing moments are nothing but an indirect manifestation of the ability of the scaling function to reproduce polynomials.

Proposition 5: If the scaling function [...] reproduces the polynomials of degree [...] has [...]. Thus, by combining Propositions 3 and 5, we can claim that it is the B-spline component once again that is entirely responsible for the vanishing moments of [...]. The above argument can also
be applied to the synthesis side of the transform. In other words, if the dual scaling function can be factorized as [...] is of order [...], then the synthesis wavelet [...] are stable in the sense that their Fourier transforms are bounded. We also recall that the perfect reconstruction property has an equivalent [...] (22). These are strong constraints that have a direct implication on the form of the wavelet filter.

Proposition 6: Under the constraint of a stable perfect reconstruction filterbank, the order condition (19) is equivalent to the following factorization of the wavelet analysis filter: [...] (23), where the filter [...] and the determinant of the matrices in (22), respectively. Then, clearly, [...], imposing (24) with the assumption that [...], which is stable as well. Finally, because [...] stable is equivalent to the condition [...].

To get a better feel for this result, we consider (23) and take the limit to obtain the asymptotic form of the filter as [...]. Using the fact that [...] and assuming that [...] is of maximum order [...] and [...] near the origin.

Theorem 7: Let [...] and [...] continuous at [...]. Then, [...] is continuous as well; we can easily obtain the asymptotic version of the result by plugging (25) into (26) and making use of the property [...]th derivative of a smoothed version of the input signal [...], where the smoothing kernel is defined by its frequency response [...]; it is necessarily lowpass and bounded because of (27) and Theorem 7. This kernel essentially limits the bandwidth of the signal being analyzed, with two practical benefits. First, it regularizes the differentiation process by reducing its noise amplification effect, and second, it attenuates the signal components above the Nyquist frequency so that the differentiated signal is well represented by its samples (or wavelet coefficients). In the classical case where [...] is an integer, there is an equivalence between (27) and the vanishing moment property. This can be shown by determining the Taylor series of [...] and, therefore, no direct equivalence with the vanishing moments property. The unique property of this kind of wavelets is that they give access to fractional
orders of differentiation, which can be very useful in some applications [34], in particular, when dealing with fractal-like signals.

IV. WAVELET DIFFERENTIABILITY AND INTEGRABILITY

One of the primary reasons for the mathematical successes of wavelets is their stability with respect to differentiation and integration. In the conclusion of his classical monograph, Meyer writes: "everything takes place as if the wavelets [...]" [35]. We will now use our B-spline formalism to shed some light on this important aspect of wavelet theory. As a starting point, we use the B-spline factorization (20) together with the B-spline convolution property [...] (or equivalently [...] for [...]) and, noting that [...], which is consistent with (10), we get [...] integer, but its extension for arbitrary [...], where the coefficients of the finite difference operator are given by (12). We also recall that [...] is absolutely summable (i.e., [...]).

Theorem 8 provides an explicit link between the smoothness properties of wavelets and the B-spline factorization. It is also interesting because it yields a general and coherent approach to the concept of smoothness, i.e., fractional differentiability in the [...] is no longer bounded, and it does not make much sense to attempt to represent it graphically.

[Fig. 6. Residual factor in (29) as a function of r for Daubechies' scaling function of order 2; the critical value is the largest r for which the factor is bounded (has r derivatives in L_p). Axis annotations lost in extraction.]

[...] if [...] is of maximum order [...] is not in [...] in the case of the B-splines. We can therefore safely state that [...] for a given filter [6], [20]. Various techniques have also been proposed to estimate the Hölder exponent (alias [...] is symmetrical) but can provide tight upper and lower bounds. Determining fractional orders of smoothness in norms other than [...], which is the inverse of the fractional differentiation operator [...]. We now close the loop by showing that the biorthogonal partner [...] of [...] provides the complementary basis for expanding [...], which is the [...] where [...]. Indeed, we can invoke Young's convolution inequality
(cf. [42]) [...].

[Fig. 7. Summary of wavelet theory: main properties and equivalences.]

[...] pair [...] and [...]. Note that it is essential here to work with wavelets because the fractional integral of a scaling function is generally not in [...]. Finally, by using the rescaling property [...] and (32), [...] which are the "eigen-relations" to which Meyer was alluding, even though he did not write them down explicitly. Of course, the qualifying statement is not rigorously correct; the important point is that there are basis functions with the same wavelet structure on both sides of the identity. The practical relevance of these "differential" wavelets is that they give us a direct way of gauging the fractional derivative of a signal based on its wavelet coefficients in the original basis. Specifically, we have that [...] are the coefficients of [...]. This provides a strong hint as to why the Sobolev norm of a signal can be measured from the [...].

If these fundamental properties are to be attributed to the B-spline part exclusively, then what is the purpose of the distribution [...]? This part is essential for imposing additional properties, such as orthonormality, and, more important, for balancing the localization properties (size) of the analysis and synthesis filters in Fig. 1. B-splines are optimal in terms of size and smoothness, but they are not orthogonal. To construct a pure spline wavelet transform, one needs to orthogonalize the B-splines [43], [44] or to specify a dual pair of spline functions [45]–[47]; in both cases, this is equivalent to selecting a distributional part of the form [...], where [...], eventually moving it to the analysis side, which yields biorthogonal spline wavelets [8]. This explains why all popular wavelet families (Daubechies, Coiflets, Cohen–Daubechies–Feauveau 9/7, etc.) include nontrivial distributional factors [...] is of order [...] (consequence of the Riesz condition), implies that [...] (consequence of the continuity of [...] at [...]) since [...] is stable, as seen in the proof of Proposition 6. Thus, from Proposition 6, this is equivalent to the [...].

B. Proof of Theorem 9

We require one (very
mild) technical assumption, namely, that the refinement filter be continuous at [...]. To prove Theorem 9, we define [...] satisfies the scaling relation [...] is integrable over [...]. First, we observe that [...] of [...] that does not include 0. This is because [...], which implies that [...] at 0 implies that for any [...] we can find [...]. Consequently, [...]. We thus have [...], which is bounded, independently of [...] is integrable in [...] is fully integrable in [...]. It is now a simple matter to state that the function [...] is in [...].

REFERENCES
[1] P. P. Vaidyanathan, "Quadrature mirror filter banks, M-band extensions and perfect-reconstruction techniques," IEEE Acoust., Speech, Signal Process. Mag., vol. 4, pp. 4–20, July 1987.
[2] O. Rioul and M. Vetterli, "Wavelets and signal processing," IEEE Signal Processing Mag., vol. 8, pp. 11–38, Oct. 1991.
[3] M. Vetterli, "A theory of multirate filter banks," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-35, pp. 356–372, Mar. 1987.
[4] S. G. Mallat, "A theory of multiresolution signal decomposition: The wavelet representation," IEEE Trans. Pattern Anal. Machine Intell., vol. 11, pp. 674–693, July 1989.
[5] I. Daubechies, Ten Lectures on Wavelets. Philadelphia, PA: SIAM, 1992.
[6] G. Strang and T. Nguyen, Wavelets and Filter Banks. Wellesley, MA: Wellesley-Cambridge, 1996.
[7] S. G. Mallat, "Multiresolution approximations and wavelet orthogonal bases of L2(R)" [entry truncated in source].
[16] I. J. Schoenberg, "Contribution to the problem of approximation of equidistant data by analytic
functions," Quart. Appl. Math., vol. 4, pp. 45–99, 1946.
[10] S. Mallat, A Wavelet Tour of Signal Processing. San Diego, CA: Academic, 1998.
[11] M. Unser and T. Blu, "Fractional splines and wavelets," SIAM Rev., vol. 42, no. 1, pp. 43–67, Mar. 2000.
[12] Y. Katznelson, An Introduction to Harmonic Analysis. New York: Dover, 1976.
[13] I. Daubechies and J. C. Lagarias, "Two-scale difference equations. 1. Existence and global regularity of solutions," SIAM J. Math. Anal., vol. 22, no. 5, pp. 1388–1410, 1991.
[14] G. Strang, "Eigenvalues of (↓2)H and convergence of the cascade algorithm," IEEE Trans. Signal Processing, vol. 44, pp. 233–238, Feb. 1996.
[15] M. Unser, "Sampling—50 years after Shannon," Proc. IEEE, vol. 88, pp. 569–587, Apr. 2000.
[17] M. Unser, "Splines: A perfect fit for signal and image processing," IEEE Signal Processing Mag., vol. 16, pp. 22–38, Nov. 1999.
[18] C. de Boor, A Practical Guide to Splines. New York: Springer-Verlag, 1978.
[19] O. Rioul, "Simple regularity criteria for subdivision schemes," SIAM J. Math. Anal., vol. 23, pp. 1544–1576, Nov. 1992.
[20] L. F. Villemoes, "Wavelet analysis of refinement equations," SIAM J. Math. Anal., vol. 25, no. 5, pp. 1433–1460, 1994.
[21] G. Strang, "Wavelets and dilation equations: A brief introduction," SIAM Rev., vol. 31, pp. 614–627, 1989.
[22] M. Unser, "Approximation power of biorthogonal wavelet expansions," IEEE Trans. Signal Processing, vol. 44, pp. 519–527, Mar. 1996.
[23] T. Blu and M. Unser, "Wavelet regularity and fractional orders of approximation," SIAM J. Math. Anal., submitted for publication.
[24] G. Strang and G. Fix, "A Fourier analysis of the finite element variational method," in Constructive Aspects of Functional Analysis. Rome, Italy: Edizioni Cremonese, 1971, pp. 793–840.
[25] A. Ron, "Factorization theorems for univariate splines on regular grids," Israel J. Math., vol. 70, no. 1, pp. 48–68, 1990.
[26] T. Blu, P. Thévenaz, and M. Unser, "Complete parametrization of piecewise-polynomial interpolation kernels," IEEE Trans. Image Processing, submitted for publication.
[27] P. Thévenaz, T. Blu, and M. Unser, "Interpolation revisited," IEEE Trans. Med. Imag., vol. 19, pp. 739–758, July 2000.
[28] J. Shapiro, "Embedded image coding using zerotrees of wavelet coefficients," IEEE Trans. Acoust., Speech, Signal Processing, vol. 41, pp. 3445–3462, Dec. 1993.
[29] A. Said and W. A. Pearlman, "A new fast and efficient image codec based on set partitioning in hierarchical trees," IEEE Trans. Circuits Syst. Video Technol., vol. 6, pp. 243–250, June 1996.
[30] S. Jaffard, "Pointwise smoothness, two-microlocalization and wavelet coefficients," Publicacions Matemàtiques, vol. 35, pp. 155–168, 1991.
[31] S. Mallat and W. L. Hwang, "Singularity detection and processing with wavelets," IEEE Trans. Inform. Theory, vol. 38, pp. 617–643, Mar. 1992.
[32]
A. Arneodo, F. Argoul, E. Bacry, J. Elezgaray, and J. F. Muzy, Ondelettes, Multifractales et Turbulence. Paris, France: Diderot, 1995.
[33] M. Vetterli and C. Herley, "Wavelets and filter banks: Theory and design," IEEE Trans. Signal Processing, vol. 40, pp. 2207–2232, Sept. 1992.
[34] F. J. Herrmann, "Singularity characterization by monoscale analysis: Application to seismic imaging," Appl. Comput. Harmon. Anal., vol. 11, no. 1, pp. 64–88, July 2001.
[35] Y. Meyer, Ondelettes et Opérateurs I: Ondelettes. Paris, France: Hermann, 1990.
[36] T. Eirola, "Sobolev characterization of solutions of dilation equations," SIAM J. Math. Anal., vol. 23, pp. 1015–1030, 1992.
[37] I. Daubechies and J. C. Lagarias, "Two-scale difference equations. 2. Local regularity, infinite products of matrices and fractals," SIAM J. Math. Anal., vol. 23, no. 4, pp. 1031–1079, 1992.
[38] T. Blu and M. Unser, "The fractional spline wavelet transform: Definition and implementation," in Proc. Int. Conf. Acoust., Speech, Signal Process., Istanbul, Turkey, 2000, pp. 512–515.
[39] A. Cohen, I. Daubechies, and A. Ron, "How smooth is the smoothest function in a given refinable space?," Appl. Comput. Harmon. Anal., vol. 3, no. 1, pp. 87–90, 1996.
[40] C. A. Micchelli and T. Sauer, "Regularity of multiwavelets," Adv. Comput. Math., vol. 7, pp. 455–545, 1997.
[41] P. P. Vaidyanathan and B. Vrcelj, "Biorthogonal partners and applications," IEEE Trans. Signal Processing, vol. 49, pp. 1013–1027, May 2001.
[42] E. M. Stein and G. Weiss, Fourier Analysis on Euclidean Spaces. Princeton, NJ: Princeton Univ. Press, 1971.
[43] G. Battle, "A block spin construction of ondelettes. Part I: Lemarié functions," Commun. Math. Phys., vol. 110, pp. 601–615, 1987.
[44] P.-G. Lemarié, "Ondelettes à localisation exponentielle," J. Math. Pures et Appl., vol. 67, no. 3, pp. 227–236, 1988.
[45] C. K. Chui and J. Z. Wang, "On compactly supported spline wavelets and a duality principle," Trans. Amer. Math. Soc., vol. 330, no. 2, pp. 903–915, 1992.
[46] M. Unser, A. Aldroubi, and M. Eden, "On the asymptotic convergence of B-spline wavelets to Gabor functions," IEEE Trans. Inform. Theory, vol. 38, pp. 864–872, Mar. 1992.
[47] M. Unser, A. Aldroubi, and M. Eden, "A family of polynomial spline wavelet
transforms,”SignalProcess.,vol.30,no.2,pp.141–162,Jan.1993.Michael Unser(M’89–SM’94–F’99)received theM.S.(summa cum laude)and Ph.D.degrees in elec-trical engineering in1981and1984,respectively,from the Swiss Federal Institute of Technology(EPFL),Lausanne,Switzerland.From1985to1997,he was with the BiomedicalEngineering and Instrumentation Program,NationalInstitutes of Health,Bethesda,MD.He is now Pro-fessor and Head of the Biomedical Imaging Group atEPFL.His main research area is biomedical imageprocessing.He has a strong interest in sampling the-ories,multiresolution algorithms,wavelets,and the use of splines for image processing.He is the author of100published journal papers in these areas. Dr.Unser is Associate Editor-in-Chief for the IEEE T RANSACTIONS ON M EDICAL I MAGING.He is on the editorial boards of several other journals, including IEEE S IGNAL P ROCESSING M AGAZINE,Signal Processing,IEEE T RANSACTIONS ON I MAGE P ROCESSING(from1992to1995),and IEEE S IGNAL P ROCESSING L ETTERS(from1994to1998).He serves as regular chair for the SPIE Conference on Wavelets,which has been held annually since1993.He was general co-chair of the first IEEE International Symposium on Biomedical Imaging,Washington,DC,2002.He received the1995Best Paper Award and the2000Magazine Award from the IEEE Signal Processing Society.Thierry Blu(M’96)was born in Orléans,France,in1964.He received the “Diplôme d’ingénieur”fromÉcole Polytechnique,Paris,France,in1986and from Télécom Paris(ENST),in1988.In1996,he received the Ph.D degree in electrical engineering from ENST for a study on iterated rational filterbanks ap-plied to wideband audio coding.He is currently with the Biomedical Imaging Group,Swiss Federal Institute of Technology(EPFL),Lausanne,Switzerland,on leave from France Télécom National Center for Telecommunications Studies(CNET), Issy-les-Moulineaux,France.His research interests include(multi)wavelets, multiresolution analysis,multirate filterbanks,approximation and sampling 
theory,mathematical imaging,etc.。
Building your own wavelets at home
Wim Sweldens and Peter Schröder

Abstract. We give a practical overview of three simple techniques to construct wavelets under general circumstances: interpolating subdivision, average interpolation, and lifting. We include examples concerning the construction of wavelets on an interval, weighted wavelets, and wavelets adapted to irregular samples.

1 Introduction

During the last decade, many constructions of wavelets on the real line became available; see for example [1, 5, 7, 8, 12, 13, 14, 17, 18, 22, 24, 25, 26, 34]. The filter sequences for scaling functions and wavelets are typically derived through the Fourier transform and the consideration of certain trigonometric polynomials and their properties. From a user's point of view, though, the constructions are not always suitable for straightforward implementation or specialization to particular cases, such as boundaries. The purpose of this paper is to show that simple techniques exist which allow the construction of versatile families of scaling functions and wavelets under general circumstances. Some of these constructions lead to well-studied classical cases, others to wavelets custom-designed for certain applications. None of the techniques (interpolating subdivision [15], average interpolation [17], and lifting [33, 32]) is new, but taken together they result in a straightforward and easy-to-implement toolkit. To make the treatment as accessible as possible we take a very "nuts and bolts", algorithmic approach. In particular, we initially ignore many of the mathematical details and introduce the basic techniques with a sequence of examples. Other sections are devoted to more formal and rigorous mathematical descriptions of the underlying principles. On first reading one can skip these sections, which are marked with an asterisk. We begin with the construction of scaling functions by interpolating subdivision and average interpolation. All the algorithms can be derived via simple arguments involving nothing
more than polynomial interpolation. In fact, constraints such as specialization to intervals, boundary conditions, biorthogonality with respect to a weighted inner product, and irregular samples are easily incorporated. We show how this fits into the general framework of "second generation wavelets." The basic idea of second generation wavelets is to give up translation and dilation in order to construct wavelets adapted to general settings (domains, surfaces, weights, irregular samples) while maintaining powerful properties of classical wavelets such as time-frequency localization and fast transforms. We demonstrate how second generation wavelets can be constructed with the lifting scheme. Finally, we give concrete examples and point out questions which require further research.

2 Interpolating Subdivision

2.1 Algorithm

To motivate the construction of interpolating scaling functions, we begin by considering the problem of interpolating a sequence of data values. To be concrete, suppose we have samples s_{0,k}, k in Z, of some unknown function, given at the integers x_{0,k} = k. How can we define an extension of these values to a function defined on the whole real line? Obviously there are many possible approaches. Deslauriers and Dubuc attacked this problem by defining

Figure 1: Examples of interpolating subdivision. On the left, a diagram showing the filling in of "in between" samples by linear interpolation between neighboring samples. On the right, the same idea applied to higher order interpolation, using two neighbors to either side and the unique cubic polynomial which interpolates these. This process is repeated ad infinitum to define the limit function.

a recursive procedure for finding the value of an interpolating function at all dyadic points [15, 16]. We will refer to this as interpolating subdivision. For our purposes this is a particularly well-suited approach, since we are interested in constructions which obey refinement
relations. As we will see later, these will lead to a particular set of wavelets. Perhaps the simplest way to set up such an interpolating subdivision scheme is the following. Let s_{0,k} be the original sample values. Now define a refined sequence of sample values recursively as

    s_{j+1,2k} = s_{j,k},    s_{j+1,2k+1} = (s_{j,k} + s_{j,k+1}) / 2,

and place the sample s_{j,k} at location x_{j,k} = k 2^{-j}. In words, new values are inserted halfway between old values by linearly interpolating the two neighboring old values (see the left side of Figure 1). It is not difficult to see that in the limit this results in a piecewise linear interpolation of the original sample values. Suppose the initial sample values given to us were actually samples of a linear polynomial. In that case our subdivision scheme will exactly reproduce that polynomial. Let us consider fancier interpolation schemes. For example, instead of defining the new value at the midpoint between two old values as a linear interpolation of the neighboring values, we can use two neighboring values on either side and define the (unique) cubic polynomial p which interpolates those four values:

    p(x_{j,k+l}) = s_{j,k+l}   for l = -1, 0, 1, 2.

The new sample value (odd index) will then be the value that this cubic polynomial takes on at the midpoint, while all old samples (even index) are preserved:

    s_{j+1,2k} = s_{j,k},    s_{j+1,2k+1} = p(x_{j+1,2k+1}).

Figure 1 (right side) shows this process in a diagram. Even though each step in the subdivision involves cubic polynomials, the limit function will not be a polynomial anymore. While we don't have a sense yet as to what the limit function looks like, it is easy to see that it can reproduce cubic polynomials.

Figure 2: Scaling functions which result from interpolating subdivision. Going from left to right, the order of the subdivision is 2, 4, 6, and 8.

Assume that the original sequence of sample values came from some given cubic polynomial. In that case the interpolating polynomial over each set of 4 neighboring sample values will be the same polynomial, and all newly
generated samples will be on the original cubic polynomial, in the limit reproducing it. In general we use N (even) samples and build a polynomial of degree N-1. We then say that the order of the subdivision scheme is N. Next we define a function, which Deslauriers and Dubuc refer to as the fundamental solution, but which we will refer to as the scaling function φ(x): set all s_{0,k} equal to zero except for s_{0,0}, which is set to 1. Now run the interpolating subdivision ad infinitum. The resulting function is φ(x), the scaling function. Figure 2 shows the scaling functions which result from interpolating subdivision of order 2, 4, 6, and 8 (left to right). What makes interpolating subdivision so attractive from an implementation point of view is that we only need a routine which can construct an interpolating polynomial given some number of sample values and locations. The new sample value is then simply given by the evaluation of this polynomial at the new, refined location. A particularly efficient (and stable) procedure for this is Neville's algorithm [30, 27]. Notice also that nothing in the definition of this procedure requires the original samples to be located at integers. Later we will use this feature to define scaling functions over irregular subdivisions. Interval boundaries for finite sequences are also easily accommodated. E.g., for the cubic construction described above we can take 1 sample on the left and 3 on the right at the left boundary of an interval. We will come back to this in Section 4. First we turn to a more formal definition of the interpolating subdivision in the regular case (x_{j,k} = k 2^{-j}) and discuss some of the properties of the scaling functions.

2.2 Formal Description*

The interpolating subdivision scheme can formally be defined as follows. For each group of N = 2D coefficients s_{j,k-D+1}, ..., s_{j,k+D}, it involves two steps:

1. Construct a polynomial p of degree N-1 so that p(x_{j,k+l}) = s_{j,k+l} for l = -D+1, ..., D.
2. Calculate one coefficient on the next finer level as the value of this polynomial at x_{j+1,2k+1}: s_{j+1,2k+1} = p(x_{j+1,2k+1}).

The properties of the resulting scaling function are given in the following
table.

1. Compact support: φ is exactly zero outside the interval [-N+1, N-1]. This easily follows from the locality of the subdivision scheme.
2. Interpolation: φ is interpolating in the sense that φ(0) = 1 and φ(k) = 0 for k ≠ 0. This immediately follows from the definition.
3. Symmetry: φ is symmetric. This follows from the symmetry of the construction.
4. Polynomial reproduction: The scaling function and its translates reproduce polynomials up to degree N-1. In other words, Σ_k k^p φ(x - k) = x^p for 0 ≤ p < N. This can be seen by starting the subdivision scheme with the sequence s_{0,k} = k^p and using the fact that the subdivision definition ensures the reproduction of polynomials up to degree N-1.
5. Smoothness: Typically φ is in C^α, where α = α(N). It is known that α(4) ≈ 2 and α(6) ≈ 2.830 (strict bounds), and the smoothness increases linearly with N. This fact is much less trivial than the previous ones; we refer to [15, 16].
6. Refinability: φ satisfies a refinement relation of the form

    φ(x) = Σ_l h_l φ(2x - l).

This can be seen as follows. Do one step in the subdivision starting from s_{0,k} = δ_{k,0}. Call the result h_l := s_{1,l}. It is easy to see that only 2N-1 coefficients h_l are non-zero. Now start the subdivision scheme from level 1 with these values s_{1,l}. The refinement relation follows from the fact that this should give the same result as starting from level 0 with the values s_{0,k}. Also, because of interpolation, it follows that h_{2l} = δ_{l,0}. We refer to the h_l as filter coefficients.

We next define the scaling functions φ_{j,k} as the limit functions of the subdivision scheme started on level j with s_{j,l} = δ_{l,k}. A moment's thought reveals that φ_{j,k}(x) = φ(2^j x - k). With a change of variables in the refinement relation we get

    φ_{j,k}(x) = Σ_l h_{l-2k} φ_{j+1,l}(x).

With these facts in hand, consider some original sequence of sample values s_{j,k} at level j. Simply using linear superposition and starting the subdivision scheme at level j yields a limit function of the form f(x) = Σ_k s_{j,k} φ_{j,k}(x). The same limit function can also be written as f(x) = Σ_l s_{j+1,l} φ_{j+1,l}(x). Equating these two ways of writing f and using the refinement relation to replace φ_{j,k} with a linear combination of the φ_{j+1,l}, and then evaluating both sides at x_{j+1,l}, we finally arrive at

    s_{j+1,l} = Σ_k h_{l-2k} s_{j,k}.

This equation has the same structure as the usual inverse wavelet transform (synthesis) in case all wavelet
coefficients are set to zero.

Figure 3: An example function is sampled at a number of locations. The values at these sample points are used as coefficients in the interpolating scheme of order N = 2. The scaling function for N = 2 is the familiar hat function, and the basis for the approximation is a set of translated hat functions.

Figure 4: Examples of average-interpolation. On the left, a diagram showing the constant average-interpolation scheme: each subinterval gets the average of a constant function defined by the parent interval. On the right, the same idea applied to higher order average-interpolation, using a neighboring interval on either side. The unique quadratic polynomial which has the correct averages over one such triple is used to compute the averages over the subintervals of the middle interval. This process is repeated ad infinitum to define the limit function.

In the case of linear subdivision, the filter coefficients are {1/2, 1, 1/2}. The associated scaling function is the familiar linear B-spline "hat" function, see Figure 3. The cubic case leads to the filter {-1/16, 0, 9/16, 1, 9/16, 0, -1/16}. (A closed-form expression for the filter coefficients of general order N exists but is not reproduced here.)

3 Average-Interpolation

3.1 Algorithm

Suppose that instead of samples we are given averages of some unknown function f over intervals:

    a_{0,k} = integral from k to k+1 of f(x) dx.

Such values might arise from a physical device which does not perform point sampling but integration, as is done, for example, by a CCD cell (to a first approximation). How can we use such values to define a function whose averages are exactly the measurement values given to us? One obvious answer is to use these values to define a piecewise constant function which takes on the value a_{0,k} for k ≤ x < k+1. This corresponds to the following constant average-interpolation scheme:

    a_{j+1,2k} = a_{j,k},    a_{j+1,2k+1} = a_{j,k}.

In words, we assign averages to each subinterval (left and right) by setting them to be the same value as the average for the parent interval. Cascading this procedure ad infinitum, we get a function which is defined everywhere and is
piecewise constant. Furthermore, its averages over the intervals [k, k+1] match the observed averages. The disadvantage of this simple scheme is that the limit function is not smooth. In order to understand how to increase the smoothness of such a reconstruction, we again define a general average-interpolating procedure. One way to think about the previous scheme is to describe it as follows. We assume that the (unknown) function we are dealing with is a constant polynomial over the interval [k 2^{-j}, (k+1) 2^{-j}]. The values a_{j+1,2k} and a_{j+1,2k+1} then follow as the averages of this polynomial over the respective subintervals. The diagram on the left side of Figure 4 illustrates this scheme. Just as before, we can extend this idea to higher order polynomials. The next natural choice is quadratic. For a given interval, consider the intervals to its left and right. Define the (unique) quadratic polynomial p(x) whose averages over the three intervals are exactly a_{j,k-1}, a_{j,k}, and a_{j,k+1}. Now compute a_{j+1,2k} and a_{j+1,2k+1} as the averages of this polynomial over the two subintervals of [k 2^{-j}, (k+1) 2^{-j}]. Figure 4 (right side) shows this procedure. It is not immediately clear what the limit function of this process will look like, but it is easy to see that the procedure will reproduce quadratic polynomials. Assume that the initial averages a_{0,k} were averages of a given quadratic polynomial. In that case the unique polynomial which has the prescribed averages over each triple of intervals will always be that same polynomial which gave rise to the initial set of averages. Since the interval sizes go to zero and the averages over the intervals approach the value of the underlying function in the limit, the original quadratic polynomial will be reproduced. We can now define the scaling function exactly the same way as in the interpolating subdivision case. In general we use N intervals (N odd) to construct a polynomial of degree N-1. Again N is the order of the subdivision scheme. Figure 5 shows the scaling functions of order 1, 3, 5, and 7 (left to right). Just as the earlier interpolating subdivision process, this scheme also has the virtue that it
is very easy to implement. The conditions on the integrals of the polynomial result in an easily solvable linear system relating the coefficients of p to the a_{j,k}. In its simplest form (we will see more general versions later on) we can streamline this computation

Figure 5: Scaling functions which result from average-interpolation. Going from left to right, the orders of the respective subdivision schemes were 1, 3, 5, and 7.

even further by taking advantage of the fact that the integral of a polynomial is itself another polynomial. This leads to another interpolation problem: find the primitive P of p, which interpolates the partial sums of the given averages at the old interval endpoints. Given such a polynomial, the finer averages become

    a_{j+1,2k}   = 2^{j+1} ( P(x_{j+1,2k+1}) - P(x_{j+1,2k}) ),
    a_{j+1,2k+1} = 2^{j+1} ( P(x_{j+1,2k+2}) - P(x_{j+1,2k+1}) ).

This computation, just like the earlier interpolating subdivision, can be implemented in a stable and very efficient way with a simple Neville interpolation algorithm. Notice again how we have not assumed that the x_{j,k} are regular, and generalizations to non-even sized intervals are possible without fundamental changes. As in the earlier case, boundaries are easily absorbed by taking two intervals to the right at the left boundary, for example. We can also allow weighted averages. All of these generalizations will be discussed in more detail in Section 4. In the next section we again take a more formal look at these ideas in the regular case.

3.2 Formal Description*

The average-interpolating subdivision scheme of order N can be defined as follows. For each group of N = 2D+1 coefficients a_{j,k-D}, ..., a_{j,k+D}, it involves two steps:

1. Construct a polynomial p of degree N-1 so that the average of p over [x_{j,l}, x_{j,l+1}] equals a_{j,l} for l = k-D, ..., k+D.
2. Calculate two coefficients on the next finer level as the averages of p over the two halves of [x_{j,k}, x_{j,k+1}]:

    a_{j+1,2k}   = 2^{j+1} × integral from x_{j+1,2k} to x_{j+1,2k+1} of p(x) dx,
    a_{j+1,2k+1} = 2^{j+1} × integral from x_{j+1,2k+1} to x_{j+1,2k+2} of p(x) dx.

The properties of the scaling function are given in the following table, see [17].

1. Compact support: φ is exactly zero outside the interval [-N+1, N]. This easily follows from the locality of the subdivision scheme.
2. Average-interpolation: φ is average-interpolating in the sense that its average over [k, k+1] equals δ_{k,0}. This immediately follows from the definition.
3. Symmetry: φ is symmetric around x = 1/2. This follows from the symmetry of the construction.
4. Polynomial
reproduction: φ reproduces polynomials up to degree N-1. In other words,

    Σ_k [ ((k+1)^{p+1} - k^{p+1}) / (p+1) ] φ(x - k) = x^p   for 0 ≤ p < N.

This can be seen by starting the scheme with this particular coefficient sequence and using the fact that the subdivision reproduces polynomials up to degree N-1.
5. Smoothness: φ is continuous of order α(N), with α(N) > 0. One can show that α(3) ≈ 0.678, α(5) ≈ 1.272, α(7) ≈ 1.826, α(9) ≈ 2.354, and asymptotically α(N) ≈ 0.2075 N [17].
6. Refinability: φ satisfies a refinement relation of the form φ(x) = Σ_l h_l φ(2x - l). This follows from reasoning similar to the interpolating case, starting from a_{0,k} = δ_{k,0}.

Next consider some original sequence of averages a_{j,k} at level j. Simply using linear superposition and starting the subdivision scheme at level j yields a limit function of the form f(x) = Σ_k a_{j,k} φ_{j,k}(x). The same limit function can also be written as f(x) = Σ_l a_{j+1,l} φ_{j+1,l}(x). Equating these two ways of writing f and using the refinement relation to replace φ_{j,k} with a linear combination of the φ_{j+1,l}, we again get

    a_{j+1,l} = Σ_k h_{l-2k} a_{j,k}.

This equation has the same structure as the usual inverse wavelet transform (synthesis) in case all wavelet coefficients are set to zero.

4 Generalizations

So far we have been discussing scaling functions defined on the real line with sample locations x_{j,k} = k 2^{-j}. This has the nice feature that all scaling functions are translates and dilates of one fixed function. However, the true power of subdivision schemes lies in the fact that they can also be used in much more general settings. In particular, we are interested in the following cases:

1. Interval constructions: When working with finite data it is desirable to have basis functions adapted to life on an interval. This way no half solutions such as zero padding, periodization, or reflection are needed. We point out that many wavelet constructions on the interval already exist, see [2, 9, 4], but we would like to use the subdivision schemes of the previous sections since they lead to easy implementations.

Figure 6: Behavior of the cubic interpolating subdivision near the boundary. Midpoint samples sufficiently far from the boundary are unaffected. When attempting to compute the midpoint sample for the interval [0, 1] we must modify the procedure, since there is no neighbor to the left for the cubic interpolation problem. Instead we choose 3 neighbors to the right. Note how this results in the same cubic polynomial as used in the definition of the midpoint value for the neighboring interval [1, 2]. The procedure clearly preserves the cubic reconstruction property even at the interval boundary and is thus the natural choice for the boundary modification.

Figure 7: Behavior of the quadratic average-interpolation process near the boundary. The averages for subintervals away from the boundary are unaffected. When attempting to compute the finer averages for the leftmost interval, the procedure needs to be modified, since no further average to the left of 0 exists for the average-interpolation problem. Instead we use 2 intervals to the right, effectively reusing the same average-interpolating polynomial constructed for the subinterval averages on the neighboring interval. Once again it is immediately clear that this is the natural modification to the process near the boundary, since it ensures that the crucial quadratic reproduction property is preserved.

Figure 8: Examples of scaling functions affected by a boundary. On top, the scaling functions of quadratic (N = 3) average-interpolation at level j = 3 and k = 0, 1, 2, 3. On the bottom, the scaling functions of cubic (N = 4) interpolation at level j = 3 and k = 0, 1, 2, 3. Note how the boundary scaling functions are still interpolating.

2. Weighted inner products: Often one needs a basis adapted to a weighted inner product instead of the regular L² inner product. A weighted inner product of two functions f and g is defined as

    ⟨f, g⟩_w = integral of w(x) f(x) g(x) dx,

where w(x) is some positive function. Weighted wavelets are extremely useful in the solution of boundary value ODEs, see [23, 31]. Also, as we will see later, they are useful
in the approximation of functions with singularities.

3. Irregular samples: In many practical applications, the samples do not necessarily live on a regular grid. Resampling is awkward. A basis adapted to the irregular grid is desired.

The exciting aspect of subdivision schemes is that they adapt in a straightforward way to these settings. Let us discuss this in more detail. Both of the subdivision schemes we discussed assemble N coefficients in each step. These uniquely define a polynomial of degree N-1. This polynomial is then used to generate one (interpolating case) or two (average-interpolation case) new coefficients on the next finer level. Each time, the new coefficients are located in the middle of the old coefficients. When working on an interval the same principle can be used as long as we are sufficiently far from the boundary. Close to the boundary we need to adapt this scheme. Consider the case where one wants to generate a new coefficient but is unable to find old samples equally spaced to the left and right of the new sample, simply because they are not available. The basic idea is then to choose, from the set of available samples, those which are closest to the new coefficient. To be concrete, take the interval [0, 1]. In the interpolating case we have 2^j + 1 coefficients at locations k 2^{-j} for 0 ≤ k ≤ 2^j. In the average-interpolation case we have 2^j coefficients corresponding to the intervals [k 2^{-j}, (k+1) 2^{-j}] for 0 ≤ k < 2^j. Now consider the interpolating case as an example. The leftmost coefficient s_{j+1,0} is simply s_{j,0}. The next one, s_{j+1,1}, is found by constructing the interpolating polynomial through the N leftmost points (x_{j,k}, s_{j,k}) and evaluating it at x_{j+1,1}; for s_{j+1,3} we evaluate the same polynomial at x_{j+1,3}. Similar constructions work for the other boundary coefficients, the right side, and the average-interpolation case. Figures 6 and 7 show this idea for a concrete example in the interpolating and average-interpolation cases. Figure 8 shows the scaling functions affected by the boundary for both the interpolating and average-interpolation cases. Next, take the case of a weighted inner product. In the
interpolating case, nothing changes. In the average-interpolation case, the only thing we need to do is to replace the integrals with weighted integrals. We now construct a polynomial p of degree N-1 so that its weighted average over each of the N old intervals matches the given coefficients,

    ( integral over I_{j,l} of w(x) p(x) dx ) / ( integral over I_{j,l} of w(x) dx ) = a_{j,l},

and then calculate the two coefficients on the next finer level as the corresponding weighted averages of p over the two subintervals. Everything else remains the same, except that the polynomial problem cannot be recast into a Neville algorithm any longer, since the integral of a polynomial times the weight function is not necessarily a polynomial. This construction of weighted wavelets using average-interpolation was first done in [31]. The case of irregular samples can also be accommodated by observing that neither of the subdivision schemes requires samples on a regular grid. We can take an arbitrarily spaced set of points x_{j,k} with x_{j,k} < x_{j,k+1} and x_{j+1,2k} = x_{j,k}. In the interpolating case a coefficient s_{j,k} lives at x_{j,k}, while in the average-interpolation case a coefficient a_{j,k} is associated with the interval [x_{j,k}, x_{j,k+1}]. The subdivision schemes can now be applied in a straightforward manner. Obviously, any combination of these three cases can also be accommodated.

5 Multiresolution Analysis

5.1 Introduction

In the above sections we have discussed two subdivision schemes to generate scaling functions and pointed out how their definitions leave plenty of room for generalizations. The resulting modifications to the algorithms are straightforward. We have yet to introduce the wavelets that go with these scaling functions. In order to do so we need a slightly more formal framework in which to embed the above scaling function constructions. In particular, we need a framework which allows us to carry over all the generalizations described above. This general framework we refer to as "second generation wavelets." The ambition of second generation wavelets is to generalize the construction of wavelets and scaling functions to intervals, bounded domains, curves and surfaces, weights, irregular samples, etc.
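The interval adaptation described above is easy to try out in code. The following is a minimal Python sketch (my own illustration, not the authors' implementation; the function names are invented): one level of cubic (N = 4) interpolating subdivision on a finite sequence of samples at the integers, with the four-sample stencil clamped to one side near the boundaries, in the spirit of Figure 6.

```python
import numpy as np

def cubic_at(xs, ys, xm):
    """Value at xm of the cubic through the four points (xs, ys), in Lagrange form."""
    val = 0.0
    for i in range(4):
        w = 1.0
        for j in range(4):
            if j != i:
                w *= (xm - xs[j]) / (xs[i] - xs[j])
        val += w * ys[i]
    return val

def refine_interval(s):
    """One level of order-4 interpolating subdivision on samples s at 0, 1, ..., n-1.

    Old samples are kept; each new midpoint is the value of the cubic through
    the two nearest old samples on either side.  Near a boundary the 4-sample
    stencil is clamped to one side (e.g. 1 sample left, 3 right), which still
    reproduces cubics exactly.
    """
    n = len(s)
    out = np.empty(2 * n - 1)
    out[0::2] = s
    for k in range(n - 1):
        i = min(max(k - 1, 0), n - 4)   # clamp the stencil to samples [i, i+3]
        out[2 * k + 1] = cubic_at(np.arange(i, i + 4), s[i:i + 4], k + 0.5)
    return out
```

Starting from samples of x³, the scheme returns samples of x³ at the half-integers, including the boundary midpoints, which is exactly the cubic-reproduction property the boundary modification is designed to preserve.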
In these settings translation and dilation cannot be used any more. Second generation wavelets rely on the fact that translation and dilation are not fundamental to obtaining wavelets with desirable properties such as localization in space and frequency and fast transforms. We begin with a discussion of multiresolution analysis before showing how second generation wavelets can be constructed with the lifting scheme.

5.2 Generalized Refinement Relations

In classical constructions, scaling functions are defined as the dyadic translates and dilates of one fixed scaling function φ(x). I.e., the classical scaling functions satisfy a refinement relation of the form

    φ(x) = Σ_l h_l φ(2x - l).

However, in the generalizations discussed in the previous section, the scaling functions constructed through subdivision are not necessarily translates and dilates of one fixed function. Nevertheless they still satisfy refinement relations, which we can find as follows. Start the subdivision on level j with s_{j,l} = δ_{l,k}. We know that the subdivision scheme converges to φ_{j,k}. Now do only one step of the subdivision scheme. Call the resulting coefficients h_{j,k,l}.
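This one-step recipe, run the subdivision on a delta sequence and read off the outputs, can be coded directly for the regular cubic scheme. A small sketch (my code, on an assumed regular grid): it recovers the classical Deslauriers-Dubuc filter {-1/16, 0, 9/16, 1, 9/16, 0, -1/16}, including the interpolation property h_{2l} = δ_{l,0}.

```python
import numpy as np

def dd4_step(s):
    """One step of regular cubic (Deslauriers-Dubuc) interpolating subdivision.

    Midpoints without two neighbors on each side fall back to linear
    interpolation; this does not matter here because the delta sits far
    from the ends of the array.
    """
    out = np.empty(2 * len(s) - 1)
    out[0::2] = s                                              # keep old samples
    out[1::2] = 0.5 * (s[:-1] + s[1:])                         # linear fallback
    out[3:-3:2] = (-s[:-3] + 9 * s[1:-2] + 9 * s[2:-1] - s[3:]) / 16
    return out

# Filter coefficients: one subdivision step applied to a delta sequence.
delta = np.zeros(9)
delta[4] = 1.0
h = dd4_step(delta)[5:12]   # the 7 central taps h_{-3}, ..., h_{3}
```

The even-indexed taps come out as 0, 1, 0, which is the statement that old samples are preserved.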
Only a finite number of them are non-zero. Since starting the subdivision scheme at level j+1 with these coefficients also converges to φ_{j,k}, we have

    φ_{j,k}(x) = Σ_l h_{j,k,l} φ_{j+1,l}(x).

The coefficients of the refinement relation are thus potentially different for each scaling function.

5.3 Formal Description*

Before we begin, let us fix notation. We will always assume the interval [0, 1], a weight function w(x), and possibly irregular samples. The coarsest level will be denoted j = 0. The index k ranges from 0 to 2^j in the interpolation case and from 0 to 2^j - 1 in the average-interpolation case; in the refinement relations, k runs over the coarse-level range while l runs over the finer-level range. We begin with the definition of multiresolution analysis in this general context. A multiresolution analysis is a set of closed subspaces V_j of L²([0, 1]), j in N, which are defined as

    V_j = span{ φ_{j,k} }.

It follows from the refinement relations that the spaces are nested, V_j ⊂ V_{j+1}. We require that every function of finite energy can be approximated arbitrarily closely with scaling functions, or: the union of the V_j is dense in L². In other words, projection operators P_j : L² → V_j exist, so that for every f in L²,

    lim_{j→∞} P_j f = f.

The question now is: how do we find these projection operators? In case the φ_{j,k} formed an orthonormal basis for V_j, the answer would be easy. We would let

    P_j f = Σ_k ⟨f, φ_{j,k}⟩ φ_{j,k},

i.e., the coefficients of the projection of f could be found by taking inner products against the basis functions themselves.
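The convergence P_j f → f is easy to observe for the simplest interpolating case, N = 2, where the expansion in hat scaling functions is just piecewise linear interpolation at the dyadic points. A sketch (my own illustration; `np.interp` stands in for the hat-function expansion, and the L² error is estimated on a fine grid):

```python
import numpy as np

def hat_error(f, j):
    """L2 error of the level-j hat-function (N = 2) approximation of f on [0, 1].

    The expansion coefficients are simply the samples f(x_{j,k}) at the
    2**j + 1 dyadic knots, and the resulting approximant is the piecewise
    linear interpolant through them.
    """
    xk = np.linspace(0.0, 1.0, 2**j + 1)   # knots x_{j,k} = k 2^{-j}
    xs = np.linspace(0.0, 1.0, 4097)       # fine evaluation grid
    approx = np.interp(xs, xk, f(xk))      # hat expansion = linear interpolation
    return np.sqrt(np.mean((f(xs) - approx) ** 2))

f = lambda x: np.sin(2 * np.pi * x)
errs = [hat_error(f, j) for j in (2, 4, 6)]
```

The error shrinks roughly by a factor of 4 per level (the expected O(h²) rate for a smooth function), illustrating that the union of the V_j is dense.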
However, in general, it is very hard to construct orthonormal scaling functions. Instead we consider a more general, biorthogonal setting. In that setting we have a second set of scaling functions, the dual scaling functions φ̃_{j,k}, so that we can write

    P_j f = Σ_k ⟨f, φ̃_{j,k}⟩ φ_{j,k},

i.e., we need a second set of functions such that taking inner products of f against them yields exactly the right coefficients with respect to the φ_{j,k}. How can we find these dual scaling functions? Their defining property follows from the requirement that P_j be a projection operator, i.e., P_j P_j = P_j. Using some arbitrary test function and substituting it into this requirement, we find that scaling functions and dual scaling functions have to be biorthogonal in the sense that

    ⟨φ̃_{j,k}, φ_{j,k'}⟩ = δ_{k,k'}.   (1)

For normalization purposes, we always fix the scale of the duals so that (1) holds exactly.

5.4 Examples

We already encountered dual functions in both of the subdivision schemes. Indeed, average-interpolation subdivision relies on the fact that the a_{j,k} are local averages, i.e., inner products with box functions; according to (1), the dual functions are suitably normalized box functions supported on [x_{j,k}, x_{j,k+1}]. Note that in the canonical case with w(x) = 1, the dual functions are box functions of height inversely proportional to the width of their support. In the interpolating case, the s_{j,k} were actual evaluations of the function. This implies that the (formal) duals φ̃_{j,k} are Dirac pulses, since taking inner products with Dirac pulses amounts to evaluating the function at the location where the pulse is located.
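The box-function duals can be checked numerically: one step of the quadratic (N = 3) average-interpolation scheme on a regular grid refines interval averages while conserving each parent average, i.e., the inner products against the box duals survive refinement. A sketch (my code; the closed-form child averages a_k ∓ (a_{k+1} - a_{k-1})/8 follow from the quadratic fit, and the fallback to the constant scheme at the boundary is a simplification):

```python
import numpy as np

def refine_averages(a):
    """One step of quadratic (order-3) average-interpolating subdivision.

    For each interior interval, the unique quadratic whose averages over the
    interval and its two neighbors match a[k-1], a[k], a[k+1] has averages
    a[k] - d and a[k] + d over the two halves, with d = (a[k+1] - a[k-1]) / 8.
    Boundary intervals fall back to the constant scheme for brevity.
    """
    out = np.repeat(a, 2).astype(float)   # constant scheme everywhere
    d = (a[2:] - a[:-2]) / 8.0
    out[2:-2:2] -= d                      # left child averages (interior)
    out[3:-2:2] += d                      # right child averages (interior)
    return out
```

Each pair of children averages back to its parent, (left + right)/2 = a[k], and averages of a quadratic are refined exactly.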
Introduction to Wavelets: A Tutorial (Qiao)
“The Forest & the Trees”
• Notice gross features with a large "window"
• Notice small features with a small "window"
[Figure: two example signals plotted as magnitude vs. time over 0–1 s. Top: a stationary signal whose frequency content does not change over time. Bottom: a non-stationary signal containing 2 Hz over 0.0–0.4 s, 10 Hz over 0.4–0.7 s, and 20 Hz over 0.7–1.0 s.]
• 1960–1980: Guido Weiss and Ronald R. Coifman; Grossman and Morlet
• Post-1980: Stephane Mallat; Y. Meyer; Ingrid Daubechies; wavelet applications today
• Pre-1930: frequency analysis
• Frequency spectrum: basically the frequency components (spectral components) of that signal
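This is easy to verify numerically. In the sketch below (signal parameters assumed for illustration), a stationary sum of 2, 10, and 20 Hz sinusoids and a non-stationary signal playing the same three frequencies one after another both produce magnitude spectra peaking at or near the same three frequencies; the spectrum alone cannot tell the two signals apart in time.

```python
import numpy as np

fs = 1000                        # sampling rate in Hz (assumed)
t = np.arange(0, 1, 1 / fs)

# Stationary: 2 Hz, 10 Hz, and 20 Hz present for the whole second.
x_stat = (np.sin(2 * np.pi * 2 * t)
          + np.sin(2 * np.pi * 10 * t)
          + np.sin(2 * np.pi * 20 * t))

# Non-stationary: the same three frequencies, one after another.
x_nonstat = np.where(t < 0.4, np.sin(2 * np.pi * 2 * t),
             np.where(t < 0.7, np.sin(2 * np.pi * 10 * t),
                               np.sin(2 * np.pi * 20 * t)))

# The Fourier magnitude spectrum locates the frequencies present but says
# nothing about *when* each one occurs.
freqs = np.fft.rfftfreq(len(t), 1 / fs)
mag = np.abs(np.fft.rfft(x_stat))
mag_ns = np.abs(np.fft.rfft(x_nonstat))   # same peaks, smeared by the switching
peaks = sorted(freqs[np.argsort(mag)[-3:]])
```

For the stationary signal the three strongest bins sit exactly at 2, 10, and 20 Hz; for the non-stationary one the energy concentrates around the same bins, with leakage from the abrupt switches.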
Sparse Image and Signal Processing
Sparse Image and Signal Processing: Wavelets, Curvelets, Morphological Diversity, by Jean-Luc Starck, Fionn Murtagh, and collaborators (Cambridge, 336 pages). The book presents the state of the art in sparse and multiscale image and signal processing, covering linear multiscale transforms such as the wavelet, ridgelet, and curvelet transforms, as well as non-linear multiscale transforms, mathematical morphology operators, sparse representation and modeling, and applications including denoising, inverse-problem regularization, blind source separation, missing-data analysis, and compressed sensing, with MATLAB and IDL code available on an associated web site. (The remainder of the scraped page text is not recoverable.)
GEOPHYSICS, VOL. 63, NO. 1 (JANUARY-FEBRUARY 1998); P. 297-313, 19 FIGS.

Wavelets, well ties, and the search for subtle stratigraphic traps

Anton Ziolkowski*, John R. Underhill*, and Rodney G. K. Johnston*

ABSTRACT

We examine the conventional methodology for tying wells to processed seismic data and show why this methodology fails to allow for reliable interpretation of the seismic data for stratigraphy. We demonstrate an alternative methodology that makes the tie without the use of synthetic seismograms, but at the price of measuring the seismic source signature, the cost of such measurements being about 1% of data acquisition costs. The essence of the well tie is (1) to identify geological and seismic interfaces from the logs and core, (2) to measure the one-way traveltime to these interfaces using downhole geophones, and (3) to use the polarity information from (1) and the timing information from (2) to identify the horizons on the zero-phase processed seismic data. Conventional processing of seismic data usually causes the wavelet to vary from trace to trace, and conventional wavelet extraction at a well using the normal-incidence reflection coefficients relies on a convolutional relationship between these coefficients and the processed data that has no basis in the physics of the problem. Each new well introduces a new wavelet and poses a new problem: how should the zero-phasing filter be derived between wells?

Our methodology consists of three steps: (1) determination of the wavelet, consisting of all known convolutional effects, before any processing, using measurements of the source time function made during data acquisition; (2) compression of this wavelet to the shortest zero-phase wavelet within the bandwidth available; and (3) elimination of uncontrolled distortions to the wavelet in subsequent processing. This method is illustrated with data from the prospective Jurassic succession in the Moray Firth rift arm of the North Sea, in which we have identified, for the first time on seismic data, a major regional unconformity that cuts out more than 20 Ma of geological time. This method offers two major benefits over the conventional approach. First, all lateral variations in the processed seismic data are caused by the geology. Second, events on the processed seismic data may be identified from well logs simply by their polarity and timing. It follows that events can then be followed on the seismic data from one well to another with confidence, the seismic data can be interpreted for stratigraphy, and subtle stratigraphic traps may be identified.

INTRODUCTION

Interpretation of seismic reflection data for stratigraphy requires precise knowledge of the wavelet (Anstey, 1978). Conventionally, "the wavelet" that "ties" seismic data to the stratigraphy known from a well is determined from both the seismic data and the well logs (e.g., White, 1980). This approach has problems that have not been addressed properly.

Published on "Geophysics on Line" April 22, 1997. Manuscript received by the Editor November 2, 1995; revised manuscript received January 15, 1997.
*Department of Geology and Geophysics, University of Edinburgh, Grant Institute, Kings Building, West Mains Road, Edinburgh EH9 3JW, United Kingdom. E-mail: anton.ziolkowski@; john.underhill@; rodney.johnston@.
© 1998 Society of Exploration Geophysicists. All rights reserved.

Conventionally, the well-tie problem is this. The seismic data are processed to a "known" phase, typically zero phase. This means the phase of the seismic wavelet that was assumed to be in the data has been removed. At the well position, the seismic data should then "tie," that is, strongly correlate, with the synthetic seismogram calculated by convolving the primary reflectivity series with an appropriate zero-phase wavelet. Normally there is a mis-tie, and the synthetic seismogram is then adjusted to make the tie. This adjustment has three components: a time shift, or linear phase component in the frequency domain; an adjustment to the amplitude spectrum of the wavelet; and an adjustment to the phase spectrum of the wavelet (e.g., Nyman et al., 1987; Richard and Brac, 1988). This approach essentially admits that the so-called zero-phase seismic data are not zero phase, as pointed out in Nyman et al. (1987). An almost perfect tie can be made at any well with this approach. However, if such a "tie" is made at one well, and a marker horizon is followed from this well to the next well, there is normally a mis-tie of, typically, 30 ms at the next well. A new wavelet must be found to "tie" the seismic data at the new well. It follows that "the wavelet" is unknown unless there is a well, and the interpretation of the seismic data for stratigraphy between the wells or beyond well control can be unreliable. Poggiagliolmi and Allred (1994) have developed a method that forces the seismic data to tie with the same zero-phase wavelet at every given well, but it requires not only time shifts of the synthetic seismograms but also a space-adaptive operator that changes the amplitude spectrum and phase spectrum of all traces in the seismic data.

There are two problems with these approaches. First, the time shifts of the synthetic seismograms ignore the constraints imposed by the check-shot data at each well. Second, since the wavelet that makes the tie is, in principle, known only at the well, any interpretation of the seismic data for stratigraphy between the wells or beyond well control is, in principle, unreliable.

Consider the history of hydrocarbon exploration in the North Sea. In more than three decades of exploration, over 70% of all the oil and gas has been found to be reservoired in Jurassic sediments in structural traps (Spencer et al., 1996), most notably those created during the Late Jurassic rift episode, including the Brent, Ninian, and Statfjord fields. Over 90% of these reserves were discovered in the 1970s in the most obvious seismically defined traps (Spencer et al., 1996). Even though comparatively few readily identifiable major structures have been found subsequently, attention has turned only recently to the deliberate search for subtle stratigraphic traps. Of those found so far, none appears to have been the result of a deliberate search. The Miller Field is a case in point: it is essentially a Late Jurassic stratigraphic trap in an intrabasinal location and was discovered only because of the drilling of a minor structural expression at the base Cretaceous level (Rooksby, 1991; Garland, 1993).

For the deliberate search for stratigraphic traps to be successful, it is essential that the data be acquired and processed with this as a major goal, rather than merely as a secondary possibility following the determination of structure. Determination of structure from seismic data is essentially a search for a velocity model that allows the seismic data to be migrated to determine a sharp image. The velocities are found by lining up seismograms, using the rate of change of traveltime with offset. This is a first-order effect. In contrast, determination of stratigraphy from seismic data is essentially the correlation of seismic reflection amplitudes with changes in acoustic impedance in the subsurface. This is a second-order effect and requires more precision.

We argue that this correlation requires the absolute arrival times of the reflections at a well to be known. In the presentation of our argument, we found it necessary to examine what is meant by "a good well tie," "the seismic wavelet," and "the convolutional model of the seismogram," the effects of viscoelastic absorption on wave propagation, and the role of conventional predictive deconvolution in seismic data processing. It is shown that conventional processing forces the wavelet to vary uncontrollably from trace to trace.

In our approach, the wavelet is determined from measurements made during acquisition and then is compressed to the shortest possible wavelet within the bandwidth available. Subsequent processing of the data is constrained to introduce no uncontrolled distortions to the wavelet. In particular, predictive deconvolution with a predictive gap less than the length of the wavelet is eliminated. The argument is illustrated with data from a seismic survey tying exploration wells that targeted the prospective Jurassic reservoir interval in the Inner Moray Firth, at the western end of one of the three main rift arms in the North Sea Basin.

THE CONVOLUTIONAL MODEL OF THE SEISMOGRAM

Consider the modeling of seismic reflection data in a computer, using, for example, a finite-difference scheme in three dimensions. Every element in the earth is parameterized by its density and elastic constants. Absorption can be included by allowing the stress to depend linearly on the strain rate as well as on the strain itself. Each element in the model earth would then be linearly viscoelastic. The source and receivers must also be characterized.

The receivers have the known, measured characteristics of hydrophone or geophone arrays, together with the known responses of the associated recording systems. Over the bandwidth of interest, and at the signal levels we measure, these devices are linear, and the effects of their responses are strictly convolutional. It is possible that there are severely attenuating zones beneath the geophones. These can be characterized using linear viscoelasticity, as described above. Such zones are, in any case, part of the earth, and not part of the receivers.

The input to the model is the source time function of the source in the right units. For Vibroseis vibrators, this is the force applied by the vibrator plate to the surface of the earth, and it is not necessarily the same for each vibrator (Baeten and Ziolkowski, 1990). For the dynamite source it is the radial displacement of the cavity generated by the dynamite explosion. For an air gun, it is the displacement of the air-gun bubble.
The signal generated by the computer program at a receiver is the convolution of the source time function with the earth impulse response. This response is the pressure or particle velocity that would be generated at the receiver if the source time function were an impulse. The impulse response at each receiver is different, because the travel paths vary from receiver to receiver. At a given receiver the impulse response is the sum of many different arrivals, each of which has traveled a different path. The viscoelastic absorption of energy along each of these paths is different. As a result, the effects of absorption are different for each arrival in the seismogram and are therefore not convolutional. Therefore, they cannot, in principle, be removed by deconvolution.

In the recorded seismogram there are three convolutional components: the source time function, the receiver response, and the response of the recording system. These components may be removed by deconvolution, provided they are known. In real seismic data, the characteristics of the receivers and recording systems are known with great precision. The source time function can be measured for every major source (Ziolkowski, 1991), and the cost, for marine seismic data, is on the order of 1% of the data acquisition cost (Hones, 1996).

THE WAVELET

What is "the wavelet"? It is impossible to give a simple answer to this question. One reason we find a different wavelet at every well is that conventional seismic data processing can introduce uncontrolled distortions to the wavelet. Another is that conventional methodology (White, 1980) requires the simple normal-incidence synthetic seismogram to "match" the seismic data after stack and migration after stack. However, there is no theory to support this requirement. Therefore, the match tells us nothing about the meaning of this "wavelet," and therefore nothing about the stratigraphy. These claims are discussed further below. At this point we introduce our approach.

Our approach is (1) to determine the wavelet before any processing, using measurements of the source wavefield made during data acquisition; (2) to compress this wavelet to the shortest possible wavelet within the bandwidth available; and (3) to ensure that the subsequent processing introduces no unknown distortions to the wavelet. With this approach, it is possible to say what the wavelet is. It involves measurement of the source signature. This step is indispensable, and we consider it to be a small price to pay to get precise well ties and to be able to interpret stratigraphy reliably between wells.

The convolutional wavelet is the combination of all convolutional effects in the shot record. For a point source and point receiver it is the source time function convolved with the receiver response and the recording instrument response. In principle, the surface of the earth is a boundary to the earth and should not be considered part of the source or receiver. Until we have an exact solution to the removal of free-surface effects, however, source ghosts should be included where applicable (air guns and dynamite), and also receiver ghosts, where applicable (marine surveys). Marine source arrays and vibrator arrays are directional, as are the receiver groups. In theory, these directional effects can be removed in common-receiver gathers for source effects (assuming shot-to-shot repeatability) and in common-shot gathers for receiver effects (e.g., Baeten and Ziolkowski, 1990, Chap. 7). In practice, it may be sufficient for most purposes simply to take an average angle of incidence, say 20°, to allow for the nonvertical incidence of the raypath from the source to the target and back to the receiver. This raypath is illustrated in Figure 1 for a marine example.

FIG. 1. Geometry for calculation of the wavelet.

The resulting wavelet, shown in Figure 2 and calculated using source measurements recorded through the same instruments as the seismic reflection data, defines the bandwidth of the data: the data cannot have a bandwidth any greater than the bandwidth of this wavelet.

The method used to calculate the wavelet from measurements is discussed in detail in Ziolkowski and Johnston (1997). The source wavefield of the air-gun array was measured for each shot using near-field hydrophones. Using the method of Ziolkowski et al. (1982) and Parkes et al. (1984), the measurements were used to decompose the wavefield into the individual spherical waves generated by the oscillating bubbles emitted by the air guns. The spherical waves were then superposed in the frequency domain, taking into account the geometry of the array and the free surface, to compute a far-field signature at 20° to the vertical. A receiver ghost was added in the frequency domain, taking into account the 20° angle of incidence and the measured depth of the streamer. A reflection coefficient of −1 was assumed at the water surface. The wavelet thus obtained, as shown in Figure 2, contains bubble pulses and is very long. The bubble pulses do not appear to be very significant when the full frequency bandwidth is included. At the target the highest frequency in the data may be 40 Hz. Figure 3 shows the wavelet of Figure 2 filtered with a minimum-phase filter with a high-cut frequency of 40 Hz; the primary-to-bubble ratio is reduced by a factor of 2.

FIG. 2. The 20° wavelet.

FIG. 3. The wavelet of Figure 2 after low-pass filtering with 40 Hz high cut.

The wavelet clearly needs to be shortened to obtain maximum resolution within the bandwidth available. Figure 4 shows the construction of a signature deconvolution filter that makes the known long unfiltered convolutional wavelet of Figure 2 into a short minimum-phase wavelet with essentially the same bandwidth. The application of the signature deconvolution filter to the data makes the wavelet in the data short, thus increasing the resolution. We now have a known short wavelet, Figure 4b, in the data. If the wavelet changes, for instance because of changes in the source signature detected through continuous monitoring during data acquisition, we must keep the desired short wavelet, Figure 4b, fixed and design a new filter, Figure 4c, to shape this new long wavelet to the same short desired wavelet. This will ensure that, after signature deconvolution, the same known wavelet, Figure 4b, exists throughout the data.

FIG. 4. Signature deconvolution: (a) the wavelet of Figure 2; (b) the delayed desired minimum-phase wavelet; (c) the signature deconvolution filter; (d) the convolution of the wavelet (a) with the filter (c).

Figure 5 shows the process of Figure 4 in the frequency domain. The air-gun bubble pulses seen in Figure 4a exhibit the characteristic peaks and notches in the amplitude spectrum, as shown in Figure 5a. The tuned air-gun array reduces the bubble pulses somewhat, but not completely. Figure 5b shows the amplitude spectrum of the desired wavelet: it is designed to be essentially the same as Figure 5a, but smooth. The amplitude spectrum of the filter, Figure 5c, that shapes the calculated wavelet into the desired wavelet is essentially flat over the bandwidth of the wavelet, but with superimposed notches and peaks to compensate for the corresponding peaks and notches in the spectrum, Figure 5a, of the wavelet. The result of multiplying Figure 5a by Figure 5c is Figure 5d, which is very similar to the desired spectrum, Figure 5b. Because the filter spectrum, Figure 5c, is essentially flat, the signal-to-random-noise level is essentially unchanged. Thus the application of the signature deconvolution filter, designed in this way, increases the resolution, by shortening the wavelet, but without decreasing the signal-to-noise ratio.

FIG. 5. Frequency-domain equivalent of Figure 4: (a) amplitude spectrum of 4(a); (b) amplitude spectrum of 4(b), which is a smoothly interpolated version of (a); (c) amplitude spectrum of 4(c), which is, on average, flat in the bandwidth of the wavelet; (d) amplitude spectrum of 4(d).

THE ROLE OF PREDICTIVE DECONVOLUTION

Predictive deconvolution is required in conventional seismic data processing to attenuate multiples generated by the free surface (Peacock and Treitel, 1969; Taner et al., 1995). The predictive gap must be shorter than the multiple period. In the case of marine seismic data processing, the predictive gap must be shorter than the two-way traveltime in the water. If the wavelet is shorter than this gap, it is unaffected by the predictive deconvolution operator. In other words, if the original long wavelet is compressed to a known wavelet, shorter than the predictive gap, before predictive deconvolution is applied, the wavelet will be the same after predictive deconvolution. If the wavelet is known to be long, as in marine surveys, performing the signature deconvolution step before predictive deconvolution is critical. If the wavelet is not known, as is usually the case, this critical step cannot be done, and the wavelet will be altered by the predictive deconvolution operator, as discussed below.

TRUE-AMPLITUDE RECOVERY

The seismic reflection data are not stationary. That is, the statistical properties of the data are not independent of the time origin. In particular, there are no data before the shot is fired, the biggest amplitudes are usually the first arrivals, the amplitude and bandwidth decrease with time, and there are no data after recording has stopped. A stationary time series is infinitely long. Predictive deconvolution uses a prediction-error filter (Peacock and Treitel, 1969; Taner et al., 1995) to remove the so-called "predictable part" of the data (the multiples) from the seismogram, revealing the so-called "unpredictable part" (the primaries). The "predictable part" is estimated from the autocorrelation of the data, which also contains the primaries. Predictive deconvolution assumes that there are no accidental correlations between primary reflections that can contribute to the autocorrelation of the data. If the data were only a small part of an infinitely long stationary time series, the prediction-error filter could be computed from the autocorrelation function estimated from any suitably long segment in the time series, by definition including the segment containing the primary reflections we are trying to reveal. Since that segment is the only segment available, the stationarity assumption is critical to the predictive deconvolution process. This argument is discussed in more detail in Ziolkowski (1984, Chap. 5).

Since seismic data clearly are not stationary, the predictive deconvolution step is preceded by a step, usually known as "true-amplitude recovery," that applies a time-dependent gain to the data to make the data look more as if they are segments of stationary time series. Typically, this gain function is t^n, where t is time and n is approximately 2. This gain correction introduces a time-dependent distortion to the wavelet: that is, after application of this gain correction the wavelet varies down each trace and the convolutional model is violated (Ziolkowski, 1984, Chap. 2). The only gain correction that does not violate the convolutional model is an exponential gain, e^(αt), typically about 12 dB/s. Neither t^n nor e^(αt) makes the data stationary. There is no "true-amplitude recovery" correction that makes the data stationary.
If we want to keep track of the wavelet, we should use the exponential gain; if we are more concerned about stationarity than wavelet distortion, it may be better to use t^n. These two operations are equivalent only at one time instant.

ZERO-PHASING

At some point we should like to make the wavelet zero phase, to enable the interpreter to line up known acoustic interfaces with peaks or troughs in the seismic data. Zero-phasing can be done before or after predictive deconvolution. After the exponential gain correction, the wavelet is w(t), which is related to the original desired wavelet d(t) as

w(t) = e^(αt) d(t).   (1)

The wavelet w(t) has a Fourier transform W(f), where f is frequency and

W(f) = |W(f)| exp{iφ_w(f)},   (2)

in which |W(f)| is its amplitude spectrum and φ_w(f) is its phase spectrum. The zero-phasing filter is simply the multiplication of the Fourier transform of the data by the function exp{−iφ_w(f)}. The result is to produce a wavelet w_0(t) in the data that has the Fourier transform

W_0(f) = |W(f)|.   (3)

This is the shortest possible wavelet within the bandwidth of the data (Berkhout, 1984, Chap. 1) and is illustrated in Figure 6. After application of the zero-phasing filter, each seismogram in the data is now the earth impulse response (with exponential gain applied) convolved with this wavelet, with its peak exactly at the arrival time of the event. The polarity is our choice. In this example, the polarity is chosen to give a black peak in the seismic data corresponding to a step increase in acoustic impedance in the well. This is SEG normal polarity (Sheriff and Geldart, 1995, p. 183). If it is required that a black peak should correspond to a step decrease in acoustic impedance, for instance to show gas, the polarity of the data should be reversed after zero-phasing.

FIG. 6. Zero-phase equivalent of 4(b).

It should be noted that the zero-phasing step can be combined with the signature deconvolution step. Instead of the minimum-phase wavelet d(t), the wavelet e^(−αt) w_0(t) is used as the desired wavelet. A different signature operator would be designed and applied, causing this wavelet to be present in the data after signature deconvolution. After the exponential gain correction, the wavelet in the data would then be w_0(t) and no further operations to the wavelet would be required.

THE EFFECT OF CONVENTIONAL PROCESSING ON THE WAVELET

Consider what happens to the wavelet with conventional processing. First, the conventional t^n process of "true-amplitude recovery" applies a distortion to the wavelet that varies along each trace of the data, thus violating the convolutional model. Next, predictive deconvolution is applied with a gap that is shorter than the wavelet. The predictive deconvolution operator is derived from the autocorrelation of the trace, which is different for every trace in the gather in the 1-D case (Peacock and Treitel, 1969) and different for every gather in the 2-D case (Taner et al., 1995). This is partly because the so-called "predictable part" of the trace, including the wavelet and the multiples, is expected to vary with offset, but it is also because the so-called "unpredictable part," the response of the earth, is not white (Fokkema and Ziolkowski, 1987) and contributes to the autocorrelation function in a way that must vary with offset and from gather to gather. The operator is therefore different for every trace of the data. After predictive deconvolution there is therefore a different wavelet on every trace of the data. Even if the quality control in data acquisition ensured perfect repeatability of the wavelet from trace to trace, the predictive deconvolution step would destroy it. There is, therefore, no such thing as "the wavelet" after conventional prestack predictive deconvolution.

The data are stacked. Then, after stack, predictive deconvolution is often applied again, usually with different parameters, again with a different operator on every trace, thus ensuring that there are further differences in the wavelet from trace to trace. The conventional use of predictive deconvolution thus guarantees that lateral variations in the data cannot be attributed solely to lateral variations in the geology, thus rendering the data unreliable for the interpretation of stratigraphy.

THE WELL TIE BY CHARACTER-MATCHING AND THE CONVOLUTIONAL MODEL

The goal of seismic data processing is to render the seismic data interpretable. The interpreter attempts to correlate reflections seen in the data with known acoustic impedance contrasts in the earth. The processed seismic reflection data are presented to the interpreter as if they contain primary normal-incidence reflections only, with all unwanted multiples, refractions, diffractions, surface waves, compression-to-shear conversions, etc. having been removed successfully by the processing.

The convolutional model described above for each recorded trace of a shot gather is valid because of the linearity of the viscoelastic earth model and the linear response of both the receiver and the recording system. After the shot records have been combined in conventional processing to produce stacked sections, the resulting seismic traces are not the convolution of some physically meaningful wavelet with the primary reflection coefficient series. If, nevertheless, this convolution is assumed (e.g., White, 1980; Nyman et al., 1987; Richard and Brac, 1988; Poggiagliolmi and Allred, 1994), each well introduces a new wavelet, and a new question is posed: how should the zero-phasing filter be derived between wells? This approach does not solve a problem; it simply creates a new one.

PRECISE SYNTHETIC SEISMOGRAMS

Precise synthetic shot records could in principle be computed properly using finite differences, as outlined above, provided a completely faithful 3-D anisotropic viscoelastic model of the earth existed, and provided all receiver and source effects were included, as discussed above. To calculate these synthetic shot records would require much more knowledge of the earth than can be obtained from a single well and would also require knowledge of the source that is usually not available. If all the required knowledge were available, the synthetic shot records would resemble the recorded data. Then there would be a reasonable match to the processed data only if the synthetic shot records were processed in the same way as the real data.

DETERMINISTIC WELL-TIE METHODOLOGY

The best match we can hope to get by the simple convolution of a wavelet with the primary normal-incidence reflection coefficients obtained from a limited part of the well is for the timing and polarity of the principal reflections. Suppose we have measured the signature and have followed the recommended processing scheme described above, making sure that no uncontrolled distortions have been applied to the wavelet. The wavelet in the data is some known, short, zero-phase wavelet w_0(t), as shown in Figure 6. Both the polarity and the arrival time of events in the real data are now the same as those of the normal-incidence reflection coefficient series converted to two-way traveltime using the measured check-shot data at the well locations. With this processing scheme, known interfaces in the wells may therefore be correlated with reflections in the seismic data without synthetic seismograms. We illustrate this method with data from the Inner Moray Firth of the North Sea.

THE STRATIGRAPHIC SETTING OF THE INNER MORAY FIRTH BASIN

The Jurassic stratigraphy of the Inner Moray Firth Basin is well known from nearby onshore outcrops, drilling, and seismic exploration (e.g., Sykes, 1975; Andrews et al., 1990; Underhill, 1991a and b; Stephen et al., 1993; Wignall and Pickering, 1993; Davies et al., 1996). Regional uplift during the Tertiary (Thomson and Underhill, 1993; Hillis et al., 1994) has left the Jurassic relatively shallow in the Inner Moray Firth, and the seismic resolution of Jurassic stratigraphy is therefore generally much better here than elsewhere in the North Sea.

Seismic stratigraphic studies, based on the recognition and correlation of surfaces defined by reflector terminations, enable the Jurassic to be subdivided into two main megasequences, J1 and J2 (Underhill, 1991a and b). The boundary between the two megasequences is marked by regional onlap onto a strong reflector that corresponds to a unit known locally as the Alness Spiculite. Analysis of well logs enables the J1 megasequence to be subdivided further into genetic stratigraphic sequences through the identification and correlation of through-going marine shale horizons, or maximum flooding surfaces, as shown in Figures 7 and 8 (Stephen et al., 1993; Partington et al., 1993).

A regional correlation of these events has led to the identification of the Mid-Cimmerian Unconformity, which allows the J1 seismic megasequence to be subdivided into two parts, J1a and J1b (of Underhill, 1991a and b). This unconformity is a significant, but subtle, gentle surface of truncation and onlap, which records an episode of thermal doming and subsequent deflation of the North Sea area (Underhill and Partington, 1993 and 1994). The best sequence-stratigraphic definition of the top of the J1 megasequence and of the Mid-Cimmerian Unconformity occurs in the central areas of the Inner Moray Firth (Stephen et al., 1993).

RESULTS OF THE NEW WELL-TIE METHODOLOGY

The methodology we propose provides two separate benefits over the conventional approach: (1) accurate stratigraphic correlation between wells and (2) detection of subtle stratigraphic truncations. We illustrate these benefits with examples from a speculative well-tie seismic survey shot in October 1992 by the Seismograph Service Limited (SSL) Seisventurer in the Inner Moray Firth of the North Sea. The layout of the survey is shown in Figure 9. The source wavefield was measured according to the method in Ziolkowski et al. (1982) and Parkes et al. (1984), and the seismic reflection data were processed to zero phase according to our method, described above. Wells 12/22-2 and 12/22-3 are connected by a seismic line, as shown in Figure 9, and lie in the area where the top of the J1 megasequence is truncated (Figure 7) and hence where the Mid-Cimmerian Unconformity might best be resolved.
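The zero-phasing step defined by equations (2) and (3) above can be sketched numerically: dividing a wavelet's spectrum by exp{iφ_w(f)} leaves the real, even spectrum |W(f)|, whose inverse transform is a symmetric pulse peaking at zero time with the bandwidth unchanged. The wavelet below is a made-up decaying pulse standing in for the measured, signature-deconvolved wavelet; in the paper's flow the filter is applied to the data, but applying it to the wavelet itself makes the effect visible.

```python
import numpy as np

dt = 0.004
n = 256
t = np.arange(n) * dt

# Made-up causal (non-zero-phase) wavelet, for illustration only.
w = np.exp(-25 * t) * np.sin(2 * np.pi * 20 * t)

W = np.fft.fft(w)
phase = np.exp(1j * np.angle(W))          # exp{i * phi_w(f)}, unit magnitude

# Zero-phasing filter exp{-i * phi_w(f)} applied to the wavelet:
# the resulting spectrum is |W(f)|, equation (3).
w0 = np.fft.ifft(W / phase).real

# w0 is zero phase: its amplitude spectrum is unchanged, its peak sits
# at time zero, and it is (circularly) symmetric about t = 0.
assert np.allclose(np.abs(np.fft.fft(w0)), np.abs(W))
assert np.argmax(np.abs(w0)) == 0
assert np.allclose(w0[1:], w0[:0:-1])     # even symmetry
```

Because the filter has unit magnitude at every frequency, it changes only the phase: resolution is maximized within the existing bandwidth without touching the signal-to-noise ratio, which is the property the methodology relies on for lining up peaks with impedance steps.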