Medical imaging with mercuric iodide direct digital radiography


Grade 8 English Reading Comprehension on Artificial Intelligence: 20 Questions


Passage 1
Background passage: Artificial intelligence (AI) is making significant impacts in the field of healthcare. One of the major applications of AI in healthcare is disease diagnosis. AI algorithms can analyze large amounts of medical data and detect patterns that may be difficult for human doctors to identify. For example, AI can be used to analyze blood test results, medical images, and patient symptoms to diagnose diseases such as cancer, diabetes, and heart disease. Another area where AI is being used in healthcare is medical imaging analysis. AI can analyze medical images such as X-rays, CT scans, and MRIs to detect abnormalities and assist radiologists in making diagnoses. AI can also help in the early detection of diseases by identifying subtle changes in medical images that may not be visible to the human eye. AI is also being used to develop personalized treatment plans for patients. By analyzing a patient's medical history, genetic information, and other data, AI can recommend personalized treatment options that are tailored to the patient's specific needs. In addition to disease diagnosis and treatment planning, AI is also being used in healthcare for tasks such as patient monitoring and drug discovery. AI-powered devices can monitor patients' vital signs and detect early signs of deterioration, allowing for timely intervention. In drug discovery, AI can analyze large amounts of data to identify potential drug candidates and predict their efficacy and safety. Overall, AI has the potential to revolutionize healthcare by improving disease diagnosis, treatment planning, and patient outcomes. However, there are also concerns about the ethical and legal implications of using AI in healthcare, such as issues related to data privacy and the responsibility of AI in making medical decisions.
Question 1: What is one of the major applications of AI in healthcare?
A. Education
B. Disease diagnosis
C. Entertainment
D. Transportation
Answer: B.

Genome- and exposome-wide association studies of human brain imaging phenotypes


International Journal of Medical Radiology, 2021 May, 44(3): 249-253. Genome- and exposome-wide association studies of human brain imaging phenotypes. YU Chunshui.
[Abstract] Brain imaging techniques can precisely characterize the structure and function of the human brain, and their inter-individual variation underlies individual differences in cognitive function and in susceptibility to neuropsychiatric disorders.

Individual differences in human brain structure and function are related to genetic variation, environmental exposures, gene-gene interactions, environment-environment interactions, and gene-environment interactions, and therefore need to be studied systematically across the breadth of the whole genome and exposome.

This commentary focuses on the potential value of, and the challenges facing, genome-wide association studies, exposome-wide association studies, genome-wide gene-gene interaction studies, exposome-wide environment-environment interaction studies, and genome-exposome-wide gene-environment interaction studies in revealing the causes of individual differences in human brain structure and function.

[Keywords] Magnetic resonance imaging; genome; exposome; human brain. Chinese Library Classification: R394; R445.2. Document code: A.
Genome- and exposome-wide association studies of human brain imaging phenotypes. YU Chunshui. Department of Radiology and Tianjin Key Laboratory of Functional Imaging, Tianjin Medical University General Hospital, Tianjin 300052, China. E-mail: ******************.cn
[Abstract] Human brain structure and function can be precisely characterized by brain imaging techniques, and their inter-individual variations are associated with individual differences in cognitive abilities and susceptibility to neuropsychiatric disorders. Individual differences in human brain structure and function can be attributed to genetic variations, environmental exposures, as well as gene-gene, environment-environment and gene-environment interactions. These effects should be investigated without bias across the whole genome and exposome. Here, we discuss the potential values and challenges in investigating individual differences in brain structure and function by genome-wide association, exposome-wide association, genome-wide gene-gene interaction, exposome-wide environment-environment interaction and genome-exposome-wide gene-environment interaction analyses.
[Keywords] Magnetic resonance imaging; Genome; Exposome; Human brain. Int J Med Radiol, 2021, 44(3): 249-253.
Author affiliation: Department of Radiology, Tianjin Medical University General Hospital, and Tianjin Key Laboratory of Functional Imaging, Tianjin 300052, China. E-mail: ******************.cn. Funding: National Key Research and Development Program of China, special project on the prevention and control of major chronic non-communicable diseases (2018YFC1314300); key program of the National Natural Science Foundation of China (82030053). DOI: 10.19300/j.2021.S18881. Expert commentary.
Brain imaging techniques, represented by MRI, can accurately assess the structure and function of the human brain, and their inter-individual variation underlies individual differences in cognitive function and in susceptibility to neuropsychiatric disorders [1].
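At its core, the genome-wide association design described above reduces to fitting one association test per genetic variant against an imaging phenotype and applying a stringent multiple-testing threshold. A minimal sketch under a simple additive model, with illustrative simulated data; the function name and array shapes are mine, and covariates (age, sex, scanner, ancestry) and mixed-model corrections used in real studies are omitted:

```python
import numpy as np
from scipy import stats

def gwas_scan(genotypes, phenotype):
    """Per-variant linear association between genotype dosage (0/1/2) and a
    quantitative imaging phenotype (e.g., a regional brain volume)."""
    n_subjects, n_variants = genotypes.shape
    pvalues = np.empty(n_variants)
    for j in range(n_variants):
        dosage = genotypes[:, j].astype(float)
        # additive model: phenotype ~ dosage; covariates deliberately omitted here
        result = stats.linregress(dosage, phenotype)
        pvalues[j] = result.pvalue
    return pvalues

# Illustrative run on simulated data; 5e-8 is the conventional genome-wide
# significance threshold for roughly one million independent common variants.
rng = np.random.default_rng(0)
genotypes = rng.integers(0, 3, size=(500, 2000))
phenotype = rng.normal(size=500)
pvalues = gwas_scan(genotypes, phenotype)
print(int((pvalues < 5e-8).sum()), "variants reach genome-wide significance")
```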

Consistency comparison of 3D MEDIC and 3D SPACE MR neurography of the lumbosacral nerve roots


Journal of Capital Medical University, Feb. 2021, Vol. 42, No. 1. [doi: 10.3969/j.issn.1006-7795.2021.01.022] Clinical Research.
Consistency comparison of 3D MEDIC and 3D SPACE MR neurography of the lumbosacral nerve roots. SUN Zheng(1,2), KONG Chao(3), LU Shibao(3), CHEN Hai(4), DA Yuwei(4), ZHANG Miao(1,2), LU Jie(1,2). (1. Department of Radiology, Xuanwu Hospital, Capital Medical University, Beijing 100053; 2. Beijing Key Laboratory of Magnetic Resonance Imaging and Brain Informatics, Beijing 100053; 3. Department of Orthopedics, Xuanwu Hospital, Capital Medical University, Beijing 100053; 4. Department of Neurology, Xuanwu Hospital, Capital Medical University, Beijing 100053)
[Abstract] Objective: To verify the feasibility and reproducibility of the three-dimensional multi-echo data image combination with selective water excitation (3D MEDIC WE) and the three-dimensional sampling perfection with application-optimized contrasts using different flip angle evolutions (3D SPACE STIR) sequences for imaging the lumbosacral nerve roots.

Methods: Fifty-five subjects were divided into a normal control group with no lumbar abnormalities (n = 20), a simple lumbar disc herniation (LDH) group (n = 20), and a chronic inflammatory demyelinating polyradiculoneuropathy (CIDP) group (n = 15). Both lumbosacral nerve root imaging sequences were applied to each subject; image quality was evaluated with the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and contrast ratio (CR), and the consistency of nerve root diameter measurements was verified in the normal control, CIDP, and LDH groups.
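The image-quality metrics named in the methods (SNR, CNR, CR) are simple region-of-interest statistics. A minimal sketch using the common definitions; the function names and ROI values are illustrative, ROI placement and any scanner-specific noise correction are omitted, and the exact formulas used in the study may differ:

```python
import numpy as np

def snr(signal_roi, noise_roi):
    """Signal-to-noise ratio: mean signal over the standard deviation of background noise."""
    return np.mean(signal_roi) / np.std(noise_roi)

def cnr(roi_a, roi_b, noise_roi):
    """Contrast-to-noise ratio between two tissues (e.g., nerve root vs. adjacent muscle)."""
    return (np.mean(roi_a) - np.mean(roi_b)) / np.std(noise_roi)

def contrast_ratio(roi_a, roi_b):
    """Contrast ratio between two tissues."""
    a, b = np.mean(roi_a), np.mean(roi_b)
    return (a - b) / (a + b)

# Illustrative intensity samples only (not data from the study)
nerve = np.array([310.0, 295.0, 305.0])
muscle = np.array([180.0, 175.0, 190.0])
background = np.array([12.0, 9.0, 11.0, 10.0])
print(snr(nerve, background), cnr(nerve, muscle, background), contrast_ratio(nerve, muscle))
```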

Medical Image Recognition Courseware


1 Principles of MRI
Understand the physics behind magnetic resonance imaging and how it creates detailed anatomical images.
2 MRI vs CT
Compare the advantages and limitations of MRI and CT scans in different clinical scenarios.
Principles of Medical Imaging
Understand the fundamental principles of medical image acquisition and interpretation.
History of Medical Imaging
Trace the evolution of medical imaging techniques from X-rays to modern advancements.
Women's Imaging
Screening and diagnosis of breast and gynecological conditions, including mammography and ultrasound.
Future Trends in Medical Imaging Technology
Image Processing and Analysis
Discover the role of image processing and analysis in enhancing medical images and extracting valuable diagnostic information.
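As a concrete illustration of the "Image Processing and Analysis" topic listed above, a minimal sketch, assuming a CT use case, of window/level mapping, one of the simplest enhancement steps applied to an image before interpretation; the function name and window values are illustrative:

```python
import numpy as np

def apply_window(hu_image, center, width):
    """Map CT Hounsfield units to 0-255 display grey levels for a given window."""
    lo, hi = center - width / 2.0, center + width / 2.0
    clipped = np.clip(hu_image, lo, hi)
    return ((clipped - lo) / (hi - lo) * 255.0).astype(np.uint8)

# Example: a typical brain window (center 40 HU, width 80 HU)
hu = np.array([[-1000, 0, 40], [80, 400, 1000]], dtype=float)
print(apply_window(hu, center=40, width=80))
```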

Medical Image Analysis: Translated Foreign-Language Literature (2020)


Medical Image Analysis: Human and Machine
Robert Nick Bryan, Christos Davatzikos, et al.

INTRODUCTION

Images play a critical role in science and medicine. Until recently, analysis of scientific images, specifically medical images, was an exclusively human task. With the evolution from analog to digital image data and the development of sophisticated computer algorithms, machines (computers) as well as humans can now analyze these images. Though different in many ways, these alternative means of analysis share much in common: we can model the newer computational methods on the older human-observer approaches. In fact, our Perspective holds that radiologists and computers interpret medical images in a similar fashion.

Critical to understanding image analysis, whether by human or machine, is an appreciation of what an image is. The universe, including all and any part therein, can be defined as:

U = (m, E)(x, y, z)(t)

Or, more qualitatively: the universe is mass and energy distributed within a three-dimensional space, varying with time. Scientific observations, whether encoded as numerical measurements or categorical descriptors, reflect information about one or more of these three intrinsic domains of nature. Measurements of mass and energy (m, E) are often called signals. In medical imaging, essentially all our signals reflect measurements of energy. An image can be defined as a rendering of spatially and temporally defined signal measurements, or:

I = f(m, E)(x, y, z)(t)

Note the parallelism between what the universe is and how it is reflected by an image. In the context of scientific observations, an image is the most complete depiction of observations of nature, or in the case of medicine, the patient and his/her disease. Images and images alone include explicit spatial information, information that is intrinsic and critical to the understanding of most objects of medical interest.

HUMAN IMAGE ANALYSIS

Pattern recognition is involved in many, if not most, human decisions. Human image analysis is based upon pattern recognition. In medicine, radiologists use pattern recognition when making a diagnosis. It is the heart of the matter. Pattern recognition has two components: pattern learning and pattern matching. Learning is a training or educational process; radiology trainees are taught the criteria of a normal chest X-ray examination and observe hundreds of normal examinations, eventually establishing a mental pattern of "normal." Matching involves decision-making; when an unknown chest film is presented for interpretation, the radiologist compares this unknown pattern to their "normal" pattern and makes a decision as to whether or not the case is normal, or by exclusion, abnormal.

One of the most striking aspects of human image analysis is how our visual system deconstructs image data in the central nervous system. The only place in the human head where there is anything resembling a coherent pattern of what we perceive as an image is on the surface of the retina. Even at this early point in the human visual process, the image data are separated into color versus light intensity pathways, and other aspects of the incoming image are emphasized and/or suppressed. The deconstruction of image data proceeds through the primary visual cortex into secondary, and higher, visual cortices where different components of image data, particularly those related to signal (brightness), space (shape), and time (change), are processed in distinct and widely separated anatomic regions of the brain.
The anatomic substrate of these cortical patches that process image data is the six layered, cerebral cortex columnar stacks of neurons that make up local neural networks.The deconstruction of image data in the human brain—“what happens where”—is relatively well understood. However, the structures and processes involved in the reintegration of these now disparate data into a coherent pattern of what we perceive remains a mystery. Regardless of how the brain creates a perceived image, the knowledge that it does so by initially deconstructing image data and processing these different data elements by separate anatomic and physiological pathwaysprovides important clues as to how images are analyzed by humans.FINDINGS = OBSERVED KEY FEATURESIn the process of deconstructing image data, the human brain extracts key, or salient, features by separate mechanisms and pathways. These key features (KFs) reveal the most important information. While their number and nature are not fully understood, it is clear that KFs include signal, spatial, and temporal information. They are separately extracted and analyzed with the goal of defining dominant patterns in the image that can be compared to previously learned KF patterns in the process of diagnostic decision-making.Given that analyzing the patterns of KFs of an image is fundamental to human image interpretation, an obvious step in the interpretive process is to define the KFs to be extracted and the patterns thereof learned. This is the learning part of pattern recognition and is a function of image data itself, empirical experience, and task definition. KFs must be contained in the image data, extractable by an observer, and relevant to the decision-making process. Since image data consist of signal, spatial, and temporal information, KFs will likewise reflect one or more of these elements. To extract a KF, a human observer has to be able to see the feature on an image. In a black and white image, a KF cannot be color. Ideally, KFs are easy to see and report, i.e., extract.A KF must make a significant contribution to some decision basedon the image data. Since the ultimate performance metric of medical image interpretation is diagnostic accuracy, KFs of medical images must individually contribute to correct diagnoses. The empirical correlation of image features with specific diagnosis by trial and error observations has been the traditional way to identify KFs on the basis of their contribution to diagnosis. Observed KFs provide the content for the “Findings” section of a radiology report. For brevity and convenience, we will focus on signal and spatial KFs.Medical diagnosis in general and radiological diagnosis in particular is organ based. The brain is analyzed as a sep arate organ from the “head and neck,” which must be separately analyzed and reported. Every organ has its unique set of KFs and disease patterns. The first step in the determination of “normal” requires the learning of a pattern, which in this case consists of the signal and spatial KFs of a normal brain.For medical images, the signal measured by the imaging device is usually invisible to humans, and therefore the detected signal must be encoded as visible light, most commonly as the relative brightness of pixels or voxels. In general, the greater the magnitude of the signal detected by the imaging device, the brighter the depiction of the corresponding voxel in the image. 
Once again, the first step in image analysis is to extract a KF from the image, in this case relative voxel brightness. An individual image's signal pattern is compared to a learnednormal pattern. If the signal pattern of an unknown case does not match the normal pattern, one or more parts of the diagnostic image must be brighter or darker than the anatomically corresponding normal tissue.The specific nature of an abnormal KF is summarized in the Findings section of the radiology report, preferably using very simple descriptors, such as “Increased” or “Decreased” signal intensity (SI). To reach a specific diagnosis, signal KFs for normal and abnormal tissues must be evaluated, though usually only abnormal KFs are reported. Signal KFs are modality specific. The SIs of different tissues are unique to that modality. For each different signal measured, there is usually a modality specific name (with X-ray images, for example, radiodensity is a name commonly applied to SI, with relative intensities described as increased or decreased).Specific objects within images are initially identified and subsequently characterized on the basis of their signal characteristics. The more unique the signal related to an object, the simpler this task. For example, ventricles and subarachnoid spaces consist of cerebrospinal fluid, which has relatively distinctive SI on computed tomography (CT) and magnetic resonance imaging (MRI). Other than being consistent with the signal of the object of interest (i.e., cerebrospinal fluid), SI is irrelevant to the evaluation of the spatial features of that object. With minima l training, most physicians can easily “extract”, i.e., see anddistinguish signal KFs on the basis of relative visual brightness.The second component of the Findings section of a radiology report relates specifically to spatial components of image data. Spatial analysis is geometric in nature and commonly uses geometric descriptors for spatial KFs. The most important spatial KFs are number, size, shape and anatomic location. A prerequisite for the evaluation of these spatial attributes is identification of the object to which these descriptors will be applied, beginning with the organ of interest.In the case of the brain, we uniquely use surrogate structures, the ventricles and subarachnoid spaces, to evaluate this particular organ's spatial properties. Due to the fixed nature of the adult skull, ventricles and subarachnoid spaces provide an individually normalized metric of an individual's brain size, shape, and position. Fortunately, the ventricles and subarachnoid spaces can easily be observed on CT, MRI, or ultrasound and their spatial attributes easily learned by instruction and repetitive observations of normal examinations.The second step of pattern recognition—pattern matching—is completely dependent on the first step of pattern recognition—pattern learning. Matching is the operative decision-making step of pattern recognition. In terms of ventricles and subarachnoid spaces, the most fundamental spatial pattern discriminator is size, whether or not the ventricles are abnormally large or small. If the ventricles and/orsubarachnoid spaces are enlarged, the differential diagnoses might include hydrocephalus or cerebral atrophy. If they are abnormally small, mass effect is suggested, and the differential diagnosis might include cerebral edema, tumor, or other space occupying lesion. 
In any case, a KF extracted from any brain scan is the spatial pattern of the ventricles and subarachnoid spaces, this specific pattern is matched against a learned, experience-based, normal pattern, and a decision of normal or abnormal is made.When reporting image features, humans tend to use categorical classification systems rather than numeric systems. Humans will readily, though not always reliably, classify a light source as relatively bright or dark, but only reluctantly attempt to estimate the brightness in lumens or candelas. Humans are not good at generating numbers from a chest film, but they are very good at classifying it as normal or abnormal. If quantitative image measurements are required, radiologists bring additional measurement tools to bear, like a ruler to measure the diameter of a tumor, or a computer to calculate its volume. If pushed for a broader, more dynamic reporting range, a radiologist may incorporate a qualitative modifier, such as “marked,” to an abnormal KF description to indicate the degree of abnormality.Interestingly, and of practical importance, human observers tend to report psychophysiological observations using a scale of no more thanseven. This phenomenon is well documented in George Miller's paper, The Magical Number 7. A comparative scale of seven is reflected in the daily use of such adjective groupings as “mild, moderate, severe”; “possible, probable, definite”; “minimal, moderate, marked.” If an image feature has the possibility of being normal, increased, or decreased, with three degrees of abnormality in each direction, the feature can be described with a scale of seven. While there are other human observer scales, feature rating scales of from two to seven generally suffice and reflect well documented behavior of radiologists.Based on the concept of extracting a limited number of KFs and reporting them with a descriptive scale of limited dynamic range, it is relatively straightforward to develop a highly structured report-generating tool applicable to diagnostic imaging studies. The relative intensity of each imaging modality's detected signal is a KF, potentially reflecting normal or pathological tissue. An accompanying spatial KF of any abnormal signal is its anatomic location. A spatial KF of brain images is the size of the ventricles and subarachnoid spaces, which reflect the presence or absence of mass effect and/or atrophy.IMPRESSION = INFERRED DIFFERENTIAL DIAGNOSISMedical diagnosis is based upon the concept of differential diagnoses, which consist of a list of diseases with similar image findings.A radiographic differential diagnoses is the result of the logicallyconsistent matching of KFs extracted from a medical image to specific diagnosis. KFs are extracted from medical images, summarized by structured descriptive findings as previously described, and a differential diagnostic list consistent with the pattern of extracted features is inferred. This inferential form of pattern matching for differential diagnosis is reflected in such publications as Gamuts of Differential Diagnosis and StatDx. These diagnostic tools consist of a list of diseases and a set of matching image KFs.Differential diagnosis, therefore, is another pattern recognition process based upon the matching of extracted KF patterns to specific diseases. A complete radiographic report incorporates a list of observed KFs summarized in the FINDINGS and a differential diagnosis in the IMPRESSION, which was inferred from the KFs. 
A normal x-ray CT report might be:Findings•There are no areas of abnormal radiodensity. (Signal features encoded as relative light intensity)•The ventricles and subarachnoid spaces are normal as to size, shape, and position. (Spatial features of the organ of interest, the brain) •Ther e are no craniofacial abnormalities. (Signal/spatial features of another organ)•There is no change from the previous exam. (Temporal feature)Impression•Normal examination of the head. (Logical inference)For an abnormal report, one or more of the KF statements must be modified and the Impression must include one or more inferred diseases.Findings•There is increased radiodensity in the right basal ganglia.•The frontal horn of the right lateral ventricle is abnormally small (compressed).•There are no c raniofacial abnormalities.•The lesion was not evident on the previous exam.Impression•Acute intracerebral hemorrhage.The list of useful KFs is limited by the nature of signal and spatial data and is, we believe, relatively short. While human inference mechanisms are not fully understood, the final diagnostic impression probably reflects rule-based or Bayesian processes, the latter of which deal better with the high degree of uncertainty in medicine and take better advantage of prior knowledge, such as prevalence of disease in a practice.Less experienced radiologists and radiology trainees typically perform image analysis as outlined above, tediously learning and matching normal and abnormal signal and spatial patterns, consciously extracting KFs, and then deducing the best matches between the observedKFs and memorized KF patterns of specific diseases. This linear intellectual process is an example of “thinking slow,” a cognitive process described by Kahneman. However, when a radiologist is fully trained and has sufficient experience, he/she switches from this cognitive mental process to the much quicker “thinking fast,” heuristic mode of most professional practitioners in most fields. Most pattern matching tasks take less than a second to complete. A skilled radiologist makes the normal/abnormal diagnosis of a chest image in less than one second.In his book Outliers, Malcom Gladwell famously concluded that 10,000 hours of training are mandatory to function as a professional. The specific number has been challenged, of course, but it appropriately emphasizes the fact that professionals’ function differently than amateurs. They think fast, and, often, accurately. To achieve success at this level, the professional needs to have seen and performed the relevant task thousands of times—exactly how many thousand, who knows. The neuropsychological processes underlying these “slow” and “fast” mental processes are not clear, but it is hypothesized that higher order pattern matching processes become encoded in brain structure and eventually allow the “ah hah” identification of an “Aunt Minnie” brain stem cavernoma in a fraction of a second on a T1-weighted MRI image.However, humans working in this mode do make mistakes related to well-known biases, including: availability (recent cases seen),representativeness (patterns learned), and anchoring (prevalence). Other psychophysical factors such as mood and fatigue can also affect this process. Slower, cognitive thinking does not have the same faults and biases. 
The two types of decision-making are complementary and often combined, as in the case of a radiologist interpreting a case of a rare disease that they have not seen or a case with a disease having a more variable KF pattern.COMPUTER IMAGE ANALYSISWhereas humans can analyze analog or digital images, computers can operate only on digital or digitized images, both types of which can be defined as before:I=f((m,E)(x,y,z)(t))Therefore, computers face the same basic image analysis problem as humans and can perform this task similarly. As with human observers, computers can be programmed to deconstruct an image in terms of signal, spatial, and temporal content. It is relatively trivial to develop and implement algorithms that extract the same image KFs from digital data that radiologists extract from analog or digital data. Computers can be trained with pattern recognition techniques to match image KFs with normal and/or disease feature patterns in order to formulate a differential diagnosis.A significant difference between human and computer image analysis is the relative strength in classifying versus quantifying imagefeatures. Humans are very adept at classifying observations but can quantify them only crudely. In contrast, quantitative analysis of scientific measurements is the traditional forte of computers. Until recently, computers tended to use linear algebraic algorithms for image analysis, but with the advent of inexpensive graphics processing unit hardware and neural network algorithms, classification techniques are being widely implemented. Each approach has different strengths and weaknesses for specific applications, but combinations of the two will offer the best solutions for the diverse needs of the clinic.To illustrate these two computational options for image analysis, let us take the task of extracting and reporting the fluid-attenuated inversion recovery (FLAIR) signal KF on brain MRI scans. A traditional quantitative approach might be based on histogram analysis of normal brain FLAIR SIs. After appropriate preprocessing steps, a histogram of SI of brain voxels from MRI scans of a normal population can be described by Gaussian distribution with preliminary ±2 SD normal/abnormal thresholds, as for conventional clinical pathology tests. Those voxels in the >2 SD tail of the distribution can subsequently be classified as Increased SI; the voxels <2 SD as Decreased SI; with the remainder of the voxels labeled as Normal. By this process, each voxel has a number directly reflecting the measurement of SI and a categorical label based on its SI relative to the mean of the distribution of all voxel SIs. While usefulfor many image analysis tasks, this analytical approach has weaknesses in the face of noise, which is present on every image. Differentiating signal from noise is difficult for these linear models.The alternative classification approach requires the labeling, or “annotating,” of brain voxels as Increased, Normal, or Decreased FLAIR SI in a training case set. This labeling is often performed by human experts and is tedious. This training set is then used to build a digital KF pattern of normal and abnormal FLAIR SI. This task can be performed by a convolutional neural network of the 3-D U-Net type, using “deep learning” artificial intelligence algorithm s. After validation on a separate case set, this FLAIR “widget” can be applied to clinical cases to extract the FLAIR KF. 
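A minimal sketch of the quantitative option just described: z-scoring FLAIR voxel intensities against a normal-population distribution and labeling voxels outside roughly ±2 SD. The function name is mine, the intensities are simulated, and the preprocessing (skull stripping, intensity normalization, registration) that the passage presupposes is omitted:

```python
import numpy as np

def classify_flair_voxels(flair, normal_mean, normal_sd, z_thresh=2.0):
    """Label each voxel as -1 (decreased), 0 (normal) or +1 (increased) signal
    intensity relative to a normal-population FLAIR distribution."""
    z = (flair - normal_mean) / normal_sd
    labels = np.zeros_like(flair, dtype=np.int8)
    labels[z > z_thresh] = 1
    labels[z < -z_thresh] = -1
    return labels

# Illustrative: one axial slice with a small bright "lesion"
rng = np.random.default_rng(1)
slice_ = rng.normal(100.0, 10.0, size=(64, 64))
slice_[30:34, 30:34] += 60.0
labels = classify_flair_voxels(slice_, normal_mean=100.0, normal_sd=10.0)
print("increased-SI voxels:", int((labels == 1).sum()))
```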
These nonlinear, neural network classifiers often handle image noise better than linear models, better separating the “chaff from the wheat.” Note the fundamental difference of the two approaches. One is qualitative, based on the matching of human and computer categorical classification, while the other is quantitative, based on the statistical analysis of a distribution of signal measurements.For most medical images, there is a single signal measured for each image type and, therefore, a separate computational algorithm, or “widget,” is needed for each image type or modality. For a CT scan of the brain, only a single signal widget is needed to measure or classify radiodensity. For a multimodality MRI examination, not only are signalspecific pulse sequences required, but signal specific analytic widgets are necessary for FLAIR, T2, T1, diffusion-weighted imaging, susceptibility, etc. Regardless, rather than a radiologist's often ambiguous free-text report, the computer derived signal KFs are discrete and easily entered into a KF table.It should be noted that KFs reported in this fashion are associated with only one lesion, and this is a significant limitation of this simplistic approach. If there are multiple similar appearing lesions from the same disease (metastasis), this limitation is significantly mitigated by the additional spatial KF of multiplicity. However, if there are multiple lesions from different diseases, separate analysis for each disease must be performed and reported. This is a difficult task even for humans, and is, at present, beyond computational techniques.As with human observers, specific objects within images, such as a tumor, are detected and partially characterized on the basis of their abnormal SI. Lesions that have no abnormal signal are rare and difficult to identify. Once a computer has identified an object by its signal characteristics, whether by classification or numeric methods, the spatial features of the object must also be extracted. This requires spatial operators that combine voxels of related signal characteristics into individual objects that other algorithms must then count, measure, spatially describe, and anatomically localize. These KFs can be enteredinto the spatial components of a KF table.As with radiologists, organ-based analysis is advantageous and easily performed by computers. Requirements for the evaluation of whole organ spatial pattern s are “normal” anatomic atlases and computer algorithms for identifying specific organs and comparing their spatial properties to those of normal atlas templates. Remarkable progress has been made over the past 10 years in the development and use of digital, three-dimensional anatomic templates. Typically, tissue segmentation algorithms are applied, oftentimes relying on machine learning models. Atlas-based deformable registration methods then apply spatial transformations to the image data to bring anatomically corresponding regions into spatial co-registration with the normal atlas. There are numerous sophisticated software programs that perform these functions for evaluating the spatial properties of an organ or lesion. 
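Sticking with the spatial step just described, a minimal sketch of grouping abnormal-signal voxels into discrete objects and reporting their count and size with a simple connected-component pass; the function name and mask are illustrative, and atlas-based anatomic localization is omitted:

```python
import numpy as np
from scipy import ndimage

def spatial_key_features(abnormal_mask, voxel_volume_mm3=1.0):
    """Group abnormal-signal voxels into connected objects and report the
    object count and per-object volume in mm^3."""
    labeled, n_objects = ndimage.label(abnormal_mask)
    sizes = ndimage.sum(abnormal_mask, labeled, index=np.arange(1, n_objects + 1))
    return n_objects, (np.asarray(sizes) * voxel_volume_mm3).tolist()

# Illustrative binary abnormality mask containing two separate "lesions"
mask = np.zeros((32, 32, 32), dtype=bool)
mask[5:8, 5:8, 5:8] = True
mask[20:22, 20:22, 20:22] = True
count, volumes = spatial_key_features(mask, voxel_volume_mm3=1.0)
print(count, volumes)  # 2 objects, 27 mm^3 and 8 mm^3
```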
The output of these algorithms are the same spatial KFs reported by radiologists, including the number, size, and shape of organs and lesions and their anatomic locations.The computer, by extracting brain image KFs and reporting them numerically or categorically, can generate a highly structured Findings section of a radiology report that is directly comparable to that generated by a radiologist. The computer's extracted, discrete KFs can also be entered into a computational inference engine, of which there are many.One could use simple, naïve Bayesian networks, which structurally have an independent node for every disease with conditional nodes for each KF. These tools include look-up tables with rows listing all possible diagnoses, columns for all extracted KFs, and cells containing the probabilities of KF states conditioned on each covered disease. Given a set of KFs of a clinical examination, a Bayesian network calculates the probability of each disease and ranks them into the differential diagnoses that can be incorporated into the “Impression” section of the computer report. This is a form of computational pattern recognition resulting from best matches of particular KF patterns with a specific diagnosis.The preceding approach to computer image analysis closely resembles that of the cognitive, slow thinking, human. While the process is relatively transparent and comprehensible, it can be computationally challenging. But as with humans, there are alternative, faster thinking, heuristic computational methods, most commonly based on neural networks, that are a revolution in digital image analysis. The algorithms are usually nonlinear classifiers that are designed to output a single diagnosis, and nothing else. These programs are trained on hundreds or thousands of carefully “annotated” case s, with and without the specified disease. No intermediate states or information are used or generated. In other words, there are no KFs that might inform the basis of a diagnosis, nor is there quantitative output to provide more specific informationabout the disease or to guide clinical management. These “black box” systems resemble human professionals thinking fast, but with little obvious insight. However, an experienced radiologist incorporates thousands of these heuristic black boxes into his/her decision-making, many of which incorporate nonimage data from the electronic medical record, local practice mores, the community, and environment.For a computer algorithm to mimic the radiologist in daily practice, it too must incorporate thousands of widgets and vast quantities of diverse data. Such a task may not be impossible, but it does not seem eminent. Furthermore, a radiologist can, when necessary, switch from heuristics to the deliberative mode and “open” the box to explain why they made a particular diagnosis. This often involves the explication of associated KFs (mass effect) that may simultaneously be important for clinical management (decompression).CONCLUSIONA computer using contemporary computational tools functionally resembling human behavior could, in theory, read in image data as it comes from the scanner, extract KFs, find matching diagnoses, and integrate both into a standardized radiology report. The computer could populate the report with additional quantitative data, including organ/lesion volumetrics and statistical probabilities for the differential diagnosis. 
We predict that within 10 years this conjecture will be reality in daily radiology practice, with the computer operating at the level of subspecialty fellows. Both will require attending oversight. A combination of slow and fast thinking is important for radiologists and computers.
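As a closing illustration of the naive Bayesian differential-diagnosis step the article describes, a minimal sketch that ranks diseases given a small table of key-feature likelihoods. The diseases, features, priors, and probabilities below are illustrative placeholders, not values from the article:

```python
# Hypothetical key-feature likelihood table: P(feature state | disease)
PRIOR = {"acute hemorrhage": 0.03, "ischemic stroke": 0.05, "normal": 0.92}
LIKELIHOOD = {
    "acute hemorrhage": {"radiodensity=increased": 0.90, "mass_effect=present": 0.70},
    "ischemic stroke":  {"radiodensity=increased": 0.05, "mass_effect=present": 0.40},
    "normal":           {"radiodensity=increased": 0.01, "mass_effect=present": 0.01},
}

def differential(observed_features):
    """Rank diseases by posterior probability under a naive (feature-independent) Bayes model."""
    scores = {}
    for disease, prior in PRIOR.items():
        p = prior
        for feature in observed_features:
            p *= LIKELIHOOD[disease].get(feature, 0.5)  # uninformative default
        scores[disease] = p
    total = sum(scores.values())
    return sorted(((d, s / total) for d, s in scores.items()), key=lambda x: -x[1])

print(differential(["radiodensity=increased", "mass_effect=present"]))
```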

Determination of iodine in water by rhodamine B fluorescence quenching


Determination of iodine in water by rhodamine B fluorescence quenching. QIAO Yuchun, LONG Dacheng, SUN Zongzhao, ZHENG Yingting, CHEN Lijun, WANG Hua.
[Abstract] Iodine is an indispensable trace element for the human body, and many iodine-containing foods are now part of daily life. Both iodine excess and iodine deficiency have adverse effects on human health. Rhodamine B, one of the most commonly used fluorescent dyes, is widely applied in analytical chemistry and biomedicine. This study shows that, under acidic conditions, iodide reacts with iodate to form elemental iodine, which reacts with rhodamine B and quenches its fluorescence. On this basis, a simple fluorometric method for iodide was established; the fluorescence quenching of rhodamine B showed a good linear relationship with iodide concentration over the range 5.0-60 μg/L. The method offers high sensitivity, a low detection limit, good selectivity, and simple operation.
Journal: Chemical Sensors, 2018, 38(1): 39-43. Keywords: iodide; rhodamine B; fluorescence quenching.
Authors and affiliations: QIAO Yuchun, LONG Dacheng, SUN Zongzhao, ZHENG Yingting, CHEN Lijun, WANG Hua. School of Chemistry and Chemical Engineering, Qufu Normal University, Shandong Key Laboratory of Life-Organic Analysis, Qufu 273165, Shandong; Jining Engineering Technology Center for Functional Materials and Monitoring Devices, Shandong Sansheng New Materials Technology Co., Ltd., Jining 272000, Shandong.
0 Introduction. Iodine is one of the halogen elements; it is the least chemically reactive of the halogens, yet it still combines directly with most elements and occurs widely in nature in the form of its compounds.

As an essential trace element for the human body, iodine is the principal raw material for the synthesis of thyroid hormone; an adult requires roughly 150-200 μg of iodine per day [1-3].

Iodine deficiency can cause goiter, endemic cretinism, and related disorders, while iodine excess increases the risk of hyperthyroidism and can also cause goiter, hypothyroidism, and thyroiditis [4-7].
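The abstract above reports a linear relationship between rhodamine B fluorescence quenching and iodide concentration over 5.0-60 μg/L. A minimal sketch of fitting and inverting such a calibration line by least squares; the standard concentrations and quenching values below are illustrative placeholders, not data from the paper:

```python
import numpy as np

# Illustrative calibration standards: iodide concentration (μg/L) vs. fluorescence quenching ΔF
conc = np.array([5.0, 10.0, 20.0, 40.0, 60.0])
delta_f = np.array([14.0, 27.0, 56.0, 110.0, 168.0])

slope, intercept = np.polyfit(conc, delta_f, 1)   # ΔF = slope * c + intercept

def iodide_from_quenching(df):
    """Invert the calibration line to estimate iodide concentration from measured quenching."""
    return (df - intercept) / slope

print(round(slope, 3), round(intercept, 3), round(iodide_from_quenching(80.0), 1))
```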

Imaging evaluation of vulnerable carotid plaques: present status and future prospects


Commentary: Imaging evaluation of vulnerable carotid plaques: present status and future prospects. GAO Tianli.
[Abstract] Carotid atherosclerotic stenosis is a recognized risk factor for ischemic stroke, accounting for 10%-20% of strokes or transient ischemic attacks.

Conventional cerebrovascular examination techniques show only the degree of luminal stenosis; they cannot clarify the structure of the stenotic lesion or its cause.

Therefore, assessing only the degree of luminal stenosis when diagnosing cerebrovascular disease is clearly insufficient for characterizing the disease and for risk stratification aimed at stroke prevention.

Non-invasive assessment of the plaque and arterial wall based on imaging features such as intraplaque hemorrhage (IPH), ulceration, neovascularization, fibrous cap (FC) thickness, and the lipid-rich necrotic core (LRNC) is a fundamental approach to guiding treatment.

This review summarizes current advances in vessel wall imaging of vulnerable carotid plaque, including magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound (US).

[Keywords] Carotid artery diseases; plaque, atherosclerotic; vulnerable plaque; vessel wall imaging; ultrasonography; computed tomography; magnetic resonance imaging; review. Chinese Library Classification: R543.4. Document code: A. DOI: 10.12114/j.issn.1007-9572.2021.01.101
GAO T L. Imaging evaluation of vulnerable carotid plaques: present status and future prospect [J]. Chinese General Practice, 2021, 24(32): 4055-4060, 4067.
Imaging Evaluation of Vulnerable Carotid Plaques: Present Status and Future Prospect. GAO Tianli. Department of Neurology, Beijing Anzhen Hospital, Capital Medical University, Beijing 100029, China.
[Abstract] Carotid atherosclerotic stenosis is a well-established risk factor of ischemic stroke, contributing to up to 10%-20% of strokes or transient ischemic attacks. Routine cerebrovascular examination techniques can only show the degree of stenosis of the lumen, but cannot demonstrate the structure of the stenosis and the cause of the stenosis. Therefore, simply studying the degree of luminal stenosis in the diagnosis of cerebrovascular disease is obviously insufficient to assess the characteristics of the disease and stratify the risk for preventing stroke. Non-invasive in vivo assessment of plaques and arterial walls based on the presence of imaging features such as intraplaque hemorrhage, ulceration, neovascularity, fibrous cap thickness, and lipid-rich necrotic core is a basic method to guide treatment. This review summarizes recent updates on vessel wall imaging modalities for the evaluation of carotid vulnerable plaques, such as MRI, CT and ultrasound.
[Key words] Carotid artery diseases; Plaque, atherosclerotic; Vulnerable plaque; Vascular wall imaging; Ultrasonography; Computed tomography; Magnetic resonance imaging; Review.
…and 62% of women have carotid atherosclerotic stenosis [2].

English for Frontier Technologies at the Medicine-Engineering Interface


医工交叉前沿技术英语Medical Engineering Cross-cutting Frontier TechnologyIn recent years, medical engineering has rapidly advanced through the integration and application of cutting-edge technologies. This cross-cutting approach has led to the emergence of multiple frontiers in the field. Here, we will discuss some of the prominent frontier technologies in medical engineering.1. Biomedical Imaging:Biomedical imaging encompasses a range of techniques aimed at visualizing and diagnosing diseases within the human body. These include X-ray imaging, computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, and positron emission tomography (PET). Advances in image processing algorithms and hardware have significantly improved the resolution and accuracy of these imaging techniques, enabling earlier and more accurate detection of diseases.2. Bioinformatics:Bioinformatics is an interdisciplinary field that combines biology, computer science, and statistics to manage and analyze biological data, particularly genomics data. This field has revolutionized medical research by enabling the storage, retrieval, and analysis of vast amounts of genomic and proteomic data. Bioinformatics techniques are used for understanding genetic diseases, developing personalized medicine, and analyzing complex biological networks.3. Biomedical Materials and Tissue Engineering: Advancements in materials science have led to the development ofinnovative biomaterials that can interface with the human body, such as biodegradable materials and smart materials. These materials are used in a variety of applications, including prosthetic implants, drug delivery systems, and tissue engineering scaffolds. Tissue engineering, another frontier in medical engineering, involves the fabrication of artificial tissues and organs using scaffolds and cells. This field has the potential to revolutionize organ transplantation and regenerative medicine.4. Artificial Intelligence (AI) and Machine Learning:AI and machine learning techniques are increasingly being applied in medical engineering to improve diagnosis, treatment planning, and patient monitoring. These technologies can analyze large datasets, identify patterns, and make accurate predictions. AI algorithms can assist in image interpretation, predict disease progression, and support clinical decision-making. Additionally, AI-powered robotics are being developed for surgical procedures, enhancing precision and reducing invasiveness.5. Nanotechnology:Nanotechnology involves the manipulation and control of materials at the nanometer scale. In medical engineering, nanotechnology has applications in drug delivery, imaging, diagnosis, and therapy. Nanoscale particles and structures can improve targeting of drugs to specific tissues, enhance imaging contrast, and enable novel therapies. This field holds great potential for personalized medicine and targeted therapies. These are just a few examples of the cross-cutting frontier technologies in medical engineering. Through the integration ofthese technologies, medical engineering continues to advance, leading to new discoveries, improved healthcare outcomes, and the potential for transformative breakthroughs in healthcare.。

THE INTERNATIONAL JOURNAL OF MEDICAL ROBOTICS AND COMPUTER ASSISTED SURGERY Int J Med Robot


Introduction
Computer-assisted surgery (CAS) is a methodology that translates into accurate and reliable image-to-surgical space guidance. Neurosurgery is a very complex procedure and the surgeon has to integrate multi-modal data to produce an optimal surgical plan. Often the lesion of interest is surrounded by vital structures, such as the motor cortex, temporal cortex, vision and audio sensors, etc., and has irregular configurations. Slight damage to such eloquent brain structures can severely impair the patient (1,2). CASMIL, an image-guided neurosurgery toolkit, is being developed to produce optimum plans resulting in minimally invasive surgeries. This system has many innovative features needed by neurosurgeons that are not available in other academic and commercial systems. CASMIL is an integration of various vital modules, such as rigid and non-rigid co-registration (image–image, image–atlas and

Catalog of Highly Toxic Chemicals (2022 edition)


Recoverable table fragments (catalog number | Chinese name and aliases | English name(s) | formula | CAS No. | UN No.):
(fragment) Paris green (巴黎绿; aliases 帝绿, 苔绿) | Imperial green
No. 53 | Arsenic acid (砷酸; alias 原砷酸) | Arsenic acid; Orthoarsenic acid | H3AsO4 | 7778-39-4 | 1553, 1554
No. 58 | Phosphorus oxychloride (氧氯化磷; aliases 氯化磷酰, 磷酰氯, 三氯氧化磷, 三氯化磷酰, 三氯氧磷, 磷酰三氯) | Phosphorus oxychloride
(fragment) Mercuric chloride | Mercury bichloride; Corrosive sublimate
No. 19 | Mercuric iodide (碘化汞; aliases 碘化高汞, 二碘化汞) | Mercuric iodide; Mercury biiodide; Mercury iodide | HgI2 | 7774-29-0 | 1638
(fragments) Calcium cyanide | Ca(CN)2; Potassium silver cyanide | KAg(CN)2; Cadmium cyanide | Cd(CN)2
Mercuric cyanide (氰化高汞; alias 二氰化汞) | Mercuric cyanide; Mercury dicyanide; Dicyanomercury | Hg(CN)2
Gold potassium cyanide (亚金氰化钾) | Gold potassium cyanide; Potassium aurous cyanide | KAu(CN)2
No. 36 | Thallous malonate (丙二酸铊; alias 丙二酸亚铊) | Thallous malonate; Thallium(I) malonate | C3H2O4Tl2 | 2757-18-8 | 1707
No. 37 | Triethyltin sulphate (硫酸三乙基锡) | Triethyltin sulphate; Triaethylzinnsulfat | C12H30O4SSn2 | 57-52-3 | 3146

Applications of magnetic resonance diffusion imaging in spinal cord injury


Applications of magnetic resonance diffusion imaging in spinal cord injury. MAO Ruili, CHEN Kuntao. Journal: Chinese Journal of CT and MRI, 2014, (4): 1-4. Keywords: magnetic resonance; diffusion-weighted imaging; diffusion tensor imaging; spinal cord injury. Affiliation: Department of Imaging, Fifth Affiliated (Zhuhai) Hospital of Zunyi Medical College, Zhuhai, Guangdong 519100. With economic development around the world, the incidence of spinal cord injury has been rising year by year.

Spinal cord injury (SCI) is the most serious complication of spinal column trauma and often causes severe functional impairment of the limbs below the injured level.

SCI not only inflicts severe physical and psychological harm on the patient but also imposes an enormous economic burden on society.

The prevention, treatment, and rehabilitation of SCI have therefore become a major topic in modern medicine.

At present, the evaluation of SCI and the verification of treatment effects rely mainly on subjective clinical assessment; objective functional imaging criteria are lacking.

MRI is the first-choice imaging examination for SCI, but conventional MRI findings correlate poorly with functional disability scores and do not correspond to the degree of neurological or histological injury; conventional MRI underestimates the extent of cord pathology, cannot depict the state of the white matter fiber tracts, cannot effectively quantify axonal injury or remyelination, and is therefore unsuited to evaluating the functional state of the spinal cord [1-2].

Conventional MRI also has low sensitivity for the early diagnosis of cervical spondylotic myelopathy; patients showing T2WI hyperintensity are mostly in the late stage of disease, which usually indicates irreversible injury [3]. Conventional MRI tends to underestimate the severity of cord injury and shows no correlation with the clinical functional status of the cord [4-6], so it cannot provide reliable information for early clinical intervention.

Based on the basic MRI appearances of acute SCI, Kulkarni et al. [7] classified lesions into hemorrhage, contusion, and edema, and analyzed the correlation between MRI findings and clinical assessment.

The Frankel grading scale [8] is an index for evaluating the prognosis of SCI.

The most common MRI findings in spinal cord trauma are cord hemorrhage and cord edema [9-10].

Cord edema generally carries a good prognosis, cord contusion a poorer one, and cord hemorrhage usually indicates complete SCI with a very poor prognosis [11-14].

Therefore, the ability to evaluate SCI accurately at an early stage and to detect intramedullary hemorrhage is very important for assessing patient prognosis.
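The passage argues that conventional MRI cannot quantify axonal injury; the diffusion techniques named in the keywords (DWI/DTI) supply that quantitative readout. A minimal sketch of the standard scalar metrics, mean diffusivity (MD) and fractional anisotropy (FA), computed from a diffusion tensor's eigenvalues; the tensor below is an illustrative placeholder, and tensor fitting from diffusion-weighted acquisitions is omitted:

```python
import numpy as np

def md_fa(tensor):
    """Mean diffusivity (MD) and fractional anisotropy (FA) of a 3x3 diffusion tensor."""
    eigvals = np.linalg.eigvalsh(tensor)          # lambda1, lambda2, lambda3
    md = eigvals.mean()
    fa = np.sqrt(1.5 * np.sum((eigvals - md) ** 2) / np.sum(eigvals ** 2))
    return md, fa

# Illustrative tensor (units: mm^2/s), roughly like healthy white matter
D = np.diag([1.7e-3, 0.3e-3, 0.3e-3])
md, fa = md_fa(D)
print(f"MD = {md:.2e} mm^2/s, FA = {fa:.2f}")
```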

Medical Imaging Terminology in English


Medical Imaging Professional Terminology
1. What is medical imaging?
Medical imaging refers to the techniques and processes used to create images of the human body for clinical purposes. These images are used by healthcare professionals to diagnose and treat medical conditions.


2. What are the different modalities of medical imaging?
There are several modalities of medical imaging, including X-ray, computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, nuclear medicine, and positron emission tomography (PET).

3. What is the purpose of medical imaging?
The purpose of medical imaging is to help healthcare professionals visualize the internal structures of the body in order to diagnose and treat medical conditions. It can also be used to monitor the progression of diseases and the effectiveness of treatments.

Deformable Medical Image Registration


Deformable Medical Image Registration: A Survey
Aristeidis Sotiras*, Member, IEEE, Christos Davatzikos, Senior Member, IEEE, and Nikos Paragios, Fellow, IEEE (Invited Paper)

Abstract—Deformable image registration is a fundamental task in medical image processing. Among its most important applications, one may cite: 1) multi-modality fusion, where information acquired by different imaging devices or protocols is fused to facilitate diagnosis and treatment planning; 2) longitudinal studies, where temporal structural or anatomical changes are investigated; and 3) population modeling and statistical atlases used to study normal anatomical variability. In this paper, we attempt to give an overview of deformable registration methods, putting emphasis on the most recent advances in the domain. Additional emphasis has been given to techniques applied to medical images. In order to study image registration methods in depth, their main components are identified and studied independently. The most recent techniques are presented in a systematic fashion. The contribution of this paper is to provide an extensive account of registration techniques in a systematic manner.

Index Terms—Bibliographical review, deformable registration, medical image analysis.

Author affiliations: *A. Sotiras and C. Davatzikos are with the Section of Biomedical Image Analysis, Center for Biomedical Image Computing and Analytics, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104 USA. N. Paragios is with the Center for Visual Computing, Department of Applied Mathematics, Ecole Centrale de Paris, 92295 Chatenay-Malabry, France, with the Equipe Galen, INRIA Saclay-Ile-de-France, 91893 Orsay, France, and also with the Universite Paris-Est, LIGM (UMR CNRS), Center for Visual Computing, Ecole des Ponts ParisTech, 77455 Champs-sur-Marne, France. Digital Object Identifier: 10.1109/TMI.2013.2265603.

I. INTRODUCTION

Deformable registration [1]–[10] has been, along with organ segmentation, one of the main challenges in modern medical image analysis. The process consists of establishing spatial correspondences between different image acquisitions. The term deformable (as opposed to linear or global) is used to denote the fact that the observed signals are associated through a nonlinear dense transformation, or a spatially varying deformation model.

In general, registration can be performed on two or more images. In this paper, we focus on registration methods that involve two images. One is usually referred to as the source or moving image, while the other is referred to as the target or fixed image. In this paper, the source image is denoted by S, while the target
image is denoted by T. The two images are defined in the image domain Ω and are related by a transformation W. The goal of registration is to estimate the optimal transformation that optimizes an energy of the form

E(W) = M(T, S ∘ W) + R(W)   (1)

The previous objective function (1) comprises two terms. The first term, M(T, S ∘ W), quantifies the level of alignment between a target image T and a source image S deformed by W. Throughout this paper, we interchangeably refer to this term as matching criterion, (dis)similarity criterion or distance measure. The optimization problem consists of either maximizing or minimizing the objective function depending on how the matching term is chosen.

The images get aligned under the influence of transformation W. The transformation is a mapping function of the domain Ω to itself, that maps point locations to other locations. In general, the transformation is assumed to map homologous locations from the target physiology to the source physiology. The transformation at every position x is given as the addition of an identity transformation with the displacement field u, or W(x) = x + u(x). The second term, R(W), regularizes the transformation, aiming to favor any specific properties in the solution that the user requires, and seeks to tackle the difficulty associated with the ill-posedness of the problem.

Regularization and deformation models are closely related. Two main aspects of this relation may be distinguished. First, in the case that the transformation is parametrized by a small number of variables and is inherently smooth, regularization may serve to introduce prior knowledge regarding the solution that we seek by imposing task-specific constraints on the transformation. Second, in the case that we seek the displacement of every image element (i.e., nonparametric deformation model), regularization dictates the nature of the transformation.

Thus, an image registration algorithm involves three main components: 1) a deformation model, 2) an objective function, and 3) an optimization method. The result of the registration algorithm naturally depends on the deformation model and the objective function. The dependency of the registration result on the optimization strategy follows from the fact that image registration is inherently ill-posed. Devising each component so that the requirements of the registration algorithm are met is a demanding process.

Depending on the deformation model and the input data, the problem may be ill-posed according to Hadamard's definition of well-posed problems [11]. In probably all realistic scenarios, registration is ill-posed. To further elaborate, let us consider some specific cases. In a deformable registration scenario, one seeks to estimate a vector for every position given, in general, scalar information conveyed by image intensity. In this case, the number of unknowns is greater than the number of constraints.
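Before turning to specific examples of ill-posedness, a minimal sketch to make the objective function (1) concrete for a mono-modal 2-D case: a sum-of-squared-differences matching term plus a first-order smoothness regularizer on the displacement field. This is only one of the many choices of matching criterion and regularizer that the survey goes on to classify, and the function name, weighting, and test arrays are illustrative:

```python
import numpy as np
from scipy import ndimage

def registration_energy(target, source, u, alpha=0.1):
    """E(W) = SSD(target, source warped by W) + alpha * smoothness(u), with W(x) = x + u(x)."""
    ny, nx = target.shape
    yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
    # backward mapping: pull source intensities from the warped coordinates
    warped = ndimage.map_coordinates(source, [yy + u[0], xx + u[1]], order=1, mode="nearest")
    matching = np.sum((target - warped) ** 2)
    # first-order (gradient-magnitude) regularization of the displacement field
    smoothness = sum(np.sum(np.gradient(comp, axis=ax) ** 2)
                     for comp in u for ax in (0, 1))
    return matching + alpha * smoothness

# Illustrative 2-D example with a zero displacement field
T = np.zeros((32, 32)); T[10:20, 10:20] = 1.0
S = np.zeros((32, 32)); S[12:22, 12:22] = 1.0
u = np.zeros((2, 32, 32))
print(registration_energy(T, S, u))
```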
In a rigid setting,let us consider a consider a scenario where two images of a disk(white background,gray foreground)are registered.Despite the fact that the number of parameters is only 6,the problem is ill-posed.The problem has no unique solution since a translation that aligns the centers of the disks followed by any rotation results in a meaningful solution.Given nonlinear and nonconvex objective functions,in gen-eral,no closed-form solutions exist to estimate the registration parameters.In this setting,the search methods reach only a local minimum in the parameter space.Moreover,the problem itself has an enormous number of different facets.The approach that one should take depends on the anatomical properties of the organ(for example,the heart and liver do not adhere to the same degree of deformation),the nature of observations to be regis-tered(same modality versus multi-modal fusion),the clinical setting in which registration is to be used(e.g.,offline interpre-tation versus computer assisted surgery).An enormous amount of research has been dedicated to de-formable registration towards tackling these challenges due to its potential clinical impact.During the past few decades,many innovative ideas regarding the three main algorithmic registra-tion aspects have been proposed.General reviews of thefield may be found in[1]–[7],[9].However due to the rapid progress of thefield such reviews are to a certain extent outdated.The aim of this paper is to provide a thorough overview of the advances of the past decade in deformable registration.Never-theless,some classic papers that have greatly advanced the ideas in thefield are mentioned.Even though our primary interest is deformable registration,for the completeness of the presenta-tion,references to linear methods are included as many prob-lems have been treated in this low-degree-of-freedom setting before being extended to the deformable case.The main scope of this paper is focused on applications that seek to establish spatial correspondences between medical im-ages.Nonetheless,we have extended the scope to cover appli-cations where the interest is to recover the apparent motion of objects between sequences of successive images(opticalflow estimation)[12],[13].Deformable registration and opticalflow estimation are closely related problems.Both problems aim to establish correspondences between images.In the deformable registration case,spatial correspondences are sought,while in the opticalflow case,spatial correspondences,that are associ-ated with different time points,are looked for.Given data with a good temporal resolution,one may assume that the magnitude of the motion is limited and that image intensity is preserved in time,opticalflow estimation can be regarded as a small defor-mation mono-modal deformable registration problem.The remainder of the paper is organized by loosely following the structural separation of registration algorithms to three com-ponents:1)deformation model,2)matching criteria,and3)op-timization method.In Section II,different approaches regarding the deformation model are presented.Moreover,we also chose to cover in this section the second term of the objective function, the regularization term.This choice was motivated by the close relation between the two parts.In Section III,thefirst term of the objective function,the matching term,is discussed.The opti-mization methods are presented in Section IV.In every section, particular emphasis was put on further deepening the taxonomy of registration method by 
grouping the presented methods in a systematic manner.Section V concludes the paper.II.D EFORMATION M ODELSThe choice of deformation model is of great importance for the registration process as it entails an important compromise between computational efficiency and richness of description. It also reflects the class of transformations that are desirable or acceptable,and therefore limits the solution to a large ex-tent.The parameters that registration estimates through the op-timization strategy correspond to the degrees of freedom of the deformation model1.Their number varies greatly,from six in the case of global rigid transformations,to millions when non-parametric dense transformations are considered.Increasing the dimensionality of the state space results in enriching the de-scriptive power of the model.This model enrichment may be accompanied by an increase in the model’s complexity which, in turns,results in a more challenging and computationally de-manding inference.Furthermore,the choice of the deformation model implies an assumption regarding the nature of the defor-mation to be recovered.Before continuing,let us clarify an important,from imple-mentation point of view,aspect related to the transformation mapping and the deformation of the source image.In the in-troduction,we stated that the transformation is assumed to map homologous locations from the target physiology to the source physiology(backward mapping).While from a theoretical point of view,the mapping from the source physiology to the target physiology is possible(forward mapping),from an implemen-tation point of view,this mapping is less advantageous.In order to better understand the previous statement,let us consider how the direction of the mapping influences the esti-mation of the deformed image.In both cases,the source image is warped to the target domain through interpolation resulting to a deformed image.When the forward mapping is estimated, every voxel of the source image is pushed forward to its esti-mated position in the deformed image.On the other hand,when the backward mapping is estimated,the pixel value of a voxel in the deformed image is pulled from the source image.The difference between the two schemes is in the difficulty of the interpolation problem that has to be solved.In thefirst case,a scattered data interpolation problem needs to be solved because the voxel locations of the source image are usually mapped to nonvoxel locations,and the intensity values of the voxels of the deformed image have to be calculated.In the second case,when voxel locations of the deformed image are mapped to nonvoxel locations in the source image,their intensities can be easily cal-culated by interpolating the intensity values of the neighboring voxels.The rest of the section is organized by following coarsely and extending the classification of deformation models given 1Variational approaches in general attempt to determine a function,not just a set of parameters.SOTIRAS et al.:DEFORMABLE MEDICAL IMAGE REGISTRATION:A SURVEY1155Fig.1.Classi fication of deformation models.Models that satisfy task-speci fic constraints are not shown as a branch of the tree because they are,in general,used in conjunction with physics-based and interpolation-based models.by Holden [14].More emphasis is put on aspects that were not covered by that review.Geometric transformations can be classi fied into three main categories (see Fig.1):1)those that are inspired by physical models,2)those inspired by interpolation and ap-proximation 
theory,3)knowledge-based deformation models that opt to introduce speci fic prior information regarding the sought deformation,and 4)models that satisfy a task-speci fic constraint.Of great importance for biomedical applications are the con-straints that may be applied to the transformation such that it exhibits special properties.Such properties include,but are not limited to,inverse consistency,symmetry,topology preserva-tion,diffeomorphism.The value of these properties was made apparent to the research community and were gradually intro-duced as extra constraints.Despite common intuition,the majority of the existing regis-tration algorithms are asymmetric.As a consequence,when in-terchanging the order of input images,the registration algorithm does not estimate the inverse transformation.As a consequence,the statistical analysis that follows registration is biased on the choice of the target domain.Inverse Consistency:Inverse consistent methods aim to tackle this shortcoming by simultaneously estimating both the forward and the backward transformation.The data matching term quanti fies how well the images are aligned when one image is deformed by the forward transformation,and the other image by the backward transformation.Additionally,inverse consistent algorithms constrain the forward and backward transformations to be inverse mappings of one another.This is achieved by introducing terms that penalize the difference between the forward and backward transformations from the respective inverse mappings.Inverse consistent methods can preserve topology but are only asymptotically symmetric.Inverse-consistency can be violated if another term of the objective function is weighted more importantly.Symmetry:Symmetric algorithms also aim to cope with asymmetry.These methods do not explicitly penalize asym-metry,but instead employ one of the following two strategies.In the first case,they employ objective functions that are by construction symmetric to estimate the transformation from one image to another.In the second case,two transformation functions are estimated by optimizing a standard objective function.Each transformation function map an image to a common domain.The final mapping from one image to another is calculated by inverting one transformation function and composing it with the other.Topology Preservation:The transformation that is estimated by registration algorithms is not always one-to-one and cross-ings may appear in the deformation field.Topology preserving/homeomorphic algorithms produce a mapping that is contin-uous,onto,and locally one-to-one and has a continuous inverse.The Jacobian determinant contains information regarding the injectivity of the mapping and is greater than zero for topology preserving mappings.The differentiability of the transformation needs to be ensured in order to calculate the Jacobian determi-nant.Let us note that Jacobian determinant and Jacobian are in-terchangeably used in this paper and should not be confounded with the Jacobian matrix.Diffeomorphism:Diffeomoprhic transformations also pre-serve topology.A transformation function is a diffeomorphism,if it is invertible and both the function and its inverse are differ-entiable.A diffeomorphism maps a differentiable manifold to another.1156IEEE TRANSACTIONS ON MEDICAL IMAGING,VOL.32,NO.7,JULY2013In the following four subsections,the most important methods of the four classes are presented with emphasis on the approaches that endow the model under consideration with the above desirable 
properties.A.Geometric Transformations Derived From Physical Models Following[5],currently employed physical models can be further separated infive categories(see Fig.1):1)elastic body models,2)viscousfluidflow models,3)diffusion models,4) curvature registration,and5)flows of diffeomorphisms.1)Elastic Body Models:a)Linear Models:In this case,the image under deforma-tion is modeled as an elastic body.The Navier-Cauchy Partial Differential Equation(PDE)describes the deformation,or(2) where is the forcefield that drives the registration based on an image matching criterion,refers to the rigidity that quanti-fies the stiffness of the material and is Lamésfirst coefficient. Broit[15]first proposed to model an image grid as an elastic membrane that is deformed under the influence of two forces that compete until equilibrium is reached.An external force tries to deform the image such that matching is achieved while an internal one enforces the elastic properties of the material. Bajcsy and Kovacic[16]extended this approach in a hierar-chical fashion where the solution of the coarsest scale is up-sam-pled and used to initialize thefiner one.Linear registration was used at the lowest resolution.Gee and Bajscy[17]formulated the elastostatic problem in a variational setting.The problem was solved under the Bayesian paradigm allowing for the computation of the uncertainty of the solution as well as for confidence intervals.Thefinite element method(FEM)was used to infer the displacements for the ele-ment nodes,while an interpolation strategy was employed to es-timate displacements elsewhere.The order of the interpolating or shape functions,determines the smoothness of the obtained result.Linear elastic models have also been used when registering brain images based on sparse correspondences.Davatzikos[18]first used geometric characteristics to establish a mapping be-tween the cortical surfaces.Then,a global transformation was estimated by modeling the images as inhomogeneous elastic ob-jects.Spatially-varying elasticity parameters were used to com-pensate for the fact that certain structures tend to deform more than others.In addition,a nonzero initial strain was considered so that some structures expand or contract naturally.In general,an important drawback of registration is that when source and target volumes are interchanged,the obtained trans-formation is not the inverse of the previous solution.In order to tackle this shortcoming,Christensen and Johnson[19]pro-posed to simultaneously estimate both forward and backward transformations,while penalizing inconsistent transformations by adding a constraint to the objective function.Linear elasticity was used as regularization constraint and Fourier series were used to parametrize the transformation.Leow et al.[20]took a different approach to tackle the incon-sistency problem.Instead of adding a constraint that penalizes the inconsistency error,they proposed a unidirectional approach that couples the forward and backward transformation and pro-vides inverse consistent transformations by construction.The coupling was performed by modeling the backward transforma-tion as the inverse of the forward.This fact was also exploited during the optimization of the symmetric energy by only fol-lowing the gradient direction of the forward mapping.He and Christensen[21]proposed to tackle large deforma-tions in an inverse consistent framework by considering a se-quence of small deformation transformations,each modeled by a linear elastic model.The problem was 
symmetrized by consid-ering a periodic sequence of images where thefirst(or last)and middle image are the source and target respectively.The sym-metric objective function thus comprised terms that quantify the difference between any two successive pairs of images.The in-ferred incremental transformation maps were concatenated to map one input image to another.b)Nonlinear Models:An important limitation of linear elastic models lies in their inability to cope with large defor-mations.In order to account for large deformations,nonlinear elastic models have been proposed.These models also guar-antee the preservation of topology.Rabbitt et al.[22]modeled the deformable image based on hyperelastic material properties.The solution of the nonlinear equations was achieved by local linearization and the use of the Finite Element method.Pennec et al.[23]dropped the linearity assumption by mod-eling the deformation process through the St Venant-Kirchoff elasticity energy that extends the linear elastic model to the non-linear regime.Moreover,the use of log-Euclidean metrics in-stead of Euclidean ones resulted in a Riemannian elasticity en-ergy which is inverse consistent.Yanovsky et al.[24]proposed a symmetric registration framework based on the St Venant-Kir-choff elasticity.An auxiliary variable was added to decouple the regularization and the matching term.Symmetry was im-posed by assuming that the Jacobian determinants of the defor-mation follow a zero mean,after log-transformation,log-normal distribution[25].Droske and Rumpf[26]used an hyperelastic,polyconvex regularization term that takes into account the length,area and volume deformations.Le Guyader and Vese[27]presented an approach that combines segmentation and registration that is based on nonlinear elasticity.The authors used a polyconvex regularization energy based on the modeling of the images under deformation as Ciarlet-Geymonat materials[28].Burger et al.[29]also used a polyconvex regularization term.The au-thors focused on the numerical implementation of the registra-tion framework.They employed a discretize-then-optimize ap-proach[9]that involved the partitioning voxels to24tetrahedra.2)Viscous Fluid Flow Models:In this case,the image under deformation is modeled as a viscousfluid.The transformation is governed by the Navier-Stokes equation that is simplified by assuming a very low Reynold’s numberflow(3) These models do not assume small deformations,and thus are able to recover large deformations[30].Thefirst term of theSOTIRAS et al.:DEFORMABLE MEDICAL IMAGE REGISTRATION:A SURVEY1157Navier-Stokes equation(3),constrains neighboring points to de-form similarly by spatially smoothing the velocityfield.The velocityfield is related to the displacementfield as.The velocityfield is integrated in order to estimate the displacementfield.The second term al-lows structures to change in mass while and are the vis-cosity coefficients.Christensen et al.[30]modeled the image under deformation as a viscousfluid allowing for large magnitude nonlinear defor-mations.The PDE was solved for small time intervals and the complete solution was given by an integration over time.For each time interval a successive over-relaxation(SOR)scheme was used.To guarantee the preservation of topology,the Jaco-bian was monitored and each time its value fell under0.5,the deformed image was regridded and a new one was generated to estimate a transformation.Thefinal solution was the con-catenation of all successive transformations occurring for each regridding step.In a 
subsequent work,Christensen et al.[31] presented a hierarchical way to recover the transformations for brain anatomy.Initially,global affine transformation was per-formed followed by a landmark transformation model.The re-sult was refined byfluid transformation preceded by an elastic registration step.An important drawback of the earliest implementations of the viscousfluid models,that employed SOR to solve the equa-tions,was computational inefficiency.To circumvent this short-coming,Christensen et al.employed a massive parallel com-puter implementation in[30].Bro-Nielsen and Gramkow[32] proposed a technique based on a convolutionfilter in scale-space.Thefilter was designed as the impulse response of the linear operator defined in its eigen-function basis.Crun et al.[33]proposed a multi-grid approach towards handling anisotropic data along with a multi-resolution scheme opting forfirst recovering coarse velocity es-timations and refining them in a subsequent step.Cahill et al.[34]showed how to use Fourier methods to efficiently solve the linear PDE system that arises from(3)for any boundary condi-tion.Furthermore,Cahill et al.extended their analysis to show how these methods can be applied in the case of other regu-larizers(diffusion,curvature and elastic)under Dirichlet,Neu-mann,or periodic boundary conditions.Wang and Staib[35]usedfluid deformation models in an atlas-enhanced registration setting while D’Agostino et al. tackled multi-modal registration with the use of such models in[36].More recently,Chiang et al.[37]proposed an inverse consistent variant offluid registration to register Diffusion Tensor images.Symmetrized Kullback-Leibler(KL)diver-gence was used as the matching criterion.Inverse consistency was achieved by evaluating the matching and regularization criteria towards both directions.3)Diffusion Models:In this case,the deformation is mod-eled by the diffusion equation(4) Let us note that most of the algorithms,based on this transforma-tion model and described in this section,do not explicitly state the(4)in their objective function.Nonetheless,they exploit the fact that the Gaussian kernel is the Green’s function of the diffu-sion equation(4)(under appropriate initial and boundary condi-tions)to provide an efficient regularization step.Regularization is efficiently performed through convolutions with a Gaussian kernel.Thirion,inspired by Maxwell’s Demons,proposed to perform image matching as a diffusion process[38].The proposed algo-rithm iterated between two steps:1)estimation of the demon forces for every demon(more precisely,the result of the appli-cation of a force during one iteration step,that is a displace-ment),and2)update of the transformation based on the cal-culated forces.Depending on the way the demon positions are selected,the way the space of deformations is defined,the in-terpolation method that is used,and the way the demon forces are calculated,different variants can be obtained.The most suit-able version for medical image analysis involved1)selecting all image elements as demons,2)calculating demon forces by considering the opticalflow constraint,3)assuming a nonpara-metric deformation model that was regularized by applying a Gaussianfilter after each iteration,and4)a trilinear interpo-lation scheme.The Gaussianfilter can be applied either to the displacementfield estimated at an iteration or the updated total displacementfield.The bijectivity of the transformation was en-sured by calculating for every point the difference between its initial position 
and the one that is reached after composing the forward with the backward deformationfield,and redistributing the difference to eachfield.The bijectivity of the transformation can also be enforced by limiting the maximum length of the up-date displacement to half the voxel size and using composition to update the transformation.Variants for the contour-based reg-istration and the registration between segmented images were also described in[38].Most of the algorithms described in this section were inspired by the work of Thirion[38]and thus could alternatively be clas-sified as“Demons approaches.”These methods share the iter-ative approach that was presented in[38]that is,iterating be-tween estimating the displacements and regularizing to obtain the transformation.This iterative approach results in increased computational efficiency.As it will be discussed later in this section,this feature led researchers to explore such strategies for different PDEs.The use of Demons,as initially introduced,was an efficient algorithm able to provide dense correspondences but lacked a sound theoretical justification.Due to the success of the algo-rithm,a number of papers tried to give theoretical insight into its workings.Fischer and Modersitzki[39]provided a fast algo-rithm for image registration.The result was given as the solution of linear system that results from the linearization of the diffu-sion PDE.An efficient scheme for its solution was proposed while a connection to the Thirion’s Demons algorithm[38]was drawn.Pennec et al.[40]studied image registration as an energy minimization problem and drew the connection of the Demons algorithm with gradient descent schemes.Thirion’s image force based on opticalflow was shown to be equivalent with a second order gradient descent on the Sum of Square Differences(SSD) matching criterion.As for the regularization,it was shown that the convolution of the global transformation with a Gaussian。
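The demons scheme described above, computing an optical-flow-style force at every voxel and then regularizing by convolution with a Gaussian, lends itself to a very compact implementation. The sketch below is illustrative only: it assumes 2-D images, SSD-driven forces, and a simple additive update of a backward-mapping displacement field, and all function and parameter names are ours rather than anything defined in the survey. It also includes the Jacobian-determinant check commonly used to monitor topology preservation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def demons_step(fixed, moving, disp, sigma_fluid=1.0, sigma_diff=1.0, eps=1e-6):
    """One Thirion-style demons iteration (2-D sketch).

    fixed, moving : target and source images (2-D arrays).
    disp          : backward-mapping displacement field, shape (2, H, W).
    """
    h, w = fixed.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Backward mapping: pull each deformed-image voxel value from the source (moving) image.
    warped = map_coordinates(moving, [yy + disp[0], xx + disp[1]], order=1, mode="nearest")

    # Demon force derived from the optical-flow constraint on the SSD criterion.
    gy, gx = np.gradient(fixed)
    diff = warped - fixed
    denom = gx ** 2 + gy ** 2 + diff ** 2 + eps
    update = np.stack([-diff * gy / denom, -diff * gx / denom])

    # "Fluid-like" regularization: smooth the update field ...
    update = gaussian_filter(update, (0, sigma_fluid, sigma_fluid))
    disp = disp + update
    # ... and "diffusion-like" regularization: smooth the accumulated field.
    return gaussian_filter(disp, (0, sigma_diff, sigma_diff))

def jacobian_determinant(disp):
    """det J of x + u(x); values <= 0 flag folding, i.e. loss of topology preservation."""
    duy_dy, duy_dx = np.gradient(disp[0])
    dux_dy, dux_dx = np.gradient(disp[1])
    return (1 + duy_dy) * (1 + dux_dx) - duy_dx * dux_dy
```

A compositive update (warping the current field by the update rather than adding it) together with a cap of half a voxel on the update length is the usual refinement when invertibility matters, as noted in the text above.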

Animal Models of Cerebral Small Vessel Disease with White Matter Lesion

Funding: National Natural Science Foundation of China (81871021). Author affiliations: 1 China National Clinical Research Center for Neurological Diseases, Beijing Tian Tan Hospital, Capital Medical University, Beijing 100070; 2 Department of Neurology, Beijing Tian Tan Hospital, Capital Medical University. Corresponding author: LIU Xiang-Rong, liuxr@. [Abstract] Cerebral small vessel disease is an important cerebrovascular disease that causes cognitive decline, gait and mood disturbances, and dementia; diffuse white matter injury is a key imaging feature of the disease.

Drawing on recent domestic and international studies, this article systematically reviews how different animal models of white matter injury in cerebral small vessel disease are established, including single models such as the bilateral common carotid artery stenosis model, the cerebral amyloid angiopathy model, the Notch3 transgenic mouse model, the spontaneously hypertensive rat model, and the stroke-prone renovascular hypertensive rat model, as well as composite models built by combining two or more of these single models.

Keywords: cerebral small vessel disease; white matter injury; animal model. DOI: 10.3969/j.issn.1673-5765.2019.05.019
Animal Models of Cerebral Small Vessel Disease with White Matter Lesion. DONG Cheng-Ya 1, WANG Yi-Long 1,2, LIU Xiang-Rong 1. 1 China National Clinical Research Center for Neurological Diseases, Beijing Tian Tan Hospital, Capital Medical University, Beijing 100070, China; 2 Department of Neurology, Beijing Tian Tan Hospital, Capital Medical University, Beijing 100070, China. Corresponding author: LIU Xiang-Rong, E-mail: liuxr@
[Abstract] In recent years, cerebral small vessel disease (CSVD) has attracted increasing attention because it can cause cognitive decline, dementia, gait abnormalities and emotional disorders. The key imaging feature of CSVD is diffuse white matter lesions. This article reviews advances in establishing animal models of CSVD with white matter lesions, including the bilateral common carotid artery stenosis mouse model, the cerebral amyloid angiopathy mouse model, the Notch3 transgenic mouse model, the spontaneously hypertensive rat model, the stroke-prone renovascular hypertensive rat model, and mixed models composed of two or more single animal models.
[Key words] Cerebral small vessel disease; white matter lesion; animal model
Review. DONG Cheng-Ya 1, WANG Yi-Long 1,2, LIU Xiang-Rong 1. Cerebral small vessel disease (CSVD) refers to pathological changes in all vascular structures located in the brain parenchyma and the subarachnoid space, leading to ischemic or hemorrhagic injury of the brain parenchyma [1].

Research on deep learning models for medical image analysis (bilingual English-Chinese edition)

医疗影像分析的深度学习模型研究(英文中文双语版优质文档)With the continuous development of medical technology, medical imaging is more and more widely used in clinical practice. However, the analysis and diagnosis of medical images usually takes a lot of time and manpower, and the accuracy is also affected by the doctor's personal experience. In recent years, the emergence of deep learning technology has brought new opportunities and challenges for medical image analysis. This article will deeply discuss the research on deep learning models for medical image analysis, including the application and research progress of models such as convolutional neural network (CNN), recurrent neural network (RNN) and generative adversarial network (GAN).1. Application of convolutional neural network (CNN) model in medical image analysisConvolutional neural network is a state-of-the-art deep learning model, which has a wide range of applications in the field of medical image analysis. Convolutional neural networks can automatically extract features from medical images and classify them as normal and abnormal. For example, in medical image analysis, convolutional neural networks can be used to analyze lung X-rays to identify lung diseases such as pneumonia, tuberculosis, and lung cancer. In addition, convolutional neural networks can also be used for segmentation and registration of medical images for more precise localization and identification of lesions.2. Application of Recurrent Neural Network (RNN) Model in Medical Image AnalysisA recurrent neural network is a deep learning model capable of processing sequential data. In medical image analysis, recurrent neural networks are often used to analyze time series data, such as electrocardiograms and electroencephalograms in medical images. The cyclic neural network can automatically extract the features in the time series data, so as to realize the classification and recognition of medical images. For example, in the diagnosis of heart disease, a recurrent neural network can identify heart diseases such as arrhythmia and myocardial infarction by analyzing ECG data.3. Application of Generative Adversarial Network (GAN) Model in Medical Image AnalysisGenerative adversarial networks, a deep learning model capable of generating realistic images, have also been widely used in medical image analysis in recent years. Generative adversarial networks usually consist of two neural networks, one is a generative network, which is responsible for generating realistic images; the other is a discriminative network, which is used to judge whether the images generated by the generative network are consistent with real images. In medical image analysis, GAN can be used to generate realistic medical images, such as MRI images, CT images, etc., to help doctors better diagnose and treat. In addition, the generative confrontation network can also be used for denoising and enhancement of medical images to improve the clarity and accuracy of medical images.In conclusion, the application of deep learning models in medical image analysis has broad prospects and challenges. With the continuous development of technology and the deepening of research, the application of deep learning models in medical image analysis will become more and more extensive and in-depth, making greater contributions to the progress and development of clinical medicine.随着医疗技术的不断发展,医疗影像在临床中的应用越来越广泛。
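To make the convolutional approach described above concrete, the snippet below defines a toy classifier for single-channel radiographs. The framework (PyTorch), the layer sizes, the 224x224 input, and the normal/abnormal task are assumptions made purely for illustration; a clinically useful model would need far more capacity, training data, and validation.

```python
import torch
import torch.nn as nn

class ChestXrayCNN(nn.Module):
    """Minimal CNN that maps a 1-channel radiograph to normal/abnormal logits."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, n_classes)  # assumes 224x224 inputs

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = ChestXrayCNN()
logits = model(torch.randn(4, 1, 224, 224))  # batch of four dummy images
print(logits.shape)                          # torch.Size([4, 2])
```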

MEDICAL IMAGE PROCESSING DEVICE, MEDICAL IMAGE PROCESSING METHOD AND PROGRAM, AND DIAGNOSIS ASSISTANCE DEVICE

Patent title: MEDICAL IMAGE PROCESSING DEVICE, MEDICAL IMAGE PROCESSING METHOD AND PROGRAM, AND DIAGNOSIS ASSISTANCE DEVICE
Inventor: USUDA, Toshihiro. Application number: EP19879182, filed 2019-10-28. Publication number: EP3875022A4, published 2021-10-20. Applicant: FUJIFILM Corporation.
Abstract: Provided are a medical image processing apparatus, a medical image processing method, a program, and a diagnosis support apparatus that report a region of interest without hindering observation of the boundary between the region of interest and a region of non-interest in a medical image. The medical image processing apparatus includes a superimposition processing unit that superimposes, on a medical image, a figure for reporting a region of interest included in the medical image. The superimposition processing unit superimposes the figure on the inside of the region of interest such that at least part of the boundary between the region of interest and the region of non-interest is not covered by the figure, thereby reporting the region of interest without hindering observation of that boundary.
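One plausible way to obtain the behaviour claimed in the abstract, reporting a region of interest while leaving its boundary with the region of non-interest visible, is to erode the region-of-interest mask and draw the reporting figure only in the eroded interior. The sketch below is an illustrative reading, not the patented implementation; the function name, the margin parameter, and the use of binary erosion are assumptions.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def overlay_inside_roi(image, roi_mask, margin=5, value=255):
    """Mark a region of interest without covering its boundary.

    roi_mask : boolean array, True inside the region of interest.
    margin   : erosion steps kept clear of the ROI / non-ROI boundary.
    """
    inner = binary_erosion(roi_mask, iterations=margin)  # shrink the mask away from the boundary
    out = image.copy()
    out[inner] = value                                   # draw the reporting figure inside only
    return out
```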

Potassium mercuric iodide safety data sheet (MSDS)

Part 1. Chemical product and company identification
Chemical name (Chinese): 碘化钾汞. Chemical name (English): mercurate(2-), tetraiodo-, dipotassium, (T-4)- | mercury potassium iodide | potassium mercuric iodide. Synonym: mercuric potassium iodide (碘化汞钾). CAS No.: 7783-33-7. EC No.: 231-990-4. Molecular formula: K2HgI4. Recommended use of the product: consult the manufacturer. Restrictions on use: consult the manufacturer.

Part 2. Hazards identification
Emergency overview: liquid. Very toxic if swallowed. Very toxic in contact with skin. Very toxic by inhalation. Danger of serious damage to health by prolonged exposure. Very toxic to aquatic organisms; use suitable containment to prevent environmental pollution. May cause long-term adverse effects in the aquatic environment.

GHS classification (per the GB 30000-2013 series of classification and labelling standards; see Part 16): acute toxicity, oral, category 2; acute toxicity, dermal, category 1; acute toxicity, inhalation, category 2; specific target organ toxicity, repeated exposure, category 2; hazardous to the aquatic environment, acute, category 1; hazardous to the aquatic environment, chronic, category 1.

Label elements. Pictograms; signal word: Danger. Hazard statements: fatal if swallowed; fatal in contact with skin; fatal if inhaled; may cause damage to organs through prolonged or repeated exposure; very toxic to aquatic life; very toxic to aquatic life with long-lasting effects.

Precautionary statements, prevention: do not breathe dust/fume/gas/mist/vapours/spray. Avoid contact with eyes, skin or clothing. Wash thoroughly after handling. Do not eat, drink or smoke when using this product. Use only outdoors or in a well-ventilated area. Avoid release to the environment. Wear protective gloves/protective clothing/eye protection/face protection. [In case of inadequate ventilation] wear respiratory protection.

Precautionary statements, response: immediately call a poison centre or doctor. Get medical advice or attention if you feel unwell. Rinse mouth. Collect spillage. If swallowed: immediately call a poison centre or doctor. If inhaled: remove the person to fresh air and keep comfortable for breathing. Immediately take off all contaminated clothing and wash it before reuse.

A processing method for extracting blood-vessel contours from fundus images

Table of contents (excerpt): 6.2 Image erosion; Chapter 7: Compression of the binarized image; Chapter 8: Three-dimensional reconstruction based on virtual reality; Conclusions and outlook; References; Abstract (Chinese); Abstract (English); Acknowledgements.

Medical imaging with mercuric iodide direct digital radiographyflat-panel X-ray detectorsH. Gilboa, A. Zuck, O. Dagan, A. Vilensky, B.N. Breen, A. Taieb, B. Reisman and H. HermonReal-Time Radiography, Jerusalem Technology Park, Jerusalem 91487 Israel.G. Zentai, L. PartainGinzton Technology Center, Mountain View, CA 94043 USAR. Street, S. ReadyXerox Palo Alto Research Center, Palo Alto, CA94304 USAABSTRACTPhotoconductive polycrystalline mercuric iodide coated on amorphous silicon flat panel thin film transistor (TFT) arrays is the best candidate for direct digital X-ray detectors for radiographic and fluoroscopic applications in medical imaging.The mercuric iodide is vacuum deposited by Physical Vapor Deposition (PVD). This coating technology is capable of being scaled up to sizes required in common medical imaging applications. Coatings were deposited on 2”X2” and 4”X4” TFT arrays for imaging performance evaluation and also on conductive-coated glass substrates for measurements of X-ray sensitivity and dark current. TFT arrays used included pixel pitch dimensions of 100, 127 and 139 microns. Coating thickness between 150 microns and 250 microns were tested with beam energy between 25 kVP and 100 kVP utilizing exposure ranges typical for both fluoroscopic, and radiographic imaging.X-ray sensitivities measured for the mercuric iodide samples and coated TFT detectors were superior to any published results for competitive materials (up to 7100 ke/mR/pixel for 100 micron pixels). It is believed that this higher sensitivity can result in fluoroscopic imaging signal levels high enough to overshadow electronic noise. Diagnostic quality of radiographic and fluroscopic images of up to 15 pulses per second were demonstrated. Image lag characteristics appear adequate for fluoroscopic rates. Resolution tests on resolution target phantoms showed that resolution is limited to the TFT array Nyquist frquency including detectors with pixel size of 139 microns resolution ~3.6 lp/mm) and 127 microns (resolution ~3.9 lp/mm). The ability to operate at low voltages (~0.5 volt/ micron) gives adequate dark currents for most applications and allows low voltage electronics designs.Keywords: imaging, X-ray radiology, polycrystalline, mercuric iodide, imaging detectors, Flat-Panel imaging arrays.1. INTRODUCTIONDirect detector materials need to have several attributes, including high x-ray absorption, high charge collection, low dark current, good uniformity etc., and these are difficult to achieve in a single material. Nevertheless, higher resolutions are possible with direct detectors than those attainable by detectors that utilize phosphor coatings.Poly-HgI2a-Se Comments Atomic Number (Z) 80, 53 34Absorption increases with ZEnergy Band Gap (Eg) eV 2.1 2.2 Wide gap reduces dark currentCharge Pair Formation Energy (W), eV 5.5 42 Lower W increases the gain Mobility Life-time Product (PW) cm2/V 1.5*10-510-6 – 10-5Higher PW increases the charge collection Operational Electric Field (E) V/micron 0.5-1 10Lower E reduces electrical breakdownProcessing Temperature (ºC) 100 400 Lower temperature – simpler processTable 1. Comparison of Amorphous Se and Poly HgI2Polycrystalline semiconductor HgI2 films directly convert X-rays into electrical signals and show promise as X-ray detectors for digital radiography1-13. Table 1 summarises film properties of amorphous Se and polycrystalline HgI2 which are the materials considered for direct imaging. 
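The conversion-gain advantage implied by Table 1 can be made concrete with a one-line estimate: the number of electron-hole pairs created per absorbed photon is roughly the photon energy divided by W. The snippet below is a back-of-the-envelope sketch; the 52.3 keV photon is only the representative mean energy quoted later for an 80 kVp beam, and charge-collection losses are ignored.

```python
# Electron-hole pairs per absorbed X-ray photon, n = E_photon / W, using the Table 1 values.
W_EV = {"HgI2": 5.5, "a-Se": 42.0}  # pair-creation energy, eV

def pairs_per_photon(photon_kev, material):
    return photon_kev * 1e3 / W_EV[material]

for material in W_EV:
    print(material, round(pairs_per_photon(52.3, material)))  # ~9500 for HgI2, ~1250 for a-Se
```

The factor of roughly seven between the two yields mirrors the factor of seven quoted for W in the text, and it is one reason the measured HgI2 signal is high enough to overcome electronic noise in fluoroscopic operation.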
This paper describes measurements performed on high-resolution image sensors using HgI2 photoconducting layers in the direct detection mode of operation.The high Z value of HgI2 indicates that it is an efficient material for absorbing X-rays for clinically used X-ray exposure energies. In addition, the X-ray energy required to generate an Electron-Hole Pair in the mercuric iodide photoconductor, as designated by the parameter W, is relatively low. The lower the W, the higher the number of charges caused by irradiation, and the higher the X-ray sensitivity. As Table 1 indicates, the parameter W is smaller for HgI2 compared to Se by a factor of 7.The larger the Mobility-Lifetime (µIJ-Product), the greater the distance the electrical charges move in the detector. Greater distances result in higher sensitivity due to better charge collection. The high Z number, low W and high µIJ-product result in very high signal which can overcome all noise sources in fluoroscopic modes of operation. The ability to operate at low voltages allows low voltage electronic design.Mercuric iodide can be manufactured using very low temperature processing. This allows simple manufacturing techniques, a greater choice of substrate materials, and less danger of damaging the expensive TFTs during the coating process. This low manufacturing temperature will also allow the use of TFTs coated on plastic substrates, when they become available in the future. Plastic substrates offer cost-effective parts that lack brittleness and fragility, characteristically seen in the currently used glass. The temperature stability of HgI2allows operation, storage and transport at normally encountered temperatures. Other materials, such as amorphous selenium, have storage and transport problems that require them to be shipped and stored under special conditions. Even under non-extreme temperatures, amorphous selenium can irreversibly recrystallize, ceasing to act as a detector.2. THEORETICAL RESULTS2.1 Photon attenuationCalculated graphs of photon attenuation efficiency (PAE) for polycrystalline Mercuric Iodide and amorphous Selenium are shown in Figure 1for 70 kVp and 120 kVp with added 18 mm Al filtration. The graphs were obtained by using semi-empirical computer-generated spectra (TASMIP model)14, mass-attenuation coefficients taken from the NIST tables 15, and the following equation for calculating the photon attenuation efficiency (PAE)16:³³) ) max max00)()(1)(E E E E E E x E dEE dE e E PAE U P (1)where ĭ(E) is the spectral photon fluence,µ(E) is the energy-dependent attenuation coefficient,ȡ is the absorber coating density and x its thickness.Since high attenuation is a necessary condition for high DQE (Detective Quantum Efficiency),one can conclude from Fig 1 that HgI 2 film thickness of 300 microns will result in >80%of photon attenuation at 70 kVp whereas for Se the film thickness is >1000 microns. For 120kVp, film thickness of 500 micron of HgI2 will attenuate the X-ray more than 80% whereas for Se, more than 2000 microns is needed to attenuate this amount of photons.Fig.1: Photon attenuation of HgI 2 and Se at 70kVp and 120 kVp with added filtration of 18 mm Al.Photon attenuation is also dependent upon beam quality, as one can see from the theoretical modeling in Fig2.This dependence needs to be considered in the image correction procedures performed by the readout electronics and software.Fig2. Photon attenuation as a function of beam energy for added filtration of 2 mm and 18 mm Al. Film thickness is 300 microns3. 
EXPERIMENTAL RESULTS3.1X-ray transmissionX-ray transmission of glass substrates without an HgI2 layer were measured and these dose rate levels were taken into consideration for consequent measurements of X-ray transmission through different thickness of HgI2layers that were deposited on the same type of glass substrates. Each tested value of transmission was averaged over three sequential measurements. The reading spread of the Radcal dosimeter ranged from 0.15% to 0.43%.Fig 3shows that a 300 micron film will absorb more than80% of the X ray photons and that the absorption depends on the beam quality. These data can be compared to the theoretical calculation of transmission using NIST data for50keV.Fig 3. X-Ray dose transmission vs. HgI2 thickness3.2Dark currentsDark current can limit the use of a material for X-ray detection, since it prevents application of high field to optimize the charge collection.High dark current will be stored in the storage capacitance and will limit the dynamic range of the imager. In addition, it will add to the total noise level of the imager. The typical leakage current of HgI2 PVD films is shown in Fig4 for a 4” square detector. This detector consisted of an ITO coated substrate upon which HgI2 was deposited by the PVD process. Top electrodes were then deposited onto the mercuric iodide top surface in a distributed pattern across the substrate. The film leakage current was found to be ~10 pA/mm2 at 0.5 V/P m, which is significantly reduced compared to earlier published results2. There is,however, a degree of variation in the dark current. The dark current standard deviation between electrodes is about20% of the average value for all fields of 0.3 V/P m and higher.Fig4.Dark Current vs.bias uniformity across 4”x4” substrateThe temperature dependence of the dark current was also evaluated. This measurement was carried out in a thermostatically controlled, lightproof testing box, enabling precise temperature monitoring and uniformity of testing conditions. The testing rig used consisted of a laboratory oven having 0.1°C temperature resolution,a Keithley pico-ammeter and a power supply with computer controlled biasing. The temperature dependence of the dark current is shown in Figure 5. From the slope of the curve, it is possible to calculate the activation energy for the leakage current for HgI2. Fig 5 shows an activation energy of 1.055 eV.Fig. 5. Dark current vs. temperature measured on PVD detector at 0.3V/µ and 0.5V/µ3.3SignalThe same 4” square detector that was used to characterize dark current uniformity was also used to evaluate uniformity of X-ray signal response of the PVD mercuric iodide film. The detector was exposed to380 mR/min of80kVp X-ray radiation for 4.6 minutes and the signal response was recorded for all electrodes distributed across the substrate. The signal uniformity is significantly better than observed for the detector’s dark current. For voltage bias of 0.5V/P m and above, the standard deviation of signal response is about 6.7% of the average response across all electrodes. Figure 6 presents the signal response for the detector electrodes in terms of the X-ray sensitivity,measured in units of P Coulombs/Roentgen-cm2.Although non-uniformity of dark current is moderately high,its average value is quite low and it will therefore add little to signal noise. 
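The activation energy quoted for Fig. 5 follows from the slope of an Arrhenius plot of the dark current. The sketch below shows the standard least-squares extraction; the temperatures and currents are synthetic placeholders chosen to be consistent with an activation energy near 1 eV and a room-temperature level of about 10 pA, not the measured data.

```python
import numpy as np

K_B_EV = 8.617e-5  # Boltzmann constant, eV/K

def activation_energy_ev(temps_c, dark_currents):
    """Arrhenius estimate: fit ln(I) against 1/T (kelvin); the slope equals -Ea / k_B."""
    inv_t = 1.0 / (np.asarray(temps_c) + 273.15)
    slope, _ = np.polyfit(inv_t, np.log(dark_currents), 1)
    return -slope * K_B_EV

# Placeholder readings at a fixed bias (currents in pA), generated from Ea = 1.055 eV.
temps = np.array([25.0, 30.0, 35.0, 40.0, 45.0])
currents = 10.0 * np.exp(-1.055 / K_B_EV * (1.0 / (temps + 273.15) - 1.0 / 298.15))
print(f"recovered Ea ~ {activation_energy_ev(temps, currents):.2f} eV")
```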
This means that it is the uniformity of X-ray sensitivity across the mercuric iodide photoconductor that will determine the ultimate resolution and gray scale that can be achieved in a full TFT array imager. Although further improvement is desirable, the present result of signal uniformity allows the achievement of high quality X-ray imaging.Fig 6. Signal response (sensitivity) as a function of bias voltage for electrodes distributed across a 4”*4” detector4. PIXEL TO PIXEL UNIFORMITY4.1 TFT array image sensor and measuring systemThe measurements were performed on512x512 pixel arrays having a pixel size of100P m x 100P m. However, the fabrication processes used are inherently scalable to the large areas required for medical imaging. The pixel design is typical for direct detection devices, and has been described elsewhere10.The source contact of the amorphous silicon TFT is connected to the contact electrode, which is isolated from the rest of the pixel by an insulation layer, and the TFT gate and drain are connected to address lines for readout. The contacts are almost square, with a minimum separation to the neighboring pads of 15P m, and has a geometrical fill factor of 67%.4.2SignalMeasurement of the electron charge collection is shown in Figure 7. From the figure, one can extract the PW of the detector which gives essentially the same PW value of ~1.5x10-5 cm 2/V for the two measurements 10.The initial distribution of carriers depends on the X-ray energy. For low energies (i.e. 24kVp), the absorption is essentially in a region close to the top contact and in this case, there is a high degree of certainty that the carriers being collected are electrons, and that holes contribute little to the signal. For high energy exposure, the absorption is more uniform throughout the sample, and there may be contributions from both types of carriers.Figure 7. Electron charge collection for HgI 2 measured at 24 and 80 kVp exposure 4.3Pixel to pixel uniformity Figure 8 shows pixel-to-pixel uniformity at various exposures for 80 kVp . The dark current background has been subtracted so that broadening of the histogram peaks is due to photon noise and non-uniformity of the pixel response.The sample was exposed by a x-ray beam of 80kVp and average energy per photon (E avg ) of 52.3 keV.One can see that the “net” pixel-to-pixel fluctuations obtained decreases with increasing dose . This can be explained by the dependence of gain variance on the dose, namely the dependence of absorption process, charge creation by hot photo-electrons,and charge collection (including traps filling) on dose. This result is a significant improvement relative to that formerly reported 8.Figure 8. Pixel-to-pixel uniformity is shown at various exposures for 80KVp. The improvement in the micro-uniformity relatively toformer results 8 is significant.5. IMAGING WITH TFT ARRAYSFigure 9 exhibits lag for a mercuric iodide PVD imager when biased at 200V. A continuous 60kVp X-ray beam was applied and then switched off, then charge was collected at the rate of 15 frames/sec. From this graph, charge collection in the first image frame after X-ray illumination is seen to be ~80%of the total generated charge (charge collection is arbitrarily defined as 100% in the first frame while 22.6%additional cumulative charge is collected in the following 14frames).Fig 9 . Lag measured for a PVD mecuric iodide at 15 frames per second.The excellent resolution was confirmed by imaging a resolution test phantom shown in Figure 10. 
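Both the bar-pattern result just described and the MTF comparison reported below are limited by the pixel pitch, which is easy to verify numerically. The sketch below assumes an ideal square aperture equal to the pitch (the real pixels have a 67% geometric fill factor), so it is an idealization rather than a model of the measured MTF.

```python
import numpy as np

def nyquist_lp_per_mm(pitch_um):
    """Nyquist limit of a pixelated detector: 1 / (2 * pixel pitch)."""
    return 1.0 / (2.0 * pitch_um * 1e-3)

def aperture_mtf(freq_lp_per_mm, pitch_um):
    """Ideal aperture MTF of a square pixel: |sinc(f * pitch)|, with np.sinc(x) = sin(pi x)/(pi x)."""
    return np.abs(np.sinc(freq_lp_per_mm * pitch_um * 1e-3))

for pitch in (100, 127, 139):
    f_ny = nyquist_lp_per_mm(pitch)
    print(f"{pitch} um pixels: Nyquist ~ {f_ny:.1f} lp/mm, aperture MTF there ~ {aperture_mtf(f_ny, pitch):.2f}")
# 127 um -> ~3.9 lp/mm and 139 um -> ~3.6 lp/mm, matching the limiting resolutions quoted in the text.
```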
The measurements and the image were taken in continuous fluoroscopic mode at 60-kVp X-ray energy and 2.1mA current. The imager runs at 15frame/sec. The dose rate was 0.29mR/frame.The lines are well distinguishable up to the 3.9 lp/mm range (see insert) so the resolution was essentially limited only by the pixel size.Fig.10. Resolution pattern image obtained with a pixilated amorphous silicon substrate coated with a 165 µm HgI 2 film deposited byPVD on a 127um pixel size TFT array. Excellent resolution is achieved00.51 1.52 2.53 3.54 4.5cycles/mm M T FFig. 11. MTF of a PVD deposited HgI 2imager with 139P m pixel size. The theoretical sinc function is also plotted for reference As a more qualitative gauge of the resolution,we also measured the Modulation Transfer Function (MTF) of an imager with 139um pixel pitch (Fig.11). For these tests, 60kVp x-ray energy and 290P R dose/frame were used. The MTF is only slightly below the theoretical sinc function up to the Nyquist frequency (~3.6lp/mm) given by the 139P m pixel size. This shows that the resolution is practically limited only by the pixel size. By decreasing the pixel size, we can get higher resolutions using HgI 2 material.Fig. 12. Foot phantom image taken by a 4” x 4” size imager.Finally, we took a foot phantom image by an HgI 2 imager with 127P m pixel size (Fig 12). The fine structures of the bones are clearly visible. This image demonstrates that the image quality of the HgI 2 imagers (even on larger sizes) are getting better and approaching that of the medical requirements.6. SUMMARYImages obtained with 4”x4” and 2”x2” imaging pixilated arrays using PVD-HgI2 films are shown. The HgI2 arrays demonstrate high sensitivity to x-rays and excellent spatial resolution. Extremely high resolution and high absorption were also shown. This suggests new applications for HgI2 detectors, especially those requiring extreme sensitivity such as fluoroscopy, which is beyond the capabilities of thin film detectors.7. ACKNOWLEDGMENTSThe authors are grateful to the members of the RTR Testing group, I. Baydjanov and M. Kaminsky for carrying out the tests of X-Ray absorption and dark current dependence on temperature.8. REFERENCES1.“Mercuric Iodide Thick Films for Radiological X-ray Detectors”. M. Schieber, H. Hermon, R.A. Street, S.E.Ready, A. Zuck, A. Vilensky, L. Melekhov, R. Shatunovsky, M. Lukach, E. Meerson, Y. Saado and E.Pinhasy, Proc. of the SPIE Vol. 4142 (2000) 197.2. “Radiological X-ray Response of Polycrystalline Mercuric Iodide Detectors”.M. Schieber, H. Hermon, R.Street, S. Ready, A. Zuck, A. Vilensky, L. Melekhov, R. Shatunovsky, E. Meerson, Y. Saado. In Proc. of the SPIE on Medical Imaging 2000 San Diego, Vol. 3977 (2000) 48.3.“Theoretical and experimental sensitivity to X-rays of single and polycrystalline HgI2 compared with differentsingle crystal detectors”. M. Schieber, H. Hermon, A. Zuck, A. Vilensky, L. Melekhov, R. Shatunovsky, E.Meerson, H. Saado, NIMA Vol. 458 (2001) p 41.4.“Characterization of CZT Detectors Grown From Horizontal and Vertical Bridgman”. H. Hermon, M.Schieber, M. Goorsky, T. Lam, E. Meerson, H. Yao, J. Erickson, and R.B. James, in Proc. of the SPIE on Hard X-ray and Gamma-ray Radiation, Vol. 4141 (2000) 186.5.“Comparison of Cadmium Zinc Telluride Crystals Grown by Horizontal and Vertical Bridgman and From theVapor Phase”.M. Schieber, R.B. James, H. Hermon, A.Vilensky, I. Baydjanov, M.Goorsky, T. Lam, E.Meerson, H.W. Yao, J. Erickson, E. Cross, A. Burger, J.O. Ndap, G. Wright, and M. 
Fiederle, Accepted forpublication in JCG (2001).6.“Thick Films of X-Ray Polycrystalline Mercuric Iodide Detectors”. M. Schieber, H. Hermon, A. Zuck, A.Vilensky, L. Melekhov, R. Shatunovsky, E. Meerson, Y. Saado, M. Lukach, E.Pinhasy, S.E. Ready, and R.A.Street, Journal of Crystal Growth, 225 (2-4) (2001) pp. 118-123.7.“Deposition of Thick Films of Polycrystalline Mercuric Iodide X-Ray Detectors”.H. Hermon, M. Schieber, A.Zuck, A. Vilensky, L. Melekhov,E. Shtekel, A. Green, O. Dagan, S.E. Ready,R.A. Street, G. Zentai, and L.Partain, Proc. of the SPIE, MI 2001 – Vol. 4320 (2001) pp. 133-139.8.“Comparative Study of PbI2 and HgI2 as Direct Detector Materials for High Resolution X-ray ImageSensors”. R.A. Street, M. Mulato, S.E. Ready, R. Lau, J. Ho, K. VanSchuylengergh, M. Schieber, H. Hermon,A. Zuck, A. Vilensky, K. Shah, P. Bennett and Y. Dmitryev. Proc. of the SPIE MI 2001 - Vol. 4320 (2001) pp.1-12.9.“Non Destructive Imaging with Mercuric Iodide Thick Film X ray detectors”. M. Schieber, H. Hermon, A.Zuck, A. Vilensky, L. Melekhov, M. Lukach, E. Meerson, Y. Saado, E. Shtekel, B. Reisman, G. Zentai, E.Seppi, R. Pavlyuchkova, G. Virshup, L. Partain, R. Street, S.E. Ready and R. James. Published in Proc. of the SPIE NDT 2001 - Vol. 4335 (2001) pp. 43-51.10. “Comparison of PbI2 and HgI2 for Direct Detection Active N=Matrix X –Ray Image Sensor” R.A. Street,S.E. Ready, K.V. Schuylenbergh, J. Ho, J.B. Boyce, P. Nylen, K. Shah, L. Melekhov and H. Hermon J. App.Phys. Vol 91, No 5 P 3345 (2002).11. “Approaching the Theoretical X-ray Sensitivity with HgI2 Direct Detection Image Sensors”. R.A. Street, S.E.Ready, L. Melekhov, J. Ho, A. Zuck and B. Breen. To be published in Proc. of the SPIE MI 2002.12. “Large Area Mercuric Iodide X-Ray Imager”. G. Zentai, L. Partain, R. Pavlyuchkova, G. Virshup, A. Zuck, L.Melekhov, O. Dagan, A. Vilensky and H. Gilboa. To be published in Proc. of the SPIE MI 2002.13. “Large area mercuric iodide thick film X-ray detectors for fluoroscopic (on-line) imaging”. G. Zentai, L.Partain, R. Pavlyuchkova, C. Proano, G. Virshup, B.N. Breen, A. Zuck, B. Reisman, A. Taieb and M.Schieber. To be published in Proc. of the SPIE NDT 2002.14. “Technology and Applications of Amorphous Silicon” R. A. Street, ed., Springer, 1999.JM Boone, JA Seibert,Med. Phys. 24 (1997), 1661-1670.15. /PhysRefData/XrayMassCoef/cover.html, JH Hubbell, SM Seltzer, (NIST public domain).16. “Handbook of Medical Imaging” Vol 1, JM Boone in (J Beutel, HL Kundel, RL Van Metter, eds.), SPIE Press2000.17. “X-Ray Imaging Using Amorphous Selenium: Feasibility of a Flat Panel Self- Scanned Detector for DigitalRadiology” W Zhao, JA Rowlands, Med. Phys. 22 (1995), 1595-1604.。
