Hybrid image segmentation using watersheds and fast region merging


Waters Protein-Pak Hi Res Q Column Separation of a Low Range ssRNA Ladder

Size and Purity Assessment of Single-Guide RNAs by Anion-Exchange Chromatography (AEX)

Hua Yang, Stephan M. Koza, Ying Qing Yu
Waters Corporation

Abstract

Single-guide RNA (sgRNA) is a critical element in CRISPR/Cas9 technology for gene editing, and its size usually ranges from 100 to 150 bases. In this application note, we show that the size of several sgRNAs could be estimated by comparison to a Low Range ssRNA Ladder (50–500 bases) using an optimized anion-exchange method developed on a Waters Protein-Pak Hi Res Q Column. In addition, the purity of the sgRNA samples can be assessed using the same anion-exchange method, providing an informative and non-complex method for sgRNA product consistency.

Benefits

■ Waters Protein-Pak Hi Res Q Column separation of a Low Range ssRNA Ladder with sizes ranging from 50 to 500 bases
■ Waters Protein-Pak Hi Res Q Column separation of ssRNAs and their impurities
■ Size and purity estimation of ssRNAs having a size range of 100–150 mer under the same gradient conditions using the AEX method on a Waters Protein-Pak Hi Res Q Column

Introduction

The discovery of clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated (Cas) bacterial immunity systems and the rapid adaptation of RNA-guided CRISPR/CRISPR-Associated Protein 9 (Cas9) technology to mammalian cells have had a significant impact in the field of gene editing [1–3]. The Cas9 protein, a non-specific endonuclease, is directed to a specific DNA site by a guide RNA (gRNA), where it makes a double-strand break in the DNA of interest. The gRNA consists of two parts: CRISPR RNA (crRNA) and trans-activating crRNA (tracrRNA). The crRNA is usually a 17–20 nucleotide sequence complementary to the target DNA, and the tracrRNA serves as a binding scaffold for the Cas9 nuclease. While crRNAs and tracrRNAs exist as two separate RNA molecules in nature, the single-guide RNA (sgRNA), which combines both the crRNA sequence and the tracrRNA sequence into a single RNA molecule, has become a commonly used format. The length of a sgRNA is in the range of 100–150 nucleotides. It is critical to characterize the sgRNA, as it is the core of the CRISPR/Cas9 technology.

Anion-exchange chromatography (AEX) separates molecules based on their differences in negative surface charge. This analytical technique can be robust, reproducible, and quantitative. It is also easy to automate, requires small amounts of sample, and allows for the isolation of fractions for further analysis. AEX has been utilized in multiple areas related to gene therapy, including adeno-associated virus empty and full capsid separation, plasmid isoform separation, and dsDNA fragment separation [4–6]. Since sgRNAs are negatively charged due to the phosphate groups on the backbone, we investigated AEX for size and purity assessment of sgRNAs.

In this application note, we show that using a Waters Protein-Pak Hi Res Q strong anion-exchange column on an ACQUITY UPLC H-Class Bio System, a single-stranded RNA (ssRNA) ladder ranging from 50 to 500 bases can be separated and used for estimating the size of ssRNAs in the approximate range of 100–150 bases, including the sgRNAs for the CRISPR/Cas9 system. Moreover, the purity of these ssRNAs can be estimated under the same gradient conditions.

Experimental

Sample Description

HPRT (purified and crude) is a pre-designed CRISPR/Cas9 sgRNA (Hs.Cas9.HPRT1.1AA, 100 mer). GUAC is a customized ssRNA (150 mer) which contains repeats of the GUAC sequence. HPRT sgRNA and GUAC ssRNA were purchased from Integrated DNA Technologies (IDT). Rosa26 and Scrambled #2 are both pre-designed CRISPR/Cas9 sgRNAs purchased from Synthego (100 mer). The Low Range ssRNA Ladder was purchased from New England Biolabs (N0364S).

Method Conditions

LC Conditions

LC system: ACQUITY UPLC H-Class Bio
Detection: ACQUITY UPLC TUV Detector with 5 mm titanium flow cell
Wavelength: 260 nm
Vials: Polypropylene 12 x 32 mm Screw Neck Vial, with Cap and Pre-slit PTFE/Silicone Septum, 300 µL Volume, 100/pk (P/N 186002639)
Column: Protein-Pak Hi Res Q Column, 5 µm, 4.6 x 100 mm (P/N 186004931)
Column temp.: 60 °C
Sample temp.: 10 °C
Injection volume: 1–10 µL
Flow rate: 0.4 mL/min
Mobile phase A: 100 mM Tris-HCl
Mobile phase B: 100 mM Tris base
Mobile phase C: 3 M tetramethylammonium chloride (TMAC)
Mobile phase D: Water
Buffer conc. to deliver: 20 mM

Gradient Table (an AutoBlend Plus Method, Henderson-Hasselbalch derived)

In the gradient table, the buffer is 20 mM Tris pH 9.0. The initial salt concentration is set to 0 mM to ensure all the analytes are strongly bound onto the column. After 5 min, the salt concentration is increased to 1400 mM, where most of the impurities elute, based on prior investigation. After 4 min of equilibration, the separation gradient starts. The salt concentration increases linearly from 1400 mM to 2100 mM in 20 min for the Low Range ssRNA Ladder separation, as well as for the individual ssRNAs. It is then ramped up to 2400 mM to strip off any remaining bound molecules. Finally, an equilibration step back to the initial condition takes place, preparing for the next injection. An equivalent gradient table for a generic quaternary LC system is also provided.
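For readers who want to trace the salt program, the sketch below encodes the gradient described above as time/concentration segments; the segment times (5 min load, 4 min hold, 20 min linear ramp, 1 min strip) are assumptions inferred from the text, not the published gradient table.

```python
# TMAC program as (start min, end min, start mM, end mM) segments,
# reconstructed from the description above; times are approximate.
segments = [
    (0.0,  5.0,  0,    0),     # load: analytes fully bound at 0 mM salt
    (5.0,  9.0,  1400, 1400),  # step to 1400 mM, 4 min hold: impurities elute
    (9.0,  29.0, 1400, 2100),  # 20 min linear separation gradient
    (29.0, 30.0, 2100, 2400),  # ramp to strip remaining bound molecules
]

def tmac_mM(t):
    """TMAC concentration at time t (min), linear within each segment."""
    for t0, t1, c0, c1 in segments:
        if t0 <= t <= t1:
            return c0 + (c1 - c0) * (t - t0) / (t1 - t0)
    return segments[-1][3]  # after the strip step
```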
Data Management

Chromatography software: Empower 3 (FR 4)

Results and Discussion

Size Assessment

Various mobile phase conditions were tested using a Low Range ssRNA Ladder for size assessment of the ssRNAs, including pH (7.4 and 9.0), column temperature (30 °C and 60 °C) and salt (NaCl and TMAC). The results from the optimal conditions are shown in Figure 1B. Using a pH 9.0 Tris buffer with a 60 °C column temperature and a TMAC salt gradient, the Low Range ssRNA Ladder (50–500 bases), four pre-made sgRNAs (100 mer), and one customized ssRNA (150 mer) were separated on a Waters Protein-Pak Hi Res Q Column. The separation of the Low Range ssRNA Ladder on this strong anion-exchange column was very similar to that on an agarose gel, as shown in Figure 1A. A calibration curve was constructed based on the retention time and the logarithm of the number of bases of each ssRNA in the ladder (Figure 1C, blue dots). The linear fit from the Low Range ssRNA Ladder indicates a strong correlation between the logarithm of the size and the retention time (R² = 0.993). Using this plot, the size of the ssRNAs was calculated from their individual retention times. The percent error is calculated using the formula (calculated size − theoretical size)/theoretical size × 100%. The percent error was less than 6% for all the RNAs tested (Figure 1D), as evidenced by the orange data points residing on or very close to the trendline of the calibration curve. Notice that a small percent error was obtained for four pre-made sgRNAs from two different manufacturers and a customized ssRNA with an artificial sequence.
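As a sketch of the size-estimation arithmetic, the snippet below fits the calibration line and applies the percent-error formula. The ladder sizes match the NEB Low Range ssRNA Ladder; the retention times are placeholders to be read off a chromatogram such as Figure 1B, not values from this work.

```python
import numpy as np

# Ladder component sizes (bases) and hypothetical retention times (min).
ladder_bases = np.array([50, 80, 150, 300, 500])
ladder_rt = np.array([10.2, 12.4, 15.1, 18.0, 20.3])  # placeholders

# Calibration curve: linear fit of log10(size) vs. retention time.
slope, intercept = np.polyfit(ladder_rt, np.log10(ladder_bases), 1)

def estimated_size(rt):
    """ssRNA size (bases) estimated from its retention time (min)."""
    return 10 ** (slope * rt + intercept)

def percent_error(rt, theoretical):
    """(calculated - theoretical) / theoretical, in percent."""
    return 100 * (estimated_size(rt) - theoretical) / theoretical
```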
Although ssRNAs shorter than 100 bases or longer than 150 bases were not tested, it is possible that this method can be used for ssRNA size assessment across the 50–500 base range.

Figure 1A. Agarose gel separation of the Low Range ssRNA Ladder (reprinted from (2021) with permission from New England Biolabs). 1B. Anion-exchange separation of the Low Range ssRNA Ladder and ssRNAs on a Waters Protein-Pak Hi Res Q Column. 1C. A plot of log(size) vs. retention time for the Low Range ssRNA Ladder (blue dots) and individual ssRNAs (orange dots). 1D. Size estimation of individual ssRNAs based on retention time and the calibration curve. A small percent error was obtained for all ssRNAs.

It is noteworthy that a mobile phase condition with pH 7.4 Tris buffer, 60 °C column temperature and a TMAC salt gradient also resulted in good size estimation, with a percent error of <5% for all pre-made sgRNAs (100 mer) and ~12% for the artificially made GUAC ssRNA (150 mer). Overall, a 60 °C column temperature resulted in a single peak for each ssRNA, which is needed to determine the retention time of the peak for size assessment. A 30 °C column temperature resulted in more than one major peak, presumably isomers of the ssRNAs. Multiple peaks were also observed when using NaCl as the salt, regardless of the pH and column temperature.

Purity Assessment

Purified and crude HPRT sgRNA were separated on the Protein-Pak Hi Res Q Column (Figure 2) using the same gradient conditions as for size assessment. The relative purities of the crude and purified samples were measured as 37.4% and 88.0%, respectively, based on the peak areas indicated. The majority of the impurities eluted prior to the 50-base position, although lower-abundance impurities appear to be present up to the size of the HPRT sgRNA.

Figure 2. Crude and purified HPRT sgRNA for the CRISPR/Cas9 System, separated on a Waters Protein-Pak Hi Res Q Column using the same conditions as in Figure 1B (see Experimental for details).

Conclusion

Anion-exchange chromatography is robust, reproducible, easy to automate, yields quantitative information, and requires a small amount of sample. We demonstrate here that the components of a Low Range ssRNA Ladder, ranging from 50 to 500 bases, can be separated on a Waters Protein-Pak Hi Res Q Column, with a linear correlation between the log of the base number and the observed retention time when TMAC is used as the elution salt. The size of ssRNAs ranging from 100 to 150 bases can be estimated by comparing the retention time of the ssRNAs with that of the Low Range ssRNA Ladder. In addition, the purity of the sgRNAs may also be assessed from the same chromatographic separation. This method can potentially be applied to the analysis of sgRNAs, which are the key element of CRISPR/Cas9 gene editing technology.

References

1. Dunbar C E, High K A, Joung J K, Kohn D B, Ozawa K, Sadelain M. Gene Therapy Comes of Age. Science 2018; 359: 175.
2. Rath D, Amlinger L, Rath A, Lundgren M. The CRISPR-Cas Immune System: Biology, Mechanisms and Applications. Biochimie 2015; 117: 119–128.
3. Hsu P D, Lander E S, Zhang F. Development and Applications of CRISPR-Cas9 for Genome Engineering. Cell 2014; 157: 1262–1278.
4. Yang H, Koza S, Chen W. Anion-Exchange Chromatography for Determining Empty and Full Capsid …
5. Yang H, Koza S, Chen W. Plasmid Isoform Separation and Quantification by Anion-Exchange …
6. Yang H, Koza S, Chen W. Separation and Size Assessment of dsDNA Fragments by Anion-Exchange …

720007428, November 2021. © 2021 Waters Corporation. All Rights Reserved.

A Fast and Accurate Plane Detection Algorithm for Large Noisy Point Clouds Using Filtered Normals

A Fast and Accurate Plane Detection Algorithm for Large Noisy Point Clouds Using Filtered Normals and Voxel Growing

Jean-Emmanuel Deschaud, François Goulette
Mines ParisTech, CAOR - Centre de Robotique, Mathématiques et Systèmes
60 Boulevard Saint-Michel, 75272 Paris Cedex 06
jean-emmanuel.deschaud@mines-paristech.fr, francois.goulette@mines-paristech.fr

Abstract

With the improvement of 3D scanners, we produce point clouds with more and more points, often exceeding millions of points. We therefore need a fast and accurate plane detection algorithm to reduce data size. In this article, we present a fast and accurate algorithm to detect planes in unorganized point clouds using filtered normals and voxel growing. Our work is based on a first step that estimates better normals at the data points, even in the presence of noise. In a second step, we compute a score of local planarity at each point. We then select the best local seed plane and, in a third step, start a fast and robust region growing by voxels that we call voxel growing. We have evaluated and tested our algorithm on different kinds of point clouds and compared its performance to other algorithms.

1. Introduction

With the growing availability of 3D scanners, we are now able to produce large datasets with millions of points. It is necessary to reduce data size, to decrease the noise and at the same time to increase the quality of the model. It is interesting to model planar regions of these point clouds by planes. In fact, plane detection is generally a first step of segmentation, but it can be used for many applications. It is useful in computer graphics to model the environment with basic geometry. It is used, for example, in modeling to detect building facades before classification. Robots perform Simultaneous Localization and Mapping (SLAM) by detecting planes of the environment. In our laboratory, we wanted to detect small and large building planes in point clouds of urban environments with millions of points for modeling. As mentioned in [6], the accuracy of the plane detection is important for later steps of the modeling pipeline. We also want to be fast, to be able to process point clouds with millions of points. We present a novel algorithm based on region growing, with improvements in normal estimation and in the growing process. Our method is generic enough to work on different kinds of data, like point clouds from fixed scanners or from Mobile Mapping Systems (MMS). We also aim at detecting building facades in urban point clouds, or small planes like doors, even in very large data sets. Our input is an unorganized noisy point cloud and, with only three "intuitive" parameters, we generate a set of connected components of planar regions. We evaluate our method as well as explain and analyse the significance of each parameter.
2. Previous Works

Although there are many methods of segmentation in range images, like in [10] or in [3], three have been thoroughly studied for 3D point clouds: region growing, the Hough transform from [14], and Random Sample Consensus (RANSAC) from [9].

The application of recognising structures in urban laser point clouds is frequent in the literature. Bauer in [4] and Boulaassal in [5] detect facades in dense 3D point clouds with a RANSAC algorithm. Vosselman in [23] reviews surface growing and 3D Hough transform techniques to detect geometric shapes. Tarsha-Kurdi in [22] detects roof planes in 3D building point clouds by comparing results of the Hough transform and the RANSAC algorithm; they found that RANSAC is more efficient than the first one. Chao Chen in [6] and Yu in [25] present segmentation algorithms in range images for the same application of detecting planar regions in an urban scene. The method in [6] is based on a region growing algorithm in range images and merges the results in one labelled 3D point cloud. [25] uses a method different from the three we have cited: they extract a hierarchical subdivision of the input image built like a graph, where leaf nodes represent planar regions.

There are also other methods, like Bayesian techniques. In [16] and [8], smoothed surfaces are obtained from noisy point clouds with objects modeled by probability distributions, and it seems possible to extend this idea to point cloud segmentation. But techniques based on Bayesian statistics need to optimize a global statistical model, and it is then difficult to process point clouds larger than one million points.

We present below an analysis of the two main methods used in the literature: RANSAC and region growing. The Hough transform algorithm is too time consuming for our application. To compare the complexity of the algorithms, we take a point cloud of size N with only one plane P of size n. We suppose that we want to detect this plane P, and we define n_min as the minimum size of the planes we want to detect. The size of a plane is the area of the plane. If the data density is uniform in the point cloud, then the size of a plane can be specified by its number of points.

2.1. RANSAC

RANSAC is an algorithm initially developed by Fischler and Bolles in [9] that allows the fitting of models without trying all possibilities. RANSAC is based on the probability of detecting a model using the minimal set required to estimate the model. To detect a plane with RANSAC, we choose 3 random points (enough to estimate a plane) and compute the plane parameters with these 3 points. Then a score function is used to determine how good the model is for the remaining points. Usually, the score is the number of points belonging to the plane. With noise, a point belongs to a plane if the distance from the point to the plane is less than a parameter γ. In the end, we keep the plane with the best score. The probability of getting the plane in the first trial is p = (n/N)³. Therefore the probability of getting it in T trials is p = 1 − (1 − (n/N)³)^T. Using equation (1) and supposing n_min/N ≪ 1, we know the minimal number of trials T_min needed to have a probability p_t of getting planes of size at least n_min:

T_min = log(1 − p_t) / log(1 − (n_min/N)³) ≈ log(1/(1 − p_t)) · (N/n_min)³.   (1)

For each trial, we test all data points to compute the score of a plane. The RANSAC algorithm complexity therefore lies in O(N (N/n_min)³) when n_min/N ≪ 1, and T_min → 0 when n_min → N. RANSAC is thus very efficient at detecting large planes in noisy point clouds, i.e. when the ratio n_min/N is close to 1, but very slow at detecting small planes in large point clouds, i.e. when n_min/N ≪ 1. After selecting the best model, another step is to extract the largest connected component of each plane. Connected components mean that the minimum distance between each point of the plane and the other points is smaller than a fixed parameter.

Schnabel et al. [20] bring two optimizations to RANSAC: the point selection is done locally and the score function has been improved. An octree is first created from the point cloud. Points used to estimate the plane parameters are chosen locally at a random depth of the octree. The score function is also different from RANSAC: instead of testing all points for one model, they test only a random subset and find the score by interpolation. The algorithm complexity lies in O(N r 4^d N / n_min), where r is the number of random subsets for the score function and d is the maximum octree depth. Their algorithm improves the plane detection speed, but its complexity lies in O(N²) and it becomes slow on large data sets. And again, we have to extract the largest connected component of each plane.
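To make the scaling of equation (1) concrete, here is a quick numeric check in Python; the cloud sizes and success probability are illustrative values, not figures from the paper.

```python
import math

def t_min(N, n_min, p_t=0.99):
    """Minimal number of RANSAC trials from equation (1)."""
    return math.log(1 - p_t) / math.log(1 - (n_min / N) ** 3)

# A large plane in a point cloud (n_min/N = 0.5) needs few trials...
print(round(t_min(N=100_000, n_min=50_000)))    # ~34 trials
# ...but a small plane in a large cloud (n_min/N = 0.01) needs millions.
print(round(t_min(N=1_000_000, n_min=10_000)))  # ~4.6 million trials
```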
2.2. Region Growing

Region growing algorithms work well in range images, like in [18]. The principle of region growing is to start with a seed region and to grow it by neighborhood when the neighbors satisfy some conditions. In range images, we have the neighbors of each point through pixel coordinates. In the case of unorganized 3D data, there is no information about the neighborhood in the data structure. The most common method to compute neighbors in 3D is to build a Kd-tree to search the k nearest neighbors. The creation of a Kd-tree lies in O(N log N) and the search of the k nearest neighbors of one point lies in O(log N). The advantage of these region growing methods is that they are fast when there are many planes to extract, robust to noise, and they extract the largest connected component immediately. But they only use the distance from point to plane to extract planes and, as we will see later, that is not accurate enough to detect correct planar regions.

Rabbani et al. [19] developed a method of smooth area detection that can be used for plane detection. They first estimate the normal of each point as in [13]. The point with the minimum residual starts the region growing. They test the k nearest neighbors of the last point added: if the angle between the normal of the point and the current normal of the plane is smaller than a parameter α, then they add this point to the smooth region. With a Kd-tree for the k nearest neighbors, the algorithm complexity is in O(N + n log N). The complexity seems to be low but, in the worst case, when n/N ≈ 1, for example for facade detection in point clouds, the complexity becomes O(N log N).

3. Voxel Growing

3.1. Overview

In this article, we present a new algorithm adapted to large data sets of unorganized 3D points and optimized to be accurate and fast. Our plane detection method works in three steps. In the first part, we compute a better estimation of the normal at each point by a filtered weighted plane fitting. In a second step, we compute the score of local planarity at each point. We select the best seed point, which represents a good seed plane, and in the third part, we grow this seed plane by adding all points close to the plane. The growing step is based on a voxel growing algorithm. The filtered normals, the score function and the voxel growing are the innovative contributions of our method.

As an input, we need dense point clouds related to the level of detail we want to detect. As an output, we produce connected components of planes in the point cloud. This notion of connected components is linked to the data density. With our method, the connected components of the detected planes are linked to the parameter d of the voxel grid.

Our method has 3 "intuitive" parameters: d, area_min and γ. "Intuitive" because they are linked to physical measurements. d is the voxel size used in voxel growing and also represents the connectivity of points in detected planes. γ is the maximum distance between a point of a plane and the plane model; it represents the plane thickness and is linked to the point cloud noise. area_min represents the minimum area of the planes we want to keep.

3.2. Details

3.2.1. Local Density of Point Clouds

In a first step, we compute the local density of the point cloud as in [17]. For that, we find the radius r_i of the sphere containing the k nearest neighbors of point i. Then we calculate ρ_i = k/(πr_i²). In our experiments, we find that k = 50 is a good number of neighbors. It is important to know the local density because many laser point clouds are made with a fixed-resolution angle scanner and are therefore not evenly distributed. We use the local density in section 3.2.3 for the score calculation.

3.2.2. Filtered Normal Estimation

Normal estimation is an important part of our algorithm. The paper [7] presents and compares three normal estimation methods. They conclude that the weighted plane fitting, or WPF, is the fastest and the most accurate for large point clouds. WPF is an idea of Pauly et al. in [17]: the fitting plane of a point p must take into consideration the nearby points more than other, distant ones. The normal least square, explained in [21], is the minimum of Σ_{i=1..k} (n_p · p_i + d)². The WPF is the minimum of Σ_{i=1..k} ω_i (n_p · p_i + d)², where ω_i = θ(‖p_i − p‖) and θ(r) = e^(−2r²/r_i²). For solving n_p, we compute the eigenvector corresponding to the smallest eigenvalue of the weighted covariance matrix C_w = Σ_{i=1..k} ω_i (p_i − b_w)(p_i − b_w)ᵀ, where b_w is the weighted barycenter. For the three methods explained in [7], we get a good approximation of normals in smooth areas, but we have errors at sharp corners. In Figure 1, we have tested the weighted normal estimation on two planes with uniform noise forming an angle of 90°. We can see that the normal is not correct on the corners of the planes and in the red circle.

To improve the normal calculation, which improves the plane detection especially on the borders of planes, we propose a filtering process in two phases. In a first step, we compute the weighted normals (WPF) of each point as described above, by minimizing Σ_{i=1..k} ω_i (n_p · p_i + d)². In a second step, we compute the filtered normal by using an adaptive local neighborhood: we compute the new weighted normal with the same sum minimization, but keeping only the points of the neighborhood whose normals from the first step satisfy |n_p · n_i| > cos(α). With this filtering step, we have the same results in smooth areas and better results at sharp corners. We call our normal estimation filtered weighted plane fitting (FWPF).

Figure 1. Weighted normal estimation of two planes with uniform noise and with a 90° angle between them.

We have tested our normal estimation by computing normals on synthetic data with two planes and different angles between them, and with different values of the parameter α. We can see in Figure 2 the mean error in normal estimation for WPF and FWPF with α = 20°, 30°, 40° and 90°. Using α = 90° is the same as not doing the filtering step. We see in Figure 2 that α = 20° gives a smaller error in normal estimation when the angle between planes is smaller than 60°, and α = 30° gives the best results when the angle between planes is greater than 60°. We have considered the value α = 30° as the best choice because it gives the smallest mean error in normal estimation when the angle between planes varies from 20° to 90°. Figure 3 shows the normals of the planes with a 90° angle, with better results in the red circle (normals are at 90° with the plane).

Figure 2. Comparison of mean error in normal estimation of two planes with α = 20°, 30°, 40° and 90° (= no filtering).

Figure 3. Filtered weighted normal estimation of two planes with uniform noise and with a 90° angle between them (α = 30°).
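As a concrete reading of the WPF formulas above, here is a minimal NumPy sketch; it is our illustration, not the authors' implementation. The FWPF second pass would simply repeat the same fit, keeping only the neighbors whose first-pass normals satisfy |n_p · n_i| > cos(α).

```python
import numpy as np

def wpf_normal(p, neighbors):
    """Weighted plane fitting (WPF) normal at point p.

    neighbors: (k, 3) array holding the k nearest neighbors of p.
    Weights follow theta(r) = exp(-2 r^2 / r_i^2), with r_i the radius
    of the sphere containing the k neighbors.
    """
    r = np.linalg.norm(neighbors - p, axis=1)
    r_i = r.max()
    w = np.exp(-2.0 * r**2 / r_i**2)
    b_w = (w[:, None] * neighbors).sum(axis=0) / w.sum()  # weighted barycenter
    diff = neighbors - b_w
    C_w = (w[:, None] * diff).T @ diff  # weighted covariance matrix
    # Normal = eigenvector of the smallest eigenvalue (eigh sorts ascending).
    return np.linalg.eigh(C_w)[1][:, 0]
```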
3.2.3. The Score of Local Planarity

In many region growing algorithms, the criterion used for the score of the local fitting plane is the residual, like in [18] or [19], i.e. the sum of the squares of the distances from the points to the plane. We use a different score function to estimate local planarity. For that, we first compute the neighbors N_i of a point p with points i whose normals n_i are close to the normal n_p. More precisely, we compute N_i = {p_i in the k neighbors of p : |n_i · n_p| > cos(α)}. It is a way to keep only the points which are probably on the local plane before the least square fitting. Then, we compute the local plane fitting of point p with the N_i neighbors by least squares, as in [21]. The set N'_i is the subset of N_i of points belonging to the plane, i.e. the points for which the distance to the local plane is smaller than the parameter γ (to account for the noise). The score s of the local plane is the area of the local plane, i.e. the number of points "in" the plane divided by the local density ρ_i (seen in section 3.2.1): the score s = card(N'_i)/ρ_i. We take the area of the local plane as the score function, and not the number of points or the residual, in order to be more robust to the sampling distribution.

3.2.4. Voxel Decomposition

We use a data structure that is the core of our region growing method. It is a voxel grid that speeds up the plane detection process. Voxels are small cubes of length d that partition the point cloud space. Every data point belongs to a voxel, and a voxel contains a list of points. We use the Octree Class Template of [2] to compute an octree of the point cloud. The leaf nodes of the graph built are voxels of size d. Once the voxel grid has been computed, we start the plane detection algorithm.

3.2.5. Voxel Growing

With the estimator of local planarity, we take the point p with the best score, i.e. the point with the maximum area of local plane. We have the model parameters of this best seed plane, and we start with an empty set E of points belonging to the plane. The initial point p is in a voxel v_0. All the points in the initial voxel v_0 for which the distance to the seed plane is less than γ are added to the set E. Then, we compute new plane parameters by least square refitting with the set E. Instead of growing with the k nearest neighbors, we grow with voxels: we test the points in the 26 neighboring voxels. This is a way to search the neighborhood in constant time, instead of O(log N) per neighbor as with a Kd-tree. In a neighboring voxel, we add to E the points for which the distance to the current plane is smaller than γ and the angle between the normal computed at each point and the normal of the plane is smaller than a parameter α: |cos(n_p, n_P)| > cos(α), where n_p is the normal of the point p and n_P is the normal of the plane P. We have tested different values of α and we empirically found that 30° is a good value for all point clouds. If we added at least one point to E for this voxel, we compute new plane parameters from E by least square fitting and we test its 26 neighboring voxels. It is important to perform the plane least square fitting at each voxel addition because, with noise, the seed plane model is not good enough to be used during the whole voxel growing, but only in the surrounding voxels. This growing process is faster than classical region growing because we do not compute a least square fit for each point added, but only for each voxel added.

The least square fitting step must be computed very fast. We use the same method as explained in [18], with incremental updates of the barycenter b and of the covariance matrix C as in equation (2). We know from [21] that the barycenter b belongs to the least square plane and that the normal of the least square plane n_P is the eigenvector associated with the smallest eigenvalue of C.

b_0 = 0_{3×1},  C_0 = 0_{3×3},
b_{n+1} = (1/(n+1)) (n b_n + p_{n+1}),
C_{n+1} = C_n + (n/(n+1)) (p_{n+1} − b_n)(p_{n+1} − b_n)ᵀ,   (2)

where C_n is the covariance matrix of a set of n points, b_n is the barycenter vector of a set of n points, and p_{n+1} is the (n+1)-th point vector added to the set.

This voxel growing method leads to a connected component set E because the points have been added through connected voxels. In our case, the minimum distance between one point and E is less than the parameter d of our voxel grid. That is why the parameter d also represents the connectivity of points in detected planes.

3.2.6. Plane Detection

To get all planes with an area of at least area_min in the point cloud, we repeat these steps (best local seed plane choice and voxel growing) over all points, in descending order of their score. Once we have a set E whose area is bigger than area_min, we keep it and classify all points in E.
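A minimal sketch of the incremental update of equation (2) follows; the class and variable names are ours, and in practice the eigen-decomposition would be done only when the plane parameters are needed.

```python
import numpy as np

class IncrementalPlaneFit:
    """Running barycenter b and covariance C per equation (2)."""

    def __init__(self):
        self.n = 0
        self.b = np.zeros(3)
        self.C = np.zeros((3, 3))

    def add(self, p):
        d = p - self.b  # uses b_n, i.e. before the barycenter update
        self.C += self.n / (self.n + 1) * np.outer(d, d)
        self.b = (self.n * self.b + p) / (self.n + 1)
        self.n += 1

    def plane(self):
        # The least square plane passes through b; its normal is the
        # eigenvector of the smallest eigenvalue of C.
        normal = np.linalg.eigh(self.C)[1][:, 0]
        return self.b, normal
```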
4. Results and Discussion

4.1. Benchmark Analysis

To test the improvements of our method, we have employed the comparative framework of [12], based on range images. For that, we have converted all images into 3D point clouds. All point clouds created have 260k points. After our segmentation, we project the labelled points on a segmented image and compare with the ground truth image. We have chosen our three parameters d, area_min and γ by optimizing the result of the segmentation of the 10 perceptron training images (the perceptron is a portable scanner that produces a range image of its environment). The best results have been obtained with area_min = 200, γ = 5 and d = 8 (units are not provided in the benchmark). We show the results of the segmentation of the 30 perceptron images in Table 1. GT Regions is the mean number of ground truth planes over the 30 ground truth range images. Correct detection, over-segmentation, under-segmentation, missed and noise are the mean numbers of correct, over-segmented, under-segmented, missed and noise planes detected by the methods. The 80% tolerance is the minimum percentage of points that must be detected, compared to the ground truth, for a correct detection. More details are in [12]. UE is a method from [12], UFPR is a method from [10]. It is important to notice that UE and UFPR are range image methods, whereas our method is suited not for range images but for 3D point clouds. Nevertheless, it is a good benchmark for comparison, and we see in Table 1 that the accuracy of our method is very close to the state of the art in range image segmentation.

Table 1. Average results of different segmenters at 80% compare tolerance.

Method      | GT Regions | Correct detection | Over-segmentation | Under-segmentation | Missed | Noise | Duration (in s)
UE          | 14.6       | 10.0              | 0.2               | 0.3                | 3.8    | 2.1   | -
UFPR        | 14.6       | 11.0              | 0.3               | 0.1                | 3.0    | 2.5   | -
Our method  | 14.6       | 10.9              | 0.2               | 0.1                | 3.3    | 0.7   | 308

To evaluate the different improvements of our algorithm, we have tested several variants of our method: without normals (only with the distance from points to the plane), without voxel growing (with a classical region growing by k neighbors), without our FWPF normal estimation (with WPF normal estimation), and without our score function (with the residual score function). The comparison is visible in Table 2. We can see the difference in computation time between region growing and voxel growing. We have tested our algorithm with and without normals, and we found that the accuracy cannot be achieved without normal computation. There is also a big difference in correct detection between WPF and our FWPF normal estimation, as we can see in Figure 4. Our FWPF normal estimation brings a real improvement in the border estimation of planes. Black points in the figure are non-classified points.

Table 2. Average results of variants of our segmenter at 80% compare tolerance.

Our method                  | GT Regions | Correct detection | Over-segmentation | Under-segmentation | Missed | Noise | Duration (in s)
without normals             | 14.6       | 5.67              | 0.1               | 0.1                | 9.4    | 6.5   | 70
without voxel growing       | 14.6       | 10.7              | 0.2               | 0.1                | 3.4    | 0.8   | 605
without FWPF                | 14.6       | 9.3               | 0.2               | 0.1                | 5.0    | 1.9   | 195
without our score function  | 14.6       | 10.3              | 0.2               | 0.1                | 3.9    | 1.2   | 308
with all improvements       | 14.6       | 10.9              | 0.2               | 0.1                | 3.3    | 0.7   | 308

Figure 5. Correct detection of our segmentation algorithm when the voxel size d changes.

We would like to discuss the influence of the parameters on our algorithm. We have three parameters: area_min, which represents the minimum area of the planes we want to keep; γ, which represents the thickness of the planes (it is generally closely tied to the noise in the point cloud, and especially to the standard deviation σ of the noise); and d, which is the minimum distance from a point to the rest of the plane. These three parameters depend on the point cloud features and the desired segmentation. For example, if we have a lot of noise, we must choose a high γ value. If we want to detect only large planes, we set a large area_min value. We also focus our analysis on the robustness of the voxel size d in our algorithm, i.e. the ratio of points vs voxels. We can see in Figure 5 the variation of the correct detection when we change the value of d. The method seems to be robust when d is between 4 and 10, but the quality decreases when d is over 10. This is due to the fact that, for a large voxel size d, some planes from different objects are merged into one plane.

4.1.1. Large Scale Data

We have tested our method on different kinds of data. We have segmented urban data (Figure 6) from our Mobile Mapping System (MMS) described in [11]. The mobile system generates 10k pts/s with a density of 50 pts/m² and very noisy data (σ = 0.3 m). For this point cloud, we want to detect building facades. We have chosen area_min = 10 m² and d = 1 m to have large connected components, and γ = 0.3 m to cope with the noise.

We have also tested our method on a point cloud from the Trimble VX scanner (Figure 7). It is a point cloud of 40k points with only 20 pts/m², with less noise because it is a fixed scanner (σ = 0.2 m). In that case, we also wanted to detect building facades and kept the same parameters, except γ = 0.2 m because we had less noise. We see in Figure 7 that we have detected two facades. By setting a larger voxel size d, like d = 10 m, we detect only one plane. We choose d, like area_min and γ, according to the desired segmentation and to the level of detail we want to extract from the point cloud.

We also tested our algorithm on the point cloud from the LEICA Cyrax scanner (Figure 8). This point cloud has been taken from the AIM@SHAPE repository [1]. It is a very dense point cloud from multiple fixed positions of the scanner, with about 400 pts/m² and very little noise (σ = 0.02 m). In this case, we wanted to detect all the little planes to model the church in planar regions. That is why we have chosen d = 0.2 m, area_min = 1 m² and γ = 0.02 m.

In Figures 6, 7 and 8 we show, on the left, the input point cloud and, on the right, only the points detected in a plane (planes are in random colors). The red points in these figures are seed plane points. We can see in these figures that planes are very well detected, even with high noise.
Table 3 shows the information on the point clouds and the results, with the number of planes detected and the duration of the algorithm. The time includes the computation of the FWPF normals of the point cloud. We can see in Table 3 that our algorithm performs linearly in time with respect to the number of points. The choice of parameters has little influence on the computation time, which is about one millisecond per point whatever the size of the point cloud (we used a PC with a QuadCore Q9300 and 2 GB of RAM). The algorithm has been implemented using only one thread and in-core processing. Our goal is to compare the improvement of plane detection between classical region growing and our region growing, with better normals for more accurate planes and voxel growing for faster detection. Our method seems to be compatible with out-of-core implementations like those described in [24] or in [15].

Table 3. Results on different data.

                 | MMS Street | VX Street | Church
Size (points)    | 398k       | 42k       | 7.6M
Mean Density     | 50 pts/m²  | 20 pts/m² | 400 pts/m²
Number of Planes | 20         | 21        | 42
Total Duration   | 452 s      | 33 s      | 6900 s
Time/point       | 1 ms       | 1 ms      | 1 ms

5. Conclusion

In this article, we have proposed a new method of plane detection that is fast and accurate even in the presence of noise. We demonstrate its efficiency on different kinds of data and its speed on large data sets with millions of points. Our voxel growing method has a complexity of O(N), and it is able to detect large and small planes in very large data sets and can extract them directly as connected components.

Figure 4. Ground truth; our segmentation without and with filtered normals.

Figure 6. Plane detection in a street point cloud generated by the MMS (d = 1 m, area_min = 10 m², γ = 0.3 m).

References

[1] AIM@SHAPE repository.
[2] Octree Class Template, /code/octree.html.
[3] A. Bab-Hadiashar and N. Gheissari. Range image segmentation using surface selection criterion. 2006. IEEE Transactions on Image Processing.
[4] J. Bauer, K. Karner, K. Schindler, A. Klaus, and C. Zach. Segmentation of building models from dense 3D point-clouds. 2003. Workshop of the Austrian Association for Pattern Recognition.
[5] H. Boulaassal, T. Landes, P. Grussenmeyer, and F. Tarsha-Kurdi. Automatic segmentation of building facades using terrestrial laser data. 2007. ISPRS Workshop on Laser Scanning.
[6] C. C. Chen and I. Stamos. Range image segmentation for modeling and object detection in urban scenes. 2007. 3DIM 2007.
[7] T. K. Dey, G. Li, and J. Sun. Normal estimation for point clouds: a comparison study for a Voronoi based method. 2005. Eurographics Symposium on Point-Based Graphics.
[8] J. R. Diebel, S. Thrun, and M. Brunig. A Bayesian method for probable surface reconstruction and decimation. 2006. ACM Transactions on Graphics (TOG).
[9] M. A. Fischler and R. C. Bolles. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM.
[10] P. F. U. Gotardo, O. R. P. Bellon, and L. Silva. Range image segmentation by surface extraction using an improved robust estimator. 2003. Proceedings of Computer Vision and Pattern Recognition.
[11] F. Goulette, F. Nashashibi, I. Abuhadrous, S. Ammoun, and C. Laurgeau. An integrated on-board laser range sensing system for on-the-way city and road modelling. 2007. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences.
[12] A. Hoover, G. Jean-Baptiste, et al. An experimental comparison of range image segmentation algorithms. 1996. IEEE Transactions on Pattern Analysis and Machine Intelligence.
[13] H. Hoppe, T. DeRose, T. Duchamp, J. McDonald, and W. Stuetzle. Surface reconstruction from unorganized points. 1992. International Conference on Computer Graphics and Interactive Techniques.
[14] P. Hough. Method and means for recognizing complex patterns. 1962. US Patent.
[15] M. Isenburg, P. Lindstrom, S. Gumhold, and J. Snoeyink. Large mesh simplification using processing sequences. 2003.

Color image segmentation using histogram thresholding – Fuzzy C-means hybrid approach

Khang Siang Tan, Nor Ashidi Mat Isa*
Imaging and Intelligent Systems Research Team (ISRT), School of Electrical and Electronic Engineering, Engineering Campus, Universiti Sains Malaysia, 14300 Nibong Tebal, Penang, Malaysia
* Corresponding author. Tel.: +60 4 5996093; fax: +60 4 5941023. E-mail addresses: khangsiang85@ (K. Siang Tan), ashidi@m.my (N.A. Mat Isa).
Pattern Recognition 44 (2011) 1–15. doi:10.1016/j.patcog.2010.07.013

Article history: Received 29 March 2010; received in revised form 2 July 2010; accepted 4 July 2010.
Keywords: Color image segmentation; Histogram thresholding; Fuzzy C-means

Abstract: This paper presents a novel histogram thresholding – fuzzy C-means hybrid (HTFCM) approach that can find different applications in pattern recognition as well as in computer vision, particularly in color image segmentation. The proposed approach applies the histogram thresholding technique to obtain all possible uniform regions in the color image. Then, the Fuzzy C-means (FCM) algorithm is utilized to improve the compactness of the clusters forming these uniform regions. Experimental results have demonstrated that the low-complexity HTFCM approach obtains better cluster quality and segmentation results than other segmentation approaches that employ the ant colony algorithm.

1. Introduction

Color is one of the most significant low-level features that can be used to extract homogeneous regions that are most of the time related to objects or parts of objects [1–3]. In a 24-bit true color image, the number of unique colors usually exceeds half of the image size and can reach up to 16 million. Most of these colors are perceptually close and cannot be differentiated by the human eye, which can only internally identify about 30 colors in cognitive space [4,5]. Unique colors that are perceptually close can be combined to form homogeneous regions representing the objects in the image; the image thus becomes more meaningful and easier to analyze.

In image processing and computer vision, color image segmentation is a central task for image analysis and pattern recognition [6–23]. It is a process of partitioning an image into multiple regions that are homogeneous with respect to one or more characteristics. Although many segmentation techniques have appeared in the scientific literature, they can be divided into image-domain based, physics based and feature-space based techniques [24]. These segmentation techniques have been used extensively, but each has its own advantages and limitations. Image-domain based techniques utilize both color features and the spatial relationship among colors in their homogeneity evaluation to perform segmentation. These techniques produce regions with reasonable compactness but face difficulty in the selection of suitable seed regions. Physics based techniques utilize physical models of the reflection properties of materials to carry out color segmentation, but they are more application specific, as they model the causes that may produce color variation. Feature-based techniques utilize color features as the key and only criterion to segment an image. The segmented regions are usually fragmented, since the spatial relationship among colors is ignored [25]. But this limitation can be solved by improving the compactness of the regions.

In computer vision and pattern recognition, the Fuzzy C-means (FCM) algorithm has been used extensively to improve the compactness of the regions, due to its clustering validity and simplicity of implementation. It is a pixel clustering process of dividing pixels into clusters so that pixels in the same
cluster are as similar as possible and those in different clusters are as dissimilar as possible. This accords with the segmentation application, since different regions should be visually as different as possible. However, its implementation often encounters two unavoidable initialization difficulties: deciding the cluster number and obtaining initial cluster centroids that are properly distributed. These initialization difficulties have an impact on segmentation quality. While the difficulty of deciding the cluster number affects the segmented area and the region tolerance for feature variance, the difficulty of obtaining the initial cluster centroids affects the cluster compactness and classification accuracy.

Recently, some feature-based segmentation techniques have employed the concept of the ant colony algorithm (ACA) to carry out image segmentation. Due to the intelligent searching ability of the ACA, these techniques can achieve further optimization of segmentation results, but they suffer from low efficiency due to their computational complexity. Apart from obtaining a good segmentation result, the improved ant system algorithm (AS) proposed in [26] can also provide a solution to overcome the FCM's sensitiveness to the initialization condition of cluster centroids and centroid number. However, the AS technique does not seek a very compact clustering result in the feature space. To improve the performance of the AS, the ant colony – Fuzzy C-means hybrid algorithm (AFHA) was introduced [26]. Essentially, the AFHA incorporates the FCM algorithm into the AS in order to improve the compactness of the clustering results in the feature space. However, its efficiency is still low due to the computational complexity of the AS. To increase the algorithmic efficiency of the AFHA, the improved ant colony – Fuzzy C-means hybrid algorithm (IAFHA) was introduced [26]. The IAFHA adds an ant sub-sampling based method to modify the AFHA in order to reduce its computational complexity, and thus has higher efficiency. Although the IAFHA's efficiency has been increased, it still suffers from high computational complexity.

In this paper, we propose a novel segmentation approach called the Histogram Thresholding – Fuzzy C-means Hybrid (HTFCM) algorithm. The HTFCM consists of two modules, namely the histogram thresholding module and the FCM module. The histogram thresholding module is used for obtaining the FCM's initialization condition of cluster centroids and centroid number. The implementation of this module does not require high computational complexity compared to the techniques using the ant system. This marks the simplicity of the proposed algorithm.

The rest of the paper is organized as follows: Section 2 presents the histogram thresholding module and the FCM module in detail. Section 3 provides an illustration of the implementation procedure. Section 4 analyzes the results obtained for the proposed approach while comparing it to other techniques.
Finally, Section 5 concludes the work of this paper.

2. Proposed approach

In this paper, we attempt to obtain a solution to overcome the FCM's sensitiveness to the initialization conditions of cluster centroid and centroid number. The histogram thresholding module is introduced to initialize the FCM in view of these drawbacks, by taking the global information of the image into consideration. In this module, the global information of the image is used to obtain all possible uniform regions in the image, and thus the cluster centroids and centroid number can also be obtained. The FCM module is then used to improve the compactness of the clusters. In this context, compactness refers to obtaining the optimized label for each cluster centroid from the members of each cluster.

2.1. Histogram thresholding

The global histogram of a digital image is a popular tool for real-time image processing due to its simplicity of implementation. It serves as an important basis for statistical approaches in image processing by producing a global description of the image's information [27]. For color images with RGB representation, the color of a pixel is a mixture of the three primitive colors red, green and blue. Each image pixel can be viewed as a three-dimensional vector containing three components representing the three colors of an image pixel. Hence, the global histograms representing the three primitive components, respectively, can produce global information about the entire image.

The basic analysis approach for a global histogram is that a uniform region tends to form a dominating peak in the corresponding histogram. For a color image, a uniform region can be identified by the dominating peaks in the global histograms. Thus, histogram thresholding is a popular segmentation technique that looks for the peaks and valleys in a histogram [28,29]. A typical segmentation approach based on histogram analysis can only be carried out if the dominating peaks in the histogram can be recognized correctly. Several widely used peak-finding algorithms examine the peak's sharpness or area to identify the dominating peaks in the histogram. Although these peak-finding algorithms are useful in histogram analysis, they sometimes do not work well, especially if the image contains noise or radical variation [30,31].

In this paper, we propose a novel histogram thresholding technique containing three phases: the peak finding technique, the region initialization and the merging process. The histogram thresholding technique applies a peak finding technique to identify the dominating peaks in the global histograms. The peak finding algorithm can locate all the dominating peaks in the global histograms correctly and has been proven to be efficient by testing on numerous color images. As a result, the uniform regions in the image can be obtained. Since any uniform region contains 3 components representing the 3 colors of the RGB color image, each component of the uniform region is assigned one value corresponding to the intensity level of one dominating peak in the respective global histogram. Although the uniform regions are successfully obtained, some uniform regions are still perceptually close. Thus, a merging process is applied to merge these regions together.

2.1.1. Peak finding

Let us suppose we are dealing with a color image with RGB representation, in which each primitive color component's intensity is stored in an n-bit integer, giving L = 2^n possible intensity levels in the interval [0, L−1]. Let r(i), g(i) and b(i) be the red component, green component and blue component histograms, respectively. Let x_i, y_i and z_i be the number of pixels associated with the i-th intensity level in r(i), g(i) and b(i), respectively. The peak finding algorithm can be described as follows:

i. Represent the red component, green component and blue component histograms by the following equations:
r(i) = x_i, (1)
g(i) = y_i, (2)
b(i) = z_i, (3)
where 0 ≤ i ≤ L−1.

ii. From the original histogram, construct a new histogram curve with the following equation:
T_s(i) = (s(i−2) + s(i−1) + s(i) + s(i+1) + s(i+2)) / 5, (4)
where s can be substituted by r, g and b, and 2 ≤ i ≤ L−3. T_r(i), T_g(i) and T_b(i) are the new histogram curves constructed from the red component, green component and blue component histograms, respectively. (Note: Based on analysis done using numerous images, the half window size can be set from 2 to 5 in this study. A half window size smaller than 2 could not produce a smooth histogram curve, while a large half window size could produce a different general shape of the smooth histogram curve compared to the original histogram.)

iii. Identify all peaks using the following equation:
P_s = {(i, T_s(i)) | T_s(i) > T_s(i−1) and T_s(i) > T_s(i+1)}, (5)
where s can be substituted by r, g and b, and 1 ≤ i ≤ L−2. P_r, P_g and P_b are the sets of peaks identified from T_r(i), T_g(i) and T_b(i), respectively.

iv. Identify all valleys using the following equation:
V_s = {(i, T_s(i)) | T_s(i) < T_s(i−1) and T_s(i) < T_s(i+1)}, (6)
where s can be substituted by r, g and b, and 1 ≤ i ≤ L−2. V_r, V_g and V_b are the sets of valleys identified from T_r(i), T_g(i) and T_b(i), respectively.

v. Remove all peaks and valleys based on the following fuzzy rule base:
IF (i is peak) AND (T_s(i+1) > T_s(i−1)) THEN T_s(i) = T_s(i+1)
IF (i is peak) AND (T_s(i+1) < T_s(i−1)) THEN T_s(i) = T_s(i−1)
IF (i is valley) AND (T_s(i+1) > T_s(i−1)) THEN T_s(i) = T_s(i−1)
IF (i is valley) AND (T_s(i+1) < T_s(i−1)) THEN T_s(i) = T_s(i+1), (7)
where s can be substituted by r, g and b, and 1 ≤ i ≤ L−2.

vi. Identify the dominating peaks in T_r(i), T_g(i) and T_b(i) by examining the turning points having a positive-to-negative gradient change and a number of pixels greater than a predefined threshold H. (Note: Based on analysis done using numerous images, the typical value for H is set to 20.)
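To make the peak finding procedure concrete, the following minimal Python/NumPy sketch implements steps i–vi for a single color component. It is an illustrative reading of the algorithm rather than the authors' code: the function names are ours, rule (7) is applied in a single pass, and the plateau handling in step vi (strict rise, non-strict fall) is an assumption.

```python
import numpy as np

def smooth_curve(hist, half_window=2):
    """Step ii: histogram curve via a moving average (Eq. (4) uses half window 2)."""
    L = len(hist)
    T = hist.astype(float).copy()
    for i in range(half_window, L - half_window):
        T[i] = hist[i - half_window:i + half_window + 1].mean()
    return T

def flatten_small_peaks_valleys(T):
    """Step v: one pass of the fuzzy rule base (Eq. (7)) over all interior levels."""
    out = T.copy()
    for i in range(1, len(T) - 1):
        if T[i] > T[i - 1] and T[i] > T[i + 1]:      # peak, per Eq. (5)
            out[i] = T[i + 1] if T[i + 1] > T[i - 1] else T[i - 1]
        elif T[i] < T[i - 1] and T[i] < T[i + 1]:    # valley, per Eq. (6)
            out[i] = T[i - 1] if T[i + 1] > T[i - 1] else T[i + 1]
    return out

def dominating_peaks(channel, levels=256, H=20):
    """Steps i-vi for one color component: returns the intensity levels of
    dominating peaks (positive-to-negative gradient change, height above H)."""
    hist = np.bincount(channel.ravel(), minlength=levels)   # step i
    T = flatten_small_peaks_valleys(smooth_curve(hist))     # steps ii-v
    return [i for i in range(1, levels - 1)                 # step vi
            if T[i] > T[i - 1] and T[i] >= T[i + 1] and T[i] > H]
```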
2.1.2. Region initialization

After the peak finding algorithm, three sets of dominating peak intensity levels, in the red, green and blue component histograms respectively, are obtained. Let x, y and z be the number of dominating peaks identified in the red component, green component and blue component histograms, respectively. Then P_r = (i_1, i_2, …, i_x), P_g = (i_1, i_2, …, i_y) and P_b = (i_1, i_2, …, i_z) are the sets of dominating peak intensity levels in the red component, green component and blue component histograms, respectively. A uniform region labeled by a cluster centroid tends to form one dominating peak in each of the red, green and blue component histograms. In this paper, the region initialization algorithm can be described as follows (a sketch follows this list):

i. Form all the possible cluster centroids. (Note: Each component of a cluster centroid can only take the intensity level of one dominating peak in the red, green and blue histograms, respectively. Thus, a number of (x × y × z) possible cluster centroids are formed.)

ii. Assign every image pixel to the nearest cluster centroid and form the pixel set of each cluster by assigning the pixels to their corresponding cluster centroid.

iii. Eliminate all cluster centroids for which the number of pixels assigned to them is less than a threshold V. (Note: To reduce the initial cluster centroid number, the value for V is set to 0.006N–0.008N, where N is the total number of pixels in the image.)

iv. Reassign every image pixel to the nearest cluster centroid. (Note: Let c_l be the l-th element in the cluster centroid set and X_l be the pixel set assigned to c_l.)

v. Update each cluster centroid c_l by the mode of its pixel set X_l, respectively.
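A minimal sketch of the region initialization steps, assuming pixels is an (N, 3) array of RGB values and peaks_r, peaks_g, peaks_b come from the peak finding stage. The brute-force distance matrix and the v_frac default at the lower end of the 0.006N–0.008N range are illustrative choices, not the paper's implementation.

```python
import itertools
import numpy as np

def nearest_labels(pixels, centroids):
    """Assign every pixel to its nearest centroid (steps ii and iv)."""
    dists = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

def initialize_regions(pixels, peaks_r, peaks_g, peaks_b, v_frac=0.006):
    """Steps i-v of region initialization on an (N, 3) array of RGB pixels."""
    pixels = pixels.astype(float)
    # step i: one dominating peak per channel -> (x*y*z) candidate centroids
    centroids = np.array(list(itertools.product(peaks_r, peaks_g, peaks_b)),
                         dtype=float)
    labels = nearest_labels(pixels, centroids)                 # step ii
    counts = np.bincount(labels, minlength=len(centroids))
    centroids = centroids[counts >= v_frac * len(pixels)]      # step iii
    labels = nearest_labels(pixels, centroids)                 # step iv
    for l in range(len(centroids)):                            # step v: mode
        members = pixels[labels == l].astype(int)
        if len(members):
            colors, freq = np.unique(members, axis=0, return_counts=True)
            centroids[l] = colors[freq.argmax()]
    return centroids, labels
```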
2.1.3. Merging

After the region initialization algorithm, the uniform regions labeled by their respective cluster centroids are obtained. Some of these regions are perceptually close and can be merged together in order to produce a more concise set of cluster centroids representing the uniform regions. Thus, a merging algorithm is needed to merge these regions based on their color similarity. One of the simplest measures of color similarity is the Euclidean distance, which is used to measure the color difference between two uniform regions. Let C = (c_1, c_2, …, c_M) be the set of cluster centroids and M be the number of cluster centroids. In this paper, the merging algorithm can be described as follows (a sketch follows this list):

i. Set the maximum threshold of Euclidean distance, dc, to a positive integer value.

ii. Calculate the distance D for any two of these M cluster centroids with the following equation:
D(c_j, c_k) = \sqrt{(R_j − R_k)^2 + (G_j − G_k)^2 + (B_j − B_k)^2}, for all j ≠ k, (8)
where 1 ≤ j ≤ M and 1 ≤ k ≤ M. R_j, G_j and B_j are the values of the red, green and blue components of the j-th cluster centroid, respectively, and R_k, G_k and B_k are the values of the red, green and blue components of the k-th cluster centroid, respectively.

iii. Find the minimum distance between the two nearest cluster centroids. Merge these nearest cluster centroids to form a new cluster centroid if the minimum distance between them is less than dc. Otherwise, stop the merging process.

iv. Update the pixel set assigned to the new cluster centroid by merging the pixel sets that were assigned to these nearest cluster centroids.

v. Refresh the new cluster centroid by the mode of its pixel set.

vi. Reduce the number of cluster centroids M to (M−1) and repeat steps ii to vi until no minimum distance between two nearest cluster centroids is less than dc. (Note: Based on analysis done using numerous images, the number of cluster centroids remains constant for most images when varying dc from 24 to 32.)
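The merging loop below is a straightforward sketch of steps i–vi using Eq. (8); the O(M²) nearest-pair search is an illustrative choice that is adequate for the small centroid counts this stage works with, and the function names are ours.

```python
import numpy as np

def merge_close_centroids(centroids, labels, pixels, dc=28):
    """Steps i-vi: repeatedly fuse the two nearest centroids while their
    Euclidean distance D (Eq. (8)) stays below the threshold dc."""
    cents = [c.astype(float) for c in centroids]
    members = [np.flatnonzero(labels == l) for l in range(len(cents))]
    while len(cents) > 1:
        # steps ii-iii: find the closest pair of centroids
        best_d, pair = None, None
        for j in range(len(cents)):
            for k in range(j + 1, len(cents)):
                d = np.linalg.norm(cents[j] - cents[k])
                if best_d is None or d < best_d:
                    best_d, pair = d, (j, k)
        if best_d >= dc:
            break                                    # step iii: stop merging
        j, k = pair
        merged = np.concatenate([members[j], members[k]])       # step iv
        colors, freq = np.unique(pixels[merged].astype(int), axis=0,
                                 return_counts=True)
        cents[j] = colors[freq.argmax()].astype(float)          # step v: mode
        members[j] = merged
        del cents[k], members[k]                     # step vi: M -> M-1
    return np.array(cents), members
```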
2.2. Fuzzy C-means

The FCM algorithm is essentially a hill-climbing technique, developed by Dunn in 1973 [32] and improved by Bezdek in 1981 [33]. This algorithm has been used as one of the popular clustering techniques for image segmentation in computer vision and pattern recognition. In the FCM, each image pixel has a certain membership degree associated with each cluster centroid. These membership degrees have values in the range [0, 1] and indicate the strength of the association between that image pixel and a particular cluster centroid. The FCM algorithm attempts to partition the image pixels into a collection of M fuzzy cluster centroids with respect to some given criterion [34]. Let N be the total number of pixels in the image and m be the exponential weight of the membership degree. The objective function W_m of the FCM is defined as

W_m(U, C) = \sum_{i=1}^{N} \sum_{j=1}^{M} u_{ji}^m d_{ji}^2, (9)

where u_{ji} is the membership degree of the i-th pixel to the j-th cluster centroid and d_{ji} is the distance between the i-th pixel and the j-th cluster centroid. Let U_i = (u_{1i}, u_{2i}, …, u_{Mi})^T be the set of membership degrees of the i-th pixel associated with each cluster centroid, x_i the i-th pixel in the image and c_j the j-th cluster centroid. Then U = (U_1, U_2, …, U_N) is the membership degree matrix and C = (c_1, c_2, …, c_M) is the set of cluster centroids.

The degree of compactness and uniformity of the cluster centroids greatly depends on the objective function of the FCM. In general, a smaller objective function of the FCM indicates a more compact and uniform cluster centroid set. However, there is no closed-form solution that minimizes the objective function. To achieve optimization of the objective function, an iteration process must be carried out by the FCM algorithm. In this paper, the FCM is employed to improve the compactness of the clusters produced by the histogram thresholding module. The FCM can be described as follows (a sketch follows this list):

i. Set the iteration terminating threshold ε to a small positive number in the range [0, 1] and the iteration number q to 0.

ii. Calculate U^(q) according to C^(q) with the following equation:
u_{ji} = 1 / \sum_{k=1}^{M} (d_{ji} / d_{ki})^{2/(m−1)}, (10)
where 1 ≤ j ≤ M and 1 ≤ i ≤ N. Notice that if d_{ji} = 0, then u_{ji} = 1 and the other membership degrees of this pixel are set to 0.

iii. Calculate C^(q+1) according to U^(q) with the following equation:
c_j = \sum_{i=1}^{N} u_{ji}^m x_i / \sum_{i=1}^{N} u_{ji}^m, (11)
where 1 ≤ j ≤ M.

iv. Update U^(q+1) according to C^(q+1) with Eq. (10).

v. Compare U^(q+1) with U^(q). If ‖U^(q+1) − U^(q)‖ ≤ ε, stop the iteration. Otherwise, set q = q + 1 and repeat steps ii to iv until ‖U^(q+1) − U^(q)‖ ≤ ε.
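A vectorized sketch of the FCM stage implementing Eqs. (10) and (11); clamping d_ji away from zero replaces the paper's special case for d_ji = 0 and is our simplification, and the function name is ours.

```python
import numpy as np

def fcm_refine(pixels, centroids, m=2.0, eps=1e-3, max_iter=100):
    """FCM steps i-v: alternate the membership update (Eq. (10)) and the
    centroid update (Eq. (11)) until ||U(q+1) - U(q)|| <= eps."""
    pixels = pixels.astype(float)
    C = centroids.astype(float)
    U_prev = None
    for _ in range(max_iter):
        # distances d_ji between every pixel i and centroid j, shape (N, M)
        d = np.linalg.norm(pixels[:, None, :] - C[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)    # clamp instead of the d_ji = 0 special case
        # Eq. (10): u_ji = 1 / sum_k (d_ji / d_ki)^(2/(m-1)); U has shape (N, M)
        U = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))).sum(axis=2)
        # Eq. (11): c_j = sum_i u_ji^m x_i / sum_i u_ji^m
        W = U ** m
        C = (W.T @ pixels) / W.sum(axis=0)[:, None]
        if U_prev is not None and np.linalg.norm(U - U_prev) <= eps:
            break                   # step v: memberships have stabilized
        U_prev = U
    return C, U
```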
3. Illustration of the implementation procedure

In this section, we apply the HTFCM approach to segment the 256×256 image House depicted in Fig. 1(a). Figs. 2(a), 3(a) and 4(a) show the red, green and blue component histograms of the image, respectively. In Fig. 2(a), the dominating peaks that could represent the red component of dominant regions are difficult to recognize, as a large number of small peaks exist in the histogram. A similar case is shown in Figs. 3(a) and 4(a). These small peaks must be removed so that the dominating peaks can be recognized effectively. Thus, the dominating peaks are recognized according to the proposed peak finding technique. Figs. 2(b), 3(b) and 4(b) show the resultant histogram curves after applying the proposed peak finding technique. Since the general shape of each resultant histogram curve has a great similarity with the respective original histogram, a dominating peak in the histogram curve can be considered a dominating peak in the respective histogram. Furthermore, each of the resultant histogram curves has a better degree of smoothness than the respective original histogram. As a result, the dominating peaks in the histogram can be recognized easily by examining the histogram curve.

Then, the initial clusters are formed by assigning every pixel to its cluster centroid. For example, the left uppermost pixel in the image, having the RGB value (159, 197, 222), is assigned to the centroid having the RGB value (159, 199, 224), since the Euclidean distance between the pixel and this centroid is the shortest compared to the other centroids. The RGB value of the centroid is obtained from one of the dominating peaks in Figs. 2(b), 3(b) and 4(b), respectively. The result of the initial clusters is illustrated in Fig. 1(b). Next, the merging process is carried out to merge all the clusters that are perceptually close. This merging process is able to reduce the cluster number and keep a reasonable cluster number for all kinds of input images. The result of the merging process is illustrated in Fig. 1(c). Finally, the FCM algorithm is applied to perform color segmentation. The final segmentation result is illustrated in Fig. 1(d).

Fig. 1. Image House and its segmentation results: (a) original image House, (b) image after cluster centroid initialization, (c) image after merging and (d) final segmentation result.
Fig. 2. (a) Red component histogram of image House, (b) resultant histogram curve after the peak finding technique (Note: the intensity levels of the dominating peaks are labeled by x).
Fig. 3. (a) Green component histogram of image House and (b) resultant histogram curve after the peak finding technique (Note: the intensity levels of the dominating peaks are labeled by x).
Fig. 4. (a) Blue component histogram of image House and (b) resultant histogram curve after the peak finding technique (Note: the intensity levels of the dominating peaks are labeled by x).
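For orientation, a hypothetical end-to-end driver chaining the sketches above in the order this section walks through (peak finding, region initialization, merging, FCM). The image loader and file name are placeholders, and all function names are the illustrative ones introduced earlier, not an official API.

```python
import numpy as np
from imageio.v3 import imread   # placeholder loader; any RGB reader works

img = imread("house.png")                       # e.g. the 256x256 House image
pixels = img.reshape(-1, 3).astype(float)

# per-channel dominating peaks -> initial centroids -> merging -> FCM
peaks = [dominating_peaks(img[..., c]) for c in range(3)]
centroids, labels = initialize_regions(pixels, *peaks)
centroids, _ = merge_close_centroids(centroids, labels, pixels, dc=28)
centroids, U = fcm_refine(pixels, centroids)

# hard assignment: color every pixel with its most likely centroid
segmented = centroids[U.argmax(axis=1)].reshape(img.shape).astype(np.uint8)
```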
4. Experiment results

The HTFCM approach has been tested on more than 200 images taken from public image segmentation databases. In this paper, 30 images are selected to demonstrate the capability of the proposed HTFCM approach. Eight of these images, namely House (256×256), Football (256×256), Golden Gate (256×256), Smarties (256×256), Capsicum (256×256), Gantry Crane (400×264), Beach (321×481) and Girl (321×481), are evaluated in detail to highlight the advantages of the proposed HTFCM approach, while another 22 images are presented as supplementary images to further support the findings. The AS, the AFHA and the IAFHA approaches, which have been proved able to provide a good solution to overcome the FCM's sensitiveness to the initialization conditions of cluster centroids and centroid number, are used as comparison in order to see whether the HTFCM approach results in generally better performance and cluster quality than these approaches. The performance of the HTFCM approach is evaluated by comparing the algorithmic efficiency and the segmentation results with the AS, the AFHA and the IAFHA approaches. In this study, we fix dc at 28, since this tends to produce reasonable results for the AS, the AFHA, the IAFHA [26] and the HTFCM approaches.

4.1. Evaluation on segmentation results

In this section, the segmentation results for the AS, the AFHA, the IAFHA and the HTFCM approaches are evaluated visually. Generally, as shown in Figs. 5–12, the proposed HTFCM approach produces better segmentation results than the AS, the AFHA and the IAFHA approaches. The segmented regions of the resultant images produced by the HTFCM approach are more homogeneous. For example, for the image House, the HTFCM approach gives a better segmentation result than the AS, the AFHA and the IAFHA approaches by producing a more homogeneous house roof and walls, as depicted in Fig. 5. In the image Football, the HTFCM approach also outperforms the AS, the AFHA and the IAFHA approaches by giving a more homogeneous background, as shown in Fig. 6. As for the image Golden Gate, although the AS, the AFHA and the IAFHA approaches produce a homogeneous sky region, an obvious classification error can be seen where these approaches mistakenly assign a considerable number of pixels of the leaves of the tree as part of the bridge. The proposed HTFCM approach successfully avoids this classification error and, furthermore, produces more homogeneous hill and sea regions, as shown in Fig. 7. In the image Capsicum, the proposed HTFCM approach also outperforms the other techniques by producing a more homogeneous red capsicum, as depicted in Fig. 8. For the image Smarties, the AS, the AFHA and the IAFHA approaches mistakenly assign a considerable number of background pixels as part of the green Smarties. The proposed HTFCM approach successfully avoids this classification error and, furthermore, produces a more homogeneous background, as shown in Fig. 9. As for the image Beach, the HTFCM approach outperforms the other approaches by giving a more homogeneous beach and sea, as depicted in Fig. 10. As for the image Gantry Crane, there is a classification error where part of the left-inclined truss has been assigned to the sky by the AS, the AFHA and the IAFHA approaches, as shown in Fig. 11. The proposed HTFCM approach successfully avoids this classification error. A similar result is obtained for the image Girl: the HTFCM approach outperforms the other approaches by classifying the blue and red parts of the shirt as a single cluster, while the AS,

Fig. 5. The image House: (a) original image; the rest are segmentation results of the test image by various algorithms: (b) AS, (c) AFHA, (d) IAFHA and (e) HTFCM.
Fig. 6. The image Football: (a) original image; the rest are segmentation results of the test image by various algorithms: (b) AS, (c) AFHA, (d) IAFHA and (e) HTFCM.
Fig. 7. The image Golden Gate: (a) original image; the rest are segmentation results of the test image by various algorithms: (b) AS, (c) AFHA, (d) IAFHA and (e) HTFCM.
Fig. 8. The image Capsicum: (a) original image; the rest are segmentation results of the test image by various algorithms: (b) AS, (c) AFHA, (d) IAFHA and (e) HTFCM.
Fig. 9. The image Smarties: (a) original image; the rest are segmentation results of the test image by various algorithms: (b) AS, (c) AFHA, (d) IAFHA and (e) HTFCM.
Fig. 10. The image Beach: (a) original image; the rest are segmentation results of the test image by various algorithms: (b) AS, (c) AFHA, (d) IAFHA and (e) HTFCM.

Remote sensing image thresholding segmentation based on the modified Otsu algorithm

HAN Qing-song¹, JIA Zhen-hong¹, YANG Jie², PANG Shao-ning³
1. College of Information Science and Engineering, Xinjiang University, Urumqi 830046, China; 2. Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, Shanghai 200240, China; 3. Knowledge Engineering and Discovery Research Institute, Auckland University of Technology, Auckland 1020, New Zealand

Abstract: The traditional Otsu algorithm is only suitable for images whose target and background are evenly distributed, so it has certain limitations when processing remote sensing images. Starting from an analysis of the principles of the traditional Otsu algorithm, and taking into account the characteristics of remote sensing images, namely many gray levels, a large amount of information and fuzzy boundaries, this paper proposes a modified Otsu algorithm that uses the image's variance information instead of its mean information to compute the optimal segmentation threshold, thereby realizing remote sensing image thresholding segmentation. Simulation results show that, compared with the traditional Otsu algorithm and some other modified Otsu algorithms, the proposed algorithm has obvious advantages.

Keywords: Otsu algorithm; variance information; mean information; remote sensing image thresholding segmentation

CLC number: TP391.41; Document code: A; Article ID: 0253-2743(2010)06-0033-02
Received: 2010-10-11. Funding: International Science and Technology Cooperation Project of the Ministry of Science and Technology (No. 2009DFA12870). About the author: HAN Qing-song (b. 1983), male, M.S. candidate; main research interest: digital image processing.
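For context, here is a minimal Python sketch of the classical Otsu criterion that the paper modifies. Classical Otsu picks the threshold maximizing the between-class variance computed from class probabilities and class means; the paper's variant replaces the mean information with variance information, and since that exact criterion is not reproduced in this excerpt, the code below shows only the classical baseline.

```python
import numpy as np

def otsu_threshold(gray, levels=256):
    """Classical Otsu: choose the threshold t maximizing the between-class
    variance sigma_B^2(t) = (mu_T*omega(t) - mu(t))^2 / (omega(t)*(1 - omega(t)))."""
    hist = np.bincount(gray.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()                      # gray-level probabilities
    omega = np.cumsum(p)                       # class-0 probability up to t
    mu = np.cumsum(p * np.arange(levels))      # cumulative first moment
    mu_T = mu[-1]                              # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_T * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b2[~np.isfinite(sigma_b2)] = 0.0     # guard empty classes
    return int(np.argmax(sigma_b2))
```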


Mixed Variational Model with L1 Fidelity for Removal of Salt-and-Pepper Noise

ZHANG Long, LIU Zhaoxia, LIU Hongchen
School of Sciences, Minzu University of China, Beijing 100081, China

Citation: ZHANG Long, LIU Zhaoxia, LIU Hongchen. Mixed variational with L1 fidelity model for removal of salt and pepper noise. Computer Engineering and Applications, 2019, 55(1): 210-216.

Abstract: Image denoising is an important branch of digital image processing. Its goal is to remove noise while preserving useful information such as image contrast, sharpness, and texture, and it is a prerequisite for image processing tasks such as segmentation, feature extraction, and object recognition. To suppress impulse noise effectively, and to address the shortcomings of the harmonic model and the TV-L1 model, a mixed variational model with an L1 fidelity term is proposed for impulse-noise removal and implemented numerically with an augmented Lagrangian algorithm. The denoising performance is assessed with the peak signal-to-noise ratio (PSNR) and the root-mean-square error (RMSE). Experimental results show that the model achieves a higher PSNR than several existing models, effectively reduces the RMSE, requires less CPU time, and yields clearly improved denoising with more pleasing visual results; the improvement in image quality is confirmed both subjectively and objectively.

Keywords: image denoising; variational method; partial differential equation; mixed denoising model; numerical simulation
Document code: A; CLC number: TP391; doi: 10.3778/j.issn.1002-8331.1709-0295
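The abstract evaluates denoising with PSNR and RMSE but does not spell out the formulas. A minimal numpy sketch of the two metrics as commonly defined, assuming 8-bit images (peak value 255); the function names are illustrative.

```python
import numpy as np

def rmse(clean, denoised):
    """Root-mean-square error between two images of equal shape."""
    diff = clean.astype(float) - denoised.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr(clean, denoised, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means better restoration."""
    e = rmse(clean, denoised)
    return float("inf") if e == 0 else 20.0 * np.log10(peak / e)
```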

DWT-Domain Digital Image Watermarking Based on HD and SVD

GAN Zhichao, LIU Dan
Source: Modern Information Technology, 2022, No. 1

Abstract: This paper proposes an image watermarking method based on the discrete wavelet transform (DWT), Hessenberg decomposition (HD), and singular value decomposition (SVD). In the embedding process, the original carrier image is decomposed by multi-level DWT, and the resulting subband coefficients are used as the input of the HD. The SVD is applied while creating the watermark, and the watermark is embedded into the host image through a scaling factor. The scaling factor is found with the fruit fly optimization algorithm using a given objective evaluation function. The proposed method is compared with other methods under various spoofing attacks, and the experimental results show that it achieves good robustness and invisibility of the watermark.

Keywords: image watermarking; discrete wavelet transform; Hessenberg decomposition; singular value decomposition
CLC number: TP391.4; Document code: A; Article ID: 2096-4706(2022)01-0040-04

0 Introduction
Robustness and invisibility are the two main criteria for evaluating the effectiveness of a watermarking technique.
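As an illustration of the embedding pipeline described above, here is a rough sketch under several stated assumptions: a square float host image, a watermark sized to match the LL subband, a single DWT level (the paper uses multi-level decomposition), and a fixed scaling factor alpha in place of the fruit-fly-optimized one. The function name and the additive rule on singular values are illustrative choices, not the paper's exact scheme.

```python
import numpy as np
import pywt
from scipy.linalg import hessenberg

def embed_watermark(host, wm, alpha=0.05):
    """Sketch: 1-level DWT -> Hessenberg decomposition of LL -> SVD embedding.
    `host` is a square float array; `wm` must have the LL-subband shape."""
    LL, (LH, HL, HH) = pywt.dwt2(host, "haar")   # subband coefficients
    H, Q = hessenberg(LL, calc_q=True)           # LL = Q @ H @ Q.T
    U, s, Vt = np.linalg.svd(H)
    sw = np.linalg.svd(wm, compute_uv=False)
    s_marked = s + alpha * sw                    # scale watermark singular values
    H_marked = U @ np.diag(s_marked) @ Vt
    LL_marked = Q @ H_marked @ Q.T
    return pywt.idwt2((LL_marked, (LH, HL, HH)), "haar")
```

Extraction would invert these steps with the same alpha; in the paper, alpha is instead tuned per image by the fruit fly optimization algorithm against an objective evaluation function.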

LI Shutao (Doctoral Supervisor) - Graduate Admissions Office, Hunan University

Funded Research Projects:
Open Projects Program of the National Laboratory of Pattern Recognition (2009.1-2011.12, Principal Investigator)
Key Project of the Chinese Ministry of Education (2009.1-2012.12, Principal Investigator)
National Natural Science Foundation of China project (2005.1-2007.12, Principal Investigator)
Opening Project of the National Laboratory on Machine Perception (Visual and Auditory Information Processing), Peking University (2005.1-2005.12, Principal Investigator)
Hunan University Yuying Program (2004.1-2006.12, Principal Investigator)
Hunan Provincial Excellent Doctoral Dissertation Foundation (2003.1-2004.12, Principal Investigator)
Hunan Provincial Natural Science Foundation (2003.1-2004.12, Principal Investigator)
Project of the European Union Fifth Framework Information Society Technology programme (2002.11-2003.11, Co-Investigator)
Key Project of Science and Technology, Hunan University Research Fund (2002.1-2003.12, Principal Investigator)
Hong Kong RGC project (2001.5-2001.10, Co-Investigator)

A Brief Analysis of Innovation Team Building in Research Institutes

A research institute innovation team is an innovation-oriented research group, formed within or across disciplines, that takes scientific and technological research and development as its content, is composed of researchers with complementary professional knowledge and skills, divides the work while cooperating, and possesses good cohesion and interactivity. Such innovation teams are a new way of organizing research personnel adopted to keep pace with social development, and an important route for research institutes to cultivate research talent, raise their research level, and strengthen their innovation capability [1].

Second, the external support mechanisms for innovation teams should be improved. Science and technology management departments need to change the traditional management model for projects, funding, and results and establish corresponding systems for personnel mobility: externally, actively establish ties with other universities and research institutes to create conditions for cross-department and cross-organization exchange of personnel and technology; internally, actively strengthen the discovery and cultivation of the institute's own talent and organize genuinely cross-disciplinary, cross-department innovation teams.

3.2 Reforming the evaluation and assessment system for institute innovation teams

(1) Apply classified evaluation, with different criteria for different kinds of innovation teams. For teams oriented toward basic research, evaluation should rest mainly on academic innovation achievements, with potential economic value as a secondary criterion; for teams oriented toward product research, academic achievements and economic benefit should be weighed equally; for teams oriented toward industrialization, economic benefit should come first, with academic achievements secondary [3].

(2) Emphasize assessing the quality of the team's research output, chiefly whether it achieves the "four highs": projects approved at a high level, papers published in high-grade venues, awards won at a high level, and high benefits generated (both economic and social).

Fig. 3. Pedestrian scene at a garage entrance.

Table 3. Average pixel offset for the pedestrian scene at the garage entrance.

Registration algorithm    Evaluation result (err)
NMI                       3.22
GMM                       0.91
FRGMM                     0.34

4 Conclusion
This paper proposes an improved image registration algorithm based on the Gaussian mixture model, in which the relationship between neighboring pixels is described by a Markov random field, yielding a registration algorithm based on the Markov random field and the Gaussian mixture model. Simulation experiments show that, compared with the GMM-based registration algorithm and the gray-level mutual-information-based registration algorithm, the proposed algorithm achieves better registration results.

A Watershed-Based Bayesian Image Segmentation Method

If no pixel in the 8-neighborhood of the pixel m_{i,j}, located at row i and column j of the image, has a smaller gray value, then m_{i,j} is regarded as a local minimum. These local minima represent the basin regions of the image, and each such region is labeled with a number greater than zero. Every local-minimum pixel of the image B is merged toward the neighboring pixel with the smaller gray value, finally forming a basin region m_l. Let WS denote the transform realizing the above process and W the watershed lines:

W = WS(G + V)   (5)

The result W is the labeled image after watershed segmentation, where W_{i,j} corresponds to position (i, j) of the image. After all watershed regions have been labeled, the resulting edge map is

V_W(i, j) = { 1, if W(i, j) > 0;  0, otherwise }   (6)

A main advantage of using the watershed algorithm is that the segmentation edges are continuous and one pixel wide, because the edges are defined on the boundaries between connected regions, namely the watershed lines. The main drawbacks of watershed-based segmentation are severe over-segmentation and a huge computational cost, so the segmentation result generally requires further processing before the segmentation task is complete. Many methods have been proposed to address these problems, such as marker-based methods and multiresolution pyramid methods [9].

3.2 Bayesian segmentation of the watershed transform result

Let S be an open subset of R^2 and let the gray-level image f be defined on S; f: S -> R^+ can be regarded as the observed data function. Even in the continuous case, we use "position" or "pixel" to denote a point of the image. Each point x in S is assigned a corresponding gray value f(x). The result of the watershed transform can be viewed as a collection of T objects {g_t}, t = 1, ..., T (see Fig. 1). Assume the g_t are nonempty, pairwise disjoint subsets of the image domain, and that the watershed lines β(∂g) produced by the transform are their boundaries (Fig. 1). The final segmentation of the image domain S consists of the object collection {g_t} and the background ḡ.

We assume the following model for the gray-level image f to be segmented: f = f_0 + ε, where ε is zero-mean white Gaussian noise, ε(x) ~ N(0, σ²) for x in S, and f_0 is the model of the true image. The variance σ² is assumed known and constant over the whole image. Therefore, given {g_t}, the likelihood of f is

p(f | g_1, ..., g_T, ḡ) ∝ exp( -(1/(2σ²)) ( Σ_{t=1}^{T} ∫_{g_t} (f(x) - f̄_t)² dx + ∫_{ḡ} (f(x) - f̄_0)² dx ) )   (7)

where f̄_t and f̄_0 denote the mean pixel values of f over the regions g_t and ḡ, respectively.
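A minimal sketch of the watershed labeling described above, using scikit-image's watershed with one marker per regional minimum of the gradient magnitude. The function name and the Sobel gradient are illustrative choices; as the text notes, this unmarked form over-segments and usually needs post-processing.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.segmentation import watershed

def watershed_labels(gray):
    """Label basin regions: one positive label per local minimum of the gradient."""
    g = sobel(gray.astype(float))                    # gradient magnitude image
    minima = (g == ndi.minimum_filter(g, size=3))    # regional minima (basins)
    markers, _ = ndi.label(minima)                   # labels > 0, as in the text
    return watershed(g, markers)                     # flood basins from the minima
```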

Retinal Vessel Segmentation Based on Hessian Enhancement and Morphological Scale Space

YU Hui, WANG Xiaopeng
School of Electronic and Information Engineering, Lanzhou Jiaotong University, Lanzhou 730070, Gansu, China

Abstract: Analysis of the course, curvature, and bifurcation of the retinal vessels in the fundus has become an important means of diagnosing systemic vascular diseases in medicine. Because most collected fundus images suffer from uneven illumination, traditional vessel segmentation methods have difficulty detecting the small vessels. A segmentation method based on improved Hessian-matrix enhancement and a morphological scale space is therefore proposed. First, a multi-scale Hessian enhancement filter is constructed with Gaussian functions, and a novel vessel-similarity function is used to enhance the contrast of the vascular network while smoothing the image to reduce noise. Then the vessels are extracted from the background using an improved top-hat transformation scale space, and morphological reconstruction is introduced to further highlight vessel pixels and to eliminate pseudo-edges and isolated-point noise. Finally, a two-stage thresholding method realizes the final vessel segmentation. Simulation results show that, while ensuring accurate segmentation of the large vessels, the improved method also achieves good segmentation of the small vessels.

Keywords: retinal vessels; Hessian enhancement; scale space; morphological segmentation
Journal: Computer Applications and Software, 2016, 33(8): 200-205

In recent years, driven by the clinical need to diagnose systemic vascular diseases, researchers at home and abroad have carried out extensive studies on the enhancement, segmentation, and extraction of retinal vessels.
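The paper's vessel-similarity function is its own; as a reference point for the multi-scale Hessian enhancement stage, the classical Frangi vesselness filter available in scikit-image can be sketched as follows. The scale set and the dark-vessel assumption are stated assumptions, not the paper's parameters.

```python
from skimage.filters import frangi

def enhance_vessels(gray, sigmas=(1, 2, 3, 4)):
    """Multi-scale Hessian-based vesselness (Frangi filter) on a fundus image.
    `gray` is a float image; black_ridges=True assumes dark vessels on a
    brighter background, as in typical fundus photographs."""
    return frangi(gray, sigmas=sigmas, black_ridges=True)
```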

Underwater Image Enhancement by Wavelength Compensation and Dehazing

John Y. Chiang and Ying-Ching Chen

Abstract—Light scattering and color change are two major sources of distortion for underwater photography. Light scattering is caused by light incident on objects being reflected and deflected multiple times by particles present in the water before reaching the camera. This in turn lowers the visibility and contrast of the captured image. Color change corresponds to the varying degrees of attenuation encountered by light traveling in the water at different wavelengths, rendering ambient underwater environments dominated by a bluish tone. No existing underwater processing technique can handle the light scattering and color change distortions suffered by underwater images, and the possible presence of artificial lighting, simultaneously. This paper proposes a novel systematic approach to enhance underwater images by a dehazing algorithm, to compensate the attenuation discrepancy along the propagation path, and to take the influence of a possibly present artificial light source into consideration. Once the depth map, i.e., the distances between the objects and the camera, is estimated, the foreground and background within a scene are segmented. The light intensities of foreground and background are compared to determine whether an artificial light source was employed during the image capturing process. After compensating for the effect of artificial light, the haze phenomenon and the discrepancy in wavelength attenuation along the underwater propagation path to the camera are corrected. Next, the water depth in the image scene is estimated according to the residual energy ratios of the different color channels existing in the background light. Based on the amount of attenuation corresponding to each light wavelength, color change compensation is conducted to restore color balance. The performance of the proposed algorithm for wavelength compensation and image dehazing (WCID) is evaluated both objectively and subjectively by utilizing ground-truth color patches and video downloaded from the Youtube website. Both results demonstrate that images with significantly enhanced visibility and superior color fidelity are obtained by the proposed WCID.

Index Terms—Color change, image dehazing, light scattering, underwater image, wavelength compensation.

I. INTRODUCTION

Acquiring clear images in underwater environments is an important issue in ocean engineering [1], [2]. The quality of underwater images plays a pivotal role in scientific missions such as monitoring sea life, taking census of populations, and assessing geological or biological environments.
Capturing images underwater is challenging, mostly due to haze caused by light that is reflected from a surface and is deflected and scattered by water particles, and color change due to varying degrees of light attenuation for different wavelengths [3]-[5]. Light scattering and color change result in contrast loss and color deviation in images acquired underwater. For example, in Fig. 1, the haze in the school of Carangid, the diver, and the reef at the back is attributed to light scattering, whereas color change is the reason for the bluish tone appearing in the brown coral reef at the bottom and the yellow fish in the upper-right corner.

Fig. 1. Hazing and bluish effects caused by light scattering and color change in underwater images. This image is part of underwater footage on the Youtube website filmed by the Bubble Vision Company.

Haze is caused by suspended particles such as sand, minerals, and plankton that exist in lakes, oceans, and rivers. As light reflected from objects propagates toward the camera, a portion of the light meets these suspended particles, which absorb and scatter the light beam, as illustrated in Fig. 2. In the absence of blackbody radiation [6], the multiscattering process along the course of propagation further disperses the beam into homogeneous background light.

Fig. 2. Natural light enters from air to an underwater scene point x. The reflected light propagates distance d(x) to the camera. The radiance perceived by the camera is the sum of two components: the background light formed by multi-scattering and the direct transmission of reflected light.

Conventionally, the processing of underwater images focuses solely on compensating either light scattering or color change distortion. Techniques targeting the removal of light scattering distortion include exploiting polarization effects to compensate for visibility degradation [7], using image dehazing to restore the clarity of the underwater images [8], and combining point spread functions and a modulation transfer function to reduce the blurring effect [9]. Although the aforementioned approaches can enhance scene contrast and increase visibility, distortion caused by the disparity in wavelength attenuation, i.e., color change, remains intact. On the other hand, color-change correction techniques estimate underwater environmental parameters by performing color registration with consideration of light attenuation [10], employing histogram equalization in both RGB and HSI color spaces to balance the luminance distributions of color [11], and dynamically mixing the illumination of an object in a distance-dependent way by using a controllable multicolor light source to compensate color loss [12]. Despite the improved color balance, these methods are ineffective in removing the image blurriness caused by light scattering. A systematic approach is needed to take all the factors concerning light scattering, color change, and the possible presence of an artificial light source into consideration.

Manuscript received July 16, 2011; revised December 04, 2011; accepted December 06, 2011. Date of publication December 13, 2011; date of current version March 21, 2012. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Wai-Kuen Cham. The authors are with the Department of Computer Science and Engineering, National Sun Yat-Sen University, Kaohsiung 80424, Taiwan. Digital Object Identifier 10.1109/TIP.2011.2179666

The algorithm for wavelength compensation and image dehazing (WCID) proposed in this paper combines both techniques to remove the distortions caused by light scattering and color change. The dark-channel prior [13], an existing scene-depth derivation method, is used first to estimate the distances of the scene objects to the camera. The low intensities in the dark channel are mainly due to three factors: 1) shadows, e.g., the shadows of creatures, plankton, plants, or rocks in seabed images; 2) colorful objects or surfaces, e.g., green plants, red or yellow sands, and colorful rocks/minerals, deficient in certain color channels; and 3) dark objects or surfaces, e.g., dark creatures and stone [8]. Based on the depth map derived, the foreground and background areas within the image are segmented. The light intensities of foreground and background are then compared to determine whether an artificial light source was employed during the image acquiring process. If an artificial light source is detected, the luminance introduced by the auxiliary lighting is removed from the foreground area to avoid overcompensation in the stages that follow. Next, the dehazing algorithm and wavelength compensation are utilized to remove the haze effect and the color change along the underwater propagation path to the camera. The residual energy ratio among the different color channels in the background light is employed to estimate the water depth within the underwater scene. Energy compensation for each color channel is carried out subsequently to adjust the bluish tone to a natural color. With WCID, expensive optical instruments or stereo image pairs are no longer required. WCID can effectively enhance visibility and restore the color balance of underwater images, rendering high visual clarity and color fidelity.

A. Underwater Model

The background light in an underwater image can be used to approximate the true in-scattering term in the full radiative transport equation, yielding the following simplified hazy image formation model [14], [15]:

I_λ(x) = J_λ(x) t_λ(x) + (1 - t_λ(x)) B_λ,  λ ∈ {red, green, blue}   (1)

where x is a point in the underwater scene, I_λ(x) is the image captured by the camera, J_λ(x) is the scene radiance at point x, t_λ(x) is the residual energy ratio of J_λ(x) after reflecting from point x in the underwater scene and reaching the camera, B_λ is the homogeneous background light, and λ is the light wavelength. Note that the residual energy ratio t_λ(x) is a function of both the wavelength λ and the object-camera distance d(x); it summarizes the overall effects of both light scattering and color change suffered by light with wavelength λ traveling the underwater distance d(x). The direct attenuation term J_λ(x) t_λ(x) describes the decay of scene radiance in the water [16]. The residual energy ratio can be represented alternatively as the ratio of the energy of a light beam with wavelength λ after and before traveling the distance d(x) within the water, E_λ(x, d(x)) and E_λ(x, 0), respectively:

t_λ(x) = E_λ(x, d(x)) / E_λ(x, 0) = Nrer(λ)^{d(x)}   (2)

where the normalized residual energy ratio Nrer(λ) corresponds to the ratio of residual to initial energy for every unit of distance propagated, as determined by the medium extinction coefficient [15]. The normalized residual energy ratio Nrer(λ) depends on the wavelength of the light transmitted [17], as illustrated in Fig. 3: red light possesses a longer wavelength and lower frequency and thereby attenuates faster than its blue counterpart. This results in the bluish tone prevalent in underwater images [18].

Fig. 3. Different wavelengths of light are attenuated at different rates in water. Blue light travels the longest in the water due to its short wavelength. This is the reason that underwater images are dominated by blue color.

Other than the
wavelength of the light transmitted, the normalized residual energy ratio Nrer(λ) is also affected by water salinity and the concentration of phytoplankton [17]. In light of this observation, oceanic water is further classified into three categories. Type-I waters represent extremely clear oceanic waters. Most clear coastal waters, with a higher level of attenuation, belong to the Type-II class. Turbid upwelling coastal waters are listed as Type III. Water types I, II, and III roughly correspond to oligo-, meso-, and eutrophic waters [19]. For every meter of ocean Type I that a light beam passes through, the values of the normalized residual energy ratio Nrer(λ) for red (700 nm), green (520 nm), and blue (440 nm) light are 82%, 95%, and 97.5%, respectively. Based on the water type considered, the normalized residual energy ratio can be adjusted relative to that of Ocean Type I as a piecewise function of the wavelength band:

Nrer(λ) = { ..., if λ = red;  ..., if λ = green;  ..., if λ = blue }   (3)

Fig. 4. Flowchart of the WCID algorithm proposed.

II. UNDERWATER IMAGE FORMATION MODEL

The proposed WCID algorithm proceeds in a direction inverse to the underwater image formation path discussed above, as depicted in Fig. 4. First, consider the possible presence and influence of the artificial light source. Next, remove the light scattering and color change that occurred along the course of propagation from the object to the camera. Finally, compensate the disparities of wavelength attenuation for traversing the water depth to the top of the image and fine-tune the energy loss by deriving a more precise depth value for every point within the image.

Fig. 2 illustrates the underwater image formation model. Homogeneous skylight entering the water from above is the major source of illumination in an underwater environment. Incident light traverses from the surface of the water to the image scene, covering a range from depth D through D + R, where R corresponds to the image depth range. During the course of propagation, light with different wavelengths is subjected to varying degrees of attenuation. The color change of the ambient lighting makes the underwater environment tinted with a bluish hue. As airlight incident from air into the water reaches the underwater scene point x at depth D(x), the amount of residual light formed after wavelength attenuation can be formulated according to the energy attenuation model in (2) as follows:

E_λ(x, D(x)) = E_λ(x, 0) · Nrer(λ)^{D(x)},  λ ∈ {red, green, blue}   (4)

At point x, the reflected light travels the distance d(x) to the camera, forming the pixel I_λ(x), λ ∈ {red, green, blue}. Along this underwater object-camera path, two phenomena occur, i.e., light scattering and color change. Note that color change occurs not only along the surface-object propagation path but also along the object-camera route. The light emanating from point x equals the amount of illuminating ambient light reflected, i.e., E_λ(x, D(x)) · ρ_λ(x), where ρ_λ(x) is the reflectivity of point x for light with wavelength λ. Following the image formation model in a hazy environment in (1), the image formed at the camera can be formulated as follows:

I_λ(x) = E_λ(x, D(x)) · ρ_λ(x) · t_λ(x) + (1 - t_λ(x)) · B_λ,  λ ∈ {red, green, blue}   (5)

where the background light B_λ represents the part of the object-reflected light and ambient light scattered toward the camera by particles in the water. The background light increases as the object is placed farther away from the camera [13], [20]-[23]. Alternatively, the residual energy ratio t_λ(x) in the above equation can be represented in terms of the normalized residual energy ratio Nrer(λ) using (2):

I_λ(x) = E_λ(x, 0) · Nrer(λ)^{D(x)} · ρ_λ(x) · Nrer(λ)^{d(x)} + (1 - Nrer(λ)^{d(x)}) · B_λ,  λ ∈ {red, green, blue}   (6)

Equation (6) incorporates the light scattering during the course of propagation from object to camera, and the wavelength attenuation along both the surface-object path D(x) and the object-camera route d(x). Once the scene depth, i.e., the object-camera distance d(x), is known through the dark-channel prior, the value of the residual energy ratio Nrer(λ)^{d(x)} after wavelength attenuation can be calculated; thus, the direct attenuation term is derivable through a dehazing procedure. The surface-object distance D(x) is calculated by comparing the residual energy ratios of the different color channels. Given the water depth D(x), the amount of reflecting light from point x illuminated by airlight, i.e., free of light scattering and color change, is determined.

Moreover, an artificial light source is often provided to overcome the insufficient lighting commonly encountered in an underwater photographic environment. The luminance contributed by the artificial light source has to be removed before the dehazing and wavelength compensation operations to avoid overcompensation. When an artificial light source L is detected, the light emitted first has to travel the distance d(x) before reaching point x; the residual energy after this course of propagation is L_λ · Nrer(λ)^{d(x)}. The total amount of light impinging on point x is therefore the summation of the ambient lighting and the attenuated artificial light, E_λ(x, 0) Nrer(λ)^{D(x)} + L_λ Nrer(λ)^{d(x)}. This total incident light is reflected with reflectivity ρ_λ(x) and bounces back the distance d(x) before reaching the camera. During both the forward and backward courses of propagation pertinent to the artificial light, color change occurs. Accordingly, (6) can be further modified into the hazy image formation equation

I_λ(x) = ( E_λ(x, 0) Nrer(λ)^{D(x)} + L_λ Nrer(λ)^{d(x)} ) · ρ_λ(x) · Nrer(λ)^{d(x)} + (1 - Nrer(λ)^{d(x)}) · B_λ   (7)

The underwater image formation model in (7) takes the hazing effect, the wavelength attenuation, and the artificial lighting into consideration. Given the perceived signal I_λ(x), our goal is to remove the influences of artificial lighting, hazing, and color change along the object-camera path, and the color change suffered from the water surface to the object, as outlined in the previous paragraph. The following subsections discuss the steps for the estimation of the object-camera distance d(x), the artificial light source L, the water depth D, and the depth range R, together with the corresponding procedures for dehazing and energy compensation.

A. Distance Between the Camera and the Object

The common approach for estimating the depth of objects within a scene, i.e., the depth map, often requires two images for parallax [20]. In a hazy environment, haze increases with distance; therefore, haze itself can be a useful depth clue for scene understanding. Consequently, evaluating the concentration of haze in a single image is sufficient to predict the distance d(x) between an object in the scene and the camera [21]. The dark-channel prior [8], an existing scene-depth derivation method, is based on the observation that, in most of the non-background-light patches Ω(x) of a haze-free underwater image, at least one color channel has a very low intensity at some pixels. In other words, the minimum intensity in such a patch should have a very low value, i.e., a dark channel. Note that the low intensity observed through the dark channel is a consequence of low reflectivity existing in certain color channels. If no pixel with a very low value can be found in the local patch Ω(x), this implies the existence of haze. The concentration of haze in a local patch can then be quantified by the dark-channel prior, which in turn provides the object-camera distance d(x) [13]. As formulated in (7), the light reflected from point x is

J_λ(x) = ( E_λ(x, 0) Nrer(λ)^{D(x)} + L_λ Nrer(λ)^{d(x)} ) · ρ_λ(x),  λ ∈ {red, green, blue}   (8)

We define the dark channel J^dark(x) for the underwater image as

J^dark(x) = min_{λ ∈ {red, green, blue}} ( min_{y ∈ Ω(x)} J_λ(y) )   (9)

If point x belongs to a part of a foreground object, the value of the dark channel is very small, i.e., J^dark(x) → 0. Taking the min
operation over the local patch Ω(x) on the hazy image in (7), we have

min_{y∈Ω(x)} I_λ(y) = min_{y∈Ω(x)} ( J_λ(y) Nrer(λ)^{d(y)} ) + (1 - Nrer(λ)^{d(x)}) B_λ,  λ ∈ {red, green, blue}   (10)

Since B_λ is the homogeneous background light, and the residual energy ratio Nrer(λ)^{d(y)} on the small local patch surrounding point x is essentially a constant Nrer(λ)^{d(x)} [21], it can be taken out of the min operation; dividing by B_λ then gives

min_{y∈Ω(x)} ( I_λ(y) / B_λ ) = Nrer(λ)^{d(x)} · min_{y∈Ω(x)} ( J_λ(y) / B_λ ) + 1 - Nrer(λ)^{d(x)},  λ ∈ {red, green, blue}   (11)

We rearrange the above equation and perform one more min operation among all three color channels:

min_λ ( min_{y∈Ω(x)} ( I_λ(y) / B_λ ) ) = min_λ ( Nrer(λ)^{d(x)} · min_{y∈Ω(x)} ( J_λ(y) / B_λ ) + 1 - Nrer(λ)^{d(x)} )   (12)

The first term in (12) can be shown to satisfy the following inequality (refer to Appendix I for details):

min_λ ( Nrer(λ)^{d(x)} · min_{y∈Ω(x)} ( J_λ(y) / B_λ ) ) ≤ ( min_λ min_{y∈Ω(x)} J_λ(y) ) / ( min_λ B_λ )   (13)

The first term in the numerator of (13) is equal to the dark channel defined in (9) and is very close to zero. Therefore, (12) can be rewritten as

min_λ ( min_{y∈Ω(x)} ( I_λ(y) / B_λ ) ) = 1 - min_λ Nrer(λ)^{d(x)},  λ ∈ {red, green, blue}   (14)

Among all color channels, red light possesses the lowest residual value, which indicates that min_λ Nrer(λ)^{d(x)} is simply equal to Nrer(red)^{d(x)}. The background light B_λ is usually assumed to be the pixel intensity with the highest brightness value in an image [20]. However, this simple assumption often renders erroneous results due to the presence of self-luminous organisms or an extremely smooth surface, e.g., a white fish. In order to increase the robustness of background-light detection, a min operation is first performed in every local patch of all pixels in the hazy image; the brightest pixel value among all local minima then corresponds to the background light:

B_λ = max_x ( min_{y∈Ω(x)} I_λ(y) ),  λ ∈ {red, green, blue}   (15)

Given the values of I_λ(x), B_λ, and Nrer(red), the distance d(x) between a point on an object and the camera can be determined. The depth map of Fig. 1 is shown in Fig. 5(a). The block-based dark-channel prior inevitably introduces a mosaic artifact and produces a less accurate depth map, as shown in Fig. 5(b) and (c). By imposing a locally linear assumption on the foreground and background colors and applying image matting to repartition the depth map, the mosaic effect is reduced and object contours can be identified more precisely [13], [22]. Applying image matting to the underwater depth map derived by the general dark-channel methodology is a novel approach. Denoting the depth map of Fig. 5(a) as d̂, the depth map after refinement can be formulated as

d_r = (L + η U)^{-1} η d̂   (16)

where U is a unit matrix, η is a regularization coefficient, and L represents the matting Laplacian matrix:

L(i, j) = Σ_{k | (i,j) ∈ w_k} ( δ_{ij} - (1/|w_k|) ( 1 + (I_i - μ_k)^T ( Σ_k + (ε/|w_k|) U_3 )^{-1} (I_j - μ_k) ) )   (17)

where I represents the original image, i and j are pixel coordinates, δ_{ij} is the Kronecker delta, Σ_k is the color covariance over the small window w_k, μ_k is the mean color value of w_k, U_3 is a 3×3 unit matrix, and ε is a regularization coefficient. Fig. 6 shows the depth map after applying image matting to remove the mosaic distortion.

Fig. 5. (a) Depth map obtained by estimating d(x), the distance between the object and the camera. Blowups of (b) Frame I and (c) Frame II. Visible mosaic artifacts are observed due to the block-based operation of the dark-channel prior.

Fig. 6. (a) Depth map obtained after refinement with image matting. Blowups of (b) Frame I and (c) Frame II. Compared with Fig. 5(b) and (c), the refined depth map reduces the mosaic effect and captures the contours of objects more accurately.

B. Removal of the Artificial Light Source

Artificial light sources are often supplemented to avoid the insufficient lighting commonly encountered in an underwater photographic environment, as shown in Fig. 7. If an artificial light source is employed during the image capturing process, the luminance contributed by L must be deducted first to avoid overcompensation in the stages that follow, as illustrated in Fig. 8. Modeling, detecting, and compensating for the presence of an artificial light source are novel to the processing of underwater images.

Fig. 7. Illuminated by an artificial light source, the intensity of the foreground appears brighter than that of the background.

Fig. 8. When the luminance contributed by an artificial light source is not deducted first, an overexposed image is obtained after the compensation stages that follow.

The existence of an artificial light source can be determined by comparing the mean luminance of the foreground and the background. In an underwater image without artificial lighting, the dominant source of light originates from the airlight above the water surface. The underwater background corresponds to light transmitted without being absorbed or reflected by objects and is therefore the brighter part of the image. A higher mean luminance in the foreground of an image than in the background thus indicates the existence of a supplementary light source. The foreground and the background of an image can be segmented based on the depth map derived earlier:

area-type(x) = { foreground, if d(x) ≤ δ;  background, if d(x) > δ }   (18)

where d(x) is the distance between the object and the camera and δ is a threshold. Upon detection of artificial lighting, the added luminance introduced by the artificial light source has to be removed. The influence of artificial lighting perceived by the camera is a function of the amount of luminance contributed by the light source and the surface reflectance of the objects. Since a point light source emanates spherically, the amount of luminance supplied is inversely proportional to the square of the distance between the object and the light source: the closer the object, the stronger the intensity of the artificial light, and vice versa. As discussed earlier, the light reflected from point x is equal to the product of the reflectivity ρ_λ(x) times the summation of the ambient lighting E_λ(x, 0) Nrer(λ)^{D(x)} and the attenuated artificial light L_λ Nrer(λ)^{d(x)}. For a fixed object-camera distance d(x), the intensity of the attenuated artificial lighting L_λ Nrer(λ)^{d(x)} is a constant, so the differences in brightness perceived by the camera for fixed d(x) can be attributed to the reflectance of the objects. Over all the pixels with the same d(x), finding solutions for the ambient lighting, the artificial light L_λ, and the reflectivity ρ_λ is an overdetermined problem and can be treated as a least-squares optimization: the number of equations exceeds the number of unknowns, and there is generally no exact solution but an approximate one that minimizes the quadratic error with respect to the model (7), whose in-scattering term is known. This approximate solution is the least-squares solution, computed by using the pseudoinverse A⁺ = (AᵀA)⁻¹Aᵀ of the corresponding coefficient matrix A:

( L_λ, ρ_λ ) = argmin Σ_{x: d(x) fixed} ( I_λ(x) - ( E_λ(x, 0) Nrer(λ)^{D(x)} + L_λ Nrer(λ)^{d(x)} ) ρ_λ(x) Nrer(λ)^{d(x)} - (1 - Nrer(λ)^{d(x)}) B_λ )²,  λ ∈ {red, green, blue}   (19)

Fig. 9 shows the distribution of the luminance of the artificial light source present in Fig. 7 and the reflectance of the red, green, and blue channels. After deriving the luminance contributed by the artificial light source and the reflectivity ρ_λ(x), λ ∈ {red, green, blue}, at point x, the influence caused by the artificial lighting can be removed by subtraction from (7) as follows:

I_λ(x) - L_λ Nrer(λ)^{d(x)} ρ_λ(x) Nrer(λ)^{d(x)} = E_λ(x, 0) Nrer(λ)^{D(x)} ρ_λ(x) Nrer(λ)^{d(x)} + (1 - Nrer(λ)^{d(x)}) B_λ,  λ ∈ {red, green, blue}   (20)
(see right panel)after eliminating the arti ficial lighting detected in Figs.1and 7(see left panel),respectively.Due to the size of scene area covered,the amount of arti ficial light received in Fig.7is more concentrated and larger than that of pensation of Light Scattering and Color Change Along the Object–Camera PathAfter removing the arti ficial light source and deriving dis-tance between an object and the camera,the haze can be removed by subtracting the in-scattering termFig.11.Underwater image obtained(a)after eliminating haze and(b)color change along the object–camera light propagation route is shown in the right panel of the split screen,respectively,whereas the image,i.e.,in Fig.1,is in the left panel.A bluish color offset remains prevalent.in(1)of the hazy image formation model from image per-ceived by the camera,i.e.,NrerNrer Nrer(21) The image after dehazing is shown in the right panel of Fig.11(a).Next,the color change encountered during the ob-ject–camera path can be corrected by dividing both sides of(21) by the wavelength-dependent attenuation ratio Nrer. Therefore,the image after the dehazing and correction of color change introduced through the propagation path can be formulated asNrerNrerNrerred,green,blue(22) The right panel of the split screen in Fig.11(b)shows the result after the removal of light scattering and correction of color change along the object–camera path.A bluish color offset remains prevalent across the whole frame.This is due to the disparity in the amount of wavelength attenuation encountered when skylight penetrates through the water surface reaching the imaging scene,causing underwater environments illuminated by the bluish ambient light.The further estimation of the water depth of the image scene is required to satisfactorily correct the color change introduced along the course of propagation from the water surface to the photographic scene.Fig.12.Underwater image after removing light scattering and color change by considering the artificial light source,the object–camera distance,and the depth from the water surface to top of image.As the depths of and bottom of the image are different,visible color change distortion still exists at the lower portion of the image.Since our goal is to obtain a haze-free and color-corrected image,.The only unknown left in(22)is the water depth of point.In the next subsection,the derivation of scene depth will be discussed.D.Underwater Depth at the Top of the Photographic Scene: For the homogeneous skylight,the energy corresponding to red,green,blue channels right above the water surface shall be the same,i.e.,.After penetrating the water depth,the energy each color channel after attenuation, i.e.,the underwater ambient light,red,green,blue, becomes,,and,respectively.To estimate the underwater depth,the corresponding intensity of the ambient lighting shall be detectedfirst.Therefore,the water depthis the least squares solution that makes the difference between the attenuated version of the incident light,,and after propagation,and the detected ambient lighting,red,green,blue in depth,with energy,, and at the minimum as follows:Nrerred,green,blue(23) Once is determined,the amount of attenuation in each wavelength can be utilized to compensate the energy differences and correct the color change distortion by dividing(22)with Nrer as follows:Nrerred,green,blue(24)where is the restored energy of the underwater image after haze removal and calibration of color change,as shown in Fig.12.However,the water depths at the top 
and bottom of the scene are usually not the same in a typical image.Employing a single value of depth to compensate the entire image will leave visible color change at the bottom portion since the inten-sity of incident light decreases significantly as the water depth increases.Thus,the refinement of depth estimation for each point in the image is necessary to achieve correct energy com-pensation at different depths.。

Genetics English Glossary (遗传学英文词汇)
Genetics Glossaries Week 1Heredity 遗传Variation 变异Preformation 先成论Epigenesist 后生说Mendel’s law 孟德尔定律Law of segregation 分离定律Law of independent assortment 自由组合定律Inheritance 遗传特征Trait 特征Full/Constrict 饱满/收缩Pod 荚Axial/Terminal 轴生/顶生Stem 茎Monohybrid cross 单基因杂交Postulate 假说Dominance/Recessiveness 显性/隐形Gamete 配子Likelihood 可能性Punnett square 旁那特方格/棋盘法Genetype 基因型Allele 等位基因Homozygote/ Heterozygous 纯合子/杂合子Phenotype 表现型Test cross 测交Dihybrid cross 双因子杂交Chi-square test 卡方测验Week 2Pedigree 系谱Huntington disease 亨廷顿舞蹈症Cystic fibrosis 囊性纤维化(胰腺病)Vertical inheritance 垂直遗传特性Horizontal inheritance 水平遗传特性Incomplete dominance 不完全显性Semidominance 半显性Codominance 共显性Multiple alleles 复等位基因Self-incompatibility 自交不亲和Pleiotropy 基因多效性Lethal gene 致死基因Cytogenetics 细胞遗传学Chromatin 染色质Chromosome 染色体Haploid 单倍体Diploid 二倍体Karyotype 核型Sex chromosome/Autosome 性/常染色体Moths 蛾Alligator 短吻鳄Parental 亲本的Maternal 母系的Subsequent 随后的Meiosis 减数分裂Drosophila 果蝇Drosophila melanogaster 黑腹果蝇Fruit fly 果蝇1Prolific 多产的Nomenclature 命名法Hemizygous 半合子的Color-blindnenss 色盲Descendant 后代Hormone 荷尔蒙Pattern baldness 模型斑秃Week 3Mitosis 有丝分裂Complement 互补Cytokinesis 胞质分裂Ongoing 持续的Synthesis 合成Telophase 末期Anaphase 后期Aligned 对齐的Metaphase 中期Prophase 前期Duplicated 复制的Duplication 复制Centrosome 中心体Meiosis I/II 减数分裂I/II期Nondisjunction 不分离Red-green colorblindness 红绿色盲Sex-linked 伴性的Hemophilia 血友病Hypophosphatemia 低磷血症Deoxyribonucleic acid 脱氧核糖核酸(DNA)Nuclei 核Principle 组分Ultracentrifugation 超速离心法Predominance 优势Phage 噬菌体Host cell well 宿主细胞壁Double helix 双螺旋Complementary pairing 互补配对Central dogma 中心法则Prokaryote 原核生物Eukaryote 真核生物Week 4Nucleotide 核苷酸Phosphate 磷酸盐Quagga 斑驴Skull 颅骨Neanderthal 穴居人的Uracil 尿嘧啶Thymine 胸腺嘧啶Adenine 腺嘌呤Guanine 鸟嘌呤Cytosine 胞嘧啶Ribonucleotide 核糖核苷酸Tobacco mosaic virus 烟草花叶病毒(TMV)Semiconservative 半保留的Methylate 使甲基化Splicing 剪接Alternative splicing 选择性剪接Reverse transcription 反转录Retrovirus 逆转录病毒2Immunodeficiency 免疫缺陷Matrix 基质Bilipid outer layer 双脂质外层Viral particle 病毒颗粒Disintegrate 破裂Week 5Correlation 相关性Polarity 极性Nonoverlapping 不重叠的Degenerate 简并的Incorporation 编入Nickel hydride 镍氢Wobble rule 摆动法则Peptidyl 肽基Aminoacyl 氨酰基Polyribosome 多核糖体Elongation 延长Termination 终止Multimeric protein 多亚基蛋白质Posttranslational 翻译后Prion 阮病毒Spongiform encephalopathy 海绵状脑病Spongy 海绵似的Proteinaceous 蛋白质的Deposit 沉淀物Incubation 潜伏期Progressive 渐进的Neurodegeneration 神经性退行性病变Infectious 传染的Forward/reverse mutation 正向/反向突变Rearrangement 重排Spontaneous mutation 自发突变Haploid 单倍体Susceptibility 敏感性Mutagen 诱变剂Bactericide 杀菌剂Fluctuation 波动Polymerase 聚合酶Proofreading 校对Crossing-over 互换Transposon 转座子Base analog 碱基类似物Intercalator 插入剂Alkyltransferase 烷基Homology-dependent 同源依赖Excision 切除Methyl 甲基Mismatch 错配Error-prone 易错的Nonhomologous end-joining 非同源末端接合Xeroderma pigmentosum 着色性干皮病Alkaptonuria 尿黑酸症Hypothesis 假说Neurospora 脉胞菌Mold 霉菌Nutritional mutant 营养突变体Auxotroph 营养缺陷型Prototroph 原养型Modulate 调节3Genetics 20104Perception 感觉Week 6-7Transgenic 转基因 Recombinant 重组的 Donor 供体Restriction enzyme 限制性内切酶 Fragment 碎片 Vector 载体 Transformation 转导 Amplification 扩增 Endonuclease 核酸内切酶 Cornerstone 基础 Degrade 降解 Palindrome 回文 Overhang 悬突体 Isoschizomer 同切酶 Isocaudarner 同尾酶 Gel electrophoresis 凝胶电泳 Partial digestion 部分消化 Infer 推断Selectable marker 可选标记 Drug resistance 抗药性 Ligation 连接反应 Ligase 连接酶 Sticky end 粘性末端 Blunt end 平整末端 Cosmid 粘性质粒YAC 酵母人工染色体(yeast artificial chromosome ) Autonomous 自主的Subcloning 亚克隆化β-galactosidase β半乳糖苷酶 Gal 标准编号 Blue dye 蓝色染料 Plaque 噬菌斑Shuttle vector 穿梭载体 Intron 内含子 Probing 探测Southern blotting DNA 印迹 Reverse genetics 反向遗传学 Transgenic 转基因的 Metabolity 代谢物 Gene knockout 基因敲除 Ectopic expression 异位表达Week 8Embryo 
胚胎Genetic linkage 遗传连锁 Chiasmata 复交叉Chromosome breakage 染色体断裂 Cytological 细胞学的 Abnormality 异常Keep track of 与……保持联系 Genetic marker 遗传标记 Progeny 子代 Discontinuity 不连续的 Parental class 亲本 Assort 分配 Tracing 追踪 Correction 修正Chromosomal interference 染色体干扰Orient 定向Homologous chromosome 同源染色体Coefficient of coincidence 并发系数Linkage group 连锁群Interchangeable 相互可交换Week 9HGP (Human Genome Project) 人类基因组计划Proposed 被提议Draft 草稿Skepticism 怀疑论Computational biology 计算生物学Ethics 伦理学Legislation 法律Arabidopsis thaliana 拟南芥Facilitate 促进Manipulation 操作-omics 各种组学Transcriptomics 转录组学Proteomics 蛋白质组学Phenomics 表型组学Accuracy 精确性Polymorphism 多态性Heterochromatic DNA 异染色DNAHybridization 杂种Identifying 标记Estimating error 估计误差SNP (Single Nucleotide Polymorphism) 单核苷酸多态性SSR (Simple Sequence Repeat) 简单重复序列Microsatellite 微卫星Genomewide 全基因组Constellation 构象Span 跨度Counterpart 副本Bottom-up approach 自下而上模式STS (Sequence Tagged Site) 标志序列位点Top-down approach自上而下模式Fluorescent 荧光的In situ hybridization 原位杂交Loci (locus复数) 位点Resolution 分辨率Hierarchical shotgun approach 分层散弹枪策略Shearing 剪切Throughput 吞吐量/生产量Distinct 不同的Lateral transfer 横向迁移Complexity 复杂性Shuffling 慢慢移动Module 模块Paralogs 种内同源基因Pseudogene 假基因Duplication 重复Telomere 端粒Orthologous gene 种间/直系同源基因Paralogous gene 种内/旁系同源基因Week 10Organelle 细胞器Saccharomyces cerevisiae 酿酒酵母5Preserve 保护Integrity 完整性Shortening 缩短Fusion 融合Degradation 降解Germ-line cell 生殖细胞Somatic cell 体细胞Histone 组蛋白Heterogeneous 不均匀的,多样的Uneven 不均匀Supercoiling 超螺旋Radial loop 桡箕/反箕Scaffold 支架结构Heterochromatin 异染色质Staining 着色Transcription 转录Inactive 失活Constitutive 组成性的Facultative 兼性的Euchromatin 常染色质Condense 浓缩Dosage compensation 剂量补偿Barr body 巴氏小体/X染色质Deletion 删除Inversion 倒位Translocation 易位Transposition 转置Polytene 多线型Giant chromosome 巨染色体Salivary gland cell 唾液腺细胞Inversion loop 倒位环Chromatid 染色单体Centromere 着丝点Suppressor 抑制物/抑制基因Disruption 分裂Speciation 物种形成Transposable element 转位因子Retroposon 反转录子LINE (Long interspersed element)长散在序列SINE (short interspersed element)短散在序列Relocate 迁移Euploid 整倍体Aneuploid 非整倍体Monosomy 单倍体Trisomy 三倍体Tetrasomy 四倍体Polyploidy 多倍体Colchicines 秋水仙碱Down's syndrome 唐氏综合征Inactivation 失活Mosaic 嵌合体Diploid 二倍体Vigor 活力Sterile 不育的Odd-number 奇数Allopolyploid 异源多倍体Raphanobrassica 萝卜属Week 11Prokaryotic 原核的6Proliferating 增生的ORF (open reading frame) 阅读框架Operon 操纵子Spontaneous 自发的Transformation 转化Conjugation 结合Transduction 转导Recipient 接受者Hfr 高频重组Integrate 融入Excision 切除Reverting 回复Non-Mendelian 非孟德尔式Four-o-clock 紫茉莉Mitochondria 线粒体Polypeptide-encoding 多肽编码Compact 压缩Intron 内含子Liverwort 地钱Protozoan 原生动物Parasite 寄生虫Apparatus 组织/器官Exception 例外mtDNA 线粒体DNA Chloroplast 叶绿体cpDNA 胞质DNAResponsive 回应的Heteroplasmic 异质的Homoplasmic 同质的Bioreactor 生物反应器Week 12Developmental genetics 发育遗传学Manipulation 操纵Species-specific 特种异性的Cell formation 细胞形成Mutant 突变体Loss-of-function 功能性缺失Null 失效的Hypomorphic 亚效等位基因Dominant-negative 显性失活的Gain-of-function 功能性获得Overexpression 超量表达Ectopic expression 异位表达Null mutation 无效突变Leaky 有漏洞的Permissive temp 允许温度Restrictive temp 限制温度Haploinsufficiency 单倍剂量不足Subcellular localization 亚细胞定位Epistasis 上位/异位显性Sepal 萼片Petal 花瓣Stamen 雄蕊Carpel 心皮EMS (Ethylmethane Sulphonate) 乙基甲磺酸Irradiation 放射T-DNA 转运DNAsiRNA 小干扰RNAmiRNA = microRNA 微小RNA7Functional genomics 功能基因组学Adenosine deaminase 腺苷脱氨酶Embryonic 胚胎的Totipotent (细胞)全能的Pluripotent 多能性的Blastocyst 胚泡Multipotent 多能干细胞Hematopoietic 造血的Bone marrow 骨髓Week 13Anterior-posterior 后前位的Syncytium 多核体Cortex 皮层Pole cell 极细胞Blastoderm 胎盘Fertilization 受精Segmentation gene 分节基因Homeotic gene 同源框基因Cellularization 细胞化Gastrulation 原肠胚形成Germ layer 胚层Mesoderm 中胚叶Endoderm 内胚层Ectoderm 外胚层Maternal gene 母体基因Gap gene 
裂隙基因Pair-rule gene 成对规则基因Segment-polarity 体节极性基因Maternal-effect 母体影响bicoid (bcd) 果蝇中控制头胸发育的一个关键母体基因Morphogen 成形素Repressor 阻遏物Zygotic gene 合子基因Hierarchy 层次结构Promoter 启动子Affinity 亲和力Regulating 调节Subdivide 细分Mirror-image 镜像Intra-segmental 节内的Patterning 图样Ligand 配合体Transcription factor 转录因子Regulatory cascade 级联调节系统Gene cluster 基因群Biothorax complexHomeodomain 同源域Penetrance 外显率Expressivity 表现度Imprinting 印迹Insulin-like 胰岛素样Epigenetic 表观遗传的Methylation 甲基化作用Prader-Willi syndrome 普拉德-威利综合征Angelman syndrome 天使综合征Haig hypothesis 海格假说Down-regulation 减量调节Sequential 连续的Asymmetric 不对称的8Genetics 20109Intrinsic 固有的 Juxtacrine 邻分泌 Paracrine 旁分泌 Mediated 介导的Week 14Population genetics 种群遗传学 Gene pool 基因库 Microevolution 微观进化 Macroevolution 宏观进化Hardy-Weinberg law 哈代-温伯格定律 Infinite number 无穷 Migration 迁移 Equilibrium 平衡 Correlate 相关 Albino 白化病者 Genetic drift 遗传漂变 Nonrandom mating 选择性交配 Fitness 适合度Natural selection 自然选择 Artificial selection 人工选择 Antibiotic 抗生素 Preexisting 预成 Viability 生存能力 Counteract 抵消 Confer 授予 Persist 保持Heterozygous advantage 杂种优势 Eugenics 优生学Geographically 地理学上的Fluctuation 波动Founder effect 创建者效应 Pathogen 病菌 Insecticide 杀虫剂 Inbreeding 近亲交配 Self-fertilization 自体受精 Hybrid vigor 杂种优势 Deleterious 有害的 Overdominance 超显性Week 15Pre-existing 之前就存在的 Chimpanzee 黑猩猩 Subtle 微妙的 Complexity 复杂度 Transposition 转置 Diversification 多样化 Divergence 分歧Fibrinopeptide 血纤维蛋白肽 Phylogeny tree 系统树。

Commentary on Scientific English Usage

In this paragraph, the author avoids the first and second person and uses the third person instead, which conforms to the usage of scientific and technical English.

Nominalized phrases are used as much as possible, which avoids attributive clauses and lets the reader take in the meaning at a glance.

Furthermore, the author writes essentially in the passive voice, making things the subject (remote sensing techniques, these techniques) and thereby avoiding personal pronouns such as I and We that often appear in speech.

The paragraph uses the present perfect tense (has advanced, have been applied, have been developed).

Formal words are used extensively (rapid growth, processing techniques, various stages), and colloquial words are avoided.

The paragraph uses quite a few idiomatic expressions (over the past two decades, from local to global scales, be applied to, be developed to do sth).

Development of the paragraph: the first sentence is the topic sentence of the whole paragraph, and the sentences that follow expand on it, elaborating in detail the meaning the first sentence intends to convey.

In the next paragraph, the passive voice is again used, thereby avoiding personal pronouns such as I and We that often appear in speech.

Long sentences are used extensively, thereby avoiding subordinate clauses.

No abbreviated forms of words are used in this paragraph, in the interest of standard writing.

This paragraph makes heavy use of idiomatic expressions (it is necessary to do sth, in other circumstances, as long as, in order to).

The adverb-verb collocations are apt (radiometrically calibrate, be directly compared).

Development of the paragraph: the first sentence is again the topic sentence, and the sentences that follow expand on it, elaborating in detail the meaning the first sentence intends to convey.

The third paragraph continues the writing style of the two preceding ones: nominalized phrases (advanced image processing algorithms, learning vector quantization method), apt verb-object collocations (network produce results and reduce costs, compare several algorithms, use the method), and preposition-noun collocations (be used for sth, compare sth with sth, be compared to sth, in training stage).

Medical Image Processing (2nd Edition), Teaching Slides, Chapter 7

• Morphology can provide boundaries of objects, their skeletons, and their convex hulls. It is also useful for many pre- and post-processing techniques, especially in edge thinning and pruning.

References
[1] Luc Vincent, Pierre Soille. Watersheds in Digital Spaces: An Efficient Algorithm Based on Immersion Simulations. IEEE Trans. PAMI, 1991, 13(6): 583-598.

Mathematical Morphology Operations
• Mathematical morphology is a tool for extracting image components that are useful for representation and description.
• Generally speaking, most morphological operations are based on simple expanding and shrinking operations.
• The primary application of morphology is in binary images, though it is also used on grey-level images.
• The simplest way to construct dams separating sets of binary points is to use morphological dilation.
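A toy sketch of the dam idea on two separated binary point sets, using scipy's binary dilation with an 8-connected structuring element; the grid size, point positions, and iteration count are arbitrary illustrative choices.

```python
import numpy as np
from scipy.ndimage import binary_dilation

# Two separated sets of binary points. Dilating each and intersecting the
# grown fronts marks where they would merge -- the place to build a dam.
a = np.zeros((7, 7), bool); a[3, 1] = True
b = np.zeros((7, 7), bool); b[3, 5] = True
struct = np.ones((3, 3), bool)                           # 8-connectivity
grown_a = binary_dilation(a, struct, iterations=2)       # columns 0..3
grown_b = binary_dilation(b, struct, iterations=2)       # columns 3..6
dam = grown_a & grown_b                                  # collision pixels
```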

An Image Segmentation Method for Achnatherum splendens Coverage Based on UAV Remote Sensing Imagery

WU Zhaoli 1,2, LIANG Dong 1, ZHAO Jinling 1, HUANG Linsheng 1
1. School of Electronics and Information Engineering, Anhui University, Hefei 230601, China; 2. Nanjing Institute of Geography and Limnology, Chinese Academy of Sciences, Nanjing 210008, China

Abstract: The percentage of the total area occupied by Achnatherum splendens is computed from the pixel ratio, providing an overall estimate of A. splendens coverage on the Hulunbuir grassland. The proposed image segmentation method includes three stages. First, the Otsu method is used to obtain the maximum between-class variance of the gray image, i.e., the optimal threshold. Second, the gray image is segmented with this threshold to generate a binary image of achnatherum and non-achnatherum classes. Third, an edge-detection method combining median filtering with mathematical morphology extracts the achnatherum image and separates it from the non-achnatherum image. Based on unmanned aerial vehicle (UAV) imagery, the method extracts A. splendens effectively; compared with the measured coverage, the estimate reaches an accuracy of 97.3%.

Keywords: UAV remote sensing imagery; image segmentation; Achnatherum splendens coverage
Journal: Transducer and Microsystem Technologies, 2018, 37(4): 51-53

0 Introduction
Vegetation coverage can be obtained in two ways: ground measurement and remote sensing measurement [1].
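A minimal sketch of the pixel-ratio coverage estimate described above, chaining an Otsu threshold to a mask fraction. It assumes a grayscale numpy array and that achnatherum pixels fall below the threshold; which side of the threshold is the target is an assumption, and the function name is illustrative.

```python
import numpy as np
from skimage.filters import threshold_otsu

def coverage_percent(gray):
    """Otsu-threshold a grayscale UAV image and report target coverage in %."""
    t = threshold_otsu(gray)
    mask = gray < t                        # assumption: target appears darker
    return 100.0 * np.count_nonzero(mask) / mask.size
```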

A Salient Region Extraction Method Based on Improved Region Growing

WANG Xiaoxiao, LIU Danhua
Department of Computer Science, Xiamen University, Xiamen 361005, China
Journal: Modern Computer (Professional Edition), 2012(5): 27-31

Abstract: An improved region-growing method for extracting salient regions is proposed. Unlike previous pixel-based or simple N×N-block-based approaches, the original image is first partitioned with the morphological watershed transform. Seed regions are then selected automatically, exploiting the visual attention mechanism through the saliency map and the relative positions of regions. During the growing process, a new region-growability evaluation function is formed by combining the relative boundary strength and boundary-length (adjacency tightness) criteria with the traditional criterion of region color-mean difference, which increases the contour accuracy of merged regions. Experimental results show that, compared with existing algorithms, the method effectively improves the accuracy of extracting the regions of interest in an image.
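A plain seeded-region-growing sketch using only the classical mean-homogeneity rule; the paper's boundary-strength and adjacency-tightness terms, and its watershed-based seeding, are not reproduced here. The function name, the tolerance value, and 4-connectivity are illustrative choices.

```python
import numpy as np
from collections import deque

def region_grow(gray, seed, tol=10.0):
    """Grow from `seed` (row, col): accept 4-neighbors whose gray value is
    within `tol` of the running region mean (classical homogeneity rule)."""
    h, w = gray.shape
    grown = np.zeros((h, w), bool)
    grown[seed] = True
    total, n = float(gray[seed]), 1
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not grown[ny, nx]:
                if abs(float(gray[ny, nx]) - total / n) <= tol:
                    grown[ny, nx] = True
                    total += float(gray[ny, nx]); n += 1
                    queue.append((ny, nx))
    return grown
```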


Hybrid Image Segmentation Using Watersheds and Fast Region Merging

Kostas Haris, Serafim N. Efstratiadis, Member, IEEE, Nicos Maglaveras, Member, IEEE, and Aggelos K. Katsaggelos, Fellow, IEEE

Abstract—A hybrid multidimensional image segmentation algorithm is proposed, which combines edge and region-based techniques through the morphological algorithm of watersheds. An edge-preserving statistical noise reduction approach is used as a preprocessing stage in order to compute an accurate estimate of the image gradient. Then, an initial partitioning of the image into primitive regions is produced by applying the watershed transform on the image gradient magnitude. This initial segmentation is the input to a computationally efficient hierarchical (bottom-up) region merging process that produces the final segmentation. The latter process uses the region adjacency graph (RAG) representation of the image regions. At each step, the most similar pair of regions is determined (minimum cost RAG edge), the regions are merged and the RAG is updated. Traditionally, the above is implemented by storing all RAG edges in a priority queue. We propose a significantly faster algorithm, which additionally maintains the so-called nearest neighbor graph, due to which the priority queue size and processing time are drastically reduced. The final segmentation provides, due to the RAG, one-pixel wide, closed, and accurately localized contours/surfaces. Experimental results obtained with two-dimensional/three-dimensional (2-D/3-D) magnetic resonance images are presented.

Index Terms—Image segmentation, nearest neighbor region merging, noise reduction, watershed transform.

(Manuscript received July 13, 1996; revised October 20, 1997. This work was supported in part by the I4C project of the Health Telematics programme of the CEC. K. Haris is with the Laboratory of Medical Informatics, Faculty of Medicine, Aristotle University, Thessaloniki, Greece, and with the Department of Informatics, School of Technological Applications, Technological Educational Institution of Thessaloniki, Sindos, Greece. S. N. Efstratiadis and N. Maglaveras are with the Laboratory of Medical Informatics, Faculty of Medicine, Aristotle University, Thessaloniki, Greece. A. K. Katsaggelos is with the Department of Electrical and Computer Engineering, McCormick School of Engineering and Applied Science, Northwestern University, Evanston, IL, USA.)

I. INTRODUCTION

Image segmentation is an essential process for most subsequent image analysis tasks. In particular, many of the existing techniques for image description and recognition [1], [2], image visualization [3], [4], and object based image compression [5]–[7] highly depend on the segmentation results. The general segmentation problem involves the partitioning of a given image into a number of homogeneous segments (spatially connected groups of pixels), such that the union of any two neighboring segments yields a heterogeneous segment. Alternatively, segmentation can be considered as a pixel labeling process, in the sense that all pixels that belong to the same homogeneous region are assigned the same label. There are several ways to define the homogeneity of a region based on the particular objective of the segmentation process.
However, independently of the homogeneity criteria, the noise corrupting almost all acquired images is likely to prohibit the generation of error-free image partitions [8]. Many techniques have been proposed to deal with the image segmentation problem [9], [10]. They can be broadly grouped into the following categories.

Histogram-Based Techniques: The image is assumed to be composed of a number of constant intensity objects in a well-separated background. The image histogram is usually considered as being the sample probability density function (pdf) of a Gaussian mixture and, thus, the segmentation problem is reformulated as one of parameter estimation followed by pixel classification [10]. However, these methods work well only under very strict conditions, such as small noise variance or few and nearly equal size regions. Another problem is the determination of the number of classes, which is usually assumed to be known. Better results have been obtained by the application of spatial smoothness constraints [11].

Edge-Based Techniques: The image edges are detected and then grouped (linked) into contours/surfaces that represent the boundaries of image objects [12], [13]. Most techniques use a differentiation filter in order to approximate the first-order image gradient or the image Laplacian [14], [15]. Then, candidate edges are extracted by thresholding the gradient or Laplacian magnitude. During the edge grouping stage, the detected edge pixels are grouped in order to form continuous, one-pixel wide contours, as expected [16]. A very successful method was proposed by Canny [15], according to which the image is first convolved by the Gaussian derivatives, the candidate edge pixels are isolated by the method of nonmaximum suppression, and then they are grouped by hysteresis thresholding (a usage sketch of this pipeline follows this survey). The method has been accelerated by the use of recursive filtering [17] and extended successfully to 3-D images [18]. However, the edge grouping process presents serious difficulties in producing connected, one-pixel wide contours/surfaces.

Region-Based Techniques: The goal is the detection of regions (connected sets of pixels) that satisfy certain predefined homogeneity criteria. In region-growing or merging techniques, the input image is first tessellated into a set of homogeneous primitive regions. Then, using an iterative merging process, similar neighboring regions are merged according to a certain decision rule [12], [19]–[21]. In splitting techniques, the entire image is initially considered as one rectangular region. In each step, each heterogeneous region of the image is divided into four rectangular segments, and the process is terminated when all regions are homogeneous. In split-and-merge techniques, after the splitting stage a merging process is applied for unifying the resulting similar neighboring regions [22], [23]. However, the splitting technique tends to produce boundaries consisting of long horizontal and vertical segments (i.e., distorted boundaries). The heart of the above techniques is the region homogeneity test, usually formulated as a hypothesis testing problem [23], [24].
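As flagged in the edge-based paragraph above, the Canny pipeline (Gaussian smoothing, gradient, nonmaximum suppression, hysteresis linking) is available off the shelf; a minimal OpenCV sketch, with the image path and thresholds purely illustrative:

```python
import cv2

# Load a grayscale image; the path is a placeholder.
img = cv2.imread("slice.png", cv2.IMREAD_GRAYSCALE)

# Smooth first; Canny then computes the gradient, applies nonmaximum
# suppression, and links candidate edges by hysteresis (low/high thresholds).
blur = cv2.GaussianBlur(img, (5, 5), 1.4)
edges = cv2.Canny(blur, 40, 120)
```

The output is a binary map of one-pixel wide edges but, as the text notes, nothing guarantees that they form closed contours.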
Markov Random Field-Based Techniques: The true image is assumed to be a realization of a Markov or Gibbs random field with a distribution that captures the spatial context of the scene [25]. Given the prior distribution of the true image and the observed noisy one, the segmentation problem is formulated as an optimization problem. The commonly used estimation principles are maximum a posteriori (MAP) estimation, maximization of the marginal probabilities (ICM) [26], and maximization of the posterior marginals [27]. However, these methods require fairly accurate knowledge of the prior true image distribution, and most of them are quite computationally expensive.

Hybrid Techniques: The aim here is offering an improved solution to the segmentation problem by combining techniques of the previous categories. Most of them are based on the integration of edge- and region-based methods. In [20], the image is initially partitioned into regions using surface curvature-sign and, then, a variable-order surface fitting iterative region merging process is initiated. In [28], the image is initially segmented using the region-based split-and-merge technique and, then, the detected contours are refined using edge information. In [29], an initial image partition is obtained by detecting ridges and troughs in the gradient magnitude image through maximum gradient paths connecting singular points. Then, region merging is applied through the elimination of ridges and troughs via similarity/dissimilarity measures.

The algorithm proposed in this paper belongs to the category of hybrid techniques, since it results from the integration of edge- and region-based techniques through the morphological watershed transform. Many morphological segmentation approaches using the watershed transform have been proposed in the literature [30], [31]. Watersheds have also been used in multiresolution methods for producing resolution hierarchies of image ridges and valleys [3], [32]. Although these methods were successful in segmenting certain classes of images, they require significant interactive user guidance or accurate prior knowledge of the image structure. By improving and extending earlier work on this problem [8], [33], [34], the proposed algorithm delivers accurately localized, one-pixel wide, and closed object contours/surfaces, while it requires a small number of input parameters (semiautomatic segmentation).
Initially, the noise corrupting the image is reduced by a novel noise reduction technique that is based on local homogeneity testing followed by local classification [35] (a simplified sketch follows the outline below). This technique is applied to the original image and preserves edges remarkably well, while reducing the noise quite effectively. At the second stage, this noise suppression allows a more accurate calculation of the image gradient and a reduction of the number of detected false edges. Then, the gradient magnitude is input to the watershed detection algorithm, which produces an initial image tessellation into a large number of primitive regions [31]. This initial oversegmentation is due to the high sensitivity of the watershed algorithm to the gradient image intensity variations and, consequently, depends on the performance of the noise reduction algorithm. Oversegmentation is further reduced by thresholding the gradient magnitude prior to the application of the watershed transform. The output of the watershed transform is the starting point of a bottom-up hierarchical merging approach, where at each step the most similar pair of adjacent regions is detected and merged. Here, the region adjacency graph (RAG) is used to represent the image partitions and is combined with a newly introduced nearest neighbor graph (NNG), in order to accelerate the region merging process. Our experimental results indicate a remarkable acceleration of the merging process in comparison to RAG-based merging. Finally, a merging stopping rule may be adopted for unsupervised segmentation.

In Section II, the segmentation problem is formulated and the algorithm outline is presented. In Section III, a novel edge-preserving noise reduction technique is presented as a preprocessing step, followed by the proposed gradient approximation method. In Section IV, the watershed algorithm used and an oversegmentation reduction technique are briefly described. In Section V, the proposed accelerated bottom-up hierarchical merging process is presented and analyzed. Results are presented in Section VI on two-dimensional/three-dimensional (2-D/3-D) synthetic and real magnetic resonance (MR) images. Finally, conclusions and possible extensions of the algorithm are discussed in Section VII.
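Before the formal problem statement, the flavor of the first (noise reduction) stage can be sketched as follows. This is a simplified stand-in for the paper's moment-based mixture estimator: the test constant `alpha` and the crude two-class split rule are our assumptions.

```python
import numpy as np

def denoise_homogeneity(g, sigma, w=5, alpha=3.0):
    """Sketch of edge-preserving noise reduction by local homogeneity testing.

    For each w x w window: if the sample variance is consistent with the
    noise variance sigma**2, the window is declared homogeneous and the
    center pixel is replaced by the window mean; otherwise the window is
    treated as a two-class mixture and the center pixel is snapped to the
    nearer of two crude class means. The split rule and `alpha` are
    simplifications of the paper's moment-based estimator.
    """
    pad = w // 2
    gp = np.pad(g.astype(float), pad, mode="reflect")
    out = np.empty_like(g, dtype=float)
    for i in range(g.shape[0]):
        for j in range(g.shape[1]):
            win = gp[i:i + w, j:j + w]
            mu = win.mean()
            if win.var() <= alpha * sigma**2:      # homogeneity test
                out[i, j] = mu
            else:                                  # crude 2-class split at mu
                lo, hi = win[win <= mu].mean(), win[win > mu].mean()
                c = gp[i + pad, j + pad]
                out[i, j] = lo if abs(c - lo) <= abs(c - hi) else hi
    return out

# Example: denoised = denoise_homogeneity(noisy, sigma=13, w=5)
```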
II. PROBLEM FORMULATION AND ALGORITHM OUTLINE

Let Q = {0, 1, ..., 255} be the set of intensities and x = (x1, x2) the spatial coordinates of a pixel in a 2-D image. The (w1 × w2) neighborhood of pixel x is defined as N(x) = {y : |y1 − x1| ≤ ⌊w1/2⌋, |y2 − x2| ≤ ⌊w2/2⌋}, where w1, w2 are odd and ⌊·⌋ denotes the largest integer not greater than its argument. In the 3-D case, the neighborhood of a point x = (x1, x2, x3) is defined analogously. It is assumed that the true image f is corrupted by additive, independent, identically distributed Gaussian noise. Hence, the observed image is

g(x) = f(x) + n(x), with n(x) ~ N(0, σ²).  (1)

It is also assumed that the true image is piecewise constant. More specifically, there is a partition {R1, ..., RK} of the image domain, for some natural number K, such that

f(x) = μi, for all x in Ri,  (2)

where

μi ≠ μj, if Ri and Rj are adjacent.  (3)

It is reminded that two regions are adjacent if they share a common boundary, that is, if there is at least one pixel in one region such that its 3 × 3 (3 × 3 × 3 in 3-D) neighborhood intersects the other region.

III. NOISE REDUCTION AND GRADIENT APPROXIMATION

A homogeneous window of the observed image, for odd window size w, is considered to be a sample of size w² drawn from a Gaussian distribution with unknown mean and variance σ². A heterogeneous window is considered to be a sample drawn from a mixture of two such classes, whose parameters are estimated from the low-order sample moments of the window, as in (9) and (10). Experimental comparisons of the moment estimator with the ML estimator have shown that, when the classes are well-separated, the estimators yield nearly identical estimates [8]. Provided that the original image follows the adopted piecewise constant model and the noise is above a certain level, the performance of the proposed noise reduction method is superior to that of other methods, such as linear filtering, median filtering, and anisotropic diffusion [36]. The performance of the noise reduction stage depends on the accurate estimation of the noise variance σ² in the observed image; several noise variance estimation methods have been proposed in the literature [37]. The noise reduction stage also depends on the value of the window parameter w.

At the second stage, the image gradient is computed. Among the known gradient operators, namely, classical (Sobel, Prewitt), Gaussian, or morphological, the Gaussian derivatives have been extensively studied in the literature [12]. Provided that the original noise level is not high, or the noise has been effectively reduced in the first stage, all of the above operators may perform well. However, if the original noise level is high, or the noise has not been effectively reduced in the first stage, the use of small-scale Gaussian derivative filters may further reduce noise. Finally, the gradient magnitude image is passed to the next stage.

IV. WATERSHED DETECTION

Let g be a greyscale digital image. Watersheds are defined as the lines separating the so-called catchment basins, which belong to different minima. More specifically, a minimum M is a connected set of pixels of equal intensity from which it is impossible to reach a point of lower intensity without climbing. The catchment basin CB(M) of a minimum M is the set of pixels such that, if a drop of water falls at any pixel in the set, it flows down to M. The watersheds computation algorithm used here is based on immersion simulations [31], that is, on the recursive detection and fast labeling of the different catchment basins using queues. The algorithm consists of two steps: sorting and flooding. At the first step, the image pixels are sorted in increasing order according to their intensities. Using the image intensity histogram, a hash table is allocated in memory, with one bucket per intensity value. Then, this hash table is filled by scanning the image. Therefore, sorting requires scanning the image twice using only constant memory. At the flooding step, the pixels are quickly accessed in increasing intensity order (immersion) using the sorted image, and labels are assigned to catchment basins. The label propagation is based on queues constructed using neighborhoods [31].

The output of the watersheds algorithm is a tessellation of the input image into its different catchment basins, each one characterized by a unique label. Among the image watershed points, only those located exactly half-way between two catchment basins are given a special label [31]. In order to obtain the final image tessellation, the watersheds are removed by assigning their corresponding points to the neighboring catchment basins. The input to the watersheds algorithm is the gradient magnitude image |∇g|.

Fig. 1. Flow diagram of the proposed segmentation algorithm.
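This stage is easy to reproduce with standard tools. A self-contained sketch on a synthetic piecewise constant image (the 80/110 levels mirror the experiments of Section VI; the threshold T anticipates (11) below and is illustrative):

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

rng = np.random.default_rng(0)
# Piecewise constant test image (background 80, object 110) plus noise,
# standing in for the output of the noise reduction stage.
g = np.full((128, 128), 80.0)
g[40:90, 30:100] = 110.0
g += rng.normal(0, 3, g.shape)

# Gradient magnitude via small-scale Gaussian derivatives.
grad = ndimage.gaussian_gradient_magnitude(g, sigma=0.7)

# Threshold the gradient so flat regions collapse to a few zero-valued minima.
T = 5.0
grad_t = np.where(grad < T, 0.0, grad)

# Immersion-style watershed; with no markers given, every regional minimum
# of the thresholded gradient seeds one catchment basin (primitive region).
labels = watershed(grad_t)
print(labels.max(), "primitive regions")
```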
Several watershed-based approaches [30], [31] do not use all the regional minima of the input image in the flooding step but only a small number of them. These selected regional minima are referred to as markers. Prior to the application of the watershed transform, the intensity image can be modified so that its regional minima are identical to a predetermined set of markers by the homotopy modification method [30]. This method achieves the suppression of the minima not related to the markers by applying geodesic reconstruction techniques, and can be implemented efficiently using queues of pixels [38]. Although markers have been successfully used in segmenting many types of images, their selection requires either careful user intervention or explicit prior knowledge of the image structure.

In our approach, image oversegmentation is regarded as an initial image partition to which a fast region-merging procedure is applied (see Section V). As explained in Section V, the larger the initial oversegmentation, the higher the probability of false region merges during merging. In addition, the computational overhead of region merging clearly depends on the size of this initial partition, and consequently the smallest possible oversegmentation size is sought. One way to limit the size of the initial image partition is to prevent oversegmentation in homogeneous (flat) regions, where the gradient magnitude is low since it is generated by the residual noise of the first stage (see Fig. 1). To this end, the watershed transform is applied to the thresholded gradient magnitude image

|∇g|T(x) = |∇g|(x), if |∇g|(x) ≥ T; 0, otherwise,  (11)

so that the numerous regional minima of |∇g| located in homogeneous regions are replaced by fewer zero-valued regional minima. Note, however, that a large threshold T may cause merging of regional minima that belong to different objects in the thresholding process. A candidate edge pixel is defined as one whose gradient magnitude is at least T. The threshold T in (11) may be determined directly based on the estimated noise variance; the values which produce satisfactory initial oversegmentation reduction in almost all experimental cases considered are less than a small multiple of the estimated noise standard deviation. In this way, severe oversegmentation is avoided through edge-preserving noise reduction and gradient magnitude thresholding (Section III).

V. FAST REGION MERGING

B. Region Dissimilarity Functions

The objective cost function used in this work is the square error of the piecewise constant approximation of the observed image g. Let Si be the set of pixels belonging to region Ri, with |Si| its size. In the piecewise constant approximation of g over a partition {R1, ..., RK}, each region Ri is represented by the mean value μi of g over Si, and the corresponding square error is E = Σi Σ{x in Si} (g(x) − μi)². At each merging step, the optimal pair of adjacent regions is the one whose merging increases E the least, which is equivalent to minimizing the following dissimilarity function [41], [42]:

δ(Ri, Rj) = (|Si| |Sj| / (|Si| + |Sj|)) (μi − μj)².  (12)

The RAG of a K-partition is defined as an undirected graph G = (V, E), where V = {1, 2, ..., K} is the set of nodes and E is the set of edges. Each region is represented by a graph node, and for every pair of adjacent regions (nodes) there exists an edge connecting them. The K-partition image is used for the construction of the initial RAG.

Fig. 2. Six-partition of an image (left), and the corresponding RAG (right).

Fig. 3. Merging of two RAG nodes.

Fig. 4. RAG (left) and one of its possible NNG's (right).

Fig. 5. Examples of the three possible NNG-cycle modification types due to merging. (a) NNG-cycle cancellation. (b), (c) NNG-cycle creation with (b) and without (c) the participation of the node resulting from merging. Notation: RAG edge (···), NNG edge (→) and NNG cycle (↔).

Fig. 6. Two examples of NNG-edge modification due to merging.

Given the RAG of the initial K-partition and the heap of its edges, the RAG of the suboptimal final partition is constructed by the following algorithm, which implements the stepwise optimization procedure described above.

Input: RAG of the initial K-partition and the heap of its edge costs.
Iteration: find the minimum cost edge in the heap (constant time), merge the corresponding pair of nodes, and update the RAG and the heap.
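Under the piecewise constant model, both the cost (12) and the statistics of a merged region have closed forms, and the stepwise merging loop is short enough to sketch. The following Python sketch (all names are ours, not the paper's code) keeps a single candidate edge per region in a lazy heap — already anticipating the nearest-neighbor idea developed in the "Fast Nearest Neighbor Merging" subsection below — and skips stale entries on pop instead of locating and updating them:

```python
import heapq

def merge_cost(n_i, mu_i, n_j, mu_j):
    # Increase in total square error when regions i and j are merged --
    # the dissimilarity of (12).
    return (n_i * n_j) / (n_i + n_j) * (mu_i - mu_j) ** 2

def merged_stats(n_i, mu_i, n_j, mu_j):
    # Size and mean of the union, used to update edge costs after a merge.
    n = n_i + n_j
    return n, (n_i * mu_i + n_j * mu_j) / n

def merge_regions(stats, adj, k_final):
    """stats : {label: (size, mean)}; adj : {label: set of adjacent labels}.

    Keeps one candidate edge per region (its nearest neighbor) in the heap;
    stale entries are detected by version counters and refreshed on pop.
    """
    version = {r: 0 for r in stats}
    heap = []

    def push_nearest(i):
        if adj[i]:
            j = min(adj[i], key=lambda k: merge_cost(*stats[i], *stats[k]))
            heapq.heappush(heap, (merge_cost(*stats[i], *stats[j]),
                                  version[i], version[j], i, j))

    for r in stats:
        push_nearest(r)

    while len(stats) > k_final and heap:
        c, vi, vj, i, j = heapq.heappop(heap)
        if i not in stats:
            continue                      # region already absorbed
        if j not in stats or vi != version[i] or vj != version[j]:
            push_nearest(i)               # stale candidate: recompute, retry
            continue
        # Contract RAG edge (i, j): merge j into i.
        stats[i] = merged_stats(*stats[i], *stats[j])
        adj[i] |= adj[j] - {i}
        adj[i].discard(j)
        for k in adj[j] - {i}:
            adj[k].discard(j)
            adj[k].add(i)
        del stats[j], adj[j], version[j]
        version[i] += 1
        push_nearest(i)
    return stats

# Example: stats = {1: (40, 82.0), 2: (35, 81.5), 3: (50, 110.0)}
#          adj   = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
#          merge_regions(stats, adj, k_final=2)  # merges regions 1 and 2
```

Skipping stale entries trades the paper's explicit NNG-cycle bookkeeping for simplicity: the heap still holds roughly one entry per region rather than one per RAG edge, but the merge order may occasionally deviate from strict best-first.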
The merging operation causes changes in both the RAG and the heap. All RAG nodes that neighbored a node of the merged node pair must restructure their neighbor lists. Also, the dissimilarity values (costs) of the edges between the neighboring nodes and the node resulting from the merging change, and must be recalculated using (12). The positions of the changed-cost edges in the heap must be updated, requiring logarithmic time for each update. In addition, a few edges must be removed, since they are canceled due to merging. This is illustrated in Fig. 3, where a merging example of two RAG nodes is given. Before the merging of nodes a and b, node e is a common neighbor of a and b. After their merging, one of the edges (a, e), (b, e) must be removed from the RAG and the heap. Then, the positions of the changed-cost edges in the heap must be updated (edges (ab, c), (ab, d), (ab, e) in Fig. 3). However, since these positions are unknown, a linear search operation is required for each of them, and the resulting time per merge is considerably increased. This is particularly true in 3-D images, where the initial partition usually contains a very large number of regions.

Fig. 7. Synthetic image (left) and real medical MR image (right).

Fig. 8. Result of the noise reduction stage on the images of Fig. 7.

D. Fast Nearest Neighbor Merging

The proposed solution to accelerate region merging is based on the observation that it is not necessary to keep all RAG edges in the heap, but only a small portion of them [8]. Specifically, we introduce the NNG, which is defined as follows. For a given RAG G = (V, E), the NNG is a directed graph with node set V which contains the directed edge (i, j) if and only if Rj is the most similar, in the sense of (12), among the RAG neighbors of Ri; in the case of ties, the edge is directed toward the node with the minimum label. The above definition implies that the out-degree of each node is equal to one: the edge starting at a node is directed toward its most similar neighbor. A cycle in the NNG is defined as a sequence of connected graph nodes (path) in which the starting and ending nodes coincide (see Fig. 4). By definition, the NNG contains K edges and has the following properties [8].

Property 1: The NNG contains at least one cycle.
Property 2: The maximum length of a cycle is two.
Property 3: The regions of the most similar pair are connected by a cycle.
Property 4: A node can participate in at most one cycle.
Property 5: The maximum number of cycles is ⌊K/2⌋.

Fig. 9. Initial segmentation results of the images in Fig. 8 after applying the Gaussian filter (σ = 0.7) and thresholding. (a) T = 0 (2672 regions). (b) T = 0 (3782 regions). (c) T = 5 (1376 regions). (d) T = 5 (1997 regions).

Since the most similar pair of regions always forms an NNG cycle (Property 3), it suffices to keep only the cycle edges in the priority queue. The merging iteration then becomes: find the minimum cost NNG cycle, merge the corresponding pair of nodes, and update the RAG and the NNG.

During the merging operation, the NNG is updated as follows. When the nodes of a cycle are merged, the costs of the neighboring RAG edges and, consequently, the structure of the NNG are modified. Two NNG cycles are defined as neighbors if there is at least one RAG edge connecting two of their nodes. For example, in Fig. 5(a), merging the two nodes of a cycle changes the costs of the edges incident to the resulting node, which cancels a neighboring cycle; alternatively, a new cycle may be created, with or without the participation of the node resulting from the merging [Fig. 5(b) and (c)]. The NNG update requires time proportional to the number of NNG cycles modified by the merge, which is small in practice.

VI. EXPERIMENTAL RESULTS

The synthetic and real medical MR (256 × 256, 8 b/pixel) images shown in Fig. 7 were used in order to illustrate the stages of the segmentation algorithm and visually assess the quality of the segmentation results. The synthetic image [Fig. 7(a)] is piecewise constant, the background intensity level is 80, the object intensity level is 110, and it contains simulated additive white Gaussian noise with standard deviation 13. In the above MR image, the noise standard deviation was estimated from the data. The window size was set to 11 × 9 for the MR image, and it affects the performance of the noise reduction algorithm as follows. For large window sizes, the power of the homogeneity test (i.e., the probability of correctly accepting heterogeneity) is large in the case of step edges, while it is relatively small in the case of bar edges. Therefore, the thin features of the image (lines, corners) are oversmoothed. For small window sizes, the power of the homogeneity test is small and the variance of the mixture parameter estimates is large; therefore, the resulting noise reduction is small. However, the above phenomena occur for very noisy images. In Fig. 8, it is clear that the noise is sufficiently reduced while the image context is preserved and enhanced. Note that the proposed noise reduction algorithm does not impose any smoothness constraints and, therefore, when the noise level is not high, the image structure is preserved remarkably well. However, we believe that the lack of smoothness constraints is the source of the nonrobust behavior of the algorithm on very noisy images. In addition, the adopted image model does not handle more complex structures, such as smooth intensity transitions (ramp edges) and junctions.

At the second stage, the gradient magnitude of the smoothed image is calculated using the Gaussian filter derivatives with scale σ = 0.7. Then, the gradient magnitude was thresholded using (11), where the smoothed gradient magnitude was obtained by 3 × 3 neighborhood averaging of noncandidate edge pixels. At the third stage, the watershed detection algorithm was applied to the thresholded image gradient magnitude. Fig. 9 shows the initial tessellations of the images produced by the application of the watershed detection algorithm on the image gradient magnitude for various thresholds. It is clear that the larger the threshold, the smaller the number of regions produced by the watershed detection algorithm. However, the use of high thresholds may destroy part of the image contours, which cannot be recovered at the merging stage of the segmentation algorithm. More specifically, it was observed that, when the noise is not high, the choice of a threshold value close to the noise standard deviation is safe. However, when noise is high, small threshold values should be used. This is justified by the fact that, when noise is high, the noise reduction algorithm may oversmooth part of the image intensity discontinuities, resulting in low gradient magnitudes. Therefore, the use of high threshold values in (11) may destroy part of the object boundaries.

Fig. 14. Segmentation of a natural image. (a) Original image ("MIT"). (b) Result of the noise reduction stage. (c) Initial segmentation after gradient thresholding (T = 2, 2347 regions). (d) Final segmentation (80 regions).

The initial tessellations are used at the last stage of the algorithm for the construction of the RAG's and NNG's, and then the merging process begins. Fig. 10 shows several intermediate results of the merging process using the corresponding initial segmentation results shown in Fig. 9(c) and (d). The final segmentation results are given in Fig. 11 with seven and 25 regions, respectively. The number of regions of the initial image tessellation determines the computational and memory requirements for the construction and processing (merging) of the RAG and NNG. The number of RAG edges and the number of NNG cycles are shown in Fig. 12 as a function of the number of merges. The size of the cycle heap is nearly one order of magnitude smaller than the size of the heap of RAG edges. As explained in Section V, the additional computational effort for manipulating the NNG at each merge of a region pair depends on the distribution of the second order neighborhood size in the RAG. In Fig. 13, a typical histogram of the RAG degree at an intermediate stage of merging is shown. As expected, the RAG is a graph with low mean degree, and this explains the low additional computational effort for the NNG maintenance.

Fig. 10. Intermediate segmentation results. Top: 1000 regions. Middle: 500 regions. Bottom: 100 regions.

Fig. 11. Final segmentation results overlaid on the original images. Left: 7 regions. Right: 25 regions.

Fig. 12. Number of RAG edges (solid line) and NNG cycles (dotted line) as a function of the merge number for the image in Fig. 8 (right).

Fig. 13. Histogram of the RAG node degree for the image in Fig. 8 (right).

TABLE II. Typical execution times of the proposed segmentation algorithm and its stages, with and without the use of the NNG.

The proposed segmentation algorithm was also applied to natural images, such as the standard "MIT" image (256 × 256). The result of the noise reduction stage using a 5 × 5 window, the initial segmentation after gradient thresholding (T = 2, 2347 regions), and the final segmentation result (80 regions) are given in Fig. 14(b)–(d), respectively. Note that, despite the simplicity of both the underlying image model and the dissimilarity function used, the majority of important image regions were successfully extracted.

The 3-D version of the algorithm was applied to a 16-slice, 256 × 256 MR cardiac image, a slice of which is shown in Fig. 15(a). Fig. 15(b) shows the result of the noise reduction stage, where a 3 × 5 window was used. Fig. 15(c) shows the initial segmentation which resulted from the watershed detection stage on the thresholded gradient magnitude image, where the scale of the Gaussian filter was 0.7 and the threshold was chosen as described above. Lastly, Fig. 15(d) shows the final segmentation result containing 40 3-D regions. Based on our experiments, we concluded that the smaller the number of the initial (correct) partition segments, the better the final segmentation results. On the other hand, the use of thresholds producing initial partitions with a small number of segments may cause the disappearance of a few significant contours.

The 2-D and 3-D versions of the proposed image segmentation algorithm were implemented in the C programming language on a Silicon Graphics (R4000) computer. Table II shows typical execution times and percentages with respect to the total time for each stage of the proposed algorithm, with and without the use of the NNG. Note that the noise reduction stage requires a great percentage of the total execution time. This is due to the current implementation, in which the required parameters are computed at each window position separately. The noise reduction algorithm may be accelerated by considering a faster implementation, namely, using the separability property in order to compute the sample moments [12]. Finally, the memory requirements of the proposed algorithm are high, due primarily to the watershed algorithm [31]; at the merging step, the memory required is determined by the size of the initial partition.

Fig. 15. Three-dimensional image segmentation results. (a) Raw 3-D MR image (slice 5). (b) Smoothed image. (c) Initial oversegmentation (3058 regions). (d) Final segmentation (40 regions).
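Putting the stages together in the spirit of Fig. 1, reusing the sketches above; `build_rag` is a hypothetical helper that would scan label transitions to collect region sizes, means, and adjacencies, and all parameters are illustrative:

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

def segment(g, sigma_noise, t_grad, k_final):
    d = denoise_homogeneity(g, sigma=sigma_noise, w=5)        # stage 1 (sketch above)
    grad = ndimage.gaussian_gradient_magnitude(d, sigma=0.7)  # stage 2
    labels = watershed(np.where(grad < t_grad, 0.0, grad))    # stages 3-4
    stats, adj = build_rag(labels, d)  # hypothetical helper, see lead-in
    return merge_regions(stats, adj, k_final)                 # stage 5 (sketch above)
```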
