Printed Circuit Board Vocabulary: Chinese-English Glossary
线路板词汇中英文对照2009—08—20 18:42A/W (artwork) 底片Ablation 烧溶(laser),切除abrade 粗化abrasion resistance 耐磨性absorption 吸收ACC ( accept ) 允收accelerated corrosion test 加速腐蚀accelerated test 加速试验acceleration 速化反应accelerator 加速剂acceptable 允收activator 活化液active work in process 实际在制品adhesion 附着力adhesive method 黏着法air inclusion 气泡air knife 风刀amorphous change 不定形的改变amount 总量amylnitrite 硝基戊烷analyzer 分析仪anneal 回火annular ring 环状垫圈;孔环anode slime (sludge) 阳极泥anodizing 阳极处理AOI ( automatic optical inspection ) 自动:光学检测applicable documents 引用之文件AQL sampling 允收水准抽样aqueous photoresist 液态光阻aspect ratio 纵横比(厚宽比)As received 到货时back lighting 背光back—up 垫板banked work in process 预留在制品base material 基材baseline performance 基准绩效batch 批beta backscattering 贝他射线照射法beveling 切斜边;斜边biaxial deformation 二方向之变形black-oxide 黑化blank controller 空白对照组blank panel 空板blanking 挖空blip 弹开blister 气泡;起泡blistering 气泡blow hole 吹孔board-thickness error 板厚错误bonding plies 黏结层bow ; bowing 板弯break out 从平环内破出bridging 搭桥;桥接BTO (Build To Order)接单生产burning 烧焦burr 毛边(毛头)camcorder 一体型摄录放机carbide 碳化物carlson pin 定位梢carrier 载运剂catalyzing 催化catholic sputtering 阴极溅射法caul plate 隔板;钢板calibration system requirements 校验系统之各种要求center beam method 中心光束法central projection 集中式投射线certification 认证chamfer 倒角(金手指)chamfering 切斜边;倒角characteristic impedance 特性阻抗charge transfer overpotential 电量传递过电压chase 网框checkboard 棋盘chelator 蟹和剂chemical bond 化学键chemical vapor deposition 化学蒸着镀circumferential void 圆周性之孔破clad metal 包夹金属clean room 无尘室clearance 间隙coat 镀外表coating error 防焊覆盖错误coefficient of thermal expansion (CTE) 热澎胀系数cold solder joint 冷焊点cold—weld 金属粉末冷焊color 颜色color error 颜色错误compensation 补偿competitive performance 竞争力绩效complex salt 错化物complexor 错化物component hole 零件孔component side 零件面concentric 同心conformance 密贴性consumer products 消费性产品contact resistance 接触电阻continuous performance 连续发挥效能contract service 协力厂controlled split 均裂式conventional flow 乱流方式conventional tensile test 传统张力测试法conversion coating 转化层convex 突出coordinate list 数据清单copper claded laminates (CCL)铜箔基板copper exposure 线路露铜copper mirror 镜铜copper pad 铜箔圆配copper residue (copper splash) 铜渣corrosion rate numbering 腐蚀速率计数系统corrosion resistance 抗蚀性coulombs law 库伦定律countersink 喇叭孔coupon 试样coupon location 试样点covering power 遮盖力CPU 中央处理器crack 破裂;裂痕crazing 裂痕;白斑cross linking 交联聚合cross talk 呼应作用crosslinking 交联crystal collection 结晶收集curing 聚合体current efficiency 电流效率cut—outs 挖空cutting 裁板cyanide 氰化物cycles of learning 学习循环cycle—time reduction 交期缩短date code 周期deburring 去毛头dedicated 专用型degradation 退变delamination 分层dent / pin hole 凹陷/ 针孔department of defense 国防部designation 字码简示法de—smear 除胶渣developing 显影dewetting 缩锡dewetting time 缩锡时间dimension error 外形尺寸错误dielectric constant 介质常数difficulty 困难度difunctional 双功能dimension 尺寸dimension stability 尺寸安定性dimensional stability 尺度安定性dimension and tolerance 尺寸与公差dirty hole 孔内异物discolor hole 孔黑;孔灰;氧化discoloration 变色disposable eyelet method 消耗性铆钉法distortion factor 尺寸变形函数double side 双面板downtime 停机时间drill 钻孔drill bit 钻头drill facet 钻尖切萷面drill pointer 钻尖重(研)磨机drilled blank board 已钻孔之裸板drilling 钻孔dry film 干膜ductility 延展性economy of scale 经济规模edge spacing 板边空地edge-board contact ( gold finger ) 金手指efficiency 能量效率electric test 电测electrical testing 电测;测试electrochemical machine ECM 电化学加工法electrochemical reactor 电化学反应器electroforming 电铸electroless plate 化学铜electroless—deposition 无电镀electropolishing 电解拋光electrorefining 电解精炼electrowinning 电解萃取elliptical set 椭圆形embrittlement 脆性entitlement performance 可达成绩效entrapment 电镀夹杂物epoxy 环氧树酯equipotential 电位线error data file 异常情形etch rate 蚀铜速率etchants 蚀刻液etchback 回蚀evaluation program 评估用程序exposure 
曝光external pin method 外部插梢法eyelet hole 铆钉孔Eyeletting 铆眼fabric 网布failure 故障fast response 快速响应fault 瑕庛;缺陷fiber exposure 纤维显露fiber protrusion 纤维突出fiducial mark 光学点,基准记号filler 填充料film 底片filtration 过滤finished board 成品fixing 固着fixture 电测夹具(治具)flaking off 粹离flammability rating 燃性等级flare 喇叭形孔flat cable 并排电缆feedback loop 回馈循环first-in-first-out (FIFO)先进先出flexible manufacturing system (FMS) 弹性制造系统flux 助焊剂foil distortion 铜层变形fold 空泡foreign include 异物foreign material 基材内异物free radical chain polymerization 自由基连锁聚合fully additive 加成法fully annealed type 彻底回火轫化之类形function 函数fundamental and basic 基本fungus resistance 抗霉性funnel flange 喇叭形折翼galvanized 加法尼化制程gap 钻尖分开gauge length 有效长度gel time 胶化时间general resist ink 一般阻剂油墨general 通论general industrial 一般性(电子)工业级geometrical levelling 几何平整glass transition temperature (Tg)玻璃态转换温度Gold 金gold finger 金手指gold plating 镀金golden board 标准板gouges 刷磨凹沟gouging 挖破grain boundary 金属晶体之四边green 绿色grip 夹头ground plane 接地层ground plane clearance 接地空环hackers 骇客HAL ( hot air leveling ) 喷锡haloing 白边;白圈hardener 硬化剂hardness 硬度hepa filter 空气滤清器high performance industrial 高性能(电子)工业级high reliability 高可靠度high resolution 高分辨率high temperature elongation (HTE)高温延展性铜箔high temperature epoxy (HTE) 高温树酯hit 击hole counter 数孔机hole diameter 孔径hole diameter error 孔径错误hole location 孔位hole number 孔数hole wall quality 孔壁品质hook 外弧hot dip 热浸法hull cell 哈氏槽hybrid 混成集成电路hydrogen bonding 氢键hydrolysis 水解hydrometallurgy 湿法冶金法image analysis system 影像分析系统image transfer 影像转移immersion gold 浸金(化镍金)immersion plating 浸镀法impedance 阻抗infrared reflow 红外线重熔inhibitor 热聚合抑制剂injection mold 射模ink 油墨innerlayer & outlayer 内外层insulation resistance 绝缘电阻intended position 应该在的位置intensifier 增强器intensity 强度inter molecular exchange 交互改变interconnection 互相连通ionic contaminants 离子性污染物ionic contamination testing 离子污染试验IPA 异丙醇5I : inspiration (启蒙)identification 确认计划目标implementation 改善方案information 数据internalization 制度化invisible inventory 无形的库存knife edges 刀缘Knoop 努普(硬度单位)kraft paper 牛皮纸laminar flow 层流laminate 基层板laminating 压合lamination 压合laminator 压膜机land 焊垫lay back 刃角磨损lay up 组合叠板layout 布线;布局lead screw 牵引螺丝leakage 漏电learning curve 学习曲线legend 文字标记leveling 平整levelling additive 平整剂levelling power 平整力life support 维系生命limiting current 极限电流line space 线距line width 线宽linear variable differential transformer(LVDL)线性可变差动:转换器liquid 液状(态)liquid crystal resins 液晶树脂liquid photoimageable solder resist ink 液态感光防焊油墨liquid photoresist ink 液态光阻剂油墨lot size 批量lower carrier 底部承载板mechanical plating 机祴镀法machine scrub 刷磨清洁法macrothrowing power 巨分布力margin 钻头刃带market share 市场占有率marking error 文字错误masked leveling 儰装平整mass lamination 大型压板mass transfer 质量传送效应mass transfer overpotential 质量传递过电压mass transportation 质传master drawing 主图;蓝图material use factor 材料使用率mealing 泡点;白点memory 记忆装置meniscograph solderability measurement 新月型焊锡效果microetch 微蚀microetching 微蚀microfocus 微焦距microfocus system 微焦距系统microprofile 微表面microsectioning 微切片法microthrowing power 微分布力migration 迁移mini-tensile tester 迷你拉力测试仪mis hole location 孔位错误misregistration 焊锡面与零件面对位偏差misregsitration 对不准moisture and insulation resistance test 湿气与绝缘电阻试验molded circuit board (MCB)模制电路板monoethanal amine 单乙醇氨monohydrate state 水化物monomer 单分子膜;单体mouse bite 锯齿;蚀刻缺口msec 毫秒muffle furnace 高温焚火炉multichip 超大IC型(多芯片模块)mylar 保护膜nail head 钉头NC drill 数字钻孔机negative etchback 反回蚀negative film 负片negative rake angle 负抠耙角network 回路;网络neutralization 中和nick 缺口nickel 镍nodule 铜瘤;瘤粒no flow resin 不流树脂noise 噪声nominal 标示nominal dimension 标定长度nominal gel time 标示胶性时间nominal resin content 标示胶含量nominal resin flow 标示胶流量nominal scaled flow thickness 标示比例流量厚度OA equip 
办公室自动:化设备obsolescence factor 报废因素OEM 原设备制造商offset—list 补偿数据清单ohmmeter 欧姆计open 断路open circuits 断路open short testing 断短路测试opening 开口original art work (A/W)原稿底片Others 其它outgrowth 增出over design 牛刀杀鸡overlap 钻尖重叠overlay entry 盖板overpotential 过电压oxidation 氧化oxide treatment 黑化处理oxided cytochrome 氧化性之细包色素oxygen evolution 氧气发生反应packed bed 充填床式pad 锡垫;圆配pad copper exposure pad露铜panel 小型板面;母板panel plating 一次铜电镀parasitic 寄生的part no. 料号pattern plating 二次铜电镀PCB ( print circuit board )印刷电路板pcs 片peel strength 抗撕强度peeling off 剥离(剥落)performance specification 性能规范permittivity 透电率perspectives on experience 经验透视PET 聚酯photodiode detector 发光二极管侦测器photo initiator 感光启始剂photoresist 光阻phototool 光具(指工作底片)piece 子板面pinceton applied research 腐蚀测定仪pink ring 粉红圈pit 凹点pitch 脚距planar 平面plating 电镀plating exposure 下镀层露出plug gauge 插规plug hole 孔塞PNL (panel)排板polar-polar interaction 极性之间的吸力polyester 聚酯类polyglycols 聚乙二醇polyimide 聚亚醯氨poor bevelling 磨边加工引起突起,剥离poor drill 孔形不良poor HAL 喷锡不良poor marking 字体不良poor pad 锡垫不良poor printed 印刷偏差poor solderability 焊锡性不良poor touch—up 补线不良position control system 位置控制系统positive rake angle 正抠耙角power curve model 幕次曲线模式practice 工艺惯例preferred 良好premature tearing 提前撕裂prepolymer 预聚合物prepreg 胶片pre-process ( front-end) 制前press 压床press cycle 压合周期primary current distribution 一次电流分布primary 主要product lifetimes 生命周期product process 制程promoter 促进剂protocal 初步资料prussic acid 普鲁士酸PTF-based process 厚膜糊法PTH (plating though hole)导通孔pull away 拉开pumice 浮石粉pumice scrub 喷砂清洁法pyrometallurgy 火烧法冶炼QC ( quality control)品管QFP (quad flat pack ) 扁方型封装体qualification inspection 资格审查检验qualification testing 资格检定quality classification 品质等级quantitative 计量式测试rack 挂架radiometer 能量剂rake angle 抠耙角RAM [Random Access Memory 随机存取内存real time 关键时刻recessed trace process 凹槽线路法recovery tank 回收槽reduction 还原re-eninforcement 强化refraction 折光率reinforcement style 补强材料的型式register mark 对位用标记registration hole 对位孔registration pattern 长方形铜地REJ ( reject )退货;拒收rejectable 拒收release agent 脱模剂relief angle 浮离角remark 备注repair 修理resin content 树脂含量(胶含量)resin flow 胶流量resin flow percentage 树脂流量之百分率resin recession 树脂下陷resin smear 胶渣resist strippers 剥干膜剂resistor network 排列电阻resolution 解像度return on assets 资产报酬率reversibility 可逆性rework 重工rosin 天然松香rotating cylinder 旋转圆柱形roughtness 孔壁粗糙;粗慥routing 切外形,成型routing bit 铣刀runout 偏转S/L on hole 孔内沾文字S/M ( solder mask ), S/L 防焊文字S/M (solder mask) 防焊S/M error 防焊种类错误S/M on hole 孔内绿漆salt spray test 盐水喷雾试验sampling size 抽样数scope 范围scored 刻痕scoring 枢槽;刮线scrap 废框scratches 刮伤screen printing 网版印刷scum 透明残膜sealing 封孔处理secondary 次要semi—additive 半加成法sensitize 敏化sensitizer 敏化液separator 钢隔板sequential lamination 渐成式压法serrated edges 毛边shatter 破碎short 短路shunt 分路silane treatment 硅烷处理silicone coupling agent 硅烷偶合剂silk screen 文字印刷simulator 仿真器single axis 单轴sizing 底片之伸缩补偿skip 漏印skip printing 跳印;漏印sliver 丝条slot 开槽slotting 开槽SMD ( surface mount device ) 表面黏着组件smear 胶渣SMT ( surface mount technology )表面黏着技术sodium carbonate monohydrate 结晶水碳酸钠soft tooling 软性工具solder 焊锡; 锡铅solder bridge 锡桥solder bump 锡突solder float 漂锡solder mask adhesion 绿漆附着力solder on G/F 金手指沾锡solder on trace 线路沾锡solder plug 锡塞solder side 焊锡面solderability 焊锡性solid carbide 实质碳化物spacing 间距spacing nonenough 间距不足SPC ( Statistical Process Control )统计生管specification 规范special considerations 特别考虑spin coating 旋转涂布spindle 钻轴spiral contractometer 螺旋收缩仪spot face 铣靶spray coating 喷涂Squeegee 刮刀stacking structure 叠板结构stamping 冲压standard hydraulic lamination标准液压法standardizing 标准化starvation 缺胶step tablet 格片数stock option 认股选择权strain 应度strength 强度stressmeter 应力计subtractive 减除法surface convex 表面突起surface examination 
表面检查surface insulation resistance (SIR)表面绝缘电阻surface mount 表面黏着方式surface roughness 表面粗慥度surges 突波switch circuit 开关线路tab 金手指tack free 不黏taped hole gauge 锥形孔规target hole 靶孔task force 任务编组tensile strength 抗拉强度tensile stress 张性应力tent 浮盖terms and definitions 术语与定义termination load 抗匹配负载test circuit 测试线路test method 试验方法test point 测试点thermal shock 热震荡试验thermal stress 热应力试验thermistor 热电感应式thermo cycling 热循环试验theoretical cycle time 理论性周期时间thickness 厚度time to market 上市时机thickness distribution 厚度分布thief 补助阴极thin core 薄基板;内层板throwing power 分布力tolerance 公差;容差tooling hole 工具孔torque load 扭力拒之负载total quality program 全面的品质计划toughness 坚度trace error 线路错误trace nick & pin hole 线路缺口及针孔trace peeling 线路剥离trace pin-hole 线路针孔trace surface roughness 线路表面粗糙tarnish and oxide resist 抗污抗氧化剂transmittance 透光度trim line 裁切线true levelling 真平整true position 真正位置的孔;真位twist 板翘type 种类umbra 本影undercut 侧蚀uneven coating 喷锡厚镀不平整universal 万用型universal tensile tester 万用拉力试验机universal tester 泛用型测试机upper carrier 顶部承载钢uptime 稼动:时间vacuum deposition 真空蒸镀法vacuum hydraulic lamination真空液压法vaporizer 气化室V—cut V形槽vertical microsection 垂直微切片via hole 导通孔visible inventory 有形的库存vision inspection 目视检查Void 孔破void in hole 孔壁上的破洞void in PTH hole 孔破walkman 随身听warehouse 仓库warp 板弯warp , warpage 板弯water absorption 吸水性wear resistance 耐磨度weave exposure 纤纹显露weave texture 织纹隐现wedge angle 契尖角week 周wet chemistry 湿式化学制程wet film 湿膜wet lamination 湿膜压膜法wet process 湿制程wetting 沾锡wetting balance 沾锡平衡法wicking 渗铜;渗入;灯蕊效应width 宽度width reduce 线细width—to-thickness ratio 宽度与厚度的比值window 操作范围work—in—process 在制品work order 工单working film 工作片working master 工作母片year 年yellow 金黄色yield 良率。
News 23 – Industrial Weighing and Measurement: A Guide to Equipping Smart Manual Workplaces
Equipping manual workplaces with smart step-by-step operator guidance and prompting, alongside continuous parts control, can lead to higher overall yield. Here's how:

Intelligent solutions for better quality
Enabling efficiency and agility in manual assembly lines can provide significant competitive advantages. A scale with smart functions can not only count parts but also check different types of parts against a pre-loaded production order. Additionally, smart solutions check whether the correct amount is either placed on the weighing platform or picked from scale-controlled bins. You gain error-free handling with operator guidance and prompting throughout the process.

Machine learning for error reduction
Combine precision weighing and visual verification with machine learning to achieve 100% traceable process control. Advanced tracking and documentation ensure the highest quality and customer satisfaction. Read on to learn how solutions from METTLER TOLEDO can boost productivity and quality.

Enable Smarter Manual Workplaces – Maximize Quality and Productivity
Manufacturers can take steps toward Industry 4.0 implementation by upgrading production with the latest smart technology. Equipment enabled with machine learning can improve processes and increase operator efficiency.

In this issue: Solutions for Secure Quality Control • Never Run out of Parts Again with SmartShelf • Breakthrough Innovation for Productivity and Quality • 4 Solutions to Improve Manual Workplaces

4 Solutions to Improve Manual Workplaces – Achieve Higher Productivity with Smart Weighing
Implement smart weighing and advanced machine learning in manual workplaces to increase workflow capacity and individual processing speed – all while ensuring high quality with zero errors. See how solutions from METTLER TOLEDO can help you better handle a large range of products and components to pick or assemble correctly.

Smart Weighing Solutions
"Since implementing InVision™, we have increased productivity by 30% in our manual workplaces and improved our delivery quality significantly!"

Competence in manufacturing
Explore METTLER TOLEDO's competencies from material receiving to shipping and find the best-fit solutions to keep process reliability under control: /manufacturing

1 Pick&Pack
Pick parts, components or product manuals with the scale-assisted Pick&Pack application. Intuitive user guidance and workflows ensure better picking quality and complete kits or packages, and avoid customer complaints and costly returns. By combining SpeedWeigh with this application, you can significantly reduce your operating time.
To learn more, visit: /ind-pickandpack

2 InVision™
The combination of visual recognition, weighing and intelligent machine learning brings extra process safety to manual production and packaging processes. InVision™ ensures the completeness of assembly kits with smart operator guidance features and a second set of eyes. To learn more, visit: /InVision

3 SmartShelf Inventory Control
Remotely monitor inventory levels with weighing sensors. Choose from an array of integration options to find the best solution to continuously track and analyze stock levels. To learn more, visit: /remote-inventory

4 Smart Assembly Workplace
This solution guides operators through the assembly process while weighing pads control piece counts. It can be equipped with lighted directions for easy picking. Get in contact with your manual workplace system integrator to find the best solution for your needs. For more details, visit: /ind-smart-assembly

New! Enhance Product Quality with Full Visibility – Benefit from Vision-Based Machine Learning
Manual weighing, counting and packaging tasks have high manual error potential. Ease order-picking efforts for operators while ensuring product completeness in production and packaging processes with the latest machine-learning vision and weighing technology: InVision™. /InVision

Increase productivity up to 30%
InVision™ combines guided working steps and intuitive process verification with smart workstation design, leading to increased operator efficiency. See improvements in productivity of up to 30% and get more out of your processes.

Achieve 100% quality kits
Zero missing parts is a reality with InVision™ as your second set of eyes on every package. When you reduce the risk of human error in manual processes, you can achieve 100% quality kits and maximize customer satisfaction.

Connected workstations improve operational transparency
Benefit from automatic data and picture capture with every kit. Send this information directly to your connected ERP or MES for simplified track-and-trace capabilities.

Breakthrough bench scale innovation
InVision™ is able to identify parts that are very similar in weight or in appearance by using both weighing technology and camera recognition:
• The smart algorithm combines the two technologies for optimal parts-identification results in milliseconds
• Two-way verification boosts process safety
• Smart operator guidance on an intuitive touchscreen display reduces errors significantly

New! Inventory Management with SmartShelf – Never Run out of Parts Again
SmartShelf Simplifies Inventory Control
Weighing sensors are the perfect tool for monitoring inventory levels remotely. Integrate them into shelves, racks and industrial vending machines and have continuous visibility into stock levels for easy supply-chain optimization. /remote-inventory

1 SmartShelf: Complete shelves offer maximum simplicity
A complete SmartShelf system offers maximum simplicity because the weighing pads are integrated directly into your shelf. Individually query the weight of every bin for current stock levels. The setup offers:
• a shelf, the integrated weighing pads and individual A/D converters for each weighing pad
• full flexibility, with a weighing range from an 8 kg weighing pad to a 1000 kg floor platform

2 SmartShelf: Weighing pads offer maximum flexibility
Build weighing pads into an existing shelf system where not every bin or container requires control by weight, or build individual scales for integration into shelf systems or machines. This setup is fully equipped to ensure easy integration and maximum flexibility:
• load cell with housing for overload protection, scale board and mounting assembly
• full flexibility, with a weighing range from 2 kg to 20 kg

3 Inventory management software: Track all parts in your network
Full tracking of stock levels and usage is possible with the SmartShelf and weighing-pad system. Simply connect the system described in boxes 1 or 2 to a PC for data exchange and analysis.
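A minimal sketch of the arithmetic behind this kind of weight-based bin monitoring: the piece count is estimated from the net weight and a reference piece weight, and a bin is flagged when the count falls below a reorder point. This is our illustration, not METTLER TOLEDO code; all function names and values are hypothetical.

```python
def piece_count(bin_weight_g, tare_g, unit_weight_g):
    """Estimate the number of pieces in a bin from its net weight."""
    net_g = max(bin_weight_g - tare_g, 0.0)  # subtract the empty-bin (tare) weight
    return round(net_g / unit_weight_g)

def needs_reorder(bin_weight_g, tare_g, unit_weight_g, reorder_point):
    """Flag a bin for replenishment when the estimated count drops below the reorder point."""
    return piece_count(bin_weight_g, tare_g, unit_weight_g) < reorder_point

# Example: a bin of 12.4 g parts on a weighing pad (hypothetical values)
print(piece_count(1250.0, 130.0, 12.4))         # -> 90 pieces
print(needs_reorder(1250.0, 130.0, 12.4, 100))  # -> True: time to restock
```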
Custom inventory management software can keep track of your stock levels while saving you time and money with exact inventory numbers.

Weight-based inventory control for lean operations
Weighing pads offer installation that is flexible to the needs of your operation, whether you are integrating the sensors into shelves, mobile point-of-use racks or vending machines. These solutions allow remote access and visibility into your actual inventory levels. Combined with digital connectivity for easy integration into existing ERP networks, they help you to track inventory levels, find savings potential and discover improvement opportunities for your lean efforts.

Integrate High-Speed Weigh Modules into Machines
Achieve up to 100% process control with in-process weigh modules that check quality by measuring weight before and after each production step. WMS weigh modules for fast in-line checkweighing ensure accurate and reliable testing during automated production. Easy integration, high resolution and proven ruggedness help to guarantee perfect process control. /WMS

Control Heavy Parts with Weighing Platforms
Ensure consistent quality when handling heavy loads. PFK9 high-precision weighing platforms are ideal for checking the completeness of heavy bulk quantities in production output control. /PFK9

Check More than Samples with Bench Scales
Repetitive manual tasks in production are prone to errors. METTLER TOLEDO's ICS685 high-resolution compact scale and end-of-line checkweigher help verify correct quantities, kit and package completeness, and product intactness. Intuitive user interfaces and colored displays guide operators to ensure consistent quality. /ICS685 /checkweigher

Quality Control Across Your Value Chain – Find Your Ideal Solution
Weighing instruments quickly and precisely control the completeness or intactness of products and components. Flexible configurations provide in-process or end-of-line checks that keep your production processes moving.
• Manual and semi-automated quality and completeness checks: ICS685 compact scale
• Accurate weighing for heavy and bulk loads: PFK9 floorscale
• In-process quality checks: WMS weigh module
• End-of-line checks: checkweigher

Manufacturing competence brochure
Discover smart solutions to improve process reliability and productivity. Download the brochure: /manufacturing-competence

Quality with Data – 3 Steps to Erase Blind Spots
The primary targets in manufacturing are to increase quality and reduce costs. By using weighing equipment for quality checks, you can also track process data. With METTLER TOLEDO software you are able to control your processes and respond quickly to issues. You achieve a secure data-collection process and keep your customers happy at the same time. /ind-software

1 Do I have all the data?
This first step in increasing your quality control is key. Check your manual and semi-automated manufacturing processes for control gaps and for opportunities to improve speed and productivity with more data transparency. Weighing equipment and sensors from METTLER TOLEDO will then continuously collect the data from each weighing point. Collecting all weighing data in your production process allows for more exact reporting and purpose-driven corrective measures to boost quality and productivity. METTLER TOLEDO provides tools that can connect your processes and transfer all data to your analysis software.
Whether you're working on a single project to improve your processes or you're at the starting point of taking your production to an Industry 4.0 solution – we have a solution to fit your needs. Find more information at /ind-4-0

2 Can I work with all the data effectively?
After connecting all weighing sensors in a production process to collect data, the next step focuses on data usability. Ensuring that all your data is transferred to an oversight system using a standardized protocol can help. METTLER TOLEDO's Collect+ software continually collects and monitors production output and quality. By providing these production insights on an easy-to-use dashboard, it lets you analyze and optimize your processes. You benefit from the unique Collect+ advantage: capturing weight and process data from every scale and sensor in your production area, including third-party devices. Read more about Collect+ at /Collectplus

3 How can I make better decisions with the collected data?
By working through steps 1 and 2, you can actively collect, track and visualize manufacturing data. You gain better insight into the status of your processes and whether they are supporting productivity and quality goals. This helps you maintain process performance at a high level. Nevertheless, you always want to improve your processes and make better decisions. Our statistical quality control software will boost quality by actively tracking, handling and even stopping or starting processes based on the data collected from your shop floor. You boost quality and increase yield by customizing the system to your needs. Find more information at /Freeweighnet

9 Checks – 1 Solution: Weight-Based Quality Control
Weight-based quality control can detect invisible defects like cavities in sintered metal parts or check completeness no matter the shape or weight of the workpiece. It offers high resolution and speedy processing for a thorough quality check that does not slow down production. Simple quality control at high throughput rates:
1 Number of parts correct?
2 Every part inside?
3 All parts assembled?
4 Correct length?
5 Enough oil?
6 Cavities inside?
7 Properly counterbalanced?
8 Coating applied?
9 Density of sintered parts?
For more information: /ind-ma
Download the weight-based quality control white paper: /ind-ICS5-quality

New Industrial Weighing Catalog – free download: www.mt.com/ind-catalog

Mettler-Toledo Product Inspection, 1571 Northpointe Parkway, Lutz, FL 33558
Mettler-Toledo, Inc., 2915 Argentia Road, Unit 6, Mississauga, Ontario L5N 8G6
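As a rough illustration of how a weight-based check of this kind can catch invisible defects: a measured weight outside a tolerance band around the reference value signals a missing part or cavity (underweight) or excess material (overweight). A sketch with assumed names and values, not vendor software.

```python
def check_weight(measured_g, target_g, tol_g):
    """Classify a workpiece by weight against a reference value and tolerance.

    'under' may indicate a missing part or a cavity; 'over' may indicate
    excess material or a foreign object; otherwise the piece passes.
    """
    if measured_g < target_g - tol_g:
        return "under"
    if measured_g > target_g + tol_g:
        return "over"
    return "pass"

# Example: sintered parts with a 152.0 g reference weight and a ±1.5 g tolerance
for w in (151.2, 149.3, 154.1):
    print(f"{w:6.1f} g -> {check_weight(w, 152.0, 1.5)}")
```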
Machine Learning Vocabulary: Chinese-English Glossary
机器学习专业词汇中英⽂对照activation 激活值activation function 激活函数additive noise 加性噪声autoencoder ⾃编码器Autoencoders ⾃编码算法average firing rate 平均激活率average sum-of-squares error 均⽅差backpropagation 后向传播basis 基basis feature vectors 特征基向量batch gradient ascent 批量梯度上升法Bayesian regularization method 贝叶斯规则化⽅法Bernoulli random variable 伯努利随机变量bias term 偏置项binary classfication ⼆元分类class labels 类型标记concatenation 级联conjugate gradient 共轭梯度contiguous groups 联通区域convex optimization software 凸优化软件convolution 卷积cost function 代价函数covariance matrix 协⽅差矩阵DC component 直流分量decorrelation 去相关degeneracy 退化demensionality reduction 降维derivative 导函数diagonal 对⾓线diffusion of gradients 梯度的弥散eigenvalue 特征值eigenvector 特征向量error term 残差feature matrix 特征矩阵feature standardization 特征标准化feedforward architectures 前馈结构算法feedforward neural network 前馈神经⽹络feedforward pass 前馈传导fine-tuned 微调first-order feature ⼀阶特征forward pass 前向传导forward propagation 前向传播Gaussian prior ⾼斯先验概率generative model ⽣成模型gradient descent 梯度下降Greedy layer-wise training 逐层贪婪训练⽅法grouping matrix 分组矩阵Hadamard product 阿达马乘积Hessian matrix Hessian 矩阵hidden layer 隐含层hidden units 隐藏神经元Hierarchical grouping 层次型分组higher-order features 更⾼阶特征highly non-convex optimization problem ⾼度⾮凸的优化问题histogram 直⽅图hyperbolic tangent 双曲正切函数hypothesis 估值,假设identity activation function 恒等激励函数IID 独⽴同分布illumination 照明inactive 抑制independent component analysis 独⽴成份分析input domains 输⼊域input layer 输⼊层intensity 亮度/灰度intercept term 截距KL divergence 相对熵KL divergence KL分散度k-Means K-均值learning rate 学习速率least squares 最⼩⼆乘法linear correspondence 线性响应linear superposition 线性叠加line-search algorithm 线搜索算法local mean subtraction 局部均值消减local optima 局部最优解logistic regression 逻辑回归loss function 损失函数low-pass filtering 低通滤波magnitude 幅值MAP 极⼤后验估计maximum likelihood estimation 极⼤似然估计mean 平均值MFCC Mel 倒频系数multi-class classification 多元分类neural networks 神经⽹络neuron 神经元Newton’s method ⽜顿法non-convex function ⾮凸函数non-linear feature ⾮线性特征norm 范式norm bounded 有界范数norm constrained 范数约束normalization 归⼀化numerical roundoff errors 数值舍⼊误差numerically checking 数值检验numerically reliable 数值计算上稳定object detection 物体检测objective function ⽬标函数off-by-one error 缺位错误orthogonalization 正交化output layer 输出层overall cost function 总体代价函数over-complete basis 超完备基over-fitting 过拟合parts of objects ⽬标的部件part-whole decompostion 部分-整体分解PCA 主元分析penalty term 惩罚因⼦per-example mean subtraction 逐样本均值消减pooling 池化pretrain 预训练principal components analysis 主成份分析quadratic constraints ⼆次约束RBMs 受限Boltzman机reconstruction based models 基于重构的模型reconstruction cost 重建代价reconstruction term 重构项redundant 冗余reflection matrix 反射矩阵regularization 正则化regularization term 正则化项rescaling 缩放robust 鲁棒性run ⾏程second-order feature ⼆阶特征sigmoid activation function S型激励函数significant digits 有效数字singular value 奇异值singular vector 奇异向量smoothed L1 penalty 平滑的L1范数惩罚Smoothed topographic L1 sparsity penalty 平滑地形L1稀疏惩罚函数smoothing 平滑Softmax Regresson Softmax回归sorted in decreasing order 降序排列source features 源特征sparse autoencoder 消减归⼀化Sparsity 稀疏性sparsity parameter 稀疏性参数sparsity penalty 稀疏惩罚square function 平⽅函数squared-error ⽅差stationary 平稳性(不变性)stationary stochastic process 平稳随机过程step-size 步长值supervised learning 监督学习symmetric positive semi-definite matrix 对称半正定矩阵symmetry breaking 对称失效tanh function 双曲正切函数the average activation 平均活跃度the derivative checking method 梯度验证⽅法the empirical distribution 经验分布函数the energy function 能量函数the Lagrange dual 拉格朗⽇对偶函数the log likelihood 对数似然函数the pixel intensity value 像素灰度值the rate of convergence 收敛速度topographic cost term 拓扑代价项topographic ordered 拓扑秩序transformation 变换translation invariant 平移不变性trivial 
answer 平凡解under-complete basis 不完备基unrolling 组合扩展unsupervised learning ⽆监督学习variance ⽅差vecotrized implementation 向量化实现vectorization ⽮量化visual cortex 视觉⽪层weight decay 权重衰减weighted average 加权平均值whitening ⽩化zero-mean 均值为零Letter AAccumulated error backpropagation 累积误差逆传播Activation Function 激活函数Adaptive Resonance Theory/ART ⾃适应谐振理论Addictive model 加性学习Adversarial Networks 对抗⽹络Affine Layer 仿射层Affinity matrix 亲和矩阵Agent 代理 / 智能体Algorithm 算法Alpha-beta pruning α-β剪枝Anomaly detection 异常检测Approximation 近似Area Under ROC Curve/AUC Roc 曲线下⾯积Artificial General Intelligence/AGI 通⽤⼈⼯智能Artificial Intelligence/AI ⼈⼯智能Association analysis 关联分析Attention mechanism 注意⼒机制Attribute conditional independence assumption 属性条件独⽴性假设Attribute space 属性空间Attribute value 属性值Autoencoder ⾃编码器Automatic speech recognition ⾃动语⾳识别Automatic summarization ⾃动摘要Average gradient 平均梯度Average-Pooling 平均池化Letter BBackpropagation Through Time 通过时间的反向传播Backpropagation/BP 反向传播Base learner 基学习器Base learning algorithm 基学习算法Batch Normalization/BN 批量归⼀化Bayes decision rule 贝叶斯判定准则Bayes Model Averaging/BMA 贝叶斯模型平均Bayes optimal classifier 贝叶斯最优分类器Bayesian decision theory 贝叶斯决策论Bayesian network 贝叶斯⽹络Between-class scatter matrix 类间散度矩阵Bias 偏置 / 偏差Bias-variance decomposition 偏差-⽅差分解Bias-Variance Dilemma 偏差 – ⽅差困境Bi-directional Long-Short Term Memory/Bi-LSTM 双向长短期记忆Binary classification ⼆分类Binomial test ⼆项检验Bi-partition ⼆分法Boltzmann machine 玻尔兹曼机Bootstrap sampling ⾃助采样法/可重复采样/有放回采样Bootstrapping ⾃助法Break-Event Point/BEP 平衡点Letter CCalibration 校准Cascade-Correlation 级联相关Categorical attribute 离散属性Class-conditional probability 类条件概率Classification and regression tree/CART 分类与回归树Classifier 分类器Class-imbalance 类别不平衡Closed -form 闭式Cluster 簇/类/集群Cluster analysis 聚类分析Clustering 聚类Clustering ensemble 聚类集成Co-adapting 共适应Coding matrix 编码矩阵COLT 国际学习理论会议Committee-based learning 基于委员会的学习Competitive learning 竞争型学习Component learner 组件学习器Comprehensibility 可解释性Computation Cost 计算成本Computational Linguistics 计算语⾔学Computer vision 计算机视觉Concept drift 概念漂移Concept Learning System /CLS 概念学习系统Conditional entropy 条件熵Conditional mutual information 条件互信息Conditional Probability Table/CPT 条件概率表Conditional random field/CRF 条件随机场Conditional risk 条件风险Confidence 置信度Confusion matrix 混淆矩阵Connection weight 连接权Connectionism 连结主义Consistency ⼀致性/相合性Contingency table 列联表Continuous attribute 连续属性Convergence 收敛Conversational agent 会话智能体Convex quadratic programming 凸⼆次规划Convexity 凸性Convolutional neural network/CNN 卷积神经⽹络Co-occurrence 同现Correlation coefficient 相关系数Cosine similarity 余弦相似度Cost curve 成本曲线Cost Function 成本函数Cost matrix 成本矩阵Cost-sensitive 成本敏感Cross entropy 交叉熵Cross validation 交叉验证Crowdsourcing 众包Curse of dimensionality 维数灾难Cut point 截断点Cutting plane algorithm 割平⾯法Letter DData mining 数据挖掘Data set 数据集Decision Boundary 决策边界Decision stump 决策树桩Decision tree 决策树/判定树Deduction 演绎Deep Belief Network 深度信念⽹络Deep Convolutional Generative Adversarial Network/DCGAN 深度卷积⽣成对抗⽹络Deep learning 深度学习Deep neural network/DNN 深度神经⽹络Deep Q-Learning 深度 Q 学习Deep Q-Network 深度 Q ⽹络Density estimation 密度估计Density-based clustering 密度聚类Differentiable neural computer 可微分神经计算机Dimensionality reduction algorithm 降维算法Directed edge 有向边Disagreement measure 不合度量Discriminative model 判别模型Discriminator 判别器Distance measure 距离度量Distance metric learning 距离度量学习Distribution 分布Divergence 散度Diversity measure 多样性度量/差异性度量Domain adaption 领域⾃适应Downsampling 下采样D-separation (Directed separation)有向分离Dual problem 对偶问题Dummy node 哑结点Dynamic Fusion 动态融合Dynamic programming 动态规划Letter EEigenvalue decomposition 特征值分解Embedding 嵌⼊Emotional analysis 
情绪分析Empirical conditional entropy 经验条件熵Empirical entropy 经验熵Empirical error 经验误差Empirical risk 经验风险End-to-End 端到端Energy-based model 基于能量的模型Ensemble learning 集成学习Ensemble pruning 集成修剪Error Correcting Output Codes/ECOC 纠错输出码Error rate 错误率Error-ambiguity decomposition 误差-分歧分解Euclidean distance 欧⽒距离Evolutionary computation 演化计算Expectation-Maximization 期望最⼤化Expected loss 期望损失Exploding Gradient Problem 梯度爆炸问题Exponential loss function 指数损失函数Extreme Learning Machine/ELM 超限学习机Letter FFactorization 因⼦分解False negative 假负类False positive 假正类False Positive Rate/FPR 假正例率Feature engineering 特征⼯程Feature selection 特征选择Feature vector 特征向量Featured Learning 特征学习Feedforward Neural Networks/FNN 前馈神经⽹络Fine-tuning 微调Flipping output 翻转法Fluctuation 震荡Forward stagewise algorithm 前向分步算法Frequentist 频率主义学派Full-rank matrix 满秩矩阵Functional neuron 功能神经元Letter GGain ratio 增益率Game theory 博弈论Gaussian kernel function ⾼斯核函数Gaussian Mixture Model ⾼斯混合模型General Problem Solving 通⽤问题求解Generalization 泛化Generalization error 泛化误差Generalization error bound 泛化误差上界Generalized Lagrange function ⼴义拉格朗⽇函数Generalized linear model ⼴义线性模型Generalized Rayleigh quotient ⼴义瑞利商Generative Adversarial Networks/GAN ⽣成对抗⽹络Generative Model ⽣成模型Generator ⽣成器Genetic Algorithm/GA 遗传算法Gibbs sampling 吉布斯采样Gini index 基尼指数Global minimum 全局最⼩Global Optimization 全局优化Gradient boosting 梯度提升Gradient Descent 梯度下降Graph theory 图论Ground-truth 真相/真实Letter HHard margin 硬间隔Hard voting 硬投票Harmonic mean 调和平均Hesse matrix 海塞矩阵Hidden dynamic model 隐动态模型Hidden layer 隐藏层Hidden Markov Model/HMM 隐马尔可夫模型Hierarchical clustering 层次聚类Hilbert space 希尔伯特空间Hinge loss function 合页损失函数Hold-out 留出法Homogeneous 同质Hybrid computing 混合计算Hyperparameter 超参数Hypothesis 假设Hypothesis test 假设验证Letter IICML 国际机器学习会议Improved iterative scaling/IIS 改进的迭代尺度法Incremental learning 增量学习Independent and identically distributed/i.i.d. 
独⽴同分布Independent Component Analysis/ICA 独⽴成分分析Indicator function 指⽰函数Individual learner 个体学习器Induction 归纳Inductive bias 归纳偏好Inductive learning 归纳学习Inductive Logic Programming/ILP 归纳逻辑程序设计Information entropy 信息熵Information gain 信息增益Input layer 输⼊层Insensitive loss 不敏感损失Inter-cluster similarity 簇间相似度International Conference for Machine Learning/ICML 国际机器学习⼤会Intra-cluster similarity 簇内相似度Intrinsic value 固有值Isometric Mapping/Isomap 等度量映射Isotonic regression 等分回归Iterative Dichotomiser 迭代⼆分器Letter KKernel method 核⽅法Kernel trick 核技巧Kernelized Linear Discriminant Analysis/KLDA 核线性判别分析K-fold cross validation k 折交叉验证/k 倍交叉验证K-Means Clustering K – 均值聚类K-Nearest Neighbours Algorithm/KNN K近邻算法Knowledge base 知识库Knowledge Representation 知识表征Letter LLabel space 标记空间Lagrange duality 拉格朗⽇对偶性Lagrange multiplier 拉格朗⽇乘⼦Laplace smoothing 拉普拉斯平滑Laplacian correction 拉普拉斯修正Latent Dirichlet Allocation 隐狄利克雷分布Latent semantic analysis 潜在语义分析Latent variable 隐变量Lazy learning 懒惰学习Learner 学习器Learning by analogy 类⽐学习Learning rate 学习率Learning Vector Quantization/LVQ 学习向量量化Least squares regression tree 最⼩⼆乘回归树Leave-One-Out/LOO 留⼀法linear chain conditional random field 线性链条件随机场Linear Discriminant Analysis/LDA 线性判别分析Linear model 线性模型Linear Regression 线性回归Link function 联系函数Local Markov property 局部马尔可夫性Local minimum 局部最⼩Log likelihood 对数似然Log odds/logit 对数⼏率Logistic Regression Logistic 回归Log-likelihood 对数似然Log-linear regression 对数线性回归Long-Short Term Memory/LSTM 长短期记忆Loss function 损失函数Letter MMachine translation/MT 机器翻译Macron-P 宏查准率Macron-R 宏查全率Majority voting 绝对多数投票法Manifold assumption 流形假设Manifold learning 流形学习Margin theory 间隔理论Marginal distribution 边际分布Marginal independence 边际独⽴性Marginalization 边际化Markov Chain Monte Carlo/MCMC 马尔可夫链蒙特卡罗⽅法Markov Random Field 马尔可夫随机场Maximal clique 最⼤团Maximum Likelihood Estimation/MLE 极⼤似然估计/极⼤似然法Maximum margin 最⼤间隔Maximum weighted spanning tree 最⼤带权⽣成树Max-Pooling 最⼤池化Mean squared error 均⽅误差Meta-learner 元学习器Metric learning 度量学习Micro-P 微查准率Micro-R 微查全率Minimal Description Length/MDL 最⼩描述长度Minimax game 极⼩极⼤博弈Misclassification cost 误分类成本Mixture of experts 混合专家Momentum 动量Moral graph 道德图/端正图Multi-class classification 多分类Multi-document summarization 多⽂档摘要Multi-layer feedforward neural networks 多层前馈神经⽹络Multilayer Perceptron/MLP 多层感知器Multimodal learning 多模态学习Multiple Dimensional Scaling 多维缩放Multiple linear regression 多元线性回归Multi-response Linear Regression /MLR 多响应线性回归Mutual information 互信息Letter NNaive bayes 朴素贝叶斯Naive Bayes Classifier 朴素贝叶斯分类器Named entity recognition 命名实体识别Nash equilibrium 纳什均衡Natural language generation/NLG ⾃然语⾔⽣成Natural language processing ⾃然语⾔处理Negative class 负类Negative correlation 负相关法Negative Log Likelihood 负对数似然Neighbourhood Component Analysis/NCA 近邻成分分析Neural Machine Translation 神经机器翻译Neural Turing Machine 神经图灵机Newton method ⽜顿法NIPS 国际神经信息处理系统会议No Free Lunch Theorem/NFL 没有免费的午餐定理Noise-contrastive estimation 噪⾳对⽐估计Nominal attribute 列名属性Non-convex optimization ⾮凸优化Nonlinear model ⾮线性模型Non-metric distance ⾮度量距离Non-negative matrix factorization ⾮负矩阵分解Non-ordinal attribute ⽆序属性Non-Saturating Game ⾮饱和博弈Norm 范数Normalization 归⼀化Nuclear norm 核范数Numerical attribute 数值属性Letter OObjective function ⽬标函数Oblique decision tree 斜决策树Occam’s razor 奥卡姆剃⼑Odds ⼏率Off-Policy 离策略One shot learning ⼀次性学习One-Dependent Estimator/ODE 独依赖估计On-Policy 在策略Ordinal attribute 有序属性Out-of-bag estimate 包外估计Output layer 输出层Output smearing 输出调制法Overfitting 过拟合/过配Oversampling 过采样Letter PPaired t-test 成对 t 检验Pairwise 成对型Pairwise Markov property 成对马尔可夫性Parameter 参数Parameter estimation 参数估计Parameter tuning 调参Parse tree 
解析树Particle Swarm Optimization/PSO 粒⼦群优化算法Part-of-speech tagging 词性标注Perceptron 感知机Performance measure 性能度量Plug and Play Generative Network 即插即⽤⽣成⽹络Plurality voting 相对多数投票法Polarity detection 极性检测Polynomial kernel function 多项式核函数Pooling 池化Positive class 正类Positive definite matrix 正定矩阵Post-hoc test 后续检验Post-pruning 后剪枝potential function 势函数Precision 查准率/准确率Prepruning 预剪枝Principal component analysis/PCA 主成分分析Principle of multiple explanations 多释原则Prior 先验Probability Graphical Model 概率图模型Proximal Gradient Descent/PGD 近端梯度下降Pruning 剪枝Pseudo-label 伪标记Letter QQuantized Neural Network 量⼦化神经⽹络Quantum computer 量⼦计算机Quantum Computing 量⼦计算Quasi Newton method 拟⽜顿法Letter RRadial Basis Function/RBF 径向基函数Random Forest Algorithm 随机森林算法Random walk 随机漫步Recall 查全率/召回率Receiver Operating Characteristic/ROC 受试者⼯作特征Rectified Linear Unit/ReLU 线性修正单元Recurrent Neural Network 循环神经⽹络Recursive neural network 递归神经⽹络Reference model 参考模型Regression 回归Regularization 正则化Reinforcement learning/RL 强化学习Representation learning 表征学习Representer theorem 表⽰定理reproducing kernel Hilbert space/RKHS 再⽣核希尔伯特空间Re-sampling 重采样法Rescaling 再缩放Residual Mapping 残差映射Residual Network 残差⽹络Restricted Boltzmann Machine/RBM 受限玻尔兹曼机Restricted Isometry Property/RIP 限定等距性Re-weighting 重赋权法Robustness 稳健性/鲁棒性Root node 根结点Rule Engine 规则引擎Rule learning 规则学习Letter SSaddle point 鞍点Sample space 样本空间Sampling 采样Score function 评分函数Self-Driving ⾃动驾驶Self-Organizing Map/SOM ⾃组织映射Semi-naive Bayes classifiers 半朴素贝叶斯分类器Semi-Supervised Learning 半监督学习semi-Supervised Support Vector Machine 半监督⽀持向量机Sentiment analysis 情感分析Separating hyperplane 分离超平⾯Sigmoid function Sigmoid 函数Similarity measure 相似度度量Simulated annealing 模拟退⽕Simultaneous localization and mapping 同步定位与地图构建Singular Value Decomposition 奇异值分解Slack variables 松弛变量Smoothing 平滑Soft margin 软间隔Soft margin maximization 软间隔最⼤化Soft voting 软投票Sparse representation 稀疏表征Sparsity 稀疏性Specialization 特化Spectral Clustering 谱聚类Speech Recognition 语⾳识别Splitting variable 切分变量Squashing function 挤压函数Stability-plasticity dilemma 可塑性-稳定性困境Statistical learning 统计学习Status feature function 状态特征函Stochastic gradient descent 随机梯度下降Stratified sampling 分层采样Structural risk 结构风险Structural risk minimization/SRM 结构风险最⼩化Subspace ⼦空间Supervised learning 监督学习/有导师学习support vector expansion ⽀持向量展式Support Vector Machine/SVM ⽀持向量机Surrogat loss 替代损失Surrogate function 替代函数Symbolic learning 符号学习Symbolism 符号主义Synset 同义词集Letter TT-Distribution Stochastic Neighbour Embedding/t-SNE T – 分布随机近邻嵌⼊Tensor 张量Tensor Processing Units/TPU 张量处理单元The least square method 最⼩⼆乘法Threshold 阈值Threshold logic unit 阈值逻辑单元Threshold-moving 阈值移动Time Step 时间步骤Tokenization 标记化Training error 训练误差Training instance 训练⽰例/训练例Transductive learning 直推学习Transfer learning 迁移学习Treebank 树库Tria-by-error 试错法True negative 真负类True positive 真正类True Positive Rate/TPR 真正例率Turing Machine 图灵机Twice-learning ⼆次学习Letter UUnderfitting ⽋拟合/⽋配Undersampling ⽋采样Understandability 可理解性Unequal cost ⾮均等代价Unit-step function 单位阶跃函数Univariate decision tree 单变量决策树Unsupervised learning ⽆监督学习/⽆导师学习Unsupervised layer-wise training ⽆监督逐层训练Upsampling 上采样Letter VVanishing Gradient Problem 梯度消失问题Variational inference 变分推断VC Theory VC维理论Version space 版本空间Viterbi algorithm 维特⽐算法Von Neumann architecture 冯 · 诺伊曼架构Letter WWasserstein GAN/WGAN Wasserstein⽣成对抗⽹络Weak learner 弱学习器Weight 权重Weight sharing 权共享Weighted voting 加权投票法Within-class scatter matrix 类内散度矩阵Word embedding 词嵌⼊Word sense disambiguation 词义消歧Letter ZZero-data learning 零数据学习Zero-shot learning 零次学习Aapproximations近似值arbitrary随意的affine仿射的arbitrary任意的amino 
acid氨基酸amenable经得起检验的axiom公理,原则abstract提取architecture架构,体系结构;建造业absolute绝对的arsenal军⽕库assignment分配algebra线性代数asymptotically⽆症状的appropriate恰当的Bbias偏差brevity简短,简洁;短暂broader⼴泛briefly简短的batch批量Cconvergence 收敛,集中到⼀点convex凸的contours轮廓constraint约束constant常理commercial商务的complementarity补充coordinate ascent同等级上升clipping剪下物;剪报;修剪component分量;部件continuous连续的covariance协⽅差canonical正规的,正则的concave⾮凸的corresponds相符合;相当;通信corollary推论concrete具体的事物,实在的东西cross validation交叉验证correlation相互关系convention约定cluster⼀簇centroids 质⼼,形⼼converge收敛computationally计算(机)的calculus计算Dderive获得,取得dual⼆元的duality⼆元性;⼆象性;对偶性derivation求导;得到;起源denote预⽰,表⽰,是…的标志;意味着,[逻]指称divergence 散度;发散性dimension尺度,规格;维数dot⼩圆点distortion变形density概率密度函数discrete离散的discriminative有识别能⼒的diagonal对⾓dispersion分散,散开determinant决定因素disjoint不相交的Eencounter遇到ellipses椭圆equality等式extra额外的empirical经验;观察ennmerate例举,计数exceed超过,越出expectation期望efficient⽣效的endow赋予explicitly清楚的exponential family指数家族equivalently等价的Ffeasible可⾏的forary初次尝试finite有限的,限定的forgo摒弃,放弃fliter过滤frequentist最常发⽣的forward search前向式搜索formalize使定形Ggeneralized归纳的generalization概括,归纳;普遍化;判断(根据不⾜)guarantee保证;抵押品generate形成,产⽣geometric margins⼏何边界gap裂⼝generative⽣产的;有⽣产⼒的Hheuristic启发式的;启发法;启发程序hone怀恋;磨hyperplane超平⾯Linitial最初的implement执⾏intuitive凭直觉获知的incremental增加的intercept截距intuitious直觉instantiation例⼦indicator指⽰物,指⽰器interative重复的,迭代的integral积分identical相等的;完全相同的indicate表⽰,指出invariance不变性,恒定性impose把…强加于intermediate中间的interpretation解释,翻译Jjoint distribution联合概率Llieu替代logarithmic对数的,⽤对数表⽰的latent潜在的Leave-one-out cross validation留⼀法交叉验证Mmagnitude巨⼤mapping绘图,制图;映射matrix矩阵mutual相互的,共同的monotonically单调的minor较⼩的,次要的multinomial多项的multi-class classification⼆分类问题Nnasty讨厌的notation标志,注释naïve朴素的Oobtain得到oscillate摆动optimization problem最优化问题objective function⽬标函数optimal最理想的orthogonal(⽮量,矩阵等)正交的orientation⽅向ordinary普通的occasionally偶然的Ppartial derivative偏导数property性质proportional成⽐例的primal原始的,最初的permit允许pseudocode伪代码permissible可允许的polynomial多项式preliminary预备precision精度perturbation 不安,扰乱poist假定,设想positive semi-definite半正定的parentheses圆括号posterior probability后验概率plementarity补充pictorially图像的parameterize确定…的参数poisson distribution柏松分布pertinent相关的Qquadratic⼆次的quantity量,数量;分量query疑问的Rregularization使系统化;调整reoptimize重新优化restrict限制;限定;约束reminiscent回忆往事的;提醒的;使⼈联想…的(of)remark注意random variable随机变量respect考虑respectively各⾃的;分别的redundant过多的;冗余的Ssusceptible敏感的stochastic可能的;随机的symmetric对称的sophisticated复杂的spurious假的;伪造的subtract减去;减法器simultaneously同时发⽣地;同步地suffice满⾜scarce稀有的,难得的split分解,分离subset⼦集statistic统计量successive iteratious连续的迭代scale标度sort of有⼏分的squares平⽅Ttrajectory轨迹temporarily暂时的terminology专⽤名词tolerance容忍;公差thumb翻阅threshold阈,临界theorem定理tangent正弦Uunit-length vector单位向量Vvalid有效的,正确的variance⽅差variable变量;变元vocabulary词汇valued经估价的;宝贵的Wwrapper包装分类:。
Implementation Methods for Noise Reduction Algorithms in Speech Signals
1. Noise reduction algorithms for speech signals can be implemented with filters.
2. Noise reduction algorithms can be implemented using digital signal processing techniques.
3. Common noise reduction algorithms include median filtering and wavelet transforms.
4. Median filtering is a simple and effective noise reduction technique.
5. The wavelet transform can decompose a signal into sub-signals of different frequencies for processing.
6. Implementing a noise reduction algorithm requires balancing computational speed against processing quality.
7. The performance of a noise reduction algorithm can be quantified using metrics such as the signal-to-noise ratio.
8. Adaptive filtering is a noise reduction technique that adjusts dynamically according to the characteristics of the signal.
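A small Python sketch tying points 1, 4 and 7 together: median filtering applied to a noisy test signal, with the signal-to-noise ratio used to quantify the improvement. The test signal and parameter choices are illustrative assumptions, not a reference implementation.

```python
import numpy as np
from scipy.signal import medfilt

def snr_db(clean, processed):
    """Signal-to-noise ratio in dB relative to a known clean reference."""
    noise = clean - processed
    return 10 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

# Build a speech-like test tone and corrupt it with impulsive noise
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 8000, endpoint=False)
clean = np.sin(2 * np.pi * 440 * t)
noisy = clean.copy()
spikes = rng.choice(t.size, size=200, replace=False)
noisy[spikes] += rng.normal(0, 2, size=200)

# Median filtering: replace each sample with the median of its neighborhood
denoised = medfilt(noisy, kernel_size=5)

print(f"SNR before: {snr_db(clean, noisy):5.1f} dB")
print(f"SNR after:  {snr_db(clean, denoised):5.1f} dB")
```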
Minebea Intec Automatic Checkweighers: Product Brochure
Accuracy, throughput, hygienic design, regulation compliance and cost – across the food and beverage, pharmaceutical, chemical, agriculture, cosmetics, building materials, logistics and machinery industries.

What customers can expect from a checkweigher from Minebea Intec:
• An intuitive and multilingual operator interface. Products can be set up or adjusted quickly and error-free by line operators without special training or knowledge and without involving an engineer. This ensures maximum uptime and productivity.
• Quick, tool-less belt release, ensuring minimum downtime when cleaning or changing belts.
• A wide variety of available interfaces allows easy integration into data networks.
• The optional trend control function ensures optimal filling processes and a reduction in cost-intensive overfilling.
• Supreme weighing accuracy increases economic efficiency and ensures, among other things, that legal requirements are met and separation errors are prevented.
• Thanks to German Quality and decades of experience in the development, manufacture and maintenance of checkweighers, we can guarantee reliable operation and a long service life. This makes any checkweigher from Minebea Intec a sound investment.

An important contribution to quality assurance
Checkweighers are used in a wide variety of applications in the food and other industries for verifying product weight or completeness. As a provider of pioneering technologies, we know what our clients around the world are looking for when it comes to accuracy, throughput and hygienic design, and we are familiar with the legal provisions and industry standards. The performance and functionality therefore not only need to be in line with your present requirements, making sure you do not pay for functionality you do not need; a checkweigher also needs to offer the flexibility to meet future requirements, ensuring a secure investment. Minebea Intec offers a wide range of checkweighers, each configurable to suit your individual requirements. We even offer tailored solutions for specific applications – always the right solution for each of these applications.

"Minebea Intec checkweighers are a secure investment as they can be adapted to changing requirements and conditions."

Why Minebea Intec?
Minebea Intec is a byword for quality and cutting-edge technology. Our innovative German Quality solutions have proved themselves all over the world, handling the very toughest of conditions and the strictest of requirements. We offer on-site support and services throughout the entire life cycle of our products. This means our customers always have the best possible solution for their requirements.

In-line checkweighers for packaged food
With the Essentus®, Synus®, Flexus® and EWK CD VV models, we offer a complete product portfolio in the in-line segment, covering a product throughput of up to 600 pcs. per minute. All of Minebea's checkweighers feature a durable and sturdy mechanical construction with sufficient mass to ensure precise and repeatable weighing results, even at the highest speeds.

Easily adjustable thanks to intelligent design
Packaging lines, and the critical control points they contain, need to accommodate fast and easy switching between products and changing line configurations.
Flexus® makes this possible with minimum effort:
• The upper frame construction allows simple mounting of additional modules
• The width of the conveyor belts can easily be altered for different product sizes
• Always at the right level: the transport height can be varied using the adjustable feet

Checkweigher Flexus®
The checkweigher Flexus® combines hygienic design with flexibility and maximum performance. It was engineered in accordance with the strict hygienic requirements for packaged foods. With its wide range of configuration options, it meets practically any requirement. Further features include:
• Optional weights & measures approval
• EMFC high-resolution weigh cell technology, combining maximum throughput with maximum precision
• Optimised standard configurations for versatile applications
• Can be combined with Vistus® metal detectors
• Minimum machine weight: approx. 260 kg, depending on configuration

Checkweigher Essentus®
Essentus® is the ideal economic solution for numerous weighing applications, completeness checks and filling process optimization. The checkweigher is extremely user-friendly, is selected according to needs and thus reduces procurement costs. Customers have the choice between:
• Version L for product weights of up to 6 kg
• Version H for product weights of up to 60 kg
• Essentus® efficiency for pure classification and individual weight output
• Essentus® performance for extended checkweigher functions, such as an automatic learning function for simple commissioning and setting of the weighing parameters without in-depth knowledge of the equipment, a wide variety of options, automatic speed regulation and extended analyses
Available variants: Essentus® L performance, Essentus® L efficiency, Essentus® H efficiency, Essentus® H performance. Essentus® H is also available as an end-of-line checkweigher. Further in-line checkweighers: Synus®, EWK CD VV.
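To make the trend control function mentioned above concrete, here is a minimal sketch of one plausible scheme: a running mean of recent weighments is compared with the target, and a fraction of the drift is fed back as a filler setpoint correction to curb overfilling. This is our simplified illustration, not Minebea Intec's algorithm; the class name, window size and gain are assumptions.

```python
from collections import deque

class TrendControl:
    """Feed back weighing results to a filling machine to reduce overfill."""

    def __init__(self, target_g, window=20, gain=0.5):
        self.target_g = target_g
        self.recent = deque(maxlen=window)  # last `window` weighments
        self.gain = gain                    # fraction of the drift corrected per update

    def update(self, measured_g):
        """Record one weighment; return a suggested setpoint adjustment in grams."""
        self.recent.append(measured_g)
        drift = sum(self.recent) / len(self.recent) - self.target_g
        return -self.gain * drift

# Example: a 500 g nominal fill that is trending high
tc = TrendControl(target_g=500.0)
for w in (503.1, 502.4, 503.8):
    adjustment = tc.update(w)
print(f"suggested filler adjustment: {adjustment:+.2f} g")
```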
M ulti-frequency technology offering premium detection performanceFast and easy switching between product batches via extensive product memory V ia the automatic learn-function, products can be set up or adjusted fast and er-ror-free by line operators without requiring special training or knowledge and without the involvement of an engineerTo learn more about metal detection in general, download ourWhite Paper here!// 14 CheckweighersU p g r a d et o 4.0 – e x p an d e de q u ip m e n t c on n e c -t io n , im p r o v e dp e r f o r m a n c eConfiguration options and complementary productsCustomer individual solutionsToday’s wide variety of different products, being offered in an even larger variety of cartons, boxes, pouches, bags, trays, sachets and bottles, each have their individual requirements when it comes to product handling and transportation.Although our checkweighers are constructed in such a way that as a standard they already offern extreme flexibility, in some cases in depth consultancy or bespoke solutions are required. Our in-house Engineering Support services provide both. Our staff listen carefully and propose solutions that fully meet all requirements. Our engineers also offer design-in support for the integration of our machines or solutions into production or packaging lines. Product tests that can give a precise answer with regard to achievable throughput and accuracy are also specially carried out for checkweigher applications.For more detailed information, please visit our website or contact**************************Batching and formulationWeighing of incoming goodsComponents and solutions for Components and solutions for vehicle weighing (analogue/digital)Foreign object detectionPortioning and checkweighingFormulation and formulation weighingX-ray inspectionPre-packaging checking and Checkweighers for heavy loadsComponents and solutionsfor vehicle weighing// 18 CheckweighersEngineering Support and Global Solutions – ensuring optimal solutionsConsultation on selecting the best products and solutions with regard to the desired performance, precision and costs Design-in support for the integration of our products and solutions in existing constructions Customer-specific products or solutions – tailored to individual requirementsVia our world-wide presence, we and our certified partners stand beside our customers across the globe throughout the entire life cycle of our products and solutions, from choosing the rightequipment and systems to upgrades, replacement parts and training.Commissioning – for a timely start to productionMechanical and/or electrical installation,commissioning and training on set-up and use C alibration or conformity assessment ofequipment and systems according to statutory requirements for measuring technology Equipment qualification (IQ/OQ)**********************Our servicesMaintenance and repair – for guaranteed availability and performanceC alibration or verification preparation of equipment and systemsaccording to statutory requirements for measuring technologyPreventative maintenance safeguarding continued availabilityand performanceR epair services, including emergency service contracts for aguaranteed response timeProfessional replacement parts serviceR emote services such as the service tool miRemote basedon augmented reality – for first-line support on siteR e v . 
10/2019Everything from a single sourceMinebea Intec provides products, solutions and services to improve the reliability, safety and efficiency of production and packaging lines in virtually all industries.From goods receipt to goods issue – our portfolio comprises a variety of automatic and manual weighing and inspection solutions, software and services for a wide range of applications and industries.Minebea Intec Aachen GmbH & Co. KG Am Gut Wolf 11, 52070 Aachen, Germany Phone +49.241.1827.0***************************。
Improving Multiclass Text Classification with the Support Vector Machine

Jason D. M. Rennie (jrennie@ ) and Ryan Rifkin (rif@ )

October 16, 2001

Abstract

We compare Naive Bayes and Support Vector Machines on the task of multiclass text classification. Using a variety of approaches to combine the underlying binary classifiers, we find that SVMs substantially outperform Naive Bayes. We present full multiclass results on two well-known text data sets, including the lowest error to date on both data sets. We develop a new indicator of binary performance to show that the SVM's lower multiclass error is a result of its improved binary performance. Furthermore, we demonstrate and explore the surprising result that one-vs-all classification performs favorably compared to other approaches even though it has no error-correcting properties.

Acknowledgements

This report describes research done within the Center for Biological & Computational Learning in the Department of Brain & Cognitive Sciences and in the Artificial Intelligence Laboratory at the Massachusetts Institute of Technology. This research was sponsored by grants from: Office of Naval Research (DARPA) under contract No. N00014-00-1-0907, National Science Foundation (ITR) under contract No. IIS-0085836, National Science Foundation (KDI) under contract No. DMS-9872936, and National Science Foundation under contract No. IIS-9800032. Additional support was provided by: Central Research Institute of Electric Power Industry, Center for e-Business (MIT), Eastman Kodak Company, DaimlerChrysler AG, Compaq, Honda R&D Co., Ltd., Komatsu Ltd., Merrill-Lynch, NEC Fund, Nippon Telegraph & Telephone, Siemens Corporate Research, Inc., Toyota Motor Corporation and The Whitaker Foundation.

1 Introduction and Related Work

Multiclass text classification involves assigning one of m > 2 labels to test documents. There are many ways to approach this problem. One, called Error-Correcting Output Coding (ECOC), is to learn a number of different binary classifiers and use the outputs of those classifiers to determine the label for a new example. An advantage of this method is that the problem of binary classification is well studied. ECOC merely bridges the gap between binary classification and multiclass classification. The Support Vector Machine (SVM) has proven to be an effective binary text classifier. Through experiments on two data sets, we show that the SVM can also be an effective multiclass text classifier when used with ECOC.

In 1997, Joachims published results on a set of binary text classification experiments using the Support Vector Machine. The SVM yielded lower error than many other classification techniques. Yang and Liu followed two years later with experiments of their own on the same data set [1999]. They used improved versions of Naive Bayes (NB) and k-nearest neighbors (kNN) but still found that the SVM performed at least as well as all other classifiers they tried.
Both papers used the SVM for binary text classification, leaving the multiclass problem (assigning a single label to each example) open for future research. Berger and Ghani individually chose to attack the multiclass problem using Error-Correcting Output Codes (ECOC) with NB as the binary classifier [1999][2000]. Ghani found the greatest reduction in error on Industry Sector, a data set with 105 classes. In parallel, Allwein et al. wrote an article on using ECOC with loss functions for multiclass classification in non-text domains [2000]. They presented a "unifying framework" for multiclass classification, encouraging the use of loss functions, especially when the classifier is optimized for a particular one.
We bridge these bodies of work by applying ECOC to multiclass text classification with the SVM as the binary learner. We achieve the lowest-known error rate on both the Industry Sector and 20 Newsgroups data sets. Surprisingly, we find that one-vs-all performs comparably with matrices that have high row- and column-separation. Binary performance plays a key role in these results. We see this through the introduction of a new measure, ROC breakeven. We show that trends in binary performance are directly reflected in multiclass error. In particular, improved binary performance allows for low error with the one-vs-all matrix even though it has no error-correcting properties. Allwein et al. give theoretical arguments that the loss function used should match that used to optimize the binary classifier [2000]. We found that the most important aspect of the loss function is to contribute confidence information, and that in practice, other loss functions perform equally well.

2 Error-Correcting Output Coding
Error-correcting output coding (ECOC) is an approach for solving multiclass categorization problems originally introduced by Dietterich and Bakiri [1991]. It reduces the multiclass problem to a group of binary classification tasks and combines the binary classification results to predict multiclass labels.
$R$ is the code matrix. It defines the data splits which the binary classifier is to learn. $R_{i\cdot}$ is the $i$-th row of the matrix and defines the code for class $i$. $R_{\cdot j}$ is the $j$-th column of the matrix and defines a split for the classifier to learn. $R \in \{-1,+1\}^{m \times l}$, where $m$ is the number of classes and $l$ is the number of partitionings (or the length of each code). In a particular column, $R_{\cdot j}$, the values $-1$ and $+1$ represent the assignment of the classes to one of two partitions.
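To make the row and column conventions concrete, here is a minimal sketch in plain NumPy (illustrative only, not code from the report) that builds the one-vs-all matrix described in the next paragraph and reads off one class code and one split:

```python
import numpy as np

def ova_matrix(m):
    """One-vs-all code matrix: +1 on the diagonal, -1 everywhere else."""
    R = -np.ones((m, m), dtype=int)
    np.fill_diagonal(R, 1)
    return R

R = ova_matrix(4)   # m = 4 classes, l = 4 binary splits
print(R[2, :])      # row R_i. : the code for class 2 -> [-1 -1  1 -1]
print(R[:, 0])      # column R_.j : the two-way split learned by classifier 0
```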
We use three different matrices: 1) the one-vs-all (OVA) matrix, where the diagonal is filled with +1's and which is otherwise filled with −1 entries; 2) the Dense matrix [Allwein et al., 2000], where entries are independently determined by flipping a fair coin, assigning +1 for heads and −1 for tails; and 3) BCH codes, a matrix construction technique that yields high column- and row-separation [Ghani, 2000]. We use the 63-20 and 63-105 BCH codes that Ghani has made available on-line at /∼rayid/ecoc.
Let $(f_1, \dots, f_l)$ be the classifiers trained on the partitionings indicated in the code matrix. Furthermore, let $g: \mathbb{R} \to \mathbb{R}$ be the chosen loss function. Then, the multiclass classification of a new example $x$ is

$\hat{H}(x) = \arg\min_{c \in \{1,\dots,m\}} \sum_{i=1}^{l} g\!\left(f_i(x)\, R_{ci}\right)$.    (1)

Allwein et al. give a full description of the code matrix classification framework and give loss functions for various models [2000]. As suggested by Allwein et al., we use the "hinge" loss, $g(z) = (1-z)_+$, for the SVM. Naive Bayes does not optimize a loss function, but we use the "hinge" loss since it gives the lowest error in practice.

3 Classification Algorithms
Here we briefly describe the two classification algorithms used for experiments in this paper.

3.1 Naive Bayes
Naive Bayes is a simple Bayesian text classification algorithm. It assumes that each term in a document is drawn independently from a multinomial distribution and classifies according to the Bayes optimal decision rule. Each class has its own set of multinomial parameters, $\theta_c$. We estimate parameters via Maximum a Posteriori (MAP) using the training data. A new document, $d$, is assigned the label

$\hat{H}(d) = \arg\max_{c}\ p(d \mid \hat{\theta}_c)\, p(c)$.    (2)

Using $D_c$ to denote the training data for class $c$, we use parameter estimates

$\hat{\theta}_c = \arg\max_{\theta}\ p(D_c \mid \theta)\, p(\theta)$.    (3)

A Dirichlet parameter prior, $p(\theta)$, with hyper-parameters $\alpha_i = 2\ \forall i$ gives the estimate

$\hat{\theta}_{ck} = \dfrac{N_{ck} + 1}{N_c + |V|}$,    (4)

where $N_{ck}$ is the number of occurrences of word $k$ in the training documents of class $c$, $N_c = \sum_k N_{ck}$, and $|V|$ is the vocabulary size; the decision rule is then (2) with $p(d \mid \hat{\theta}_c) \propto \prod_k \hat{\theta}_{ck}^{N_k}$, where $N_k$ counts the occurrences of word $k$ in $d$.

3.2 The Support Vector Machine
The linear SVM solves the optimization problem

$\min_{w,b,\xi}\ \frac{1}{2}\|w\|^2 + C \sum_i \xi_i$,    (5)

with constraints

$y_i (x_i \cdot w + b) \ge 1 - \xi_i \quad \forall i$.    (6)

For more information, see Burges' tutorial and Cristianini and Shawe-Taylor's book [1998][2000]. There are non-linear extensions to the SVM, but Yang and Liu found the linear kernel to outperform non-linear kernels in text classification [1999]. In informal experiments, we also found that linear performs at least as well as non-linear kernels. Hence, we only present linear SVM results. We use the SMART ltc transform; the SvmFu package is used for running experiments [Rifkin, 2000].

3.3 Relative Efficiency of NB and the SVM
Naive Bayes and the linear SVM are both highly efficient and are suitable for large-scale text systems. As they are linear classifiers, both require a simple dot product to classify a document. The implication by Dumais et al. that the linear SVM is faster to train than NB is incorrect [1998]. Training NB is faster since no optimization is required; only a single pass over the training set is needed to gather word counts. The SVM must read in the training set and then perform a quadratic optimization. This can be done quickly when the number of training examples is small (e.g. < 10,000 documents), but can be a bottleneck on larger training sets. We realize speed improvements with chunking and by caching kernel values between the training of binary classifiers.
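Putting §2 and §3 together, the sketch below trains one linear classifier per code-matrix column and decodes new examples with equation (1) under the hinge loss. scikit-learn's LinearSVC stands in here for the SvmFu package used in the report, and the function names are illustrative assumptions, not the authors' code:

```python
import numpy as np
from sklearn.svm import LinearSVC

def ecoc_train(X, y, R, C=1.0):
    """Train one binary classifier per column of the code matrix R.
    y holds integer class ids 0..m-1; column j relabels each example
    by its class's entry R[y, j] in {-1, +1}."""
    return [LinearSVC(C=C).fit(X, R[y, j]) for j in range(R.shape[1])]

def ecoc_predict(X, R, clfs, g=lambda z: np.maximum(0.0, 1.0 - z)):
    """Equation (1): argmin_c sum_i g(f_i(x) * R_ci); hinge loss by default."""
    F = np.column_stack([clf.decision_function(X) for clf in clfs])  # (n, l) margins
    losses = g(F[:, None, :] * R[None, :, :]).sum(axis=2)            # (n, m) total loss
    return losses.argmin(axis=1)
```

With the OVA matrix from the earlier sketch, this reduces to one-vs-all decoding weighted by classifier confidence, which is exactly the setting in which the report finds OVA competitive.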
4 Data Sets
For our experiments, we use two well-known data sets, 20 Newsgroups and Industry Sector [McCallum and Nigam, 1998; Slonim and Tishby, 1999; Berger, 1999; Ghani, 2000]. We use rainbow to pre-process the text documents into feature-vector form [McCallum, 1996]. Documents that are empty after pre-processing (23 for 20 Newsgroups, 16 for Industry Sector) are not used. The code and pre-processed data we used for our experiments can be found at /people/jrennie/ecoc-svm/.
20 Newsgroups is a data set collected and originally used for text classification by Lang [1995]. It contains 19,997 documents evenly distributed across 20 classes. We remove all headers and UU-encoded blocks, and skip stoplist words and words that occur only once. The vocabulary size is 62,061. We randomly select 80% of documents per class for training and the remaining 20% for testing. This is the same pre-processing and splitting as McCallum and Nigam used in their 20 Newsgroups experiments [McCallum and Nigam, 1998].
The Industry Sector data is a collection of corporate web pages organized into categories based on what a company produces or does. There are 9,649 documents and 105 categories. The largest category has 102 documents, the smallest has 27. We remove headers, and skip stoplist words and words that occur only once. Our vocabulary size is 55,197. We randomly select 50% of documents per class for training and the remaining 50% for testing.
We conduct our experiments on 10 test/train splits and report average performance. To gauge performance for different amounts of training documents, we create nested training sets of 800, 250, 100 and 30 documents/class for 20 Newsgroups and (up to) 52, 20, 10 and 3 documents/class for Industry Sector. When a class does not have the specified number of documents, we use all of the training documents for that class.
Text classification experiments often include a feature selection step which may improve classification. McCallum and Nigam performed feature selection experiments on the 20 Newsgroups data set and an alternate version of the Industry Sector data set [1998]. Feature selection did not improve accuracy for 20 Newsgroups and improved accuracy only slightly on Industry Sector. In our own experiments, we found feature selection to only increase error. We use the entire vocabulary in our experiments.
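A minimal sketch of the per-class splitting and nesting just described (the names and the fixed seed are illustrative, not from the report); because the training sets are nested, each smaller set is a prefix of the next larger one, so results across sizes differ only in the amount of data:

```python
import numpy as np

def nested_splits(docs_by_class, train_frac, sizes, rng):
    """docs_by_class maps class -> list of document ids.  Returns one nested
    training set per size in `sizes` plus the held-out test set."""
    train = {s: [] for s in sizes}
    test = []
    for c, docs in docs_by_class.items():
        docs = list(docs)
        rng.shuffle(docs)
        n_train = int(train_frac * len(docs))
        pool, held_out = docs[:n_train], docs[n_train:]
        test.extend(held_out)
        for s in sizes:
            # classes with fewer than s training docs contribute all of them
            train[s].extend(pool[:s])
    return train, test

rng = np.random.default_rng(0)  # one of the 10 repeated splits
# 20 Newsgroups: 80% training pool per class, nested sizes 800/250/100/30
# train, test = nested_splits(docs_by_class, 0.8, [800, 250, 100, 30], rng)
```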
5 Results
The SVM consistently outperforms Naive Bayes, but the difference in performance varies by matrix and amount of training data used. On the Industry Sector data set, the disparity for the OVA matrix between the SVM and Naive Bayes is especially marked. Our analysis (§5.4) shows that this is a direct result of differences in binary performance. A new measure of binary classification, ROC breakeven (§5.3), allows us to see this close connection. Other factors, such as the loss function and the independence of binary classifiers, show less influence. Our results include the best-known multiclass error on both data sets.

5.1 Multiclass Classification
The SVM always outperforms Naive Bayes on the multiclass classification task. Table 1 shows multiclass error for a variety of conditions. The SVM achieves 10-20% lower error than Naive Bayes for most of the 20 Newsgroups experiments. The differences are larger for Industry Sector. When all training documents are used, the SVM achieves 80% and 48% less error for the OVA and 63-column BCH matrices, respectively. On both data sets, the lowest error is found using the SVM with a 63-column BCH matrix. BCH consistently outperforms Dense; the under-performance of the 15-column BCH matrix is a result of using a sub-matrix of the 63-column BCH.
With the SVM, the OVA matrix performs well even though it has no error-correcting properties and experiments on non-text data sets have shown it to perform poorly [Allwein et al., 2000]. OVA performs about as well as BCH even though its row- and column-separation is small. OVA has a row- and column-separation of 2; the 63-column BCH matrix has a row-separation of 31 and a column-separation of 49 (row-separation is $\min_i \min_{j \ne i} \mathrm{Hamming}(R_{i\cdot}, R_{j\cdot})$; column-separation is defined analogously). Some have suggested that OVA is inferior to other methods [Ghani, 2000; Guruswami and Sahai, 1999]. These results show that OVA should not be ignored as a multiclass classification technique. In subsequent sections, we show that the success of OVA is due to strong binary SVM classifiers and the use of confidence information. Naive Bayes performs poorly on Industry Sector with the OVA matrix because of its lackluster binary performance.
Error differences between the SVM and Naive Bayes are smaller on 20 Newsgroups than Industry Sector. We believe this to be partially caused by the fact that nearly 6% (we counted 1,169) of the non-empty 20 Newsgroups documents are exact duplicates, many cross-posts to multiple newsgroups. This limits classification accuracy and makes training more difficult. We ignored documents with duplicate feature vectors when training the SVM since handling duplicate vectors in SVM training is inconvenient.
[Table 1: Results of multiclass classification experiments on the 20 Newsgroups (top) and Industry Sector (bottom) data sets. The top row of each table indicates the number of documents/class used for training (800/250/100/30 and 52/20/10/3, respectively); the second row indicates the binary classifier (NB or SVM); the far left column indicates the multiclass technique (OVA, Dense 15/31/63, BCH 15/31/63). Entries in the table are classification error.]
[Figure 1: Performance of OVA with Naive Bayes as the binary classifier: classification error versus number of training examples per class for 20 Newsgroups and Industry Sector, log scale on both axes. Multiclass error improves as the number of training examples increases, but binary error improves marginally for Industry Sector and degrades for 20 Newsgroups. Guessing achieves a binary error of 0.05 for 20 Newsgroups and approximately 0.01 for Industry Sector. Binary error is loosely tied to binary classifier strength.]
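The row- and column-separation figures quoted above are minimum Hamming distances over the code matrix; a small sketch of that computation:

```python
import numpy as np

def row_separation(R):
    """Minimum Hamming distance between any two distinct rows of R."""
    m = len(R)
    return min((R[i] != R[j]).sum() for i in range(m) for j in range(i + 1, m))

def column_separation(R):
    return row_separation(R.T)

# For the OVA matrix any two rows differ in exactly two positions,
# so both separations are 2, as stated above.
```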
5.2 Comparison with Other Work
Our Naive Bayes results match those of neither Ghani nor Berger [2000][1999]. We did not try to reproduce the results of Berger because his preprocessing code was hand-written and somewhat complex. We made an effort to match those of Ghani, but were not successful. When we used the same pre-processing (stripping HTML markup), we saw higher error, 0.187 for the 63-column BCH matrix, compared to the 0.119 he reports. We worked with Ghani to try to resolve this discrepancy, but were unsuccessful. We found that including HTML gives lower error than skipping HTML.

5.3 ROC Breakeven
The performance of an ECOC classifier is affected by a number of factors: (1) binary classifier performance, (2) independence of the binary classifiers, and (3) the loss function.
[Table 2: binary confusion matrix (true versus predicted +1/−1, with entries tp, fn, fp, tn) used in defining ROC breakeven.]
[Figure 2: ROC breakeven and multiclass error of ECOC with a BCH-63 matrix, using the SVM and Naive Bayes as binary classifiers, versus the number of training examples per class on 20 Newsgroups and Industry Sector (log scale on both axes). ROC breakeven largely dictates multiclass error: trends in the ROC breakeven curves are reflected in the multiclass error curves.]
[Figure 3: ROC breakeven and multiclass error for ECOC with the OVA matrix. Changes in ROC breakeven are directly reflected in multiclass error. Multiclass error changes gradually for 20 Newsgroups, but trends in ROC breakeven are evident in the multiclass error. Log scale on both axes.]
[Table 3: Binary errors and ROC breakeven points for the binary classifiers trained according to the matrix columns (OVA and BCH, NB and SVM, at each training-set size). Results for the Dense matrix are omitted since they are nearly identical to the BCH results. Entries are averaged over all matrix columns and 10 train/test splits. Error is a poor judge of classifier strength for the OVA matrix; error increases with more examples on 20 Newsgroups. Error and ROC breakeven numbers are very similar for the BCH matrix.]

5.4 Binary Performance
A similar relation between ROC breakeven and multiclass error can be seen in the OVA results, as depicted in figure 3. Large differences in ROC breakeven translate into small differences in multiclass error on 20 Newsgroups, but the connection is still visible. The SVM shows the greatest improvements over Naive Bayes on the Industry Sector data set. Using all of the training examples, the SVM achieves an ROC breakeven of 0.036, compared to Naive Bayes's lackluster 0.282. This translates into the large difference in multiclass error. The SVM's binary performance is a result of its ability to handle uneven example distributions. Naive Bayes cannot make effective use of such a training example distribution [Rennie, 2001]. The full binary error and ROC breakeven results can be found in table 3. As we have seen, ROC breakeven helps greatly to explain differences in multiclass error. Trends in ROC breakeven are clearly reflected in multiclass error.
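One plausible reading of ROC breakeven, assumed here purely for illustration, is the equal-error point at which a classifier's false-positive and false-negative rates meet as the decision threshold is swept; that reading matches the lower-is-better, error-like scale of the numbers quoted above. A sketch under that assumption:

```python
import numpy as np

def roc_breakeven(scores, labels):
    """Assumed equal-error-rate reading of ROC breakeven: sweep thresholds
    over the classifier's real-valued outputs and return the rate at which
    the false-positive and false-negative rates cross (lower is better).
    Assumes both classes occur in `labels`."""
    order = np.argsort(-scores)              # most confident positives first
    is_pos = (labels[order] == 1)
    pos, neg = is_pos.sum(), (~is_pos).sum()
    tp = np.cumsum(is_pos)                   # true positives above each threshold
    fp = np.cumsum(~is_pos)                  # false positives above each threshold
    fnr = 1.0 - tp / pos
    fpr = fp / neg
    k = np.argmin(np.abs(fnr - fpr))         # threshold where the two rates meet
    return (fnr[k] + fpr[k]) / 2.0
```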
Independence of the binary classifiers also clearly plays a role: identical Naive Bayes and SVM ROC breakevens do not yield identical multiclass errors. But such effects are secondary when comparing the SVM and Naive Bayes. Next, we show that the loss function plays essentially no role in ECOC text classification.

5.5 The Loss Function
Allwein et al. indicate that it is important for the loss function used for ECOC to be the same as the one used to optimize the classifier [2000]. We find this not to be the case. Rather, we see the most important aspect of the loss function being that it conveys the confidence information of the binary classifier. The simple linear loss function, $g(z) = -z$, performs as well as the hinge loss in our experiments. See table 4 for a full comparison. Both the linear and hinge losses perform better than other loss functions we tried. The Hamming loss, which conveys no confidence information, performed worst. The principal job of the loss function is to convey binary classification confidence information. In the case of the SVM, we find it unnecessary that the loss function match the loss used to optimize the binary classifier.
[Table 4: Multiclass errors on the two data sets for a variety of ECOC classifiers (OVA and BCH-63) under the linear and hinge loss functions. Errors are nearly identical between the hinge and linear loss functions. Although ECOC provides an opportunity for non-linear decision rules through the loss function, the use of a non-linear loss function provides no practical benefit.]
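Reusing ecoc_predict from the earlier sketch, the hinge/linear comparison of table 4 comes down to swapping the loss g; a brief illustrative fragment, not the report's code:

```python
import numpy as np

hinge   = lambda z: np.maximum(0.0, 1.0 - z)   # g(z) = (1 - z)+
linear  = lambda z: -z                         # g(z) = -z
hamming = lambda z: (1.0 - np.sign(z)) / 2.0   # discards confidence; does worst
# pred = ecoc_predict(X_test, R, clfs, g=linear)  # identical call, different loss
```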
6 Conclusion
We have shown that the Support Vector Machine can perform multiclass text classification very effectively when used as part of an ECOC scheme. Its improved ability to perform binary classification gives it much lower error scores than Naive Bayes. In particular, one-vs-all, when used with confidence scores, holds promise as an alternative to more complicated matrix-construction techniques. Another important area of future work is in matrix design. We have shown that high row- and column-separation are not the only aspects of a code matrix that lead to low overall error. A binary classifier that can learn one-vs-all splits better than BCH splits may yield equal or lower error overall. Different data sets will have different natural partitionings. Discovering those before training may lead to reduced multiclass error. Crammer and Singer have proposed an algorithm for relaxing code matrices, but we believe that more work is needed [2001]. It may also be beneficial to weight individual examples, as is done in multiclass boosting algorithms [Guruswami and Sahai, 1999; Schapire, 1997].

References
[Allwein et al., 2000] Erin L. Allwein, Robert E. Schapire, and Yoram Singer. Reducing multiclass to binary: A unifying approach for margin classifiers. Journal of Machine Learning Research, 1:113-141, 2000.
[Berger, 1999] Adam Berger. Error-correcting output coding for text classification. In Proceedings of the IJCAI-99 Workshop on Machine Learning for Information Filtering, Stockholm, Sweden, 1999.
[Burges, 1998] Christopher J. C. Burges. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2(2):121-167, 1998.
[Chakrabarti et al., 1997] Soumen Chakrabarti, Byron Dom, Rakesh Agrawal, and Prabhakar Raghavan. Using taxonomy, discriminants and signatures for navigating in text databases. In Proceedings of the 23rd VLDB Conference, 1997.
[Crammer and Singer, 2001] Koby Crammer and Yoram Singer. Improved output coding for classification using continuous relaxation. In Advances in Neural Information Processing Systems 13 (NIPS*00), 2001.
[Cristianini and Shawe-Taylor, 2000] Nello Cristianini and John Shawe-Taylor. An Introduction to Support Vector Machines. Cambridge University Press, 2000.
[Dietterich and Bakiri, 1991] Tom G. Dietterich and Ghulum Bakiri. Error-correcting output codes: A general method for improving multiclass inductive learning programs. In Proceedings of the Ninth National Conference on Artificial Intelligence, pages 572-577, Anaheim, CA, 1991. AAAI Press.
[Dumais et al., 1998] Susan Dumais, John Platt, David Heckerman, and Mehran Sahami. Inductive learning algorithms and representations for text classification. In Seventh International Conference on Information and Knowledge Management, 1998.
[Ghani, 2000] Rayid Ghani. Using error-correcting codes for text classification. In Proceedings of the Seventeenth International Conference on Machine Learning, 2000.
[Guruswami and Sahai, 1999] Venkatesan Guruswami and Amit Sahai. Multiclass learning, boosting and error-correcting codes. In Proceedings of the Twelfth Annual Conference on Computational Learning Theory, 1999.
[Joachims, 1997] Thorsten Joachims. Text categorization with support vector machines: Learning with many relevant features. Technical report, University of Dortmund, Computer Science Department, 1997.
[Lang, 1995] Ken Lang. Newsweeder: Learning to filter netnews. In Proceedings of the Twelfth International Conference on Machine Learning, pages 331-339, 1995.
[McCallum and Nigam, 1998] Andrew McCallum and Kamal Nigam. A comparison of event models for naive Bayes text classification. In Proceedings of the AAAI-98 Workshop on Learning for Text Categorization, 1998.
[McCallum, 1996] Andrew Kachites McCallum. Bow: A toolkit for statistical language modeling, text retrieval, classification and clustering. /∼mccallum/bow, 1996.
[Rennie, 2001] Jason D. M. Rennie. Improving multi-class text classification with naive Bayes. Master's thesis, Massachusetts Institute of Technology, 2001.
[Rifkin, 2000] Ryan Rifkin. SvmFu. http://fi/SvmFu/, 2000.
[Schapire, 1997] Robert E. Schapire. Using output codes to boost multiclass learning problems. In Machine Learning: Proceedings of the Fourteenth International Conference, 1997.
[Slonim and Tishby, 1999] Noam Slonim and Naftali Tishby. Agglomerative information bottleneck. In Neural Information Processing Systems 12 (NIPS-99), 1999.
[Vapnik, 1995] Vladimir Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, 1995.
[Yang and Liu, 1999] Yiming Yang and Xin Liu. A re-examination of text categorization methods. In Proceedings of the ACM SIGIR Conference on Research and Development in Information Retrieval, 1999.
Sharp SP-1120N network-compatible simplex/duplex scanner manual
Simple and network compatible scanner for business
SP-1120N Image Scanner

The SP-1120N document scanner is a compact and network compatible scanner that provides high-value performance, perfect as an entry-level device for personal and small business use. The SP-1120N scans at 20 ppm / 40 ipm (A4 portrait, 200/300 dpi) and holds 50 sheets in the ADF, making it a great device for moderate batch scanning.

Assistance for safe and reliable scanning
The SP-1120N comes with brake rollers to deliver accurate page separation and prevent multi-feeding errors from occurring. This mechanism, along with our ultrasonic multi-feed sensors, provides users with stable paper feeding, prevents potential information loss, and enables scanning of all documents and cards at the office with maximized precision and efficiency. Application forms and ID cards at the reception desk, for instance, can be scanned in just one batch, allowing for quick processing and reduced customer wait times.

Document software for maximized flexibility
Easily find the information you need using ABBYY® FineReader® Sprint. Specializing in OCR processing, the software is compatible with over 190 languages and generates both searchable PDF and Microsoft Office documents.

Flexible and easy operation to improve daily workflow
High-speed USB 3.2 Gen 1x1 and wired network connection expand the versatility of user operation so that users are no longer confined to operating near the computer. Operation is now possible in a wider variety of locations, with reliable network environment support. The SP-1120N is also compact in size, making it the perfect scanner to use on a desk or in a reception space where room is limited. This entry-level model allows for a simple push-scan from the front panel, for intuitive use. All these features combined enable anyone to operate the scanner anywhere.

PaperStream Capture makes scanning fast and easy
Eliminate the learning curve. PaperStream Capture's user-friendly interface allows easy operation from start to finish. Changing scan settings is simple. Indexing and sorting features include barcode, patch code, and blank page separation, making batch scanning a breeze for operators.

PaperStream ClickScan simplifies scanning
Easy-to-use capture software for any business. Simple scanning interface with 3 steps: scan, select destination & save.

Intelligent image processing with PaperStream IP
PaperStream IP (PSIP) is a TWAIN/ISIS®-compliant driver that cleans up and optimizes scanned images without any advance settings. PSIP features:
• Auto Color Detection to automatically identify the best color mode for the document
• Auto Deskew to automatically correct skewed images
• Blank Page Detection to automatically remove blank pages
• Front and Back Merge to place two sides of a page into one convenient image
• Automatic hole punch removal

Centralized fleet management
Includes Scanner Central Admin Agent to remotely manage your entire fi Series fleet. Effectively allocate your resources based on scan volume, consumables wear, and more.

Network compatible image scanner SP-1120N
For more information visit the Fujitsu Computer Products of America website, email ********************* or call 888-425-8228.

¹ Actual scanning speeds are affected by data transmission and software processing times. ² Indicated speeds are from using JPEG compression. ³ Indicated speeds are from using TIFF CCITT Group 4 compression. ⁴ Selectable maximum density may vary depending on the length of the scanned document. ⁵ Limitations may apply to the size of documents that can be scanned, depending on system environment, when scanning at high resolution (over 600 dpi). ⁶ Capable of scanning documents with dimensions exceeding Legal sizes. Resolutions are limited to 300 dpi or less when scanning documents longer than 355.6 mm (14 in.) up to 863 mm (34 in.), and 200 dpi or less when scanning documents longer than 863 mm (34 in.). ⁷ Thicknesses of 127 to 209 g/m² (34 to 56 lb) can be scanned for A8 (52 x 74 mm / 2.1 x 2.9 in.) sizes. ⁸ ISO 7810 ID-1 type compliant. Capable of scanning embossed cards with total thicknesses of 1.24 mm (0.049 in.) or less. ⁹ Maximum capacity depends on paper weight and may vary. ¹⁰ Capable of setting additional documents while scanning. ¹¹ Numbers are calculated using scanning speeds and typical hours of scanner use, and are not meant to guarantee daily volume or unit durability. ¹² Scanning speeds slow down when using USB 1.1. ¹³ When using USB, the device must be connected to the USB hub connected to the PC port. If using USB 3.2 Gen 1x1 (USB 3.0) / USB 2.0, USB port and hub compatibility is required. ¹⁴ Excludes the ADF paper chute and stacker. ¹⁵ Functions equivalent to those offered by PaperStream IP may not be available with the WIA Driver. ¹⁶ Refer to the SP Series Support Site for software downloads. ¹⁷ In-box software only includes driver support for macOS and Ubuntu.

Trademarks
ABBYY and FineReader are registered trademarks of ABBYY Software, Ltd., which may be registered in some jurisdictions. ISIS is a trademark of Open Text. Microsoft, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.
ENERGY STAR®: PFU Limited, a Fujitsu company, has determined that this product meets the ENERGY STAR® guidelines for energy efficiency. ENERGY STAR® is a U.S. registered mark.
©2021 Fujitsu Computer Products of America, Inc. Fujitsu and the Fujitsu logo are registered trademarks of Fujitsu Limited. All text, graphics, trademarks, and logos contained herein related to Fujitsu, PFU, or Fujitsu Computer Products of America, Inc. ("FCPA") are owned, controlled or licensed by or to FCPA, with all rights reserved. All other text, graphics, trademarks, service marks and logos used herein are the copyrights, trademarks, service marks or logos of their respective owners.

Technical Information
Scanner Type: ADF (Automatic Document Feeder), Duplex
Scanning Speed¹ (A4 portrait; Color², Grayscale², Monochrome³): Simplex 20 ppm, Duplex 40 ipm (200/300 dpi)
Image Sensor Type: Single-line CMOS-CIS x 2 (front x 1, back x 1)
Light Source: RGB LED x 2 (front x 1, back x 1)
Optical Resolution: 600 dpi
Output Resolution⁴ (Color/Grayscale/Monochrome): 50 to 600 dpi (adjustable in 1 dpi increments); up to 1,200 dpi via driver⁵
Output Format: Color 24-bit, Grayscale 8-bit, Monochrome 1-bit
Background Colors: White
Document Size: Maximum 216 x 355.6 mm (8.5 x 14 in.); Minimum 52 x 74 mm (2.0 x 2.9 in.); Long Page Scanning⁶ maximum 3,048 mm (120 in.)
Paper Weight (Thickness): Paper 50 to 209 g/m² (13.4 to 56 lb)⁷; Plastic Card 0.76 mm (0.0299 in.) or less⁸
ADF Capacity⁹ ¹⁰: 50 sheets (A4 80 g/m² or Letter 20 lb)
Expected Daily Volume¹¹: 3,000 sheets
Interface¹² ¹³: USB 3.2 Gen 1x1 / USB 2.0 / USB 1.1; Ethernet 10BASE-T, 100BASE-TX, 1000BASE-T
Power Requirements: AC 100 to 240 V ±10%
Power Consumption: Operating mode 18 W or less; Sleep mode 2 W or less; Auto Standby (Off) mode 0.4 W or less
Operating Environment: Temperature 5 to 35 °C (41 to 95 °F); Relative humidity 20 to 80% (non-condensing)
Environmental Compliance: ENERGY STAR® 3.0, RoHS
Dimensions¹⁴ (Width x Depth x Height): 298 x 135 x 133 mm (11.7 x 5.3 x 5.2 in.)
Weight: 2.5 kg (5.5 lb)
Included in the Box: AC adapter, USB cable, Setup DVD-ROM
Included Software/Drivers: PaperStream IP for SP Series (TWAIN / TWAIN x64 / ISIS), WIA Driver¹⁵, PaperStream Capture, PaperStream ClickScan, Software Operation Panel, Error Recovery Guide, ABBYY FineReader for ScanSnap¹⁶, Scanner Central Admin, ABBYY™ FineReader Sprint™, Network Setup Tool for SP Series, SP Series Online Update
Supported Operating Systems: macOS V10.15 Catalina, V11 Big Sur¹⁷; Linux (Ubuntu)¹⁷; Windows® 10; Windows® 8.1; Windows® 7; Windows Server® 2019; Windows Server® 2016; Windows Server® 2012 R2; Windows Server® 2012; Windows Server® 2008 R2
Image Processing Functions: Multi image output, Automatic color detection, Automatic page size detection, Blank page detection, Dynamic threshold (iDTC), Advanced DTC, SDTC, Error diffusion, Dither, De-Screen, Emphasis, Dropout color (None/Red/Green/Blue/White/Saturation/Custom), sRGB output, Split image, De-Skew, Edge filler, Vertical streaks reduction, Digital endorser, Background pattern removal, Character thickness, Character augmentation, Character extraction

Fujitsu industry-leading support keeps digital transformation projects on time and on budget: U.S.-based support, specialized teams, and flexible service programs. Fujitsu Imaging Solutions provide superior engineering at the forefront of innovation through engineering passion and dedication, human centric design, and worldwide reliability.

Advance Exchange (SP1120N-AEMYNBD-3): 3-year scanner contract; a replacement unit is shipped overnight*
PaperStream Capture Pro (PSCP-WG-0001): PaperStream Capture Pro optional license
Roller Set, Pick Roller and Brake Roller (PA03708-0001): replace every 100,000 sheets or one year
ScanAid Kit (CG01000-287201): consumable kit with instructions and cleaning supplies

Duplex: scans both sides. Scans plastic cards, flat and embossed. 600 dpi optical resolution. 24-bit color scanning supported. TWAIN & ISIS supported. Industry-leading Net Promoter Score.
* Replacement units shipped overnight for all requests received by 2 P.M. PST.
Insist on Genuine Fujitsu Service to keep your scanner running at its best.
LS Electric standard AC drive S100 manual
S100
0.4~2.2 kW (0.5~3 HP), 1-phase 200~240 V
0.4~15 kW (0.5~20 HP), 3-phase 200~240 V
0.4~75 kW (0.5~100 HP), 3-phase 380~480 V
IP66 / NEMA 4X: 0.4~2.2 kW (0.5~3 HP), 1-phase 200~240 V
IP66 / NEMA 4X: 0.4~15 kW (0.5~20 HP), 3-phase 200~240 V
IP66 / NEMA 4X: 0.4~22 kW (0.5~30 HP), 3-phase 380~480 V

Contents: S100 Features · IP66/NEMA 4X · Model and Type · Specifications · Wiring / Terminal Configuration · Keypad Functions · Peripheral Devices · Dimensions

Powerful sensorless control and a diverse range of user-friendly functions deliver added value to our customers. Meet the new standard drive S100 by LS for the global market. Strong power with a compact size!

S100 High-performance Standard Drive
Safety Functions: • Built-in safe torque off (STO) • Redundant input circuit
Strong Performance: • Sensorless control functions • Starting torque (200% / 0.5 Hz)
Space Efficient Design: • Side-by-side installation • Decreased dimensions
Suitable for Users: • Various field networks
Standard Compliance: • Built-in EMC filter • International standards
Scan the QR code on the front of your drive to check the key usage information.

Specialized Features
S100 improves user convenience with the Smart Copier.
[Smart Copier flow chart: drive control part and drive input/output part (flash memory) auto-synchronize at power-on; the Smart Copier reads parameter memory from, and writes parameter memory to, the drive input/output part.]
Functions Without Power Input: The drive does not need to be powered when using the Smart Copier.
P2P Function Embedded: I/O inputs and outputs can be shared between master and slave drives (RS485 wiring required).
LED Lamp Feedback: The run LED flickers during normal operation; the error LED flickers when events such as communication errors occur.
Read/Write Function of Parameters: Parameters can be copied from the drive to the Smart Copier and vice versa, simply with the keypad.
Multi-Keypad Function: Multiple drives can be controlled and monitored with a single keypad (RS485 wiring required).
Simple Installation: Parameters saved in the Smart Copier can be downloaded to both the drive I/O and the control part.
User Sequence Function: Simple PLC sequences can be operated with various function block combinations.
User Convenience: RJ45 (included if Smart Copiers are purchased).

Suitable for Users
S100 offers a variety of customer conveniences to compete in the global market.
Various Field Bus Options: ① Profibus-DP ② Ethernet IP ③ Modbus TCP ④ CANopen ⑤ EtherCAT ⑥ PROFInet. Possible to connect to a variety of fieldbus networks; easy to install and use.
Conduit Kit: Acquired UL open type & enclosed type 1 certification. ※ UL open type is offered as default. ※ UL enclosed type 1 needs conduit kit (option) installation.
Flange Type: The heat sink can be mounted outside of the panel in case the space is limited.
Extension I/O Option Card: • Relay output: 2 ea (NO/NC selectable) • Digital input: 3 ea (NPN/PNP selectable) • Analog I/O: 2 ea / 1 ea
Simple Cooling Fan Replacement: Replaceable fan without complete disassembly; easy maintenance and mounting.
Multi-Keypad Function (master/slave): A single LCD keypad can be used to set up the parameters of RS485-connected drives; parameter change with a keypad. ※ The LCD keypad (same as the iS7 model) enables handy parameter set-up. ※ Multi-language support will be available.
DriveView9 connection with the RJ45 port: iS7 normal cable or RJ45-to-USB cable. * RJ45-to-USB cable: available as an option.

Space Efficient Design
S100 increases the efficiency of the control panel.
Side-by-side Installation: Minimized distance between drives enables panel size reduction for multiple-drive installations. [Installation clearance diagram: iG5A (500 mm) vs. S100 (404 mm).]
Smaller Size: Main components have been optimally designed through thermal analysis and 3D design, reducing dimensions by up to 60% compared to iG5A (400 V, 11 kW basis).

Standard Compliance
S100 has built-in safety functions suitable for modern safety standards.
[Safety wiring diagram: main power, safety relay, SC 24 V, SA/SB inputs, gate block, motor, control circuit.] ※ A safety relay needs to be purchased separately.
Built-in EMC Filter: Meets the electrical noise reduction regulation. Related standards: 2nd Environment / Category C3 (Class A); CE certified. ※ 1-phase 200 V 0.4~2.2 kW (C2) / 3-phase 400 V 0.4~45 kW (C3)
Dual Rating Operation: Designed for heavy-duty and normal-duty applications. Overload capacity: heavy duty, 150% of rated current for 60 seconds; normal duty, 120% of rated current for 60 seconds. ※ Excludes IP66/NEMA 4X
Built-in Safe Torque Off (STO): The safety input function meets EN ISO 13849-1 PLd and EN 61508 SIL2 (EN 60204-1, stop category 0). This feature is standard and enables compliance with current safety standards. Redundant input circuit.
Built-in DC Reactor: Effective in improving power factor and decreasing THD. ※ 3-phase 400 V 30~75 kW
Selectable Rotary / Standstill Auto-tuning: Standstill or rotary auto-tuning options are available as standard to find motor constants with or without rotating the motor, for optimized motor performance.
Global Compliance: Global standard compliance.

S100 IP66/NEMA 4X Series
Protected against foreign substances such as fine dust and high-pressure water spray. The drive for harsh ambient conditions.
• Satisfies NEMA standard type 4X for indoor use
• Satisfies IEC 60529 standard IP66
• 1Ø 200 V 0.4~2.2 kW / 3Ø 200 V 0.4~15 kW / 3Ø 400 V 0.4~22 kW
• PDS / Non-PDS (PDS: Power Disconnect Switch)

※ (F): Built-in EMC or non-EMC type selectable. ※ 55~75 kW satisfies EMC class 3.
* For the rated capacity, 200 and 400 V class input capacities are based on 220 and 440 V, respectively.
* The rated output current is limited depending on the carrier frequency setting (CN-04).
* The output voltage becomes 20~40% lower during no-load operation to protect the drive from the impact of the motor closing and opening (0.4~4.0 kW models only).
* Dual rating is supported except for IP66/NEMA 4X.
Ask a sales person about the sensorless function.

[Power terminal wiring diagrams for 0.4~22 kW and 30~75 kW: R(L1), S(L2), T(L3) mains input; P1(+), P2(+), B, N(−) DC bus with short bar, DC reactor and braking resistor; P2(+), P3(+), N(−) with external braking unit and DC input; U, V, W motor output.]
2) Use copper wires with 600 V, 75 °C specification.
[Control terminal diagrams, standard I/O (0.4~22 kW) and multiple I/O: S+, S−, SG; VR, V1, CM, I2, AO; multi-function inputs P1~P5 (standard) or P1~P7 (multiple); SA, SB, SC; pulse input TI and pulse output TO; relay output A1/B1/C1; Q1, EG, 24.]
2) Standard I/O provides multi-function terminals only up to P5.
3) With standard I/O, pulse input TI and multi-function terminal P5 share the same terminal; set In.69 (P5 define) to 54 (TI).
4) With standard I/O, pulse output TO and multi-function output Q1 share the same terminal; set OU.33 (Q1 define) to 38 (TO).
※ LSLV-S100 can be supplied with either standard I/O or multiple I/O. ※ The I/O board is supplied built in; the iS7 LCD loader can be mounted on the front of the drive. ※ NC: terminal not in use.

[Keypad display: FWD/REV and SET indicators; the display flickers during acceleration/deceleration operation.] Learn how to operate an S100 with a smart device (Android).
1) Indicates only the target frequency when the LCD keypad is installed. The first code of the operation group is the place to set a target frequency. It is set to 0.00 when shipped from the factory; if a user changes the operating frequency, the changed operating frequency is indicated.
2) Visible only when the function item of the In.65~71 multi-function input terminals is set to no. 26 (2nd motor).
Therefore, an over-current trip (OCT) or over-voltage trip (OVT) may occur when there is a low-resistance ground fault.
1) If you do not want to enter the modified value, press the left, right, up or down keys (◀)(▶)(▲)(▼), any key except the enter key (ENT), in the ON condition to cancel the input.
calculated at twice the standard.
• The resistance / rated capacity / braking torque / %ED values of the DB resistor are valid only for the type A DB unit; for the DB resistor values of type B and C units, refer to the DB unit manual.
• The rated wattage of the DBU has to be doubled when %ED is doubled.
• An optional dynamic braking unit is not necessary for S100 capacities below 22 kW, because a dynamic braking unit is built in.
• Refer to the dynamic braking unit manual for the recommended dynamic braking units in the table above, as the table is subject to change.
[Terminal arrangement diagram: Groups 1~5; braking unit.]

LSLV0022S100-1 / 0037S100-2 / 0037S100-4 / 0040S100-2 / 0040S100-4
LSLV0004S100-1 / 0004S100-4 / 0008S100-4 (built-in EMC)
LSLV0008S100-1 / 0015S100-1 / 0015S100-4 / 0022S100-4 (built-in EMC)
LSLV0022S100-1 / 0037S100-4 / 0040S100-4 (built-in EMC)
LSLV0055S100-2 / 0075S100-2 / 0055S100-4 / 0075S100-4
LSLV0110S100-2 / 0110S100-4 / 0150S100-4
LSLV0150S100-2 / 0185S100-4 / 0220S100-4
LSLV0300S100-4
LSLV0370S100-4 / 0450S100-4
LSLV0550S100-4 / 0750S100-4
IP66 (NEMA 4X) 0.4~4.0 kW
IP66 (NEMA 4X) 5.5~7.5 kW
IP66 (NEMA 4X) 11~22 kW

Communication Option Module (installation example)
Conduit Option: 1) EMC filter built-in class. ※ Conduit sizes: 1/2 in. (Ø 22.3 mm), 3/4 in. (Ø 28.6 mm), 1 in. (Ø 35 mm), 1-1/4 in. (Ø 44.5 mm)
Conduit Option: 1) EMC filter built-in class 3. ※ Conduit sizes: 1/2 in. (Ø 22.3 mm), 1-1/4 in. (Ø 44.5 mm), 1-1/2 in. (Ø 50.8 mm), 2 in. (Ø 63.5 mm)
Flange Option
Machine learning English vocabulary
目录第一部分 (3)第二部分 (12)Letter A (12)Letter B (14)Letter C (15)Letter D (17)Letter E (19)Letter F (20)Letter G (21)Letter H (22)Letter I (23)Letter K (24)Letter L (24)Letter M (26)Letter N (27)Letter O (29)Letter P (29)Letter R (31)Letter S (32)Letter T (35)Letter U (36)Letter W (37)Letter Z (37)第三部分 (37)A (37)B (38)C (38)D (40)E (40)F (41)G (41)H (42)L (42)J (43)L (43)M (43)N (44)O (44)P (44)Q (45)R (46)S (46)U (47)V (48)第一部分[ ] intensity 强度[ ] Regression 回归[ ] Loss function 损失函数[ ] non-convex 非凸函数[ ] neural network 神经网络[ ] supervised learning 监督学习[ ] regression problem 回归问题处理的是连续的问题[ ] classification problem 分类问题处理的问题是离散的而不是连续的回归问题和分类问题的区别应该在于回归问题的结果是连续的,分类问题的结果是离散的。
[ ]discreet value 离散值[ ] support vector machines 支持向量机,用来处理分类算法中输入的维度不单一的情况(甚至输入维度为无穷)[ ] learning theory 学习理论[ ] learning algorithms 学习算法[ ] unsupervised learning 无监督学习[ ] gradient descent 梯度下降[ ] linear regression 线性回归[ ] Neural Network 神经网络[ ] gradient descent 梯度下降监督学习的一种算法,用来拟合的算法[ ] normal equations[ ] linear algebra 线性代数原谅我英语不太好[ ] superscript上标[ ] exponentiation 指数[ ] training set 训练集合[ ] training example 训练样本[ ] hypothesis 假设,用来表示学习算法的输出,叫我们不要太纠结H的意思,因为这只是历史的惯例[ ] LMS algorithm “least mean squares” 最小二乘法算法[ ] batch gradient descent 批量梯度下降,因为每次都会计算最小拟合的方差,所以运算慢[ ] constantly gradient descent 字幕组翻译成“随机梯度下降” 我怎么觉得是“常量梯度下降”也就是梯度下降的运算次数不变,一般比批量梯度下降速度快,但是通常不是那么准确[ ] iterative algorithm 迭代算法[ ] partial derivative 偏导数[ ] contour 等高线[ ] quadratic function 二元函数[ ] locally weighted regression局部加权回归[ ] underfitting欠拟合[ ] overfitting 过拟合[ ] non-parametric learning algorithms 无参数学习算法[ ] parametric learning algorithm 参数学习算法[ ] other[ ] activation 激活值[ ] activation function 激活函数[ ] additive noise 加性噪声[ ] autoencoder 自编码器[ ] Autoencoders 自编码算法[ ] average firing rate 平均激活率[ ] average sum-of-squares error 均方差[ ] backpropagation 后向传播[ ] basis 基[ ] basis feature vectors 特征基向量[50 ] batch gradient ascent 批量梯度上升法[ ] Bayesian regularization method 贝叶斯规则化方法[ ] Bernoulli random variable 伯努利随机变量[ ] bias term 偏置项[ ] binary classfication 二元分类[ ] class labels 类型标记[ ] concatenation 级联[ ] conjugate gradient 共轭梯度[ ] contiguous groups 联通区域[ ] convex optimization software 凸优化软件[ ] convolution 卷积[ ] cost function 代价函数[ ] covariance matrix 协方差矩阵[ ] DC component 直流分量[ ] decorrelation 去相关[ ] degeneracy 退化[ ] demensionality reduction 降维[ ] derivative 导函数[ ] diagonal 对角线[ ] diffusion of gradients 梯度的弥散[ ] eigenvalue 特征值[ ] eigenvector 特征向量[ ] error term 残差[ ] feature matrix 特征矩阵[ ] feature standardization 特征标准化[ ] feedforward architectures 前馈结构算法[ ] feedforward neural network 前馈神经网络[ ] feedforward pass 前馈传导[ ] fine-tuned 微调[ ] first-order feature 一阶特征[ ] forward pass 前向传导[ ] forward propagation 前向传播[ ] Gaussian prior 高斯先验概率[ ] generative model 生成模型[ ] gradient descent 梯度下降[ ] Greedy layer-wise training 逐层贪婪训练方法[ ] grouping matrix 分组矩阵[ ] Hadamard product 阿达马乘积[ ] Hessian matrix Hessian 矩阵[ ] hidden layer 隐含层[ ] hidden units 隐藏神经元[ ] Hierarchical grouping 层次型分组[ ] higher-order features 更高阶特征[ ] highly non-convex optimization problem 高度非凸的优化问题[ ] histogram 直方图[ ] hyperbolic tangent 双曲正切函数[ ] hypothesis 估值,假设[ ] identity activation function 恒等激励函数[ ] IID 独立同分布[ ] illumination 照明[100 ] inactive 抑制[ ] independent component analysis 独立成份分析[ ] input domains 输入域[ ] input layer 输入层[ ] intensity 亮度/灰度[ ] intercept term 截距[ ] KL divergence 相对熵[ ] KL divergence KL分散度[ ] k-Means K-均值[ ] learning rate 学习速率[ ] least squares 最小二乘法[ ] linear correspondence 线性响应[ ] linear superposition 线性叠加[ ] line-search algorithm 线搜索算法[ ] local mean subtraction 局部均值消减[ ] local optima 局部最优解[ ] logistic regression 逻辑回归[ ] loss function 损失函数[ ] low-pass filtering 低通滤波[ ] magnitude 幅值[ ] MAP 极大后验估计[ ] maximum likelihood estimation 极大似然估计[ ] mean 平均值[ ] MFCC Mel 倒频系数[ ] multi-class classification 多元分类[ ] neural networks 神经网络[ ] neuron 神经元[ ] Newton’s method 牛顿法[ ] non-convex function 非凸函数[ ] non-linear feature 非线性特征[ ] norm 范式[ ] norm bounded 有界范数[ ] norm constrained 范数约束[ ] normalization 归一化[ ] numerical roundoff errors 数值舍入误差[ ] numerically checking 数值检验[ ] numerically reliable 数值计算上稳定[ ] object detection 物体检测[ ] objective function 目标函数[ ] off-by-one error 缺位错误[ ] orthogonalization 正交化[ ] output layer 输出层[ ] overall cost function 总体代价函数[ ] over-complete 
basis 超完备基[ ] over-fitting 过拟合[ ] parts of objects 目标的部件[ ] part-whole decompostion 部分-整体分解[ ] PCA 主元分析[ ] penalty term 惩罚因子[ ] per-example mean subtraction 逐样本均值消减[150 ] pooling 池化[ ] pretrain 预训练[ ] principal components analysis 主成份分析[ ] quadratic constraints 二次约束[ ] RBMs 受限Boltzman机[ ] reconstruction based models 基于重构的模型[ ] reconstruction cost 重建代价[ ] reconstruction term 重构项[ ] redundant 冗余[ ] reflection matrix 反射矩阵[ ] regularization 正则化[ ] regularization term 正则化项[ ] rescaling 缩放[ ] robust 鲁棒性[ ] run 行程[ ] second-order feature 二阶特征[ ] sigmoid activation function S型激励函数[ ] significant digits 有效数字[ ] singular value 奇异值[ ] singular vector 奇异向量[ ] smoothed L1 penalty 平滑的L1范数惩罚[ ] Smoothed topographic L1 sparsity penalty 平滑地形L1稀疏惩罚函数[ ] smoothing 平滑[ ] Softmax Regresson Softmax回归[ ] sorted in decreasing order 降序排列[ ] source features 源特征[ ] sparse autoencoder 消减归一化[ ] Sparsity 稀疏性[ ] sparsity parameter 稀疏性参数[ ] sparsity penalty 稀疏惩罚[ ] square function 平方函数[ ] squared-error 方差[ ] stationary 平稳性(不变性)[ ] stationary stochastic process 平稳随机过程[ ] step-size 步长值[ ] supervised learning 监督学习[ ] symmetric positive semi-definite matrix 对称半正定矩阵[ ] symmetry breaking 对称失效[ ] tanh function 双曲正切函数[ ] the average activation 平均活跃度[ ] the derivative checking method 梯度验证方法[ ] the empirical distribution 经验分布函数[ ] the energy function 能量函数[ ] the Lagrange dual 拉格朗日对偶函数[ ] the log likelihood 对数似然函数[ ] the pixel intensity value 像素灰度值[ ] the rate of convergence 收敛速度[ ] topographic cost term 拓扑代价项[ ] topographic ordered 拓扑秩序[ ] transformation 变换[200 ] translation invariant 平移不变性[ ] trivial answer 平凡解[ ] under-complete basis 不完备基[ ] unrolling 组合扩展[ ] unsupervised learning 无监督学习[ ] variance 方差[ ] vecotrized implementation 向量化实现[ ] vectorization 矢量化[ ] visual cortex 视觉皮层[ ] weight decay 权重衰减[ ] weighted average 加权平均值[ ] whitening 白化[ ] zero-mean 均值为零第二部分Letter A[ ] Accumulated error backpropagation 累积误差逆传播[ ] Activation Function 激活函数[ ] Adaptive Resonance Theory/ART 自适应谐振理论[ ] Addictive model 加性学习[ ] Adversarial Networks 对抗网络[ ] Affine Layer 仿射层[ ] Affinity matrix 亲和矩阵[ ] Agent 代理/ 智能体[ ] Algorithm 算法[ ] Alpha-beta pruning α-β剪枝[ ] Anomaly detection 异常检测[ ] Approximation 近似[ ] Area Under ROC Curve/AUC Roc 曲线下面积[ ] Artificial General Intelligence/AGI 通用人工智能[ ] Artificial Intelligence/AI 人工智能[ ] Association analysis 关联分析[ ] Attention mechanism 注意力机制[ ] Attribute conditional independence assumption 属性条件独立性假设[ ] Attribute space 属性空间[ ] Attribute value 属性值[ ] Autoencoder 自编码器[ ] Automatic speech recognition 自动语音识别[ ] Automatic summarization 自动摘要[ ] Average gradient 平均梯度[ ] Average-Pooling 平均池化Letter B[ ] Backpropagation Through Time 通过时间的反向传播[ ] Backpropagation/BP 反向传播[ ] Base learner 基学习器[ ] Base learning algorithm 基学习算法[ ] Batch Normalization/BN 批量归一化[ ] Bayes decision rule 贝叶斯判定准则[250 ] Bayes Model Averaging/BMA 贝叶斯模型平均[ ] Bayes optimal classifier 贝叶斯最优分类器[ ] Bayesian decision theory 贝叶斯决策论[ ] Bayesian network 贝叶斯网络[ ] Between-class scatter matrix 类间散度矩阵[ ] Bias 偏置/ 偏差[ ] Bias-variance decomposition 偏差-方差分解[ ] Bias-Variance Dilemma 偏差–方差困境[ ] Bi-directional Long-Short Term Memory/Bi-LSTM 双向长短期记忆[ ] Binary classification 二分类[ ] Binomial test 二项检验[ ] Bi-partition 二分法[ ] Boltzmann machine 玻尔兹曼机[ ] Bootstrap sampling 自助采样法/可重复采样/有放回采样[ ] Bootstrapping 自助法[ ] Break-Event Point/BEP 平衡点Letter C[ ] Calibration 校准[ ] Cascade-Correlation 级联相关[ ] Categorical attribute 离散属性[ ] Class-conditional probability 类条件概率[ ] Classification and regression tree/CART 分类与回归树[ ] Classifier 分类器[ ] Class-imbalance 类别不平衡[ ] Closed -form 闭式[ ] Cluster 
簇/类/集群[ ] Cluster analysis 聚类分析[ ] Clustering 聚类[ ] Clustering ensemble 聚类集成[ ] Co-adapting 共适应[ ] Coding matrix 编码矩阵[ ] COLT 国际学习理论会议[ ] Committee-based learning 基于委员会的学习[ ] Competitive learning 竞争型学习[ ] Component learner 组件学习器[ ] Comprehensibility 可解释性[ ] Computation Cost 计算成本[ ] Computational Linguistics 计算语言学[ ] Computer vision 计算机视觉[ ] Concept drift 概念漂移[ ] Concept Learning System /CLS 概念学习系统[ ] Conditional entropy 条件熵[ ] Conditional mutual information 条件互信息[ ] Conditional Probability Table/CPT 条件概率表[ ] Conditional random field/CRF 条件随机场[ ] Conditional risk 条件风险[ ] Confidence 置信度[ ] Confusion matrix 混淆矩阵[300 ] Connection weight 连接权[ ] Connectionism 连结主义[ ] Consistency 一致性/相合性[ ] Contingency table 列联表[ ] Continuous attribute 连续属性[ ] Convergence 收敛[ ] Conversational agent 会话智能体[ ] Convex quadratic programming 凸二次规划[ ] Convexity 凸性[ ] Convolutional neural network/CNN 卷积神经网络[ ] Co-occurrence 同现[ ] Correlation coefficient 相关系数[ ] Cosine similarity 余弦相似度[ ] Cost curve 成本曲线[ ] Cost Function 成本函数[ ] Cost matrix 成本矩阵[ ] Cost-sensitive 成本敏感[ ] Cross entropy 交叉熵[ ] Cross validation 交叉验证[ ] Crowdsourcing 众包[ ] Curse of dimensionality 维数灾难[ ] Cut point 截断点[ ] Cutting plane algorithm 割平面法Letter D[ ] Data mining 数据挖掘[ ] Data set 数据集[ ] Decision Boundary 决策边界[ ] Decision stump 决策树桩[ ] Decision tree 决策树/判定树[ ] Deduction 演绎[ ] Deep Belief Network 深度信念网络[ ] Deep Convolutional Generative Adversarial Network/DCGAN 深度卷积生成对抗网络[ ] Deep learning 深度学习[ ] Deep neural network/DNN 深度神经网络[ ] Deep Q-Learning 深度Q 学习[ ] Deep Q-Network 深度Q 网络[ ] Density estimation 密度估计[ ] Density-based clustering 密度聚类[ ] Differentiable neural computer 可微分神经计算机[ ] Dimensionality reduction algorithm 降维算法[ ] Directed edge 有向边[ ] Disagreement measure 不合度量[ ] Discriminative model 判别模型[ ] Discriminator 判别器[ ] Distance measure 距离度量[ ] Distance metric learning 距离度量学习[ ] Distribution 分布[ ] Divergence 散度[350 ] Diversity measure 多样性度量/差异性度量[ ] Domain adaption 领域自适应[ ] Downsampling 下采样[ ] D-separation (Directed separation)有向分离[ ] Dual problem 对偶问题[ ] Dummy node 哑结点[ ] Dynamic Fusion 动态融合[ ] Dynamic programming 动态规划Letter E[ ] Eigenvalue decomposition 特征值分解[ ] Embedding 嵌入[ ] Emotional analysis 情绪分析[ ] Empirical conditional entropy 经验条件熵[ ] Empirical entropy 经验熵[ ] Empirical error 经验误差[ ] Empirical risk 经验风险[ ] End-to-End 端到端[ ] Energy-based model 基于能量的模型[ ] Ensemble learning 集成学习[ ] Ensemble pruning 集成修剪[ ] Error Correcting Output Codes/ECOC 纠错输出码[ ] Error rate 错误率[ ] Error-ambiguity decomposition 误差-分歧分解[ ] Euclidean distance 欧氏距离[ ] Evolutionary computation 演化计算[ ] Expectation-Maximization 期望最大化[ ] Expected loss 期望损失[ ] Exploding Gradient Problem 梯度爆炸问题[ ] Exponential loss function 指数损失函数[ ] Extreme Learning Machine/ELM 超限学习机Letter F[ ] Factorization 因子分解[ ] False negative 假负类[ ] False positive 假正类[ ] False Positive Rate/FPR 假正例率[ ] Feature engineering 特征工程[ ] Feature selection 特征选择[ ] Feature vector 特征向量[ ] Featured Learning 特征学习[ ] Feedforward Neural Networks/FNN 前馈神经网络[ ] Fine-tuning 微调[ ] Flipping output 翻转法[ ] Fluctuation 震荡[ ] Forward stagewise algorithm 前向分步算法[ ] Frequentist 频率主义学派[ ] Full-rank matrix 满秩矩阵[400 ] Functional neuron 功能神经元Letter G[ ] Gain ratio 增益率[ ] Game theory 博弈论[ ] Gaussian kernel function 高斯核函数[ ] Gaussian Mixture Model 高斯混合模型[ ] General Problem Solving 通用问题求解[ ] Generalization 泛化[ ] Generalization error 泛化误差[ ] Generalization error bound 泛化误差上界[ ] Generalized Lagrange function 广义拉格朗日函数[ ] Generalized linear model 广义线性模型[ ] Generalized Rayleigh quotient 广义瑞利商[ ] Generative Adversarial Networks/GAN 生成对抗网络[ ] Generative 
Model 生成模型[ ] Generator 生成器[ ] Genetic Algorithm/GA 遗传算法[ ] Gibbs sampling 吉布斯采样[ ] Gini index 基尼指数[ ] Global minimum 全局最小[ ] Global Optimization 全局优化[ ] Gradient boosting 梯度提升[ ] Gradient Descent 梯度下降[ ] Graph theory 图论[ ] Ground-truth 真相/真实Letter H[ ] Hard margin 硬间隔[ ] Hard voting 硬投票[ ] Harmonic mean 调和平均[ ] Hesse matrix 海塞矩阵[ ] Hidden dynamic model 隐动态模型[ ] Hidden layer 隐藏层[ ] Hidden Markov Model/HMM 隐马尔可夫模型[ ] Hierarchical clustering 层次聚类[ ] Hilbert space 希尔伯特空间[ ] Hinge loss function 合页损失函数[ ] Hold-out 留出法[ ] Homogeneous 同质[ ] Hybrid computing 混合计算[ ] Hyperparameter 超参数[ ] Hypothesis 假设[ ] Hypothesis test 假设验证Letter I[ ] ICML 国际机器学习会议[450 ] Improved iterative scaling/IIS 改进的迭代尺度法[ ] Incremental learning 增量学习[ ] Independent and identically distributed/i.i.d. 独立同分布[ ] Independent Component Analysis/ICA 独立成分分析[ ] Indicator function 指示函数[ ] Individual learner 个体学习器[ ] Induction 归纳[ ] Inductive bias 归纳偏好[ ] Inductive learning 归纳学习[ ] Inductive Logic Programming/ILP 归纳逻辑程序设计[ ] Information entropy 信息熵[ ] Information gain 信息增益[ ] Input layer 输入层[ ] Insensitive loss 不敏感损失[ ] Inter-cluster similarity 簇间相似度[ ] International Conference for Machine Learning/ICML 国际机器学习大会[ ] Intra-cluster similarity 簇内相似度[ ] Intrinsic value 固有值[ ] Isometric Mapping/Isomap 等度量映射[ ] Isotonic regression 等分回归[ ] Iterative Dichotomiser 迭代二分器Letter K[ ] Kernel method 核方法[ ] Kernel trick 核技巧[ ] Kernelized Linear Discriminant Analysis/KLDA 核线性判别分析[ ] K-fold cross validation k 折交叉验证/k 倍交叉验证[ ] K-Means Clustering K –均值聚类[ ] K-Nearest Neighbours Algorithm/KNN K近邻算法[ ] Knowledge base 知识库[ ] Knowledge Representation 知识表征Letter L[ ] Label space 标记空间[ ] Lagrange duality 拉格朗日对偶性[ ] Lagrange multiplier 拉格朗日乘子[ ] Laplace smoothing 拉普拉斯平滑[ ] Laplacian correction 拉普拉斯修正[ ] Latent Dirichlet Allocation 隐狄利克雷分布[ ] Latent semantic analysis 潜在语义分析[ ] Latent variable 隐变量[ ] Lazy learning 懒惰学习[ ] Learner 学习器[ ] Learning by analogy 类比学习[ ] Learning rate 学习率[ ] Learning Vector Quantization/LVQ 学习向量量化[ ] Least squares regression tree 最小二乘回归树[ ] Leave-One-Out/LOO 留一法[500 ] linear chain conditional random field 线性链条件随机场[ ] Linear Discriminant Analysis/LDA 线性判别分析[ ] Linear model 线性模型[ ] Linear Regression 线性回归[ ] Link function 联系函数[ ] Local Markov property 局部马尔可夫性[ ] Local minimum 局部最小[ ] Log likelihood 对数似然[ ] Log odds/logit 对数几率[ ] Logistic Regression Logistic 回归[ ] Log-likelihood 对数似然[ ] Log-linear regression 对数线性回归[ ] Long-Short Term Memory/LSTM 长短期记忆[ ] Loss function 损失函数Letter M[ ] Machine translation/MT 机器翻译[ ] Macron-P 宏查准率[ ] Macron-R 宏查全率[ ] Majority voting 绝对多数投票法[ ] Manifold assumption 流形假设[ ] Manifold learning 流形学习[ ] Margin theory 间隔理论[ ] Marginal distribution 边际分布[ ] Marginal independence 边际独立性[ ] Marginalization 边际化[ ] Markov Chain Monte Carlo/MCMC 马尔可夫链蒙特卡罗方法[ ] Markov Random Field 马尔可夫随机场[ ] Maximal clique 最大团[ ] Maximum Likelihood Estimation/MLE 极大似然估计/极大似然法[ ] Maximum margin 最大间隔[ ] Maximum weighted spanning tree 最大带权生成树[ ] Max-Pooling 最大池化[ ] Mean squared error 均方误差[ ] Meta-learner 元学习器[ ] Metric learning 度量学习[ ] Micro-P 微查准率[ ] Micro-R 微查全率[ ] Minimal Description Length/MDL 最小描述长度[ ] Minimax game 极小极大博弈[ ] Misclassification cost 误分类成本[ ] Mixture of experts 混合专家[ ] Momentum 动量[ ] Moral graph 道德图/端正图[ ] Multi-class classification 多分类[ ] Multi-document summarization 多文档摘要[ ] Multi-layer feedforward neural networks 多层前馈神经网络[ ] Multilayer Perceptron/MLP 多层感知器[ ] Multimodal learning 多模态学习[550 ] Multiple Dimensional Scaling 多维缩放[ ] Multiple linear regression 多元线性回归[ ] Multi-response Linear Regression /MLR 多响应线性回归[ ] Mutual 
information 互信息

Letter N
Naive Bayes 朴素贝叶斯
Naive Bayes Classifier 朴素贝叶斯分类器
Named entity recognition 命名实体识别
Nash equilibrium 纳什均衡
Natural language generation/NLG 自然语言生成
Natural language processing 自然语言处理
Negative class 负类
Negative correlation 负相关法
Negative Log Likelihood 负对数似然
Neighbourhood Component Analysis/NCA 近邻成分分析
Neural Machine Translation 神经机器翻译
Neural Turing Machine 神经图灵机
Newton method 牛顿法
NIPS 国际神经信息处理系统会议
No Free Lunch Theorem/NFL 没有免费的午餐定理
Noise-contrastive estimation 噪音对比估计
Nominal attribute 列名属性
Non-convex optimization 非凸优化
Nonlinear model 非线性模型
Non-metric distance 非度量距离
Non-negative matrix factorization 非负矩阵分解
Non-ordinal attribute 无序属性
Non-Saturating Game 非饱和博弈
Norm 范数
Normalization 归一化
Nuclear norm 核范数
Numerical attribute 数值属性

Letter O
Objective function 目标函数
Oblique decision tree 斜决策树
Occam's razor 奥卡姆剃刀
Odds 几率
Off-Policy 离策略
One shot learning 一次性学习
One-Dependent Estimator/ODE 独依赖估计
On-Policy 在策略
Ordinal attribute 有序属性
Out-of-bag estimate 包外估计
Output layer 输出层
Output smearing 输出调制法
Overfitting 过拟合/过配
Oversampling 过采样

Letter P
Paired t-test 成对t检验
Pairwise 成对型
Pairwise Markov property 成对马尔可夫性
Parameter 参数
Parameter estimation 参数估计
Parameter tuning 调参
Parse tree 解析树
Particle Swarm Optimization/PSO 粒子群优化算法
Part-of-speech tagging 词性标注
Perceptron 感知机
Performance measure 性能度量
Plug and Play Generative Network 即插即用生成网络
Plurality voting 相对多数投票法
Polarity detection 极性检测
Polynomial kernel function 多项式核函数
Pooling 池化
Positive class 正类
Positive definite matrix 正定矩阵
Post-hoc test 后续检验
Post-pruning 后剪枝
Potential function 势函数
Precision 查准率/准确率
Pre-pruning 预剪枝
Principal component analysis/PCA 主成分分析
Principle of multiple explanations 多释原则
Prior 先验
Probability Graphical Model 概率图模型
Proximal Gradient Descent/PGD 近端梯度下降
Pruning 剪枝
Pseudo-label 伪标记

Letter Q
Quantized Neural Network 量化神经网络
Quantum computer 量子计算机
Quantum Computing 量子计算
Quasi Newton method 拟牛顿法

Letter R
Radial Basis Function/RBF 径向基函数
Random Forest Algorithm 随机森林算法
Random walk 随机漫步
Recall 查全率/召回率
Receiver Operating Characteristic/ROC 受试者工作特征
Rectified Linear Unit/ReLU 线性修正单元
Recurrent Neural Network 循环神经网络
Recursive neural network 递归神经网络
Reference model 参考模型
Regression 回归
Regularization 正则化
Reinforcement learning/RL 强化学习
Representation learning 表征学习
Representer theorem 表示定理
Reproducing kernel Hilbert space/RKHS 再生核希尔伯特空间
Re-sampling 重采样法
Rescaling 再缩放
Residual Mapping 残差映射
Residual Network 残差网络
Restricted Boltzmann Machine/RBM 受限玻尔兹曼机
Restricted Isometry Property/RIP 限定等距性
Re-weighting 重赋权法
Robustness 稳健性/鲁棒性
Root node 根结点
Rule Engine 规则引擎
Rule learning 规则学习

Letter S
Saddle point 鞍点
Sample space 样本空间
Sampling 采样
Score function 评分函数
Self-Driving 自动驾驶
Self-Organizing Map/SOM 自组织映射
Semi-naive Bayes classifiers 半朴素贝叶斯分类器
Semi-Supervised Learning 半监督学习
Semi-Supervised Support Vector Machine 半监督支持向量机
Sentiment analysis 情感分析
Separating hyperplane 分离超平面
Sigmoid function Sigmoid函数
Similarity measure 相似度度量
Simulated annealing 模拟退火
Simultaneous localization and mapping 同步定位与地图构建
Singular Value Decomposition 奇异值分解
Slack variables 松弛变量
Smoothing 平滑
Soft margin 软间隔
Soft margin maximization 软间隔最大化
Soft voting 软投票
Sparse representation 稀疏表征
Sparsity 稀疏性
Specialization 特化
Spectral Clustering 谱聚类
Speech Recognition 语音识别
Splitting variable 切分变量
Squashing function 挤压函数
Stability-plasticity dilemma 可塑性-稳定性困境
Statistical learning 统计学习
Status feature function 状态特征函数
Stochastic gradient descent 随机梯度下降
Stratified sampling 分层采样
Structural risk 结构风险
Structural risk minimization/SRM 结构风险最小化
Subspace 子空间
Supervised learning 监督学习/有导师学习
Support vector expansion 支持向量展式
Support Vector Machine/SVM 支持向量机
Surrogate loss 替代损失
Surrogate function 替代函数
Symbolic learning 符号学习
Symbolism 符号主义
Synset 同义词集

Letter T
T-Distribution Stochastic Neighbour Embedding/t-SNE t-分布随机近邻嵌入
Tensor 张量
Tensor Processing Units/TPU 张量处理单元
The least square method 最小二乘法
Threshold 阈值
Threshold logic unit 阈值逻辑单元
Threshold-moving 阈值移动
Time Step 时间步骤
Tokenization 标记化
Training error 训练误差
Training instance 训练示例/训练例
Transductive learning 直推学习
Transfer learning 迁移学习
Treebank 树库
Trial-and-error 试错法
True negative 真负类
True positive 真正类
True Positive Rate/TPR 真正例率
Turing Machine 图灵机
Twice-learning 二次学习

Letter U
Underfitting 欠拟合/欠配
Undersampling 欠采样
Understandability 可理解性
Unequal cost 非均等代价
Unit-step function 单位阶跃函数
Univariate decision tree 单变量决策树
Unsupervised learning 无监督学习/无导师学习
Unsupervised layer-wise training 无监督逐层训练
Upsampling 上采样

Letter V
Vanishing Gradient Problem 梯度消失问题
Variational inference 变分推断
VC Theory VC维理论
Version space 版本空间
Viterbi algorithm 维特比算法
Von Neumann architecture 冯·诺伊曼架构

Letter W
Wasserstein GAN/WGAN Wasserstein生成对抗网络
Weak learner 弱学习器
Weight 权重
Weight sharing 权共享
Weighted voting 加权投票法
Within-class scatter matrix 类内散度矩阵
Word embedding 词嵌入
Word sense disambiguation 词义消歧

Letter Z
Zero-data learning 零数据学习
Zero-shot learning 零次学习

Part 3
A
approximations 近似值
arbitrary 任意的;随意的
affine 仿射的
amino acid 氨基酸
amenable 经得起检验的
axiom 公理,原则
abstract 提取
architecture 架构,体系结构;建造业
absolute 绝对的
arsenal 军火库
assignment 分配
algebra 代数
asymptotically 渐近地
appropriate 恰当的
B
bias 偏差
brevity 简短,简洁;短暂
broader 广泛
briefly 简短的
batch 批量
C
convergence 收敛,集中到一点
convex 凸的
contours 轮廓
constraint 约束
constant 常数,常量
commercial 商务的
complementarity 补充
coordinate ascent 坐标上升
clipping 剪下物;剪报;修剪
component 分量;部件
continuous 连续的
covariance 协方差
canonical 正规的,正则的
concave 凹的
corresponds 相符合;相当;通信
corollary 推论
concrete 具体的事物,实在的东西
cross validation 交叉验证
correlation 相互关系
convention 约定
cluster 一簇
centroids 质心,形心
converge 收敛
computationally 计算(机)的
calculus 微积分
D
derive 获得,取得
dual 对偶的;二元的
duality 对偶性;二元性;二象性
derivation 求导;得到;起源
denote 预示,表示,是…的标志;意味着,[逻]指称
divergence 散度;发散性
dimension 尺度,规格;维数
dot 小圆点
distortion 变形
density 密度;概率密度函数
discrete 离散的
discriminative 有识别能力的
diagonal 对角
dispersion 分散,散开
determinant 行列式;决定因素
disjoint 不相交的
E
encounter 遇到
ellipses 椭圆
equality 等式
extra 额外的
empirical 经验;观察
enumerate 列举,计数
exceed 超过,越出
expectation 期望
efficient 高效的
endow 赋予
explicitly 明确地;显式地
exponential family 指数族
equivalently 等价的
F
feasible 可行的
foray 初次尝试
finite 有限的,限定的
forgo 摒弃,放弃
filter 过滤
frequentist 频率学派的
forward search 前向式搜索
formalize 形式化
G
generalized 泛化的;广义的
generalization 泛化;概括,归纳;普遍化
guarantee 保证;抵押品
generate 形成,产生
geometric margins 几何间隔
gap 裂口
generative 生成式的
H
heuristic 启发式的;启发法;启发程序
hone 磨练;磨
hyperplane 超平面
I
initial 最初的
implement 执行
intuitive 凭直觉获知的
incremental 增加的
intercept 截距
intuition 直觉
instantiation 实例化;例子
indicator 指示物,指示器
iterative 迭代的
integral 积分
identical 相等的;完全相同的
indicate 表示,指出
invariance 不变性,恒定性
impose 把…强加于
intermediate 中间的
interpretation 解释,翻译
J
joint distribution 联合分布
L
lieu 替代
logarithmic 对数的,用对数表示的
latent 潜在的
Leave-one-out cross validation 留一法交叉验证
M
magnitude 量级;大小
mapping 映射;绘图,制图
matrix 矩阵
mutual 相互的,共同的
monotonically 单调地
minor 较小的,次要的
multinomial 多项的
multi-class classification 多分类问题
N
nasty 讨厌的
notation 记号;符号
naïve 朴素的
O
obtain 得到
oscillate 摆动
optimization problem 最优化问题
objective function 目标函数
optimal 最理想的
orthogonal (矢量,矩阵等)正交的
orientation 方向
ordinary 普通的
occasionally 偶然的
P
partial derivative 偏导数
property 性质
proportional 成比例的
primal 原始的,最初的
permit 允许
pseudocode 伪代码
permissible 可允许的
polynomial 多项式
preliminary 预备
precision 精度
perturbation 扰动;不安
posit 假定,设想
positive semi-definite 半正定的
parentheses 圆括号
posterior probability 后验概率
pictorially 图像的
parameterize 确定…的参数
Poisson distribution 泊松分布
pertinent 相关的
Q
quadratic 二次的
quantity 量,数量;分量
query 查询
R
regularization 正则化
reoptimize 重新优化
restrict 限制;限定;约束
reminiscent 回忆往事的;提醒的;使人联想…的(of)
remark 注意
random variable 随机变量
respect 考虑
respectively 各自的;分别的
redundant 过多的;冗余的
S
susceptible 敏感的
stochastic 随机的
symmetric 对称的
sophisticated 复杂的
spurious 假的;伪造的
subtract 减去;减法器
simultaneously 同时发生地;同步地
suffice 足够;满足
scarce 稀有的,难得的
split 分解,分离
subset 子集
statistic 统计量
successive iterations 连续的迭代
scale 标度
sort of 有几分的
squares 平方
T
trajectory 轨迹
temporarily 暂时的
terminology 专用名词
tolerance 容忍;公差
thumb 翻阅
threshold 阈,临界
theorem 定理
tangent 正切
U
unit-length vector 单位向量
V
valid 有效的,正确的
variance 方差
variable 变量;变元
vocabulary 词汇
valued 经估价的;宝贵的
W
wrapper 包装
OpenMP reduction on scalar arithmetic types

OpenMP (Open Multi-Processing) is a standard for parallel programming that helps programmers exploit the multiple processors and cores of a computer system to speed up program execution. Within it, `reduction` is the mechanism that accumulates a variable across the iterations of a parallel loop, by summation or some other operation. This article introduces the use of OpenMP reduction on scalar arithmetic types.

1. What is an OpenMP reduction? An OpenMP reduction combines (reduces) the partial results produced inside a parallel loop into one final result. The reduction operation can be a sum, a product, a maximum, a minimum, and so on. In OpenMP, a reduction speeds up a parallel program because it removes unnecessary contention and synchronization: each thread accumulates into its own private copy of the variable, so threads never race on the shared one.

2. Scalar arithmetic types. In OpenMP, reduction can be applied to several data types, including integer, floating-point, and boolean types. A scalar is a single value; a scalar arithmetic type is a data type on whose single values the parallel loop performs arithmetic.

3. How to use reduction in OpenMP. To use a reduction in a parallel loop, add a reduction clause to the loop's parallel directive. In C/C++ the syntax looks like this (assuming `sum` starts at 0 and `array` holds `N` elements):

```
#pragma omp parallel for reduction(+:sum)
for (int i = 0; i < N; i++) {
    sum += array[i];
}
```

In this example, reduction(+:sum) sums into the variable `sum` over the loop. OpenMP then performs the reduction automatically: each thread computes a local partial sum, and the per-thread results are finally merged into the global sum.

4. Reductions on different scalar types. In practice we choose the reduction operation to match the arithmetic type and the task: summing integers, multiplying floating-point numbers, AND-ing or OR-ing booleans, and so on. In OpenMP, these are selected by naming a different operator in the reduction clause, as sketched below.
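As a sketch of how other operators look (the array contents here are arbitrary placeholders, and the `max` operator requires OpenMP 3.1 or newer), the following C program combines a max-reduction on an int with a product-reduction on a double in a single loop:

```c
#include <stdio.h>

#define N 1000

int main(void) {
    int    data[N];
    double factors[N];
    for (int i = 0; i < N; i++) {
        data[i]    = (i * 37) % 97;        /* arbitrary sample values  */
        factors[i] = 1.0 + 1.0 / (i + 1);  /* arbitrary sample factors */
    }

    int    max_val = data[0];
    double product = 1.0;

    /* Each thread gets private copies of max_val and product,
       initialized to the operator's identity value; OpenMP merges
       the per-thread results when the loop ends. */
    #pragma omp parallel for reduction(max:max_val) reduction(*:product)
    for (int i = 0; i < N; i++) {
        if (data[i] > max_val)
            max_val = data[i];
        product *= factors[i];
    }

    printf("max = %d, product = %.6f\n", max_val, product);
    return 0;
}
```

Compiled with OpenMP enabled (e.g., `gcc -fopenmp`), the result matches the serial loop; only the distribution of the work changes.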
Computational pipeline for multi-channel input and multi-channel output

Multi-channel input and multi-channel output constitute a fundamental framework in machine learning, deep learning, and signal processing. This pipeline processes multi-dimensional data through a series of transformations to produce outputs that capture the underlying patterns and relationships within the input data.

Input processing. The initial stage prepares the input data for computation. This may include:
1. Data normalization: rescaling the input data to a specific range to improve convergence and stability during training.
2. Feature extraction: identifying key features or representations of the input data that are relevant to the learning task.
3. Dimensionality reduction: reducing the number of features or dimensions in the input data to enhance computational efficiency.

Network architecture. Next, the multi-channel input is fed into a network architecture designed for multi-channel processing. This architecture typically consists of multiple layers, each performing specific transformations on the input data.
1. Convolutional layers: these apply convolution operations to extract features from the input data; they capture spatial relationships and local dependencies.
2. Pooling layers: these reduce the dimensionality of the feature maps by selecting the maximum or average value within a specific region.
3. Fully connected layers: these integrate the extracted features and perform a linear transformation to generate the output.

Output processing. The final stage processes the multi-channel output to extract the desired information or make predictions. This may include:
1. Activation functions: these introduce non-linearity into the network and determine the output values from the weighted sum of inputs.
2. Output normalization: rescaling or normalizing the output so that it falls within a desired range.
3. Classification or regression: using the output probabilities or values to classify the input data or to predict continuous values.

Applications. Multi-channel input and multi-channel output pipelines find applications in a wide range of domains, including:
1. Image processing: object detection, image segmentation, image classification.
2. Natural language processing: machine translation, text summarization, language modeling.
3. Audio processing: speech recognition, audio classification, music generation.
4. Medical imaging: medical diagnosis, disease detection, treatment planning.
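To make the multi-channel convolutional step concrete, here is a minimal C sketch (not any particular framework's API): each output channel is the sum, over all input channels, of a 2D convolution of that channel with its own kernel, so a layer with c_in input channels and c_out output channels stores c_out x c_in kernels.

```c
#include <stddef.h>

/* One output channel of a (valid, stride-1) multi-channel convolution.
   in:  c_in x h x w    input tensor, flattened channel-major
   k:   c_in x kh x kw  the kernel bank feeding this output channel
   out: (h-kh+1) x (w-kw+1) output feature map                       */
static void conv2d_one_out_channel(
    const float *in, const float *k, float *out,
    int c_in, int h, int w, int kh, int kw)
{
    int oh = h - kh + 1, ow = w - kw + 1;
    for (int y = 0; y < oh; y++) {
        for (int x = 0; x < ow; x++) {
            float acc = 0.0f;
            /* Sum contributions from every input channel. */
            for (int c = 0; c < c_in; c++)
                for (int i = 0; i < kh; i++)
                    for (int j = 0; j < kw; j++)
                        acc += in[(c * h + y + i) * w + (x + j)]
                             * k[(c * kh + i) * kw + j];
            out[y * ow + x] = acc;
        }
    }
}
```

A full layer simply calls this once per output channel, each time with that channel's own bank of c_in kernels, yielding a c_out x (h-kh+1) x (w-kw+1) output tensor.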
[2021.03.07] A great tool for reading papers: Zhiyun (知云) Document Translation, applying for the Baidu Translate API, and a machine-learning term base

I have been reading papers lately, and since they are all in English I needed software to read them with. On macOS I found a very good tool called Zhiyun (知云) Document Translation. It can translate long passages and is altogether quite good.

Applying for the Baidu Translate API: using your own API key has two advantages: first, it is more stable; second, you can define a custom term base. I read medical and machine-learning papers in English, so custom terms help. Apply in the console at the top of the page; after going through the application flow you can see your own app ID and secret key there, and you just fill them into the software.

Custom term base: since I read machine-learning papers, I added terms to the term base by importing a file (the text is included at the end). After the import completes, a few terms that should not be translated, such as proper nouns like MNIST, raise errors; these can simply be ignored. Then just enable the term base.

Machine-learning term base:

Supervised Learning|||监督学习
Unsupervised Learning|||无监督学习
Semi-supervised Learning|||半监督学习
Reinforcement Learning|||强化学习
Active Learning|||主动学习
Online Learning|||在线学习
Transfer Learning|||迁移学习
Automated Machine Learning (AutoML)|||自动机器学习
Representation Learning|||表示学习
Minkowski distance|||闵可夫斯基距离
Gradient Descent|||梯度下降
Stochastic Gradient Descent|||随机梯度下降
Over-fitting|||过拟合
Regularization|||正则化
Cross Validation|||交叉验证
Perceptron|||感知机
Logistic Regression|||逻辑回归
Maximum Likelihood Estimation|||最大似然估计
Newton's method|||牛顿法
K-Nearest Neighbor|||K近邻法
Mahalanobis Distance|||马氏距离
Decision Tree|||决策树
Naive Bayes Classifier|||朴素贝叶斯分类器
Generalization Error|||泛化误差
PAC Learning|||概率近似正确学习
Empirical Risk Minimization|||经验风险最小化
Growth Function|||成长函数
VC-dimension|||VC维
Structural Risk Minimization|||结构风险最小化
Eigendecomposition|||特征分解
Singular Value Decomposition|||奇异值分解
Moore-Penrose Pseudoinverse|||摩尔-彭若斯广义逆
Marginal Probability|||边缘概率
Conditional Probability|||条件概率
Expectation|||期望
Variance|||方差
Covariance|||协方差
Critical points|||临界点
Support Vector Machine|||支持向量机
Decision Boundary|||决策边界
Convex Set|||凸集
Lagrange Duality|||拉格朗日对偶性
KKT Conditions|||KKT条件
Coordinate ascent|||坐标上升法
Sequential Minimal Optimization (SMO)|||序列最小化优化
Ensemble Learning|||集成学习
Bootstrap Aggregating (Bagging)|||装袋算法
Random Forests|||随机森林
Boosting|||提升方法
Stacking|||堆叠方法
Decision Tree|||决策树
Classification Tree|||分类树
Adaptive Boosting (AdaBoost)|||自适应提升
Decision Stump|||决策树桩
Meta Learning|||元学习
Gradient Descent|||梯度下降
Deep Feedforward Network (DFN)|||深度前向网络
Backpropagation|||反向传播
Activation Function|||激活函数
Multi-layer Perceptron (MLP)|||多层感知机
Perceptron|||感知机
Mean-Squared Error (MSE)|||均方误差
Chain Rule|||链式法则
Logistic Function|||逻辑函数
Hyperbolic Tangent|||双曲正切函数
Rectified Linear Units (ReLU)|||整流线性单元
Residual Neural Networks (ResNet)|||残差神经网络
Regularization|||正则化
Overfitting|||过拟合
Data(set) Augmentation|||数据增强
Parameter Sharing|||参数共享
Ensemble Learning|||集成学习
Dropout|||
L2 Regularization|||L2正则化
Taylor Series Approximation|||泰勒级数近似
Taylor Expansion|||泰勒展开
Bayesian Prior|||贝叶斯先验
Bayesian Inference|||贝叶斯推理
Gaussian Prior|||高斯先验
Maximum-a-Posteriori (MAP)|||最大后验
Linear Regression|||线性回归
L1 Regularization|||L1正则化
Constrained Optimization|||约束优化
Lagrange Function|||拉格朗日函数
Denoising Autoencoder|||降噪自动编码器
Label Smoothing|||标签平滑
Eigen Decomposition|||特征分解
Convolutional Neural Networks (CNNs)|||卷积神经网络
Semi-Supervised Learning|||半监督学习
Generative Model|||生成模型
Discriminative Model|||判别模型
Multi-Task Learning|||多任务学习
Bootstrap Aggregating (Bagging)|||装袋算法
Multivariate Normal Distribution|||多元正态分布
Sparse Parametrization|||稀疏参数化
Sparse Representation|||稀疏表示
Student-t Prior|||学生T先验
KL Divergence|||KL散度
Orthogonal Matching Pursuit (OMP)|||正交匹配追踪算法
Adversarial Training|||对抗训练
Matrix Factorization (MF)|||矩阵分解
Root-Mean-Square Error (RMSE)|||均方根误差
Collaborative Filtering (CF)|||协同过滤
Nonnegative Matrix Factorization (NMF)|||非负矩阵分解
Singular Value Decomposition (SVD)|||奇异值分解
Latent Semantic Analysis (LSA)|||潜在语义分析
Bayesian Probabilistic Matrix Factorization (BPMF)|||贝叶斯概率矩阵分解
Wishart Prior|||Wishart先验
Sparse Coding|||稀疏编码
Factorization Machines (FM)|||分解机
second-order method|||二阶方法
cost function|||代价函数
training set|||训练集
objective function|||目标函数
expectation|||期望
data generating distribution|||数据生成分布
empirical risk minimization|||经验风险最小化
generalization error|||泛化误差
empirical risk|||经验风险
overfitting|||过拟合
feasible|||可行
loss function|||损失函数
derivative|||导数
gradient descent|||梯度下降
surrogate loss function|||代理损失函数
early stopping|||提前终止
Hessian matrix|||黑塞矩阵
second derivative|||二阶导数
Taylor series|||泰勒级数
Ill-conditioning|||病态的
critical point|||临界点
local minimum|||局部极小点
local maximum|||局部极大点
saddle point|||鞍点
local minima|||局部极小值
global minimum|||全局最小点
convex function|||凸函数
weight space symmetry|||权重空间对称性
Newton's method|||牛顿法
activation function|||激活函数
fully-connected networks|||全连接网络
Resnet|||残差神经网络
gradient clipping|||梯度截断
recurrent neural network|||循环神经网络
long-term dependency|||长期依赖
eigen-decomposition|||特征值分解
feedforward network|||前馈网络
vanishing and exploding gradient problem|||梯度消失与爆炸问题
contrastive divergence|||对比散度
validation set|||验证集
stochastic gradient descent|||随机梯度下降
learning rate|||学习速率
momentum|||动量
gradient descent|||梯度下降
poor conditioning|||病态条件
nesterov momentum|||Nesterov动量
partial derivative|||偏导数
moving average|||移动平均
quadratic function|||二次函数
positive definite|||正定
quasi-newton method|||拟牛顿法
conjugate gradient|||共轭梯度
steepest descent|||最速下降
reparametrization|||重参数化
standard deviation|||标准差
coordinate descent|||坐标下降
skip connection|||跳跃连接
convolutional neural network|||卷积神经网络
convolution|||卷积
pooling|||池化
feedforward neural network|||前馈神经网络
maximum likelihood|||最大似然
back propagation|||反向传播
artificial neural network|||人工神经网络
deep feedforward network|||深度前馈网络
hyperparameter|||超参数
sparse connectivity|||稀疏连接
parameter sharing|||参数共享
receptive field|||接受域
chain rule|||链式法则
tiled convolution|||平铺卷积
object detection|||目标检测
error rate|||错误率
activation function|||激活函数
overfitting|||过拟合
attention mechanism|||注意力机制
transfer learning|||迁移学习
autoencoder|||自编码器
unsupervised learning|||无监督学习
back propagation|||反向传播
pretraining|||预训练
dimensionality reduction|||降维
curse of dimensionality|||维数灾难
feedforward neural network|||前馈神经网络
encoder|||编码器
decoder|||解码器
cross-entropy|||交叉熵
tied weights|||绑定的权重
PCA|||PCA
principal component analysis|||主成分分析
singular value decomposition|||奇异值分解
SVD|||SVD
singular value|||奇异值
reconstruction error|||重构误差
covariance matrix|||协方差矩阵
Kullback-Leibler (KL) divergence|||KL散度
denoising autoencoder|||去噪自编码器
sparse autoencoder|||稀疏自编码器
contractive autoencoder|||收缩自编码器
conjugate gradient|||共轭梯度
fine-tune|||精调
local optima|||局部最优
posterior distribution|||后验分布
gaussian distribution|||高斯分布
reparametrization|||重参数化
recurrent neural network|||循环神经网络
artificial neural network|||人工神经网络
feedforward neural network|||前馈神经网络
sentiment analysis|||情感分析
machine translation|||机器翻译
pos tagging|||词性标注
teacher forcing|||导师驱动过程
back-propagation through time|||通过时间反向传播
directed graphical model|||有向图模型
speech recognition|||语音识别
question answering|||问答系统
attention mechanism|||注意力机制
vanishing and exploding gradient problem|||梯度消失与爆炸问题
jacobi matrix|||jacobi矩阵
long-term dependency|||长期依赖
clip gradient|||梯度截断
long short-term memory|||长短期记忆
gated recurrent unit|||门控循环单元
hadamard product|||Hadamard乘积
back propagation|||反向传播
attention mechanism|||注意力机制
feedforward network|||前馈网络
named entity recognition|||命名实体识别
Representation Learning|||表征学习
Distributed Representation|||分布式表征
Multi-task Learning|||多任务学习
Multi-Modal Learning|||多模态学习
Semi-supervised Learning|||半监督学习
NLP|||自然语言处理
Neural Language Model|||神经语言模型
Neural Probabilistic Language Model|||神经概率语言模型
RNN|||循环神经网络
Neural Tensor Network|||神经张量网络
Graph Neural Network|||图神经网络
Graph Convolutional Network (GCN)|||图卷积网络
Graph Attention Network|||图注意力网络
Self-attention|||自注意力机制
Feature Learning|||表征学习
Feature Engineering|||特征工程
One-hot Representation|||独热编码
Speech Recognition|||语音识别
DBM|||深度玻尔兹曼机
Zero-shot Learning|||零次学习
Autoencoder|||自编码器
Generative Adversarial Network (GAN)|||生成对抗网络
Approximate Inference|||近似推断
Bag-of-Words Model|||词袋模型
Forward Propagation|||前向传播
Huffman Binary Tree|||霍夫曼二叉树
NNLM|||神经网络语言模型
N-gram|||N元语法
Skip-gram Model|||跳元模型
Negative Sampling|||负采样
CBOW|||连续词袋模型
Knowledge Graph|||知识图谱
Relation Extraction|||关系抽取
Node Embedding|||节点嵌入
Graph Neural Network|||图神经网络
Node Classification|||节点分类
Link Prediction|||链路预测
Community Detection|||社区发现
Isomorphism|||同构
Random Walk|||随机漫步
Spectral Clustering|||谱聚类
Asynchronous Stochastic Gradient Algorithm|||异步随机梯度算法
Negative Sampling|||负采样
Network Embedding|||网络嵌入
Graph Theory|||图论
multiset|||多重集
Perron-Frobenius Theorem|||佩龙-弗罗贝尼乌斯定理
Stationary Distribution|||稳态分布
Matrix Factorization|||矩阵分解
Sparsification|||稀疏化
Singular Value Decomposition|||奇异值分解
Frobenius Norm|||F-范数
Heterogeneous Network|||异构网络
Graph Convolutional Network (GCN)|||图卷积网络
CNN|||卷积神经网络
Semi-Supervised Classification|||半监督分类
Chebyshev polynomial|||切比雪夫多项式
Gradient Exploding|||梯度爆炸
Gradient Vanishing|||梯度消失
Batch Normalization|||批标准化
Neighborhood Aggregation|||邻域聚合
LSTM|||长短期记忆网络
Graph Attention Network|||图注意力网络
Self-attention|||自注意力机制
Rescaling|||再缩放
Attention Mechanism|||注意力机制
Jensen-Shannon Divergence|||JS散度
Cognitive Graph|||认知图谱
Generative Adversarial Network (GAN)|||生成对抗网络
Generative Model|||生成模型
Discriminative Model|||判别模型
Gaussian Mixture Model|||高斯混合模型
Variational Auto-Encoder (VAE)|||变分编码器
Markov Chain|||马尔可夫链
Boltzmann Machine|||玻尔兹曼机
Kullback-Leibler divergence|||KL散度
Vanishing Gradient|||梯度消失
Surrogate Loss|||替代损失
Mode Collapse|||模式崩溃
Earth-Mover/Wasserstein-1 Distance|||搬土距离/EMD
Lipschitz Continuity|||利普希茨连续
Feedforward Network|||前馈网络
Minimax Game|||极小极大博弈
Adversarial Learning|||对抗学习
Outlier|||异常值/离群值
Rectified Linear Unit|||线性修正单元
Logistic Regression|||逻辑回归
Softmax Regression|||Softmax回归
SVM|||支持向量机
Decision Tree|||决策树
Nearest Neighbors|||最近邻
White-box|||白盒(测试 etc.)
Lagrange Multiplier|||拉格朗日乘子
Black-box|||黑盒(测试 etc.)
Robustness|||鲁棒性/稳健性
Decision Boundary|||决策边界
Non-differentiability|||不可微
Intra-technique Transferability|||相同技术迁移能力
Cross-technique Transferability|||不同技术迁移能力
Data Augmentation|||数据增强
Adaboost|||
recommender system|||推荐系统
Probability matching|||概率匹配
minimax regret|||
face detection|||人脸检测
i.i.d.|||独立同分布
Minimax|||极大极小
linear model|||线性模型
Thompson Sampling|||汤普森抽样
eigenvalues|||特征值
optimization problem|||优化问题
greedy algorithm|||贪心算法
Dynamic Programming|||动态规划
lookup table|||查找表
Bellman equation|||贝尔曼方程
discount factor|||折现系数
Reinforcement Learning|||强化学习
gradient theorem|||梯度定理
stochastic gradient descent|||随机梯度下降法
Monte Carlo|||蒙特卡罗方法
function approximation|||函数逼近
Markov Decision Process|||马尔可夫决策过程
Bootstrapping|||引导
Shortest Path Problem|||最短路径问题
expected return|||预期回报
Q-Learning|||Q学习
temporal-difference learning|||时间差分学习
AlphaZero|||
Backgammon|||西洋双陆棋
finite set|||有限集
Markov property|||马尔可夫性质
sample complexity|||样本复杂性
Cartesian product|||笛卡儿积
Kevin Leyton-Brown|||
SVM|||支持向量机
MNIST|||
ImageNet|||
Ensemble learning|||集成学习
Neural networks|||神经网络
Neuroevolution|||神经演化
object recognition|||目标识别
Multi-task learning|||多任务学习
Treebank|||树图资料库
covariance|||协方差
Hamiltonian Monte Carlo|||哈密顿蒙特卡罗
Inductive bias|||归纳偏置
bilevel optimization|||双层规划
genetic algorithms|||遗传算法
Bayesian linear regression|||贝叶斯线性回归
ANOVA|||方差分析
Extrapolation|||外推法
activation function|||激活函数
CIFAR-10|||
Gaussian Process|||高斯过程
k-nearest neighbors|||K最近邻
Neural Turing machine|||神经图灵机
MCMC|||马尔可夫链蒙特卡罗
Collaborative filtering|||协同过滤
AlphaGo|||
random forests|||随机森林
multivariate Gaussian|||多元高斯
Bayesian Optimization|||贝叶斯优化
meta-learning|||元学习
iterative algorithm|||迭代算法
Viterbi algorithm|||维特比算法
Gibbs distribution|||吉布斯分布
Discriminative model|||判别模型
Maximum Entropy Markov Model|||最大熵马尔可夫模型
Information Extraction|||信息提取
clique|||小圈子
conditional random field|||条件随机场
CRF|||条件随机场
triad|||三元关系
Naïve Bayes|||朴素贝叶斯
social network|||社交网络
Bayesian network|||贝叶斯网络
SVM|||支持向量机
Joint probability distribution|||联合概率分布
Conditional independence|||条件独立性
sequence analysis|||序列分析
Perceptron|||感知器
Markov Blanket|||马尔科夫毯
Hidden Markov Model|||隐马尔可夫模型
finite-state|||有限状态
Shallow parsing|||浅层分析
Active learning|||主动学习
Speech recognition|||语音识别
convex|||凸
transition matrix|||转移矩阵
factor graph|||因子图
forward-backward algorithm|||前向后向算法
parsing|||语法分析
structural holes|||结构洞
graphical model|||图模型
Markov Random Field|||马尔可夫随机场
Social balance theory|||社会平衡理论
Generative model|||生成模型
probabilistic topic model|||概率主题模型
TFIDF|||词频-文本逆向频率
LSI|||潜在语义索引
Bayesian network|||贝叶斯网络模型
Markov random field|||马尔科夫随机场
restricted boltzmann machine|||限制玻尔兹曼机
LDA|||隐式狄利克雷分配模型
PLSI|||概率潜在语义索引模型
EM algorithm|||最大期望算法
Gibbs sampling|||吉布斯采样法
MAP (Maximum A Posteriori)|||最大后验概率算法
Markov Chain Monte Carlo|||马尔科夫链式蒙特卡洛算法
Monte Carlo Sampling|||蒙特卡洛采样法
Univariate|||单变量
Hoeffding Bound|||Hoeffding界
Chernoff Bound|||Chernoff界
Importance Sampling|||重要性采样
invariant distribution|||不变分布
Metropolis-Hastings algorithm|||Metropolis-Hastings算法
Probabilistic Inference|||概率推断
Variational Inference|||变分推断
HMM|||隐式马尔科夫模型
mean field|||平均场理论
mixture model|||混合模型
convex duality|||凸对偶
belief propagation|||置信传播算法
non-parametric model|||非参模型
Gaussian process|||高斯过程
multivariate Gaussian distribution|||多元正态分布
Dirichlet process|||狄利克雷过程
stick breaking process|||断棒过程
Chinese restaurant process|||中餐馆过程
Blackwell-MacQueen Urn Scheme|||Blackwell-MacQueen桶法
De Finetti's theorem|||de Finetti定理
collapsed Gibbs sampling|||坍缩吉布斯采样法
Hierarchical Dirichlet process|||分层狄利克雷过程
Indian Buffet process|||印度自助餐过程
Multiobjective deep reinforcement learning

Multiobjective deep reinforcement learning (MDRL) combines deep learning with reinforcement learning to solve problems that have several optimization objectives. In traditional reinforcement learning there is usually a single objective, such as maximizing cumulative reward. In many real applications, however, several objectives must be considered at once; in robot control, for example, one may need to maximize energy efficiency while minimizing the time taken to complete the task. MDRL builds decision models with deep learning and optimizes them with reinforcement learning so that they satisfy multiple objectives simultaneously. MDRL methods usually employ a multi-objective optimization algorithm to optimize the decision model, for example using the notion of Pareto optimality to find a set of solutions that trade the different objectives off against one another; a minimal dominance check is sketched below.
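To make Pareto optimality concrete: a candidate dominates another when it is no worse on every objective and strictly better on at least one, and the Pareto set consists of the candidates that nothing dominates. A minimal C sketch of the dominance test (assuming, purely for illustration, that every objective is to be maximized):

```c
#include <stdbool.h>

/* Returns true if objective vector a dominates b: a is >= b on every
   objective and > b on at least one. Flip the comparisons for any
   objective that is to be minimized instead. */
static bool pareto_dominates(const double *a, const double *b, int n_obj)
{
    bool strictly_better = false;
    for (int i = 0; i < n_obj; i++) {
        if (a[i] < b[i])
            return false;            /* worse somewhere: no dominance */
        if (a[i] > b[i])
            strictly_better = true;  /* strictly better somewhere     */
    }
    return strictly_better;
}
```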
MDRL has already been applied in many areas, such as robot control, natural language processing, and recommender systems. In robot control, MDRL can help a robot achieve efficient, stable, and safe motion. In natural language processing, it can let a machine translation system optimize translation quality and translation speed together. In recommender systems, it can balance accuracy against diversity. In short, MDRL is a powerful technique for optimizing a machine toward several objectives at once.
Poverty policy and the proportion of people in poverty (English essay)

Three sample essays are given below for reference.

Sample 1

Poverty is a pressing issue that affects millions of people around the world. According to the World Bank, roughly 10% of the global population lives in extreme poverty, surviving on less than $1.90 per day. In response to this crisis, governments and organizations have implemented various poverty alleviation policies to help lift people out of poverty.

One of the key strategies to combat poverty is through targeted poverty reduction policies. These policies are designed to address the specific needs and challenges faced by the poor, such as lack of access to education, healthcare, and employment opportunities. For example, in many developing countries, governments have implemented cash transfer programs that provide financial assistance to poor families, allowing them to purchase food, clothing, and other basic necessities.

Another important aspect of poverty reduction policies is promoting inclusive economic growth. By investing in infrastructure, improving healthcare and education systems, and creating job opportunities, governments can help create a more inclusive economy that benefits all citizens, including the poor. For instance, in China, the government has implemented a series of measures to reduce poverty, such as investing in rural infrastructure and providing training programs for poor farmers.

In addition to government initiatives, non-governmental organizations (NGOs) and international aid organizations play a crucial role in poverty alleviation efforts. These organizations often provide support to vulnerable populations, such as refugees, women, and children, by offering food aid, medical assistance, and educational programs. For example, the United Nations World Food Programme provides food assistance to millions of people in crisis-affected areas around the world.

Despite these efforts, the global poverty rate remains stubbornly high, with millions of people still living in extreme poverty. Therefore, it is crucial for governments and organizations to continue working together to develop and implement effective poverty reduction policies. By addressing the root causes of poverty and empowering individuals to improve their livelihoods, we can create a more equitable and prosperous world for all.

Sample 2

Poverty has been a major issue in many countries around the world, affecting a significant portion of the population. In order to tackle this problem, governments have implemented various poverty reduction policies and programs. This essay will examine the effectiveness of these policies in reducing the proportion of the population living in poverty.

One of the most common poverty reduction policies is the implementation of social welfare programs. These programs aim to provide financial assistance to those in need, such as the unemployed, the elderly, and the disabled. By offering financial support, these programs help to alleviate the economic burden on low-income individuals and families. In addition, social welfare programs often provide access to essential services such as healthcare, education, and housing, which can help to improve the overall well-being of the population.

Another common poverty reduction policy is the promotion of economic development and job creation. By stimulating economic growth and creating more job opportunities, governments can help to lift people out of poverty. Economic development projects, such as infrastructure development, industrial expansion, and investment in technology, can create new employment opportunities and increase the overall income levels of the population. In addition, job training and skills development programs can help to equip individuals with the necessary skills and knowledge to secure stable and well-paying jobs.

Furthermore, education plays a crucial role in reducing poverty. By investing in education and improving access to quality education for all, governments can empower individuals to break the cycle of poverty. Education provides individuals with the knowledge and skills they need to secure better jobs, earn higher incomes, and improve their quality of life. In addition, education can help to reduce inequalities and improve social mobility, ensuring that everyone has an equal opportunity to succeed.

Despite the implementation of these poverty reduction policies, there are still a significant number of people living in poverty around the world. In many cases, poverty is caused by a combination of factors, including limited access to education, lack of job opportunities, inadequate healthcare services, and social inequalities. In order to effectively reduce poverty, governments must address these underlying causes and implement comprehensive and sustainable poverty reduction strategies.

In conclusion, poverty reduction policies play a crucial role in addressing the issue of poverty and improving the well-being of the population. By implementing social welfare programs, promoting economic development, and investing in education, governments can help to reduce the proportion of the population living in poverty. However, in order to achieve significant and lasting reductions in poverty rates, governments must address the root causes of poverty and implement policies that are targeted, sustainable, and inclusive. Only through a holistic and multi-faceted approach to poverty reduction can we create a more equitable and prosperous society for all.

Sample 3

Poverty Policy and the Proportion of Poor People

Introduction

Poverty is a major issue that affects millions of people around the world. In order to address this issue, governments have implemented various policies and programs aimed at reducing poverty levels and improving the lives of those living in poverty. This essay will discuss the various poverty policies that have been implemented in different countries and analyze the impact of these policies on the proportion of poor people.

Poverty Policies

There are several poverty policies that governments can implement to help reduce poverty levels. Some of the most common policies include:

1. Social welfare programs: Social welfare programs provide financial assistance to individuals and families living in poverty. These programs can include cash assistance, food stamps, housing assistance, and other forms of support.

2. Education and training programs: Education and training programs are designed to help individuals acquire the skills and knowledge they need to secure employment and improve their economic situation.

3. Job creation programs: Job creation programs aim to create new employment opportunities for individuals living in poverty. This can include programs that provide subsidies to businesses that hire low-income individuals or initiatives that promote entrepreneurship.

4. Healthcare programs: Healthcare programs provide access to affordable healthcare services for individuals living in poverty. This can include programs that provide free or low-cost medical care, prescription medications, and other healthcare services.

Impact of Poverty Policies on the Proportion of Poor People

The impact of poverty policies on the proportion of poor people can vary depending on the specific policy and the context in which it is implemented. However, research has shown that poverty policies can have a significant impact on reducing poverty levels and improving the lives of those living in poverty.

For example, social welfare programs have been shown to reduce poverty levels by providing financial assistance to individuals and families in need. These programs can help ensure that individuals have access to basic necessities such as food, housing, and healthcare. Education and training programs have also been shown to be effective in reducing poverty levels by helping individuals acquire the skills they need to secure employment and improve their economic situation.

Job creation programs can also play a key role in reducing poverty levels by creating new employment opportunities for individuals living in poverty. By providing individuals with the opportunity to secure stable and well-paying jobs, job creation programs can help lift individuals out of poverty and improve their economic situation. Healthcare programs are also crucial in reducing poverty levels by providing individuals with access to affordable healthcare services.

Conclusion

In conclusion, poverty policies play a crucial role in reducing poverty levels and improving the lives of those living in poverty. By implementing a combination of social welfare programs, education and training programs, job creation programs, and healthcare programs, governments can make significant strides towards reducing the proportion of poor people in their countries. It is essential for governments to continue to invest in poverty policies and programs in order to create a more equitable society and improve the well-being of all citizens.
How to promote educational equality (English essay)

Education is a fundamental human right and a key driver of social and economic progress. However, access to quality education is often unequal, with some individuals and communities facing significant barriers and disadvantages. Promoting educational equality is essential for creating a more just and inclusive society, where everyone has the opportunity to reach their full potential. In this essay, we will explore several strategies and approaches that can help to address educational inequalities and foster greater educational equity.

One crucial aspect of promoting educational equality is ensuring that all children, regardless of their socioeconomic background, have access to high-quality early childhood education. Research has consistently shown that children who attend preschool and receive a strong foundation in literacy, numeracy, and social-emotional skills are more likely to succeed academically and in life. Governments and policymakers should invest in expanding access to affordable, high-quality early childhood education programs, particularly in underserved communities.

Another important strategy is to address the resource disparities between schools in different socioeconomic areas. Schools in low-income neighborhoods often lack adequate funding, qualified teachers, modern technology, and other essential resources, putting students at a significant disadvantage compared to their peers in more affluent areas. Governments should implement policies and funding mechanisms that ensure equitable distribution of resources, including targeted funding for schools serving disadvantaged populations.

Promoting teacher quality and diversity is also crucial for fostering educational equality. Teachers play a vital role in shaping the educational experiences and outcomes of their students. Ensuring that all schools have access to highly qualified, well-trained, and diverse teachers can help to create more inclusive and responsive learning environments. This may involve initiatives such as providing competitive salaries and benefits, offering professional development opportunities, and actively recruiting and supporting teachers from underrepresented backgrounds.

Addressing the unique needs and challenges faced by marginalized groups, such as students with disabilities, English language learners, and racial and ethnic minorities, is another key aspect of promoting educational equality. Schools and education systems should implement targeted support and accommodation measures, such as specialized instructional programs, language assistance, and culturally responsive teaching practices, to ensure that these students have the resources and support they need to succeed.

Engaging and empowering parents and communities is also essential for promoting educational equality. When families and communities are actively involved in the educational process, they can advocate for their children's needs, hold schools and policymakers accountable, and collaborate to develop tailored solutions. Schools should strive to build strong partnerships with parents and community organizations, fostering open communication, shared decision-making, and collaborative problem-solving.

Finally, addressing the broader social, economic, and systemic barriers that contribute to educational inequalities is crucial for achieving lasting change. Factors such as poverty, discrimination, and lack of access to healthcare and social services can have a profound impact on educational outcomes. Addressing these underlying issues through comprehensive, multi-sectoral approaches, such as poverty reduction programs, anti-discrimination policies, and integrated social services, can help to create a more level playing field and ensure that all students have the opportunity to succeed.

In conclusion, promoting educational equality is a complex and multifaceted challenge that requires a holistic and sustained approach. By investing in early childhood education, addressing resource disparities, improving teacher quality and diversity, supporting marginalized groups, engaging parents and communities, and addressing broader social and systemic barriers, we can work towards a more equitable and inclusive education system that empowers all individuals to reach their full potential. This is not only a moral imperative, but also a crucial investment in the future of our societies and the well-being of generations to come.
The loss function of Q-learning

Q-learning is a reinforcement learning algorithm for solving Markov Decision Process (MDP) problems. Its goal is to learn a value function from which the optimal action policy can be selected. In Q-learning, the loss function guides the learning by measuring the gap between the current policy and the optimal one. This article describes Q-learning's loss function and the principle behind it.

In Q-learning, we learn the optimal policy by repeatedly updating a Q-table. The Q-table is a two-dimensional table in which rows are states and columns are actions; each entry Q(s,a) is the cumulative return obtained by taking action a in state s. The core idea of Q-learning is to approach the optimal policy by continually updating these Q-values.

The loss in Q-learning is defined as the mean squared error (MSE) between the current Q-value estimate and a target Q-value. Concretely, the loss is defined as:

Loss = (Q_target - Q_current)^2

where Q_target is the target Q-value and Q_current is the current estimate. The target Q-value is computed from the Bellman equation and represents the cumulative return of choosing the optimal action in the current state:

Q_target = R + γ * max(Q_next)

where R is the immediate reward obtained after choosing action a in the current state, γ is the discount factor that weighs the importance of future rewards, and max(Q_next) is the largest Q-value of the next state.

The goal of Q-learning is to minimize this loss, that is, to keep updating the Q-values so that the current estimate moves closer to the target. To achieve this, the Q-value is updated with a gradient-descent-style step:

Q_current = Q_current + α * (Q_target - Q_current)

where α is the learning rate, which controls the step size of the update. The larger the learning rate, the larger each update and the faster learning proceeds.
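Taken together, the update above amounts to only a few lines of code. The sketch below is a minimal C illustration; the table sizes, the zero-initialized table, and the environment that supplies the reward r and next state s_next are assumptions made here for the example:

```c
#define N_STATES  16
#define N_ACTIONS 4

static double Q[N_STATES][N_ACTIONS];  /* Q-table, zero-initialized by default */

/* One Q-learning step: move Q(s,a) toward the Bellman target
   R + gamma * max_a' Q(s_next, a'), with step size alpha.
   Returns the squared TD error, i.e. the loss (Q_target - Q_current)^2. */
static double q_update(int s, int a, double r, int s_next,
                       double alpha, double gamma)
{
    double max_next = Q[s_next][0];
    for (int a2 = 1; a2 < N_ACTIONS; a2++)
        if (Q[s_next][a2] > max_next)
            max_next = Q[s_next][a2];

    double target = r + gamma * max_next;  /* Q_target from the Bellman equation */
    double delta  = target - Q[s][a];      /* TD error                           */
    Q[s][a] += alpha * delta;              /* the update rule above              */
    return delta * delta;                  /* the squared loss                   */
}
```

Calling q_update once per environment transition, with an exploration policy choosing the actions, gradually drives the squared loss toward zero as the table converges.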
Nicholas Roy (Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213 USA) and Andrew McCallum (WhizBang! Labs - Research, 4616 Henry Street, Pittsburgh, PA 15213 USA)
the unlabeled examples, adding it to the training set with a sample of its possible labels, and estimating the resulting future error rate as just described. This seemingly daunting sampling and re-training can be made efficient through a number of rearrangements of computation, careful sampling choices, and efficient incremental training procedures for the underlying learner. We show experimental results on two real-world document classification tasks, where, in comparison with density-weighted Query-by-Committee, we reach 85% of full performance in one-quarter the number of training examples.
FUJITSU document scanner fi-7160 datasheet

Organizations of all types and sizes choose the fi-7160 for its speed, reliability, and accuracy. Small enough to fit on any desk, yet powerful enough to sail through routine billing, data entry, and other administrative tasks. It's the class-leading standard for small teams and workgroups.

Highlights: 60 pages per minute - Duplex (scans both sides) - Sizes up to Legal (8.5" x 14" max.) - Scans plastic cards, flat and embossed - 600 optical dpi - 24-bit color - Accepts sticky notes, taped receipts, labels - TWAIN & ISIS® supported

fi-7160 Technical Specifications
- Document feeding method: Automatic Document Feeder (ADF)
- Scanning modes: Simplex/Duplex in Color, Grayscale, or Monochrome
- Image sensor type: Color Charge-Coupled Device (CCD) x 2 (Front x 1, Back x 1)
- Light source: White LED Array x 2 (Front x 1, Back x 1)
- Multi-feed protection: Ultrasonic multi-feed detection sensor; paper detection sensor
- Paper protection: iSOP (Intelligent Sonic Paper Protection)
- Document size: Minimum 2.0" x 2.13" (50.8 x 54 mm); Maximum 8.5" x 14" (216 x 355.6 mm); Long page scanning [1]: 8.5" x 220" (216 x 5,588 mm)
- Paper weight: Paper 7.2 to 110 lb (27 to 413 g/m2); Plastic card: up to 1.4 mm [2]
- Scanning speed [3] (200 or 300 dpi, Letter; Color [4], Grayscale [4], and Monochrome [5]): Simplex 60 pages/minute; Duplex 120 images/minute
- ADF capacity [6]: 80 sheets (A4/Letter: 20 lb or 80 g/m2)
- Background colors: White / Black (switchable)
- Output resolution [7]: Color (24-bit), Grayscale (8-bit), Monochrome (1-bit); 50 to 600 dpi (600 dpi optical, 1200 dpi software [8])
- Internal video processing: 10-bit (1,024 levels)
- Interface: USB 3.0 / USB 2.0 / USB 1.1
- Power requirements: 100 to 240 VAC, 50/60 Hz
- Power consumption: Operating: 38 W or less; Sleep mode: 1.8 W or less; Auto-standby (Off) mode: 0.35 W or less
- Operating environment: Temperature 41°F to 95°F (5°C to 35°C); Relative humidity 20% to 80% (non-condensing)
- Dimensions (WxDxH) [9]: 11.81" x 6.69" x 6.42" (300 x 170 x 163 mm)
- Weight: 9.26 lb (4.2 kg)
- Included in the box: Stacker, ADF paper chute, AC cable & adapter, USB cable, Setup DVD-ROM
- Bundled software (DVD format): PaperStream IP (TWAIN/ISIS) Driver, PaperStream Capture, ScanSnap Manager for fi Series [10], Scan to Microsoft SharePoint [10], ABBYY FineReader for ScanSnap [10], Scanner Central Admin Agent, Software Operation Panel, Error Recovery Guide
- Environmental designations [11]: ENERGY STAR®, RoHS, and EPEAT Silver
- Supported operating systems: Windows® 10 (32-bit/64-bit), Windows® 8.1 (32-bit/64-bit), Windows® 7 (32-bit/64-bit), Windows Server® 2016 (64-bit), Windows Server® 2012 R2 (64-bit), Windows Server® 2012 (64-bit), Windows Server® 2008 R2 (64-bit), Windows Server® 2008 (32-bit/64-bit), macOS [12][13], Linux (Ubuntu) [12][13]
- Image processing functions: Multi-image output, Auto color detection, Blank page detection, Dynamic threshold (iDTC), Advanced DTC, SDTC, Error diffusion, De-screen, Emphasis, Halftone, Dropout color, sRGB output, Hole punch removal, Index tab cropping, Split image, De-skew, Edge correction, Streak reduction, Cropping, Dither, Static threshold, Divide long page

Footnotes:
1. Can scan documents longer than A4 sheets. Documents longer than 34" require using lower resolution (200 dpi or less).
2. Can scan up to 3 flat plastic cards or one embossed card at a time.
3. Actual scanning speeds are affected by data transmission and software processing times.
4. Using JPEG compression.
5. Using TIFF CCITT Group 4 compression.
6. Maximum capacity varies depending upon paper thickness.
7. Selectable maximum density may vary depending on length of document.
8. When scanning at high resolutions (600 dpi or higher), some limitations to document size may apply depending on system environment.
9. Dimensions measured with machine closed to minimum positions. During operation, machine depth is increased by the ADF chute and output tray. Minimum depth during operation is about 13.0" (330.2 mm) with ADF attached and output tray open but not extended, and can extend up to 27.56" (700 mm) when ADF and output trays are open and fully extended to their maximum positions.
10. Can be downloaded following instructions on the Setup DVD-ROM.
11. PFU Limited, a Fujitsu company, has determined that this product meets the ENERGY STAR® guidelines for energy efficiency and RoHS requirements (2005/95/EC).
12. Functions equivalent to those offered by PaperStream IP may not be available with the Image Scanner Driver for macOS/Linux and the WIA Driver.
13. Refer to the fi Series Support Site for driver/software downloads and the full lineup of all supported operating system versions.

fi-7160: Workgroup-class professional document scanner

A capable workhorse that keeps up with your business
- Powers through your documents at up to 120 images per minute
- Large-capacity 80-page Automatic Document Feeder
- Supports documents up to 220" long and embossed plastic cards

Give your teams the convenience of desktop scanning. The fi-7160 offers robust scanning right on your desk, with smart features to make it painless.
- Scan documents of mixed paper sizes and weights all at once - no need to pre-sort
- Intelligent MultiFeed Function allows easy manual bypass for sticky notes, taped receipts, and labels that can slow down batch scanning
- Ultrasonic Double Feed Detection identifies sheets stuck together so you don't miss an image
- Forgot to remove a staple? Intelligent Sonic Paper Protection "listens" to paper flowing through the machine and stops if a misfeed occurs, reducing damage to your documents
- Skew Reduction significantly improves feeding performance and ensures that your whole document gets accurately captured from edge to edge
- Super-fast USB 3.0 interface

Clean up and optimize scans without changing settings in advance. PaperStream IP (PSIP) is a TWAIN/ISIS®-compliant driver with easy-to-use features including:
- Assisted Scanning lets you choose the best image cleanup through visual selection
- Advanced Image Cleanup corrects the toughest documents, including colored and decorated backgrounds, to improve OCR and reduce rescans
- Auto Color Detection identifies the best color mode for the document
- Blank Page Detection removes blank pages automatically
- Front and Back Merge places two sides of a page into one convenient image

PaperStream Capture makes scanning fast and easy. Eliminate the learning curve: PaperStream Capture's user-friendly interface allows easy operation from start to finish, and changing scan settings is simple. Indexing and sorting features include barcode, patch code, and blank page separation - making batch scanning a breeze for operators.

Centralized fleet management: includes Scanner Central Admin Agent to remotely manage your entire fi Series fleet. Effectively allocate your resources based on scan volume, consumables wear, and more.

Make it even better with PaperStream Capture Pro: optional PaperStream Capture Pro software offers superior front-end capture, image processing, and options for enhanced data extraction and indexing for release. A value-priced bundle is available.

Fujitsu Computer Products of America, Inc., 1250 East Arques Avenue, Sunnyvale, CA 94085. 888.425.8228 Sales, 800.626.4686 Technical Support. Insist on Genuine Fujitsu Service to keep your scanner running at its best.
Multi-label feature selection based on non-negative sparse representation

Authors: 蔡志铃, 祝峰 (Key Laboratory of Granular Computing, Minnan Normal University, Zhangzhou, Fujian 363000, China)
Journal: 计算机科学与探索 (Journal of Frontiers of Computer Science and Technology), 2017, Vol. 11, No. 7, pp. 1175-1182
CLC classification: TP18

Abstract: Dimensionality reduction is an important and challenging task in multi-label learning. Feature selection is a highly efficient technique for dimensionality reduction, finding an optimal feature subset by maintaining the maximum amount of relevant information. Building on a study of subspace learning, this paper proposes a multi-label feature selection method based on non-negative sparse representation. The method can be treated as a matrix factorization problem that combines a non-negativity constraint with an L2,1-norm minimization problem. The paper then designs an efficient iterative matrix-update algorithm to tackle the new problem and proves its convergence. Finally, experimental results on six real-world data sets show the effectiveness of the proposed algorithm.
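The L2,1 norm that the abstract refers to is the sum, over the rows of a matrix, of each row's Euclidean norm; minimizing it pushes entire rows toward zero, which is what lets such a factorization discard whole features. This is not code from the paper, only a small C sketch to pin down the notation:

```c
#include <math.h>

/* L2,1 norm of an n x m matrix X stored row-major:
   sum over rows of the row's L2 (Euclidean) norm. */
static double l21_norm(const double *X, int n, int m)
{
    double total = 0.0;
    for (int i = 0; i < n; i++) {
        double row_sq = 0.0;
        for (int j = 0; j < m; j++)
            row_sq += X[i * m + j] * X[i * m + j];
        total += sqrt(row_sq);  /* one row's Euclidean norm */
    }
    return total;
}
```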
Machine Learning, 24, 173–202 (1996)
Error Reduction through Learning Multiple Descriptions
1. Introduction

Learning multiple models of the data has been shown to improve classification error rate as compared to the error rate obtained by learning a single model of the data (for example, decision trees: Kwok & Carter, 1990; Buntine, 1990; Kong & Dietterich, 1995; rules: Gams, 1989; Smyth & Goodman, 1992; Kononenko & Kovacic, 1992; Brazdil & Torgo, 1990; neural nets: Hansen & Salamon, 1990; Baxt, 1992; Bayesian nets: Madigan & York, 1993; regression: Perrone, 1993; Breiman, in press). Although much work has been done in learning multiple models, not many domains were used for such studies. There has also been little attempt to understand the variation in error reduction (the error rate of multiple models compared to the error rate of the single model learned on the same data) from domain to domain. Three of the data sets used in our study for which this approach provides the greatest reduction in error (Tic-tac-toe, DNA, Wine) have not been used in previous studies. For these data sets, the multiple models approach is able to reduce classification error on a test set of examples by a factor of up to seven! This paper uses a precise definition of "correlated errors" to provide an understanding of the variation in error reduction. We also present the idea of "gain ties" to understand why the multiple models approach is effective, especially why it is more effective for domains with more irrelevant attributes.

Figure 1 shows an example of multiple learned models of the form used in this paper. In the multiple models approach, several models of one training set are learned. Each model consists of a description for each class. Each description is a set of rules for that class (i.e., each class description is a set of first-order Horn clauses [1] for that class).

Figure 1. An example of learning multiple models - each model consists of a set of class descriptions.

    1st model of the data
      1st concept description for class a:
        class-a(X,Y) :- b(X), c(Y).
        class-a(X,Y) :- d(X,Z), e(Z,Y).
      1st concept description for class b:
        class-b(X,Y) :- e(X,Y), f(X,X).
        class-b(X,Y) :- g(X), class-b(Y,X).

The set of learned models is called an ensemble (Hansen & Salamon, 1990). Previous work in learning multiple models has mainly been concerned with demonstrating that the multiple models approach reduces error, as opposed to the goal of this paper, which is to explain the variation in error reduction from domain to domain. Previous work has compared different search strategies (Kononenko & Kovacic, 1992), compared different search evaluation measures (Gams, 1989; Smyth & Goodman, 1992), evaluated the effects of pruning (Kwok & Carter, 1990; Buntine, 1990), and compared different ways of generating models (nearly all authors). Except for the work of Buntine, all the other comparisons have been made on a few domains, so we still do not have a clear picture of how domain characteristics affect the efficacy of using multiple models. It is important to analyze these experimental data because the amount of error reduction obtained by using multiple models varies a great deal. On the wine data set, for example, the error obtained by uniformly weighted voting between eleven stochastically generated descriptions is only one seventh of the error obtained by using a single description. On the other hand, on the primary-tumor data set, the error obtained by the identical multiple models procedure is the same as that obtained by using a single description. Much of the work on learning multiple models is motivated by Bayesian learning theory (e.g., Bernardo & Smith, 1994), which dictates that to maximize predictive accuracy, instead of making classifications based on a single learned model, one should ideally use all hypotheses (models) in the hypothesis space. The vote of each hypothesis should be weighted by the posterior probability of that hypothesis given the training data. Since the theory requires voting from all hypotheses or models in the hypothesis space, all tractable implementations of this theory have to be approximations. This raises the following experimental question: what model-generation/evidence-combination method yields the lowest error rates in practice? Or, how can one characterize the domains in which a particular method works best, and why does it work best on such domains? The main hypothesis examined in this paper is whether error is most reduced for domains for which the errors made by models in the ensemble are made in an uncorrelated manner. In order to test this hypothesis, we first need to define error reduction more precisely. Two obvious measures comparing the error of the ensemble (Ee) to the error of the single model