lolliCoP-- A Linear Logic Implementation of a Lean Connection-Method Theorem Prover for First-Order Classical Logic


Fuzzy Logic and Systems


Fuzzy logic is a fascinating concept that has gained popularity in various fields, including artificial intelligence, control systems, and decision-making processes. Unlike traditional binary logic, which operates on precise values of true or false, fuzzy logic allows for degrees of truth, making it more adaptable to real-world scenarios where uncertainties and ambiguities exist.

One of the key advantages of fuzzy logic is its ability to handle imprecise data and vague boundaries. In many real-life situations, such as weather forecasting or medical diagnosis, information is often incomplete or uncertain. Fuzzy logic provides a framework for reasoning with this fuzzy information, allowing for more nuanced and flexible decision-making.

In the field of artificial intelligence, fuzzy logic plays a crucial role in mimicking human reasoning and decision-making processes. By incorporating fuzzy logic into AI systems, researchers can develop more intelligent and adaptive algorithms that can learn from experience and make decisions based on uncertain or incomplete information. This is particularly useful in applications such as natural language processing, image recognition, and autonomous systems.

From a control systems perspective, fuzzy logic offers a more intuitive and user-friendly approach to designing controllers for complex systems. Traditional control systems rely on precise mathematical models and algorithms, which can be challenging to implement in systems with nonlinear dynamics or uncertain parameters. Fuzzy logic controllers, on the other hand, can capture the expertise and intuition of human operators, making them more robust and adaptable to changing conditions.

Despite its many advantages, fuzzy logic is not without its limitations. One common criticism is its lack of formal mathematical rigor compared to traditional logic systems. Critics argue that the subjective nature of fuzzy logic can lead to inconsistencies and ambiguity in decision-making processes. Additionally, designing fuzzy systems can be complex and time-consuming, requiring domain expertise and careful tuning of parameters.

In conclusion, fuzzy logic is a powerful tool that offers a more flexible and intuitive approach to reasoning and decision-making in complex and uncertain environments. By allowing for degrees of truth and uncertainty, fuzzy logic can capture the nuances of human reasoning and behavior, making it a valuable tool in artificial intelligence, control systems, and other applications. While it may not be suitable for all situations, fuzzy logic has proven to be a valuable addition to the toolkit of researchers and practitioners in various fields.
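To make "degrees of truth" concrete, here is a minimal sketch we have added (not part of the original text). It uses a triangular fuzzy membership function in MATLAB; the "comfortable temperature" set and its breakpoints of 18, 22, and 26 degrees C are invented purely for illustration:

% A triangular fuzzy set: membership rises from 0 at 18 C to 1 at 22 C,
% then falls back to 0 at 26 C. Values are degrees of truth in [0, 1].
mu = @(t) max(0, min((t - 18)/4, (26 - t)/4));
mu([16 20 22 24 28])   % returns 0  0.5  1.0  0.5  0

Unlike a binary predicate ("is it comfortable? yes/no"), the function assigns 20 C a partial truth value of 0.5, which is exactly the kind of graded judgment the passage describes.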

MBA Course: Linear Programming (original English-language materials)


ASSUMPTIONS OF LINEAR PROGRAMMING

• From a mathematical viewpoint, the assumptions simply are that the model must have a linear objective function subject to linear constraints.
• However, from a modeling viewpoint, these mathematical properties of a linear programming model imply that certain assumptions must hold about the activities and data of the problem being modeled, including assumptions about the effect of varying the levels of the activities.
• The four assumptions are: Proportionality, Additivity, Divisibility, and Certainty.

Proportionality assumption

• Start-up costs: this case would arise if there were start-up costs associated with initiating the production of product 1. For example, there might be costs involved with setting up the production facilities. There might also be costs associated with arranging the distribution of the new product. Because these are one-time costs, they would need to be amortized on a per-week basis to be commensurable with Z (profit in thousands of dollars per week).
• Suppose that this amortization were done and that the total start-up cost amounted to reducing Z by 1, but that the profit without considering the start-up cost would be 3x1. This would mean that the contribution from product 1 to Z should be 3x1 - 1 for x1 > 0, whereas the contribution would be 3x1 = 0 when x1 = 0 (no start-up cost). This profit function, given by the solid curve in the figure, certainly is not proportional to x1.
• Increasing marginal return: the slope of the profit function for product 1 keeps increasing as x1 is increased. This violation of proportionality might occur because of economies of scale that can sometimes be achieved at higher levels of production, e.g., through the use of more efficient high-volume machinery, longer production runs, quantity discounts for large purchases of raw materials, and the learning-curve effect whereby workers become more efficient as they gain experience with a particular mode of production. As the incremental cost goes down, the incremental profit will go up (assuming constant marginal revenue).
• Decreasing marginal return: the slope of the profit function for product 1 keeps decreasing as x1 is increased.
• Decreasing marginal return can also arise when the marketing costs need to go up more than proportionally to attain increases in the level of sales. For example, it might be possible to sell product 1 at the rate of 1 per week (x1 = 1) with no advertising, whereas attaining sales to sustain a production rate of x1 = 2 might require a moderate amount of advertising, x1 = 3 might necessitate an extensive advertising campaign, and x1 = 4 might require also lowering the price.
• The conclusion was that proportionality could indeed be assumed without serious distortion.
• But what happens when the proportionality assumption does not hold even as a reasonable approximation?
• In most cases, this means you must use nonlinear programming instead. However, a certain important kind of nonproportionality can still be handled by linear programming by reformulating the problem appropriately. Furthermore, if the assumption is violated only because of start-up costs, there is an extension of linear programming (mixed integer programming) that can be used.

Additivity

• Although the proportionality assumption rules out exponents other than 1, it does not prohibit cross-product terms (terms involving the product of two or more variables).
• Additivity assumption: every function in a linear programming model (whether the objective function or the function on the left-hand side of a functional constraint) is the sum of the individual contributions of the respective activities.
• Case 1 corresponds to an objective function of Z = 3x1 + 5x2 + x1x2, so that Z = 3 + 5 + 1 = 9 for (x1, x2) = (1, 1), thereby violating the additivity assumption that Z = 3 + 5.
• The proportionality assumption still is satisfied since, after the value of one variable is fixed, the increment in Z from the other variable is proportional to the value of that variable. This case would arise if the two products were complementary in some way that increases profit.
• For example, suppose that a major advertising campaign would be required to market either new product produced by itself, but that the same single campaign can effectively promote both products if the decision is made to produce both. Because a major cost is saved for the second product, their joint profit is somewhat more than the sum of their individual profits when each is produced by itself.
• Case 2 also violates the additivity assumption because of the extra term in the corresponding objective function, Z = 3x1 + 5x2 - x1x2, so that Z = 3 + 5 - 1 = 7 for (x1, x2) = (1, 1). As the reverse of the first case, Case 2 would arise if the two products were competitive in some way that decreased their joint profit.
• For example, suppose that both products need to use the same machinery and equipment. If either product were produced by itself, this machinery and equipment would be dedicated to this one use. However, producing both products would require switching the production processes back and forth, with substantial time and cost involved in temporarily shutting down the production of one product and setting up for the other.

Additivity of the constraint functions

• Violations of additivity can also affect the constraint functions. For example, consider the third functional constraint of the Wyndor Glass Co. problem: 3x1 + 2x2 <= 18. (This is the only constraint involving both products.)
• Case 3: 3x1 + 2x2 + 0.5x1x2 <= 18. Here extra time is wasted switching the production processes back and forth between the two products, and the extra cross-product term (0.5x1x2) gives the production time wasted in this way. (Note that wasting time switching between products leads to a positive cross-product term here, where the total function is measuring production time used, whereas it led to a negative cross-product term for Case 2 because the total function there measures profit.)
• For Case 4 the function for production time used is 3x1 + 2x2 - 0.1x1^2 x2, so the function value for (x1, x2) = (2, 3) is 6 + 6 - 1.2 = 10.8. This case could arise in the following way: as in Case 3, suppose that the two products require the same type of machinery and equipment.
• But suppose now that the time required to switch from one product to the other would be relatively small, so that switching could be done during occasional idle periods.

Divisibility

• Divisibility assumption: decision variables in a linear programming model are allowed to have any values, including noninteger values, that satisfy the functional and nonnegativity constraints. Thus, these variables are not restricted to just integer values. Since each decision variable represents the level of some activity, it is being assumed that the activities can be run at fractional levels.

Certainty

• Certainty assumption: the value assigned to each parameter of a linear programming model is assumed to be a known constant.
• Linear programming models usually are formulated to select some future course of action. Therefore, the parameter values used would be based on a prediction of future conditions, which inevitably introduces some degree of uncertainty. Sensitivity analysis is used to identify the sensitive parameters, and there are other ways of dealing with linear programming under uncertainty.
• It is very common in real applications of linear programming that almost none of the four assumptions hold completely. Except perhaps for the divisibility assumption, minor disparities are to be expected. This is especially true for the certainty assumption, so sensitivity analysis normally is a must to compensate for the violation of this assumption.
• A disadvantage of these other models is that the algorithms available for solving them are not nearly as powerful as those for linear programming, but this gap has been closing in some cases. For some applications, the powerful linear programming approach is used for the initial analysis, and then a more complicated model is used to refine this analysis.

The Simplex Method

THE ESSENCE OF THE SIMPLEX METHOD

• The simplex method is an algebraic procedure. However, its underlying concepts are geometric. Before delving into algebraic details, we focus in this section on the big picture from a geometric viewpoint.
• Each constraint boundary is a line that forms the boundary of what is permitted by the corresponding constraint. The points of intersection are the corner-point solutions of the problem. The five that lie on the corners of the feasible region, (0, 0), (0, 6), (2, 6), (4, 3), and (4, 0), are the corner-point feasible solutions (CPF solutions). [The other three, (0, 9), (4, 6), and (6, 0), are called corner-point infeasible solutions.]
• In this example, each corner-point solution lies at the intersection of two constraint boundaries. For a linear programming problem with n decision variables, each of its corner-point solutions lies at the intersection of n constraint boundaries.
• Certain pairs of the CPF solutions share a constraint boundary, and other pairs do not. For any linear programming problem with n decision variables, two CPF solutions are adjacent to each other if they share n - 1 constraint boundaries. The two adjacent CPF solutions are connected by a line segment that lies on these same shared constraint boundaries. Such a line segment is referred to as an edge of the feasible region.
• Since n = 2 in the example, two of its CPF solutions are adjacent if they share one constraint boundary; for example, (0, 0) and (0, 6) are adjacent because they share the x1 = 0 constraint boundary. The feasible region in the figure has five edges, consisting of the five line segments forming the boundary of this region. Note that two edges emanate from each CPF solution.
• Thus, each CPF solution has two adjacent CPF solutions.
• Optimality test: consider any linear programming problem that possesses at least one optimal solution. If a CPF solution has no adjacent CPF solutions that are better (as measured by Z), then it must be an optimal solution.

Solving the Example (Wyndor Glass Co. Problem)

• Initialization: choose (0, 0) as the initial CPF solution to examine. (This is a convenient choice because no calculations are required to identify this CPF solution.)
• Optimality test: conclude that (0, 0) is not an optimal solution. (Adjacent CPF solutions are better.)
• Iteration 1: move to a better adjacent CPF solution, (0, 6), by performing the following three steps.
1. Considering the two edges of the feasible region that emanate from (0, 0), choose to move along the edge that leads up the x2 axis. (With an objective function of Z = 3x1 + 5x2, moving up the x2 axis increases Z at a faster rate than moving along the x1 axis.)
2. Stop at the first new constraint boundary: 2x2 = 12. [Moving farther in the direction selected in step 1 leaves the feasible region; e.g., moving to the second new constraint boundary hit when moving in that direction gives (0, 9), which is a corner-point infeasible solution.]
3. Solve for the intersection of the new set of constraint boundaries: (0, 6). (The equations for these constraint boundaries, x1 = 0 and 2x2 = 12, immediately yield this solution.)
• Optimality test: conclude that (0, 6) is not an optimal solution. (An adjacent CPF solution is better.)
• Iteration 2: move to a better adjacent CPF solution, (2, 6), by performing the following three steps.
1. Considering the two edges of the feasible region that emanate from (0, 6), choose to move along the edge that leads to the right. (Moving along this edge increases Z, whereas backtracking to move back down the x2 axis decreases Z.)
2. Stop at the first new constraint boundary encountered when moving in that direction: 3x1 + 2x2 = 18. (Moving farther in the direction selected in step 1 leaves the feasible region.)
3. Solve for the intersection of the new set of constraint boundaries: (2, 6). (The equations for these constraint boundaries, 3x1 + 2x2 = 18 and 2x2 = 12, immediately yield this solution.)
• Optimality test: conclude that (2, 6) is an optimal solution, so stop. (None of the adjacent CPF solutions are better.)

The Key Solution Concepts

• Solution concept 1: the simplex method focuses solely on CPF solutions. For any problem with at least one optimal solution, finding one requires only finding a best CPF solution. The only restriction is that the problem must possess CPF solutions. This is ensured if the feasible region is bounded.
• Solution concept 2: the simplex method is an iterative algorithm (a systematic solution procedure that keeps repeating a fixed series of steps, called an iteration, until a desired result has been obtained) with the following structure.
• Solution concept 3: whenever possible, the initialization of the simplex method chooses the origin (all decision variables equal to zero) to be the initial CPF solution. When there are too many decision variables to find an initial CPF solution graphically, this choice eliminates the need to use algebraic procedures to find and solve for an initial CPF solution.
• Solution concept 4: given a CPF solution, it is much quicker computationally to gather information about its adjacent CPF solutions than about other CPF solutions.
Therefore, each time the simplex method performs an iteration to move from the current CPF solution to a better one, it always chooses a CPF solution that is adjacent to the current one. No other CPF solutions are considered. Consequently, the entire path followed to eventually reach an optimal solution is along the edges of the feasible region.
• Solution concept 5: after the current CPF solution is identified, the simplex method examines each of the edges of the feasible region that emanate from this CPF solution. Each of these edges leads to an adjacent CPF solution at the other end, but the simplex method does not even take the time to solve for the adjacent CPF solution. Instead, it simply identifies the rate of improvement in Z that would be obtained by moving along the edge. Among the edges with a positive rate of improvement in Z, it then chooses to move along the one with the largest rate of improvement in Z. The iteration is completed by first solving for the adjacent CPF solution at the other end of this one edge and then relabeling this adjacent CPF solution as the current CPF solution for the optimality test and (if needed) the next iteration.
• Solution concept 6: solution concept 5 describes how the simplex method examines each of the edges of the feasible region that emanate from the current CPF solution. This examination of an edge leads to quickly identifying the rate of improvement in Z that would be obtained by moving along the edge toward the adjacent CPF solution at the other end. A positive rate of improvement in Z implies that the adjacent CPF solution is better than the current CPF solution, whereas a negative rate of improvement in Z implies that the adjacent CPF solution is worse. Therefore, the optimality test consists simply of checking whether any of the edges give a positive rate of improvement in Z. If none do, then the current CPF solution is optimal.

SETTING UP THE SIMPLEX METHOD

• The algebraic procedure is based on solving systems of equations. Therefore, the first step in setting up the simplex method is to convert the functional inequality constraints to equivalent equality constraints. (The nonnegativity constraints are left as inequalities because they are treated separately.) This conversion is accomplished by introducing slack variables.
• Although both forms of the model represent exactly the same problem, the new form is much more convenient for algebraic manipulation and for identification of CPF solutions. We call this the augmented form of the problem because the original form has been augmented by some supplementary variables needed to apply the simplex method.
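These notes stop short of computing anything. As a quick numerical companion (our addition, not part of the original notes, and assuming MATLAB with the Optimization Toolbox's linprog), the following sketch checks the Case 1 additivity violation, reproduces the CPF solution (2, 6) as an intersection of constraint boundaries, and solves the Wyndor LP directly:

% Case 1 objective with a cross-product term violates additivity:
Zcase1 = @(x1, x2) 3*x1 + 5*x2 + x1.*x2;
Zcase1(1, 1)                  % returns 9 rather than the additive 3 + 5 = 8

% A CPF solution is the intersection of n = 2 constraint boundaries:
xCPF = [3 2; 0 2] \ [18; 12]  % 3*x1 + 2*x2 = 18 and 2*x2 = 12 give (2, 6)

% Solve the full Wyndor LP: maximize Z = 3*x1 + 5*x2 (linprog minimizes):
f  = [-3; -5];
A  = [1 0; 0 2; 3 2];         % x1 <= 4, 2*x2 <= 12, 3*x1 + 2*x2 <= 18
b  = [4; 12; 18];
[x, fval] = linprog(f, A, b, [], [], [0; 0]);
Z = -fval                     % expected: x = [2; 6], Z = 36
% The augmented form would add slack variables: x1 + x3 = 4,
% 2*x2 + x4 = 12, 3*x1 + 2*x2 + x5 = 18, all five variables nonnegative.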

Linear Programming for Optimization

Mark A. Schulze, Ph.D.
Perceptive Scientific Instruments, Inc.

1. Introduction
1.1 Definition

Linear programming is the name of a branch of applied mathematics that deals with solving optimization problems of a particular form. Linear programming problems consist of a linear cost function (consisting of a certain number of variables) which is to be minimized or maximized subject to a certain number of constraints. The constraints are linear inequalities of the variables used in the cost function. The cost function is also sometimes called the objective function. Linear programming is closely related to linear algebra; the most noticeable difference is that linear programming often uses inequalities in the problem statement rather than equalities.

1.2 History

Linear programming is a relatively young mathematical discipline, dating from the invention of the simplex method by G. B. Dantzig in 1947. Historically, development in linear programming has been driven by its applications in economics and management. Dantzig initially developed the simplex method to solve U.S. Air Force planning problems, and planning and scheduling problems still dominate the applications of linear programming. One reason that linear programming is a relatively new field is that only the smallest linear programming problems can be solved without a computer.

1.3 Example

(Adapted from [1].) Linear programming problems arise naturally in production planning. Suppose a particular Ford plant can build Escorts at the rate of one per minute, Explorers at the rate of one every 2 minutes, and Lincoln Navigators at the rate of one every 3 minutes. The vehicles get 25, 15, and 10 miles per gallon, respectively, and Congress mandates that the average fuel economy of vehicles produced be at least 18 miles per gallon. Ford loses $1000 on each Escort, but makes a profit of $5000 on each Explorer and $15,000 on each Navigator. What is the maximum profit this Ford plant can make in one 8-hour day?
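The excerpt poses the problem but does not work it. One plausible formulation (our sketch, not the author's worked solution) takes x1, x2, x3 as the numbers of Escorts, Explorers, and Navigators built in the 480-minute day, and reads the fuel-economy mandate as a simple production-weighted average, (25*x1 + 15*x2 + 10*x3)/(x1 + x2 + x3) >= 18, which linearizes to 7*x1 - 3*x2 - 8*x3 >= 0. In MATLAB (assuming the Optimization Toolbox's linprog):

% Variable names and the averaging assumption above are ours.
f  = [1000; -5000; -15000];   % minimize the negative of profit
A  = [ 1  2  3;               % build time: x1 + 2*x2 + 3*x3 <= 480 minutes
      -7  3  8];              % fuel-economy constraint, rewritten as <= 0
b  = [480; 0];
[x, fval] = linprog(f, A, b, [], [], [0; 0; 0]);
profit = -fval   % with this formulation: no Explorers, x1 ~ 132.4,
                 % x3 ~ 115.9, and profit ~ $1.6 million

Note how the money-losing Escorts earn their keep: they are the only vehicle that raises the fleet average above 18 mpg, so the optimum pairs roughly 8/7 Escorts with each Navigator.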

Lyapunov Stability: English Glossary for Automation Engineering




acceptance testing, accumulated error, AC-DC-AC frequency converter, AC (alternating current) electric drive, active attitude stabilization, actuator, adaline, adaptation layer, adaptive telemeter system, adjoint operator, admissible error, aggregation matrix, AHP (analytic hierarchy process), amplifying element, analog-digital conversion, antenna pointing control, anti-integral windup, aperiodic decomposition, a posteriori estimate, approximate reasoning, a priori estimate, articulated robot, assignment problem, associative memory model, asymptotic stability, attained pose drift, attitude acquisition, AOCS (attitude and orbit control system), attitude angular velocity, attitude disturbance, attitude maneuver, augmentability, augmented system, automatic manual station, autonomous system

backlash characteristics, base coordinate system, Bayes classifier, bearing alignment, bellows pressure gauge, benefit-cost analysis, bilinear system, biocybernetics, biological feedback system, black box testing approach, blind search, block diagonalization, Boltzmann machine, bottom-up development, boundary value analysis, brainstorming method, breadth-first search

CAE (computer aided engineering), CAM (computer aided manufacturing), Camflex valve, canonical state variable, capacitive displacement transducer, capsule pressure gauge, CARD (computer aided research and development), cartesian robot, cascade compensation, catastrophe theory, chained aggregation, characteristic locus, chemical propulsion, classical information pattern, clinical control system, closed loop pole, closed loop transfer function, cluster analysis, coarse-fine control, cobweb model, coefficient matrix, cognitive science, coherent system, combination decision, combinatorial explosion, combined pressure and vacuum gauge, command pose, companion matrix, compartmental model, compatibility, compensating network, compensation, compliance, composite control, computable general equilibrium model, conditional instability, connectionism, conservative system, constraint condition, consumption function, context-free grammar, continuous discrete event hybrid system simulation, continuous duty, control accuracy, control cabinet, controllability index, controllable canonical form, [control] plant, controlling instrument, control moment gyro, control panel, control synchro, control system synthesis, control time horizon, cooperative game, coordinability condition, coordination strategy, corner frequency, costate variable, cost-effectiveness analysis, coupling of orbit and attitude, critical damping, critical stability, cross-over frequency, current source inverter, cut-off frequency, cyclic remote control, cylindrical robot

damped oscillation, damping ratio, data acquisition, data encryption, data preprocessing, data processor, DC generator-motor set drive, D (derivative) controller, decentralized stochastic control, decision space, decision support system, decomposition-aggregation approach, decoupling parameter, deductive-inductive hybrid modeling method, delayed telemetry, derivation tree, derivative feedback, describing function, desired value, deterministic automaton, deviation alarm, DFD (data flow diagram), diagnostic model, diagonally dominant matrix, diaphragm pressure gauge, difference equation model, differential dynamical system, differential game, differential pressure level meter, differential pressure transmitter, differential transformer displacement transducer, differentiation element, digital filter, digital signal processing, digitizer, dimension transducer, direct coordination, discrete event dynamic system, discrete system simulation language, discriminant function, displacement vibration amplitude transducer, dissipative structure, distributed parameter control system, disturbance compensation, domain knowledge, dominant pole, dose-response model, dual modulation telemetering system, duality principle, dual spin stabilization, duty ratio, dynamic braking, dynamic characteristics, dynamic deviation, dynamic error coefficient, dynamic exactness, dynamic input-output model

econometric model, economic cybernetics, economic effectiveness, economic evaluation, economic index, economic indicator, eddy current thickness meter, effectiveness theory, elasticity of demand, electric actuator, electric conductance levelmeter, electric drive control gear, electric hydraulic converter, electric pneumatic converter, electrohydraulic servo valve, electromagnetic flow transducer, electronic batching scale, electronic belt conveyor scale, electronic hopper scale, emergency stop, empirical distribution, endogenous variable, equilibrium growth, equilibrium point, equivalence partitioning, error-correction parsing, estimation theory, evaluation technique, event chain, evolutionary system, exogenous variable, expected characteristics

failure diagnosis, fast mode, feasibility study, feasible coordination, feasible region, feature detection, feature extraction, feedback compensation, feedforward path, field bus, finite automaton, FIP (factory information protocol), first order predicate logic, fixed sequence manipulator, fixed set point control, FMS (flexible manufacturing system), flow sensor/transducer, flow transmitter, forced oscillation, formal language theory, formal neuron, forward path, forward reasoning, fractal, frequency converter, frequency domain model reduction method, frequency response, full order observer, functional decomposition, FES (functional electrical stimulation), functional similarity, fuzzy logic

game tree, general equilibrium theory, generalized least squares estimation, generating function, geomagnetic torque, geometric similarity, gimbaled wheel, global asymptotic stability, global optimum, globe valve, goal coordination method, grammatical inference, graphic search, gravity gradient torque, group technology, guidance system, gyro drift rate

Hall displacement transducer, hardware-in-the-loop simulation, harmonious deviation, harmonious strategy, heuristic inference, hidden oscillation, hierarchical chart, hierarchical planning, hierarchical control, homomorphic model, horizontal decomposition, hormonal control, hydraulic step motor, hypercycle theory

I (integral) controller, identifiability, IDSS (intelligent decision support system), image recognition, impulse function, incompatibility principle, incremental motion control, index of merit, inductive force transducer, inductive modeling method, industrial automation, inertial attitude sensor, inertial coordinate system, inertial wheel, inference engine, infinite dimensional system, information acquisition, infrared gas analyzer, inherent nonlinearity, inherent regulation, initial deviation, injection attitude, input-output model, instability, instruction level language, integral of absolute value of error criterion, integral of squared error criterion, integral performance criterion, integrating instrument, intelligent terminal, interconnected system, interactive prediction approach, intermittent duty, ISM (interpretive structure modeling), invariant embedding principle, inventory theory, inverse Nyquist diagram, investment decision, isomorphic model, iterative coordination

jet propulsion, job-lot control, Kalman-Bucy filter, knowledge accommodation, knowledge acquisition, knowledge assimilation, KBMS (knowledge base management system), knowledge representation

ladder diagram, lag-lead compensation, Lagrange duality, Laplace transform, large scale system, lateral inhibition network, least cost input, least squares criterion, level switch, libration damping, limit cycle, linearization technique, linear motion electric drive, linear motion valve, linear programming, LQR (linear quadratic regulator problem), load cell, local asymptotic stability, local optimum, log magnitude-phase diagram, long term memory, lumped parameter model, Lyapunov theorem of asymptotic stability

macro-economic system, magnetic dumping, magnetoelastic weighing cell, magnitude-frequency characteristic, magnitude margin, magnitude scale factor, man-machine coordination, manual station, MAP (manufacturing automation protocol), marginal effectiveness, Mason's gain formula, matching criterion, maximum likelihood estimation, maximum overshoot, maximum principle, mean-square error criterion, mechanism model, meta-knowledge, metallurgical automation, minimal realization, minimum phase system, minimum variance estimation, minor loop, missile-target relative movement simulator, modal aggregation, modal transformation, MB (model base), model confidence, model fidelity, model reference adaptive control system, model verification, MEC (most economic control), motion space, MTBF (mean time between failures), MTTF (mean time to failures), multi-attributive utility function, multicriteria, multilevel hierarchical structure, multiloop control, multi-objective decision, multistate logic, multistratum hierarchical control, multivariable control system, myoelectric control

Nash optimality, natural language generation, nearest-neighbor, necessity measure, negative feedback, neural assembly, neural network computer, Nichols chart, noetic science, noncoherent system, noncooperative game, nonequilibrium state, nonlinear element, nonmonotonic logic, nonparametric training, nonreversible electric drive, nonsingular perturbation, non-stationary random process, nuclear radiation levelmeter, nutation sensor, Nyquist stability criterion

objective function, observability index, observable canonical form, on-line assistance, on-off control, open loop pole, operational research model, optic fiber tachometer, optimal trajectory, optimization technique, orbital rendezvous, orbit gyrocompass, orbit perturbation, order parameter, orientation control, oscillating period, output prediction method, oval wheel flowmeter, overall design, overlapping decomposition

Padé approximation, Pareto optimality, passive attitude stabilization, path repeatability, pattern primitive, PR (pattern recognition), P (proportional) controller, peak time, penalty function method, periodic duty, perturbation theory, pessimistic value, phase locus, phase trajectory, phase lead, photoelectric tachometric transducer, phrase-structure grammar, physical symbol system, piezoelectric force transducer, playback robot, PLC (programmable logic controller), plug braking, plug valve, pneumatic actuator, point-to-point control, polar robot, pole assignment, pole-zero cancellation, polynomial input, portfolio theory, pose overshoot, position measuring instrument, potentiometric displacement transducer, positive feedback, power system automation, predicate logic, pressure gauge with electric contact, pressure transmitter, price coordination, primal coordination, primary frequency zone, PCA (principal component analysis), turnpike principle, process-oriented simulation, production budget, production rule, profit forecast, PERT (program evaluation and review technique), program set station, proportional control, proportional plus derivative controller, protocol engineering, pseudo random sequence, pseudo-rate-increment control, pulse duration, pulse frequency modulation control system, pulse width modulation control system, PWM inverter, pushdown automaton

QC (quality control), quadratic performance index, qualitative physical model, quantized noise, quasilinear characteristics, queuing theory

radio frequency sensor, ramp function, random disturbance, random process, rate integrating gyro, ratio station, reaction wheel control, realizability, real time telemetry, receptive field, rectangular robot, recursive estimation, reduced order observer, redundant information, reentry control, regenerative braking, regional planning model, regulating device, relational algebra, relay characteristic, remote manipulator, remote set point adjuster, rendezvous and docking, resistance thermometer sensor, resolution principle, resource allocation, response curve, return difference matrix, return ratio matrix, reversible electric drive, revolute robot, revolution speed transducer, rewriting rule, rigid spacecraft dynamics, risk decision, robotics, robot programming language, robust control, roll gap measuring instrument, root locus, Roots flowmeter, rotameter, rotary eccentric plug valve, rotary motion valve, rotating transformer, Routh approximation method, routing problem

sampled-data control system, sampling control system, saturation characteristics, scalar Lyapunov function, SCARA (selective compliance assembly robot arm), scenario analysis method, scene analysis, self-operated controller, self-organizing system, self-reproducing system, self-tuning control, semantic network, semi-physical simulation, sensing element, sensitivity analysis, sensory control, sequential decomposition, sequential least squares estimation, servo control, servomotor, settling time, short term planning, short time horizon coordination, signal detection and estimation, signal reconstruction, simulated interrupt, simulation block diagram, simulation experiment, simulation velocity, single axle table, single degree of freedom gyro, single level process, single value nonlinearity, singular attractor, singular perturbation, slaved system, slower-than-real-time simulation, slow subsystem, socio-cybernetics, socioeconomic system, software psychology, solar array pointing control, solenoid valve, speed control system, spin axis, stability criterion, stability limit, stabilization, Stackelberg decision theory, state equation model, state space description, static characteristics curve, station accuracy, stationary random process, statistical analysis, statistic pattern recognition, steady state deviation, steady state error coefficient, step-by-step control, step function, stepwise refinement, stochastic finite automaton, strain gauge load cell, strategic function, strongly coupled system, subjective probability, supervised training, supervisory computer control system, sustained oscillation, swirlmeter, switching point, symbolic processing, synaptic plasticity, syntactic analysis, system assessment, system homomorphism, system isomorphism, system engineering

target flow transmitter, task cycle, teaching programming, telemetering system of frequency division type, teleological system, temperature transducer, template base, theorem proving, therapy model, thickness meter, three-axis attitude stabilization, three state controller, thrust vector control system, time constant, time-invariant system, time schedule controller, time-sharing control, time-varying parameter, top-down testing, topological structure, TQC (total quality control), tracking error, trade-off analysis, transfer function matrix, transformation grammar, transient deviation, transient process, transition diagram, transmissible pressure gauge, trend analysis, triple modulation telemetering system, turbine flowmeter, Turing machine, two-time scale system

ultrasonic levelmeter, unadjustable speed electric drive, unbiased estimation, uniformly asymptotic stability, uninterrupted duty, unit circle, unit testing, unsupervised learning, upper level problem, urban planning, utility function

value engineering, variable gain, variable structure control system, vector Lyapunov function, velocity error coefficient, velocity transducer, vertical decomposition, vibrating wire force transducer, viscous damping, voltage source inverter, vortex precession flowmeter, vortex shedding flowmeter

WB (way base), weighing cell, weighting factor, weighting method, Whittaker-Shannon sampling theorem, Wiener filtering, workstation for computer aided design, w-plane, zero-based budget, zero-input response, zero-state response, zero sum game model

LINEAR PROGRAMMING

Example 2
A patient of limited financial means was advised by his doctor to increase the consumption of liver and chicken in his diet. In each meal he must get no less than 200 calories from this combination and no more than 15 units of fat. When he consulted his diet book, he found the following information: there are 200 calories in a pound of chicken and 150 calories in a pound of liver. However, there are 15 units of fat in a pound of liver and 5 units of fat in a pound of chicken. The price of chicken is £5 a pound and the price of liver is £1.50 a pound. The patient wants to minimize the total cost subject to the medical constraints imposed by his doctor.
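The excerpt states the problem without solving it. As a hedged sketch (variable names are ours, and linprog from the Optimization Toolbox is assumed), let xc and xl be the pounds of chicken and liver per meal:

% minimize 5*xc + 1.5*xl  subject to
%   200*xc + 150*xl >= 200   (calories)
%     5*xc +  15*xl <= 15    (fat)
f  = [5; 1.5];
A  = [-200 -150;   % calorie constraint, flipped into <= form
        5    15];  % fat constraint
b  = [-200; 15];
[x, cost] = linprog(f, A, b, [], [], [0; 0]);
x, cost   % expected: xc = 1/3 lb chicken, xl = 8/9 lb liver, cost = £3.00

Both medical constraints bind at the optimum: the meal delivers exactly 200 calories and exactly 15 units of fat.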
Total number of hours spent at division D2 = 4x (unit P1) + 3y (unit P2) + 3z (unit P3) <= 200
Total number of hours spent at division D3 = 2x (unit P1) + 3y (unit P2) + 0z (unit P3) <= 30

Introduction to Linear Algebra


Chapter 1
Resources, MATLAB Primer and Introduction to Linear Algebra

"Begin at the beginning," the King said, very gravely, "and go on till you come to the end: then stop." (Lewis Carroll)

Welcome to Modeling Methods for Marine Science. The main purpose of this book is to give you, as ocean scientists, a basic set of tools to use for interpreting and analyzing data, for modeling, and for scientific visualization. Skills in these areas are becoming increasingly necessary and useful for a variety of reasons, not the least of which is the burgeoning supply of ocean data, the ready availability and increasing power of computers, and sophisticated software tools. In a world such as this, a spreadsheet program is not enough. We don't expect the reader to have any experience in programming, although you should be comfortable with working with computers and web browsers. Also, we don't require any background in sophisticated mathematics; undergraduate calculus will be enough, with some nodding acquaintance with differential equations. However, much of what we will do will not require expertise in either of these areas. Your most valuable tool will be common sense.

1.1 Resources

The activities of modeling, data analysis, and scientific visualization are closely related, both technically and philosophically, so we thought it important to present them as a unified whole. Many of the mathematical techniques and concepts are identical, although often masked by different terminology. You'll be surprised at how frequently the same ideas and tools keep coming up. The purpose of this chapter is threefold: to outline the goals, requirements, and resources of the book, to introduce MATLAB (and give a brief tour on how to use it), and to review some elements of basic linear algebra that we'll be needing in subsequent chapters. As we stated in the preface (you did read the preface, didn't you?), our strategy will be to try to be as "correct" as possible without being overly rigorous. That is, we won't be dragging you through any theorem proofs, or painful details not central to the things you need to do. We will rely quite heavily on MATLAB, so you won't need to know how to invert a matrix or calculate a determinant. But you should know qualitatively what's involved and why some approaches may be better than others; and, just as important, why some procedures fail. Thus we will try to strike a happy medium between the "sorcerer's apprentice" on the one extreme and having to write your own FORTRAN programs on the other. That is, there will be no programming in this book, although you will be using MATLAB as a kind of programmer's tool. The hope is that the basic mathematical concepts will shine through without getting bogged down in the horrid details.

1.1.1 Book Structure

The course of study will have four sections. The first section (chapters 1-7) is aimed at data analysis and some statistics. We'll talk about least squares regression, principal components, objective analysis and time-series approaches to analyzing and processing data. The second section (chapters 8-12) will concentrate on the basic techniques of modeling, which include numerical techniques such as finite differencing. The third (chapters 13-17) will consist of case studies of ocean models.
The largest amount of time will be spent on 1-D models (and to a lesser extent 2-D and 3-D models) since they contain most of the important elements of modeling techniques. The final section (chapters 18-20) will discuss inverse methods and scientific visualization, which in some respects is an emerging tool for examining model and observational data. Throughout the text will be examples of real ocean and earth science data used to support the data analysis or modeling techniques being discussed.

1.1.2 Our World Wide Web Site

We teach a course at the Woods Hole Oceanographic Institution using this text, and we also support this course with a web page (/12.747/), and versions of our work may be available there. We draw upon a number of other sources, some from textbooks of a more applied mathematics background and others from the primary ocean literature. Davis' (1986) book is a primary one, although it really only covers the first section of this book. Bevington and Robinson (2002) is such a useful book that we strongly recommend you obtain a copy (it's relatively inexpensive, too). Press et al.'s (1992) book is useful for the second section, although the Roache (1976) book is best for finite difference techniques; sadly it is out-of-print. The third and fourth sections will rely on material to be taken from the literature. Each chapter has a list of references, and the other texts are listed only as supplemental references in case you (individually) have need to delve more deeply into some aspects of the field.

1.2 Nomenclature

We could put a lot of quote marks around the various commands, program names, variables, etc. to indicate these are the things you should type as input or expect as output. But this would be confusing (is this an input? or an output?), plus it would be a real drag typing all those quote marks.
So a word about nomenclature in this text. This book is being typeset with LaTeX, and we will try to use various fonts to designate different things in a consistent manner. From now on, when we mean this is something that is "computer related" (the name of a file, a variable in a MATLAB program, a MATLAB command, etc.) it will be in simulated typewriter font. If it is a downloadable link, it will be underlined as in http://URL. If, however, we are referring to the mathematics or science we are discussing (variables from equations, mathematical entities, etc.), we will use math fonts. In particular, scalar variables will be in simple italic font (a), vectors will be lowercase bold faced (a), and matrices will be upper case bold face (A). If you have read enough math books, you have learned that there is no universally consistent set of symbols that have unique meanings. We will try to be consistent in our usage within the pages of our book, but there are only 26 letters in our alphabet and an even smaller number of Greek characters, so some recycling is inevitable. As always, the context of the symbol will be your best guide in deciphering whether this is the λ from chapter 4 (eigenvalues) or 11 (decay constants).

1.3 A MATLAB Primer

You can read this book, and benefit greatly, without ever touching a computer. But this is a lot like theoretical bicycle riding; there is no better way to learn than with hands on experience, in this case, hands on a keyboard. MATLAB is a very powerful tool (there are others) and for showing practical examples it's more useful than pseudocode. The best way to learn how to use MATLAB is to do a little tutorial. There are at least two places where you can get hold of a tutorial. The best is the MATLAB Primer (Davis and Sigmon, 2004), the 7th ed. as of this writing. A second place is to use the online help available from your MATLAB command window. If you are using the MATLAB graphical user interface (GUI), pull down the help menu from the toolbar at the top of the command window and select "MATLAB help". This will start the online documentation MATLAB help browser. In the "Contents" tab on the left select the "MATLAB" level and go to "Printable Documentation (PDF)" to find a listing of manuals available to you. The first one, "Getting Started", is a good place to begin. If you are not using the command window GUI, type helpdesk at the MATLAB prompt and this online documentation help browser will be launched. If you get an error with the helpdesk command we suggest you speak with your system administrator to learn how to configure your web browser, or type help docopt if you are an experienced computer user. Whichever one you pick, just start at the beginning and noodle through the tutorial. It basically shows you how to do most things.
Don’t worry if some of the linear algebra and matrix math things don’t make much sense yet.Read the primer.We won’t waste time repeating the words here,except to point out a few4Modeling Methods for Marine Science obvious features.M ATLAB is case sensitive;a variable named A is not the same as a.Nor is Alpha the same as ALPHA or alpha or AlPhA.They are all different.Use help and lookfor.If you don’t know the name of a command,but,for example, want to know how to make an identity matrix,type lookfor identity.You will then be told about e help eye tofind out about the eye command.Note that M AT-LAB capitalizes things in its help messages for EMPHASIS,which confuses things a little.Commands and functions are always in lower case,although they are capitalized in the help messages.Remember that matrix multiplication is not the same as scalar or array multiplication;the latter is designated with a“dot”before it.For example C=A*B is a matrix multiplication, whereas C=A.*B is array multiplication.In the latter,it means that the elements of are the scalar product of the corresponding elements of and(i.e.,the operation is done element by element).The colon operator(:)is a useful thing to learn about;in addition to being a very compact notation,it frequently executes much,much faster than the equivalent for...next loop.For example,j:k is equivalent to[j,j+1,j+2,...,k]or j:d:k is equiv-alent to[j,j+d,j+2*d,...,j+m*d]where m=fix((K-J)/D).There’s even more,the colon operator can be used to pick out specific rows,columns,elements of arrays.Check it out with help colon.If you don’t want M ATLAB to regurgitate all the numbers that are an answer to the statement you just entered,be sure tofinish your command with a semicolon(;).M ATLAB has a“scripting”capability.If you have a sequence of operations that you routinely do,you can enter them into a textfile(using your favorite text editor,or better yet,use M ATLAB’s editor)and save it to disk.By default,all M ATLAB scriptfiles end in a.m so that your script (or“m-file”)might be called fred.m.You can edit thisfile with the M ATLAB command edit fred,if thefile does not exist yet,M ATLAB will prompt you asking if you wish to create it.Then, you run the script by entering fred in your M ATLAB window,and it executes as if you typed in each line individually at the command prompt.You can also record your keystrokes in M ATLAB using the diary command,but we don’t recommend that you use it,better to see Hints and Tricks #0(Creating m-files)in the appendix of this book.You’ll learn more about these kind offiles as you learn to write functions in M ATLAB.You can load data from the hard drive directly into M ATLAB.For example,if you have a data file called xyzzy.dat,within which you had an array laid out in the following way:Glover,Jenkins and Doney;March15,2006DRAFT5 then you could load it into M ATLAB by saying load xyzzy.dat.You would then have a new matrix in your workspace with the name xyzzy.Note that M ATLAB would object(and rightly so,we might add)if you had varying numbers of numbers in each row,since that doesn’t make sensein a matrix.Also,if you had afile named jack.of.all.trades you would have a variable named jack(M ATLAB is very informal that way).Note that if you had afile without a“.”in its name,M ATLAB would assume that it is a“mat-file”,which is a special M ATLAB binary formatfile (which you cannot read/modify with an editor).For example,load fred would cause M ATLABto look for afile called fred.mat.If it doesn’tfind it,it’ll complain.Butfirst,make 
You can save data to disk as well. If you simply type save, MATLAB saves everything in your workspace to a file called matlab.mat. Don't try to read it with an editor (remember it's in binary)! You can recover everything in a later MATLAB session by simply typing load. You can save a matrix to disk in a "readable" file by typing save foo.dat xyzzy -ascii. In this case you have saved the variable xyzzy to the file foo.dat in ASCII (editable) form. You can specify more than one variable (type help save to find out more). Remember the -ascii, because nobody but MATLAB can read the file if you forget it.

You can even read and write files that are compatible with (shudder!) Excel. There are a number of ways to do this. For example, to read an Excel file you can use A=xlsread('filename.xls'), and the numeric data in filename.xls will be found in the MATLAB variable A. The xlsread function has a number of other capabilities; to learn more simply type help xlsread. MATLAB even has a function that will tell you things about what is inside the Excel file, for example SheetNames; to learn more type help xlsfinfo. Also the MATLAB functions csvread and csvwrite facilitate transferring data to and from Excel; do a help csvread to have MATLAB explain how to use these functions.

A final word about the MATLAB code presented in this book. As we write (and rewrite) these chapters we are using MATLAB release 13 and 14 (depending whether we upgraded recently). To the best of our knowledge, all of the examples and programs we provide in this book are compatible with release 13 and 14 (versions 6 and 7). As time goes on, some of our code will undoubtedly become incompatible with MATLAB release X. To deal with this eventuality we have decided to make our material available on web pages instead of the more static CD-ROM media (see section 1.1.2).

1.4 Basic Linear Algebra

A scalar is a single number. A vector is a row or column of numbers. You can plot a vector; for example, [3 7 2] would be an arrow going from the origin to a point in 3-dimensional space indicated by its components x = 3, y = 7, and z = 2. A matrix may be thought of as a bundle of vectors, either column or row vectors (it really depends on what "physical reality" the matrix represents). If each of the vectors in a matrix is at right angles to all of its mates, then they are said to be orthogonal. They are also called linearly independent. If the lengths of the vectors, as defined by the square root of the sum of the squares of the components (i.e., length = sqrt(v1^2 + v2^2 + ... + vn^2)), are also 1, then they are said to be orthonormal (ortho: at right angles; normal: of unit length). For example, a vector [1/sqrt(2) 0 1/sqrt(2)] has a length of 1, as does [1/sqrt(3) 1/sqrt(3) 1/sqrt(3)] and [0 0 1].

Before we start, there are some simple rules for matrix manipulation. First, you can only add or subtract matrices of the same size (same number of rows and columns). The one exception to this is when you add or subtract scalars from/to a matrix. In that case the scalar is added/subtracted from each element of the matrix individually. Second, when you multiply matrices, they must be conformable, which means that the left matrix must have the same number of columns as the right matrix has rows:

    A (m x n) times B (n x p) gives C (m x p)    (1.1)

Matrix multiplication is non-commutative. That is, in general, AB is not the same as BA (in fact, the actual multiplication may not be defined in general). The algorithm for matrix multiplication is straightforward, but tedious. Have a look at a standard matrix text to see how matrix multiplication works.
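A small illustration we have added (any 2x3 and 3x2 matrices would do) makes the conformability rule and the * versus .* distinction concrete:

A = [1 2 3; 4 5 6];      % 2 x 3
B = [1 0; 0 1; 1 1];     % 3 x 2
C = A * B                % conformable: (2x3)*(3x2) gives a 2x2 result
D = B * A                % also defined, but 3x3, so A*B is not B*A
E = A .* [6 5 4; 3 2 1]  % array multiplication: element by element
% A .* B would fail: element-wise operands must be the same size.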
Even though you won't actually be doing matrix multiplication by hand, a lot of this stuff is going to make more sense if you understand what is going on behind the scenes in MATLAB; Strang (1980) is a good place to start.

1.4.1 Simultaneous Linear Equations

The whole idea of linear algebra is to solve sets of simultaneous linear equations. You can represent a set of such equations with the statement Ax = b, where A is a rectangular matrix, x is a column vector of variables, and b is a column vector of values. For example, consider the following system of equations:

    x1 + 2*x2 +   x3 = 8
    2*x1 + 3*x2 - 2*x3 = 2
    x1 - 2*x2 + 3*x3 = 6         (1.2)

which can be represented as:

    A x = b                      (1.3)

Note that the matrix A contains the coefficients of the equations, b is a column vector that contains the knowns on the right hand side (RHS), and x is a column vector containing the unknowns or "target variables", where A, the matrix of the coefficients, looks like:

    A = [ 1  2  1
          2  3 -2
          1 -2  3 ]              (1.4)

You enter A into MATLAB the following way:

A=[1 2 1;2 3 -2;1 -2 3];

(note that the array starts and ends with the square bracket and the rows are separated by semicolons; also we've terminated the statement with a semicolon so as not to have MATLAB regurgitate the numbers you've just typed in.) The column value vector representing the RHS of the equation system is:

    b = [8; 2; 6]                (1.5)

which you enter with:

b=[8;2;6]

Finally, the column "unknown" vector is:

    x = [x1; x2; x3]             (1.6)

(don't try to enter this into MATLAB, it's the answer to the question!)

Now how do you go about solving this? If we gave you the following scalar equation:

    3x = 12                      (1.7)

then you'd solve it by dividing both sides by 3, to get x = 4. Similarly, for a more general scalar equation:

    ax = b                       (1.8)

you'd do the same thing, getting x = b/a, or more appropriately x = a^(-1) b, or put slightly differently, x = inv(a)*b. Here, we've said that a^(-1) is just the inverse of the number a. Well, you can do a similar thing with matrices, except the terminology is a little different. If you enter these data into MATLAB, then you can solve for the values of the three variables (x1, x2 and x3) with the simple statement x=A\b. This is really equivalent to the statement x=inv(A)*b.
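(Our consolidation of the fragments above into one runnable snippet:)

A = [1 2 1; 2 3 -2; 1 -2 3];   % the coefficient matrix of (1.2)
b = [8; 2; 6];                 % the right-hand-side vector (1.5)
x = A \ b                      % expected solution: x = [1; 2; 3]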
Now check your answer; just multiply it back out to see if you get b by typing A*x. The answers just pop out! This is simple. You could do this just as easily for a set of 25 simultaneous equations with as many unknowns (or 100 or more, if you have a big computer).

But what did you just do? Well, it's really simple. Just as simple scalar numbers have inverses (e.g., the number 3 has an inverse, it's 1/3), so do matrices. With a scalar, we demand the following:

scalar * inv(scalar) = 1     e.g.:  3 * 1/3 = 1

So with a matrix, we demand:

matrix * inv(matrix) = I

Here we have the matrix equivalent of "1", namely I, the identity matrix. It is simply a square matrix of the same size as the original two matrices, with zeros everywhere, except on the diagonal, which is all ones. Examples of identity matrices are:

    [ 1 0 ]    [ 1 0 0 ]    [ 1 0 0 0 ]
    [ 0 1 ]    [ 0 1 0 ]    [ 0 1 0 0 ]
               [ 0 0 1 ]    [ 0 0 1 0 ]
                            [ 0 0 0 1 ]

well, you get the idea. Oh, by the way, a matrix must be square (at the very least) to have an inverse, and the inverse must be the same size as the original. Note that, like its scalar little brother "1", the identity matrix times any matrix of the same size gives the same matrix back. For example, A*I=A or I*A=A.

OK, now try this with the matrix you keyed into MATLAB. Type A*inv(A) (you are multiplying the matrix times its inverse). What do you get? You get the identity matrix. Why are some "ones" represented as "1" and some by "1.000"? Also, you sometimes get 0.0000 and -0.0000 (yeah, we know, there's no such thing as "negative zero"). The reason is that the computation of the matrix inverse is not an exact process, and there is some very small roundoff error (see section 2.1.5). Which means that "1" is not exactly the same as "1.000000000000000000000000", but is pretty darn close. This is a result of both the approximate techniques used to compute the inverse, and the finite precision of computer number representation. It mostly doesn't matter, but can in some special cases. Also try multiplying the matrix A times the identity matrix, A*eye(3) (the identity matrix of rank 3). What is rank? Keep reading!

Finally, let's do one more thing. We can calculate the determinant of a matrix with:

d=det(A)

The determinant of a matrix is a scalar number (valid only for square matrices) and gives insight into the fundamental nature of the matrix. We will run into the determinant in the future. We won't tell you how to calculate it, since MATLAB does such a fine job doing it anyway. If you're interested, go to a matrix math text. Anyway, now calculate the determinant of the inverse of A with:

dd=det(inv(A))

(See how we have done two steps in one; MATLAB first evaluates the inv(A), then feeds the result into det()). Guess what? dd=1/d. Before we do anything else, however, let's save this matrix to a new variable AA with:

AA=A;

The semicolon at the end suppresses output, since you already know what A is. Now that you see how it works, try another set of equations:

      x1 + 2*x2 +   x3 = 8
    2*x1 + 3*x2 - 2*x3 = 2                                              (1.9)
    3*x1 + 5*x2 -   x3 = 10

i.e., you enter,

A=[1 2 1; 2 3 -2; 3 5 -1]
b=[8; 2; 10]
x=A\b

Whoops! You get an error message:

Warning: Matrix is close to singular or badly scaled. Results may be inaccurate. RCOND=1.171932e-017

(Note that your result for RCOND might be a little different, but generally very small.) What happened? Well, look closely at the original set of equations. The third equation is really not very useful, since it is the sum of the first two equations. It is not linearly independent of the other two. You have in effect two equations in three unknowns, which is therefore not solvable.
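The same diagnosis is easy to make numerically (again a NumPy sketch, not the book's MATLAB): the determinant comes out at machine-precision zero and the rank at 2, one short of full.

import numpy as np

A = np.array([[1., 2.,  1.],
              [2., 3., -2.],
              [3., 5., -1.]])   # row 3 = row 1 + row 2

print(np.linalg.det(A))           # ~0 (up to roundoff), so A is singular
print(np.linalg.matrix_rank(A))   # 2: rank deficient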
This is seen in the structure of the matrix. The error message arises when you try to invert the matrix because it is rank deficient. If you look at its rank, with rank(A), you get a value of 2, which is less than the full dimensionality (3) of the matrix. If you did that for the first matrix we looked at, by typing rank(AA), it would return a value of 3. Remember our friend the determinant? Try det(A) again. What value do you get? Zero. If the determinant of the inverse of A is the inverse of the determinant of A (get it?), then guess what happens? A matrix with a determinant of zero is said to be singular.

1.4.2 Singular Value Decomposition (SVD)

Common sense tells you to quit right there. Trying to solve two equations with three unknowns is not useful ... or is it? There are an infinite number of combinations of x1, x2, and x3 that satisfy the equations, but not every combination will work. Sometimes it is of value to know what the range of values is, or to obtain some solution subject to some other (as yet to be defined) criteria or conditions. We will tell you of a sure-fire technique to do this, singular value decomposition. Here's how it works. You can split a scalar into an infinite number of factors. For example, you can represent the number 12 in the following ways: 12 = 2 x 6 = 3 x 4 = 0.5 x 24, and so on. In much the same way, SVD factors the matrix A into a product of three special matrices, A = U*S*V', where U and V have orthonormal columns and S is a diagonal matrix containing the singular values. Note that there are two ways of defining the sizes of the matrices; you may come across the other in Strang's (1980) book, but the results are the same when you multiply them out. The actual procedure for calculating the SVD is pretty long and tedious, but it always works regardless of the form of the matrix. You accomplish this for the first matrix in MATLAB in the following way:

[U,S,V]=svd(AA,0);

That's how we get more than one thing back from a MATLAB function call; you line them up inside a set of brackets separated by commas on the left hand side (LHS) of the equation. You can get this information by typing help svd. Note also that we have included a ,0 after the AA. This selects a special (and more useful to us) form of the SVD output. To look at any of the matrices, simply type its name. For example, let's look at S by typing S. Note that for the matrix AA, which we had no trouble with, all three singular values are non-zero.

S =
    5.1623         0         0
         0    3.0000         0
         0         0    1.1623

Now try it with the other, troublesome matrix:

[U,S,V]=svd(A,0);

and after typing S, you can see that the lowest right hand element is zero. This is the trouble spot!

S =
    7.3728         0         0
         0    1.9085         0
         0         0    0.0000

Now we don't need to go into the details, but it can be proven that you can construct the matrix inverse from the relation inv(A)=V*W*U', as we would write it in MATLAB, where W is just S with the diagonal elements inverted (each element is replaced by its inverse). For a rank deficient matrix (like the one we had trouble with), at least one of the diagonal elements in S is zero. In fact, the number of non-zero singular values is the rank of the matrix. Thus if you went ahead and blindly inverted a zero element, you'd have an infinity. The trick is to replace the inverse of the zero element with zero, not infinity. Doing that allows you to compute an inverse anyway.
We can do this inversion in MATLAB in the following way. First replace any zero elements by 1. You convert the diagonal matrix to a single column vector containing the diagonal elements with:

s=diag(S)

(note the lower and upper case usage). Then set the zero element to 1 with:

s(3)=1

Then invert the elements with:

w=1./s

(note the decimal point, which means do the operation on each element, as an "array operation" rather than a "matrix operation"). Next, make that pesky third element really zero with:

w(3)=0;

Then, convert it back to a diagonal matrix with:

W=diag(w)

Note that MATLAB is smart enough to know that you are handing it a column vector and to convert it to a diagonal matrix (it did the opposite earlier on). Now you can go ahead and calculate the best guess inverse of A with:

BGI=V*W*U'

where BGI is just a new variable name for this inverted matrix. Now, try it out with:

A*BGI

Bet you were expecting something like the identity matrix. Instead you get:

    0.6667   -0.3333    0.3333
   -0.3333    0.6667    0.3333
    0.3333    0.3333    0.6667

Why isn't it identity? Well, the answer to that question gets to the very heart of inverse theory, and we'll get to that later in this book (Chapter 18). For now we just want you to note the symmetry of the answer to A*BGI (i.e., the 0.6667 down the diagonal with a positive or negative 0.3333 everywhere else). Now, let's get down to business, and get a solution for the equation set. We compute the solution with:

x=BGI*b

which is:

    x = [ 0.7879 ]
        [ 2.1212 ]                                                      (1.11)
        [ 2.9697 ]

Do you believe the results? Well, try them with:

A*x

which gives you the original b! But why this solution? For example, x=[1 2 3]' works fine too (entering the numbers without the semicolons gives you a row vector, and the prime turns a row vector into a column vector, its transpose). Well, the short answer is that of all of the possible solutions, this one has the shortest length. Check it out: the square root of the sum of the squares of the components of [1 2 3]' is longer than the vector you get from BGI*b. The reason is actually an important attribute of SVD, but more explanations will have to wait for Chapter 18. Also, note that the singular values are arranged in order of decreasing value. This doesn't have to be the case, but the MATLAB algorithm does this to be nice to you. Also, the singular values to some extent tell you about the structure of the matrix.

Not all cases are as clear-cut as the two we just looked at. A matrix may be nearly singular, so that although you get an "answer", it may be subject to considerable uncertainties and not particularly robust. The ratio of the largest to the smallest singular values is the condition number of the matrix. The larger it is, the worse (more singular) the matrix is. You can get the condition number of a matrix by entering:

cond(A)

In fact, MATLAB uses SVD to calculate this number. And RCOND is just its reciprocal. So what have we learned? We've learned about matrices, and how they can be used to represent and solve systems of equations. We have a technique (SVD) that allows us to calculate, under any circumstances, the inverse of a matrix. With the inverse of the matrix, we can then solve a set of simultaneous equations with a very simple step, even if there is no unique answer. But wait, there's more! Stay tuned ...
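Before moving on to the problems: the same zero-the-bad-singular-value construction can be written in a few lines of NumPy (an illustrative sketch; the book's own code is MATLAB, and np.linalg.pinv applies exactly this cutoff internally):

import numpy as np

A = np.array([[1., 2.,  1.],
              [2., 3., -2.],
              [3., 5., -1.]])
b = np.array([8., 2., 10.])

U, s, Vt = np.linalg.svd(A)          # A = U @ diag(s) @ Vt
w = np.zeros_like(s)
w[s > 1e-10] = 1.0 / s[s > 1e-10]    # invert the singular values, mapping ~0 to 0
BGI = Vt.T @ np.diag(w) @ U.T        # the "best guess inverse", V*W*U'

x = BGI @ b
print(x)                             # ~ [0.7879 2.1212 2.9697], the shortest solution
print(A @ x)                         # reproduces b
print(np.allclose(BGI, np.linalg.pinv(A)))   # True: pinv does the same thing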
1.5 Problems

All of our problem sets, required m-files, and data files are served from the web page: /12.747/problem_sets.html

1.1. Download the data matrix A.dat and the vector b.dat (remember to put them in the same directory in which you use MATLAB). Now load them into MATLAB using the commands load A.dat and load b.dat. (Make sure you are in the same directory as the files!)
(a) Now solve the equation set designated by Ax=b. This is a set of 7 equations with 7 unknowns. List the values of x.
(b) What is the rank and determinant of A?
(c) List the singular values for A.

1.2. Download A1.dat and b1.dat. Load them into MATLAB. Then, do the following:
(a) What is the rank and determinant of A1?
(b) What happens when you solve A1*x=b1 directly?
(c) Do a singular value decomposition, compute the inverse of A1 by zeroing the singular values, and solve for x.

References

Bevington, P.R. and D.K. Robinson, 2002, Data Reduction and Error Analysis for the Physical Sciences, 3rd Edition, McGraw-Hill Inc., New York, NY, 336 pp.
Davis, J.C., 1986, Statistics and Data Analysis in Geology, 2nd Edition, John Wiley and Sons, New York, 646 pp.
Davis, T.A. and K. Sigmon, 2004, MATLAB Primer, 7th Edition, Chapman and Hall/CRC, Boca Raton, FL, 215 pp.
Press, W.H., B.P. Flannery, S.A. Teukolsky, and W.T. Vetterling, 1992, Numerical Recipes, 2nd Edition, Cambridge University Press, New York, 818 pp.
Roache, P.J., 1976, Computational Fluid Dynamics, Hermosa Publishers, Albuquerque, NM, 446 pp.
Strang, G., 1980, Linear Algebra and its Applications, 2nd Edition, Academic Press, New York, 414 pp.

Existence of Infinitely Many Solutions for a Quasilinear Elliptic Problem on Time Scales

arXiv:0705.3674v1 [math.AP] 24 May 2007

Moulay Rchid Sidi Ammi
sidiammi@mat.ua.pt

Delfim F. M. Torres
delfim@mat.ua.pt

Department of Mathematics
University of Aveiro, 3810-193 Aveiro, Portugal

Abstract

We study a boundary-value quasilinear elliptic problem on a generic time scale. Making use of the fixed-point index theory, sufficient conditions are given to obtain existence, multiplicity, and infinite solvability of positive solutions.

2 Preliminary results on time scales

We begin by recalling some basic concepts of time scales. Then, we prove some preliminary results that will be needed in the sequel.

Let f : T → R and t ∈ Tk (assume t is not left-scattered if t = sup T). We define f△(t) to be the number (provided it exists) such that given any ǫ > 0 there is a neighborhood U of t such that |f(σ(t)) − f(s) − f△(t)(σ(t) − s)| ≤ ǫ|σ(t) − s| for all s ∈ U.

Mathematical Logic (数理逻辑)

Mathematical logic is the discipline that uses mathematical methods to study the logical questions arising in phenomena such as the validity of inference, the soundness of proofs, the truth of mathematics, and the feasibility of computation.

Its objects of study are the formal systems obtained by symbolizing the two intuitive notions of proof and computation.

Mathematical logic is an indispensable component of the foundations of mathematics.

Its scope is the part of logic that can be modeled mathematically.

It was formerly called symbolic logic (in contrast to philosophical logic) and also metamathematics, though the latter term is now restricted to certain aspects of proof theory.

Historical background: the name "mathematical logic" was first given by Peano, who also called it symbolic logic.

In essence, mathematical logic is still Aristotle's logic, but from the standpoint of notation it is written in the language of abstract algebra.

Some mathematicians with strong philosophical leanings made early attempts to treat formal logic by symbolic or algebraic methods, among them Leibniz and Johann Heinrich Lambert; but their work remained little known and found no successors.

It was not until the middle of the 19th century that George Boole and, after him, Augustus De Morgan put forward a systematic mathematical (though not quantitative) way of treating logic.

The traditional logic descending from Aristotle was thereby reformed and completed, and this also provided a suitable instrument for investigating the fundamental concepts of mathematics.

Although this did not settle the disputes over the foundations of mathematics that took place between 1900 and 1925, the "new" logic went a long way toward clarifying the philosophical questions surrounding mathematics.

Throughout the 20th century, a great deal of work in logic has concentrated on formalizing logical systems and on questions concerning the completeness and consistency of such systems.

The study of these formalized logical systems is itself carried out with the methods of mathematical logic. Traditional logic (see the list of logic topics) placed its emphasis on "the form of arguments", whereas the outlook of contemporary mathematical logic might be summarized as the combinatorial study of content.

This covers both "syntax" (for example, sending a string of text from a formal language to a compiler program, which transcribes it into machine instructions) and "semantics" (constructing a particular model, or the set of all models, in model theory).

Important works of mathematical logic include Gottlob Frege's Begriffsschrift and Bertrand Russell's Principia Mathematica, among others.

1. Operations Research: Linear Programming Theory and Applications

Linear Programming (LP)

The housing-construction plan for the city of Nikolaevsk in the former Soviet Union used the model above, with a total of 12 variables and 10 constraints.

Exercise: A livestock farm must purchase feed every day so that its animals receive the required nutrients; the relevant data are given below. Decide how many kilograms of each of the two feeds, M and N, to buy so that the total expenditure is minimized.

    Feed   Price (yuan/kg)   Nutrient content per kilogram
                             A      B      C      D
    M      10                0.1    0      0.1    0.2
    N      4                 0      0.1    0.2    0.1

    Daily requirement
    per head of livestock:   0.4    0.6    2.0    1.7

Solution: Let the amounts of feeds M and N purchased be x1 and x2 kilograms, respectively. Then

    Min z = 10x1 + 4x2
    s.t.  0.1x1 +   0x2 ≥ 0.4
            0x1 + 0.1x2 ≥ 0.6
          0.1x1 + 0.2x2 ≥ 2
          0.2x1 + 0.1x2 ≥ 1.7
          x1, x2 ≥ 0

Exercise: Solve the following linear program by the graphical method.

    3x1 + 4x2 ≥ 1.5
    x1, x2 ≥ 0

[Figure: graphical solution. Objective-function lines L0: 0 = 5X1 + 4X2, 8 = 5X1 + 4X2, and 43 = 5X1 + 4X2 are drawn across the feasible region formed by X1 + 1.9X2 = 3.8 (≥) and X1 − 1.9X2 = 3.8 (≤); translating L0 in the min Z and max Z directions shows that point D = (0, 2) is the unique optimal solution.]

Insights from the graphical method

1. Possible outcomes of a linear programming problem:
   a. a unique optimal solution;
   b. infinitely many optimal solutions;
   c. no solution (no bounded optimum, or no feasible solution).

What shape does the feasible region of a two-dimensional linear program take? A polygon, and moreover a "convex" polygon. Where is the optimal solution attained? On the boundary, and in fact at one of the vertices.
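As a numerical cross-check on the feed-blending exercise above, the model can be solved directly (a Python sketch using scipy.optimize.linprog; the original slides solve such problems graphically or by the simplex method):

from scipy.optimize import linprog

# min 10*x1 + 4*x2 subject to the four nutrient constraints.
# linprog expects <= rows, so each >= constraint is negated.
c = [10, 4]
A_ub = [[-0.1, -0.0],    # A: 0.1*x1           >= 0.4
        [-0.0, -0.1],    # B:           0.1*x2 >= 0.6
        [-0.1, -0.2],    # C: 0.1*x1 + 0.2*x2  >= 2
        [-0.2, -0.1]]    # D: 0.2*x1 + 0.1*x2  >= 1.7
b_ub = [-0.4, -0.6, -2.0, -1.7]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print(res.x, res.fun)    # optimum at x1 = 4, x2 = 9 with total cost 76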

glpkAPI - a low-level interface to GLPK (the GNU Linear Programming Kit): Quick Start


glpkAPI – Quick Start
Gabriel Gelius-Dietrich
November 10, 2022

1 Introduction

The package glpkAPI provides a low level interface to the C API of GLPK [1], the GNU Linear Programming Kit. It is similar in purpose to the package glpk [2], but glpkAPI relies on a separate installation of GLPK.

2 Installation

The package glpkAPI depends on a working installation of GLPK (in particular, libraries and header files). It is recommended to link GLPK to the GNU Multiple Precision Arithmetic Library (GMP) [3] in order to gain more performance when using the exact simplex algorithm. See INSTALL for installation instructions and platform specific details. CRAN [4] provides binary versions of glpkAPI for Windows and MacOS X; no other software is required here.

3 Usage

3.1 Creating and solving a linear optimization problem

In the following, an example lp-problem will be created and solved. It is the same lp-problem which is used in the GLPK manual:

    maximize
        z = 10x1 + 6x2 + 4x3
    subject to
          x1 +  x2 +  x3 ≤ 100
        10x1 + 4x2 + 5x3 ≤ 600
         2x1 + 2x2 + 6x3 ≤ 300

with all variables being non-negative.

Load the library.

> library(glpkAPI)

Create an empty problem object.

> prob <- initProbGLPK()

Assign a name to the problem object.

> setProbNameGLPK(prob, "sample")

Set the direction of optimization. The object GLP_MAX is a predefined constant used by GLPK. A list of all available constants is given in the documentation glpkConstants.

> setObjDirGLPK(prob, GLP_MAX)

Add three rows and three columns to the problem object.

> addRowsGLPK(prob, 3)
[1] 1
> addColsGLPK(prob, 3)
[1] 1

Set row and column names.

> setRowNameGLPK(prob, 1, "p")
> setRowNameGLPK(prob, 2, "q")
> setRowNameGLPK(prob, 3, "r")
> setColNameGLPK(prob, 1, "x1")
> setColNameGLPK(prob, 2, "x2")
> setColNameGLPK(prob, 3, "x3")

Set the type and bounds of the rows.

> setRowBndGLPK(prob, 1, GLP_UP, 0, 100)
> setRowBndGLPK(prob, 2, GLP_UP, 0, 600)
> setRowBndGLPK(prob, 3, GLP_UP, 0, 300)

Set the type and bounds of rows using a function which has the ability to work with vectors.

> lb <- c(0, 0, 0)
> ub <- c(100, 600, 300)
> type <- rep(GLP_UP, 3)
> setRowsBndsGLPK(prob, 1:3, lb, ub, type)

Set the type and bounds of the columns.

> setColBndGLPK(prob, 1, GLP_LO, 0, 0)
> setColBndGLPK(prob, 2, GLP_LO, 0, 0)
> setColBndGLPK(prob, 3, GLP_LO, 0, 0)

Set the objective function.

> setObjCoefGLPK(prob, 1, 10)
> setObjCoefGLPK(prob, 2, 6)
> setObjCoefGLPK(prob, 3, 4)

Set the type and bounds of columns and the objective function using a function which has the ability to work with vectors.

> lb <- c(0, 0, 0)
> ub <- lb
> type <- rep(GLP_LO, 3)
> obj <- c(10, 6, 4)
> setColsBndsObjCoefsGLPK(prob, 1:3, lb, ub, obj, type)

Load the constraint matrix.

> ia <- c(1, 1, 1, 2, 3, 2, 3, 2, 3)
> ja <- c(1, 2, 3, 1, 1, 2, 2, 3, 3)
> ar <- c(1, 1, 1, 10, 2, 4, 2, 5, 6)
> loadMatrixGLPK(prob, 9, ia, ja, ar)

Solve the problem using the simplex algorithm.

> solveSimplexGLPK(prob)
[1] 0

Retrieve the value of the objective function after optimization.

> getObjValGLPK(prob)
[1] 733.3333

Retrieve the values of the structural variables (columns) after optimization.

> getColPrimGLPK(prob, 1)
[1] 33.33333
> getColPrimGLPK(prob, 2)
[1] 66.66667
> getColPrimGLPK(prob, 3)
[1] 0

Retrieve all primal values of the structural variables (columns) after optimization.

> getColsPrimGLPK(prob)
[1] 33.33333 66.66667  0.00000

Retrieve all dual values of the structural variables (columns) after optimization (reduced costs).

> getColsDualGLPK(prob)
[1]  0.000000  0.000000 -2.666667

Print the solution to text file sol.txt.

> printSolGLPK(prob, "sol.txt")
[1] 0

Write the problem to file prob.lp in lp format.

> writeLPGLPK(prob, "prob.lp")
[1] 0

Read the problem from file prob.lp in lp format.

> lp <- initProbGLPK()
> readLPGLPK(lp, "prob.lp")
[1] 0

Free the memory allocated to the problem object.

> delProbGLPK(prob)
> delProbGLPK(lp)

3.2 Setting control parameters

All parameters and possible values are described in the documentation; see

> help(glpkConstants)

for details. The control parameters used by glpkAPI have the same names as those from GLPK, except that they are written in capital letters. For example, the parameter tm_lim in GLPK is TM_LIM in glpkAPI. The parameters are stored in a structure available only once per R session. Set the searching time limit to one second.

> setSimplexParmGLPK(TM_LIM, 1000)

4 Function names

4.1 Searching

The function names in glpkAPI are different from the names in GLPK, e.g. the function addColsGLPK in glpkAPI is called glp_add_cols in GLPK. The directory inst/ contains a file c2r.map which maps a GLPK function name to the corresponding glpkAPI function name. Additionally, all man-pages contain an alias to the GLPK function name. The call

> help("glp_add_cols")

will bring up the man-page of addColsGLPK. Keep in mind that most of the GLPK functions do not work on vectors. For example, the function setColBndGLPK (which is glp_set_col_bnds in GLPK) sets the upper and lower bounds for exactly one column. The function setColsBndsGLPK in glpkAPI can handle a vector of column indices. Assume we have a problem containing 1000 columns and 600 rows, with all variables having a lower bound of zero and an upper bound of 25. The problem will be created as follows.

> prob <- initProbGLPK()
> addColsGLPK(prob, 1000)
[1] 1
> addRowsGLPK(prob, 600)
[1] 1

Now we can set the column bounds via mapply and setColBndGLPK.

> system.time(
+     mapply(setColBndGLPK, j = 1:1000,
+            MoreArgs = list(lp = prob, type = GLP_DB, lb = 0, ub = 25))
+ )
   User  System verstrichen
  0.008   0.000       0.009

Or we use the simpler call to setColsBndsGLPK.

> system.time(
+     setColsBndsGLPK(prob, j = 1:1000,
+                     type = rep(GLP_DB, 1000),
+                     lb = rep(0, 1000),
+                     ub = rep(0, 1000))
+ )
   User  System verstrichen
      0       0       0

The latter call is also much faster.

4.2 Mapping

The file c2r.map in inst/ maps the glpkAPI function names to the original GLPK function names of its C-API. To use the latter, run

> c2r <- system.file(package = "glpkAPI", "c2r.map")
> source(c2r)

Now either

> pr1 <- initProbGLPK()
> delProbGLPK(pr1)

or the original functions

> pr2 <- glp_create_prob()
> glp_delete_prob(pr2)

both work. Keep in mind that the mapping only affects the function names, not the arguments of a function.

Footnotes

[1] Andrew Makhorin: GNU Linear Programming Kit, Version 4.42 (or higher), http://www.gnu.org/software/glpk/glpk.html
[2] Maintained by Lopaka Lee, available on CRAN: /package=glpk
[3] /
[4] /

Technical English 4: Low-pass Filters (Original Text and Translation)


Words and Expressions

integrator n. 积分器
amplitude n. 幅值
slope n. 斜率
denominator n. 分母
impedance n. 阻抗
inductor n. 电感
capacitor n. 电容
cascade n. 串联
passband n. 通带
ringing n. 振铃
damping n. 阻尼,衰减
conjugate adj. 共轭的
stage v. 成为
low-pass filters 低通滤波器
building block 模块
linear ramp 线性斜坡
log/log coordinates 对数/对数坐标
Bode plot 伯德图
transfer function 传递函数
complex-frequency variable 复变量
complex frequency plane 复平面
real component 实部
frequency response 频率响应
complex function 复变函数
Laplace transform 拉普拉斯变换
real part 实部
imaginary part 虚部
angular frequency 角频率
transient response 瞬态响应
decaying-exponential response 衰减指数响应
step function input 阶跃(函数)输入
time constant 时间常数
first-order filters 一阶滤波器
second-order low-pass filters 二阶低通滤波器
passive circuit 无源电路
active circuit 有源电路
characteristic frequency 特征频率
quality factor n. 品质因子,品质因数
circular path 圆弧路径
complex conjugate pairs 共轭复数对
switched-capacitor 开关电容
negative-real half of the complex plane 复平面负半平面

Unit 4 Low-pass Filters

First-Order Filters

An integrator (Figure 2.1a) is the simplest filter mathematically, and it forms the building block for most modern integrated filters. Consider what we know intuitively about an integrator. If you apply a DC signal at the input (i.e., zero frequency), the output will describe a linear ramp that grows in amplitude until limited by the power supplies. Ignoring that limitation, the response of an integrator at zero frequency is infinite, which means that it has a pole at zero frequency. (A pole exists at any frequency for which the transfer function's value becomes infinite.)

Figure 2.1a A simple RC integrator

We also know that the integrator's gain diminishes with increasing frequency and that at high frequencies the output voltage becomes virtually zero. Gain is inversely proportional to frequency, so it has a slope of -1 when plotted on log/log coordinates (i.e., -20 dB/decade on a Bode plot, Figure 2.1b).

Figure 2.1b A Bode plot of a simple integrator

You can easily derive the transfer function as

    H(s) = ω0 / s

where s is the complex-frequency variable and ω0 is 1/RC. If we think of s as frequency, this formula confirms the intuitive feeling that gain is inversely proportional to frequency.

The next most complex filter is the simple low-pass RC type (Figure 2.2a). Its characteristic (transfer function) is

    H(s) = ω0 / (s + ω0)

When s = 0, the function reduces to ω0/ω0, i.e., 1. When s tends to infinity, the function tends to zero, so this is a low-pass filter. When s = -ω0, the denominator is zero and the function's value is infinite, indicating a pole in the complex frequency plane. The magnitude of the transfer function is plotted against s in Figure 2.2b, where the real component of s (σ) is toward us and the positive imaginary part (jω) is toward the right. The pole at -ω0 is evident. Amplitude is shown logarithmically to emphasize the function's form. For both the integrator and the RC low-pass filter, frequency response tends to zero at infinite frequency; that is, there is a zero at s = ∞. This single zero surrounds the complex plane.

But how does the complex function in s relate to the circuit's response to actual frequencies? When analyzing the response of a circuit to AC signals, we use the expression jωL for the impedance of an inductor and 1/jωC for that of a capacitor. When analyzing transient response using Laplace transforms, we use sL and 1/sC for the impedance of these elements. The similarity is apparent immediately. The jω in AC analysis is in fact the imaginary part of s, which, as mentioned earlier, is composed of a real part σ and an imaginary part jω.

If we replace s by jω in any equation so far, we have the circuit's response to an angular frequency ω. In the complex plot in Figure 2.2b, σ = 0 along the jω axis, and hence the function's value along this axis is the frequency response of the filter. We have sliced the function along the jω axis and emphasized the RC low-pass filter's frequency-response curve by adding a heavy line for function values along the positive jω axis. The more familiar Bode plot (Figure 2.2c) looks different in form only because the frequency is expressed logarithmically.

Figure 2.2a A simple RC low-pass filter

While the complex frequency's imaginary part (jω) helps describe a response to AC signals, the real part (σ) helps describe a circuit's transient response. Looking at Figure 2.2b, we can therefore say something about the RC low-pass filter's response as compared to that of the integrator. The low-pass filter's transient response is more stable, because its pole is in the negative-real half of the complex plane. That is, the low-pass filter makes a decaying-exponential response to a step-function input; the integrator makes an infinite response. For the low-pass filter, pole positions further down the σ axis mean a higher ω0, a shorter time constant, and therefore a quicker transient response. Conversely, a pole closer to the jω axis causes a longer transient response.

So far, we have related the mathematical transfer functions of some simple circuits to their associated poles and zeroes in the complex-frequency plane. From these functions, we have derived the circuit's frequency response (and hence its Bode plot) and also its transient response. Because both the integrator and the RC filter have only one s in the denominator of their transfer functions, they each have only one pole. That is, they are first-order filters.

Figure 2.2b The complex function of an RC low-pass filter
Figure 2.2c A Bode plot of a low-pass filter
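(An illustrative aside, not part of the original text or its translation.) The -20 dB/decade roll-off of H(s) = ω0/(s + ω0) is easy to check numerically; here a 1 kHz corner frequency is assumed:

import numpy as np

w0 = 2 * np.pi * 1e3                      # assumed corner frequency: 1 kHz
f = np.array([10., 100., 1e3, 1e4, 1e5])  # spot frequencies in Hz
w = 2 * np.pi * f
H = w0 / (1j * w + w0)                    # evaluate H(s) at s = jw
print(20 * np.log10(np.abs(H)))           # ~0 dB in the passband, -3 dB at the
                                          # corner, then ~20 dB less per decade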
However, as we can see from Figure 2.1b, the first-order filter does not provide a very selective frequency response. To tailor a filter more closely to our needs, we must move on to higher orders. From now on, we will describe the transfer function using f(s) rather than the cumbersome VOUT/VIN.

Second-Order Low-Pass Filters

A second-order filter has s^2 in the denominator and two poles in the complex plane. You can obtain such a response by using inductance and capacitance in a passive circuit or by creating an active circuit of resistors, capacitors, and amplifiers. Consider the passive LC filter in Figure 2.3a, for instance. We can show that its transfer function has the form

    f(s) = (1/LC) / (s^2 + (R/L)s + 1/LC)

and if we define ω0 = 1/sqrt(LC) and Q = ω0 L / R, then

    f(s) = ω0^2 / (s^2 + (ω0/Q)s + ω0^2)

where ω0 is the filter's characteristic frequency and Q is the quality factor (lower R means higher Q).

Figure 2.3a An RLC low-pass filter

The poles occur at s values for which the denominator becomes zero; that is, when s^2 + (ω0/Q)s + ω0^2 = 0. We can solve this equation by remembering that the roots of ax^2 + bx + c = 0 are given by

    x = ( -b ± sqrt(b^2 - 4ac) ) / 2a

In this case, a = 1, b = ω0/Q, and c = ω0^2. The term (b^2 - 4ac) equals ω0^2 (1/Q^2 - 4), so if Q is less than 0.5 then both roots are real and lie on the negative-real axis. The circuit's behavior is much like that of two first-order RC filters in cascade.
This case isn't very interesting, so we'll consider only the case where Q > 0.5, which means (b^2 - 4ac) is negative and the roots are complex.

Figure 2.3b A pole-zero diagram of an RLC low-pass filter

The real part of both roots is -b/2a, which is -ω0/2Q, and common to both roots. The roots' imaginary parts will be equal and opposite in sign. Calculating the position of the roots in the complex plane, we find that they lie at a distance of ω0 from the origin, as shown in Figure 2.3b. Varying ω0 changes the poles' distance from the origin. Decreasing the Q moves the poles toward each other, whereas increasing the Q moves the poles in a semicircle away from each other and toward the jω axis. When Q = 0.5, the poles meet at -ω0 on the negative-real axis. In this case, the corresponding circuit is equivalent to two cascaded first-order filters.

Now let's examine the second-order function's frequency response and see how it varies with Q. As before, Figure 2.4a shows the function as a curved surface, depicted in the three-dimensional space formed by the complex plane and a vertical magnitude vector. Q = 0.707, and you can see immediately that the response is a low-pass filter.

The effect of increasing the Q is to move the poles in a circular path toward the jω axis. Figure 2.4b shows the case where Q = 2. Because the poles are closer to the jω axis, they have a greater effect on the frequency response, causing a peak at the high end of the passband.

Figure 2.4a The complex function of a second-order low-pass filter (Q = 0.707)
Figure 2.4b The complex function of a second-order low-pass filter (Q = 2)

There is also an effect on the filter's transient response. Because the poles' negative-real part is smaller, an input step function will cause ringing at the filter output. Lower values of Q result in less ringing, because the damping is greater. On the other hand, if Q becomes infinite, the poles reach the jω axis, causing an infinite frequency response (instability and continuous oscillation) at ω = ω0. In the LCR circuit in Figure 2.3a, this condition would be impossible unless R = 0. For filters that contain amplifiers, however, the condition is possible and must be considered in the design process.

A second-order filter provides the variables ω0 and Q, which allow us to place poles wherever we want in the complex plane. These poles must, however, occur as complex conjugate pairs, in which the real parts are equal and the imaginary parts have opposite signs. This flexibility in pole placement is a powerful tool and one that makes the second-order stage a useful component in many switched-capacitor filters. As in the first-order case, the second-order low-pass transfer function tends to zero as frequency tends to infinity. The second-order function decreases twice as fast, however, because of the s^2 factor in the denominator. The result is a double zero (零点) at infinity.
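(Again an illustrative aside, not from the original.) A neat consequence of f(s) = ω0^2/(s^2 + (ω0/Q)s + ω0^2) is that the magnitude at ω = ω0 equals Q exactly, which is the peaking described above:

w0 = 1.0
for Q in (0.5, 0.707, 2.0):
    s = 1j * w0                                 # evaluate f(s) at s = j*w0
    f = w0**2 / (s**2 + (w0 / Q) * s + w0**2)   # denominator reduces to j*w0^2/Q
    print(Q, abs(f))                            # |f(j*w0)| prints Q itself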

dangling sentences (from students' assignments)


1. Dangling sentences

However, due to hometextile industry developed at a low speed in China. The hometextile color application lacks systematic theoretical direction. As a result, many companies faced a problem in color choice.

2. Grammar mistakes (S+V)

Doing research by combining the fashion color and the hometextile the disparty of fashion color applications was found between Chinese modern hometextile and international hometextile.

In recent years, it was proved that nano-particles with surface activity can be used to prepare ultra-stable emulsion in suitable conditions.

In this paper we discuss a frequency domain approach to model multirate single-input single-output systems which facilitates design of linear time-invariant controllers operating at the fast rate.

By employing Lyapunov-Krasovskii functional, and conducting a linear matrix ineqauality (LMI) approach, Kronedker product is developed to derive the criteria for the synchronization, which can be readily checked by using some standard numerical packages such as the Matlab LMI Toolbox.

In the past few decades, chaos synchronization has attracted a great deal of research interest…. It is now well recognized that the dynamical behaviors of the chaos synchronization processes contains inherent time delays.

At last, these samples were delivered to the refrigeratory before all were cleaned up and pick out the unhealthy jujubes.

By adding thickeners to Chongcai a semi-solid structure is formed, in which flavor compounds will be trapped, thus stop the flovor compounds from degrading and solve the problem, this also presents a way of using Chongcai as a raw material in mustard paste instead.

During this process the nucleic acids might lose its physiological function

Raman spectroscopy as a biometric fingerprint recognition technology has great potential is the rapid identification of bacteria.

One probability is the multiple probiotic formulation VSL#3 prevents the onset of intestinal inflammation by local stimulation of epithelial innate immue response. Another is the probiotic bacteria stimulate epithelial production of TNF-ə and activate NF-KB in vitro.

The oleaginous microorganism is the oil content is up to more than 2o% of their biomass,…

Firstly, using the method of serial dilution isolated some strains from the wounds and surface of fruits, leaves and rhizosphere soil.

It is obvious that milk powder should be dryly stored at low temperature or it will get brown or badly oxidation.

Therefore, keep milk powder storage temperature under its Tg is necessary.

Now the water activity and corresponding water content is characterized by the water sorption isothern…

There does not only exist serious traffic jams but also crowded buses, underground and railways.

But there are lot of proofs prove that they are very stable, enzymes, heat treatment, light and even the anthocyamins themselves can cause the degradation procedure.

We both want a good foaming properties and a foam stability.

We find a new mehtod is that produce IMO from broken rice. IMO is an important functional oligosaccharide, it has many good processing properties and physiological functions such as adjust inteestinal microecological balance, enhance immunity and so on.

Chinese chestnut is one of the world famous dried fruit, which has 3000 years planting history in our country. It was favored by consumers, because of its rich nutrition value and high health care value. Production of Chinese chestnut was in the world's first.
However, chinese chestnut have high water content and not resistant in storage.

In modern years, there are two kinds of vectors according to its function.

PCR, in abbreviation of polymerase chain reaction, is a technique widely used in the field of molecular engineering.

Systematic study of active compounds inhibiting fromation of biofilm and its role in the destruction of the mature biofilm using scanning electron microscopy, confocal laser microscopy analyze the visual impact of baterial biofilms.

3. Sentences difficult to understand

Because of the significant value of the polypeptides or amino acids from wool keratin in food, cosmetic, and textile industries, it is very necessary to study the hydrolysis of wool keratin.

…, but the methods mostly based on chemical clearage of the disulfide bonds of the amino acid cystine, with the reductive or oxidative agents used foro s-s clearage, namely, sulfites, thiols, or peroxides are harmful, often toxic, and difficult to handle.

In order to publish article, I need to find an enhanced sensitivity lateral flow tests by using nanoparticles and functional materials.

This paragraph is talking about the electrochemical behavior of biological molecules. First, micellar effects on the electrochemical oxidation of norephinephrine (NE) in automic sodium dodecysulfate (SDS) and catiomic cetytrimethyl ammonium bromide (CTAB) micelles.

As we know, colloidal partically wetted by both water and oil phases can aggregate at oil-water interface and stabilize macroemulsions. Currently various nanoparticles which have not been surface treated, such as SiO2, CaCO3 abd carbon black, are not surface active due to either their extreme hydrophilicity or hydrophobicity.

A support vector machine (SVM) is a concept in computer science for a set of related supervised learning methods that analyze data and recognize patterns, used for classification and regression analysis. An SVM model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible.

The nucleotide base-pairing in DNA replication and proteins systhesis, which were basic procedures in genetic information processing, could be investigated as an unsorted database search. Because of this disturbance from some circumstances, they hardly maintain quantum coherence.

4. No logic connection

Emulsion can be seen everywhere in daily life, so it is concerned about stability. People always use surfactants or polymers to stabilize the emulsion, but the resulting emulsion is thermodynamically unstable system.

Packaged products can be potentially damaged by droppingt, and it is important ti investigate the oscillation process of the packaging system. In the past, great efforts have been made in this special field. Newton's damage boundary concept and succedding modified damage evaluation approaches, such as fatigue damage boundary concept, dosplacement damage boundary concept and dropping damage boundary concept, were widely utilized in packaging design.

…In addition, a neural network sometimes has finite modes that switch from one to another at different times, and such a switching can be governed by a Markovian chain. 2008, Liu Yuring studies synchronization of comples networks with Markovian jump.
2008, Liu Yuring studies synchronization of comples networks with Markovian jump.However, synchronization of coupled neural networks with Markovian jump and impulsive effect has been less studied, and it can be studied.4.Not make efficient use of wordsUntil now, there are three kinds of methods of hydrolysis of woolkeratin: enzymatic hydrolysis, alkavine hydrolysis and acidic hydrolysis. Enzymatic hydrolysis methods offers adavantages such as a less species alteration…。

Elementary Linear Algebra Instructor Solutions Manual


If you are searching for the ebook Elementary Linear Algebra Instructor Solutions Manual (elementary-linear-algebra-instructor-solutions-manual.pdf) in pdf format, then you've come to the faithful site. We furnish the full edition of this book in txt, PDF, DjVu, doc, and ePub forms. You can read Elementary Linear Algebra Instructor Solutions Manual online, or download it. In addition, on our website you can read the instructions and other artistic eBooks online, or download them. We would like to draw your attention to the fact that our site does not store the eBook itself; rather, we provide references to websites where you may download or read it online. So if you need to download the pdf Robinair model 34134z repair manual, then you've come to the loyal website. We own Elementary Linear Algebra Instructor Solutions Manual in txt, ePub, PDF, DjVu, and doc forms. We will be happy if you get back to us again.

Consensus and Cooperation in Networked Multi-Agent Systems


Consensus and Cooperation in Networked Multi-Agent Systems

Algorithms that provide rapid agreement and teamwork between all participants allow effective task performance by self-organizing networked systems.

By Reza Olfati-Saber, Member IEEE, J. Alex Fax, and Richard M. Murray, Fellow IEEE

ABSTRACT | This paper provides a theoretical framework for analysis of consensus algorithms for multi-agent networked systems with an emphasis on the role of directed information flow, robustness to changes in network topology due to link/node failures, time-delays, and performance guarantees. An overview of basic concepts of information consensus in networks and methods of convergence and performance analysis for the algorithms are provided. Our analysis framework is based on tools from matrix theory, algebraic graph theory, and control theory. We discuss the connections between consensus problems in networked dynamic systems and diverse applications including synchronization of coupled oscillators, flocking, formation control, fast consensus in small-world networks, Markov processes and gossip-based algorithms, load balancing in networks, rendezvous in space, distributed sensor fusion in sensor networks, and belief propagation. We establish direct connections between spectral and structural properties of complex networks and the speed of information diffusion of consensus algorithms. A brief introduction is provided on networked systems with nonlocal information flow that are considerably faster than distributed systems with lattice-type nearest neighbor interactions. Simulation results are presented that demonstrate the role of small-world effects on the speed of consensus algorithms and cooperative control of multivehicle formations.

KEYWORDS | Consensus algorithms; cooperative control; flocking; graph Laplacians; information fusion; multi-agent systems; networked control systems; synchronization of coupled oscillators

I. INTRODUCTION

Consensus problems have a long history in computer science and form the foundation of the field of distributed computing [1]. Formal study of consensus problems in groups of experts originated in management science and statistics in 1960s (see DeGroot [2] and references therein). The ideas of statistical consensus theory by DeGroot reappeared two decades later in aggregation of information with uncertainty obtained from multiple sensors(1) [3] and medical experts [4]. Distributed computation over networks has a tradition in systems and control theory starting with the pioneering work of Borkar and Varaiya [5] and Tsitsiklis [6] and Tsitsiklis, Bertsekas, and Athans [7] on asynchronous asymptotic agreement problem for distributed decision-making systems and parallel computing [8].

In networks of agents (or dynamic systems), "consensus" means to reach an agreement regarding a certain quantity of interest that depends on the state of all agents. A "consensus algorithm" (or protocol) is an interaction rule that specifies the information exchange between an agent and all of its neighbors on the network.(2)

The theoretical framework for posing and solving consensus problems for networked dynamic systems was introduced by Olfati-Saber and Murray in [9] and [10], building on the earlier work of Fax and Murray [11], [12].
The study of the alignment problem involving reaching an agreement V without computing any objective functions V appeared in the work of Jadbabaie et al.[13].Further theoretical extensions of this work were presented in[14] and[15]with a look toward treatment of directed infor-mation flow in networks as shown in Fig.1(a).Manuscript received August8,2005;revised September7,2006.This work was supported in part by the Army Research Office(ARO)under Grant W911NF-04-1-0316. R.Olfati-Saber is with Dartmouth College,Thayer School of Engineering,Hanover,NH03755USA(e-mail:olfati@).J.A.Fax is with Northrop Grumman Corp.,Woodland Hills,CA91367USA(e-mail:alex.fax@).R.M.Murray is with the California Institute of Technology,Control and Dynamical Systems,Pasadena,CA91125USA(e-mail:murray@).Digital Object Identifier:10.1109/JPROC.2006.8872931This is known as sensor fusion and is an important application of modern consensus algorithms that will be discussed later.2The term B nearest neighbors[is more commonly used in physics than B neighbors[when applied to particle/spin interactions over a lattice (e.g.,Ising model).Vol.95,No.1,January2007|Proceedings of the IEEE2150018-9219/$25.00Ó2007IEEEThe common motivation behind the work in [5],[6],and [10]is the rich history of consensus protocols in com-puter science [1],whereas Jadbabaie et al.[13]attempted to provide a formal analysis of emergence of alignment in the simplified model of flocking by Vicsek et al.[16].The setup in [10]was originally created with the vision of de-signing agent-based amorphous computers [17],[18]for collaborative information processing in ter,[10]was used in development of flocking algorithms with guaranteed convergence and the capability to deal with obstacles and adversarial agents [19].Graph Laplacians and their spectral properties [20]–[23]are important graph-related matrices that play a crucial role in convergence analysis of consensus and alignment algo-rithms.Graph Laplacians are an important point of focus of this paper.It is worth mentioning that the second smallest eigenvalue of graph Laplacians called algebraic connectivity quantifies the speed of convergence of consensus algo-rithms.The notion of algebraic connectivity of graphs has appeared in a variety of other areas including low-density parity-check codes (LDPC)in information theory and com-munications [24],Ramanujan graphs [25]in number theory and quantum chaos,and combinatorial optimization prob-lems such as the max-cut problem [21].More recently,there has been a tremendous surge of interest V among researchers from various disciplines of engineering and science V in problems related to multia-gent networked systems with close ties to consensus prob-lems.This includes subjects such as consensus [26]–[32],collective behavior of flocks and swarms [19],[33]–[37],sensor fusion [38]–[40],random networks [41],[42],syn-chronization of coupled oscillators [42]–[46],algebraic connectivity 3of complex networks [47]–[49],asynchro-nous distributed algorithms [30],[50],formation control for multirobot systems [51]–[59],optimization-based co-operative control [60]–[63],dynamic graphs [64]–[67],complexity of coordinated tasks [68]–[71],and consensus-based belief propagation in Bayesian networks [72],[73].A detailed discussion of selected applications will be pre-sented shortly.In this paper,we focus on the work described in five key papers V namely,Jadbabaie,Lin,and Morse [13],Olfati-Saber and Murray [10],Fax and Murray [12],Moreau [14],and Ren and Beard [15]V that have been 
instrumental in paving the way for more recent advances in study of self-organizing networked systems, or swarms. These networked systems are comprised of locally interacting mobile/static agents equipped with dedicated sensing, computing, and communication devices. As a result, we now have a better understanding of complex phenomena such as flocking [19], or design of novel information fusion algorithms for sensor networks that are robust to node and link failures [38], [72]-[76]. Gossip-based algorithms such as the push-sum protocol [77] are important alternatives in computer science to Laplacian-based consensus algorithms in this paper. Markov processes establish an interesting connection between the information propagation speed in these two categories of algorithms proposed by computer scientists and control theorists [78].

The contribution of this paper is to present a cohesive overview of the key results on theory and applications of consensus problems in networked systems in a unified framework. This includes basic notions in information consensus and control theoretic methods for convergence and performance analysis of consensus protocols that heavily rely on matrix theory and spectral graph theory. A byproduct of this framework is to demonstrate that seemingly different consensus algorithms in the literature [10], [12]-[15] are closely related. Applications of consensus problems in areas of interest to researchers in computer science, physics, biology, mathematics, robotics, and control theory are discussed in this introduction.

A. Consensus in Networks

The interaction topology of a network of agents is represented using a directed graph G = (V, E) with the set of nodes V = {1, 2, ..., n} and edges E ⊆ V × V.

Fig. 1. Two equivalent forms of consensus algorithms: (a) a network of integrator agents in which agent i receives the state x_j of its neighbor, agent j, if there is a link (i, j) connecting the two nodes; and (b) the block diagram for a network of interconnected dynamic systems all with identical transfer functions P(s) = 1/s. The collective networked system has a diagonal transfer function and is a multiple-input multiple-output (MIMO) linear system.

(3) To be defined in Section II-A.
One can show that, for a connected network, the equilibrium x* = (α, ..., α)^T is globally exponentially stable. Moreover, the consensus value is α = (1/n) Σ_i z_i, which is equal to the average of the initial values. This implies that, irrespective of the initial value of the state of each agent, all agents reach an asymptotic consensus regarding the value of the function f(z) = (1/n) Σ_i z_i.

While the calculation of f(z) is simple for small networks, its implications for very large networks are more interesting. For example, if a network has n = 10^6 nodes and each node can only talk to log_10(n) = 6 neighbors, finding the average value of the initial conditions of the nodes is more complicated. The role of protocol (1) is to provide a systematic consensus mechanism for computing the average in such a large network. There is a variety of functions that can be computed in a similar fashion using synchronous or asynchronous distributed algorithms (see [10], [28], [30], [73], and [76]).

B. The f-Consensus Problem and Meaning of Cooperation

To understand the role of cooperation in performing coordinated tasks, we need to distinguish between unconstrained and constrained consensus problems. An unconstrained consensus problem is simply the alignment problem, in which it suffices that the states of all agents asymptotically become the same. In contrast, in distributed computation of a function f(z), the state of all agents has to asymptotically become equal to f(z), meaning that the consensus problem is constrained. We refer to this constrained consensus problem as the f-consensus problem.

Solving the f-consensus problem is a cooperative task and requires the willing participation of all the agents. To demonstrate this fact, suppose a single agent decides not to cooperate with the rest of the agents and keeps its state unchanged. Then the overall task cannot be performed, despite the fact that the rest of the agents reach an agreement. Furthermore, there could be scenarios in which multiple agents form a coalition that does not cooperate with the rest, and removal of this coalition of agents and their links might render the network disconnected. In a disconnected network, it is impossible for all nodes to reach an agreement (unless all nodes initially agree, which is a trivial case).

From the above discussion, cooperation can be informally interpreted as "giving consent to providing one's state and following a common protocol that serves the group objective." One might think that solving the alignment problem is not a cooperative task. The justification is that if a single agent (called a leader) leaves its value unchanged, all others will asymptotically agree with the leader according to the consensus protocol, and an alignment is reached. However, if there are multiple leaders, two of whom are in disagreement, then no consensus can be asymptotically reached. Therefore, alignment is in general a cooperative task as well.

Formal analysis of the behavior of systems that involve more than one type of agent is more complicated, particularly in the presence of adversarial agents in noncooperative games [79], [80]. The focus of this paper is on cooperative multi-agent systems.

C. Iterative Consensus and Markov Chains

In Section II, we show how an iterative consensus algorithm that corresponds to the discrete-time version of system (1) is a Markov chain

    π(k + 1) = π(k) P    (4)

with P = I − εL and a small ε > 0. Here, the ith element of the row vector π(k) denotes the probability of being in state i at iteration k. It turns out that for any arbitrary graph G with Laplacian L and a sufficiently small ε, the matrix P satisfies Σ_j p_ij = 1 with p_ij ≥ 0 for all i, j. Hence, P is a valid transition probability matrix for the Markov chain in (4). The reason matrix theory [81] is so widely used in the analysis of consensus algorithms [10], [12]–[15], [64] is primarily the structure of P in (4) and its connection to graphs. (In honor of the pioneering contributions of Oscar Perron (1907) to the theory of nonnegative matrices, we refer to P as the Perron matrix of graph G; see Section II-C for details.) There are interesting connections between this Markov chain and the speed of information diffusion in gossip-based averaging algorithms [77], [78].

One of the early applications of consensus problems was dynamic load balancing [82] for parallel processors, which has the same structure as system (4). To this date, load balancing in networks remains an active area of research in computer science.
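A small sketch of our own (the graph and ε are arbitrary, with ε chosen below 1/Δ so that P stays nonnegative) confirms that P = I − εL is row-stochastic and that the companion discrete-time iteration x(k+1) = Px(k) also reaches average consensus:

    import numpy as np

    A = np.array([[0, 1, 1, 0],      # adjacency of a small undirected graph
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    eps = 0.2                        # safe: eps < 1/(max degree) = 1/3
    P = np.eye(4) - eps * L          # Perron matrix of the graph

    # Stochasticity check: rows sum to 1, all entries nonnegative.
    print(np.allclose(P.sum(axis=1), 1.0), bool((P >= 0).all()))

    x = np.array([4.0, 0.0, 2.0, 10.0])
    for _ in range(200):
        x = P @ x                    # discrete-time consensus iteration
    print(x)                         # every entry ~ 4.0, the initial average

Equation (4) propagates a row vector of probabilities through the same matrix; the column iteration shown here is the state-update view of the very same linear map.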
D. Applications

Many seemingly different problems that involve the interconnection of dynamic systems in various areas of science and engineering happen to be closely related to consensus problems for multi-agent systems. In this section, we provide an account of the existing connections.

1) Synchronization of Coupled Oscillators: The problem of synchronization of coupled oscillators has attracted numerous scientists from diverse fields including physics, biology, neuroscience, and mathematics [83]–[86]. This is partly due to the emergence of synchronous oscillations in coupled neural oscillators. Let us consider the generalized Kuramoto model of coupled oscillators on a graph with dynamics

    θ̇_i = κ Σ_{j ∈ N_i} sin(θ_j − θ_i) + ω_i    (5)

where θ_i and ω_i are the phase and frequency of the ith oscillator. This model is the natural nonlinear extension of the consensus algorithm in (1), and its linearization around the aligned state θ_1 = ... = θ_n is identical to system (2) plus a nonzero input bias b_i = (ω_i − ω̄)/κ with ω̄ = (1/n) Σ_i ω_i, after the change of variables x_i = (θ_i − ω̄t)/κ. In [43], Sepulchre et al. show that if κ is sufficiently large, then for a network with all-to-all links, synchronization to the aligned state is globally achieved for all initial states. Recently, synchronization of networked oscillators under variable time-delays was studied in [45]. We believe that the use of convergence analysis methods that utilize the spectral properties of graph Laplacians will shed light on performance and convergence analysis of self-synchrony in oscillator networks [42].
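The following sketch is ours, not from [43] (n, κ, and the frequency spread are arbitrary choices). It simulates the Kuramoto dynamics (5) on an all-to-all graph and monitors the order parameter r = |(1/n) Σ_j e^{iθ_j}|, which approaches 1 when the oscillators phase-lock:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10
    A = np.ones((n, n)) - np.eye(n)     # all-to-all coupling, as in [43]
    kappa = 2.0                         # coupling strength (assumed value)
    omega = rng.normal(0.0, 0.1, n)     # natural frequencies
    theta = rng.uniform(-np.pi, np.pi, n)

    dt = 0.01
    for _ in range(5000):
        # diff[i, j] = theta_j - theta_i, so row sums give the coupling term
        diff = theta[None, :] - theta[:, None]
        theta = theta + dt * (kappa * (A * np.sin(diff)).sum(axis=1) + omega)

    r = np.abs(np.exp(1j * theta).mean())   # Kuramoto order parameter
    print(r)                                # close to 1.0: phases are locked

With a coupling strength this large relative to the frequency spread, the phases lock quickly; shrinking κ toward zero makes r drop, which is the qualitative content of the large-κ condition cited above.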
2) Flocking Theory: Flocks of mobile agents equipped with sensing and communication devices can serve as mobile sensor networks for massive distributed sensing in an environment [87]. A theoretical framework for the design and analysis of flocking algorithms for mobile agents with obstacle-avoidance capabilities was developed by Olfati-Saber [19]. The role of consensus algorithms in particle-based flocking is for an agent to achieve velocity matching with respect to its neighbors. In [19], it is demonstrated that flocks are networks of dynamic systems with a dynamic topology. This topology is a proximity graph that depends on the state of all agents and is determined locally for each agent; i.e., the topology of flocks is a state-dependent graph. The notion of state-dependent graphs was introduced by Mesbahi [64] in a context that is independent of flocking.

3) Fast Consensus in Small-Worlds: In recent years, network design problems for achieving faster consensus algorithms have attracted considerable attention from a number of researchers. In Xiao and Boyd [88], the design of the weights of a network is considered and solved using semidefinite convex programming. This leads to a slight increase in the algebraic connectivity of the network, which is a measure of the speed of convergence of consensus algorithms. An alternative approach is to keep the weights fixed and design the topology of the network to achieve relatively high algebraic connectivity. A randomized algorithm for network design was proposed by Olfati-Saber [47] based on the random rewiring idea of Watts and Strogatz [89] that led to the creation of their celebrated small-world model. The random rewiring of existing links of a network gives rise to considerably faster consensus algorithms. This is due to a multiple-orders-of-magnitude increase in the algebraic connectivity of the network in comparison to a lattice-type nearest-neighbor graph.

4) Rendezvous in Space: Another common form of consensus problems is rendezvous in space [90], [91]. This is equivalent to reaching a consensus in position by a number of agents with an interaction topology that is position-induced (i.e., a proximity graph). We refer the reader to [92] and references therein for a detailed discussion. This type of rendezvous is an unconstrained consensus problem that becomes challenging under variations in the network topology. Flocking is somewhat more challenging than rendezvous in space because it requires both interagent and agent-to-obstacle collision avoidance.

5) Distributed Sensor Fusion in Sensor Networks: The most recent application of consensus problems is distributed sensor fusion in sensor networks. This is done by posing the various distributed averaging problems required to implement a Kalman filter [38], [39], an approximate Kalman filter [74], or a linear least-squares estimator [75] as average-consensus problems. Novel low-pass and high-pass consensus filters have also been developed that dynamically calculate the average of their inputs in sensor networks [39], [93].

6) Distributed Formation Control: Multivehicle systems are an important category of networked systems due to their commercial and military applications. There are two broad approaches to distributed formation control: i) representation of formations as rigid structures [53], [94] and the use of gradient-based controls obtained from their structural potentials [52]; and ii) representation of formations using the vectors of relative positions of neighboring vehicles and the use of consensus-based controllers with input bias. We discuss the latter approach here.

A theoretical framework for the design and analysis of distributed controllers for multivehicle formations of type ii) was developed by Fax and Murray [12]. Moving in formation is a cooperative task and requires the consent and collaboration of every agent in the formation. In [12], graph Laplacians and matrix theory were extensively used, which makes one wonder whether relative-position-based formation control is a consensus problem. The answer is yes. To see this, consider a network of self-interested agents whose individual desire is to minimize their local cost U_i(x) = Σ_{j ∈ N_i} ||x_j − x_i − r_ij||² via a distributed algorithm (x_i is the position of vehicle i with dynamics ẋ_i = u_i, and r_ij is a desired intervehicle relative-position vector). If the agents instead apply a gradient-descent algorithm to the collective cost Σ_{i=1}^n U_i(x) using the protocol

    ẋ_i = Σ_{j ∈ N_i} (x_j − x_i − r_ij) = Σ_{j ∈ N_i} (x_j − x_i) + b_i    (6)

with input bias b_i = Σ_{j ∈ N_i} r_ji [see Fig. 1(b)], the objective of every agent will be achieved. This is the same as the consensus algorithm in (1) up to the nonzero bias terms b_i. This nonzero bias plays no role in the stability analysis of system (6). Thus, distributed formation control for integrator agents is a consensus problem. The main contribution of the work by Fax and Murray is to extend this scenario to the case where all agents are multi-input multi-output linear systems ẋ_i = Ax_i + Bu_i. Stability analysis of relative-position-based formation control for multivehicle systems is extensively covered in Section IV.
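Protocol (6) is easy to try numerically. The toy setup below is our own (three vehicles on a line, with invented neighbor lists and offsets): each vehicle runs the consensus update with input bias, and the inter-vehicle offsets converge to the desired relative positions:

    import numpy as np

    N = {0: [1], 1: [0, 2], 2: [1]}      # neighbor lists of a 3-vehicle chain
    # r[(i, j)] is the desired offset x_j - x_i; note r_ji = -r_ij.
    r = {(0, 1): 1.0, (1, 0): -1.0,
         (1, 2): 1.0, (2, 1): -1.0}

    x = np.array([0.3, 0.1, 2.5])        # arbitrary initial positions
    dt = 0.05
    for _ in range(2000):
        dx = np.zeros(3)
        for i in range(3):
            for j in N[i]:
                dx[i] += (x[j] - x[i] - r[(i, j)])   # protocol (6)
        x = x + dt * dx

    print(np.diff(x))   # offsets approach [1.0, 1.0]: the desired formation

The absolute positions depend on the initial conditions (the sum of positions is invariant, exactly as in the unbiased protocol), but the relative positions reach the prescribed formation.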
E. Outline

The outline of the paper is as follows. Basic concepts and theoretical results in information consensus are presented in Section II. Convergence and performance analysis of consensus on networks with switching topology are given in Section III. A theoretical framework for cooperative control of formations of networked multivehicle systems is provided in Section IV. Some simulation results related to consensus in complex networks, including small-worlds, are presented in Section V. Finally, some concluding remarks are stated in Section VI.

II. INFORMATION CONSENSUS

Consider a network of decision-making agents with dynamics ẋ_i = u_i that are interested in reaching a consensus via local communication with their neighbors on a graph G = (V, E). By reaching a consensus, we mean asymptotically converging to a one-dimensional agreement space characterized by the equation

    x_1 = x_2 = ... = x_n.

This agreement space can be expressed as x = α1, where 1 = (1, ..., 1)^T and α ∈ ℝ is the collective decision of the group of agents.

Let A = [a_ij] be the adjacency matrix of graph G. The set of neighbors of an agent i is defined by N_i = {j ∈ V : a_ij ≠ 0}, with V = {1, ..., n}. Agent i communicates with agent j if j is a neighbor of i (or a_ij ≠ 0). The set of all nodes and their neighbors defines the edge set of the graph as E = {(i, j) ∈ V × V : a_ij ≠ 0}.

A dynamic graph G(t) = (V, E(t)) is a graph in which the set of edges E(t) and the adjacency matrix A(t) are time-varying. Clearly, the set of neighbors N_i(t) of every agent in a dynamic graph is a time-varying set as well. Dynamic graphs are useful for describing the network topology of mobile sensor networks and flocks [19].

It is shown in [10] that the linear system

    ẋ_i(t) = Σ_{j ∈ N_i} a_ij (x_j(t) − x_i(t))    (7)

is a distributed consensus algorithm; i.e., it guarantees convergence to a collective decision via local interagent interactions. Assuming that the graph is undirected (a_ij = a_ji for all i, j), it follows that the sum of the states of all nodes is an invariant quantity, or Σ_i ẋ_i = 0. In particular, applying this condition twice, at times t = 0 and t = ∞, gives the following result:

    α = (1/n) Σ_i x_i(0).

In other words, if a consensus is asymptotically reached, then the collective decision is necessarily equal to the average of the initial states of all nodes. A consensus algorithm with this specific invariance property is called an average-consensus algorithm [9] and has broad applications in distributed computing on networks (e.g., sensor fusion in sensor networks). The dynamics of system (7) can be expressed in the compact form

    ẋ = −Lx    (8)

where L is known as the graph Laplacian of G.
The graph Laplacian is defined as

    L = D − A    (9)

where D = diag(d_1, ..., d_n) is the degree matrix of G, with elements d_i = Σ_{j ≠ i} a_ij and zero off-diagonal elements. By definition, L has a right eigenvector 1 associated with the zero eigenvalue, because of the identity L1 = 0. (These properties were discussed earlier in the introduction for graphs with 0–1 weights.) For the case of undirected graphs, the graph Laplacian satisfies the following sum-of-squares (SOS) property:

    x^T L x = (1/2) Σ_{(i,j) ∈ E} a_ij (x_j − x_i)².    (10)

By defining the quadratic disagreement function

    φ(x) = (1/2) x^T L x    (11)

it becomes apparent that algorithm (7) is the same as

    ẋ = −∇φ(x)

or the gradient-descent algorithm. This algorithm globally asymptotically converges to the agreement space provided that two conditions hold: 1) L is a positive semidefinite matrix; and 2) the only equilibrium of (7) is α1 for some α. Both of these conditions hold for a connected graph and follow from the SOS property of the graph Laplacian in (10). Therefore, an average-consensus is asymptotically reached for all initial states. This fact is summarized in the following lemma.

Lemma 1: Let G be a connected undirected graph. Then the algorithm in (7) asymptotically solves an average-consensus problem for all initial states.

A. Algebraic Connectivity and Spectral Properties of Graphs

Spectral properties of the Laplacian matrix are instrumental in the analysis of convergence of the class of linear consensus algorithms in (7). According to the Gershgorin theorem [81], all eigenvalues of L in the complex plane are located in a closed disk centered at Δ + 0j with radius Δ = max_i d_i, i.e., the maximum degree of the graph. For undirected graphs, L is a symmetric matrix with real eigenvalues; therefore, the set of eigenvalues of L can be ordered sequentially in ascending order as

    0 = λ_1 ≤ λ_2 ≤ ... ≤ λ_n ≤ 2Δ.    (12)

The zero eigenvalue is known as the trivial eigenvalue of L. For a connected graph G, λ_2 > 0 (i.e., the zero eigenvalue is isolated). The second smallest eigenvalue λ_2 of the Laplacian is called the algebraic connectivity of the graph [20]. The algebraic connectivity of the network topology is a measure of the performance/speed of consensus algorithms [10].

Example 1: Fig. 2 shows two examples of networks of integrator agents with different topologies. Both graphs are undirected and have 0–1 weights. Every node of the graph in Fig. 2(a) is connected to its 4 nearest neighbors on a ring. The other graph is a proximity graph of points that are distributed uniformly at random in a square; every node is connected to all of its spatial neighbors within a closed ball of radius r > 0. Here are the important degree information and Laplacian eigenvalues of these graphs:

    a) λ_1 = 0, λ_2 = 0.48, λ_n = 6.24, Δ = 4
    b) λ_1 = 0, λ_2 = 0.25, λ_n = 9.37, Δ = 8.    (13)

In both cases, λ_i < 2Δ for all i.
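The quantities in (12) and (13) can be reproduced for a ring-type graph like Fig. 2(a) with a few lines of Python (our sketch; n = 20 is an arbitrary size). It builds L = D − A, sorts the spectrum, and checks both the positivity of the algebraic connectivity λ_2 and the Gershgorin bound λ_n ≤ 2Δ:

    import numpy as np

    # n nodes on a ring, each connected to its 4 nearest neighbors (Fig. 2(a))
    n = 20
    A = np.zeros((n, n))
    for i in range(n):
        for k in (1, 2):                    # two neighbors on each side
            A[i, (i + k) % n] = A[(i + k) % n, i] = 1

    L = np.diag(A.sum(axis=1)) - A          # L = D - A
    eig = np.sort(np.linalg.eigvalsh(L))    # real eigenvalues, ascending
    Delta = A.sum(axis=1).max()             # maximum degree (= 4 here)

    print(eig[0], eig[1])        # trivial eigenvalue ~0; lambda_2 > 0
    print(eig[-1] <= 2 * Delta)  # Gershgorin bound: True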
B. Convergence Analysis for Directed Networks

The convergence analysis of the consensus algorithm in (7) is equivalent to proving that the agreement space characterized by x = α1, α ∈ ℝ, is an asymptotically stable equilibrium of system (7). The stability properties of system (7) are completely determined by the location of the Laplacian eigenvalues of the network. The eigenvalues of the adjacency matrix are irrelevant to the stability analysis of system (7), unless the network is k-regular (all of its nodes have the same degree k). The following lemma combines a well-known rank property of graph Laplacians with the Gershgorin theorem to provide a spectral characterization of the Laplacian of a fixed directed network G. Before stating the lemma, we need the notion of strong connectivity: a graph is strongly connected if there is a directed path connecting every ordered pair of distinct nodes.


Extension of WAM for a linear logic programming language

Naoyuki Tamura
Graduate School of Science and Technology, Kobe University
1-1 Rokkodai, Nada, Kobe 657 Japan
E-mail: tamura@kobe-u.ac.jp

and

Yukio Kaneda
Department of Computer and Systems Engineering, Kobe University
1-1 Rokkodai, Nada, Kobe 657 Japan
E-mail: kaneda@seg.kobe-u.ac.jp

ABSTRACT

This paper describes an extension of the WAM (Warren Abstract Machine) for a logic programming language called LLP which is based on intuitionistic linear logic. LLP is a subset of Lolli and includes additive and multiplicative conjunction, linear implication in a goal, the exponential (!), and the constant 1. The extension of the WAM is mainly for efficient resource management, especially for resource look-up and deletion. In our design, only one table is maintained to keep resources during the execution. Look-up of a resource is done through a hash table. Deletion of a resource is done by just "marking" the entry in the table. Our prototype compiler produces 25 times faster code compared with a compiled Prolog program which represents resources by a list structure.

1. Introduction

Linear logic, developed by J.-Y. Girard [5], is expected to find applications in various fields of computer science. Linear logic is called "resource-conscious" because consumed hypotheses cannot be used again. Therefore, in linear logic, a resource can be represented as a formula rather than as a term.

There have been several proposals for logic programming languages based on linear logic: LO [2], ACL [9], Lolli [8] [3], Lygon [6] [7], and Forum [10]. Lolli (/~hodas/research/lolli/) and Lygon (http://www.cs.mu.oz.au/~winikoff/lygon/lygon.html) are implemented as interpreter systems (on SML and λProlog for Lolli, on Prolog for Lygon), but none of them has been implemented as a compiler system. BinProlog 5.00 can compile linear implications of affine logic (a variant of linear logic), but other connectives are not covered [14].

In this paper, we describe an extension of the WAM (Warren Abstract Machine) [15] [1] for a logic programming language called LLP, which is a subset of Lolli and based on intuitionistic linear logic. The extension of the WAM is mainly for efficient resource management, especially for resource look-up and deletion. In our design, only one table is maintained to keep resources during the execution. Look-up of a resource is done through a hash table. Deletion of a resource is done by just changing the level value stored in the table. (This last sentence completes a passage cut at a page break; its content is repeated later in Sections 4 and 5.)

2. Language

We use the following fragment for LLP, where the symbol A represents an atomic formula and x̄ represents all free variables in the scope.

    P ::= C | C ⊗ P
    C ::= !∀x̄.A | !∀x̄.(G ⊸ A)
    G ::= 1 | ⊤ | A | G1 ⊗ G2 | G1 & G2 | !G | R ⊸ G
    R ::= S | !A | R1 ⊗ R2
    S ::= A | S1 & S2

P, C, G, R, and S stand for "program", "clause", "goal", "resource", and "selective resource", respectively. A program is a (multiplicative) conjunction of clauses. A clause is either a fact (!∀x̄.A) or a rule (!∀x̄.(G ⊸ A)). The exponential symbol (!) in a clause means each clause can be used an arbitrary number of times. A goal is either a logical constant (1 or ⊤), an atomic formula (A), a conjunction of subgoals (G1 ⊗ G2 or G1 & G2), a compound formula with the exponential symbol (!G), or a compound formula with linear implication (R ⊸ G), where R is a resource formula. A resource formula, in general, has the form R1 ⊗ R2 ⊗ ··· ⊗ Rn (modulo associativity of ⊗), where each Ri is either A1 & A2 & ··· & Am (modulo associativity of &) or !A. We call the resources A and !A primitive resources.

We use the following notation to write LLP programs corresponding to the above definition of formulas.

    P ::= C | C P
    C ::= A. | A :- G.
    G ::= 1 | top | A | G1,G2 | G1&G2 | !G | R -<> G
    R ::= S | !A | R1,R2
    S ::= A | S1&S2
The order of operator precedence is ":-", "-<>", "&", ",", "!", from wider to narrower. LLP is based on intuitionistic linear logic and is a subset of Lolli [8]; that is, it has fewer features. The following formulas can be used in Lolli but not in LLP:

- resources including linear implications (G ⊸ A);
- goals and resources including universal quantifiers (∀x.G and ∀x.R);
- compound formulas (& and ⊸) in clause heads.

Our future goal is to cover them.

3. Resource Programming

The fragment of LLP is small, but it is a superset of Prolog, and it enables various kinds of "resource programming":

- An atomic goal formula A means resource consumption and program invocation. All possibilities are examined by backtracking.
- A goal formula R ⊸ G adds the resource R and then executes the goal G. R should be consumed up in G.
- A goal formula G1 ⊗ G2 is similar to the conjunctive goal G1,G2 of Prolog. Resources consumed in G1 cannot be used in G2.
- A goal formula G1 & G2 is also similar to the conjunctive goal G1,G2 of Prolog. However, resources are copied before execution, and the same resources should be consumed in G1 and G2.
- A goal formula !G is just like G, but only !'ed resources (that is, resources of the form !A) can be consumed during the execution of G.
- The goal formula ⊤ means that some resources may remain unconsumed.
- The goal formula 1 is similar to 'true' of Prolog.
- A resource formula R1 ⊗ R2 means both R1 and R2 are resources.
- A resource formula A1 & ··· & An means exactly one of the Ai's can be selected as a resource.
- A resource formula !A means A is an infinite resource (that is, it can be consumed an arbitrary number of times, including 0 times).

The following program (list reverse) uses the resource formula result(Zs) as a "slot" to return the result from the deepest recursive call of rev. For example, for the goal reverse([1,2,3],Zs), the slot result(Zs) is added as a resource and the subgoal rev([1,2,3],[]) is called. At the third recursive call, rev([],[3,2,1]), the resource result(Zs) is consumed and Zs is unified with [3,2,1].

    reverse(Xs,Zs) :- result(Zs) -<> rev(Xs,[]).
    rev([],Zs) :- result(Zs).
    rev([X|Xs],Zs) :- rev(Xs,[X|Zs]).

The next program solves the N-queens problem. In addition to using the slots n(N) and result(Q), this program exploits the key benefit of resource programming: a consumed resource cannot be used again. The program maps each column, each right-up diagonal, and each right-down diagonal to c, u, and d, respectively.
Attack checking is done automatically by consuming c(j), u(i+j), and d(i−j) when placing a queen at (i, j) (see Figure 1).

Figure 1: Resources in the 8-queen program (a chess board with rows I and columns J; columns map to c(J), right-up diagonals to u(U) with U = I+J ranging over 2..16, and right-down diagonals to d(D) with D = I−J ranging over −7..7).

    queen(N,Q) :- n(N),result(Q) -<> place(N).

    place(1) :- c(1),u(2),d(0) -<> n(N),solve(N,[]).
    place(I) :- I > 1, I1 is I-1,
        U1 is 2*I, U2 is 2*I-1, D1 is I-1, D2 is 1-I,
        (c(I),u(U1),u(U2),d(D1),d(D2) -<> place(I1)).

    solve(0,Q) :- result(Q), top.
    solve(I,Q) :- I > 0,
        c(J), U is I+J, u(U), D is I-J, d(D),
        I1 is I-1, solve(I1,[J|Q]).

For example, at the execution of the goal queen(8,Q), the place predicate adds the resources c(1),...,c(8), u(2),...,u(16), d(-7),...,d(7), and then solve(8,[]) is called. The solve predicate finds a solution by consuming c(j), u(i+j), and d(i−j) for each row i = 1..8. After placing 8 queens, the result is returned through the slot result(Q), and the remaining resources are erased by the top predicate.

Of course, it is possible to represent resources by a list and use the following
output p⊗1⊗r is reconstructed by the pickR predicate.This slows down the execution speed.•Multiple resources are kept in memory.For example,in the execution of G1&G2,the input resource I should be kept until the execution of G2.This wastes memory.To improve these points,we propose the following ideas.•Use of consumption-level which is assigned to each primitive resource in the re-source.When a primitive resource is consumed,its consumption-level is changed6rather than it is replaced with1.•Use of single resource table.Only one table is maintained during the execution.Before revising the IO-model based on this idea,we introduce some definitions. We assign a positive integer(called a consumption-level)for each primitive resource in R-formula.We use P( )to denote a primitive resource with consumption-level . Informally speaking,under the specified level L,a primitive resource is consumable if it is an atomic resource whose level is equal to L,or it is a!’ed resource.In fact,the consumption-level values for!’ed primitive resources are not neces-sary,and also those for atomic primitive resources in the same selective resource arealways the same(that is, 1=···= n for any A( 1)1&···&A( n)n),but we assign thelevel value for each primitive resource to describe our abstract machine architecture precisely.Now,we need to revise the predicates subcontext,pickR,and thinable for R-formulas with level values.The predicates subcontext and pickR have two addi-tional arguments L and U where L indicates the level value for consumable primitive resources(its initial value is1),and U is the level value to be set after the consumption (its initial value is maxint,the maximum integer value).The predicate thinable has one additional argument L.We also need two new relations change and put to represent changing levels and putting levels respectively.•subcontext L,U(O,I)holds iff–I=!A( )and O=I,or–I=A( )1&···&A( )nand O=I,or–I=A(L)1&···&A(L)n and O=A(U)1&···&A(U)n,or–I=I1⊗I2,O=O1⊗O2,subcontext L,U(O1,I1),and subcontext L,U(O2,I2).•pickR L,U(I,O,A)holds iff–I=!A( )and O=I,or–I=A(L)1&···&A(L)n ,A=A i for some i,and O=A(U)1&···&A(U)n,or–I=I1⊗I2,pickR L,U(I1,O1,A),and O=O1⊗I2,or –I=I1⊗I2,pickR L,U(I2,O2,A),and O=I1⊗O2.•thinable L(O)holds iff–O=!A( ),or–O=A( )1&···&A( )nand L= ,or–O=O1⊗O2,thinable L(O1),and thinable L(O2).•change L1,L2(I,O)holds iff–I=!A( )and O=I,or7P−→L,U I{1}I subcontext L,U(O,I) P−→L,U I{ }O!A∈PP−→L,U I{A}I!(G A)∈P P−→L,U I{G}OP−→L,U I{A}OpickR L,U(I,O,A)P−→L,U I{A}OP−→L,U I{G1}M P−→L,U M{G2}OP−→L,U I{G1⊗G2}OP−→L,U−1I{G1}M change U−1,L+1(M,N)P−→L+1,U N{G2}O thinable L+1(O)P−→L,U I{G1&G2}OP−→L+1,U I{G}O P−→L,U I{!G}O put L(R,I )P−→L,U I ⊗I{G}O ⊗O thinable L(O )P−→L,U I{R G}OFigure3:IO-model with consumption-level for the propositional LLP–I=A(L1)1&···&A(L1)nand O=A(L2)1&···&A(L2)n,or–I=I1⊗I2,O=O1⊗O2,change L,U(I1,O1),and change L,U(I2,O2).•put L(R,O)holds iff–R=!A and O=!A(L),or–R=A1&···&A n and O=A(L)1&···&A(L)n,or–R=R1⊗R2,O=O1⊗O2,put L(R1,O1),and put L(R2,O2).Figure3provides a specification of the IO-model with consumption-level values.In the sequent P−→L,U I{G}O,P is a set of clauses,I and O are R-formulas with consumption-level,G is a goal,and L and U are positive integers.When we use the sequent,we also assume U is sufficiently larger than L(that is,U−L is greater than the number of nesting calls of G1&G2and!G),all level values in I and O are in the ranges1..L or U..maxint,and subcontext L,U(O,I)holds.A computation of a goal G under program P is considered as a bottom-up andleft-to-right proof 
search of P−→1,maxint 1(1){G}1(1).In the next section,wedescribe an extended WAM based on this computation model.5.LLP Abstract MachineIn this section,we describe LLPAM(LLP Abstract Machine)which is an extended8WAM for LLP language.5.1.RegistersLLPAM has three new registers R,L,and U in addition to P(program pointer), CP(continuation program pointer),S(structure pointer),H(top of heap),HB(heap backtrack pointer),E(last environment),B(last choice point),B0(cut pointer),and TR(top of trail).•R:top of resource tableR is an index indicating the top of resource table.It increases when resource is added,and decreases by backtracking.Its initial value is0.•L:consumable levelL is an integer indicating the consumable level of resources.Its initial value is 1.•U:level of consumed resourceU is an integer for the level of consumed resource.Its initial value is maxint.5.2.Resource TableLLPAM has one new data area called RES(resource table)in addition to CODE, HEAP,STACK,TRAIL,and PDL.RES is an array of the following record structure(we use non-negative integers for indices).record{consumption-level}level:integer;eflag:Boolean;{exponentialflag}head:term;{resource structure}{related-resources}rellist:termend;The register R points the top of RES.It grows when resource is added by ,and shrinks by backtracking.Each entry of RES corresponds to a primitive resource A or!A.Thefield head contains a pointer to the resource structure A on HEAP(“!”operator is removed for non-atomic resource!A).Thefield eflag is set to be true if the resource is!A.The field level indicates the consumption-level of the primitive resource.Thefield rellist is a list of the relative positions of the related resources.Re-lated resources are primitive resources combined with&operators.For example,in A1&A2&A3,rellist of each A i is[1,2],[−1,1],and[−2,−1]respectively.The9level eflag head rellistRES[0]11p(X)[]RES[1]10q(Y)[1]RES[2]10p(Z)[−1]Figure4:Resource table:!p(X)⊗(q(Y)&p(Z))related resource list is a constant term and can be determined at compilation time because relative positions are used.Figure4shows the contents of RES after adding!p(X)⊗(q(Y)&p(Z)).5.3.Code for R GResource R is added by a goal R G.To speed up the resource look-up,R is decomposed into primitive resources at compilation time.The outline of the execution of R G will be as follows:(1)Remember the current value of the register R in a permanent variable Y i.(2)Add all primitive resource in R(suppose there are n primitive resources)tothe resource table(R is increased by n)and add their entries to the hash table (predicate symbol and thefirst argument are used as the key).(3)Execute G.(4)Check there are no remaining resources in R(resources of positions from Y i toY i+n−1are checked).(5)Change the consumption-level of each primitive resource of R to a negativevalue so that it should not be used later.The following instructions are used for R G.•begin imp YiRemembers R value to a permanent variable Yi.•add res f/n,Ai,AjAdds an atomic resource Ai of f/n to the resource table and the hash table.Aj is the value for rellist.The value of register L is stored in the levelfield.•add exp res f/n,AiAdds an exponential resource Ai of f/n to the resource table and the hash table.The rellist is set to[](empty list).The value of register L is stored in the levelfield.•end imp Yi,nFails if there is a remaining resource in positions from Yi to Yi+n−1.Otherwise,10negates their consumption-levels(the negation will be canceled by backtrack-ing).The following is a code for!p(1)⊗(p(2)&p(3)) 
G.begin imp Y1put str p/1,A1%put p(1)to A1set int1add exp res p/1,A1%add!p(1)put str p/1,A1%put p(2)to A1set int2%put[1]to A2put list A2set int1%add p(2)set con[]add res p/1,A1,A2put str p/1,A1%put p(3)to A1set int3put list A2%put[−1]to A2set int-1set con[]add res p/1,A1,A2%add p(3)Code for Gend imp Y1,35.4.Code for G1&G2In IO-model,G1and G2of G1&G2take the same input resource and should produce the same output resource.In other words,G2can consume only what is consumed in G1and should consume all of them.The similar idea can be found in [4].To implement this idea,we assign an integer called consumption-level for each primitive resource.We also introduce two new registers L and U.The initial values of L and U are1 and maxint respectively.L indicates the consumable level.U indicates the level to be assigned after consumption.The outline of the execution of G1&G2will be as follows:(1)Decrement U so that we can know which are consumed in G1.(2)Execute G1.(3)Change the level of consumed resource in G1to L+1,that is,all resources oflevel U is changed to level L+1.(4)Increment L and U.(5)Execute G2.11goal p⊗((q&r)&(s⊗(t&u)))L112223↓↓ ↓ ↓↓ ↓U m m−2m−1m m−1m consumption check23,2Figure5:Level Transition in p⊗((q&r)&(s⊗(t&u)))(6)Check there are no more remaining resource whose consumption-level is L.(7)Decrement L.Figure5shows the transitions of level values in the goal p⊗((q&r)&(s⊗(t&u))). Where m means the maxint,down arrow means the consumption,up-right arrow means the rewriting of the consumption-level in(3),and values in consumption check field show the consumption-levels to be checked in(6).The following instructions are used for G1&G2.•begin withDecrements U.•mid withChanges the level of U to L+1and increments L and U.•end withFails if there is a remaining resource of the level L.Otherwise,decrements L.The following is a code for p&(q⊗(r&s)).begin withcall p/0mid withcall q/0begin withcall r/0mid withcall s/0end withend with5.5.Code for!GIn the execution of!G,only exponential resources can be consumed.Again,the consumption-level can be used to implement it.The outline of the execution of!G will be as follows:12(1)Increment L so that only exponential resources can be consumed during G.(2)Execute G.(3)Decrement L.The following instructions are used for!G.•begin bangIncrements L.•end bangDecrements L.The following is a code for!(p⊗q).begin bangcall p/0call q/0end bang5.6.Code forThe execution of arises non-determinism because subcontext has a lot of possibilities.Therefore,we define the operational semantics of as follows,although it is not complete.•topConsumes all consumable resources.Consuming some consumable resources is the correct way.The treatment in[4] or[11,7]should be considered for the complete handling of .5.7.Code for Atomic GoalsGoal A means resource consumption and predicate invocation(all possibilities are examined by backtracking).The outline of the execution of A will be as follows: (1)Extract a list of possibly unifiable primitive resources from the hash table(pred-icate symbol and thefirst argument are used as the key).(2)For each primitive resource P in the list,try the following.After the failure ofall trials,invoke the predicate A.(3)Backtrack if P is not consumable.(4)Unify A with P.Backtrack if the unification fails.(5)Update the consumption level of P and its related resources to U if P is anatomic resource.13The consumability of a primitive resource P of level M can be checked as follows:•Atomic resource P is consumable iffM=L.•Non-atomic resource P(that is,P=!A)is 
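The level discipline of begin_with/mid_with/end_with can be mimicked in a few lines of Python. The sketch below is purely illustrative (names invented; unification, hashing, and backtracking omitted): consuming an atomic resource rewrites its level to U, and with_goal replays steps (1)–(7) for G1 & G2:

    MAXINT = 10**9

    class ResourceTable:
        def __init__(self):
            self.entries = []              # each entry: [level, eflag, atom]
            self.L, self.U = 1, MAXINT

        def add(self, atom, exponential=False):
            self.entries.append([self.L, exponential, atom])

        def consume(self, atom):
            for e in self.entries:
                level, eflag, a = e
                ok = (eflag and 1 <= level <= self.L) or \
                     (not eflag and level == self.L)
                if ok and a == atom:
                    if not eflag:
                        e[0] = self.U      # consume by rewriting the level
                    return True
            return False                   # nothing consumable: would backtrack

        def with_goal(self, g1, g2):       # executes G1 & G2
            self.U -= 1                    # begin_with
            g1(self)
            for e in self.entries:         # mid_with: relabel U -> L+1
                if e[0] == self.U:
                    e[0] = self.L + 1
            self.L += 1
            self.U += 1
            g2(self)
            # end_with: no resource may remain at the current level L
            assert all(e[0] != self.L for e in self.entries)
            self.L -= 1

    t = ResourceTable()
    t.add("p")
    t.add("q")
    # Goal p & p under resources p, q: both branches consume the same p,
    # and q is left over for the enclosing context.
    t.with_goal(lambda s: s.consume("p"), lambda s: s.consume("p"))
    print(t.entries)   # p ends at level U (consumed), q stays at level 1

If the second branch fails to consume what the first one consumed, the end_with check fires, which is exactly how the machine enforces that & shares one physical table instead of copying resources.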
consumable iff1≤M≤L.We use a special built-in predicate res/2for the above procedure to make the implementation easy.The res/2might be invoked from the call p/n instruction(also execute p/n) when there is a resource for p/n.Thefirst argument A1is set to the structure A,and the second argument A2is set to the list of indices of resources for p/n.The res/2scans the list A2tofind a consumable primitive resource which is unifiable with A1.If there are no such resource,it calls A1.The following is the program of res/2.res/2:try me else L3L1:get consumable X3,A2update cpf A2update cpf BP L2unify resource X3,A1proceedL2:retry L1L3:trust meexec program indirect A1The instructions used in res/2are as follows.•get consumable Ai,AjFinds an index value of consumable resource from the list Aj.Fails if there are no consumable resources.Ai is set to the index value.The value of Aj is also updated.•unify resource Ai,AjUnifies the resource RES[Ai]with Aj.If it succeeds,consume the resource RES[Ai].•update cpf AiUpdates the value of Ai in the current choice point frame.•update cpf BP LUpdates the value of BP in the current choice point frame.5.8.BacktrackingTo recover the old state on backtracking,we need to do the following.•Register values R,L,and U are stored on each choice point frame.1450004000300020001000910111213x:N,y:time(sec.)LLP:solid line,Prolog:dashed lineFigure6:Execution time of N-queen(all solutions)•Resource addition should be trailed.•Each level change should be trailed.6.Performance EvaluationCurrently we are developing a compiler system.A prototype compiler system has been developed,in which the compiler is written in SICStus Prolog,and the LLPAM emulator is written in C.We use N-queen program(all solutions)as a benchmark.A Prolog program representing resources by a list is used for comparison(see Figure7).This is almost equivalent with a partially evaluated program of Lolli interpreter described in[8]. Only the subcontext predicate is modified to improve the speed.The prototype compiler generates25times faster code for8-queen compared with the Prolog program(compiled to the WAM compact code by SICStus Prolog version 2.1)which spends about90%of the time for resource management(in bc/4and pickR/3).Figure6and Table1show the CPU times on LLP compiler system and Prolog compiler system.15queen(N,Q,R0,R):-place(N,[n(N),result(Q)|R0],[1,1|R]).place(1,R0,R):-proveA([c(1),u(2),d(0)|R0],R1,n(N)),solve(N,[],R1,[1,1,1|R]).place(I,R0,R):-I>1,I1is I-1,U1is2*I,U2is2*I-1,D1is I-1,D2is1-I,place(I1,[c(I),u(U1),u(U2),d(D1),d(D2)|R0],[1,1,1,1,1|R]). 
solve(0,Q,R0,R):-proveA(R0,R1,result(Q)),subcontext(R,R1).solve(I,Q,R0,R):-I>0,proveA(R0,R1,c(J)),U is I+J,proveA(R1,R2,u(U)),D is I-J,proveA(R2,R3,d(D)),I1is I-1,solve(I1,[J|Q],R3,R).proveA(I,O,A):-pickR(I,M,R),bc(M,O,A,R).bc(I,I,A,A).bc(I,O,A,(R1&R2)):-bc(I,O,A,R1);bc(I,O,A,R2).pickR([!R|I],[!R|I],R).pickR([R|I],[1|I],R):-\+R=(!T).pickR([S|I],[S|O],R):-pickR(I,O,R).subcontext([1|O],[R|I]):-!,subcontext(O,I).subcontext([S|O],[S|I]):-!,subcontext(O,I).subcontext([],[]).Figure7:Prolog program used for the comparison16LLP PrologN seconds(ratio)seconds(ratio)8 1.5(1.0)40.9(26.6)9 6.4(1.0)195.6(30.4)1028.4(1.0)974.7(34.3)11138.0(1.0)5421.8(39.3)12741.2(1.0)134166.4(1.0)Table1:Execution time of N-queen(all solutions)7.Conclusion and Future WorksThe prototype compiler generates25times faster code compared with a Prolog program which represents resources as a list.This means the resource management method of LLPAM is25times faster compared with the method representing resource as a compound term(e.g.a list).We are planning the following enhancements.•Handling by using -flags described in[4].•Allowing G A as a resource formula by adding bodyfield in RES table.The field value will be a pointer to a compiled code(a kind of closure).•Putting R,L,and U in each environment frame so that we can do the last-call-optimization for R1 G1,G1&G2,and!G.AcknowledgmentsWe would like to thank all students included in this project.Eiji Sugiyama, Mitsunori Banbara,and Kyoungsun Kang helped the design of the language and the abstract machine.Fumiko Anno,Yuichi Ikeda,Tomoaki Kume,and Tadanori Wakamatsu helped the implementation.References[1]Hassan A¨ıt-Kaci.Warren’s Abstract Machine.The MIT Press,1991.(http://www.isg.sfu.ca/~hak/documents/wam.html).[2]Jean-Marc Andreoli and Remo Pareschi.Linear objects:Logical processes withbuilt-in inheritance.New Generation Computing,9:445–473,1991.17[3]Iliano Cervesato,Joshua S.Hodas,and Frank Pfenning.Efficient resource man-agement for linear logic proof search.In R.Dyckhoff,H.Herre,and P.Schroeder-Heister,editors,Proceedings of the Fifth International Workshop on Extensions of Logic Programming—ELP’96,pages67–81,Leipzig,Germany,28–30March 1996.Springer-Verlag LNAI1050.[4]Iliano Cervesato,Joshua S.Hodas,and Frank Pfenning.Efficient resource man-agement for linear logic proof search.In Proceedings of the1996International Workshop on Extensions of Logic Programming,pages67–81.Springer-Verlag LNAI1050,March1996.[5]Jean-Yves Girard.Linear logic.Theoretical Computer Science,50:1–102,1987.[6]James Harland and David Pym.A uniform proof-theoretic investigation of linearlogic programming.Journal of Logic and Computation,4(2):175–207,April1994.[7]James Harland and Michael Winikoff.Deterministic resource management forthe linear logic programming language Lygon.Technical Report TR94/23,Mel-bourne University,Department of Computer Science,1994.[8]Joshua S.Hodas and Dale Miller.Logic programming in a fragment of intuition-istic linear rmation and Computation,110(2):327–365,1994.Extended abstract in the Proceedings of the Sixth Annual Symposium on Logic in Com-puter Science,Amsterdam,July15–18,1991.[9]Naoki Kobayashi and Akinori Yonezawa.ACL—A concurrent linear logic pro-gramming paradigm.In ler,editor,Proceedings of the1993International Logic Programming Symposium,pages279–294,Vancouver,Canada,October 1993.MIT Press.[10]Dale Miller.A multiple-conclusion meta-logic.In S.Abramsky,editor,NinthAnnual Symposium on Logic in Computer Science,pages272–281,Paris,France, July1994.IEEE Computer Society 
Press.[11]D.Pym and J.Harland.A uniform proof-theoretic investigation of linear logicprogramming.Journal of Logic and Computation,4(2):175–207,April1994. [12]Naoyuki Tamura and Yuichi Ikeda.Resource management method for a compilersystem of a linear logic programming language.In IPSJ96-PRO-7,pages25–30, May1996.(in Japanese).[13]Naoyuki Tamura and Yukio Kaneda.Resource management method for a com-piler system of a linear logic programming language.In GMD-Studien Nr.296, Proceedings of the Poster Session at JICSLP’96,pages87–98.German National Research Center for Information Technology,Sepember1996.18。


ŁΠ Logic with Fixed Point Operator

Luca Spada
Dipartimento di Matematica
Università degli Studi di Siena
e-mail: spada@unisi.it
http://homelinux.capitano.unisi.it/~spadal

Contents of the talk

1. Preliminaries
2. Fixed Points
3. µŁΠ Logic

1. Preliminaries

History

- Gödel, Łukasiewicz, and Product Logic are at the basis of the t-norm based logics.
- Hájek found the common intersection of these three logics: Basic Logic ([Haj98]).
- Two questions:
  – Which one should be used to grasp the expressiveness of Fuzzy Logic?
  – What about the union of those logics?
- Esteva, Godo, and Montagna introduced ŁΠ Logic ([EG99], [Mon00], [EGM01]).
- It is a powerful logic that still has good algebraic properties.

Figure 1: Diagram of principal t-norm based logics

ŁΠ Algebras

Definition 1.1. An ŁΠ algebra is a structure L = ⟨L, ⊕, ¬Ł, ⇒Π, ∗Π, 0L, 1L⟩ where:

1. ⟨L, ⊕, ¬Ł, 0L⟩ is an MV algebra;
2. ⟨L, ⇒Π, ∗Π, 0L, 1L⟩ is a Π algebra;
3. x ∗Π (y ⊖ z) = (x ∗Π y) ⊖ (x ∗Π z);
4. ∆(x ⇒Ł y) ⇒Ł (x ⇒Π y) = 1L.

This definition differs from the original one in [EGM01] and was introduced in [Cin04]. Interestingly enough, the redundancy of axiom (4) is still open.

Results

Theorem 1.2 (Algebraic Completeness). ŁΠ ⊢ ϕ if, and only if, ϕ∗ = 1 holds in every ŁΠ algebra.

Theorem 1.3 (Standard Completeness). ŁΠ ⊢ ϕ if, and only if, ϕ∗ = 1 holds in the ŁΠ algebra on the real interval [0,1].

Theorem 1.4. The category of linearly ordered ŁΠ algebras and the category of linearly ordered fields are equivalent.

Theorem 1.5. The categories of ŁΠ algebras and f-semifields are equivalent.

Properties of ŁΠ Logic

- Obviously, Łukasiewicz and Product Logic are faithfully interpretable in ŁΠ.
- Gödel Logic is faithfully interpretable in ŁΠ.
- Takeuti and Titani's Logic is faithfully interpretable in ŁΠ½.
- Pavelka's Rational Logic is faithfully interpretable in ŁΠ½.
- Pavelka's Rational Product Logic is faithfully interpretable in ŁΠ½.

Why stop here??

2. Fixed Points

History

Fixed points are present in many fields of Mathematics, Logic, and Computer Science.

In Logic:
– extensions of First Order Logic (LFP, PFP, IFP, etc.);
– the modal µ-calculus.

These are based on operators defined on first-order structures, and their existence rests on Tarski's fixed point theorem. In Propositional Logic there seems to be no simple solution... but in Many-Valued Propositional Logic one can see formulas as functions! In particular, in our case they are continuous functions from [0,1] to itself.

Brouwer Theorem

Theorem 2.1 (Brouwer 1910). Every continuous function from a convex compact set to itself has a fixed point.

3. µŁΠ Logic

Some Precautions

- To stick with continuous functions, we will not allow fixed points of formulas containing the symbol →Π.
- For the sake of readability, we will use a fresh variable symbol x to denote the variable falling under the scope of the fixed point operator.

µŁΠ Logic

Definition 3.1. The Fixed Point ŁΠ Logic (µŁΠ logic for short) has the following theory:

1. all the axioms and rules of ŁΠ Logic;
2. µ.x(T(x)) ↔ T(µ.x(T(x)));
3. (⋀_{i≤n} ∆(p_i ↔ q_i)) → (µ.x(T(p_1,...,p_n)) ↔ µ.x(T(q_1,...,q_n)));
4. the rule: from T(p) ↔ p, infer µ.x(T(x)) → p.

µŁΠ Algebras

Definition 3.2. A µŁΠ algebra is an ŁΠ algebra endowed with an operator µ that satisfies the following conditions for any term T(x) not containing the symbol →Π:

1. µ.x(T(x)) = T(µ.x(T(x)));
2. if T(t) = t, then µ.x(T(x)) ≤ t;
3. (⋀_{i≤n} ∆(p_i ⇔ q_i)) ≤ (µ.x(T(p_1,...,p_n)) ⇔ µ.x(T(q_1,...,q_n))).

Proposition 3.3. µŁΠ algebras form a variety.

Completeness

Theorem 3.4. µŁΠ Logic is algebraically complete; i.e., µŁΠ ⊢ ϕ if, and only if, ϕ∗ = 1 holds in every µŁΠ algebra.

Lemma 3.5. Any µŁΠ algebra is isomorphic to a subdirect product of linearly ordered µŁΠ algebras.

Representation

Theorem 3.6. Any linearly ordered µŁΠ algebra is isomorphic to the interval algebra of some real closed field; conversely, every linearly ordered real closed field contains a definable µŁΠ algebra.

Sketch. For any c ∈ R and any polynomial p(x) such that p(c) = 0, we can translate p(x) into a term p∗(x) of the algebra. Now, considering the term p′(x) = p∗(x) ⊕ x, it is sufficient to take µ.x(p′(x)) as the wanted element.
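To see the fixed point operator at work on the standard algebra on [0,1], consider the →Π-free term T(x) = (x ∗Π x) ⊕ 0.2 (our example, not from the slides). Its least fixed point solves x = min(1, x² + 0.2), and simple iteration from the bottom of [0,1] reaches it:

    # Lukasiewicz sum and product conjunction on [0, 1]
    def luk_sum(a: float, b: float) -> float:
        return min(1.0, a + b)

    def T(x: float) -> float:
        return luk_sum(x * x, 0.2)   # (x *_P x) (+) 0.2

    x = 0.0                          # start at the bottom of [0, 1]
    for _ in range(60):
        x = T(x)
    print(x)   # ~0.2764, the smaller root of x = x^2 + 0.2, i.e. mu.x(T(x))

The iteration converges because T is monotone and continuous, which is the Brouwer/Tarski-style existence argument behind Definition 3.2; the larger root (~0.7236) is also a fixed point, but condition 2 of the definition singles out the least one.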

Application of Linear Programming Models and Sensitivity Analysis

Chapter 1: The Linear Programming Problem

1. Introduction to linear programming and its development

Linear Programming (LP) is one of the earliest-studied, fastest-developing, most widely applied, and most mature branches of operations research. It is a mathematical method that supports scientific management: the theory and methods for studying the extrema of a linear objective function under linear constraints. It is an important branch of operations research, widely applied in military operations, economic analysis, business management, and engineering, providing a scientific basis for optimal decisions on the rational use of limited human, material, and financial resources.

Linear programming and its general-purpose solution method, the simplex method, were proposed by the American G.B. Dantzig in 1947 in the course of research on air force military planning. The French mathematicians Fourier and de la Vallée-Poussin independently proposed the idea of linear programming in 1832 and 1911, respectively, but it attracted no attention. In 1939 the Soviet mathematician Kantorovich posed linear programming problems in his book "Mathematical Methods of Organizing and Planning Production", which also went unnoticed. In 1947 the American mathematician Dantzig proposed the general mathematical model of linear programming and the general method for solving linear programming problems, the simplex method, laying the foundation of the discipline. In the same year the American mathematician von Neumann proposed duality theory, opening many new research areas in linear programming and expanding its range of applications and problem-solving power. In 1951 the American economist Koopmans applied linear programming to economics, for which he shared the 1975 Nobel Prize in Economics with Kantorovich.

After the 1950s, a great deal of theoretical research on linear programming was carried out and a large number of new algorithms emerged. For example, in 1954 Lemke proposed the dual simplex method; in 1954 Gass, Saaty, and others solved the sensitivity analysis and parametric programming problems of linear programming; in 1956 Tucker proposed the complementary slackness theorem; and in 1960 Dantzig and Wolfe proposed the decomposition algorithm. The research results of linear programming also directly stimulated algorithmic research on other mathematical programming problems, including integer programming, stochastic programming, and nonlinear programming. With the development of digital electronic computers, many linear programming software packages appeared, such as MPSX, OPHEIE, and UMPIRE, which can conveniently solve linear programming problems with thousands of variables. In 1979 the Soviet mathematician Khachiyan proposed the ellipsoid algorithm for solving linear programming problems and proved that it is a polynomial-time algorithm.
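As a concrete taste of what such software does, the sketch below (our example, requiring a recent SciPy) solves a classic textbook LP with the HiGHS backend of scipy.optimize.linprog and reads off the dual values, the quantities on which sensitivity analysis is built:

    from scipy.optimize import linprog

    # Maximize 3x + 5y subject to x <= 4, 2y <= 12, 3x + 2y <= 18,
    # x, y >= 0. linprog minimizes, so the objective is negated.
    res = linprog(c=[-3, -5],
                  A_ub=[[1, 0], [0, 2], [3, 2]],
                  b_ub=[4, 12, 18],
                  bounds=[(0, None), (0, None)],
                  method="highs")

    print(res.x)       # optimal plan: x = 2, y = 6
    print(-res.fun)    # optimal value: 36
    # Dual values of the inequality constraints (reported for the
    # minimization form, so ~[0, -1.5, -1]; negate them to read the
    # shadow prices of the original maximization problem).
    print(res.ineqlin.marginals)

The shadow prices say how much the optimal value would change per unit change in each right-hand side, which is precisely the starting point of the sensitivity analysis that Gass and Saaty formalized.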

Linear Systems and Control


Linear systems and control are essential concepts in engineering, particularly in electrical, mechanical, and aerospace engineering. They play a crucial role in designing and analyzing systems to ensure stability, performance, and robustness. Linear systems are characterized by linearity: they obey the principles of superposition and homogeneity, which allows engineers to use mathematical tools such as Laplace transforms and transfer functions to model and analyze their behavior.

Control theory, on the other hand, deals with the design of systems that manipulate the behavior of a dynamic system to achieve a desired outcome. This can involve stabilizing unstable systems, tracking reference signals, or rejecting disturbances. Control systems fall into two main categories: open-loop and closed-loop. Open-loop systems do not take feedback into account, while closed-loop systems use feedback to adjust the system's behavior based on its output. This feedback mechanism is crucial for ensuring that the system performs as desired in the presence of uncertainties and disturbances.

One of the key challenges in designing controllers for linear systems is ensuring stability. A system is said to be stable if its output remains bounded for any bounded input; an unstable system exhibits erratic behavior or becomes uncontrollable. Engineers use tools such as the Routh-Hurwitz criterion and the Nyquist stability criterion to analyze the stability of linear systems and to design controllers that guarantee stability under various operating conditions.

In addition to stability, control systems must meet performance specifications such as speed of response, accuracy, and robustness. These specifications are often conflicting: increasing the speed of response may reduce stability margins, while improving accuracy may require more complex control algorithms. Engineers must balance such trade-offs carefully.

Robustness is especially important in the presence of uncertainties and disturbances. Robust control systems maintain stability and performance under varying operating conditions and uncertainties in the system model; techniques such as H-infinity control and robust control design ensure that the system remains stable and performs satisfactorily over a wide range of conditions.

Overall, linear systems and control play a crucial role in the design and analysis of engineering systems. By understanding the principles of linearity, stability, performance, and robustness, engineers can develop control systems that meet the desired specifications and ensure the reliable operation of complex systems.
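As a small illustration of the eigenvalue view of stability (our sketch; the gains are arbitrary choices, not tuned): a double integrator is not asymptotically stable in open loop, but simple state feedback moves every closed-loop eigenvalue into the left half-plane:

    import numpy as np

    # Double integrator x'' = u in state-space form x' = Ax + Bu
    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])
    B = np.array([[0.0],
                  [1.0]])
    K = np.array([[2.0, 3.0]])        # feedback u = -Kx (assumed gains)

    open_loop = np.linalg.eigvals(A)
    closed_loop = np.linalg.eigvals(A - B @ K)

    print(open_loop)     # [0, 0]: marginal, not asymptotically stable
    print(closed_loop)   # [-1, -2]: all real parts negative, so stable

Checking that all eigenvalues (equivalently, all roots of the closed-loop characteristic polynomial) have negative real parts is the state-space counterpart of the Routh-Hurwitz test mentioned above.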


lolliCoP – A Linear Logic Implementation of a Lean Connection-Method Theorem Prover for First-Order Classical Logic

Joshua S. Hodas and Naoyuki Tamura
Department of Computer Science, Harvey Mudd College, Claremont, CA 91711, USA
hodas@, tamura@kobe-u.ac.jp

Abstract. When Prolog programs that manipulate lists to manage a collection of resources are rewritten to take advantage of the linear logic resource management provided by the logic programming language Lolli, they can obtain dramatic speedup. Thus far this has been demonstrated only for "toy" applications, such as n-queens. In this paper we present such a reimplementation of the lean connection-calculus prover leanCoP and obtain a theorem prover for first-order classical logic which rivals or outperforms state-of-the-art provers on a significant body of problems.

1 Introduction

The development of logic programming languages based on intuitionistic [11] and linear logic [6] has been predicated on two principal assumptions. The first, and the one most argued in public, has been that, given the increased expressivity, programs written in these languages are more perspicuous, more natural, and easier to reason about formally. The second assumption, which the designers have largely kept to themselves, is that by moving the handling of various program features into the logic, and hence from the term level to the formula level, we would expose them to the compiler, and, thus, to optimization. In the end, we believed, this would yield programs that executed more efficiently than the equivalent program written in more traditional logic programming languages. Until now, this view has been downplayed as most of these new languages have thus far been implemented only in relatively inefficient, interpreted systems.

With the recent development of compilers for languages such as λ-Prolog [13] and Lolli [7], however, we are beginning to see this belief justified. In the case of Lolli, we are focused on logic programs which have used a term-level list as a sort of bag from which items are selected according to some rules. In earlier work we showed that when such code is rewritten in Lolli, allowing the elements in the list to instead be stored in the proof context – with the underlying rules of linear logic managing their consumption – substantial speedups can occur. To date, however, that speedup has been demonstrated only on the execution of simple, "toy" applications, such as an n-queens problem solver [7].

Now we have turned our attention to a more sophisticated application: theorem proving. We have reimplemented the leanCoP connection-calculus theorem prover of Otten and Bibel [14] in Lolli. This "lean" theorem prover has been shown to have remarkably good performance relative to state-of-the-art systems, particularly considering that it is implemented in just a half-page of Prolog code. The reimplemented prover, which we call lolliCoP, is of comparable size, and, when compiled under LLP (the reference Lolli compiler [7]), provides a speedup of 40% over leanCoP. On many of the hardest problems that both can solve, it is roughly the same speed as the Otter theorem prover [8]. (Both leanCoP and lolliCoP solve a number of problems that Otter cannot. Conversely, Otter solves many problems that they cannot. On simpler problems that both solve, Otter is generally much faster than leanCoP and lolliCoP.)
While this is a substantial improvement, it is not the full story. LLP is a relatively naive, first-generation compiler and run-time system, whereas it is being compared to a program compiled in a far more mature and optimized Prolog compiler (SICStus Prolog 3.7.1). When we adjust for this difference, we find that lolliCoP is more than twice as fast as leanCoP, and solves (within a limited time allowance) more problems from the test library. Also, when the program is rewritten in Lolli, two simple improvements become obvious. When these changes are made to the program, performance improves by a further factor of three, and the number of problems solved expands even further.

1.1 Organization

The remainder of this paper is organized as follows: Section 2 gives a brief introduction to the connection calculus for first-order classical logic; Section 3 describes the leanCoP theorem prover; Section 4 gives a brief introduction to linear logic, Lolli, and the LLP compiler; Section 5 introduces lolliCoP; Section 6 presents the results and analysis of various performance tests and comparisons; and, Section 7 presents the two optimizations mentioned above.

2 Connection-Calculus Theorem Proving

The connection calculus [2] is a matrix proof procedure for clausal first-order classical logic. (Variations have been proposed for other logics, but this is its primary application.) The calculus, which uses a positive representation, proving matrices of clauses in disjunctive normal form, has been utilized in a number of theorem proving systems, including KoMeT [3], Setheo and E-Setheo [9, 12]. It features two principal rules, extension and reduction. The extension step, which corresponds roughly to backchaining, consists of matching the complement of a literal in the active goal clause with the head of some clause in the matrix. The body of that clause is then proved, as is the remainder of the original clause.
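To make the two rules concrete, consider a small example of our own (it does not appear in the paper): the matrix {{p}, {¬p, q}, {¬q}}, written in the list representation that leanCoP (Section 3) uses, with negative literals marked by the prefix operator -.

    % A hypothetical query against the prove/1 predicate of Section 3:
    ?- prove([[p], [-p,q], [-q]]).
    % The proof starts from the purely positive clause [p]. An extension
    % step on p selects the clause [-p,q], whose literal -p complements p;
    % its remaining body [q] is then proved with p added to the path.
    % A second extension on q selects [-q], whose body is empty, closing
    % that branch. The remainders of both goal clauses are empty as well,
    % so the query succeeds.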
The rules of the calculus are given in Figure 1. A sequent $\Gamma \mid \Pi \vdash L_1,\ldots,L_n$ states that the goal clause $\{L_1,\ldots,L_n\}$ is provable from the matrix $\Gamma$ along the path $\Pi$; a proof of $\Gamma$ begins with the sequent $\Gamma \mid \emptyset \vdash A_1,\ldots,A_n$ for some purely positive clause $\{A_1,\ldots,A_n\}$ of $\Gamma$.

\[ \frac{}{\Gamma \mid \Pi \vdash}\;(\mathit{extension}_0) \qquad\qquad \frac{\Gamma \mid \Pi \vdash L_1,\ldots,L_{i-1},L_{i+1},\ldots,L_n}{\Gamma \mid \Pi \vdash L_1,\ldots,L_n}\;(\mathit{reduction}) \]
(where $\overline{L_i}$ matches a literal in $\Pi$, $1 \le i \le n$)

\[ \frac{\Gamma \mid L_i,\Pi \vdash L^1_1,\ldots,L^1_m \qquad C,\Gamma \mid \Pi \vdash L_1,\ldots,L_{i-1},L_{i+1},\ldots,L_n}{C,\Gamma \mid \Pi \vdash L_1,\ldots,L_n}\;(\mathit{extension}_1) \]
(where $C = \{\overline{L_i},L^1_1,\ldots,L^1_m\}$, $1 \le i \le n$, and $m \ge 0$)

\[ \frac{C,\Gamma \mid L_i,\Pi \vdash L^1_1,\ldots,L^1_m \qquad C,\Gamma \mid \Pi \vdash L_1,\ldots,L_{i-1},L_{i+1},\ldots,L_n}{C,\Gamma \mid \Pi \vdash L_1,\ldots,L_n}\;(\mathit{extension}_2) \]
(where $C[x \mapsto t] = \{\overline{L_i},L^1_1,\ldots,L^1_m\}$ for some $t$, $1 \le i \le n$, and $m \ge 0$)

Fig. 1. The connection calculus.

3 The leanCoP Theorem Prover

Figure 2 gives the code of the leanCoP prover. Clauses are represented as lists of literals, the matrix as a list of clauses, atomic formulas as Prolog terms, and Prolog variables are used to represent object variables. This last fact causes some complications, discussed below.

    prove(Mat) :- prove(Mat,1).

    prove(Mat,PathLim) :-
        append(MatA,[Cla|MatB],Mat), \+ member(-_,Cla),
        append(MatA,MatB,Mat1),
        prove([!],[[-!|Cla]|Mat1],[],PathLim).
    prove(Mat,PathLim) :-
        \+ ground(Mat), PathLim1 is PathLim + 1, prove(Mat,PathLim1).

    prove([],_,_,_).
    prove([Lit|Cla],Mat,Path,PathLim) :-
        ( -NegLit = Lit ; -Lit = NegLit ) ->
          (  member_oc(NegLit,Path)
          ;  append(MatA,[Cla1|MatB],Mat), copy_term(Cla1,Cla2),
             append_oc(ClaA,[NegLit|ClaB],Cla2),
             append(ClaA,ClaB,Cla3),
             ( Cla1 == Cla2 -> append(MatB,MatA,Mat1)
             ; length(Path,K), K < PathLim,
               append(MatB,[Cla1|MatA],Mat1)
             ),
             prove(Cla3,Mat1,[Lit|Path],PathLim)
          ),
          prove(Cla,Mat,Path,PathLim).

Fig. 2. The leanCoP theorem prover of Otten and Bibel.

The first evident difference between the calculus and its implementation is that an extra value, an integer path-depth limit, is added to each of the Prolog predicates. It is used to implement iterative deepening based on the maximum allowed path length, which is necessary to insure completeness in the first-order case, due to Prolog's depth-first search strategy.

When prove/1 is called, it sets the initial path limit to 1 and calls prove/2, which in turn selects (without loss of generality) a purely positive start clause. The selection of the clause, Cla, is done using a trick of Prolog: since the predicate append(A,B,C) holds if the list C results from appending list B to list A, append(A,[D|B],C) (in which [D|B] is a list that has D as its first item, followed by the list B) will hold if D is an element of C and if, further, A is the list of items preceding it and B is the list of items following it. Thus Prolog can, in one predicate, select an element from an arbitrary position in a list and identify all the remaining elements in the list, which result from appending A and B.
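The idiom is easy to see in isolation (example ours):

    % Each solution picks one element D of the list and splits the rest
    % into the prefix A and the suffix B:
    ?- append(A, [D|B], [1,2,3]).
    %   A = [],    D = 1, B = [2,3] ;
    %   A = [1],   D = 2, B = [3]   ;
    %   A = [1,2], D = 3, B = []    .
    % The list with D deleted is then recovered by append(A, B, Rest).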
This technique is used to select literals from clauses and clauses from matrices throughout leanCoP. While it is an interesting trick, it relies on significant manipulation and construction of list structures on the heap. It is precisely certain uses of this trick which will be replaced by linear logic resource management at the formula level in lolliCoP.

To insure that the selected clause is purely positive, the code checks that the clause contains no negated terms (terms of the form -_, where the underscore is a wildcard). This is done using Prolog's negation-as-failure operator: \+. Once this is confirmed, the proof is started using a dummy (unit) goal clause, !, which will cause the selected clause to become the goal clause in the next step. This is done to avoid duplicating some bookkeeping code already present in the general case in prove/4, which implements the core of the prover. Note that the similarity of appearance to the Prolog cut operator is coincidental.

Should the call to prove/4 at the end of the first clause of prove/2 fail, then, provided this is not a purely propositional problem (that is, if it is not true that the entire matrix is ground), the second clause of prove/2 will cause the entire process to repeat, but with a path-depth limit one larger.

The first clause of prove/4 implements the termination case, extension0, and is straightforward. The second implements the remaining rules. This clause begins by selecting, without loss of completeness, the first literal, Lit, from the goal clause. If the complement of this literal, as computed by the first line of the body of the clause, matches a literal in the Path, then the system attempts to apply an instance of the reduction rule, jumping to the last line of the clause, where it recursively proves the remainder of the goal using the same matrix and path, under the substitution resulting from the matching process. (That is, free variables in literals in the goal and the path may have become instantiated.)

If a match to the complement of the literal is not found on the path, that is, if all attempts to apply instances of reduction have failed, then this is treated as either extension1 or extension2, depending on whether or not the clause selected next is ground. A clause is selected by the technique described above. Then a literal matching the complement of the goal literal is selected from the clause. (If this fails then the program backtracks and selects another clause.) The test Cla1 == Cla2 is used, as explained below, to determine if the selected clause is ground, and the matrix for the subproof is constructed accordingly, either with or without the chosen clause. If the path limit has not been reached, the prover recursively proves the body of the selected clause under the new path assumption and substitution, and, if it succeeds, goes on to prove the remainder of the current goal clause. As the depth-first prover is complete for propositional logic, the path limit check is not done if the selected clause is ground.

Note that P -> Q ; R is an extra-logical control structure corresponding to an if-then-else statement. The difference between this and ((P,Q);(\+P,R)) is that the latter allows for backtracking and retrying the test under another substitution, whereas the former allows the test to be computed only once, and an absolute choice is made at that point. It can also be written without R, as is done in some cases here. Such use is, in essence, a hidden use of the Prolog cut operator, which is used for pruning search.
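The committed choice is easy to demonstrate in isolation (example ours, standard Prolog):

    % With ->, the test is solved only once and the choice is frozen:
    ?- ( member(X, [1,2]) -> X =:= 2 ).
    % fails: member/2 commits to its first solution, X = 1.

    % Without ->, the test can be retried under another substitution:
    ?- member(X, [1,2]), X =:= 2.
    % succeeds with X = 2 on backtracking.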
As mentioned above, the use of Prolog terms to represent atomic formulas introduces complications. This is because the free variables of a term, intended to represent the implicitly quantified variables of the atoms, can become bound if the term is compared (unified) with another term. In order to keep the variables in clauses in the matrix from being so bound, when a clause is selected from the matrix, a copy with a fresh set of variables is produced using copy_term, and that copy is the clause that is used. Thus, the comparison Cla1 == Cla2, which checks for syntactic identity, succeeds only if there were no variables in the selected clause.

4 Linear Logic and Lolli

\[ \frac{}{\Gamma;\Delta\longrightarrow\top}\;\top\mathrm{R} \qquad \frac{\Gamma;\Delta,B\longrightarrow C}{\Gamma;\Delta\longrightarrow B\multimap C}\;\multimap\!\mathrm{R} \qquad \frac{\Gamma,B;\Delta\longrightarrow C}{\Gamma;\Delta\longrightarrow B\Rightarrow C}\;\Rightarrow\!\mathrm{R} \]
\[ \frac{\Gamma;\Delta\longrightarrow B \quad \Gamma;\Delta\longrightarrow C}{\Gamma;\Delta\longrightarrow B\,\&\,C}\;\&\mathrm{R} \qquad \frac{\Gamma;\Delta_1\longrightarrow B_1 \quad \Gamma;\Delta_2\longrightarrow B_2}{\Gamma;\Delta_1,\Delta_2\longrightarrow B_1\otimes B_2}\;\otimes\mathrm{R} \qquad \frac{\Gamma;\Delta\longrightarrow B_i}{\Gamma;\Delta\longrightarrow B_1\oplus B_2}\;\oplus\mathrm{R} \]
\[ \frac{\Gamma;\Delta\longrightarrow B[x\mapsto t]}{\Gamma;\Delta\longrightarrow \exists x.B}\;\exists\mathrm{R} \qquad \frac{\Gamma,B;\Delta,B\longrightarrow C}{\Gamma,B;\Delta\longrightarrow C}\;\mathit{absorb} \qquad \frac{\Gamma;\Delta,B[x\mapsto t]\longrightarrow C}{\Gamma;\Delta,\forall x.B\longrightarrow C}\;\forall\mathrm{L} \]
\[ \frac{\Gamma;\Delta_1\longrightarrow B \quad \Gamma;\Delta_2,C\longrightarrow E}{\Gamma;\Delta_1,\Delta_2,B\multimap C\longrightarrow E}\;\multimap\!\mathrm{L} \qquad \frac{\Gamma;\emptyset\longrightarrow B \quad \Gamma;\Delta,C\longrightarrow E}{\Gamma;\Delta,B\Rightarrow C\longrightarrow E}\;\Rightarrow\!\mathrm{L} \]

Fig. 3. Proof rules for the fragment of linear logic underlying Lolli (see [6]).

Linear logic [4] provides two forms of conjunction: tensor, ⊗, and with, &. In proving a conjunction formed with ⊗, the current set of restricted assumptions, ∆, is split between the two conjuncts: those not used in proving the first conjunct must be used while proving the second. To prove a & conjunction, the set of assumptions is copied to both sides: each conjunct's proof must use all of the assumptions. In Lolli, the ⊗ conjunction is represented by the familiar ",". This is a natural mapping, as we expect the effect of a succession of goals to be cumulative: each has available to it the resources not yet used by its predecessors. The & conjunction, which is less used, is written "&". Thus, a query showing that two dollars are needed to buy pizza and soda when each costs a dollar can be written in Lolli as:

    ?- (dollar -o pizza) => (dollar -o soda) =>
       (dollar -o dollar -o (pizza, soda)).

which would succeed. In contrast, a single, ordinary dollar would be insufficient, as in the failing query:

    ?- (dollar -o pizza) => (dollar -o soda) => (dollar -o (pizza, soda)).

If we wished to allow ourselves a single, infinitely reusable dollar, we would write:

    ?- (dollar -o pizza) => (dollar -o soda) => (dollar => (pizza, soda)).

which would also succeed. Finally, the puzzling query:

    ?- (dollar -o pizza) => (dollar -o soda) => (dollar -o (pizza & soda)).

would also succeed. It says that with a dollar it is possible to buy soda and possible to buy pizza, but not both at the same time.

It is important to note that while the implication operators add clauses to a program while it is running, they are not the same as the Prolog assert mechanism. First, the addition is scoped over the subgoal on the right of the implication, whereas a clause asserted in Prolog remains until it is retracted. So, for example, the following query will fail:

    ?- (dollar => dollar), dollar.

Assumed clauses also go out of scope if search backtracks out of the subordinate goal. Second, whereas assert automatically universalizes any free variables in an added clause, in Lolli clauses added with implication can contain free logic variables, which may get bound when the clause is used to prove some goal. Therefore, whereas the Prolog query:

    ?- assert(p(X)), p(a), p(b).

will succeed, because X is universalized, the seemingly similar Lolli query:

    ?- p(X) => (p(a), p(b)).

will fail, because the attempt to prove p(a) causes the variable X to become instantiated to a. If we desire the other behavior, we must quantify explicitly:

    ?- (forall X\ p(X)) => (p(a), p(b)).

What's more, any action that causes the variable X to become instantiated will affect instances of that variable in added assumptions. For example, the query:

    ?- p(X) => r(a) => (r(X), p(b)).

will fail, since proving r(X) causes the variable X to be instantiated to a, both in that position and in the assumption p(X). Our implementation of lolliCoP will rely crucially on all these behaviors.
Though there are two forms of disjunction in linear logic, only one, "⊕", is used in Lolli. It corresponds to the traditional one and is therefore written with a semicolon in Lolli, as in Prolog. There are also two forms of truth, ⊤ and 1. The latter, which Lolli calls "true", can only be proved if all the linear assumptions have already been used. In contrast, ⊤ is provable even if some resources are, as yet, unused. Thus if a ⊤ occurs as one of the conjuncts in a ⊗ conjunction, then the conjunction may succeed even if the other conjuncts do not use all the linear resources. The ⊤ is seen to consume the leftovers. Therefore, Lolli calls this operator "erase".
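A minimal pair of queries (ours, not from the paper) makes the distinction concrete:

    ?- dollar -o erase.   % succeeds: erase consumes the unused dollar
    ?- dollar -o true.    % fails: true demands that every linear
                          % resource has already been consumed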
It is beyond the scope of this paper to demonstrate the applications of all these operators. Many good examples can be found in the literature, particularly in the papers on Lygon and Lolli [5, 6]. The proof theory of this fragment has also been developed extensively [6]. Of crucial importance is that there is a straightforward goal-directed proof procedure (conceptually similar to the one used for Prolog) that is sound and complete for this fragment of linear logic.

5 The lolliCoP Theorem Prover

Figure 4 gives the code for lolliCoP, a reimplementation of leanCoP in Lolli/LLP. The basic premise of its design is that, rather than being passed around as a list, the matrix will be loaded as assumptions into the proof context and accessed directly. In addition, ground clauses will be added as linear resources, since the calculus dictates that in any given branch of the proof, a ground clause should be removed from the matrix once it is used. Non-ground clauses are added to the intuitionistic (unbounded) context. In either case (ground or non-ground) these assumptions are stored as clauses for the special predicate cl/1. Literals in the path are also stored as assumptions added to the program. They are unbounded assumptions added as clauses of the special predicate path/1. While Lolli supports the λ-terms of λ-Prolog, LLP does not. Therefore, clauses are still represented as lists of literals, which are represented as terms as before.

    prove(Mat) :-
        reverse(Mat,Mat1),
        ( ground(Mat) -> propositional => pr(Mat1) ; pr(Mat1) ).

    pr([]) :- p(1).
    pr([Cla|Mat]) :-
        ( ground(Cla) -> (cl(Cla) -<> pr(Mat)) ; (cl(Cla) => pr(Mat)) ).

    p(PathLim) :-
        cl(Cla), \+ member(-_,Cla),
        copy_term(Cla,Cla1), prove(Cla1,PathLim).
    p(PathLim) :-
        \+ propositional, PathLim1 is PathLim + 1, p(PathLim1).

    prove([],_) :- erase.
    prove([Lit|Cla],PathLim) :-
        ( -NegLit = Lit ; -Lit = NegLit ) ->
          (  path(NegLit), erase
          ;  cl(Cla1), copy_term(Cla1,Cla2),
             append(ClaA,[NegLit|ClaB],Cla2),
             append(ClaA,ClaB,Cla3),
             ( Cla1 == Cla2 -> true ; PathLim > 0 ),
             PathLim1 is PathLim - 1,
             path(Lit) => prove(Cla3,PathLim1)
          )
          & prove(Cla,PathLim).

Fig. 4. The lolliCoP theorem prover.

The proof procedure begins with a call to prove/1 with a matrix to be proved. This predicate first reverses the order of the clauses, so that when they are added recursively the resultant context will be searched in their original order. It then calls pr/1 to load the matrix into the unrestricted and linear proof contexts, as appropriate. First, however, it checks whether the entire matrix is ground or not. If it is, a flag predicate is assumed (using =>) to indicate that this is a propositional problem, and that iterative deepening is not necessary.

The predicate pr/1 takes the first clause out of the given matrix, adds it to the current context as either a linear or unlimited assumption, as appropriate, and then calls itself recursively as the goal nested under the implication. Thus, each call to this predicate will be executed in a context which contains the assumptions added by all the previous calls. When the end of the given matrix is reached, the first clause of pr/1 calls p/1 with an initial path-length limit of 1, so that a start clause can be selected, and the proof search begun.

The clauses for p/1 take the place of the clauses for prove/2 in leanCoP. They are responsible for managing the iterative deepening, and for selecting the start clause for the search. A clause is selected just by attempting to prove the predicate cl/1, which will succeed by matching one of the clauses from the matrix which has been added to the program. This is significantly simpler than the process in leanCoP. Once the program finds a purely positive start clause, it is copied and its proof is attempted at the current path-length limit. Should that process fail for all possible choices of start clause, the second clause of p/1 is invoked. It checks to see that this is not a purely propositional problem, and if it is not, makes a recursive call with the path-length limit one higher.

The predicate prove/2 takes the role of prove/4 in leanCoP; because the matrix and path are stored in the proof context, they no longer need to be passed around as arguments. The first clause, corresponding to extension0, here has a body consisting of the erase (⊤) operator. Its purpose is to discard any linear assumptions (i.e. ground clauses in the matrix) that were not used in this branch of the proof. This is necessary since we are building a prover for classical logic, in which assumptions can be discarded.

The second clause of this predicate is, as before, the core of the prover, covering the remaining three rules. It begins by selecting a literal from the goal clause and forming its complement. If a literal matching the complement occurs as an argument to one of the assumed path/1 clauses, then this is an instance of the reduction rule and this branch is terminated. As with the extension0 rule, erase is used to discard unused assumptions.

Otherwise, the predicate cl/1 extracts a clause from the matrix, which is then copied and checked to see if it contains a match for the complement of the goal literal. If the clause is ground or if the path-length limit has not been reached, the current literal is added to the path and prove/2 is called recursively as a subordinate goal (within the scope of the assumption added to the path) to prove the body of the selected clause. If this was an instance of the reduction rule, or if it was an instance of extension1 or extension2 and the proof of the body of the matching clause succeeded, the call to prove/2 finishes with a recursive call to prove the rest of the current goal clause. Because this must be done using the same matrix and path that were used in the other branch of the proof, the two branches are joined with a & conjunction. Thus the context is copied independently to the two branches.

It is important to notice that, other than checking whether the path-length limit has been reached, there is no difference between the cases when the selected clause is ground or not. If it was ground, it was added to the context using linear implication, and, since it has been used (to prove the cl/1 predicate), it has automatically been removed from the program, and, hence, the matrix. Also, lolliCoP uses a different method for checking path length against the limit: the limit is simply decremented each time a literal is added to the path. This is done because there is no way to access the whole path to check its length, but it has the advantage of being significantly more efficient as well.

It is also important to note that, as mentioned before, we rely on the fact that free variables in assumptions retain their identity as logic variables and may become instantiated subsequently. In particular, the literals added to the path may contain instances of free variables from the goal clause from which they derive. Anything which causes these variables to become instantiated will similarly affect those occurrences in these assumptions. Thus, this technique could not be implemented using Prolog's assert mechanism. In any case, asserted clauses are generally not as fast as compiled ones.
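The behavior being relied upon can be shown in a one-line illustration (ours):

    % An assumed path literal shares its free variable X with the goal
    % clause it came from; using the assumption instantiates X everywhere.
    ?- path(p(X)) => (path(p(a)), X == a).
    % succeeds: proving path(p(a)) unifies it with the assumption
    % path(p(X)), binding X to a.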
6 Performance Analysis

We have tested lolliCoP on the 2200 clausal form problems in the TPTP library version 2.3.0 [15, 8]. These consist of 2193 problems known to be unsatisfiable (or valid using positive representation) and 7 propositional problems known to be satisfiable (or invalid). Each problem is rated from 0.00 to 1.00 relative to its difficulty. A rating of "?" means the difficulty is unknown. No reordering of clauses or literals has been done. The tests were performed on a Linux system with a 550 MHz Pentium III processor and 128 Mbytes of memory. The programs were compiled with version 0.50 of LLP, which generated abstract machine code executed by an emulator written in C. The time limit for all proof attempts was 300 seconds.

Table 1. Overall performance of Otter, leanCoP, lolliCoP, and lolliCoP2

                           Total    Otter        leanCoP     lolliCoP    lolliCoP2
    Solved                 2200     1602 (73%)   810 (37%)   822 (37%)   880 (40%)
    Problems rated 0.00    1308     1230 (94%)   713 (55%)   716 (55%)   737 (56%)
    Problems rated >0.00    733      249 (34%)    76 (10%)    83 (11%)   118 (16%)
    Problems rated ?        159      123 (77%)    21 (13%)    23 (14%)    25 (16%)

Fig. 5. Performance of Otter, leanCoP and lolliCoP classified by problem rating.

Since Otter 3.1 is not yet available, we used Otter 3.0.6 instead. All tests were made on the same 550 MHz Pentium III. Table 2 gives the results of this comparison. (Otter results labeled "error" refer to an empty set-of-support.) As mentioned in the introduction, although the table shows lolliCoP as almost consistently outpacing leanCoP, these results do not tell the entire story. Because LLP is a first-generation implementation, the code generator is not nearly as sophisticated as SICStus', nor is its runtime system. To adjust for this factor we also executed a version of leanCoP using the LLP compiler and runtime system (since Lolli is a superset of Prolog). In this test, looking only at the problems that it succeeded in solving, leanCoP took 2.3 times as long as lolliCoP, providing a more accurate measure of the benefits accrued from the logical treatment.

Table 3a compares the performance of all four systems on the 33 problems that they can all solve. Total CPU time is shown, along with a speedup ratio relative to leanCoP (under SICStus). On just these problems, lolliCoP has almost the same performance as Otter. However, comparing the result of 36 problems solved by both Otter and lolliCoP, Otter is 71% faster, as shown in Table 3b. Finally, Table 3c shows a similar analysis for the 76 problems that lolliCoP and leanCoP can both solve.

7 Improvements to the lolliCoP Prover

In the design of leanCoP, Otten and Bibel seem to have been focused primarily on keeping the code as short as possible. In the process of reimplementing the system in Lolli, a simple but significant performance improvement became apparent, which we discuss here.

The most obvious inefficiency in the system as described thus far is that copy_term is called in order to create a new set of logic variables in a selected clause, even when the clause is ground, since that test is not made till later on. Given the size of some of the clauses in the problems in the TPTP library, this can be quite inefficient. While the obvious solution would be to move the use of copy_term into the body of the if-then-else along with the path-limit check, Lolli affords a more creative solution.
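That solution is not reproduced in this excerpt; as a sketch of the general idea only (our reconstruction, not necessarily the authors' lolliCoP2), each clause can be tagged as ground or non-ground at the moment it is loaded, so that prove/2 copies only the non-ground clauses and drops the Cla1 == Cla2 test entirely:

    % Sketch only; the tags g/n and the predicate cl/2 are hypothetical.
    pr([]) :- p(1).
    pr([Cla|Mat]) :-
        ( ground(Cla) ->
            (cl(g,Cla) -<> pr(Mat))    % linear: consumed when used
        ;   (cl(n,Cla) => pr(Mat)) ).  % unbounded: copied at each use

    % ...and in the extension case of prove/2, select with the tag:
    %     cl(Kind,Cla1),
    %     ( Kind == g -> Cla2 = Cla1 ; copy_term(Cla1,Cla2) ),
    %     append(ClaA,[NegLit|ClaB],Cla2), append(ClaA,ClaB,Cla3),
    %     ( Kind == g -> true ; PathLim > 0 ),
    %     PathLim1 is PathLim - 1,
    %     path(Lit) => prove(Cla3,PathLim1)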
Table 2. Problems solved by lolliCoP2 and rated higher than 0.00 (CPU times in seconds)

    Problem (rating)        Otter    leanCoP   lolliCoP   lolliCoP2
    BOO012-1 (0.17)          3.44       8.13       7.33       1.28
    BOO012-3 (0.33)         17.39     237.81      63.00       9.43
    COL002-3 (0.33)         >300        0.01       0.03       0.01
    COL075-1 (0.50)         >300      >300       275.77      60.29
    GEO026-3 (0.11)          2.15      20.34      19.16       2.35
    GEO030-3 (0.44)          8.04     >300       271.72      30.90
    GEO032-3 (0.25)          1.16     >300       292.07      32.02
    GEO033-3 (0.38)          4.81     >300      >300         39.41
    GEO041-3 (0.22)          0.21      42.28      32.90       3.60
    GEO051-3 (0.25)          7.26     >300      >300         56.82
    GEO064-3 (0.12)          0.33     >300      >300         55.07
    GEO065-3 (0.12)          0.34     >300      >300         55.11
    GEO066-3 (0.12)          0.32     >300      >300         55.14
    HEN007-6 (0.17)          0.12     >300      >300        211.72
    NUM009-1 (0.12)          3.40      75.63      49.37       4.44
    NUM283-1.005 (0.20)      0.43       0.28       0.20       0.17
    NUM284-1.014 (0.20)      0.89     180.54     147.55     129.91
    PUZ034-1.004 (0.67)     error      15.87      12.42       9.83
    SET014-2 (0.33)        176.24     174.31     134.35      27.97
    SET016-7 (0.12)         >300       10.99       8.29       1.05
    SET018-7 (0.12)         >300       11.13       8.37       1.06
    SET041-3 (0.44)         >300       59.88      45.36       4.88
    SET060-6 (0.12)          0.19       0.04       0.03       0.00
    SET060-7 (0.12)          0.33       0.05       0.03       0.00
    SET083-7 (0.12)         24.39      40.34      34.70       5.41
    SET085-6 (0.12)         12.72     >300      >300         65.58
    SET085-7 (0.25)         65.79      46.01      33.55       5.22
    SET119-7 (0.25)        177.97      60.35      48.50       6.71
    SET120-7 (0.25)        181.62      60.23      48.46       6.71
    SET121-7 (0.25)        178.42      72.81      55.77       7.63
    SET122-7 (0.25)        180.13      72.83      55.82       7.64
    SET152-6 (0.12)          0.45       3.50       2.60       0.38
    SET153-6 (0.12)         >300        0.70       0.56       0.10
    SET187-6 (0.38)         >300       18.01      13.53       2.27
    SET196-6 (0.12)         10.59     >300      >300        196.13
    SET197-6 (0.12)         10.63     >300      >300        196.06
    SET199-6 (0.25)         >300      >300      >300        203.63
    SET231-6 (0.12)         >300       12.86       9.74       1.63
    SET234-6 (0.25)         >300      >300      >300        251.18
    SET252-6 (0.25)         61.53     >300      >300        202.60
    SET253-6 (0.25)         >300      >300      >300        203.24
    SET451-6 (0.12)         >300      >300      >300        281.67
    SET553-6 (0.25)         36.81     >300      >300        204.46

Table 3. Comparison of Otter, leanCoP, and lolliCoP

(a) 33 problems solved by Otter, leanCoP, and lolliCoP

                          Otter     leanCoP   lolliCoP   lolliCoP2
    Total CPU time      1143.03    1590.66    1139.41     338.47
    Average CPU time      34.64      48.20      34.53      10.26
    Speedup Ratio          1.39       1.00       1.40       4.70

(c) 76 problems solved by leanCoP and lolliCoP

                          leanCoP   lolliCoP   lolliCoP2
    Total CPU time       2757.83    2038.58     853.24
    Average CPU time       36.29      26.82      11.23
    Speedup Ratio           1.00       1.35       3.23

However, to the extent that these programs rely on the use of term-level Prolog data structures to maintain their proof contexts, they require the use of list manipulation predicates that are neither particularly fast nor clear. In this paper we have shown that by representing the proof context within the proof context of the meta-language, we can obtain a program that is at once clearer, simpler, and faster. Source code for the examples in this paper, as well as the LLP compiler, can be found at /~hodas/research/lollicop.

References
1. B. Beckert and J. Posegga. leanTAP: lean tableau-based theorem proving. In 12th CADE, pages 793–797. Springer-Verlag LNAI 814, 1994.
2. W. Bibel. Deduction: Automated Logic. Academic Press, 1993.
3. W. Bibel, S. Brüning, U. Egly, and T. Rath. KoMeT. In 12th CADE, pages 783–787. Springer-Verlag LNAI 814, 1994.
4. J.-Y. Girard. Linear logic. Theoretical Computer Science, 50:1–102, 1987.
5. James Harland, David Pym, and Michael Winikoff. Programming in Lygon: An overview. In M. Wirsing and M. Nivat, editors, Algebraic Methodology and Software Technology, pages 391–405, Munich, Germany, 1996. Springer-Verlag LNCS 1101.
6. J. S. Hodas and D. Miller. Logic programming in a fragment of intuitionistic linear logic. Information and Computation, 110(2):327–365, 1994. Extended abstract in the Proceedings of the Sixth Annual Symposium on Logic in Computer Science, Amsterdam, July 15–18, 1991.
7. J. S. Hodas, K. Watkins, N. Tamura, and K.-S. Kang. Efficient implementation of a linear logic programming language. In Proceedings of the 1998 Joint International Conference and Symposium on Logic Programming, pages 145–159, June 1998.
8. Argonne National Laboratory. Otter and MACE on TPTP v2.3.0. Web page at /AR/otter/tptp230.html, May 2000.
9. R. Letz, J. Schumann, S. Bayerl, and W. Bibel. Setheo: a high-performance theorem prover. Journal of Automated Reasoning, 8(2):183–212, 1992.
10. W. MacCune. Otter 3.0 reference manual and guide. Technical Report ANL-94/6, Argonne National Laboratory, 1994.
11. D. Miller, G. Nadathur, F. Pfenning, and A. Scedrov. Uniform proofs as a foundation for logic programming. Annals of Pure and Applied Logic, 51:125–157, 1991.
12. M. Moser, O. Ibens, R. Letz, J. Steinbach, C. Goller, J. Schumann, and K. Mayr. Setheo and E-Setheo — the CADE-13 systems. Journal of Automated Reasoning, 18:237–246, 1997.
13. G. Nadathur and D. J. Mitchell. Teyjus — a compiler and abstract machine based implementation of lambda Prolog. In 16th CADE, pages 287–291. Springer-Verlag LNCS 1632, 1999.
14. J. Otten and W. Bibel. leanCoP: lean connection-based theorem proving. In Proceedings of the Third International Workshop on First-Order Theorem Proving, pages 152–157. University of Koblenz, 2000. Electronically available, along with a submitted journal-length version, at informatik.tu-darmstadt.de/~jeotten/leanCoP/.
15. G. Sutcliffe and C. Suttner. The TPTP problem library — CNF release v1.2.1. Journal of Automated Reasoning, 21:177–203, 1998.
