Lutess
- testing of time-synchronous programs based on Lustre
- SUT and oracle: binary time-synchronous programs
- environment description: observer in Lustre
- testing method parameterization
- fully automated test generation & execution
Lutess – Testing Method
- uniform random testing
- operational profile-based testing
Tools
Tool            | Languages                  | CAR | Access
----------------|----------------------------|-----|------------
Lutess          | Lustre                     | A   | n.p.
Lurette         | Lustre                     | A   | n.p.
GATeL           | Lustre                     | A   | n.p.
Autofocus       | Autofocus                  | A   | n.p.
Conformance Kit | EFSM                       | R   | no
Phact           | EFSM                       | R   | no
TVEDA           | SDL, Estelle               | R   | no
AsmL            | AsmL                       | R   | www
Cooper          | LTS (Basic LOTOS)          | A   | www
TorX            | LTS (LOTOS, Promela, FSP)  | A   | www
TGV             | LTS-API (LOTOS, SDL, UML)  | A   | www
STG             | NTIF                       | A   | winter '04?
AGEDIS          | UML/AML                    | ?   | ?
TestComposer    | SDL                        | C   | $
Autolink        | SDL                        | C   | $
Recent Advances in Robust Optimization and Robustness: An Overview
Virginie Gabrel, Cécile Murat, and Aurélie Thiele

July 2012

Abstract: This paper provides an overview of developments in robust optimization and robustness published in the academic literature over the past five years.

1 Introduction

This review focuses on papers identified by Web of Science as having been published since 2007 (included), belonging to the area of Operations Research and Management Science, and having "robust" and "optimization" in their title. There were exactly 100 such papers as of June 20, 2012. We have completed this list by considering 726 works indexed by Web of Science that had either robustness (for 80 of them) or robust (for 646) in their title and belonged to the Operations Research and Management Science topic area. We also identified 34 PhD dissertations dated from the last five years with "robust" in their title and belonging to the areas of operations research or management. Among those we have chosen to focus on the works with a primary focus on management science rather than system design or optimal control, which are broad fields that would deserve a review paper of their own, and on papers that could be of interest to a large segment of the robust optimization research community. We feel it is important to include PhD dissertations to identify these recent graduates as the new generation trained in robust optimization and robustness analysis, whether they have remained in academia or joined industry. We have also added a few not-yet-published preprints to capture ongoing research efforts. While many additional works would have deserved inclusion, we feel that the works selected give an informative and comprehensive view of the state of robustness and robust optimization to date in the context of operations research and management science.

Author affiliations: Université Paris-Dauphine, LAMSADE, Place du Maréchal de Lattre de Tassigny, F-75775 Paris Cedex 16, France; gabrel@lamsade.dauphine.fr (corresponding author), murat@lamsade.dauphine.fr. Lehigh University, Industrial and Systems Engineering Department, 200 W Packer Ave, Bethlehem, PA 18015, USA; aurelie.thiele@

2 Theory of Robust Optimization and Robustness

2.1 Definitions and Basics

The term "robust optimization" has come to encompass several approaches to protecting the decision-maker against parameter ambiguity and stochastic uncertainty. At a high level, the manager must determine what it means for him to have a robust solution: is it a solution whose feasibility must be guaranteed for any realization of the uncertain parameters? Or one whose objective value must be guaranteed? Or one whose distance to optimality must be guaranteed?
The main paradigm relies on worst-case analysis: a solution is evaluated using the realization of the uncertainty that is most unfavorable. The way to compute the worst case is also open to debate: should it use a finite number of scenarios, such as historical data, or continuous, convex uncertainty sets, such as polyhedra or ellipsoids? The answers to these questions will determine the formulation and the type of the robust counterpart. Issues of over-conservatism are paramount in robust optimization, where the uncertain parameter set over which the worst case is computed should be chosen to achieve a trade-off between system performance and protection against uncertainty, i.e., neither too small nor too large.

2.2 Static Robust Optimization

In this framework, the manager must take a decision in the presence of uncertainty, and no recourse action will be possible once uncertainty has been realized. It is then necessary to distinguish between two types of uncertainty: uncertainty on the feasibility of the solution and uncertainty on its objective value. Indeed, the decision maker generally has different attitudes with respect to infeasibility and sub-optimality, which justifies analyzing these two settings separately.

2.2.1 Uncertainty on feasibility

When uncertainty affects the feasibility of a solution, robust optimization seeks to obtain a solution that will be feasible for any realization taken by the unknown coefficients; however, complete protection from adverse realizations often comes at the expense of a severe deterioration in the objective. This extreme approach can be justified in some engineering applications of robustness, such as robust control theory, but is less advisable in operations research, where adverse events such as low customer demand do not produce the high-profile repercussions that engineering failures (such as a doomed satellite launch or a destroyed unmanned robot) can have. To make the robust methodology appealing to business practitioners, robust optimization thus focuses on obtaining a solution that will be feasible for any realization taken by the unknown coefficients within a smaller, "realistic" set, called the uncertainty set, which is centered around the nominal values of the uncertain parameters. The goal becomes to optimize the objective over the set of solutions that are feasible for all coefficient values in the uncertainty set. The specific choice of the set plays an important role in ensuring computational tractability of the robust problem and limiting deterioration of the objective at optimality, and must be thought through carefully by the decision maker. A large branch of robust optimization focuses on worst-case optimization over a convex uncertainty set. The reader is referred to Bertsimas et al. (2011a) and Ben-Tal and Nemirovski (2008) for comprehensive surveys of robust optimization, and to Ben-Tal et al. (2009) for a book treatment of the topic.
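To fix ideas, a generic robust counterpart of an uncertain linear constraint can be written as follows (a standard formulation from the robust optimization literature; the notation here is illustrative rather than taken from any specific paper cited above):

\[
\min_{x} \; c^\top x \quad \text{s.t.} \quad a^\top x \le b \quad \forall a \in \mathcal{U},
\]

where \(\mathcal{U}\) is the uncertainty set. For instance, for the ellipsoidal set \(\mathcal{U} = \{\bar{a} + P u : \|u\|_2 \le \rho\}\), the semi-infinite constraint reduces to the single second-order cone constraint \(\bar{a}^\top x + \rho \|P^\top x\|_2 \le b\), which illustrates how the choice of set drives both tractability and conservatism.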
2.2.2 Uncertainty on objective value

When uncertainty affects the optimality of a solution, robust optimization seeks to obtain a solution that performs well for any realization taken by the unknown coefficients. While a common criterion is to optimize the worst-case objective, some studies have investigated other robustness measures.

Roy (2010) proposes a new robustness criterion that holds great appeal for the manager due to its simplicity of use and practical relevance. This framework, called bw-robustness, allows the decision-maker to identify a solution which guarantees an objective value, in a maximization problem, of at least w in all scenarios, and maximizes the probability of reaching a target value of b (b > w). Gabrel et al. (2011) extend this criterion from a finite set of scenarios to the case of an uncertainty set modeled using intervals. Kalai et al. (2012) suggest another criterion called lexicographic α-robustness, also defined over a finite set of scenarios for the uncertain parameters, which mitigates the primary role of the worst-case scenario in defining the solution. Thiele (2010) discusses over-conservatism in robust linear optimization with cost uncertainty. Gancarova and Todd (2012) study the loss in objective value when an inaccurate objective is optimized instead of the true one, and show that on average this loss is very small, for an arbitrary compact feasible region. In combinatorial optimization, Morrison (2010) develops a framework of robustness based on persistence (of decisions), using the Dempster-Shafer theory as an evidence of robustness, and applies it to portfolio tracking and sensor placement.

2.2.3 Duality

Since duality has been shown to play a key role in the tractability of robust optimization (see for instance Bertsimas et al. (2011a)), it is natural to ask how duality and robust optimization are connected. Beck and Ben-Tal (2009) show that primal worst is equal to dual best. The relationship between robustness and duality is also explored in Gabrel and Murat (2010) when the right-hand sides of the constraints are uncertain and the uncertainty sets are represented using intervals, with a focus on establishing the relationships between linear programs with uncertain right-hand sides and linear programs with uncertain objective coefficients using duality theory. This avenue of research is further explored in Gabrel et al. (2010) and Remli (2011).

2.3 Multi-Stage Decision-Making

Most early work on robust optimization focused on static decision-making: the manager decided at once of the values taken by all decision variables and, if the problem allowed for multiple decision stages as uncertainty was realized, the stages were incorporated by re-solving the multi-stage problem as time went by and implementing only the decisions related to the current stage. As the field of static robust optimization matured, incorporating, in a tractable manner, the information revealed over time directly into the modeling framework became a major area of research.

2.3.1 Optimal and Approximate Policies

A work going in that direction is Bertsimas et al. (2010a), which establishes the optimality of policies affine in the uncertainty for one-dimensional robust optimization problems with convex state costs and linear control costs. Chen et al. (2007) also suggest a tractable approximation for a class of multistage chance-constrained linear programming problems, which converts the original formulation into a second-order cone programming problem. Chen and Zhang (2009) propose an extension of the Affinely Adjustable Robust Counterpart framework described in Ben-Tal et al. (2009) and argue that its potential is well beyond what has been in the literature so far.
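As an illustration of the affine policies discussed above, adjustable robust optimization restricts recourse decisions to affine functions of the revealed uncertainty (a sketch in generic notation, not specific to any one paper):

\[
y(\xi) = y_0 + Y \xi,
\]

where \(y_0\) and the matrix \(Y\) are decision variables chosen so that the constraints hold for every \(\xi\) in the uncertainty set. Optimizing over \((y_0, Y)\) instead of arbitrary functions \(y(\cdot)\) yields a tractable approximation of the fully adjustable problem, which is exact in the special settings identified in works such as Bertsimas et al. (2010a).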
2.3.2 Two stages

Because of the difficulty in incorporating multiple stages in robust optimization, many theoretical works have focused on two stages. Regarding two-stage problems, Thiele et al. (2009) presents a cutting-plane method based on Kelley's algorithm for solving convex adjustable robust optimization problems, while Terry (2009) provides in addition preliminary results on the conditioning of a robust linear program and of an equivalent second-order cone program. Assavapokee et al. (2008a) and Assavapokee et al. (2008b) develop tractable algorithms in the case of robust two-stage problems where the worst-case regret is minimized, in the case of interval-based uncertainty and scenario-based uncertainty, respectively, while Minoux (2011) provides complexity results for the two-stage robust linear problem with right-hand-side uncertainty.
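The generic two-stage robust problem underlying these works can be sketched as follows (illustrative notation):

\[
\min_{x} \; \Big( c^\top x \; + \; \max_{u \in \mathcal{U}} \; \min_{y \in Y(x,u)} d^\top y \Big),
\]

where \(x\) is the here-and-now decision, \(u\) the uncertain parameter, and \(y\) the recourse decision chosen after \(u\) is revealed. Cutting-plane methods such as the one in Thiele et al. (2009) proceed by iteratively adding constraints generated from worst-case realizations of \(u\).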
2.4 Connection with Stochastic Optimization

An early stream in robust optimization modeled stochastic variables as uncertain parameters belonging to a known uncertainty set, to which robust optimization techniques were then applied. An advantage of this method was to yield approaches to decision-making under uncertainty that were of a level of complexity similar to that of their deterministic counterparts, and did not suffer from the curse of dimensionality that afflicts stochastic and dynamic programming. Researchers are now making renewed efforts to connect the robust optimization and stochastic optimization paradigms, for instance quantifying the performance of the robust optimization solution in the stochastic world. The topic of robust optimization in the context of uncertain probability distributions, i.e., in the stochastic framework itself, is also being revisited.

2.4.1 Bridging the Robust and Stochastic Worlds

Bertsimas and Goyal (2010) investigates the performance of static robust solutions in two-stage stochastic and adaptive optimization problems. The authors show that static robust solutions are good-quality solutions to the adaptive problem under a broad set of assumptions. They provide bounds on the ratio of the cost of the optimal static robust solution to the optimal expected cost in the stochastic problem, called the stochasticity gap, and on the ratio of the cost of the optimal static robust solution to the optimal cost in the two-stage adaptable problem, called the adaptability gap. Chen et al. (2007), mentioned earlier, also provides a robust optimization perspective to stochastic programming. Bertsimas et al. (2011a) investigates the role of geometric properties of uncertainty sets, such as symmetry, in the power of finite adaptability in multistage stochastic and adaptive optimization.

Duzgun (2012) bridges descriptions of uncertainty based on stochastic and robust optimization by considering multiple ranges for each uncertain parameter and setting the maximum number of parameters that can fall within each range. The corresponding optimization problem can be reformulated in a tractable manner using the total unimodularity of the feasible set, and allows for a finer description of uncertainty while preserving tractability. It also studies the formulations that arise in robust binary optimization with uncertain objective coefficients, using the Bernstein approximation to chance constraints described in Ben-Tal et al. (2009), and shows that the robust optimization problems are deterministic problems for modified values of the coefficients.

While many results bridging the robust and stochastic worlds focus on giving probabilistic guarantees for the solutions generated by the robust optimization models, Manuja (2008) proposes a formulation for robust linear programming problems that allows the decision-maker to control both the probability and the expected value of constraint violation.

Bandi and Bertsimas (2012) propose a new approach to analyze stochastic systems based on robust optimization. The key idea is to replace the Kolmogorov axioms and the concept of random variables as primitives of probability theory with uncertainty sets that are derived from some of the asymptotic implications of probability theory, like the central limit theorem. The authors show that the performance analysis questions become highly structured optimization problems for which there exist efficient algorithms that are capable of solving problems in high dimensions. They also demonstrate that the proposed approach achieves computationally tractable methods for (a) analyzing queueing networks, (b) designing multi-item, multi-bidder auctions with budget constraints, and (c) pricing multi-dimensional options.
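A representative construction in this line of work is a central-limit-theorem-based uncertainty set (sketched here in generic notation; Bandi and Bertsimas (2012) develop several variants):

\[
\mathcal{U} = \Big\{ (\xi_1,\dots,\xi_n) : \Big| \sum_{i=1}^n \xi_i - n\mu \Big| \le \Gamma \, \sigma \sqrt{n} \Big\},
\]

where \(\mu\) and \(\sigma\) are the mean and standard deviation of the \(\xi_i\) and \(\Gamma\) controls the confidence level, mimicking the concentration that the central limit theorem guarantees for sums of independent random variables.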
2.4.2 Distributionally Robust Optimization

Ben-Tal et al. (2010) considers the optimization of a worst-case expected-value criterion, where the worst case is computed over all probability distributions within a set. The contribution of the work is to define a notion of robustness that allows for different guarantees for different subsets of probability measures. The concept of distributional robustness is also explored in Goh and Sim (2010), with an emphasis on linear and piecewise-linear decision rules to reformulate the original problem in a flexible manner using expected-value terms. Xu et al. (2012) also investigates probabilistic interpretations of robust optimization.

A related area of study is worst-case optimization with partial information on the moments of distributions. In particular, Popescu (2007) analyzes robust solutions to a certain class of stochastic optimization problems, using mean-covariance information about the distributions underlying the uncertain parameters. The author connects the problem for a broad class of objective functions to a univariate mean-variance robust objective and, subsequently, to a (deterministic) parametric quadratic programming problem.

The reader is referred to Doan (2010) for a moment-based uncertainty model for stochastic optimization problems, which addresses the ambiguity of probability distributions of random parameters with a minimax decision rule, and a comparison with data-driven approaches. Distributionally robust optimization in the context of data-driven problems is the focus of Delage (2009), which uses observed data to define a "well structured" set of distributions that is guaranteed with high probability to contain the distribution from which the samples were drawn. Zymler et al. (2012a) develop tractable semidefinite programming (SDP) based approximations for distributionally robust individual and joint chance constraints, assuming that only the first- and second-order moments as well as the support of the uncertain parameters are given. Becker (2011) studies the distributionally robust optimization problem with known mean, covariance and support, and develops a decomposition method for this family of problems which recursively derives sub-policies along projected dimensions of uncertainty while providing a sequence of bounds on the value of the derived policy. Robust linear optimization using distributional information is further studied in Kang (2008).

Further, Delage and Ye (2010) investigates distributional robustness with moment uncertainty. Specifically, uncertainty affects the problem both in terms of the distribution and of its moments. The authors show that the resulting problems can be solved efficiently and prove that the solutions exhibit, with high probability, best worst-case performance over a set of distributions.

Bertsimas et al. (2010) proposes a semidefinite optimization model to address minimax two-stage stochastic linear problems with risk aversion, when the distribution of the second-stage random variables belongs to a set of multivariate distributions with known first and second moments. The minimax solutions provide a natural distribution to stress-test stochastic optimization problems under distributional ambiguity. Cromvik and Patriksson (2010a) show that, under certain assumptions, global optima and stationary solutions of stochastic mathematical programs with equilibrium constraints are robust with respect to changes in the underlying probability distribution. Works such as Zhu and Fukushima (2009) and Zymler (2010) also study distributional robustness in the context of specific applications, such as portfolio management.
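The common template behind these works is the distributionally robust problem (generic notation, with a moment-based ambiguity set given as one example):

\[
\min_{x} \; \sup_{\mathbb{P} \in \mathcal{P}} \; \mathbb{E}_{\mathbb{P}}[f(x,\xi)],
\qquad
\mathcal{P} = \big\{\mathbb{P} : \mathbb{E}_{\mathbb{P}}[\xi] = \mu, \;\; \mathbb{E}_{\mathbb{P}}[(\xi-\mu)(\xi-\mu)^\top] \preceq \Sigma \big\},
\]

where the ambiguity set \(\mathcal{P}\) collects all distributions consistent with the available information (here, first and second moments). The papers above differ mainly in how \(\mathcal{P}\) is defined and in the resulting tractable reformulations.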
2.5 Connection with Risk Theory

Bertsimas and Brown (2009) describe how to connect uncertainty sets in robust linear optimization to coherent risk measures, an example of which is Conditional Value-at-Risk. In particular, the authors show the link between polyhedral uncertainty sets of a special structure and a subclass of coherent risk measures called distortion risk measures. Independently, Chen et al. (2007) present an approach for constructing uncertainty sets for robust optimization using new deviation measures that capture the asymmetry of the distributions. These deviation measures lead to improved approximations of chance constraints.

Dentcheva and Ruszczynski (2010) propose the concept of robust stochastic dominance and show its application to risk-averse optimization. They consider stochastic optimization problems where risk-aversion is expressed by a robust stochastic dominance constraint, and develop necessary and sufficient conditions of optimality for such optimization problems in the convex case. In the nonconvex case, they derive necessary conditions of optimality under additional smoothness assumptions of some mappings involved in the problem.

2.6 Nonlinear Optimization

Robust nonlinear optimization remains much less widely studied to date than its linear counterpart. Bertsimas et al. (2010c) presents a robust optimization approach for unconstrained non-convex problems and problems based on simulations. Such problems arise for instance in the partial differential equations literature and in engineering applications such as nanophotonic design. An appealing feature of the approach is that it does not assume any specific structure for the problem. The case of robust nonlinear optimization with constraints is investigated in Bertsimas et al. (2010b), with an application to radiation therapy for cancer treatment. Bertsimas and Nohadani (2010) further explore robust nonconvex optimization in contexts where solutions are not known explicitly, e.g., have to be found using simulation. They present a robust simulated annealing algorithm that improves performance and robustness of the solution.

Further, Boni et al. (2008) analyze problems with uncertain conic quadratic constraints, formulating an approximate robust counterpart, and Zhang (2007) provides formulations for nonlinear programming problems that are valid in the neighborhood of the nominal parameters and robust to the first order. Hsiung et al. (2008) present tractable approximations to robust geometric programming, by using piecewise-linear convex approximations of each nonlinear constraint. Geometric programming is also investigated in Shen et al. (2008), where the robustness is injected at the level of the algorithm and seeks to avoid obtaining infeasible solutions because of the approximations used in the traditional approach. Interval uncertainty-based robust optimization for convex and non-convex quadratic programs is considered in Li et al. (2011). Takeda et al. (2010) study robustness for uncertain convex quadratic programming problems with ellipsoidal uncertainties and propose a relaxation technique based on random sampling for robust deviation optimization. Lasserre (2011) considers minimax and robust models of polynomial optimization.

A special case of nonlinear problems that are linear in the decision variables but convex in the uncertainty when the worst-case objective is to be maximized is investigated in Kawas and Thiele (2011a). In that setting, exact and tractable robust counterparts can be derived. A special class of nonconvex robust optimization is examined in Kawas and Thiele (2011b). Robust nonconvex optimization is examined in detail in Teo (2007), which presents a method that is applicable to arbitrary objective functions by iteratively moving along descent directions and terminates at a robust local minimum.

3 Applications of Robust Optimization

We describe below examples to which robust optimization has been applied. While an appealing feature of robust optimization is that it leads to models that can be solved using off-the-shelf software, it is worth pointing out the existence of algebraic modeling tools that facilitate the formulation and subsequent analysis of robust optimization problems on the computer (Goh and Sim, 2011).

3.1 Production, Inventory and Logistics

3.1.1 Classical logistics problems

The capacitated vehicle routing problem with demand uncertainty is studied in Sungur et al. (2008), with a more extensive treatment in Sungur (2007), and the robust traveling salesman problem with interval data in Montemanni et al. (2007). Remli and Rekik (2012) considers the problem of combinatorial auctions in transportation services when shipment volumes are uncertain, and proposes a two-stage robust formulation solved using a constraint generation algorithm. Zhang (2011) investigates two-stage minimax regret robust uncapacitated lot-sizing problems with demand uncertainty, in particular showing that the problem is polynomially solvable under the interval uncertain demand set.
3.1.2 Scheduling

Goren and Sabuncuoglu (2008) analyze robustness and stability measures for scheduling in a single-machine environment subject to machine breakdowns, and embed them in a tabu-search-based scheduling algorithm. Mittal (2011) investigates efficient algorithms that give optimal or near-optimal solutions for problems with non-linear objective functions, with a focus on robust scheduling and service operations. Examples considered include parallel machine scheduling problems with the makespan objective, appointment scheduling, and assortment optimization problems with logit choice models. Hazir et al. (2010) considers robust scheduling and robustness measures for the discrete time/cost trade-off problem.

3.1.3 Facility location

An important question in logistics is not only how to operate a system most efficiently but also how to design it. Baron et al. (2011) apply robust optimization to the problem of locating facilities in a network facing uncertain demand over multiple periods. They consider a multi-period fixed-charge network location problem for which they find the number of facilities, their location and capacities, the production in each period, and the allocation of demand to facilities. The authors show that different models of uncertainty lead to very different solution network topologies, with the model with box uncertainty set opening fewer, larger facilities. A robust version of the location transportation problem with uncertain demand has also been investigated using a two-stage formulation; the resulting robust formulation is a convex (nonlinear) program, and a cutting plane algorithm can be applied to solve the problem exactly.

Atamtürk and Zhang (2007) study the network flow and design problem under uncertainty from a complexity standpoint, with applications to lot-sizing and location-transportation problems, while Bardossy (2011) presents a dual-based local search approach for deterministic, stochastic, and robust variants of the connected facility location problem. The robust capacity expansion problem of network flows is investigated in Ordonez and Zhao (2007), which provides tractable reformulations under a broad set of assumptions. Mudchanatongsuk et al. (2008) analyze the network design problem under transportation cost and demand uncertainty. They present a tractable approximation when each commodity only has a single origin and destination, and an efficient column generation for networks with path constraints. Atamtürk and Zhang (2007) also provide complexity results for the two-stage network flow and design problem; complexity results for the robust network flow and network design problem are also provided in Minoux (2009) and Minoux (2010). The problem of designing an uncapacitated network in the presence of link failures and a competing mode is investigated in Laporte et al. (2010) in a railway application, using a game theoretic perspective. Torres Soto (2009) also takes a comprehensive view of the facility location problem by determining not only the optimal location but also the optimal time for establishing capacitated facilities when demand and cost parameters are time varying. The models are solved using Benders' decomposition or heuristics such as local search and simulated annealing. In addition, the robust network flow problem is also analyzed in Boyko (2010), which proposes a stochastic formulation of the minimum cost flow problem aimed at finding network design and flow assignments subject to uncertain factors, such as network component disruptions/failures, when the risk measure is Conditional Value-at-Risk. Nagurney and Qiang (2009) suggest a relative total cost index for the evaluation of transportation network robustness in the presence of degradable links and alternative travel behavior.
Further, the problem of locating a competitive facility in the plane is studied in Blanquero et al. (2011) with a robustness criterion. Supply chain design problems are also studied in Pan and Nagi (2010) and Poojari et al. (2008).

3.1.4 Inventory management

The topic of robust multi-stage inventory management has been investigated in detail in Bienstock and Ozbay (2008), through the computation of robust basestock levels, and in Ben-Tal et al. (2009), through an extension of the Affinely Adjustable Robust Counterpart framework to control inventories under demand uncertainty. See and Sim (2010) study a multi-period inventory control problem under ambiguous demand, for which only mean, support and some measures of deviations are known, using a factor-based model. The parameters of the replenishment policies are obtained by solving a second-order conic programming problem.

Song (2010) considers stochastic inventory control in robust supply chain systems. The work proposes an integrated approach that combines data fitting and inventory optimization in a single step, using histograms directly as the inputs for the optimization model, for the single-item multi-period periodic-review stochastic lot-sizing problem. Operation and planning issues for dynamic supply chain and transportation networks in uncertain environments are considered in Chung (2010), with examples drawn from emergency logistics planning, network design and congestion pricing problems.

3.1.5 Industry-specific applications

Ang et al. (2012) propose a robust storage assignment approach in unit-load warehouses facing variable supply and uncertain demand in a multi-period setting. The authors assume a factor-based demand model and minimize the worst-case expected total travel in the warehouse under distributional ambiguity of demand. A related problem is considered in Werners and Wuelfing (2010), which optimizes internal transports at a parcel sorting center. Galli (2011) describes the models and algorithms that arise from applying recoverable robust optimization to train platforming and rolling stock planning, building on the concept of recoverable robustness defined in earlier work.
Probabilistic Model Checking of an Anonymity System

Vitaly Shmatikov
SRI International
333 Ravenswood Avenue
Menlo Park, CA 94025, U.S.A.
shmat@

Abstract

We use the probabilistic model checker PRISM to analyze the Crowds system for anonymous Web browsing. This case study demonstrates how probabilistic model checking techniques can be used to formally analyze security properties of a peer-to-peer group communication system based on random message routing among members. The behavior of group members and the adversary is modeled as a discrete-time Markov chain, and the desired security properties are expressed as PCTL formulas. The PRISM model checker is used to perform automated analysis of the system and verify the anonymity guarantees it provides. Our main result is a demonstration of how certain forms of probabilistic anonymity degrade when group size increases or random routing paths are rebuilt, assuming that the corrupt group members are able to identify and/or correlate multiple routing paths originating from the same sender.

1 Introduction

Formal analysis of security protocols is a well-established field. Model checking and theorem proving techniques [Low96, MMS97, Pau98, CJM00] have been extensively used to analyze secrecy, authentication and other security properties of protocols and systems that employ cryptographic primitives such as public-key encryption, digital signatures, etc. Typically, the protocol is modeled at a highly abstract level and the underlying cryptographic primitives are treated as secure "black boxes" to simplify the model. This approach discovers attacks that would succeed even if all cryptographic functions were perfectly secure.

Conventional formal analysis of security is mainly concerned with security against the so-called Dolev-Yao attacks, following [DY83]. A Dolev-Yao attacker is a non-deterministic process that has complete control over the communication network and can perform any combination of a given set of attacker operations, such as intercepting any message, splitting messages into parts, decrypting if it knows the correct decryption key, assembling fragments of messages into new messages and replaying them out of context, etc.

Many proposed systems for anonymous communication aim to provide strong, non-probabilistic anonymity guarantees. This includes proxy-based approaches to anonymity such as the Anonymizer [Ano], which hide the sender's identity for each message by forwarding all communication through a special server, and MIX-based anonymity systems [Cha81] that blend communication between different senders and recipients, thus preventing a global eavesdropper from linking sender-recipient pairs. Non-probabilistic anonymity systems are amenable to formal analysis in the same non-deterministic Dolev-Yao model as used for verification of secrecy and authentication protocols. Existing techniques for the formal analysis of anonymity in the non-deterministic model include traditional process formalisms such as CSP [SS96] and a special-purpose logic of knowledge [SS99].
In this paper, we use probabilistic model checking to analyze anonymity properties of a gossip-based system. Such systems fundamentally rely on probabilistic message routing to guarantee anonymity. The main representative of this class of anonymity systems is Crowds [RR98]. Instead of protecting the user's identity against a global eavesdropper, Crowds provides protection against collaborating local eavesdroppers. All communication is routed randomly through a group of peers, so that even if some of the group members collaborate and share collected local information with the adversary, the latter is not likely to distinguish true senders of the observed messages from randomly selected forwarders.

Conventional formal analysis techniques that assume a non-deterministic attacker in full control of the communication channels are not applicable in this case. Security properties of gossip-based systems depend solely on the probabilistic behavior of protocol participants, and can be formally expressed only in terms of relative probabilities of certain observations by the adversary. The system must be modeled as a probabilistic process in order to capture its properties faithfully.

Using the analysis technique developed in this paper, namely, formalization of the system as a discrete-time Markov chain and probabilistic model checking of this chain with PRISM, we uncovered two subtle properties of Crowds that cause degradation of the level of anonymity provided by the system to the users. First, if corrupt group members are able to detect that messages along different routing paths originate from the same (unknown) sender, the probability of identifying that sender increases as the number of observed paths grows (the number of paths must grow with time since paths are rebuilt when crowd membership changes). Second, the confidence of the corrupt members that they detected the correct sender increases with the size of the group. The first flaw was reported independently by Malkhi [Mal01] and Wright et al. [WALS02], while the second, to the best of our knowledge, was reported for the first time in the conference version of this paper [Shm02]. In contrast to the analysis by Wright et al. that relies on manual probability calculations, we discovered both potential vulnerabilities of Crowds by automated probabilistic model checking.

Previous research on probabilistic formal models for security focused on (i) probabilistic characterization of non-interference [Gra92, SG95, VS98], and (ii) process formalisms that aim to faithfully model probabilistic properties of cryptographic primitives [LMMS99, Can00]. This paper attempts to directly model and analyze security properties based on discrete probabilities, as opposed to asymptotic probabilities in the conventional cryptographic sense. Our analysis method is applicable to other probabilistic anonymity systems such as Freenet [CSWH01] and onion routing [SGR97]. Note that the potential vulnerabilities we discovered in the formal model of Crowds may not manifest themselves in the implementations of Crowds or other, similar systems that take measures to prevent corrupt routers from correlating multiple paths originating from the same sender.

2 Markov Chain Model Checking

We model the probabilistic behavior of a peer-to-peer communication system as a discrete-time Markov chain (DTMC), which is a standard approach in probabilistic verification [LS82, HS84, Var85, HJ94]. Formally, a Markov chain can be defined as a tuple (S, s_0, P, L) consisting of a finite set of states S, the initial state s_0 ∈ S, the transition relation P : S × S → [0, 1] such that for every s ∈ S the probabilities P(s, s′) over all s′ ∈ S sum to 1, and a labeling function L from states to a finite set of propositions.

In our model, the states of the Markov chain will represent different stages of routing path construction. As usual, a state is defined by the values of all system variables. For each state, the corresponding row of the transition matrix defines the probability distributions which govern the behavior of group members once the system reaches that state.
2.1 Overview of PCTL

We use the temporal probabilistic logic PCTL [HJ94] to formally specify properties of the system to be checked. PCTL can express properties of the form "under any scheduling of processes, the probability that event E occurs is at least p."

First, define state formulas inductively as follows:

    φ ::= true | false | a | φ ∧ φ | φ ∨ φ | ¬φ | P_{>p}[ψ]

where atomic propositions a are predicates over state variables. State formulas of the form P_{>p}[ψ] are explained below. Define path formulas as follows:

    ψ ::= □φ | φ₁ U φ₂ | φ₁ U^{≤k} φ₂

Unlike state formulas, which are simply first-order propositions over a single state, path formulas represent properties of a chain of states (here path refers to a sequence of state space transitions rather than a routing path in the Crowds specification). In particular, □φ is true iff φ is true for every state in the chain; φ₁ U φ₂ is true iff φ₁ is true for all states in the chain until φ₂ becomes true, and φ₂ is true for all subsequent states; φ₁ U^{≤k} φ₂ is true iff φ₁ U φ₂ and there are no more than k states before φ₂ becomes true. For any state s and path formula ψ, P_{>p}[ψ] is a state formula which is true iff state space paths starting from s satisfy path formula ψ with probability greater than p.

For the purposes of this paper, we will be interested in formulas of the form P_{>p}[◇φ], where ◇φ = true U φ, evaluated in the initial state s_0. Here φ specifies a system configuration of interest, typically representing a particular observation by the adversary that satisfies the definition of a successful attack on the protocol. Property ◇φ is a liveness property: P_{>p}[◇φ] holds in s_0 iff φ will eventually hold with probability greater than p. For instance, if observe_i is a state variable representing the number of times one of the corrupt members received a message from the honest member no. i, then P_{>0.5}[◇(observe_i ≥ 2)] holds in s_0 iff the probability of corrupt members eventually observing member no. i twice or more is greater than 0.5.

Expressing properties of the system in PCTL allows us to reason formally about the probability of corrupt group members collecting enough evidence to successfully attack anonymity. We use model checking techniques developed for verification of discrete-time Markov chains to compute this probability automatically.

2.2 PRISM model checker

The automated analyses described in this paper were performed using PRISM, a probabilistic model checker developed by Kwiatkowska et al. [KNP01]. The tool supports both discrete- and continuous-time Markov chains, and Markov decision processes. As described in section 4, we model probabilistic peer-to-peer communication systems such as Crowds simply as discrete-time Markov chains, and formalize their properties in PCTL.

The behavior of the system processes is specified using a simple module-based language inspired by Reactive Modules [AH96]. State variables are declared in the standard way. For example, the following declaration

    deliver: bool init false;

declares a boolean state variable deliver, initialized to false, while the following declaration

    const TotalRuns = 4;
    ...
    observe1: [0..TotalRuns] init 0;

declares a constant TotalRuns equal to 4, and then an integer state variable observe1 ranging from 0 to TotalRuns, initialized to 0.

State transition rules are specified using guarded commands of the form

    [] <guard> -> <command>;

where <guard> is a predicate over system variables, and <command> is the transition executed by the system if the guard condition evaluates to true. A command often has the form <variable>'=<expression> & ... & <variable>'=<expression>, which means that in the next state (i.e., that obtained after the transition has been executed), each listed state variable is assigned the result of evaluating the corresponding arithmetic expression.

If the transition must be chosen probabilistically, the discrete probability distribution is specified as

    [] <guard> -> <prob1>:<command1> + ... + <probN>:<commandN>;

The transition represented by <command_i> is executed with probability <prob_i>, and the probabilities <prob_1>, ..., <probN> sum to 1.
Security properties to be checked are stated as PCTL formulas (see section 2.1).

Given a formal system specification, PRISM constructs the Markov chain and determines the set of reachable states, using MTBDDs and BDDs, respectively. Model checking a PCTL formula reduces to a combination of reachability-based computation and solving a system of linear equations to determine the probability of satisfying the formula in each reachable state. The model checking algorithms employed by PRISM include [BdA95, BK98, Bai98]. More details about the implementation and operation of PRISM can be found at http://www.cs.bham.ac.uk/~dxp/prism/ and in [KNP01].

Since PRISM only supports model checking of finite DTMC, in our case study of Crowds we only analyze anonymity properties of finite instances of the system. By changing parameters of the model, we demonstrate how anonymity properties evolve with changes in the system configuration. Wright et al. [WALS02] investigated related properties of the Crowds system in the general case, but they do not rely on tool support and their analyses are manual rather than automated.
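To illustrate the linear-equation step mentioned above, the following is a minimal sketch (in Python, not PRISM's actual symbolic implementation) of how the probability of eventually reaching a target state in a small, hypothetical DTMC can be computed: for the transient states, the reachability probabilities x satisfy the linear system x = Ax + b.

    import numpy as np

    # Transition matrix of a hypothetical 4-state DTMC (rows sum to 1).
    # State 3 is the "attack observed" target; state 2 is an absorbing safe state.
    P = np.array([
        [0.0, 0.6, 0.2, 0.2],
        [0.0, 0.0, 0.5, 0.5],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0],
    ])
    target = {3}
    absorbing_safe = {2}
    transient = [s for s in range(len(P)) if s not in target | absorbing_safe]

    # x[s] = sum over transient t of P[s,t]*x[t] + sum over target t of P[s,t]
    A = P[np.ix_(transient, transient)]
    b = P[np.ix_(transient, sorted(target))].sum(axis=1)
    x = np.linalg.solve(np.eye(len(transient)) - A, b)

    for s, prob in zip(transient, x):
        print(f"P(eventually reach target | start={s}) = {prob:.4f}")

This mirrors, in miniature, the computation PRISM performs symbolically over the full reachable state space.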
3 Crowds Anonymity System

Providing an anonymous communication service on the Internet is a challenging task. While conventional security mechanisms such as encryption can be used to protect the content of messages and transactions, eavesdroppers can still observe the IP addresses of communicating computers, timing and frequency of communication, etc. A Web server can trace the source of the incoming connection, further compromising anonymity. The Crowds system was developed by Reiter and Rubin [RR98] for protecting users' anonymity on the Web.

The main idea behind gossip-based approaches to anonymity such as Crowds is to hide each user's communications by routing them randomly within a crowd of similar users. Even if an eavesdropper observes a message being sent by a particular user, it can never be sure whether the user is the actual sender, or is simply routing another user's message.

3.1 Path setup protocol

A crowd is a collection of users, each of whom is running a special process called a jondo which acts as the user's proxy. Some of the jondos may be corrupt and/or controlled by the adversary. Corrupt jondos may collaborate and share their observations in an attempt to compromise the honest users' anonymity. Note, however, that all observations by corrupt group members are local. Each corrupt member may observe messages sent to it, but not messages transmitted on the links between honest jondos. An honest crowd member has no way of determining whether a particular jondo is honest or corrupt. The parameters of the system are the total number of members, the number of corrupt members, and the forwarding probability, which is explained below.

To participate in communication, all jondos must register with a special server which maintains membership information. Therefore, every member of the crowd knows the identities of all other members. As part of the join procedure, the members establish pairwise encryption keys which are used to encrypt pairwise communication, so the contents of the messages are secret from an external eavesdropper.

Anonymity guarantees provided by Crowds are based on the path setup protocol, which is described in the rest of this section. The path setup protocol is executed each time one of the crowd members wants to establish an anonymous connection to a Web server. Once a routing path through the crowd is established, all subsequent communication between the member and the Web server is routed along it. We will call one run of the path setup protocol a session. When crowd membership changes, the existing paths must be scrapped and a new protocol session must be executed in order to create a new random routing path through the crowd to the destination. Therefore, we'll use the terms path reformulation and protocol session interchangeably.

When a user wants to establish a connection with a Web server, its browser sends a request to the jondo running locally on her computer (we will call this jondo the initiator). Each request contains information about the intended destination. Since the objective of Crowds is to protect the sender's identity, it is not problematic that a corrupt router can learn the recipient's identity. The initiator starts the process of creating a random path to the destination as follows:

- The initiator selects a crowd member at random (possibly itself), and forwards the request to it, encrypted with the corresponding pairwise key. We'll call the selected member the forwarder.

- The forwarder flips a biased coin. With probability 1 − p_f, it delivers the request directly to the destination. With probability p_f, it selects a crowd member at random (possibly itself) as the next forwarder in the path, and forwards the request to it, re-encrypted with the appropriate pairwise key. The next forwarder then repeats this step.

Each forwarder maintains an identifier for the created path. If the same jondo appears in different positions on the same path, the identifiers are different to avoid infinite loops. Each subsequent message from the initiator to the destination is routed along this path, i.e., the paths are static: once established, they are not altered often. This is necessary to hinder corrupt members from linking multiple paths originating from the same initiator, and using this information to compromise the initiator's anonymity as described in section 3.2.3.
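As an aside, the probabilistic structure of path setup is simple enough to prototype directly. The following Monte Carlo sketch (hypothetical code, not part of the original analysis, which used PRISM) estimates the probability that the initiator immediately precedes a corrupt member on a path, the event whose conditional probability the probable innocence property of the next section bounds:

    import random

    def simulate_path(n_honest: int, n_corrupt: int, pf: float) -> bool:
        """Build one Crowds path; return True if the first corrupt member
        on the path (if any) receives the message straight from the
        initiator (member 0)."""
        n = n_honest + n_corrupt
        sender = 0                      # the initiator
        while True:
            nxt = random.randrange(n)   # uniform choice, possibly itself
            if nxt >= n_honest:         # members n_honest..n-1 are corrupt
                return sender == 0      # corrupt node sees `sender` as source
            sender = nxt
            if random.random() >= pf:   # coin flip: deliver instead of forward
                return False            # path ended with no corrupt observer

    def estimate(n_honest=10, n_corrupt=2, pf=0.8, trials=100_000) -> float:
        hits = sum(simulate_path(n_honest, n_corrupt, pf) for _ in range(trials))
        return hits / trials

    if __name__ == "__main__":
        print(f"P(initiator directly observed) ~ {estimate():.3f}")

Conditioning on the paths that actually contain a corrupt forwarder yields the quantity bounded by the probable innocence condition discussed below.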
3.2 Anonymity properties of Crowds

The Crowds paper [RR98] describes several degrees of anonymity that may be provided by a communication system. Without using anonymizing techniques, none of the following properties are guaranteed on the Web, since browser requests contain information about their source and destination in the clear.

Beyond suspicion: Even if the adversary can see evidence of a sent message, the real sender appears to be no more likely to have originated it than any other potential sender in the system.

Probable innocence: The real sender appears no more likely to be the originator of the message than to not be the originator, i.e., the probability that the adversary observes the real sender as the source of the message is less than 1/2; this places an upper bound on the probability of detection. If the sender is observed by the adversary, she can then plausibly argue that she has been routing someone else's messages.

The Crowds paper focuses on providing anonymity against local, possibly cooperating eavesdroppers, who can share their observations of communication in which they are involved as forwarders, but cannot observe communication involving only honest members. We also limit our analysis to this case.

3.2.1 Anonymity for a single route

It is proved in [RR98] that, for any given routing path, the path initiator in a crowd of n members with forwarding probability p_f has probable innocence against c collaborating crowd members if the following inequality holds:

    n >= (p_f / (p_f - 1/2)) * (c + 1)        (1)

More formally, let H be the event that at least one of the corrupt crowd members is selected for the path, and let I be the event that the path initiator appears in the path immediately before a corrupt crowd member (i.e., the adversary observes the real sender as the source of the messages routed along the path). Condition (1) guarantees that P(I | H) <= 1/2. Malkhi [Mal01] and Wright et al. [WALS02] showed that, given multiple linked paths, the initiator appears more often as a suspect than a random crowd member. The automated analysis described in section 6.1 confirms and quantifies this result. (The technical results of [Shm02] on which this paper is based had been developed independently of [Mal01] and [WALS02], before the latter was published.) In general, [Mal01] and [WALS02] conjecture that there can be no reliable anonymity method for peer-to-peer communication if, in order to start a new communication session, the initiator must originate the first connection before any processing of the session commences. This implies that anonymity is impossible in a gossip-based system with corrupt routers in the absence of decoy traffic.

In section 6.3, we show that, for any given number of observed paths, the adversary's confidence in its observations increases with the size of the crowd. This result contradicts the intuitive notion that bigger crowds provide better anonymity guarantees. It was discovered by automated analysis.

4 Formal Model of Crowds

In this section, we describe our probabilistic formal model of the Crowds system. Since there is no non-determinism in the protocol specification (see section 3.1), the model is a simple discrete-time Markov chain, as opposed to a Markov decision process. In addition to modeling the behavior of the honest crowd members, we also formalize the adversary. The protocol does not aim to provide anonymity against global eavesdroppers. Therefore, it is sufficient to model the adversary as a coalition of corrupt crowd members who only have access to local communication channels, i.e., they can only make observations about a path if one of them is selected as a forwarder. By the same token, it is not necessary to model cryptographic functions, since corrupt members know the keys used to encrypt peer-to-peer links in which they are one of the endpoints, and have no access to links that involve only honest members.

The modeling technique presented in this section is applicable with minor modifications to any probabilistic routing system. In each state of routing path construction, the discrete probability distribution given by the protocol specification is used directly to define the probabilistic transition rule for choosing the next forwarder on the path, if any. If the protocol prescribes an upper bound on the length of the path (e.g., Freenet [CSWH01]), the bound can be introduced as a system parameter as described in section 4.2.3, with the corresponding increase in the size of the state space but no conceptual problems. Probabilistic model checking can then be used to check the validity of PCTL formulas representing properties of the system. In the general case, forwarder selection may be governed by non-deterministic choices, in which case the system would have to be modeled as a Markov decision process rather than a Markov chain.

[Figure: state variables of the model: runCount, good, bad, lastSeen, observe, launch, new, start, run, deliver, recordLast, badObserve]

4.2 Model of honest members

4.2.1 Initiation

Path construction is initiated as follows (syntax of PRISM is described in section 2.2):

    [] launch -> runCount'=TotalRuns & new'=true & launch'=false;
    [] new & (runCount>0) -> (runCount'=runCount-1) & new'=false & start'=true;
    [] start -> lastSeen'=0 & deliver'=false & run'=true & start'=false;
4.2.2 Forwarder selection

The initiator (i.e., the first crowd member on the path, the one whose identity must be protected) randomly chooses the first forwarder from among all group members. We assume that all group members have an equal probability of being chosen, but the technique can support any discrete probability distribution for choosing forwarders.

Forwarder selection is a single step of the protocol, but we model it as two probabilistic state transitions. The first determines whether the selected forwarder is honest or corrupt, the second determines the forwarder's identity. The randomly selected forwarder is corrupt with probability badC, the cumulative probability of selecting one of the corrupt members; otherwise an honest member will be next on the path. Any of the honest crowd members can be selected as the forwarder with equal probability. To illustrate, for a crowd with 10 honest members, the following transition models the second step of forwarder selection:

    [] recordLast & CrowdSize=10 ->
        0.1: lastSeen'=0 & run'=true & recordLast'=false +
        0.1: lastSeen'=1 & run'=true & recordLast'=false +
        ...
        0.1: lastSeen'=9 & run'=true & recordLast'=false;

According to the protocol, each honest crowd member must decide whether to continue building the path by flipping a biased coin. With probability p_f, the forwarder selection transition is enabled again and path construction continues, and with probability 1 − p_f the path is terminated at the current forwarder, and all requests arriving from the initiator along the path will be delivered directly to the recipient.

    [] (good & !deliver & run) ->
        // Continue path construction
        PF: good'=false +
        // Terminate path construction
        notPF: deliver'=true;

4.2.3 Path length

The specification of the Crowds system imposes no upper bound on the length of the path. Moreover, the forwarders are not permitted to know their relative position on the path. Note, however, that the amount of information about the initiator that can be extracted by the adversary from any path, or any finite number of paths, is finite (see sections 4.3 and 4.5).

In systems such as Freenet [CSWH01], requests have a hops-to-live counter to prevent infinite paths, except with very small probability. To model this counter, we may introduce an additional state variable pIndex that keeps track of the length of the path constructed so far. The path construction transition is then coded as follows:

    // Example with Hops-To-Live
    // (NOT CROWDS)
    //
    // Forward with prob. PF, else deliver
    [] (good & !deliver & run & pIndex<MaxPath) ->
        PF: good'=false & pIndex'=pIndex+1 +
        notPF: deliver'=true;
    // Terminate if reached MaxPath,
    // but sometimes not
    // (to confuse adversary)
    [] (good & !deliver & run & pIndex=MaxPath) ->
        smallP: good'=false +
        largeP: deliver'=true;

Introduction of pIndex obviously results in exponential state space explosion, decreasing the maximum system size for which model checking is feasible.

4.2.4 Transition matrix for honest members

To summarize the state space of the discrete-time Markov chain representing correct behavior of protocol participants (i.e., the state space induced by the above transitions), let s_k denote the state in which k links of the current routing path from the initiator have already been constructed, with f_1, ..., f_k the honest forwarders selected for the path so far. Let D denote the state in which path construction has terminated with the final path, and let A be an auxiliary state.
Then, given the set of honest crowd members, the transition matrix is such that each honest member is selected as the next forwarder with equal probability, and the transition to the auxiliary state A has probability badC (see section 4.2.2), i.e., the probability of selecting the adversary is equal to the cumulative probability of selecting some corrupt member.

This abstraction does not limit the class of attacks that can be discovered using the approach proposed in this paper. Any attack found in the model where individual corrupt members are kept separate will be found in the model where their capabilities are combined in a single worst-case adversary. The reason for this is that every observation made by one of the corrupt members in the model with separate corrupt members will be made by the adversary in the model where their capabilities are combined. The amount of information available to the worst-case adversary and, consequently, the inferences that can be made from it are at least as large as those available to any individual corrupt member or a subset thereof.

In the adversary model of [RR98], each corrupt member can only observe its local network. Therefore, it only learns the identity of the crowd member immediately preceding it on the path. We model this by having the corrupt member read the value of the lastSeen variable, and record its observations. This corresponds to reading the source IP address of the messages arriving along the path. For example, for a crowd of size 10, the transition is as follows:

    [] lastSeen=0 & badObserve ->
        observe0'=observe0+1 & deliver'=true & run'=true & badObserve'=false;
    ...
    [] lastSeen=9 & badObserve ->
        observe9'=observe9+1 & deliver'=true & run'=true & badObserve'=false;

The counters observe_i are persistent, i.e., they are not reset for each session of the path setup protocol. This allows the adversary to accumulate observations over several path reformulations. We assume that the adversary can detect when two paths originate from the same member whose identity is unknown (see section 3.2.2).

The adversary is only interested in learning the identity of the first crowd member in the path. Continuing path construction after one of the corrupt members has been selected as a forwarder does not provide the adversary with any new information. This is a very important property since it helps keep the model of the adversary finite. Even though there is no bound on the length of the path, at most one observation per path is useful to the adversary. To simplify the model, we assume that the path terminates as soon as it reaches a corrupt member (modeled by deliver'=true in the transition above). This is done to shorten the average path length without decreasing the power of the adversary.

Each forwarder is supposed to flip a biased coin to decide whether to terminate the path, but the coin flips are local to the forwarder and cannot be observed by other members. Therefore, honest members cannot detect without cooperation that corrupt members always terminate paths. In any case, corrupt members can make their observable behavior indistinguishable from that of the honest members by continuing the path with probability p_f as described in section 4.2.3, even though this yields no additional information to the adversary.
reformulated some finite number of times (determined by the system parameter TotalRuns). We analyze anonymity properties provided by Crowds after successive path reformulations by considering the state space produced by successive executions of the path construction protocol described in section 4.2. As explained in section 4.3, the adversary is permitted to combine its observations of some or all of the paths that have been constructed (the adversary only observes the paths for which some corrupt member was selected as one of the forwarders). The adversary may then use this information to infer the path initiator's identity. Because forwarder selection is probabilistic, the adversary's ability to collect enough information to successfully identify the initiator can only be characterized probabilistically, as explained in section 5.

4.5 Finiteness of the adversary's state space

The state space of the honest members defined by the transition matrix of section 4.2.4 is infinite, since there is no a priori upper bound on the length of each path. Corrupt members, however, even if they collaborate, can make at most one observation per path, as explained in section 4.3. As long as the number of path reformulations is bounded (see section 4.4), only a finite number of paths will be constructed, and the adversary will be able to make only a finite number of observations. Therefore, the adversary only needs finite memory, and the adversary's state space is finite.

In general, anonymity is violated if the adversary has a high probability of making a certain observation (see section 5). To find out whether Crowds satisfies
29. Emergency departments: the use of simulation and design of experiments for estimating maximum capacity
Proceedings of the 2003 Winter Simulation Conference
S. Chick, P. J. Sánchez, D. Ferrin, and D. J. Morrice, eds.

THE USE OF SIMULATION AND DESIGN OF EXPERIMENTS FOR ESTIMATING MAXIMUM CAPACITY IN AN EMERGENCY ROOM

Felipe F. Baesler, Hector E. Jahnsen
Departamento de Ingeniería Industrial, Universidad del Bío-Bío
Av. Collao 1202, Casilla 5-C, Concepción, CHILE

Mahal DaCosta
Facultad de Medicina, Universidad de Concepción
Av. Roosevelt 1550, Concepción, CHILE

ABSTRACT

This work presents the results obtained after using a simulation model for estimating the maximum possible demand increment in an emergency room of a private hospital in Chile. To achieve this objective, the first step was to create a simulation model of the system under study. This model was used to create a curve for predicting the behavior of the variable "patient time in system" and to estimate the maximum possible demand that the system can absorb. Finally, a design of experiments was conducted in order to define the minimum number of physical and human resources required to serve this demand.

1 INTRODUCTION

The Hospital del Trabajador in the city of Concepción, Chile, is an institution that offers a wide variety of healthcare services. The hospital is mainly oriented to serving patients who are workers in local companies and who have had work accidents or diseases developed from their professional activities. The companies have contracts with the hospital in order to get treatment for their workers. For this reason, the most important part of the demand is controlled by the hospital through the number of companies affiliated with it. In other words, if more companies were affiliated with the hospital, it could be said that the hospital was incrementing its demand. The hospital's interest is to estimate the amount of extra demand that it is able to absorb, considering two main issues: maintaining the patients' waiting-time standard, and respecting some physical and human resource limitations.

2 BACKGROUND

Simulation is an excellent and flexible tool for modeling different types of environments. It is possible to find in the literature several simulation experiences in healthcare. For example, in the area of emergency room simulation it is possible to highlight Garcia et al. (1995). They present a simulation model focused on reducing waiting time in the emergency room of Mercy Hospital in Miami. A similar application is presented in Baesler et al. (1998), where important issues that have to be considered when interacting with healthcare practitioners during a simulation project are presented. Other cases not related to emergency rooms can be found in Pitt (1997), which presents a simulation system to support strategic resource planning in healthcare. Lowery (1996) presents an introduction to simulation in healthcare, showing very important considerations and barriers in a simulation project. Sepulveda et al. (1999) show how simulation is used to understand and improve patient flow in an ambulatory cancer treatment center. This same study is complemented in Baesler & Sepulveda (2001), where a multi-objective optimization analysis is performed.

3 SYSTEM DESCRIPTION

The emergency department of the hospital is open 24 hours a day and receives an average of 1560 patients a month. Besides its internal capacity, the emergency department shares resources with other hospital services such as X rays, scanner, MRI, clinical laboratory, blood bank, pharmacy, and surgery.
The human resources work in shifts, but at every moment three physicians, one nurse, and two or three paramedics are available, depending on the time of day. The patients receive their examination and general treatment in five rooms, three of them for general use and the other two for specific cases. The general patient process is presented in Figure 1.

[Figure 1: Patient Flow]

When the patient arrives at the hospital, a receptionist collects his or her personal information. After this, the patient waits for the availability of a treatment room and a paramedic. When this occurs, the patient is walked to the room, where the paramedic takes his or her vital signs. Then the physician is informed that a patient is waiting for treatment. If the physician is available, he goes to see the patient and performs the examination. After the physician evaluates the patient, he could conclude that additional exams are required. In this case the patient is transported to the corresponding test area, such as X rays, scanner, MRI, etc. Finally, the patient returns to the exam room, the physician concludes the treatment, and the patient is sent home.

4 THE SIMULATION MODEL

The simulation model was constructed using the simulation package Arena 4.0. The information required as input for the model was collected from the hospital databases, such as interarrival rates, type of diagnosis, and type and duration of treatments. A replication/deletion approach was used in order to run the model for a length of 1 month with a warm-up period of 4 days. A total of 57 replications were necessary in order to obtain the statistical precision required. The results obtained after running the as-is scenario were validated using hospital data.

The objective of this project was to predict the maximum demand that the emergency room is able to afford without increasing the waiting time over an acceptable level. The response variable "time in system", which represents the total time a patient spends inside the emergency room, was used as a service-level parameter. Currently, patients spend an average of approximately 70 minutes inside the system. The management is willing to increase this time up to 100 minutes in order to increment demand.

At the same time, they are willing to expand their resources within a range that keeps the project feasible; this means adding one receptionist, two physicians, and two paramedics, and building one extra room. The question that arises is: what is the maximum demand that the emergency room can afford without going over 100 minutes of average patient time in system with this new configuration of resources? In order to answer this question it was necessary to understand the behavior of the variable time in system versus changes in demand. This was done by running the simulation model with the new configuration of resources at five different levels of demand. Table 1 shows the percentage of demand increase and the number of patients associated with each level of demand.

Table 1: Changes in Demand

  % Demand Increase   Patients per day   Patients per month
  As-Is               52                 1560
  21                  63                 1890
  44                  75                 2250
  70                  88                 2640
  100                 106                3180
  150                 131                3930

The results obtained after running these five scenarios, as well as a polynomial curve that fits the behavior of the time in system, are presented in Figure 2. Interpolating this curve, it is possible to estimate that the level of demand that generates an average time in system of 100 minutes corresponds to a 130% increase in demand.
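As an illustration of this interpolation step (not the authors' code), the following Python sketch fits a quadratic to (demand increase, time in system) points and solves for the 100-minute demand level. The intermediate time values are placeholders, since the text reports only the as-is average (about 70 minutes) and the interpolated answer (about 130%).

```python
# Sketch of the curve-fitting step: fit a polynomial to simulated
# (demand increase, time in system) points and interpolate the demand
# level that yields a 100-minute average. Times are assumed placeholders.
import numpy as np

demand_pct = np.array([0, 21, 44, 70, 100, 150])    # % demand increase
time_in_sys = np.array([70, 74, 79, 85, 92, 107])   # minutes (assumed)

coeffs = np.polyfit(demand_pct, time_in_sys, deg=2)  # quadratic fit
poly = np.poly1d(coeffs)

# Find where the fitted curve crosses 100 minutes.
roots = (poly - 100).roots
feasible = [r.real for r in roots if abs(r.imag) < 1e-9 and 0 <= r.real <= 150]
print("Estimated demand increase for 100 min:", round(feasible[0], 1), "%")
```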
The question now is to determine the minimum number of resources required to achieve this level of demand. The simulation scenarios were carried out using the maximum feasible hospital capacity, but it could be possible that one or more resources of this configuration were underutilized. In this case the same level of demand could be satisfied using fewer resources. To do this, it is necessary to determine which resources could be decreased without altering the system's performance. In order to answer this question, it was decided to perform an experimental design analysis.

[Figure 2: Demand Curve]

5 DESIGN OF EXPERIMENTS

In order to determine the significance of the resources in the system's behavior, a design of experiments was performed. The experiments considered a fixed level of demand (130%) and four factors: physicians, paramedics, exam rooms, and receptionists. Table 2 shows the settings of this experiment.

Table 2: Factor Levels

  Level   Receptionist   Physician   Paramedic   Room
  -       1              3           3           5
  +       2              5           5           6

A fractional factorial design with resolution IV was conducted. This requires a total of 2^(4-1) = 8 simulation scenarios. With this resolution it is possible to determine the significance of the main effects, but the two-way interactions are confounded. The results obtained after performing the experiments are presented in the Pareto chart shown in Figure 3.

This chart shows that the main effects receptionist and paramedic, as well as the confounded interactions AC+BD, are significant. Since it is not possible to determine which one of the interactions AC or BD is the significant one, it is necessary to conduct additional experiments that make it possible to understand the significance of the interaction AC (Physicians-Rooms). The design selected was a full factorial design considering two factors, physicians and rooms. Since the main factors receptionists and paramedics turned out to be significant in the previous experiment, it was decided to fix these factors at the high level, that is, two receptionists and five paramedics, with a demand level of a 130% increase. The experiments were performed, and it was concluded that the two factors were significant, so they have to be set at a high level. Figure 4 presents a response surface plot of the two factors.

The response surface plot indicates that in order to decrease the time in system it is necessary to set the two factors at a high level: six rooms and five physicians. Even though it is clear that the two factors are significant, the plot shows that the maximum time in system allowed (100 minutes) is reached before the level of five physicians. A contour map explains this issue better and is presented in Figure 5.

[Figure 5: Contour Map]

The contour line highlighted with the two arrows represents the level of resources required to reach a time in system of 102 minutes, very close to 100 minutes. It can be concluded that by fixing the factor physicians at a level of 4.5, it is possible to maintain the time in system at 100 minutes. This interesting result could be interpreted as a requirement of four full-time physicians plus one half-time physician.
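For concreteness, here is a minimal Python sketch (an illustration under stated assumptions, not the authors' procedure) that generates a 2^(4-1) resolution IV design with the defining relation I = ABCD, using the factor levels of Table 2. Under this relation the two-factor interactions alias in pairs, AC with BD among them, which is exactly the confounding the Pareto chart could not resolve.

```python
# Generate the eight runs of a 2^(4-1) resolution IV design by hand:
# factors A, B, C vary freely, and D = ABC (defining relation I = ABCD).
from itertools import product

factors = ["Receptionist", "Physician", "Paramedic", "Room"]  # A, B, C, D
low_high = {"Receptionist": (1, 2), "Physician": (3, 5),
            "Paramedic": (3, 5), "Room": (5, 6)}              # Table 2 levels

runs = []
for a, b, c in product((-1, 1), repeat=3):
    d = a * b * c                      # generator D = ABC
    runs.append((a, b, c, d))

for run in runs:
    # Map coded levels (-1/+1) to the actual resource counts.
    setting = {f: low_high[f][(s + 1) // 2] for f, s in zip(factors, run)}
    print(setting)
```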
6 CONCLUSIONS

This study showed how simulation can be used to estimate the maximum level of demand that an emergency room is able to absorb, and which configuration of resources is required to maintain quality of service. The results showed that the resources required to reach this level of demand are close to the feasible maximum. For example, the hospital layout permits building just one extra exam room. Probably the most important conclusion of this study is that 4.5 physicians are required (four full-time and one half-time). Of course, this means important savings for the hospital.

REFERENCES

Baesler, F., Sepúlveda, J. A., Thompson, W., Kotnour, T. (1998), Working with Healthcare Practitioners to Improve Hospital Operations with Simulation, in Proceedings of ArenaSphere '98, 122-130.
Baesler, F., Sepúlveda, J. (2001), Multi-Objective Simulation Optimization for a Cancer Treatment Center, in Proceedings of the 2001 Winter Simulation Conference, Virginia, USA, B. A. Peters, J. S. Smith, D. J. Medeiros, and M. W. Rohrer (eds.), 1405-1411.
Garcia, M. L., Centeno, M. A., Rivera, C., DeCario, N. (1995), Reducing Time in an Emergency Room Via a Fast-Track, in Proceedings of the 1995 Winter Simulation Conference, Alexopoulus, Kang, Lilegdon & Goldman (eds.), 1048-1053.
Pitt, M. (1997), A Generalised Simulation System to Support Strategic Resource Planning in Healthcare, in Proceedings of the 1997 Winter Simulation Conference, S. Andradóttir, K. J. Healy, D. H. Withers, and B. L. Nelson (eds.), 1155-1162.
Lowery, J. C. (1996), Introduction to Simulation in Health Care, in Proceedings of the 1996 Winter Simulation Conference, J. M. Charnes, D. J. Morrice, D. T. Brunner, and J. J. Swain (eds.), 78-84.
Sepúlveda, J. A., Thompson, W., Baesler, F., Alvarez, M. (1999), The Use of Simulation for Process Improvement in a Cancer Treatment Center, in Proceedings of the 1999 Winter Simulation Conference, Phoenix, Arizona, USA, 1541-1548.

BIOGRAPHIES

FELIPE F. BAESLER is an Assistant Professor of Industrial Engineering at Universidad del Bío-Bío in Concepción, Chile. He received his Ph.D. from the University of Central Florida in 2000. His research interests are in simulation optimization and artificial intelligence. His email is <fbaesler@ubiobio.cl>.

HECTOR E. JAHNSEN is a graduate student in the Department of Industrial Engineering at the Universidad del Bío-Bío. He works as a research assistant in projects related to industrial and healthcare simulation. His e-mail is <hjahnsen@alumnos.ubiobio.cl>.

MAHAL DACOSTA is an assistant professor at the college of medicine at the Universidad de Concepción in Chile. She has a doctorate in bioethics and a master's degree in public health. Her research interests are in the field of public health management. Her email is <gdacosta@udec.cl>.
To transfer or not to transfer
To Transfer or Not To Transfer

Michael T. Rosenstein, Zvika Marx, Leslie Pack Kaelbling
Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, MA 02139
{mtr,zvim,lpk}@

Thomas G. Dietterich
School of Electrical Engineering and Computer Science
Oregon State University
Corvallis, OR 97331
tgd@

Abstract

With transfer learning, one set of tasks is used to bias learning and improve performance on another task. However, transfer learning may actually hinder performance if the tasks are too dissimilar. As described in this paper, one challenge for transfer learning research is to develop approaches that detect and avoid negative transfer using very little data from the target task.

1 Introduction

Transfer learning involves two interrelated learning problems with the goal of using knowledge about one set of tasks to improve performance on a related task. In particular, learning for some target task—the task on which performance is ultimately measured—is influenced by inductive bias learned from one or more auxiliary tasks, e.g., [1, 2, 8, 9]. For example, athletes make use of transfer learning when they practice fundamental skills to improve training in a more competitive setting.

Even for the restricted class of problems addressed by supervised learning, transfer can be realized in many different ways. For instance, Caruana [2] trained a neural network on several tasks simultaneously as a way to induce efficient internal representations for the target task. Wu and Dietterich [9] showed improved image classification by SVMs when trained on a large set of related images but relatively few target images. Sutton and McCallum [7] demonstrated effective transfer by "cascading" a class of graphical models, with the prediction from one classifier serving as a feature for the next one in the cascade. In this paper we focus on transfer using hierarchical Bayesian methods, and elsewhere we report on transfer using learned prior distributions over classifier parameters [5].

In broad terms, the challenge for a transfer learning system is to learn what knowledge should be transferred and how. The emphasis of this paper is the more specific problem of deciding when transfer should be attempted for a particular class of learning algorithms.
With no prior guarantee that the auxiliary and target tasks are sufficiently similar, an algorithm must use the available data to guide transfer learning. We are particularly interested in the situation where an algorithm must detect, perhaps implicitly, that the inductive bias learned from the auxiliary tasks will actually hurt performance on the target task. In the next section, we describe a "transfer-aware" version of the naive Bayes classification algorithm. We then illustrate that the benefits of transfer learning depend, not surprisingly, on the similarity of the auxiliary and target tasks. The key challenge is to identify harmful transfer with very few training examples from the target task. With larger amounts of "target" data, the need for auxiliary training becomes diminished and transfer learning becomes unnecessary.

2 Hierarchical Naive Bayes

The standard naive Bayes algorithm—which we call flat naive Bayes in this paper—has proven to be effective for learning classifiers in non-transfer settings [3]. The flat naive Bayes algorithm constructs a separate probabilistic model for each output class, under the "naive" assumption that each feature has an independent impact on the probability of the class. We chose naive Bayes not only for its effectiveness but also for its relative simplicity, which facilitates analysis of our hierarchical version of the algorithm. Hierarchical Bayesian models, in turn, are well suited for transfer learning because they effectively combine data from multiple sources, e.g., [4].

To simplify our presentation we assume that just two tasks, A and B, provide sources of data, although the methods extend easily to multiple A data sources. The flat version of naive Bayes merges all the data without distinction, whereas the hierarchical version constructs two ordinary naive Bayes models that are coupled together. Let θ_i^A and θ_i^B denote the i-th parameter in the two models. Transfer is achieved by encouraging θ_i^A and θ_i^B to have similar values during learning. This is implemented by assuming that θ_i^A and θ_i^B are both drawn from a common hyperprior distribution, P_i, that is designed to have unknown mean but small variance. Consequently, at the start of learning, the values of θ_i^A and θ_i^B are unknown, but they are constrained to be similar.

As with any Bayesian learning method, learning consists of computing posterior distributions for all of the parameters in the two models, including the hyperprior parameters. The overall model can "decide" that two parameters are very similar (by decreasing the variance of the hyperprior) or that two other parameters are very different (by increasing the variance of the hyperprior). To compute the posterior distributions, we developed an extension of the "slice sampling" method introduced by Neal [6].
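A minimal numerical sketch of this coupling idea follows, assuming synthetic data. It is emphatically not the authors' slice-sampling implementation: it replaces full posterior inference with simple shrinkage of per-task Bernoulli feature estimates toward a pooled mean, with a weight standing in for the hyperprior variance.

```python
# Illustrative sketch of hyperprior coupling (NOT the paper's method):
# per-task feature parameters are shrunk toward a shared mean, and the
# amount of shrinkage plays the role of the hyperprior variance.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary data: rows = examples, cols = features; tasks A and B.
X_A = rng.random((200, 5)) < 0.7     # plentiful auxiliary data
X_B = rng.random((8, 5)) < 0.6       # very few target examples

def feature_means(X, alpha=1.0):
    """Smoothed per-feature Bernoulli estimates (Laplace smoothing)."""
    return (X.sum(axis=0) + alpha) / (len(X) + 2 * alpha)

theta_B = feature_means(X_B)

# Shared hyperprior mean: pooled estimate across both tasks.
theta_0 = feature_means(np.vstack([X_A, X_B]))

def shrink(theta_task, n_task, theta_shared, tau):
    """Shrink a task's estimate toward the shared mean. Small tau mimics
    a tight hyperprior (strong transfer); large tau lets the task's own
    data dominate (weak transfer)."""
    w = n_task / (n_task + 1.0 / tau)
    return w * theta_task + (1 - w) * theta_shared

for tau in (0.01, 1.0, 100.0):
    print(tau, np.round(shrink(theta_B, len(X_B), theta_0, tau), 2))
```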
3 Experiments

We tested the hierarchical naive Bayes algorithm on data from a meeting acceptance task. For this task, the goal is to learn to predict whether a person will accept an invitation to a meeting given information about (a) the current state of the person's calendar, (b) the person's roles and relationships to other people and projects in his or her world, and (c) a description of the meeting request, including time, place, topic, importance, and expected duration.

Twenty-one individuals participated in the experiment: eight from a military exercise and 13 from an academic setting. Each individual supplied between 99 and 400 labeled examples (3966 total examples). Each example was represented as a 15-dimensional feature vector that captured relational information about the inviter, the proposed meeting, and any conflicting meetings. The features were designed with the meeting acceptance task in mind but were not tailored to the algorithms studied. For each experiment, a single person was chosen as the target (B) data source; 100 of his or her examples were set aside as a holdout test set, and from the remaining examples either 2, 4, 8, 16, or 32 were used for training. These training and test sets were disjoint and stratified by class. All of the examples from one or more other individuals served as the auxiliary (A) data source.

[Figure 1: Effects of B training set size on performance of the hierarchical naive Bayes algorithm for three cases: no transfer ("B-only") and transfer between similar and dissimilar individuals. In each case, the same person served as the B data source. Filled circles denote statistically significant differences (p < 0.05) between the corresponding transfer and B-only conditions. X-axis: Amount of Task B Training (# instances); y-axis: Task B Performance (% correct).]

Figure 1 illustrates the performance of the hierarchical naive Bayes algorithm for a single B data source and two representative A data sources. Also shown is the performance for the standard algorithm that ignores the auxiliary data (denoted "B-only" in the figure). Transfer learning has a clear advantage over the B-only approach when the A and B data sources are similar, but the effect is reversed when A and B are too dissimilar.

Figure 2a demonstrates that the hierarchical naive Bayes algorithm almost always performs at least as well as flat naive Bayes, which simply merges all the available data. Figure 2b shows the more interesting comparison between the hierarchical and B-only algorithms. The hierarchical algorithm performs well, although the large gray regions depict the many pairs of dissimilar individuals that lead to negative transfer. This effect diminishes—along with the positive transfer effect—as the amount of B training data increases. We also observed qualitatively similar results using a transfer-aware version of the logistic regression classification algorithm [5].

4 Conclusions

Our experiments with the meeting acceptance task demonstrate that transfer learning often helps, but can also hurt performance if the sources of data are too dissimilar. The hierarchical naive Bayes algorithm was designed to avoid negative transfer, and indeed it does so quite well compared to the flat version. Compared to the standard B-only approach, however, there is still room for improvement. As part of ongoing work we are exploring the use of clustering techniques, e.g., [8], to represent more explicitly that some sources of data may be better candidates for transfer than others.
[Figure 2: Effects of B training set size on performance of the hierarchical naive Bayes algorithm versus (a) flat naive Bayes and (b) training with no auxiliary data. Shown are the fraction of tested A-B pairs with a statistically significant transfer effect (p < 0.05). Black and gray respectively denote positive and negative transfer, and white indicates no statistically significant difference. Performance scores were quantified using the log odds of making the correct prediction. X-axes: Amount of Task B Training (# instances); y-axes: Fraction of Person Pairs.]

Acknowledgments

This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA), through the Department of the Interior, NBC, Acquisition Services Division, under Contract No. NBCHD030010. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA.

References

[1] J. Baxter. A model of inductive bias learning. Journal of Artificial Intelligence Research, 12:149-198, 2000.
[2] R. Caruana. Multitask learning. Machine Learning, 28(1):41-70, 1997.
[3] P. Domingos and M. Pazzani. On the optimality of the simple Bayesian classifier under zero-one loss. Machine Learning, 29(2-3):103-130, 1997.
[4] A. Gelman, J. B. Carlin, H. S. Stern, and D. B. Rubin. Bayesian Data Analysis, Second Edition. Chapman and Hall/CRC, Boca Raton, FL, 2004.
[5] Z. Marx, M. T. Rosenstein, L. P. Kaelbling, and T. G. Dietterich. Transfer learning with an ensemble of background tasks. Submitted to this workshop.
[6] R. Neal. Slice sampling. Annals of Statistics, 31(3):705-767, 2003.
[7] C. Sutton and A. McCallum. Composition of conditional random fields for transfer learning. In Proceedings of the Human Language Technologies / Empirical Methods in Natural Language Processing Conference (HLT/EMNLP), 2005.
[8] S. Thrun and J. O'Sullivan. Discovering structure in multiple learning tasks: the TC algorithm. In L. Saitta, editor, Proceedings of the Thirteenth International Conference on Machine Learning, pages 489-497. Morgan Kaufmann, 1996.
[9] P. Wu and T. G. Dietterich. Improving SVM accuracy by training on auxiliary data sources. In Proceedings of the Twenty-First International Conference on Machine Learning, pages 871-878. Morgan Kaufmann, 2004.
The Power of Concepts
For people who do theoretical research, the questions encountered most often are probably: what does a given concept mean? Where does it come from? Has it been misused? For instance, people around me have long agonized over what Lacan's "The Real" actually refers to. A stage of life? A leg in the journey of personality development? A nothingness that engulfs everything? Kant's thing-in-itself? Unfathomably mysterious history? The id? Lacan may really have been, as some scholars say, "a surrealist practitioner of automatic writing": he respected the unconscious within and strove to undo the normative grip that concepts themselves exert on thought.
On closer reflection, however, one finds that there are two ways of using concepts in the world.
One may be called the "sanctity of concepts" view: it holds that a concept has a prescribed domain of meaning, rather like the Greek gods, each presiding over its own province, not to be borrowed or misapplied at will.
Its adherents will also endow certain concepts with connotations that may not be questioned, until those concepts harden into "shells of thought" that may only be used, never reflected upon.
(2000). 'Investment-cash flow sensitivities are useful: A comment on Kaplan and Zingales'
INVESTMENT-CASH FLOW SENSITIVITIES ARE USEFUL: A COMMENT ON KAPLAN AND ZINGALES*

STEVEN M. FAZZARI
R. GLENN HUBBARD
BRUCE C. PETERSEN

A recent paper in this Journal by Kaplan and Zingales reexamines a subset of firms of Fazzari, Hubbard, and Petersen and criticizes the usefulness of investment-cash flow sensitivities for detecting financing constraints. We show that the Kaplan and Zingales theoretical model fails to capture the approach employed in the literature and thus does not provide an effective critique. Moreover, we describe why their empirical classification system is flawed in identifying both whether firms are constrained and the relative degree of constraints across firm groups. We conclude that their results do not support their conclusions about the usefulness of investment-cash flow sensitivities.

In a recent paper in this Journal Kaplan and Zingales [1997, hereinafter KZ] argue that investment-cash flow sensitivities do not provide useful evidence about the presence of financing constraints. Because KZ use a subset of the same firms and the same regressions as Fazzari, Hubbard, and Petersen [1988, hereinafter FHP] and claim [page 176] that FHP "can legitimately be considered the parent of all papers in this literature," it is appropriate that we respond. Based on a simple theoretical model, KZ reach the provocative conclusion [page 211] that "the investment-cash flow sensitivity criterion as a measure of financial constraints is not well-grounded in theory." In Section I we show that the KZ model does not capture the theoretical approach employed in FHP and many subsequent studies. Most of the KZ paper attempts to show that empirical investment-cash flow sensitivities do not increase monotonically with the degree of financing constraints within the 49 low-dividend firms from the FHP sample. In Section II we explain why the KZ classification of the degree of constraints is flawed in identifying both whether or not firms are constrained (absolute constraints) as well as the relative degree of constraints across firms. As a result, we argue in Section III that there is no expected ordering ex ante for the investment-cash flow sensitivities across the KZ categories, making their empirical results uninformative about the usefulness of investment-cash flow sensitivities. [1]

* We thank Michael Athey, Charles Calomiris, Robert Carpenter, Robert Chirinko, Mark Gertler, Simon Gilchrist, Kevin Hassett, Charles Himmelberg, Anil Kashyap, Ronald King, Wende Reeser, Joachim Winter, two referees, one of the editors (Andrei Shleifer), and participants in seminars at the London School of Economics and the NBER Summer Institute Conference on Corporate Finance for comments and suggestions.

I. THE KZ MODEL AND TESTS OF FINANCING CONSTRAINTS

The one-period KZ model consists of a return on investment F(I), internal financing (W) with constant opportunity cost, external financing (E), and a premium for external funds C(E, k), where k measures the cost wedge between internal and external funds. KZ show that the investment-cash flow sensitivity is

(1)    dI/dW = C11 / (C11 - F11),

where C11 is the slope of the supply curve for external finance and F11 is the slope of the investment demand curve. KZ focus on firm heterogeneity in dI/dW as measured by the level of W. To analyze dI/dW at different levels of W they compute

(2)    d2I/dW2 = [F111/F11^2 - C111/C11^2] * F11^2 C11^2 / (C11 - F11)^3.

KZ note
that d2I/dW2 is negative only if the term in brackets is negative. They then point out that the bracketed term could be positive if F111 > 0 or C111 < 0. This leads KZ to conclude that the theoretical foundation of previous research is weak because dI/dW may not fall as the degree of financing constraints declines (with larger W).

1. Extensive empirical research since FHP (surveyed by Hubbard [1998]) also addresses many of the issues raised in KZ.

Before we assess this conclusion, it is helpful to consider the intuition (which does not appear in KZ) behind why d2I/dW2 may be positive. In Figure I investment is on the horizontal axis, F1 is investment demand, W_L or W_H indicates the quantity of internal financing (with constant marginal cost as indicated by the horizontal line segment), and C1 is the supply of external funds. In the left panel of Figure I, F111 = 0 and C111 < 0 (i.e., linear demand and concave supply). Investment is more sensitive to small internal finance fluctuations (dW) at high internal finance (W_H) than at low internal finance (W_L) because a firm at W_H uses less external financing, and therefore the concavity of supply causes its C11 to be larger (see equation (1)). Alternatively, consider F111 > 0 and C111 = 0 (i.e., convex demand and linear supply) as in the right panel of Figure I. Again, investment is more sensitive to W at W_H than at W_L because investment demand is more sensitive to the cost of capital as W rises.

This focus in KZ on d2I/dW2 does not provide an effective critique of the literature (including the FHP theoretical approach) because most studies do not use the level of W to classify firms. [2] Instead, FHP and much of the literature classify firms according to a priori criteria designed to give large differences in the slope of the external financing schedule, C11, across groups. The obvious testable implication of this approach, using equation (1), is that constrained firms with a large C11 should have a larger dI/dW than (relatively) unconstrained firms with a small (or zero) C11, other things equal. [3] The necessary condition for dI/dW to be larger for constrained firms is

(3)    C11^Constrained / C11^Unconstrained > F11^Constrained / F11^Unconstrained.

2. In fact, KZ never reference any specific study, including FHP, to demonstrate the relevance of d2I/dW2.
3. To appreciate the intuition graphically, consider the effect of a small change in W on two firms with linear demand curves. If the "constrained" firm faces relatively steep supply and the "unconstrained" firm relatively flat supply, the result is obvious. KZ implicitly assume away this possibility by positing that all firms face the same C11 for a given level of E.
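Equation (2) is just the second derivative of equation (1); the algebra, as we reconstruct it from the model's first-order condition F1(I) = C1(I - W, k), runs as follows.

```latex
% Derivation sketch: totally differentiate the first-order condition
% F_1(I) = C_1(I - W, k) of the KZ model.
\[
  F_{11}\,dI = C_{11}\,(dI - dW)
  \;\Longrightarrow\;
  \frac{dI}{dW} = \frac{C_{11}}{C_{11}-F_{11}} .
\]
% Differentiate dI/dW once more, using
% dF_{11}/dW = F_{111}(dI/dW) and dC_{11}/dW = C_{111}(dI/dW - 1):
\[
  \frac{d^{2}I}{dW^{2}}
  = \frac{F_{111}C_{11}^{2}-C_{111}F_{11}^{2}}{(C_{11}-F_{11})^{3}}
  = \left[\frac{F_{111}}{F_{11}^{2}}-\frac{C_{111}}{C_{11}^{2}}\right]
    \frac{F_{11}^{2}C_{11}^{2}}{(C_{11}-F_{11})^{3}} .
\]
% With F_{11} < 0 and C_{11} > 0 the multiplier is positive, so the sign
% of d^2I/dW^2 is the sign of the bracketed term, which can be positive
% only if F_{111} > 0 or C_{111} < 0, as the text states.
```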
While F11 may differ across firms, we can think of no reason why F11^Constrained should be systematically greater than F11^Unconstrained, and KZ provide no reasons. Thus, as long as researchers separate firms by a priori criteria such that C11^Constrained > C11^Unconstrained in the relevant range, the comparison of dI/dW across firm groups has a solid theoretical foundation. We also point out that as C11^Unconstrained approaches zero, as we argue below is the case in many studies, (3) almost certainly holds. In addition, if (3) holds, the issues that KZ raise about curvature and nonlinearity are not likely to be relevant. [4]

4. KZ also mention the possibility of "nonmonotonicity" with the wedge k as a proxy for the degree of financing constraints. This approach is not relevant to the FHP model, discussed in the next paragraph, because high-dividend firms, in theory, face no wedge at the margin. In general, if researchers effectively split their samples with criteria that generate large differences in k that lead to large differences in C11, the condition in equation (3) is likely to be satisfied.
5. See Calomiris, Himmelberg, and Wachtel [1995]; Gilchrist and Himmelberg [1995, 1998]; Kashyap, Lamont, and Stein [1994]; and Whited [1992]. Hubbard [1998] provides many other references. Hoshi, Kashyap, and Scharfstein [1991] use association with a large bank to identify firms with a relatively low C11. In addition, many studies split samples by firm size, which is highly correlated with both dividend payout and access to public debt.

The only remaining question is whether previous research has effectively classified firms in ways that generate large differences in C11. Consider the model and discussion in FHP [pages 146-157 and Appendix A]. In the supply of funds schedule in FHP Figure I, C11 equals zero for internal financing (as in KZ) and C11 is greater than zero for external financing. One group of firms faces C11 of zero at the margin because investment demand is less than internal financing. In contrast, constrained firms exhaust internal funds and finance marginal investment with external funds, and thus face a positive C11. Operationally, as implied by the model, unconstrained firms are those with large dividend payouts, and constrained firms are those with low or zero dividends.

Since FHP, many other researchers have devised different approaches for separating firms into groups with low and high C11. [5] A common separating criterion is access to public debt. Calomiris, Himmelberg, and Wachtel [1995] report that firms with debt ratings are very different from firms without rated debt. Firms that issue public debt, especially commercial paper, are far larger on average, have much lower volatility of sales and income, and therefore pose relatively little, possibly negligible, default risk. The case can be made that firms with commercial paper or high bond ratings face a C11 close to zero. Almost surely, firms that issue public debt tend to have a substantially lower C11 than those that do not. In contrast, for the many firms without public debt, the case can be made that these firms face a high C11 for external financing.

Empirical evidence from most studies is consistent with equation (1) in the sense that firms likely to have an a priori high C11 (e.g., firms with low dividends, no public debt, or small size) almost always have a larger dI/dW than firms likely to have a low C11. Furthermore, many studies cannot reject that dI/dW equals zero for
control groups selected to have a low (or zero) C11 (e.g., Gilchrist and Himmelberg [1995, 1998]). [6] Thus, the implications of the theoretical approach in much previous research are supported by the evidence.

II. PROBLEMS WITH THE KZ EMPIRICAL CLASSIFICATION APPROACH

KZ employ managerial statements and quantitative measures from firms' financial statements to sort the 49 FHP low-dividend firms into one of five groups: [7] Not Financially Constrained (NFC), Likely Not Financially Constrained (LNFC), Possibly Financially Constrained (PFC), Likely Financially Constrained (LFC), and Financially Constrained (FC). This section summarizes our concerns about the effectiveness of their approach for determining both absolute and relative constraints across firms.

6. See also Kashyap, Lamont, and Stein [1994]. Some Euler equation studies cannot reject C11 equal to zero for control groups of firms [Gilchrist 1991; Hubbard, Kashyap, and Whited 1995; Whited 1992].
7. KZ do not explain how these diverse criteria are specifically combined to classify firms into the five groups.

A. Reliance on Managers' Statements and Regulation S-K

To justify use of managerial statements to identify the degree of financing constraints, KZ [p. 180] rely on Securities and Exchange Commission Regulation S-K, which they claim "explicitly requires firms to disclose whether or not they are having difficulty financing their investments." It is not obvious, however, that this regulation forces a firm to reveal financing constraints. We contacted Robert Lipe, Academic Fellow in the Office of the Chief Accountant of the SEC, and asked whether a firm that is unable to undertake a new, positive-NPV project due to financing constraints would be obliged to reveal this information. Lipe responded that this is not the case. Rather, he explained, Regulation S-K requires the firm to reveal the inability to invest due to financing constraints only when the firm fails to act on a previously announced investment commitment. As a result, we doubt the relevance of self-serving managers' statements as evidence of the absence of financing constraints in most situations.

B. Problems with the Quantitative Classification Criteria

The classification criteria in KZ include cash stocks, unused lines of credit, and leverage. They report summary measures for these variables in Table III [KZ, pages 185-187] and argue that they support the success of their relative ranking of the degree of financing constraints and their finding that the firms face absolute financing constraints (PFC, LFC, or FC) in only 15 percent of the firm-years.

We begin by explaining why the summary statistics in KZ do not support their surprising finding about the infrequency of absolute constraints in the FHP sample. KZ suggest that both the cash flow and the cash stock positions for NFC and LNFC firm-years are so large relative to fixed investment that these firms could not be financially constrained. Their numbers in Table III, however, are misleading because they implicitly assume that firms use sources of financing only for fixed investment when, in fact, growing companies invest heavily in both inventories and accounts receivable (see Fazzari and Petersen [1993, pages 330-331]). We recomputed the KZ figures with the proper comparison of cash flow and cash stocks relative to total investment (fixed investment plus the changes in inventories and accounts receivable). These new statistics change some of the KZ conclusions. For example, KZ [page 184] note that the median value of cash
flow less fixed investment is positive for NFC firm-years and write "[t]his suggests that NFC firms could have increased their investment without tapping external sources of capital." In sharp contrast, in our computations the median value of cash flow less total investment is negative at the seventy-fifth percentile for even the NFC and LNFC firms. Thus, most NFC and LNFC firms exhaust all internal finance for investment purposes. Furthermore, while the median cash stock-fixed investment ratio for NFC and LNFC firm-years is 0.66 (similar to the statistics in KZ Table III), the median ratio of cash stocks to total investment is only 0.27. [8]

In our opinion, this cash stock is too small to support the conclusion in KZ about the absence of financing constraints. Even constrained firms will maintain some buffer stock of cash to protect against having to cancel investment projects, as well as to avoid the costs associated with financial distress. It is well known that cash flow is volatile in manufacturing, frequently declining by 50 percent or more and often becoming negative during a recession. Suppose, for example, that cash flow declined to zero. Our computations indicate that NFC and LNFC firms could maintain only about three months of median total investment from cash stocks, and then only if these stocks were (implausibly) driven to zero. We believe these statistics are consistent with the view that these firms face absolute financing constraints.

The KZ criteria may also fail to rank the relative degree of constraints. Firms may have little debt because creditors are unwilling to provide them with credit, perhaps due to lack of collateral; low-debt firms may therefore face more severe constraints. For example, small high-tech companies—much of whose value is intangible—tend to have little collateral value and little debt, possibly because their assets are intangible or firm-specific (see, for example, Himmelberg and Petersen [1994]). In addition, comparatively large cash positions or unused lines of credit may indicate relatively severe constraints. As argued in recent papers [Fazzari and Petersen 1993; Carpenter, Fazzari, and Petersen 1994; Calomiris, Himmelberg, and Wachtel 1995], it is costly for constrained firms to adjust fixed investment when internal funds fluctuate. Forward-looking firms will therefore partially protect themselves with buffer stocks of cash or unused debt capacity. The more financially constrained a firm is, the greater is its incentive to accumulate liquid buffer stocks. Such a firm may be able to invest more at the margin at a moment in time, but the firm is nonetheless financially constrained. This dynamic perspective contrasts with the static view of financing constraints employed by KZ, which creates problems in their classification approach.

8. This statistic excludes observations for which total investment is less than or equal to zero. KZ also point out that unused lines of credit are larger for NFC and LNFC firms. We do not have these data, but the ratios of slack to investment reported by KZ on page 188 would be similarly reduced by recognizing a broader measure of investment.

C. The Absence of Heterogeneity in the KZ Classification

One striking finding in KZ is that only 19 of 719 observations (2.6 percent) are FC and another 34 observations (4.8 percent) are LFC. Given so few FC and LFC observations, how do KZ obtain enough FC firms for their regressions? KZ placed firms in the FC category if they had just a single year (out of 15) with an FC or LFC rating. In the FC category, 14 of the 22 firms had an FC or LFC rating only one or two times, while six firms had FC or LFC ratings in
just three or four of the fifteen years. For this reason, the difference in cash flow coefficients across the KZ regressions may have little to do with their relative ranking of financing constraints.

III. THE KZ REGRESSION RESULTS

KZ find that the investment of NFC and LNFC firms displays a greater sensitivity to cash flow than that of FC firms. Space does not permit a detailed discussion of this pattern of results. One possibility is that the FC firms include some years of financial distress. KZ describe firms in FC years as having "liquidity problems," which is not surprising given that their criteria for receiving the FC classification include violation of debt covenants and renegotiation of debt payments [page 182]. The KZ summary statistics in Table III also strongly suggest that the FC firm-years are periods of financial distress. [9] During years of financial distress, firms, possibly at the insistence of their creditors, are likely to use cash flow to enhance liquidity and avoid bankruptcy, resulting in little change in fixed investment as measured in Compustat. A broader measure of investment, however, is likely to respond much more to cash flow for such firms. [10]

9. The mean cash flow-net plant ratio for these observations is -0.047, and the mean of interest coverage is only 1.650. While KZ recognize the possibility of financial distress in FC observations [page 208], the defense they offer is not convincing. They note that firms increase rather than repay debt in the PFC, LFC, and FC years. This observation, however, may be due to creditors permitting illiquid, but growing, firms to rebuild liquidity.
10. Financially distressed firms (with low or negative cash flow) often disinvest assets with low adjustment costs, such as working capital (see Fazzari and Petersen [1993]). In addition, such firms likely sell off existing fixed assets. Neither of these responses is included in the Compustat measure of fixed investment, and ignoring them causes a downward bias in the cash flow coefficient, especially at times of financial distress.

Financial distress is one possible explanation for the low cash flow coefficient of the FC firms. Regardless of how one explains the pattern of results in KZ, however, we argue that this pattern is not informative. As discussed in the previous section, the firms in the NFC and LNFC categories likely are financially constrained, and the relative degree of constraints across the KZ categories is far from clear. If there is no clear a priori difference in financing constraints across the firm groups in KZ, their strategy does not meet the criterion (summarized by equation (3)) necessary for meaningful tests of financing constraints with firm heterogeneity.

Finally, KZ [page 196] present additional tests with groupings based on "quantitative/objective data." The only one of these tests consistent with their main findings shows that firms with high interest coverage have higher cash flow coefficients than firms with low coverage. KZ imply that the pattern should be the opposite, but this need not be the case. As we discussed earlier, either low levels of debt or high interest coverage may indicate an inability to obtain debt financing, possibly signaling relatively severe financial constraints. KZ [page 211] themselves note that some studies use high leverage as an indicator of more severe financing constraints, while other studies argue the opposite. Thus, these tests do little to bolster the KZ conclusions. [11]

IV. CONCLUSION

KZ argue
that investment-cash flow sensitivities do not provide useful evidence about the presence of financing constraints. We believe that this conclusion does not follow from their analysis, for two reasons. First, their theoretical model fails to capture the approach of most previous research, making their theoretical analysis irrelevant as a criticism of FHP and most subsequent research. Second, the KZ empirical findings are difficult to interpret. The 49 low-dividend FHP firms are a poor choice for such a study because they are relatively homogeneous for purposes of testing for capital-market imperfections, making it extremely difficult to classify these firms finely by degree of financing constraints. Furthermore, some of the KZ classification criteria (e.g., stock of cash and degree of leverage) may indicate high or low levels of constraints. We therefore believe their finding of nonmonotonic investment-cash flow sensitivities is not informative.

11. Two new studies are relevant to the KZ results. In a sample of large, dividend-paying firms, Cleary [1999] argues that the "most financially constrained" firms have the lowest investment-cash flow sensitivity. These FC firms, however, appear to be financially distressed. Their mean net income is -4.8 percent of sales, compared with 9.6 percent for NFC firms. Mean sales growth for FC firms is -2.3 percent versus 23.5 percent for the NFC firms. Winter [1998], using the KZ sample, includes the KZ indicator of financial constraint status in regressions for investment and firm exit. He finds that the KZ indicator is either statistically insignificant or, when significant, has the wrong sign.

While the sweeping critical conclusions in KZ do not follow from their results, we believe their paper makes a contribution. Empirical work in this area has not always clearly identified the theoretical model under investigation. While FHP provided a model of investment behavior that described the criteria for separating firms into "constrained" and "unconstrained" categories, not all papers have done so. In addition, while commonly used separating criteria have a solid theoretical foundation, not all criteria are as defensible. KZ (and we hope this comment) will lead future researchers to clearly state their model and to carefully choose the criteria used for defining constrained and unconstrained groupings.

WASHINGTON UNIVERSITY AND JEROME LEVY ECONOMICS INSTITUTE
COLUMBIA UNIVERSITY AND NATIONAL BUREAU OF ECONOMIC RESEARCH
WASHINGTON UNIVERSITY

REFERENCES

Calomiris, Charles W., Charles P. Himmelberg, and Paul Wachtel, "Commercial Paper and Corporate Finance: A Microeconomic Perspective," Carnegie-Rochester Conference Series on Public Policy, XLI (1995), 203-250.
Carpenter, Robert E., Steven M. Fazzari, and Bruce C. Petersen, "Inventory Investment, Internal-Finance Fluctuations, and the Business Cycle," Brookings Papers on Economic Activity (1994:2), 75-138.
Cleary, Sean, "The Relationship between Firm Investment and Financial Status," Journal of Finance, LIV (1999), 673-692.
Fazzari, Steven M., R. Glenn Hubbard, and Bruce C. Petersen, "Financing Constraints and Corporate Investment," Brookings Papers on Economic Activity (1988:1), 141-195.
Fazzari, Steven M., and Bruce C. Petersen, "Working Capital and Fixed Investment: New Evidence on Finance Constraints," RAND Journal of Economics, XXIV (1993), 328-342.
Gilchrist, Simon, "An Empirical Analysis of Corporate Investment and Financing Hierarchies Using Firm-Level Panel Data," mimeograph, Board of Governors of the Federal Reserve System, 1991.
Gilchrist, Simon, and Charles P. Himmelberg, "Evidence on the Role of Cash Flow for Investment," Journal of Monetary Economics, XXXVI (1995), 541-572.
Gilchrist, Simon, and Charles P. Himmelberg, "Investment, Fundamentals, and Finance," NBER Macroeconomics Annual, XIII (Cambridge, MA: MIT Press, 1998).
Himmelberg, Charles P., and Bruce C. Petersen, "R&D and Internal Finance: A Panel Study of Small Firms in High-Tech Industries," Review of Economics and Statistics, LVI (1994), 38-51.
Hoshi, Takeo, Anil K. Kashyap, and David Scharfstein, "Corporate Structure, Liquidity, and Investment: Evidence from Japanese Panel Data," Quarterly Journal of Economics, CVI (1991), 33-60.
Hubbard, R. Glenn, "Capital-Market Imperfections and Investment," Journal of Economic Literature, XXXV (March 1998), 193-225.
Hubbard, R. Glenn, Anil K. Kashyap, and Toni M. Whited, "Internal Finance and Firm Investment," Journal of Money, Credit and Banking, XXVII (1995), 683-701.
Kaplan, Steven N., and Luigi Zingales, "Do Investment-Cash Flow Sensitivities Provide Useful Measures of Financing Constraints?" Quarterly Journal of Economics, CXII (1997), 169-215.
Kashyap, Anil K., Owen Lamont, and Jeremy C. Stein, "Credit Conditions and the Cyclical Behavior of Inventories," Quarterly Journal of Economics, CIX (1994), 565-592.
Whited, Toni M., "Debt, Liquidity Constraints, and Corporate Investment: Evidence from Panel Data," Journal of Finance, XLVII (1992), 1425-1460.
Winter, Joachim K., "Does Firms' Financial Status Affect Plant-Level Investment and Exit Decisions?" mimeograph, University of Mannheim, 1998.
Research Progress on the Informatization of the Shanghan Lun (Treatise on Febrile Diseases)
[Review] Research Progress on the Informatization of the Shanghan Lun (Treatise on Febrile Diseases)

LIN Rui-fan 1, LIU Liang-liang 2, ZHANG Ni-nan 1, ZHOU Hong-wei 1 (corresponding author), XIE Qi 3
(1. Institute of Basic Research in Clinical Medicine, China Academy of Chinese Medical Sciences, Beijing 100700, China; 2. School of Statistics and Information, Shanghai University of International Business and Economics, Shanghai 201620, China; 3. Department of Academic Management, China Academy of Chinese Medical Sciences, Beijing 100700, China)

Abstract: A review of the literature shows that informatization research on the Shanghan Lun has passed through three stages: collation of the ancient text, digitization, and informatization. The early collation work centered on editing and publishing, aiming to preserve the different editions of the Shanghan Lun through reprinting and to collate and annotate their contents. The middle stage relied on digitization technology to present the Shanghan Lun's clauses as web pages and in similar forms, which both protects the originals from the wear of repeated handling and lets readers search, retrieve, and compare the contents of different editions. Current work concentrates on informatization, which can be subdivided into five strands: research on the six-meridian system, logical analysis, data mining, machine learning, and graph-based methods. Reviewing this trajectory, we find that knowledge graph technology, as a branch of the graph-based methods, has so far been combined with the Shanghan Lun in only a few studies. Informatization research on the Shanghan Lun draws on philosophy, computer science, and many other fields, and future work could explore it further from the perspective of concepts and the relations between them.

Keywords: Shanghan Lun (Treatise on Febrile Diseases); ancient texts; informatization

Funding: National Key R&D Program of China (2017YFB1002302), research on multi-dimensional, multi-level, multi-state TCM knowledge graphs and spatiotemporal evolution models; basic research fund of central public-welfare research institutes (ZZ140505), Yu Yingao inheritance studio (phase II), research on Yu Yingao's diagnosis and treatment programs for specific diseases; "Twelfth Five-Year" science and technology support project (2013BA102B10), real-world clinical research methodology for "disease-pattern combination" in TCM.
First author: Lin Ruifan (1992-), female, from Dunhua, Jilin, bachelor's degree, engaged in TCM informatics research. Corresponding author: Zhou Hongwei (1982-), male, from Langfang, Hebei, assistant researcher and doctoral candidate, engaged in research on TCM clinical efficacy evaluation and data management; Tel: ************, E-mail: zhouhw@

Ancient texts are the carriers of traditional Chinese medicine (TCM) scholarship. In words and images they record the rich theoretical knowledge and clinical experience accumulated by TCM over several thousand years, and they have great academic and cultural value: they are not only a source and driving force for the development and practical exploration of modern TCM, but also an important inspiration and guide for its development. The Shanghan Lun, a work that joins theory to practice, still guides clinical work today through its six-meridian pattern identification and its integrated system of principles, methods, formulas, and medicinals; it is a cornerstone of the TCM knowledge structure and an effective guide to clinical practice.

1 Collation of the Shanghan Lun as an ancient text

In 1954 the central government called for "collating and publishing ancient medical texts, including editing and reprinting classical and modern medical books," after which the Shanghan Lun was included in the ancient-text collation program [1]. On the textual history of its editions, the eminent philologist Qian Chaochen made major contributions, examining the Song edition, the Tang edition, the Jinkui Yuhan Jing, the Gao Jichong edition, the Dunhuang fragments, and the Kangping, Kangzhi, and Ansei editions, among others, and analyzing the relations among the Tangye Jingfa, the Shanghan Lun, and the Fuxing Jue [2-4].

Studies annotating the original text have mostly appeared as books. In 1957 the Science and Technology Health Press published Ren Yingqiu's Shanghan Lun Yuyi, which glosses obscure words and words whose meaning has changed since antiquity, and supplies a translation of the text [5]. In 1982 Hunan Science and Technology Press published Zhu Youwu's Songben Shanghan Lun Jiaozhu, which, besides collating rare characters, gives a content summary for each clause, states its main point, and adds commentary to help readers grasp the original meaning deeply [6]. In 1996 Tianjin Science and Technology Press published Guo Aichun's Shanghan Lun Jiaozhu Yuyi, which adds Cheng Wuji's commentary to the collation and annotation [7]. In 1988 Ma Jixing collated the ancient medical texts from Dunhuang and published Kaoshi of Ancient Dunhuang Medical Texts, which includes an examination of the Dunhuang text of the Shanghan Lun, glossing hard-to-understand terms and marking textual omissions [8].

2 Digitization of the Shanghan Lun

The 1950s onward were a period of rapid development of digital technology, and the database is an important product of digitization. The Shanghan Lun, a key part of the TCM textual heritage, survives largely in unique or nearly unique copies; beyond their clinical and academic value these have value as cultural artifacts, and prolonged manual handling inevitably causes irreparable damage to the originals. Moreover, the Shanghan Lun exists in many editions scattered across many collections; manual collation is slow and cannot meet the needs of textual research, edition comparison, and preservation, and ancient and modern texts also differ in written expression. Digitization can address several of these difficulties, which motivated the digitization of the Shanghan Lun; the results are held in various ancient-text databases (Table 1).

Table 1: Digitized versions of the Shanghan Lun and related works
(columns: resource | work | source edition | display | medium | URL | rights holder)

  Digital Library of TCM Literature | Shanghan Lun | Qing, Guangxu 8 (1882) woodblock edition | images | website | http:// /home | Shaanxi Normal University General Publishing House
  Digital Library of TCM Literature | Kangping Shanghan Lun | Japan, Showa 12 (1937), Tokyo Kampo Medical Association letterpress | not displayed | website | http:// /home | Shaanxi Normal University General Publishing House
  Digital Library of TCM Literature | Kangping Shanghan Lun | Republic year 36 (1947), Suzhou Youzhu Medical Society letterpress | not displayed | website | http:// /home | Shaanxi Normal University General Publishing House
  Digital Library of TCM Literature | Kangzhi Shanghan Lun | Song, Yuanyou 6 (1091) prefaced edition | images | website | http:// /home | Shaanxi Normal University General Publishing House
  Digital Library of TCM Literature | Kangzhi Shanghan Lun | Japan, Ansei 5 (1858), Kyoto Shorin woodblock edition | not displayed | website | http:// /home | Shaanxi Normal University General Publishing House
陕西师范大学出版总社中医药文献数字图书馆康治本伤寒论1965年日本民族医学研究所据日本安政五年戊午(1858)刻本影印本未做展示网站http :// /home陕西师范大学出版总社中医药文献数字图书馆康治本伤寒论1982年中医古籍出版社据日本安政五年戊午(1858)刻本影印本未做展示网站http :// /home陕西师范大学出版总社中医药文献数字图书馆伤寒论民国二十年(1931)上海中华书局印未做展示网站http :// /home 陕西师范大学出版总社中国哲学书电子化计划桂林古本伤寒杂病论未标注刻本文字网站https :// /zhs 台湾中研院文献处理实验室中国哲学书电子化计划宋本伤寒论未标注刻本文字网站https :// /zhs 台湾中研院文献处理实验室中国哲学书电子化计划康治本伤寒论未标注刻本文字网站https :// /zhs 台湾中研院文献处理实验室古腾堡计划(Project Gutenberg )伤寒论未标注刻本文字网站http :// /版权公开香港中文大学图书馆千金翼方清光绪四年景元大德梅溪书院本(庄兆祥教授知足书室藏书)图片网站https ://.hk /sc #e-resource-tab香港中文大学中国基本古籍库金匮玉函经康熙刻本图片网站http :// /北京爱如生数字化技术研究中心中华古籍资源库仲景全书(明)赵开美辑刻图片网站http :// /allSearch /searchList ?searchType =25&showType =1&pageNo =1国家图书馆数字古籍仲景全书(明)赵开美辑刻图片网站http :// /allSearch /searchList ?searchType =25&showType =1&pageNo =1国家图书馆法藏敦煌遗书伤寒杂病论乙本/伤寒杂病论丙本P.3287图片网站http :// /allSearch /searchList ?searchType =25&showType =1&pageNo =1国家图书馆北京师范大学古籍馆数据库康治本伤寒论日本安政四年[1857]刻本图片网站http :// 北京师范大学图书馆北京师范大学古籍馆数据库康平本伤寒论清道光四年[1854]抄本图片网站http ://北京师范大学图书馆续表1名称㊀㊀版本刻本存储方式㊀载体类型网址版权所有国学宝典宋本伤寒卒病论明赵开美影宋本㊀图片㊀网站http :// /北京国学时代文化传播股份有限公司大学数字图书馆合作计划CADAL康平本伤寒论未标注刻本㊀图片㊀网站http ://CADAL 管理中心敦煌文献数字图书馆伤寒论辨脈法‘敦煌宝藏“第002冊第232页名称作 脉经 ㊂㊀图片㊀网站http :// /陕西师范大学出版总社中医世家伤寒论未标注刻本㊀文字㊀网站http :// /lilunshuji /shanghanlun /index.html版权公开㊀㊀3 ‘伤寒论“信息化研究从严格意义上说,中医古籍文献的数字化是 形式 上的研究,中医古籍文献的信息化是 内容 上的探索[9]㊂运用信息科学技术使中医理论可视化㊁可重复㊁可操作,是在继承意义上的再创造[10]㊂因此进入21世纪,我国开始由中医古籍数字化向内容梳理和信息化过渡㊂此阶段除对‘伤寒论“原文进行处理外,还包括基于经方的医家诊疗经验总结,此类总结通常围绕医案展开,故不包含在分析范围之内㊂3.1㊀‘伤寒论“六经系统研究是信息化的基础上世纪80年代,有学者提出六经之间信息的多变性和复杂型取决于六经系统的多样性和层次性,并且‘伤寒论“的思维模式包括整理㊁联系和转化,均是信息论所具备的特点,因此可以尝试从信息的角度进行分析[11]㊂通过整理各代医家对于六经本质的看法,结合现代系统科学知识,笔者认为六经体系本质实际是系统模型,隶属于系统科学体系[12]㊂故将宋本‘伤寒论“作为研究对象,对其所收录的条文进行分类,总结六经病下所包含的本证㊁兼证㊁辨证㊁疑似证等类别,为‘伤寒论“的信息化研究打下基础[13]㊂3.2㊀‘伤寒论“逻辑方法分析不同学者从多种角度分析‘伤寒论“潜在逻辑㊂杨振华认为,‘伤寒论“中主要使用三重逻辑思维法,包括类比取象法㊁假说思维法和科学抽象法,其中由于误治导致的变证是假说思维法的成功运用,通过科学归纳法对条文进行总结,从而系统的认识疾病,掌握发展中的病邪[14]㊂赵明君认为,‘伤寒论“的六经证及其变证和兼证,病证相应㊁异病同治和六经病治均体现从一般到特殊的思维方法[15]㊂贾春华认为,‘伤寒论“中存在墨子创建的名㊁理㊁故㊁类逻辑法,以及现代逻辑体系中的三段论和命题逻辑等多种逻辑形式[16]㊂吴清荣在明因学的背景下,以‘伤寒论“条文为例,探究中医辨证论治过程与明因论式之间的关系,认为2种思维体系颇为接近[17]㊂穆勒五法作为一种排除归纳法,可以对张仲景方进行归纳分析,建立条文的因果关联㊂研究结果表明,方剂与证候之间存在联系,使用相同方剂时其证候不必完全相同,围绕其主症即可[18]㊂杨培坤采用集合论的思想,构建‘伤寒论“信息处理模型㊂本工作仅选取少部分条文,并且在增加病机的前提下完成条文的模型构建[19-21]㊂更多学者则是从命题逻辑的角度分析经方原文㊂杨培坤论述‘伤寒论“方证系统涉并认为从证和病到方的过程都属于条件关系推理过程,整个推理系统是一种强关系下的条件推理,为知识库的构建提供理论支撑㊂通过条件关系分析,认为脉浮㊁头项强痛㊁恶寒是太阳病的三大必要条件,同时也可以将太阳伤寒和太阳中风相区分㊂并且通过举例说明条文中所涉及的内容与其取非后的可以产生逻辑推理关系[22]㊂基于任何科学都要遵守逻辑的基础,贾春华对‘伤寒论“涉及充分条件㊁必要条件和充分必要条件的条文进行陈述,并认为方证之间命题是属于广义模态逻辑的道义逻辑[23-24]㊂范吉平基于命题逻辑,分析‘伤寒论“的推理过程,将条文内部逻辑分为联言命题㊁选言命题和假言命题,并举例说明[25]㊂邹崇理和贾春华认为,‘伤寒论“中存在条件句,并认为可以细化分为条件句前和条件句后的关联关系,其成分相干蕴含,也就是条件句和主句同时具有显性和隐形构成成分,同时条件句依赖 语义 语用 需要相关的背景知识来解释㊂基于此进而从信息流推理的角度去理解‘伤寒论“中的逻辑推理[26-27]㊂王瑞祥以数理逻辑对太阳中风和太阳伤寒条文进行逻辑分析,得出中风的充分条件为 脉浮㊁头项强痛㊁恶寒㊁发热㊁汗出㊁脉缓 ,伤寒的充分条件是 脉浮㊁头项强痛㊁恶寒㊁体痛㊁呕逆㊁脉阴阳俱紧 ,中风和伤寒均为太阳病[28]㊂贾春华基于命题逻辑中的充分条件,对‘伤寒论“中麻黄㊁桂枝㊁芍药㊁半夏的主治进行推理,认为麻黄治疗实证水肿,桂枝治疗大便不坚之小不利,芍药用于非虚寒性腹痛,半夏用于止呕[29]㊂3.3㊀‘伤寒论“数据挖掘关联规则采用CMAR 子算法可以挖掘张仲景用药模式,对寒证㊁热证㊁虚证㊁实证㊁表证及不同脏腑㊁六经病证等病证进行用药分析,概括不同病证的用药规律,与临床用药相互印证[30]㊂采用布尔关联规则Apriori 算法,分析‘伤寒论“药对分布情况,可以得出张仲景用药多使用生姜㊁甘草㊁大枣顾户脾胃的结论[31]㊂通过经典关联规则算法Apriori,挖掘‘伤寒论“症状组合规则以及药-症关联规律,总结出一系列病机相同的症状群,如烦躁-小便不利-渴等,指出症状群常使用的药物,如渴-身疼痛症状群多使用猪苓-泽泻药对[32]㊂基于数据挖掘方法的双向强关联规则算法,研究伤寒桂枝类方,得出结论桂枝汤类方主要用于虚证,桂芍1ʒ1用于外感发热类疾病, 
3.4 Machine-learning analysis of the treatise
Support vector machines (SVM) can classify the treatise's formulas from different angles, which both helps modern readers understand Zhang Zhongjing's mode of thought and can surface knowledge hidden in the treatise [39]. A decision-tree algorithm has been used to build a classification model for Gegen Qinlian Tang, Huangqin Tang, and Baitouweng Tang, guiding the clinical choice among the three formulas [40]. An artificial neural network has been used to study the nonlinear correspondence between drugs and symptoms, further clarifying the relation between formulas and their corresponding symptoms, though shortcomings remain, such as gaps in symptom annotation, incomplete coverage of the symptoms in the clauses, and imprecise description of drug doses [41]. Artificial neural networks have also been used to build mathematical models of both the chief-pattern-to-drug correspondence and the formula-to-pattern-element correspondence in the treatise, with an analysis of why the formula-pattern-element system converges better than the chief-pattern-drug system [42]. By integrating random forests, support vector machines, and neural networks, a composite intelligent model for formula selection in variant patterns (RSB) has been constructed in an attempt to optimize formula selection by symptoms; compared against the support vector, random forest, and BP neural network algorithms individually, RSB gave the most accurate recommendations [43].

3.5 Graphical analysis of the treatise
The partial-order structure diagram (POSD), comprising attribute and object partial-order structure diagrams, is a knowledge-visualization method proposed by Prof. Hong Wenxue and his team at Yanshan University; the attribute diagram clusters objects with common features, while the object diagram separates objects with distinctive features from the rest [44]. Using this method, Li Saimei's team at Guangzhou University of Chinese Medicine, jointly with Hong Wenxue's team, analyzed the clauses of the treatise to mine deeper knowledge. This multi-level complex-concept-network generation method allows comprehensive and objective knowledge discovery over the treatise and offers a new route for knowledge discovery in Chinese medicine [45]. For example, it identified the base drugs, base drug pairs, and base formulas that Zhang Zhongjing necessarily used in the harmonizing, warming, and sweating methods, and a secondary analysis of the cinnamon-twig, bupleurum, and xiexin classes of formulas revealed the modification rules of the three classes and the categories they form [46]. Urination-related formulas were classified into a licorice set, a rhubarb set, a poria set, and other scattered sets, and the composition rules of the four sets were analyzed [47]. Classifying by licorice dose, the compatibility characteristics of formulas using two, three, and four liang of licorice were discussed separately [48]. An analysis of the overall structure of the treatise concluded that its formulas fall into 12 main classes, including the cinnamon-twig, ephedra, and kudzu classes, classified the formulas by treatment method, and summarized the medication characteristics of each method [49]. In addition, topic maps, a knowledge-organization technology that can define relations between pieces of knowledge and realize distributed knowledge integration and sharing, have been used to organize the content of the original text, exploring their application in the Chinese-medicine domain [50].
Knowledge graphs, an emerging technology, have also been tried on the treatise. A method based on conditional random fields (CRF) was used to recognize Chinese-medicine terms in the text, symptoms, disease names, pulse qualities, formulas and so on, with a recognition accuracy of 85% and an F-value of 75.56% [51]. Taking the formula-bearing clauses as the source, an artificial neural network was used to build a formula-pattern database from 245 clauses; with 100 clauses as training samples and the symptoms of the Guizhi Tang, Mahuang Tang, Xiao Chaihu Tang, and Baihu Tang patterns as tests, symptom-to-drug prediction reached an accuracy of 79% [52]. The Neo4j graph database has been used to build a small knowledge graph of the Guizhi Tang class of formulas, visualizing patterns, formulas, and drugs and laying a foundation for the construction of knowledge graphs in Chinese medicine [53].
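To make the Neo4j approach in the last paragraph concrete, here is a small sketch assuming the Neo4j 5.x Python driver (GraphDatabase.driver, execute_query). The node labels, relationship types, and connection credentials are placeholders, not the schema of the cited study [53].

```python
from neo4j import GraphDatabase  # official Neo4j Python driver, 5.x API assumed

URI, AUTH = "bolt://localhost:7687", ("neo4j", "password")  # placeholder credentials

# Illustrative pattern -> formula -> herb triple for a Guizhi Tang class graph.
BUILD = """
MERGE (p:Pattern {name: 'Taiyang wind strike'})
MERGE (f:Formula {name: 'Guizhi Tang'})
MERGE (h:Herb {name: 'guizhi'})
MERGE (p)-[:TREATED_BY]->(f)
MERGE (f)-[:CONTAINS {dose: '3 liang'}]->(h)
"""

QUERY = (
    "MATCH (p:Pattern)-[:TREATED_BY]->(f:Formula)-[:CONTAINS]->(h:Herb) "
    "RETURN p.name AS pattern, f.name AS formula, collect(h.name) AS herbs"
)

def main() -> None:
    with GraphDatabase.driver(URI, auth=AUTH) as driver:
        driver.execute_query(BUILD)                   # build the tiny graph
        records, _, _ = driver.execute_query(QUERY)   # then read it back
        for r in records:
            print(r["pattern"], "->", r["formula"], "->", r["herbs"])

if __name__ == "__main__":
    main()
```

MERGE (rather than CREATE) keeps the sketch idempotent, so re-running it does not duplicate nodes, which matters when a graph is rebuilt clause by clause.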
4 Summary
The literature surveyed above all aims to analyze the meaning behind the treatise from the angles of philosophy, computer science, and other fields, and this multi-disciplinary approach has produced real results. Reading the original text, however, shows that different concepts are sometimes confounded or expressed by different words with the same meaning, and the relations between concepts are intricate. Limited by their techniques, the works reviewed here have not fully organized and presented the connections between the treatise's concepts, but they nonetheless provide a philosophical basis and a methodological reference for subsequent research.

References:
[1] 余瀛鳌. 中医古籍整理与文献研究的今昔观[J]. 中医药文化, 2008, 3(3): 8-10.
[2] 钱超尘. 伤寒论文献通考[M]. 北京: 学苑出版社, 2001: 1-2.
[3] 钱超尘. 日本安政本《伤寒论》[J]. 中医文献杂志, 2011, 29(1): 1-3.
[4] 钱超尘. 《汤液经法》《伤寒论》《辅行诀》古今谈(待续)[J]. 世界中西医结合杂志, 2008, 3(6): 311-315.
[5] 伤寒论语译[M]. 任应秋, 语译. 上海: 上海卫生出版社, 1958: 2-3.
[6] 宋本伤寒论校注[M]. 朱佑武, 校注. 长沙: 湖南科学技术出版社, 1982: 1.
[7] 伤寒论校注语译[M]. 郭霭春, 张海玲, 校注语译. 北京: 中国中医药出版社, 1996: 1-5.
[8] 马继兴. 敦煌古医籍考释[M]. 南昌: 江西科学技术出版社, 1988: 3.
[9] 任廷革, 刘晓峰, 王庆国, 等. 面向主题的中医古籍文献的信息研究[C]//中华中医药学会中医诊断学分会成立暨学术研讨会论文集. 中华中医药学会, 2006: 309-313.
[10] 任廷革, 萧旭泰, 刘晓峰, 等. 中医理论现代化的基础: 中医信息学研究[J]. 中医药临床杂志, 2005, 17(1): 91-92.
[11] 王宝瑞. 试论《伤寒论》六经辨证理论体系中的信息论方法[J]. 医学与哲学, 1986, 5(8): 35-37.
[12] 周雪亮. 《伤寒论》六经的系统模型本质[D]. 济南: 山东中医药大学, 2008.
[13] 蔡永敏, 陈丽平, 孙大鹏. 《伤寒论》六经病证知识分类体系研究[J]. 中华中医药杂志, 2015, 30(1): 208-211.
[14] 杨振华. 浅析《伤寒论》常用的逻辑思维方法[J]. 天津中医学院学报, 2006, 25(1): 44.
[15] 祝茜, 赵明君. "从一般到特殊"思维方法在《伤寒论》中的运用[J]. 河南中医, 2016, 36(6): 936-939.
[16] 郭瑨, 贾春华. 张仲景方证理论的逻辑探讨[C]//中华中医药学会仲景学说分会全国第二十一次仲景学说学术年会论文集. 中华中医药学会, 2013: 33-36.
[17] 吴清荣. 因明学视域下之张仲景方证理论体系研究[D]. 北京: 北京中医药大学, 2016.
[18] 王顺治, 贾春华. 基于穆勒五法的张仲景用药规律研究[J]. 中医杂志, 2016, 57(5): 387-390.
[19] 杨培坤. 运用电脑对《伤寒论》辨证论治思想体系的验证(一)[J]. 湖北中医杂志, 1981, 11(3): 1-4.
[20] 杨培坤. 运用电脑对《伤寒论》辨证论治思想体系的验证(二)[J]. 湖北中医杂志, 1981, 11(4): 49-56.
[21] 杨培坤, 张淦生. 张仲景《伤寒论》的信息处理模型及其实现[J]. 华中师院学报(自然科学版), 1981, 20(2): 104-112.
[22] 杨武金. 《伤寒论》的条件关系分析与知识表达[J]. 重庆理工大学学报(社会科学), 2010, 24(10): 70-74.
[23] 贾春华, 王永炎, 黄启福, 等. 基于命题逻辑的伤寒论方证论治系统构建[J]. 北京中医药大学学报, 2007, 30(6): 369-373.
[24] 贾春华, 王永炎, 黄启福, 等. 关于《伤寒论》中的假言命题及其推理[J]. 北京中医药大学学报, 2007, 30(1): 9-12.
[25] 苏芮, 韩振蕴, 范吉平. 《伤寒论》研究中逻辑推理的应用[J]. 中国中医基础医学杂志, 2010, 16(10): 862-863.
[26] 邹崇理. 从语言逻辑角度解析《伤寒论》的条件句[J]. 重庆理工大学学报(社会科学), 2010, 24(10): 65-69.
[27] 贾春华, 王庆国, 王永炎, 等. 基于相干蕴涵原理的《伤寒论》条件句分析[J]. 江苏中医药, 2007, 39(2): 10-12.
[28] 王瑞祥. 中医语言的形式逻辑体系刍议[J]. 中医药导报, 2013, 19(5): 19-20.
[29] 马思思, 贾春华, 郭瑨, 等. 基于推理有效式的张仲景药物主治分析[J]. 北京中医药大学学报, 2016, 39(10): 807-810.
[30] 官晖. 基于分类关联规则的仲景方挖掘研究[D]. 福州: 福建中医学院, 2008.
[31] 陈丽平, 蔡永敏. 《伤寒论》药对使用规律关联规则研究[J]. 北京中医药大学学报, 2013, 36(12): 808-811.
[32] 张琴. 基于关联规则的《伤寒论》症-药关系研究[D]. 北京: 北京中医药大学, 2017.
[33] 陈明, 陆建峰, 赵国平. 基于双向强关联规则的仲景桂枝汤类方研究[J]. 南京中医药大学学报, 2009, 25(4): 249-251.
[34] 汤尔群, 任廷革, 陈明, 等. 基于数据挖掘方法的《伤寒论》方证知识挖掘研究[J]. 中国中医药信息杂志, 2012, 19(4): 31-34.
[35] 汤尔群, 任廷革, 陈明, 等. 基于数据挖掘方法的《伤寒论》非衡量器药物剂量研究[J]. 中国中医药信息杂志, 2009, 16(10): 90-93.
[36] 汤尔群, 任廷革, 陈明, 等. 基于方证知识挖掘的中药剂量范围研究[J]. 中国中医药信息杂志, 2009, 16(12): 94-96.
[37] 高晶晶. 基于量化组方研究的方剂寒热属性可视化分析平台构建[D]. 北京: 北京中医药大学, 2009.
[38] 潘大为. 运用AHP法建立方证耦合模型: 一种方证研究的新思路[J]. 时珍国医国药, 2011, 22(5): 1226-1228.
[39] 孙燕, 臧传新, 任廷革, 等. 支持向量机方法在《伤寒论》方分类建模中的应用[J]. 中国中医药信息杂志, 2007, 14(1): 101-102.
[40] 王倩, 傅延龄, 陈文强, 等. 基于医案文献的葛根芩连汤、黄芩汤、白头翁汤三方分类鉴别研究[J]. 中国中医急症, 2017, 26(1): 33-37.
[41] 杨涛, 吴承玉. 基于人工神经网络的《伤寒论》方证知识库的构建[J]. 世界科学技术-中医药现代化, 2013, 15(9): 2033-2036.
[42] 陈擎文. 基于人工神经网络的中医证治模型探析[J]. 中华中医药学刊, 2009, 27(7): 1517-1520.
[43] 周璐, 李光庚, 孙燕, 等. 复合结构智能化辨证选方模型的构建[J]. 世界中医药, 2018, 13(2): 479-483.
[44] 郑存芳, 李少雄, 栾景民, 等. 偏序结构环形图: 一种大数据偏序结构可视化新方法[J]. 燕山大学学报, 2014, 38(5): 409-415.
[45] 刘超男, 李赛美, 洪文学. 基于形式概念分析数学理论研究《伤寒论》方药整体知识[J]. 中医杂志, 2014, 55(5): 365-368.
[46] 刘超男, 徐笋晶, 李赛美, 等. 基于多层次复杂概念网络表示方法的《伤寒论》方药按治法分类的知识发现[J]. 北京中医药大学学报, 2014, 37(7): 452-457.
[47] 刘超男, 邓烨, 李赛美, 等. 基于多层次复杂概念网络生成方法的Sunshine图发现《伤寒论》小便相关方药知识[J]. 时珍国医国药, 2015, 26(9): 2277-2278.
[48] 邓烨, 刘超男, 李赛美, 等. 基于数学属性偏序表示原理的《伤寒论》甘草方剂配伍量效群结构知识发现[J]. 中国实验方剂学杂志, 2016, 22(7): 213-217.
[49] 刘超男, 邓烨, 李赛美, 等. 大数据时代下中医方药知识发现新方法[J]. 燕山大学学报, 2014, 38(5): 423-427.
[50] 李芹, 苏大明, 张华敏, 等. 以《伤寒论》为例探讨主题图技术在中医药知识组织中的应用[J]. 国际中医中药杂志, 2017, 39(2): 101-105.
[51] 孟洪宇, 谢晴宇, 常虹, 等. 基于条件随机场的《伤寒论》中医术语自动识别[J]. 北京中医药大学学报, 2015, 38(9): 587-590.
[52] 杨涛, 吴承玉. 基于人工神经网络的《伤寒论》方证知识库的构建[J]. 世界科学技术-中医药现代化, 2013, 15(9): 2033-2036.
[53] 赵凯, 王华星, 施娜, 等. 基于Neo4j桂枝汤类方知识图谱的研究与实现[J]. 世界中医药, 2019, 14(10): 2636-2639.
Received: 2020-07-08
Fiscal Policy Rules in an Open Economy: A DSGE Model Based on Chinese Macroeconomic Data
Author: ZHU Jun [1,2]
Affiliations: [1] School of Public Finance and Taxation, Nanjing University of Finance and Economics, Nanjing, Jiangsu 210046; [2] Postdoctoral Research Station, Research Institute for Fiscal Science, Ministry of Finance, Beijing 100142
Journal: Journal of Finance and Economics (财经研究)
Pages: 135-144
Year/Issue: 2013, No. 3
Keywords: open economy; DSGE model; fiscal rules; Bayesian estimation
Abstract: To date, few studies have examined China's fiscal policy and its rules in an open-economy setting. By constructing an open-economy macro-fiscal DSGE model based on China's macroeconomic data, this paper studies fiscal policy rules and their economic effects in an open economy and compares the outcomes of different rules. Numerical simulation with Bayesian estimation shows that, in an open economy, the immediate "multiplier effect" of fiscal policy is relatively small; a "continuous spending rule" and an "output-targeting rule" have similar output effects, so the output-targeting rule can be used to exercise the government's leading role in an open economy; and the policy effects of a "strict" debt constraint resemble those of a "loose" one, so the latter can be adopted to make fuller use of resources.
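To make the comparison of rules concrete, here is a stylized Python simulation of the two spending rules named in the abstract. It is not the paper's DSGE model: the AR(1) output-gap process and all coefficients (rho_y, rho_g, phi_y) are illustrative assumptions, not Bayesian estimates from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
T = 400
rho_y, rho_g, phi_y = 0.8, 0.7, 0.5   # illustrative values, not estimated

# Output gap: a stylized AR(1) process standing in for the model economy.
y = np.zeros(T)
for t in range(1, T):
    y[t] = rho_y * y[t - 1] + rng.normal(scale=0.01)

def simulate_spending(react_to_output: bool) -> np.ndarray:
    """Log-deviation of government spending under an AR(1) rule,
    optionally leaning against the output gap (counter-cyclical)."""
    g = np.zeros(T)
    for t in range(1, T):
        reaction = -phi_y * y[t] if react_to_output else 0.0
        g[t] = rho_g * g[t - 1] + (1 - rho_g) * reaction + rng.normal(scale=0.005)
    return g

g_cont = simulate_spending(react_to_output=False)   # "continuous spending rule"
g_out = simulate_spending(react_to_output=True)     # "output-targeting rule"
print("corr(g, y), continuous rule:", round(np.corrcoef(g_cont, y)[0, 1], 2))
print("corr(g, y), output rule:    ", round(np.corrcoef(g_out, y)[0, 1], 2))
```

The only difference between the two rules is the reaction term, which is the sense in which an output-targeting rule lets the government "lead" while a pure spending rule drifts independently of the cycle.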
Reading List for the Applied Psychology Professional Master's Program at Zhejiang Normal University
The reading list for Zhejiang Normal University's applied psychology professional master's program includes:
1. Introduction to Psychology (《心理学导论》, 3rd ed.), ed. Chen Huichang, People's Education Press.
2. Social Psychology (《社会心理学》, 2nd ed.), ed. Hou Yubo, Peking University Press.
3. Clinical Psychology (《临床心理学》, 2nd ed.), ed. Wang Dengfeng, People's Medical Publishing House.
4. Developmental Psychology (《发展心理学》, 2nd ed.), ed. Lin Chunhong, People's Education Press.
5. Personality Psychology (《人格心理学》, 2nd ed.), ed. Zhang Yaxu, People's Medical Publishing House.
6. Psychometrics (《心理测量学》, 2nd ed.), ed. Dai Haiqi et al., Higher Education Press.
7. Experimental Psychology (《实验心理学》, 2nd ed.), ed. Guo Xiuyan, People's Education Press.
8. Cognitive Psychology (《认知心理学》, 2nd ed.), eds. 莫莉森 and 赫布纳, Peking University Press.
9. Educational Psychology (《教育心理学》, 2nd ed.), ed. Liu Zhaoji, People's Education Press.
10. Social Psychology (《社会心理学》, 2nd ed.), ed. Hou Yubo, Peking University Press.
These titles span multiple areas of psychology, including social, clinical, developmental, and personality psychology, psychometrics, and experimental, cognitive, and educational psychology.
For students in Zhejiang Normal University's applied psychology professional master's program, they are essential study materials.
2020-2021 Shantou University Public Administration Postgraduate Entrance Exam: Past Papers and Re-examination Reference Books
Yuming Education, teacher Dayin, Sunday, October 10, 2019.
I. Reference books for the 2020 public administration re-examination. At Tsinghua, Peking University, Renmin University, Beijing Normal, Beihang, Nankai, Sun Yat-sen, Wuhan, Fudan, and similar universities, the public administration re-examination increasingly tests current policy issues, especially hot policy topics such as the industrial-policy debate, garbage sorting, and the "fangguanshu" (streamlining administration and delegating power) reform, so candidates should read several books on current topics, such as:
Public Administration (《公共管理学》), Li Guozheng, Capital Normal University Press, 2018;
Public Policy Analysis (《公共政策分析》), Li Guozheng, Capital Normal University Press, 2019;
Public Administration: Exam Points, Hot Topics and Past-Paper Analysis (《公共管理学:考点热点与真题解析》), Capital Normal University Press, 2020;
Public Policy Analysis: Exam Points, Hot Topics and Past-Paper Analysis (《公共政策分析:考点热点与真题解析》), Capital Normal University Press, 2020.
II. Reference books for the 2021 Shantou University exam. Subjects: 101 Ideological and Political Theory; 201 English I; 622 Public Management Theory; 809 Political Science and Public Administration. For 622 Public Management Theory:
Introduction to Management (《管理学概论》), ed. Shao Chong, Sun Yat-sen University Press, 2005;
Principles of Public Management (《公共管理原理》), Chen Zhenming, China Renmin Press, 2003.
Also: Public Administration: Exam Points, Hot Topics and Past-Paper Analysis, Capital Normal University Press, 2020; Public Policy Analysis: Exam Points, Hot Topics and Past-Paper Analysis, Capital Normal University Press, 2020; Public Administration, Li Guozheng, Capital Normal University Press, 2018. For 809 Political Science and Public Administration: Political Science (《政治学》, MPA series), eds. Sun Guanhong and Hu Yuchun, Fudan University Press, May 2002; Public Administration (《公共行政学》, 2nd ed., MPA series), ed. Zhu Qianwei, Fudan University Press, 2002.
III. Notes on the reference books. 2. The mutual adaptation model. The American scholar McLaughlin holds that policy implementation is a process in which implementers and those affected by the policy mutually adjust their goals or means; it is a process of dynamic equilibrium, and the effectiveness of implementation depends on the degree of mutual adaptation between the two sides.
This is illustrated in Figure 15-2.
[Figure 15-2: The mutual adaptation model of policy implementation.]
McLaughlin's mutual adaptation model rests on at least four logical premises: (1) the needs and views of implementers and those affected are not fully aligned; given their shared stake in the policy, the two sides must settle, through explanation, negotiation, and compromise, on a mode of implementation acceptable to both; (2) mutual adaptation is a two-way exchange between parties of equal standing, not the traditional one-way, top-down flow of orders; (3) the implementers' goals and means may change with environmental factors and with the needs and views of those affected; (4) the interests and value orientations of those affected feed back into the policy, thereby influencing the interests and value orientations of the implementers.
Seminar on Intellectual Property Cases in Biotech and Chinese Herbal Pharmaceuticals
(Intellectual Property of Biotechnology and Pharmaceutical Science)
~~ Registration is warmly welcomed ~~
The seminar's lecturer, Huang Hsiu-min, holds a doctorate and graduated from law school with honors, and is a U.S. patent attorney; before this she practiced in Hong Kong.
She currently practices as an attorney in California.
Her clients have included 3M (U.S.), General Mills, Hong Kong Baitai Pharmaceutical Technology, Vita Green, Hong Kong Baptist University, CK Life Sciences (Hong Kong), The Hong Kong Polytechnic University, and the Hong Kong Jockey Club Institute of Chinese Medicine, among others.
Her technical fields span chemistry, pharmaceuticals, biotechnology, medical devices, diagnostics, agriculture, and food chemistry.
Her practice covers patent counseling, patent prosecution, opinions on patentability, infringement, and validity, patent drafting and enforcement, and patent portfolio management.
Ms. Huang graduated from the School of Pharmacy of Taipei Medical College and holds a master's degree in pharmacology from the National Taiwan University College of Medicine.
She received her PhD in pharmacology and toxicology from the University of Mississippi Medical Center in 1990, and in 2003 graduated with honors from William Mitchell College of Law in Minnesota.
She is currently a licensed attorney in Minnesota and Washington, DC, and a registered attorney with the U.S. Patent and Trademark Office.
Ms. Huang has a broad technical background and hands-on experience with both drug applications and patent prosecution: she served as a technical officer in the Bureau of Pharmaceutical Affairs of Taiwan's Department of Health; from 1991 to 1996 she did postdoctoral research at the National Institutes of Health in Maryland, publishing as first author with Nobel laureate Dr. Marshall Nirenberg; she has served as a biotechnology patent examiner at the U.S. Patent and Trademark Office, as a patent liaison specialist at the multinational Cargill, in legal practice at a prominent Minnesota law firm, and as a science and technology policy fellow at the U.S. National Academies.
For this seminar, the Biotech and Chinese Herbal Pharmaceuticals Teaching Resource Center of National Yang-Ming University is pleased to have invited Ms. Huang as lecturer; drawing on her expertise and rich experience in both biology and law, she will share her incisive insights and analysis. This is the only such opportunity in 2007, and the program promises to be excellent; everyone is welcome to join. Respectfully, the Taiwan Technology Managers Association, combining professional analysis from the biological sciences and the legal profession. ~Only one session in 2007; register early~ To register, pay the full course fee by Saturday, August 25, by the following method: 1. Remittance: transfer the fee to the Taiwan Technology Managers Association, Taipei Zhonglun Post Office account 00013440745541.
Published List of Graduate-Level English-Taught Course Projects, Second Semester, 2012-2013 Academic Year
[Table residue: the original table listed, per course, the instructor's name, gender, nationality, degree, academic title, home institution, enrollment, host school, and class hours. Recoverable details include visiting instructors from institutions such as the University of Missouri-Kansas City, Illinois Institute of Technology, Brunel University, the Ian Wark Research Institute (Australia), the University of Hong Kong, Lehigh University, UT Dallas, Teachers College Columbia University, the University of Denver, the University of Tennessee, ETH Zurich, the University of Illinois at Chicago, UT Arlington, Utah State University, the University of Florida, the University of Chicago, Lewis-Clark State College, and National Yang-Ming University (Taiwan), including Klaus Kunzmann, Yuk Lee, and David Leatherbarrow; host schools included the School of Biological Science and Medical Engineering, School of Economics and Management, School of Foreign Languages, School of Electrical Engineering, School of Chemistry and Chemical Engineering, and School of Transportation; courses included Frontiers of Medical Imaging and Image Processing, Biomaterials Science, Operations and Production Management, Supply Chain Modeling, Academic Writing in English, Second Language Acquisition, Smart Grid, Fundamentals of Electricity Markets, Optimization Theory and Techniques, Colloid Science and Engineering, Environmental Geotechnics, Modern In-situ Testing Theory, Urban Transportation Network Analysis, and Remote Sensing Data Processing and Analysis.]
A Report on Simulated Interpreting Practice at Thomas Bach's Press Conference from the Perspective of Relevance Theory

Abstract: Thomas Bach's press conferences are a complex communicative setting involving interaction among participants with multiple languages and different cultural backgrounds. Applying the perspective of relevance theory, this report analyzes an interpreter's performance in a simulated interpreting exercise for a Thomas Bach press conference, examining how the interpreter copes with linguistic and cultural differences, processes contextual information, and builds rapport between speaker and audience. The results show that the interpreter was able to apply relevance theory effectively, achieving accurate interpretation and effective communication by accumulating linguistic and background knowledge, attending to contextual shifts and affective factors, and judiciously using strategies such as paraphrase and omission. The practice also helps interpreters further improve their linguistic and cross-cultural communication skills.

Keywords: Thomas Bach; press conference; interpreting practice; relevance theory; linguistic and cultural differences; communication strategies

1. Introduction
In recent years, with the steady acceleration of globalization, more and more people engage in cross-cultural communication, and interpreting, one of the key means of interlingual communication, has become a fixture of cultural exchange, business negotiation, political consultation, and similar settings. Yet in communication among participants with multiple languages and different cultural backgrounds, problems such as linguistic and cultural differences, errors in information transfer, and misunderstandings occur from time to time. How to cope with these problems effectively and achieve accurate interpretation and effective communication has therefore become an essential competence for interpreters. Applying the perspective of relevance theory, this report analyzes an interpreter's performance in a simulated interpreting exercise for a Thomas Bach press conference, examining how the interpreter copes with linguistic and cultural differences, processes contextual information, and builds rapport between speaker and audience, with the aim of further improving the interpreter's linguistic and cross-cultural communication competence.

2. The Simulated Interpreting Exercise
A Thomas Bach press conference is an international press event that each year brings together media representatives and government officials from around the world. In this communicative setting, linguistic and cultural differences, situation, topic, and other factors all affect the participants' interaction, and its complexity should not be underestimated. In the simulated exercise, the interpreter must render the host's and guests' remarks into another language in real time so that the audience can understand them.
Causal Process Tracing as a Method of Abductive Inference and Its Application in Public Policy Research
Compared with quantitative research, qualitative research is clearly better at uncovering and describing the causal mechanisms of policy change (Zhu Tianbiao, 2017). "A complete explanation must specify a mechanism that describes the process by which one variable influences another, in other words, how X produces Y" (Kiser and Hechter, 1991). The qualitative methodological tradition studies real social cases through case analysis and is an important vehicle for public management and public policy research at home and abroad (Sigelman and Gadbois, 1983; Ma Jun, 2012). Through panoramic description of the public policy process, case analysis captures the object of study from multiple angles and can reveal the causal mechanisms of policy implementation; its methodological bottleneck, however, is the weak explanatory power of causal inference from the particular to the general (Eisenhardt, 1989; Yin, 1994; Goodin, 2009). To strengthen causal inference, the qualitative methodology of public policy research needs a further breakthrough. The recently emerging method of Causal Process Tracing has the methodological advantage of identifying and correcting endogeneity problems such as spurious causal association and omitted-variable bias (Falleti, 2016; Zhang Changdong, 2018), and has become an important methodological tool for public policy scholars.
BusMgt 331 Case Study #2
Due Monday (January 30th) / Tuesday (January 31st) at the beginning of class

Part A --- Production Problem
Pentagonal Pictures produces motion pictures in Hollywood and distributes them nationwide. Currently, it is considering 10 possible films; these include dramas, comedies, and action adventures. The success of each film depends somewhat on both the strength of the subject matter and the appeal of the cast. Estimating the cost of a film and its potential box office draw is inexact at best; still, the studio must rely on its experts' opinions to help it evaluate which projects to undertake.
The following table lists the films currently under consideration by Pentagonal Pictures, including the projected cost and box office gross receipts.
In addition to these production costs, each movie will have a $1 million advertising budget, which will increase to $3 million if the movie is to have a "big star" cast. Assume that the studio receives 80% of a film's gross receipts. The company would like to maximize its net profit [gross profit – (production costs + advertising costs)] for the year. Pentagonal Pictures has a production budget of $100 million and an advertising budget of $15 million. In addition, it would like to adhere to the following restrictions:
∙ At least half of the films produced should have a rating of PG or PG-13.
∙ At least two comedies are to be produced.
∙ If The Crash is produced, Bombs Away will not be.
∙ At least one drama is to be produced.
∙ At least two films should have big star casts.
∙ At least two PG films should be produced.
∙ At least one action movie with a big star cast should be produced.
∙ A maximum of one version of any film will be produced.
Formulate a binary program to determine which films (with which casts) should be produced.
1. Put the decision variables in Row 1 of the worksheet. Since the decision variable names will be long, it is OK to use shorter names, but a key must be provided below the linear programming formulation on the same worksheet.
2. Row 2 of the worksheet must be the linear program solution.
3. Put the constraint names in Column A of the worksheet. Since the constraint names will be long, it is OK to use shorter names, but a key must be provided below the linear programming formulation on the same worksheet.
4. The solution must be checked to ensure that all constraints have been satisfied.
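Before building the Excel worksheet, the binary program can be prototyped in Python with PuLP (a sketch only, not the required deliverable). Because the film table is not reproduced in this copy of the handout, the film names, costs, grosses, ratings, and genres below are invented placeholders; only the budget and policy constraints follow the assignment text.

```python
import pulp

# Placeholder film data: production cost ($M), gross ($M), rating, genre, big-star cast.
films = {
    "CrashSmall":   dict(cost=20, gross=60, rating="PG-13", genre="action", star=0),
    "CrashBig":     dict(cost=28, gross=90, rating="PG-13", genre="action", star=1),
    "BombsAwayBig": dict(cost=30, gross=85, rating="R",     genre="action", star=1),
    "LaughTrack":   dict(cost=12, gross=40, rating="PG",    genre="comedy", star=0),
    "SecondLaugh":  dict(cost=15, gross=45, rating="PG",    genre="comedy", star=0),
    "TearJerker":   dict(cost=18, gross=50, rating="PG-13", genre="drama",  star=1),
}

def ad(f):                      # advertising: $1M, or $3M with a big-star cast
    return 3 if films[f]["star"] else 1

x = pulp.LpVariable.dicts("make", films, cat="Binary")
prob = pulp.LpProblem("PentagonalPictures", pulp.LpMaximize)

# Net profit = 80% of gross - production cost - advertising cost.
prob += pulp.lpSum((0.8 * films[f]["gross"] - films[f]["cost"] - ad(f)) * x[f]
                   for f in films)

prob += pulp.lpSum(films[f]["cost"] * x[f] for f in films) <= 100      # production budget
prob += pulp.lpSum(ad(f) * x[f] for f in films) <= 15                  # advertising budget
prob += pulp.lpSum(x[f] for f in films if films[f]["rating"] in ("PG", "PG-13")) \
        >= 0.5 * pulp.lpSum(x[f] for f in films)                       # >= half PG/PG-13
prob += pulp.lpSum(x[f] for f in films if films[f]["genre"] == "comedy") >= 2
prob += pulp.lpSum(x[f] for f in films if films[f]["genre"] == "drama") >= 1
prob += pulp.lpSum(x[f] for f in films if films[f]["star"]) >= 2
prob += pulp.lpSum(x[f] for f in films if films[f]["rating"] == "PG") >= 2
prob += pulp.lpSum(x[f] for f in films
                   if films[f]["genre"] == "action" and films[f]["star"]) >= 1
prob += x["CrashSmall"] + x["CrashBig"] <= 1               # one version of The Crash
# "If The Crash is produced, Bombs Away will not be" is logically "not both":
prob += x["CrashSmall"] + x["CrashBig"] + x["BombsAwayBig"] <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[prob.status], pulp.value(prob.objective))
print([f for f in films if x[f].value() > 0.5])
```

Note how "If The Crash is produced, Bombs Away will not be" reduces to a single "at most one of them" constraint, since the implication C → ¬B is equivalent to ¬(C ∧ B).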
Part B --- Staffing Problem
Wings Airlines flies international flights between Columbus and Quebec. Management must determine how to staff the ticket counters and the arrival/departure gates between the hours of 6:00am and 9:00pm at the Port Columbus International Airport. Wings Airlines is very aware of problems that can occur with international passengers, so it hires both bilingual agents and agents who speak only English. The agents can work 3-hour shifts, 6-hour shifts, or 9-hour shifts. The table below indicates the minimum number of agents for each 3-hour time period throughout the day:
Agents begin their shifts either at 6:00am, 9:00am, noon, 3:00pm, or 6:00pm. The English-speaking agents are paid $19.00 per hour and bilingual agents are paid $23.00 per hour. In addition, Wings Airlines provides outstanding benefits; the cost of these benefits is $50.00 per agent per day regardless of the length of their shift. No more than 40% of the agents may be on 9-hour shifts, and at least 20% of the agents must be on 3-hour shifts. Wings Airlines' policy is that at least 30% of the agents working during any 3-hour time period be bilingual.
Formulate an integer program to determine the staffing (the number of each type of agent, their starting time, and the number of hours worked) to minimize total staffing cost.
1. Put the decision variables in Row 1 of the worksheet. Since the decision variable names will be long, it is OK to use shorter names, but a key must be provided below the linear programming formulation on the same worksheet.
2. Row 2 of the worksheet must be the linear program solution.
3. Put the constraint names in Column A of the worksheet. Since the constraint names will be long, it is OK to use shorter names, but a key must be provided below the linear programming formulation on the same worksheet.
4. The solution must be checked to ensure that all constraints have been satisfied.
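The staffing model admits the same kind of PuLP prototype. The minimum head counts per 3-hour block below are invented placeholders (the handout's requirement table is not reproduced in this copy); the shift patterns, wages, benefits, and the 40%/20%/30% policy constraints follow the text.

```python
import pulp

periods = [6, 9, 12, 15, 18]                    # 3-hour blocks starting at these hours
req = {6: 10, 9: 14, 12: 12, 15: 16, 18: 8}     # placeholder minimum head counts
wage = {"eng": 19.0, "bil": 23.0}               # $/hour; benefits are $50/agent/day

# Feasible (start, length) shifts: must end by 9:00pm (hour 21).
shifts = [(s, l) for s in periods for l in (3, 6, 9) if s + l <= 21]

n = pulp.LpVariable.dicts(
    "agents", [(s, l, t) for (s, l) in shifts for t in wage], lowBound=0, cat="Integer")

prob = pulp.LpProblem("WingsStaffing", pulp.LpMinimize)
prob += pulp.lpSum((wage[t] * l + 50) * n[(s, l, t)] for (s, l) in shifts for t in wage)

def working(p, t=None):
    """Agents (of type t, or of all types) on duty during the block starting at p."""
    types = [t] if t else list(wage)
    return pulp.lpSum(n[(s, l, tt)] for (s, l) in shifts for tt in types
                      if s <= p < s + l)

total = pulp.lpSum(n[k] for k in n)
for p in periods:
    prob += working(p) >= req[p]                       # minimum staffing per block
    prob += working(p, "bil") >= 0.3 * working(p)      # >= 30% bilingual on duty

prob += pulp.lpSum(n[(s, l, t)] for (s, l) in shifts for t in wage if l == 9) <= 0.4 * total
prob += pulp.lpSum(n[(s, l, t)] for (s, l) in shifts for t in wage if l == 3) >= 0.2 * total

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[prob.status], "cost =", pulp.value(prob.objective))
for k in sorted(n, key=str):
    if n[k].value() and n[k].value() > 0:
        print(k, int(n[k].value()))
```

Indexing the variables by (start time, shift length, agent type) makes the coverage constraint a one-line membership test: an agent covers block p exactly when start <= p < start + length.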
Part C --- Advertising Problem
KEC Foods is planning to introduce a new product (KEC Taco Sauce) alongside its legacy products (KEC Ketchup and KEC Spaghetti Sauce). The new product introduction will result in an increase in its advertising budget from $1.4 million to $2.0 million. In the past, KEC Foods promoted its legacy products individually, splitting its advertising budget equally between ketchup and spaghetti sauce.
Historically, the marketing department estimates that each dollar spent advertising KEC Ketchup individually increases sales by 4.0 bottles. Since KEC has a profit margin of $0.30 per bottle of ketchup, each dollar spent advertising only ketchup increases the company's profit by $0.20 per dollar of advertising (4.0 x $0.30 - $1.00 = $0.20).
Historically, the marketing department estimates that each dollar spent advertising KEC Spaghetti Sauce individually increases sales by 3.2 jars. Since KEC has a profit margin of $0.35 per jar of spaghetti sauce, each dollar spent advertising only spaghetti sauce increases the company's profit by $0.12 per dollar of advertising (3.2 x $0.35 - $1.00 = $0.12).
Because taco sauce is a new product, the marketing department estimates that each dollar spent advertising KEC Taco Sauce individually will increase sales by 11 bottles. Since KEC has a profit margin of $0.10 per bottle of taco sauce, each dollar spent advertising only taco sauce increases the company's profit by $0.10 per dollar of advertising (11 x $0.10 - $1.00 = $0.10).
KEC Foods is considering changing its advertising strategy to allow joint advertising of all 3 of KEC Foods' products. The company projects that, for each dollar spent on joint advertising, the sales of each product would increase by one-third of the existing individual estimate plus another 10%. For example, for each dollar spent in joint advertising, ketchup sales would increase by (4.0 / 3) x (1.10) = 1.4667 bottles, spaghetti sauce sales would increase by 1.1733 jars, and taco sauce sales would increase by 4.0333 bottles.
Note: Linear programming can be extremely sensitive to small changes in values caused by rounding. Use equations whenever possible (allowing EXCEL to keep 13 significant digits) instead of typing values into cells.
KEC Foods wishes to maximize the increase in profits while also "building for the future" by adhering to the following guidelines for this year's advertising budget:
∙ A maximum of $2 million spent in total advertising.
∙ At most $400,000 spent on joint advertising.
∙ At least $100,000 spent on joint advertising.
∙ At least $1 million spent promoting taco sauce, either individually or through joint advertising.
∙ At least $250,000 spent advertising ketchup individually.
∙ At least $250,000 spent advertising spaghetti sauce individually.
∙ At least $750,000 spent advertising taco sauce individually.
∙ At least as much spent this year as last year promoting ketchup, either individually or through joint advertising.
∙ At least as much spent this year as last year promoting spaghetti sauce, either individually or through joint advertising.
∙ An increase of at least 7.5 million total bottles/jars of product sold due to advertising.
Formulate a linear program to determine the allocation of advertising dollars.
1. Decision variables are the amount spent on advertising KEC Ketchup individually, the amount spent on advertising KEC Spaghetti Sauce individually, the amount spent on advertising KEC Taco Sauce individually, and the amount spent advertising all 3 KEC products jointly.
2. Put the decision variables in Row 1 of the worksheet. Since the decision variable names will be long, it is OK to use shorter names, but a key must be provided below the linear programming formulation on the same worksheet.
3. Row 2 of the worksheet must be the linear program solution.
4. Put the constraint names in Column A of the worksheet. Since the constraint names will be long, it is OK to use shorter names, but a key must be provided below the linear programming formulation on the same worksheet.
5. The solution must be checked to ensure that all constraints have been satisfied.

Part C --- Advertising Problem (Sensitivity Analysis)
How much should be spent in advertising for each product?
What is the increase in revenue?
What would the profit be if:
How much slack/surplus is there in the constraints?
A marketing manager would like to borrow funds to increase the total advertising budget. What is the rate of return for each additional dollar spent in advertising?
What would be the total profit if one additional dollar is spent on joint advertising?
What is the range of increased profits for which the optimal solution is still valid?
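Here is a PuLP sketch of the Part C linear program, keeping the per-dollar coefficients as formulas rather than rounded values, as the note above advises. Reading "at least as much as last year" to count each joint dollar fully toward each product is an assumption; and the dual values printed at the end (constraint.pi, available when the underlying solver reports them) are only an approximation of the SOLVER sensitivity questions.

```python
import pulp

K = pulp.LpVariable("ketchup_ind", lowBound=0)
S = pulp.LpVariable("spaghetti_ind", lowBound=0)
T = pulp.LpVariable("taco_ind", lowBound=0)
J = pulp.LpVariable("joint", lowBound=0)

# Per-dollar unit increases for joint advertising, kept as formulas.
jk, js, jt = 4.0 / 3 * 1.10, 3.2 / 3 * 1.10, 11 / 3 * 1.10
joint_profit = jk * 0.30 + js * 0.35 + jt * 0.10 - 1.00   # profit per joint dollar

prob = pulp.LpProblem("KECAdvertising", pulp.LpMaximize)
prob += 0.20 * K + 0.12 * S + 0.10 * T + joint_profit * J

cons = {
    "total_budget":    K + S + T + J <= 2_000_000,
    "joint_max":       J <= 400_000,
    "joint_min":       J >= 100_000,
    "taco_total":      T + J >= 1_000_000,
    "ketchup_min":     K >= 250_000,
    "spaghetti_min":   S >= 250_000,
    "taco_ind_min":    T >= 750_000,
    "ketchup_vs_last": K + J >= 700_000,   # last year: half of $1.4M on ketchup
    "spag_vs_last":    S + J >= 700_000,
    "units_sold":      4.0 * K + 3.2 * S + 11 * T + (jk + js + jt) * J >= 7_500_000,
}
for name, c in cons.items():
    prob += c, name

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[prob.status], "profit =", round(pulp.value(prob.objective), 2))
for v in (K, S, T, J):
    print(v.name, round(v.value(), 2))
for name, c in prob.constraints.items():
    # Slack answers the slack/surplus question; the dual (shadow price) answers
    # the rate-of-return questions for each binding constraint.
    print(name, "slack =", round(c.slack, 2), "dual =", c.pi)
```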
Part D --- Investment Problem
Dave Torchia, a managing partner of the investment firm of Torchia and Arbutina, is designing a portfolio for his client Larry Wasser. Larry wants to invest exactly $500,000. Dave has identified 11 different investments, falling into 4 broad categories, that Dave and Larry agree would be potential candidates for the portfolio. The investments and their characteristics are given below. The expected annual after-tax returns account for all commissions and service charges.
Note that there are two separate investments in the Beckman Corporation and that the Carlton REIT is a single investment (a stock in a real estate investment company).
Dave's objective is to construct a portfolio for Larry that maximizes his total estimated after-tax return over the next year, subject to the following concerns Larry has raised regarding his portfolio:
∙ The average risk factor must be no greater than 55.
∙ The average liquidity factor must be at least 85.
∙ At least $10,000 is to be invested in the Beckman Corporation.
∙ At least 20% but no more than 50% of the "non-money" portion of the portfolio should be from any one category of investment.
∙ With the exception of the money category investments, no more than 20% of the total portfolio should be in any one investment.
∙ At least $20,000 should be invested in the money market fund.
∙ A minimum investment of $125,000 should be in bonds.
∙ No more than 40% of the total portfolio should be in investments with expected annual after-tax returns of less than 10% and risk factors exceeding 25.
∙ At least one-half of the portfolio must be totally liquid (i.e., have a liquidity factor of 100).
Formulate a linear program to determine the investment portfolio distribution.
1. The decision variables are the amounts placed in each of the 11 investments.
2. Put the decision variables in Row 1 of the worksheet. Since the decision variable names will be long, it is OK to use shorter names, but a key must be provided below the linear programming formulation on the same worksheet.
3. Row 2 of the worksheet must be the linear program solution.
4. Put the constraint names in Column A of the worksheet. Since the constraint names will be long, it is OK to use shorter names, but a key must be provided below the linear programming formulation on the same worksheet.
5. The solution must be checked to ensure that all constraints have been satisfied.

Part D --- Investment Problem (Sensitivity Analysis)
What is the optimum distribution of funds in each investment?
What is the optimal after-tax return (in dollars)?
What is the average risk when the optimal after-tax return is achieved?
What is the average liquidity when the optimal after-tax return is achieved?
Dave always invests in Beckman stock for good luck. If Dave invests $1.00 in Beckman stock, what is the new after-tax return on Larry's portfolio?
If Larry had an additional $5,000 to invest, what would be the new after-tax return on his portfolio?
What is the range of annual after-tax returns for the 11 investments for which the optimal solution is still valid?

Investments | Maximum Return | Minimum Return
Beckman Corporation (Stock) | |
Beckman Corporation (Bond) | |
Carlton REIT | |
Certificate of Deposit | |
LA Power | |
Metropolitan Transit | |
Money Market Fund | |
Qube Electronics | |
SoCal Partnership | |
Treasury Bills | |
Taco Grande | |
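Finally, a PuLP sketch of the Part D portfolio model. The handout's investment table is not reproduced in this copy, so the returns, risk factors, liquidity factors, and category assignments below are invented placeholders (and abridged to eight investments); only the constraint structure follows the text, and the low-return/high-risk guideline is encoded under one reading of that sentence.

```python
import pulp

# Placeholder data: after-tax return, risk factor, liquidity factor, category.
inv = {
    "BeckmanStock":  dict(ret=0.12, risk=60, liq=100, cat="stock"),
    "BeckmanBond":   dict(ret=0.09, risk=40, liq=80,  cat="bond"),
    "CarltonREIT":   dict(ret=0.11, risk=70, liq=60,  cat="real_estate"),
    "CD":            dict(ret=0.05, risk=5,  liq=70,  cat="money"),
    "LAPowerBond":   dict(ret=0.08, risk=30, liq=90,  cat="bond"),
    "MoneyMarket":   dict(ret=0.04, risk=0,  liq=100, cat="money"),
    "QubeStock":     dict(ret=0.15, risk=80, liq=100, cat="stock"),
    "TreasuryBills": dict(ret=0.05, risk=0,  liq=100, cat="money"),
}

total = 500_000
x = pulp.LpVariable.dicts("usd", inv, lowBound=0)
prob = pulp.LpProblem("TorchiaPortfolio", pulp.LpMaximize)
prob += pulp.lpSum(inv[i]["ret"] * x[i] for i in inv)           # after-tax return

prob += pulp.lpSum(x[i] for i in inv) == total                  # invest exactly $500,000
prob += pulp.lpSum(inv[i]["risk"] * x[i] for i in inv) <= 55 * total   # average risk
prob += pulp.lpSum(inv[i]["liq"] * x[i] for i in inv) >= 85 * total    # average liquidity
prob += x["BeckmanStock"] + x["BeckmanBond"] >= 10_000
prob += x["MoneyMarket"] >= 20_000
prob += pulp.lpSum(x[i] for i in inv if inv[i]["cat"] == "bond") >= 125_000

non_money = pulp.lpSum(x[i] for i in inv if inv[i]["cat"] != "money")
for cat in ("stock", "bond", "real_estate"):        # 20%-50% of non-money per category
    cat_sum = pulp.lpSum(x[i] for i in inv if inv[i]["cat"] == cat)
    prob += cat_sum >= 0.20 * non_money
    prob += cat_sum <= 0.50 * non_money

for i in inv:                                       # 20% cap on any non-money holding
    if inv[i]["cat"] != "money":
        prob += x[i] <= 0.20 * total

# One reading of the guideline: low-return (<10%), high-risk (>25) holdings
# are capped at 40% of the total portfolio.
prob += pulp.lpSum(x[i] for i in inv
                   if inv[i]["ret"] < 0.10 and inv[i]["risk"] > 25) <= 0.40 * total
prob += pulp.lpSum(x[i] for i in inv if inv[i]["liq"] == 100) >= 0.50 * total

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[prob.status], "return =", round(pulp.value(prob.objective), 2))
for i in inv:
    print(i, round(x[i].value(), 2))
```

Because the category-share constraints are ratios of decision variables, they must be multiplied out into linear form (as above) before SOLVER or any LP solver will accept them; this is the same linearization the worksheet formulation needs.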
Case Study Requirements:
(a) Cover sheet with all team member names [1 page portrait].
(b) Production Problem – Print the worksheet on one page, landscape, and centered with margins set to 0.25" [1 page landscape].
(c) Staffing Problem – Print the worksheet on one page, landscape, and centered with margins set to 0.25" [1 page landscape].
(d) Marketing Problem – Print the worksheet on one page, landscape, and centered with margins set to 0.25" [1 page landscape].
(e) Marketing Problem – Print the SOLVER sensitivity analysis on one page, landscape, and centered with margins set to 0.25" [1 page landscape].
(f) Marketing Problem – Print the sensitivity analysis report (page 4 of this document) with the answers typed on one page, portrait, and centered with margins set to 0.25" [1 page portrait].
(g) Investment Problem – Print the worksheet on one page, landscape, and centered with margins set to 0.25" [1 page landscape].
(h) Investment Problem – Print the SOLVER sensitivity analysis on one page, landscape, and centered with margins set to 0.25" [1 page landscape].
(i) Investment Problem – Print the sensitivity analysis report (page 6 of this document) with the answers typed on one page, portrait, and centered with margins set to 0.25" [1 page portrait].
Points will be deducted for "non-professional" reports or excess output.
In addition, a single Excel workbook with six worksheets ("Production", "Staffing", "Marketing", "Marketing Sensitivity", "Investment", and "Investment Sensitivity") must be sent as an attachment to an e-mail to Dr. Mark before the beginning of class. Dr. Mark will use the date/time stamp of the e-mail to determine if the file was received on time. Use the following formats:
File name must be "BusMgt 331 Case 2 Group xxx" (replace the xxx with the group number)
E-mail subject must be "BusMgt 331 Case 2 Group xxx" (replace the xxx with the group number)

< Last Name, First Name >
< Last Name, First Name >
< Last Name, First Name >
< Last Name, First Name >
Group Number: XXX

BusMgt 331 Case Study #2
Due Monday (January 30th) / Tuesday (January 31st) at the beginning of class
∙ This cover sheet must be filled in completely and signed by all members to be accepted.
∙ All team members must be listed on the cover sheet in alphabetical order.
∙ All material must be stapled to the cover page.
∙ Up to 25 points will be deducted for "unprofessional" work.
By signing below, I/we attest that I/we have performed this analysis. I understand that any violation of this statement by handing in another student's work as my/our own will result in a suspected case of academic misconduct.
Signature: ______________________________________________ Date: __________________
Signature: ______________________________________________ Date: __________________
Signature: ______________________________________________ Date: __________________
Signature: ______________________________________________ Date: __________________