EU Standard on Food Additives
1995L0002 — EN — 29.01.2004 — 005.001 — 1
This document is meant purely as a documentation tool and the institutions do not assume any liability for its contents
►B
EUROPEAN PARLIAMENT AND COUNCIL DIRECTIVE No 95/2/EC
of 20 February 1995
on food additives other than colours and sweeteners
(OJ L 61, 18.3.1995, p. 1)
Amended by:
►M1 Directive 96/85/EC of the European Parliament and of the Council of 19 December 1996 (OJ No L 86, 28.3.1997, p. 4)
►M2 Directive 98/72/EC of the European Parliament and of the Council of 15 (OJ No L 295)
Whereas the Commission is to adapt Community provisions to accord with the rules laid down in this Directive;
(1) OJ No C 206, 13. 8. 1992, p. 12, and OJ No C 189, 13. 7. 1993, p. 11. (2) OJ No C 108, 19. 4. 1993, p. 26. (3) Opinion of the European Parliament of 26 May 1993 (OJ No C 176, 28. 6.
University of Wisconsin-Madison
Zhou Yulong, 1101213442, Computer Applications

Introduction: The University of Wisconsin-Madison sits on a picturesque campus in Madison, the capital of Wisconsin. Founded in 1848, it is a university with more than 150 years of history. It is one of the top three public universities in the United States and one of the top ten American research universities. In the United States it is often regarded as a "Public Ivy." Like the University of California and the University of Texas, it is part of a system of state universities, the University of Wisconsin System. In undergraduate education it ranks third among public universities, behind UC Berkeley and the University of Michigan; it also ranks eighth among American universities for the quality of its undergraduate teaching. According to the National Research Council, 70 of its disciplines rank in the national top ten, and it places 16th among world universities in the Shanghai Jiao Tong ranking. The university is one of the 60 members of the Association of American Universities.

Featured programs: UW-Madison offers more than 100 undergraduate majors, more than half of which also grant master's and doctoral degrees. Journalism, biochemistry, botany, chemical engineering, chemistry, civil engineering, computer science, earth sciences, English, geography, physics, economics, German, history, linguistics, mathematics, business administration (MBA), microbiology, molecular biology, mechanical engineering, philosophy, Spanish, psychology, political science, statistics, sociology, zoology, and many other disciplines have considerable research and teaching strength, most of them ranking in the top ten of their fields among American universities.

Academic distinction: Faculty and alumni of the Madison campus have so far won seventeen Nobel Prizes and twenty-four Pulitzer Prizes. Fifty-three faculty members belong to the National Academy of Sciences, seventeen to the National Academy of Engineering, and five to the National Academy of Education; nine faculty have won the National Medal of Science, six are Searle Scholars, and four have received MacArthur Fellowships. Although the Madison campus is best known for agriculture and the life sciences, the biggest draw for many communication students is Jack McLeod of the School of Journalism and Mass Communication, often called a "master of modern American communication research."
Elliptical Fourier Descriptors

The Elliptical Fourier Descriptors (EFDs) are a powerful tool used in the field of image analysis and shape recognition. They provide a mathematical representation of the shape of an object or a closed contour, allowing for the quantification and comparison of shapes. The EFDs are particularly useful in applications where the shape of an object is an important feature, such as in biological studies, pattern recognition, and computer vision.

The Fourier transform is a fundamental mathematical concept that allows for the decomposition of a periodic function into a sum of sine and cosine waves with different frequencies and amplitudes. The Elliptical Fourier Descriptors extend this idea to the description of closed contours in two-dimensional space. Instead of representing the contour as a function of a single variable (such as the angle around the contour), the EFDs represent the contour as a function of two variables, the x and y coordinates.

The process of obtaining the Elliptical Fourier Descriptors for a given contour involves several steps. First, the contour is digitized, meaning that the coordinates of a finite number of points along the contour are recorded. These points are then used to calculate the Fourier coefficients that define the EFDs.

The Fourier coefficients are calculated by considering the x and y coordinates of the contour as separate periodic functions. For each coordinate, the Fourier series expansion is computed, yielding a set of Fourier coefficients. These coefficients are then used to reconstruct the contour, with the lower-order coefficients capturing the overall shape of the contour and the higher-order coefficients capturing the finer details.

One of the key advantages of the Elliptical Fourier Descriptors is their ability to provide a compact and efficient representation of the shape of an object.
The shape can be described using a relatively small number of Fourier coefficients, making it possible to store and compare shapes efficiently. This is particularly useful in applications where large numbers of shapes need to be analyzed or compared, such as in biological studies or industrial quality control.

Another advantage of the EFDs is that they can be normalized to be invariant to certain transformations of the shape, such as translation, rotation, and scaling. After normalization, the Fourier coefficients are not affected by these transformations, making it possible to compare shapes that have been subjected to different transformations.

The Elliptical Fourier Descriptors have found numerous applications in various fields. In biology, they have been used to study the shapes of plant leaves, pollen grains, and other biological structures. In computer vision, they have been used for object recognition and shape-based image retrieval. In industrial applications, they have been used for quality control and defect detection.

Despite their many advantages, the Elliptical Fourier Descriptors are not without their limitations. One of the main challenges is the selection of the appropriate number of Fourier coefficients to use in the representation of a shape. Too few coefficients may result in a poor approximation of the shape, while too many coefficients can lead to overfitting and increased computational complexity.

Another challenge is the interpretation of the Fourier coefficients themselves. While the coefficients provide a compact and efficient representation of the shape, they do not necessarily have a clear physical interpretation. This can make it difficult to understand the underlying shape characteristics that are being captured by the EFDs.

Despite these challenges, the Elliptical Fourier Descriptors remain an important and widely-used tool in the field of image analysis and shape recognition.
As computational power and storage capabilities continue to increase, the use of EFDs is likely to become even more widespread, with applications in an ever-widening range of fields.

In conclusion, the Elliptical Fourier Descriptors are a powerful mathematical tool that provides a compact and efficient representation of the shape of an object or a closed contour. They have found numerous applications in fields such as biology, computer vision, and industrial quality control, and their use is likely to continue to grow in the years to come.
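The truncation trade-off described above can be sketched numerically. The following is a minimal illustration using a complex-valued Fourier parameterization of the contour (a simplified variant of the idea, not the full four-coefficient elliptic formulation); the ellipse example and the harmonic count are arbitrary choices:

```python
import numpy as np

def fourier_descriptors(points, n_harmonics):
    """Keep only the lowest n_harmonics Fourier coefficients of a closed contour."""
    z = points[:, 0] + 1j * points[:, 1]      # contour as a complex-valued signal
    coeffs = np.fft.fft(z) / len(z)           # normalized Fourier coefficients
    kept = np.zeros_like(coeffs)
    kept[0] = coeffs[0]                       # DC term: the contour centroid
    kept[1:n_harmonics + 1] = coeffs[1:n_harmonics + 1]
    kept[-n_harmonics:] = coeffs[-n_harmonics:]
    return kept

def reconstruct(kept):
    """Rebuild the (approximate) contour from truncated coefficients."""
    z = np.fft.ifft(kept) * len(kept)
    return np.stack([z.real, z.imag], axis=1)

# An ellipse is described exactly by the first harmonic, so a few
# coefficients already reconstruct it to machine precision.
t = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
contour = np.stack([3.0 * np.cos(t), np.sin(t)], axis=1)
kept = fourier_descriptors(contour, n_harmonics=4)
error = np.abs(reconstruct(kept) - contour).max()
```

For irregular biological outlines the reconstruction error decreases as more harmonics are kept, which is exactly the coefficient-selection trade-off discussed above.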
Stochastic Resonance in an Underdamped Stochastic Bistable System Subject to White Noise

Complex Systems and Complexity Science, Vol. 18, No. 3, Sep. 2021. Article ID: 1672-3813(2021)03-0060-07; DOI: 10.13306/j.1672-3813.2021.03.009

ZHU Fucheng (Mianyang Polytechnic, Mianyang 621000, China); GUO Feng (School of Information Engineering, Southwest University of Science and Technology, Mianyang 621010, China)

Abstract: Based on two-state theory, under the adiabatic approximation condition and using the statistical properties of the stochastic potential, a mathematical expression for the system output signal-to-noise ratio (SNR) is derived. The results show that stochastic resonance occurs as the SNR varies with the damping coefficient, the additive noise intensity, and the system parameters. As the correlation length of the stochastic potential increases, the maximum of the SNR decreases monotonically; as the amplitude of the stochastic potential increases, the maximum of the SNR increases monotonically. Numerical simulation results for the output SNR agree with the theoretical results.

Key words: stochastic resonance; underdamped bistable system; stochastic potential; white noise

0 Introduction

Stochastic resonance is a nonlinear phenomenon that arises in stochastic dynamical systems: under a weak periodic force, noise induces synchronized hopping between the system's potential wells [1].
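The kind of system studied here can be illustrated with a short Euler-Maruyama simulation of an underdamped bistable oscillator driven by a weak periodic force and white noise. This is only a sketch; all parameter values (gamma, A, omega, D) are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Underdamped bistable oscillator:
#   x'' = -gamma*x' + x - x**3 + A*cos(omega*t) + sqrt(2*D*gamma)*xi(t)
# where xi(t) is Gaussian white noise. Parameters below are illustrative.
rng = np.random.default_rng(0)
gamma, A, omega, D = 0.5, 0.1, 0.05, 0.6   # damping, weak forcing, noise strength
dt, steps = 0.005, 200_000
x, v = 1.0, 0.0                            # start in the right-hand well (x = +1)
traj = np.empty(steps)
for i in range(steps):
    t = i * dt
    force = -gamma * v + x - x**3 + A * np.cos(omega * t)
    v += force * dt + np.sqrt(2.0 * D * gamma * dt) * rng.standard_normal()
    x += v * dt
    traj[i] = x
# With sufficient noise the trajectory hops between the wells near x = -1 and x = +1;
# tuning D to synchronize the hopping with the weak forcing is stochastic resonance.
```

An SNR-versus-D curve, obtained by estimating the spectral power at omega for a range of noise strengths, would exhibit the resonance peak discussed in the abstract.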
Computational Fluid Dynamics
Computational Fluid Dynamics (CFD) is a branch of fluid mechanics that uses numerical analysis and data structures to solve and analyze problems that involve fluid flows. It has become an essential tool in various industries, including aerospace, automotive, and environmental engineering. CFD allows engineers and scientists to simulate the behavior of fluids in complex systems, such as air flow over an aircraft wing or water flow in a river, without the need for costly and time-consuming physical experiments.

One of the key advantages of CFD is its ability to provide detailed insights into fluid flow phenomena that are difficult or impossible to observe experimentally. This is particularly useful in the design and optimization of engineering systems, where understanding the behavior of fluids is crucial. For example, CFD can be used to predict the performance of a new aircraft design, optimize the cooling system of a car engine, or analyze the dispersion of pollutants in the atmosphere. By simulating these scenarios, engineers can make informed decisions and improve the efficiency and safety of their designs.

However, CFD is not without its challenges. One of the main issues is the complexity and computational cost of simulating fluid flows accurately. Fluid dynamics is a highly nonlinear and chaotic phenomenon, and simulating it requires solving complex mathematical equations that describe the behavior of fluids. This often involves dividing the fluid domain into a large number of smaller elements, or "cells," and solving the equations for each cell. As a result, CFD simulations can be computationally intensive and time-consuming, requiring high-performance computing resources and specialized software.

Another challenge is the accuracy and reliability of CFD simulations. While CFD has made significant advancements in recent years, it is still a numerical approximation of the real physical phenomena.
The accuracy of CFD simulations depends on various factors, such as the quality of the mesh (the division of the fluid domain into cells), the choice of numerical methods, and the assumptions and simplifications made in the simulation. Engineers and scientists must carefully validate and verify their CFD simulations against experimental data to ensure their reliability and trustworthiness.

Furthermore, CFD requires a deep understanding of fluid mechanics, numerical methods, and computer programming. Engineers and scientists must be well-versed in the underlying physics of fluid flows, as well as the mathematical and computational techniques used in CFD. This often requires extensive training and expertise, which can be a barrier for individuals and organizations looking to adopt CFD in their work.

Despite these challenges, the potential benefits of CFD make it a valuable tool for engineers and scientists. By providing detailed insights into fluid flow phenomena, CFD enables the design and optimization of engineering systems with improved efficiency and performance. As computing power and simulation techniques continue to advance, CFD is expected to play an increasingly important role in engineering and scientific research, shaping the way we understand and manipulate fluid flows in the world around us.
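The cell-by-cell solution process described above can be made concrete with the simplest possible example: an explicit finite-difference (FTCS) scheme for 1-D heat diffusion, where the domain is divided into cells and the governing equation is updated in each one. This is a hedged sketch of the discretization idea, not production CFD code:

```python
import numpy as np

# Solve u_t = alpha * u_xx on [0, 1] with u = 0 at both walls,
# using the explicit FTCS scheme on a uniform grid of cells.
nx, alpha = 51, 1.0
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
dt = 0.4 * dx**2 / alpha          # respects the stability limit dt <= dx^2 / (2*alpha)
u = np.sin(np.pi * x)             # initial condition, zero at the boundaries

t = 0.0
while t < 0.1:
    # update every interior cell from its two neighbours
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    t += dt

# The exact solution of this model problem decays as exp(-pi^2 * alpha * t),
# so the numerical result can be validated against it.
exact = np.exp(-np.pi**2 * alpha * t) * np.sin(np.pi * x)
```

This also illustrates the validation step mentioned above: the discrete solution is checked against a known reference before the scheme is trusted on harder problems.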
An Introduction to the Analysis of Algorithms

"An Introduction to the Analysis of Algorithms" is a book or course title that suggests a focus on understanding and evaluating algorithms. Here is a brief overview of what such a study might involve:

1. Algorithm Basics:
* Introduction to the concept of algorithms.
* Understanding algorithm design principles.
* Basic algorithmic paradigms (e.g., divide and conquer, greedy algorithms, dynamic programming).
2. Algorithm Analysis:
* Time complexity analysis: Big-O notation, time complexity classes.
* Space complexity analysis: memory usage analysis.
* Worst-case, average-case, and best-case analysis.
3. Asymptotic Notation:
* Big-O, Big-Theta, and Big-Omega notation.
* Analyzing the efficiency of algorithms as the input size approaches infinity.
4. Recursion:
* Understanding recursive algorithms.
* Analyzing time and space complexity of recursive algorithms.
5. Sorting and Searching:
* Analysis of sorting algorithms (e.g., bubble sort, merge sort, quicksort).
* Analysis of searching algorithms (e.g., binary search).
6. Dynamic Programming:
* Introduction to dynamic programming principles.
* Analyzing algorithms using dynamic programming.
7. Greedy Algorithms:
* Understanding the greedy algorithmic paradigm.
* Analyzing algorithms using greedy strategies.
8. Graph Algorithms:
* Analyzing algorithms related to graphs (e.g., Dijkstra's algorithm, breadth-first search, depth-first search).
9. Case Studies:
* Analyzing real-world applications of algorithms.
* Case studies on how algorithm analysis influences practical implementations.
10. Advanced Topics:
* Introduction to advanced algorithmic concepts.
* Optional topics such as randomized algorithms, approximation algorithms, and more.

Conclusion: "An Introduction to the Analysis of Algorithms" provides a foundation for students or readers to understand the efficiency and performance of algorithms. It equips them with the tools to choose the right algorithms for specific tasks and to evaluate their impact on system resources.
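As a small worked example of the worst-case analysis listed above: binary search on a sorted array of n elements probes at most floor(log2 n) + 1 positions. The sketch below counts probes to confirm the bound empirically:

```python
def binary_search(a, target):
    """Return (index, probes) for target in sorted list a, or (-1, probes)."""
    lo, hi, probes = 0, len(a) - 1, 0
    while lo <= hi:
        probes += 1
        mid = (lo + hi) // 2          # halve the search interval each probe
        if a[mid] == target:
            return mid, probes
        if a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, probes

a = list(range(1000))
# Worst case over every possible target: at most floor(log2(1000)) + 1 = 10 probes,
# versus up to 1000 comparisons for linear search on the same input.
worst = max(binary_search(a, t)[1] for t in a)
```

Counting a concrete cost measure (probes) rather than wall-clock time is the standard first step in the kind of analysis the outline describes.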
Analytical Method vs. Numerical Method

Analytical Method

Analytical methods are mathematical methods used to solve equations or problems in closed form. They are used when exact solutions to problems can be obtained using mathematical equations, and they are widely used in various fields of science and engineering.

Advantages: One of the main advantages of analytical methods is that they provide exact solutions to problems. This means that the results obtained using analytical methods are highly accurate and reliable. Furthermore, analytical methods can be used to derive general formulas that can be applied to a wide range of problems.

Disadvantages: One of the main disadvantages of analytical methods is that they can only be applied to simple problems with well-defined boundary conditions. Furthermore, analytical methods may not always provide practical solutions to real-world problems due to their complexity.

Some examples of analytical methods include:
1. Differential Equations: used to describe the behavior of physical systems such as heat transfer, fluid flow, and electromagnetic fields.
2. Fourier Analysis: used to decompose complex signals into simple sinusoidal components, which can then be analyzed more easily.
3. Laplace Transform: used to solve differential equations by transforming them into algebraic equations.

Numerical Method

Numerical methods are mathematical techniques used to obtain approximate solutions to complex mathematical problems. They involve the use of computers and algorithms to perform calculations on large datasets and complex systems.

Advantages: Numerical methods can be applied to complex problems with uncertain boundary conditions, which cannot be solved using analytical methods. Furthermore, they can provide accurate results even for large datasets and complex systems.

Disadvantages: Numerical methods involve a degree of approximation and uncertainty in their results. Furthermore, they can be computationally expensive and time-consuming.

Some examples of numerical methods include:
1. Finite Element Method: solves complex problems in engineering and physics by dividing the problem into smaller, simpler elements.
2. Monte Carlo Method: simulates complex systems by generating random numbers and analyzing their behavior.
3. Numerical Integration: approximates the value of integrals that cannot be solved analytically.

Conclusion: Both analytical and numerical methods have their advantages and disadvantages, depending on the problem being solved. Analytical methods are best suited for simple problems with well-defined boundary conditions, while numerical methods are best suited for complex problems with uncertain boundary conditions. Both are important tools for scientists and engineers in various fields of study.
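The contrast can be shown in a few lines: the integral of x^2 on [0, 1] has the exact analytical value 1/3, while the trapezoidal rule (a numerical method) approaches it with an error that shrinks as the step size decreases. A minimal sketch:

```python
def trapezoid(f, a, b, n):
    """Trapezoidal rule: approximate the integral of f on [a, b] with n panels."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

# Analytical method: integral of x^2 over [0, 1] is exactly 1/3.
analytic = 1.0 / 3.0
# Numerical method: trapezoidal rule with 1000 panels (error ~ h^2 ~ 1e-6).
numeric = trapezoid(lambda x: x * x, 0.0, 1.0, 1000)
```

Here the analytical answer is exact and instantaneous, while the numerical answer trades a small, controllable error for applicability to integrands that have no closed form.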
Electrostatic Force Calculation Methods in Molecular Simulation

Abstract: Molecular simulation is an important means by which scientists today explore the microscopic mechanisms and inner connections of material properties, serving as a key bridge between models, theory, and experiment. Through molecular force-field modeling, one can specify bonded and non-bonded interactions within and between molecules. Among these, the non-bonded, long-range electrostatic interactions between partially charged atoms converge slowly and act over long distances, so they often consume a large share of the computation time and limit how far simulations can be scaled up.

In the development of electrostatic algorithms, the Ewald3D summation method first made accurate calculation possible; the later proposal of PME then made simulations with atoms numbering in the millions feasible, opening the era in which molecular simulation is widely applied to biomolecular systems. As research topics have diversified, electrostatic algorithms have also become specialized, chiefly because the dimensionality of the system determines the specific form of the electrostatic interaction. Thus, compared with the century-old Ewald3D-tinfoil, the more recently derived Ewald2D has an independent set of mathematical expressions; yet the similarity between the two in physical picture and mathematical principle should not be overlooked, for it underlies the equivalence of the various approximate methods (Ewald3DC, Ewald3DLC, and others).

The first original contribution of this thesis is a further exploration of the mathematical foundations of Ewald2D, and with it a rigorous proof that the physical pictures of several approximate methods are complete. We first write the imaginary-part energy of Ewald2D in the form of a Fourier transform and approximate it with the simple trapezoidal rule; then, using the residue theorem, we cast both the original and the approximate expressions as contour integrals in the complex plane; finally, by estimating the difference between the two contour integrals we obtain an error bound, which we compare against the actual computational error, finding the two always comparable. This demonstrates numerically that the method is feasible; in effect, it applies the experience of Mori et al. with numerical estimation and error treatment to the analysis of Ewald2D. In particular, the singularity contributions in the complex contour integral correspond exactly to the electrostatic interaction of the redundant image layers in Ewald3DLC, confirming from a precise numerical standpoint that the latter's physical interpretation is correct.

Since the Ewald2D formula naturally splits into several terms, our algorithmic optimization starts from comparing the relative magnitudes of these components and the computation time each consumes. The thesis illustrates with a simple example a criterion for algorithm optimization: identify the components that consume substantial computing resources while barely affecting accuracy, and drop them outright to improve computational efficiency significantly.
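At the heart of every Ewald-type method is the splitting of the slowly decaying Coulomb kernel 1/r into a rapidly decaying short-range part and a smooth long-range part handled in reciprocal space. The identity itself is easy to verify numerically; this is the generic splitting, not the thesis's Ewald2D expressions:

```python
import math

def ewald_split(r, alpha):
    """Split the Coulomb kernel 1/r into short- and long-range parts.

    short-range: erfc(alpha*r)/r, decays fast and is summed in real space;
    long-range:  erf(alpha*r)/r, smooth and handled in reciprocal space.
    """
    short = math.erfc(alpha * r) / r
    long_ = math.erf(alpha * r) / r
    return short, long_

short, long_ = ewald_split(1.5, alpha=1.0)
# The two parts reassemble the bare kernel exactly: short + long == 1/r.
```

The splitting parameter alpha controls how much work is done in real versus reciprocal space, which is precisely the kind of cost trade-off the optimization criterion above targets.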
Applied Probability and Stochastic Processes
Applied Probability and Stochastic Processes are fundamental concepts in the field of mathematics and have wide-ranging applications in various fields such as engineering, finance, biology, and telecommunications. These concepts play a crucial role in understanding and analyzing random phenomena, making predictions, and making informed decisions in uncertain situations. In this response, we will explore the significance of applied probability and stochastic processes from multiple perspectives, highlighting their real-world applications, challenges, and future developments.

From an engineering perspective, applied probability and stochastic processes are essential in modeling and analyzing complex systems with random behavior. For instance, in the field of telecommunications, these concepts are used to analyze the performance of communication networks, such as wireless systems and the Internet, taking into account factors like signal interference, data transmission errors, and network congestion. Engineers use stochastic processes to model the random arrival of data packets, the duration of calls, and other unpredictable events, enabling them to design efficient and reliable communication systems. Moreover, in the field of electrical engineering, applied probability and stochastic processes are utilized to analyze the behavior of electronic circuits, random signals, and noise, contributing to the development of robust and high-performance electronic devices.

In the realm of finance, applied probability and stochastic processes are instrumental in modeling and predicting the behavior of financial markets, asset prices, and investment portfolios. For example, the Black-Scholes model, which is based on stochastic calculus, is widely used to price options and other derivatives, providing valuable insights into risk management and investment strategies.
Moreover, in the field of insurance and risk assessment, these concepts are employed to evaluate and quantify various risks, such as natural disasters, accidents, and health-related events, enabling insurance companies to set premiums and reserves accurately. The application of stochastic processes in finance has revolutionized the way financial instruments are priced, traded, and managed, shaping the modern financial industry.

From a biological perspective, applied probability and stochastic processes are utilized to model and analyze various biological phenomena, such as population dynamics, genetic mutations, and the spread of infectious diseases. In epidemiology, stochastic models are used to simulate the transmission of diseases within a population, taking into account factors like individual interactions, mobility, and immunity, which are inherently random. These models help public health officials and researchers to assess the impact of interventions, such as vaccination campaigns and social distancing measures, and to make informed decisions to control the spread of diseases. Furthermore, in evolutionary biology, stochastic processes are employed to study the genetic diversity within populations, the emergence of new traits, and the process of natural selection, shedding light on the mechanisms driving the evolution of species.

Despite the wide-ranging applications of applied probability and stochastic processes, there are several challenges and limitations associated with their practical implementation. One of the key challenges is the computational complexity of simulating and analyzing stochastic models, especially when dealing with high-dimensional or continuous-time processes. As a result, researchers and practitioners often rely on approximation techniques and numerical methods to solve stochastic differential equations, run Monte Carlo simulations, and estimate the parameters of stochastic models.
Moreover, the accurate estimation of model parameters from real-world data poses a significant challenge, as the observed data may be noisy, incomplete, or subject to sampling biases, leading to uncertainties in the model predictions and inferences. Additionally, the interpretation and communication of stochastic modeling results to non-experts can be challenging, as it requires a clear understanding of probabilistic concepts and statistical reasoning, which may not be familiar to individuals outside the field of mathematics and statistics.

Looking ahead, the future developments in applied probability and stochastic processes are poised to address some of these challenges and open up new frontiers of applications. With the advancement of computational tools and techniques, such as high-performance computing, parallel processing, and cloud-based simulations, researchers will be able to tackle more complex and realistic stochastic models, leading to better predictions and insights in various domains. Furthermore, the integration of machine learning and artificial intelligence with stochastic modeling holds great promise in improving the accuracy and efficiency of stochastic simulations, parameter estimation, and decision-making under uncertainty. By leveraging the power of data-driven approaches and advanced algorithms, practitioners can harness the wealth of information contained in large-scale datasets to refine stochastic models and enhance their predictive capabilities. Moreover, the development of user-friendly software tools and visualization techniques will facilitate the communication of stochastic modeling results to a broader audience, enabling decision-makers and stakeholders to make informed choices based on probabilistic assessments.

In conclusion, applied probability and stochastic processes are indispensable tools for understanding and navigating the inherent randomness and uncertainty in various natural and man-made systems.
From engineering and finance to biology and beyond, these concepts provide a powerful framework for modeling, analyzing, and making decisions in complex and uncertain environments. While there are challenges associated with their practical implementation, the ongoing advancements in computational methods, interdisciplinary collaborations, and technological innovations are poised to unlock new opportunities and applications for applied probability and stochastic processes in the future. As we continue to explore and harness the potential of these concepts, we can expect to gain deeper insights into the dynamics of random phenomena and to make more informed and effective decisions in the face of uncertainty.
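The Black-Scholes example mentioned above can be checked against a Monte Carlo simulation in a few lines. The parameter values (S0 = 100, K = 100, r = 0.05, sigma = 0.2, T = 1) are illustrative choices, not data from any source:

```python
import math
import numpy as np

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_call(s0, k, r, sigma, t):
    """Closed-form Black-Scholes price of a European call option."""
    d1 = (math.log(s0 / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s0 * norm_cdf(d1) - k * math.exp(-r * t) * norm_cdf(d2)

def mc_call(s0, k, r, sigma, t, n, seed=0):
    """Monte Carlo price: simulate terminal prices under geometric Brownian motion."""
    z = np.random.default_rng(seed).standard_normal(n)
    st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * math.sqrt(t) * z)
    return math.exp(-r * t) * np.maximum(st - k, 0.0).mean()

analytic = black_scholes_call(100.0, 100.0, 0.05, 0.2, 1.0)
estimate = mc_call(100.0, 100.0, 0.05, 0.2, 1.0, 200_000)
```

The agreement between the two numbers illustrates the general pattern described above: a stochastic-calculus model gives a closed form, and simulation provides an independent check (and remains available when no closed form exists).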
An Improved Gaussian Frequency-Domain Compressive Sensing Sparse Inversion Method (in English)
Abstract

Compressive sensing and sparse inversion methods have gained a significant amount of attention in recent years due to their capability to accurately reconstruct signals from measurements with significantly less data than previously possible. In this paper, a modified Gaussian frequency domain compressive sensing and sparse inversion method is proposed, which leverages the proven strengths of the traditional method to enhance its accuracy and performance. Simulation results demonstrate that the proposed method can achieve a higher signal-to-noise ratio and a better reconstruction quality than its traditional counterpart, while also reducing the computational complexity of the inversion procedure.

Introduction

Compressive sensing (CS) is an emerging field that has garnered significant interest in recent years because it leverages the sparsity of signals to reduce the number of measurements required to accurately reconstruct the signal. This has many advantages over traditional signal processing methods, including faster data acquisition times, reduced power consumption, and lower data storage requirements. CS has been successfully applied to a wide range of fields, including medical imaging, wireless communications, and surveillance.

One of the most commonly used methods in compressive sensing is the Gaussian frequency domain compressive sensing and sparse inversion (GFD-CS) method. In this method, compressive measurements are acquired by multiplying the original signal with a randomly generated sensing matrix. The measurements are then transformed into the frequency domain using the Fourier transform, and the sparse signal is reconstructed using a sparsity-promoting algorithm.

In recent years, researchers have made numerous improvements to the GFD-CS method, with the goal of improving its reconstruction accuracy, reducing its computational complexity, and enhancing its robustness to noise.
In this paper, we propose a modified GFD-CS method that combines several techniques to achieve these objectives.

Proposed Method

The proposed method builds upon the well-established GFD-CS method, with several key modifications. The first modification is the use of a hierarchical sparsity-promoting algorithm, which promotes sparsity at both the signal level and the transform level. This is achieved by applying the hierarchical thresholding technique to the coefficients corresponding to the higher frequency components of the transformed signal.

The second modification is the use of a novel error feedback mechanism, which reduces the impact of measurement noise on the reconstructed signal. Specifically, the proposed method utilizes an iterative algorithm that updates the measurement error based on the difference between the reconstructed signal and the measured signal. This feedback mechanism effectively increases the signal-to-noise ratio of the reconstructed signal, improving its accuracy and robustness to noise.

The third modification is the use of a low-rank approximation method, which reduces the computational complexity of the inversion algorithm while maintaining reconstruction accuracy. This is achieved by decomposing the sensing matrix into a product of two lower-dimensional matrices, which can be subsequently inverted using a more efficient algorithm.

Simulation Results

To evaluate the effectiveness of the proposed method, we conducted simulations using synthetic data sets. Three different signal types were considered: a sinusoidal signal, a pulse signal, and an image signal. The results of the simulations were compared to those obtained using the traditional GFD-CS method. The simulation results demonstrate that the proposed method outperforms the traditional GFD-CS method in terms of signal-to-noise ratio and reconstruction quality.
Specifically, the proposed method achieves a higher signal-to-noise ratio and lower mean squared error for all three types of signals considered. Furthermore, the proposed method achieves these results with a reduced computational complexity compared to the traditional method.

Conclusion

The results of our simulations demonstrate the effectiveness of the proposed method in enhancing the accuracy and performance of the GFD-CS method. The combination of sparsity promotion, error feedback, and low-rank approximation techniques significantly improves the signal-to-noise ratio and reconstruction quality, while reducing the computational complexity of the inversion procedure. Our proposed method has potential applications in a wide range of fields, including medical imaging, wireless communications, and surveillance.
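Since the paper's own algorithm is not reproduced here, a standard sparsity-promoting reconstruction, orthogonal matching pursuit with a Gaussian sensing matrix, can serve as a minimal sketch of the compressive-sensing recovery step. The dimensions and the support of the test signal are arbitrary choices:

```python
import numpy as np

def omp(A, y, s):
    """Orthogonal matching pursuit: greedily recover an s-sparse x with y = A @ x."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(s):
        j = int(np.argmax(np.abs(A.T @ residual)))   # column most correlated with residual
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef          # residual orthogonal to chosen columns
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(1)
n, m, s = 100, 50, 3                          # ambient dimension, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)  # random Gaussian sensing matrix
x = np.zeros(n)
x[[5, 40, 77]] = [1.0, -2.0, 1.5]             # sparse ground-truth signal
y = A @ x                                     # far fewer measurements than unknowns
x_hat = omp(A, y, s)
```

With 50 measurements of a 100-dimensional but 3-sparse signal, the greedy recovery is exact with overwhelming probability, which is the core promise of compressive sensing that the paper builds on.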
OpenProblemsList
Open Problems ListArising from MathsCSP Workshop,Oxford,March2006Version0.3,April25,20061Complexity and Tractability of CSPQuestion1.0(The Dichotomy Conjecture)Let B be a relational structure.The problem of deciding whether a given relational structure has a homomorphism to B is denoted CSP(B).For which(finite)structures is CSP(B)decidable in polynomial time?Is it true that for anyfinite structure B the problem CSP(B)is either decidable in polynomial time or NP-complete?Communicated by:Tomas Feder&Moshe Vardi(1993) Question1.1A relational structure B is called hereditarily tractable if CSP(B )is tractable for all substructures B of B.Which structures B are hereditarily tractable?Communicated by:Pavol Hell Question1.2A weak near-unanimity term is defined to be one that satisfies the following identities:f(x,...,x)=x and f(x,y,....y)=f(y,x,y,....y)=...=f(y,...,y,x).Is CSP(B)tractable for any(finite)structure B which is preserved by a weak near-unanimity term?Communicated by:Benoit Larose,Matt Valeriote Question1.3A constraint language1S is called globally tractable for a problem P,if P(S)is tractable,and it is called(locally)tractable if for everyfinite L⊆S,P(L)is tractable.These two notions of tractability do not coincide in the Abduction problem(see talk by Nadia Creignou).•For which computational problems related to the CSP do these two notions of tractability coincide?•In particular,do they coincide for the standard CSP decision problem?Communicated by:Nadia Creignou 1That is,a(possibly infinite)set of relations over somefixed set.1Question1.4(see also Question3.5)It has been shown that when a structure B has bounded pathwidth duality the corresponding problem CSP(B)is in the complexity class NL (see talk by Victor Dalmau).Is the converse also true(modulo some natural complexity-theoretic assumptions)?Communicated by:Victor Dalmau Question1.5Is there a good(numerical)parameterization for constraint satisfaction problems that makes themfixed-parameter 
tractable?Question1.6Further develop techniques based on delta-matroids to complete the com-plexity classification of the Boolean CSP(with constants)with at most two occurrences per variable(see talk by Tomas Feder).Communicated by:Tomas Feder Question1.7Classify the complexity of uniform Boolean CSPs(where both structure and constraint relations are specified in the input).Communicated by:Heribert Vollmer Question1.8The microstructure graph of a binary CSP has vertices for each variable/value pair,and edges that join all pairs of vertices that are compatible with the constraints.What properties of this graph are sufficient to ensure tractability?Are there properties that do not rely on the constraint language or the constraint graph individually?2Approximability and Soft ConstraintsQuestion2.1Is it true that Max CSP(L)is APX-complete whenever Max CSP(L)is NP-hard?Communicated by:Peter Jonsson Question2.2Prove or disprove that Max CSP(L)is in PO if the core of L is super-modular on some lattice,and otherwise this problem is APX-complete.The above has been proved for languages with domain size3,and for languages contain-ing all constants by a computer-assisted case analysis(see talk by Peter Jonsson).Develop techniques that allow one to prove such results without computer-assisted analysis.Communicated by:Peter Jonsson Question2.3For some constraint languages L,the problem Max CSP(L)is hard to approximate better than the random mindless algorithm on satisfiable or almost satisfiable instances.Such problems are called approximation resistant(see talk by Johan Hastad).Is a single random predicate over Boolean variables with large arity approximation resistant?What properties of predicates make a CSP approximation resistant?What transformations of predicates preserve approximation resistance?Communicated by:Johan Hastad2Question2.4Many optimisation problems involving constraints(such as Max-Sat,Max CSP,Min-Ones SAT)can be represented using soft constraints where each 
constraint is specified by a cost function assigning some measure of cost to each tuple of values in its scope. Are all tractable classes of soft constraints characterized by their multimorphisms? (see talk by Peter Jeavons)
Communicated by: Peter Jeavons

3 Algebra

Question 3.1 The Galois connection between sets of relations and sets of operations that preserve them has been used to analyse several different computational problems such as the satisfiability of the CSP, and counting the number of solutions. How can we characterise the computational goals for which we can use this Galois connection?
Communicated by: Nadia Creignou

Question 3.2 For any relational structure B = (B, R1, ..., Rk), let co-CSP(B) denote the class of structures which do not have a homomorphism to B. It has been shown that the question of whether co-CSP(B) is definable in Datalog is determined by Pol(B), the polymorphisms of the relations of B (see talk by Andrei Bulatov). Let B be a core, F the set of all idempotent polymorphisms of B and V the variety generated by the algebra (B, F). Is it true that co-CSP(B) is definable in Datalog if and only if V omits types 1 and 2 (that is, the local structure of any finite algebra in V does not contain a G-set or an affine algebra)?
Communicated by: Andrei Bulatov

Question 3.3 Does every tractable clone of polynomials over a group contain a Mal'tsev operation?
Communicated by: Pascal Tesson

Question 3.4 Classify (w.r.t. tractability of corresponding CSPs) clones of polynomials of semigroups.
Communicated by: Pascal Tesson

Question 3.5 Is it true that for any structure B which is invariant under a near-unanimity operation the problem CSP(B) is in the complexity class NL? Does every such structure have bounded pathwidth duality? (see also Question 1.4) Both results are known to hold for a 2-element domain (Dalmau) and for majority operations (Dalmau, Krokhin).
Communicated by: Victor Dalmau, Benoit Larose

Question 3.6 Is it decidable whether a given structure is invariant under a near-unanimity function (of some
arity)?
Communicated by: Benoit Larose

Question 3.7 Let L be a fixed finite lattice. Given an integer-valued supermodular function f on L^n, is there an algorithm that maximizes f in polynomial time in n if the function f is given by an oracle? The answer is yes if L is a distributive lattice (see "Supermodular Functions and the Complexity of Max-CSP", Cohen, Cooper, Jeavons, Krokhin, Discrete Applied Mathematics, 2005). More generally, the answer is yes if L is obtained from finite distributive lattices via Mal'tsev products (Krokhin, Larose - see talk by Peter Jonsson). The smallest lattice for which the answer is not known is the 3-diamond.
Communicated by: Andrei Krokhin

Question 3.8 Find the exact relationship between width and relational width. (It is known that one is bounded if and only if the other is bounded.) Also, what types of width are preserved under natural algebraic constructions?
Communicated by: Victor Dalmau

4 Logic

Question 4.1 The (basic) Propositional Circumscription problem is defined as follows:
Input: a propositional formula φ with atomic relations from a set S, and a clause c.
Question: is c satisfied in every minimal model of φ?
It is conjectured (Kirousis, Kolaitis) that there is a trichotomy for this problem, that it is either in P, coNP-complete or in Π2^p, depending on the choice of S. Does this conjecture hold?
Communicated by: Nadia Creignou

Question 4.2 The Inverse Satisfiability problem is defined as follows:
Input: a finite set of relations S and a relation R.
Question: is R expressible by a CNF(S)-formula without existential variables?
A dichotomy theorem was obtained by Kavvadias and Sideri for the complexity of this problem with constants. Does a dichotomy hold without the constants? Are the Schaefer cases still tractable?
Communicated by: Nadia Creignou

Question 4.3 Let LFP denote classes of structures definable in first-order logic with a least-fixed-point operator, let HOM denote classes of structures which are closed under homomorphisms, and let co-CSP denote classes of structures defined by
not having a homomorphism to some fixed target structure.
• Is LFP ∩ HOM ⊆ Datalog?
• Is LFP ∩ co-CSP ⊆ Datalog? (for finite target structures)
• Is LFP ∩ co-CSP ⊆ Datalog? (for ω-categorical target structures)
Communicated by: Albert Atserias, Manuel Bodirsky

Question 4.4 (see also Question 3.2) Definability of co-CSP(B) in k-Datalog is a sufficient condition for tractability of CSP(B), which is sometimes referred to as having width k. There is a game-theoretic characterisation of definability in k-Datalog in terms of (∃,k)-pebble games (see talk by Phokion Kolaitis).
• Is there an algorithm to decide for a given structure B whether co-CSP(B) is definable in k-Datalog (for a fixed k)?
• Is the width hierarchy strict? The same question when B is ω-categorical, but not necessarily finite?
Communicated by: Phokion Kolaitis, Manuel Bodirsky

Question 4.5 Find a good logic to capture CSP with "nice" (e.g., ω-categorical) infinite templates.
Communicated by: Iain Stewart

5 Graph Theory

Question 5.1 The list homomorphism problem for a (directed) graph H is equivalent to the problem CSP(H*) where H* equals H together with all unary relations.
• It is conjectured that the list homomorphism problem for a reflexive digraph is tractable if H has the X-underbar property (which is the same as having the binary polymorphism min w.r.t. some total ordering on the set of vertices), and NP-complete otherwise.
• It is conjectured that the list homomorphism problem for an irreflexive digraph is tractable if H is preserved by a majority operation, and NP-complete otherwise.
Do these conjectures hold?
Communicated by: Tomas Feder & Pavol Hell

Question 5.2 "An island of tractability?" Let A_m be the class of all relational structures of the form (A, E1, ..., Em) where each Ei is an irreflexive symmetric binary relation and the relations Ei together satisfy the following 'fullness' condition: any two distinct elements x, y are related in exactly one of the relations Ei. Let B_m be the single relational structure ({1,...,m}, E1, ..., Em) where each Ei is the symmetric binary relation containing all pairs xy except the pair ii. (Note that the relations Ei are not irreflexive.) The problem CSP(A_m, B_m) is defined as: Given A ∈ A_m, is there a homomorphism from A to B_m?
When m = 2, this problem is solvable in polynomial time - it is the recognition problem for split graphs (see "Algorithmic Graph Theory and Perfect Graphs", M. C. Golumbic, Academic Press, New York, 1980). When m > 3, this problem is NP-complete (see "Full constraint satisfaction problems", T. Feder and P. Hell, to appear in SIAM Journal on Computing).
What happens when m = 3? Is this an "island of tractability"? Quasi-polynomial algorithms are known for this problem (see "Full constraint satisfaction problems", T.
Feder and P. Hell, to appear in SIAM Journal on Computing, and "Two algorithms for list matrix partitions", T. Feder, P. Hell, D. Kral, and J. Sgall, SODA 2005). Note that a similar problem for m = 3 was investigated in "The list partition problem for graphs", K. Cameron, E. E. Eschen, C. T. Hoang and R. Sritharan, SODA 2004.
Communicated by: Tomas Feder & Pavol Hell

Question 5.3 Finding the generalized hypertree-width, w(H), of a hypergraph H is known to be NP-complete. However it is possible to compute a hypertree-decomposition of H in polynomial time, and the hypertree-width of H is at most 3w(H)+1 (see talk by Georg Gottlob). Are there other decompositions giving better approximations of the generalized hypertree-width that can be found in polynomial time?
Communicated by: Georg Gottlob

Question 5.4 It is known that a CSP whose constraint hypergraph has bounded fractional hypertree width is tractable (see talk by Daniel Marx). Is there a hypergraph property more general than bounded fractional hypertree width that makes the associated CSP polynomial-time solvable? Are there classes of CSP that are tractable due to structural restrictions and have unbounded fractional hypertree width?
Communicated by: Georg Gottlob, Daniel Marx

Question 5.5 Prove that there exist two functions f1(w), f2(w) such that, for every w, there is an algorithm that constructs in time n^f1(w) a fractional hypertree decomposition of width at most f2(w) for any hypergraph of fractional hypertree width at most w (see talk by Daniel Marx).
Communicated by: Daniel Marx

Question 5.6 Turn the connection between the Robber and Army game and fractional hypertree width into an algorithm for approximating fractional hypertree width.
Communicated by: Daniel Marx

Question 5.7 Close the complexity gap between (H,C,K)-colouring and (H,C,K)-colouring (see talk by Dimitrios Thilikos). Find a tight characterization for the fixed-parameter tractable (H,C,K)-colouring problems.
• For the (H,C,K)-colouring problems, find nice properties for the non-parameterised part (H−C) that
guarantee fixed-parameter tractability.
• Clarify the role of loops in the parameterised part C for fixed-parameter hardness results.
Communicated by: Dimitrios Thilikos

6 Constraint Programming and Modelling

Question 6.1 In a constraint programming system there is usually a search procedure that assigns values to particular variables in some order, interspersed with a constraint propagation process which modifies the constraints in the light of these assignments. Is it possible to choose an ordering for the variables and values assigned which changes each problem instance as soon as possible into a new instance which is in a tractable class? Can this be done efficiently? Are there useful heuristics?

Question 6.2 The time taken by a constraint programming system to find a solution to a given instance can be dramatically altered by modelling the problem differently. Can the efficiency of different constraint models be objectively compared, or does it depend entirely on the solution algorithm?

Question 6.3 For practical constraint solving it is important to eliminate symmetry, in order to avoid wasted search effort. Under what conditions is it tractable to detect the symmetry in a given problem instance?

7 Notes

• Representations of constraints - implicit representation - effect on complexity
• Unique games conjecture - structural restrictions that make it false - connections between definability and approximation
• MMSNP - characterise tractable problems apart from CSP
• Migrate theoretical results to tools
• What restrictions do practical problems actually satisfy?
• Practical parallel algorithms - does this align with tractable classes?
• Practically relevant constraint languages ("global constraints")
• For what kinds of problems do constraint algorithms/heuristics give good results?
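The definition underlying most of the questions above, deciding CSP(B) as the existence of a homomorphism to a fixed template B, can be made concrete with a small brute-force checker. This is only an illustrative sketch (the dictionary encoding of structures is an assumption of this sketch, and the exhaustive search is exponential, not one of the polynomial-time algorithms the dichotomy conjecture concerns):

```python
from itertools import product

def is_homomorphism(h, struct_a, struct_b):
    """Check that the map h preserves every relation: whenever a tuple t
    is in R^A, its coordinatewise image under h must be in R^B."""
    for rel, tuples_a in struct_a["relations"].items():
        tuples_b = struct_b["relations"][rel]
        for t in tuples_a:
            if tuple(h[x] for x in t) not in tuples_b:
                return False
    return True

def csp(struct_a, struct_b):
    """Decide CSP(B) for instance A by trying every map from A to B."""
    a_elems = sorted(struct_a["universe"])
    b_elems = sorted(struct_b["universe"])
    for image in product(b_elems, repeat=len(a_elems)):
        h = dict(zip(a_elems, image))
        if is_homomorphism(h, struct_a, struct_b):
            return True
    return False

# Classical example: graph 3-colourability is CSP(K3), since a graph maps
# homomorphically to the triangle K3 exactly when it is 3-colourable.
K3 = {"universe": {0, 1, 2},
      "relations": {"E": {(i, j) for i in range(3)
                          for j in range(3) if i != j}}}
C5 = {"universe": {0, 1, 2, 3, 4},  # the 5-cycle, an odd cycle
      "relations": {"E": {(i, (i + 1) % 5) for i in range(5)} |
                         {((i + 1) % 5, i) for i in range(5)}}}
print(csp(C5, K3))  # True: the 5-cycle is 3-colourable
```

The same `csp` function, run with the complete graph K4 as the instance, returns False, reflecting that K4 has no 3-colouring.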
Reflections on Reading "If History Were a Cat": English Essays
Three sample essays are provided below for the reader's reference.

Sample Essay 1

My Reflections on "If History Were a Cat" by Rasheed Ogunlaru

When I first saw the quirky title "If History Were a Cat" by Rasheed Ogunlaru on the shelf at the library, I have to admit I was intrigued. A book comparing history to a cat? I couldn't quite wrap my head around the metaphor, but I decided to give it a chance. Little did I know just how profound and insightful this little book would turn out to be.

The central premise is that if we imagine history as an actual cat, we can gain a deeper understanding of how to view and interpret the past. Ogunlaru paints history as an aloof, independent feline who doesn't adhere to the expectations we try to place on it. Just as cats will roam where they please and behave as they wish regardless of our desires, history unfolds according to its own chaotic rhythm, not the neat linear narrative humans attempt to impose.

One of the first impactful metaphors compares different philosophical perspectives on history to the variety of ways people view cats. The metaphor goes that some see history as a noble creature to be admired from a distance, studying its movements and habits without ever truly understanding its inner essence. This represents more empirical, detached approaches to examining the past based solely on surviving evidence and artifacts.

Others view history as a tool to be utilized, appreciating it for how it can provide us with practical benefits like entertainment or moral lessons, just as some appreciate cats for their pest control abilities. This metaphor represents interpretations that see history as a means to an end, valuing only what can provide tangible value.

In contrast, Ogunlaru promotes an approach of intimately bonding with and accepting history in all its complexity, much like developing a relationship with a cat as a companion. We must strive to appreciate history for what it is rather than what we want it to be.
We can't force it into the mold of the stories we wish to tell.

This inspires one of the core insights - that history is not meant to be tamed and confined to crisp narratives. Like a cat resisting constraint, suppressing details that don't fit our desired sequence of events does an injustice to the richness and nuance of the past. We must embrace history's contradictions, ambiguities, and elements of chaos.

The book is full of clever turns of phrase that lend humor and creativity to the cat metaphor. For example, Ogunlaru describes history's tendency to "cough up" unexpected hairballs of information that can contradict established assumptions. Or when discussing omitted or neglected histories, he warns that like housecats, "history always leaves downy sediments of itself behind closed doors."

I found the chapter on historiography, the study of how history is researched and written, particularly insightful through the cat metaphor. Ogunlaru argues that no matter how skilled the historian, their work is merely an approximation of history's "fur" - its surface appearance and behavior. No matter how comprehensive, we only capture a rendering of history's outer manifestations and impressions. Its true inner essence as a whole remains elusive, much like how a cat's inner mental life is impossible to fully comprehend.

This drives home the theme that we should remain humble about our ability to conclusively determine historical truth. History exists in a state of supposition, like Schrödinger's famous thought experiment with a cat in a box who is simultaneously alive and dead until observed. Until a historian peers into the "box" of a historical event and is affected by what they discover, the true reality remains uncertain.

Similarly, just as observers affect the behavior of the particles they observe in quantum physics, so too do historians influence the histories they attempt to study through unconscious biases and limitations of perspective.
We can never be impartial witnesses, but rather active participants imposing our own attitudes and blind spots through the historical narratives we construct.

As such, Ogunlaru promotes the radical idea that we must constantly re-evaluate our assumptions about major historical events and figures through new lenses. Our histories quickly become outdated narratives that are products of their time and place. While not discarding all previous work, we must update our histories much like revising software or upgrading technology so they remain compatible with new evidence and societal outlooks that emerge.

For example, he analyzes how Western interpretations of Cleopatra's legacy were heavily prejudiced by racist attitudes that portrayed her as a cunning seductress who used her sexuality to gain influence over noble Roman leaders. However, from a more modern postcolonial perspective, her story can be reframed as one of a strong female sovereign protecting the sovereignty of her Egyptian kingdom against imperialist threats.

Overall, I came away from "If History Were a Cat" with a much more mature perspective on how to approach the study of the past. History is a complex, multi-layered tapestry that should never be reduced to simple fables crafted to suit particular moral or ethnocentric agendas. We must develop the humility and open-mindedness to engage with the full extent of history's tangles and contradictions.

Just as cats can never be fully domesticated and retain elements of inscrutable wildness, the past can never be completely tamed to conform to any single perspective or interpretation. There will always be loose ends that refuse to be neatly tucked away. Our mission should be to dive into those loose threads and engage with the wonderful mess that is the unbounded lived experience of humanity across time.

Rather than seeking utter objectivity, which is impossible, we should aim to incorporate as many subjective viewpoints into our histories as possible.
Like a cat showing affection to its beloved human companions, history occasionally grants small insights and purrs of clarity amid the mystery. But those brief moments of revelation should inspire us to forever continue revising our understanding, not resting on perceived laurels of objectivity.

I'll confess that some of the cat analogies felt a bit strained or whimsical at times. However, this light-hearted playfulness seemed intentional on Ogunlaru's part to encourage us not to take ourselves too seriously as we tackle the profound challenge of excavating the human experience. History is too important to be treated as dry and austere - it's messy, funny, tragic, and everything in between. We should embrace its characterful quirks and chaos with empathy and openness.

In the end, "If History Were a Cat" didn't provide any groundbreaking new historical revelations or frameworks. Rather, it served as a reminder to maintain a flexible mindset and willingness to question assumptions as we approach the daunting task of reconstructing and reinterpreting the past. While relatively slight at less than 200 pages, this unassuming little book containing an entitled feline's musings opened my eyes to a whole new way of seeing history and the world around me. I have a new appreciation for ambiguity and a dogged determination to untangle the hairballs of half-truths and contradictions obscuring so much of the human story. I may never attain anything close to a complete understanding of history, but at least I can strive to cuddle up to it and relate to it on its own peculiar terms - as one would with a finicky, complicated cat.

Sample Essay 2

If History Was a Cat: Reflections of a Student

As a student, I have always found history to be a fascinating yet sometimes dry subject. Learning about dates, names, and events from the past can feel like memorizing an endless list of facts disconnected from our modern lives.
However, after reading the delightful book "If History Was a Cat" by Xu Zhiyuan, my perspective has been transformed. This whimsical tale breathes new life into the study of history by personifying it as a quirky and mischievous feline.

At first, the very premise of anthropomorphizing history as a cat may seem peculiar or even absurd. How can the sweeping narrative of human civilizations be captured through the lens of a household pet? But as I turned the pages, I found myself utterly captivated by Xu's imaginative storytelling. The author deftly weaves together engaging anecdotes and profound insights, utilizing the metaphor of the cat to shed light on the complexities and paradoxes that define the human experience across eras.

One of the most striking aspects of "If History Was a Cat" is how it challenges our traditional, linear understanding of historical progression. Much like a cat's tendency to wander and explore without adhering to predetermined paths, Xu reminds us that history is not a neatly packaged chronology but rather a tapestry of interconnected threads, each unraveling and intertwining in unexpected ways. This perspective encourages readers to step back and appreciate the intricate patterns that emerge when we examine the past from a more holistic and fluid viewpoint.

Throughout the book, the cat serves as a playful yet poignant metaphor for the unpredictable and often paradoxical nature of historical events. Just as cats can be both affectionate and aloof, history is portrayed as a capricious force that can bring forth both remarkable achievements and devastating tragedies.
Xu's vivid descriptions of the cat's mischievous antics and inscrutable behavior mirror the twists and turns that have shaped the course of human civilizations, reminding us that even the most seemingly insignificant actions can have profound and far-reaching consequences.

Perhaps one of the most profound lessons I gleaned from "If History Was a Cat" is the importance of maintaining a sense of curiosity and wonder when studying the past. Xu's playful narrative encourages readers to approach history not as a dry collection of facts, but as a rich tapestry of stories waiting to be explored and unraveled. By imbuing history with the qualities of a curious and adventurous feline, the author invites us to embrace the spirit of inquiry and to seek out the unexpected connections and insights that lie hidden beneath the surface.

Moreover, the book's central metaphor serves as a powerful reminder of the enduring legacy of human civilization. Just as cats have been revered and celebrated across cultures for millennia, the achievements and struggles of our ancestors continue to shape the world we inhabit today. Xu's imaginative tale encourages us to recognize the threads that connect us to those who came before, and to appreciate the richness and diversity of the human experience that has unfolded over countless generations.

As I reflect on the profound impact "If History Was a Cat" has had on my understanding of the past, I am struck by the book's ability to bridge the gap between academic study and personal resonance. By infusing history with a sense of whimsy and relatability, Xu has created a work that transcends the confines of traditional textbooks and invites readers of all ages to engage with the subject in a more intimate and meaningful way.

In a world that often prioritizes efficiency and practicality, "If History Was a Cat" serves as a gentle reminder of the power of imagination and metaphor.
By embracing the unconventional lens of a feline protagonist, Xu has crafted a narrative that not only educates but also captivates and inspires. As a student, I find myself inspired to approach the study of history with a renewed sense of wonder and curiosity, seeking out the hidden stories and interconnections that lie beneath the surface of recorded events.

In conclusion, "If History Was a Cat" is a remarkable work that has profoundly impacted my understanding and appreciation of the past. Through its imaginative metaphor and playful storytelling, Xu Zhiyuan has breathed new life into the study of history, inviting readers to embrace the complexities, paradoxes, and enduring legacies that have shaped the human experience across generations. As I continue my academic journey, I carry with me the invaluable lessons gleaned from this captivating tale, forever inspired to approach the study of history with a sense of curiosity, wonder, and a willingness to embark on unexpected paths, much like the mischievous feline that has captured my imagination.

Sample Essay 3

If History Were a Cat: A Student's Reflections

As a student, I've read countless history textbooks over the years. Dry recitations of names, dates, and events that seemed to have little relevance to my life. However, the book "If History Were a Cat" by Umberto Eco opened my eyes to history in a whole new way. Through its whimsical premise of personifying history as a group of unruly felines, this book managed to breathe life into the past like never before.

The central metaphor is both ingenious and apt. Just like a cluster of cats, history can often feel chaotic, unpredictable, and resistant to human efforts to systematize and control it. Eco illustrates how major civilizations and empires have risen and fallen in seemingly random patterns, just as cats knock over vases and claw up furniture on a whim.
Yet amidst the apparent pandemonium, there are also moments of peaceful coexistence and an underlying order, much like a litter of kittens curling up together for a nap after a rambunctious play session.

One of the book's great strengths is how it frames historical narratives in delightfully feline terms. The ancient Egyptians, for instance, are likened to "sleek, regal cats," lounging imperiously along the Nile while lesser feline civilizations scurry around them. In contrast, the aggressive military campaigns of figures like Alexander the Great are portrayed as "history's first major cat fight," with the great conqueror bounding from Persia to India, knocking over any mouse-civilizations foolish enough to get in his way. These playful descriptions make even the most familiar historical events feel fresh and engaging.

At the same time, Eco uses the feline lens to uncover deeper truths about the human condition and our endless grappling with the forces of history. He posits that we are all just "humans in a room with a bunch of cats," trying in vain to comprehend and assert control over these unruly beasts. Our great leaders and nation-builders fancy themselves as canny cat-herders, carefully guiding the course of events. But more often than not, Eco argues, we're simply carried along by the churning currents of historical change, just one more series of scratches left on the torn fabric of time.

This message resonated deeply with me. As a student, I've been trained to seek overarching narratives and ideological frameworks for understanding the world. Marxist theories of historical inevitability. The cyclical philosophies of Ssu-ma Ch'ien. The Great Man theory of history as shaped by the whims of a few elite individuals. Yet "If History Were a Cat" challenges these tidy centralized models.
Instead, it suggests that history emerges organically from the swarming, chaotic interplay of a multitude of actors, chance events, and unforeseen consequences, much like the engrossing yet inscrutable dances of cats.

One particularly striking example is Eco's depiction of the fall of Rome. Rather than a singular cataclysmic event, he frames it as a gradual fraying, with once-great cats devolving into quarrelsome strays as resources dwindled. Bit by bit, the grand imperial feline shed its fur and retreated into the shadows while scrappier cat communities took its place: first the Byzantines, then the Germanic tribes, and ultimately the Islamic caliphates. This demystifying, decentralized interpretation upends the traditional historical focus on the Decline and Fall as a clash of great civilizations. It's just one set of cats outmaneuvering and outlasting another in the never-ending struggle for territory, security, and the prime sunbeam.

By presenting history through this unique zoomorphic lens, Eco exposes the biases baked into conventional historical narratives. Too often, we project our human-centric values and preconceptions onto the past, seeking grandiose meanings and reinforcing our cultural mythologies. We write ourselves into history as the prime movers, the rightful conquerors, the inevitable victors. Yet from the feline perspective, our hubristic aspirations to mastery appear rather pitiful: just another set of peculiar grooming behaviors by a particularly self-important species of ape.

"If History Were a Cat" prompts us to radically decenter ourselves, to recognize that we are but one thread in the rich tapestry of life on this planet. Our retellings of the past are inevitably colored by our anthropocentric conditioning, our desperate desire to find significance amid the seeming chaos of existence. The cats, for their part, seem utterly indifferent to such human foibles.
They simply go about their timeless routines of eating, grooming, napping, and territorial skirmishing: their own unique forms of history-making that predate and will likely long outlast our fleeting civilizations.

For students like myself, this message is both deflating and strangely uplifting. On one hand, it pricks the inflated balloons of ego and exceptionalism we've been fed. The grand civilizational narratives we cling to are mere catnip: tantalizing fictions confected to soothe our troubled primate minds. And our future ambitions to remake the world through scientific or ideological dogma are merely the human variation on a cat chasing a laser pointer, endlessly frantic yet never catching the elusive red dot.

Yet this ruthlessly honest portrayal is also deeply liberating. Stripped of our ingrained self-importance, we can appreciate history anew, as the vivid, variegated unfolding of life itself across eons. An intricate, indifferent dance that our kind has been privileged to witness and fleetingly participate in. We are not the central players, but jovial guests at the grand feline pageant of existence. Our mark will inevitably fade, but the great carnival of cats will frolic on, leaving new claw marks and molted fur in its wake.

In this light, our role becomes one of simply bearing witness and finding joy in this cosmic cat circus. Rather than lofty attempts to control the uncontrollable, Eco seems to argue, we would do better to lounge in the warm sunbeams of being, to gaze with amused detachment at the perennial games of chase and territoriality playing out around us. To laugh at our own species as we mouth pithy homilies about the meaning of it all while the cats bat indifferently at our self-important theorems, just more dangly abstractions to be toyed with and discarded.

For me, this re-framing has been incredibly liberating as a student and a human being.
No longer do I need to anxiously seek the One True Path to historical enlightenment or civilizational progress. Those are just more futile longings to join the supposed "cat herders" club, a Sisyphean quest doomed to frustration and farce. Instead, I can let the currents of history flow through and around me while still finding my own pockets of meaning and connection. My studies become an appreciation of the richness and dynamism of life rather than an obsessive compulsion to categorize and control.

I'll always cherish the memory of reading "If History Were a Cat" sprawled out on a sunny patch of campus greenery, surrounded by the comforting thrums of squirrels and the concerned stares of indifferent passers-by. In those quiet moments, I felt at peace with my insignificance, content to be but one more curious primate delighting in this grand, inexplicable circus we call existence. The cats will do as they will regardless of our puny protests, sowing paradoxes and overturning ideologies with every calculated twitch of their tails. All we can do is sit back and enjoy the enigmatic spectacle, reveling in its mysteries even as we futilely attempt to demystify it through books and theories.

For as long as our breed walks this earth, the interminable cat parade of history will wind its way through our lands, by turns alluring and terrifying, fascinating and indifferent. We may fancy ourselves the stars of the show, but deep down we know the truth. The real players have been here all along, casually licking themselves as the feeble rise and fall of human civilizations is just another fleeting warm patch on the carpet. So sit back, forget your cares, and let Umberto Eco's inimitable cat tales transport you to a new appreciation of our delightfully unimportant place in the grand scheme. We are but whiskers in the wind, and what a privilege it is to behold the majestic follies of these felines we call history.
Multi-Echelon Inventory Management in Supply Chains [Translated Source Text]
Undergraduate Thesis (Design): Foreign-Language Translation

Source text: Multi-echelon inventory management in supply chains

Historically, the echelons of the supply chain, warehouse, distributors, retailers, etc., have been managed independently, buffered by large inventories. Increasing competitive pressures and market globalization are forcing firms to develop supply chains that can quickly respond to customer needs. To remain competitive and decrease inventory, these firms must use multi-echelon inventory management interactively, while reducing operating costs and improving customer service.

Supply chain management (SCM) is an integrative approach for planning and control of materials and information flows with suppliers and customers, as well as between different functions within a company. This area has drawn considerable attention in recent years and is seen as a tool that provides competitive power. SCM is a set of approaches to integrate suppliers, manufacturers, warehouses, and stores efficiently, so that merchandise is produced and distributed in the right quantities, to the right locations and at the right time, in order to minimize system-wide costs while satisfying service-level requirements. So the supply chain consists of various members or stages. A supply chain is a dynamic, stochastic, and complex system that might involve hundreds of participants.

Inventory usually represents from 20 to 60 per cent of the total assets of manufacturing firms. Therefore, inventory management policies prove critical in determining the profit of such firms. Inventory management is, to a greater extent, relevant when a whole supply chain (SC), namely a network of procurement, transformation, and delivering firms, is considered. Inventory management is indeed a major issue in SCM, i.e. an approach that addresses SC issues under an integrated perspective.

Inventories exist throughout the SC in various forms for various reasons.
The lack of a coordinated inventory management throughout the SC often causes the bullwhip effect, namely an amplification of demand variability moving towards the upstream stages. This causes excessive inventory investments, lost revenues, misguided capacity plans, ineffective transportation, missed production schedules, and poor customer service. Many scholars have studied these problems, as well as emphasized the need of integration among SC stages, to make the chain effectively and efficiently satisfy customer requests. Beside the integration issue, uncertainty has to be dealt with in order to define an effective SC inventory policy. In addition to the uncertainty on supply (e.g. lead times) and demand, information delays associated with the manufacturing and distribution processes characterize SCs. Inventory management in multi-echelon SCs is an important issue, because there are many elements that have to coordinate with each other. They must also arrange their inventories to coordinate. There are many factors that complicate successful inventory management, e.g. uncertain demands, lead times, production times, product prices, costs, etc., especially the uncertainty in demand and lead times, where the inventory cannot be managed between echelons optimally. Most manufacturing enterprises are organized into networks of manufacturing and distribution sites that procure raw material, process it into finished goods, and distribute the finished goods to customers. The terms 'multi-echelon' or 'multi-level' production/distribution networks are also synonymous with such networks (or SCs), when an item moves through more than one step before reaching the final customer. Inventories exist throughout the SC in various forms for various reasons. At any manufacturing point, they may exist as raw materials, work in progress, or finished goods.
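The bullwhip effect described above is easy to reproduce in a few lines of simulation. The sketch below is a simplified, hypothetical model (not taken from any of the reviewed papers): a single retailer follows an order-up-to policy driven by a moving-average demand forecast, and the variability of the orders it places upstream exceeds the variability of the customer demand it actually observes.

```python
import random
import statistics

def simulate_bullwhip(periods=500, window=5, lead_time=2, seed=42):
    """One retailer follows an order-up-to policy driven by a
    moving-average forecast; returns the standard deviations of the
    customer demand it sees and of the orders it places upstream."""
    rng = random.Random(seed)
    demands, orders = [], []
    history = [100.0] * window            # seed the forecast window
    prev_target = None
    for _ in range(periods):
        d = rng.gauss(100, 10)            # customer demand ~ N(100, 10)
        demands.append(d)
        history = history[1:] + [d]
        forecast = sum(history) / window
        # order-up-to level covers forecast demand over lead time + review
        target = forecast * (lead_time + 1)
        order = d if prev_target is None else max(0.0, d + target - prev_target)
        orders.append(order)
        prev_target = target
    return statistics.stdev(demands), statistics.stdev(orders)

sd_demand, sd_orders = simulate_bullwhip()
```

Because each new demand observation shifts the forecast, and the forecast shift is multiplied by the coverage horizon, order variance is strictly larger than demand variance, which is exactly the upstream amplification the text describes.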
They exist at the distribution warehouses, and they exist in transit, or 'in the pipeline', on each path linking these facilities. Manufacturers procure raw material from suppliers and process it into finished goods, sell the finished goods to distributors, and then to retail and/or customers. When an item moves through more than one stage before reaching the final customer, it forms a 'multi-echelon' inventory system. The echelon stock of a stock point equals all stock at this stock point, plus in-transit to or on-hand at any of its downstream stock points, minus the backorders at its downstream stock points. The analysis of multi-echelon inventory systems that pervades the business world has a long history. Multi-echelon inventory systems are widely employed to distribute products to customers over extensive geographical areas. Given the importance of these systems, many researchers have studied their operating characteristics under a variety of conditions and assumptions. Since the development of the economic order quantity (EOQ) formula by Harris (1913), researchers and practitioners have been actively concerned with the analysis and modeling of inventory systems under different operating parameters and modeling assumptions. Research on multi-echelon inventory models has gained importance over the last decade, mainly because integrated control of SCs consisting of several processing and distribution stages has become feasible through modern information technology. Clark and Scarf were the first to study the two-echelon inventory model. They proved the optimality of a base-stock policy for the pure-serial inventory system and developed an efficient decomposing method to compute the optimal base-stock ordering policy. Bessler and Veinott extended the Clark and Scarf model to include general arborescent structures. The depot-warehouse problem described above was addressed by Eppen and Schrage, who analyzed a model with a stockless central depot.
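Two of the quantities just introduced can be stated concretely. The sketch below (with made-up illustrative numbers, not data from any cited study) implements the Harris EOQ formula and the echelon-stock definition quoted above: a stock point's own stock, plus stock on hand at or in transit to its downstream points, minus downstream backorders.

```python
import math

def eoq(annual_demand, order_cost, holding_cost):
    """Economic order quantity (Harris, 1913): the batch size that
    balances the fixed ordering cost against the holding cost."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost)

def echelon_stock(node, on_hand, in_transit, backorders, downstream):
    """Echelon stock of `node`: its own on-hand stock, plus on-hand and
    in-transit stock at every downstream stock point, minus the
    backorders at those downstream points."""
    total = on_hand[node]
    stack = list(downstream.get(node, []))
    while stack:
        n = stack.pop()
        total += on_hand[n] + in_transit[n] - backorders[n]
        stack.extend(downstream.get(n, []))
    return total

# A warehouse W supplying two retailers R1 and R2 (hypothetical numbers).
on_hand = {"W": 50, "R1": 10, "R2": 5}
in_transit = {"R1": 4, "R2": 0}
backorders = {"R1": 2, "R2": 1}
tree = {"W": ["R1", "R2"]}
w_echelon = echelon_stock("W", on_hand, in_transit, backorders, tree)
```

Here the warehouse's echelon stock is 50 + (10 + 4 - 2) + (5 + 0 - 1) = 66, even though only 50 units sit physically at the warehouse; echelon-stock policies set reorder levels on this aggregate rather than on installation stock.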
They derived a closed-form expression for the order-up-to level under the equal fractile allocation assumption. Several authors have also considered this problem in various forms. Owing to the complexity and intractability of the multi-echelon problem, Hadley and Whitin recommend the adoption of single-location, single-echelon models for the inventory systems. Sherbrooke considered an ordering policy of a two-echelon model for warehouse and retailer. It is assumed that stockouts at the retailers are completely backlogged. Also, Sherbrooke constructed the METRIC (multi-echelon technique for recoverable item control) model, which identifies the stock levels that minimize the expected number of backorders at the lower echelon subject to a budget constraint. This model is the first multi-echelon inventory model for managing the inventory of service parts. Thereafter, a large set of models, which generally seek to identify optimal lot sizes and safety stocks in a multi-echelon framework, were produced by many researchers. In addition to analytical models, simulation models have also been developed to capture the complex interaction of the multi-echelon inventory problems. So far the literature has devoted major attention to the forecasting of lumpy demand, and to the development of stock policies for multi-echelon SCs. Inventory control policy for multi-echelon systems with stochastic demand has been a widely researched area. More recent papers have been covered by Silver and Pyke. The advantage of centralized planning, available in periodic-review policies, can be obtained in continuous-review policies by defining the reorder levels of different stages in terms of echelon stock rather than installation stock. Rau et al., Diks and de Kok, Dong and Lee, Mitra and Chatterjee, Hariga, Chen, Axsater and Zhang, Nozick and Turnquist, and So and Zheng use a mathematical modeling technique in their studies to manage multi-echelon inventory in SCs.
Diks and de Kok's study considers a divergent multi-echelon inventory system, such as a distribution system or a production system, and assumes that the order arrives after a fixed lead time. Hariga presents a stochastic model for a single-period production system composed of several assembly/processing and storage facilities in series. Chen, Axsater and Zhang, and Nozick and Turnquist consider a two-stage inventory system in their papers. Axsater and Zhang and Nozick and Turnquist assume that the retailers face stationary and independent Poisson demand. Mitra and Chatterjee examine De Bodt and Graves' model (1985), which they developed in their paper 'Continuous-review policies for a multi-echelon inventory problem with stochastic demand', for fast-moving items from the implementation point of view. The proposed modification of the model can be extended to multi-stage serial and two-echelon assembly systems. In Rau et al.'s model, shortage is not allowed, lead time is assumed to be negligible, and the demand rate and production rate are deterministic and constant. So and Zheng used an analytical model to analyze two important factors that can contribute to the high degree of order-quantity variability experienced by semiconductor manufacturers: the supplier's lead time and forecast demand updating. They assume that the external demands faced by the retailer are correlated between two successive time periods and that the retailer uses the latest demand information to update its future demand forecasts. Furthermore, they assume that the supplier's delivery lead times are variable and are affected by the retailer's order quantities. Dong and Lee's paper revisits the serial multi-echelon inventory system of Clark and Scarf and develops three key results. First, they provide a simple lower-bound approximation to the optimal echelon inventory levels and an upper bound to the total system cost for the basic model of Clark and Scarf.
Second, they show that the structure of the optimal stocking policy of Clark and Scarf holds under time-correlated demand processes using a martingale model of forecast evolution. Third, they extend the approximation to the time-correlated demand process and study, in particular for an autoregressive demand model, the impact of lead times and autocorrelation on the performance of the serial inventory system. After reviewing the literature about multi-echelon inventory management in SCs using mathematical modeling techniques, it can be said that, in summary, these papers consider two-, three-, or N-echelon systems with stochastic or deterministic demand. They assume lead times to be fixed, zero, constant, deterministic, or negligible. They obtain exact or approximate solutions. Dekker et al. analyse the effect of the break-quantity rule on the inventory costs. The break-quantity rule is to deliver large orders from the warehouse, and small orders from the nearest retailer, where a so-called break quantity determines whether an order is small or large. In most 1-warehouse, N-retailers distribution systems, it is assumed that all customer demand takes place at the retailers. However, it was shown by Dekker et al. that delivering large orders from the warehouse can lead to a considerable reduction in the retailers' inventory costs. In Dekker et al. the results of Dekker et al. were extended by also including the inventory costs at the warehouse. The study by Mohebbi and Posner contains a cost analysis in the context of a continuous-review inventory system with replenishment orders and lost sales. The policy considered in the paper by Van der Heijden et al. is an echelon-stock, periodic-review, order-up-to policy, under both stochastic demand and lead times. The main purpose of Iida's paper is to show that near-myopic policies are acceptable for a multi-echelon inventory problem. It is assumed that lead times at each echelon are constant.
Chen and Song's objective is to minimize the long-run average costs in the system. In the system by Chen et al., each location employs a periodic-review, lot-size reorder point inventory policy. They show that each location's inventory positions are stationary, and the stationary distribution is uniform and independent of any other. In the study by Minner et al., the impact of manufacturing flexibility on inventory investments in a distribution network consisting of a central depot and a number of local stock points is investigated. Chiang and Monahan present a two-echelon dual-channel inventory model in which stocks are kept in both a manufacturer warehouse (upper echelon) and a retail store (lower echelon), and the product is available in two supply channels: a traditional retail store and an internet-enabled direct channel. Johansen's system is assumed to be controlled by a base-stock policy. The independent and stochastically dependent lead times are compared. To sum up, these papers consider two- or N-echelon inventory systems, with generally stochastic demand, except for one study that considers Markov-modulated demand. They generally assume constant lead time, but two of them accept it to be stochastic. They obtain exact or approximate solutions. In multi-echelon inventory management there are some other research techniques used in the literature, such as heuristics, the VARI-METRIC method, fuzzy sets, model predictive control, scenario analysis, statistical analysis, and GAs. These methods are used rarely and only by a few authors. A multi-product, multi-stage, and multi-period scheduling model is proposed by Chen and Lee to deal with multiple incommensurable goals for a multi-echelon SC network with uncertain market demands and product prices.
The uncertain market demands are modeled as a number of discrete scenarios with known probabilities, and fuzzy sets are used for describing the sellers' and buyers' incompatible preferences on product prices. In the current paper, a detailed literature review, conducted from an operational research point of view, is presented, addressing multi-echelon inventory management in supply chains from 1996 to 2005. Here, the behavior of the papers against demand and lead time uncertainty is emphasized. The summary of the literature review is given as: the most used research technique is simulation. Also, analytic, mathematical, and stochastic modeling techniques are commonly used in the literature. Recently, heuristics such as fuzzy logic and GAs have gradually started to be used. Source: A. Taskin Gümüş and A. Fuat Güneri, Turkey, 2007. "Multi-echelon inventory management in supply chains with uncertain demand and lead times: literature review from an operational research perspective". Proc. IMechE, Vol. 221, Part B: J. Engineering Manufacture. June, pp. 1553-1570. Translation: Multi-echelon inventory management in supply chains. Historically, multi-echelon supply chains, warehouses, distributors, retailers, etc. have been managed independently, buffered by large inventories.
A Fragile Alliance: On the Relationship Between Complexity Architecture and Complexity Science
Industrial Construction Vol.52,No.1,2022工业建筑㊀2022年第52卷第1期㊀47㊀脆弱的联盟论复杂性建筑与复杂性科学的关系周官武(石家庄铁道大学建筑与艺术学院,石家庄㊀050043)㊀㊀摘㊀要:在放弃解构论述后,复杂性建筑转而寻求与复杂性科学结盟,形成以非线性算法生成为核心,更加精确㊁严格的设计体系,以此突破现代主义建筑范式的形式语言和设计方法束缚㊂通过对这一联盟始终面临的过分迎合时尚潮流和商业文化㊁实施过程中的妥协对 复杂性 的破坏㊁逻辑与现实的必要性等问题的论述,进而分析了复杂性科学作为城市空间基本元素在建筑研究中的适用性,据此认为,复杂性建筑与复杂性科学的联盟并不稳固,与复杂性科学结盟看似提高了复杂性建筑的科学成色及其创作中科学判断的比重,但作为社会产物的建筑终究无法脱离价值判断,否则就必然会损害建筑的适用性㊂㊀㊀关键词:复杂性建筑;复杂性科学;方法论;联盟㊀㊀DOI :10.13204/j.gyjzG21040811The Fragile Alliance On the Relation Between ComplexityArchitecture and Complexity ScienceZHOU Guanwu(School of Architecture and Art,Shijiazhuang Tiedao University,Shijiazhuang 050043,China)Abstract :After abandoning Deconstructivism,complexity architecture turned to seek alliances with complexity scienceto form an exact and rigorous design system with nonlinear algorithm generation as the core,so as to break through themodernism architectural paradigm.Through discussion on the problems that the alliance has always faced such as catering to fashion trends and business culture,destroying complexity due to compromise in the process ofimplementation,the necessity of logic and reality,the applicability of complexity science in the research of architecture as basic element of urban space were analyzed.Based on that,the alliance between complexity architecture andcomplexity science was considered to be not stable.The alliance with complexity science seemed to increase the scientific quality of complexity architecture and the proportion of scientific judgment in its creation,however,architecture as a social product could not be separated from value judgment,otherwise it would inevitably damage theapplicability of architecture.Keywords :complexity architecture;complexity science;methodology;alliance作㊀㊀者:周官武,男,1971年出生,硕士,副教授㊂电子信箱:451147901@ 收稿日期:2021-04-08㊀㊀当下,西方发达国家的基础设施建设已进入缓慢发展的阶段,规模宏大的中国基础设施建设则方兴未艾,看似南辕北辙的两种现象却共同成就了一个风口,为建筑的去实质化提供了广阔市场㊂建筑创作以创新名义挣脱 适用性 
的约束,在发明空间之路上突飞猛进㊂为了更多㊁更快地发明空间,建筑学全力发明着概念,并不断从外部引进更多概念㊂其中,复杂性建筑的贡献尤其令人眼花缭乱,解构㊁非线性㊁涌现性㊁褶子,诸如此类为晦涩建筑形式做注脚和背书的晦涩概念,多来自同一个源头 复杂性科学㊂1㊀当代建筑的理论匮乏焦虑自从现代建筑运动将创造性确定为核心价值,建筑学便丢掉了按图索骥的工匠式传统,高度依赖理论的注解和支持,因而经常性陷入理论匮乏引发的焦虑㊂现代建筑运动确立的现代建筑范式是理论与实践的综合体系,以哲学严格性和社会责任感著称㊂在现代建筑范式支持下,建筑师只需依循柯布西耶㊁密斯开创的传统,聚焦效率与服务,不必为寻求新理论而困扰(图1[1])㊂然而,现代建筑范式的形式语言是相对固化的,48㊀工业建筑㊀2022年第52卷第1期图1㊀多米诺体系Fig.1㊀The structure system of Domino商业文化却要求形式不断花样翻新来刺激公众的感官,藉以推动消费的增长㊂作为建筑创作主体的建筑师无法无视商业文化的驱策而永久托庇于现代建筑范式羽翼之下㊂他们不得不尝试跳出现代建筑范式的安全区,探寻新的建筑形式语言㊂问题是,瞬间的范式脱离只需灵光一现,但要另辟天地,就必须夯实逻辑基础,构建与现代建筑范式相仿的可靠理论台地㊂因此,相对历史上任何时期的建筑,当代建筑都更加渴求理论来提供认识论和方法论㊂但建筑界往往怯于理论思考,这使建筑学的理论产出总是无法满足自身需求,不得不经常求诸外部,从其他人文社会学科和自然科学中寻觅理论引擎㊂复杂性建筑与复杂性科学的结盟正是在这种情况下发生的㊂2㊀复杂性建筑与复杂性科学的联盟复杂性科学于20世纪80年代兴起,先后经历了埃德加㊃莫兰学说㊁普利高津引领的布鲁塞尔学派以及圣塔菲研究所的理论三个发展阶段,包括协同论㊁突变论㊁混沌理论㊁分形理论等一系列理论㊂复杂性科学以复杂性系统为研究对象,揭示了复杂性的广泛存在及其非线性㊁不确定性㊁自组织性㊁涌现性特征㊂其超越还原论的方法论,颠覆了传统的还原论研究范式,是分析处理复杂性事物的强大工具㊂所以,兴起不久,其影响即溢出自然科学领域,向哲学㊁社会科学等领域全面渗透[2]㊂复杂性建筑同样发端于20世纪80年代,解构建筑是其早期发展阶段,代表人物如艾森曼㊁屈米等大多受德里达解构理论影响㊂解构建筑并不标榜复杂性㊁非线性,更关切从价值论角度对整体性㊁结构进行颠覆㊂但解构建筑与后期复杂性建筑之间有明显的传承关系,而且其形态已经很复杂,有些作品甚至开始部分借助计算机非线性算法生成建筑形态,表现出一定的非线性特征[3]㊂20世纪90年代,复杂性科学的影响开始波及建筑学领域,复杂性建筑进入后期发展阶段㊂曾经的解构派领袖艾森曼这时候对解构失去了兴趣,开始大谈非线性㊂一度对解构建筑持严厉批评态度的查尔斯㊃詹克斯,也转而称赞解构建筑蕴含的复杂性,并预言非线性建筑运动即将到来[4]㊂复杂性科学为建筑的发展带来巨大的想象空间,计算机科学的飞速发展则使想象的落实成为可能㊂随着计算机模拟复杂系统技术的成熟,通过非线性算法生成复杂建筑形式不再遥不可及㊂格雷格㊃林恩㊁蓝天组等前卫建筑师敏锐地认识到其中蕴含的机会:一种颠覆性的设计方法及其一体化形式语言成为可能㊂他们开始积极寻求复杂性科学的指导,将非线性生成置于设计方法的核心,从而突破了现代建筑还原论方法的束缚,同时孕育出一种颠覆现代建筑语言的生成性形式语言,刷新了有序与无序㊁整体与局部等基本形式问题的认知㊂后期复杂性建筑对更复杂的非线性生成工具的渴求永无休止,这推动着算法生成工具不断发展完善,参数化设计正是在此基础上逐渐成熟㊁流行起来㊂当今天的建筑师通过参数化设计创造各种奇异形体或表皮时,他们或许只是追求视觉冲击,未必会深究设计工具与复杂性科学的关系,甚至意识不到自己采用的设计方法和形式语言是复杂性建筑实践的组成部分,而这恰恰证明复杂性建筑的思想和方法已深入人心㊂从早期到后期,复杂性建筑经历了重大理论变化,解构理论为复杂性科学所替代,价值论换成了科学论,科学主义对人文主义再次取得胜利[5]㊂这并不令人意外,一则,引进科学范式提高 科学 成色是建筑学的长期传统,况且作为最前沿科学理论,复杂性科学还自带时尚光环;二则,解构理论不仅存在逻辑问题,更无法解决设计方法问题,关键的概念 形式转化完全依赖主观理解和想象㊂而基于复杂性科学发展出的非线性生成设计方法可以确保概念 形式转化的严格性和精确性㊂最后也是最关键的,作为一种价值理论,解构理论却无法为复杂性形式提供有力的价值论证,逻辑上很难令人信服㊂所以,当复杂性科学揭示出复杂性的机理,西方建筑界从中看到了摆脱价值论困扰的希望,便无暇顾及复杂性科学是否适用于建筑,急匆匆宣布复杂性建筑投入复杂性科学麾下: 
我们获得了第一个后基督教的新型综合世界观,一个能使科学家㊁理论家㊁建筑师㊁艺术家以及普通民众联合起来的结合点㊂它是由所谓 复杂性科学 阐明的新世界观㊂ [3]3㊀并不稳固的联盟拥有纯正科学血统的复杂性科学也是当今最富魅力的时尚题材,混沌㊁分形㊁非线性㊁蝴蝶效应之类术语掺杂在影视文艺作品中,使复杂性科学成为 一个专业人士与非专业人士,科学家与公众,既复杂又有吸引力的结合点㊂ [6]复杂性建筑与之结盟,方方面面皆大欢喜,建筑界得到新理论㊁新方法,公众得以满足时尚需求,商业文化则捕捉到一个可供长期炒作的消费热点㊂但问题是,这一维系专业群体㊁科学理论㊁流行文化㊁商业需求的纽带是否足够坚韧?第一个必须面对的问题是,公众对复杂性科学的热情有多少出自真正的科学认知和兴趣,又有多少出自被流行文化扭曲的浪漫想象㊂肤浅且变动不居的流行口味赋予的荣耀是廉价的,即使复杂性建筑可以分享复杂性科学的这份荣耀,但得到的支持也是不深刻㊁不持久的㊂第二个更为关键的问题是,复杂性科学是否能够在复杂性建筑中真正兑现㊂早期复杂性建筑多不具备足够 复杂性 ,如弗兰克㊃盖瑞的迪士尼音乐厅(图2[7])㊁艾森曼的辛辛纳提阿罗诺夫中心㊁李伯斯金的柏林犹太人纪念馆,已经被今天的评论家开除出 非线性建筑 ,尽管 它们部分地通过计算机非线性的方法生成出来 [8]㊂图2㊀迪士尼音乐厅Fig.2㊀Walt Disney Concert Hall后期复杂性建筑的非线性特征普遍更加鲜明[8],如格雷格㊃林恩的胚胎住宅(图3a[9]㊁图3b[10])㊁蓝天组的云状建筑(图3c[11])㊂这些作品大量使用数字化设计技术进行生成,具有典型的非线性空间形态,理论上的确很符合复杂性科学的标准㊂不过建筑最终得落实到现实空间㊂一旦进入实施环节,正如徐卫国教授指出的,那些基于非线性生成的建筑方案,如FOA的日本横滨国际码头(图4a[12])㊁扎哈㊃哈迪德的广州歌剧院(图4b[13]),蓝天组的大连国际会议中心(图4c[14]),都不得不向技术妥协,以大量平面转折寻求复杂曲面的近似效果㊂[8]非线性生成设计方法确实非常有吸引力,它使建筑形式的自动生成一定程度上成为可能,无须全程依赖人的控制㊂建筑师可以借助Wavefront㊁a㊁b 胚胎住宅;c Paneum中心㊂图3㊀复杂非线性建筑Fig.3㊀Complex nonlinear buildingsa 横滨国际客运码头;b 广州大剧院;c 大连国际会议中心㊂图4㊀基于非线性生成的建筑Fig.4㊀The architecture based on nonlinear generation Rhino等大型3D软件模拟各种力场的复杂相互作用,建构复杂性动力系统,只需改变一些系统参数即可引发系统的自组织演化㊂软件以动画呈现系统演化带来的几何形变,动画的瞬间定格即可得到原始的建筑形式,也即所谓的 动画形式 [15]㊂但在当前技术条件下,复杂性的真正兑现还局限于计算机内的生成过程,动画定格为 动画形式 的瞬间,生成便终结了,不确定性随之消失,得到的只是生成过程的片段和遗迹,自然也无法如詹克斯所期待的那样运动起来,与人共生,反映宇宙发生的过程㊂[3]而且,动画形式只是纯粹的几何形式,生成过程中悬搁脆弱的联盟 周官武49㊀的材料㊁工艺等建构问题依旧离不开人为干预㊂接下来的营建过程,需要确定技术保障下的确定形式㊁确定结构,只能拼凑线性部件 伪装 非线性㊂总体来看,复杂性建筑在形式生成初期阶段,在非线性生成过程确实比较严格地遵循着复杂性科学,但也仅限于此了㊂第三个问题,复杂性科学向建筑领域的全面渗透,并被复杂性建筑奉为圭臬是否具备逻辑和现实的必要性㊂有些学者认为,现代建筑范式只是工业社会的空间方案,而今天的社会则是所谓 后工业化信息社会 
,注定要将基于数字化技术的非线性建筑推向核心位置[16]㊂那么,信息社会与工业社会的空间需求是否有本质不同?信息时代确实带来一些新的空间需求,但这些新需求是否是现代建筑范式无法应对的?如果能够应对,为什么还要在资源危机频发的情况下,以如此巨大的代价寻求一个更复杂,却不能带来太多福利的解决方案?20世纪90年代以来,数字化依赖确已逐渐形成㊂但全面的数字化生存仍只存在于科幻作品之中,现实的数字化则寄居在现代建筑空间之内,并没有表现出明显的适应㊂或许现代建筑范式无法满足信息社会特有的空间需求,但还不足以引发现代建筑范式的崩溃㊂复杂性建筑依旧无法成为主流范式,离核心位置还远得很㊂所以,让人不得不怀疑:以复杂性科学为基础重塑建筑学,到底是出于现实的需要,还是为了给建筑学涂抹更多的科学装饰色,顺带解救陷入理论焦虑的建筑共同体?复杂性建筑从复杂性科学大量吸收规则㊁工具和方法,自觉接受后者的规定,并因此越来越依赖计算机技术,大量进行进行虚拟设计和仿真㊂复杂性建筑在解构建筑阶段曾激烈反对现代建筑的机械论,而今却彻底离不开机器,比异化的现代建筑更加异化了㊂非线性生成是复杂性建筑设计方法的核心,也是复杂性建筑从复杂性科学得到的最大馈赠㊂其实质是虚拟系统的自组织过程,对人而言则是一个 黑箱 ,可以排除价值判断和隐喻,保证生成形式的绝对抽象性㊂不过,即使是最狂热的复杂性建筑派也不敢完全信任计算机,他们会设计和选择算法,再通过反复输入输出寻求理想方案,其结果就是 黑箱 不黑,自组织滑向他组织㊂在现实面前,复杂性建筑与复杂性科学的联盟总是这么摇摇晃晃,把方法论逻辑搞得支离破碎㊂在与复杂性科学结盟后,复杂性建筑就经常脱离现实的轨道:无视人的需求和资源禀赋对建筑的规定性,一味追求 复杂性 ;排斥人对建筑天然拥有的干预控制权利,为计算机算法生成让路;无视建筑的人文属性,清除价值判断,诸如此类㊂归根结底,复杂性建筑并非出于建筑的现实和逻辑需要选择理论,而是预先选择理论,再裁剪现实以服从理论㊂但现实并不会真的服从理论,所以复杂性建筑必然要陷入两难困境:如果坚持复杂性科学逻辑就会在现实面前不断碰壁,如果向现实妥协又会违背复杂性科学逻辑,令两者的联盟变得脆弱不堪㊂4㊀复杂性科学是否适用于建筑建筑是否复杂到必须采用复杂性科学来进行研究,对于复杂性建筑与复杂性科学的联盟来说,这是一个根本性问题㊂建筑界对建筑的复杂性有两种不同理解㊂其一为文丘里所谓的复杂性,产生于大量堆积的样式符号的多层次意义纠缠,空间本身并不复杂㊂这是一种建筑意义的复杂性,用詹克斯喜用的 双重编码 来表达或许更准确[4]㊂其二为建筑本体的复杂性,如复杂性建筑的复杂性,表现为抽象几何形式构成的复杂空间关系,不附加外部意义或隐喻㊂由于意义的理解主观性太强,前者很容易导向无视建筑自身规律的形式主义游戏,而后者着眼建筑本身,逻辑要严密得多㊂但必须指出的是,复杂性建筑的复杂性由复杂性科学定义,不同于一般意义上的建筑本体的复杂性㊂这就带来一个问题:作为有限尺度的空间单位和更大尺度空间系统的构成元素,建筑是否具备这样的复杂性㊂ 在宏观的空间㊁时间尺度上,在建筑和城市的统一体中的确存在非线性㊁突变㊁混沌㊁自相似性的性质㊂ 它们真的能够在一个单体建筑上全部展现出来吗? 
[6]詹克斯曾经提出过一种缩微宇宙论,主张建筑必须追随科学尤其是复杂性科学, 表现宇宙发生的基本规律 自组织㊁突变以及向更高或更低层次的跃迁 [3]㊂ 建筑的下一个挑战是如何创造真正给能够运动的局部,使居住者或参观者与建筑建立共生关系,积极反映宇宙发生的过程㊂ [3]这一理论将建筑看作宇宙的同构缩微模型,与凯文㊃林奇在古代城市中发现的 宇宙模式 颇为相似,其内在逻辑也与 宇宙模式 一样充满神秘主义色彩[17]㊂詹克斯并不能证明建筑与宇宙间存在自相似性,或具有宇宙式的 复杂性 ,却强行要求建筑套用宇宙图式㊁提高复杂度,以便与复杂性科学相匹配㊂这样得到的建筑并不能反映宇宙,充其量是对宇宙的静态50㊀工业建筑㊀2022年第52卷第1期的㊁图式化的隐喻[6]㊂复杂性 并非元素的属性,而是系统对元素进行组织和整合的产物,是在系统整体层次上涌现出来的东西㊂ [18]因而,作为大空间系统的城市,或大规模聚落㊁城市区段表现出高度复杂性并不出人意料㊂早在复杂性科学影响建筑与城市研究领域之前,简㊃雅各布斯和克里斯托弗㊃亚历山大等学者对此就有深入阐述,复杂性科学则帮助我们对城市空间复杂性的认识更加精确㊁严格㊂但单体建筑只是城市空间系统的元素或局部,不具备系统整体才能具备的复杂性,赋予城市空间复杂性的自然与社会因素的复杂相互作用,及其历时性演化并不存在于建筑单体层面㊂[19]建筑的核心问题始终是适用性问题,即基于资源禀赋和人的需求,对经济㊁技术㊁功能及形式等各方面加以综合㊁平衡的问题㊂这些问题显然不是复杂性科学所能应对的㊂所以,复杂性科学对建筑的影响几乎从未超越形式层面㊂复杂性建筑推进了建筑形式和形式生成方法的革新,却并未提高建筑的适用性,反倒经常因为过度追求形式的复杂性而牺牲经济㊁技术和功能各方面的合理性㊂这不是作为科学工具的复杂性科学本身的问题,而是在非适用领域滥用科学工具造成的问题㊂复杂性建筑追随复杂性科学很大程度上是为了摆脱现代建筑范式的束缚㊂现代建筑强调功能与效率,反对任何非必要的空间㊁形式复杂化㊂这样或许会损害多样性,但对建筑的认知并无原则性问题㊂其真正问题在于过度推崇简约化,试图在本质复杂的城市层级上消灭复杂性㊂而复杂性建筑正相反,以复杂性为目标,不分单体建筑还是大规模空间系统㊂藉此固然可以跳出现代建筑范式的樊笼,却也同时迷失了面向建筑的问题视野㊂当建筑师沉迷于在建筑单体中构建复杂性,他们不只是在浪费宝贵的资源,也是对基本建筑价值的践踏:用喧嚣㊁自负的几何杂耍替代严肃的人类空间生产实践,其深层则是陷入混乱的哲学意识和社会责任感的沦丧㊂复杂性科学或许可以在大规模建筑群体和城市空间组织中大展身手,但用于单体建筑却是严重的对象选择错误,复杂性建筑的逻辑与现实困境的根源正在于此㊂5㊀结束语复杂性建筑与复杂性科学的结盟是建筑学科学化的又一次努力㊂复杂性建筑派试图将建筑形式的发生更多建立在科学逻辑之上,降低人的干预,减少价值判断的影响,提高建筑创作的客观性㊂这种尝试推进了设计方法的进步,对当下的建筑设计产生了深刻影响㊂但作为社会产物,人为㊁为人是建筑的根基,建筑的人文属性是内在的,必须永远接受价值的约束㊂建筑学的意义在于寻求 好的空间 ,这本身就是一个典型的价值问题㊂所以建筑创作无法摆脱价值判断,建筑也从来不是理想的自然科学应用领域㊂无视这一点,一味用科学判断挤压价值判断,并不能真正提高建筑学的科学成色或推进建筑工业化的深入,只会得到另一种创造新奇形式的手段,作为一时的流行符号而沦为商业文化的附庸㊂[20]复杂性建筑对复杂性的探索,扩展了建筑学的边界㊂但是,复杂性建筑是将过度的复杂性强加给无需过度复杂的建筑,让本应服从人和现实的建筑为复杂性科学理论服务[21]㊂复杂性建筑与复杂性科学的联盟就建立在这种头脚倒置的逻辑上,这必然导致对建筑自身规则的背离,产生内在的适用性问题:功能不佳㊁极高的实施难度㊁严重的资源浪费㊁空间设置不合理和缺乏效率㊂但现实并不会迁就理论,所以复杂性建筑的营建总是与数不尽的技术妥协相伴,最终变得不够 复杂性 或局限于表皮的复杂性,其与复杂性科学的联盟也随之摇摇欲坠㊂参考文献[1]㊀博奥席耶W,斯通诺霍O.勒㊃柯布西耶全集:第1卷[M].牛燕芳,程超,译.北京:中国建筑工业出版社,2005:18.[2]㊀黄欣荣.复杂性科学与哲学[M].北京:中央编译出版社,2007.[3]㊀JENCKS C.The architecture of the jumping universe[M].Lanham:National Book Network,Inc,1996.[4]㊀JENCKS C.The new moderns[M].New York:RizzoliInternational Publications 
Inc,1990.[5]㊀曾欢.西方科学主义思潮的历史轨迹:以科学统一为研究视角[M].北京:世界知识出版社,2009.[6]㊀周官武,姜玉艳.查尔斯㊃詹克斯的宇源建筑理论评析[J].新建筑,2003(6):58-61.[7]㊀THOMAS.The walt disney concert hall[EB/OL].2012-09-02[2021-03-27]./building/read/35/The-Walt-Disney-Concert-Hall/1192.[8]㊀徐卫国.褶子思想,游牧空间:关于非线性建筑参数化设计的访谈[J].世界建筑,2009(8):16-17.[9]㊀LECOMTE J.Speculative architectures[EB/OL].2013-10-02[2021-03-27].https:///editorial/articles/speculative-architectures.[10]KLEIN L.Tasting space[EB/OL].2013-04-10[2021-03-29]./tasting-space.[11]CORRADI M.Coop himmelb(L)AU:paneum-wunderkammer des Brotes,Asten[EB/OL].2018-07-02[2021-03-27]./paneum-wunderkammer-des-brotes-by-coop-himmelblau.htm.(下转第7页)脆弱的联盟 周官武51㊀充薄膜均可降低钢管应变水平,提高钢管对混凝土的约束作用㊂4)钢管径厚比越大,屈服强度越高,钢管的横向变形系数越大㊂钢管与混凝土间填充薄膜的试件横向系数大于钢管与混凝土间涂油处理的试件㊂5)基于Mander模型建议了钢管约束陶粒混凝土短柱轴压极限承载力计算公式,计算结果与试验结果吻合良好㊂参考文献[1]㊀中华人民共和国建设部.轻集料混凝土技术规程:JGJ512002[S].北京:中国建筑工业出版社,2002.[2]㊀YU Q L,SPIESZ P,BROUWERS H.Ultralightweight concrete:conceptual design and performance evaluation[J].Cement& Concrete Composites,2015,61:18-28.[3]㊀刘平,葛婷,王小亮.LC7.5轻质陶粒混凝土的配制与性能研究[J].建材发展导向,2019,17(12):105-108.[4]㊀GAO J,SUN W,MORINO K.Mechanical properties of steelfiber-reinforced,high-strength,lightweight concrete[J].Cement and Concrete Composites,1997,19(4):307-313. [5]㊀WANG P T,SHAH S P,NAAMAN A E.Stress-strain curves ofnormal and lightweight concrete in compression[J].Journal of American Concrete Institute,1978,75(11):603-611. [6]㊀王振宇,丁建彤,郭玉顺.结构轻骨料混凝土的应力-应变全曲线[J].混凝土,2005(3):39-41,66.[7]㊀叶列平,孙海林,陆新征,等.高强轻骨料混凝土结构性能㊁分析与计算[M].北京:科学出版社,2009:1-4. 
[8]㊀ZHANG M H,GJVORV O E.Mechanical properties of high-strength lightweight concrete[J].Materials Journal,1991,88(3):240-247.[9]㊀董祥.纤维增强高性能轻骨料混凝土物理力学性能㊁抗冻性及微观结构研究[D].南京:东南大学,2005.[10]田耀刚.高强次轻混凝土的研究[D].武汉:武汉理工大学,2005.[11]周绪红,刘界鹏.钢管约束混凝土柱的性能与设计[M].北京:科学出版社,2010.[12]ZHAN Y,ZHAO R,MA Z J,et al.Behavior of prestressedconcrete-filled steel tube(CFST)beam[J].Engineering Structures,2016,122:144-155.[13]LAI M H,HO J C M.A theoretical axial stress-strain model forcircular concrete-filled-steel-tube columns[J].Engineering Structures,2016,125:124-143.[14]FAKHARIFAR M,CHEN pressive behavior of FRP-confined concrete-filled PVC tubular columns[J].Composite Structures,2016,141:91-109.[15]WANG X D,LIU J P,ZHANG S M.Behavior of short circulartubed-reinforced-concrete columns subjected to eccentric compression[J].Engineering Structures,2015,105:77-86. [16]张素梅,刘界鹏,马乐,等.圆钢管约束高强混凝土轴压短柱的试验研究与承载力分析[J].土木工程学报,2007,40(3):24-31.[17]WANG X D,LIU J P,ZHANG S M.Behavior of short circulartubed-reinforced-concrete columns subjected to eccentric compression[J].Engineering Structures,2015,105:77-86. [18]ZHOU X H,LIU J P,WANG X D,et al.Behavior and design ofslender circular tubed-reinforced-concrete columns subjected to eccentric compression[J].Engineering Structures,2016,124:17-28.[19]周绪红,闫标,刘界鹏,等.不同长径比圆钢管约束钢筋混凝土柱轴压承载力研究[J].建筑结构学报,2018,39(12):11-21.[20]甘丹.钢管约束混凝土短柱的静力性能和抗震性能研究[D].兰州:兰州大学,2012.[21]刘文晓,姜凡,李淼,等.圆钢管约束轻骨料钢筋混凝土轴压短柱力学性能试验[J].混凝土,2020(8):19-22,26. [22]高喜安,吴成龙,李斌.方钢管约束轻骨料混凝土轴压短柱的力学性能[J].科学技术与工程,2018,18(12):256-261. [23]宋玉普,赵国藩.轻骨料砼在双轴压压及拉压状态下的变形和强度特性[J].建筑结构学报,1994,15(2):17-21. [24]杨明.钢管约束下核心轻集料混凝土基本力学性能研究[D].南京:河海大学,2006.[25]李帼昌,刘之洋,杨良志.钢管煤矸石砼中核心砼的强度准则及本构关系[J].东北大学学报,2002(1):64-66. [26]吴东阳,傅中秋,吉伯海,等.钢管约束下轻集料混凝土本构模型[J].扬州大学学报(自然科学版),2019,22(1):67-73. 
[27]颜燕祥,徐礼华,蔡恒,等.高强方钢管超高性能混凝土短柱轴压承载力计算方法研究[J].建筑结构学报,2019,40(12): 128-137.[28]MANDER J B,PRIESTLEY M J N,PARK R.Theoretical stress-strain model for confined concrete[J].Journal of Structural Engineering,1988,114(8):1804-1826.(上接第51页)[12]LANGDON D.AD classics:yokohama international passengerterminal[EB/OL].2018-10-17[2021-03-27].https://www./554132/ad-classics-yokohama-international-passenger-terminal-foreign-office-architects-foa.[13]MCGRATH K.Melbourne set to get the only Zaha Hadid buildingin Australia[EB/OL].2016-07-13[2021-03-27].https:// /project-news/melbourne-set-to-get-the-only-zaha-hadid-building-in-australia.[14]TEEMUNNY.Dalian international conference[EB/OL].2013-04-01[2021-03-27]./2013/04/01/ dalian-international-conference-center-coop-himmelblau/. [15]薛彦波,仇宁.动画形式+虚拟建造[M].北京:中国建筑工业出版社,2011.[16]徐卫国.非线性体:表现复杂性[J].世界建筑,2006(12):118-121.[17]林奇K.城市形态[M].林庆怡,等,译.北京:华夏出版社,2001.[18]苗东升.分形与复杂性[J].系统辨证学学报,2003(2):7-13.[19]雅各布斯J.美国大城市的死与生[M].金衡山,译.南京:译林出版社,2005.[20]德勒兹G L R.哲学与权力的谈判:德勒兹访谈录[M].刘汉全,译.北京:商务印书馆,2001.[21]范振刚,周官武,姜玉艳.基于可实施手段的复杂性建筑讨论[J].建筑学报,2015(4):107-109.圆钢管约束陶粒混凝土短柱的单轴受压试验研究及承载力计算 王宇航,等7㊀。
Complexity Cognition of Urban Public Health Risks and Adaptive Planning Responses
Complexity Cognition of Urban Public Health Risks and Adaptive Planning Responses. Yu Tingting, Leng Hong, Yuan Qing. [Abstract] Frequent global outbreaks of infectious diseases pose a serious threat to urban public safety, and urban emergency planning responses have become an important issue in coping with public health risks.
To analyze how urban systems operate in the face of public health emergencies, this paper interprets the characteristics of urban public health risks from a complexity perspective, constructs a complex system of adaptive agents, urban space, and social networks and analyzes its complex adaptive mechanisms, and broadens the paths of adaptive planning response by combining smart planning techniques, realizing the scale transition from epidemic prevention and control management to spatial layout planning, in the hope of providing a reference for improving urban public safety planning and building healthy urban living environments.
[Keywords] public health risk; complex adaptivity; planning response; smart planning technology; healthy city. [Article number] 1006-0022(2020)05-0045-04. [CLC number] TU984. [Document code] A. [Citation format] Yu Tingting, Leng Hong, Yuan Qing. Complexity of urban public health risk and adaptive planning responses [J]. Planners, 2020(5): 45-48. Complexity of Urban Public Health Risk and Adaptive Planning Responses / Yu Tingting, Leng Hong, Yuan Qing. [Abstract] The global epidemic situation of infectious diseases is a serious threat to urban public security. Urban emergency planning responses have become an important issue to deal with public health risks. In order to analyze the operation law of urban systems in the face of public health emergencies, this paper interprets the characteristics of urban public health risks from the perspective of complexity, constructs a complex system of subject, urban space, and social network and analyzes the adaptive mechanism, and improves the planning response path with the combination of intelligent planning technology, so as to realize the scale transition from epidemic prevention management to spatial planning. It provides a reference that helps to improve urban public safety planning and construct healthy urban living environments. [Key words] Public health risk, Adaptation to complexity, Planning response, Smart planning technology, Healthy city. 0 Introduction. Since 2019, epidemics such as Ebola virus, influenza virus, and novel coronavirus pneumonia have raged around the world, posing a serious threat to urban public safety [1].
Applications of the peaks function in engineering
The application of the peaks function in engineering is vast and diverse, spanning numerous fields. Primarily, the peaks function serves as a testbed for various algorithms and techniques, particularly in the realm of optimization, numerical analysis, and simulation. Its complex, multimodal nature provides a challenging landscape for testing the effectiveness and robustness of various algorithms.

In the context of optimization, the peaks function is frequently used to benchmark optimization algorithms, such as genetic algorithms, simulated annealing, and particle swarm optimization. By minimizing or maximizing the peaks function, researchers can assess the convergence speed, accuracy, and stability of these algorithms.
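To make the benchmark concrete, here is a small sketch assuming the standard MATLAB definition of peaks, which is the function these benchmarks refer to. A brute-force grid scan gives a baseline minimum against which a metaheuristic such as simulated annealing or particle swarm optimization can then be compared.

```python
import math

def peaks(x, y):
    """The MATLAB `peaks` surface: a smooth two-dimensional function
    with several local maxima and minima, widely used as an
    optimization benchmark."""
    return (3 * (1 - x) ** 2 * math.exp(-x**2 - (y + 1) ** 2)
            - 10 * (x / 5 - x**3 - y**5) * math.exp(-x**2 - y**2)
            - (1 / 3) * math.exp(-((x + 1) ** 2) - y**2))

def grid_minimum(f, lo=-3.0, hi=3.0, steps=300):
    """Brute-force baseline: evaluate f on a regular grid and keep the
    best point found. A metaheuristic is judged by how quickly it
    approaches this value with far fewer evaluations."""
    best = (float("inf"), None, None)
    for i in range(steps + 1):
        x = lo + (hi - lo) * i / steps
        for j in range(steps + 1):
            y = lo + (hi - lo) * j / steps
            v = f(x, y)
            if v < best[0]:
                best = (v, x, y)
    return best

value, bx, by = grid_minimum(peaks)
```

The deepest valley of the surface lies near (0.23, -1.63); a benchmark run records how many function evaluations an algorithm needs before its best value falls within a tolerance of the grid baseline.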
On the English word "around"
The English language, with its rich lexicon and versatile grammar, offers a fascinating subject for exploration. One of the most intriguing aspects of English is the depth and breadth of its vocabulary, which is continually expanding with the evolution of society and technology.

The English word "around," for instance, is a preposition that denotes a position in the vicinity of something, or the act of encircling or moving in a circular path. It is a versatile word that can be used in various contexts, from describing physical locations to abstract concepts. For example, "The children played around the garden" illustrates a physical setting, while "The discussion revolved around the central theme" indicates a more abstract usage.

Moreover, "around" is a word that can be used to express approximation, as in "I will be there around 5 PM," which suggests a time frame rather than an exact moment. This flexibility makes it a common choice for speakers and writers who wish to convey a sense of proximity or uncertainty.

In the digital age, the English language has also seen the emergence of new words and phrases that reflect our increasingly interconnected world. "Around" has taken on new meanings in the context of social media and online communication, where it might be used to describe the act of browsing or skimming through content, as in "I was just browsing around on the internet."

The study of English words like "around" is not only about understanding their meanings and uses but also about appreciating the cultural and historical influences that have shaped the language. As we delve deeper into the nuances of English, we gain a greater appreciation for the complexity and beauty of communication.
Statistical fitting correction
Statistical fitting correction is an important step in data analysis, aimed at enhancing the accuracy and reliability of statistical models. It involves the use of various statistical techniques to adjust and refine the model based on observed data, thereby reducing the impact of potential biases and systematic errors. The correction process typically begins with a thorough understanding of the data, including its source, collection methods, and any inherent limitations.

Next, statistical techniques such as regression analysis, maximum likelihood estimation, or Bayesian inference are applied to the data. These methods help identify patterns and relationships within the data, enabling the creation of a more accurate model. However, it is crucial to remember that no model can perfectly capture the complexity of real-world phenomena, and therefore some degree of approximation is inevitable.
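As a concrete instance of the regression step mentioned above, the sketch below (synthetic data, illustrative only) fits a straight line by ordinary least squares in closed form and recovers the known slope and intercept up to the noise level.

```python
import random

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x, in closed form:
    b = S_xy / S_xx, a = mean(y) - b * mean(x)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    return a, b

# Synthetic data: true line y = 2 + 3x, Gaussian noise with sd 0.5.
rng = random.Random(0)
xs = [i / 10 for i in range(100)]
ys = [2.0 + 3.0 * x + rng.gauss(0, 0.5) for x in xs]
a, b = fit_line(xs, ys)
```

Comparing the fitted (a, b) against the generating parameters is the simplest form of the model-checking loop described in the text: if the recovered coefficients drift systematically from expectation, the model or the data-collection process needs correction.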
Complexity and Approximation of FixingNumerical Attributes in Databases UnderIntegrity ConstraintsLeopoldo Bertossi,Loreto BravoCarleton University,School of Computer Science,Ottawa,Canada.{bertossi,lbravo}@scs.carleton.caEnrico Franconi,Andrei LopatenkoFree University of Bozen–Bolzano,Faculty of Computer Science,Italy.{franconi,lopatenko}@inf.unibz.itAbstract.Consistent query answering is the problem of computing theanswers from a database that are consistent with respect to certainintegrity constraints that the database as a whole may fail to satisfy.Those answers are characterized as those that are invariant under min-imal forms of restoring the consistency of the database.In this context,we study the problem of repairing databases byfixing integer numeri-cal values at the attribute level with respect to denial and aggregationconstraints.We introduce a quantitative definition of databasefix,andinvestigate the complexity of several decision and optimization prob-lems,including DFP,i.e.the existence offixes within a given distancefrom the original instance,and CQA,i.e.deciding consistency of answersto aggregate conjunctive queries under different semantics.We providesharp complexity bounds,identify relevant tractable cases;and introduceapproximation algorithms for some of those that are intractable.Morespecifically,we obtain results like undecidability of existence offixes foraggregation constraints;MAXSNP-hardness of DFP,but a good approx-imation algorithm for a relevant special case;and intractability but goodapproximation for CQA for aggregate queries for one database atom de-nials(plus built-ins).1IntroductionIntegrity constraints(ICs)are used to impose semantics on a database with the purpose of making the database an accurate model of an application domain. 
Database management systems or application programs enforce the satisfaction of the ICs by rejecting undesirable updates or executing additional compensating actions. However, there are many situations where we need to interact with databases that are inconsistent in the sense that they do not satisfy certain desirable ICs. In this context, an important problem in database research consists in characterizing and retrieving consistent data from inconsistent databases [4], in particular consistent answers to queries. From the logical point of view, consistently answering a query posed to an inconsistent database amounts to evaluating the truth of a formula against a particular class of first-order structures [2], as opposed to the usual process of truth evaluation in a single structure (the relational database).

Dedicated to the memory of Alberto Mendelzon. Our research on this topic started with conversations between Loreto Bravo and him. Alberto was always generous with his time, advice and ideas; our community is already missing him very much.

Also: University of Manchester, Department of Computer Science, UK.

Certain database applications, like census, demographic, financial, and experimental data, contain quantitative data, usually associated to nominal or qualitative data, e.g. number of children associated to a household identification code (or address); or measurements associated to a sample identification code.
Usually this kind of data contains errors or mistakes with respect to certain semantic constraints. For example, a census form for a particular household may be considered incorrect if the number of children exceeds 20; or if the age of a parent is less than 10. These restrictions can be expressed with denial integrity constraints, which prevent some attributes from taking certain values [10]. Other restrictions may be expressed with aggregation ICs, e.g. the maximum concentration of a certain toxin in a sample may not exceed a certain specified amount; or the number of married men and married women must be the same. Inconsistencies in numerical data can be resolved by changing individual attribute values, while keeping values in the keys, e.g. without changing the household code, the number of children is decreased considering the admissible values.

We consider the problem of fixing integer numerical data wrt certain constraints while (a) keeping the values for the attributes in the keys of the relations, and (b) minimizing the quantitative global distance between the original and modified instances. Since the problem may admit several global solutions, each of them involving possibly many individual changes, we are interested in characterizing and computing data and properties that remain invariant under any of these fixing processes. We concentrate on denial and aggregation constraints; and conjunctive queries, with or without aggregation.

Database repairs have been studied in the context of consistent query answering (CQA), i.e. the process of obtaining the answers to a query that are consistent wrt a given set of ICs [2] (c.f. [4] for a survey). There, consistent data is characterized as invariant under all minimal forms of restoring consistency, i.e.
as data that is present in all minimally repaired versions of the original instance (the repairs). Thus, an answer to a query is consistent if it can be obtained as a standard answer to the query from every possible repair. In most of the research on CQA, a repair is a new instance that satisfies the given ICs, but differs from the original instance by a minimal set, under set inclusion, of (completely) deleted or inserted tuples. Changing the value of a particular attribute can be modelled as a deletion followed by an insertion, but this may not correspond to a minimal repair. However, in certain applications it may make more sense to correct (update) numerical values only in certain attributes. This requires a new definition of repair that considers: (a) the quantitative nature of individual changes, (b) the association of the numerical values to other key values; and (c) a quantitative distance between database instances.

Example 1. Consider a network traffic database D that stores flow measurements of links in a network. This network has two types of links, labelled 0 and 1, with maximum capacities 1000 and 1500, resp.

Traffic:  Time  Link  Type  Flow
          1.1   a     0     1100
          1.1   b     1     900
          1.3   b     1     850

For type 0 links, the capacity restriction is captured by a denial IC (c.f. Example 2(c)). Database D is inconsistent wrt this IC. Under the tuple and set oriented semantics of repairs [2], there is a unique repair, namely deleting tuple Traffic(1.1, a, 0, 1100). However, we have two options that may make more sense than deleting the flow measurement, namely updating the violating tuple to Traffic(1.1, a, 0, 1000) or to Traffic(1.1, a, 1, 1100); satisfying an implicit requirement that the numbers should not change too much.

Update-based repairs for restoring consistency are studied in [24], where changing values in attributes in a tuple is made a primitive repair action, and semantic and computational problems around CQA are analyzed from this perspective.
However, peculiarities of changing numerical attributes are not considered, and more importantly, the distance between database instances used in [24,25] is based on set-theoretic homomorphisms, but not quantitative, as in this paper. In [24] the repaired instances are called fixes, a term that we keep here (instead of repairs), because our basic repair actions are also changes of (numerical) attribute values. In this paper we consider fixable attributes that take integer values and the quadratic, Euclidean distance L2 between database instances. Specific fixes and approximations may be different under other distance functions, e.g. the "city distance" L1 (the sum of absolute differences), but the general (in)tractability and approximation results remain. However, moving to the case of real numbers will certainly bring new issues that require different approaches; they are left for ongoing and future research. Actually it would be natural to investigate them in the richer context of constraint databases [17].

The problem of attribute-based correction of census data forms is addressed in [10] using disjunctive logic programs with stable model semantics. Several underlying and implicit assumptions that are necessary for that approach to work are made explicit and used here, extending the semantic framework of [10].

We provide semantic foundations for fixes that are based on changes on numerical attributes in the presence of key dependencies and wrt denial and aggregate ICs, while keeping the numerical distance to the original database to a minimum. This framework introduces new challenging decision and optimization problems, and many algorithmic and complexity theoretic issues. We concentrate in particular on the "Database Fix Problem" (DFP), of determining the existence of a fix at a distance not bigger than a given bound, in particular considering the problems of construction and verification of such a fix. These problems are highly relevant for large inconsistent databases. For example, solving DFP can help us
find the minimum distance from a fix to the original instance; information that can be used to prune impossible branches in the process of materialization of a fix. The CQA problem of deciding the consistency of query answers is studied wrt decidability, complexity, and approximation under several alternative semantics.

We prove that DFP and CQA become undecidable in the presence of aggregation constraints. However, DFP is NP-complete for linear denials, which are enough to capture census-like applications. CQA belongs to Π^P_2 and becomes ∆^P_2-hard, but for a relevant class of denials we get tractability of CQA for non-aggregate queries, which is again lost with aggregate queries. Wrt approximation, we prove that DFP is MAXSNP-hard in general, and for a relevant subclass of denials we provide an approximation within a constant factor that depends on the number of atoms in them. All the algorithmic and complexity results, unless otherwise stated, refer to data complexity [1], i.e. to the size of the database, which here includes a binary representation for numbers. For complexity theoretic definitions and classical results we refer to [20].

This paper is structured as follows. Section 2 introduces basic definitions.
Section 3 presents the notion of database fix, several notions of consistent answer to a query, and some relevant decision problems. Section 4 investigates their complexity. In Section 5 approximations for the problem of finding the minimum distance to a fix are studied, obtaining negative results for the general case, but good approximation for the class of local denial constraints. Section 6 investigates tractability of CQA for conjunctive queries and denial constraints containing one database atom plus built-ins. Section 7 presents some conclusions and refers to related work. Proofs and other auxiliary, technical results can be found in Appendix A.1.

2 Preliminaries

Consider a relational schema Σ = (U, R, B, A), with domain U that includes Z,[1] R a set of database predicates, B a set of built-in predicates, and A a set of attributes. A database instance is a finite collection D of database tuples, i.e. of ground atoms P(c̄), with P ∈ R and c̄ a tuple of constants in U. There is a set F ⊆ A of all the fixable attributes, those that take values in Z and are allowed to be fixed. Attributes outside F are called rigid. F need not contain all the numerical attributes, that is, we may also have rigid numerical attributes. We also have a set K of key constraints expressing that relations R ∈ R have a primary key K_R, K_R ⊆ (A \ F). Later on (c.f. Definition 2), we will assume that K is satisfied both by the initial instance D, denoted D |= K, and its fixes. Since F ∩ K_R = ∅, values in key attributes cannot be changed in a fixing process; so the constraints in K are hard. In addition, there may be a separate set IC of flexible ICs that may be violated, and it is the job of a fix to restore consistency wrt them (while still satisfying K).

A linear denial constraint [17] has the form ∀x̄ ¬(A_1 ∧ ... ∧ A_m), where the A_i are database atoms (i.e. with predicate in R), or built-in atoms of the form xθc, where x is a variable, c is a constant and θ ∈ {=, ≠, <, >, ≤, ≥}, or x = y.
If x ≠ y is allowed, we call them extended linear denials.

Example 2. The following are linear denials (we replace ∧ by a comma): (a) No customer is younger than 21: ∀Id, Age, Income, Status ¬(Customer(Id, Age, Income, Status), Age < 21). (b) No customer with income less than 60000 has "silver" status: ∀Id, Age, Income, Status ¬(Customer(Id, Age, Income, Status), Income < 60000, Status = silver). (c) The constraints in Example 1, e.g. ∀T, L, Type, Flow ¬(Traffic(T, L, Type, Flow), Type = 0, Flow > 1000).

We consider aggregation constraints (ACs) [22] and aggregate queries with sum, count, average. Filtering ACs impose conditions on the tuples over which aggregation is applied, e.g. sum(A1 : A2 = 3) > 5 is a sum over A1 of tuples with A2 = 3. Multi-attribute ACs allow arithmetical combinations of attributes as arguments for sum, e.g. sum(A1 + A2) > 5 and sum(A1 × A2) > 100. If an AC has attributes from more than one relation, it is multi-relation, e.g. sum_R1(A1) = sum_R2(A1); otherwise it is single-relation.

[1] With simple denial constraints, numbers can be restricted to, e.g., N or {0,1}.

An aggregate conjunctive query has the form q(x_1, ..., x_m; agg(z)) ← B(x_1, ..., x_m, z, y_1, ..., y_n), where agg is an aggregation function and its non-aggregate matrix (NAM), given by q(x_1, ..., x_m) ← B(x_1, ..., x_m, z, y_1, ..., y_n), is a usual first-order (FO) conjunctive query with built-in atoms, such that the aggregation attribute z does not appear among the x_i. Here we use the set semantics. An aggregate conjunctive query is cyclic (acyclic) if its NAM is cyclic (acyclic) [1].

Example 3. q(x, y, sum(z)) ← R(x, y), Q(y, z, w), w ≠ 3 is an aggregate conjunctive query, with aggregation attribute z. Each answer (x, y) to its NAM, i.e. to q(x, y) ← R(x, y), Q(y, z, w), w ≠ 3, is expanded to (x, y, sum(z)) as an answer to the aggregate query. sum(z) is the sum of all the values for z having a w such that (x, y, z, w) makes R(x, y), Q(y, z, w), w ≠ 3 true. In the database instance D = {R(1,2), R(2,3), Q(2,5,9), Q(2,6,7), Q(3,1,1), Q(3,1,5), Q(3,8,3)} the answer set for the aggregate query is
{(1,2,5+6), (2,3,1+1)}.

An aggregate comparison query is a sentence of the form q(agg(z)), agg(z) θ k, where q(agg(z)) is the head of a scalar aggregate conjunctive query (with no free variables), θ is a comparison operator, and k is an integer number. For example, the following is an aggregate comparison query asking whether the aggregated value obtained via q(sum(z)) is bigger than 5: Q: q(sum(z)), sum(z) > 5, with q(sum(z)) ← R(x, y), Q(y, z, w), w ≠ 3.

3 Least Squares Fixes

When we update numerical values to restore consistency, it is desirable to make the smallest overall variation of the original values, while considering the relative relevance or specific scale of each of the fixable attributes. Since the original instance and a fix will share the same key values (c.f. Definition 2), we can use them to compute variations in the numerical values. For a tuple k̄ of values for the key K_R of relation R in an instance D, t̄(k̄, R, D) denotes the unique tuple t̄ in relation R in instance D whose key value is k̄. To each attribute A ∈ F a fixed numerical weight α_A is assigned.

Definition 1. For instances D and D' over schema Σ with the same set val(K_R) of tuples of key values for each relation R ∈ R, their square distance is

∆_ᾱ(D, D') = Σ_{R∈R, A∈F} Σ_{k̄∈val(K_R)} α_A (π_A(t̄(k̄, R, D)) − π_A(t̄(k̄, R, D')))²,

where π_A is the projection on attribute A and ᾱ = (α_A)_{A∈F}.
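The evaluation in Example 3 can be reproduced mechanically. The following is an illustrative sketch (Python; not part of the paper, with relation names taken from the example), computing the aggregate query under the set semantics described above, where each distinct (z, w) combination satisfying the body contributes z once to the group of (x, y):

```python
# Illustrative sketch (not from the paper): evaluating the aggregate
# conjunctive query of Example 3, q(x, y, sum(z)) <- R(x, y), Q(y, z, w), w != 3.
R = {(1, 2), (2, 3)}
Q = {(2, 5, 9), (2, 6, 7), (3, 1, 1), (3, 1, 5), (3, 8, 3)}

def aggregate_query(R, Q):
    groups = {}
    for (x, y) in R:
        # join R(x, y) with Q(y, z, w) and apply the built-in w != 3
        matches = {(z, w) for (y2, z, w) in Q if y2 == y and w != 3}
        if matches:  # the NAM must have (x, y) as an answer
            groups[(x, y)] = sum(z for (z, w) in matches)
    return {(x, y, s) for (x, y), s in groups.items()}

print(aggregate_query(R, Q))  # {(1, 2, 11), (2, 3, 2)}
```

Comparing the resulting scalar against a constant, as in sum(z) > 5, would give the aggregate comparison query Q of the text.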
Definition 2. For an instance D, a set of fixable attributes F, a set of key dependencies K, such that D |= K, and a set of flexible ICs IC: a fix for D wrt IC is an instance D' such that: (a) D' has the same schema and domain as D; (b) D' has the same values as D in the attributes in A \ F; (c) D' |= K; and (d) D' |= IC. A least squares fix (LS-fix) for D is a fix D' that minimizes the square distance ∆_ᾱ(D, D') over all the instances that satisfy (a)-(d).

In general we are interested in LS-fixes, but (non-necessarily minimal) fixes will be useful auxiliary instances.

Example 4. (Example 1 cont.) R = {Traffic}, A = {Time, Link, Type, Flow}, K_Traffic = {Time, Link}, F = {Type, Flow}, with weights ᾱ = (1, 10⁻⁵), resp. For the original instance D, val(K_Traffic) = {(1.1, a), (1.1, b), (1.3, b)}, t̄((1.1, a), Traffic, D) = (1.1, a, 0, 1100), etc. Fixes are D1 = {(1.1, a, 0, 1000), (1.1, b, 1, 900), (1.3, b, 1, 850)} and D2 = {(1.1, a, 1, 1100), (1.1, b, 1, 900), (1.3, b, 1, 850)}, with distances ∆_ᾱ(D, D1) = 100² × 10⁻⁵ = 10⁻¹ and ∆_ᾱ(D, D2) = 1² × 1 = 1, resp. Therefore, D1 is the only LS-fix.
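The distances in Example 4 can be verified directly from Definition 1. A minimal sketch (illustrative Python, not from the paper; the weights are chosen to match the example's computed distances, i.e. the Type attribute weighted 1 and the Flow attribute 10⁻⁵):

```python
# Illustrative sketch (not from the paper): the weighted square distance of
# Definition 1 over keyed relations, applied to Example 4. An instance maps a
# key (Time, Link) to its fixable attributes (Type, Flow).
WEIGHTS = {"Type": 1, "Flow": 1e-5}  # assumed weights matching Example 4

def square_distance(D, Dp, weights=WEIGHTS):
    # both instances must share the same set of key values val(K_R)
    assert D.keys() == Dp.keys()
    return sum(weights[a] * (D[k][a] - Dp[k][a]) ** 2
               for k in D for a in weights)

D  = {("1.1", "a"): {"Type": 0, "Flow": 1100},
      ("1.1", "b"): {"Type": 1, "Flow": 900},
      ("1.3", "b"): {"Type": 1, "Flow": 850}}
D1 = {**D, ("1.1", "a"): {"Type": 0, "Flow": 1000}}  # change the Flow
D2 = {**D, ("1.1", "a"): {"Type": 1, "Flow": 1100}}  # change the Type

print(square_distance(D, D1))  # ~0.1, the LS-fix
print(square_distance(D, D2))  # 1.0
```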
The coefficients α_A can be chosen in many different ways depending on factors like relative relevance of attributes, actual distribution of data, measurement scales, etc. In the rest of this paper we will assume, for simplification, that α_A = 1 for all A ∈ F, and ∆_ᾱ(D, D') will be simply denoted by ∆(D, D').

Example 5. The database D has relations Client(ID, A, M), with key ID, attributes A for age and M for amount of money; and Buy(ID, I, P), with key {ID, I}, I for items, and P for prices. We have denials IC1: ∀ID, I, P, A, M ¬(Buy(ID, I, P), Client(ID, A, M), A < 18, P > 25) and IC2: ∀ID, A, M ¬(Client(ID, A, M), A < 18, M > 50), requiring that people younger than 18 cannot spend more than 25 on one item nor spend more than 50 in the store. We added an extra column in the tables with a label for each tuple.

D:   Client  ID  A   M        |  Buy  ID  I    P
             1   15  52   t1  |       1   CD   27  t4
             2   16  51   t2  |       1   DVD  26  t5
             3   60  900  t3  |       3   DVD  40  t6

IC1 is violated by {t1, t4} and {t1, t5}; and IC2 by {t1} and {t2}. We have two LS-fixes (the modified version of tuple t1 is t1', etc.), with distances ∆(D, D') = 2² + 1² + 2² + 1² = 10 and ∆(D, D'') = 3² + 1² = 10:

D':  Client' ID  A   M         |  Buy'  ID  I    P
             1   15  50   t1'  |        1   CD   25  t4'
             2   16  50   t2'  |        1   DVD  25  t5'
             3   60  900  t3   |        3   DVD  40  t6

D'': Client'' ID  A   M         |  Buy''  ID  I    P
              1   18  52   t1'' |         1   CD   27  t4
              2   16  50   t2'' |         1   DVD  26  t5
              3   60  900  t3   |         3   DVD  40  t6

We can see that a global fix may not be the result of applying "local" minimal fixes to tuples.
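The violation sets in Example 5 can be enumerated mechanically. A sketch (illustrative Python, not from the paper; tuple labels t1 to t6 as in the example):

```python
# Illustrative sketch (not from the paper): enumerating the sets of tuples
# that jointly violate the two denials of Example 5. Tuples carry labels.
client = {"t1": (1, 15, 52), "t2": (2, 16, 51), "t3": (3, 60, 900)}
buy    = {"t4": (1, "CD", 27), "t5": (1, "DVD", 26), "t6": (3, "DVD", 40)}

# IC1: no client younger than 18 pays more than 25 for a single item
ic1_violations = {frozenset({lc, lb})
                  for lc, (cid, a, m) in client.items()
                  for lb, (bid, i, p) in buy.items()
                  if cid == bid and a < 18 and p > 25}

# IC2: no client younger than 18 holds more than 50 in total
ic2_violations = {frozenset({lc})
                  for lc, (cid, a, m) in client.items()
                  if a < 18 and m > 50}

print(ic1_violations)  # {t1,t4} and {t1,t5}
print(ic2_violations)  # {t1} and {t2}
```

Each printed set is minimal: removing any tuple from it satisfies the corresponding denial again, matching the notion of violation set formalized later in Definition 6.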
The built-in atoms in linear denials determine a solution space for fixes as an intersection of semi-spaces, and LS-fixes can be found at its "borders" (c.f. previous example and Proposition A.1 in Appendix A.1). It is easy to construct examples with an exponential number of fixes.

For the kind of fixes and ICs we are considering, it is possible that no fix exists, in contrast to [2,3], where, if the set of ICs is consistent as a set of logical sentences, a fix for a database always exists.

Example 6. R(X, Y) has key X and fixable Y. IC1 = {∀X1 X2 Y ¬(R(X1, Y), R(X2, Y), X1 = 1, X2 = 2), ∀X1 X2 Y ¬(R(X1, Y), R(X2, Y), X1 = 1, X2 = 3), ∀X1 X2 Y ¬(R(X1, Y), R(X2, Y), X1 = 2, X2 = 3), ∀XY ¬(R(X, Y), Y > 3), ∀XY ¬(R(X, Y), Y < 2)} is consistent. The first three ICs force Y to be different in
In databases with experimental samples,we canfix certain erroneous quantities as specified by linear ICs.In these cases,thefixes are relevant objects to com-pute explicitly,which contrasts with CQA[2],where the main motivation for introducing repairs is to formally characterize the notion of a consistent answer to a query as an answer that remains under all possible repairs.In consequence, we now consider some decision problems related to existence and verification of LS-fixes,and to CQA under different semantics.Definition3.For an instance D and a set IC of ICs:(a)Fix(D,IC):={D |D is an LS-fix of D wrt IC},thefix checking problem.(b)Fix(IC):={(D,D )|D ∈Fix(D,IC)}.(c)NE(IC):={D|Fix(D,IC)=∅},for non-empty set offixes,i.e.the problem of checking existence of LS-fixes.(d)NE:={(D,IC)|Fix(D,IC)=∅}.(e)DFP(IC):={(D,k)|there is D ∈F ix(D,IC)with∆(D,D )≤k},the databasefix problem,i.e.the problem of checking existence of LS-fixes within a given positive distance k.(f)DFOP(IC)is the optimization problem offinding the minimum distance from an LS-fix wrt IC to a given input instance. Definition4.Let D be a database,IC a set ICs,and Q a conjunctive query2.(a)A ground tuple¯t is a consistent answer to Q(¯x)under the:(a1)skeptical semantics if for every D ∈Fix(D,IC),D |=Q(¯t).(a2)brave semantics if there exists D ∈Fix(D,IC)with D |=Q(¯t).(a3)majority semantics if|{D |D ∈Fix(D,IC)and D |=Q(¯t)}|>|{D |D ∈Fix(D,IC)and D |=Q(¯t)}|.(b)That¯t is a consistent answer to Q in D under semantics S is denoted by D|=S Q[¯t].If Q is ground and D|=S Q,we say that yes is a consistent answer,meaning that Q is true in thefixes of D according to semantics S.CA(Q,D,IC,S)is the set of consistent answers to Q in D wrt IC under semantics S.For ground Q,if CA(Q,D,IC,S)={yes},CA(Q,D,IC,S):={no}.(c)CQA(Q,IC,S):={(D,¯t)|¯t∈CA(Q,D,IC,S)}is the decision problem of consistent query answering,of checking consistent answers. 
Proposition2.NE(IC)can be reduced in polynomial time to the complements of CQA(False,IC,Skeptical)and CQA(True,IC,Majority),where False,True are ground queries that are always false,resp.true. 2Whenever we say just“conjunctive query”we understand it is a non aggregate query.In Proposition2,it suffices for queries False,True to be false,resp.true,in all instances that share the key values with the input database.Then,they can be represented by∃Y R(¯c,Y),where¯c are not(for False),or are(for True)key values in the original instance.Theorem1.Under extended linear denials and complex,filtering,multi-attri-bute,single-relation,aggregation constraints,the problems NE of existence of LS-fixes,and CQA under skeptical or majority semantics are undecidable. The result about NE can be proved by reduction from the undecidable Hilbert’s problem on solvability of diophantine equations.For CQA,apply Proposition 2.Here we have the original database and the set of ICs as input parameters. In the following we will be interested in data complexity,when only the input database varies and the set of ICs isfixed[1].Theorem2.For afixed set IC of linear denials:(a)Deciding if for an instance D there is an instance D (with the same key values as D)that satisfies IC with ∆(D,D )≤k,with positive integer k that is part of the input,is in NP.(b) DFP(IC)is NP-complete.(c.f.Definition3(e)) By Proposition1,there is afix for D wrt IC at a distance≤k iffthere is an LS-fix at a distance≤k.Part(b)of Theorem2follows from part(a)and a reduction of Vertex Cover to DFP(IC0),for afixed set of denials IC0.By Theorem2(a), if there is afix at a distance≤k,the minimum distance to D for afix can be found by binary search in log(k)steps.Actually,if an LS-fix exists,its square distance to D is polynomially bounded by the size of D(c.f.proof of Theorem 3).Since D and afix have the same number of tuples,only the size of their values in afix matter,and they are constrained by afixed set of linear denials and the 
condition of minimality.Theorem3.For afixed set IC of extended linear denials:(a)The problem NE(IC)of deciding if an instance has an LS-fix wrt IC is NP-complete,and(b) CQA under the skeptical and the majority semantics is coNP-hard. For hardness in(a),(b)in Theorem3,linear denials are good enough.Member-ship in(a)can be obtained for anyfixedfinite set of extended denials.Part(b) follows from part(a).The latter uses a reduction from3-Colorability. Theorem4.For afixed set IC of extended linear denials:(a)The problem Fix(IC)of checking if an instance is an LS-fix is coNP-complete,and(b)CQA under skeptical semantics is inΠP2,and,for ground atomic queries,∆P2-hard. Part(a)uses3SAT.Hardness in(b)is obtained by reduction from a∆P2-complete decision version of the problem of searching for the lexicographically Maximum 3-Satisfying Assignment(M3SA):Decide if the last variable takes value1in it [16,Theo.3.4].Linear denials suffice.Now,by reduction from the Vertex Cover Problem,we obtainTheorem5.For aggregate comparison queries using sum,CQA under linear denials and brave semantics is coNP-hard.5Approximation for the Database Fix ProblemWe consider the problem offinding a good approximation for the general opti-mization problem DFOP(IC).Proposition3.For afixed set of linear denials IC,DFOP(IC)is MAXSNP-hard. This result is obtained by establishing an L-reduction to DFOP(IC)from the MAXSNP-complete[21,20]B-Minimum Vertex Cover Problem,i.e.the vertex cover minimization problem for graphs of bounded degree[15,Chapter10].As an immediate consequence,we obtain that DFOP(IC)cannot be uniformly ap-proximated within arbitrarily small constant factors[20].Corollary1.Unless P=NP,there is no Polynomial Time Approximation Schema for DFOP. 
This negative result does not preclude the possibility offinding an efficient al-gorithm for approximation within a constant factor for DFOP.Actually,in the following we do this for a restricted but still useful class of denial constraints.5.1Local denialsDefinition5.A set of linear denials IC is local if:(a)Attributes participating in equality atoms between attributes or in joins are all rigid;(b)There is a built-in atom with afixable attribute in each element of IC;(c)No attribute A appears in IC both in comparisons of the form A<c1and A>c2.3 In Example5,IC is local.In Example6,IC1is not local.Local constraints have the property that by doing localfixes,no new inconsistencies are generated,and there is always an LS-fix wrt to them(c.f.Proposition A.2in Appendix A.1). Locality is a sufficient,but not necessary condition for existence of LS-fixes as can be seen from the database{P(a,2)},with thefirst attribute as a key and non-local denials¬(P(x,y),y<3),¬(P(x,y),y>5),that has the LS-fix {P(a,3)}.Proposition4.For the class of local denials,DFP is NP-complete,and DFOP is MAXSNP-hard. This proposition tells us that the problem offinding good approximations in the case of local denials is still relevant.Definition6.A set I of database tuples from D is a violation set for ic∈IC if I|=ic,and for every I I,I |=ic.I(D,ic,t)denotes the set of violation sets for ic that contain tuple t.A violation set I for ic is a minimal set of tuples that simultaneously participate in the violation of ic.Definition7.Given an instance D and ICs IC,a localfix for t∈D,is a tuple t with:(a)the same values for the rigid attributes as t;(b)S(t,t ):= {I|there is ic∈IC,I∈I(D,ic,t)and((I {t})∪{t })|=ic}=∅;and (c)there is no tuple t that simultaneously satisfies(a),S(t,t )=S(t,t ),and ∆({t},{t })≤∆({t},{t }),where∆denotes quadratic distance. 
S(t,t )contains the violation sets that include t and are solved by replacing t for t.A localfix t solves some of the violations due to t and minimizes the distance to t.3To check condition(c),x≤c,x≥c,x=c have to be expressed using<,>,e.g. x≤c by x<c+1.5.2Database fix problem as a set cover problemFor a fixed set IC of local denials,we can solve an instance of DFOP by trans-forming it into an instance of the Minimum Weighted Set Cover Optimization Problem (MWSCP ).This problem is MAXSNP -hard [19,20],and its general ap-proximation algorithms are within a logarithmic factor [19,8].By concentrating on local denials,we will be able to generate a version of the MWSCP that can be approximated within a constant factor (c.f.Section 5.3).Definition 8.For a database D and a set IC of local denials,G (D,IC )=(T,H )denotes the conflict hypergraph for D wrt IC [7],which has in the set T of vertices the database tuples,and in the set H of hyperedges,the violation sets for elements ic ∈IC . Hyperedges in H can be labelled with the corresponding ic ,so that different hyperedges may contain the same tuples.Now we build an instance of MWSCP .Definition 9.For a database D and a set IC of local denials,the instance (U,S ,w )for the MWSCP ,where U is the underlying set,S is the set collection,and w is the weight function,is given by:(a)U :=H ,the set of hyperedges of G (D,IC ).(b)S contains the S (t,t ),where t is a local fix for a tuple t ∈D .(c)w (S (t,t )):=∆({t },{t }). 
It can be proved that the S (t,t )in this construction are non empty,and that S covers U (c.f.Proposition A.2in Appendix A.1).If for the instance (U,S ,w )of MWSCP we find a minimum weight cover C ,we could think of constructing a fix by replacing each inconsistent tuple t ∈D by a local fix t with S (t,t )∈C .The problem is that there might be more than one t and the key dependencies would not be respected.Fortunately,this problem can be circumvented.Definition 10.Let C be a cover for instance (U,S ,w )of the MWSCP associ-ated to D,IC .(a)C is obtained from C as follows:For each tuple t with local fixes t 1,...,t n ,n >1,such that S (t,t i )∈C ,replace in C all the S (t,t i )by a single S (t,t ),where t is such that S (t,t )= n i =1S (t,t i ).(b)D (C )is thedatabase instance obtained from D by replacing t by t if S (t,t )∈C .It holds (c.f.Proposition A.3in Appendix A.1)that such an S (t,t )∈S exists in part (a)of Definition 10.Notice that there,tuple t could have other S (t,t )outside C .Now we can show that the reduction to MWSCP keeps the value of the objective function.Proposition 5.If C is an optimal cover for instance (U,S ,w )of the MWSCP associated to D,IC ,then D (C )is an LS-fix of D wrt IC ,and ∆(D,D (C ))=w (C )=w (C ∗). Proposition 6.For every LS-fix D of D wrt a set of local denials IC ,there exists an optimal cover C for the associated instance (U,S ,w )of the MW SCP ,such that D =D (C ). Proposition 7.The transformation of DFOP into MWSCP ,and the construc-tion of database instance D (C )from a cover C for (U,S ,w )can be done in polynomial time in the size of D .。