Construction of a Temporal Coherency Preserving Dynamic Data Dissemination Network


Principles of Construction

Principles of construction are the basic guidelines that govern the design and construction of buildings and other structures. They are essential to ensuring that structures are safe, functional, and durable. The key principles are:

Safety: The most important principle of construction is ensuring the safety of the structure. The design should take into account the loads and forces acting on the structure, which it must be able to withstand without collapsing or suffering damage. Proper calculations and reinforcement must be provided to ensure stability and prevent collapse.

Durability: The structure should be designed to last for a reasonable period of time, taking into account factors such as the materials used, exposure to the elements, and maintenance requirements. Durability is achieved through proper material selection, sound design, and effective construction methods.

Functionality: The structure should meet the purpose for which it was designed. The design should consider the daily use of the structure, the number of people it will serve, and any specific requirements that must be met. Functionality also includes accessibility for people with disabilities and ease of use for everyone.

Cost-effectiveness: Construction should be cost-effective, balancing quality with cost. The design and materials should be chosen to minimize cost while still meeting the requirements of safety, durability, and functionality.

Environmental sustainability: The construction process should minimize its impact on the environment, using sustainable materials and construction methods that conserve natural resources and reduce waste. The structure itself should also be designed to conserve energy and minimize environmental impact, for example through energy-efficient design and green building techniques.

These principles must be considered during all stages of the construction process, from design through construction to maintenance, to ensure that the final product is safe, functional, durable, cost-effective, and environmentally sustainable.

Architectural Practices

Architectural Practices: Creating Functional and Aesthetic Spaces

Introduction: Architectural practices play a crucial role in shaping the world we live in. From soaring skyscrapers to humble residential homes, architecture combines functionality with aesthetic appeal to create spaces that enhance our lives. This article takes a step-by-step approach to the process and principles behind architectural practice.

1. Conceptualizing the design: The first step in any architectural practice is conceptualizing the design. Architects draw inspiration from various sources, including natural elements, cultural influences, and client requirements. They analyze the site's context, weather conditions, and social aspects to develop a design concept that addresses these factors. This conceptualization phase is crucial, as it sets the foundation for the rest of the architectural process.

2. Initial design development: Once the design concept is established, architects translate it into drawings and plans. Using tools and techniques such as computer-aided design (CAD) software, they create detailed floor plans, elevations, sections, and three-dimensional models. These drawings help visualize the design and ensure that it meets functional requirements and regulations.

3. Functional analysis: Functionality is a core aspect of architectural practice. Architects analyze the spatial requirements of the building, considering factors such as circulation, ergonomics, and accessibility. They also study the programmatic needs, determining the relationships between different spaces and their functions. This functional analysis ensures that the design optimizes space usage and accommodates the intended activities.

4. Material selection and sustainability: Sustainability has become a significant consideration in modern architectural practice. Architects carefully select materials that are environmentally friendly, energy-efficient, and long-lasting. They explore sustainable building techniques, such as green roofs, solar panels, and rainwater harvesting systems, to minimize the building's impact on the environment. By embracing sustainable practices, architects contribute to a more eco-friendly future.

5. Structural design and engineering: Architectural practices incorporate structural design and engineering principles to ensure the safety and stability of the building. Architects collaborate with structural engineers to design the structural system, considering factors such as load-bearing capacity, seismic resistance, and wind loads. They also account for material properties, such as strength and durability, to create a solid and structurally sound building.

6. Construction documentation and specifications: Architects prepare detailed construction documentation, including plans, sections, and specifications, to guide builders during construction. These documents set out construction details, material specifications, and quality standards. Architects also collaborate with contractors and suppliers to address queries or concerns during construction. Clear, comprehensive construction documentation ensures the design intent is maintained on site.

7. Project management and coordination: Architects oversee the project's progress, ensuring that it adheres to the design intent and meets the client's expectations. They coordinate with stakeholders, including builders, subcontractors, and suppliers, to ensure smooth execution. Effective project management keeps the project on schedule, within budget, and in compliance with all necessary standards.

8. Interior design and finishes: Interior design is an essential aspect of architectural practice. Architects collaborate with interior designers to create cohesive interior spaces that complement the overall design. They select finishes, such as flooring, wall treatments, and lighting, that enhance the aesthetics of the space, and ensure that the interior design aligns with the functional requirements and design intent.

9. Post-construction evaluation: Once the building is completed, architects conduct post-construction evaluations to assess the performance of the design and identify areas for improvement. They analyze factors such as energy efficiency, thermal comfort, and user satisfaction. Feedback from the building's occupants helps architects refine future designs and continuously improve their practice.

Conclusion: Architectural practice is a step-by-step process that combines creativity, functionality, and sustainability to create inspiring spaces. From conceptualizing the design to post-construction evaluation, architects work diligently to bring their vision to life. By considering the site context, incorporating sustainable practices, and ensuring structural integrity, architects shape our built environment in a way that enhances our lives and fosters a sustainable future.

The Forest, Zhengzhou Headquarters (森林·郑州中心)

Designed by Neri&Hu (如恩), the Zhengzhou Headquarters occupies a nearly block-sized, yet-to-be-developed site in the city's new center. Arriving at this tabula rasa project site in the rapidly expanding city of Zhengzhou, the confrontation between heritage and modernity is striking. Historically, the Central Plain area, in which the dynastic city first gained prominence by the end of the sixth century CE, was the cradle of ancient Chinese civilization. Zhengzhou today, as the fast-rising political and economic center of Henan Province, is in relentless pursuit of reinventing itself in the image of the future. Ancient monuments in the region, such as the city walls and fortress towers, often reveal a close relationship to the land in their built forms and construction materials. The current city plan charts new urban centers and also gestures towards the sky with clusters of glass towers, whose lightness, transparency and self-reflection proudly declare their existence as signs of modernity. For the commission to design a city-block-sized, multi-use project in the midst of a Zhengzhou in search of a new identity, we envision a forward-looking edifice that can coexist in harmony with layers of history, as well as being a vibrant presence in which new collective memories can be created.

The project is conceived as a forest, which allows for a sense of wholeness among its heterogeneous parts. Consisting of three separate buildings, the project creates outdoor and indoor public amenities to provide a lively social environment for the occupants throughout the expansive grounds. The gently curved roof profile softens the severity of the new district's zoning code, and hints at a contemporary interpretation of the eave forms derived from the watchtowers of the historic city fortress.

The building's sense of permanence is imparted by the aggregation of structural elements. Comprising over 1,600 load-bearing arch walls, the project is an open system receptive to a range of indoor and outdoor functions. The individual bays, measuring 9 × 9 m (30 × 30 ft), can be flexibly configured according to a range of requirements. In the office component of the project, for example, the open floor plans provide large expanses to accommodate workstations, whereas private offices, meeting rooms and storage spaces occupy half or a quarter of a bay. In public areas, such as the arrival lobby and sky-lit atriums, the structural bays are stacked vertically to celebrate collective gatherings.

Extending this rhythmic and adaptable logic outward, the façade is likewise considered as a space where the realms of interior and exterior overlap, rather than as a mere skin. The southern face of the building is punctured by verdant terraces, either 4.5 m (15 ft) or 9 m (30 ft) deep, specifically designed for the use of office workers, as private residences, or for public functions. Intermittent glazing setbacks allow the hanging gardens to break up the scale of an otherwise brutal street wall, while also lending the façade a sense of openness.

Working in tandem with the massing articulation, the ground is sculpted to provide a varied landscape of seating, planters, reflecting pools and gardens. The sectional qualities and localized detailing suggest a sense of the past revealing itself. Bush-hammered stone blocks, terrazzo, vegetation and water elements hint at a weathered environment where nature and artifice become one. The history referenced here is not one of the dead and gone, nor a literal invocation of memories. Rather, we posit a particularly durable form of architecture, with the co-existence of past and present as an active, open-ended, archaeological process within which a new kind of contemporary life is possible.

Throughout the day, the play of light and shadow highlights the three-dimensional qualities of the concrete arches and the spaces within. Rigorous repetition of the structural members extends from the exterior hanging gardens, ground-level arcades and sunken courtyards to the interior grand lobbies, atriums and event spaces, creating a seamless integration of structure and space throughout. At times the ambiguity is intentionally compounded: nature is born out of artifice, and the man-made is embedded in the unrefined. The potentiality of the past resides in a utopic present, and the fleeting, visceral "now" finds a home in the ruin-like ground.

Econometrics Chinese-English Vocabulary (计量经济学中英对照词汇)

计量经济学中英对照词汇Absolute deviation, 绝对离差Absolute number, 绝对数Absolute residuals, 绝对残差Acceleration array, 加速度立体阵Acceleration in an arbitrary direction, 任意方向上的加速度Acceleration normal, 法向加速度Acceleration space dimension, 加速度空间的维数Acceleration tangential, 切向加速度Acceleration vector, 加速度向量Acceptable hypothesis, 可接受假设Accumulation, 累积Accuracy, 准确度Actual frequency, 实际频数Adaptive estimator, 自适应估计量Addition, 相加Addition theorem, 加法定理Additive Noise, 加性噪声Additivity, 可加性Adjusted rate, 调整率Adjusted value, 校正值Admissible error, 容许误差Aggregation, 聚集性Alpha factoring,α因子法Alternative hypothesis, 备择假设Among groups, 组间Amounts, 总量Analysis of correlation, 相关分析Analysis of covariance, 协方差分析Analysis Of Effects, 效应分析Analysis Of Variance, 方差分析Analysis of regression, 回归分析Analysis of time series, 时间序列分析Analysis of variance, 方差分析Angular transformation, 角转换ANOVA (analysis of variance), 方差分析ANOVA Models, 方差分析模型ANOVA table and eta, 分组计算方差分析Arcing, 弧/弧旋Arcsine transformation, 反正弦变换Area 区域图Area under the curve, 曲线面积AREG , 评估从一个时间点到下一个时间点回归相关时的误差ARIMA, 季节和非季节性单变量模型的极大似然估计Arithmetic grid paper, 算术格纸Arithmetic mean, 算术平均数Arrhenius relation, 艾恩尼斯关系Assessing fit, 拟合的评估Associative laws, 结合律Asymmetric distribution, 非对称分布Asymptotic bias, 渐近偏倚Asymptotic efficiency, 渐近效率Asymptotic variance, 渐近方差Attributable risk, 归因危险度Attribute data, 属性资料Attribution, 属性Autocorrelation, 自相关Autocorrelation of residuals, 残差的自相关Average, 平均数Average confidence interval length, 平均置信区间长度Average growth rate, 平均增长率Bar chart, 条形图Bar graph, 条形图Base period, 基期Bayes' theorem , Bayes定理Bell-shaped curve, 钟形曲线Bernoulli distribution, 伯努力分布Best-trim estimator, 最好切尾估计量Bias, 偏性Binary logistic regression, 二元逻辑斯蒂回归Binomial distribution, 二项分布Bisquare, 双平方Bivariate Correlate, 二变量相关Bivariate normal distribution, 双变量正态分布Bivariate normal population, 双变量正态总体Biweight interval, 双权区间Biweight M-estimator, 双权M估计量Block, 区组/配伍组BMDP(Biomedical computer programs), BMDP统计软件包Boxplots, 箱线图/箱尾图Breakdown bound, 崩溃界/崩溃点Canonical correlation, 典型相关Caption, 纵标目Case-control study, 病例对照研究Categorical variable, 分类变量Catenary, 悬链线Cauchy distribution, 柯西分布Cause-and-effect relationship, 因果关系Cell, 单元Censoring, 终检Center of symmetry, 对称中心Centering and scaling, 中心化和定标Central tendency, 集中趋势Central value, 中心值CHAID -χ2 Automatic Interaction Detector, 卡方自动交互检测Chance, 机遇Chance error, 随机误差Chance variable, 随机变量Characteristic equation, 特征方程Characteristic root, 特征根Characteristic vector, 特征向量Chebshev criterion of fit, 拟合的切比雪夫准则Chernoff faces, 切尔诺夫脸谱图Chi-square test, 卡方检验/χ2检验Choleskey decomposition, 乔洛斯基分解Circle chart, 圆图Class interval, 组距Class mid-value, 组中值Class upper limit, 组上限Classified variable, 分类变量Cluster analysis, 聚类分析Cluster sampling, 整群抽样Code, 代码Coded data, 编码数据Coding, 编码Coefficient of contingency, 列联系数Coefficient of determination, 决定系数Coefficient of multiple correlation, 多重相关系数Coefficient of partial correlation, 偏相关系数Coefficient of production-moment correlation, 积差相关系数Coefficient of rank correlation, 等级相关系数Coefficient of regression, 回归系数Coefficient of skewness, 偏度系数Coefficient of variation, 变异系数Cohort study, 队列研究Collinearity, 共线性Column, 列Column effect, 列效应Column factor, 列因素Combination pool, 合并Combinative table, 组合表Common factor, 共性因子Common regression coefficient, 公共回归系数Common value, 共同值Common variance, 公共方差Common variation, 公共变异Communality variance, 共性方差Comparability, 可比性Comparison of bathes, 批比较Comparison value, 比较值Compartment model, 分部模型Compassion, 伸缩Complement of an event, 补事件Complete association, 完全正相关Complete dissociation, 完全不相关Complete statistics, 完备统计量Completely randomized design, 完全随机化设计Composite event, 联合事件Composite 
events, 复合事件Concavity, 凹性Conditional expectation, 条件期望Conditional likelihood, 条件似然Conditional probability, 条件概率Conditionally linear, 依条件线性Confidence interval, 置信区间Confidence limit, 置信限Confidence lower limit, 置信下限Confidence upper limit, 置信上限Confirmatory Factor Analysis , 验证性因子分析Confirmatory research, 证实性实验研究Confounding factor, 混杂因素Conjoint, 联合分析Consistency, 相合性Consistency check, 一致性检验Consistent asymptotically normal estimate, 相合渐近正态估计Consistent estimate, 相合估计Constrained nonlinear regression, 受约束非线性回归Constraint, 约束Contaminated distribution, 污染分布Contaminated Gausssian, 污染高斯分布Contaminated normal distribution, 污染正态分布Contamination, 污染Contamination model, 污染模型Contingency table, 列联表Contour, 边界线Contribution rate, 贡献率Control, 对照, 质量控制图Controlled experiments, 对照实验Conventional depth, 常规深度Convolution, 卷积Corrected factor, 校正因子Corrected mean, 校正均值Correction coefficient, 校正系数Correctness, 正确性Correlation coefficient, 相关系数Correlation, 相关性Correlation index, 相关指数Correspondence, 对应Counting, 计数Counts, 计数/频数Covariance, 协方差Covariant, 共变Cox Regression, Cox回归Criteria for fitting, 拟合准则Criteria of least squares, 最小二乘准则Critical ratio, 临界比Critical region, 拒绝域Critical value, 临界值Cross-over design, 交叉设计Cross-section analysis, 横断面分析Cross-section survey, 横断面调查Crosstabs , 交叉表Crosstabs 列联表分析Cross-tabulation table, 复合表Cube root, 立方根Cumulative distribution function, 分布函数Cumulative probability, 累计概率Curvature, 曲率/弯曲Curvature, 曲率Curve Estimation, 曲线拟合Curve fit , 曲线拟和Curve fitting, 曲线拟合Curvilinear regression, 曲线回归Curvilinear relation, 曲线关系Cut-and-try method, 尝试法Cycle, 周期Cyclist, 周期性D test, D检验Data acquisition, 资料收集Data bank, 数据库Data capacity, 数据容量Data deficiencies, 数据缺乏Data handling, 数据处理Data manipulation, 数据处理Data processing, 数据处理Data reduction, 数据缩减Data set, 数据集Data sources, 数据来源Data transformation, 数据变换Data validity, 数据有效性Data-in, 数据输入Data-out, 数据输出Dead time, 停滞期Degree of freedom, 自由度Degree of precision, 精密度Degree of reliability, 可靠性程度Degression, 递减Density function, 密度函数Density of data points, 数据点的密度Dependent variable, 应变量/依变量/因变量Dependent variable, 因变量Depth, 深度Derivative matrix, 导数矩阵Derivative-free methods, 无导数方法Design, 设计Determinacy, 确定性Determinant, 行列式Determinant, 决定因素Deviation, 离差Deviation from average, 离均差Diagnostic plot, 诊断图Dichotomous variable, 二分变量Differential equation, 微分方程Direct standardization, 直接标准化法Direct Oblimin, 斜交旋转Discrete variable, 离散型变量DISCRIMINANT, 判断Discriminant analysis, 判别分析Discriminant coefficient, 判别系数Discriminant function, 判别值Dispersion, 散布/分散度Disproportional, 不成比例的Disproportionate sub-class numbers, 不成比例次级组含量Distribution free, 分布无关性/免分布Distribution shape, 分布形状Distribution-free method, 任意分布法Distributive laws, 分配律Disturbance, 随机扰动项Dose response curve, 剂量反应曲线Double blind method, 双盲法Double blind trial, 双盲试验Double exponential distribution, 双指数分布Double logarithmic, 双对数Downward rank, 降秩Dual-space plot, 对偶空间图DUD, 无导数方法Duncan's new multiple range method, 新复极差法/Duncan新法Error Bar, 均值相关区间图Effect, 实验效应Eigenvalue, 特征值Eigenvector, 特征向量Ellipse, 椭圆Empirical distribution, 经验分布Empirical probability, 经验概率单位Enumeration data, 计数资料Equal sun-class number, 相等次级组含量Equally likely, 等可能Equivariance, 同变性Error, 误差/错误Error of estimate, 估计误差Error type I, 第一类错误Error type II, 第二类错误Estimand, 被估量Estimated error mean squares, 估计误差均方Estimated error sum of squares, 估计误差平方和Euclidean distance, 欧式距离Event, 事件Event, 事件Exceptional data point, 异常数据点Expectation plane, 期望平面Expectation surface, 期望曲面Expected values, 期望值Experiment, 实验Experimental sampling, 试验抽样Experimental unit, 试验单位Explained variance (已说明方差)Explanatory variable, 说明变量, 解释变量Exploratory 
data analysis, 探索性数据分析Explore Summarize, 探索-摘要Exponential curve, 指数曲线Exponential growth, 指数式增长EXSMOOTH, 指数平滑方法Extended fit, 扩充拟合Extra parameter, 附加参数Extrapolation, 外推法Extreme observation, 末端观测值Extremes, 极端值/极值F distribution, F分布F test, F检验Factor, 因素/因子Factor analysis, 因子分析Factor Analysis, 因子分析Factor score, 因子得分Factorial, 阶乘Factorial design, 析因试验设计False negative, 假阴性False negative error, 假阴性错误Family of distributions, 分布族Family of estimators, 估计量族Fanning, 扇面Fatality rate, 病死率Field investigation, 现场调查Field survey, 现场调查Finite population, 有限总体Finite-sample, 有限样本First derivative, 一阶导数First principal component, 第一主成分First quartile, 第一四分位数Fisher information, 费雪信息量Fitted value, 拟合值Fitting a curve, 曲线拟合Fixed base, 定基Fluctuation, 随机起伏Forecast, 预测Four fold table, 四格表Fourth, 四分点Fraction blow, 左侧比率Fractional error, 相对误差Frequency, 频率Frequency polygon, 频数多边图Frontier point, 界限点Function relationship, 泛函关系Gamma distribution, 伽玛分布Gauss increment, 高斯增量Gaussian distribution, 高斯分布/正态分布Gauss-Newton increment, 高斯-牛顿增量General census, 全面普查Generalized least squares, 综合最小平方法GENLOG (Generalized liner models), 广义线性模型Geometric mean, 几何平均数Gini's mean difference, 基尼均差GLM (General liner models), 通用线性模型Goodness of fit, 拟和优度/配合度Gradient of determinant, 行列式的梯度Graeco-Latin square, 希腊拉丁方Grand mean, 总均值Gross errors, 重大错误Gross-error sensitivity, 大错敏感度Group averages, 分组平均Grouped data, 分组资料Guessed mean, 假定平均数Half-life, 半衰期Hampel M-estimators, 汉佩尔M估计量Happenstance, 偶然事件Harmonic mean, 调和均数Hazard function, 风险均数Hazard rate, 风险率Heading, 标目Heavy-tailed distribution, 重尾分布Hessian array, 海森立体阵Heterogeneity, 不同质Heterogeneity of variance, 方差不齐Hierarchical classification, 组内分组Hierarchical clustering method, 系统聚类法High-leverage point, 高杠杆率点High-Low, 低区域图Higher Order Interaction Effects,高阶交互作用HILOGLINEAR, 多维列联表的层次对数线性模型Hinge, 折叶点Histogram, 直方图Historical cohort study, 历史性队列研究Holes, 空洞HOMALS, 多重响应分析Homogeneity of variance, 方差齐性Homogeneity test, 齐性检验Huber M-estimators, 休伯M估计量Hyperbola, 双曲线Hypothesis testing, 假设检验Hypothetical universe, 假设总体Image factoring,, 多元回归法Impossible event, 不可能事件Independence, 独立性Independent variable, 自变量Index, 指标/指数Indirect standardization, 间接标准化法Individual, 个体Inference band, 推断带Infinite population, 无限总体Infinitely great, 无穷大Infinitely small, 无穷小Influence curve, 影响曲线Information capacity, 信息容量Initial condition, 初始条件Initial estimate, 初始估计值Initial level, 最初水平Interaction, 交互作用Interaction terms, 交互作用项Intercept, 截距Interpolation, 内插法Interquartile range, 四分位距Interval estimation, 区间估计Intervals of equal probability, 等概率区间Intrinsic curvature, 固有曲率Invariance, 不变性Inverse matrix, 逆矩阵Inverse probability, 逆概率Inverse sine transformation, 反正弦变换Iteration, 迭代Jacobian determinant, 雅可比行列式Joint distribution function, 分布函数Joint probability, 联合概率Joint probability distribution, 联合概率分布K-Means Cluster逐步聚类分析K means method, 逐步聚类法Kaplan-Meier, 评估事件的时间长度Kaplan-Merier chart, Kaplan-Merier图Kendall's rank correlation, Kendall等级相关Kinetic, 动力学Kolmogorov-Smirnove test, 柯尔莫哥洛夫-斯米尔诺夫检验Kruskal and Wallis test, Kruskal及Wallis检验/多样本的秩和检验/H检验Kurtosis, 峰度Lack of fit, 失拟Ladder of powers, 幂阶梯Lag, 滞后Large sample, 大样本Large sample test, 大样本检验Latin square, 拉丁方Latin square design, 拉丁方设计Leakage, 泄漏Least favorable configuration, 最不利构形Least favorable distribution, 最不利分布Least significant difference, 最小显着差法Least square method, 最小二乘法Least Squared Criterion,最小二乘方准则Least-absolute-residuals estimates, 最小绝对残差估计Least-absolute-residuals fit, 最小绝对残差拟合Least-absolute-residuals line, 最小绝对残差线Legend, 图例L-estimator, L估计量L-estimator of location, 位置L估计量L-estimator of scale, 尺度L估计量Level, 水平Leveage 
Correction,杠杆率校正Life expectance, 预期期望寿命Life table, 寿命表Life table method, 生命表法Light-tailed distribution, 轻尾分布Likelihood function, 似然函数Likelihood ratio, 似然比line graph, 线图Linear correlation, 直线相关Linear equation, 线性方程Linear programming, 线性规划Linear regression, 直线回归Linear Regression, 线性回归Linear trend, 线性趋势Loading, 载荷Location and scale equivariance, 位置尺度同变性Location equivariance, 位置同变性Location invariance, 位置不变性Location scale family, 位置尺度族Log rank test, 时序检验Logarithmic curve, 对数曲线Logarithmic normal distribution, 对数正态分布Logarithmic scale, 对数尺度Logarithmic transformation, 对数变换Logic check, 逻辑检查Logistic distribution, 逻辑斯特分布Logit transformation, Logit转换LOGLINEAR, 多维列联表通用模型Lognormal distribution, 对数正态分布Lost function, 损失函数Low correlation, 低度相关Lower limit, 下限Lowest-attained variance, 最小可达方差LSD, 最小显着差法的简称Lurking variable, 潜在变量Main effect, 主效应Major heading, 主辞标目Marginal density function, 边缘密度函数Marginal probability, 边缘概率Marginal probability distribution, 边缘概率分布Matched data, 配对资料Matched distribution, 匹配过分布Matching of distribution, 分布的匹配Matching of transformation, 变换的匹配Mathematical expectation, 数学期望Mathematical model, 数学模型Maximum L-estimator, 极大极小L 估计量Maximum likelihood method, 最大似然法Mean, 均数Mean squares between groups, 组间均方Mean squares within group, 组内均方Means (Compare means), 均值-均值比较Median, 中位数Median effective dose, 半数效量Median lethal dose, 半数致死量Median polish, 中位数平滑Median test, 中位数检验Minimal sufficient statistic, 最小充分统计量Minimum distance estimation, 最小距离估计Minimum effective dose, 最小有效量Minimum lethal dose, 最小致死量Minimum variance estimator, 最小方差估计量MINITAB, 统计软件包Minor heading, 宾词标目Missing data, 缺失值Model specification, 模型的确定Modeling Statistics , 模型统计Models for outliers, 离群值模型Modifying the model, 模型的修正Modulus of continuity, 连续性模Morbidity, 发病率Most favorable configuration, 最有利构形MSC(多元散射校正)Multidimensional Scaling (ASCAL), 多维尺度/多维标度Multinomial Logistic Regression , 多项逻辑斯蒂回归Multiple comparison, 多重比较Multiple correlation , 复相关Multiple covariance, 多元协方差Multiple linear regression, 多元线性回归Multiple response , 多重选项Multiple solutions, 多解Multiplication theorem, 乘法定理Multiresponse, 多元响应Multi-stage sampling, 多阶段抽样Multivariate T distribution, 多元T分布Mutual exclusive, 互不相容Mutual independence, 互相独立Natural boundary, 自然边界Natural dead, 自然死亡Natural zero, 自然零Negative correlation, 负相关Negative linear correlation, 负线性相关Negatively skewed, 负偏Newman-Keuls method, q检验NK method, q检验No statistical significance, 无统计意义Nominal variable, 名义变量Nonconstancy of variability, 变异的非定常性Nonlinear regression, 非线性相关Nonparametric statistics, 非参数统计Nonparametric test, 非参数检验Nonparametric tests, 非参数检验Normal deviate, 正态离差Normal distribution, 正态分布Normal equation, 正规方程组Normal P-P, 正态概率分布图Normal Q-Q, 正态概率单位分布图Normal ranges, 正常范围Normal value, 正常值Normalization 归一化Nuisance parameter, 多余参数/讨厌参数Null hypothesis, 无效假设Numerical variable, 数值变量Objective function, 目标函数Observation unit, 观察单位Observed value, 观察值One sided test, 单侧检验One-way analysis of variance, 单因素方差分析Oneway ANOVA , 单因素方差分析Open sequential trial, 开放型序贯设计Optrim, 优切尾Optrim efficiency, 优切尾效率Order statistics, 顺序统计量Ordered categories, 有序分类Ordinal logistic regression , 序数逻辑斯蒂回归Ordinal variable, 有序变量Orthogonal basis, 正交基Orthogonal design, 正交试验设计Orthogonality conditions, 正交条件ORTHOPLAN, 正交设计Outlier cutoffs, 离群值截断点Outliers, 极端值OVERALS , 多组变量的非线性正规相关Overshoot, 迭代过度Paired design, 配对设计Paired sample, 配对样本Pairwise slopes, 成对斜率Parabola, 抛物线Parallel tests, 平行试验Parameter, 参数Parametric statistics, 参数统计Parametric test, 参数检验Pareto, 直条构成线图(又称佩尔托图)Partial correlation, 偏相关Partial regression, 偏回归Partial sorting, 偏排序Partials residuals, 偏残差Pattern, 
模式PCA(主成分分析)Pearson curves, 皮尔逊曲线Peeling, 退层Percent bar graph, 百分条形图Percentage, 百分比Percentile, 百分位数Percentile curves, 百分位曲线Periodicity, 周期性Permutation, 排列P-estimator, P估计量Pie graph, 构成图,饼图Pitman estimator, 皮特曼估计量Pivot, 枢轴量Planar, 平坦Planar assumption, 平面的假设PLANCARDS, 生成试验的计划卡PLS(偏最小二乘法)Point estimation, 点估计Poisson distribution, 泊松分布Polishing, 平滑Polled standard deviation, 合并标准差Polled variance, 合并方差Polygon, 多边图Polynomial, 多项式Polynomial curve, 多项式曲线Population, 总体Population attributable risk, 人群归因危险度Positive correlation, 正相关Positively skewed, 正偏Posterior distribution, 后验分布Power of a test, 检验效能Precision, 精密度Predicted value, 预测值Preliminary analysis, 预备性分析Principal axis factoring,主轴因子法Principal component analysis, 主成分分析Prior distribution, 先验分布Prior probability, 先验概率Probabilistic model, 概率模型probability, 概率Probability density, 概率密度Product moment, 乘积矩/协方差Profile trace, 截面迹图Proportion, 比/构成比Proportion allocation in stratified random sampling, 按比例分层随机抽样Proportionate, 成比例Proportionate sub-class numbers, 成比例次级组含量Prospective study, 前瞻性调查Proximities, 亲近性Pseudo F test, 近似F检验Pseudo model, 近似模型Pseudosigma, 伪标准差Purposive sampling, 有目的抽样QR decomposition, QR分解Quadratic approximation, 二次近似Qualitative classification, 属性分类Qualitative method, 定性方法Quantile-quantile plot, 分位数-分位数图/Q-Q图Quantitative analysis, 定量分析Quartile, 四分位数Quick Cluster, 快速聚类Radix sort, 基数排序Random allocation, 随机化分组Random blocks design, 随机区组设计Random event, 随机事件Randomization, 随机化Range, 极差/全距Rank correlation, 等级相关Rank sum test, 秩和检验Rank test, 秩检验Ranked data, 等级资料Rate, 比率Ratio, 比例Raw data, 原始资料Raw residual, 原始残差Rayleigh's test, 雷氏检验Rayleigh's Z, 雷氏Z值Reciprocal, 倒数Reciprocal transformation, 倒数变换Recording, 记录Redescending estimators, 回降估计量Reducing dimensions, 降维Re-expression, 重新表达Reference set, 标准组Region of acceptance, 接受域Regression coefficient, 回归系数Regression sum of square, 回归平方和Rejection point, 拒绝点Relative dispersion, 相对离散度Relative number, 相对数Reliability, 可靠性Reparametrization, 重新设置参数Replication, 重复Report Summaries, 报告摘要Residual sum of square, 剩余平方和residual variance (剩余方差)Resistance, 耐抗性Resistant line, 耐抗线Resistant technique, 耐抗技术R-estimator of location, 位置R估计量R-estimator of scale, 尺度R估计量Retrospective study, 回顾性调查Ridge trace, 岭迹Ridit analysis, Ridit分析Rotation, 旋转Rounding, 舍入Row, 行Row effects, 行效应Row factor, 行因素RXC table, RXC表Sample, 样本Sample regression coefficient, 样本回归系数Sample size, 样本量Sample standard deviation, 样本标准差Sampling error, 抽样误差SAS(Statistical analysis system , SAS统计软件包Scale, 尺度/量表Scatter diagram, 散点图Schematic plot, 示意图/简图Score test, 计分检验Screening, 筛检SEASON, 季节分析Second derivative, 二阶导数Second principal component, 第二主成分SEM (Structural equation modeling), 结构化方程模型Semi-logarithmic graph, 半对数图Semi-logarithmic paper, 半对数格纸Sensitivity curve, 敏感度曲线Sequential analysis, 贯序分析Sequence, 普通序列图Sequential data set, 顺序数据集Sequential design, 贯序设计Sequential method, 贯序法Sequential test, 贯序检验法Serial tests, 系列试验Short-cut method, 简捷法Sigmoid curve, S形曲线Sign function, 正负号函数Sign test, 符号检验Signed rank, 符号秩Significant Level, 显着水平Significance test, 显着性检验Significant figure, 有效数字Simple cluster sampling, 简单整群抽样Simple correlation, 简单相关Simple random sampling, 简单随机抽样Simple regression, 简单回归simple table, 简单表Sine estimator, 正弦估计量Single-valued estimate, 单值估计Singular matrix, 奇异矩阵Skewed distribution, 偏斜分布Skewness, 偏度Slash distribution, 斜线分布Slope, 斜率Smirnov test, 斯米尔诺夫检验Source of variation, 变异来源Spearman rank correlation, 斯皮尔曼等级相关Specific factor, 特殊因子Specific factor variance, 特殊因子方差Spectra , 频谱Spherical distribution, 球型正态分布Spread, 展布SPSS(Statistical package for the social science), 
SPSS统计软件包Spurious correlation, 假性相关Square root transformation, 平方根变换Stabilizing variance, 稳定方差Standard deviation, 标准差Standard error, 标准误Standard error of difference, 差别的标准误Standard error of estimate, 标准估计误差Standard error of rate, 率的标准误Standard normal distribution, 标准正态分布Standardization, 标准化Starting value, 起始值Statistic, 统计量Statistical control, 统计控制Statistical graph, 统计图Statistical inference, 统计推断Statistical table, 统计表Steepest descent, 最速下降法Stem and leaf display, 茎叶图Step factor, 步长因子Stepwise regression, 逐步回归Storage, 存Strata, 层(复数)Stratified sampling, 分层抽样Stratified sampling, 分层抽样Strength, 强度Stringency, 严密性Structural relationship, 结构关系Studentized residual, 学生化残差/t化残差Sub-class numbers, 次级组含量Subdividing, 分割Sufficient statistic, 充分统计量Sum of products, 积和Sum of squares, 离差平方和Sum of squares about regression, 回归平方和Sum of squares between groups, 组间平方和Sum of squares of partial regression, 偏回归平方和Sure event, 必然事件Survey, 调查Survival, 生存分析Survival rate, 生存率Suspended root gram, 悬吊根图Symmetry, 对称Systematic error, 系统误差Systematic sampling, 系统抽样Tags, 标签Tail area, 尾部面积Tail length, 尾长Tail weight, 尾重Tangent line, 切线Target distribution, 目标分布Taylor series, 泰勒级数Test(检验)Test of linearity, 线性检验Tendency of dispersion, 离散趋势Testing of hypotheses, 假设检验Theoretical frequency, 理论频数Time series, 时间序列Tolerance interval, 容忍区间Tolerance lower limit, 容忍下限Tolerance upper limit, 容忍上限Torsion, 扰率Total sum of square, 总平方和Total variation, 总变异Transformation, 转换Treatment, 处理Trend, 趋势Trend of percentage, 百分比趋势Trial, 试验Trial and error method, 试错法Tuning constant, 细调常数Two sided test, 双向检验Two-stage least squares, 二阶最小平方Two-stage sampling, 二阶段抽样Two-tailed test, 双侧检验Two-way analysis of variance, 双因素方差分析Two-way table, 双向表Type I error, 一类错误/α错误Type II error, 二类错误/β错误UMVU, 方差一致最小无偏估计简称Unbiased estimate, 无偏估计Unconstrained nonlinear regression , 无约束非线性回归Unequal subclass number, 不等次级组含量Ungrouped data, 不分组资料Uniform coordinate, 均匀坐标Uniform distribution, 均匀分布Uniformly minimum variance unbiased estimate, 方差一致最小无偏估计Unit, 单元Unordered categories, 无序分类Unweighted least squares, 未加权最小平方法Upper limit, 上限Upward rank, 升秩Vague concept, 模糊概念Validity, 有效性VARCOMP (Variance component estimation), 方差元素估计Variability, 变异性Variable, 变量Variance, 方差Variation, 变异Varimax orthogonal rotation, 方差最大正交旋转Volume of distribution, 容积W test, W检验Weibull distribution, 威布尔分布Weight, 权数Weighted Chi-square test, 加权卡方检验/Cochran检验Weighted linear regression method, 加权直线回归Weighted mean, 加权平均数Weighted mean square, 加权平均方差Weighted sum of square, 加权平方和Weighting coefficient, 权重系数Weighting method, 加权法W-estimation, W估计量W-estimation of location, 位置W估计量Width, 宽度Wilcoxon paired test, 威斯康星配对法/配对符号秩和检验Wild point, 野点/狂点Wild value, 野值/狂值Winsorized mean, 缩尾均值Withdraw, 失访Youden's index, 尤登指数Z test, Z检验Zero correlation, 零相关Z-transformation, Z变换。

NATM Literature Translation (新奥法英文文献翻译)

The NATM Design Principle in Tunnel Construction and the Main Construction Techniques

I. The NATM Design Principle

1. The two major theories of tunnel design and construction, and their development

Since the beginning of the 20th century, humanity's demand for underground space has grown steadily, and the study of underground works has therefore developed rapidly. A large body of underground engineering practice has led to the general recognition that, for tunnels and underground caverns, the core problems all come down to two key processes: excavation and support. How should excavation be carried out so as to favor the stability of the cavern and facilitate support? And how should support be provided so as to ensure the stability of the cavern most effectively and facilitate excavation? These two questions promote and constrain each other, and they are the focus of tunnelling and underground engineering.

Around this core issue, and through practice and research, different theoretical systems were gradually established in different periods. Each system runs from engineering concepts, through mechanics and engineering measures, to construction methods and technology, and each attempts to resolve (or is still studying how to resolve) a series of engineering problems.

One theory is the traditional "loosening load theory" of the 1920s. Its core content is: stable rock is self-supporting and imposes no load; unstable rock may collapse and must be carried by a shoring structure. The load acting on the supporting structure is therefore the gravity load of the rock that may loosen and collapse within a certain range. This is the traditional theory; its representative figures are Terzaghi and Protodyakonov (as the source's transliterations suggest), and its way of thinking, similar to that used for surface structures, is still widely applied.

The other theory is the modern support theory, or "rock-bearing theory," put forward in the 1950s. Its core content is: stable rock clearly carries itself; the loss of stability of unstable rock is a process, and if the necessary help or restraint is provided during this process, the rock can still reach a stable state. Representative figures of this system include Rabcewicz, Müller, Fenner, Talobre and Kastner (reconstructed from the source's transliterations). This more modern theory breaks away from surface-structure thinking and is closer to the reality of underground works; over the past 50 years it has been widely accepted and applied, and it has shown broad prospects for development. Comparing the two: the former pays more attention to findings and the treatment of results, while the latter pays more attention to the process and its control, making full use of the rock's own bearing capacity. Because of this distinction, the two theories perform differently in both method and process. NATM is the representative method of the rock-bearing theory in tunnel engineering practice.

2. NATM

NATM is short for the New Austrian Tunneling Method. In France it is known as the convergence-confinement method, and some countries describe it as the basic principle of design and construction by dynamic observation. The NATM concept was proposed by the Austrian scholar Professor Rabcewicz in the 1950s.
It was based on tunnelling experience and on rock mechanics theory, taking the combination of rock bolts and shotcrete as the principal means of support. After much practical and theoretical study in Austria, Sweden, Italy and other countries, it was patented and officially named in the 1960s. The method subsequently developed very rapidly in underground works in Western Europe, Scandinavia, the United States and Japan, and it has become a landmark of modern tunnelling technology. Over nearly 40 years, the railway sector, combining research, design and construction, has successfully applied NATM in many tunnels according to their own characteristics, gaining experience and accumulating large amounts of data; this is the application stage. In the road sector, however, NATM accounts for only about 50% of projects. At present, NATM has become almost the standard construction method for tunnel sections in weak and broken rock, with clear technical and economic benefits.

The basic points of NATM can be summarized as follows:

(1) The rock mass is the main load-bearing unit of the tunnel structure. Construction must fully protect the rock, minimize disturbance to it, and avoid excessive damage to its strength. For this purpose, the face should not be subdivided into too many small headings, and smooth blasting, presplit blasting or mechanical tunnelling should be used.

(2) To give full play to the bearing capacity of the rock, controlled deformation of the rock should be allowed. Deformation is permitted on the one hand, so that a rock bearing ring can form; on the other hand it must be limited, so that the rock does not loosen excessively and lose, or greatly reduce, its bearing capacity. During construction, flexible support structures placed close against the rock and built in good time, such as bolting and shotcrete, should be used. Deformation of the rock mass is controlled by adjusting the strength and stiffness of the supporting structure and the time at which it takes up work (including the time of ring closure).

(3) To improve the mechanical performance of the support structure, it should be closed as soon as possible into a closed, tube-like structure. In addition, the tunnel cross-section should be as rounded as possible, avoiding stress concentrations at corners.

(4) Through dynamic observation and measurement of the rock and the support during construction, the construction procedure should be arranged rationally, and design changes and day-to-day construction management carried out accordingly.

(5) A waterproofing layer should be placed; where follow-up loads arise from bolt corrosion, deterioration of rock properties, rheology or swelling, a composite lining should be used.

(6) In principle, the lining is constructed after the deformation of the rock and the early support has basically stabilized, so that the rock and the supporting structure act as a whole and the safety of the support system is thereby improved.

The basic points above can be briefly summarized as: "minimal disturbance, early anchoring and shotcreting, in-situ measurement, tight closure."

3. Understanding the NATM principle with a spring

(1) Before excavation, a point A at the edge of the future cavern is in equilibrium under the original stresses (self-weight and tectonic stresses). It can be modelled as an elastic spring of stiffness K, compressed under a load P0 and in equilibrium.
(2) After the cavern is excavated, point A loses the restraint of the excavated face, and the original stress state must adjust. If the strength of the rock is large enough, the cavern may remain stable after a modest stress adjustment, without support. Under most of the poorer geological conditions, however, the rock will converge and deform after the stress adjustment and, if only weakly protected, may even become unstable (collapse). A support force PE must therefore be provided to prevent instability. In the spring model, the spring undergoes a deformation u and is now in equilibrium under PE.

(3) From the equation of mechanical balance: the spring was in equilibrium under P0; after a deformation u it is in equilibrium under PE. With spring stiffness K,

P0 = PE + K·u

Discussion (a short numerical sketch of this model appears below, before the auxiliary construction methods):

(1) When u = 0, P0 = PE: no rock deformation is allowed. This is a rigid support, and it is uneconomical.

(2) As u increases, PE decreases; as u decreases, PE increases. That is, when rock deformation occurs, part of the load is released (unloading). Some rock deformation should therefore be allowed in order to make full use of the rock's capacity for self-support. This is the economical support measure, with the rock itself carrying P = P0 − PE = K·u.

(3) When u = umax, collapse occurs: the loosening load appears and the opening is unsafe.

4. Key points

(1) The rock excavated around a cavern is part of the rock (soil) mass, and the rock is a trinity: it is at once the load, the bearing structure, and the building material.

(2) Tunnel construction takes place in a special, stressed architectural environment within the rock and cannot be equated with construction on the ground.

(3) Tunnel structure = rock + support system.

II. Main tunnel construction technology

1. Portal works

(1) Slope excavation. Set out by total-station survey and excavate with excavators from top to bottom, section by section, with no over-excavation and no undercutting of the slope from below. Remove topsoil, shrubs and loose rock above the cut that may slump. Where rock strata require blasting, loosening blasting should be used, finished in part by hand. During excavation, check the slope gradient; if sliding or cracking appears, flatten the slope accordingly.

(2) Slope support. After each bench is brushed to profile and the gradient has passed inspection, install systematic rock bolts in good time, hang steel mesh and weld it to the exposed bolt heads so that they form a whole, then immediately apply shotcrete in repeated passes until the design thickness is reached.

(3) Intercepting ditch. Excavate the intercepting ditch 5 m beyond the slope excavation line, mainly by machine with hand finishing; after trimming, immediately line it with M7.5 mortared rubble masonry and render the floor surface with mortar.
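Before moving on to the auxiliary works, here is the numerical sketch of the spring model promised above. It is a minimal illustration of the balance equation P0 = PE + K·u; all numeric values are assumptions chosen for the example, not values from the text.

```python
# Minimal sketch of the NATM spring analogy: P0 = PE + K*u.
# All numbers below are illustrative assumptions, not values from the text.

def required_support_pressure(p0: float, k: float, u: float) -> float:
    """Support pressure PE needed once the rock has deformed by u."""
    return p0 - k * u

p0 = 1000.0   # initial ground pressure (kPa), assumed
k = 2000.0    # "spring" stiffness of the ground (kPa per m), assumed
u_max = 0.40  # deformation at which loosening/collapse begins (m), assumed

for u in [0.0, 0.1, 0.2, 0.3, 0.35]:
    pe = required_support_pressure(p0, k, u)
    carried_by_rock = p0 - pe  # = K*u, the share the rock carries itself
    print(f"u = {u:5.2f} m  ->  PE = {pe:7.1f} kPa, rock carries {carried_by_rock:7.1f} kPa")

# u = 0 reproduces the rigid, uneconomical support (PE = P0).
# As u grows toward u_max the required support pressure falls, which is
# the economic argument for allowing controlled deformation; u must
# nevertheless stay safely below u_max, beyond which loosening begins.
```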
2. Auxiliary construction methods

(1) Long pipe roof (pipe shed)

Guide-arch construction: setting out, formwork installation, assembly of reinforcement, installation of the 127 guide pipes under survey control, and concrete pouring.

Pipe specification: hot-rolled seamless steel tube, ¢108 mm, wall thickness 6 mm, in lengths of 3 m and 6 m.
Pipe spacing: 50 cm centre to centre.
Inclination: an elevation angle of 1° (2° in actual construction), direction parallel to the tunnel centreline.
Construction tolerance: radial deviation not more than 20 cm.
Joints: within the same tunnel cross-section, no more than 50% of the pipes may be jointed, and adjacent pipe joints must be staggered by at least 1 m.

A. Pipe-roof construction method. Surveyors set out accurately and mark the centreline and the crown hole positions and elevations. The core soil is retained during excavation as a working platform for pipe-roof construction, with an excavation advance of 2.5 m. After excavation, platforms (of H-section steel) are erected manually and symmetrically on both sides, 1.5 m wide and 2.0 m high, as drilling platforms for the guide arch and pipe shed. Perforated steel tubes are installed in the designed pipe-roof positions and grouted; plain (unperforated) tubes placed afterwards can serve as inspection pipes for checking the grouting quality. The drilling direction must be controlled accurately to guarantee a correct opening, and each hole is jacked with a pipe as soon as it is drilled. During drilling, the deflection of the pipe should be measured regularly with a dip compass, and any deflection beyond the design requirement corrected in good time. Pipe joints use threaded connections with a thread length of 15 cm. To stagger the joints, odd-numbered holes start with a 3 m tube and even-numbered holes with a 6 m tube; thereafter 6 m tubes are used throughout.

B. Pipe-roof construction machinery.
Drilling: XY-28-300 power drill with a long frame for drilling and pipe jacking.
Grouting: two BW-250/50 injection pumps.
Grout: cement-water glass slurry; cement slurry to water glass volume ratio 1:0.5; water-cement ratio of the slurry 1:1; water glass concentration 35 Baumé; grouting pressure 0.5-1.0 MPa initially and 2.0 MPa at the end.

(2) Advance small pipes

A. The advance small pipes are hot-rolled seamless steel tubes of 42 mm diameter and 3.5 mm wall thickness, with a pointed front end, ¢6 stiffening hoops welded at the tail, and 8 mm grouting holes drilled around the wall, except in the final metre at the tail, which has no grouting holes. The pipes are driven into the rock of the arch at 10°-30° to the lining centreline, at a spacing of 20-50 cm. After each tube is driven, the excavation face should be sealed immediately with shotcrete and then grouted. After grouting, steel arches are erected; on completion of the early support, another row of tubes is driven every 2-3 m, with a general lap length of 1.0 m between successive rows of advance small pipes.

B. Grouting parameters.
Cement slurry to water glass volume ratio: 1:0.5.
Slurry water-cement ratio: 1:1.
Water glass concentration: 35 Baumé.
Grouting pressure: 0.5-1.0 MPa; if necessary, an orifice stop-grout plug is set.
(3) Advance bolting. The inclination to the tunnel axis must be greater than 14°, grouting must be full, and the lap length must not be less than 1 m.

3. Embedded parts

Embedded parts are fabricated to the design dimensions using shaped plank formwork. Their installed position is checked against a template and must be accurate (tolerance ± 50 cm per the source); they must be fixed firmly so that they cannot shift, with tie wire threaded through the middle.

4. Leveling-layer construction

Install the formwork, setting out the leveling-layer position on both sides. The side forms are of [10 channel steel, with the top elevation agreeing with the corresponding road elevation to a tolerance of ± 2 mm, adjusted by level survey. Fix each form at intervals from the outside so that it cannot displace; joints must fit tightly, without gaps, crookedness or steps, and the bottom of the formwork must not leak grout. Before concreting, the base surface must be cleaned. Concrete arriving on site is discharged directly onto the prepared roadbed and spread evenly by hand. Spreading should allow for settlement under vibration, with a surcharge of up to about 10% above the design surface line. Vibrate with immersion vibrators at corners and edges, and work a flat vibrator criss-cross over the whole surface, staying at each position until the concrete no longer sinks, large bubbles no longer emerge, and cement mortar rises to the surface; normally not less than 15 s, but not overlong. Then drag the vibrating screed beam longitudinally along the rails, trimming the surplus concrete ahead of the beam and keeping low spots filled to level. Finally, roll the surface with a seamless steel pipe of 75-100 mm diameter for further leveling. Spraying water onto, or scattering dry cement over, the finished surface is prohibited.

5. Water and cable duct construction

Position the duct-wall reinforcement accurately, stringing lines for the work. Install the shaped duct-wall formwork accurately and vertically, with a maximum deviation of not more than 3 mm and the formwork tight against the top of the duct; it must pass inspection before concreting. The existing wall surface on that side must be roughened (chiselled), and embedded parts must be positioned accurately. Use shaped (standardized) formwork.

6. Open-cut (cut-and-cover) section construction

Clear the site and set out the works. Excavate the wall foundations to the design dimensions and build them of M7.5 mortared rubble masonry. Install the shaped formwork accurately and vertically, checking its batter in good time. Pour grade 15 rubble concrete; the concrete must reach more than 70% of its strength before the open-cut vault is backfilled.

Backfill over the open-cut vault must be placed and compacted in layers, each layer typically no thicker than 0.3 m, with the height difference between the backfill surfaces on the two sides not more than 0.5 m. Backfill in compacted layers up to the design height; machine rolling may be used, but the fill must be rammed manually until it reaches 1.0 m above the vault before mechanical compaction is allowed.
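The backfill rules above (lifts no thicker than 0.3 m; manual ramming only until the fill is 1.0 m above the vault) translate into a simple lift schedule. A minimal sketch follows; the total backfill height of 2.0 m is an assumed example input, while the 0.3 m and 1.0 m limits come from the text.

```python
# Sketch: plan compaction lifts for open-cut vault backfill per the rules above.
# The 0.3 m lift limit and 1.0 m manual zone are from the text; the total
# backfill height used in the example call is an assumption.

MAX_LIFT = 0.3      # m, maximum compacted lift thickness
MANUAL_ZONE = 1.0   # m, only manual ramming until fill is 1.0 m above the vault

def lift_schedule(total_height: float):
    lifts, base = [], 0.0
    while base < total_height - 1e-9:
        top = min(base + MAX_LIFT, total_height)
        # machine rolling only once the whole lift sits above the manual zone
        method = "machine rolling" if base >= MANUAL_ZONE else "manual ramming"
        lifts.append((round(base, 2), round(top, 2), method))
        base = top
    return lifts

for base, top, method in lift_schedule(2.0):  # assumed 2.0 m of backfill
    print(f"lift {base:4.2f}-{top:4.2f} m above vault: {method}")
```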

The Steric Hindrance Effect (空间位阻效应)

The Steric Hindrance Effect in Space

The concept of steric hindrance, also known as steric inhibition or steric crowding, is a fundamental principle in organic chemistry and has significant implications in the field of space exploration. This phenomenon occurs when the spatial arrangement of atoms or molecules within a chemical structure impedes or restricts the desired reaction or interaction, often due to the bulkiness or size of the substituents involved.

In the context of space exploration, the steric hindrance effect plays a crucial role in the design and development of various spacecraft components, materials, and systems. The unique challenges posed by the harsh environment of space, such as extreme temperatures, radiation, and the absence of gravity, require a deep understanding of how steric effects can influence the performance and stability of these systems.

One of the primary areas where steric hindrance becomes a significant consideration is in the selection and engineering of spacecraft materials. The materials used in spacecraft construction must be able to withstand the rigors of launch, the vacuum of space, and the various stresses encountered during mission operations. The spatial arrangement of atoms and molecules within these materials can greatly impact their mechanical properties, thermal stability, and resistance to degradation.

For instance, the choice of polymers used in spacecraft insulation or structural components must take into account the steric effects that can influence their thermal expansion, flexibility, and resistance to radiation damage. The selection of lubricants and sealants for moving parts, such as hinges or joints, must also consider the steric hindrance that could affect their performance and longevity in the space environment.

Another crucial application of the steric hindrance effect in space exploration is the design of spacecraft propulsion systems. The efficient and reliable operation of rocket engines, ion thrusters, or other propulsion technologies often depends on the careful management of the spatial arrangement of reactants, catalysts, or propellants within the system. Steric effects can influence the kinetics of chemical reactions, the flow dynamics of propellants, and the overall efficiency of the propulsion system.

Furthermore, the steric hindrance effect plays a significant role in the development of space-based sensors and instrumentation. The design of optical systems, such as telescopes or spectrometers, must account for the spatial constraints imposed by the instrument's components, including lenses, mirrors, and detectors. The arrangement of these elements can impact the system's resolution, sensitivity, and overall performance.

In the field of astrochemistry, the steric hindrance effect is also relevant in the study of complex organic molecules and their formation in the interstellar medium. The spatial arrangement of atoms within these molecules can influence their stability, reactivity, and the pathways by which they are synthesized in the harsh conditions of space.

To address the challenges posed by steric hindrance in space exploration, researchers and engineers employ various strategies, such as molecular modelling, computational chemistry, and advanced materials science. These tools help them to predict, analyze, and mitigate the effects of steric crowding, enabling the development of more robust and efficient spacecraft systems.

In conclusion, the steric hindrance effect is a critical consideration in the design and development of spacecraft systems and materials for space exploration. By understanding and leveraging this fundamental principle of organic chemistry, scientists and engineers can create innovative solutions that push the boundaries of what is possible in the exploration and utilization of the final frontier: the vast expanse of space.

UPGMA

Construction of a distance tree using clustering with the Unweighted Pair Group Method with Arithmetic Mean (UPGMA)

UPGMA is the simplest method of tree construction. It was originally developed for constructing taxonomic phenograms, i.e. trees that reflect the phenotypic similarities between OTUs (operational taxonomic units), but it can also be used to construct phylogenetic trees if the rates of evolution are approximately constant among the different lineages. For this purpose the number of observed nucleotide or amino-acid substitutions can be used. UPGMA employs a sequential clustering algorithm, in which local topological relationships are identified in order of similarity, and the phylogenetic tree is built in a stepwise manner. We first identify, from among all the OTUs, the two OTUs that are most similar to each other and then treat them as a new single OTU, referred to as a composite OTU. From the resulting group of OTUs we again identify the pair with the highest similarity, and so on, until we are left with only two OTUs.

Suppose we have a tree consisting of 6 OTUs, A-F. [The tree and its pairwise distance matrix are shown as figures on the original page.] We cluster the pair of OTUs with the smallest distance, A and B, which are separated by a distance of 2. The branching point is positioned at a distance of 2 / 2 = 1 substitution, giving a first subtree (A,B).

Following the first clustering, A and B are considered as a single composite OTU (A,B), and the new distance matrix is calculated as follows:

dist(A,B),C = (distAC + distBC) / 2 = 4
dist(A,B),D = (distAD + distBD) / 2 = 6
dist(A,B),E = (distAE + distBE) / 2 = 6
dist(A,B),F = (distAF + distBF) / 2 = 8

In other words, the distance between a simple OTU and a composite OTU is the average of the distances between the simple OTU and the constituent simple OTUs of the composite OTU. A new distance matrix is then recalculated using the newly calculated distances, and the whole cycle is repeated. [The intermediate cycles appear as figures on the original page.] In the fifth cycle, the final step consists of clustering the last OTU, F, with the remaining composite OTU.

UPGMA assumes equal rates of mutation along all branches as its model of evolution, so the theoretical root must be equidistant from all OTUs. We can therefore apply the method of mid-point rooting: the root of the entire tree is positioned at dist(ABCDE),F / 2 = 4. [The final inferred tree is shown as a figure.] As can be seen, we have recovered the original phylogenetic tree we started with.

However, there are some pitfalls. The UPGMA clustering method is very sensitive to unequal evolutionary rates: if one of the OTUs has incorporated more mutations over time than another OTU, one may end up with a tree that has the wrong topology. Clustering works only if the data are ultrametric. Ultrametric distances are defined by the satisfaction of the "three-point condition": for any three taxa, distAC <= max(distAB, distBC), or in words, the two greatest distances are equal. UPGMA thus assumes that the evolutionary rate is the same for all branches.

If the assumption of rate constancy among lineages does not hold, UPGMA may give an erroneous topology. This is illustrated in the following example. Suppose you have a tree in which, since the divergence of A and B, B has accumulated mutations at a much higher rate than A. [The tree and its distance matrix are shown as figures on the original page.] The three-point criterion is violated, e.g.:

distBD <= max(distBA, distAD), i.e. 10 <= max(5, 7) = False

Reconstructing the evolutionary history from this distance matrix, we now cluster the pair of OTUs with the smallest distance, A and C, which are separated by a distance of 4; the branching point is positioned at 4 / 2 = 2 substitutions, giving a subtree (A,C). In the fifth cycle the final step again clusters the last OTU, F; when the result is compared to the true tree, it is obvious that we end up with a tree that has the wrong topology.

Conclusion: the unequal rates of mutation have led to a completely different tree topology.
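The clustering procedure above is easy to express in code. Below is a minimal Python sketch of UPGMA together with a three-point (ultrametric) check. The full 6-OTU distance matrix appeared only as a figure on the original page, so the matrix here is a hypothetical ultrametric example consistent with the distances that are quoted (distAB = 2; dist(A,B),C = 4; dist(A,B),D = dist(A,B),E = 6; dist(A,B),F = 8):

```python
# UPGMA sketch (naive O(n^3)); the matrix is a hypothetical ultrametric
# example consistent with the distances quoted in the text above.
import itertools

dist = {
    ("A", "B"): 2, ("A", "C"): 4, ("B", "C"): 4,
    ("A", "D"): 6, ("B", "D"): 6, ("C", "D"): 6,
    ("A", "E"): 6, ("B", "E"): 6, ("C", "E"): 6, ("D", "E"): 4,
    ("A", "F"): 8, ("B", "F"): 8, ("C", "F"): 8, ("D", "F"): 8, ("E", "F"): 8,
}

def d(x, y):
    return dist.get((x, y), dist.get((y, x)))

def upgma(otus):
    """Cluster OTUs step by step; composite OTUs are nested tuples of leaves."""
    sizes = {o: 1 for o in otus}
    nodes = list(otus)
    while len(nodes) > 1:
        # find the closest pair among the current (possibly composite) OTUs
        a, b = min(itertools.combinations(nodes, 2), key=lambda p: d(*p))
        new = (a, b)
        height = d(a, b) / 2  # branching point at half the pairwise distance
        sizes[new] = sizes[a] + sizes[b]
        nodes.remove(a); nodes.remove(b)
        # distance from the composite OTU to every remaining OTU is the
        # average over its constituent leaves (size-weighted arithmetic mean)
        for c in nodes:
            dist[(new, c)] = (sizes[a] * d(a, c) + sizes[b] * d(b, c)) / sizes[new]
        nodes.append(new)
        print(f"joined {a} and {b} at height {height}")
    return nodes[0]

def ultrametric(otus):
    """Three-point condition: the two largest distances of every triple are equal."""
    for x, y, z in itertools.combinations(otus, 3):
        ds = sorted([d(x, y), d(x, z), d(y, z)])
        if ds[1] != ds[2]:
            return False
    return True

leaves = ["A", "B", "C", "D", "E", "F"]
print("ultrametric:", ultrametric(leaves))
print("tree:", upgma(leaves))
```

Running this joins (A,B) at height 1 and finishes with the root at height 4, matching the mid-point rooting dist(ABCDE),F / 2 = 4 described above. Substituting the non-ultrametric example distances (distBA = 5, distAD = 7, distBD = 10) makes the three-point check fail, and the clustering can then return the wrong topology.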

Civil Engineering: Translated Foreign Literature


Appendix A: Source text of the translated scientific literature

Construction and Building Materials, Volume 21, Issue 5, May 2007, Pages 1052-1060

An approach to determine long-term behavior of concrete members prestressed with FRP tendons

Abstract

The combined effects of creep and shrinkage of concrete and relaxation of prestressing tendons cause gradual changes in the stresses in both concrete and prestressing tendons. A simple method is presented to calculate the long-term prestress loss and the long-term change in concrete stresses in continuous prestressed concrete members with either carbon fiber reinforced polymer (CFRP) or aramid fiber reinforced polymer (AFRP) tendons. The method satisfies the requirements of equilibrium and compatibility and avoids the use of any empirical multipliers. A simple graph is proposed to evaluate the reduced relaxation in AFRP tendons. It is shown that the prestress loss in FRP tendons is significantly less than that when using prestressing steel, mainly because of the lower moduli of elasticity of FRP tendons. The long-term changes in concrete stresses and deflection can be either smaller or greater than those of comparable girders prestressed with steel tendons, depending on the type of FRP tendons and the initial stress profile of the cross-section under consideration.

Keywords: Creep; FRP; Long-term; Prestress loss; Prestressed concrete; Relaxation; Shrinkage

Nomenclature
A: area of cross section
d: vertical distance measured from top fiber of cross section
E: modulus of elasticity
Ē_c: age-adjusted elasticity modulus of concrete
f_pu: ultimate strength of prestressing tendon
h: total thickness of concrete cross section
I: second moment of area
O: centroid of age-adjusted transformed section
t: final time (end of service life of concrete member)
t_0: concrete age at prestressing
y: coordinate of any fiber, measured downward from O
χ: aging coefficient
χ_r: reduced relaxation coefficient
α: ratio of modulus of elasticity of FRP or steel to that of concrete
Δε_c(t, t_0): change in concrete strain between time t_0 and t
Δε_O: change in axial strain at the centroid of the age-adjusted transformed section O
Δσ_c(t, t_0): stress applied gradually from time t_0, reaching its full amount at time t
Δσ_pr: intrinsic relaxation
Δσ̄_pr: reduced relaxation
Δσ_p: total long-term prestress loss
Δψ: change in curvature
ε_cs: shrinkage strain of concrete between t_0 and t
ε_c(t_0): instantaneous strain at time t_0
φ(t, t_0): creep coefficient between t_0 and t
σ_c(t_0): stress applied at time t_0 and sustained to a later time t
σ_p0: initial stress of prestressing tendon
ρ: reinforcement ratio
ψ: curvature
Ω: ratio of the difference between the total prestress loss and the intrinsic relaxation to the initial stress

Subscripts
1: transformed section at t_0
c: concrete
cc: net concrete section
f: FRP reinforcement or flange
p: prestressing FRP tendon
ps: prestressing steel tendon
s: steel reinforcement

Article outline: Nomenclature; 1. Introduction; 2. Relaxation of FRP prestressing tendons; 3. Proposed method of analysis (3.1 Initial steps; 3.2 Time-dependent change in concrete stress; 3.3 Long-term deflection); 4. Application to continuous girders; 5. Development of design aids; 6. Illustrative example; 7. Summary; Acknowledgements; References

1. Introduction

The use of fiber reinforced polymer (FRP) tendons as prestressing reinforcement has been proposed in the past decade, and a few concrete bridges have already been constructed utilizing FRP tendons.
Compared to conventional steel prestressing tendons, FRP tendons have many advantages, including their noncorrosive and nonconductive properties, light weight, and high tensile strength. Most of the research conducted on concrete girders prestressed with FRP tendons has focused on the short-term behavior of prestressed members; research findings on the long-term behavior of concrete members with FRP tendons are scarce in the literature. The recent ACI Committee report on prestressing concrete structures with FRP tendons (ACI 440.4R-04 [1]) has pointed out that: "Research on the long-term loss of prestress and the resultant time-dependent camber/deflection is needed ..." Most of the research on, and applications of, FRP tendons in concrete structures have adopted either carbon fiber reinforced polymer (CFRP) or aramid fiber reinforced polymer (AFRP) tendons. The use of glass fiber reinforced polymer (GFRP) has mostly been limited to conventional reinforcing bars because of its relatively low tensile strength and poor resistance to creep. Therefore, this paper focuses on prestressed members with either CFRP or AFRP tendons.

Creep and shrinkage of concrete, and relaxation of prestressing tendons, cause long-term deformations in concrete structures. While it is generally accepted that long-term losses do not affect the ultimate capacity of a prestressed concrete member, a reasonably accurate prediction of these losses is important to ensure satisfactory performance of concrete structures in service. If prestress losses are underestimated, the tensile strength of concrete can be exceeded under full service loads, causing cracking and unexpected excessive deflection. On the other hand, overestimating prestress losses can lead to excessive camber and uneconomic design. The error in predicting the long-term prestress losses can be due to: (1) inaccuracy in estimating the long-term material characteristics (creep and shrinkage of concrete, and relaxation of prestressing tendons); and (2) inaccuracy of the method of analysis used. The objective of this paper is to address the second source of inaccuracy by presenting a simple analytical method to estimate the time-dependent strains and stresses in concrete members prestressed with FRP tendons. The method satisfies the requirements of equilibrium and compatibility and avoids the use of empirical equations, which in general lose accuracy in order to gain generality. The inaccuracy in the material characteristics used can be mitigated by varying the input material parameters and establishing upper and lower bounds on the analysis results.

For the purposes of this paper, and to avoid confusion, a consistent sign convention is used. Axial force N is positive when it is tensile. A bending moment M that produces tension at the bottom fiber of a cross section, and the associated curvature ψ, are positive. Stress σ and strain ε are positive for tension and elongation, respectively. Downward deflection is positive. It follows that the shrinkage ε_cs is a negative quantity. The loss in tension in the prestressing reinforcement due to relaxation, Δσ_pr, or due to the combined effects of creep, shrinkage, and relaxation, Δσ_p, is a negative quantity. The analysis considered herein focuses on a prestressed concrete section whose centroidal principal y-axis is vertical, with the coordinate y of any concrete fiber or steel layer measured downward from a given reference point.
2. Relaxation of FRP prestressing tendons

Similar to concrete and steel, AFRP prestressing tendons exhibit some creep if subjected to sustained strains. CFRP tendons typically display an insignificant amount of creep, which can be neglected for most practical applications. When a prestressing tendon is stretched between two points, it is subjected to a constant strain. Because of creep, the stress in the tendon decreases (relaxes) with time to maintain the state of constant strain. This reduction in stress is known as the intrinsic relaxation Δσ_pr. While steel tendons subjected to stresses less than 50% of the yield stress do not exhibit an appreciable amount of relaxation, tests on AFRP tendons have shown that they display relaxation under very low stresses. The level of relaxation of AFRP tendons depends upon many factors, including ambient temperature, environment (e.g., air, alkaline, acidic, or salt solutions), the ratio of the initial stress σ_p0 to the ultimate strength f_pu, and the time t elapsed after initial stressing. Based on extensive experimentation on the relaxation properties of AFRP tendons, Saadatmanesh and Tannous [2] suggested a relationship of the form

σ_p(t) / σ_p1 = a − b log t    (1)

where λ = σ_p1 / f_pu and σ_p1 is the stress in the tendon 1 h after stress release. [Eq. (1) appears as an image in the source; the linear-in-log-t form is inferred from the surrounding text.] Ratios of σ_p1/σ_p0 in their tests varied between 0.91 and 0.96, with an average of 0.93. Tabulated values of the variables a and b were provided for λ = 0.4 and λ = 0.6, and for different temperature levels and solution types. For AFRP tendons in air at a temperature of 25 °C, relationships for a and b were proposed [2] as given in Eq. (2) [not reproduced here].

In a prestressed concrete member, the two ends of the prestressing tendon constantly move toward each other because of creep and shrinkage of concrete, thereby reducing the tensile stress in the tendon. This reduction in tension has an effect similar to subjecting the tendon to a lesser initial stress. Thus, a reduced relaxation value, Δσ̄_pr, should be used in the analysis of long-term effects in prestressed members, such that

Δσ̄_pr = χ_r Δσ_pr    (3)

where χ_r is a dimensionless coefficient less than unity. Following an approach previously suggested by Ghali and Trevino [3] to evaluate χ_r for prestressing steel tendons, χ_r for AFRP tendons can be calculated from Eqs. (4) and (5) [not reproduced here; log t in Eq. (1) is taken equal to 5, i.e. 100,000 h], in which ζ is a dimensionless time function defining the shape of the tendon stress-time curve. The value of ζ increases from 0 to 1 as time passes from the initial prestress time t_0 to the final time t. Ω is the ratio of the difference between the total prestress loss Δσ_ps(t) and the intrinsic relaxation Δσ_pr(t) to the initial stress σ_p0:

Ω = [Δσ_ps(t) − Δσ_pr(t)] / σ_p0    (6)

Fig. 1 [not reproduced] shows the variation of χ_r with Ω for σ_p0/f_pu = 0.4, 0.5, and 0.6, which represent the common values of initial prestressing ratios [1]. As will be shown in a later section, Ω typically varies between 0.1 and 0.2, and a value of χ_r = 0.95 can be assumed for practical purposes.

Fig. 1. Reduced relaxation coefficient χ_r for AFRP.
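As a small illustration of how Eqs. (1), (3) and (6) fit together, the sketch below (Python; not part of the paper) computes an intrinsic relaxation from assumed a, b coefficients and applies the reduced relaxation coefficient. The numerical a and b values are placeholders, since the paper's tabulated values are not reproduced here.

import math

def intrinsic_relaxation(sigma_p1, a, b, t_hours):
    """Eq. (1): sigma_p(t)/sigma_p1 = a - b*log10(t).
    Returns the (negative) stress change sigma_p(t) - sigma_p1.
    a and b are placeholder coefficients; the paper tabulates them for
    lambda = sigma_p1/f_pu of 0.4 and 0.6 and various environments."""
    retention = a - b * math.log10(t_hours)
    return sigma_p1 * (retention - 1.0)

def reduced_relaxation(d_sigma_pr, chi_r=0.95):
    """Eq. (3): reduced relaxation = chi_r * intrinsic relaxation.
    chi_r = 0.95 is the practical value suggested in the text."""
    return chi_r * d_sigma_pr

# Example with assumed numbers: sigma_p1 = 0.5*f_pu for a 1200 MPa AFRP
# tendon, placeholder a = 1.0 and b = 0.02 (2% loss per decade of hours).
sigma_p1 = 0.5 * 1200.0                       # MPa
d_pr = intrinsic_relaxation(sigma_p1, a=1.0, b=0.02, t_hours=1e5)
print(d_pr, reduced_relaxation(d_pr))         # -60.0 MPa, -57.0 MPa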
3. Proposed method of analysis

The analysis follows the four generic steps proposed by Ghali et al. [4] and depicted schematically in Fig. 2. The procedure can be developed by considering an arbitrary section consisting of a single type of concrete, subjected at time t_0 to both prestressing and dead loads. The method results in a simple equation that is easy for practicing engineers to use, in place of a lengthy matrix analysis that could only be used in special-purpose computer programs. In addition to the initial strain profile of the cross section, the equation is only a function of four dimensionless coefficients, which can be easily calculated (or interpolated from graphs), together with the creep coefficient and the shrinkage. [Equations shown as images in the source are reconstructed below from the surrounding definitions where this can be done reliably; otherwise they are noted as not reproduced.]

Fig. 2. Four steps of analysis of time-dependent effects (after Ghali et al. [4]).

3.1. Initial steps

Step 1: Instantaneous strains. At any fiber, the strain and the curvature at time t_0 due to the dead load and prestressing effects (primary + secondary) can be calculated. Alternatively, at this stage the designer may have determined the stress distribution at t_0 to verify that the allowable stresses are not exceeded; in this case, the strain diagram at t_0 can be obtained by dividing the stress values by the modulus of elasticity of concrete at t_0, E_c(t_0).

Step 2: Free creep and shrinkage of concrete. The distribution of the hypothetical free change in concrete strain due to creep and shrinkage in the period t_0 to t is defined by its value (Δε_cc)_free at the centroid of the area of the net concrete section A_c (defined as the gross area minus the area of the FRP reinforcement A_f, minus the area of the prestressing duct in the case of post-tensioning, or minus the area of the FRP tendons A_p in the case of pretensioning), located at y = y_cc as shown in Fig. 3, such that

(Δε_cc)_free = ε_cc(t_0) φ(t, t_0) + ε_cs    (7)

where y_cc is the y coordinate of the centroid of the net concrete section, φ(t, t_0) is the creep coefficient for the period t_0 to t, ε_cs is the shrinkage in the same period, and ε_cc(t_0) is the strain at the centroid of the net concrete section, given by

ε_cc(t_0) = ε_1(t_0) + (y_cc − y_1) ψ(t_0)    (8)

where y_1 is the centroid of the transformed area at t_0 and ψ(t_0) is the curvature (the slope of the strain diagram) at t_0. The free curvature is

Δψ_free = ψ(t_0) φ(t, t_0)    (9)

Fig. 3. Typical prestressed concrete section and the strain diagram immediately after transfer.

Step 3: Artificial restraining forces. The free strain calculated in Step 2 can be artificially prevented by the gradual application of a restraining stress, whose value at any fiber y is

Δσ(y) = −Ē_c [(Δε_cc)_free + (y − y_cc) Δψ_free]    (10)

where Ē_c is the age-adjusted modulus of concrete [5] and [6], used to account for the creep effects of stresses applied gradually to concrete and defined as

Ē_c = E_c(t_0) / [1 + χ φ(t, t_0)]    (11)

The artificial restraining forces ΔN, at the reference point O (the centroid of the age-adjusted transformed section), and ΔM, which together prevent the strain changes due to creep, shrinkage and relaxation, are given by Eqs. (12) and (13) [not reproduced], in which I_c, y_p, and Δσ̄_pr are, respectively, the second moment of A_c about its centroid, the y coordinate of the centroid of the FRP tendons, and the reduced relaxation stress between times t_0 and t. It should be noted that if the section contains more than one layer of prestressing tendons, the terms containing A_p or y_p A_p should be replaced by the sum of the appropriate parameters over all layers.

Step 4: Elimination of the artificial restraint. The artificial forces ΔN and ΔM can be applied in the reversed direction on the age-adjusted transformed section to give the true change in strain at O, Δε_O, and in curvature, Δψ, such that

Δε_O = −ΔN / (Ē_c Ā)    (14a)
Δψ = −ΔM / (Ē_c Ī)    (14b)

where Ī is the second moment of Ā about its centroid and Ā is the area of the age-adjusted transformed section:

Ā = A_c + (E_f / Ē_c) A_f + (E_p / Ē_c) A_p    (15)

where E_f and E_p are the moduli of elasticity of the FRP reinforcement and tendons, respectively, and Ē_c is as defined in Eq. (11). Substituting Eqs. (12) and (13) into Eqs. (14a), (14b) and (15) gives Eqs. (16) and (17) [not reproduced], in which the dimensionless geometric coefficients k_A, k_I, k_cc, and k_p are defined by Eq. (18) [not reproduced].
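To make Eqs. (11) and (15) above concrete, here is a minimal numerical sketch (Python; not part of the paper). The input values (E_c(t_0), creep coefficient, aging coefficient χ = 0.8, section areas) are illustrative assumptions only.

def age_adjusted_modulus(E_c0, phi, chi=0.8):
    """Eq. (11): E_bar = E_c(t0) / (1 + chi * phi(t, t0)).
    chi = 0.8 is a commonly assumed aging coefficient; it is an
    assumption here, not a value prescribed by the paper."""
    return E_c0 / (1.0 + chi * phi)

def transformed_area(A_c, A_f, A_p, E_f, E_p, E_bar):
    """Eq. (15): area of the age-adjusted transformed section."""
    return A_c + (E_f / E_bar) * A_f + (E_p / E_bar) * A_p

# Illustrative numbers: E_c(t0) = 30 GPa, phi = 2.0, CFRP with E = 140 GPa,
# areas in mm^2.
E_bar = age_adjusted_modulus(30e3, phi=2.0)            # ~11538 MPa
A_bar = transformed_area(A_c=2.0e5, A_f=1.0e3, A_p=1.2e3,
                         E_f=140e3, E_p=140e3, E_bar=E_bar)
print(round(E_bar), round(A_bar))                      # 11538, ~226693 mm^2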
The time-dependent change in strain in the prestressing tendons, Δε_p, can then be evaluated using Eq. (19), and the time-dependent change in stress in the prestressing tendons (Eq. (20)) is the sum of E_p Δε_p and the reduced relaxation:

Δε_p = Δε_O + y_p Δψ    (19)
Δσ_p = E_p (Δε_O + y_p Δψ) + Δσ̄_pr    (20)

Substitution of Eqs. (16) and (17) into Eq. (20) gives an expression for the long-term prestress loss Δσ_p due to creep, shrinkage, and relaxation, Eq. (21) [not reproduced]. It should be noted that the last term in Eq. (21), the reduced relaxation Δσ̄_pr, is zero in the case of prestressed members using CFRP tendons.

4. Application to continuous girders

Prestressing of continuous beams or frames produces statically indeterminate bending moments (referred to as secondary moments). As mentioned previously, ε_1(t_0) and ψ(t_0) (Eqs. (7), (8) and (9)) represent the strain parameters at a section due to dead load plus the primary and secondary moments due to prestressing. The time-dependent change in the prestress force in the tendon produces changes in these secondary moments, which are not included in Eq. (21). This section considers the effect of the time-dependent change in secondary moments on the prestress loss.

Step 1: Consider a two-span continuous beam, as shown in Fig. 4(a), where the tendon profile is parabolic in each span. The statically indeterminate beam can be solved by any method of structural analysis (such as the force method) to determine the moment diagram at time t_0 due to dead load and prestressing.

Fig. 4. Two-span continuous prestressed girder. (a) Dimensions and cable profile; (b) locations of integration points (sections).

Step 2: The time-dependent sectional analysis can be performed, as shown previously, for each of the three sections in Fig. 4(b), to determine (Δψ)_i for each section, where i = A, B and C.

Step 3: Use the force method to determine the change in internal forces and displacements in the continuous beam. The released structure with the coordinate system shown in Fig. 5(a) can be used. Let the change in angular discontinuity at the middle support between t_0 and t be ΔD_1, and let the unknown change in the connecting moment be ΔF_1. The change in angular discontinuity ΔD_1 is evaluated as the sum of the two end rotations of each of the simple spans l_1 and l_2. Using the method of elastic weights and assuming a parabolic variation of curvature in each span, ΔD_1 can be expressed by Eq. (25) [not reproduced].

Fig. 5. Analysis by the force method. (a) Released structure and coordinate system; (b) moment diagram due to a unit value of the connecting moment.

Step 4: For a unit value of the connecting moment, ΔF_1 = 1, applied gradually on the released structure from zero at time t_0 to unity at time t (Fig. 5(b)), determine the change in curvature at each section, (Δψ_u1)_i, from Eq. (26) [not reproduced]. The age-adjusted flexibility coefficient can then be evaluated from Eq. (27) [not reproduced].

Step 5: The change in the connecting moment ΔF_1 can be computed by solving the compatibility equation, Eq. (28) [not reproduced]. The prestress change (loss or gain) at each section due to continuity, (Δσ_p(cont))_i, is then given by Eq. (29) [not reproduced], where (ΔM)_i is the change in bending moment at each section; thus (ΔM)_A = ΔF_1/2 and (ΔM)_B = ΔF_1 [subscripts reconstructed from the garbled source]. Consideration of parameters generic to most bridges [7] has indicated that Δσ_p(cont) is very small relative to the Δσ_p determined by an analysis that ignores the time-dependent changes in these moments.
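The tendon-level bookkeeping of Eqs. (19) and (20) above is simple enough to sketch directly (Python; not part of the paper). The section-response numbers fed in below are invented placeholders; in a real calculation Δε_O and Δψ would come from Eqs. (16) and (17).

def tendon_strain_change(d_eps_O, d_psi, y_p):
    """Eq. (19): strain change at the tendon level y_p."""
    return d_eps_O + y_p * d_psi

def prestress_loss(d_eps_O, d_psi, y_p, E_p, reduced_relax=0.0):
    """Eq. (20): stress change = E_p * (tendon strain change) plus the
    reduced relaxation; reduced_relax is 0 for CFRP, and for AFRP it
    would come from Eq. (3)."""
    return E_p * tendon_strain_change(d_eps_O, d_psi, y_p) + reduced_relax

# Placeholder section response: axial strain change -250e-6, curvature
# change -0.4e-6 1/mm, tendon 300 mm below O, CFRP with E_p = 140 GPa.
loss = prestress_loss(d_eps_O=-250e-6, d_psi=-0.4e-6, y_p=300.0, E_p=140e3)
print(round(loss, 1), "MPa")   # -51.8 MPa, i.e. a prestress loss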
5. Development of design aids

The geometric coefficients k_A, k_I, k_cc, and k_p (Eq. (18)) depend upon the geometry of the section and the material parameters E_f/E_c(t_0), E_p/E_c(t_0), and χ. The girder cross sections most likely to be used with FRP tendons are single- or double-T (DT) girders. Therefore, in lieu of using Eq. (18), design aids for the geometric coefficients of a typical post-tensioned DT section (Fig. 3) are presented in Fig. 6a, Fig. 6b, Fig. 6c, Fig. 7a, Fig. 7b, Fig. 7c and Fig. 7d [not reproduced] for sections with CFRP and AFRP tendons, respectively. In these figures, the ratio of FRP reinforcement in the flange is ρ_f = A_f/(b_f h_f), and the ratio of the prestressing tendon area to the area of the webs is ρ_p = A_p/(h Σb_w). Linear interpolation can be used for ρ_f and ρ_p values not shown in the graphs.

7. Summary

A simple method is presented to estimate the long-term prestress loss in continuous concrete girders with FRP tendons, as well as the time-dependent change in concrete stresses and deflections at critical sections, assuming uncracked conditions. The method can be easily programmed using hand-held calculators or computer spreadsheets. A simple graphical tool is proposed to calculate the reduced relaxation coefficient χ_r for AFRP tendons, to be used when applying the method to girders prestressed with AFRP tendons; a value of χ_r = 0.95 is suggested for practical purposes. For the most common DT prestressed girders used in practice, design aids are presented to further simplify the method for practicing engineers.

The long-term prestress loss in concrete girders prestressed with FRP tendons is less than that when using steel tendons, mainly because of the lower moduli of elasticity of FRP. The time-dependent change in concrete stresses and deflection can be either smaller or greater than those of comparable girders prestressed with steel tendons, depending on the type of FRP tendons and the initial stress profile (due to dead load and prestressing) of the prestressed cross-section at member mid-span.

Acknowledgements

The authors gratefully acknowledge the support provided by the California Department of Transportation under Research Grant No. 59A0420.

References

[1] ACI Committee 440, 440.4R-04: Prestressing concrete structures with FRP tendons. American Concrete Institute, Farmington Hills, MI, 2004.
[2] H. Saadatmanesh and F.E. Tannous, Long-term behavior of aramid fiber reinforced plastic (AFRP) tendons, ACI Mater J 96 (1999) (3), pp. 297-305.
[3] A. Ghali and J. Trevino, Relaxation of steel in prestressed concrete, PCI J 30 (1985) (5), pp. 82-94.
[4] A. Ghali, R. Favre and M.M. Elbadry, Concrete Structures: Stresses and Deformations (3rd ed.), Spon Press, London & New York (2002).
[5] H. Trost, Auswirkungen des Superpositionsprinzips auf Kriech- und Relaxationsprobleme bei Beton und Spannbeton, Beton- und Stahlbetonbau 62 (1967) (10), pp. 230-238; (11), pp. 261-269 (in German).
[6] Z.P. Bazant, Prediction of concrete creep effects using age-adjusted effective modulus, ACI J 69 (1972) (4), pp. 212-217.
[7] S.A. Youakim and V.M. Karbhari, A simplified method for prediction of long-term prestress loss in post-tensioned concrete bridges. Caltrans Draft Report, University of California at San Diego, CA, 2004.
[8] American Association of State Highway and Transportation Officials, AASHTO-LRFD bridge design specifications, 3rd ed., Washington, DC, 2004.

Building Structures: Translated Foreign Literature


Architecture Structure

The architect must deal with the spatial aspects of activity, and with physical and symbolic needs, in such a way that overall performance integrity is assured. Hence, he or she will want to think of the evolving building environment as a total system of interacting, space-forming subsystems. This represents a complex challenge, and to meet it the architect needs a hierarchic design process that provides at least three levels of feedback thinking: schematic, preliminary, and final.

Such a hierarchy is necessary if he or she is to avoid being confused, at the conceptual stages of design thinking, by the myriad detail issues that can distract attention from more basic considerations. In fact, we can say that an architect's ability to distinguish the more basic from the more detailed issues is essential to his success as a designer.

The object of the schematic feedback level is to generate and evaluate overall site-plan, activity-interaction, and building-configuration options. To do so the architect must be able to focus on the interaction of the basic attributes of the site context, the spatial organization, and the symbolism as determinants of physical form. This means that, in schematic terms, the architect may first conceive and model a building design as an organizational abstraction of essential performance-space interactions; he or she may then explore the overall space-form implications of the abstraction. As an actual building-configuration option begins to emerge, it will be modified to include consideration of basic site conditions.

At the schematic stage, it would also be helpful if the designer could visualize his or her options for achieving overall structural integrity and consider the constructive feasibility and economics of the scheme. But this requires that the architect and/or a consultant be able to conceptualize total-system structural options in terms of elemental detail. Such overall thinking can easily be fed back to improve the space-form scheme.

At the preliminary level, the architect's emphasis shifts to the elaboration of his or her more promising schematic design options. Here the architect's structural needs shift to the approximate design of specific subsystem options. At this stage the total structural scheme is developed to a middle level of specificity by focusing on the identification and design of the major subsystems, to the extent that their key geometric, component, and interactive properties are established. Basic subsystem interactions and design conflicts can thus be identified and resolved in the context of total-system objectives. Consultants can play a significant part in this effort, and these preliminary-level decisions may also result in feedback that calls for refinement, or even major change, of the schematic concepts.

When the designer and the client are satisfied with the feasibility of a design proposal at the preliminary level, it means that the basic problems of overall design are solved and details are not likely to produce major change. The focus shifts again, and the design process moves into the final level. At this stage the emphasis is on the detailed development of all subsystem specifics. Here the role of specialists from various fields, including structural engineering, is much larger, since all details of the preliminary design must be worked out. Decisions made at this level may produce feedback into Level II that will result in changes.
However, if Levels I and II are handled with insight, the relationship between the overall decisions made at the schematic and preliminary levels and the specifics of the final level should be such that gross redesign is not in question. Rather, the entire process should be one of moving in an evolutionary fashion from the creation and refinement (or modification) of the more general properties of a total-system design concept to the fleshing out of requisite elements and details.

To summarize: at Level I, the architect must first establish, in conceptual terms, the overall space-form feasibility of basic schematic options. At this stage, collaboration with specialists can be helpful, but only in the form of overall thinking. At Level II, the architect must be able to identify the major subsystem requirements implied by the scheme and substantiate their interactive feasibility by approximating key component properties. That is, the properties of the major subsystems need be worked out only in sufficient depth to verify the inherent compatibility of their basic form-related and behavioral interaction. This implies a somewhat more specific form of collaboration with specialists than at Level I. At Level III, the architect and the specialists collaborate in the most specific form, providing for all of the elemental design specifics required to produce biddable construction documents. Of course, this success depends on the development of the structural material.

Concrete

Plain concrete is formed from a hardened mixture of cement, water, fine aggregate, coarse aggregate (crushed stone or gravel), air, and often other admixtures. The plastic mix is placed and consolidated in the formwork, then cured to facilitate the acceleration of the chemical hydration reaction of the cement/water mix, resulting in hardened concrete. The finished product has high compressive strength and low resistance to tension, such that its tensile strength is approximately one tenth of its compressive strength. Consequently, tensile and shear reinforcement has to be provided in the tensile regions of sections to compensate for the weak tension regions in the reinforced concrete element.

It is this deviation in the composition of a reinforced concrete section from the homogeneity of standard wood or steel sections that requires a modified approach to the basic principles of structural design. The two components of the heterogeneous reinforced concrete section are to be so arranged and proportioned that optimal use is made of the materials involved. This is possible because concrete can easily be given any desired shape by placing and compacting the wet mixture of the constituent ingredients; if these are properly proportioned, the finished product becomes strong, durable, and, in combination with the reinforcing bars, adaptable for use as a main member of any structural system.

The techniques necessary for placing concrete depend on the type of member to be cast: that is, whether it is a column, a beam, a wall, a slab, a foundation, mass concrete, or an extension of previously placed and hardened concrete. For beams, columns, and walls, the forms should be well oiled after cleaning them, and the reinforcement should be cleared of rust and other harmful materials. In foundations, the earth should be compacted and thoroughly moistened to about 6 in. in depth to avoid absorption of the moisture present in the wet concrete.
Concrete should always be placed in horizontal layers that are compacted by means of high-frequency, power-driven vibrators of either the immersion or the external type, as the case requires, unless it is placed by pumping. It must be kept in mind, however, that overvibration can be harmful, since it could cause segregation of the aggregate and bleeding of the concrete.

Hydration of the cement takes place in the presence of moisture at temperatures above 50 °F. It is necessary to maintain such a condition in order that the chemical hydration reaction can take place. If drying is too rapid, surface cracking takes place. This results in a reduction of concrete strength due to cracking, as well as failure to attain full chemical hydration.

It is clear that a large number of parameters have to be dealt with in proportioning a reinforced concrete element, such as geometrical width, depth, area of reinforcement, steel strain, concrete strain, steel stress, and so on. Consequently, trial and adjustment is necessary in the choice of concrete sections, with assumptions based on conditions at the site, availability of the constituent materials, particular demands of the owners, architectural and headroom requirements, the applicable codes, and environmental conditions. Reinforced concrete is often a site-constructed composite, in contrast to the standard mill-fabricated beam and column sections in steel structures.

A trial section has to be chosen for each critical location in a structural system. The trial section has to be analyzed to determine whether its nominal resisting strength is adequate to carry the applied factored load. Since more than one trial is often necessary to arrive at the required section, the first design input step develops into a series of trial-and-adjustment analyses (a sketch of this loop follows below).

The trial-and-adjustment procedure for the choice of a concrete section leads to the convergence of analysis and design. Hence every design is an analysis once a trial section is chosen. The availability of handbooks, charts, and personal computers and programs supports this approach as a more efficient, compact, and speedy instructional method compared with the traditional approach of treating the analysis of reinforced concrete separately from pure design.
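The trial-and-adjustment cycle described above can be expressed as a simple loop. The sketch below (Python; not from the original text, and deliberately schematic) assumes a hypothetical nominal_strength callback and a fixed factored-load demand; a real design would use code equations for the section capacity.

def choose_section(candidates, factored_load, nominal_strength,
                   strength_reduction=0.9):
    """Trial-and-adjustment: walk through trial sections (ordered from
    lightest to heaviest) and return the first whose design strength,
    phi * R_n, is adequate for the applied factored load.
    nominal_strength is a hypothetical callback supplied by the designer;
    phi = 0.9 is an illustrative strength-reduction factor, not a value
    taken from the text."""
    for section in candidates:
        if strength_reduction * nominal_strength(section) >= factored_load:
            return section          # analysis confirms this trial section
    raise ValueError("no trial section is adequate; enlarge the candidates")

# Illustrative use with made-up numbers (moments in kN*m):
trial_sections = [{"name": "B1", "Mn": 180.0}, {"name": "B2", "Mn": 240.0}]
pick = choose_section(trial_sections, factored_load=200.0,
                      nominal_strength=lambda s: s["Mn"])
print(pick["name"])   # "B2": the first section with 0.9*Mn >= 200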
2. Earthwork

Because earthmoving methods and costs change more quickly than those in any other branch of civil engineering, this is a field where there are real opportunities for the enthusiast. In 1935 most of the methods now in use for carrying and excavating earth with rubber-tyred equipment did not exist. Most earth was moved by narrow rail track, now relatively rare, and the main methods of excavation, with face shovel, backacter, dragline or grab, though still widely used, are only a few of the many current methods. To keep his knowledge of earthmoving equipment up to date, an engineer must therefore spend time studying modern machines. Generally the only reliable up-to-date information on excavators, loaders and transport is obtainable from the makers.

Earthworks or earthmoving means cutting into ground where its surface is too high (cuts), and dumping the earth in other places where the surface is too low (fills). To reduce earthwork costs, the volume of the fills should be equal to the volume of the cuts, and wherever possible the cuts should be placed near to fills of equal volume so as to reduce transport and double handling of the fill. This work of earthwork design falls on the engineer who lays out the road, since it is the layout of the earthwork, more than anything else, that decides its cheapness. From the available maps and levels, the engineer must try to reach as many decisions as possible in the drawing office by drawing cross sections of the earthwork. On the site, when further information becomes available, he can make changes in his sections and layout, but the drawing office work will not have been lost; it will have helped him to reach the best solution in the shortest time.

The cheapest way of moving earth is to take it directly out of the cut and drop it as fill with the same machine. This is not always possible, but when it can be done it is ideal, being both quick and cheap. Draglines, bulldozers and face shovels can do this. The largest radius is obtained with the dragline, and the largest tonnage of earth is moved by the bulldozer, though only over short distances. The disadvantages of the dragline are that it must dig below itself, it cannot dig with force into compacted material, it cannot dig on steep slopes, and its dumping and digging are not accurate.

Face shovels are between bulldozers and draglines, having a larger radius of action than bulldozers but less than draglines. They are able to dig into a vertical cliff face in a way that would be dangerous for a bulldozer operator and impossible for a dragline. Face shovels cannot dig much below the level of their tracks [sentence reconstructed from a garbled source]; for deep digs in compact material a backacter is most useful, but its dumping radius is considerably less than that of the same excavator fitted with a face shovel.

Rubber-tyred bowl scrapers are indispensable for fairly level digging where the distance of transport is too much for a dragline or face shovel. They can dig the material (though only below themselves) to a fairly flat surface, carry it hundreds of meters if need be, then drop it and level it roughly during the dumping. For hard digging it is often found economical to keep a pusher tractor (wheeled or tracked) on the digging site, to push each scraper as it returns to dig. As soon as the scraper is full, the pusher tractor returns to the beginning of the dig to help the next scraper.

Bowl scrapers are often extremely powerful machines; many makers build scrapers of 8 cubic meters struck capacity, which carry 10 m³ heaped. The largest self-propelled scrapers are of 19 m³ struck capacity (25 m³ heaped) and are driven by a tractor engine of 430 horsepower.

Dumpers are probably the commonest rubber-tyred transport, since they can also conveniently be used for carrying concrete or other building materials. Dumpers have the earth container over the front axle on large rubber-tyred wheels, and the container tips forwards on most types, though in articulated dumpers the direction of tip can be widely varied. The smallest dumpers have a capacity of about m³, and the largest standard types are of about m³; special types include the self-loading dumper of up to 4 m³ and the articulated type of about m³ [the capacity figures are missing in the source]. The distinction between dumpers and dump trucks must be remembered: dumpers tip forwards and the driver sits behind the load, whereas dump trucks are heavy, strengthened tipping lorries in which the driver travels in front of the load and the load is dumped behind him, so they are sometimes called rear-dump trucks.

3. Safety of Structures

The principal scope of specifications is to provide general principles and computational methods in order to verify the safety of structures.
The "safety factor", which according to modern trends is independent of the nature and combination of the materials used, can usually be defined as the ratio between the conditions at failure and the conditions in service. This ratio is also proportional to the inverse of the probability (risk) of failure of the structure.

Failure has to be considered not only as the overall collapse of the structure but also as unserviceability or, according to a more precise and common definition, as the reaching of a "limit state" which causes the construction not to accomplish the task it was designed for. There are two categories of limit state:

(1) Ultimate limit states, which correspond to the highest value of the load-bearing capacity. Examples include local buckling or global instability of the structure; failure of some sections and subsequent transformation of the structure into a mechanism; failure by fatigue; elastic or plastic deformation or creep that causes a substantial change of the geometry of the structure; and sensitivity of the structure to alternating loads, to fire and to explosions.

(2) Service limit states, which are functions of the use and durability of the structure. Examples include excessive deformations and displacements without instability; early or excessive cracks; large vibrations; and corrosion.

Computational methods used to verify structures with respect to the different safety conditions can be separated into:

(1) Deterministic methods, in which the main parameters are considered as nonrandom parameters.

(2) Probabilistic methods, in which the main parameters are considered as random parameters.

Alternatively, with respect to the different use of factors of safety, computational methods can be separated into:

(1) The allowable stress method, in which the stresses computed under maximum loads are compared with the strength of the material reduced by given safety factors.

(2) The limit states method, in which the structure may be proportioned on the basis of its maximum strength. This strength, as determined by rational analysis, shall not be less than that required to support a factored load equal to the sum of the factored live load and dead load (ultimate limit state). The stresses corresponding to working (service) conditions with unfactored live and dead loads are compared with prescribed values (service limit state).

From the four possible combinations of the first two and second two methods, we can obtain some useful computational methods. Generally, two combinations prevail:

(1) deterministic methods, which make use of allowable stresses;

(2) probabilistic methods, which make use of limit states.

The main advantage of probabilistic approaches is that, at least in theory, it is possible to take into account scientifically all the random factors of safety, which are then combined to define the safety factor.
Probabilistic approaches depend upon:

(1) the random distribution of the strength of materials with respect to the conditions of fabrication and erection (scatter of the values of mechanical properties throughout the structure);

(2) the uncertainty of the geometry of the cross-sections and of the structure (faults and imperfections due to fabrication and erection of the structure);

(3) the uncertainty of the predicted live loads and dead loads acting on the structure;

(4) the uncertainty related to the approximation of the computational method used (deviation of the actual stresses from the computed stresses).

Furthermore, probabilistic theories mean that the allowable risk can be based on several factors, such as:

(1) the importance of the construction and the gravity of the damage caused by its failure;

(2) the number of human lives which can be threatened by this failure;

(3) the possibility and/or likelihood of repairing the structure;

(4) the predicted life of the structure.

All these factors are related to economic and social considerations such as:

(1) the initial cost of the construction;

(2) the amortization funds for the duration of the construction;

(3) the cost of physical and material damage due to the failure of the construction;

(4) the adverse impact on society;

(5) moral and psychological views.

The definition of all these parameters, for a given safety factor, allows construction at optimum cost. However, the difficulty of carrying out a complete probabilistic analysis has to be taken into account. For such an analysis, the laws of the distribution of the live load and its induced stresses, of the scatter of the mechanical properties of materials, and of the geometry of the cross-sections and the structure have to be known. Furthermore, it is difficult to interpret the interaction between the law of distribution of strength and that of stresses, because both depend upon the nature of the material, on the cross-sections, and upon the load acting on the structure. These practical difficulties can be overcome in two ways. The first is to apply different safety factors to the material and to the loads, without necessarily adopting the probabilistic criterion. The second is an approximate probabilistic method which introduces some simplifying assumptions (semi-probabilistic methods).
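As a small illustration of the limit states method described above, the sketch below (Python; not part of the original text) checks one factored-load combination against a design strength. The load factors 1.2 and 1.6 and the resistance factor 0.9 are illustrative assumptions in the spirit of semi-probabilistic design, not values taken from the text.

def ultimate_limit_state_ok(dead, live, nominal_resistance,
                            gamma_d=1.2, gamma_l=1.6, phi=0.9):
    """Limit states check: factored demand <= factored (design) strength.
    gamma_d and gamma_l amplify the loads; phi reduces the nominal
    resistance. All factor values here are illustrative assumptions."""
    demand = gamma_d * dead + gamma_l * live
    return phi * nominal_resistance >= demand

def allowable_stress_ok(stress_under_max_load, material_strength,
                        safety_factor=2.0):
    """Allowable stress check: working stress <= strength / safety factor."""
    return stress_under_max_load <= material_strength / safety_factor

# Example with made-up numbers (moments in kN*m, stresses in MPa):
print(ultimate_limit_state_ok(dead=100.0, live=60.0, nominal_resistance=250.0))
print(allowable_stress_ok(stress_under_max_load=110.0, material_strength=250.0))
# True, True: 0.9*250 >= 216 and 110 <= 125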

Traffic Flow


Network impacts of a road capacity reduction: Empirical analysis and model predictions

David Watling a,*, David Milne a, Stephen Clark b
a Institute for Transport Studies, University of Leeds, Woodhouse Lane, Leeds LS2 9JT, UK
b Leeds City Council, Leonardo Building, 2 Rossington Street, Leeds LS2 8HD, UK

Article history: Received 24 May 2010; received in revised form 15 July 2011; accepted 7 September 2011. doi:10.1016/j.tra.2011.09.010

Keywords: Traffic assignment; Network models; Equilibrium; Route choice; Day-to-day variability

Abstract

In spite of their widespread use in policy design and evaluation, relatively little evidence has been reported on how well traffic equilibrium models predict real network impacts. Here we present what we believe to be the first paper that together analyses the explicit impacts on observed route choice of an actual network intervention and compares this with the before-and-after predictions of a network equilibrium model. The analysis is based on the findings of an empirical study of the travel time and route choice impacts of a road capacity reduction. Time-stamped, partial licence plates were recorded across a series of locations, over a period of days both with and without the capacity reduction, and the data were 'matched' between locations using special-purpose statistical methods. Hypothesis tests were used to identify statistically significant changes in travel times and route choice between the periods of days with and without the capacity reduction. A traffic network equilibrium model was then independently applied to the same scenarios, and its predictions compared with the empirical findings. From a comparison of route choice patterns, a particularly influential spatial effect was revealed of the parameter specifying the relative values of distance and travel time assumed in the generalised cost equations. When this parameter was 'fitted' to the data without the capacity reduction, the network model broadly predicted the route choice impacts of the capacity reduction, but with other values it was seen to perform poorly. The paper concludes by discussing the wider practical and research implications of the study's findings.
© 2011 Elsevier Ltd. All rights reserved.

1. Introduction

It is well known that altering the localised characteristics of a road network, such as a planned change in road capacity, will tend to have both direct and indirect effects. The direct effects are imparted on the road itself, in terms of how it can deal with a given demand flow entering the link, with an impact on travel times to traverse the link at a given demand flow level. The indirect effects arise because drivers change their travel decisions, such as choice of route, in response to the altered travel times. There are many practical circumstances in which it is desirable to forecast these direct and indirect impacts in the context of a systematic change in road capacity. For example, in the case of proposed road widening or junction improvements, there is typically a need to justify economically the required investment in terms of the benefits that will likely accrue. There are also several examples in which it is relevant to examine the impacts of road capacity reduction. For example, if one proposes to reallocate road space between alternative modes, such as increased bus and cycle lane provision or a pedestrianisation scheme, then typically a range of alternative designs exist which may differ in their ability to accommodate efficiently the new traffic and routing patterns.
Through mathematical modelling, the alternative designs may be tested in a simulated environment and the most efficient selected for implementation. Even after a particular design is selected, mathematical models may be used to adjust signal timings to optimise the use of the transport system. Road capacity may also be affected periodically by maintenance to essential services (e.g. water, electricity) or to the road itself, and often this can lead to restricted access over a period of days and weeks. In such cases, planning authorities may use modelling to devise suitable diversionary advice for drivers, and to plan any temporary changes to traffic signals or priorities. Berdica (2002) and Taylor et al. (2006) suggest more of a pro-active approach, proposing that models should be used to test networks for potential vulnerability, before any reduction materialises, identifying links which, if reduced in capacity over an extended period,¹ would have a substantial impact on system performance.

¹ Clearly, more sporadic and less predictable reductions in capacity may also occur, such as in the case of breakdowns and accidents, and environmental factors such as severe weather, floods or landslides (see for example Iida, 1999), but the responses to such cases are outside the scope of the present paper.

There are therefore practical requirements for a suitable network model of the travel time and route choice impacts of capacity changes. The dominant method that has emerged for this purpose over the last decades is clearly the network equilibrium approach, as proposed by Beckmann et al. (1956) and developed in several directions since. The basis of using this approach is the proposition of what are believed to be 'rational' models of behaviour and other system components (e.g. link performance functions), with site-specific data used to tailor such models to particular case studies. Cross-sectional forecasts of network performance at specific road capacity states may then be made, such that at the time of any 'snapshot' forecast, drivers' route choices are in some kind of individually-optimum state. In this state, drivers cannot improve their route selection by a unilateral change of route, at the snapshot travel time levels.

The accepted practice is to 'validate' such models on a case-by-case basis, by ensuring that the model, when supplied with a particular set of parameters, input network data and input origin-destination demand data, reproduces currently measured mean link traffic flows and mean journey times, on a sample of links, to some degree of accuracy (see for example the practical guidelines in TMIP (1997) and Highways Agency (2002)). This kind of aggregate-level, cross-sectional validation against existing conditions persists across a range of network modelling paradigms, ranging from static and dynamic equilibrium (Florian and Nguyen, 1976; Leonard and Tough, 1979; Stephenson and Teply, 1984; Matzoros et al., 1987; Janson et al., 1986; Janson, 1991) to micro-simulation approaches (Laird et al., 1999; Ben-Akiva et al., 2000; Keenan, 2005).

While such an approach is plausible, it leaves many questions unanswered, and we would particularly highlight two:

1. The process of calibration and validation of a network equilibrium model may typically occur in a cycle. That is to say, having initially calibrated a model using the base data sources, if the subsequent validation reveals substantial discrepancies in some part of the network, it is then natural to adjust the model parameters (including perhaps even the OD matrix elements) until the model outputs better reflect the validation data.² In this process, then, we allow the adjustment of potentially a large number of network parameters
and input data in order to replicate the validation data, yet these data themselves are highly aggregate, existing only at the link level. To be clear here, we are talking about a level of coarseness even greater than that in aggregate choice models, since we cannot even infer from link-level data the aggregate shares on alternative routes or OD movements. The question that arises is then: how many different combinations of parameters and input data values might lead to a similar link-level validation, and even if we knew the answer to this question, how might we choose between these alternative combinations? In practice, this issue is typically neglected, meaning that the 'validation' is a rather weak test of the model.

² Some authors have suggested more systematic, bi-level type optimization processes for this fitting process (e.g. Xu et al., 2004), but this has no material effect on the essential points above.

2. Since the data are cross-sectional in time (i.e. the aim is to reproduce current base conditions in equilibrium), then in spite of the large efforts required in data collection, no empirical evidence is routinely collected regarding the model's main purpose, namely its ability to predict changes in behaviour and network performance under changes to the network/demand. This issue is exacerbated by the aggregation concerns in point 1: the 'ambiguity' in choosing appropriate parameter values to satisfy the aggregate, link-level, base validation strengthens the need to verify independently that, with the selected parameter values, the model responds reliably to changes. Although such problems, of fitting equilibrium models to cross-sectional data, have long been recognised by practitioners and academics (see, e.g., Goodwin, 1998), the approach described above remains the state of practice.

Having identified these two problems, how might we go about addressing them? One approach to the first problem would be to return to the underlying formulation of the network model, and instead require a model definition that permits analysis by statistical inference techniques (see for example Nakayama et al., 2009). In this way, we may potentially exploit more information in the variability of the link-level data, with well-defined notions (such as maximum likelihood) allowing a systematic basis for selection between alternative parameter value combinations.

However, this approach still uses rather limited data, and it is natural to question not just the model but also the data that we use to calibrate and validate it. Yet this is not altogether straightforward to resolve. As Mahmassani and Jou (2000) remarked: 'A major difficulty ... is obtaining observations of actual trip-maker behaviour, at the desired level of richness, simultaneously with measurements of prevailing conditions'. For this reason, several authors have turned to simulated gaming environments and/or stated preference techniques to elicit information on drivers' route choice behaviour
(e.g. Mahmassani and Herman, 1990; Iida et al., 1992; Khattak et al., 1993; Vaughn et al., 1995; Wardman et al., 1997; Jou, 2001; Chen et al., 2001). This provides potentially rich information for calibrating complex behavioural models, but has the obvious limitation that it is based on imagined rather than real route choice situations.

Aside from its common focus on hypothetical decision situations, this latter body of work also signifies a subtle change of emphasis in the treatment of the overall network calibration problem. Rather than viewing the network equilibrium calibration process as a whole, the focus is on particular components of the model; in the cases above, the focus is on the component concerned with how drivers make route decisions. If we are prepared to make such a component-wise analysis, then certainly there exists abundant empirical evidence in the literature, with a history across a number of decades of research into issues such as the factors affecting drivers' route choice (e.g. Wachs, 1967; Huchingson et al., 1977; Abu-Eisheh and Mannering, 1987; Duffell and Kalombaris, 1988; Antonisse et al., 1989; Bekhor et al., 2002; Liu et al., 2004), the nature of travel time variability (e.g. Smeed and Jeffcoate, 1971; Montgomery and May, 1987; May et al., 1989; McLeod et al., 1993), and the factors affecting traffic flow variability (Bonsall et al., 1984; Huff and Hanson, 1986; Ribeiro, 1994; Rakha and Van Aerde, 1995; Fox et al., 1998).

While these works provide useful evidence for the network equilibrium calibration problem, they do not provide a framework within which we can judge the overall 'fit' of a particular network model in the light of uncertainty, ambient variation and systematic changes in network attributes, be they related to the OD demand, the route choice process, travel times or the network data. Moreover, such data do nothing to address the second point made above, namely the question of how to validate the model forecasts under systematic changes to its inputs. The studies of Mannering et al. (1994) and Emmerink et al. (1996) are distinctive in this context in that they address some of the empirical concerns expressed in the context of travel information impacts, but their work stops at the stage of the empirical analysis, without a link being made to network prediction models. The focus of the present paper therefore is both to present the findings of an empirical study and to link this empirical evidence to network forecasting models.

More recently, Zhu et al. (2010) analysed several sources of data for evidence of the traffic and behavioural impacts of the I-35W bridge collapse in Minneapolis. Most pertinent to the present paper is their location-specific analysis of link flows at 24 locations; by computing the root mean square difference in flows between successive weeks, and comparing the trend for 2006 with that for 2007 (the latter with the bridge collapse), they observed an apparent transient impact of the bridge collapse. They also showed there was no statistically significant evidence of a difference in the pattern of flows in the period September-November 2007 (a period starting 6 weeks after the bridge collapse), when compared with the corresponding period in 2006. They suggested that this was indicative of the length of a 're-equilibration process' in a conceptual sense, though they did not explicitly compare their empirical findings with those of a network equilibrium model.
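The week-to-week comparison attributed above to Zhu et al. (2010) reduces to a simple statistic. Here is a minimal sketch (Python; our illustration, not the authors' code) of the root mean square difference in link flows between successive weeks:

import math

def rms_week_difference(flows_week1, flows_week2):
    """Root mean square difference between two vectors of link flows
    observed at the same locations in successive weeks (one flow value
    per surveyed location)."""
    assert len(flows_week1) == len(flows_week2)
    sq = [(a - b) ** 2 for a, b in zip(flows_week1, flows_week2)]
    return math.sqrt(sum(sq) / len(sq))

# Made-up flows at three locations in two successive weeks:
print(rms_week_difference([1200, 950, 400], [1100, 1000, 380]))  # ~65.6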
The structure of the remainder of the paper is as follows. In Section 2 we describe the process of selecting the real-life problem to analyse, together with the details and rationale behind the survey design. Following this, Section 3 describes the statistical techniques used to extract information on travel times and routing patterns from the survey data. Statistical inference is then considered in Section 4, with the aim of detecting statistically significant explanatory factors. In Section 5 comparisons are made between the observed network data and those predicted by a network equilibrium model. Finally, in Section 6 the conclusions of the study are highlighted, and recommendations made for both practice and future research.

2. Experimental design

The ultimate objective of the study was to compare actual data with the output of a traffic network equilibrium model, specifically in terms of how well the equilibrium model was able to correctly forecast the impact of a systematic change applied to the network. While a wealth of surveillance data on link flows and travel times is routinely collected by many local and national agencies, we did not believe that such data would be sufficiently informative for our purposes. The reason is that while such data can often be disaggregated down to small time step resolutions, the data remain aggregate in terms of what they reveal about driver response, since they do not provide the opportunity to trace vehicles explicitly (even in aggregate form) across more than one location. The effect is that observed differences in link flows might be attributed to many potential causes: it is especially difficult to separate out, say, ambient daily variation in the trip demand matrix from systematic changes in route choice, since both may give rise to similar impacts on observed link flow patterns across recorded sites. While methods do exist for reconstructing OD and network route patterns from observed link data (e.g. Yang et al., 1994), these are typically based on the premise of a valid network equilibrium model: in that case, then, the data would not be able to give independent information on the validity of the network equilibrium approach.

For these reasons it was decided to design and implement a purpose-built survey. However, it would not be efficient to monitor a network extensively in order to wait for something to happen, and we therefore required advance notification of some planned intervention. For this reason we chose to study the impact of urban maintenance work affecting the roads, which UK local government authorities organise on an annual basis as part of their 'Local Transport Plan'. The city council of York, a historic city in the north of England, agreed to inform us of their plans and to assist in the subsequent data collection exercise.

Based on the interventions planned by York CC, the list of candidate studies was narrowed by considering factors such as each intervention's propensity to induce significant re-routing and its impact on the peak periods. Effectively the motivation here was to identify interventions that were likely to have a large impact on delays, since route choice impacts would then likely be more significant and more easily distinguished from ambient variability. This was notably at odds with the objectives of York CC, in that they wished to minimise disruption, and so where possible York CC planned interventions to take place at times of day and of the year when impacts were minimised; our own requirement therefore greatly reduced the candidate set of studies to monitor.
of the year where impacts were minimised;therefore our own requirement greatly reduced the candidate set of studies to monitor.A further consideration in study selection was its timing in the year for scheduling before/after surveys so to avoid confounding effects of known significant‘seasonal’demand changes,e.g.the impact of the change between school semesters and holidays.A further consideration was York’s role as a major tourist attraction,which is also known to have a seasonal trend.However,the impact on car traffic is relatively small due to the strong promotion of public trans-port and restrictions on car travel and parking in the historic centre.We felt that we further mitigated such impacts by sub-sequently choosing to survey in the morning peak,at a time before most tourist attractions are open.Aside from the question of which intervention to survey was the issue of what data to collect.Within the resources of the project,we considered several options.We rejected stated preference survey methods as,although they provide a link to personal/socio-economic drivers,we wanted to compare actual behaviour with a network model;if the stated preference data conflicted with the network model,it would not be clear which we should question most.For revealed preference data, options considered included(i)self-completion diaries(Mahmassani and Jou,2000),(ii)automatic tracking through GPS(Jan et al.,2000;Quiroga et al.,2000;Taylor et al.,2000),and(iii)licence plate surveys(Schaefer,1988).Regarding self-comple-tion surveys,from our own interview experiments with self-completion questionnaires it was evident that travellersfind it relatively difficult to recall and describe complex choice options such as a route through an urban network,giving the po-tential for significant errors to be introduced.The automatic tracking option was believed to be the most attractive in this respect,in its potential to accurately map a given individual’s journey,but the negative side would be the potential sample size,as we would need to purchase/hire and distribute the devices;even with a large budget,it is not straightforward to identify in advance the target users,nor to guarantee their cooperation.Licence plate surveys,it was believed,offered the potential for compromise between sample size and data resolution: while we could not track routes to the same resolution as GPS,by judicious location of surveyors we had the opportunity to track vehicles across more than one location,thus providing route-like information.With time-stamped licence plates, the matched data would also provide journey time information.The negative side of this approach is the well-known poten-tial for significant recording errors if large sample rates are required.Our aim was to avoid this by recording only partial licence plates,and employing statistical methods to remove the impact of‘spurious matches’,i.e.where two different vehi-cles with the same partial licence plate occur at different locations.Moreover,extensive simulation experiments(Watling,1994)had previously shown that these latter statistical methods were effective in recovering the underlying movements and travel times,even if only a relatively small part of the licence plate were recorded,in spite of giving a large potential for spurious matching.We believed that such an approach reduced the opportunity for recorder error to such a level to suggest that a100%sample rate of vehicles passing may be feasible.This was tested in a pilot study conducted by the project team,with 
dictaphones used to record a100%sample of time-stamped, partial licence plates.Independent,duplicate observers were employed at the same location to compare error rates;the same study was also conducted with full licence plates.The study indicated that100%surveys with dictaphones would be feasible in moderate trafficflow,but only if partial licence plate data were used in order to control observation errors; for higherflow rates or to obtain full number plate data,video surveys should be considered.Other important practical les-sons learned from the pilot included the need for clarity in terms of vehicle types to survey(e.g.whether to include motor-cycles and taxis),and of the phonetic alphabet used by surveyors to avoid transcription ambiguities.Based on the twin considerations above of planned interventions and survey approach,several candidate studies were identified.For a candidate study,detailed design issues involved identifying:likely affected movements and alternative routes(using local knowledge of York CC,together with an existing network model of the city),in order to determine the number and location of survey sites;feasible viewpoints,based on site visits;the timing of surveys,e.g.visibility issues in the dark,winter evening peak period;the peak duration from automatic trafficflow data;and specific survey days,in view of public/school holidays.Our budget led us to survey the majority of licence plate sites manually(partial plates by audio-tape or,in lowflows,pen and paper),with video surveys limited to a small number of high-flow sites.From this combination of techniques,100%sampling rate was feasible at each site.Surveys took place in the morning peak due both to visibility considerations and to minimise conflicts with tourist/special event traffic.From automatic traffic count data it was decided to survey the period7:45–9:15as the main morning peak period.This design process led to the identification of two studies:2.1.Lendal Bridge study(Fig.1)Lendal Bridge,a critical part of York’s inner ring road,was scheduled to be closed for maintenance from September2000 for a duration of several weeks.To avoid school holidays,the‘before’surveys were scheduled for June and early September.It was decided to focus on investigating a significant southwest-to-northeast movement of traffic,the river providing a natural barrier which suggested surveying the six river crossing points(C,J,H,K,L,M in Fig.1).In total,13locations were identified for survey,in an attempt to capture traffic on both sides of the river as well as a crossing.2.2.Fishergate study(Fig.2)The partial closure(capacity reduction)of the street known as Fishergate,again part of York’s inner ring road,was scheduled for July2001to allow repairs to a collapsed sewer.Survey locations were chosen in order to intercept clockwiseFig.1.Intervention and survey locations for Lendal Bridge study.around the inner ring road,this being the direction of the partial closure.A particular aim wasFulford Road(site E in Fig.2),the main radial affected,with F and K monitoring local diversion I,J to capture wider-area diversion.studies,the plan was to survey the selected locations in the morning peak over a period of approximately covering the three periods before,during and after the intervention,with the days selected so holidays or special events.Fig.2.Intervention and survey locations for Fishergate study.In the Lendal Bridge study,while the‘before’surveys proceeded as planned,the bridge’s actualfirst day of closure on Sep-tember11th2000also 
marked the beginning of the UK fuel protests(BBC,2000a;Lyons and Chaterjee,2002).Trafficflows were considerably affected by the scarcity of fuel,with congestion extremely low in thefirst week of closure,to the extent that any changes could not be attributed to the bridge closure;neither had our design anticipated how to survey the impacts of the fuel shortages.We thus re-arranged our surveys to monitor more closely the planned re-opening of the bridge.Unfor-tunately these surveys were hampered by a second unanticipated event,namely the wettest autumn in the UK for270years and the highest level offlooding in York since records began(BBC,2000b).Theflooding closed much of the centre of York to road traffic,including our study area,as the roads were impassable,and therefore we abandoned the planned‘after’surveys. As a result of these events,the useable data we had(not affected by the fuel protests orflooding)consisted offive‘before’days and one‘during’day.In the Fishergate study,fortunately no extreme events occurred,allowing six‘before’and seven‘during’days to be sur-veyed,together with one additional day in the‘during’period when the works were temporarily removed.However,the works over-ran into the long summer school holidays,when it is well-known that there is a substantial seasonal effect of much lowerflows and congestion levels.We did not believe it possible to meaningfully isolate the impact of the link fully re-opening while controlling for such an effect,and so our plans for‘after re-opening’surveys were abandoned.3.Estimation of vehicle movements and travel timesThe data resulting from the surveys described in Section2is in the form of(for each day and each study)a set of time-stamped,partial licence plates,observed at a number of locations across the network.Since the data include only partial plates,they cannot simply be matched across observation points to yield reliable estimates of vehicle movements,since there is ambiguity in whether the same partial plate observed at different locations was truly caused by the same vehicle. 
Indeed,since the observed system is‘open’—in the sense that not all points of entry,exit,generation and attraction are mon-itored—the question is not just which of several potential matches to accept,but also whether there is any match at all.That is to say,an apparent match between data at two observation points could be caused by two separate vehicles that passed no other observation point.Thefirst stage of analysis therefore applied a series of specially-designed statistical techniques to reconstruct the vehicle movements and point-to-point travel time distributions from the observed data,allowing for all such ambiguities in the data.Although the detailed derivations of each method are not given here,since they may be found in the references provided,it is necessary to understand some of the characteristics of each method in order to interpret the results subsequently provided.Furthermore,since some of the basic techniques required modification relative to the published descriptions,then in order to explain these adaptations it is necessary to understand some of the theoretical basis.3.1.Graphical method for estimating point-to-point travel time distributionsThe preliminary technique applied to each data set was the graphical method described in Watling and Maher(1988).This method is derived for analysing partial registration plate data for unidirectional movement between a pair of observation stations(referred to as an‘origin’and a‘destination’).Thus in the data study here,it must be independently applied to given pairs of observation stations,without regard for the interdependencies between observation station pairs.On the other hand, it makes no assumption that the system is‘closed’;there may be vehicles that pass the origin that do not pass the destina-tion,and vice versa.While limited in considering only two-point surveys,the attraction of the graphical technique is that it is a non-parametric method,with no assumptions made about the arrival time distributions at the observation points(they may be non-uniform in particular),and no assumptions made about the journey time probability density.It is therefore very suitable as afirst means of investigative analysis for such data.The method begins by forming all pairs of possible matches in the data,of which some will be genuine matches(the pair of observations were due to a single vehicle)and the remainder spurious matches.Thus, for example,if there are three origin observations and two destination observations of a particular partial registration num-ber,then six possible matches may be formed,of which clearly no more than two can be genuine(and possibly only one or zero are genuine).A scatter plot may then be drawn for each possible match of the observation time at the origin versus that at the destination.The characteristic pattern of such a plot is as that shown in Fig.4a,with a dense‘line’of points(which will primarily be the genuine matches)superimposed upon a scatter of points over the whole region(which will primarily be the spurious matches).If we were to assume uniform arrival rates at the observation stations,then the spurious matches would be uniformly distributed over this plot;however,we shall avoid making such a restrictive assumption.The method begins by making a coarse estimate of the total number of genuine matches across the whole of this plot.As part of this analysis we then assume knowledge of,for any randomly selected vehicle,the probabilities:h k¼Prðvehicle is of the k th type of partial registration 
plateÞðk¼1;2;...;mÞwhereX m k¼1h k¼1172 D.Watling et al./Transportation Research Part A46(2012)167–189。


住建部印发_建筑施工企业负责人及项目负责人施工现场带班暂行办法_

住建部印发_建筑施工企业负责人及项目负责人施工现场带班暂行办法_

第一条为进一步加强建筑施工现场质量安全管理工作,根据《国务院关于进一步加强企业安全生产工作的通知》(国发[2010]23号)要求和有关法规规定,制定本办法。

第二条本办法所称的建筑施工企业负责人,是指企业的法定代表人、总经理、主管质量安全和生产工作的副总经理、总工程师和副总工程师。

本办法所称的项目负责人,是指工程项目的项目经理。

本办法所称的施工现场,是指进行房屋建筑和市政工程施工作业活动的场所。

第三条建筑施工企业应当建立企业负责人及项目负责人施工现场带班制度,并严格考核。

施工现场带班制度应明确其工作内容、职责权限和考核奖惩等要求。

第四条施工现场带班包括企业负责人带班检查和项目负责人带班生产。

企业负责人带班检查是指由建筑施工企业负责人带队实施对工程项目质量安全生产状况及项目负责人带班生产情况的检查。

项目负责人带班生产是指项目负责人在施工现场组织协调工程项目的质量安全生产活动。

第五条建筑施工企业法定代表人是落实企业负责人及项目负责人施工现场带班制度的第一责任人,对落实带班制度全面负责。

第六条建筑施工企业负责人要定期带班检查,每月检查时间不少于其工作日的25%。

建筑施工企业负责人带班检查时,应认真做好检查记录,并分别在企业和工程项目存档备查。

第七条工程项目进行超过一定规模的危险性较大的分部分项工程施工时,建筑施工企业负责人应到施工现场进行带班检查。

对于有分公司(非独立法人)的企业集团,集团负责人因故不能到现场的,可书面委托工程所在地的分公司负责人对施工现场进行带班检查。

本条所称“超过一定规模的危险性较大的分部分项工程”详见《关于印发<危险性较大的分部分项工程安全管理办法>的通知》(建质[2009]87号)的规定。

第八条工程项目出现险情或发现重大隐患时,建筑施工企业负责人应到施工现场带班检查,督促工程项目进行整改,及时消除险情和隐患。

第九条项目负责人是工程项目质量安全管理的第一责任人,应对工程项目落实带班制度负责。

Construction of a Temporal Coherency Preserving Dynamic Data Dissemination Network

Shweta Agrawal (shweta@it.iitb.ac.in), Krithi Ramamritham (krithi@cse.iitb.ac.in), Shetal Shah (shetals@cse.iitb.ac.in)
Indian Institute of Technology, Bombay

Abstract

In this paper, we discuss various techniques for the efficient organization of a temporal coherency preserving dynamic data dissemination network. The network consists of sources of dynamically changing data, repositories that replicate this data, and clients. Given the temporal coherency properties of the data available at various repositories, we suggest methods to intelligently choose a repository to serve a new client request. The goal is to support as many clients as possible from the given network. Secondly, we propose strategies to decide what data should reside on the repositories, given the data coherency needs of the clients.

We model the problem of selecting repositories to serve each of the clients as a linear optimization problem, and derive its objective function and constraints. In view of the complexity and infeasibility of using this solution in practical scenarios, we also suggest a heuristic solution. Experimental evaluation, using real world data, demonstrates that the fidelity achieved by clients using the heuristic algorithm is close to that achieved using linear optimization. To improve the fidelity further through better load sharing between repositories, we propose an adaptive algorithm to adjust the resource provisions of repositories according to their recent response times.

It is often advantageous to reorganize the data at the repositories according to the needs of clients. To this end, we propose two strategies based on reducing the communication and computational overheads. We evaluate and compare the two strategies analytically, using the expected response time for an update at repositories, and by simulation, using the loss of fidelity at clients as our performance measure. The results suggest that a considerable improvement in fidelity can be achieved by judicious reorganization.

1. Introduction

The Internet has grown in popularity from being a mere facility to a necessity. This raises the need for efficient and scalable dissemination of Internet data to clients all over the globe. The problem becomes even more challenging when the data is dynamically changing and is used for online decision making. Examples of such time critical data are many: stock prices on finance sites, weather information, sports data, sensor data, etc. We focus on ways and means to distribute such dynamically changing (streaming) data to a large number of users with high accuracy, efficiency, and scalability.

A natural solution to this problem of efficient distribution is to introduce repositories that replicate the data between the sources and clients. These repositories can serve the clients geographically closer to them and reduce the load on sources. But, for rapidly changing data, the overheads of achieving replication can be very large.

Fortunately, we can and should exploit the fact that different users have different requirements for the accuracy of data. The user can specify the bound on the tolerable imprecision for each requested data-item; this can be viewed as the coherency requirement associated with the data. For example, a stock broker might be concerned with every cent of change in stock prices, while a casual user might be content with a much lower accuracy. The goal is to provide a user with data at the desired accuracy.
The fraction of total time for which the client receives the data at the desired precision is called fidelity. (Formal definitions of coherency requirement and fidelity are given in Section 2.1.)

Meeting user coherency requirements when the data is changing rapidly and unpredictably is a challenging problem. We had previously developed an algorithm to construct a cooperative repository network for dynamic data which is coherency-preserving, resilient to failures, and scalable to a large number of data-items and clients [13]. The focus of this algorithm, called DiTA, is on maintaining coherency of dynamic data-items in a network of repositories: data disseminated (pushed) to one repository is filtered by that repository and disseminated to the repositories dependent on it, according to their respective data and coherency requirements. Section 2.2 gives a brief description of the DiTA algorithm.

This paper makes two key contributions.

1. Given a repository network, we solve the problem of determining the repository to which a client should connect in order to satisfy its data needs.
2. Given the data and coherency needs of clients in the network, we propose techniques to reorganize the repository network, so that the clients receive the data at a better fidelity.

Figure 1 summarizes the contributions of the paper. Our algorithms assume accurate, global knowledge, at the source, about the load on repositories and the distances of clients to repositories. The design and evaluation of a scalable, distributed architecture for implementing this algorithm is a topic for future research.

Figure 1: Construction of the dynamic data dissemination tree: summary of contributions

1.1. Assigning clients to repositories

In Section 3.1, we formulate the problem of assigning client requests to repositories as a linear optimization problem. Given the complexity of the LP solution (exponential in the worst case), we also offer a heuristic solution in Section 3.2. In this solution, the source of a data-item maintains information about the availability and coherency values of the data-item on all the repositories in the network, using which it chooses the appropriate repository to serve a request. Experimental evaluation using real world traces, in Section 3.3, demonstrates that the fidelity achieved by clients using the heuristic assignment is close to that achieved using linear optimization.

Effective load sharing among repositories can improve the fidelity experienced at clients by decreasing the average computational delay at the repositories. The resource contribution limit, which denotes the maximum number of requests a repository can serve, helps in controlling the load at the repositories. In Section 3.4, we propose an algorithm for adaptive adjustment of the individual resource contribution limits of repositories during the assignment of client requests, for better load sharing.

1.2. Reorganizing the repository network

In a practical scenario, client needs change continuously: the data-items and the coherency requirements of the clients can change. The repositories in the network should be able to adapt to these changes and serve the data as per the current needs of the clients. In Section 4, we present and evaluate two strategies to reorganize the data served by the repositories according to the current clients' requirements.

The first strategy, called the Closest Repository algorithm, is based on reducing the fidelity loss due to network communication delay, by making the data required by a client available at a repository close to it. The second strategy, called the Divide and Conquer strategy, extends the Closest Repository approach by categorizing requests into multiple classes, according to their respective coherency requirements.
A fraction of the repositories is reserved for serving each class of requests. This strategy further improves fidelity by reducing the average computational delay at repositories. We evaluate and compare the two strategies analytically, by calculating the expected response time for updates at repositories, and by simulation, monitoring the loss of fidelity at clients. We observe that the fidelity achieved using the Divide and Conquer method depends on the relative size of the fraction of repositories reserved for serving each class of requests. We show that, using the fraction size at which the analytically calculated response time is minimum, the Divide and Conquer strategy gives a significant improvement in fidelity over the Closest Repository approach.

In Section 5, we present the overall algorithm for the construction of a coherency preserving dynamic data dissemination network, making use of the above algorithms and strategies.

In summary, this paper presents strategies for the efficient organization of a temporal coherency preserving dynamic data dissemination network. The word "dynamic" here applies to "data" as well as "network": we construct a network for the distribution of dynamically changing data to clients according to their coherency requirements, and the network adapts its organization dynamically according to the requirements of the clients.

We give an overview of the related work in Section 6, and present the conclusions and future work in Section 7.

2. Background: The Basic Framework and the DiTA Algorithm

2.1. Data coherency and overlay network

As shown in Figure 2, we build a push based data dissemination network of sources and repositories, with clients connecting to the repositories. To maintain coherency of the cached data at repositories, each data-item must be periodically refreshed with the copy at the source.

Figure 2: The network architecture

Let the client specify a coherency requirement $c$ for each data-item of interest. The value of $c$ denotes the maximum permissible deviation of the value of the data-item at the client from the value at the server, and thus constitutes the user-specified tolerance. Observe that $c$ can be specified in units of time, e.g., the data-item should never be out-of-sync by more than 5 minutes (a "time domain consistency requirement", similar to the data validity interval [11]), or in units of value, e.g., the temperature should never be out-of-sync by more than one degree (a "value domain consistency requirement"). In this paper, we only consider temporal coherency requirements specified in terms of the value of the object; maintaining coherency requirements in units of time is a simpler problem that requires less sophisticated techniques (e.g., push every 5 minutes).

Formally, let $S_x(t)$ and $U_x(t)$ denote the value of data-item $x$ at the source and the client, respectively, at time $t$. Then, to maintain coherence, we should have

$|U_x(t) - S_x(t)| \le c$

Empirically, the fidelity observed by a client can be defined as the total time for which the above inequality holds, normalized by the total length of observations. The goal of a good organization of the dissemination network is to provide high fidelity at low cost, where cost is measured in terms of the number of messages.
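To make the fidelity definition concrete, the following minimal sketch computes the empirical fidelity of one data-item from a pair of synchronized, timestamped traces. The piecewise-constant reading of the traces and all names and numbers are our illustrative assumptions, not artifacts of the study.

```python
def fidelity(times, source_vals, client_vals, c):
    """Fraction of observed time for which |U(t) - S(t)| <= c,
    treating both traces as piecewise constant between samples."""
    in_sync = total = 0.0
    for i in range(len(times) - 1):
        dt = times[i + 1] - times[i]
        total += dt
        if abs(client_vals[i] - source_vals[i]) <= c:
            in_sync += dt
    return in_sync / total if total else 1.0

# A 0.05$ coherency requirement met on 3 of 4 unit intervals:
times  = [0, 1, 2, 3, 4]
source = [10.00, 10.02, 10.10, 10.12, 10.12]
client = [10.00, 10.00, 10.00, 10.12, 10.12]
print(fidelity(times, source, client, c=0.05))   # -> 0.75
```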
The source directly serves some of the repositories.These repositories in turn serve a subset of remaining reposito-ries and a subset of clients,such that the resulting network is in the form of a tree rooted at the source,and consisting of repositories and clients interested in.This tree is re-ferred as the dynamic data dissemination tree,or for. The children of a node in the tree are also called the depen-dents of the node.When a data change occurs at the source, it checks which of its direct or indirect dependents are in-terested in the change,and pushes the change to them.Each repository acts as afilter and sends only the updates of in-terest further down.2.2.DiTA:Data-item-at-a-Time AlgorithmHere is a short description of the DiTA algorithm[13]to insert a repository with given data-items and coherency re-quirements,in the overlay network.A repository inter-ested in data-item with coherency requests the source of for insertion.If the source has resources to service,it is made a child of the source in the for.Otherwise,the source determines the best subtree rooted at its children for the insertion of.The subtree is chosen such that the level of in the is the smallest possible and the communica-tion delay between and its parent is small.This is recur-sively applied to select subtrees in this subtree,till a repos-itory is chosen that has resources to serve.If the co-herency requirement of is less stringent than,pushes it down in the subtree and replaces.This ensures that the repositories with more stringent coherency serve reposito-ries with less stringent coherency in the for.DiTA requires very little book-keeping and experimental results in[13]show that it indeed produces of repositories that delivers data with highfidelity.3.Assigning clients to repositories in a givennetworkIn this section,we focus on the problem of assigning new client requests to an existing dynamic data dissemination network.If all the client requests were known in advance, the task of assignment could be seen as solving an optimiza-tion problem,as described in Section3.1,where the aver-agefidelity observed by clients has to be maximized,sub-ject to the constraints of data-availability and resource lim-itations at the repositories.In practice,however,the client requests are expected to arrive one at a time,and have to be assigned to some repository as soon as they come.More-over,the complexity of solving such an optimization prob-lem also makes it inappropriate for use as and when the re-quests arrive.Still,this solution can be used to estimate the goodness of the heuristic solution described in Section3.2, that handles one request at a time.In Section3.3,we eval-uate the performance of the heuristic assignment as com-pared to the linear optimization approach.Section3.4de-scribes an algorithm for adaptive adjustment of the resource contribution limits(See Section1.1)of repositories for a better load sharing.3.1.Assignment using linear optimization Assumptions1.Every repository has afixed,hard resource contribu-tion limit.2.The repository to client communication delay is negli-gible.1.3.The basic capacity of the repositories(computationaldelay for processing a single update)is same.Problem definition Given a set of repositories and the data-items served by each of them,and a set of clients with their respective data-item requirements,the aim of the problem is tofind a three dimensional0/1matrix that gives a map-ping between the repositories and the client,data-itemrequests.A“1”value at a position of recommends that 
the request for the data-item by client should be as-signed to the repository.Input information about the dissemination tree The follow-ing information about the of repositories and about the clients to be inserted is provided as input to the optimiza-tion problem.served:Two dimensional matrix giving the coherency values of data-items available at the repositories.A positive value of denotes the coherency value at which data-item is available at repository.A negative value indicates that does not serve.request:Two dimensional matrix giving the co-herency requirements of the data-items requested by the clients.A positive value of denotes the coherency with which client needs data-item.A negative value indicates that client has not re-quested.max4.Serving maximum possible requests:The total num-ber of requests served for a data-item must be equal to total number of requests made for that data-item(),whichever is smaller.This ensures that the system serves as many requests as possi-ble.5.Load balancing:If too many requests are assigned toa repository,the computational delay at the repository(to check and push the updates)will increase which will result in loss offidelity of the dependent clients.The number of requests should be balanced among repositories.But,the resource contribution limits of repositories can be different in which case they may not serve an equal number of requests,hence,some deviation must be allowed.We try to achieve load bal-ancing by introducing the following constraint:where,and D is the deviation from the average.The value of depends on the deviation ofwhereFor example,a client request for0.05$coherency can be served by0.05$or0.01$coherency data,either of thechoices leads to an addition of“1”to the objective function.There is no extra gain in the objective function by serving with more stringent coherency than required.This objective ensures that the served coherency is as closeas possible to the requested coherency,but not less.3.2.Assignment using heuristicsThe heuristic algorithm handles the dynamic arrival ofclient requests,inserting one request at a time in the net-work.In this approach,there is a selector node for eachdata-item.The selector maintains a list of reposito-ries serving,sorted in the order of coherency values at which is available at these repositories.In the DiTA al-gorithm[13],the source already stores such information tomaintain the temporal coherency of data-items at the repos-itories.So,the source of a data-item can act as a selec-tor for it.Each request initially contacts the selector,which then directs it to the appropriate repository.When a request from a client for a data-item at co-herency value arrives at the selector,it goes through the list of repositories serving and having resources to serve a new request(they have not exceeded their resource contri-bution limits),and short-lists those serving at a coherency close to.Among those shortlisted,a repository is se-lected such that the sum of the computational delay(esti-mated by the number of client dependents)at and the communication delay between the client and repository is the smallest possible.Experimental results,discussed in the next section,in-dicate that thefidelity achieved by assignment of client re-quests using this simple approach is close to that achieved using linear optimization.We now discuss the space and computational overheadsof this approach.This algorithm involves storing and main-taining the sorted list of coherencies at the selector,and finding the repositories with 
coherency values close to.In case of a large number of repositories,the overheads at the source can be prohibitive.To avoid this scalability problem, the computation can be moved to the repositories served di-rectly by the source.Each such repository will be at thefirst level of the and act as selector for a subset of data-items. The source updates these selectors when there are changes in the coherency values of data available at the repositories in the network.When a new client,data-item request ar-rives,the source passes it to the repository acting as the se-lector for that data-item.3.3.Performance of the assignment algorithms 3.3.1.Experimental methodology The performance of our solutions is investigated using real world stock price streams as exemplars of dynamic data.The pre-sented results are based on stock price traces obtained by continuously polling http://fi.We col-lected values for1000stocks making sure that the stocks did see some trading during that day.As an indica-tion of the characteristics of the traces used,Table1 gives the details of some of the traces.Max and Min re-fer to the maximum and minimum prices observed inthe10000values polled during the indicated time inter-val on the given Date in Jan/Feb2002.A new data valuewas obtained approximately once per second.Since stock prices change at a slower rate,the traces can be consid-ered to be“real-time”traces.Table1:Characteristics of some of the stock traces used Company Time interval Max Feb1260.09 SUNW21:30-01:22hours10.99 Jan3027.16 QCOM22:46-01:46hours41.23 Jan3033.66Oracle21:30-01:22hours17.10 We simulated the situation where all the repositories andclients accessed data kept at one or more sources.Each of them requests a subset of data-items,with a particular data-item chosen with50%probability.A coherency require-ment is associated with each of the chosen data-items.The’s are a mix of stringent and lenient tolerances.A re-quest for a data-item has a coherency requirement chosen uniformly from the stringent range with probability and from the lenient range with probability.Unless oth-erwise stated,the stringent range is kept as$to$and the lenient range as$to$,and T as0.5for our ex-periments in this paper.The results stated are an averageover5or more independent runs of the simulation.Each run involves using a different set of update traces for the data-items in the simulation.The physical network consists of nodes(routers,sources,repositories&clients)and links.The un-derlying router topology was generated using BRITE (/brite).The clients,repositories and the sources were randomly placed in the router plane and connected to the closest router.For each client and repos-itory,set of data-items of interest was generated and then the coherencies were chosen from the desired range.Our experiments use node-node communication delays derived from a heavy tailed Pareto[12]distribution:where is given byand the time to prepare an update for transmission.In the presence of complex processing at repositories,for exam-ple,if a repository aggregates information before transmit-ting updates to its client and repository dependents,the time taken to perform the checks can be considerable and hence the above default for the computational delay.For solving the linear optimization problem of mapping,the GNU Linear programming kit (GLPK)(/prep/ftp.html )was used.3.3.2.Performance Metrics The key metric for our ex-periments is the loss in fidelity experienced by the clients:fidelity is the degree to which a client’s coherency 
3.3.2. Performance Metrics

The key metric for our experiments is the loss in fidelity experienced by the clients. Fidelity is the degree to which a client's coherency requirements are met: it is measured as the total length of time for which the inequality |v(t) - v_c(t)| ≤ c holds (normalized by the total length of the observations), where v_c(t) is the value of the data-item at the client and v(t) is the actual value at the source at time t. The fidelity of a client is the mean fidelity over all data-items requested by that client, while the overall fidelity of the system is the mean fidelity of all clients. Loss in fidelity is simply 100% minus the fidelity.

[Figure 3: Fidelity loss for various client assignment approaches. The figure plots the loss of fidelity at clients (%) against the number of clients, for 2 sources, 20 repositories and 40 data-items.]

3.3.3. Comparison of the linear optimization and heuristic algorithms

Figure 3 shows the simulation results for various numbers of clients for a network with 2 sources, 20 repositories and 40 data-items. To get an idea of the scale of the problem, let us calculate the total number of requests, given the number of clients and data-items in the network. Since each client requests a subset of the data-items, with each data-item chosen with 50% probability, the average number of data-items requested by a client will be half of the total number of data-items available. Therefore, the total number of requests in a network with n clients and m data-items will be approximately n * m / 2 (see Section 3.1). The resource contribution limit R is 50 for this simulation. As the number of client requests exceeds the total resource contribution limit of the repositories, requests start getting dropped. The fidelity experienced by a client is zero for each dropped request, which causes a quick drop in the average fidelity.
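The loss-in-fidelity metric can be computed offline from logged traces. The sketch below assumes equally long source and client traces sampled at the same instants; the function and variable names are illustrative, not from the paper.

    def loss_of_fidelity(source_trace, client_trace, c):
        """Loss in fidelity (%) for one (client, data-item) pair: the
        fraction of sampled instants at which |v(t) - v_c(t)| <= c is
        violated. Both traces are assumed equally long and sampled at
        the same instants."""
        violations = sum(1 for v, vc in zip(source_trace, client_trace)
                         if abs(v - vc) > c)
        return 100.0 * violations / len(source_trace)

    # A client's loss is then averaged over its data-items, and the
    # system-wide loss over all clients, as defined above.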
3.4. Setting the resource contribution limit of the repositories

The resource contribution limit is a way by which the load on a repository can be controlled. But, as seen above, if R, and hence the resource contribution limit, is too small, a large number of requests can get dropped. Figure 4(a) shows the effect of a change in the value of R on the average fidelity achieved by the clients. In this simulation, the clients join and leave the network dynamically. The inter-arrival and inter-departure times of clients follow exponential distributions. The arrival rate is 2000 times the departure rate, and the average number of clients in the system at a time is 2000. (This can be viewed as an M/M/infinity queuing system, for which the average number of clients is given by the ratio of the arrival rate to the departure rate.)

[Figure 4: Resource contribution limit of repositories. (a) Effect of the resource contribution limit (R) on fidelity, and the adaptive approach to deciding the limit: loss of fidelity at clients (%) versus the initial resource limit (R), for 2 sources, 20 repositories, 2000 clients (avg.) and 40 data-items. (b) Excluding/including the sources for client assignment: loss of fidelity at clients (%) versus the number of clients, for 2 sources, 20 repositories and 40 data-items.]

3.4.1. Adaptive algorithm to adjust the resource contribution limit

The response time for an update gives an indication of the load on a repository. Varying the resource contribution limits of the repositories according to their recent response times can help in controlling the load at the repositories. This leads us to the following algorithm for adjusting the resource contribution limit (a sketch in code follows this section):

Adaptive increase: When the selector node finds that the resource contribution limits of all repositories serving a data-item are exhausted, it sends a message down the dissemination tree to the repositories to increase their limits. A repository with a short response time for updates is expected to be less loaded than one with a large response time, and can serve more clients. Thus, on receiving the selector's message, the repositories increase their resource contribution limits by an amount inversely proportional to their recent average response time.

Adaptive decrease: The selector node periodically monitors the difference between the total resource contribution limit of all repositories and the total number of client requests in the network. If the total limit is much more than the total number of requests, it sends a message to all repositories to decrease their limits. A large response time at a repository indicates heavy load. Thus, on receiving the selector's message for a decrease, the repositories decrease their future resource contribution limits by an amount directly proportional to their recent average response time. This reduces the future assignment of clients to the overloaded repositories.

Note that a repository may reduce its future resource contribution limit to a number less than the number of requests already assigned to it. No reassignment of the extra requests is done; but as clients join and leave, in due course, the number of requests assigned to the repository is reduced to the desired number. Also, if at some point the total resource contribution limit of all the repositories goes below the total number of requests in the network, the selector node will automatically detect this and call for an "Adaptive increase".
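The sketch referenced above illustrates one plausible realization of the two adjustments. It reuses the Repository record from the earlier sketch, extended with an avg_response_time field (assumed positive); the constants k_inc and k_dec are illustrative knobs, since the paper does not specify the proportionality factors.

    def adaptive_increase(repositories, k_inc=10.0):
        """Called when the selector finds all limits for a data-item
        exhausted: lightly loaded repositories (short recent response
        times) raise their limits the most."""
        for r in repositories:
            r.limit += max(1, round(k_inc / r.avg_response_time))

    def adaptive_decrease(repositories, k_dec=0.5):
        """Called when the total limit far exceeds the total number of
        requests: heavily loaded repositories (long recent response
        times) lower their limits the most. A limit may fall below the
        current number of dependents; the excess is not reassigned but
        drains away as clients depart."""
        for r in repositories:
            r.limit = max(0, r.limit - round(k_dec * r.avg_response_time))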
3.4.2. Performance of the adaptive algorithm

The adaptive algorithm achieves better performance through better load sharing, as is evident from the curve labeled "Adaptive resource contribution limit" in Figure 4(a). The variation of the repositories' resource contribution limits over time was also examined. It was seen that the resource contribution limit of the source remains lower than the limits of the other repositories: the algorithm detects that assigning clients to the source increases its response time considerably, and therefore keeps the source's limit low. This suggests that it might be beneficial not to assign any clients to the sources. Figure 4(b) compares the fidelity when requests are assigned only to the sources, to both the sources and the repositories, and only to the repositories. For this graph, the resource contribution limit is non-adaptive but sufficient so that requests are not dropped. The loss of fidelity rises rapidly if we have no repositories in the network; the computational delay at the source, to push the updates to a large number of clients, results in a high loss of fidelity. Further, it is seen that better fidelity can be achieved if the client requests are assigned only to the repositories.

For the subsequent experiments, we assign the client requests only to the repositories in the network, using the heuristic assignment algorithm. But, as these experiments are snapshot based (all the repositories and clients are inserted in the network and then the updates are simulated), we do not use the adaptive algorithm for adjusting the resource contribution limit; instead, the limit is kept sufficiently large that requests are not dropped.

4. Reorganization of a Repository Network

Ideally, the choice of data-items available at the repositories should be driven by the data needs of the clients, and