AN ENHANCEMENT SCHEME OF TCP PROTOCOL IN MOBILE AD HOC NETWORKS: MME-TCP

合集下载

国际自动化与计算杂志.英文版.

国际自动化与计算杂志.英文版.

国际自动化与计算杂志.英文版.1.Improved Exponential Stability Criteria for Uncertain Neutral System with Nonlinear Parameter PerturbationsFang Qiu,Ban-Tong Cui2.Robust Active Suspension Design Subject to Vehicle Inertial Parameter VariationsHai-Ping Du,Nong Zhang3.Delay-dependent Non-fragile H∞ Filtering for Uncertain Fuzzy Systems Based on Switching Fuzzy Model and Piecewise Lyapunov FunctionZhi-Le Xia,Jun-Min Li,Jiang-Rong Li4.Observer-based Adaptive Iterative Learning Control for Nonlinear Systems with Time-varying DelaysWei-Sheng Chen,Rui-Hong Li,Jing Li5.H∞ Output Feedback Control for Stochastic Systems with Mode-dependent Time-varying Delays and Markovian Jump ParametersXu-Dong Zhao,Qing-Shuang Zeng6.Delay and Its Time-derivative Dependent Robust Stability of Uncertain Neutral Systems with Saturating ActuatorsFatima El Haoussi,El Houssaine Tissir7.Parallel Fuzzy P+Fuzzy I+Fuzzy D Controller:Design and Performance EvaluationVineet Kumar,A.P.Mittal8.Observers for Descriptor Systems with Slope-restricted NonlinearitiesLin-Na Zhou,Chun-Yu Yang,Qing-Ling Zhang9.Parameterized Solution to a Class of Sylvester MatrixEquationsYu-Peng Qiao,Hong-Sheng Qi,Dai-Zhan Cheng10.Indirect Adaptive Fuzzy and Impulsive Control of Nonlinear SystemsHai-Bo Jiang11.Robust Fuzzy Tracking Control for Nonlinear Networked Control Systems with Integral Quadratic ConstraintsZhi-Sheng Chen,Yong He,Min Wu12.A Power-and Coverage-aware Clustering Scheme for Wireless Sensor NetworksLiang Xue,Xin-Ping Guan,Zhi-Xin Liu,Qing-Chao Zheng13.Guaranteed Cost Active Fault-tolerant Control of Networked Control System with Packet Dropout and Transmission DelayXiao-Yuan Luo,Mei-Jie Shang,Cai-Lian Chen,Xin-Ping Guanparison of Two Novel MRAS Based Strategies for Identifying Parameters in Permanent Magnet Synchronous MotorsKan Liu,Qiao Zhang,Zi-Qiang Zhu,Jing Zhang,An-Wen Shen,Paul Stewart15.Modeling and Analysis of Scheduling for Distributed Real-time Embedded SystemsHai-Tao Zhang,Gui-Fang Wu16.Passive Steganalysis Based on Higher Order Image Statistics of Curvelet TransformS.Geetha,Siva S.Sivatha Sindhu,N.Kamaraj17.Movement Invariants-based Algorithm for Medical Image Tilt CorrectionMei-Sen Pan,Jing-Tian Tang,Xiao-Li Yang18.Target Tracking and Obstacle Avoidance for Multi-agent SystemsJing Yan,Xin-Ping Guan,Fu-Xiao Tan19.Automatic Generation of Optimally Rigid Formations Using Decentralized MethodsRui Ren,Yu-Yan Zhang,Xiao-Yuan Luo,Shao-Bao Li20.Semi-blind Adaptive Beamforming for High-throughput Quadrature Amplitude Modulation SystemsSheng Chen,Wang Yao,Lajos Hanzo21.Throughput Analysis of IEEE 802.11 Multirate WLANs with Collision Aware Rate Adaptation AlgorithmDhanasekaran Senthilkumar,A. Krishnan22.Innovative Product Design Based on Customer Requirement Weight Calculation ModelChen-Guang Guo,Yong-Xian Liu,Shou-Ming Hou,Wei Wang23.A Service Composition Approach Based on Sequence Mining for Migrating E-learning Legacy System to SOAZhuo Zhang,Dong-Dai Zhou,Hong-Ji Yang,Shao-Chun Zhong24.Modeling of Agile Intelligent Manufacturing-oriented Production Scheduling SystemZhong-Qi Sheng,Chang-Ping Tang,Ci-Xing Lv25.Estimation of Reliability and Cost Relationship for Architecture-based SoftwareHui Guan,Wei-Ru Chen,Ning Huang,Hong-Ji Yang1.A Computer-aided Design System for Framed-mould in Autoclave ProcessingTian-Guo Jin,Feng-Yang Bi2.Wear State Recognition of Drills Based on K-means Cluster and Radial Basis Function Neural NetworkXu Yang3.The Knee Joint Design and Control of Above-knee Intelligent Bionic Leg Based on Magneto-rheological DamperHua-Long Xie,Ze-Zhong Liang,Fei Li,Li-Xin Guo4.Modeling of Pneumatic Muscle with Shape Memory Alloy and Braided SleeveBin-Rui Wang,Ying-Lian Jin,Dong Wei5.Extended Object Model for Product Configuration DesignZhi-Wei Xu,Ze-Zhong Liang,Zhong-Qi Sheng6.Analysis of Sheet Metal Extrusion Process Using Finite Element MethodXin-Cun Zhuang,Hua Xiang,Zhen Zhao7.Implementation of Enterprises' Interoperation Based on OntologyXiao-Feng Di,Yu-Shun Fan8.Path Planning Approach in Unknown EnvironmentTing-Kai Wang,Quan Dang,Pei-Yuan Pan9.Sliding Mode Variable Structure Control for Visual Servoing SystemFei Li,Hua-Long Xie10.Correlation of Direct Piezoelectric Effect on EAPap under Ambient FactorsLi-Jie Zhao,Chang-Ping Tang,Peng Gong11.XML-based Data Processing in Network Supported Collaborative DesignQi Wang,Zhong-Wei Ren,Zhong-Feng Guo12.Production Management Modelling Based on MASLi He,Zheng-Hao Wang,Ke-Long Zhang13.Experimental Tests of Autonomous Ground Vehicles with PreviewCunjia Liu,Wen-Hua Chen,John Andrews14.Modelling and Remote Control of an ExcavatorYang Liu,Mohammad Shahidul Hasan,Hong-Nian Yu15.TOPSIS with Belief Structure for Group Belief Multiple Criteria Decision MakingJiang Jiang,Ying-Wu Chen,Da-Wei Tang,Yu-Wang Chen16.Video Analysis Based on Volumetric Event DetectionJing Wang,Zhi-Jie Xu17.Improving Decision Tree Performance by Exception HandlingAppavu Alias Balamurugan Subramanian,S.Pramala,B.Rajalakshmi,Ramasamy Rajaram18.Robustness Analysis of Discrete-time Indirect Model Reference Adaptive Control with Normalized Adaptive LawsQing-Zheng Gao,Xue-Jun Xie19.A Novel Lifecycle Model for Web-based Application Development in Small and Medium EnterprisesWei Huang,Ru Li,Carsten Maple,Hong-Ji Yang,David Foskett,Vince Cleaver20.Design of a Two-dimensional Recursive Filter Using the Bees AlgorithmD. T. Pham,Ebubekir Ko(c)21.Designing Genetic Regulatory Networks Using Fuzzy Petri Nets ApproachRaed I. Hamed,Syed I. Ahson,Rafat Parveen1.State of the Art and Emerging Trends in Operations and Maintenance of Offshore Oil and Gas Production Facilities: Some Experiences and ObservationsJayantha P.Liyanage2.Statistical Safety Analysis of Maintenance Management Process of Excavator UnitsLjubisa Papic,Milorad Pantelic,Joseph Aronov,Ajit Kumar Verma3.Improving Energy and Power Efficiency Using NComputing and Approaches for Predicting Reliability of Complex Computing SystemsHoang Pham,Hoang Pham Jr.4.Running Temperature and Mechanical Stability of Grease as Maintenance Parameters of Railway BearingsJan Lundberg,Aditya Parida,Peter S(o)derholm5.Subsea Maintenance Service Delivery: Mapping Factors Influencing Scheduled Service DurationEfosa Emmanuel Uyiomendo,Tore Markeset6.A Systemic Approach to Integrated E-maintenance of Large Engineering PlantsAjit Kumar Verma,A.Srividya,P.G.Ramesh7.Authentication and Access Control in RFID Based Logistics-customs Clearance Service PlatformHui-Fang Deng,Wen Deng,Han Li,Hong-Ji Yang8.Evolutionary Trajectory Planning for an Industrial RobotR.Saravanan,S.Ramabalan,C.Balamurugan,A.Subash9.Improved Exponential Stability Criteria for Recurrent Neural Networks with Time-varying Discrete and Distributed DelaysYuan-Yuan Wu,Tao Li,Yu-Qiang Wu10.An Improved Approach to Delay-dependent Robust Stabilization for Uncertain Singular Time-delay SystemsXin Sun,Qing-Ling Zhang,Chun-Yu Yang,Zhan Su,Yong-Yun Shao11.Robust Stability of Nonlinear Plants with a Non-symmetric Prandtl-Ishlinskii Hysteresis ModelChang-An Jiang,Ming-Cong Deng,Akira Inoue12.Stability Analysis of Discrete-time Systems with Additive Time-varying DelaysXian-Ming Tang,Jin-Shou Yu13.Delay-dependent Stability Analysis for Markovian Jump Systems with Interval Time-varying-delaysXu-Dong Zhao,Qing-Shuang Zeng14.H∞ Synchronization of Chaotic Systems via Delayed Feedback ControlLi Sheng,Hui-Zhong Yang15.Adaptive Fuzzy Observer Backstepping Control for a Class of Uncertain Nonlinear Systems with Unknown Time-delayShao-Cheng Tong,Ning Sheng16.Simulation-based Optimal Design of α-β-γ-δ FilterChun-Mu Wu,Paul P.Lin,Zhen-Yu Han,Shu-Rong Li17.Independent Cycle Time Assignment for Min-max SystemsWen-De Chen,Yue-Gang Tao,Hong-Nian Yu1.An Assessment Tool for Land Reuse with Artificial Intelligence MethodDieter D. Genske,Dongbin Huang,Ariane Ruff2.Interpolation of Images Using Discrete Wavelet Transform to Simulate Image Resizing as in Human VisionRohini S. Asamwar,Kishor M. Bhurchandi,Abhay S. Gandhi3.Watermarking of Digital Images in Frequency DomainSami E. I. Baba,Lala Z. Krikor,Thawar Arif,Zyad Shaaban4.An Effective Image Retrieval Mechanism Using Family-based Spatial Consistency Filtration with Object RegionJing Sun,Ying-Jie Xing5.Robust Object Tracking under Appearance Change ConditionsQi-Cong Wang,Yuan-Hao Gong,Chen-Hui Yang,Cui-Hua Li6.A Visual Attention Model for Robot Object TrackingJin-Kui Chu,Rong-Hua Li,Qing-Ying Li,Hong-Qing Wang7.SVM-based Identification and Un-calibrated Visual Servoing for Micro-manipulationXin-Han Huang,Xiang-Jin Zeng,Min Wang8.Action Control of Soccer Robots Based on Simulated Human IntelligenceTie-Jun Li,Gui-Qiang Chen,Gui-Fang Shao9.Emotional Gait Generation for a Humanoid RobotLun Xie,Zhi-Liang Wang,Wei Wang,Guo-Chen Yu10.Cultural Algorithm for Minimization of Binary Decision Diagram and Its Application in Crosstalk Fault DetectionZhong-Liang Pan,Ling Chen,Guang-Zhao Zhang11.A Novel Fuzzy Direct Torque Control System for Three-level Inverter-fed Induction MachineShu-Xi Liu,Ming-Yu Wang,Yu-Guang Chen,Shan Li12.Statistic Learning-based Defect Detection for Twill FabricsLi-Wei Han,De Xu13.Nonsaturation Throughput Enhancement of IEEE 802.11b Distributed Coordination Function for Heterogeneous Traffic under Noisy EnvironmentDhanasekaran Senthilkumar,A. Krishnan14.Structure and Dynamics of Artificial Regulatory Networks Evolved by Segmental Duplication and Divergence ModelXiang-Hong Lin,Tian-Wen Zhang15.Random Fuzzy Chance-constrained Programming Based on Adaptive Chaos Quantum Honey Bee Algorithm and Robustness AnalysisHan Xue,Xun Li,Hong-Xu Ma16.A Bit-level Text Compression Scheme Based on the ACW AlgorithmHussein A1-Bahadili,Shakir M. Hussain17.A Note on an Economic Lot-sizing Problem with Perishable Inventory and Economies of Scale Costs:Approximation Solutions and Worst Case AnalysisQing-Guo Bai,Yu-Zhong Zhang,Guang-Long Dong1.Virtual Reality: A State-of-the-Art SurveyNing-Ning Zhou,Yu-Long Deng2.Real-time Virtual Environment Signal Extraction and DenoisingUsing Programmable Graphics HardwareYang Su,Zhi-Jie Xu,Xiang-Qian Jiang3.Effective Virtual Reality Based Building Navigation Using Dynamic Loading and Path OptimizationQing-Jin Peng,Xiu-Mei Kang,Ting-Ting Zhao4.The Skin Deformation of a 3D Virtual HumanXiao-Jing Zhou,Zheng-Xu Zhao5.Technology for Simulating Crowd Evacuation BehaviorsWen-Hu Qin,Guo-Hui Su,Xiao-Na Li6.Research on Modelling Digital Paper-cut PreservationXiao-Fen Wang,Ying-Rui Liu,Wen-Sheng Zhang7.On Problems of Multicomponent System Maintenance ModellingTomasz Nowakowski,Sylwia Werbinka8.Soft Sensing Modelling Based on Optimal Selection of Secondary Variables and Its ApplicationQi Li,Cheng Shao9.Adaptive Fuzzy Dynamic Surface Control for Uncertain Nonlinear SystemsXiao-Yuan Luo,Zhi-Hao Zhu,Xin-Ping Guan10.Output Feedback for Stochastic Nonlinear Systems with Unmeasurable Inverse DynamicsXin Yu,Na Duan11.Kalman Filtering with Partial Markovian Packet LossesBao-Feng Wang,Ge Guo12.A Modified Projection Method for Linear FeasibilityProblemsYi-Ju Wang,Hong-Yu Zhang13.A Neuro-genetic Based Short-term Forecasting Framework for Network Intrusion Prediction SystemSiva S. Sivatha Sindhu,S. Geetha,M. Marikannan,A. Kannan14.New Delay-dependent Global Asymptotic Stability Condition for Hopfield Neural Networks with Time-varying DelaysGuang-Deng Zong,Jia Liu hHTTp://15.Crosscumulants Based Approaches for the Structure Identification of Volterra ModelsHouda Mathlouthi,Kamel Abederrahim,Faouzi Msahli,Gerard Favier1.Coalition Formation in Weighted Simple-majority Games under Proportional Payoff Allocation RulesZhi-Gang Cao,Xiao-Guang Yang2.Stability Analysis for Recurrent Neural Networks with Time-varying DelayYuan-Yuan Wu,Yu-Qiang Wu3.A New Type of Solution Method for the Generalized Linear Complementarity Problem over a Polyhedral ConeHong-Chun Sun,Yan-Liang Dong4.An Improved Control Algorithm for High-order Nonlinear Systems with Unmodelled DynamicsNa Duan,Fu-Nian Hu,Xin Yu5.Controller Design of High Order Nonholonomic System with Nonlinear DriftsXiu-Yun Zheng,Yu-Qiang Wu6.Directional Filter for SAR Images Based on NonsubsampledContourlet Transform and Immune Clonal SelectionXiao-Hui Yang,Li-Cheng Jiao,Deng-Feng Li7.Text Extraction and Enhancement of Binary Images Using Cellular AutomataG. Sahoo,Tapas Kumar,B.L. Rains,C.M. Bhatia8.GH2 Control for Uncertain Discrete-time-delay Fuzzy Systems Based on a Switching Fuzzy Model and Piecewise Lyapunov FunctionZhi-Le Xia,Jun-Min Li9.A New Energy Optimal Control Scheme for a Separately Excited DC Motor Based Incremental Motion DriveMilan A.Sheta,Vivek Agarwal,Paluri S.V.Nataraj10.Nonlinear Backstepping Ship Course ControllerAnna Witkowska,Roman Smierzchalski11.A New Method of Embedded Fourth Order with Four Stages to Study Raster CNN SimulationR. Ponalagusamy,S. Senthilkumar12.A Minimum-energy Path-preserving Topology Control Algorithm for Wireless Sensor NetworksJin-Zhao Lin,Xian Zhou,Yun Li13.Synchronization and Exponential Estimates of Complex Networks with Mixed Time-varying Coupling DelaysYang Dai,YunZe Cai,Xiao-Ming Xu14.Step-coordination Algorithm of Traffic Control Based on Multi-agent SystemHai-Tao Zhang,Fang Yu,Wen Li15.A Research of the Employment Problem on Common Job-seekersand GraduatesBai-Da Qu。

00C06_ASCCPaper

00C06_ASCCPaper

AbstractThis paper presents a method of enhancing tracking in repetitive processes which can be approximated by first order plus dead-time (FOPDT) models. Enhancement is achieved through the iterative learning control (ILC) scheme known as filter-based ILC. Using the approximate model, the design of the ILC parameters is done in the frequency domain. In a water heating plant, this ILC scheme is effortlessly added onto a PI controller, with minimal implementation requirements. The results show good improvements in tracking. For even better performance, the trajectory is segmented into piecewise smooth sections with ILC applied separately to each. For constant level sections, the feed-forward signal is initialised to the desired control estimated from previous iterations. This gives the best empirical results.1.IntroductionTrajectory tracking is very important in many industrial applications. The control objective is for system outputs to track a specified profile as tightly as possible over a given finite period. This task, known as an o ptimal t racking c ontrol p roblem (OTCP) over a finite interval, is very difficult to achieve in practice. One frequently encountered problem is the lack of exact system information for accurate derivation of the precise control effort required for tracking.Usually, perfect tracking can only be achieved asymptotically - the initial tracking will be conspicuously poor within the finite interval. Thus, standard controllers, like the widely used PI or PID, are not able to achieve satisfactory tracking performance for the whole operation period. Moreover, most advance control schemes only guarantee asymptotic convergence along the time horizon, and thus are unable to improve transient response within the finite tracking duration.On the other hand, most OTCP industrial processes are batch operations, which by virtue are repeated many times with the same desired tracking profile. The same tracking performance will thus be observed, albeit with hindsight from previous operations. Clearly, these continual repetitions make it conceivable to improve tracking, potentially over the entire task duration, by using information from past operations.For enhancing tracking in repeated operations, ILC schemes developed hitherto well cater to the needs [1-7]. Basically, ILC uses repetitions as experience to improve tracking without exact system knowledge [8]. Hence, ILC improves tracking by using previous control and error signals in the current operation. Numeric processing on these data yields a feed-forward signal added to the current feedback control. Clearly, ILC requirements are minimal - a memory store for past data plus some simple data operations to derive the feed-forward signal. With its utmost simplicity, ILC can very easily be added on top of existing (predominantly PID batch) facilities without any hassle at all.This paper illustrates the application of ILC to a class of processes which can be well approximated by a FOPDT model. The control scheme known as filter-based ILC is applied to a water heating plant. Using non-causal zero-phase filtering, this ILC scheme simply involves 2 parameters - the filter length and the learning gain, both easily tuned using the approximate model. Also, this scheme is practically robust to random system noise.The remainder parts of this paper are organised as follows. Section 2 describes the water heating plant, its modelling work and the control objective. Section 3 gives an overview of filter-based ILC with its convergence analysis. Section 4 shows the controller design work and the experimental results. From these results, an improved ILC scheme, with profile segmentation and feed-forward initialisation, is used to improve tracking performance even further. Finally, Section 5 concludes the paper.2.Modelling the Water Heating Plant andProblem StatementA.Modelling the Water Heating PlantFigure 1 - Schematic Diagram of the Water Heating Plant Fig. 1 is a schematic diagram showing the relevant portions of the water heating plant. Water from Tank A is pumped (N1) through a heat exchanger as the cooling stream. The heating stream on the other side of the exchanger is supplied from a heated reservoir. ThisEnhancing Trajectory Tracking for a Class of Process Control Problems usingIterative LearningJian Xin Xu, Tong Heng Lee, Yang Quan Chen and Hou TanNational University of SingaporeDepartment of Electrical Engineering10 Kent Ridge CrescentSingapore 119260Singapore(E-mail: elexujx@.sg)heated stream is pumped (N2) through the exchanger before returning to the reservoir where it is heated by aheating rod (PWR).Figure 2 - Model and Plant Response to 200W StepFig. 2 shows the actual response of the plant and the response of its distributed PDE model to a step input of 200W at the heater. (The heater is switched to 200W at t=0s and then switched off at t=12.5Ks .) Both the simulation and the experimental responses show that the plant can be effectively approximated by the FOPDT system:ses s PWR s T s G 50128651390.0)()(2)(-+==(1)B. Problem StatementIn general, the control problem is to enhance trajectory tracking in repeated batch operations where the process can be approximated with the FOPDT model:asT τs Ke G(s)+-=1(2)where K : Plant gainτ : Apparent dead-time a T :Apparent time constantThe desired trajectory is a piecewise continuous profile commonly found in process control eg. desired temperature profile, desired concentration profile, etc. For the water heating plant, the control objective is for T2 (water temperature in the heated reservoir) to track the desired profile shown in Fig. 3 as closely as possible over its entire duration, by varying the input to the heater (PWR). Both pumps (N1 & N2) are maintained at pre-set values throughout the run.Figure 3 - Desired Temperature ProfileTo achieve the objective, a learning controller based on filter-based ILC will be designed using systematic analysis on the approximate model. This learning controller will be effortlessly augmented on top of a PI controller with minimal implementation requirements.3. Iterative Learning ControlA.The Schematic of Filter-based ILCBecause of the effectiveness for enhancing repeated tracking tasks, ILC has drawn increased attention and many schemes have been developed. In this paper, a d y : Desired output profile i e : Error at i th iterationi fb u: Feedback signal at i th iteration i ff u: Feed-forward signal at i th iterationi y: Plant output at i th iterationAs seen from Fig. 5, the learning update law and the overall control signal is )(*)()(1k u h k u k u i fb i ff i ff γ+=+ (3))()()(k u k u k u i fb i ff i +=(4)where k represents the k th time sample of the respective signal, γis the filter gain and *h is the filter operator: moving averager (* denotes convolution)Figure 5 - Block Diagram of Filter-based ILC For this paper, the (non-causal zero-phase) filter *h γ is a simple moving averager with 2 parameters M and γ, related to the filter length and filter gain respectively.∑-=++=MMj ifb i fb j kuM k u h )(12)(*γγ(5)Basically, filter-based ILC attempts to store the desired control signal in the memory bank as the feed-forward signal. With convergence, the feed-forward signal will tend to the desired control signal so that, in time, it will relieve the burden of the feedback controller. As for all ILC schemes, it is important that the plant output converges to the desired profile along with iterations. This is shown in the convergence analysis.B. Frequency Domain Convergence Analysis ofFilter-based ILCThe time-domain convergence analysis of the filter-based ILC (3),(4) is fully presented by Chen et al [9] for quite general non-linear systems. For this paper, the focus is on FOPDT processes (2). Thus, it is sufficient and convenient to provide frequency domain convergence analysis for systematic design.Compared with the time domain analysis, frequency domain analysis offers more insight into the effects of the plant, the PI controller and the learning filter on ILC performance and allows the systematic design of M and γ based on the linearized FOPDT plant model.Since ILC can only be implemented digitally, the frequency analysis should actually be done on sampled systems. Provided sampling is fast compared to the system time constant, the problem of aliasing is avoided and a zero-order hold filter will reconstruct the correct output signal to the plant. Assuming this is so, the analysis can proceed as though for continuous systems. Consider linear systems with )(ωj G b and )(ωj G o as the transfer functions of the feed b ack controller and the o pen-loop plant respectively. The c losed loop transfer function is)()(1)()()(ωωωωωj G j G j G j G j G o b o b c +=(6)Writing the learning update law (3) in the frequency domain gives)()()()(1ωγωωj U w H j U j U i fb i ff i ff +=+ (7)In the following ωj is omitted for brevity. By considering the transfer function from )(t y d and from)(t u i ff to )(t u i fb , i fb U can be written as)(1i ff d c i fb i ffc d o b bi fb U U G U U G Y G G G U -=-+=(8)where d Y and d U are the Fourier transforms of thedesired profile and the desired control respectively. Thus, (7) becomes the following difference equation in the frequency domaind c i ff c i ff U HG U HG U γγ+-=+)1(1(9)From (9) and assuming 0)(0=t u ff , it is interesting toobserve that c G acts like a filter on )(t u d while H γ is similar to an "equaliser" used to compensate the "channel" filter c G .Also, it is noted that, where H or 0=c G , the feed-forward signal maintains at its initial value. (H and c G are the noise filter and the "closed loop filter" respectively). This means that learning will not take place at these filtered frequencies. One way to get around this is to initialise the feed-forward signal with an estimate of the required control signal. Iterating in i yieldsd ic ff i c i ff U HG U HG U ])1(1[)1(01γγ--+-=+(10)Clearly, the convergence condition is1|)()(1|<-ωωγj G H c , ∀ω present in 0ff U and d U (11)The converged value is given byd ff U U =∞(12)Therefore, in linear systems, convergence requiresthat (11) is satisfied, meaning that the Nyquist plot of c HG γ-1 must fall within the unit circle for all frequencies in )(t u d . Optimally, the curve should be close to the origin to give a high rate of learning for all relevant frequencies.4. Control of the Water Heating Plant usingFilter-based ILCA. Experimental Set-up of the Water HeatingPlantFig. 6 shows the hardware block diagram of the water heating plant - PCT23 Process Plant Trainer. In the experiment, the console is interfaced to a PC via the MetraByte DAS-16 AD/DA card. The plant is controlled by a user-written C DOS program in the PC. This program is interrupt driven and serves to command the required control signal to the plant as well as collect and store plant readings.Control and reading are done at a rate of 1 Hz (which is more than adequate for the system and the ILC bandwidths – see Section 4.3).B. Design of the PI ControllerFilter-based ILC is used to augment the PI feedbackcontroller. The PI controller is tuned using relay experiments [10] according to Ziegler Nichols rule.The ultimate gain K u and period P u are 125.0o C/W and 512.2s , giving the P gain and integral time as:8.562.2==up K K & 8.4262.1==u i P TC. Design of M and γ in Filter-based ILCM is designed with 2 opposing considerations inmind – the noise rejection and the learning rate at high frequencies. High frequency noise is more effectively rejected with a small filter bandwidth, when M is large. On the other hand, as seen in the convergence analysis, H and c G , as low-pass filters, limit the learning rate at high frequencies. Thus, a large M and hence a small filter bandwidth is detrimental to high frequency learning. To reduce the impact of H on high frequency learning, H is designed so that its bandwidth is slightly larger than c G . For the plant, the bandwidth of c G is 0.0045 rad/s. M =10 and 100 give filter bandwidths of 0.14 and 0.014 rad/s respectively. Obviously, M=100 will provide better noise rejection. At the same time, M=100 also gives a bandwidth slightly larger than G c. Thus, M =100 is chosen. The noise rejection effectiveness of this filter will be verified empirically. From the bandwidths of c G and H , it is seen that the sampling frequency of 1 Hz is adequate to prevent aliasing in the signals. Thus, frequency convergence analysis presented in Section 3.B is applicable and the design of γ can be done using Nyquist plots.Using the FOPDT model (1) and the PI controller just obtained, the Nyquist plot (Fig. 7) of c HG γ-1 with100=M is obtained for γ=0.25, 0.50, 0.75 and 1.00.Note that the size of the curves, i.e. the heart-shaped lobe,increases with γ. Also, each curve starts at 1+0j for ω=-πf s and traces clockwise back to 1+0j for ω=πf s where f s =1Hz , the sampling interval.90270180Figure 7 - Nyquist Plot of 1-γHG c (M=100)Thus, all curves have portions outside the unit circle. Thus, convergence is not guaranteed for ω/2π>0.0046, 0.0040, 0.0034 & 0.0028Hz for γ=0.25, 0.5, 0.75 & 1 respectively. However, Fig. 8 shows that the frequency content of the desired control signal (Fig. 9) is negligible for ω/2π>0.003Hz . As a trade-off between stability (location in unit circle) and learning rate (proximity to origin), γ=0.5 is chosen.1010101234567891Figure 8 – Frequency Content of the Desired ControlSignalFigure 9 - Approximate Model Inverse of the DesiredTemperature ProfileD. Filter-based ILC Results for γ=0.5 and M=100Fig. 10 shows the plant output at the 1st iteration - the tracking performance of the PI controller only. From Fig. 11, after 8 ILC iterations, tracking improves vastly. This is also obvious from the RMS error trend in Fig. 12.Figure 10 - T2 with PI Control (K p =56.8 & T i =426.8)Figure 11 - T2 after 8 iterations (γ=0.5 and M=100)Figure 12 - RMS Error for γ=0.5 and M=100In Fig. 11, overshoot and undershoot occur at the turn in )(t u d (Fig. 9). The high frequency components of)(t u d are filtered out from )(t u iff by c G . This resultsin "ringing" at the turn seen in Fig. 13, thus leading to overshoot and undershoot in the output.Figure 13 -)(8t u fffor γ=0.5 and M=100From Fig. 13, it is also seen that the feed-forward signal is relatively smooth and noiseless. This implies that the ILC filter is effective in rejecting noise, making the scheme robust to random perturbations in the system.E. Profile Segmentation with Feed-forwardInitialisationFeed-forward InitialisationTo improve tracking at the turn, a variation (Fig. 14) is attempted. The desired control (Fig. 9) is piecewise smooth - a ramp and a level. Near the turn, due to the ILC filter's "window averagin g” effect, the feed -forward signal is derived from these 2 radical control efforts - one "correct", the other "wrong".Thus, the original profile is divided into 2 entirely smooth profiles. ILC proceeds separately for each - the ILC filter is not applied around the turn. At the same time, the integral part of the PI controller is reset to 0 at the start of each profile. Effectively, it is as though a brand new batch process is started at the turn.In addition, it is easy to estimate, from the first iteration, the static control effort required for the level profile. At the second iteration, the feed-forward signal for this profile is initialised to this estimate. From then on, ILC proceeds normally for all subsequent iterations, further alleviating any inaccuracy in the estimate.Compared with Fig. 12, the RMS error trend in Fig. 15 shows that the improvement from the first to the second iteration is more significant due to the feed-forward initialisation. In addition, the error settles to a smaller value.Figure 15 - RMS Error for Profile Segmentation withFeed-forward InitialisationThe smaller error is due to better tracking at the turn seen in Fig. 16. Compared to Fig. 11, there is very much less overshoot and undershoot.Figure 16 - T2 after 8 iterations (Profile Segmentationwith Feed-forward Initialisation)Through segmentation, the filter’s "window averaging" effect at the turn is eliminated. In general, the system closed-loop bandwidth still limits the frequency components successfully incorporated into the feed-forward signal. This limitation is somewhat compensated by initialising the signal with the control estimate.F. Initial Resetting ConditionOne very important property of ILC is that initial plant reset is required - the feed-forward signal is meaningful only if the plant starts from the same initial conditions in all iterations, this being known as the initial reset condition. Obviously, this is not guaranteed with profile segmentation except for the first section.In other words, initial reset is not satisfied (for the 1st few iterations) for sections except the first. Given convergence in the first section, this occurs for only the first few iterations after which the first section converges to the desired profile and reset is satisfied for (at least) the 2nd section. Note that the end point of the 1st section is exactly the initial point of the 2nd one. Likewise, this applies successively to all subsequent sections.Due to the lack of initial reset for the 1st few iterations, a deviation ∆ will be erroneously incorporated into the feed-forward signal during these iterations. When initial reset is finally satisfied at the i th iteration,∆+=d i ff U U(13)From (9),∆-+=+∆+-=++)1())(1(11c d i ff d c d c i ff HG U U U HG U HG U γγγ (14)Provided |1|c HG γ-<0 for all frequencies of interest, ∆ is reduced with further iterations till it finally becomes negligible. Thus, there is no strict requirement for initial reset in the first few iterations.Obviously, convergence in each section is independent of those sections following it. As long as the first section converges, reset will be observed for the second section after a few iterations. In the same manner, this extends to all sections that follow, provided those sections preceding them converge.5. ConclusionFilter-based ILC is used to improve trajectory tracking in FOPDT repetitive plants, in particular a water heating plant. The ILC scheme is easily implemented on top of a PI controller using simple memory components and easy numeric operations. Trajectory tracking is vastly enhanced with this effortless add-on ILC scheme. With the approximate FOPDT model, the systematic design of M and γ in this scheme can be approached from the frequency perspective. It is found that 100=M is effective in eliminating noise from the feedback signal while not affecting tracking performance. For γ, design is based on Nyquist curves.Filter-based ILC handles smooth profiles best. Thus, for even further improvement in tracking, the piecewise-smooth profile is divided into smooth sections with ILC applied separately to each. Also, to hasten convergence, the feed-forward signal is initialised to the estimated desired control signal, easily achievable if this signal is simple. These modifications give the best tracking.References[1] Z. Bien, J. X. Xu, Iterative Learning Control –Analysis, Design, Integration and Applications , Kluwer Academic Publishers, 1998.[2] T. Y. Kuc, J. S. Lee, K. Nam, An iterative learningcontrol theory for a class of non-linear dynamic systems , Automatica, vol. 28, no. 6, pp. 1215-1221, 1992.[3] H. S. Lee, Z. Bien, A note on convergence propertyof iterative learning controller with respect to sup norm , Automatica, vol. 33, pp. 1591-1593, 1997. [4] K. S. Lee, S. H. Bang, K. S. Chang, Feedback-assisted iterative learning control based on an inverse process model , Journal of Process Control, vol. 4, no. 2, pp. 77-89, 1994.[5] R. W. Longman, Designing Iterative Learning andRepetitive Controllers , Iterative Learning Control – Analysis, Design, Integration and Application, Kluwer Academic Publishers, pp. 107-145, 1998. [6] K. L. Moore, Iterative Learning Control - AnExpository Overview , Applied & Computational Controls, Signal Processing and Circuits, pp. 1-42, 1998.[7] J. Phan, J. Juang, Designs of learning controllersbased on an auto-regressive representation of a linear system , AIAA Journal of Guidance, Control and Dynamics, vol. 19, no. 2 , pp. 355-362, 1996. [8] S. Arimoto, S. Kawamura, F. Miyazaki, Betteringoperation of robots by learning , Journal of Robotic Systems, vol. 1, no. 2, pp. 123-140, 1984.[9] Y. Q. Chen, T. H. Lee, J. X. Xu, S. Yamamoto, Non-Causal Filtering Based Design of Iterative Learning Controller , In Proc. of 1st International Workshop on Iterative Learning Control, Tampa, Florida, pp. 63-70, 1998.[10] K. J. Astrom, T. Hagglund, Automatic Tuning of PIDControllers , Instrument Society of America, 1988.1. OverviewA method of enhancing trajectory tracking in repetitive processes is presented. Tracking enhancement is achieved through the i terative l earning c ontrol (ILC) scheme known as filter-based ILC. In this scheme, only 2 ILC parameters are involved, both of which are open to systematic design in the frequency domain. Also, with its minimal practical requirements, the scheme is very easily implemented on top of PID controllers.This scheme is empirically tested on a PID-controlled water heating plant. Through theoretical modelling and empirical test, it is found that this plant can be approximated with a f irst o rder p lus d ead-t ime (FOPDT) model. This model is used in the design of the ILC parameters. The empirical results show good improvements in tracking.For even better tracking performance, the trajectory is segmented into piecewise smooth sections with ILC applied separately to each. For constant level sections, the feed-forward signal is initialised to the desired control estimated from previous iterations. These modifications are known as profile segmentation and feed-forward initialisation. Empirical test on the heating plant shows that these modifications give the best tracking results.2. Filter based ILCFilter based ILC scheme uses non-causal zero-phase filtering, which simply involves 2 parameters - the filter length M and the learning gain γ. This scheme is robust to random system noise.d y: Desired output profile i e : Error at i th iterationi fb u: Feedback signal at i th iteration i ff u: Feed-forward signal at i th iteration i y: Plant output at i th iterationThe learning law and the overall control signal is )(*)()(1k u h k u k u i fb i ff i ff γ+=+ (15))()()(k u k u k u i fb i ff i +=(16)where k represents the k th time sample of the respective signal, γis the filter gain and *h is the filter operator: moving averager (* denotes convolution)3. Frequency Analysis of Filter-based ILCThe convergence analysis of this scheme is done in the frequency domain. Using this analysis and the approximate model of the plant to be controlled, the systematic design of M and γ can be approached from the frequency perspective.From the analysis, M is found to be critical in eliminating noise from the feedback signal as well as in the tracking performance at high frequencies. A compromise in the value of M must be sought between these 2 opposing considerations.The design of γ is based on Nyquist curves. From the frequency analysis, γ affects the convergence of ILC as well as its convergence rate. Likewise, a compromise must be sought between these 2 opposing factors.4. Empirical Test on a Water Heating PlantFilter based ILC is implemented on a PID-controlled water heating plant. Through theoretical modelling and actual empirical testing, this plant is found to be well approximated by a FOPDT model. This information is used in the design of M and γ.The empirical results show that filter based ILC gives good improvements in tracking.5. Profile Segmentation & Feed-forwardInitialisationFrom the empirical results, it is easy to see that filter-based ILC handles smooth profiles best. Thus, for further improvement in tracking, any piecewise-smooth profile can be divided into smooth sections with ILC applied separately to each - profile segmentation.Also, to hasten convergence, the feed-forward signal is initialised to the estimated desired control signal. This is known as feed-forward initialisation.Coupled with these modifications, filter-based ILC now gives much better tracking performance.Enhancing Trajectory Tracking for a Class of Process Control Problems usingIterative LearningJian Xin Xu, Tong Heng Lee, Yang Quan Chen and Hou TanNational University of Singapore Department of Electrical Engineering10 Kent Ridge Crescent Singapore 119260Singapore(E-mail: elexujx@.sg)。

2025年软件资格考试信息处理技术员(初级)(基础知识、应用技术)合卷试卷及答案指导

2025年软件资格考试信息处理技术员(初级)(基础知识、应用技术)合卷试卷及答案指导

2025年软件资格考试信息处理技术员(基础知识、应用技术)合卷(初级)自测试卷(答案在后面)一、基础知识(客观选择题,75题,每题1分,共75分)1、软件工程的三个基本要素是()A. 软件需求、软件设计、软件测试B. 软件需求、软件实现、软件维护C. 软件需求、软件项目管理、软件测试D. 软件设计、软件实现、软件维护2、在软件工程中,需求分析的主要目的是()A. 确定软件的功能和性能B. 设计软件的架构和模块C. 编写软件的源代码D. 测试软件的可用性和稳定性3、题干:以下关于操作系统内核的描述,正确的是()A. 操作系统内核是计算机硬件的一部分B. 操作系统内核是操作系统的核心部分,负责管理计算机硬件资源C. 操作系统内核只负责处理用户请求,不涉及硬件资源管理D. 操作系统内核是用户程序的一部分4、题干:以下关于数据库管理系统的描述,错误的是()A. 数据库管理系统(DBMS)是数据库系统的核心软件B. 数据库管理系统负责数据的存储、检索、更新和维护C. 数据库管理系统不负责数据的备份和恢复D. 数据库管理系统提供用户界面,方便用户对数据库进行操作5、在计算机系统中,以下哪个设备属于输入设备?A. 打印机B. 显示器C. 鼠标D. 键盘6、在操作系统中,以下哪个概念指的是计算机中程序和数据的存储区域?A. 内存B. 硬盘C. CPUD. 网络接口卡7、在计算机系统中,CPU与内存之间的数据传输宽度通常指的是什么?A. 数据总线的宽度B. 地址总线的宽度C. 控制总线的宽度D. 存储单元的大小8、下列哪一项不是操作系统的功能?A. 进程管理B. 文件管理C. 用户界面管理D. 硬件直接控制9、以下哪种数据结构最适合用于实现一个需要频繁插入和删除元素的有序序列?A. 链表B. 数组C. 二叉搜索树D. 平衡二叉搜索树 10、在面向对象编程中,以下哪个原则强调“一个类应该只包含它所需的功能,不应包含其他无关的功能”?A. 单一职责原则(Single Responsibility Principle, SRP)B. 开放封闭原则(Open-Closed Principle, OCP)C. 依赖倒置原则(Dependency Inversion Principle, DIP)D. 接口隔离原则(Interface Segregation Principle, ISP)11、在计算机网络中,用来衡量数据传输可靠性的指标是:A. 误码率B. 频带利用率C. 信道容量D. 吞吐量12、下列不属于操作系统基本功能的是:A. 处理器管理B. 存储管理C. 文件管理D. 程序设计13、以下关于计算机系统组成中,不属于硬件设备的是:A. CPUB. 主板C. 显卡D. 操作系统14、在计算机系统中,下列哪个部件主要用来存储和读取数据?A. CPUB. 内存C. 硬盘D. 显卡15、下列选项中哪一个不是计算机硬件?A. 操作系统B. 内存条C. 显卡D. 硬盘16、在下列存储单位中,哪个单位最大?A. GB (Gigabyte)B. KB (Kilobyte)C. MB (Megabyte)D. TB (Terabyte)17、以下关于数据结构中栈的描述,正确的是()A. 栈是一种线性表,其插入和删除运算都在一端进行B. 栈是一种非线性结构,其插入和删除运算都在一端进行C. 栈是一种非线性结构,其插入和删除运算都在另一端进行D. 栈是一种线性表,其插入和删除运算都在另一端进行18、在数据库管理系统中,以下关于SQL语言中JOIN操作的说法,错误的是()A. JOIN操作用于连接两个或多个表B. INNER JOIN操作返回两个表中匹配的行C. LEFT JOIN操作返回左表中所有的行,右表中没有匹配的行时返回NULLD. RIGHT JOIN操作返回右表中所有的行,左表中没有匹配的行时返回NULL19、在数据库设计中,E-R图(实体-联系图)用于描述数据的哪种模型?A. 逻辑模型B. 物理模型C. 概念模型D. 结构模型 20、下列选项中,哪一项不是软件工程的基本原则?A. 遵循良好的编程实践B. 提高软件的可重用性C. 增强软件的复杂度D. 保证软件的可靠性21、在关系数据库中,若要实现多个表之间数据的连接操作,通常使用以下哪种操作符?A. INB. BETWEENC. LIKE22、以下哪个选项不属于面向对象程序设计的基本原则?A. 封装B. 继承C. 多态D. 重载23、关于计算机网络的描述,下列哪一项是错误的?A. 计算机网络是由多台计算机通过通信设备和线路连接起来,按照网络协议实现数据通信和资源共享的系统。

TcpReassembly

TcpReassembly

Robust TCP Stream Reassembly In the Presence of Adversaries Sarang Dharmapurikar Vern PaxsonWashington University in Saint Louis International Computer Science Institute,Berkeley sarang@ vern@AbstractThere is a growing interest in designing high-speed network de-vices to perform packet processing at semantic levels above the network layer.Some examples are layer-7switches,content in-spection and transformation systems,and network intrusion de-tection/prevention systems.Such systems must maintain per-flow state in order to correctly perform their higher-level pro-cessing.A basic operation inherent to per-flow state manage-ment for a transport protocol such as TCP is the task of reassem-bling any out-of-sequence packets delivered by an underlying unreliable network protocol such as IP.This seemingly prosaic task of reassembling the byte stream becomes an order of mag-nitude more difficult to soundly execute when conducted in the presence of an adversary whose goal is to either subvert the higher-level analysis or impede the operation of legitimate traffic sharing the same network path.We present a design of a hardware-based high-speed TCP reassembly mechanism that is robust against attacks.It is in-tended to serve as a module used to construct a variety of net-work analysis systems,especially intrusion prevention systems. Using trace-driven analysis of out-of-sequence packets,wefirst characterize the dynamics of benign TCP traffic and show how we can leverage the results to design a reassembly mechanism that is efficient when dealing with non-attack traffic.We then refine the mechanism to keep the system effective in the pres-ence of adversaries.We show that although the damage caused by an adversary cannot be completely eliminated,it is possible to mitigate the damage to a great extent by careful design and resource allocation.Finally,we quantify the trade-off between resource availability and damage from an adversary in terms of Zombie equations that specify,for a given configuration of our system,the number of compromised machines an attacker must have under their control in order to exceed a specified notion of “acceptable collateral damage.”1IntroductionThe continual growth of network traffic rates and the in-creasing sophistication of types of network traffic process-ing have driven a need for supporting traffic analysis using specialized hardware.In some cases the analysis is in a purely passive form(intrusion detection,accounting,per-formance monitoring)and for others in an active,in-line form(intrusion prevention,firewalling,content transfor-mation,network address translation).Either way,a key consideration is that increasingly the processing must op-erate at a semantic level higher than the network layer;in particular,we often can no longer use stateless process-ing but must instead maintain per-flow state in order to correctly perform higher-level analyses.Such stateful analysis brings with it the core problem of state management:what hardware resources to allocate for holding state,how to efficiently access it,and how to reclaim state when the resources become exhausted.De-signing a hardware device for effective state management can require considerable care.This is particularly the case for in-line devices,where decisions regarding state man-agement can adversely affect network operation,such as prematurely terminating established TCP connections be-cause the device no longer has the necessary state to cor-rectly transform theflow.Critically,the entire problem of state management be-comes an order of magnitude more difficult when con-ducted in the presence of an adversary whose goal is to either subvert the hardware-assisted analysis or impede the operation of legitimate traffic along the same network path.Two main avenues for subverting the analysis(“eva-sion”)are to exploit the ambiguities inherent in observ-ing network traffic mid-stream[18,12]or to cause the hardware to discard the state it requires for sound anal-ysis.If the hardware terminatesflows for which it has lost the necessary state,then the adversary can pursue the second goal of impeding legitimate traffic—i.e.,denial-of-service,where rather than targeting the raw capacity of the network path or end server,instead the attacker targets the newly-introduced bottleneck of the hardware device’s limited state capacity.Issues of state-holding,disambiguation,and robust op-eration in the presence offlooding arise in different ways depending on the semantic level at which the hardware conducts its analysis.In this paper we consider one of the basic building blocks of higher-level analysis,namely the conceptually simple task of reassembling the layer-4 byte streams of TCP connections.As we will show,this seemingly prosaic bookkeeping task—just track the con-nection’s sequence numbers,buffer out-of-sequence data,lay down new packets in the holes theyfill,and deliver to the next stage of processing any bytes that are now in-order—becomes subtle and challenging when faced with (i)limited hardware resources and,more importantly,(ii) an adversary who wishes to either undermine the sound-ness of the reassembly or impair the operation of other connections managed by the hardware.While fast hardware for robust stream reassembly has a number of applications,we focus our discussion on en-abling high-speed intrusion prevention.The basic model we assume is a high-speed,in-line network element de-ployed at a site’s gateway(so it sees both directions of the flows it monitors).This module serves as thefirst stage of network traffic analysis,with its output(reassembled byte streams)feeding the next stage that examines those byte streams for malicious activity.This next stage might also execute in specialized hardware(perhaps integrated with the stream reassembly hardware),or could be a function-ally distinct unit.A key consideration is that because the reassembly module is in-line,it can(i)normalize the traffic[12]prior to subsequent analysis,and(ii)enable intrusion preven-tion by only forwarding traffic if the next stage signals that it is okay to do so.(Thus,by instead signaling the hardware to discard rather than forward a given packet, the next stage can prevent attacks by blocking their deliv-ery to the end receiver.)As we will discuss,another key property with operating in-line is that the hardware has the potential means of defending itself if itfinds its resources exhausted(e.g.,due to the presence of state-holding at-tacks).It can either reclaim state that likely belongs to at-tackerflows,or else at least exhibit graceful degradation, sacrificing performancefirst rather than connectivity.A basic notion we will use throughout our discussion is that of a sequence gap,or hole,which occurs in the TCP stream with the arrival of a packet with a sequence number greater than the expected sequence number.Such a hole can result from packet loss or reordering.The hardware must buffer out-of-order packets until a subsequent packet fills the gap between the expected and received sequence numbers.After this gap isfilled,the hardware can then supply the byte-stream analyzer with the packets in the correct order,which is crucial for higher-level semantic analysis of the traffic stream.At this point,the hardware can release the buffer al-located to the out-of-order packet.However,this pro-cess raises some natural questions:if the hardware has to buffer all of the out-of-order packets for all the connec-tions,how much buffer does it need for a“typical”TCP traffic?How long do holes persist,and how many con-nections exhibit them?Should the hardware immediately forward out-of-order packets along to the receiver,or only after they have been inspected in the correct order?To explore these questions,we present a detailed trace-driven analysis to characterize the packet re-sequencing phenomena seen in regular TCP/IP traffic.This analy-sis informs us with regard to provisioning an appropriate amount of resources for packet re-sequencing.Wefind that for monitoring the Internet access link of sites with several thousand hosts,less than a megabyte of buffer suf-fices.Moreover,the analysis helps us differentiate between benign TCP traffic and malicious traffic,which we then leverage to realize a reassembly design that is robust in the presence of adversaries.After taking care of traf-fic normalization as discussed above,the main remain-ing threat is that an adversary can attempt to overflow the hardware’s re-sequencing buffer by creating numerous se-quence holes.If an adversary creates such holes in a dis-tributed fashion,spreading them across numerous connec-tions,then it becomes difficult to isolate the benign traffic from the adversarial.Tackling the threat of adversaries gives rise to another set of interesting issues:what should be done when the buffer overflows?Terminate the connections with holes, or just drop the buffered packets?How can we minimize the collateral damage?In light of these issues,we devise a buffer management policy and evaluate its impact on system performance and security.We frame our analysis in terms of Zombie equations: that is,given a set of operational parameters(available buffer,traffic volume,acceptable collateral damage),how many total hosts(“zombies”)must an adversary control in order to inflict an unacceptably high degree of collateral damage?The rest of the paper is organized as follows.Section2 discusses the related work.In Section3we present the re-sults of our trace-driven analysis of out-of-sequence pack-ets.Section4describes the design of a basic reassembly mechanism which handles the most commonly occurring re-ordering case.In Section5,we explore the vulnera-bilities of this mechanism from an adversarial point of view,refine it to handle attacks gracefully,and develop the Zombie equations quantifying the robustness of the system.Section6concludes the paper.2Related WorkPrevious work relating to TCP stream reassembly primar-ily addresses(i)measuring,characterizing and modeling packet loss and reordering,and(ii)modifying the TCP protocol to perform more robustly in the presence of se-quence holes.Paxson characterized the prevalence of packet loss and reordering observed in100KB TCP transfers between a number of Internet hosts[16],recording the traffic at both sender and receiver in order to disambiguate behavior.Hefound that many connections are loss-free,and for those that are not,packet loss often comes in bursts of consec-utive losses.We note that such bursts do not necessarily create large sequence holes—if all packets in aflight are lost,or all packets other than those at the beginning,then no hole is created.Similarly,consecutive retransmissions of the same packet(which would count as a loss burst for his definition)do not create larger holes,and again might not create any hole if the packet is the only one un-acknowledged.For packet reordering,he found that the observed rate of reordering varies greatly from site to site, with about2%of all packets in his1994dataset being reordered,and0.6%in his1995dataset.However,it is difficult to gauge how we might apply these results to to-day’s traffic,since much has changed in terms of degree of congestion and multipathing in the interim.Bennett and colleagues described a methodology for measuring packet reordering using ICMP messages and reported their results as seen at a MAE-East router[5]. They found that almost90%of the TCP packets were re-ordered in the network.They provide an insightful dis-cussion over the causes of packet reordering and isolate the packet-level parallelism offered by packet switches in the data path as the main culprit.Our observations differ significantly from theirs;wefind that packet re-ordering in TCP traffic affects2–3%of the overall traf-fic.We attribute this difference to the fact that the results in[5]reflect an older generation of router architecture. In addition,it should be mentioned that some router ven-dors have modified or are modifying router architectures to provide connection-level parallelism instead of packet level-parallelism in order to eliminate reordering[1]. Jaiswal and colleagues presented measurements of out-of-sequence packets on a backbone router[13].Through their passive measurements on recent OC-12and OC-48 traces,they found that packet reordering is seen for3–5% of overall TCP traffic.This more closely aligns with our findings.Gharai and colleagues presented a similar mea-surement study of out-of-order packets using end-to-end UDP measurements[11].They too conclude that reorder-ing due to network parallelism is more prevalent than the packet loss.Bellardo and Savage devised a clever scheme for mea-suring TCP packet reordering from a single endpoint and discriminating between reordering on the forward path with that on the reverse path[4].(For many TCP con-nections,reordering along one of the paths is irrelevant with respect to the formation of sequencing holes,because data transfers tend to be unidirectional,and reordered ac-knowledgments do not affect the creation of holes.)They quantify the degree that reordering rates increase as the spacing between packets decreases.The overall reorder-ing rates they report appear consistent with our own ob-servations.Laor et.al.investigated the effect of packet reorder-ing on application throughput[14].In a laboratory with a Cisco backbone router connecting multiple end-hosts run-ning different OSes,they measured HTTP throughput as a function of injected packet reordering.Their experi-ments were however confined to cases where the reorder-ing elicited enough duplicate-ACKs to trigger TCP’s“fast retransmission”in the sender.This leads to a significant degradation in throughput.However,wefind that this degree of reordering does not represent the TCP traffic behavior seen in actual traffic,where very few reordered packets cause the sender to retransmit spuriously. Various algorithms have been proposed to make TCP robust to packet reordering[6,7,21].In[6],Blanton and Allman propose to modify the duplicate-ACK thresh-old dynamically to minimize the effect of duplicate re-transmissions on TCP throughput degradation.Our work differs from theirs in that we use trace-driven analysis to guide us in choosing various parameters to realize a robust reassembly system,as well as our interest in the compli-cations due to sequence holes being possibly created by an adversary.In a project more closely related to our work,Schuehler et al.discuss the design of a TCP processor that maintains per-flow TCP state to facilitate application-level analy-sis[20].However,the design does not perform packet reordering—instead,out-of-order packets are dropped. There are also some commercial network processors available today that perform TCP processing and packet reassembly.Most of these processors,however,are used as TCP offload engines in end hosts to accelerate TCP pro-cessing.To our knowledge,there are few TCP processors which process and manipulate TCPflows in-line,with In-tel’s TCP processor being one example[10].TCP pack-ets are reordered using a CAM that stores the sequence numbers of out-of-order packets.When a new data packet arrives,the device compares its sequence number against the CAM entries to see if it plugs any of the sequence holes.Unfortunately,details regarding the handling of edge cases do not appear to be available;nor is it clear how such processors handle adversarial attacks that aim to overflow the CAM entries.Finally,Paxson discusses the problem of state manage-ment for TCP stream reassembly in the presence of an adversary,in the context of the Bro intrusion detection system[17].The problem is framed in terms of when to release memory used to hold reassembled byte streams, with the conclusion being to do so upon observing an ac-knowledgment for the data,rather than when the datafirst becomes in-sequence,in order to detect inconsistent TCP retransmissions.The paper does not discuss the problem of managing state for out-of-sequence data;Bro simply buffers such data until exhausting available memory,at which point the system fails.Univ sub Lab2Super T3Munich Trace duration(seconds)3033,6043,60610,8006,1676.2M 1.5MTotal connections53K215K21K 1.04M 5.62M17,4764,469Total holes2,04879,3214,088575K 1.88M91KB68KBAvg buffer required(bytes)5,94313,39212228,707178KB139Max simultaneous holes91668561 in single connection87%90%Fraction of connections with96%97%97%95%97% single concurrent hole0.02%0.06%minute trace,since wefind that“local”maxima are gener-ally short-lived and thus would not likely overlap in time. On the other hand—and this is a major point to bear in mind—the short duration of the Univ sub and Univ19 traces introduces a significant bias towards underestimat-ing the prevalence of holes.This comes both from the short lifetimes of the traces(less opportunity to observe diverse behavior)and,perhaps more significantly,from the truncation effect:we do not analyze connections that were already in progress when a trace began,or that have notfinished upon termination of the trace,because we donot accurately know their state in terms of which pack-ets constitute holes(upon trace startup)or how long it takes holes to resolve(upon trace termination).This will tend to bias our analysis towards under-representing long-running connections,and these may in turn be respon-sible for a disproportionate number of holes.However, the overall consistency of the results for the university traces with those for the longer-lived traces suggests that the generalfindings we base on the traces—that buffer re-quired for holes are modest,connections tend to have few holes that take little per-hole memory,and that holes re-solve quickly—remain plausible.In addition,the similar Munich environment does not suffer from these biases.Its results mostly agree qualitatively with those from Univ19, except it shows a much higher level of average concurrent holes,which appears to be due to a higher prevalence of fine-grained packet reordering.Thefirst of the research lab traces,Lab lo,was extracted from ongoing tracing that the lab runs.This tracing uses an elaboratefilter to reduce the total traffic to about5% of its full volume,and this subset is generally recorded without any measurement drops.Thefiltering includes eliminating traffic corresponding to some popular ports; in particular,HTTP,which includes the majority of the site’s traffic.Thus,this trace is more simply a touchstone reflecting a lower-volume environment.Lab2includes all packet headers.It was recorded dur-ing workday afternoon hours.The packetfilter inspected 46M packets,reporting about1-in-566dropped.Super is a full header trace captured during workday afternoon hours,with thefilter inspecting13.5M packets and report-ing no drops.T3is a three-hour full header trace captured during workday afternoon hours,with thefilter capturing101M packets and reporting no drops.The mean inbound data rate over the three hours was30.3Mbps(with the link having a raw capacity of44.7Mbps);outbound was 11.0Mbps.Note that the actual monitoring was at a Gbps Ethernet link just inside of the T3bottleneck,so losses in-duced by the congested T3on packets arriving from the exterior Internet would show up as holes in the trace,but losses induced on traffic outbound from the site would not. However,thefigures above show that the congestion was0.10.20.30.40.50.60.70.80.911e-07 1e-06 1e-05 0.0001 0.001 0.01 0.1 1 10 100 1000 FractionofholesHole duration (seconds)SuperUniv_subUniv_19Lab_2Lab_loT3MunichFigure2:Cumulative distribution function of the duration of holes.Most of the holes have a lifetime of less than0.01 seconds.primarily for the inbound traffic.We note that monitor-ing in this fashion,just inside of a bottleneck access link, is a natural deployment location for intrusion prevention systems and the like.Table1summarizes the datasets and some of the char-acteristics of the sequence holes present in their TCP con-nections.We see that holes are very common:in Univ sub and Super,about3%of connections include holes,while in Lab lo and T3,the number jumps to10–20%.Overall, 0.1%–0.5%of all packets lead to holes.Figure1shows how the reassembly buffer occupancy changes during the traces.Of the four traces,Super is pe-culiar:the buffer occupancy is mostly very low,but surges to a high value for a very short period.This likely reflects the fact that Super contains fewer connections,many of which do bulk data transfers.It is important to note the de-synchronized nature of the sequence hole creation phenomenon.A key point is that the buffer occupancy remains below600KB across all of the traces,which indicates that stream reassembly over a set of several thousand connections requires only a modest amount of memory,and thus may be feasible at very high speeds.It is also noteworthy how some holes last for a long du-ration and keep the buffer level elevated.These are visi-ble as plateaus in several of thefigures—for example,be-tween T=100and T=150in the Univ sub plot—and are due to some long lived holes whose durations overlap, whereas for the Munich trace we observed that the aver-age buffer occupancy is significantly higher than the rest of the traces.This too is a result of concurrent but short-lived holes,although the average number of concurrent holes for this trace is larger(around60)compared to the other traces(<5concurrent holes on an average).The frequent sudden transitions in the buffer level show that most of the holes are quite transient.Indeed,Figure21000020000300004000050000 60000700000 500 1000 1500 2000 2500 3000 3500 4000B u f f e r r e q u r e m e n t (b y t e s )Time (seconds)Lab_lo(a)Lab lo 050000 100000 150000 200000250000 3000000 500 1000 1500 2000 2500 3000 3500 4000B u f f e r r e q u r e m e n t (b y t e s )Time (seconds)lab_hv(b)Lab 2 0500001000001500002000002500003000000 500 1000 1500 2000 2500 3000 3500 4000B u f f e r r e q u r e m e n t (b y t e s )Time (seconds)Super(c)Super1000020000 30000 40000 50000 60000 70000 80000 90000 100000 0 1000 200030004000 5000 6000B u f f e r r e q u r e m e n t (b y t e s )Time(seconds)0123456789101112131415161718(d)Univ 19 0 100000200000300000400000500000600000 10001500 2000 25003000 3500 4000 4500 5000B u f f e r r e q u r e m e n t (b y t e s )Time (seconds)(e)Munich 0500001000001500002000002500000 2000 400060008000 12000B u f f e r r e q u r e m e n t (b y t e s )Time (seconds)T3(f)T3Figure 1:Reassembly buffer occupancy due to unfilled holes.Univ sub ,which we omitted,is similar to the elements ofUniv 19.0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 110100 1000 10000 1000001e+06F r a c t i o n o f h o l e sBuffer accumulated by a hole (bytes)Super Univ_sub Univ_19Lab_2Lab_loT3MunichFigure 3:Cumulative distribution of the buffer accumulatedby a hole.shows the cumulative distribution of the duration of holes.Most holes have a very short lifetime,strongly sugges-tive that they are created due to packet reordering and not packet loss,as in the latter case the hole will persist for at least an RTT,significantly longer than a millisecond for non-local connections.The average hole duration is less than a millisecond.In addition,the short-lived holes have a strong bias towards the out-of-order packet (sent later,arriving earlier)being smaller than its later-arriving pre-decessor,which is suggestive of reordering due to multi-pathing.Finally,in Figure 3we plot the cumulative distribution of the size of the buffer associated with a single hole.The graph shows that nearly all holes require less than 10KB of buffer.This plot thus argues that we can choose an ap-propriate limit on the buffer-per-hole so that we can iden-tify an adversary trying to claim an excessively large por-tion.4System architectureSince our reassembly module is an in-line element,one of its key properties is the capability to transform the packet stream if needed,including dropping packets or killing connections (by sending TCP RST packets to both sides and discarding the corresponding state).This ability al-lows the system to make intelligent choices for more ro-bust performance.TCP streams semantically allow nearly arbitrary permutations of sequence hole creation (illus-trated in Figure 4below).In particular,all of the fol-lowing possible scenarios might in principle occur in a TCP stream:very long-lived holes;holes that accumulate large amounts of buffer;large numbers of simultaneous holes in a connection;presence of simultaneous holes in both directions of a single connection;and/or a high rate of sequence hole creation.However ,as our trace analysis shows,most of these cases are highly rare in typical TCP traffic.On the one hand,we have an objective to preserveend-to-end TCP semantics as much as possible.On the other hand,we have limited hardware resources in terms of memory and computation.Hence,we adopt the well-known principle of“optimize for the common case,de-sign for the worst case,”i.e.,the system should be effi-cient in handling commonly-seen cases of reordering,and should not catastrophically fail when faced with a worst-case scenario,but exhibit graceful degradation.Since the traces highlight that the highly dominant case is that of a single,short-lived hole in just one direction within a connection,we design the system to handle this case ef-ficiently.We then also leverage its capability of dropping packets in order to restrict the occurrence of uncommon cases,saving us from the complexity of having to accom-modate these.With this approach,most of the TCP traffic passes un-altered,while a very small portion experiences a higher packet loss rate than it otherwise would.Note that this latter traffic is likely already suffering from impaired per-formance due to TCP’s congestion-control response in the presence of packet loss,since multiple concurrent holes are generally due to loss rather than reordering.We fur-ther note that dropping packets is much more benign than terminating connections that exhibit uncommon behavior, since the connection will still proceed by retransmitting the dropped packet.The reader might wonder:Why not drop packets when thefirst hole is created?Why design a system that bothers buffering data at all?The simple answer:the occurrence of a single connection hole is very common,much more so than multiple holes of any form,and we would like to avoid the performance degradation of a packet drop in this case.4.1Maintaining Connection RecordsOur system needs to maintain TCP connection records for thousands of simultaneous connections,and must ac-cess these at high speeds.For such a high-speed and high-density storage,commodity synchronous DRAM (SDRAM)chip is the only appropriate choice.Today, Dual Data Rate SDRAM modules operating at166MHz and with a capacity of512MB are available commer-cially[15].With a64-bit wide data bus,such an SDRAM module offers a raw data throughput of64×2×166×106≈21Gbps.However,due to high access latency,the actual throughput realized in practice is generally much less.Nevertheless,we can design memory controllers to exploit bank-level parallelism in order to hide the access latency and achieve good performance[19].When dimensioning connection records,we want to try tofit them into multiples of four SDRAM words,since modern SDRAMs are well suited for burst access with such multiples.With this practical consideration,we de-sign the following connection record.First,in the absence of any sequence hole in the stream,the minimum informa-tion we need in the connection record is:•CA,SA:client/server address(4bytes+4bytes)•CP,SP:client/server port(2bytes+2bytes)•Cseq:client’s expected sequence number(4bytes)•Sseq:server’s expected sequence number(4bytes)•Next:pointer to the next connection record for re-solving hash collisions(23bits)•Est:whether the connection has been established,i.e.,we’ve seen both the initial SYN and a SYN-ACK(1bit).This bit also helps us in identifying SYNfloods.Here,we allocate23bits to store the pointer to the next connection record,assuming that the total number of records does not exceed8M.When a single sequence hole is present in a connection,we need to maintain the following extra information:•CSH:Client hole or server hole(1bit)•HS:hole size(2bytes)•BS:buffer size(2bytes)•Bh,Bt:pointer to buffer head/tail(2bytes+2bytes)•PC:IP Packet count in the buffer(7bits)Theflag CSH indicates whether the hole corresponds to the client-to-server stream or the server-to-client stream. Hole size tells us how many bytes are missing,starting from the expected sequence number of the client or the server.Buffer size tells how many bytes we have buffered up,starting from the end of the hole.Here we assume that both the hole size and the buffer size do not exceed 64KB.We drop packets that would cause these thresh-olds to be exceeded,a tolerable performance degradation as such packets are extremely rare.Finally,Bh and Bt are the pointers to the head and tail of the associated buffer. We access the buffer at a coarse granularity of a“page”instead of byte.Hence,the pointers Bh and Bt point to pages.With two bytes allocated to Bh and Bt,the number of pages in the buffer must not exceed64K.We can com-pactly arrange thefields mentioned above in four8-byte SDRAM words.We keep all connection records in a hash table for ef-ficient access.Upon receiving a TCP packet,we com-pute a hash value over its4-tuple(source address,source port,destination address,destination port).Note that the hash value needs to be independent of the permutation of source and destination ing this hash value as the address in the hash table,we locate the corresponding connection.We resolve hash collisions by chaining the colliding records in a linked list.A question arises here regarding possibly having to traverse large hash chains. Recall that by using a512MB SDRAM,we have space。

TPOT Translucent Proxying of TCP Pablo Rodriguez

TPOT Translucent Proxying of TCP Pablo Rodriguez

TPOT:Translucent Proxying of TCPPablo Rodriguez Sandeep Sibal,Oliver SpatscheckEURECOM,France AT&T Labs–Researchrodrigue@eurecom.fr sibal,spatsch@AbstractTransparent Layer-4proxies are being widely deployed inthe current Internet to enable a vast variety of applications.These include Web proxy caching,transcoding,service dif-ferentiation,and load balancing.To ensure that all IP pack-ets of an intercepted TCP connection are seen by the inter-cepting transparent proxy,they must sit at focal points inthe network.Translucent Proxying of TCP(TPOT)over-comes this limitation by using TCP options and IP tun-neling to ensure that all IP packets belonging to a TCPconnection will traverse the proxy that intercepted thefirstpacket.This guarantee allows the ad-hoc deployment ofTPOT proxies anywhere within the network.No extra sig-naling support is required for its correct functioning.Inaddition to the advantages TPOT proxies offer at the ap-plication level,they also usually improve the throughputof intercepted TCP connections.In this paper we discussthe TPOT protocol,explain how it enables various appli-cations,describe a prototype implementation,analyze itsimpact on the performance of TCP,and address scalabilityissues.1Introduction and Related WorkTransparent proxies are commonly used in solutionswhen an application is to be proxied in a manner thatis completely oblivious to a client,without requiring anyprior configuration.Recently,there has been a great dealof activity in the area of transparent proxies for Webcaching.Several vendors in the area of Web proxy cachinghave announced dedicated Web proxy switches and appli-ances[1,2,8,12].In the simplest scenario,a transparent proxy interceptsall TCP connections that are routed through it.This maybe refined by having the proxy intercept TCP connectionsdestined only for specific ports(e.g.,80for HTTP),or for aspecific set of destination addresses.The proxy responds tothe client request,masquerading as the remote web server.A TPOT proxy,on seeing such a SYN packet,intercepts it.The ACK packet that it returns to the source carries the proxy’s IP address stuffed within a TCP-OPTION.On re-ceiving this ACK,the source sends the rest of the packets via the intercepting proxy over an IP tunnel.The protocol is discussed in detail in Section2.The above mechanism will work if the client is TPOT enabled.In a situation where the client is not TPOT en-abled,we may still be able to use TPOT.As long as the client is single-homed,and has a proxy at a focal point,we can TPOT enable the connection by having the proxy be-have like a regular transparent proxy on the side facing the client,but a TPOT(translucent)proxy on the side facing the server.Implementation of such a proxy is covered in Section3.The general idea of using TCP-OPTIONs as a signaling scheme for proxies is not new[20].However combining this idea with IP tunneling to pin down the path of a TCP connection has not been proposed before to the best of our knowledge.One alternative to TPOT is the use of Active Network techniques[31].We believe that TPOT is a relatively lightweight solution that does not require an overhaul of existing IP networks.In addition,TPOT can be deployed incrementally in the current IP network,without disrupting other Internet traffic.1.1Applications of TPOTIn addition to allowing the placement of transparent Web proxy caches anywhere in the network,TPOT also enables newer architectures that employ Web proxy networks.In such architectures a proxy located along the path from the client to the server simply picks up the request and satisfies it from its own cache,or lets it pass through.This,in turn, may be picked up by another proxy further down the path. These incremental actions lead to the dynamic construction of spontaneous hierarchies rooted at the server.Such archi-tectures require the placement of multiple proxies within the network,not just at their edges and gateways.Existing proposals[15,21,33]either need extra signaling,or they simply assume that all packets of the connection will pass through an intercepting proxy.Since TPOT explicitly pro-vides this guarantee,implementing such architectures with TPOT is elegant and easy.With TPOT no extra signaling support or prior knowledge of neighboring proxies is re-quired.While the original motivation for TPOT was to enable Web proxy caching and Web proxy caching networks in an oblivious and ad-hoc fashion,the general idea of TPOT can also be applied to enable proxy-based services in a variety of other applications layered on top of TCP in an elegant and efficient fashion.One such use is Transcoding.This refers to a broad class of problems that involve some sort of adaptation of con-tent(e.g.,[13,23]),where content is transformed so as to increase transfer efficiency,or is distilled to suit the capa-bilities of the client.Another similar use is the notion of enabling a transformer tunnel[30]over a segment of the path within which data transfer is accomplished through some alternate technique that may be better suited to the specific properties of the link(s)traversed.Proposals that we know of in this space require one end-point to explic-itly know of the existence of the other end-point–requir-ing either manual configuration or some external signal-ing/discovery protocol.TPOT can accomplish such func-tionality in a superior fashion.In TPOT an end-point non-invasivelyflags a connection,signifying that it can trans-form content–without actually performing any transfor-mation.Only if and when a second TPOT proxy(capable of handling this transformation)sees thisflag and notifies thefirst proxy of its existence,does thefirst proxy begin to transform the connection.Note that this does not require any additional handshake for this to operate correctly,since the TPOT mechanism plays out in concert with TCP’s ex-isting3-way handshake.Another use of TPOT is to enable the selection of spe-cific applications for preferential treatment.Such service differentiation could be used to enable and enforce Q0S policies.One might also want to prioritize traffic belong-ing to an important set of clients,or a set of mission-critical servers.1.2Paper OverviewSection2describes the TPOT protocol.In addition to the basic version,a pipelined version of the protocol is also discussed.Pathological cases,extensions,and limitations are also studied.Section3details a prototype implementation of TPOT in Scout[24].We use this prototype in all our experiments. We address the TCP level performance of TPOT in Sec-tion4using both theoretical analysis as well as exper-iments.Contrary to what we initially expected,TPOT typically improves the performance of TCP connections. This apparent counter-intuitive result has been observed before[3,7,16],though in somewhat different contexts. In[3]a modified TCP stack called Indirect TCP is em-ployed for mobile hosts to combat problems of mobility and unreliability of wireless links.Results show that em-ploying Indirect TCP outperforms regular TCP.In[16]sim-ilar improvements are reported for the case when TCP con-nections over a satellite link are split using a proxy.Fi-nally,in[7],the authors discuss at length how TCP per-formance may be enhanced by using proxies for HFC net-works.The notion of inserting proxies with the sole rea-son of enhancing performance has recently led to the coin-ing of the term Performance Enhancing Proxies(PEP).An overview is provided in[5].As we will see in Section4, TPOT does indeed enhance performance,but unlike PEP, this is not the motivation behind TPOT.Scalability is an important criterion if TPOT is to bepractically deployed.Section5discusses our approach to solving this problem using a technique that we call TPARTY,which employs a farm of servers that sit behind a front-end machine.The front-end machine only farms out requests to the army of TPOT machines that sit behind it. We show that our solution scales almost linearly with the number of TCP connections in the region of interest. Finally Section6highlights our major contributions,dis-cusses future work,and possible extensions to TPOT.2The TPOT ProtocolThis section describes the operation of the basic and pipelined versions of the TPOT protocol.Pathological cases,extensions,and limitations are also studied.Before describing the operation of the TPOT protocol,we provide a brief background of IP and TCP which will help in better understanding TPOT.See[29]for a detailed discussion of TCP.2.1IP and TCPEach IP packet typically contains an IP header and a TCP segment.The IP header contains the packet’s source and destination IP address.The TCP segment itself contains a TCP header.The TCP header contains the source port and the destination port that the packet is intended for.This4-tuple of the IP addresses and port numbers of the source and destination uniquely identify the TCP connection that the packet belongs to.In addition,the TCP header contains aflag that indicates whether it is a SYN packet,and also an ACKflag and sequence number that acknowledges the receipt of data from its peer.Finally,a TCP header might also contain TCP-OPTIONs that can be used for custom signaling.In addition to the above basic format of an IP packet,an IP packet can also be encapsulated in another IP packet.At the source,this involves prefixing an IP header with the IP address of an intermediate tunnel point on an IP packet.On reaching the intermediate tunnel point,the IP header of the intermediary is stripped off.The(remaining)IP packet is then processed as usual.See RFC2003[27]for a longer discussion.2.2TPOT:Basic VersionIn the basic version of TPOT a source that intends to connect with destination via TCP,as shown in Fig-ure1(a).Assume that thefirst(SYN)packet sent out by to reaches the intermediary TPOT proxy.is the notation that we use to describe a packet that is headed from to,and has andas the source and destination ports respectively.To co-exist peacefully with other end-points that do not wish to talk TPOT,we use a special TCP-OPTION “TPOT,”that a source uses to explicitly indicate to TPOT proxies within the network,such as,that they are inter-ested in using the TPOT mechanism.If does not see this option,it will take no action,and simply forwards the packet on to on its fast-path.If sees a SYN packet that has the TCP-OPTION“TPOT”set,it responds to with a SYN-ACK that encodes its own IP address in the TCP-OPTIONfield.On receiving this packet,must then send the remaining packets of that TCP connection,IP tunneled to.From an implementation standpoint this would imply adding another20byte IP header with’s IP address as destination address to all packets that sends out for that TCP connection.Since this additional header is removed on the next TPOT proxy,the total overhead is limited to20 bytes regardless of the number of TPOT proxies intercept-ing the connection from the source to thefinal destination. This overhead can be further reduced by IP header com-pression[10,18].For applications such as Web Caching where may be able to satisfy a request from,the response is simply served from one or more caches attached to.In the case of a“cache miss”or for other applications where might connect to after inspecting some data,communicates with the destination as shown in Figure1(a).Note that the proxy sets the TCP-OPTION“TPOT”in its SYN to to allow possibly another TPOT proxy along the way to again proxy the connection.Note that Figure1only shows the single proxy scenario.2.3TPOT:Pipelined VersionIn certain situations one can do better that the basic ver-sion of the TPOT protocol.It is possible for to pipeline the handshake by sending out the SYN to immediately after receiving the SYN from.This pipelined version of TPOT is depicted infigure1(b).The degree of pipelining depends on the objective of the proxying mechanism.In the case of an L-4proxy for Web Caching,the initial SYN contains the destination IP ad-dress and port number.Since L-4proxies do not inspect the content,no further information is needed from the connec-tion before deciding a course of action.In such a situation a SYN can be sent out by to almost immediately after received a SYN from,as shown in Figure1(b).In the case of L-7switching,however,the proxy would need to inspect the HTTP Request(or at a minimum the URL in the Request).Since this is typically not sent with the SYN, a SYN sent out to can only happen after thefirst ACK is received by from.This is consistent with Figure1.2.4Pathological CasesWhile the typical operation of TPOT appears correct,we are aware of two pathological cases that also need to be addressed.Destination: (D, D_p)Intermediary: (T, T_p)Source: (S, S_p)DATA: (T,T_p,D,D_p)SYN-ACK: (D,D_p,T,T_p)tcp-option: TPOTSYN: (T,T_p,D,D_p)SYN-ACK: (D,D_p,S,S_p)ip-tunneled via TDATA: (S,S_p,D,D_p)tcp-option: TPOT SYN: (S,S_p,D,D_p)tcp-option: T (a)Basic Version Destination: (D, D_p)Intermediary: (T, T_p)Source: (S, S_p)SYN-ACK: (D,D_p,S,S_p)ip-tunneled via TDATA: (S,S_p,D,D_p)tcp-option: TPOTSYN: (S,S_p,D,D_p)tcp-option: T tcp-option: TPOTSYN: (T,T_p,D,D_p)SYN-ACK: (D,D_p,T,T_p)DATA: (T,T_p,D,D_p)(b)Pipelined VersionFigure 1:The TPOT protocol1.In a situation when a SYN is retransmitted by ,it is possible that the retransmitted SYN is intercepted by ,while the first SYN is not –or vice versa.In such a situation,may receive SYN-ACKs from both as well as .In such a situation simply ignores the second SYN-ACK,by sending a RST to the source of the second SYN-ACK.2.Yet another scenario,is a simultaneous open from to and vice-versa,that uses the same port number.Further intercepts only one of the SYNs.This is a situation that does not arise in the client-server appli-cations which we envision for TPOT.Since can turn on TPOT for only those TCP connections for which TPOT is appropriate,this scenario is not a cause for concern.2.5ExtensionsAs a further sophistication to the TPOT protocol it is pos-sible for multiple proxied TCP connections at a client or proxy that terminate at the same (next-hop)proxy,to inte-grate their congestion control and loss recovery at the TCP level.Mechanisms such as TCP-Int proposed in [4]can be employed in TPOT as well.Since the primary focus of TPOT,and this paper,is to enable proxy services on-the-fly,rather than enhance performance we do not discuss this further.The interested reader is directed to [4]and [32]for such a discussion.Note that an alternative approach is to multiplex sev-eral TCP connections onto a single TCP connection.This is generally more complex as it requires the demarcationof the multiple data-streams,so that they may be sensi-bly demultiplexed at the other end.Proposals such as P-HTTP [22]and MUX [14],which use this approach,may also be built into TPOT.2.6LimitationsAs shown in Figure 1the TCP connection that the inter-mediate proxy initiates to the destinationwill carry ’s IP address.This defeats any IP-based access-control or authentication that may use.Note that this limitation is not germane to TPOT,and in general,is true of any trans-parent or explicit proxying mechanism.In a situation where the entire payload of an IP packet is encrypted,as is the case with IPsec,TPOT will simply not be enabled.This does not break TPOT,it simply restricts its use.The purist may also object to TPOT breaking the seman-tics of TCP,since in TPOT a proxy in general interacts with ,in a fashion that is asynchronous with its interac-tion with .While it is possible to construct a version of TPOT that preserves the semantics of TCP,we do not pur-sue it here.In defense,we point to several applications that are prolific on the Internet today (such as firewalls)that are just as promiscuous as TPOT.3Implementing TPOT in ScoutTPOT can be implemented in any operating system.This section describes an implementation in an OS designed specifically to support communication:Scout [24].Whilethe primary purpose of this section is toflesh out some of the details any implementation would have to address,it has a secondary objective of illustrating how a technique like TPOT can be naturally realized in an operating sys-tem designed around communication-oriented abstractions. Many overheads and latency penalties incurred by proxies on general purpose operating systems like Linux,BSD or WindowsNT can be avoided by such an operating system. Scout is a configurable OS explicitly designed to sup-port dataflows,such as video streams through an MPEG player,or a pair of TCP connections through afirewall. Specifically,Scout defines a path abstraction that encap-sulates data as they move through the system,for example, from input device to output device.In effect,a Scout path is an extension of a network connection through the OS. Each path is an object that encapsulates two important ele-ments:(1)it defines the sequence of code modules that are applied to the data as they move through the system,and (2)it represents the entity that is scheduled for execution.PROXYTCPTCPIPIPNET1NET2Figure2:TCP proxy in two Scout paths.The path abstraction lends itself to a natural implemen-tation of TCP proxies.Figure2schematically depicts a naive implementation of a proxy in Scout.It consists of two paths:one connecting thefirst network interface to the proxy,and another connecting the proxy to a second net-work interface.In thisfigure,the path has a source and a sink queue,and is labeled with the sequence of software modules that define how the path transforms the data it car-ries1.As afirst approximation,the configuration of Scout shown in Figure2represents the implementation one would expect in a traditional OS.The two-path configuration shown in Figure2has subop-timal performance because it requires the hand-off of eachTCP TCP (with TPOT)TCP (with TPOT)TCP (with TPOT)FWDIPNET2NET1FWDNET1IP in IP IP in IP IPIPIPNET2IP in IP Figure 4:TPOT implementation in Scout.buffer is also set to 32KByte to match the values used by the Linux client and server.4Performance MeasurementsThis section analyzes the TCP performance of TPOT based on actual measurements in lab setups using proto-type TPOT proxies.Wherever relevant,we compare the observed performance with expected values suggested by theoretical results on the performance of idealized TCP.In our experiments we use the Reno flavor of TCP [29],which is generally considered to be the most popular im-plementation on the Internet today.We expect our obser-vations to largely hold for other flavors of TCP,though it is quite possible that flavors of TCP such as TCP-Vegas [6],which have different congestion detection and avoidance techniques,may yield somewhat different numbers.The primary focus of the following experiments is to evaluate the performance benefits and penalties in the presence of realistic Round-trip-times (RTTs)and packet losses,when one or more TPOT machines intercepts a TCP connection.For these experiments the TPOT machines are not overloaded.Section 5discusses overload situations,and techniques of scaling TPOT to combat this.For our experiments we tested the pipelined version of TPOT (See Section 2).In the worst case the basic version would incur an additional delay of half a round-trip-time.Aside from this,the two versions of TPOT yield similar results.4.1SetupAll hosts used in our experiment are at least 200MHz Pentium II workstations with 256KB cache,32MB RAM,and 3COM 3x5932-bit PCI 10/100Mb/s adapters.The first TPOT machine runs the transparent proxy version of TPOT,while the second TPOT machine runs the interior TPOT version which are described in the previous section.The clients and servers are Linux 2.2.12machines.Thephysical configuration of our test setup is shown in Fig-ure 5.The client is connected with a 10Mbit hub to the first TPOT machine.The first TPOT machine is connected by another 10Mbit Hub to the second TPOT machine.The second TPOT machine is in turn connected by a 10Mbit hub to the server.TPOT 1HubHubTPOT 2HubServerClientFigure 5:Test SetupThe TPOT machines either operate as TPOT proxies or as simple routers.If they operate as TPOT proxies the first TPOT machine enables the TPOT protocol and data is subsequently tunneled between the TPOT machines.De-lays and losses are added in the device driver code of each TPOT device.The granularity of the delay queue is 1ms.For throughput measurements TTCP is used to measure the throughput on the receiver.TTCP transfers a specified amount of data from the client to the server.After all the data has been transfered,the connection is closed.The re-sults reported for each experiment,are averaged over ten runs.The Linux TCP code implements the Timestamp option which is not supported by Scout.We believe that the impact of this shortcoming is minor in our test environments which by design have low RTT variances.Despite that fact that both Linux and Scout advertise the SACK option during initial handshake,tcpdump traces show that SACK was not used during the data transfer phase of the TCP connections in any of the experiments.4.2Impact of RTTTo measure the impact of the RTT,we introduced de-lay into the output queue of the Ethernet devices on the TPOT machines.In one set of experiments the TPOT ma-chines work exclusively as routers,and in the second set exclusively as TPOT proxies.In the second experiment the distribution of the added delay,is either equally distributedover all links,or is concentrated on a single link between the two TPOT machines.50100150200250300350400050100150200250300T h r o u g h p u t i n K B /s e cTotal RTT in msRouterTPOT delay on central hub TPOT delay equaly distributed100200300400500600700800050100150200250300T h r o u g h p u t i n K B /s e cTotal RTT in msRouterTPOT delay on central hub TPOT delay equaly distributedFigure 6:Throughput for different RTTs for 10KB (top)and 10MB (bottom)document sizes.Figure 6shows the throughput for RTTs from 1-300ms for 10KB and 10MB document sizes.Smaller documents are not measured since the connection establishment time dominates the experiment.The impact on small documents is discussed in section 4.5.The results show that if the en-tire RTT is concentrated at the single link between the two TPOT machines,the throughput is on average 24%worse for 10KB documents and 6%worse for 10MB documents.This is not surprising since the TPOT machines need to per-form additional processing during connection setup which gets amortized over the lifetime of the connection,in addi-tion to the processing for each packet.On the other hand in the case when the RTT is equally distributed over the links,we find that TPOT improves the overall throughput.For example,the TPOT throughput for a 300ms RTT and 10MB documents is 92%better than the routed throughput for the same RTT.Theoretical AnalysisTo better understand this phenomenon we turn our atten-tion to results in the literature that analyze the performance of TCP using idealized models.Note that this section is intended as a theoretical backing for our study,and is notintended as a comprehensive or formal analysis of TCP.In [11]the authors provide a rough sketch for the throughput of an idealized TCP connection in the conges-tion avoidance phase.A more rigorous derivation of this and a few other results may be found in [25].The authors of [26]model TCP throughput in a more comprehensive fashion taking into account TCP timeouts as well.We use the terminology of [26]in what follows.Letand be the packet loss and RTT on link ,and be the corresponding throughput in packets per second .Also,let be the maximum advertised window size.Let the constant ,be the number of pack-ets acknowledged by each ACK.Then in steady-state,as per [26]:(1)Note that the above equation ignores timeouts.Including timeouts does not change the nature of the analysis that follows.A detailed discussion is beyond the scope of this paper.For connections with a high RTT the advertised windowsizebecomes the bottleneck,so that the above equa-tion reduces to:amount.If the scaled window,,becomes so large that it is no longer the bottleneck,the send window will be-come the limiting factor.This case is discussed in the next section.4.3Impact of LossWhile the advertised window size was the determiningfactor offor high RTT connections in the previous experiment,the goal of this experiment is to demonstrate that TPOT also performs better if the sender’s congestion window and not the receiver’s advertised window limits throughput.To study this scenario we randomly drop pack-ets in an uniform and independent fashion in the Ethernet device driver of the TPOT machines.In this experiment no artificial delay is introduced.This results in an RTT of 1ms between the client and server due to the real delay on the Ethernet and TPOT machines.The idea here is to simulate packet losses either due to lossy links or packet loss due to buffer overflows along the path.Again we measure the performance of end to end TCP using the TPOT machines configured exclusively as routers,or as TPOT proxies.In the case where they are configured as proxies the loss is either equally distributed between the links,or is concen-trated on the link between the two TPOT machines.10020030040050060070080090005101520T h r o u g h p u t i n K B /s e cLoss in %RouterTPOT loss on central hub TPOT loss equaly distributedFigure 7:Throughput for different drop rates.Figure 7depicts the results of this experiment for 10MB document sizes for different loss rates.The experiment for 10KB is not reported since the results were highly vari-ant.This is because of the timeout behavior of TCP SYN packets,and the fact that the the total number of packets transfered is low.Figure 7shows that the Router version is slightly better than the TPOT proxy version with packet loss concentrated on the central link.We believe this is due to the overhead involved in introucing TPOT proxies.However when the packet loss is equally distributed,The TPOT proxy version outperforms the Router version by far.Note that this shows up only for throughput values below 600KBps,since above this,the 10Mbps Ethernet dominates the picture.Theoretical AnalysisIn this situation,theof each link is the same.Let us denote this by .When the throughput is not dom-inated by,Equation 1reduces to:is thus determined by the most lossy link.Note that in this case the overall loss probability is conserved,so that,.The results of the experiments roughlycorroborate this.For the 10MB document size,we see that after the throughput drops below the Ethernet saturation point,the equally distributed packet loss case,outperforms the router case significantly –in fact slightly more than the theoretically expected value ofCase Link1Link3(RTT/drop)(RTT/drop) I1ms/0%1ms/0%139ms/6.96%III110ms/3%70ms/6% Table1:RTT and packet drop rate distributions for the ex-periments.of size100KB was measured from the client to the server. The TPOT machines,as in the previous experiments,were either used as routers or as TPOT proxies.The RTT and packet drop rate distribution was the same in either case.Case TPOT throughput24kBps24kBps24kBps Table2:100KB transfer for case I-III with and without TPOT.Table2shows the results of the experiment.Not surpris-ingly,the throughput remains the same for all cases,when the TPOT machines are configured as routers.It does not make any difference where the data is lost or where RTT delays are introduced as long as the end to end RTT and loss are equal.Also,not surprisingly,the throughput of the proxied TCP increases by more than a factor of three due to the reduction of the individual TCP connections drop rate and RTT.This case study also shows that additional bene-fits can be derived from the TPOT architecture.Case I and II can be implemented without TPOT since all proxies can be placed on focal points in the network.However,case III is possible only if the connection is TPOT enabled,since in Case III the proxy/switch is in the middle of the network.4.5Small Data TransfersAnother important question is how TPOT effects small data transfers.The previous experiments have shown that throughput increases when TPOT proxies are used.How-ever,for smallfiles,the connection establishment overhead dominates the overall performance,and sustained through-put rates become irrelevant.To measure the effect of TPOT on smallfile transfers,the setup in the previous experiments was used.However,instead of TTCP which transfers data in one direction,we used a TCP ping test which returns the data back to the sender simulating a HTTP Request fol-lowed by a short HTTP Response.The total time from be-fore the open system call to after the close system call on the client side of the connection was measured using the hardware cycle counter of the Pentium II processor. Table3shows the results for three transfer sizes and two different values for RTT.The delay was equally distributed RTT Routed Delay 1 2.7ms1KB8.5ms133.5ms 1B137.3ms100210.7ms 10KB374.3ms。

计算机网络英文复习题

计算机网络英文复习题

计算机⽹络英⽂复习题、英译汉(10分)1.TCP(Transmission Control Protocol) 传输控制协议2.IP(Internet Protocol) 互联⽹协议3.RFC(Requests for comments) 请求评议4.SMTP(Simple Mail Transfer Protocol) 简单邮件传输协议5.Congestion-control 拥塞控制6.Flow control 流控制7.UDP (User Datagram Protocol) ⽤户数据报协议8.FTP(File Transfer Protocol) ⽂件传输协议9.HTTP( Hyper-Text Transfer Protocol ) 超⽂本传输协议10.TDM 时分复⽤11.FDM 频分复⽤12.ISP(Internet Service Provider) 互联⽹服务提供商13.DSL(Digital Subscriber Line) 数字⽤户线路14.DNS(Domain Name System) 域名系统15.ARQ(Automatic Repeat Request) ⾃动重发请求16.ICMP(Internet Control Message Protocol) ⽹间控制报⽂协议17.AS(Autonomous Systems) ⾃制系统18.RIP(Routing Information Protocol)\ 路由信息协议19.OSPF(Open Shortest Path First) 开放最短路径优先20.BGP (Border Gateway Protocol) 边界⽹关协议21.HFC 光纤同轴电缆混合⽹22.CRC(Cyclic Redundancy Check) 循环冗余检验23.CSMA/CD 带冲突检测的载波侦听多路存取24.ARP 地址解析协议25.RARP 反向地址解析协议26.DHCP 动态主机配置协议27.RTT 循环时间28.IETF(P5)互联⽹⼯程任务组29.URL(P88)统⼀资源定位30.API应⽤程序编程接⼝31.MIME多⽤途互联⽹邮件扩展1. DSL divides the communication link between the home and the ISP into three nonoverlapping frequency bands, a upstream channel is in _A_________.A)50 kHz to 1MHz band B) 1MHz to 2MHz bandC)4 kHz to 50kHz band D) 0 to 4kHz band2. As a data packet moves from the upper to the lower layers, headers are A .A) Added; B) subtracted; C) rearranged; D) modified3. What is the main function of the network layer? DA) node-to-node delivery; B) process-to-process message deliveryC) synchronization; D) updating and maintenance of routingtables4. Which of the following is the default mask for the address 168.0.46.201? BA) 255.0.0.0; B) 255.255.0.0; C) 255.255.255.0; D) 255.255.255.2555.A router reads theaddress on a packet to determine the next hop. AA) IP ; B) MAC; C) source; D)ARP6 .Which device can’t isolates the departme ntal collision domains. AA) Hub; B) switch; C) router; D) A and B7. Input port of a router don’t perform ____D____ functions.A) the physical layer functions B) the data link layer functionsC) lookup and forwarding function D) network management8. HTTP has a mechanism that allows a cache to verify that its objects are up to date. The mechanism is DA) persistent connections B) cookies C) Web Caching D) conditional GET9. A protocol layer can be implemented in ___D_____.A) software B) hardware C) a combination of the software and hardware D) All of the above10. A protocol has three important factors, they are_A______.A)syntax, semantics, order B) syntax, semantics, layerC)syntax, semantics, packet D) syntax , layer, packet11. There are two broad classes of packet-switched networks: datagram networks and virtual-circuit networks. The virtual-circuit networks forward packets in their switches use ___D___.A) MAC addresses B) IP addressesC) e-mail addresses D) virtual-circuit numbers12. TCP service model doesn’t provide ___D_______service.A) reliable transport service B) flow control serviceC) congestion-control service D) guarantee a minimum transmission rate service.13. Usually elastic applications don’t include____B______.A) Electronic mail B) Internet telephony14. A user who uses a user agent on his local PC receives his mail sited in a mail server by using _B___ protocol.A)SMTP B) POP3C)SNMP D) FTP15. Considering sliding-window protocol, if the size of the transmitted window is N and the size of the receiving window is 1,the protocol is BA) stop-and-wait protocol B) Go-Back-N protocolC) selective Repeat protocol D) alternating-bit protocol16. which IP address is effective___B______.A) 202,131,45,61 B) 126.0.0.1C) 192.268.0.2 D) 290.25.135.1217. if IP address is 202.130.191.33, subnet mask is 255.255.255.0,then subnet prefix is__D_____A) 202.130.0.0 B) 202.0.0.0C) 202.130.191.33 D)202.130.191.018.The command Ping s implemented with __B______messagesA) DNS B) ICMPC) IGMP D) RIP19. Which layer-function is mostly implemented in an adapter? __A________A) physical layer and link layer B) network layer and transport layerC)physical layer and network layer D) transport layer and application layer20. If a user brings his computer from Chengdu to Peking, and accesses Internet again. Now, __B_____ of his computer needs to be changed.A) MAC address B) IP addressC) e-mail address D) user address1. .traceroute is implemented with __B____messages.A) DNS B) ICMPC) ARP D) RIP2.A router reads the A address on a packet to determine the next hop.A. IP ;B. MAC;C. source;D.ARP3. There are two broad classes of packet-switched networks: datagram networks andvirtual-circuit networks. The virtual-circuit networks forward packets in their switches use ___D___.A) MAC addresses B) IP addressesC) e-mail addresses D) virtual-circuit numbersA) device interfaces with same subnet part of IP addressB) can’t physically reach each other without intervening a router.C)all of the devices on a given subnet having the same subnet address.D)A portion of an interface’s IP address must be determined by the subnet to which it is connected.5. if IP address is 102.100.100.32, subnet mask is 255.255.240.0,then subnet prefix is___A___A) 102.100.96.0 B) 102.100.0.0C) 102.100.48.0 D) 102.100.112.06 If a user brings his computer from chengdu to beijing, and accesses Internet again. Now,___B__ of his computer needs to be changed.A) MAC address B) IP addressC) e-mail address D) user address7.I nput port of a router don’t perform ____D___ functions.A) the physical layer functions B) the data link layer functionsC) lookup and forwarding function D) network management8.switching fabric is at the heart of a router, switching can be accomplished in a number of ways, donit include_D_A)Switching via memory B)Switching via crossbarC)Switching via a bus D) Switching via buffer9.if a host wants to emit a datagram to all hosts on the same subnet, then the datagram’s destination IP address is ___B__A)255.255.255.0 B) 255.255.255.255C)255.255.255.254 D) 127.0.0.110.The advantage of Circuit switching does not include________.A) small transmission delay B)small Processing costC) high link utilization D)no limited to format of message1.an ARP query sent to __A__A) local network B) all over the Internet.2. .packet-switching technologies that use virtual circuitsinclude__B___:A) X.25, ATM, IP B) X.25, ATM, frame relay.C) IPX, IP, ATM D) IPX, IP, TCP3. In Internet, _D_ protocol is used to report error and provide the information forun-normal cases.A) IP B) TCP C)UDP D) ICMP1.A is a Circuit-switched network.B. Datagram networkC. InternetD. virtual circuit network2.The store-and-forward delay is DA. processing delayB. queuing delayC. propagation delayD. transmission delay3.Which is not the function of connection-oriented service? DA. flow controlB. congestion controlC. error correctionD. reliabledata transfer4.The IP protocol lies in CA. application layerB. transport layerC. network layerD. link layer5.Which of the following is the PDU for application layer __B___A.datagram;B. message;C. frame;D.segment6.bandwidth is described in _B__A) Bytes per second B) Bits per secondC) megabits per millisecond D) centimeters7.A user who uses a user agent on his local PC receives his mail sited in a mail server by using __A__ protocol.A)SMTP B) POP3C)SNMP D) FTP8.As a data packet moves from the lower to the upper layers, headers are B.A)Added; B. subtracted; C. rearranged; D. modified三、填空题(每空1分,共22分 (注意:所有填空题不能写中⽂,否则中⽂答案对的情况1. link-layer address is variously called a LAN address, a MAC address, or a physical address.2 In the layered architecture of computer networking, n layer is the user of n-1 layer and the service provider of n+1 layer.A) n B) n+3 C) n+1 D) n-1四、判断题(每⼩题1分,共10分)1.√The services of TCP’s reliable data transfer founded on the services of theunreliable data transfer.2.√Any protocol that performs handshaking between the communication entitiesbefore transferring data is a connection-oriented service.3.× HOL blocking occur in output ports of router.4.√Socket is globally unique.5.√SMTP require multimedia data to be ASCII encoded before transfer.6.×The transmission delay is a function of the distance between the two routers.7.×IP address is associated with the host or router. SO one device only have one IPaddress.8. √In packet-switched networks, a session’s messages use the resources on demand, and Internet makes its best effort to deliver packets in a timely manner.9. ×UDP is a kind of unreliable transmission layer protocol, so there is not any checksum field in UDP datagram header.10.√Forwarding table is configured by both Intra and Inter-AS routing algorithmIP is a kind of reliable transmission protocol. F8.Forwarding table is configured by both Intra and Inter-AS routing algorithm.T9.Distance vector routing protocol use lsa to advertise the network which router10.RIP and OSPF are Intra-AS routing protocols T11.Packet switching is suitable for real-time services, and offers better sharing ofbandwidth than circuit switching F五、计算题(28 points)1.C onsider the following network. With the indicated link costs, use Dijkstra’s shortest-path algorithm to compute the shortest path from X to all network nodes.2 Given: an organization has been assigned the network number 198.1.1.0/24 and it needs todefine six subnets. The largest subnet is required to support 25 hosts. Please:●Defining the subnet mask; (2分) 27bits or 255.255.255.224●Defining each of the subnet numbers; which are starting from 0# (4分)198.1.1.0/27 198.1.1.32/27 198.1.1.64/27 198.1.1.96/27 198.1.1.128/27 198.1.1.160/27 198.1.1.192/27 198.1.1.224/27●Defining the subnet 2#’s broadcast address.(2分) 198.1.1.95/27Defining host addresses scope for subnet 2#. (2分) 198.1.1.65/27--198.1.1.94/273. Consider sending a 3,000-byte datagram into a link that has an MTU of 1500bytes.Suppose the original datagram is stamped with the identification number 422 .Assuming a 20-byte IP header,How many fragments are generated? What are their characteristics?(10分)。

计算机网络英文缩写词汇

计算机网络英文缩写词汇

计算机网络英文缩写词汇Computer Network AbbreviationsIntroduction:In the world of computer networking, various abbreviations and acronyms are commonly used to represent complex technical terms and concepts. These abbreviations not only save space in written communication but also enhance efficiency in daily networking operations. In this article, we will explore and explain some of the most commonly used computer network abbreviations and their meanings. Let's dive in!1. OSI - Open Systems Interconnection:The Open Systems Interconnection (OSI) model is a conceptual framework that standardizes the functions of a communication system into seven distinct layers. These layers, from the physical layer to the application layer, provide a systematic approach to understanding and developing network protocols.2. TCP/IP - Transmission Control Protocol/Internet Protocol:Transmission Control Protocol/Internet Protocol (TCP/IP) is the set of protocols used for the internet and most private networks. TCP controls the transmission of data, while IP handles the routing of packets between network devices. TCP/IP is the foundation for network communication across various platforms and devices.3. LAN - Local Area Network:A Local Area Network (LAN) refers to a network of computers and devices connected in a limited geographical area, such as an office building or a school. LANs enable the sharing of resources and facilitate communication between devices within the network.4. WAN - Wide Area Network:In contrast to LAN, a Wide Area Network (WAN) covers a larger geographical area, typically spanning multiple locations and utilizing public or private telecommunication services. WANs connect LANs over long distances and often rely on routers and leased lines.5. VPN - Virtual Private Network:A Virtual Private Network (VPN) extends a private network across a public network, such as the internet. By encrypting data and creating secure connections, VPNs enable users to access a private network remotely, ensuring confidentiality and data integrity.6. DNS - Domain Name System:The Domain Name System (DNS) translates domain names into their corresponding IP addresses. DNS plays a crucial role in web browsing, as it allows users to access websites using easy-to-remember domain names instead of numeric IP addresses.7. DHCP - Dynamic Host Configuration Protocol:Dynamic Host Configuration Protocol (DHCP) automatically assigns IP addresses and network configuration parameters to devices on a network.DHCP helps simplify network administration by eliminating the manual configuration of IP addresses for each device.8. FTP - File Transfer Protocol:File Transfer Protocol (FTP) is a standard network protocol used to transfer files between a client and a server on a computer network. FTP provides a secure and efficient method for uploading, downloading, and managing files over a network.9. HTTP - Hypertext Transfer Protocol:Hypertext Transfer Protocol (HTTP) is the protocol used for transferring hypertext, such as web pages, over the internet. HTTP enables the communication between web servers and clients, allowing users to access and interact with websites.10. IP - Internet Protocol:Internet Protocol (IP) is the principal communication protocol used for transmitting data across interconnected networks. IP provides the addressing scheme and routing functionality that allows data packets to be delivered between devices on different networks.Conclusion:In this article, we have explored some of the key abbreviations used in computer networking. Understanding these abbreviations is essential for effective communication and collaboration in the networking field. By familiarizing yourself with these abbreviations, you will be better equippedto navigate the world of computer networks and stay up to date with the latest industry developments.。

Tiannet UNMS 解决方案指南说明书

Tiannet UNMS 解决方案指南说明书

The growing diversity and size of telecommunication networks and a proliferation ofnetwork equipments represent an increasing challenge to the telecom service providers. Today's service providers need to cope with the complex O&M overheads with a centralized management tool offering simultaneous access to heterogeneous networks without compromising control and security.UNMS is the leading edge Network Management System platform with client-server architecture providing integrated management functions for the various communication equipment series T ainet offers. While acting as an information warehouse capable of managing the entire network rather than pieces of it, the UNMS also has an extensive set of useful features built into its distributed object-oriented architecture. Such features include:Java-BasedUNMS is implemented in Java language so that it can support multiple computer platforms, such as Windows series, Unix platforms, or Linux, etc.Client-Server SystemUNMS is a client-server system that supports coordinated management of logic execution and information storage. Multiple client workstations can log into UNMS simultaneously to perform network management tasks.RMI-IIOP and JNDIUNMS is constructed using a distributed object computing strategy, which is based on RMI-IIOP and JNDI technologies introduced in J2EE class suite. In this way UNMS Client-Server system can run upon RMI communication mechanism or CORBA-IIOP infrastructure. Further enhancements to UNMS include full compliance with the J2EE standard.Relational Database ServerUNMS is equipped with relational database server to store related data like configurations, performance monitoring counters, historical alarm records, and so on. Graphic User Interface (GUI)Client application of UNMS presents users with user-friendly Java-style GUI. The system look-and-feel will be checked and modified for consistent GUI display across all computing platforms. For example, UNMS client will be rendered in Windows-style when running on Microsoft Windows OS, and in X-Windows style when on Unix platform. Simple Network Management Protocol (SNMP)TCP/IP is a suite of protocols for datacomm network. Recent trend in telecom technology sees a merging of voice and data communications to bring the TCP/IP protocol suite into the management infrastructure of next-generation network equipments. SNMP is dominating the management architecture of today's multi-service networks. UNMS supports SNMP management protocol and exchanges messages defined in standard or enterprise MIB sets with T ainet equipments.Trivial File Transfer Protocol (TFTP)TFTP is a simple protocol used by memoryless devices to transfer files during embedded software upgrades or configuration file backup/download. It is very popular in modern communication product design. UNMS platform integrates the TFTP server protocol to seamlessly perform all related network management tasks.System Architecture GraphUNMS is a distributed object system. Server computation is bound to customizable naming service, RMIRegistry, or CORBA ORB, while exchanging SNMP messages with managed network agents. UNMS Server summarizes and classifies collected data in the relational database tables. UNMS Client applications running on workstations reference Server remote objects from naming service, which in turn obtain requested information and commit transactions to the network. Multiple client Workstations can be set up to perform centralized management services in a distributed manner.Network ManagerTopologyUNMS allows editing and displaying of topology in hierarchical tree structure and of network model in GUI.Modular ArchitectureUNMS is a flexible platform accommodating various EMS modules to be plugged in for management of different Tainet equipments. Setup programs are provided to support the plug-in installation in an integrated manner.Security Control MechanismVersatile access control includes user level classification, user privilege management, user's geographical responsibility assignment, and operational work ProvisioningProvisioning functions include template commissioning, end-to-end management, manual or scheduled network backup, and device configuration restoration in case of catastrophic work MonitoringNetwork status is represented in different colors of node icons on topology model and summarized upward along topology tree.Mercury EMSFront Panel ViewA front panel view look-alike screen allows user to perform administrative functions of Mercury devices. Port and slot LED statuses are graphically displayed in real time.Configuration ManagementConfiguration settings can be performed over the graphical user interface. Manual and automatic backup function ensures stable network operation.Loopback OperationMercury EMS supports interface for channel and timeslot loopback operation. The LED display in the front panel view reflects the channel loopback state.Path ManagementVia GUI, users can set cross connections individually for Mercury. Alternatively, users can search for all available logical paths on topology model and set an entire batch of cross connections to multiple devices.Alarm ManagementAlarm records can be stored and managed by UNMS. Users can search and view alarm records with flexible query conditions.Performance MonitoringMercury device supports the function of accumulating PM variables like Error-Second. UNMS-Mercury allows users to query current PM data or historical PM records.Mercury 3820Venus 2832Scorpio 1000Scorpio 1400Scorpio 1000 EMSRack Management ViewA window view that resembles the ETSI or ANSI Scorpio 1000rack is rendered as the starting point to invoke OAM functions.Remote management of Scorpio 1400 is also supported.Equipment ManagementUNMS-Scorpio1000 allows users to set the required card type for each slot. If the actual card inserted does not match with the required card type, this condition will be detected and displayed in real time.Configuration ManagementIn addition to integrated support for the configuration of point-to-point paths between CO- and CPE-end modems, advanced backup and restoration features are provided as well.UNMS-PCServer Minimum Requirements zWindows 2003 based PC server zPentium IV 1800 MHz or equivalent CPU, 1Gb RAM, 200GB HDD, 1024 x 768 pixel resolution monitor, and 10/100Mbps Ethernet cardzJava Runtime Environment v1.3.x or above DatabasezMySQL v3.23.39 or above*Future availableFault ManagementThe fault management functions include current status query,historical alarm management, loopback from network side/ loop side/ customer side, and test pattern generation.Performance MonitoringPM functions are supported for all interface types and include generation of threshold crossing alarms.Template CommissioningUsers can edit templates for port or rack configurations which can be stored in database server and applied quickly to the actual network.Integrated Firmware Upgrade SchemeUNMS integrates TFTP services as built-in functions to support on-line firmware upgrades.Clientz Windows XP based PC workstation or abovezPentium IV 1300 MHz or equivalent CPU, 640 Mb RAM, 300Gb HDD, 1024 x 768 resolution monitor, and 10/100Mbps Ethernet cardzJava Runtime Environment v1.3.x or aboveThe Professional PartnerTA I N E T C O M M U N I C AT I O N S Y S T E M C ORP.Headquarters3F., No.108, Ruiguang Rd., Neihu Dist., Taipei City 114, Taiwan TEL: 886-2-2658-3000FAX: 886-2-2793-8000。

通过发送方的TCP增强高速延期网络的启动性能A Sender-Side TCP Enhancement for Startup performance

通过发送方的TCP增强高速延期网络的启动性能A Sender-Side TCP Enhancement for Startup performance

A Sender-Side TCP Enhancement for Startup Performance in High-Speed Long-Delay Networks Xiao Lu,Ke Zhang,Cheng Peng Fu,and Chuan Heng FohSchool of Computer EngineeringNanyang Technological University,SingaporeLuxi0007,Y030069,ascpfu,aschfoh@.sgAbstract—Many previous studies have shown that traditionalTCP slow-start algorithm suffers performance degradation in high-speed and long-delay networks.This paper presents a sender-side enhancement,which makes use of TCP Vegas congestion-detecting scheme to monitor the router queue,and accordingly refines slow-start window evolution by introducing a two-phase approach to probe bandwidth more efficiently. Moreover,it achieves good fairness of bandwidth utilization in coexistence of multiple connections.Simulation results show that, compared with traditional slow-start and many other enhance-ments,it is able to significantly improve the startup performance without adversely affecting coexisting TCP connections.I.I NTRODUCTIONTCP is a connection-oriented,reliable and in-order transport protocol.The current legacy TCP,namely TCP Reno[1], and its enhancements such as NewReno[2]use slow-start during startup phase to probe available bandwidth by gradually increasing the amount of data injected into the network. However,blind initial ssthresh(slow-start threshold)setting of legacy TCP leads to one possible problem that traditional slow-start algorithm may suffer performance degradation in high-speed long-delay networks.Before a TCP connection starts,slow-start sets initial ssthresh to an arbitrary default value within a range,depending on different operating system implementations.As a result,1)if ssthresh is set too high compared with the bandwidth-delay product(BDP),multiple packet losses and timeout may occur;otherwise2)If ssthresh is set too low,the TCP connection will exit slow-start and switch to congestion avoidance phase prematurely.Both cases cause low link utilization.In this paper,we present a sender-side enhancement that is simple and efficient to improve TCP startup performance in high-speed long-delay networks.The key idea is to make use of TCP Vegas congestion-detecting scheme to monitor the router queue,and then a two-phase approach is used to refine cwnd evolution.We note that several existing startup modifications are based on the widely used congestion control algorithm,NewReno,i.e.,Hoe’s Change[5]and Limited Slow-Start[6].Thus,to make performance comparison,our enhancement is also combined with NewReno.Simulation results demonstrate that,compared with traditional slow-start and many other enhancements,our methods significantly im-proves link utilization without adversely affecting coexist-ing TCP connections.Furthermore,the enhanced throughput performance is achieved by using the bandwidth effectivelyTCP Dst 1work simulation topology.and fairly rather than aggressively depriving bandwidth from other TCP connections.Therefore,our algorithm causes little negative effect on other co-existing TCP connections.The remainder of the paper is organized as follows.We start by showing the traditional slow-start limitations demon-strated by stimulations in the next section.After summarizing some related works in Section III,we provide the analytical approach and describe the enhanced algorithm used by sender side in Section IV,validate it through simulations in Section V,and conclude this paper in Section VI.II.T RADITIONAL SLOW-START LIMITATIONSWe use ns-2.28[16][17]to simulate TCP startup perfor-mance in high-speed long-delay networks.Fig.2shows the simulation topology.TCP Src represents TCP sender and TCP Dst TCP receiver.Router A and Router B are two Droptail bottleneck routers.Side links are all with bandwidth of500 Mbps,and one-way delay of0.1ms.Between the two routers there is a bottleneck link with40Mbps bandwidth and50ms one-way delay.For convenience,window size is measured in number of packets,and the packet size is1000bytes while ACK is40bytes.Thus the BDP value is500packets.The bottleneck router is with250packets buffer size(BDP/2). TCP sources run NewReno with traditional slow-start,and the maximum burst size is5packets.All the experiments in this paper are based on this topology.In traditional slow-start[1],sender increases cwnd(conges-tion window)by one packet upon receiving every new ACK, until cwnd reaches initial ssthresh.Before a TCP connection starts,initial ssthresh is set to an arbitrary value,ranging from4KB to extremely high.Due to blind initial ssthresh978-1-4244-6398-5/10/$26.00 ©2010 IEEE0 100 200 300 400 500 600 700800 0246810W i n d o w (p k t )Time (second)NewReno with SS (large ssth):NewReno with SS (small ssth):NewReno with Enhanced Start:BDP line:0 20 40 60 80 100 120 140 02 4 6 8 10B u f f e r (p k t )Time (second)Estimated queue at sender:Actual queue at router:Fig.2.Comparison of slow-start and Enhanced Start cwnd evolutions.setting,TCP suffers from low startup performance,especially in high-speed long-delay networks.Fig.2shows two typical cases where initial ssthresh mismatches the BDP.In the first case,ssthresh is set to 800packets,which is higher than the BDP represented by the dot line.We observe that the TCP connection,shown in the semi-dashed line,suffers many packets losses and long recovery time.In the second case,ssthresh is set to 32packets,which is much below the BDP.It shows that the TCP connection,expressed by the dashed line,exits slow-start and switches to congestion avoidance phase prematurely,resulting in a low bandwidth utilization.III.RELATED WORKOne critical reason of traditional slow-start performance suffering is that,sender lacks the ability to estimate the network condition properly.In recent years,various senderside modifications have been proposed to improve TCP startup performance.Some approaches aim to solve arbitrary ssthresh setting problem by setting ssthresh to some estimation value.In [5],Hoe proposed to set initial ssthresh to the estimated value of BDP obtained by using packet pair measurement.This method avoids the slow-start limitations,mentioned above,of enter-ing the congestion avoidance phase prematurely.However,attribute to cwnd increasing too fast towards the estimated BDP,Hoe’s Change may suffer temporary queue overflow and multiple losses when the bottleneck buffer is not big enough compared to the BDP.In [10],a measurement improves Hoe’s method by making use of multiple packet pairs to iteratively improve the estimate of ssthresh during startup progress.Nev-ertheless,simply using packet pairs cannot estimate available bandwidth paratively,Early Slow Start Exit (ESSE)[15]is robust against estimation errors.It adopts several approximations of pipesize estimation,based on the observation of few ACK arrival times,to set the initial ssthreshvalue and drastically reduces the packet drop rate.Another modified slow-start mechanism,called Adaptive Start [8],is proposed to make use of eligible rate estimation (ERE)mechanism [7],repeatedly resetting the slow-start ssthresh to a more appropriate value.This endues sender with the ability to grow cwnd efficiently without packet overflows.Agile Probing [4]uses a similar idea.However,an early transition form slow-start to Congestion Avoidance may occur to affect the throughput performance.Another approach improves startup performance by modify-ing the cwnd evolution.Smooth-Start [14]splits the slow-start into slow and fast phases to adjust the congestion window in different ways.It is capable of reducing packet loss during startup,however,it still does not address the question of how the ssthresh and the threshold that distinguishes the two phases should be set.Limited Slow-Start [6]introduces an additional threshold max ssthresh and modifies cwnd increase ly,when cwnd ≤max ssthresh ,cwnd doubles per RTT,the same as slow-start.When max ssthresh <cwnd ≤ssthresh ,cwnd is increased by a fixed amount of max ssthresh/2packets per RTT.This method reduces the number of drops during the startup,especially for TCP connections that is able to reach very large congestion window.However,max ssthresh is statically set before the TCP connection starts.IV.E NHANCEMENTSIn this section,we provide the analytical approach that is the fundamental scheme of our measurement.Then,the two-phase approach is introduced,followed by the description of the pseudo code.A.Analytical ApproachOur proposal,making use of TCP Vegas congestion-detecting scheme to monitor the router queue,attributes to that the congestion avoidance mechanism is based on changes in the estimated amount of extra data in the network rather than only on dropped segments.Past research work [3][12]has shown this detection,in terms of how many extra buffers the connection is occupying,leads to a more accurate estimation of network traffic condition.In Vegas,the throughput difference is calculated by:Diff =(Expected −Actual )=(cwndBaseRT T −cwnd RT T)where BaseRTT is the minimum of all measured RTT,and RTT is the actual round trip time of a tagged packet.Denote the backlog at router queue by N ,we have,RT T =BaseRT T +N/Actual.Rearrange the above equation,we deduce that,N =(cwndBaseRT T −cwnd RT T)×BaseRT T.(1)We note the Rerouting problem [11]this estimation method of TCP Vegas faces.Rerouting a path may change the prop-agation delay of the connection.More specifically,if a new route for the connection has a longer propagation delay,theconnection will not be able to tell whether the increase in the round trip delay is due to a congestion in the network or a change in the route.However,based on the fact that startup phase only last for a few seconds.Thus,Rerouting does not necessarily affect the startup performance.Therefore,during startup progress we can use(1)to estimate the backlog at router queue.This forms the basis to our enhancement of TCP startup behavior.B.Startup EnhancementThe key idea here is to detect the backlog status of the bot-tleneck routers with TCP Vegas congestion detection detecting schemefirst,and then modify the startup algorithm to properly react to the backlog.We propose a two-phase approach to adjust probing rate reacting to the changes of buildup queue in the router.A certain threshold of queue length can be used as a signal of queue being building up.The two phases of our measurement are for the queue buildup and non-buildup situation,respectively.After a certain threshold is set,changes of the estimated backlog can be used as a trigger to switch between two phases.Now,from(1),if N,the estimation of backlog at router queue,exceeds a certain threshold ofβpackets,we can assume that the router queue is building up and here we call it a congestive event.Each occurring times of congestion event is recorded as a new parameter at TCP sender every time the estimated backlog is greater than the threshold.Next,the two-phase approach is described in detail as follows.In Linear Increase Phase,when a TCP connection starts,the sender increases cwnd by one packet for every ACK received which the same as traditional slow-start.This process con-tinues until the queue length exceeds the thresholdβ,which marks a congestive event.Such a congestive event may due to either the exponential growth of cwnd being too fast for the bottleneck to cope with[4],or bandwidth fully utilization.For the both causes,network bottleneck capacity can be assumed reached.Thus,increasing cwnd in a conservative linear manner is more appropriate.We design cwnd to increment one packet every round-trip time in this phase.In the former cause of congestion event detection,by switching to increase linearly, sender can quickly drain the temporary queue to avoid buffer overflow and multiple losses.In the latter one,sender can assume that cwnd has already met the available bandwidth. Switching to linear increase actually have the same effect as congestion avoidance.This skillfully solves the ssthresh setting problem.In Adjustive Increase Phase,upon sensing that the router queue is drained,a sender enters this phase with the aim to adjust probing rate more intelligently.That is,every time the queue length draw back below the thresholdβ,implying under-utilization of bottleneck bandwidth,sender should speed up again its sending rate to probe the available bandwidth. However,the increase speed should be reduced because the spared bandwidth is less than before.Therefore,in this phase, the cwnd increase speed is set to half of that before the last congestive event.The startup phase exits when a packet loss event occurs. The pseudo code of our proposed scheme,which we called Enhanced Start,is given in the following.Algorithm1Enhanced Startthenssthresh=cwnd/2;/*switch to Congestion Avoidance Phase*/elseif(N≥β)thencwnd+=1/cwnd;for every ACK/*Linear Increase Phase*/elsecwnd+=max(1/cwnd,1/2exp{Congestion Event No});for every ACK/*Adjustive Increase Phase*/end ifend ifIn the above pseudo code,Congestion Event No indicates the number of congestion events occurred with its initial value set to0.Note that our algorithm is to probe eligible bandwidth intelligently during startup phase,when connection has no information about the network to set ssthresh.It is not executed after a timeout as ssthresh is no longer blindly set.V.P ERFORMANCE EVALUATIONIn this section,we present numerical results of Enhanced Start,compared with the tradition slow-start and other variant modifications,given different network environments with dis-similar parameter settings.We show the cwnd evolutions of Enhanced Start,and then comparisons of throughput achieve-ment.We also show the fairness and friendliness of our enhanced approach.A.Enhanced Start cwnd EvolutionFig.2shows Enhanced Start queue estimation and cwnd evolution.We see that the estimated queue at sender and the actual queue at router match quite well.By correctly estimat-ing the router queue,the sender increases cwnd in exponential-linear cycles,allowing cwnd to adaptively converge to the BDP in a timely manner.Our method also prevents the temporary queue from overflow when the buffer size is small.In Fig.3,we vary the value of threshold,β,to study the sensitivity of our algorithm.As can be seen from thefigure, surprisingly,varying the value does not cause much difference in cwnd evolutions.This means that the setting ofβis not a mainly decisive factor in the performance.However,as adopted in[9],we setβas3to be the test value for the remaininng experiments.To assess the capability of our measurement in the network with heterogenous stacks,we add a burst UDP cross-traffic set to10Mbps starting at thefirst second and stopping at the fifth second.Fig.4shows Enhanced Start queue estimation0 100 200 300 400 500 600 700800 0246810W i n d o w (p k t )Time (second)β= 2:β= 3:β= 5:β=10:β=15:β=20:BDP line:Fig.3.Enhanced Start cwnd evolution under different values of β.0 100 200 300 400 500600 0246810300250200 150 100 50 0W i n d o w (p k t )B u f f e r (p k t )Time (second)NewReno with Enhanced Start:Estimated queue at sender:Actual queue at router:BDP and full buffer line:Fig.4.Enhanced Start cwnd evolution under burst UDP cross-traffic (10Mbps starts at 1sec and stops at 5sec).and cwnd evolution under this scenario.We see that the queue estimation is accurate in approaching the actual queue.The cwnd evolution reveals that during the first second,the exponential growth behavior is just as traditional slow start.After the burst of UDP traffic,the sender detects the decrease of available bandwidth quickly through the backlogged queue,and accordingly,halves cwnd growth rate each time con-gestion event happens.After reaching the spared bandwidth,cwnd tends to maintain its value by growing smoothly.Then,right after the termination of UDP traffic flow,cwnd grows exponentially again to reach the BDP swiftly.Eventually,cwnd seizes the BDP quite accurately.This shows that Linear Increase Phase is able to avoid buffer overflow as the sudden decrease of available bandwidth,and avoid congestion when BDP is reached.While,Adjustive Increase Phase plays the main role in speeding up cwnd growth again when more available bandwidth is released.B.Throughput PerformanceThis subsection shows that the Enhanced Start significantly improves startup performance with regards to various band-width,one-way delay,and buffer size.To focus on the startup performance,we only calculate the throughput in the first 20seconds.Fig.5shows NewReno throughput with different startup algorithms under bottleneck bandwidth varying from 10Mbps to 150Mbps.We fix the bottleneck one-way delay to 50ms and buffer size to BDP/2.We compare NewReno with20 40 60 80 100 120 140T h r o u g h p u t (M b p s )Bandwidth (Mbps)Fig.5.NewReno (NR)throughput versus bottleneck bandwidth (first 20s).0 5 10 15 20 25 30 35 40102030405060708090100T h r o u g h p u t (M b p s )One-way delay (ms)Fig.6.NewReno (NR)throughput versus delay (first 20s).Enhanced Start (ES),Hoe’s Change (HC),Limited Slow-Start (LSS),slow-start with small ssthresh (32packets),slow-start with large ssthresh (extremely high),and TCP Vegas.It is shown that,NewReno with Enhanced Start and Hoe’s Change scale well with bandwidth.Other algorithms lack the ability to adapt to network bandwidth effectively,leading to poor throughput.Fig.6shows the throughput comparison under bottleneck one-way delay varying from 10ms to 100ms.We fix the bandwidth to 40Mbps and buffer size to BDP/2.The subtle changes in the throughput of NewReno with Enhanced Start and Hoe’s Change shows their ability to scale well with delay,while other startup algorithms suffer from performance degradation as delay increases.In Fig.7,we fix the bandwidth to 40Mbps and delay to 50ms,and vary the buffer size from 100packets to 300packets.It is evident that the only desirable throughput is achieved by Enhanced Start which keeps high throughput in all test cases.Also as is shown,when the buffer size is small,Hoe’s Change and Limited Slow-Start suffer severe performance degradation,while other startup algorithms fail to obtain a high throughput even with the help of increasing buffer size.C.Enhanced Start Fairness and FriendlinessFig.8shows the coexistence of multiple Enhanced Start and slow-start connections.We consider five NewReno con-nections,in which connections 1and 2are NewReno with slow-start (ssthresh 32packets)and connections 3,4,5are300280 260 240 220 200 180 160 140 120 100T h r o u g h p u t (M b p s )Buffer size (pkt)Fig.7.NewReno (NR)throughput versus buffer size (first 20s).NewReno with Enhanced Start.Connections 1,2,3,4start at 0second to investigate the effect of Enhanced Start and slow-start startup at the same time.Connection 5starts at 30th second to estimate the effect of Enhanced Start on existing TCP connections.It is shown that,comparatively NewReno behavior is more aggressive at the very beginning,while Enhanced Start starts up to make better use of the network bandwidth left unused by other connections.As time proceeds,the window sizes of the connections incline to approach to each ter,the presence of connection 5does not adversely affect coexisting TCP connections.After a burst,it joins the underway evolutions of the others.Finally,five connections converge to the same window size,which is around 100packet sizes,one fifth of the BDP.Each link shares its own part fairly.Enhanced Start shows good fairness to connections with same stack and friendliness to connections with NewReno stack.VI.C ONCLUSIONSIn this paper,we present a sender-side enhancement to improve TCP startup performance in high-speed long-delay networks by introducing a two-phase approach in startup process.It makes use of TCP Vegas congestion-detecting scheme to monitor the router queue,and refines congestion window evolution to quickly reach the eligible window sizes,meanwhile avoids multiple packet losses.Simulation results demonstrated that it is capable of significantly improving TCP startup performance without adversely affecting coexisting TCP connections and it is robust to small buffer size and long delay.Moreover,the performance improvement is achieved by making better use of the link bandwidth,and therefore,our algorithm causes little negative effect on other co-existing connections.R EFERENCES[1]V .Jacobson,“Congestion avoidance and control”,in Proc.SIG-COMM’88,Stanford,CA,pp.314-329.[2]S.Floyd,and T.henderson,“Newreno modification to TCP’s FastRecovery”,RFC 2582,April 1999.[3]L.S.Brakmo,S.W.O’Malley,et al.,“TCP Vegas:New Techniques forCongestion Detection and Avoidance”,in Proc.SIGCOMM’94,London,U.K.,Oct.1994,pp.24-35.0 50 100 150 200 250 300 350 40010203040 50 60 70W i n d o w s i z e (p k t )Time (second)NewReno 1 with SS:NewReno 2 with SS:NewReno 3 with ES:NewReno 4 with ES:NewReno 5 with ES:Fig.8.Co-existence of multiple Enhanced Start and slow-start connections.[4]R.Wang,K.Yamada,et al.,“TCP with Sender-Side Intelligence toHandle Evolution,Large,Leaky Pipes”,IEEE Journal on Selected Areas in Communications.V ol.23,No.2,February 2005.[5]J. C.Hoe,“Improving the startup behavior of a congestion controlscheme for TCP”,in Proc.SIGCOMM’96,pp.270-280.[6]S.Floyd,“Limited Slow-Start for TCP with Large Congestion Win-dows”,RFC 3742,March 2004.[7] C.Casetti,M.Gerla,S.Mascolo,M.Y .Sanadidi,and R.Wang,“TCP West-wood:bandwidth estimation for enhanced transport over wireless links”,In Proceedings of Mobicom 2001,Rome,Italy,July 2001.[8]Ren Wang Giowanni Pau,Kenshin Yamada,M.Y .Sanadidi,and MarioGerla,“Tcp startup Performance in Large Bandwidth Delay Networks”,in COM,April 2004.[9] C.P.Fu and S.C.Liew,“TCP Veno:TCP enhancement for transmissionover wireless access networks”,IEEE (JSAC)Journal of selected Areas in Communications,Feb 2003.[10]M.Aron and P.Druschel,“TCP:Improving startup dynamics by adaptivetimers and congestion control”,Technical Report (TR98-318),Depart-ment of Computer Science,Rice University,1998.[11]Richard ,Jean Walrand,and Venkat Anantharam,“Issues in TCPVegas”,Available at /hyongla,July 1998[12]Lawrence S.Brakmo,and Larry L.Peterson,“TCP Vegas:End to EndCongestion Avoidance on a Global Internet”,IEEE (JSAC)Journal of selected Areas in Communications,VOL.13,NO.8,OCTOBER 1995.[13]G.Hengartner,J.Bolliger,and T.Gross,“TCP Vegas revisited”,in Proc.IEEE INFOCOM,Mar 2000,pp.1546-1555.[14]Haining Wang and Care L.Williamson,“A New TCP congestion controlscheme:Smooth-start and dynamic recovery”,In Proceedings of IEEE MASCOTS98,Montreal,Canada,1998.[15]S.Giordano,G.Procissi,F.Russo,and Raffaello Secchi,“On the Use ofPipesize Estimators to Improve TCP Transient Behavior”,Proccedings of the IEEE International Conference on Communications ICC2005,V ol.1,pp.16-20,May 2005.[16]K.Fall and K.Varadhan,“The ns Manual”,/nsnam/ns/ns-documentation.html.[17]work Simulator./nsnam/ns。

第5章电子信息类专业英语李白萍

第5章电子信息类专业英语李白萍
Passage A Introduction of ISDN
Imagine a network that would break down the walls of classrooms, letting teachers, students, and staff directly access computer-based resources and other communications facilities. This network could enhance communications between teachers and staff, allowing students to gain more of their teachers time for instruction, personal attention, and an enriched curriculum.[1] Computer-aided instruction (CAI) and computer-based teaching and grading could be enhanced, allowing better tailor-made courses and tracking of an individual student’s progress. Teleconferencing could provide courses to the sick or homebound students, courses to student spread over a wide geographic area, and continuing education for teachers and other staff members. Community or state-wide databases could allow the sharing of scarce educational resources. Student attendance and teacher availability could all be tracked via centralized databases. PCto-PC networking could be easily accomplished between buildings on a campus or between campuses.

rfc721.Out-of-Band Control Signals in a Host-to-Host Protocol

rfc721.Out-of-Band Control Signals in a Host-to-Host Protocol

NWG/RFC 721 1 SEP 76 LLG 36636 Out-of-Band Control SignalsNetwork Working Group Larry Garlick Request for Comments 721 SRI-ARC NIC 36636 1 September 76Out-of-Band Control Signalsin aHost-to-Host ProtocolThis note addresses the problem of implementing a reliable out-of-band signal for use in a host-to-host protocol. It is motivated by the fact that such a satisfactory mechanism does not exist in the Transmission Control Protocol (TCP) of Cerf et. al. [reference 4, 6] In addition to discussing some requirements for such an out-of-band signal (interrupts) and the implications for the implementation of the requirements, a discussion of the problem for the TCP case will be presented.While the ARPANET host-to-host protocol does not support reliable transmission of either data or controls, it does meet the other requirements we have for an out-of-band control signal and will be drawn upon to provide a solution for the TCP case.The TCP currently handles all data and controls on the same logical channel. To achieve reliable transmission, it provides positive acknowledgement and retransmission of all data and most controls. Since interrupts are on the same channel as data, the TCP must flush data whenever an interrupt is sent so as not to be subject to flow control.Functional RequirementsIt is desirable to insure reliable delivery of an interrupt. Thesender must be assured that one and only one interrupt is deliveredat the destination for each interrupt it sends. The protocol neednot be concerned about the order of delivery of interrupts to theuser.The interrupt signal must be independent of data flow controlmechanisms. An interrupt must be delivered whether or not there are buffers provided for data, whether or not other controls are beinghandled. The interrupt should not interfere with the reliabledelivery of other data and controls.The host-to-host protocol need not provide synchronization betweenthe interrupt channel and the data-control channel. In fact, ifcoupling of the channels relies on the advancement of sequencenumbers on the data-control channel, then the interrupt channel is no longer independent of flow control as required above. Thesynchronization with the data stream can be performed by the user by 1marking the data stream when an interrupt is generated. Theinterrupt need not be coupled with other controls since it in no way affects the state of a connection.Once the interrupt has been delivered to the user, no other semantics are associated with it at the host-to-host level.ImplicationsTo provide for reliable delivery and accountability of interruptdelivery, an acknowledgement scheme is required. To associateinterrupt acknowledgements with the correct interrupt, some namingconvention for interrupts is necessary. Sequence numbers providesuch a naming convention, along with the potential for providing anordering mechanism.A separate interrupt channel is required to make interruptsindependent of flow control. A separate sequence number space fornaming interrupts is also necessary. If the sequence numbers arefrom the same sequence number space as some other channel, thensending an interrupt can be blocked by the need to resynchronize the sequence numbers on that channel.In the current TCP, which uses one channel for data, controls, andinterrupts, flushing of data is combined with the interrupt to bypass the flow control mechanism. However, flushing of resynchronizationcontrols is not allowed and receipt of these controls is dependent on the acknowledgement of all previous data. The ARPANET protocol,while not providing for reliable transmission, does provide for theseparation of the interrupt-control channel and the data channel. Multiple Channels and Sequence NumbersIf multiple channels are to be used for a connection, then it becomes interesting to determine how the sequence numbers of the channels can be coupled so that sequence number maintenance can be doneefficiently.Assigning sequence numbers to each octet of data and control, as inthe TCP, allows positive acknowledgement and ordering. However,since packets are retransmitted on timeout, and since multi-pathpacket switch networks can cause a packet to stay around for a longtime, the presence of duplicate packets and out-of-order packets must be accounted for. A sequence number acceptability test must beperformed on each packet received to determine if one of thefollowing actions should be taken:2Acknowledge the packet and pass it on to the user.Acknowledge the packet, but do not send it to the user, since ithas already been delivered.Discard the packet; the sequence number is not believable.Acceptability on Channel 0To determine the action to be taken for a packet, acceptabilityranges are defined. The following three ranges are mutuallyexclusive and collectively exhaustive of the sequence number space (see Figure 1):Ack-deliver range (ADR)Ack-only range (AOR)Discard range (DR)ACCEPTABILITY RANGESDR AOR ADR DR\\=====)[===========)[===================](========\\^ ^^ ^^! !\ !\! ! FCLE ! DRLEAOLE AORE ADREFigure 1Let S = size of sequence number space (number per octet)x = sequence number to be testedFCLE = flow control left window edge3ADRE = (FCLE+ADR) mod S = Ack-deliver right edge (Discardleft edge - 1)AOLE = (FCLE-AOR) mod S = Ack-only left edge (Discardright edge + 1)TSE = time since connection establishment (in sec)MPL = maximum packet lifetime (in sec)TB = TCP bandwidth (in octets/sec)For any sequence number, x, and packet text length, l, if(AOLE <= x <= ADRE) mod S and(AOLE <= x+l-1 <= ADRE) mod Sthen the packet should be acknowledged.If x and l satisfy(FCLE <= x <= ADRE) mod S and(FCLE <= x+l-1 <= ADRE) mod Sthen x can also be delivered to the user; however, ordereddelivery requires that x = FCLE.A packet is not in a range only if all of it lies outside a range. When a packet falls in more than one range, precedence is ADR,then AOR, then DR. When a packet falls in the AOR then an ACKshould be sent, even if a packet has to be created. The ACK will specify the current left window edge. This assures acknowledgment of all duplicates.ADRE is exactly the maximum sequence number ever "advertised"through the flow control window, plus one. This allows forcontrols to be accepted even though permission for them may never have been explicitly given. Of course, each time a control with a sequence number equal to the ADRE is sent, the ADRE must beincremented by one.AOR is set so that old duplicates (from previous incarnations ofthe connection) can be detected and discarded. ThusAOR = Min(TSE, MPL) * TB4Synchronization and Resynchronization ProblemsA special problem arises concerning detection of packets (oldduplicates) in the network that have sequence numbers assigned by old instances of a connection. To handle this reliably, carefulselection of the initial sequence number is required [ref. 2, 3]as well as periodic checks to determine if resynchronization ofsequence numbers is necessary. The overhead of such elaboratemachinery is expensive and repeating it for each additionalchannel is undesirable.Acceptability on Channel iWe have concluded that the only savings realizable in the muiltple channel case is to use channel zero’s initial sequence number and resynchronization maintenance mechanism for the additionalchannels. This can be accomplished by coupling each additionalchannel to channel zero’s sequence numbers (CZSN), so that eachitem on channel i carries a pair of sequence numbers, the current CZSN and the current channel i’s sequence number (CISN).The acceptablility test of items on channel i is a composite test of both sequence numbers. First the CZSN is checked to see if it would be acknowledged if it were an octet received on channelzero. Only if it would have been discarded would the item onchannel i be discarded. Having passed the CZSN test, the CISN is checked to see if the item is deliverable and acknowledgable with respect to the CISN sequence number space. The CISN test is acheck for everything but the existence of old duplicates from old instances of the connection and is performed like the check forchannel zero items.It has been shown that to implement additional channels for a TCP connection, two alternatives are available-- (1) provide eachchannel with its own initial sequence number and resynchronization maintenance mechanism or (2) provide one initial sequence numberand resynchronization maintenance mechanism for all channelsthrough channel zero’s mechanism. It is hard for us to comparethe two alternatives, since we have no experience implementing any resynchronization maintenance mechanism.TCP CaseTo implement a completely reliable separate interrupt channel for TCP requires a channel with a full sequence number space. It is possible to compromise here and make the interrupt number space smaller thanthat required to support consumption of numbers at the TCP’s5bandwidth. What is lost is the total independence of the flowcontrol from the data-control channel. Normally, the data-controlsequence numbers will change often enough so that wraparound in theinterrupt number space causes no problems.Things become slightly messy when many interrupts are generated inquick succession. Even if the interrupt numbers are acknowledged,they cannot be reused if they refer to the same data-control sequence number, until a full packet lifetime has elapsed. This can beremedied in all but one case by sending a NOP on the data-controlchannel so that the next set of interrupts can refer to a newdata-control sequence number. However, if the data-control channelis blocked due to flow control and a resynchronizing control (DSN in the TCP case) was just sent, a NOP cannot be created until theresynchronization is complete and a new starting sequence number ischosen. Thus to send another interrupt, the TCP must wait for apacket lifetime or an indication that it can send a NOP on thedata-control channel. (In reality, a connection would probably beclosed long before a packet lifetime elapsed if the sender is notaccepting data from the receiver. [reference 1])6REFERENCES(1) J. Postel, L. Garlick, R. Rom, "TCP Specification (AUTODIN II)," ARC Catalog #35938, July 1976.(2) R. Tomlinson, "Selecting Sequence Numbers," INWG Protocol Note#2, September 1974.(3) Y. Dalal, "More on Selecting Sequence Numbers," INWG ProtocolNote #4, October 1974.(4) V. Cerf, Y. Dalal, C. Sunshine, "Specification of InternetTransmission Control Program," INWG General Note #72, December1974 (Revised). [Also as RFC 675, NIC Catalog #31505.](5) Cerf, V., "TCP Resynchronization," SU-DSL Technical Note #79,January 1976.(6) Cerf, V. and R. Kahn, "A Protocol for Packet NetworkIntercommunication," IEEE Transactions on Communication, VolCOM-20, No. 5, May 1974.(7) C. Sunshine, "Interprocess Communication Protocols for Computer Networks," Digital Systems Laboratory Technical Note #105,December 1975.7。

防火墙报文转发流程的描述

防火墙报文转发流程的描述

防火墙报文转发流程的描述英文回答:The firewall message forwarding process follows a specific sequence of steps to ensure secure and efficient network communication. Here's a detailed explanation of the process:1. Packet Reception: The firewall receives an incoming packet from an external or internal network interface. The packet contains source and destination IP addresses, port numbers, and payload data.2. Header Inspection: The firewall examines the packet header to determine the type of protocol (TCP, UDP, ICMP, etc.) and the source and destination addresses. This information helps the firewall identify the packet's purpose and potential threats.3. Security Policy Rule Matching: The firewall checksthe packet contents against a set of predefined security rules. These rules specify criteria for allowing or denying traffic based on various parameters such as source and destination IP addresses, ports, and protocols.4. Action Enforcement: If the packet matches an allowed rule, the firewall forwards it to its destination. If the packet violates a rule, the firewall can take various actions, such as dropping the packet, logging the incident, or generating an alert.5. Stateful Inspection: Stateful firewalls keep track of ongoing network sessions to make more intelligent decisions about incoming traffic. They examine the packet's sequence numbers and other session-related information to determine if it belongs to an existing connection or is a new attempt.6. Packet Modification: Some firewalls have the ability to modify packets before forwarding them. This can involve changing the source or destination IP addresses, ports, or adding security headers to enhance protection.7. Packet Redirection: In certain cases, the firewall may redirect packets to a specific destination or service. This is useful for load balancing, content filtering, or implementing network security zones.8. Logging and Monitoring: The firewall typically logs all incoming and outgoing packets, including information about the source and destination addresses, ports, protocols, and actions taken. This log data is invaluablefor security monitoring, troubleshooting, and auditing.中文回答:防火墙报文转发流程是一个涉及多个步骤的复杂过程,确保网络通信的安全和高效。

计算机基本英语单词

计算机基本英语单词

计算机基本英语单词1.CPU(Center Processor Unit)中央处理单元2.mainboard主板3.RAM(random access memory)随机存储器(内存)4.ROM(Read Only Memory)只读存储器5.Floppy Disk软盘6.Hard Disk硬盘7.CD-ROM光盘驱动器(光驱)8.install安装9.monitor监视器10.keyboard键盘11.mouse鼠标12.chip芯片13.CD-R光盘刻录机14.HUB集线器15.Modem= MOdulator-DEModulator,调制解调器16.P-P(Plug and Play)即插即用17.UPS(Uninterruptable Power Supply)不间断电源18.BIOS(Basic-input-Output System)基本输入输出系统19.CMOS(plementaryMetal-Oxide-Semiconductor)互补金属氧化物半导体20.setup安装21.uninstall卸载22.wizzard向导23.OS(Operation Systrem)操作系统24.OA(Office AutoMation)办公自动化25.exit退出26.edit编辑27.copy复制28.cut剪切29.paste粘贴30.delete删除31.select选择32.find查找33.select all全选34.replace替换35.undo撤消36.redo重做37.program程序38.license许可(证)39.back前一步40.next下一步41.finish结束42.folder文件夹43.Destination Folder目的文件夹er用户45.click点击46.double click双击47.right click右击48.settings设置49.update更新50.release发布51.data数据52.data base数据库53.DBMS(Data Base Manege System)数据库管理系统54.view视图55.insert插入56.object对象57.configuration配置58.mand命令59.document文档60.POST(power-on-self-test)电源自检程序61.cursor光标62.attribute属性63.icon图标64.service pack服务补丁65.option pack功能补丁66.Demo演示67.short cut快捷方式68.exception异常69.debug调试70.previous前一个71.column行72.row列73.restart重新启动74.text文本75.font字体76.size大小77.scale比例78.interface界面79.function函数80.access访问81.manual指南82.active激活83.puter language计算机语言84.menu菜单85.GUI(graphical user interfaces )图形用户界面86.template模版87.page setup页面设置88.password口令89.code密码90.print preview打印预览91.zoom in放大92.zoom out缩小93.pan漫游94.cruise漫游95.ful l screen全屏96.tool bar工具条97.status bar状态条98.ruler标尺99.table表100.paragraph段落101.symbol符号102.style风格103.execute执行104.graphics图形105.image图像106.Unix用于服务器的一种操作系统107.Mac OS苹果公司开发的操作系统108.OO(Object-Oriented)面向对象109.virus病毒110.file文件111.open打开112.colse关闭113.new新建114.save保存115.exit退出116.clear清除117.default默认N局域网119.WAN广域网120.Client/Server客户机/服务器121.ATM( Asynchronous Transfer Mode)异步传输模式122.Windows NT微软公司的网络操作系统123.Internet互联网124.(World Wide Web)万维网125.protocol协议126.HTTP超文本传输协议127.FTP文件传输协议128.Browser浏览器129.homepage主页130.Webpage网页131.website132.URL在Internet的服务程序上133.用于指定信息位置的表示方法134.Online在线135.Email电子136.ICQ网上寻呼137.Firewall防火墙138.Gateway网关139.HTML超文本标识语言140.hypertext超文本141.hyperlink超级142.IP(Address)互联网协议(地址)143.Search Engine搜索引擎144.TCP/IP用于网络的一组通讯协议145.Telnet远程登录146.IE(Internet Explorer)探索者(微软公司的网络浏览器)147.Navigator引航者(网景公司的浏览器) 148.multimedia多媒体149.ISO国际标准化组织150.ANSI美国国家标准协会151.able 能152.active file 活动文件153.add watch 添加监视点154.allf iles 所有文件155.all rights reserved 所有的权力保留156.altdirlst 切换目录格式157.and fix a much wider range of disk problems 并能够解决更大X围内的磁盘问题158.and other inFORMation 以及其它的信息159.archive file attribute 归档文件属性160.assignto 指定到161.auto answer 自动应答162.auto detect 自动检测163.auto indent 自动缩进164.auto save 自动存储165.avail able onvolume 该盘剩余空间166.bad mand 命令错167.bad mand or file name 命令或文件名错168.batch parameters 批处理参数169.binary file 二进制文件170.binary files 二进制文件171.International panies国际公司172.Under the blank page页下空白173.by date 按日期174.by extension 按扩展名175.by name 按名称176.by tesfree 字节空闲177.call stack 调用栈178.case-insensitive 区分大小写179.Software pany shares 软件股份公司180.change directory 更换目录181.change drive 改变驱动器182.change name 更改名称183.characterset 字符集184.checking for 正在检查185.checks a disk and display a status report 检查磁盘并显示一个状态报告186.Change Disk / path改变盘/路径187.china 中国188.choose one of the following 从下列中选一项189.clean all 全部清除190.clean all breakpoints 清除所有断点191.clean attribute 清除属性192.Removal Order history 清除命令历史193.clean screen 清除屏幕194.close all 关闭所有文件195.code generation 代码生成196.color palette 彩色调色板197.The mand line命令行198.mand prompt命令提示符199.pressed file 压缩文件200.Hard disk configuration that used by MS-DOS 配置硬盘,以为MS-DOS 所用201.Conventional memory常规内存202.Copy directory and subdirectories, empty except 拷贝目录和子目录,空的除外203.Set up a copy of a document archiving attributes 拷贝设置了归档属性的文件204.Copying files or moving to another place把文件拷贝或搬移至另一地方205.To copy the contents of a floppy disk to another disk 把一个软盘的内容拷贝到另一个软盘上206.Copy Disk复制磁盘207.copyrightc208.Create Logical DOS district or DOS actuator创建DOS分区或逻辑DOS驱动器209.Create DOS district expansion创建扩展DOS分区210.The expansion DOS partitions to create logical DOS drives 在扩展DOS分区中创建逻辑DOS驱动器211.Create DOS Main district 创建DOS主分区212.Create a directory 创建一个目录213.To create, change or delete disk label创建,改变或删除磁盘的卷标214.the current file 当前文件215.Current drives 当前硬盘驱动器216.current settings 当前设置217.current time 当前时间218.The cursor position光标位置219.defrag 整理碎片220.dele 删去221.Logical DOS or delete Division actuator 删除分区或逻辑DOS驱动器222.deltree 删除树223.device driver 设备驱动程序224.Dialogue column对话栏225.direction keys 方向键226.directly 直接地227.dContents variables that目录显示变量228.Directory Structure目录结构229.disk access 磁盘存取230.disk copy 磁盘拷贝231.disk space 磁盘空间232.That document 显示文件233.display options 显示选项234.That geographical information 显示分区信息235.That specified directory and all subdirectories documents 显示指定目录和所有目录下的文件236.The document specified that attribute显示指定属性的文件237.Show or change file a ttributes 显示或改变文件属性238.That date or equipment 显示或设备日期239.Rather than a monochromatic color display screen installation information 以单色而非彩色显示安装屏信息240.That system has been used and unused amount of memory 显示系统中已用和未用的内存数量241.All documents on the disk that the full path and name显示磁盘上所有文件的完整路径和名称242.Show or change the current directory 显示或改变当前目录243.doctor 医生244.doesn 不245.doesnt change the attribute 不要改变属性246.dosshell DOS 外壳247.doubleclick 双击248.You want that information? Logical drive (y / n)?你想显示逻辑驱动器信息吗(y/n)?249.driveletter 驱动器名250.editmenu 编辑选单251.Memory内存252.end of file 文件尾253.end of line 行尾254.enter choice 输入选择255.entire disk 转换磁盘256.environment variable 环境变量257.All the documents and subdirectories所有的文件和子目录258.The directory has been in existence documents已存在的目录文件时259.expanded memory 扩充内存260.expand tabs 扩充标签261.explicitly 明确地262.extended memory 扩展内存263.fastest 最快的264.file system文件系统265.fdis koptions fdisk选项266.file attributes 文件属性267.file FORMat 文件格式268.file functions 文件功能269.files election 文件选择270.Documents choice variables文件选择变元271.file sin 文件在272.file sinsubdir 子目录中文件273.file slisted 列出文件274.file spec 文件说明275.file specification 文件标识276.files selected 选中文件277.find file 文件查寻278.fixed disk 硬盘279.fixed disk setup program 硬盘安装程序280.fixes errors on the disk 解决磁盘错误281.floppy disk 软盘282.FORMat disk格式化磁盘283.FORMats a disk for use with ms-dos 格式化用于MS-DOS的磁盘284.FORM feed 进纸285.free memory 闲置内存286.full screen 全屏幕287.function procedure 函数过程288.graphical 图解的289.graphics library 图形库290.group directories first 先显示目录组291.hang up 挂断292.hard disk 硬盘293.hardware detection 硬件检测294.has been 已经295.help file 帮助文件296.help index 帮助索引297.help in FORM ation 帮助信息298.help path 帮助路径299.help screen 帮助屏300.help text 帮助说明301.help topics 帮助主题302.help window 帮助窗口303.hidden file 隐含文件304.hidden file attribute 隐含文件属性305.hidden files 隐含文件306.how to 操作方式307.ignore case 忽略大小写308.in both conventional and upper memory 在常规和上位内存309.incorrect dos 不正确的DOS310.incorrect dos version DOS 版本不正确311.indicatesa binary file 表示是一个二进制文件312.indicatesan ascii text file 表示是一个ascii 文本文件313.insert mode 插入方式314.inuse 在使用315.invalid directory 无效的目录316.is 是317.kbytes 千字节318.keyboard type 键盘类型bel disk 标注磁盘p top 膝上321.The largest executable 最大可执行程序322.The largest available memory block最大内存块可用323.left handed 左手习惯324.left margin 左边界325.line number 行号326.line numbers 行号327.line spacing 行间距328.The document specified that the order按指定顺序显示文件329.listfile 列表文件330.listof 清单331.Position papers 文件定位332.look at 查看333.look up 查找334.macro name 宏名字335.make directory 创建目录336.memory info 内存信息337.memory model 内存模式338.menu bar 菜单条339.menu mand 菜单命令340.menus 菜单341.message window 信息窗口342.microsoft 微软343.microsoft antivirus 微软反病毒软件344.microsoft corporation 微软公司345.mini 小的346.modem setup 调制解调器安装347.module name 模块名348.monitor mode 监控状态349.Monochrome monitors 单色监视器350.move to 移至351.multi 多352.new data 新建数据353.newer 更新的354.new file 新文件355.new name 新名称356.new window 新建窗口357.norton norton358.nostack 栈未定义359.Note: careless use deltree 注意:小心使用deltree360.online help 联机求助361.optionally 可选择地362.or 或363.page frame 页面364.page length 页长365.Every screen displayed information about suspended after 在显示每屏信息后暂停一下366.pctools pc工具367.postscript 附言368.prefix meaning not 前缀意即\"不369.prefix to rever seorder 反向显示的前缀370.Prefixes used on a short horizontal line and - after the switch (for example /-w) presetswitch用前缀和放在短横线-后的开关(例如/-w)预置开关371.press a key toresume 按一键继续372.press any key for file functions 敲任意键执行文件功能373.press enter to keep the same date 敲回车以保持相同的日期374.press enter to keep thes ame time 敲回车以保持相同的时间375.press esc tocontinue 敲esc继续376.press esc to exit 敲<esc>键退出377.press esc to exit fdisk 敲esc退出fdisk 378.press esc to return to fdisk options 敲esc 返回fdisk选项379.previously 在以前380.print all 全部打印381.print device 打印设备382.print erport 打印机端口383.processesfilesinalldirectoriesinthespecifiedp ath 在指定的路径下处理所有目录下的文件384.program file 程序文件385.program ming environment 程序设计环境386.Each goal in the creation of documents remind you 在创建每个目标文件时提醒你387.promptsy outo press a key before copying 在拷贝前提示你敲一下键388.pull down 下拉389.pull downmenus 下拉式选单390.quick FORMat 快速格式化391.quick view 快速查看392.read only file 只读文件393.read only file attribute 只读文件属性394.read only files 只读文件395.read only mode 只读方式396.redial 重拨397.repeat last find 重复上次查找398.report bfile 报表文件399.resize 调整大小400.respectively 分别地401.rightmargin 右边距402.rootdirectory 根目录403.Running debug, it is a testing and editingtools 运行debug, 它是一个测试和编辑工具404.run time error 运行时出错405.save all 全部保存406.saveas 另存为407.scan disk 磁盘扫描程序408.screen colors 屏幕色彩409.screen options 屏幕任选项410.screen saver 屏幕暂存器411.screen savers 屏幕保护程序412.screen size 屏幕大小413.scroll bars 翻卷栏414.scroll lock off 滚屏已锁定415.search for 搜索416.sector spertrack 每道扇区数417.selectgroup 选定组418.selectionbar 选择栏419.setactivepartition 设置活动分区420.setupoptions 安装选项421.shortcutkeys 快捷键422.showclipboard 显示剪贴板423.singleside 单面424.size move 大小/移动425.sort help S排序H帮助426.sortorder 顺序427.Special services: D directory maintenance 特殊服务功能: D目录维护428.List of designated to drive, directory, and documents 指定要列出的驱动器,目录,和文件429.Designated you want to parent directory as the current directory 指定你想把父目录作为当前目录430.The new document specified directory or file name指定新文件的目录或文件名431.Copies of the document to designated 指定要拷贝的文件432.stack over flow 栈溢出433.stand alone 独立的434.startu poptions 启动选项435.statu sline 状态行436.stepover 单步437.summaryof 摘要信息438.Cancellation confirmed suggest that you would like to cover 取消确认提示,在你想覆盖一个439.swapfile 交换文件440.Switches can be installed in the environment variable dircmd开关可在dircmd环境变量中设置441.switch to 切换到442.sync 同步443.system file 系统文件444.system files 系统文件445.system info 系统信息446.system in FORM ation 系统信息程序447.table of contents 目录448.terminal emulation 终端仿真449.terminal settings 终端设置450.test file 测试文件451.test file para meters 测试文件参数452.the active window 激活窗口453.Switches can copycmd preset environment variables开关可以在copycmd环境变量中预置454.the two floppy disks must be the same type 两个软磁盘必须是同种类型的455.this may be over ridden with y on the mandline 在命令行输入/-y可以使之无效456.toggle reakpoint 切换断点457.toms dos 转到MS-DOS458.topm argin 页面顶栏459.turn off 关闭460.Type cd drives: designated driver that the current directory 键入cd驱动器:显示指定驱动器的当前目录461.Type cd without parameters of the current drive to show the current directory键入无参数的cd以显示当前驱动器的当前目录462.The date and type of parameters that the current date set键入无参数的date,显示当前日期设置463.unmark 取消标记464.unse lect 取消选择es bare FORMat 使用简洁方式es lower case 使用小写es widelist FORMat 使用宽行显示ing help 使用帮助469.verbosely 冗长地470.verifies that new file sare writ tencorrectly校验新文件是否正确写入了471.video mode 显示方式472.view window 内容浏览473.viruses 病毒474.vision 景象475.vollabel 卷标476.volumelabel 卷标477.volume serial number is 卷序号是478.windows help windows 帮助479.wordwrap 整字换行480.working directory 正在工作的目录481.worm 蠕虫482.write mode 写方式483.write to 写到484.xmsmemory 扩充内存485.you may 你可以486.我把网络安全方面的专业词汇整理了一下,虽然大多是乱谈,但初衷在于初学者能更好的了解这些词汇。

阅读能给我们带来幸福的英语作文

阅读能给我们带来幸福的英语作文

阅读能给我们带来幸福的英语作文The Enriching Embrace of Literature: Reading as a Catalyst for Happiness.In the tapestry of human experiences, reading stands as an unparalleled thread, intertwining itself with our emotions, intellect, and sense of well-being. It is a sanctuary where we can escape the mundane, embark on extraordinary journeys, and explore the depths of our own humanity. Beyond the mere accumulation of knowledge, reading has a profound impact on our overall happiness, enriching our lives in myriad ways.Intellectual Stimulation and Cognitive Enhancement.Reading activates countless neural pathways,stimulating both our critical thinking abilities and our creativity. As we delve into different genres and perspectives, our minds are challenged to process new information, make connections, and form original thoughts.This mental exercise not only sharpens our cognitive skills but also promotes neuroplasticity, ensuring our brains remain flexible and adaptable throughout our lives.Emotional Enrichment and Empathy.Literature provides a window into the hearts and minds of characters, allowing us to experience a wide range of emotions vicariously. Through their triumphs and tribulations, we develop empathy, compassion, and a deeper understanding of the human condition. By stepping into different perspectives, we cultivate tolerance, acceptance, and a broader appreciation of diversity.Stress Reduction and Relaxation.In an era marked by constant stimuli and information overload, reading offers a sanctuary for our weary minds. Losing ourselves in a captivating story can transport us to distant realms, calming our anxious thoughts and providing a much-needed respite from the pressures of daily life. Studies have shown that reading just six minutes a day cansignificantly reduce stress levels and improve overallwell-being.Improved Sleep Quality.Reading before bed can significantly enhance sleep quality by reducing racing thoughts and promoting relaxation. The soft glow of a book or e-reader can lull us into a state of tranquility, preparing our bodies and minds for a restful night's sleep.Increased Social Connection and Community Building.Literature has the power to bridge divides and foster connections between individuals. Book clubs, online forums, and literary events provide platforms for readers to share their insights, engage in lively discussions, and make new friends who share their love of reading.Enhanced Imagination and Creativity.Reading stimulates our imagination, allowing us toescape the confines of our everyday lives and explore limitless possibilities. By immersing ourselves in fantastical worlds, historical epics, and thought-provoking essays, we expand our mental horizons, develop our problem-solving skills, and nurture our own creativity.Personal Growth and Self-Discovery.Literature holds a mirror up to our own experiences, allowing us to reflect on our values, beliefs, and aspirations. By witnessing the struggles and triumphs of fictional characters, we can draw parallels to our ownlives and gain valuable insights into our own strengths, weaknesses, and potential.Meaning and Purpose in Life.In a world often filled with uncertainty, literature can provide a sense of meaning and purpose. By exploring the complexities of human existence, grappling with ethical dilemmas, and seeking enlightenment through the written word, we can gain a deeper understanding of our place inthe grand scheme of things.Preservation of Cultural Heritage and Collective Memory.Reading connects us to our cultural heritage and collective memory. Through books, we learn about the past, present, and future, preserving the stories, traditions,and values that shape our societies. By engaging with diverse literary works, we cultivate a sense of global citizenship and appreciate the richness of human experience.Conclusion.The benefits of reading extend far beyond the mere acquisition of knowledge. It is an enriching activity that stimulates our minds, expands our horizons, fosters our empathy, and promotes our overall happiness and well-being. By embracing the transformative power of literature, we unlock a world of imagination, discovery, and personal growth. In the words of George Bernard Shaw, "A readerlives a thousand lives before he dies... The man who never reads lives only one."。

tcp协议介绍(英文)

tcp协议介绍(英文)

数据,而不是将数据拆分为IP大小的块,并发布了一系列
的IP请求,该软件可以发出一个单一的请求,TCP,并让 TCP处理IP的详细信息。
TCP报文结构
• TCP的特点
– 面向连接
• TCP连接仅存于端系统,中间路由器对此毫不知情
– 全双工服务
• 可双向同时传输数据
– 点对点连接
• 仅存在于两个端系统之间,无第三者“插足”
oriented layer and the Internet Protocol at the
internetworking (datagram) layer. The model became known informally as TCP/IP, although formally it was henceforth(从此以后) called the Internet Protocol Suite.
首部 长度
保 留
U A P R S F R C S S Y I G K H T N N
检 验 和
选 项 (长 度 可 变)
紧 急 指 针
填 充
32 bit
比特 0 8
源 端 口 序 号 TCP 首部 确 首部 长度 保 留 认 号 窗 口
16
24
目 的 端 口
31
U A P R S F R C S S Y I G K H T N N
32 bit
比特 0 8 源 端 口 序 号 TCP 首部 确 首部 长度 保 留 认 号 窗 口 16 24 目 的 端 口 31
U A P R S F R C S S Y I G K H T N N
检 验 和
选 项 (长 度 可 变)
紧 急 指 针

Congestion and Corruption Loss Detection with Enhanced-TCP

Congestion and Corruption Loss Detection with Enhanced-TCP

Congestion and Corruption Loss Detection with Enhanced-TCP Deddy Chandra+, Richard J. Harris+ and Nirmala Shenoy+++ School of Electrical and Computer System Engineering Royal Melbourne Institute of TechnologyMelbourne 3000, Australia[deddy.chandra@] [richard@.au] ++ Department of Information Technology Rochester Institute of TechnologyRochester, NY 14623[ns@]Abstract - In the Internet area, the Transmission Control Protocol (TCP) is the most commonly used transport protocol. In this paper, we present how TCP is tuned to differentiate between corruption and congestion losses. We also discuss why regular TCP is not suitable for mobile hosts and its inability to detect the type of losses that yield significant degradation of TCP performance. We introduce an Enhancement to TCP that will be referred to as E-TCP that makes TCP aware of the existence of wireless losses, a new acknowledgment packet format and an agent to assist E-TCP in implementation.I.I NTRODUCTIONThe current explosive growth of cellular systems and wireless networks are just the early state of a “wireless revolution”. The ultimate goal is global connectivity and performance of mobile computing devices. To achieve such goals, we are meeting the challenge of creating a full, integrated, seamless, fault-tolerant and heterogenous networks composed of fully distributed, energy efficient and ubiquitous mobile computing platforms.Mobile and wireless networks are fundamentally different from conventional wired computer networks [7]. With the introduction of mobile connectivity, location is no longer a constraint in terms of communication. It allows users to access information anytime, anywhere. Users are insisting upon having the same applications over wireless link with the same quality of service that they are getting over a wired link.Mobile wireless is currently one of the most challenging environments for the Internet protocol, particularly TCP. There are approaches that have been used to support the transport protocol in a wireless environment. For example, WTCP by Sinha et al. [3] was proposed specifically for wireless wide-area networks. It is a rate based protocol rather than window based like TCP. It uses a ratio of the average inter-packet delay observed at the receiver to the inter-packet delay at the sender as the primary metric for rate control, rather than the packet loss and retransmit timeout used by TCP.TCP is the transport protocol that integrates a wide range of different physical networks into a global Internet [2]. TCP was developed to perform well in conventional wired networks where it serves as a transport protocol that guarantees a reliable service for data transmission over networks with its congestion control mechanism. Reliable service ensures that one end successfully receives data sent by another end. However, the current implementation of TCP, which is known as Reno-TCP, faces a great challenge with the introduction of wireless networks.As discussed in [2], the significant degradation of TCP performance over wired and wireless networks is due to the inability of TCP to distinguish the type of packet loss that drives TCP to unnecessarily invoke its congestion avoidance mechanism to handle wireless losses. Figure 1 and 2 illustrate the message sequence diagram for data transmission in the case when packet losses were due to either congestion in the networks or error-prone in the wireless environment. It shows that in both scenarios, TCP is unable to distinguish the cause of packet losses, thus it interprets the packet losses as being due to congestion in the network, which was not always true. TCP then reduces its sending rate by reducing the size of the congestion window that causes significant performance degradation.F ixed H o st B aseS tatio n M o bile H ostC W N D=C W N D=C W N D=C W N D/The rest of this paper will discuss the algorithms of our proposed schemes more in detail and they are organised as follows. The related works and their drawbacks will be discussed in section II. The algorithm of our proposed scheme will be discussed more detail in section III. Simulation results will be discussed in section IV and at finally conclusions will be presented in section V.II.RELATED W ORKSMany researchers in recent years have tried to improve TCP performance over wireless networks. The proposed solutions can be categorised into three main schemes, they are: the split connection scheme, the link layer scheme and the end-to-end scheme. In this section, some of the proposed solutions will be discussed in order to understand their advantages and drawbacks in comparison with our proposed solution.The last-hop acknowledgment scheme (LHACK) was proposed in [7]. In summary, for this proposal, the base station/wireless router sends a “LHACK” (“FHACK”, first-hop acknowledgment) to the stationary (mobile) source for each packet that it receives. For connection from a fixed host to a mobile host, if “LHACK” is received at the source but the “DACK” (acknowledgment from the receiver) is missing, this implies a corruption of the packet and no congestion control mechanism is triggered. In contrast, missing both “LHACK” and “DACK” implies congestion, thus a congestion control mechanism is invoked. The drawbacks of this scheme are that two acknowledgments are sent for each message and this result in an extra load on the return path. Thus, this scheme must also rely on timeout and fast retransmission to detect corrupted packets. Significant throughput degradation occurs when multiple corruptions occur in a single window. Furthermore, for connection between two mobile hosts, the loss of “LHACK” on the wireless link to the source can be misinterpreted as network congestion. Another scheme, known as Explicit Congestion Notification (ECN) has been discussed in [5]. The ECN scheme marks packets when it senses congestion in the router. Upon receipt of a congestion marked packet, the TCP receiver informs the sender (in a subsequent acknowledgment) about incipient congestion, which in turn will trigger the congestion avoidance algorithm at the sender. The major drawback is that it requires support from both routers as well as the end hosts. Since changesin the operation of the router is required, it introduces more processing overhead on routers and in addition, it is not feasible to do so considering there could be many routers along the path.Negative acknowledgment (NACK) has been proposed in [1]. This scheme uses the TCP cumulative acknowledgment packet with an additional “NACK” in the option field to explicitly indicate which packet is received in error, so that the retransmission can be initiated quickly. The drawback of this scheme is that it relies on the reception of an ACK packet, if the source does not receive the ACK, it could lead to expiration of the retransmission timer. Furthermore, this scheme does not consider communication between a fixed host and a mobile host where data packets may be lost due to congestion and corruption.Our work was inspired by an acknowledgment scheme called Explicit Loss Notification with Acknowledgment (ELN-ACK), which was proposed in [8]. Unfortunately, in their proposed scheme, they only discussed the improvement of TCP performance for communication between a fixed host as the data sender and a mobile host as the data receiver. In contrast, their scheme does not apply in the case when the sender is a mobile host and fixed host is the data receiver or communication between two mobile hosts.The motivation that we have is to improve TCP performance when wired and wireless networks are involved in the communication. Therefore, we introduce the Transit Agent and Enhanced-TCP (E-TCP) concept for this purpose. In this paper, we show the algorithms for the transit agent and E-TCP. For further details, our proposed scheme has been discussed in [2].Figure 3: Internetworking TopologyIII.T RANSIT AGENT AND E NHANCED-TCPIn our proposed scheme, we still have both TCP and Enhanced-TCP (E-TCP) located on the transport protocol layer. E-TCP will only be activated in the case when a wireless environment is involved in the data transmission process. Acknowledgment of the SYN packet will be anindication for the sender to decide which one will be activated. We are also using a new form of acknowledgment packet, which will be referred to as ACK C-CLN (congestion-corruption loss notification). This acknowledgment packet will need to use up to two additional bits in the TCP header to store information from the transit agent. We need to allocate two from the six reserved bits in the TCP header for this purpose. When communication occurs between a fixed host and a mobile host, data will only transit through one basestation. Thus, one bit will be used. If communicationoccurs between mobile hosts that might involve two base stations (Figure 3), two bits will be used instead, to acknowledge the sender for a lost packet.For simplicity of understanding, we name each bit in the ACK C-CLN packet, the first bit will be called “Fb” (First bit) and the second bit called “Sb” (Second bit). The first transit agent that reached by ACK C-CLN packet will always fill the Fb and the second transit agent that reached by ACK C-CLN will fill the Sb with transit information.If Fb bit or both Fb and Sb bits have been set in the acknowledgement packet for the SYN packet, E-TCP will be activated. Otherwise, regular TCP will be activated.A. Transit Agent The proposed transit agent should permanently be allocated on the gateway node. First, it needs to perform the simple task of recording the sequence number of any incoming data packets from the sender before they get forwarded to the receiver. Second, it puts transitinginformation into the acknowledgment packet. The processflows for the data packet and ACK packet can be referred in [2].The following is the data processing algorithm performed by the transit agent:if (data packet arrives ) thenpkt_seqno =get(SEQ_NO);pkt_transit[pkt_seqno]=TRUE;end ifNote: initially the value of pkt_transit[pkt_id]=FALSE. Where FALSE=0 and TRUE=1.Upon receiving data from the sender, the receiver will acknowledge the successfully received packet as well as its request for the expected packet (packet that has not been received). When the acknowledgment packet (ACK C-CLN ) has arrived at gateway node, the transit agent will proceed according to the following algorithm:if (ACK C-CLN arrives ) thenexpt_seqno = get(SEQ_NO of expected packet );if (!isset(Fb)) then // if Fb hasn’t been setif (pkt_transit[expt_seqno] == TRUE) thenset (Fb) = 1;elseset (Fb) =0;end if else If (pkt_transit[expt_seqno] == TRUE) then set (Sb) = 1; elseset (Sb) = 0; end if end if end if B. E-TCP at SenderSince the sender could be a fixed host or a mobile host, the interpretation for Fb and Sb value would be different.The following is the algorithm for E-TCP on a fixed sender. Upon receiving an ACK C-CLN packet, E-TCP at the sender will read the “Fb” bit, thus the following algorithm shows how E-TCP learns about packet losses:If (ACK C-CLN arrives ) thenexpt_seqno = get(SEQ_NO of expected pkt );if (Fb == 1) then // wireless loss detected retransmit_pkt(); … keep ssthresh …… keep cwnd … else // congestion loss detected… act like Reno … retransmit_pkt();… reset cwnd & ssthresh … end if end ifE-TCP on the fixed sender seems simple since it will only deal with a mobile receiver. But, it would be adifferent case for a mobile sender since the receiver could be a fixed host or another mobile host. The following isthe E-TCP algorithm for a mobile sender: If (ACK C-CLN arrives ) then expt_seqno = get(SEQ_NO of expected pkt ); If (isset(Fb) && !isset(Sb)) then // mobile to fixed If (Fb == 1) then // congestion loss detected … act like TCP Reno …retransmit_pkt(); … reset cwnd & sshtresh … else // wireless loss detectedretransmit_pkt( );… keep ssthresh …… keep cwnd … endif else if (isset(Fb) && isset(Sb) then //mobile to mobileif {(Fb== 0 && Sb==0) | | (Fb == 1 && Sb == 1)} then retransmit_pkt( ); … keep ssthresh … … keep cwnd … else if (Sb == 1 && Fb == 0) then … act like TCP Reno … retransmit_pkt(); … reset cwnd & sshtresh … end ifendifend ifend ifend ifReno-TCP assumes that a data packet has been lost if it receives k duplicate acknowledgments (generally k=3). Afterwards, Reno will retransmit the lost packet and reduce its congestion window, which results in reducing the number of packets that will be sent on the next cycle. This recovery action will apply for both lost packets that were due to congestion in the network and corrupted packets caused by a high bit-error rate in the wireless environment. Packets that were lost in the wireless environment were actually corrupted and simply discarded by the receiver. Therefore, no congestion actually occurs in the network that causes the lost of the packet. Furthermore, the lost packet has to be replaced as quickly as possible by retransmitting the packet. However, it is unnecessary to reduce the transmission rate since there was actually no congestion in the network. With the implementation of E-TCP and transit agent, the sender will be able to act appropriately upon packet loss, which was due to corruption.IV.S IMULATIONA simulation-based study is described in this section with three different scenarios for data transmission. The sender might be a fixed host or a mobile host and it is a similar case for the receiver. The base station includes a finite-buffer drop tail gateway and the network has both wired and wireless links that form the internetworking scheme (see Figure 3).It is assumed that the sender always has data to send. It is also assumed that the receiver can always send out acknowledgments immediately for each received data packet without delay.For simplicity, the simulations shown in this paper use one-way traffic, which means that acknowledgment packets are never “compressed” or discarded on the path from the receiver back to the sender. The simulation model was implemented using the OPNET TM simulator. The current implementation of TCP (Reno) has been considered in this simulation for comparison with the proposed E-TCP. The simulation is then performed under a typical wireless environment and it is assumed that the bit-error rate on wired links is very low. All wired links have a bandwidth of 100 Mbps and 30 msec of propagation delay. The wireless link capacity is 2 Mbps. TCP segment size is relatively small and it sets at 170 bits. Maximum window size is 94 packets and minimum slow start threshold is 1 packet. The simulation was run under a 10-3 BER for the wireless link. A maximum of 50 packets could be processed in the router queue.In this simulation, we show that E-TCP can differentiate the type of losses and therefore the performance increases. The simulation model has both incorporated causes of packet loss; these are congestion in the wired network and corruption in the wireless environment. We choose a reasonable level of packet loss ratio to actually show the effectiveness of E-TCP. The congestion loss was simulated with the limitation of the number of packets that could be processed in the router’s queue. An appropriate bit error rate in the wireless environment has been selected to model packet corruption.In order to show the improvements resulting from implementing E-TCP over wired and wireless environments, three scenarios were considered. The three scenarios cover the following cases: data was sent from a fixed host to a mobile host (FtoM), data was sent from a mobile to a fixed host (MtoF) and data was sent from one mobile to another mobile host (MtoM). In all cases we assume that the mobile hosts stay in the same location during the lifetime of the connection. All results from our simulations are obtained using a 95% confidence level. Figure 4 shows the results of a throughput comparison when Reno or E-TCP was implemented in the three different scenarios. It shows that E-TCP performance is higher than Reno. From the figures we can see that in all three scenarios, E-TCP has improved the performance of TCP transport protocol.In the case when congestion exists in the wired network and error-prone exists in the wireless network, the E-TCP will be able to discover the type of losses with explicit notification from the transit agent and invoke an appropriate recovery action. When congestion losses have been detected, E-TCP will invoke the congestion avoidance mechanism to overcome the losses and reduce transmission rate as if done by Reno. But, when corruption losses have been detected, the transmission rate will not be reduced and it results in more packets being sent. E-TCP is able to maintain the transmission rate (as a result of large congestion window size) and therefore the performance is improved.Figure 5 (a) and (b) shows the trace of congestion window size (cwnd) for E-TCP and Reno for data transmission from a fixed to a mobile host. From the figure, we can see that E-TCP can maintain a larger congestion window size compared with Reno. Since Reno is unable to distinguish the type of losses, it often reduces the congestion window size, which results in lower throughput.Figure 6 (a) and (b) shows the trace of congestion window size for E-TCP and Reno for data transmission from a mobile to a fixed host. It shows that E-TCP can also maintain a larger congestion window size under the congestion and corruption loss scenario. On the contrary, Reno faces a difficulty in increasing its congestion window under the same scenario. A lower congestion window size causes fewer packets to be sent for the next cycle and therefore lowers the performance as well. Finally, in Figure 7 (a) and (b), we can see the trace of congestion window size for E-TCP and Reno-TCP for data transmission between mobile hosts. Reno with its ordinary congestion avoidance mechanism has been forced to reduce its congestion window size in the case when packet losses have been detected.Reno TCP is also suffering from multiple timeout due to the misinterpretation of packet loss that keeps cwndsize small. It is shown in figure 5, 6 and 7 (b) that it is hard for Reno to maintain its congestion window size while operating under lossy environments.Figure 4: Throughput comparison of the three differentscenariosV.CONCLUSIONSIn this paper, we have discussed the algorithms of enhancement for TCP, which has been referred to as E-TCP, an agent (transit agent) that will assist E-TCP to determine the type of losses with the use of an acknowledgment packet (ACK C-CLN). The proposed scheme allows the transport protocol to distinguish between congestion and corruption losses and then perform appropriate recovery actions.This scheme requires only a small amount of functionality at the gateway node and the sender, while leaving all the other intermediate nodes unchanged in the wired network. We have described the algorithms of our proposed solution and demonstrated the effectiveness of the approach through the simulation.The main idea of our proposed scheme is to let a sender to become aware of the type of packet losses in the case where packet losses have been detected. An acknowledgment packet has been introduced to deliver explicit information from the transit agent to E-TCP at the sender. Upon receiving this acknowledgment packet, the sender will be able to decide whether it needs to reduce the transmission rate if congestion losses have been detected or to maintain the transmission rate in the case when corruption loss has been detected. In both cases, the sender needs to retransmit the lost packet.The simulation results show that enhancement to current implementation of TCP (E-TCP) able to maintain congestion window size that allows the sender to send more packets to the receiver. It also has been shown that E-TCP has performed better in comparison with Reno-TCP in three studied scenarios.VI.R EFERENCES[1] A.C.F. Chan, D.H.K Tsang and S. Gupta, “TCP(Transmission Control Protocol) over Wireless Links”, In Proceeding of VTC'97, p. 2326-30, Phoenix, U.S.A., May 1997.[2] D. Chandra, R.J. Harris and N. Shenoy, “TCPPerformance for future IP-based Wireless Networks”,3rd IASTED International Conference on Wirelessand Optical Communications (WOC’03), pp. 521-526, July 2003.[3] D. Comer, Internetworking with TCP/IP, vol. 1-3,Reading, MA: Prentice Hall, Englewood Cliff, NJ,1991.[4]P. Sinha, N. Venkitaraman, R. Sivakumar and V.Bharghavan, “WTCP:A reliable transport protocol forwireless wide-area networks.”, Proceedings of ACMMobicom’99, Seattle, WA, pp. 231-241.[5]R. Ramani and A. Karandikar, “Explicit CongestionNotification (ECN) in TCP over Wireless Network”,IEEE International Conference on Personal WirelessCommunications, pp. 495-499, Dec 2000.[6]V. Jacobson, “Congestion avoidance and control.”,SIGCOMM Symposium on Communications Architectures and Protocols, 1988, pp. 314-329.ftp:///papers/congavoid.ps.Z for an updated version.[7]J.A. Cobb, P.Agrawal, ”Congestion or Corruption? Astrategy for efficient wireless TCP sessions:, IEEESymposium on Computer and Communications (ISCC’95), June 1995.[8]W. Ding and A. Jamalipour, “A New LossNotification with Acknowledgment for Wireless TCP”, 12th IEEE Symposium on PIMRC, vol.1, pp.65-69, Sep. 2001.[9]W.R. Stevens, TCP/IP Illustrated, vol. 1. Reading,MA: Addison-Wesley, Nov. 1994.(a)(b)Figure 5: Congestion window of data transmission from Fixed to Mobile host(a)Figure 6: Congestion window of data transmission from Mobile to Fixed hostFigure 7: Congestion window of data transmission from mobile to mobile host。

protocol的名词解释

protocol的名词解释

protocol的名词解释Protocol(协议)的名词解释一、引言协议(Protocol)是一个广泛应用于各种领域的概念,它是指为了达成一致、促进交流和实现共同目标而制定的一系列规定或约定。

在计算机科学、通信、外交、商务等多个领域中,协议是确保各方按照相同的规则行事的基石。

本文将从不同领域的角度,对协议的含义和应用进行解释。

二、计算机网络中的协议在计算机网络领域,协议是使网络设备(如计算机、路由器、服务器)之间能够相互通信的规则和约定。

其中,最为人熟知的是互联网协议套件(Internet Protocol Suite),简称为TCP/IP协议。

TCP/IP协议是互联网上数据传输的基础,它包括了多个层次的协议,如IP协议、TCP协议和UDP协议等,分别负责数据的分组、传输和结束。

协议不仅仅局限于传输层面,在应用层面也有一系列的协议,如HTTP协议、SMTP协议等。

这些协议规定了数据的格式和传递方式,使得网络上的服务器和客户端能够有效地进行数据交互。

协议的制定极大地促进了互联网的发展,为人们的生活和工作提供了方便。

三、通信领域中的协议在通信领域中,协议是指为了确保消息的准确性和完整性而制定的规则。

不同的通信协议在性质和应用上有所不同。

例如,在移动通信领域,GSM、CDMA等协议规定了无线通信的标准,确保了不同厂商的设备能够互相兼容和通信。

而在有线通信领域,各种协议如RS-232、Ethernet等也充当着重要的角色。

协议的制定不仅仅是为了保证通信的有效,更是为了确保通信的安全。

因此,安全性成为了协议设计的核心考虑因素之一。

例如,SSL/TLS协议(安全套接层/传输层安全协议)为网络通信提供了加密和身份验证的功能,确保了数据传输的机密性和完整性。

四、外交领域中的协议协议在外交领域中常被用于指代国家之间签订的协议或公约。

这些协议规定了国家之间的权利和义务,并为双方提供了保障。

例如,联合国的成立是通过《联合国宪章》这一协议来完成的,它规定了联合国的宗旨、原则及工作方式,成为了国际关系中的基本准则。

英语作文-福利审批申请

英语作文-福利审批申请

英语作文-福利审批申请In the realm of employment, welfare benefits play a pivotal role in ensuring the well-being of employees. They serve as a safety net, providing support during unforeseen circumstances and fostering a sense of security within the workforce. Recognizing the importance of these benefits, I am compelled to submit an application for the approval of additional welfare provisions for our team.The essence of this application lies in the acknowledgment of the hard work and dedication exhibited by our team members. Their unwavering commitment has not only contributed to the success of our projects but has also elevated the company's status in the competitive market. It is, therefore, imperative to reciprocate this dedication by enhancing their welfare benefits, which will, in turn, reinforce their loyalty and motivation.To begin with, the introduction of comprehensive health insurance is paramount. The current policy covers the basics but falls short in providing extensive care, especially in the face of critical illnesses. An upgraded health plan that includes coverage for such ailments, mental health services, and wellness programs would be a significant improvement. It would not only cater to the immediate health needs of our employees but also promote a proactive approach to their overall well-being.Furthermore, the establishment of a flexible working hours scheme is suggested. The traditional nine-to-five workday may not align with everyone's peak productivity periods or personal responsibilities. A flexible schedule would empower employees to work during hours that suit them best, leading to increased productivity and job satisfaction. This flexibility would also demonstrate the company's trust in its employees, fostering a culture of mutual respect.Another aspect to consider is the enhancement of parental leave policies. The current duration of leave provided for new parents is minimal, barely allowing them sufficient time to adjust to their new responsibilities. Extending this period would provide themwith the opportunity to establish a solid foundation for their growing family without the looming stress of an abrupt return to work.Additionally, the implementation of an employee assistance program (EAP) would be beneficial. Such a program would offer confidential counseling services for employees facing personal or work-related issues. It would serve as a valuable resource, offering support and guidance, thereby contributing to a healthier and more productive work environment.Lastly, the introduction of professional development opportunities is crucial. Encouraging employees to pursue further education or training not only aids in their personal growth but also brings fresh perspectives and skills to the team. Supporting such initiatives could include tuition reimbursement or the provision of in-house training sessions.In conclusion, the proposed enhancements to our welfare benefits are designed to address the diverse needs of our employees comprehensively. By investing in their well-being, we are not only nurturing a positive and supportive work environment but also cementing the company's reputation as an employer of choice. It is with great hope that this application is received favorably, and the necessary approvals are granted to bring these beneficial changes to fruition. 。

  1. 1、下载文档前请自行甄别文档内容的完整性,平台不提供额外的编辑、内容补充、找答案等附加服务。
  2. 2、"仅部分预览"的文档,不可在线预览部分如存在完整性等问题,可反馈申请退款(可完整预览的文档不适用该条件!)。
  3. 3、如文档侵犯您的权益,请联系客服反馈,我们会尽快为您处理(人工客服工作时间:9:00-18:30)。
相关文档
最新文档